
How to build trust in optimization: Let’s do better than BECAUSE I SAID SO

Optimization algorithms can spit out mathematically brilliant schedules, but if planners can’t see the rationale, those “perfect” plans end up in the trash. In this blog post, Tom Cools explains how explainability turns planning optimization into a trusted ally instead of a black box.

# Why trust matters

I love being a dad. It’s really amazing to see this little human learn and explore every day with that child-like curiosity. Kids are so full of “why” questions.

  • "Why don’t fish drown?”
  • “Why can’t I play on my video game console today?"
  • “Why can’t I have ice cream for breakfast if it's made from milk?”

Sometimes we take our time to answer. Other times, when parents are stressed, juggling four other things at once, or just plain tired, they whip out the universal catch-all answer: “Because I said so!”

I don’t like that answer. It’s a shortcut. It ends the conversation. Worst of all, it shuts down curiosity, and that is something we can’t afford to lose, especially in this age of ever-more complex systems. Yet this is also exactly what a lot of scheduling software does.

An algorithm spits out a schedule, and the experts, the human planners, are left with questions:

  • Why did Alina get her preferred shift but Myey didn’t?
  • Why not fix Thomas’ electricity problem first before moving into the city for the other assignments?
  • Why assign Pieter when Maarten lives closer?

When the only answer to those questions is “because the algorithm said so”, planners will not trust the solution. And without trust, they’ll toss the plans aside and stick to the techniques they were using before. Our solutions do not just need to find the best schedules… they need to be able to explain them as well.

# From explainability to trust

If we want people to accept the output of our planning systems, we should give them a system worthy of their trust. A big part of building a trustworthy system is being able to answer the questions mentioned above. It helps you move from blind trust to informed confidence.

In our experience building Timefold, explainability leads to four major benefits:

# 1. Clarity

Giving planners answers to all their “why” questions brings clarity. They are able to understand why the schedule is what it is. They gain deep insights and are not forced to blindly accept a plan.

With our Timefold solutions, we track constraint violations, how much they affect the outcome, and which decisions caused them. This transparency helps planners understand the trade-offs. At a higher level we also transform these constraints into clear-cut KPIs, making it easier to reason about them.
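As a rough illustration, here is a minimal sketch of what that per-constraint breakdown could look like with the Timefold Solver Java API. The `ShiftSchedule` class, the `ScheduleExplainer` wrapper, and the solver setup are hypothetical, and exact method names may differ between versions:

```java
import ai.timefold.solver.core.api.score.analysis.ScoreAnalysis;
import ai.timefold.solver.core.api.score.buildin.hardsoft.HardSoftScore;
import ai.timefold.solver.core.api.solver.SolutionManager;
import ai.timefold.solver.core.api.solver.SolverFactory;

public class ScheduleExplainer {

    // ShiftSchedule is a hypothetical @PlanningSolution class, used here only for illustration.
    public static void explain(SolverFactory<ShiftSchedule> solverFactory, ShiftSchedule schedule) {
        SolutionManager<ShiftSchedule, HardSoftScore> solutionManager =
                SolutionManager.create(solverFactory);

        // Break the score down per constraint: how much each one costs
        // and which matches (concrete decisions) caused that cost.
        ScoreAnalysis<HardSoftScore> analysis = solutionManager.analyze(schedule);
        analysis.constraintMap().forEach((constraintRef, constraintAnalysis) ->
                System.out.println(constraintRef.constraintName()
                        + ": " + constraintAnalysis.score()
                        + " from " + constraintAnalysis.matches().size() + " matches"));
    }
}
```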

# 2. Error backtracking

If something breaks in the real world, you need to know why. In optimization systems that operate like black boxes, it can be hard to figure out exactly where things went wrong.

We have captured this in our Score Analysis functionality, which allows you to analyze any schedule, even when it was not created by Timefold. This gives planners a powerful tool to diagnose issues and avoid future errors.
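To sketch that idea, the same analyze() call from above can be pointed at a plan that came from somewhere else entirely. This snippet reuses the solutionManager from the previous sketch; loadFromLegacyPlanningTool() is a hypothetical importer that maps an existing roster onto the same solution class:

```java
// Hypothetical importer: maps a hand-made or third-party roster onto the ShiftSchedule class.
ShiftSchedule importedSchedule = loadFromLegacyPlanningTool();

// analyze() works on any populated solution instance, no matter who created it,
// so broken constraints in an external plan surface in the same per-constraint breakdown.
ScoreAnalysis<HardSoftScore> analysis = solutionManager.analyze(importedSchedule);
System.out.println("Score of the imported plan: " + analysis.score());
```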

# 3. Insightful adjustments

Planners often make tweaks based on gut instinct. They want to make changes and see the impact of those changes on the schedule.

In addition to the Score Analysis mentioned above, Timefold allows you to compare two plans. If new work needs to be assigned to a resource, Timefold’s Recommendations assist planners in making an informed choice.
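As a rough sketch of such a comparison, under the same assumptions as above (hypothetical ShiftSchedule instances, reusing the solutionManager), one could score both versions of a plan and line the constraint breakdowns up side by side:

```java
// Score the solver's plan and the planner's manually tweaked copy.
ScoreAnalysis<HardSoftScore> before = solutionManager.analyze(originalSchedule);
ScoreAnalysis<HardSoftScore> after = solutionManager.analyze(tweakedSchedule);
System.out.println("Overall: " + before.score() + " -> " + after.score());

// Show, per constraint, how the manual change shifted the cost.
after.constraintMap().forEach((constraintRef, afterAnalysis) -> {
    var beforeAnalysis = before.constraintMap().get(constraintRef);
    System.out.println(constraintRef.constraintName() + ": "
            + (beforeAnalysis == null ? "n/a" : beforeAnalysis.score())
            + " -> " + afterAnalysis.score());
});
```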

Dashboards on cost, efficiency, happiness, coverage... can be pulled from the data.

# 4. Strategic insights

Explainability in planning systems benefits more than just end users. For decision-makers, it reveals actionable patterns that support strategic decision-making at the executive level.

We're building features in our platform that display operational imbalances and indicate room for improvement.

Clarity, error backtracking, insightful adjustments, and strategic insights all sprout from the core concept of explainability and contribute to the trustworthiness of a system. Planners stop overriding schedules, collaborate with the planning tool, and are able to solve larger problems with ease, while leaders can use the plans for better decision-making.

In short, they stop seeing PlanningAI as a threat and start seeing it as a planning partner.

# Don’t let "Because I said so" kill your plans

We wouldn’t accept that answer from a parent. We shouldn’t expect people to accept it from a planning engine either. In fact, taking a moment to focus on explainability is just as important as finding the “best” possible schedule. A great schedule isn’t just optimal, it’s understandable.

So next time someone asks, “Why this schedule?”, make sure your system can answer with something better than “Because the algorithm said so.”
