Score calculation
Using Java’s Streams API, we could implement a simple score calculator with a functional approach:
private int doNotAssignAnn() {
    int softScore = 0;
    softScore -= schedule.getShiftList().stream()
            .filter(Shift::isEmployeeAnn)
            .count();
    return softScore;
}
However, that scales poorly because it doesn’t calculate incrementally: whenever the planning variable of a single Shift changes, the normal Streams API has to re-execute the entire stream from scratch to recalculate the score.
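The difference can be sketched in plain Java (a hypothetical stand-in, not Timefold’s implementation): full recalculation walks every shift on each change, while an incremental update applies only the delta of the one changed shift.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch contrasting full recalculation with an incremental
// update when a single shift changes. The Shift record is a simplified
// stand-in for the domain class used elsewhere in this document.
public class IncrementalSketch {

    record Shift(boolean employeeAnn) {}

    // Full recalculation: O(n) work on every change.
    static int fullScore(List<Shift> shifts) {
        int softScore = 0;
        for (Shift shift : shifts) {
            if (shift.employeeAnn()) {
                softScore -= 1;
            }
        }
        return softScore;
    }

    // Incremental update: O(1) work, applying only the delta of one change.
    static int incrementalScore(int previousScore, Shift oldValue, Shift newValue) {
        int delta = 0;
        if (oldValue.employeeAnn()) {
            delta += 1; // retract the old penalty
        }
        if (newValue.employeeAnn()) {
            delta -= 1; // apply the new penalty
        }
        return previousScore + delta;
    }

    public static void main(String[] args) {
        List<Shift> shifts = new ArrayList<>(List.of(
                new Shift(true), new Shift(false), new Shift(true)));
        int score = fullScore(shifts); // -2

        // Reassign the first shift away from Ann.
        Shift oldValue = shifts.get(0);
        Shift newValue = new Shift(false);
        shifts.set(0, newValue);
        score = incrementalScore(score, oldValue, newValue);

        System.out.println(score == fullScore(shifts)); // true
    }
}
```

Constraint streams, introduced next, give you this incremental behavior without having to write the delta bookkeeping by hand.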
1. Introducing constraint streams
Constraint streams are a Functional Programming form of incremental score calculation in plain Java that is easy to read, write and debug. The API should feel familiar if you’ve worked with Java Streams or SQL.
The Constraint Streams API enables you to write similar code in pure Java, while reaping the performance benefits of incremental score calculation. Here is the example shown above, rewritten with the Constraint Streams API:
private Constraint doNotAssignAnn(ConstraintFactory factory) {
    return factory.forEach(Shift.class)
            .filter(Shift::isEmployeeAnn)
            .penalize(HardSoftScore.ONE_SOFT)
            .asConstraint("Don't assign Ann");
}
This constraint stream iterates over all instances of the Shift class among the problem facts and planning entities in the planning problem. It finds every Shift assigned to the employee Ann, and for every such instance (also called a match), it adds a soft penalty of 1 to the overall score.
The following figure illustrates this process on a problem with 4 different shifts:

If any of the instances change during solving, the constraint stream automatically detects the change and only recalculates the minimum necessary portion of the problem that is affected by the change. The following figure illustrates this incremental score calculation:

The Constraint Streams API also has advanced support for score explanation through custom justifications and indictments.

2. Creating a constraint stream
To use the Constraint Streams API in your project, first write a pure Java ConstraintProvider
implementation similar
to the following example.
public class MyConstraintProvider implements ConstraintProvider {

    @Override
    public Constraint[] defineConstraints(ConstraintFactory factory) {
        return new Constraint[] {
                penalizeEveryShift(factory)
        };
    }

    private Constraint penalizeEveryShift(ConstraintFactory factory) {
        return factory.forEach(Shift.class)
                .penalize(HardSoftScore.ONE_SOFT)
                .asConstraint("Penalize a shift");
    }

}
This example contains one constraint, penalizeEveryShift().
Add the following code to your solver configuration:
<solver xmlns="https://timefold.ai/xsd/solver" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="https://timefold.ai/xsd/solver https://timefold.ai/xsd/solver/solver.xsd">
  <scoreDirectorFactory>
    <constraintProviderClass>org.acme.schooltimetabling.solver.TimeTableConstraintProvider</constraintProviderClass>
  </scoreDirectorFactory>
  ...
</solver>
3. Constraint stream cardinality
Constraint stream cardinality is a measure of how many objects a single constraint match consists of.
The simplest constraint stream has a cardinality of 1, meaning each constraint match consists of a single object. It is therefore called a UniConstraintStream:
private Constraint doNotAssignAnn(ConstraintFactory factory) {
    return factory.forEach(Shift.class) // Returns UniConstraintStream<Shift>.
            ...
}
Some constraint stream building blocks can increase stream cardinality, such as join or groupBy:
private Constraint doNotAssignAnn(ConstraintFactory factory) {
    return factory.forEach(Shift.class) // Returns UniConstraintStream<Shift>.
            .join(Employee.class)       // Returns BiConstraintStream<Shift, Employee>.
            .join(DayOff.class)         // Returns TriConstraintStream<Shift, Employee, DayOff>.
            .join(Country.class)        // Returns QuadConstraintStream<Shift, Employee, DayOff, Country>.
            ...
}
groupBy can also decrease stream cardinality:
private Constraint doNotAssignAnn(ConstraintFactory factory) {
    return factory.forEach(Shift.class)             // Returns UniConstraintStream<Shift>.
            .join(Employee.class)                   // Returns BiConstraintStream<Shift, Employee>.
            .groupBy((shift, employee) -> employee) // Returns UniConstraintStream<Employee>.
            ...
}
The following constraint stream cardinalities are currently supported:

| Cardinality | Prefix | Defining interface |
|---|---|---|
| 1 | Uni | UniConstraintStream |
| 2 | Bi | BiConstraintStream |
| 3 | Tri | TriConstraintStream |
| 4 | Quad | QuadConstraintStream |
3.1. Achieving higher cardinalities
Timefold Solver currently does not support constraint stream cardinalities higher than 4. However, tuple mapping makes effectively infinite cardinality possible:
private Constraint pentaStreamExample(ConstraintFactory factory) {
    return factory.forEach(Shift.class) // UniConstraintStream<Shift>
            .join(Shift.class)          // BiConstraintStream<Shift, Shift>
            .join(Shift.class)          // TriConstraintStream<Shift, Shift, Shift>
            .join(Shift.class)          // QuadConstraintStream<Shift, Shift, Shift, Shift>
            .map(MyTuple::of)           // UniConstraintStream<MyTuple<Shift, Shift, Shift, Shift>>
            .join(Shift.class)          // BiConstraintStream<MyTuple<Shift, Shift, Shift, Shift>, Shift>
            ...                         // This BiConstraintStream carries 5 Shift elements.
}
Timefold Solver does not provide any tuple implementations out of the box. It’s recommended to use one of the freely available 3rd party implementations. Should a custom implementation be necessary, see guidelines for mapping functions.
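For illustration, such a custom tuple could be sketched as a Java record (the MyTuple name mirrors the earlier example; it is not a class provided by Timefold Solver), since records are immutable and derive equals() and hashCode() from their components:

```java
// Illustrative sketch of a custom 4-element tuple as an immutable record.
// Records derive equals() and hashCode() from their components, which is
// exactly what the mapping-function guidelines require.
public record MyTuple<A, B, C, D>(A a, B b, C c, D d) {

    public static <A, B, C, D> MyTuple<A, B, C, D> of(A a, B b, C c, D d) {
        return new MyTuple<>(a, b, c, d);
    }

    public static void main(String[] args) {
        MyTuple<Integer, Integer, Integer, Integer> left = MyTuple.of(1, 2, 3, 4);
        MyTuple<Integer, Integer, Integer, Integer> right = MyTuple.of(1, 2, 3, 4);
        System.out.println(left.equals(right)); // true: equal by contents
    }
}
```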
4. Building blocks
Constraint streams are chains of different operations, called building blocks.
Each constraint stream starts with a forEach(…) building block and is terminated by either a penalty or a reward.
The following example shows the simplest possible constraint stream:
private Constraint penalizeInitializedShifts(ConstraintFactory factory) {
    return factory.forEach(Shift.class)
            .penalize(HardSoftScore.ONE_SOFT)
            .asConstraint("Initialized shift");
}
This constraint stream penalizes each known and initialized instance of Shift.
4.1. ForEach
The .forEach(T) building block selects every T instance that is in a problem fact collection or a planning entity collection and has no null genuine planning variables.
To include instances with a null genuine planning variable, replace the forEach() building block by forEachIncludingNullVars():
private Constraint penalizeAllShifts(ConstraintFactory factory) {
    return factory.forEachIncludingNullVars(Shift.class)
            .penalize(HardSoftScore.ONE_SOFT)
            .asConstraint("A shift");
}
4.2. Penalties and rewards
The purpose of constraint streams is to build up a score for a solution.
To do this, every constraint stream must contain a call to either a penalize() or a reward() building block.
The penalize() building block makes the score worse and the reward() building block improves the score.
Each constraint stream is then terminated by calling the asConstraint() method, which finally builds the constraint. Constraints have several components:
- Constraint package is the Java package that contains the constraint. The default value is the package that contains the ConstraintProvider implementation or the value from constraint configuration, if implemented.
- Constraint name is the human-readable descriptive name for the constraint, which (together with the constraint package) must be unique within the entire ConstraintProvider implementation.
- Constraint weight is a constant score value indicating how much every breach of the constraint affects the score. Valid examples include SimpleScore.ONE, HardSoftScore.ONE_HARD and HardMediumSoftScore.of(1, 2, 3).
- Constraint match weigher is an optional function indicating how many times the constraint weight should be applied in the score. The penalty or reward score impact is the constraint weight multiplied by the match weight. The default value is 1.
Constraints with zero constraint weight are automatically disabled and do not impose any performance penalty.
The Constraint Streams API supports many different types of penalties. Browse the API in your IDE for the full list of method overloads. Here are some examples:
- Simple penalty (penalize(SimpleScore.ONE)) makes the score worse by 1 per every match in the constraint stream. The score type must be the same type as used on the @PlanningScore annotated member on the planning solution.
- Dynamic penalty (penalize(SimpleScore.ONE, Shift::getHours)) makes the score worse by the number of hours in every matching Shift in the constraint stream. This is an example of using a constraint match weigher.
- Configurable penalty (penalizeConfigurable()) makes the score worse using constraint weights defined in constraint configuration.
- Configurable dynamic penalty (penalizeConfigurable(Shift::getHours)) makes the score worse using constraint weights defined in constraint configuration, multiplied by the number of hours in every matching Shift in the constraint stream.
By replacing the keyword penalize with reward in the name of these building blocks, you get operations that affect the score in the opposite direction.
4.2.1. Customizing justifications and indictments
One of Timefold Solver’s important features is its ability to explain the score of the solutions it produces through justifications and indictments.
By default, each constraint is justified with ai.timefold.solver.core.api.score.stream.DefaultConstraintJustification, and the final tuple makes up the indicted objects.
For example, in the following constraint, the indicted objects will be of type Vehicle and Integer:
protected Constraint vehicleCapacity(ConstraintFactory factory) {
    return factory.forEach(Customer.class)
            .filter(customer -> customer.getVehicle() != null)
            .groupBy(Customer::getVehicle, sum(Customer::getDemand))
            .filter((vehicle, demand) -> demand > vehicle.getCapacity())
            .penalizeLong(HardSoftLongScore.ONE_HARD,
                    (vehicle, demand) -> demand - vehicle.getCapacity())
            .asConstraint("vehicleCapacity");
}
For the purposes of creating a heat map, the Vehicle is very important, but the naked Integer carries no semantics.
We can remove it by providing the indictWith(…) method with a custom indictment mapping:
protected Constraint vehicleCapacity(ConstraintFactory factory) {
    return factory.forEach(Customer.class)
            .filter(customer -> customer.getVehicle() != null)
            .groupBy(Customer::getVehicle, sum(Customer::getDemand))
            .filter((vehicle, demand) -> demand > vehicle.getCapacity())
            .penalizeLong(HardSoftLongScore.ONE_HARD,
                    (vehicle, demand) -> demand - vehicle.getCapacity())
            .indictWith((vehicle, demand) -> List.of(vehicle))
            .asConstraint("vehicleCapacity");
}
The same mechanism can also be used to transform any of the indicted objects to any other object.
To present the constraint matches to the user, or to send them over the wire for further processing, use the justifyWith(…) method to provide a custom constraint justification:
protected Constraint vehicleCapacity(ConstraintFactory factory) {
    return factory.forEach(Customer.class)
            .filter(customer -> customer.getVehicle() != null)
            .groupBy(Customer::getVehicle, sum(Customer::getDemand))
            .filter((vehicle, demand) -> demand > vehicle.getCapacity())
            .penalizeLong(HardSoftLongScore.ONE_HARD,
                    (vehicle, demand) -> demand - vehicle.getCapacity())
            .justifyWith((vehicle, demand, score) ->
                    new VehicleDemandOveruse(vehicle, demand, score))
            .indictWith((vehicle, demand) -> List.of(vehicle))
            .asConstraint("vehicleCapacity");
}
VehicleDemandOveruse is a custom type you have to implement. You have complete control over the type, its name and the methods it exposes. If you choose to decorate it with the proper annotations, you will be able to send it over HTTP or store it in a database. The only limitation is that it must implement the ai.timefold.solver.core.api.score.stream.ConstraintJustification marker interface.
4.3. Filtering
Filtering enables you to reduce the number of constraint matches in your stream.
It first enumerates all constraint matches and then applies a predicate to filter some matches out.
The predicate is a function that returns true only if the match is to continue in the stream.
The following constraint stream keeps only Ann’s shifts, filtering out all other Shift matches:
private Constraint penalizeAnnShifts(ConstraintFactory factory) {
    return factory.forEach(Shift.class)
            .filter(shift -> shift.getEmployeeName().equals("Ann"))
            .penalize(SimpleScore.ONE)
            .asConstraint("Ann's shift");
}
The following example retrieves a list of shifts where an employee has asked for a day off, from a bi-constraint match of Shift and DayOff:
private Constraint penalizeShiftsOnOffDays(ConstraintFactory factory) {
    return factory.forEach(Shift.class)
            .join(DayOff.class)
            .filter((shift, dayOff) -> shift.date == dayOff.date && shift.employee == dayOff.employee)
            .penalize(SimpleScore.ONE)
            .asConstraint("Shift on an off-day");
}
The following figure illustrates both these examples:

For performance reasons, using the join building block with the appropriate Joiner is preferable to creating a cartesian product of the two streams and filtering it afterwards.
The following functions are required for filtering constraint streams of different cardinality:

| Cardinality | Filtering Predicate |
|---|---|
| 1 | java.util.function.Predicate |
| 2 | java.util.function.BiPredicate |
| 3 | ai.timefold.solver.core.api.function.TriPredicate |
| 4 | ai.timefold.solver.core.api.function.QuadPredicate |
4.4. Joining
Joining is a way to increase stream cardinality, similar to the inner join operation in SQL. As the following figure illustrates, a join() creates a cartesian product of the streams being joined:

Doing this is inefficient if the resulting stream contains a lot of constraint matches that need to be filtered out immediately.
Instead, use a Joiner condition to restrict the joined matches only to those that are interesting:

For example:
import static ai.timefold.solver.core.api.score.stream.Joiners.*;

...

private Constraint shiftOnDayOff(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(Shift.class)
            .join(DayOff.class,
                    equal(Shift::getDate, DayOff::getDate),
                    equal(Shift::getEmployee, DayOff::getEmployee))
            .penalize(HardSoftScore.ONE_HARD)
            .asConstraint("Shift on an off-day");
}
Through the Joiners class, the following Joiner conditions are supported to join two streams, pairing a match from each side:
- equal(): the paired matches have a property that is equals(). This relies on hashCode().
- greaterThan(), greaterThanOrEqual(), lessThan() and lessThanOrEqual(): the paired matches have a Comparable property following the prescribed ordering.
- overlapping(): the paired matches have two properties (a start and an end property) of the same Comparable type that each represent an interval, and the two intervals overlap.
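The interval semantics behind overlapping() can be sketched in plain Java (this illustrates the rule, not Timefold’s implementation): two intervals overlap when each one starts before the other one ends.

```java
public class OverlapSketch {

    // Two half-open intervals [startA, endA) and [startB, endB) overlap
    // when each one starts before the other one ends.
    static boolean overlaps(int startA, int endA, int startB, int endB) {
        return startA < endB && startB < endA;
    }

    public static void main(String[] args) {
        System.out.println(overlaps(9, 12, 11, 14)); // true: 11-12 is shared
        System.out.println(overlaps(9, 12, 12, 14)); // false: they only touch
    }
}
```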
All Joiners methods have an overloaded variant that uses the same property of the same class on both stream sides. For example, calling equal(Shift::getEmployee) is the same as calling equal(Shift::getEmployee, Shift::getEmployee).
If the other stream might match multiple times, but it must only impact the score once (for each element of the original stream), use ifExists instead. It does not create cartesian products and therefore generally performs better.
4.4.1. Evaluation of multiple joiners
When using multiple joiners, there are some important considerations to keep in mind. Consider the following example:
factory.forEach(VehicleShift.class)
        .join(Visit.class,
                Joiners.equal(Function.identity(), Visit::getVehicleShift), // Visit's VehicleShift is not null...
                Joiners.lessThan(
                        vehicleShift -> vehicleShift.getMaxTravelTime(),
                        visit -> visit.getVehicleShift().getMaxTravelTime() // ... yet an NPE may be thrown here.
                ))
When indexing joiners (such as equal() and lessThan()) check their indexes, they take the input tuple and create a set of keys that will enter the index. These keys are different for the left and right side of the joiner.
In the above example, from the left side, the key is [VehicleShift instance && result of calling VehicleShift.getMaxTravelTime()], using the first mapping function of each joiner. From the right side, the key is [result of calling Visit.getVehicleShift() && result of calling Visit.getVehicleShift().getMaxTravelTime()], using the second mapping function of each joiner.
However, each key mapping function is calculated independently of the others, and therefore the lessThan() joiner’s mapping functions will be executed even in cases where the equal() joiner would not match. This leads to a NullPointerException being thrown in the example above: the lessThan() joiner’s mapping functions are executed on a Visit instance whose null vehicleShift property wasn’t (yet) filtered out by the equal() joiner. The filtering only happens inside the joiner’s indexes, and to access them, these keys have to be generated first.
To avoid these issues, do not assume that subsequent joiners' mapping functions only apply after the previous joiners have matched. Alternatively (and possibly at the cost of reduced performance) use the filtering joiner, which is processed differently and does not suffer from this issue:
factory.forEach(VehicleShift.class)
        .join(Visit.class,
                Joiners.equal(Function.identity(), Visit::getVehicleShift), // Visit's VehicleShift is not null...
                Joiners.filtering((vehicleShift, visit) ->
                        vehicleShift.getMaxTravelTime() < visit.getVehicleShift().getMaxTravelTime()
                ))
4.5. Grouping and collectors
Grouping collects items in a stream according to user-provided criteria (also called a "group key"), similar to what a GROUP BY clause does in SQL. Additionally, some grouping operations also accept one or more Collector instances, which provide various aggregation functions. The following figure illustrates a simple groupBy() operation:

Objects used as group keys must obey the general contract of hashCode() and equals(). For this reason, it is not recommended to use mutable objects (especially mutable collections) as group keys. If planning entities are used as group keys, their hashCode must not be computed off of planning variables. Failure to follow this recommendation may result in runtime exceptions being thrown.
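The hazard of mutable group keys can be demonstrated with a plain HashMap (illustrative, not Timefold-specific): mutating a key after insertion changes its hashCode(), and the entry can no longer be found.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MutableKeySketch {

    // Returns whether the entry can still be found after its key was mutated.
    static boolean lookupAfterMutation() {
        Map<List<String>, Integer> countByKey = new HashMap<>();
        List<String> key = new ArrayList<>(List.of("Ann"));
        countByKey.put(key, 1);

        key.add("Beth"); // mutating the key changes its hashCode()

        return countByKey.containsKey(key);
    }

    public static void main(String[] args) {
        System.out.println(lookupAfterMutation()); // false: the entry is lost
    }
}
```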
For example, the following code snippet first groups all processes by the computer they run on, sums up all the power
required by the processes on that computer using the ConstraintCollectors.sum(…)
collector, and finally penalizes
every computer whose processes consume more power than is available.
import static ai.timefold.solver.core.api.score.stream.ConstraintCollectors.*;

...

private Constraint requiredCpuPowerTotal(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(CloudProcess.class)
            .groupBy(CloudProcess::getComputer, sum(CloudProcess::getRequiredCpuPower))
            .filter((computer, requiredCpuPower) -> requiredCpuPower > computer.getCpuPower())
            .penalize(HardSoftScore.ONE_HARD,
                    (computer, requiredCpuPower) -> requiredCpuPower - computer.getCpuPower())
            .asConstraint("requiredCpuPowerTotal");
}
Information might be lost during grouping. In the previous example, the individual CloudProcess instances are no longer available downstream of the groupBy(); only the computer and the power sum remain.
There are several collectors available out of the box. You can also provide your own collectors by implementing the ai.timefold.solver.core.api.score.stream.uni.UniConstraintCollector interface, or its Bi…, Tri… and Quad… counterparts.
4.5.1. Out-of-the-box collectors
The following collectors are provided out of the box:
count() collector
The ConstraintCollectors.count(…) collector counts all elements per group. For example, the following use of the collector gives the number of items for two separate groups: one where the talks have unavailable speakers, and one where they don’t.
private Constraint speakerAvailability(ConstraintFactory factory) {
    return factory.forEach(Talk.class)
            .groupBy(Talk::hasAnyUnavailableSpeaker, count())
            .penalize(HardSoftScore.ONE_HARD,
                    (hasUnavailableSpeaker, count) -> ...)
            .asConstraint("speakerAvailability");
}
The count is collected in an int. Variants of this collector:
- countLong() collects a long value instead of an int value.
To count a bi, tri or quad stream, use countBi(), countTri() or countQuad() respectively, because - unlike the other built-in collectors - they aren’t overloaded methods, due to Java’s generics erasure.
countDistinct() collector
The ConstraintCollectors.countDistinct(…) collector counts each element per group once, regardless of how many times it occurs. For example, the following use of the collector gives the number of talks in each unique room.
private Constraint roomCount(ConstraintFactory factory) {
    return factory.forEach(Talk.class)
            .groupBy(Talk::getRoom, countDistinct())
            .penalize(HardSoftScore.ONE_SOFT,
                    (room, count) -> ...)
            .asConstraint("roomCount");
}
The distinct count is collected in an int. Variants of this collector:
- countDistinctLong() collects a long value instead of an int value.
sum() collector
To sum the values of a particular property of all elements per group, use the ConstraintCollectors.sum(…) collector. The following code snippet first groups all processes by the computer they run on, and sums up all the power required by the processes on that computer using the ConstraintCollectors.sum(…) collector.
private Constraint requiredCpuPowerTotal(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(CloudProcess.class)
            .groupBy(CloudProcess::getComputer, sum(CloudProcess::getRequiredCpuPower))
            .penalize(HardSoftScore.ONE_SOFT,
                    (computer, requiredCpuPower) -> requiredCpuPower)
            .asConstraint("requiredCpuPowerTotal");
}
The sum is collected in an int. Variants of this collector:
- sumLong() collects a long value instead of an int value.
- sumBigDecimal() collects a java.math.BigDecimal value instead of an int value.
- sumBigInteger() collects a java.math.BigInteger value instead of an int value.
- sumDuration() collects a java.time.Duration value instead of an int value.
- sumPeriod() collects a java.time.Period value instead of an int value.
- a generic sum() variant for summing up custom types.
average() collector
To calculate the average of a particular property of all elements per group, use the ConstraintCollectors.average(…) collector. The following code snippet first groups all processes by the computer they run on, and averages all the power required by the processes on that computer using the ConstraintCollectors.average(…) collector.
private Constraint requiredCpuPowerTotal(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(CloudProcess.class)
            .groupBy(CloudProcess::getComputer, average(CloudProcess::getRequiredCpuPower))
            .penalize(HardSoftScore.ONE_SOFT,
                    (computer, averageCpuPower) -> averageCpuPower)
            .asConstraint("averageCpuPower");
}
The average is collected as a double, and the average of no elements is null. Variants of this collector:
- averageLong() collects a long value instead of an int value.
- averageBigDecimal() collects a java.math.BigDecimal value instead of an int value, resulting in a BigDecimal average.
- averageBigInteger() collects a java.math.BigInteger value instead of an int value, resulting in a BigDecimal average.
- averageDuration() collects a java.time.Duration value instead of an int value, resulting in a Duration average.
min() and max() collectors
To extract the minimum or maximum per group, use the ConstraintCollectors.min(…) and ConstraintCollectors.max(…) collectors respectively.
These collectors operate on values of properties which are Comparable (such as Integer, String or Duration), although there are also variants of these collectors which allow you to provide your own Comparator.
The following example finds a computer which runs the most power-demanding process:
private Constraint computerWithBiggestProcess(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(CloudProcess.class)
            .groupBy(CloudProcess::getComputer, max(CloudProcess::getRequiredCpuPower))
            .penalize(HardSoftScore.ONE_HARD,
                    (computer, biggestProcess) -> ...)
            .asConstraint("computerWithBiggestProcess");
}
toList(), toSet() and toMap() collectors
To extract all elements per group into a collection, use the ConstraintCollectors.toList(…) collector.
The following example retrieves all processes running on a computer in a List:
private Constraint computerWithBiggestProcess(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(CloudProcess.class)
            .groupBy(CloudProcess::getComputer, toList())
            .penalize(HardSoftScore.ONE_HARD,
                    (computer, processList) -> ...)
            .asConstraint("computerAndItsProcesses");
}
Variants of this collector:
- toList() collects a List value.
- toSet() collects a Set value.
- toSortedSet() collects a SortedSet value.
- toMap() collects a Map value.
- toSortedMap() collects a SortedMap value.
The iteration order of elements in the resulting collection is not guaranteed to be stable, unless it is a sorted collector such as toSortedSet() or toSortedMap().
Conditional collectors
The constraint collector framework enables you to create constraint collectors which only collect in certain circumstances.
This is achieved using the ConstraintCollectors.conditionally(…) constraint collector.
This collector accepts a predicate and another collector, to which it delegates when the predicate is true. The following example returns a count of long-running processes assigned to a given computer, excluding processes which are not long-running:
private Constraint computerWithLongRunningProcesses(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(CloudProcess.class)
            .groupBy(CloudProcess::getComputer, conditionally(
                    CloudProcess::isLongRunning,
                    count()
            ))
            .penalize(HardSoftScore.ONE_HARD,
                    (computer, longRunningProcessCount) -> ...)
            .asConstraint("longRunningProcesses");
}
This is useful in situations where multiple collectors are used and only some of them need to be restricted.
If all of them needed to be restricted in the same way,
then applying a filter()
before the grouping is preferable.
4.5.2. Composing collectors
The constraint collector framework enables you to create complex collectors from simpler ones.
This is achieved using the ConstraintCollectors.compose(…) constraint collector.
This collector accepts 2 to 4 other constraint collectors and a function to merge their results into one.
The following example builds an average() constraint collector using the count() constraint collector and the sum() constraint collector:
public static <A> UniConstraintCollector<A, ?, Double> average(ToIntFunction<A> groupValueMapping) {
    return compose(count(), sum(groupValueMapping), (count, sum) -> {
        if (count == 0) {
            return null;
        } else {
            return sum / (double) count;
        }
    });
}
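Two details of the merge function above are worth noting: an empty group yields null rather than zero, and the cast to double avoids integer division. A plain-Java sketch of the same merge logic:

```java
public class AverageMergeSketch {

    // Mirrors the compose() merge function above: null for an empty group,
    // floating-point division otherwise.
    static Double average(int count, int sum) {
        if (count == 0) {
            return null; // the average of no elements is undefined
        }
        return sum / (double) count; // the cast avoids integer division
    }

    public static void main(String[] args) {
        System.out.println(average(0, 0));  // null
        System.out.println(average(4, 10)); // 2.5 (not 2, thanks to the cast)
    }
}
```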
Similarly, the compose() collector enables you to work around the limitation of constraint stream cardinality and use as many as 4 collectors in your groupBy() statements:
UniConstraintCollector<A, ?, Triple<Integer, Integer, Integer>> collector =
        compose(count(),
                min(),
                max(),
                (count, min, max) -> Triple.of(count, min, max));
Such a composite collector returns a Triple instance which allows you to access each of the sub-collectors individually.
Timefold Solver does not provide any Triple implementation out of the box; use one of the freely available 3rd party implementations or a custom type.
4.6. Conditional propagation
Conditional propagation enables you to exclude constraint matches from the constraint stream based on the presence or absence of some other object.

The following example penalizes computers which have at least one process running:
private Constraint runningComputer(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(CloudComputer.class)
            .ifExists(CloudProcess.class, Joiners.equal(Function.identity(), CloudProcess::getComputer))
            .penalize(HardSoftScore.ONE_SOFT,
                    computer -> ...)
            .asConstraint("runningComputer");
}
Note the use of the ifExists() building block. On UniConstraintStream, the ifExistsOther() building block is also available, which is useful in situations where the forEach() constraint match type is the same as the ifExists() type.
Conversely, using the ifNotExists() building block (or the ifNotExistsOther() building block on UniConstraintStream) achieves the opposite effect:
private Constraint unusedComputer(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(CloudComputer.class)
            .ifNotExists(CloudProcess.class, Joiners.equal(Function.identity(), CloudProcess::getComputer))
            .penalize(HardSoftScore.ONE_HARD,
                    computer -> ...)
            .asConstraint("unusedComputer");
}
Here, only the computers without processes running are penalized.
Also note the use of the Joiner
class to limit the constraint matches.
For a description of available joiners, see joining.
Conditional propagation operates much like joining,
but it does not increase stream cardinality.
Matches from these building blocks are not available further down the stream.
For performance reasons, using conditional propagation with the appropriate Joiner is preferable to a join followed by filtering.
4.7. Mapping tuples
Mapping enables you to transform each tuple in a constraint stream by applying a mapping function to it. The result of such mapping is another constraint stream of the mapped tuples.
private Constraint computerWithBiggestProcess(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(CloudProcess.class) // UniConstraintStream<CloudProcess>
            .map(CloudProcess::getComputer)              // UniConstraintStream<CloudComputer>
            ...
}
In the example above, the mapping function produces duplicate tuples if two different CloudProcess instances run on the same CloudComputer.
Mapping can be used to transform streams of all cardinalities. The following example maps a pair of CloudProcess instances to the pair of CloudComputer instances running them:
private Constraint computerWithBiggestProcess(ConstraintFactory constraintFactory) {
    return constraintFactory.forEachUniquePair(CloudProcess.class)    // BiConstraintStream<CloudProcess, CloudProcess>
            .map(CloudProcess::getComputer, CloudProcess::getComputer) // BiConstraintStream<CloudComputer, CloudComputer>
            ...
}
4.7.1. Designing the mapping function
When designing the mapping function, follow these guidelines for optimal performance:
-
Keep the function pure. The mapping function should only depend on its input. That is, given the same input, it always returns the same output.
-
Keep the function bijective. No two input tuples should map to the same output tuple, or to tuples that are equal. Not following this recommendation creates a constraint stream with duplicate tuples, and may force you to use
distinct()
later. -
Use immutable data carriers. The tuples returned by the mapping function should be immutable and identified by their contents and nothing else. If two tuples carry objects which equal one another, those two tuples should likewise equal and preferably be the same instance.
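A minimal sketch of the last guideline (class names are illustrative): a Java record is identified by its contents, while a class that does not override equals() falls back to identity and would surface downstream as duplicate tuples.

```java
public class CarrierSketch {

    // Contents-based identity: two instances with equal components are equal.
    record GoodCarrier(String computer) {}

    // Identity-based: two instances with equal components are NOT equal,
    // which would show up downstream as duplicate tuples.
    static class BadCarrier {
        final String computer;

        BadCarrier(String computer) {
            this.computer = computer;
        }
        // equals() and hashCode() are deliberately not overridden.
    }

    public static void main(String[] args) {
        System.out.println(new GoodCarrier("pc1").equals(new GoodCarrier("pc1"))); // true
        System.out.println(new BadCarrier("pc1").equals(new BadCarrier("pc1")));   // false
    }
}
```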
4.7.2. Dealing with duplicate tuples using distinct()
As a general rule, tuples in constraint streams are distinct: no two tuples equal one another. However, certain operations such as tuple mapping may produce constraint streams where that is not true.
If a constraint stream produces duplicate tuples, use the distinct() building block to eliminate the duplicate copies:
private Constraint computerWithBiggestProcess(ConstraintFactory constraintFactory) {
    return constraintFactory.forEach(CloudProcess.class) // UniConstraintStream<CloudProcess>
            .map(CloudProcess::getComputer)              // UniConstraintStream<CloudComputer>
            .distinct()                                  // The same, each CloudComputer just once.
            ...
}
There is a performance cost to distinct(); where possible, avoid it by designing mapping functions that do not produce duplicate tuples in the first place.
4.7.3. Expanding tuples
Tuple expansion is a special case of tuple mapping which only increases stream cardinality and can not introduce duplicate tuples. It enables you to add extra facts to each tuple in a constraint stream by applying a mapping function to it. This is useful in situations where an expensive computation needs to be cached for use later in the stream.
In the following example, the method Talk.prevailingSpeakerUndesiredTimeslotTagCount() internally iterates over collections to find overlapping tags and returns the number of such tags. It is expensive, and it is called for each Talk in the stream, possibly many thousands of times per second. Worse, it is first called to filter out talks that have zero overlap, and then again to penalize the overlap on talks which suffer from it.
Constraint speakerUndesiredTimeslotTags(ConstraintFactory factory) {
    return factory.forEach(Talk.class)
            .filter(talk -> talk.prevailingSpeakerUndesiredTimeslotTagCount() > 0)
            .penalizeConfigurable(talk -> talk.prevailingSpeakerUndesiredTimeslotTagCount() * talk.getDurationInMinutes())
            .asConstraint(SPEAKER_UNDESIRED_TIMESLOT_TAGS);
}
We can improve this by using tuple expansion to cache the result of the expensive computation, possibly significantly reducing the number of times it is called.
Constraint speakerUndesiredTimeslotTags(ConstraintFactory factory) {
return factory.forEach(Talk.class)
.expand(Talk::prevailingSpeakerUndesiredTimeslotTagCount)
.filter((talk, undesiredTagCount) -> undesiredTagCount > 0)
.penalizeConfigurable((talk, undesiredTagCount) -> undesiredTagCount * talk.getDurationInMinutes())
.asConstraint(SPEAKER_UNDESIRED_TIMESLOT_TAGS);
}
Once the tuple for a Talk
has been created and passed through the filter,
the expensive computation will not be reevaluated again unless the Talk
itself changes.
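The benefit of caching can be reasoned about with plain Java Streams, independent of the Timefold API. The following self-contained sketch (talk data and the stand-in `expensiveCount` method are made up for illustration) counts how often the expensive method runs in the filter-then-penalize pattern, with and without carrying the precomputed value along:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Plain-Java sketch (no Timefold API) of why expand() helps: without it,
// the expensive method runs both in the filter and in the penalty function.
public class ExpandSketch {

    static int expensiveCount(int talkId, AtomicInteger calls) {
        calls.incrementAndGet();
        return talkId % 2; // pretend odd-numbered talks have one undesired tag
    }

    // Expensive method invoked in filter AND penalty: two calls per matching talk.
    static int callsWithoutExpand(List<Integer> talks) {
        AtomicInteger calls = new AtomicInteger();
        talks.stream()
                .filter(t -> expensiveCount(t, calls) > 0)
                .forEach(t -> { int penalty = expensiveCount(t, calls); });
        return calls.get();
    }

    // "Expanded" value computed once per talk and carried along as a pair.
    static int callsWithExpand(List<Integer> talks) {
        AtomicInteger calls = new AtomicInteger();
        talks.stream()
                .map(t -> new int[] { t, expensiveCount(t, calls) })
                .filter(pair -> pair[1] > 0)
                .forEach(pair -> { int penalty = pair[1]; });
        return calls.get();
    }

    public static void main(String[] args) {
        List<Integer> talks = List.of(1, 2, 3, 4);
        System.out.println(callsWithoutExpand(talks)); // 6 = 4 filter + 2 penalty calls
        System.out.println(callsWithExpand(talks));    // 4 = one call per talk
    }
}
```

With four talks, two of which match the filter, the cached variant saves a third of the calls; in a solver that re-evaluates constraints constantly, that saving compounds.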
There is a performance cost to expand(), so use it only when the cached computation is expensive.
4.8. Flattening
Flattening enables you to transform any Java Iterable (such as List or Set) into a set of tuples, which are sent downstream. (This is similar to Java Stream’s flatMap(…).) It is done by applying a mapping function to the final element in the source tuple.
private Constraint requiredJobRoles(ConstraintFactory constraintFactory) {
return constraintFactory.forEach(Person.class) // UniConstraintStream<Person>
.join(Job.class,
equal(Function.identity(), Job::getAssignee)) // BiConstraintStream<Person, Job>
.flattenLast(Job::getRequiredRoles) // BiConstraintStream<Person, Role>
.filter((person, requiredRole) -> ...)
...
}
In the example above, the mapping function produces duplicate tuples if the Iterable returned by Job::getRequiredRoles contains the same Role more than once; flattening does not eliminate them for you.
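The behavior of flattenLast(…) can be mirrored with plain Java Streams. This self-contained sketch (the person and role data are made up) shows how each element of the Iterable becomes its own downstream tuple:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Plain-Java analogy for flattenLast: flatMap over the last tuple element,
// so each (person, role) combination becomes its own tuple downstream.
public class FlattenSketch {

    static List<String> flatten(Map<String, List<String>> personToRoles) {
        return personToRoles.entrySet().stream()
                .flatMap(e -> e.getValue().stream().map(role -> e.getKey() + "-" + role))
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, List<String>> personToRoles = Map.of(
                "Ann", List.of("DOCTOR", "NURSE"),
                "Beth", List.of("NURSE"));
        System.out.println(flatten(personToRoles));
        // [Ann-DOCTOR, Ann-NURSE, Beth-NURSE]
    }
}
```

Two roles for Ann produce two tuples, one role for Beth produces one; three tuples flow downstream in total.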
4.9. Concatenation
The concat
building block allows you to create a constraint stream containing tuples
of two other constraint streams.
If join acts like a cartesian product of two lists,
concat
acts like a concatenation of two lists.
Unlike union of sets, concatenation of lists repeats duplicated elements.
If the two constraint concatenating streams share tuples, which happens eg.
when they come from the same source of data,
the tuples will be repeated downstream.
If this is undesired, use the distinct
building block.

For example, to ensure each employee has a minimum number of assigned shifts:
private Constraint ensureEachEmployeeHasAtLeastTwoShifts(ConstraintFactory constraintFactory) {
return constraintFactory.forEach(Employee.class)
.join(Shift.class, equal(Function.identity(), Shift::getEmployee))
.concat(
constraintFactory.forEach(Employee.class)
.ifNotExists(Shift.class, equal(Function.identity(), Shift::getEmployee))
)
.groupBy((employee, shift) -> employee,
conditionally((employee, shift) -> shift != null,
countBi())
)
.filter((employee, shiftCount) -> shiftCount < employee.minimumAssignedShifts)
.penalize(HardSoftScore.ONE_SOFT, (employee, shiftCount) -> employee.minimumAssignedShifts - shiftCount)
.asConstraint("Minimum number of assigned shifts");
}
This correctly counts the number of shifts each Employee has, even when the Employee has no shifts.
Consider the following naive implementation without concat
:
private Constraint incorrectEnsureEachEmployeeHasAtLeastTwoShifts(ConstraintFactory constraintFactory) {
return constraintFactory.forEach(Employee.class)
.join(Shift.class, equal(Function.identity(), Shift::getEmployee))
.groupBy((employee, shift) -> employee,
countBi())
.filter((employee, shiftCount) -> shiftCount < employee.minimumAssignedShifts)
.penalize(HardSoftScore.ONE_SOFT, (employee, shiftCount) -> employee.minimumAssignedShifts - shiftCount)
.asConstraint("Minimum number of assigned shifts (incorrect)");
}
An employee with no assigned shifts would not be penalized, because no tuples for that employee ever reach the groupBy building block.
5. Testing a constraint stream
We recommend that you test your constraints to ensure that they behave as expected.
Constraint streams include the Constraint Verifier unit testing harness.
To use it, first add a test-scoped dependency on the timefold-solver-test JAR.
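For example, with Maven (assuming the version is managed elsewhere, for example through a BOM or a property; otherwise add an explicit version element):

```xml
<dependency>
    <groupId>ai.timefold.solver</groupId>
    <artifactId>timefold-solver-test</artifactId>
    <scope>test</scope>
</dependency>
```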
5.1. Testing constraints in isolation
Consider the following constraint stream:
protected Constraint horizontalConflict(ConstraintFactory factory) {
return factory
.forEachUniquePair(Queen.class, equal(Queen::getRowIndex))
.penalize(SimpleScore.ONE)
.asConstraint("Horizontal conflict");
}
The following example uses the Constraint Verifier API to create a simple unit test for the preceding constraint stream:
private ConstraintVerifier<NQueensConstraintProvider, NQueens> constraintVerifier
= ConstraintVerifier.build(new NQueensConstraintProvider(), NQueens.class, Queen.class);
@Test
public void horizontalConflictWithTwoQueens() {
Row row1 = new Row(0);
Column column1 = new Column(0);
Column column2 = new Column(1);
Queen queen1 = new Queen(0, row1, column1);
Queen queen2 = new Queen(1, row1, column2);
constraintVerifier.verifyThat(NQueensConstraintProvider::horizontalConflict)
.given(queen1, queen2)
.penalizesBy(1);
}
This test ensures that the horizontal conflict constraint assigns a penalty of 1
when there are two queens on the same
row.
The following line creates a shared ConstraintVerifier
instance and initializes the instance with the
NQueensConstraintProvider
:
private ConstraintVerifier<NQueensConstraintProvider, NQueens> constraintVerifier
= ConstraintVerifier.build(new NQueensConstraintProvider(), NQueens.class, Queen.class);
The @Test
annotation indicates that the method is a unit test in a testing framework of your choice.
Constraint Verifier works with many testing frameworks including JUnit and AssertJ.
The first part of the test prepares the test data.
In this case, the test data includes two instances of the Queen
planning entity and their dependencies
(Row
, Column
):
Row row1 = new Row(0);
Column column1 = new Column(0);
Column column2 = new Column(1);
Queen queen1 = new Queen(0, row1, column1);
Queen queen2 = new Queen(1, row1, column2);
Further down, the following code tests the constraint:
constraintVerifier.verifyThat(NQueensConstraintProvider::horizontalConflict)
.given(queen1, queen2)
.penalizesBy(1);
The verifyThat(…)
call is used to specify a method on the NQueensConstraintProvider
class which is under test.
This method must be visible to the test class, which the Java compiler enforces.
The given(…)
call is used to enumerate all the facts that the constraint stream operates on.
In this case, the given(…)
call takes the queen1
and queen2
instances previously created.
Alternatively, you can use a givenSolution(…)
method here and provide a planning solution instead.
Finally, the penalizesBy(…) call completes the test, making sure that the horizontal conflict constraint, given the two conflicting queens, results in a penalty of 1.
This number is a product of multiplying the match weight, as defined in the constraint stream, by the number of matches.
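That arithmetic can be checked with plain Java, independent of the Timefold API. The following self-contained sketch (the row-index data is made up) counts the unique same-row pairs that a forEachUniquePair-style pairing would produce, which is the match count the match weight gets multiplied by:

```java
import java.util.List;

// Plain-Java sketch: expected penalty = match weight (1) × number of matches,
// where each unique pair of queens on the same row is one match.
public class PenaltyArithmetic {

    static long horizontalConflictMatches(List<Integer> rowIndexes) {
        long matches = 0;
        for (int i = 0; i < rowIndexes.size(); i++) {
            for (int j = i + 1; j < rowIndexes.size(); j++) {
                if (rowIndexes.get(i).equals(rowIndexes.get(j))) {
                    matches++; // each unique same-row pair is one constraint match
                }
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        // Two queens on row 0 -> 1 pair -> penalizesBy(1).
        System.out.println(horizontalConflictMatches(List.of(0, 0)));
        // Three queens on row 0 -> 3 pairs -> penalizesBy(3).
        System.out.println(horizontalConflictMatches(List.of(0, 0, 0)));
    }
}
```

This is why adding a third queen to the same row would require the assertion to change from penalizesBy(1) to penalizesBy(3): the weight stays 1, but the number of unique pairs triples.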
Alternatively, you can use a rewardsWith(…)
call to check for rewards instead of penalties.
The method to use here depends on whether the constraint stream in question is terminated with a penalize
or a
reward
building block.
5.2. Testing all constraints together
In addition to testing individual constraints, you can test the entire ConstraintProvider
instance.
Consider the following test:
@Test
public void givenFactsMultipleConstraints() {
Queen queen1 = new Queen(0, row1, column1);
Queen queen2 = new Queen(1, row2, column2);
Queen queen3 = new Queen(2, row3, column3);
constraintVerifier.verifyThat()
.given(queen1, queen2, queen3)
.scores(SimpleScore.of(-3));
}
There are only two notable differences to the previous example.
First, the verifyThat()
call takes no argument here, signifying that the entire ConstraintProvider
instance is
being tested.
Second, instead of either a penalizesBy()
or rewardsWith()
call, the scores(…)
method is used.
This runs the ConstraintProvider
on the given facts and returns a sum of Score
s of all constraint matches resulting
from the given facts.
Using this method, you ensure that the constraint provider does not miss any constraints and that the scoring function
remains consistent as your code base evolves.
It is therefore necessary for the given(…)
method to list all planning entities and problem facts,
or provide the entire planning solution instead.
6. Other types of score calculation
Timefold Solver supports two other types of score calculation.
6.1. Easy Java score calculation
An easy way to implement your score calculation in Java.
Advantages:
- Plain old Java: no learning curve.
- Opportunity to delegate score calculation to an existing code base or legacy system.
- Useful for prototyping.

Disadvantages:
- Slower, typically not suitable for production.
- Does not scale, because there is no incremental score calculation.
- Cannot explain the score.
To start using Easy Java score calculation,
implement the one method of the interface EasyScoreCalculator
:
public interface EasyScoreCalculator<Solution_, Score_ extends Score<Score_>> {
Score_ calculateScore(Solution_ solution);
}
For example in N-queens:
public class NQueensEasyScoreCalculator
implements EasyScoreCalculator<NQueens, SimpleScore> {
@Override
public SimpleScore calculateScore(NQueens nQueens) {
int n = nQueens.getN();
List<Queen> queenList = nQueens.getQueenList();
int score = 0;
for (int i = 0; i < n; i++) {
for (int j = i + 1; j < n; j++) {
Queen leftQueen = queenList.get(i);
Queen rightQueen = queenList.get(j);
if (leftQueen.getRow() != null && rightQueen.getRow() != null) {
if (leftQueen.getRowIndex() == rightQueen.getRowIndex()) {
score--;
}
if (leftQueen.getAscendingDiagonalIndex() == rightQueen.getAscendingDiagonalIndex()) {
score--;
}
if (leftQueen.getDescendingDiagonalIndex() == rightQueen.getDescendingDiagonalIndex()) {
score--;
}
}
}
}
return SimpleScore.of(score);
}
}
Configure it in the solver configuration:
<scoreDirectorFactory>
<easyScoreCalculatorClass>ai.timefold.solver.examples.nqueens.optional.score.NQueensEasyScoreCalculator</easyScoreCalculatorClass>
</scoreDirectorFactory>
To configure values of an EasyScoreCalculator
dynamically in the solver configuration
(so the Benchmarker can tweak those parameters),
add the easyScoreCalculatorCustomProperties
element and use custom properties:
<scoreDirectorFactory>
<easyScoreCalculatorClass>...MyEasyScoreCalculator</easyScoreCalculatorClass>
<easyScoreCalculatorCustomProperties>
<property name="myCacheSize" value="1000" />
</easyScoreCalculatorCustomProperties>
</scoreDirectorFactory>
6.2. Incremental Java score calculation
A way to implement your score calculation incrementally in Java.
Advantages:
- Very fast and scalable; currently the fastest approach, if implemented correctly.

Disadvantages:
- Hard to write: a scalable implementation heavily uses maps, indexes, etc. You have to learn, design, write and improve all these performance optimizations yourself. Why not have constraint streams do the hard work for you?
- Hard to read: regular score constraint changes can lead to high maintenance costs.
To start using Incremental Java score calculation,
implement all the methods of the interface IncrementalScoreCalculator
:
public interface IncrementalScoreCalculator<Solution_, Score_ extends Score<Score_>> {
void resetWorkingSolution(Solution_ workingSolution);
void beforeEntityAdded(Object entity);
void afterEntityAdded(Object entity);
void beforeVariableChanged(Object entity, String variableName);
void afterVariableChanged(Object entity, String variableName);
void beforeEntityRemoved(Object entity);
void afterEntityRemoved(Object entity);
Score_ calculateScore();
}

For example in N-queens:
public class NQueensAdvancedIncrementalScoreCalculator
implements IncrementalScoreCalculator<NQueens, SimpleScore> {
private Map<Integer, List<Queen>> rowIndexMap;
private Map<Integer, List<Queen>> ascendingDiagonalIndexMap;
private Map<Integer, List<Queen>> descendingDiagonalIndexMap;
private int score;
public void resetWorkingSolution(NQueens nQueens) {
int n = nQueens.getN();
rowIndexMap = new HashMap<Integer, List<Queen>>(n);
ascendingDiagonalIndexMap = new HashMap<Integer, List<Queen>>(n * 2);
descendingDiagonalIndexMap = new HashMap<Integer, List<Queen>>(n * 2);
for (int i = 0; i < n; i++) {
rowIndexMap.put(i, new ArrayList<Queen>(n));
ascendingDiagonalIndexMap.put(i, new ArrayList<Queen>(n));
descendingDiagonalIndexMap.put(i, new ArrayList<Queen>(n));
if (i != 0) {
ascendingDiagonalIndexMap.put(n - 1 + i, new ArrayList<Queen>(n));
descendingDiagonalIndexMap.put((-i), new ArrayList<Queen>(n));
}
}
score = 0;
for (Queen queen : nQueens.getQueenList()) {
insert(queen);
}
}
public void beforeEntityAdded(Object entity) {
// Do nothing
}
public void afterEntityAdded(Object entity) {
insert((Queen) entity);
}
public void beforeVariableChanged(Object entity, String variableName) {
retract((Queen) entity);
}
public void afterVariableChanged(Object entity, String variableName) {
insert((Queen) entity);
}
public void beforeEntityRemoved(Object entity) {
retract((Queen) entity);
}
public void afterEntityRemoved(Object entity) {
// Do nothing
}
private void insert(Queen queen) {
Row row = queen.getRow();
if (row != null) {
int rowIndex = queen.getRowIndex();
List<Queen> rowIndexList = rowIndexMap.get(rowIndex);
score -= rowIndexList.size();
rowIndexList.add(queen);
List<Queen> ascendingDiagonalIndexList = ascendingDiagonalIndexMap.get(queen.getAscendingDiagonalIndex());
score -= ascendingDiagonalIndexList.size();
ascendingDiagonalIndexList.add(queen);
List<Queen> descendingDiagonalIndexList = descendingDiagonalIndexMap.get(queen.getDescendingDiagonalIndex());
score -= descendingDiagonalIndexList.size();
descendingDiagonalIndexList.add(queen);
}
}
private void retract(Queen queen) {
Row row = queen.getRow();
if (row != null) {
List<Queen> rowIndexList = rowIndexMap.get(queen.getRowIndex());
rowIndexList.remove(queen);
score += rowIndexList.size();
List<Queen> ascendingDiagonalIndexList = ascendingDiagonalIndexMap.get(queen.getAscendingDiagonalIndex());
ascendingDiagonalIndexList.remove(queen);
score += ascendingDiagonalIndexList.size();
List<Queen> descendingDiagonalIndexList = descendingDiagonalIndexMap.get(queen.getDescendingDiagonalIndex());
descendingDiagonalIndexList.remove(queen);
score += descendingDiagonalIndexList.size();
}
}
public SimpleScore calculateScore() {
return SimpleScore.of(score);
}
}
Configure it in the solver configuration:
<scoreDirectorFactory>
<incrementalScoreCalculatorClass>ai.timefold.solver.examples.nqueens.optional.score.NQueensAdvancedIncrementalScoreCalculator</incrementalScoreCalculatorClass>
</scoreDirectorFactory>
A piece of incremental score calculator code can be difficult to write and to review. Assert its correctness by using an EasyScoreCalculator to do the assertions triggered by the environmentMode.
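The insert/retract bookkeeping above can also be cross-checked with a plain-Java miniature of the row-conflict part. This self-contained sketch (class and method names are made up for illustration) subtracts the bucket size before inserting a queen, because each existing occupant of the row forms one new conflicting pair, and adds it back on removal:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java miniature of the incremental row-conflict bookkeeping:
// score change on insert = -(queens already on that row); on retract, the inverse.
public class IncrementalConflictCounter {

    private final Map<Integer, List<Integer>> rowBuckets = new HashMap<>();
    private int score = 0;

    public void insert(int queenId, int rowIndex) {
        List<Integer> bucket = rowBuckets.computeIfAbsent(rowIndex, k -> new ArrayList<>());
        score -= bucket.size(); // one new pair with every queen already on this row
        bucket.add(queenId);
    }

    public void retract(int queenId, int rowIndex) {
        List<Integer> bucket = rowBuckets.get(rowIndex);
        bucket.remove(Integer.valueOf(queenId));
        score += bucket.size(); // those pairs disappear again
    }

    public int getScore() {
        return score;
    }

    public static void main(String[] args) {
        IncrementalConflictCounter counter = new IncrementalConflictCounter();
        counter.insert(0, 0);
        counter.insert(1, 0);
        counter.insert(2, 0); // three queens on row 0 -> 3 conflicting pairs
        System.out.println(counter.getScore()); // -3
        counter.retract(2, 0);
        counter.insert(2, 1); // move queen 2 to row 1
        System.out.println(counter.getScore()); // -1
    }
}
```

Note that the move only touches two buckets; the rest of the board is never revisited, which is exactly the property that makes incremental calculation fast.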
To configure values of an IncrementalScoreCalculator
dynamically in the solver configuration
(so the Benchmarker can tweak those parameters),
add the incrementalScoreCalculatorCustomProperties
element and use custom properties:
<scoreDirectorFactory>
<incrementalScoreCalculatorClass>...MyIncrementalScoreCalculator</incrementalScoreCalculatorClass>
<incrementalScoreCalculatorCustomProperties>
<property name="myCacheSize" value="1000"/>
</incrementalScoreCalculatorCustomProperties>
</scoreDirectorFactory>
6.2.1. ConstraintMatchAwareIncrementalScoreCalculator
To add support for score analysis,
optionally also implement the ConstraintMatchAwareIncrementalScoreCalculator
interface:
public interface ConstraintMatchAwareIncrementalScoreCalculator<Solution_, Score_ extends Score<Score_>> {
void resetWorkingSolution(Solution_ workingSolution, boolean constraintMatchEnabled);
Collection<ConstraintMatchTotal<Score_>> getConstraintMatchTotals();
Map<Object, Indictment<Score_>> getIndictmentMap();
}
For example in machine reassignment, create one ConstraintMatchTotal
per constraint type and call addConstraintMatch()
for each constraint match:
public class MachineReassignmentIncrementalScoreCalculator
implements ConstraintMatchAwareIncrementalScoreCalculator<MachineReassignment, HardSoftLongScore> {
...
@Override
public void resetWorkingSolution(MachineReassignment workingSolution, boolean constraintMatchEnabled) {
resetWorkingSolution(workingSolution);
// ignore constraintMatchEnabled, it is always presumed enabled
}
@Override
public Collection<ConstraintMatchTotal<HardSoftLongScore>> getConstraintMatchTotals() {
ConstraintMatchTotal<HardSoftLongScore> maximumCapacityMatchTotal = new DefaultConstraintMatchTotal<>(CONSTRAINT_PACKAGE,
"maximumCapacity", HardSoftLongScore.ZERO);
...
for (MrMachineScorePart machineScorePart : machineScorePartMap.values()) {
for (MrMachineCapacityScorePart machineCapacityScorePart : machineScorePart.machineCapacityScorePartList) {
if (machineCapacityScorePart.maximumAvailable < 0L) {
maximumCapacityMatchTotal.addConstraintMatch(
Arrays.asList(machineCapacityScorePart.machineCapacity),
HardSoftLongScore.of(machineCapacityScorePart.maximumAvailable, 0));
}
}
}
...
List<ConstraintMatchTotal<HardSoftLongScore>> constraintMatchTotalList = new ArrayList<>(4);
constraintMatchTotalList.add(maximumCapacityMatchTotal);
...
return constraintMatchTotalList;
}
@Override
public Map<Object, Indictment<HardSoftLongScore>> getIndictmentMap() {
return null; // Calculate it non-incrementally from getConstraintMatchTotals()
}
}
That getConstraintMatchTotals()
code often duplicates some of the logic of the normal IncrementalScoreCalculator
methods.
Constraint streams do not have this disadvantage: they are constraint match aware automatically when needed, without any extra domain-specific code.