
Resource leveling, a graph theory approach

Daniel Chernowitz
December 8, 2025
AI, Construction, Optimization, Resource Leveling

This is the first part of a three-part series on resource leveling.

Let's say we have a project consisting of many interdependent tasks, some of which also draw on a finite pool of resources.

Project dependencies diagram

So you need to dig before you can pour concrete before you can paint. And the digging takes an excavator, and the pouring takes a concrete truck, and the painting takes Jackson Pollock. And of course, there's only one Jackson Pollock!

Now the number of ways to order the schedule is often super-exponential in the number of tasks. Resource leveling is the art of finding an ordering that never exceeds the available resources, and hopefully does so quickly (= cheaply). This typically means using your resources continuously: if they sit idle in the middle, you'll have to do more work later.
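
To make this concrete, here's a minimal sketch (in Python) of how such a project could be represented. The `Task` class, the example tasks, and the capacities are all made up for illustration, not taken from a real project.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    duration: int                                           # in days
    prereqs: list[str] = field(default_factory=list)        # names of prerequisite tasks
    demand: dict[str, int] = field(default_factory=dict)    # resource -> units needed

# Hypothetical example: dig -> pour -> paint, with limited equipment (and one Pollock).
tasks = [
    Task("dig",   duration=3, demand={"excavator": 1}),
    Task("pour",  duration=2, prereqs=["dig"],  demand={"concrete_truck": 1}),
    Task("paint", duration=4, prereqs=["pour"], demand={"pollock": 1}),
]
capacity = {"excavator": 4, "concrete_truck": 2, "pollock": 1}
```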

In industry, this is often done by hand. Planners add artificial lag between activities, or create 'soft' prerequisites between activities that use the same resource. This has obvious disadvantages. How does your colleague know which lag is a real requirement, or that the prereq you created could just as well go the other way, when all you're trying to say is that two activities can't run simultaneously? Moreover, it's very laborious to check even one solution, and there's no guarantee that solving one resource bottleneck won't create another (a waterbed effect).

Instead, why not use some good old code? This post is a basic intro to a procedural way to find solutions that by construction satisfy the resource constraints.

Baseline

CPM scheduling diagram

If there were no resource constraints, every task would start exactly when its prerequisite tasks were done (assuming all relationships are 'start-after-finish', which most are). Then there is typically only one fastest schedule. This method is the Critical Path Method (CPM): build the graph of dependencies, put the tasks on a timeline, squish everything to the left, and that's your output.
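
As a sketch, the CPM forward pass is only a few lines of Python, reusing the hypothetical `Task` objects from the earlier snippet:

```python
def cpm_schedule(tasks):
    """Earliest-start schedule ignoring resources: each task starts the
    moment its last prerequisite finishes ('squish everything to the left')."""
    start, finish = {}, {}
    remaining = {t.name: t for t in tasks}
    while remaining:
        # pick any task whose prerequisites are already scheduled
        ready = next(t for t in remaining.values()
                     if all(p in finish for p in t.prereqs))
        start[ready.name] = max((finish[p] for p in ready.prereqs), default=0)
        finish[ready.name] = start[ready.name] + ready.duration
        del remaining[ready.name]
    return start, finish
```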

But if there are resource constraints, we have to be more careful. Let's say we've scheduled some tasks, and we've noticed some of them consume resources.

Resource constraint diagram

This won't do. Both Task A and Task B are placed as early as their prerequisites allow. But together they consume 5 excavators, and we only have 4. So we make a choice: we give Task B 'priority'. That means we will plan it first. Then when Task A comes along, we plan it as early as we can, taking into account prerequisites AND resource availability. This results in:

Resource leveling solution

We've also kept a running tally of the available excavators, for future tasks.
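
One way to keep that running tally in code is a usage profile per resource: how many units are committed at each (discretized) time step. A rough sketch, with helper names of my own invention:

```python
from collections import defaultdict

def fits(profile, capacity, start, duration, demand):
    """True if adding `demand` units over [start, start + duration)
    never pushes the running tally above `capacity`."""
    return all(profile[t] + demand <= capacity
               for t in range(start, start + duration))

def commit(profile, start, duration, demand):
    """Record the task's usage in the running tally."""
    for t in range(start, start + duration):
        profile[t] += demand

# e.g. a tally of excavators in use, keyed by (discrete) time step
excavator_usage = defaultdict(int)
```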

In fact, in order to execute the full scheduling constructively, we've put all possible tasks in a priority order. Let's consider the ones still waiting to be scheduled into this partial schedule.

Priority order scheduling

The loop is the following (a code sketch appears after the list):

  1. Go down the priority list until you find the highest-priority unscheduled task whose prereqs have all been scheduled
  2. Find the earliest point that task could start given its prereqs
  3. If it uses resources, push it later until enough resources are available
  4. Go to 1.
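
Here's how that loop might look in Python, reusing the hypothetical `Task` objects and the `fits`/`commit` helpers sketched earlier. This is just my reading of the procedure, with time discretized into unit steps, not a production implementation:

```python
from collections import defaultdict

def level_schedule(tasks, capacity, priority):
    """Schedule tasks by priority order, respecting prerequisites
    and per-resource capacities."""
    by_name = {t.name: t for t in tasks}
    start, finish = {}, {}
    profiles = {r: defaultdict(int) for r in capacity}   # resource -> running tally

    unscheduled = list(priority)
    while unscheduled:
        # 1. highest-priority task whose prerequisites are all scheduled
        name = next(n for n in unscheduled
                    if all(p in finish for p in by_name[n].prereqs))
        task = by_name[name]
        # 2. earliest start allowed by its prerequisites
        t = max((finish[p] for p in task.prereqs), default=0)
        # 3. push it later until every resource it needs is available
        while not all(fits(profiles[r], capacity[r], t, task.duration, units)
                      for r, units in task.demand.items()):
            t += 1
        for r, units in task.demand.items():
            commit(profiles[r], t, task.duration, units)
        start[name], finish[name] = t, t + task.duration
        unscheduled.remove(name)                          # 4. back to step 1
    return start, finish
```

Pushing the start time one step at a time is the simplest thing that works; a real implementation would jump straight to the next moment a resource frees up.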

This procedure of course also works if a task uses multiple resources; we just keep several of the above running tallies at the same time, one per resource.

Now imagine there are more tasks, C (1 excavator) and D (2 excavators), that could in principle start with the same prereqs as A and B. And let's say C has the higher priority.

We end up with this:

Suboptimal resource allocation

That's actually a poor use of resources. Task C was planned as soon as it could be, in parallel with B. Task D could have gone there instead, with C on top of A. But now it can't, so D has to wait until after A. A better priority would be (B → A → D → C) or (A → B → C → D). Clearly, a good choice of priority order means a good schedule.
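
To see the effect, we can run the `level_schedule` sketch on made-up numbers. The post doesn't give the exact demands, so assume A needs 3 excavators, B and D need 2, C needs 1, every task takes 2 days, there are no prereqs among them, and we still have 4 excavators:

```python
abcd = [
    Task("A", duration=2, demand={"excavator": 3}),
    Task("B", duration=2, demand={"excavator": 2}),
    Task("C", duration=2, demand={"excavator": 1}),
    Task("D", duration=2, demand={"excavator": 2}),
]
cap = {"excavator": 4}

for order in (["B", "A", "C", "D"],   # C prioritized over D: D ends up after A
              ["B", "A", "D", "C"],   # one of the better orders from the text
              ["A", "B", "C", "D"]):
    _, finish = level_schedule(abcd, cap, order)
    print(order, "-> makespan", max(finish.values()))
```

With these assumed numbers, the C-first order finishes two days later than the other two.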

The algorithm that takes a priority order and outputs a schedule is called AIRL (Activity Iteration Resource Leveling). Instead of searching the space of schedules for cheap and efficient solutions, we can search the space of orderings! We've parameterized our problem.
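
A naive way to exploit that parameterization is to sample random priority orders, run each through the `level_schedule` sketch, and keep the best. This is purely a brute-force baseline for illustration, not a recommendation:

```python
import random

def random_search(tasks, capacity, iterations=1000, seed=0):
    """Sample random priority orders and keep the one whose
    leveled schedule finishes earliest."""
    rng = random.Random(seed)
    names = [t.name for t in tasks]
    best_order, best_makespan = None, float("inf")
    for _ in range(iterations):
        # any permutation is valid input: the leveling loop skips tasks
        # whose prereqs aren't scheduled yet
        rng.shuffle(names)
        _, finish = level_schedule(tasks, capacity, list(names))
        makespan = max(finish.values())
        if makespan < best_makespan:
            best_order, best_makespan = list(names), makespan
    return best_order, best_makespan
```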

This algorithm forms a mapping from the space of orderings (S_N, the symmetric group on N elements, for the fellow nerds out there) into the space of schedules with N tasks. It's not surjective: we can't describe arbitrary lags, or ALAP (as-late-as-possible) timing. This is actually a good thing. We typically don't want those features, and not allowing them shrinks our search space.

AIRL mapping diagram

This mapping also isn't injective. If no resource limit ever binds, every ordering maps to the same CPM schedule. Moreover, once a task's priority is higher than that of its prerequisites, raising it further changes nothing: it still can't start before they finish. So many elements of S_N map to the same solution.
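
You can see the non-injectivity directly with the earlier sketches: give the made-up tasks far more excavators than they could ever use, and different orderings collapse to the same (CPM) schedule:

```python
# With ample capacity the resource check never binds,
# so different orderings produce identical schedules.
roomy = {"excavator": 100}
s1 = level_schedule(abcd, roomy, ["A", "B", "C", "D"])
s2 = level_schedule(abcd, roomy, ["D", "C", "B", "A"])
assert s1 == s2   # two different orderings, one schedule
```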

This hints at a great speed-up when searching for good solutions. If we only consider changes to the input that are likely to change the output, we can search far more effectively for good schedules. Stay tuned to see how we exploit this insight.