Home

From Project Uncertainty to Capital Control.

Probabilistic project control that scales from one project to full portfolio governance.

Our project control framework turns probabilistic forecasts into repeatable decision inputs. Uncertainty becomes measurable, comparable, and governable across an enterprise.

A working mockup

The Control Framework you can explore here is a guided demonstration of how we would structure and steer a real project. The outputs are representative of what the method produces when planning and risk are modeled our way.

How most projects plan and manage risk today

Plans and risk registers often look controlled, until uncertainty accumulates. The result is late escalation and reactive steering.

Deterministic planning

A single baseline hides exposure and delays escalation.

Isolated risk registers

Lists without quantified impact do not translate into capital or schedule confidence.

Portfolio reporting without a probabilistic backbone

Aggregation without confidence creates weak auditability and inconsistent governance.

Baseline €268M
P85 €326M
+21.6%

Fan chart (cost over time)

Uncertainty intensity over time (P90–P10)

These are demo visuals (static). They illustrate the output types the Control Framework produces. In the journey, the same outputs are generated from your plan and risk inputs.

What we add

What this framework adds, and why it works

Why these outputs matter

  • The fan chart turns uncertainty into a timeline: it shows how “best case” and “worst case” diverge as time passes.
  • The intensity chart shows when uncertainty is concentrated. Those are the moments where management attention has the highest leverage.

Method at a glance

Inputs: a baseline plan and a structured risk model linked to tasks.

Engine: sampling, dependency propagation, and Monte Carlo to produce distributions.

Outputs: percentile bands, breach probabilities, and quantified drivers for steering.
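The inputs-engine-outputs loop above can be sketched in a few lines. Everything here is illustrative: the task names, the triangular three-point ranges, and the €160M envelope are assumptions for the sketch, not outputs of the framework.

```python
import random

# Hypothetical inputs: (low, most likely, high) cost ranges per task, EUR millions.
# Triangular distributions are a common minimal choice for three-point risk ranges.
task_ranges = {
    "design": (10, 12, 18),
    "build": (80, 95, 140),
    "commission": (20, 24, 40),
}

def simulate_total_cost(n_iterations=10_000, seed=42):
    rng = random.Random(seed)
    totals = []
    for _ in range(n_iterations):
        # One iteration: sample every task range once and sum.
        totals.append(sum(rng.triangular(lo, hi, mode)
                          for lo, mode, hi in task_ranges.values()))
    return sorted(totals)

def percentile(sorted_values, p):
    # Nearest-rank percentile over the sorted iteration outcomes.
    return sorted_values[min(len(sorted_values) - 1,
                             int(p / 100 * len(sorted_values)))]

totals = simulate_total_cost()
p50, p85 = percentile(totals, 50), percentile(totals, 85)
envelope = 160.0  # hypothetical budget envelope
breach_prob = sum(t > envelope for t in totals) / len(totals)
```

The same loop scales to real plans by replacing the toy ranges with the structured risk model linked to tasks.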

Outcomes

• Budget approval: set capital envelopes using a confidence percentile (for example P85), not a single baseline number.

• Re-plan trigger: define measurable thresholds and escalation rules based on breach probability and exposure magnitude.

• Mitigation prioritization: focus effort on the few drivers that move the distribution, not the longest risk list.

Why this works at enterprise scale

The goal is not to analyze uncertainty for a single project in isolation. It is to create a calibrated uncertainty unit that can be reused across many projects.

When every project produces comparable percentile bands and confidence metrics, you can aggregate exposure into a portfolio or enterprise capital envelope and steer consistently.

This is distribution-first aggregation: roll up iteration outcomes, then compute percentiles. That keeps the math consistent when you scale.
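A minimal sketch of distribution-first aggregation with two hypothetical, independent project models. It also shows why the naive alternative (adding each project's P85) is not equivalent.

```python
import random

rng = random.Random(7)
N = 20_000

# Two hypothetical, independent project cost models (EUR millions).
def project_a():
    return rng.triangular(90, 160, 110)

def project_b():
    return rng.triangular(60, 130, 80)

# Distribution-first: sum outcomes WITHIN each iteration, then take percentiles.
portfolio = sorted(project_a() + project_b() for _ in range(N))
p85_portfolio = portfolio[int(0.85 * N)]

# Naive alternative: take each project's P85 separately, then add the numbers.
a = sorted(project_a() for _ in range(N))
b = sorted(project_b() for _ in range(N))
p85_summed = a[int(0.85 * N)] + b[int(0.85 * N)]

# Unless projects are perfectly correlated, the naive sum overstates exposure:
# extreme outcomes rarely coincide across independent projects.
```

Percentiles do not add; only iteration outcomes do. Rolling up per iteration keeps the portfolio envelope honest.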

Use confidence levels (for example P85) to quantify reserves and to set decision thresholds (approve, re-plan, mitigate) based on measurable confidence, not optimism.

How it gets used (examples)

• Reserve sizing: quantify the buffer required to reach a chosen confidence level (e.g. P85).

• Breach probability: estimate the chance a program exceeds a budget envelope, and by how much.

• Risk appetite: compare exposure to governance thresholds consistently across projects.
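The first two uses above can be sketched against a simulated outcome set. The baseline, the envelope, and the triangular range are hypothetical placeholders standing in for a real Monte Carlo run.

```python
import random

rng = random.Random(3)

# Hypothetical completion-cost outcomes (EUR millions) from a Monte Carlo run.
outcomes = sorted(rng.triangular(250, 380, 265) for _ in range(10_000))

baseline = 268.0   # deterministic plan total (hypothetical)
envelope = 300.0   # approved budget envelope (hypothetical)

# Reserve sizing: buffer on top of the baseline needed to reach P85 confidence.
p85 = outcomes[int(0.85 * len(outcomes))]
reserve = p85 - baseline

# Breach probability and magnitude: chance of exceeding the envelope,
# plus the average overrun among the breaching iterations.
overruns = [c - envelope for c in outcomes if c > envelope]
breach_prob = len(overruns) / len(outcomes)
mean_overrun = sum(overruns) / len(overruns) if overruns else 0.0
```

Both numbers drop straight into governance language: "reserve of €X to reach P85" and "Y% chance of breaching the envelope, by €Z on average".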

What you get

• P85 budget and finish targets for governance planning.

• Breach probability and exposure sizing against your thresholds.

• Top drivers that explain what moves cost and schedule outcomes.

• An uncertainty calendar that shows when variance is concentrated over time.

• Governance bullets derived from the distributions for steering updates.

• Portfolio roll-up logic once projects produce comparable outputs.

Method

Our method, in short

A spine from plan intake to governance confidence, designed for repeatability and auditability.

How the method works

• Start from a baseline plan (tasks, sequencing, and a monthly cost curve). This is the controlled reference.

• Model risks as ranges linked to tasks. Each Monte Carlo iteration samples cost and delay from those ranges.

• Propagate sampled delays through dependencies, then shift the monthly cost curve with the simulated schedule movement.

• Aggregate iterations into distributions over time and at completion. Percentiles (P50, P85) and breach probabilities become measurable.

• Translate distributions into governance inputs: reserve sizing, escalation triggers, and driver focused mitigation priorities.
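The propagation step above can be sketched as one iteration's forward pass through a dependency network. The three-task chain, durations, and uniform delay ranges are assumptions for illustration; the P85 slip is what would shift the monthly cost curve.

```python
import random

rng = random.Random(11)

# Hypothetical network: task -> (baseline duration in months, predecessors,
# uniform delay range in months). Dict order follows precedence.
tasks = {
    "design":  (4,  [],          (0, 2)),
    "procure": (6,  ["design"],  (0, 3)),
    "build":   (10, ["procure"], (1, 5)),
}

def sampled_finish():
    # Forward pass: a task starts when all its predecessors finish;
    # its sampled delay is added to its baseline duration.
    finish = {}
    for name, (dur, preds, (lo, hi)) in tasks.items():
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + dur + rng.uniform(lo, hi)
    return finish["build"]

baseline_finish = 20  # 4 + 6 + 10 months with zero delay
slips = sorted(sampled_finish() - baseline_finish for _ in range(5_000))
p85_slip = slips[int(0.85 * len(slips))]  # schedule shift for the cost curve
```

Repeating this pass per iteration yields the finish-date distribution; shifting the monthly cost curve by each iteration's slip yields the fan chart over time.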

References

• Metropolis, N. and Ulam, S. (1949). The Monte Carlo method.

• Hertz, D. (1964). Risk analysis in capital investment.

• Vose, D. (2008). Risk Analysis: A Quantitative Guide (3rd ed.).

• PMI (2019). Practice Standard for Project Risk Management.

1. Name: Identity and traceability for governance outputs.

2. Planning: Baseline scope, logic, and cost timing.

3. Risks: Ranges linked to tasks and dependencies.

4. Calculation: Monte Carlo and percentile band construction.

5. Results: Decision outputs and quantified drivers.

6. Governance: Confidence against targets for steering.

7. Scale: Project to program and portfolio aggregation.

One framework. Repeatable across projects.

For which enterprises is this useful?

Where this fits

• Government capital programs

• Infrastructure portfolios

• Multi-year complex investments

• Teams with structured planning and risk processes

What this requires

Disciplined data and process maturity. Not plug-and-play.

If inputs are not traceable to a baseline plan and structured risk model, confidence claims cannot be defended.

ProSim Control Studio · v0.1.0-alpha