Lessons learned: Delivering software programs (part 2)

14 August 2024
  • Software Engineering

By Stephen McCarron, Head of Forecasting Engineering

This is the second blog in a five-part series from our Head of Forecasting Engineering on the complexities of software program delivery – and the practices that most reliably predict a program's chances of success.

Addressing Uncertainty

Every large software program has uncertainty: problems you can't yet see. Whether it's a greenfield project or a change to an existing system, it is hard – and often futile – to try to predict and plan everything ahead of time.

Successful programs prioritise dealing with uncertainty

Uncertainty makes it hard to predict how a program can deliver its intended outcomes and how expensive it might be to do so. Reducing this uncertainty is pivotal to success.

Emergent problems will make it harder to achieve the intended outcome; the sooner you discover them, the sooner you can do something about them.

When a program starts by delivering simple, well-understood tasks, all that does is defer learning about the things that will be hard. When the first milestones in a project are not about reducing known uncertainty, I start to worry about its chances of success.

Successful programs make themselves more manageable by making the complexity smaller.

By focusing on uncertainties, you chip away at the complexity of the problem and develop your understanding of how and when the program can deliver value as you go. Learn as fast as you can.

Building sophisticated software systems involves too many interacting and evolving components, of varying ages and configurations, to plan fully up front; many details will only emerge during the build-out.

In my experience uncertainties come in two main types:

1. Where multiple solutions are possible but the best solution is unclear

Example: You may not know how best to store your data but there are many established methods available, so you just need to choose the most appropriate.

These are lower-order uncertainties. The uncertainty is not over whether it can be done; it's about finding the most effective way of doing it. Treat these as key decision points. If there is no significant difference in cost or outcome, these decisions should be left to the delivery team to resolve.

2. Where no solution is obvious

Example: You are attempting a new paradigm that requires interfacing with a system that is locked down and has never granted access before; there is no established pattern for how this might work.

These types of uncertainties should be dealt with urgently. If the problem can't be overcome, it could delay or halt the program, or cause cost overruns as you work around it. You need to understand problems like these as quickly as possible. Run quick, short experiments to understand the problem better until you have found at least one path forward.

Having addressed immediate uncertainties, inevitably more uncertainties will emerge. These subsequent problems tend to have a smaller impact and become more localised as the overall domain becomes better understood. The circles of uncertainty should keep closing in, from large problems to smaller and smaller ones.

Uncertainty around estimating time and cost of delivery – delivering on time

When there is so much uncertainty around delivery, managing expectations on cost and timings is not straightforward.

Emergent work is hard to estimate accurately as little experience exists to use as a baseline. The only reasonably accurate estimates in a program tend to be in those parts that are founded on practical experience, things that have been done before.

Successful projects, however, do tend to find ways to predict the effort. They work effectively with sponsors and stakeholders and deliver their outcomes in line with expectations.

Successful programs show a preference for continuous delivery of value in order to offset the ambiguity around cost. This swaps the problem from being an estimation problem to being a prioritisation problem. As value gets delivered, decisions can be made at regular intervals to continue the investment to garner further value.

There will, however, be projects where continuous delivery is less practical – where the value isn't realised until all the parts are assembled – despite all the textbook warnings. Assuredly, the program will still be asked: when, and how much?

In this case, there are two strategies I’ve seen work:

  • Rather than estimate everything at once, work backwards from a target date

Example: If the outcome is to land in a year, what needs to be true by mid-year? And from there, what needs to be true by the end of Q1? At that point, ask delivery teams to quote on achieving that more digestible chunk of effort. That gives you a target date and an idea of what the first quarter will cost, both of which you can use to inform stakeholders and sponsors.

  • If you have compartmentalised the program well, then quite a few work streams should be less uncertain

On those streams take a rough estimate from the senior engineers (or the ones who have done this before and understand the platform best) and use this to baseline an indicative cost.

For the other streams, ask the most experienced engineers for a blink estimate – a rough first-impression guess at how hard it is – and just use that. It tends to be more accurate, and a lot less effort, than many other methods I've seen over the years.
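To make the back-planning strategy concrete, here is a minimal sketch. The function name and the even spacing of checkpoints are my own assumptions for illustration; real checkpoints would follow the program's structure, not the calendar.

```python
from datetime import date

def back_plan(start: date, target: date, checkpoints: int) -> list[date]:
    """Work backwards from a target date: split the runway into
    evenly spaced checkpoint dates, the last landing on the target."""
    step = (target - start) / checkpoints  # timedelta / int -> timedelta
    return [start + step * i for i in range(1, checkpoints + 1)]

# Quarterly checkpoints for an outcome due at the end of 2025
milestones = back_plan(date(2025, 1, 1), date(2025, 12, 31), 4)
```

Each checkpoint date then becomes a more digestible chunk that a delivery team can quote against.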

When communicating with stakeholders and sponsors, quote the uncertainty; as uncertainties are resolved, refine the estimates and report the changed expectations, particularly if they breach an agreed ceiling. This gives stakeholders control over cost management.
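That reporting loop can be sketched in a few lines. The function and its output format are illustrative assumptions, not anything the post prescribes: quote a range rather than a point estimate, and flag any breach of the agreed ceiling so sponsors can act early.

```python
def report_estimate(low: float, high: float, ceiling: float) -> str:
    """Quote an estimate as a range and flag it when the upper
    bound breaches the agreed cost ceiling."""
    quote = f"estimated cost: {low:,.0f} to {high:,.0f}"
    if high > ceiling:
        quote += f" (breaches agreed ceiling of {ceiling:,.0f})"
    return quote

# As uncertainties resolve, re-run with a narrower range.
print(report_estimate(800_000, 1_500_000, 1_200_000))
print(report_estimate(900_000, 1_100_000, 1_200_000))
```

The narrowing range over successive reports is itself useful information: it shows sponsors that the circles of uncertainty are closing in.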

Catch up on the rest of the series!

1. Alignment on outcomes

Ensuring everyone is shooting for the same thing

2. Addressing uncertainty

Removing complexities early

3. Constant adaptation

Adjusting quickly when the state changes

4. Compartmentalising

Breaking down problems into smaller, independent units

5. Communicating

Ensuring everyone is up to date with the ever-changing state

