From Meteorology to Project Management: The Power of Predictive Modelling- Part 2
Monte Carlo Simulation forecasts project completion dates with a methodology akin to how meteorologists predict hurricane paths, embracing uncertainty to chart a course through complex variables.
In our initial exploration, we delved into the rich history of predictive modelling and unravelled the intricacies of Monte Carlo simulation (MCS). Now, in this second part, our journey takes a pragmatic turn. We will immerse ourselves in the practical application of these concepts, using software as our tool.
This journey into the application phase is not just about understanding the software; it's about unlocking the potential of predictive modelling in real-world scenarios. As we navigate this, remember that practical application is what turns theory into action. Let's embark on this enlightening path, where the abstract meets the concrete and our knowledge finds its true power in application.
Software Solutions:
Below are a few software solutions for forecasting completion dates with MCS, each suited to different needs.
ActionableAgile- Jira plugin, Azure DevOps plugin and standalone SaaS
Nave- Kanban software solution with MCS integrated
Capacity Planning & Feature Monte Carlo for Azure DevOps and Jira (free)
In the examples below, we will be exploring the ActionableAgile MCS modules.
The ActionableAgile Jira Plugin, a robust extension of Jira, empowers teams with sophisticated analytics and actionable insights, enhancing their workflow processes. It offers real-time analytics that make it easy to identify bottlenecks and measure performance, enabling teams to fine-tune their strategies and improve flow and delivery.
Harnessing the real-time data within Jira, the plugin provides an extensive array of metrics and visualisations. These tools aid in making informed decisions and optimising processes, ensuring that teams have the knowledge they need to excel in their tasks.
For more information on configuring your project to work with ActionableAgile, visit the 55 Degrees website. We will use the sample historical data set provided with the Jira plugin for our examples below.
When Will It Be Done?
Below is a screenshot of the ActionableAgile plugin's "When" module. You would use this to forecast a range of probable delivery dates with MCS.
Data Set- Used to select a team's data for analysis.
Charts- Type of chart used. In this case, "When".
Histogram- A picture of the various completion dates and how often they occurred.
Control for all Charts- Where you control the information needed for running the Monte Carlo simulation.
Calendar- The Histogram in a format that is easier to read.
Legend- Colour-coded legend for the Calendar denoting percentile outcomes.
How to Interpret Monte Carlo Simulation Results:
In this example, we set a backlog of 100 tasks and want to start working on it on 1 February. That's all you need to input for a basic forecast once your Jira project has been integrated with the plugin.
The simulation we've conducted offers a compelling insight: there's an 85% probability that we can complete all the backlog items by the 20th of March. This statistical probability isn't just a number; it's a guiding light in the maze of project management.
As we project further into the future, the certainty of completing all tasks increases. However, it's essential to understand that our forecast isn't about these specific 100 tasks in our backlog being delivered by the 20th of March. That's not the claim here.
Instead, we are asserting our capability to deliver 100 work items by that date, with an 85% probability of achieving this goal. This distinction is crucial. It's not about the exact tasks but the volume of work and our ability to handle it efficiently within the given timeframe. This approach shifts our focus from the minutiae of individual tasks to the broader capability of our team and processes.
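To make the mechanics concrete, below is a minimal Python sketch of this style of forecast. It is not ActionableAgile's implementation: the daily throughput history, the start date and the trial count are all illustrative assumptions. Each trial repeatedly samples a day's throughput from the history until the backlog is exhausted, and the 85th percentile of the resulting finish dates is the kind of figure quoted above.

```python
# Minimal sketch of a "When will it be done?" Monte Carlo forecast.
# The throughput history below is invented for illustration; a real run
# would use your team's completed-items-per-day data from Jira.
import random
from datetime import date, timedelta

def forecast_when(backlog_size, start, throughput_history, trials=10_000):
    """Simulate many possible futures and return the sorted finish dates."""
    finish_dates = []
    for _ in range(trials):
        remaining, day = backlog_size, start
        while remaining > 0:
            # Sample one historical day's throughput and advance a day.
            remaining -= random.choice(throughput_history)
            day += timedelta(days=1)
        finish_dates.append(day)
    return sorted(finish_dates)

history = [0, 1, 1, 2, 0, 3, 1, 2, 4, 0, 2, 1]          # items finished per day
results = forecast_when(100, date(2024, 2, 1), history)  # 100-item backlog, start 1 Feb
p85 = results[int(0.85 * len(results)) - 1]              # 85th percentile finish date
print(f"85% of simulated futures finished all 100 items by {p85}")
```

The output is a distribution over dates rather than a single promise; sliding further along the sorted list of finish dates is exactly what makes later dates more certain.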
How Many?
Below is a screenshot of the ActionableAgile plugin "How Many" module. You would use this to forecast how many items you can complete by a specific date within a probable range of certainty. This is an excellent tool if you have a drop-dead date for delivery and want to determine how many units can be completed in that time. A minimal sketch of this calculation follows the list below.
Data Set- Used to select a team's data for analysis.
Charts- Type of chart used. In this case, "How Many".
Histogram- A picture of the various item counts completed within the chosen timeframe and how often they occurred.
Control for all Charts- Where you control the information needed for running the Monte Carlo simulation.
Calendar- The Histogram presented as a calendar, showing how many items can be completed by a specific date.
Legend- User-selectable percentile.
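As flagged above, here is a minimal sketch of the "How Many" calculation, again using an invented throughput history rather than the plugin's own model. The question is inverted: the number of days is fixed, each trial reports how many items were finished in that window, and the figure you can quote with 85% confidence is the count reached or exceeded in 85% of trials, i.e. the lower tail of the distribution.

```python
# Minimal sketch of a "How many by this date?" Monte Carlo forecast.
# The history and the 30-day window are illustrative assumptions.
import random

def forecast_how_many(days_available, throughput_history, trials=10_000):
    """Return the sorted totals of items completed across simulated futures."""
    totals = []
    for _ in range(trials):
        totals.append(sum(random.choice(throughput_history)
                          for _ in range(days_available)))
    return sorted(totals)

history = [0, 1, 1, 2, 0, 3, 1, 2, 4, 0, 2, 1]
totals = forecast_how_many(days_available=30, throughput_history=history)

# For "how many", confidence comes from the LOWER tail: 85% of trials
# completed at least this many items within the 30 days.
at_least = totals[int(0.15 * len(totals))]
print(f"85% confident of completing at least {at_least} items in 30 days")
```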
Advanced features:
I filtered out a few items in the above examples for easier reading. Below is the out-of-the-box module with filters off.
Daily Throughput- A date-range control for the throughput data, useful if some of your data was generated under conditions unlike those in which your future work will be completed (for example, you now have a different team size, set of organisational constraints, balance of work, etc.).
Selected Throughput- What you choose in the throughput date control is reflected in the throughput basis. This is where you can see the data used for the histogram and calendar. The throughput basis and throughput date control show a daily throughput (the number of items finished on a particular calendar day).
Scale Throughput- Play with this field to see how an improvement in your throughput will impact your forecasts. You can use decimal points here. So, if you think getting help will improve your throughput by 10%, you can update this to 1.1 and see how that changes your forecast. Conversely, if half your team is away on holiday for the duration sampled, you can scale by 0.5.
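To show how such a scaling factor can be applied, here is a small variation of the earlier "When" sketch; the scale parameter name is my own, not the plugin's, and the history remains invented. Each sampled daily throughput is simply multiplied by the factor before being subtracted from the backlog.

```python
# Sketch of throughput scaling: 1.1 models a hoped-for 10% improvement,
# 0.5 models half the team being away for the sampled period.
import random
from datetime import date, timedelta

def forecast_when_scaled(backlog_size, start, throughput_history,
                         scale=1.0, trials=10_000):
    finish_dates = []
    for _ in range(trials):
        remaining, day = backlog_size, start
        while remaining > 0:
            # Scale each sampled day's throughput before applying it.
            remaining -= random.choice(throughput_history) * scale
            day += timedelta(days=1)
        finish_dates.append(day)
    return sorted(finish_dates)

history = [0, 1, 1, 2, 0, 3, 1, 2, 4, 0, 2, 1]
for scale in (1.0, 1.1, 0.5):
    results = forecast_when_scaled(100, date(2024, 2, 1), history, scale)
    p85 = results[int(0.85 * len(results)) - 1]
    print(f"scale {scale}: 85th percentile finish date is {p85}")
```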
How will this impact my team?
The most significant impact your development teams will see is less work for them. All they need to do is keep Jira up-to-date. Story point planning for velocity is not mandatory in estimating completion dates. This approach shifts the responsibility for forecasting completion date ranges squarely onto the shoulders of the production team.
In this framework, production teams are empowered to be fully accountable for predicting when a project or set of tasks will be completed. This removes the need for individual contributors from various teams to estimate the time required to complete each task when evaluating a feature delivery date.
Early engagement in User Story Mapping is encouraged. This process should ideally take place at the outset to determine the initial user story count. By doing so, teams can understand the project's scope from the beginning, laying a solid foundation for more accurate forecasting and planning. This early-stage mapping is critical in aligning expectations and setting the stage for successful project execution.
Hold on, are we not still estimating the number of user stories?
Yes! While it's true that these estimates of user story counts are subject to the same biases inherent in any form of estimation, the methodology we employ here carries a distinct advantage. Estimating user story counts hinges on a singular number, a simplicity that allows for rapid calculations in MCS. This streamlined process is efficient and minimises the cognitive load and time required for estimation.
The beauty of this approach lies in its fluidity and adaptability. Our simulations are updated weekly, accommodating any additions or removals of stories. This flexibility ensures that our forecasts remain relevant and responsive to the ever-evolving landscape of the project, all without placing additional burden on the team.
As with any forecast, the precision and accuracy of our predictions will naturally improve as the project progresses. As more data becomes available and as we traverse further into the project's timeline, our understanding deepens, sharpening the accuracy of our forecasts. This evolving clarity is a testament to the effectiveness of our approach, blending agility with strategic foresight.
But what about user stories of differing size or complexity?
Indeed, the question of user stories of differing sizes or complexities is pertinent in the context of MCS.
MCS excels in creating probabilistic forecasts, drawing upon the variability observed in historical data. The focus here is not on the specific complexity of individual work items. Instead, the emphasis is on understanding the overall throughput pattern of the team or process over time.
This methodology assumes that historical data, encompassing the completion times of a diverse array of user stories with varying complexities, offers a reliable basis for predicting future outcomes. By leveraging this data, MCS provides a spectrum of probable future outcomes, acknowledging software development's inherent variability and unpredictability.
While the complexity of individual user stories does play a role, it's indirectly reflected within the historical data. However, it's not explicitly considered or dissected in the simulation model. This approach allows for a broader, more holistic view of project progression, steering clear of the pitfalls of trying to predict exact completion dates for specific items.
In essence, MCS offers a pragmatic and realistic approach to forecasting in software development, one that accommodates the natural fluctuations and uncertainties of the process while providing a range of probable outcomes to aid in planning and decision-making.
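To illustrate why item size never appears explicitly, here is a small sketch of how a daily throughput history could be derived from completion records; the issue keys and dates are invented. A one-line copy change and a week-long refactor each contribute exactly one to the count for the day they finished, so their differing sizes show up only indirectly, in the shape of the history that the simulation samples.

```python
# Sketch: turning completion records into the daily throughput history that
# a Monte Carlo simulation consumes. Item size is never recorded anywhere.
from collections import Counter
from datetime import date, timedelta

# Hypothetical completion records: (issue key, date moved to "Done").
completions = [
    ("PROJ-101", date(2024, 1, 8)),   # tiny copy change
    ("PROJ-102", date(2024, 1, 8)),   # week-long refactor, same done date
    ("PROJ-103", date(2024, 1, 9)),
    ("PROJ-104", date(2024, 1, 11)),
]

done_per_day = Counter(done for _, done in completions)
first, last = min(done_per_day), max(done_per_day)
span_days = (last - first).days + 1

# Daily throughput history, zero days included, ready to be sampled.
history = [done_per_day.get(first + timedelta(days=d), 0) for d in range(span_days)]
print(history)  # -> [2, 1, 0, 1]
```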
Closing Thoughts:
Employing Monte Carlo simulations, especially when informed by Jira data on user story completion, is a significant stride forward in augmenting business operations. This technique enables producers to make more accurate projections, leading to a better alignment of team capacity with the project's demands, thus ensuring an efficient allocation of resources. Moreover, relying on empirical data to predict project outcomes cultivates a more profound sense of trust in the studio's commitments. Stakeholders can visibly correlate past performance with future projections, bolstering their confidence in the studio's capabilities.
In addition to enhancing stakeholder trust, this approach also empowers producers to make more nuanced decisions. They can effectively prioritise user stories and adjust the project scope based on the likelihood of completion, thereby honing their decision-making prowess. Furthermore, studios can swiftly respond to market changes and player expectations by anticipating project timelines with greater precision. This agility offers a decisive competitive advantage in a fast-paced industry.
Lastly, the boon of accurate forecasting extends to financial aspects as well. It aids in curtailing the tendency for costly project overruns, optimising operational expenses. This improves the studio's financial health and ensures a more sustainable and profitable operation. By integrating Monte Carlo simulations, informed by robust data analytics, studios can radically transform their operational efficacy and strategic foresight.