Budgetary Case Study
I’m sure many of us have heard the phrase “correlation does not equal causation”. However, causation can be incredibly difficult to pinpoint in the face of numerous variables, so when several independent categories of data point to the same conclusion, our confidence in the root cause of an issue grows. It’s rare to reach 100% confidence that one factor is directly driving an issue, but the closer we get, the better. Given the complexity of the interaction between people and process, you may find that multiple factors drive a project roadblock. Define the relationships between those factors and the project pain point to understand how to implement solutions. Let’s delve into how data is leveraged to draw correlations that support our initial hypothesis, or to pivot to a new hypothesis supported by the evidence.
Organisations with fewer organisational process assets will require project managers to collect their own data to determine trends, which lengthens the hypothesis pressure-testing period. Pain points driven by a combination of external and internal stakeholders may require a secondary document maintained manually by the project manager, such as an issue log, to surface correlations. The method of validating a hypothesis with data is just as important as the hypothesis itself; if you don’t have the right data, you’re not ready to test. Consider testing multiple hypotheses in parallel if alternative data is ready to go. What if the project sponsor isn’t creating any issues, and team members are unaware of internal pain points when work transitions between teams? Reference a timeline and compare it against internal financial data to identify whether budgetary inefficiencies correlate with internal project milestones. And now that I’m considering an alternative hypothesis, it has also revealed that plotting client project milestones on a timeline against finances could circumvent the immediate need for an issue log. Developing alternative hypotheses sparks new ideas, as it just did for me. Although I don’t recommend implementing multiple major process updates at once, pressure testing multiple hypotheses in parallel helps avoid pigeonholing your thought process.
As is frequently the case, sample size is incredibly important. Looking at a single project to determine trends will be misleading, as environmental factors differ from project to project. Therefore, when attempting to define broad correlations, define a singular correlation for each project, log those into a data set, and then determine whether a trend holds across projects. Since this process can be laborious, work smart rather than hard: once you’ve developed a technique for distinguishing internal from external stakeholder-driven budget issues, can it be automated, or at least refined for future evaluations? That ultimately depends on how the data is accessed and presented. Large sample sizes produce better conclusions, but if they are laboriously produced, they drain bandwidth. A streamlined approach will often create a willingness to build larger sample sizes and reach more accurate correlations.
As trends are evaluated, there may be outliers in the data. Don’t discard these outliers; they can be incredibly revealing. Instead, dig deeper: why do you frequently see a trend in one direction but occasionally in the opposite direction? The next key is considering alternative variables to manage the risk of confounding factors, which are unmeasured or unconsidered variables. Many issues are not the result of just two competing variables but are influenced by three or more. If the first round of correlation analysis focused exclusively on comparing internal vs external project milestones to budgetary drivers, adding another variable such as project type could reveal a more complex trend. Could certain types of projects be driven by internal efficiencies, whereas others are propelled by external pivots? Look to the data.
The choice of data analysis test is often determined by how the data has been collected and what types of conclusions we are looking to draw. In this instance, I have suggested creating data categories to understand correlations or trends between them, and a technique known as correlation analysis is used to indicate whether two categories in a data set are closely related. A plethora of closely related but distinct tests exists for multivariate analysis when there are multiple variables to understand.
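To demystify what correlation analysis actually computes, here is the standard Pearson coefficient written out from scratch: the covariance of the two series, normalised by both standard deviations. The two example series are invented for illustration.

```python
# A from-scratch sketch of Pearson correlation analysis, the technique
# named above. The example series below are invented.
import math

def pearson_r(xs: list, ys: list) -> float:
    """Covariance of x and y, normalised by both standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# e.g. milestone slips (days) vs. budget overruns (%)
r = pearson_r([1, 3, 5, 8, 12], [0.5, 1.5, 2.0, 4.0, 6.0])
print(f"r = {r:.3f}")  # values near +1 or -1 indicate a strong linear relationship
```

In practice you would rarely hand-roll this; a spreadsheet function or a statistics library gives the same number, which is the point: the test is simple once the categories are defined.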
If data is collected and organised consistently, use those parameters to determine how it should be analysed. With information at our fingertips, the value of differing types of analyses is a few search-engine clicks away, and an Excel formula will do the remaining legwork. Alternatively, if organisational process assets are available to facilitate data analysis, leverage those to work smart rather than hard.
If correlation analysis shows that a hypothesised driver is uncorrelated with a pain point, it may be time to revisit the data and develop a new hypothesis. Are there correlations elsewhere in the data that align with prior team observations? The collected data is not wasted if it reveals unsuspected correlations. If a strong correlation is identified that aligns with the initial hypothesis, follow that data to the next step: process development and implementation.
Although uncovering a correlation is a significant moment, challenges remain ahead. After all, you still need to develop a process solution that is achievable, implementable, and can be sustained in the long term. Tune in next week to hear my thoughts on process development.