Generally speaking, it’s best to approach the portfolio/program process issues first; doing so may alleviate project-specific pain points as a result. Level-set the list of process issues using Process Risk Scores, then develop testable hypotheses. Hypotheses should be informed by observations, but the key requirement is that they are testable. Trial and error is a vital element of troubleshooting, but repeatedly implementing process changes with the team, only to watch them consistently fail in practice, will wear down both the team and the project sponsor. Rather than playing the trial and error game at the level of process implementation, start by pressure-testing the early hypotheses. If a hypothesis is proven wrong, you have not failed; you have succeeded in eliminating an incorrect assumption. If there is no data to support a hypothesis, dig deeper or develop alternatives. You’re not ready to implement a change if there isn’t a data-based justification to do so.
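To make the level-setting step concrete, here is a minimal sketch of prioritising candidate process issues by Process Risk Score before pressure-testing. The `ProcessIssue` structure, the scoring values, and the example issues are all hypothetical illustrations, not part of any particular scoring framework:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessIssue:
    """A candidate process issue and the testable hypothesis attached to it."""
    name: str
    risk_score: float          # e.g. likelihood x impact, per your own scoring model
    hypothesis: str            # must be falsifiable against available data
    data_sources: list[str] = field(default_factory=list)

def prioritise(issues: list[ProcessIssue]) -> list[ProcessIssue]:
    """Order issues by Process Risk Score, highest first, so the riskiest
    hypotheses are pressure-tested before any process change is rolled out."""
    return sorted(issues, key=lambda issue: issue.risk_score, reverse=True)

# Illustrative backlog only -- names, scores, and hypotheses are made up.
backlog = [
    ProcessIssue("Handoff delays", 5.4,
                 "Cross-team handoffs add idle time at every milestone",
                 ["timesheets"]),
    ProcessIssue("Sponsor pivots", 8.1,
                 "Sponsor-driven scope changes account for most budget variance",
                 ["change log", "budget actuals"]),
]

for issue in prioritise(backlog):
    print(f"{issue.risk_score:>4}  {issue.name}: {issue.hypothesis}")
```

The point of the `data_sources` field is the discipline it enforces: if you cannot name where the validating data would come from, the hypothesis isn’t yet testable.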
Many organisations collect project data that allows hypotheses to be evaluated quantitatively. Watch out for your own bias: don’t fall in love with a hypothesis, or you may ignore data that refutes your initial idea, or twist data to support it. Many innovations have been developed by proving that the first dozen hypotheses were incorrect. Although I love to talk up collaboration, I find the hypothesis pressure-testing phase works best in a bit more isolation, with heads-down time to evaluate data. Leverage internal teams for supplementary data to further validate, but be mindful that running the team through the full gauntlet of hypothesis pressure testing may cause fatigue. Moreover, draining team bandwidth on hypothetical process investigations may create issues for ongoing projects. Find the balance of bouncing a high-level idea off a team member without fully engaging them in evaluating the data.
A Budgetary Case Study in Hypothesis Development
Let’s think about a case study in budgetary efficiency. Sure, projects are going over budget, but why is that happening? Individual team inefficiency? Difficulties between teams? Project sponsor pivots? For the initial hypothesis, rely on early communication with key stakeholders along with Process Risk Scores. If every single person on the team has indicated that project sponsor pivots are driving budgetary issues, that’s likely a good starting point to investigate. Keep in mind, the point here is to understand the issue rather than to assign blame. More often than not, until the problem has been defined, the stakeholder driving an adverse issue may not even realise it. Stakeholders, whether external or internal, are not deliberately sabotaging their own project. Be cognizant of Hanlon’s Razor: never attribute to malice that which is adequately explained by other causes.
Use the early stakeholder conversations to guide a hypothesis. The next step is determining whether data exists that will allow for validation. Some organisations log how a project’s finances evolve over its lifetime; others focus purely on the end-state finances. If project sponsor-driven issues are suspected, create an issue log to facilitate trend analysis and determine whether client issues do indeed drive budget inefficiencies. If you don’t have the appropriate data to test a hypothesis, don’t try to force a square peg into a round hole. Take the time to generate supplementary data, when necessary, to understand how project decisions correlate with budgetary change across a project and portfolio.
In the next article, I’ll be continuing to discuss this budgetary case study with a focus on how to leverage data to validate hypotheses. Stay tuned next week!