Project data analytics impact (and promise): hype or reality?

This is the question that the panel set out to answer at the most recent meeting of the Project Data Analytics Meetup, hosted as usual by Martin Paver.

I’d like to share with you a few key things that resonated with me during the evening…

Our projects are not as unique as we’d like to think

Most definitions of a project make reference to the idea that a project is unique (1; 2). This idea can be limiting: if every project is truly unique, then nothing learned on one project can transfer to any other.

Pragmatic PMO perspective: It can sometimes look as though project managers have a creed that they repeat to themselves:

“No-one has ever built one of these,
or managed as much complexity as this,
or worked under such tight constraints.
So nothing that has ever happened
in any project before
could possibly be applicable
to the one we are running now.”

Many Project Managers,
Not learning from experience, Now

But of course that is unrealistic.

If you scratch beneath the veneer of uniqueness, all projects deliver, launch or move stuff.

No matter how unique we think our project is…

  • Projects have delivered stuff like ours before.
  • Stuff has been delivered in industries like ours before.
  • Stuff has been delivered against constraints like the ones we are facing before.

But maybe not in this combination, and maybe we haven’t done it ourselves before.

There are more common themes that unite projects than there are differences that separate them. Viewed through this lens, there is plenty of scope to take knowledge learned on one project and apply it on another.

We should share data to enable us to make better decisions

…or at least more consistent decisions.

This does not have to mean sharing the things that are essential to our competitive edge.

We can start with the basics: things that help to improve safety or reduce the harm we do to the environment. Why wouldn’t we want to do that?

Pragmatic PMO perspective: We should start doing that. Industry sectors that work with significant hazards (e.g. nuclear energy, aviation, rail) have well-developed reporting systems that enable them to learn from their mistakes, and a commitment to improvement that transcends commercial and competitive considerations.

In recent years, the health care industry has adopted practices similar to the aviation industry and is showing improvement as a result (3; 4).

In low-hazard industries, commercial considerations are likely to override the need to share, creating a barrier to learning. That needs to change. Maybe environmental considerations will provide the catalyst.

We need to be brave enough to share when we get it wrong

One of the panel members posed the question – what difference would it make to the way that future projects are run if we published all the data at the end of our projects (so that others could learn from it)?

We would no longer be able to rely on taking the single biggest thing we have learned from a recent project, keeping it to ourselves and repeating it (because others would have learned it too).

Instead we would have to make incremental improvements with each project that we run to maintain a competitive edge – but the whole project community would benefit as a result.

Another panel member responded that in some industries where this has already happened, the data is sometimes ‘sanitised’ before being published, and the most useful bits are lost in the process.

Pragmatic PMO perspective: At least as interesting as that question is the one that formed in my mind – what difference would it make to the decisions being made on projects right now if we knew that all the data would be published at the end (so that others could judge the ethics of our decisions, and hold us to account with the benefit of hindsight)? We would need to be much more mindful of how history would judge our actions, and would be much less tempted to take that dodgy short-cut.

Benefits will come from automating the simple stuff

Pooling data so that it can only be seen by algorithms reduces reputational risk and makes organisations more likely to share their data. Improvements will come from automating simple, transactional decisions to reduce delivery risk.

Pragmatic PMO perspective: But this introduces a new risk – if we automate decisions using poorly designed algorithms, written by humans and codifying all their human prejudices and biases, won’t we just continue to make bad decisions, only faster and more efficiently?

Lessons learned don’t work because human brains can’t hold enough information

One panel member asserted that lessons learned don’t work because they are based on people trying to learn everything about everything, and the human brain can’t hold enough to know everything about everything. Even if we record the knowledge in spreadsheets and databases, the human brain can’t retain enough information to know that line 3456 in the database contains a lesson that may be useful.

Instead, we need to get computers to hold all the knowledge (because they are better at that than we are), and flag up to us that line 3456 contains some information that may be useful to us in our project, so that we become sensitised to it as a risk. We need to stop “learning lessons” and start “leveraging experience”.

Pragmatic PMO perspective: Does that really mean that lessons learned don’t work? It is unrealistic to suggest that any one project manager should try to absorb all the lessons learned on all the projects.

The approach suggested sounds like we should have one really big spreadsheet / database of all the data / lessons from all the projects, and have a computer monitoring it for things that will be useful to us, so that it can tell us that line 945,607,612 captures something that we should consider as a risk on our current project. That is of course a useful development.
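To make that idea concrete, here is a minimal sketch in Python of how a computer might surface the rows of such a lessons database that are relevant to the project we are running now, by matching classification tags. The store, the tag vocabulary and the example lessons are all invented for illustration; a real implementation would sit on a proper database with a much richer classification scheme.

```python
# Hypothetical central store of lessons, each classified with simple tags.
lessons = [
    {"id": 3456, "tags": {"rail", "fixed-price", "ground-conditions"},
     "lesson": "Survey data was older than assumed; allow time for a re-survey."},
    {"id": 945607612, "tags": {"software", "fixed-price", "vendor-dependency"},
     "lesson": "Single-supplier licensing delayed integration testing."},
]

def relevant_lessons(project_tags, store, min_overlap=2):
    """Return (id, shared tags, lesson) for rows sharing enough tags with the project."""
    matches = []
    for row in store:
        overlap = project_tags & row["tags"]   # tags the lesson shares with our project
        if len(overlap) >= min_overlap:
            matches.append((row["id"], sorted(overlap), row["lesson"]))
    return matches

# Hypothetical profile of the project we are running now.
current_project = {"software", "fixed-price", "vendor-dependency"}

for lesson_id, shared, text in relevant_lessons(current_project, lessons):
    print(f"Line {lesson_id} may be relevant (shared tags: {shared}): {text}")
```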

What would be really useful, however, is to have the computer spot patterns that would be hard for humans to detect – for example, if a project usually submits status reports on time and starts to submit them late, history may show that this is a sign the project is heading for trouble. It sounds as though what we need is a really big pool of data, with good classification, and a system to monitor it and spot patterns. Computers can certainly help with that.
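As a rough illustration of that kind of pattern-spotting, here is a small Python sketch that flags a project whose status reports, historically on time, start arriving progressively later. The example delays, the window size and the threshold are invented; a real system would learn them from the pooled historical data.

```python
from statistics import mean, pstdev

def late_report_warning(delays_days, recent_window=3, threshold=2.0):
    """Warn if the recent average delay sits well above the historical norm.

    delays_days is a chronological list of how many days late each status
    report was (0 = on time). A simple z-score-style test compares the last
    recent_window reports against the earlier history.
    """
    history, recent = delays_days[:-recent_window], delays_days[-recent_window:]
    if len(history) < 2:
        return False                      # not enough history to judge
    baseline = mean(history)
    spread = pstdev(history) or 0.5       # avoid dividing by zero when history is flat
    return (mean(recent) - baseline) / spread > threshold

# Mostly on time for months, then slipping: 2, 4 and 5 days late.
print(late_report_warning([0, 0, 1, 0, 0, 2, 4, 5]))  # True -> worth investigating
```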

In summary…

The evening provoked plenty of discussion and thinking, and did not (of course) answer the original question. But here’s the Pragmatic PMO perspective:

  • There is little doubt that project data analytics can help to improve how projects are delivered, through the sheer power of computers, machine learning and so on to process large volumes of data and to spot patterns that humans wouldn’t even think to look for.
  • We need to be careful to make sure that whatever we code into the project data analytics algorithms doesn’t merely reproduce human prejudices and biases. Otherwise we will end up with those prejudices and biases ever more deeply entrenched in ever faster and more ruthlessly efficient processes.
  • All of this comes from the concept that knowledge is a thing that can be captured and shared. Not all knowledge works like this. Some of it only emerges when people interact (in both social and working situations), and some only comes from practical experience (5). These other types of knowledge will be much harder for computers to deal with.
  • Project data analytics looks set to become more prevalent, and its usefulness will doubtless improve with time and development, so it is something that should be on the radar of every project management professional. And I can’t think of a better place to start than the London Project Data Analytics Meetup – see you there?!
  • Some of these ideas are explored further in my book Learning Lessons from Projects, available from Amazon.

Further reading

1. Association for Project Management. Body of Knowledge / Project Management. Association for Project Management. [Online] [Cited: 23 April 2018.] https://www.apm.org.uk/body-of-knowledge/context/governance/project-management/.

2. Project Management Institute. What is Project Management? PMI.org. [Online] [Cited: 23 April 2018.] https://www.pmi.org/about/learn-about-pmi/what-is-project-management.

3. Duffield, Stephen Mark. An advanced systemic lesson learned knowledge model for project organisations. [Online] 2017. [Cited: 01 October 2018.] https://eprints.usq.edu.au/32822/.

4. Syed, Matthew. Black Box Thinking: Marginal Gains and the Secrets of High Performance. John Murray, 2016. ISBN 978-1473613805.

5. Payne, Judy, Roden, Eileen J and Simister, Steve. Managing Knowledge in Project Environments (Fundamentals of Project Management). Abingdon, England, and New York: Routledge, 2019. ISBN 978-1472480279.

Author: Ken Burrell

Ken Burrell is a Programme and Portfolio Office (PMO) Professional, who through his company Pragmatic PMO makes targeted improvements to PMO practices to add value to Projects, Programmes and Portfolios. He provides senior management with the analysis they need to make decisions, and gives project and programme managers the support they need to deliver solutions.
