SMART PRACTICES

How AI transforms project management

Brad Hipps

9-20-2023

Already, AI is the most transformative technology since the iPhone. And that statement may soon be a wild understatement, given where we seem to be headed. The looming possibility (threat?) of general intelligence, together with the wizardly powers of ChatGPT, DALL-E and the like, has reinvigorated all the old hopes and fears about sentient machines.

While many fields have already seen disruption from AI (see education and journalism), the AI dice aren’t yet settled in the field of project management. However, industry watchers predict that AI will transform it soon. Gartner thinks that 80 percent of project management tasks will be automated with AI by 2030.

The good news? Where project management is concerned, AI is shaping up as an enabling technology, not a replacing one. What it will do—and is doing already—is allow project managers and their teams to make better, faster decisions, with far less legwork and guesswork.

Buzzword killer: what kind of “artificial intelligence” do we mean?

“AI”, like “cloud” before it, has fast become one of those tech buzzwords everyone uses to cover a host of different capabilities and meanings. At a wiki-summary level, we can say AI is the simulation of human intelligence by machines via computer software and algorithms. This includes things like pattern recognition, deductive reasoning and the perception of sense data.

The other relevant term here is machine learning, technically considered a subcategory of AI, which describes algorithms trained on data to build smart models that can do all kinds of cool, complex stuff. In an attempt to minimize clumsy writing and irritating pedantry, I’m going to use “AI” and “machine learning” interchangeably.

AI has most charismatically been deployed in the form of Large Language Models (LLMs) like ChatGPT. Trained on terabytes of text data, ChatGPT has learned both to understand the meaning of the textual inputs humans provide and to assign probabilistic values to the text responses it gives back.

This process of understanding patterns from vast datasets is characteristic of AI. Among other things, this enables (to borrow a phrase from Tomasz Tunguz) the “fracking of information” from huge bodies of text. An LLM can summarize thousand-page documents in a matter of minutes, complete with citations.

Of course textual communication and summary matter to project management. But the core questions project managers seek to own and answer—When will this project finish? Do we have enough people? How much will it cost? etc.—aren’t answerable by combing textual data sets. This is why, though the LLM flavor of AI is useful to project management, it hasn’t (yet) risen to the level of “transformative.”

There is another significant body of data, however, that’s highly relevant to project management. That dataset is the work activity of the organization, as captured in tools like Jira and GitHub. This “work activity” includes things like how work is moving (or failing to move), where workloads are growing or shrinking, how long work takes to finish, and how all of this compares to the historical activity of past projects.

This idea of applying machine intelligence to an organization’s work activity isn’t all that different from the way professional sporting leagues have used advanced analytics to unlock team performance. The “activity” in the sporting case includes things like how successful batters are against a given pitch type, or the scoring efficiency of the team when facing different defensive sets.

The opportunity for AI in the realm of project management, then, is to see what patterns can be observed and what predictions can be made from the way teams work, and what drives improvement over time.

How does AI change project management?

Let’s first consider the kinds of questions project managers are expected to answer:

  • How long will this project take to finish?

  • Are we on track?

  • Do we need more people?

  • What are the risks?

  • Who’s underwater? Who has capacity?

To answer these questions, the project manager traditionally relies on sentiment: their own, and that of their teams. The job, by and large, is to aggregate and analyze this sentiment data—via meetings, reports, chat threads, etc.—and present a unified whole of project state.

In plainer terms, project managers wrangle informed opinions to answer the questions above. Historically, there’s been no real alternative. Given this reliance on manually collected opinion, it’s small wonder that project failure rates run so high. (Sixty-six percent, according to the Standish Group’s CHAOS report on project management.)

Fortunately, this bucket-brigading of opinions and educated guesswork is already being upended by machine intelligence. The move is toward hard data, instead of sentiment and opinion alone.

This change is best understood through examples. Let's take a look at how a few of the primary functions of project management are (or soon will be) transformed by AI.

Defining project scope

When we as project managers and other leaders decide what’s involved in a project, we’re making judgments about what is achievable within the constraints of time and money.

This is a matter of educated guesses. Often we have one or two known variables: the number of people available to do the work, and/or when the work is due. The rest relies on the judgment of experienced people to decide how much work is involved, how to break it down into its logical parts, and to guess how long the parts are likely to take to finish.

From there, things get murkier still. If our collective best guesses tell us that we’re likely to break the bounds of what our time/people/money can achieve, what should we change? Add more people? Reduce scope? Both? How much, and where?

But this kind of prediction and what-if analysis is exactly where machines excel.

How do you throw machine learning at this particular use case? Via Monte Carlo simulation. Monte Carlo is a statistical technique used to model complex systems across a wide range of fields—traffic control, stock predictions, and nuclear reactors, to name a few. The idea is to use the law of large numbers to predict how a complex system might behave.

Monte Carlo runs scenarios using our best empirical data and knowledge of the "rules of physics" for the system at hand. By collecting results from enough randomized scenarios, we can get an idea not only of what is most likely to occur, but also visualize the range of possible outcomes.

This makes Monte Carlo tailor-made for experimenting with the size of scope and/or the number of people available, to see how these inputs change the forecast time to complete. In this case, the “empirical data” fed into the model includes the actual historical time-to-complete for previous work, as automatically derived from the system where work is managed (e.g. Jira, Socratic, etc.). It’s the kind of capability that’s best understood by way of example, which you can try here.
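To make this concrete, here’s a minimal sketch of such a simulation in Python. It assumes we already have a list of historical days-to-complete per work item, exported from wherever the work is managed; the numbers, item counts and function names are illustrative, not any particular product’s API.

```python
import random

# Historical days-to-complete for past work items, as pulled from the
# work-management system (illustrative numbers, not real data).
historical_days = [2, 3, 3, 5, 1, 8, 4, 2, 6, 3, 4, 7]

def simulate_project(num_items, team_size, trials=10_000):
    """Monte Carlo forecast: each trial draws a completion time for every
    planned item from the historical distribution, then spreads the total
    effort across the team (a simplifying parallelism assumption)."""
    durations = []
    for _ in range(trials):
        sampled = [random.choice(historical_days) for _ in range(num_items)]
        durations.append(sum(sampled) / team_size)
    return sorted(durations)

results = simulate_project(num_items=40, team_size=4)
p50 = results[len(results) // 2]
p85 = results[int(len(results) * 0.85)]
print(f"Median forecast: {p50:.0f} days; 85% of trials finish within {p85:.0f} days")
```

Re-running the simulation with different values for num_items or team_size is exactly the what-if exercise described above: you can see directly how cutting scope or adding a person shifts the whole range of outcomes.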

Resource and capacity planning

A big part of project management is understanding how many people—and who—may be available to take on new work, or to help with existing projects needing more firepower.

In a small organization, this is a relatively trivial job. You know, or can quickly assess, whether anyone has spare cycles. As organizations grow and/or become geographically spread out, however, the picture gets muddy fast.

It’s simple enough with existing, non-AI project management systems to see who has what work on their plates. The hard part is figuring out how long that work will take to finish, and when a person might start to have free cycles.

But newer, AI-driven solutions like Socratic can derive these answers automatically. This is because the system observes, by person and project, [the actual historical average time-to-complete work](https://socraticworks.com/methods/Stop-estimating.-Start-shipping.).

Going forward, this kind of data further enriches scenario planning, especially as conducted via Monte Carlo simulation, as described above. Now, for instance, the “empirical data” used in the simulation can incorporate how many people are due to become available, who they are, and when. And because the data is personalized, the model will also account for variables like new hires, whose ramp times may naturally be longer than those of more experienced team members.
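As a rough illustration of the underlying idea (not Socratic’s actual implementation), the availability projection boils down to combining each person’s open workload with their observed historical pace:

```python
from datetime import date, timedelta

# Illustrative inputs: open items per person, and each person's historical
# average days-to-complete as derived from past work activity.
open_items = {"Ana": 6, "Raj": 3, "Mei": 9}
avg_days_per_item = {"Ana": 2.5, "Raj": 4.0, "Mei": 1.5}

def projected_free_date(person, today=None):
    """Estimate when a person's current workload clears, assuming their
    historical pace holds (a simplifying assumption)."""
    today = today or date.today()
    remaining_days = open_items[person] * avg_days_per_item[person]
    return today + timedelta(days=remaining_days)

for person in open_items:
    print(person, "frees up around", projected_free_date(person))
```

A real system would refine this with per-project pacing, upcoming time off, and the ramp-time effects mentioned above, but the shape of the calculation is the same.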

Forecasting progress and end dates

Once projects become active, there are really only two questions stakeholders care about: “How are we progressing, and when will it be done?”

Progress alone is reasonably straightforward to show, a matter of looking at completed work (usually captured as tickets in a work system) against open work for the project in question. But this simple math of X% complete, while useful, is missing some important context.

Is our progress speeding up, or slowing down? By how much?

In other words, what we want to understand in addition to progress is our momentum. If a long-running project shows as 90% complete, we can start to feel good. But if we see that our momentum—essentially, the change in work delivered versus open work—is slowing by a significant amount, we feel… less good. At that late stage of a project, we should be completing more work than we’re adding. Something is off.

What does this mean for our ideal smart work system? It would know, with every change in work state (tickets completed, new tickets added, tickets reopened, etc.), how the new state compares to the prior state, and what this means for our rate of work delivered. Granted, the calculation involved here hardly requires AI. But it’s in keeping with the thematic spirit—how the right, smart data lets project managers jump from opinion gathering to problem solving.
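A toy version of that bookkeeping, assuming simple weekly snapshots of completed and open ticket counts (the numbers are invented):

```python
# Weekly snapshots of (completed, open) ticket counts for one project
# (illustrative numbers).
snapshots = [(40, 60), (52, 55), (61, 54), (66, 58)]

for (prev_done, prev_open), (done, open_) in zip(snapshots, snapshots[1:]):
    delivered = done - prev_done              # tickets closed this period
    added = open_ - prev_open + delivered     # tickets opened this period
    pct_complete = done / (done + open_) * 100
    print(f"{pct_complete:.0f}% complete | closed {delivered}, opened {added}, "
          f"net momentum {delivered - added:+d}")
```

In this invented example, percent complete keeps inching upward even as net momentum turns negative, which is exactly the early warning a raw completion percentage hides.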

Forecasting end dates, on the other hand, very much needs the power of AI. Where earlier we looked at how Monte Carlo simulation changes the game for planning new work, here we’re concerned with predicting when active work will finish.

For active work, we have richer, finer-grained signals to feed into the model. A short list of those signals might include:

  • The rate at which work tickets fall idle;

  • The number and relationship of blocked tickets;

  • The amount of rework involved before a ticket is completed;

  • The application of people with specialized skills.

These are examples of the kinds of granular or unforeseen changes that impact project finish dates. Where a human couldn’t hope to know or account for them all, a machine can easily observe and absorb them into its models. Accurate project forecasts, long the Holy Grail of project managers and business stakeholders everywhere, are an imminent reality.
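For a sense of how signals like these might feed a forecast, here is a sketch (an illustration only, not how any particular product works) that extends the earlier Monte Carlo loop: the observed share of blocked tickets and the historical rework rate stretch the sampled completion times. The multipliers are assumptions chosen for the example.

```python
import random

# Historical days-to-complete for past work items (illustrative numbers).
historical_days = [2, 3, 3, 5, 1, 8, 4, 2, 6, 3, 4, 7]

def forecast_remaining(open_tickets, blocked_share, rework_rate, trials=10_000):
    """Monte Carlo forecast for active work. Observed signals (share of
    blocked tickets, historical rework rate) stretch the sampled durations.
    The 1.5x and 1.3x multipliers are illustrative assumptions, not tuned values."""
    totals = []
    for _ in range(trials):
        total = 0.0
        for _ in range(open_tickets):
            d = random.choice(historical_days)
            if random.random() < blocked_share:
                d *= 1.5          # blocked tickets tend to sit longer
            if random.random() < rework_rate:
                d *= 1.3          # some tickets bounce back for rework
            total += d
        totals.append(total)
    totals.sort()
    return totals[len(totals) // 2], totals[int(len(totals) * 0.85)]

p50, p85 = forecast_remaining(open_tickets=25, blocked_share=0.2, rework_rate=0.15)
print(f"Median remaining effort ~{p50:.0f} person-days; 85th percentile ~{p85:.0f}")
```

The point isn’t the particular multipliers, which are made up, but that a machine can keep dozens of such observed effects in the model at once, where a human can’t.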

Communicating with stakeholders

This one’s simple.

In a low-data, high-sentiment world, both project managers and stakeholders rely on a manual collection of opinions to gauge the likely outcomes of the project. Commonly, the feelings of the team are aggregated into a kind of sentiment summary: maybe color-coded along the lines of “green”, “yellow”, “red”.

Now, AI doesn’t and shouldn’t replace the perspective of managers or team members. Sentiment—informed, experienced opinion—matters. But if all a stakeholder has to go on is a color marker on a slide, the project manager should expect some… questions. Some scrutiny. What’s wanted are the hard data to answer the questions explored here. With data comes transparency, authority, and trust.