
3 Keys to Effective Project Management

I did a Google search on the number of project management methodologies that exist currently, and I kept getting results like “9 methodologies made simple” and “15 methodologies you must know.” In the world of “methodologies galore” such as Waterfall, Agile, Six Sigma, Lean, Lean Startup, Kanban, etc., etc., with each body of knowledge or pundit preaching that they are the best and most effective, how might one “declutter” the jargon and really apply effective project management for their initiatives?

As insurtechs, we also need to take into consideration client methodologies and what is being used in their organizations. Here are three key principles that I’ve applied over my decade-plus of project management, whether I’ve implemented Scrum, Lean, etc.:

Keep It Simple: Don’t overcomplicate your project management process. Simplicity is the ultimate sophistication. I have seen slides from companies talking about their project management approach that make me go take a Tylenol. Project management is not about methodology, process or tooling (they are essential — don’t get me wrong). The focus is about people and communication.

Keep It Nimble: At Benekiva, how we manage internal projects and our road map is different than implementing a client engagement. We are nimble to adapt based on clients’ feedback. If they want status reports, they are getting them. If they want an Excel output, they get it.

Keep It Short: This relates to how we schedule iterations/sprints and work effort. Keeping them short allows us to respond to any change or “aha” moments effectively with minimal impacts. Keeping it short also eliminates procrastination, where things get pushed to the last minute.


Applying these three principles to your project management practice will allow your company to do enough project management while giving you more time to communicate with your stakeholders and add value where it needs to be added.

What Socrates Says on Customer Insight

Are you and your Customer Insight team too often frustrated that you’re not making a difference in your business? Do your internal customers ever criticize what they receive from your team, asking, “Where’s the insight?” Sometimes this is because of technical skills or barriers that need to be addressed, but very often it’s because of poor communication. Do you need to get a better brief?

What I mean is this: Marketers or other stakeholders within your business can come to Customer Insight and ask for a piece of data/analysis/research. If the analyst just gives them what they asked for (or a version of that based on their understanding of what they heard), it’s often a recipe for disappointment. Analysts can feel limited by work that isn’t creative or doesn’t use their technical skills. Your internal customers can be disappointed to receive something other than what they meant, something that doesn’t meet their real need.

This communication challenge is, of course, well-known in the field of project management. The classic tree-swing cartoon helps to illustrate the dilemma.

But there is more to a good brief than documenting requirements clearly. Have you also found that what your internal customers ask for is often not what they really need? That’s my experience, too. So, to help analysts improve their questioning skills in this area, I’ve been borrowing a technique from the world of leadership coaching.

Trained coaches will likely have come across Socratic questioning. It is a style of inquiry, aimed at helping the one being questioned to critique his own thinking, assumptions and viewpoint. Working with both experienced and junior analysts, I’ve found that the principles of Socratic questioning can help them in questioning what they are asked to provide, to get to the real need.

Here’s a very brief intro to this style of questioning, as proposed by the great Socrates himself:

Conceptual clarification questions: “What exactly does this mean?”; “Can you rephrase that, please?”; “Can you give me an example?”

Probing assumptions: “You seem to be assuming…?”; “Please explain why/how…?”; “How can you verify or disprove that assumption?”

Probing rationale, reasons and evidence: “Why is this happening?”; “Would it stand up in court?”; “How can I be sure?”

Questioning viewpoints and perspectives: “Another way of looking at this is…; does this seem reasonable?”; “What would… say about it?”

Probing implications and consequences: “Then what would happen?”; “Why is… important?”; “How does… fit with what we learned before?”

Given previous advice on being action-oriented throughout any customer insight work, I find it helps to add another line of questioning to this model. That is to explicitly ask what action is going to be taken as a result of this request. This is important, to avoid precious analyst time being taken up with questions that are just out of curiosity. You need to know what action is planned.

None of the above is intended to be used word for word, or imposed without intelligent interpretation, in the language and culture of your organization. However, applied sensibly, I’ve seen that it can help empower analysts to question more and to improve their skills in eliciting real business needs.

When the real need is understood and captured in a clear brief, then you stand a much better chance of getting real insight.

What have you found works? How do you get a better brief?

3 Main Mistakes in Change Management

In my last blog, my engineer self admitted that the root causes for why core systems replacement projects don’t hit the mark in the business case are more likely related to people, not the technology. I stated that the business only changes when individual contributors each do their jobs differently.

Now let’s take a more detailed look.

There are many models out there that provide a framework for understanding change. One that we use frequently at Wipfli is the Prosci model, which is focused on understanding change at the individual level. Boiling it down to its simplest form, this model says the change must progress for each individual from awareness to desire to knowledge to ability to reinforcement.

Understanding that, Mistake #1 to avoid is measuring the need for change management based on executives’ paths, not their people’s. The executives responsible for the program and ultimately for the change management strategy, approach and investment are by definition the leaders furthest down their own change paths. That is, they are, in all probability, way beyond the awareness and desire stages. (Hint, hint: That’s why this core systems project is underway). And, not uncommonly, because of where they are, they may not understand the need to make a significant investment in change management.

Once you embrace the need for change management, there is an array of tools and techniques at your disposal. These include communications, sponsorship, coaching, training and resistance management. Mistake #2 to avoid is loading everything into communications as a one-and-done approach. In fact, I would guess that when most of us hear the term change management, we immediately think of communication. That’s good because change starts with awareness. But did you know that it takes something like five to seven communications for a message to be truly heard and understood by all? Remember that perfect project kickoff email you sent last week that summarized everything perfectly? Yeah – maybe 20% of your audience remembers it today. So communication must be multiple messages using multiple channels coming from multiple stakeholders.

Multiple studies over the years have reaffirmed the significant correlation between a project’s success and change management’s impact and, more specifically, the importance of the project sponsor’s role in both. Succinctly, the earlier the project sponsor is engaged in the project and the earlier the project sponsor embraces change management, the better the chance for success.

Mistake #3 concerns the project sponsor and her change management role. Just because you have a smart and engaged leader as your sponsor, don’t assume she knows what’s supposed to be done every week in a transformational core systems project if she hasn’t played that role before. For example, does the project sponsor know to build a coalition among the key managers and supervisors whom the affected employees will most want to hear from? At the end of the day, the employee will turn to his immediate boss and not the project sponsor to really get the WIIFM (what’s-in-it-for-me).

You get the idea. As much as agile project management and delivery approaches and methodologies have been embraced, used and hardened over the past 10 years, we need to do the same for change management.

Walking in the Shoes of Our Customers

I have spent the bulk of my software career as a member of the sales camp. My comfort zone is nurturing big ideas and helping to motivate clients to embrace change. It is thrilling to earn the right to engage with clients through the decision-making process, help clients gain confidence that transformation is possible and support the first steps in execution. Pretty lofty, I know.

But something happened this past year…the tables turned, and I became a software-buying customer. The loftiness of strategic vision met the cold, hard pavement of execution. I found the descent both rapid and eye-opening.

First, a little context — my sales enablement team convinced me the time had come to implement a learning management system (LMS). An LMS was a necessary platform for our team’s and company’s growth ambitions. It would eliminate a ton of manual processing, freeing resources on the team. At the same time, it would help us focus learner and management attention on building skills that matter, a benefit to the larger sales organization. I agreed, and, in doing so, I stepped into the shoes of our customers. For sure, an LMS implementation is not the size, scale or complexity our Guidewire customers face replacing core systems. But, even at a smaller scale, the implementation has been a valuable education.

  • Success depends upon strong partnership between business and IT. There is just no way IT can run a project without involvement from the business, and the business needs strong project management partners and the technical subject matter expertise from IT. It’s just that simple.
  • If you don’t have the resources to dedicate to the project, don’t do it. It’s hard to find the time to focus on software implementation when there is a business to run. But if there isn’t someone on the business side getting up every day to advance the project, the project is at risk. Asking someone from the business to manage a software project as a part-time job is the myth of multitasking in action. Projects by their nature need focused attention.
  • Process matters. I can hear the words of Alex Naddaff, senior vice president, programs, at Guidewire (who led our professional services organization for the first decade of our company’s history), ringing in my ear: “Project success depended on small teams, empowered to make decisions, who can do so quickly.” He’s right. Without an agile process that promotes consistent communication and team transparency, the project will find rough going.

These aren’t new lessons. These are the same lessons we bring to the table every time we engage with Guidewire prospects and customers. We preach that success depends on:

  • Strong business and IT partnerships;
  • Focused dedication of small teams; and
  • Transparent processes.

The lesson for me is just how hard it is to stay true to these principles. It requires trade-offs, budget allocation and the prioritization of team members’ time. It means accepting that some things won’t get done.

I will share the good news: Because we are following these fundamentals, our project is green, and we are closing in on our deployment date. I’ve got nothing but thanks and praise for the team leading the charge (Sarah from IT and Wendy from enablement, you both rock). We’re not there yet – there are more weeks and months of tough decisions and trade-offs ahead. But we’re close, the goal line is in sight and the realization of benefits is just around the corner.

Even more than the deployment, the biggest win for me is that next time I get the chance to talk to customers and prospects about the perils of software implementation, I can engage with this first-hand experience and empathy for the process. I can say with complete sincerity that the work sucks, but that it’s worth it.


What Risk Reports Won’t Tell You

Usually, the first questions the project director asks are:

  1. “What are the top 10 risks by cost P80?”
  2. “What is the P80 of cost risk?”
  3. “How does the total compare with the cost contingency?”

These seem like fundamental, simple questions for a project director, but they actually display a complete failure to understand the nature of risk or risk over time.

In this short paper, I want to summarize what information monthly risk reports can provide that is useful to project managers.

1. Quantitative Risk Analysis

Monte Carlo simulation is the core of quantitative risk analysis (QRA) and is used to combine risk distribution assessments for probability and consequence.

Risk is historically defined as the product of probability and consequence (De Moivre 1711). But multiplying two distributions together is no casual mathematical exercise. On a mega-project, there can easily be a thousand-plus risks. The sum of all the products of the individual risks is a distribution for the total risk.

Risk has two components:

i. Probability of occurrence — the subjective belief that the risk will occur. This is a binary distribution, because it has two states (it happens or it doesn’t happen), and is called a Bernoulli distribution.

ii. A consequence measured in terms of cost, delay or performance deterioration. This is also a distribution. In project risk, three-point triangular or PERT distributions are commonly used.
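To make the combination concrete, here is a minimal Monte Carlo sketch of the QRA calculation described above: each risk is a Bernoulli occurrence draw multiplied by a triangular consequence draw, and the per-simulation products are summed into a total-risk distribution. The three-risk register is invented for illustration; a real mega-project register would hold hundreds or thousands of entries.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# Hypothetical risk register: each risk has a probability of occurrence
# and a three-point (min, most likely, max) cost consequence in $M.
risks = [
    {"p": 0.30, "low": 1.0, "mode": 2.5, "high": 6.0},
    {"p": 0.10, "low": 5.0, "mode": 8.0, "high": 20.0},
    {"p": 0.60, "low": 0.5, "mode": 1.0, "high": 2.0},
]

total = np.zeros(n_sims)
for r in risks:
    occurs = rng.random(n_sims) < r["p"]                            # Bernoulli draw
    cost = rng.triangular(r["low"], r["mode"], r["high"], n_sims)   # consequence draw
    total += occurs * cost                                          # product, summed over risks

print(f"Total risk P50: {np.percentile(total, 50):.2f}")
print(f"Total risk P80: {np.percentile(total, 80):.2f}")
```

The array `total` is the distribution for total risk; any percentile, histogram or S-curve in the monthly report is simply a summary of it.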

With the understanding that risk is composed of two probability distributions, one can see that describing risk magnitude in the “project management way,” by a single value (the P80 of cost) doesn’t make any sense at all.

The usual way to show a risk distribution for either an individual risk or for total risk is with a Pareto graph, which combines a probability density function (pdf) and a cumulative distribution function (cdf). These are also known as a histogram with an S-curve.

Figure 1. A Pareto Graph

2. What Are the Top 10 Risks?

It is common for the project director to request the top 10 risks in monthly risk reports for both cost risk and schedule (delay) risk. These are usually ranked in descending order of P80.

What is P80? It is the 80th percentile of the distribution — 80% of the data points lie to the left of it and 20% to the right.

The interpretation of this is that one can be 80% sure that the cost or delay will be at that value or less and, conversely, that one can be 20% sure that the cost/delay will be greater.

Some companies use the P90, which suggests they are more risk-averse. Some use the P75, which is the upper quartile. Some use the P68.2, a nod to the 68.2% of values that fall within one standard deviation of the mean in a normal distribution. And some companies use the P50, which is the same as tossing a coin.
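A quick numerical sketch of what any of these percentiles means in practice, using an invented triangular sample of cost outcomes: by construction, roughly 80% of the simulated outcomes fall at or below the P80.

```python
import numpy as np

rng = np.random.default_rng(7)
# Illustrative simulated cost outcomes, $M (invented for this example).
outcomes = rng.triangular(1.0, 3.0, 10.0, 50_000)

p80 = np.percentile(outcomes, 80)
share_below = (outcomes <= p80).mean()  # should be ~0.80 by definition
print(f"P80 = {p80:.2f}, share of outcomes <= P80: {share_below:.1%}")
```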

It is not possible to use Pareto graphs to identify the top risks. This is best done using any or all of the following graph types:

  1. Box and whisker graph
  2. Tornado diagram
  3. Density strip

All three methods work well in visually presenting the risks in order of magnitude, although the tornado chart is rather a “black box” method that may give different results from the other two graphs.

Figure 2 Box & Whisker Graph

Figure 3 Tornado Diagram

Figure 4 Impact Density Strips

It is important to understand that the P80 value does not tell one which is the biggest risk; the P80 is a single point on the pdf that simply means that one can be 80% sure that it will cost $X or less or that you can be 20% sure that it will cost $X or more!

Do you get the message there about uncertainty?

To truly explain this important point, I have plotted 10 risks, all with approximately the same P80 = 54.2, in the iso-contour graph below. Each of the risks has a different consequence and different probability.

Figure 5 Iso-Contour Chart of 10 Risks With P80=54.2

Using the box & whisker plot and impact density strip, it should be immediately apparent, even to the untrained eye, that the risks are very different in terms of uncertainty and consequence. The challenge is to determine which is biggest.

Figure 6 Density Strip of the 10 Risks



Figure 7 Box & Whisker Plot of the 10 Risks

We can see that risk 5 is actually quite certain, whereas risk 2 is very uncertain, and yet they both have the same P80. Here we need to understand how to deal with a risk and its certainty.

It should now be clear that ranking and prioritizing risks on the basis of P80 alone is neither correct nor particularly meaningful, as all evidence of the probability distribution and impact distribution is missing. The three graphical solutions – box plot, density strip and tornado diagram – make it easier for managers to prioritize the risks visually by relating directly to both consequence and uncertainty.
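The point that identical P80s can hide very different risks is easy to demonstrate numerically. The two risks below are invented and deliberately contrived so that their P80s nearly coincide, while their occurrence probabilities and consequence ranges differ sharply:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Risk A: very likely to occur, narrow consequence range (contrived example).
risk_a = (rng.random(n) < 0.95) * rng.triangular(50, 54, 58, n)
# Risk B: unlikely to occur, wide consequence range, tuned so that its
# P80 lands close to risk A's.
risk_b = (rng.random(n) < 0.25) * rng.triangular(10, 60, 212, n)

for name, r in [("A", risk_a), ("B", risk_b)]:
    print(f"Risk {name}: P80 = {np.percentile(r, 80):.1f}, std dev = {r.std():.1f}")
```

Both risks report a P80 of roughly 55, yet risk B’s standard deviation is several times risk A’s — exactly the distinction a single-point P80 ranking throws away and a box plot or density strip preserves.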

3. What Is the Significance of the Total P80 Cost?

Almost the very first number that appears in the monthly risk report will be the P80 total for all cost risks. You might wonder why the P80 instead of the P90 or P50 or the standard deviation (P68.2).

To project directors, the P80 is a magic number that can be shared with colleagues, the directors, the client. Why the P80 became the popular percentile is unknown. There is obviously a relationship between risk aversion and risk taking — the more risk averse, the higher the P value that is preferred.

— Contingency as a percentage of baseline cost

The project planning process will involve detailed cost estimates by quantity surveyors and cost engineers. These estimates will become the baseline cost of the project covering materials, labor and inflation. The risk manager will endeavor to get the cost team to do a risk review and build a range of uncertainties around the costs. During the design stage, this will usually be a +/-25% ballpark figure, with the range narrowing as design and time progress.

The formula used for determining cost based contingency is usually:

P80 of cost estimate – base cost = contingency
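Given simulated cost outcomes, the formula above is a one-liner. The baseline value and the +/-25% design-stage range below are illustrative assumptions, with the range modelled as a triangular distribution centred on the estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

base_cost = 2_300.0  # baseline estimate, $M (illustrative)
# Assumed +/-25% design-stage uncertainty around the baseline.
cost_outcomes = rng.triangular(base_cost * 0.75, base_cost, base_cost * 1.25, 100_000)

# contingency = P80 of cost estimate - base cost
contingency = np.percentile(cost_outcomes, 80) - base_cost
print(f"P80-based contingency: {contingency:.0f} ($M)")
```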

Often, the cost team includes project risks in the calculations based on personal experience; these are usually undocumented and inflate the base cost. You do not want this to happen.

The planning team will, at the outset, establish some percentage of the total cost as a contingency. On the most recent mega-project valued at $2.3 billion, the contingency was 7% of the total forecast cost. How this contingency was determined was undocumented but presumably based on some experiential rule of thumb of the planning team. Curiously, this figure was shown on the management reports as a P80, presumably in an endeavor to give credibility to the contingency figure.

— Contingency as a function of risk assessment

The risk management process is a journey over the duration of the project. It starts at the design phase, progresses through manufacturing, then on to construction and finally to commissioning. Although these are broadly distinct phases, there will be many overlapping time periods.

The time of greatest risk will be during the design phase, when everything is pretty much unknown to all the project team. The uncertainties will be legion, from planning permission to technology, contracts to quality control, civil engineering works to change management.

The risk should appear as a series of waves, growing rapidly during the design phase and then decreasing until approaching zero as the problems are solved. After all, you wouldn’t begin a project with huge quantities of unresolved risk.

The graph below gives an idea of how risk evolves over the course of the project:


Figure 8 Risk Over Time

As each phase progresses, the risk will ebb and flow, progressively decreasing as the project concludes successfully.

The risk total for the month will have meaning only in the context of the previous month’s risk total, the phase of the project and the forecast for the future risk over the course of the project.


Figure 9 A Box & Whisker Plot of the First 10 Monthly Total Risk Values

It can be seen from Figure 9 that the risk is progressively increasing until month 9, after which it appears to start declining. Risk will follow the phases described in Figure 8. Risk can be graphed according to each individual phase or a global overview.

It should be apparent that the P80 doesn’t help the project director understand the current or future risk on the project, the nature of the uncertainty or the risk over time.

A simple enhancement in Excel combining Figures 8 and 9 is given in Figure 10 so that deviations from forecast are clearly visible and comparable.


Figure 10 Current Monthly Risk Total Vs. Forecast P50 & P90.

The range of uncertainty in the current situation and the forecast are clearly displayed. Alternative measures of uncertainty can be used — e.g., mean +/- 1 standard deviation.

In Figure 10, there is a noticeable discrepancy between the current total risk and the forecast. It is essential to understand and report on the source — for example, possibilities such as these:

  1. Fewer risks have been identified than expected
  2. The quantification of risks is too optimistic, i.e., lower cost
  3. The handling plans are assessed as more effective
  4. The forecast risk is higher than actually being experienced during design stage
  5. The design phase is running behind schedule
  6. Improved estimation skills are required, so a calibration training course needs to be put in place

It is important for the project director to understand exactly what is being measured in this concept of total risk.

Useful reference: How to Manage Project Opportunity and Risk: Why Uncertainty Management Can be a Much Better Approach Than Risk Management, by Stephen Ward and Chris Chapman.