Tag Archives: research

Why To-Do Lists Don’t Work

Do you really think Richard Branson and Bill Gates write a long to-do list with prioritized items as A1, A2, B1, B2, C1 and on and on?

In my research into time management and productivity best practices, I’ve interviewed more than 200 billionaires, Olympians, straight-A students and entrepreneurs. I always ask them to give me their best time management and productivity advice. And none of them has ever mentioned a to-do list.

There are three big problems with to-do lists:

First, a to-do list doesn’t account for time. When we have a long list of tasks, we tend to tackle those that can be completed quickly, leaving the longer items undone. Research from the company iDoneThis indicates that 41% of all to-do list items are never completed!

Second, a to-do list doesn’t distinguish between urgent and important. Once again, our impulse is to fight the urgent and ignore the important. (Are you overdue for your next colonoscopy or mammogram?)

Third, to-do lists contribute to stress. In what’s known in psychology as the Zeigarnik effect, unfinished tasks contribute to intrusive, uncontrolled thoughts. It’s no wonder we feel so overwhelmed in the day but fight insomnia at night.

In all my research, there is one consistent theme that keeps coming up:

Ultra-productive people don’t work from a to-do list, but they do live and work from their calendar.

Shannon Miller won seven Olympic medals as a member of the 1992 and 1996 U.S. Olympic gymnastics teams, and today she is a busy entrepreneur and author of It’s Not About Perfect. In a recent interview, she told me:

“During training, I balanced family time, chores, schoolwork, Olympic training, appearances and other obligations by outlining a very specific schedule. I was forced to prioritize…To this day, I keep a schedule that is almost minute-by-minute.”

Dave Kerpen is the cofounder of two successful start-ups and a New York Times-best-selling author. When I asked him to reveal his secrets for getting things done, he replied:

“If it’s not in my calendar, it won’t get done. But if it is in my calendar, it will get done. I schedule out every 15 minutes of every day to conduct meetings, review materials, write and do any activities I need to get done. And while I take meetings with just about anyone who wants to meet with me, I reserve just one hour a week for these ‘office hours.'”

Chris Ducker successfully juggles multiple roles as an entrepreneur, best-selling author and host of the New Business Podcast. What did he tell me his secret was?

“I simply put everything on my schedule. That’s it. Everything I do on a day-to-day basis gets put on my schedule. Thirty minutes of social media–on the schedule. Forty-five minutes of email management–on the schedule. Catching up with my virtual team–on the schedule…Bottom line, if it doesn’t get scheduled, it doesn’t get done.”

There are several key concepts to managing your life using your calendar instead of a to-do list:

First, make the default event duration in your calendar only 15 minutes. If you use Google Calendar or the calendar in Outlook, it’s likely that when you add an event to your calendar it is automatically scheduled for 30 or even 60 minutes. Ultra-productive people only spend as much time as is necessary for each task. Yahoo CEO Marissa Mayer is notorious for conducting meetings with colleagues in as little as five minutes. When your default setting is 15 minutes, you’ll automatically discover that you can fit more tasks into each day.

Second, time-block the most important things in your life, first. Don’t let your calendar fill up randomly by accepting every request that comes your way. You should first get clear on your life and career priorities and pre-schedule sacred time-blocks for these items. That might include two hours each morning to work on the strategic plan your boss asked you for. But your calendar should also include time blocks for things like exercise, date night or other items that align with your core life values.

Third, schedule everything. Instead of checking email every few minutes, schedule three times a day to process it. Instead of writing “Call back my sister” on your to-do list, go ahead and put it on your calendar or, even better, establish a recurring time block each afternoon to “return phone calls.”

That which is scheduled actually gets done.

How much less stress would you feel, and how much more productive would you be, if you could rip up your to-do list and work from your calendar instead?

How to Use All the New Data

Most people who purchase an insurance policy are faced with the daunting task of filling out an extensive application. The insurance company – either directly or through an intermediary – asks a myriad of questions about the “risk” for which insurance is being sought. The data requested includes information about the entity seeking to purchase insurance, the nature of the risk, prior loss experience and the amount of coverage requested. Insurers may supplement that information with a limited amount of external data such as motor vehicle records and credit scores. The majority of information used to inform the valuation process, however, has been provided by the applicant. This approach is much like turning off your satellite and data-driven GPS navigation system to ask a local for directions.

According to the 2014 EMC Digital Universe study, with research and analysis by IDC, the digital universe is “doubling in size every two years, and by 2020 the digital universe – the data we create and copy annually – will reach 44 zettabytes.” That explosion in the information ecosystem expands the data potentially available to insurers and the value they can provide to their clients. But it requires new analytical tools and approaches to unlock that value. The resulting benefits can be grouped generally into two categories:

  • Providing Risk Insights: Mining a wider variety of data sources yields valuable risk insights more quickly
  • Improving Customer Experience: Improving the origination, policy service and claims processes through technology enhances client satisfaction

For each of these areas, I’ll highlight a vision for a better client value proposition, identify some of the foundational work that is used to deliver that value and flesh out some of the tools needed to realize this potential.

Risk Insights
Insurance professionals have expertise that gives them insight into the core drivers of risk. From there, they have the opportunity to identify existing data that will help them understand the evolving risk landscape, or data that could be captured with today’s technology. One can see the potential value of coupling an insurer’s own data with that from various currently available sources:

  • Research findings from universities are almost universally available digitally, and these can provide deep insights into risk.
  • Publicly available data on marine vessel position can be used to provide valuable insights to shippers regarding potentially hazardous routes and ports, from both a hull and cargo perspective.
  • Satellite imagery can be used to assess everything from damage after a storm, to the proximity of other structures, to groundwater levels, providing a wealth of insights into risk.

The list of potential sources is impressive, limited in some sense only by our imagination.

When using the broad digital landscape to understand risk — say, exposure to a potentially harmful chemical — we know that two important aspects to consider are scientific evidence and the legal landscape. Historically, insurers would have relied on expert judgment to assess these risks, but in a world where court proceedings and academic literature are both digitized, we can do better, using analytical approaches that move beyond those generally employed.

Praedicat is a company doing pioneering work in this field, deriving deep insights by systematically and electronically evaluating evidence from various sources. According to CEO Dr. Robert Reville, “Our success did not come solely from our ability to mine databases and create metadata, which many companies today can do. While that work was complex, given the myriad of text-based data sources, others could have done that work. What we do that is unique is overlay an underlying model of the evolution of science, the legal process and the dynamics of litigation that we created from the domain expertise of our experts to provide context that allows us to create useful information from that data and to convert the metadata into quantitative risk metrics ready to guide decisions.”

The key point is that if the insurance industry wants to generate insights of value to clients, identifying or creating valuable data sources is necessary, but making sense of it all requires a mental model to provide relevance to the data. The work of Praedicat, and others like it, should not stop on the underwriter’s desktop. One underexploited value of the insurance industry is to provide insights into risk that give clients the ability to fundamentally change their own destiny. Accordingly, advances in analytics enable a deeper value proposition for those insurers willing to take the leap.

Customer Experience
Requiring clients to provide copious amounts of application data in this information age is unnecessary and burdensome. I contrast the experience of many insurance purchasers with my own experience as a credit card customer. I, like thousands of other consumers, routinely receive “preapproved” offers in the mail from credit card companies soliciting my business. However appealing it may be to interpret those offers as a benevolent gesture of trust, the reality is that banks efficiently employ available data ecosystems to assess my risk without ever needing to ask me a single question before extending an offer. As an insurance purchaser, by contrast, I fill out lengthy applications, providing information that could be gained from readily available government data, satellite imagery or a litany of other sources.

Imagine a time when much of the insurance buying process is inverted, beginning with an offer for coverage rather than a lengthy application and quote request. In that future, an insurer provides an assessment of the risks faced, the mitigations that could be undertaken (and the associated savings) and the price it would charge.

While no doubt more client-friendly, is such a structure possible? As Louis Bode, former senior enterprise architect and solution architect manager at Great American Insurance Group and current CSO of a new startup in stealth mode, observes, “The insurance industry will be challenged to assimilate and digest the fire hose of big data needed to achieve ease of use and more powerful data analytics.”

According to Bode, “Two elements that will be most important for us as an industry will be to 1) ensure our data is good through a process of dynamic data scoring; and 2) utilize algorithmic risk determination to break down the large amounts of data into meaningful granular risk indexes.” Bode predicts “a future where insurers will be able to underwrite policies more easily, more quickly and with less human touch than ever imagined.”

The potential to use a broader array of data sources to improve customer experience extends well beyond the origination process. Imagine crowdsourcing in real time the analysis of images of an area affected by a natural disaster, getting real-time insights into where to send adjusters before a claim is submitted. Tomnod is already crowdsourcing the kinds of analysis that would make this possible. Or imagine being able to settle an automobile claim by simply snapping a picture and getting an estimate in real time. Tractable is already enabling that enhanced level of customer experience.

The future for insurance clients is bright. Data and analytics will enable insurers to deliver more value to clients, not for additional fees, but as a fundamental part of the value they provide. Clients can, and should, demand more from their insurance experience. Current players will deliver or be replaced by those who can.

I’d like to finish with a brief, three-question poll to see how well readers think the industry is performing in its delivery of value through data and analytics to clients. Here is my Google Forms survey.

Is Research Ready for ‘Gamification’?

It has been interesting that, after several years of excitement around the topic of “gamification,” this year more commentators have suggested that it’s “game over.” I certainly agree that this concept has moved through the Gartner hype cycle, into the wonderfully named “trough of disillusionment.”

However, that is the springboard for entering into the stages of pragmatic realism. My experience is that it is often once technologies or ideas reach this stage that those interested in just delivering results can begin to realize benefits, without the distraction of hype/fashion.

Even though I can see the points made in this Forbes article, I think that the evidence cited concerns a failure to revolutionize business more broadly. What has not yet been exhausted, in my view, is the potential for gamification to help with market research.

One growing issue springs to mind: the challenge faced by any client-side researcher seeking a representative sample for a large, quantitative study. Participation rates are falling unless research is fun, interesting and rewarding. Coupled with that problem is the risk that some of the methods agencies use to address it skew samples further toward “professional” research participants.

Gaining a sufficient sample, one that matches the demographics or segments of a company’s own customer base, can be important for experimentation. This issue is timely for financial services companies that are seeking to experiment with behavioral economics and need sufficient participation in tests to see choices made in response to “nudges.” So, there is a need to freshen up research with methods of delivery that better engage the consumer.

No doubt the full hype will not be realized for gamification. But I hope that, as the dust settles, customer insight leaders will not give up on the idea of gamification as a research execution tool. Some pioneers like Upfront Analytics are seeing positive results. Let’s hope others get a chance to “play” with this.

How Effective Is Your Marketing?

When speaking about the power of having different technical disciplines converge to yield customer insights, it’s common to focus on analytics and research.

However, another rich territory for seeing the benefit of multiple technical disciplines to deliver customer insight (CI) is measuring how effective your marketing is.

One reason for calling on the skills of two complementary CI disciplines is the need to measure different types of marketing spending. The most obvious example is probably the challenge of measuring the effectiveness of “below the line” vs. “above the line” marketing. For those not so familiar with this language, born out of accounting terminology, the difference can perhaps be best understood by considering the “purchase funnel.”

Most, if not all, marketers will be familiar with the concept of a purchase funnel. It represents the steps that need to be achieved in a consumer journey toward making a purchase. Although often now made more complex, to represent the nuanced stages of online engagement/research or the post-sale stages toward retention/loyalty, at its simplest a purchase funnel represents four challenges. These are to reach a mass of potential consumers and take some on the journey through awareness, consideration and preference to purchase. The analogy of the funnel represents that fewer people will progress to each subsequent stage.
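
To make that narrowing concrete, here is a minimal Python sketch with purely invented stage counts; the numbers and stage labels are illustrative only, not benchmarks.

```python
# Minimal purchase-funnel sketch. The stage counts are invented purely to
# illustrate how volume narrows at each step of the journey.
funnel = {
    "reach": 100_000,          # potential consumers exposed to marketing
    "awareness": 40_000,       # recall the brand
    "consideration": 12_000,   # would shortlist it
    "preference": 4_000,       # prefer it over the alternatives
    "purchase": 1_000,         # actually buy
}

previous = None
for stage, count in funnel.items():
    if previous is None:
        print(f"{stage:>13}: {count:>7,}")
    else:
        print(f"{stage:>13}: {count:>7,}  ({count / previous:.0%} of the prior stage)")
    previous = count
```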

Back to our twin types of marketing: Above-the-line marketing (ATL) is normally the use of broadcast or mass media to achieve brand awareness and consideration for meeting certain needs. Getting on the “consideration list,” if you will. Traditionally, ATL was often TV, radio, cinema, outdoor and newspaper advertising. Below-the-line marketing (BTL) is normally the use of targeted direct marketing communications to achieve brand/product preference and sales promotions. Traditionally, this was often direct mail, outbound calling and email marketing. In recent years, many marketers talk in terms of “through-the-line” (TTL) advertising, which is an integrated combination of ATL and BTL messages for a campaign. Social media marketing is often best categorized as TTL, but elements can be either ATL or BTL, largely distinguished by whether you can measure who saw the marketing and have feedback data on their response.

Let’s return to the theme of using multiple CI disciplines to measure the effectiveness of these different types of marketing. The simpler example is BTL. Here, the data that can be captured on both who was targeted and how they behaved enables the application of what is called the experimental or scientific method. In essence, these are the skills of database marketing teams: setting up campaigns with control cells and feedback loops, merging the resulting data to evidence incremental changes in behavior as a result of the marketing stimulus, and optimizing future targeting.
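
To illustrate that experimental method, here is a minimal sketch of an incremental-lift calculation comparing a mailed cell with a randomly held-out control cell. The function name and all the counts are hypothetical, and a real program would also test whether the uplift is statistically significant.

```python
# Minimal sketch of incremental-lift measurement for a BTL campaign with a
# randomly held-out control cell. All names and counts are hypothetical.

def incremental_lift(treated_customers, treated_buyers,
                     control_customers, control_buyers):
    """Estimate the uplift in response rate attributable to the campaign."""
    treated_rate = treated_buyers / treated_customers
    control_rate = control_buyers / control_customers
    uplift = treated_rate - control_rate            # incremental response rate
    extra_buyers = uplift * treated_customers       # buyers the campaign created
    return treated_rate, control_rate, uplift, extra_buyers

treated_rate, control_rate, uplift, extra = incremental_lift(
    treated_customers=50_000, treated_buyers=1_500,
    control_customers=10_000, control_buyers=200,
)
print(f"treated {treated_rate:.2%}, control {control_rate:.2%}, "
      f"uplift {uplift:.2%}, roughly {extra:.0f} incremental buyers")
```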

ATL is more of a challenge. Because control cells do not exist and it is impossible to be certain who saw the marketing, the comparison needs to be based on time-series data. Here, the expertise of analytics teams comes to the fore, especially econometric modeling. This is best understood as a set of statistical techniques for identifying which of many possible factors best explain changes in sales over time, and then combining those factors into a model that can predict future sales based on those inputs. There are many skills needed here, and the topic is worthy of a separate post, but for now suffice it to say that analytical questioning techniques to elicit potential internal and external factors are as important as modeling skills.
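
As a rough illustration of what econometric modeling involves mechanically, the sketch below fits an ordinary least squares model of weekly sales against ATL spend and a seasonality term using pandas and statsmodels. The data is simulated and the variable names are invented; a production market-mix model would also include adstock (carryover) transformations, price, distribution and other external factors.

```python
# Rough sketch of an econometric (market-mix style) model: explain weekly sales
# from ATL spend plus seasonality. The data is simulated purely so the example
# runs end to end; none of the figures are real.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
weeks = 156  # three years of weekly observations
df = pd.DataFrame({
    "tv_spend": rng.gamma(2.0, 50_000, weeks),             # hypothetical ATL spend
    "season": np.sin(2 * np.pi * np.arange(weeks) / 52),   # simple annual cycle
})
df["sales"] = (200_000 + 0.8 * df["tv_spend"] + 30_000 * df["season"]
               + rng.normal(0, 20_000, weeks))             # simulated outcome

X = sm.add_constant(df[["tv_spend", "season"]])
model = sm.OLS(df["sales"], X).fit()
print(model.params)     # estimated contribution of each driver
print(model.rsquared)   # share of the variation in sales explained
```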

I hope you can see that my definition of today’s TTL marketing campaigns thus necessitates making use of both database marketing and analytics team skills to measure marketing effectiveness. But beyond this simply being a division of labor between ATL elements being measured by analytics teams and BTL by database marketing ones, there is another way they need to work together.

Reaching the most accurate or helpful marketing attribution is an art as much as a science. In reality, even BTL marketing effectiveness measurement is imprecise (because of the complexities of media interdependencies and not knowing if the consumer really paid attention to communications received). In a world where your potential consumers are exposed to TTL marketing with omni-channel options of response, no one source of evidence or skill set provides a definitive answer. For that reason, I once again recommend convergence of customer insight evidence.

Best practice is to garner the evidence from:

  • Incremental behavior models (econometric or experimental method)
  • Sales reporting (reconciling with finance numbers)
  • Market position (research trackers)
  • Media effectiveness tracking (reconciling with behavior achieved throughout the purchase funnel)

Converging all this evidence, provided by data, analytics, research and database marketing, provides the best opportunity to determine robust marketing attribution. But do keep a record of your assumptions and hypotheses to be tested in future campaigns.

I hope that was helpful. How are you doing at measuring the effectiveness of your marketing? I hope you’re focused on incremental profit, not “followers.”

Where Have the Hurricanes Gone?

Last year’s hurricane season passed off relatively quietly. Gonzalo, a Category 2 hurricane, hit Bermuda in October 2014, briefly making the world’s headlines, but it did relatively little damage, apart from uprooting trees and knocking out power temporarily to most of the island’s inhabitants.

It is now approaching 10 years since a major hurricane hit the U.S., when four powerful hurricanes — Dennis, Katrina, Rita and Wilma — slammed into the country in the space of a few months in 2005.


It shouldn’t be so quiet. Why? Put simply, the warmer the Atlantic Ocean is, the more potential there is for storms to develop. The temperatures in the Atlantic basin (the expanse of water where hurricanes form, encompassing the North Atlantic Ocean, the Gulf of Mexico and the Caribbean Sea) have been relatively high for roughly the past decade, meaning that there should have been plenty of hurricanes.

There have been a number of reasons put forward for why there has been a succession of seasons when no major storms have hit the U.S. They include: a much drier atmosphere in the Atlantic basin because of large amounts of dust blowing off the Sahara Desert; the El Niño effect; and warmer sea surface temperatures causing hurricanes to form further east in the Atlantic, meaning they stay out at sea rather than hitting land.

Although this is by far the longest run in recent times of no big storms hitting the U.S., it isn’t abnormal to go several years without a big hurricane. “From 2000 to 2003, there were no major land-falling hurricanes,” says Richard Dixon, group head of catastrophe research at Hiscox. “Indeed, there was only one between 1997 and 2003: Bret, a Category 3 hurricane that hit Texas in 1999.”

There then came two of the most devastating hurricane seasons on record in 2004 and 2005, during which seven powerful storms struck the U.S.

The quiet before the storm

An almost eerie calm has followed these very turbulent seasons. Could it be that we are entering a new, more unpredictable era when long periods of quiet are punctuated by intense bouts of violent storms?


“Not necessarily,” Dixon says. “Neither should we be lulled into a false sense of security just because no major hurricanes — that is Category 3 or higher — have hit the U.S. coast.”

There have, in fact, been plenty of hurricanes in recent years — it’s just that very few of them have hit the U.S. Those that have — Irene in 2011 and Sandy in 2012 — had only Category 1 hurricane wind speeds by the time they hit the U.S. mainland, although both still caused plenty of damage.

The number of hurricanes that formed in the Atlantic basin each year between 2006 and 2013 has been generally in line with the average for the period since 1995, when ocean temperatures rose relative to the “cold phase” that stretched from the early 1960s to the mid-1990s.

On average, around seven hurricanes have formed each season in the period 2006-2013, roughly three of which have been major storms. “So, although we haven’t seen the big land-falling hurricanes, the potential for them has been there,” Dixon says.

Why the big storms that have brewed have not hit the U.S. comes down to a mixture of complicated climate factors — such as atmospheric pressure over the Atlantic, which dictates the direction, speed and intensity of hurricanes, and wind shear, which can tear a hurricane apart.

There have been several near misses: Hurricane Ike, which hit Texas in 2008, was close to being a Category 3, while Hurricane Dean, which hit Mexico in 2007, was a Category 5 — the most powerful category of storm, with winds in excess of 155 miles per hour.

That’s not to say there is not plenty of curiosity as to why there have recently been no powerful U.S. land-falling hurricanes. This desire to understand exactly what’s going on has prompted new academic research. For example, Hiscox is sponsoring postdoctoral research at Reading University into the atmospheric troughs known as African easterly waves. Although it is known that many hurricanes originate from these waves, there is currently little understanding of how their intensity and location change from year to year and what impact they might have on hurricane activity.

Breezy optimism?

The dearth of big land-falling hurricanes has both helped and hurt the insurance industry. Years without any large bills to pay from hurricanes have helped the global reinsurance industry’s overall capital to reach a record level of $575 billion by January 2015, according to data from Aon Benfield.

But, as a result, competition for business is intense, and prices for catastrophe cover have been falling, a trend that continued at the latest Jan. 1 renewals.


Meanwhile, the values at risk from an intense hurricane are rising fast. Florida — perhaps the most hurricane-prone state in the U.S. — is experiencing a building boom. In 2013, permissions to build $18.2 billion of new residential property were granted in Florida, the second-highest amount in the country behind California, according to U.S. government statistics.

“The increasing risk resulting from greater building density in Florida has been offset by the bigger capital buffer the insurance industry has built up,” says Mike Palmer, head of analytics and research at Hiscox Re. But, he adds: “It will still be interesting to see how the situation pans out if there’s a major hurricane.”

Of course, a storm doesn’t need to be a powerful hurricane to create enormous damage. Sandy was downgraded from a hurricane to a post-tropical cyclone before making landfall along the southern New Jersey coast in October 2012, but it wreaked havoc as it churned up the northeastern U.S. coast. The estimated overall bill has been put at $68.5 billion by Munich Re, of which around $29.5 billion was picked up by insurers.

Although Dixon acknowledges that the current barren spell of major land-falling hurricanes is unusually long, he remains cautious. “It would be dangerous to assume there has been a step change in major-land-falling hurricane behavior.”

Scientists predict that climate change will lead to more powerful hurricanes in coming years. If global warming does lead to warmer sea surface temperatures, the evidence shows that these tend to make big storms grow in intensity.

Even without the effects of climate change, the factors are still in place for there to be some intense hurricane seasons for at least the next couple of years, Dixon argues. “The hurricane activity in the Atlantic basin in recent years suggests to me that we’re still in a warm phase of sea surface temperatures — a more active hurricane period, in other words. So we certainly shouldn’t think that 2015 will necessarily be as quiet as the past few have been.”

Storm warning

Predictions of hurricanes are made on a range of timescales, and the skill involved in these varies dramatically. On short timescales (from days to as much as a week), forecasts of hurricane tracks are now routinely made with impressive results. For example, Hurricane Gonzalo was forecast to pass very close to Bermuda more than a week before it hit the island, giving its inhabitants a chance to prepare. Such advances in weather forecasting have been helped by vast increases in computing power and by “dynamical models” of the atmosphere.

These models work using a grid system that encompasses all or part of the globe, in which they work out climatic factors, such as sea surface temperature and atmospheric conditions, in each particular grid square. Using this information and a range of equations, they are then able to forecast the behavior of the atmosphere over coming days, including the direction and strength of tropical storms.

But even though computing power has improved massively in recent years, each of the grid squares in the dynamical models typically corresponds to an area of many square miles, so it’s impossible to take into account every cloud or thunderstorm in that grid that would contribute to a hurricane’s strength. This, combined with the fact that it is impossible to know the condition of the atmosphere everywhere, means there will always be an element of uncertainty in the forecast. And while these models can do very well at predicting a hurricane’s track, they currently struggle to do as good a job with storm intensity.

Pre-season forecasts

Recent years have seen the advent of forecasts aimed at predicting the general character of the coming hurricane season some months in advance. These seasonal forecasts have been attracting increasing media fanfare and go as far as forecasting the number of named storms, of powerful hurricanes and even of land-falling hurricanes.

Most are not based on complicated dynamical models (although these do exist) but instead tend to be statistical models that link historical data on hurricanes with atmospheric variables, such as El Niño. But as Richard Dixon, Hiscox’s group head of catastrophe research, says: “There is a range of factors that can affect the coming hurricane season, and these statistical schemes only account for some of them. As a result, they don’t tend to be very skillful, although they are often able to do better than simply basing your prediction on the historical average.”
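
To show what such a statistical scheme might look like mechanically, here is a minimal sketch of a Poisson regression relating annual hurricane counts to an ENSO index, using statsmodels. The data is simulated and the underlying relationship is assumed for the purpose of the example; this is not a real forecasting model.

```python
# Minimal sketch of a statistical seasonal-forecast scheme: a Poisson regression
# of annual hurricane counts on an ENSO index. The data is simulated and the
# underlying relationship is assumed; this shows the mechanics only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
years = 60
enso = rng.normal(0, 1, years)            # hypothetical ENSO index, one value per season
true_rate = np.exp(1.9 - 0.25 * enso)     # assumption: El Nino years suppress activity
counts = rng.poisson(true_rate)           # simulated annual hurricane counts

X = sm.add_constant(enso)
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

# Expected count for a hypothetical strong El Nino season (index = 1.5).
x_new = np.array([[1.0, 1.5]])            # [constant, ENSO index]
print(fit.predict(x_new))
```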

It would be great if the information contained in seasonal forecasts could be used to help inform catastrophe risk underwriting, but as Mike Palmer, head of analytics and research for Hiscox Re, explains, this is a difficult proposition. “Let’s say, for example, that a seasonal forecast predicts an inactive hurricane season, with only one named storm compared with an average of five. It would be tempting to write more insurance and reinsurance on the basis of that forecast. However, even if it turns out to be true, if the single storm that occurs is a Category 5 hurricane that hits Miami, the downside would be huge.”

Catastrophe models

That’s not to say that there is no useful information about hurricane frequency that underwriters can use to inform their underwriting. Catastrophe models provide the framework to allow them to do just that. These models have become the dominant tools by which insurers try to predict the likely frequency and severity of natural disasters. “A cat model won’t tell you what will happen precisely in the coming year, but it will let you know what the range of possible outcomes may be,” Dixon says.

The danger comes if you blindly follow the numbers, Palmer says. That’s because although the models will provide a number for the estimated cost, for example, of the Category 5 hurricane hitting Miami, that figure masks an enormous number of assumptions, such as the expected damage to a wooden house as opposed to a brick apartment building.

These variables can cause actual losses to differ significantly from the model estimates. As a result, many reinsurers are increasingly using cat models as a starting point to working out their own risk, rather than using an off-the-shelf version to provide the final answer.
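
To make the idea of “a range of possible outcomes” concrete, here is a toy Monte Carlo sketch in the spirit of a catastrophe model’s financial engine: simulate how many damaging storms occur each year, draw a loss for each, and read off how often annual losses exceed a threshold. The frequency, severity and threshold figures are invented for illustration and do not come from any real catastrophe model.

```python
# Toy sketch of a catastrophe-model style simulation of annual hurricane losses.
# The frequency, severity and threshold figures are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
n_years = 100_000                 # simulated years
storm_rate = 1.7                  # hypothetical mean damaging landfalls per year

counts = rng.poisson(storm_rate, n_years)
annual_losses = np.zeros(n_years)
for i, n in enumerate(counts):
    if n:
        # Hypothetical heavy-tailed severity per storm, in $bn.
        annual_losses[i] = rng.lognormal(mean=1.0, sigma=1.2, size=n).sum()

threshold = 50.0                  # $bn of annual insured loss
prob_exceed = (annual_losses >= threshold).mean()
print(f"P(annual loss >= ${threshold:.0f}bn) is about {prob_exceed:.2%}, "
      f"roughly a 1-in-{1 / prob_exceed:.0f} year outcome")
print(f"Mean annual loss ${annual_losses.mean():.1f}bn, "
      f"99th percentile ${np.percentile(annual_losses, 99):.1f}bn")
```

Real catastrophe models layer detailed hazard, vulnerability and exposure modules on top of a skeleton like this, and it is in those modules that the assumptions Palmer warns about live.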