What Blackjack Teaches on Analytics

During Aon’s Analytics Insights Conference, we focused on the variety of analytics software and solutions touching our industry. The conference was themed “Blending Old and New: Data and Analytics in the Modern Era.” It will come as no surprise that terms such as blockchain, AI and machine learning might appear to be the holy grail of our industry. But there are other keys to making good data-driven decisions.

Blackjack happens to be the perfect Petri dish for reminding ourselves how to make better decisions: the data is easy to get, and the rules never change. At this year’s conference, Jeffrey Ma, former VP of analytics and data science at Twitter and kingpin of the famous MIT blackjack team, shared his thoughts on the future of new analytics capabilities, arguing that “the biggest misconception is that AI is like magic and solves everything. In reality, it’s only going to be as good as the problems you point out and the data set that’s available to you.”

Tracy Hatlestad, chief operating officer – analytics within Aon’s reinsurance solutions business, sat down with Jeffrey to find out more.

Q: In an industry like insurance where success with data and analytics is a clear differentiator, what are a few key things you think people need to remember about making data-driven decisions?

A: Quite a few things come to mind, but here are some that seem pertinent to this crowd. The first is omission bias, the tendency to favor inaction over action. In blackjack, there is static math that overrides these biases; that math is harder to discern in insurance, but the logic still applies. The second is the fallacy of the gut result, the idea that you can predict better than science or math. The third, and potentially the most dangerous for the financial industry, is confusing the right decision with the right outcome. In blackjack, an incorrect decision can still lead to one-off wins, and, in those scenarios, undue credence can be given to those decisions or decision makers in the future.
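The right-decision-versus-right-outcome point can be made concrete with a short Monte Carlo sketch (the 2% per-bet edge and session lengths are hypothetical, not Ma’s actual numbers): a bet with positive expected value still loses a large share of short sessions, so judging the decision by one session’s outcome is misleading.

```python
import random

def losing_run_frequency(p_win=0.51, hands=100, trials=2_000, seed=7):
    """Fraction of sessions that end down overall, even though every
    individual bet has positive expected value (2 * p_win - 1 per unit)."""
    rng = random.Random(seed)
    losing = 0
    for _ in range(trials):
        # Flat one-unit bets: +1 on a win, -1 on a loss.
        total = sum(1 if rng.random() < p_win else -1 for _ in range(hands))
        losing += total < 0
    return losing / trials

# With a 2% per-bet edge, a large minority of 100-hand sessions still lose;
# over 10,000 hands the edge dominates and losing sessions become rare.
short_run = losing_run_frequency(hands=100)
long_run = losing_run_frequency(hands=10_000, trials=500)
```

The same logic is why a correct underwriting decision can still produce a bad year: the quality of the decision only shows up in the aggregate.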

See also: 3-Step Approach to Big Data Analytics  

Q: You talked about three levels of analytics – data, analysis and implementation. What are a few keys to success with those levels?

With level one

A: It’s imperative to remember that data is the building block for any analytical framework and any advantage that you can create. The adage, “garbage in, garbage out,” still applies. In many industries, there are a number of barriers that stand in the way of quality data, such as:

  • Data curation problems, often driven by legacy systems
  • Lack of commitment to data quality
  • Input by non-analytics professionals
  • The gathering of data well in advance of the ability to use it for strategic advantage
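The “garbage in, garbage out” point above can be sketched as a minimal data-quality gate that rejects records before they reach any model. The field names (`policy_id`, `premium`, `inception_date`) are illustrative only, not an actual insurance schema.

```python
from datetime import date

def validate_record(rec: dict) -> list[str]:
    """Return a list of data-quality problems (empty list = clean record)."""
    problems = []
    if not rec.get("policy_id"):
        problems.append("missing policy_id")
    premium = rec.get("premium")
    if not isinstance(premium, (int, float)) or premium <= 0:
        problems.append("premium must be a positive number")
    inception = rec.get("inception_date")
    if not isinstance(inception, date):
        problems.append("inception_date must be a date")
    return problems

clean = {"policy_id": "P-1", "premium": 1200.0,
         "inception_date": date(2019, 1, 1)}
dirty = {"policy_id": "", "premium": -5,
         "inception_date": "2019-01-01"}  # string, not a date
```

Checks like these are cheap insurance against the legacy-system and manual-entry problems listed above.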

With level two

I really think of it more like science than analysis. The real skill is the ability to hypothesize. In fact, this has led me to hire people with advanced skillsets in economics, social sciences and physics. Simple data science is a commodity, and companies should be looking for people with the ability to ask questions, not just look for big patterns in data.

With level three

This is when you get to implementation, which separates successful companies from the rest. You move into experimentation, constantly measuring impact. It’s important to remember that you need buy-in from everyone – sales, marketing, underwriting and so on. Without it, the ability to implement data-driven decisions gets lost. But when you find successes, you can operationalize those results with machine learning or artificial intelligence.
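The experiment-and-measure step can be sketched as a simple two-proportion z-test on a hypothetical A/B experiment (the conversion counts below are made up for illustration, not real campaign data):

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates,
    using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: control flow (A) vs. new data-driven flow (B).
z = two_proportion_z(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
significant = abs(z) > 1.96  # ~5% two-sided significance threshold
```

Gating a rollout on a test like this is one way to “always measure the impact” rather than trusting a single good-looking week.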

Q: Building on your comments about artificial intelligence, what’s a misconception about the power of artificial intelligence or machine learning?

A: The biggest misconception is that it’s magic and solves everything. In reality, machine learning or artificial intelligence is only going to be as good as the problems you point out and the data set that’s available. Artificial intelligence does not have the ability to explore outside the dataset. It can learn from that dataset, but if it is not given the right questions, or is given a skewed set of data, you can easily be misinformed.

Q: That makes sense, and yet it still seems like something people might overlook. Are there other mistakes you see companies making in the artificial intelligence space?

A: Lately, I’ve heard a few companies talk about separating data science from machine learning and artificial intelligence. They believe that data science is closer to the data and analytics field or business applications while machine learning is more around computers and programming, infrastructure, etc. The reality is they need to act in concert because the data scientists are going to be the ones who come up with the heuristics that help inform an artificial intelligence or machine learning model. The best case is when business leaders are working collaboratively with data scientists to develop a hypothesis that can be tested. Without that, you’re not going to get the best return on your investment in terms of your talent and what they are doing.

Q: What advice would you give senior leaders in insurance on implementing artificial intelligence or machine learning into their organization?

A: In any evolving field, it’s important to remember that the candidates who might deliver the best outcomes can come from diverse backgrounds. There isn’t a typical hire when you’re looking for the best people in these fields. Unlike long-time industry practitioners, who help you solve problems and create solutions with current methods, these hires see the problem differently and really understand the possibilities. It’s also important for leaders to recognize that these people are likely some of the smartest in the building, but they need business context to end up with the right results. You can’t treat them like back-office number crunchers.

See also: Predictive Analytics: Now You See It….  

Q: We touched on it a bit earlier, but let’s get back to it: why is insurance more difficult than blackjack?

A: There are a lot of things in this world that will test your belief in analytics. Belief in analytics is easier in blackjack because the game is already solved and understood: the rules and data don’t change, and the outcomes are known. I talked about a situation where I lost $100,000 through mathematically correct decisions; it would have been hard to stick with those decisions without fully understanding the game. That’s even more difficult once you introduce additional variability and unconditional probabilities in areas like insurance, where the data is not stationary. In those cases, negative results in a short-term sample make it even harder to trust the process or model. The fascinating thing is that, because it’s more difficult, there are many more opportunities to differentiate and win on a bigger scale.
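A rough way to see why sticking with a correct strategy through a losing stretch requires so much conviction: a small edge needs a long run before it reliably outruns variance. The sketch below (edge and per-bet standard deviation are hypothetical unit-bet values, not blackjack-specific figures) finds the number of independent bets after which the expected cumulative win exceeds two standard deviations of the cumulative result.

```python
from math import ceil

def hands_to_outrun_variance(edge=0.01, sd=1.0, z=2.0):
    """Smallest number of independent unit bets n for which the expected
    cumulative win (edge * n) exceeds z standard deviations of the
    cumulative result (z * sd * sqrt(n))."""
    # edge * n > z * sd * sqrt(n)  =>  n > (z * sd / edge) ** 2
    return ceil((z * sd / edge) ** 2)

print(hands_to_outrun_variance())  # 40000 hands for a 1% edge
```

Doubling the edge cuts the required sample by a factor of four, which is one way to read Ma’s closing point: harder, noisier domains reward those who can find, and patiently hold, a bigger edge.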