
Winning With Digital Confidence

PwC’s survey found that executive confidence in their digital IQ had dropped a stunning 15 percentage points from the year before.

Today, if there’s a problem with the heat or hot water in your hotel room, you call the front desk and wait for maintenance to arrive. At some chains, you have the option of reporting the issue using a mobile device. But in the near future, many hotel rooms will be wired with connected devices that report potential breakdowns to maintenance and may even automatically fix them. For example, smart-building technology will turn the heat up when your app’s locator notices you are on the way back to your room. Of course, such developments have significant implications for hotel staff.

George Corbin thinks about them from a scientific perspective. As the senior vice president of digital at Marriott, Corbin oversees Marriott.com and Marriott mobile, and he is responsible for about $13 billion of the company’s annual revenue. He says the “skills half-life” of a hotel industry worker is about 12 years, at least for those working in conventional areas such as sales, operations and finance. In other words, if people leave jobs in these functions, they could come back in 12 years and half their skills would still be relevant. But on the digital side, the skills half-life shrinks to a mere 18 months, according to Corbin.

Virtually every other industry faces similar dynamics. Digital competency is practically mandatory in many sectors; if you don’t get on board, you’ll fall behind competitors that do. And yet the knowledge required for widespread digital competency is often in short supply, and the related skills in agility and collaboration are often difficult to achieve in large companies. In a few years, an 18-month skills half-life may seem like a luxury. As a result, many executives’ confidence in their organization’s “Digital IQ” — their ability to harness digital-driven change to unlock value — is at an all-time low. That’s one of the main findings from the 2017 edition of PwC’s Digital IQ survey.
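The presence-triggered heating described above reduces to a simple rule. The following is a minimal sketch: the temperatures, the one-mile approach radius and the `Room` interface are all invented for illustration, not any hotel's actual system.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- a real system would tune these per property.
COMFORT_TEMP_F = 72.0
ECO_TEMP_F = 62.0
APPROACH_RADIUS_MILES = 1.0


@dataclass
class Room:
    guest_distance_miles: float  # reported by the guest's app locator
    target_temp_f: float = ECO_TEMP_F


def update_thermostat(room: Room) -> float:
    """Warm the room when the locator shows the guest heading back."""
    if room.guest_distance_miles <= APPROACH_RADIUS_MILES:
        room.target_temp_f = COMFORT_TEMP_F
    else:
        room.target_temp_f = ECO_TEMP_F
    return room.target_temp_f
```

In practice the interesting work is in the plumbing (reliable location events, privacy consent, device connectivity), not in the rule itself.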
We interviewed more than 2,200 executives from 53 countries whose companies had annual revenues of at least $500 million and found that executive confidence had dropped a stunning 15 percentage points from the year before. These company leaders said they are no better equipped to handle the changes coming their way today than they were in 2007, when we first conducted this survey.

Back in 2007, being a digital company was often seen as synonymous with using information technology. Today, digital has come to mean having an organizational mindset that embraces constant innovation, flat decision making and the integration of technology into all phases of the business. This is a laudable change; however, in many companies, workforce skills and organizational capabilities have not kept pace. As the definition of digital has grown more expansive, company leaders have recognized a gap between the digital ideal and their digital reality.

See also: Digital Risk Profiling Transforms Insurance

The ideal is an organization in which everyone has bought into the digital agenda and is capable of supporting it. What does this look like? It’s a company in which the workforce is tech-fluent, with a culture that encourages the kind of collaboration that supports the adoption of digital initiatives. The organizational structure and systems enable leaders to make discerning choices about where to invest in new technologies. The company applies its talent and capabilities to create the best possible user experiences for all of its customers and employees.

Simply upgrading your IT won’t get you there. Instead of spending indiscriminately, start by identifying a tangible business goal that addresses a problem that cannot be solved with existing technology or past techniques. Then develop the talent, digital innovation capabilities and user experience to solve it. These three areas are where the new demands of digital competence are most evident.
They are all equally important; choosing to focus on just one or two won’t be enough. Our findings from 10 years of survey data suggest the organizations that can best unite talent, digital innovation capabilities and user experience into a seamless, integrated whole have a higher Digital IQ and are generally further along in their transformation. Our data also shows that the companies that use cross-functional teams and agile approaches, prioritize innovation with dedicated resources and better understand human experience, among other practices, have financial performance superior to that of their peers. It’s time for company leaders to build their digital confidence and their digital acumen; they can’t afford to wait.

Getting Tech-Savvy

“We are now moving into a world with this innovation explosion, where we need full-stack businesspeople,” says Vijay Sondhi, senior vice president of innovation and strategic partnerships at Visa, drawing an analogy to the so-called full-stack engineers who know technology at every level. “We need people who understand tech, who understand business, who understand strategy. Innovation is so broad-based and so well stitched together now that we’re being forced to become much better at multiple skill sets. That’s the only way we’re going to survive and thrive.”

In the past, digital talent could lie within the realm of specialists. Today, having a baseline of tech and design skills is a requirement for every employee. Yet overall digital skill levels have declined even further since our last report, published in 2015. Then, survey respondents said that skills in their organization were insufficient across a range of important areas, including cybersecurity and privacy, business development of new technologies and user experience and human-centered design. In fact, lack of properly skilled teams was cited this year as the No.
1 hurdle to achieving expected results from digital technology investments; 61% of respondents named it as an existing or emerging barrier. And 25% of respondents said they used external resources, even when they had skilled workers in-house, because it was too difficult or too slow to work with internal teams.

The skills gap is significant, and closing it will require senior leaders to commit to widespread training. They need to teach employees the skills to harness technology, which may include, for example, a new customer platform or an artificial intelligence-supported initiative. They will also need to cross-train workers to be conversant in disciplines outside their own, as well as in skills that can support innovation and collaboration, such as agile approaches or design thinking. Digital change, says Marriott’s Corbin, is driven by using technology in ways that empower human moments. “Rather than replace (human interactions), we are actually finding it’s improving them. We need the human touch to be powered by digital.”

One way that companies can accomplish these goals is by creating a cross-discipline group of specialists located in close proximity (we refer to this as a sandbox), whether physically or virtually, so each can observe how the others work. Such teams encourage interaction, collaboration, freedom and safety among a diverse group of individuals. Rather than working in isolation or only with peer groups, members develop a common working language that allows for the seamless collaboration and increased efficiency vital to moving at the speed of technology. This approach avoids the typical workplace dysfunction that comes with breaking down silos: Because business issues are no longer isolated within one discipline but rather intertwined across many, colleagues from disparate parts of the organization are able to better understand one another and collaborate to come up with creative solutions.
Part product development and part project management, the sandbox approach enables your workforce to visualize the journey from conception to prototype to revelation in one continuous image, helping spread innovation throughout the organization. The culture of collaboration can speed the adoption of emerging technologies. For example, this approach enabled the Make-A-Wish Foundation to bring employees together from across the organization, including some whose role in developing a new tech-based feature may not have been obvious, such as a tax expert and a lawyer. In just three months using this approach, the foundation created and operationalized a crowdfunding platform to benefit sick children.

Investing in the Future

At GE Healthcare, engineers are experimenting with augmented reality and assistant avatars. “Part of my job is to help pull in (great innovations) and apply them through a smart architecture,” says Jon Zimmerman, GE Healthcare’s general manager of value-based care solutions. “The innovations must be mobile native because … our job is to be able to serve people wherever they are. And that is going to include more and more sensors on bodies and, if you will, digital streaming so people can be monitored just as well as a jet engine can be monitored.”

Amid an increasingly crowded field of emerging technologies, companies need strong digital innovation capabilities to guide their decision making. Yet this achievement often proves challenging as a result of organizational and financial constraints. Our survey revealed that fewer companies today have a team dedicated to exploring emerging technologies than was the case in years past. Many are relying on ad hoc teams or outsourcing. Moreover, 49% of companies surveyed said they still determine their adoption of new technologies by evaluating the latest available tools, rather than by evaluating how the technology can meet a specific human or business need.
Equally troubling is that spending on emerging technologies is not much greater today, relative to overall digital technology budgets, than it was a decade ago. In 2007, the average investment in emerging technology was roughly 17% of technology budgets, a surprisingly robust figure at the time. Fast-forward 10 years, and that rate has grown to only about 18%, which may well be inadequate.

It’s time to change these trends. You’ve identified a problem that existing technology cannot solve, but you shouldn’t just throw money at every shiny new thing. A digital innovation capability must become a central feature of any transformation effort. This approach goes beyond simply evaluating what to buy or where to invest to include how best to organize internal and external resources to find the emerging technologies that most closely match the direction and goals of the business.

Nearly every company is experimenting with what we call the “essential eight” new technologies: the internet of things (IoT), artificial intelligence (AI), robotics, drones, 3D printing, augmented reality (AR), virtual reality (VR) and blockchain. The key is to have a dedicated in-house team with an accountable, systematic approach to determining which of these technologies is critical to evolving the business digitally and which, ultimately, will end up as distractions that provide little value to the overall operation. This approach should include establishing a formal listening framework, learning the true impact of bleeding-edge technologies, sharing results from pilots and quickly scaling throughout the enterprise.

Perhaps most importantly, organizations need to have a certain tolerance for risk and failure when evaluating emerging technologies. Digital transformation requires organizations to be much more limber and rapid in their decision making. Says GE Healthcare’s Zimmerman, “One of our cultural pillars is to embrace constructive conflict.
That means that when an organization transitions or transforms, things are going to be different tomorrow than they were yesterday. You must get comfortable with change and be open to the differing thoughts and diverse mind-sets that drive it.”

See also: Systematic Approach to Digital Strategy

In a promising development, signs indicate that companies are starting to focus on bringing digital innovation capabilities in-house. According to the New York Times, investments by non-technology companies in technology startups grew to $125 billion in 2016, from just $20 billion five years ago. The Times, citing Bloomberg data, also noted that the number of technology companies sold to non-technology companies in 2016 surpassed intra-industry acquisitions for the first time since the internet era began. Walmart, General Motors, Unilever and others are among the non-technology giants that made startup acquisitions last year. General Electric, whose new tagline is, “The digital company. That’s also an industrial company,” spent $1.4 billion in September 2016 buying two 3D printing businesses in Europe.

Other companies are engaging in innovative partnerships. At the annual Consumer Electronics Show in January 2017, Visa, Honda and IPS Group — a developer of internet-enabled smart parking meters — teamed up to unveil a digital technology that lets drivers pay their parking meter tab via an app in the car’s dashboard. By “tokenizing” the car, or allowing it to provision and manage its own credit card credential, they essentially make it an IoT device on wheels. “The car becomes a payment device,” explains Visa’s Sondhi. “And taking it even further, we can turn it into a smart asset by publishing information that’s related to the car onto the blockchain.
This can enable a whole host of tasks to be simplified and served up to the driver, such as pushing competitive insurance rates or automatically paying annual registration fees.”

Solving for “X”

At United Airlines, Ravi Simhambhatla, vice president of commercial technology and corporate systems, views digital innovation as a way to break free from habits ingrained in his company over nine decades because they are no longer relevant to its customers and employees. The company plans to use machine learning to create personalized experiences for its customers. For example, when someone books a flight to San Francisco, the company's algorithm will know if that person is a basketball fan and, if so, offer Golden State Warriors tickets. “What we have been doing is really looking at our customer and employee journeys with regard to the travel experience and figuring out how we can apply design thinking to those journeys,” says Simhambhatla. “And, as we map out these journeys, we are focused on imagining how, if we had a clean slate, we would build them today.”

With the right digital skills and capabilities comes great opportunity to improve the experience of both your employees and your customers. One constant that emerges from 10 years of Digital IQ surveys is that companies that focus on creating better user experiences report stronger financial performance. But, all too often, user experience is pushed to the back burner of digital priorities. Just 10% of respondents to this year’s survey ranked creating better customer experiences as their top priority, down from 25% a year ago. This imbalance between respondents’ focus on experience and its importance to both customers and employees has far-reaching effects. It creates problems in the marketplace, slows the assimilation of emerging technologies and hinders the ability of organizations to anticipate and adapt to change.
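The booking-triggered offer United describes can be reduced to a toy rule for illustration. Everything here is a hypothetical stand-in: the team table and interest profile are invented, and a production system would score offers with a learned model of each customer rather than a hard-coded lookup.

```python
# Hypothetical catalog: destination city -> (local team, sport).
# Invented for illustration; not United's actual data or logic.
TEAMS_BY_CITY = {
    "San Francisco": ("Golden State Warriors", "basketball"),
    "Chicago": ("Chicago Bulls", "basketball"),
}


def suggest_offer(destination, interests):
    """Return a ticket offer if the destination has a team the flyer follows."""
    entry = TEAMS_BY_CITY.get(destination)
    if entry is None:
        return None
    team, sport = entry
    return f"{team} tickets" if sport in interests else None
```

The design point is the one in the article: the trigger is the customer's journey (a booking), and the content is personalized from what the company already knows about that customer.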
Part of the reason user experience ranks as such a low priority is the fact that CEOs and CIOs, the executives who most often drive digital transformation, are much less likely to be responsible for customer-facing services and applications than for digital strategy investments. As a result, they place a higher priority on revenue growth and increased profitability than on customer and employee experiences.

User experience is also downgraded because getting it right is extremely difficult. It is expensive, outcome-focused as opposed to deadline-driven and fraught with friction. Yet, unlike so many other aspects of technological change, how organizations shape the human experience is completely within their control.

Companies need to connect the technology they are seeking to deploy with the behavior change they are looking to create. Making this connection will only become more critical as emerging technologies such as IoT, AI and VR grow to define the next decade of digital. These — and other technologies that simultaneously embrace consumers, producers and suppliers — will amplify the impact of the distinct behaviors and expectations of these groups on an organization’s digital transformation. Companies that focus too narrowly on small slivers of the customer experience will struggle to adapt, but experience-and-outcome companies that seamlessly handle multiple touch points across the customer journey will succeed.

That’s because, when done right, the customer and employee experience translates great strategy, process and technology into something that solves a human or business need. You have the skills and the capabilities; now you need to think creatively about how to use them to improve the user experience in practical yet unexpected ways. Says United’s Simhambhatla, “To me, Digital IQ is all about finding sustainable technology solutions to remove the stress from an experience.
This hinges on timely and contextually relevant information and being able to use technology to surprise and delight our customers and, equally, our employees.”

The Human Touch

When talent, innovation and experience come together, it changes the way your company operates. Your digital acumen informs what you do and how you do it. For example, Visa realized back in 2014 that digital technology was changing not only its core business but also those of its partners so rapidly that it needed to bring its innovation capabilities in-house or risk being too dependent on external sources. It launched its first Innovation Center in 2014; the company now has eight such centers globally, and more are planned.

Visa’s Innovation Centers are designed as collaborative, co-creation facilities for the company and its clients. “The idea was that the pace of change was so fast that we couldn’t develop products and services in a vertically integrated silo. We want the Innovation Centers to be a place where our clients could come in, roll up their sleeves, work with us, and build solutions rapidly within our new, open network,” says Visa’s Sondhi. “The aim is to match the speed and simplicity of today’s social- and mobile-first worlds by ideating with clients to quickly deploy new products into the marketplace in weeks instead of months or quarters.”

See also: Huge Opportunity in Today’s Uncertainty

Across industries, company leaders have clearly bought into the importance of digital transformation: Sixty-eight percent of our respondents said their CEO is a champion for digital, up from just one-third in 2007. That’s a positive development. But now executives need to move from being champions to leading a company of champions. Understanding what drives your customers’ and employees’ success and how your organization can apply digital technology to facilitate it with a flexible, sustainable approach to innovation will be the deeper meaning of Digital IQ in the next decade.
“It’s the blend that makes the magic,” says GE Healthcare’s Zimmerman. “It’s the high-impact technological innovations, plus the customer opportunities, plus the talent. You have to find a way to blend those things in a way that the markets can absorb, adopt, and gain value from in order to create a sustainable virtuous cycle.”

This article was written by Chris Curran and Tom Puthiyamadam.

Chris Curran

Chris Curran is a principal and chief technologist for PwC's advisory practice in the U.S. Curran advises senior executives on their most complex and strategic technology issues and has global experience in designing and implementing high-value technology initiatives across industries.

Machine Learning – Art or Science?

Is machine learning really bias-free? And how can we leverage this tool much more consciously than we do now?

The surge of big data and the challenge of confirmation bias have led data scientists to seek a methodological approach to uncover hidden insights. In predictive analytics, they often turn to machine learning to save the day. Machine learning seems to be an ideal candidate to handle big data using training sets. It also enjoys a strong scientific scent by making data-driven predictions. But is machine learning really bias-free? And how can we leverage this tool more consciously?

Why Science: We often hear that machine-learning algorithms learn and make predictions on data. As such, they are supposedly less exposed to human error and biases. We humans tend to seek confirmation of what we already think or believe, leading to confirmation bias that makes us overlook facts that contradict our theory and overemphasize ones that affirm it. In machine learning, the data is what teaches us, and what could be purer than that? When using a rule-based algorithm or expert system, we are counting on the expert to make up the “right” rules. We cannot avoid having the expert's judgments and positions infiltrate such rules. The study of intuition would go even further to say that we want the expert’s experiences and opinions to influence these rules — they are what make him/her an expert! Either way, when working our way bottom-up from the data, using machine-learning algorithms, we seem to have bypassed this bias.

See also: Machine Learning: a New Force

Why Art: Facts are not science; neither is data. We invent scientific theories to give data context and explanation to help us distinguish causation from correlation. The apple falling on Newton’s head is a fact; gravity is the theory that explains it. But how do we come up with the theory? Is there a scientific way to predict “Eureka!” moments? We test assumptions using scientific tools, but we don’t generate assumptions that way — at least not innovative ones that manifest from out-of-the-box thinking.
Art, on the other hand, applies imaginative skill to express and create something. In behavioral analytics, it can take the form of a rational or irrational human behavior. The user clicking on content is fact; the theory that explains causation could be that it answered a question the user was seeking or that it relates to an area of interest to the user based on previous actions. The inherent ambiguity of human behaviors — and even more of our causation or motivation — gives art its honorable place in predictive analytics. Machine learning is the art of induction. Even unsupervised learning uses objective tools that were chosen, tweaked and validated by a human, based on his/her knowledge and creativity.

Schrödinger: Another way is to think of machine learning as both an art and a science — much like Schrödinger’s cat (which is both alive and dead), the Buddhist middle way or quantum physics that tells us light is both a wave and a particle. At least, until we measure it. You see, if we use scientific tools to measure the predictiveness of a machine-learning-based model, we subscribe to the scientific approach, giving our conclusions some sort of professional validation. Yet if we focus on measuring the underlying assumptions or the representation or evaluation method, we realize the model is only as “pure” as its creators. In behavioral analytics, a lot rides on the interpretation of human behavior into quantifiable events. This piece stems from the realm of art. When merging behavioral analytics with scientific facts — as often occurs when using medical or health research — we truly create an artistic science or a scientific art. We can never again separate the scientific nature from the behavioral nurture.

Practical Implementation

While this might be an interesting philosophical or academic discussion, the purpose here is to help with practical tools and tips.
So what does this mean for people developing machine-learning-based models or relying on those models for behavioral analytics?
  1. Invest in the methodology. Data is not enough. The theory that narrates the data is what gives it context. The choices you make along the three stages of representation, evaluation and optimization are susceptible to bad art. So, when in need of a machine-learning model, consult with a variety of experts about choosing the best methodology for your situation before rushing to develop something.
  2. Garbage in, garbage out. Machine learning is not alchemy. The model cannot turn coal into diamond. Preparing the data is often more art (or “black art”) than science, and it takes up most of the time. Keep a critical eye out for what goes into the model you are relying on, and be as transparent about it as possible if you are on the designing side. Remember that more relevant data beats smarter algorithms any day.
  3. Data preparation is domain-specific. There is no way to fully automate data preparation (i.e. feature engineering). Some features may only add value in combination with others, creating new events. Often, these events need to make product or business sense just as much as they need to make algorithmic sense. Remember that feature design or events extraction requires a very different skill than modeling.
  4. The key is iterations across the entire chain. You collect raw data, prepare it, learn and optimize it, test and validate it and finally put it to use in a product or business context. But this cycle is only the first iteration. A well-built model often sends you back to re-collect slightly different raw data; slice it from another angle; model, tweak and validate it differently; and even use it differently. Your ability to foster collaboration across this chain, especially where it involves Martian modelers and Venusian marketers, is key!
  5. Make your assumptions carefully. Archimedes said, “Give me a lever long enough and a fulcrum on which to place it and I shall move the world.” Machine learning is a lever, not magic. It relies on induction. The knowledge and creative assumptions you make going into the process determine where you stand. The science of induction will take care of the rest — provided you choose the right lever (i.e. methodology). But it’s your artistic judgment that decides on the rules of engagement.
  6. If you can, get experimental data. Machine learning can help predict results based on a training data set. Split testing (aka A/B testing) is used for measuring causal relationships, and cohort analysis helps split and tailor solutions per segment. Combining experimental data from split testing and cohort analysis with machine learning can prove more efficient than sticking to one or the other. The way you choose to integrate these two approaches is itself a creative act.
  7. Contamination alert! Do not let the artistic process of tweaking the algorithm contaminate your scientific testing of its predictiveness. Remember to keep complete separation of training and test sets. If possible, do not expose the test set to the developers until after the algorithm is fully optimized.
  8. The king is dead, long live the king! The model (and its underlying theory) is only valid until a better one comes along. If you don’t want to be the dead king, it is a good idea to start developing the next generation of the model at the moment the previous one is released. Don’t spend your energy defending your model; spend your energy trying to replace it. The longer you fail, the stronger it becomes…
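The separation urged in point 7 can be illustrated with a minimal, library-free sketch. The "model" here is just a mean predictor; the point is the workflow: all tuning happens on the training set, and the test set is touched exactly once, at the end.

```python
import random


def train_test_split(data, test_fraction=0.25, seed=42):
    """Deterministically shuffle and split data into train/test partitions."""
    shuffled = data[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]


def fit_mean(train):
    """A trivial 'model': predict the mean of the training values."""
    return sum(train) / len(train)


def mse(prediction, test):
    """Mean squared error of a constant prediction on held-out data."""
    return sum((x - prediction) ** 2 for x in test) / len(test)


data = list(range(100))
train, test = train_test_split(data)
model = fit_mean(train)   # all fitting and tweaking uses `train` only
score = mse(model, test)  # `test` is consulted once, after optimization
```

In a real project the same discipline applies to feature engineering and hyperparameter tweaking: if any decision is informed by the test set, its score no longer measures predictiveness.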
See also: Machine Learning to the Rescue on Cyber?

Machine-learning algorithms are often used to help make data-driven decisions. But machine-learning algorithms are not all science, especially when applied to behavioral analytics. Understanding the “artistic” side of these algorithms and its relationship with the scientific one can help make better machine-learning algorithms and more productive use of them.

Oren Steinberg

Oren Steinberg is an experienced CEO and entrepreneur with a demonstrated history of working in the big-data, digital-health and insurtech industries.

Suicide and the Perspective of Truth

It seems so obvious: The hand of the taker is responsible for the deliberate action of suicide. But that perspective is too limited.

Let’s talk about an obvious truth: Suicide is a choice, unlike cancer. People with cancer don’t make a conscious choice; they don’t take a deliberate action. But people commit suicide. Over the last two years, two beloved actors died. We offered genuine respect and love to Alan Rickman, who, it was said, succumbed to cancer. “He lost his battle,” the headlines read. By contrast, our response to Robin Williams' death was much less clear. He “committed” suicide. Many headlines added that he hanged himself.

In the suicide-prevention community, many have discontinued the use of the word “commit,” but many have not. I mean, it kind of works, right? This isn’t the year 1800 — we don’t think of suicide as a sin or crime any more. But we do think of it as a choice, as a deliberate action. Isn’t that right?

Earlier this year, hip hop star B.o.B made the headlines. If you didn’t already know him from songs like “Magic” and “Airplanes,” you may have heard about his epic Twitter feud with astrophysicist Neil deGrasse Tyson. It started at Stone Mountain, which overlooks metro Atlanta all the way up to Sandy Springs. B.o.B tweeted, “The cities in the background are approx. 16miles apart....where is the curve? please explain this.”

Look, it’s obvious the Earth is flat. Going back a thousand years, the Earth would in fact have looked downright flat to every one of us. From the every-man perspective, with a limited view, this appeared to be obvious for thousands of years. Of course, there have always been signs that our limited view as humans was, well, limited. The first clue is that in every lunar eclipse we see the shadow of the Earth cast against the moon. And we see a circle. Tyson also explained to B.o.B that the Foucault pendulum demonstrates that the Earth rotates. These clues could have been put together (and were) long before satellites or space travel. The conclusion: The world must be a ball!
Apparently, this was way too much looking through a glass darkly and didn’t persuade B.o.B. He believes the pictures of the round Earth are the CGI creations of a conspiracy, and, in reality, most humans have not seen this view with their own eyes. However, we could try to change his perspective. Instead of 16 miles across, let’s go one more mile. Let’s make it 17 miles — but straight up. Now, the curvature of the great, great big planet begins to emerge. The “Aha!” moment.

See also: Blueprint for Suicide Prevention

In life, we don’t always get the 17-mile perspective. Sometimes we fall one mile short. What seems obvious could not be more wrong, and sometimes, unlike with B.o.B's tweets, there are consequences. I wish we could zip up 17 miles to see the true perspective on suicide, but it’s going to take some faith. Let’s look at the clues and what doesn’t fit, like that nagging circle shadow of the Earth on the moon.

The approach I describe in the caption sounded really good… until the moment the platform underneath me dropped away. I was immediately slipping on the bar, struggling to hold on, my hands sweaty. I doubled down on my grip, but, quickly, my muscles began to ache, and my forearms ballooned like Popeye's. The pain intensified as the seconds passed. I relaxed my breathing and went to my happy place (a beach in my mind with gentle waves lapping). That strategy was good for a couple seconds, but it still didn't work. Finally, I was simply repeating to myself, “Hold on one more second, one more second.” It was a long way to fall, so I desperately wanted to hang on. But I could not. Gravity and fatigue forced me to succumb to the pain. You can watch my embarrassing fall.

Pain is not a choice

Many of us somehow think we've experienced enough pain through the normal ups and downs of being human that we have at least some insight into what leads people to suicide. One of America’s top novelists, William Styron, said: Not a chance.
His book, “Darkness Visible,” about his own debilitating and suicidal depression, is titled after John Milton’s description of Hell in “Paradise Lost”:
No light; but rather darkness visible
Where peace and rest can never dwell, hope never comes
That comes to all, but torture without end
One of our most talented writers ever, Styron said his depression was so mysteriously painful and elusive as to verge on being beyond description. He wrote, “It thus remains nearly incomprehensible to those who haven’t experienced it in its extreme mode.” If you haven’t experienced this kind of darkness and anguish, the clinical phrase “psychic distress” probably doesn’t help much. Styron offers the metaphor of physical pain to help us grasp what it’s like. But, frankly, many with lived experience say they would definitely prefer physical pain to this anguish.
Putting the Clues Together
So, some of you are thinking, I get what you are saying, but my loved one didn’t fall passively. I’m sure they were in pain, but they took a deliberate action. They pulled a trigger. They ingested a poison. So, let’s put these two clues together but reverse the order: the pain, and the response. After my first marathon, when my legs had cramped badly, I decided to try an ice bath and jumped right in. I bolted. I was propelled. Exiting the tub filled every neural pathway of my mind, and my hands and body flailed as if completely disconnected from my conscious decision-making process. My example references an acute pain, but extend that into a chronic, day-over-day anguish that blinds the person to the possibility of a better day. Perhaps people do not choose suicide so much as they finally succumb because they just don’t have the supports, resources, hope, etc. to hold on any longer. Their strength is extinguished and utterly fails.
See also: Employers’ Role in Preventing Suicide
Is Suicide a Choice?
The everyman perspective is that suicide is a choice. Robin Williams committed suicide. 
And it’s the hand of the taker that is completely responsible for the choice and deliberate action. It seems so obvious. But it’s the limited, 16-mile perspective, the one we all have, and it’s one mile short of the truth. Someday, we’ll have the space-station view — and with it the solutions to create Zero Suicide. But, for now, it’s time we study the signs, trust the clues and be brave enough to stand behind them. Here’s a different headline: “Robin Williams lost his battle. Tragically, he succumbed and died of suicide.” Loving, respectful, true. When you can’t hang on any longer, you can’t hang on. As I watch the video of my fall on Fear Factor, it looks like my right hand is still holding on to an invisible bar. I never, ever stopped choosing to hang on. But I fell. Believe the signs. Change your perspective. Use your voice. Let’s change that great big beautiful round planet we live on, and let’s do it together by doubling down on our efforts to help others hold on.

David Covington


David Covington, LPC, MBA is CEO and president of RI International, a partner in Behavioral Health Link, co-founder of CrisisTech 360, and leads the international initiatives “Crisis Now” and “Zero Suicide.”

Strategist’s Guide to Artificial Intelligence

As you contemplate the introduction of artificial intelligence, you should articulate what mix of three approaches works best for you.

Jeff Heepke knows where to plant corn on his 4,500-acre farm in Illinois because of artificial intelligence (AI). He uses a smartphone app called Climate Basic, which divides Heepke’s farmland (and, in fact, the entire continental U.S.) into plots that are 10 meters square. The app draws on local temperature and erosion records, expected precipitation, soil quality and other agricultural data to determine how to maximize yields for each plot. If a rainy cold front is expected to pass by, Heepke knows which areas to avoid watering or irrigating that afternoon. As the U.S. Department of Agriculture noted, this use of artificial intelligence across the industry has produced the largest crops in the country’s history. Climate Corp., the Silicon Valley–based developer of Climate Basic, also offers a more advanced AI app that operates autonomously. If a storm hits a region, or a drought occurs, it lowers local yield numbers. Farmers who have bought insurance to supplement their government coverage get a check; no questions asked, no paper filing necessary. The insurance companies and farmers both benefit from having a much less labor-intensive, more streamlined and less expensive automated claims process. Monsanto paid nearly $1 billion to buy Climate Corp. in 2013, giving the company’s models added legitimacy. Since then, Monsanto has continued to upgrade the AI models, integrating data from farm equipment and sensors planted in the fields so that they improve their accuracy and insight as more data is fed into them. One result is a better understanding of climate change and its effects — for example, the northward migration of arable land for corn, or the increasing frequency of severe storms. Applications like this are typical of the new wave of artificial intelligence in business. AI is generating new approaches to business models, operations and the deployment of people that are likely to fundamentally change the way business operates. 
And if it can transform an earthbound industry like agriculture, how long will it be before your company is affected?
An Unavoidable Opportunity
Many business leaders are keenly aware of the potential value of artificial intelligence but are not yet poised to take advantage of it. In PwC’s 2017 Digital IQ survey of senior executives worldwide, 54% of the respondents said they were making substantial investments in AI today. But only 20% said their organizations had the skills necessary to succeed with this technology (see “Winning With Digital Confidence,” by Chris Curran and Tom Puthiyamadam). Reports on artificial intelligence tend to portray it as either a servant, making all technology more responsive, or an overlord, eliminating jobs and destroying privacy. But for business decision makers, AI is primarily an enabler of productivity. It will eliminate jobs, to be sure, but it will also fundamentally change work processes and might create jobs in the long run. The nature of decision making, collaboration, creative art and scientific research will all be affected; so will enterprise structures. Technological systems, including potentially your products and services, as well as your office and factory equipment, will respond to people (and one another) in ways that feel as if they are coming to life. In their book “Artificial Intelligence: A Modern Approach” (Pearson, 1995), Stuart Russell and Peter Norvig define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.” The most critical difference between AI and general-purpose software is in the phrase “take actions.” AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore can’t anticipate. 
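Russell and Norvig’s definition can be made concrete in a few lines of code. The sketch below, a toy thermostat agent, is purely illustrative (nothing like it appears in the article): the agent maps each percept (a temperature reading) to an action, and the action in turn changes the environment that produces the next percept.

```python
# Illustrative sketch of an intelligent agent per Russell and Norvig:
# it receives percepts from the environment and takes actions that
# affect that environment. The thermostat rules are hypothetical.

def thermostat_agent(percept):
    """Map a percept (room temperature in degrees C) to an action."""
    if percept < 20:
        return "heat_on"
    elif percept > 24:
        return "cool_on"
    return "idle"

class Room:
    """A toy environment that the agent's actions feed back into."""
    def __init__(self, temperature):
        self.temperature = temperature

    def sense(self):
        return self.temperature  # the percept

    def act(self, action):
        if action == "heat_on":
            self.temperature += 1
        elif action == "cool_on":
            self.temperature -= 1

room = Room(temperature=16)
for _ in range(10):  # the percept-action loop
    room.act(thermostat_agent(room.sense()))

print(room.temperature)  # the room settles into the 20-24 band
```

The point of the loop is the feedback: each action changes the very signals the programmer never directly controls, which is what separates an agent from a fixed script.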
The fastest-growing category of AI is machine learning, or the ability of software to improve its own activity by analyzing interactions with the world at large (see “The Road to Deep Learning,” below). This technology, which has been a continual force in the history of computing since the 1940s, has grown dramatically in sophistication during the last few years.
See also: Seriously? Artificial Intelligence?
The Road to Deep Learning
This may be the first moment in AI’s history when a majority of experts agree the technology has practical value. From its conceptual beginnings in the 1950s, led by legendary computer scientists such as Marvin Minsky and John McCarthy, its future viability has been the subject of fierce debate. As recently as 2000, the most proficient AI system was roughly comparable, in complexity, to the brain of a worm. Then, as high-bandwidth networking, cloud computing and high-powered graphics-enabled microprocessors emerged, researchers began building multilayered neural networks — still extremely slow and limited in comparison with natural brains, but useful in practical ways. The best-known AI triumphs — in which software systems beat expert human players in Jeopardy, chess, Go, poker and soccer — differ from most day-to-day business applications. These games have prescribed rules and well-defined outcomes; every game ends in a win, loss or tie. The games are also closed-loop systems: They affect only the players, not outsiders. The software can be trained through multiple failures with no serious risks. You can’t say the same of an autonomous vehicle crash, a factory failure or a mistranslation. There are currently two main schools of thought on how to develop the inference capabilities necessary for AI programs to navigate through the complexities of everyday life. In both, programs learn from experience — that is, the responses and reactions they get influence the way the programs act thereafter. 
The first approach uses conditional instructions (also known as heuristics) to accomplish this. For instance, an AI bot would interpret the emotions in a conversation by following a program that instructed it to start by checking for emotions that were evident in the recent past. The second approach is known as machine learning. The machine is taught, using specific examples, to make inferences about the world around it. It then builds its understanding through this inference-making ability, without following specific instructions to do so. The Google search engine’s “next-word completion” feature is a good example of machine learning. Type in the word artificial, and several suggestions for the next word will appear, perhaps intelligence, selection and insemination. No one has programmed the search engine to seek those complements. Google chose the strategy of looking for the three words most frequently typed after artificial. With huge amounts of data available, machine learning can provide uncanny accuracy about patterns of behavior. The type of machine learning called deep learning has become increasingly important. A deep learning system is a multilayered neural network that learns representations of the world and stores them as a nested hierarchy of concepts many layers deep. For example, when processing thousands of images, it recognizes objects based on a hierarchy of simpler building blocks: straight lines and curved lines at the basic level, then eyes, mouths and noses, and then faces, and then specific facial features. Besides image recognition, deep learning appears to be a promising way to approach complex challenges such as speech comprehension, human-machine conversation, language translation and vehicle navigation (see Exhibit A). Though it is the closest machine to a human brain, a deep learning neural network is not suitable for all problems. 
It requires multiple processors with enormous computing power, far beyond conventional IT architecture; it will learn only by processing enormous amounts of data; and its decision processes are not transparent. News aggregation software, for example, had long relied on rudimentary AI to curate articles based on people’s requests. Then it evolved to analyze behavior, tracking the way people clicked on articles and the time they spent reading, and adjusting the selections accordingly. Next it aggregated individual users’ behavior with the larger population, particularly those who had similar media habits. Now it is incorporating broader data about the way readers’ interests change over time, to anticipate what people are likely to want to see next, even if they have never clicked on that topic before. Tomorrow’s AI aggregators will be able to detect and counter “fake news” by scanning for inconsistencies and routing people to alternative perspectives. AI applications in daily use include all smartphone digital assistants, email programs that sort entries by importance, voice recognition systems, image recognition apps such as Facebook Picture Search, digital assistants such as Amazon Echo and Google Home and much of the emerging Industrial Internet. Some AI apps are targeted at minor frustrations — DoNotPay, an online legal bot, has reversed thousands of parking tickets — and others, such as connected car and language translation technologies, represent fundamental shifts in the way people live. A growing number are aimed at improving human behavior; for instance, GM’s 2016 Chevrolet Malibu feeds data from sensors into a backseat driver–like guidance system for teenagers at the wheel. Despite all this activity, the market for AI is still small. Market research firm Tractica estimated 2016 revenues at just $644 million. But it expects hockey stick-style growth, reaching $15 billion by 2022 and accelerating thereafter. 
In late 2016, there were about 1,500 AI-related startups in the U.S. alone, and total funding in 2016 reached a record $5 billion. Google, Facebook, Microsoft, Salesforce.com and other tech companies are snapping up AI software companies, and large, established companies are recruiting deep learning talent and, like Monsanto, buying AI companies specializing in their markets. To make the most of this technology in your enterprise, consider the three main ways that businesses can or will use AI:
  • Assisted intelligence, now widely available, improves what people and organizations are already doing.
  • Augmented intelligence, emerging today, enables organizations and people to do things they couldn’t otherwise do.
  • Autonomous intelligence, being developed for the future, creates and deploys machines that act on their own.
See also: Is AI the End of Jobs or a Beginning?
Many companies will make investments in all three during the next few years, drawing from a wide variety of applications (see Exhibit 1). They complement one another but require different types of investment, different staffing considerations and different business models.
Assisted Intelligence
Assisted intelligence amplifies the value of existing activity. For example, Google’s Gmail sorts incoming email into “Primary,” “Social” and “Promotions” default tabs. The algorithm, trained with data from millions of other users’ emails, makes people more efficient without changing the way they use email or altering the value it provides. Assisted intelligence tends to involve clearly defined, rules-based, repeatable tasks. These include automated assembly lines and other uses of physical robots; robotic process automation, in which software-based agents simulate the online activities of a human being; and back-office functions such as billing, finance and regulatory compliance. This form of AI can be used to verify and cross-check data — for example, when paper checks are read and verified by a bank’s ATM. Assisted intelligence has already become common in some enterprise software processes. In “opportunity to order” (basic sales) and “order to cash” (receiving and processing customer orders), the software offers guidance and direction that was formerly available only from people. The Oscar W. Larson Co. used assisted intelligence to improve its field service operations. This is a 70-plus-year-old family-owned general contractor, which, among other services to the oil and gas industry, provides maintenance and repair for point-of-sale systems and fuel dispensers at gas stations. One costly and irritating problem is “truck rerolls”: service calls that have to be rescheduled because the technician lacks the tools, parts or expertise for a particular issue. 
After analyzing data on service calls, the AI software showed how to reduce truck rerolls by 20%, a rate that should continue to improve as the software learns to recognize more patterns. Assisted intelligence apps often involve computer models of complex realities that allow businesses to test decisions with less risk. For example, one auto manufacturer has developed a simulation of consumer behavior, incorporating data about the types of trips people make, the ways those affect supply and demand for motor vehicles and the variations in those patterns for different city topologies, marketing approaches and vehicle price ranges. The model spells out more than 200,000 variations for the automaker to consider and simulates the potential success of any tested variation, thus assisting in the design of car launches. As the automaker introduces cars and the simulator incorporates the data on outcomes from each launch, the model’s predictions will become ever more accurate. AI-based packages of this sort are available on more and more enterprise software platforms. Success with assisted intelligence should lead to improvements in conventional business metrics such as labor productivity, revenues or margins per employee and average time to completion for processes. Much of the cost involved is in the staff you hire, who must be skilled at marshaling and interpreting data. To evaluate where to deploy assisted intelligence, consider two questions: What products or services could you easily make more marketable if they were more automatically responsive to your customers? Which of your current processes and practices, including your decision-making practices, would be more powerful with more intelligence?
Augmented Intelligence
Augmented intelligence software lends new capability to human activity, permitting enterprises to do things they couldn’t do before. Unlike assisted intelligence, it fundamentally alters the nature of the task, and business models change accordingly. 
For example, Netflix uses machine learning algorithms to do something media has never done before: suggest choices customers would probably not have found themselves, based not just on the customer’s patterns of behavior but on those of the audience at large. A Netflix user, unlike a cable TV pay-per-view customer, can easily switch from one premium video to another without penalty, after just a few minutes. This gives consumers more control over their time. They use it to choose videos more tailored to the way they feel at any given moment. Every time that happens, the system records that observation and adjusts its recommendation list — and it enables Netflix to tailor its next round of videos to user preferences more accurately. This leads to reduced costs and higher profits per movie, and a more enthusiastic audience, which then enables more investments in personalization (and AI). Left outside this virtuous circle are conventional advertising and television networks. No wonder other video channels, such as HBO and Amazon, as well as recorded music channels such as Spotify, have moved to similar models. Over time, as algorithms grow more sophisticated, the symbiotic relationship between human and AI will further change entertainment industry practices. The unit of viewing decision will probably become the scene, not the story; algorithms will link scenes to audience emotions. A consumer might ask to see only scenes where a Meryl Streep character is falling in love, or to trace a particular type of swordplay from one action movie to another. Data accumulating from these choices will further refine the ability of the entertainment industry to spark people’s emotions, satisfy their curiosity and gain their loyalty. Another current use of augmented intelligence is in legal research. Though most cases are searchable online, finding relevant precedents still requires many hours of sifting through past opinions. 
Luminance, a startup specializing in legal research, can run through thousands of cases in a very short time, providing inferences about their relevance to a current proceeding. Systems like these don’t yet replace human legal research. But they dramatically reduce the rote work conducted by associate attorneys, a job rated as the least satisfying in the U.S. Similar applications are emerging for other types of data sifting, including financial audits, interpreting regulations, finding patterns in epidemiological data and (as noted above) farming. To develop applications like these, you’ll need to marshal your own imagination to look for products, services or processes that would not be possible at all without AI. For example, an AI system can track a wide number of product features, warranty costs, repeat purchase rates and more general purchasing metrics, bringing only unusual or noteworthy correlations to your attention. Are a high number of repairs associated with a particular region, material or line of products? Could you use this information to redesign your products, avoid recalls or spark innovation in some way? The success of an augmented intelligence effort depends on whether it has enabled your company to do new things. To assess this capability, track your margins, innovation cycles, customer experience and revenue growth as potential proxies. Also watch your impact on disruption: Are your new innovations doing to some part of the business ecosystem what, say, ride-hailing services are doing to conventional taxi companies? You won’t find many off-the-shelf applications for augmented intelligence. They involve advanced forms of machine learning and natural language processing, plus specialized interfaces tailored to your company and industry. However, you can build bespoke augmented intelligence applications on cloud-based enterprise platforms, most of which allow modifications in open source code. 
Given the unstructured nature of your most critical decision processes, an augmented intelligence application would require voluminous historical data from your own company, along with data from the rest of your industry and related fields (such as demographics). This will help the system distinguish external factors, such as competition and economic conditions, from the impact of your own decisions. The greatest change from augmented intelligence may be felt by senior decision makers, as the new models often give them new alternatives to consider that don’t match their past experience or gut feelings. They should be open to those alternatives, but also skeptical. AI systems are not infallible; just like any human guide, they must show consistency, explain their decisions and counter biases, or they will lose their value.
Autonomous Intelligence
Very few autonomous intelligence systems — systems that make decisions without direct human involvement or oversight — are in widespread use today. Early examples include automated trading in the stock market (about 75% of Nasdaq trading is conducted autonomously) and facial recognition. In some circumstances, algorithms are better than people at identifying other people. Other early examples include robots that dispose of bombs, gather deep-sea data, maintain space stations and perform other tasks inherently unsafe for people. The most eagerly anticipated forms of autonomous intelligence — self-driving cars and full-fledged language translation programs — are not yet ready for general use. The closest autonomous service so far is Tencent’s messaging and social media platform WeChat, which has close to 800 million daily active users, most of them in China. 
The program, which was designed primarily for use on smartphones, offers relatively sophisticated voice recognition, Chinese-to-English language translation, facial recognition (including suggestions of celebrities who look like the person holding the phone) and virtual bot friends that can play guessing games. Notwithstanding their cleverness and their pioneering use of natural language processing, these are still niche applications, and still very limited by technology. Some of the most popular AI apps, for example, are small, menu- and rule-driven programs, which conduct fairly rudimentary conversations around a limited group of options.
See also: Machine Learning to the Rescue on Cyber?
Despite the lead time required to bring the technology further along, any business prepared to base a strategy on advanced digital technology should be thinking seriously about autonomous intelligence now. The Internet of Things will generate vast amounts of information, more than humans can reasonably interpret. In commercial aircraft, for example, so much flight data is gathered that engineers can’t process it all; thus, Boeing has announced a $7.5 million partnership with Carnegie Mellon University, along with other efforts to develop AI systems that can, for example, predict when airplanes will need maintenance. Autonomous intelligence’s greatest challenge may not be technological at all — it may be companies’ ability to build in enough transparency for people to trust these systems to act in their best interest.
First Steps
As you contemplate the introduction of artificial intelligence, articulate what mix of the three approaches works best for you.
  • Are you primarily interested in upgrading your existing processes, reducing costs and improving productivity? If so, then start with assisted intelligence, probably with a small group of services from a cloud-based provider.
  • Do you seek to build your business around something new — responsive and self-driven products, or services and experiences that incorporate AI? Then pursue an augmented intelligence approach, probably with more complex AI applications resident on the cloud.
  • Are you developing a genuinely new technology? Most companies will be better off primarily using someone else’s AI platforms, but, if you can justify building your own, you may become one of the leaders in your market.
The transition among these forms of AI is not clean-cut; they sit on a continuum. In developing their own AI strategy, many companies begin somewhere between assisted and augmented, while expecting to move toward autonomous eventually (see Exhibit 2). Though investments in AI may seem expensive now, the costs will decline over the next 10 years as the software becomes more commoditized. “As this technology continues to mature,” writes Daniel Eckert, a managing director in emerging technology services for PwC US, “we should see the price adhere toward a utility model and flatten out. We expect a tiered pricing model to be introduced: a free (or freemium model) for simple activities, and a premium model for discrete, business-differentiating services.” AI is often sold on the premise that it will replace human labor at lower cost — and the effect on employment could be devastating, though no one knows for sure. Carl Benedikt Frey and Michael Osborne of Oxford University’s engineering school have calculated that AI will put 47% of the jobs in the U.S. at risk; a 2016 Forrester research report estimated it at 6%, at least by 2025. On the other hand, Baidu Research head (and deep learning pioneer) Andrew Ng recently said, “AI is the new electricity,” meaning that it will be found everywhere and create jobs that weren’t imaginable before its appearance. At the same time that AI threatens the loss of an almost unimaginable number of jobs, it is also a hungry, unsatisfied employer. The lack of capable talent — people skilled in deep learning technology and analytics — may well turn out to be the biggest obstacle for large companies. The greatest opportunities may thus be for independent businesspeople, including farmers like Jeff Heepke, who no longer need scale to compete with large companies, because AI has leveled the playing field. 
It is still too early to say which types of companies will be the most successful in this area — and we don’t yet have an AI model to predict it for us. In the end, we cannot even say for sure that the companies that enter the field first will be the most successful. The dominant players will be those that, like Climate Corp., Oscar W. Larson, Netflix and many other companies large and small, have taken AI to heart as a way to become far more capable, in a far more relevant way, than they otherwise would ever be.

Anand Rao


Anand Rao is a principal in PwC’s advisory practice. He leads the insurance analytics practice, is the innovation lead for the U.S. firm’s analytics group and is the co-lead for the Global Project Blue, Future of Insurance research. Before joining PwC, Rao was with Mitchell Madison Group in London.

How to Build Actionable Analytics?

There are three keys to remember as we start on the path to better analytics so that there are no surprises along the way.

I have gone through a few product implementations using analytics and have come to realize that there is a simple success mantra: the human brain. Humans have an amazing inherent capability to comprehend patterns and apply what we learn. It all starts with how our senses respond and make the IoT within us (which has been designed so meticulously and flawlessly) the most powerful of all IoT applications. So what do we need to mimic the same “sense and respond” mechanism when it comes to our business growth and analytics? How can we predict the success rate before we embark on an analytics journey to grow our business? How much failure should we tolerate before calling our analytics engagements a total disaster?
See also: Applied Analytics Are Key for Progress
There are three keys to remember as we start on this path so that there are no surprises along the way:
1. People: The right talent can bring silos within an organization together. Use your internal business experts or practitioners to define their everyday issues and try to find the origin and impact. What is keeping you up at night? Without a goal, analytics is the magic that will remain inside the genie’s oil lamp. Once you have a goal, prepare to make some changes based on the results of the analytics. If the analytics are not actionable at the end, your goals were inaccurate. If you are a person who says, “I will let analytics tell me what to do. Where do I start?,” I would advise you to start by hiring the right talent who can help you define those goals and underline the problems or establish strategies for product and company growth. Finding answers through analytics will then seem less daunting. Internal and external collaboration and goals are the first milestones.
2. Process: If you are not intending to make business process changes based on feedback from analytics, you are not ready for analytics — whether it’s predictive, AI, IoT, machine learning or blockchain. 
Once you identify your goals, you have established a destination. Now you need the right driver, the right vehicle, sufficient gas and the right path to get there. But if you never intended to make the journey, then the effort behind it is fruitless to begin with. Analytics is just the fuel, so you need a driver who will make the journey. Without actionable outcomes, your analytics will sound like glorified, expensive reporting. Management must be prepared for strategic changes based on what analytics reveals and must expect this to be a continuing effort. Analytics should run parallel to — and a little ahead of — your business so that you have time to put it into action and see whether the results pivot or move ahead.
3. Technology: Questions such as “What can your system do?,” “Can you do social media?” and “When can I get predictions?” will only confuse your analytics journey. Predictions should be the end product, delivered once you can claim success in automating data gathering, modeling, enrichment, pattern detection, deep learning and artificial intelligence. As a business owner, own the process. Own its deficiencies and its growth path so you can forecast where you would like to be. Now rephrase the question: “Can ‘X’ technology help me solve my problem?” Focus on your process and on how the technology can help solve your problem. Solutions, technology and software should be the flexible part — replaceable and enhanceable when your goals change.
See also: Why Data Analytics Are Like Interest
Build or Buy?
If you are not a software or technology company, invest in business experts, people, process changes and customer engagement. Invest less in building something from scratch. Software companies can monetize by reusing their solutions and evolving their products, but core businesses must maintain, support and enhance everything they build. Technologies have limitations, which is why they evolve so frequently. 
To take advantage, buy technologies and find solutions that will give you immediate ROI. But if you like to build from scratch, be prepared to fail, detect the missing ingredients, replace them and move on!

Sri Ramaswamy


Sri Ramaswamy is the founder and CEO of Infinilytics, a technology company offering AI and big data analytics solutions for the insurance industry. As an entrepreneur at the age of 19, she made a brand new product category a huge success in the Indian marketplace.

Producing Data’s Motion Pictures

How is your insurance company instrumented? Could you make money if you learned something in five minutes instead of five weeks?

Reality is tough to capture. It keeps moving. But somehow we’re growing faster and better at capturing it. Consider visual reality. In 200 years, we’ve moved from illustrations and paintings, through still photography and into motion pictures. We then created technologies to transport those motion pictures across space to the places we wanted them. We’re now looking at 4K televisions and talking to family with FaceTime or Skype on displays that have the same or greater resolution than our eyes. Data’s reality is no different. Back in the late 1980s, I did work for a paint manufacturer, trying to monitor the real-time operating conditions in one of its paint plants. We connected some PCs to the plant’s programmable logic controllers and then asked the controllers every 30 seconds, “How are things going? What are you working on?” The controllers spit out lots of data on operating conditions. We charted, we graphed (in real time!), and the plant operators had new insights on how things were going with paint production. We were augmenting the physical instrumentation of the plant with virtual instrumentation. Instrumentation — Data’s Virtual Reality   So how is your insurance company instrumented? Are things running a little hot? Do you find yourself running short on any raw materials? How full is the pipeline? When do you find that out? Is it tucked into a spreadsheet a few weeks after the end of the month? Could you make more money if you found out in five minutes instead of five weeks? Are “modern” insurers still living on static pictures of data’s reality? Insurance leaders are creating real-time instrumentation for their companies, allowing them to open and close everything from granular geographies to wind risk and monitor premium production compared with last week, last month, last quarter, last year, as of today or any day. 
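That ask-every-30-seconds loop is the essence of virtual instrumentation, and it can be sketched in a few lines. This is a minimal illustration, not the original plant code; `read_status` is a hypothetical stand-in for whatever interface a real controller exposes:

```python
import time

def poll_controllers(read_status, interval_seconds=30, cycles=3):
    """Ask the controllers 'How are things going?' on a fixed cadence
    and collect whatever they report -- a virtual-instrumentation loop."""
    readings = []
    for _ in range(cycles):
        readings.append(read_status())  # one status snapshot per cycle
        time.sleep(interval_seconds)
    return readings
```

In practice the collected readings would feed the real-time charts and graphs described above rather than sit in a list.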
To better instrument our companies we need to think about: acquisition and transportation; accuracy; presentation timing and type; automation and cognitive capabilities; and actions and reactions. When you finish this post, I think you’ll agree with me that instrumentation should carry a high priority in insurance’s digital agenda. See also: How Virtual Reality Reimagines Data   Acquisition and Transportation of Data How do we monitor the data in a flow of information in constant motion, not just the discrete sets that are static and in place? First, our goal is NOT to be another weigh station in a step-by-step process. We need to be tapped into the flow without impeding it. To do this, we set up measurement devices that allow us to peek into the flow, draw off the information we need and shuttle it to where we need it. This is not unlike the earliest “vampire” network taps, which clamped onto Ethernet cables to listen in rather than sitting within the circuit itself. There are any number of tools one can use for real-time streaming and visualization, but the key to having any of them work properly is the setup of the data acquisition. A vampire approach allows for real-time monitoring, as opposed to relying on continual requests and responses from data sources. Accuracy of Data One of the challenges in looking at continuous data is that spurious results may throw off the averages, so we need to be careful about outlier events. When looking at real-time data, it is far more likely that outliers will appear. For example, as I was driving the other day, one of the “Your speed is...” signs I passed registered 110 mph. (I’ve driven 110 mph before, but not this day.) It quickly corrected itself to 55 mph. Data “in flight” like that needs the right sampling periodicity to make sure we are capturing the 55s, not the spurious 110s. And the monitoring system obviously needs to be trained on what to notice and what to ignore. Automated removal of outliers helps keep the data pure. 
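One simple way to automate that outlier removal is to compare each new reading with a rolling median of recent trusted readings. This is a minimal sketch under that assumption (the function name and the 50% tolerance are illustrative choices, not a prescribed method):

```python
from collections import deque
from statistics import median

def filter_outliers(stream, window=5, tolerance=0.5):
    """Drop readings that stray from the rolling median by more than
    `tolerance` (a fraction of that median) -- keeping the 55s and
    discarding a spurious 110."""
    recent = deque(maxlen=window)   # last few trusted readings
    clean = []
    for value in stream:
        if recent:
            baseline = median(recent)
            if abs(value - baseline) > tolerance * baseline:
                continue            # spurious spike; ignore it
        recent.append(value)
        clean.append(value)
    return clean
```

Fed the speed-sign sequence 55, 54, 56, 110, 55, this filter passes the 55s through and silently drops the 110.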
Keeping a concrete set of rules regarding data’s use will be very important in allowing people to trust the data when it is presented. Presentation Timing and Type In 2007 and 2008, Starbucks opened stores as part of an undisciplined growth strategy. Eighteen months later, many of them were shuttered in a massive restructuring. In 2011 and 2012, Starbucks was adding stores again, but this time based on GIS traffic-flow data and demographics. Real-time reporting had become a more valued part of the business structure. Former Starbucks CEO Howard Schultz reportedly received store performance numbers as frequently as four times each day. How often an insurer needs data and how it wishes to have information presented is a matter of need and preference, but it can clearly be tied to business strategy. One client we worked with realized that continual data visualization in public locations, such as lobbies and meeting areas, helped the whole community see how important data was to the decision process. Others may wish to keep their data tucked out of sight but still available via tablet or cell phone. Depending on the insurer and the insurer’s reactive capability, they may want feedback every day, every hour or every few minutes. Whether you choose to use dashboards, standard reports or e-mailed updates will also depend on your role and your need to know. Automation and Cognitive Use One of the drawbacks to data visuals of any kind is that they are subject to perspective. Trends and movements can be hard to spot over time. Anyone familiar with Excel line graphs will understand what I’m talking about. The graph below looks fairly flat, but it shows a 5% move from start to finish. Identifying a movement of that size is important. Here is where automation in data’s motion pictures plays an important role. If the system can “learn” what good performance looks like, then it can also improve its ability to communicate vital information in a timely manner. 
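The simplest form of that automation is a numeric check on the drift a flat-looking chart can hide. A minimal sketch, with an assumed 3% alert threshold chosen purely for illustration:

```python
def trend_alert(series, threshold=0.03):
    """Return the start-to-finish fractional change of a series and
    whether it crosses an alert threshold -- catching the kind of 5%
    drift a flat-looking line graph conceals."""
    change = (series[-1] - series[0]) / series[0]
    return change, abs(change) >= threshold
```

A production system would learn the threshold from historical performance rather than hard-code it, but the point stands: the machine notices the 5% move even when the eye does not.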
I was just on a call where we discussed facial recognition in insurance. The use case: teams are working to identify faces and read the emotions on them. If we have tools that can tell whether someone is unhappy, surely we can use those tools to recognize a hidden pattern in our data. Data’s flow won’t just represent current trends; it will also identify oft-hidden patterns. What we think we know from our common snapshot approach to data may be overturned when cognitive capabilities start to bring new insights to our eyes. Once again, data’s motion pictures aren’t just for our own amusement; they greatly enhance our strategies and decisions. Actions and Reactions If I run a chemical plant, I’m deeply concerned with monitoring real-time flow. Every action I take to tune that plant has a reaction. As insurers, we should also be concerned with real-time flow, capturing our understanding of reality. But there is also a historical component to data’s adjustments. In the chemical plant, if I change the mixture of a certain compound based on my data and the new mixture works, then I need to capture that moment in time as well. It is equally important for insurers to capture the timing of their corrective actions to make sure that we can see the relationship between action and reaction. See also: Your Data Strategies: #Same or #Goals?   Overlaying notes to explain that “we reduced available capacity in less profitable zip codes in June” should show some point of inflection in our results. Having that as part of our reporting is critical to creating the positive action-reaction cycle we want to reinforce. We have an embarrassment of riches when it comes to data, and we are only going to get richer in the coming years. 
By instrumenting our organizations and realizing that we need some new tools and techniques to turn that information into actions that create the right reactions in our organizations, we can improve our results every day, week and month — not just when we close the books.

John Johansen


John Johansen is a senior vice president at Majesco. He leads the company's data strategy and business intelligence consulting practice areas. Johansen consults to the insurance industry on the effective use of advanced analytics, data warehousing, business intelligence and strategic application architectures.

To Predict the Future, Try Creating It

Life insurers can offer modular products -- ones that can be built up or down and switched on and off to better reflect how life’s risks ebb and flow.

Backed with new capital, powered by digital technology and using decentralized administration, a new model for transparent, simple and customer-focused life insurance couldn’t be easier to visualize. And competition from newcomers means existing providers must innovate. But what can traditional insurers do specifically to -- to paraphrase management theorist Peter Drucker -- predict the future by creating it? Today’s insurance market is a customer-centric, buyer’s arena that reflects a palpable shift in power from the producer to the consumer. Insurers’ service offerings need enhancement: if customers feel little value is added to their daily lives, they often fail to see the relevance of cover. Technology can help insurers innovate, address this gap and deliver enhanced services. By striving for simplicity, insurers can also increase transparency. That said, no matter how simple the front end is made for the customer, acquiring cover remains an intricate process. Advice, compliance and regulation can clog the process but offer important protection to consumers. There is a delicate balance to achieve. See also: 7 Steps for Inventing the Future   Letting people engage in the ways they want is crucial. Trust and advice seem somehow less important to people than before. Today, people make emotional decisions with far fewer facts, and for many a community-based recommendation will do. This combination suggests that social brokering will only grow in importance and that demand for automation with robo-advice will increase. Consider the disintermediation -- the reduction in intermediaries -- that transformed High Street banking. An appointment with the manager is no longer needed to set straight one's personal financial affairs. We fend for ourselves by banking online and using mobile-first apps to view statements, to set up transactions and to move money about. Customers now have similar expectations of life insurance. 
To provide more flexibility, insurers can offer products that work in a completely modular way -- products that can be built up or down and switched on and off to better reflect how life’s risks ebb and flow. It’s likely the silo-based approach to the design and sale of line-of-business products is not sustainable. Product fragmentation, with more diverse offerings, will deliver tailored products that fit the way people live their lives. Personalization gives insurers the opportunity to transform the services they offer and take a real stake in the future health of their policyholders. One way is to shift from risk identification to risk prevention based on knowledge of behavioral change. While using data from wearables is a start, more support can be provided -- not just to the fittest customers -- by developing apps and technology that engage with each customer's unique health needs. Data from health apps, for example, is just one source that will give insurers access to a real-time view from which to assess risk, instead of relying on past data. However, continual engagement requires transformational change in the industry. To achieve this, insurers can -- and are -- engaging with experts and companies outside the sector. As the boundaries between insurance and adjacent businesses fade, roles and skill sets within insurance will also change, resulting in a need for more diverse recruitment. See also: How to Build ‘Cities of the Future’   Much is being said about big data, in particular how better use of the insights can make insurers' operations leaner. But analysis of large datasets gives established corporates and newcomers to the industry identical insight. While agility of execution may favor startups, it’s industry knowledge that puts insurers in a strong position to turn data into actionable insights. For more perspective on how technology is changing life insurance, click here.
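The build-up/switch-off idea described above maps naturally onto a small data structure. This is a hypothetical sketch only; the module names and premiums are invented for illustration, not drawn from any real product:

```python
class ModularLifePolicy:
    """A policy built from coverage modules that can be added or
    temporarily switched off as life's risks ebb and flow."""

    def __init__(self):
        self.modules = {}  # module name -> [monthly_premium, active?]

    def add(self, name, monthly_premium):
        self.modules[name] = [monthly_premium, True]

    def switch(self, name, active):
        self.modules[name][1] = active  # pause or resume a module

    def monthly_premium(self):
        return sum(p for p, active in self.modules.values() if active)
```

A customer might, say, pause a critical-illness module for a season and resume it later, with the premium recalculated on the fly rather than through a new underwriting cycle.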

The True Face of Opioid Addiction

The tough reality is that addicts are everywhere. We need to start using behavioral analytics to help identify them and help them in time.

It’s likely that when people hear about the growing opioid addiction problem in America, the face that comes to mind is the one commonly shown on TV and in the movies, which is a very broad generalization: the young, strung-out heroin addict living on the streets. Or dying of an overdose. Heroin abuse is definitely a growing problem in America. But it’s not the only opioid-related issue we’re facing. In 2012, an estimated 2.1 million people were suffering from substance abuse disorders from prescription opioid use, and deaths from accidental overdoses of prescription pain relievers quadrupled between 1999 and 2015. Sales of prescription opioids also quadrupled during this period. While prescription painkillers are often seen as a gateway drug to heroin among the young, the issue is much broader than just one demographic group. The reality is that the face of opioid addiction could be the soccer mom down the block who has been experiencing back pain. It could be the marathon runner who is trying to come back after knee surgery. It could be your grandmother baking cookies as she works on recovering from hip replacement surgery. In fact, it could be anyone. And that diversity is what has made prescription opioid addiction so difficult to manage. Drivers of addiction What is driving this explosive growth of such a potentially dangerous substance? Part of it, quite frankly, has been the incredible improvements in healthcare over the last 20-some years. Hip replacements, knee replacements, spinal surgery and other procedures that were once rare are now fairly common. More surgeries mean more patients who need pain relievers to help them with recovery. The greater focus on patient satisfaction, especially as the healthcare industry shifts from fee-for-service to value-based care, has also had some unintended consequences. 
Physicians concerned about patient feedback from Healthcare Effectiveness Data and Information Set (HEDIS) measures or Medicare Star ratings have additional incentive to ensure patients leave the hospital pain-free. Physicians may prescribe opioids, particularly if patients request them, rather than relying on less addictive forms of pain management. See also: In Opioid Guidelines We Trust?   Here’s how that translates to real numbers. An analysis of 800,000 Medicaid patients in a reasonably affluent state showed that 10,000 of them were taking a medication used to wean patients off a dependency on opiates. This particular medication is very expensive and difficult to obtain – physicians need a specific certification to prescribe it. So it is safe to assume that the actual number of patients using prescription opiates is two to three times higher. Those numbers aren’t always obvious, however, because the prescriptions may be obscured under diagnoses for other conditions such as depression. Indeed, more than half of uninsured nonelderly adults with opioid addiction had a mental illness in the prior year and more than 20% had a serious mental illness, such as depression, bipolar disorder or schizophrenia, according to the Kaiser Family Foundation. The result is that, without sophisticated behavioral analytics, it can be difficult to determine all the patients who are addicted to opioids. And what you don’t know can have a significant impact on care, costs and risk. Complications, risk, and prioritization Opioid addiction tends to interfere with the treatment of other concerns, especially chronic conditions such as depression, congestive heart failure, blindness/eye impairment and diabetes. As a result, physicians must first take care of the addiction before they can effectively treat these other conditions. That is what makes identifying patients with an addiction, and prioritizing their care, so critical. 
Failure to do so can be devastating, not just clinically but financially – especially as healthcare organizations take on more risk in the shift to value-based care. Take two patients with an opioid addiction who are on a withdrawal medication. Patient A also has eye impairment, while Patient B is a diabetic. If the baseline for cost is 1, analytics have shown that Patient A will typically have a risk factor of 1.5 times the norm while Patient B, the diabetic, will have a risk factor of 5 times. Under value-based care, especially an Accountable Care Organization (ACO) where the payment is fixed, the organization can lose a significant amount of money on patients who are costing five times the contracted amount. For example, if the contracted per-member reimbursement for the year is $2,000, this patient -- who is using this medication for withdrawal from an opiate dependency and is a diabetic -- will end up costing $10,000. It is easy to see why that is unsustainable, especially when multiplied across hundreds or thousands of patients. Yet the underlying reason for failure to treat the diabetes effectively – the opioid addiction – may not be obvious. Healthcare organizations that can use behavioral analytics to uncover patients with hidden opioid dependencies, including those on withdrawal medications, will know they need to address the addiction first, removing it as a barrier to treating other chronic conditions. That will make patients more receptive to managing conditions such as diabetes, helping lower the total cost of care. They can also use the analytics to demonstrate to funding sources why they need more money to manage these higher-risk patients successfully. They can demonstrate why an investment in treating the addiction first will pay dividends in the long term with a variety of chronic conditions. See also: How to Attack the Opioid Crisis   Many faces It’s easy to see that opioid abuse in all forms has reached epidemic levels within the U.S. 
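The arithmetic behind that example is worth making explicit. A minimal sketch using only the figures quoted in the text (the function name is illustrative):

```python
def expected_margin(contracted_amount, baseline_cost, risk_factor):
    """Fixed contracted payment minus the risk-adjusted expected cost
    for one patient over the same period."""
    return contracted_amount - baseline_cost * risk_factor

# Patient B from the text: diabetic with an opioid dependency, 5x baseline
loss = expected_margin(2000, 2000, 5)  # $2,000 - $10,000 = -$8,000
```

At a 1.5x risk factor (Patient A), the shortfall is $1,000; at 5x it balloons to $8,000 per patient, which is why identifying the hidden addiction driving the risk factor matters so much under fixed payment.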
What is not so easy to see at face value is who the addicts are -- or could be. Despite popular media images, the reality is that opioid addiction in America has many faces. Some of them may be closer to us than we think. Behavioral analytics can help us identify with much greater clarity who the likely candidates are so we can reverse the trend more effectively.

David Hom


David Hom is chief evangelist for SCIO. He interacts with strategic audiences, delivering precise messaging on the value proposition of SCIO's innovative products and services, and engages clients to solve their pressing issues.

Is AI the End of Jobs or a Beginning?

With Google and Wikipedia, we can be experts on any topic; they don’t make us any dumber than encyclopedias, phone books and librarians did.

Artificial intelligence (AI) is advancing so rapidly that even its developers are being caught off guard. Google co-founder Sergey Brin said in Davos, Switzerland, in January that it “touches every single one of our main projects, ranging from search to photos to ads … everything we do … it definitely surprised me, even though I was sitting right there.” The long-promised AI, the stuff we’ve seen in science fiction, is coming, and we need to be prepared. Today, AI is powering voice assistants such as Google Home, Amazon Alexa and Apple Siri, allowing them to have increasingly natural conversations with us and manage our lights, order food and schedule meetings. Businesses are infusing AI into their products to analyze the vast amounts of data and improve decision-making. In a decade or two, we will have robotic assistants that remind us of Rosie from “The Jetsons” and R2-D2 of “Star Wars.” See also: Seriously? Artificial Intelligence?   This has profound implications for how we live and work, for better and worse. AI is going to become our guide and companion — and take millions of jobs away from people. We can deny this is happening, be angry or simply ignore it. But, if we do, we will be the losers. As I discussed in my new book, “Driver in the Driverless Car,” technology is now advancing on an exponential curve and making science fiction a reality. We can’t stop it. All we can do is to understand it and use it to better ourselves — and humanity. Rosie and R2-D2 may be on their way, but AI is still very limited in its capability, and will be for a long time. The voice assistants are examples of what technologists call narrow AI: systems that are useful, can interact with humans and bear some of the hallmarks of intelligence — but would never be mistaken for a human.  They can, however, do a better job on a very specific range of tasks than humans can. 
I couldn’t, for example, recall the winning and losing pitcher in every baseball game of the major leagues from the previous night. Narrow-AI systems are much better than humans at accessing information stored in complex databases, but their capabilities exclude creative thought. If you asked Siri to find the perfect gift for your mother for Valentine’s Day, Siri might make a snarky comment but couldn’t venture an educated guess. If you asked her to write your term paper on the Napoleonic Wars, she couldn’t help. That is where the human element comes in and where the opportunities are for us to benefit from AI — and stay employed. In his book “Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins,” chess grandmaster Garry Kasparov tells of his shock and anger at being defeated by IBM’s Deep Blue supercomputer in 1997. He acknowledges that he is a sore loser but was clearly traumatized by having a machine outsmart him. He was aware of the evolution of the technology but never believed it would beat him at his own game. Twenty years later, having come to grips with his defeat, he says fail-safes are required … but so is courage. Kasparov wrote: “When I sat across from Deep Blue 20 years ago, I sensed something new, something unsettling. Perhaps you will experience a similar feeling the first time you ride in a driverless car, or the first time your new computer boss issues an order at work. We must face these fears in order to get the most out of our technology and to get the most out of ourselves. Intelligent machines will continue that process, taking over the more menial aspects of cognition and elevating our mental lives toward creativity, curiosity, beauty and joy. These are what truly make us human, not any particular activity or skill like swinging a hammer — or even playing chess.” In other words, we had better get used to AI and ride the wave. Human superiority over animals is based on our ability to create and use tools. 
The mental capacity to make things that improved our chances of survival led to a natural selection of better toolmakers and tool users. Nearly everything a human does involves technology. For adding numbers, we used abacuses and mechanical calculators and now have spreadsheets. To improve our memory, we wrote on stones, parchment and paper, and now have disk drives and cloud storage. AI is the next step in improving our cognitive functions and decision-making. Think about it: When was the last time you tried memorizing your calendar or Rolodex or used a printed map? Just as we instinctively do everything on our smartphones, we will rely on AI. We may have forfeited skills such as the ability to add up the price of our groceries, but we are smarter and more productive. With the help of Google and Wikipedia, we can be experts on any topic, and these don’t make us any dumber than encyclopedias, phone books and librarians did. A valid concern is that dependence on AI may cause us to forfeit human creativity. As Kasparov observes, the chess games on our smartphones are many times more powerful than the supercomputers that defeated him, yet this didn’t cause human chess players to become less capable — the opposite happened. There are now stronger chess players all over the world, and the game is played in a better way. See also: Microinsurance? Let’s Try Macroinsurance   As Kasparov explains: “It used to be that young players might acquire the style of their early coaches. If you worked with a coach who preferred sharp openings and speculative attacking play himself, it would influence his pupils to play similarly. … What happens when the early influential coach is a computer? The machine doesn’t care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. It is entirely free of prejudice and doctrine. 
… The heavy use of computers for practice and analysis has contributed to the development of a generation of players who are almost as free of dogma as the machines with which they train.” Perhaps this is the greatest benefit that AI will bring — humanity can be free of dogma and historical bias; it can do more intelligent decision-making. And instead of doing repetitive data analysis and number crunching, human workers can focus on enhancing their knowledge and being more creative.

Vivek Wadhwa


Vivek Wadhwa is a fellow at Arthur and Toni Rembe Rock Center for Corporate Governance, Stanford University; director of research at the Center for Entrepreneurship and Research Commercialization at the Pratt School of Engineering, Duke University; and distinguished fellow at Singularity University.

A Way to Reduce Healthcare Costs

Insurers and all healthcare stakeholders can benefit from broadening use of certified physician assistants.

As policymakers inside the beltway negotiate the future of the American Health Care Act (AHCA), the focus appears to be on who will pay for healthcare, how it will be subsidized and whether the state insurance exchanges will remain viable. The assumption is being made that access to care is the same as access to high-quality care, and these cost issues are the driving force for change to the AHCA. In this changing marketplace, it is imperative that insurers consider the quality of care being provided, in addition to the finances, because medical errors and poor care cost us all in the long run. There is good news for insurers in this battle of ideologies. Certified Physician Assistants (PA-Cs) deliver on both fronts, providing high-quality care in a cost-effective manner. A 2016 article in the Journal of Clinical Outcomes Management showed no significant difference over 18 months in patient mortality, hospital readmissions, lengths of stay and consults with specialists when care was led by PAs compared with doctors. Additionally, PA-Cs can help meet the new and still confusing performance metrics designated by the Centers for Medicare and Medicaid Services, such as the new Medicare Access and CHIP Reauthorization Act (MACRA). For these reasons, it is important that insurers and all healthcare stakeholders understand the role and qualifications of Certified PAs in healthcare today, including: education and commitment to lifelong learning; rigorous certification; how PAs are compensated and reimbursed; and the demographics and distribution of PAs around the U.S. These insights will help insurers understand how PA-Cs can contribute to improved cost management and patient satisfaction metrics while meeting patient needs and regulatory demands. First, consider the credentials of Certified PAs. 
Certified PAs are prepared and proven to meet the needs of patients today through a combination of a graduate-level education and a rigorous certification and certification maintenance process. PA-Cs are educated in the medical model. Like physicians, they maintain certification at the highest level in healthcare. They must earn substantial continuing medical education (CME) credits every two years and sit for a proctored exam that covers general medical knowledge every 10 years to remain certified. Certification is a hotly debated topic in healthcare today. There is an anti-maintenance of certification (MOC) movement — a belief that initial assessment by exam after graduating from school is sufficient and that maintenance of certification should be through CME only. See also: What Physicians Say on Workers’ Comp   Periodic assessment helps to ensure that PAs maintain and objectively demonstrate a baseline fund of knowledge that is essential for practice across the healthcare spectrum. The combination of substantive, relevant CME and periodic assessment ensures that PA-Cs maintain relevant knowledge throughout their careers. The National Commission on Certification of Physician Assistants (NCCPA) believes this combined approach reinforces the public trust and assures employers and payers that PA-Cs provide the safe, quality care patients should expect and demand. Who we are; where we practice NCCPA has the most comprehensive source of workforce data for the PA profession, with input from 94% of the nation’s PA-Cs. From that, we publish four reports annually detailing statistics on: all Certified PAs; those in 22 specialties; PA demographics by state; and those PAs who were newly certified in the previous year. Here are some key findings:
  • More than 70% of Certified PAs now practice in specialties outside primary care. There are 103 Certified PAs for every 1,000 physicians in the U.S., with notably higher ratios in surgical subspecialties, emergency medicine and dermatology.
  • The median age of Certified PAs is only 38, so they are not nearing retirement age like many physicians. Only 0.6% planned to retire in 2016.
  • The states with the largest number of PAs are New York, California, Texas, Pennsylvania and Florida. However, three of the top five states with the largest number of PAs per capita are Alaska, South Dakota and Montana, indicating that Certified PAs often fill the void for healthcare in rural areas.
  • Certified PAs make an average salary of more than $104,000, which is less than half that of a physician, making them affordable providers who can still meet the clear majority of patient needs.
PA-Cs are everywhere, in every specialty, clinical setting and state, with services running the gamut from providing core medical services to performing surgical procedures, to assisting in complex surgical procedures.
  • Almost 19% practice in surgical specialties like cardiovascular and thoracic surgery and orthopedic surgery, handling pre-ops and post-ops but also performing procedures like vein harvesting, central IV-line placement, lumbar punctures and fracture reduction.
  • More than 14% are employed in emergency medicine, working in every area from fast track to admitting patients to the hospital or referring for follow up to a community physician.
  • Almost 1.5% practice in psychiatry, managing patients with the gamut of mental health issues from anxiety to schizophrenia, providing continuity of care for patients on long-term medications, helping substance abuse patients detoxify and referring them for counseling.
  • They manage complex patients with multiple co-morbidities and conditions such as diabetes, HIV and hypertension.
  • Certified PAs are also improving efficiency in workplaces across the country, working on task forces to develop telemedicine programs, observation units that reduce hospital admissions and processes that increase patient satisfaction.
How PAs are paid and reimbursed

Most PAs are employed, salaried providers. In some states, Certified PAs can own their own businesses, with a physician as medical director. Medicare pays PAs 85% of the physician fee for performing the same services, rising to 100% if the service is "incident to" the physician's care. To qualify as "incident to," the physician must perform the full first visit, the services must be rendered in the office or clinic and a physician must be on site when the PA treats the patient. Hospitals that employ PAs bill for their clinical services under Medicare Part B. Most often, private insurers follow Medicare guidelines. Thus, Certified PAs represent immediate cost savings for insurers.

See also: Medicare Implements Value-Based Purchasing

Q. What do MACRA, HCAHPS, PCMH, ACO and ACA have in common? A. Value-based care!

Whether the ACA is changed or repealed, the demand for quality, cost-effective care will not lessen. Every healthcare model is seeking data to back up its promises. As patients, we all want to see metrics that can be replicated, so that we know we are getting the best-value care for our money. Solutions need to be refined in everything from clinical settings to workflows. But, as in any business, staff is one of the most significant factors in success: what they do and how much it costs for them to do it.

As Congress debates how we pay for this coverage, and wrangles over the details of exchanges and subsidies, insurers are being asked to reduce the cost of healthcare insurance while remaining true to their stakeholders, public or private, by staying profitable. The simple answer is to reduce the cost of medical care, and employing Certified PAs is one way to do that. Knowing that PAs maintain certification to the highest standards in healthcare provides assurance that they are a quality solution, not just a lower-cost solution.
That should boost confidence in reimbursing Certified PAs who, at the end of the day, are a bargain for payers.
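For readers who want to see the payment rules above as arithmetic, here is a minimal sketch. The 85% and 100% rates come from the Medicare rules described in this article; the function name and the $100 fee are purely illustrative, not figures from Medicare's fee schedule.

```python
def medicare_pa_reimbursement(physician_fee, incident_to=False):
    """Estimate Medicare payment for a PA-delivered service.

    PAs are paid 85% of the physician fee schedule amount, or 100%
    when the service qualifies as "incident to" physician care.
    """
    rate = 1.00 if incident_to else 0.85
    return round(physician_fee * rate, 2)

# A hypothetical service with a $100 physician fee:
print(medicare_pa_reimbursement(100.00))                    # 85.0
print(medicare_pa_reimbursement(100.00, incident_to=True))  # 100.0
```

In other words, for every dollar of physician fee, a PA-billed service saves Medicare 15 cents unless the stricter "incident to" conditions are met.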

Dawn Morton-Rias


Dawn Morton-Rias is president and CEO of the National Commission on Certification of Physician Assistants. She has served the PA profession for over 30 years, including as Dean of the College of Health Related Professions and Professor at SUNY Downstate Medical Center and President of the Physician Assistant Education Association. She is nationally recognized for her leadership in PA education and commitment to cultural competence in education and clinical practice.