
How to Avoid Summer Scams

As the weather gets warmer, mosquitoes and ticks re-enter our lives, and along with them comes their larger cousin, the scam artist.

As the weather gets warmer, mosquitoes and ticks re-enter our lives, and along with them comes their larger cousin, the scam artist. There are ways to prepare for those seasonal meal stealers. The same goes for scams, as knowledge is the best repellent. Either way, some scams never seem to get old, as evidenced by the huge number of people who continue to fall for them no matter how many warnings we issue. There are always new variations that snare even the wariest consumers.

Ticks and mosquitoes aren’t harmless—they are well-known vectors for serious illnesses. Scam artists also are vectors for a plague that affects millions of people each year: identity theft. But sometimes a scam is of the simpler smash-and-grab variety. With that, I give you this summer’s smorgasbord of scams.

1. The summer rental scam

It’s not the easiest thing to find a summer rental that has all the right elements: a reasonable distance from the beach, the right number of bedrooms and bathrooms, a pets-welcome policy. So, when you do find the right one, the tendency for most people is to pounce. Don’t be most people. If you get scammed on a rental, you’re not going to know until you show up at the front door and a puzzled person peers back at you. The best thing you can do is visit the property in question beforehand. If you are working with a real estate agent, ask for his or her license number and check it, request references if there are no reviews online, and confirm that the address is real and the premises are truly available for rent.

See also: Be on the Lookout for Tax Scams

2. Summer job as credit application

It is sadly a common occurrence that when kids are offered a “job,” they provide their information for tax purposes, including their Social Security number, and then never hear back. The reason: The only “job” was a robbery. Their identity is stolen, and because kids will be kids, it often takes a long time for them to realize the jerk who flaked on a summer job offer gutted their creditworthiness. (Here are four ways identity theft can impact your credit.) Never provide sensitive personal information to a job site or anyone claiming to offer a job at the start of the process. Before you show up for an interview, make sure the job is legit: You can figure this out by doing an online search or making a few phone calls.

3. Door-knocker scams

Summer is the time for door-knocking scams. Sometimes the knocker wants you to help save an endangered species or an embattled population far away; sometimes they are selling a lawn service, home maintenance or sustainably produced electricity. All these causes, services and products may be legitimate, but the person offering them … not so much. If a stranger comes to your door, your level of suspicion should be high from a personal and digital security perspective. If you like what a knocker has to say, tell them that you will go online to help their cause or buy a product, and send them on their way.

4. Wi-Fi scams

This is a year-round thing, but people still get got all the time by phony Wi-Fi scams, and the problem is only getting worse now that more municipalities are offering free access to the internet. The problem is that free Wi-Fi doesn’t guarantee secure Wi-Fi. Always check with the network provider or someone of authority before logging on to any new wireless connection. Use a VPN, or virtual private network, to conduct any transactions that involve sensitive information.

5. Front desk and fake menu scams

Hotel scams are many and various, and it’s best just to remember that you are a target whenever you are traveling, but two scams are common enough to call out. The first is the front desk scam, which is pretty simple. You check in late, you’re tired and your phone rings. The scammer doesn’t know when you checked in; he or she is calling random rooms. You are told there is a problem with your credit card. Can you please confirm the number? The second scam to look out for is the menu scam. Scammers produce fake menus, then steal your credit card information when you call to place an order. If you get a call from the front desk, hang up and call back or go in person to confirm your payment method. Use your smartphone to order food, or call the front desk for suggestions.

6. Moving scams

Summertime is moving time. Just make sure your relocation isn’t a moving experience of the hair-pulling kind. While there are many great services out there, there also are some fraudulent ones that could wind up costing you big time. With online services like TaskRabbit and Angie’s List, to name but two, there are ways to choose a moving service that suits your needs and provides reviews. Just make sure you check out a mover’s reputation online before they show up at your door.

You may have identity theft repellent

If you think you might have been a victim of identity theft, it’s important to monitor your credit for anything out of the ordinary—primarily accounts and delinquencies you don’t recognize. You can get a copy of each of your three major credit reports for free once a year at AnnualCreditReport.com, and you can use a free tool like Credit.com’s credit report card to check for signs of identity theft every month. It’s also a good idea to check with your insurance agent, bank, credit union or the HR department where you work. It is increasingly common, as a perk of your relationship with the institution, to be offered free access to a program that provides education, proactive assistance and damage control if you become a victim of identity theft. If it’s not free, you may be able to get it at a minimal cost.

See also: Are Scams Killing Direct Marketing?

(Full disclosure: CyberScout, a company I founded in 2003, provides these services to institutional clients, and they in turn offer the service to their clients, customers, members or employees.)

This post originally appeared on ThirdCertainty. Full disclosure: CyberScout sponsors ThirdCertainty. This story originated as an Op/Ed contribution to Credit.com and does not necessarily represent the views of the company or its partners.

Adam Levin


Adam K. Levin is a consumer advocate and a nationally recognized expert on security, privacy, identity theft, fraud, and personal finance. A former director of the New Jersey Division of Consumer Affairs, Levin is chairman and founder of IDT911 (Identity Theft 911) and chairman and co-founder of Credit.com.

10 Rules for CFOs, From the Fog of War

CFOs have limited awareness of the unnecessary risks and poor strategies deployed by the people they think are managing their healthcare spending.

The fog of war can be an excellent metaphor for the CFO in today’s rapidly changing business environment. Nowhere is change more frantic than in trying to manage multiple financial battlefronts: profit margins, SG&A, FP&A, EBITDA and free cash flow. One of the largest battles in business today is the war between organizations and the healthcare supply chain that their employees and team members access for medical treatment. Investing millions of dollars in accessing the healthcare supply chain without actually knowing in advance the cost of almost all the services might as well be war, because it darn sure kills the income statements of companies and the standard of living for employees and families across America.

See also: Where Are All Our Thought Leaders?

The recently released book “Extreme Ownership” delivers a how-to on managing multiple simultaneous risks across the organization. The lessons in the book provide a strategy outline on how to execute and eliminate risk when you have leaders and team members operating in hostile environments. For instance, ask your internal healthcare manager what the mission of your healthcare program is and see if it matches your goals and intentions. Have you communicated to the healthcare manager and his operations team the “why” of the investment in healthcare, or is healthcare just OpEx? As hard as it is to believe, some organizations allow non-P&L managers, or worse, operations-level administrators, to dictate policy and strategy, and their decisions supersede the mission.

The annual renewal process can be very reactive, and not enough effort is applied to identifying priorities. The result is the equivalent of friendly fire, because the tactical plan focuses on the wrong targets and has minor impact. The enemy, the healthcare supply chain, reverse-engineers every government regulation change and cost-shifts to private employers. Not understanding this fundamental principle loses you the war in the long run and the battle at every renewal.

CFOs need to make sure they are not in a position where they are merely informed and not actually involved in healthcare strategy, because they will have limited situational awareness. Is there a formal process in place that requires the operations-level staff to report all strategic and tactical options up to the C-suite, and not just cherry-pick what is disclosed? Is innovation preached but status quo and incrementalism actually reinforced? Are rate increases tolerated because they are lower than budgeted increases?

CFOs need to honestly assess whether they abdicate their leadership role by avoiding the forced execution of strategic healthcare options, instead choosing to take the path of least resistance and defaulting to the ground forces they pay to handle the details. After all that, is the question ever asked, “What is the best way to execute the mission?” Failed execution and badly supervised risk management can lose an organization millions of dollars, and now CFOs risk personal liability by not knowing the best way to execute the mission. There is a consequence for gambling with employee contributions in an ERISA plan, and not knowing with certainty that the organization’s healthcare claims will go down this year is the proof.

See also: A Simple Model to Assess Insurtechs

CFOs have limited situational awareness of the unnecessary risks and poorly performing strategies being deployed by the people they believe they are paying to manage their healthcare investment. The C-suite must gain a new situational awareness of the healthcare budget risks and ERISA compliance exposure facing the organization, and potentially themselves individually. The book notes that soldiers died because of mistakes. In business, healthcare strategy mistakes crush employees’ standard of living, waste millions in lost profits and expose the CFO to fiduciary risk because of a lack of situational awareness, the conviction of forced execution and extreme ownership.

Craig Lack


Craig Lack is "the most effective consultant you've never heard of," according to Inc. magazine. He consults nationwide with C-suites and independent healthcare broker consultants to eliminate employee out-of-pocket expenses, predictably lower healthcare claims and drive substantial revenue.

Key Findings From State of the Internet Report


Last week, Mary Meeker released her annual opus on the state of the digital world, and I wanted to be sure you saw it. Her massive, 355-slide deck is now the bible about the state of the internet for the next year, so you'll be seeing some of the data a lot. You should also spend some time with the presentation because some of the trends will matter a lot for insurance. 

Beyond what you'd expect about the continued growth of the internet and about our obsessions with our mobile phones, three things stood out for me.

First is the continued improvement of voice recognition. Slide 48 says the technology is now roughly as accurate as we humans are. That still leaves room on the back end for figuring out how to turn those words into a query that can hit a database and get a useful response—try asking your Amazon Echo what the leading cause of car accidents is—but the progress means we all have to keep working to incorporate voice recognition into interactions with customers. Just when you thought that moving to chatbots and texting put you on the cutting edge, you're getting another technology thrown at you.

Second is that apps are fading as the organizing principle for mobile devices. As slick as apps seemed to almost all of us at one point, it seems they're now just too cumbersome. Now, the trend is to make things happen "in-app"—inside whatever app someone is using. Slide 70, for instance, shows capability built by Google inside the Lowe's app that leads people in a Lowe's store to an item they're trying to find. I first asked a friend, the CIO of a regional grocery store chain, for that sort of capability more than a decade ago, and I'm reminded of that request every Mother's Day when I try to find where a store has hidden the tiny yellow cans of Hollandaise sauce that I need for Eggs Benedict. I'm delighted that my wait is ending. More importantly, from an insurance standpoint, the move toward in-app capabilities creates both a challenge and an opportunity. The challenge is that you have to move beyond the boundaries of your own app and establish a presence in whatever app your customer is using. The opportunity is for what might be called a do-you-want-fries-with-that strategy: "I see that you're heading toward the chainsaws at Lowe's; do you want some additional insurance to cover yourself and your house, for whatever happens when you take that tree down?"

Third is that the trends Meeker identifies create liabilities that insurers need to consider and risks that they can cover. Some of the issues that she discusses at length will be familiar, such as the growing cyber risks that come with our increasingly connected world and the increasing, but still quite limited, health information from wearables. I'm more intrigued by some of the smaller examples she provides. For instance, Slide 66 says that apartment lobbies are becoming warehouses because of all the package deliveries and that the landlord/super is becoming the foreman of those warehouses. Sounds like a risk that insurers should be aware of, and possibly sell additional insurance to cover.

Happy exploring!

Cheers,

Paul Carroll,
Editor-in-Chief 


Paul Carroll


Paul Carroll is the editor-in-chief of Insurance Thought Leadership.

He is also co-author of A Brief History of a Perfect Future: Inventing the Future We Can Proudly Leave Our Kids by 2050 and Billion Dollar Lessons: What You Can Learn From the Most Inexcusable Business Failures of the Last 25 Years and the author of a best-seller on IBM, published in 1993.

Carroll spent 17 years at the Wall Street Journal as an editor and reporter; he was nominated twice for the Pulitzer Prize. He later was a finalist for a National Magazine Award.

'AI' or Just 'I'? Most Adaptable Will Win!

Technology advancements are blurring the borders between humans and machines. Where is this all leading us?

Charles Darwin once said, “It is not the strongest of the species that survives, nor the most intelligent; it is the one most adaptable to change.” What does that mean in today's world? Humans are accustomed to creating buzzwords and hypostatizing technology advancements such that the borders between humans and machines become blurry. Where is this all leading us? What will humans do if machines take over? Will we have no jobs left? Do we need to start building Terminators? Questions like these are common these days. And then things get more complicated when we hear that technology companies are looking to hire more people to help filter out violent content on media. What does this all mean, then? Are we ready? Does AI work?

This post is not a technical whitepaper on how to create AI systems. I want my views below to explain how we approached creating our AI system and why our employees, leaders and customers are not threatened but instead excited to be part of this change. What did we do differently, and can that be replicated across the globe for every AI implementation? That’s really a choice the leaders of companies must make.

Humans are social, intelligent beings; our intelligence develops each day through repetition, observation, pattern recognition and practice. Our brains are said to have more than 10 billion neurons, and each neuron supposedly is a supercomputer by itself. AI is essentially what you get when you brain-dump these patterns, learnings, observations and experience into computer models and systems so that those tasks no longer require us. The most important ingredient in this equation is the human. Without experienced humans, AI would just be computer algorithms.

See also: Machine Learning to the Rescue on Cyber?

What I found most useful in picking the area best suited for AI is something that can solve yesterday’s problem, make today comfortable and responsibly evolve for tomorrow’s needs. Below are the four main AI truths we kept in mind while developing our AI solution:

1. Involve human practitioners: There is no substitute for human experience. This experience is valuable when rendered to the AI models. As leaders, we need to think about how to get the best practitioners for developing these models and then how to help them become the masters of these models. Human experience is undeniably the best feeder to AI. As leaders, we must allow people to adapt and evolve and not leave the human factor behind AI. AI models can borrow human experience, but emotional intelligence and consciousness are an ethical side of us that needs to remain in our control.

2. Empower learning: Technology companies and their supporting industries must work with each other to ensure the jobs being displaced will be replaced responsibly. The people being displaced must be encouraged and empowered to learn other skills and move on to better things. If we don’t think and act responsibly on both the technology and business sides, we will risk displacing many humans without direction, which causes ripple effects in our society. An idle mind is the devil’s workshop, they say, and we, as creators, founders and leaders of companies, need to fold in the human element as an intangible effect and account for it all while forecasting profits and expenses.

3. Enhance education systems: As leaders, we must hire today’s talent but contribute to developing tomorrow’s employees. Leaders like us can work with local schools and offer practical programs that can involve students with our creations and help them think beyond the obvious. We need to get everyone seeking education to search for “spiritual intelligence,” the highest form of the conscious mind and an intelligence that no one can replicate in a machine. Help kids love the learning they get from the education system, not just love getting the degree. Knowledge is power.

4. Solve problems: As leaders and entrepreneurs, we need to focus on solving yesterday’s problems, which can make today easy, while thinking about tomorrow’s path. Technology is great if we can stay ahead of it; it cannot solve problems on its own. As leaders, we need to find out what problems technology can solve while figuring out how we can enhance humans so we can stay ahead. As leaders, we must think about both the obvious and the non-obvious effects of every decision moving forward. We cannot resist change; it will happen. But we must consider how we can solve problems using AI systems that will replace humans while still growing the company and without losing the human element.

See also: Seriously? Artificial Intelligence?

Can we create a well-planned AI system responsibly and implement it successfully? This is a question that requires leaders (not managers) who can lead with spiritual intelligence, find an undistracted vision and help their workers transform and evolve. Every action has consequences, and every inaction can make us unadaptable.

Sri Ramaswamy


Sri Ramaswamy is the founder and CEO of Infinilytics, a technology company offering AI and big data analytics solutions for the insurance industry. As an entrepreneur at the age of 19, she made a brand new product category a huge success in the Indian marketplace.

Key Trends in Innovation (Part 3)

Why can’t insurance work the same way as Amazon: easy, seamless, one-click, no hassle, managed through your mobile, with regular updates?

This article is the third in a series on key forces shaping the insurance industry. Parts One and Two can be found here and here.

Trend #3: Just in time: The majority of simple covers will be bought in standard units through a marketplace/exchange, permitting just-in-time, need- and exposure-based protection through mobile access.

Why can’t insurance work the same way as Amazon: easy, seamless, one-click, no hassle, managed through your mobile, with regular updates? Actually, this is starting to become a reality. Insurers and start-ups have already taken up this challenge, and significant progress is being made. Aviva, for example, is piloting a home insurance product where customers won’t need to answer any questions, and Digital Fineprint will autofill your insurance policy application form for you by using your social media information.

See also: 10 Trends at Heart of Insurtech Revolution

Data availability and technology are enabling “blind rating” of risks by insurance companies, providing guaranteed acceptance and prices to customers through direct or broker-assisted channels. Insurance still has many consumer challenges to overcome, from a lack of understanding to a lack of trust and a lack of perceived benefits. If it’s considered at all, it’s often as a grudge purchase. The comment that insurance is sold, not bought, remains true in many instances. As the digital economy evolves, the opportunity to change this dynamic will multiply. The key drivers of this change are:
  • Ability to interact with the customer through their mobile in real time
  • Ability to offer insurance at the point of sale or time of need
  • Ability to tailor the offering to the individual’s specific circumstances (location, time, activity, risk)
  • Ability to leverage available information to simplify the process
Innovative start-ups like Insure-A-Thing (IAT) are reinventing the insurance ecosystem by improving customer trust and transparency and encouraging improved behavior through retrospective premium payments based on actual claims. Democrance is revolutionizing the distribution and servicing of micro-insurance products at the point of sale through telcos and Uber-like shared-economy technologies.

Other examples of where this is already happening include Kasko, which enables consumers to purchase insurance at the point of sale/demand: It’s relevant, it’s easy and it’s digital. Similarly, Spixii, an insurance-focused chatbot, knows if you’re in a ski resort and will let you know that your travel insurance doesn’t cover extreme sports, then allow you to purchase the additional protection. Again, it’s relevant, it’s easy and it’s digital.

Our view is that many relatively simple personal lines products will evolve over time to these types of interactive models. Rather than standard policies covering fixed periods of time, these new products will switch on and off for the period they are needed and will cover the specific circumstances/risk. This will encourage adoption at more affordable prices and, importantly, demonstrate that insurance is providing real value when it’s most needed.

The sharing economy is a further example of how innovative insurance solutions are being developed to meet new and emerging consumer needs. Start-ups like Slice and Oula.la are looking to provide tailored insurance protection for Airbnb property owners that switches on and off to cover the period when the property is rented.

See also: Insurance Coverage Porn

We also expect to see marketplace or exchange platforms being developed to help facilitate the process. Again, this is already happening.
As an example, Asset Vault allows customers to log their physical and financial assets in a secure online repository and can then help them find and tailor optimal insurance coverage based on their specific circumstances.

We hope you enjoy these insights and look forward to collaborating with you as we create a new insurance future.

Next article in the series, Trend #4: Solutions will continue to evolve from protection to behavioral change and then to prevention, even across complex commercial insurance.

Sam Evans


Sam Evans is founder and general partner of Eos Venture Partners. Evans founded Eos in 2016. Prior to that, he was head of KPMG’s Global Deal Advisory Business for Insurance. He has lived in Sydney, Hong Kong, Zurich and London, working with the world’s largest insurers and reinsurers.

5 Best Practices in Wake of WannaCry

The pendulum appears to be swinging back toward complacency for all too many companies, especially SMBs, after the WannaCry attack.

In the world of cybersecurity—particularly for small and midsize businesses—progress tends to be achieved in fits and starts. Rare is the SMB that has the patience and focus to take a methodical approach to improving network security over an extended period. So when news of the WannaCry outbreak grabbed the mainstream media’s attention recently, fear among SMBs spiked, and attention turned to cyber issues. However, just as quickly, it seems, the pendulum appears to be swinging back toward complacency for all too many companies.

See also: WannaCry Portends a Surge in Attacks

That shouldn’t be the case. Let’s consider five prominent WannaCry takeaways businesses of all sizes should pause to consider. These notions hold especially true for SMBs that can’t afford to have their reputations gouged, much less sustain material monetary losses, from a major network breach:

1. Patch management. WannaCry took advantage of a vulnerability in Server Message Block, a particular part of the Windows operating system. Microsoft had released a patch back in March, but not everyone had applied it, particularly on older Windows XP systems. You’d have to have a substandard patch management program in place to miss a critical security patch for two months, and those were the companies affected. All organizations require a robust patch management program. Guidance is available from the National Institute of Standards and Technology, under NIST standards 800-53 and 800-60. And the SANS Institute, a private cybersecurity think tank and training center, has put together helpful pointers in its Framework for Building a Comprehensive Enterprise Security Patch Management Program.

2. Software inventories. WannaCry pummeled organizations using old or pirated versions of the Windows operating system, since those are systems that tend not to be patched automatically. All businesses can reduce their risk by knowing what applications and versions are in their networks. SMBs need to ensure that unauthorized copies of business applications are not present. The good news is that proven applications are available that can inventory the operating systems and business software your company regularly uses.

3. Backup, backup, backup. Want to know the top three ways to beat ransomware? Back up to the cloud. Back up to the cloud. Back up to the cloud. What’s the best way to defeat ransomware if you are uncomfortable backing up to the cloud? Back up somewhere else that is off your network. Organizations that had a readily available backup could simply delete the encrypted files, restore the good backup, sweep their networks for malware and get back to business. We have seen that process take 15 minutes. There are many providers that will back up your data, usually for under $1,000 per year.

4. Consider cloud security. Trusting mission-critical data and processes to a cloud service provider still makes many company decision-makers very nervous. They’ll say: “I don’t want to trust a cloud provider with my data. Those guys get attacked all the time.” While that may be true, the reputable cloud service providers by now know what’s at risk and have made the investment in quality defenses. If you are one of the companies unsure about whether you were patched properly, whether you had good backups or whether your response plan was going to be effective, then the reputable cloud service providers that deliver these types of services are doing better than you are. It may be time to look into moving functions like email, office automation and customer relationship management to the cloud.

5. Breach response planning. A good breach response plan would not have prevented infections from WannaCry, but it would have sped recovery. If everyone in the organization knows where to go and what role to play in getting the network back to normal, expensive downtime can be minimized. A robust breach response plan needs to be in place, tested and accessible to key players.

See also: 5 Ransomware Ideas, or You’ll WannaCry

These notions were true well before WannaCry, and they bear repeating in the aftermath of this landmark, self-spreading ransomware attack. No doubt there will be more lessons to learn going forward. One thing seems assured: Sophisticated attacks designed to breach business networks indiscriminately are with us to stay.

This article originally appeared on ThirdCertainty. It was written by Eric Hodge.
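For readers who want to act on the off-network backup advice above, here is a minimal sketch of what an off-network backup step can look like. Everything here is illustrative, not from the article: the function name `backup_dir` and the destination path are hypothetical, and the sketch assumes a POSIX shell with `tar` available and an off-network drive or share already mounted at a path you choose.

```shell
# Minimal off-network backup sketch (hypothetical; adapt paths to your setup).
backup_dir() {
    # $1 = directory to back up, $2 = off-network destination directory
    src=$1
    dest=$2
    stamp=$(date +%Y%m%d-%H%M%S)
    archive="$dest/backup-$stamp.tar.gz"

    mkdir -p "$dest"
    # -C keeps archive paths relative to the parent directory,
    # so a restore can be unpacked anywhere
    tar -czf "$archive" -C "$(dirname "$src")" "$(basename "$src")"
    printf '%s\n' "$archive"
}

# Example usage (placeholder paths):
# backup_dir "$HOME/Documents" /mnt/offsite-drive
```

The key property, per the article's advice, is that the destination is somewhere ransomware on your network cannot reach and overwrite; a cloud-storage sync tool can stand in for the mounted drive, and the timestamped archive names mean an infection never silently replaces your last good copy.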

Byron Acohido


Byron Acohido is a business journalist who has been writing about cybersecurity and privacy since 2004, and currently blogs at LastWatchdog.com.

Winning With Digital Confidence

PwC’s survey found that executives’ confidence in their Digital IQ had dropped a stunning 15 percentage points from the year before.

Today, if there’s a problem with the heat or hot water in your hotel room, you call the front desk and wait for maintenance to arrive. At some chains, you have the option of reporting the issue using a mobile device. But in the near future, many hotel rooms will be wired with connected devices that report potential breakdowns to maintenance and may even automatically fix them. For example, smart-building technology will turn the heat up when your app’s locator notices you are on the way back to your room. Of course, such developments have significant implications for hotel staff. George Corbin thinks about them from a scientific perspective. As the senior vice president of digital at Marriott, Corbin oversees Marriott.com and Marriott mobile, and he is responsible for about $13 billion of the company’s annual revenue. He says the “skills half-life” of a hotel industry worker is about 12 years, at least for those working in conventional areas such as sales, operations and finance. In other words, if people leave jobs in these functions, they could come back in 12 years and half their skills would still be relevant. But on the digital side, the skills half-life shrinks to a mere 18 months, according to Corbin. Virtually every other industry faces similar dynamics. Digital competency is practically mandatory in many sectors; if you don’t get on board, you’ll fall behind competitors that do. And yet the knowledge required for widespread digital competency is often in short supply, and the related skills in agility and collaboration are often difficult to achieve in large companies. In a few years, an 18-month skills half-life may seem like a luxury. As a result, many executives’ confidence in their organization’s “Digital IQ” — their ability to harness digital-driven change to unlock value — is at an all-time low. That’s one of the main findings from the 2017 edition of PwC’s Digital IQ survey. 
We interviewed more than 2,200 executives from 53 countries whose companies had annual revenues of at least $500 million and found that executive confidence had dropped a stunning 15 percentage points from the year before. These company leaders said they are no better equipped to handle the changes coming their way today than they were in 2007, when we first conducted this survey. Back in 2007, being a digital company was often seen as synonymous with using information technology. Today, digital has come to mean having an organizational mindset that embraces constant innovation, flat decision making and the integration of technology into all phases of the business. This is a laudable change; however, in many companies, workforce skills and organizational capabilities have not kept pace. As the definition of digital has grown more expansive, company leaders have recognized that there exists a gap between the digital ideal and their digital reality. See also: Digital Risk Profiling Transforms Insurance   The ideal is an organization in which everyone has bought into the digital agenda and is capable of supporting it. What does this look like? It’s a company in which the workforce is tech-fluent, with a culture that encourages the kind of collaboration that supports the adoption of digital initiatives. The organizational structure and systems enable leaders to make discerning choices about where to invest in new technologies. The company applies its talent and capabilities to create the best possible user experiences for all of its customers and employees. Simply upgrading your IT won’t get you there. Instead of spending indiscriminately, start by identifying a tangible business goal that addresses a problem that cannot be addressed with existing technology or past techniques. Then develop the talent, digital innovation capabilities and user experience to solve it. These three areas are where the new demands of digital competence are most evident. 
They are all equally important; choosing to focus on just one or two won’t be enough. Our findings from 10 years of survey data suggest the organizations that can best unite talent, digital innovation capabilities and user experience into a seamless, integrated whole have a higher Digital IQ and are generally further along in their transformation. Our data also shows that the companies that use cross-functional teams and agile approaches, prioritize innovation with dedicated resources and better understand human experience, among other practices, have financial performance superior to that of their peers. It’s time for company leaders to build their digital confidence and their digital acumen; they can’t afford to wait.

Getting Tech-Savvy

“We are now moving into a world with this innovation explosion, where we need full-stack businesspeople,” says Vijay Sondhi, senior vice president of innovation and strategic partnerships at Visa, drawing an analogy to the so-called full-stack engineers who know technology at every level. “We need people who understand tech, who understand business, who understand strategy. Innovation is so broad-based and so well stitched together now that we’re being forced to become much better at multiple skill sets. That’s the only way we’re going to survive and thrive.” In the past, digital talent could lie within the realm of specialists. Today, having a baseline of tech and design skills is a requirement for every employee. Yet overall digital skill levels have declined even further since our last report, published in 2015. Then, survey respondents said that skills in their organization were insufficient across a range of important areas, including cybersecurity and privacy, business development of new technologies and user experience and human-centered design. In fact, lack of properly skilled teams was cited this year as the No.
1 hurdle to achieving expected results from digital technology investments; 61% of respondents named it as an existing or emerging barrier. And 25% of respondents said they used external resources, even when they had skilled workers in-house, because it was too difficult or too slow to work with internal teams. The skills gap is significant, and closing it will require senior leaders to commit to widespread training. They need to teach employees the skills to harness technology, which may include, for example, a new customer platform or an artificial intelligence-supported initiative. They will also need to cross-train workers to be conversant in disciplines outside their own, as well as in skills that can support innovation and collaboration, such as agile approaches or design thinking. Digital change, says Marriott’s Corbin, is driven by using technology in ways that empower human moments. “Rather than replace (human interactions), we are actually finding it’s improving them. We need the human touch to be powered by digital.” One way that companies can accomplish these goals is by creating a cross-discipline group of specialists located in close proximity (we refer to this as a sandbox), whether physically or virtually, so each can observe how the others work. Such teams encourage interaction, collaboration, freedom and safety among a diverse group of individuals. Rather than working in isolation or only with peer groups, members develop a common working language that allows for the seamless collaboration and an increased efficiency vital to moving at the speed of technology. This approach avoids the typical workplace dysfunction that comes with breaking down silos: Because business issues are no longer isolated within one discipline but rather intertwined across many, colleagues from disparate parts of the organization are able to better understand one another and collaborate to come up with creative solutions. 
Part product development and part project management, the sandbox approach enables your workforce to visualize the journey from conception to prototype to revelation in one continuous image, helping spread innovation throughout the organization. The culture of collaboration can speed the adoption of emerging technologies. For example, this approach enabled the Make-A-Wish Foundation to bring employees together from across the organization, including some whose role in developing a new tech-based feature may not have been obvious, such as a tax expert and a lawyer. In just three months using this approach, the foundation created and operationalized a crowdfunding platform to benefit sick children.

Investing in the Future

At GE Healthcare, engineers are experimenting with augmented reality and assistant avatars. “Part of my job is to help pull in (great innovations) and apply them through a smart architecture,” says Jon Zimmerman, GE Healthcare’s general manager of value-based care solutions. “The innovations must be mobile native because … our job is to be able to serve people wherever they are. And that is going to include more and more sensors on bodies and, if you will, digital streaming so people can be monitored just as well as a jet engine can be monitored.” Amid an increasingly crowded field of emerging technologies, companies need strong digital innovation capabilities to guide their decision making. Yet this achievement often proves challenging as a result of organizational and financial constraints. Our survey revealed that fewer companies today have a team dedicated to exploring emerging technologies than was the case in years past. Many are relying on ad hoc teams or outsourcing. Moreover, 49% of companies surveyed said they still determine their adoption of new technologies by evaluating the latest available tools, rather than by evaluating how the technology can meet a specific human or business need.
Equally troubling is that spending on emerging technologies is not much greater today, relative to overall digital technology budgets, than it was a decade ago. In 2007, the average investment in emerging technology was roughly 17% of technology budgets, a surprisingly robust figure at the time. Fast-forward 10 years, and that rate has grown to only about 18%, which may well be inadequate. It’s time to change these trends. You’ve identified a problem that existing technology cannot solve, but you shouldn’t just throw money at every shiny new thing. A digital innovation capability must become a central feature of any transformation effort. This approach goes beyond simply evaluating what to buy or where to invest to include how best to organize internal and external resources to find the emerging technologies that most closely match the direction and goals of the business. Nearly every company is experimenting with what we call the “essential eight” new technologies: the internet of things (IoT), artificial intelligence (AI), robotics, drones, 3D printing, augmented reality (AR), virtual reality (VR) and blockchain. The key is to have a dedicated in-house team with an accountable, systematic approach to determining which of these technologies is critical to evolving the business digitally and which, ultimately, will end up as distractions that provide little value to the overall operation. This approach should include establishing a formal listening framework, learning the true impact of bleeding-edge technologies, sharing results from pilots and quickly scaling throughout the enterprise. Perhaps most importantly, organizations need to have a certain tolerance for risk and failure when evaluating emerging technologies. Digital transformation requires organizations to be much more limber and rapid in their decision making. Says GE Healthcare’s Zimmerman, “One of our cultural pillars is to embrace constructive conflict. 
That means that when an organization transitions or transforms, things are going to be different tomorrow than they were yesterday. You must get comfortable with change and be open to the differing thoughts and diverse mind-sets that drive it.” See also: Systematic Approach to Digital Strategy   In a promising development, signs indicate that companies are starting to focus on bringing digital innovation capabilities in-house. According to the New York Times, investments by non-technology companies in technology startups grew to $125 billion in 2016, from just $20 billion five years ago. The Times, citing Bloomberg data, also noted that the number of technology companies sold to non-technology companies in 2016 surpassed intra-industry acquisitions for the first time since the internet era began. Walmart, General Motors, Unilever and others are among the non-technology giants that made startup acquisitions last year. General Electric, whose new tagline is, “The digital company. That’s also an industrial company,” spent $1.4 billion in September 2016 buying two 3D printing businesses in Europe. Other companies are engaging in innovative partnerships. At the annual Consumer Electronics Show in January 2017, Visa, Honda and IPS Group — a developer of internet-enabled smart parking meters — teamed up to unveil a digital technology that lets drivers pay their parking meter tab via an app in the car’s dashboard. By “tokenizing” the car, or allowing it to provision and manage its own credit card credential, they essentially make it an IoT device on wheels. “The car becomes a payment device,” explains Visa’s Sondhi. “And taking it even further, we can turn it into a smart asset by publishing information that’s related to the car onto the blockchain. 
This can enable a whole host of tasks to be simplified and served up to the driver, such as pushing competitive insurance rates or automatically paying annual registration fees.”

Solving for “X”

At United Airlines, Ravi Simhambhatla, vice president of commercial technology and corporate systems, views digital innovation as a way to break free from habits ingrained in his company over nine decades because they are no longer relevant to its customers and employees. The company plans to use machine learning to create personalized experiences for its customers. For example, when someone books a flight to San Francisco, the company's algorithm will know if that person is a basketball fan and, if so, offer Golden State Warriors tickets. “What we have been doing is really looking at our customer and employee journeys with regard to the travel experience and figuring out how we can apply design thinking to those journeys,” says Simhambhatla. “And, as we map out these journeys, we are focused on imagining how, if we had a clean slate, we would build them today.” With the right digital skills and capabilities comes great opportunity to improve the experience of both your employees and your customers. One constant that emerges from 10 years of Digital IQ surveys is that companies that focus on creating better user experiences report stronger financial performance. But, all too often, user experience is pushed to the back burner of digital priorities. Just 10% of respondents to this year’s survey ranked creating better customer experiences as their top priority, down from 25% a year ago. This imbalance between respondents’ focus on experience and its importance to both customers and employees has far-reaching effects. It creates problems in the marketplace, slows the assimilation of emerging technologies and hinders the ability of organizations to anticipate and adapt to change.
Part of the reason user experience ranks as such a low priority is the fact that CEOs and CIOs, the executives who most often drive digital transformation, are much less likely to be responsible for customer-facing services and applications than for digital strategy investments. As a result, they place a higher priority on revenue growth and increased profitability than on customer and employee experiences. However, user experience is also downgraded because getting it right is extremely difficult. It is expensive, outcome-focused as opposed to deadline-driven and fraught with friction. However, unlike so many other aspects of technological change, how organizations shape the human experience is completely within their control. Companies need to connect the technology they are seeking to deploy and the behavior change they are looking to create. Making this connection will only become more critical as emerging technologies such as IoT, AI and VR grow to define the next decade of digital. These — and other technologies that simultaneously embrace consumers, producers and suppliers — will amplify the impact of the distinct behaviors and expectations of these groups on an organization’s digital transformation. Companies that focus too narrowly on small slivers of the customer experience will struggle to adapt, but overall experience-and-outcome companies that seamlessly handle multiple touch points across the customer journey will succeed. That’s because, when done right, the customer and employee experience translates great strategy, process and technology into something that solves a human or business need. You have the skills and the capabilities; now you need to think creatively about how to use them to improve the user experience in practical yet unexpected ways. Says United’s Simhambhatla, “To me, Digital IQ is all about finding sustainable technology solutions to remove the stress from an experience. 
This hinges on timely and contextually relevant information and being able to use technology to surprise and delight our customers and, equally, our employees.”

The Human Touch

When talent, innovation and experience come together, it changes the way your company operates. Your digital acumen informs what you do, and how you do it. For example, Visa realized back in 2014 that digital technology was changing not only its core business but also those of its partners so rapidly that it needed to bring its innovation capabilities in-house or risk being too dependent on external sources. It launched its first Innovation Center in 2014; the company now has eight such centers globally, and more are planned. Visa’s Innovation Centers are designed as collaborative, co-creation facilities for the company and its clients. “The idea was that the pace of change was so fast that we couldn’t develop products and services in a vertically integrated silo. We want the Innovation Centers to be a place where our clients could come in, roll up their sleeves, work with us, and build solutions rapidly within our new, open network,” says Visa’s Sondhi. “The aim is to match the speed and simplicity of today’s social- and mobile-first worlds by ideating with clients to quickly deploy new products into the marketplace in weeks instead of months or quarters.” See also: Huge Opportunity in Today’s Uncertainty   Across industries, company leaders have clearly bought into the importance of digital transformation: Sixty-eight percent of our respondents said their CEO is a champion for digital, up from just one-third in 2007. That’s a positive development. But now executives need to move from being champions to leading a company of champions. Understanding what drives your customers’ and employees’ success and how your organization can apply digital technology to facilitate it with a flexible, sustainable approach to innovation will be the deeper meaning of Digital IQ in the next decade.
“It’s the blend that makes the magic,” says GE Healthcare’s Zimmerman. “It’s the high-impact technological innovations, plus the customer opportunities, plus the talent. You have to find a way to blend those things in a way that the markets can absorb, adopt, and gain value from in order to create a sustainable virtuous cycle.” This article was written by Chris Curran and Tom Puthiyamadam.

Chris Curran

Chris Curran is a principal and chief technologist for PwC's advisory practice in the U.S. Curran advises senior executives on their most complex and strategic technology issues and has global experience in designing and implementing high-value technology initiatives across industries.

Machine Learning – Art or Science?

Is machine learning really bias-free? And how can we leverage this tool much more consciously than we do now?

The surge of big data and the challenge of confirmation bias have led data scientists to seek a methodological approach to uncover hidden insights. In predictive analytics, they often turn to machine learning to save the day. Machine learning seems to be an ideal candidate to handle big data using training sets. It also enjoys a strong scientific scent by making data-driven predictions. But is machine learning really bias-free? And how can we leverage this tool more consciously?

Why Science: We often hear that machine-learning algorithms learn and make predictions on data. As such, they are supposedly less exposed to human error and biases. We humans tend to seek confirmation of what we already think or believe, leading to confirmation bias that makes us overlook facts that contradict our theory and overemphasize ones that affirm it. In machine learning, the data is what teaches us, and what could be purer than that? When using a rule-based algorithm or expert system, we are counting on the expert to make up the “right” rules. We cannot avoid having the expert's judgments and positions infiltrate such rules. The study of intuition would go even further to say that we want the expert’s experiences and opinions to influence these rules — they are what make him/her an expert! Either way, when working our way bottom-up from the data, using machine-learning algorithms, we seem to have bypassed this bias. See also: Machine Learning: a New Force

Why Art: Facts are not science; neither is data. We invent scientific theories to give data context and explanation to help us distinguish causation from correlation. The apple falling on Newton’s head is a fact; gravity is the theory that explains it. But how do we come up with the theory? Is there a scientific way to predict “Eureka!” moments? We test assumptions using scientific tools, but we don’t generate assumptions that way — at least not innovative ones that manifest from out-of-the-box thinking.
Art, on the other hand, takes on an imaginative skill to express and create something. In behavioral analytics, it can take the form of a rational or irrational human behavior. The user clicking on content is fact; the theory that explains causation could be that it answered a question the user was seeking or that it relates to an area of interest to the user based on previous actions. The inherent ambiguity of human behaviors — and even more of our causation or motivation — gives art its honorable place in predictive analytics. Machine learning is the art of induction. Even unsupervised learning uses objective tools that were chosen, tweaked and validated by a human, based on his/her knowledge and creativity.

Schrödinger: Another way is to think of machine learning as both an art and a science — much like Schrödinger’s cat (which is both alive and dead), the Buddhist middle way or quantum physics that tells us light is both a wave and a particle. At least, until we measure it. You see, if we use scientific tools to measure the predictiveness of a machine-learning-based model, we subscribe to the scientific approach giving our conclusions some sort of professional validation. Yet if we focus on measuring the underlying assumptions or the representation or evaluation method, we realize the model is only as “pure” as its creators. In behavioral analytics, a lot rides on the interpretation of human behavior into quantifiable events. This piece stems from the realm of art. When merging behavioral analytics with scientific facts — as often occurs when using medical or health research — we truly create an artistic science or a scientific art. We can never again separate the scientific nature from the behavioral nurture.

Practical Implementation

While this might be an interesting philosophical or academic discussion, the purpose here is to help with practical tools and tips.
So what does this mean for people developing machine-learning-based models or relying on those models for behavioral analytics?
  1. Invest in the methodology. Data is not enough. The theory that narrates the data is what gives it context. The choices you make along the three stages of representation, evaluation and optimization are susceptible to bad art. So, when in need of a machine-learning model, consult with a variety of experts about choosing the best methodology for your situation before rushing to develop something.
  2. Garbage in, garbage out. Machine learning is not alchemy. The model cannot turn coal into diamond. Preparing the data is often more art (or “black art”) than science, and it takes up most of the time. Keep a critical eye out for what goes into the model you are relying on, and be as transparent about it as possible if you are on the designing side. Remember that more relevant data beats smarter algorithms any day.
  3. Data preparation is domain-specific. There is no way to fully automate data preparation (i.e. feature engineering). Some features may only add value in combination with others, creating new events. Often, these events need to make product or business sense just as much as they need to make algorithmic sense. Remember that feature design or events extraction requires a very different skill than modeling.
  4. The key is iterations across the entire chain. You collect raw data, prepare it, learn and optimize it, test and validate it and finally put it to use in a product or business context. But this cycle is only the first iteration. A well-designed algorithm often sends you to re-collect slightly different raw data; prepare it from another angle; model, tweak and validate it differently; and even use it differently. Your ability to foster collaboration across this chain, especially where it involves Martian modelers and Venusian marketers, is key!
  5. Make your assumptions carefully. Archimedes said, “Give me a lever long enough and a fulcrum on which to place it and I shall move the world.” Machine learning is a lever, not magic. It relies on induction. The knowledge and creative assumptions you make going into the process determine where you stand. The science of induction will take care of the rest — provided you choose the right lever (i.e. methodology). But it’s your artistic judgment that decides on the rules of engagement.
  6. If you can, get experimental data. Machine learning can help predict results based on a training data set. Split testing (aka A/B testing) is used for measuring causal relationships, and cohort analysis helps split and tailor solutions per segment. Combining experimental data from split testing and cohort analysis with machine learning can prove to be more efficient than sticking to one or the other. The way you choose to integrate these two scientific approaches is itself a creative act.
  7. Contamination alert! Do not let the artistic process of tweaking the algorithm contaminate your scientific testing of its predictiveness. Remember to keep complete separation of training and test sets. If possible, do not expose the test set to the developers until after the algorithm is fully optimized.
  8. The king is dead, long live the king! The model (and its underlying theory) is only valid until a better one comes along. If you don’t want to be the dead king, it is a good idea to start developing the next generation of the model at the moment the previous one is released. Don’t spend your energy defending your model; spend your energy trying to replace it. The longer you fail, the stronger it becomes…
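Point 7 above, keeping the training and test sets strictly separated, can be sketched in a few lines of Python. This is a minimal illustration only; the helper below is a hypothetical stand-in for library utilities such as scikit-learn's `train_test_split`:

```python
import random

def train_test_split(data, test_ratio=0.2, seed=42):
    """Shuffle once with a fixed seed, then carve off a held-out test set."""
    rng = random.Random(seed)
    indices = list(range(len(data)))
    rng.shuffle(indices)
    cut = int(len(data) * (1 - test_ratio))
    train = [data[i] for i in indices[:cut]]
    test = [data[i] for i in indices[cut:]]
    return train, test

# Hypothetical labeled examples: (feature, label) pairs.
examples = [(x, x % 2) for x in range(100)]
train, test = train_test_split(examples)

# The sets are disjoint: tune the model on `train` only, and consult
# `test` exactly once, after all optimization is finished.
assert len(train) == 80 and len(test) == 20
assert not set(train) & set(test)
```

The discipline matters more than the mechanics: any tweak made after peeking at test-set results quietly contaminates the "scientific" measurement the article warns about.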
See also: Machine Learning to the Rescue on Cyber?   Machine-learning algorithms are often used to help make data-driven decisions. But machine-learning algorithms are not all science, especially when applied to behavioral analytics. Understanding the “artistic” side of these algorithms and its relationship with the scientific one can help us build better machine-learning algorithms and make more productive use of them.

Oren Steinberg

Oren Steinberg is an experienced CEO and entrepreneur with a demonstrated history of working in the big-data, digital-health and insurtech industries.

Suicide and the Perspective of Truth

It seems so obvious: The hand of the taker is responsible for the deliberate action of suicide. But that perspective is too limited.

Let’s talk about an obvious truth: Suicide is a choice, unlike cancer. People with cancer don’t make a conscious choice; they don’t take a deliberate action. But people commit suicide. Over the last two years, two beloved actors died. We offered genuine respect and love to Alan Rickman, who, it was said, succumbed to cancer. “He lost his battle,” the headlines read. By contrast, our response to Robin Williams' death was much less clear. He “committed” suicide. Many headlines added that he hanged himself. In the suicide-prevention community, many have discontinued the use of the word “commit,” but many have not. I mean, it kind of works, right? This isn’t the year 1800 — we don’t think of suicide as a sin or crime any more. But we do think of it as a choice, as a deliberate action. Isn’t that right? Earlier this year, hip hop star B.o.B made the headlines. If you didn’t already know him from songs like “Magic” and “Airplanes,” you may have heard about his epic Twitter feud with astrophysicist Neil deGrasse Tyson. It started here at Stone Mountain, which overlooks metro Atlanta all the way up to Sandy Springs. B.o.B tweeted, “The cities in the background are approx. 16miles apart....where is the curve? please explain this. " Look, it’s obvious the Earth is flat. Going back a thousand years, the Earth would in fact have looked downright flat to every one of us. From the every-man perspective, with a limited view, this appeared to be obvious for thousands of years. Of course, there have always been signs that our limited view as humans was, well, limited. The first clue is that in every lunar eclipse we see the shadow of the earth cast against the moon. And we see a circle. Tyson also explained to B.o.B that the Foucault pendulum demonstrates that the earth rotates. These clues could have been put together (and were) long before satellites or space travel. The conclusion: The world must be a ball! 
Apparently, this was way too much looking through a glass darkly and didn’t persuade B.o.B. He believes the pictures of the round earth are the CGI creations of a conspiracy, and, in reality, most humans have not seen this view with their own eyes. However, we could try to change his perspective. Instead of 16 miles across, let’s go one more mile. Let’s make it 17 miles — but straight up. Now, the curvature of the great, great big planet begins to emerge. The “Aha!” moment. See also: Blueprint for Suicide Prevention   In life, we don’t always get the 17-mile perspective. Sometimes we fall one mile short. What seems obvious could not be more wrong, and sometimes, unlike with B.o.B's tweets, there are consequences. I wish we could zip up 17 miles to see the true perspective on suicide, but it’s going to take some faith. Let’s look at the clues and what doesn’t fit, like that nagging circle shadow of the Earth on the moon.

The approach I describe in the caption sounded really good… until the moment the platform underneath me dropped away. I was immediately slipping on the bar, struggling to hold on, my hands sweaty. I doubled down on my grip, but, quickly, my muscles began to ache, and my forearms ballooned like Popeye's. The pain intensified as the seconds passed. I relaxed my breathing and went to my happy place (a beach in my mind with gentle waves lapping). That strategy was good for a couple seconds, but it still didn't work. Finally, I was simply repeating to myself, “Hold on one more second, one more second.” It was a long way to fall, so I desperately wanted to hang on. But I could not. Gravity and fatigue forced me to succumb to the pain. You can watch my embarrassing fall.

Pain is not a choice

Many of us somehow think we've experienced enough pain through the normal ups and downs of being human that we have at least some insight into what leads people to suicide. One of America’s top novelists, William Styron, said: Not a chance.
His book, “Darkness Visible,” about his own debilitating and suicidal depression, is titled after John Milton’s description of Hell in “Paradise Lost”:

No light; but rather darkness visible
Where peace and rest can never dwell, hope never comes
That comes to all, but torture without end

One of our most talented writers ever, Styron said his depression was so mysteriously painful and elusive as to verge on being beyond description. He wrote, “It thus remains nearly incomprehensible to those who haven’t experienced it in its extreme mode.” If you haven’t experienced this kind of darkness and anguish, the clinical phrase “psychic distress” probably doesn’t help much. Styron offers the metaphor of physical pain to help us grasp what it’s like. But, frankly, many with lived experience say they would definitely prefer physical pain to this anguish.

Putting the Clues Together

So, some of you are thinking, I get what you are saying, but my loved one didn’t fall passively. I’m sure they were in pain, but they took a deliberate action. They pulled a trigger. They ingested a poison. So, let’s put these two clues together but reverse the order. The pain. And the response. After my first marathon, when my legs had cramped badly, I decided to try an ice bath and jumped right in. I bolted. I was propelled. Exiting the tub filled every neural pathway of my mind, and my hands and body flailed as if completely disconnected from my conscious decision-making process. My example references an acute pain, but extend that into a chronic day-over-day anguish that blinds the person to the possibility of a better day. Perhaps people do not choose suicide so much as they finally succumb because they just don’t have the supports, resources, hope, etc. to hold on any longer. Their strength is extinguished and utterly fails. See also: Employers’ Role in Preventing Suicide

Is Suicide a Choice?

The every-man perspective is that suicide is a choice. Robin Williams committed suicide.
And it’s the hand of the taker that is completely responsible for the choice and deliberate action. It seems so obvious. But it’s the limited, 16-mile perspective, the one we all have, and it’s one mile short of the truth. Someday, we’ll have the space-station view — and with it the solutions to create Zero Suicide. But, for now, it’s time we study the signs, trust the clues and be brave enough to stand behind them.

Here’s a different headline: “Robin Williams lost his battle. Tragically, he succumbed and died of suicide.” Loving, respectful, true. When you can’t hang on any longer, you can’t hang on. As I watch the video of my fall on Fear Factor, it looks like my right hand is still holding on to an invisible bar. I never, ever stopped choosing to hang on. But I fell.

Believe the signs. Change your perspective. Use your voice. Let’s change that great big beautiful round planet we live on, and let’s do it together by doubling down on our efforts to help others hold on.

David Covington


David Covington, LPC, MBA is CEO and president of RI International, a partner in Behavioral Health Link, co-founder of CrisisTech 360, and leads the international initiatives “Crisis Now” and “Zero Suicide.”

Strategist’s Guide to Artificial Intelligence

As you contemplate the introduction of artificial intelligence, you should articulate what mix of three approaches works best for you.

Jeff Heepke knows where to plant corn on his 4,500-acre farm in Illinois because of artificial intelligence (AI). He uses a smartphone app called Climate Basic, which divides Heepke’s farmland (and, in fact, the entire continental U.S.) into plots that are 10 meters square. The app draws on local temperature and erosion records, expected precipitation, soil quality and other agricultural data to determine how to maximize yields for each plot. If a rainy cold front is expected to pass by, Heepke knows which plots to avoid irrigating that afternoon. As the U.S. Department of Agriculture noted, this use of artificial intelligence across the industry has produced the largest crops in the country’s history.

Climate Corp., the Silicon Valley–based developer of Climate Basic, also offers a more advanced AI app that operates autonomously. If a storm hits a region, or a drought occurs, it lowers local yield numbers. Farmers who have bought insurance to supplement their government coverage get a check; no questions asked, no paper filing necessary. The insurance companies and farmers both benefit from a much less labor-intensive, more streamlined and less expensive automated claims process.

Monsanto paid nearly $1 billion to buy Climate Corp. in 2013, giving the company’s models added legitimacy. Since then, Monsanto has continued to upgrade the AI models, integrating data from farm equipment and sensors planted in the fields so that the models improve in accuracy and insight as more data is fed into them. One result is a better understanding of climate change and its effects — for example, the northward migration of arable land for corn, or the increasing frequency of severe storms.

Applications like this are typical of the new wave of artificial intelligence in business. AI is generating new approaches to business models, operations and the deployment of people that are likely to fundamentally change the way business operates.
And if it can transform an earthbound industry like agriculture, how long will it be before your company is affected?

An Unavoidable Opportunity

Many business leaders are keenly aware of the potential value of artificial intelligence but are not yet poised to take advantage of it. In PwC’s 2017 Digital IQ survey of senior executives worldwide, 54% of the respondents said they were making substantial investments in AI today. But only 20% said their organizations had the skills necessary to succeed with this technology (see “Winning with Digital Confidence,” by Chris Curran and Tom Puthiyamadam).

Reports on artificial intelligence tend to portray it as either a servant, making all technology more responsive, or an overlord, eliminating jobs and destroying privacy. But for business decision makers, AI is primarily an enabler of productivity. It will eliminate jobs, to be sure, but it will also fundamentally change work processes and might create jobs in the long run. The nature of decision making, collaboration, creative art and scientific research will all be affected; so will enterprise structures. Technological systems, including potentially your products and services, as well as your office and factory equipment, will respond to people (and one another) in ways that feel as if they are coming to life.

In their book “Artificial Intelligence: A Modern Approach” (Pearson, 1995), Stuart Russell and Peter Norvig define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.” The most critical difference between AI and general-purpose software is in the phrase “take actions.” AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore can’t anticipate.
The fastest-growing category of AI is machine learning, or the ability of software to improve its own activity by analyzing interactions with the world at large (see “The Road to Deep Learning,” below). This technology, which has been a continual force in the history of computing since the 1940s, has grown dramatically in sophistication during the last few years.

See also: Seriously? Artificial Intelligence?

The Road to Deep Learning

This may be the first moment in AI’s history when a majority of experts agree the technology has practical value. From its conceptual beginnings in the 1950s, led by legendary computer scientists such as Marvin Minsky and John McCarthy, its future viability has been the subject of fierce debate. As recently as 2000, the most proficient AI system was roughly comparable, in complexity, to the brain of a worm. Then, as high-bandwidth networking, cloud computing and high-powered graphics-enabled microprocessors emerged, researchers began building multilayered neural networks — still extremely slow and limited in comparison with natural brains, but useful in practical ways.

The best-known AI triumphs — in which software systems beat expert human players in Jeopardy, chess, Go and poker — differ from most day-to-day business applications. These games have prescribed rules and well-defined outcomes; every game ends in a win, loss or tie. The games are also closed-loop systems: They affect only the players, not outsiders. The software can be trained through multiple failures with no serious risks. You can’t say the same of an autonomous vehicle crash, a factory failure or a mistranslation.

There are currently two main schools of thought on how to develop the inference capabilities necessary for AI programs to navigate through the complexities of everyday life. In both, programs learn from experience — that is, the responses and reactions they get influence the way the programs act thereafter.
The first approach uses conditional instructions (also known as heuristics) to accomplish this. For instance, an AI bot would interpret the emotions in a conversation by following a program that instructed it to start by checking for emotions that were evident in the recent past.

The second approach is known as machine learning. The machine is taught, using specific examples, to make inferences about the world around it. It then builds its understanding through this inference-making ability, without following specific instructions to do so. The Google search engine’s “next-word completion” feature is a good example of machine learning. Type in the word “artificial,” and several suggestions for the next word will appear, perhaps “intelligence,” “selection” and “insemination.” No one has programmed the search engine to seek those complements. Google chose the strategy of looking for the three words most frequently typed after “artificial.” With huge amounts of data available, machine learning can provide uncanny accuracy about patterns of behavior.

The type of machine learning called deep learning has become increasingly important. A deep learning system is a multilayered neural network that learns representations of the world and stores them as a nested hierarchy of concepts many layers deep. For example, when processing thousands of images, it recognizes objects based on a hierarchy of simpler building blocks: straight lines and curved lines at the basic level, then eyes, mouths and noses, then faces, and then specific facial features. Besides image recognition, deep learning appears to be a promising way to approach complex challenges such as speech comprehension, human-machine conversation, language translation and vehicle navigation (see Exhibit A). Though it is the closest machine analogue to a human brain, a deep learning neural network is not suitable for all problems.
It requires multiple processors with enormous computing power, far beyond conventional IT architecture; it will learn only by processing enormous amounts of data; and its decision processes are not transparent.

News aggregation software, for example, had long relied on rudimentary AI to curate articles based on people’s requests. Then it evolved to analyze behavior, tracking the way people clicked on articles and the time they spent reading, and adjusting the selections accordingly. Next it aggregated individual users’ behavior with the larger population, particularly those who had similar media habits. Now it is incorporating broader data about the way readers’ interests change over time, to anticipate what people are likely to want to see next, even if they have never clicked on that topic before. Tomorrow’s AI aggregators will be able to detect and counter “fake news” by scanning for inconsistencies and routing people to alternative perspectives.

AI applications in daily use include smartphone digital assistants, email programs that sort entries by importance, voice recognition systems, image recognition apps such as Facebook Picture Search, digital assistants such as Amazon Echo and Google Home, and much of the emerging Industrial Internet. Some AI apps are targeted at minor frustrations — DoNotPay, an online legal bot, has reversed thousands of parking tickets — and others, such as connected car and language translation technologies, represent fundamental shifts in the way people live. A growing number are aimed at improving human behavior; for instance, GM’s 2016 Chevrolet Malibu feeds data from sensors into a backseat driver–like guidance system for teenagers at the wheel.

Despite all this activity, the market for AI is still small. Market research firm Tractica estimated 2016 revenues at just $644 million. But it expects hockey-stick growth, reaching $15 billion by 2022 and accelerating thereafter.
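The frequency-counting idea behind next-word completion, described earlier, is simple enough to sketch in a few lines. This is a toy illustration with an invented corpus and invented function names, not Google's actual system:

```python
from collections import Counter, defaultdict

def build_bigram_model(corpus):
    """For each word, count which words follow it and how often."""
    followers = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            followers[current][nxt] += 1
    return followers

def suggest_next(followers, word, k=3):
    """Return the k words most frequently typed after `word`."""
    return [w for w, _ in followers[word.lower()].most_common(k)]

# Tiny invented corpus; a real system would count billions of queries.
corpus = [
    "artificial intelligence is advancing",
    "artificial intelligence research",
    "artificial selection in biology",
    "artificial intelligence in business",
]
model = build_bigram_model(corpus)
print(suggest_next(model, "artificial"))  # most frequent followers first
```

No rule anywhere says that "intelligence" follows "artificial"; the ranking falls out of the counts, which is the essence of learning from data rather than from instructions.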
In late 2016, there were about 1,500 AI-related startups in the U.S. alone, and total funding in 2016 reached a record $5 billion. Google, Facebook, Microsoft, Salesforce.com and other tech companies are snapping up AI software companies, and large, established companies are recruiting deep learning talent and, like Monsanto, buying AI companies specializing in their markets. To make the most of this technology in your enterprise, consider the three main ways that businesses can or will use AI:
  • Assisted intelligence, now widely available, improves what people and organizations are already doing.
  • Augmented intelligence, emerging today, enables organizations and people to do things they couldn’t otherwise do.
  • Autonomous intelligence, being developed for the future, creates and deploys machines that act on their own.
See also: Is AI the End of Jobs or a Beginning?

Many companies will make investments in all three during the next few years, drawing from a wide variety of applications (see Exhibit 1). They complement one another but require different types of investment, different staffing considerations and different business models.

Assisted Intelligence

Assisted intelligence amplifies the value of existing activity. For example, Google’s Gmail sorts incoming email into “Primary,” “Social” and “Promotions” default tabs. The algorithm, trained with data from millions of other users’ emails, makes people more efficient without changing the way they use email or altering the value it provides.

Assisted intelligence tends to involve clearly defined, rules-based, repeatable tasks. These include automated assembly lines and other uses of physical robots; robotic process automation, in which software-based agents simulate the online activities of a human being; and back-office functions such as billing, finance and regulatory compliance. This form of AI can be used to verify and cross-check data — for example, when paper checks are read and verified by a bank’s ATM. Assisted intelligence has already become common in some enterprise software processes. In “opportunity to order” (basic sales) and “order to cash” (receiving and processing customer orders), the software offers guidance and direction that was formerly available only from people.

The Oscar W. Larson Co. used assisted intelligence to improve its field service operations. This 70-plus-year-old family-owned general contractor, among other services to the oil and gas industry, provides maintenance and repair for point-of-sale systems and fuel dispensers at gas stations. One costly and irritating problem is “truck rerolls”: service calls that have to be rescheduled because the technician lacks the tools, parts or expertise for a particular issue.
After analyzing data on service calls, the AI software showed how to reduce truck rerolls by 20%, a rate that should continue to improve as the software learns to recognize more patterns.

Assisted intelligence apps often involve computer models of complex realities that allow businesses to test decisions with less risk. For example, one auto manufacturer has developed a simulation of consumer behavior, incorporating data about the types of trips people make, the ways those affect supply and demand for motor vehicles and the variations in those patterns for different city topologies, marketing approaches and vehicle price ranges. The model spells out more than 200,000 variations for the automaker to consider and simulates the potential success of any tested variation, thus assisting in the design of car launches. As the automaker introduces cars and the simulator incorporates the data on outcomes from each launch, the model’s predictions will become ever more accurate. AI-based packages of this sort are available on more and more enterprise software platforms.

Success with assisted intelligence should lead to improvements in conventional business metrics such as labor productivity, revenues or margins per employee and average time to completion for processes. Much of the cost involved is in the staff you hire, who must be skilled at marshaling and interpreting data. To evaluate where to deploy assisted intelligence, consider two questions: What products or services could you easily make more marketable if they were more automatically responsive to your customers? Which of your current processes and practices, including your decision-making practices, would be more powerful with more intelligence?

Augmented Intelligence

Augmented intelligence software lends new capability to human activity, permitting enterprises to do things they couldn’t do before. Unlike assisted intelligence, it fundamentally alters the nature of the task, and business models change accordingly.
For example, Netflix uses machine learning algorithms to do something media has never done before: suggest choices customers would probably not have found themselves, based not just on the customer’s patterns of behavior but on those of the audience at large. A Netflix user, unlike a cable TV pay-per-view customer, can easily switch from one premium video to another without penalty, after just a few minutes. This gives consumers more control over their time. They use it to choose videos more tailored to the way they feel at any given moment. Every time that happens, the system records that observation and adjusts its recommendation list — and it enables Netflix to tailor its next round of videos to user preferences more accurately. This leads to reduced costs and higher profits per movie, and a more enthusiastic audience, which then enables more investments in personalization (and AI). Left outside this virtuous circle are conventional advertising and television networks. No wonder other video channels, such as HBO and Amazon, as well as recorded music channels such as Spotify, have moved to similar models.

Over time, as algorithms grow more sophisticated, the symbiotic relationship between human and AI will further change entertainment industry practices. The unit of viewing decision will probably become the scene, not the story; algorithms will link scenes to audience emotions. A consumer might ask to see only scenes where a Meryl Streep character is falling in love, or to trace a particular type of swordplay from one action movie to another. Data accumulating from these choices will further refine the ability of the entertainment industry to spark people’s emotions, satisfy their curiosity and gain their loyalty.

Another current use of augmented intelligence is in legal research. Though most cases are searchable online, finding relevant precedents still requires many hours of sifting through past opinions.
Luminance, a startup specializing in legal research, can run through thousands of cases in a very short time, providing inferences about their relevance to a current proceeding. Systems like these don’t yet replace human legal research. But they dramatically reduce the rote work conducted by associate attorneys, a job rated as the least satisfying in the U.S. Similar applications are emerging for other types of data sifting, including financial audits, interpreting regulations, finding patterns in epidemiological data and (as noted above) farming.

To develop applications like these, you’ll need to marshal your own imagination to look for products, services or processes that would not be possible at all without AI. For example, an AI system can track a wide number of product features, warranty costs, repeat purchase rates and more general purchasing metrics, bringing only unusual or noteworthy correlations to your attention. Is a high number of repairs associated with a particular region, material or line of products? Could you use this information to redesign your products, avoid recalls or spark innovation in some way?

The success of an augmented intelligence effort depends on whether it has enabled your company to do new things. To assess this capability, track your margins, innovation cycles, customer experience and revenue growth as potential proxies. Also watch your impact on disruption: Are your new innovations doing to some part of the business ecosystem what, say, ride-hailing services are doing to conventional taxi companies?

You won’t find many off-the-shelf applications for augmented intelligence. They involve advanced forms of machine learning and natural language processing, plus specialized interfaces tailored to your company and industry. However, you can build bespoke augmented intelligence applications on cloud-based enterprise platforms, most of which allow modifications in open source code.
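The kind of correlation surfacing described above — flagging only the unusual repair patterns for human attention — can be sketched in a few lines. This is a toy illustration with invented data, field names and a simple standard-deviation rule; real systems would use far richer statistics:

```python
from statistics import mean, stdev

def flag_outliers(rates_by_region, threshold=1.5):
    """Flag regions whose repair rate sits more than `threshold`
    standard deviations above the mean across all regions."""
    rates = list(rates_by_region.values())
    mu, sigma = mean(rates), stdev(rates)
    return {region: rate for region, rate in rates_by_region.items()
            if sigma > 0 and (rate - mu) / sigma > threshold}

# Invented repair rates per region; "west" is unusually high.
repair_rates = {
    "northeast": 0.021, "southeast": 0.019, "midwest": 0.020,
    "southwest": 0.022, "west": 0.094,
}
print(flag_outliers(repair_rates))  # only "west" is flagged
```

Only the anomaly reaches a human; the unremarkable regions never do, which is the point of letting software do the sifting.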
Given the unstructured nature of your most critical decision processes, an augmented intelligence application would require voluminous historical data from your own company, along with data from the rest of your industry and related fields (such as demographics). This will help the system distinguish external factors, such as competition and economic conditions, from the impact of your own decisions. The greatest change from augmented intelligence may be felt by senior decision makers, as the new models often give them new alternatives to consider that don’t match their past experience or gut feelings. They should be open to those alternatives, but also skeptical. AI systems are not infallible; just like any human guide, they must show consistency, explain their decisions and counter biases, or they will lose their value.

Autonomous Intelligence

Very few autonomous intelligence systems — systems that make decisions without direct human involvement or oversight — are in widespread use today. Early examples include automated trading in the stock market (about 75% of Nasdaq trading is conducted autonomously) and facial recognition. In some circumstances, algorithms are better than people at identifying other people. Other early examples include robots that dispose of bombs, gather deep-sea data, maintain space stations and perform other tasks inherently unsafe for people.

The most eagerly anticipated forms of autonomous intelligence — self-driving cars and full-fledged language translation programs — are not yet ready for general use. The closest autonomous service so far is Tencent’s messaging and social media platform WeChat, which has close to 800 million daily active users, most of them in China.
The program, which was designed primarily for use on smartphones, offers relatively sophisticated voice recognition, Chinese-to-English language translation, facial recognition (including suggestions of celebrities who look like the person holding the phone) and virtual bot friends that can play guessing games. Notwithstanding their cleverness and their pioneering use of natural language processing, these are still niche applications, and still very limited by technology. Some of the most popular AI apps, for example, are small, menu- and rule-driven programs, which conduct fairly rudimentary conversations around a limited group of options.

See also: Machine Learning to the Rescue on Cyber?

Despite the lead time required to bring the technology further along, any business prepared to base a strategy on advanced digital technology should be thinking seriously about autonomous intelligence now. The Internet of Things will generate vast amounts of information, more than humans can reasonably interpret. In commercial aircraft, for example, so much flight data is gathered that engineers can’t process it all; thus, Boeing has announced a $7.5 million partnership with Carnegie Mellon University, along with other efforts to develop AI systems that can, for example, predict when airplanes will need maintenance. Autonomous intelligence’s greatest challenge may not be technological at all — it may be companies’ ability to build in enough transparency for people to trust these systems to act in their best interest.

First Steps

As you contemplate the introduction of artificial intelligence, articulate what mix of the three approaches works best for you.
  • Are you primarily interested in upgrading your existing processes, reducing costs and improving productivity? If so, then start with assisted intelligence, probably with a small group of services from a cloud-based provider.
  • Do you seek to build your business around something new — responsive and self-driven products, or services and experiences that incorporate AI? Then pursue an augmented intelligence approach, probably with more complex AI applications resident on the cloud.
  • Are you developing a genuinely new technology? Most companies will be better off primarily using someone else’s AI platforms, but, if you can justify building your own, you may become one of the leaders in your market.
The transition among these forms of AI is not clean-cut; they sit on a continuum. In developing their own AI strategy, many companies begin somewhere between assisted and augmented, while expecting to move toward autonomous eventually (see Exhibit 2). Though investments in AI may seem expensive now, the costs will decline over the next 10 years as the software becomes more commoditized. “As this technology continues to mature,” writes Daniel Eckert, a managing director in emerging technology services for PwC US, “we should see the price adhere toward a utility model and flatten out. We expect a tiered pricing model to be introduced: a free (or freemium) model for simple activities, and a premium model for discrete, business-differentiating services.”

AI is often sold on the premise that it will replace human labor at lower cost — and the effect on employment could be devastating, though no one knows for sure. Carl Benedikt Frey and Michael Osborne of Oxford University’s engineering school have calculated that AI will put 47% of the jobs in the U.S. at risk; a 2016 Forrester Research report estimated it at 6%, at least by 2025. On the other hand, Baidu Research head (and deep learning pioneer) Andrew Ng recently said, “AI is the new electricity,” meaning that it will be found everywhere and create jobs that weren’t imaginable before its appearance.

At the same time that AI threatens the loss of an almost unimaginable number of jobs, it is also a hungry, unsatisfied employer. The lack of capable talent — people skilled in deep learning technology and analytics — may well turn out to be the biggest obstacle for large companies. The greatest opportunities may thus be for independent businesspeople, including farmers like Jeff Heepke, who no longer need scale to compete with large companies, because AI has leveled the playing field.
It is still too early to say which types of companies will be the most successful in this area — and we don’t yet have an AI model to predict it for us. In the end, we cannot even say for sure that the companies that enter the field first will be the most successful. The dominant players will be those that, like Climate Corp., Oscar W. Larson, Netflix and many other companies large and small, have taken AI to heart as a way to become far more capable, in a far more relevant way, than they otherwise would ever be.

Anand Rao


Anand Rao is a principal in PwC’s advisory practice. He leads the insurance analytics practice, is the innovation lead for the U.S. firm’s analytics group and is the co-lead for the Global Project Blue, Future of Insurance research. Before joining PwC, Rao was with Mitchell Madison Group in London.