Is It Time to Buy a Biometric Scanner?
Biometric authenticators are slowly making their way into people’s homes and provide an important, third means of verification.
Get Involved
Our authors are what set Insurance Thought Leadership apart.
|
Partner with us
We’d love to talk to you about how we can improve your marketing ROI.
|
Adam K. Levin is a consumer advocate and a nationally recognized expert on security, privacy, identity theft, fraud, and personal finance. A former director of the New Jersey Division of Consumer Affairs, Levin is chairman and founder of IDT911 (Identity Theft 911) and chairman and co-founder of Credit.com .
A leading figure in the field predicts that self-driving car services will be available in certain communities within the next five years.
[Image: Google Car Crashes With Bus; Santa Clara Transportation Authority]
In the only accident where Google’s SDC was partially at fault, Google’s car was partially blocking the lane of a bus behind it (because of sandbags in its own lane). The car had to decide whether to wait for the bus to pass or merge fully into the lane. The car predicted that the remaining space in the bus’s lane was too narrow and that the bus driver would have to stop. The bus driver looked at the situation and thought, “I can make it,” and didn’t stop. The car went. The bus did, too. Crunch.
Uber's Arizona Rollover
[Image: Uber Driverless Car Crashes in Tempe, AZ]
The Uber SDC was in the leftmost of three lanes. Traffic in the two lanes to its right was stopped due to congestion. The Uber car’s lane was clear, so it continued to move at a good pace.
A human driver wanted to turn left across the three lanes. The turning car pulled out in front of the cars in the two stopped lanes. The driver probably could not see across the blocked lanes to the Uber car’s lane and, given the stopped traffic, expected that whatever might be driving down that lane would be moving slower. It pulled into the Uber car’s lane to make the turn, and the result was a sideways parked car.
See also: Who Is Leading in Driverless Cars?
Tesla's Deadly Florida Crash
[Image: Tesla Car After Fatal Crash in Florida]
The driver had been using Tesla’s Autopilot for a long time, and he trusted it—despite Tesla saying, “Don’t trust it.” Tesla user manuals told drivers to keep their hands on the wheel, eyes in front, etc. The vehicle was expecting that the driver was paying attention and would act as the safety check. The driver thought that Autopilot worked well enough on its own. A big truck pulled in front of the car. Autopilot did not see it. The driver did not intervene. Fatal crash.
Tesla, to its credit, has made modifications to improve the car’s understanding of whether the driver is paying attention. To Urmson, however, the crash highlights the fundamental limitation of relying on human attentiveness as the safety check against the car’s inadequacies.
3. Incremental driver assistance systems will not evolve into driverless cars.
Urmson characterized “one of the big open debates” in the driverless car world as Tesla’s (and other automakers’) approach vs. Google’s. The former is “let’s just keep on making incremental systems and, one day, we’ll turn around and have a self-driving car." The latter is “No, no, these are two distinct problems. We need to apply different technologies.”
Urmson is still “fundamentally in the Google camp.” He believes there is a discrete step in the design space where you have to turn your back on human intervention and trust that the car will not have anyone to take control. The incremental approach, he argues, will guide developers toward a selection of technologies that limits their ability to bridge over to fully driverless capabilities.
4. Don’t let the “Trolley Car Problem” make the perfect into the enemy of the great.
The “trolley car problem” is a thought experiment that asks how driverless cars should handle no-win, life-threatening scenarios—such as when the only possible choices are between killing the car’s passenger or an innocent bystander. Some argue that driverless cars should not be allowed to make such decisions.
Urmson, on the other hand, described this as an interesting philosophical problem that should not be driving the question of whether to bring the technology to market. To let it do so would be “to let the perfect be the enemy of the great.”
Urmson offered a two-fold pragmatic approach to this ethical dilemma. First, cars should never get into such situations. “If you got there, you’ve screwed up.” Driverless cars should be conservative, safety-first drivers that can anticipate and avoid such situations. “If you’re paying attention, they don’t just surprise and pop out at you,” he said. Second, if the eventuality arose, a car’s response should be predetermined and explicit. Tell consumers what to expect and let them make the choice. For example, tell consumers that the car will prefer the safety of pedestrians and will put passengers at risk to protect pedestrians. Such an explicit choice is better than what occurs with human drivers, Urmson argues, who react instinctually because there is not enough time to make any judgment at all.
5. The “mad rush” is justified.
Urmson reminisced about the early days when he would talk to automakers and tier 1 suppliers about the Google program and he “literally got laughed at.” A lot has changed in the last five years, and many of those skeptics have since invested billions in competing approaches.
Urmson points to the interaction between automation, environmental standards, electric vehicles and ride sharing as the driving forces behind the rush toward driverless. (Read more about this virtuous cycle.) Is it justified? He thinks so, and points to one simple equation to support his position:
3 trillion VMT * $0.10 per mile = $300B per year

In 2016, vehicles in the U.S. traveled about 3.2 trillion miles. If you could bring technology to bear to reduce the cost or increase the quality of those miles and charge 10 cents per mile, that would add up to $300 billion in annual revenue—just in the U.S.

This equation, he points out, is driving the market infatuation with Transportation as a Service (TaaS) business models. The leading contenders in the emerging space, Uber, Lyft and Didi, have a combined market valuation of about $110 billion—roughly equal to the market value of GM, Ford and Chrysler. Urmson predicts that one of these clusters will see its market value double in the next four years. The race is to see who reaps this increased value.

See also: 10 Questions That Reveal AI’s Limits

6. Deployment will happen “relatively quickly.”

To the inevitable question of “when,” Urmson is very optimistic. He predicts that self-driving car services will be available in certain communities within the next five years.
“You won’t get them everywhere. You certainly are not going to get them in incredibly challenging weather or incredibly challenging cultural regions. But you’ll see neighborhoods and communities where you’ll be able to call a car, get in it, and it will take you where you want to go.”

(Based on recent Waymo announcements, Phoenix seems a likely candidate.)

Then, over the next 20 years, Urmson believes we’ll see a large portion of the transportation infrastructure move over to automation.

Urmson concluded his presentation by calling it an exciting time for roboticists. “It’s a pretty damn good time to be alive. We’re seeing fundamental transformations to the structure of labor and the structure of transportation. To be a part of that and have a chance to be involved in it is exciting.”
Chunka Mui is the co-author of the best-selling Unleashing the Killer App: Digital Strategies for Market Dominance, which in 2005 the Wall Street Journal named one of the five best books on business and the Internet. He also cowrote Billion Dollar Lessons: What You Can Learn from the Most Inexcusable Business Failures of the Last 25 Years and A Brief History of a Perfect Future: Inventing the World We Can Proudly Leave Our Kids by 2050.
Once your organization jumps the gap, you’ll put distance between your organization and those that didn’t act on their knowledge.
Denise Garth is senior vice president, strategic marketing, responsible for leading marketing, industry relations and innovation in support of Majesco's client-centric strategy.
Smartphone apps are perfect for, say, detecting depression by watching for a drop in exercise and movement and fewer social interactions.
Apps can also help with treatment by sending reminders about medication or appointments, regardless of the person’s location. And they can provide distraction from cravings or link with social networks at times of stress. This “nudging” is effective at altering behavior; for example, integrating text messaging in smoking cessation programs improved six-month cessation rates by 71% compared with the regular treatment.
However, work remains to be done before apps can integrate with insurers' processes. The confidentiality and use of personal data generated and stored by apps is complicated and needs clarification. The accuracy and sufficiency of information is a potential concern, and hardware constraints may limit potential. More evaluation of the impact of digital technology is needed in research and clinical practice.
See also: Not Your Mama’s Recipe for Healthcare
Meanwhile, insurers could engage with emerging providers of software solutions. Services like these will, over a relatively short time, become highly influential in the lives of people living with mental health problems. Pilot schemes that compare current insurance methods while evaluating new ones would take us one big step forward.
Ross Campbell is chief underwriter, research and development, based in Gen Re’s London office.
The sharing economy is exposing situations in which new liabilities need coverage. Many are not covered by standard insurance policies.
Robin Roberson is the managing director of North America for Claim Central, a pioneer in claims fulfillment technology with an open two-sided ecosystem. As previous CEO and co-founder of WeGoLook, she grew the business to over 45,000 global independent contractors.
The FTC appears to be taking preemptive measures against a company making IoT devices, not waiting for a cyberattack to occur first.
John Farley is a vice president and cyber risk consulting practice leader for HUB International's risk services division. HUB International is a North American insurance brokerage that provides an array of property and casualty, life and health, employee benefits, reinsurance, investment and risk management products and services.
Why do you buy a product or pay for a service? What motivates your customers to say “yes” to what you are offering?
The ever-so-catchy “Every Kiss Begins with Kay” that’s helped the jeweler sell loads of diamonds; and
My local favorite, Digicel, “The Bigger, Better Network.”
A lot of companies understand the science behind what makes you say “yes,” and you can thank Dr. Robert Cialdini for it. In his book, “Influence: The Psychology of Persuasion,” Dr. Cialdini showed that people do what they observe other people doing. It’s a principle that’s based on the idea of safety in numbers. For example, when I am feeling for a good doubles (a sandwich sold on the street that those of you not from Trinidad and Tobago are missing out on), I will automatically gravitate to the doubles man who has a lot of people around him. I will be very cautious of someone selling doubles who has just a few people buying.
That is the science of social proof. If a group of people is looking to the back of the elevator, an individual who enters the elevator will do the same, even if it looks funny. Companies use this all the time. Anyone shopping on Amazon can read tons of customer feedback on any product. Some companies show their Facebook likes and Twitter followers.
Whether we admit it or not, most of us are impressed when someone has a ton of subscribers, Twitter followers, YouTube views, blog reviews, etc.
Cialdini's six principles of persuasion (which are very similar to mine, even though I didn't know who he was until a month ago) are:
Algorithms have been developed and are moving from a proof-of-concept phase in academia to implementations in insurance firms.
The impacts of full independence and partial dependence, which are inevitably present in a full insurance book of business, guarantee that the sub-additivity principle for premium accumulation comes into effect. In our case study, sub-additivity has two related expressions. Between the two time periods, the acquisition of the dependence data set θT, which is used for modeling and for defining the correlation structure ρi,N, justifies a temporal sub-additivity inequality between the total premiums of the insurance firm, stated in (10.1).
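The text refers to inequality (10.1) without reproducing it. Based solely on the surrounding definitions, a plausible sketch of its form is the following; the symbols ΠT (total firm premium accumulated under an assumption of full dependence, before the dependence data θT is acquired) and Π′T+1 (total premium accumulated with the modeled correlation structure ρi,N) are reconstructions, not necessarily the source's own notation:

```latex
% Hypothetical sketch of temporal sub-additivity (10.1):
% total premium accumulated with modeled (partial) dependence does not
% exceed total premium accumulated under full dependence.
\Pi'_{T+1} \;=\; \sum_{i=1}^{N} \pi'_{T+1}[r_i]
\;\le\;
\Pi_{T} \;=\; \sum_{i=1}^{N} \pi_{T}[r_i]
```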
It is undesirable for any insurance firm to intentionally lower its total cumulative premium by relying on diversification. However, one implication for underwriting guidelines is that, after the total firm premium is accumulated with a model that accounts for inter-risk dependencies, this total monetary amount can be back-allocated to individual risks and policies, providing a sustainable competitive edge in pricing. The business function of diversification, and of capturing its consequent premium cost savings, is achieved through two statistical operations: accumulating pure flood premium with a correlation structure, and then back-allocating the total firm’s premium down to single-risk granularity. A backwardation relationship for the back-allocated single-risk, single-policy premium π′T+1[rN] can be derived with a proportional standard-deviation ratio. This per-risk back-allocation ratio is constructed from the single-risk standard deviation of expected loss σT+1[rN] and the linear sum of all per-risk standard deviations in the insurance firm’s book of business.
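The proportional ratio described in this paragraph can be written out explicitly. This is a reconstruction from the prose; the symbol Π′T+1 for the total accumulated firm premium is an assumption:

```latex
% Back-allocation of the total firm premium to a single risk r_N via the
% proportional standard-deviation ratio described in the text:
\pi'_{T+1}[r_N] \;=\; \Pi'_{T+1} \cdot
  \frac{\sigma_{T+1}[r_N]}{\sum_{i=1}^{N} \sigma_{T+1}[r_i]}
```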
From the temporal sub-additivity inequality between total firm premiums in (10.1) and the back-allocation process for total premium
In all of our case studies we have focused on the impact of measuring geo-spatial dependencies and on their interpretation and usability in risk and premium diversification. For the actuarial task of premium accumulation across business units, we assume that the insurance firm will simply roll up unit total premiums and will not look for competitive pricing as a result of diversification across business units. This practice is justified because underwriting and pricing guidelines are managed somewhat autonomously by geo-admin business unit, and premium and financial reporting is done in the same manner.
In our numerical case study we prove that the theoretical inequality (10.1), which defines temporal sub-additivity of premium with and without the modeled impact of dependence, is maintained. Total business unit premium computed without modeled correlation data and under assumption of full dependence
For each single risk we observe that the per-risk premium inequality (12.0) is maintained by the numerical results. Partial dependence, which can be viewed as the statistical-modeling expression of imperfect insurance risk diversification, could thus lead to opportunities for competitive premium pricing and cost savings for the insured on a per-risk and per-policy basis.
3.0 Functions and Algorithms for Insurance Data Components
3.1 Definition of Insurance Big Data Components
Large insurance data components facilitate and practically enable the actuarial and statistical tasks of measuring dependencies, accumulating modeled losses and back-allocating total business unit premium to single-risk policies. For this study, our definition of big insurance data components covers historical and modeled data at high geospatial granularity, structured in up to one million simulation maps. For modeling of a single (re)insurance product, a single map can contain a few hundred historical, modeled, physical-measure data points. At the large book-of-business or portfolio scale, one map may contain millions of such data points. Time complexity is another feature of big data: global but structured and distributed data sets are updated asynchronously and oftentimes without a schedule, depending on scientific and business requirements and computational resources. Thus such big data components have a critical and indispensable role in defining competitive premium cost savings for the insureds, which otherwise may not be found sustainable by the policy underwriters and the insurance firm.
3.2 Intersections of Exposure, Physical and Modeled Simulated data sets
Fast compute and big data platforms are designed to perform various geospatial modeling and analysis tasks. A fundamental task is projecting an insured exposure map and computing its intersection with multiple simulated stochastic flood intensity maps and with geo-physical property maps containing coastal and river bank elevations and distances to water bodies. This algorithm performs spatial caching and indexing of all latitude- and longitude-geo-coded units and grid cells with insured risk exposure and modeled stochastic flood intensity. Geo-spatial interpolation is also employed to compute and adjust peril intensities for the distances and geo-physical elevations of the insured risks.
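As a concrete illustration of the grid-cell indexing and intersection step, here is a minimal Python sketch. The cell size, the elevation adjustment, and all function and variable names are illustrative assumptions, not the production algorithm:

```python
# Sketch of the exposure/hazard-map intersection described above. All names,
# the 0.1-degree grid resolution and the elevation attenuation are assumptions.
from collections import defaultdict

CELL = 0.1  # grid-cell size in degrees (assumed)

def cell_of(lat, lon):
    """Map a geo-coded location to its grid-cell key."""
    return (int(lat // CELL), int(lon // CELL))

def index_exposures(exposures):
    """Spatially index insured locations by grid cell (the 'caching' step)."""
    index = defaultdict(list)
    for risk_id, lat, lon, elevation_m in exposures:
        index[cell_of(lat, lon)].append((risk_id, elevation_m))
    return index

def intersect(exposure_index, intensity_map):
    """Join each simulated flood-intensity cell with the exposures inside it,
    attenuating flood depth by insured-site elevation (toy adjustment)."""
    hits = {}
    for cell, depth_m in intensity_map.items():
        for risk_id, elevation_m in exposure_index.get(cell, ()):
            hits[risk_id] = max(0.0, depth_m - 0.01 * elevation_m)
    return hits

# Two insured risks and one simulated flood-intensity map (toy data).
exposures = [("r1", 51.51, -0.12, 10.0), ("r2", 51.58, -0.33, 200.0)]
intensity = {cell_of(51.51, -0.12): 2.0, cell_of(51.58, -0.33): 1.5}
idx = index_exposures(exposures)
print(intersect(idx, intensity))
```

In a real platform the dictionary lookup would be replaced by a spatial index over millions of cells, but the join structure is the same.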
3.3 Reduction and Optimization through Mapping and Parallelism
One definition of Big Data relevant to our own study is: data sets that are too large and too complex to be processed by traditional technologies and algorithms. In principle, moving data is the most computationally expensive task in solving big geo-spatial problems, such as modeling and measuring inter-risk dependencies and diversification in an insurance portfolio. The cost of big geo-spatial solutions is magnified because large geo-spatial data sets are typically distributed across multiple physical computational environments as a result of their size and structure. The solution is distributed optimization, which is achieved by a sequence of algorithms. As a first step, a mapping and splitting algorithm divides large data sets into sub-sets and performs statistical and modeling computations on the smaller sub-sets. In our computational case study the smaller data chunks represent insurance risks and policies in geo-physically dependent zones, such as river basins and coastal segments. The smaller data sets are processed as smaller sub-problems in parallel by appropriately assigned computational resources. In our model we solve the chunked, smaller-scale computations first for flood intensity and then for modeling and estimating fully simulated, probabilistic insurance loss. Once the operations on the smaller sub-sets are complete, a second algorithm collects and maps together the results of the first-stage compute for subsequent data analytics and presentation. For single insurance products, business units and portfolios, an ordered accumulation of risks is achieved via mapping by the strength (or absence) of their dependencies. Data sets and tasks with identical characteristics can be grouped together, and the resources needed for their processing significantly reduced, by avoiding replication of computational tasks that have already been mapped and can now be reused.
The stored post-analytics, post-processed data could also be distributed on different physical storage capacities by a secondary scheduling algorithm, which intelligently allocates chunks of modeled and post-processed data to available storage resources. This family of techniques is generally known as MapReduce.
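A minimal sketch of this split/compute/collect pattern follows. The zone keys, the toy per-zone premium function and the thread-based parallelism are assumed stand-ins for the real modeling steps, not the platform's actual implementation:

```python
# MapReduce-style sketch of the split/compute/collect pipeline described above.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def split_by_zone(policies):
    """Split step: partition the book into geo-physically dependent zones
    (e.g. river basins, coastal segments), so each chunk is modeled alone."""
    chunks = defaultdict(list)
    for policy_id, zone, expected_loss in policies:
        chunks[zone].append((policy_id, expected_loss))
    return chunks

def price_zone(chunk):
    """Per-chunk computation: a toy pure-premium accumulation for one zone."""
    return sum(loss for _, loss in chunk)

def accumulate(policies, workers=4):
    """Map the zone computations out in parallel, then reduce (collect)."""
    chunks = split_by_zone(policies)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        zone_premiums = dict(zip(chunks, pool.map(price_zone, chunks.values())))
    return zone_premiums, sum(zone_premiums.values())

book = [("p1", "basin-A", 100.0), ("p2", "basin-A", 50.0), ("p3", "coast-B", 75.0)]
per_zone, total = accumulate(book)
print(per_zone, total)  # basin-A: 150.0, coast-B: 75.0, total 225.0
```

In production the map step would run across machines rather than threads, but the division into independently priced chunks followed by a single collection pass is the essential MapReduce shape.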
3.4 Scheduling and Synchronization by Service Chaining
Distributed and service chaining algorithms process geo-spatial analysis tasks on data components simultaneously and automatically. For logically independent processes, such as computing intensities or losses on uncorrelated iterations of a simulation, service chaining algorithms will divide and manage the tasks among separate computing resources. Dependencies and correlations among such data chunks may not exist because of large geo-spatial distances, as we saw in the modeling and pricing of our case studies. Hence they do not have to be accounted for computationally, and performance improvements are gained. For such cases both input data and computational tasks can be broken down into pieces and sub-tasks, respectively. For logically inter-dependent tasks, such as accumulations of inter-dependent quantities like losses in geographic proximity, chaining algorithms automatically order the start and completion of dependent sub-tasks. In our modeled scenarios, the simulated loss distributions of risks in immediate proximity are accumulated first, where dependencies are expected to be strongest. A second tier of accumulations, for risks with partial dependence and full independence measures, is scheduled once the first tier of accumulations of highly dependent risks is complete. Service chaining methodologies work in collaboration with auto-scaling memory algorithms, which provide or remove computational memory resources depending on the intensity of modeling and statistical tasks. Challenges are still significant in processing shared data structures. An insurance risk management example, which we are currently developing for our next working paper, would be pricing a complex multi-tiered product comprised of many geo-spatially dependent risks, and then back-allocating a risk metric, such as tail value at risk, down to single-risk granularity.
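The two-tier accumulation schedule can be sketched as follows. The comonotonic (linearly additive) tier-1 rule for tightly clustered risks and the independent (variance-additive) tier-2 rule across distant clusters are simplifying assumptions used only for illustration:

```python
# Sketch of the two-tier accumulation schedule described above: accumulate
# strongly dependent risks within each proximity cluster first (tier 1),
# then combine cluster totals, treated as independent, in tier 2.
import math

def tier1_accumulate(cluster_sigmas):
    """Tier 1: within a cluster, assume full dependence, so per-risk loss
    standard deviations add linearly (comonotonic assumption)."""
    return sum(cluster_sigmas)

def tier2_accumulate(cluster_totals):
    """Tier 2: across geo-spatially distant clusters, assume independence,
    so variances (squared standard deviations) add."""
    return math.sqrt(sum(s * s for s in cluster_totals))

clusters = {"river-basin": [10.0, 5.0], "coastal-segment": [8.0]}
tier1 = {name: tier1_accumulate(s) for name, s in clusters.items()}
portfolio_sigma = tier2_accumulate(tier1.values())
print(tier1, portfolio_sigma)  # tier 1: 15.0 and 8.0; tier 2: 17.0
```

A scheduler following this pattern can only start tier 2 once every tier-1 cluster total is available, which is exactly the ordering constraint service chaining enforces.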
On the statistical level, this back-allocation and risk management task involves a process called de-convolution (also known as component convolution). A computational and optimization challenge arises when highly dependent and logically connected statistical operations are performed on chunks of data distributed across different hard storage resources. Solutions are being developed for multi-threaded implementations of map-reduce algorithms that address such computationally intensive tasks. In such procedures, the mapping is done by task definition and not directly onto the raw and static data.
Some Conclusions and Further Work
With advances in computational methodologies for natural catastrophe and insurance portfolio modeling, practitioners are producing increasingly larger data sets. Simultaneously, single product and portfolio optimization techniques that take advantage of metrics of diversification and inter-risk dependency are used in insurance premium underwriting. Such optimization techniques significantly increase the frequency of production of insurance underwriting data and require new types of algorithms that can process multiple large, distributed and frequently updated data sets. Such algorithms have been developed theoretically, and they are now moving from proof of concept in academic environments to production implementations in the modeling and computational systems of insurance firms.
Both traditional statistical modeling methodologies such as premium pricing, and new advances in definition of inter-risk variance-covariance and correlation matrices and policy and portfolio accumulation principles, require significant data management and computational resources to account for the effects of dependencies and diversification. Accounting for these effects allows the insurance firm to support cost savings in premium value for the insurance policy holders.
With many of the reviewed advances at present, there are still open areas for research in statistical modeling, single product pricing and portfolio accumulation and their supporting optimal big insurance data structures and algorithms. Algorithmic communication and synchronization cost between global but distributed structured and dependent data is expensive. Optimizing and reducing computational processing cost for data analytics is a top priority for both scientists and practitioners. Optimal partitioning and clustering of data, and particularly so of geospatial images, is one other active area of research.
Ivelin Zvezdov is a financial economist by training with experience in quantitative analysis and risk management for (re)insurance and natural catastrophe modeling, fixed income and commodities trading. Since 2013 he leads the product development effort of AIR Worldwide's next generation modeling platform.
After years of cost-cutting and downsizing, companies have realized they can’t shrink their way to success.
Creating businesses is the challenge of the day for large organizations. After years of cost-cutting and downsizing, companies have realized they can’t shrink their way to success.
In a world where what’s possible is advancing at breakneck speeds, social behavior, technology and global economy are driving forces for change. Established brands have realized they can’t stay relevant, differentiate themselves or gain a competitive advantage by tweaking aging product portfolios, buying out rivals or expanding to developing nations.
Innovation is crucial now more than ever, so companies must become Janus-like — looking in two directions at once, with one face focused on the old that still accounts for the bulk of their revenue and the other seeking out the new.
Innovation brings the hope of new value and the fear of the unknown. It is often born at the fringes of an organization’s established divisions and, at times, it exists in the spaces between. The truth is that innovation is a messy business. The high levels of uncertainty associated with new ventures need adaptive organizational structures to succeed. A company's operating, financial and governance models are seldom the same as existing businesses'. In fact, most new business models are not fully defined in the beginning; they become clearer as new strategies are tried, customer needs are understood and anticipated and new applications are developed to facilitate new experiences. Too often, this uncertainty results in half-baked, superficial changes that happen at the edge because it is easiest there: they require minimal organizational effort and get the most visibility. Launching innovation labs, incubators or venture units requires little more than a few bodies on the ground in a trendy office, even if they don’t produce much tangible value after the post-launch media hype wears off.
See also: Secret Sauce for New Business Models?
Crossing the threshold to innovate is imperative, but the transition from the current tried-and-tested state to a new state with unfamiliar rules and values is daunting for most people. It takes clarity of vision to create momentum and inspire others. Above all, it’s a balancing act between the old and the new cultures, which are often placed in conflict with one another if the company takes an either/or approach to corporate entrepreneurship.
Even when a breakthrough innovation is ready to be implemented, delivery becomes impossible in this corporate environment. Most leaders find there’s a fine line between corporate entrepreneurship and insubordination.
I get asked by CEOs and heads of departments how we solve these problems. How do we make a real impact with consensus and harmony? I suggest a new approach is called for, one that blends these cultures to avoid extreme behavior and creates equilibrium in areas of strategy, operations and organization. We have only to look at any successful enterprise such as Apple, Uber or Netflix, and we’ll find innovation at its core. These companies are bold about taking risks, driving change for the better and doing it at scale through human-centered design. This understanding, together with building a collaborative culture that actively seeks out solutions to challenging problems and identifies relevant strategies, continues to expand the realm of the possible.
Shahzadi Jehangir is an innovation leader and expert in building trust and value in the digital age, creating scalable new businesses generating millions of dollars in revenue each year, with more than $10 million last year alone.
Who knew proof of insurance could be a major news story? I suppose you could frame the question as, How many more ways can Dwight Howard get fans mad at him? But I prefer the insurance angle.
The story goes like this:
The 31-year-old center for the Atlanta Hawks was pulled over a bit after 2 in the morning on April 28 for driving 95 mph in a 65-mph zone about 10 miles from his home. Police found that he was driving his Audi RS7 with a suspended registration and without proof of insurance. They let him off with a verbal warning for the speeding and the suspended registration—one of the perks of being an NBA star, I suppose—but towed his car because he couldn't prove his claim that he had insurance.
The incident might have stayed under the radar without the towing but popped up on sports sites this week and caused a stir among fans. Why was Howard out at 2 in the morning on the day of a playoff game? Was his late night the reason he played poorly later that day in a game that eliminated the Hawks from the playoffs? The car towing fed into the narrative about Howard, a supremely talented player who has never lived up to expectations, especially in the playoffs, and is now with his fourth team. Atlanta fans are outraged, while fans of his three previous teams are chuckling and saying, "Told you so."
All because Howard couldn't produce proof of insurance.
Did he have insurance? He certainly can afford it. He earns more than $23 million a year and has been making that kind of money for a long time now. But we don't know for sure, because of the archaic systems we use that mean most of us carry proof of insurance as little pieces of paper in our cars.
At its core, insurance is as digital as any industry there is—we basically track a whole lot of data on people, curate a mass of very precise promises and wire money—so it strikes me as odd that we turn the data into paper and PDFs and handle them manually. Why can't we leave the data in its native state and just make it available whenever and wherever the bits and bytes are needed?
That question is why I'm a fan of GAPro, a startup that is trying to rewire the industry to stop these unnatural acts that we perform on data and to make the industry much more efficient. If you share my belief that data should stay in its native state, then I encourage you to read the article on Luddites below, by Chet Gladkowski of GAPro, which lays out the company's argument in detail.
Speaking of rewiring the industry for efficiency...our friends at Pypestream and our friends at EY (yes, we introduced them to each other) made an important announcement this week. EY will help clients implement Pypestream's intelligent messaging, which is cutting the costs of customer service while making customers happier (how many times can you say that with a straight face?) and which is now moving into core operations at insurers, too. Pypestream is becoming the industry standard for chatbots and other aspects of intelligent messaging—as it should, in my humble opinion—and the alliance with EY will accelerate the trend.
Cheers,
Paul Carroll,
Editor-in-Chief
Paul Carroll is the editor-in-chief of Insurance Thought Leadership.
He is also co-author of A Brief History of a Perfect Future: Inventing the Future We Can Proudly Leave Our Kids by 2050 and Billion Dollar Lessons: What You Can Learn From the Most Inexcusable Business Failures of the Last 25 Years and the author of a best-seller on IBM, published in 1993.
Carroll spent 17 years at the Wall Street Journal as an editor and reporter; he was nominated twice for the Pulitzer Prize. He later was a finalist for a National Magazine Award.