Tag Archives: fake news

‘Fake News’ Reaches Risk Management

With all the legitimate concerns about the wildfires in the West, I was dismayed to see that people in Oregon were declining to evacuate because they were convinced that antifa would loot their homes. To try to catch members of antifa, vigilantes even set up roadblocks and demanded that those trying to leave present ID.

Authorities have said that, despite the rumors, there is no evidence of any involvement by antifa in setting fires, and there have been no reports of looting. But such “fake news,” amplified on social media, is complicating crisis management in Oregon. I’m afraid the pernicious effects of “fake news” will only grow — and massively — for risk managers.

What’s happening with wildfires will surely happen, in exponentially greater fashion, when a vaccine for the coronavirus becomes available, likely within the next several months. We’ll head into a stretch where “fake news” about a vaccine could easily overwhelm real news and real science, given the environment of fear about the virus and suspicion about political motivations. Rumors, and even deliberately faked “news,” could determine millions of decisions about whether to take the vaccine, whether to fully reopen businesses and schools and whether individuals will try to resume their normal lives.

In a thoroughly rational world, risk managers could make thoroughly rational decisions about how a vaccine would be rolled out. Then risk managers could advise on how quickly offices, factories and small businesses could safely reopen and on the myriad other issues that will face companies as we try to feel our way back toward the pre-COVID economy.

But we don’t live in a thoroughly rational world. We live in what some call a “post-truth” world, where likes and shares on Facebook matter more than veracity, where the debunked “Plandemic” conspiracy video carries more weight for many than public health authorities do. We vote on truth these days, certainly when deciding what journalism to believe but even when choosing what science to accept. So, anyone who wants to predict what risks will look like over the next six months to a year, as the vaccine rolls out, will need to channel his or her inner Nate Silver.

Even in the best of circumstances, the next six months to a year would be tough for risk managers to plan for. The FDA will approve a vaccine as long as it's 50% effective (once it's determined to be safe), which leaves a lot of room for indecision. Those in vulnerable groups will still be cautious if told a vaccine is only 50% or 60% likely to protect them. At some point, through vaccines and through immunity achieved the hard way, by getting sick and recovering, enough people will become protected that we will achieve herd immunity — but at what level does that happen? I've seen estimates ranging from 20% to 80% as the share of a population that needs to be immune to render us generally safe. There's a lot of room for uncertainty between those numbers. And it's not clear how long immunity will last — it could be as little as several months. Until people know how safe it is to venture out again, and start acting predictably, all planning will be iffy.
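
One rough way to see why the estimates span such a wide range is the textbook herd-immunity formula, which ties the threshold H to the assumed basic reproduction number R_0, the average number of people each infected person goes on to infect. Treat this as a simplification for intuition, not a forecast:

H = 1 - 1/R_0

Plug in R_0 = 1.25 and the threshold is 20%; plug in R_0 = 5 and it is 80%. Different assumptions about how contagious the virus is, and about how well a vaccine works, produce very different answers.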

Now consider our actual circumstances and all the misinformation — and even disinformation — that will be tossed into the mix.

The Trump administration has pushed a political agenda that has often run roughshod over the science on issues related to the pandemic (hydroxychloroquine, masks, convalescent plasma, etc.). At times, the administration even contradicts itself, with the president saying one thing while public health authorities may say something very different. So, even if the president declares victory on a vaccine, many people might see that claim as “fake news.”

Those ignoring or disputing the president could, in turn, spark a reaction from his supporters, which could produce the kind of political divide that occurred over wearing masks and create the sort of angry environment that fosters misinformation and disinformation. Michael Caputo, a spokesman for the Department of Health and Human Services, may have given us a taste of what lies ahead when he bizarrely claimed over the weekend that there is a “resistance unit” within the Centers for Disease Control where officials are willing to commit “sedition” to undercut the president and that “they’re going to have to kill me.” [Note: After this article was published, Caputo apologized for his remarks, then announced that he is taking a 60-day leave of absence.]

The anti-vaxxers will have their say, too. So may the Russians and any other foreign actors interested in stirring up confusion and rancor in the U.S.

“Fake news” is going to have its way with us, complicating every decision that we make as individuals, as families and as leaders of our organizations as we try to determine the pandemic’s risk over the next six months to a year. Pity the poor risk managers who have to predict how all those individual decisions will play out, so they can spot the key risks and help all of us mitigate them.

Fasten your seatbelts, then check your airbags. Okay, maybe put on a helmet and some body armor, too.

This will be a bumpy ride.

Stay safe.

Paul

P.S. On a personal level, I ask that we all be the place where rumors go to die. Having spent too much time on Twitter, I’d say the most dangerous words known to man are, “Interesting, if true” — which almost always introduces some article that has a 0.0% chance of being true. I think we’ll all be better off if we only share information that we’ve personally vetted and would be prepared to defend on a witness stand.

P.P.S. Here are the six articles I’d like to highlight from the past week:

Creating the Future of Distribution

Having partnerships and an ecosystem becomes very strategic as insurers expand their reach and presence to where their customers will be.

For Agents, COVID Means Digital or Bust

Survival in the era of COVID-19 will be determined by the independent agent’s ability to implement digitization.

How to Evaluate AI Solutions

There are five main concerns when implementing regulatory technology, especially AI technology, in the financial sector.

You Can Still Have Personal Interactions

The challenge in these socially distant times is how to create real relationships with customers despite so much of the exchange being digital.

Navigating Security in the Remote Paradigm

While companies have been improving during the work-from-home phase, bad guys have been busy, too, and deep fakes are getting scary.

What My $18,289 Medical Bill Says

Systemic problems don’t sound catchy, don’t boil down to one sentence and take time to implement — but we need systemic solutions.

Facebook, WhatsApp Are Dangerous

Facebook’s woes are spreading globally, first from the U.S. to Europe and now to Asia.

A landmark study by researchers at the University of Warwick in the U.K. has established that Facebook has been fanning the flames of hatred in Germany. The study found that the rich and the poor, the educated and the uneducated, and those living in large cities and those in small towns were equally susceptible to online hate speech about refugees and its incitement to violence, with the incidence of hate crimes rising directly with per-capita Facebook use.

And during Germany-wide Facebook outages, which resulted from programming or server problems at Facebook, anti-refugee hate crimes practically vanished — within weeks.

As the New York Times explains, Facebook’s algorithms reshape a user’s reality: “These are built around a core mission: promote content that will maximize user engagement. Posts that tap into negative, primal emotions like anger or fear, studies have found, perform best and so proliferate.”

Facebook started out as a benign open social-media platform to bring friends and family together. Increasingly obsessed with making money, and unhindered by regulation or control, it began selling advertising access to its users to anybody who would pay. It focused on gathering all of the data it could about them and keeping them hooked to its platform. More sensational Facebook posts attracted more views, a win-win for Facebook and its hatemongers.

See also: Too Much Tech Is Ruining Lives  

India

In countries such as India, WhatsApp is the dominant form of communication. And sadly, it is causing even greater carnage than Facebook is in Germany; there have already been dozens of deaths.

WhatsApp was created to send text messages between mobile phones. Voice calling, group chat and end-to-end encryption were features that were bolted on to its platform much later. Facebook acquired WhatsApp in 2014 and started making it as addictive as its web platform — and capturing data from it.

The problem is that WhatsApp was never designed to be a social-media platform. It doesn’t allow even the most basic independent monitoring. For this reason, it has become an uncontrolled platform for spreading fake news and hate speech. It also poses serious privacy concerns due to its roots as a text-messaging tool: a user’s primary identification is a mobile number, so people are susceptible everywhere and at all times to anonymous harassment by other chat-group members.

On Facebook, when you see a posting, you can, with a click, learn about the person who posted it and judge whether the source is credible. On WhatsApp, where you have no more than a phone number and possibly a name, there is no way to know the source or intent of a message. Moreover, anyone can contact users and use special tools to track them. Imagine the dangers to children who happen to post messages in WhatsApp groups, where it isn’t apparent who the other members are; or the risks to people being targeted by hate groups.

Facebook faced a severe backlash when it was revealed that it was seeking banking information to boost user engagement in the U.S. In India, it is taking a different tack, adding mobile-payment features to WhatsApp. This will dramatically increase the dangers. Anyone with whom a user has ever transacted will have that user’s mobile number and can use it to harass them. People will be tracked in new ways.

Facebook is a flawed product, but its flaws pale in comparison with WhatsApp’s. If these were cars, Facebook would be the one without safety belts — and WhatsApp the one without brakes.

That is why India’s technology minister, Ravi Shankar Prasad, was right to demand that WhatsApp “find solutions to these challenges which are downright criminal and violation of Indian laws.” The demands he made, however, don’t go far enough.

Prasad asked WhatsApp to operate in India under an Indian corporate entity; to store Indian data in India; to appoint a grievance officer; and to trace the origins of fake messages. The problems with WhatsApp, though, are more fundamental. You can’t have public meeting spaces without any safety and security measures for unsuspecting citizens. WhatsApp’s group-chat feature needs to be disabled until it is completely redesigned with safety and security in mind. This on its own could halt the carnage that is happening across the country.

Lesson from Germany

India — and the rest of the world — also needs to take a page from Germany, which last year approved a law against online hate speech, with fines of as much as 50 million euros for platforms such as Facebook that fail to delete “criminal” content. The E.U. is considering taking this one step further and requiring content flagged by law enforcement to be removed within an hour.

The issue of where data are being stored may be a red herring. The problem with Facebook isn’t the location of its data storage; it is, rather, the uses the company makes of the data. Facebook requires its users to grant it “a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content” they post to the site. It assumes the right to use family photos and videos — and financial transactions — for marketing purposes and to resell them to anybody.

See also: The World Doesn’t Need Silicon Valley  

Every country needs to have laws that explicitly grant their citizens ownership of their own data. Then, if a company wants to use their data, it must tell them what is being collected and how it is being used, and seek permission to use it in exchange for a licensing fee.

The problems arising from faceless corporate pillage can be solved only by enforcing respect for individual rights and legal accountability.

How Bad Leads Are Like Fake News

“Fake news” is a hot topic in the wake of the 2016 election. We’ve found ourselves questioning articles we read before we share them on our Facebook page or tweet them out to our followers. The internet is littered with stuff like the “Pope Francis shocks world, endorses Donald Trump for president” story, and it can be frustrating and time-consuming trying to determine if the news you’re reading is fake.

Similarly, many insurance marketers are challenged each day with the frustrating and time-consuming task of separating “fake leads” from legitimate leads.

Agents refer to low-quality leads as aged leads, fraudulent leads, manufactured leads or manipulated leads. Whatever you call them, they are leads sold to you that should not be sold at all. Those who do not follow through on promises made, or who ignore the directives of the ping post ecosystem, are perpetuating this problem.

See also: Don’t Believe Your Own Fake News!  

Nothing diminishes agent morale more than when customers say they never filled out a form or they filled it out two months ago. Our recent work with insurance providers in examining the origin and history of leads purchased has revealed that as many as one out of every three leads comes from consumers with no, or very low, intent.

Without clarity into the lead generation process and where consumers are in their purchasing journey, agents and carriers are often subject to fake leads, aged leads, a negative customer experience and, ultimately, wasted spend and wasted effort.

Additionally, if a consumer never filled out a form (or did so months ago) and receives calls, insurance marketers are ripe targets for TCPA lawsuits.

As Jornaya clients have found, including one auto and home insurance company that shared its experiences in a recent case study, two key metrics that are especially effective in gauging consumer intent are lead duration and lead age.

Lead duration is the amount of time it takes a consumer to fill out a lead form, from the moment the first form field is filled out to the moment the form is submitted. A lead form that was filled out in less than five seconds probably wasn’t completed by a real person; it is likely the work of a bot, or automated program.

In recently aggregated data from insurance clients, we found that 17% of leads had a duration of under five seconds!

Generally, our clients have found that consumers who took two to 60 minutes to fill out the lead form had a much higher likelihood to convert.

Lead age is the actual, measurable time from the instant a consumer submits an online lead form — i.e., when the lead was born — to when the agent receives it.

Many leads that carriers and agents buy are actually much older than advertised. In recently aggregated data from insurance clients, we found that 13% of leads were more than one week old! Typically, leads that were more than an hour old had a much lower propensity to convert.

Armed with new consumer-journey intelligence, insurance marketers can work more confidently with lead providers to improve these metrics, or they can act on the data to buy only leads that fall within the ideal duration and age parameters and reinvest those dollars in better leads.
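
To make those two checks concrete, here is a minimal sketch of how a lead buyer might screen incoming leads on duration and age. It assumes each lead record carries three timestamps (when the form was started, when it was submitted and when the lead was delivered); the function, field names and thresholds are illustrative, drawn from the figures above, and are not Jornaya's actual product or API.

from datetime import datetime, timedelta

# Illustrative thresholds, based on the figures discussed above.
MIN_DURATION = timedelta(seconds=5)         # under 5 seconds: likely a bot
IDEAL_DURATION_LOW = timedelta(minutes=2)   # 2-60 minutes converted best
IDEAL_DURATION_HIGH = timedelta(minutes=60)
MAX_AGE = timedelta(hours=1)                # leads older than an hour convert poorly

def evaluate_lead(form_started: datetime, form_submitted: datetime,
                  lead_delivered: datetime) -> str:
    """Classify a lead by duration (start -> submit) and age (submit -> delivery)."""
    duration = form_submitted - form_started
    age = lead_delivered - form_submitted

    if duration < MIN_DURATION:
        return "reject: duration under five seconds, likely a bot"
    if age > MAX_AGE:
        return "reject: aged lead, more than an hour old at delivery"
    if IDEAL_DURATION_LOW <= duration <= IDEAL_DURATION_HIGH:
        return "buy: within the ideal duration and age parameters"
    return "review: plausible lead, but outside the ideal duration window"

# Example: a consumer took 4.5 minutes to fill out the form,
# and the lead arrived 15 minutes after submission.
started = datetime(2020, 9, 14, 10, 0, 0)
submitted = datetime(2020, 9, 14, 10, 4, 30)
delivered = datetime(2020, 9, 14, 10, 19, 30)
print(evaluate_lead(started, submitted, delivered))  # buy: within the ideal ...

In practice, the thresholds would come from your own conversion data rather than these round numbers.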

So, what now?

We have partnered with lead generators and aggregators to lead the charge in minimizing the bad leads, and we are working to recruit additional partners to the effort. We’re also partnering with a growing number of insurance carriers and agents to initiate actions that will expose and minimize the impact of bad actors in the insurance lead gen space.

This will have a positive impact on the ecosystem in a variety of ways:

  1. It will help lead buyers spend less time and money on no-intent leads. This will foster higher performance, because those brands can spend more on leads with confidence that the leads are higher-quality. It also means they can scale their lead programs confidently, without worrying that strong starts will go awry as volume grows.
  2. Lead sellers will see every lead they exchange within the ping post ecosystem become more efficient and effective to handle. They will have fewer returned leads from buyers, which will allow them to forecast monetization accurately and improve their matching decisions. Their lead buyers will also be more satisfied with the service and will increase their demand for leads. New insurance providers will initiate lead-buying programs, and carriers and agents that stopped buying leads will jump back onboard.
  3. There will be less TCPA exposure for all the players in the ecosystem.
  4. The fake leads will not survive – they will be flushed out, exposed and eliminated.

We’ve already seen first-hand how knowing where consumers are in their shopping journey can help brands drastically improve their lead programs and results. We look forward to expanding partnerships on both the lead seller and buyer sides and returning to the original promise of a great experience for the consumer and insurance provider.

See also: Are You Still Selling Newspapers?  

For more information on improving lead quality in your insurance marketing lead generation programs, read our white paper How Insurers Can Hit a Lead Gen Home Run.