Over the past half decade, several organizations in the workers’ compensation industry have promoted their businesses by saying they apply artificial intelligence (AI). When I looked at their offerings a few years ago, I found that what they meant by AI was worthy of consideration. AI today, however, means computational power dramatically more effective than what was promoted then. In 2023, AI has the potential to dramatically raise the quality of claims handling and underwriting, to name two functions in workers’ compensation.
One might say that a breakthrough has just taken place that makes AI work in the way we’ve implicitly expected for a long time, and until now have not experienced.
In late 2022, OpenAI released for free public use a system called ChatGPT, heralding a new generation of what will be widely used AI products. As of the end of January 2023, some 100 million persons are estimated to have used it, making it the most rapid large-scale launch of a consumer product in history.
ChatGPT is the first sign of a quickly emerging AI generation. It is very easy to use. Responding to a request posed in conversational English, it writes plans, analyzes your writing, fixes computer code or writes a poem. I compare it to driving a car without any knowledge of how the engine operates. In many instances, it is much more insightful and efficient than a Google search.
I tested it out by posting many workers’ comp-related requests. Some responses were vapid, but most ranged from useful to very insightful. With more preparation, the system has great promise as a practical tool. Claims staffs can use this generation of AI to more accurately predict claims outcomes and to select interventions, such as case management or subrogation reviews. Underwriting staff can better price premiums.
We’ll get into some scenarios in a moment. I want now to provide a very basic (albeit somewhat rough) introduction to machine learning using large language models, which is at the heart of ChatGPT.
Machine learning using large language models does not use what we might normally call algorithm-based or logical thinking to predict. It will confidently predict that the word “you” usually comes after “thank” not because it knows the meaning of words or the rules of grammar, but because that pairing appears constantly in its training text. Its prediction process is so complex that it cannot be explained or audited by the user.
Consider how to predict the final two elements in “The quick brown fox jumped over the lazy [word] [punctuation].”
The computer, without any intrinsic knowledge of what a word is, of word meaning, or of grammar, will produce a numerical score for each element reflecting the probability of its coming before or after each of the others. Hence “the” will have a relatively high predictive score as a predecessor of “quick,” while “jumped” will have a lower predictive score as a successor of “fox.” That is because the elements that precede “quick” are easier to predict than the elements that follow “fox,” where many continuations are plausible.
The computer has been trained by inspecting many millions of texts, one at a time, converting letters and punctuation into numbers. For each string of numbers, it recalibrates the score of each number relative to its probability of coming before or after each other number, eventually seeking strings of words that end in a period. Trained this way, it will assign a very high probability, with a great deal of confidence, that the missing word is “dog,” much more likely than “cat” and far more likely than “cloud.”
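The idea of scoring which words follow which can be sketched in a few lines of code. What follows is a deliberately simplified toy, not how ChatGPT actually works internally: it counts word pairs in a tiny, made-up corpus and predicts the most frequent successor, which is the crude ancestor of what large language models do at vastly greater scale and complexity.

```python
from collections import Counter, defaultdict

# Tiny, invented training corpus for illustration only.
corpus = [
    "the quick brown fox jumped over the lazy dog .",
    "the lazy dog slept .",
    "the cat chased the lazy dog .",
    "a cloud drifted over the lazy dog .",
]

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for current, nxt in zip(tokens, tokens[1:]):
        follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` and its probability."""
    counts = follows[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, prob = predict_next("lazy")
print(word, prob)  # In this toy corpus, "dog" always follows "lazy"
```

Note that the program never learns what a dog is; it only learns that the number standing for “dog” very often follows the number standing for “lazy.”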
ChatGPT describes itself in these words: a model that is exposed to vast amounts of data and learns to predict and respond with “human-like text with a high level of coherence and relevance.” It is trained to handle tasks such as language translation and writing computer code. It can also be fine-tuned to perform on limited data sets, such as those for claims or underwriting.
See also: The Key to 'Augmented Intelligence'
Consider the adjuster with a new workers’ compensation claim involving multiple bodily trauma. AI of this generation will predict the ultimate claim cost; that alone doesn’t sound very impressive. But it will also advise the adjuster what additional information will improve the prediction. And it will predict which medical treatments and interventions will lower the cost, and by how much.
For the underwriter, AI predicts the ultimate costs of all claims and advises on what information and what interventions (such as loss control and experience rating/deductibles) will improve the prediction and increase profits for the insurer.
Is this what we want? The answer is yes, with caveats. What if AI enables decisions that might conflict with ethics and public policy? For instance, it might predict that a combination of experience rating, high deductibles and no loss control may yield more injuries but higher insurer profits than a program of aggressive loss control.
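The trade-off behind that worry is simple expected-value arithmetic. The sketch below uses entirely hypothetical figures, invented only to show how a model could rank a higher-injury program above a safer one on profit alone; none of the numbers reflect real data.

```python
# Hypothetical illustration: expected insurer profit under two programs.
def expected_profit(premium, expected_claim_cost, program_cost):
    """Profit = premium collected minus expected claims minus program spending."""
    return premium - expected_claim_cost - program_cost

# Program A: aggressive loss control (fewer injuries, but the program costs money).
profit_a = expected_profit(premium=100_000, expected_claim_cost=55_000,
                           program_cost=15_000)

# Program B: high deductibles and no loss control (more injuries, but much of
# the claim cost is shifted to the employer through the deductible).
profit_b = expected_profit(premium=100_000, expected_claim_cost=60_000,
                           program_cost=0)

print(profit_a, profit_b)  # 30000 40000: B is more profitable despite more injuries
```

A profit-maximizing model with these (invented) inputs would recommend Program B, which is exactly the kind of recommendation that ethics and public policy may need to override.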
And how concerned should we be about not being able to trace the computations, whether they generate lessons on how to peel a boiled egg or estimates of the probability of medical fraud? As in the example of predicting “dog,” the computer operates at an unbelievable level of complexity. It cannot tell you the meaning of dog, or trauma, or subrogation, or high deductible, except to predict that a set of numbers composes what we humans call words, to which we humans give meaning, such as “dog” as an animal rather than as the verb for giving chase. One can say that the computer is both very useful and unaccountable.
Industry organizations such as large insurers, the National Council on Compensation Insurance, the Workers Compensation Research Institute and medical reimbursement services may be the first ones to bite into the apple. And none too soon for all others to grasp the potential of this new generation of AI.