“Instantaneous and delightful.” These words are not typically used in connection with the purchase of insurance, let alone with the filing of an insurance claim. But that is the mantra of Lemonade Insurance Company, a New York-based carrier recently licensed to sell insurance in Ohio. Its website boasts “killer prices” with renter’s insurance from $5 per month and homeowner’s insurance from $25 per month.
It also declares “world record claims handling,” with “25 percent of claims paid in 3 seconds.” And you can “get insured” in only 90 seconds. With its “Zero Everything” option, the customer is promised “zero deductible, zero rate hikes, zero worries.” Apparently, nothing is too small to report stolen under a renter’s or homeowner’s policy, including flip-flops. Not kidding. The website actually says flip-flops are covered.
Here is how it works. Someone steals flip-flops out of your apartment. First, gross. Second, use the Lemonade app to record a video of yourself explaining what happened, what the flip-flops look like, and what you think they are worth, say, $10. Send it to Lemonade, and its “AI runs 18 anti-fraud algorithms,” which is basically the computer deciding whether you might be lying. If it believes you, the “AI will pay you in 3 seconds” and send $10 to your bank account. If the AI thinks you might be lying, you get diverted to a human to process the claim. And there’s the rub.
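The triage described above, pay instantly or divert to a human, can be sketched in a few lines. Lemonade’s actual “18 anti-fraud algorithms” are proprietary and undisclosed; every check below is invented purely to illustrate the branching logic.

```python
# Hypothetical sketch of the claim-triage flow. The real anti-fraud
# checks are proprietary; these stand-ins are invented for illustration.
from dataclasses import dataclass

@dataclass
class Claim:
    item: str
    amount: float          # claimed value in dollars
    video_length_sec: int  # length of the claimant's video explanation

def fraud_checks(claim: Claim) -> list[bool]:
    """Run a stand-in battery of checks; True means 'suspicious'."""
    return [
        claim.amount > 1000,         # unusually large claim
        claim.video_length_sec < 5,  # suspiciously short explanation
        claim.amount <= 0,           # nonsensical value
    ]

def triage(claim: Claim) -> str:
    """Pay instantly if no check fires; otherwise divert to a human."""
    if any(fraud_checks(claim)):
        return "divert_to_human_adjuster"
    return "pay_instantly"

print(triage(Claim("flip-flops", 10.0, 30)))  # pay_instantly
print(triage(Claim("laptop", 2500.0, 30)))    # divert_to_human_adjuster
```

The point of the sketch is structural: whatever the real checks are, the system’s output is binary, and the entire fairness question lives inside `fraud_checks`.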
We have already seen reports on how facial recognition software has problems recognizing people who have darker skin. Lemonade’s AI goes beyond just deciding who you are – it enters the foothills of deciding what kind of a person you are. This raises immense questions. For example, do the “18 anti-fraud algorithms” adjust for racial or cultural bias? For that matter, how do the algorithms treat claimants whose speech patterns and delivery are uniquely influenced by where they live? Over time, will patterns emerge in which the algorithms steer people for whom English is a second language to humans for claims review, thereby delaying payment? And by that very steering, are the humans predisposed to suspect the customer is lying because the computer had that suspicion? Will stats emerge showing a trend of white claimants passing the computer’s scrutiny, but Hispanic or black claimants being diverted to humans, who, again, know the computer suspects them of lying?
Sophisticated AI is designed to learn and to change from what it learns. If the dominant data inputs to the “anti-fraud algorithms” initially come from claims made by wealthy young renters in the boroughs of New York City, who have driven Lemonade’s startup growth, will that AI begin to discriminate against low-income, elderly, and less educated customers?
Depending upon your perspective, Lemonade is either an innovator or a disrupter. Its apparent business model is to eliminate as many humans as possible from the insurance experience – no agents, no brokers, no adjusters – resulting in lower overhead and more competitive rates. Where some companies emphasize their human interaction with customers, Lemonade emphasizes customer interaction with binary code.
While Lemonade remains a very small carrier with reported written premium of only $8.3 million in 2017 and with a tiny claims experience compared to traditional carriers, its backing includes large institutions, such as Lloyd’s. So, there is heft behind what could be a growing trend of computers judging humans. If the AI bots learn to become racist and prejudiced, the litigation and regulatory actions in the form of class actions, bad faith claims and market conduct exams could greatly disrupt this disrupter.