
Recipe for lost customers: Flight cancellations and AI algorithms

As companies enjoy the benefits that automation and AI algorithms bring to next-generation CRM, leaders must closely examine the legal and customer retention ramifications.

"The crew is refusing to fly." That was what employees of Porter Airlines Inc. told passengers flying out of Billy Bishop Toronto City Airport, who were lined up on the jetway waiting to board a one-hour flight to Boston on July 13. It had been delayed due to weather so long that some -- including me -- had been in the building nearly nine hours. The airport was about to close for the night.

There was one more flight to Boston scheduled to depart before the airport closed, but it had space for only some of the passengers bumped off the canceled flight. Who would go, and who would have to endure the financial, personal and work disruptions of waiting until the next day?

"An algorithm" would decide, Porter employees at the airport assured us, and eventually it -- wherever it was hiding; in a box, in a cloud -- decided I'd be staying until 4:50 p.m. the next day. A Porter Airlines spokesperson later confirmed in an email that 65 people were reaccommodated, a majority of them that the same night.

"Passengers are moved to the next available flight(s) using an algorithm based on an extensive list of criteria," the Porter spokesperson wrote. "Some of the top criteria used to re-accommodate passengers on other flights include unaccompanied minors, passengers that require medical services and connecting passengers -- to name just a few."

It was a customer relations disaster, with a typically cranky Boston crowd getting crankier and more verbally profane by the minute. Fortunately, the situation didn't degenerate into violence, as it did after a United Airlines overbooking earlier this year. Also fortunately, Ann Coulter wasn't on the passenger manifest.

Lessons learned through flight cancellations

But it could have been handled a lot better, especially since all of us Porter Airlines customers had chosen the cheaper, closer route to downtown Toronto over the well-traveled Air Canada itinerary through Toronto Pearson International Airport.

Porter Airlines may very well have lost some customers -- not because of the algorithm, but because the humans involved blamed the inanimate algorithm. It's something we deal with every day when applying for jobs, dealing with credit card companies and, of course, hearing health insurers explain which services and prescriptions they do or don't cover.

Algorithms take the fall for customer service issues

Many airlines employ algorithms -- black boxes of rules -- to determine who's denied boarding in sticky situations. A different kind of algorithm, the training algorithm, serves as the backbone for setting next-generation CRM artificial intelligence (AI) systems loose on enterprise data stores.

Both types of algorithms heavily influence -- or will soon -- the quality of customer experience.

Deep behind the CRM systems running airline service, retail businesses, B2B suppliers and even human resources departments, there are data scientists tuning the algorithms and setting the rules they follow. Somebody had to write the rules for those black boxes.

While automation is attractive, algorithms -- whether they run rules engines or train AI systems -- aren't people. The strength of the technology becomes a weakness when a whole group of customers inconvenienced by an algorithm walks away from your company.


"We need to get back to real-world values again, and look at where software algorithms increase productivity without crossing the line of using math models and computer code strictly for profit," said Barbara Duck, who blogs about health data and worries about patients getting lost in the system or, worse yet, being unfairly tagged with medical problems and costs that insurance companies' predictive analytics systems uncover about patients, their customers.

Eight years ago, Duck half-jokingly suggested the U.S. government create a Department of Algorithms to protect patients and consumers.

"We've made that big mistake in healthcare with everyone taking in [flawed] prediction models versus working with cold, hard facts relating to the patient's health," she said.

Buyer beware: Discriminatory AI robots carry legal risk

In some scenarios, bad AI algorithms can bring legal risk, as well.

On the HR side, Christopher Fluckinger, who earned his Ph.D. in industrial and organizational psychology -- and who happens to be my nephew -- said algorithms can pose dangers to a company when a technology's data output feeds into decisions before humans have vetted it.

Algorithms for screening job applicants absolutely cannot be one-size-fits-all. They have to be customized in valid and nonsuperficial ways, said Fluckinger, an instructor at Bowling Green State University's Firelands campus in Huron, Ohio.

"A lot of assessment and candidate selection does involve a great deal of automation, aggregation, transformation and standardization of data," he said.

And Fluckinger ought to know: His research has concentrated on how to apply those technologies fairly and legally, and on keeping human decision-making in the process.

In discussing these technologies with vendors at conferences he's attended, Fluckinger said salespeople selling HR systems rarely know much about tuning and validating algorithmic output to keep it useful but nondiscriminatory. In his experience, salespeople often lack the technical background to explain this expensive and time-consuming process to customers, which leads to shaky or outright false claims about what these technologies can do.

Even worse than glossing over the difficult process of properly tuning algorithms, Fluckinger added, is when vendors cut corners in software development.

"The software side [of a technology vendor] will initially partner with test designers/validators, and then decide they can do the whole thing themselves," he said. "The end product looks basically the same and will make sales, but, of course, doesn't really work, and will be in shaky legal standing if challenged."

The ethics of AI algorithms

Some companies may believe that using algorithms for difficult decisions helps create fairness -- or the illusion thereof -- for consumers caught in situations such as overbooking or flight cancellations. That's wrong.

Alan Trefler, CEO of Pegasystems, held a question-and-answer session for media and analysts at the PegaWorld user conference in Las Vegas in June. He made the case that companies should consider monitoring the opacity of their AI systems: The more transparent the system, the more visibility companies have into how the machine trained itself to make decisions; the more opaque, the harder it is to figure out how the system arrived at a particular decision.

When AI does things like create Rembrandt-esque paintings, opacity is fine; no one really cares why the AI chose to make a particular brushstroke, Trefler said. But when it's supporting customer interactions in regulated industries -- for example, helping banks decide who gets a loan and who doesn't -- transparency becomes essential to make sure the AI cleaves to company values, as well as regulatory mandates. In fact, it should be a matter of the corporate record.
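In practice, that transparency spectrum maps to how inspectable a model is. As a hedged illustration -- the loan features, data and model choice here are hypothetical, not anything Pegasystems described -- a simple, transparent model lets you read the reasoning straight out of its parameters and record it:

```python
# Hypothetical sketch: a transparent loan-decision model whose reasoning
# can be inspected and logged for the corporate record.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_thousands", "debt_ratio", "years_at_employer"]

# Tiny made-up applicant data; label 1 = loan approved.
X = np.array([
    [85, 0.20, 6],
    [42, 0.55, 1],
    [63, 0.30, 3],
    [30, 0.65, 0],
])
y = np.array([1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# The coefficients are the audit trail: they show which inputs push a
# decision toward approval or denial, something a regulator can review.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

With an opaque deep model, there is no equivalent of that last loop -- which is exactly the visibility gap Trefler is describing.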


Some customer engagements can be trusted to AI, Trefler continued, such as checking for markers of fraudulent activity. But the more sensitive the engagements are, the more they require human intervention.

"Having artificial intelligence that would sense when a particular decision was appropriate or not would require AI to have a sense of ethics, which I think is well beyond the level to which we should trust it today," Trefler said. "We need to, ourselves, govern it according to ethical standards."

Most companies probably won't get into such heady ethical discussions. They're more interested in bottom-line customer retention and finding CRM automation that can improve it. The one lesson from Trefler that companies should think about: Anthropomorphizing algorithms as fair and benevolent decision-makers, and letting front-line workers throw up their hands and blame the computer, can lead to a bad customer experience.

Certainly, for me, the next time I'm headed to downtown Toronto for business or pleasure and it's time to click OK on my reservation, I'll have a choice to make: Air Canada to Pearson or Porter Airlines to Billy Bishop.

Having gone through what I did, the company that earns my slim slice of the corporate travel budget will most likely be the one whose AI algorithms haven't yet left me scrambling for an Uber after 11 p.m., standing alone with my luggage on the sidewalk outside Billy Bishop as employees streamed out for the night. That's how I'll vote -- with my wallet.
