A challenge: Guiding generative AI toward responsible use

Transparency, explainability and freedom from bias are core principles for building generative AI systems that behave ethically and treat everyone fairly.

When Juliette Powell and Art Kleiner started working on their book, The AI Dilemma: 7 Principles for Responsible Technology, generative AI had not yet exploded into the public consciousness.

But after OpenAI released its blockbuster AI chatbot, ChatGPT, in November 2022, the co-authors went back and revised their narrative to accommodate the sudden emergence of a transformative force in business and society, one that needs guidelines and regulations for responsible use perhaps more than any other new software technology.

"Now that we have generative AI in our hands … we also have to have the responsibility of how they will impact not just the people around us, but also the billions of people that are coming online every year who have no idea to what extent algorithms shape their lives," Powell said on the Targeting AI podcast from TechTarget Editorial. "So I feel like we have a larger responsibility."

Powell and Kleiner are partners in a tech consultancy, and both are adjunct professors at New York University's Interactive Telecommunications Program.

The authors' second principle, "Open the closed box," is about transparency and explainability -- the ability to look into AI systems and understand how they work and are trained, Kleiner said.

"That doesn't just mean the algorithm, it means also the company that created it and the people who engineered it and the whole system of sociotechnical activity, people and processes and code that fits together and creates it," he said.

Another principle at the core of the book is "people own their own data."

"One of the things that human beings do is hold biases and assumptions, especially about other people. And that when it's frozen into an AI system has dramatic effect, particularly on vulnerable populations," Kleiner said. "We are our own data."

The book is largely based on Powell's undergraduate thesis at Columbia University about the limits and possibilities of self-regulation in AI, and it also draws on her consulting work at Intel.

As for regulation of AI technology, Powell and Kleiner support it to the extent that it fosters responsible use of AI.

"It's important that companies be held accountable," Powell said. "And I also think that it's incredibly important … for computer and systems engineers to actually be held accountable for their work, to actually be trained in responsible work ethics so that if people get harmed, there's actually some form of accountability."

Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.
