Risk & Repeat: What do Google's AI principles mean for cybersecurity?
In this week's Risk & Repeat podcast, SearchSecurity editors discuss Google's new principles for artificial intelligence and how they may impact the use of AI for cybersecurity.
Google's AI principles prohibit the company from using its technology for weapons, but several cybersecurity-related questions remain about how Google's rules will be applied and enforced.
Google CEO Sundar Pichai published a blog post last week detailing seven principles for the company's AI technology, including being socially beneficial and avoiding unfair bias. Pichai also promised the company would not design or develop AI for weapons or other technologies that are likely to cause harm, and he pledged to "develop and apply strong safety and security practices" for its AI to prevent unintended results.
However, Google's AI principles are vague. It's currently unclear what practices may be needed to better secure and protect AI technology, and the company doesn't define what it considers unfair bias. In addition, Google doesn't specify what qualifies as a weapon, and it said it will continue to work with governments and the military on AI for cybersecurity.
How could Google's AI principles help the technology industry? What are the biggest holes or omissions in these principles? Can Google's AI be used for intrusive surveillance or offensive cybersecurity measures? SearchSecurity editors Rob Wright and Peter Loshin discuss those questions and more in this episode of the Risk & Repeat podcast.