How and why are our devices listening to us?
Consumers rely on digital voice assistants and smartphones every day, but they may not realize how frequently companies listen in and sift through the data these devices create.
Most people today carry a smartphone everywhere they go, and many also have voice assistants in their homes or on their phones. These devices use AI-powered technology that lets users carry out a myriad of activities with just a few simple voice commands. The popularity of voice-controlled devices has grown so quickly that, as of 2017, a Pew Research Center study found nearly half of all Americans were using digital voice assistants.
Even as people are starting to realize the extent to which their daily activities online are being saved, they might not realize how tech giants like Facebook, Apple, Google and Amazon are monitoring and listening to the conversations they're having with their personal assistants. Recently, Amazon and Apple admitted to listening to audio clips for training purposes -- a revelation that concerned many users who were not aware humans would be tuning in to these devices.
We're starting to see startups emerge that specifically listen to conversations to provide greater value to organizations. On the one hand, we have concerns about privacy, and on the other, we have new tools that provide convenience. At what point do we give up privacy for convenience?
What level of privacy should be expected with voice technology?
To deliver these AI capabilities, tech companies rely on humans to support their supervised machine learning systems. Human annotators transcribe the audio the devices capture so that the corrected text can be fed back into the algorithms, helping them better understand human speech and improve over time. In the past, when a person called a support line or a business, there was an expectation that a human would pick up on the other end. In those situations, it's disclosed that a person is listening and that the call "may be recorded" for a range of purposes.
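Conceptually, that feedback loop can be sketched as below: audio snippets the model transcribed with low confidence are routed to human annotators, and the corrected transcripts become new supervised training data. This is a minimal illustration only; the function and field names (`recognize`, `request_human_transcript`, `collect_training_pairs`) are hypothetical stand-ins, not any vendor's actual pipeline.

```python
# Hypothetical sketch of a human-in-the-loop annotation pipeline.
# None of these names correspond to a real vendor API; they only illustrate
# how low-confidence audio can be routed to human reviewers and fed back
# into supervised training.
from dataclasses import dataclass


@dataclass
class Transcription:
    text: str
    confidence: float  # model's own confidence estimate, 0.0 - 1.0


def recognize(audio_clip: bytes) -> Transcription:
    """Placeholder for the deployed speech-to-text model."""
    return Transcription(text="play some music", confidence=0.62)


def request_human_transcript(audio_clip: bytes) -> str:
    """Placeholder for the step where a human contractor listens and corrects."""
    return "play me some music"


def collect_training_pairs(audio_clips, review_threshold=0.8):
    """Route uncertain clips to annotators; keep (audio, transcript) pairs."""
    pairs = []
    for clip in audio_clips:
        result = recognize(clip)
        if result.confidence < review_threshold:
            # A human listens to the clip and supplies the ground-truth text.
            pairs.append((clip, request_human_transcript(clip)))
        else:
            pairs.append((clip, result.text))
    return pairs


if __name__ == "__main__":
    labeled = collect_training_pairs([b"raw-audio-bytes"])
    print(labeled)  # supervised data the next model version is trained on
```

The privacy question arises precisely at the review step: someone has to listen to the original recording to produce the corrected transcript.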
However, as users began interacting and communicating with voice assistants, there was no comparable disclosure that a human might be listening on the other end. It likely comes as a surprise to many that their conversations are in fact being listened to, since they did not opt in and were never notified.
According to a report in the Guardian last year, Apple used contractors to listen in on conversations to improve the overall performance of its systems. However, it did so without disclosure to end users, resulting in its contractors overhearing private conversations, including the scheduling of doctor's appointments, personal disclosures of all sorts and possible illegal behavior. Amazon admitted that it, too, uses a workforce of hundreds, possibly thousands, of people who listen in on Alexa conversations to improve overall accuracy and performance. This has led to uncomfortable, private conversations being overheard by people the users would probably never have consented to share them with.
Despite society's continued trepidation about privacy protections, voice-first interactions are growing. It's expected that there will be over 7 billion AI-powered digital assistants in 2020, and Gartner predicts that 30% of web browsing and searches will be done without a screen by the end of 2020. Voice is proving to be a simple way for users to ask for and receive information, especially when physical limitations make typing and swiping difficult -- such as when driving or cooking.
Companies are building AI tools to improve office efficiency
As we ask for help using voice technology, we're also increasingly asking AI-enabled devices to listen in on our conversations. Users are more comfortable with voice assistants listening when they have explicitly consented to the service. With the help of artificial intelligence, companies are developing and using technology that can automate tasks such as listening in on meetings and calls to provide transcriptions and meeting minutes, assist with customer inquiries and handle follow-up. These virtual assistants can essentially augment or replace customer service, administrative and sales tasks at organizations.
Companies like Vonage and Microsoft are providing services -- such as Vonage's Voicemail to Email and Microsoft Exchange Unified Messaging -- that allow users to receive voicemails in their email both as an MP3 attachment and as a text transcript. Cisco's EVA (Enterprise Voice Assistant) transcribes and sends meeting minutes and follow-up items quickly to integrated workflow tools. The program can provide audio recordings and transcripts of meetings as well as video meeting recordings and speaker insights, and it automatically emails meeting recaps after every meeting. Transcription programs, such as those offered by startup Trint, digitally transcribe audio and video content into searchable, editable transcripts.
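The services above are proprietary, but the basic pattern of turning recorded audio into searchable text and action items can be sketched with the open-source SpeechRecognition package for Python. This is a generic illustration under assumed inputs, not how Vonage, Cisco or Trint actually build their products; the file name and follow-up keywords are made up for the example.

```python
# Generic sketch: transcribe a recorded meeting and flag possible follow-ups.
# This is not any vendor's implementation; it uses the open-source
# SpeechRecognition package (pip install SpeechRecognition) and Google's
# free web speech endpoint, with a made-up file name for illustration.
import speech_recognition as sr

recognizer = sr.Recognizer()

# Load a WAV recording of the meeting (hypothetical file name).
with sr.AudioFile("weekly_standup.wav") as source:
    audio = recognizer.record(source)

try:
    transcript = recognizer.recognize_google(audio)
except sr.UnknownValueError:
    transcript = ""  # speech was unintelligible
except sr.RequestError as err:
    raise SystemExit(f"Speech service unavailable: {err}")

# Naive "meeting minutes": pull out sentences that sound like action items.
action_items = [
    sentence.strip()
    for sentence in transcript.split(".")
    if any(cue in sentence.lower() for cue in ("follow up", "action", "will send"))
]

print("Transcript:\n", transcript)
print("\nPossible follow-ups:")
for item in action_items:
    print(" -", item)
```

Even this toy version makes the trade-off concrete: the raw audio leaves the room and is processed by a third-party service before any convenience is delivered.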
Companies using these technologies are seeing the upside of trading away some privacy -- perhaps because the conversations take place in an appropriate business setting.
Risks associated with AI systems listening in on conversations
With the rapid deployment of AI into all aspects of our lives, it is important to consider the risks, privacy and ethical implications and how we can work to mitigate the adverse outcomes that may arise from the use of these systems. We give away bits of personal data in numerous interactions, both online and offline. Social media, government services, conversations, online meetings and group calls all spread our personal data further and increase the risk that it ends up in systems vulnerable to large-scale data breaches and privacy invasions.
At what point do we give up privacy for convenience? With AI usage and adoption only increasing, that's a question companies and users will need to keep asking themselves as the AI landscape continues to evolve.