Top 4 IoT data privacy issues developers must address
Regulatory changes are reshaping how IoT devices are created. Design considerations around API permissions, AI data set bias and physical access can improve product security.
IoT devices are increasingly part of people's daily lives, while simultaneously drawing continuous and valid criticism from cybersecurity and privacy advocates.
A significant recent regulatory change from the European Commission will affect IoT device design. Here's a look at that change, the data privacy challenges facing IoT devices and recommendations for professionals who design or implement them.
Regulatory changes signal design standard shift
In early 2022, the European Commission adopted a delegated act and placed it into effect. The act relates to radio equipment and gives device manufacturers until 2024 to perform data privacy self-assessments or conduct independent assessments to continue selling in the European Union (EU).
As most manufacturers will choose not to design separate IoT products and services for the EU market, this act may force a de facto change in how companies design for IoT data privacy.
The delegated regulation updates Radio Equipment Directive (RED) 2014/53 to address elements of cybersecurity risks so that IoT devices do the following:
- 3(3)(d), ensure network protection;
- 3(3)(e), ensure safeguards for the protection of personal data and privacy; and
- 3(3)(f), ensure protection from fraud.
Technical standards for IoT device manufacturers have not yet been released, but the European Standardisation Organisations are expected to define and publish them at the European Commission's request before 2024.
Because these technical standards do not yet exist, understanding the four primary data privacy risks associated with IoT devices helps device designers prepare to meet the updated RED standard when it launches.
4 prevalent privacy risks to address
IoT device monetization strategies have nonobvious supply chain privacy risks. IoT device manufacturers may choose to incorporate one or more advertising or marketing APIs to generate incidental revenue that is unrelated to the primary subscription costs of the IoT device and service.
Those APIs are often from third parties, which means there is no guarantee that there is adequate consumer notification when the third party's privacy policy changes.
This means consumers may be unknowingly providing their usage data to a domestic intelligence service via a marketing API provider that resells data without consumer consent. In situations such as these, end consumers would likely not give their consent, even if they understood the primary IoT device's privacy policy.
A secondary consumer data privacy risk is the increasing use of AI to further optimize and monetize IoT services. Although data anonymization techniques can theoretically limit bias in some AI-constructed models, any AI-created model that uses IoT data has inherent bias: the data comes from consumers wealthy enough to afford IoT devices and willing to accept the privacy tradeoffs for convenience.
IoT designers that plan to sell access to an AI-based consumer behavior model should note that marginalized populations are poorly represented in their data set, if at all. Due to the data set's limitations, certain data such as location can be easy to deanonymize, as individual travel patterns throughout the day can determine where someone lives.
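To illustrate how easily such data can be deanonymized, the sketch below (with hypothetical data and function names) infers a likely home location from an otherwise anonymous location trace simply by finding where the device most often reports its position overnight:

```python
from collections import Counter

# Hypothetical anonymized location pings: (hour_of_day, lat, lon),
# rounded to roughly 100 m. No name or user ID is attached, yet the
# trace alone can reveal where the device's owner sleeps.
pings = [
    (23, 40.712, -74.006), (2, 40.712, -74.006), (3, 40.712, -74.006),
    (9, 40.754, -73.984), (13, 40.754, -73.984),  # daytime: likely workplace
    (1, 40.712, -74.006), (22, 40.712, -74.006),
]

def likely_home(pings, night_hours=range(22, 24), early_hours=range(0, 6)):
    """Return the most frequent overnight coordinate: a proxy for home."""
    overnight = [
        (lat, lon) for hour, lat, lon in pings
        if hour in night_hours or hour in early_hours
    ]
    coord, _count = Counter(overnight).most_common(1)[0]
    return coord

print(likely_home(pings))  # → (40.712, -74.006)
```

Stripping identifiers is not enough: the pattern itself is the identifier, which is why designers should treat location traces as personal data regardless of anonymization.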
A third privacy threat stems from the pervasive nature of IoT devices. They are commonly deployed in private spaces, which makes them a target for cybercriminals. Vulnerabilities or defects do not inherently cause this targeting; rather, it's more cost-efficient to compromise a security camera provider -- or similar IoT service -- and gain access to all the data the provider collects than to send a bad actor to install surveillance in person. This threat model extends to all IoT devices that can determine the presence of people, such as door locks, fire alarms, cameras, security alarms or personal assistants equipped with microphones.
The fourth and final privacy threat for IoT devices is associated with abusive intimate relationships. Researchers Karen Levy (Cornell) and Bruce Schneier (Harvard) outline the unique challenges of IoT devices in scenarios where all knowledge-based authentication challenges are known to the threat actor.
Further complications arise when users share the same residence and the threat actor has persistent physical access to the IoT devices. In these cases, the traditional cybersecurity responses of denying access or removing a user from the IoT devices may result in physical harm or jeopardize someone's safety.
Best practices to preserve privacy
IoT designers should consider the privacy threat models inherent in product and service design. Specific considerations include the following:
- Ensure that devices are secure by default. This includes password security and providing an update mechanism for IoT devices so that technical vulnerabilities are automatically remediated.
- Consider the privacy risks of the incremental revenue earned from selling consumer data via advertising and marketing APIs.
- Ensure that inherent bias in AI-based models based on IoT devices is classified as a design constraint.
- Provide a means to introduce false or misleading data when a user requests this capability to help address some of the information risks associated with IoT devices and abusive intimate relationships.
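The first recommendation, secure by default, rules out the common practice of shipping one shared default password across a product line. A minimal sketch of a factory-provisioning step, assuming a hypothetical `provision_device` workflow where each unit gets a unique random credential printed on its label and the firmware stores only a salted hash:

```python
import hashlib
import secrets

def provision_device(serial_number: str) -> dict:
    """Generate a unique per-device secret; persist only its salted hash."""
    password = secrets.token_urlsafe(12)   # printed on the device label only
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return {
        "serial": serial_number,
        "label_password": password,        # never stored in firmware
        "stored_salt": salt.hex(),
        "stored_hash": digest.hex(),       # what the firmware keeps
    }

def verify(record: dict, attempt: str) -> bool:
    """Check a login attempt against the stored salted hash."""
    digest = hashlib.pbkdf2_hmac(
        "sha256", attempt.encode(), bytes.fromhex(record["stored_salt"]), 200_000
    )
    return secrets.compare_digest(digest.hex(), record["stored_hash"])
```

A real product would pair this with the update mechanism mentioned above, for example by verifying signed firmware images before installation; this sketch addresses only the shared-default-password problem.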
As the number of IoT devices grows, it is imperative that designers consider the privacy implications of their devices. Otherwise, the number and extent of privacy breaches will drive further regulatory changes that make it harder to provide innovative services and solutions.