
By Mateusz Jankowski, senior associate, Data Privacy Team, Domanski Zakrzewski Palinka (DZP)
The Internet of Things (IoT) represents a revolutionary technological concept that fundamentally changes how we perceive and use everyday objects. The IoT is a network of physical objects equipped with sensors, software and other technologies that enable them to connect to the internet and exchange data with other devices and systems. This integration creates an intelligent ecosystem where devices autonomously communicate, analyse data, and make decisions without human intervention.
The diversity and scope of data collected by IoT devices in smart homes is staggering. Each IoT device is a miniature data collection centre that continuously monitors, records, and analyses various aspects of our private lives. Location data, behavioural patterns, preferences and usage habits are constantly tracked, creating detailed profiles of residents’ daily routines with unprecedented precision. Many executives don’t realise that this data goldmine comes with significant legal and business risks.
When smart devices become privacy nightmares
Determining when data collected by IoT devices constitutes personal data under the General Data Protection Regulation (GDPR) requires analysing the data’s nature and processing context. The GDPR defines personal data as “any information relating to an identified or identifiable natural person”. In smart home contexts, this definition is broader than most business leaders imagine.
Location data, daily habits, music preferences and schedules are all information that can identify a natural person. The real challenge, however, is that seemingly neutral technical data can, in specific contexts, reveal sensitive information requiring special protection under the GDPR. Consider this: your company’s smart fitness trackers don’t just collect step counts – they potentially gather health data. Your smart speakers analysing music preferences could inadvertently reveal religious beliefs or sexual orientation.
The critical insight here is that the boundary between personal and non-personal data becomes fluid and context-dependent. This means your risk assessment must be dynamic, not static.
The Hidden Web of Data Controllers
Imagine the moment when a user activates their new smart thermostat. The device connects to the home wi-fi network, synchronises data with the manufacturer’s app, and transmits information to the cloud where algorithms analyse usage patterns. Several entities participate in this seemingly simple process: the manufacturer, app provider, cloud operator, and analytics company. Each processes personal data, but who is responsible for GDPR compliance?
This is where many companies get blindsided. A smart refrigerator manufacturer collaborating with an analytics firm may be jointly responsible for dietary habit data, while a speaker manufacturer and a radio station may be jointly responsible for music preference data. Under the GDPR, they are joint data controllers, so they must conclude agreements to define their respective responsibilities and are jointly liable for GDPR violations. Hence, executives must audit the entire processing chain and identify joint responsibility points.
The sobering reality is that your company might be legally responsible for privacy violations in systems you don’t directly control.
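To make the processing chain concrete, here is a minimal sketch of how a compliance team might inventory the entities touching one device’s data and flag missing agreements. All entity names and fields are illustrative, not a real registry format:

```python
from dataclasses import dataclass

@dataclass
class ProcessingEntity:
    name: str            # illustrative entity name
    role: str            # "controller", "joint_controller" or "processor"
    purposes: set        # processing purposes this entity pursues
    agreement: bool      # is a joint-controller/processor agreement in place?

# Hypothetical chain for a smart thermostat
chain = [
    ProcessingEntity("DeviceMaker", "controller", {"device_operation"}, True),
    ProcessingEntity("CloudOps", "processor", {"device_operation"}, True),
    ProcessingEntity("AnalyticsCo", "joint_controller", {"usage_profiling"}, False),
]

def compliance_gaps(chain):
    """Flag entities that process personal data without a signed agreement."""
    return [e.name for e in chain if not e.agreement]

print(compliance_gaps(chain))  # → ['AnalyticsCo']
```

Even a simple inventory like this surfaces the joint-responsibility points the article describes: each entity in the chain needs a defined role and a documented agreement before data flows to it.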
The AI inference trap
Smart thermostats learn user habits over months, with algorithms noticing temperature patterns that reveal daily routines. Over time, systems begin inferring more: frequent night-time temperature changes may indicate sleep problems, regular afternoon temperature increases might suggest elderly presence, and complete inactivity for days could mean travel or health issues. These inferences – though never directly input into the system – become part of user profiles.
Here’s what keeps privacy lawyers awake at night: if smart thermostats infer health conditions from usage patterns, such inferences may be treated as health data under the GDPR, requiring special protection and a dedicated legal basis for use. Your ‘simple’ home automation system could suddenly be processing sensitive health information without proper safeguards.
This creates a cascading compliance risk. As AI capabilities evolve, your legal obligations expand automatically.
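The cascading risk can be illustrated with a small sketch: as a system’s inferred attributes grow, each new inference must be checked against the GDPR’s special categories. The attribute names and category list below are illustrative simplifications:

```python
# Inferred attributes may fall into GDPR "special categories" (Art. 9)
# even when the raw inputs (temperature readings) were innocuous.
SPECIAL_CATEGORIES = {"health", "religion", "sexual_orientation", "ethnicity"}

def requires_special_protection(inferred_attributes):
    """Return the inferences that would need a dedicated legal basis."""
    return {a for a in inferred_attributes if a in SPECIAL_CATEGORIES}

# A thermostat model that started with "occupancy" may later infer more:
inferences = {"occupancy", "sleep_pattern", "health"}
print(requires_special_protection(inferences))  # → {'health'}
```

The point of the sketch is that this check must re-run whenever the model changes, because the set of inferences – and therefore the legal obligations – expands over time.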
The consent illusion
Under the GDPR, consent to personal data processing must be voluntary, informed and withdrawable. This seemingly simple principle becomes extremely complicated in IoT devices, especially those that use AI. Traditional consent approaches assume users’ consent to specific, predetermined processing purposes. But what happens when a smart home assistant, initially designed only to recognise voice commands, gradually begins analysing voice tone to detect users’ emotional states?
Contemporary users configuring their first smart home may be overwhelmed by dozens of consent requests. Faced with this information avalanche, they often simply click ‘agree’ without reading the content or understanding the consequences. This combination of consent fatigue and manipulative design patterns means that formally obtained consent may not meet the GDPR’s requirements for informed and voluntary consent.
The business implication is stark: your consent mechanisms may be legally worthless.
Global data flows, local compliance headaches
Contemporary IoT devices rarely process data locally. A smart thermostat sends information to servers in Ireland, from where it is transferred to analytical centres in the US, with analysis results returning through Singapore servers. This distributed cloud architecture creates enormous compliance challenges.
For executives, this means that every cloud service provider, every analytics partner, and every data processor in your IoT ecosystem represents a potential compliance failure point. The interconnected nature of modern IoT architecture multiplies your regulatory exposure exponentially.
Building compliance that actually works
Personal data protection in smart environments requires technological vigilance as well as a strategic approach to GDPR compliance. Technical security foundations include regular software updates, strong passwords, home network segmentation and user education. Equally important is manufacturer transparency: clear privacy policies and accessible privacy settings should be the standard, not the exception.
In the AI and machine learning era, IoT systems continuously develop analytical capabilities, creating risks of processing data beyond the original user consent. Therefore, implementing algorithm change monitoring mechanisms and procedures for obtaining additional consent when data usage changes is essential.
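One way to sketch such a mechanism is purpose-versioned consent: record the purposes each user agreed to, and flag a re-consent requirement whenever the deployed system’s purposes expand beyond them. The purpose names and function are hypothetical illustrations, not a real API:

```python
# Sketch of a re-consent trigger, assuming purposes are tracked as labels.

def needs_reconsent(consented_purposes: set, current_purposes: set) -> set:
    """Return the processing purposes the user has never consented to."""
    return current_purposes - consented_purposes

user_consent = {"voice_commands"}
deployed = {"voice_commands", "emotion_detection"}  # added by a model update

new_purposes = needs_reconsent(user_consent, deployed)
if new_purposes:
    print(f"Re-consent required for: {sorted(new_purposes)}")
    # → Re-consent required for: ['emotion_detection']
```

Wired into a release pipeline, a check like this would block an algorithm update from going live until users have been asked about any newly introduced purpose.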
The executive takeaway
The IoT revolution brings unprecedented business opportunities but also hidden compliance landmines. Smart devices generate vast amounts of personal data, often processed by multiple entities across various jurisdictions. The key insight for business leaders is that privacy compliance in IoT isn’t a one-time checkbox exercise – it’s an ongoing strategic challenge requiring constant vigilance, clear vendor agreements, and robust governance frameworks.
The companies that will thrive in the IoT era are those that build privacy protection into their business strategy from day one, not those that treat it as an afterthought.