
Monitoring and surveillance of workers in the digital age


The terms ‘monitoring’ and ‘surveillance’ are often conflated and used interchangeably. Although they involve similar management practices, there are some differences. Employee monitoring has a more benign connotation and is generally confined to work-related activities. Surveillance goes beyond that: it can take the form of intrusive and pervasive monitoring, tracking a broad range of work- and non-work-related information (including personal characteristics) and invading the private sphere of individuals. ‘Surveillance’ also implies that workers may not be aware that they are being monitored. In the public discourse, surveillance is associated with more dystopian characteristics and evokes a contemporary digital panopticon, where individuals – within or outside the workplace – are on the receiving end of asymmetrical, invisible and constant surveillance.

Despite these different connotations, Eurofound research on this topic does not make a clear-cut distinction between employee monitoring and employee surveillance. It should, however, be understood that organisational practices leveraging monitoring and surveillance technologies in the workplace can be placed on a continuum from a more benign to a more intrusive use of these technologies, with surveillance being more invasive of privacy.

A full list of references used to compile this research digest can be found at the end of the page. 

Author: Sara Riso

 

Overview

Policy pointers

  • There is a need to build upon the EU GDPR and national legislation on the right to disconnect and to modernise national regulatory frameworks with a view to addressing the challenges posed by digitally enabled employee monitoring.

  • Social dialogue, employee consultation and clear governance around employee monitoring are the cornerstones of prevention and management of the risks arising from the use of digital monitoring technologies.

  • Pervasive, ubiquitous and intrusive employee monitoring poses a risk not only to the rights to privacy and data protection but also to the right to good and fair working conditions – this warrants more attention in policymaking.

  • European and national regulatory frameworks should ensure transparency and accountability on the part of digital labour platforms with regard to the collection and use of workers’ data, regardless of the nature of the employment relationship.

  • The new EU legal framework on artificial intelligence (AI) should also apply to algorithm-based systems, given the negative implications of algorithmic work management for workers.

  • To avoid a downward spiral in employment and working conditions, it is vital to keep monitoring the spread of algorithm-based work management practices (similar to those used by digital labour platforms) to more traditional sectors of activity.

 

Digitalisation


Digitalisation: General and comparative perspective

Introduction

Employee monitoring and surveillance are not new; however, technological advances have made them more pervasive and ubiquitous and potentially more intrusive, pushing the boundaries of acceptability and posing new challenges for legislators and policymakers in the EU and beyond.

The COVID-19 crisis has expanded the market for surveillance technologies and accelerated their uptake. Employee monitoring software companies such as Sneek and Teramind reportedly increased their sales during the pandemic. Global consultancy firm PwC has developed a facial recognition tool that logs when employees are away from their computer screens – according to an official statement by the company, it is designed to support regulatory compliance while the staff of financial institutions are working from home.

National data sources also point to an increase in remote employee monitoring during the pandemic. According to a 2020 online survey (of 2,133 workers) conducted in the United Kingdom (UK) by the Trades Union Congress (TUC) among its members, one in seven employees reported that workplace monitoring and surveillance had increased during the COVID-19 crisis. This is, however, not an entirely new trend. The latest European Company Survey (ECS), carried out in 2019, found that 5% of 21,869 surveyed EU establishments were using data analytics to monitor employee performance (Figure 1). The prevalence of this use of data analytics increases with establishment size.

Figure 1: Shares of establishments using data analytics to monitor employee performance, EU27 and the UK, 2019 (%)

With many organisations shifting their workforce to remote working during the pandemic, employees are increasingly getting accustomed to the inevitability of being monitored in one way or another. This may be perceived by employees as the price they have to pay in exchange for the flexibility to work from home or elsewhere. Greater acceptance of surveillance in the workplace may partly mirror the more general trend towards pervasive tracking of online activities, which entails implicit acquiescence – or at least limited resistance – to monitoring technologies.

This increasing acceptance of surveillance technologies in society does not, however, justify practices that intrude on workers’ privacy, nor does it serve to address the ethical concerns that are often raised in public and policy debates around digitally enabled employee monitoring and surveillance. The risk of breach of privacy and data protection rights becomes even more acute in the context of remote working. The provision of digital devices by employers for work and personal use leads to an increasing enmeshing of employees’ private and working lives and results in the merging of personal with work-related data. It is not uncommon for employees to use computers or other digital devices provided by their employers during breaks or outside working hours.

Another important consideration is that digital technologies are reaching a high level of sophistication – as well as becoming more affordable – and have the potential to become central to work management systems, influencing behaviour and nudging workers as to how and when they should work (whether in or outside the workplace). With technologies taking on management functions, the risk is that work will be gamified (work activities becoming competitive, mimicking game-like dynamics and being tied to performance metrics), resulting in additional pressure on workers. Intense employee monitoring and surveillance bear a resemblance to the digital management practices underlying the functioning of much platform work, which are spreading to conventional forms of employment and giving rise to what is known as the ‘platformisation’ of work. The concern is not only the potential infringement of employees’ privacy rights but also the increasing objectification and quantification of employees enabled by digital monitoring technologies.

Furthermore, the use of wearables and biometric technologies – which are slowly but surely entering the world of work – can take employee monitoring and surveillance to a new level by increasing the level of control and factoring in employees’ personal characteristics as part of the monitoring. This personal information, combined with other information about performance and behaviours, can be used to draw inferences about employees’ future behaviours and attitudes; if its use is left unchecked, it could inform important management decisions. In this area, rapid developments in AI technologies are enabling so-called ‘people analytics’ and ‘profiling’ (algorithmic inferences drawn from personal data) and powering data-driven and intensive work management and human resources practices.

There is a body of research showing that excessive and pervasive employee monitoring and surveillance have many negative implications for workers, as they inhibit creative and independent thinking, limit autonomy, induce stress and erode trust in management. In a global survey conducted before the pandemic, Accenture found that 52% of employees believed that the use of new sources of workforce data reduced trust, while 64% were concerned about potential mishandling of employee data collected as part of monitoring. In addition, over half of the respondents (56%) to the above-mentioned 2020 TUC online survey on the use of technologies for work management reported that the introduction of new technologies into the workplace for employee monitoring purposes had damaged trust between workers and employers. When monitoring is extended beyond what employees consider reasonable or necessary, it chips away at their trust in management.

Because of these wide-ranging implications for work, employee monitoring and surveillance should be discussed not only in relation to privacy and data protection rights but in terms of how they affect a broader range of fundamental rights and employee expectations. Without strong protections and adequate safeguards for employees, monitoring and surveillance in the workplace can compromise human dignity and lead to a deterioration in working conditions. Employee compliance is short-lived when employees are – or suspect that they are – under surveillance. Intense and unnecessary surveillance can increase employee resistance and opposition in the longer term and result in reputational damage for employers implementing such practices, proving to be a counterproductive work management strategy.

Opportunities

  • Digital real-time monitoring can ensure workers’ safety, especially in hazardous or emergency situations
  • Monitoring in some sectors and under certain circumstances is a necessity, for example to ensure regulatory compliance
  • Digitally enabled monitoring can be used to benefit employees by facilitating skills development and on-the-job learning

Employee monitoring and surveillance are typically discussed in relation to the risks that they pose for employees, rather than the benefits and opportunities that they can offer. Not all workplace monitoring, however, necessarily signals mistrust or is unethical. There are legitimate grounds for engaging in the monitoring of employees’ activities.

In some areas of the private sector, employee monitoring is deemed a necessity. In banking, it may be essential to prevent insider trading and to comply with regulatory requirements. Workplace monitoring can also uncover antisocial behaviours in or outside the workplace (for example, by customers towards those in public-facing occupations) and keep staff safe. Digital tracking technologies may provide greater security for workers who are required to travel extensively or to work alone as part of their job.

If used wisely, monitoring technologies can also serve as learning tools, giving workers opportunities to develop new skills. A case in point is wearables, which can be used to benefit employees by facilitating the acquisition of new skills or making it possible to perform challenging tasks under the supervision of a more experienced worker. The information recorded by these devices can be used by workers to review their learning and improve their performance.

According to the ECS 2019, establishments that exhibit features associated with ‘high-performance work practices’ – for example, high levels of training, performance-related pay and teamwork – are more likely to use data analytics to monitor employee performance (Figure 2).

Figure 2: Shares of establishments using data analytics to monitor employee performance by implementation of high-performance work practices, EU27 and the UK, 2019 (%)

Many workers expect a certain amount of monitoring and consider it part and parcel of their working life. In reviewing national research, including survey-based studies, on the topic of employee monitoring and surveillance, Eurofound found that the use of electronic monitoring systems is a common practice, particularly in the UK and the Nordic countries, with more advanced forms of monitoring gaining traction.

Workplace monitoring is, however, not necessarily perceived by employees as negative across the studies reviewed and, under certain conditions, it can contribute to transparency. The research reviewed also suggests that the type and extent of monitoring, as well as employee involvement, play an important role in shaping employees’ perceptions of and attitudes towards it. Employees are more understanding of workplace monitoring if it is not considered excessive or intrusive. Another important aspect that fosters acceptance of workplace monitoring among employees is the extent to which they feel that they are adequately informed and consulted about the nature and method of monitoring.

Perceptions around employee monitoring are also partly shaped by the prevailing organisational culture within the workplace and cultural norms in society. In countries such as the UK and Estonia, where a monitoring culture prevails, employees may be more forgiving of intrusive monitoring practices than in some continental countries – Germany in particular – that have more stringent legislation on employee monitoring and where more prominence is given to data protection rights. Perceptions of the acceptability of monitoring and surveillance also change over time as the technologies deployed for employee monitoring purposes develop.

Risks

  • There are risks of abuse or misuse of digital technologies, which can impinge on workers’ rights to privacy and data protection, particularly in the context of remote working
  • There is a risk that employers will use monitoring and surveillance technologies for unintended or unauthorised purposes
  • Digital technologies make monitoring and supervision processes more intangible and less visible (pushing the boundaries of acceptable and legitimate monitoring)
  • Asymmetries of power within organisations may be intensified
  • Invasive surveillance practices reduce work autonomy and trust in management, damaging staff motivation and employment relations

Technological development has expanded employee monitoring and surveillance capabilities, pushing the boundaries of what is necessary, legitimate and permissible. Issues arise in particular in the absence of prior consultation with workers or clear governance around employee monitoring and surveillance. In several EU countries, trade union organisations have been particularly vocal about the many risks arising from the use of digital monitoring and surveillance technologies – especially with regard to privacy and data protection rights. International and European trade union federations – for example, the European Trade Union Confederation (ETUC) and UNI Global Union – have called for stronger safeguards for employees in legislation and collective agreements in relation to digital workplace monitoring and surveillance.

Concerns have also been raised by national data protection authorities, which routinely issue opinions, guidelines and good practices on employee monitoring in general, as well as on specific forms of monitoring in the workplace. The French data protection authority, for example, has warned against the use of digital technologies for employee monitoring, arguing that it could lead to workers being placed under permanent surveillance and could be used as a form of psychological harassment.

There is a wide range of digital technologies – with varying degrees of sophistication and intrusiveness – at the disposal of employers for employee monitoring purposes. There are, for example, technologies that monitor whether employees are simply logged in and working, and more privacy-intrusive technologies and software that log keystrokes or mouse movements, or take webcam shots of employees in front of their computers. With the shift to remote working expected to continue after the COVID-19 crisis and many large corporations already preparing for a move to hybrid working (combining remote and office-based working), the use of digitally enabled surveillance for monitoring employees is set to continue and probably intensify.

The use of wearables and biometric devices (retina and iris scanners, hand readers, fingerprint readers or facial recognition devices) to replace more standard access control systems – which has grown in recent times as organisations have sought to limit the spread of the virus – also poses important risks, as these technologies enable the gathering of fine-grained information about personal characteristics. The use of facial recognition technology is particularly controversial and has been criticised for its lack of technological maturity and the risk that it poses in relation to the infringement of fundamental rights. Precise information about employee performance, behaviours and personal characteristics can be used to draw inferences and make predictions about employees’ future behaviour, on the basis of which employers may make important decisions, for example about wages and promotions.

Another important concern is that digitally enabled surveillance technologies can provide a wider range of information than initially intended, enabling what is known as ‘function creep’: data collected for a specific purpose are subsequently used for another, unintended or unauthorised, purpose. This underlines the need to put in place sufficient protections and safeguards to limit the use of surveillance technologies and to prevent situations where monitoring is extended beyond what has been communicated to employees and the information collected is then used to guide or determine important employment decisions.
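To make the notion of ‘function creep’ and the safeguards against it more concrete, below is a minimal sketch (in Python) of a purpose-limitation check: data are tagged with the purpose declared to employees at collection time, and any attempt to reuse them for a different purpose is refused and logged. The record structure, purpose labels and audit format are hypothetical illustrations, not a description of any real monitoring product or legal requirement.

```python
# Minimal illustrative sketch of a purpose-limitation check intended to counter
# 'function creep'. All names, purposes and rules are hypothetical examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class DataRecord:
    employee_id: str
    category: str           # e.g. 'login_times', 'webcam_snapshot'
    declared_purpose: str   # purpose communicated to employees at collection time

def access_allowed(record: DataRecord, requested_purpose: str, audit_log: list) -> bool:
    """Grant access only when the requested purpose matches the declared one."""
    allowed = requested_purpose == record.declared_purpose
    audit_log.append({
        "employee": record.employee_id,
        "category": record.category,
        "requested_purpose": requested_purpose,
        "allowed": allowed,
    })
    return allowed

audit: list = []
record = DataRecord("emp-001", "login_times", declared_purpose="it_security")

# Using the data for the purpose originally communicated to staff: allowed.
print(access_allowed(record, "it_security", audit))         # True

# Reusing the same data to rank employees for promotion: blocked and logged.
print(access_allowed(record, "promotion_decision", audit))  # False
```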

According to a 2017/2018 ETUC online survey on digitalisation conducted among trade unionists and company-level worker representatives (1,500 respondents), only 23% of respondents reported that the introduction of new technologies to monitor performance and behaviour, or data protection issues, had been addressed through information and consultation at company level. Lower shares of respondents reported that collective agreements linked to digitalisation had been concluded at company level to address issues related to the protection of personal data gathered in the context of ICT work and/or automation processes (19%) or the introduction of technologies to monitor performance and behaviour (17%).

Important employee rights other than privacy and data protection rights are affected by pervasive and intrusive employee monitoring and surveillance. They can interfere with the right to freedom of association and collective bargaining and potentially weaken the organising and negotiating power of workers. In this regard, a case that attracted a great deal of media attention in 2019 concerned Google. The tech giant was accused by some of its employees of using a new tool to monitor attempts by staff to organise protests and discuss labour rights. Allegedly, this surveillance tool consisted of a browser extension that automatically reported employees who created a calendar event with more than 10 rooms or 100 participants. Google has, however, denied these claims, saying that the tool is an anti-spam device. Another recent high-profile case involved global fast-food chain McDonald’s, which was accused of using a range of monitoring tools, including social media, to carry out surveillance of workers in the United States who were advocating for a USD 15 (approximately EUR 12.75) minimum wage. In addition to the interference with the right to freedom of association, such surveillance practices also signal a lack of trust in employees and impact negatively on employees’ engagement, commitment and motivation, ultimately reducing workers’ well-being.

Analysis of the ECS 2019 data using two composite indicators on workplace well-being and establishment performance shows a lower score for workplace well-being and a higher score for establishment performance where data analytics are used to monitor employee performance, compared with those establishments not using such technologies (Figure 3). Four questions were used to indirectly measure workplace well-being: one captured the quality of the relationship between management and employees and the other three questions concerned challenges in terms of human resources (such as low motivation, absenteeism and staff retention). Establishment performance was measured in relation to the following four variables: the profitability of the establishment, profit expectation, change in production volume and expected change in employment.

Figure 3: Use of data analytics to monitor employee performance and workplace outcomes, establishments in the EU27 and the UK, 2019 (%)
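As an indication of how composite indicators of this kind can be operationalised, the sketch below averages a set of item scores for each dimension. The item names, scales and unweighted averaging are assumptions made purely for illustration; they do not reproduce the actual ECS scoring methodology.

```python
# Illustrative construction of two composite indicators (workplace well-being and
# establishment performance). Item names, 0-100 scales and simple averaging are
# assumptions for illustration, not the ECS methodology.

from statistics import mean

# Hypothetical establishment-level item scores, each rescaled to 0-100
# (human resources challenges reverse-coded so that higher = better).
establishment = {
    "management_employee_relations": 72,
    "low_motivation_reversed": 65,
    "absenteeism_reversed": 80,
    "retention_difficulty_reversed": 58,
    "profitability": 70,
    "profit_expectation": 75,
    "production_volume_change": 60,
    "expected_employment_change": 55,
}

WELLBEING_ITEMS = [
    "management_employee_relations",
    "low_motivation_reversed",
    "absenteeism_reversed",
    "retention_difficulty_reversed",
]
PERFORMANCE_ITEMS = [
    "profitability",
    "profit_expectation",
    "production_volume_change",
    "expected_employment_change",
]

def composite(scores: dict, items: list) -> float:
    """Unweighted average of the selected items (an illustrative choice)."""
    return mean(scores[i] for i in items)

print("Workplace well-being score:", composite(establishment, WELLBEING_ITEMS))
print("Establishment performance score:", composite(establishment, PERFORMANCE_ITEMS))
```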

Reduced workplace well-being arising from employee monitoring has also been highlighted in previous research. The knowledge or suspicion of being under surveillance changes the behaviours of those being watched, reducing their sense of personal autonomy and inhibiting their independent thinking and creativity. When digital monitoring technologies take on work management functions, they can introduce game-like dynamics, give rise to an unhealthy competitive culture within the workplace and put additional pressure on workers.

Concluding commentary

State-of-the-art technologies offer greater possibilities to employers to monitor different aspects of work, whether in or outside the workplace. Pervasive and intrusive workplace monitoring and surveillance may accentuate hierarchies and deepen asymmetries of power within organisations, to the disadvantage of employees. Because of the greater power that modern monitoring and surveillance technologies can confer on employers, it is imperative to strike a balance between employers’ legitimate interest on the one hand and, on the other, the privacy rights of employees and their demands for transparency as to how the monitoring is done and what information is collected and used.

Capturing personal information by means of employee monitoring software outside working hours is an area of concern. There is a pressing need to offer greater protection to employees, especially in the context of increased remote working, and to build upon existing legislation on the right to disconnect in those countries (such as Belgium, France, Italy and Spain) where there is already formal legal recognition of that right.

While the EU GDPR is recognised globally as a beacon of excellence in data protection and digital privacy, there are grey areas that need to be addressed, such as the new risks posed by inferential analytics and profiling, particularly in the context of rapid developments in AI technologies.

The GDPR leaves it to individual Member States to introduce specific rules – for example, by means of collective agreements – in relation to personal data processing in the employment relationship (Article 88(1)). An important step forward in this direction is the framework agreement on digitalisation signed in June 2020 by the European social partners ETUC, BusinessEurope, CEEP and SMEunited. The agreement calls for rules in collective agreements at appropriate levels that, among other things, limit the risk of intrusive monitoring and misuse of personal data.

The many implications of employee monitoring and surveillance for job quality and work organisation also warrant more attention in the ongoing policy debate, which should go beyond the potential infringement of privacy and data protection rights and give greater consideration to the many other risks that the use of surveillance technologies in the workplace poses in relation to employment and working conditions.

 

Automation



Automation is one of the ‘vectors of change’ identified as part of the broader notion of ‘digitalisation’ in Eurofound’s conceptual framework. It is the replacement of human input, in full or in part, by machine or software input. Advanced robotics, both for services and for manufacturing, is grouped with autonomous vehicles under the automation vector, since the ultimate aim of their application is to substitute machine for human input.

It should be noted that digital technologies do not necessarily fit neatly under one vector of change or another. They are often used in combination rather than in isolation, amplifying their monitoring and surveillance capabilities.

Introduction

Automation technologies have logging, reporting and monitoring functions that bring surveillance capabilities in the workplace to a new level. Most advanced robots, for example, are equipped with sensors and actuators that collect, transmit and process a vast amount of data in real time. In contrast to traditional robots, advanced robots are algorithmically controlled, general purpose machines that can be easily reprogrammed to carry out different tasks in production. With greater advances in AI, advanced robots will become more autonomous, learning from and responding to the environment and simulating intelligent behaviour.

The lifeblood of much automation is data that can feed data analytics and data mining programmes, which can also be used for employee monitoring, surveillance and performance appraisal. Accordingly, such data may be used to inform important employment decisions, for example about promotions, wage increases or even dismissals.

Robotic automation is already a reality, particularly in business logistics and automotive manufacturing. This is best exemplified by the fulfilment centres operated by the online retailer Amazon, where humans work alongside robots to execute warehousing and picking operations – often attracting negative publicity because of intrusive work surveillance practices. In an increasingly automated work environment, the collection of personal information is inevitable. It is therefore imperative that strong protections are put in place to safeguard workers’ rights to data protection and privacy and to avoid the risk of a downward spiral in working conditions.

In the automated workplace, the extent to which robots and other automation technologies access workers’ data and impact on workers’ privacy is likely to become an important topic in negotiations with trade unions and in collective bargaining. The European social partners’ framework agreement on digitalisation, signed in 2020, draws attention to the risks arising from surveillance practices in an increasingly digitally connected work environment. Appropriate measures to deal with such risks and issues are expected to be agreed upon by the social partners at national or local level as part of the implementation of this agreement.

Opportunities

  • Data collection and monitoring can be a pre-condition for robots to interact safely with humans and can augment their capacities and performance
  • Monitoring of the work environment is part and parcel of robots taking over more menial or physically demanding tasks

Data related to the workplace and workers generated through constant monitoring by automation technologies such as advanced robotics can improve understanding of workflows, processes and procedures. This information can be used to reorganise the workplace to achieve better productivity, workplace practices and working conditions.

Among the variety of use cases of automation technologies, pre-programmed robots are the most prevalent type and have the advantage of taking over more menial or very physically demanding tasks from humans. The trend so far has been to demarcate and separate human and robot spaces in order to avoid industrial accidents, as these machines are only partially sensitive to changing ambient circumstances, including the presence of human workers. This is likely to change with rapid developments in AI technologies. The development of ‘cobots’ – smaller robots, increasingly endowed with AI and designed to co-work with humans – brings with it a desegregation of robots and human workers. Cobots are becoming more prevalent in distribution and fulfilment centres, making warehouse operations more efficient and helping companies to deal with labour shortages.

A cobot is designed to incorporate built-in safeguards that allow it to interact safely and work side by side with humans, taking on repetitive or strenuous tasks; to achieve this, however, a vast amount of data is collected from the surrounding environment and processed.

Risks

  • Increased capacity for gathering and recording data about workers’ performance, behaviour and movements heightens the risk of privacy and data protection breaches
  • Unfettered monitoring enabled by automated systems may alter workers’ behaviour and damage trust in management, with negative implications for job quality

Greater surveillance possibilities are one inevitable consequence of increasing automation. The gathering and processing of data are necessary to automate processes and optimise work. This, however, does not alleviate concerns about the legitimacy of monitoring workers in a highly automated work environment, with implications for employees’ data protection and privacy rights. The extent to which robots access workers’ data also has implications for any negotiations that trade unions might undertake in an increasingly automated workplace.

The expanded monitoring and recording capabilities of advanced robots give employers more power and control over workers by providing a great deal of information about their performance and their behaviour at work. Such data might be used not only for performance reviews but also in insurance cases and lawsuits.

Although the collection of data through robots can offer valuable insights to improve workplace practices and may be used by responsible employers to benign ends, it can also heighten workers’ perception of being under constant surveillance. Some research argues that the very fact of being under surveillance changes the behaviour of those being watched, curtailing their autonomy and infringing on their privacy. It also affects worker–employer relations, as monitoring and surveillance inevitably shift power dynamics in the workplace and lead to a breakdown in trust between workers and management, with repercussions for job quality. The presence of ever smarter robots in the workplace – monitoring, watching and recording workers’ activities and interactions – can also act as a constant reminder to workers that their jobs may be at risk if their performance fails to meet targets or expectations. This can create feelings of job insecurity, impacting negatively on working life quality.

Concluding commentary

The use of automation technologies in the workplace – particularly if powered by AI – raises concerns about workers’ data protection and privacy rights, as these technologies rely heavily on the collection and processing of a vast amount of data from an interconnected work environment. A 2020 report by the Scientific Foresight Unit of the European Parliamentary Research Service refers to the ‘new surveillance workplace’ as a growing phenomenon and states that:

Automation, robotics and artificial intelligence (AI) are part and parcel to the discussion of monitoring and surveillance of work, where a binding feature for how these processes emerge is the collection, storage, processing and usage of large data sets that are collected in more and more spheres of people’s everyday lives and in particular, workplaces.

(Scientific Foresight Unit, European Parliamentary Research Service, 2020, p. 1)

As automation technologies are increasingly dependent on processing data (including workers’ personal data), it is imperative to establish at workplace level clear governance around data collection, processing and usage and to apply the principles of data minimisation (collecting only data that are strictly necessary and not collecting data to pursue surveillance or for other purposes beyond what was initially specified) and proportionality set out in the GDPR.
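As an illustration of what data minimisation can mean in practice at the point of collection, the sketch below retains only the fields declared strictly necessary for a stated purpose and discards everything else before storage. The purposes, field names and sensor event are hypothetical examples, not a prescribed implementation of the GDPR principle.

```python
# Illustrative data-minimisation filter: only fields declared strictly necessary
# for the stated purpose are kept. Purposes and field names are hypothetical.

NECESSARY_FIELDS = {
    "robot_safety": {"zone_id", "proximity_cm", "timestamp"},
    "maintenance": {"machine_id", "vibration", "temperature", "timestamp"},
}

def minimise(raw_event: dict, purpose: str) -> dict:
    """Drop every field that is not strictly necessary for the declared purpose."""
    allowed = NECESSARY_FIELDS.get(purpose, set())
    return {key: value for key, value in raw_event.items() if key in allowed}

sensor_event = {
    "timestamp": "2021-03-01T08:42:00",
    "zone_id": "A3",
    "proximity_cm": 45,
    "worker_badge_id": "emp-042",  # personal data not needed for safety alerts
    "heart_rate": 88,              # biometric data not needed for safety alerts
}

# Only the fields needed to keep the robot-human interaction safe are stored.
print(minimise(sensor_event, "robot_safety"))
```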

Beyond the application of regulatory frameworks and enforcement, it is equally important that social partners engage in discussions about the implications of these technologies for human dignity and fundamental rights, including but not limited to the rights to privacy and the protection of personal data. The European social partners’ framework agreement on digitalisation – which draws attention to the risks of digital surveillance – has set such discussions in motion at national and local levels.

 

Digitisation



Digitisation is one of the ‘vectors of change’ forming part of ‘digitalisation’ in Eurofound’s conceptual framework. It refers to the process through which aspects of the physical world are rendered into data and virtual models, and vice versa. Three main technologies fall under this vector of change, namely 3D printing, augmented and virtual reality (AR/VR) and the internet of things (IoT).

Introduction

To varying extents, most advanced digital technologies provide logging and reporting functionalities and enable the collection of fine-grained information about their use. These functionalities vary, depending on the specifics of the new technology. According to a recent Delphi (expert) survey conducted by Eurofound, within digitisation, 3D printing is generally not regarded as having a significant influence on workplace monitoring and control practices, whereas IoT and, to some extent, AR/VR are more intrusive technologies as they can harvest and/or record a wide range of data, which can potentially be used for monitoring and surveillance purposes.

For example, smart glasses, combining AR and wearables, are increasingly used in warehousing and logistics operations. These glasses have functionalities that record employees’ movements and interactions. This gives rise to data protection and privacy concerns, especially in the event of misuse of the data. Even where the collected data are anonymised, the underlying algorithms may have the ability to identify the user by cross-referencing data with individuals’ ‘digital traces’.

In a digitised workplace, machines and wearable devices are increasingly endowed with networked sensors that connect physical and virtual objects and extract and share a high volume of data not only within the company but across the supply chain. IoT is also a technology that amplifies the disruptive power of other digital technologies. IoT sensors, for example, can be embedded in AR/VR devices used for training purposes, enabling the collection of a vast array of information – including biometric data – to determine personal factors hindering learning. Such use cases highlight the risks that the integration of advanced digital technologies can pose in terms of access to personal information and interference with the privacy of employees.

There is evidence that the COVID-19 crisis has accelerated the use of digital technologies, including IoT-enabled wearables and devices, with the aim of containing the spread of the virus and for surveillance purposes. This is corroborated by studies such as a 2020 global survey of more than 1,600 businesses across 13 countries, which found that the pandemic has boosted IoT adoption. For 84% of IoT adopters, the integration of IoT devices has become an even higher priority, with a view to supporting remote working.

The merging of personal and work data – particularly in the context of remote working – is facilitated by the syncing of personal and work devices endowed with sensors and connected via the internet to the company’s network infrastructures. IoT-based solutions and devices – embedded not only in the work environment but in everyday objects, including wearables – generate a persistent flow of data, ready to be processed and analysed. Machine learning algorithms can then be applied to the high volume of data generated by IoT devices, thus further expanding surveillance capabilities and enabling profiling and scoring of employees, including potentially in relation to their out-of-work activities.

The wealth of data collected by interconnected devices at the disposal of employers – combined with the use of powerful data analytics technologies – can contribute to a deepening of hierarchies and asymmetries of power in employment relations and raise even greater concerns around data privacy and security. Although the GDPR does not specifically mention IoT, Article 35 makes a data protection impact assessment (DPIA) mandatory when using new technologies that could entail a high risk to the data subject’s rights and freedoms. This also applies to the employment context. The use of IoT devices is explicitly mentioned in lists drawn up by national data protection authorities in relation to high-risk data processing activities requiring a DPIA.

Another important concern is linked to the pervasiveness of IoT sensors and the invisibility of the control and supervision that IoT-based monitoring systems entail. This makes it more difficult for employees to contest management decisions based on sensor-collected data. Without clear governance and consultation with staff, the use of IoT technologies may lead to unhealthy surveillance practices and intensive data-driven human resource management, which would have negative implications for working conditions.

Opportunities

  • More adequate and timely support in the execution of tasks
  • Greater possibilities to adjust the workload and tailor the work environment to employees’ needs
  • Opportunities to harness the learning potential offered by sensor data and support skills development in the workplace
  • Greater traceability of outputs where quality and safety of products are of paramount importance

Research drawing on expert opinions and case studies suggests that digitisation technologies are primarily introduced to optimise business processes and increase efficiency, rather than for employee monitoring and control purposes. The monitoring capabilities of digitisation technologies can help businesses to anticipate or be alerted to machine failures and ensure the traceability of processes and the quality of products, not only within the company but throughout the supply chain. This is essential in the case, for example, of medical and medtech products, where quality is of paramount importance.

During the COVID-19 pandemic, advanced technologies have helped to keep businesses running smoothly by enabling remote working. For instance, the engineering firm Bosch Rexroth in Lohr am Main, Germany, used VR technology to enable remote inspections of facilities that could no longer be inspected in person because of travel restrictions (see the case study of Bosch Rexroth). Such applications are expected to be used more frequently in the future and become standard practice. The use of AR/VR systems can also enable moves towards new forms of learning and collaboration, including tele-mentoring, where more experienced employees guide more junior staff remotely in the execution of complex and challenging tasks. The data collected and recorded by such devices may be essential for quality control purposes, rather than surveillance.

AR/VR, IoT and 3D printing can facilitate remote working, which can benefit employees in terms of improved work–life balance. This positive impact depends, however, on employees not being expected to be always available and on monitoring being switched off when necessary. In this regard, the right to disconnect, enshrined in national legislation in some EU Member States, is particularly useful, as it can set limits on the availability for work of remote employees. Teleworking or remote working should not imply always-on availability and assent to ongoing surveillance or monitoring just because the technology enables it.

Digitisation technologies can also be used to empower employees and help them to perform more challenging or demanding tasks. The data collected can be used to the benefit of employees, for example to enhance on-the-job learning or better tailor the work environment to their needs. A case in point is the AR technology, Light Guide System (LGS), deployed by Belgian non-profit association Mariasteen, which offers employment to people with disabilities. At the metal and assembly factory in Gits (West Flanders), instructions are projected onto assembly workers’ workstations by LGS, enabling them to take on more complex assignments in spite of their disabilities. LGS is equipped with sensors that monitor quality parameters, and the collected data are used to assess the complexity of the tasks each employee can handle and adjust the workload to better fit their capabilities. The research and development team at the Gits factory is considering adding stress sensors (biometric data) to the guidance system in order to adapt instructions and reduce stress. The management is, however, very aware of the privacy and data protection implications of extending the technology and making it more sensor driven.
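The sketch below illustrates, in highly simplified form, how sensor-derived quality data could be mapped to a suggested level of task complexity. The error-rate thresholds and complexity tiers are invented for illustration and do not describe Mariasteen’s actual system.

```python
# Hypothetical mapping from sensor-recorded quality checks to a suggested
# task-complexity tier. Thresholds and tiers are invented for illustration.

def error_rate(quality_checks: list) -> float:
    """Share of failed quality checks recorded by the guidance system."""
    return quality_checks.count(False) / len(quality_checks)

def suggest_complexity(rate: float) -> str:
    """Map an error rate to a task-complexity tier (illustrative thresholds)."""
    if rate < 0.02:
        return "complex assembly"
    if rate < 0.10:
        return "standard assembly"
    return "guided assembly with additional projected instructions"

recent_checks = [True] * 46 + [False] * 4  # 4 failed checks out of 50
rate = error_rate(recent_checks)
print(f"Error rate: {rate:.0%} -> suggested tier: {suggest_complexity(rate)}")
```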

Sensors are applied not only to machines but also to wearable devices (and connected with other systems) that can monitor both environmental and person-specific variables. This can be done to benign ends by responsible employers, for example to increase employees’ safety when operating in hazardous environments.

Some large corporations offer wearables to their employees as part of corporate well-being programmes, to encourage their workforce to be fitter and healthier. The COVID-19 pandemic may boost this trend and make health tracking more mainstream in the world of work.

Risks

  • Threats to privacy and data protection rights and other fundamental rights
  • Negative psychological effects (stress, anxiety, diminished trust in management) deriving from always-on monitoring
  • Difficulty for workers in contesting decisions based on sensor-collected data
  • Risk that sensor-based systems will track performance based on productivity metrics (or key performance indicators) and neglect the psychological and emotional elements of a job

IoT raises concerns with regard to potential negative implications in relation not only to privacy but also working conditions. Employees can suffer negative psychological effects deriving from being controlled in how they perform their tasks, potentially reducing work autonomy, creating a competitive work environment and instilling a feeling of alienation, particularly in a context where employees perceive themselves as highly replaceable.

IoT tends, however, to be introduced primarily to optimise business processes rather than as a surveillance tool. Nonetheless, there is evidence from case studies (for example, on the Estonian gelling agent producer Est-Agar or the Italian plant nursery Centro Seia) of IoT technology being introduced specifically to monitor the quality and quantity of work done by employees, sometimes affecting workers’ remuneration and promotion opportunities or resulting in poorly performing staff members being laid off.

A general concern is that, once introduced into the workplace, pervasive technologies such as IoT can be scaled up quickly, enabling higher levels of employee monitoring and control, and possibly being used for more intrusive purposes than initially intended. Such practices can have negative effects such as increased levels of stress and frustration, loss of privacy and reduced trust in management.

Concluding commentary

Digitisation can bring real and tangible benefits for employees and employers, but it also poses risks to privacy rights and carries negative implications for working conditions, particularly where IoT and related technologies are used; these risks should be taken into account before such technologies are rolled out in the workplace.

Although research suggests that IoT is not introduced primarily for employee monitoring purposes, once the technology enters the workplace it is technically feasible to use it to draw inferences about employees’ efficiency or other aspects of their performance. Given the intrinsic imbalance of power in employment relations, unless strong safeguards and a well-established social dialogue are in place, the use of digitised technologies may be extended, with sensor-collected data guiding important employment decisions, for example about wages or promotion. Even the most detailed data about employee performance (in line with pre-established performance metrics) are not a source of objective truth, and any decision solely based on such data may open the door to discriminatory practices.

The free flow of data from personal to work devices and the portability of most digital devices mean that employee monitoring and surveillance are no longer confined to the physical workplace. Although surveillance is not new, digital technologies such as IoT have made it more pervasive, fluid and intrusive. While IoT-based wearables can improve workers’ safety and enhance their overall well-being, vigilance should nonetheless be maintained with regard to the use of the collected data, and adequate mechanisms must be put in place to avoid any crossing of ethical lines or privacy breaches. These can occur even without employees or employers being aware of it. At company level, it is more important than ever to develop clear governance around employee monitoring and surveillance and to apply principles of privacy and data protection, including purpose limitation and data minimisation.

Social dialogue at European and national levels can be instrumental in creating awareness of the opportunities and challenges at stake. The autonomous framework agreement on digitalisation signed in June 2020 by the European social partners – ETUC, BusinessEurope, CEEP and SMEunited – is an important step in this direction. Measures and actions to be negotiated by social partners at national, regional or company level may bring forward solutions that highlight the positive use of digitisation technologies in a way that is respectful of human dignity and employees’ fundamental rights.

From a regulatory perspective, owing to the pervasiveness of IoT technologies, policymakers might consider building on Article 35 of the GDPR to develop a European DPIA framework for IoT. Privacy-invasive devices such as those equipped with IoT sensors should be designed with privacy and data protection principles in mind and their use governed by stringent adherence to privacy law.

 

Platforms



Platform work is a form of employment in which organisations or individuals use an online platform to access other organisations or individuals to solve specific problems or provide specific services in exchange for payment.

Introduction

Surveys have found that people often take up platform work because they seek high levels of autonomy and flexibility. At the same time, there is clear evidence that both platforms and clients, at least in some types of platform work, exercise a substantial level of control over workers, by prescribing and monitoring the time and manner of conducting the assigned task. This is facilitated by the very nature of this employment form and business model – that is, its intrinsic reliance on the collection of data on the assignment and execution of tasks through the online platform or app.

Employee monitoring and surveillance tend to be more prominent in types of platform work characterised by small-scale, low-skilled tasks, in the assignment and performance of which platforms tend to play a more intrusive role. This kind of task can be delivered online (for example, ‘click work’ or micro tasks) or on location (for example, food delivery by bike). The algorithms underlying the functioning of such platforms enable the close monitoring of work and workers in real time. While monitoring and surveillance in online platform work draw on tools such as keystroke monitoring, screenshots or video recording of the worker, monitoring systems in on-location platform work rely on GPS data or data generated through the app when executing the task. For example, the Uber app uses GPS data from drivers’ smartphones to monitor speed information in real time. Apps used by food delivery platforms (installed on riders’ smartphones) routinely collect information on workers’ performance, for example average speed, deliveries per hour and the number of late or unassigned orders.
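As an illustration of how such app-generated data translate into performance metrics, the sketch below derives deliveries per hour, average speed and the number of late orders from a small set of hypothetical delivery events. The field names and figures are invented and do not describe any specific platform’s system.

```python
# Illustrative derivation of rider performance metrics from app event logs.
# Data, field names and metric definitions are hypothetical examples.

from datetime import datetime

deliveries = [
    # (accepted_at, delivered_at, promised_by, distance_km)
    ("2021-05-01T18:00", "2021-05-01T18:25", "2021-05-01T18:30", 4.2),
    ("2021-05-01T18:30", "2021-05-01T19:05", "2021-05-01T19:00", 6.1),
    ("2021-05-01T19:10", "2021-05-01T19:35", "2021-05-01T19:40", 3.8),
]

def parse(timestamp: str) -> datetime:
    return datetime.fromisoformat(timestamp)

total_hours = sum(
    (parse(delivered) - parse(accepted)).total_seconds() / 3600
    for accepted, delivered, _, _ in deliveries
)
total_km = sum(distance for _, _, _, distance in deliveries)
late_orders = sum(
    1 for _, delivered, promised, _ in deliveries if parse(delivered) > parse(promised)
)

print(f"Deliveries per hour: {len(deliveries) / total_hours:.1f}")
print(f"Average speed: {total_km / total_hours:.1f} km/h")
print(f"Late orders: {late_orders} of {len(deliveries)}")
```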

Employee monitoring and surveillance in platform work are, however, not limited to algorithmic management and control: they can extend to evaluation and rating systems. Ratings – either by the platform or by the clients – tend to influence workers’ access to tasks, or the quality of tasks they are assigned – for example, as regards scale, earnings, location and timing.

The extent to which algorithmic surveillance causes stress for platform workers may depend on a range of contextual factors – for example, whether platform work is chosen as a way to supplement income from other sources and/or is seen as a temporary arrangement that suits the worker’s current needs.

Another important issue is the extent to which platforms protect their workers’ data and preserve their privacy. Platforms are sometimes reluctant to share information on this topic, arguing that it may reveal their business model and undermine their competitive advantage. In their capacity as data handlers, however, platforms are obliged to implement mechanisms to preserve and safeguard workers’ data protection and privacy rights.

Opportunities

  • Automated/algorithmic task assignment can contribute to reduced discrimination and enhanced labour market access for some groups of workers
  • Automated/algorithmic performance monitoring can contribute to overcoming human bias in performance appraisal
  • Automated/algorithmic monitoring can improve efficiency in business processes and work organisation
  • Transparent and well-designed algorithms can enhance perceptions of fairness, ensure procedural justice and ultimately increase job satisfaction

Algorithmic monitoring and surveillance in platform work tend to be discussed more in relation to the risks that they pose than the opportunities that they create for workers. Opportunities are more on the employer or service provider side, as the technology enables employers to manage a variety of processes that would in the past have required different skill sets but now require limited human intervention, as the algorithm does most of the work. Platform work also creates opportunities for a larger proportion of the population, by providing people with greater access to services.

For workers, transparent and fair algorithms can improve procedural justice and job satisfaction. Intentional or unintentional human bias in selecting workers or rating them can be circumvented by automated management and monitoring, thus potentially reducing discrimination, which, notably for disadvantaged groups, can increase access to the labour market and earning opportunities.

Risks

  • Close and constant algorithmic monitoring can amplify the negative effects of platform work, such as limited or no discretion over task execution
  • There may be a lack of clear policies in relation to the creation, collection and use of workers’ data, or a lack of mechanisms to preserve and safeguard workers’ data protection and privacy rights
  • Algorithms can lead to discriminatory or unfair employment decisions (for example, in relation to suspension or non-renewal of contracts), with limited or no options for appeal
  • The extent to which algorithmic management in platform work entails fully automated decision-making and is therefore non-compliant with Article 22 of the General Data Protection Regulation (GDPR) is unclear
  • The spread of algorithmic work management practices to more traditional sectors of activity could contribute to the platformisation of work

In those types of platform work where either the platform or the client strongly determines and monitors work organisation, working time and how tasks are carried out, workers’ autonomy and flexibility are limited, which tends to result in decreased job quality. Some research finds that platforms’ algorithmic control results in lower pay, social isolation and higher working time intensity.

Such close surveillance is particularly problematic in the following cases.

  • Surveillance, in practice, results in a situation in which workers are subordinated to the platform or client but are classified as self-employed, hence bearing the entrepreneurial risk without being able to benefit from the discretion expected from self-employment.
  • Workers do not have full transparency about what is monitored – that is, what data are collected for what purposes.
  • Performance monitoring data that are collected affect workers’ access to task assignment, the quality of the tasks assigned (for example, as regards the related earnings or working time schedules) or other outcomes for workers (notably pay).
  • The surveillance mechanism is largely or exclusively based on an algorithm that rigidly assesses the worker’s performance based on the criteria and data fed into the system without allowing for situational considerations (for example, penalising a worker for a ‘delayed’ delivery caused by the client not immediately confirming receipt).
  • Workers do not have adequate redress options if they feel unfairly treated because of the above.

It may be argued that monitoring and surveillance are implicit in platform work and to some extent taken for granted by platform workers. This does not, however, mean that there are no risks for workers arising from the constant supervision exerted by the algorithm. Limited work autonomy as a result of constant algorithmic monitoring can lead to low job satisfaction and a perception that the work lacks meaningfulness. It can result in higher stress and work intensity, and – as regards taxi-like services or food delivery – a higher risk of accidents. Unfavourable algorithmic or client-based performance ratings can reduce workers’ income generation and work opportunities.

The extent of the monitoring matters. If it is perceived as excessive, it will amplify the negative outcomes, and it is more likely to result in resistance or counterproductive behaviour on the part of workers. This applies to platform work as much as any other kind of work, as too much monitoring ends up disempowering workers or causing resistance at either an individual or a collective level. For example, Uber drivers have been found to resist the algorithmic allocation of work by turning off driver mode on their devices. They might resort to this, for instance, if they do not want to pick up a passenger in a neighbourhood they consider unsafe. Similarly, platform workers delivering services online learn to circumvent automated monitoring by putting in place a number of strategies, for example timing work to the known rhythm of automated screenshots.

Furthermore, research points to the increasing platformisation of work, meaning that digital management practices, including monitoring and surveillance, that initially emerged on digital labour platforms start spreading to more traditional work environments. Surveillance enabled by algorithms opens the door to a reconfiguration of existing precarious work.

Concluding commentary

Much platform work – particularly the type exemplified by food delivery and ride hailing platforms – relies on algorithmic control, automated data collection and constant surveillance of workers. It remains unclear how digital labour platforms protect the data and privacy of workers. As platform work relies heavily on data collection, adequate regulatory frameworks should be put in place to protect workers’ rights as regards the creation, collection and use of data. Digital labour platforms need to provide clear data governance policies that explain how the algorithm works and to put in place adequate mechanisms to prevent any misuse or abuse of the algorithm. The issue of the transparency of the algorithm needs to be addressed by regulators and policymakers; workers cannot be reduced to objects under opaque algorithm-based systems. When important decisions are delegated to opaque algorithms, fairness and accountability are undermined.

There is a need to build on the EU GDPR and address the issues arising from algorithmic management. The GDPR regulates the processing of personal data, but in many instances algorithmic management draws on a combination of personal and non-personal data, which makes it difficult to establish the applicability of the GDPR provisions to platform workers. Algorithmic management also poses challenges for compliance with core principles set out in the GDPR – for example, purpose limitation in the collection of personal data, data minimisation and storage limitation.

It is unclear to what extent algorithmic management in platform work entails fully automated decision-making. This is particularly important in the context of workers having their account deactivated or their contract suspended or not renewed owing to poor performance. If such decisions are based solely on automated processing, they risk breaching Article 22 of the GDPR, which grants individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects; in effect, a ‘human in the loop’ is required for decisions such as terminations. It is important that the new EU rules on AI also apply to algorithm-based monitoring systems embedded in digital labour platforms if they are found to take on core management functions, replacing human input. The ‘explainability’ of the algorithm is crucial: unless algorithms are explainable, it is impossible to identify whether and where discrimination occurs.
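By way of a hypothetical sketch only (the workflow, names and threshold below are illustrative assumptions, not a description of any real platform or a legal reading of Article 22), a ‘human in the loop’ safeguard could limit the automated system to flagging accounts, with the final deactivation decision reserved for a human reviewer:

```python
from dataclasses import dataclass

# Hypothetical threshold below which the system may *propose* deactivation;
# all names and values here are illustrative assumptions.
DEACTIVATION_RATING_THRESHOLD = 3.0

@dataclass
class PerformanceRecord:
    worker_id: str
    rating: float        # aggregated client/algorithmic rating on a 0-5 scale
    completed_tasks: int

def propose_deactivation(record: PerformanceRecord) -> bool:
    """The automated component may only flag an account for review;
    it never deactivates the account on its own."""
    return record.rating < DEACTIVATION_RATING_THRESHOLD

def decide_deactivation(record: PerformanceRecord, reviewer_approves: bool) -> bool:
    """The final decision requires explicit confirmation by a human reviewer,
    who can weigh situational factors that the algorithm does not capture."""
    return propose_deactivation(record) and reviewer_approves

# Example: the algorithm flags the account, but the human reviewer overrules it.
record = PerformanceRecord(worker_id="w-123", rating=2.8, completed_tasks=410)
print(decide_deactivation(record, reviewer_approves=False))  # False: the account stays active
```

In such a set-up the algorithm proposes but a person disposes, and the decision can be documented, explained and contested.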

Spain is the first EU Member State to approve a groundbreaking law that not only grants employee status to food delivery riders working for digital platforms but also regulates algorithmic transparency in employment. The law grants workers and their representatives the right to be informed by the company about ‘the parameters, rules and instructions on which the algorithms or artificial intelligence systems are based that affect decision-making’ and the impact on working conditions.

Although algorithmic control and management are built into the mechanisms underlying the functioning of many digital labour platforms, they are not exclusive to that form of employment. They can be found in more traditional sectors of the economy, particularly in business logistics and warehousing operations. There is a trend towards algorithmic management spreading to more standard forms of employment and reshaping organisational control in more traditional work settings (especially in non-unionised workplaces), with negative implications for employment and working conditions. It is therefore important to continue tracking developments in this area, particularly with regard to advances in software algorithms and similar technologies, and their increasing use in human resources and productivity management and planning.

References

Eurofound sources

Eurofound (2017), Sixth European Working Conditions Survey – Overview report (2017 update) , Publications Office of the European Union, Luxembourg.

Eurofound (2018), Automation, digitisation and platforms: Implications for work and employment , Publications Office of the European Union, Luxembourg.

Eurofound (2018), Employment and working conditions of selected types of platform work , Publications Office of the European Union, Luxembourg.

Eurofound (2018), Game changing technologies: Exploring the impact on production processes and work , Publications Office of the European Union, Luxembourg.

Eurofound (2018), Industrial internet of things: Digitisation, value networks and changes in work , Eurofound working paper, Dublin.

Eurofound (2018), Platform work: Types and implications for work and employment – Literature review , Eurofound working paper, Dublin.

Eurofound (2019), Advanced robotics: Implications of game-changing technologies in the services sector in Europe , Eurofound working paper, Dublin.

Eurofound (2019), Wearable devices: Implications of game-changing technologies in services in Europe , Eurofound working paper, Dublin.

Eurofound (2020), Back to the future: Policy pointers from platform work scenarios , New forms of employment series, Publications Office of the European Union, Luxembourg.

Eurofound (2020), ‘ COVID-19: Fast-forward to a new era of employee surveillance ’, blog post, 9 December.

Eurofound (2020), Employee monitoring and surveillance: The challenges of digitalisation , Publications Office of the European Union, Luxembourg.

Eurofound (2020), Game-changing technologies: Transforming production and employment in Europe , Publications Office of the European Union, Luxembourg.

Eurofound (2020), Regulations to address work–life balance in digital flexible working arrangements , New forms of employment series, Publications Office of the European Union, Luxembourg.

Eurofound (forthcoming), Digitisation in the workplace: Uptake, drivers and impact on work organisation and job quality , Publications Office of the European Union, Luxembourg.

Eurofound (undated), Platform economy repository .

Eurofound and Cedefop (European Centre for the Development of Vocational Training) (2020), European Company Survey 2019: Workplace practices unlocking employee potential , European Company Survey 2019 series, Publications Office of the European Union, Luxembourg.

Other sources

Accenture (2019), ‘ More responsible use of workforce data required to strengthen employee trust and unlock growth, according to Accenture report ’, press release, 21 January.

Ball, K. (2010), ‘Workplace surveillance: An overview’, Labour History, Vol. 51, No. 1, pp. 87–106.

Bloomberg (2019), ‘ Google accused of creating spy tool to squelch worker dissent ’, 23 October.

Business Insider (2020), ‘ Employees at home are being photographed every 5 minutes by an always-on video service to ensure they’re actually working – and the service is seeing a rapid expansion since the coronavirus outbreak ’, 23 March.

Business Insider (2021), ‘ McDonald's has reportedly been collecting ‘strategic intelligence’ on unionizing workers as they fight for a $15 minimum wage ’, 25 February.

Degryse, C. (2017), Shaping the world of work in the digital economy , Social Science Research Network, Rochester, New York.

ETUC (European Trade Union Confederation) (2018), Digitalisation and workers participation: What trade unions, company level workers and online platform workers in Europe think , Brussels.

ETUC, BusinessEurope, CEEP and SMEunited (2020), European social partners framework agreement on digitalisation , Brussels.

European Commission Directorate-General for Employment, Social Affairs and Inclusion (2018), Employment and social developments in Europe: Annual review 2018 , Brussels.

Huws, U., Spencer, N. H. and Coates, M. (2019), The platformisation of work in Europe: Highlights from research in 13 European countries , Foundation for European Progressive Studies, Brussels.

Lee, M. K., Kusbit, D., Metsky, E. and Dabbish, L. (2015), ‘ Working with machines: The impact of algorithmic, data-driven management on human workers ’, Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 1603–1612.

McNall, L. A. and Stanton, J. M. (2011), ‘ Private eyes are watching you: Reactions to location sensing technologies ’, Journal of Business and Psychology, Vol. 26, pp. 299–309.

Mordini, E. and Massari, S. (2008), ‘Body, biometrics and identity’, Bioethics, Vol. 22, No. 9, pp. 488–498.

Ponce del Castillo, A. (2020), COVID-19 contact-tracing apps: How to prevent privacy from becoming the next victim , ETUI Policy Brief No. 5/2020, European Trade Union Institute, Brussels.

PrivazyPlan (undated), Article 35: EU GDPR – Data protection impact assessment , web page, accessed 12 August 2021.

PwC (2020), ‘ PwC statement on technology compliance tool ’, press release, 16 June.

Scientific Foresight Unit, European Parliamentary Research Service (2020), Data subjects, digital surveillance, AI and the future of work , Brussels.

Social Europe (2020), How digitalisation must be harnessed to save jobs , web page, accessed 12 August 2021.

Time (2016), ‘ Uber is tracking drivers’ phones to watch for speeding ’, 29 June.

Torpey, J. (2007), ‘Through thick and thin: Surveillance after 9/11’, Contemporary Sociology, Vol. 36, No. 2, pp. 116–119.

Trades Union Congress (2020), Intrusive technology at work on the rise during coronavirus , web page, accessed 12 August 2021.

Vargo, D., Zhu, L., Benwell, B. and Yan, Z. (2021), ‘ Digital technology use during COVID-19 pandemic: A rapid review ’, Human Behaviour and Emerging Technologies, Vol. 3, No. 1, pp. 13–24.

Vodafone (2020), IoT spotlight 2020 , web page, accessed 17 September 2021.

Warin, R. and McCann, D. (2018), Who watches the workers? Power and accountability in the digital economy , New Economics Foundation, London.

Wood, A. J. (2021), Algorithmic management: Consequences for work organisation and working conditions , European Commission, Seville.

Wood, A. J., Graham, M., Lehdonvirta, V. and Hjorth, I. (2019), ‘ Good gig, bad gig: Autonomy and algorithmic control in the global gig economy ’, Work, Employment and Society, Vol. 33, No. 1, pp. 56–75.

Wood, A. J., Lehdonvirta, V. and Graham, M. (2018), ‘Workers of the internet unite? Online freelancer organisation among remote gig economy workers in six Asian and African countries’, New Technology, Work and Employment, Vol. 33, No. 2, pp. 95–112.

Zuboff, S. (2019), The age of surveillance capitalism: The fight for a human future at the new frontier of power , PublicAffairs, New York.

Image © SYARGEENKA/Adobe Stock Photos
