EurWORK European Observatory of Working Life

Artificial intelligence


According to the proposal for a regulation laying down harmonised rules on artificial intelligence, presented on 21 April 2021 by the European Commission, artificial intelligence (AI) systems refer to:

software that is developed with one or more … techniques and approaches … and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

The Commission explains that this definition ‘aims to be as technology neutral and future proof as possible, taking into account the fast technological and market developments related to AI’. To provide the required legal certainty, Annex I contains a detailed list of approaches and techniques for the development of AI to be adapted by the Commission in line with the latest technological developments.


The proposal for a regulation laying down harmonised rules on artificial intelligence constitutes the first legal framework regulating AI in the European Union. It is accompanied by a communication in which the Commission explains its initiative, a review of the 2018 coordinated plan on AI (to be taken forward in conjunction with the Member States) and a proposal for a regulation on machinery products, which is intended to replace the current Machinery Directive.

Once they have been passed, these regulations will apply directly in the Member States, without needing to be transposed into national law.

Employment aspects

The proposed regulation highlights the fact that some AI systems could pose significant risks to the health and safety or fundamental rights of persons at work and therefore their use needs to be supervised. Annex III lists some of these ‘high-risk’ AI systems, such as:

  • AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, or evaluating candidates in the course of interviews or tests.
  • AI systems intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation, and for monitoring and evaluating performance and behaviour of persons in such relationships.
  • AI systems intended to be used for the purpose of determining access to educational and vocational training or for assessing students in training institutions.

These systems are deemed ‘high risk’ because they could significantly affect the career prospects and future livelihoods of workers. The Commission states that when these systems are ‘improperly designed and used’, they may perpetuate historical patterns of discrimination, for example against women, particular age groups, people with disabilities, or people of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of workers may also infringe on their rights to data protection and privacy. The regulation stipulates that the Commission is entitled to amend the list of high-risk AI systems under certain conditions.

The fact that an AI system is classified as high risk gives rise in particular to two obligations to provide users with guarantees. First, these AI systems must be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately (Article 13). Second, they must be designed and developed using appropriate human–machine interface tools, so that they can be effectively overseen by natural persons during the period in which the AI system is in use. Human oversight should aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used (Article 14).

These two obligations are intended to establish the conditions under which people can trust AI systems classified as high risk. This trust will also rest on the compliance obligation placed on the AI provider, which must self-certify its conformity before placing an AI system on the market or putting it into service.

Social dialogue

On 30 November 2020, the European social partners in the telecommunications sector (the European Telecommunications Network Operators’ Association and UNI Europa) issued a joint declaration on artificial intelligence. It was the first time that sectoral-level European social partners had produced a text devoted to this subject. Their work followed on from the European framework agreement on digitalisation adopted in June 2020. Chapter 3 of that agreement deals with AI and enshrines the ‘human in control principle’ with regard to AI. The social partners in the telecommunications sector emphasise the importance of social dialogue in the design and deployment of AI, especially as regards matters such as data collection and management, ethical AI and the need for workforce reskilling/upskilling in the field of AI.

Following the telecommunications sector initiative, the European social partners in the insurance sector adopted a joint declaration on artificial intelligence, on 16 March 2021. According to the signatories, ‘it is important that the social partners engage in social dialogue’ to promote the responsible use of AI.

Related dictionary terms

Automation; digital agenda; digital economy; digitisation; platform work
