Artificial intelligence (AI) is attracting worldwide attention and has become a key issue in the corporate world. AI has the potential to fundamentally change the way we work, and the allure of the new encourages employees to experiment with it in the workplace. From the perspective of employers and employees, AI programmes are first and foremost work tools. The employer decides which work equipment is - and is not - used in the company. Employers can prohibit the use of AI tools, but they then run the risk that employees will use them anyway from a private computer or smartphone and thereby leak company data. Since AI can make work easier and thus increase employee productivity, controlled use in the workplace is preferable to a general ban. Employers should therefore introduce policies on the use of AI in the workplace.
Employer policies on the use of AI as a technical tool in the workplace must be implemented in a legally effective manner. When drafting rules on the use of AI, the limits on the permissibility of instructions under labour law and data protection law must be observed. In companies with a works council, it is advisable to conclude a works agreement in order to safeguard the works council's co-determination rights, promote acceptance among the workforce and create a legal basis for data processing. In the event of a judicial review of the policy, the involvement of the works council is considered an indication that the rules are balanced. In companies without a works council, particular care must be taken to safeguard the rights of employees in order to ensure that the instructions are permissible.
Compliance with the policy must be monitored, and the limits set by data protection and labour law must be observed in doing so.
Responsibility for work results produced with AI should be clearly attributed to the employees so that consequences under labour law can be drawn in the case of faulty results (e.g. a warning or termination for incorrectly produced reports, presentations, research, etc.). The key point is that employees who are assisted by AI in their work must not be relieved of their own responsibility for checking its output.
In addition to written instructions on how to deal with AI, employee training is necessary to raise awareness and to ensure that the instructions are actually read and understood.
With regard to protecting the company's integrity and business secrets, the higher up in the hierarchy AI is used, the greater the risk, because strategically relevant information can then also leak out. In listed companies, a conversation with ChatGPT can pass insider information on to the AI. The board and managers must therefore also be made aware of the dangers. Supervisory board members and the works council should be taken into account as well: they are not subject to instructions from management, but they are themselves obliged to protect business secrets and must be made familiar with the technical risks.