This article explores two of the many possible legal ramifications associated with integrating artificial intelligence tools and solutions into workplaces.
The article focuses on employers' obligations under the National Labor Relations Act, and health and safety issues concerning robots and AI.
For more guidance on AI, see Generative Artificial Intelligence (AI) Resource Kit.
Integration of AI in the Unionized Workplace
When an employer with a unionized workforce begins to consider the possible integration of AI tools in the workplace, the employer must evaluate whether the integration is a mandatory or permissive subject of bargaining under the NLRA.
Several of the factors to consider, in consultation with counsel, include:
- Collective bargaining agreement: Does the CBA cover issues related to the integration of new technology?
- Terms and conditions of employment: To what degree does AI integration affect job responsibilities, work hours, wages and benefits?
- Potential job losses: Will jobs be lost and to what extent?
- Legal precedent: Are there applicable legal precedents to guide the bargaining analysis?
In addition to the foregoing, and depending upon the circumstances, the employer should consider consulting with union leadership about the introduction of AI tools.
Protected Rights Under the NLRA
Employers should be vigilant to ensure that the integration of AI-driven automated or algorithmic management tools in the workplace does not infringe upon employees' rights safeguarded by the NLRA.
Tools that analyze worker productivity, set quotas, automate the hiring processes, create schedules, monitor employees' activity, or otherwise affect the terms and conditions of employment could result in allegations of interference with employees' rights under NLRA Section 7 to engage in union activities and collective action.
Section 8 of the NLRA provides an enforcement mechanism for protecting employees' rights — including Section 7 rights — under the NLRA and outlines certain unfair labor practices.
On Oct. 31, 2022, the National Labor Relations Board's general counsel published memorandum GC 23-02, Electronic Monitoring and Algorithmic Management of Employees Interfering with the Exercise of Section 7 Rights,[1] highlighting potential violations of employee rights under the NLRA resulting from using "automated or semi-automated decision-making" tools.
The general counsel asserts that employers may violate Section 8(a)(1) of the NLRA if they discipline employees who "concertedly protest ... the pace of work set by algorithmic management." Additionally, the general counsel identifies the potential for a violation of Section 8(a)(3) if, in the screening process, employers use artificial intelligence algorithms that make decisions based on the employees' protected activity or "discriminatorily apply [] production quotas or efficiency standards to rid themselves of union supporters."
Based on these potential violations, the general counsel "urge[s] the Board to find that an employer has presumptively violated Section 8(a)(1) where the employer's ... [automated] management practices, viewed as a whole, would interfere with, or prevent a reasonable employee from engaging in activity protected by the [NLRA]."
An employer must then show that the tools are "narrowly tailored to address a legitimate business need."
Even if an employer can show that its business needs outweigh the employees' Section 7 rights, the general counsel still "urge[s] the Board to require the employer to disclose to employees the technologies it uses to monitor and manage them, its reasons for doing so, and how it is using the information it obtains."
Complying With the NLRA Regarding AI
Using AI tools in connection with employment-related decisions and performance management could present challenges if unfair labor practice charges are later filed challenging those decisions.
The complexity of AI algorithms and decision-making processes, and their often black-box nature, may make it difficult for an employer to prepare its defense and meet its burden of demonstrating just cause for a particular decision.
Consider the following measures to minimize risk following the integration of AI in the workplace.
Pre-Implementation Transition Planning
Prepare plans for managing any anticipated effect on working conditions and job losses, including potential organizational restructuring and retraining.
Adhere to Collective Bargaining Obligations
AI systems can affect employees' wages, hours and other conditions of employment and, therefore, could be implicated as mandatory subjects of bargaining. Employers should recognize when AI may affect a condition of employment subject to mandatory bargaining, which triggers a duty to bargain over the subject in good faith with the union.
Examples of some circumstances that might require bargaining include using AI-generated performance metrics that may not be consistent with metrics identified in a CBA, using an AI tool to manage shift allocation based on volume predictions, automating tasks that could affect job responsibilities and headcount, and monitoring employees with AI systems.
Further, although employers are generally not required to bargain over core managerial decisions — e.g., implementing an AI system to increase workplace efficiency — they may have to bargain over the effects of such changes.
Ensure AI Systems Do Not Interfere With Employees' Rights
Ensuring that AI systems do not interfere with employees' Section 7 rights requires a combination of technical, procedural and human intervention. Some steps that employers can take include:
- Education: Make sure your company's information technology team and management are aware of the protections afforded by the NLRA and how an AI tool could potentially affect protected rights;
- Understanding the AI tool design: Opt for transparent, rather than black box, systems and make sure the critical data inputs and variables used by the AI tool to perform its functions and generate output are not related to protected activities;
- Periodic audits: Frequent audits can identify unintended biases or patterns in the tool's output (a minimal audit sketch follows this list);
- Human oversight: Never rely solely on AI output for decision-making. Instead, keep humans in the decision loop, particularly where output may penalize employees or alter working conditions; and
- Engagement: Consider engaging with employees and unions to understand any concerns they may have about a new AI tool.
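For employers whose technical teams carry out the periodic audits noted above, the review can be as simple as comparing how often the tool flags or penalizes employees across work groups and escalating large disparities for human review. The following is an illustrative sketch only; the column names, data source and review threshold are hypothetical and would need to be adapted to the particular AI tool, with input from counsel.

```python
# Illustrative sketch only: a periodic audit comparing how often an AI
# management tool flags employees across work groups. The column names
# ("group", "flagged") and the sample data are hypothetical.
import pandas as pd

def audit_flag_rates(decisions: pd.DataFrame, threshold: float = 0.05) -> pd.DataFrame:
    """Summarize the tool's flag rate per group and mark large disparities
    for human review (e.g., by HR and labor counsel)."""
    summary = (
        decisions.groupby("group")["flagged"]
        .agg(rate="mean", count="size")
        .reset_index()
    )
    overall = decisions["flagged"].mean()
    summary["gap_vs_overall"] = summary["rate"] - overall
    summary["needs_review"] = summary["gap_vs_overall"].abs() > threshold
    return summary

if __name__ == "__main__":
    # Hypothetical export of the AI tool's recent output.
    sample = pd.DataFrame(
        {
            "group": ["night_shift", "night_shift", "day_shift", "day_shift"],
            "flagged": [1, 1, 0, 1],
        }
    )
    print(audit_flag_rates(sample))
```

An audit of this kind does not determine whether a disparity is unlawful; it simply surfaces patterns in the tool's output so that humans, rather than the algorithm alone, decide what to do about them.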
Favor Transparency in Implementing AI Systems
Because AI tools can affect employment terms and conditions, employers need to be transparent about their use of an AI tool and its purposes.
For example, employers should notify employees as soon as possible after implementing a new AI system that affects them. In addition, employers should consider creating an AI systems policy — either standalone or as part of their employee handbook — that clearly outlines the employer's position on the appropriate use of AI systems.
Understand the Factors the AI System Relies Upon
Be prepared to defend against unfair labor practice charges and other challenges made to employment-related decisions that are assisted by an AI tool. Ensure there is a record of the factors and variables used by the AI tool and how the tool used those factors or variables to assist in the decision-making process.
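As one way to maintain such a record, an employer's technical team could log, for each AI-assisted recommendation, the inputs the tool relied on and the human who reviewed the output. The sketch below is illustrative only; the field names, example values and append-only log format are hypothetical.

```python
# Illustrative sketch only: recording the factors an AI tool relied on for each
# employment-related recommendation, so the basis for a decision can be
# reconstructed later. Field names and storage format are hypothetical.
import json
from datetime import datetime, timezone

def log_ai_assisted_decision(
    employee_id: str,
    recommendation: str,
    factors: dict,
    human_reviewer: str,
    log_path: str = "ai_decision_log.jsonl",
) -> None:
    """Append one decision record, including the factors the tool considered
    and the human who reviewed the output, to an append-only log file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "employee_id": employee_id,
        "recommendation": recommendation,
        "factors": factors,  # e.g., {"units_per_hour": 41, "error_rate": 0.02}
        "human_reviewer": human_reviewer,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage with hypothetical values:
log_ai_assisted_decision(
    employee_id="E-1042",
    recommendation="coaching",
    factors={"units_per_hour": 41, "error_rate": 0.02},
    human_reviewer="supervisor_jdoe",
)
```

A record of this kind gives the employer a contemporaneous account of what the tool considered and who signed off, which can be far easier to produce in a later proceeding than attempting to reconstruct an opaque algorithm's reasoning after the fact.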
Health and Safety Issues Concerning Robots and Artificial Intelligence
In addition to NLRA considerations, AI-driven robotics raise novel issues and concerns for employers regarding employee health and safety.
There are currently no Occupational Safety and Health Administration standards specifically for the robotics industry.[2] However, OSHA has highlighted general standards and directives applicable to employers utilizing robotics and provided guidelines for robotics safety in the "Industrial Robot Systems and Industrial Robot System Safety" chapter of its technical manual.[3]
Under the Occupational Safety and Health Act, a covered employer utilizing robotics — like any other employer the OSH Act covers — must conduct a hazard assessment, in which it reviews working environments for potential occupational hazards.[4]
An employer that identifies a hazard must implement a hazard control, in the following order of preference: hazard elimination, hazard substitution, engineering controls, administrative controls or personal protective equipment.[5]
Another OSHA standard requires employers to provide protections for operating machines, such as machine guards, to "protect the operator and other employees in the machine area from hazards."[6]
Further, subpart S of the OSHA Standards requires employers to ensure that electrical equipment, such as wiring, conduit and breakers, meets safety standards, and also provides marking, labeling and safe-distance requirements based on voltage and other potential electrical hazards.[7]
Potential Employer Hazards and Health and Safety Issues With Robots and AI
The legal considerations discussed above with respect to potential bargaining obligations and NLRA rights when AI tools are introduced apply equally to the introduction of robot technology and AI-driven robots.
Employers should consider taking the following actions to mitigate health and safety risks flowing from employee exposure to and interaction with robots in the workplace.
Develop a basic understanding of the robot's potential hazards and preventative measures the employer can take.
Assess how the introduction of a robot may affect the safety of employees.
Due to the complexity of sophisticated robots, the employer's managers and supervisors are unlikely to understand their inner workings. As a result, it may be difficult for employers to identify and eliminate their potential hazards.
Accordingly, the employer should train its management staff on the robot's decision-making processes, what actions the robot could take and under what circumstances it would take such actions, and how to eliminate the hazard should the robot malfunction, such as the steps for shutting it down. Employees should be provided with training to ensure that they can work safely alongside robots.
Know whom to contact when a robot malfunctions.
Unlike human errors, which employers can address through discipline and retraining, when the root cause of a workplace accident involves the logic of a robot, such traditional methods are not applicable. Rather, the employer may need to consult highly trained engineers to understand why the robot malfunctioned and correct the robot's performance.
If doing so is not feasible, the employer could replace a manufacturing line entirely, but due to the significant cost and disruption to the business this would cause, an employer should only order a complete replacement as a last resort.
[1] 2022 NLRB GCM LEXIS 26. https://advance.lexis.com/api/document?collection=administrative-materials&id=urn:contentItem:66S6-N411-FJM6-61BW-00000-00&context=1000522.
[2] Robotics, Standards, Occupational Safety and Health Administration, Safety and Health Topics. https://www.osha.gov/SLTC/robotics/index.html.
[3] https://www.osha.gov/otm/section-4-safety-hazards/chapter-4.
[4] 29 C.F.R. § 1910.132(d). https://advance.lexis.com/api/document?collection=administrative-codes&id=urn:contentItem:608H-2J01-DYB7-W29R-00000-00&context=1000522.
[5] OSHA Recommended Practices for Safety and Health Programs, Identifying Hazard Control Options: The Hierarchy of Controls (osha.gov). https://www.osha.gov/sites/default/files/Hierarchy_of_Controls_02.01.23_form_508_2.pdf.
[6] 29 C.F.R. § 1910.212(a). https://advance.lexis.com/api/document?collection=administrative-codes&id=urn:contentItem:608H-2J01-DYB7-W2CT-00000-00&context=1000522.
[7] 29 C.F.R. § 1910.301, Electrical Standards, Occupational Safety and Health Administration. https://advance.lexis.com/api/document?collection=administrative-codes&id=urn:contentItem:608H-2J01-DYB7-W2DY-00000-00&context=1000522.
Reproduced with permission. Originally published October 23, 2023, “AI At Work: Safety And NLRA Best Practices For Employers,” Law360.