Generative AI and Compliance: Navigating Legalities in the Workplace

Is your company ready for the legal and compliance issues surrounding artificial intelligence (AI)? Gulf Coast area employers are facing new challenges as generative AI becomes more widely used across sectors. An October 2023 executive order from the Biden Administration aims to create a legal framework for AI and its use in the workforce. As the technology advances, AI is increasingly used in recruitment strategies as well as on the job. Here’s what you need to know about AI legalities and the regulations on the horizon.

Understanding the AI Regulatory Environment

The Biden Administration’s Executive Order on AI outlines policies and principles to guide AI’s advancement while addressing concerns like fraud, discrimination, bias, and national security risks. Each of these principles is crucial for employers, especially as intellectual property (IP) challenges and competition regulations become more of an issue across industries.

The information most applicable to employers is subsection C, which will “seek to adapt job training and education to support a diverse workforce and help provide access to opportunities.” In a nutshell, the executive order states that AI in the workplace should advance worker opportunities and should be used responsibly to augment human work and positively impact workers’ lives.

Compliance Challenges in AI Implementation

A McKinsey & Company survey found that most employers lack policies to manage AI use in the workplace. Only 21% of respondents reported having established AI policies, which points to the need for written guidelines that address AI's risks and regulatory compliance. Employers should take proactive steps in these early days of AI to ensure their AI usage aligns with legal standards, particularly where it intersects with data privacy, employment law, and IP rights. These four key compliance issues should be addressed in a company’s AI policy:

  1. Data Privacy and Employment Law: The Equal Employment Opportunity Commission’s (EEOC) AI and Algorithmic Fairness Initiative aims to ensure AI complies with federal discrimination laws and protects workers from AI-related discrimination. Employers need a policy to ensure their AI recruitment tools don’t unintentionally discriminate against certain groups, or they risk lawsuits. For example, the EEOC’s case against iTutorGroup ended in a settlement after the company’s recruiting software automatically rejected female applicants aged 55 or older and male applicants aged 60 or older. Have a clear understanding of how AI is being used at your company and evaluate whether it produces unbiased results (a simple adverse-impact check appears after this list).
  2. AI Training Risks: When training AI models, companies must mind intellectual property rights and copyright so they don’t open themselves up to IP lawsuits. Any information uploaded to train generative AI must be owned by the company, available in the public domain, or expressly approved for use by the copyright holder. Copyright infringement lawsuits related to AI are already making their way through the courts, so establish guidelines around what information can (and can’t) be used to train AI systems.
  3. Ethics, Bias, and Validation: AI has the potential to revolutionize how we work and can expand a company’s ability to provide goods and services, but it cannot exercise reason or good judgment. Consider establishing human oversight and testing protocols to limit bias and validate responses. Define when and how people should train and test the AI system to ensure it provides accurate, unbiased results. Beyond checking the system for bias, train any employees working with it in the ethical use of AI, compliance with data privacy and intellectual property laws, and when to stop and recalibrate the system.
  4. Liability Concerns: Depending on how the AI is used and trained, it may produce outputs that sound authoritative but are fabricated or nonsensical (often called hallucinations). If false information reaches customers or ends up in a contract, it creates liability risk. Wherever and however a company uses AI, be transparent about its limitations.
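As a concrete starting point for the bias testing described in items 1 and 3, here is a minimal sketch that applies the EEOC’s “four-fifths rule,” a common benchmark under which a selection rate for any group below 80% of the highest group’s rate may signal adverse impact. The group labels, applicant counts, and function names are hypothetical, and a flagged ratio is a prompt for human and legal review, not proof of discrimination.

```python
# Minimal sketch: adverse-impact check for an AI screening tool using the
# EEOC's "four-fifths rule". All group labels and counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the AI tool advanced."""
    return selected / applicants if applicants else 0.0

def adverse_impact_ratios(outcomes: dict) -> dict:
    """Compare each group's selection rate to the highest group's rate."""
    rates = {group: selection_rate(sel, total)
             for group, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {group: (rate / top if top else 0.0)
            for group, rate in rates.items()}

# Hypothetical outcomes from an AI resume screener: (advanced, applicants)
outcomes = {
    "under_40": (120, 400),    # 30% selection rate
    "40_and_over": (45, 300),  # 15% selection rate
}

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical, the older group’s impact ratio is 0.50, well under the 0.8 threshold, which is exactly the kind of result that should trigger the human oversight described in item 3.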

Practical Advice for Staying Compliant

To help ensure compliance, keep these guidelines in mind:

  • Join professional groups or organizations dedicated to this evolving topic and follow thought leaders in the space to stay abreast of the constantly changing technology and how it impacts your industry and workforce.
  • Establish clear AI policies in collaboration with IT and legal teams to define acceptable AI uses and manage risks. It’s a good idea to ensure employees are informed when AI interacts with their personal data and that they consent to it.
  • Review and update AI policies regularly, at least every six to twelve months, to align with evolving laws and technologies.
  • Provide continuous education to employees on the nature of AI and the importance of adhering to compliance standards.
  • Conduct regular audits to monitor AI’s impact and accuracy in decision-making processes; a minimal audit-trail sketch follows this list.
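Regular audits are easier when every AI-assisted decision leaves a trail. Below is a minimal sketch of such a trail using a JSON-lines file; the file name, field names, and the log_ai_decision helper are illustrative assumptions rather than any established standard, and the input summary deliberately avoids raw personal data.

```python
# Minimal sketch: append-only audit trail for AI-assisted decisions.
# File name and record fields are hypothetical conventions.

import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_decision_audit.jsonl"  # hypothetical log location

def log_ai_decision(tool: str, tool_version: str, input_summary: str,
                    output: str, human_reviewer: str = "") -> None:
    """Record one AI-assisted decision for later audit."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "tool_version": tool_version,
        "input_summary": input_summary,    # summarize; avoid raw personal data
        "output": output,
        "human_reviewer": human_reviewer,  # empty means no human sign-off yet
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: log a screening decision with a named reviewer.
log_ai_decision(
    tool="resume-screener",
    tool_version="2024.1",
    input_summary="applicant 1042, role: staffing analyst",
    output="advanced to interview",
    human_reviewer="j.smith",
)
```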

As AI continues to evolve, employers will need to stay informed and proactive. By implementing written AI policies, businesses can harness AI’s potential responsibly and ethically without stepping on legal landmines along the way.

Workforce Solutions consultants can help you evaluate AI in your workplace so you can begin drafting policies! Contact us for more information or to schedule a consultation.
