Friday, October 13, 2023

What Business Lawyers Need to Ask Their Clients About Generative AI Usage

Last week I had the pleasure of joining my fellow bloggers at the UT Connecting the Threads Conference to discuss the legal issues related to generative AI (GAI) that lawyers need to understand for their clients and their own law practices. Here are some of the questions I posed to the audience, along with some recommendations for clients. I'll write about ethical issues for lawyers in a separate post. In the meantime, if you're using OpenAI's tools or any other GAI, I strongly recommend that you read the terms of use. You may be surprised by certain clauses, including the indemnification provisions. 

I started by asking the audience members to consider which legal areas are most affected by GAI. Although there are many, I'll focus on data privacy and employment law in this post.

Data Privacy and Cybersecurity

Are the AI tools and technologies you use compliant with relevant data protection and privacy regulations, such as GDPR and CCPA? Are they leaving you open to a cyberattack?

This topic also came up today at a conference at NCCU when I served as a panelist on cybersecurity preparedness for lawyers.

Why is this important?

ChatGPT was banned in Italy for a time over concerns about violations of the GDPR. The Polish government is investigating OpenAI over privacy issues. And there are at least two class action lawsuits in California naming Microsoft and OpenAI. Just yesterday, a US government agency halted the use of GAI due to data security risks. 

It’s also much easier for bad actors to commit cybercrime because of the amount of personal data they can scrape and analyze and because deepfake technology can impersonate images and voices in a matter of seconds. The NSA and FBI have warned about misinformation and cyberthreats enabled by the technology. On a positive note, some are using GAI to fight cybercrime.

Surveillance and facial recognition technology can violate privacy and human rights. Governments have used surveillance technology to crack down on and round up dissidents, protestors, and human rights defenders for years. Now better AI tools make that easier. And if you haven't heard some of the cautions about Clearview AI and the misidentification of citizens, you should read this article. A new book claims that this company could "end privacy as we know it."

What should (you and) your clients do?

  • Ensure algorithms minimize collection and processing of personal data and build in confidentiality safeguards to comply with privacy laws
  • Revise privacy and terms of use policies on websites to account for GAI
  • Build in transparency for individuals to control how data is collected and used
  • Turn on privacy settings in all AI tools and don’t allow your data to be used for training the large language models
  • Turn off chat history in settings on all devices
  • Block unvetted browser add-ons that can capture or share your data
  • Check outside counsel guidelines for AI restrictions (or draft them for your clients)
  • Work with your IT provider or web administrator to make sure your and your clients’ data is not being scraped for training
  • Use synthetic data sets instead of actual personally identifiable information (see the sketch after this list)
  • Ensure that you have a Generative AI Security Policy
  • Check vendor contracts for AI usage
  • Enhance cybersecurity training
  • Conduct a tabletop exercise and make sure that you have an incident response plan in place
  • Check cyberinsurance policies for AI clauses/exclusions
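To make the synthetic-data recommendation above concrete, here is a minimal sketch of what that can look like in practice. It uses the open-source Faker library (my choice for illustration; any comparable synthetic-data generator works), and the field names are illustrative, not any particular company's schema.

```python
# Minimal sketch: generate synthetic records with the Faker library
# instead of exporting real customer PII for testing or model training.
# Assumes `pip install faker`; field names are illustrative only.
from faker import Faker

fake = Faker()
Faker.seed(42)  # reproducible output for test fixtures

def synthetic_customer() -> dict:
    """Return a fake customer record containing no real PII."""
    return {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
        "ssn": fake.ssn(),          # synthetic, not a real SSN
        "phone": fake.phone_number(),
    }

if __name__ == "__main__":
    # Build a small synthetic data set in place of a production export.
    dataset = [synthetic_customer() for _ in range(100)]
    print(dataset[0])
```

The point for counsel is that a test or training pipeline built this way never touches real personal data, which simplifies the privacy analysis considerably.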

What about the employment law implications?

According to a Society for Human Resources Management member survey about AI usage:

  • 79% use AI for recruiting and hiring
  • 41% use AI for learning and development
  • 38% use AI for performance management
  • 18% use AI for productivity monitoring
  • 8% use AI for succession planning
  • 4% use AI for promotion decisions

AI algorithms, including facial recognition systems, can also exhibit significant bias based on skin color. The National Institute of Standards and Technology (NIST) released research showing that "not just dark African-American faces, but also Asian faces were up to 100 times more likely to be failed by these systems than the faces of white individuals.”

Then there’s the question of whether recruiters and hiring managers should use AI to read emotions during an interview. The EU says absolutely not.

The Equal Employment Opportunity Commission has taken notice. In a panel discussion, Commissioner Keith Sonderling explained, “Carefully designed and properly used, AI has potential to enhance diversity, inclusion, and accessibility in the workplace by mitigating the risk of unlawful discrimination. Poorly designed and carelessly implemented, AI can discriminate on a scale and magnitude greater than any individual HR professional.” The EEOC also recently settled the first-of-its-kind AI bias case for $365,000.

What to do 

  • Use AI screening tools that disregard name, sex, age, national origin, and similar protected characteristics (a minimal sketch follows this list)
  • Use bots for interviews to reduce bias against candidates with accents
  • Check local laws, such as New York City's guidance for employers on automated employment decision tools
  • Be careful about training large language models on current workforce data because that can perpetuate existing bias
  • Review the EEOC Resource on AI
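For the first bullet above, here is a minimal, hypothetical sketch of pre-screening redaction: stripping protected attributes from a candidate record before it ever reaches an AI screening tool. The field names are my assumptions, not any vendor's actual schema, and note that redaction alone does not remove proxies for protected traits (a zip code or graduation year can still encode national origin or age).

```python
# Minimal sketch: remove protected attributes from a candidate record
# before it reaches an AI screening tool. Field names are hypothetical;
# adapt them to your applicant-tracking system's actual schema.
PROTECTED_FIELDS = {"name", "sex", "age", "date_of_birth",
                    "national_origin", "photo"}

def redact(candidate: dict) -> dict:
    """Return a copy of the record with protected attributes removed."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

candidate = {
    "name": "A. Example",
    "age": 52,
    "national_origin": "example-entry",
    "years_experience": 12,
    "skills": ["contracts", "compliance"],
}
print(redact(candidate))
# {'years_experience': 12, 'skills': ['contracts', 'compliance']}
```

Even with a filter like this in place, employers should still audit the screening model's outcomes, since bias can persist through correlated fields.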

Questions to Ask Your Clients:

• How are you integrating human rights considerations into your company's strategy and decision-making processes, particularly concerning the deployment and use of new technologies?

• Can you describe how your company's corporate governance structure accounts for human rights and ethical considerations, particularly with regards to the use and impact of emerging technologies?

• How does your company approach balancing the need for innovation and competitive advantage with the potential societal and human rights impact of technologies like facial recognition and surveillance?

• As data becomes more valuable, how is your company ensuring ethical data collection and usage practices?

• Are these practices in line with both domestic and international human rights and privacy standards?

• How is your organization addressing the potential for algorithmic bias in your technology, which can perpetuate and exacerbate systemic inequalities?

• What steps are you taking to ensure digital accessibility and inclusivity, thereby avoiding the risk of creating or enhancing digital divides?

• How is your company taking into account the potential environmental impacts of your technology, including e-waste and energy consumption, and what steps are being taken to mitigate these risks while promoting sustainable development?

• Are you at risk of a false advertising or unfair/deceptive trade practices act claim from the FTC or other regulatory body due to your use of AI?

Whether or not you're an AI expert or currently use GAI in your practice, it's time to raise these issues with your clients. Future posts will address other legal issues and the ethical implications of using AI in legal practice. 

https://lawprofessors.typepad.com/business_law/2023/10/what-business-lawyers-needs-to-ask-their-clients-about-generative-ai-usage.html

Compliance, Corporate Governance, Corporations, CSR, Current Affairs, Employment Law, Ethics, Human Rights, Law Firms, Lawyering, Legislation, Marcia Narine Weldon | Permalink
