Friday, July 7, 2023
Depending on who you talk to, you get some pretty extreme perspectives on generative AI. In a former life, I used to have oversight of the lobbying and PAC money for a multinational company. As we all know, companies never ask to be regulated. So when an industry begs for regulation, you know something is up.
Two weeks ago, I presented the keynote speech to the alumni of AESE, Portugal’s oldest business school, on the topic of my research on business, human rights, and technology with a special focus on AI. If you're attending Connecting the Threads in October, you'll hear some of what I discussed.
I may have overprepared, but given the C-Suite audience, that’s better than the alternative. For me that meant spending almost 100 hours reading books, articles, and white papers, and watching videos by data scientists, lawyers, ethicists, government officials, CEOs, and software engineers.
Because I wanted the audience to really think about their role in our future, I spent quite a bit of time on the doom and gloom scenarios, which the Portuguese press highlighted. I cited the talk by the creators of The Social Dilemma, who warned about the dangers of social media algorithms and who are now raising the alarm about AI's potential existential threat to humanity in a talk called The AI Dilemma.
I used statistics from the World Economic Forum's Future of Jobs Report on potential job displacement and from Yale's Jeffrey Sonnenfeld on what CEOs think and are planning for. Of the 119 CEOs surveyed from companies like Walmart, Coca-Cola, Xerox, and Zoom, 34% said AI could potentially destroy humanity within ten years, 8% said it could happen within five years, and 58% said it could never happen and that they are “not worried.” Separately, 42% said the doom and gloom is overstated, while 58% said it is not. I also told the audience about deepfakes and how AI can now clone someone's voice from just three seconds of audio.
But in reality, there's also a lot of hope. For the past two days I've been up at zero dark thirty to watch the live stream of the AI For Good Global Summit in Geneva. The recordings are available on YouTube. While there was a decidedly more upbeat tone from these presenters, there was still some tamping down of the enthusiasm.
Fun random facts? People have been using algorithms to make music since the 1960s. While many are worried about the intellectual property implications of AI for the arts, AI use was celebrated at the summit. Half of humanity's working satellites belong to Elon Musk. And a task force of 120 organizations is bringing the hammer down on illegal deforestation in Brazil using geospatial AI; they've already netted 2 billion in penalties.
For additional perspective, two of the first guests on my new podcast are lawyer and mediator Mitch Jackson, an AI enthusiast, and tech veteran Stephanie Sylvestre, who has been working with OpenAI for years and developed her own AI product, Avatar Buddy, somehow managing to garner one million dollars' worth of free services for her startup. Links to their episodes are here (and don't forget to subscribe to the podcast).
If you’re in business or advising business, could you answer the following questions I asked the audience of executives and government officials in Portugal?
- How are you integrating human rights considerations into your company's strategy and decision-making processes, particularly concerning the deployment and use of new technologies?
- Can you describe how your company's corporate governance structure accounts for human rights and ethical considerations, particularly with regards to the use and impact of emerging technologies?
- How are you planning to navigate the tension between increasing automation in your business operations and the potential for job displacement among your workforce?
- How does your company approach balancing the need for innovation and competitive advantage with the potential societal and human rights impact of technologies like facial recognition and surveillance?
- In what ways is your company actively taking steps to ensure that your supply chain, especially for tech components, is free from forced labor or other human rights abuses?
- As data becomes more valuable, how is your company ensuring ethical data collection and usage practices? Are these practices in line with both domestic and international human rights and privacy standards?
- What steps are you taking to ensure digital accessibility and inclusivity, thereby avoiding the risk of creating or enhancing digital divides?
- How is your company taking into account the potential environmental impacts of your technology, including e-waste and energy consumption, and what steps are being taken to mitigate these risks while promoting sustainable development?
- What financial incentives do you have in place to do the “right thing” even if it’s much less profitable? What penalties do you have in place for the “wrong” behavior?
- Will governments come together to regulate, or will the fate of humanity lie in the hands of a few large companies?
Luckily, we had cocktails right after I asked those questions.
Are you using generative AI like ChatGPT-4 or another tool in your business or practice? If you teach, are you integrating it into the classroom? I'd love to hear your thoughts.