Generative AI offers companies the chance to make their businesses more efficient and productive. It also carries a host of risks.
As a result, some companies have established rules for the use of generative AI in the workplace—hoping to make the most of the new tools while mitigating some of the risks. Here’s a look at some of those rules.
Do be wary of bias
Companies should be aware that generative-AI models trained on publicly available data sets and information could reflect demographic biases in that material.
Such biases aren’t always obvious. But exercising human judgment on AI-generated material could help alleviate some of the risk, says Jason Schloetzer, an associate professor who teaches corporate governance at Georgetown University’s McDonough School of Business. Companies can implement a process in which both individual employees and dedicated review teams examine anything produced with the help of generative AI to ensure that the company can stand by the material, he says.
Don’t share sensitive business information with public programs
Most generative-AI programs store the conversations and prompts entered into them and use that data to train their models, raising the risk that the information could resurface in response to someone else’s prompt.
Examples of information that could be exposed this way include computer code, customer information, transcripts of company meetings or email exchanges, and company financial or sales data.
“Making sure you’re being very clear about what you’re putting into the system” is a good rule of thumb, says Alicia Arnold, a managing director at consulting firm Fifty Five who specializes in technology initiatives for companies. Some companies have restricted the use of public generative-AI programs, or banned them for most employees, out of concern that sensitive proprietary information could be leaked.
Do be picky about which AI program you use
There are safer alternatives to public AI programs. When possible, companies should use so-called enterprise-grade models—typically paid business subscriptions that offer more security for company data—rather than public versions, says Arun Chandrasekaran, an analyst specializing in AI at consulting firm Gartner. Programs such as ChatGPT Enterprise and Microsoft’s Bing Chat Enterprise promise to keep companies’ data private.
When choosing a generative-AI program, companies should be sure they fully understand how data input into the programs will be stored and who will have access to that data, Chandrasekaran says.
Don’t fully trust AI results to be accurate
Be wary of generative-AI “hallucinations,” experts say. Hallucinations are AI-generated responses that contain false or inaccurate information, which could mislead anyone who sees them, inside the company or outside it. “Misinformation can erode trust,” Chandrasekaran warns, and hallucinations can sometimes look legitimate enough to go undetected. Checking where information comes from before using AI-generated content can help reduce inaccuracies.
Or, to go a step further, Chandrasekaran says companies can negotiate their own contracts with generative-AI vendors to train the AI only on a database provided by the company, so that no potentially inaccurate outside information is introduced.
Some companies have also turned to developing their own proprietary programs to allow employees to use generative AI more safely. Schloetzer says most of the companies he has consulted with that are encouraging the use of generative AI in the workplace have developed, or are developing, proprietary programs to safeguard data.
Sathish Muthukrishnan, chief information, data and digital officer at financial-services company Ally Financial, says that Ally.ai, the firm’s internal AI platform, serves as a bridge between company information and external AI programs. He describes it as more of an assistive technology that always has “a human in the middle.”
Don’t use AI-generated content without disclosure
Disclosing when AI has been used to generate content is crucial to maintaining transparency and is especially important for client-facing employees, Schloetzer says. Because of the liabilities that could come with inaccurate or inappropriate AI-generated content, clients should know upfront that material they are receiving or using—in an ad campaign, for example—was partly generated by AI.
Do be wary of copyright infringement
Some AI-generated content could be based on copyrighted work by an artist or writer, which could raise legal issues.
Early court rulings have dismissed some copyright-infringement claims, and the U.S. Copyright Office has repeatedly rejected copyright claims for AI-generated art. But multiple lawsuits by authors have yet to be decided, including a class-action suit against OpenAI by a group of authors alleging copyright infringement that is still moving through a San Francisco district court. (OpenAI can’t comment on pending litigation, a spokesperson said.)
The Copyright Office has been conducting an AI study since August, and the Federal Trade Commission has expressed interest in the issue, posting a comment on the study that cited copyright concerns around programs “scraping work from public websites without consent.” But there aren’t yet any clear rules concerning AI’s use of copyrighted material.
Until such rules are in place, some experts suggest that companies’ legal teams be proactive in screening any AI-generated material for possible copyright issues. Companies should also consider seeking legal indemnity from providers of generative-AI programs, experts say. Google recently joined Microsoft and Adobe in offering legal-indemnification plans for generative-AI products.
Lindsey Choo is a writer in Southern California. She can be reached at reports@wsj.com.