Data & intellectual property protection and security with respect to ChatGPT, Bard AI and other tools: what's your company policy?

  • Samsung banned its employees from using generative AI tools like ChatGPT after an engineer accidentally leaked sensitive internal source code.
  • In January 2023, Amazon warned employees against sharing confidential information with ChatGPT after noticing responses that closely matched its existing internal material.
  • Major banks, including Bank of America, Deutsche Bank and JPMorgan Chase, have also restricted the use of such tools.
  • Ikea’s global vice president for digital ethics and responsible AI has warned of the risks that could arise from using tools like ChatGPT. Ikea has been working on its own language model, which can be better controlled.


Why is it bad to share confidential information with ChatGPT, Bard AI and other such tools?

While this article focuses on ChatGPT, as it is the tool of most immediate concern, the policy applies to the processing, storing and sharing of any confidential information across all mediums.

  • The software/service can be breached, causing your sensitive information to leak.
  • Confidential information you share can influence ChatGPT’s responses, which means the model can reveal that information. Other users may be able to extract details of the sensitive information by asking questions that ChatGPT answers based on the confidential data it was given.
  • Using confidential information without authorization, or for unauthorized purposes, can violate laws, breach the trust of the individual or organization that provided it, and expose you to financial and criminal liability.

On the other side of the story, many companies have embraced these tools.


What can companies do to better handle employee usage of tools like ChatGPT?

It’s important first to have a usage policy. Most companies already have a general confidential-information policy that employees are aware of, but AI tools create a new temptation to break it. When testing a tool’s capabilities, real data from one’s own work depicts real-world cases far better than junk test data that might not give insightful results, so it is far more enticing to use. An employee might, for example, want to check whether protected, confidential source code can be made more efficient. The fact that there is no actual human reading or processing the input also tends to silence the usual alarm bells in one’s mind, even though it should remain a concern. These are the steps we recommend.

  1. Frame a policy that clearly explains what confidential data is (e.g. PII, financial data, source code, or any other non-public information belonging to the employer, its clients, its suppliers or end users) and how it is to be kept secure.
  2. Ensure employees have signed a Non-Disclosure Agreement covering confidential information.
  3. Frame a policy covering the usage of such APIs/tools. It should clearly state that storing, processing, sharing or disclosing confidential information through them is not permitted.
  4. In cases where there is a genuine need to process confidential information, require proper approval from security and management beforehand.
  5. Additionally, provide proper guidance and training on safe use. This may include masking or removing PII so that only the statistical data that actually needs to be analyzed is shared (see the sketch after this list).
  6. Even if you have existing policies and frameworks, now is a good time to revise them to explicitly include ChatGPT and other AI tools. For the reasons explained above, people across many organisations have already made the mistake of sharing confidential data without approval. It’s also important to communicate the policy effectively to your employees, for example in an email that specifically names such AI tools.
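
As an illustration of point 5, here is a minimal sketch in Python of masking PII before a prompt ever reaches an external AI tool. The regex patterns and the mask_pii helper below are illustrative assumptions rather than any tool’s actual API; real-world redaction needs far more robust detection, since names, addresses and IDs rarely follow simple patterns.

    import re

    # Illustrative patterns only; production PII detection needs much
    # more than regexes (named-entity recognition, ID formats, etc.).
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def mask_pii(text):
        """Replace anything matching a known PII pattern with a placeholder tag."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    prompt = "Customer john.doe@example.com (+44 20 7946 0958) reported a billing error."
    print(mask_pii(prompt))
    # Customer [EMAIL REDACTED] ([PHONE REDACTED]) reported a billing error.

The masked prompt retains everything the AI tool needs to help with the task (a billing error was reported) while stripping out who reported it, which is usually irrelevant to the question being asked.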


What has sapnagroup done about this?

Sapnagroup has a handbook, shared with all employees, covering a range of policies, guidelines and processes, and it was recently updated to include ChatGPT and other AI tools. The change was communicated effectively to all employees, and all employees also sign an NDA.

References:

https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak/amp/ 
https://aibusiness.com/nlp/waicf-23-chatgpt-needs-bias-bounties-