With the rapid adoption of AI, and especially OpenAI's ChatGPT tools, companies are increasingly finding themselves inadvertently disclosing proprietary information. Employees are so enamored with ChatGPT that they engage with the free version, or even ChatGPT Plus, without enabling the available data privacy options. You can watch this video about ensuring data privacy on a corporate level.
Data Privacy & Protection Of Corporate IP Is Mission Critical
When using GPT models in the workplace, employees must understand the significance of the data they input into these systems. They should avoid entering personal details, such as their own or their clients' Social Security or credit card numbers, and the same caution applies to proprietary information. It is also crucial to remember that data entered into GPT models may be stored and processed in ways beyond the company's immediate oversight and could potentially be accessed by external entities. Companies therefore need training programs on data privacy, as well as systems for monitoring and auditing the use of GPT models. Employees should be required to report any suspected data privacy violations. Non-compliance with these guidelines can lead to severe repercussions, including disciplinary measures and potential legal proceedings. By being aware of and prioritizing data privacy issues, employees can use GPT models responsibly, protecting sensitive data and the company's reputation.
Third-party applications are being built on the OpenAI API, and their data privacy rules may or may not match OpenAI's. All of this leads to one conclusion: every company needs an AI usage policy for employees. This AI usage policy can go into your employee handbook or be developed as a separate employee policy.
Drafting an AI Usage and Ethics Policy Is Mission Critical For Every Company
When drafting an AI acceptable usage policy, you must ensure that no AI tools are used without the company's knowledge and permission. The company must ensure that all employees use paid versions of ChatGPT and select the correct data privacy options offered by OpenAI. Companies can also use an OpenAI API secret key to enhance data privacy, control how employee data is used, and monitor employee usage. Employees need to be trained on ChatGPT data security and privacy issues so they know how to navigate the various tools out there.
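To make that API-key approach concrete, here is a minimal sketch of how a company might route employee prompts through a single company-managed OpenAI API key, so that data handling, logging, and auditing are controlled centrally rather than left to individual ChatGPT accounts. The model name, the redaction patterns, and the log destination are illustrative assumptions, not requirements of any particular policy.

```python
# Minimal sketch (assumptions noted): employee prompts are routed through a
# company-managed OpenAI API key so usage is centrally controlled and logged.
# The model name, redaction patterns, and log file are illustrative only.
import re
import logging
from openai import OpenAI

logging.basicConfig(filename="ai_usage_audit.log", level=logging.INFO)

# Hypothetical screening: block prompts containing obvious SSN or
# payment-card-like patterns before they leave the company network.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # U.S. Social Security numbers
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # likely payment card numbers
]

def contains_sensitive_data(prompt: str) -> bool:
    return any(p.search(prompt) for p in SENSITIVE_PATTERNS)

def company_chat(prompt: str, employee_id: str) -> str:
    if contains_sensitive_data(prompt):
        logging.warning("Blocked prompt from %s: sensitive pattern found", employee_id)
        raise ValueError("Prompt appears to contain sensitive data; see the AI usage policy.")
    client = OpenAI()  # reads the company's secret key from the OPENAI_API_KEY environment variable
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name, for illustration only
        messages=[{"role": "user", "content": prompt}],
    )
    logging.info("Employee %s prompt processed and logged for audit", employee_id)
    return response.choices[0].message.content
```

Centralizing calls this way means the company, not the individual employee, decides which account settings and data-handling terms apply, and it produces an audit trail that supports the monitoring obligations discussed below.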
What Are The Essential Items for Your Company’s AI Acceptable Use Policy?
If your company's employees are using AI tools but not developing them, the use policy can be streamlined to focus on usage, ethical considerations, data handling, and security. As innovative lawyers with expertise in technology company representation and IP law, Traverse Legal understands the importance of a well-defined Acceptable Use Policy (AUP) for employee AI usage within an organization. We can help your company develop and implement an AUP for AI use.
What is an AUP for AI?
An AUP is a set of guidelines and rules, agreed between an employer and its employees, that outlines how an organization's technology resources may be used. When it comes to AI usage, several critical areas need to be addressed in any AUP. Each company is different, and each AUP needs to address the company's specific risk tolerance, implementation resources, and proprietary information.
Acceptable Use Policy (AUP) for Employee AI Usage
Here are some essential items that every AUP for AI should address:
- Preamble
  - The objective of the policy for AI tool application
  - Policy's jurisdiction (applicable parties and relevant technologies)
- Terminology
  - Explanation of significant terms related to AI and proprietary data
- Principles of AI Application
  - Impartiality: AI tools should not introduce or amplify unjust bias
  - Clarity: AI applications should be clear and understandable
  - Confidentiality and safety: AI must uphold privacy and safeguard data
  - Responsibility: AI tool users should be answerable for their use of AI
- Confidential Data
  - Interpretation of confidential data in the company's context
  - Interaction between AI tools and confidential data
  - Strategies to safeguard confidential data during AI tool application
- Data Governance
  - Data acquisition: Guidelines on data that AI tools can gather, the method of collection, and the authorized collectors
  - Data preservation: Guidelines on data storage locations and methods
  - Data application: Guidelines on data utilization by AI tools, including usage restrictions
  - Data dissemination: Guidelines on data sharing, both within and outside the organization
  - Data removal: Guidelines on data deletion timings and methods
- AI Tool Application and Data Confidentiality
  - Authorized AI Tools: Only AI tools sanctioned by the company's IT division may be used. The IT division will keep an updated list of sanctioned AI tools, and employees are prohibited from using tools not on this list for company-related activities.
  - Tool Approval Process: Employees must submit a request to the IT division if they believe a new AI tool could be advantageous. The IT division will assess the tool for safety, privacy, and compliance before approving or rejecting the request.
  - Data Accessibility: AI tools should only have access to the data required to perform their tasks. Employees must not grant AI tools access to excess data.
  - Data Confidentiality: AI tools must adhere to the company's privacy policy, including respecting personal data privacy, confidential data, and proprietary information. Employees must ensure that any AI tool they use manages data in a manner consistent with this policy.
  - Data Safety: AI tools must implement sufficient security measures to protect data from unauthorized access, modification, or deletion, including data encryption, access control, and regular security updates.
  - Education: Employees must be trained to use AI tools in a way that respects data privacy and security, including understanding the data AI tools can access, how to restrict that access, and how to identify and respond to potential data breaches.
  - Supervision and Auditing: The company will regularly supervise and audit the use of AI tools to ensure policy compliance, verifying that only approved tools are being used, that they are being used correctly, and that they are not accessing or storing data inappropriately (a simple sketch of this check follows the list below).
  - Incident Reporting: Employees must immediately report any suspected policy violations or issues related to AI tool usage and data privacy to the IT department.
  - Non-Compliance Penalties: Non-compliance with this policy may lead to disciplinary action, up to and including termination. In some cases, legal action may also be pursued.
- Education and Awareness
  - Requirements for educating staff on AI tool application and this policy
  - Strategies to increase awareness of AI risks and responsibilities
- Incident Management
  - Procedure to report policy violations or other AI-related issues
  - Company's response to incidents, including potential disciplinary actions
- Policy Revision and Updates
  - Frequency of policy review and updates
  - Party responsible for policy upkeep
- Compliance and Consequences
  - Repercussions for policy non-compliance
  - Compliance monitoring methods
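As a purely illustrative companion to the "Authorized AI Tools" and "Supervision and Auditing" items above, the sketch below shows one way an IT division might encode its sanctioned-tools list as an allowlist and flag unapproved usage during an audit. The tool names and the usage-record format are hypothetical assumptions, not part of any specific company's policy.

```python
# Minimal sketch (hypothetical tool names and record format): encode the IT
# division's sanctioned AI tools as an allowlist and flag unapproved usage.
from dataclasses import dataclass

SANCTIONED_TOOLS = {"chatgpt-enterprise", "internal-copilot"}  # hypothetical allowlist

@dataclass
class UsageRecord:
    employee_id: str
    tool_name: str

def audit(records: list[UsageRecord]) -> list[UsageRecord]:
    """Return the usage records that involve tools not on the sanctioned list."""
    return [r for r in records if r.tool_name not in SANCTIONED_TOOLS]

# Example audit run over hypothetical usage records.
violations = audit([
    UsageRecord("e101", "chatgpt-enterprise"),     # sanctioned -> passes
    UsageRecord("e102", "random-browser-plugin"),  # unsanctioned -> flagged
])
for v in violations:
    print(f"Policy flag: {v.employee_id} used unapproved tool {v.tool_name}")
```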
This policy framework emphasizes the user side of AI, highlighting ethical application, data governance, and security. It is still crucial to consult with various stakeholders and, where appropriate, legal counsel to ensure the policy is comprehensive and compliant with all relevant laws and regulations.