Introduction
With the rise of ChatGPT, Google Bard, and other artificial intelligence (AI)-driven platforms, there's growing enthusiasm within our community to leverage these tools and integrate them into the university context. The following advisory provides guidance on how to use these tools safely, without putting institutional, personal, or proprietary information at risk. Additional guidance may be forthcoming as circumstances evolve.
UC ANR recognizes the potential for AI to perpetuate biases and inequalities if not implemented and monitored carefully. Therefore, all AI used at UC ANR must undergo thorough scrutiny throughout its deployment and usage.
Implementing AI
To implement a new AI tool or process, please review the following guidelines and the UC Responsible Use of Artificial Intelligence Report, then contact HR and IT for guidance:
- HR contact: Bethanie Brown at brbbrown@ucanr.edu
- IT contact: Jaki Hsieh Wojan at jhsiehw@ucanr.edu
HR and IT will ask you to fill out an AI Project Request Form to gain a better understanding of your use case. They will conduct a thorough assessment of the labor relations, privacy, and cybersecurity implications. They will also provide recommendations regarding the appropriateness of these tools.
Training: The UC AI Primer: Core Concepts and Fundamentals is a comprehensive introduction to the world of artificial intelligence tailored for non-technical audiences. This training is designed to demystify AI and equip learners with a foundational understanding of its key concepts and implications. We recommend that all employees interested in using AI tools at UC ANR take this course.
Prohibited Use
- Don't use AI tools in situations where they would affect an employee's personal information, health and safety, or conditions of employment, unless otherwise specified by policy or law.
- Unless you have approval, don't use AI tools for any information classified as Protection Level P2, P3, or P4, such as confidential HR data or unpublished research data.
- Unless you have approval, don't use AI tools to generate non-public output. Examples include, but are not limited to, proprietary or unpublished research; legal analysis or advice; recruitment, personnel, or disciplinary decision-making; and creation of non-public instructional materials.
Precautions
- Ethics: It is imperative to prioritize the well-being and rights of employees by ensuring human oversight and accountability in high-stakes personnel decisions. In addition to ethical considerations, there are processes and procedures (such as union notifications) that must be completed before leveraging AI that would impact conditions of employment.
- Hallucinations: Watch out for "hallucinations" — moments when the AI generates incorrect or misleading content. Ensure all facts and figures generated by these tools are independently verified through non-AI sources before use. In other words, don't simply copy and paste what is produced into your work.
- Bias: Large Language Models (LLMs) like ChatGPT may have been trained on incomplete or biased data. Be careful not to use LLM output in a way that amplifies these biases. For instance, review LLM-drafted wording in public job postings, as biased wording may discourage certain groups from applying, potentially reducing the diversity of the applicant pool.
- Scams: Be wary of fake websites attempting to masquerade as popular AI apps. For more information, see How to Tell ChatGPT Scams Apart From the Real Thing (Wayback Machine archive of external website).
Potential Use Cases
Publicly available information (Protection Level P1) can be used freely in AI tools such as ChatGPT. Remember to check its work before publishing it! In all cases, use should be consistent with the UC ANR Principles of Community.
Some areas where AI may be useful include:
- Promotional Materials: Use AI to create and edit images and voiceover tracks for videos intended for public use.
- Website and Communications Content: AI can edit text for clarity and grammar, as well as suggest optimal layouts, headlines and meta descriptions.
- Programming and Web Development: LLMs can draft code for common programming tasks in research, accelerating the development process.
  - Please note that this is suitable for public research tasks, not internal infrastructure code or sensitive research data.
  - In addition, Claude Code does not fit this use case because it is an agentic AI tool with greater access to your computing environment. For these sensitive use cases, please contact HR and IT as described above.
- Job Descriptions and Postings: Use templates to suggest customized language for position overviews, key responsibilities and qualifications. Review the language to ensure it is free from bias.
- Training and Onboarding: LLMs can develop staff training materials and automate responses to common questions during training sessions.
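As an illustration of the "Programming and Web Development" use case above, here is a minimal sketch of the kind of routine code an LLM might draft for a public (P1) research task. The function and field names are hypothetical examples, not part of any UC ANR system, and per the "Hallucinations" precaution, any AI-drafted code should be reviewed and tested before use.

```python
# Hypothetical example of LLM-drafted code for a public research task:
# summarizing plot-level yield measurements. Names are illustrative only.
# Always review and test AI-generated code before using it in your work.

from statistics import mean, stdev

def summarize_yields(yields_by_plot):
    """Return the mean and standard deviation of measurements per plot.

    yields_by_plot: dict mapping plot name -> list of numeric measurements.
    """
    summary = {}
    for plot, values in yields_by_plot.items():
        if len(values) < 2:
            # stdev requires at least two data points; report 0.0 otherwise
            plot_mean = float(values[0]) if values else 0.0
            summary[plot] = {"mean": plot_mean, "stdev": 0.0}
        else:
            summary[plot] = {"mean": mean(values), "stdev": stdev(values)}
    return summary

data = {"north": [2.1, 2.4, 2.0], "south": [1.8, 1.9, 2.2]}
print(summarize_yields(data))
```

Even for simple utilities like this, verify edge cases (here, empty or single-measurement plots) yourself rather than trusting the model's output.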