Guiding principles when using AI for content creation
AI can be useful in many contexts: brainstorming ideas, capturing and outlining notes, testing and refining messages for articles and other posts, generating visual concepts and editing images, and more. However, this emerging technology also has pitfalls and requires a careful, thoughtful approach. Current AI tools are known to exhibit bias and to hallucinate (make up facts), and there are ethical concerns about the source data many of these tools are trained on.
Human-centered use: University Communications and Public Affairs is focused on protecting and promoting the university's brand by highlighting the transformative achievements of the university community. AI tools cannot replace or replicate the creativity, inclusivity and attention to detail that our communications community is so skilled at practicing. Therefore, our approach to integrating artificial intelligence is human-centered, and university communicators are encouraged to use generative AI to amplify and improve — rather than displace — human work.
Transparency: Transparency is important in building and maintaining trust with colleagues, reviewers, partners and clients. Communicators should be fully transparent with their teams and other colleagues whenever AI has been used to generate material so that there is adequate review for potential hallucinations, plagiarism, copyright issues or bias.
Ethical use: The datasets used to train many large language models may include copyrighted, incomplete or biased material. As communicators, we must be sensitive to these issues when integrating AI into our workflow, giving careful consideration to how AI-generated material may impact people with disabilities and members of our community who are marginalized. Communicators must also commit to ongoing learning about the latest AI developments and ethical best practices, as well as the different types of AI tools and the potential pitfalls of each.
Privacy and data protection: Data entered into an AI tool may become part of its training data, which the tool may then surface to other users outside the university. To protect the privacy of students, faculty and staff, avoid inputting sensitive personal information or restricted institutional data into AI systems, and ensure compliance with university policies and applicable data protection regulations. Communicators who have questions about the data they are working with should speak to their supervisors.
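One lightweight safeguard before pasting text into an external AI tool is a quick scrub of obvious personal identifiers. The sketch below is a hypothetical, minimal illustration of that practice — the patterns and placeholder labels are assumptions, not an exhaustive or compliant solution, and human review against your institution's data policies is still required.

```python
import re

# Illustrative patterns only -- real sensitive data takes many more forms
# (names, IDs, addresses) and requires human and policy review.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

For example, `redact("Reach me at jane.doe@example.edu")` returns `"Reach me at [EMAIL REDACTED]"`. A pass like this reduces, but does not eliminate, the risk of exposing personal data to a third-party tool.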
General communications guidance
In its current form, generative AI is most useful for refining and formatting text, generating code, preliminary editing, image manipulation and visual ideation. All output from generative AI requires close, expert review and thoughtful integration into a final product. In short, AI can give you a productivity boost, but it can't do your work for you.
What are today's generative AI tools good at?
- Brainstorming or "sketching"
- Condensing or summarizing text and large amounts of data
- Finding trends within a dataset such as analytics
- Capturing meeting notes and summarizing takeaways/action items
- Converting text to new formats, like PowerPoint
- "Chatting" to explore or test ideas
- Generating code
- Enhancing SEO
- Preliminary editing for style, grammar
- Drafting templates for job postings, interview questions, etc.
What are today's generative AI tools NOT good at?
- Fact checking
- Inclusivity and avoiding bias
- Creating some categories of public-facing content, especially when accurate information is critical
- Accurate translation
- ADA compliance
- Sensitive communications
- Final proofreading
Resources
Generative AI tools can augment the many resources in a communicator’s tool kit, and we encourage ongoing learning about the latest AI developments and best practices.
General
- (HubSpot)
AI Writing Assistant Tools
Updated Sept. 29, 2024