Content creator(s)
Nicole Hennig
Description
Explore guardrails, bias, and “hallucination.”
Resource type
Tutorial
Format
Website
Topic
Generative AI
Learning outcomes
1. Explain what “guardrails” are in AI systems and why they matter, so that you can recognize efforts to reduce biased or harmful outputs.
2. Identify signs of hallucination in AI responses, so that you can apply strategies to spot and test reliability.
3. Describe how adding web search to AI models changes their accuracy, so that you can explain why it reduces but does not eliminate hallucinations.
Target audience
Anyone
License
CC BY 4.0 University of Arizona Libraries
