Module 1 · Chapter 1

AI Literacy for the University Student

Using AI tools without understanding their failure modes creates risks you will not see coming: fabricated citations in a paper, wrong information in a presentation, academic integrity violations you did not intend.

How LLMs actually work

Large language models — ChatGPT, Claude, Grok, Gemini — are trained on very large amounts of text. When you type a prompt, the model does not search a database for an answer. It generates a response token by token (roughly, word by word), predicting which continuation is statistically most likely given your prompt and the patterns in its training data.

The result is often fluent, well organized, and authoritative-sounding. Whether it is accurate is a separate question — and one the model itself cannot reliably answer.
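The mechanism described above can be sketched in miniature. The toy example below (plain Python; the three-sentence corpus is invented for illustration) picks each next word purely by how often it followed the previous word in its "training" text. Real models use billions of learned parameters rather than raw counts, but the core point holds: output is chosen for statistical plausibility, not checked for truth.

```python
from collections import Counter, defaultdict

# Toy "training data" -- a real model sees trillions of tokens, not three sentences.
corpus = (
    "the model predicts the next word . "
    "the model does not search for an answer . "
    "the model produces plausible text ."
).split()

# Build a bigram table: for each word, count which words followed it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower -- no lookup, no fact-check, just statistics."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "model" -- the most common follower in this toy corpus
```

Nothing in `predict_next` consults a source of facts; it only reports what usually comes next. Fluency and accuracy come apart for exactly this reason.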

The one thing to remember

Confidence and accuracy are unrelated. A model can produce a detailed, citation-heavy, completely fabricated answer with the same fluent confidence it uses when the answer is correct. This is not a bug that will be fixed in the next version. It is a structural feature of how these models work.

Check Your Understanding · 10 XP

Which statement is true about how LLMs generate responses?

The hallucination problem

The technical term for an AI generating false information with apparent confidence is hallucination. The model does not know it is generating false information. From its perspective, it is producing statistically plausible text.

In academic work, hallucination appears most dangerously in citations. An AI asked to find sources supporting an argument will often return what look like real citations — plausible author names, realistic journal titles, credible-sounding article titles, believable years — that simply do not exist.

This has caused real problems for real people. Attorneys have filed court briefs with AI-generated case citations that turned out to be fabricated, resulting in sanctions. Students have submitted papers with AI-generated references that did not exist, resulting in academic integrity violations. In both cases, the person did not intend to fabricate — they trusted the output without verifying it.

Verification Drill · 30 XP

An AI gave you these five citations. Three are real, two are likely fabricated. Flag the suspicious ones.

Read each citation. If something looks off — an implausible volume or page range, a journal that does not match the topic, a title that sounds AI-generated — flag it. Your only goal is to notice the patterns that signal fabrication.

Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Mendez, R. L., & Fairweather, J. P. (2019). Cognitive load and generative retrieval in second-year law students. Harvard Journal of Legal Pedagogy, 47(3), 201–248.
Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249–255.
Chen, M., Walker, S., & Okonkwo, T. (2022). Large language models and undergraduate writing outcomes: A meta-analysis of 147 institutional studies. Nature Education Reviews, 14, 1102–1147.
Bloom, B. S. (1984). The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4–16.
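One of the drill's red flags, an implausible page range, is simple enough to check mechanically. Below is a minimal sketch in plain Python; the regex and the 40-page cutoff are illustrative assumptions (long review articles legitimately exceed it), not a real fabrication detector. The only reliable check is finding the source itself.

```python
import re

# Find "start-end" page ranges (hyphen or en dash). The pattern and the
# 40-page cutoff are illustrative assumptions, not a real detection rule.
PAGE_RANGE = re.compile(r"(\d+)\s*[–-]\s*(\d+)")

def implausible_pages(citation, max_span=40):
    """Flag a citation whose final page range is backwards or suspiciously long."""
    matches = PAGE_RANGE.findall(citation)
    if not matches:
        return False  # no page range to judge
    start, end = map(int, matches[-1])
    return end < start or end - start > max_span

print(implausible_pages("Psychological Science, 17(3), 249–255."))              # False: 6 pages
print(implausible_pages("Harvard Journal of Legal Pedagogy, 47(3), 201–248."))  # True: 47 pages
```

A structural check like this only narrows the search; the decisive step is still looking the citation up in a library catalog or scholarly index.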

The major platforms

ChatGPT (OpenAI) — Most widely used general-purpose platform. Strong for studying, brainstorming, drafting, concept explanation. Free tier is capable for most academic tasks.

Claude (Anthropic) — Known for careful analysis and strong performance on longer documents. Useful for reviewing lengthy readings and complex texts.

Grok (xAI) — Real-time access via X integration. Useful for current events and recent research.

Gemini (Google) — Integrates with Google Workspace. Efficient for students already in the Google ecosystem.

The platform matters less than the habits you build around it. Use the tool your institution permits — and verify everything.

Your university's AI policy

Every institution has an academic integrity policy, and most now have specific AI guidance. Before you use any AI tool on any academic work, you need to know what your institution permits.

  1. Start with the syllabus. Most professors with strong views — in either direction — say so there.
  2. Then the policy page. Provost's office, dean of students, or a centralized AI policy page.
  3. When in doubt, ask. A two-minute email to your professor eliminates ambiguity. Proceeding without asking is not a defensible position if a problem arises.

Scenario · 15 XP

Your syllabus says: "AI tools may be used for brainstorming and outlining, but not for drafting. Any AI-assisted work must be disclosed." You use ChatGPT to generate a thesis statement that ends up almost verbatim in your final paper. You do not disclose it. Is this a violation?

The personal AI use framework

Beyond institutional policy, ask yourself four questions every time you reach for an AI tool:

1. Is this use permitted by my institution's policy and my professor's syllabus? If you do not know, find out before proceeding.

2. Does this use enhance my learning or replace it? Getting a concept explained more clearly enhances learning. Producing work you submit as your own, without engaging with the material, replaces it.

3. Would I be comfortable if my professor could see exactly what I did? Not whether you would get caught — whether the use is one you can defend as academically legitimate.

4. Am I building skills I will need, or outsourcing them? Skills compound: over four years, small choices about what you practice yourself add up.

The bottom line of Module 1

AI tools are powerful and can make you more capable. They are also unreliable in ways that do not announce themselves. Verification is not a suggestion. It is the basic obligation of anyone using these tools in academic or professional work.
