Generative AI has made “chat logs” a new kind of routine exhibit. But in court, AI chats are still just evidence—meaning they rise or fall on the same familiar pillars:
relevance, authentication, hearsay, expert reliability, and rule-based exclusions.
Practical takeaway: When someone tries to introduce a “ChatGPT screenshot,” the fight is rarely about the screenshot itself. It’s about (1) whether the record is authentic and complete,
(2) whose “statement” is being offered, and (3) whether the proponent is sneaking in “expert-like” opinions without satisfying Rule 702.
1) Start by Classifying the AI Evidence
Most “AI evidence” in litigation falls into one of four buckets. The bucket often determines which evidentiary rules do the heavy lifting:
- AI-generated content (e.g., the chatbot’s output offered for its truth).
- AI-assisted human content (e.g., a draft contract or memo created with AI assistance, later adopted/edited by a human).
- AI as part of a business system (e.g., an internal “SafetyBot” that creates tickets, logs, and audit trails as part of routine operations).
- AI-altered or “deepfake” media (e.g., audio/video/image altered by AI, which intensifies authentication and prejudice concerns).
If you treat all four categories the same, you’ll miss the best arguments—especially on authentication and Rule 702 reliability.
2) A Federal Rule Checklist for Chatbot Chats
In federal court, AI chat evidence almost always triggers a predictable checklist:
- Relevance: Rules 401 & 402 (and then Rule 403 balancing).
- Authentication: Rule 901 (and, increasingly, Rule 902(13)–(14) certifications for electronic evidence).
- Hearsay: Rules 801–803 (including “party-opponent” admissions, business records, etc.).
- Expert gatekeeping: Rule 702 if the chatbot output functions like technical or specialized opinion.
- Completeness / context: Rule 106 to prevent cherry-picked snippets.
- Best Evidence: Rules 1001–1003 (especially where a screenshot is offered instead of the native export).
- Special exclusions when relevant: e.g., Rule 407 (subsequent remedial measures) in negligence cases.
3) Authentication: The “Is This What You Say It Is?” Foundation
Authentication is usually the first real battleground with AI chats. Under Rule 901, the proponent must offer enough evidence for a reasonable factfinder to conclude
the exhibit is what the proponent claims it is (e.g., a true and accurate export of a particular user’s chat, from a specific account, on a particular date).
Common ways to authenticate a chat transcript
- Witness-with-knowledge testimony: a user, custodian, or investigator explains how the chat was created and preserved.
- Distinctive characteristics: account identifiers, timestamps, conversation continuity, internal references, and corroborating surrounding evidence.
- Process/system evidence: a qualified witness explains the export process and how the system reliably produces accurate logs.
- Rule 902(13)–(14) certifications: for certain electronic records, a certification can substitute for live testimony (with proper notice).
Litigator tip: A bare screenshot is often the weakest form of proof. If you can obtain a native export plus audit logs (who accessed, edited, exported, and when),
you dramatically improve your authentication posture—and reduce the risk of a “this was altered” ambush.
Courts have long been skeptical of “internet printouts” offered without meaningful foundation—especially when authorship or control is disputed. AI chat logs can face the same
vulnerability if the proponent can’t tie the chat to the right account and user with reliable metadata and testimony.
4) Hearsay: Whose “Statement” Is an AI Output?
Here is the core hearsay puzzle with chatbots: the rules define hearsay around a person’s assertion and a human declarant.
That means the human user’s prompt is usually easy to categorize (it’s a person’s statement), while the chatbot’s output is harder—because it may be treated as machine output,
or as an “expert-like” opinion, or as a statement adopted by a party.
Practical framing options
- Non-hearsay use (often strongest): offer the AI output to show notice, knowledge, state of mind, or effect on the listener (e.g., “the manager was warned and then decided to repair”), rather than to prove the output was true.
- Party admissions: if a party’s employee typed the prompt, forwarded the output, or adopted it as accurate, portions of the exchange may qualify as admissions (or adoptive admissions).
- Business records: if the organization routinely keeps AI-chat logs as part of its operations (e.g., ticketing, compliance, incident reporting), the recordkeeping layer may fit a business-records theory—but you still must confront reliability and the purpose for which the output is offered.
- Rule 702 (expert-like evidence): if you’re offering the chatbot’s output as technical truth (“the stairs were structurally unsafe”), a judge may treat it as expert territory and require Rule 702 reliability.
Translation: the more the proponent uses AI output like an authoritative conclusion, the more likely the court is to demand robust foundation, context, and—sometimes—expert testimony.
5) Rule 702: When AI Output “Acts Like” Expert Testimony
If the AI output is being offered as specialized knowledge (medicine, engineering, finance, safety standards), it can collide with Rule 702.
That rule requires the proponent to show that the opinion is helpful to the factfinder, rests on sufficient facts or data, and reflects a reliable application of reliable principles and methods.
This is especially important because generative AI can produce fluent answers that are incomplete, wrong, or untraceable (“hallucinations”).
As a result, some judges and rulemakers have been actively debating whether to create a dedicated rule for AI-generated evidence that functions like expert testimony.
Trend watch: The federal evidence rules committee has publicly explored proposals addressing AI risks (including deepfakes and AI-generated “expert-like” outputs).
Even without a new rule, practitioners should expect tougher fights where AI output is used as substantive proof rather than context.
6) Best Evidence & Completeness: Screenshots Invite Rule 1002 / 106 Arguments
If the content of a chat is central, be ready for best-evidence and completeness disputes:
- Rule 106 (completeness): if one side offers a snippet, the other may demand surrounding context to avoid misleading impressions.
- Rules 1001–1003 (best evidence / duplicates): a screenshot might be a “duplicate” or “secondary” representation—fine in many cases, but risky if authenticity is challenged.
In practice, the “best” version of an AI chat exhibit is usually: a native export + metadata + custodian declaration + a clear explanation of how the system logs were generated and preserved.
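To make that package concrete, here is what a native export bundling those elements might look like for a hypothetical chat system. Every field name below is invented for illustration and does not reflect any real product’s export format:

```json
{
  "export_generated": "2025-03-14T09:22:05Z",
  "account_id": "mark.t@dells-example.com",
  "conversation_id": "conv-8841",
  "messages": [
    {
      "role": "user",
      "timestamp": "2025-03-01T14:02:11Z",
      "text": "Customer fell near stairwell. Step 3 looked cracked/loose. Could that cause a fall?"
    },
    {
      "role": "assistant",
      "timestamp": "2025-03-01T14:02:14Z",
      "text": "Yes. A cracked/loose step is a common hazard that can cause falls. Recommend immediate repair."
    }
  ],
  "audit_log": [
    {
      "event": "exported",
      "by": "records.custodian@dells-example.com",
      "timestamp": "2025-03-14T09:22:05Z"
    }
  ],
  "sha256": "…"
}
```

The point is not the particular schema but that the account, timestamps, full thread, access history, and an integrity value travel together, so a custodian can tie each element to testimony.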
7) Worked Example: Introducing a “Chat Box” Transcript at Trial
Hypothetical: Paul sues Dell’s Department Store in a slip-and-fall case, alleging a broken stair step caused his injury. Immediately after the incident, Dell’s manager Mark uses an internal AI assistant (“SafetyBot”) to create an incident ticket and get recommendations.
The SafetyBot chat (proposed exhibit)
Mark: “Customer fell near stairwell. Step 3 looked cracked/loose. Could that cause a fall?”
SafetyBot (AI): “Yes. A cracked/loose step is a common hazard that can cause falls. Recommend immediate repair and documenting condition. If there are repeated reports, consider closing the stairwell until fixed.”
At trial, while Mark is on the stand, the following occurs:
- Q: “Right after Paul fell, you typed into SafetyBot that Step 3 looked cracked and loose—correct?”
  A: “Yes.”
- Q: “And SafetyBot responded that a cracked step can cause falls and recommended immediate repair—correct?”
  A: “Yes.”
- Q: “You forwarded that SafetyBot response to maintenance the same day—right?”
  A: “Yes.”
- Q: “And the stair was repaired the next day—right?”
  A: “Yes.”
Plaintiff moves to admit: (A) a printed screenshot of the chat and (B) the native export plus audit logs.
A) Likely Federal (FRE) objections & rulings
1) Authentication (Rule 901 / 902)
Objection: “Lack of foundation / not authenticated.”
Likely ruling: If plaintiff offers only a screenshot with no metadata and no witness who can explain where it came from and whether it was altered, the objection has traction.
If plaintiff offers a native export + audit logs + a custodian or Mark’s testimony tying it to his account and the system, authentication is much more likely to be satisfied.
2) Hearsay (Rules 801–803)
Mark’s prompt: If Mark is an employee speaking within the scope of his duties, the prompt can be offered against Dell as a party-opponent statement (commonly via the agent/employee path).
SafetyBot’s output: Defense will argue it’s being offered for its truth (“broken step causes falls”), and that the jury can’t cross-examine “SafetyBot.”
Plaintiff’s cleanest response is often: “Not for truth—offered to show notice/knowledge and why Dell acted,” which is a non-hearsay purpose.
If plaintiff insists it is offered for truth (i.e., as substantive proof of defect/causation), the judge may scrutinize whether this is really expert-like opinion requiring Rule 702 reliability.
3) Rule 702 (expert reliability) if offered as technical truth
Objection: “Improper expert opinion / unreliable methodology.”
Likely ruling: If the AI output is being used as a substitute for an engineer or safety expert, the court may exclude it absent a proper expert foundation.
If it’s used only to show Mark was advised and reacted, the Rule 702 pressure drops dramatically.
4) Rule 403 (unfair prejudice / misleading the jury)
Objection: “Even if relevant, the AI output will mislead the jury into treating it as authoritative.”
Likely ruling: Courts may consider limiting instructions, redactions, or allowing the evidence only for the non-hearsay purpose (notice) if the “AI authority effect” risks confusion.
5) Subsequent remedial measures (Rule 407) as to the repair
Objection: “Repair the next day is a subsequent remedial measure.”
Likely ruling: If plaintiff offers the repair to prove negligence or culpable conduct, Rule 407 typically bars it. If offered for another permitted purpose (e.g., ownership/control, feasibility if disputed, or impeachment), it may be allowed with care.
B) California (CEC) version: key objections and “exemptions/exceptions”
In California Superior Court, you’re still fighting the same war—just with CEC section numbers:
1) Relevance & balancing (CEC 210 / 352)
The SafetyBot exchange is likely relevant if it tends to show notice, condition, or causation. Even relevant evidence can be excluded if its probative value is substantially outweighed by undue prejudice, confusion, or misleading the jury.
2) Authentication (CEC 1400 / 1401)
Plaintiff must authenticate the writing (the chat record) before it is received. A native export plus audit logs and testimony from Mark or a custodian generally beats a screenshot standing alone.
3) Hearsay rule (CEC 1200) and key “exemptions/exceptions”
- Party admissions (CEC 1220): Mark’s typed prompt may come in if it qualifies as an admission offered against the party (and related doctrines may apply depending on agency/authorization).
- Adoptive admissions (CEC 1221): if Dell (through Mark or another authorized actor) adopted or agreed with the AI output—e.g., forwarding it with “this is accurate”—that strengthens an admissions theory.
- Business records (CEC 1271): if Dell regularly keeps SafetyBot tickets/logs as part of routine operations and the foundational requirements are met, the recordkeeping layer may fit the business-records exception.
- Non-hearsay purpose: as in federal court, the cleanest move is often to offer the AI output to show notice/knowledge/effect on the listener rather than to prove the output’s truth.
4) Subsequent remedial measures (CEC 1151)
Evidence of the next-day repair is generally inadmissible to prove negligence or culpable conduct, though it may be permitted for other limited purposes in the right posture (ownership/control, impeachment, etc.).
5) “Best evidence” concept in California: Secondary Evidence Rule (CEC 1521)
California’s approach is often framed through the Secondary Evidence Rule: the content of a writing may be proved by otherwise admissible secondary evidence,
but the court may exclude it in certain circumstances—especially where fairness or authenticity concerns are substantial.
Practically, a screenshot is more vulnerable than a native export with reliable metadata.
8) A Field Checklist: How to Make AI Chats Trial-Ready
- Preserve early: treat AI chats like ESI (litigation holds, retention policies, export procedures).
- Prefer native exports: capture full threads, timestamps, account IDs, and system audit trails.
- Lock down integrity: hashes/digital signatures where available; document chain of custody.
- Capture the “prompt + output” together: context matters for meaning and for completeness objections.
- Decide your purpose: “notice/effect on listener” is often an easier admissibility path than “truth of the AI’s conclusion.”
- Anticipate Rule 702 fights: if the AI output sounds like an expert, expect expert standards.
- Prepare limiting instructions: if admitted, frame the permitted use tightly (especially to avoid jurors treating AI as authoritative).
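The “lock down integrity” step in the checklist can be as simple as recording a cryptographic hash of each export at collection time and re-verifying it before trial. A minimal sketch in Python (the function name and chunk size are our own choices, not a forensic standard; real workflows typically pair this with a documented chain of custody):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of an export file.

    Record the returned hex string when the file is collected;
    recomputing it later and comparing values shows the file
    has not changed since collection.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large exports don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

A mismatch between the recorded and recomputed digest does not prove tampering by itself, but it flags the exhibit for closer scrutiny before anyone vouches for it in court.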
9) Bottom Line
Courts already have tools to handle AI chats: authentication rules, hearsay doctrine, expert gatekeeping, and prejudice balancing.
The winning strategy usually comes from choosing the cleanest theory for why the chat matters—then building a record that proves authenticity and prevents the jury from over-weighting AI output.
Disclaimer: This article is for general informational purposes and is not legal advice.
Sources (for footnoting or “Further Reading”)
- Federal Rules of Evidence (U.S. Courts PDF, Dec. 1, 2024)
- FRE 901 (Cornell LII)
- FRE 902 (Cornell LII; includes 902(13)–(14))
- FRE 801 (Cornell LII; hearsay definitions)
- FRE 702 (Cornell LII; expert testimony)
- FRE 407 (Cornell LII; subsequent remedial measures)
- United States v. Zhyltsou (2d Cir. 2014) (authentication of web/social media evidence)
- People v. Goldsmith (Cal. 2014) (machine-generated evidence; foundation/hearsay issues)
- CEC 210 (definition of relevant evidence)
- CEC 352 (balancing / exclusion)
- CEC 1200 (hearsay rule)
- CEC 1220 (party admissions)
- CEC 1271 (business records)
- CEC 1400 (authentication of a writing)
- CEC 1521 (Secondary Evidence Rule)
- CEC 1151 (subsequent remedial measures)
- Advisory Committee on Evidence Rules Agenda Book (U.S. Courts, April 2024)


