As organizations encourage employees to integrate AI tools into day-to-day business operations, a question that in-house counsel and senior leadership should be considering is whether the prompts and conversations those employees generate are producible in litigation. The short answer is yes: a party’s AI prompts would almost certainly be producible on the same footing as any other record where they are relevant and material to the issues raised in the pleadings, within the party’s possession or control, and proportionate to produce (and not privileged).
The legal framework is not new. The practical challenges of identifying, capturing, and preserving AI records, however, differ from those associated with email and other established categories of business communication, and organizations should understand where those differences matter.
The Discovery Framework
In Alberta, a record is producible if it is relevant and material to the issues raised in the pleadings, under a party’s possession or control, and production is proportionate to the needs of the case. A “record” is broadly defined, with no specific carve-outs for AI or other electronic content, and there is no AI-specific privilege or exclusion. As such, AI prompts and logs would be producible on the same footing as emails and other business records.
Materiality requires that the record could reasonably be expected to significantly help determine an issue, evaluate credibility, or quantify damages; marginal or speculative utility is insufficient. Proportionality is also part of the analysis. Courts will consider the importance of the issues, the amounts at stake, the complexity of the matter, and the burden of retrieval and review when determining the appropriate scope of production.
While this analysis is anchored in Alberta’s standards, courts across Canada apply functionally similar relevance and proportionality concepts, and counsel should account for provincial rules where they may affect scope or process.
Cases Where AI Prompts Could Be Material
Unless prompts are voluntarily disclosed, a party seeking them will need to articulate concretely how they will significantly help resolve a pleaded issue, assess credibility, or quantify damages. Prompts are most likely to meet the materiality threshold where they would establish something that conventional discovery does not already address. If emails, memoranda, and other produced documents already show what a person knew, intended, or did, a request for prompts may fail on the “significant help” threshold or on proportionality grounds, because the burden would outweigh any incremental value.
Prompts will tend to matter most in disputes that turn on knowledge, intent, or the adequacy of process, particularly where equivalent records do not exist. The following are illustrative examples of circumstances where prompts are likely to meet the materiality threshold:
- Claims involving the gap between knowledge and disclosure. Fraud, misrepresentation, and bad faith claims often turn on what a party knew but chose not to disclose. Conventional discovery shows what was communicated. If produced documents do not establish that the party was aware of the undisclosed information, prompts are one category of record that might. For example, if a seller is alleged to have known about a material defect but did not disclose it, a prompt asking an AI about the defect before the transaction closed could, depending on the specifics of the case, be relevant and material.
- Claims where the real reason for a decision is in issue. In wrongful dismissal or discrimination cases, for example, it can matter when concerns were first documented, when decisions were first discussed, or when communications were made. Prompts used to draft a termination rationale before any performance documentation existed could corroborate or undermine the stated explanation, and may be ordered produced, particularly where the timing and motive could not already be established through other records.
- Trade secret claims where AI was used to extract or reformulate confidential information. In trade secret cases, conventional discovery may establish that a former employee joined a competitor, but not what confidential information they took. If the employee used an AI tool to extract, summarize, or reformulate proprietary information, the prompt may be the only contemporaneous record of the act of misappropriation. In this type of circumstance, the prompt could be producible precisely because equivalent records do not exist; the act of misappropriation occurred through the AI interaction itself.
- Professional liability claims turning on adequacy of process. In professional negligence cases, the work product shows the output, but the claim concerns whether the process behind it met the standard of care. If, for example, an engineer used AI to perform calculations, conduct research, or draft analysis, and the adequacy of that process is at issue (and the process is otherwise undocumented), the engineer’s AI prompts could meet the producibility threshold.
Seeking Production of an Opposing Party’s Prompts
The same analysis applies when considering whether to seek production of an opposing party’s AI prompts. In cases with the characteristics described above, counsel may wish to explore AI use during oral questioning, with the aim of establishing that AI was used in a context that would significantly help resolve a pleaded issue. Requests that cannot meet this standard, or that seek information already addressed by other produced documents, are less likely to succeed.
Preservation Obligations and Deletion
A natural response to the producibility of AI prompts is to ask whether employees can simply delete their conversation histories. If deletion occurs in the normal course, the prompts may no longer be accessible to the company, but they could still be accessible to the AI service provider. Consumer AI platforms vary in their retention windows, so there is no certainty that data will be purged before litigation arises.
Once litigation is reasonably anticipated, the ordinary preservation obligations apply. Deleting AI prompts after that point could constitute spoliation. Organizations would be well advised to ensure that litigation hold procedures account for AI records, and that employees understand AI conversations are subject to preservation in the same manner as email and other electronic records.
In addition, many employees increasingly rely on their AI conversation histories as a working resource, building up context on projects over extended periods. Deleting those records would mean losing valuable work product and reference material that has become part of day-to-day practice. The more embedded AI tools become in workplace workflows, the less practical routine deletion becomes.
Emerging Practice in the United States
No Canadian court has yet addressed the producibility of AI prompts. In the United States, however, practitioners are beginning to pursue this category of discovery. Discussions within the Sedona Conference Working Groups indicate that requests for production of generative AI inputs, including prompts and conversation logs from platforms such as Microsoft Copilot, Google Gemini, and ChatGPT, are becoming more common in cases where the content may bear on the issues.
These platforms make such records searchable and exportable, and service providers now exist to compile AI interaction records for litigation purposes. There is no reason to expect Canadian courts would approach the producibility analysis differently where the foundational requirements of relevance, materiality, and proportionality are met.
Practical Considerations
Litigation Holds
Organizations may wish to review litigation hold templates to ensure they reference AI platforms by name and instruct custodians not to delete prompts or prompt histories once litigation is reasonably anticipated. A general reference to electronic documents is unlikely to lead employees to preserve AI conversations, which they often do not perceive as records in the same way they perceive email.
Retention
Retention is a distinct question from litigation holds. A litigation hold preserves records once a dispute is anticipated. Retention policies govern how long records are kept in the ordinary course, before any dispute arises.
Organizations that have developed retention policies for collaboration platforms like Slack or Teams may be inclined to apply the same framework to AI tools. In principle, the underlying question is the same: what retention obligations apply to the content of this record, and how does the organization capture and preserve it? Those obligations may depend on what the prompt contains; a prompt used to draft a routine communication may attract different considerations than one containing confidential client information or trade secrets.
In practice, however, AI prompts present complications that collaboration platforms do not.
Slack and Teams are enterprise tools with centralized administrative controls for retention, legal holds, and export. Enterprise AI platforms offer comparable controls, but often they must be affirmatively configured; they are not set up for litigation readiness by default. A meaningful volume of business-related AI use also occurs on consumer platforms where retention is governed by the provider’s terms of service and controlled by the individual user, not the organization.
While a full treatment of AI-specific retention is beyond the scope of this article (and the answer will depend on the organization’s industry, regulatory environment, and the platforms in use), the narrower point is that organizations should not assume existing retention policies automatically capture AI records, even where those policies are broad enough to encompass electronic records generally. AI platforms should be specifically identified in retention policies, and the practical question of whether the organization can actually enforce retention on those platforms should be assessed rather than assumed.
Personal AI Accounts
The legal analysis for producibility does not change based on whether an AI account is enterprise or consumer, company-provisioned or personal. What matters is whether the content relates to the matters in issue. However, personal accounts are harder for an opposing party to access. With a personal account, the opposing party would likely need to obtain the records from the individual, and if the employee is uncooperative, no longer with the company, or claims the records were deleted, options become more limited and cost-intensive.
AI-Specific Discovery in Active Matters
In active matters involving claims that turn on knowledge, intent, or process, it may be worth considering whether AI-specific discovery is warranted, on either side of the file. This could include adding targeted questions about AI use to standard discovery checklists and custodian interviews: whether prompts were used to develop analyses, drafts, or decisions material to the dispute, where those prompts are stored, and whether they can be exported.
Privilege
The privilege implications of AI use in a litigation context are significant and warrant brief comment, particularly in light of a recent decision in the United States where a federal court ruled that 31 documents a criminal defendant generated using a consumer AI tool and later shared with his defence lawyers were not protected by attorney-client privilege or the work product doctrine.
The court’s reasoning rested on established principles: the AI tool is not a lawyer, owes no duties of loyalty or confidentiality, is not supervised by courts or professional associations, and cannot form an attorney-client relationship. The platform’s own terms of service disclaimed providing legal advice, and its privacy policy permitted disclosure of user data to third parties. Further, sending pre-existing, non-privileged documents to a lawyer does not retroactively make them privileged; the defendant prepared the documents on his own initiative, not at counsel’s direction.
Employees are increasingly using AI tools to research legal questions that, until recently, would typically have been the subject of a privileged communication with counsel. Where an employee asks an AI tool these questions on their own initiative, the resulting records are unlikely to attract privilege.
Conclusion
AI tools are expanding the surface area of discoverable communications in ways that organizations should not underestimate. Further, AI tools invite a level of candour and informality that email does not, and the records they generate may be more revealing as a result. The volume of potentially producible material is growing, and much of it may not be privileged.
The potential producibility of prompts in certain litigation contexts should not discourage organizations from adopting AI tools. Prompts will be producible only where they meet the materiality threshold and bear on the pleaded issues, and it is difficult to imagine a case where broad disclosure of all AI interactions would be warranted.
The practical takeaway is one of awareness and governance. Organizations should make clear that AI conversations, when used for business, are business records subject to the same standards as other written communications. Employees should also understand that AI interactions are not inherently private, that privilege will not attach to legal research conducted through an AI tool on the employee’s own initiative, and that inputting privileged information into a consumer AI platform could waive any existing privilege.