Australia
Document: Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (May 2024)
Duties to the Court
The Principles for Use of AI by litigants are as follows:
- Parties and practitioners who use AI tools in the course of litigation should ensure they understand how those tools work, as well as their limitations.
- Parties and practitioners should be aware that the privacy and confidentiality of information and data provided to an external program that provides AI-generated answers may not be guaranteed, and the information may not be secure.
- The use of AI programs by a party must not indirectly mislead another participant in the litigation process (including the Court) as to the nature of any work undertaken or the content produced by that program. Ordinarily parties and their practitioners should disclose to each other the assistance provided by AI programs to the legal task undertaken. Where appropriate (for example, where it is necessary to enable a proper understanding of the provenance of a document or the weight that can be placed upon its contents), the use of AI should be disclosed to other parties and the court.
- The use of AI programs to assist in the completion of legal tasks must be subject to the obligations of legal practitioners in the conduct of litigation, including the obligation of candour to the Court and, where applicable, to obligations imposed by the Civil Procedure Act 2010, by which practitioners and litigants represent that documents prepared and submissions made have a proper basis.
- Self-represented litigants (and witnesses) who use generative AI to prepare documents are encouraged to identify this by including a statement as to the AI tool used in the document that is to be filed or the report that is prepared. This will not detract from the contents of the document being considered by the relevant judicial officer on its merits but will provide useful context to assist the judicial officer. For example, it will assist in forming a more accurate assessment of the level of legal knowledge or experience possessed by a self-represented party.
Document: Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (July 2024)
Duties to the Court
- The use of AI programs by a party must not indirectly mislead another participant in the litigation process (including the Court) as to the nature of any work undertaken or the content produced by that program. Ordinarily parties and their practitioners should disclose to each other the assistance provided by AI programs to the legal task undertaken. Where appropriate (for example, where it is necessary to enable a proper understanding of the provenance of a document or the weight that can be placed upon its contents), the use of AI should be disclosed to other parties and the court.
- The use of AI programs to assist in the completion of legal tasks must be subject to the obligations of legal practitioners in the conduct of litigation, including the obligation of candour to the Court and, where applicable, to obligations imposed by the Civil Procedure Act 2010, by which practitioners and litigants represent that documents prepared and submissions made have a proper basis.
- Self-represented litigants (and witnesses) who use generative AI to prepare documents are encouraged to identify this by including a statement as to the AI tool used in the document that is to be filed or the report that is prepared. This will not detract from the contents of the document being considered by the relevant judicial officer on its merits but will provide useful context to assist the judicial officer. For example, it will assist in forming a more accurate assessment of the level of legal knowledge or experience possessed by a self-represented party.
- Generative AI and Large Language Models create output that is not the product of reasoning, nor are they a legal research tool. They use probability to predict a given sequence of words. Output is determined by the information provided to the model and should not be presumed to be correct. The use of commercial or freely available public programs, such as ChatGPT and Google Gemini, is more likely to produce results that are inaccurate for the purpose of current litigation. Generative AI does not relieve the responsible legal practitioner of the need to exercise judgment and professional skill in reviewing the final product to be provided to the Court. AI-generated text should be checked to ensure it is not:
- out of date, in that the model used may only have been trained on data up to a certain point in time, and therefore will be unaware of any more recent jurisprudence or other developments in the law that may be relevant to a case;
- incomplete, in that the tool may not generate material addressing all arguments that a party is required to make or all issues that would be in a party’s interests to cover, and summaries generated by such tools may not contain all relevant points;
- inaccurate or incorrect, in that the tool may not produce factually or legally correct output (for example in some situations, users have been adversely affected by placing reliance on made-up cases or incorrect legal propositions);
- inapplicable to the jurisdiction, as the data used to train the underlying model might be drawn from other jurisdictions with different substantive laws and procedural requirements; or
- biased, given the model will have been created based on data that the user is unaware of, but which may over- or under-represent certain demographics or otherwise prefer certain viewpoints over others in a way that will not be transparent to users.
- A party or practitioner signing or certifying a document, filing a document with the Court, or otherwise relying on a document’s contents in a proceeding remains responsible for the accuracy of the content. Whether a court document is signed by an individual or on behalf of a firm, the act of signing a document that is filed with the Court is a representation that the document is considered by those preparing it to be accurate and complete. Reliance on the fact that a document was prepared with the assistance of a generative AI tool is unlikely to be an adequate response to a document that contains errors or omissions.
- Particular caution needs to be exercised if generative AI tools are used to assist in the preparation of affidavit materials, witness statements or other documents created to represent the evidence or opinion of a witness. The relevant witness should ensure that documents are sworn/affirmed or finalised in a manner that reflects that person’s own knowledge and words. Similar considerations arise in the use and identification of such tools in compiling data in the preparation of any expert reports or opinions, and compliance with the Expert Witness Code of Conduct.
Document: The use of Generative Artificial Intelligence (AI): Guidelines for responsible use by non-lawyers (May 2024)
Duties to the Court
You are responsible for ensuring that all information you rely on or provide to the court or tribunal is accurate. You must check the accuracy of any information you get from a Generative AI chatbot before using that information in court or tribunal proceedings.
Document: A solicitor’s guide to responsible use of artificial intelligence
Duties to the Court
Relevant rules to consider under the Legal Profession Uniform Law Australian Solicitors’ Conduct Rules 2015:
Rule 19 – Duty to the court: Solicitors must ensure they do not mislead or deceive the Court, even if inadvertently. The validity of any material presented to the Court needs to be tested by solicitors, whether or not that material has been produced by generative AI.
When using AI, solicitors should be particularly cautious, given the limitations discussed above. Solicitors should not rely on generative AI to verify sources produced by AI; this approach has been known to fail.
Document: Issues Arising from the Use of AI Language Models (including ChatGPT) in Legal Practice (12 July 2023)
Duties to the Court
Under the Legal Profession Uniform Conduct (Barristers) Rules 2015 (NSW):
- Rule 23 provides: A barrister has an overriding duty to the court to act with independence in the interests of the administration of justice.
- Rule 24 provides: A barrister must not deceive or knowingly or recklessly mislead the court.
- Where AI tools are used the answers generated should be carefully checked and interrogated before the barrister relies upon them in any way to assist in producing work for clients or courts.
- In using material generated by an AI language model, barristers should note that the AI’s response may include language which may put them in a position of contravening Rule 8 and/or Rule 123. Any output produced from AI tools should be carefully reviewed to ensure that such language is not repeated or endorsed in any way in work produced by, or communications emanating from, the barrister. If biased, discriminatory and/or offensive language is deployed in any communication with the court … a barrister may thereby have contravened the above bar rules and exposed themselves to disciplinary action.
- The critical point in relation to AI technology is that it cannot be used as a substitute for the proper exercise of a barrister’s professional judgement in matters of law, nor used in ignorance of their professional and ethical obligations. It is not a substitute for a barrister’s own work.
New Zealand
Documents:
- Guidelines for use of generative artificial intelligence in Courts and Tribunals (Lawyers) (7 December 2023)
- Guidelines for non-lawyers (7 December 2023)
Duties to the Court
Guidelines for lawyers:
- Lawyers have “a fundamental obligation to uphold the rule of law, to facilitate the administration of justice and the overriding duty of a lawyer as an officer of the court. As officers of the court, lawyers must not mislead the court. They must take all reasonable steps to ensure the accuracy of information (including legal citations) provided to the court, and to avoid any risk of breaching suppression orders.”
- You are responsible for ensuring that all information you provide to the court/tribunal is accurate. You must check the accuracy of any information provided to you by a GenAI chatbot (including legal citations) before using that information in court/tribunal proceedings.
- Have regard to ethical issues – particularly biases and the need to address them.
- You do not need to disclose use of a GenAI chatbot as a matter of course – unless asked by the court or tribunal.
- Provided these guidelines have been followed (in particular, checking for accuracy), the key risks associated with GenAI should have been adequately addressed. However, a court or tribunal may ask or require lawyers to disclose GenAI use.
United Kingdom
Document: Artificial Intelligence: Guidance for Judicial Office Holders (12 December 2023)
Duties to the Court
- All legal representatives are responsible for the material they put before the court/tribunal and have a professional obligation to ensure it is accurate and appropriate. Provided AI is used responsibly, there is no reason why a legal representative ought to refer to its use, but this is dependent upon context.
- Until the legal profession becomes familiar with these new technologies, however, it may be necessary at times to remind individual lawyers of their obligations and confirm that they have independently verified the accuracy of any research or case citations that have been generated with the assistance of an AI chatbot.
- AI chatbots are now being used by unrepresented litigants. They may be the only source of advice or assistance some litigants receive. Litigants rarely have the skills to independently verify legal information provided by AI chatbots and may not be aware that such tools are prone to error. If it appears an AI chatbot may have been used to prepare submissions or other documents, it is appropriate to inquire about this and to ask what checks for accuracy have been undertaken (if any).
- AI tools are now being used to produce fake material, including text, images and video. Courts and tribunals have always had to handle forgeries, and allegations of forgery, involving varying levels of sophistication. Judges should be aware of this new possibility and potential challenges posed by deepfake technology.
United States
Document: Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence (April 2024)
Candor to the Court: Attorneys’ signatures and attestations appear on legal documents submitted to the court, documents which make representations about case law and other authorities relied upon in support of the attorney’s case. Regardless of the use of and reliance upon new and emerging technologies like generative AI tools, as officers of the court and in the interest of justice, attorneys must identify, acknowledge and correct mistakes made or represented to the court.
Deepfakes – Synthetic Media as Evidence in Court
…evidentiary issues surrounding Deepfakes – a form of AI called deep learning that makes images of fake events – may also implicate the Duty of Candor to the Court. Deciding issues of relevance, reliability, admissibility and authenticity may still not prevent deepfake evidence from being presented in court and to a jury.
Document: Preliminary guidelines for the use of artificial intelligence (25 January 2024)
[Source: https://btlaw.com/insights/alerts/2024/new-jersey-judiciary-releases-preliminary-guidelines-for-unavoidable-use-of-ai-by-attorneys - the Guidelines themselves do not appear to be generally available]
- Lawyers remain responsible for the validity of legal submissions, including those generated using AI.
- Lawyers must not submit false, fake, or misleading content, and they are prohibited from manipulating or creating evidence using AI.
- The use of AI will not excuse false, fake, or misleading content, and lawyers must uphold candor to the tribunal.
Several judges of US federal courts have issued standing orders requiring lawyers to affirmatively disclose the use of AI or to file certifications regarding the use of AI. Examples cited in the relevant legal update are quoted below:
- Judge Brantley Starr of the U.S. District Court for the Northern District of Texas issued a standing order requiring counsel to file a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence or that any language drafted by generative artificial intelligence will be checked for accuracy.
- Magistrate Judge Gabriel A. Fuentes of the U.S. District Court for the Northern District of Illinois issued a standing order requiring that any party using any generative AI tool in the preparation or drafting of documents for filing with the Court disclose in the filing that AI was used and the specific AI tool that was used to conduct legal research and/or to draft the document.
- Judge Stephen Alexander Vaden of the U.S. Court of International Trade issued an Order on Artificial Intelligence requiring disclosure of any generative AI program used and of all portions of text drafted with the assistance of generative AI, as well as certification that the use of the generative AI tool did not disclose confidential information to unauthorized parties.
- Judge Michael Baylson of the U.S. District Court for the Eastern District of Pennsylvania has issued a broader order requiring disclosure of the use of any type of AI, rather than limiting the disclosure requirement to the use of generative AI.
Canada
Document: Practice Resource: Guidance on Professional Responsibility and Generative AI (October 2023)
Courts in some jurisdictions in Canada, as well as some US states, require lawyers to disclose when generative AI was used to prepare their submissions. Some courts require disclosure not only that generative AI was used, but also how it was used. If you are thinking about using generative AI in your practice, you should check with the court, tribunal, or other relevant decision-maker to verify whether, and to what degree, you are required to attribute your use of generative AI.
Document: Notice to the Parties and the Profession: The Use of Artificial Intelligence in Court Proceedings (20 December 2023)
- The Court expects parties to proceedings before the Court to inform it, and each other, if they have used artificial intelligence to create or generate new content in preparing a document filed with the Court. If any such content has been included in a document submitted to the Court by or on behalf of a party or a third-party participant (“intervener”), the first paragraph of the text in that document must disclose that AI has been used to create or generate that content.
- This Notice requires counsel, parties, and interveners in legal proceedings at the Federal Court to make a declaration for AI-generated content, and to consider certain principles when using AI to prepare documentation filed with the Court.
- The Court recognizes that counsel have duties as Officers of the Court. However, these duties do not extend to individuals representing themselves. It would be unfair to place AI-related responsibilities only on self-represented individuals while allowing counsel to rely on their professional duties. Therefore, the Court provides this Notice to ensure fair treatment of all represented and self-represented parties and interveners.
Document: General Practice Direction 29 (Use of Artificial Intelligence Tools) (26 June 2023)
If any counsel or party relies on artificial intelligence (such as ChatGPT or any other artificial intelligence platform) for their legal research or submissions in any matter and in any form before the Court, they must advise the Court of the tool used and for what purpose.
Document: Practice Direction: Use of Artificial Intelligence in Court Submissions (23 June 2023)
When artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used.
Document: Notice to the Public and Legal Profession: Ensuring the Integrity of Court Submissions when using Large Language Models (6 October 2023)
Note: a similar practice direction has been issued by the Supreme Court of Nova Scotia.
…urge practitioners and litigants to exercise caution when referencing legal authorities or analysis derived from LLMs in their submissions.
In the interest of maintaining the highest standards of accuracy and authenticity, any AI-generated submissions must be verified with meaningful human control. Verification can be achieved through cross-referencing with reliable legal databases, ensuring that the citations and their content hold up to scrutiny. This accords with the longstanding practice of legal professionals.
Other
Document: Practical Guidance Note No. 2 of 2023: Guidelines on the use of large language models and generative AI in proceedings before the DIFC Courts (21 December 2023)
Parties should declare at the earliest possible opportunity if they have used or intend to use AI-generated content during any part of proceedings. Any issues or concerns expressed by either party in respect of the use of AI should be resolved no later than the Case Management Conference stage. Early disclosure of the use, or intention to use, AI gives all parties the opportunity to raise any concerns they might have or to provide their consent to such use. It also gives the Courts the opportunity to make any necessary case management orders on the reliance on AI-generated content during proceedings.

Parties should not wait until shortly before trial, or the trial itself, to declare that they intend to use AI-generated content, as this is likely to lead to requests for adjournments and the loss of trial dates, which must be avoided. Where parties seek to use AI in the course of proceedings, they must ensure that such use is first discussed with the other side; where no agreement is reached, the request may be put before the Courts by way of a Part 23 application for determination.