A practical guide for forensic vocational experts, life care planners, and legal nurse consultants.
The short answer: if you use artificial intelligence (AI) in your consulting practice, you do not have to hide it, and in most jurisdictions you should not try. What you do have to be ready to do is explain in plain language what tool you used, what you put into it, what came out, and how you verified what came out. The experts whose testimony is currently getting excluded are not the ones who used AI. They are the ones who could not answer those questions when opposing counsel pressed them on it.
Two things changed in the last twelve months that make this conversation urgent. The vocational and rehab counseling boards both published formal AI guidance for the first time, with the American Board of Vocational Experts (ABVE) issuing its Guidelines for Ethical AI Use in Forensic Vocational Evaluations in July 2025 and the Commission on Rehabilitation Counselor Certification (CRCC) following with a detailed FAQ in February 2026. They were building on a foundation that the American Nurses Association (ANA) and the American Medical Association (AMA) had already laid years earlier, and on the American Bar Association's Formal Opinion 512 from July 2024, which now governs the lawyers who will be deposing you. The second thing that changed is harder to summarize but more important: courts started excluding expert testimony when AI use went sideways, and one court in February ruled that conversations with a consumer AI chatbot may not be protected by attorney-client privilege or the work product doctrine at all. Four published opinions now form the rough outline of what defensible AI use looks like.
Read every published opinion on AI in expert testimony and one fact jumps out. In none of them was AI use itself the disqualifying act. The disqualifying act was failing to verify the output and then putting it under oath. The Kohls court was explicit on this point, going out of its way to note that AI has the potential to transform legal practice for the better and faulting the Stanford professor not for using a large language model (LLM) in his research but for submitting unverified AI-generated content as sworn testimony.
This is the unifying principle of everything that follows, and it is the same principle running through every credentialing body's published guidance and through every recent court decision. The expert's job has not changed. It is the same job it has always been: to form an independent opinion based on facts the expert has personally verified. The tool that helps you organize the underlying material is no more and no less consequential than the highlighter you used on a paper chart in 1998. What matters is whether you confirmed every fact, date, citation, and reference against the source before you signed the report.
If you internalize this one idea, the rest of the post is implementation detail.
The single most useful practical move you can make is to stop treating AI as a special category and start treating it as one tool among many. Forensic experts have always disclosed methodology. Vocational experts list their transferable skills software, their labor market access tools, their O*NET searches, and their assessment instruments. Life care planners cite their cost databases. Legal nurse consultants list the records they reviewed and the specialty references they consulted. AI-assisted record review belongs in the same place, written in the same matter-of-fact tone.
A defensible methodology section in a vocational or life care planning report might read like this: "In preparing this report I reviewed the medical records produced by counsel, supplemented by labor market data from [database], transferable skills analysis using [software], and AI-assisted record organization and summarization through SecondLook Health, a SOC 2 and Health Insurance Portability and Accountability Act (HIPAA) compliant clinical analytics platform that operates under a Business Associate Agreement (BAA) where required by the Customer's regulatory status and extends equivalent contractual privacy commitments to all customers regardless of that status. All facts, dates, and citations in this report were independently verified against the underlying source documents."
Written this way, the disclosure stops looking exotic and starts looking like infrastructure, which is what it is. The verification language is built in, so the deposition question about validation is half-answered before opposing counsel asks it. And because the same paragraph appears in every report you produce, opposing counsel cannot argue that you treated this case differently from any other. Pick the language you like, write it once, and use it from now on.
Start here, because this is the standard you will be measured against in a Frye or Daubert challenge. Opposing counsel does not need to invent a theory of why AI is bad. They will quote your own profession's published guidance back to you and ask whether you followed it. A year ago, the answer to that question varied wildly depending on which credentialing body you were certified through. Today, every major body has converged on roughly the same handful of principles, which makes building a defensible workflow much easier than it used to be.
ABVE released its Guidelines for Ethical AI Use in Forensic Vocational Evaluations on July 7, 2025. The document is short, principled, and binding on Diplomates and Fellows through the Code of Ethics. Five rules sit at its center.
ABVE does not mandate a specific disclosure sentence in every report. It does require that the expert remain the decision-maker, that the client know AI is involved, and that the data stay secure. If your workflow already meets those tests, you are inside the lines.
The CRCC FAQ released in February 2026 is more granular and applies to anyone holding the Certified Rehabilitation Counselor or Certified Vocational Evaluator credential. The hard rules are the ones this post keeps returning to: no protected health information (PHI) goes into public AI tools under any circumstances, the counselor remains responsible and accountable for every output, and the counselor must be able to explain, at a basic level, how any tool they rely on actually works.
What makes the CRCC document worth reading carefully is the explicit recognition that closed BAA-backed platforms are a different category from public chatbots, paired with a requirement that you be able to explain how the tool works at a basic level. Counselors who use a tool they cannot describe (what it was trained on, where its outputs come from, what its known limitations are) are out of compliance with their own ethics code before they ever get to a courtroom.
Legal nurse consultants and any registered nurse (RN) working in medical-legal consulting are bound by the American Nurses Association Code of Ethics. The ANA's ethics body, the Center for Ethics and Human Rights, published its position statement on the Ethical Use of Artificial Intelligence in Nursing Practice in 2022. The 2025 revision of the Code of Ethics for Nurses now incorporates AI directly through a new section called Provision 7.5, which addresses technology, ethics, and policy. The key points map cleanly onto everything ABVE and CRCC are saying.
The American Association of Legal Nurse Consultants (AALNC) has not published its own AI position statement, but AALNC is formally aligned with the ANA, which makes the ANA position the binding guidance for legal nurse consultants in practice and a reasonable answer to give when opposing counsel asks which standard governs your work.
If you work alongside physician experts, or if you are a physician practicing in life care planning or medical-legal consulting, the AMA framework matters. The AMA released its Principles for Augmented Intelligence Development, Deployment, and Use in November 2023 and expanded that work in 2025 with a new policy on AI literacy in medical education and a STEPS Forward governance toolkit for clinical practices. The AMA framing is worth knowing because it originated the term "augmented intelligence" that the ANA and others have since adopted, and because it is unusually clear about physician accountability.
This is the one most experts overlook, and it is the one that changes the conversation in deposition. On July 29, 2024, the American Bar Association Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on Generative Artificial Intelligence Tools. It is the binding ethics framework for any lawyer in a jurisdiction that follows the ABA Model Rules. The key points include a duty of technological competence that now extends to understanding the benefits and risks of generative AI, confidentiality obligations that constrain what client information a lawyer may put into a tool, a duty of candor that requires verifying AI output before it reaches a tribunal, supervisory responsibility for how associates and nonlawyer assistants use these tools, and limits on how AI-assisted work may be billed.
Why does this matter for you? Because it means the attorney who hires you is operating under the same verification standard you are. When opposing counsel asks the deposition question about your AI use, your retaining attorney should already have asked you the same question and gotten the same answer. The ABA framework is also useful in the deposition itself. If pressed, you can note that your verification posture is consistent with the standard the questioning attorney's own profession requires of them. That is a stronger position than experts realize.
Several organizations that touch this work are developing AI content but have not published formal binding positions. The Commission for Case Manager Certification has run educational webinars and offers a CE course on AI in healthcare, but it has published no position statement. The International Association of Rehabilitation Professionals and its International Academy of Life Care Planners section have run an AI 101 webinar series for vocational evaluators and life care planners since 2024, again without a formal binding statement. The National Rehabilitation Association, the National Rehabilitation Counseling Association, and the American Academy of Physician Life Care Planners have not published AI guidance as of this writing. The absence of a formal position from any one of these bodies is not a license to ignore the issue. In a Frye hearing the relevant question is whether the practice is generally accepted in the field, and the published positions of ABVE, CRCC, ANA, and AMA will be treated as evidence of what the field accepts regardless of which credentialing body sits closest to your particular practice.
Four cases in the last fifteen months tell you most of what you need to know. Read them with the principle from the top of this post in mind: every one of these decisions was about verification failure or about consumer-tool sloppiness, not about AI use itself.
An expert in a trustee accounting matter used Microsoft Copilot to perform calculations. When questioned, he could not say what prompt he used, what sources Copilot drew on, or how the tool worked. The court excluded the testimony and went further, holding that counsel has an affirmative duty to disclose AI use in expert materials and that AI-generated evidence should be subject to a Frye hearing before admission. The fatal problem was not that the expert used Copilot. The fatal problem was that he could not describe what he had done with it or verify its output. Weber is the case opposing counsel will cite first.
A Stanford professor submitted an expert declaration in support of a Minnesota deepfake statute. He used GPT-4o to help draft it. The declaration cited two academic articles that did not exist and misattributed a third. The expert admitted he had not verified the output before signing under penalty of perjury. The court excluded the declaration entirely and noted that under Rule 11, attorneys may now be required to ask their experts directly whether they used AI and how they verified anything it produced. Again, the court took pains to say AI use was not the problem. Failure to check the output before signing was.
A defense expert declaration cited a nonexistent article with co-authors who had never worked together. Counsel called it an honest mistake from using an LLM to format citations. The court called it a plain and simple AI hallucination, struck the offending paragraph, and noted that the incident undermined the credibility of the expert's entire declaration. One unverified citation contaminated the credibility of every other opinion in the document.
This is the case that matters most for the question of what is discoverable. A securities fraud defendant used the consumer version of Claude to draft defense strategy documents after receiving a grand jury subpoena but without his attorneys directing him to do so. The FBI seized the documents from his home. His lawyers argued they were protected by attorney-client privilege and the work product doctrine. Judge Jed Rakoff in the Southern District of New York ruled that they were not, in what is widely described as the first decision of its kind nationwide.
The reasoning matters for every consultant using AI. Judge Rakoff held that there was no attorney-client communication because Claude is not an attorney. He held that there was no reasonable expectation of confidentiality because the consumer version of the tool operates under a public privacy policy. And he held that the work product doctrine did not apply because counsel had not directed the defendant to use the tool.
Two passages from the decision are worth knowing. First, the court explicitly noted that the consumer-tier privacy policy was central to its analysis, leaving open whether enterprise platforms with negotiated confidentiality terms might be treated differently. Several practitioner commentaries on the decision have made the same point: a Business Associate Agreement and an enterprise contract are not just better security posture, they are potentially the difference between privileged and discoverable. Second, the court wrote that "had counsel directed Heppner to use Claude, Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer's agent within the protection of the attorney-client privilege." That dicta is the roadmap for how expert use of AI should be structured: at the direction of counsel, on a BAA-backed enterprise platform, with the expert maintaining their own independent verification.
Heppner is the case that completes the picture. Weber, Kohls, and Concord Music are all about verification failure. Heppner is about discoverability, and it draws a sharp line between consumer-grade chatbots used on a whim and professional-grade tools used at the direction of counsel. The experts who structure their workflow on the right side of that line are protected by the same doctrines that have always protected expert work product. The ones who treat consumer chatbots as confidential workspaces are not, and the case law is now explicit about why.
You are in a deposition. Opposing counsel asks the question. What do you actually say?
Answer truthfully and specifically. Do not volunteer beyond the question, but do not evade it either. The cleanest answer names the platform, establishes its compliance posture, and asserts your independent judgment in a single breath. Something like: "I used a SOC 2 and HIPAA compliant clinical analytics platform called SecondLook Health to help organize and summarize the medical records in this case. The platform operates under a Business Associate Agreement with customers who are HIPAA Covered Entities, and the vendor extends equivalent contractual privacy commitments to all customers regardless of regulatory status. I reviewed every output, verified the source citations against the underlying records, and the opinions in my report are my own." That phrasing confirms use without apology, names the tool by name, establishes the security posture in a way that is accurate whether your work touches a Covered Entity or not, asserts independent professional judgment, and lands cleanly inside the ABVE guidelines, the CRCC FAQ, the ANA position statement, and the AMA principles all at once.
You do not need to be a computer scientist to answer this. You do need to be able to describe in plain language what category of tool you are using and what its known limitations are. For SecondLook Health, that might sound like: "It is a closed clinical AI platform that ingests medical records, generates a clinical summary and a chronological timeline, and lets me query the record set as I work through it. It does not train on case data. The vendor publishes information on how it handles records and on its accuracy benchmarks, and that documentation is available if needed." The exact phrasing matters less than the fact that you can produce a coherent two-sentence answer without reaching for jargon you do not fully understand.
Answer factually. If the tool is BAA-backed, the answer is the medical record set the attorney produced to you. If you used any public tool for any purpose, be ready to explain what you put in, and confirm that no PHI was included. The CRCC and ANA positions are aligned on this bright line: PHI does not go into public tools. Ever.
This is the question that everything turns on, and it is the question the Kohls and Concord Music decisions were really about. The right answer is concrete and specific: for every fact, date, and citation in the report, you went back to the underlying source document and confirmed it; where the platform provided a page reference, you opened that page and read it; the opinions in the report are yours, formed on the basis of records you personally reviewed. If you cannot say that honestly, no script can paper over the gap, and the right move is to fix the workflow before the next case rather than to fix the deposition answer.
This is the question to talk to your own attorney about, well before you ever get asked it. The answer depends on your jurisdiction, on whether the material is treated as work product in your practice, and on whatever record retention policy you and your counsel have agreed to. The next section covers the principles that should shape that conversation.
Before going further: nothing in this section is legal advice, and SecondLook Health does not have a recommended retention policy. The right answer for your practice depends on your jurisdiction, the type of work you do, and your own attorney's read of the rules that govern expert work product in your state. The Heppner decision discussed earlier adds a wrinkle worth knowing: a federal court has now held that consumer AI conversations may not be protected by attorney-client privilege or the work product doctrine, while suggesting in dicta that AI use directed by counsel on a professional-grade platform may be treated very differently. What follows are three principles that should shape the conversation you have with your own attorney, not a substitute for that conversation.
From a vendor standpoint, SecondLook Health lets users configure case retention from delete-on-finalization through indefinite retention, so whatever policy you and your counsel settle on, the platform supports it. The choice is yours and your attorney's, not ours.
If you use AI in your consulting practice, run through this list before your next case closes: your methodology section discloses the tool in plain language; every fact, date, and citation in the report has been verified against the underlying source; the platform is a BAA-backed professional tool rather than a consumer chatbot; you can describe what the tool is and what its limitations are in two sentences; no PHI has ever gone into a public tool; and you and your retaining attorney have agreed on what gets retained and for how long.
The legal question about AI in consulting work is converging fast on a single standard, and it is the same standard your credentialing bodies have already adopted. Use the tool. Disclose it as part of your normal methodology section, not as a special category. Verify every output against the underlying source. Remain the expert. Consultants who follow that standard are not at meaningful risk in a Frye or Daubert challenge today. Consultants who hide AI use, or who cannot explain what they did with it, or who fail to verify what it produced, increasingly are.
Every part of this post points back to one principle. It is not the tool, it is the verification. That is what the published guidance from every major credentialing body amounts to when you strip out the vocabulary differences, and it is what every recent court decision turns on when you read past the headlines. Build your workflow around that principle and the rest of the questions get easier to answer.
If your work involves vocational evaluation, life care planning, or legal nurse consulting and you would find a one-page methodology fact sheet useful for attaching to case files or handing to opposing counsel, we have one. Email hello@secondlookhealth.ai and we will send it.