A practical guide for forensic vocational experts, life care planners, and legal nurse consultants.

The short answer: if you use artificial intelligence (AI) in your consulting practice, you do not have to hide it, and in most jurisdictions you should not try. What you do have to be ready to do is explain in plain language what tool you used, what you put into it, what came out, and how you verified what came out. The experts whose testimony is currently getting excluded are not the ones who used AI. They are the ones who could not answer those questions when opposing counsel pressed them on it.

Two things changed in the last twelve months that make this conversation urgent. The vocational and rehabilitation counseling credentialing bodies both published formal AI guidance for the first time, with the American Board of Vocational Experts (ABVE) issuing its Guidelines for Ethical AI Use in Forensic Vocational Evaluations in July 2025 and the Commission on Rehabilitation Counselor Certification (CRCC) following with a detailed FAQ in February 2026. They were building on a foundation that the American Nurses Association (ANA) and the American Medical Association (AMA) had already laid years earlier, and on the American Bar Association's Formal Opinion 512 from July 2024, which now governs the lawyers who will be deposing you. The second thing that changed is harder to summarize but more important: courts started excluding expert testimony when AI use went sideways, and one court in February ruled that conversations with consumer AI tools are not shielded by privilege and may be discoverable. Four published opinions now form the rough outline of what defensible AI use looks like.

The principle: it is not the tool, it is the verification

Read every published opinion on AI in expert testimony and one fact jumps out. In none of them was AI use itself the disqualifying act. The disqualifying act was failing to verify the output and then putting it under oath. The court in Kohls v. Ellison (discussed below) was explicit on this point, going out of its way to note that AI has the potential to transform legal practice for the better and faulting the Stanford professor not for using a large language model (LLM) in his research but for submitting unverified AI-generated content as sworn testimony.

This is the unifying principle of everything that follows, and it is the same principle running through every credentialing body's published guidance and through every recent court decision. The expert's job has not changed. It is the same job it has always been: to form an independent opinion based on facts the expert has personally verified. The tool that helps you organize the underlying material is no more and no less consequential than the highlighter you used on a paper chart in 1998. What matters is whether you confirmed every fact, date, citation, and reference against the source before you signed the report.

If you internalize this one idea, the rest of the post is implementation detail.

Treat AI like every other tool in your methodology section

The single most useful practical move you can make is to stop treating AI as a special category and start treating it as one tool among many. Forensic experts have always disclosed methodology. Vocational experts list their transferable skills software, their labor market access tools, their O*NET searches, and their assessment instruments. Life care planners cite their cost databases. Legal nurse consultants list the records they reviewed and the specialty references they consulted. AI-assisted record review belongs in the same place, written in the same matter-of-fact tone.

A defensible methodology section in a vocational or life care planning report might read like this: "In preparing this report I reviewed the medical records produced by counsel, supplemented by labor market data from [database], transferable skills analysis using [software], and AI-assisted record organization and summarization through SecondLook Health, a SOC 2 and Health Insurance Portability and Accountability Act (HIPAA) compliant clinical analytics platform that operates under a Business Associate Agreement where required by the Customer's regulatory status and extends equivalent contractual privacy commitments to all customers regardless of that status. All facts, dates, and citations in this report were independently verified against the underlying source documents."

Written this way, the disclosure stops looking exotic and starts looking like infrastructure, which is what it is. The verification language is built in, so the deposition question about validation is half-answered before opposing counsel asks it. And because the same paragraph appears in every report you produce, opposing counsel cannot argue that you treated this case differently from any other. Pick the language you like, write it once, and use it from now on.

What your credentialing body actually requires

Start here, because this is the standard you will be measured against in a Frye or Daubert challenge. Opposing counsel does not need to invent a theory of why AI is bad. They will quote your own profession's published guidance back to you and ask whether you followed it. A year ago, the answer to that question varied wildly depending on which credentialing body you were certified through. Today, every major body has converged on roughly the same handful of principles, which makes building a defensible workflow much easier than it used to be.

ABVE: AI may augment, never replace, expert judgment

ABVE released its Guidelines for Ethical AI Use in Forensic Vocational Evaluations on July 7, 2025. The document is short, principled, and binding on Diplomates and Fellows through the Code of Ethics. Five rules sit at its center.

  • AI tools must augment expert judgment, not replace it. Automated vocational conclusions are explicitly prohibited.
  • Evaluations using AI require transparency and informed consent.
  • Data handling must comply with HIPAA, the European Union's General Data Protection Regulation (GDPR), and accepted security best practices.
  • Experts must actively safeguard against algorithmic bias that could disadvantage vulnerable populations.
  • Reliance on unverified predictive models is prohibited.

ABVE does not mandate a specific disclosure sentence in every report. It does require that the expert remain the decision-maker, that the client know AI is involved, and that the data stay secure. If your workflow already meets those tests, you are inside the lines.

CRCC: BAA, no PHI in public tools, informed consent, opt-out

The CRCC FAQ released in February 2026 is more granular and applies to anyone holding the Certified Rehabilitation Counselor or Certified Vocational Evaluator credential. The hard rules are:

  • Only use HIPAA-compliant systems your agency has vetted and that have a signed Business Associate Agreement (BAA). Public-facing tools like ChatGPT are off limits for client work.
  • Strip all Protected Health Information (PHI) and Personally Identifiable Information (PII) before any data goes into a public model. Assume anything entered into a public tool is no longer confidential.
  • Inform the client that AI is being used, explain what data is being processed, and document their consent. The client retains the right to opt out without penalty.
  • The counselor retains final authority and accountability. AI outputs are suggestions, not directives.
  • Treat AI literacy as Continuing Professional Development. CRCC ties this to Code Sections E.1 and E.2 on professional and functional competence.

What makes the CRCC document worth reading carefully is the explicit recognition that closed BAA-backed platforms are a different category from public chatbots, paired with a requirement that you be able to explain how the tool works at a basic level. Counselors who use a tool they cannot describe (what it was trained on, where its outputs come from, what its known limitations are) are out of compliance with their own ethics code before they ever get to a courtroom.

ANA: AI augments, supports, and streamlines expert clinical practice

Legal nurse consultants and any registered nurse (RN) working in medical-legal consulting are bound by the American Nurses Association Code of Ethics. The ANA's ethics body, the Center for Ethics and Human Rights, published its position statement on the Ethical Use of Artificial Intelligence in Nursing Practice in 2022. The 2025 revision of the Code of Ethics for Nurses now incorporates AI directly through a new section called Provision 7.5, which addresses technology, ethics, and policy. The key points map cleanly onto everything ABVE and CRCC are saying.

  • AI augments, supports, and streamlines expert clinical practice. It does not replace nursing knowledge, judgment, critical thinking, or assessment skills.
  • Nurses must ensure transparency, eliminate bias, prevent health disparities, and protect patient privacy and confidentiality.
  • Nurses are responsible for being informed about AI and for ensuring its appropriate use, which means understanding the tools they rely on rather than treating them as black boxes.
  • Provision 7.5 explicitly recognizes that machine learning and AI are already deeply embedded in healthcare, and asks nurses to critically question the assumptions of these tools rather than defer to them.

The American Association of Legal Nurse Consultants (AALNC) has not published its own AI position statement, but it is formally aligned with the ANA, which makes the ANA position the binding guidance for legal nurse consultants in practice and a reasonable answer to give when opposing counsel asks which standard governs your work.

AMA: augmented intelligence, oversight, and physician accountability

If you work alongside physician experts, or if you are a physician practicing in life care planning or medical-legal consulting, the AMA framework matters. The AMA released its Principles for Augmented Intelligence Development, Deployment, and Use in November 2023 and expanded that work in 2025 with a new policy on AI literacy in medical education and a STEPS Forward governance toolkit for clinical practices. The AMA framing is worth knowing because it originated the term "augmented intelligence" that the ANA and others have since adopted, and because it is unusually clear about physician accountability.

  • AI in healthcare must be transparent to both physicians and patients. When AI directly impacts patient care or medical decision making, that use should be disclosed.
  • Clinical experts are best positioned to determine whether AI applications are valid from a clinical perspective. Voluntary compliance is not sufficient on its own.
  • AI augments, supports, and streamlines expert clinical practice. It does not replace the physician's clinical judgment or accountability.
  • By 2024, nearly 70 percent of physicians were using AI tools in some form, up from 38 percent the year before. The AMA's position is that this growth must be matched by literacy, governance, and oversight, not slowed.

ABA Formal Opinion 512: the lawyers deposing you are operating under the same rules

This is the one most experts overlook, and it is the one that changes the conversation in deposition. On July 29, 2024, the American Bar Association Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on Generative Artificial Intelligence Tools. It is the leading ethics guidance for lawyers in jurisdictions that follow the ABA Model Rules. The key points include:

  • Lawyers must understand the capabilities and limitations of any AI tool they use. They cannot delegate professional judgment to a tool they do not understand.
  • Lawyers must independently verify any AI output that becomes part of their work product. Hallucinations are a known failure mode, and the lawyer is responsible for catching them.
  • Lawyers must obtain informed client consent before entering client information into self-learning AI tools. Boilerplate consent in engagement letters is not enough.
  • Lawyers have a duty of candor toward the tribunal and a separate duty under Rule 11 to verify the truth of what they file. The Kohls court has already suggested this duty may now require attorneys to ask their experts directly whether they used AI and how they verified the output.

Why does this matter for you? Because it means the attorney who hires you is operating under the same verification standard you are. When opposing counsel asks the deposition question about your AI use, your retaining attorney should already have asked you the same question and gotten the same answer. The ABA framework is also useful in the deposition itself. If pressed, you can note that your verification posture is consistent with the standard the questioning attorney's own profession requires of them. That is a stronger position than most experts realize.

Other bodies: educational efforts, no formal positions yet

Several organizations that touch this work are developing AI content but have not published formal binding positions. The Commission for Case Manager Certification has run educational webinars and offers a continuing education course on AI in healthcare, but no position statement. The International Association of Rehabilitation Professionals and its International Academy of Life Care Planners section have run an AI 101 webinar series for vocational evaluators and life care planners since 2024, again without a formal binding statement. The National Rehabilitation Association, the National Rehabilitation Counseling Association, and the American Academy of Physician Life Care Planners have not published AI guidance as of this writing. The absence of a formal position from any one of these bodies is not a license to ignore the issue. In a Frye hearing the relevant question is whether a practice is generally accepted in the field, and the published positions of ABVE, CRCC, ANA, and AMA will be treated as evidence of what the field accepts regardless of which credentialing body sits closest to your particular practice.

What courts have actually done

Four cases decided since October 2024 tell you most of what you need to know. Read them with the principle from the top of this post in mind: every one of these decisions was about verification failure or about consumer-tool sloppiness, not about AI use itself.

Matter of Weber (NY Surrogate's Court, October 2024)

An expert in a trustee accounting matter used Microsoft Copilot to perform calculations. When questioned, he could not say what prompt he used, what sources Copilot drew on, or how the tool worked. The court excluded the testimony and went further, holding that counsel has an affirmative duty to disclose AI use in expert materials and that AI-generated evidence should be subject to a Frye hearing before admission. The fatal problem was not that the expert used Copilot. The fatal problem was that he could not describe what he had done with it or verify its output. Weber is the case opposing counsel will cite first.

Kohls v. Ellison (D. Minn., January 2025)

A Stanford professor submitted an expert declaration in support of a Minnesota deepfake statute. He used GPT-4o to help draft it. The declaration cited two academic articles that did not exist and misattributed a third. The expert admitted he had not verified the output before signing under penalty of perjury. The court excluded the declaration entirely and noted that under Rule 11, attorneys may now be required to ask their experts directly whether they used AI and how they verified anything it produced. Again, the court took pains to say AI use was not the problem. Failure to check the output before signing was.

Concord Music Group v. Anthropic (N.D. Cal., May 2025)

A defense expert declaration cited a nonexistent article with co-authors who had never worked together. Counsel called it an honest mistake from using an LLM to format citations. The court called it a plain and simple AI hallucination, struck the offending paragraph, and noted that the incident undermined the credibility of the expert's entire declaration. One unverified citation contaminated the credibility of every other opinion in the document.

United States v. Heppner (S.D.N.Y., February 2026)

This is the case that matters most for the question of what is discoverable. A securities fraud defendant used the consumer version of Claude to draft defense strategy documents after receiving a grand jury subpoena, without his attorneys directing him to do so. The FBI seized the documents from his home. His lawyers argued they were protected by attorney-client privilege and the work product doctrine. Judge Jed Rakoff in the Southern District of New York ruled that they were not, in what is widely described as the first decision of its kind nationwide.

The reasoning matters for every consultant using AI. Judge Rakoff held that there was no attorney-client communication because Claude is not an attorney. He held that there was no reasonable expectation of confidentiality because the consumer version of the tool operates under a public privacy policy. And he held that the work product doctrine did not apply because counsel had not directed the defendant to use the tool.

Two passages from the decision are worth knowing. First, the court explicitly noted that the consumer-tier privacy policy was central to its analysis, leaving open whether enterprise platforms with negotiated confidentiality terms might be treated differently. Several practitioner commentaries on the decision have made the same point: a Business Associate Agreement and an enterprise contract are not just better security posture, they are potentially the difference between privileged and discoverable. Second, the court wrote that "had counsel directed Heppner to use Claude, Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer's agent within the protection of the attorney-client privilege." That dicta is the roadmap for how expert use of AI should be structured: at the direction of counsel, on a BAA-backed enterprise platform, with the expert maintaining their own independent verification.

Heppner is the case that completes the picture. Weber, Kohls, and Concord Music are all about verification failure. Heppner is about discoverability, and it draws a sharp line between consumer-grade chatbots used on a whim and professional-grade tools used at the direction of counsel. The experts who structure their workflow on the right side of that line are protected by the same doctrines that have always protected expert work product. The ones who treat consumer chatbots as confidential workspaces are not, and the case law is now explicit about why.

The deposition playbook

You are in a deposition. Opposing counsel asks the question. What do you actually say?

Question 1: Did you use artificial intelligence in preparing your report?

Answer truthfully and specifically. Do not volunteer beyond the question, but do not evade it either. The cleanest answer names the platform, establishes its compliance posture, and asserts your independent judgment in a single breath. Something like: "I used a SOC 2 and HIPAA compliant clinical analytics platform called SecondLook Health to help organize and summarize the medical records in this case. The platform operates under a Business Associate Agreement with customers who are HIPAA Covered Entities, and the vendor extends equivalent contractual privacy commitments to all customers regardless of regulatory status. I reviewed every output, verified the source citations against the underlying records, and the opinions in my report are my own." That phrasing confirms use without apology, names the tool, establishes the security posture in a way that is accurate whether your work touches a Covered Entity or not, asserts independent professional judgment, and lands cleanly inside the ABVE guidelines, the CRCC FAQ, the ANA position statement, and the AMA principles all at once.

Question 2: How does the tool work?

You do not need to be a computer scientist to answer this. You do need to be able to describe in plain language what category of tool you are using and what its known limitations are. For SecondLook Health, that might sound like: "It is a closed clinical AI platform that ingests medical records, generates a clinical summary and a chronological timeline, and lets me query the record set as I work through it. It does not train on case data. The vendor publishes information on how it handles records and on its accuracy benchmarks, and that documentation is available if needed." The exact phrasing matters less than the fact that you can produce a coherent plain-language answer without reaching for jargon you do not fully understand.

Question 3: What did you put into it?

Answer factually. If the tool is BAA-backed, the answer is the medical record set the attorney produced to you. If you used any public tool for any purpose, be ready to explain what you put in and to confirm that no PHI was included. The CRCC and ANA positions are aligned on this bright line: PHI does not go into public tools. Ever.

Question 4: How did you verify the output?

This is the question that everything turns on, and it is the question the Kohls and Concord Music decisions were really about. The right answer is concrete and specific: for every fact, date, and citation in the report, you went back to the underlying source document and confirmed it; where the platform provided a page reference, you opened that page and read it; the opinions in the report are yours, formed on the basis of records you personally reviewed. If you cannot say that honestly, no script can paper over the gap, and the right move is to fix the workflow before the next case rather than to fix the deposition answer.

Question 5: Will you produce your prompts, queries, and the tool's outputs?

This is the question to talk to your own attorney about, well before you ever get asked it. The answer depends on your jurisdiction, on whether the material is treated as work product in your practice, and on whatever record retention policy you and your counsel have agreed to. The next section covers the principles that should shape that conversation.

A note on record retention

Before going further: nothing in this section is legal advice, and SecondLook Health does not have a recommended retention policy. The right answer for your practice depends on your jurisdiction, the type of work you do, and your own attorney's read of the rules that govern expert work product in your state. The Heppner decision discussed earlier adds a wrinkle worth knowing: a federal court has now held that consumer AI conversations may not be protected by attorney-client privilege or the work product doctrine, while suggesting in dicta that AI use directed by counsel on a professional-grade platform may be treated very differently. What follows are three principles that should shape the conversation you have with your own attorney, not a substitute for that conversation.

  • Decide before you need to. The strongest position any expert can take into a contested case is a written retention policy that predates the case and applies uniformly to every case the expert handles. Routine destruction in the ordinary course of business is a long-established and well-protected concept in evidence law, and it is fundamentally different from destruction in anticipation of litigation. Whatever policy you and your attorney land on, document it in writing, date it, and apply it consistently from that day forward.
  • Apply it the same way every time. Inconsistent retention is worse than either extreme. An expert who keeps everything for some cases and destroys everything for others has a problem that no uniform policy in either direction would have created. Pick a rule, follow it without exception, and be able to describe it in deposition without hesitation.
  • Match retention to your work product theory. Some experts treat AI tool outputs as intermediate work product, similar to draft notes or margin annotations on a printed chart, and apply a destroy-on-finalization policy by default. Others treat the same outputs as part of the record reviewed and retain them with the case file indefinitely. Both positions have an internal logic and both are defensible if applied consistently. What is not defensible is holding one position when it suits you and the other when it does not, which is exactly the kind of inconsistency opposing counsel will look for. Talk this through with your attorney and pick a posture that matches how you treat all of your other intermediate work.

From a vendor standpoint, SecondLook Health lets users configure case retention from delete-on-finalization through indefinite retention, so whatever policy you and your counsel settle on, the platform supports it. The choice is yours and your attorney's, not ours.

A risk-reduction checklist

If you use AI in your consulting practice, run through this list before your next case closes.

  • Confirm your platform is HIPAA-compliant and that you have a signed Business Associate Agreement. If you cannot point to the BAA, you do not have one.
  • Never enter PHI into a public tool. Not ChatGPT, not Claude.ai, not Gemini, not Copilot, not any other consumer chatbot. Use only platforms vetted for your work.
  • Verify every fact, date, page reference, and citation in your report against the underlying record. If the platform produced it, you confirmed it. This is the single rule that all of the case law turns on.
  • Disclose AI use as part of a comprehensive methodology section that lists every tool, database, and resource you used. Treat AI as one item among many, not as a special category.
  • Keep a one-page methodology note in your case file describing what tool you used, what you used it for, and how you verified the output.
  • Talk to your attorney about a written, uniform record retention policy and apply it the same way to every case.
  • Read the published AI guidance from the credentialing body that governs your practice. ABVE, CRCC, ANA, AMA, and ABA Opinion 512 are all short and all worth a careful read.
  • Be able to explain, in two sentences, how your tool works and what its limitations are. If you cannot, you are not yet competent to use it under your own code of ethics.

Where this leaves you

The legal question about AI in consulting work is converging fast on a single standard, and it is the same standard your credentialing bodies have already adopted. Use the tool. Disclose it as part of your normal methodology section, not as a special category. Verify every output against the underlying source. Remain the expert. Consultants who follow that standard are not at meaningful risk in a Frye or Daubert challenge today. Consultants who hide AI use, or who cannot explain what they did with it, or who fail to verify what it produced, increasingly are.

Every part of this post points back to one principle. It is not the tool, it is the verification. That is what the published guidance from every major credentialing body amounts to when you strip out the vocabulary differences, and it is what every recent court decision turns on when you read past the headlines. Build your workflow around that principle and the rest of the questions get easier to answer.

If your work involves vocational evaluation, life care planning, or legal nurse consulting and you would find a one-page methodology fact sheet useful for attaching to case files or handing to opposing counsel, we have one. Email hello@secondlookhealth.ai and we will send it.

Want to save a copy of this guide?

Download it here

Sources

  • American Board of Vocational Experts. Guidelines for Ethical AI Use in Forensic Vocational Evaluations. Adopted July 7, 2025. abve.net/ai-guidelines
  • Commission on Rehabilitation Counselor Certification. Frequently Asked Questions and Guiding Statements to Support CRCs Using Artificial Intelligence. February 2026. crccertification.com
  • American Nurses Association. The Ethical Use of Artificial Intelligence in Nursing Practice. ANA Center for Ethics and Human Rights. 2022. nursingworld.org
  • American Nurses Association. Code of Ethics for Nurses with Interpretive Statements, Provision 7.5. 2025. codeofethics.ana.org
  • American Medical Association. Principles for Augmented Intelligence Development, Deployment, and Use. November 2023. ama-assn.org
  • American Medical Association. Policy on AI Literacy in Medical Education. November 2025. ama-assn.org
  • American Bar Association Standing Committee on Ethics and Professional Responsibility. Formal Opinion 512: Generative Artificial Intelligence Tools. July 29, 2024. americanbar.org
  • Matter of Weber, NY Surrogate's Court, Saratoga County, October 2024.
  • Kohls v. Ellison, No. 24-CV-3754, 2025 WL 66514 (D. Minn. Jan. 10, 2025).
  • Concord Music Group, Inc. v. Anthropic PBC, No. 24-CV-03811-EKL, 2025 WL 1482734 (N.D. Cal. May 23, 2025).
  • United States v. Heppner, No. 1:25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026) (Rakoff, J.).
  • Federal Rule of Evidence 702, as amended December 1, 2023.
  • Frye v. United States, 293 F. 1013 (D.C. Cir. 1923).
  • Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993).