Webinar Recording: AI Report Writers Are Dangerous To The Industry

AI Ethics in Psychological Assessment
How to Use AI Without Harming Patients, Undermining Reports, or Risking Your License

Webinar Recording & Full Transcript

Thank you to the hundreds of clinicians who joined this critical conversation led by Dr. Chris Barnes. If you missed the live session or want to revisit the details, the full webinar and transcript are available below.

What You’ll Learn:

This webinar is for psychologists, school psychs, neuropsychs, and clinical professionals who are curious, cautious, or already experimenting with AI tools. It’s not about fear—it’s about responsibility, rigor, and patient safety in a rapidly changing landscape.

Duration: 90 minutes | Presented by Chris Barnes, Ph.D., Clinical Psychologist

TRANSCRIPT:

Introduction

Thank you for spending your time with me this afternoon. I wanted to do this webinar because I am knee-deep in all of this AI stuff. I'm having conversations, there's a lot of interest coming my way, and I'm just really, really immersed in all of it. So my hope for us this afternoon is to do just a few things before we get too far into it.

First, a little bit of housekeeping. Everyone's gonna get a recording of this. You're going to have this exact slide deck sent to you, my voice is gonna be in it, and you're gonna see my video – you're gonna have all of these things. So take notes if you'd like, but don't feel like you have to take a lot of notes. Also, there is a Questions and Answers button at the bottom of your Zoom, so feel free to loft questions my way. Throw them in there; I'll probably save them for the end, though there's a chance I'll answer some throughout our time together this afternoon. Most importantly, I just want as many questions as I can get, because I can also reach out to you individually. I can send an email to everyone that attended, and I can even attach a document when I send out these slides as well.

Background

A quick intro. I'm Dr. Chris Barnes. I'm a licensed clinical psychologist. I've been doing this for 14 years. I own a group practice in Kalamazoo, Michigan. And I'm also developing all kinds of cool things. About two years ago I started something called the Lazy Psychologist. It started on a whim: I was working with a lot of practices and a lot of clinicians. We were working on automation. We were working on integrating AI responsibly into their practices, into their workflows.

As a result of that, I started thinking: I'm doing all these things for all these people, I have these problems, these clinicians have these problems, and I think there's something here. I think there's a way to do some things super responsibly. You'll see PsychAssist in the bottom corner throughout the slides. That is a company I'm building – an end-to-end AI platform. This presentation is not really about that; it's about a lot of the information I've picked up while building it.

Me and my team think through things really deeply, and we want to make sure that we're doing right by the patient. As a result, a lot of things have come up. People have asked lots of questions. I've been working with people, learning about their practices, learning about their workflows, and the same themes keep coming up. My hope is that this webinar – and my experience not only as a clinician but as a tech founder as well, clinician first, tech founder second – can open up a discussion about the risks and dangers of some of these AI tools out there (mine included): what those risks are, how we can mitigate them, and what you as a user should be paying attention to.

 


Agenda

Today we’ve got about an hour and a half together. I want to spend a chunk of this time talking about the dangers of AI – particularly when it comes to writing reports and using it in our clinical workflow.

We’re going to go through a brief demo just to outline how context is everything: how we can think about things in a secure way, how we can use this tool that’s available to us – AI – in a compliant way, and how, at the end of the day, context really is everything. You’re going to hear me say that a lot throughout our time together this afternoon.

The goal is to think about AI in a curious way. This isn’t supposed to produce tech panic. Rather, I want to encourage you to have some tech curiosity and to approach things in a really intelligent and objective way.

Context is Everything

We’ll spend a little bit of time talking about how this may play out in the future. AI is getting faster. It’s getting cheaper. It’s not going anywhere. I was just checking a feed before I jumped onto this webinar, and OpenAI is now rolling out memory that can reference all of your past conversations – if you have an account with them. It just goes to show how quickly these models are improving. They’re growing exponentially. I don’t think any of this is going anywhere.

Whether you choose to use AI or not, it’s really important to be aware of the inherent risks that are associated with it.

The Role of AI in Report Writing

What AI should do is speed things up. It can create clarity – sometimes without losing depth, sometimes at the cost of it. What it often does is polish text. AI is a great writer, but at times it loses nuance.

We can use it to integrate some patient context. We can understand documentation we produce in the context of lots of different data sources. However, without the training that we go through as clinicians, oftentimes some of that nuance – or the signal amidst the noise – is lost.

Although it might sound good – even brilliant at times – it may lack integrity, nuance, or key detail. Our patient might never know, but sometimes we don’t even know. That’s the risk.

Using AI Responsibly

What I hope AI does in the future is enhance our workflow. It shouldn’t replace our workflow – and it definitely shouldn’t replace our brains. We went to school for a long time. We sat with this stuff, worked with it, experienced it. As clinicians, we’re trained to identify subtle patterns, nuances, and inconsistencies – often on autopilot.

But when we use tools like AI, we risk losing that nuance. It may skip over those details that only a trained human would catch. That’s a serious clinical risk.

We need AI to help us integrate data across the full patient context. Too often, we write in silos – WISC here, CPT there, BASC in another section. We treat them like isolated pieces, but that’s not how people work. We need to understand each piece of data in light of the others. When AI is used right, it can help us surface patterns we might miss. It can show us when a discrepancy matters.

But when AI is used irresponsibly, it can cause real harm. It can suggest diagnoses based on flawed logic or incomplete information. And those misdiagnoses? They don’t just disappear. They follow our patients to schools, to doctors, to life.

Even something as seemingly simple as mislabeling a child with ADHD – when it’s not warranted – can affect how they’re treated by teachers, family members, even themselves. We know how powerful labels can be. That’s why we have to stay vigilant.

Understanding the Risks

AI-generated reports sound brilliant – but they can lack clinical truth.

Reports, whether written by humans or AI, get stamped with authority. They look official. But without context? They might be missing the most important parts. That’s the danger.

So what do I think AI should do when it’s done right?

It should enhance our workflow – not replace it.

It should help us surface patterns – not decide what’s important for us.

It should support our decisions – not make them for us.

We can use AI to help build bridges between data sources. But it can’t replace the clinician’s role in interpretation. That’s where nuance lives.

Data in Silos vs. Data with Context

When we build reports in silos, we miss the full picture.

We write the WISC section. Then we write the CPT. Then the BASC. We treat each as a standalone element. But patients don’t exist in pieces. They’re not isolated data points. They’re whole people with complex experiences.

This is where AI can shine – if we let it.

We can use large language models to identify trends across data. We can use them to ask questions like:

  • How does this low processing speed line up with attention scores?

  • Is this behavioral presentation consistent with the cognitive profile?

  • What’s missing from this data set?

But that only works if we train the models well. If we build workflows that respect context. And if we, the clinicians, stay in charge.

The SPICE Framework

Anyone who knows me knows I love a good acronym. So I came up with the SPICE framework as a way to think about ethical AI use in clinical work. Like with any great recipe – especially a spicy one – each ingredient matters. If you’re missing one, the whole thing can fall apart.

Here’s what SPICE stands for:

  • Structured Prompting: You need to have intelligent, organized inputs. Vague prompts lead to vague output.

  • Professional Judgment: This is why you went to school. AI doesn’t replace your clinical eye—it supports it.

  • Integration: The ability to synthesize across data. This is where humans shine and where AI still needs a lot of help.

  • Context: Without it, nothing works. Data is meaningless without the story.

  • Ethical Accountability: If you wouldn’t feel good putting your name on it, don’t use it.

If you’re only using four out of five? That’s not enough. Five out of five? That’s when things start to really work.
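To make “Structured Prompting” and “Context” a little more concrete, here is a minimal sketch in Python of what it can look like to assemble de-identified referral information, history, observations, and scores into one organized prompt before anything goes to a model. This is purely illustrative – the class, the field names, and the wording are a hypothetical example of the idea, not how any particular tool (PsychAssist included) is built.

```python
# Hypothetical sketch: "Structured Prompting" + "Context" from the SPICE framework.
# Assemble de-identified assessment context into one organized prompt instead of
# pasting raw scores. All names and values here are illustrative.
from dataclasses import dataclass, field


@dataclass
class AssessmentContext:
    referral_question: str
    developmental_history: str
    observations: str
    scores: dict[str, dict[str, int]] = field(default_factory=dict)  # e.g. {"WISC-V": {"PSI": 125}}


def build_prompt(ctx: AssessmentContext) -> str:
    """Turn structured, de-identified context into a single organized prompt."""
    score_lines = "\n".join(
        f"  {measure}: " + ", ".join(f"{name}={value}" for name, value in results.items())
        for measure, results in ctx.scores.items()
    )
    return (
        "You are assisting a licensed clinician. Summarize the findings below, "
        "flag inconsistencies across measures, and do NOT offer a diagnosis.\n"
        f"Referral question: {ctx.referral_question}\n"
        f"Developmental history: {ctx.developmental_history}\n"
        f"Observations: {ctx.observations}\n"
        f"Scores:\n{score_lines}\n"
        "The clinician will review and edit everything you produce."
    )


if __name__ == "__main__":
    ctx = AssessmentContext(
        referral_question="Concerns about sustained attention, inconsistent school performance, possible ADHD.",
        developmental_history="Family history of ADHD.",
        observations="Teachers report the student looks engaged but doesn't finish work.",
        scores={"WISC-V": {"PSI": 125}, "CPT-3": {"Omissions T-score": 50}},
    )
    print(build_prompt(ctx))
```

Notice that the prompt also states what the model should not do (diagnose) and who stays in charge. That’s Professional Judgment and Ethical Accountability showing up in the workflow itself, not just in our intentions.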

Common Fears Clinicians Have About AI

Let’s talk about the fears clinicians bring up with me all the time. Maybe you’ve had one of these too:

  • “I already have a system that works.”

  • “I have great prompts.”

  • “AI is a better writer than I am.”

  • “I’ve built a whole prompt library for each test I use.”

And to be fair – those things may all be true. But here’s the question:

Are you integrating everything?

Are you capturing the full patient context?

Are you creating something that not only sounds good – but is also deeply accurate, deeply personal, and clinically safe?

Having a system is great. But don’t let that system keep you from seeing patterns in data. From noticing things that don’t add up. From capturing something truly unique about your patient.

AI Is a Better Writer… But That’s Not the Point

AI might be a better writer than you. Honestly, it probably is.

The issue isn’t whether it can craft a beautiful paragraph. It’s whether it knows what actually matters.

Does it highlight the right findings?

Does it recognize contradictory data?

Can it differentiate between something clinically meaningful and something that’s just noise?

That’s the work we do as clinicians. That’s why we’re needed.

Live Demo: Context Changes Everything

Let’s walk through a live example together. I’m going to use ChatGPT for this demo, just because everyone’s familiar with it.

First, I enter a very basic prompt:

“Take these WISC scores and summarize them in a narrative paragraph.”

That’s it. No context. Just raw scores.

What does it return? A generic paragraph. Sounds okay. Technically accurate. Quick. Polished. But entirely in isolation. No meaning. No clinical interpretation. It’s a paragraph you could paste into a report – but it wouldn’t mean much.

Now, let’s do the same thing, but add some referral context.

I modify the prompt:

“These scores come from a child referred due to concerns about sustained attention, inconsistent school performance, and possible ADHD.”

The result? Immediately more robust. It starts tying the numbers into the reason for referral. It connects symptoms with data. It’s more personalized. It’s still fast – but now it’s actually helpful.

Let’s take it a step further and add developmental history.

I update the prompt again to include:

“There’s a family history of ADHD. Teachers report the student ‘looks engaged but doesn’t finish work.’”

Now we’re seeing real insight. It begins to explain the story behind the scores. Not just what the numbers are – but what they might mean.

Still fast. But now it’s got depth.

Putting It All Together

Finally, let’s layer in results from other tests.

Let’s say the WISC shows high processing speed (PSI through the roof), but the CPT-3 was totally average. No indicators of inattention.

That’s where most AI tools break down. But with this context, we can start to infer:

Maybe this kid isn’t inattentive at all. Maybe they’re too fast for their environment. Maybe they’re not stimulated enough.

That’s the clinical conversation. That’s the nuance.

Without that cross-measure context, you’d never know. You might walk away with a report that sounds smart but gets the story completely wrong.
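If you wanted to reproduce that progression outside the chat window, it might look something like the sketch below using the OpenAI Python SDK. Treat it as a hypothetical illustration only: the model name, the score values, and the prompt wording are my assumptions for demo purposes, and you would only ever send de-identified data, or data covered by an appropriate agreement with the vendor.

```python
# Hedged sketch of the demo: the same scores, three levels of context.
# Requires the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# in the environment. Model name, scores, and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

SCORES = "WISC-V: PSI 130 (very high). CPT-3: all T-scores in the average range."

prompts = {
    "no_context": f"Take these scores and summarize them in a narrative paragraph:\n{SCORES}",
    "referral_context": (
        f"{SCORES}\nThese scores come from a child referred due to concerns about "
        "sustained attention, inconsistent school performance, and possible ADHD. "
        "Summarize them in a narrative paragraph tied to the referral question."
    ),
    "full_context": (
        f"{SCORES}\nReferral: possible ADHD, inconsistent school performance. "
        "There's a family history of ADHD. Teachers report the student 'looks engaged "
        "but doesn't finish work.' Summarize the findings, note where the measures "
        "disagree, and list what the clinician should verify before interpreting."
    ),
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---\n{response.choices[0].message.content}\n")
```

The point isn’t the code. It’s that each call sends the exact same scores and differs only in how much context rides along with them – and the usefulness of what comes back scales with that context.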

The Problem with Brilliant-Sounding Reports

A year and a half ago, AI started hitting our field hard. We saw a flood of tools, many claiming to be HIPAA-compliant. We got used to decent output. We got used to $20/month subscriptions. It was fast. It sounded good. And for a while, it felt like enough.

But something else happened too – many of us started to notice the cracks.

The reports sounded brilliant… but lacked depth. They missed context. They missed what matters.

That’s where the risk lies.

The Illusion of Intelligence

AI can mimic intelligence. It can mimic empathy. But it does not have clinical judgment. It doesn’t have experience. And it definitely doesn’t have true contextual understanding.

A report may look clean. It may sound convincing. But it may completely miss the point. And that’s dangerous.

Because people – real patients – make real decisions based on these documents. If those documents are missing the nuance that only you, the clinician, would have added, we’re putting people at risk.

Reports Without Substance

Some of these AI-generated reports can erode our credibility as clinicians.

They sound good. But they may lack formulation. They may miss integration. They may strip out the art of what we do – reducing it to pattern-matching without thought.

And here’s the hard truth: sometimes, we don’t even notice.

We sign off on something that seems “fine.” But is it something we’d stand behind? Is it truly helpful to the patient? Or does it just pass?

That’s not good enough.

Cultural and Contextual Blind Spots

Another key point: AI doesn’t understand cultural nuance. It doesn’t understand environmental context. And it doesn’t have the lived clinical experience to say, “Wait, this doesn’t quite line up.”

We, as trained professionals, were taught to see these things. AI wasn’t.

If we rely on it too much, we run the risk of flattening out our reports – making them sound smart while missing the human truth.

Building Context-Rich Tools

I want to shift gears a little and talk about PsychAssist – how we’re thinking through AI integration from the ground up. One of the biggest realizations I had was that I, as a clinician, am part of the context.

We are part of the context of the assessment. Our writing style, our clinic’s flow, our service model – all of that matters. And if we ignore those things, the output gets generic really fast.

So when I built PsychAssist, I didn’t just build it to write faster. I built it to write smarter. To configure to your style. To recognize that different services require different workflows. That different referral questions shape how we write. And that documentation is not just a static thing – it’s a reflection of how we think, how we assess, how we care for our patients.

Reports Are More Than Products

The report is not just a product – it’s a bridge between the assessment process and the patient’s understanding of what’s going on. It’s not just something you hand off at the end. It’s something that tells a story.

And that story? It’s made of many pieces:

  • The data

  • Your clinical observations

  • Your interpretation

  • The patient’s history

  • Their current struggles

  • Their strengths

  • Their goals

AI can help weave those pieces together – but only if you, the clinician, remain at the center of that process.

Patient First, Always

Everything I’m building – and everything I encourage you to build into your own workflows – has to be patient-first.

At least 51% of the output needs to reflect patient benefit. It’s not about saving time. That’s not the goal. That’s just a nice side effect.

The goal is clarity. The goal is better outcomes. The goal is giving patients something that reflects them – not just a regurgitation of numbers.

If AI helps us get there faster and cleaner, awesome. But only if it does it without cutting corners on meaning, safety, or integrity.

When AI Goes Wrong

Let’s talk about what happens when AI fails.

There are three main areas where failure tends to show up:

  1. The User

    That’s us. The clinicians. Many of us haven’t been trained to use these tools effectively. We’re dabbling, we’re testing, but we’re not always prompting in a way that’s structured or safe. And because of that, we might put garbage in – and get garbage out.

  2. The Training

    Think about how we were trained. Years of grad school. Supervised practice. Case conceptualization. Large Language Models don’t have that. They have data scraped from the internet. No supervision. No practical experience. That gap matters – and it shows up in what they produce.

  3. The Tool Itself

    These tools are designed to be fluent, not accurate. They sound good even when they’re wrong. That’s how hallucinations happen. The output is polished and confident. But it might be fabricated, misleading, or biased.

What Bad AI Looks Like

When AI fails, it often fails quietly. And that’s what makes it so dangerous.

Here are some red flags:

  • Hallucinations – Fabricated facts presented as truth

  • Overconfidence – Definitive language where nuance is needed

  • False coherence – Output that sounds logical but is factually wrong

  • Oversimplification – Reducing patients to a few datapoints

  • Fragmentation – Disconnected, non-integrated content

  • No differential thinking – No “what else could this be?” built in

A patient is not just a profile of scores. And if your AI report reads like a list of score descriptions, it’s missing the point.

We need to go beyond summarizing data. We need to interpret it.

The Danger of Defaulting to Speed

Goodhart’s Law says:

“When a measure becomes a target, it ceases to be a good measure.”

If time savings becomes your primary goal with AI, then you’re measuring the wrong thing. Time savings is a side effect of a good system – not the reason to build one.

You don’t buy a Lamborghini because you want to go fast. You buy it for the experience. Speed is the byproduct.

Same with AI. If you’re only using it to save time, you risk building a system that lacks clinical integrity.

Is “HIPAA-Compliant” Enough?

Let’s talk about compliance. This is one of the hottest issues clinicians bring up when discussing AI tools.

Many people ask:

“If the tool says it’s HIPAA-compliant, does that mean I’m covered?”

And my response is: maybe. But that label alone isn’t enough.

What you need to be looking for is a combination of things:

  • Do they offer a Business Associate Agreement (BAA)?

  • Do they document where and how your data is stored?

  • Have they had third-party security audits?

  • Can you get clear answers to these questions?

A “HIPAA-compliant” badge on a website isn’t a certification. It’s a claim. And we owe it to our patients – and ourselves – to dig deeper.

Should You Tell Patients You’re Using AI?

The next big issue: consent.

Should we tell patients we’re using AI in the process of their report writing? That’s a nuanced question. There are clinicians who say, “I don’t disclose that I use Microsoft Word or Grammarly. Why would I disclose that I’m using AI?”

And there are others who believe full transparency is necessary – ethically and legally.

Here’s what I think:

If you’re using AI to support your thinking, that’s one thing.

If you’re using AI to do your thinking, that’s another.

It’s not about hiding. It’s about understanding why and how we’re using a tool – and communicating that clearly when appropriate.

Informed consent doesn’t mean we have to disclose the model architecture. It means we give people enough information to make a meaningful decision about their care.

Who Owns the Data?

Another common question I get: “Who owns the data if I use AI?”

The answer should always be: the patient.

You, as the clinician, are a steward of that data. Any AI platform you use should make this clear. They should have documentation around:

  • Data ownership

  • Data portability

  • Data deletion

  • Access and control protocols

If they don’t? That’s a red flag.

Make sure any vendor you work with puts patient rights at the center of their policy stack – not buried in a footer.

Training the Next Generation of Psychologists

One of the questions I often get is:

“What does this mean for training future psychologists?”

How do we teach students to develop their own clinical judgment if they’re being handed powerful tools like AI?

Here’s how I do it in my own practice:

From Day One, I let my graduate student play with AI. I sit with them. We do it together. I watch their prompts. I teach them what to look out for. We use it as a tool – not a shortcut. And we build awareness of both its strengths and its blind spots.

Because if we don’t let them use it now, they’re going to use it anyway.

The real risk isn’t that students will use AI. The risk is they’ll use it unsupervised. Without structure. Without clinical guidance.

So my position is: give them access. Let them experiment. But do it with high supervision and a strong ethical foundation.

The AI Detector Problem

Here’s a scenario I want you to imagine:

You just completed a beautifully written, thoughtful, AI-assisted psychological report. You send it to a patient. They copy-paste it into an online AI detector. It comes back 100% AI-generated.

What do they now think of you?

What do they now think of your profession?

Even if your report is 100% accurate… if the patient doesn’t trust it, it doesn’t matter.

That’s the reputational risk we face if we’re not thoughtful about how we use these tools.

It’s not just about being right – it’s about being believed.

Final Thoughts

This past year of building PsychAssist has been one of the most challenging, rewarding experiences of my professional life. But it’s also been full of anxiety. Full of risk.

Because when we do this work – when we touch automation, when we integrate AI—we’re not just building software. We’re reshaping parts of our profession.

And that comes with responsibility.

The choices we make now – about how we use these tools, about what we automate and what we don’t, about what we disclose and how – are setting the tone for the future of psychological assessment.

That future can be faster. It can be cleaner. It can be more efficient.

But only if it’s also more ethical. More transparent. And more human-centered.

Preparing for the Future of AI in Clinical Practice

Every morning, I wake up to a new AI headline. Something’s been updated. Something’s been released. Something’s been deprecated. The pace is staggering – and it’s only going to accelerate.

So whether you choose to use AI or not, you have to at least stay aware.

This is not about hype. It’s about preparation.

We need to ask:

  • What do these tools do well?

  • What are they terrible at?

  • How do we know when we’re making a clinical decision versus when we’re letting a model make it for us?

This isn’t about jumping on a trend. It’s about staying in the driver’s seat.

Final Q&A Themes

As we closed the session, here were some of the standout questions and reflections from attendees:

“How do I talk to clients about AI?”

Be clear about how you use it. Be transparent if it supports your writing process. But remember, patients don’t need the technical details – they need to trust your clinical reasoning.

“What if my AI tool makes a mistake I don’t catch?”

That’s why your role is critical. Always review the output. Always apply judgment. Never rubber-stamp what you didn’t critically think through.

“How do I train students to use AI responsibly?”

Let them explore. But supervise. Teach prompt design, critical review, ethical boundaries, and the importance of integrating their own thinking.

“Is AI coming for our profession?”

No. But it’s coming for our inefficiencies. The clinicians who thrive will be the ones who learn to collaborate with technology, not fear it.

Thank You

Thanks for being here. Thanks for thinking deeply. Thanks for caring enough to ask hard questions about your work and the tools you use to support it.

AI isn’t going anywhere. But neither is our responsibility to our patients, our profession, and our ethics.

Let’s keep building with integrity. Let’s keep asking questions. And let’s stay human – because that’s the part no model can replicate.

Secure your spot
in our beta program

Coming Q2 ’25