Webinar Recording: AI in Clinical Practice

Webinar Recording & Full Transcript

Thank you to everyone who attended our recent professional development webinar on implementing AI in psychological assessment. For those who couldn’t join us live or want to revisit the content, the complete recording and transcript are available below.

What You’ll Learn:

  • Practical AI Implementation: Discover how to effectively integrate AI tools like Bastian GPT, Notezap, Claude, and ChatGPT into your clinical workflow
  • Ethical Considerations: Navigate patient consent, data security, and professional responsibility when using AI in clinical practice
  • Step-by-Step Demonstrations: Watch real-time demonstrations of AI-powered documentation, report writing, and data analysis
  • Time-Saving Techniques: Learn strategies to reduce administrative burden while maintaining clinical quality and personal voice
  • Future-Proofing Your Practice: Understand upcoming trends in AI and how to prepare for continued technological evolution

This comprehensive session is designed for psychologists, school psychologists, and mental health professionals interested in leveraging AI to enhance efficiency without compromising quality or ethics. Whether you’re an AI skeptic or already implementing these tools, you’ll find valuable insights to help navigate this rapidly evolving landscape.

Duration: 90 minutes | Presented by Chris Barnes, Ph.D., Clinical Psychologist

TRANSCRIPT:

Introduction

All right. Lots of hands raised. This is great, so thanks for spending some time with me today. The goal today is to have a little fun, to get our fingers on the keyboard, really get some things going in terms of workflows and automation, and getting a little bit more comfortable in how we choose to use AI in our practice. We’ll also discuss some of the pitfalls you really need to be aware of. After all, we’re really thinking about people’s lives here, so we want to be really clear on how we approach these in a thoughtful and considerate way.

A little bit about me. I’m a clinical psychologist and I’ve spent the last couple years getting elbows deep into AI. I’m pretty lazy to be honest, although you couldn’t tell by the amount of time I put into all of this AI stuff. But I have always sought ways to speed up my practice. I have a group practice where we do a ton of assessment, primarily ADHD, because that’s the flavor of the decade right now. But we do a lot of learning disability work and we serve a lot of clients. The way that we’re able to do that is by focusing on efficiency. There are a lot of known knowns that we engage as clinicians, and I’ve been able to really identify mine.

About two years ago I started to do some consulting work with clinicians and practices. Through that work, I realized that my problems were just everyone else’s problems, and we all share the same issues when it comes to bottlenecks regarding documentation, automation, etc. This time last year I started taking it more seriously. I have a logo in the bottom right corner – I am building a program for assessment psychologists. The goal is to incorporate AI in a very thoughtful, intelligent and secure way so we can take care of some of the back end things. This is 100% not going to be a sales call – there is no obligation whatsoever, but it would be remiss of me not to use some of my experience in this conversation because it’s been a really satisfying year seeing all the tools develop.

Agenda

The agenda today is we’ve got about an hour and a half. I want to spend about thirty minutes discussing how we can use AI in our practice. We can find ourselves on various parts of these journeys. We can get more comfortable about where we are, knowing that there’s always a developmental process that requires sufficient thought, some consideration, and awareness of pitfalls.

I’ve got some real-world demonstrations I want to do using a couple different tools. I sent an email out this afternoon and we’ll be using:

  • Bastian GPT, which I’m super comfortable with
  • Notezap, which is an AI scribe
  • Claude (Anthropic’s product that everyone knows about)
  • ChatGPT as well

We’ll spend quite a bit of time doing that, and I hope that it’s a very interactive part of our session. Furthermore, I hope that we can lob many questions into the chat today and have a little bit of fun, because I think being a nerd is fun. I think there’s some real value to take away from this conversation, regardless of where you are on this journey. If this is your first time ever doing something with AI, perfect. And if you feel like you have quite a bit of experience, then we’ll just dive into where you’re at as well.

Towards the end of our conversation, we’ll talk a bit about what’s to come. It seems like every morning I wake up, and of course, the algorithms have me so all my feeds are full of AI stuff. Nevertheless, it’s changing day by day, and it’s just going to get better and cheaper, and it’s not going anywhere. I think in many ways, it’s important for us to pay attention to the trends, whether we choose to implement them or not. We’ll spend some time discussing what to expect moving forward in the AI world, and how you can prepare for some of the changes ahead. Whether you choose to implement these things or not, that’s totally within your control.

AI Journey Continuum

What I’d like for you to think about here in just the next few seconds is to figure out where you are on this AI journey. We all fall somewhere on this continuum. We have people who are skeptical like, “I don’t know about this whole robots taking over the world thing.” We’ve got a few people, I’m sure in this crowd, who are just a little bit curious. Maybe they play around a bit. Maybe they let their kid talk to Santa on ChatGPT over Christmas.

Perhaps some of you have already implemented some of these tools, and in fact in that pre-webinar form I sent out yesterday, I do see many people are using some pretty interesting tools, which is great. But there’s always room for refinement, and you’ll see throughout our time this afternoon, the devil’s in the detail. Garbage in equals garbage out, and so when we think about how we can use some of these tools, it’s really going to be narrowing down, refining and iterating over and over.

We have a lot of different people in a lot of different places, so I’d like you to throw into the chat where you might be on this journey. I tend to bounce between and cycle through many stages myself. I looked at some of the information on that form that you all submitted yesterday, and it looks like a lot of you are already in sort of that second or third category from the left, which is really good. And we also know we have to continue to refine and refine once again.

Take a couple seconds and think about how you may fall into this continuum, and then also how you may prepare for some of these next steps ahead because it is a constant journey. The irony is that the more we move to the right approaching the far right of the continuum, we’re right back to the cautious observer side because the more we know, the more we know we don’t know. It becomes a bit ominous at times.

Common AI Fears

So many of us have a lot of fears about AI use. I was looking through the results of some of the forms that you all filled out and many of them fell into these categories:

  • Am I going to lose my license?
  • What happens if my patients’ data gets released? There’s some concern about that.
  • Is what I’m getting actually accurate? AI hallucinations are an absolutely real thing, and we have to be aware of that.

You’ll hear me say this several times throughout our time: AI output should be used as the 0.5 draft of your document creation. It is there to create a framework. We can teach you how to train your AI, especially using templates. Furthermore, we have to always be the final source of truth for our clients. That’s why we went to school, to be able to craft documentation and use these tools to help our patients. I don’t think that AI is there to actually provide the help to patients, but we can certainly leverage its utility. Some of the advances that keep coming around are really great to help us as clinicians, even challenging how we’re doing this work ourselves.

A lot of people were wondering, is a robot going to take my job? I don’t think so. I think that our job is going to get very easy, and I do think that there is going to be a reallocation of resources when it comes to how people interact with clinical psychologists, particularly at a systems level, and even within the practice.

Even here at my practice, I have an admin who for the longest time before I really started to leverage some of these tools, was very overqualified for all the work she did. She was sending links, doing this and that – she was super overqualified and it was a bit awful for her. When we started to implement some of these tools, she was able to really use her resources and be the face of the practice rather than burying herself in a tremendous amount of paperwork.

So I don’t think AI is going to take our jobs, particularly as clinicians. If anything, it’s going to augment our work in such a way where we can become even more skilled because we’re not going to be bogged down with administrative burden.

Then there’s another concern around control – who’s really in charge here? Where’s all the information coming from? At the end of the day, we have to remember that we are really at the end of the decision-making process as clinicians. Everything that we do, everything we choose to implement should really be in support of our clinical work and the relationship we’re forming with our patients.

Whether therapy or assessment – even for school psychologists who are working with little kids and families – some of this work can be very daunting and emotionally taxing. Cognitive burden can be overwhelming at times. When we can take some of that burden away by leveraging these very basic tools, I think that everyone wins and we can spend more time with our patients, with our families, or dogs, maybe just at the beach.

Patient Consent and Transparency

Another area of concern that many people have is, do I tell my patients? When I think about consent and chatting with patients, maybe even families or school systems, I really think there are 4 basic levels:

Level 1: This is the entry level. “Hey, I use AI.” Here it is, it’s in writing, it’s a super basic statement. This would be that really basic level of consent as it relates to working with AI and the transparency surrounding it with your patients.

Level 2: Moving up in that hierarchy, with a little more clarity, we can actually name the AI systems we use rather than just making general technology statements. You could also assure the client at some basic level that we have HIPAA or FERPA compliance, or whatever regulatory framework applies, and also that it’s not really used for any decision making, although I don’t think it should be used for much decision making to begin with.

Level 3: Even further up into that hierarchical level, we see this next level where we’re really starting to hone in on the details about how and why and when you use these tools, so you can provide that super transparent level of understanding to the patients and people you work with. This doesn’t necessarily have to say “I use these tools, I use them on Thursday and I do XYZ with them.” What I’d rather see is “this is what we do and I feel very comfortable using these particular tools because we know we have our stuff sorted out on our end. We have these tools under business associate agreements, we have looked at their security,” etc.

Level 4: This highest level that I think about when I think about the transparency as it relates to the use of AI is just getting down to the nitty gritty. I think everyone’s going to fall somewhere between levels 1 and 4, maybe like 1.5 or 3.5. But this is the most detailed level of consent. Here I think it’s super important, if you feel like you need to be this transparent, to name the tools and name the thought process behind why you choose to use them and when you use them, etc.

I have seen some recent psych reports as I’ve been doing some document review for some of my own patients that even indicate in these documents that “we use AI” and there was one with an asterisk that indicated the generative piece of that document. I’m not here to judge any of those levels, but I think everyone here is going to fall somewhere on that continuum. The reason I bring these up is because I think it’s super important to pay attention to where you are, because where you are really guides what comfort level and what tools and maybe how experimental you can be with this use. The goal is to always reduce burden and as many of you know, the document burden is awful.

Us vs. AI or Collaborative Approach?

As we continue to work through where we are in this process and how we can integrate it, there’s this dialectic between “us versus AI” – “it’s going to take my job” or “I don’t want anything to do with it because if I look at it, it’s just going to grow a little bit bigger” – and then on the other side, a collaborative approach. The collaborative approach thinks about how we leverage certain tools in strategic areas that have been thought through, and we just use it as a refined tool from our toolkit.

When I think about myself as a clinician (I’m certainly biased, I’m building an AI program for psychologists), I really think about how to augment our own work – how do we use these tools in a way that doesn’t do all the work for us? In fact, I think some of the work that we do is quite fun – the thinking through and sifting through data, and the connections we make with people – I would never want to give that away.

But writing a report, writing a document to a standard – all of those things have historically been the biggest burdens in our profession. When we can reduce the burden on those particular issues, I think it creates a lot of bandwidth. It’s a surplus at some level because I know that when I used to have to write reports at 4:00 in the morning the night before (it wasn’t a great night and it usually had to be written at 4:00 in the morning because I had been procrastinating for some time) – that was tough. What I think this does is it continues to create this flywheel of bandwidth, and it’s not bleeding out as fast as it’s building up because we’re able to achieve a bit more work-life balance. Overall, it’s just less taxing to get through some of the necessary parts of our job.

Bastian GPT Demonstration

Let me share Bastian GPT with you all. The beauty of Bastian, first and foremost, is that they will sign a business associate agreement (BAA). A business associate agreement is a sort of contract between you and this vendor, in this case, Bastian GPT. It outlines the security measures that they take to maintain your data, as well as some of the precautions they have in place, and also what they’re going to do if something goes wrong.

My wife works for a company that was hacked by a Russian hacker last fall, so if that happened to us, what are they going to do about it? Because we’re trading our dollars for their service and we need to feel super comfortable as clinicians that what we’re doing is A) safe, B) not going to cause any harm, and C) not put our license at risk either. Bastian is one of these tools that is very comfortable signing these BAAs, so it’s super HIPAA compliant. In fact, they market that they are HIPAA compliant.

Furthermore, they train their models on a lot of medical content. It’s not specific just to psychologists – it’s medicine more broadly – but nevertheless we do share terminology. So through the use of Bastian, as a clinician, I would feel less concerned about putting some patient information in there. I could very well upload a previous assessment report. Some forensic psychologists like to use this because they can take these files and throw them in. With that in mind, though, we have to keep in the forefront that the size of those files is important. Unfortunately, Bastian won’t allow very large file uploads, so forensic psychologists will go through, copy and paste the meat of everything, and then have Bastian write some summaries.

This is what Bastian looks like the first time you get into it. This thing right here in the middle, there’s a slider and this is called a temperature slider. As it moves to the right, as it becomes a larger number, it is a bit more free. It’s like the hippie version – it’s just gonna go and do its thing, and it doesn’t like being told what to do. And then if we go to the left, these are gonna be your surgeons. They’re strategic. I like to keep mine right in the middle, knowing that I’m gonna be reading everything and confirming everything afterwards.
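
(A quick aside for the technically curious: that slider corresponds to what most model APIs call a “temperature” parameter. Below is a minimal sketch using the OpenAI Python SDK showing the same request at two temperatures; I’m assuming Bastian GPT’s slider maps onto a similar underlying parameter, which I haven’t verified, and the model name here is just an example.)

    # Minimal sketch: the same request at two temperatures.
    # Lower temperature -> more deterministic, "surgeon" output;
    # higher temperature -> more varied, "hippie" output.
    # Do not send identifiable patient information through an API without a BAA in place.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    prompt = "Write a brief, de-identified psychotherapy progress note from these bullet points: ..."

    for temperature in (0.2, 1.0):
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        print(f"--- temperature={temperature} ---")
        print(response.choices[0].message.content)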

There’s nothing on the side here – this is just a throwaway account for me that I just opened this morning. There wouldn’t be anything on the right here for you either. What we want to do is think about if we’re going to use a prompt once, chances are we’re going to use it again. The benefit of keeping things in this sidebar here is that we can continue to refine them. Some of the prompts that I use, I think I’ve changed weekly. The reason is because you get to start to assess your input versus your output, and you’ll start to see trends in the way that you interact with these tools.

Once you start to observe that you’re doing something over and over again, often times we’ll just get the output, throw it in a doc and change it in the doc. I argue let’s work smarter, not harder, and let’s start to play with the dials on some of these prompts that we’re actually using.

I want to show you how to put in a new prompt and save it. Over here on the right, you’ll see “new prompt” – click that and it will drop something down here. All you have to do is click there and it’ll bring up this window. I’m going to put in a basic prompt – in fact, you can find this on page 20 in the PDF I sent you.

I’m going to call this “Crappy Prompt 1” because it is indeed very crappy. I’d like you just to pay attention to how this process works. Chances are you’re not going to use a crappy prompt – maybe you’re going to use some other prompts and then just return to them and refine them as we go. After you have your name, so you recall what it is and what it does, you also have your prompt. The prompt is really what you’re asking of the large language model to do.

On the right side here I have behavioral observations, a FAQ, some advanced and some intermediate prompts that I put in there earlier this afternoon. This is just a way that if you know you’re going to do something repeatedly, you can just loft it over there. Early on in my AI process, I just had a Google Doc open and I had to sift through everything. It was daunting, but now one of the benefits of Bastian is that you can save these prompts and then call them back up.

After I save this, “Crappy Prompt 1” is over here on the right. Now what you could do is go back to “Crappy Prompt 1”, copy it, get out of there and paste. You could very well do that – it doesn’t take more than just a few seconds. But one thing about Bastian is that if you use the forward slash key (which, on a Mac, is the key just to the left of the right Shift key), all of your prompts are brought right up for you. If you have a tremendous list of them, like my account does that I’m not showing now, you might have “Crappy Prompt 1”, “Crappy Prompt 2”, etc. – I would just start to put in the first few letters and it will narrow those down for me. When I’m able to do that, I can narrow down very quickly with fewer keystrokes.

So I put in my “Crappy Prompt 1”. It’s very crappy. You can either click this or just hit enter. Let’s see what happens with a real crappy prompt.

Not bad – given 3 bullet points, it’s quite astonishing. “Astonishing” is a strong word, but nevertheless, I do believe it to be quite astonishing what it can do. That’s because these models, particularly Bastian, have been trained on what psychotherapy notes look like and how they tend to be structured. So it took my awful 3 bullet points (discuss their mother, discuss feelings of inadequacy activated at volleyball practice, invoking some sort of fear of failure), identified what the progress note should look like, aggregated those three bullet points into the correct areas, and then organized it.

Clearly, if we had the dial of the temperature turned all the way to the right, it’d be really interesting to see what that would do. I encourage you to play around with that, but nevertheless, this isn’t bad. In fact, it’s a little bit better than “not bad,” but I think that there are better ways to do this too.

You could go back in and you could say, “write this all in narrative form, no bullet points.” Let’s see what happens. Brilliant. So it did that.

One thing to note is that I think the iterative process as you start to work with some of these models is incredibly important. The reason is because you’re going to learn your own downfalls. You’re going to figure out why you aren’t prompting these models in an efficient enough way. You’ll probably start doing the same thing over and over, and then you’ll start seeing some similarities in some of the output. The goal is for you to be aware and observe some of that before and after.

Then you start to ask yourself, “OK, what am I doing?” And if you can’t figure it out, maybe you ask the chatbot. That’s an interesting trick where you can say “this is what you gave me” (copy paste), edit it, maybe even edit it here in this text input, and say “this is what I want.” Depending on what model you’re using, say “Talk to me about how I should prompt next time to get a more accurate result.” So you can start to train yourself, you can start to train your model (depending of course on which one you’re using) in such a way that gets you closer and closer to a consistent and repetitive voice that is yours.

Now let’s try something a little bit more intermediate. The one we just did was “Crappy Prompt 1”, so this is another one. You can see it’s just a bit longer, but still not that detailed.

To give you a better idea, I didn’t ask for a note – I asked for a CBT-focused psychotherapy SOAP note, and we gave it a little bit more information. We’ll say that it was a minimum of 53 minutes, and it was their third session. He uses he/him pronouns. In the previous session, they focused on hunger pains. We want this note to be approximately 2 to 3 paragraphs in length, and in this session we discussed maternal relationship, work-related stressors – some of those same things I had mentioned in that previous prompt. We still have our temperature at the same setting and we’ll see what this one does.

So you can see that this is a bit different. We indicated that we wanted a SOAP note, so we have our subjective data, we have our objective data, we have our assessment, we have our plan. It didn’t add any fluff – it didn’t say “the client, a 28 year old whatever.” It didn’t say anything. Didn’t leave any place markers for date or any clinician name, which sometimes it does and sometimes it doesn’t. But if that’s something you need, you ask for it.

Nevertheless, it knew what a SOAP note was. It knew to take some of those pretty awful bullet points that I had there and aggregate that in such a way that made sense to fit the style of a SOAP psychotherapy note in that order.

What we could do is say, “What would I do next session?” Clearly, you would never do this to strategically plan to the detail your treatment plan – that’s why you went to school. But nevertheless, the idea is that if you are feeling like you’re hitting a block or if you’re like, “Oh my God, I’m so tired, I can’t think straight. What did we do last time? These are our goals. What are some key points to talk about today?” Maybe it’s a way to prompt some of your own creativity.

Interestingly, it knew we were doing CBT. Here we have some homework in progress. It knew that there was some relationship dynamics from that previous input. Because of CBT, of course we want to continue to set some goals, and so it shows its intelligence as it relates to the nuance of our field because we prompted it to do so in that very first interaction.

Let’s open up a new chat and see what this advanced prompt does. In the advanced prompt I have variables that I placed within it, so the variables will prompt you to provide that information. Let’s say it was 3:00 PM and we use CBT and dream analysis. It was session #3 and our previous focus was on mother issues.

We have included that information here, and then we can talk about some key content: we can talk about the recent chicken wing shortage, we can talk about work stress, maybe an upcoming exam. There we go. Again, you can click this or you could just hit enter.

So the output is far more thorough because I trained it and asked it to do that. The goal of me walking through these prompts – from “Crappy Prompt 1,” to something a bit more intermediate, and then here with advanced – is not to suggest that the advanced one is great. It’s to show you that the more information you give it before it starts turning its gears, the better output you’re gonna get, and you’re in control of all of that.
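
(To make the idea of variables concrete, here is a minimal Python sketch of how a templated prompt with placeholders might be assembled before you paste it into a tool like Bastian. The placeholder names and wording are illustrative only – this is not the actual advanced prompt from the PDF.)

    # Hypothetical templated prompt; placeholder names are illustrative.
    TEMPLATE = (
        "Take the role of a clinical psychologist writing a psychotherapy progress note.\n"
        "Session time: {session_time}. Modality: {modality}. Session number: {session_number}.\n"
        "Previous session focus: {previous_focus}.\n"
        "Key content discussed today: {key_content}.\n"
        "Write 2-3 paragraphs in narrative form. Do not invent details that are not listed above."
    )

    filled_prompt = TEMPLATE.format(
        session_time="3:00 PM",
        modality="CBT and dream analysis",
        session_number=3,
        previous_focus="maternal relationship",
        key_content="recent chicken wing shortage; work stress; an upcoming exam",
    )

    print(filled_prompt)  # paste the result into the chat, or save it as a reusable prompt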

If you feel like you need to make sure you’re saying certain things, or you always end a note or a section of a report with specific statements, you can absolutely throw those in. You are always in control. The goal is for you, as you’re working through your workflow, to think about “what am I always writing? What am I always saying?” Some people use a tool called TextExpander or they have hotkeys. But there are ways to interact with these models, even just with Bastian, by making sure our prompt – especially if it’s pre-loaded – says “make sure to include the following statement” and provides that exact text.

The document I sent this afternoon is half text to think about and half prompts. It covers the basic things that we tend to see in our world, whether it be for assessment or therapy or whatever your specialty is. I encourage you to look through it, think through which ones you may want to start playing with, and then start playing with them.

If you’re using a tool like Bastian GPT, you can feel free putting in some screenshots, uploading some information about a patient, and furthermore, you can just start to get comfortable with input and output. If you want to get really advanced, you can start to train it a little bit. What I mean by that is, like I said earlier: “I gave you this prompt. You gave me this output. I edited this output and here it is. How would I prompt or how might I make my input source a bit more clear? Are there any other variables I might consider for this?”

Because we’re always working towards further refinement and structuring of the data. The less thinking and creativity these large language models have to incorporate, the more precise output we get. As assessment psychologists, our work has to be precise, and alluding to the question we had a bit ago about hallucinations – it’s a real thing, but we can minimize its impact or likelihood by being very thorough and intentional about the data we input. The earlier we do that, the better the outcome will be.

I have a few other prompts over here. Let’s say you’re in a room with a kid doing some testing, and you want to quickly write these behavioral observations after your session. Maybe you were taking notes in the WISC columns, maybe you have notes everywhere, maybe you have a dedicated system. If you have a prompt such as this already saved, you can pull it right up. And I should note that if you build something like this yourself, rather than taking the one from the document that I sent, you don’t have to give it all the information. In fact, it can infer a lot of things, and you can even ask it to fill in some things.

Particularly when it comes to behavioral observations, when I was using this method in the past, I would say “here are my observations. If I don’t mention it, consider it within normal limits and indicate as such.” So here are some comments. Let’s pull that up using that slash and then behavioral observation. There is a placeholder here, so we’ll say “they were on time,” “asking to leave every 5 minutes,” “appears disheveled” (I chose that word because it’s the most impossible to spell, but I’ll show you how it doesn’t even matter if you spell right), “frequent bathroom breaks,” “poor effort,” and “spilled water.”

Another thing that you’ll notice if you’re looking through that PDF document is that you’ll see some things that look like HTML code – XML-style tags. This is how we start to work towards structuring the data. I can use tags like <info> and </info> to mark sections. So now we know that as it’s going through and sifting, it’s structuring things a little better, and it knows that anything between these tags is the behavioral observations. The narrative should address the specific points that I’ve outlined in the prompt.
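
(As a concrete illustration, here is a small Python sketch of how raw observation notes could be wrapped in those tags before being sent to the model, along with the “within normal limits” instruction mentioned above. The exact wording and tag names are illustrative, not the prompt from the PDF.)

    # Sketch: wrap unstructured observation notes in XML-style tags so the model
    # knows which text is raw data and which text is the instructions.
    observations = [
        "on time",
        "asking to leave every 5 minutes",
        "appears disheveled",
        "frequent bathroom breaks",
        "poor effort",
        "spilled water",
    ]

    prompt = (
        "Write the behavioral observations section of a psychological assessment report.\n"
        "Use only the observations between the <info> tags. If a typical area "
        "(appearance, speech, mood, effort, etc.) is not mentioned, describe it as within normal limits.\n"
        "<info>\n" + "\n".join(observations) + "\n</info>"
    )

    print(prompt)  # paste into Bastian, or save it as a reusable prompt with a placeholder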

Let’s see what happens. Well, this was a pretty crummy prompt because I didn’t give it a lot of extra information. But nevertheless it was able to do what I asked it to do. This is the fun part of getting in and seeing the capabilities. If we had our temperature changed, maybe we would have a different output. If we gave it a little more information, if we had some tweaks in the actual behavior observation output prompt itself, it would be interesting to play with this just to see.

One thing that I’ve seen a lot of clinicians do – many of the people I’ve done some consulting work with – we’ve done this together where we would take several behavioral observations from a series of reports that they’ve written. We would extract the text and use Bastian. They would put real examples in there so they knew it was like their stuff and their writing style. They would say, “Hey Bastian, take these 5 behavioral observations, extract my style, my headings, the variables I tend to talk about the most, the words I use most frequently, etc.” and it was able to respond back and say “Here’s a prompt that will produce an output similar to these examples.”

After that, they would just write “/behavioral” and the prompt was there with the formatting instructions, and they would just dictate into that: “Timmy was here and XYZ happened” etc., and it would produce consistently accurate behavioral observations in their voice. You can do that for any section in your report.

Many times we get “deer in headlights” as we start to play with some new tools – we just dive in, start tinkering, and then we’re dissatisfied with the output. But I think that if we can engage this with curiosity and play around, understanding these basic things we’re talking about today, you will be able to start tweaking and pulling on these levers to get closer and closer to your desired output.

Many of the things I’ve done as someone consulting on workflows, particularly for assessment psychologists, is recognize that everyone has their own voice. I don’t think I’ve ever seen the same report from all the people I’ve worked with. Everyone is a snowflake, and we don’t want to lose that voice. Yet many of the products out there right now have these boilerplate outputs. They run them through these models and they don’t really have our voice associated with it.

I think this is a fix for that – if we can start to find tools and tweak them in a way that is consistent with how we write, and continue to iterate, then we can have more freedom with the tools we’re using. Furthermore, we can feel more comfortable with them as well, knowing that we just don’t have to settle for what we get and edit 1000 times afterwards.

Questions and Answers

Question about how to export text without markdown formatting (hashtags, stars, etc.)

What that is, is markdown – a lightweight formatting syntax. The output from large language models will often include it. It’s your friend in many situations, annoying in others. I don’t know if there’s a specific workaround besides saying “don’t format the text.”

For instance, if I were to copy and paste this, it might show **Emotional and Social Presentation:** because two stars indicate that it needs to be bolded. To have output that isn’t using stars and hashtags, perhaps just ask it “not to bold, not to have headings, just to have a plain text output.”

Where markdown is your friend is when you want to make tables really quickly. I use Bastian sometimes where I’ll have a dataset or screenshots – maybe a screenshot of a WISC table with the domains and standard scores, or a Conners CPT3 with all the ratings. You can take those screenshots (you have to do it iteratively because Bastian won’t take a lot at once), or if you have an Excel file you can just take one screenshot of the whole dataset.

Then you can ask Bastian to take that information and provide you with a Markdown formatted table. It will take the data, aggregate it in the table in the way you need, and then you can put it into a Google Doc with one extra step using Gemini (which is free with the paid Google Workspace).

How many times have you as a clinician agonized over creating tables and moving margins? And “oh no, the cell is here and it threw everything else off”? If we can ask any of the models (but I know Bastian does it because I’ve used it), you can say “give me a markdown of XYZ, aggregate the data in such a way, make sure to include qualitative description, confidence intervals as well as percentile rank, and one row for each measure/heading.” It’s interesting to see what happens when you use markdown as your friend.

To avoid markdown when you don’t want it, you could add a simple instruction at the bottom of your prompts: “Do not format text.” It would be interesting to see if that eliminates the markdown. And there’s always your good friend Find and Replace for all of that as well.

Question: At what point would you consider introducing this in clinical training during grad school, externship, internship, or after licensure?

I’m bullish on this, so I’m very excited about how this works. I have PhD students that rotate through my clinic and they use it day one. My philosophy is heavily supervised in the beginning – looking over their shoulder, reviewing everything that happens. Of course my name’s on the document, so I want to be reading things before finalizing them. I’ll bring it back to the student if it just is a blatant cut and paste.

I do this for one particular reason: I firmly believe that this isn’t going anywhere, and part of my job as a clinician is to prepare future clinicians to be ready to hit the workforce – not just clinically, but to not get themselves in trouble doing things they shouldn’t. Furthermore, if I’m doing it myself, I don’t think I should stop my students from doing it either.

That’s where this layer of supervision comes in. At the beginning of rotating through my practice, I’m over their shoulder watching. Towards the end, sometimes I can’t even tell what was written by the student and what was written by some sort of bot. I think it’s great because it’s out there everywhere, and when we can get more comfortable allowing the supervised, intelligent, thoughtful, and considerate use of these models, even in graduate training, I think it’s great.

As a caveat, I’m not allowing my students to take screenshots of datasets and say “write me 7 paragraphs.” If they do that, they’re really good at it because I haven’t been able to catch it yet! But what we do in the beginning is help them refine their prompts when what they get doesn’t make sense. I’m teaching these students how to use AI in a very thoughtful and clinical way to maintain a consistent clinical voice.

I hate writing updates and making charts, etc., so I don’t expect my students to do that either. Being able to teach them how to leverage these tools to create consistent tables or visual output benefits everyone. I’m not from the “we abuse our students because we were abused ourselves” school of thought.

Question: Can I put my BASC interpretation into Bastian, or do I need a screenshot of the scores only?

Bastian is HIPAA compliant, so technically you could put in the whole interpretation. It depends on the size of the file. There are three ways to do this. You could upload the whole document if it’s not too large, take a screenshot of the scores, or copy and paste the relevant parts.

Question: Can we upload previous assessments to help the GPT learn our voice and style?

The two important things there: when we say GPT, we want to make sure that we’re not uploading patient information to ChatGPT. That’s crucial. But if we use a compliant tool that we have a BAA with and we understand their security and we feel comfortable, we can upload sample documents. In fact, I think that’s a great approach.

What you could do, just using Bastian as an example since I’m familiar with it, is upload one report and say “extract my tone, voice, reading level, clinical style” and even ask it to identify “everything else that might be important for another psych report.” And then say, “OK cool, I gave you one, here’s another one. Refine. Here’s another. Refine.” You can play that game for a little bit, just making sure that the context window doesn’t get too big, because if the context window becomes too large, the AI starts to get less effective.

Question: Are you familiar with Click Report?

I know Jenna Washburn, and she’ll be at NASP next week, so if anyone’s heading there, you can chat with her. We’ve chatted a few times. I’ve never used her product. I collaborated on something with her last year. I’m not super familiar with it, but I know that she’s super smart.

Question: When you use Bastian, do you have multiple chats running at once?

Some people will stratify them by patient, so they’ll have one chat for Timmy and one chat for Susie and one chat for background on something else. That’s quite smart because it keeps all consistent information for that patient in one container. It doesn’t allow anything else to bleed in because remember, the more we’re giving to a specific context window or container of information, the more muddy everything inside of it gets. So keeping it short, strategic, and succinct is a really smart idea.

A friend of mine also mentioned once that if he died, he didn’t worry about his browser history – he worried about his ChatGPT history! Because I’m the same way – I just go there and ask it everything and don’t go to Google anymore. You’d be surprised how many three-message interaction chats I have in ChatGPT. It’s just use it and dump it, use it and dump it.

Creating Tables with AI

Let me demonstrate how to create tables with AI. I’ll use ChatGPT for this example. Let’s say I want to create a table with fake data for the WISC-5, all five domains, the BASC, and Conners CPT3.

[Demonstration of creating a table in ChatGPT]

This is what markdown looks like. When you cut and paste this into a Google Doc or Word document, it’s going to be awful – it makes no sense, but that’s OK because it’s the formatting that you’re really looking for.
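
(For reference, a raw markdown table looks roughly like the snippet below – the measures and scores are made-up placeholders, not real data.)

    | Measure / Domain            | Standard Score | 95% CI  | Percentile | Descriptor |
    |-----------------------------|----------------|---------|------------|------------|
    | WISC-5 Verbal Comprehension | 108            | 101-114 | 70         | Average    |
    | WISC-5 Fluid Reasoning      | 97             | 90-104  | 42         | Average    |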

What you would do is copy this and put it in a Google Doc. No one wants a table that looks like this. But if you have a paid Google Workspace, you can go to the Gemini icon and say “take this highlighted text and turn it into a table.” It’ll put it into a preview window, and then you click the insert arrow, and boom – just like that you have a formatted table.

It was crummy data in this example, but you can get in here and do anything you want with it. Let’s say you want that shaded – great. You also want to bold – great. But nevertheless, it’s pretty quick and easy. Just imagine what you can do if you’re dropping screenshots into something. If there’s no patient data associated with it, maybe you’re using Claude and you’re saying, “here are my 5 screenshots. Give me a markdown table with these specific things.” You can be as specific as you want, and then you loft it into Google Doc, open up Gemini, say “please turn this markdown into a table for me,” and it’s done.

Question: Can you use Claude with a BAA?

I don’t believe so. I don’t even know if they have an enterprise version, and I know that would be ridiculously expensive anyway. My rule of thumb is: if I don’t have a signed document from the owner or the C-suite of the company, then if I wouldn’t put it on a billboard, I won’t put it into a chat model.

But with Notezap, I have a signed, pen-on-paper, scanned and sent to me BAA, so I feel comfortable putting things there. If you use other models like Claude or ChatGPT, I would be cautious and be explicitly clear that you have no patient information. Maybe you take just a screen grab or a section of a document. I’ll hit Command+Shift+4 and stretch it over the area. I’ve had to catch myself a few times because sometimes there’s a name or something identifiable that you have to be cautious of.

AI Types: Automate, Augment, Accelerate

When we think about using AI, I have this model of the three A’s. I allowed myself a bit of creative license here:

Automate: This is where we use AI to handle repetitive tasks.

Augment: When we think about the use of AI, we can allow it to augment some of our work as well. The way I think about augment is maybe it’s allowing it to do a little thinking for us – not just hard coding an “if…then” statement.

For example, leveraging AI to be more savvy in how we handle intake information. If you’re an assessment psychologist, you’ve probably sent forms to a client because they’re bringing their child in for an evaluation. They send them back, the data sits there, and then you have an intake interview where you ask all the same questions and they give all the same answers. You wish you would have had some follow-up prior to your intake.

The way I think about augmentation is doing some real-time thinking. One feature I’ve implemented is that the system watches intake forms and if it detects a signal about something, it can suggest additional screeners or questions. It’s doing some thinking for you, but it’s not clinically and diagnostically focused – it’s information gathering. The goal is that we can use these tools to augment our pre-intake work so we have more refined information from the start.

Furthermore, it allows you to have more information to find things in the corners that sometimes we either run out of time for or there wasn’t enough signal to pick up on initially. Then when we review information, we really start to see that there was something there.

This is just one way you might be able to augment your assessment workflow. I’m not going to ask my AI bot to do my job for me, but I’ll ask it to do all the things I probably pay someone 20-25 dollars an hour to do. Therefore, when I do have someone come in and do work at the office, they’re leveraging their resources in a way that is hopefully fulfilling to them, but isn’t just bouncing around on a keyboard sending links to things.
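
(To make the “augment” idea concrete, here is a deliberately simplified, hypothetical sketch of that kind of intake-form watching in Python. A real system would likely lean on an AI model rather than bare keyword matching, and the form fields, keywords, and suggested follow-ups here are illustrative only – this is not the feature I actually run.)

    # Hypothetical sketch: scan intake form responses for signals and suggest
    # follow-up screeners *before* the intake interview. Information gathering
    # only -- no diagnostic decisions are made here.
    SIGNALS = {
        "sleep": ("trouble sleeping", "nightmares", "can't fall asleep"),
        "anxiety": ("worry", "panic", "on edge"),
        "trauma": ("accident", "abuse", "witnessed"),
    }

    SUGGESTED_FOLLOW_UPS = {
        "sleep": "Consider adding a sleep questionnaire before the intake interview.",
        "anxiety": "Consider adding an anxiety screener and a few targeted follow-up questions.",
        "trauma": "Consider adding a trauma screening measure and follow-up questions.",
    }

    def review_intake(free_text_answers: list[str]) -> list[str]:
        """Return suggested follow-ups based on keywords in intake responses."""
        text = " ".join(free_text_answers).lower()
        return [
            SUGGESTED_FOLLOW_UPS[domain]
            for domain, keywords in SIGNALS.items()
            if any(keyword in text for keyword in keywords)
        ]

    print(review_intake(["He has trouble sleeping and seems on edge at school."]))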

Accelerate: I think this is what a lot of people think about when we’re discussing the use of AI – how do we just make things faster? If you’ve written a report before, you know they take too long. They take excruciatingly long, even when we are leveraging tools. All we want to do is see patients, crunch numbers, sound/look/feel smart, and maybe make some money along the way.

The big trend is how do we do it faster? How do we create a better experience for our patients? And I hope that’s where many people are landing, because if you’re like me, we have year-long wait lists. I myself am booked until August, which is an excruciating process for patients – they have to wait forever. If we ourselves, as clinicians, become more efficient, I think everyone wins. We can continue that flywheel and help so many more people.

One of the things that we can think about in terms of accelerating our process is using an AI scribe. Notezap is the one that I have used. It has some pretty basic prompting out of the box, but I’ve got some prompts in that PDF that I hope you’ll use.

An AI scribe works like this: you turn on an iPad or your phone, or you have your laptop open, and you’re having a conversation with a patient – in this case, an intake session. You go through your clinical interview. Prior to using a tool like this, you would then take your notes and dictate or type them up or avoid it until you eventually had to get that information out of your brain and onto some sort of document.

The benefit of using an AI scribe is that it will record and transcribe that session. It brings up the idea of consent and transparency, but nevertheless, it records the conversation in a way that produces an output you can use for an assessment report. I used it most for intakes where we’re asking all the questions we typically need to communicate in the background section of a document. If I can do it with just a few clicks of a button, go in and cross-check against my notes or what they sent me in PDFs, it gives me a framework. I myself work much more efficiently as an editor than as a narrator or as someone organically creating documents.

A lot of other people will use AI scribes for their feedback sessions. The trick of the trade is that if you’re using an AI scribe for your feedback, you almost want to intentionally follow an internal script because if you know the scribe is listening, you want to make sure you’re feeding it the information. That, in conjunction with a sophisticated prompt, can produce good output. Much of that output can then be leveraged for a summary section or integrated summary section, a rationale around diagnosis and rule-outs, and recommendations.

Notezap Demonstration

Notezap has a landing page with different templates correlated with different kinds of sessions. These are the things I have used in the past. I have one for intake, and in November I decided I wanted to change it. You can have as many templates as you want – some for kids, some for adults, some for specific workflows. You can really get in here and customize it. The more nuance you have that’s consistent with a specific workflow, the more you get out of it.

Each section lets you tell the AI what to do with the transcript. Many people I’ve worked with who use Notezap will have each of their developmental history sections here, but this is for a feedback session.

In this section, I say “take the role of a clinical psychologist” – so I’m setting that framework – and “write a summary section of a psych report.” I’m very specific: “Don’t make anything up. Don’t hallucinate.” You can really say that and it listens. I want it to be titled “Summary” and I indicate my desired length. Historically I have not written 6-8 paragraphs for an assessment summary, but I ask it to do too much work so I can whittle it down. I’d rather have 6-8 paragraphs, cut out the sentences that don’t work, throw it back into Bastian and say “make this better” with more strategic prompting.

I tell it to “integrate findings in a comprehensive way” and “make sure you’re including strengths and weaknesses that we observed.” You can also say “use this language” and “don’t use this language.” The first few times I used Notezap, I was getting very frustrated because it kept saying “the therapist said” or “the psychologist referred to,” and I didn’t like that. I started tinkering with it and thought, “I tell my kids not to do something and they don’t listen. Maybe I’ll ask the AI to DO something and it will listen.” So I added instructions like “don’t talk about me” or “don’t refer to me as a therapist.” If you have specific requirements about how you refer to a patient, do that as well – like “refer to the patient as [first name]” or “Timmy” or whatever you prefer.

I also asked it to do a cognitive ability summary, emotional functioning summary, and a general summary of overall findings. I ask it to note my recommendations, but I structure my recommendations in a psych assessment report into three different categories: medical, psychological, and executive functioning (as much of my work is ADHD). Then, if there are any other general recommendations, they go at the end.

This is just to put text on paper for me. I don’t need it to come up with new recommendations – I need it to listen to what I was saying during the session and then put that into the document that I can, of course, edit.

I always ask for support recommendations for the patient. This came through feedback I was getting from patients – maybe a husband would say “What can my wife do?” or parents bringing a child with behavioral issues saying “We don’t know what to do.” I started thinking about how to address this consistently, so I added this to my feedback template. The AI would listen to the transcript, be a little creative, and I’d edit it, but it was a way for me to get one thought turned into multiple paragraphs.

Furthermore, I tend to write in a more strengths-based way, so I always like to point out unique characteristics. My goal when using these tools was to get as much information as I can, sift through it, pull out what’s relevant, summarize, and then I was done. It was still tremendously more efficient than anything else.
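
(Pulling the pieces above together, a rough reconstruction of that kind of section instruction might read something like the following – the wording is illustrative, not my exact Notezap template.)

    Take the role of a clinical psychologist. From the transcript of this feedback
    session, write a section titled "Summary" of 6-8 paragraphs that integrates the
    findings in a comprehensive way, including the strengths and weaknesses we observed.
    Do not make anything up; do not hallucinate. Do not refer to me as "the therapist"
    or "the psychologist." Refer to the patient by first name. Then add a cognitive
    ability summary, an emotional functioning summary, and a general summary of overall
    findings. Note my recommendations, organized into medical, psychological, and
    executive functioning categories, with any other general recommendations at the
    end, followed by support recommendations for the people around the patient.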

The Notezap subscription – I think it’s $25 a month plus a bit more. When I was using it heavily last year, I think it was $75 I spent, but I see about 35-ish clinical hours a week, so that was heavy usage. Nevertheless, that’s money well spent considering the savings, and furthermore the comfort I had with it because they signed a BAA with me as well.

I’ll show you what my feedback session output from that prompt looks like. This is what someone was talking about earlier – this is markdown text. I didn’t edit much here – I changed some information and removed anything that could be identifiable. But this is the beginning of what the output was for a feedback session using that exact prompt series.

We got a 6-8 paragraph summary, cognitive abilities section, emotional section. I gave the PAI (Personality Assessment Inventory) and it really started to pull out specific things – it talked about the clinical scales, validity scales, and self-concept. Those are key terms within the PAI output that I review in my sessions.

Again, this is not intended to be your second brain, but if you are strategic in how you ask these tools to output their text, you’d be surprised by both the amount and quality of output, especially if you continue to iterate and train it repeatedly.

Ethics and Responsibility

When we think about ethics and responsibility with all these new technologies, it creates a bit of anxiety. It seems like every morning there’s a new version of something. ChatGPT now has a $200/month plan that will write pretty great research papers in maybe 15 minutes. I’m continually amazed at the level of output and consistency they’re achieving. I’m not paying $200 a month for one tool, but I am probably paying $200 a month for about 10 different tools.

When we think about the ethics and responsibility of our decisions, we have to apply frameworks to why we’re making these choices. Here are four main areas to pay attention to:

First and foremost, you’re a clinician. Your clinical judgment matters. AI will make suggestions, and you’ll evaluate them: yes, no, maybe. You really have to use the sniff test early on because AI certainly does hallucinate. We can combat that by being more structured in our data input and being specific – perhaps even overly specific – in our prompting. Garbage in equals garbage out.

You have to think about whether you’re going to discuss AI use with your patients. I’ve worked with clinicians who say, “No, they don’t need to know that I’m using Gmail or the name of my EHR. Why would they need to know what AI tools I’m using?” And there are other people who are always concerned about liability and want to disclose everything. That’s fine. It’s important to figure out where you are so you can feel confident that if anything comes into question, you have a thoughtful, documented decision-making process.

Of course, when we’re thinking about using AI tools, any data that leaves your control needs to be held securely. So if you’re using ChatGPT, don’t throw in a psych report and ask what’s going on with Timmy. I’ll reserve that for Bastian. But I’ll absolutely use ChatGPT to say “I’m stuck, help me think through this” or “compare and contrast the symptom presentation in a 15-year-old boy who uses cannabis seven times a day with trauma versus ADHD.” You have to think about how to use those non-HIPAA tools appropriately while using HIPAA-compliant tools like Bastian and Notezap more strategically for client-specific information.

I often laugh when I think about bias and awareness in AI. Last year, there was an absolute mess with Google trying to adjust its bias meters to combat cultural issues. We know that bias exists and that these companies have pulled some levers. Furthermore, all these models are trained on the entire internet, which has its own biases. So we have to be aware that the output from these large language models needs to go through our clinical sniff test. More importantly, your name is on that document and anything you use it for in clinical interaction. You want to make sure you’re paying attention to any bias, especially since we’re working with individuals who, although they may come from specific backgrounds, still have a lot of variance within those groups.

Looking Forward

Looking forward, I think things are going to change pretty quickly. In fact, they already have. Most of the tools up until about six months ago were pretty basic by today’s standards. We were using them to automate certain things, but they were limited in capacity and couldn’t add much nuance.

That’s totally changed. As we’re in this AI renaissance, we can leverage these tools in many efficient ways. Not only can we use AI to rewrite things, which I think is what everyone does, we can use it for pattern recognition. We could give it several screenshots of data, and it’s very likely to figure out what the measure is, what the domains are, what those domains measure, what the results are, and put together tables and integrative summaries.

It’s becoming a far more robust and sophisticated AI environment. Because of that, it becomes even more appealing, and therefore we have to be even more careful about how excited we get. Shiny object syndrome is real, and I know how much we hate writing. Put those two things together, and sometimes clinicians could make risky decisions.

As things continue to emerge and pick up even more pace – which they will, because AI is here to stay and will only get better and faster – we’re all going to be inundated with it. I hope some of our conversation today can help you be more informed about how you choose to use AI, both personally and professionally. I hope it helps you identify how to incorporate these tools responsibly into your clinical practice.

Conclusion

In summary, AI tools like Bastian, Notezap, Claude, and ChatGPT can significantly enhance your clinical workflow when used thoughtfully and responsibly. The key is to approach them with a clear understanding of:

  1. Where you stand on the AI adoption spectrum
  2. How much structure and guidance you need to provide in your prompts
  3. What level of transparency you want to maintain with clients
  4. How to keep sensitive information secure
  5. The importance of your clinical judgment as the final authority

Remember that these tools are meant to augment your work, not replace your expertise. As you continue to explore and refine your use of AI, focus on iterative improvement, be mindful of potential biases, and always prioritize client care and ethical considerations.

Whether you’re just starting out or already integrating AI deeply into your practice, there’s always room to learn and grow. The landscape is evolving rapidly, but with thoughtful implementation, AI can help reduce administrative burden and allow you to focus more on what matters most – providing quality care to your clients.

I’ve got some real-world demonstrations I want to do using a couple of different tools. I sent an email out this afternoon about what we’ll be using.
We’ll spend quite a bit of time doing that, and I hope that it’s a very interactive part of our session. Furthermore, I hope that we can lob many questions into the chat today and have a little bit of fun, because I think being a nerd is fun. I think there’s some real value to take away from this conversation, regardless of where you are on this journey. If this is your first time ever doing something with AI, perfect. And if you feel like you have quite a bit of experience, then we’ll just dive into where you’re at as well.
Towards the end of our conversation, we’ll talk a bit about what’s to come. It seems like every morning I wake up, the algorithms have me pegged, and all my feeds are full of AI stuff. It’s changing day by day, it’s just going to get better and cheaper, and it’s not going anywhere. I think in many ways, it’s important for us to pay attention to the trends, whether we choose to implement them or not. We’ll spend some time discussing what to expect moving forward in the AI world, and how you can prepare for some of the changes ahead. Whether you choose to implement these things is totally within your control.

AI Journey Continuum

What I’d like for you to think about here in just the next few seconds is to figure out where you are on this AI journey. We all fall somewhere on this continuum. We have people who are skeptical like, “I don’t know about this whole robots taking over the world thing.” We’ve got a few people, I’m sure in this crowd, who are just a little bit curious. Maybe they play around a bit. Maybe they let their kid talk to Santa on ChatGPT over Christmas.
Perhaps some of you have already implemented some of these tools, and in fact in that pre-webinar form I sent out yesterday, I do see many people are using some pretty interesting tools, which is great. But there’s always room for refinement, and you’ll see throughout our time this afternoon, the devil’s in the detail. Garbage in equals garbage out, and so when we think about how we can use some of these tools, it’s really going to be narrowing down, refining and iterating over and over.
We have a lot of different people in a lot of different places, so I’d like you to throw into the chat where you might be on this journey. I tend to bounce around and cycle between stages myself. I looked at some of the information on that form you all submitted yesterday, and it looks like a lot of you are already in that second or third category from the left, which is really good – and we also know we have to keep refining as we go.
Take a couple seconds and think about where you fall on this continuum, and then how you might prepare for some of the next steps ahead, because it is a constant journey. The irony is that as we approach the far right of the continuum, we end up right back on the cautious observer side, because the more we know, the more we realize we don’t know. It becomes a bit ominous at times.

Common AI Fears

So many of us have a lot of fears about AI use. I was looking through the results of some of the forms that you all filled out, and many of them fell into a few common categories, which I’ll walk through.
You’ll hear me say this several times throughout our time together: AI output should be treated like the 0.5 draft of your document creation. It’s there to create a framework. We can teach you how to train your AI, especially using templates. Furthermore, we always have to be the final source of truth for our clients. That’s why we went to school – to be able to craft documentation and use these tools to help our patients. I don’t think AI is there to actually do the helping for the patients, but we can certainly leverage its utility. Some of the advances that keep coming around are really great for us as clinicians, even challenging how we’re doing this work ourselves.
A lot of people were wondering, is a robot going to take my job? I don’t think so. I think that our job is going to get very easy, and I do think that there is going to be a reallocation of resources when it comes to how people interact with clinical psychologists, particularly at a systems level, and even within the practice.
Even here at my practice, I have an admin who, for the longest time before I really started to leverage some of these tools, was very overqualified for the work she was doing. She was sending links, doing this and that, and it was a bit awful for her. When we started to implement some of these tools, she was able to really use her resources and be the face of the practice rather than burying herself in a tremendous amount of paperwork.
So I don’t think AI is going to take our jobs, particularly as clinicians. If anything, it’s going to augment our work in such a way where we can become even more skilled because we’re not going to be bogged down with administrative burden.
Then there’s another concern around control – who’s really in charge here? Where’s all the information coming from? At the end of the day, we have to remember that we are really at the end of the decision-making process as clinicians. Everything that we do, everything we choose to implement should really be in support of our clinical work and the relationship we’re forming with our patients.
Whether therapy or assessment – even for school psychologists who are working with little kids and families – some of this work can be very daunting and emotionally taxing. Cognitive burden can be overwhelming at times. When we can take some of that burden away by leveraging these very basic tools, I think that everyone wins and we can spend more time with our patients, with our families, or dogs, maybe just at the beach.

Patient Consent and Transparency

Another area of concern that many people have is, do I tell my patients? When I think about consent and chatting with patients, maybe even families or school systems, I really think there are 4 basic levels:
Level 1:
This is the entry level. “Hey, I use AI.” Here it is, it’s in writing, it’s a super basic statement. This would be that really basic level of consent as it relates to working with AI and the transparency surrounding it with your patients.
Level 2:
Moving up in that hierarchy, with a little more clarity, we can actually name the AI systems we use rather than just making general technology statements. You could also assure the client, at some basic level, that you have HIPAA or FERPA compliance in place – whatever regulatory framework applies to you – and that AI isn’t really used for any decision making, although I don’t think it should be used for much decision making to begin with.
Level 3:
Even further up into that hierarchical level, we see this next level where we’re really starting to hone in on the details about how and why and when you use these tools, so you can provide that super transparent level of understanding to the patients and people you work with. This doesn’t necessarily have to say “I use these tools, I use them on Thursday and I do XYZ with them.” What I’d rather see is “this is what we do and I feel very comfortable using these particular tools because we know we have our stuff sorted out on our end. We have these tools under business associate agreements, we have looked at their security,” etc.
Level 4:
This highest level that I think about when it comes to transparency around the use of AI is getting down to the nitty gritty. I think everyone’s going to fall somewhere between levels 1 and 4 – maybe at 1.5 or 3.5. But this is the most detailed level of consent. Here, if you feel like you need to be this transparent, I think it’s super important to name the tools and the thought process behind why you choose to use them and when you use them.
I have seen some recent psych reports as I’ve been doing some document review for some of my own patients that even indicate in these documents that “we use AI” and there was one with an asterisk that indicated the generative piece of that document. I’m not here to judge any of those levels, but I think everyone here is going to fall somewhere on that continuum. The reason I bring these up is because I think it’s super important to pay attention to where you are, because where you are really guides what comfort level and what tools and maybe how experimental you can be with this use. The goal is to always reduce burden and as many of you know, the document burden is awful.

Us vs. AI or Collaborative Approach?

As we continue to work through where we are in this process and how we can integrate it, there’s this dialectic between “us versus AI” – “it’s going to take my job” or “I don’t want anything to do with it because if I look at it, it’s just going to grow a little bit bigger” – and then on the other side, a collaborative approach. The collaborative approach thinks about how we leverage certain tools in strategic areas that have been thought through, and we just use it as a refined tool from our toolkit.
When I think about myself as a clinician (and I’m certainly biased – I’m building an AI program for psychologists), I really think about how to augment our own work: how do we use these tools in a way that doesn’t do all the work for us? In fact, I think some of the work that we do is quite fun – the thinking through and sifting through data, and the connections we make with people – and I would never want to give that away.
But writing a report, writing a document to a standard – all of those things have historically been the biggest burdens in our profession. When we can reduce the burden on those particular issues, I think it creates a lot of bandwidth. It’s a surplus at some level because I know that when I used to have to write reports at 4:00 in the morning the night before (it wasn’t a great night and it usually had to be written at 4:00 in the morning because I had been procrastinating for some time) – that was tough. What I think this does is it continues to create this flywheel of bandwidth, and it’s not bleeding out as fast as it’s building up because we’re able to achieve a bit more work-life balance. Overall, it’s just less taxing to get through some of the necessary parts of our job.

Bastian GPT Demonstration

Let me share Bastian GPT with you all. The beauty of Bastian, first and foremost, is that they will sign a business associate agreement (BAA). A business associate agreement is a contract between you and this vendor, in this case, Bastian GPT. It outlines the security measures that they take to maintain your data, some of the precautions they have in place, and also what they’re going to do if something goes wrong.
My wife works for a company that was hacked by a Russian hacker last fall, so if that happened to us, what are they going to do about it? Because we’re trading our dollars for their service and we need to feel super comfortable as clinicians that what we’re doing is A) safe, B) not going to cause any harm, and C) not put our license at risk either. Bastian is one of these tools that is very comfortable signing these BAAs, so it’s super HIPAA compliant. In fact, they market that they are HIPAA compliant.
Furthermore, they train their models on a lot of medical content. It’s not specific to psychologists – it’s medicine more broadly – but nevertheless we share terminology. So through the use of Bastian, as a clinician, I would feel less concerned about putting some patient information in there. I could very well upload a previous assessment report. Some forensic psychologists like to use this because they can take these files and throw them in. With that in mind, though, we have to keep in the forefront that the size of those files matters. Unfortunately, Bastian won’t allow significant file uploads, so forensic psychologists will go through, copy and paste the meat of everything, and then have Bastian write some summaries.
This is what Bastian looks like the first time you get into it. This thing right here in the middle, there’s a slider and this is called a temperature slider. As it moves to the right, as it becomes a larger number, it is a bit more free. It’s like the hippie version – it’s just gonna go and do its thing, and it doesn’t like being told what to do. And then if we go to the left, these are gonna be your surgeons. They’re strategic. I like to keep mine right in the middle, knowing that I’m gonna be reading everything and confirming everything afterwards.
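
To make the temperature idea concrete, here is a minimal Python sketch. The send_prompt() function is a hypothetical stand-in, not Bastian’s (or any vendor’s) actual API; the temperature parameter is the only point being illustrated.

    # A minimal sketch of what the temperature slider controls, using a
    # hypothetical send_prompt() function rather than any vendor's real API.
    def send_prompt(prompt: str, temperature: float) -> str:
        """Stand-in for an LLM call. Lower temperature = more conservative,
        repeatable wording; higher temperature = looser, more creative wording."""
        raise NotImplementedError("replace with whichever tool you actually use")

    # The 'surgeon' end of the slider: strategic, predictable output.
    # send_prompt("Draft a brief psychotherapy progress note from these bullets...", temperature=0.2)

    # The 'hippie' end: freer phrasing, more variation from run to run.
    # send_prompt("Draft a brief psychotherapy progress note from these bullets...", temperature=0.9)
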
There’s nothing on the side here – this is just a throwaway account for me that I just opened this morning. There wouldn’t be anything on the right here for you either. What we want to do is think about if we’re going to use a prompt once, chances are we’re going to use it again. The benefit of keeping things in this sidebar here is that we can continue to refine them. Some of the prompts that I use, I think I’ve changed weekly. The reason is because you get to start to assess your input versus your output, and you’ll start to see trends in the way that you interact with these tools.
Once you start to observe that you’re doing something over and over again, oftentimes we’ll just take the output, throw it in a doc, and change it in the doc. I argue let’s work smarter, not harder, and let’s start to play with the dials on some of these prompts that we’re actually using.
I want to show you how to put in a new prompt and save it. Over here on the right, you’ll see “new prompt” – click that and it will drop something down here. All you have to do is click there and it’ll bring up this window. I’m going to put in a basic prompt – in fact, you can find this on page 20 in the PDF I sent you.
I’m going to call this “Crappy Prompt 1” because it is indeed very crappy. I’d like you just to pay attention to how this process works. Chances are you’re not going to use a crappy prompt – maybe you’re going to use some other prompts and then just return to them and refine them as we go. After you have your name, so you recall what it is and what it does, you also have your prompt. The prompt is really what you’re asking of the large language model to do.
On the right side here I have behavioral observations, a FAQ, some advanced and some intermediate prompts that I put in there earlier this afternoon. This is just a way that if you know you’re going to do something repeatedly, you can just loft it over there. Early on in my AI process, I just had a Google Doc open and I had to sift through everything. It was daunting, but now one of the benefits of Bastian is that you can save these prompts and then call them back up.
After I save this, “Crappy Prompt 1” is over here on the right. Now what you could do is go back to “Crappy Prompt 1”, copy it, get out of there and paste. You could very well do that – it doesn’t take more than a few seconds. But one thing about Bastian is that if you type the forward slash key (on a Mac, the key just to the left of the right Shift key), all of your prompts are brought right up for you. If you have a tremendous list of them, like my real account does (which I’m not showing now), you might have “Crappy Prompt 1”, “Crappy Prompt 2”, etc. – I would just start typing the first few letters and it will narrow those down for me. When I can do that, I can get to the prompt I need very quickly with fewer keystrokes.
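
Conceptually, the saved-prompt sidebar and the slash shortcut boil down to a small prompt library with prefix matching. Here is a rough Python sketch of that idea, with made-up prompt names and text, not Bastian’s internals.

    # Keep named prompts in one place, then filter them as you type.
    saved_prompts = {
        "Crappy Prompt 1": "Write a psychotherapy progress note from these bullet points: ...",
        "Crappy Prompt 2": "Write a shorter version of the same note: ...",
        "Behavioral Observations": "Write a behavioral observations narrative from my notes: ...",
    }

    def match_prompts(typed: str) -> list:
        """Return the saved prompt names that start with what the user has typed."""
        typed = typed.lower()
        return [name for name in saved_prompts if name.lower().startswith(typed)]

    print(match_prompts("cra"))  # ['Crappy Prompt 1', 'Crappy Prompt 2']
    print(match_prompts("beh"))  # ['Behavioral Observations']
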
So I put in my “Crappy Prompt 1”. It’s very crappy. You can either click this or just hit enter. Let’s see what happens with a real crappy prompt.
Not bad – given three bullet points, it’s quite astonishing. “Astonishing” is a strong word, but nevertheless, I do believe it to be quite astonishing what it can do, because these models, particularly Bastian, have been trained on what psychotherapy notes tend to look like. So it took my awful three bullet points (discuss their mother, discuss feelings of inadequacy activated at volleyball practice, invoking some sort of fear of failure), identified what the progress note should look like, aggregated those bullet points into the correct areas, and then organized it.
Clearly, if we had the dial of the temperature turned all the way to the right, it’d be really interesting to see what that would do. I encourage you to play around with that, but nevertheless, this isn’t bad. In fact, it’s a little bit better than “not bad,” but I think that there are better ways to do this too.
You could go back in and you could say, “write this all in narrative form, no bullet points.” Let’s see what happens. Brilliant. So it did that.
One thing to note is that I think the iterative process as you start to work with some of these models is incredibly important. The reason is because you’re going to learn your own downfalls. You’re going to figure out why you aren’t prompting these models in an efficient enough way. You’ll probably start doing the same thing over and over, and then you’ll start seeing some similarities in some of the output. The goal is for you to be aware and observe some of that before and after.
Then you start to ask yourself, “OK, what am I doing?” And if you can’t figure it out, maybe you ask the chatbot. That’s an interesting trick: you can say “this is what you gave me” (copy and paste), edit it – maybe even edit it right there in the text input – and say “this is what I want.” Depending on what model you’re using, say, “Talk to me about how I should prompt next time to get a more accurate result.” So you can start to train yourself, and start to train your model (depending, of course, on which one you’re using), in a way that gets you closer and closer to a consistent, repeatable voice that is yours.
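
Here is a sketch of what that “teach me to prompt better” request can look like when you assemble it yourself. The three variables below are placeholders for your own prompt, the model’s output, and your edited version.

    # Assemble a feedback request from your prompt, the output, and your edit.
    original_prompt = "Write a psychotherapy progress note from these bullets: ..."
    model_output = "...the note the model actually produced..."
    my_edited_version = "...the version you kept after editing..."

    feedback_request = (
        "Here is the prompt I used:\n" + original_prompt + "\n\n"
        "Here is what you gave me:\n" + model_output + "\n\n"
        "Here is my edited version:\n" + my_edited_version + "\n\n"
        "Talk to me about how I should prompt next time to get a result closer "
        "to my edited version."
    )
    print(feedback_request)
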
Now let’s try something a little bit more intermediate. The one we just did was “Crappy Prompt 1”, so this is another one. You can see it’s just a bit longer, but still not that detailed.
To give you a better idea, I didn’t ask for a note – I asked for a CBT-focused psychotherapy SOAP note, and we gave it a little bit more information. We’ll say that it was a minimum of 53 minutes, and it was their third session. He uses he/him pronouns. In the previous session, they focused on hunger pains. We want this note to be approximately 2 to 3 paragraphs in length, and in this session we discussed maternal relationship, work-related stressors – some of those same things I had mentioned in that previous prompt. We still have our temperature at the same setting and we’ll see what this one does.
So you can see that this is a bit different. We indicated that we wanted a SOAP note, so we have our subjective data, our objective data, our assessment, and our plan. It didn’t add any fluff – it didn’t say “the client, a 28-year-old whatever.” It didn’t leave any placeholders for the date or clinician name, which sometimes it does and sometimes it doesn’t. But if that’s something you need, you ask for it.
Nevertheless, it knew what a SOAP note was. It knew to take some of those pretty awful bullet points that I had there and aggregate that in such a way that made sense to fit the style of a SOAP psychotherapy note in that order.
What we could do is say, “What would I do next session?” Clearly, you would never use this to plan your treatment in detail – that’s why you went to school. But nevertheless, the idea is that if you’re hitting a block, or you’re thinking, “Oh my God, I’m so tired, I can’t think straight. What did we do last time? These are our goals. What are some key points to talk about today?” – maybe it’s a way to prompt some of your own creativity.
Interestingly, it knew we were doing CBT. Here we have some homework in progress. It knew there were some relationship dynamics from that previous input. Because of CBT, of course we want to continue to set some goals, and so it shows its intelligence as it relates to the nuance of our field – because we prompted it to do so in that very first interaction.
Let’s open up a new chat and see what this advanced prompt does. In the advanced prompt I have variables that I placed within it, so the variables will prompt you to provide that information. Let’s say it was 3:00 PM and we use CBT and dream analysis. It was session #3 and our previous focus was on mother issues.
We have included that information here, and then we can talk about some key content: we can talk about the recent chicken wing shortage, we can talk about work stress, maybe an upcoming exam. There we go. Again, you can click this or you could just hit enter.
So the output is far more thorough because I trained it and asked it to do that. The goal of me walking through these prompts – from “Crappy Prompt 1,” to something a bit more intermediate, and then here with advanced – is not because the advanced one is great. It’s to show you that the more information you give it before it starts turning the spokes, the better the output you’re going to get – and you’re in control of all of that.
If you feel like you need to make sure you’re saying certain things, or you always end a note or a section of a report with specific statements, you can absolutely throw those in. You’re always in control. As you work through your workflow, think about “what am I always writing? What am I always saying?” Some people use a tool called TextExpander or set up hotkeys. But there are also ways to handle this within the models themselves, even just with Bastian: in your prompt, especially if it’s pre-loaded, say “make sure to include the following statement” and provide that exact text.
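
A sketch of that idea in Python: a prompt template with variables plus a required closing statement that must appear verbatim. The wording and the closing statement are illustrative examples, not the exact prompt from the handout.

    # Fill session details into a reusable template with a fixed closing line.
    from string import Template

    SOAP_TEMPLATE = Template(
        "Take the role of a clinical psychologist. Write a $modality-focused "
        "psychotherapy SOAP note for session #$session_number ($minutes minutes, "
        "starting at $start_time). Previous session focus: $previous_focus. "
        "Key content from today: $key_content. Do not invent details that were "
        "not provided. End the note with this exact statement: $closing_statement"
    )

    prompt = SOAP_TEMPLATE.substitute(
        modality="CBT",
        session_number=3,
        minutes=53,
        start_time="3:00 PM",
        previous_focus="maternal relationship",
        key_content="work-related stressors; an upcoming exam",
        closing_statement="This note was reviewed and finalized by the treating clinician.",
    )
    print(prompt)
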
The document I sent this afternoon is half text to think about and half prompts. It covers the basic things that we tend to see in our world, whether it be for assessment or therapy or whatever your specialty is. I encourage you to look through it, think through which ones you may want to start playing with, and then start playing with them.
If you’re using a tool like Bastian GPT, you can feel free putting in some screenshots, uploading some information about a patient, and furthermore, you can just start to get comfortable with input and output. If you want to get really advanced, you can start to train it a little bit. What I mean by that is, like I said earlier: “I gave you this prompt. You gave me this output. I edited this output and here it is. How would I prompt or how might I make my input source a bit more clear? Are there any other variables I might consider for this?”
Because we’re always working towards further refinement and structuring of the data. The less thinking and creativity these large language models have to incorporate, the more precise output we get. As assessment psychologists, our work has to be precise, and alluding to the question we had a bit ago about hallucinations – it’s a real thing, but we can minimize its impact or likelihood by being very thorough and intentional about the data we input. The earlier we do that, the better the outcome will be.
I have a few other prompts over here. Let’s say you’re in a room with a kid doing some testing, and you want to quickly write up your behavioral observations after the session. Maybe you were taking notes in the WISC columns, maybe you have notes everywhere, maybe you have a dedicated system. It helps to have a prompt like this already saved. And I should note that if you build something like this yourself, rather than taking the one from the document I sent, you don’t have to give it all the information. In fact, it can infer a lot of things, and you can even ask it to fill some things in.
Particularly when it comes to behavioral observations, when I was using this method in the past, I would say “here are my observations. If I don’t mention it, consider it within normal limits and indicate as such.” So here are some comments. Let’s pull that up using that slash and then behavioral observation. There is a placeholder here, so we’ll say “they were on time,” “asking to leave every 5 minutes,” “appears disheveled” (I chose that word because it’s the most impossible to spell, but I’ll show you how it doesn’t even matter if you spell it right), “frequent bathroom breaks,” “poor effort,” and “spilled water.”
Another thing that you’ll notice if you’re looking through that PDF document is that you’ll see some HTML-style code. This is how we start to work towards structuring the data. I can wrap a section in opening and closing tags – something like <observations> and </observations> – to mark where it begins and ends. So now, as the model is going through and sifting, it’s structuring things a little better, and it knows that anything between those tags is the behavioral observations. The narrative should address the specific points I’ve outlined in the prompt.
Let’s see what happens. Well, this was a pretty crummy prompt because I didn’t give it a lot of extra information. But nevertheless it was able to do what I asked it to do. This is the fun part of getting in and seeing the capabilities. If we had our temperature changed, maybe we would have a different output. If we gave it a little more information, if we had some tweaks in the actual behavior observation output prompt itself, it would be interesting to play with this just to see.
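
For anyone who wants to see the structure spelled out, here is a sketch of a tagged behavioral observations prompt with the “within normal limits” default. The tag names and wording are examples, not the exact prompt from the PDF.

    # Wrap the raw notes in tags and default unmentioned domains to WNL.
    observations = [
        "on time",
        "asking to leave every 5 minutes",
        "appears disheveled",
        "frequent bathroom breaks",
        "poor effort",
        "spilled water",
    ]

    prompt = (
        "Write a behavioral observations narrative for a psychological assessment "
        "report. Use only what appears between the tags below. Any domain I do "
        "not mention should be described as within normal limits.\n\n"
        "<observations>\n"
        + "\n".join("- " + item for item in observations)
        + "\n</observations>"
    )
    print(prompt)
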
One thing that I’ve seen a lot of clinicians do – many of the people I’ve done some consulting work with – we’ve done this together where we would take several behavioral observations from a series of reports that they’ve written. We would extract the text and use Bastian. They would put real examples in there so they knew it was like their stuff and their writing style. They would say, “Hey Bastian, take these 5 behavioral observations, extract my style, my headings, the variables I tend to talk about the most, the words I use most frequently, etc.” and it was able to respond back and say “Here’s a prompt that will produce an output similar to these examples.”
After that, they would just write “/behavioral” and the prompt was there with the formatting instructions, and they would just dictate into that: “Timmy was here and XYZ happened” etc., and it would produce consistently accurate behavioral observations in their voice. You can do that for any section in your report.
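
The “extract my style” request is really just a meta-prompt assembled from de-identified examples. Here is a rough sketch of its shape; it is not a feature of any particular tool, and the example text is placeholder only.

    # Paste in several de-identified examples and ask for a reusable prompt back.
    example_sections = [
        "De-identified behavioral observations section from report 1...",
        "De-identified behavioral observations section from report 2...",
        "De-identified behavioral observations section from report 3...",
    ]

    style_extraction_request = (
        "Here are several behavioral observations sections I have written. "
        "Extract my style, my headings, the variables I tend to talk about the "
        "most, and the words I use most frequently, then give me a reusable "
        "prompt that will produce new sections in the same style from dictated "
        "notes.\n\n" + "\n\n---\n\n".join(example_sections)
    )
    print(style_extraction_request)
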
Many times we get “deer in headlights” as we start to play with some new tools – we just dive in, start tinkering, and then we’re dissatisfied with the output. But if we can engage this with curiosity, play around, and understand the basic things we’re talking about today, you will be able to start tweaking and pulling on these levers to get closer and closer to your desired output.
One of the things I’ve learned consulting on workflows, particularly for assessment psychologists, is that everyone has their own voice. I don’t think I’ve ever seen the same report from all the people I’ve worked with. Everyone is a snowflake, and we don’t want to lose that voice. Yet many of the products out there right now have boilerplate outputs. They run things through these models and the results don’t really have our voice associated with them.
I think this is a fix for that – if we can start to find tools and tweak them in a way that is consistent with how we write, and continue to iterate, then we can have more freedom with the tools we’re using. Furthermore, we can feel more comfortable with them as well, knowing that we just don’t have to settle for what we get and edit 1000 times afterwards.

Questions and Answers

Question about how to export text without markdown formatting (hashtags, stars, etc.)
What you’re seeing is markdown – a formatting syntax. The output from large language models will often include it. It’s your friend in many situations, annoying in others. I don’t know of a specific workaround besides telling it “don’t format the text.”
For instance, if I were to copy and paste this, it might show **Emotional and Social Presentation:** because two stars indicate that it needs to be bolded. To have output that isn’t using stars and hashtags, perhaps just ask it “not to bold, not to have headings, just to have a plain text output.”
Where markdown is your friend is when you want to make tables really quickly. I use Bastian sometimes where I’ll have a dataset or screenshots – maybe a screenshot of a WISC table with the domains and standard scores, or a Conners CPT3 with all the ratings. You can take those screenshots (you have to do it iteratively because Bastian won’t take a lot at once), or if you have an Excel file you can just take one screenshot of the whole dataset.
Then you can ask Bastian to take that information and provide you with a markdown-formatted table. It will take the data, aggregate it in the table the way you need, and then you can put it into a Google Doc with one extra step using Gemini (which is included with paid Google Workspace plans).
How many times have you as a clinician agonized over creating tables and moving margins? And “oh no, the cell is here and it threw everything else off”? If we can ask any of the models (but I know Bastian does it because I’ve used it), you can say “give me a markdown of XYZ, aggregate the data in such a way, make sure to include qualitative description, confidence intervals as well as percentile rank, and one row for each measure/heading.” It’s interesting to see what happens when you use markdown as your friend.
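
For reference, here is a sketch of the kind of markdown score table you might ask for or assemble yourself: one row per measure, with a qualitative description, confidence interval, and percentile. The scores below are fabricated placeholders, not real data.

    # Build a simple markdown score table, one row per measure.
    rows = [
        ("WISC-V Verbal Comprehension", 108, "Average", "101-114", 70),
        ("WISC-V Processing Speed", 85, "Low Average", "78-94", 16),
        ("Conners CPT-3 Omissions (T)", 62, "Elevated", "-", 88),
    ]

    lines = [
        "| Measure | Score | Description | 95% CI | Percentile |",
        "| --- | --- | --- | --- | --- |",
    ]
    for measure, score, description, ci, percentile in rows:
        lines.append(f"| {measure} | {score} | {description} | {ci} | {percentile} |")

    print("\n".join(lines))
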
To avoid markdown when you don’t want it, you could add a simple instruction at the bottom of your prompts: “Do not format text.” It would be interesting to see if that eliminates the markdown. And there’s always your good friend Find and Replace for the rest.
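
And if a tool hands you markdown you did not want, a small cleanup pass can strip the most common artifacts before you paste into a report. A blunt instrument, but often enough.

    # Strip the most common markdown artifacts: **bold** markers and heading hashes.
    import re

    def strip_markdown(text: str) -> str:
        text = re.sub(r"\*\*(.*?)\*\*", r"\1", text)                # **bold** -> bold
        text = re.sub(r"^#{1,6}\s*", "", text, flags=re.MULTILINE)  # remove heading hashes
        return text

    print(strip_markdown("## Summary\n**Emotional and Social Presentation:** calm"))
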
Question about using AI with graduate students and trainees
I’m bullish on this, so I’m very excited about how this works. I have PhD students that rotate through my clinic and they use it day one. My philosophy is heavily supervised in the beginning – looking over their shoulder, reviewing everything that happens. Of course my name’s on the document, so I want to be reading things before finalizing them. I’ll bring it back to the student if it’s just a blatant cut and paste.
I do this for one particular reason: I firmly believe that AI isn’t going anywhere, and part of my job as a clinician is to prepare future clinicians to be ready to hit the workforce – not just clinically, but in terms of not getting themselves in trouble doing things they shouldn’t. Furthermore, if I’m doing it myself, I don’t think I should stop my students from doing it either.
That’s where this layer of supervision comes in. At the beginning of rotating through my practice, I’m over their shoulder watching. Towards the end, sometimes I can’t even tell what was written by the student and what was written by some sort of bot. I think it’s great because it’s out there everywhere, and when we can get more comfortable allowing the supervised, intelligent, thoughtful, and considerate use of these models, even in graduate training, I think it’s great.
As a caveat, I’m not allowing my students to take screenshots of datasets and say “write me 7 paragraphs.” If they do that, they’re really good at it because I haven’t been able to catch it yet! But what we do in the beginning is help them refine their prompts when what they get doesn’t make sense. I’m teaching these students how to use AI in a very thoughtful and clinical way to maintain a consistent clinical voice.
I hate writing updates and making charts, etc., so I don’t expect my students to do that either. Being able to teach them how to leverage these tools to create consistent tables or visual output benefits everyone. I’m not from the “we abuse our students because we were abused ourselves” school of thought.
Question about whether you can upload a full score report or interpretive output
Bastian is HIPAA compliant, so technically you could put in the whole interpretation. It depends on the size of the file. There are three ways to do this: you could upload the whole document if it’s not too large, take a screenshot of the scores, or copy and paste the relevant parts.
The two important things there: when we say GPT, we want to make sure that we’re not uploading patient information to ChatGPT. That’s crucial. But if we use a compliant tool that we have a BAA with and we understand their security and we feel comfortable, we can upload sample documents. In fact, I think that’s a great approach.
What you could do, just using Bastian as an example since I’m familiar with it, is upload one report and say “extract my tone, voice, reading level, clinical style” and even ask it to identify “everything else that might be important for another psych report.” And then say, “OK cool, I gave you one, here’s another one. Refine. Here’s another. Refine.” You can play that game for a little bit, just making sure that the context window doesn’t get too big, because if the context window becomes too large, the AI starts to get less effective.
Question about Jenna Washburn’s product
I know Jenna Washburn, and she’ll be at NASP next week, so if anyone’s heading there, you can chat with her. We’ve chatted a few times, and I collaborated on something with her last year, but I’ve never used her product, so I’m not super familiar with it. I do know that she’s super smart.
Question about how to organize chats and keep patient information separate
Some people will stratify them by patient, so they’ll have one chat for Timmy, one chat for Susie, and one chat for background on something else. That’s quite smart because it keeps all the relevant information for that patient in one container. It doesn’t allow anything else to bleed in, because remember: the more we give to a specific context window or container of information, the muddier everything inside of it gets. So keeping it short, strategic, and succinct is a really smart idea.
A friend of mine also mentioned once that if he died, he didn’t worry about his browser history – he worried about his ChatGPT history! Because I’m the same way – I just go there and ask it everything and don’t go to Google anymore. You’d be surprised how many three-message interaction chats I have in ChatGPT. It’s just use it and dump it, use it and dump it.

Creating Tables with AI

Let me demonstrate how to create tables with AI. I’ll use ChatGPT for this example. Let’s say I want to create a table with fake data for the WISC-5, all five domains, the BASC, and Conners CPT3.
[Demonstration of creating a table in ChatGPT]
This is what markdown looks like. When you cut and paste this into a Google Doc or Word document, it’s going to look awful – it makes no visual sense – but that’s OK, because the structure is what you’re really after.
What you would do is copy this and put it in a Google Doc. No one wants a table that looks like this. But if you have a paid Google Workspace, you can go to the Gemini icon and say “take this highlighted text and turn it into a table.” It’ll put it into a preview window, and then you click the insert arrow, and boom – just like that you have a formatted table.
It was crummy data in this example, but you can get in here and do anything you want with it. Let’s say you want that shaded – great. You also want to bold – great. But nevertheless, it’s pretty quick and easy. Just imagine what you can do if you’re dropping screenshots into something. If there’s no patient data associated with it, maybe you’re using Claude and you’re saying, “here are my 5 screenshots. Give me a markdown table with these specific things.” You can be as specific as you want, and then you loft it into Google Doc, open up Gemini, say “please turn this markdown into a table for me,” and it’s done.

Question: Can you use Claude with a BAA?

I don’t believe so. I don’t even know if they have an enterprise version, and I imagine that would be ridiculously expensive anyway. My rule of thumb is: if I don’t have a signed document from the owner or the C-suite of the company, then I won’t put anything into a chat model that I wouldn’t put on a billboard.
But with Notezap, I have a signed, pen-on-paper, scanned and sent to me BAA, so I feel comfortable putting things there. If you use other models like Claude or ChatGPT, I would be cautious and be explicitly clear that you have no patient information. Maybe you take just a screen grab or a section of a document. I’ll hit Command+Shift+4 and stretch it over the area. I’ve had to catch myself a few times because sometimes there’s a name or something identifiable that you have to be cautious of.

AI Types: Automate, Augment, Accelerate

When we think about using AI, I have this model of the four A’s, and I allowed myself a bit of creative license here.
For example, we can leverage AI to be more savvy in how we handle intake information. If you’re an assessment psychologist, you’ve probably sent forms to a client because they’re bringing their child in for an evaluation. They send them back, the data sits there, and then you have an intake interview where you ask all the same questions and they give all the same answers. You wish you had had some follow-up prior to your intake.
The way I think about augmentation is doing some real-time thinking. One feature I’ve implemented is that the system watches intake forms and if it detects a signal about something, it can suggest additional screeners or questions. It’s doing some thinking for you, but it’s not clinically and diagnostically focused – it’s information gathering. The goal is that we can use these tools to augment our pre-intake work so we have more refined information from the start.
Furthermore, it gives you more information to find things in the corners – things we either run out of time for or didn’t have enough signal to pick up on initially. Then, when we review the information, we really start to see that there was something there.
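
As an illustration only – the presenter’s actual system isn’t shown here – a toy, rule-based version of that “watch the intake form for signals” idea might look like the sketch below, with made-up keywords and suggestions.

    # Map keywords in intake form responses to suggested follow-up screeners.
    SIGNALS_TO_FOLLOWUPS = {
        "sleep": "Consider adding a sleep screener before the intake interview.",
        "worry": "Consider adding an anxiety screener (e.g., GAD-7) before the appointment.",
        "reading": "Consider supplemental academic and reading history questions.",
    }

    def suggest_followups(intake_text: str) -> list:
        text = intake_text.lower()
        return [tip for keyword, tip in SIGNALS_TO_FOLLOWUPS.items() if keyword in text]

    form_response = "Parent reports trouble with sleep and constant worry about school."
    for suggestion in suggest_followups(form_response):
        print(suggestion)
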
This is just one way you might be able to augment your assessment workflow. I’m not going to ask my AI bot to do my job for me, but I’ll ask it to do all the things I probably pay someone 20-25 dollars an hour to do. Therefore, when I do have someone come in and do work at the office, they’re leveraging their resources in a way that is hopefully fulfilling to them, but isn’t just bouncing around on a keyboard sending links to things.
The big trend is how do we do it faster? How do we create a better experience for our patients? And I hope that’s where many people are landing, because if you’re like me, we have year-long wait lists. I myself am booked until August, which is an excruciating process for patients – they have to wait forever. If we ourselves, as clinicians, become more efficient, I think everyone wins. We can continue that flywheel and help so many more people.
One of the things that we can think about in terms of accelerating our process is using an AI scribe. Notezap is the one that I have used. It has some pretty basic prompting out of the box, but I’ve got some prompts in that PDF that I hope you’ll use.
An AI scribe works like this: you turn on an iPad or your phone, or you have your laptop open, and you’re having a conversation with a patient – in this case, an intake session. You go through your clinical interview. Prior to using a tool like this, you would then take your notes and dictate or type them up or avoid it until you eventually had to get that information out of your brain and onto some sort of document.
The benefit of using an AI scribe is that it will record and transcribe that session. It brings up the idea of consent and transparency, but nevertheless, it records the conversation in a way that produces an output you can use for an assessment report. I used it most for intakes where we’re asking all the questions we typically need to communicate in the background section of a document. If I can do it with just a few clicks of a button, go in and cross-check against my notes or what they sent me in PDFs, it gives me a framework. I myself work much more efficiently as an editor than as a narrator or as someone organically creating documents.
A lot of other people will use AI scribes for their feedback sessions. The trick of the trade is that if you’re using an AI scribe for your feedback, you almost want to intentionally follow an internal script because if you know the scribe is listening, you want to make sure you’re feeding it the information. That, in conjunction with a sophisticated prompt, can produce good output. Much of that output can then be leveraged for a summary section or integrated summary section, a rationale around diagnosis and rule-outs, and recommendations.
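
In pseudocode terms, the scribe workflow is a two-step pipeline: transcribe, then apply a section-specific instruction to the transcript. The sketch below uses stand-in functions; it is not Notezap’s API, and the prompt wording is illustrative.

    # Generic scribe pipeline: recording -> transcript -> section draft.
    def transcribe(audio_path: str) -> str:
        raise NotImplementedError("stand-in for the scribe's recording/transcription step")

    def send_prompt(prompt: str) -> str:
        raise NotImplementedError("stand-in for the language model call")

    def draft_background_section(transcript: str) -> str:
        prompt = (
            "Take the role of a clinical psychologist. From the intake transcript "
            "below, draft the background/history section of an assessment report. "
            "Do not make anything up; leave out anything that was not discussed.\n\n"
            + transcript
        )
        return send_prompt(prompt)

    # transcript = transcribe("intake_session.m4a")
    # draft = draft_background_section(transcript)  # then cross-checked and edited by the clinician
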

Notezap Demonstration

Notezap has a landing page with different templates correlated with different kinds of sessions. These are the things I have used in the past. I have one for intake, and in November I decided I wanted to change it. You can have as many templates as you want – some for kids, some for adults, some for specific workflows. You can really get in here and customize it. The more nuance you have that’s consistent with a specific workflow, the more you get out of it.
Each section lets you tell the AI what to do with the transcript. Many people I’ve worked with who use Notezap will have each of their developmental history sections here, but this is for a feedback session.
In this section, I say “take the role of a clinical psychologist” – so I’m setting that framework – and “write a summary section of a psych report.” I’m very specific: “Don’t make anything up. Don’t hallucinate.” You can really say that and it listens. I want it to be titled “Summary” and I indicate my desired length. Historically I have not written 6-8 paragraphs for an assessment summary, but I ask it to do too much work so I can whittle it down. I’d rather have 6-8 paragraphs, cut out the sentences that don’t work, throw it back into Bastian and say “make this better” with more strategic prompting.
I tell it to “integrate findings in a comprehensive way” and “make sure you’re including strengths and weaknesses that we observed.” You can also say “use this language” and “don’t use this language.” The first few times I used Notezap, I was getting very frustrated because it kept saying “the therapist said” or “the psychologist referred to,” and I didn’t like that. I started tinkering with it and thought, “I tell my kids not to do something and they don’t listen. Maybe if I tell the AI not to do something, it will actually listen.” So I added instructions like “don’t talk about me” or “don’t refer to me as a therapist.” If you have specific requirements about how you refer to a patient, do that as well – like “refer to the patient as [first name]” or “Timmy” or whatever you prefer.
I also asked it to do a cognitive ability summary, emotional functioning summary, and a general summary of overall findings. I ask it to note my recommendations, but I structure my recommendations in a psych assessment report into three different categories: medical, psychological, and executive functioning (as much of my work is ADHD). Then, if there are any other general recommendations, they go at the end.
This is just to put text on paper for me. I don’t need it to come up with new recommendations – I need it to listen to what I was saying during the session and then put that into the document that I can, of course, edit.
I always ask for support recommendations for the patient. This came through feedback I was getting from patients – maybe a husband would say “What can my wife do?” or parents bringing a child with behavioral issues saying “We don’t know what to do.” I started thinking about how to address this consistently, so I added this to my feedback template. The AI would listen to the transcript, be a little creative, and I’d edit it, but it was a way for me to get one thought turned into multiple paragraphs.
Furthermore, I tend to write in a more strengths-based way, so I always like to point out unique characteristics. My goal when using these tools was to get as much information as I can, sift through it, pull out what’s relevant, summarize, and then I was done. It was still tremendously more efficient than anything else.
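
Pulling the pieces described above together, a feedback-session template is essentially one instruction per section, each applied to the same transcript. Here is a sketch laid out as plain data; the wording paraphrases the description above rather than quoting the actual Notezap template.

    # One instruction per report section, all applied to the same transcript.
    feedback_template = {
        "Summary": (
            "Take the role of a clinical psychologist. Write a summary section of a "
            "psych report titled 'Summary', 6-8 paragraphs, integrating findings "
            "comprehensively and including observed strengths and weaknesses. Do not "
            "make anything up. Do not refer to me as the therapist; refer to the "
            "patient by first name."
        ),
        "Cognitive Abilities": "Summarize the cognitive ability findings discussed in the session.",
        "Emotional Functioning": "Summarize the emotional functioning findings discussed in the session.",
        "Recommendations": (
            "List the recommendations I stated during the session, grouped into "
            "medical, psychological, and executive functioning, with any general "
            "recommendations at the end. Do not invent new recommendations."
        ),
        "Support Recommendations": "Summarize supports discussed for partners, parents, or family members.",
        "Strengths": "Note unique characteristics and strengths highlighted during the session.",
    }

    for section, instruction in feedback_template.items():
        print(section + ": " + instruction[:60] + "...")
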
The Notezap subscription is, I think, $25 a month plus a bit more depending on usage. When I was using it heavily last year, I think I spent about $75, but I see about 35 clinical hours a week, so that was heavy usage. Nevertheless, that’s money well spent considering the time savings, and furthermore the comfort I had with it, because they signed a BAA with me as well.
I’ll show you what my feedback session output from that prompt looks like. This is what someone was talking about earlier – this is markdown text. I didn’t edit much here – I changed some information and removed anything that could be identifiable. But this is the beginning of what the output was for a feedback session using that exact prompt series.
We got a 6-8 paragraph summary, cognitive abilities section, emotional section. I gave the PAI (Personality Assessment Inventory) and it really started to pull out specific things – it talked about the clinical scales, validity scales, and self-concept. Those are key terms within the PAI output that I review in my sessions.
Again, this is not intended to be your second brain, but if you are strategic in how you ask these tools to output their text, you’d be surprised by both the amount and quality of output, especially if you continue to iterate and train it repeatedly.

Ethics and Responsibility

When we think about ethics and responsibility with all these new technologies, it creates a bit of anxiety. It seems like every morning there’s a new version of something. ChatGPT now has a $200/month plan that will write pretty great research papers in maybe 15 minutes. I’m continually amazed at the level of output and consistency they’re achieving. I’m not paying $200 a month for one tool, but I am probably paying $200 a month for about 10 different tools.
When we think about the ethics and responsibility of our decisions, we have to apply frameworks to why we’re making these choices. Here are four main areas to pay attention to:
First and foremost, you’re a clinician. Your clinical judgment matters. AI will make suggestions, and you’ll evaluate them: yes, no, maybe. You really have to use the sniff test early on because AI certainly does hallucinate. We can combat that by being more structured in our data input and being specific – perhaps even overly specific – in our prompting. Garbage in equals garbage out.
You have to think about whether you’re going to discuss AI use with your patients. I’ve worked with clinicians who say, “No, they don’t need to know that I’m using Gmail or the name of my EHR. Why would they need to know what AI tools I’m using?” And there are other people who are always concerned about liability and want to disclose everything. That’s fine. It’s important to figure out where you are so you can feel confident that if anything comes into question, you have a thoughtful, documented decision-making process.
Of course, when we’re thinking about using AI tools, any data that leaves your control needs to be held securely. So if you’re using ChatGPT, don’t throw in a psych report and ask what’s going on with Timmy. I’ll reserve that for Bastian. But I’ll absolutely use ChatGPT to say “I’m stuck, help me think through this” or “compare and contrast the symptom presentation in a 15-year-old boy who uses cannabis seven times a day with trauma versus ADHD.” You have to think about how to use those non-HIPAA tools appropriately while using HIPAA-compliant tools like Bastian and Notezap more strategically for client-specific information.
I often laugh when I think about bias and awareness in AI. Last year, there was an absolute mess with Google trying to adjust its bias meters to combat cultural issues. We know that bias exists and that these companies have pulled some levers. Furthermore, all these models are trained on the entire internet, which has its own biases. So we have to be aware that the output from these large language models needs to go through our clinical sniff test. More importantly, your name is on that document and anything you use it for in clinical interaction. You want to make sure you’re paying attention to any bias, especially since we’re working with individuals who, although they may come from specific backgrounds, still have a lot of variance within those groups.

Looking Forward

Looking forward, I think things are going to change pretty quickly. In fact, they already have. Most of the tools up until about six months ago were pretty basic by today’s standards. We were using them to automate certain things, but they were limited in capacity and couldn’t add much nuance.
That’s totally changed. As we’re in this AI renaissance, we can leverage these tools in many efficient ways. Not only can we use AI to rewrite things, which I think is what everyone does, we can use it for pattern recognition. We could give it several screenshots of data, and it’s very likely to figure out what the measure is, what the domains are, what those domains measure, what the results are, and put together tables and integrative summaries.
It’s becoming a far more robust and sophisticated AI environment. Because of that, it becomes even more appealing, and therefore we have to be even more careful about how excited we get. Shiny object syndrome is real, and I know how much we hate writing. Put those two things together, and sometimes clinicians could make risky decisions.
As things continue to emerge and pick up even more pace – which they will, because AI is here to stay and will only get better and faster – we’re all going to be inundated with it. I hope some of our conversation today can help you be more informed about how you choose to use AI, both personally and professionally. I hope it helps you identify how to incorporate these tools responsibly into your clinical practice.

Conclusion

In summary, AI tools like Bastian, Notezap, Claude, and ChatGPT can significantly enhance your clinical workflow when used thoughtfully and responsibly. The key is to approach them with a clear understanding of:

  1. Where you stand on the AI adoption spectrum
  2. How much structure and guidance you need to provide in your prompts
  3. What level of transparency you want to maintain with clients
  4. How to keep sensitive information secure
  5. The importance of your clinical judgment as the final authority
Remember that these tools are meant to augment your work, not replace your expertise. As you continue to explore and refine your use of AI, focus on iterative improvement, be mindful of potential biases, and always prioritize client care and ethical considerations.
Whether you’re just starting out or already integrating AI deeply into your practice, there’s always room to learn and grow. The landscape is evolving rapidly, but with thoughtful implementation, AI can help reduce administrative burden and allow you to focus more on what matters most – providing quality care to your clients.
