TITLE: Criteria for an AI to write psychotherapy chart notes (or medical chart notes)

Note: Reposting to get it out to a few additional groups.

I am informed that a new product called Mentalyc has entered the market. Its mission is to write psychotherapy notes for clinicians AND to gather a non-identifiable dataset for research into clinical best practices.

I have no firm opinion yet on Mentalyc, but it's expensive ($39-$69 per month per clinician) and I'd personally need to know a lot more about what's in that dataset and who is benefiting from it.

**So I'm asking the community for thoughts on what acceptable ethical and practical criteria would be for an AI to write psychotherapy notes or medical notes.**

Here are MY thoughts so far:

1) REQUIRED: The AI either:

1a) Invents NOTHING and takes 100% of the information in the note from the clinician, or

1b) Prompts the clinician for additional symptoms often present in the condition before writing the note, or

1c) Presents a very clear review page before the final note is written that lets the clinician approve, delete, or modify anything the AI got creative with and was not explicitly told to include. (For example, in an experiment with Bard, a clinician found that it added sleep problems as an invented symptom to a SOAP note for a person with depression and anxiety. That is a plausible, even likely, symptom addition, but it would still have to be approved as valid for the person in question. A rough sketch of such a review step appears after this list.)

2) OPTIONAL: The AI runs on MY computer and reports NOTHING back to the Internet. This will not be on everyone's list, but I've seen too many subcontractors play fast and loose with medical privacy, and there is more money to be made in data sales than in clinician subscriptions to an AI.

3) OPTIONAL: Inexpensive. (Several free AI tools are emerging.)

4) OPTIONAL: Open Source

5) Inputting data into the AI must be less work than simply writing the note personally. (Maybe a complex tablet-based clickable form? Then again, a pretty high percentage of a note can be in clickable-form format anyway.)

6) The AI does NOT record the entire session and then write a note based upon what was said. (It might accept dictated note directions, much as doctors dictate notes to transcribers today.)
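
As a rough illustration of criterion 1c, here is a minimal sketch (in Python) of a review step that flags sentences in an AI draft containing substantive terms the clinician never supplied. It is a naive keyword check with an illustrative boilerplate list, not a real grounding algorithm; a production tool would need something far more robust.

```python
# Naive sketch of criterion 1c: surface anything in the AI's draft
# that cannot be traced back to the clinician-supplied input.

# Common note boilerplate we don't treat as "invented" (illustrative).
BOILERPLATE = {"client", "reports", "report", "session", "presented"}

def flag_unsupported(draft: str, clinician_input: str) -> list[str]:
    """Return draft sentences containing substantive words absent
    from the clinician's input, for explicit approval or deletion."""
    supplied = {w.lower().strip(".,;") for w in clinician_input.split()}
    flagged = []
    for sentence in draft.split(". "):
        words = {w.lower().strip(".,;") for w in sentence.split()}
        novel = {w for w in words - supplied - BOILERPLATE if len(w) > 5}
        if novel:
            flagged.append(f"{sentence!r} -- unsupported: {sorted(novel)}")
    return flagged

clinician_input = "depressed mood, anxiety, CBT intervention, weekly session"
draft = ("Client reports depressed mood and anxiety. "
         "Client also reports insomnia.")
for line in flag_unsupported(draft, clinician_input):
    print(line)  # flags only the sentence with invented 'insomnia'
```

In this toy version of the Bard example above, only the invented sleep complaint gets flagged for the clinician to approve or delete.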

I think I may be envisioning a checkbox and drop-down menu form, along with a space for the clinician to type a few keywords and phrases; the AI (on my laptop) then takes this and writes a note (possibly just a paragraph to go along with the already existing form in the official note). It's early days in my thinking.
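
To make that concrete, here is a minimal sketch of the form-plus-keywords idea under criteria 1a, 2, and 5: checkbox/drop-down selections are assembled into an explicit prompt and sent to a model running entirely on the local machine. It assumes the llama-cpp-python bindings and a locally stored GGUF model file; the form fields, prompt wording, and model path are all illustrative, not a recommendation of any particular product.

```python
# Sketch only: assemble clinician form selections into a prompt for a
# local model, so no session data leaves the machine (criterion 2).
from llama_cpp import Llama  # assumes llama-cpp-python is installed

# Hypothetical output of a checkbox/drop-down note form (criterion 5).
form_data = {
    "session_type": "individual, 50 minutes",
    "interventions": ["CBT thought record", "behavioral activation"],
    "reported_symptoms": ["low mood", "worry about work"],
    "clinician_keywords": "homework reviewed; client engaged; no SI",
}

def build_prompt(form: dict) -> str:
    """Turn form selections into explicit instructions, telling the
    model to use ONLY the supplied facts (criterion 1a)."""
    facts = "\n".join(f"- {k}: {v}" for k, v in form.items())
    return (
        "Write one paragraph of a psychotherapy progress note.\n"
        "Use ONLY the facts below. Do not add symptoms, history, "
        "or interventions that are not listed.\n"
        f"Facts:\n{facts}\nNote paragraph:"
    )

# Model path is illustrative; any locally stored GGUF model would do.
llm = Llama(model_path="./local-model.gguf", verbose=False)
result = llm(build_prompt(form_data), max_tokens=256, temperature=0.2)
draft = result["choices"][0]["text"].strip()
print(draft)  # draft still requires clinician review before filing
```

The resulting draft would then go through a criterion-1c review step, like the one sketched above, before anything is filed.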

--
Michael Reeder, LCPC

@psychotherapist.a.gup.pe @psychotherapists.a.gup.pe @psychology.a.gup.pe @socialpsych.a.gup.pe @socialwork.a.gup.pe @psychiatry.a.gup.pe

#mentalyc #baa #hipaa #bias #ethics #ethicalai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #bard #security #dataanalytics #artificialintelligence #copyai #simplified #writesonic #rytr #writecream #creaitorai #quillbot #grammarly #smartcopy #textblaze #privacy #psychology #counseling #socialwork #psychotherapy #research #soap #ehr #mentalhealth #technology #psychiatry #healthcare #medical #doctor
