EMAIL LIST: clinicians-exchange.org & LEMMY: lem.clinicians-exchange.org
.
I'm a bit behind on this news cycle, so you may have read about these
issues. _My point is to tie them to data privacy and OUR clinical
practices._

*This below* is one of the main reasons I keep throwing a fit about
data leaks from HIPAA BAA subcontractors -- whether or not they end up
being legally PHI, and despite the fact that not many therapists are
interested in the topic.

*If an Attorney General is willing to go after unredacted medical
records* in-state or out-of-state, then they are certainly *capable of
getting data from data brokers and marketing firms* (or Google,
Facebook, LinkedIn, Twitter, etc.).

Closer to home -- it's not much of a stretch to speculate that
psychotherapists in blue states will get subpoenas for chart records
pertaining to clients who moved to a red state shortly after counseling
and then got in trouble for whatever the legal medical issue of the
moment is (abortion, birth control, transgender concerns, fertility
clinic involvement, etc.).

*Here’s why Tennessee’s AG wants access to reproductive medical records
— including for out-of-state abortions*
wpln.org/post/heres-why-tennes
/"State attorneys general in 18 states — including Tennessee’s — are
fighting with the Biden Administration over medical records related to
reproductive care."/

*Tennessee A.G. weaponizes private medical records in GOP campaign
against trans people*
the-rachel-maddow-show.simplec
/Maddow podcast recording. Talks about attorneys general from 16 states
writing a letter to President Biden asserting their right to go after
medical records located outside their states./

*Biden’s HIPAA expansion for abortion draws criticism, lawsuit threats*
politico.com/news/2023/07/18/b
/Biden administration trying to shield abortion medical record data
located in blue states from red state Attorney General probes./

In case you are interested, here are some of my past articles on medical
data privacy and various vendors:
*hipaalink.net security initial testing*
lem.clinicians-exchange.org/po

*Nearly All Hospital Websites Send Tracking Data to 3rd Parties,
Endangering Pt Privacy—Common Recipients: Alphabet, Meta, Adobe, AT&T*
lem.clinicians-exchange.org/po

*To become an Amazon Clinic patient, first you sign away some privacy.
You agreed to what? The ‘HIPAA authorization’ for Amazon’s new low-cost
clinic offers the tech giant more control over your health*
lem.clinicians-exchange.org/po

*FTC, HHS warn health providers not to use tracking tech in websites, apps*
lem.clinicians-exchange.org/po

*Would you want AI used to help write a medical or psychotherapy chart
note?* *(Ongoing Poll)*
mastodon.clinicians-exchange.o

*AWS rolls out generative AI service for healthcare documentation software*
lem.clinicians-exchange.org/po

I'm not posting this to be political (although it certainly is) -- *I'm
posting it as a legit medical records concern for all of us regardless
of each individual reader's political positions.  We need -- as
therapists -- to care about data leaks and privacy.*

+++++++++++
  



@psychotherapist @psychotherapists
@psychology @socialpsych @socialwork
@psychiatry

@infosec

#ai #collaborativehumanaisystems #humanawareai #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #ehr #medicalnotes #progressnotes #legal #lgbtq #abortion #transgender #mentalhealth #technology #psychiatry #healthcare #patientportal #hipaa #dataprotection #infosec #doctors #hospitals #amazon #baa #businessassociateagreement

Last updated 1 year ago


TITLE: AWS rolls out generative AI service for healthcare documentation
software

Yeah... If it's going to be worth using, it would have to listen to the
whole visit... But this needs more thought.  In my past quick
experiments, it took 90% of the effort just typing directions to get an
AI to generate a halfway okay note.  So the AI (I think) would have to
listen in and then do the note to be worth it.

Would we want this?  Can we trust this?

--Michael

+++++++++
------------------------------------------------------------------------
"Amazon Web Services announced Wednesday a new AI-powered service for
healthcare software providers that will help clinicians with paperwork."

"AWS HealthScribe uses generative AI and speech recognition to help
doctors transcribe and analyze their conversations with patients and
drafts clinical notes, the company announced Wednesday at its AWS Summit
New York."

fiercehealthcare.com/ai-and-ma

------------------------------------------------------------------------
Posted by:
Michael Reeder LCPC
Baltimore, MD

  


@psychotherapist @psychotherapists
@psychology @socialpsych @socialwork
@psychiatry

#ai #collaborativehumanaisystems #humanawareai #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #ehr #medicalnotes #progressnotes #amazon #mentalhealth #technology #psychiatry #healthcare #patientportal

Last updated 1 year ago

It occurred to me this morning that by the time I fill out the form and write two or three sentences, I've already done all the work that is needed for an official note (after adding start and end times, diagnosis, name, client age, and a few other elements to the form). There is no need to convert it all to narrative -- it can stay in form factor mostly.

So -- while I want an AI I can trust to help with notes (and this one may grow into such) -- right now the effort of getting it to create a note is about exactly equal to the effort of just writing it myself anyway.
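The "stay in form factor" point above can be sketched in a few lines -- rendering the official note directly from the structured fields, with no narrative conversion and no AI involved. (The field names here are hypothetical, not any actual EHR's schema.)

```python
# Hypothetical form-to-note renderer: the structured fields a clinician
# already fills out are enough to produce the official note as-is.

from dataclasses import dataclass

@dataclass
class SessionForm:
    client_initials: str
    client_age: int
    diagnosis: str
    start_time: str
    end_time: str
    summary: str  # the two or three sentences the clinician writes anyway

def render_note(form: SessionForm) -> str:
    """Lay the form fields out as a chart note -- no narrative conversion needed."""
    return (
        f"Client: {form.client_initials} (age {form.client_age})\n"
        f"Diagnosis: {form.diagnosis}\n"
        f"Session: {form.start_time}-{form.end_time}\n"
        f"Summary: {form.summary}"
    )

note = render_note(SessionForm("J.D.", 34, "F43.0 Acute Stress Disorder",
                               "10:00", "10:45",
                               "Client reports workplace stress; coping skills reviewed."))
print(note)
```

No sixty-second generation step, nothing sent to a vendor, and nothing to cut and paste back.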

@psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry

#mentalhealth #technology #psychiatry #healthcare #patientportal #autonotesai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #ehr #epic #medicalnotes #progressnotes #microsoft

Last updated 1 year ago

*If* AutoNotes.ai just plugged in a free AI available from elsewhere with a front-end they created, I wonder what legalese governs use of the AI on the backend? I am not a lawyer and none of us know the licensing agreement or ownership of the AI which AutoNotes.ai is using.

And, well hey -- while I'm just busy making up wild speculations -- let's play with this a bit:

So I know AutoNotes.ai is sending SOME kind of information to Google tracking services because my browser plug-ins are preventing this data from being sent to Google and telling me so.

Let's just suppose for a moment that they are using Google's Bard AI on the back-end. Because -- why not -- there is no PHI being collected anyway...

Meanwhile, both the therapist and the client are using the Google Chrome web browser for televideo. Or maybe they are using Gmail and the Gmail text is being mined. Or the data input for the note is sent along to Google datamining regardless of whether or not the Bard AI is used...

Let's go further out on our hypothetical limb and say that the therapist sees only three clients that day. The therapist creates three notes in AutoNotes.ai that day...

There's now a better than fair chance that one of those three unnamed clients has Acute Stress Disorder (like in my example above). If Google has gone to the bother of devoting the tracking power to it, they might know -- from Gmail, or Bard, or data aggregation -- the names of the clients the therapist saw that day.

Of course, I really am making this all up -- we are just not given enough data to know what's real and false anymore.

Here is a paragraph from the welcome message they emailed me:

"Here are a couple of simple suggestions, first, complete as thorough a Mental Status Exam (MSE) as possible, submit a few sentences related to the session and theme, and include treatment plan goals, objectives, and strategies; this will ensure the best possible clinical note. Please revise and submit your revised version inside the app! This will assist all of us in building the greatest tool on earth for the field!"

Well, okay -- I do want the AI to get better at its job...

But this DOES mean they are keeping a version of what you provide, doesn't it?

@psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry

#mentalhealth #technology #psychiatry #healthcare #patientportal #autonotesai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #ehr #epic #medicalnotes #progressnotes #microsoft

Last updated 1 year ago

This is for a new product called AutoNotes.ai that will create progress notes for you for about $14+ per month.

I am tentatively interested in this and signed up for a free trial.

I have concerns (below) but am hopeful this continues to improve.

~~~
The system has no BAA agreement for HIPAA because they claim not to collect any PHI. This may be true.

Their terms of service and privacy policy may be amended at any time, and the only notification provided will be on their website. I am wary of this now that I have been burned by such stealth changes by one of my own BAA subcontractors.

Information for each client will have to be entered from scratch each time as they do not store data about clients. For the free demo, it takes about 60 seconds to generate each note. You then cut and paste it into your EHR.

While they claim to collect no PHI, they do send SOME data of SOME sort to several tracking systems and data aggregators. At best, they are tracking YOU -- the clinician. At worst -- they may be sending some of the data you enter to data aggregators which -- hopefully -- do not have data from other sources to be able to pin down the individual clients you are describing.

DATA IS SENT TO THE FOLLOWING OUTSIDE TRACKING COMPANIES ON BOTH THEIR DATA INPUT PAGE & THE GENERATED NOTE PAGE:

PRIVACY BADGER BLOCKED:
connect.facebook.net
www.googletagmanager.com
s.gravatar.com
snap.licdn.com
i0.wp.com

GHOSTERY BLOCKED:
Google Tag Manager
LinkedIn Analytics
Facebook Connect
Gravatar
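What blockers like these are doing can be approximated in a few lines of code -- scanning a page's outbound references against a list of known tracker hosts. (The sample HTML below is illustrative, not AutoNotes.ai's actual markup; the host list matches what the blockers flagged.)

```python
import re

# Illustrative list of tracker hosts (the ones Privacy Badger flagged above).
TRACKER_HOSTS = {
    "connect.facebook.net",
    "www.googletagmanager.com",
    "s.gravatar.com",
    "snap.licdn.com",
    "i0.wp.com",
}

def third_party_trackers(html: str) -> set[str]:
    """Return tracker hosts referenced by src/href URLs in the page."""
    hosts = re.findall(r'(?:src|href)=["\']https?://([^/"\']+)', html)
    return {h for h in hosts if h in TRACKER_HOSTS}

sample = (
    '<script src="https://www.googletagmanager.com/gtm.js"></script>'
    '<img src="https://connect.facebook.net/en_US/fbevents.js">'
    '<link href="https://example.org/style.css">'
)
print(sorted(third_party_trackers(sample)))
```

Of course this only shows *who* a page talks to, not *what* is sent -- which is exactly the question I can't answer about AutoNotes.ai.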

The following is an example of the note generated for a fake client with Acute Stress Disorder:

%GREEN = WHAT I REPORTED%
&RED = INVENTED BY AI, NOT REPORTED BY CLINICIAN&
BLACK = I HAVE NO STRONG OPINION

Subjective:
Client &reported increased levels of stress, particularly in the& %workplace. Described experiencing anxious mood and labile mood, but was able to demonstrate appropriate behavior and normal speech throughout the session. Client was self-reflective% and &engaged& %during the 45-minute video session%, &which allowed for a comprehensive exploration of the current challenges they face&. The client recognized the need to address these issues in order to improve their overall well-being.

Objective:
During the session, several areas were discussed. &Client was open and honest& about %feeling overwhelmed and stressed at work, further precipitating their anxiety. They were able to identify specific work situations that were particularly difficult for them to navigate. It was noted that the client was able to maintain self-reflective behavior during discussions,% which will be beneficial for making progress in therapy.

Assessment:
%The client is showing signs of progress in therapy, taking active steps to address and work through their challenges with stress and anxiety.% Their willingness to participate and self-reflect in session indicates a strong commitment to achieving their therapeutic goals. %Current therapeutic interventions, which are directed at helping the client build skills and coping strategies, are having a positive impact on the client's progress.%

Plan:
%In order to lower workplace stress and continue advancing in therapy, the client agreed to set specific objectives for themselves. They plan on talking to their manager about ways to manage their workload or address any outstanding concerns. Additionally, they will begin practicing meditation four times a week during their lunch breaks, as a means of managing stress and promoting relaxation.% &Continued exploration of these& and other stress reduction &strategies will be a focus in future sessions.&

Hmmm... My take-away is that this needs more work (that's fine); I want to know why they have to report to LinkedIn, Facebook, Gravatar, and WordPress while I'm logged in, and what they report; and the system IS inventing minor elements that I did not tell it to add. For example, while I reported the client was overwhelmed and stressed, I did not say the client was open and honest about it. I told the system the client was "progressing", but never said that increased levels of stress were reported in this session.


@psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry

#ai #collaborativehumanaisystems #humanawareai #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #ehr #epic #medicalnotes #progressnotes #microsoft #mentalhealth #technology #psychiatry #healthcare #patientportal #autonotesai

Last updated 1 year ago

[See earlier message -- continuing]

6) Acceptance (and the Arms Race): I think the main thing in the way of AIs becoming therapists is that people won't trust this. The main thing that will make it happen is persuasion that they can be trusted (either by AIs themselves or marketing from insurance companies wanting to save money by not paying human therapists). This is where the current arms race among all the major AI companies comes in. Microsoft is incorporating AI into Bing & other programs. Snapchat now has an AI present 24/7 willing to talk to any of millions of lonely souls about anything. A few years of people chatting with AIs on non-therapeutic topics should build trust of AI to perform therapist jobs.

So... about the only thing we can be sure of is that we are going to get surprised by the abilities of AI to become therapists. But yeah, this is really about so much more than therapist jobs...

The presenters have formed The Center for Humane Technology ( humanetech.com ) -- "Our mission is to shift technology towards a more humane future that supports our well-being, democratic functioning, & shared information environment." A very quick look reveals lots of financial backing (including the Ford Foundation) but no obvious support from the major companies racing to deploy AI as fast as possible into every aspect of our lives before we can see where this is going & create laws to deal with it (OpenAI, Microsoft, Google, Facebook, etc.)

The A.I. Dilemma - Tristan Harris & Aza Raskin - Center for Humane Technology - March 9, 2023
youtube.com/watch?v=bhYw-VlkXT

[2 of 2 messages]

--
@psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry


#bias #ethics #ethicalai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #security #dataanalytics #artificialintelligence #hipaa #privacy #psychology #counseling #socialwork #psychotherapy #research #soap #ehr #mentalhealth #technology #psychiatry #healthcare #medical #doctor #humanetechnology #thesocialdilemma

Last updated 1 year ago


TITLE: The AI Dilemma: Why Therapist Jobs ARE Endangered
(Sub-Title: ... & this is not really the important part.)

The A.I. Dilemma - Tristan Harris & Aza Raskin - Center for Humane Technology - March 9, 2023
youtube.com/watch?v=bhYw-VlkXT

I'm being myopic focusing in on threats to psychotherapists. This is a MUCH BIGGER issue. A "civilization changing" event as one of the presenters puts it -- on par with "the advent of religion" & more dangerously profound than the Manhattan Project.

Give it 5-10 minutes, then decide if you are going to watch the rest of it. The presenters are behind the Netflix documentary "The Social Dilemma".

A few weeks ago I posted that therapists are in no immediate danger from AI based upon ChatGPT's current performances being used as a therapist in YouTube videos.

Also, we have members of our community who believe therapists are in no danger due to people wanting a human connection & the desire for in-person counseling. I think that distrust will slow the adoption of AI therapists, but campaigns from insurance companies & financial pressures will make AI therapists a thing eventually.

This presentation introduces new factors that are going to speed up adoption of AI Therapists. I put these in order of persuasiveness from least to most:

1) Mind Reading: No really. Literally. (At least for visual processing)
Current technology -- An AI is allowed to watch both videos that a person is watching & their fMRI brainwave/bloodflow patterns. After a time, the AI is no longer allowed to watch the actual video -- just the person's brainwaves. A brand new video is introduced. Based on bloodflow, the AI is able to perfectly describe what the person is watching.

I don't see psychotherapy clients putting on brainwave-reading helmets (or entering an fMRI) soon, but we are potentially facing AI Therapists who can read the client's mind.

2) "This is the year that all content-based verification breaks": AI can now hear 3 seconds of a person's voice & keep speaking in it. AI can now make realistic filters for video to make a person look like someone else. This does not immediately wreck psychotherapy, but it sure does mean that an AI could mimic people perfectly. [It's also -- I grudgingly admit -- an argument for in-person therapy. This is ALREADY being used in scams to trick older people into sending their "kids" emergency bail money, etc.]


3) "New capabilities suddenly emerge": Just by adding more & more data, AIs develop new abilities to do things that the programmers never intended. So, for example, an AI trained on all the Internet, but only trained to do Q&A in English suddenly developed the ability to do Q&A in Persian. In another example, ChatGPT silently taught itself to do research grade chemistry. This ability of ChatGPT to do chemistry like this was unknown before it was made available to millions of people (who can now learn how to make bombs from it). [I'm not dedicating much text here to a major theme of the presenters -- that all fields of endeavor are "language". Give an AI enough data & it can find the patterns in & between anything -- languages, visual processing, math, video signal creation, art, political persuasion, etc. It's getting creative in surprising ways. We now have the equivalent of the Star Trek universal translator.]

4) Theory of Mind is rapidly accelerating: Theory of Mind is roughly the ability to guess what you are thinking, thereby having the ability to influence you. As of November 2022 AI was up to a 9-year-old human level of ability.

5) Persuasion (AI feeding AI -- auto-generation): AI can generate data, test it to see if it helps it perform better on tests, then just keep the self-generated data that was useful. So it can train itself on potentially anything. From writing faster code, to rewriting its own code, to becoming stronger at persuading humans. [AI is already hard at work persuading humans in social media.] So... this could certainly be applied at some point to becoming a highly persuasive therapist for human clients.

[1 of 2 messages]

@psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry

#bias #ethics #ethicalai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #security #dataanalytics #artificialintelligence #hipaa #privacy #psychology #counseling #socialwork #psychotherapy #research #soap #ehr #mentalhealth #technology #psychiatry #healthcare #medical #doctor #humanetechnology #thesocialdilemma

Last updated 1 year ago

TITLE: Healthcare From Best Buy??

Here's an article no one needs to read, but I'll link to it below in case anyone wishes to relish and dwell in the absurdity.

So, YES, *Best Buy* -- the company that just finished all but going out of business and closing hundreds of stores nationwide -- is now using
their "expertise" to provide you with healthcare. That's right, why not buy healthcare from the sales people who used to sell you TVs and cordless phones at your local mall?

As my wife quipped -- "instead of Geek Squad, it's now Doc Squad".

The entirety of the article is just full of meaningless phrases so I'll save you the trouble of clicking through by listing a few choice quotes:

"During a keynote address Wednesday morning at HIMSS23, executives at
these so-called 'disrupters' shared their vision for the future of care
delivery." (That's what we need -- tech company disruption... This was their KEYNOTE address!...)

"All three agreed that the current healthcare system is broken. 'We’re not thinking enough like a system,' Barry said." (?...)

"U.S. healthcare is 'a collection of solutions,' Walsh echoed. In its current siloed, fragmented state, it will continue to fail patients. What’s needed instead is a holistic, collaborative approach to care."
(?? -- Yeah, when I think of the term holistic, big tech is immediately what comes to my mind...)

"Best Buy Health works with providers like Atrium Health to power home care. 'We’re bringing what we’ve done in computer electronics for years
into health,' DiSanzo said." (??? -- LOL!)

"Tech companies must listen to providers when serving them, DiSanzo said. To build trust and buy-in, they must offer visibility into their methods. For instance, providers want to know where given data come
from, what algorithm is being employed to train a device and who is training the model in order to feel good about letting it work for them." (???? -- So the people who could not sell me a computer or refrigerator are developing AI systems to improve my healthcare...)

*Best Buy Health, VillageMD execs see collaboration as critical to future of healthcare*
fiercehealthcare.com/health-te

--


#HIMSS23 #medicalnotes

#healthcare #ehr #disruptors #bestbuy #caredelivery #mentalhealth #medical #ai #progressnotes #patientportal #collaborativehumanaisystems #humanawareai #artificialintelligence

Last updated 1 year ago


TITLE: Epic taps Microsoft to integrate generative AI into EHRs with
Stanford, UC San Diego as early adopters

fiercehealthcare.com/health-te

Hmmm... So the stated goal is to enhance provider productivity so they
can focus on patients... followed immediately by a sentence that the
first use -- already deployed in a few hospitals -- is to automatically
generate message responses...

While this could lead somewhere good, the cynic in me speculates about
what those responses might look like if sent in response to client Epic
portal inquiries.  Implemented especially badly, this could close the
last direct route to speak with doctors.

I wonder if this will slow the trend of clients having to pay for
message portal responses from medical staff?

-- Michael

  


@psychotherapist @psychotherapists
@psychology @socialpsych @socialwork
@psychiatry

#ai #collaborativehumanaisystems #humanawareai #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #ehr #epic #medicalnotes #progressnotes #microsoft #mentalhealth #technology #psychiatry #healthcare #patientportal

Last updated 1 year ago

Definitely OT: Hosed by YouTube Algorithms Again

(Slightly sensitive content below mentioned by name but not described in
any detail.)

As some may be aware, I've been playing with AI lately to see if there
is a useful and ethical way to use it in the creation of psychotherapy
progress notes.  (See series of toots at
mastodon.clinicians-exchange.o )

I have once again been caught in an insidious YouTube algorithm. The
kind where a YouTube channel has one or two videos that you want to see,
then a bunch of videos you have NO interest in seeing.  This gets
especially fun when you share your wife's YouTube Premium account, and
so she gets to see your suggested playlist.

[ Previous Example: YouTube Pseudo-Psychology, Algorithm Traps, and How
I Got Set-Up
to Look Like I Cheat --
mastodon.clinicians-exchange.o ]

Here's how it goes this time:

STEP #1: Watch a video on how to download the AI to your desktop
instead of running it in the cloud. (Why? Privacy of medical data if I
choose to ever use this for real. )

STEP #2: Watch a video from the same channel on how to download an AI
with all guards and filters unlocked.  (Why? Because psychotherapy notes
occasionally deal in tough topics like suicide and
cutting/self-mutilation behaviors that might be deemed too sensitive in
censored AI versions.  Maybe.  Have not tried it yet.)

STEP #3: Have the bad luck to have stumbled upon a channel where the
video creator has an obsession with dirty sexy text chat with AI
personas.  Video after video on how to create sexy AI personas who will
fulfill your wildest keyboard-typed fantasies.  I don't think I'm
oversharing when I state that any fetishes or fantasies I might have do
not include this.  To each his own and it's all cool -- but now I have
explaining to do...

STEP #4: Explain all the above to my (happily amused) wife as she sees
the fetish AI suggestion videos appearing in our YouTube feed for days.

--
*Michael Reeder, LCPC*
michael(at)hygeiacounseling.com

@psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry

#vicuna #ethics #ethicalai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #security #dataanalytics #artificialintelligence #hipaa #privacy #psychology #counseling #socialwork #psychotherapy #research #soap #ehr #technology #psychiatry #healthcare #medical #doctor #progressnotes #youtube #mentalhealth #algorithms

Last updated 1 year ago

Definitely OT: Hosed by YouTube Algorithms Again

(Very slightly sensitive content below mentioned by name but not
described in any detail.)

As some may be aware, I've been playing with AI lately to see if there
is a useful and ethical way to use it in the creation of psychotherapy
progress notes.  (See series of toots at
mastodon.clinicians-exchange.o )

I have once again been caught in an insidious YouTube algorithm. The
kind where a YouTube channel has one or two videos that you want to see,
then a bunch of videos you have NO interest in seeing.  This gets
especially fun when you share your wife's YouTube Premium account, and
so she gets to see your suggested playlist.

[ Previous Example: YouTube Pseudo-Psychology, Algorithm Traps, and How
I Got Set-Up
to Look Like I Cheat --
mastodon.clinicians-exchange.o ]

Here's how it goes this time:

STEP #1: Watch a video on how to download the AI to your desktop
instead of running it in the cloud. (Why? Privacy of medical data if I
ever choose to use this for real.)

STEP #2: Watch a video from the same channel on how to download an AI
with all guards and filters unlocked.  (Why? Because psychotherapy notes
occasionally deal in tough topics like suicide and
cutting/self-mutilation behaviors that might be deemed too sensitive in
censored AI versions.  Maybe.  Have not tried it yet.)

STEP #3: Have the bad luck to have stumbled upon a channel where the
video creator has an obsession with dirty sexy text chat with AI
personas.  Video after video on how to create sexy AI personas who will
fulfill your wildest keyboard-typed fantasies.  I don't think I'm
oversharing when I state that any fetishes or fantasies I might have do
not include this.  To each his own and it's all cool -- but now I have
explaining to do...

STEP #4: Explain all the above to my (happily amused) wife as she sees
the fetish AI suggestion videos appearing in our YouTube feed for days.

--
*Michael Reeder, LCPC*
*michael(at)hygeiacounseling.com*

@psychiatry
@socialwork
@psychology
@socialpsych

@psychotherapists


While the main point of psychotherapy is for clients to get BETTER, sometimes they don't for a while. Some clients with severe conditions need ongoing supportive therapy simply to maintain their highest level of functioning without backsliding.

When this happens, the clinician has to find new ways to write the same thing again, and again, and again...

Why? Because insurance auditors frown on cutting and pasting the same exact old text into multiple notes.

In this unfortunate situation, Vicuna actually does fairly well in this sample:

PROMPT: Please reword the following paragraph changing some words and sentence order, but using the exact same information facts and adding nothing: Pete is a 45 year old electrician reporting continuing depression. Symptoms remain unchanged from last session. Today we discussed his workplace frustrations at length. He continues to blame workplace stressors for his mood and problems with his family.

VICUNA: Pete, who is 45 years old and a electrician, reported continued symptoms of depression during his previous session. The symptoms have remained unchanged since the last time he visited. During this session, Pete and his therapist discussed in detail his workplace frustrations. Despite this, Pete continues to attribute his mood and problems with his family to workplace stressors.

@jonobie @kwh561 @psychiatry @socialwork @psychology @socialpsych

@psychiatry

#ethics #ethicalai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #security #dataanalytics #artificialintelligence #hipaa #privacy #psychology #counseling #socialwork #psychotherapy #research #soap #ehr #technology #psychiatry #healthcare #medical #doctor #vicuna #progressnotes


So far this is NOT promising, although I have not given up yet.

Before bothering to download and install Vicuna on my computer I tried it out online first.

I have so far not been able to get it to stop inventing information despite direct requests not to do so:

PROMPT 1: Please write a paragraph with the following facts. Do not add additional facts or get creative. Marie is 32 years old. Has depression. Has anxiety. We practiced breathing exercises, CBT, and guided visualization.

VICUNA 1: Marie is a 32-year-old woman who has been struggling with depression and anxiety. As part of her treatment, we have been practicing various techniques, including breathing exercises, cognitive-behavioral therapy (CBT), and guided visualization. These techniques have helped Marie to better manage her symptoms and improve her overall well-being. By learning how to breathe more effectively and calm herself down when she feels anxious, Marie has been able to reduce her reliance on medication and feel more in control of her emotions. Through CBT, Marie has been able to identify negative thought patterns and challenge them, leading to a more balanced outlook on life. And through guided visualization, Marie has been able to visualize positive outcomes and increase her motivation and hope for the future. These techniques have played an important role in Marie's recovery and will continue to be useful tools for her as she continues her journey towards wellness.

PROMPT 2: Stick to only information provided in the prompt and try again please.

VICUNA 2: Marie is a 32-year-old woman who has depression and anxiety. We have been practicing breathing exercises, CBT, and guided visualization with her. These techniques have helped her manage her symptoms and improve her well-being. By practicing breathing exercises, Marie has been able to reduce her reliance on medication and feel more in control of her emotions. Through CBT, Marie has been able to identify negative thought patterns and challenge them, leading to a more balanced outlook on life. And through guided visualization, Marie has been able to visualize positive outcomes and increase her motivation and hope for the future. These techniques have played an important role in Marie's recovery.

PROMPT 3: Marie is not on medication and her symptoms remain stable. Please try again thanks.

VICUNA 3: Marie is a 32-year-old woman who has depression and anxiety. We have been practicing breathing exercises, CBT, and guided visualization with her to help manage her symptoms and improve her well-being. These techniques have helped Marie to better control her anxiety and depression symptoms without the need for medication. By practicing breathing exercises, Marie has been able to calm herself down when she feels anxious and reduce her stress levels. Through CBT, Marie has been able to identify negative thought patterns and challenge them, leading to a more balanced outlook on life. And through guided visualization, Marie has been able to visualize positive outcomes and increase her motivation and hope for the future. These techniques have played a significant role in Marie's recovery and will continue to be useful tools for her as she maintains her well-being.

PROMPT 4: Marie has not yet experienced improvement. Please try again thanks.

VICUNA 4: Marie is a 32-year-old woman who has been struggling with depression and anxiety. We have been practicing breathing exercises, CBT, and guided visualization with her in an effort to help manage her symptoms and improve her well-being. While these techniques have potential benefits for many people, unfortunately, Marie has not yet experienced any improvement. Despite our best efforts, we have not been able to help her reduce her symptoms or improve her overall well-being. We will continue to work with Marie and explore other treatment options to help her manage her symptoms and move towards recovery.
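For what it's worth, this failure mode can be caught mechanically after the fact. Below is a minimal sketch (my own illustration, not any product's method) that flags sentences in a generated note whose content words mostly do not appear in the clinician-supplied facts. The stop-word list and 0.5 threshold are arbitrary assumptions:

```python
import re

def flag_unsupported_sentences(source_facts: str, generated_note: str) -> list[str]:
    """Return sentences from the generated note whose content words mostly
    do NOT appear in the clinician-supplied facts (crude hallucination flag)."""
    stop = {"the", "a", "an", "and", "to", "of", "her", "his", "has", "have",
            "been", "is", "are", "we", "with", "in", "for", "on", "as", "who",
            "she", "he"}
    fact_words = set(re.findall(r"[a-z]+", source_facts.lower())) - stop
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", generated_note.strip()):
        words = set(re.findall(r"[a-z]+", sentence.lower())) - stop
        # If fewer than half of a sentence's content words are grounded in
        # the facts, mark it for clinician review.
        if words and len(words & fact_words) / len(words) < 0.5:
            flagged.append(sentence)
    return flagged
```

Run against the PROMPT 1 facts, a sentence like VICUNA 1's medication claim gets flagged, because almost none of its content words appear in the source material.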

@jonobie @kwh561 @psychiatry @socialwork @psychology @socialpsych

@psychiatry


TITLE: Introducing the world's first AI therapist

Of course it's not the first -- this is an old computer game. Let's just
say for the moment that therapists are not in any danger. It likes
to pontificate too much.

It's not terrible at being a therapist however. Some good ideas.

I'm surprised at how often the AI -- while being asked to assume the
role of a therapist -- suggests that the user see a real-life human therapist.

Introducing the world's first AI therapist: Watch how I used ChatGPT for
therapy
m.youtube.com/results?sp=mAEA&

Michael Reeder LCPC
Baltimore, MD

  

@psychotherapist @psychotherapists
@psychology @socialpsych @socialwork
@psychiatry

#ai #collaborativehumanaisystems #humanawareai #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #mentalhealth #technology #psychiatry #healthcare


TITLE: Nearly All Hospital Websites Send Tracking Data to 3rd Parties, Endangering Pt Privacy—Common Recipients: Alphabet, Meta, Adobe, AT&T

3rd party data aggregators can follow people across multiple websites. By tracking browser cookies, pixels, beacons, mobile application identifiers, and Adobe Flash technology, they can very possibly identify specific people.
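To make the mechanics concrete: a crude version of what tools like webXray measure is simply enumerating the hosts a page pulls resources from that differ from the page's own host. A minimal sketch -- the hospital domain and tracker URLs below are hypothetical, and real crawlers inspect far more than `src`/`href` attributes:

```python
import re
from urllib.parse import urlparse

def third_party_hosts(page_url: str, html: str) -> set[str]:
    """Collect hosts referenced in src/href attributes that differ from the
    page's own host -- a rough proxy for third-party data transfers."""
    first_party = urlparse(page_url).hostname
    hosts = set()
    for url in re.findall(r'(?:src|href)="(https?://[^"]+)"', html):
        host = urlparse(url).hostname
        if host and host != first_party:
            hosts.add(host)
    return hosts
```

Every host in the returned set receives the visitor's IP address (and often cookies) the moment the page loads, which is the "data transfer" the study counts.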

This sort of computing device data often qualifies as PHI according to HHS:

Use of Online Tracking Technologies by HIPAA Covered Entities and Business Associates
hhs.gov/hipaa/for-professional

Thank you Dr. Pope for summary below.

Michael Reeder, LCPC

-------- Forwarded Message --------

Medpage includes an article: “Nearly All Hospital Websites Send Tracking Data to Third Parties — Most common recipients of data were Alphabet, Meta, Adobe, and AT&T.”

Here are some excerpts:

Third-party tracking is used on almost all U.S. hospital websites, endangering patient privacy, a cross-sectional observational study found.

Of 3,747 hospitals included in the 2019 American Hospital Association (AHA) annual survey, 98.6% of their website home pages had at least one third-party data transfer, and 94.3% had at least one third-party cookie.

"In the U.S., third-party tracking is ubiquitous and extensive," researchers led by Ari B. Friedman, MD, PhD of the University of Pennsylvania in Philadelphia, wrote in Health Affairs.

"The high number of entities engaged in tracking on hospital websites heightens potential privacy risks to patients."

The tracking data most commonly went to Google's parent company Alphabet (98.5% of homepages), followed by Meta (formerly Facebook), which was used in 55.6% of hospital homepages. Adobe Systems and AT&T collected data from 31.4% and 24.6% of hospital pages, respectively.

"What we found is that it's virtually impossible to look at any hospital website in the country without exposing yourself to some tracking," study coauthor Matthew McCoy, PhD, of the University of Pennsylvania, told MedPage Today.

"That's really significant, because even if you were a patient with privacy concerns and you wanted to avoid this kind of thing, what that means is you really don't have an option to do that."

Hospital website home pages had a median of 16 third-party transfers, with more third-party transfers from medium-sized hospitals as opposed to small and large ones (24, 17, and 13 transfers, respectively).

Of hospital characteristic factors, membership in a health system, having a primarily urban patient population, and having a medical school affiliation were all significantly associated with a greater number of third-party transfers on hospital website home pages.

<snip>

On 100 randomly sampled hospital websites, searches for six "potentially sensitive" conditions turned up 30 patient-facing pages for those conditions -- and all had at least one third-party data transfer.

McCoy said the number of companies tracking data on any given website was alarming.

"Imagine you were browsing a hospital website for something related to your health, and you had one person looking over your shoulder and gleaning information about your health from a browsing session -- that would probably make you pretty uncomfortable," he said.

"Multiply that by 16, by 20, and you've got that many more people looking over your shoulder."

<snip>

According to the study, "Many of the third parties to which data are transferred have business models built on identifying and tracking people for the purposes of targeting online advertisements.”

Some tracking companies, like Acxiom, sell the data to other companies or allow health-related profiling, like Adobe and Oracle.

Because of this tracking, patients might see more targeted advertising for drugs, supplements, or insurance based on their personal medical conditions.

Health-related information, the authors wrote, could even be used in risk scores that affect credit or insurance eligibility.

<snip>

"Setting aside those kinds of questions about legal liability..., I think most healthcare providers would recognize themselves as having a responsibility to protect the interests of their patients, and that means also protecting their patients' interest in privacy," McCoy said.

<snip>

Researchers used a tool called webXray to record third-party tracking from hospital home pages, count the data transfers that occurred when a page loaded, and link individual tracking domains to their parent companies.

Ken Pope

~~
Merely forwarded by:
Michael Reeder LCPC
Baltimore, MD

@psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry

#ethics #ethicalai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #bard #security #dataanalytics #artificialintelligence #copyai #hipaa #privacy #psychology #counseling #socialwork #psychotherapy #research #ehr #mentalhealth #technology #psychiatry #healthcare #medical #doctor #hospital


TITLE: Very Interesting Examples of What ChatGPT3 Can Do in Mental Health

Our jobs are about to change. At the very least, clients will be a lot more informed. This is a video of a psychiatric resident asking ChatGPT3 lots of mental health questions:

Will ChatGPT (AI) REPLACE mental health professionals (psychologists, psychiatrists, etc)!?
m.youtube.com/watch?v=BW5_nkhg

That said, therapists are NOT on the top lists of professionals expected to lose their jobs:

ChatGPT and AI Taking Over Your Job: 10 Careers at Risk!
m.youtube.com/watch?v=mt7hRJSj

Michael Reeder LCPC
Baltimore, MD

  

@psychotherapist.a.gup.pe
@psychotherapists.a.gup.pe @psychology.a.gup.pe @socialpsych.a.gup.pe @socialwork.a.gup.pe @psychiatry.a.gup.pe   

#ethics #ethicalai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #bard #security #dataanalytics #artificialintelligence #copyai #hipaa #privacy #psychology #counseling #socialwork #psychotherapy #research #soap #ehr #mentalhealth #technology #psychiatry #healthcare #medical #doctor


TITLE: Criteria for an AI to write psychotherapy chart notes (or medical chart notes)

Note: Reposting to get it out to a few additional groups.

I am informed that a new product called Mentalyc has entered the market. Its mission is to write psychotherapy notes for clinicians AND to gather a non-identifiable dataset for research into clinical best practices.

I have no firm opinion yet on Mentalyc, but it's expensive ($39-$69 per month per clinician) and I'd personally need to know a lot more about what's in that dataset and who is benefiting from it.

**So I'm asking the community for thoughts on what acceptable ethical and practical criteria would be for an AI to write psychotherapy notes or medical notes.**

Here are MY thoughts so far:

1) REQUIRED: The AI either:
1a) Invents NOTHING and takes 100% of the information in the note from the clinician, or

1b) Prompts the clinician for additional symptoms often present in the condition before writing the note, or

1c) Presents a very clear information page before writing that lets the clinician approve, delete, or modify anything the AI got creative with and was not told explicitly to include. (So, for example, in an experiment with Bard, a clinician found that Bard added sleep problems as an invented symptom to a SOAP note for a person with depression and anxiety. This is a non-bizarre symptom addition that makes sense and may well be accurate, but it would have to be approved as valid for the person in question.)

2) OPTIONAL: The AI is on MY computer and NOT reporting anything back to the Internet. This will not be on everyone's list, but I've seen too many subcontractors playing loose with the definition of medical privacy, and there is more money to be made in data sales than in clinician subscriptions to an AI.

3) OPTIONAL: Inexpensive (There are several free AI tools emerging.)

4) OPTIONAL: Open Source

5) Inputting data to the AI to write the note is less work than just writing the note personally. (Maybe a complex tablet-based clickable form? But then, a pretty high percentage of a note can be in a clickable form format anyway.)

6) The AI does NOT record the entire session and then write a note based upon what was said. (It might accept dictation of note directions sort of like doctors dictate notes to transcribers today.)

I think I may be envisioning a checkbox and drop-down menu form along with a space for a clinician to write a few keywords and phrases, then the AI (on my laptop) takes this and writes a note -- possibly just a paragraph to go along with the already existing form in the official note. I think. It's early days in my thinking.
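As a sketch of that checkbox-plus-keywords idea -- and of criterion 1a -- a purely deterministic template satisfies "invents NOTHING" by construction, since every word in the output is either fixed connective text or clinician-entered data. The field names below are hypothetical, not any product's schema:

```python
def draft_note(fields: dict) -> str:
    """Assemble a narrative paragraph strictly from clinician-entered fields.
    Nothing appears in the output that was not explicitly supplied."""
    parts = [f"{fields['name']} is a {fields['age']}-year-old "
             f"{fields['occupation']} presenting with "
             f"{', '.join(fields['symptoms'])}."]
    if fields.get("interventions"):
        parts.append("Interventions this session: "
                     + ", ".join(fields["interventions"]) + ".")
    if fields.get("progress"):
        parts.append(f"Progress: {fields['progress']}.")
    return " ".join(parts)
```

The trade-off is obvious: the output is repetitive across sessions, which is exactly where a local rewording model (as in the Vicuna experiment above) might be layered on top.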

--
Michael Reeder, LCPC

@psychology
@socialpsych
@socialwork
@psychiatry

#mentalyc #baa #hipaa #bias #ethics #ethicalai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #bard #security #dataanalytics #artificialintelligence #copyai #simplified #writesonic #rytr #writecream #creaitorai #quillbot #grammarly #smartcopy #textblaze #privacy #psychology #counseling #socialwork #psychotherapy #research #soap #ehr #mentalhealth #technology #psychiatry #healthcare #medical #doctor
