Because my office is associated with a hospital that changed ownership, today is our first day with a new instance of the #EHR. It's the same program, but connected to a different patient database.
Today's the first time in more than 5 years that I was handed a blank script pad. I made sure to show students how to write a prescription by hand.
As we see each patient, we're supposed to import data from the old system.
It's better than the LAST EMR change, when records were scanned in fax-style.
As long as #EHR systems are not #patientcentered and not owned by patients, patients will be the last to benefit from medical discoveries.
I want my record to belong to me and be linked to me, so when a new 💊 becomes available, I can benefit from it, and not just be a statistic in a #HEOR report.
EMAIL LIST: https://www.clinicians-exchange.org & LEMMY: https://lem.clinicians-exchange.org
.
I'm a bit behind on this news cycle, so you may have read about these
issues. _My point is to tie them to data privacy and OUR clinical
practices._
**THIS BELOW** is one of the main reasons /I keep throwing a fit about data leaks from HIPAA BAA subcontractors/ -- whether or not the leaked data legally count as PHI, and despite the fact that not too many therapists are interested in the topic.
*If an Attorney General is willing to go after unredacted medical
records* in-state or out-of-state, /then they are certainly *_capable of
getting data from data brokers and marketing firms_* (or Google,
Facebook, LinkedIn, Twitter, etc.)./
Closer to home -- it's not much of a stretch to speculate that psychotherapists in blue states will get subpoenas for chart records pertaining to clients who moved to a red state shortly after counseling and then got in trouble over whatever the contested legal-medical issue of the moment is (abortion, birth control, transgender concerns, fertility clinic involvement, etc.).
*Here’s why Tennessee’s AG wants access to reproductive medical records — including for out-of-state abortions*
https://wpln.org/post/heres-why-tennessees-ag-wants-access-to-reproductive-medical-records-including-for-out-of-state-abortions/
/"State attorneys general in 18 states — including Tennessee’s — are
fighting with the Biden Administration over medical records related to
reproductive care."//
/
*Tennessee A.G. weaponizes private medical records in GOP campaign against trans people*
https://the-rachel-maddow-show.simplecast.com/episodes/tennessee-ag-weaponizes-private-medical-records-in-gop-campaign-against-trans-people
/Maddow podcast recording. Talks about attorneys general from 16 states writing a letter to President Biden asserting their right to go after medical records located outside their states./
*Biden’s HIPAA expansion for abortion draws criticism, lawsuit threats*
https://www.politico.com/news/2023/07/18/biden-hipaa-expansion-abortion-00106694
/Biden administration trying to shield abortion medical record data located in blue states from red state Attorney General probes./
In case you are interested, here are some of my past articles on medical
data privacy and various vendors:
*hipaalink.net security initial testing*
https://lem.clinicians-exchange.org/post/49122
*Nearly All Hospital Websites Send Tracking Data to 3rd Parties, Endangering Pt Privacy—Common Recipients: Alphabet, Meta, Adobe, AT&T*
https://lem.clinicians-exchange.org/post/24598
*To become an Amazon Clinic patient, first you sign away some privacy. You agreed to what? The ‘HIPAA authorization’ for Amazon’s new low-cost clinic offers the tech giant more control over your health*
https://lem.clinicians-exchange.org/post/24603
*FTC, HHS warn health providers not to use tracking tech in websites, apps*
https://lem.clinicians-exchange.org/post/44657
*Would you want #AI used to help write a medical or psychotherapy chart note?* *(Ongoing Poll)*
https://mastodon.clinicians-exchange.org/@admin/110799586045837116
*AWS rolls out generative AI service for healthcare documentation software*
https://lem.clinicians-exchange.org/post/57450
I'm not posting this to be political (although it certainly is) -- *I'm posting it as a legitimate medical records concern for all of us, regardless of each individual reader's political positions. We need -- as therapists -- to care about data leaks and privacy.*
+++++++++++
#AI #CollaborativeHumanAISystems #HumanAwareAI #chatbotgpt #chatgpt
#artificialintelligence #psychology #counseling #socialwork
#psychotherapy #EHR #medicalnotes #progressnotes #legal #lgbtq #abortion
#transgender
@psychotherapist @psychotherapists
@psychology @socialpsych @socialwork
@psychiatry #mentalhealth #technology #psychiatry #healthcare
#patientportal
#HIPAA #dataprotection #infosec @infosec #doctors #hospitals
#amazon #BAA #businessassociateagreement
#ai #collaborativehumanaisystems #humanawareai #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #ehr #medicalnotes #progressnotes #legal #lgbtq #abortion #transgender #mentalhealth #technology #psychiatry #healthcare #patientportal #hipaa #dataprotection #infosec #doctors #hospitals #amazon #baa #businessassociateagreement
EMAIL LIST: https://www.clinicians-exchange.org & LEMMY: https://lem.clinicians-exchange.org
.
TITLE: AWS rolls out generative AI service for healthcare documentation
software
Yeah... If it's going to be worth using, it would have to listen to the whole visit... But this needs more thought. In my past quick experiments, typing directions took 90% of the effort just to get an AI to generate a halfway okay note. So the AI (I think) would have to listen in and then do the note itself to be worth it.
Would we want this? Can we trust this?
--Michael
+++++++++
------------------------------------------------------------------------
"Amazon Web Services announced Wednesday a new AI-powered service for
healthcare software providers that will help clinicians with paperwork."
"AWS HealthScribe uses generative AI and speech recognition to help
doctors transcribe and analyze their conversations with patients and
drafts clinical notes, the company announced Wednesday at its AWS Summit
New York."
------------------------------------------------------------------------
Posted by:
Michael Reeder LCPC
Baltimore, MD
#AI #CollaborativeHumanAISystems #HumanAwareAI #chatbotgpt #chatgpt
#artificialintelligence #psychology #counseling #socialwork
#psychotherapy #EHR #medicalnotes #progressnotes #Amazon
@psychotherapist @psychotherapists
@psychology @socialpsych @socialwork
@psychiatry #mentalhealth #technology #psychiatry #healthcare
#patientportal
#ai #collaborativehumanaisystems #humanawareai #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #ehr #medicalnotes #progressnotes #amazon #mentalhealth #technology #psychiatry #healthcare #patientportal
Would you want #AI used to help write a medical or psychotherapy chart note?
Health professionals are under lots of time pressure -- often unpaid for charting time, and increasingly typing notes during appointments.
But, there may be privacy and accuracy concerns with using AI in charting.
++++
#CollaborativeHumanAISystems #HumanAwareAI #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #EHR #EPIC #medicalnotes #progressnotes #Microsoft
@psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry #mentalhealth #technology #psychiatry #healthcare #patientportal
#ai #collaborativehumanaisystems #humanawareai #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #ehr #epic #medicalnotes #progressnotes #microsoft #mentalhealth #technology #psychiatry #healthcare #patientportal
Epic's Cosmos data show significant increases in the occurrence of speech delays since the start of the #pandemic. No such increases were seen for motor, cognitive, or scholastic delays. These data are based on #EHR records for 1.67M children. https://epicresearch.org/articles/childhood-speech-development-delays-increasing-since-the-start-of-the-pandemic
@RebelGeek99 The real Q is whether TensorFlow can be convinced to fit regular #statistical models without the notational baggage of #machinelearning, or its numerical estimation garbage (component-wise gradient descent is very slow, even with a very recent proposal https://arxiv.org/abs/2307.06324 to accelerate it). We left this as an extension of our evaluation of repeated observations in electronic health record #EHR data (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8310602/). Wish I had time to visit this prospect.
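For what it's worth, here is a minimal sketch of the kind of thing I mean -- fitting a plain least-squares regression in TensorFlow via its closed-form solver, with no gradient descent and no training loop. (My own toy illustration with made-up data, not code from the linked paper.)

# Ordinary least squares in TensorFlow without any gradient descent.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 covariates
beta_true = np.array([1.0, 0.5, -2.0])
y = X @ beta_true + rng.normal(scale=0.3, size=n)

# tf.linalg.lstsq solves min ||X b - y||^2 in closed form
# (Cholesky on the normal equations by default) -- no learning rate, no epochs.
beta_hat = tf.linalg.lstsq(tf.constant(X, dtype=tf.float64),
                           tf.constant(y[:, None], dtype=tf.float64))
print(beta_hat.numpy().ravel())  # roughly [1.0, 0.5, -2.0]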
#statistical #machinelearning #ehr
Everybody gets HIPAA wrong on Twitter, but how about patients, docs, and hospital administrators?
Yep, also wrong! Tons!
Great paper by Shachar, Cadario, Cohen, & Morewedge in Nature Medicine on HIPAA misunderstandings. Including:
- nice summary of EHR data rights according to property bundle-of-sticks framework
- <50% of patients, docs, admins know patients have rights to profit from or modify EHR data.
#ehr #HIPAA #bigdata #health #lawfedi
As we all work towards improving the #EHR #PatientPortal, it's important to look for unique ways to ease the #burden on our clinicians. Here's one: write a letter to your patients asking them to leverage the portal in fair and supportable ways. Success! https://jamanetwork.com/journals/jama/fullarticle/2807107
Why do people act like these 'EHR problems' never happened before EHR?
Bloated patient records are filled with false information, thanks to copy-paste
https://www.statnews.com/2023/06/20/medical-records-errors-copy-paste
It's obviously important to collect race and ethnicity data in a thoughtful and complete way. It's also essential that these data get entered in the electronic health record in a similarly thoughtful, consistent, and interchangeable way. #EHR https://www.healthaffairs.org/content/forefront/health-system-s-experience-inclusive-race-and-ethnicity-data-collection-and-need-data
SCOOP: If you're deep into #healthtech and #AI in #medicine, you've heard the question:
How much are people paying for DAX?
"It's highway robbery," a tipster told me.
Turns out, it's worse than that: everyone is paying a different price, and some don't think it's worth it.
#health #ROI #scribe #Nuance #Microsoft #EHR #doctors #physicians #nurses #hospital #healthcare #artificialintelligence
https://www.statnews.com/2023/06/01/hospitals-nuance-dax-scribe-artificial-intelligence/
#healthtech #ai #medicine #health #roi #scribe #nuance #microsoft #ehr #doctors #physicians #nurses #hospital #healthcare #artificialintelligence
Interesting article that shows what I think many physicians have felt for a while: the #EHR has allowed or even promoted fewer person-to-person interactions among clinicians. We should take action to reverse this trend. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2804851
Too big to fail? Sunk cost fallacy? Why weren't these accountability terms in the original contract?
VA renews Oracle Cerner EHR modernization contract, with renegotiated terms
"For the past few years, we've tried to fix this plane while flying it – and that hasn't delivered the results that veterans or our staff deserve," said Dr. Neil Evans, acting program executive director of the VA's EHRM Integration Office, in the agency's announcement."
Good read - observations and thoughts on open notes #EHR
"The Curious Side Effects of Medical Transparency"
https://www.newyorker.com/news/essay/the-curious-side-effects-of-medical-transparency
It occurred to me this morning that by the time I fill out the form and write two or three sentences, I've already done all the work that is needed for an official note (after adding start and end times, diagnosis, name, client age, and a few other elements to the form). There is no need to convert it all to narrative -- it can mostly stay in form format.
So -- while I want an AI I can trust to help with notes (and this one may grow into such) -- right now the effort of getting it to create a note is about equal to the effort of just writing it myself anyway.
@psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry
#mentalhealth #technology #psychiatry #healthcare #patientportal #autonotesai #AI #CollaborativeHumanAISystems #HumanAwareAI #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #EHR #EPIC #medicalnotes #progressnotes #Microsoft
#mentalhealth #technology #psychiatry #healthcare #patientportal #autonotesai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #ehr #epic #medicalnotes #progressnotes #microsoft
*If* AutoNotes.ai just plugged in a free AI available from elsewhere with a front-end they created, I wonder what legalese governs use of the AI on the backend? I am not a lawyer and none of us know the licensing agreement or ownership of the AI which AutoNotes.ai is using.
And, well hey -- while I'm just busy making up wild speculations -- let's play with this a bit:
So I know AutoNotes.ai is sending SOME kind of information to Google tracking services because my browser plug-ins are preventing this data from being sent to Google and telling me so.
Let's just suppose for a moment that they are using Google's Bard AI on the back-end. Because -- why not -- there is no PHI being collected anyway...
Meanwhile, both the therapist and the client are using the Google Chrome web browser for televideo. Or maybe they are using Gmail and the Gmail text is being mined. Or the data input for the note is sent along to Google datamining regardless of whether or not the Bard AI is used...
Let's go further out on our hypothetical limb and say that the therapist sees only three clients that day. The therapist creates three notes in AutoNotes.ai that day...
It's now a more than fair chance that one of those three unnamed clients has Acute Stress Disorder (like in my example above). If Google has bothered to devote the tracking resources to it, they might know -- from Gmail, or Bard, or data aggregation -- the names of the clients the therapist saw that day.
Of course, I really am making this all up -- we are just not given enough data to know what's real and false anymore.
Here is a paragraph from the welcome message they emailed me:
"Here are a couple of simple suggestions, first, complete as thorough a Mental Status Exam (MSE) as possible, submit a few sentences related to the session and theme, and include treatment plan goals, objectives, and strategies; this will ensure the best possible clinical note. Please revise and submit your revised version inside the app! This will assist all of us in building the greatest tool on earth for the field!"
Well, okay -- I do want the AI to get better at its job...
But this DOES mean they are keeping a version of what you provide, doesn't it?
@psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry
#mentalhealth #technology #psychiatry #healthcare #patientportal #autonotesai #AI #CollaborativeHumanAISystems #HumanAwareAI #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #EHR #EPIC #medicalnotes #progressnotes #Microsoft
#mentalhealth #technology #psychiatry #healthcare #patientportal #autonotesai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #ehr #epic #medicalnotes #progressnotes #microsoft
This is for a new product called AutoNotes.ai that will create progress notes for you for about $14+ per month.
I am tentatively interested in this and signed up for a free trial.
I have concerns (below) but am hopeful this continues to improve.
~~~
The system has no HIPAA BAA because they claim not to collect any PHI. This may be true.
Their terms of service and privacy policy may be amended at any time, and the only notification provided will be on their website. I am wary of this now that I have been burned by such stealth changes by one of my own BAA subcontractors.
Information for each client will have to be entered from scratch each time as they do not store data about clients. For the free demo, it takes about 60 seconds to generate each note. You then cut and paste it into your EHR.
While they claim to collect no PHI, they do send SOME data of SOME sort to several tracking systems and data aggregators. At best, they are tracking YOU -- the clinician. At worst -- they may be sending some of the data you enter to data aggregators which -- hopefully -- do not have data from other sources to be able to pin down the individual clients you are describing.
DATA IS SENT TO THE FOLLOWING OUTSIDE TRACKING COMPANIES ON BOTH THEIR DATA INPUT PAGE & THE GENERATED NOTE PAGE:
PRIVACY BADGER BLOCKED:
connect.facebook.net
www.googletagmanager.com
s.gravatar.com
snap.licdn.com
i0.wp.com
GHOSTERY BLOCKED:
Google Tag Manager
LinkedIn Analytics
Facebook Connect
Gravatar
The following is an example of the note generated for a fake client with Acute Stress Disorder:
%GREEN = CONTAINED IN WHAT I REPORTED%
&RED = INVENTED BY AI, NOT REPORTED BY CLINICIAN&
BLACK = I HAVE NO STRONG OPINION
Subjective:
Client &reported increased levels of stress, particularly in the& %workplace. Described experiencing anxious mood and labile mood, but was able to demonstrate appropriate behavior and normal speech throughout the session. Client was self-reflective% and &engaged& %during the 45-minute video session%, %which allowed for a comprehensive exploration of the current challenges they face&. The client recognized the need to address these issues in order to improve their overall well-being.
Objective:
During the session, several areas were discussed. &Client was open and honest& about %feeling overwhelmed and stressed at work, further precipitating their anxiety. They were able to identify specific work situations that were particularly difficult for them to navigate. It was noted that the client was able to maintain self-reflective behavior during discussions,% which will be beneficial for making progress in therapy.
Assessment:
%The client is showing signs of progress in therapy, taking active steps to address and work through their challenges with stress and anxiety.% Their willingness to participate and self-reflect in session indicates a strong commitment to achieving their therapeutic goals. %Current therapeutic interventions, which are directed at helping the client build skills and coping strategies, are having a positive impact on the client's progress.%
Plan:
%In order to lower workplace stress and continue advancing in therapy, the client agreed to set specific objectives for themselves. They plan on talking to their manager about ways to manage their workload or address any outstanding concerns. Additionally, they will begin practicing meditation four times a week during their lunch breaks, as a means of managing stress and promoting relaxation.% &Continued exploration of these& and other stress reduction &strategies will be a focus in future sessions.&
Hmmm... My take-away is that this needs more work (that's fine); I want to know why they have to report to LinkedIn, Facebook, Gravatar, and WordPress while I'm logged in, and what they report; and the system IS inventing minor elements that I did not tell it to add. For example, while I reported the client was overwhelmed and stressed, I did not say the client was open and honest about it. I told the system the client was "progressing", but never said that increased levels of stress were reported in this session.
#AI #CollaborativeHumanAISystems #HumanAwareAI #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #EHR #EPIC #medicalnotes #progressnotes #Microsoft
@psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry #mentalhealth #technology #psychiatry #healthcare #patientportal #autonotesai
#ai #collaborativehumanaisystems #humanawareai #chatbotgpt #chatgpt #artificialintelligence #psychology #counseling #socialwork #psychotherapy #ehr #epic #medicalnotes #progressnotes #microsoft #mentalhealth #technology #psychiatry #healthcare #patientportal #autonotesai
[See earlier message -- continuing]
6) Acceptance (and the Arms Race): I think the main thing in the way of AIs becoming therapists is that people won't trust this. The main thing that will make it happen is persuasion that they can be trusted (either by the AIs themselves or by marketing from insurance companies wanting to save money by not paying human therapists). This is where the current arms race among all the major AI companies comes in. Microsoft is incorporating AI into Bing & other programs. Snapchat now has an AI present 24/7, willing to talk to any of millions of lonely souls about anything. A few years of people chatting with AIs on non-therapeutic topics should build trust in AI to perform therapist jobs.
So... about the only thing we can be sure of is that we are going to be surprised by the abilities of AI to become therapists. But yeah, this is really about so much more than therapist jobs...
The presenters have formed The Center for Humane Technology ( humanetech.com ) -- "Our mission is to shift technology towards a more humane future that supports our well-being, democratic functioning, & shared information environment." A very quick look reveals lots of financial backing (including the Ford Foundation) but no obvious support from the major companies racing to deploy AI as fast as possible into every aspect of our lives before we can see where this is going & create laws to deal with it (OpenAI, Microsoft, Google, Facebook, etc.)
The A.I. Dilemma - Tristan Harris & Aza Raskin - Center for Humane Technology - March 9, 2023
https://www.youtube.com/watch?v=bhYw-VlkXTU
[2 of 2 messages]
--
#Bias #Ethics #EthicalAI #AI #CollaborativeHumanAISystems #HumanAwareAI #chatbotgpt #security #dataanalytics #artificialintelligence #HIPAA #privacy #psychology #counseling #socialwork #psychotherapy #research @psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry #SOAP #EHR #mentalhealth #technology #psychiatry #healthcare #medical #doctor #chatbotgpt #humanetechnology #thesocialdilemma
@psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry
#bias #ethics #ethicalai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #security #dataanalytics #artificialintelligence #hipaa #privacy #psychology #counseling #socialwork #psychotherapy #research #soap #ehr #mentalhealth #technology #psychiatry #healthcare #medical #doctor #humanetechnology #thesocialdilemma