TITLE: Nearly All Hospital Websites Send Tracking Data to Third Parties, Endangering Patient Privacy—Common Recipients: Alphabet, Meta, Adobe, AT&T

Third-party data aggregators can follow people across multiple websites. By combining browser cookies, tracking pixels, web beacons, mobile application identifiers, and (formerly) Adobe Flash technology, they can often identify specific people.
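To make the mechanism concrete, here is a minimal Python sketch of how a single tracker embedded on many sites could correlate one browser's visits through a shared cookie ID. Everything in it (the cookie value, the site names) is invented for illustration:

```python
# Hypothetical illustration: how a third-party tracker embedded on many
# sites can correlate one browser's visits. All names are invented.
from collections import defaultdict

# Each "request" a tracking pixel receives carries the embedding page
# (via the Referer header) and the tracker's own cookie ID.
requests = [
    {"cookie_id": "abc123", "referer": "https://hospital-a.example/oncology"},
    {"cookie_id": "abc123", "referer": "https://clinic-b.example/hiv-testing"},
    {"cookie_id": "abc123", "referer": "https://pharmacy-c.example/antidepressants"},
]

profiles = defaultdict(list)
for req in requests:
    profiles[req["cookie_id"]].append(req["referer"])

# One cookie ID now links health-related browsing across three sites.
for cookie_id, pages in profiles.items():
    print(cookie_id, "->", pages)
```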

This sort of computing-device data often qualifies as protected health information (PHI) according to HHS:

Use of Online Tracking Technologies by HIPAA Covered Entities and Business Associates
hhs.gov/hipaa/for-professional

Thank you, Dr. Pope, for the summary below.

Michael Reeder, LCPC

-------- Forwarded Message --------

MedPage Today published an article: “Nearly All Hospital Websites Send Tracking Data to Third Parties — Most common recipients of data were Alphabet, Meta, Adobe, and AT&T.”

Here are some excerpts:

Third-party tracking is used on almost all U.S. hospital websites, endangering patient privacy, a cross-sectional observational study found.

Of 3,747 hospitals included in the 2019 American Hospital Association (AHA) annual survey, 98.6% of their website home pages had at least one third-party data transfer, and 94.3% had at least one third-party cookie.

"In the U.S., third-party tracking is ubiquitous and extensive," researchers led by Ari B. Friedman, MD, PhD of the University of Pennsylvania in Philadelphia, wrote in Health Affairs.

"The high number of entities engaged in tracking on hospital websites heightens potential privacy risks to patients."

The tracking data most commonly went to Google's parent company Alphabet (98.5% of home pages), followed by Meta (formerly Facebook), present on 55.6% of hospital home pages. Adobe Systems and AT&T collected data from 31.4% and 24.6% of hospital pages, respectively.

"What we found is that it's virtually impossible to look at any hospital website in the country without exposing yourself to some tracking," study coauthor Matthew McCoy, PhD, of the University of Pennsylvania, told MedPage Today.

"That's really significant, because even if you were a patient with privacy concerns and you wanted to avoid this kind of thing, what that means is you really don't have an option to do that."

Hospital website home pages had a median of 16 third-party transfers, with more third-party transfers from medium-sized hospitals than from small or large ones (medium: 24; small: 17; large: 13).

Of hospital characteristic factors, membership in a health system, having a primarily urban patient population, and having a medical school affiliation were all significantly associated with a greater number of third-party transfers on hospital website home pages.

<snip>

On 100 randomly sampled hospital websites, searches for six "potentially sensitive" conditions turned up 30 patient-facing pages for those conditions -- and all had at least one third-party data transfer.

McCoy said the number of companies tracking data on any given website was alarming.

"Imagine you were browsing a hospital website for something related to your health, and you had one person looking over your shoulder and gleaning information about your health from a browsing session -- that would probably make you pretty uncomfortable," he said.

"Multiply that by 16, by 20, and you've got that many more people looking over your shoulder."

<snip>

According to the study, "Many of the third parties to which data are transferred have business models built on identifying and tracking people for the purposes of targeting online advertisements."

Some tracking companies, such as Acxiom, sell the data to other companies; others, such as Adobe and Oracle, allow health-related profiling.

Because of this tracking, patients might see more targeted advertising for drugs, supplements, or insurance based on their personal medical conditions.

Health-related information, the authors wrote, could even be used in risk scores that affect credit or insurance eligibility.

<snip>

"Setting aside those kinds of questions about legal liability..., I think most healthcare providers would recognize themselves as having a responsibility to protect the interests of their patients, and that means also protecting their patients' interest in privacy," McCoy said.

<snip>

Researchers used a tool called webXray to record third-party tracking on hospital home pages, count the data transfers that occurred when a page loaded, and link individual tracking domains to their parent companies.
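webXray's actual implementation is more sophisticated, but a rough Python sketch of the counting step might look like the following, assuming you have already captured the request URLs fired during a page load. The domain-to-parent table and the "last two labels" domain heuristic are simplifications; real tools use a proper public-suffix list:

```python
# Not webXray itself -- a simplified sketch of the same idea, assuming
# you already captured the request URLs fired during a page load.
from urllib.parse import urlparse
from collections import Counter

# Naive registrable-domain heuristic (last two labels); real tools
# use a public-suffix list instead.
def base_domain(url: str) -> str:
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

# Hypothetical ownership table linking tracking domains to parents.
PARENTS = {
    "google-analytics.com": "Alphabet",
    "doubleclick.net": "Alphabet",
    "facebook.net": "Meta",
    "demdex.net": "Adobe",
}

def third_party_transfers(first_party_url: str, request_urls: list[str]) -> Counter:
    site = base_domain(first_party_url)
    tally = Counter()
    for url in request_urls:
        dom = base_domain(url)
        if dom != site:  # a transfer to a domain other than the hospital's
            tally[PARENTS.get(dom, dom)] += 1
    return tally

print(third_party_transfers(
    "https://hospital-a.example/",
    ["https://www.google-analytics.com/collect?v=1",
     "https://connect.facebook.net/en_US/fbevents.js",
     "https://hospital-a.example/logo.png"],
))
```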

Ken Pope

~~
Merely forwarded by:
Michael Reeder LCPC
Baltimore, MD

@psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry

#ethics #ethicalai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #bard #security #dataanalytics #artificialintelligence #copyai #hipaa #privacy #psychology #counseling #socialwork #psychotherapy #research #ehr #mentalhealth #technology #psychiatry #healthcare #medical #doctor #hospital

Last updated 1 year ago

TITLE: Very Interesting Examples of What ChatGPT3 Can Do in Mental Health

Our jobs are about to change. At the very least, clients will be a lot more informed. This is a video of a psychiatric resident asking ChatGPT3 lots of mental health questions:

Will ChatGPT (AI) REPLACE mental health professionals (psychologists, psychiatrists, etc)!?
m.youtube.com/watch?v=BW5_nkhg

That said, therapists are NOT on the lists of professionals most likely to lose their jobs:

ChatGPT and AI Taking Over Your Job: 10 Careers at Risk!
m.youtube.com/watch?v=mt7hRJSj

Michael Reeder LCPC
Baltimore, MD

  

@psychotherapist.a.gup.pe
@psychotherapists.a.gup.pe @psychology.a.gup.pe @socialpsych.a.gup.pe @socialwork.a.gup.pe @psychiatry.a.gup.pe   

#ethics #ethicalai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #bard #security #dataanalytics #artificialintelligence #copyai #hipaa #privacy #psychology #counseling #socialwork #psychotherapy #research #soap #ehr #mentalhealth #technology #psychiatry #healthcare #medical #doctor

Last updated 1 year ago

TITLE: Criteria for an AI to write psychotherapy chart notes (or medical chart notes)

Note: Reposting to get it out to a few additional groups.

I am informed that a new product called Mentalyc has entered the market. Its mission is to write psychotherapy notes for clinicians AND to gather a non-identifiable dataset for research into clinical best practices.

I have no firm opinion yet on Mentalyc, but it's expensive ($39-$69 per month per clinician) and I'd personally need to know a lot more about what's in that dataset and who is benefiting from it.

**So I'm asking the community for thoughts on what acceptable ethical and practical criteria would be for an AI to write psychotherapy notes or medical notes.**

Here are MY thoughts so far:

1) REQUIRED: The AI either:
1a) Invents NOTHING and takes 100% of the information in the note from the clinician, or

1b) Prompts the clinician for additional symptoms often present in the condition before writing the note, or

1c) Presents a very clear review page before writing that lets the clinician approve, delete, or modify anything the AI got creative with and was not explicitly told to include. (For example, in an experiment with Bard, a clinician found that Bard added sleep problems as an invented symptom in a SOAP note for a person with depression and anxiety. That is a plausible, even likely, symptom addition, but it would still have to be approved as valid for the person in question. See the sketch after this list.)

2) OPTIONAL: The AI runs on MY computer and does NOT report anything back to the Internet. This will not be on everyone's list, but I've seen too many subcontractors playing fast and loose with the definition of medical privacy, and there is more money to be made in data sales than in clinician subscriptions to an AI.

3) OPTIONAL: Inexpensive (There are several free AI tools emerging.)

4) OPTIONAL: Open Source

5) Inputting data into the AI to write the note must be less work than just writing the note personally. (Maybe a complex tablet-based clickable form? But then, a pretty high percentage of a note can be in a clickable-form format anyway.)

6) The AI does NOT record the entire session and then write a note based upon what was said. (It might accept dictation of note directions sort of like doctors dictate notes to transcribers today.)
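As a thought experiment for criterion 1c, a review step might diff the AI's draft against what the clinician actually supplied and force explicit approval of anything extra. A minimal Python sketch, with invented data:

```python
# Hypothetical sketch of criterion 1c: flag anything in the AI's draft
# that the clinician never supplied, so it can be approved or removed.
clinician_input = {"depression", "anxiety", "poor concentration"}

ai_draft_symptoms = {"depression", "anxiety", "poor concentration",
                     "sleep problems"}  # invented by the model

unsupported = ai_draft_symptoms - clinician_input
for symptom in sorted(unsupported):
    answer = input(f"AI added '{symptom}' -- keep it? [y/N] ")
    if answer.lower() != "y":
        ai_draft_symptoms.discard(symptom)

print("Approved symptom list:", sorted(ai_draft_symptoms))
```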

I think I may be envisioning a checkbox and drop-down menu form along with a space for a clinician to write a few keywords and phrases; the AI (on my laptop) then takes this and writes a note -- possibly just a paragraph to go along with the already existing form in the official note. It's early days in my thinking.
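For what that might look like in practice, here is a hypothetical Python sketch of the form-plus-keywords idea. The field names are invented, and a plain template stands in for the local model:

```python
# Hypothetical sketch of the form-plus-keywords idea: structured inputs
# from a clickable form, a few free-text keywords, and a local template
# standing in for the on-device model.
form = {
    "presenting_issues": ["anxiety", "low mood"],   # checkboxes
    "affect": "constricted",                        # drop-down
    "intervention": "CBT thought record",           # drop-down
    "keywords": "work stress; sleep unchanged per client report",
}

def draft_note(form: dict) -> str:
    # A real product would hand `form` to a local language model here;
    # a template keeps this sketch self-contained and fully offline.
    return (
        f"Client presented with {', '.join(form['presenting_issues'])}; "
        f"affect {form['affect']}. Session focused on {form['intervention']}. "
        f"Clinician notes: {form['keywords']}."
    )

print(draft_note(form))
```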

--
Michael Reeder, LCPC

@psychology
@socialpsych
@socialwork
@psychiatry

#mentalyc #baa #hipaa #bias #ethics #ethicalai #ai #collaborativehumanaisystems #humanawareai #chatbotgpt #bard #security #dataanalytics #artificialintelligence #copyai #simplified #writesonic #rytr #writecream #creaitorai #quillbot #grammarly #smartcopy #textblaze #privacy #psychology #counseling #socialwork #psychotherapy #research #soap #ehr #mentalhealth #technology #psychiatry #healthcare #medical #doctor

Last updated 1 year ago

Mathieu LESNIAK 🇫🇷 · @maverick
13 followers · 160 posts · Server toot.eskuel.net

Ok, Notion will wipe out all the small AI content generators with this one.
Really looking forward to trying it: notion.so/ai?wr=702ced1294566d (joining the waitlist with this link will move me up the line 😇)

#copyai #generatecontent #notionai #notion

Last updated 2 years ago