[ad_1]
Sen. Mark Warner, D-Virginia, wrote a letter to Sundar Pichai, CEO of Google father or mother firm Alphabet, on Aug. 8, in search of readability into the expertise developer’s Med-PaLM 2, a synthetic intelligence chatbot, and the way it’s being deployed and skilled in healthcare settings.
WHY IT MATTERS
Within the letter, Warner expresses issues about some news reports highlighting inaccuracies within the expertise, and he asks Pichai to reply a collection of questions on Med-PaLM 2 (and different AI instruments prefer it), primarily based round its algorithmic transparency, its means to guard affected person privateness and different issues.
Warner questions whether or not Google is “prioritizing the race to determine market share over affected person well-being,” and whether or not the corporate is “skirting well being privateness because it skilled diagnostic fashions on delicate well being knowledge with out sufferers’ data or consent.”
The senator asks Pichai for readability about how the Med-PaLM 2 expertise is being rolled out and examined in numerous healthcare settings – together with on the Mayo Clinic, whose Care Community consists of Arlington, Virginia-based VHC Well being in Warner’s house state – what knowledge sources it is studying from and “how a lot info and company sufferers have over how AI is concerned of their care.”
Among the many questions (quoted from the letter) Warner requested the Google CEO:
-
Researchers have discovered giant language fashions to show a phenomenon described as “sycophancy,” whereby the mannequin generates responses that verify or cater to a person’s (tacit or express) most popular solutions, which might produce dangers of misdiagnosis within the medical context. Have you ever examined Med-PaLM 2 for this failure mode?
-
Giant language fashions ceaselessly reveal the tendency to memorize contents of their coaching knowledge, which may threat affected person privateness within the context of fashions skilled on delicate well being info. How has Google evaluated Med-PaLM 2 for this threat and what steps has Google taken to mitigate inadvertent privateness leaks of delicate well being info?
-
What documentation did Google present hospitals, reminiscent of Mayo Clinic, about Med-PaLM 2? Did it share mannequin or system playing cards, datasheets, data-statements, and/or take a look at and analysis outcomes?
-
Google’s personal analysis acknowledges that its scientific fashions mirror scientific data solely as of the time the mannequin is skilled, necessitating “continuous studying.” What’s the frequency with which Google totally or partially re-trains Med-PaLM 2? Does Google be certain that licensees use solely probably the most up-to-date mannequin model?
-
Google has not publicly offered documentation on Med-PaLM 2, together with refraining from disclosing the contents of the mannequin’s coaching knowledge. Does Med-PaLM 2’s coaching corpus embody protected well being info?
-
Does Google be certain that sufferers are knowledgeable when Med-PaLM 2, or different AI fashions provided or licensed by, are used of their care by well being care licensees? In that case, how is the disclosure offered? Is it a part of an extended disclosure or extra clearly offered?
-
Do sufferers have the choice to opt-out of getting AI used to facilitate their care? In that case, how is this feature communicated to sufferers?
-
Does Google retain immediate info from well being care licensees, together with protected well being info contained therein? Please listing every function Google has for retaining that info.
-
What license phrases exist in any product license to make use of Med-PaLM 2 to guard sufferers, guarantee moral guardrails, and forestall misuse or inappropriate use of Med-PaLM 2? How does Google guarantee compliance with these phrases within the post-deployment context?
-
What number of hospitals is Med-PaLM 2 at the moment getting used at? Please present an inventory of all hospitals and well being care methods Google has licensed or in any other case shared Med-Palm 2 with.
-
Does Google use protected well being info from hospitals utilizing Med-PaLM 2 to retrain or finetune Med-PaLM 2 or some other fashions? In that case, does Google require that hospitals inform sufferers that their protected well being info could also be used on this method?
-
In Google’s personal analysis publication asserting Med-PaLM 2, researchers cautioned about the necessity to undertake “guardrails to mitigate in opposition to over-reliance on the output of a medical assistant.” What guardrails has Google adopted to mitigate over-reliance on the output of Med-PaLM 2 in addition to when it notably ought to and shouldn’t be used? What guardrails has Google integrated by way of product license phrases to stop over-reliance on the output?
THE LARGER TREND
Warner, who has enterprise expertise within the expertise trade, has taken a eager curiosity in healthcare digital transformation initiatives reminiscent of telehealth and virtual care, cybersecurity, and AI ethics and security.
This isn’t the primary time he is written on to a Large Tech CEO. This previous October, Warner wrote to Meta CEO Mark Zuckerberg in search of readability on the corporate’s pixel expertise and knowledge monitoring practices in healthcare.
He has shared similar concerns concerning the potential dangers of synthetic intelligence and has asked the White House to work extra intently with the tech sector to assist foster safer deployments of AI in healthcare and elsewhere.
This previous April, Google started testing Med-PaLM 2 – which may reply medical questions, summarize paperwork and carry out different data-intensive duties – with healthcare prospects such because the Mayo Clinic, with which it has been working closely since 2019.
On the Mayo Clinic, in the meantime, modern work continues on generative AI throughout quite a lot of scientific and operational use instances. In June, Google and Mayo offered an update on some of the automation projects it is pursuing.
Mayo Clinic Platform President Dr. John Halamka spoke with Healthcare IT Information Managing Editor Invoice Siwicki just lately concerning the promise – and limitations – of generative AI, giant language fashions and different machine studying purposes for scientific care supply.
ON THE RECORD
“Whereas synthetic intelligence undoubtedly holds large potential to enhance affected person care and well being outcomes, I fear that untimely deployment of unproven expertise might result in the erosion of belief in our medical professionals and establishments, the exacerbation of present racial disparities in well being outcomes and an elevated threat of diagnostic and care-delivery errors,” mentioned Warner.
“It’s clear extra work is required to enhance this expertise in addition to to make sure the well being care group develops applicable requirements governing the deployment and use of AI,” he added.
Mike Miliard is govt editor of Healthcare IT Information
Electronic mail the author: mike.miliard@himssmedia.com
Healthcare IT Information is a HIMSS publication.
[ad_2]