FDA Begins Using Hallucinatory AI to Speed up Drug Approvals in Shock Move

In a shocking move, FDA Commissioner Martin Makary has ordered all FDA centers for drug approval to be fully AI-integrated by June 30th. The goal is to speed up the approval process for new drugs by eliminating repetitive tasks that scientists have to do.

This sounds like a spectacularly bad idea. There seems to be a new religious belief—or perhaps it’s just ignorance—that these brand-new, still-in-development computer programs are perfect and incapable of making mistakes. The exact opposite is true, which is why this new drug approval process could get people killed.

The FDA stated in a public announcement, “By that date [June 30], all centers will be operating on a common, secure generative AI system integrated with FDA’s internal data platforms.”


For those who don’t know, AI, or artificial intelligence, doesn’t actually have any “intelligence.” Clever large language models (LLMs) like ChatGPT or Microsoft’s Copilot are only as smart as the datasets they’ve gobbled up off the internet. And as we all know, everything on the internet is true! (sarcasm)

For AI to be used to speed up drug approvals, it has to be fed all the published medical and health data that’s been preserved on the internet. We assume that part has already happened.

Unfortunately, we’ve known since 2005 that the majority of published scientific and medical studies are falsified or non-replicable. This has held up whenever a medical or scientific journal has gone back and tried to reproduce the results of published research. John Ioannidis of the Stanford School of Medicine laid this out in his essay titled, “Why Most Published Research Findings Are False.”

Unscrupulous scientists and doctors invent data out of thin air to “get published” so they can gain more fart-sniffing prestige from their peers and draw in more research grants. But fake and falsified medical data is fake and falsified medical data—and that’s what the FDA’s new generative AI program has absorbed.

We’d all like to think that scientific and medical research is authoritative, but it’s obviously not. Why haven’t any of the global warming prophecies that the scientific community has been publishing for decades come true? And how did the medical community do during COVID?

So, this false information has now been absorbed by AI and is going to be used to accelerate drug approvals. Suppose the AI rapidly approves a new miracle drug for high blood pressure, but it accidentally kills anyone with type O-negative blood.

There’s also the fact that all these AI programs have a nasty tendency to just make sh*t up out of thin air. The creators of the programs call these “hallucinations.” When the AI can’t find an answer to a question, it makes up convincing lies to answer it. Someone put the latest ChatGPT model to the test and found that it gave false, made-up-out-of-thin-air answers to 48% of all questions.

Attorneys in at least two states are now going through disbarment proceedings because they relied on ChatGPT to write legal briefs that they turned in to the courts. The AI software had invented court precedents that sounded authoritative but were completely false. They involved court cases that had never happened.

The legal profession is backing away from this technology because of its tendency to make things up. Attorneys know that publishing false information opens them up to tremendous legal liability and ethical problems. But we’re going to trust this technology to make and approve new medicines for us?

That’s the goal, by the way. When that creepy old Larry Summers announced a big AI investment at the White House (with other people’s money), he claimed that AI would cure cancer. It will be able, according to Summers, to create an individualized mRNA vaccine just for you and you alone, coded to your DNA, that will prevent you from ever getting cancer.

That’s no different than the veterinarian shyster in the 1700s who convinced the British parliament to let him inject pus from infected horse hooves into everyone’s babies. That was the first “smallpox vaccine,” by the way—which demonstrates just how incompetent the medical community has been for centuries. And now all those centuries of false data are going to be used to speed up approvals of new drugs, supported by a computer program that’s known for making things up out of thin air. What could go wrong?

