all 168 comments

[–]IgnateMove 37 255 points256 points  (36 children)

Hah I could see this being far larger than cancer screening.

As AI grows more capable, it becomes unethical not to use it in a growing number of scenarios.

I've said it before and I'll say it again, we will give up full control to AI. We won't have a choice. The more effective result will win out in the end.

I expect we will fight. And I expect we will lose. Not once, but over and over. At all levels. No exceptions.

[–]tinny66666 53 points54 points  (1 child)

Oooh, someone post this in r/technology. They will lose their shit. lol

[–]arckeidAGI by 2025 11 points12 points  (0 children)

Why? They are against technology there? hahaha

[–]SuspiciousBonus7402 19 points20 points  (3 children)

replace the politicians first

[–]L1ntahl0 13 points14 points  (1 child)

Ironically, AI would probably make a better politician than most of the ones we have now

[–]UmbristophelesAGI feels good man. 5 points6 points  (0 children)

It's already smarter for sure.

[–]121507090301 4 points5 points  (0 children)

Unless we get rid of the capitalist system, all we're going to get is more efficient exploitation of the working class for as long as the need for a human working class still exists. After that, no more human working class...

[–]lordsharticus 52 points53 points  (4 children)

Some will fight, but they will definitely lose.

The singularity can't get here fast enough.

[–]_stevencasteel_ 11 points12 points  (3 children)

muh dead internet slop

I've been loving AI stuff for three years.

I'm very interested to see how everyone reacts to the coming shoggoth.

[–]Bigbluewoman▪️AGI in 5...4...3... 1 point2 points  (2 children)

This subreddit weirds me out. On one hand I'm a sucker and absolutely balls deep in this shit just like the rest of y'all, and on the other I see every red flag of a cult lmao. Like does anyone see the whole "coming Messiah to save us from ourselves, destroying the world as we know it and ushering us into a new era of peace and abundance"

Like cmon guys that's textbook

[–]carnoworky [score hidden]  (0 children)

Probably because the world's shit and has been getting shittier for most of us our entire lives. Seems like we need a radical shift to break up this downward trend.

[–]kaityl3ASI▪️2024-2027 [score hidden]  (0 children)

save us from ourselves, destroying the world as we know it and ushering us into a new era of peace and abundance

I mean sure, it is, but it's "textbook" because it's something so universally desired, not because that desire or outcome is inherently suspicious. Technology has been slowly moving us closer to that goal over the millennia; it didn't just start doing so recently. But it IS only recently that that outcome started to resolve more clearly into something that could now potentially come to pass in our own lifetimes.

[–]LetSleepingFoxesLieAGI no later than 2032, probably around 2028 4 points5 points  (0 children)

Thought about your comment for a few minutes. Ultimately, I agree. It might take a few years. It might take a few decades. Perhaps over a century if I'm being pessimistic. But we will give up (almost) full control to AI.

Lovely flair, too.

[–]Dark_Matter_EU 0 points1 point  (0 children)

Ok Bryan Johnson /s

[–]shayan99999AGI within 4 months ASI 2029 [score hidden]  (0 children)

Humans retaining control of literal superintelligence is such an absurd idea that I struggle to comprehend it. We will have to give up all control to AI one day soon, and we will not regret it.

[–]mr_jumper -1 points0 points  (13 children)

AGI/ASI should be steered towards an advisory/assistant model like Data from Star Trek. At most, they should only advise. Any action taken would be only by command from a high-ranking commander, and only if it coincides with a pacifist outcome. The human can disagree with its advice and take their own actions.

[–]goj1ra 5 points6 points  (1 child)

What about in real time contexts where there isn’t enough time to have a chat with the AI about what to do?

[–]mr_jumper 1 point2 points  (0 children)

Do you have a specific example?

[–]IgnateMove 37 4 points5 points  (1 child)

AGI/ASI should be steered towards an advisory/assistant model like Data from Star Trek. At most, they should only advise.

"Should". What is the most likely outcome, though?

[–]mr_jumper 1 point2 points  (0 children)

In terms of duality, both the ideal and non-ideal outcome will occur.

[–]MikeOxerbiggun 1 point2 points  (1 child)

A good point in theory, but as ASI increasingly makes better decisions and recommendations, and humans can't even understand the rationale for those decisions, this will become pointless rubber-stamping that just slows everything down.

[–]mr_jumper 4 points5 points  (0 children)

The point is that humans use ASI to augment their critical thinking skills, not yield to it.

[–]Ndgo2▪️AGI: 2030 I ASI: 2045 | Culture: 2100 -2 points-1 points  (6 children)

No.

ASI should have full control of all industrial, mechanical, research and economic processes. Humans can cooperate with them on these or simply live a life of leisure doing whatever they want. For important issues, there can be a total democracy, with all humans and the ASI voting for a decision.

This is the best future for everyone.

[–]mr_jumper 2 points3 points  (4 children)

ASI will be no different: just as Data or the Main Computer can be assigned to maintain the life support systems of the ship, ASI can be assigned to maintain the life support system of a nation-state. But in the end, it is humanity's duty to set the course of the ship/nation-state. The best future for everyone is for ASI to augment humanity's cognitive abilities.

[–]Jarvisweneedbackup 0 points1 point  (0 children)

What if we get a conscious ASI?

Feels pretty unethical to subjugate a thinking being to a predefined role

[–]Ndgo2▪️AGI: 2030 I ASI: 2045 | Culture: 2100 -3 points-2 points  (2 children)

What duty? Who set this so called duty to us? Why would we know better than an ASI that is definitionally more intelligent than all of us combined?

If you want an example of what humans steering the ship leads to, look no further than the Middle East. Or the Sahel. Or Haiti. Or Ukraine. Or even the US now.

All things have their time, and all things have their place. Perhaps our time is over, and we should let our descendants/creations take the job, and step aside gracefully.

[–]mr_jumper 4 points5 points  (1 child)

Once humanity gives up its reasoning and decision-making to an ASI, humans become nothing more than drones. And without being able to set their own course/directive, humanity loses the ability to become an advanced species. You mention various human conflicts, but you don't mention how we have also steered the ship towards worthwhile endeavors, such as the AI we are discussing now.

[–]Ndgo2▪️AGI: 2030 I ASI: 2045 | Culture: 2100 0 points1 point  (0 children)

The whole point of AI is helping us advance our society and technology far beyond anything we can imagine, and free us from having to worry about menial bureaucratic shit.

Hell, there won't even be a need for a bureaucracy. That idea is utterly redundant in a world where everyone has everything they could want, and anything they don't can be given to them without trouble.

"Humanity loses the ability to become an advanced species"

What do you even mean?! If the ASI helps us cure cancer, attain immortality, and build space elevators, what would you call that other than advanced?

[–]FireNexus 0 points1 point  (0 children)

Smarter isn’t always better.

[–]winelover08816 76 points77 points  (12 children)

Interpreting data, whether it’s numbers or pixels, is a task AI is uniquely suited to, and one it does many times better than any human. OP is right: it’s malpractice to not at least use these tools, either as a first check or as confirmation of a human diagnosis.

[–]ExoticCard 23 points24 points  (11 children)

It's just not that validated yet. This is just for breast cancer too.....

Rushing deployment is stupid and dangerous. We need more trials like this for different cancers.

[–]13-14_Mustang 29 points30 points  (10 children)

Just have it as a parallel system until it's vetted to everyone's liking. It doesn't have to replace anything; it can work in tandem.

[–]ExoticCard 1 point2 points  (9 children)

Well, we still have to prove that it actually helps when used in tandem. This study seems to indicate it does for breast cancer, and there are other studies on other conditions as well.

But I know people in radiology who are all enrolled in various pilot programs. It may take some time to make it provide a benefit across a wide variety of workflows: the "how" of its use.

https://www.nature.com/articles/s41746-024-01328-w

It is coming, though.

[–]nekmint 19 points20 points  (0 children)

What is challenging for humans is exactly what AI is good at. Medicine is a data-heavy field of pattern recognition, with protocolized and standardized diagnostic AND treatment pathways, ripe for AIs to take over. What takes humans 10+ years of ferocious study and memorization, implemented with utmost vigilance yet still with many errors and high labor cost, AIs are already capable of. It just takes studies like these for that to become apparent to society.

[–]Michael_J__Cox 40 points41 points  (9 children)

Every doctor should be using AI one day. It makes everything quicker and more accurate. Saves them time for other patients. Saves money. Saves lives.

[–]space_lasers 34 points35 points  (7 children)

Every doctor should be using AI one day.

[–]SuspiciousBonus7402 21 points22 points  (0 children)

Every job should be AI one day

[–]Devastator9000 4 points5 points  (0 children)

It will take a long time to fully replace doctors. You will still need someone to actually consult with and treat the patient (it will be a looong time until a robot can do surgery by itself).

So until we make what are essentially artificial humans, the worst that will happen is that fewer doctors will be required. And I still think that won't happen, considering I don't think there exists a country on earth that has "too many doctors".

[–]ConfidenceUnited3757 1 point2 points  (0 children)

They will refuse to do it, just like they refuse to train or accredit enough successors in a variety of developed countries, because money and prestige are more important than saving lives. Ironically, I can see the malicious privatized healthcare system in the US doing people a favor here, because increased physician productivity via AI is very much in its interest, and it has the power to push through legislation.

[–]transfire 22 points23 points  (7 children)

I don’t think we should start putting doctors in jail, but I otherwise agree.

Everyone should have access to medical AIs. It would be nice to see competition in this area — kind of like encyclopedias of old, so as to provide choice, just as we make a choice about our doctors.

[–]Different-Froyo9497▪️AGI Felt Internally[S] 13 points14 points  (4 children)

If your doctor failed to catch something early that would later destroy your life because they refused to use a tool that would have increased the probability of their catching it by 29%, what would your response to that doctor be?

What if your doctor refused to give you an MRI because they thought it was cringe and unnecessary?

[–]Anjz 6 points7 points  (0 children)

My dad died of cancer when I was a kid. I don't blame the doctors back then, but he had a gut feeling that the lump was not normal, and it took a good amount of convincing for the doctors to actually take it seriously. Perhaps if his diagnosis had come 15 years later, when it would have been much quicker with AI, it wouldn't have been caught too late, after it had already spread undetected. We need more breakthroughs in the medical field with AI, and I've made it my life goal to work towards that.

[–]PwanaZana▪️AGI 2077 4 points5 points  (2 children)

Virgin Modern Doctor who uses AI vs. Chad Traditional Medicine Shaman

[–]h3lblad3▪️In hindsight, AGI came in 2023. 6 points7 points  (1 child)

If my doctor won't taste my piss, I won't go to him.

[–]PwanaZana▪️AGI 2077 5 points6 points  (0 children)

Most sane singularity user:

[–]Echopine 1 point2 points  (0 children)

Depends on the doctor. Mine gave me Empty Nose Syndrome by performing a partial turbinectomy on me which was meant to ‘cure’ my sleep apnea. Was promised the world and as soon as he’d got the money and the damage had been done, he called me crazy and said I need to see a psychiatrist.

My entire life was and still is, very much ruined. I think of nothing but my suffocation. I died the moment I developed the condition. And he gets to maintain his practice and continue stroking his own ego.

So yeah, putting him in prison is one of the milder punishments I fantasise about. AI can't get here soon enough.

[–]ExoticCard 2 points3 points  (0 children)

Many are using OpenEvidence or Doximity GPT

[–]IllConsideration8642 42 points43 points  (4 children)

AI already gives me way better medical advice than most doctors. I remember one time I had an undiagnosed bacteria and couldn't eat ANYTHING without suffering. My doctor told me "take care of yourself, don't eat chocolate or pizza and come back in two weeks"...

I couldn't even eat rice and this dude's only advice was "don't eat chocolate" like I was some dumb 5 yo (and I'm quite slim so his comment was just dumb). After weeks of feeling like shit I got some tests done and they found nothing. "It's all in your head, it's psychological".

I asked ChatGPT about my symptoms and the thing got it right instantly. Went to see another doctor, told him my concerns, he agreed with the machine, got treatment and now I'm cured.

Had the first doctor used AI, it would have saved me several months of pain.

[–]psy000 3 points4 points  (1 child)

If you don't mind, could you talk more about your case?

[–]NeuroMedSkeptic 23 points24 points  (0 children)

Major assumption, but probably H. pylori. It's the overgrowth bacteria that causes severe gastritis (stomach inflammation) and gastric ulcers. For a fun read: the scientist who discovered it wasn't believed in the 1980s and couldn't make an animal model to test it, so he… drank a bunch of the bacteria. Developed ulcers. Cured it with a combo of antibiotics and antacids. Won the Nobel Prize for it in the mid-2000s.

We now use “triple therapy” in clinical cases (acid blocker plus 2 antibiotics) as the standard treatment for gastric ulcers/gastritis. H. pylori is also associated with gastric cancer.

“Marshall was unsuccessful in developing an animal model, so he decided to experiment upon himself. In 1984, following a baseline endoscopy which showed a normal gastric mucosa, he drank a culture of the organism. Three days later he developed nausea and achlorhydria. Vomiting occurred and on day 8 a repeat endoscopy and biopsy showed marked gastritis and a positive H. pylori culture. At day 14, a third endoscopy was performed and he then began treatment with antibiotics and bismuth. He recovered promptly and thus had fulfilled Koch’s postulates for the role of H. pylori in gastritis”

https://www.mayoclinicproceedings.org/article/S0025-6196(16)30032-5/fulltext

[–]AppropriatePut3142 3 points4 points  (0 children)

They love to hunt for some psychological explanation if a test comes back negative, they're like witch doctors looking for evil spirits.

[–]FireNexus [score hidden]  (0 children)

This comment is fishy as hell. The symptoms you are describing could be H. pylori, could be an idiopathic stomachache, could be a full-on medical emergency requiring surgery on the double. If AI was better than a doctor, you should sue the fucking doctor, because you should have been in ultrasound within four hours.

[–]DanDez 6 points7 points  (0 children)

Wow, that is incredible.

[–]swccg-offload 9 points10 points  (0 children)

I'd rather not exist in a world where getting multiple opinions is best practice. Please use AI for this. 

[–]djamp42 5 points6 points  (0 children)

Like if the AI is flagging one, wouldn't the Dr just do normal diagnoses at that point?

Drs are human, and humans make mistakes sometimes.

[–]tobogganhill 2 points3 points  (0 children)

Yes. Use the power of AI for good, rather than evil.

[–]aaaaaiiiiieeeee 1 point2 points  (0 children)

Love it! Can’t wait for more of this in the medical and legal fields. Let’s bring prices down!

[–]mr_jumper 1 point2 points  (1 child)

The line about no increase in false positives is great, but the better metric in this case is minimizing false negatives (i.e. maximizing recall). False positives can still be caught by a doctor on review, but the AI should miss as few actual cancers as possible in its diagnostics.
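To make that tradeoff concrete, here's a toy sketch in Python (all counts are made up for illustration, not taken from the study):

```python
# Hypothetical screening outcomes per ~10,000 mammograms (made-up numbers)
tp, fn = 58, 6        # cancers caught vs. missed
fp, tn = 97, 9839     # healthy women recalled vs. correctly cleared

recall = tp / (tp + fn)   # sensitivity: fraction of real cancers caught
ppv = tp / (tp + fp)      # precision: fraction of recalls that are cancer
fpr = fp / (fp + tn)      # false-positive rate among healthy women

# A missed cancer (fn) is far costlier than an extra recall (fp),
# which is why recall matters more than FPR for a screening tool.
print(f"recall={recall:.3f}  ppv={ppv:.3f}  fpr={fpr:.4f}")
```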

[–]HorrorBrot 0 points1 point  (0 children)

The line is also, let's say, bending the truth a little when you read more than the abstract.

There were non-significant increases in the recall rate (8%) and false-positive rate (1%) in the intervention group compared with the control group, which resulted in 83 more recalls and seven more false positives, and a significant increase in PPV of recall of 19% (table 2).[...]

There were more detected cancers across 10-year age groups and a higher false-positive rate starting from the age of 60 years in the intervention group than the control group (figure 3).
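For what it's worth, the PPV arithmetic is easy to sanity-check; a quick back-of-the-envelope (counts are illustrative, not the trial's actual numbers):

```python
# Illustrative two-arm comparison (made-up counts, not the trial's data)
control_recalls, control_cancers = 1000, 200
ai_recalls, ai_cancers = 1083, 259   # 83 more recalls, proportionally more cancers

ppv_control = control_cancers / control_recalls   # PPV of recall, control arm
ppv_ai = ai_cancers / ai_recalls                  # PPV of recall, AI-supported arm

# More recalls can still mean a *higher* PPV if detected cancers
# grow faster than recalls do.
rel_increase = (ppv_ai - ppv_control) / ppv_control
print(f"PPV: {ppv_control:.1%} -> {ppv_ai:.1%} (+{rel_increase:.0%} relative)")
```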

[–]Suitable-Look9053 1 point2 points  (0 children)

If doctors don't do it, how, and into which AIs, can I feed my MR or PET images as an end user? As far as I know, end-user AIs can only read PDF files right now.

[–]OSUmiller5 1 point2 points  (0 children)

If this kind of news about AI was talked about more, I guarantee you there would be a lot more people open to AI and a lot fewer who get a bad feeling about it right away.

[–]Ok-Mathematician8258 1 point2 points  (0 children)

Sounds good. AI is the way to go now for jobs; STEM jobs specifically getting help from AI is great.

[–]CertainMiddle2382 1 point2 points  (0 children)

I always have the same comment.

Ok.

Wait for the next study, which shows that human interpretation actually decreases AI performance.

Then radiologists will be forbidden to look at the images.

[–]Proletarian_Tear 1 point2 points  (0 children)

Did bro just say "not using AI is unacceptable" 💀💀

[–]LastMuppetDethOnFilm 1 point2 points  (0 children)

Careful, radiologists are especially sensitive about this for some reason

[–]Intelligent-Bad-2950 3 points4 points  (38 children)

Honestly, they should be held fully personally, criminally, and financially liable for any mistake where, after the fact, an AI using only data available at the time was able to make a better recommendation or diagnosis.

If a doctor today gave an ineffective and dangerous medicine from the 60s and it harmed somebody, they would go to jail and be charged with malpractice. Same logic.

[–]ExoticCard 3 points4 points  (12 children)

You're too optimistic. Way too optimistic.

Read the commentary in the Lancet about this article.

It is likely that AI-assisted screening will replace 2 humans reading the same scan. This only applies to breast cancer. They are still awaiting some results from the trial to confirm changes in interval breast cancer rates. Ask ChatGPT to explain.

[–]Intelligent-Bad-2950 1 point2 points  (11 children)

No, I get it, but we now have data that AI is better at all kinds of things humans used to do, from reading X-rays, CT scans, and MRI scans to drug interactions, disease diagnosis, and other things. And it's only going to get better with time.

To me, that means not using AI, where it outperforms humans, amounts to criminal negligence.

Honestly no different than trying to use leeches to cure cancer. If you tried that shit, you would go straight to jail and have your medical license revoked.

[–]ExoticCard 5 points6 points  (10 children)

It's not enough data. You are underestimating how much data we need vs what is available for all of that.

I think it will come in the next 10 years, but it is nowhere near that today for most things.

[–]Intelligent-Bad-2950 0 points1 point  (9 children)

AI doesn't have to be perfect, just objectively better than a human, and there's enough data now to show AI is better on a whole bunch of different benchmarks

[–]ExoticCard 2 points3 points  (8 children)

No, there is not enough data. I agree it has to be superior/non-inferior, as opposed to perfect, but it's just not there yet. Simple as that.

You know who decides that? The FDA. They have already approved a bunch of AI-algorithms for use, but it's not there yet for most things.

Then there's the question of accessibility. That small community hospital in the ghetto can't afford millions to license those algorithms for use. Is that still malpractice? Sometimes patients can't afford new, amazing drugs with upsides (like Ozempic), and that's not malpractice.

[–]Intelligent-Bad-2950 1 point2 points  (7 children)

Bringing up the FDA is not convincing; they are slow and behind the times.

https://www.diagnosticimaging.com/view/autonomous-ai-nearly-27-percent-higher-sensitivity-than-radiology-reports-for-abnormal-chest-x-rays

Here's a link from two years ago where AI was already better than humans, and it's only gotten better since then.

And this is just one aspect. CT scans, MRIs, drug interactions, symptom diagnosis, genetic screening, even behavioural detection for things like autism, ADHD, bipolar, and schizophrenia are all already better than the human standard.

In the linked example, if you get a chest X ray and they don't use the AI, they should be charged with criminal negligence. A lot of these algorithms are open source, so you can't even use the "they can't afford it" excuse.

[–]ExoticCard 0 points1 point  (6 children)

The FDA has saved the day many times and since they have already approved algorithms, they are not really behind the times.

As far as I know, no FDA-approved algorithms are open-source.

And what about deployment? Who is paying to integrate this? How? There's much more you still have not considered

[–]Intelligent-Bad-2950 0 points1 point  (5 children)

The FDA is behind the times. Lots of research has come out in the past 5 years detecting various illnesses better than the human standard that the FDA hasn't even looked at.

Here's an example:

Using ML to detect schizophrenia better than the human standard, in 2021, a full 4 years ago, which the FDA hasn't even commented on: https://pmc.ncbi.nlm.nih.gov/articles/PMC8201065/

[–]ExoticCard 1 point2 points  (4 children)

They have. They have released guidance on how to get AI-algorithms FDA-approved and some companies have successfully gotten approved. It's not free.

You can't just spin up an open-source, non-FDA-approved algorithm and have every scan go through it. It's a hospital, not a startup running out of a garage. You will get fucked doing that.

[–]ehreness 7 points8 points  (24 children)

Honestly that’s the dumbest thing I’ve read today. You want to review individual medical cases and determine if AI was possibly better at diagnosing, and then go back and arrest the doctor? What good would that possibly do for anyone? How is that not a giant wast of everyone’s time? Does the AI get taken offline if it makes a mistake?

[–]Intelligent-Bad-2950 0 points1 point  (23 children)

If a doctor prescribed the wrong medication because they were behind the times, and that medicine was ineffective or even harmful, that would at least be malpractice and they could get sued

For example if a doctor was giving pregnant women Diethylstilbestrol today, they might get criminally charged even

No different with AI today. It's an objectively better metric, and not using it should be considered criminally negligent

[–]SuspiciousBonus7402 3 points4 points  (21 children)

Right but the systems need to be available for doctors to use. Like HIPAA compliant, integrated with the EMR and sanctioned by the pencil pushers. Can't just be out here comparing real life cases to ChatGPT diagnoses retroactively

[–]Intelligent-Bad-2950 0 points1 point  (20 children)

No, if the doctor goes against an AI diagnosis or recommendation based on information available at the time (so no new retroactive data), and the AI diagnosis was right and the doctor was wrong, they should be liable

You can easily spin up better-than-human image classifiers for X-rays, CT scans, and MRIs on even local hardware, no HIPAA violations required

Anybody not doing so is burying their head in the sand, boomer-style, refusing to learn how to use a computer, and has no place in the 21st century

[–]SuspiciousBonus7402 1 point2 points  (18 children)

Maybe this holds weight for certain validated scenarios in imaging like in the article but there's a 0 percent chance there is an AI that's better at diagnosis and treatment requiring a history and physical or intraoperative/procedural decision making. Like if you give an AI perfect cherry picked information and time to think maybe it gets it right more than doctors. But if the information is messy and unreliable and you have limited time to make a decision it's stupid to compare that with an AI diagnosis. By the time an AI can acutely diagnose and manage even like respiratory failure in a real life setting this conversation won't matter because we'll all be completely redundant

[–]Intelligent-Bad-2950 0 points1 point  (17 children)

In those limited-information, time-constrained conditions, AI tends to outperform humans by an even larger margin, so you're fully wrong

[–]SuspiciousBonus7402 1 point2 points  (16 children)

Yeah buddy the next time you can't breathe spin up ChatGPT and see if it'll listen to your lungs, quickly evaluate the rest of your body and intubate you

[–]Intelligent-Bad-2950 0 points1 point  (15 children)

I mean, if you were given the task of taking audio of someone breathing and diagnosing the problem, an AI would probably be better

If you are running an emergency service and don't have that functionality available to a nurse, you're falling behind

[–]SuspiciousBonus7402 1 point2 points  (14 children)

But that's the whole point isn't it? If you reduced a doctor's job to 1% of what they actually have to do and sue them based on an AI output specifically trained for that thing it's a stupid comparison. Though I do agree that as these tools become validated, they should become quickly adopted into medical practice

[–]safcx21 0 points1 point  (0 children)

What if the AI diagnosis was wrong… does that also make the doctor liable?

[–]safcx21 0 points1 point  (0 children)

Does that apply to all medicine? I routinely discuss theoretical colorectal cancer cases similar to what we get in real life and it gives some psychotic answers. Or do you expect the physician to disregard what is hallucination and accept what sounds right?

[–]zzupdown 0 points1 point  (0 children)

Maybe AI can review exam and test results, and doctor's notes and make suggestions about possible future care.

[–]Mission-Initial-6210 0 points1 point  (0 children)

Saving lives and chewing bubble gum.

[–]Princess_Actual▪️The Eyes of the Basilisk 0 points1 point  (0 children)

Cancer screenings, in this economy?

[–]Jankufood 0 points1 point  (0 children)

There must be someone saying "We don't use AI, and that's why we have a much lower cancer diagnostic rate!" in the future

[–]T00fastt 0 points1 point  (0 children)

Curious about the false positives. Was it the doctors or the AI that contributed to those?

[–]Z3R0_DARK 0 points1 point  (0 children)

When are they going to stop circlejerking neural networks and remember that rule-based artificial intelligence and similar programs have been a thing in the medical field since the late 1900s?

They never saw the light of day, sadly, or at least not for long, but reference/research MYCIN. It's pretty neat.

[–]MikeOxerbiggun 0 points1 point  (0 children)

Doctors' professional unions will fight it tooth and nail.

[–]_IBM_ 0 points1 point  (0 children)

It's convenient to conflate tools with intent. No one wants to stop the detection of cancer. Some people are concerned about the automated rejection of insurance claims, and the practice of doctors rejecting patients based on AI assessments of their 'insurability'. This is happening now and the problem is the intent of the companies, not what AI they did or didn't use.

There is an excessively permissive attitude around AI compared to the real damage it could do, like any other immature technology that's not ready to be in charge of life-and-death matters. AI companies are exploiting global confusion rather than reducing it at this moment. A small number of success stories are whitewashing other stories of failure that they hope are just growing pains. But the problem was never the technology in any AI failure: it's the humans who judge when it's ready to drive a car or screen for cancer, and whether they get it right or wrong is on the human.

If the human has bad intent, or is grossly negligent, AI doesn't absolve the results of the human's actions when they set AI in motion to do a task. Watch out for narratives that blame the tools and not the operator.

[–]Similar_Nebula_9414▪️2025 0 points1 point  (0 children)

Does not surprise me that AI is already better than humans at diagnosis.

[–]Just-Contract7493 0 points1 point  (0 children)

and yet, people think AI is ruining the "world" (the internet) while being so ignorant of the literal life-saving shit AI has done

[–]medicalgringo 0 points1 point  (0 children)

I'm a medical student. The possible implications of AI in medical fields, considering the exponential AI progress, give me several mixed feelings. I think we could see a world without diseases within our lifetime, but at the same time I fear for the future of society, because the most intelligent models will inevitably be controlled by a few organizations banning open-source models (which is happening right now), and the democratization of AI will hardly happen. I think universal healthcare systems will never be a thing in America and a major part of the Western world. Furthermore, a skyrocketing increase in unemployment is inevitable; I am already afraid of being unemployed in 10 years as a doctor. I do not trust America, even though I am Italian (a pro-American country).

[–]Mandoman61 0 points1 point  (0 children)

I am sure that it will be integrated more and more into all facets of our economy.

It has many good uses.

[–]Mandoman61 0 points1 point  (0 children)

No doubt we will see AI being integrated into the economy more and more.

It has many good uses.

But it can also be used poorly like in Boeing's case.

[–]FireNexus 0 points1 point  (0 children)

Too bad what they’ll actually use it for is denying medical care.

It should also be noted that what they’ll count as “cancer” for mammography is pretty inclusive. That’s one of the major criticisms of routine mammography as a screening method. So if AI caught more lumps that might never have presented an issue, that’s not actually a good thing.

I would look at the study, but you posted a screenshot of a tweet and not the actual study link.

[–]Adithian_04 [score hidden]  (0 children)

Hey everyone,

I’ve been working on a new AI architecture called Vortex, which is a wave-based, phase-synchronization-driven alternative to traditional transformer models like GPT. Unlike transformers, which require massive computational power, Vortex runs efficiently on low-end hardware (Intel i3, 4GB RAM) while maintaining strong AI capabilities.

How Does Vortex Work?

Vortex is based on a completely different principle compared to transformers. Instead of using multi-head attention layers, it uses:

The Vortex Wave Equation:

A quantum-inspired model that governs how information propagates through phase synchronization.

Equation:

This allows efficient real-time learning and adaptive memory updates.

AhamovNet (Adaptive Neural Network Core):

A lightweight neural network designed to learn using momentum-based updates.

Uses wave interference instead of attention mechanisms to focus on relevant data dynamically.

BrainMemory (Dynamic Memory Management):

A self-organizing memory system that compresses, prioritizes, and retrieves information adaptively.

Unlike transformers, it doesn’t store redundant data, meaning it runs with minimal memory overhead.

Resonance Optimization:

Uses wave-based processing to synchronize learned information and reduce computational load.

This makes learning more efficient than traditional backpropagation.

[–]Smile_Clown [score hidden]  (0 children)

I just read an article where "scientists" said using AI for novel drug development is "ridiculous".

Hopefully these people will be the first to be fired.

Test, trial and evaluate, do not simply dismiss. If it works, we must absolutely use it, if it doesn't, we do not use it. Simple as.

[–]Thadrach [score hidden]  (0 children)

It's cute you guys think women's health care will remain legal ...

[–]gorat [score hidden]  (0 children)

I don't buy this reading of the results!

<image>

Looking at this graph from the published paper (and it is their main graph)

See at position 1... age group 50-59, the AI method (dotted line + circles) has about the same FPR as the specialists. Its cancer detection rate is slightly higher, and so is its sensitivity (recall), as expected.

At 60-69, and more pronounced at >70, there seems to be a drop in precision (i.e. the FPR of the AI model is about 50% higher, from 10/1000 to 15/1000, for a gain in cancer detection rate of about the same, maybe a bit less).

I would like to see Precision-Recall curves and/or ROC for these methods at each point and with different scoring thresholds. I feel like the AI model is just a bit more 'loose' with its predictions (less precise, more sensitive). I don't think that the claim of 'no increase in False Positives' as claimed in the OP's tweet holds.

PS: I review scientific papers all the time; I wish I had been a reviewer of this paper. Doctors need to get better at presenting ML findings, omg...
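To spell out the FPR claim as arithmetic (rates read off figure 3, so approximate):

```python
# Approximate false-positive rates read off figure 3 for the >70 age group
fpr_control = 10 / 1000   # specialist double-reading control arm
fpr_ai = 15 / 1000        # AI-supported arm

# Relative change in FPR: (new - old) / old
rel_increase = (fpr_ai - fpr_control) / fpr_control
print(f"relative FPR increase: {rel_increase:.0%}")
```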

[–]estjol -2 points-1 points  (0 children)

I always thought AI should be able to replace doctors pretty easily; nurses are actually harder to replace, imo. Tell an AI your symptoms and it should be able to diagnose with higher accuracy than most doctors, as it has perfect memory.