Mount Sinai has become a laboratory for AI, trying to shape the future of medicine. But some healthcare workers fear the technology comes at a cost.

The Washington Post gift article expires in 14 days.

https://archive.ph/xCcPd

  • RiikkaTheIcePrincess@kbin.social · 19 points · 1 year ago

    Interesting that the “AI” posts alternate between “Oh, AI will lead us into the shining, glorious future!” and “Wow, the new thing that people call ‘AI’ can’t reliably add 2+2, makes up lies that even supposed professionals fall for when asked for real info, and just generally mimics only the form of human communication with no idea whatsoever about the content, often resulting in hilarious claims, images, et cetera that a dog could recognize as wrong!”

    With bosses in charge… ugh. I assume soon if not already someone will be scheduled for heart surgery because WebMDGPT decided their cough was due to irritable bowel syndrome, which it will claim is a form of cancer because that’s the sort of crap these things do.

    “The ability to speak does not make you intelligent” somehow applies to Jar-Jar but everybody’s impressed with the chatbot that knows literally nothing.

    • ConsciousCode@beehaw.org · 9 points · 1 year ago

      It makes me really sad, because the techbros are a cargo cult with no understanding of the technology, and the anti-AI crowd is an overcorrection to the techbro hype train which overemphasizes the limitations without acknowledging that this is the first generation of general-purpose AI (distinct from AGI). Meanwhile I, someone who’s followed the AI field for 10 years waiting for this day, am overjoyed by the near-miracle that is a general-purpose model that can handle any task you throw at it, and simultaneously worried that this yet-another-culture-war will leave people distracted, screeching about utopia vs. Skynet, while capitalists use the technology to lay everyone off and send us into a neotechnofeudal society where labor has no power, instead of the socialist utopia where work is optional that we deserve…

      • RiikkaTheIcePrincess@kbin.social · 5 points · 1 year ago

        I generally agree but struggle to see where there’s any proper “general-purpose AI” involved. The current “AI” seems to be a crop of, simply put, overgrown chat bots. They make things that kinda-sorta look like other things that humans have already made and are getting a lot of attention for getting things very wrong. Hands, mouths, maths, laws, wrong wrong wrong.

        From my perspective (as someone who loves novel tech, was thrilled to take a uni course on evolutionary computation, and grew up on PopSci, SciAm, Discover, etc.), people are blowing the hell up praising glorified chat bots as our lord and saviour, and it baffles me endlessly. Like… I was evolving solutions to notoriously hard problems as an undergrad roughly a decade ago. The power of evolution itself! Wow, right? No, no one cares any more. Interesting is interesting, but the hype train’s decided these (I’m not going to stop calling them chat bots, because that’s what they are) represent a miracle of advanced, movie-style AGI. Unless my understanding of how it works is way off, it’s not really even a good starting point for AGI. I’d even go so far as to say it’s less technically interesting to me than Sierra’s AGI, but then I do have a deep, burning hatred of memes and excessive, blind popularity/hype and a bit of a taste for old tech, so part of that’s just me. As for the “utopia vs. Skynet” stuff… sigh. No technology is gonna do more to heal or harm humanity than this batch of buttholes is already doing to itself, and ELIZA here isn’t going to change that.

        tl;dr: The current cultural idea of “AI” is (as always) a damn meme based on chat bots and exploitation, and not a miracle. Wake me when AI is capable of some interesting new kind of NLP or can create something entirely new or something beyond impressing fools (because I actually do like neat tech). Also yes, any big, moneyful/profitable tech-thing is 100% gonna serve money over all else because everything in this capitalist hell-world does. rant rant rant! … dozes off

        • ConsciousCode@beehaw.org · 1 point · 1 year ago

          First I’d like to be a little pedantic and say LLMs are not chatbots. ChatGPT is a chatbot - LLMs are language models, which can be used to build chatbots. They are models (like a physics model) of language, describing the causal joint probability distribution of language. ChatGPT only acts like an agent because OpenAI spent a lot of time retraining a foundation model (which has no such agent-like behavior) to model “language” as expressed by an individual. Then they put it into a chatbot “cognitive architecture” which feeds it a truncated chat log. This is why the smaller models, when improperly constrained, may start typing as if they were you - they have no inherent distinction between the chatbot and yourself. LLMs are a lot more like Broca’s area than a person, or even a chatbot.
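
          In rough pseudocode, the whole “chatbot” is a wrapper loop like this (a sketch; `generate` is a hypothetical stand-in for any LLM completion call, not any vendor’s actual API):

          def generate(prompt: str) -> str:
              """Hypothetical stand-in for an LLM completion call."""
              raise NotImplementedError("wire this to an LLM of your choice")

          def chat_turn(log: list[str], user_msg: str, max_lines: int = 40) -> str:
              # The model only ever sees a truncated window of the conversation;
              # the agent-like behavior lives in this wrapper, not in the model.
              log.append(f"User: {user_msg}")
              prompt = "\n".join(log[-max_lines:]) + "\nAssistant:"
              reply = generate(prompt)
              log.append(f"Assistant: {reply}")
              return reply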

          When I say they’re “general purpose”, this is more or less an emergent feature of language, which encodes some abstract sense of problem solving and tool use. Take the library I wrote to create “semantic functions” from natural language tasks - one of the examples I keep going back to in order to demonstrate the usefulness is:

          # @semantic comes from the library; the docstring is the task the LLM carries out
          @semantic
          def list_people(text: str) -> list[str]:
              '''List the people mentioned in the given text.'''
          

          A year ago, this would’ve been literally impossible. I could approximate it with thousands of lines of code using SpaCy and other NLP libraries to do NER (named-entity recognition), maybe a massive dictionary of known names with fuzzy matching, some heuristics to rule out city names, or more advanced sentence-structure parsing for false positives, but the result would be guaranteed to be worse for significantly more effort. With LLMs, I just tell the AI to do it and it… does. Just like that. I can ask it to do anything and it will, within reason and with proper constraints.
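
          For the curious, a decorator along those lines can be sketched in a few lines of Python; `complete` here is a hypothetical stand-in for an LLM API call, and this is not the actual implementation of my library:

          import functools
          import json

          def complete(prompt: str) -> str:
              """Hypothetical stand-in for an LLM API call that returns JSON text."""
              raise NotImplementedError

          def semantic(func):
              """Turn a function whose docstring describes a task into an LLM-backed function."""
              @functools.wraps(func)
              def wrapper(*args):
                  prompt = (
                      f"Task: {func.__doc__}\n"
                      f"Input: {json.dumps(args)}\n"
                      "Reply with only a JSON value."
                  )
                  # The LLM performs the task described in the docstring;
                  # its JSON reply becomes the function's return value.
                  return json.loads(complete(prompt))
              return wrapper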

          GPT-3 was the first generation of this technology, and it was already miraculous for someone like me who’s been following the AI field for 10+ years. If you try GPT-4, it’s subjectively at least 10x more intelligent than ChatGPT/GPT-3.5. It costs $20/mo, but it’s also been irreplaceable for me for a wide variety of tasks - Linux troubleshooting, bash commands, ducking coding, random questions too complex to google, “what was that thing called again”, sensitivity reading, interactively exploring options to achieve a task (e.g. note-taking, SMTP, self-hosting, SSI/clustered computing), teaching me the basics of a topic so I can do further research, etc. I essentially use it as an extra brain lobe that knows everything, as long as I remind it about what it knows.

          While LLMs are not people, or even “agents”, they are “inference engines” which can serve as building blocks to construct an “artificial person”, or some gradation therein. In the near future I’m going to experiment with creating a cognitive architecture to start approaching that - long-term memory, associative memory, internal thoughts, dossier curation, tool use via endpoints, etc. - so that eventually I have what Alexa should’ve been, hosted locally. That possibility is probably what techbros are freaking out about; they’re just uninformed about the technology and think GPT-4 is already that, or that GPT-5 will be (it won’t). But please don’t buy into the anti-hype; it robs you of the opportunity to explore the technology and could blindside you when it becomes more pervasive.
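
          To give a flavor of one such building block, here’s a toy associative memory over embeddings; `embed` is a hypothetical stand-in for any text-embedding model:

          import numpy as np

          def embed(text: str) -> np.ndarray:
              """Hypothetical stand-in for a text-embedding model."""
              raise NotImplementedError

          class AssociativeMemory:
              """Store texts and recall the ones most similar to a query."""

              def __init__(self) -> None:
                  self.texts: list[str] = []
                  self.vecs: list[np.ndarray] = []

              def store(self, text: str) -> None:
                  self.texts.append(text)
                  self.vecs.append(embed(text))

              def recall(self, query: str, k: int = 3) -> list[str]:
                  # Cosine similarity between the query and every stored memory.
                  q = embed(query)
                  sims = [float(v @ q) / (np.linalg.norm(v) * np.linalg.norm(q))
                          for v in self.vecs]
                  top = np.argsort(sims)[::-1][:k]
                  return [self.texts[i] for i in top]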

          What would AI have to do to qualify as “capable of some interesting new kind of NLP or can create something entirely new”? From where I stand, that’s exactly what generative AI is? And if it isn’t, I’m not sure what even could qualify unless you used necromancy to put a ghost in a machine…

            • RiikkaTheIcePrincess@kbin.social · 2 points · 1 year ago

              Okay, see, that smells smarter than “we’re gonna cram the entire Internet into a box full of neurons and shove a shitload of compute through it.” It is, therefore, more interesting. Maybe I’ll have a deeper peek into it… In a few years when it’s not associated with any hype 😅 Here, have some of my pizza 🫴🍕

    • keeb420@kbin.social · 1 point · 1 year ago

      That’s because everyone is trying to treat AI as if it’s what movies promised us. It’s not there yet, if it’ll get there at all. It can be a good tool in the tool chest, but it’s far from the only tool. AI might be able to speed up blood tests or screen for more things, but it’s not gonna replace a good doctor yet. Or it could monitor everyone whose vitals are being tracked and flag non-life-threatening things for doctors to follow up on that would currently be missed.

  • Storksforlegs@beehaw.org · 13 points · edited · 1 year ago

    This looks like another instance where AI could be used to really make doctors’ and nurses’ lives easier and provide more and better care at lower cost - but in the hands of greedy corporate types it won’t go that way.

  • FaceDeer@kbin.social · 13 points · 1 year ago

    “If we believe that in our most vulnerable moments … we want somebody who pays attention to us,” Michelle Mahon, the assistant director of nursing practice at the National Nurses United union, said, “then we need to be very careful in this moment.”

    Ironically, I recall a study done just recently in which a modern AI chatbot was compared with human doctors giving medical advice online, and the patients much preferred the bedside manner of the AI chatbot. Human doctors and nurses can get tired, bored, or annoyed, and may just generally not have the ideal personality for interacting with patients. An AI can be programmed to be always polite, attentive, and positive, and it will do exactly that. I can easily imagine, just a few years from now, being reassured by having an AI “attendant” when I’m in a hospital for something.

    “There is something that technology can never do, and that is be human,” she said.

    Indeed. And that may actually be a strength sometimes.

    • gelberhut@lemdro.id · 5 points · 1 year ago

      I read about a test done on a subreddit where ChatGPT’s answers were compared with human doctors’ answers (doctors who answer people on Reddit are probably not a representative sample, though).

      The results were assessed by other human doctors, not just patients.

      The test found that the answers provided by ChatGPT were of higher quality and showed greater empathy.

  • Scrubbles@poptalk.scrubbles.tech · 9 points · 1 year ago

    AI can be an amazing tool in healthcare as a double check. For example, assume a doctor thinks you have something. Right now you could have:

    • A simple, non-invasive diagnostic procedure that costs X and is 90% accurate.
    • A complex, invasive diagnostic procedure that costs 10X and is 98% accurate.

    The doctor will always suggest the first one, then see if you need the second based on other factors. AI can be a great tool as a double-checker. You go in, do the simple test, then run your results and your inputs through a model, and that gives you a second probability, which could help determine whether you should go in for the more invasive one.

    Done that way it’ll be a great tool, but AI should never be used as the only check or to replace real, proven tests. At the end of the day it’s still saying “from the information I’ve trained on, the answer to the question of 2+2 is probably 4”; it does not do any actual calculation, only probabilities from trained data. So: great at double-checking, but bad at being the single source.
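
    To make that “second probability” concrete, here’s a rough sketch of the arithmetic with made-up numbers, treating the cheap test and the model as independent pieces of evidence combined via Bayes’ rule:

    def posterior(prior: float, sensitivity: float, specificity: float) -> float:
        """P(condition | positive result), via Bayes' rule."""
        p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
        return sensitivity * prior / p_positive

    prior = 0.10                                     # doctor's initial suspicion
    after_test = posterior(prior, 0.90, 0.90)        # simple test comes back positive
    after_model = posterior(after_test, 0.85, 0.85)  # AI double check also flags it
    print(after_test, after_model)  # ~0.50, then ~0.85: the invasive test now looks justified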

    • LallyLuckFarm@beehaw.org · 4 points · 1 year ago

      There was an interview I saw with a cancer researcher working with AI to improve cancer detection in early imaging - they fed thousands of CT and X-ray images to their model, then went back through the data once patients had biopsies to confirm. This sort of high-quality data and attentive follow-up has the potential to provide such better screening for cancers and other conditions that patients could gain additional months or years to address them.

    • FlowVoid@midwest.social · 4 points · edited · 1 year ago

      Doctors will never use a test that is only 90% accurate.

      A more realistic scenario is to start with a simple test that has a low false-negative rate (<5%) but is prone to false positives. If that test is negative, testing stops. If it is positive, the diagnosis is confirmed with a more complex test that has a low false-positive rate.
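
      With made-up numbers, the arithmetic looks like this (a sketch that assumes the two tests are independent):

      def ppv(prior: float, sens: float, spec: float) -> float:
          # Positive predictive value: P(disease | positive result)
          return sens * prior / (sens * prior + (1 - spec) * (1 - prior))

      prior = 0.02                      # prevalence among those screened
      # Stage 1: sensitive screen (sens 0.98) with many false positives (spec 0.80)
      stage1 = ppv(prior, 0.98, 0.80)   # ~0.09: a positive screen alone means little
      # Stage 2: specific confirmation (spec 0.99), run only on screen-positives
      stage2 = ppv(stage1, 0.90, 0.99)  # ~0.90: a confirmed positive is near-diagnostic
      print(stage1, stage2)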

    • shagie@programming.dev · 2 points · 1 year ago

      AI can be an amazing tool in healthcare as a double check. For example, assume a doctor thinks you have something. Right now you could have …

      Expert systems have been available as part of medical diagnoses for decades. I remember ahem finding one for the Apple ][+ back in the day.

      https://pubmed.ncbi.nlm.nih.gov/2663006/ was written back in ’89, and you can easily find others going further back.

      AI in medical diagnostic capabilities is nothing new or surprising.

  • Roxxor@feddit.de · 9 points · 1 year ago

    I am a doctor, and I am sure that ChatGPT can answer better than I can. I am also very supportive of AI as support for me as a professional. I work in internal medicine, and things change and progress all the time; I cannot retain as much knowledge as a machine can. The difference between an AI and a doctor will be the decisions made based on that information. Every action I take or do not take has consequences. Do you think any AI producer will take any responsibility for that? As a doctor, I always have one foot in prison.

    Also, diagnostic and therapeutic procedures cannot be taken out of my hands by an AI. It can’t resuscitate a person. But I wouldn’t mind assistance - robotics has a loooong way to go before that happens.

    And to be clear: I am not talking about ChatGPT as the tool. Someone has to train an AI specifically on precise medical datasets, so it can give me hints about possible issues - e.g. lab values I don’t recognize because they are specific to some rare disease I have never encountered before.

    Fun fact: I have some foreign colleagues who use ChatGPT as the base for their patient reports. They give very short (and non-personal!) instructions, a nice text pops out, and they add in the details. Voilà. Their reports are better than mine, and I’m a native speaker.

    Times are crazy

    • StringTheory@beehaw.org · 3 points · 1 year ago

      Fun fact: I have some foreign colleagues who use ChatGPT as the base for their patient reports. They give very short (and non-personal!) instructions, a nice text pops out, and they add in the details. Voilà. Their reports are better than mine, and I’m a native speaker.

      In the end, how is this different from using a good Epic template? Sit down and create a wardrobe of templates and SmartPhrases for your reports. It will end up as fast as those ChatGPT texts, but it will be your own writing, with details you control. Epic has several ways to import and copy other people’s templates, too. You could even use one of those ChatGPT reports to create part of your template if you like.

  • They worry about the technology making wrong diagnoses,

    You know who I’ve seen make “wrong diagnoses” over and over again? Human fricken doctors. And not to me (a healthy, upper-middle-class white male professional) but to my wife (a disabled woman with a debilitating genetic disease, from a shitty part of Texas). We had to fight for years and spend tons of money to get “official” diagnoses that we had already made at home based on observation, Googling, and knowledge of her family history. I’ve watched male neurologists talk to ME instead of her while staring at her boobs. I’ve watched ER doctors have her history and risks explained to them in excruciating detail, only to send her home (when it turned out she needed emergency surgery).

    revealing sensitive patient data

    Oh, 100%, this is gonna happen.

    becoming an excuse for insurance and hospital administrators to cut staff in the name of innovation and efficiency.

    Oh, 100%, this is ALSO gonna happen. My wife recently had to visit the ER twice, undergo scary spinal surgery, and stay over for 2 weeks. The NUMBER ONE THING I noticed was that this state-of-the-art hospital, in a small, wealthy, highly gentrified town, was DANGEROUSLY understaffed. The nurses and orderlies were stretched so thin they couldn’t even stop to breathe (and they were OFTEN cranky and RUSHING through delicate tasks where they could easily make mistakes). This reckless profiteering is already a problem (and probably needs more aggressive regulation to deal with it; nothing else will work). If AI exposes it further and pushes it to a breaking point, maybe that could ultimately be a good thing.

  • bl_r@beehaw.org · 5 points · 1 year ago

    I’m not an expert in ML or cardiology, but I was able to create models that could detect heart arrhythmias with upwards of 90% accuracy - higher accuracy than a cardiologist - and do so much faster.

    Do I think AI can replace doctors? No. The amount of data needed to train a model is immense (granted, I only had access to public data sets), and detecting rarer conditions was not feasible. While AI will beat cardiologists in this one aspect, making predictions is not the only thing a cardiologist does.

    But I think positioning AI as a tool to assist in triage, and to provide second opinions could be a massive boon for the industry.
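
    For a sense of what such a model can look like in code, here’s a minimal sketch with scikit-learn; the random arrays are placeholders for real ECG-derived features (RR intervals, QRS width, etc.), which are what actually get you to ~90%:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Placeholder data: rows are feature vectors extracted from ECG beats,
    # labels are normal (0) vs. arrhythmic (1). Random noise won't reach 90%;
    # real, carefully labeled public data sets are what make that possible.
    X = np.random.rand(1000, 16)
    y = np.random.randint(0, 2, 1000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
    print(accuracy_score(y_test, model.predict(X_test)))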

      • bl_r@beehaw.org · 3 points · 1 year ago

        That is both a good thing and a bad thing. Self-diagnosis will inevitably end in misdiagnosis.

        I think AI has the potential to increase the number of patients seen, and maybe even decrease costs, but in the enshittified American system I’m willing to bet the outcome would not be close to the best one.

  • Banzai51@midwest.social · 4 points · 1 year ago

    There are a lot of doctors in favor of AI too. Imagine real-time patient monitoring that can alert doctors to, say, a possible heart attack. It is something that has been worked on for at least the last 15 years.
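
    As a toy sketch of the idea (the thresholds are invented for illustration; real early-warning systems use validated clinical scores, not hardcoded cutoffs):

    from typing import Iterable

    def monitor(readings: Iterable[dict]) -> None:
        # Flag vitals that warrant a clinician's follow-up.
        for r in readings:
            alerts = []
            if r["heart_rate"] > 120 or r["heart_rate"] < 40:
                alerts.append("heart rate")
            if r["spo2"] < 90:
                alerts.append("oxygen saturation")
            if alerts:
                print(f"Patient {r['patient_id']}: check {', '.join(alerts)}")

    monitor([{"patient_id": 7, "heart_rate": 131, "spo2": 96}])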