• RalfWausE@feddit.org · 4 hours ago

    If the abominable intelligence is killing every corner of the things we consider good, it’s time to start killing the “AI”…

  • WanderingThoughts@europe.pub · 9 hours ago

    Only until AI investor money dries up and vibe coding gets very expensive, very quickly. Kinda like how Uber isn’t way cheaper than a taxi anymore.

    • Zwuzelmaus@feddit.org · 4 hours ago

      until AI investor money dries up

      Is that the latest term for “when hell freezes over”?

      • WanderingThoughts@europe.pub · 35 minutes ago

        Hah, they wish. It’s a business, and they need a return on investment eventually. Maybe if we were in a zero interest rate world again, but even that didn’t last.

      • massacre@lemmy.world · 4 hours ago

        Microsoft steeply lowered expectations for its AI sales team, though they have denied this since they got pummelled in their quarterly results, and there’s been a lot of news about how investors are unhappy with all the circular AI investments pumping those stocks. When the bubble pops (and all signs point to that), investors will flee. You’ll see consolidation, buy-outs, hell, maybe even some bullshit bailouts, but ultimately it has to be a sustainable model, and that means it will cost developers or they will be pummelled with ads (probably both).

        A majority of CEOs are saying their AI spend has not paid off, and those are the primary customers, not your average Joe. MIT reports a 95% failure rate for generative AI at companies. Altman still hasn’t turned a profit. There are serious power build-out problems for new AI data centers (let alone the chips needed). It’s an overheated, reactionary market. It’s the dot-com bubble all over again.

        There will be some more spending to make sure a good chunk of CEOs “add value” (FOMO), and then a critical juncture where AI spending contracts sharply as they continue to see no returns, accelerated if the US economy goes tits up. Then the dominoes fall.

    • percent@infosec.pub · 7 hours ago

      I wouldn’t be surprised if that’s only a temporary problem, if it becomes one at all. People are quickly discovering ways to use LLMs more effectively, and open-source models are starting to become competitive with commercial ones. If we can keep finding ways to get more out of smaller, open-source models, then maybe we’ll be able to run them on consumer- or prosumer-grade hardware.

      GPUs and TPUs have also been improving their energy efficiency. There seems to be a big commercial focus on that too, as energy availability is quickly becoming a bottleneck.

      • WanderingThoughts@europe.pub · 5 hours ago

        So far, there is a serious cognitive step needed to be productive that LLMs just can’t make. They can output code, but they don’t understand what’s going on. They don’t grasp architecture. Large projects don’t fit in their token window. Debugging something vague doesn’t work. Fact-checking isn’t something they do well.
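
        To put a rough, purely illustrative number on the token-window point (the figures here are my own assumption, not from the comment): at something like 10 tokens per line of code, a 500,000-line project would need roughly

        \[ 5\times10^{5}\ \text{lines} \times 10\ \tfrac{\text{tokens}}{\text{line}} = 5\times10^{6}\ \text{tokens}, \]

        which is far beyond typical context windows today, so tooling has to select and summarize parts of the project rather than load all of it.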

        • VibeSurgeon@piefed.social · 51 minutes ago

          So far, there is a serious cognitive step needed to be productive that LLMs just can’t make. They can output code, but they don’t understand what’s going on. They don’t grasp architecture. Large projects don’t fit in their token window.

          There’s a remarkably effective solution for this that helps humans and models alike: write documentation.

          It’s actually kind of funny how the LLM wave has sparked a renaissance of high-quality documentation. Who would have thought?

          • WanderingThoughts@europe.pub · 30 minutes ago

            High-quality documentation assumes there’s someone with experience working on this. That’s not the vibe coding they’re selling.

            • VibeSurgeon@piefed.social · 27 minutes ago

              Complete hands-off no-review no-technical experience vibe coding is obviously snake oil, yeah.

              This is a pretty large problem when it comes to learning about LLM-based tooling: lots of noise, very little signal.

        • percent@infosec.pub · 5 hours ago

          They don’t need the entire project to fit in their token windows. There are ways to make them work effectively in large projects. It takes some learning and effort, but I see it regularly in multiple large, complex monorepos.

          I still feel somewhat new-ish to using LLMs for code (I was kinda forced to start learning), but when I first jumped into a big codebase with AI configs/docs from people who have been using LLMs for a while, I was kinda shocked. The LLM worked far better than I had ever experienced.

          It actually takes a bit of skill to set up a decent workflow/configuration for these things. If you just jump into a big repo that doesn’t have configs/docs/optimizations for LLMs, or you haven’t figured out a decent workflow, then they’ll be underwhelming and significantly less productive.


          (I know I’ll get downvoted just for describing my experience and observations here, but I don’t care. I miss the pre-LLM days very much, but they’re gone, whether we like it or not.)

          • WanderingThoughts@europe.pub · 32 minutes ago

            It actually takes a bit of skill to set up a decent workflow/configuration for these things

            Exactly this. You can’t just replace experienced people with it, and that’s basically how it’s sold.

          • RIotingPacifist@lemmy.world · 4 hours ago

            This sounds a lot like every framework; 20 years ago you could have written that about Rails.

            Which IMO makes sense because if code isn’t solving anything interesting then you can dynamically generate it relatively easily, and it’s easy to get demos up and running, but neither can help you solve interesting problems.

            Which isn’t to say it won’t have a major impact on software for decades, especially low-effort apps.

      • XLE@piefed.social · 4 hours ago

        Can you cite some sources on the increased efficiency? Also, can you link to these lower priced, efficient (implied consumer grade) GPUs and TPUs?

        • percent@infosec.pub · 4 hours ago

          Oh, sorry, I didn’t mean to imply that consumer-grade hardware has gotten more efficient. I wouldn’t really know about that, but I assume most of the focus is on data centers.

          Those were two separate thoughts:

          1. Models are getting better, and the tooling built around them is getting better, so hopefully we can get to a point where small models (capable of running on consumer-grade hardware) become much more useful.
          2. Some modern data center GPUs and TPUs compute more per watt-hour than previous generations.
          • XLE@piefed.social · 3 hours ago

            Can you provide evidence the “more efficient” models are actually more efficient for vibe coding? Results would be the best measure.

            It also seems like costs for these models are increasing, and companies like Cursor had to stoop to offering people services below cost (before pulling the rug out from under them).

  • TropicalDingdong@lemmy.world · 9 hours ago

    Vibe coding is a black hole. I’ve had some colleagues try and pass stuff off.

    What I’m learning is that the code itself is secondary to the understanding you develop by creating it. You don’t create the code? You don’t develop the understanding. Without the understanding, there is nothing.

    • Feyd@programming.dev · 8 hours ago

      Yes. And using the LLM to generate the code, then developing the requisite understanding and making it maintainable, is slower than just writing it in the first place. And that effect compounds with repetition.

      • Paragone@lemmy.world · 6 hours ago

        The Register had an article, a year or two ago, about using AI in the opposite way: instead of writing the code, someone was using it to discover security problems in it, and they said it was really useful for that. Most of the things it identified, including some codebase which was sending private information off to some internet server, really were problems.

        I wonder if using LLMs as editors, instead of writers, would be a better use for these things?

        _ /\ _

        • Alex@lemmy.ml · 48 minutes ago

          They are pretty good at summarisation. If I want to catch up with a long review thread on a patch series I’ve just started looking at, I occasionally ask Gemini to outline the development so far and the remaining issues.

        • Whostosay@sh.itjust.works · 6 hours ago

          A second pair of eyes has always been an acceptable way to use this, IMO, but it shouldn’t be the primary or only one.

  • phil@lymme.dynv6.net · 7 hours ago

    Open source is not only about publishing code: it’s about quality, verifiable, reproducible code at work. If LLMs can’t do that, those “vibe coding” projects will hit a hard wall. Still, it’s quite clear they badly impact the FOSS ecosystem.

  • statelesz@slrpnk.net · 9 hours ago

    LLMs definitely kill the trust in open source software, because now everything can be a vibe-coded mess and it’s sometimes hard to check.

    • RmDebArc_5@feddit.org · 9 hours ago

      LLMs definitely kill the trust in open source software, because now everything can be a vibe-coded mess and it’s sometimes hard to check.

      • bryndos@fedia.io · 9 hours ago

        Might make open source more trustworthy; it can’t be any harder to check than closed source.

        • FaceDeer@fedia.io · 9 hours ago

          A week or two back there was a post on Reddit where someone was advertising a project they’d put up on GitHub, and when I went to look at it I didn’t find any documentation explaining how it actually worked - just how to install it and run it.

          So I gave Gemini the URL of the repository and asked it to generate a “Deep Research” report on how it worked. Got a very extensive and detailed breakdown, including some positives and negatives that weren’t mentioned in the existing readme.

      • Paragone@lemmy.world · 6 hours ago

        To the best of my knowledge, emojis are legal function names in both Swift & Julia…

        The Swift example was damned incomprehensible, & … well, it was Apple stuff, so making it look idiotic might have been some kind of cultural-exclusivity intention…

        The Julia stuff, though, means that you can use Greek symbols, etc, for functions, & get things looking more like what they should…
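
        As a quick, concrete illustration (a minimal sketch added here, not from the original comment; the names 🐶, π, σ and μ are made up): Swift really does accept emoji and Greek letters in identifiers, so something like this compiles:

            // Emoji and Greek letters as identifiers in Swift (illustrative names only).
            func 🐶(_ name: String) -> String {
                return "woof, \(name)!"        // emoji as a function name
            }

            let π = Double.pi                  // Greek letter as a constant name

            func σ(_ xs: [Double]) -> Double { // Greek letter as a function name
                let μ = xs.reduce(0, +) / Double(xs.count)
                let variance = xs.map { ($0 - μ) * ($0 - μ) }.reduce(0, +) / Double(xs.count)
                return variance.squareRoot()   // population standard deviation of xs
            }

            print(🐶("Rex"), π, σ([1.0, 2.0, 3.0, 4.0]))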


        Also, I think emojis are actually better than my all-text style for communicating intonation/emotion (I’m old: learned last century), & maybe we old geezers ought to adapt a bit to such things…

        That does NOT mean that cartoon “code” is good-enough, whether it’s cartoonish in plaintext or in emojis, though…

        I’m just trying to keep cultural prejudice & code quality as distinct categories of judgement, you know?

        ( & cultural-prejudice is an actual thing, though it’s usually called “religious wars”, isn’t it, in geekdom? )

        _ /\ _

      • mintiefresh@piefed.ca · 8 hours ago

        I used to use emojis in my documentation very lightly because I thought they were a good way to provide visual cues. But now, with all the people vibe coding their own readme docs with freaking emojis everywhere, I’ve had to stop using them.

        Mildly annoying.

        • very_well_lost@lemmy.world · 7 hours ago

          Man… of all the vibe coding tools, Lovable has gotta be one of the most useless, too.

          I work with people (all middle managers) who love Lovable because they can type a two-sentence description of an app and it will immediately vomit something into existence. But the code it generates is an absolute disaster, and the UIs it designs (which are supposed to be its main draw) are some of the most generic crap I’ve ever seen.

          0/10, do not recommend.

        • Munkisquisher@lemmy.nz · 6 hours ago

          Got a job application with a one-line cover letter: “Iam interested to work with u are company”. It was kinda refreshing to see that instead of a whole page of slop, like most of them are these days.

    • Tetsuo@jlai.lu · 9 hours ago

      I think someone who is very knowledgeable in a project would probably somehow know if there is vibe coding. I think this will affect brand-new projects, but not the older codebases that much. I even think it might enable finding old bugs in old open source codebases.

      • 123@programming.dev · 3 hours ago

        You are more optimistic than the maintainers of those older projects that have started to ban LLM-generated bug reports. They tend to be a waste of time for the maintainers (e.g., the cURL project).

  • whereIsTamara@lemmy.org · 8 hours ago

    This isn’t a problem with the AI; it’s a problem with the user. If you don’t know enough to select the library and make the AI use it, maybe you were never gonna finish the project without AI anyway.