Does vibe coding risk destroying the Open Source ecosystem? According to a preprint paper by a number of high-profile researchers, this might indeed be the case, based on observed patterns and some…
The Register had an article, a year or two ago, about using AI the opposite way: instead of writing the code, someone was using it to find security problems in existing code. They said it was really useful for that, and most of what it flagged, including one codebase that was sending private information off to some internet server, really were problems.
I wonder if using LLMs as editors, rather than writers, would be a better use for these things?
They are pretty good at summarisation. If I want to catch up with a long review thread on a patch series I've just started looking at, I occasionally ask Gemini to outline the development so far and the remaining issues.
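For what it's worth, the prompt is nothing fancy. A minimal sketch of the sort of thing I mean, assuming the google-genai Python SDK, an API key in the environment, and a hypothetical thread.mbox export of the review thread (the model name and file are just placeholders):

# Sketch: ask Gemini to summarise a patch-review thread.
# Assumes the google-genai SDK (pip install google-genai) and a
# hypothetical thread.mbox export of the mailing-list thread.
from google import genai

client = genai.Client()  # picks up the API key from the environment

with open("thread.mbox", "r", encoding="utf-8", errors="replace") as f:
    thread = f.read()

prompt = (
    "Below is a mailing-list review thread for a patch series.\n"
    "Outline the development so far and list the issues still open.\n\n"
    + thread
)

response = client.models.generate_content(
    model="gemini-2.5-flash",  # placeholder model name
    contents=prompt,
)
print(response.text)

The point being that the model never writes any code here; it only condenses a discussion I'm going to read critically anyway.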
A second pair of eyes has always seemed like an acceptable way to use this, IMO, but it shouldn't be the primary or only one.