• 1 Post
  • 15 Comments
Joined 1 year ago
Cake day: September 27th, 2023

  • I posted some of my experience with Kagi’s LLM features a few months ago here: https://literature.cafe/comment/6674957. TL;DR: the summarizer and document discussion are fantastic, because they don’t hallucinate. The search integration is as good as anyone else’s, but still nothing to write home about.

    The Kagi assistant isn’t new, by the way; I’ve been using it for almost a year now. It’s now out of beta and has an improved UI, but the core functionality seems mostly the same.

    As far as actual search goes, I don’t find it especially useful. It’s better than Bing Chat or whatever they call it now because it hallucinates less, but the core concept still needs work. It basically takes a few search results and feeds them into the LLM for a summary. That’s not useless, but it’s certainly not a game-changer. I typically want to check its references anyway, so it doesn’t really save me time in practice.
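    Mechanically, that’s just the familiar “retrieve, then summarize” pattern. A hand-wavy shell sketch, purely illustrative and not Kagi’s actual pipeline (the llm CLI is Simon Willison’s tool, and the DuckDuckGo scrape stands in for a real search API):

        # fetch a results page and crudely strip the HTML (stand-in for a real search API)
        curl -s 'https://html.duckduckgo.com/html/?q=example+query' \
            | sed -e 's/<[^>]*>//g' | head -n 60 > /tmp/results.txt

        # hand the top results to whatever LLM you like for a summary
        llm 'Summarize these search results and keep the source URLs.' < /tmp/results.txt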

    Kagi’s search is primarily not LLM-based and I still find the results and features to be worth the price, after being increasingly frustrated with Google’s decay in recent years. I subscribed to the “Ultimate” Kagi plan specifically because I wanted access to all the premium language models, since subscribing to either ChatGPT or Claude would cost about the same as Kagi, while Kagi gives me access to both (plus Mistral and Gemini). So if you’re interested in playing around with the latest premium models, I still think Kagi’s Ultimate plan is a good deal.

    That said, I’ve been disappointed with the development of LLMs this year across the board, and I’m not convinced any of them are worth the money at this point. This isn’t so much a problem with Kagi as it is with all the LLM vendors. The models have gotten significantly worse for my use cases compared to last year, and I don’t quite understand why; I guess they are optimizing for benchmarks that simply don’t align with my needs. I had great success getting zsh or Python one-liners last year, for example, whereas now they always seem to give me wrong or incomplete answers.

    My biggest piece of advice when dealing with any LLM-based tools, including Kagi’s, is: don’t use them for anything you’re not able to validate and correct on your own. They’re a time-saver, not a substitute for your own skills and knowledge.







  • Ah, somehow I didn’t see 18 there and only looked at 17. Thanks!

    I tried pulling just the one package from the sid repo, but that created a cascade of dependencies, including all of llvm. I was able to get those files installed but not able to get clinfo to succeed. I also tried installing llvm-19 from the repo at https://apt.llvm.org/, with similar results. clinfo didn’t throw the fatal errors anymore, but it didn’t work, either. It still reported Number of devices 0 and OpenCL-based tools crashed anyway. Not with the same error, but with something generic about not finding a device or possibly having corrupt drivers.
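    For the record, the “just one package” attempt looked roughly like this (a sketch; mesa-opencl-icd is a stand-in for whichever package actually needs the newer version):

        # /etc/apt/sources.list.d/sid.list
        deb http://deb.debian.org/debian sid main

        # /etc/apt/preferences.d/99-sid -- keep sid at low priority so nothing
        # upgrades to it unless explicitly requested
        Package: *
        Pin: release a=sid
        Pin-Priority: 100

        # then pull only the one package (plus its dependency cascade) from sid
        sudo apt update
        sudo apt install -t sid mesa-opencl-icd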

    Should I bite the bullet and do a full upgrade to sid, or is there some way to do this more precisely that won’t muck up Bookworm?





  • This is correct, albeit not universal.

    KDE has a predefined schedule for “release candidates”, which includes RC2 later this month. So “RC1” is clearly not going to be the final version. See: https://community.kde.org/Schedules/February_2024_MegaRelease

    This is at least somewhat common. In fact, it’s the same way the Linux kernel development cycle works: typically seven release candidates, tagged weekly between the close of the merge window and the final release. See: https://www.kernel.org/category/releases.html

    In the world of proprietary corporate software, I more often see release candidates presented as potentially final; i.e. literal candidates for release. The idea of scheduling multiple RCs in advance doesn’t make sense in that context, since each one is intended to be the last (with fingers crossed).

    It’s kind of splitting hairs, honestly, and I suspect this distinction has more to do with the transparency of open-source projects than anything else. Apple, for example, may well have a schedule for multiple macOS RCs from the start and simply choose not to share it. They present every “release candidate” as potentially the final version (and indeed, the final version is the same build as the final RC), but in practice there’s always more than one.

    Also, Apple is hardly an ideal example to follow, since they’ve apparently never heard of semantic versioning. Major compatibility-breaking changes often land in minor point releases. It’s infuriating. But I digress.



  • hersh@literature.cafe to Linux@lemmy.ml · Is anyone using awk? · 9 months ago

    All the time. Not always by choice!

    A lot of my work involves writing scripts for systems I do not control, using as light a touch as is realistically possible. I know for a fact Python is NOT installed on many of my targets, and it doesn’t make sense to push out a whole Python environment of my own for something as trivial as string manipulation.

    awk is super powerful, but IMHO not powerful enough to justify its complexity, relative to other languages. If you have the freedom to use Python, then I suggest using that for anything advanced. Python skills will serve you better in a wider variety of use cases.
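    To make the comparison concrete, here’s the sort of trivial column work I mean, as an awk one-liner and a rough Python equivalent (access.log is just a stand-in):

        # awk: sum the third whitespace-delimited column
        awk '{ sum += $3 } END { print sum }' access.log

        # rough Python equivalent, when Python is actually on the target
        python3 -c 'import sys; print(sum(float(line.split()[2]) for line in sys.stdin))' < access.log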



  • I used to run Tumbleweed with KDE on my Nvidia system. I found the rolling release structure of Tumbleweed to cause extra work for me, because kernel updates came frequently and occasionally broke the Nvidia drivers. As a workaround, I ended up pinning my kernel to an old version.
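    The workaround itself was just a package lock (a sketch; kernel-default is openSUSE’s usual kernel package, but check what your system actually runs):

        # hold the kernel back until the Nvidia drivers catch up
        sudo zypper addlock 'kernel-default*'
        # and release the lock later
        sudo zypper removelock 'kernel-default*'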

    Nvidia drivers have been at least a little troublesome on every distro I’ve used, particularly with the additional CUDA libraries.

    One nice thing about Suse is that it uses BTRFS by default, and you can use snapper to revert your whole system if something goes wrong. So if Nvidia shits the bed after an update, it’s easy to roll back. Most distros default to ext4 and don’t have snapshot support out of the box, which feels like living in the stone age to me after using Suse and BTRFS.
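    The rollback itself is only a couple of commands (a sketch; the snapshot number is a placeholder, pick a pre-update one from the list):

        # list snapshots, then roll the root filesystem back to a known-good one
        sudo snapper list
        sudo snapper rollback 42   # 42 is a placeholder snapshot number
        sudo reboot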

    Of course you CAN set up BTRFS and snapshots on any distro, but that’s a lot to ask of a Linux beginner. I strongly recommend choosing a distro that does it for you, like Suse.


  • I feel this.

    Back in the 90s, there was a fantastic paint program for Mac called ColorIt! (The exclamation point is part of the name, though this is the last time I will respect that because it’s obnoxious; lookin’ at you, Yahoo!*)

    It was a commercial product, but ColorIt 2.3 was eventually released as freeware after newer major versions were released for sale. 2.3 was everything I needed, and while I did try ColorIt 4.0, it didn’t click with me the way 2.3 did. At the time I felt like they bowed to the pressure of Adobe’s success and instead of playing to their unique strengths, they made ColorIt’s UI a bit too much like Photoshop. So I stuck with version 2.3.

    By the time Mac OS X came around, ColorIt was no longer in active development. But OS X had the “Classic” environment, something akin to an OS 9 VM tightly integrated into OS X. Classic apps didn’t look or feel like native OS X apps, and running Classic came with a heavy RAM burden. But I did it anyway, because ColorIt 2.3 was da bomb.

    I continued using ColorIt 2.3 up until Apple killed support for Classic in Mac OS X 10.5 Leopard.

    At that point, the intrepid developers came out of hiding and created a Carbon port of ColorIt 4.5 that ran natively on OS X. It was a PowerPC-only build, though, which meant it didn’t run natively on Intel Macs; it ran under Apple’s Rosetta compatibility layer instead, at least until Apple axed that as well.

    If I ever get into pixel art again, I’ll probably run ColorIt 2.3 in an OS 9 VM with SheepShaver or whatever works best nowadays.

    *That exclamation point is strictly to emphasize my disdain for Yahoo.