• 0 Posts
  • 24 Comments
Joined 2 years ago
Cake day: June 19th, 2023

  • There are already talk-to-your-dog/cat products such as FluentPet. Probably the biggest issue with cats in particular is that their “vocabulary” is quite limited (usually fewer than a dozen distinct “meows”), but some FluentPet users (see examples on YouTube such as BilliSpeaks) seem to demonstrate basic reasoning. A full-blown language is beyond them, but they do seem capable of understanding more concepts than we give them credit for.




  • Australis13@fedia.io to Technology@lemmy.world · aight... i'm out..
    7 days ago

    The irony is that, according to the article, it already does. What is changing is that the LLM will be able to use more of that data:

    OpenAI is rolling out a new update to ChatGPT’s memory that allows the bot to access the contents of all of your previous chats. The idea is that by pulling from your past conversations, ChatGPT will be able to offer more relevant results to your questions, queries, and overall discussions.

    ChatGPT’s memory feature is a little over a year old at this point, but its function has been much more limited than the update OpenAI is rolling out today… Previously, the bot stored those data points in a bank of “saved memories.” You could access this memory bank at any time and see what the bot had stored based on your conversations… However, it wasn’t perfect, and couldn’t naturally pull from past conversations, as a feature like “memory” might imply.




  • This makes me suspect that the LLM has noticed the correlation between fascist tendencies and poor cybersecurity, e.g. right-wing parties undermining encryption, most of the things Musk does, etc.

    Here in Australia, the more conservative of the two major parties has consistently undermined privacy and cybersecurity by implementing policies such as mandatory metadata collection and government backdoors/powers to break encryption, and it is slowly becoming more authoritarian (or that is becoming more obvious).

    Stands to reason that an LLM, with such a huge dataset at its disposal, might pick up on these correlations more readily than a human would.