(They/Them) I like TTRPGs, history, (audio and written) horror and the history of occultism.

  • 0 Posts
  • 30 Comments
Joined 3 months ago
Cake day: January 24th, 2025

  • And, yes, I can prove that a human can understand things when I ask: Hey, go find some books on a subject, then read them and summarize them. If I ask for that, and they understand it, they can then tell me the names of those books, because their summary is based on actually taking in the information, analyzing it and reorganizing it by apprehending it as actual information.

    They do not immediately tell me about the hypothetical summaries of fake books and then state with full confidence that those books are real. The LLM does not understand what I am asking for, but it knows what the shape is. It knows what an academic essay looks like and it can emulate that shape, and if you’re just using an LLM for entertainment that’s really all you need. The shape of a conversation for a D&D npc is the same as the actual content of it, but the shape of an essay is not the same as the content of that essay. Essays are too diverse, they have critical information in them, and they are about that information. The LLM does not understand the information, which is why it makes up citations- it knows that a citation fits in the pattern, and that citations are structured with a book name and author and all the other relevant details. None of those are assured to be real, because it doesn’t understand what a citation is for or why it’s there, only that they should exist. It is not analyzing the books and reporting on them.


  • Hello again! So, I am interested in engaging with this question, but I have to say: My initial post is about how an LLM cannot provide actual, real citations with any degree of academic rigor for a random esoteric topic. This is because it cannot understand what a citation is, only what it is shaped like.

    An LLM deals with context over content. They create structures that are legible to humans, and they are quite good at that. An LLM can totally create an entire conversation with a fictional character in their style and voice- that doesn’t mean it knows what that character is. Consider how AI art can have problems that arise from the fact that these models understand the shape of something, but they don’t know what it actually is- that’s why early AI art had a lot of problems with objects ambiguously becoming other objects. The fidelity of these creations has improved with the technology, but that doesn’t imply understanding of the content.

    Do you think an LLM understands the idea of truth? Do you think if you ask it to say a truthful thing, and be very sure of itself and think it over, it will produce something that’s actually more accurate or truthful- or just something that has the language hallmarks of being truthful? I know that an LLM will produce complete fabrications that distort the truth if you expect a baseline level of rigor from it, and I proved that above, in that the LLM couldn’t even accurately report the name of a book it was supposedly using as a source.

    What is understanding, if the LLM can make up an entire author, book and bibliography if you ask it to tell you about the real world?


  • What’s yours? I’m stating that LLMs are not capable of understanding the actual content of any words they arrange into patterns. This is why they create false information, especially in places like my examples with citations- those are purely the result of the model generating sets of words that sound like academic citations. It doesn’t know what a citation actually is.

    Can you prove otherwise? In my sense of “understanding”, it’s actually knowing the content and context of something, being able to actually subject it to analysis and explain it accurately and completely. An LLM cannot do this. It’s not designed to- there are neural-network AIs built on similar foundational principles toward divergent goals that can produce remarkable results in terms of data analysis, but not ChatGPT. It doesn’t understand anything, which is why you can repeatedly ask it about a book only to look it up and discover it doesn’t exist.



  • As I understand it, most LLMs are almost literally the Chinese room thought experiment. They have a massive collection of data, strong algorithms for matching letters to letters in a productive order, and sufficiently advanced processing power to make use of that. An LLM is very good at presenting conversation; completing sentences, paragraphs or thoughts; or answering questions of very simple fact- they’re not good at analysis, because that’s not what they were optimized for.

    This can be seen when people ask them to do things like tell you how many times a letter shows up in a word, do simple math that’s presented in a weird way, or write a document with citations- they will hallucinate information because they are just doing what they were made to do: completing sentences, expanding words along a probability curve that produces legible, intelligible text.
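
    As an illustration- this is a minimal sketch, and the tokenizer library and encoding name in it are my assumptions, not something the model vendors document for their chat products- of what a model actually “sees” when you ask it about letters:

    ```python
    # Minimal sketch, assuming OpenAI's open-source tiktoken tokenizer library.
    # The point: a model receives opaque sub-word token IDs, not characters,
    # so "how many r's are in strawberry?" is not a lookup it can do directly.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # encoding name is an assumption
    word = "strawberry"

    token_ids = enc.encode(word)                   # list of integer token IDs
    pieces = [enc.decode([t]) for t in token_ids]  # the sub-word chunks

    print(token_ids)        # a few opaque integers
    print(pieces)           # e.g. the word split into chunks, not letters
    print(word.count("r"))  # ordinary code counts letters trivially: 3
    ```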

    I opened up ChatGPT and asked it to provide me with a short description of how medieval European banking worked, with citations, and it provided me with what I asked for. However, the citations it made were fake.

    The minute I asked it about them, I assume a bit of sleight of hand happened: it seems to be set up so that if someone asks a question like that, it gets forwarded to a search engine that verifies whether the book exists, probably using Worldcat or something. Then I assume another search is made to provide the prompt material for the LLM to present the fact that the author does exist, and possibly to accurately name some of their books.

    I say sleight of hand because this presents the idea that the model is capable of understanding it made a mistake, but I don’t think it does- if it knew that the book wasn’t real, why would it have mentioned it in the first place?

    I tested each of the citations it made. In one case, I asked it to tell me more about one of them and it ended up supplying an ISBN without me asking, which I dutifully checked. The ISBN was for a book that exists, but it didn’t share a title or author with the citation, because those were made up. The book itself was about the correct subject, but if the LLM can’t even tell me the name of the book correctly, am I expected to believe what it says about the book itself?
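
    For anyone who wants to repeat that check themselves, here’s a rough sketch of the kind of lookup I mean- assuming the public Open Library API and the requests library; the ISBN in it is a placeholder, not the one I was given:

    ```python
    # Rough sketch of checking whether an ISBN resolves to a real book,
    # assuming the public Open Library API and the requests library.
    import requests

    def lookup_isbn(isbn: str):
        """Return basic metadata for an ISBN, or None if no record exists."""
        resp = requests.get(f"https://openlibrary.org/isbn/{isbn}.json", timeout=10)
        if resp.status_code != 200:
            return None
        return resp.json()

    # Placeholder ISBN for illustration only- substitute the one you were given.
    record = lookup_isbn("9780000000000")
    if record is None:
        print("No such book on record.")
    else:
        # Compare the real title and author against whatever the LLM claimed.
        print("Actual title:", record.get("title"))
    ```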


  • It’s complicated. The current state of the internet is dominated by corporate interests towards maximal profit, and that’s driving the way websites and services are structured towards very toxic and addictive patterns. This is bigger than just “social media.”

    However, as a queer person, I will say that if I didn’t have the ability to access the Internet and talk to other queer people without my parents knowing, I would be dead. There are lots of abused kids who lack any other outlets to seek help, talk to people and realize their problems, or otherwise find relief for the crushing weight of familial abuse.

    Navigating this issue will require grace, awareness and a willingness to actually address core problems and not just symptoms. It doesn’t help that there is a rising tide of purity culture and “for the children” legislation that will curtail people’s privacy and ability to use the internet, and that will be used to push queer people and their art or narratives off of the stage.

    Requiring age verification reduces anonymity and makes it certain that some people will be unable to use the internet safely. Yes, it’s important in some cases, but there’s also a cost to it.

    There’s also the fact that western society has systemically ruined all the third spaces and other places for children to exist in that aren’t their home or school. It used to be possible for kids and teens to spend time at malls, or just wander around a neighborhood. There were lots of places where they were implicitly allowed to be- but those are overwhelmingly being closed, commercialized or subjected to the rising tide of moral panic and paranoia that drives people to call the cops on any group of unknown children they see on their street.

    Police violence and the severity of police responses have also heightened, so things that used to be minor, almost expected misdemeanors for children wandering around now carry the literal risk of death.

    So children are increasingly isolated, locked down in a context where they cannot explore the world or their own sense of self outside the hovering presence of authority- so they turn to the internet. Cutting that off will have repercussions. Social media wouldn’t be so addictive for kids if they had other venues to engage with other people their age that weren’t subject to the constant scrutiny of adults.

    Without those spaces, they have to turn to the only remaining outlet. This article is woefully inadequate to answer the fundamental, core problems that produce the symptoms we are seeing; and its implementation will not rectify the actual problem. It will only add additional stress to the system and produce a greater need to seek out even less safe locations for the people it ostensibly wishes to protect.





  • My suggestion is to either change the context you play games in, or pick games that are very cognitively different from what you normally do at work.

    You can change your context with a new console, but I think it may be cheaper to do something like buying a controller and playing games while standing up, or on your couch/armchair, or while sitting on a yoga ball. The point is to trick your brain, because it has associated sitting at a desk in front of a computer with boring tedium. Change the presentation and your subconscious will interpret it differently.

    You can also achieve this by identifying the things that you have to do in your job that mirror videogame genres you enjoy and picking a game that shares few of those qualities.

    I worked at the post office for years, doing mail processing, and my enjoyment of management and resource distribution style games went down sharply during that time because of the cognitive overlap- I played more roguelikes and RPGs as a consequence.


  • Thank you, I am trying to be less abrasive online, especially about LLM/gen-AI stuff. I have come to terms with the fact that my desire for accuracy and truthfulness in things skews way past the median, to the point that it’s almost pathological- which is probably why I ended up studying history in college. To me, the idea of using an LLM to get information seems like a bad use of my time- I would methodically check everything it says, and the total time spent would vastly exceed any amount saved, but that’s because I’m weird.

    Like, it’s probably fine for anything you’d rely on skimming a Wikipedia article for. I wouldn’t use them for recipes or cooking, because that could give you food poisoning if something goes wrong, but if you’re just asking, “Hey, what’s ice IV?” then the answer it gives is probably equivalent in 98% of cases to checking a few websites. People should invest their energy where they need to, and it’s less effort for me to not use the technology, but I know there are people who can benefit from it and have a good use case for it.

    My main point of caution for people reading this is that you shouldn’t rely on an LLM for important information- whatever that means to you, because if you want to be absolutely sure about something, then you shouldn’t risk an AI hallucination, even if it’s unlikely.


  • I’m not a frequent user of LLMs, but this was pretty intuitive to me after using them for a few hours. However, I recognize that I’m a weirdo and so will pick up on the idea that the prompt leads the style.

    It’s not like the LLM actually understands that you are asking questions, it’s just that it’s generating a procedural response to the last statement given.

    Saying please and thank you isn’t the important part.

    Just preface your use with, like,

    “You are a helpful and enthusiastic [assistant] with excellent communication skills. You are polite, informative and concise. A summary of [topic] follows in the style of your voice, explained clearly and without technical jargon.”

    And you’ll probably get promising results, depending on the exact model. You may have to massage it a bit before you get consistently good results, but experimentation will show you the most reliable way to get what you want.
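
    If you’re scripting this rather than typing into a chat box, the same idea looks something like the sketch below- assuming OpenAI’s official Python client; the model name is a placeholder and the prompt wording is just my example:

    ```python
    # Sketch of prefacing a request with a persona/system prompt,
    # assuming OpenAI's official Python client (openai >= 1.0).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder - use whatever model you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a helpful and enthusiastic assistant with excellent "
                    "communication skills. You are polite, informative and concise. "
                    "Explain clearly and without technical jargon."
                ),
            },
            {"role": "user", "content": "Summarize how medieval banking worked."},
        ],
    )
    print(response.choices[0].message.content)
    ```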

    Now, I only trust LLMs as a tool for amusing yourself by asking them to talk in the style of your favorite fictional characters about bizarre hypotheticals, but at this point I accept there’s nothing I can do to discourage people from putting their trust in them.


  • Hey, thank you so much for your contribution to this discussion. You presented me with a really challenging thought and I have appreciated grappling with it for a few days. I think you’ve really shifted some bits of my perspective, and I think I understand now.

    I think there’s an ambiguity in my initial post here, and I wanted to check which of the following is the thing you read from it:

    • Generative AI art is inherently limited in these ways, even in the hands of skilled artists or those with technical expertise with it; or,
    • Generative AI art is inherently limited in these ways, because it will ultimately be used by soulless executives who don’t respect or understand art.



  • The university I went to had an unusually large art department for the state it was in, most likely because, due to a ridiculous chain of events and its unique history, it didn’t have any sports teams at all.

    I spent a lot of time there, because I had (and made) a lot of friends among the art students and enjoyed the company of weird, creative people. It was fun and beautiful and had a profound effect on how I look at art, craft and the people who make it.

    I mention this because I totally disagree with you on the subject of photography. It’s incredibly intentional in an entirely distinct but fundamentally related way: since you lack control over so many aspects of it, the things you can choose become all the more significant, personal and meaningful. I remember people comparing generative art and photography and it’s really… aggravating, honestly.

    A photography student I knew did a whole project as part of her final year: a display of nude figures doing a lot of work with background, lighting, dramatic shadow, use of color, angle and deeply considered composition. It’s a lot of work!

    I don’t mean here to imply you’re disparaging photography in any way, or that you don’t know enough about it. I can’t know that, so I’m just sharing my feelings about the subject and art form.

    A lot of generative art has very similar lighting and positioning because it’s drawing on stock photographs, which have a very standardized format. I think there’s a lot of difference between that and everything someone who does photography as an art has to consider. Many of the people using generative tools lack the background skills that would let them use those tools properly. Without that, it’s hard to identify what makes a piece of visual art not work, or what needs to be changed to convey a mood or idea.

    In an ideal world, there would be no concern for loss of employment because no one would have to work to live. In that world, these tools would be a wonderful addition to the panoply of artistic implements modern artists enjoy.



  • I did close my post by saying capitalism is responsible for the problems, so I think we’re on the same page about why it’s unethical to engage with AI art.

    I am interested in engaging in a discourse not about that (I am very firmly against the proliferation of AI because of its many and varied bad social implications), but about building better arguments against it.

    I have seen multiple people across the web arguing that AI art is bad not just because it will put artists out of work, but because the product is, itself, lacking some vital and unnameable human spark or soul. That is a bad argument, since it turns the debate into one about esoteric philosophy rather than the practical point: if we do nothing, art stops being professionally viable, which kills many people and also crushes something beautiful and wonderful about life forever.

    Rich people ruin everything, is what I want the argument to be.

    So I’m really glad you’re making that argument! Thanks, honestly, it’s great to see it!


  • The question about if AI art is art often fixates on some weird details that I either don’t care about or I think are based on fallacious reasoning. Like, I don’t like AI art as a concept and I think it’s going to often be bad art (I’ll get into that later), but some of the arguments I see are centered in this strangely essentialist idea that AI art is worse because of an inherent lack of humanity as a central and undifferentiated concept. That it lacks an essential spark that makes it into art. I’m a materialist, I think it’s totally possible for a completely inhuman machine to make something deeply stirring and beautiful- the current trends are unlikely to reliably do that, but I don’t think there’s something magic about humans that means they have a monopoly on beauty, creativity or art.

    However, I think a lot of AI art is going to end up being bad. This is especially true of corporate art, and less so for individuals (especially those who already have an art background). Part of the problem is that AI art will always lack the intense level of intentionality that human-made art has, simply by the way it’s currently constructed. A probabilistic algorithm that’s correlating words to shapes will always lack the kind of intention in small detail that a human artist making the same piece has, because there’s no reason for the small details other than probabilistic weight or random chance. I can look at a painting someone made and ask them why they picked the colors they did. I can ask why they chose the lighting, the angle, the individual elements. I can ask them why they decided to use certain techniques and not others, I can ask them about movements that they were trying to draw inspiration from or emotions they were trying to communicate.

    The reasons are personal and build on the beauty of art as a tool for communication in a deep, emotional and intimate way. A piece of AI art using the current technology can’t have that, not because of some essential nature, but just because of how it works. The lighting exists as it does because it is the most common way to light things with that prompt. The colors are the most likely colors for the prompt. The facial expressions are the most common ones for that prompt. The prompt is the only thing that really derives from human intention, the only thing you can really ask about, because asking, “Hey, why did you make the shoes in this blue? Is it about the modern movement towards dull, uninteresting colors in interior decoration, because they contrast a lot with the way the rest of the scene is set up,” will only ever give you the fact that the algorithm chose that.

    Sure, you can make the prompts more and more detailed to pack more and more intention in there, but there are small, individual elements of visual art that you can’t dictate by writing even to a human artist. The intentionality lost means a loss of the emotional connection. It means that instead of someone speaking to you, the only thing you can reliably read from AI art is what you are like. It’s only what you think.

    I’m not a visual artist, but I am a writer, and I have similar problems with LLMs as writing tools because of it. When I do proper writing, I put so much effort and focus into individual word choices. The way I phrase things transforms the meaning and impact of sentences; the same information can be conveyed so many ways, with completely different focus and intended mood.

    A LLM prompt can’t convey that level of intentionality, because if it did, you would just be writing it directly.

    I don’t think this makes AI art (or AI writing) inherently immoral, but I do think it means it’s often going to be worse as an effective tool of deep, emotional connection.

    I think AI art/writing is bad because of capitalism, which isn’t an inherent factor. If we lived in fully-automated gay luxury space communism, I would have already spent years training an LLM as a next-generation oracle for the tabletop roleplaying games I like. They’re great for things like that, but alas, giving them money is potentially funding the recession of the arts as a profession.


  • Those are only conflicting statements if you believe that the market will not embrace worse products. It totally will so long as you have a group of people who lack the critical analysis skills to compare the products and arrive at the conclusion that the new one is worse.

    It doesn’t help that the potential drivers of this action are massive conglomerates, so if a sweeping change comes from the top down and is paired with a lot of propaganda (marketing), then people will have no choice but to accept it as the standard.

    I think that a lot of criticism about the actual quality of AI art is mixed, though. I feel like it has flaws, but I’ve seen arguments about flaws I don’t think are actually real problems with the technical quality.