• 0 Posts
  • 43 Comments
Joined 7 months ago
Cake day: February 13th, 2024



  • Using IP laws to legislate this could also lead to disastrous consequences, like the monopolization of effective AI. If only those with billions in capital can make use of these tools, while free or open source models become illegal to distribute, it could mean a permanent power grab: the capitalists would end up controlling the “means of generation” while we common folk can’t use it.


  • The US is indeed in a very good position, having only two land borders and two oceans between it and everyone else. They just need to get Mexico to mine its southern border while they mine theirs.

    But Europe, Russia, China, and India have plenty of borders and won’t be able to escape refugee streams or conflicts. Large parts of India might become uninhabitable. Food prices are going to fluctuate. Global trade will become unstable or collapse, disabling the complex globalized industrial economy. Nuclear war is very likely. People still don’t know what’s coming.



  • https://join-lemmy.org/donate

    It would be nice to have Patreon-like monthly support and then open accounting, so we know how the money is split between development, instance server hosting costs, and maybe admin wages. Or maybe we could vote on it. I think the fediverse is only the first step; we’re going to need some kind of global non-profit, funded by users, to create federated software and content for users.


  • Flumpkin@slrpnk.net to Lemmy Shitpost@lemmy.world · AI or DEI? · 7 months ago

    > You’re just rephrasing the same approach, over, and over, and over. It’s like you’re not even reading what I’m saying.

    No, I read what you are saying. I just think that you are something that “acts intelligent without actually being intelligent”. Here is why: everything you’ve written is produced by very simple, primitive brain cells, synapses, and synaptic connections. It’s self-evident that these are not really components designed to be intelligent. You’re just “really good at parroting sentences”. And you clearly agree that I’m doing the same 😄

    Clearly LLMs are not intelligent and don’t understand, and it would take many other systems to make them so. But what they do show is that the “creative spark”, however mediocre its quality, can be created through a critical mass of quantity. It’s as if it were just one small part of our mind, the “creative writing center”, without the intelligence. But it’s there, simply because we added more data and processing.

    Quality through quantity: that is what we seem to be, and that is what is so shocking. And there is an obvious disgust or bias against such a notion, a kind of embarrassment of the brain at being mere thinking meat.

    Now you might be absolutely right that my specific suggestion for an approach is bullshit; I don’t know enough about it. But I am pretty sure we’ll get there without understanding exactly how it works.


  • > And how do you determine who falls in this category? Again, by a set of parameters which we’ve chosen.

    Sure, that is my argument: that we choose to make social progress based on our nature and scientific understanding. I never claimed some 100% objective morality exists; I’m arguing that even though it does not, we can make progress. Basically I’m arguing against postmodernism / materialism.

    For example: if we can scientifically / objectively show that some people are born in the wrong body, that it’s not some mental illness, and that this causes suffering we can alleviate, then moral arguments against alleviating it become invalid. Or, like the gif says, “can it”.

    I’m not arguing that some objective ground truth exists, but that the majority of healthy human beings, IF they are not tainted, hold certain values which, when reinforced, gravitate towards some sort of social progress.

    > You needn’t argue for the elimination of meaning, because meaning isn’t a substance present in reality - it’s a value we ascribe to things and thoughts.

    Does mathematics exist? Is money real? Is love real?

    If nobody is left to think about them, they do not exist. If nobody is left to think about an argument, it becomes meaningless or “nonsense”.


  • I’m not arguing for “one single 100% objective morality”. I’m arguing for social progress - maybe towards one of an infinite number of meaningful, functioning moralities that are objectively better than what we have now. Like optimizing or approximating a function that we know has no precise solution.

    And “objective” can’t mean some kind of ground truth handed down by e.g. a divine creator. But you can have objective statistical measurements, for example of happiness or suffering, or an objective determination of whether something is likely to lead to extinction.



  • Flumpkin@slrpnk.net to Lemmy Shitpost@lemmy.world · AI or DEI? · 7 months ago

    Yeah, I imagine generative AI as just one small part of a human mind, so we’d need to create a whole lot more for AGI. But it’s shocking (at least to me) that it works at all given just more data and compute power; that you can make qualitative leaps merely by increasing the quantity. Maybe we’ll see more progress now.



  • > There’s no such thing as 100% objective morality.

    Maybe not; maybe there is an infinity of variations of objective morality. There will always be broken people with pathologies like sociopathy or narcissism who wouldn’t agree. But the vast majority, like 95% of people, would agree, for example, on universal human rights, at least if they had the rights and freedoms to express themselves and the education to understand and not be brainwashed. Basically, given the option of a variety of moralities and the right circumstances (safety / not being in danger, a modicum of prosperity, education), you would get an overwhelming consensus on a large basis of human rights or “truths”. The argument would be that even if a complex machine is forever running badly, there can still be an inherent objective ideal of how it should run, even if perfection isn’t desirable or the machine and the ideal have to be constantly improved.

    There is another way to argue for a moral starting point: a civilization that is on the way to annihilating itself is “doing something wrong”, because any ideology or morality that argues for annihilation (even when annihilation is not the intention, merely the likely outcome) is at the very least nonsensical, since it destroys meaning itself. You cannot argue for the elimination of meaning without using meaning itself, and after the fact your arguments would have proven meaningless. So any ideology or philosophy that “accidentally” leads to extermination is nonsensical, at least to a degree. There would still be an infinity of possible configurations for a civilization that “works” in that sense, but at least you can exclude another infinity of nonsense.

    “Who watches the watchers” is of course the big practical problem, because every system so far has been corrupted over time - objectively perverted from its original setup and intended outcome. But that does not mean it cannot be solved, or at least improved. A basic problem is that those who desire power and money above all else, and focus solely on maximizing those two, are statistically the most likely to achieve them. That is adapted or natural sociopathy. We have few words or thoughts for this and completely ignore it in our systems. But you could design government systems that rely on pure random sampling of the population (a “randocracy”), as in the sketch below. This could eliminate much of the political selection filtering, bias, and manipulation. Yet there seems to be very little discussion on how to improve our democracies.
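    The sampling step of such a “randocracy” would be trivial to implement; the hard part is the institutional design around it. A minimal sketch of a stratified random draw, so the sample mirrors the population (the register format and quotas here are invented for illustration):

    ```python
    import random

    def draw_assembly(register, quotas, seed=None):
        """Draw a citizens' assembly by stratified random sampling.

        register: list of (person_id, stratum) pairs, where a stratum
                  could e.g. be an (age_band, region) tuple
        quotas:   dict mapping stratum -> seats, typically proportional
                  to that stratum's census share
        """
        rng = random.Random(seed)  # seeded for auditable, reproducible draws
        assembly = []
        for stratum, seats in quotas.items():
            pool = [pid for pid, s in register if s == stratum]
            assembly.extend(rng.sample(pool, min(seats, len(pool))))
        return assembly

    # Toy register: 1000 people in two age bands, four seats to fill.
    register = [(i, "18-39" if i % 2 else "40+") for i in range(1000)]
    print(draw_assembly(register, {"18-39": 2, "40+": 2}, seed=42))
    ```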

    Another, rather hypothetical, argument could come from scientific observation of other intelligent (alien) civilizations. Just as certain physical phenomena like stars, planets, and organic life emerge naturally from physical laws, philosophical and moral laws could emerge naturally from intelligent life (e.g. curiosity, education, rules that allow stability and advancement). Unfortunately it would take a million years for any scientific study of that to conclude.

    Nick Bostrom talks a bit about the idea of a singleton here, but of course there be dragons too.

    It is quite possible that it’s too late now, or practically impossible to advance our social progress because of the current overwhelming forces at work in our civilization.


  • The Last Ringbearer (annas-archive) by the paleontologist Kirill Eskov.

    Eskov bases his novel on the premise that the Tolkien account is a “history written by the victors”.[2][3] Mordor is home to an “amazing city of alchemists and poets, mechanics and astronomers, philosophers and physicians, the heart of the only civilization in Middle-earth to bet on rational knowledge and bravely pitch its barely adolescent technology against ancient magic”, posing a threat to the war-mongering faction represented by Gandalf (whose attitude is described by Saruman as “crafting the Final Solution to the Mordorian problem”) and the Elves.[2]

    Macy Halford, in The New Yorker, writes that The Last Ringbearer retells The Lord of the Rings “from the perspective of the bad guys, written by a Russian paleontologist in the late nineties and wildly popular in Russia”.[4] The book was written in the context of other Russian reinterpretations of Tolkien’s works, such as Natalia Vasilyeva and Natalia Nekrasova’s The Black Book of Arda [ru], which treats Melkor as good and the Valar and Eru Ilúvatar as tyrannical rulers.




  • Flumpkin@slrpnk.net to Lemmy Shitpost@lemmy.world · AI or DEI? · edited · 7 months ago

    Yeah. But maybe this is how you teach an AI a broader understanding of the real world, or at least a slightly less narrow view. Human brains also have to learn and reconcile all these conflicting data points and then create a kind of understanding from them. For a machine learning model it would only ever amount to an intuitive instinct.

    Like, you would have a bunch of these “tables” that show relationships between various tokens and embody concepts. Maybe you need to combine different kinds of models, organized and trained differently, to resolve such things. I only have a very surface-level understanding of how machine learning works, so I know this is very speculative. Maybe you’re right and it can only ever reflect the training data. Then maybe you’d need to edit the training data, but you could also use other AIs to “reinterpret” the training data based on other models.

    Take all the data on Reddit: could you train a model to detect sarcasm or lies, or to differentiate between liberal, leftist, and fascist types of argument? Not just recognizing the tokens or talking points, but the semantics of an argument, like detecting a non sequitur. You’d probably need “general knowledge” understanding for that. But any kind of AI like that would be incredibly interesting for social media, so your client could tag certain posts, or root out bot / shill networks that work for special interests (fossil fuel, USA, China, Russia).
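    As far as I know nothing like a reliable sarcasm or fallacy detector exists today, but a crude first approximation is possible with off-the-shelf zero-shot classification. A minimal sketch, assuming the Hugging Face transformers library; the label set and example post are made up, and real rhetoric detection would need curated training data and evaluation:

    ```python
    from transformers import pipeline  # pip install transformers

    # Zero-shot classification scores arbitrary candidate labels against a
    # text via natural-language inference, with no task-specific fine-tuning.
    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    post = ("Sure, banning all cars by Friday will definitely "
            "fix the climate. Great plan.")

    # Hypothetical label set for tagging rhetoric.
    labels = ["sarcasm", "non sequitur", "appeal to emotion",
              "good-faith argument"]

    result = classifier(post, candidate_labels=labels)
    for label, score in zip(result["labels"], result["scores"]):
        print(f"{label}: {score:.2f}")
    ```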

    So all the stuff “conflicting with each other and making a giant spider web of issues to juggle” might be what you could train an AI to pull apart into “appeal to emotion”, “materialistic view”, “belief in inequality”, or “preemptive bias counteractor”. Maybe it could actually extract those threads and help us communicate better.

    Eh, I really need to learn more about AI to understand the limits.


  • Flumpkin@slrpnk.net to Lemmy Shitpost@lemmy.world · AI or DEI? · edited · 7 months ago

    Would it be possible to create a kind of “formula” to express the abstract relationship between ethnic makeup, location, year, and field? Like converting a table of population, country, and ethnicity mix per year, and then training the model on that. It’s clear the model doesn’t understand the meaning or abstract concept, but it can associate and extrapolate. So it could “interpret” what the image description says while training, and then make better use of the prompt. So if you prompted “english queen 1700” it would output a white queen; if you input the year 2087, it would be ever so slightly less pasty.
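    As a toy illustration of that idea (all numbers invented; a real system would use census-style tables and far finer categories), the table could be reduced to a weighted lookup that augments prompts during training or generation:

    ```python
    import random

    # Hypothetical demographics: (country, year) -> ethnicity shares.
    DEMOGRAPHICS = {
        ("england", 1700): {"white": 1.00},
        ("england", 2087): {"white": 0.55, "asian": 0.20,
                            "black": 0.15, "mixed": 0.10},
    }

    def sample_descriptor(country, year, rng=random):
        """Pick an ethnicity descriptor, weighted by the nearest tabulated
        year, so "english queen 1700" stays period-accurate while
        far-future years drift with the projected mix."""
        nearest = min((y for c, y in DEMOGRAPHICS if c == country),
                      key=lambda y: abs(y - year))
        groups, weights = zip(*DEMOGRAPHICS[(country, nearest)].items())
        return rng.choices(groups, weights=weights)[0]

    print(sample_descriptor("england", 1700))  # always "white"
    print(sample_descriptor("england", 2087))  # varies with the mix
    ```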


  • There is a very interesting film, “Professor Marston and the Wonder Women”, about how they created her in 1940 as a feminist superhero.

    > William Moulton Marston, a psychologist already famous for inventing the polygraph, struck upon an idea for a new kind of superhero, one who would triumph not with fists or firepower, but with love. “Fine,” said Elizabeth. “But make her a woman.”

    > Not even girls want to be girls so long as our feminine archetype lacks force, strength, and power. Not wanting to be girls, they don’t want to be tender, submissive, peace-loving as good women are. Women’s strong qualities have become despised because of their weakness. The obvious remedy is to create a feminine character with all the strength of Superman plus all the allure of a good and beautiful woman.