4 years? Try 6 months from now.
If Europe gets into a war, the US will side with Russia. Trump has been a Russian asset since the 80s, and it’s clear that he gave Putin classified documents relating to American spies and informants during his last presidency: they suddenly started dying shortly after he met with Putin. And even more classified documents relating to American espionage against Russia, beyond the ones he requested before that meeting, were later found at Mar-a-Lago.
Nah, the elephant gun bullet would stop because a horse isn’t an elephant.
We’re already living in a dystopia. Companies are already selling your work for use in training sets. Every social media company that I’m aware of has tried it at least once, and most are actively doing it. That’s not why we live in a dystopia, though; it’s just one more piece on the pile.
When I say licensing, I’m not talking about licensing fees like the ones social media companies are already taking in, I’m talking about open source software style licensing - groups of predefined rules that artists can apply to their work and that AI companies must abide by if they want to use that work. These licenses range from “do whatever you want with my code” to “my code can only be used to make not-for-profit software,” and all derivative works have the same license applied to them. Obviously, the closed source alternative doesn’t apply here - the genie’s already out of the bottle, and as you said, once your work is out there, there’s always the risk somebody is going to steal it.
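To make that concrete, here’s a minimal sketch of what machine-readable license tags could look like, in the spirit of open source licenses. Everything here - the tag names, the permitted uses, the function names - is hypothetical, just to illustrate the idea of predefined rules that propagate to derivative works:

```python
# Hypothetical machine-readable license tags an artist could attach to
# their work. The tag names and permitted uses are invented for this sketch.
ALLOWED_USES = {
    "anything-goes":   {"commercial", "non-profit"},   # "do whatever you want"
    "non-profit-only": {"non-profit"},                 # "not-for-profit use only"
    "no-ai-training":  set(),                          # no training use at all
}

def may_train_on(license_tag: str, use: str) -> bool:
    """Would an AI company be allowed to train on this work for this use?"""
    return use in ALLOWED_USES.get(license_tag, set())

def derivative_license(source_tags: list[str]) -> str:
    """Copyleft-style propagation: the most restrictive source license
    carries over to any work derived from those sources."""
    order = ["no-ai-training", "non-profit-only", "anything-goes"]
    return min(source_tags, key=order.index)

print(may_train_on("non-profit-only", "commercial"))             # False
print(derivative_license(["anything-goes", "non-profit-only"]))  # non-profit-only
```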
I’m not against AI, I’m simply against corporations being left unregulated to do whatever the hell they want. That’s one of the reasons to make the distinction between a person taking inspiration from a work and an LLM being trained by analysing that work as part of its data set. Profit motivation is largely antithetical to progress. Companies hate taking risks. Even at the height of corporate research spending, the era of so-called Blue Skies Research, the majority of research funding came from the government. Today, medical research is done at colleges and universities on government dollars, with companies coming in afterward to patent a product out of the research once there’s no longer any risk. This is how AI companies currently work: letting people like you and me do all the work and then swooping in to take it and turn it into multi-billion dollar profits. The work that made the COVID vaccines possible was done decades before, but no company could figure out how to make a profit off of it until COVID happened, so nothing was ever done with it.
As for walled off communities of artists, you should check out Cara, a new social media platform that’s a mix of Artstation and Instagram and 100% anti-AI. I forget the details, but AI art is banned on the site, and I believe they have Nightshade or something built in. When it was first announced, I believe something like 200,000 people created accounts in the first 3 months.
People aren’t anti-AI. They’re anti late-stage capitalism. And with what little power they have, they’d rather poison the well or watch it all burn than be trampled on any further.
It’s not about “analysis” but about for-profit use. Public domain still falls under Fair Use. I think you’re being too optimistic about support for UBI, but I absolutely agree on that point. There are countries that believe UBI will be necessary within a decade’s time because more and more of the population is becoming permanently unemployed as jobs are replaced. I’d say myself that nobody would really care if their livelihoods weren’t at stake (except for dealing with the people who look down on artists and say that writing prompts makes them just as good as, if not better than, artists). As it stands, artists are already forming their own walled off communities to isolate their work from being publicly available and creating software to poison LLMs. So either art becomes largely inaccessible to the public, or some form of horrible copyright action is taken, because those are the only options available to artists.
Ultimately, I’d like a licensing system put in place, like the one for open source software, where people can license their works and companies have to cite the sources of their training data. Academics have to cite their sources for research, and holding for-profit companies to the same standard seems like a step in the right direction. Simply require your data scraper to keep track of where it got its data from in a publicly available list. That way, if they’ve used work they legally shouldn’t have, it can be proven.
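For what it’s worth, the bookkeeping side of that is trivial. Here’s a minimal sketch of a scraper-side provenance log, assuming one row per scraped work; the function name, file name, and fields are all made up for illustration, not any real API:

```python
import csv
import hashlib
from datetime import datetime, timezone

# Hypothetical provenance log for a training-data scraper: every work that
# enters the training set gets one row recording exactly where it came from.
def log_source(log_path: str, url: str, content: bytes, license_tag: str) -> None:
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),  # when it was scraped
            url,                                     # where it was scraped from
            hashlib.sha256(content).hexdigest(),     # fingerprint of the exact file used
            license_tag,                             # license the artist attached, if any
        ])

# e.g. log_source("provenance.csv", "https://example.com/art/123.png",
#                 image_bytes, "non-profit-only")
```

Publish that file and anyone can check whether their work was taken.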
Just installed an update to Windows 10 two days ago to find that it had installed Copilot and put an icon for it on my taskbar. Stuff like this is why 10 will be my last version of Windows.
Have you ever heard the saying that there are only 4 or 5 stories in the world? That’s basically what you’re arguing, and we’re getting into heavy philosophical areas here.
The difference is in the process. Anybody can take a photo, but it takes knowledge and experience to be a photographer. An artist understands concepts in the way that a physicist understands the rules that govern particles. The issue with AI isn’t that it’s derivative in the sense that “everything old is new again” or “nature doesn’t break her own laws,” it’s derivative in the sense that it merely regurgitates a collage of vectorized arrays of its training data. Even somebody who lives in a cave would understand how light falls and could extrapolate that knowledge to paint a sunset if you told them what a sunset is like. Given A and B, you can figure out C. The image generators we have today don’t understand how light works, even with all the images on the internet to examine. They can give you sets of A, B, and AB, but never C. If I draw a line and then tell you to draw a line, your line and my line will be different even though they’re both lines. If you tell an image generator to draw a line, it’ll spit out what is effectively a collage of lines from its training set.
And even this would only matter in terms of prompters claiming to be artists because they wrote the phrase that caused the tool to generate an image, but we live in a world where we must make money to live, and the way the companies that make these tools operate amounts to wage theft.
AI is like a camera. It’s a tool that will spawn entirely new genres of art and be used to improve the work of artists in many other areas. But like any other tool, it can be put together and used ethically or unethically, and that’s where the issues lie.
AI bros say that it’s like when the camera was first invented and all the painters freaked out. But that’s a strawman. Artists are asking, “Is a man not entitled to the sweat of his brow?”
Copyright is a whole mess and a dangerous can of worms, but before I get any further, I just want to quote a funny meme: “I’m not doing homework for you. I’ve known you for 30 seconds and enjoyed none of them.” If you’re going to make a point, give the actual point before citing sources because there’s no guarantee that the person you’re talking to will even understand what you’re trying to say.
Having said that, I agree that anything around copyright and AI is a dangerous road. Copyright is extremely flawed in its design.
I compare image generators to the Gaussian Blur tool for a reason - each is a tool that produces algorithmic output from its inputs. Your prompt and its training set, in this case. And like any other tool, its work on its own is derivative of all the works in its training set, so the burning question comes down to whether that training data was ethically sourced, i.e. used with permission. So the question is whether the companies behind the tool had the right to use the images that they did, and how to prove that. I’m a fan of requiring generators to list the works used in their training data somewhere - basically, a licensing system similar to open source software. This way, people could openly license their work for use (or not) and have a legal way to prove whether their works were used without permission. Some companies are actually moving to commissioning artists to create works specifically for use in their training sets, and I think that’s great.
AI is a tool like any other, and like any other tool, it can be made using unethical means. In an ideal world, it wouldn’t matter because artists wouldn’t have to worry about putting food on the table and would be able to just make art for the sake of following their passions. But we don’t live in an ideal world, and the generators we have today are equivalent to the fast fashion industry.
Basically, I ask, “Is a man not entitled to the sweat of his brow?” And the AI companies of today respond, “No! It belongs to me.”
There’s a whole other discussion to be had about prompters and the attitude that they created the works generated by these tools and how similar they are to corporate middle managers taking credit for the work of the people under them, but that’s a discussion for another time.
Copyright is its own whole can of worms that could have entire essays just about how it and AI cause problems. But the issue at hand really comes down to one simple question:
Is a man not entitled to the sweat of his brow?
“No!” says society. “It’s not worth anything.”
“No!” says the prompter. “It belongs to the people.”
“No!” says the corporation. “It belongs to me.”
But hardly any artist is reproducing a still from The Mandalorian in the middle of a picture the way you’d right-click and hit “save as” on a picture from a Google search. These generators have done exactly that multiple times. A “sufficiently convoluted machine model” would be a sentient machine. At the level required for what you’re talking about, we’re getting into the philosophical territory of what it means to be a sentient being, which is so far removed from these generators as to be irrelevant to the point. And at that point, you’re not creating anything anyway. You’ve hired a machine to create for you.
These models are tools that use an algorithm to collage pre-existing works into a derivative work. They cannot create. If you tell a generator to draw a cat but it has no pictures of cats in its data set, you won’t get anything. If you feed AI images back into these generators, they quickly degrade into garbage, because they don’t have a concept of anything. They don’t understand color theory or two-point perspective or anything else. They’re simply programmed to output their collection of vectorized arrays in an algorithmic format based on certain keywords.
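The degradation part is easy to see even in a toy setting. Here’s a minimal sketch - a stand-in, not how image models are actually trained - that fits a simple distribution to data, samples a new “data set” from the fit, refits, and repeats:

```python
import numpy as np

# Toy stand-in for retraining a generator on its own output: fit a
# Gaussian to the data, sample replacement data from the fit, refit.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=20)  # the "real" training data

for generation in range(1, 101):
    mu, sigma = data.mean(), data.std()      # "train" a simple model on the data
    data = rng.normal(mu, sigma, size=20)    # next generation sees only model output
    if generation % 20 == 0:
        print(f"generation {generation:3d}: spread = {sigma:.4f}")

# The spread collapses toward zero: each generation loses a little of the
# original variation, the same way generators degrade on their own output.
```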
Why not sell it? Pet Rocks were sold.
I didn’t know that pet rocks were made by breaking stolen statues and gluing googly eyes on them.
Ah yes, how dare artists make $5 an hour instead of $0 while you pay a corporation a subscription fee instead. That’ll show those lazy artists that they’ve had it too good for too long.
To quote a funny meme: “I’m not doing homework for you. I have known you for 30 seconds and enjoyed none of them.”
You should make an argument and then back it up with sources, not cite sources and expect them to make your point for you. Not everybody is going to come to the same conclusions as you, nor will they understand your intent.
The issue has never been the tech itself. Image generators are basically just a more complicated Gaussian Blur tool.
The issue is, and always has been, the ethics involved in the creation of the tools. The companies steal the work they use to train these models without paying the artists for their efforts (wage theft). They’ve outright said that they couldn’t afford to make these tools if they had to pay copyright fees for the images that they scrape from the internet. They replace jobs with AI tools that aren’t fit for the task because it’s cheaper to fire people. They train these models on the works of those employees. When you pay for a subscription to these things, you’re paying a corporation to do all the things we hate about late stage capitalism.
Art doesn’t need the intention to create art in order to be art. Everything is “Art.” From the beauty of the Empire State Building to the most mundane office building, all buildings fall under the category of art known as architecture, the same way that McDonald’s technically falls under the category of the culinary arts.
Your argument that image generators are okay because you don’t intend to make art is like arguing that you don’t wear fashion while buying your clothes on Temu. From the most ridiculous runway outfit to that t-shirt you got at Walmart, all clothes are fashion, but that’s not the issue. The issue would be that you bought fast fashion - an industry built entirely on horrible working conditions and poor wages, and an ecological nightmare. And that’s the issue with these generators: they sell you a product made from stolen work (wage theft, basically) that uses more electricity than every renewable energy resource on the planet produces.
The issue isn’t the tech. It’s the companies making the tech and the ethics involved. There’s a whole other discussion to be had about the people who call themselves artists because they generate images, but that’s not relevant here.
Except it isn’t copying a style. It’s taking the actual images, turning them into statistical arrays, and then combining them into an algorithmic output based on your prompt. It’s basically a pixel-by-pixel collage of thousands of pictures. Copying a style implies an understanding of the artistic intent behind that style - the why and how of what the artist does. Image generators can do that exactly as well as the Gaussian Blur tool can.
The difference between the two is that you can understand why an artist made a line and copy that intent, but you’ll never make exactly the same line. You’re not copying and pasting that one line into your own work, while that’s exactly what the generator is doing. It just doesn’t look like it because it’s buried under hundreds of other lines taken from hundreds of other images (sometimes - sometimes it just gives you straight-up Darth Vader in the image).
Image generators don’t produce anything new, though. All they can do is iterate on previously sampled works, which have been broken down into statistical arrays, and then output whatever best matches the probability of your prompt. They’re a fancier Gaussian Blur tool that can collage. To compare to your examples: they’re making songs that are nothing but samples taken from other music without permission, without a single original note in them, and companies are selling the tool for profit while the people using it claim that they wrote the music.
Also, people absolutely do still argue that video games aren’t art (and they’re stupid for it), and it takes tons of artists to make games. The first thing they teach you in 3D modeling is how to pick up a pencil and do life drawing and color theory.
The issue with generative AI isn’t the tech. Like your examples, the tech is just a tool. The issues are the wage theft and copyright violations of using other people’s work without permission and taking credit for their work as your own. You can’t remix a song and then claim it as your own original work because you remixed 5 songs into 1. And neither should a company be allowed to sell the sampler filled with music used without permission and make billions in profit doing so.
Are they, though? Starfield was so lifeless that I felt scammed even getting it for under $50 on release.
I mean, Lemmy definitely runs more techy than most other places, but I don’t know if I’d go so far as to say the average user here knows any better than any Reddit idiot or something lol
And my point wasn’t to peer review your example or anything, just to say that people keep complaining about it because these snake oil salesmen keep getting richer while using the same tired lines about how AI will do everything and anything, and do a handstand while it’s at it.
It’s like all the complaints historians keep finding about that one guy selling shitty copper bars or whatever. Nobody is gonna shut up about it until the bubble finally bursts and these AI companies can’t unload their shitty copper on anyone anymore.
I wrote that incorrectly: it wasn’t a few days after he met Putin, but a marked spike over the months that followed. He requested classified documents related to CIA operations about 3 days before the meeting with Putin. It’s also a series of events over a couple of years, because it wasn’t until at least a year later that the FBI raided Mar-a-Lago and found more classified documents related to intelligence operations. There’s no real way to know for sure unless Trump or Putin admits it, but the correlation was, and still is, considered suspect even by the government’s own intelligence agencies, and it’s partially why the inquiry into Trump was called for in the first place.
https://thehill.com/policy/national-security/575384-cia-admits-to-losing-dozens-of-informants-around-the-world-nyt/
https://www.foxnews.com/media/trump-fbi-raid-could-some-connection-murdered-cia-assets-msnbcs-joy-reid-speculates
https://www.nytimes.com/2022/08/26/us/politics/trump-affidavit-intelligence-spies.html