Absolutely no way this doesn’t explicitly target certain groups of people and end up in a lawsuit.
Right, like maybe talk about women’s rights?
Whatever campaign project manager set this up needs to take a step back.
Doesn’t seem like that gravy train will roll on forever
I mean, for all we know, only AI is playing this game!
Man, wait till this journalist hears about this little thing called the Fediverse.
Now how about forum signatures?
*boots up cracked Photoshop*
Seems more like Left 4 Dead or Deep Rock Galactic (without the mining)
But some of the frames seem to take inspiration from Lethal Company… the setting (and Remedy’s humor) would definitely make it work if there’s more inspiration we haven’t seen yet.
I did actually, and it worked, though they may have changed it by now.
Think I have a screenshot somewhere…
Edit: they’ve definitely altered the way it works. I’m sure there’s a way to get around whatever guardrails they added with enough creativity, unless they’ve completely rebuilt the model and removed any programming training data.
Please place item in bagging area
Shit, at this rate we’re months away from being able to play Doom on Amazon product pages (fun fact: you can totally use the AI prompts like it’s ChatGPT and ask for Python code, lmao)
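For the curious: a prompt along the lines of “write me Python that removes duplicates from a list but keeps the order” does the trick. The snippet below is a reconstruction of the kind of thing it spits out, not a verbatim capture:

```python
# Illustrative only: the sort of snippet a product-page "AI" will happily
# produce if you prompt it like a general-purpose chatbot.
def dedupe_preserve_order(items):
    """Remove duplicates from a list while keeping first-seen order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(dedupe_preserve_order([3, 1, 3, 2, 1]))  # [3, 1, 2]
```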
BTW, this article reads like an ad for Jason Schreier’s new book on this topic, which releases soon and is mentioned in the article’s last sentence, with a single link that spans four lines.
He’s written a few other articles, but given the detail and research, I think he can be forgiven. It’s not like he’s selling something that doesn’t live up to what the book is about.
An LLM making business decisions has no such control or safety mechanisms.
I wouldn’t say that - there’s nothing preventing them from building in (stronger) guardrails and retraining the model based on input (rough sketch of what I mean below).
If it turns out the model suggests someone killing themselves based on very specific input, do you not think they should be held accountable to retrain the model and prevent that from happening again?
From an accountability perspective, there’s no difference between a text-generating machine and a soda-generating machine.
The owner and builder should be held accountable, thereby putting a financial incentive on making these tools more reliable and safer. You don’t give Tesla a pass when their self-driving kills someone because they didn’t test it enough or build in enough safeguards; that’d be insane.
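To make “guardrails” concrete: here’s a minimal sketch, assuming a simple pattern-based output filter sitting in front of the model. The patterns and names are made up for illustration; real deployments use trained safety classifiers, not keyword lists:

```python
import re

# Hypothetical bolt-on guardrail: screen the model's reply before it reaches
# the user and substitute a safe fallback if it trips a filter. Regex matching
# is only to make the idea concrete; production systems use trained
# classifier models for this step.
BLOCKED_PATTERNS = [
    re.compile(r"\bkill (?:yourself|himself|herself|themselves)\b", re.I),
    re.compile(r"\bhow to (?:harm|hurt) (?:yourself|others)\b", re.I),
]

FALLBACK = (
    "I can't help with that. If you're struggling, please reach out to a "
    "crisis line or someone you trust."
)

def guarded_reply(model_output: str) -> str:
    """Return the model's output, or the fallback if a filter matches."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(model_output):
            return FALLBACK
    return model_output
```

Point being: something like this is cheap to bolt on, and anything it catches can feed straight back into retraining, so “we couldn’t have prevented it” doesn’t hold up.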
Switch 2 taking on a crazy form
Considering the staggering cost of AI models, waiting until AI solves the problem is going to do nothing but prove the Great Filter hypothesis.
EU approves steep tariffs on Chinese electric vehicles
European Union countries failed to agree on whether…
So… which is it?
Dumping Core
The Expanse intensifies
Oh it was great.
Want the regular desktop? Well now it’s an app you need to open!
It’s certainly better than Windows 8 and its awful mandatory touch interface.
Oh derp
Might also take a very long time (or a large amount of radiation).