

I’m running ollama in Termux on a Samsung Galaxy A35 with 8GB of RAM (plus 8GB of swap, which is useless for AI), together with the Ollama app. Models up to 3GB run reasonably well on the CPU alone.
Serendipity is a side effect of the temperature setting: LLMs randomly jump between related concepts, which surfaces things you might, or might not, have thought of on your own. It isn’t truly spontaneous, but on average it works out to “more than nothing”. Between that and bouncing ideas off them, they have their uses.
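A minimal sketch of what the temperature setting actually does, using toy logits rather than a real model (the tokens and numbers here are made up for illustration): dividing the logits by a temperature above 1 flattens the softmax, so low-probability “related concepts” get a real chance of being sampled, which is where the serendipity comes from.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Temperature-scaled softmax sampling over a dict of token -> logit."""
    # Dividing by temperature: T > 1 flattens the distribution
    # (more serendipitous jumps), T -> 0 approaches greedy argmax.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Draw one token according to the resulting distribution.
    r = rng.random()
    acc = 0.0
    for tok, p in probs.items():
        acc += p
        if r <= acc:
            return tok, probs
    return tok, probs  # fallback for floating-point rounding

# Hypothetical next-token logits after some prompt.
logits = {"cat": 2.0, "dog": 1.5, "quasar": 0.1}
```

At temperature 0.1 the model almost always picks “cat”; at temperature 2.0 the unlikely “quasar” becomes a live option.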
With 12GB RAM, you might be able to load models up to 7GB or so… but without tensor acceleration, they’ll likely be pretty sluggish. 3GB CoT models already take a while to go through their paces on just the CPU.
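Rough back-of-envelope math for why a ~7GB model file is about the ceiling on 12GB of RAM. The reserved headroom figure is a guess, not a measurement: Android, Termux, the KV cache, and the runtime itself all need space on top of the weights.

```python
def model_file_size_gb(params_billions, bits_per_weight):
    """Approximate quantized model file size: parameters x bits per weight."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

def fits_in_ram(file_gb, ram_gb, reserved_gb=4.0):
    # reserved_gb is an assumed allowance for the OS, Termux,
    # the KV cache, and the inference runtime.
    return file_gb + reserved_gb <= ram_gb

# A 7B model at 4-bit quantization is roughly a 3.5 GB file;
# the same model at 8 bits is roughly 7 GB.
q4_7b = model_file_size_gb(7, 4)
q8_7b = model_file_size_gb(7, 8)
```

Under these assumptions the 8-bit 7B model squeezes into 12GB but not into 8GB, which matches the ~3GB practical limit on an 8GB phone.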
Entities care about art only as much as they can benefit from it. Large ones make sure to get the rights for peanuts; small ones are fine with dropping a work and replacing it with someone else’s, still without paying. Pretty much the only way for small artists to get fair compensation is from people who actually want to support them, a case in which, ironically, copyright is irrelevant.
It isn’t US-centric either. Corporations have used the US to pressure everyone into accepting a similar set of rules, with similar effects all over the world.
But I’m not even strictly against copyright itself. What I’m against is how the laws have been pushed, over and over, towards a twisted parody of their initial goals, while the real world has moved in a completely different direction.