Both Ubuntu and Fedora have made it official: support is coming soon for running local generative AI instances.
An epic and still-growing thread in the Fedora forums lays out one of the goals for the next version: the Fedora AI Developer Desktop Objective. It is causing some discontent; at least one Fedora contributor, SUSE’s Fernando Mancera, has resigned.

In this case, the “bag” is a sucking black hole, and venture capitalists are throwing physics-defying amounts of money into it to drag the LLMs out. As soon as they stop, the “cat” goes back in the “bag”.
Local LLM models are an exception, but they are also atrocious by comparison. Most users might get some limited utility from an LLM if they had one, yet the technology is being accommodated and foisted everywhere as if it were the invention of the mouse. It is nowhere near as paradigm-shifting, but it is being hyped, advertised, and marketed more aggressively than any product in history. The roaring hype makes everyone think that if they don’t get on board too, they’ll be left in the dust, so now well-meaning projects are getting bloated up for it as well.
Many of us just want this technology to get the fuck away from us until it is worth using or dies already. Is that so very much to ask?
Using an LLM model that isn’t super advanced is actually quite freeing, in my opinion. The generated output is always mediocre at best, but it’s usually good enough for boilerplate and can be decent if you need to get yourself unstuck. It also isn’t good enough to lull you into letting the LLM do all the work for you, since it makes obvious mistakes.
“It’s just good enough for some things once in a while, but is too bad to rely on in any serious way,” doesn’t sound like a great use of my electricity, but I guess I’ve wasted electricity on less. Still, doing it on purpose seems worse.
I mean, it sounds like a tool they occasionally find useful and don’t use otherwise. I’m not sure how “occasionally use a tool good enough for my purposes” is a waste. Whether it’s the most efficient application of that electricity is a different question, but without knowing their particular scenarios I can’t really compare whether other tools use less electricity for the same purpose.
(Yes, of course, “just do it all in your brain” is even more efficient, but if that’s an argument against utilities, you probably shouldn’t waste electricity on Lemmy either)
If I have a tool that consumes resources whether I use it or not, and I rarely, if ever, use it, that can be a net waste. Nothing in this world exists in a vacuum. You mentioned wasting electricity yourself, then failed to account for it, for example.
Using resources does not equal wasting them. I find that this tool uses an exceptional amount of resources (electrical, cognitive, and otherwise) to achieve goals that can typically already be achieved with tools that are older, better, better established, and dramatically less resource-hungry.
Burning lumber in an abandoned alley would be a more efficient resource use than some of these AI applications.