Autopergamene
Rage Against the Machines
Article · English · Published 2025-03-28 · AI, Society · 8 min read

At madewithlove, like at other tech-oriented companies, we try to stay in the loop on AI because we see it changing the world around us, and our work in particular. We dissect it, we try it, we comment, we debate. I’ve been very optimistic and hyped in the past, down to training my own models and following new papers as they come out. But so much has happened since then that I’m left wondering: how many of the promises were delivered? How much of my hopes materialized compared to my fears?
Society
In terms of progress, so far it has been good at finding millions of potential solutions, which you then still need to painstakingly verify, only for most of them to make no sense, because you’re basically throwing a bucket of paint at your paint-by-numbers. That goes for finding security vulnerabilities and bug bounties, proteins, new drugs, new materials, and so on.
It’s all flooding everything, leaving us to sift through the noise with the faint hope that somewhere in the shit mountain lies a gold nugget. And sometimes discoveries are made that way, for sure! But more often than not, the time taken to discard the false positives and understand the true ones could have been spent finding them directly instead.
On the society front, sure, it was gonna revolutionize law, medicine and politics, make all our lives easier. But in day-to-day use, hallucinations still plague ALL models, even the ones backed by the richest people on the planet, to the point where most are not usable in any serious capacity without triple-checking. Even for fun stuff like our internal quiz app, it regularly spits out nonsense, to the point where you need an AI to check the AI to check the AI, and even then you’re not sure it’ll work.
So yes, multimodal agents can see the world and help the blind and all! It’ll just randomly tell you it’s seeing zombie clowns in the street when there aren’t any, but don’t worry, you can trust your “eyes” 99% of the time! Even when scanning documents, a task that OCR has managed for years, it will regularly return completely different words and numbers than the ground truth, just because in the model’s head they “say the same thing”. And this is a core mechanism of these models, it’s why they work; it can’t be ironed out or thought away. And these same models that are “so good” at vision and so trustworthy are being pushed to drive your cars and fight your wars.
Work
But at work it helps us! I can’t deny that one: AI did help me out of a bind a few times, helped solve problems, and generally spared me some boring tasks. But I’d argue most of the jobs that have been truly eliminated by AI have been busy work. Work that we don’t actually want humans doing either, so really it’s more of a giant spotlight on every profession that needlessly puffs itself up with phantom layers and bullshit jobs.
It has eliminated middlemen, but don’t think for a second the benefits went to the bottom rung. Instead, all the extra value was pocketed by the people at the top, with no benefit for you besides having to fight to justify your position. Aren’t you glad AI freed up all this time in your life so you can work more and harder? The biggest benefit AI has brought me at work is fixing search, which itself has been ruined by AI, so it just feels like a pyromaniac selling firefighting services.
And sure, coding agents are getting there, but just like with art, it’s hard to ignore their unethical training, and it’s hard to ignore how much these agents miss the intent and don’t think long term. Vibe coding may be getting somewhere, but it’s plagued by the same issue of not consciously making something with a purpose in mind: you end up with applications that *look like* applications but feel disjointed and buggy, because you’re just constantly tacking new things on top without reshaping it in a cohesive way.
But at least it brought us a lot of cool **new tools**! I think? Did it?? At the end of the cycle, the most we have to show is chatbots in every app. But how many of the assistants now shipping with every single app do you use consistently? At this point they feel like the last step of the enshittification cycle: when all ideas run out, tack on a “Foobar AI” and call it a day! How many have actually useful features, and how many just pollute the UI? How many are even worth talking to, and how many feel like talking to a wall or a clone of ChatGPT?
Most AI features pushed on us feel like they’re there to satisfy a shareholder business plan, because we’re in an AI bubble, so everybody has to blow some more air into it just to be sure. Everything has to be AI whether it makes sense or not, whether users want it or not; chatbots will be implemented until morale improves.
Humanity
On the art front it’s great… for some! We now know all the data got sucked up without rhyme or reason, sometimes even pirated from its rightful owners, and now it’s good enough to replace one of the last genuinely human sets of professions with glorified stock photos, sub-literature and muzak. I love seeing soulless ads and packaging everywhere and knowing no thought went into it at all. I love knowing that even if you’re a genuine artist, you’re accused of being AI the second you deviate too much from the norm of what is perceived as “real art”.
So yes, we have great media generation now, enough to keep us occupied three eternities over while the world burns, but it sucks out one of the main purposes of art, which is the human element. The intention, the dedication, the message, which even abstract or procedural art has more of. It is not about effort or time, it is about connecting with others.
Even the little bit of communication facilitation it was supposed to bring is a train wreck: we are flooded by AI slop and articles and emails and noise, and nobody reads it or has time to. Instead another AI reads it, then badly summarizes it. If anything it’s hastening a global communication breakdown, because we can’t just say 5 words to each other; we have to use the energy of a small country to say 50, and have the other person do the same to get back 5 words that don’t even convey the same spirit.
LLMs taught us that a lot of our human language is compressible, that we say so much to say so little. But instead of taking that as a cue to cut through the crap and just value each other’s time, we doubled down into entropy, into more for the sake of more and of looking skilled and busy.
But at least it helps us brainstorm and conceptualize and think; we are smarter because of it, right? Well, at first. But the more we study LLM use, the more we realize that even short, consistent usage makes you dumber, less apt at reasoning yourself. Just like using an LLM as a friend or therapist will isolate you and ultimately stunt your actual social and emotional capacities. If our body feels like we don’t need a part of it, it will start to atrophy. Just like relying on drugs for happiness stunts your real happiness, just like only consuming short-form content stunts your attention span.
So yes, you’re supposedly more efficient and have more time now, but you’re also likely gambling with yourself. Sometimes the effort is necessary; the reflection is what keeps our minds rolling, the *work* is what makes us better. In all my anticapitalist passion I will never deny that: labor is what brought us here as a species, working is not taboo, and when they’re motivated, people want to work, often on great things. So keeping that drive and capacity alive is essential to me. But this is not the kind of work that AI is currently freeing up, quite the opposite.
World
At least it’s ecological now, right? Of course! I’ll just ask you all to refrain from drinking water for the next decade to be sure the datacenters have enough. Even if running them can be light now, training them unfortunately still isn’t, and running the biggest ones definitely still isn’t. It’s a battle to escape their cost, to the point where we might as well bring back air travel in full force. Because people increasingly use LLMs for mundane tasks, and every time someone asks ChatGPT to multiply two numbers or translate a single word, it’s like they’re figuratively flying to the other side of the world to ask a question that could have been answered for nothing with a normal search query or any other existing tool, including their brain.
Hey, maybe we’ll get AGI to save us all and undoom the planet! Except at this point the hype cycle is so blatant that it all just feels like a marketing ploy by people who have no idea where to go after LLMs. More and more we see the limits of these models, and all the fancy promises made by billionaires to usher in a new digital utopia have so far rung hollow in the face of how they use the current models to sow discord instead. Maybe future AI will save us, but current AI is dooming us. But at least we can sleep in peace knowing that if AGI does happen, it’ll be in the absolute worst fucking hands it could be and will totally be used for good!
But surely current AI is doing **something right** for so much money to still be thrown at it? Well yes, it is indeed being deployed everywhere in every use case, and you know which one has become a great niche? Slapping “AI” on horrible ideas so you can say it was the black box’s fault! Like replacing all customer service with mindless chatbots so that nobody gets the help they need, or denying people healthcare based on the vibes of their entire data. What about states using AI to recognize you in protests based on the way you walk? Maybe even arresting you ahead of time because The Magic GPT in the Sky told them your social media history was starting to use a lot of scary words. And if someone pulls the alarm on how unethical and dangerous it is, well, you just say it was a bug in the AI and we’ll train it better! It will already be too late for all the people who suffered, but hey, better luck next round! If there is one. Is that all worth it?
Is it too late?
I know I’m in my doom era, but when I see how hopeful some things like the fediverse make me, I don’t think that’s it. I don’t think I’m burnt out on my hopes for AI; I still dream it could invert the scales of power. And I still find joy in it here and there; even video generation I still manage to find mindblowing at times, until I remember what it costs us all and the breakdown of truth and reality that it’s accelerating. To “repurpose” an old AI meme, it’s increasingly harder not to see the Shoggoth behind the happy smiley face, to not see all that it’s ruining in addition to the little boosts it gives us.
So aaall that ranting and negativity to ask: are there still uses of AI that feel like it is all worth it? That make you amazed? Not small productivity boosts, not “it spared me the pain of doing a boring task”, but things that feel like magic, like in the early days. Things that give hope instead of freeing you up to be distracted by something else. Even the scientific papers that come out all feel like they’re on a train of hype and marketing and overblown promises and unchecked discoveries.
My personal hope lies in open source, in the capacity for small-time individuals to run powerful models locally and offline, in ways that take the **monopoly** away from the current tech elite. And no, not like DeepSeek. Actual bona fide open source, by the people for the people, without shady backing or manipulation and politics. For now, as long as training these gargantuan models requires more resources than 99.9% of Earth can afford, it stays locked in the hands of a class that has nobody’s interest at heart but their own. So I’m cautiously optimistic about seeing open source catch up, or maybe seeing the monopolies lag behind. Only then will we really see an AI boom like we should, because only then will it truly be in the hands of the many. And from there, maybe that’s my last slice of optimism talking, but I do believe most people are good and would use it for good. But I guess we’ll see who gets there first.