Don’t Buy the AI Hype
Tech hype is the norm in our industry. This post is inspired by the most recent circus around AI, which has seeped into the apps and services millions of people use every day.
But before AI became Frankenstein-stitched into Meta, X, and customer service chatbots, the GPT fervor hit individuals first. People started consulting AI not just about their homework assignments, but also their relationships. The lesson we should have learned then is that the bot will spit back at you exactly what you want to hear, so that you keep coming back to it, again and again. In the context of relationships, this can wind up encouraging abusive behaviors or misguiding people’s judgment — sometimes with tragic consequences.
On top of all of this are the devastating public health and environmental impacts of the technology. From IEEE:
“[We] calculated that training a single large generative AI model in the United States, such as Meta’s Llama 3.1, can produce as much PM 2.5 as more than 10,000 round trips by car between Los Angeles and New York City.
“According to our research, in 2023, air pollution attributed to U.S. data centers was responsible for an estimated $6 billion in public health damages. If the current AI growth trend continues, this number is projected to reach $10 billion to $20 billion per year by 2030, rivaling the impact of emissions from California’s 30 million vehicles.”
Our understanding of these impacts is not keeping pace with the accelerating investment in these polluting, draining data centers: an insatiable ouroboros. A microcosm of the self-annihilating profit motive. A case study in manufactured death.
The logic
Progressive organizations seem to be following the lead of corporations in believing that AI can be leveraged in pursuit of their missions. We understand the desire to see AI as a tool that can allow organizations to make quicker progress towards their important social justice goals. Underlying that intention, however, are two assumptions:
- Using AI means higher productivity.
- If we integrate AI into our workflows, our workers will be able to do more work in less time (e.g., synthesizing data faster, or communicating more easily via automated note-taking and email drafting).
The issue here is that while the intention is perfectly reasonable and valid, the assumptions are not necessarily true. And researchers attempting to examine these hyped-up assumptions are now uncovering some interesting results. The group METR published a preliminary study just last month (July 2025) showing that computer programmers were actually 19% slower when using AI tools as part of their work, compared to not using them at all. But, fascinatingly, the programmers themselves believed they had been 20% faster. The slowdown likely came from spending less time on the task itself and more time writing and finessing prompts to get the desired results.
One huge caveat: this research is very preliminary. But it is the only data we have seen so far that assesses workers’ perceived efficiency against reality. We will be looking out for similar research as it comes out, and if you find anything you want to share, please reach out.
The implication
So, what do we have? A technology that neither provides reliable information (it will make up potentially harmful or incorrect claims as long as the answer satisfies you) nor increases efficiency (it simply makes you feel more efficient, again, to satisfy you). But that feeling of satisfaction is poison. We are being sold a short-term salve, a snake oil for challenging social problems. And people are buying it in massive numbers, aided by the corporations that see these consumer “benefits” as a convenient side hustle to supplement where the real money is: military applications.
How can mission-driven organizations become more effective? Is it truly through these tainted technologies?
Does the answer truly lie in an AI tool that summarizes your meetings so that humans don’t have to do the social and emotional labor of reaching consensus and shared understandings collectively?
Have we forgotten that powerful organizations are driven by the strength of the relationships between their members, not by how quickly they can move on to the next Zoom meeting?
Are we really okay with feeding our voices, words, thoughts, and hard-earned donor dollars to an algorithm whose owners are committed to carrying out some of the most horrific crimes of the past century?
The alternative
As software developers and researchers who have followed tech trends over the past two decades, we know this is not the first time this has happened. We avoided being taken in by Web3/Crypto by applying skepticism and holding to our principles. Rolling the dice on new technology trends can wind up hurting the accessibility and longevity of our work.
Some of our best work has been building websites with tried-and-true but “old” technology. Vanilla HTML and CSS, published on a static web server and easily editable by humans, remains one of the most robust ways to make a long-lived, accessible website — and one even easier to maintain than a MySpace page.
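As an illustrative sketch of what we mean (the filename, page title, and styling here are hypothetical, not from any particular client project), an entire long-lived web page can be a single hand-editable file with no build step and no framework:

```html
<!-- index.html: a hypothetical minimal static page.
     No build tools, no JavaScript, no dependencies to rot. -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Our Organization</title>
  <style>
    /* Plain CSS: readable defaults, no preprocessor required */
    body { max-width: 40rem; margin: 2rem auto; font-family: sans-serif; line-height: 1.5; }
  </style>
</head>
<body>
  <h1>Our Organization</h1>
  <p>Hand-editable HTML, served from any static web server.</p>
</body>
</html>
```

Any static server can publish a file like this as-is; for a quick local preview, running `python3 -m http.server` in the file’s directory works with no installation at all.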
Over the upcoming weeks and months, we will be thinking and writing more about AI and our position on it. We also will be reflecting on the hype cycles of the past, and how leaning on trendy tech has caused long-term maintenance problems for our work. Those experiences have taught us to evaluate new tools with caution, and from a politically engaged perspective.
We at Sassafras are dedicated to demystifying the hype and supporting organizations in making sense of this rapidly evolving landscape. Interested in talking to us? Please reach out!
Alex Ahmed joined Sassafras in 2022 and became a worker-owner in 2023. Throughout her decade-long career as a researcher, she focused on understanding the human impacts of technology design and development, and now works as a software developer dedicated to cooperatively-owned and managed software solutions.