POV: The first thing you do upon waking up is reach for your phone.

You had a dream about something nice, maybe a vacation. You scroll. There’s an ad for that. You scroll. You go book that dream vacation. You scroll. You order from Amazon and have your very essential items delivered to your door that same day. Maybe you do some more online shopping. Perhaps you’ll finally go to write that essay you’ve been putting off for weeks (let’s be real, you’re just going to have ChatGPT write it for you anyway). You’ll bed-rot while scrolling through TikTok, since the algorithm knows you better than anyone ever will. At some point, you’ll go outside (some health app or step counter running in the background) and go study in the library (while listening to Spotify, of course. You’re not a barbarian, you like to listen to music while pursuing your education). Go home. Scroll. Sleep. Repeat.


Does that sum up your daily routine? It also sums up the plot of Frederik Pohl’s The Tunnel Under the World.

A small-town man wakes up on June 15th and goes about his routine: familiar ads, friendly neighbours, nothing out of the ordinary. That is, until we find out, as he does, that everyone in his town died in a factory explosion. What’s left of them are brains, preserved and wired into a simulation run by the very corporation responsible for the destroyed factory. The simulation exists to test the corporation’s marketing campaigns, gradually replacing the inhabitants’ thoughts with advertisements.

In the world Pohl imagined, the inhabitants didn’t realise they were living a false reality. Today, we do, and still, we participate. We willfully do this to ourselves and then wonder why everyone has gotten so bad at socialising, why media literacy rates are declining, or why children can’t read. Centuries ago, a peasant would have called our AI chatbots oracles. AI is our modern god, and our attention is the sacrifice. We offer up our questions, our data, and our thoughts, and in return, AI gives us instant answers. It’s convenient and it’s easy, and one might even argue it boosts efficiency. Isn’t it easier to have ChatGPT automatically summarise a text for you than to waste your precious time that could go to other, more intellectual tasks? That might be the case if human beings were capable of moderation. As it is, we have begun to rely on generative AI for everything, be it academics, decision-making, or even emotional support. We throw silly prompts in there and laugh about the answers, while ignoring the detriment to society (Did you know that generative AI harms the environment, increasing electricity demand and water consumption, according to the Massachusetts Institute of Technology? You probably did, but why should you care? Everyone’s using it).

The erosion of critical thinking is the death of a society that no longer questions what it consumes, believes, or repeats. These models do not “know” anything in the way a human does, nor can they replicate the authentic human thought process, which is shaped by one’s culture, interests, personal philosophies, and experiences. They do not evaluate sources, nor do they weigh evidence or develop arguments. What they produce is based on statistical prediction: what is likely to come next, based on patterns in their training data. That training data often includes biased, incomplete, or unverified information scraped from the internet, which is then repackaged. This has profound implications for education. When users outsource tasks like summarising, analysing, or reflecting to AI systems, they risk not only misinformation but also the atrophy of the intellectual muscles these tasks are meant to exercise. In academic settings, for example, educators have expressed growing concern that students are using chatbots not just to assist with writing, but to replace the process entirely, bypassing the very point of critical assignments and undermining the institution of education as a whole (NYT, 2025).

MAN VS MACHINE

As Frank Herbert wrote, “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” (Dune, 1965)

It can be argued that we are currently experiencing a phenomenon dubbed ‘digital’ or ‘platform capitalism’. Following the global financial and economic crisis of 2007–2008, two interconnected developments created the conditions for its rise: a shift in the political landscape and a major technological transformation (Verdegem, 2022). The latter was driven by the proliferation of digital technologies since the early 2000s, above all the emergence of social media and AI. Our data has become commodified. Our behaviours, habits, and social relationships are quantified into digital data, which large firms then use to fuel their platforms. In the digital economy, our personal data is the currency corporations exploit.

It goes beyond simple AI chatbots and the convenience of having every piece of information that has ever existed available to you, like a post-modern dystopian version of the Library of Alexandria. This is more than just the replacement of administrative tasks with automation. It is the commodification of our very intellectual and emotional integrity. Consider the proliferation of AI-generated art, the use of algorithms to steer political discourse by favouring one type of content over another, or the increasing shift of academic and creative fields to automated processes. Platforms like Instagram, TikTok, and YouTube, powered by machine learning, serve not just to entertain, but to condition and shape consumer behaviour. We do not merely consume content passively; we become the product.

Creativity is an innately human trait. For as long as mankind has existed, art has existed. Think about the earliest cave paintings, long before the advent of structured society. The innate spirit and soul of human work cannot be replicated by any machine. Alas, AI-generated content has steadily been blurring the lines between human creativity and the outputs of a machine, raising profound questions about authenticity and ownership. When an AI generates a painting, a song, or even a research paper, who owns the idea behind it? And what does it say about the value of human work when it can be replicated, tweaked, and mass-produced in an instant? Allegations have already been made against certain authors suspected of using AI in their works. In 2022, Jason Allen’s AI-generated work, Théâtre D’opéra Spatial, won first place at the Colorado State Fair. Should that be allowed, or rather, is that morally fair? Is that the kind of content you wish to consume? Does it not feel just a bit uncanny?

LONG LIVE THE ALGORITHM

Dilution of artistic labour aside, one has to wonder how far machine learning could go if left unchecked. The AI race is on (and we, the average users, might just be on the losing side). Fitness apps, ride-hailing apps, food delivery apps, language-learning platforms, sleep monitors, step counters, smart home controllers… all of these developers are duking it out in the app stores for a chance to have YOUR data! Your data is valuable because it allows developers to train their AI models, which in turn allows companies to make better-targeted advertising.

The reality is this: our data fuels the engines of a machine-driven economy, one where corporate interests, not individual autonomy, take precedence. Our data becomes the currency we use to barter our integrity to corporations. Thus, the more we embrace the convenience and ease of AI, the more we surrender our power to those who control the machines.

By Kristina Keymer
