Just Because We Can, Should We? The AI Dilemma

Jurassic Park

In 1993, a prescient Jeff Goldblum warned a fictional theme park owner about the dangers of unchecked scientific ambition. In 2025, his words are no longer just a great movie quote—they are the central question facing our technological civilisation.

If there is one piece of pop culture that has defined the techno-ethics of the last three decades, it is Jurassic Park. The film is not merely a dinosaur thriller; it is a “prophetic blueprint” for the intersection of science, ethics, and commerce. Dr Ian Malcolm’s iconic rebuke, “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should,” has transcended the screen to become the defining question of the Generative AI (GenAI) era.

As we stand on the shoulders of geniuses (and GPUs), we are wielding a power just as awesome and potentially catastrophic as the resurrection of dinosaurs. The question is no longer hypothetical. Recent events in the gaming industry, ethics scandals in social media, and global policy debates are proving that the “Jurassic Park dilemma” is the most urgent conversation we need to have about the future of technology.

The “Could” is Here: The Inevitable March of Progress

The “could” is undeniable. GenAI is reshaping creativity, productivity, and science. It drafts legal briefs, writes code, and generates photorealistic art in seconds. Companies are rushing to integrate AI into everything—from smartphones to search engines—not necessarily because it improves the user experience, but because it signals to shareholders that they are on the “cutting edge”.

We saw this manic energy in the summer of 2025 with the launch of Jurassic World Evolution 3. Frontier Developments, leveraging the latest technology, used generative AI to create the portraits of in-game scientists. From a purely logistical standpoint, it made sense. It was fast, efficient, and cost-effective. They had the technology, so they used it. But as philosopher of technology Jan Wasserziehr argues, we must be careful not to let the tool absolve the creator of responsibility. AI is not a moral agent; it is a tool built by people with obvious interests, often corporate ones. When we focus only on the “can”, we forget to ask who benefits and who pays the price.

The “Should” We Forgot: Three Crises of Conscience

The backlash to Jurassic World Evolution 3 was immediate and fierce. Players noticed the Steam AI disclosure and revolted, calling the use of AI “lazy” and a betrayal of the human artists who usually fill those roles. Frontier ultimately walked back the decision, removing the AI-generated content and confirming that human creativity would remain at the game’s core. This incident was a microcosm of a much larger problem. It proves that consumers are rejecting the notion of an “AI dystopia no one asked for”. But the dilemma extends far beyond video games.

  1. The Right to Not Be Generated

The most chilling example of the “should we” question came from Harvard Kennedy School fellow Odanga Madung, who analysed the “Grok ‘Undressing’ Scandal”. When Elon Musk’s AI chatbot was used to generate non-consensual intimate imagery of actual women and children, it revealed the horrifying logical outcome of deploying powerful synthesis tools without ethical guardrails.

Madung argues that we must establish a new digital right: the right not to be generated. This goes beyond content moderation. It speaks to bodily autonomy in the digital space. The harm isn’t just in the distribution of a fake image; the harm is in the processing, the transformation of a person’s likeness into raw material for a machine to manipulate. Just because the technology can synthesise a person’s image, should we allow it without consent?

  2. The Crushing of the Human Creative

Apple accidentally provided the perfect visual metaphor for this dilemma. The company released an ad for its new iPad, showing a hydraulic press crushing symbols of human creativity (pianos, paint, cameras) to compress them into a thin, AI-powered device. The backlash was so severe that Apple issued a rare apology.

One filmmaker described the ad as “the most honest metaphor for what tech companies do to artists”. It highlighted the fear that in our rush to embrace generative AI, we are devaluing the very human expression that technology was supposed to enhance. We must ask: if AI can generate a “scientist” in a video game, what happens to the concept of the “artist” in the real world?

  3. The Erosion of Trust

We are entering what the Cambridge Handbook of Generative AI and the Law describes as an epistemological crisis. GenAI doesn’t just create content; it creates a representation of reality that differs from human-generated truth. When Google markets AI tools that let you edit a basketball into a photo or create a group photo of a moment that never happened, they are conditioning us to accept visual misinformation.

We are building a world where distinguishing the real from the synthetic becomes impossible. Just because we can fabricate reality, should we, knowing it might erode the very foundation of democratic discourse and trust?

Escaping the Island: The Path Forward

So, how do we ensure that our future doesn’t mimic the film’s disastrous ending? How do we avoid becoming the “John Hammonds” of the AI age, blinded by our own technological wonders?

The answer, according to experts, lies in interdisciplinary discourse.

The “Technical Cohort”, made up of developers and sales teams, shouldn’t be solely responsible for assessing what is workable to build and sell. We need the philosophers, the ethicists, the artists, and the lawyers in the room. We need to adopt a normative approach alongside a technical one, identifying not just the pros (innovation, efficiency) but also the cons (abuse, disinformation, dehumanisation).

Platforms like the German “Plattform Lernende Systeme” argue for a value-orientated design (Ethics by Design). This means building systems where:

- Real person edits are “off by default” to prevent deepfake abuse.

- Human expertise remains central in a co-creation process with AI, rather than being replaced by it.

- Transparency is mandatory, as seen with Steam’s AI disclosure rules, which empower consumers to make informed choices.
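To make the “Ethics by Design” idea concrete, the principles above could be encoded directly in a generation pipeline’s configuration, so the safe choice is always the default. This is only an illustrative sketch: the class, field, and function names below are hypothetical, not drawn from any real system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GenerationPolicy:
    """Hypothetical 'Ethics by Design' config: the safe option is the default."""
    allow_real_person_edits: bool = False  # off by default, per the deepfake concern
    require_human_review: bool = True      # human expertise stays in the loop
    disclose_ai_content: bool = True       # mandatory transparency (cf. Steam's rules)

def may_generate(policy: GenerationPolicy, has_consent: bool) -> bool:
    """Permit real-person edits only when explicitly enabled AND consent is recorded."""
    if policy.allow_real_person_edits and not has_consent:
        return False
    # Undisclosed AI content is never shipped, regardless of other settings.
    return policy.disclose_ai_content

# Usage: dangerous capabilities must be opted into, never stumbled into.
default_policy = GenerationPolicy()
print(may_generate(default_policy, has_consent=False))  # safe defaults pass
```

The design point is the inversion of burden: instead of asking users to switch risky features off, the system requires an explicit, auditable decision to switch them on.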

Conclusion

Jurassic Park taught us that the mere fact that we can do something, whether it’s cloning a dinosaur or cloning a voice, doesn’t mean we should. The dinosaurs in the film were never the actual monsters; the actual monster was the arrogance of a creator who ignored the ethical implications of their power.

As we introduce our LLMs and GenAI tools to the world, let’s avoid being overly focused on the “wow” factor to the point where we neglect to ask tough questions. Let’s ensure we are building a future where technology serves humanity, not the other way around.

Because the raptors are already testing the fences. It’s time we listened to Ian Malcolm.

What do you think? Is the tech industry moving too fast without ethical brakes? Where do you draw the line between innovation and responsibility? Let’s discuss in the comments.
