
When I was a child, one of my uncles shared a story about the iconic sliding doors from the sci-fi series Star Trek. According to him, the engineers at NASA were so impressed by how the doors automatically opened for crew members that they reached out to the show’s producers to uncover their secret. The producers responded, “We have a man on each side of the door, and they pull the doors open when someone approaches.” Regardless of its truthfulness, my uncle’s storytelling always brought laughter to anyone who would listen.
We often regard sci-fi as mere fiction, tales of uncharted galaxies and glimpses into our future. Growing up in a family of sci-fi enthusiasts, I’ve absorbed countless stories. As a realist, I’ve typically enjoyed these narratives without delving deeply into debates about their potential truth or prophetic nature.
However, my perspective has shifted in recent years. As our phones have evolved into powerful mobile computers and artificial intelligence has seeped into virtually every aspect of our lives, I’ve begun to see parallels between our reality and the sci-fi tales of the twentieth century. While I don’t expect a Terminator to arrive from the future, I do wonder if the plot line of machines taking over the world is less far-fetched than it once seemed.
The development of A.I. is progressing at an astonishing rate, far faster than many anticipated. Even Sam Altman, CEO of OpenAI, admitted in a recent podcast (Uncanny Valley, December 5, 2024) that the evolution of ChatGPT has far outpaced his team’s initial expectations. Capabilities once projected to be a decade away are now forecast for release within just a few years.
A.I. is growing exponentially, and humanity is not entirely prepared for it. While there are undeniable benefits—like assisting in medical diagnoses or empowering those with disabilities to communicate more effectively—each technological leap brings with it new challenges.
Just as with any new technology, we must remain vigilant about potential dangers. I do not seek to halt progress, but I find myself asking: Are we truly aware of where we’re headed? How will we recognize the tipping point when A.I. shifts from being a tool we control to something that controls us?
Such questions often get brushed aside by tech developers who argue against interfering with their progress. While it’s crucial not to stifle innovation, we need to consider the implications of A.I. in defense and military contexts. How can we ensure that A.I. does not supersede human judgment? Who will ultimately hold the reins—the machine or mankind? What influence will A.I. exert on global economies, and what ramifications will follow? The need for a thoughtful strategy on the limits of A.I. has never been more pressing.
Ethical considerations loom large, too. As humans, we create rules based on our values and the societal norms of our time. Yet, as we assign more responsibilities to computers and A.I., ethical questions become paramount. Which spheres should remain exclusively human? Without preemptive limits, we risk being caught unprepared.
Human beings possess unique qualities—empathy, intuition, creativity—that A.I. cannot replicate. Ironically, some experts suggest that A.I. may even evoke greater empathy during interactions than real humans do. In the aforementioned podcast, Sam Altman noted how people often feel more understood by an A.I.-generated ‘person’ than by a flesh-and-blood counterpart. This raises profound questions about what constitutes genuine connection and humanity.
So, does it truly matter whether a computer or a real person is communicating with us? Does it matter if an algorithm decides what crops to plant, which nations wage war, or whether religion remains a cornerstone of society? Will it matter if computers dictate electoral outcomes or question the very necessity of elections?
For now, computers follow our instructions, their outputs relying heavily on our inputs. But that dynamic is bound to shift. We must prepare for a time when A.I. operates independently—and we need to decide what we want, who should maintain control, and what safeguards we can implement while we still have the power to do so. Skynet may have begun as a fictional nightmare, but it teeters on the edge of reality, threatening to materialize sooner rather than later.
