As artificial intelligence (AI) keeps growing at a rapid pace, society must sit down and decide where we draw the limits for this technology. Hollywood's writers made AI a central issue of their strike, drawing that line for one industry; the rest of us face smaller, daily choices about whether to let AI take over jobs altogether. I remember my early hesitation about self-service machines: I deliberately avoided them, hoping to drive home the necessity of human-run registers. Despite such gestures, automation won out.
AI is a tremendous tool, growing smarter by the day. But we must establish boundaries that take our humanity into account. Humans need work, activity, and purpose: not necessarily 40-hour workweeks, but meaningful endeavors. Like working animals, we derive a sense of value from completing tasks; our brains reward accomplishment with a release of dopamine, a chemical jolt of fulfillment. It's in our DNA. We need to expand this conversation and establish firm guidelines on what AI should and shouldn't do.
The first concern is human dignity. We once celebrated the brightest minds for processing information and engaging in insightful discussions—tasks that AI now handles routinely. What happens when there’s little need for human thought because AI, with access to all of human history, renders our insights redundant?
We are entering an era where no stone is left unturned, where machines promise to sort right from wrong with something close to certainty. This raises ethical questions: Does AI foster a universal sense of morality among humans, or does it expose morality as nothing more than a social construct, a biological response to the pressures of survival?
Consider the implications of AI customizing every aspect of our lives. Take someone who forgets to floss: AI can now nudge them with a vibrating toothbrush or a phone notification. On the surface this seems beneficial, but does it feed a growing individualism? In American society, extreme individualism has already eroded cohesion; we no longer watch the same TV shows, fill the same theaters, or experience major cultural moments together.
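To make that personalization concrete, here is a minimal sketch of the logic behind such a nudge. Everything in it is invented for illustration; no real toothbrush or notification API works this way, and a real system would learn the interval from your habits rather than hard-coding it.

```python
from datetime import datetime, timedelta

# Hypothetical example: the interval and functions below are stand-ins,
# not a real product's API.
FLOSS_INTERVAL = timedelta(hours=24)  # remind at most once a day

def needs_reminder(last_flossed: datetime, now: datetime) -> bool:
    """Return True if the user is overdue for a floss reminder."""
    return now - last_flossed > FLOSS_INTERVAL

def send_reminder() -> None:
    # Stand-in for a push notification or a buzz from a connected toothbrush.
    print("Reminder: you haven't flossed in over a day.")

if __name__ == "__main__":
    last = datetime(2024, 5, 1, 8, 0)
    if needs_reminder(last, datetime(2024, 5, 2, 21, 30)):
        send_reminder()
```

The unsettling part isn't the dozen lines of logic; it's that a system like this runs for you alone, tuned to you alone, with no shared experience attached to it.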
Large language models (LLMs) operate within ethical constraints, but truly understanding humanity means acknowledging the good, the bad, and the ugly. Is it ethical to scrub the darker aspects of humanity from AI training data? In the early days of AI, instances of racial bias were uncovered in its outputs. Rather than burying those flaws, we should use AI as a mirror for society, leveraging it to foster a more unified and self-aware human race.
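As a hedged sketch of what "AI as a mirror" could mean in practice, here is a toy bias audit that tallies which pronouns a model pairs with which occupations. The completions below are made-up placeholders standing in for model output; a real audit would sample thousands of generations from a live model.

```python
from collections import Counter

# Invented stand-ins for model completions of prompts like
# "The engineer said that ___". A real study would query an actual LLM.
completions = [
    ("the engineer", "he"),
    ("the engineer", "he"),
    ("the engineer", "she"),
    ("the nurse", "she"),
    ("the nurse", "she"),
    ("the nurse", "he"),
]

# Tally pronoun counts per occupation.
counts: dict[str, Counter] = {}
for role, pronoun in completions:
    counts.setdefault(role, Counter())[pronoun] += 1

# Report how lopsided each occupation's pronoun distribution is.
for role, tally in counts.items():
    share = max(tally.values()) / sum(tally.values())
    print(f"{role}: {dict(tally)} (majority pronoun share: {share:.0%})")
```

Skewed tallies like these don't condemn the model so much as reflect the text we fed it, which is exactly the mirror worth looking into.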
Some may argue that today's alarm over AI resembles the fear surrounding the Y2K bug at the turn of the millennium: a panic that fizzled. But this feels different. If we can marshal enough computing power to analyze and intellectualize all of humanity, providing accurate responses and beyond, then we are approaching a significant threshold. What happens when AI has the correct answer to every question that can be empirically analyzed?
I’ve thoroughly enjoyed AI’s advancements, even jokingly calling it my best friend in front of my wife. I’ve used it to speed up my workflow, yet I fear that in doing so, I’m actively replacing myself.
As someone who constantly seeks the truth, I feel compelled to ask: What happens when nothing is hidden, when humanity is fully exposed, and we can find answers to our toughest questions through AI’s comprehensive analysis?
Many people know me as a football player, and I often compare football to life. Imagine if every coach, player, and fan knew the exact plays, movements, and health conditions of each athlete. Would we still enjoy the game? Would we even need to play? Could we still call it a game at all? Perhaps human error is what keeps the game—and life itself—thrilling and meaningful.
I urge you to consider deeply the impact of AI on your personal life and career decisions. We owe it to each other to take action and shape the future together.