See, I’m gonna start this with “Man, this is a reason for not trusting AI as it is now,” because I put some stuff in here and some of it came out sounding like nonsense. So I’m going to repost my blog post, keeping at least the organizational part I got help with.
Alright, so here we go. I don’t know where to start, so I’ll just skip that part. Honestly, as much flak as I give AI, I’m more excited than anything else about these technological developments. I mean, this is the kind of stuff a guy dreams of.
No, I’m not one to say that AI has the answer to everything, but I do think it’s fascinating to consider that we might reach a point where we can measure all the variables. If there were a way to actually isolate all of those variables, imagine how impactful that could be. Maybe that’s the determinist/reductionist in me, and maybe that’s not a good place to be, but that’s where I am, so deal with it. I mean, you could reason that if you could account for everything from the sun and how long you slept the night before, all the way down to the atomic level (and could view all of the governing forces of our universe at once), you could make informed decisions based on how each of those things actually affects your life.
I guess those are rose-colored glasses I’m looking through, because quite honestly, we all know that at some point this is going to work against us, and we aren’t gonna use it to promote well-being. Odds are it’s gonna end up as some sort of marketing super-tool, but for now, I think the view to have is an optimistic one.
I think I wrote a blog post last week about the need for ethical parameters around AI because, let’s be honest, this shit’s getting crazy. On the other hand, I’m so far behind that it’s overwhelming and I’m not sure where to start, so I’m just diving right in and trying things. But I find it impossible to ignore how the Internet is changing. I mean, if you train your eye, you can see how AI has infiltrated most of our daily lives, which is, again, a bit shocking to say the least. It’s such a quick wave, from the way things are narrated and closed-captioned to the ads that we see. It’s incredible. It reminds me of when Instagram started with quick and easy filters and how those migrated from Instagram to Facebook and beyond. You could pretty obviously tell when an Instagram-style filter had worked its way into the rest of our online lives.
If you remember the “blackout wave,” where folks showed support for Black people in America by posting a black square as a sign of solidarity, there was no formal discussion about it; that’s just what people did. You could look at your friends list or your social media feeds and notice a collective movement. I know some people are embarrassed by that now, thinking it might have been virtue signaling, but I think that misses the greater point: you were part of a social movement that was entirely digital and really unlike anything we’d seen before. You can fact-check me on that, because I’m sure there’s something else, but some of these viral moments are shaping the ethical digital landscape, and I think that’s novel. I’m not gonna go much further into that for now, but I thought it was interesting and felt the need to share.
Back to the original reason I’m here, which is to talk about the social landscape (digital or otherwise), whatever the formal term for it is. If you know me, you understand that football is a big part of my life, and I often draw analogies to football because it’s one of the primary languages I speak. In the spirit of the Super Bowl being just a few days away, I wanted to imagine the practical applications of AI in football.
I mean, just imagine the possibilities for game-planning. Analytics could be revolutionized. (I know analytics are already in place, but still…) Could we reach a level of hyper-efficiency in data analysis that changes the game of football entirely? Maybe I’m going too deep into psychoanalysis territory, but at what point do you say, “Hey, we have the ability to measure the heart rate of an opposing head coach, gather baseline data over a few years, and effectively predict their calls in certain situations”? Of course, that runs into private medical information, but if there’s a way to do it, someone’s gonna do it until they’re told otherwise.
Why stop at coaches? Why wouldn’t we analyze the opposing team’s quarterback? In the Deflategate era, the Patriots would’ve had a field day with this. The Patriots come to mind for a very specific reason: Ernie Adams. If anybody knows who Ernie Adams is, I imagine every NFL team has its own clone of him as their “analytics” person. But what do I know? If you want insight into the kind of person I’m picturing, look up Julian Edelman’s podcast Games With Names and find the episode with Ernie Adams. That’s exactly the type of person I imagine would salivate over this new technology.
If you could, in real time, analyze what an opposing quarterback is likely to do by assessing heart-rate data, sweat analysis, or whatever else can be processed, and feed that to an AI model that references prior game history in similar situations, you’d have a pretty good idea of what they’re going to do. You’d effectively be predicting the future.
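Just to make that concrete, here’s a back-of-the-napkin sketch of the shape I imagine that model taking: situational features plus a hypothetical biometric signal feeding a simple classifier that spits out a pass/run probability. To be clear, none of this is how an actual team does it (I wouldn’t know), and every feature name and number below is made up purely for illustration.

```python
# Toy sketch: estimating pass vs. run from game situation plus a
# (completely hypothetical) biometric signal. Every number below is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per play: [down, yards_to_go, seconds_left_in_half, score_diff, qb_heart_rate]
X = np.array([
    [3,  8,  120, -4, 112],   # trailing late, long yardage -> they threw
    [1, 10,  900,  0,  88],   # early, neutral score -> they ran
    [2,  3,  600,  7,  95],   # short yardage, leading -> they ran
    [3, 12,   45, -7, 118],   # desperate -> they threw
    [1, 10, 1500,  3,  90],   # comfortable -> they ran
    [2,  9,  200, -3, 108],   # trailing, medium yardage -> they threw
])
y = np.array([1, 0, 0, 1, 0, 1])  # 1 = pass, 0 = run

model = LogisticRegression().fit(X, y)

# In-game question: 3rd and 7, 90 seconds left in the half, down by 3,
# quarterback's heart rate elevated.
situation = np.array([[3, 7, 90, -3, 115]])
print(f"Estimated pass probability: {model.predict_proba(situation)[0, 1]:.0%}")
```

A real staff would obviously have thousands of charted snaps and far better inputs than my invented heart-rate column, but the shape of the problem is the same: situation in, play-call probability out.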
What’s likely to happen is that this would be used in-house, and folks would be able to, for lack of a better term, police their own guys, fix any weaknesses on their team, or assess a player’s readiness. I know that sounds a bit invasive, but if you know one thing about successful football, it’s that the difference is in the details. The more details you can get, the more you can refine and the better your program becomes. Living in a deterministic, reductionist paradigm is almost the job description of a head coach and their staff: the ability to isolate certain variables and make better, more accurate predictions about the outcome of a play. That’s the stuff of dreams. Maybe I’m reaching beyond what AI can actually do, because I kind of imagine it as a giant processing unit, so take whatever I say with a grain of salt. In my head, though, the implications are massive.
—
On the flip side (or the other side of this coin), during this thought process, my ADHD kicked in, and I went off on a different angle, thinking about agential bias, a term that was introduced to me by Alex O’Connor in one of his YouTube videos. (Damn, there’s a theme here.)
While doing this, I scared the shit out of myself because I totally forgot I had Speechify running (an AI tool that helps with my dyslexia), and it started reading part of an article about agential bias out loud. Over the last month or so, I’d been watching a few of Alex O’Connor’s videos, and the term “agential bias” got stuck in my head. Basically, it means you tend to attribute human-like intentions to people (or yourself), or even to inanimate objects, to explain why they behave the way they do. We give those things agency, i.e., we assume they’re acting with intentionality.
I find this pretty fascinating because large language models are literally referred to as agents, and I have a child who’s fast approaching the age of that childlike over-application of agential bias. A rough quote from the podcast might explain it better than I can: the guest mentioned his child and a dried-up ink pen. When the pen didn’t draw, the child said, “Daddy, the pen lied to me.” It’s a cute and funny example of how children apply agential bias.
For a better explanation, I threw this into Gemini 2.0. Here’s the result explaining agential bias:
“Agential bias is a cognitive bias where people tend to attribute more agency to themselves and other intentional actors than is warranted in a given situation. It can lead to an overestimation of the role of intentional action and an underestimation of the role of chance, situational factors, or unintended consequences.
Here are some examples of how agential bias might manifest:
- In sports: After a team wins, fans and commentators might overemphasize the coach’s strategic decisions or the players’ individual brilliance, while downplaying the role of luck or the opponent’s mistakes.
- In history: When studying historical events, people might focus too much on the decisions of key leaders while neglecting broader social, economic, or environmental factors that may have played a significant role.
- In everyday life: If a friend cancels plans at the last minute, you might immediately assume they are inconsiderate or don’t value your time, without considering that they might have a legitimate emergency.
Agential bias can have several negative consequences:
- Inaccurate judgments: It can lead to inaccurate assessments of situations and people’s behavior, as it oversimplifies complex events and motivations.
- Unfair blame: It can result in unfairly blaming individuals for negative outcomes that were actually caused by a combination of factors beyond their control.
- Missed learning opportunities: By overemphasizing agency, people may overlook important lessons about systemic issues, unintended consequences, or the role of chance.
It’s important to be aware of agential bias and try to mitigate its effects by:
- Considering alternative explanations: Always consider factors beyond intentional action, such as chance, situational factors, or unintended consequences.
- Seeking diverse perspectives: Talk to people with different backgrounds and viewpoints to get a broader understanding of the situation.
- Being mindful of your own biases: Reflect on your own tendencies and how they might be influencing your judgment.
By being aware of agential bias and actively working to counteract it, you can make more accurate judgments, avoid unfair blame, and learn from a wider range of experiences.”
So the next time you hear something like “This toy is mad at me,” realize the child is assigning agency to an object that has none. It’s an interesting parallel to what we do as adults—except we’re not usually as obvious about it. We just do it in subtle ways, all day long.
And maybe that’s why I’ve been so fascinated by both my son’s progress and the rapid advancement of AI. The parallels aren’t perfect, sure, but it’s still crazy to watch two “systems of learning” evolve in real time. My big goal now is to catch up on the reading I need to do about these topics—while also playing around with all the fun AI tools that keep popping up. It feels like a brand-new frontier, at least from where I’m standing. I’ve been waiting for something to come along and give me a guiding hand, so I don’t let my ADHD scatter me into a thousand different directions before I land on anything meaningful.