
Is this AI?

It becomes self-aware at 2:14 a.m. Eastern time, August 29th.

This essay has been brewing in my head for a while. I've been wrestling with the technology and its impacts, both real and imaginary, for just as long. I had things to say, but didn't know where to start. I wanted to avoid anything that smelled of LinkedIn bait, was too flippant, or turned into some sort of rant about society and the fall of humanity...

I have found that it is very hard not to reference T2 writing about AI...

After wrestling with where to start on a six-hour car ride the other day, it hit me: my framing for AI is all wrapped up in my very specific perspective. Everything spawns from that, so let me start there...

I'm pretty lucky. I'm a child of the 1970s, so I got to grow up in an analog world and be a part of the transition to a digital one. I also entered the workforce at the right time to help build this dystopia. I got to build a collaboration and file sharing tool from scratch, use early web technologies, stand up streaming servers, do video editing without an Avid machine, and build a whole lot more before those things just were... well, things.

Basically, I got to build cool stuff with technology for a long career, which has culminated in me now leading teams of brilliant people who do that kind of work. It has kept me continuously exposed to and engaged with some of the newest technologies at a very practical, hands-on level for a very long time.

I also happen to have been a writer, designer, and artist for that entire stretch of time. I was trained as an artist and a designer when that was still done with physical media, then learned how to do those things with computers. I got to learn how to develop film and set type, as well as do digital layout for professional publications, which not many folks have the privilege of saying.

My point is, these two spaces I straddle give me skin in the game. I live in two worlds profoundly impacted by technology. I am both witness and participant. Not a day has gone by over the last several years where I haven't had to think about the impact of artificial intelligence on me, my teams, my career, my passions, my friends, and my family.

This essay is me trying to make sense of it all. These are my well-measured thoughts in this moment. As each day goes by, something new is happening. The change is so rapid it is almost impossible to keep up with all of it, despite my best efforts.

This is a long one, so good luck...


GenCon 2025 is almost here! If you plan on attending I will once again be at the Burning Wheel booth, #2708. It is going to be a packed booth with lots of great games for sale plus some freebies. I will also have some copies of my books with me! Come by and say hello!

First, What Are We Even Talking About?

Let's get this first one out of the way. AI, as the term gets thrown around today, is basically a marketing label popularized by Sam Altman. Maybe Microsoft and Google are to blame too. I don't know. I don't care. It kind of doesn't matter.

The problem is that the tech industry and its enabling partners have coupled a whole host of different types of technology together and declared them AI. The term is almost meaningless at this point.

Now your toaster has AI in it. Every Kickstarter you see has AI in it. What used to just be shit like a software-guided laser cutter is now an AI-Infused CNC Magic Machine. AI is a magic machine inventing new physics! It is ridiculous, and in a sane world it would be considered fraud.

To avoid confusion, I am going to do my very best to be as specific as possible about the tools I am referring to. When I use the term AI, I will be referring to the common marketing usage. It is a confusing space for even those of us who understand it and talk about it every fucking day...

Breaking the Social Contract

Before I go headlong into the common discourse about AI, I need to address a subject that discourse mostly ignores. We are in a moment in history where American society in particular is seeing the social contract unravel. It is impacting every single person who doesn't have fuck you money.

For those who are unaware, the social contract historically refers to the relationship between the state and those it governs. That one is definitely falling apart in the United States. However, the social contract I am referring to here is the one between corporations and society, which is inclusive of the state. That contract, particularly with tech companies, has been unraveling rapidly in this moment, in large part because of AI.

I often find myself explaining to friends and family not in my field what {FAANG, OpenAI, Microsoft, Whomever's} AI really is. That is quickly followed by explaining the implications of it being in the device they own or are thinking about buying.

As you can imagine, they get angry. Not at me, thankfully. Though sometimes it is hard to tell...

Typically though, they are angry at the companies. Companies they used to trust, like Apple and Google. Companies they rely upon for services they use every day. Now they look at them with a level of skepticism once reserved for more obvious charlatans.

This is something that isn't talked about much in the AI discourse right now, but this marketing rush has damaged, if not outright destroyed, the social contract with consumers. It was already hanging by a thread, and the companies, by overplaying their hands and using the current regime in the US to push AI into everything, may have severed it.

People already turn off AI on their phones and browsers because they find it intrusive and a violation of privacy. They are also now doing it for political reasons, for personal safety, and because the stuff doesn't work as advertised. Public trust is clearly damaged, possibly irrevocably.

This makes it difficult for even the most seasoned technologist to make informed decisions about the technology, let alone someone with only a passing exposure to it. Any good that may come of this technology is always going to be tarnished by the harm done by charlatans looking to grift you out of your money.

And it turns out, they can do some real damage to all of us from their fuck you money position...

Who Watches the Elon?

I bet the leaders at all his AI competitors are. Not because of the usual reasons, but because of his impact on the social contract. Not a lot of the press about Mechahitler covered this part. The thing I found interesting wasn't his well-known antisemitism, but the blatantly obvious way he put his thumb on the scale to make his AI do what he wanted it to do.

The story for some time now has been that these chatbots are honest because they are built on the data of the internet. The data of the world. Of all of humanity. So they are objective, somehow, and accurate.

This is an extension of the mythology that has been built around the technology industry. A mythology written by an egotistical industry and reinforced by a lazy media operating as its public relations reps. We use terms like engineer, computer science, architecture, logic, reasoning, and objectivity, and they mask the absolute shit show underneath.

There is very little science in the technology industry these days. If bridges and buildings were architected and engineered with the same rigor we use to build our crap, the world would look like Fallout 76. The tech industry isn't the bastion of pure logic, facts, and objectivity it purports to be. It is a collection of consumer products that reflect the wants and needs of the people who make them.

At work I often talk about the dangers of shipping our org chart. It is the last thing you want to do, because it doesn't meet your customers' needs. When it comes to the tech industry, they don't ship their org chart so much as they ship their founder's or CEO's philosophy.

And it is almost never a well-thought-out philosophy; it is one baked in a vacuum...

Generative AI chatbots are an ur-example of this. The GPT class of chatbots that most people are familiar with are predictive, meaning they make a best guess at what you want, based on their training data and your prompt. They aren't sold that way, but it is what they are.
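If "predictive" sounds abstract, here is a toy sketch of the idea in Python. To be clear, this is nobody's actual product code: the "training data" is a made-up twelve-word corpus, and real models learn statistical weights over tokens rather than building a lookup table, but the core move is the same. Given the text so far, guess what comes next.

```python
import random

# A stand-in for billions of learned weights: a table of which word
# follows which, counted from a tiny, made-up "training corpus."
corpus = "the cat sat on the mat the dog sat on the rug".split()
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def next_word(prompt):
    """Best guess at the next word, weighted by the training data."""
    last = prompt.split()[-1]
    options = follows.get(last)
    if not options:
        return "<no idea>"
    # Duplicates in the list make frequent continuations more likely.
    return random.choice(options)

print(next_word("the dog sat on the"))  # "cat", "mat", "dog", or "rug"
```

Swap in a different corpus, or lean on the scale when you build the table, and the "answers" change right along with it.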

I hope you can see where the problem is...

That data set, as Elon demonstrated recently, can be used by companies to skew how their tools work in one direction or another. Not perfectly, not fully accurately, but with enough weight they can make Mechahitler, Mechawhatever they want to force down your throat, or, possibly more frighteningly, Mechasomething they don't understand that fucks with vulnerable people.

This is another blow to any trust people might have in the companies that are asking society to trust them with everything. And I mean everything. They need it all to make their machine work, and they want to control it all for your benefit.

But again, if they can't hold up their end of the bargain as an honest broker about their own tools, how can we trust them at all? This is the great question of the moment and it leads to some very scary answers that I think most people fear.

The Big Scary Fear

Of course, corporations have been steadily eroding our trust since the 1970s. Everyone I know from my generation has a relative or family friend who retired with a pension after working for a company their entire life. That doesn't happen anymore, as corporations by and large have, since the 1980s, shifted fully to treating employees as assets and not people.

I think this is why the primary concern so many people have about AI of all kinds is that it is going to replace jobs in the corporate world. That it will accelerate the enshittification of the internet and destroy artists, craftspeople, and small businesses.

They aren't wrong. It is going to do all those things.

Of course, artists, craftspeople, and small businesses have been getting fucked by the corporations that control the internet for some time now. Google and Meta in particular have molded the internet through their domination of the advertising business to create a homogenized hellscape. Search is already broken within their walled gardens and continues to deteriorate under the flood of AI Slop.

Artists in particular are doubly fucked. Tools like Midjourney and DALL-E, most likely trained on the artists' own work, allow anyone to easily create AI Slop. In a way, it is the expected evolution of the digital tool set many artists have moved to over the last decade to create their works.

A friend once told me,

"If you need a computer to make it, eventually a computer will just make it for you."

I think about that all the time in this context and feel terrible for artists who face this threat. There is a very real possibility that the only way artists will be able to make a living in the near future is by selling original works in person at art fairs, gallery shows, and farmers markets.

Not exactly the democratized future we were all promised in the 90s, is it?

Corporate creatives are not immune to these threats either. Since my career started, there has been a push by technologists to eliminate designers from the mix. Every tool has been trying to abstract away the need for designers and "get to code faster." The various AI-like utilities being embedded into design tools like Figma and AdobeEverything reflect this.

Then there is Google's AI Slop wonder machine, Veo. It is the most visible of the AI tools in the video generation space, and it isn't taking anyone's job just yet, but the goal of this class of tools is to replace people. A lot of the hoopla around the most recent strike in the entertainment industry was about writers and actors under threat, but I think the real threat is to the other people who make the industry go.

You don't need a key grip if there is no equipment to manage.

It isn't going to be much better in the corporate world, either. Here the threat is tools like Microsoft Copilot...

Ok. I have to get this off my chest. Why is Microsoft so bad at this? Two AI products with the name Copilot? One for Office 365 and one for GitHub? As if companies and people don't use both? This company is the very definition of failing upwards...

Anyway, Clippy the Return is spreading through the corporate world like a rash. It will write your emails, make your slides, write your employee reviews, reply to Teams messages, analyze your spreadsheets, summarize meetings so you don't need to attend...

You see where this is going right?

Clippy the Return might be the most visible tool that is going to replace people, thanks to marketing, but it is absolutely not the only one. Every enterprise player is embedding things within their platforms to sell to CTOs. From Salesforce down to the smallest player you have never heard of, vowels missing from its name, they are all sending me emails about their new AI features.

It has been a steady drumbeat of tools designed to replace customer service reps, IT support technicians, travel agents, data analysts, communications folks, HR folks, you name it. The odds are that if you have a corporate job, there is a class of AI tool being designed or tested to replace part or all of it.

Of course, right now, most of these tools don't work very well even when they work correctly. More than one person I talk to refers to the outputs as "starting points" or "drafts." Some people even talk about using the tools for the "boring part" of their jobs. That always astounds and dismays me.

The tools, like most technologies, suffer from some version of the garbage-in, garbage-out axiom. Human data is notoriously hard to put into tables and columns. We try very hard to categorize everything, and our Workday-LinkedIn-Tinder profiles would have us believe our data is simple and quantifiable, but it just isn't.

The tools are also often predictive models missing a key ingredient, so they make mistakes that people normally don't. They lack that all-too-human trait of intuition. An AI, no matter what version of it, is a bundle of information. It has no knowledge, no wisdom, and most importantly, no experience.

This will make adoption uneven. Things that robotic process automation (RPA) has already replaced will get replaced again, with some new things added. Anything that could be automated will be rapidly tossed into this bucket. In some places jobs will disappear, and in others that same job will still exist because the data in those sectors is hard.

We are in for a long ride of uncertainty, fear, and anxiety in the corporate world. There is no telling what this will do to a service economy.

Productivity, Productivity, Productivity!

But what about all the productivity we will gain, Keith? Haven't you seen all the numbers? Our engineers get 20, 30, 900% more productivity with AI agents. Isn't that awesome?

I think it is awesome every time one of my engineers finds a tool that helps them relieve anxiety. Most engineering productivity tools, when you get down to it, are about taking care of the repeatable things to relieve anxiety so they can focus on the hard problems. As I tell my teams, being an engineer is about problem solving more than it is about writing code.

In my long career we went from nothing, to basic autocomplete plugins, to powerful code completion tools baked into our IDEs, to some wild shit with this current class of predictive tools. I'm glad my engineers get access to this shit, if it makes their lives easier. So far though, the data hasn't come in yet on what the real return on the investment of time and energy may be.

I haven't even started to think about the costs here yet...

Yes, there is a lot of press out there with a lot of numbers. Some positive, some negative. Our own internals are inconclusive at this point, and some of the things I have seen have made me question its value entirely. When it goes well, it goes ok. When it goes bad, it goes really fucking bad, and the human in the middle needs to do work.

But I have got to ask the hard question... Does productivity even matter?

As an industry we have been chasing this dragon like addicts for decades. There is a cottage industry of books, seminars, and consultant frameworks on how to squeeze more productivity out of your team. If I am a mess of an organization and losing money, these things are probably important. I have helped right the ship on enough teams in my life to know that.

But, what if we aren't a mess?

What if I ship regularly? What if I am making money? What if I am well positioned in my market? What does 30% more productivity out of my engineers get me? Why isn't that the follow up question to every one of these technology CEOs in those interviews?

The honest answer is nothing. We don't like that answer in the corporate world, but it is the truth. Past a certain point, we can't draw a direct line from increased productivity to increased revenue or share prices. You can only squeeze so much out. Physics is immutable.

If I can't measure any more value beyond a certain point, how much do I invest?

How Are We Measuring Success?

Speaking of measurement, why does the only thing anyone measures seem to be adoption? Everyone I talk to in the corporate space is talking about their adoption metrics. Shopify posts about their adoption journey. Spotify has a podcast about it. Internal adoption of a product isn't a value metric.

I am going to say it again in a different way.

Internal adoption isn't a value metric.

So where are the value metrics?

This is the thing that bugs me, on both the pro AI adoption and con AI adoption side of the debate. At best they talk about productivity. At worst they talk about adoption. Neither reflects business value.

Personally, I can find business value metrics for the particular agentic use cases we are working on in my organization. But that is because we started from a use case and found a tool we think fits the purpose. We began with a problem and believe an implementation of some AI tooling will help us achieve our goals.

In the tech space, it has always been a bad idea to build a product around a technology rather than build the right technology to fit the right product. It is the difference between the Apple of Jobs's second run and, say, Google or Microsoft.

Lord, I miss when that was the benchmark we strove for, and not Sam Altman...

In my current role, it gets scary when I think about AI adoption as the metric guiding a company's idea of value. The price of compute is not going down. All of the AI companies are investing billions of dollars and have no profit. Their per-user pricing is at "the first taste is free, baby" levels right now, which is entirely unsustainable. At some point that has to level out.

Then what happens? How do all those CIOs and CTOs justify the spend to their CEOs? How do the CEOs justify the cost to their boards? What do you do when you have changed your company's processes to rely upon those vendors? What happens if you have laid off the workforce that would now be cheaper than your reliance on what is likely multiple AI vendors?

What Will the Impacts of Adoption Be?

I have to say, I think some of the automation that we can build with agentic tools is pretty amazing. Some of my engineers have built tools that handle repeatable tasks with agents, cutting down on their cognitive load, putting them in the middle to check the work, and keeping them focused on the right thing.
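To make "put them in the middle" concrete, here is a minimal sketch of the human-in-the-loop pattern. The names and structure are mine, invented for illustration rather than pulled from our actual tooling, and the agent step is a stand-in for a real LLM call.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    description: str
    draft: Optional[str] = None

def agent_draft(task: Task) -> Task:
    # Stand-in for the LLM call that handles the repeatable part.
    task.draft = f"[auto-generated draft for: {task.description}]"
    return task

def human_review(task: Task) -> bool:
    """The engineer stays in the middle: nothing ships unreviewed."""
    print(f"Proposed output:\n{task.draft}")
    return input("Approve? [y/N] ").strip().lower() == "y"

task = agent_draft(Task("triage this week's dependency-update alerts"))
if human_review(task):
    print("Approved; hand off to the real pipeline.")
else:
    print("Rejected; a human takes it from here.")
```

The important design choice is the gate: the agent does the repeatable part, but nothing moves forward until a person has looked at it.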

We are also exploring use cases centered on using agents to navigate unstructured and structured data sets to match criteria. This is the kind of powerful tooling that would let an organization, with a user's permission, find the right doctor, lawyer, advisor, or therapist. Connecting humans together, even in a professional capacity, is tricky stuff, and helping to facilitate that under the hood contains some magical possibilities.

The local models on our phones and other devices, walled off from vendors, will also be a game changer, if we can trust them. Small models privy to private information could open up interesting quality-of-life opportunities for people and families. And because they are walled off, they would by necessity keep people in the loop of the decision making, which is what these tools should be facilitating, not removing.
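As a sketch of what "walled off" could look like in practice, here is one way to query a model served entirely on-device. This assumes a local Ollama instance exposing its default API on localhost; the model name and prompt are placeholders.

```python
import json
import urllib.request

def ask_local_model(prompt, model="llama3.2"):
    """Query a model running entirely on this machine via Ollama."""
    body = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default port
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Private data stays local; the human still makes the actual decision.
print(ask_local_model("Given these three calendar conflicts, what are my options?"))
```

No vendor cloud in the path, and the output is advice for the human to act on, not a decision made on their behalf.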

Then of course there is the magic that has been happening continuously in the sciences with large models to crunch data. Whether it is using tools to reconstruct damaged artifacts through imagery analysis or hunting through data sets from the stars for new discoveries, science is doing amazing things. It makes me sad that so much of it is overshadowed by the larger AI hype machinery.

The things that concern me are the tools that remove the need to think and problem solve from the mix. I mentioned before how people happily admit that these tools write their emails and build their presentations, and it fills me with dread.

I mean, I get it. There are emails I don't want to reply to. There are times I struggle to get going on a presentation. Even this essay took me forever to get going on.

But why would I ever give up my agency? Why would I ever choose to let a machine communicate for me when communication is a tremendously nuanced and human activity? Why, in particular, would I trust a known imperfect machine to do this?

This removal of what are considered low-level or entry-level tasks goes hand in hand with the insane notion of not needing that talent anymore. It even stretches into the wild idea of vibe coding, where you need no understanding of development, just prompts. I am not sure there are more short-sighted views of the world than these.

Vibe coding makes no sense to anyone with a basic understanding of how imperfect the technology we are talking about is. Pushing this kind of code to production has the potential to be catastrophic, financially and in some cases physically. Our world runs on code and computers, and the last thing anyone wants is vibe code running the power grid.

With the myth of vibe coding out of the way: engineering talent has to come from somewhere. Intuition, knowledge, and context are all built on experience, not a weekend learning Python. If these tools eliminate the opportunities to learn and we don't build new learning pathways, something corporate America absolutely sucks at, where will we be?

If you work in the industry you probably have seen this story before because we have lived it already. I have a single word for you. COBOL.

Most of the world, at its very foundations, is built on COBOL. You know what has a shrinking talent pool? COBOL developers. As these old heads retire, the knowledge and wisdom disappears with them and it gets harder and harder to maintain these foundational systems.

There is a lesson there about letting skills atrophy in favor of the new hotness.

Making Choices at the End of an Era

I consider myself something of a student of history. Ever since I was little I have obsessed over where things come from. What is the origin of animals, of people, of countries, of language, of ideas?

This obsession eventually led me to the French historian Marc Bloch. In his brilliant posthumous work, The Historian's Craft, he asks,

"What is it, exactly, that constitutes the legitimacy of an intellectual endeavor?"

It is a question that haunts me, but it is also relevant to this subject. In this moment of massive change and potential upheaval, I ask myself this very question.

In my work, it is a nuanced dance trying to pick the legitimate choice. I personally need to experiment with these technologies to understand them. I need to ensure we don't lose pathways of experience for our talent in our organization because of the tools. I need to make sure we are picking tools because they are the right technology for solving a problem and not because they are the new hotness.

There is nothing worse than a hammer looking around for a nail...

In my art, the choice is more direct. I refuse to use the technology and limit the exposure of my art and writing to it. I have made the choice that the legitimacy of my endeavors exists in their physical characteristics, not in their digital manifestations. Their value is in their tactile nature, that which connects me to the person who holds the work on the other side of the divide.

Thinking in terms of eras, or as Bloch says, "short violent jerks, no one of which exceeds the space of a few lifetimes," is an illusion we cling to. History is made of movements of time that can span well beyond our vision. I won't try to predict what is going to happen next with this technology or what is going to happen to us all. No one can predict the future.

Not unless they have already been there...