Things I Believe About AI

Some version of this post has been kicking around in my head for a while. The basic genesis is that I have complex, and often contradictory, thoughts on generative AI, and I often want to express these to friends, colleagues, and others when the topic comes up. But it’s hard to quickly summarize these ideas without completely derailing where that conversation is going. This is basically an attempt to express these things in one place.

Two quick notes/disclaimers. First, these are things I believe today, but they could easily change in the future. Second, I’m using the term AI in its current pop-cultural sense: generative AI tools such as ChatGPT, Claude Code, and the like. I understand that AI is much broader than that, but for the purposes of this post, I like the shorthand.

AI is Useful

When ChatGPT first took off, and other AI tools started to emerge, I was pretty skeptical of their actual utility. I toyed with them a bit, but didn’t find them very capable of doing anything real. I sort of wrote them off for a while, and didn’t really come back to them until my employer started to push their use.

I was wrong. Generative AI is not crypto. These tools have lots of real utility, and I use them daily both personally and professionally for everything from brainstorming and research to writing and reviewing code.

Some of that initial skepticism remains. I’m still pretty careful about reviewing the output when the results matter, and I look for primary sources when I want to cite the information to others. But there is without doubt something there, and I can only see my own usage continuing to increase.

AI Will Only Get Better

This may be somewhat obvious: the inevitable forward march of technology and all that. But having watched how much AI models and tools have progressed in the last year, I think it’s pretty likely that the scenarios where AI can be successfully deployed will continue to expand.

Given that, I think it’s worth continuing to experiment and evaluate AI tools and how they can be used to get things done and make our lives easier. If you tried something 3 or 6 months ago and it failed, it’s probably worth trying again today.

AI May Not Be As Useful As We Think

I recently read about a study, the results of which surprised me, given my own experience with AI coding tools.

“After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%--AI tooling slowed developers down.”
From “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity”

I don’t totally know what to make of that. Transparently, I didn’t read the full study. But I’m open to the idea that there is a bit of cognitive bias going on that creates a delta between perception and reality. Whether that’s true or not, I think it’s worth keeping an eye on, and worth trying to measure how much we’re actually saving, in time and otherwise, from AI tools, especially given their significant costs.

I Have Concerns About the Societal Impact of AI

Recent history tells me that the benefits of AI are likely to go overwhelmingly to the already wealthy and powerful. I think it’s inevitable that AI ultimately leads to many businesses needing fewer human workers. I have no idea how extensive that change will be, or how quickly it will come, but I feel confident saying it will happen.

When that happens, the result could be some combination of increasing wages, fewer working hours for the average person, or maybe something like UBI. But the much more likely scenario I foresee is that profits go up, unemployment goes up, and the wealth gap continues to increase as a result of these tools. Depending on your politics, you may or may not believe that this is a bad thing, but I do, and it worries me.

There are other concerns I have: how AI impacts education, the environmental costs, the morals of using these tools given how they were trained, etc. But honestly, those feel so small in comparison that they’re hardly worth elaborating on here.

Conservative AI Adoption Would Be Prudent

Given the above concerns, and that the real utility of these tools is at least somewhat unproven, I think it would be wise to move slowly. It’s going to be hard to go backwards once AI is widely adopted and becomes a standard expectation rather than a novelty.

I’d much rather see a conservative approach to deploying AI far and wide, with a preference for human-in-the-loop models wherever possible.

That said, I have no reason to believe that will be the case, and you can already see AI getting shoved into everything. So I’m not getting my hopes up here.

AGI Shouldn’t Be a Goal

There is a techno-optimist viewpoint that sees artificial general intelligence (AGI) leading to unprecedented advancement and growth, on a scale that could end human suffering. I think that’s pretty unlikely to be the real outcome. I don’t think I’d call myself a “human optimist,” but I believe humans are good at lots of things, and we are evolutionarily wired to remain active and engaged in the world around us.

Given that, I think AI is best deployed primarily as a tool: one that helps us do things we already want to do, and that expands our capabilities at the margins for tasks where we are limited.

I don’t see any compelling reason to want AGI or to think that it will actually be deployed in a responsible way that consistently benefits ordinary people. In short: a world where AGI exists isn’t one I’m excited about, or one I necessarily want to live in.