kottke.org. home of fine hypertext products since 1998.


A Coder Considers the Waning Days of the Craft. “I suspect that non-programmers who are skeptical by nature, and who have seen ChatGPT turn out wooden prose or bogus facts, are still underestimating what’s happening.”

Discussion  1 comment

David Leppik

As a software engineer, my job is to negotiate work contracts between humans and machines—and occasionally data contracts between teams of developers. People come to me with a request and I need to ask the right questions to find out what they actually need and how it fits into the existing workflow and software environment.

Then I need to write code that people can understand and tests to catch easy-to-miss mistakes.
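The “easy-to-miss mistakes” part is where tests earn their keep. As a hedged sketch (the `average` function and its empty-list convention are invented for this example), a test for the edge case a pasted snippet often skips might look like:

```python
def average(values):
    """Mean of a list of numbers; defined as 0.0 for an empty list."""
    if not values:
        # Easy to miss: a naive sum(values) / len(values) raises
        # ZeroDivisionError on an empty list.
        return 0.0
    return sum(values) / len(values)

def test_average():
    assert average([2, 4]) == 3.0
    assert average([]) == 0.0  # the edge case that's easy to forget

test_average()
```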

Simply put, by the time I can describe a task accurately enough for a generative AI, I’m most of the way done and might as well write the code myself.

Part of my job is to minimize complexity. There are two kinds: inherent and accidental. Inherent complexity is unavoidable: if you are writing tax software or a physics simulator, your program must be at least as complex as the tax law or the physics. But there is a lot of accidental complexity as well. When England introduced two-way roads, it had to decide whether traffic would keep to the left or the right. The choice is arbitrary, but it has to be made, and it becomes genuinely complex only when you have to integrate with countries that made the opposite choice.
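To make the driving example concrete, here is a toy sketch (the country table and function names are invented for illustration): neither convention is complex on its own, and the code only gets hairy at the boundary where the two conventions meet.

```python
# Arbitrary but fixed choices: neither is better, but each must be made.
DRIVES_ON = {"UK": "left", "Japan": "left", "France": "right", "US": "right"}

def steering_wheel_side(country: str) -> str:
    # Within one convention, the rule is trivial: the wheel sits opposite
    # the driving side so the driver is near the center line.
    return "right" if DRIVES_ON[country] == "left" else "left"

def needs_headlight_adjustment(home: str, destination: str) -> bool:
    # The accidental complexity shows up only at integration points:
    # headlight beams aimed for one convention dazzle drivers in the other.
    return DRIVES_ON[home] != DRIVES_ON[destination]
```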

Generative AI is great at cutting through accidental complexity. So much of coding is accidental, arbitrary complexity (Python vs. JavaScript, iOS vs. Android) that the AI can feel like it’s doing all the work for you, especially when you’re working in an unfamiliar framework.

And that’s setting aside the fact that the current crop of AIs is error-prone when it comes to gotchas that are hard to see. A lot of those errors will eventually be caught by special-purpose linters. But the AIs themselves are just code generators backed by statistical models, and they have the same problems as all code generators.

Perhaps the biggest problem with code generators is that they encourage bloated, hard-to-read code. Most developers spend far more time reading code than writing it. Good code isn’t just legible on its own; it is written at the same level of abstraction as the ideas it encodes, so it reads like technical documentation. It’s not a series of Stack Overflow snippets strung together until the computer produces what appears to be the right answer.
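As a small illustration of that contrast (the shopping-cart domain and both functions are invented for this sketch), compare a pasted-together version with one written at the level of the idea it encodes:

```python
# Pasted-together style: the reader has to reverse-engineer the intent.
def calc(d, r):
    t = 0
    for k in d:
        t = t + d[k][0] * d[k][1]
    t = t + t * r
    return t

# The same logic at the abstraction level of the idea: it reads like the
# spec it implements ("sum the line items, then apply tax").
def order_total(line_items, tax_rate):
    subtotal = sum(price * quantity for price, quantity in line_items)
    return subtotal * (1 + tax_rate)
```

Both compute the same total for corresponding inputs; only the second one documents itself.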
