Hinged Thoughts

*(Image generated with DALL·E from the prompt: "create a cartoon image of a software engineer getting ready for the AI future")*

Prepping for the future (as a human software engineer)


I’ve been a software engineer for about 6 years. I currently work at an AI fintech startup in NYC, and I’m wondering whether there will be virtually any (human) software engineers left in about 20 years’ time.

The pace of AI-driven software engineering advancement is scary fast. If you told me 2 years ago that freely available AI models would be able to write a Pong game¹ within 10 seconds, execute their own code, understand the error messages and rewrite the code until it works, or shame my coworker about his code not being performant, I would have laughed at you.

I’m not laughing 😓🥺😮.

The reactions from those in the software industry range from enthusiasm to dismissal to despair.

In particular, some who dismiss AI code generation point out the silly mistakes that AI-generated code can contain. As though human-generated code is immaculate. As though the technology isn’t continually improving, though not necessarily in a straight line. As though holding an entire codebase in its context window isn’t already superhuman versus 95%+ of software engineers.

This also ignores the very real applications of AI to essentially every human cognitive endeavour, from art to lawyering to linguistics to teaching. Those not in denial are very busy integrating AI into everything, because they realize that if they don’t, they’ll be left behind.

Some will argue that AI will never get as good as the best humans. It doesn’t have to in order to have a deeply disruptive impact on basically all human activity.

So, if you are like me, an engineer who likes to build and also likes having a job, what should we do?

Let us, as our prompters would say, think this through step by step.

We know the following:

  1. AI allows engineers to be significantly more productive, at least at basic tasks such as writing scripts.
  2. AI has various limitations in terms of context windows, hallucinations, reasoning abilities, data biases and staleness.
  3. AI is publicly available and cheap.
  4. AI will likely continue to improve, though at what pace and to what limit (if any) is unclear.

So, we should probably do the following:

  1. Get really good at using AI code generation.
  2. Obtain at least a basic understanding of how AI works and its limitations.
  3. Get really good at valuable skills that AI is not very good at (currently) - e.g. understanding product requirements and their design implications, system design, low-level performance optimization.
  4. Keep up to date on the latest cutting edge AI tooling and research.
  5. (Optional) Buy a farm.

I am going to start on no. 1 today by playing around with Anthropic’s new Claude 3.5 Sonnet model for some learning. hbu?

¹ Generated using GPT-4o (23/6/24) with the prompt: “write a pong game in Typescript that I can paste into CodeSandbox and run”