This article is a polemic against ChatGPT Will Replace Programmers Within 10 Years by Adam Hughes, and also a response to Software 2.0 by Andrej Karpathy (director of AI at Tesla, formerly OpenAI).
the original article
Adam predicted that AI (not necessarily ChatGPT) will take 99% of programming jobs, which will happen in a few stages:
- Phase 0: The Prototypes (Q1 2023) – Job Loss Prediction: 2%
- coinciding, rather by accident, with the tech layoffs that started at Twitter
- Phase 1: Scale and IDE Infiltration (Q2-Q4 2023) – Job Loss Prediction: 5%
- AI can generate boilerplate code directly from the IDE, without copy-pasting from a chat
- Phase 2: Advanced IDE Tooling and Consolidation (1-2 yr) – Job Loss Prediction: 25%
- at this stage AI should be able to make project-wide suggestions
- job losses hit low performers and AI holdouts hardest
- specialists like virtual reality or game developers are still safe
- Phase 3: SaaS and No-Code (2–5 yr) – Job Loss Prediction: 75%
- coding in an IDE will die off
- the full-stack generalist is gone, as are the product team and the mobile developer
- niche fields like video gaming and medical software are now under threat
- Phase 4: AI Native and Domain Dominance (5–10 yr) – Job Loss Prediction: 95%
- as code is no longer maintained by humans, there’s no need to optimize it for cognitive complexity
- unit tests, documentation, and design patterns die off
- underlying code is a hidden detail, as inconsequential to its users as the firmware of a handheld calculator
- new hardware and compilers are designed for AI operators
- Phase 5: Heat Death (10+ yr) – Job Loss Prediction: 99%
- software is now unrecognizable, business rules are maintained by AI-powered SMEs
- code no longer resides in static repositories – it is ephemeral and dynamic
- there will still be hobbyist coders of course, but their work will be of little consequence… like oscilloscope art
should we be afraid?
This is a complicated question. We don’t fully agree with Adam’s predicted timeline or many of its details. However, denying that AI will take anyone’s job, based only on the current state of AI (example), is very shortsighted.
The correct question is: WHO exactly should be afraid? Because there are different types of programmers and programming jobs, with two main dividing lines:
- Business applications: workflows, business rules, messaging, and similar concerns, versus technical software: virtual machines, frameworks, drivers, specialized high-performance algorithms (sound/video processing, data compression, encryption, advanced math), game engines, and other similar projects.
- Larger teams, in bigger companies and/or software houses, working in a professional manner (test coverage, code reviews, quality metrics, versioning, etc.), versus single developers working for small non-IT companies in combined dev/admin/other roles, with old-style, poor-quality code, no tests or metrics, local changes in dependencies, etc.
If your programming job focuses more on business applications, and you work in a larger team and/or with code good enough for an AI takeover, then you should start to worry.
Otherwise, your job at least is rather safe. Your salary level – well, that’s a different story. But you will still earn more than at Walmart, doing what you love to do.
who will be safe in the next 3-5, maybe 7, years?
Note that this is only our opinion – based on over 25 years spent in IT, including IT management in 1000+ companies, but still just an opinion.
You are safe if you are a key programmer (one with unique knowledge, so not e.g. a new hire) and you:
- Work on a very big project that:
- is distributed across many independent modules that communicate through some API or service bus
- is split into many independent repositories, so a single IDE will never see the entire project, or even a majority of it
- is split across many independent teams that don’t have access to other teams’ repositories, environments, databases, or CI processes
- has no single “big picture” at the technical and product-management level
- (optionally) has some legacy parts (quality too low for AI, language not [yet] supported by AI, etc.)
- Work on a project where the business logic can’t [yet] be expressed in a form acceptable to AI:
- R&D (most areas)
- game development (most parts)
- development of anything highly innovative in strictly technical terms (e.g. using advanced math in complex contexts)
- Maintain a business application that relies heavily on original paper-based documents (not their digitized versions), e.g. for regulatory reasons. National pension systems are a good example (at least in Poland).
who will be safe in the next 10-15 years?
- You work on purely technical software that, for some reason, can’t be replaced by AI (within 10-15 years):
- virtual machines, emulators and other low-level instrumentation software
- drivers and other integration interfaces – including drivers that connect AI instances with external data sources, managed devices or other APIs
- frameworks, application servers, and other “core software” on top of which business applications run – making them more AI-friendly and enabling AI integration with, or takeover of, older applications
- various high-performance algorithms, e.g. video rendering algorithms in game engines, or real-time sound processing
- R&D in data encryption area (implementing new encryption algorithms, trying to find weaknesses etc.)
- R&D in more specific areas (eg. hardware related)
- You have a programming background and you work in the security/safety/compliance area, connecting IT security with functional safety and regulatory/compliance (e.g. GDPR, the DORA regulation, and similar future acts) – e.g. as a CSO (or just a security specialist, but with autonomous knowledge in important areas). Expect many details of your job to change – but in general your job should be safe, since AI will always be one step behind the criminals who want to steal your company’s money.
- You have at least some programming background and you are a key person in product management, or you lead a team of programmers focused on a particular product – so you have both key knowledge and decision power in your area. Your job will change into something Adam called a “subject matter expert” – someone who creates products through AI. In our opinion, this will probably look quite similar to Behat tests:
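For readers unfamiliar with Behat: it executes business rules written as near-natural-language Gherkin scenarios. A hypothetical sketch (the feature name, steps, and amounts are our own illustration, not from any real project) of what such an AI-facing product description might look like:

```gherkin
Feature: Invoice approval
  # Hypothetical example – feature, steps, and threshold invented
  # purely to illustrate the "specification as product" idea.

  Scenario: Large invoices require a second approver
    Given an invoice for 25000 EUR is submitted
    When the first manager approves it
    Then the invoice status should be "awaiting second approval"
```

A subject matter expert would write (or dictate) specifications at roughly this level of detail, and the AI would generate and maintain the code that makes them pass.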
- You work as a key person in any area that is tightly regulated – and deliberately will not be deregulated:
- military
- state-security (specific areas, eg. printing physical money)
- some particular medical areas (definitely not all!)
- any other state-level “last line of defence” area: power plants, sewage treatment, etc., depending on the particular country
what is the problem then?
AI will not take over all (or 99%) of programming jobs. Forget it. Such predictions for the next 10 or even 15 years miss many important real-world details…
…but 80-85% is in scope. And assume that another ~10% will have to reskill and change their technical area (not just their current job).
so, what should “I” do to stay in business?
(Not “we”, not all the programmers – just “I”.)
Look at the words repeated above: key person, key knowledge. There is another term for this: organizational silos. For the last 10-15 years, silos have been seen and referred to as something negative (at least from the company’s point of view) – as something to break.
However, imagine that you work in, say, an 8-person team, and only 1 of you will “survive” after AI takes over the rest of your work. Do you want to be that one, or would you rather let one of your 7 colleagues take the spot? Ask yourself this simple question and, depending on the answer, start making a plan.
But don’t worry: you still have some time, regardless of your particular choice and vision of your career. This time around, it is generally less important to be faster – and more important to be better prepared for surprises. Including foul play from colleagues who are not against you in general – but who will be competing for the same single job.
Good luck!