Reports of code's death are greatly exaggerated

(stevekrouse.com)

62 points | by stevekrouse 7 hours ago

12 comments

  • pacman128 31 minutes ago
    In a chat bot coding world, how do we ever progress to new technologies? The AI has been trained on numerous people's previous work. If there is no prior art, for say a new language or framework, the AI models will struggle. How will the vast amounts of new training data they require ever be generated if there is not a critical mass of developers?
    • derrak 0 minutes ago
      Maybe you’re right about modern LLMs. But you seem to be making an unstated assumption: “there is something special about humans that allows them to create new things, and computers don’t have this thing.”

      Maybe you can’t teach current LLM-backed systems new tricks. But do we have reason to believe that no AI system can synthesize novel technologies? What reason do you have to believe humans are special in this regard?

    • justonceokay 19 minutes ago
      Most art forms do not have a wildly changing landscape of materials and mediums. In software we are seeing things slow down in terms of tooling changes because the value provided by computers is becoming more clear and less reliant on specific technologies.

      I figure that all this AI coding might free us from NIH syndrome and reinventing relational databases for the 10th time, etc.

      • sd9 5 minutes ago
        LLMs are very much NIH machines
      • realusername 9 minutes ago
        The bar to create the new X framework has just been lowered so I expect the opposite, even more churn.
    • pklausler 7 minutes ago
      This would also mean that we should design new programming languages out of sight of LLMs in case we need to hide code from them.
    • jedberg 17 minutes ago
      People are doing this now. It's basically what skills.sh and its ilk are for -- to teach AIs how to do new things.

      For example, my company makes a new framework, and we have a skill we can point an agent at. Using that skill, it can one-shot fairly complicated code using our framework.

      The skill itself is pretty much just the documentation and some code examples.
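      As a sketch of what such a skill might contain (the framework name, file layout, and commands here are hypothetical illustrations, not jedberg's actual skill):

```markdown
# Skill: acme-framework

## When to use
Any task that builds services on the (hypothetical) Acme framework.

## Key docs
- docs/getting-started.md
- docs/api-reference.md

## Example
Registering a request handler:

    app.route("/health", health_handler)

## Conventions
- One handler per file under src/handlers/
- Run `acme test` before finishing
```

      The point being: nothing exotic, just curated documentation and examples packaged where an agent will read them.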

      • andrei_says_ 8 minutes ago
        The question is, who made the new framework? Was it vibe coded by someone who does not understand its code?
        • jedberg 5 minutes ago
          No, it was created by our team of engineers over the last three years based on years of previous PhD research.
      • NewJazz 14 minutes ago
        A framework is different than a paradigm shift or new language.
        • jedberg 3 minutes ago
          Yes and no. How does a human learn a new language? They use their previous experience and the documentation to learn it. Oftentimes the way someone learns a new language is to take something written in an old language and rewrite it.

          LLMs are really good at doing that. Arguably better than humans at RTFM and then applying what's there.

    • kstrauser 23 minutes ago
      That’s factually untrue. I’m using models to work on frameworks with nearly zero preexisting examples to train on, doing things no one’s ever done with them, and I know this because I know the ecosystem around these young frameworks.

      Models can RTFM (and code) and do novel things, demonstrably so.

      • allthetime 5 minutes ago
        Yeah. I work with bleeding-edge Zig. If you just ask Claude to write you a working TCP server with the new Io API, it doesn’t have any idea what it’s doing and the code doesn’t compile. But if you give it some minimal code examples, point it to the recent blog posts about it, and paste in relevant parts of std, it does incredibly well and produces code that it has not been trained on.
    • CamperBob2 7 minutes ago
      > In a chat bot coding world, how do we ever progress to new technologies?

      Funny, I'd say the same thing about traditional programming.

      Someone from K&R's group at Bell Labs, straight out of 1972, would have no problem recognizing my day-to-day workflow. I fire up a text editor, edit some C code, compile it, and run it. Lather, rinse, repeat, all by hand.

      That's not OK. That's not the way this industry was ever supposed to evolve, doing the same old things the same old way for 50+ years. It's time for a real paradigm shift, and that's what we're seeing now.

      All of the code that will ever need to be written already has been. It just needs to be refactored, reorganized, and repurposed, and that's a robot's job if there ever was one.

    • danielbln 21 minutes ago
      Inject the prior art into the (ever increasing) context window, let in-context learning do its thing, and go?
  • idopmstuff 43 minutes ago
    I don't know that people are saying code is dead (or at least the ones who have even a vague understanding of AI's role) - more that humans are moving up a level of abstraction in their inputs. Rather than writing code, they can write specs in English and have AI write the code, much in the same way that humans moved from writing assembly to writing higher-level code.

    But of course writing code directly will always maintain the benefit of specificity. If you want to write instructions to a computer that are completely unambiguous, code will always be more useful than English. There are probably a lot of cases where you could write an instruction unambiguously in English, but it'd end up being much longer because English is much less precise than any coding language.
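    A toy illustration of that specificity gap (the data and the 30-day cutoff are invented for the example): the English instruction "remove old entries" leaves both the cutoff and the boundary unstated, while the code version pins down every detail.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    name: str
    age_days: int

entries = [Entry("a", 10), Entry("b", 45), Entry("c", 30)]

# "Remove old entries" is ambiguous: how old is "old", and is the
# boundary inclusive? The code makes both explicit: keep anything
# 30 days old or newer, drop strictly older entries.
kept = [e for e in entries if e.age_days <= 30]

print([e.name for e in kept])  # ['a', 'c']
```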

    I think we'll see the same in photo and video editing as AI gets better at that. If I need to make a change to a photo, I'll be able to ask a computer, and it'll be able to do it. But if I need the change to be pixel-perfect, it'll be much more efficient to just do it in Photoshop than to describe the change in English.

    But much like with photo editing, there'll be a lot of cases where you just don't need a high enough level of specificity to use a coding language. I build tools for myself using AI, and as long as they do what I expect them to do, they're fine. Code's probably not the best, but that just doesn't matter for my case.

    (There are of course also issues of code quality, tech debt, etc., but I think that as AI gets better and better over the next few years, it'll be able to write reliable, secure, production-grade code better than humans anyway.)

    • cactusplant7374 23 minutes ago
      > But of course writing code directly will always maintain the benefit of specificity. If you want to write instructions to a computer that are completely unambiguous, code will always be more useful than English.

      Unless the defect rate for humans is greater than LLMs at some point. A lot of claims are being made about hallucinations that seem to ignore that all software is extremely buggy. I can't use my phone without encountering a few bugs every day.

      • idopmstuff 17 minutes ago
        Yeah, I don't really accept the argument that AI makes mistakes and therefore cannot be trusted to write production code (in general, at least - obviously depends on the types of mistakes, which code, etc.).

        The reality is we have built complex organizational structures around the fact that humans also make mistakes, and there's no real reason you can't use the same structures for AI. You have someone write the code, then someone does code review, then someone QAs it.

        Even after it goes out to production, you have a customer support team and a process for them to file bug tickets. You have customer success managers to smooth over the relationships when things go wrong. In really bad cases, you've got the CEO getting on a plane to go take the important customer out for drinks.

        I've worked at startups that made a conscious decision to choose speed of development over quality. Whether or not it was the right decision is arguable, but the reality is they did so knowing that meant customers would encounter bugs. A couple of those startups are valued at multiple billions of dollars now. Bugs just aren't the end of the world (again, in most cases - I worked on B2B SaaS, not medical devices or what have you).

      • bryanrasmussen 18 minutes ago
        most human bugs are caused by failures in reasoning though, not by just making something up to leap to the conclusion considered most probable, so not sure if the comparison makes sense.
        • wiseowise 5 minutes ago
          > most human bugs are caused by failures in reasoning though

          Citation needed.

  • soumyaskartha 48 minutes ago
    Every few years something is going to kill code and here we are. The job changes, it does not disappear.
    • suzzer99 46 minutes ago
      For future greenfield projects, I can see a world where the only jobs are spec-writer and test-writer, with maybe one grumpy expert coder (aka code janitor) who occasionally has to go into the code to figure out super gnarly issues.
      • drzaiusx11 14 minutes ago
        This is already happening; many days I am that grumpy "code janitor" yelling at the damn kids to improve their slop after shit blows up in prod. I can tell you it's not "fun", but hopefully we'll converge on a scalable review system eventually that doesn't rely on a few "olds" to clean up. GenAI systems produce a lot of "mostly ok" code that has subtle issues you only catch with some experience.

        Maybe I should just retire a few years early and go back to fixing cars...

        • suzzer99 7 minutes ago
          Yeah I imagine it has to be utterly thankless being the code janitor right now when all the hype around AI is peaking. You're basically just the grumpy troll slowing things down. And God forbid you introduce a regression bug trying to clean up some AI slop code.

          Maybe in the future us olds will get more credit when apps fall over and the higher ups realize they actually need a high-powered cleaner/fixer, like the Wolf in Pulp Fiction.

          • allthetime 1 minute ago
            I’ve got a “I haven’t written a line of code in one year” buddy whose startup is gaining traction and contracts. He’s rewritten the whole stack twice already after hitting performance issues and is now hiring cheap juniors to clean up the things he generates. It is all relatively well defined CRUD that he’s just slapped a bunch of JS libs on top of that works well enough to sell, but I’m curious to see the long term effects of these decisions.

            Meanwhile I’m moving at about half the speed with a more hands-on approach (still using the bots obviously), but my code quality and output are miles ahead of where I was last year, without sacrificing maintainability and performance for dev speed.

  • rglover 11 minutes ago
    It's only dead to those who are ignorant of what it takes to build and run real systems that don't tip over all the time (or leak data, embroil you in extortion, etc.). That will piss some people off, but it's worth considering if you don't want to perma-railroad yourself long-term. Many seem so blinded by the glitz, glamour, and dollar signs that they don't realize they're actively destroying their future prospects/reputation by getting all emo about a non-deterministic printer.

    Valuable? Yep. World changing? Absolutely. The domain of people who haven't the slightest clue what they're doing? Not unless you enjoy lighting money on fire.

  • deadbabe 1 hour ago
    My problem is that while I know “code” isn’t going away, everyone seems to believe it is, and that’s influencing how we work.

    I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.

    • oooyay 45 minutes ago
      > I have not really found anything that shakes these people down to their core. Any argument or example is handwaved away by claims that better use of agents or advanced models will solve these “temporary” setbacks. How do you crack them? Especially upper management.

      You let them play out. Shift-left was similar to this and ultimately ended in part disaster, part non-accomplishment, and part success. Some percentage of the industry walked away from shift-left greatly more capable than the rest, a larger chunk left the industry entirely, and some people never changed. The same thing will likely happen here. We'll learn a lot of lessons, the Overton window will shift, the world will be different, and it will move on. We'll have new problems and topics to deal with as AI and how to use it shifts away from being a primary topic.

    • idopmstuff 21 minutes ago
      As a former PM, I will say that if you want to stop something from happening at your company, the best route is to come off very positive about it initially. This is critical because it gives you credibility. After my first few years of PMing, I developed a reflex that any time I heard a deeply stupid proposal, I would enthusiastically ask if I could take the lead on scoping it out.

      I would do the initial research/planning/etc. mostly honestly and fairly. I'd find the positives, build a real roadmap and lead meetings where I'd work to get people onboard.

      Then I'd find the fatal flaw. "Even though I'm very excited about this, as you know, dear leadership, I have to be realistic that in order to do this, we'd need many more resources than the initial plan because of these devastating unexpected things I have discovered! Drat!"

      I would then propose options. Usually three, which are: Continue with the full scope but expand the resources (knowing full well that the additional resources required cannot be spared), drastically cut scope and proceed, or shelve it until some specific thing changes. You want to give the specific thing because that makes them feel like there's a good, concrete reason to wait and you're not just punting for vague, hand-wavy reasons.

      Then the thing that we were waiting on happens, and I forget to mention it. Leadership's excited about something else by that point anyway, so we never revisit dumb project again.

      Some specific thoughts for you:

      1. Treat their arguments seriously. If they're handwaving your arguments away, don't respond by handwaving their arguments away, even if you think they're dumb. Even if they don't fully grasp what they're talking about, you can at least concede that agents and models will improve and that will help with some issues in the future.

      2. Having conceded that, they're now more likely to listen to you when you tell them that while it's definitely important to think about a future where agents are better, you've got to deal with the codebase right now.

      3. Put the problems in terms they'll understand. They see the agent that wrote this feature really quickly, which is good. You need to pull up the tickets that the senior developers on the team had to spend time on to fix the code that the agent wrote. Give the tradeoff - what new features were those developers not working on because they were spending time here?

      4. This all works better if you can position yourself as the AI expert. I'd try to pitch a project of creating internal evals for the stuff that matters in your org to try with new models when they come out. If you've volunteered to take something like that on and can give them the honest take that GPT-5.5 is good at X but terrible at Y, they're probably going to listen to that much more than if they feel like you're reflexively against AI.
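      A minimal sketch of what such an internal eval harness could look like, assuming the model call is wrapped behind an `ask_model` function (stubbed here with canned answers; you would swap in your real API client) and each org-specific check is a predicate on the answer:

```python
from typing import Callable

# Stub standing in for a real model call; replace with your API client.
def ask_model(prompt: str) -> str:
    canned = {
        "Write a SQL query counting users": "SELECT COUNT(*) FROM users;",
        "Name our internal auth service": "I don't know.",
    }
    return canned.get(prompt, "")

# Each eval pairs a prompt with a predicate deciding pass/fail.
# These two checks are invented examples of org-specific evals.
EVALS: list[tuple[str, Callable[[str], bool]]] = [
    ("Write a SQL query counting users", lambda a: "COUNT(*)" in a.upper()),
    ("Name our internal auth service", lambda a: "authsvc" in a.lower()),
]

def run_evals() -> float:
    """Run every eval against the model and return the pass rate."""
    passed = sum(1 for prompt, check in EVALS if check(ask_model(prompt)))
    return passed / len(EVALS)

print(f"pass rate: {run_evals():.0%}")  # 50% with the stub above
```

      Re-running a harness like this against each new model release gives you the concrete "good at X, terrible at Y" takes that leadership will actually listen to.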

      • pixl97 14 minutes ago
        It's even better when you guide them into finding the fatal flaw for themselves.
        • idopmstuff 7 minutes ago
          Hahaha yes, this is absolutely true, but oftentimes so much more work.
  • gedy 23 minutes ago
    When I started my professional life in the 90s, we used Visual J++ (Java), and I remember all this damn code it generated to do UIs...

    I remember being aghast at all the incomprehensible code and "do not modify" comments - and also at some of the devs who were like "isn't this great?".

    I remember bailing out asap to another company where we wrote Java Swing, and I was so happy we could write UIs directly, with a lot less code to understand. I'm feeling the same vibe these days with the "isn't it great?". Not really!

    • justonceokay 16 minutes ago
      You just brought me back to my first internship, where as interns we were asked to hand-manipulate a 30k-line auto-generated SOAP API definition because we lost the license to the software that generated it.
    • drzaiusx11 11 minutes ago
      Oh the memories, but at least that generated code was deterministic...
  • rvz 6 hours ago
    From "code" to "no-code" to "vibe coding" and back to "code".

    What you are seeing here is that many are attempting to take shortcuts to building production-grade maintainable software with AI, only to realize they have built their software on terrible architecture and must throw it away and rewrite it, with no one truly understanding the code or able to explain it.

    We have a term for that already and it is called "comprehension debt". [0]

    With the rise of over-reliance on agents, you will see "engineers" unable to explain technical decisions who will admit to having zero knowledge of what the agent has done.

    This is exactly what is happening to engineers at AWS, with Kiro causing outages [1] and engineers now required to manually review AI changes [2] (which slows them down even with AI).

    [0] https://addyosmani.com/blog/comprehension-debt/

    [1] https://www.theguardian.com/technology/2026/feb/20/amazon-cl...

    [2] https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f77...

    • suzzer99 49 minutes ago
      > With the rise of over-reliance of agents, you will see "engineers" unable to explain technical decisions and will admit to having zero knowledge of what the agent has done.

      I've had to work on multiple legacy systems like this where the original devs are long gone, there's no documentation, and everyone at the company admits it's a complete mess. They send you off with a sympathetic, "Good luck, just do the best you can!"

      I call it "throwing dye in the water." It's the opposite of fun programming.

      On the other hand, it often takes creativity and general cleverness to get the app to do what you want with minimally-invasive code changes. So it should be the hardest for AI.

    • Insanity 41 minutes ago
      While I agree with everything you said, Amazon’s problems aren’t just Kiro messing up. It’s a brain drain due to layoffs, and then people quitting because of the continuous layoff culture.

      While publicly they might say this is AI driven, I think that’s mostly BS.

      Anyway, that doesn’t take away from your point, just adds additional context to the outages.

    • zer00eyz 39 minutes ago
      > We have a term for that already and it is called "comprehension debt".

      This isn't any different than the "person who wrote it already doesn't work here any more".

      > now requiring engineers to manually review AI changes [2] (which slows them down even with AI).

      What does this say about the "code review" process if people can't understand the things they didn't write?

      Maybe we have had the wrong hiring criteria. The "leet code", brain-teaser, FAANG-style write-some-code interview might not have been the best filter for the sorts of people you need working in your org today.

      Reading code, tooling up (debuggers, profilers), durable testing (simulation, not unit) - these are the skill changes that NO ONE is talking about, and that we have not been honing or hiring for.

      No one is talking about requirements, problem scoping, how you rationalize and think about building things.

      No one is talking about how your choice of dev environment is going to impact all of the above processes.

      I see a lot of hype, and a lot of hate, but not a lot of the pragmatic middle.

      • xienze 32 minutes ago
        > This isn't any different than the "person who wrote it already doesn't work here any more".

        Yeah but that takes years to play out. Now developers are cranking out thousands of lines of “he doesn’t work here anymore” code every day.

        • zer00eyz 16 minutes ago
          > Yeah but that takes years to play out.

          https://www.invene.com/blog/limiting-developer-turnover has some data that aligns with my own experience, putting the average at 2 years.

          I have been doing this a long time: my longest-running piece of code lasted 20 years. My current is 10. Most of my code is long dead and replaced because businesses evolve, close, move on. A lot of my code was NEVER meant to be permanent. It solved a problem in a moment, it accomplished a task, fit for purpose and disposable (and riddled with cursing, manual loops, and goofy exceptions just to get the job done).

          Meanwhile I have seen a LOT of god-awful code written by humans. Businesses running on things that are SO BAD that I still have shell shock that they ever worked.

          AI is just a tool. It's going from hammers to nail guns. The people involved are still the ones who are ultimately accountable.

  • erichocean 47 minutes ago
    > If you know of any other snippet of code that can master all that complexity as beautifully, I'd love to see it.

    Electric Clojure: https://electric.hyperfiddle.net/fiddle/electric-tutorial.tw...

    • stevekrouse 17 minutes ago
      Sick!!! Great example! I'm actually a longtime friend and angel investor in Dustin but I hadn't seen this
  • developic 7 hours ago
    What is this