Which programming languages are most token-efficient?

(martinalderson.com)

78 points | by tehnub 5 hours ago

20 comments

  • solomonb 2 hours ago
    I'm biased by my preferred style of programming languages, but I think that pure, statically typed functional languages are incredibly well suited to LLMs. The purity gives you referential transparency and static-analysis powers that the LLM can leverage to stay on task.

    The high level declarative nature and type driven development style of languages like Haskell also make it really easy for an experienced developer to review and validate the output of the LLM.

    Early on in the GPT era I had really bad experiences generating Haskell code with LLMs but I think that the combination of improved models, increased context size, and agentic tooling has allowed LLMs to really take advantage of functional programming.

    • keeda 46 minutes ago
      It's not just your bias, I too have found great success with a functional programming style, even from the earliest days of ChatGPT. (Not Haskell, but JS, which the models were always good at.)

      I think the underlying reason is that functional programming is very conducive to keeping the context tight and focused. For instance, most logic relevant to a task tends to be concentrated in a few functions and data structures across a smallish set of files. That's all you need to feed into the context.

      Contrast that with, say, Java, where the logic is often spread across a deep inheritance hierarchy located in a bunch of separate files. Add to that large frameworks that encapsulate a whole lot of boilerplate and bespoke logic, with magic injected from arbitrary places via e.g. annotations. You'd need to load all of those files (or more likely, simply the whole codebase) and the relevant documentation to get accurate results. And even then the additional context is not just extraneous and expensive, but also polluted with irrelevant data that actually reduces accuracy.

      A common refrain of mine is that for the best results, you have to invest a lot of time experimenting AND adapt yourself to figure out what works best with AI. In my case, it was gradually shifting to a functional style after spending my whole career writing OO code.

    • eru 2 hours ago
      I'm inclined to agree with you in principle, but there are far fewer Haskell examples in the training corpus than there are for JavaScript or Python.
      • joelthelion 3 minutes ago
        For the little Haskell I've done with LLMs, I can tell you they're not bad at it.

        Actually, Haskell was a bit too hard for me to use on my own for real projects. Now, with AI assistants, I think it could be a great pick.

      • solomonb 2 hours ago
        You are right that there is significantly more JavaScript in the training data, but I can say from experience that I'm a little shocked at how good Opus 4.5 has been for me at writing Haskell. I'm fairly particular and I end up rewriting a lot of code for style reasons, but it can often one-shot an acceptable solution that is mostly in line with the rest of the code base.
      • energy123 1 hour ago
        True for now, but probably not a durable fact. Synthetic data pipelines should be mostly invariant to the programming language, as long as the output is correct. If anything, the additional static analysis makes such languages more amenable to synthetic data generation.
        • eru 1 hour ago
          > Synthetic data pipelines should be mostly invariant to the programming language, as long as the output is correct.

          Well, you can adapt your PHP-producing pipeline to produce Haskell code that is correct in the sense of solving the problem at hand, but getting it to produce idiomatic code is probably a lot harder.

      • kstrauser 2 hours ago
        And yet the models I've used have been great with Rust, which pales in sheer volume of code next to JavaScript (or Python or PHP or Perl or C or C++).
        • eru 2 hours ago
          I've also had decent experiences with Rust recently. I haven't done enough Haskell programming in the AI era to really say.

          But it could be that different programming languages are a bit like different human languages for these models: when they have more than some threshold of training data, they can express their general problem solving skills in any of them? And then it's down to how much the compiler and linters can yell at them.

          For Rust, I regularly tell them to make `clippy::pedantic` happy (and to tell me explicitly when they think the best way to do that is an ignore annotation in the code that disables a certain warning for a specific line).

          Pedantic clippy is usually too... pedantic for humans, but it seems to work reasonably well with the agents. You can also add `clippy::cargo`, which ain't included in `clippy::pedantic`.

          • solomonb 2 hours ago
            > But it could be that different programming languages are a bit like different human languages for these models: when they have more than some threshold of training data, they can express their general problem solving skills in any of them? And then it's down to how much the compiler and linters can yell at them.

            I think this is exactly right.

            • jaggederest 23 minutes ago
              Exactly my opinion - I think the more you lock down the "search space" by providing strong and opinionated tooling, the better LLMs perform. I think of it as the difference between a simulated-annealing run blindly trying to find a correct solution, versus the same run using heuristics and bounds to narrow the solution space.
  • kozika 34 minutes ago
    Someone has made a programming language called Sui, which is said to be designed for LLMs. However, its use of index-based variable names to "avoid typo bugs" makes it harder to use than general-purpose languages, and it also has poor token efficiency :(

    https://github.com/TakatoHonda/sui-lang

  • janalsncm 3 hours ago
    This is kind of just a measurement of how well represented a language is in the tokenizer's training distribution. You could have a single token equal to “public static void main”.
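
    As a rough illustration, here is a sketch using the `tiktoken` package (an assumption on my part; exact counts vary by tokenizer and model):

      # Count tokens for a few code idioms; illustrative only.
      # pip install tiktoken; cl100k_base is one common BPE vocabulary.
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")
      for snippet in ["public static void main", "def main():", "main :: IO ()"]:
          ids = enc.encode(snippet)
          print(f"{len(ids):2} tokens for {snippet!r}")

    Idioms the tokenizer saw constantly during training merge into a handful of tokens, while rarer syntax fragments into many.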
    • moelf 2 hours ago
      the most efficient languages are pretty unpopular, so this argument makes them even more efficient in reality?...
    • make3 1 hour ago
      If you look at the list, you'll see that you're incorrect, as C and JavaScript are not at the top.

      Seeing all the C-family languages and JavaScript at the bottom like this makes me wonder if it isn't just that curly brackets take a lot of tokens.

      • xigoi 18 minutes ago
        I imagine that having to write

          for (int index = 0; index < size; ++index)
        
        instead of

          for index in 0...size
        
        eats up a lot of tokens, especially in C where you also need this construct for iterating over arrays.
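
        For what it's worth, this is easy to check with a tokenizer; a quick sketch with the `tiktoken` package (counts differ between tokenizers):

          # Compare token counts for the verbose C loop header vs. the
          # range-style loop above; illustrative only.
          import tiktoken

          enc = tiktoken.get_encoding("cl100k_base")
          c_loop = "for (int index = 0; index < size; ++index)"
          range_loop = "for index in 0...size"
          print(len(enc.encode(c_loop)), "vs", len(enc.encode(range_loop)))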
    • muyuu 2 hours ago
      You could, but you wouldn't when those keywords can all change in equivalent contexts.
      • eru 2 hours ago
        What do you mean?

        `public` might have a token by itself, even though you can have `pub` occurring in other contexts, too.

  • tzahifadida 24 minutes ago
    An agent can write summaries to Markdown files while it works, then use those to break the problem into several issues and tackle them one by one - even automatically, but more usually interactively. The problem now is the technique, not the LLM. Yes, it costs a lot (lot) more. But it can do it, and people's work costs way more than tokens.
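
    A minimal sketch of that loop (all names here, including the `llm` helper, are hypothetical placeholders rather than a real agent API):

      # Summarize, split into issues, then tackle each issue with only the
      # running Markdown notes as context, rather than the full history.
      from pathlib import Path

      def llm(prompt: str) -> str:
          raise NotImplementedError("plug in any chat-completion client")

      def solve(task: str) -> None:
          notes = Path("notes.md")
          notes.write_text(llm(f"Summarize this task as Markdown notes:\n{task}"))
          issues = llm(f"Split into numbered issues:\n{notes.read_text()}").splitlines()
          for issue in issues:
              result = llm(f"Notes:\n{notes.read_text()}\n\nResolve: {issue}")
              notes.write_text(notes.read_text() + "\n\n" + result)  # keep notes current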
  • bicx 4 hours ago
    Realistically, it’s also a function of how many iterations it takes for an AI agent to correctly solve a problem with a given language. I’d imagine most AI agents would frequently have to redo J or F# code, as they are fairly uncommon languages with a much smaller training set than JavaScript or Python.
    • Jacques2Marais 3 hours ago
      I can say that for F# this has been mostly true up until quite recently. We use F# at work and were mostly unable to use agents like Claude Code up until the release of Opus 4.5, which seems to know F# quite well.
  • 112233 23 minutes ago
    If the language supports comments and the LLM is allowed to write them (or docstrings, or any such), there go your tokens.

    Plus, they will strongly "pull" the context when the LLM parses them back, to the point of overriding your instructions (true story)

  • verdverm 53 minutes ago
    Token efficiency is only one metric. Simplicity of syntax and semantics is another valuable one.

    Re: tokens and session length, there are other ways to manage this than language choice. Summarization is one; something I do is to not put read_file content in the messages, but rather in the system prompt. This means that when the model tries to reread a file after an edit, we don't have two copies of the file in context.
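
    A sketch of that trick, with purely illustrative names: keep one copy of each file in a cache that is rendered into the system prompt, so a re-read replaces the old copy instead of appending a second one.

      # read_file results go into a per-path cache; the system prompt is
      # rebuilt from the cache each turn, so each file appears only once.
      file_cache: dict[str, str] = {}

      def read_file(path: str) -> str:
          with open(path) as f:
              file_cache[path] = f.read()  # overwrite on re-read after an edit
          return f"(contents of {path} are in the system prompt)"

      def build_system_prompt(base: str) -> str:
          files = "\n\n".join(f"## {p}\n{c}" for p, c in file_cache.items())
          return f"{base}\n\nOpen files:\n{files}"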

    Going to 10M-token sessions, keeping per-turn context under 100k, working on Golang... token count does not seem like a good basis for choosing a language.

  • protocolture 3 hours ago
    I have always had concerns about physical robots making my work less safe in the real world.

    But I had never considered that a programming language might be created that's less human-readable/auditable in order to enable LLMs.

    Scares me a bit.

    • make3 1 hour ago
      LLMs in their current form rely heavily on the vast amount of human data that's available, learning from it as a first step (the second step is RL).

      We're not building a language for LLMs just yet.

      • jaggederest 30 minutes ago
        > We're not building a language for LLMs just yet.

        Working on it, actually! I think it's a really interesting problem space: being efficient on tokens, readable by humans for review, strongly and statically typed for reasoning purposes, and extremely regular in syntax. One of the biggest issues with symbols is that matching parentheses is relatively easy for a human, but the models struggle with it.

        I expect a language like the one I'm playing with will mature enough over the next couple of years that models with a knowledge cutoff around 1/2027 will probably know how to program in it well enough for it to start being viable.

        One of the things I plan to do is build evals so that I can validate the performance of various models on my as yet only partially baked language. I'm also using only LLMs to build out the entire infrastructure, mostly to see if it's possible.

        • quinnjh 22 minutes ago
          Do you expect the model to train on synthetic data, or do you expect to grow a userbase that will generate organic training data?

          > One of the biggest issues with symbols is that, to a human, matching parentheses is relatively easy, but the models struggle with it.

          Great point. I find it near trivial to close parens, but LLMs seem to struggle with the Lisps I've played with because of this counting issue, to the point where I've not been working with them as much. TypeScript and functional JS, as other commenters note, are usually smooth sailing.

          • jaggederest 16 minutes ago
            > do you expect the model to train on synthetic data or do you expect to grow a userbase that will generate organic training data?

            Both, essentially: I expect the code examples to grow organically, but I expect most of them to come from LLMs; after all, that's the point of the language. I basically expect there to be a step function in effectiveness once the language has been ingested by the models, but they're already plenty decent-ish at it right now.

            The most fascinating thing to me about generating the whole thing has been that the LLMs are really, really good at iterating in a tight loop: updating the interpreter with new syntax, updating the stdlib to use that new syntax, building some small extension to try it out, and then surfacing the need for a new builtin or primitive to start the cycle over.

            I'm also leaning heavily on ChatGPT-5.2's insanely good math skills, and the language I'm building is very math-heavy - it's essentially a distant cousin of Idris or the other dependently typed theorem-proving languages.

      • energy123 39 minutes ago
        It's worth asking why we haven't had the AlphaZero moment for general learning yet, where no human data is needed.
  • efitz 3 hours ago
    This is interesting research; thank you for doing it.

    I am not sure token efficiency is an interesting problem in the long term, though.

    And in the short term, I wonder if prompts could be pre-compiled into "compressed tokens": the idea would be to use a smaller number of tokens to represent a frequently needed concept, kind of like LZ compression. Or maybe token compression becomes a feature of future models optimized for specific tasks.
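
    As a toy version of that idea (purely illustrative; real savings would depend on the placeholders tokenizing shorter than the phrases they replace, amortized over many uses):

      # Dictionary-style prompt "compression": swap frequent phrases for
      # short placeholders and prepend a legend the model can expand.
      PHRASEBOOK = {
          "public static void main(String[] args)": "<PSVM>",
          "if err != nil { return err }": "<ERRRET>",
      }

      def compress(prompt: str) -> str:
          legend = "\n".join(f"{short} = {phrase}" for phrase, short in PHRASEBOOK.items())
          for phrase, short in PHRASEBOOK.items():
              prompt = prompt.replace(phrase, short)
          return f"Abbreviations:\n{legend}\n\n{prompt}"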

    I was wondering last year if it would be worthwhile to try to create a language that was especially LLM-friendly, e.g. one that embedded more context in the language structure. The idea is to make more of the program, and the thinking behind it, explicit to the LLM, but in a programming-language style that eliminates the ambiguity of natural language (one could just use comments).

    Then it occurred to me that, with current LLM training methodology, there's a chicken-and-egg problem: a new language doesn't start to show rewards until there is a critical mass of good code in it for LLMs to train on.

  • gpm 4 hours ago
    It strikes me that more tokens likely give the LLM more time/space to "think". Also that more redundant tokens, like local type declarations instead of type inference from far away, likely often reduce the portion of the code LLMs (and humans) have to read.

    So I'm not convinced this is the right metric, or, even if it is, that it's a metric you want to minimize.

    • limoce 3 hours ago
      I think separating thinking tokens from "representing" tokens might be a better approach, like what thinking models do
    • make3 1 hour ago
      With chain-of-thought (text thinking), the models can already use as much compute as they want in any language (determined by reinforcement-learning training)
      • gpm 1 hour ago
        I'm not convinced that thinking tokens - which sort of have to serve a specific chain-of-thought purpose - are interchangeable with input tokens, which give the model compute without having it add new text.

        For a very imperfect human analogy, it feels like saying "a student can spend as much time thinking about the text as they want, so the textbook can be extremely terse".

        Definitely just gut feelings though - not well tested or anything. I could be wrong.

  • didip 3 hours ago
    Does it account for errors generated by runtime bugs, which cause prompts to be rerun?

    Because that’s what happened in the real world when generating a bunch of untyped Python code.

  • HarHarVeryFunny 4 hours ago
    I don't think context size is really the limit for larger codebases - it's more about how you use that context.

    Claude Code makes some efforts to reduce context size, but at the end of the day it loads entire source files into context (then keeps them there until told to remove them, or until the context is compacted). One of the major wins is running subagents for some tasks, which use their own context rather than loading more into CC's own context.

    Cursor makes more efficient use of context by building a vector database of code chunks, then only loading matching chunks into context (I believe it does this for Composer/agentic use as well as for tab/autocomplete).
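
    A minimal sketch of that chunk-and-retrieve shape (bag-of-words overlap stands in for a real embedding model; this is not Cursor's actual implementation):

      # Split source into fixed-size chunks, then load only the chunks
      # that best match the query into context.
      def chunk(source: str, size: int = 40) -> list[str]:
          lines = source.splitlines()
          return ["\n".join(lines[i:i + size]) for i in range(0, len(lines), size)]

      def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
          q = set(query.lower().split())
          return sorted(chunks, key=lambda c: -len(q & set(c.lower().split())))[:k]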

    One of the more obvious ways to reduce context use in a larger multi-module codebase would be to take advantage of the split between small module definition (e.g. C++ .h files) and large module implementations (.cpp files). Generally you'd only need to load module interfaces/definitions into context if you are working on code that uses the module, and Cursor's chunked approach can reduce that further.
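
    That interface-only idea might look something like this (paths and layout are assumptions for illustration):

      # Bundle every header (interface) plus only the one implementation
      # file actually being edited, instead of loading all .cpp files.
      from pathlib import Path

      def build_context(repo: Path, editing: Path) -> str:
          headers = [p.read_text() for p in sorted(repo.rglob("*.h"))]
          return "\n\n".join(headers + [editing.read_text()])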

    For a whole-codebase overview, a language server can help locate things, and one could use the AI itself to generate shortish summaries/overviews of the source files, the codebase, and its structure, similar to what a human developer might keep in their head, rather than repeatedly reading entire source files for code that isn't actually being modified.

    It seems we're really in the early days of agentic coding tools, and they have a lot of room to get better and more efficient.

    • CuriouslyC 4 hours ago
      The approaches used by Claude Code and Cursor are inefficient. It's possible to calculate a covering set for a piece of code and provide that to an agent directly via a tool, and it turns out that this can reduce context usage in SWE-bench style tasks by >90% over RAG and grep/read.

      If you're interested in learning more, https://github.com/sibyllinesoft/scribe
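
      One generic way to read "covering set" (a sketch over a precomputed dependency graph; not necessarily scribe's actual algorithm):

        # Walk outward from the target file through its dependencies and
        # collect everything reachable; that bounded set is the context.
        from collections import deque

        def covering_set(target: str, deps: dict[str, list[str]]) -> set[str]:
            seen, queue = {target}, deque([target])
            while queue:
                for dep in deps.get(queue.popleft(), []):
                    if dep not in seen:
                        seen.add(dep)
                        queue.append(dep)
            return seen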

      • trueno 1 hour ago
        Like most LLM-made READMEs, and the six bajillion AI/agentic/LLM tools now on GitHub, I can barely get a grasp on what I'm looking at here, or how to use it practically.

        > Smart code bundler that turns repositories into optimized code bundles meeting a token budget in milliseconds

        OK. So it's a tool; do I use it on my repo once? Then what? Do I use it as I go? Does it sit somewhere accessible to something like Claude Code, with the onus on me to direct Claude to use it to search files instead of his out-of-the-box workflow? I can see some CLI examples; what should I do with those, and where do they fit into what people are using with Cursor / Claude / Gemini, etc.?

        This is the part I've been trying to hammer home about LLM-created stuff: it leaves us with vague, not-well-understood outcomes that might do something. People are shipping and delivering things they don't even understand now, and they often can't speak to what their thing does with an acceptable level of authority. I'm not against creating tools with LLMs, but I'm pretty against people creating the basic README with an LLM. Wanna make a tool with an LLM? More power to you. But make sure you understand what was made, because we need humans in here telling other humans how to use it. LLMs flat out lose the plot over the course of a large project, and I think a big issue is that LLMs can sometimes be more eloquent writers than a lot of people, so people opt for the LLM-generated README.

        But as someone who would maybe consider using something like this, I see that README and it just looks like every Claude Code thing I've put together to date. Which is to say: I've done some seemingly impossible things with Claude, only to find that his ability to recap the entirety of it ended up as a whole lot of seemingly meaningful words, phrases, and sentences that actually paint a super disjointed picture of what exactly a repo is about.

  • switchbak 4 hours ago
    I would expect that we’ll end up compressing (or whatever term you would use) this at some point, so many of those syntactical differences will not be as significant.

    But I would love for more expressive and compact languages to do better, selfish as I am. But I think training data size is more of a factor, and we won’t all be moving to Clojure any time soon.

  • btbytes 4 hours ago
    Not surprisingly, it is J [1], an APL dialect.

    [1] https://www.jsoftware.com/

    • eimrine 3 hours ago
      I knew it without reading. But having each primitive come in two versions that aren't even closely related to each other (monadic/dyadic) makes it hard for me to learn. I really appreciate this language for its shortness, but this kind of shortness might annoy.
  • bri-holt 4 hours ago
    I suspect DB queries will also benefit from token-efficient query languages as RAG queries grow exponentially. I've been working on one such language, which is emitted as a token-efficient IR and compiles to SQL. https://memelang.net/
  • epolanski 4 hours ago
    I doubt this is a meaningful metric for anything but code exploration in a larger codebase.

    E.g. when it comes to authoring code, C, which ranks near the bottom here, is by far one of the languages that LLMs excel most at.

    • anishgupta 3 hours ago
      I guess it also depends on which dataset the LLM was trained on. Rare or niche languages get fragmented into more tokens even if the code itself is short. So two languages with the same number of characters can produce very different token counts, because one aligns with what the model has seen millions of times and the other does not.
  • TZubiri 2 hours ago
    05AB1E
    • xigoi 12 minutes ago
      Given that APL was penalized for using a non-ASCII character set, this would presumably also affect 05AB1E.
  • awesome_dude 2 hours ago
    I'm finding that I have to share more and more code to ensure that various standards are being kept.

    For example, I shared some Model code with Claude and Gemini (both via their web interfaces), and they both tried to put Controller code into the Model, despite my telling them multiple times that the code wasn't wanted or needed there.

    I had to (eventually) share the entire project with the models (despite them having been working with the code all along) before they would comply with my request (whilst also congratulating me on my far superior architecture..)

    That costs more tokens per problem than just saying "here, look at this section and work toward this goal".
