13 comments

  • barishnamazov 3 hours ago
    I like that this relies on generating SQL rather than just being a black-box chat bot. It feels like the right way to use LLMs for research: as a translator from natural language to a rigid query language, rather than as the database itself. Very cool project!

    Hopefully your API doesn't get exploited and you are doing timeouts/sandboxing -- it'd be easy to do a massive join on this.
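
    For anyone curious what those guardrails could look like: a minimal sketch in Python, assuming the corpus were mirrored into SQLite (the file and table names are hypothetical), that runs generated SQL on a read-only connection and aborts anything that blows its time budget.

      import sqlite3, time

      # Read-only URI mode: LLM-generated SQL can't write, drop, or attach.
      conn = sqlite3.connect("file:hn_corpus.db?mode=ro", uri=True)  # hypothetical file

      def run_untrusted_sql(sql, timeout_s=5.0):
          deadline = time.monotonic() + timeout_s
          # The progress handler fires every ~100k VM ops; a truthy return
          # value aborts the running query (OperationalError: interrupted).
          conn.set_progress_handler(lambda: time.monotonic() > deadline, 100_000)
          try:
              return conn.execute(sql).fetchmany(1000)  # also cap rows returned
          finally:
              conn.set_progress_handler(None, 0)

      rows = run_untrusted_sql("SELECT title FROM items LIMIT 50")  # hypothetical table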

    I also have a question, mostly stemming from my not being knowledgeable in the area -- have you noticed any semantic bleeding when research spans your datasets? E.g., "optimization" probably means different things on arXiv, LessWrong, and HN. Wondering if vector searches account for this given a more specific question.

    • keeeba 2 hours ago
      I don’t have the experiments to prove this, but from my experience it’s highly variable between embedding models.

      Larger, more capable embedding models are better able to separate the different uses of a given word in the embedding space; smaller models are not.
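
      It's easy to poke at directly. A rough sketch with sentence-transformers (model choice arbitrary, example sentences made up): embed "optimization" in differently-flavoured sentences and compare pairwise cosine similarities across model sizes.

        # pip install sentence-transformers
        from sentence_transformers import SentenceTransformer, util

        sentences = [
            "We study optimization of transformer training schedules.",   # arXiv-flavoured
            "Optimization of a proxy metric tends to destroy the goal.",  # LessWrong-flavoured
            "The compiler's optimization passes reorder basic blocks.",   # HN-flavoured
        ]

        for name in ["all-MiniLM-L6-v2", "all-mpnet-base-v2"]:  # smaller vs. larger model
            model = SentenceTransformer(name)
            emb = model.encode(sentences, normalize_embeddings=True)
            # Lower off-diagonal similarities mean the model keeps the senses apart.
            print(name, util.cos_sim(emb, emb))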

      • A4ET8a8uTh0_v2 1 hour ago
        I've been thinking about this a fair bit lately. We have all sorts of benchmarks that describe a lot of factors in detail, but they're all very abstract and don't seem to map clearly to observed behavior. I think we need a different way to catalogue these.
  • bonsai_spool 12 minutes ago
    This may exist already, but I'd like to find a way to query 'Supplementary Material' in biomedical research papers for genes / proteins or even biological processes.

    As it is, the Supplementary Materials are inconsistently indexed so a lot of insight you might get from the last 15 years of genomics or proteomics work is invisible.

    I imagine this approach could work, especially for Open Access data?

  • nielsole 1 hour ago
    I think a prompt + an external dataset is a very simple distribution channel right now to explore anything quickly with low friction. The curl | bash of 2026
  • kburman 2 hours ago
    > a state-of-the-art research tool over Hacker News, arXiv, LessWrong, and dozens

    what makes this state of the art?

    • 7moritz7 1 hour ago
      The scale. How many tools do you know that can query the content of all arXiv papers?
    • ashirviskas 2 hours ago
      The first, so the best at this?
    • nandomrumber 2 hours ago
      The tool is state of the art; the sources are historical.
  • 7777777phil 3 hours ago
    Really useful. I'm currently working on an autonomous academic research system [1] and thinking about integrating this. Currently using a custom prompt + the Edison Scientific API. Any plans to make this open source?

    [1] https://github.com/giatenica/gia-agentic-short

  • nineteen999 3 hours ago
    That's just not a good use of my Claude plan. If you can make it so a self-hosted Llama or Qwen 7B can query it, then that's something.
    • mcintyre1994 2 hours ago
      I think that’s just a matter of their capabilities, rather than anything specific to this?
  • fragmede 1 hour ago
    > I can embed everything and all the other sources for cheap, I just literally don't have the money.

    How much do you need for the various leaks, like the Paradise Papers, the Panama Papers, the Offshore Leaks, the Bahamas Leaks, the FinCEN Files, the Uber Files, etc., and what's your Venmo?

  • mentalgear 3 hours ago
    Nice, but would you consider open-sourcing it? I'm not keen (and I assume others aren't either) on sharing my API keys with a third party.
    • nielsole 2 hours ago
      I think you misunderstood. The API key is for their API, not Anthropic.

      If you take a look at the prompt you'll find that they have a static API key that they have created for this demo ("exopriors_public_readonly_v1_2025")
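
      So trying it is just a plain HTTP request with that key. A sketch for illustration only -- the endpoint URL and payload shape below are guesses, not their documented API:

        import requests

        resp = requests.post(
            "https://api.example.com/v1/query",  # placeholder URL, not the real endpoint
            headers={"Authorization": "Bearer exopriors_public_readonly_v1_2025"},
            json={"sql": "SELECT count(*) FROM items"},  # hypothetical schema
            timeout=30,
        )
        print(resp.json())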

  • m11a 1 hour ago
    The quick setup is cool! I’ve not seen this onboarding flow for other tools, and I quite like its simplicity.
  • gtsnexp 3 hours ago
    Is the appeal of this tool its ability to identify semantic similarity?
    • A4ET8a8uTh0_v2 1 hour ago
      The use case could vary from person to person. When you think about it, Hacker News has a large enough dataset (and one that is widely accessible) to allow all sorts of fun analyses. In a sense, the appeal is:

      who knows what kind of fun patterns could emerge

  • bugglebeetle 3 hours ago
    Seems very cool, but IMO you'd be better off doing an open source version and then a hosted SaaS.
  • octoberfranklin 3 hours ago
    "Claude Code and Codex are essentially AGI at this point"

    Okaaaaaaay....

    • Closi 2 hours ago
      Just comes down to your own view of what AGI is, as it's not particularly well defined.

      While a bit 'time-machiney', I think that if you took an LLM of today and showed it to someone 20 years ago, most people would probably say AGI has been achieved. If someone had written a definition of AGI 20 years ago, we would probably have met it.

      We have certainly blasted past some science-fiction examples of AI like Agnes from The Twilight Zone, which 20 years ago looked a bit silly, and now looks like a remarkable prediction of LLMs.

      By today's definition of AGI we haven't met it yet, but eventually it comes down to 'I know it when I see it' - the problem with this definition is that it is polluted by what people have already seen.

      • sixtyj 12 minutes ago
        Charles Stross published Accelerando in 2005.

        The book is a collection of nine short stories telling the tale of three generations of a family before, during, and after a technological singularity.

      • nottorp 14 minutes ago
        > most people would probably say AGI has been achieved

        Most people who took a look at a carefully crafted demo. I.e. the CEOs who keep pouring money down this hole.

        If you actually use it you'll realize it's a tool, and not a particularly dependable tool unless you want to code what amounts to the React tutorial.

      • bananaflag 1 hour ago
        > If someone wrote a definition of AGI 20 years ago, we would probably have met that.

        No, as long as people can do work that a robot cannot do, we don't have AGI. That was always, if not the definition, at least implied by the definition.

        I don't know why the meme that AGI is not well defined has had such success over the past few years.

        • bonplan23 32 minutes ago
          "Someone" literally did that (+/- 2 years): https://link.springer.com/book/10.1007/978-3-540-68677-4

          I think it was supposed to be a more useful term than the earlier and more common "Strong AI". With regard to strong AI, there was a widely accepted definition - i.e. passing the Turing Test - and we are way past that point already (see https://arxiv.org/pdf/2503.23674).

        • Closi 1 hour ago
          Completely disagree - your definition (in my opinion) is more aligned with the concept of Artificial Super Intelligence.

          Surely the 'General Intelligence' definition has to be consistent between 'Artificial General Intelligence' and 'Human General Intelligence', and humans can be generally intelligent even if they can't solve calculus equations or protein folding problems. My definition of general intelligence is much lower than most people's - I think a dog is probably generally intelligent, although obviously in a different way (dogs are obviously better at learning how to run and catch a ball, and worse at programming Python).

    • phatfish 2 hours ago
      I want to know what the "intelligence explosion" is, sounds much cooler than AGI.
      • adammarples 2 hours ago
        When AI gets so good it can improve on itself
        • peheje 17 minutes ago
          Actually, this has already happened in a very literal way. Back in 2022, Google DeepMind used an AI called AlphaTensor to "play" a game where the goal was to find a faster way to multiply matrices—the fundamental math that powers all AI.

          To understand how big this is, you have to look at the numbers:

          The Naive Method: This is what most people learn in school. To multiply two 4x4 matrices, you need 64 multiplications.

          The Human Record (1969): For over 50 years, the "gold standard" was Strassen’s algorithm, which used a clever trick to get it down to 49 multiplications.

          The AI Discovery (2022): AlphaTensor beat the human record by finding a way to do it in just 47 steps.

          The real "intelligence explosion" feedback loop happened even more recently with AlphaEvolve (2025). While the 2022 discovery only worked for specific "finite field" math (mostly used in cryptography), AlphaEvolve used Gemini to find a shortcut (48 steps) that works for the standard complex numbers AI actually uses for training.

          Because matrix multiplication accounts for the vast majority of the work an AI does, Google used these AI-discovered shortcuts to optimize the kernels in Gemini itself.

          It’s a literal cycle: the AI found a way to rewrite its own fundamental math to be more efficient, which then makes the next generation of AI faster and cheaper to build.

          https://deepmind.google/blog/discovering-novel-algorithms-wi...
          https://www.reddit.com/r/singularity/comments/1knem3r/i_dont...
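
          For the curious, the 2x2 core of Strassen's trick fits in a few lines of Python. A small sketch verifying that the 7-multiplication scheme matches the naive 8-multiplication result (applied recursively to 2x2 blocks, 7 * 7 = 49 gives the 4x4 count above):

            def naive_2x2(A, B):  # 8 scalar multiplications
                (a, b), (c, d) = A
                (e, f), (g, h) = B
                return ((a*e + b*g, a*f + b*h),
                        (c*e + d*g, c*f + d*h))

            def strassen_2x2(A, B):  # Strassen (1969): 7 scalar multiplications
                (a, b), (c, d) = A
                (e, f), (g, h) = B
                m1 = (a + d) * (e + h)
                m2 = (c + d) * e
                m3 = a * (f - h)
                m4 = d * (g - e)
                m5 = (a + b) * h
                m6 = (c - a) * (e + f)
                m7 = (b - d) * (g + h)
                return ((m1 + m4 - m5 + m7, m3 + m5),
                        (m2 + m4, m1 - m2 + m3 + m6))

            A, B = ((1, 2), (3, 4)), ((5, 6), (7, 8))
            assert strassen_2x2(A, B) == naive_2x2(A, B)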

    • Hamuko 2 hours ago
      I have noticed that Claude users seem to be about as intelligent as Claude itself, and wouldn't be able to surpass its output.