Just did a funding round. In a sign of the times, ClickHouse used to be an interesting DB product, but is now "database software that companies can use as they develop AI agents."
<i>Database technology startup ClickHouse Inc. has raised $400 million in a new funding round that values the company at $15 billion — more than double its valuation less than a year ago. </i>
Investors are finicky creatures. If you've been relying on VC funding from the start, it's hard to stop until you're truly successful, and if everyone starts looking only at shiny AI stuff and you still need investors, you end up without much choice.
I wish there were less of it; we'd have better software then, but :/
Yeah, like FOSS, which has been drastically underfunded since birth yet continues to put out software the entire world ends up relying on, instead of relying on whatever VC-pumped companies are putting out.
I'm not talking "better software" as in "made a lot of money"; I meant "better" as in "had a better impact on the world".
Note that the headline is from Langfuse, not ClickHouse. Reading the announcement from ClickHouse[0], the headline is "ClickHouse welcomes Langfuse: The future of open-source LLM observability". I think the Langfuse team is suggesting that they will be continuing to do the same work within ClickHouse, not that the entire ClickHouse organization has a goal of building the best LLM engineering platform.
Your notes aren't very good. They're not a time series database company; they're a columnar database company. But yeah, the LLM bit is weird: database companies _always_ feel like charlatans when it comes to LLMs.
"Berkshire Hathaway Inc. is an American multinational conglomerate holding company" is a weird thing for a textile manufacturer to call itself. Almost like...businesses expand and evolve?
(they've never been a time series database company either lol)
For those building applications with Langfuse and Clickhouse - do you like these products? I get the odd request to do an AI thing, and my previous experience with LLM wrappers convinced me to stay away from them (Langchain, Llamaindex, Autogen, others). In some cases they were poorly written, and in other ways the march of progress rendered their tooling irrelevant fairly quickly. Are these better?
The observability stuff can be nice for deployments, but really, these libraries/frameworks don't do much more than provide some structure. Unless you're expecting a team with high turnover to maintain it, that doesn't matter all that much; if you're an experienced developer, you'll find better designs/architectures fitting your use case without them.
Hm I find this very much a "please reinvent the wheel" take.
These frameworks provide structure for established patterns, but they also do a lot that you no longer have to do yourself. If you are, for example, building an agentic application, then these kinds of frameworks make it very simple to create the workflows, handle the chat with model providers, and provide structure for agentic skills, decision making, the human in the loop, etc.
All stuff that I would consider "low level". All things you don't have to build.
If you have an aversion to frameworks then sure - by all means. But if you like to move faster using good building blocks, then these frameworks really help.
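To make the "low level plumbing" concrete, here is a minimal, hypothetical sketch of the kind of structure an agent framework provides: a tool registry, a decision step, and a human-in-the-loop gate. All names are made up for illustration, and the model call is stubbed out where a real framework would talk to a provider API.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    tools: dict = field(default_factory=dict)
    require_approval: bool = False  # human-in-the-loop gate

    def register(self, name, fn):
        self.tools[name] = fn

    def decide(self, task):
        # Stub: a real agent would ask the model which tool to call.
        return ("add", (2, 3)) if "add" in task else (None, None)

    def run(self, task):
        tool, args = self.decide(task)
        if tool is None:
            return "no tool chosen"
        if self.require_approval:
            # A framework would pause here and wait for a human decision.
            print(f"approve call to {tool}{args}? (assumed yes)")
        return self.tools[tool](*args)

agent = Agent()
agent.register("add", lambda a, b: a + b)
print(agent.run("add two numbers"))  # → 5
```

Even this toy version shows why people reach for a framework: the registry, dispatch, and approval plumbing is the same for every agent, so there's little reason to rewrite it each time.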
One thing to keep in mind: many of these AI frameworks are open source and work well without any backend services, or you can self-host them where needed. For many of them, though, the premium model is "please use and pay for our backend services." That is also a choice, of course.
As a big ClickHouse fan, agent evals are where their product really shines. They're buying into a market segment where their product is succeeding so they can vertically integrate and tighten up the feedback loop.
Without the purchase price, it is unclear whether this deserves congratulations or condolences.
Two years in the LLM race will have definitely depleted their seed raise of $4m from 2023, and with no news of additional funds raised it's more than likely this was a fire sale.
It was not a fire sale, I'm pretty sure. Langfuse has been consistently growing; they publish some stats about SDK usage etc., so you can look that up.
They also say in the announcement that they had a term sheet for a good Series A.
I think the team just took the chance to exit early before the LLM hype crashes down. There is also a question of how big this market really is: they mostly do observability for chatbots, but there are only so many of those, and with other players like OpenAI's tracing, Pydantic Logfire, PostHog, etc., they become more a feature than a product of their own. Without a great distribution system they would eventually fall behind, I think.
Two years to a decent exit (probably a ~$100m cash-out, with a good chunk being ClickHouse shares) seems like a better idea than betting on that story continuing forever.
Anecdotally, from the AI startup scene in London, I do not know folks who swear by Langfuse. Honestly, evals platforms are still only just starting to catch on. I haven't used any tracing/monitoring tools for LLMs that made me feel like, say, Honeycomb does.
I predict it will be Pydantic next to get picked up by someone for logfire and agent framework.... fine as long as all these open source projects stay open source then good for them
We use it for our internal doc analysis tool. We can easily extract production generations, save them to datasets, and test edge cases.
Also, it allows separating prompts into folders. With this, we have a pipeline for doc analysis where we have default prompts and the user can set custom prompts for part of the pipeline. Execution checks for a user prompt before inference; if there isn't one, it uses the default prompt, which is already cached in code. We plan to evaluate user prompts to see which perform better and use them to improve the default prompts.
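The fallback step described above (user prompt if set, otherwise the default cached in code) can be sketched like this. The names and prompts are illustrative, not the actual Langfuse API.

```python
# Defaults cached in code, one per pipeline stage.
DEFAULT_PROMPTS = {
    "extract": "Extract the key fields from this document.",
    "summarize": "Summarize this document in three sentences.",
}

def resolve_prompt(stage: str, user_prompts: dict) -> str:
    """Prefer the user's custom prompt for a stage; else fall back to the default."""
    return user_prompts.get(stage) or DEFAULT_PROMPTS[stage]

user_prompts = {"summarize": "Summarize for a legal audience."}
print(resolve_prompt("extract", user_prompts))    # default prompt
print(resolve_prompt("summarize", user_prompts))  # user's custom prompt
```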
Iterating on LLM agents involves testing on production(-like) data. The most accurate way to see whether your agent is performing well is to see it working on production.
You want to see the best results you can get from a prompt, so you use features like prompt management and A/B testing to see which version of your prompt performs better (i.e. is fit to the model you are using) on production.
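The A/B idea boils down to splitting requests deterministically between prompt versions and tallying outcomes per version, so the better-performing prompt can be promoted. A hypothetical sketch of that mechanism (not any particular product's API):

```python
import hashlib
from collections import defaultdict

# Two candidate versions of the same prompt.
VARIANTS = {"A": "Answer concisely.", "B": "Answer step by step."}

def assign_variant(request_id: str) -> str:
    # Stable hash so the same request always sees the same prompt version.
    h = int(hashlib.sha256(request_id.encode()).hexdigest(), 16)
    return "A" if h % 2 == 0 else "B"

scores = defaultdict(list)

def record(request_id: str, success: bool):
    scores[assign_variant(request_id)].append(success)

# Feed in some (request, outcome) pairs, then compare success rates.
for rid, ok in [("r1", True), ("r2", True), ("r3", False), ("r4", True)]:
    record(rid, ok)

for variant, results in sorted(scores.items()):
    print(variant, sum(results) / len(results))
```

Real platforms layer versioned prompt storage and eval scoring on top, but the core loop is this: stable assignment, then per-variant metrics on production traffic.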
I do understand why it’s a product - it feels a bit like what Databricks has with model artifacts. I.e. having a repo of prompts you can track performance changes against is good, especially if users other than engineers touch them (e.g. a product manager wants to A/B test).
Having said that, I struggled a lot with actually implementing Langfuse due to numerous bugs and confusing AI-driven documentation. So I’m amazed that it’s being bought, to be really frank. I was just on the free version in order to look at it and make a broader recommendation, and I wasn’t particularly impressed. Mileage may vary though; perhaps it’s a me issue.
I thought the docs were pretty good, just going through them to see what the product was. For me, I just don't see the use case, but I'm not well versed in their industry.
I think the docs are great to read, but implementing was a completely different story for me; e.g. the Ask AI-recommended solution for implementing Claude just didn’t work for me.
They do have GitHub discussions where you can raise things, but I also encountered some issues with installation that just made me want to roll the dice on another provider.
They do have a new release coming in a few weeks so I’ll try it again then for sure.
Edit: I think I’m coming across as negative and do want to recommend that it is worth trying out langfuse for sure if you’re looking at observability!
https://www.bloomberg.com/news/articles/2026-01-16/clickhous...
Interesting headline for a <i>checks notes</i> time series database company.
[0] https://clickhouse.com/blog/clickhouse-acquires-langfuse-ope...
It’s great when you get this insight as a student of NLP, because suddenly your toolset grows quite a bit.
every single day there is an acquisition on here. what's going on in the macro?
https://clickhouse.com/blog/clickhouse-raises-400-million-se...