I feel like we need more awareness of what open source is and how it works. This is NOT open source. This is, at best, source-available, but since there is no way to confirm that this code even runs anywhere, ever, it's entirely a bad-faith performance to trick people, deceive regulators, and stain the entire open source movement.
I sincerely hope that the mainstream media does not fall for this and instead calls it out. It's not rocket science. It's really, really simple: this is not good for anyone.
So in the end, are we going by the OSI's definition of Open Source, or not? Can we make up our minds, please?
Every time anyone posts here even a slightly modified Open Source license (e.g. an MIT license with an extra restriction that prevents megacorporations from using it but doesn't affect anyone else), people come out of the woodwork with their pitchforks screaming "this is not Open Source!", insisting that the Open Source Definition decides what is Open Source, and that nothing which fails to meet that definition may be called "Open Source".
And yet here we are with a repository licensed under an actually Open Source license, and suddenly this is the most upvoted comment, and now people don't actually care about the Open Source Definition after all?
> there is no way to confirm that this code even runs anywhere ever
I'm confused what this has to do with "open source" or how it affects public perception.
I agree with you that it's totally possible to lie about what is actually running in production and that sharing some code doesn't mean it's that code, but how is this a new problem?
This clearly has the goal of muddying the waters around the DSA's transparency requirements. It's an opaque way of trying to mislead users into believing that X is being transparent while not being so at all.
They pretend to be transparent about their algorithms while denying researchers access to their API through exorbitant pricing and severely limited quotas.
It seems like what they've released is entirely useless. Just done for the headlines, I guess. All the real information is in the components they did not provide. They may as well have uploaded the CPython source and told us that was the algorithm, while the hand-engineered heuristics it executes live in a closed-source .py file.
Not really that surprising: all the logic that used to be in the code is now in the model; the only code left is some glue to connect the outside world to the number crunching, just like llama2.c runs a whole LLM in only ~700 lines of C.
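To make that concrete, the serving-side shape is roughly this (a minimal sketch with made-up names, not code from the repo):

    import numpy as np

    def featurize(user_history, candidate):
        # stand-in: real systems build embeddings from the engagement history
        return np.array([len(user_history), candidate["age_hours"]], dtype=np.float32)

    def rank_feed(user_history, candidates, model):
        feats = np.stack([featurize(user_history, c) for c in candidates])
        scores = model(feats)          # all the actual "algorithm" lives in here
        order = np.argsort(-scores)    # glue: sort candidates by model score
        return [candidates[i] for i in order]

Everything interesting is inside `model`; the code around it is just plumbing.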
They're eating the code. They're eating the algorithms.
Social media apps do not compete on code quality, but on user capture. People go to X because their friends are on X or there is someone on X they want to follow. The sole valuable asset of any social media company is how many people use it. That's why, when Musk bought Twitter, he discarded the branding, let go of the software engineers, rewrote the backend, and ditched the moderation. The only thing he was interested in buying was Twitter's captive users and the value embedded in their social relations and generated content.
X's content moderation relies on a mix of AI and human review, focusing on automated systems and user reports. There's less emphasis on account suspensions and more on reach restriction, alongside community-led moderation like "Community Notes".
Someone will, and whoever does it will probably use an agent CLI: Claude Code with Opus 4.5, Codex CLI with GPT‑5.2‑Codex, Gemini CLI with Gemini 3 Pro, GitHub Copilot CLI, etc. I'm 100% sure of it; I'd bet everything I have. Heck, even the code change was made by an AI agent called "CI Agent" <support@x.ai>, as seen here: https://github.com/xai-org/x-algorithm/commit/aaa167b3de8a67...
If I "Open Source" windows 11 but lie and put some other junk there then I can't CLAIM to have open sourced windows 11 now can I?
Honestly, this looks like a PoC (proof of concept). They've open sourced what used to be a PoC at one point.
I'm sure there are many examples, but here's the first Google search result: https://www.theguardian.com/us-news/2025/nov/12/elon-musk-gr...
"We have open-sourced our new algorithm, powered by the same transformer architecture as xAI's Grok model."
Is Grok not an LLM? Or do they have other models under that brand?
> Is Grok not an LLM?
The transformer is the underlying technology for (most) LLMs (GPT stands for “Generative Pre-Trained Transformer”).
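For anyone unfamiliar, the whole family is built by stacking one core op, scaled dot-product attention; a minimal numpy version of just that op:

    import numpy as np

    def attention(Q, K, V):
        # scaled dot-product attention, the building block of a transformer
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                  # query/key similarity
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w = w / w.sum(axis=-1, keepdims=True)          # softmax over keys
        return w @ V                                   # weighted mix of values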
I have character, and I'm a German who knows his history.
Oh, I see, it is not really meant to be built. Some code is omitted.
Someone explain.
They most likely have some secret sauce that they don't release to the public.
xAI likely needs both more than usual nowadays.
Plus, they have done this before, and no real competitor arose since the last time they did it. So why not do it again?
looks like this is the "For You" feed, once again shared without weights, so we only have so much visibility into the actual influence of each trait.
"We have eliminated every single hand-engineered feature and most heuristics from the system. The Grok-based transformer does all the heavy lifting by understanding your engagement history (what you liked, replied to, shared, etc.) and using that to determine what content is relevant to you." aka it's a black box now.
the README is actually pretty nice, would recommend reading it. it doesn't look too different from Elon's original code review tweet/picture https://x.com/elonmusk/status/1593899029531803649?lang=en
sharing additional notes while diving through the source: https://deepwiki.com/xai-org/x-algorithm
and a codemap of the signal generation pipeline: https://deepwiki.com/search/make-a-map-of-all-the-signals_3d...
- Phoenix (out of network) ranker seems to have all the interesting predictive ML work. it estimates P(favorite), P(reply), P(repost), P(quote), P(click), P(video_view), P(share), P(follow_author), P(not_interested), P(block_author), P(mute_author), P(report) independently, and then the `WeightedScorer` combines them using configurable weights. there's an extra DiversityScore and OONScore to add some adjustments, but again we don't know the weights (see the combiner sketch after this list) https://deepwiki.com/xai-org/x-algorithm/4.1-phoenix-candida...
- other scores of interest: photo_expand_score, dwell_score, and dwell_time. share via copy, share, and share via dm are all obviously "super like" buttons.
- Two-Tower retrieval uses dot product similarity between user features/engagement (User Tower) and normalized embeddings for all items (Candidate Tower). but when you look into the code, and considering that this is probably the most important model for recommendation quality... it's maybe a little disappointing that it's a 2-layer MLP? (sketch after this list) https://deepwiki.com/search/what-models-are-used-for-user_98...
- Grok-1 JAX transformer (https://github.com/xai-org/x-algorithm/blob/main/phoenix/REA...) uses special attention masking that prevents candidates from attending to each other during inference. Each candidate only attends to the user context (engagement history). This ensures a candidate's score is independent of which other candidates are in the batch, enabling score consistency and caching (sketch after this list). nice image here https://github.com/xai-org/x-algorithm/blob/main/phoenix/REA...
- kind of nice usage of Rust traits to create a type-safe data pipeline. look at this beautiful flow chart https://deepwiki.com/xai-org/x-algorithm/3-candidate-pipelin... and the "Field Ownership pattern" https://deepwiki.com/xai-org/x-algorithm/3.6-scorer-trait#fi...
- the ten pre-scoring filters are mildly interesting, nothing super surprising here apart from AgeFilter (https://deepwiki.com/xai-org/x-algorithm/4.6.1-agefilter), which I guess means beyond a certain max_age (1 day?) nothing ever shows up on For You. surprising to have a simple flat cutoff vs, i guess, the alternative of an exponential aging algorithm (sketch after this list).
- videoduration hydrator explicitly factors in video duration (https://deepwiki.com/xai-org/x-algorithm/4.5.6-videoduration...) but we don't know in what direction... do you recommend shorter or longer videos? and why a hydrator for what is presumably a pretty static property?
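minimal python sketches of the mechanisms flagged above. first, the `WeightedScorer`-style combiner; to be clear, the weights below are pure guesses, since the repo ships the combiner but not the production values:

    # hypothetical weights -- illustrative only, NOT the production values
    WEIGHTS = {
        "favorite": 1.0, "reply": 2.0, "repost": 1.5, "quote": 1.5,
        "click": 0.2, "video_view": 0.1, "share": 3.0, "follow_author": 4.0,
        "not_interested": -5.0, "block_author": -10.0,
        "mute_author": -10.0, "report": -20.0,
    }

    def weighted_score(probs: dict[str, float]) -> float:
        # each head predicts P(action) independently; the final score is a
        # weighted sum, so negative signals (block, report) subtract directly
        return sum(WEIGHTS[action] * p for action, p in probs.items())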
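second, the two-tower retrieval shape with the underwhelming "2-layer MLP" towers (all dimensions and parameter names here are assumptions):

    import numpy as np

    def tower(x, W1, b1, W2, b2):
        # the "2-layer MLP": one ReLU hidden layer, then a projection
        h = np.maximum(0.0, x @ W1 + b1)
        return h @ W2 + b2

    def retrieve(user_feats, item_feats, user_params, item_params, k=10):
        u = tower(user_feats, *user_params)                 # User Tower
        V = tower(item_feats, *item_params)                 # Candidate Tower
        V = V / np.linalg.norm(V, axis=-1, keepdims=True)   # normalized items
        scores = V @ u                                      # dot-product similarity
        return np.argsort(-scores)[:k]                      # indices of top-k items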
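third, the candidate-isolation attention mask is easy to picture: candidates see the user context but never each other. this is a guess at the shape, with details like causal masking within the context elided:

    import numpy as np

    def candidate_isolation_mask(n_ctx, n_cand):
        # rows = query positions, cols = key positions; True = may attend
        n = n_ctx + n_cand
        mask = np.zeros((n, n), dtype=bool)
        mask[:, :n_ctx] = True            # every position sees the user context
        cand = np.arange(n_ctx, n)
        mask[cand, cand] = True           # each candidate also sees itself
        return mask

with that mask, a candidate's logits can't depend on which other candidates share the batch, which is what makes the per-candidate caching work.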
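and last, the flat cutoff vs exponential aging contrast (max_age and half-life are made-up numbers):

    def flat_cutoff(score, age_hours, max_age=24.0):
        # what AgeFilter appears to do: anything past max_age is dropped outright
        return score if age_hours <= max_age else None

    def exponential_aging(score, age_hours, half_life=6.0):
        # the alternative: old posts fade smoothly instead of vanishing at a wall
        return score * 0.5 ** (age_hours / half_life)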
open questions from me
1. how large is the production reranker? the default param count is here https://deepwiki.com/search/how-many-params-is-the-transfo_c... but that gives no indication of what actually runs in prod. the latency felt ultra high when this launched last year and seems to have come down some; what budget are we working with?
2. can we make the retrieval better? i don't have a ton of confidence in the User Tower / Candidate Tower system. is this SOTA? (it's probably not: see how youtube does codebook semantic IDs https://www.youtube.com/watch?v=LxQsQ3vZDqo&list=PLcfpQ4tk2k... )
3. no a/b testing / rollout infrastructure?
4. so many hydration subsystems - is this brittle?