12 comments

  • dbalatero 2 hours ago
    I suspect you have something cool, but I think if you told a clearer example story that solves a real-world problem on the homepage it might alleviate some questions I'm seeing (and also having) in the thread here!
  • monster_truck 2 hours ago
    This is neat, what does the actual throughput look like though?

    Have been hacking on a wasm+webtransport stack for distributed simulation workers and found the ceiling on one connection/worker per machine pretty quick. Had to pin adapters/workers to cores to get the latency I was expecting, then needed to use dedicated tx/rx adapters to eliminate jitter. Some bullshit about interrupt scheduling

    • sambigeara 2 hours ago
      It’s a really interesting question.

      The real challenge is gating and reserving “slots” for downstream calls. If seed A on one node calls seed B on another, as it stands, Pollen holds that seed A instance up and waiting (with the memory overhead etc) until the response finds its way back across the cluster.
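
      To make that gating concrete: each in-flight downstream call reserves a slot and pins the caller's instance until the response lands. A rough sketch of the shape in Go (heavily simplified, all names hypothetical, not the real internals):

      ```go
      // Rough sketch only: gating in-flight downstream calls against a
      // fixed slot budget; all names are hypothetical.
      package gate

      import (
          "context"
          "errors"
      )

      type SlotGate struct {
          slots chan struct{} // one token per reservable slot
      }

      func NewSlotGate(n int) *SlotGate {
          return &SlotGate{slots: make(chan struct{}, n)}
      }

      // Reserve blocks until a slot frees up or ctx expires. The caller
      // holds the slot (and the pinned instance's memory) until it
      // invokes release.
      func (g *SlotGate) Reserve(ctx context.Context) (release func(), err error) {
          select {
          case g.slots <- struct{}{}:
              return func() { <-g.slots }, nil
          case <-ctx.Done():
              return nil, errors.New("no slot free before deadline")
          }
      }
      ```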

      You can probably imagine how latencies then start impacting this (especially when a node in US-West is generating traffic that needs to ultimately land on my laptop in the UK), not to mention all the contention from other nodes elsewhere generating load too.

      In the demo, I see about 2,500 rps land on my laptop with 4k-5k rps generated across 4-5 nodes globally, but this is a multi-hop scenario. If a call is only invoking a single, light WASM function, I see much higher throughput.

      The project is in its infancy, no doubt I’ll have lots of fun figuring out how to optimise as it progresses!

      In the first scenario above, memory seems to be the ceiling, in the latter, CPU.

      (Edit: these numbers are really quite meaningless but I wanted to give something tangible)

  • kaoD 3 hours ago
    I know the individual words in the description but I'm a bit confused about what this is.

    What would I use Pollen for?

    I'm not sure I understand the "seed" metaphor.

    • sambigeara 2 hours ago
      Well, that’s a good question. I think the best answer for now is “we’ll see”?

      I use it in place of Tailscale for some homelab applications. I’ve started to deploy other experiments on a “prod” cluster. The demo shows how Pollen responds to a multi-step, pipeline-type application: two WASM seeds and a single egress communicating over the provided RPC mechanism (`pln://seed…` etc.) while Pollen handles routing, back pressure and the like.

      Right now, the workloads need to be stateless. I’m coming up with a story for state at the moment, which’ll likely start as some WAL-like convergent structure with thin (KV store etc) abstractions layered over it. Probably not dissimilar from the pattern underpinning the current CRDT gossip state.
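
      To give that a concrete shape, the sort of thing I mean by "WAL-like convergent structure" is an append-only log merged last-writer-wins into a KV view. Purely speculative, not a committed design:

      ```go
      // Purely speculative sketch: a KV view derived from an append-only
      // log, merged last-writer-wins; not a committed Pollen design.
      package state

      type Entry struct {
          Key    string
          Value  []byte
          Clock  uint64 // e.g. a hybrid logical clock for LWW ordering
          NodeID string // deterministic tie-breaker
      }

      type KV struct {
          log   []Entry          // local append-only WAL
          state map[string]Entry // derived, convergent view
      }

      func NewKV() *KV {
          return &KV{state: make(map[string]Entry)}
      }

      func (kv *KV) Apply(e Entry) {
          kv.log = append(kv.log, e)
          cur, ok := kv.state[e.Key]
          if !ok || e.Clock > cur.Clock ||
              (e.Clock == cur.Clock && e.NodeID > cur.NodeID) {
              kv.state[e.Key] = e
          }
      }

      // Replaying a peer's log in any order converges on the same view,
      // which is what makes the structure gossip-friendly.
      func (kv *KV) Merge(peerLog []Entry) {
          for _, e := range peerLog {
              kv.Apply(e)
          }
      }
      ```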

      • kaoD 2 hours ago
        Let's see if I got this right: so it's something like a private Yggdrasil Network (minus the IPv6 overlay?) meets self-distributing WASM-powered serverless functions? Plus some built-in functions for proxying/serving.
        • sambigeara 36 minutes ago
          Ha, at a hand-wavey level, yes? Like you say, there's no IPv6 overlay; each node just exposes its own primary UDP port, which speaks Pollen's mesh protocol. It uses a single QUIC transport, one QUIC connection per peer, with a combination of streams and datagrams serving the control and data layers.
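
          If you want to play with that transport shape yourself, it's roughly this (a minimal sketch assuming a recent quic-go; not Pollen's actual code):

          ```go
          // Minimal sketch (not Pollen's code), assuming a recent quic-go:
          // one connection per peer, a stream for control, datagrams for data.
          package mesh

          import (
              "context"
              "crypto/tls"

              "github.com/quic-go/quic-go"
          )

          func dialPeer(ctx context.Context, addr string, tlsConf *tls.Config) error {
              conn, err := quic.DialAddr(ctx, addr, tlsConf, &quic.Config{
                  EnableDatagrams: true, // unreliable datagrams alongside streams
              })
              if err != nil {
                  return err
              }
              // Reliable, ordered stream: control-plane traffic.
              ctrl, err := conn.OpenStreamSync(ctx)
              if err != nil {
                  return err
              }
              if _, err := ctrl.Write([]byte("hello")); err != nil {
                  return err
              }
              // Fire-and-forget datagram: latency-sensitive data-plane bits.
              return conn.SendDatagram([]byte("payload"))
          }
          ```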

          I'd say "WASM-powered serverless functions" is a reasonable analogy, if your serverless functions maintained a minimal number of live replicas at any one point Also, of course, you're tied to the physical ceiling of the explicit hosts that are underpinning your cluster (N machines which are not dynamic like, say, lambdas are when they auto-provision to match demand).

          And yeah, you can also `pln serve` arbitrary services which are then exposed to the cluster, but it's worth mentioning that these will of course not benefit from the inherent, organic autoscaling and locality mechanisms that come with the WASM blobs. I only added it in as a feature so I could retire my (basic) Tailscale usage.

          Also, you can `pln seed` arbitrary blobs which can be `pln fetch`ed from other nodes. You can also `pln seed ./public my-site` a static webpage which you can reach from any node with `curl -H "Host: my-site" http://<node-addr>:8080/` (8080 being a configurable port).

  • sambigeara 2 days ago
    Hi everyone, I'm Sam. I started Pollen as an experiment last summer, got carried away, and have landed here.

    It's a single Go binary. Install it on every machine you want in the cluster and they self-organise. Topology is derived deterministically from gossiped state, so workloads land where there's capacity, replicas migrate toward demand, and survivors rehost from failed nodes. The mesh is built on ed25519 identity with signed properties; any TCP or UDP service you pin gets mTLS. Connections punch direct between peers where possible, otherwise they relay through mutually accessible nodes.
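
    For a flavour of what "topology derived deterministically from gossiped state" can mean, here's a toy rendezvous (HRW) hashing placement in Go. It's illustrative only, not Pollen's actual algorithm, but it shows how every node can compute identical placements from shared state with no coordinator:

    ```go
    // Toy illustration (not Pollen's actual algorithm): rendezvous
    // hashing gives every node the same placement decision, provided
    // they agree (via gossip) on the membership list.
    package place

    import (
        "crypto/sha256"
        "encoding/binary"
        "sort"
    )

    func score(nodeID, workload string) uint64 {
        h := sha256.Sum256([]byte(nodeID + "/" + workload))
        return binary.BigEndian.Uint64(h[:8])
    }

    // placeReplicas picks n hosts for a workload; identical inputs
    // yield identical outputs on every node, no coordinator required.
    func placeReplicas(nodes []string, workload string, n int) []string {
        sorted := append([]string(nil), nodes...)
        sort.Slice(sorted, func(i, j int) bool {
            return score(sorted[i], workload) > score(sorted[j], workload)
        })
        if n > len(sorted) {
            n = len(sorted)
        }
        return sorted[:n]
    }
    ```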

    I built it because I'm fascinated by local-first, convergent systems, and because I wanted to see if said systems could be applied to flip the traditional workload orchestration patterns on their head. I also _despise_ the operational complexity of modern systems and the thousands of bolted-on tools they demand. So I've attempted to make Pollen's ergonomics a primary concern (two-ish commands to a cluster, etc).

    It serves busy, live, globally distributed clusters (per the demo), but it's very early days, so don't be surprised by any rough edges!

    Very happy to answer anything in the thread!

    Cheers.

    Docs: https://docs.pln.sh

    • Levitating 3 hours ago
      You have some workload demos, which I'll definitely try out, but could you paint us an example use-case of the technology?

      What are the workloads in the runtime capable of?

      • sambigeara 8 minutes ago
        OK, bear with me on this, it'll probably be an idle thought-stream because I don't have a concrete answer right now.

        My intention is for Pollen to become a "generic blob of computational capability" into which you idly `pln seed` a workload and do not have to worry about ANY aspects of managing locality, scale, redundancy etc. You seed a workload onto any node, and you call it from any (other?) node. If you want to add more computational power to the cluster, you fire up Pollen on another machine and `pln invite` -> `pln join`.

        Every node also has its own ed25519 cert. The root key pair (the "don't lose this or you're in trouble" key pair) is used to delegate admin certs to other nodes. I'm also working on a mechanism that allows you to bake arbitrary properties into a cert (as it stands, these are lifted into the WASM guest code for, say, in-application authz purposes). I have more ideas about how this can be extended in the future.
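
        The signed-properties idea is just ed25519 over a serialised claims blob, something like this toy version (the shape is hypothetical, not the actual wire format):

        ```go
        // Toy sketch of signed properties with crypto/ed25519; the
        // Properties shape is hypothetical, not Pollen's format.
        package identity

        import (
            "crypto/ed25519"
            "encoding/json"
        )

        type Properties struct {
            NodeID string            `json:"node_id"`
            Claims map[string]string `json:"claims"` // e.g. {"role": "admin"}
        }

        func signProps(root ed25519.PrivateKey, p Properties) (payload, sig []byte, err error) {
            payload, err = json.Marshal(p)
            if err != nil {
                return nil, nil, err
            }
            return payload, ed25519.Sign(root, payload), nil
        }

        // Any peer holding the root public key can verify the delegation.
        func verifyProps(rootPub ed25519.PublicKey, payload, sig []byte) bool {
            return ed25519.Verify(rootPub, payload, sig)
        }
        ```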

        The root authority can invalidate a participating peer's cert at any point, currently just via a `pln deny` command which is eagerly gossiped around the cluster so other nodes stop talking to the denied node, too. I think this offers opportunities for some fairly novel applications. Perhaps, in the future, you'll provision a node with a certain level of capability or authority to run on some external infrastructure. It'll have all of the (allowed) capabilities of your cluster, but will act like it's local to the external system. Plus, you can revoke its access or reset its capabilities at any point; `pln grant` eagerly applies across the cluster, too.
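
        One way to picture the eager deny gossip: a grow-only set of revoked cert fingerprints, merged by union, so any sequence of exchanges converges (again illustrative, not the real structure):

        ```go
        // Illustrative sketch: a deny list as a grow-only set of cert
        // fingerprints; not Pollen's actual structure.
        package revoke

        type DenyList struct {
            revoked map[string]struct{} // cert fingerprint -> denied
        }

        func NewDenyList() *DenyList {
            return &DenyList{revoked: make(map[string]struct{})}
        }

        func (d *DenyList) Deny(fingerprint string) {
            d.revoked[fingerprint] = struct{}{}
        }

        func (d *DenyList) IsDenied(fingerprint string) bool {
            _, ok := d.revoked[fingerprint]
            return ok
        }

        // Merge is commutative, associative and idempotent, so repeated
        // gossip exchanges converge every node on the same deny list.
        func (d *DenyList) Merge(remote *DenyList) {
            for fp := range remote.revoked {
                d.revoked[fp] = struct{}{}
            }
        }
        ```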

        The workloads, at the moment, are just anything you can compile to WASM via the Extism PDK. Stateless, for now, but with a view to add shared state and persistence in the near future!
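
        For a sense of how small a guest can be, here's a minimal Go one using the Extism Go PDK (built with TinyGo; the export pragma varies by PDK/toolchain version):

        ```go
        // Minimal Extism Go PDK guest: read input, write output.
        // Build with e.g. `tinygo build -o seed.wasm -target wasi`.
        package main

        import "github.com/extism/go-pdk"

        //export greet
        func greet() int32 {
            name := pdk.InputString()
            pdk.OutputString("Hello, " + name + "!")
            return 0 // non-zero would signal an error to the host
        }

        func main() {}
        ```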

        Sorry this was rambly, hopefully it offered something useful.

    • anilgulecha 2 days ago
      Interesting project.

      In a potential modern cloud, having globally named primitives (compute, storage, messaging) could unlock much wider applications. Have you come across any such?

      • sambigeara 1 day ago
        To clarify, are you asking if I’ve considered incorporating those concepts into the project?

        If so, I have loose ideas around how I might introduce shared state, it’s an interesting problem that’ll require a lot of thought. Early days yet, though.

    • digdugdirk 3 hours ago
      From someone who definitely doesn't fully understand what you made, this looks really cool!

      I'm seeing some functionality that seems like it could replace some personal services I currently host via my tailscale network. Am I understanding this correctly? If so, do you have a feel for what the performance implications would be?

      • sambigeara 51 minutes ago
        Thanks! I think the classic answer applies here: "it depends". It currently only supports stateless workloads (for now, see below), so if you have nice, isolated, functional workloads, the WASM seeds could be a good fit!

        My original intention was that the WASM seeds would be the primary workload entity in the system, because it fits nicely with the whole local-first, self-balancing ethos (and WASM modules are blobs that can be gossiped around readily as nodes claim them). That said, you can also register generic TCP/UDP servers on a host (`pln serve 8090 some_service`), which are callable from nodes and seeds (via `pln://service/`).

        On statelessness, as I've alluded to elsewhere in the thread, I'm looking at how (convergent) state can be introduced into the system and exposed to seeds, so this would ultimately add another layer of capabilities to the WASM functionality.

        On performance: again, it'll depend. I noted elsewhere in the thread the distributed implications of chaining multiple seeds in a call flow. You're not _just_ dealing with CPU, WASM boundary-hopping or even traditional IO ceilings; there's also a component of synchronising gates in proxying nodes (seed A calling seed B needs to reserve the WASM instance locally until seed B responds, which has knock-on memory implications, etc.). It's a fun problem; no doubt the story will get better in the coming months. For a local, simple invocation, depending on the nature of your workload, I would expect the standard WASM overheads to apply (there's only a thin layer between the API and the underlying Wazero runtime). For a lot of applications, this should be negligible.
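
        On the "thin layer" point, a local invocation isn't far off calling Wazero directly, something like this (simplified; not the actual call path, and `seed.wasm`/`add` are placeholders):

        ```go
        // Simplified sketch of the kind of layer under a local seed
        // invocation; module path and export name are placeholders.
        package main

        import (
            "context"
            "fmt"
            "os"

            "github.com/tetratelabs/wazero"
        )

        func main() {
            ctx := context.Background()
            wasm, err := os.ReadFile("seed.wasm")
            if err != nil {
                panic(err)
            }
            r := wazero.NewRuntime(ctx)
            defer r.Close(ctx)

            mod, err := r.Instantiate(ctx, wasm)
            if err != nil {
                panic(err)
            }
            // Keeping the instance resident means per-call cost is mostly
            // the guest function plus the boundary crossing.
            res, err := mod.ExportedFunction("add").Call(ctx, 2, 3)
            if err != nil {
                panic(err)
            }
            fmt.Println(res[0])
        }
        ```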

        Also, a note on placement: if you run, say, 10 nodes globally, `pln seed` will, by default, only place two replicas into the cluster. However, as load is introduced, the seeds will propagate towards it, so a node that's acting as an ingress for `pln call` will generally claim the workload to benefit locality/latency.

      • sambigeara 50 minutes ago
        Feel free to message here or privately if you want to discuss your actual use-case; I'd be keen to understand how people might try to use it!
    • r3trohack3r 2 hours ago
      This is incredible.

      We’re building an AWS-analogue catalogue of services (databases, compute, auth, etc.) for distributed systems.

      Want a job doing Pollen-like dev full time?

      william.blankenship@webai.com

      Either way, would be great to compare notes!

  • sambigeara 2 hours ago
    No idea why this post has picked up traction 2 days later, I’m out and about right now but will endeavour to respond thoughtfully when I’m back at my keyboard later on!
  • docheinestages 1 hour ago
    Even after looking at the homepage and the GitHub README, I don't really understand how this could help.
  • jitl 2 hours ago
    Wow, this is super cool. It almost feels like a DIY pocket-Cloudflare. I’m curious how a WASM binary gets mapped to HTTP endpoints that take JSON, how much of that is Pollen vs Extism? Are the routes encoded in the WASM binary somehow?
  • esafak 52 minutes ago
    Did you have any applications in mind when you were designing this? Any weakness in precedents that you wanted to rectify? Are you familiar with Lunatic (https://lunatic.solutions/), and wasmCloud (https://wasmcloud.com/) ?
  • hsaliak 2 hours ago
    Using CRDT gossip to inform scaling is a clever idea. You are on to something there. Perhaps extract it as a core library/concept from the runtime? I feel that would be generally useful!
    • sambigeara 2 hours ago
      Thanks! That’s certainly crossed my mind!