OK, bear with me on this; it'll probably be an idle thought-stream because I don't have a concrete answer right now.
My intention is for Pollen to become a "generic blob of computational capability" into which you idly `pln seed` a workload and do not have to worry about ANY aspects of managing locality, scale, redundancy etc. You seed a workload onto any node, and you call it from any (other?) node. If you want to add more computational power to the cluster, you fire up Pollen on another machine and `pln invite` -> `pln join`.
Every node also has its own ed25519 cert. The root key pair (the "don't lose this or you're in trouble" key pair) is used to delegate admin certs to other nodes. I'm also working on a mechanism that allows you to bake arbitrary properties into a cert (as it stands, these are lifted into the WASM guest code for, say, in-application authz purposes). I have more ideas about how this can be extended in the future.
The root authority can invalidate a participating peer's cert at any point, currently just via a `pln deny` command which is eagerly gossiped around the cluster so other nodes stop talking to the denied node, too. I think this offers opportunities for some fairly novel applications. Perhaps, in the future, you'll provision a node with a certain level of capability or authority to run on some external infrastructure. It'll have all of the (allowed) capabilities of your cluster, but will act like it's local to the external system. Plus, you can revoke its access or reset its capabilities at any point; `pln grant` eagerly applies across the cluster, too.
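To make the eager-gossip idea concrete, here's a toy model of how a revocation can flood a cluster. Everything here (`Node`, `deny`, `receive_deny`, the peer wiring) is invented for illustration; it is not Pollen's actual implementation.

```python
# Toy model of eagerly gossiping a cert revocation across a cluster.
# All names here are illustrative, not Pollen's real API.

class Node:
    def __init__(self, name):
        self.name = name
        self.peers = []       # other Node objects we gossip with
        self.revoked = set()  # cert fingerprints we refuse to talk to

    def deny(self, fingerprint):
        """Admin entry point: revoke a cert and flood the news."""
        self.receive_deny(fingerprint)

    def receive_deny(self, fingerprint):
        if fingerprint in self.revoked:
            return  # already seen; stops the flood from looping forever
        self.revoked.add(fingerprint)
        for peer in self.peers:
            peer.receive_deny(fingerprint)

# Wire up a small triangle of nodes and deny from one corner.
a, b, c = Node("a"), Node("b"), Node("c")
a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]
a.deny("ed25519:deadbeef")
print(all("ed25519:deadbeef" in n.revoked for n in (a, b, c)))  # True
```

The "already seen" check is what makes eager flooding safe: every node forwards a revocation exactly once, so it terminates even with cycles in the peer graph.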
The workloads, at the moment, are just anything you can compile to WASM via the Extism PDK. Stateless, for now, but with a view to add shared state and persistence in the near future!
Sorry this was rambly, hopefully it offered something useful.
Splitting a big task (like anything ML-related) into a set of smaller ones and distributing them across the "fleet" of workers, then reaping the results and stitching them back into a single artifact at the end. This could be commercially viable. It could even become a p2p platform/market where some people basically buy computation while others offer their hardware for temporary rent to earn a few bucks. You become the coordinator that just connects demand with supply and get rich from commissions alone.
Absolutely! What's _really_ cool is that if you have disjoint computational steps that don't necessarily scale together linearly, you could split them into separately deployed `pln seeds` and let the cluster organically balance the compute as the different usage patterns occur. And yes, "p2p compute on demand" is certainly an intriguing idea.
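The split / farm out / stitch pattern described above is essentially scatter-gather. A minimal local sketch, using a thread pool as a stand-in for the fleet of workers (`worker` and `scatter_gather` are hypothetical names, not part of Pollen):

```python
from concurrent.futures import ThreadPoolExecutor

def worker(chunk):
    # Stand-in for a seeded workload: sum a slice of the data.
    return sum(chunk)

def scatter_gather(data, n_workers=4):
    # Split the big task into smaller chunks...
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...farm them out to the "fleet" of workers...
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(worker, chunks))
    # ...and stitch the partial results back into a single artifact.
    return sum(partials)

print(scatter_gather(list(range(1001))))  # 500500
```

In a real cluster the interesting part is what this sketch hides: the coordinator has to handle stragglers, retries, and workers disappearing mid-task, which is exactly where letting the platform balance compute organically pays off.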
I use Noctua for the silence but I also literally don't have any of the panels attached to my case. The main panel doesn't even fit because the NH-D15 would stick out.
My NH-D15 isn't particularly silent because the fans themselves are silent; it's so efficient that the fans barely need to spin.
> But now the info is coming from an LLM that you generally trust
But it's not from the LLM, the LLM clearly cites the wikipedia article as its source. This is just performing an internet search with extra steps, and ending up with misinformation because somebody vandalized wikipedia.
It was just something a friend of mine came up with - we called it "Area Capture" or something (and, ironically, it was mostly vibe-coded).
There were 4 or 5 "color" teams. Each one carried a Meshtastic node, and they all reported to a central server back at base. The play field was roughly a square mile divided up into a grid of smaller squares. If you walked into one and it was past the cooldown time, it was claimed for your team. Most squares at the end of two hours wins. The server would send out updates over Meshtastic too: "Blue captures H12", "Red has 18", etc. If you were at the base station, you got to see it all play out live on a big map.
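The core of that capture mechanic fits in a few lines. A sketch of what the server might track per square - the class, the `COOLDOWN` value, and the grid keys are all made up here, not the actual game's code:

```python
import time

COOLDOWN = 60  # seconds a square is locked after a capture (made-up value)

class Square:
    def __init__(self):
        self.owner = None
        self.captured_at = None

    def try_capture(self, team, now):
        """Claim the square for `team` unless it's still on cooldown."""
        if self.captured_at is not None and now - self.captured_at < COOLDOWN:
            return False
        self.owner = team
        self.captured_at = now
        return True

# A mile-ish grid keyed like the broadcasts ("Blue captures H12").
grid = {(col, row): Square() for col in "ABCDEFGH" for row in range(1, 13)}

now = time.time()
print(grid[("H", 12)].try_capture("Blue", now))       # True: first claim
print(grid[("H", 12)].try_capture("Red", now + 10))   # False: still on cooldown
print(grid[("H", 12)].try_capture("Red", now + 61))   # True: cooldown expired
```

Scoring at the end of the two hours is then just counting `owner` across the grid, and each successful `try_capture` is what would trigger a Meshtastic broadcast.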
There was another one played at night which was a hide-and-seek / capture-the-flag sort of game. It would give the seekers some limited information about the hiders, and each side had special functions they could use. Hiders could "go invisible" or fake their location for a certain time. Seekers could call a limited number of "drone strikes" on different squares. The game ended when either the hiders were caught, or they made it to a specific target location.
Lots of possibility for that sort of thing with Meshtastic. I guess either could have run on a phone, since even rural camping areas have decent cell coverage these days, but that's not quite as impressive.
This, for days. Cheap sensors that anyone can churn out and deploy anywhere for weeks to years. No longer do I have to guess whether the garden beds need to be watered.
I was really surprised that term wasn't used in this article.
I agree it's old, but I think it's reached a new high with short-form content. Now that people don't subscribe to creators they like but rather get an algorithmic feed, it's much harder to detect astroturfing.
Like the article says, these creators are easily identifiable: they typically create multiple videos a day about the same topic. But for the platform that doesn't matter.