What happens when a developer can't publish a security update because they don't have 2FA enabled for a popular package? I was almost in this position. I don't have a phone, I don't have 2FA enabled and am not looking to do so. It seems like a 2FA mandate is going to ensure that people lose access to their accounts and won't be able to publish updates at all.
You don't need a phone to have multi-factor. You presumably have a PC if you're contributing to a package, so you can use a computer-based OTP generator, like Authy.
Sure, if your computer gets owned, it won't help, but it's still much better than nothing and practically free.
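To make concrete how little machinery an OTP generator actually needs, here is a minimal TOTP sketch using only the Python standard library: it is the same HMAC-over-a-time-counter scheme (RFC 6238) that apps like Authy implement, shown with the RFC's published test key purely for illustration.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over a 30-second time counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 6238 test secret "12345678901234567890", base32-encoded:
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # → "287082"
```

The point is that the "2FA key" is just the base32 seed: anything that can compute an HMAC can generate the codes, no phone required.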
What happens if tomorrow my laptop falls into the pool? Or gets stolen? I know the password of my password manager and my main emails, but for npm with 2FA I would either be:
- Locked out of npm because I don't have the 2FA key anymore
- Forced to recover my 2FA via my email (which totally defeats the point of 2FA)
- Asked for some government ID to prove who I am (again a no-no)
In case that happens, you can write down the seeds of the OTPs on a piece of paper, which you leave at home or at a friend's house or similar. The principle being that you won't lose both your laptop and the piece of paper at the same time.
edit: for a "good enough" approach you can probably store your 2FA codes in your password manager, for which you have, presumably, some kind of backup.
I don't want a yubikey either. Hardware products are notorious for being subverted by agencies, honest companies can be bought by agency shell companies, etc.
I find the recent push by big tech and others to discredit open software solutions like PGP suspicious. Banks push out new apps on a yearly basis, we are supposed to insert USB sticks to contribute to open source. Big tech rarely acts in your best interests.
The website hasn't been updated in a while and the stack somewhat diverges from what is written there, but we're very hard at work making all the pieces fit together. The p2p database (eyros) works pretty well, with only some transfer-size improvements left. The database is fully symmetric and runs fully in the browser with a ~400kb wasm build (will work on getting this down later). The rendering stack also works pretty well. The main hurdle at the moment, which I have been working on, is the ingest phase, which consumes planet-osm.pbf and writes into the spatial database (eyros). I can process all the nodes in 1 hour and all the ways in 35 hours on a not very expensive vps, but I still run out of memory processing the relations. Probably in the next few weeks I can get this ingest phase working and we will have an initial data release. There is still some rendering work left for polish and label rendering, but it basically works.
The main initial benefit of the project is to have completely free embeddable web maps hosted p2p (using ipfs, hyperdrive, webtorrent) where you can entirely customize the rendering. Later, the benefits of the fully symmetric nature of the database will start to make more sense, and the project has the potential to become less centralized on OpenStreetMap servers and data. For some prior work by me and other people who work on peermaps, check out https://mapeo.world/ and this very old writeup I made about an early version of the osm-p2p database. The experience of working on that project and prior versions heavily informs how the current peermaps stack works.
Or here is a slightly old, slightly broken video of panning across a processed version of Switzerland using the end-to-end stack on my laptop (it looks better now): https://www.youtube.com/watch?v=gHEmmQ6GnDI
It's similar to how with a torrent you can seek to a particular spot in a file and start playing by requesting the chunks at that spot. Some clients like webtorrent support this behavior, but it changes the dynamics of the network somewhat if many clients do this kind of thing. You can build some supplementary peer info to help the process along for different p2p networks, depending on whether they let you create side-channels or make more explicit connections to peers. For peermaps, the database is file- and directory-based, so most of that peer tree traversal should be handled already by the network. And there are more ways to optimize the connections with additional tricks once you get the basics working with a somewhat slower, less sophisticated transfer method.
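The seek-into-a-file trick is just arithmetic over fixed-size pieces. A small sketch (the 256 KiB piece size is an assumption; real torrents declare it in their metadata) of mapping a byte range to the piece indices a client would request:

```python
PIECE_LENGTH = 262144  # 256 KiB, a common torrent piece size (assumption)

def pieces_for_range(offset, length, piece_length=PIECE_LENGTH):
    """Map a byte range inside a file to the piece indices that cover it."""
    first = offset // piece_length
    last = (offset + length - 1) // piece_length
    return list(range(first, last + 1))

# Seeking 10 MiB into a file and reading 1 MiB needs only these pieces:
print(pieces_for_range(10 * 1024 * 1024, 1024 * 1024))  # → [40, 41, 42, 43]
```

A client that fetches only these pieces (rather than downloading front-to-back) is exactly the "start playing from the middle" behavior described above.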
The VPS we're running on has 60GB of RAM, which should be plenty, but the ingest program needs more work to use less memory so it stops crashing when denormalizing multipolygon relations, which involves denormalizing ways, which fetch nodes... all referenced by ID, with very little locality across the pbf file. And if you write to temporary storage, it can use a lot of disk, and denormalization based on the on-disk format can get really slow. It's just all very tricky to get working well within reasonable time constraints (less than a week of processing, ideally) and a reasonable memory footprint.
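One standard way to tame the random-by-ID lookups is an external merge sort: spill sorted runs of (referenced id, payload) pairs to disk, then k-way merge them so the join against the node table becomes a single sequential pass. This is not the peermaps ingest code, just a minimal sketch of the technique using Python's stdlib:

```python
import heapq
import itertools
import tempfile

def sorted_runs(pairs, run_size=1_000_000):
    """Spill fixed-size sorted runs of (ref_id, payload) pairs to temp files."""
    runs = []
    it = iter(pairs)
    while True:
        chunk = sorted(itertools.islice(it, run_size))
        if not chunk:
            break
        f = tempfile.TemporaryFile(mode="w+")
        for ref, payload in chunk:
            f.write(f"{ref}\t{payload}\n")
        f.seek(0)
        runs.append(f)
        if len(chunk) < run_size:
            break
    return runs

def merged(runs):
    """k-way merge of the sorted runs, yielding (ref_id, payload) in id order."""
    def parse(f):
        for line in f:
            ref, payload = line.rstrip("\n").split("\t", 1)
            yield int(ref), payload
    yield from heapq.merge(*(parse(f) for f in runs))

# Way->node references, once sorted by node id, can be joined against the
# nodes in one sequential scan instead of random lookups across the pbf file.
refs = [(5, "way1"), (2, "way2"), (9, "way1"), (1, "way3")]
print(list(merged(sorted_runs(refs, run_size=2))))
```

Memory use is bounded by `run_size` plus one line per run during the merge, which is the trade the paragraph above is wrestling with: disk and sequential time instead of RAM.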
Many projects[1][2][3][4] in the dat ecosystem use kappa-core[5] for multi-user applications on top of hypercores (the low-level append-only log used by dat). kappa-core is designed around the kappa architecture where the logs serve as the historical record and primary source of truth (so you get version control) and materialized views ingest the logs to answer application-specific queries.
Some nice properties of the kappa-core implementation of this architecture:
* it works fully offline by default, and many of the apps (especially mapeo) are designed for very offline use
* device-to-device replication is first-class
* you can completely rebuild the materialized views from the log data whenever your schema changes (a very nice way of handling database migrations)
* there's a lot of flexibility in how you design the materialized views, and an ecosystem of views on npm you can use instead of writing your own CRDTs
* it works in the browser
There is also some progress in the ecosystem for sparse mode where content downloads from other feeds are driven by application-specific queries.
There is a kappa-core workshop[6] that covers some of the introductory topics.
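The core idea, stripped of the kappa-core API, fits in a few lines: the append-only log is the source of truth, and any materialized view is a pure function of the log, so it can be thrown away and rebuilt at will (which is what makes schema migrations cheap). A toy sketch, with hypothetical `put`/`del` entries:

```python
# Toy kappa pattern (not the kappa-core API): an append-only log as the
# historical record, plus a materialized view derived entirely from it.
class Log:
    def __init__(self):
        self.entries = []          # append-only history; never mutated in place

    def append(self, entry):
        self.entries.append(entry)

def build_view(log):
    """Materialized view: latest value per key, computed purely from the log."""
    view = {}
    for op, key, value in log.entries:
        if op == "put":
            view[key] = value
        elif op == "del":
            view.pop(key, None)
    return view

log = Log()
log.append(("put", "name", "peermaps"))
log.append(("put", "name", "osm-p2p"))   # later entry wins in the view
log.append(("del", "lang", None))
print(build_view(log))  # → {'name': 'osm-p2p'}
```

Changing your schema just means writing a new `build_view` and replaying the same log; the history is never lost, which is also where the version-control property comes from.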
That's not really a fair comparison. Just because something is "baked in" doesn't mean you can't get similar functionality from other libraries in a more à la carte fashion.
Looking for remote webgl, p2p, data viz, and mapping projects.
I'm particularly interested in work that ties together industrial, engineering, CAD, hardware, physical simulations, and field work with computer software. Preference for web tech.
I think the real-C++ part is so important because it respects the ability and intelligence of children to participate in the adult world using a real (yet approachable) technology that is also used in industry. This also makes the skills earned by poking around with a fun toy transferable to solving their own problems, doing coursework at university, and later entering the technology labor market. Other programming toys aimed at kids, like NXT graphical programming or Scratch, don't have the same transferability, and I've noticed that kids see right through that.
I don't think you need to worry about the transferability of skills from an 11-year-old's toy to professional software development. The far more important thing to transfer is interest/excitement, and that may (or may not -- you're the expert) be easiest to do with a language other than C++. Part of what makes Arduino so fun is how fast you can get to a blinky LED, and part of what makes Scratch so fun is how quickly you can get to interactive graphics. I feel like there has to be something (not necessarily existing today) which is powerful enough to make kids feel like they are in control of the device without annoying arcana like semicolon errors.
These things build on each other over time, and learning a language is not exactly easy, especially if you are starting out. An important aspect of the transferability here is that you can go from programming a Jewelbot in C++ to programming other microcontrollers. You don't have to learn a completely different language and toolchain to continue exploring programming on a different platform.
Some kids can see through anything, but I think it's hugely naive to think many kids won't struggle with it. Do you remember how hard kids find algebra? That syntax is downright trivial compared to explaining why you need to insert a space between angle brackets when composing templates, or why the pointer overload of your method might sometimes resolve to the integer one instead when passed a null literal, or how to debug e.g. memory alignment errors, which often present as cryptic segmentation faults.
I think it's great for a certain crowd, but this could be quite frustrating for many pre-teens and ultimately put them off of casual coding.
Besides, kids aren't exactly leaping at microcontrollers. And why should they? Software programming is applicable to more and more prospective jobs every year.