Hacker News | cbm-vic-20's comments

TIL about "Charlieplexing", or how to use a reduced set of pins to drive a matrixed set of LEDs.

https://en.wikipedia.org/wiki/Charlieplexing
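The trick is that each ordered pair of tri-state pins addresses one LED, so n pins can drive n*(n-1) LEDs, lighting one at a time. A minimal sketch of the pin-state bookkeeping (names and representation are illustrative, not from any particular library):

```python
from itertools import permutations

def charlieplex_states(n_pins):
    """For n tri-state pins, list the drive state that lights each LED.

    Each ordered pair (anode, cathode) addresses one LED, giving
    n*(n-1) LEDs total. To light one LED, its anode pin is driven
    high ('H'), its cathode pin low ('L'), and every other pin is
    left floating in high-impedance mode ('Z') so the other LEDs
    stay dark.
    """
    states = []
    for anode, cathode in permutations(range(n_pins), 2):
        pins = ['Z'] * n_pins       # everything tri-stated by default
        pins[anode] = 'H'           # source current into this LED
        pins[cathode] = 'L'         # sink current out of it
        states.append(((anode, cathode), pins))
    return states

# 3 pins are enough for 3 * 2 = 6 LEDs
for led, pins in charlieplex_states(3):
    print(led, pins)
```

Scanning through these states quickly is what makes the whole matrix appear lit, and is also why a short camera exposure catches most of it dark.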


That's going to be funny when you try to take a photograph with a short shutter speed. Half the display will be off!

True, but doesn't this apply to regular matrixed displays as well?

Super common technique. Aircraft cockpit videos usually exhibit stroboscopic effects because of the scan rate of the camera and the refresh rate of the displays, and those are very expensive devices. Short shutter time photos are just not a case I think most displays are designed for.

A significant number of consumer electronic devices (especially reduced-cost ones) already do this. I have yet to see an alarm clock that isn't at least matrixed...

I don't know much about how these models are trained, but is this behavior intentional (ie, the people pulling the levers knew that this is how it would end up), or is it emergent (ie, pulling the levers to see what happens)?

egacs: Eight Gigs And Constantly Swapping

Multi-user Unix? What will they think of next?

This is cool, though. Gives people a taste of what it used to be like with everyone in the university logged into the big time-sharing machines all together.


I kinda wish it had stayed that way, or rather that something better had replaced multi-user systems, since they aren't well suited to personal computers like we have nowadays. Plus I like the added bonus of not needing to spend much to have access to the kind of compute power needed for a compsci course; it makes compiling a lot faster.

You apparently never had to share a 3B20 (around 1 MIP) with 200 other CS1401 students desperately trying to get their Pascal project to compile before the midnight deadline. 15m for 'hello world'? If you're lucky.

Eeeeeep. I was lucky that my big CS courses were done on a Sequent Balance 8000 equipped with six NS32032 CPUs and room for six more. Yup, SMP in the mid-1980s. That machine positively flew on loads that would bring the neighboring Pyramid 90x to its knees.

I had an account on a 3B20, and it did not impress me in the least, but the 3B2/400 boxes in one lab were pretty reasonable for being small systems. What a shame that the WE32000 didn't get any traction.

Before I went down to UIUC, the junior college had a Prime 650, and all compile jobs were run through a queue precisely to avoid having the machine get crushed.


That's kinda funny...we musta lived in parallel worlds.

The 3B20 might have been slow, but it had a solid I/O subsystem so you could load it up with users, and was generally reliable (it was originally built as a telco switch control processor looking for 5-nines uptime). But by the later 80s the industry had moved on, and we were in the process of migrating those CS classes from the 3B20 to either a Sequent Symmetry (follow-on to the Balance with something like 16x i386) or a Pyramid 90x, depending on the class. The Symmetry was...not reliable. The 90x was worse. The wails of a lab full of undergrads realizing the shared machine had just taken a dirt-nap and all their work with it was a far too common sound. Good times.

We also had a bunch of 3b2s, most with an AT&T 5620 'windowing terminal' attached, which is a really fascinating 'what might have been' if bitmapped workstations and X11 hadn't taken over that niche. I ended up with a Sun 3/160 for most of that time, and the rest is history.


Strange that the Balance was largely reliable. I recall one or two hiccups, but nothing that caused me lost work. There were other machines floating around, but they were pretty much reserved for faculty/staff/grad students, and undergrad plebes weren't welcome to use what passed for the internet at the time (but Usenet was available, albeit via Ray Essick's "notes" software). Also, any student could get an account on the CDC Cyber 170, but few courses used it for actual coursework by the time I was there. Then there was PLATO, a world unto itself... it also ran on CDC hardware, with bespoke way-ahead-of-its-time touchscreen plasma display terminals, online forums, instant messaging, and multi-player online games.

We only had a few of the 5620s in the 3B2 lab, and I remember a wacky mechanical mouse with a metal ball that I can't imagine would have held up in the long run. The PLATO touchscreens were optical, with a grid of infrared beams to pick up touches.


Ha! We (gatech) were a CDC shop as well, with PLATO. We'd moved up to the Cyber 180s (I think we had a 810, 2 x 855 and a 990 vector machine). FORTRAN on the 990 was pretty fun, and I might be the last person to ever run the ALGOL-60 compiler.

> Multi-user Unix?

We could call it Multics!

But yeah, I remember those glory days of everybody on the school's Sun 3/280, when an accidental fork bomb would ruin everyone's homework.


If you're looking for more to add: ticking second hands have momentum. How about having the ticking hand go beyond the proper place by a degree, then snap back into proper position?


You're literally describing the worst thing about quartz watches. The best of them (aka the most expensive) go to really great lengths to not have any momentum. Why would you want it when it's not needed?


because it's fun

I'm gonna test out how it looks, not promising anything!

It's as horrible as I expected it to be, great work!

Consider other hideous features, like having the second hand miss the marks because of its "weight" - so a bit forward at 3, a bit back at 9, gradually disappearing toward 6 and 12.


That weight really makes it look like our kitchen clock, though it sure won't look like an expensive clock. That feature will divide opinions, I think.
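The "weight" error described above can be sketched as a gravity term proportional to sin of the hand's angle: a bit forward at 3, a bit back at 9, vanishing at 6 and 12. A minimal sketch, with `sag_deg` as a made-up tuning constant:

```python
import math

def second_hand_angle(second, sag_deg=1.5):
    """Displayed angle of a 'weighted' second hand, in degrees
    clockwise from 12.

    The nominal tick angle gets an error proportional to sin(angle),
    modeling gravity tugging on the hand: it runs slightly ahead of
    the mark at 3, slightly behind at 9, and lands dead-on at 6 and
    12. sag_deg (an illustrative constant, not from the project)
    sets the maximum error.
    """
    nominal = second * 6.0      # 360 degrees / 60 ticks
    return nominal + sag_deg * math.sin(math.radians(nominal))
```

The same function could also feed an overshoot-and-snap-back animation by briefly adding a decaying extra term after each tick.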

In the 30 years I've worked in software, I've seen more than one shop that worked this way. Then "eXtreme Programming" and Scrum rose up and merged into "agile", and that pretty much went away.


Or, like the Shakespeare programming language.

https://shakespearelang.com/


The 9front web site is certainly... something.

https://9front.org


This isn't new; it's been happening for decades.


Not new, no. But there will be more of it.


My alternate reality "one of these days" projects is to have a RISC-V RV32E core on a small FPGA (or even emulated by a different SOC) that sits on a 40- or 64-pin DIP carrier board, ready to be plugged into a breadboard. You could create a Ben Eater-style small computer around this, with RAM, a UART, maybe something like the VERA board from the Commander X16...

It would probably need a decent memory controller, since it wouldn't be able to dedicate 32 pins to a data bus; loads and stores would need to be done either 8 or 16 bits at a time, depending on how many pins you want to use for that.
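The multiplexed-bus idea above amounts to the controller assembling each 32-bit access from several narrow cycles. A minimal sketch of the byte-at-a-time case, where `read_byte` stands in for whatever one bus cycle actually does (the names here are illustrative, not from any real design):

```python
def load_word_8bit_bus(read_byte, addr):
    """Assemble a little-endian 32-bit word from four 8-bit bus cycles.

    read_byte(addr) is a placeholder for a single transfer over the
    narrow multiplexed data bus. RISC-V is little-endian, so the byte
    at the lowest address lands in the low bits of the word.
    """
    word = 0
    for i in range(4):
        word |= read_byte(addr + i) << (8 * i)
    return word

# Toy backing store to exercise the sketch
mem = {0: 0x78, 1: 0x56, 2: 0x34, 3: 0x12}
print(hex(load_word_8bit_bus(mem.__getitem__, 0)))  # 0x12345678
```

A 16-bit-wide variant would halve the cycle count at the cost of eight more pins, which is exactly the trade-off the breadboard pin budget forces.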


Have you thought about building a RISC-V “fantasy computer” core for the MiSTer FPGA platform? https://github.com/MiSTer-devel/Wiki_MiSTer/wiki

From a software-complexity standpoint, something like 64 MiB of RAM, possibly even 32 MiB for a single-tasking system, seems sufficient.

Projects such as PC/GEOS show that a full GUI OS written largely in assembly can live comfortably within just a few MiB: https://github.com/bluewaysw/pcgeos

At this point, re-targeting the stack to RISC-V is mostly an engineering effort rather than a research problem; small AI coding assistants could likely handle much of the porting work over a few months.


The really cool thing about RISC-V is that you can design your own core and get full access to a massive software ecosystem.

All you need is RV32I.

