"We must now Single-click "Select" on that icon to actually bring the application to the forefront and activate it. I don't know what that's all about, but that's how it works."
What that was about was that all GUI apps on RISC OS ran as a single process each, no matter how many files you had open. These machines had very little memory, so managing it was very important - there was actually a system panel you could open (I forget its name) where you could drag sliders to change how much various things were allowed to allocate.
The downside, of course, was that if some app crashed, it would take out every file you had open with it. But then, it didn't really have very good isolation, so often a crashing app would take down the whole OS.
Author here. That still doesn't get at my confusion over the UX. It wouldn't change any of the memory management issues to have a double-click on an app icon jump straight into the application. That user intent seems pretty clear to me, so the extra step could have been automated away. Maybe RISC OS users just had a different way of working?
They distinguish between running the application and opening a document or view. Not all applications want or need to have a document or view (onto a document or otherwise) open at all times. Some of the demo programs actually demonstrate this. E.g., IIRC !Maestro can keep playing with its window open or closed.
Mac OS X has a variant. There's a little dot below an icon that indicates the program is currently active in RAM versus just visible in the Dock.
PS: wrt the demo programs. Did you notice you can e.g. 'Save' files from !Edit and !Paint directly into !Draw (and IIRC also back into the same application)?
Note that RISC OS had to contend with running off floppy disks too. So you might start one application off one floppy disk, and the next off another, and you'd have them both in RAM, but neither had any data in them yet, because that might come off a third floppy disk.
That's at least one concrete scenario where "application loaded" need not be the same thing as "having data open in a window".
It's been a while though. I just know they were extremely consistent in keeping the distinction between "application loaded" on one hand and opening a window (or taking over the screen, in games) on the other. This does help with your mental model of where your RAM is going, since you have a limited amount of it. Closing applications might free up RAM you need for something, but now you might need to juggle floppies again. And so it went.
PPPS:
Also, Windows (and some of Mac OS) seems to confuse opening a file with opening an application. It's not the same thing. An application can have 0, 1, or infinity files open in memory at any one time. "Why the heck does e.g. MS Windows always open nonsensical empty windows when there's nothing to show?" I'm not sure it's a hill I'd die on today, but I used to have Opinions on this! "The window is not the application; an application can have lots of windows!" (Ha, you're reminding me of old rants from decades ago ;-) )
If you double-clicked on a file, it would load the associated application and open the file.
The only reason I can think of is to avoid disrupting the user's flow by opening a window on top of the Filer windows. Maybe the user intends to open multiple applications to use together.
There was a carefully written programmer's guide for UX. That might have an explanation.
What's ironic is that originally one of the advantages of automation was that it was more impartial than human-delivered services. The inventor of the automated telephone exchange, Strowger, designed it because he was concerned that the local telephone operators were directing his calls to a competitor. We had several decades during which machines had only very limited decision-making ability, and so their ability to manipulate or discriminate was minimal. That's gone. It went years ago, but it's taken a while for the public's intuition to catch up. People are starting to get angry, but are still somewhat baffled. Industry believes it can continue to get away with it since it has done so for 10-20 years, but I think this underestimates how strong the backlash can get.
I would argue that they are not the same, but there is a symmetry between them.
The central problem of cryptology is to prevent inference about either the key or the plaintext, despite the requirement to be able to reconstruct the plaintext from the ciphertext+key. So ciphers have to almost perfectly mix information.
Machine learning is possible because in the absence of perfect mixing, inference is possible (given many input-output pairs), even if the information is many decibels below the noise. So the information about what parameters need changing is present in the output despite many subsequent layers of processing. This means that a lot of mixing can be tolerated, and it's needed because you don't know in advance what the data flow should look like in detail, so the NN has to provide as many options as possible.
> So ciphers have to almost perfectly mix information.
yesn't
Most modern stream ciphers basically use XOR for encryption with one-time-use keystream per chunk (like AES-CTR, AES-GCM, AEGIS, ChaCha20, etc.).
No mixing of bits is needed there, just high-entropy, uniformly distributed one-time-use key material being generated per block, i.e. you need a "good enough" PRNG.
Practically, the easiest way to get it is by doing something similar to a hash over the state (key, nonce, index) in some form, which is likely done by mixing up information; hence the yes in yesn't.
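A toy sketch of that hash-over-state construction (my own illustration, not a real cipher; SHA-256 stands in for the keystream function and all names are made up):

    import hashlib

    def keystream_block(key: bytes, nonce: bytes, index: int) -> bytes:
        # One keystream block = hash(key, nonce, block index).
        # Real ciphers (AES-CTR, ChaCha20) use dedicated permutations
        # here, not SHA-256; this only shows the shape of the idea.
        return hashlib.sha256(key + nonce + index.to_bytes(8, "big")).digest()

    def xor_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
        out = bytearray()
        for i in range(0, len(plaintext), 32):
            block = keystream_block(key, nonce, i // 32)
            chunk = plaintext[i:i + 32]
            # Encryption is plain XOR against the keystream; running the
            # same function again decrypts, since XOR is its own inverse.
            out.extend(p ^ k for p, k in zip(chunk, block))
        return bytes(out)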
But any PRNG with sufficient properties would do, and there are probably some which use clever math that you wouldn't describe as "mixing information".
It's just that "shuffling bits" + "bad one-way function" is often sufficiently secure and faster than the alternatives.
And historically, many ciphers (e.g. the AES block cipher) come from a time when we didn't yet have great frameworks/know-how for assessing security properties and writing cryptography. Hence they used all kinds of ways of mixing information and chaining, which sometimes are quite... arbitrary.
It might be easy to assume AES stuck around because it's "just great", but that is plain wrong. It stuck around because it spread everywhere (including standards/requirements) before we knew how best to do things, and due to that it ended up with hardware-acceleration support on most chips. But no one would create it that way anymore (it is prone to side-channel attacks without HW acceleration, unless you use bit-slicing trickery, which makes it slow). Because everything has AES hardware acceleration, though, it became a very fast building block. Hence many modern ciphers still use (parts of) it, and even some hashes and other algorithms use it... It's another example of how a "good enough" and widespread technology often wins, not the best.
Mmm. It's true that stream ciphers do not need to mix information (of the plaintext) and block ciphers do. I'm not sure I fully agree with your comment, but I'm also not quite sure what you intend to say, and it's late at night here. I'd suggest that anyone reading the above make sure they fully understand the different security properties of stream ciphers vs block ciphers before dismissing the latter.
ChaCha20 was discovered using a computer search testing resistance to certain attacks. Hence the architecture came first and then the parameters. Any link with NN gradient descent? It would likely be an abstract one.
I don't know how true this is? Salsa20 seems like a pretty standard ARX design that builds a hash function in counter mode; there's a detailed paper explaining Bernstein's decisions.
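For reference, here's roughly what that ARX core looks like; a minimal Python sketch of ChaCha's quarter-round, which uses nothing but 32-bit addition, rotation, and XOR (round structure and rotation constants as published in the ChaCha spec):

    MASK32 = 0xFFFFFFFF

    def rotl32(x: int, n: int) -> int:
        # 32-bit left rotation.
        return ((x << n) | (x >> (32 - n))) & MASK32

    def quarter_round(a: int, b: int, c: int, d: int):
        # ChaCha's quarter-round: add, rotate, xor - nothing else.
        a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
        c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
        a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
        c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
        return a, b, c, d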
This is not a drastically different technology like Optane - it's almost solely a packaging change. It's applicable to exactly the same markets as normal DRAM, so if it dies, customers will just switch to whatever DRAM variant wins instead.
This is overthinking it a bit. You mostly only need that stuff to tell you why it isn't working. If you want to know whether it's up to the job, you can just measure the error rate, which just means sending a lot of data across and counting the errors. There might be some faults which only occur when the cable is in a particular position, but you can at least detect those when they happen.
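A minimal sketch of such a test (send and recv here are hypothetical stand-ins for whatever actually drives the cable under test):

    import os

    def bit_error_rate(send, recv, n_bytes: int = 1_000_000) -> float:
        # Push known random data across the link and count flipped bits.
        tx = os.urandom(n_bytes)
        send(tx)              # hypothetical: transmit over the cable
        rx = recv(n_bytes)    # hypothetical: read back at the far end
        errors = sum(bin(a ^ b).count("1") for a, b in zip(tx, rx))
        return errors / (n_bytes * 8)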
The interface IC almost certainly also estimates signal quality, but it's likely hard to get that information out of it.
The problems of IPv6 deployment are ones of incentives, not design.
Increasingly, the vast majority of services are accessed via the service cone of various CDNs and IaaS providers, directly at edge servers local to them, and at some point the industry may decide that it's not worth providing ordinary internet users the ability to talk to each other directly at all. At which point we might just as well have stuck with IPv4. I don't particularly like that outcome, but it's possible.
I think they worked that out long ago - that segmenting users has no downside for them and IPv6 has only minor upside. It's only mobile devices that help us, but I'm sure there will be kinks in the chain that don't get fixed.
The format is editable. The line chart seems always to be scaled so the minimum is at the bottom, but you can get the zero point back by changing it to bars.
The options do seem a bit idiosyncratic, but I guess they are useful for the kind of data the site users usually look at.
It doesn't look like a photo because, at that time, the only way to mass-produce an image was for an artisan to reproduce it as a wood engraving. I don't know if the ILN (which still exists! In Shoreditch High Street lol) still has the original.
When that happened to a former employer, AWS was calling us within a day. Worth making sure a real phone number is on there, as that's how they contact you for anything serious (and also if your finance dept decides to change the credit card without telling anyone).
There are occasional cases of male lactation reported in humans. Very rare though.
In the guinea pig, the large head at birth is provided for by the cartilaginous symphysis joint in the pelvis detaching. However, unless the animal gives birth early enough (which always happens in the wild), it loses this capability and dies if impregnated later. Some doctors thought it a good idea to emulate this in humans by cutting the cartilage there instead of doing a caesarean section, but this causes permanent problems, as in humans the joint does not reattach. Notoriously, for religious reasons some doctors decided to do so anyway, since a caesarean section reduces the number of pregnancies a woman can have, which they regarded as more important than her being able to walk easily and remain continent.