I just ran these past OpenAssistant and I'm seeing a much higher error detection rate. I've tried 6 so far, and it's caught bugs in about 4. OpenAssistant actually had a very good technical explanation of the bug it found in the BubbleSort example.
WiFi is a pretty difficult protocol, but fortunately Linux ships with mac80211, so all that's needed is an SDR-based PHY such as https://github.com/Nuand/bladeRF-wiphy/ .
The "fix" Intel pushed out this week is a microcode update that, in my experience, doesn't fix or address Meltdown at all. The update does, however, make Spectre slightly less reliable, so I'm going to assume the microcode update has something to do with fixing, updating, or adding new controls to the branch predictor buffer.
So absent a microcode update that outright fixes Meltdown, there will always be some level of slowdown for vulnerable devices. System calls now jump from user-mode code to a stub kernel in "supervisor memory". The stub kernel then does a full context switch (touching the %cr3 paging register and wiping a good portion of the TLB), and once the real kernel finishes, it does a full context switch back to the stub kernel. It's all terribly inefficient, and realistically the performance impact is unlikely to be negligible. It should also be noted that this "work-around" doesn't fix the processor; it just makes sure there's nothing juicy left in supervisor memory.
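The cost is easy to see from user space by timing a trivial syscall round trip, since every one of those transitions now pays for the stub-kernel bounce and TLB flush. A rough sketch (Linux/x86-64 only; the syscall number and the ctypes plumbing are my assumptions, not anything from the patch set):

```python
import ctypes
import time

# Call the raw syscall entry point directly so nothing is cached in libc.
libc = ctypes.CDLL(None, use_errno=True)
SYS_getpid = 39  # x86-64 Linux syscall number (assumption of this sketch)

N = 200_000
start = time.perf_counter()
for _ in range(N):
    libc.syscall(SYS_getpid)  # one full user -> kernel -> user round trip
elapsed = time.perf_counter() - start
print(f"~{elapsed / N * 1e9:.0f} ns per getpid() syscall round trip")
```

Running this before and after enabling page-table isolation is a quick way to put a number on the per-syscall overhead on your own hardware.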
You may have to learn to live with this for a while. Even if it takes Intel a month to design and validate a fix for Meltdown, prototype and mass-production turnaround times mean that no customer will have a processor that isn't vulnerable to Meltdown until April-June 2019.
PCIe signals are generated by transceivers -- devices within chips that specialize in signal conditioning, e.g. echo cancellation, emphasis/de-emphasis, and dynamic impedance matching. These transceivers, and the analog and digital techniques they implement, get better with time. This is easily measurable by looking at the bit error rate (BER) of the data, or at eye diagrams (see slide 15). As data rates increase, things like drive strengths, impedance mismatches, and a number of other properties of the silicon "close the eye," meaning the transmitted "0"s and "1"s are no longer different enough for a receiver to distinguish them often enough to successfully decode a packet. (PCIe is packet based; it's surprisingly similar to Ethernet.) But essentially, as our understanding of and processes for manufacturing semiconductor devices improve, we're able to "open the eye" more, at which point the industry decides to increase data rates.
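A toy illustration of "closing the eye": model the link as NRZ symbols plus Gaussian noise and a simple threshold slicer. The amplitudes and noise levels below are made up; the trend (BER exploding as the eye closes) is the point.

```python
import random

random.seed(0)

def ber(amplitude, noise_sigma, n=100_000):
    """Send n random NRZ symbols at +/-amplitude, add Gaussian noise,
    slice at 0 V, and return the fraction of decision errors."""
    errors = 0
    for _ in range(n):
        bit = random.choice((-1, 1))
        rx = bit * amplitude + random.gauss(0.0, noise_sigma)
        if (rx > 0) != (bit > 0):
            errors += 1
    return errors / n

# A wide-open eye (large swing relative to noise) decodes cleanly...
print(ber(amplitude=1.0, noise_sigma=0.1))  # effectively zero
# ...while a closing eye pushes the BER up to a few percent.
print(ber(amplitude=1.0, noise_sigma=0.5))
```

Real transceiver work is about pushing that second case back toward the first at ever-higher symbol rates.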
The TL;DR is that there's no silver bullet but lots of lead bullets. The big gains in PCIe from generation to generation are the result of accumulating lots of smaller gains in other places.
It helps that this isn't happening just for PCIe; there's lots of breakthroughs that benefit (and may have originated with) other high speed links.
The nature of the problems is different, but fundamentally the same two questions apply: "how do you distinguish between a 1 and a 0 in the presence of noise?" and "can the receiver and transmitter change state fast enough?"
Optical PCIe would be hugely handicapped by lack of a standard optical PCB construction method. You'd have to print waveguides onto the PCB. And then it stops working if you get dust in the socket.
Eventually yes, we can expect to see optical in desktops. Fiber connections are "better" but also much more expensive. The price should eventually come down as volumes increase. The main delaying factor is that copper is still good enough.
Multimedia is the driving force behind increased data usage, and I think we'll continue to need more throughput until we no longer get any benefit from higher resolutions (i.e. when we have substantially more pixels than rods and cones in our eyes). At the moment a phone with a 4K display saturates your eyes at any distance greater than 2 feet from your face. I think an x16 PCIe 4.0 link will likely provide more than enough bandwidth to generate fully immersive VR experiences, so the question then becomes... why and when will we need optical PCIe 5.0 to double the datarate of PCIe 4.0...
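For scale, the per-generation numbers work out as follows. The raw rates and line codes are from the published PCIe specs; the helper is just arithmetic:

```python
# Per-lane raw rate (GT/s) and line-code efficiency per PCIe generation.
GEN = {
    "3.0": (8.0,  128 / 130),   # 8 GT/s, 128b/130b encoding
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

def throughput_gbytes(gen, lanes=16):
    """Usable link throughput in GB/s: GT/s * efficiency * lanes / 8."""
    gt_per_s, efficiency = GEN[gen]
    return gt_per_s * efficiency * lanes / 8

for gen in GEN:
    print(f"PCIe {gen} x16: ~{throughput_gbytes(gen):.1f} GB/s")
```

So an x16 gen 4 link lands at roughly 31.5 GB/s usable, before protocol overhead.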
I doubt this is true, as HF is also modulated onto the wire ... it's not the electrons per se that wander, but the modulated HF signal. Sure, the electrons are the carrying medium, but it's not like a stream of water.
Basically, in the optical regime it isn't possible to propagate the E field in a conduit smaller than the wavelength, because a small conduit doesn't have the right boundary conditions to support the fields (like trying to fit waves into a didgeridoo). So you're always going to have these massive, massive ~1 µm structures compared to state-of-the-art nm-scale semiconductors.
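A back-of-the-envelope number for how big those structures have to be, using a half-wavelength-in-the-medium rule of thumb (not a mode solver; the wavelength and refractive index below are just telecom light in silicon):

```python
def min_core_width_nm(wavelength_nm, n_core):
    """Rough smallest guide width that still supports the fundamental
    mode: about half a wavelength as measured inside the medium."""
    return wavelength_nm / (2 * n_core)

# 1550 nm light in silicon (n ~ 3.48): still a couple hundred nm wide,
# enormous next to sub-10 nm transistor features.
print(f"~{min_core_width_nm(1550, 3.48):.0f} nm minimum waveguide width")
```

Even with the highest-index materials available, the guide stays two orders of magnitude larger than a modern transistor gate.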
There are advantages to optics, including that light moves faster than electrons (important for HPC, where the figure of merit is latency in µs between nodes, etc.) and typically has higher fidelity. But the size of these structures is orders of magnitude larger than conventional semiconductors.
Been working on this for a few months, and just added the finishing touches. It's a high-performance ADS-B receiver that can detect and correct multiple bit errors and packet collisions. The FPGA offload allows a Raspberry Pi to process samples in real time, whereas a recent i7 would not be able to keep up without the hardware offload.
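The error detection at the heart of this is the Mode S 24-bit CRC. A minimal bit-serial sketch, assuming the standard Mode S generator polynomial 0xFFF409 (the actual decoder here is VHDL; this is just the reference arithmetic, and the payload value is an arbitrary example):

```python
POLY = 0xFFF409  # low 24 bits of the Mode S generator polynomial

def crc24(bits):
    """Bit-serial CRC-24 over a sequence of 0/1 bits, MSB first."""
    reg = 0
    for b in bits:
        feedback = ((reg >> 23) & 1) ^ b
        reg = (reg << 1) & 0xFFFFFF
        if feedback:
            reg ^= POLY
    return reg

def to_bits(value, width):
    """Expand an integer into `width` bits, MSB first."""
    return [(value >> i) & 1 for i in range(width - 1, -1, -1)]

# Sanity check: appending a frame's CRC to the frame must leave a zero
# remainder -- this is exactly the check a receiver performs.
frame = to_bits(0x8D4840D6202CC371C32CE0, 88)  # arbitrary demo payload
r = crc24(frame)
print(f"CRC24 = 0x{r:06X}")
print(crc24(frame + to_bits(r, 24)))  # 0 for an uncorrupted frame
```

Error correction then amounts to finding the small set of bit flips that drives that remainder back to zero, which is what the FPGA brute-forces in parallel.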
I'm planning on doing a mini-series explaining the MATLAB, C, and VHDL design flow behind building a high-performance hardware modem. The first article can be found here: https://www.nuand.com/blog/bladerf-vhdl-ads-b-decoder/
Questions and suggestions are welcome; I'd like to use them to improve my writing style and this mini-series!