> Many projects including Chromium, boringssl, Firefox, and Rust call SystemFunction036 from advapi32.dll because it worked on versions older than Windows 8.
That's not true. They use ProcessPrng now that versions earlier than 10 are no longer supported (well, Rust also has a Windows 7 target, but that couldn't use ProcessPrng anyway since it wasn't available there). The issue they linked is from a decade ago. E.g. here's Chromium: https://github.com/chromium/chromium/blob/dc7016d1ef67e3e128...
> If [ProcessPrng] fails it returns NO_MEMORY in a BOOL (documented behavior is to never fail, and always return TRUE).
From Windows 10 onward ProcessPrng will never fail. There's a whitepaper that gives the justification for this (https://aka.ms/win10rng):
> We also have the property that a request for random bytes never fails. In the past our RNG functions could return an error code. We have observed that there are many callers that never check for the error code, even if they are generating cryptographic key material. This can lead to serious security vulnerabilities if an attacker manages to create a situation in which the RNG infrastructure returns an error. For that reason, the Win10 RNG infrastructure will never return an error code and always produce high-quality random bytes for every request...
> For each user-mode process, we have a (buffered) base PRNG maintained by BCryptPrimitives.dll. When this DLL loads it requests a random seed from kernel mode (where it is produced by the per-CPU states) and seeds the process base PRNG. If this were to fail, BCryptPrimitive.dll fails to load, which in most cases causes the process to terminate. This behavior ensures that we never have to return an error code from the RNG system.
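For reference, here's roughly what calling it looks like. This is only a sketch, not any project's actual code: the BOOL(PBYTE, SIZE_T) signature matches what Chromium and Rust declare, but I'm resolving it with GetProcAddress here instead of linking the import directly.

```cpp
#include <windows.h>

#include <cstddef>

// Sketch only: ProcessPrng ships in bcryptprimitives.dll but has no public
// header, so callers declare the signature themselves. Per the whitepaper,
// on Windows 10+ it never fails.
using ProcessPrngFn = BOOL(WINAPI*)(PBYTE data, SIZE_T len);

bool fill_random(unsigned char* buf, std::size_t len) {
    static const ProcessPrngFn process_prng = [] {
        HMODULE mod = ::LoadLibraryW(L"bcryptprimitives.dll");
        return mod ? reinterpret_cast<ProcessPrngFn>(
                         ::GetProcAddress(mod, "ProcessPrng"))
                   : nullptr;
    }();
    return process_prng != nullptr && process_prng(buf, len) != FALSE;
}
```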
I think you're too stuck on the current implementation. Work is ongoing to investigate how to evolve the standard library across editions. The "easiest" win would be a way to do edition-dependent re-exports of types.
> Aurich Lawson of Ars Technica deleted the original article
That's a very "shoot the messenger" statement. While Aurich is the community "face" of Ars, I very much doubt he has the power to do anything like that.
HTML diverged from SGML pretty early on. Various standards over the years have attempted to specify it as an application of SGML but in practice almost nobody properly conformed to those standards. HTML5 gave up the pretence entirely.
> It does mention Windows NT but honestly nobody really cared about that until NT 3.0/3.5 and it soon thereafter became Windows XP and laid the foundation for modern Windows.
Fun fact: NT 3.1 was the first version of NT, released in 1993. It was versioned like that to match Windows 3.1 which had been released the previous year.
And NT really took off with Windows 2000. Not just business people but more ordinary people were using it as a more stable alternative to Windows 95/98 (albeit lacking some compatibility, especially with games).
Eh. I don't think so. Sure, it was used more in business contexts but it wasn't until Windows 2000 that I saw more regular people using it and recommending it.
This article isn't about languages. It's about the protocol for two or more languages to talk to each other. There is no specification for this.
The System V ABI is as close as we get to an actual specification but not everyone uses it and in any case it only covers a small part of the protocol.
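To make it concrete: the closest thing to a shared protocol most languages actually agree on is the platform's C calling convention. A tiny sketch (the function is made up for illustration):

```cpp
// extern "C" turns off C++ name mangling so the symbol can be found and called
// from Rust, Python (ctypes), Go (cgo), etc. The platform ABI (e.g. System V
// on Linux/x86-64) then dictates how the arguments and return value are passed.
extern "C" int add_i32(int a, int b) {
    return a + b;
}
```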
This is a meme that gets repeated often, but it's not really true. If you disagree, please state specifically what property of the PDP-11 you think is different from how modern computers work, and where this affects C but not other languages.
In a nutshell, the useful fiction of the computer as a Von Neumann machine doesn't adequately reflect the reality of modern hardware. Not only does the CPU itself not fit that model (with things like speculative execution, sophisticated power and load management…), but the system as a whole is increasingly an amalgamation of different processors and address spaces.
C compilers can emit SIMD instructions just fine and often have extensions to support writing them explicitly. Also, few other languages have had explicit support for them from the start; most have added them as some kind of extension later. So the idea that this is some fundamental computer architecture thing C got wrong seems pretty far-fetched. Support for multi-core processing would be a more plausible thing to look at, but even there C still does quite well.
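For instance, a minimal sketch (assuming an x86 target; the intrinsics are a compiler extension, not part of the standard):

```cpp
#include <immintrin.h>  // x86 intrinsics; an extension, not standard C or C++

// Explicit SSE: the same compilers will also auto-vectorize the equivalent
// plain loop at typical optimization levels.
void add4(const float* a, const float* b, float* out) {
    __m128 va = _mm_loadu_ps(a);               // load 4 unaligned floats
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb));    // 4 additions in one instruction
}
```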
> there is no guarantee `char` is 8 bits, nor that it represents text, or even a particular encoding.
True, but sizeof(char) is defined to be 1. In section 7.6.2.5:
"The result of sizeof applied to any of the narrow character types is 1"
In fact, char and associated types are the only types in the standard where the size is not implementation-defined.
So the only way that a C++ implementation can conform to the standard and have a char type that is not 8 bits is if the size of a byte is not 8 bits. There are historical systems that meet that constraint but no modern systems that I am aware of.
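If code does depend on the stronger 8-bit assumption, it can at least state it up front; a trivial sketch:

```cpp
#include <climits>

// Guaranteed by the standard: char is exactly one byte.
static_assert(sizeof(char) == 1, "holds on every conforming implementation");

// Not guaranteed: that a byte is 8 bits. Asserting it makes the assumption
// fail loudly on the rare CHAR_BIT != 8 targets instead of misbehaving.
static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");
```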
char8_t also isn't guaranteed to be 8 bits, because it's defined to have the same size as unsigned char. On a platform where char is 16 bits, char8_t will be 16 bits as well.
The C++ standard explicitly says that it has the same size, signedness, and alignment as unsigned char, but it's a distinct type. So it's pretty useless, and badly named.
Wouldn't it rather be the case that char8_t just wouldn't exist on that platform? At least that's the case with the uintN_t types; they are just not available everywhere. If you want something that is always available you need to use uint_leastN_t or uint_fastN_t.
It is pretty consistent. It is part of the C++ standard and a feature meant to make string handling better; it would be crazy if it wasn't a complete clusterfuck.
> There's no guarantee char8_t is 8 bits either, it's only guaranteed to be at least 8 bits.
Have you read the standard? It says: "The result of sizeof applied to any of the narrow character types is 1." Here, "narrow character types" includes char and char8_t. So technically they aren't guaranteed to be 8 bits, but they are guaranteed to be one byte.
Well, platforms with CHAR_BIT != 8. In C and C++, char (and therefore a byte) is at least 8 bits, not exactly 8 bits. POSIX does force CHAR_BIT == 8. I think the only place you still see otherwise is embedded, and there mostly on DSP- or ASIC-like devices. So in practice most code will break on those platforms, and they are very rare, but they are still technically supported by the C and C++ standards. Similar to how C still supported non-two's-complement architectures until C23.
That's where the standard should come in and say something like "starting with C++26 char is always 1 byte and signed. std::string is always UTF-8" Done, fixed unicode in C++.
But instead we get this mess. I guess it's because there's too much Microsoft in the standard, and they're the only ones that still don't have UTF-8 everywhere in Windows.
Of course it can be made UTF-8. Just add a codepoints_size() method and other helpers.
But it isn't really needed anyway: I'm using it for UTF-8 (with helper functions for the 1% of cases where I need code points) and it works fine. Starting with C++20, though, it's getting annoying because I have to reinterpret_cast to the useless u8 versions.
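Concretely, the annoyance looks like this (a sketch; the literal is arbitrary):

```cpp
#include <string>

// In C++20 a u8"..." literal has type const char8_t[N], so storing it in a
// plain std::string (or passing it to an API taking const char*) needs a cast.
std::string s(reinterpret_cast<const char*>(u8"grüße"));
```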
First, because of existing constraints like mutability through direct buffer access, a hypothetical codepoints_size() would have to recompute its result each time, which would be prohibitively expensive, in particular because std::string is virtually unbounded in size.
Second, there is also no way to be able to guarantee that a string encodes valid UTF-8, it could just be whatever.
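To spell out what such a hypothetical (non-standard) helper would have to do, and why it can't be O(1):

```cpp
#include <cstddef>
#include <string_view>

// Count UTF-8 code points by counting bytes that are not continuation bytes
// (0b10xxxxxx). Assumes the contents are valid UTF-8; it's an O(n) pass over
// the whole buffer, which is exactly why caching the result inside a mutable
// std::string would be awkward.
std::size_t codepoints_size(std::string_view s) {
    std::size_t n = 0;
    for (unsigned char c : s) {
        if ((c & 0xC0) != 0x80) ++n;
    }
    return n;
}
```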
You can still just use std::string to store validly encoded UTF-8, you just have to be a little bit careful. And functions like codepoints_size() are pretty fringe -- unless you're doing specialized Unicode transformations, it's more typical to just treat strings as opaque byte slices in a typical C++ application.
I never said always. Just add some new methods for which it has to be UTF-8. All current functions that need an encoding (e.g. text IO) also switch to UTF-8.
Of course you could still save arbitrary binary data in it.
> That's where the standard should come in and say something like "starting with C++26 char is always 1 byte and signed. std::string is always UTF-8" Done, fixed unicode in C++.
> I never said always
Yes you totally did. And regarding "add some new methods for which it has to be UTF-8", there is no need at all to add UTF-8 methods to std::string. It would be a bad idea. UTF-8 is not bound to a particular type (or C++ type). It works on _any_ byte sequence.
Good point, so maybe the standard should just add some functions that take std::string_view. Definitely not add a whole new class like std::u8string ...
Is there a single esoteric DSP in active use that supports C++20? This is the umpteenth time I've seen DSPs brought up in casual conversations about C/C++ standards, so I did a little digging:
Aside from that, from what I can tell, those esoteric architectures are being phased out in favor of running DSP workloads on Cortex-M, which is just ARM.
I'd love it if someone who was more familiar with DSP workloads would chime in, but it really does seem that trying to be the language for all possible and potential architectures might not be the right play for C++ in 202x.
Besides, it's not like those old standards or compilers are going anywhere.
Cadence DSPs have a C++17-compatible compiler and will get C++20 soon; the new CEVA cores do as well (both are Clang-based).
TI C7x is still C++14 (C6000 is an ancient core, yet it still got C++14 support, as you mentioned).
AFAIR the Cadence ASIP generator will give you a C++17 toolchain, and C++20 is on the roadmap, but I'm not 100% sure.
But for those devices you use a limited subset of language features, and you would be better off not linking the C++ stdlib or even the C stdlib at all (so junior developers don't have room to do stupid things ;))
How common is it to use Green Hills compilers for those DSP targets? I was under the impression that their bread was buttered by more-familiar-looking embedded targets, and more recently ARM Cortex.
Dunno! My last project there was to add support for one of the TI DSPs, but as I said, that's decades past now.
Anyway, I think there are two takeaways:
1. There probably do exist non-8-bit-byte architectures targeted by compilers that provide support for at-least-somewhat-recent C++ versions
2. Such cases are certainly rare
Where that leaves things, in terms of what the C++ standard should specify, I don't know. IIRC JF Bastien or one of the other Apple folks that's driven things like "two's complement is the only integer representation C++ supports" tried to push for "bytes are 8 bits" and got shot down?
Judging by the lack of modern C++ in these crufty embedded compilers, maybe modern C++ is throwing too much good effort after bad. C++03 isn't going away, and it's not like these compilers always stuck to the standard anyway in terms of runtime type information, exceptions, and full template support.
Besides, I would argue that the selling point of C++ wasn't portability per se, but the fact that it was largely compatible with existing C codebases. It was embrace, extend, extinguish in language form.
> Judging by the lack of modern C++ in these crufty embedded compilers,
Being conservative with features and deliberately not implementing them are two different things. Some embedded compilers go through certification to be allowed to be used for producing mission-critical software. Chasing features is prohibitively expensive, for no obvious benefit. I'd bet that in the 2030s most embedded compilers will support C++14 or even 17. Good enough for me.
> Being conservative with features and deliberately not implementing them are two different things.
There is no version of the C++ standard that lacks features like exceptions, RTTI, and fully functional templates.
If the compiler isn't implementing all of a particular standard then it's not standard C++. If an implementation has no interest in standard C++, why give those implementations a seat at the table in the first place? Those implementations can continue on with their C++ fork without imposing requirements on anyone else.
> Then they will diverge too much, like it happened with countless number of other languages, like Lisp.
Forgive me if I am unconvinced that the existence of DSP-friendly dialects of C++ will cause the kinds of language fracturing that befell Lisp.
DSP workloads are relatively rare compared to the other kinds of workloads C++ is tasked with, and even in those instances a lot of DSP work is starting to be done on more traditional architectures like ARM Cortex-M.
A cursory Chromium code search does not find anything outside third_party/ forcing either signed or unsigned char.
I suspect if I dug into the archives, I'd find a discussion on cxx@ with some comments about how doing this would result in some esoteric risk. If I were still on the Chrome team I'd go looking and see if it made sense to re-raise the issue now; I know we had at least one stable-branch security bug this caused.
Related: in C at least (C++ standards are tl;dr), type names like `int32_t` are not required to exist. Most uses, in portable code, should be `int_least32_t`, which is required.
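A tiny example of the distinction (compiles as both C and C++):

```cpp
#include <stdint.h>

// int32_t is optional: it exists only when the implementation has an exact
// 32-bit, two's-complement type with no padding bits. The _least (and _fast)
// variants are required everywhere, so portable code prefers them.
int_least32_t total = 0;   // at least 32 bits, always available
```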