This was my experience of Ada as well. A beautiful language that somehow seemed to combine the best parts of Haskell and C, but so difficult to find documentation for online. C++ has its footguns, but it's hard not to learn them all from the background noise alone. The tooling and stackoverflow-ability make C++ feel as fast to develop in as a scripting language compared to Ada.
This lists Zig as an entry, despite the Zig project having very clear plans[0] for a 1.0 release. That's not 0ver, it's just the beta stage of semver.
I think I mostly agree, but I do have one war story of using a C++ library (Apache Avro) that parsed data and exposed a "get next std::string" method. When parsing a file, all the data ended up set to the last string in the file. I could see each string being returned correctly in a debugger, but once the next call to that method was made, all previous local variables were now set to the new string. I never looked too far into it, but it seemed pretty clear there was a bug in that library that was messing with the internals of std::string (which, if I understand correctly, is essentially a pointer to a data buffer). It was likely re-using the same data buffer for different std::string objects, which shouldn't be possible under the std::string "API contract". It was a pain to debug because of how "private" std::string's internals are.
In other words, we can at best form API contracts in C++ that work 99% of the time.
In fairness there are also several ambiguities with JSON. How do you handle multiple copies of the same key? Does the order of keys have semantic meaning?
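For what it's worth, JavaScript's own parser picks one answer to the duplicate-key question: the last occurrence silently wins. A quick check:

```javascript
// JSON.parse keeps the last duplicate key; no error, no warning.
const dup = JSON.parse('{"a": 1, "a": 2, "a": 3}');
console.log(dup.a); // 3
```

Other parsers make other choices (some throw, some keep the first), which is exactly the interoperability problem.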
jq supports several pseudo-JSON formats that are quite useful like record separator separated JSON, newline separated JSON. These are obviously out of spec, but useful enough that I've used them and sometimes piped them into a .json file for storage.
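Part of why newline-delimited JSON is so popular despite being out of spec is that it's trivial to consume by hand. A minimal sketch:

```javascript
// Parse newline-delimited JSON ("NDJSON"): one JSON value per line.
// Works because a serialized JSON value contains no raw newlines.
function parseNdjson(text) {
  return text
    .split('\n')
    .filter(line => line.trim() !== '') // skip blank lines
    .map(line => JSON.parse(line));
}

const records = parseNdjson('{"id": 1}\n{"id": 2}\n');
// records → [{ id: 1 }, { id: 2 }]
```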
Also, encoding things like IEEE NaN/Infinity and raw byte arrays has to be done in proprietary ways.
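The NaN/Infinity gap is easy to demonstrate: `JSON.stringify` silently degrades them to `null`, so a round trip loses the values entirely:

```javascript
// IEEE special values have no JSON representation; stringify emits null.
const out = JSON.stringify({ x: NaN, y: Infinity, z: -Infinity });
console.log(out); // '{"x":null,"y":null,"z":null}'

// Raw bytes fare no better: typed arrays serialize as index-keyed objects.
console.log(JSON.stringify(new Uint8Array([1, 2]))); // '{"0":1,"1":2}'
```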
> The JSON syntax does not impose any restrictions on the strings used as names, does not require that name strings be unique, and does not assign any significance to the ordering of name/value pairs.
That IS unambiguous.
And for more justification:
> Meaningful data interchange requires agreement between a producer and consumer on the semantics attached to a particular use of the JSON syntax. What JSON does provide is the syntactic framework to which such semantics can be attached
> JSON is agnostic about the semantics of numbers. In any programming language, there can be a variety of number types of various capacities and complements, fixed or floating, binary or decimal.
> It is expected that other standards will refer to this one, strictly adhering to the JSON syntax, while imposing semantics interpretation and restrictions on various encoding details. Such standards may require specific behaviours. JSON itself specifies no behaviour.
It all makes sense when you understand JSON is just a specification for a grammar, not for behaviours.
> and does not assign any significance to the ordering of name/value pairs.
I think this is outdated? I believe that the order is preserved when parsing into a JavaScript Object. (Yes, Objects have a well-defined key order. Please don't actually rely on this...)
> Valid JSON text is a subset of the ECMAScript PrimaryExpression syntax. Step 2 verifies that jsonString conforms to that subset, and step 10 asserts that that parsing and evaluation returns a value of an appropriate type.
And in the algorithm
c. Else,
i. Let keys be ? EnumerableOwnProperties(val, KEY).
ii. For each String P of keys, do
1. Let newElement be ? InternalizeJSONProperty(val, P, reviver).
2. If newElement is undefined, then
a. Perform ? val.[[Delete]](P).
3. Else,
a. Perform ? CreateDataProperty(val, P, newElement).
If you theoretically (not practically) parse a JSON file into a normal JS AST then loop over it this way, because JS preserves key order, it seems like this would also wind up preserving key order. And because it would add those keys to the final JS object in that same order, the order would be preserved in the output.
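You can observe that from the reviver, which is driven by exactly that loop: keys arrive in document order (children first, then the root under the empty-string key, skipped here):

```javascript
// The reviver sees keys in the order EnumerableOwnProperties yields them,
// which for a flat object is insertion (i.e. document) order.
const seen = [];
JSON.parse('{"b": 1, "a": 2, "c": 3}', (key, value) => {
  if (key !== '') seen.push(key); // '' is the root value's key
  return value;
});
console.log(seen); // ['b', 'a', 'c']
```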
> (Yes, Objects have a well-defined key order. Please don't actually rely on this...)
JS added this in 2009 (ES5) because browsers already did it and loads of code depended on it (accidentally or not).
There is theoretically a performance hit to using ordered hashtables. That doesn't seem like such a big deal with hidden classes except that `{a:1, b:2}` is a different inline cache entry than `{b:2, a:1}` which makes it easier to accidentally make your function polymorphic.
In any case, you are paying for it, you might as well use it if (IMO) it makes things easier. For example, `let copy = {...obj, updatedKey: 123}` is relying on the insertion order of `obj` to keep the same hidden class.
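Concretely: the spread walks `obj` in insertion order, and reassigning an existing key updates its value without moving its position, so the copy ends up with `obj`'s exact key order:

```javascript
const obj = { updatedKey: 1, other: 2 };
const copy = { ...obj, updatedKey: 123 };

// Key order comes from obj's insertion order, not the literal's layout.
console.log(Object.keys(copy)); // ['updatedKey', 'other']
console.log(copy.updatedKey);   // 123
```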
I-JSON (short for "Internet JSON") is a restricted profile of JSON designed to maximize interoperability and increase confidence that software can process it successfully with predictable results.
So it's not JSON, but a restricted version of it.
I wonder if use of these restrictions is popular. I had never heard of I-JSON.
I think it's rare for them to be explicitly stated, but common for them to be present in practice. I-JSON is just an explicit list of these common implicit limits. For any given tool/service that describes itself as accepting JSON, I would expect I-JSON documents to be more likely to work as expected than non-I-JSON.
> How do you handle multiple copies of the same key? Does the order of keys have semantic meaning?
This is an issue in JavaScript too, due to the way key order works there.
> record separator separated JSON, newline separated JSON.
There is also JSON with no separators, although that will not work very well if any of the top-level values are numbers.
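A naive shortest-prefix splitter shows exactly why top-level numbers break: the text `12` cannot be distinguished from `1` followed by `2`. This is a hypothetical sketch, not production code:

```javascript
// Split concatenated JSON by repeatedly parsing the shortest valid prefix.
function parseConcatenated(text) {
  const values = [];
  let rest = text.trim();
  while (rest.length > 0) {
    let parsed = false;
    for (let i = 1; i <= rest.length; i++) {
      try {
        values.push(JSON.parse(rest.slice(0, i)));
        rest = rest.slice(i).trim();
        parsed = true;
        break;
      } catch {
        // prefix not yet a complete JSON value; keep extending it
      }
    }
    if (!parsed) throw new Error('unparseable trailing data');
  }
  return values;
}

parseConcatenated('{"a":1}{"b":2}'); // [{ a: 1 }, { b: 2 }]
parseConcatenated('12');             // [1, 2] — not [12]!
```

Objects, arrays, and strings are self-delimiting, so they split unambiguously; bare numbers are the one case where the boundary is genuinely lost.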
> Also, encoding things like IEEE NaN/Infinity, and raw byte arrays has to be in proprietary ways.
Yes, as well as non-Unicode text (including (but not limited to) file names on some systems), and (depending on the implementation) 64-bit integers and big integers. Possibly also date/time.
I think DER avoids these problems. You can specify whether or not the order matters, and you can store Unicode and non-Unicode text, NaN and Infinity, raw byte arrays, big integers, and date/time. It avoids some other problems as well, such as canonicalization: DER is already in canonical form. (I have a variant of DER that drops some of the excessive date/time types and adds a few additional types, but this does not affect the framing, which can still be parsed in the same way.)
A variant called "Multi-DER" could be made up, which is simply concatenating any number of DER files together. Converting Multi-DER to BER is easy just by adding a constant prefix and suffix. Converting Multi-DER to DER is almost as easy; you will need the length (in bytes) of the Multi-DER file and then add a prefix to specify the length. (In none of these cases does it require parsing or inspecting or modifying the data at all. However, converting the JSON variants into ordinary JSON does require inspecting the data in order to figure out where to add the commas.)
`JSON.parse` actually does give you that option via the `reviver` parameter, which gives you access to the original string of digits (to pass to `BigInt` or the number type of your choosing) – so per this conversation fits the "good parser" criteria.
To be specific (if anyone was curious), you can force BigInt with something like this. Note that the third `context` argument to the reviver comes from the newer JSON.parse source-text access feature, so it needs a recent engine:

    // MAX_SAFE_INTEGER is 9007199254740991, which is 16 digits, so a
    // >15-digit check errs on the side of BigInt; for exact precision you
    // could compare 16-digit sources against "9007199254740991" digit by digit.
    const bigIntReviver = (key, value, context) =>
      typeof value === 'number' && Number.isInteger(value) && context.source.length > 15
        ? BigInt(context.source)
        : value
    const jsonWithBigInt = x => JSON.parse(x, bigIntReviver)
Generally, I'd rather throw if a number is unexpectedly too big; otherwise you'll mess up the types throughout the system (the field may not be monomorphic) and outright fail if you try to use math functions not available to BigInts.
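A sketch of that stricter approach: reject unexpectedly large integers at the parse boundary instead of silently changing the field's type:

```javascript
// Throw on integer values that fall outside the exactly-representable
// range of a 64-bit float, rather than silently promoting them to BigInt.
const strictIntReviver = (key, value) => {
  if (typeof value === 'number' && Number.isInteger(value) && !Number.isSafeInteger(value)) {
    throw new RangeError(`integer out of safe range at key "${key}"`);
  }
  return value;
};

JSON.parse('{"id": 42}', strictIntReviver); // fine: { id: 42 }
// JSON.parse('{"id": 9007199254740993}', strictIntReviver) // throws RangeError
```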
Sorry yes, I was thinking of the context object with its source property.
The issue it solves is a big one though: without it, JSON.parse cannot faithfully parse numbers larger than a 64-bit float can exactly represent (e.g. big integers).
> I would look into countries where euthanasia has been already implemented.
That's what puts me off of the idea in the first place. Cases like Christine Gauthier (a former army corporal and paralympian) who was offered euthanasia when trying to seek government disability benefits to install a wheelchair ramp. If it takes someone with existing fame to speak out about this, how many more people has this been pushed on?
> from the utilitarianism view - allowing euthanasia will prevent much more suffering than it will cause.
I'm not totally convinced. I haven't run the numbers, and this also certainly takes into account my personal views on valuing life and family, but I do fear more pain and suffering will come with legal euthanasia than it will solve.
Just look at the end of the article. It gives several examples of the kind of thing that allows me as a utilitarian to say that the suffering of a few terminally ill is not as bad as the harassment of countless vulnerable people.
Should we keep medical assistance in dying illegal because bad eggs offer it outside the legal framework of their job in bad faith?
The Christine Gauthier case is used to justify the idea that the government will use it to reduce spending. What happened to her is appalling, but it was absolutely not something the government employee who offered it to her had any legal permission to do.
What the Quebec law regarding medical assistance in dying does is guarantee its existence as a medical act. It does not allow any low-level government employee to offer it willy-nilly to anyone. It is a medical act, reserved to doctors, to discuss assistance in dying.
> Just look at the end of the article. It gives several examples of the kind of thing that allows me as a utilitarian to say that the suffering of a few terminally ill is not as bad as the harassment of countless vulnerable people.
Countless vulnerable people haven't been harassed. There are 12 documented cases in the history of MAID in Canada where someone was allegedly offered MAID inappropriately. There have been inquests and reports that have counted them. Not one resulted in a death. Christine Gauthier's experience couldn't be substantiated when they reviewed her records, but they did find in that investigation that a single case worker had offered MAID to 4 veterans.
On the other hand there have been over 50k successful petitions for MAID most of which were for people with Cancer.
As a utilitarian, you should presumably look at the actual numbers, and balance the tens of thousands of people who chose not to suffer agonizing deaths against the 12 documented cases of people who were offered MAID as an option when they think they shouldn't have been.
I think these are valid concerns, but I would also say that there is an underlying issue with medical malpractice and disregard for the suffering and needs of certain groups of society which we tend to brush under the rug. I'm going to assume the concerns you have probably don't stop at just euthanasia - mine definitely don't, and I worry that a ban just makes the issue more... abstract, and PR-friendly.
If an individual in a difficult life situation comes to the state for help as a last resort, and there is a chance the representative they are assigned would recommend they should consider just dying as their last resort, the state has already failed to protect someone vulnerable, and obviously won't be giving them the help they deserve/need/should be entitled to as a human.
Any wrongful death is horrible, but I sincerely believe a "representative" like this and the harm they inflict is going to have an almost identical death toll, even if it's by way of consigning people to sub-human lives of physical or mental torment instead of pushing them towards a tool that "everyone" understands we need to keep a close eye on. My utilitarian take would be that many would happily extend the torment of the terminally ill and suffering, as long as they don't have to deal with the suffering their neglect already inflicts on countless vulnerable people and the terminally ill. (For clarity, I don't mean to imply that's your motivation here!)
> If an individual in a difficult life situation comes to the state for help as a last resort, and there is a chance the representative they are assigned would recommend they should consider just dying as their last resort, the state has already failed
Medical assistance in dying is a medical act, reserved to doctors. Just like a car salesman can't legally recommend you an abortion. No one in the government has the legal right to discuss it, even less offer it.
Yes, my point was that that person having a position where they are able to do that is already wrong. If a car salesman was telling every woman that came in they should get an abortion, there are places that person should be, and none of them are a car dealer's.
There is in other areas of copyright law, like romhacks and action replay codes. Romhacks seem like a very grey area but generally don't get DMCAed when they distribute large binary patch files of the original roms. And "Lewis Galoob Toys, Inc. v. Nintendo of America, Inc." would imply that the dead simple 16 byte[0] "patch files" in the form of game genie codes are legal.
To take a more practical example: is there no meaningful difference between the dwm multimon patch files[1] and the full forked repo[2]? For context, lots of suckless software keeps extra features/addons in semi-official out-of-tree patch files. The suckless philosophy is generally to hardcode config options in source code and recompile instead of editing .rc files. This reduces the complexity of the code, so you end up with very minimalistic code that's easy to patch and recompile. So it's a natural (if very esoteric) way of implementing plugins.
Obviously this is a bit contrived because all the suckless code is actually open source, so none of this matters to them. But I think it's fair to say that distributing the 7 .patch files at [1] wouldn't count as distributing a forked version of dwm. The patch files contain some context lines ripped straight from the main codebase, but not the main repo. Hell I'd even wonder if there's some kind of fair use argument for patch files. After all, often they boil down to a criticism of the codebase, saying that it's bad because it contains all the lines of code starting with '-' signs and really would be better if it had these extra lines of code after the '+' signs.
The license doesn't seem contradictory to me. Counter-intuitive, unclear, and paradoxical (in the most general sense of the word), yes. But not contradictory.
That's kind of almost been me with eurorack. It is in many ways just a toy to play with. But it's quite a lot of fun, and it's allowed me to experience what it's like to create music far further than just playing sheet music on a piano did. The OP-1 does seem much more focused on actually making the end product of a song than just playing around with sound design on eurorack though, so I'd probably react to it the same way as the OP.
I'd recommend anyone interested in any of this have a play with Cardinal[1] for free. But I will say that it's a lot easier to play with the real thing. The difference reminds me a lot of when I was really into Rubik's cubes and how much easier it is to learn the different variants of twisty puzzles with the physical puzzle in hand, compared to the computerised versions that required clicking and dragging and felt removed from the real thing. So for that reason I'd say it has been worth actually getting eurorack for me. But even so, I've barely played any real songs on the system.
It's funny seeing this after the arguments in the mpv thread a few days ago[1] over whether VLC's extra bloat (and lack of features like stepping back a frame) is justified by how well it copes with diverse formats. This file doesn't play correctly for me in mpv[2], but does fine in VLC.