Celestial navigation is still based on a geocentric coordinate system. Modern astronomical ephemerides use the Tychonic model--the sun is modeled as revolving around the Earth, the other planets as revolving around the sun.
Mathematically, in a two-body system, there's no actual difference between saying body A orbits body B or saying body B orbits body A, so in some sense, it's not even wrong.
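One way to see this: in the relative coordinate $\mathbf{r} = \mathbf{r}_A - \mathbf{r}_B$, Newton's equations for the two bodies reduce to

$$\ddot{\mathbf{r}} = -\frac{G(m_A + m_B)}{|\mathbf{r}|^3}\,\mathbf{r}$$

which is symmetric in A and B (swapping them just flips the sign of $\mathbf{r}$), so "A orbits B" and "B orbits A" describe the same relative motion.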
> Mathematically, in a two-body system, there's no actual difference between saying body A orbits body B or saying body B orbits body A, so in some sense, it's not even wrong.
This isn't what the geocentric model claimed, though. It went beyond just a choice of reference frame, which, as you say, you're free to make in math or physics.
For a start, the geocentric model claimed a physically preferred reference frame, which already directly contradicts the coordinate relativism you described. In that sense, it was wrong.
Beyond that, it proposed a mathematical model based on epicycles, a model which was eventually falsified due to many failures to match observation. In that sense, it was also wrong.
These points also contradict your other claim:
> Modern astronomical ephemerides use the Tychonic model--the sun is modeled as revolving around the Earth, the other planets as revolving around the sun.
This is misleading at best. The ephemerides you mention are based on modern Newtonian many-body physics, but they do a coordinate transform on the results to express them in a way that's convenient for Earth-bound observers.
This is not "using the Tychonic model" in any meaningful sense. It's using a correct coordinate transform that is equivalent to the overall coordinate system that Tycho tried to use, but failed to get right. It doesn't rely on any aspects of Tycho's model, because that model was largely invalid, and would not produce correct results.
The C++ standards committee is pretty damn dysfunctional at this point for a variety of reasons.
Only like 10% of the committee are actually responsible for an implementation in some manner; the vast majority are users, often looking to get their feature into the standard. This also means that only a tiny minority of the committee actually understands things like the difference between a prototype hack and a proper implementation. I get the sense that it's extremely bad on the library front--all of the standard library implementors I know are basically pleading "please stop adding new features, we want time to catch up."
One of the big issues with library features is that library vendors can't just copy-paste existing implementations for licensing reasons, so they have to reimplement them largely from scratch, and the people doing so may not necessarily be skilled in that particular domain. On top of that, standard libraries are much more sensitive to ABI breaks than other libraries are, so a bad design gets ossified to a much worse degree than in regular libraries. The best examples of baked-in bad implementations are std::unordered_map and std::regex, but honestly even std::unique_ptr has similar ABI-unfixable issues (it's not a pointer for ABI calling conventions). Yet you still see people cheer on additions to the standard library because obviously those people are going to make existing implementations better.
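To make that last parenthetical concrete, here's a minimal sketch (declarations only; the register/memory claim assumes the Itanium C++ ABI used on most non-Windows platforms):

```cpp
#include <memory>

void take_raw(int* p);                     // p arrives in a register
void take_unique(std::unique_ptr<int> p);  // p is passed in memory

// Under the Itanium C++ ABI, a type with a non-trivial destructor is
// never passed in registers: the caller materializes a temporary and
// passes its address. So passing unique_ptr costs more at the call
// boundary than the raw pointer it wraps, and changing that now would
// be exactly the kind of ABI break vendors refuse to make.
```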
As a compiler guy, the complaint about "opaque templates and function calls" to me raises serious doubts that the author has any idea what they're talking about. std::simd is designed to be akin to taking vector operations as intrinsics on <4 x f32> and similar types and wrapping them in a more C++-flavored dialect than bare compiler intrinsics (and then a second layer on top of that to make things somewhat more portable).
So the implementation of std::simd at the bottom should be tiny functions that map to essentially a single instruction, specified via a header file in a mechanism that guarantees you always have the body. This makes the functions trivially obvious candidates for inlining. Since it's a C++26 addition, the dispatching logic through the layers can largely be done via if constexpr, which means most of the code is discarded by the frontend.
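A minimal sketch of that layering (hypothetical names, assuming an x86 target with AVX; a real implementation covers many more widths and element types):

```cpp
#include <immintrin.h>  // assumes an x86 target compiled with AVX

// Bottom layer: thin wrappers whose operations each map to essentially
// one instruction. Being header-defined guarantees the body is always
// visible, making these trivially obvious inlining candidates.
struct vec4f { __m128 v; };
struct vec8f { __m256 v; };

inline vec4f add(vec4f a, vec4f b) { return { _mm_add_ps(a.v, b.v) }; }
inline vec8f add(vec8f a, vec8f b) { return { _mm256_add_ps(a.v, b.v) }; }

// Extract the low/high 128-bit halves of a 256-bit vector.
inline vec4f lo(vec8f a) { return { _mm256_castps256_ps128(a.v) }; }
inline vec4f hi(vec8f a) { return { _mm256_extractf128_ps(a.v, 1) }; }

// Portable layer on top: compile-time dispatch. The frontend discards
// the branch that doesn't apply, so only the code for the actual
// vector width ever reaches the optimizer.
template <typename V>
vec4f narrow_sum(V a) {
    if constexpr (sizeof(V) == 32)
        return add(lo(a), hi(a));  // fold 256 bits down to 128
    else
        return a;                  // already 128 bits wide
}
```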
Given that the complaint seems to be about not vectorizing a call to a sin function, it's possible that it's implemented in libstdc++ in such a way that the library doesn't know about the compiler's -fveclib implementation. But then again, the complaint is based on the libstdc++ 14 implementation of the Parallelism TS std::simd, not the 16.1 implementation of C++26 std::simd, which is completely different (and landed circa 2 months ago).
So I've got a foot in each camp. I think you're just talking past each other: you mean different things by the same words.
You can't claim he doesn't know what he's talking about based on a single point he may have gotten wrong; he makes a tonne of valid points, especially around the existing problems of C++ that this library doesn't help with.
In addition to that, he's not wrong about this library from a user perspective. I can't use this. I wrote something very similar back in 2016 - at the time it served my needs but now it's hilariously outdated.
If you thought std::simd was a library nobody asked for, just wait until you hear about <linalg>. I feel like half the people looking forward to that think they're just going to get standard C++ bindings to LAPACK, when instead they're probably going to get an unoptimized, slapdash implementation of LAPACK written by people who aren't good at BLAS.
As for SIMD itself, designing a good SIMD library is difficult because there are several different SIMD approaches and some of them work poorly for certain use cases. For example, you can take an HPC-ish approach of "vectorize this loop" (à la #pragma omp simd) and have the compiler take care of a fairly mechanical transformation. Or you can take an opposite approach of treating a 128-bit SIMD vector as a fundamental data type in your language. Which approach is better depends on your use case.
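To make the contrast concrete, a quick sketch of the two extremes (using an OpenMP pragma and GCC/Clang vector extensions as stand-ins; the function names are illustrative):

```cpp
#include <cstddef>

// Approach 1: "vectorize this loop" -- annotate scalar code and let
// the compiler perform the fairly mechanical transformation.
void scale(float* x, std::size_t n, float s) {
    #pragma omp simd
    for (std::size_t i = 0; i < n; ++i)
        x[i] *= s;
}

// Approach 2: treat a fixed-width (here 128-bit) vector as a
// fundamental data type and operate on it directly.
typedef float v4sf __attribute__((vector_size(16)));

v4sf scale4(v4sf x, float s) {
    v4sf splat = {s, s, s, s};  // broadcast the scalar
    return x * splat;           // element-wise multiply on one vector
}
```

The first shines for long, regular HPC-style loops; the second gives precise control when your algorithm is built around fixed-width vectors. Neither subsumes the other.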
Have you read the entire paper, and not just skimmed the front matter?
The interface is a generic template approach, which can work on any element type T, not just float/double/complex<float>/complex<double>, but custom types like bigint or rational or random_custom_finite_field. Or integration with units libraries (there's another dumpster fire coming down the line...). Your BLAS library will provide you just the four basic element types, so it takes a decent amount of dispatch logic to convert the template interface to the actual library calls, and you still need fallback logic anyways to handle the other types.
But the library is also not designed in a way that facilitates that kind of dispatch logic (std::simd is, which accounts for a not insubstantial portion of its complexity). That's on top of the difficulty of a standard library linking against one of various BLAS implementations. So it's a design that's all but guaranteed not to let you link against an existing BLAS implementation, and indeed, carefully reading the rest of the section you wrote makes it clear that having implementations do that is not a goal of the proposal.
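To make "dispatch logic" concrete, here's a minimal sketch of what a generic wrapper has to do, assuming a CBLAS-style C interface (the cblas_* calls are the standard CBLAS ones; the wrapper itself is hypothetical and handles only one of the many operations and layouts):

```cpp
#include <cblas.h>  // assumes a CBLAS-style interface is linkable
#include <cstddef>
#include <type_traits>

// Hypothetical generic matrix multiply: C = A * B, row-major, N x N.
template <typename T>
void gemm(const T* a, const T* b, T* c, std::size_t n) {
    if constexpr (std::is_same_v<T, float>) {
        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0f, a, n, b, n, 0.0f, c, n);
    } else if constexpr (std::is_same_v<T, double>) {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, a, n, b, n, 0.0, c, n);
    } else {
        // Fallback for every other element type (bigint, rational,
        // custom fields...): a naive triple loop BLAS can't help with.
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j) {
                T acc{};
                for (std::size_t k = 0; k < n; ++k)
                    acc += a[i * n + k] * b[k * n + j];
                c[i * n + j] = acc;
            }
    }
}
```

Multiply this by every operation, transpose/conjugate option, storage layout, and stride pattern the interface admits, and the dispatch layer stops being trivial.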
> Your BLAS library will provide you just the four basic element types, ..., and you still need fallback logic anyways to handle the other types.
So, your problem with it is that it does all you want (e.g. LAPACK bindings) AND gives extra features??
> so it takes a decent amount of dispatch logic to convert the template interface to the actual library calls
I can't estimate how much this degrades performance, but it feels like very low overhead compared to the calculation itself (and it should probably be resolved at compile time).
The work of one obsessive author, who never gave a good explanation for why the thing needed to be in the standard library instead of an external one. The committee was apathetic about the proposal and kept bringing up various trivial issues, in a clear attempt to stall him, but he refused to take the hint. So eventually they relented. Outside coverage I have seen so far seems to be to the tune of "WTF is this weird thing?" and quickly glosses over it.
I wonder if it's going to end up like the export keyword.
I feel like std::hive fits right in to the C++ stdlib group of collections
The least stupid is std::vector, which is just the typical O(1)-amortized growable array type found in most modern languages, with a mediocre API. 8/10, could do better.
std::array is just the built-in array type C++ should have but doesn't. This shouldn't be a library type, that's embarrassing.
std::deque looks like you're getting something like Rust's VecDeque but you aren't, it's a weird hybrid optimisation which presumably made sense on some 1980s hardware. I asked STL once to explain what it's even for and they didn't know. [[For reference STL is the name of the guy in charge of Microsoft's implementation of the C++ standard library, Microsoft also calls that library STL for reasons we needn't address]]
std::list is the extrusive doubly linked list. This type makes sense in a DSA class. Why is it in the C++ standard library? I dunno, maybe C++ is intended only as a teaching language?
std::forward_list is the extrusive singly linked list. You know, for a different seminar in that same DSA class. You might want the intrusive linked list, you don't want this.
std::map and std::set are probably red-black trees. OK, you might need those and for some reason not care about the details (which aren't specified here)
std::multimap and std::multiset are even less obviously useful. I have never seen them used in real software. Why are they in the standard library?
std::unordered_all_of_the_above_maps_and_sets look like the simplistic hash table you'd be shown in an intro DSA class either taught by somebody who doesn't know the subject well or aiming to cover the basics and get back to their research. This will perform poorly on any hardware with features like a cache.
The C++ stdlib carries broken garbage basically indefinitely. C++ doesn't have the same library stability promise that Rust has, but in practice stuff that nobody cares about is never removed.
That std::hive will fit right in. Another container type you probably shouldn't use, draining precious maintenance resources from groups who have better things they could be doing.
> These are in the standard library because someone proposed their inclusion.
As with std::hive. Indeed the "unordered" containers, just like std::hive, were repeatedly knocked back and eventually got in decades after they were obsolete. Persistence really does pay off in C++.
> They're fine for the majority of people who really don't want to roll their own data structures each time.
Sure, doubtless std::hive is fine for that same majority of people.
> The committee was apathetic about the proposal and kept bringing up various trivial issues, in a clear attempt to stall him, but he refused to take the hint.
That's a mean interpretation, mean both towards the committee and towards the author.
Han unification predates Unicode by about a decade; most of the early work in Unicode largely consists of copy-pasting the Japanese and Chinese governments' standards for unified CJK ideographs. Indeed, read some of the early histories of Han unification (e.g., https://www.unicode.org/versions/Unicode16.0.0/core-spec/app...), and you'll notice that there's a lot of liaising with East Asian technology groups in East Asian cities going on. I don't think any East Asian government representatives would have actually objected to Han unification!
It's also worth noting that the original goal of Unicode wasn't to be able to faithfully represent all text, but rather to faithfully represent existing character sets. Only later do you get the impetus to actually include everything, as people become a lot less tolerant of "computer can't actually represent <X>" scenarios. Note too that a lot of the Han unification criticisms basically fall into the same bucket as, say, Medievalists, who want to preserve certain details of their source texts more faithfully than was the norm for computer systems in the 1980s.
Cars don't really need an online component in order to continue working. Some manufacturers have tried to force some features into online components, but the cars continue to work without it once they turn it off.
The contracts underlying support for consumer automobiles commonly run around 10 years. After that, it's best-effort, unofficial support by other companies, if there's enough money to be made by offering it.
Large car manufacturers in the US are required by the federal Magnuson-Moss Warranty Act to support the cars they give warranties for, and those warranties run 10 or 12 years by this point.
Replacing plastic straws with paper straws is at best little more than greenwashing (and honestly, possibly even worse), since the environmental effects are so minimal. Contrast this with plastic versus paper bags, where plastic bags due to their extremely light weight have a much greater tendency to become windblown litter, so they do have a comparatively greater impact on the environment.
One of the real problems of greenwashing is that it's trying to sell an idea that with just a tiny, almost unnoticeable change to lifestyle, you can keep doing what you're doing and still have the peace of mind that you're not doing anything bad for the environment. Plastic recycling falls into this category--oh, just recycle this thing instead of throwing it away, that means there's no more guilt to be had over the environmental costs of plastic production (meanwhile ignoring the fact that plastic recycling is largely nonviable and so all of that goes straight to the waste stream anyways.)
The hope is that in the alternative world, instead of praising companies for taking what are ultimately only token steps towards environmental stewardship, we'd actually castigate them harder and get them to take real steps toward improving the environmental aftereffects of their activities.
Rust has lots of undefined behavior, in general a broadly similar set to that which exists in C. What Rust does that is different is that to trigger undefined behavior, you need to execute unsafe code. (This isn't the same as saying that you have to be in unsafe code--you can violate a precondition in unsafe code and have the UB itself trigger in safe code).
The short answer is because C was designed to give leeway to really dumb compilers on really diverse hardware.
This isn't quite the same case, but it's a good illustration of the effect: on gcc, if you have an expression f(a(), b()), the order in which a and b get evaluated is [1] dependent on the architecture and calling convention of f. If the calling convention wants you to push arguments from right to left, then b is evaluated first; otherwise, a is evaluated first. If you evaluate arguments in the right order, then as each argument's call returns, you can immediately push the result on the stack; in the wrong order, the result is a live value that has to be carried across the other argument's call, which costs a couple more instructions. I don't have a specific example for increment/decrement instructions, but considering extremely register-poor machines and hardware support for increment/decrement addressing modes, it's not hard to imagine similar cases where forcing the compiler to insert the increment at the 'wrong' point is similarly expensive.
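A minimal illustration (any conforming compiler may print either line first):

```cpp
#include <cstdio>

int a() { std::puts("a evaluated"); return 1; }
int b() { std::puts("b evaluated"); return 2; }

void f(int, int) {}

int main() {
    // The order in which a() and b() run is unspecified: a compiler
    // may pick whichever order suits the target's calling convention,
    // and different compilers (or the same compiler on different
    // targets) may disagree.
    f(a(), b());
}
```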
Now, with modern compilers using cross-architecture IRs as their main avenue of optimization, the benefit from this kind of flexibility is very limited, especially since the penalties on modern architectures for the 'wrong' order of things can be reduced to nothing with a bit more cleverness. But compiler developers tend to be loath to change observable behavior, and the standards committee unwilling to mandate that compiler developers have to modify their code, so the fact that some compilers have chosen to implement it in different manners means it's going to remain that way essentially forever. If you were making a new language from scratch, you could easily mandate a particular order of evaluation, and I imagine that every new language in the past several decades has in fact done that.
[1] Or at least was 20 years ago, when I was asked to look into this. GCC may have changed since then.
I'd say it's more like C was designed from really dumb compilers on really diverse hardware. The standard, at least the early versions of it, was more to codify what was out there than to declare what was correct. For most things like this in the standard, you can point to two pre-standardization compilers that did it differently.
Kind of both? There were pre-standard compilers, but when they created the standard, they tried to make it so that one could write really dumb compilers and still fulfill the standard.
gcc used to do this back in the day. Parameter expressions left to right on x86, and right to left on Sparc. I spent a week modifying a bunch of source code, removing expressions with side effects from parameter lists, into my own temporary variables, so that they would all evaluate in the same order.
Feudalism is a political system whereby land is granted to vassals on the conditional basis that they provide levies or taxes to their lord. While the system is notionally reciprocal (the fief is a conditional grant, and both sides have obligations to each other), comparing it with the dominant form of taxation that preceded and succeeded it--tax farming--makes it fairly clear that the locus of power lies decisively with the vassal here, not the liege. Whereas a state that engages in tax farming lets out a new contract every few years, and usually to the same people, a fief is an explicitly hereditary instrument that the liege abrogates only at great risk, since the power he has to enforce such a decision comes from his other vassals and his ability to personally persuade them of a course of action.
That is to say, the hallmark of a feudal society is one with very, very weak central authority and powerful local authorities, mediated by the personal interrelationships within and across different levels of authority. Apply that to your analysis of SpaceX and the mismatch is clear. In your analysis, SpaceX is an entity that is utterly dependent on the government for its existence, and needs to invest a large amount of energy in acquiring the beneficence of said government. That's not the behavior associated with a feudal society but rather with the absolutist monarchies that replaced them, pretty much the antithesis of a feudal society.
> That is to say that the hallmark of a feudal society is one with very, very weak central authority and powerful local authorities, mediated by the personal interrelationships within and across different levels of authority
This is precisely the state of affairs in the United States today. Where people get confused is the idea that property means specific hectares of land, when property in its fullest form under capitalism is simply paper contracts and debt, per Graeber.
> In your analysis, SpaceX is an entity that is utterly dependent on the government for its existence, and needs to invest a large amount of energy in acquiring the beneficence of said government. That's not the behavior associated with a feudal society
It is the behavior of a Lord.
The United States is not an absolute monarchy, and it has a rotating set of governors.

What doesn't rotate are the capitalist leaders (investors) of the top 100 corporations, and they are the actual governors of this society.

Because they determine where capital flows, they are the ones you have to pay homage to in order to get property, so that you can then become a Lord.