Feedback from someone who is used to managing a large (>1500 packages) software stack in C / C++ / Fortran / Python / Rust / etc.:
- (1) Provide a way to compile without internet access and to specify the associated dependency paths manually. This is absolutely critical.
Most 'serious' multi-language package managers and integration systems build in a sandbox without internet access, for security and reproducibility reasons.
If your build system does not allow building offline with manually specified dependencies, you will make the lives of integrators and package maintainers miserable, and they will avoid your project.
- (2) Never, ever build with '-O3 -march=native' by default. This is always a red flag and a sign of immaturity. People expect code to be portable and shippable.
Good default options are the CMake equivalent of "RelWithDebInfo" (meaning: -O2 -g -DNDEBUG).
-O3 can be argued about. -march=native is always, always a mistake.
- (3) Allow your build tool to be built by another build tool (e.g. CMake).
Anybody who cares about reproducibility will want to start from sources, not from a pre-compiled binary. This also matters for cross-compilation.
These properties are what will allow interoperability between your system and other build systems.
- (5) Last but not least: seriously consider the cross-compilation use case.
It is common in the world of embedded systems to cross-compile. Any build system that does not support cross-compilation will be de facto banned from the embedded domain.
As someone who has also spent two decades wrangling C/C++ codebases, I wholeheartedly agree with every statement here.
I have an even stronger sentiment regarding cross compilation though - In any build system, I think the distinction between “cross” and “non-cross” compilation is an anti-pattern.
Always design build systems assuming cross compilation. It hurts nothing if it just so happens that your host and target platform/architecture end up being the same, and saves you everything down the line if you need to also build binaries for something else.
Amen. It always baffled me that cross compiling was ever considered a special, weird, off-nominal thing. I’d love to understand the history of that better, because it seems like it should have been obvious from the start that building for the exact same computer you’re compiling from is a special case.
Also, the problem isn't creating a cargo-like tool for C and C++; that is the easy part. The problem is getting a larger user base than vcpkg or Conan, so that it matters to those communities.
Shipping anything built with -march=native is a horrible idea. Even on homogeneous targets like one of the clouds, you never know if they'll e.g. switch CPU vendors.
The correct thing to do is to use microarchitecture levels (e.g. x86-64-v2), or to build fully generic if the target architecture doesn't have such levels.
Not the OP, but: -march tells the compiler it can rely on the features of that particular CPU architecture family, which is broken out by generation. In the worst case the compiler could generate code that does not run on older CPUs of the same family or on CPUs from different vendors.
-mtune says "generate code that is optimised for this microarchitecture", but it doesn't enable arch-specific instructions.
Whether these are right or not depends on what you are doing. If you are building Gentoo on your laptop, you should absolutely use -mtune=native and -march=native. That's the whole point: you get the most optimised code you can for your hardware.
If you are shipping code for a wide variety of architectures, and crucially the method of shipping is binary form, then you want to think more about what you might want to support. If you're shipping standard software, pick a reasonable baseline (check what your distribution uses in its CFLAGS). If, however, you're shipping compute-intensive software, perhaps you load a shared object per CPU family, or build your engine in place for best performance. The Intel compiler quite famously optimised per family, included all the copies in the output, and selected the worst one on AMD ;) (https://medium.com/codex/fixing-intel-compilers-unfair-cpu-d...)
> Not the OP, but: -march says the compiler can assume that the features of that particular CPU architecture family, which is broken out by generation, can be relied upon. In the worst case the compiler could in theory generate code that does not run on older CPUs of the same family or from different vendors.
Or on newer CPUs of the same vendor (e.g. AMD dropped some instructions in Zen that Intel didn't pick up) or even in different CPUs of the same generation (Intel market segmenting shenanigans with AVX512).
Just popping in here because people seem to be surprised by
> I build on the exact hardware I intend to deploy my software to and ship it to another machine with the same specs as the one it was built on.
This is exactly the use case in HPC. We always build -march=native and go to some trouble to enable all the appropriate vectorization flags (e.g., for PowerPC) that don't come along automatically with the -march=native setting.
Every HPC machine is a special snowflake, often with its own proprietary network stack, so you can forget about binaries being portable. Even on your own machine you'll be recompiling your binaries every time the machine goes down for a major maintenance.
It certainly has scale issues when you need to support larger deployments.
[P.S.: the way I understand the words, "shipping" means "passing it off to someone else, likely across org boundaries" whereas what you're doing I'd call "deploying"]
On every project I've worked on, the PC I've had has been much better than the minimum PC required. Just because I'm writing code that will run nicely enough on a slow PC, that doesn't mean I need to use that same slow PC to build it!
And then, the binary that the end user receives will actually have been built on one of the CI systems. I bet they don't all have quite the same spec. And the above argument applies anyway.
If you use a cloud provider and a remote development environment (VS Code Remote / JetBrains Gateway), then you're wrong: cloud providers swap out the CPUs without telling you, and can sell newer CPUs at older prices if there's less demand for the newer CPUs; you can't rely on that.
To take an old naming convention, even an E3 Xeon CPU is not equivalent to an E5 of the same generation. I'm willing to bet it mostly works, but your claim "I build on the exact hardware I ship on" is much stricter than that.
The majority of people I know use either laptops or workstations with Xeon workstation or Threadripper CPUs, but when deployed it will be a Xeon Scalable datacenter CPU or an Epyc.
Hell, I work in gamedev and we cross compile basically everything for consoles.
We use physical hardware at work, but it's still not the way you build/deploy unless it's for a workstation/laptop type thing.
If you're deploying the binary to more than one machine, you quickly run into issues where the CPUs are different and you would need to rebuild for each of them. This is feasible if you have a couple of machines that you generally upgrade together, but quickly falls apart at just slightly more than 2 machines.
Lots of organizations buy many of a single server spec. In fact that should be the default plan unless you have a good reason to buy heterogeneous hardware. With the way hardware depreciation works they tend to move to new server models “in bulk” as well, replacing entire clusters/etc at once. I’m not sure why this seems so foreign to folks…
Nobody is saying dev machines are building code that ships to their servers though… quite the opposite, a dev machine builds software for local use… a server builds software for running on other servers. And yes, often build machines are the same spec as the production ones, because they were all bought together. It’s not really rare. (Well, not using the cloud in general is “rare” but, that’s what we’re discussing.)
So you buy the exact same generation of Intel and AMD chips for your developers as for your servers and your customers? And encode this requirement into your development process for the future?
The only time I used -march=native was for a university assignment which was built and evaluated on the same server, and it allowed juicing an extra bit of performance. Using it basically means locking the program to the current CPU only.
However, I'm not sure about -O3. I know it can make the binary larger; I'm not sure about other downsides.
It is completely fine to use -march=native; just do not make it the default for someone building your project.
That should always be opt-in.
The main reason is that software is a composite of (many) components. It quickly becomes a maintainability pain if any tiny library somewhere tries to sneak in '-march=native': the final binary will randomly crash with an illegal-instruction error when executed on any CPU that is not exactly the same as the build host.
When you design a build system configuration, think of the others first (the users of your software), and of yourself after.
This varies a lot between compilers. Clang, for example, treats -O3 performance regressions as bugs (in many cases at least) and is a bit more reasonable with -O3 on. GCC goes full Mad Max, and you don't know what it's going to do.
If you have a lot of "data plane" code or other looping over data, you can see a big gain from -O3 because of more aggressive unrolling and vectorization (HPC people use -O3 quite a lot). CRUD-like applications and other things that are branchy and heavy on control flow will often see a mild performance regression from -O3 compared to -O2, because of clock-frequency penalties from AVX instructions and larger binary size.
I once made a program with some inline assembly and tried -O3 with Clang. Because the assembly was in a loop, the compiler probably didn't have enough information about the actual code and decided to fully unroll all 16 iterations, making performance drop by 25% because cache locality was completely destroyed. What I'm trying to say is that loop unrolling is definitely not guaranteed to trade binary size for faster code.
> skipping cmake completely? would this be feasible?
Feasible but difficult. CMake has a tremendous user mass, so you do want to be able to use a CMake-based project as a dependency. The CMake target/config export system exposes CMake internals, which makes it difficult to consume a CMake-built project without CMake.
The cleanest way to do that is probably what xmake does: call CMake and extract target information from it into your own build system with some scripting. It is flaky, but xmake has proven it is doable.
That said: CPS should make this easier in the longer term.
Please also consider that CMake does a lot of work under the hood to contain compiler quirks, work that you would otherwise have to do manually.
> integration of other languages in the project?
Trying to integrate higher-level languages (Python, JS) into the package managers of lower-level languages (C, C++) is generally a bad idea.
The dependency relation is inverted, and interoperability between package managers is always poor. Diamond dependencies and conflicting versions quickly become a problem.
I would advise just exposing your build system properly, with the properties I described, and using a multi-language package manager (e.g. Nix) or, failing that, the higher-level language's package manager (e.g. uv with a scikit-build-core equivalent) on top of it.
This will be an order of magnitude easier to do.
> how to handle qt?
Qt is nothing special to handle.
Qt is a multi-language framework (C++, MOC, QML, JS and even Python for PySide) and needs to be handled as such.
> Why is China adding so many new generation plants powered by coal? On this and other forums, I see claims all the time that solar is cheaper than coal. As the world's leading producer of solar panels, you would think that they would utilize it even more if those claims are true.
Because reality is very different from what propagandists and lobby reports claim.
Currently, not a single major country can afford energy storage capacity large enough to get through even a single day without sun while running exclusively on solar power.
Not even China, the biggest battery producer worldwide.
Considering that a stable and reliable grid would need enough capacity to cover weeks of Dunkelflaute, this is realistically not going to happen for multiple decades.
China has an energy problem it needs to solve now: the country is developing, so its electricity consumption is growing rapidly.
Its short-term solution is the most pragmatic one for the short term: building coal plants.
Its long-term solution is also the most pragmatic one for the long term: using nuclear energy to support the baseload and a mix of hydro, solar and wind for the peaks.
The reason EDF had to be nationalized is that the government used the company as a "price shield" to protect consumers against the energy price rise on the European market in 2022, via a mechanism named TRV (tarif réglementé de vente). That deepened EDF's debt tremendously.
> Spain plans to deprecate their remaining nuclear for renewables for similar reasons
Spain is deprecating its nuclear fleet because its current Socialist government is aligned with the Greens, who are, like everywhere in Europe, antinuclear.
Additionally, the lack of spinning generators in Spain is partially what caused the 2025 blackout, due to a lack of inertia in the system.
> EDF fleet upkeep will cost over 100 billion euros by 2035, court of auditors says
This is spread over 25 years and will prolong the lifetime of the 56 reactors by 20 more years. These produce 70% of the country's electricity needs.
In comparison, the German Energiewende cost 400 billion for the 37% of 2025 electricity produced by solar and wind, with generation assets that will need to be entirely renewed in 20 years.
> California will achieve a low carbon generation profile for far cheaper than it cost France (refer to the Lazard LCOE
That is also wrong.
The LCOE calculation takes into account neither the cost of the grid reinforcement necessary for renewables nor the necessity of backup generation in case of a Dunkelflaute.
>> France had to nationalize EDF due to the exorbitant cost of their nuclear fleet
>That's just wrong.
No, it's correct: the total cost of the 2022 bailout was almost 10bn, and that was to get control over a company that had over 50bn in debt.
Furthermore, it was discovered that the plants had neglected maintenance that had to be undertaken right away, which had nothing to do with the TRV.
Of course, the TRV didn't help; it caused a loss of 18bn in 2022 on top of everything else, but things were bad already.
So even if the mentioned 5bn of exports were pure profit, which it isn't, it would take 15-20 years to cover the bailout that has already taken place. The 100 billion of investments until 2035 is in addition to that.
And they will have to sell their power on markets that will increasingly often have free electricity from solar and wind. How do you pay 1000 educated plant operators when electricity prices are negative?
Unfortunately nuclear power isn't the kind of thing you can try and then walk away from when it turns out to be a bad idea. Which is likely the main reason it's still around.
> No, it's correct, the total costs of the 2022 bailout was almost 10bn, and that was to get control over a company that had over 50bn in debt.
The 2022 bailout alone was around 22bn€, which was added on top of the historical debt.
EDF's revenue in 2025 was over 100bn€, to put things into perspective.
> Furthermore it was discovered that the plants had neglected maintenance that had to be undertaken right away, that had nothing to do with the TRV.
That is also wrong. The immediate maintenance in 2022 was related to "corrosion sous contrainte" (stress corrosion cracking), which has nothing to do with carelessness. It was mainly the French nuclear regulator (ASN) over-reacting to some non-critical cracks found in some pipes. They themselves said afterwards that the immediate actions were not necessary. The actions were an overreaction (on EDF's side) and the timing was very unfortunate.
> So even if the mentioned 5 bn export now was pure profit - which is isn't -
Indeed. Profits in 2025 were over 8bn€, so well over 5bn€.
The 5bn€ concerns only the profit made on exports.
This is not hard to understand: making a profit by selling valuable nuclear energy during the evening consumption peak, while buying cheap intermittent solar during low-consumption hours, is an easy game.
People generally do not understand that nuclear is a CAPEX game, not an OPEX one.
> And they will have to sell their power on markets that will increasingly often have free electricity from solar and wind. How do you pay 1000 educated plant operators when electricity prices are negative?
By selling nuclear electricity at 180€/MWh every evening when the sun does not shine.
(That has been the average price at every evening peak this month.)
Meanwhile, the profitability of solar operators will sink to the ground due to overcapacity causing negative prices during the day as soon as the sun shines. Many of them will die if not subsidized with public money.
> nuclear power isn't the kind of thing you can try and then walk away from when it turns out to be a bad idea
It is currently the best low-carbon energy source around,
and will continue to be for the next two decades.
The current CO2 intensity of the French grid is 27g/kWh.
The comparison with a country like Germany (397g/kWh) or a state like California (190g/kWh), which spent >$100bn on renewables, speaks for itself.
I can safely bet that 15 years from now, the French grid will still be greener than the German one.
> making a profit by selling valuable nuclear energy
EDF's adjusted economic debt at the beginning of 2026: €81.7 billion.
After decades of massive help (nationalisation to build it, monopoly, gift loans, debt cancellation...).
> the profitability of solar operators will sink to the ground due to the overcapacity causing negative price
Wait for storage (V2G...) and hydrogen to kick in.
> France
> Germany
France's transition to nuclear power began in 1963 and is now complete.
In other countries (Germany...), transitions to renewables began with the advent of their industrial versions, around 2005. The current context makes these transitions more challenging, and they are still underway.
Therefore, any comparison of their results, for example, greenhouse gas emissions, must be based not on snapshots (which currently favor France since its transition is complete), but on their progress: speed, costs, impacts, etc.
> decades of massive help (nationalisations building it, monopoly, gift-loans, debt cancellation...
I am starting to seriously question your intellectual honesty here.
- For the last two decades, EDF was privatised and gave back to the state an average of 2bn€ per year in dividends [1]. That is EDF giving to the state, not the opposite.
- The monopoly situation in France ended in 2007. The loi NOME in 2010 even offered EDF's competitors access to nuclear energy at a fixed low price [2].
Worth noting that when the energy crisis spiked in 2022, those same 'competitors' sent their customers back to EDF, because they had massively increased their prices and did not want to follow the TRV.
> hydrogen to kick in.
Nobody sane and reasonable takes hydrogen and power-to-gas seriously in the energy sector: the laws of physics simply play against it.
The overall efficiency is low (around 50% in practice), electrolyzers strongly hate the spike-style usage pattern necessary for coupling with intermittent energy, and no installation at the required scale has even been tried.
The only reason this is still on the table is that it gives the gas industry a reason to drain public subsidies and some hope of staying relevant.
> France's transition to nuclear power began in 1963 and is now complete.
That's also wrong.
The Messmer plan started in 1974, and by 1985 over 55% of France's electricity production was provided by nuclear. It finished with over 50 reactors in 15 years, covering more than 70% of the electricity generated [3].
The cost of the Messmer plan was estimated at 100bn€ in 2012 money.
Germany started its Energiewende in 2005; 20 years and 400bn€ burned later, it still has a CO2/kWh intensity 4x higher than France's in the 1980s.
The results are so bad that Germany has started to subsidise its own industry to protect it against electricity price increases [4].
The return on state capital endowments, ranging from 3% to 6%, represents a low real return, significantly lower than the theoretical rates of 8% or 9% (excluding inflation) projected at the time by the General Planning Commission for public enterprises (page 33). Handouts!
The payment of meager dividends was sometimes cancelled or postponed (2015, 2016, 2017, 2019: https://www.latribune.fr/economie/france/edf-l-etat-va-renon... ), or partially made in the form of EDF shares ("in securities", for example between 2016 and 2022), i.e. in monkey money, because it does not replenish the public coffers at the time or later: EDF is heavily indebted and the bulk of its assets (nuclear power plants) are unsellable.
> loi NOME in 2010 even offered EDF's competitors access to nuclear energy at a fixed low price
'Low'? Nope. It happened in 2012, and the price was set at €42/MWh.
The total production cost of a MWh in 2010 was €22 (see the French Court of Auditors' report "The Costs of the Nuclear Power Sector," page 81). Since the existing generation fleet is considered fully depreciated, the €20 difference covers the extension of its operating life (Grand Carénage) and the renewal/expansion of the new nuclear power plants (EPR series).
Nope: hydrogen vehicles are easy to criticize because the mass and size of the tank are prohibitive, and compression significantly increases the cost.
This leads some to condemn all forms of hydrogen use. In the case of backup power, however, you do not have to store it in a small mobile tank or even transport it; you can store it in a stationary industrial tank (where mass and volume matter little). That is not only possible but already being achieved (record: Air Liquide, and the competition is intensifying), and it incidentally improves efficiency.
Efficiency:
- Electrolysis (PEM or alkaline): 0.75
- Storage: 0.95
- Conventional combined cycle turbine (gas + steam) with efficiency similar to that achieved with natural gas: 0.6
Overall: approximately 0.43 (just like a very recent nuclear reactor, and without any waste-producing fuel...)
> electrolizers strongly hate the spike style usage pattern necessary for a coupling with intermittent energy
See PEM.
> no installations of the required scale has even been tried
Indeed; however, all the components are ready.
> France's transition to nuclear power began in 1963 and is now complete.
> The cost of the plan Messmer was estimated at 100bn€ in 2012 money.
This is the construction cost alone. R&D is estimated at €55 billion (1945-2010), and the Court wrote that it is very difficult to assess (page 35, footnote): "the scope of analysis does not cover research expenditures in the military field, nor those related to basic research."
>This is not hard to understand: Making a profit by selling valuable nuclear energy during evening peak consumption while buying cheap intermittent solar during low consumption time is an easy game.
It is also easy to understand that a nuclear plant costs money even while you are buying cheap solar, because you can't just shut it down. This is a problem already, and we already have solar plants that generate energy 24/7.
They are small, sure, but numerous, and their number is increasing very fast.
There is also tech in the pipeline that will accelerate this. Very cheap batteries among them.
Technology is already being deployed that will have electricity trend towards being free or almost free, 24/7. Pretty soon value will not be generated by selling electricity, instead you will have to generate value from consuming almost free electricity.
When does a nuclear plant generate profits then? They will inevitably have to close, and unfortunately for France, nuclear plants cost money even after they have closed.
This is complete baloney and revisionist history. I followed that topic at the time pretty in depth. It took months and months and delay upon delay to get the plants back up and running. The spot prices in France at times in 2022 went over 1500 euros per MWh. If it was just "an overreaction" there would've been tremendous political pressure to just put the plants back online. The government and EDF are intertwined to the point any talk of new construction etc. always goes through Macron.
> I followed that topic at the time pretty in depth
You apparently did not, because you are the revisionist here.
CSC (corrosion sous contrainte, stress corrosion cracking) is a well-documented topic with accessible reports from the ASN (the French nuclear regulator) [1], the Cour des comptes (the French court of auditors) and EDF itself.
The source of the problem is a phenomenon that mainly affects the N4 (1400MW) series of French reactors. It was detected in 2021, so before 2022.
Some pipes in one specific part of the circuit (the secondary circuit) showed unexpected cracks under inspection in one specific reactor.
EDF chose to stop all the potentially affected reactors and disassemble all the potentially affected pipes to scan them with X-rays and triple-check that the corrosion phenomenon was not widespread.
Where they over-reacted is that they also disassembled the different 900MW reactor series 'just in case', at the worst possible time: right before Vladimir Putin attacked Ukraine.
> If it was just "an overreaction" there would've been tremendous political pressure to just put the plants back online
Sure. They should have just emergency duct-taped the pipes without following any safety protocol, in a nuclear installation, just to please some politicians and because Putin dreamed of the Cold War again. #sarcasm
You seem to have very little clue about the nuclear industry's internals and its associated safety processes... Of course it took time.
The only thing you are correct about is that, indeed, it took longer than expected and caused delays.
You said the problems were overblown, not me. I don't think they were overblown, so I am not sure you should be lecturing me on duct tape and nuclear plants. EDF had scheduled a quarter of the fleet for maintenance, and then at the peak of the crisis pulled another quarter offline unplanned. This simply wouldn't have happened if it hadn't been necessary; the government wouldn't have allowed it at the time.
The problem was not known in 2021, but at the time when the plants were built. Here is an interview from 1979 (!) with the president of EDF at the time, Marcel Boiteux, who said that this would happen, but that it was not a big deal because it would happen after the plants had reached their EOL in 30 years [1]. Additionally, there was a government commission or something like that in the early 2010s that basically concluded "we can't afford to build new ones, let's kick the can down the road and try to fix what we have now". And then 10 years later the biggest energy crisis since the 70s comes along, the very reason they were built, and you end up relying on the weather forecast and German coal plants. A few years pass again, and some people are talking themselves back into this technology being anything except useless.
> with the president of the EDF at the time Marcel Boiteux, who said that this will happen, but it's not a big deal because it will happen after the plants had reached their EOL in 30 years.
That's not what he said. He said that is the scenario in the case of a full up-and-down cycle every day, which is obviously not how a plant is operated.
The consensus today is that a nuclear power plant can run for around 60-80 years without issues if maintenance is done properly.
The US fleet is getting there.
> This simply wouldn't have happened if it hadn't been necessary, the government wouldn't have allowed it at the time.
The government has no say over an ASN decision, especially when nuclear safety is at stake.
It is France we are talking about, not the USSR.
Again, it is commonly admitted today, after the fact, that it was an over-reaction. That said: it was bad economically, as it cost EDF a few billion. But it is exactly what you want to see for safety: better to overreact than to have an incident.
> A few years pass again and some people are talking themselves again into this technology being anything except useless.
So you are taking one single bad year as a representative example of a technology that has provided cheap, abundant and low-carbon electricity to the entire European continent for three decades?
Do you not have the impression of being slightly in bad faith here?
You can pretend to be meticulous about it, but the president of EDF doesn't go on TV to speak to the general public and say 30 years if he means something else. He would've said 80 years because it just sounds better. Sorry, it's pretty obvious that stress corrosion was a known issue, so there were no surprises.
It's France, not USSR. Is this why the EDF was involved in rescuing Areva from bankruptcy -- a sound business decision? Is this why the government is giving basically interest free loans to the EDF that will be repaid starting from maybe in 15 years? If you really believe that you are delusional. It's all just backroom wheeling and dealing. There is a good saying "don't get high on your own supply". The delusion of order in the western world will be its end, especially now considering it's crumbling before our eyes. Clinging to this idea is not healthy.
Abundant and low carbon, all nice things, but it's not why they were built. They were built for energy independence, and at this task it failed at the exact point in time when it was supposed to shine. Speaking of which, being built for one purpose doesn't necessarily make it useful for another purpose. It was built at a time when things like carbon emissions, climate change and overall sustainability were not a topic. Since sustainability is a topic today, it requires obviously different considerations. My only gripe with the German shutdown is that they didn't force the operators to pay for the decommissioning and waste disposal in full. That would've ended any debate about how realistic and useful this technology is because the companies would've been insolvent.
The cost of the energy transition in Germany is sometimes cited as €300 billion, €500 billion, or even €1.5 trillion.
These figures are worthless because no reputable source publishes a specific figure along with the scope of the project (some aspects of the investments needed for the electricity grid are independent of the energy source) and at least a timeframe.
These figures are actually projections published by various sources, covering distant deadlines (2050, etc.) and the entire electricity system, including non-renewable energy sources (whose additional costs are often overestimated).
> EDF nuclear fleet is highly profitable with around 92TWh exported in 2025
Nope. Electricity exports are officially made at a loss, since the average price per MWh exported is generally slightly lower than the average French spot price ( https://assets.rte-france.com/prod/public/2025-04/2025-04-09... , page 87). According to the sound approach established by Mr. Boiteux, this price must cover production costs as well as investments.
The average market price is decreasing because the renewable energy sector is expanding across the continent, thus supplying more and more electricity at a production cost that is increasingly lower than that of nuclear power.
According to RTE, France will export 92.3 TWh in 2025 (page 75), paid €5.4 billion (page 15), meaning an average price of €58.7 per MWh. However, the nuclear fleet will produce electricity at a cost of €60.3/MWh according to the CRE (which considers it fully amortized and therefore neglects the bulk of the investment), and at around €78/MWh according to EDF ( https://www.edf.fr/sites/groupe/files/epresspack/6300/CP_Con... ), which wants to build EPR2 reactors and therefore needs to have the necessary funds.
In short, France is exporting at €58.70 per MWh when it needs to sell at at least €78 to finance its future reactors, thus "using up" its current fleet without setting aside enough money to replace it.
Worse still: if the costs of the EPR2 reactors exceed forecasts, as all EPR construction projects (Finland, France, China, and the UK) have done, the deficit will increase even further.
Fixed costs (investments, maintenance, depreciation of the EPR alone, etc.) are by definition paid whether the fleet produces or not. Therefore, exporting at a price higher than the variable costs (paid only if the plant produces) is a lesser evil because the difference covers a portion of the fixed costs: it is less expensive to export at a slight loss than not to produce and lose more (in technical terms: the gross margin helps cover fixed costs).
However, claiming that nuclear power is profitable simply because of electricity exports is misleading, and the ideal solution would be to produce electricity at the lowest possible cost, therefore using renewable energy sources.
Furthermore, a portion of France's electricity is generated from renewables, so attributing exports solely to nuclear power is misleading.
> Vehicles already on the roundabout must yield to cars entering it.
Yeah but that's theory and theory only.
I would say that 99.9% of anything that looks like a roundabout is a "normal roundabout", where priority goes to the vehicles already in the circle, not to the ones entering. This is the same as in the rest of Europe.
Place de l'Étoile is the exception, not the rule, and the total number of roundabouts like that in the country can probably be counted on one hand.
> I long thought their strong Gaullist stance on sovereignty was a bit silly in today's world
There are very good reasons why De Gaulle was always a bit doubtful about American military protection, and why post-war France put a strong emphasis on military sovereignty.
That has nothing to do with any French stubbornness or some so-called French anti-American sentiment.
The main reason is that De Gaulle experienced first-hand that American leadership can be untrustworthy.
When he was the leader of the exiled French forces during the 40s, Churchill supported him.
Meanwhile, Roosevelt refused to give him any support and actively worked to have him replaced by a puppet, General Giraud, mainly because setting up a pliable government in France was better aligned with long-term American interests.
The situation changed only later, when it became pretty obvious that Giraud was an antisemite, an open Nazi collaborationist, and a pretty poor politician.
Only then did America start to support De Gaulle officially, initially only indirectly, through the relationship between De Gaulle and Eisenhower.
I think it was a quip strategically designed to make the Americans feel better about themselves even as it clearly puts them down, and to become an aspiration at least. In some sense the history of the US is about unleashing a powerful idea and always falling short of living up to it.
It was even more than that. In the late 1930s the US told France not to worry about those aggressive Germans next door, if things go pear-shaped we'll stand by you, you can rely on us to help defend you.
This is why France went with its own nuclear deterrent, among other things.
This is a pretty flippant response to a rather insightful point by someone who isn't exactly a newbie to the language. They understand very well the implications of move being nondestructive and the point they're making stands nevertheless.
Let me clarify something here (And my apologies if this looked a bit aggressive).
There is always a cost to abstraction, and it can take different forms. In C++, it is often build time (and/or complexity). And Chandler, in his talk, is perfectly right about that.
But that does not change the validity of the C++ concept of 'zero cost abstraction at *runtime*'. It is possible to get proper language abstractions without sacrificing runtime performance.
I did get sharp in my comment because this talk is constantly posted by a specific C crowd that profoundly hates any form of abstraction and uses it as a totem to justify terrible development practices.
If your language supports zero-cost abstraction and genericity, for God's sake, use them... most of the time the impact in terms of compilation time is worth the benefits.
You're not only undeservedly dismissing very salient high-level points but also just completely missing the low-level ones. Even ignoring build times and looking only at execution times (and btw it's not just time that matters here), even function calls are not always zero-cost. For multiple reasons, some of which differ across compilers more than others.
Nobody is concluding you shouldn't write functions either.
My remark is still valid, even considering memory footprint and cognitive complexity.
> even function calls are not always zero-cost. For multiple reasons, some of which differ across compilers more than others
Divergences in inlining support across compiler implementations have nothing to do with the debate here. Some idiosyncrasies of C++ argument passing and lifetimes might cause a cost in some specific scenarios, but that is specific to C++.
That still does not invalidate the concept of zero runtime cost for abstraction.
As much as people like to dismiss it, Stepanov was right all along.
> Nobody is concluding you shouldn't write functions either.
Then you will be surprised to learn that some 'devs' actually recommend exactly this and write guidelines that minimize the number of functions written. They tend to be from the same crowd as the one I was describing before.
> "low-latency links", says the article. I wonder if they consider 500 ms ping to be low, or if they want to replace Geostationary with Low Earth Orbit.
Directional laser beams are orders of magnitude harder to jam than radio waves. That alone makes them of great interest for military applications, even with 500 ms latency.
There are several known cases where jamming caused the loss of costly military drones.
> Directional laser beams are orders of magnitude harder to jam than radio waves. That alone makes them of great interest for military applications, even with 500 ms latency.
2. Jam-Resistant Land Mobile Communications
This system uses a highly redundant optical communication technique
to achieve ultra-low, ultra-robust transmission. The basic unit is
the M1A1 tank. Each tank is labelled with the number 0 or 1 painted
four feet high on the tank turret in yellow, day-glo luminescent
paint. Several detection methods are under consideration:
Please correct me if I'm wrong, but I guess if you aim well enough, there could be a very long, narrow, non-reflective cylinder in front of the receiver that would block all light that is not coming exactly from the direction of the target satellite.
"If you aim well enough" is doing a ton of work there. Precise real-time optical tracking of a satellite from a moving platform is an extremely difficult problem. Even if the satellite itself is geostationary, it would also have to rotate to keep the "cylinder" pointed in the right direction to maintain signal.
I suppose you could make a "cylinder" or "cone" broad enough that, if the threat were static, it could blot out attempted jamming from certain regions while staying open toward friendly zones.
No, but the airplane it would be talking to does. Hard enough when your transceiver is wide open, if you narrow your FOV to a thin cone in order to block jamming signals, the GEO now has to physically track the airplane somehow.
Either the whole satellite rotates, or the transceiver is on a mount that can rotate.
Unless you plan on having 1 satellite per airplane, something tells me it's harder to constrain the FOV than you might suggest. There's also the small problem of the energy, complexity, & weight of having motorized parts on the satellite (or fine-grained attitude control for the satellite itself to track the craft).
Agreed, my point is it's a lot harder than tiagod made it sound.
It also doesn't account for some kind of mobile jammer making it inside the cone, particularly if it's staring at an adversarial nation where secure comms would be needed the most, but the adversary would have freedom of movement.
Not to mention that these former startups are now the Navy, and they are almost all squarely on the side of the person who tried to overthrow democracy.
- (1) Provide a way to compile without internet access and specify the associated dependencies path manually. This is absolutely critical.
Most 'serious' multi-language package managers and integration systems build in a sandbox without internet access, for security and reproducibility reasons.
If your build system does not allow building offline with manually specified dependencies, you will make the lives of integrators and package managers miserable, and they will avoid your project.
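On the consumer side, the usual pattern (sketched here in CMake; the project name and path are illustrative) is that the integrator provisions dependencies in the sandbox and passes their location explicitly, so the build never touches the network:

```cmake
# Offline-friendly build: the integrator pre-provisions dependencies and
# points the build at them explicitly, e.g.
#   cmake -DCMAKE_PREFIX_PATH=/opt/deps -S . -B build
# find_package() then resolves from that prefix; no download step.
find_package(ZLIB REQUIRED)
target_link_libraries(myapp PRIVATE ZLIB::ZLIB)
```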
- (2) Never ever build with '-O3 -march=native' by default. This is always a red flag and a sign of immaturity. People expect code to be portable and shippable.
Good default options are the CMake equivalent of "RelWithDebInfo" (meaning: -O2 -g -DNDEBUG).
-O3 can be argued; -march=native is always, always a mistake.
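As a sketch of what that default looks like in practice, here is a common CMake idiom (the option name is mine): fall back to RelWithDebInfo when the user specifies nothing, and make native tuning a deliberate opt-in rather than a default:

```cmake
# Default to RelWithDebInfo (-O2 -g -DNDEBUG with GCC/Clang) if the user
# did not choose a build type; never silently force an aggressive one.
if(NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
  set(CMAKE_BUILD_TYPE RelWithDebInfo CACHE STRING "Build type" FORCE)
endif()

# -march=native only as an explicit, off-by-default opt-in.
option(TUNE_NATIVE "Tune for the build machine's CPU (non-portable)" OFF)
if(TUNE_NATIVE)
  add_compile_options(-march=native)
endif()
```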
- (3) Allow your build tool to be built by another build tool (e.g. CMake).
Anybody caring about reproducibility will want to start from sources, not from a pre-compiled binary. This also matters for cross compilation.
- (4) Please offer compatibility with pkg-config (https://en.wikipedia.org/wiki/Pkg-config) and, if possible, CPS (https://cps-org.github.io/cps/overview.html), for both consumption and generation.
They are what will allow interoperability between your system and other build systems.
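To illustrate what "generation" means here: a hypothetical library foo would install a foo.pc file like the sketch below (all paths and versions are made up), which any pkg-config-aware build system can then consume with `pkg-config --cflags --libs foo`:

```
# foo.pc — illustrative only; a real file is generated with the install prefix.
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: foo
Description: Hypothetical example library
Version: 1.2.3
Requires.private: zlib >= 1.2
Cflags: -I${includedir}
Libs: -L${libdir} -lfoo
```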
- (5) Last but not least: seriously consider the cross-compilation use case.
It is common in the world of embedded systems to cross compile. Any build system that does not support cross-compilation will be de facto banned from the embedded domain.
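For reference, this is what the embedded workflow typically looks like with CMake: the target is described in a toolchain file (the triplet and sysroot path below are illustrative) and the build never assumes that host and target are the same:

```cmake
# toolchain-aarch64.cmake — illustrative cross-compilation description.
set(CMAKE_SYSTEM_NAME Linux)              # target OS, not the host's
set(CMAKE_SYSTEM_PROCESSOR aarch64)

set(CMAKE_C_COMPILER   aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)

# Look for headers and libraries only in the target sysroot, never the host.
set(CMAKE_SYSROOT /opt/sysroots/aarch64)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

Invoked as `cmake -DCMAKE_TOOLCHAIN_FILE=toolchain-aarch64.cmake -S . -B build`; a build system designed this way treats native builds as just the special case where host and target coincide.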