Hacker News | bluGill's comments

The people who did contracts are aware of Ada/SPARK, and some have experience using it. Only time will tell if it works in C++, but they at least did all they could to give it a chance.

Note that this is not the end of contracts. This is a minimum viable start that they intend to add to, but the missing parts are more complex.


Might be the case that Ada folks successfully got a bad version of contracts, not amenable to compile-time checking, into C++ to undermine the competition. Time might tell.

I strongly doubt that C++ is what's standing in the way of Ada being popular.

Ada used to be mandated in the US defense industry, but lots of developers and companies preferred C++ and other languages, and for a variety of reasons, the mandate ended, and Ada faded from the spotlight.

>the mandate ended, and Ada faded from the spotlight

Exactly. People stopped using Ada as soon as they were no longer forced to use it.

In other words, on its own merits, people don't choose it.


On their own merits, people choose SMS-based 2FA, "2FA" which lets you into an account without a password, perf-critical CLI tools written in Python, externalizing the cost of hacks to random people who aren't even your own customers, eating an extra 100 calories per day, and a whole host of other problematic behaviors.

Maybe Ada's bad, but programmer preference isn't a strong enough argument. It's just as likely that newer software is buggier and more unsafe or that this otherwise isn't an apples-to-apples comparison.


This is some pretty major conspiracy thinking, and would need some serious evidence. Do you have any?

[flagged]


Okay, on one hand, I'm very curious, but on the other hand, not really on topic for this forum. So I'll just leave a "wut".

/48 because Ethernet MAC addresses are that length, so you can assign everything one and find it.

Related: our logging system has a debug level which is not logged by default but can be turned on if a problem in an area is found (in addition to the normal error/info levels, which are logged). I had the idea that if a test fails we should print all these debug messages. Easy enough to turn on, but a number of tests then failed because of side effects that didn't show up when it was off.

I'm trying to think of how (or if) we can run tests with all logging off to find the error and info logs with side effects.
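The failure mode described above can be sketched in a few lines. This is a minimal illustration using Python's stdlib `logging` (not the commenter's actual system): the arguments to a log call are evaluated eagerly, so any side effect in them fires regardless of whether the message is emitted, and guarding the call changes behavior.

```python
import logging

logging.basicConfig(level=logging.INFO)  # DEBUG messages are suppressed
log = logging.getLogger("example")

counter = {"n": 0}  # shared state a careless debug call might touch

def dump_state():
    # Hypothetical helper with a side effect: it mutates shared state
    # every time it is evaluated.
    counter["n"] += 1
    return f"state dump #{counter['n']}"

# The argument is evaluated eagerly, so the side effect fires even
# though DEBUG is off and nothing is printed.
log.debug("details: %s", dump_state())
assert counter["n"] == 1

# Guarding the call makes the side effect conditional on the log level,
# which is exactly why turning debug on in tests can change test results.
if log.isEnabledFor(logging.DEBUG):
    log.debug("details: %s", dump_state())
assert counter["n"] == 1  # unchanged; DEBUG is disabled
```

Running the suite once with debug on and once with it off, then diffing shared state like `counter`, is one way such side-effectful log calls could be flushed out.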


The utility SHOULD ensure there is enough power for the worst case. Which is why they will sometimes pay someone to not generate power.

Those are code in the US now too (with exceptions for where they don't make sense).

NEC doesn't specify GFCI breakers, it merely requires receptacles in certain areas have GFCI protection, and accepts GFCI breakers as one way to provide that.

The conventional practice in the US is still to use GFCI receptacles rather than breakers.


Right, but the NEC specifies arc-fault protection as well (I've only seen this on breakers). Receptacles are cheaper and otherwise just as good.

Because NEC 210.12 requires all devices to be protected. Which means if you have a switch or splice before a plug the only way to protect those is with an AFCI breaker. The only exception is a continuous run from the breaker to an outlet in metal conduit or MC cable. Given how much is romex this effectively forces AFCI branch breakers.

I find that receptacles tend to break prematurely in wet locations, even if 'protected' with a weatherproof box etc. You also need to know where the receptacle is and make sure it is accessible instead of behind a piece of furniture etc. Then some electricians misunderstand and put receptacles throughout the run (much more expensive than one breaker, which is about 2x a receptacle), and in edge cases you need to know the order in which to reset them to get things working again. I much prefer to just have everything in the panel.

Always important to note that "code" does not mean "must meet this standard". Many existing installations will not meet current code and there are varying levels of code (at least in the UK) that mean anything from an electrician can ignore minor faults through to network-notifiable issues.

But that's rather the point here that consumers are the ones who are going to be plugging in these devices, with no appreciation for their circuits and safety devices. The only code that matters is the last version of it adhered to when their home was last wired. In extremes, that can be 40 years or more.


Sure, but everything new must meet current code; nobody upgrades when code changes. Codes from 40 years ago were not bad, though things are always improving.

QA should find not just bugs but also where 'works as specified' is wrong because it doesn't make sense to do it that way.

Every billing system in use is constantly maintained with new features, bug fixes and the like. The system of 20 years ago would apply the wrong tax laws today. The people asking for the new feature today care about how easy those are to add.

I think adding new features is exactly the sort of place where AI is terrible, at least after you do it for a while. I think it's going to have a tendency to regenerate the whole function(s), but it's not deterministic. Plus, as others have said, the code isn't clean. So you're going to get accretions of messy code, the actual implementation of which will change around each time it gets generated. Anything not clearly specified is apt to get changed, which will probably cause regressions. I had AI write some graphs in D3.js recently, and as I asked for different things, the colors would change, how (if) the font sizes were specified would change, all kinds of minor things. I didn't care, because I modified the output by hand, and it was short. But this is not the sort of behavior I want my code output to have.

I think after a while the accretions are going to get slow, and probably unmaintainable even for AI. And by that time, the code will be completely unreadable. It will probably make the code written by people who probably should not be developers that I have had to clean up look fairly straightforward in comparison.


"The code isn't clean"

Skill issue. If you just one-shot everything, sure, you'll get a messy codebase. But if you just manage it like a talented junior dev, review the code, provide feedback, and iterate, you get very clean code. Minus the arguing you get from some OCD moron human who is attached to their weird line length formatting quirk.


That is why I understand everything before I commit. AI can write a lot of bad code, but an expert can guide it to good code.

After a while why wouldn’t the expert just write the good code if the AI never truly learns?

Because the AI can write the tedious parts fast. You have to check, but that is faster than writing.

True experts can design code architecture that doesn’t have “tedious parts”.

Combines have had modern AI image-recognition cameras (the same technology as an LLM) in the base model for a few years now.

You need not stick to any level. Some things that always have been are still bad (slavery is an obvious example, now dated enough to be uncontroversial). Some new things are bad and others good at any age.

Don't grow up too set in your ways to learn the new. But do grow up fast/young enough to get some cynicism for everything. Now that I'm in my 50s the first is important, but when younger the latter was important.


The biggest problem for everyone else in expanding is that they don't trust the market to exist long enough to be worth paying for a new factory, so they are not investing in it. The Chinese might be small, but they think the market will exist and are investing. Will they be right or wrong? I don't know.

