From my reading (IANAL), there are three types of occupational fraud: asset misappropriation, corruption, and financial statement fraud. Since job postings are interpreted as a positive signal but are not (it seems) typically and explicitly included in formal financial statements, this wouldn't rise to the level of criminal fraud.
Really? I can't imagine not running the code locally. Honestly, my company has a microservices architecture, and I will just comment out the docker-compose pieces that I am not using. If I am developing/testing a particular component, then I will enable it.
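A minimal sketch of that workflow, with entirely hypothetical service names (none of these come from the commenter's actual setup):

```yaml
# Hypothetical docker-compose.yml: only the component under development
# stays enabled; the rest are commented out to keep the local stack light.
services:
  payments:            # the component being developed/tested right now
    build: ./payments
    ports:
      - "8080:8080"
  # search:            # not needed for this task, so commented out
  #   build: ./search
  # notifications:
  #   build: ./notifications
```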
The problem with AI is the error-checking piece. It’s nice that it can do this, but I don’t see evidence it can validate what it generates at scale. Even then, how would you validate an AI validation?
I’m okay with the value of the dollar drastically decreasing. In fact, I think it will naturally do so due to the high debt and labor becoming more expensive in other countries.
I think it would be good for U.S. workers as it will help make them more competitive in a global labor market.
ANY call to set a debt ceiling needs to be directly tied to converting all for-profit corporations into public-benefit corporations.
That doesn't restrict corporations from generating profit. It just balances how they use said profits.
"Fixing" the national debt problem by putting a ceiling on it without addressing one of the main contributors to the current state of government spending and quality of life in America is a recipe for disaster.
It's okay to rip the band-aid off, but only if you're ready to deal with what may be a mortal wound.
Look at all of the major government spending and determine who benefits most and where funds are sent. If the recipient of funds is a corporation, also review how their profits are spent and which humans eventually profit from that.
I'm using an expansive definition of boilerplate, to be sure. But like boilerplate, most unit tests require a little bit of thought and then a good amount of typing: setting up the data to test, mocking methods, and writing out assertions to cover all your edge cases.
I've found Sonnet and o1 to be pretty good at this. Better than writing the actual code, because while modifying a system requires a lot of context about the overall application and domain, unit testing a method usually doesn't.
Yes. You write a function ApplyFooToBar(), and then unit tests that check that, when supplied with the right Foos, the function indeed applies those Foos to the Bar. It's not very intellectually challenging work.
If anything, the challenge is all the boilerplate surrounding the test, because you can't just write down what the tests check - you need to assemble the data and the expected results, which you end up DRY-ing into support modules once you have 20 tests needing similar pre-work. And then there's lots of other bullshit to deal with at the intersection of your programming language, your test framework, and your modularization strategy.
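That pre-work can be sketched in a few lines. Everything here (the Foo/Bar types, the fixtureFoos helper) is hypothetical, just to show the shape of the boilerplate and the fixture helper it tends to get DRY-ed into:

```java
import java.util.Arrays;
import java.util.List;

public class FooTestSketch {
    record Foo(int amount) {}
    record Bar(int total) {}

    // The function under test: applies each Foo's amount to the Bar's total.
    static Bar applyFooToBar(List<Foo> foos, Bar bar) {
        int sum = foos.stream().mapToInt(Foo::amount).sum();
        return new Bar(bar.total() + sum);
    }

    // Fixture helper: the kind of pre-work that ends up in a shared support module.
    static List<Foo> fixtureFoos(int... amounts) {
        return Arrays.stream(amounts).mapToObj(Foo::new).toList();
    }

    public static void main(String[] args) {
        // Assemble data, assemble expected result, assert - the whole ritual.
        if (applyFooToBar(fixtureFoos(1, 2, 3), new Bar(10)).total() != 16)
            throw new AssertionError("Foos were not applied to the Bar");
        if (applyFooToBar(fixtureFoos(), new Bar(10)).total() != 10)
            throw new AssertionError("empty Foos should leave the Bar unchanged");
        System.out.println("ok");
    }
}
```

The tests themselves are two one-liners; the surrounding type and fixture setup is the bulk of the typing.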
Indeed. Too many tests are just testing nothing other than mocks. That goes for the tests my coworkers write directly and for their Copilot output. They’re not useful tests; they’re not going to catch actual errors. They’re maybe useful as usage documentation, but in general they’re mostly a waste.
Integration tests, good ones, are harder but far more valuable.
> Too many tests are just testing nothing other than mocks
Totally agree, and I find that they don't help with documentation much either, because the person who wrote them doesn't know what they're trying to test. So it only overcomplicates things.
Also harmful because it gives a false sense of security that the code is tested when it really isn't.
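A minimal sketch of the pattern being criticized, with hypothetical names (EmailGateway, SignupService) and a hand-rolled fake instead of a mocking framework:

```java
public class MockOnlyTestSketch {
    interface EmailGateway {
        boolean send(String to, String body);
    }

    static class SignupService {
        private final EmailGateway gateway;
        SignupService(EmailGateway gateway) { this.gateway = gateway; }
        boolean register(String email) { return gateway.send(email, "Welcome!"); }
    }

    // Hand-rolled mock: always succeeds, records the call.
    static class FakeGateway implements EmailGateway {
        String sentTo;
        public boolean send(String to, String body) {
            sentTo = to;
            return true;
        }
    }

    public static void main(String[] args) {
        FakeGateway fake = new FakeGateway();
        new SignupService(fake).register("user@example.com");
        // This "test" passes no matter how broken the real gateway or the
        // email contents are: it only verifies that the mock recorded a call.
        if (!"user@example.com".equals(fake.sentTo))
            throw new AssertionError("mock was not called");
        System.out.println("test passed (but proved almost nothing)");
    }
}
```

The green checkmark here reflects the fake's behavior, not the system's - which is exactly the false sense of security described above.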
My approach in the past has been that only certain parts of the code are worth unit testing. But given how much easier unit tests are to write now with AI I think the % of code worth unit testing has gone up.
> But given how much easier unit tests are to write now with AI I think the % of code worth unit testing has gone up.
I see the argument, I just disagree with it. Test code is still code and it still has to be maintained, which, sure, "the AI will do that", but now there's a lot more that I have to babysit.
The tests that I'm seeing pumped out by my coworkers who are using AI for it just aren't very good tests a lot of the time, and honestly encode too many of the specific implementation details of the module in question, making refactoring more of a chore.
The tests I'm talking about simply aren't going to catch any bugs, and they weren't used as an isolated execution environment for test-driven development, so what use are they? I'm not convinced, not yet anyway.
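One way to illustrate the "encodes implementation details" problem, using a hypothetical PriceCalculator with a contrived step log:

```java
import java.util.ArrayList;
import java.util.List;

public class OverfitTestSketch {
    static class PriceCalculator {
        // Internal step log, exposed only so a test can peek at it.
        final List<String> steps = new ArrayList<>();

        int total(List<Integer> prices) {
            steps.add("sum");
            int sum = prices.stream().mapToInt(Integer::intValue).sum();
            steps.add("done");
            return sum;
        }
    }

    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();
        int result = calc.total(List.of(3, 4));

        // Behavioral assertion: survives any refactoring that keeps the result.
        if (result != 7) throw new AssertionError("wrong total");

        // Implementation-detail assertion: breaks the moment the internals are
        // reorganized, even though the observable result is unchanged.
        if (!calc.steps.equals(List.of("sum", "done")))
            throw new AssertionError("internal steps changed");
    }
}
```

The second assertion is the kind that makes refactoring a chore without catching any extra bugs.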
Just because we can get "9X%" coverage with these tools, doesn't mean we should.
What’s wrong with having an interface with one implementation? It’s most likely meant to be extended by code outside the current repo. It’s not a smell in any sense.
In that case you have more than one implementation, or at least a reasonable expectation that it will be used. I don't have a problem with that.
My comment was regarding interfaces used internally within the code, with no expectation of any external use. I wrote from a modern Java perspective, with mockable classes. Apparently interfaces are used in .NET to create mocks in unit tests, which could be a reason to use that approach if that is considered "best practice".
90% of single-implementation interfaces (in Kotlin on Android projects I've seen) are internal (package/module private, more or less). So no, they are not meant to be extended or substituted, and tests are their only raison d'être (irony: I've almost never seen any actual tests...). This is insane because there are other tools you can use for testing, like an all-open compiler plugin or testing frameworks that can mock regular classes without issues.
An interface with a single implementation sometimes makes sense, but in the code I've seen, such things are kludges/workarounds for technical limitations that disappeared more than a decade ago. At least, it looks that way from the perspective of a polyglot programmer who has worked with multiple interface-less OOP languages, from Smalltalk to Python to C++.
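A sketch of the contrast, with hypothetical names (UserRepository, UserStore): the criticized pattern next to the alternative of mocking a plain class directly, as the tools named above allow.

```java
public class SingleImplSketch {
    // The criticized pattern: an internal interface with exactly one
    // implementation, existing only so tests can substitute it.
    interface UserRepository {
        String findName(int id);
    }

    static class UserRepositoryImpl implements UserRepository {
        public String findName(int id) { return id == 1 ? "Ada" : null; }
    }

    // The alternative: a plain, non-final class. Mocking frameworks that can
    // subclass concrete classes (or Kotlin's all-open plugin) work with this
    // directly - no interface needed.
    static class UserStore {
        String findName(int id) { return id == 1 ? "Ada" : null; }
    }

    public static void main(String[] args) {
        UserRepository repo = new UserRepositoryImpl();
        if (!"Ada".equals(repo.findName(1))) throw new AssertionError();
        if (new UserStore().findName(9) != null) throw new AssertionError();
    }
}
```

Both versions have identical behavior; the interface adds an extra indirection whose only consumer would be a test that, per the comment above, usually doesn't exist.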
Sure, but it introduces an invisible barrier for applicants. Someone who starts their email that way, not knowing that hiring managers are doing this kind of filtering, accidentally triggers the filter. That's not a problem for the hiring managers, but it adds one more reason why an applicant might get disqualified in a process that is already frustratingly opaque for applicants.
This is actually insane. 120,000 comments! To a certain extent, if our law is already so complicated that you need to hire a lawyer to understand it, that is already a fundamental problem.
Simplify the rules, make it easier to understand and reason about. Computers should be able to determine if someone is breaking a law, not be needed to check whether it is a bad law.
We should be using computing power where someone can ask: is this legal? Can I do this? That’s the true value to society.
This kind of comes back to the common law vs civil law distinction. Most English-speaking countries operate under a common law system, where the laws as written down provide the groundwork but precedent set by previous court cases is also legally binding. In contrast, most of Europe and South America operates under a civil law system (yes, terrible name, not the opposite of criminal law), where the written law reigns supreme and previous court decisions are merely informing opinions.
As you can imagine, algorithmic decisions are incredibly difficult in any common law system. And while they might be viable in a civil law system, you would lose out on the ability of a judge to give consideration to the specific circumstances of each case.
In practice, both systems end up unwieldy in an attempt to be fair: common law systems because of the overwhelming amount of precedent to consider, civil law systems because the laws become incredibly long and complex, with complex interactions between laws.
How do they achieve consistency in civil law systems?
In the US, if, say, a district court in California and a district court in Oregon adopt incompatible interpretations of a federal law, someone will appeal to the appeals court that covers both states. That court will interpret the law, and its interpretation is then binding in all the states under that appeals court (Alaska, Arizona, California, Hawaii, Idaho, Montana, Nevada, Oregon, and Washington).
If some other appeals court in some other region goes a different way, it can go to the Supreme Court which makes an interpretation for the whole country.
We eventually reach consistency even if Congress is unwilling or unable to revisit the law.
It seems like the best use of AI / computing power would be to do common law through an LLM? I understand it’s nuanced and complicated, but that seems like a good use case for our current AI systems? What am I missing?
Court isn’t about understanding if action X violates law Y. Analyzing that part is quick in preliminary work. All of the hard work is proving that the defendant performed X, proving they had intent and it wasn’t an accident, etc.
Most laws have a lot of nuances and edge cases. That's why the words and language are so important. The more edge cases are discovered, the more complicated the text becomes.
That’s not really true, though. While legalese creates complexity, a substantial part of judicial rulings is figuring out which parts of it should be ignored because they’re bullshit. A lot of law is driven by arguments about how things ought to be, given a loose framework of law and precedent.
Simplify the rules, make it easier to understand and reason about
This happens every few decades... A legislature wipes the slate clean and starts fresh with a new, simpler set of laws.
Then it spends the next few years discovering why the old set of laws was so complicated, as it gradually reintroduces laws to deal with the edge cases, loopholes, etc. that the new laws created. And then you end up with a complicated set of laws again.
The people who get hurt when you "simplify" the legal code aren't the corporations, since they have the money to get good legal advice, nor the criminals, since they don't particularly care about following the law in the first place. It's the common people who get hurt when the law is simplified, because most people are fundamentally law-abiding, and the law is complicated precisely to deal with all the people who are not.