Hacker News | flitzofolov's comments

malware author is pretty harsh, but fair analysis overall

Can you elaborate on this?

What are some examples of skills you think are now essential, that were previously taken for granted or obviated in some way?


My rule of thumb is that if you’re not measuring anything you’re not engineering. It’s not the whole picture, but to me the engineering part sometimes means being able to explain (and even quantify) why one solution is better than another.


I've found coding assistants to be a huge boon for this. All of the thorough analysis that previously would've taken a bunch of tedious extra work for marginal benefit (with a well-calibrated intuition) becomes 5 seconds of telling the computer to build a harness, then letting it chew on that for 15 minutes. It now also takes me one command and less than a minute to get pprof captures from all the production services my team owns (thanks to some scripts I had it write), which is just something I never would've bothered to automate otherwise, so we never really looked much at it. Codex is also very good at analyzing the results, distinguishing easy wins from what would be invasive to improve, and then just doing it.
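A minimal sketch of what such a capture script might look like. Everything here is hypothetical (the service names, ports, and output paths are made up); it only assumes Go's standard `net/http/pprof` endpoints are exposed:

```python
import urllib.request

# Hypothetical services exposing Go's net/http/pprof on an internal port.
SERVICES = {
    "checkout": "http://checkout.internal:6060",
    "inventory": "http://inventory.internal:6060",
}

def pprof_url(base, profile="profile", seconds=30):
    """Build the pprof capture URL for a service base address."""
    return f"{base}/debug/pprof/{profile}?seconds={seconds}"

def capture_all(out_dir="."):
    """Fetch a CPU profile from every service and save it locally."""
    for name, base in SERVICES.items():
        with urllib.request.urlopen(pprof_url(base)) as resp:
            data = resp.read()
        with open(f"{out_dir}/{name}.pb.gz", "wb") as f:
            f.write(data)
```

The saved files can then be opened with `go tool pprof`.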

Thinking of seeing if I can get mutation testing set up next, and expanding our use of fuzzing. All of these techniques that I know about but haven't had the time to do are suddenly more feasible to invest into.


Not the original author, but I would guess: understanding the domain problem and interpreting it correctly in a software solution (not code, but a product with workflows, UX, etc.). That in turn requires the ability to listen, understand, and ask the right questions on one hand (what a user wants to achieve), and a good understanding of the technical limitations as well as human habits on the other (what is possible and makes sense). One can argue that AI lacks what we'd call intuition and interpersonal qualities, which are still necessary, as before AI.


Read further into the comment.

Your $300k+ TC job is going away. The only way you'll make the same take home is if you provide more value.

You can be a robotic IC, but you won't be any better than a beginner with Claude Code. You have to level up your communication and organizational value to stay at the top.

Everyone has to wear the cloth of a senior engineer now to simply stay in place. If you can't write well, communicate well, plan and organize, you're not providing more value than a Claude-enhanced junior.


> If you can't write well, communicate well, plan and organize,

Why not ask the LLM to write for you? Same for planning, organization and written communication.

Seems like robotic ICs can "robotize" most of the work stack.


"If you can't write well, communicate well, plan and organize": straw man. Pretty sure this is the Dilbert equivalent of "I can problem solve". If you are an engineer, we are making boatloads being brought in to fix the incompetence of this level of thinking. INFOSEC alone is having a field day.

Would you like to buy a bridge? Coded by Claude. One previous owner. An owner who used said bridge to go to church once a week, and vibe code in Starbucks afterwards.


How about the skill of saving hard disk space, memory, and CPU cycles, for a start? The skill of designing simple, reliable, fast, and efficient things, instead of giant complex bloated unreliable pieces of shit? How about a simple, usable web page that doesn't drag my machine to a crawl, despite its supercomputer-like ability to process billions of instructions per second and hold billions of bytes of data in working memory?

Remember when BIOS computers used to boot in seconds, reliably? When chat clients didn't require an embedded copy of Chromium? When appliances and automobiles didn't fall apart in 6 months, costing thousands to "repair" or just needing to be thrown away and bought again?

Remember when there used to be these things called "machine shops" and "Radio Shacks" and "parts stores" that people who built things frequented? Now most people have to call AAA if they get a flat tire. Changing their own oil is out of the question. "Eww, dirty oil, on my clean fingernails?" Many couldn't tell you which end is which on a screwdriver if their life depended on it.

I'd say these concepts are pretty essential, especially for any nation entertaining delusions of waging Total War against other big and powerful nations. Wasteful and foolish nations lose wars.


Beyond engineering itself, strictly computer engineering? How many coders have no idea what goes on behind an IDE, or have not even the slightest notion of how a computer works? Who think building a computer means watching a YouTube video, buying ready-made parts, putting them together, and that they should then be employed by NASA.

To begin: math, Linux, DevOps, C, and assembly. Not a YouTube video. Not arithmetic. Learn to the point that you could be employed by any of the above as a senior. And don't fear failure. Keep doing it until you understand it.


I agree with your original post that the need for hard skills will persist, but I see it in the other direction: software engineers are going to have to get better at thinking in larger abstractions, not deeper understanding of the stack. Those who can only solve problems locally and repeat the patterns they've seen before rather than create new patterns from building blocks are the ones who are going to struggle.


"software engineers are going to have to get better at thinking in larger abstractions" ... Math was first on my list. I don't know how else to say that.


Computer science is indistinguishable from sufficiently advanced maths.

The AI can already do that part.

The abstraction that matters going forward, is understanding why the abstraction chosen by the AI does or doesn't match the one needed by the customer's "big picture".

The AI is a bit too self-congratulatory in that regard, even if it can sometimes spot its own mistakes.


A lot of studying math is just learning jargon and applications for what are actually pretty straightforward concepts, which lets you better communicate with the computer. You get higher bandwidth communication and better ability to know all of the nuances in things it might propose. You can propose things and understand when it replies with nuances you missed.

Like intro differential geometry is basically a deep dive into what one actually does when reading a paper map. Something everyone (over 30?) is familiar with. But it turns out there's plenty to fill a graduate level tome on that topic.

Linear algebra is basically studying easy problems: y = Ax. Plenty to write about how to make your problem (or at least parts of it) fit that mould.

I suspect, and think I've seen others say, that you get better outputs from LLMs when using jargon. Essentially, its pattern matching tells it to say what an expert would say when using the terminology experts use.
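A toy illustration of the "y = Ax" framing, just to make the jargon concrete (plain Python, no libraries; the numbers are arbitrary):

```python
def matvec(A, x):
    """Compute y = A x, with A as a list of rows and x as a vector."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# A 2x2 example: once a problem is phrased as y = A x,
# the whole linear-algebra toolbox applies to it.
A = [[2.0, 0.0],
     [1.0, 3.0]]
x = [1.0, 2.0]
y = matvec(A, x)  # [2.0, 7.0]
```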


> I don't know how else to say that.

Yep, exactly. The failure to realize that you mean different things when talking about "larger abstractions" is exactly the kind of miscommunication that software people will need to navigate better in the future.


If you need to have that explained to you, you are the problem.


Ah, I think “Math” as a single word on its own means many different things to many different people; I didn’t interpret it in quite the same way. But I see what you mean.

I’m not sure that my colleagues who I think of as “good at math” and “good at thinking in larger abstractions” are necessarily the same ones, but there’s definitely a lot of overlap.


Reminds me of "A Canticle for Leibowitz".

https://en.wikipedia.org/wiki/A_Canticle_for_Leibowitz


Nice to see a well reasoned counter to Altman's hyperbole.


That's like adding puke frosting to a turd cake.


It depends how good the AI is and how it's used etc. It would for example be neat to walk around in a massive AI generated multiplayer city that has a lot of intricate details.


Are we labeling procedural generation AI now? There have been procgen city generators for decades. Here's a particularly nice example:

https://store.steampowered.com/app/2859220/Vuntra_City/

Humans are really good at identifying the unreality of procgen because it loses the intentionality we expect from our built environments. Here's the author of that game talking about their solutions to the problem:

https://youtu.be/4MZ5-KQW3pc


Procedural generation isn't what we call "AI" now, but AI is absolutely procedural generation.

If your inputs to an LLM are prompt + seed, isn't that analogous to world config + seed in Minecraft? In fact, I would argue they both have the same problem: In an infinite world, geography has no meaning.
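The prompt + seed analogy can be sketched in a few lines: the same seed deterministically reproduces the same "world" (a toy example, not Minecraft's actual algorithm):

```python
import random

def generate_terrain(seed, width=8):
    """Deterministically map a seed to a strip of terrain tiles."""
    rng = random.Random(seed)  # seeded RNG: same seed, same world
    tiles = ["mountain", "desert", "lake", "forest"]
    return [rng.choice(tiles) for _ in range(width)]

# Identical seeds yield identical worlds; different seeds just reshuffle
# the same tile vocabulary, which is exactly the "no meaning" problem.
assert generate_terrain(42) == generate_terrain(42)
```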

Why do I get bored of exploring new minecraft worlds? Because ultimately, I know the limits. I've seen cool minecraft mountains, what do I care if this time they're next to a desert but last time they were next to a lake? None of it has any meaning, there's no context. That mountain isn't a holy site for some religion, or a reminder of an ancient battle. It isn't a clever ploy by the writers of a game to set the stage for a final fight. No, it's just mountain #9057382 in minecraft world #74893124.

Midjourney's "wizard tower" with seed 1 vs seed 2? There are differences but none of them mean anything.

I played with AI image generators for a whole month and then I got bored. Not because I can't think of other prompts, but because "witch cat riding a broomstick" is something I can easily imagine in my head and that takes about the same level of effort as typing it into midjourney. And the result has the same meaning. None of my friends want to see "witch cat riding a broomstick" #A213DEF675, the novelty is gone. If they wanted to see a cat on a broomstick they could type the same prompt in.

Now, I only use image generators for D&D campaign art, and I write the campaign myself. Because I know my friends, I know what they like. I know what references they will understand, and every detail I hide in the world is something special for them.

I don't see the "metaverse" being any different. It's cool that we figured out a way to make the monkeys on the typewriters turn out stuff that is readable language instead of gibberish, but how can any work ever stand out in an infinite library?


Dead Cells uses procedural generation but does it in bespoke chunks, so while a level is different it’ll have chunks that are determined fun and then glued together by the procedural algorithm.
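A minimal sketch of that chunk-stitching idea (hypothetical room templates, not Dead Cells' actual generator): the handcrafted chunks guarantee local quality, while the random gluing provides variety.

```python
import random

# Handcrafted chunks, each already known to be fun on its own.
CHUNKS = {
    "arena":    ["####", "#..#", "####"],
    "corridor": ["####", "....", "####"],
    "shrine":   ["#..#", "#..#", "####"],
}

def build_level(seed, length=5):
    """Glue randomly chosen handcrafted chunks into one level layout."""
    rng = random.Random(seed)
    names = [rng.choice(list(CHUNKS)) for _ in range(length)]
    # Concatenate the chunks row by row so they sit side by side.
    rows = ["".join(CHUNKS[n][r] for n in names) for r in range(3)]
    return names, rows
```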


Yes, all of the games with the best procedural generation achieve it by heavily leveraging handcrafted pieces, but that's an indictment of procedural generation, not an endorsement.


Yes. This is the standard approach going back at least as far as the first Diablo: https://www.boristhebrave.com/2019/07/14/dungeon-generation-...


Sorta? Procedurally generated levels tend to not invite repeated playthroughs. Maybe this is a leap above previous efforts, but also maybe that isn't a problem.

I'm all for folks trying. I'm less enthusiastic it will pay off.


When the breadth of the content is just about infinite, will they care about repeated playthroughs? I think they could capture quite a few people just keen to explore and see what's around each successive corner.

I think they could do things like check the response to some areas and favour those to weed others out.

I remember spending time in RDR2 exploring the landscape, and finding the variety of moments that spawned (hunters, animal attacks, peril and whatever else). I think AI can get close enough to that quality of environment to satisfy people. As it was, the level designers were clearly inspired by the natural world, which itself was not generated by human designers.


Again, I'm not against people trying. I just have low confidence. It doesn't feel much different from many ideas I've experienced.

All said, the hubris of new attempts accomplishes a ton.


> When the breadth of the content is just about infinite, will they care about repeated playthroughs?

Read about the launch and the development of No Man's Sky.


I dunno, it feels like when content tends towards infinite it also tends towards meaningless.


> Procedurally generated levels tend to not invite repeated playthroughs.

I agree with your broader point about low confidence in an AI-powered quantum leap, but in my experience this point about repeated playthroughs is actually backwards. Lots of the most popular and/or heralded games in recent years use procedural generation to enable repeated playthroughs as a core mechanic. Balatro, Hades, Slay the Spire, Diablo, Helldivers, Civilization, etc etc. Even things like drafting in MtG or board layouts in Settlers of Catan use procedural generation to increase replayability.


Hades is proceduralish. More randomized stitching together of created rooms. Some of the ones you name are not exploration, at all, though. Balatro and spire are tough to call procedurally generated? Literally static encounters randomly chosen.

Fair, though, that there are some that are closer than a straight read of my post would imply. Minecraft would be a good counterpoint. Those tend to be sandbox style. And nothing wrong with that.


AI isn't the same as procedural, but anyways, I think there are a lot of interesting possibilities. Maybe a long ways off though.


Fair, it is technically different. I question by how much? This is a lot like ML not being statistics. Somewhat true, but also largely not.


Check out Shadows of Doubt if you want an infinite supply of procedurally generated cities with intricate details. No multiplayer, though.


Yeah but have you seen what people like to eat these days?


Makes sense, good luck! I know that sounds snarky, I'm looking forward to rational progress and cooperation on the evolution and adoption of the standard. Just haven't seen that played out in such a planned orderly fashion yet (ipv6?).


ipv6, unicode, usb...

Why am I more worried than excited about a new standard?

By the way bounds checking was introduced in Turbo Pascal in 1987. Iirc people ended up disabling it in release builds but it was always on in debug.

But ... it's Pascal, right? Toy language.


Bounds checking has existed at least since JOVIAL in 1958, or, if you count FORTRAN compilers, which have had an option for bounds checking for quite some time, since 1957.

Here is my favourite quote, every time we discuss bounds checking.

"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."

-- C.A.R. Hoare's "The 1980 ACM Turing Award Lecture"

Guess what programming language he is referring to by "1980 language designers and users have not learned this lesson".
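For contrast, a bounds-checked language rejects the stray subscript at the moment it happens, which is exactly the behaviour Hoare's customers refused to give up. A toy illustration (Python here, whose list indexing is always checked at run time):

```python
def checked_read(arr, i):
    """Every subscript is checked against the array bounds at run time."""
    try:
        return arr[i]
    except IndexError:
        # The error surfaces immediately instead of silently reading garbage.
        # (Note: Python also accepts negative indices, unlike a strict check.)
        return None

data = [10, 20, 30]
assert checked_read(data, 1) == 20
assert checked_read(data, 99) is None  # out of bounds: detected, not UB
```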


> But ... it's Pascal, right? Toy language.

Not really.

It's just out of fashion. But there are really high quality current-day implementations, like the one from Embarcadero (I think they acquired Borland a while ago?): https://www.embarcadero.com/products/delphi/features/design


I think nottorp was being a bit sarcastic. I think the point was, if Pascal, which some in the C/C++ world regard as a "toy" language, had this in 1987, maybe we can actually think about having it in "real" languages in 2025.


My bad, I might have missed the sarcasm then :)


I heard Algol had bounds checking somewhere in the 60s as an implementation feature. Reportedly customers liked it a lot that programs no longer produced wrong results faster.


Pascal being derived from Algol, it makes a lot of sense.


Yeah, I’m wondering what this even means. I’m assuming they’ll have to define “memory safety” which is already quite the task. Memory safe in what context? On what sort of machine? What sort of OS?


> On what sort of machine? What sort of OS?

Just sharing an anecdote: recently, I had to create Linux images for x86 on an ARM machine using QEMU. During this process, I discovered that, for example, creation of the initrd fails because of the memory page size (some code makes assumptions about page size and calculates the memory location to access instead of using the system interface to discover that location). There's a similar problem when using the "locate" utility. Probably a bunch more programs that have been used successfully millions, well, probably trillions of times. This manifests itself in QEMU segfaulting when trying to perform these operations.

But, to answer the question: I think one way to define memory safety is to ensure that the language doesn't have the ability to do I/O to a memory address not obtained through a system interface. Not sure if this is too much to ask. Feels like for application development purposes this should be OK, and for system development this obviously will not work (someone has to create the system interface that supplies valid memory addresses).
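The fix for the page-size class of bug above is exactly that "system interface" point: query the page size instead of hardcoding it (a POSIX-only sketch; `os.sysconf` is not available on Windows):

```python
import os

# Wrong: assuming a 4096-byte page breaks on e.g. 16K-page ARM kernels.
HARDCODED_PAGE = 4096

# Right: ask the system interface for the real page size.
page_size = os.sysconf("SC_PAGE_SIZE")  # mmap.PAGESIZE also works

def round_up_to_page(n):
    """Round a length up to a whole number of pages, using the real size."""
    return -(-n // page_size) * page_size
```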


I think the usual context just requires language soundness; it doesn't depend on having an MMU or anything like that. In particular, protection against:

- out-of-bounds on array read/write

- stack corruption such as overwriting the return address

It doesn't directly say "you can't use C", but achieving this level of soundness in C is quite hard (see seL4 and its Isabelle/HOL proof).


Everyone picks on C, but we have a standard for this. We've been following it for decades in regulated industries. If people take the time, it can be perfectly safe. It requires thinking of a computer as a precision machine, rather than a semantic "do what i'm thinking" box.


The problem is that people are really bad at that kind of precision.


Maybe I lack vision in such matters, but: how would you corrupt the stack without an out-of-bounds write?

But there's another aspect that I think you missed: use after free.

As you say, achieving this level of soundness with C is hard. Proving it is much harder. (Except, how do you know you've achieved it if you don't prove it?)


I suspect seL4 could be proven correct only because it uses simple lifetime patterns.


Yet that is not what memory safety means. A program being memory safe or not depends on its actual behaviour, not on what you can prove about that behaviour. There are plenty of safe C programs and plenty of unsafe ones. Proving something is safe doesn't make it safe.

Also these properties are a very small subset of general correctness. Who cares if you write a "safe" program if it computes the wrong answer?


> Proving something is safe doesnt make it safe.

Err .. that is actually the point of the proof. Can you give an example of something with a Coq-type safety proof that has a memory safety bug in it?


Not OP but you can in theory add cosmic rays, rowhammer attacks and brownout/undervolt glitching into the mix. Kinda stretching it but sometimes you have to think about these.


Reread my comment. You are confusing proof and fact.


You are not alone, it is absurd. The meaning is what you make of it.

I recommend reading Carl Sagan for the cosmic perspective and embracing the awe.

It's worth adding that while it is indeed all arbitrary and we are cast here without rhyme or reason, not everyone is born with the same lot in life. I also recommend reading Angela Davis for the perspective of systematically oppressed people and their struggles for liberation.

For me, when I do get lost in similar thoughts, it helps me to ground in my actual experience, and especially in what I can do in the short time I've been given to ponder to help others make their lives better.


I couldn't find an easy link from these docs to the product page on mobile. Seems like a wasted opportunity. I had to edit the URL to get to the company website.


It's not really an opinion piece is it? It's docs. The language seems appropriate for articulating principles.


Whoops I missed this reply!

I think it's absolutely an opinion piece - defining specific items as principles by definition means expressing opinionated ideas about the relative priority of those items over others. Also, imperative mood contains value judgment, which is inherently opinion-based (e.g. "Never expose PII"). Making arguments for why you should or should not do things requires expressing opinions about relative importance, weight etc.

If this were instead an article describing what feature flags are, or one performing a survey of various approaches to building/scaling them, I think the lack of voice is just fine - that's dealing in statement of fact. But this article mandates and implores and exhorts - the value judgments inherent in that pathos are empty without genuine authorship.

Also I'm not saying the lack of voice is bad even for conveying meaning or teaching - more that it is jarring and uncanny to read imperative claims in an empty robotic voice devoid of ethos.

Finally, I also might be biased by my first documentation love, the zeromq guide, which is an extremely-strongly-opinionated piece of docs that does its job exceptionally well. I think when writing about how or why, a strong writer's voice is more compelling. This article stretches past just the what into those other question words, so its seeming lack of authorial authority falls flat to me.

Thanks for giving me an excuse to blabble lol.


There are products designed to solve this problem: https://www.humansecurity.com/products/ad-tech-teams

Disclosure - I used to work for WhiteOps (now known as Human Security apparently).


There are definitely products for this problem; it's a huge business. But they're not products that are compatible with EthicalAds' privacy-first JS-optional model.

