Hacker News | mmaunder's comments

Cheapest four letter domain on Earth at this point, given the negative value of the business and brand.

OP, can you please make it less dark and slightly larger? Super useful otherwise. Qwen 3.5 9B is going to get a lot of love out of this.

I'm not usually one to whine, but agreed; additionally, add contrast to the modifiers (e.g. the processor selector). The first thing I did when I visited was scale the website to 150%.

Super impressive comparisons, and they correlate with my perception, having three separate generations of GPU (from your list pulldown). Thanks for including the "old AMD" Polaris chipsets, which are actually still much faster than lower-spec Apple silicon. I have Llama 3.1 running under Ollama on a Vega 64 and it really is twice as fast as an M2 Pro...

----

For anybody who thinks installing a local LLM is complicated: it's not (so long as you have more than one computer and don't tinker on your primary workhorse). I am a blue-collar electrician (admittedly a geeky one); it was no more difficult than installing Linux. I used an online LLM to help me install both =D


Have to disagree, in part at least. The text is pretty small, which isn't good, but I'm glad when sites don't succumb to the make-dark-mode-lighter trend.

I can't see shit on this website lol. It'd be nice if they had a switch to toggle a light mode.

+1

The website is super useful. That theme, though... low-contrast text on a too-dark background is, uh, barely readable for me.


OP here, it's not mine though!

That's between 1 and 10 training runs on a large foundational model, depending on pricing discounts and how much they manage to optimize it. I priced this out last night on AWS, which is admittedly expensive, but models have also gotten larger.
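For a rough sense of scale, here's a back-of-envelope sketch in Python. Every number below is an assumption for illustration, not the parent's actual AWS quote:

```python
# Back-of-envelope cost of one large training run on rented GPUs.
# All figures are assumptions for illustration, not AWS pricing.
gpu_count = 16_384          # assumed H100-class accelerators
usd_per_gpu_hour = 4.0      # assumed on-demand rate
days = 30                   # assumed wall-clock training time

cost_usd = gpu_count * usd_per_gpu_hour * 24 * days
print(f"~${cost_usd:,.0f} per run at these assumptions")
```

At these made-up numbers a single run lands in the tens of millions of dollars, which is the order of magnitude the comment is gesturing at.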

Someone explain how you'd create a vector embedding from homomorphically encrypted data without decrypting it. Seems like a catch-22: you don't get to know the semantic meaning, but you need the semantic meaning to position it in high-dimensional space. I guess the point I'm making is that sure, you can sell compute for FHE, but you quickly run up against a hard limit on any value-added SaaS you can provide the customer. This feels like a solution being shoehorned in because cloud providers really, really want customers to use their data centers, when in truth the best solution would be a secure facility for the customer, so that applications can actually understand the data they're working with.

Most of modern machine learning is effectively linear algebra. We can achieve semantic search over encrypted vectors if the encryption relies on similar principles.
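To make the linear-algebra point concrete, here's a toy sketch using textbook Paillier, which is only additively homomorphic (not full FHE) and uses tiny fixed primes, so it is utterly insecure and for illustration only. The server never decrypts the stored embedding, yet can compute its dot product against a plaintext query vector:

```python
import math
import random

# Toy textbook Paillier: E(a)*E(b) = E(a+b) and E(a)^k = E(k*a).
# Tiny fixed primes make this insecure -- demo only.
p, q = 999983, 1000003
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # valid because we use g = n + 1

def encrypt(m: int) -> int:
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

# Server stores an encrypted, integer-quantized document embedding.
doc = [3, 1, 4, 1, 5]
enc_doc = [encrypt(v) for v in doc]

# Dot product against a plaintext query, computed on ciphertexts:
# prod_i E(v_i)^q_i = E(sum_i v_i * q_i). No decryption server-side.
query = [2, 7, 1, 8, 2]
score_ct = 1
for c, qi in zip(enc_doc, query):
    score_ct = score_ct * pow(c, qi, n2) % n2

assert decrypt(score_ct) == sum(v * qi for v, qi in zip(doc, query))
```

The client decrypts the returned score and ranks results locally. The parent's catch-22 still stands, though: producing the embedding in the first place has to happen inside the client's trust boundary.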

Nice. I played with this a bit. Agents are very good at Rust and CUDA, so massive parallelization of compute for things like options chains may give you an edge. Also, you may find it hard to get a very low-latency connection, one low enough in milliseconds that, after you factor in the other delays, you still have an edge. So one approach might be to acknowledge that as a hobbyist you can't compete on lowest latency, and to compete instead on two other fronts: the most effective algorithm, and the ability to massively parallelize on consumer GPUs what would take others longer to calculate.
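As a toy illustration of the options-chain point: each contract prices independently, which is exactly the shape of work an agent can port to CUDA. A minimal plain-Python sketch (the model, rate, and strikes are made up for illustration):

```python
import math

def bs_call(spot: float, strike: float, t: float, r: float, sigma: float) -> float:
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (math.log(spot / strike) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return spot * cdf(d1) - strike * math.exp(-r * t) * cdf(d2)

# Every strike prices independently: embarrassingly parallel, which is
# why a CUDA port over thousands of contracts is the easy win here.
chain = [bs_call(100.0, k, 1.0, 0.05, 0.2) for k in range(80, 121, 5)]
```

One GPU thread per contract (or per strike/expiry pair) is the natural mapping; the per-contract math stays identical.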

Best of luck. Super fun!

PS: Just a follow-up. There was a post here a few days ago about a research breakthrough where they literally just had the agent iterate on a single planning doc over and over. I think pushing chain of thought for SOTA foundational models is fertile ground. That may lead to an algorithmic breakthrough if you start with some solid academic research.


That's one way to block those pesky young innovators from trampling our lawn.

Please don't post snarky, shallow dismissals. That's been against the guidelines for a long time.

Genuine innovation is what we most want to encourage. That's what Show HN has always been about.

The problem now is that coding assistants have dramatically lowered the bar for getting a product or tool working, without the need for much innovation. We need new ways of identifying projects that are genuinely innovative so that their creators can be fairly rewarded, rather than being drowned out.


Same. 52 year old CTO here.

But my AI didn't do what your AI did.

Cherry-picked AI fail for upvotes. Which you'll get plenty of here and on Reddit from those too lazy to go and take a look for themselves.

Using Codex or Claude to write and optimize high-performance code is a game changer. Try optimizing CUDA using Nsight Systems (nsys), for example. It'll blow your lazy little brain.


Yeah, right. An LLM in the hands of a junior engineer produces a lot of code that looks like it was written by a junior. An LLM in the hands of a senior engineer produces code that looks like it was written by a senior. The difference is the quality of the prompt, as well as the human judgment to reject the LLM's code and the follow-up prompts telling the LLM what to write instead.

Lol what. The difference is that the senior... is a senior. Ask yourself what characteristics distinguish a senior from a junior...

You're glossing over so much. Moreover, how does the junior grow into the senior with those characteristics if their starting point is LLMs?


I’m not glossing over anything. You and I are talking about the exact same thing phrased differently. How does a senior know when to reject some LLM code and start over? Experience. I don’t disagree with you but your tone is aggravating.

This. I really wonder how trainees are supposed to grow in an age where they are asked not to code themselves but to guide a machine doing so.

Prompting is just step 1:

Step 0: iterating and getting the right skills in place.

Step 1: the prompt itself.

Step 2: creating and reviewing a plan.

Step 3: a command/skill that decomposes the problem into small implementation steps, each with its dependencies and a way to verify/test it.

Step 4: executing the implementation plan using sub-agents and ensuring validation/testing passes.

Step 5: a code review using Codex (since I use Claude for implementation).

I kind of agree. But I'd adjust that to say that in both cases you get good-looking code. In the hands of a junior you get crappy architecture decisions and a complete failure to manage complexity, which results in the inevitable Reddit "they degraded the model" post. In the hands of a senior you get well-managed complexity, targeted features, scalable high-performance architecture, and good base technology choices.

It’s easy to get AI to write bad code. Turns out you still need coding skills to get AI to write good code. But those who have figured it out can crank out working systems at a shocking pace.

Agreed 100%. I'd add the knowledge of architecture and scaling that you got from writing all that good code, shipping it, and then having to scale it. It gives you the vocabulary and the broad, deep knowledge base to innovate at lightning speed and at shocking levels of complexity.

I am sorry for asking, but... is there even a guide on how to "figure it out"? Otherwise, how are you so sure about it?

Right here: https://codemanship.wordpress.com/2025/10/30/the-ai-ready-so...

This series of articles is gold.

Unsurprisingly, writing good software with AI follows the same principles as writing it without AI. Keep scopes small. Ship, refactor, optimize, and write tests as you go.


When a new technology emerges we typically see some people who embrace it and "figure it out".

Electronic synthesisers went from "it's a piano, but expensive and sounds worse" to every weird preset creating a whole new genre of electronic music.

So it seems plausible that, as with Claude Code, our complaints about unmaintainable code come from trying to use it like a piano, and the rave kids will find a better use for it.



That's actually a great question. Truth be told the best way right now is to grab Codex CLI or Claude CLI (I strongly prefer Codex, but Claude has its fans), and just start. Immediately. Then go hard for a few months and you'll develop the skills you need.

A few tips for a quickstart:

Give yourself permission to play.

Understand basic concepts like context window, compaction, tokens, chain of thought and reasoning, and so on. Use AI to teach you this stuff, and read every blog post OpenAI and Anthropic put out and research what you don't understand.

Pick a hard coding problem in Python or TypeScript, take a leap of faith, and ask the agent to code it for you.

My favorite phrase when planning is: "Don't change anything. Just tell me." Save this as a tmux shortcut and use it at the end of every prompt when planning something out.
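For instance, a hypothetical one-line tmux binding (the key choice is arbitrary, and this is a config fragment for ~/.tmux.conf, not a script):

```shell
# Hypothetical binding: prefix + T types the planning phrase into
# whichever pane your agent CLI is running in.
bind-key T send-keys "Don't change anything. Just tell me."
```

After reloading the config, hitting your prefix followed by T drops the phrase at the end of whatever prompt you're composing.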

Use markdown .md docs to create a planning doc, keep chatting with the agent about it, and have it update the plan until you're super happy, always using the magic phrase "Don't change anything. Just tell me." (I should patent that little number. Best trick I know.)

Every time you see an anti-AI post, just move on. It's lazy people making lazy assumptions. Approach agentic coding with a sense of love, excitement, optimism, and take massive leaps of faith and you'll be very very surprised at what you find.

Best of luck Serious Angel.


You're not really answering the question are you?

Your answer is to play with it. Cool. But why can't you and others put together a proper guide lol? It can't be that hard.

Go ahead and do it - it'll challenge the Anti-AI posters you are referencing. I and others want to see that debate.


Don't worry, we'll all be taking the Claude certification courses soon enough.

Ah - I know! Seriously, I know. There's such a bad need for this right now. The problem is that the folks who are great at agentic coding are coding their asses off 16 to 20 hours a day and don't have a minute they want to spend writing guides, because of the opportunity cost.

One of the rare resources I found recently was the OpenClaw guy's interview on Lex. He drops a few bangers that are really valuable and will save you from having to spend a long time figuring it out.

Also, there's a very strong disincentive for anyone to write right now, because we're competing against the noise and the slop in the space. So it's best to just shut the fuck up and create as fast as we can, and let the outcome speak for itself. You're going to see a lot more products like OpenClaw, where the pace of innovation is rapid and the author freely admits that they're coding agentically and not writing a single line.

I think the advantage that Peter (the OpenClaw author) has is that he has enough money and success not to give a fuck about what people say about him writing purely agentically, so he's been very open about it, which has been great for others considering doing the same.

But if you have a software engineering career or are a public figure with something to lose, you tend to STFU if you're doing pure agentic coding on a project.

But that'll change. Probably over the next few months. OpenClaw broke the ice.


Here are some practical tips:

Start small. Figure out what it (whatever tool you’re using) can do reliably at a quality level you’re comfortable with. Try other tools. There are tons. If it doesn’t get it right with the first prompt, iterate. Refine. Keep at it until you get there.

When you have seen some pattern work, do that a bunch. It won’t always work. Write rules / prompts / skills to try to get it to avoid making the mistakes you see. Keep doing this for a while and you’ll get into a groove.

Then try taking on bigger chunks of work at a time. Break apart a problem the same way you’d do it yourself first. Write a framework first. Build hello world. Write tests. Build the happy path. Add features. Don’t forget to make it write lots of tests. And run them. It’ll be lazy if you let it, so don’t let it. Each architectural step is not just a single prompt but a conversation with the output being a commit or a PR.

Also, use specs or plans heavily. Have a conversation with it about what you’re trying to do and different ways to do it. Their bias is to just code first and ask questions later. Fight that. Make it write a spec doc first and read it carefully. Tell it “don’t code anything but first ask me clarifying questions about the problem.” Works wonders.

As for convincing the AI haters they’re wrong? I seriously do. Not. Care. They’ll catch up. Or be out of a job. Not my problem.


I'm not a SWE by trade, so I couldn't care less about your last comment.

But again this is all… vague. I’m personally not convinced at all.

I’ll be hiring for a large project soon, so I’ll see for myself what benefits (well I care about net benefits) these tools are providing in the workplace.


If it wasn’t clear, I don’t have any desire to convince anybody of anything. You don’t believe the future is here yet? Good luck holding on to that position. Not my problem. I was taking time to try to help somebody who sounded genuinely curious and seeking help. That I’m happy to do.

You’re writing novels when if you had something compelling to show it’d be simple and easy.

If you can't make it simple and easy, then you haven't understood it at all. Geniuses from Steve Jobs to Einstein cite this as the standard by which one understands something. So don't get mad. Show us all how simple and easy it is. If you can't, then accept that you're full of it and don't get it as well as you claim. Not rocket science, is it?

But here we are. And actually my project is going to create the future. You’re a bozo programmer who creates the future that others already see. Know your role and don’t speak for others like me who are in the position of choosing who gets hired.


You’re not going to create any future if you insult people trying to offer friendly advice, or think of the talent you rely on to create your vision as “bozo programmers”. I’d wish you good luck, but you have convinced me you don’t deserve it.

How do you figure anything out? You go use it, a lot.

Weird and frustrating that this hasn't hit the HN front page. I'm not the poster. Huge development in cybersec.

Google really knows how to screw up a product experience.

    npm install -g @googleworkspace/cli
    gws auth setup

    {
      "error": {
        "code": 400,
        "message": "gcloud CLI not found. Install it from https://cloud.google.com/sdk/docs/install",
        "reason": "validationError"
      }
    }

Which takes you to...

https://docs.cloud.google.com/sdk/docs/install-sdk

Where you have to download a tarball, extract it and run a shell script.

I mean how hard is it to just imitate everyone else out there and make it a straight up npm install?


The readme is AI generated, so I am assuming the lack of effort and hand-off to the bots extends to the rest of this repository.

The contributors are a Google DRE, 5 bots / automation services, and a dev in Canada.


You don't need to use gcloud if you already have:

1. A GCP project (needed for OAuth)

2. Enabled APIs in said project


The gcloud CLI will probably also require you to make a Google Cloud project and such by clicking around their godforsaken web UI. Hopefully they've streamlined that; it took me a long time to figure out when I wanted to write some JS in my spreadsheet.

