Hacker News | AustinDev's comments

I didn't build it. I just found it on a live stream (https://www.twitch.tv/ej_sa) of the mission. Maybe `Show HN:` was the wrong prefix.

Ah, yeah, Show HN is for things you made. Interesting links like the one you found are submitted without any special prefix.

From the HN FAQ:

> What are Ask HN and Show HN?

> Ask HN lists questions and other text submissions. Show HN is for sharing your personal work and has special rules.


And they get Federal pensions and healthcare funded by tax dollars.

Vibe coding and its consequences.

"The evidence is clear: Either you embrace AI, or get out of this career." -GitHub CEO

"Sooner than later, 80% of the code is going to be written by Copilot. And that doesn’t mean the developer is going to be replaced." -GitHub CEO


Downstream of this I used to cycle my accounts pretty regularly but have stopped since generative AI. Don't want people thinking I'm an LLM spam bot. My stupid comments are entirely my own.


I cycle accounts on here too (probably time to end this one, now that you mention it), but I don't plan on stopping. I refuse to build a long-term identity on a platform that refuses to let me delete old comments if I want to (HN's policy). Too much liability for doxxing, etc.


Only true blue organic human slop coming from my IP address!


Great feedback for the OP. I'd also add: can you make the text not selectable?


Thanks for the feedback; it's done.


For my entire life I've never seen the feds do anything other than selective enforcement. See the latest disclosures re: Zorro Ranch and Little Saint James as recent examples.


LLM-generated readme?

> Mantis 7 consists of four major components:

1, 2, 3, 4.


Probably LLM-generated code. What human uses em dashes and Unicode arrows in boilerplate file-header comments? LLMs, on the other hand, do so very often.

https://github.com/CrimsonDemon567PC/Mantis/blob/d7c8a79f0ff...


What makes you think it's LLM-generated?


Look at the project age and commit velocity. It’s…real obvious.


Did you read my comment?


1. Compiler (compiler.py)

2. File Types

3. Bytecode Format (MTN1)

4. Builtin Calls

5. Native Execution Model

6. CLI Usage

7. Runtime Functions

The 1, 2, 3, and 4 you saw are all part of the first component; if you scroll down the readme, you'll see it lists all seven components.


Amicus Plato, sed magis amica veritas.


“Plato is my friend, but truth is a better friend.”


The messenger changes, but the truth doesn't. That is the point of the phrase. It's a jab at the GP's flawed logic.


Likewise, I feel like its performance has degraded a bit over the last couple of weeks, but that's just vibes. They surely vary the thinking-token budget based on backend load, especially for subscription users.

When my subscription 4.6 is flagging, I'll switch over to the corporate API version, run the same prompts, and get a noticeably better solution. In the end, it's hard to compare nondeterministic systems.


That's very interesting!

Also, +1. Opus 4.6 is strictly better than 4.5 for me


Audio models are also tiny, which is probably why small labs are doing well in the space. I run a LoRA'd Whisper large-v3 for a client. We can fit four copies of the model in memory at once on a ~$1/hr A10 and still have half the VRAM left over.

Each of the LoRA tunes we did took maybe 2-3 hours on the same A10 instance.
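As a rough sanity check on those numbers (the ~1.55B parameter count for Whisper large-v3 and fp16 weights are my assumptions, not stated in the comment):

```python
# Back-of-envelope VRAM estimate for four fp16 copies of Whisper large-v3
# on a 24 GiB NVIDIA A10. Parameter count and precision are illustrative
# assumptions; actual usage also includes activations and KV caches.
params = 1.55e9          # approximate parameter count for large-v3
bytes_per_param = 2      # fp16
gib = 1024 ** 3

per_model_gib = params * bytes_per_param / gib
four_models_gib = 4 * per_model_gib
a10_vram_gib = 24

print(f"one copy:    {per_model_gib:.1f} GiB")
print(f"four copies: {four_models_gib:.1f} GiB of {a10_vram_gib} GiB")
# Weights for four copies come to roughly 11.5 GiB, which is consistent
# with "four versions in memory and half the VRAM leftover".
```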


Is Whisper still getting nontrivial development? I was under the impression that it had stagnated, but it seems hard to find more than just rumors.


At ~1.7% WER and faster-than-realtime processing, it's more than adequate for my application, which is multi-speaker with WPM rates above 300 for long durations.
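For reference, WER is just word-level edit distance divided by the reference word count. A minimal sketch of the metric (not the evaluation tooling actually used here):

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate: Levenshtein distance between the word sequences,
    normalized by the number of words in the reference transcript."""
    r, h = ref.split(), hyp.split()
    # Dynamic-programming table: d[i][j] = edit distance between the
    # first i reference words and the first j hypothesis words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # all deletions
    for j in range(len(h) + 1):
        d[0][j] = j  # all insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub_cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + sub_cost)  # substitution/match
    return d[len(r)][len(h)] / len(r)
```

So a 1.7% WER means roughly 1.7 word-level errors per 100 reference words; in practice libraries like jiwer compute this for you.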

