Downstream of this I used to cycle my accounts pretty regularly but have stopped since generative AI. Don't want people thinking I'm an LLM spam bot. My stupid comments are entirely my own.
I cycle accounts on here too (probably time to end this one, now that you mention it), but I don't plan on stopping. I refuse to build a long-term identity on a platform that refuses to let me delete old comments if I want to (HN's policy). Too much liability for doxxing, etc.
For my entire life I've never seen the feds do anything other than selective enforcement. See the latest disclosures re: Zorro Ranch and Little Saint James as recent examples.
LLM-generated code, probably. What human uses em-dashes and Unicode arrows for boilerplate file header comments? LLMs, on the other hand, very often do.
Likewise, I feel like it's degraded in performance a bit over the last couple of weeks, but that's just vibes. They surely vary thinking tokens based on backend load, especially for subscription users.
When my subscription 4.6 is flagging, I'll switch over to the corporate API version, run the same prompts, and get a noticeably better solution. In the end, though, it's hard to compare nondeterministic systems.
Audio models are also tiny, which is probably why small labs are doing well in the space. I run a LoRA'd Whisper v3 Large for a client. We can fit 4 versions of the model in memory at once on a ~$1/hr A10 and have half the VRAM leftover.
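The "4 copies with half the VRAM left over" claim checks out with back-of-envelope math. A rough sketch, assuming Whisper large-v3 at ~1.55B parameters in fp16 and a 24 GB A10 (figures not stated in the post):

```python
# Rough VRAM arithmetic for the setup described above.
# Assumptions: ~1.55B params, fp16 weights (2 bytes each), 24 GB A10.
PARAMS = 1.55e9
BYTES_PER_PARAM = 2          # fp16
A10_VRAM_GB = 24

model_gb = PARAMS * BYTES_PER_PARAM / 1e9   # ~3.1 GB per copy
four_copies_gb = 4 * model_gb               # ~12.4 GB for four copies
leftover_gb = A10_VRAM_GB - four_copies_gb  # ~11.6 GB free, roughly half

print(f"{model_gb:.1f} GB/copy, {four_copies_gb:.1f} GB for four, "
      f"{leftover_gb:.1f} GB leftover")
```

This ignores activation memory and KV buffers during inference, so real headroom is a bit lower, but the order of magnitude matches.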
Each of the LoRA tunes we did took maybe 2-3 hours on the same A10 instance.
At ~1.7% WER and faster-than-realtime processing, it's more than adequate for my application, which is multi-speaker with sustained WPM rates >300.
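For context on the WER figure: word error rate is the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal sketch of the metric, not the actual evaluation pipeline from the post:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance over
    the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution over four reference words -> 0.25
print(wer("the quick brown fox", "the quick brown box"))
```

So ~1.7% means roughly one word-level error per 60 reference words, which is strong for multi-speaker, fast-speech audio.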