You've had enough arguments with people in both this thread and the previous one that I'm pretty sure you understand what the issue is with your use of the word "free".
What you are offering is NOT a free tool -- it is a demo of a tool for which you are charging $12/month. No reasonable person would interpret a grand total of 3 exports as enough to justify calling this a "free" tool.
This is to say nothing of your violation of the AGPL in your use of MuPDF, which has been pointed out here and elsewhere.
But of course, you're free to Show HN a paid product; just kindly don't insult our collective intelligence in the process.
Agreed. I have never seen a single Hacker News user (let alone an assortment of them) saying "I switched my 2FA to this after seeing how great it was!" Not really sure how one 'switches their 2FA' to an LLM...
This thread is about the 2FA app, not the LLM app. I don't care about the LLM app. What's with this witch hunt? This app literally solved a (self-inflicted) problem I'd had for years: I was keeping an old phone around just for MFA. I even thought about creating an iOS app compatible with Aegis files just to solve it (I actually _started_ working on that, but didn't get far). Now I don't have to, thanks to a comment here, and that's why I posted. Geez. I guess I'll stick to negative comments in the future; they seem to be more trustworthy.
I mean I get it, astroturfing is a real problem and an annoying one for communities. But I also have no idea how to prove to you that I am neither a bot nor shilling here.
I really wish those offering speech-to-text models provided transcription benchmarks specific to particular fields of endeavor. I imagine performance would vary wildly when using jargon peculiar to software development, medicine, physics, and law, as compared to everyday speech. Considering that "enterprise" use is often specialized or sub-specialized, it seems like they're leaving money on Dragon's table by not catering to any of those needs.
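Even a crude version of such a benchmark is easy to run yourself. Here's a minimal sketch, assuming the jiwer package; the domain transcript pairs are invented placeholders you'd swap for real reference transcripts and model outputs:

```python
# Sketch: compare word error rate (WER) across fields of endeavor.
# The (reference, hypothesis) pairs below are invented placeholders;
# in practice the hypotheses come from whatever STT model you're testing.
from jiwer import wer  # pip install jiwer

samples = {
    "software": [("the mutex guards the heap allocator",
                  "the mute ex guards the heap alligator")],
    "medicine": [("administer five milligrams of haloperidol",
                  "administer five milligrams of halo parade all")],
    "everyday": [("see you at the coffee shop at noon",
                  "see you at the coffee shop at noon")],
}

for domain, pairs in samples.items():
    refs = [r for r, _ in pairs]
    hyps = [h for _, h in pairs]
    print(f"{domain}: WER = {wer(refs, hyps):.2f}")
```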
It's a cohort study, not a randomized trial, so you can only control for known confounders. The second paragraph of the discussion addresses the healthy-vaccinee effect you're referring to.
> At the heart of the problem is the tendency for AI language models to confabulate, which means they may confidently generate a false output that is stated as being factual.
"Confabulate" is precisely the correct term; I don't know how we ended up settling on "hallucinate".
The bigger problem is that, whichever term you choose (confabulate or hallucinate), that's what they're always doing. When they produce a factually correct answer, it's just as much a random fabrication from training data as when they're factually incorrect. Either term falsely implies that they "know" the answer when they get it right, but "confabulate" is worse because there aren't "gaps in their memory"; they're just always making things up.
About 2 years ago I was using Whisper locally to translate some videos, and "hallucinations" is definitely the right word for some of its output! Just as you might expect from the stereotype: it would stay on-task for a while, but then start ranting about random things, "hearing things", etc.
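For anyone wanting to reproduce that setup, here's a minimal sketch, assuming the openai-whisper package and ffmpeg are installed; the file name is a placeholder, and turning off condition_on_previous_text is a commonly suggested mitigation for exactly that runaway-ranting failure mode:

```python
# Minimal local Whisper translation, roughly the setup described above.
# Assumes: pip install openai-whisper, plus ffmpeg on PATH.
# "video.mp4" is a placeholder for your own file.
import whisper

model = whisper.load_model("small")    # pick a size your hardware can handle
result = model.transcribe(
    "video.mp4",
    task="translate",                  # translate to English rather than transcribe
    condition_on_previous_text=False,  # less context carryover -> fewer runaway "rants"
)
for seg in result["segments"]:
    print(f"[{seg['start']:7.2f} -> {seg['end']:7.2f}] {seg['text']}")
```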
He apparently pretended not to have written it, despite its DNS pointing to his servers and both the Certificate Transparency logs and the Internet Archive attributing the page to his domain. Compare the top comment thread in the first link above to his reply there:
Which part of the second link? Some of it is very accurately sourced: he 100% operated a loli bot which targeted subreddits banned by Reddit for illegal content. There's no getting around that. Near the end they also point out that Drew changes his SourceHut ToS to align with banning projects he disagrees with, which makes GitHub look like paradise.
> the incident is that he wrote a document detailing repeated bad behaviour from a well known community figure? And this is a bad thing?
He collected all of Stallman's statements about Epstein and related subjects (which is perfectly fine) and then wrote his own summaries, which completely misrepresent what was actually said. The result was that a lot of people just skimmed the summaries and concluded that Stallman molests children, or says that it's OK to do so, etc.
In fact, I have taken to linking the Stallman report with the caveat "don't read the summaries, read only the things that Stallman actually said". This only works if I believe the person is acting in good faith, of course. I would suggest the same to you.
It's kind of horrible to see the 4chan bigots use the same strategy to try to discredit Drew DeVault, implying ownership through fake accounts they created themselves and through smear campaigns. Pretty much all of the allegations on that page are circumstantial, especially the bot-ownership parts; sircmpwn even took the bot down, citing that those bigots were using it to scrape child porn.
And then the dmpwn dude posting things on image boards with the tag dmpwn, and forgetting to remove it from the screenshots? lol, really?
Having experienced the same kind of doxxing attempts by 4chan bigots, /pol/ and kiwifarms, I think I am qualified to comment on how they operate.
Maybe someone needs to summon the Antichrist a second time to thin out the herd, huh?
Thanks for mentioning it! Makes me glad to live a life out of the spotlight and to be generally ignorant of stuff like this going on. Would not want to be targeted like that :/
I hate that this is now a thing you can ask unsarcastically.
Just use the tool you like best, man; screw what other people think. Yes, there are people who will go "you're bad because you use a tool made by a guy who said something wrong about Stallman" (or whatever it was he did exactly). These people are not worth your attention.
My bad, I shouldn't have said tainted. Trustworthy is what I had in mind.
I moved my private repos to sr.ht ages ago because it was the open-source, free-software, ethical, built-to-last approach, and a step away from the mega-corporations and everything going on with them.
Certain aspects of human nature, as they apply to the corporate world, can be acknowledged and understood, even if they're not excuses when they lead to the downfall of a prominent organization. When you give someone a big title, a dump truck full of cash, and a mandate to innovate, human nature dictates that most people will internalize the idea that "because I was given all this, I must be competent", even if they very obviously are not. Typically the outcome is a "bold plan forward" which is notable for lacking any actual clear solution to the company's main problems. In one example I know of, the CEO decided to pivot from an unrelated field towards launching a cryptocurrency, and cooked up a cartoonishly dangerous marketing scheme to support the idea. One person ended up dying as a result, and the company then purged every mention of crypto from its website. (And yes, the company collapsed soon afterwards.)
While it's easy to blame the CEO with their oversized salary, the blame for such disasters doesn't just lie with them. After all, arguably the most important roles of the board are to hire a good CEO, ensure the CEO is actually performing as they should, and fire them if they're not. When politics, cronyism, or again, simple incompetence, lead the board to also fail at its job, you end up with the long, slow decline into obscurity we've seen so often in the tech world.
I also don’t think people should equate their history with their current state. They lied to their users and told them they’d never sell their data, and then they did. That is much worse than never having made the promise. I don’t trust them.
But, they have far too much support and are far too embedded to disappear anytime soon.
First, your business model isn't really clear, as what you've described so far sounds more like a research project than a go-to-market premise. Computational pathology is a crowded market, and the main players all have two things in common: access to huge numbers of labeled whole-slide images, and workflows designed to handle such images. Without the former, your project sounds like a non-starter, and given the latter, the idea you've pitched doesn't seem like an advantage. Notably, some of the existing models even have open weights (e.g. Prov-GigaPath, CTransPath).
Second, you've talked about using this approach to make diagnoses, but it's not clear exactly how this would be pitched as a market solution. The range of possible diagnoses is almost unlimited, so a useful model would need training data for everything (not possible). My understanding is that foundation models solve this problem by focusing on one or a few diagnoses in a restricted scope, e.g. prostate cancer in prostate core biopsies. The other approach is to screen for normal in clearly-defined settings, e.g. Pap smears, so that anything that isn't "normal" is flagged for manual review. Either approach, as you can see, demands a very different training and market positioning strategy.
Finally, do you have pathologists advising you, and have you done any sort of market analysis? Unless you're already a pathologist (and probably even if you are), I suspect that having both would be of immense value in deciding on a go-forward plan.
Hi, thanks for the comment! Just wanted to respond to some of the points here:
>> First, your business model isn't really clear, as what you've described so far sounds more like a research project than a go-to-market premise.
This is not really a core component of our business; it was more just something cool that I built and wanted to share!
>> Computational pathology is a crowded market, and the main players all have two things in common: access to huge numbers of labeled whole-slide images, and workflows designed to handle such images. Without the former, your project sounds like a non-starter, and given the latter, the idea you've pitched doesn't seem like an advantage. Notably, some of the existing models even have open weights (e.g. Prov-GigaPath, CTransPath).
We have partnerships with a few labs to get access to a large amount of WSIs, both H&E and IHC, but our core business really isn't building workflow tools for pathologists at the moment.
>> Second, you've talked about using this approach to make diagnoses, but it's not clear exactly how this would be pitched as a market solution. The range of possible diagnoses is almost unlimited, so a useful model would need training data for everything (not possible). My understanding is that foundation models solve this problem by focusing on one or a few diagnoses in a restricted scope, e.g. prostate cancer in prostate core biopsies.
I agree that this isn't really a market solution in its current state (it isn't even close to accurate enough), but the beauty of this approach is its general-purpose nature: it can work not only across tissue types, but also across different pathology tasks, like IHC scoring and cancer subtyping. The value of foundation models lies in the fact that tasks can generalize. For example, part of what made this so interesting to me was that general-purpose foundation models like GPT 5 are able to perform even this super niche task! Obviously there are path-specific foundation models with their own ViT backbones too, but it is pretty incredible that GPT 5 and Claude 4.5 can perform at this level already.
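For the curious, here is roughly what "asking a general-purpose model to score a tile" looks like in practice. A minimal sketch, assuming the OpenAI Python SDK; the model name, file name, and prompt are all illustrative placeholders, and nothing here is clinically validated:

```python
# Sketch: sending an IHC-stained tissue tile to a general-purpose vision
# model for a rough score. Model name, file name, and prompt are
# illustrative placeholders; this is not a validated diagnostic pipeline.
import base64

from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("ihc_tile.png", "rb") as f:
    tile_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-5",  # placeholder; any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "This is an IHC-stained tissue tile. Estimate staining "
                     "intensity (0 to 3+) and percent positive cells, and "
                     "state your uncertainty."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{tile_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```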
Yes, to the best of my knowledge most FDA-approved solutions are point solutions, but I am not yet convinced that this is the best way to deploy solutions in the long term. For example, there will always be rare diseases for which there isn't enough of a market to justify a specialized solution, and in those cases general-purpose models that can generalize to some degree may be crucial.