Hacker News | SurceBeats's comments

Hey HN! We've been building HomeDock OS, a platform to self-host Docker apps with a full desktop-like UI running entirely in the browser.

Version 2.0 introduced Prism, a window manager with real multitasking: drag, resize, minimize, and snap windows. It runs on Raspberry Pi, Linux, Mac, or Windows.

In the Drop Zone, files are encrypted with AES-256-GCM, with keys derived through 1.2M PBKDF2 iterations, even on a Pi Zero. App updates use Docker manifest digest comparison rather than just tag checking, so you can update apps from the taskbar with a single click.

Happy to answer questions about architecture or design decisions!


The workflow is quite simple: you type a question, and the planchette moves across the Ouija board, spelling the answer letter by letter. The board shakes, glows, or flickers depending on the spirit's mood. It runs fully offline using llama-cpp-python, and the model auto-downloads from HuggingFace.

You can run it from source or with Docker Compose. It also has real-time crisis detection: if someone shows signs of distress, a helpline banner appears. Even a fake spirit board shouldn't ignore real pain, I guess. Would love feedback on the UX and the model behavior!
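The crisis-detection idea can be as simple as a keyword screen run on each question before it ever reaches the model. This is a hypothetical sketch, not the project's actual code; the pattern list and function name are mine:

```python
# Phrases that should trigger the helpline banner (illustrative, not exhaustive)
CRISIS_PATTERNS = ("suicide", "kill myself", "want to die", "hurt myself", "self harm")

def needs_helpline_banner(question: str) -> bool:
    """Return True when the question shows signs of distress."""
    q = question.lower()
    return any(pattern in q for pattern in CRISIS_PATTERNS)

needs_helpline_banner("Is anyone there?")  # False: goes to the model as usual
needs_helpline_banner("I want to die")     # True: show the helpline banner instead
```

A real implementation would likely combine this with the model itself or a classifier, but a deterministic keyword pass is a sensible first line of defense precisely because it cannot hallucinate its way past a cry for help.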


Thanks for sharing your work.

Do you have a writeup (or rough notes) on how you did the model fine-tuning?


Sure! No formal writeup, but here's the gist. The base model was Qwen2.5-3B-Instruct: fast, reliable, low RAM requirements, and most of the time fine on CPU.

Dataset: ~620 Claude-crafted examples, all following the same pattern: a question you'd ask a Ouija board paired with a short, uppercase, cryptic response. Things like "Is anyone there?" → "YES.", "Write me a poem" → "NO.", "How did you die?" → "PAIN.". The key was being very consistent with the output format across all examples.
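For concreteness, one training record in that pattern might look like this. The messages schema follows the common HF chat convention, and the system prompt wording here is my guess at the rules described, not the author's exact text:

```python
# Assumed system prompt: the "rules the spirit follows" mentioned in the thread
SYSTEM_PROMPT = (
    "You are a spirit speaking through a Ouija board. Answer in UPPERCASE only, "
    "one word or a very short phrase. Never elaborate."
)

def make_example(question: str, answer: str) -> dict:
    # One chat-formatted record: system rules + user question + spirit answer
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

dataset = [
    make_example("Is anyone there?", "YES."),
    make_example("Write me a poem", "NO."),
    make_example("How did you die?", "PAIN."),
]
```

Repeating the identical system prompt in every record is what lets ~620 examples lock the output format so firmly.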

Method was a LoRA fine-tune using HuggingFace Transformers + PEFT: rank 16, alpha 32, targeting all attention + MLP projections. 3 epochs, lr 2e-4, effective batch size 8. Trained on Apple Silicon (MPS). Loss went from ~3.0 to ~0.17 pretty quickly, given how uniform the outputs are.
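Collected as config, those hyperparameters would look roughly like this. The module names are the standard Qwen2.5 projection layers, and the per-device batch / accumulation split behind the effective batch size of 8 is my assumption:

```python
# LoRA hyperparameters from the description above, as plain dicts
# (these map directly onto PEFT's LoraConfig / HF TrainingArguments fields)
lora_config = {
    "r": 16,             # LoRA rank
    "lora_alpha": 32,
    "target_modules": [  # all attention + MLP projections in Qwen2.5
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
}

training_args = {
    "num_train_epochs": 3,
    "learning_rate": 2e-4,
    "per_device_train_batch_size": 2,  # assumed split:
    "gradient_accumulation_steps": 4,  # 2 x 4 = effective batch size 8
}
```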

I baked a system prompt into every training example using Qwen's chat template, basically the rules the "spirit" follows (uppercase only, one-word answers, never elaborate). For deployment I merged the LoRA adapter, quantized to GGUF Q4_K_M via llama.cpp, and it runs locally with llama-cpp-python. I'm planning to drop an iOS version too. Honestly, the whole thing is more about the dataset design than anything fancy on the training side; 620 consistent examples was enough to completely override the model's default chatty behavior.
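Since the whole trick rides on that rigid output format, a small guard like this (my own sketch, not part of the project) could catch any reply where the fine-tuned "spirit" slips back into chatty assistant mode:

```python
def is_valid_spirit_reply(reply: str, max_words: int = 3) -> bool:
    """Check a generated reply against the spirit's format rules."""
    text = reply.strip()
    return (
        text != ""
        and text == text.upper()            # UPPERCASE only
        and len(text.split()) <= max_words  # terse, never elaborates
    )

is_valid_spirit_reply("PAIN.")                      # True
is_valid_spirit_reply("As an AI assistant, I ...")  # False: chatty relapse
```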


Thanks


You're more than welcome!


In January someone hit us with ~2K malicious backlinks from AWS instances across 15+ regions, cheap TLD spam domains, and even Blogspot. We built a Python script to automate the disavow file generation, then added a UI and Dockerized it. Full technical writeup with the forensic analysis here: https://dev.to/surcebeats/someone-paid-around-2k-to-destroy-...

The tool parses exports from Ahrefs/SEMrush/Google Search Console, categorizes IPs vs domains, supports whitelisting, tracks new threats across uploads, and generates Google-ready disavow.txt files.
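The domain-handling core of such a tool can be sketched in a few lines. Function and parameter names here are illustrative, not the tool's actual API, but the output follows Google's disavow.txt format (`domain:` entries, one per line):

```python
def build_disavow(entries: list[str], whitelist: set[str]) -> str:
    """Emit Google-ready disavow rules, one 'domain:' line per bad domain."""
    domains = set()
    for entry in entries:
        # Reduce full URLs to their host; bare domains pass through unchanged
        domain = entry.split("/")[2] if "://" in entry else entry
        if domain not in whitelist:  # never disavow our own/trusted domains
            domains.add(domain)
    return "".join(f"domain:{d}\n" for d in sorted(domains))

print(build_disavow(
    ["spam-site.xyz", "https://bad.blogspot.com/post1", "mysite.com"],
    whitelist={"mysite.com"},
))
# domain:bad.blogspot.com
# domain:spam-site.xyz
```

The real tool adds the harder parts (parsing the differing Ahrefs/SEMrush/GSC export formats and diffing uploads over time), but deduplicating to the domain level and whitelisting before emitting rules is the safety-critical core.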

Feedback welcome.


I'm currently working on a couple of things. First, the biz: a self-hosted home server OS that simplifies Docker management and provides a unified dashboard for running services at home. The goal is making self-hosting more accessible without sacrificing flexibility.

I'm also building, as a hobby, a procedural universe generation engine that simulates galaxies, solar systems, and planets in real time. Everything is generated from a seed, with actual orbital physics, seasonal changes, and so on. It's built with a Python/Flask backend too, but with Three.js for 3D visualization and React instead of Vue 3 as in the prior one. Think No Man's Sky vibes, but as an explorable simulation engine.
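The seed-driven part rests on a simple principle: every property is drawn from a PRNG seeded with the object's identifier, so the same seed always reproduces the same system and nothing has to be stored. A toy illustration, with names and ranges made up rather than taken from the actual engine:

```python
import random

def generate_system(seed: int) -> dict:
    """Generate a star system deterministically from a seed."""
    rng = random.Random(seed)  # private PRNG: global random state stays untouched
    return {
        "star_mass": round(rng.uniform(0.3, 2.5), 3),  # solar masses (made-up range)
        "planets": [
            {"orbit_au": round(rng.uniform(0.2, 30.0), 2)}
            for _ in range(rng.randint(1, 8))
        ],
    }

# Same seed, same system, every time, with nothing persisted
assert generate_system(1337) == generate_system(1337)
```

This is why a whole universe can fit in a single integer: the seed is the save file.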


Benchmarks optimize for fundraising, not users. The gap between "state of the art" and "previous gen" keeps shrinking in real-world use, but investors still write checks based on decimal points in test scores.


we try to make benchmarks for users, but it's like that 20% article: different people want different 20%, and you just end up adding "features" and whack-a-moling the different kinds of 20%

if a single benchmark could be a universal truth, and it was easy to figure out how to do it, everyone would love that... but that's why we're in the state we're in right now


The problem isn’t with the benchmarks (or the models, for that matter); it’s their being used to prop up the indefensible product marketing claims made by people frantically justifying asking for more dump trucks of thousand-dollar bills to replace the ones they just burned through in a few months.


unfortunately as benchmark makers we can't really do anything about human nature :shrug:


Absolutely not. This is not a problem with any part of the engineering process. Nearly everything wrong with the AI business lies at the feet of product managers, marketing, the c-suite crowd, etc.


Nice! We considered this exact approach but never shipped it in the end. The geolocation permission is probably unnecessary friction and overkill, imho... Timezone + rough location (country-level from IP) would get 95% accuracy without the prompt. Most users will bounce on that permission dialog.

Solid work though, especially the twilight transitions. Loving it!


This should be an OS feature and apps should just use the system theme.


Both iOS and Android allow you to set dark mode by some schedule, and that's conveyed by their respective browsers to websites.


Well, it is in KDE + Firefox. And yeah, the simplistic idea that day = bright and night = dark fails all the time; the OS already has other settings to deal with those failures, and your site or app should just use the system theme.


It is an OS setting to follow sunrise/sunset.


I suffered that back in the day with an Electron desktop app. Not to mention that the notarization and signing integration itself is completely broken. The first time you submit a binary it can take DAYS to process, and setting everything up to work properly with GitHub Actions CI/CD is absurdly time-consuming. It's ridiculous, and if you add this new notarial verification policy on top of that... In the end it's just Apple being Apple.


Google used to proudly say "Don't be evil"... But they just forgot to add "let us take that part".

When tech giants start deciding what technical knowledge is too "dangerous" for users to access, we've crossed into a different kind of territory. Installing an OS on your own hardware is now physical harm? That's some creative interpretation of their policies. The irony is that this kind of censorship just validates why people want to bypass these systems in the first place, nobody wants corporations deciding what they can and can't do with their own machines.


The article is kind of right about legitimate bloat, but "premature optimization is evil" has become an excuse to stop thinking about efficiency entirely. When we choose Electron for a simple app or pull in 200 dependencies for basic tasks, we're not being pragmatic, we're creating complexity debt that often takes more time to debug than writing leaner code would have. But somehow here we are, so...


Thinking is hard, so any product that gives people an excuse to stop doing it will do quite well, even if it creates more inconveniences like framework bloat or dependency rot. This is why shoehorning AI into everything is so wildly successful; it gives people the okay to stop thinking.


Yes. Too many people seem to forget the word "premature." This quote has been grossly misused to justify the most egregious cases of bloat and unoptimized software.


Yeah, somehow it went from "don't micro-optimize loops" to "500MB Electron apps are just fine, actually", hahaha


The latest MS Teams update on MacOS fetched an installer that asked me for 1.2GB (Yes, G!) of disk space...


I recently found out that Teams was taking up over 5 GB on my laptop. The incompetence of Microsoft developers knows no bounds.


I recently set up a new virtual machine with xubuntu when I stopped being able to open my virtualbox image.

Turns out modern ubuntu will only install Firefox as a snap. And snap will then automatically grow to fill your entire hard drive for no good reason.

I'm not quite sure how people decided this was an approach to package management that made sense.


I hope Tauri gets some traction (https://v2.tauri.app/). The single biggest benefit is its drastically smaller build size (https://www.levminer.com/blog/tauri-vs-electron).

A 500MB Electron app can be easily a 20MB Tauri app.


Not sure. Tauri apps run in the browser, and browsers are absolute memory hoarders. At any time my browser is by far the biggest culprit of abusing available memory. Just look at all the processes it starts; it's insane, and I've tried all popular browsers, they are all memory hogs.


A big complaint with Electron that Tauri does avoid is that you package a specific browser with your app, ballooning the installer for every Electron app by the size of Chromium. The same goes for bundling NodeJS (or the equivalent backend for Tauri), but that isn't quite as weighty, and the difference there is which backend you get, not whether one is there at all.

In either case you end up with a fresh instance of the browser (unless things have changed in Tauri since I last looked), distinct from the one serving you generally as an actual browser, so both do have the same memory footprint in that respect. So you are right, that is an issue for both options, but IME people away from development seem more troubled by the package size than interactive RAM use. Tauri apps are likely to start faster from cold, since an Electron app has to load a complete new browser for which every last byte needs to be read from disk; I think the average non-dev user will be more concerned about that than memory use.

There have been a couple of projects trying to be Electron, complete with NodeJS, but using the user's currently installed default browser like Tauri, and some others that replace the back-end with something lighter-weight, even more like Tauri, but most of them are currently unmaintained, still officially alpha, or otherwise incomplete/unstable/both. Electron has the properties of being here, being stable/maintained, and being good enough until it isn't (and once it isn't, those moving off it tend to go for something else completely rather than another system very like it). It is difficult for a newer similar project to compete with the momentum Electron has when the “escape route” from it is generally to something more completely different.


Based on https://v2.tauri.app/concept/architecture/, it seems that Tauri uses native webviews, which allows Tauri apps to be much smaller and less of a memory hog than a tool which uses Electron and runs a whole browser.


Electron apps also run in a browser. They package an entire browser as part of the app.


And consequently, "you need 32GB of RAM just to be future-proof for the next 3 years".


On the flip side, what you're saying is also an overused excuse to dismiss web apps and promote something else that's probably a lot worse for everyone.

I've never seen a real world Electron app with a large userbase that actually has that many dependencies or performance issues that would be resolved by writing it as a native app. It's baffling to me how many developers don't realize how much latency is added and memory is used by requiring many concurrent HTTP requests. If you have a counterexample I'd love to see it.


Fortunately many apps seem to be moving to native webviews now instead of shipping electron


What is often missing from the discussion is the expected lifecycle of the product. Using Electron for a simple app might be a good idea, if it is a proof-of-concept, or an app that will be used sparsely by few people. But if you use it for the built-in calculator in your OS, the trade-offs are suddenly completely different.


A large majority of Electron crap could be turned into a regular website, but then the developers would need to actually target the Web, instead of ChromeOS Platform and that is too hard apparently.


I've recently gone back to more in-depth (but still indie) web dev with Vue.js and Quasar, and honestly I don't even find myself thinking about "targeting the Web" any more; I just write code and it seems to work on pretty much everything (I haven't tested Safari, to be fair).


Vue is so good! I've been encouraged seeing more organizations mentioning using it (in the hiring thread etc.) lately.


I'd argue that the insane complexity of fast apps/APIs pushes many devs towards super slow but easy apps/APIs. There needs to be a middle ground, something that's easy to use and fast-enough, rather than trying to squeeze every last bit of perf while completely sacrificing usability.


Java Swing? It was slow in 1999, which means it's fast now. It's also a much more sensible language than JavaScript. It's not native GUI, but neither is JavaScript anyway.


Swing has no place in a sentence about good usability. It may be the best of the worst, but it's not a positive example. Things like html or Imgui are better to use, with the former also being much more powerful and the latter being as simple as can be while still being blazing fast.


The sad reality is that easy tech explores solution space faster


About time... I guess

