Hacker News | esquire_900's comments

This seems exactly not what you want. If you're fully invested in this, you never have the freedom to switch tools, i.e. go to a different team chat solution. The benefits of having these apps in one UI / ecosystem are relatively small: files + team chat makes sense, but todo + email or kanban + recipes doesn't add any value.

Like samdixon mentions with ClickUp, the downside is quite large UX-wise: you'd be constantly switching context within Dobase. Having 10 pinned tabs for all your tools is very convenient; checking a todo while working on an email in Dobase feels messy.


I actually rather dislike having my info spread over a gazillion services, all of them having their own paid accounts or advertising. Also, a single unified search for all communication and shared notes would be very helpful.

Also, I'm not familiar with ClickUp nor Dobase, but I imagine you can have them open in multiple tabs, allowing for your preferred way of working?


This is sort of what their first sentence states? Except your line implies that they are fast in both training and inference, while they imply they are focusing on inference and sacrificing training speed for it.

It's a nice opening as it is, imo.


They don't say anything about dropping training speed.


> a departure from Mamba-2, which optimized for training speed.

?


Yes? Mamba-2 optimized for training speed compared to Mamba-1. Mamba-3 adds optimization for inference. These are pretty much version numbers.


And here I am, feeling slightly ashamed at taking 1 or 2 flights a year. These "commute" statistics are staggering.


Cost-wise it does not seem very effective. .5 token / sec (the optimized one) is 3600 tokens an hour, which costs about 200-300 watts for an active 3090+system. Running 3600 tokens on OpenRouter @ $0.4 for Llama 3.1 (3.3 costs less) is about $0.00144. That money buys you about 2-3 watts (in the Netherlands).

Great achievement for privacy inference nonetheless.


I think we use different units. In my system there are 3600 seconds per hour, and watts measure power.


OP probably means watt-hours.


And 0.5 tokens/s should work out to 1800 tokens at the end of the hour. Not 3600 as stated.
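A quick back-of-the-envelope in Python (the $0.40 per 1M output tokens is the rate quoted upthread; treat it as an assumption):

```python
tokens_per_second = 0.5
tokens_per_hour = tokens_per_second * 3600  # = 1800 tokens, not 3600

price_per_million = 0.40  # assumed OpenRouter-style $ per 1M output tokens
cost_per_hour = tokens_per_hour / 1_000_000 * price_per_million

print(tokens_per_hour)          # 1800.0
print(round(cost_per_hour, 5))  # 0.00072
```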


Something to consider is that input tokens have a cost too. They are typically processed much faster than output tokens. If you have long conversations then input tokens will end up being a significant part of the cost.

It probably won't matter much here though.
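To make that concrete, here's a hedged sketch; the per-token prices are made-up placeholders, not any provider's actual rates:

```python
# Long conversations resend the whole history as input tokens every turn,
# so input cost can dominate even when input is cheaper per token.
input_price = 0.20 / 1_000_000   # assumed $ per input token
output_price = 0.40 / 1_000_000  # assumed $ per output token

def turn_cost(history_tokens, reply_tokens):
    return history_tokens * input_price + reply_tokens * output_price

# A 20k-token history with a 500-token reply: input is ~95% of the cost.
cost = turn_cost(20_000, 500)
print(round(cost, 6))  # 0.0042
```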


OpenRouter is heavily subsidized. This might be cheaper in the long run once these companies shift to taking profits.


But why not cross that bridge when you get there? By then you might have much more optimized local infrastructure. Although I do see that someone suffering through the local slowness now is what drives the development of these local options.


> Cost wise it does not seem very effective.

Why is this so damn important? Isn't it more important to end up with the best result?

I (in Norway) use a homelab with Ollama to generate a report every morning. It's slow, but it runs between 5 and 6 am, when energy prices are at a low, and it doesn't matter whether it takes 5 or 50 minutes.
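The setup is essentially a cron job hitting Ollama's local HTTP API. A minimal Python sketch, where the model name and prompt are placeholders:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_request(model="llama3.1", prompt="Write my morning report."):
    # stream=False asks Ollama for one JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def run_report():
    # Requires a running Ollama daemon; schedule with a crontab line
    # such as: 0 5 * * * python3 report.py >> report.log
    data = json.dumps(build_request()).encode()
    req = urllib.request.Request(OLLAMA_URL, data=data, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(build_request())
```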


> Why is this so damn important? Isn't it more important to end up with the best result?

You’re wondering why someone would prefer to get the same or better result in less time for less money?


Which seems weird; it's very technical. Monolith design, relationship types: I've never met an HR person who wondered about those kinds of things.


author here! i originally intended this to be a technical write-up on everything i learned building an ATS (as someone who's in HR). the original title was "how to build an ATS and why you probably shouldn't". i debated mentioning what i built, but it felt hollow to _ramble_ about schemas, architecture, components with nothing to show for it.

the more i wrote and reflected, the more i thought about why the market never corrects itself despite the tools being expensive and badly designed. i've worked with hundreds of recruiters and most use spreadsheets. that's not a workflow quirk but i think an indictment of something bigger which traces back to everything in the post -- the buyer who never uses the product, the integrations racket, the "AI-native" tools bolted on top of a broken foundation, etc. etc.

so i ended up writing the first half. it's drawn from my frustrating experience buying an ATS for a small business, and from watching the dysfunction of procurement/integration/lack of adoption play out at the enterprise level.

admittedly, HR/recruiting tech is a very niche audience, so the technical section probably lands better with engineers who've been handed a recruiting project than with anyone actually working in HR. so i wanted to offer a resource from that perspective.


Surprising how badly JetBrains implemented AI. Apparently to such an extent that, even after multiple years of LLMs, someone felt confident enough to build a company that can do better.

This looks really neat, interesting technical writeup as well!


Thanks! Let us know if you have any questions / feedback.


Cool social experiment. It's interesting how narrow the scope of all the top-voted PRs is: change this or that detail in the voting (daily cycles, counting downvotes, etc.), or make it more efficient (Rust).

I wonder if this has the potential to build a "community" that will take this into a completely different direction, or if it will neatly stay within the initial boundaries.


Is the dependency on Cloudflare worth the time saved on infrastructure? Getting a big bare-metal server and deploying a Docker container should go a long way.

This implementation sounds fully dependent on a service that Zed has little say over.


FYI: Cloudflare provides an open source version of their Workers runtime[0], so the lock-in isn't as strong as it once was.

[0]: https://github.com/cloudflare/workerd
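For reference, workerd is configured with Cap'n Proto files; a rough sketch of a minimal config (the script path, port, and compatibility date here are assumptions, not taken from Zed's setup):

```capnp
using Workerd = import "/workerd/workerd.capnp";

const config :Workerd.Config = (
  services = [ (name = "main", worker = .mainWorker) ],
  # Serve the worker locally on port 8080 over plain HTTP.
  sockets = [ (name = "http", address = "*:8080", http = (), service = "main") ]
);

const mainWorker :Workerd.Worker = (
  serviceWorkerScript = embed "worker.js",
  compatibilityDate = "2024-01-01"
);
```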


I think if the end game is to run the Workers runtime themselves, they could also run something else from the start.

It's gonna be hard to compete with the scaling Cloudflare offers if they migrate to their own dedicated infra, but it would of course become much cheaper than paying per request.


Smart 11 | React Native Software Engineer | Netherlands (Breda) / Remote | Part or Full time | smart11.ai

Smart11 helps soccer players become more intelligent on the field. We're a small team of 3 engineers and are looking for a React Native developer to continue development on our mobile app.

Our soccer learning methods rely heavily on video (and a touch of vision AI) to help players learn from their own actions on the field. The app is crucial in that process, and used for hundreds of hours a week by clubs and pro players alike. Your work might directly contribute to the improvement of soccer players around the world!

We're a productive, balanced team, and are looking for someone with a builder mentality. Apply at connect [at] smart11.ai, or send me a PM (I'm the CTO). If you're interested but only experienced in related technologies, please do get in touch; we like to invest in the right people!


Really love what you're building. Sent in my application to your personal email, I hope that's okay.


Thanks for the suggestion, I missed that!

