
You still need to build real-time serving infrastructure on top of LLaMA/Vicuna/Alpaca in order to compete with ChatGPT/OpenAI, so not many companies are going to do it, and OpenAI already has mindshare and a first-mover advantage. A rough sketch of what that serving layer even means is below.
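As a minimal sketch only, assuming a Vicuna-style checkpoint loadable through Hugging Face transformers (the model name and endpoint are illustrative, not anyone's actual stack):

    # Toy single-GPU serving endpoint. Real deployments add continuous
    # batching, token streaming, auth, rate limiting, and autoscaling.
    import torch
    from fastapi import FastAPI
    from pydantic import BaseModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "lmsys/vicuna-7b-v1.5"  # illustrative open checkpoint

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
    )

    app = FastAPI()

    class GenerateRequest(BaseModel):
        prompt: str
        max_new_tokens: int = 256

    @app.post("/generate")
    def generate(req: GenerateRequest):
        # Tokenize, run greedy generation, and return the decoded text.
        inputs = tokenizer(req.prompt, return_tensors="pt").to(model.device)
        with torch.no_grad():
            output = model.generate(**inputs, max_new_tokens=req.max_new_tokens)
        return {"completion": tokenizer.decode(output[0], skip_special_tokens=True)}

Run it with something like `uvicorn server:app`. Everything beyond that one endpoint (GPU fleet, queuing, observability) is the hard part being pointed at.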


When you use ChatGPT you are leasing their GPU infrastructure and their proprietary model. That opens the possibility of leasing GPU infrastructure from another company and running an open-source model on it. You don't necessarily need to do the hard parts yourself; you can hire them out to competing companies.
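For illustration, a sketch of that route, assuming a third-party GPU host that exposes an OpenAI-compatible endpoint for an open model (the base URL, API key, and model name below are hypothetical):

    # Switching providers becomes a one-line base-URL change; the client
    # code is the same as it would be against OpenAI itself.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.example-gpu-host.com/v1",  # hypothetical host
        api_key="YOUR_PROVIDER_KEY",
    )

    resp = client.chat.completions.create(
        model="vicuna-13b",  # whichever open model the host serves
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)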


Sure, but it's extra work slowing you down while your competitor is surfing the wave at full speed. Moreover, you'd be relying on an older LLM while OpenAI keeps developing newer versions of theirs, preserving their competitive advantage. Even Google, which has the infra, has a ridiculously bad LLM to show for it.



