Hacker News
Ask HN: Code / Model Sharing Platforms for Robotics, AI and Deep Learning
3 points by jmiseikis on April 21, 2022 | 3 comments
GitHub has become the de facto open-source community and version-control platform for software code.

Which platforms have become, or are likely to become, a GitHub equivalent tailored for deep learning, AI, and robotics?

To start the conversation: ROS/ROS2, HuggingFace Transformers, Docker

We can also cover frameworks with easy-to-share models, like PyTorch, TensorFlow, and JAX.



Currently, isn’t this mostly also GitHub, with some functionality to load artifacts like model checkpoints from Git LFS, figshare, a public S3 bucket, or similar? I’m not super familiar with Hugging Face, but it seems similar to torchvision + a model zoo.
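For context, that artifact-hosting pattern usually pairs a download with a local cache and a checksum check, roughly what torch.hub's checkpoint loading does. Here's a stdlib-only sketch (the function name and error handling are illustrative, not any library's actual API):

```python
import hashlib
import os
import urllib.request

def fetch_checkpoint(url: str, cache_dir: str, expected_sha256: str) -> str:
    """Download a model checkpoint once, cache it locally, and verify its
    SHA-256 digest against the published one before returning the path."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, os.path.basename(url))
    if not os.path.exists(path):
        urllib.request.urlretrieve(url, path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"checksum mismatch for {path}")
    return path
```

The digest check matters when weights come from an arbitrary S3 bucket rather than a versioned repo: it's the only thing tying the bytes you got to the bytes the paper described.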

Are there specific limitations of that status quo you have in mind, with regard to tailoring for robotics, AI, and deep learning?

I really like the model card view on huggingface https://huggingface.co/distilbert-base-uncased-finetuned-sst...
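Those model cards are just a README.md with YAML front matter (license, language, tags, and so on) that the Hub renders. A minimal stdlib sketch of splitting one apart, handling only flat `key: value` pairs (a real parser would use PyYAML; the sample card is made up):

```python
def parse_model_card(text: str) -> tuple[dict, str]:
    """Split a model-card README into flat YAML front-matter metadata
    and the Markdown body."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, text  # no front matter: the whole file is the body
    meta, i = {}, 1
    while i < len(lines) and lines[i].strip() != "---":
        if ":" in lines[i]:
            key, _, value = lines[i].partition(":")
            meta[key.strip()] = value.strip()
        i += 1
    body = "\n".join(lines[i + 1:])
    return meta, body

card = """---
license: apache-2.0
language: en
---
# DistilBERT fine-tuned on SST-2
"""
meta, body = parse_model_card(card)
```

Keeping the metadata machine-readable is what lets the Hub filter and render thousands of models uniformly while the body stays free-form prose.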


Thanks for your answer!

The tailoring to these areas mainly comes from my personal focus, but at the same time, in the last two years code and model sharing from research (both universities and companies/corporates) has leaped ahead dramatically in terms of providing code, models, and sometimes datasets.

GitHub is so far the main source, but models and data are usually linked from many different resources due to large file sizes. And running/testing the approaches and models is often still quite cumbersome due to different versions and so on. Shared Google Colab notebooks are probably currently the least "painful" way to test out the methods.

That's the motivation to ask around and see if there are maybe some other formats/platforms that would make such sharing even easier, reduce the "engineering" overhead in testing, and help researchers and developers focus more on the research part, advancing the field even further :)


Thanks for elaborating!

> And running/testing the approaches and models is often still quite cumbersome due to different versions and so on.

I definitely agree with this. I don’t know that I have any particular insight on infrastructure, but I think one of the things that makes huggingface and SMP [0] successful is a well-defined common model and data API.

It can be hard to clone a random research repo and train models on custom datasets because the model API is slightly different, or it expects the data in a different format. I think this is an issue in graph neural networks right now, where different libraries use different data formats and some groups roll their own.

0: https://segmentation-models-pytorch.readthedocs.io/en/latest...
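To make the "common model API" point concrete, here's a minimal sketch of the kind of shared interface that lets models be swapped into one training/eval loop. All names are illustrative, not the actual API of huggingface or SMP:

```python
from typing import Protocol, Sequence

class SharedModel(Protocol):
    """Illustrative common interface: any model exposing predict() with
    this signature plugs into the same evaluation code unchanged."""
    def predict(self, batch: Sequence[Sequence[float]]) -> list[int]: ...

class ThresholdModel:
    """Toy stand-in: labels each sample by the sign of its feature mean."""
    def predict(self, batch):
        return [1 if sum(x) / len(x) > 0 else 0 for x in batch]

def evaluate(model: SharedModel, batch, labels) -> float:
    """Accuracy over a batch; works for any model honoring the interface."""
    preds = model.predict(batch)
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)
```

The structural typing means a research repo doesn't even have to import a base class; matching the signature is enough, which is roughly why a fixed forward/data contract lowers the cost of trying someone else's model.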



