Hacker News | qualudeheart's comments

Nevada regulators have escalated their legal battle against Kalshi, filing a lawsuit in Carson City District Court seeking to shut down the company's sports contracts in the state.


TNTNV is paying for the TTD-contracted plowing service at East Shore Trail parking lot #1, one of the busiest recreational trailheads in North Lake Tahoe. TNTNV is covering the full cost of the service with funds previously allocated for winter plowing, and will continue to do so as storms require, to ensure safe access to the area's trail network so that everyone can enjoy responsible winter recreation.


Most of my posts here have been satirical or exaggerated. One that wasn't is my aphorism that DeepMind can train a new model faster than you can go back to grad school. You'd have gone back to school for a master's degree in computer science, and by the time you slithered out of that box, AlphaCode 2 would have slithered out of its box and into your cubicle. No more space for you!


This reads as AI-generated.


That’s just James Lindsay’s pseudointellectual tripe, with further roots in some early twentieth-century political theorists. You can’t trust Lindsay because he’s a political operative first and a scholar second. I don’t know of any scholars of gnosticism who think gnosticism is connected to LGBT identity in any serious way.


Managers will do every job.


AI will do management.


Smart Money flips houses. The Smartest Money flips GPUs.


The smartest money flips the output of the GPUs, more precisely. Flipping GPUs is just a side project.


Smart Money flips houses. The smartest money flips bits.


Could you share the code with us?


Sure! https://pastebin.com/xm7D1c30

I didn't bother cleaning it up, so it's just a code dump, but it's fairly straightforward. Not included are a Python script that parses and cleans the raw documents into JSON files (used in `summarize` to output results), the code that reads those files and fetches the embeddings from OpenAI for use in `newEmbeddingJSON`, and a bunch of random parallelization shell scripts that I didn't save.

To use it, I call `newDBFromJSON` on a directory of JSON embedding vectors and serialize the binary representation. This takes a few minutes, mostly because parsing JSON is slow, but I only needed to do it once. When I want the top 10 documents most similar to document X, I call `search` with that document's embedding vector. Alternatively, for semantic search with natural language, I call the OpenAI API to get the embedding vector for the query and pass that to `search`. It's pretty fast, since Go concurrency maxes out my CPU, and the search results are very accurate thanks to OpenAI's embeddings.

It's nowhere close to production-ready (it's littered with panics), but it was good enough for me.

Hope this helps!

Edit: oh and don't use float64 (OpenAI's vectors are float16)


Maybe it would take a sabbatical to heal its trauma like real humans. Maybe it would go to therapy.


Humans who pick their own training data can achieve great results by choosing which courses to take in college or grad school.

