I wonder if teaching an LLM how to write Prolog and then letting it write it could be a great way to explore spaces like this in the future.
I only ever learned it in school, but if memory serves, Prolog is a whole "given these rules, find the truth" sort of language, which aligns well with these sorts of problem spaces. Mix and match enough rules, especially across disparate domains, and you might derive some really interesting things that are low-hanging fruit just waiting to be found.
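For anyone who hasn't seen it, the "given these rules, find the truth" style looks roughly like this (standard Prolog syntax; the facts here are just a made-up toy example):

```prolog
% Facts: a tiny knowledge base.
parent(tom, bob).
parent(bob, ann).

% Rule: X is a grandparent of Z if X is a parent of some Y
% who is in turn a parent of Z.
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

% Query at the interactive prompt:
% ?- grandparent(tom, Who).
% Prolog searches the rules and answers: Who = ann.
```

You state facts and rules; the engine does the searching. That's the part that maps nicely onto "let the LLM write the rules, let Prolog grind out the consequences."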
I’ve worked at a company where all we had were large, unified, post-request logs, and honestly those logs were orders of magnitude easier to work with than having to collate lots of logs into a single concept. ELK liked those giant logs more, too.
It does help that the system was built from the ground up to use the big logs.
I think the best feature was that we would extract a ton of keys and build them into the full-size log, so you could find all transactions related to X really, really easily, where X was a wide variety of things.
Everywhere else I’ve been, the logs have always been way harder to parse, process, or even just find. It would always take several disparate queries to get everything.
E.g. “oh, we need to find something related to X? Well okay, that gives us 50 different separate calls, now let’s do another search on each of those calls’ unique transaction id and review each whole transaction individually” vs “I need X. Cool, here’s all 50, complete calls.”
Edit: to be clear, throughout the code it was just “context.info()”, over and over again, like regular logging; it all just got shoved into a big object at the end, including for all non-application-crashing exceptions. And the application was built to not fully crash, like, ever. (And this worked)
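The pattern described above — accumulate entries on a request context, then emit one big keyed log object at the end — can be sketched roughly like this (a guess at the shape, not the actual system; `RequestContext` and the field names are made up, with `info()` matching the comment):

```python
import json


class RequestContext:
    """Accumulates log lines and searchable keys for one request,
    then emits a single unified log record at the end."""

    def __init__(self, transaction_id):
        self.transaction_id = transaction_id
        self.lines = []
        self.keys = {}  # extracted keys: user_id, order_id, etc.

    def info(self, message, **extracted_keys):
        # Called throughout the code like ordinary logging.
        self.lines.append(message)
        self.keys.update(extracted_keys)

    def flush(self):
        # One big log record; an indexer like ELK can then
        # query any extracted key directly.
        return json.dumps({
            "transaction_id": self.transaction_id,
            **self.keys,
            "log": self.lines,
        })


ctx = RequestContext("txn-123")
ctx.info("charge started", user_id="u42", order_id="o7")
ctx.info("charge succeeded")
record = ctx.flush()
```

Searching for "everything related to user u42" is then one query against an indexed field, instead of hunting down fifty transaction IDs first.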
Or possibly “in addition to”, yeah. I think this is where it needs to go. We can’t keep training HUGE neural networks every 3 months, throwing out all the work we did and the billions of dollars in gear and training, just to use another model a few months later.
That loop is unsustainable. Active learning needs to be discovered / created.
I don't think old prompts would become useless. A few studies have shown that prompt crafting is important because LLMs often misidentify the user's intent. Presumably an AI that is learning continuously will simply get better at inferring intent, so any prompts that were effective before will continue to be effective; it will simply grow its ability to infer intent from a larger class of prompts.
That depends on the goals of the prompts you use with the LLM:
* as a glorified natural language processor (like I have done), you'll probably be fine, maybe
* as someone to communicate with, you'll also probably be fine
* as a *very* basic prompt-follower? Like natural-language-processing-level prompts ("find me the important words", etc.)? Probably fine, or close enough.
* as a robust prompt system with complicated logic in each prompt? Yes, it will begin to fail catastrophically, especially if you want the results to be repeatable.
I'm not sure that the general public is that interested in perfectly repeatable work, though. I think they're looking for consistent and improving work.
They were trying to compete with a couple of existing, VERY good alternatives, and the people actually most likely to use that product were already on those services.
It was a losing play that didn’t know what market it was actually entering.
VRChat is the most popular one. Age verification. User generated models. User generated worlds. Revenue sharing in worlds. For-sale models and props. It’s quite feature rich now.
They're not all deranged! Some are completely productive, functional furries. Probably. Maybe.
Also, your statement is far too reductive! There's plenty of avatars with scales! Also, don't forget the anime girls that are actually middle-aged men and the occasional sentient burrito.
The suspiciously wealthy software developers, astronauts, pharmacists, game devs, and artists who build the high-quality 3D models, Blender and Substance Painter tutorials, and add-ons that prop up a good percentage of the VR headset market and the Patreon market, and who sustain a thriving artisan ecosystem?
What do you expect? Did you see the movie Ready Player One? This kind of experience is ideal for furries and cosplay types, and they featured in the movie too.
If you can be anything, it makes sense it attracts people who want to not be what they already are.
It’s a reference to a famous YouTube video[0] about how to write instructions that can be followed.
One of the most important things a programmer needs to learn is how to tell a computer how to do something. It’s a surprisingly hard skill because each step is way more complicated, and has way more variables to account for, than you expect.
Switching to developer mode is very likely something he won’t be doing, nor be allowed to do, on the Chromebook his parents bought him or the school assigned him.
The kid’s parents want to be able to monitor their kid. The kid’s parents want to be able to drag the machine to a local store and have the people there fix it.
The kid’s parents - and the kid - all have iPhones, so it’s familiar.
The kid’s school requires Windows or Mac for their WiFi and won’t let the kid use Linux because they don’t trust it.
There are plenty of reasons why Linux isn’t the answer in the current climate.