Hacker News | cdata's comments

This appears to be dated 2016. Did the preliminary results amount to anything?


AI has pushed me to arrive at an epiphany: new technology is good if it helps me spend more time doing things that I enjoy doing; it's bad if it doesn't; it's worse if I end up spending more time doing things that I don't enjoy.

AI has increased the sheer volume of code we are producing per hour (and probably also the amount of energy spent per unit of code). But, it hasn't spared me or anyone I know the cost of testing, reviewing or refining that code.

Speaking for myself, writing code was always the most fun part of the job. I get a dopamine hit when CI is green, sure, but my heart sinks a bit every time I'm assigned to review a 5K+ loc mountain of AI slop (and it has been happening a lot lately).


I agree. I’m using copilot more and more as it gets better and better, but it is getting better at the fun stuff and leaves me to do the less fun stuff. I’m in a role where I need to review code across multiple teams, and as their output is increasing, so is my review load. The biggest issue is that the people who lean on copilot the most are the least skilled at writing/reviewing code in the first place, so not only do I have more to review, it’s worse(1).

My medium term concern is that the tasks where we want a human in the loop (esp review) are predicated on skills that come from actually writing code. If LLMs stagnate, in a generation we’re not going to have anyone who grew up writing code.

1: not that LLMs write objectively bad code, but it doesn’t follow our standards and patterns. Like, we have an internal library of common UI components and CSS, but the LLM will pump out custom stuff.

There is some stuff that we can pick up with analysers and fail the build, but a lot of things just come down to taste and corporate knowledge.


I've been using it to do big refactors or large changes that I would previously have avoided because the benefits didn't outweigh the costs of doing it. I think half the problem people have is just using AI for the wrong stuff.

I don't see why it doesn't help with reviewing, testing, or refining code either. One of the advantages I find is that an LLM "thinks" differently from me so it'll find issues that I don't notice or maybe even know about. I've certainly had it develop entire test harnesses to ensure pre/post refactoring results are the same.

That said, I have "held it wrong" and had it do the fun stuff instead, and that felt bad. So I just changed how I used it.


I read a lot of AI generated code these days. It makes really bad mistakes (even when the nature of the change is a refactor). I've tried out a few different tools and methodologies, but I haven't escaped the need to babysit the "agent." If I stepped aside, it would create more work for me and others on the backend of our workflow.

I read with awe the anecdotes of teams that push through AI-driven changes as fast as possible. Surely their AIs are no more capable than the ones I'm familiar with.


I read all the code and it sometimes makes mistakes -- but I wouldn't call them really bad. And often merely pointing one out will get a correction. Sometimes it's funny. It's not perfect, but nothing is. I have noticed that the quality seems to be improving.

I still think whether you see sustained value or not depends a lot on your workflow -- in what you choose to do or decide and what you let it choose to do or decide.

I agree with you that this idea of just pushing out AI code -- especially code written from scratch by an AI -- sounds like a disaster waiting to happen. But honestly, a lot of organizations let a lot of crappy code into their codebases long before AI came along. Those organizations are just doing the same now at scale. AI didn't change the quality, it just changed the quantity.


Arguably the ad business is to blame. It created a perverse incentive. They maximized pay-to-play. The losers were authors that previously published on a passion budget (and would/could never pay for ads). AI is just the last nail in the coffin.


The foresters refusing to plant vast tracts of Norway Spruce aren't protecting themselves; quite the opposite, they’re falling behind. The gap is widening between states who've replaced mixed forests with a flawless mono-crop and those who haven't. The first states are growing forests faster, and harvesting a more desirable wood. The second group is... not.


Very cool. I'm always on the lookout for languages - especially beginner-friendly ones - that are good candidates for building Wasm Components (my use case is a fantasy console with Wasm game cartridges).

Have you given any thought to supporting Wasm Components as a build target?


I started Doolang for writing servers and APIs easily and simply, but your points make sense for Wasm components. It's not planned as of now, since this is just a starting point, but I'll surely take your thought into account for my further roadmap. Honestly, I'm not much of an expert in such things yet, but I'd appreciate any other recommendations and suggestions.


Don't sleep on the Rust toolchain for this! You can have DOM-via-Wasm today, the tools generate all the glue for you and the overhead isn't that bad, either.



Got a rec? The reply to you is talking about a component framework, rather than actual vanilla html/css access. I haven't seen anything, personally, that allows real-time, direct DOM interaction.


https://github.com/wasm-bindgen/wasm-bindgen is the tool for raw access to the DOM APIs.
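For the curious, a minimal sketch of what direct DOM access looks like via wasm-bindgen's companion web-sys crate (this assumes a Cargo project with the relevant web-sys features such as `Window`, `Document`, and `Element` enabled; it's an illustration, not a complete project):

```rust
use wasm_bindgen::prelude::*;

// Runs automatically when the Wasm module is instantiated.
#[wasm_bindgen(start)]
pub fn run() -> Result<(), JsValue> {
    let document = web_sys::window()
        .expect("no global window")
        .document()
        .expect("no document on window");

    // Create and append a DOM node directly from Rust -- no framework layer.
    let p = document.create_element("p")?;
    p.set_text_content(Some("Hello from Wasm"));
    document.body().expect("no body").append_child(&p)?;
    Ok(())
}
```

Built with e.g. `wasm-pack build --target web`, the generated JS glue loads the module and `run` fires on instantiation.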


Ah, yes! This seems like exactly the kind of minimalist exposure I was trying to find, to avoid the emscripten dependency. Thanks!


I had the pleasure of meeting Mikeal on a few occasions, but mainly I've benefited from his work over the years (initially via the JavaScript ecosystem, and later through the Protocol Labs community).

PouchDB was way ahead of its time, and I'm just now coming around to how crazy cool it was and is compared to most other tech in its space.

He made a great deal of positive impact on technical areas I care about. Rest in peace.


Just learning about PouchDB now. Why do you think it didn't take off?


Sometime around 2016, a small team (me included) built a "mini" version of our main product (Typeform) that used PouchDB for syncing forms/answers between the backend and the mobile app (written with Phonegap/Cordova, if I remember correctly), mainly so we could have offline capabilities.

Everything worked fine, and it was cool to launch something like that since I'm not a mobile developer by any measure. But PouchDB required CouchDB for the syncing, which was the first document DB we deployed in our production infrastructure, and that sync was the only use case for having CouchDB at all, so we didn't have much expertise with it.

I think managing CouchDB ended up being the biggest maintenance hassle at one point, as it was kind of an extra piece compared to the "real" setup that hosted the other production data. AFAIK, there were no experts on CouchDB at the company either.

So I guess, in the end, if the "frontend sync library" you want to use also ends up dictating the backend storage/engine, make sure you can "afford" a completely new and standalone piece for just that. Unless you're already using CouchDB, in which case it seems like a no-brainer.

Probably today I'd cobble together something "manually" with Postgres and WebSockets/SSE instead if I was looking to do the same thing again.


I remember in 2017, at Offline Camp, I proposed talking about using offline-first libraries with existing backends. Nobody was interested. It seems the people interested in such tech were pretty much sold on CouchDB.

Just now, almost a decade later, we get libraries like Tinybase and SignalDB.


In addition to the sync issues mentioned, personally I think overcoming the browsers was the real issue. Nobody wanted to support this, and the security model would have been a contrived nightmare.


I entered the workforce in 08/09. At that time things seemed really dire. It felt to me like the whole house of cards was coming down, and I told myself that I would take any job that I could get.

I ultimately landed a job with an odd startup, eccentric founders, working out of an attic. In hindsight I couldn't have asked for a better start to my career. But, my expectations were rock bottom at the time.

Anyway, keep your mind open to all possibilities. You never know where an unlikely choice may take you. And, good luck!


I wonder if you could pair this with nix e.g.,:

    - shell: nix develop --command {0}
      run: ...


In my experience, the default runner VM is so slow that you probably don't want Nix on a workflow that doesn't already take minutes.

Even with a binary cache (we used R2), installing Lix, Devbox and some common tools costs us 2 1/2 minutes. Just evaluating the derivation takes ~20-30 seconds.


You can use a self-hosted runner with an image that has anything pre-loaded.


Is there a way to cache the derivation evaluation?


You can cache arbitrary directories in github actions, but the nix package cache is enormous and probably bigger than GH's cache system will allow. Restoring multi-gig caches is also not instant, though it still beats doing everything from scratch. Might be more feasible to bake the cache into a container image instead. I think any nix enthusiast is still likely to go for self-hosted runners though.
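A hedged sketch of what the directory-cache approach can look like (the paths and key scheme are illustrative, not a recommendation; as noted, restoring a multi-gig cache isn't instant):

```yaml
- uses: actions/cache@v4
  with:
    # The store plus the Nix database that tracks its contents.
    path: |
      /nix/store
      /nix/var/nix/db
    key: nix-${{ runner.os }}-${{ hashFiles('flake.lock') }}
```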


The default cache action also has issues with anything that isn't owned by the runner user, and caches are per-repository, so you can't just have one cache like you do for binary caches.


Yes, we do this, although you need to do `nix develop --command bash -- {0}` to make it behave as a shell.
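For reference, a sketch of a full job using that pattern (the installer action, flake contents, and test command are placeholders for whatever your project uses):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Any Nix installer action works here; this one is commonly used.
      - uses: cachix/install-nix-action@v27
      - name: Run tests in the flake's dev shell
        shell: nix develop --command bash -- {0}
        run: cargo test
```

The `{0}` placeholder is GitHub Actions' custom-shell syntax: the runner writes the `run:` body to a temp script and substitutes its path there, so each step executes inside the dev shell.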


This is so close to validating my expectations that I'm almost skeptical of its veracity. I'm a regular Quest player, but I couldn't tell you how to launch Horizons if you held a gun to my head.

The leaders of corporate initiatives like this often tell themselves that they are building an ecosystem. They also seem convinced that an ecosystem will manifest from a highly curated, centrally developed silo that they have total control over. I guess it sort of worked for Facebook back in the day, but they were surfing on a lot of good will when it happened (and look at it now).

Things that are much closer to a metaverse than Horizons will ever be:

- Minecraft (Bedrock)

- VRChat

- Any popular multiplayer game that includes a free level editor

- The open web


Definitely agree about VRChat (and Minecraft to a lesser extent)!

