
> LLMs are very, very good at generating code

Ummm.... Awful code that often looks right at first glance, maybe.

Maybe LLMs can generate the kind of code that's really shallow in its complexity, but for literally everything I would call interesting LLMs have produced hot garbage. From "it doesn't quite do what I want" to "it couldn't possibly work and it's extremely far from being sane," though it always looks reasonable.


> Ummm.... Awful code that often looks right at first glance, maybe.

> Maybe LLMs can generate the kind of code that's really shallow in its complexity, but for literally everything I would call interesting LLMs have produced hot garbage. From "it doesn't quite do what I want" to "it couldn't possibly work and it's extremely far from being sane," though it always looks reasonable.

None of this has any bearing.


What would that even accomplish?

And no, even if it were created as part of vaccine or coronavirus research, that would not equate to it being a biological weapon. That's an absolutely unjustified leap. Do you know how virus research works? Clearly not. They still have smallpox viruses sitting around in labs. I don't know if there is any active research related to improving the smallpox vaccine, but if there were, and there were a release, in what way could you possibly classify vaccines as biological weapons?

But back to the original point: WHY? There is ample evidence that this particular lab had lax safety protocols that might have resulted in a leak of the virus. There is also evidence that similar viruses existed in the wild animal population of the area. They may or may not have been studying one of those viruses in the lab, but that alone doesn't prove where it first infected a human.

But say you have incontrovertible proof. What does that change?

People who died will remain dead. People with long COVID will remain ill. We can rattle sabers at China, but they're likely to continue to deny it was their fault. So what does happen? We can extrapolate from the past:

1. People of Chinese (or any east Asian) ancestry will be treated badly or even killed in twisted "revenge" fantasies of various idiots around the world.

2. The saber rattling could escalate to actual hot war, and more people would die.

3. Say I'm wrong and China does admit the lab screwed up. What then? They'll fire and/or execute people who were responsible. And...? It's not like they're going to pay compensation to everyone who lost a loved one around the world. They'll perform some political theater, and after a news cycle it will fade away.

I don't see an upside unless you're hoping for #3 and think that killing or jailing a few more people will somehow even the scales? Killing or jailing people for incompetence seems cruel and unusual to me. Never attribute to malice that which is adequately explained by stupidity.


Web3 morphed to mean "things that use a distributed blockchain." That's why we hate it. (It's not just currency either; it's anything blockchain, including NFT.)

"P2P web" needs a new name.


The aversion to blockchain and cryptocurrency is because they're a scam.

There's literally no use case for crypto that makes sense. Nothing that couldn't be better done without a distributed blockchain.

Blockchain is a way to set up a really slow, really limited, hard-to-manage database that's hosted among multiple untrusted entities. Literally every use case would be better handled by using a single trusted database and cryptographic signatures for verification (actual cryptography, as opposed to "crypto," which has come to mean blockchain; I hate how words get co-opted).
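A minimal sketch of the "trusted database plus signatures" idea, assuming a Node-style host and using Ed25519 via node:crypto; the record fields are invented for illustration:

```typescript
// Sketch, not a full system: the trusted host signs each record it
// serves, and any client can verify authenticity without any
// distributed consensus. Record fields are made up.
import { generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// The trusted database host signs a record before serving it.
const record = JSON.stringify({ id: 1, owner: "alice", amount: 100 });
const signature = sign(null, Buffer.from(record), privateKey);

// Clients verify the signature instead of trusting the transport.
const ok = verify(null, Buffer.from(record), publicKey, signature);
console.log(ok); // true
```

Any tampering with the record invalidates the signature, which covers the "verification" half without a blockchain.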

AI at least has positive uses, even if it's also being hyped well beyond its actual usefulness.


Even though there are scams, blockchain has a lot of utility.

You don't have as many scams in AI just because it doesn't directly involve money, but there are countless cases of people slapping "AI" labels on things, or other shady actors using AI to perform scams, spam, and other malicious stuff.

Saying that it "would be better handled by using a single trusted database and cryptographic signatures for verification" is the same as saying "AI could just be replaced with ifs and elses and other algorithms, and all you are doing is computing optimal solutions."

I'm just mocking the fact that HN seems to hold the idea of "AI good, crypto bad," even though both are just trendy tech.


> Blockchain has a lot of utility.

I have yet to see a single application for blockchain that isn't better done without blockchain.

Zero. None.


I also tend to switch to straight black and white. I don't get the desire for less contrast.

I do like ligatures, and I like checking out new fonts, but I'm not going to be offended if you don't. ;)


Meh. I like nice fonts, but I never talk about what font I use.

I don't even know what font I'm currently using, though I put some thought into choosing it at the time.

Some of us just care about what our fonts look like. I'm loving the configurator with Iosevka; yes, I'll likely spend an hour tweaking a custom font, but after that I'll just be done and move on.

There's no "flex" if I never even show anyone my dev environment or talk about what font I use. I think you're experiencing selection bias in thinking that only people who flex care about their fonts. ;)


I am of a similar mindset. Every couple of years, I put an afternoon into choosing which font will inspire me to code. I set it up, forget about it, and tell no one, since it would feel petty and personal.

It'd be nice to have a word for something you care substantially about, but only fleetingly, and with no desire to preen over.

I am on Iosevka now partly because I can pull it from the AUR in my setup scripts, which is something not every font allows.


I don't know you, so I am not claiming you are flexing.

But you claim that I am suffering from mental bias, while you know nothing about me other than a single post.

Maybe it is just that I posted something a little too close to the mark for you, and rather than reflect on why that might bother you, you externalize it.


I do wonder whether encouraging people to exist entirely in their bubble is healthy.

I agree with the article to a point. I don't really have a "daily news ritual," though I tend to stumble across major stories.

But enough people being isolated from the news can result in an uninformed populace. People continuing to vote R or D simply out of habit.

I mean, I'm unlikely to vote anything but D for the foreseeable future. But if the R party self-destructs and an actual, viable, left-leaning party is created, I'd want to know that.

But that doesn't mean I need updates on every manufactured crisis daily, either.


I find that "the news" is what is creating those bubbles. The more you read the nytimes, or watch fox, the more you lock into that particular bubble mindset.


Fox? Yes. Absolutely.

NYT? Only if you consider "what's actually happening" or "reality" to be a bubble. I see both liberal and conservative viewpoints in the NYT all the time.

Reality does have a stubborn liberal bias, after all.

That said, NYT is equally driven by sensationalism, in the "If it bleeds, it leads" sense.


You probably can read the news once per month and be well informed.


For whatever reason liberals always seem to be the ones who withdraw into their bubbles. The problem is this leaves the other side the opportunity to capture various government organizations, like SCOTUS.


I'm not sure conservatives are exactly the social butterflies you make them out to be.


It has nothing to do with being a social butterfly. I don't have the quote handy, but it's decades old. A GOP operative said something like "our goal is to get liberals to disconnect from the political process" while at the same time engaging their own base. This is what the GOP is famous for. The social issues (flag burning, abortion, gay marriage, etc, etc, etc).

In the late 90's and leading up to the 2000 POTUS race, with few exceptions all of my liberal friends had disconnected from politics. It's always been the case, in my circle of people I know, that conservatives vote more regularly. This is over more than 40 years.


I think you're mistaking centrists for liberals.

Are you in a predominantly conservative area? A "red state"?

Or is your circle of friends in their late teens and 20s? Younger folks just don't vote as often as older.

In the circle of people I know, who are mostly actual liberals, and who are generally 40+, they've been very engaged and voting consistently.

Heck, even the conservative Republicans I know have been engaged and voting consistently for Democrats since Obama came on the scene. I do tend to hang around smart and educated individuals, and every ethical conservative at this point has defected to voting for Democrats for years now--at least since Trump won.


> Package.json, no such thing for Deno (not unless you go out of your way to infect Deno with it)

Yeah...this always felt like a bug in Deno. ¯\_(ツ)_/¯

> scripts, doesn't matter since all Deno projects have the same tools built-in (benchmarker, bundler, compiler, formatter, linter, task runner, test runner, etc...)

AND...you therefore can't do anything outside of the box in a well-defined way. Oops.

Example: How do I run just the main server and not a worker? How do I seed the database?
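For comparison, this is the kind of project-specific entry point that a package.json scripts block makes discoverable in Node projects; the script names and file paths here are hypothetical:

```json
{
  "scripts": {
    "start": "node dist/server.js",
    "start:worker": "node dist/worker.js",
    "db:seed": "node dist/scripts/seed.js"
  }
}
```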

> dependencies doesn't matter anywhere near as much because Deno comes with a standard library

Sigh. Yeah. Really. You said that.

OK, using the standard library, please:

- Parse Excel files, modify them, and rewrite them to disk.

- Load and modify a PNG and write it out as a JPG. Be sure you scale it down using a Lanczos kernel.

> eslint doesn't matter since deno has a linter

One of the most powerful eslint rules for TypeScript is the detection of dangling promises. A quick glance at the deno lint rules doesn't turn up anything that handles this critical rule.

And OOPS, adding eslint to the project in a standardized way isn't possible because there's no package.json!

... I could go on.

Sorry, but Deno fanboying is so last year.


> > scripts, doesn't matter since all Deno projects have the same tools built-in (benchmarker, bundler, compiler, formatter, linter, task runner, test runner, etc...)

>

> AND...you therefore can't do anything outside of the box in a well-defined way. Oops.

>

> Example: How do I run just the main server and not a worker? How do I seed the database?

That doesn't really make sense. If you're trying to do anything in a well-defined way, you're going to need a box. The box is the definition.

------------------------------------------------------------------------------------------------

> > dependencies doesn't matter anywhere near as much because Deno comes with a standard library

>

> Sigh. Yeah. Really. You said that.

>

> OK, using the standard library, please: [...]

I'm not going to be able to do these in a comment here, but I can briefly tell you how I would go about doing them.

> - Parse Excel files, modify them, and rewrite them to disk.

Use streams API to create a pipeline (input -> modify -> output).

If by Excel files you mean CSVs, then I can stream the whole thing, making modifications and saving while I'm still streaming in the file.

If by Excel files you mean an xlsx file, then I tweak the pipeline to (input -> unzip -> modify -> rezip -> output).
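For the CSV case, a minimal sketch of that (input -> modify -> output) pipeline shape using the standard Web Streams API; the uppercasing transform is just a stand-in for a real CSV modification:

```typescript
// Sketch of a streaming pipeline. In practice the input would be a
// file stream and the output a file write; here both are in-memory.
const input = new ReadableStream<string>({
  start(controller) {
    controller.enqueue("id,name\n");
    controller.enqueue("1,alice\n");
    controller.close();
  },
});

const modify = new TransformStream<string, string>({
  transform(chunk, controller) {
    controller.enqueue(chunk.toUpperCase()); // the "modify" stage
  },
});

const chunks: string[] = [];
await input.pipeThrough(modify).pipeTo(
  new WritableStream<string>({
    write(chunk) {
      chunks.push(chunk); // the "output" stage
    },
  }),
);

const result = chunks.join("");
console.log(result); // prints "ID,NAME" and "1,ALICE" on two lines
```

Because each chunk flows through independently, memory use stays flat no matter how large the file is.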

> - Load and modify a PNG and write it out as a JPG. Be sure you scale it down using a Lanczos kernel.

First I'd make sure I have the encoder and decoder settled. If you have a specific encoder in mind (e.g. ImageMagick + specific settings), you can box that up by compiling it to WebAssembly. If you don't, it doesn't really matter what you use: you can always use the JS standard Canvas API, which you can load an image into and then pull out raw pixel info.

If you want to do the whole thing from scratch, you can do it like the Excel way and create a pipeline using the streams API and TypedArrays.
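For reference, the 1-D Lanczos kernel at the heart of a from-scratch resampler is tiny; the resampler would evaluate it at sample offsets along each axis and normalize the weighted sums (a = 3 shown, a common default):

```typescript
// The 1-D Lanczos windowed-sinc kernel: sinc(x) * sinc(x / a) on
// |x| < a, zero outside. A resampler sums source pixels weighted by
// this function evaluated at their offsets from the target position.
function lanczos(x: number, a = 3): number {
  if (x === 0) return 1;
  if (Math.abs(x) >= a) return 0;
  const px = Math.PI * x;
  return (a * Math.sin(px) * Math.sin(px / a)) / (px * px);
}

console.log(lanczos(0)); // 1 (full weight at the sample point)
```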

------------------------------------------------------------------------------------------------

> > eslint doesn't matter since deno has a linter

> One of the most powerful eslint rules for TypeScript is the detection of dangling promises. A quick glance at the deno lint rules doesn't turn up anything that handles this critical rule.

>

> And OOPS, adding eslint to the project in a standardized way isn't possible because there's no package.json!

Linting for floating promises is tricky to accomplish (ESLint doesn't get it quite right).

Depending on the code, you can get stuck in the halting problem. To solve it correctly, you need to write a compiler/solver that has access to the relevant type information.

This is something that the typescript project itself has been investigating and once solved would be added as another strict rule (e.g. noImplicitOverride, etc...).

If you don't care about correctness and want any heuristic solution, then use any heuristic solution.


ESLint does an amazing job in detecting floating promises. I've not had it miss one, ever. When adding this to a project, I've discovered multiple accidental bugs due to a missing "await" keyword--bugs that were extremely subtle and intermittent in many cases.

The only thing it can't do is determine that you actually did handle the promise later. Which is fine. It's a LINTING RULE, and false positives are the name of the game.

What's BAD is when you accidentally miss handling a promise at all. It's an invisible error without the linting rule.
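A minimal sketch of the bug class that rule catches; saveUser is a hypothetical function that writes to a database:

```typescript
// A missing "await" means the rejection is silently lost and nothing
// waits for the write to finish -- exactly the subtle, intermittent
// bug class described above.
async function saveUser(name: string): Promise<void> {
  // ... write to a database
}

async function handler(): Promise<void> {
  saveUser("alice"); // floating: errors vanish, ordering silently breaks
  await saveUser("bob"); // what was intended
}
```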

Your other comments...don't even make sense. You're going to build a Lanczos filter by hand? Or you're going to...compile ImageMagick to WebAssembly?! An implementation that's tremendously slower (nearly unusably so for large images) than that of Sharp:

https://www.npmjs.com/package/sharp

... which is simply an import away?

No, what you're doing is called "motivated reasoning." You've concluded that Deno is the best, and you're reinterpreting all of my complaints in convoluted ways to support your predetermined conclusion.

Standard fanboy behavior. Or troll behavior. I cite Poe's Law as why it's impossible to tell the difference.


Not a fanboy. But I'll go through my reasoning anyway.

----------------------------------------------------------------

## Eslint

We're on the same page that:

- heuristics work until they don't

- we'd all rather have false positives than missed positives

- Having something is better than nothing

- ESLint fills its role well for now

I've been using node since the 0.x days and used iojs while it was ahead. I've gone from jslint -> jshint -> eslint -> tslint -> eslint (because it supported ts) + sonarlint -> now (which is rome tools in editor, and sonarlint to catch what rome tools doesn't support yet).

The one problem I've had transitioning projects through each linter is the amount of crufty rule-disabling exception comments that end up scattered through my code.

I've found that I can get rid of a lot of those by setting up my configs and turning on most of the strict modes.

As for promises specifically, I've ended up needing to fire off a lot of floating promises in the last 18 months. There's a lot of firing things off to set values in things that may not exist anymore. (I've noticed it happens a lot around using Promise.race to make code that gracefully recovers in chaotic environments.)

The false positives got to be too much, so I disabled the rule (and at a later point moved away from eslint altogether).

There are also some tricky issues with catching floating promises when working with frameworks, which can lead to super-hard-to-debug side effects.

So now I have:

- floating promises where for specific situations I don't want to catch their errors and have them throw up to the next layer

- floating promises that are conditionally caught

- floating promises that are actually a wrapped promise that fails as an uncaught promise should, but also conditionally triggers the debugger or logs out a trace.

I use different approaches depending on the situation.

Maybe for you, the only situation you need to deal with is catching all promises (in which case you should continue using ESLint then turn off that rule when TS gets the correct promise handling detection), but for me it causes more trouble than it fixes.
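The third pattern in the list above, the wrapped promise, can be sketched like this; `debugMode` is a hypothetical flag:

```typescript
// A wrapper for intentional fire-and-forget promises: failures still
// surface loudly, but can conditionally log a trace first.
let debugMode = false;

function fireAndForget(p: Promise<unknown>): void {
  p.catch((err) => {
    if (debugMode) console.trace("floating promise failed:", err);
    // Re-throw outside the promise chain so the error still surfaces
    // as uncaught, just as an unwrapped floating promise would.
    queueMicrotask(() => {
      throw err;
    });
  });
}

fireAndForget(Promise.resolve("ok")); // resolved promises pass through silently
```

Wrapping intentional cases in a named helper like this also makes them easy to find and easy to exempt from a lint rule.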

----------------------------------------------------------------

## Images

If I had to do what you suggested, I would actually just use sharp, but after reading your comment about dependencies, I wasn't sure you would accept that answer.

I suggested two alternatives:

1. Compiling an existing solution into web assembly to box up that functionality and call into it. It could be ImageMagick, libpng, libspng, libvips, etc... I suggested ImageMagick because that tends to be the go-to for when someone has a specific setting in mind (usually due to a legacy system).

2. Setting up streams and using TypedArrays (which is what sharp uses anyway)


Most of those 2M EV chargers only need to be Level 2.

It makes MUCH more sense to use J1772 by default in those 2M chargers, since every EV other than Tesla currently uses that standard, and Tesla can easily use an adapter.

So CCS hasn't "lost"; it's likely to remain a standard. In fact, there are some 5,200 CCS charging station locations in the US, compared to only about 1,800 Supercharger stations. [1]

A stronger position in the market only amplifies itself. Beta was better in every way than VHS, but VHS had a lead in popularity, and network effects eventually resulted in the death of Beta.

[1] https://insideevs.com/news/673190/nacs-ccs1-locations-chargi...


SAE (the Society of Automotive Engineers) just formally adopted NACS as the preferred standard [1], so CCS1 may indeed have "lost".

Conversion of existing CCS1 chargers was one of the things SAE supposedly considered in making the decision. NACS "speaks CCS software over Tesla-designed hardware" so it's mostly just a hardware change at one end of the cable. (Tesla made some smart decisions when they decided to standards track NACS.)

[1] https://www.theverge.com/2023/6/27/23775208/tesla-nacs-elect...


We'll see what happens, I guess.

As long as I can charge my car (Ioniq 5) moving forward, it doesn't really matter.

CCS offers higher-speed charging than is available at Superchargers, at least for my car, though, so I hope the NACS standard can support the higher charging speed.


> 5,200 CCS charging station locations in the US, compared to only about 1,800

Sure, but the smaller print on that diagram mentions that there are twice as many individual Superchargers. CCS stations are typically small; Supercharger stations are typically large.


...and?

As long as there are chargers available at a particular station, why should I care how many are sitting unused?

I've been by many Supercharger stations where there were no cars charging at all. Sometimes I see one or two, but the majority of the time most of the chargers are empty.

The number of locations indicates the flexibility of the charging network. There are CCS chargers in places where Superchargers don't exist, meaning you have more places you can go with CCS than you can go with a Tesla.

And that isn't a hypothetical. I've driven down to the Carlsbad Caverns with my family in the past, and I wanted to do it again with my electric car. For a long time, southwest New Mexico had NO fast charging stations for any brand of charger, but in the last year it's gained enough CCS chargers that I could comfortably drive back to the caverns without worrying about range. And there are still no Superchargers within a hundred miles. (Note I'd be coming from Colorado, so driving down from the north side of the state--you could probably make it TO Carlsbad from a west Texas Supercharger, and then make it back to the same Supercharger, but that would be many hours of driving out of my way.)

Would I like to see more redundancy at every location? Sure. But to me it's more important that there are chargers available where I want to go than that there are a ton of redundant chargers at every location.

Maybe if I lived in LA and wanted to be able to drive to SF I would have different priorities.


How many of those CCS stations are actually up and running properly? I can't stand Musk and Tesla and haven't yet made the jump into EVs, but I plan to soon, and everything I've seen online suggests Tesla chargers just work while EA and other CCS brands are extremely hit-and-miss.


Other than new stations that just haven't come online, I've never once pulled up to a location and failed to charge at a decent rate. I've heard similar from most of the people I've actually talked to at the stations where I charge.

I'm sure wherever these reviewers are doing their reviews, they're getting bad experiences. But personally I've never once had a problem. Maybe bad maintenance is more of a localized thing?


I've done a long distance road trip, and the CCS stations have occasional failed chargers, but every single one was available to charge my car.

Think of it this way: The Supercharger network has to be reliable, because there's only one station every ~100 miles or so. The CCS network can afford to be a bit less reliable because it's more redundant. And even then, the "less reliable" typically means that one out of four available chargers in a location is out of service, not that the location itself is unusable.

And there are apps that let you see the status of chargers at a location, so you can check in advance whether a particular location is working. If a location seems to be in poor repair, you can always plan around it.


Reading about how, over time, the volume of a black hole seems to grow infinitely makes me immediately jump to an obvious connection:

The universe as a whole is ALSO a system that appears to grow infinitely.

I mean, there have been many conjectures that a black hole could contain a new universe, and that the creation of a black hole also creates a new universe. This would seem to hint at another potential connection.

That said...I'm really not a physicist, so maybe that connection is at best a hook for a science fiction story. But it surprises me that the article didn't even mention the parallel.


> over time, the volume of a black hole seems to grow infinitely

Unfortunately, the article fails to note that this is a speculative hypothesis with which not all physicists agree. It's not an established fact. It's not even an established theoretical prediction.


I'd be willing to put $100 on longbets to say our universe is inside a singularity, but I doubt anyone who remembers I existed will still be alive when we figure it out.


You'd best hope some future physicist doesn't also figure out how to reverse entropy, or they'll be back to collect on that bet.


If you bring me back from the dead, I may or may not go full Frankenstein’s Monster on you. Be warned, posterity.


You can't be inside a singularity because that's a point, but we are inside (surrounded on all sides by) an event horizon.


It is consistent to say that anyone at that point all of a sudden gets an extra "coordinate system" they can interact with, but they can never leave the point in the outer coordinate system. That would be 'every singularity contains a new universe'.

It's not a very scientific theory, since you cannot report back after attempting to test its predictions. Though the predictions can be tested empirically!


My other long bet is that 100 years from now we'll still be telling people, "I bet you're fun at parties."

And if we're going to 'well achtually', we don't know what happens inside of an event horizon. If time stops due to the infinite curvature, does the singularity ever form, or does it just collapse in on itself for eternity?


What proof exists for this hypothesis?


The hypothesis that we are surrounded by an event horizon? That's not a hypothesis, it's a logical conclusion. The observable universe has a boundary beyond which the expansion of space is faster than the speed of light; therefore light cannot reach us, which is the (loose) definition of an event horizon.


In other words, the evidence for that is the evidence of the exponential(ish) expansion of space.


I like that idea: a bubble has smaller bubbles inside. All of them are growing, but one inside is growing faster, and it "eats" the others until it consumes the bubble it is in, and then it pops and all matter starts moving from the center outward.

Now we need to find some way to prove this is real.

