n_e's comments | Hacker News

The reply looks like it was written by an LLM. Not that this excuses anything.

If anything that's worse...

The explanation is at the end of the article: another GoDaddy customer asked for the transfer of a similar-looking domain name, and they transferred the wrong domain.

And then slow-rolled support.

And then flat out lied that they received "the correct" documentation justifying the transfer when they hadn't received any documentation, and denied the appeal.

Frankly, the whole thing is hard to explain. The best explanation is fraudulent business practices to save 60 seconds of looking for the documentation.


With all the publicity GoDaddy has received over the last 10 years or so, I wonder why anybody reasonable would deal with them any more. Maybe the prices are irresistibly low, IDK.

I was done with them the day I knew their founder and CEO bribed corrupt African governments to go kill elephants, pose for pictures and share them with family and friends. I hunt and fish, but there’s something particularly evil about spending a fortune to abuse broken systems in poor nations to go after one of the most social species on earth, which are also known for having a strong awareness of death.

They show up as the #2 ad spot when you search "register a domain" and most people don't know any better.

They are not inexplicably low -- any rational person sees that the low prices are one-year intro deals that revert to excessive rates after the first year.

We have always hated working with them, and have moved all clients to cloudflare.


You moved from the worst registrar to the second worst registrar. Cloudflare will call you up one day demanding an immediate payment of $150k and holding your domains hostage if you don't comply.

Cloudflare isn’t anywhere near being the second worst registrar. I’ve never had anything remotely similar to this occur, and I’ve had hundreds of domains with Cloudflare for years.

> Cloudflare will call you up one day demanding an immediate payment of $150k and holding your domains hostage if you don't comply.

[citation very much needed]


https://robindev.substack.com/p/cloudflare-took-down-our-web... - one of a number of citations. To find more, insert the terms [Cloudflare, hostage] into your favorite search engine.

Whatever was really happening in that incident it seems clear that it was not a simple matter of having registered some domains with Cloudflare and then getting a shakedown for $100k+ because of that.

If anyone else chooses to read the post then I suggest skimming the comments (that are mostly hidden by default) as well.


The point isn't the apologists that pop up wherever CF gets mentioned; the point is that they have more or less built a reputation for deceptive loss-leader marketing.

Maybe early/MVP product engineers should know better, but CF's own education materials do not teach you to expect that.


I have no financial or professional connection to Cloudflare as far as I know and that's partly because I'm not sure I like the way they operate and the level of control over everyone's access to the Web they now have. But if we're going to criticise then I think it should be on a reasonable and preferably objective basis. The claim I challenged appears to be the complete opposite of that unfortunately.

If something sounds too good to be true, it probably is.

> Whatever was really happening in that incident it seems clear that

... CF for all their faults probably weren't the bad guy, when they discovered a "customer" absolutely taking the piss with capacity and doing incredibly sketchy things with domains to get around regulatory issues.

I have a courtesy hire car from a breakdown service at the moment with "unlimited mileage". I suspect they mean "unlimited mileage doing the sort of thing you do normally", and that "Unlimited, cool, I'm driving this thing from Scotland to Dagestan" would be met with opposition and a large invoice.


If you were in Scotland or Europe more generally, it'd be illegal for "unlimited mileage" to not actually be unlimited mileage.

If CF decides you're subject to an invisible limit which they won't even tell you and you have your domains at CF, they hold your domains hostage. Luckily, these guys had their domains somewhere else so they weren't hostage. Don't be the one who is.


C'mon, this is the guy that was running a shady online casino which was tanking Cloudflare's IP reputation; completely different.

Cloudflare didn't give them the option to quit hosting with CF and port their domains out. It held the domains hostage because the domains were registered through CF.

Are you talking about a different article? The one linked says they only had their NS pointed at CloudFlare and the domains weren't registered there

Yeah, that's FUD. Cloudflare hasn't called anybody demanding huge sums of cash while holding their domains hostage. As a registrar they're fine, and they don't play scammy upsell games (because they have a real business model that isn't just registration skim).


For me, it's Namecheap.

That's worrying. My search-fu is failing me. Link please.

What are the good alternatives? In the domain business, everyone likes a service that has lived for decades. GoDaddy being one of them helps it a lot.

I love Cloudflare for my .com domains, but to date they don't support a lot of TLDs.


Porkbun.com. It's been around since 2015; you decide if that's long enough.

For my extremely simple needs, the website is domain name magic for a plain customer wanting a plain service.


If you prefer to use a service that's been around for a really long time, Network Solutions will happily sell you a domain! (Maybe Tucows, too; their domain-registering arm is now called OpenSRS, it appears.)

The bad publicity is all in tech spaces and they do ads IRL.

> Why is reserving a megabyte of stack space "expensive"?

Because if you use one thread for each of your 10,000 idle sockets you will use 10GB to do nothing.

So you'll want to use a better architecture such as a thread pool.

And if you want your better architecture to be generic and ergonomic, you'll end up with async or green threads.
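As an aside, that megabyte is only a default reservation, not a hard cost: thread APIs let you request smaller stacks. A hedged sketch using Python's `threading.stack_size` wrapper (the 256 KiB figure is an illustrative assumption that clears typical platform minimums; a real server would tune it per workload):

```python
import threading

# Applies to threads created after this call; the platform default is
# often 1-8 MiB per thread.
threading.stack_size(256 * 1024)

results = []

def idle_worker(i):
    # Stand-in for "park on an idle socket"; list.append is atomic under the GIL.
    results.append(i)

threads = [threading.Thread(target=idle_worker, args=(n,)) for n in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} threads ran with 256 KiB stacks")
```

Even with smaller stacks, 10,000 threads still reserve gigabytes of address space and load the scheduler, which is why thread pools and async designs tend to win at that scale.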


> Because if you use one thread for each of your 10,000 idle sockets you will use 10GB to do nothing.

1. On a system that is handling 10k concurrent requests, the 10GB of RAM is going to be a fraction of what is installed.

2. It's not 10GB of RAM anyway, it's 10GB of address space. It still only gets faulted into real RAM when it gets used.
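The address-space-vs-RAM distinction is easy to observe. A Linux-only sketch (assuming `/proc/self/status` is readable) that reserves a 64 MB private anonymous mapping, much like a thread-stack reservation, and watches resident memory grow only when pages are actually touched:

```python
import mmap

def vm_rss_kb():
    """Current resident set size in kB, read from /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("no VmRSS line found")

SIZE = 64 * 1024 * 1024  # a 64 MB "stack" reservation

before = vm_rss_kb()
# Reserve address space only -- no physical pages are allocated yet.
buf = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
after_reserve = vm_rss_kb()

# Touch one byte per 4 KiB page to fault the whole range into real RAM.
for off in range(0, SIZE, 4096):
    buf[off] = 1
after_touch = vm_rss_kb()

print(f"RSS: start={before} kB, reserved={after_reserve} kB, touched={after_touch} kB")
```

The reservation step barely moves RSS; only the touch loop does.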


> 1. On a system that is handling 10k concurrent requests, the 10GB of RAM is going to be a fraction of what is installed.

My example (and the c10k problem) is 10k concurrent connections, not 10k concurrent requests.

> 2. It's not 10GB of RAM anyway, it's 10GB of address space. It still only gets faulted into real RAM when it gets used.

Yes, and that's both memory and CPU usage that isn't needed when using a better concurrency model. That's why no high-performance server software uses a huge number of threads, and many use the reactor pattern.


> Yes, and that's both memory and cpu usage that isn't needed

No, it literally is not. The "memory" is just entries in a page table in the kernel and MMU. It shouldn't worry you at all.

Nor is the CPU the kernel spends managing those threads necessarily less efficient than someone's hand-rolled async runtime. In fact, given that it gets more eyes, it's likely more efficient.

The sole argument I can see is just avoiding a handful of syscalls and crossing the kernel<->userspace blood-brain barrier too often.


> > Yes, and that's both memory and cpu usage that isn't needed
>
> No, it literally is not. The "memory" is just entries in a page table in the kernel and MMU. It shouldn't worry you at all.

Only if you never free one of those stacks. TLB flushes can be quite expensive.


Fair enough, though it's not like an async tasks runner doesn't have its own often relatively expensive book-keeping.

> 1. On a system that is handling 10k concurrent requests, the 10GB of RAM is going to be a fraction of what is installed

I've written massively concurrent systems where each connection only handled maybe a few kilobytes of data.

Async io is a massive win in those situations.

This describes many rest endpoints. Fetch a few rows from a DB, return some JSON.


> you will use 10GB to do nothing.

You don't pay for stack space you don't use unless you disable overcommit. And if you disable overcommit on modern linux the machine will very quickly stop functioning.


The amount of stack you pay for on a thread is proportional to the maximum depth that the stack ever reached on the thread. Operating systems can grow the amount of real memory allocated to a thread, but never shrink it.

It’s a programming model that has some really risky drawbacks.


> Operating systems can grow the amount of real memory allocated to a thread, but never shrink it.

Operating systems can shrink the memory usage of a stack.

  madvise(page, size, MADV_DONTNEED);
Leaves the memory mapping intact but the kernel frees underlying resources. Subsequent accesses get either new zero pages or the original file's pages.

Linux also supports mremap, which is essentially a kernel version of realloc. Supports growing and shrinking memory mappings.

  stack = mremap(stack, old_size, old_size / 2, MREMAP_MAYMOVE);
Whether existing systems make use of this is another matter entirely. My language uses mremap for growth and shrinkage of stacks. C programs can't do it because pointers to stack allocated objects may exist.
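For the curious, the `MADV_DONTNEED` behaviour is observable from userspace. A Linux-only sketch (Python 3.8+, which exposes `mmap.madvise`): fault in a private anonymous mapping, then release its backing pages while the mapping itself stays valid:

```python
import mmap

def vm_rss_kb():
    """Current resident set size in kB, read from /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

SIZE = 64 * 1024 * 1024  # 64 MB private anonymous mapping, stack-like

buf = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
for off in range(0, SIZE, 4096):
    buf[off] = 1                      # fault every page into real RAM
faulted = vm_rss_kb()

buf.madvise(mmap.MADV_DONTNEED)       # free the backing pages, keep the mapping
shrunk = vm_rss_kb()

print(f"RSS after touch={faulted} kB, after MADV_DONTNEED={shrunk} kB")
print("first byte after DONTNEED:", buf[0])  # fresh zero page, not the old 1
```

For a private anonymous mapping, subsequent reads see zero-filled pages, which is exactly why a language runtime can use this to shrink stacks but a C program with live pointers into that region cannot.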

> C programs can't do it because pointers to stack allocated objects may exist.

They sure shouldn't exist to the unused region of the stack though; if they do, that's a bug (because anything could claim that memory now). You should be free and clear to release stack pages past your current stack pointer.


High level languages have entire runtime systems dedicated to managing resources like that. My language can allocate, grow, shrink and deallocate stacks dynamically. It has complete visibility into everything, and the stacks themselves are designed to be relocatable and position-independent.

In C it's impossible to even get the stack pointer without dropping to assembly or using compiler builtins. It's hard to know where the stack starts or even how big it is.


I do agree with this, but just to be clear (for others), you don't need any runtime managing resource lifecycles to know that there shouldn't be pointers into free memory, such as the currently unused portion of the stack.

There isn’t any operating system or compiler that does this today, and it probably isn’t worth it to pursue. Enlarging the stack via page fault is really expensive, so you would need really advanced heuristics to prevent repeatedly unmapping/remapping those pages.

The correct tool for a myriad of small tasks is coroutines / green threads / async tasks, so why spend any energy optimizing threads for that purpose instead of for what they are already good at?


In the general case it's absolutely not worth it. In the context of "you want a large number of OS threads, and are willing to go to some effort", it's theoretically something you'd want to do; suppose the startup for a thread is measurably a high water mark for stack usage, after startup the steady state stack usage won't exceed 20% of that high mark, and you'd like as many threads/stacks as possible.

Coroutines / green threads / async tasks will all do this too, but there's something to be said for relying on the system scheduler instead of bringing in your own in addition.


Stack memory is never unmapped until the thread terminates as far as I know. I don’t know of any kernel that does this, for precisely the reason you arrive at by the very last sentence.

It's just normal pages to the kernel. In theory, it's totally possible for the program to munmap some of its own stack's pages if it was sophisticated enough. Typical C programs just aren't capable of it, at least not without great effort.

On a 64-bit system, 10 GB of address space is nothing.

10 GB of RAM is certainly something though. Especially in current times.

Except if those threads are actually faulting in all of that memory and making it resident, they'd be doing the same thing, just on the heap, for a classic async coroutine style application.

If you have hugepages enabled, all of those threads are probably faulting in a fair amount of memory.

Only if you've actually faulted in 2MB contiguously already.

I'm not sure what approach you're suggesting?

Asking a more junior developer, or someone who "shows little interest in learning", to discuss their approach with you before they've spent too much time on the problem, especially if you expect them to take the wrong approach, seems like the right way to do things.

Throwing out a PR from someone who doesn't expect it would be quite unpleasant, especially coming from someone more senior.


This is how I try to approach it. I don't think it's a new thing for a new hire to come in hot and try to figure things out themselves rather than spending time with the team. Or getting lost down rabbit holes.

> Anyone know of a better way to protect yourself than setting a min release age on npm/pnpm/yarn/bun/uv (and anything else that supports it)?

With pnpm, you can also use trustPolicy: no-downgrade, which prevents installing packages whose trust level has decreased since older releases (e.g. if a release was published with the npm cli after a previous release was published with the github OIDC flow).

Another one is to not run post-install scripts (which is the default with pnpm and configurable with npm).

These would catch most of the compromised packages, as most of them are published outside of the normal release workflow with stolen credentials and are run from post-install scripts.


Yep! depsguard sets trustPolicy: "no-downgrade" where applicable.

> Why waste a round trip, build time, loss of flow and CI machine queue wait time when you can catch things early?

Because we want to be sure that the checks have passed, and that they have passed in a clean environment.

Contributors can, in addition, use git hooks, or run tests in watch mode, or use their IDE.

Also it's annoying to have slow git hooks if you commit often.


I haven't checked, but it would be surprising if the min-release-age applied to npm audit and equivalent commands.


> Cloud sql lowest tier is pennies a day

Unless things have improved it's also hideously slow, like trivial queries on a small table taking tens of milliseconds. Though I guess that if the alternative is google sheets that's not really a concern.


I process TB-size ndjson files. I want to use jq to do some simple transformations between stages of the processing pipeline (e.g. rename a field), but it is so slow that I write a single-use node or rust script instead.


Now I'm really curious. What field are you in that ndjson files of that size are common?

I'm sure there are reasons against switching to something more efficient–we've all been there–I'm just surprised.


> Now I'm really curious. What field are you in that ndjson files of that size are common?

I'm not OP, but structured JSON logs can easily result in humongous ndjson files, even with a modest fleet of servers over a not-very-long period of time.


So what's the use case for keeping them in that format rather than something more easily indexed and queryable?

I'd probably just shove it all into Postgres, but even a multi terabyte SQLite database seems more reasonable.


Replying here because the other comment is too deeply nested to reply to.

Even if it's once off, some people handle a lot of once-offs, that's exactly where you need good CLI tooling to support it.

Sure jq isn't exactly super slow, but I also have avoided it in pipelines where I just need faster throughput.

rg was insanely useful in a project I once got where they had about 5GB of source files, a lot of them auto-generated. And you needed to find stuff in there. People were using Notepad++ and waiting minutes for a query to find something in the haystack. rg returned results in seconds.


You make some good points. I've worked in support before, so I shouldn't have discounted how frequent "once-offs" can be.


The use case could be e.g. exactly processing an old trove of logs into something more easily indexed and queryable, and you might want to use jq as part of that processing pipeline


Fair, but for a once-off thing performance isn't usually a major factor.

The comment I was replying to implied this was something more regular.

EDIT: why is this being downvoted? I didn't think I was rude. The person I responded to made a good point, I was just clarifying that it wasn't quite the situation I was asking about.


At scale, low performance can very easily mean "longer than the lifetime of the universe to execute." The question isn't how quickly something will get done, but whether it can be done at all.


Good point. I said it above, but I'll repeat it here that I shouldn't have discounted how frequent once offs can be. I've worked in support before so I really should've known better


Certain people/businesses deal with one-off things every day. Even for something truly one-off, if one tool is too slow it might still be the difference between being able to do it once or not at all.


This reminds me of someone who wrote a regex tool that matches by compiling regexes (at runtime of the tool) via LLVM to native code.

You could probably do something similar for a faster jq.


I would love, _love_ to know more about your data formats, your tools, what the JSON looks like, basically as much as you're willing to share. :)

For about a month now I've been working on a suite of tools for dealing with JSON specifically written for the imagined audience of "for people who like CLIs or TUIs and have to deal with PILES AND PILES of JSON and care deeply about performance".

For me, I've been writing them just because it's an "itch". I like writing high performance/efficient software, and there's a few gaps that it bugged me they existed, that I knew I could fill.

I'm having fun and will be happy when I finish, regardless, but it would be so cool if it happened to solve a problem for someone else.


I maintain some tools for the videogame World of Warships. The developer has a file called GameParams.bin which is Python-pickled data (their scripting language is Python).

Working with this is pretty painful, so I convert the Pickled structure to other formats including JSON.

The prettified file has always been around ~500MB, but recently it expands to about 3GB, I think because they've added extra regional parameters.

The file inflates to a large size because Pickle refcounts objects for deduping, whereas obviously that’s lost in JSON.

I care about speed and about tools not choking on the large inputs, so I use jaq for querying and instruct LLMs operating on the data to do the same.


This isn't for you then

> The query language is deliberately less expressive than jq's. jsongrep is a search tool, not a transformation tool-- it finds values but doesn't compute new ones. There are no filters, no arithmetic, no string interpolation.

Mind me asking what sorts of TB json files you work with? Seems excessively immense.


> Uses jq for TB json files

> Hadoop: bro

> Spark: bro

> hive: bro

> data team: bro


made me remember this article

<https://adamdrake.com/command-line-tools-can-be-235x-faster-...>

  Command-line Tools can be 235x Faster than your Hadoop Cluster (2014)

  Conclusion: Hopefully this has illustrated some points about using and abusing tools like Hadoop for data processing tasks that can better be accomplished on a single machine with simple shell commands and tools.


This article is good for new programmers to understand why certain solutions are better at scale; there is no silver bullet. Also, this is from 2014 and the dataset is < 4GB, so there was no reason to use Hadoop.

The discussion we had here was involving TB of data, so I'm curious how this is faster with CLIs rather than parallel processing...


jq is very convenient, even if your files are more than 100GB. I often need to extract one field from huge JSON-lines files, and I just pipe them through jq to get results. It's slower, but implementing proper data processing would take more time.


More than 100GB can be 101GB, 500GB or 1TB+. I was speaking about 1TB+ files. I'm not sure you can get it faster unless you have a parallel processor.


are those tools known for their fast json parsers?


If we talk about TB or PB+ scales, then yes.


Oh, can you post some benchmarks? I didn't know that parser throughput per core would change with the amount of data like that.


> but so could FFI calls to another language for the CPU bound work

Worker threads can be more convenient than FFI, as you don't need to compile anything, you can reuse the main application's functions, etc.


True! Although in a lot of Node you DO have a compile chain (typescript) you need to account for. There’s a transactional cost there to get these working well, and only sharing the code it needs. These days it’s much smaller than it used to be, though, so worker functions are seeing more use.

I make my comment to note, though, that in many envs it's easier to scale out than to account for all the extra complications of multiple processes in a single container.

