hackthemack's comments

Opening a dual-stack socket does allow the service to accept both ipv4 and ipv6 connections. But I do not think that is what zadikian is getting at?
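At the socket level, dual stack is just one IPv6 socket with IPV6_V6ONLY cleared; a minimal sketch in Python (the default for that option varies by OS, so it is set explicitly):

```python
import socket

# One IPv6 listening socket that also accepts IPv4 clients, which
# show up as IPv4-mapped addresses (e.g. ::ffff:203.0.113.5).
# The default for IPV6_V6ONLY varies by OS, so clear it explicitly.
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
sock.bind(("::", 0))  # "::" is the v6 wildcard; port 0 = any free port
sock.listen()
```

That handles the application layer, but as noted below it says nothing about network-level identity.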

It does not address network-level identity and reachability. There is no default, globally routable mapping where owning an ipv4 address automatically gives you an equivalent identity in ipv6 that others can reach without translation infrastructure. The transition mechanisms are not uniform or canonical, and that increases complexity.

6to4 was an attempt at that kind of embedding and I do not think it succeeded?

The original specification of ipv6 did not directly address a translation mechanism? It seemed to rely on, well, everyone will go dual stack and we will shut down the old ipv4 stack. I think it should have addressed that at the beginning and provided the one canonical way of doing it, perhaps with timeline guidance to get the ISPs and backbone providers on board.


I think this is the kind of topic that can be endlessly debated because you can not easily go back in time and test out alternate hypotheses. I will say that I do not like ipv6 because it tried to fix multiple accumulated problems at once. I know! How contrarian! How can you be against trying to fix things? But all of those issues made ipv6 a dual stack solution that replaced ipv4 rather than extended it.

Address exhaustion, routing table scalability, restoring end-to-end routability, autoconfiguration, header simplification, multicast + anycast, security standardization.

Whereas, I think a lot of those things could have been solved in other ways, or more slowly. I would have preferred an ipv4.2-style 64-bit design because it would have prioritized

Address exhaustion, keeping backward operational compatibility, fewer changes to institutional knowledge, and still had incremental rollout (that I think would have occurred much more quickly than ipv6).


> keeping backward operational compatibility

It is not possible to be backwards compatible with a larger address space


You are right that a 32-bit ipv4 stack cannot understand a 64-bit packet format. The thing I am trying to get at is not native compatibility, it is operational compatibility via translation. I know, I know, you will probably say that is what ipv6 bridges do.

But in an ipv42 type setup, you would have deterministic embedding so that every ipv4 address is represented inside the larger address space. This would allow translation at network boundaries and let old systems continue to operate unchanged. Then the routers and systems would be upgraded incrementally. I think that is why it would have been upgraded more quickly.


> But in an ipv42 type setup, you would have deterministic embedding so that every ipv4 address is represented inside the larger address space

IPv6 supports that, but it ended up not getting used very much.

See https://en.wikipedia.org/wiki/List_of_IPv6_transition_mechan...
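The deterministic part does exist in the address architecture itself: RFC 4291 gives every IPv4 address a fixed IPv4-mapped IPv6 form. A quick sketch with Python's stdlib, demonstrating the mapping only (not that it is globally routed):

```python
import ipaddress

# RFC 4291 defines a deterministic embedding of every IPv4 address
# into IPv6: the IPv4-mapped form ::ffff:a.b.c.d.
v4 = ipaddress.IPv4Address("1.1.1.1")
mapped = ipaddress.IPv6Address("::ffff:" + str(v4))

# The mapping is mechanically reversible, with no translation state:
assert mapped.ipv4_mapped == v4
```

As the thread notes, these addresses are mostly used inside hosts (e.g. on dual-stack sockets) rather than announced on the global routing table.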


I remember reading about that a long time ago. I wonder why it never really caught on?

I think part of the problem is not so much a technical one, as a coordination issue. Who are you more likely to get on board? ISP and backbone providers. What is the path forward? Here is the recommended path forward, kind of thing.


We have that, it's called ipv6. A block of the v6 address space is set aside to hold all v4 addresses

The embedding I believe you are referring to is not part of the global routing model (maybe I am wrong?). What I am describing is making that kind of declaration central to the system: a deterministic, network-wide mapping of ipv4 into the larger ipv6 space. The translation in ipv6 ended up being handled by a mix of mechanisms after the fact, rather than a single, uniform mapping model tied directly to the address structure. I think part of the problem is they did not put that front and center, at the beginning, when doing the initial specification.

How would an embedding handle the other 99.999999999999% of addresses not embedded?

At least at first, you wouldn't, you'd embed all of them. Cloudflare has 1.1.1.1, so they get 1.1.1.1:: too.

Not doing that was one of the key points of starting fresh with IPv6. Doing that would mean that you could end up with billions of routes to consider.

One reason for the large address space is that networks could be allocated sparsely and left room to grow, thus allowing fewer routes in general.


Indeed doing it this way would keep the fragmentation, or at least delay fixing it. That's what these articles always overlook, the goal of ipv6 wasn't to just add more bits, it was also to defrag the routes.

I think instead of 1.1.1.1::, you could do 4:1.1.1.1::, wait for v4 to be gone, then start building new topologies in the other /8s. Not sure how hard that is, but it seems easier than what they're trying to do now.


Your proposal (translation) is addressed as point 3B in the article.

I went and re-read point 3B. I agree that some hypothetical ipv42 faces a translation problem.

But it does not follow that address design is irrelevant. The structure of the address space directly determines whether translation can be stateless and algorithmic.

In a hypothetical ipv42 design that preserves a deterministic embedding relationship between old and new addresses, translation at the edges could be largely stateless and mechanically reversible, reducing coordination overhead between operators and making reachability more predictable.
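To make that concrete, a toy sketch of what stateless, reversible translation would look like; the 64-bit layout and the prefix value are invented purely for illustration:

```python
# Hypothetical "ipv42": 64-bit addresses where every legacy IPv4
# address sits at a fixed, deterministic position. The prefix value
# below is made up for this sketch.
LEGACY_PREFIX = 0x0000_0004_0000_0000
PREFIX_MASK   = 0xFFFF_FFFF_0000_0000

def embed_v4(v4):
    """Statelessly map a 32-bit IPv4 address into the 64-bit space."""
    assert 0 <= v4 < 2**32
    return LEGACY_PREFIX | v4

def extract_v4(v42):
    """Reverse the mapping at a network boundary; None if not embedded."""
    if v42 & PREFIX_MASK == LEGACY_PREFIX:
        return v42 & 0xFFFF_FFFF
    return None  # a native ipv42 address with no legacy equivalent

# Round trip for 1.1.1.1: no per-flow state, just bit arithmetic.
assert extract_v4(embed_v4(0x01010101)) == 0x01010101
```

The point being that a boundary device needs no connection table to translate embedded addresses, unlike NAT64.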

In our world with ipv6, the transition seems to require a mix of dual stack, NAT64, DNS64, and tunneling approaches. The mapping between ipv4 and ipv6 is not uniformly deterministic across all deployment contexts.

Also, there is just a human factor. The mental gymnastics that go on. The perception of what is the way forward? With ipv6, it feels like everyone has to go get their ipv6 stack in order. With a hypothetical ipv42, where the ISPs and backbone providers can throw in the translation layers, it feels like, to me, they would have gotten on board much more quickly. Yeah, I know, it is just a feeling.


I agree with you about the embedded addresses, and I don't understand why the mapping was moved from the all-zeros prefix to a bunch of other mappings.

but the utility of this isn't that high. we already know how to handle 4-4 and 6-6 traffic just fine. but if a 4 host wants to talk to a 6 host, it just doesn't have the extra bits needed to describe it, so this doesn't facilitate 4-6 endpoint communication at all. this is true even if you substitute v6 with any other layer 3 with a larger address space.

where it does help is in a unified routing backbone, that would allow v4 prefixes to be announced in the v6 routing system. which is arguably useful.


I don't see how it matters that we forced people into ipv6 as well. Who cares? It's more about the difference in mental models that prevented adoption, especially among those who run the services that are on the internet.

This is a fake argument. No one is arguing for backwards compatibility.

But there was also no necessity to demand reshaping networks and changing address assignment in a way that made migration extremely work intensive and hard to deploy in parallel.


How would you do it?

I wouldn't have tried to reinvent DHCP, would have kept NAT, and generally would have attempted to keep the overall shape of a v6 network the same as v4 networks to ease transition of large deployments.

Ipv6 now has most of that - after years of resistance - which results in a mixed mess of "several ways to do it" approaches spiced with clients and equipment supporting a random set of them.


And yet 50% of the internet is using CGNAT just fine. The extra bits are just in a different place.

Yes, but CGNAT is an inherently stateful system and as a result will always be more expensive to operate per packet than a stateless router. The reason we are seeing steady (if slow) growth in native IPv6 is because the workarounds for IPv4 exhaustion cost money, and eventually upgrading equipment and putting pressure on website operators to support IPv6 becomes cheaper than growing CGNAT capacity.

There was a proposal called SIP that mostly focused on increasing address length (it got published as a historic RFC eventually): https://www.rfc-editor.org/rfc/rfc8507

It still had the problem that it made it harder for middleboxen (compared to IPv4) to look at port numbers.


How you would have implemented backward compatibility? I am interested to hear the general technical details of how this could have been possible.

I am mostly interested in two basic scenarios, with the expectation that changes are made on only one side: a host in the new addressing scheme connecting to one in the old and receiving data back, and a host in the old addressing scheme connecting to one in the new and receiving data back.


Yeah, but most* companies hire for just whatever X programming language they use, and do not care if you know how to program and do not care that you could pick up whatever X is in a couple of weeks. (Anecdotally for "most", I am sure there are exceptions)

Not FAANG. They just need to know you can leetcode extremely fast in arbitrary languages.

It is my understanding that the US Government set up a system, long, long ago, where the British would spy on Americans and then the British would supply the information to the NSA, thereby the NSA is not technically spying on American citizens.

Words mean nothing. They can be interpreted how ever they need to be interpreted by those in power.

https://en.wikipedia.org/wiki/ECHELON


Heard of that. If you have to do some spying, that indirect method might be preferable if the partner country’s spies are a little nonpartisan

australia and america have the same agreement. these countries may be dragons but live in fear of losing their hoard (borrowing that analogy from https://news.ycombinator.com/item?id=47963204)

> australia and america have the same agreement

This has no basis whatsoever in Australian law.

Procuring someone else to do it on your behalf is still an offence under s 7(1) of the Telecommunications (Interception and Access) Act 1979 (Cth).

TELECOMMUNICATIONS (INTERCEPTION AND ACCESS) ACT 1979 - SECT 7

Telecommunications not to be intercepted (1) A person shall not:

  (a)   intercept;

  (b)   authorize, suffer or permit another person to intercept; or

  (c)   do any act or thing that will enable him or her or another person to intercept;
a communication passing over a telecommunications system.


How about we refer to a primary source instead.

https://www.nsa.gov/Helpful-Links/NSA-FOIA/Declassification-...

Which provision of the treaty purports to enable this conduct?


Well you have reality and "laws".

In this case we have law, which gives effect to treaties and binds the employees of the intelligence agencies, and then we have the unsupported conspiracy theories that you’re mindlessly parroting here.

I, perhaps, owe my career to DOS. As a kid, everyone relied on me to get their games, soundcard, and disk drives to work. Juggling IRQs HIMEM and CHKDSK. Soundblaster 16 forever!

Kinda the same, but for me, it was more of a gaming-driven motive. I learnt the basic DOS commands by observing my cousin, so I could run Prince of Persia, GORILLA.BAS, Dangerous Dave etc, even when he wasn't around (it was his 286).

Later on, when I got my own PC (a 486) I got into scripting by customising my AUTOEXEC.BAT to display a menu so I could jump into my favourite game immediately after the PC booted. Of course, I also learnt about TSRs, conventional memory, tuning CONFIG.SYS etc just so that I could run some tricky games like BioMenace and OMF2097.

I even learnt basic networking and made my own null-modem cable, because I wanted to play OMF2097 with my friends without sharing the same keyboard (we would always fight over who got to use the right side of the keyboard, which was obviously the best side).

The first time I dealt with a virus was when I tried installing Prince of Persia 2 from a set of floppies I got from my friend. Dealing with the virus (it was one that "melted" the screen) unlocked a whole new world of malware research for me - and collecting malware became one of my hobbies. I also learnt hex editing and some assembly language because I wanted to cheat in Prince of Persia 2, and unlock shareware programs like Cheat Machine - and what I saw within the hex code of Cheat Machine blew me away, it opened another new world for me.

I built my first PC (a PIII 450, along with my first GPU - an nVidia RIVA TNT) - all parts carefully selected, so that I could play games with the best performance for the price.

In the Windows world, I was endlessly tuning my PC, diving into the registry, switching kernels (yes, there were third-party kernels you could install), even optimising file layout on the disk - all so that I could get the best gaming performance. I dived deep into scripting with AutoHotkey and Perl to make macros, bots and other random utilities for the games I played. After that I.. well, I could go on, but you get the picture.

So while DOS was my starting point (and a most fond memory), it was ultimately gaming that I owe my career to.


I started with a c64 but was stuck there with BASIC (the expansion port was broken, so unfortunately I could not dive into assembly).

Then I got an old 286 from someone (the hdd was not working) and spent most of my time in the "debug" command. Then I got a book on x86 assembly and DOS (interrupts etc), which was kind of hard in a non-English speaking country. I still somehow recall some pages from memory :)

I dived into cracking/cheats, and even made money on password recovery. How far that x86 knowledge carried me is unbelievable when I look back.


BioMenace and OMF worked out of the box for me but BioMenace took 8 minutes (!) to load. It would sit at the start screen (a load screen) hanging the entire time and then load really fast after the delay. Is this related to the configuration stuff you're mentioning?

This didn't occur on any other game for me, which I recall having over 50.


Yeah, my BioMenace used to hang at the loading screen as well, and as weird as it sounds, the fix was to move your mouse. I guess it was getting stuck waiting for a hardware interrupt or something. You could also start the game with your mouse unplugged of course.

As for OMF, it used to give me a "not enough memory" error until I got rid of all my TSRs (basically a "clean boot"), as it had a very high conventional memory requirement, almost close to 600k if memory serves me right. Actually even BioMenace had a high conventional memory requirement. It wasn't until few years later that I learnt that there was no need to get rid of all the TSRs, you could just tell DOS to load everything in the High/Upper memory areas by tweaking your CONGIG.SYS and AUTOEXEC.BAT. Not sure if this was something that was introduced in later versions of DOS or if it could do it all along. But it wasn't in any of the official manuals at the time, I found these tweaks in a game guide on some random BBS.


Yep, I once made a good living getting drivers into high memory and making those menus in config.sys. Installing network drivers and Netware client. People were quite happy to see me when I arrived.

For some reason I thought DOS had already been opened.


They have open sourced a few versions of DOS, including 4.0 I believe. This is just the latest.

For me, DOS 5.0 was the best. Would love to see that.

And of course we have both FreeDOS and SvarDOS now.


Totally, my first revision control system was unwittingly developed by me as a kid to keep progressive backups of my simcity cities.

Everything I ever needed to know about IT, I learned installing Warcraft 2 on DOS

That is sort of the inferred conclusion of the podcast, it is a bit of a racket by the big players.

I wrote a very short ebook on the subject some 10 years ago, 15 Questions About Online Advertising.

It should be available for free on Apple Books, Google Books, Kobo etc, or for 0.99 on Amazon.


I am not sure I agree with everything stated in the post. Although it is plausible.

From the places I have worked, I would say the reason there is so much bad code at big companies is nobody really cares. It is just a job. You could bust your butt and clean up a code base, or fix a 10 year old bug that has been costing the company half a million dollars a year, and it will not increase your salary in the slightest (Well, maybe a kudos and a pat on the back, but that is not guaranteed).

Office Space had it right in the scene where Peter is being interviewed by the two bobs.


My memory is a bit hazy, but I thought what you are describing is very common with people who flatline and come back? I have vague memories that a new anesthetic drug was developed and used on soldiers undergoing surgery in the Vietnam war, and there was something about it that caused the same kind of reaction in those who were put under. Again, my memory is very hazy on the subject. I should go do some research and update this comment (and I just might).

EDIT I did a little searching. I think it might have been an old report about Ketamine before it became more wide known. Apparently it was used during the Vietnam War.

https://en.wikipedia.org/wiki/Ketamine#Near-death_experience


I prefer Mullah Nasruddin's experience, which was that death is perfectly OK unless you disturb the camels, at which point they beat you. https://ia800908.us.archive.org/28/items/idries-shah-the-exp...

Amazing recommendation! I was hooked by that most powerful New York Times Bestseller-style endorsement, but from the 1600s: "Many say: I wanted to learn, but only found madness. But those who seek wisdom will not find it elsewhere."

I was going to mention ketamine. Famous for this type of effect. I don't want to belittle the meaningful experience, but the mind is a really powerful organ and it's a safer bet to treat these experiences as arising from mind rather than beyond it. Shrug.

>>it's a safer bet to treat these experiences as arising from mind rather than beyond it

Your brain has to be alive and exist normally for it to have these experiences. So it's quite obvious, nothing is coming from outside of it.

I do feel like it's some kind of brain rebooting itself or something like that.

It's sad babies can't tell us if they experience the same during childbirth, but I have a guess that they experience something similar as well.

It's just that the brain is starting up and checking if there is a 0xDEADBEEF or a fresh boot, giving you the primal experience of the brain not initialising any other interface (like eyes, ears, limbs etc). You experience what life would be if only the brain existed on its own, without everything else apart from it.


Safer why?

Lots to say there. The last few centuries have shown that many things which previously seemed inexplicable have been convincingly explained without resort to the supernatural. So a material basis of conscious experience seems a good bet.

Related, and hinted at by my original comment: the brain is capable of generating truly profound experiences. There is a tendency to ascribe them to something 'beyond ourselves' but again, advances in medicine and neuroscience have shown that these are explicable, subject to manipulation by chemical and electrical signals, which again suggests a material basis for conscious experience.


It's true that many things have yielded to science. And yet, what we discuss (the "hard problem of consciousness") hasn't. In my opinion, the burden is on you to prove that progress in other questions implies inevitable progress on an unrelated question that hasn't budged at all.

I said this in my other comment but, when you say the brain generates truly profound experiences, you beg the question (in the philosophical sense of the phrase). It's all in the word "experience." For in order for an experience to happen, some entity has to be experiencing. For there to be an illusion, there has to be an entity being deceived. And then how do you explain that entity? It can't be illusory experiences all the way down..

Any honest person has to see the connection between experience and the material brain. But I don't think it's honest to say it's obvious that experience is entirely material. The connection is deeply mysterious and may never be understood. I personally would rather accept that than claim that I don't really exist just so that everything can be explained.


The evidence is abundant and continues to chip away at the "hard problem". For example, we can through anesthesia turn on and off conscious experience. Through various drugs we can manipulate the character of conscious awareness, inducing ecstasy, visions, abiding serenity, terror, pain, grief... all states that were previously described as ineffable.

To say we haven't made progress on understanding consciousness is to move the goalposts; we continue narrowing the 'hard problem' and eventually it seems like there will be nothing left other than a misunderstanding, something like the resolution of Zeno's paradox.


I don't mean to be insulting but, you don't seem to understand what the hard problem is. It is not "is the brain intimately linked with conscious experience?" I would agree we've made progress on that question. It is the harder question of "why is there conscious experience at all? Why does it feel the way it does?" I would argue no progress has been made on this whatsoever, and possibly can't be done.

You can try to claim that this question is meaningless, but that doesn't seem principled to me, not to mention that it completely ignores the fact that gestures broadly all this is happening.


In light of the fact that the entire universe is perceptible only through conscious awareness, the 'hard' question is equivalent to the question "why is there anything instead of nothing?" When asked this way, it's clearly not answerable. Everything short of that seems to have a material answer.

Edit: happy to chat more about this, as it's deeply interesting to me and I do want to understand your perspective. It may need a longer form than this thread allows. I've added a link to get in contact with me on my about page.


Not answerable != immaterial/nonmeaningful.

I'd be happy to talk more as I am passionate about this. I think the idea that there is no soul is actually extremely dehumanizing, and involves someone essentially saying "I don't really exist" (even if they redefine "I exist" to mean something more Materialist, it is, in my view, still saying that). I'll ping you on bluesky.


Jacob's Ladder is a great movie based on this theme.

I find the topic of the morality or effectiveness of the H-1B a little bit intractable to reason about rationally. Consider a simplified model of the system.

You have 2 countries, C1 and C2.

Scenario 1: C1 has enough demand for 100 tech jobs. C1 only has 50 qualified natives for 100 tech jobs.

The wages of C1 go up because there is more demand than supply.

Scenario 2: C1 has enough demand for 100 tech jobs. C1 only has 50 qualified natives for 100 tech jobs.

Now you put in an H-1B visa program that pays the same as the prevailing wage of a local native. C2 has enough candidates to fill the other 50 positions.

The wages of C1 will NOT go up because now supply matches demand.

Is Scenario 2 fair? Who gets to decide what fair is? Given the above system, I think I would argue that H-1B visa programs cause wage deflation in C1, even if they fill jobs that would not otherwise be filled and even if the jobs pay the exact same as a native worker earns.

I am not dogmatic about that though. Willing to hear a counterpoint to scenario 2.


Scenario 2 now has country C1 with 100 tech workers, and they got their pick of the 50 best workers from (lower paying) country 2.

Country 1 is now a better place to start a new company or expand your existing company because all the best workers in the field work in country 1. Starting the same business in country 2 will almost certainly fail.

This is literally why the Bay Area became the world’s most important tech hub and isolationism will allow (and is allowing) Chinese tech to jump ahead of the US. The government doesn’t care about losing a literal arms race, largely to reduce the political power of California. By no longer educating and welcoming the world’s brightest engineers the USA is going to be reduced to support and manufacturing roles where its large workforce will have to compete with everyone else and salaries will tumble.


It's not so simple because:

1. Companies can hire overseas. There's some cost to it in terms of added friction, but if wages rise enough in C1, then it's worth the friction to hire in C2 instead.

2. Workers also consume and invest, raising demand for other jobs. Employment is not a zero sum game, especially at the macro scale.


Scenario 2 makes sense. I think the counterpoint people bring up is to just stick to Scenario1 and let the salaries go up and let people jump ships every now and then for a raise but they forget that C1 is a ultra-capitalist country.


"By a continuing process of inflation, government can confiscate, secretly and unobserved, an important part of the wealth of their citizens." -- John Maynard Keynes

