They're great but I wish Hetzner had a US (or CA) east coast presence, the latency of going across the ocean is really troublesome. They have some presence for their cloud offering, so they at least have some experience with the idea.
This is a pretty interesting solution. I could see how this could be useful for certain kinds of problems (as part of DDoS attack mitigation, for example).
The bulk of the comments in here are focused on comparing Larry Ellison to a lawn mower, so I'll try a new tack and say that I'm genuinely confused at what the value prop of Oracle is.
Given the history of their business model being licensing of important databases that are hard to switch off of, I've actually made a point to avoid using Oracle as much as possible (even going so far as to leave MySQL when Oracle acquired it, and I've never started a fresh project in Java, which they used as the basis for their lawsuit against Google).
From my chair, they make an expensive database they try to sell to golf executives. There are innumerable equal (better?), free alternatives, and most startups are founded by broke coders in bedrooms who choose those instead and stick with the devil they know. And they have an uncompetitive cloud service? Enlighten me on what I would use Oracle for, I'm genuinely curious.
Short answer: today I think there is genuinely nothing that anyone should use Oracle for, but their database used to be seriously far ahead of the competition.
A very long time ago (circa 2000) there were basically two databases that worked for use cases where you needed high availability and vertical scalability: Oracle and Sybase. And Oracle was really the only game in town if you actually wanted certain features like online backups and certain replication configurations.
At the time, MySQL existed and was popular for things like websites, but it had really hard scalability caps[1] and no replication, so if you wanted HA you were pretty much forced to go to Oracle. Postgres also wasn't competitive above certain sizes of tables that seem pretty modest now but felt big back then, and you used to need to shut Postgres access down periodically to do backups and vacuum the tables, so you couldn't use it for any sort of always-on use case.
Oracle also had a lot of features that now we would use other free or cloud-hosted services for like message queues.
[1] In particular, multiple concurrent readers would permanently starve writers, so you could get into a situation where everyone could read your data but you could never update it. This was due to a priority inversion bug in how MySQL used to lock tables.
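The failure mode described here can be sketched with a toy lock model (purely illustrative, not MySQL's actual locking code): readers keep getting admitted while a writer waits, so an unbroken stream of overlapping reads starves the write forever.

```python
# Toy simulation of a reader-preferring table lock. New readers are
# admitted even while a writer is queued -- the starvation bug.
def simulate(events, read_duration=2):
    """events: list of (time, kind) tuples with kind in {"read", "write"}.
    Returns the time the writer got the lock, or None if it starved."""
    active = []                 # finish times of readers currently holding the lock
    writer_waiting = False
    writer_started = None
    for t, kind in sorted(events):
        active = [f for f in active if f > t]      # expire finished readers
        if writer_waiting and not active:
            writer_started = t                     # lock finally free
            writer_waiting = False
        if kind == "read":
            active.append(t + read_duration)       # reader jumps ahead of the writer
        elif kind == "write":
            if not active:
                writer_started = t
            else:
                writer_waiting = True

    return writer_started

# Overlapping readers arrive every tick; the writer shows up at t=1 and,
# because the lock is never simultaneously reader-free, never gets in.
events = [(t, "read") for t in range(20)] + [(1, "write")]
print(simulate(events))  # None -- the writer starved
```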
We were building a payments system in the early 2000s and got a diktat to not use Oracle. The amount of stuff we had to build to satisfy the availability and durability requirements was so huge it consumed the first few years of work. We didn't get to the business side of things until much later. The funny thing is we ended up giving up on MySQL and went back to Oracle after all that work. The whole thing was scrapped after a couple of years.
To get to the level of scale that Oracle can handle, we had to build sharding and cluster replication from scratch. It still didn't get to even 1/10th of a single Oracle node. Obviously we made a lot of poor architecture decisions as well - in hindsight, of course.
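For flavor, the core of hand-rolled application-level sharding often looks something like this (the names and the naive modulo scheme are made up for illustration, not what this team actually built):

```python
# Minimal sketch of application-level sharding: route each key to a
# fixed database node via a stable hash.
import hashlib

class ShardRouter:
    def __init__(self, shard_dsns):
        self.shards = shard_dsns          # e.g. one connection string per node

    def shard_for(self, key: str) -> str:
        # Stable hash so the same account always lands on the same shard.
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        return self.shards[h % len(self.shards)]

router = ShardRouter(["db-0", "db-1", "db-2", "db-3"])
print(router.shard_for("account:12345"))  # deterministic, one of db-0..db-3
```

The pain starts where this sketch ends: naive modulo means adding a node reshuffles most keys (hence consistent hashing), and cross-shard queries and transactions all become your problem.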
Yes, although a lot of the most advanced PostgreSQL features that would bear comparison in this discussion are relatively recent. PostgreSQL didn't have them in the 2000s, either, and where it did, the ergonomics were much worse than they are today.
I use Patroni (https://github.com/patroni/patroni) (no affiliation), which is a really nice and reliable high-availability solution for PostgreSQL that provides automatic failover, not just active-standby nodes with manual failover.
As I understand it, with plain PostgreSQL you would have to script a separate watchdog process to get high availability.
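For a sense of what that automation looks like, here is a hypothetical minimal Patroni node config (hostnames, addresses, and the password are made-up placeholders; check the Patroni docs for the full option set). Patroni keeps a leader lock in a DCS such as etcd and automatically promotes a standby when the leader's lock expires:

```yaml
# Illustrative Patroni config for one node of a two-plus-node cluster.
scope: demo-cluster          # cluster name, shared by all nodes
name: node1                  # unique per node
restapi:
  listen: 0.0.0.0:8008
  connect_address: 10.0.0.1:8008
etcd3:
  host: 10.0.0.10:2379       # the DCS that arbitrates leadership
postgresql:
  listen: 0.0.0.0:5432
  connect_address: 10.0.0.1:5432
  data_dir: /var/lib/postgresql/data
  authentication:
    superuser:
      username: postgres
      password: secret       # placeholder
```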
I remember when Salesforce was a true innovator. I attended a Salesforce conference the other day. Sounds like they're all-in on their "agentic" vision. I miss when the Salesforce conferences were tailored to nerds like me and included products that the CEO would never hear about but my team would fight tooth and nail to keep in the budget.
I only ever worked with the Linux/Windows variant. I can’t believe I am saying this about an IBM product, but I found it to be actually rather pleasant to work with.
As an IBM hobbyist user, picture something worse than VMS in 'hackerdom'. IBM's mainframe OSes are like NT or OS/2 taken to the total extreme with objects, because by default you don't see files but objects, which might contain files... or not.
Imagine the antithesis of Emacs. That's an IBM environment with 3270 terminals and obtuse commands to learn.
As somebody who administers several large DB2 clusters all linked together with multiple replication modes (HADR, SQLREP) for an emergency services communication platform, I can confirm this. It's pretty damn rock solid even on Linux these days.
Kind of, but there are some subtle differences in my opinion. Oracle is top-to-bottom evil, whose business model basically boils down to screwing over their clients and everyone else at every possible opportunity, comparable to the likes of McKinsey or Accenture.
IBM is a bit more nuanced. My wife grew up in an IBM town and a lot of her family and her friends’ families used to work there in the 70s and 80s. People, especially the engineers, used to take pride in their work there.
You had to be careful with MySQL back then, as constraints were syntactic sugar and not actually enforced. PostgreSQL was indeed much tougher to manage but more full-featured.
Really, you've always had to be careful with MySQL. It really was the PHP of RDBMSs.
The silent "SHOW WARNINGS" system, nonsense dates like Feb 31, implicit type conversions like converting strings to 0s, non-deterministic group by enabled by default, index corruption, etc.
Not just constraints, transactions were also a no-op. The MyISAM engine is still available in modern versions if you want to experience this, it's just not default anymore.
Yep, I've had to work with a MyISAM project with no transactions - it's a reasonably simple system thank goodness but a little scary all the same (and lots of boilerplate to deal with partial failures).
I love Postgres in 2026, but it really was not a viable enterprise option before 2010. MySQL had decent binlog replication starting in 2000 which made up for a lot of the horrible warts it had.
Not to mention there was Percona, and both Google & Facebook contributed a number of patches that made monitoring MySQL top notch (such as finding slow running queries, unused indexes, locks etc.).
SQL Server was pretty good until they went the Oracle way with their licensing shenanigans, but even with that they were a lot cheaper than Oracle. In fact, SQL Server was one of the few great products that came out of MS.
Having written a Rust client for it, I can say even their documentation is absolutely stellar. You just read how the protocol works from the PDF and implement it.
50GB sounds like nothing, but I believe you on the quality. Most big-bucks paid databases need to be high quality, though, otherwise they would fail as products.
My first job was a SQL DBA.
15 years and 5 companies later, this startup I'm at (which got acquired recently), still uses SQL Server. It has stood the test of time.
Actually one of the very few really good MS products at all?
Visual Studio is also great and widely adopted.
But what else do they have? I had some good experiences with Exchange years ago, but this is just my personal experience, since most people seem to hate it.
What else do they have that is considered a good/solid product that you would recommend to someone?
I disagree, as I was running clustered SQL Server 6.5 and 7 in 1998 for hundreds of concurrent users doing millions of reads per hour on NT on basically commodity boxes. Replaced it with Oracle for 100x the cost and lost performance.
I think even back then you were usually better off with distributed databases running mysql or postgres over Oracle. Although people liked to think a giant Oracle db was better.
For others like me who might be skeptical of throughput quoted in any metric other than per-second (and are used to seeing hours/days figures used to inflate numbers), I think millions per hour is actually quite high for 1998.
Assume that means 5,000,000/hour. 5M/hr => ~83k/min => ~1,400/s. That is impressive for the late 90s. I was generous on what "millions per hour" meant, but even if it's 2.5M/hr that would be ~700/s, which is still quite good.
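The back-of-envelope numbers check out:

```python
# Checking the throughput arithmetic above.
per_hour = 5_000_000
per_min = per_hour / 60          # ~83,333/min
per_sec = per_hour / 3600        # ~1,389/s, i.e. roughly the "1400/s" figure
print(round(per_min), round(per_sec))

# The conservative reading:
print(round(2_500_000 / 3600))   # ~694/s, roughly the "700/s" figure
```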
What do you mean by a distributed database running MySQL or Postgres? Even today you can't have a distributed DB running (real) Postgres; it doesn't do multi-master clustering.
DB2 was crazy good for certain use cases but very weird. For one, the pattern for DB2 efficiency was pretty much the exact opposite of every other database. Every other database would say "Normalize your tables, use BCNF, blah blah, small reference tables, special indices etc".
With DB2, the pattern was "denormalize everything into one gigantic wide table". If you did that, it was insanely fast for the time and could handle very large datasets.
I have not had much experience with DB2, but given that the relational data model and normalization were invented at IBM (Codd) and IBM's implementation of those concepts was DB2, DB2 performing poorly with a normalized data model seems strange.
My recollection was that DB2 did not support multi version concurrency control like Oracle and Postgres did. The result was a lot of lock contention with DB2 if you were not careful. MVCC was eventually added to DB2, but by then it was too late.
DB2 had/has excellent data compression capabilities, achieving ratios for OLTP that would only be equaled by later OLAP columnar systems.
For raw performance needs, many financial services schema were going to be denormalized anyway. Compression was a great way to claw some of the resulting inefficient storage back.
SQL Server 2000 was well received in the segments that mattered as a challenger. Oracle was in first place running on Unix. However, it was viewed as expensive and the procurement experience was regarded as unpleasant. People wanted competition even if they didn't think SQL Server, or another alternative, could unseat Oracle for the most important stuff.
Windows was really picking up steam and there was a move to web development in the Windows-based developer space. Visual Basic and Delphi were popular but desktop development had peaked. ASP was for building your apps and SQL Server was the natural backend. SQL Server fed off this wave. It wasn't dislodging Oracle, but rather than every app being built on Oracle, more apps started to use SQL Server as the backend.
Then ASP.NET appeared on the scene and demand grew even more. It was a well-integrated combo that appealed to a lot of shops. I started my career in a global pharma and there was a split in the tech budget. IT was a Windows shop for many reasons and ran as much on SQL Server as possible. R&D was Unix/Linux with Oracle. There was a real battle going on between .NET and Java (how about some EJB 1), and the databases followed the growth curves of both rather than competing against each other.
The SQL Slammer worm brought a lot of attention to the product. There were instances running everywhere and IT didn't expect so much adoption. Back then you had a lot more servers running inside offices than you do today. My office was much like my homelab today. This validated the need, so the patches got applied, IT got involved in the upkeep, and adoption continued to grow.
Oracle's sales folk and lawyers were horrible to deal with. I had some experience of this directly as they tried pushing Java-related products and my boss dragged me into the evals. One of my in-laws was outside counsel in the IT space doing work with enterprise-sized companies. He claims they are the worst company he's ever had to deal with; they wouldn't delegate any decision-making locally, which endlessly dragged out deals. They had a good product but felt they could get away with anything. Over time he saw customers run lots of taskforces to chip away at Oracle usage. This accelerated with SaaS because you could eliminate the app AND Oracle in one swoop.
I remember talking to one tech leader at the time who described it as "surprisingly good, for a Microsoft product", which sort of summed it up. But it had similar characteristics to Sybase, except more so, because you had to run it on an NT server (IIRC), so there was an even harder cap on the scale of hardware you could run it on, whereas you could run Oracle on really top-end SPARC hardware that was way more powerful than anything that ran Windows.
Depends if the director or VP liked Microsoft or not. I’ve worked at places that loved SQL Server and Microsoft server products in general. Others did not use them anywhere in their datacenter and wouldn’t have considered them. Oracle, IBM, and Microsoft adoption was very dependent on whether the people in charge liked them. Not so much on technical merits.
SQL Server was very good and used in a lot of enterprises. ime the decision between Oracle and SQL Server tended to be down to whether the IT department or company was a "Microsoft Shop" or not. There were a lot of things that came free with SQL Server licenses and it had really nice integrations with other Microsoft enterprise systems software and desktop software.
Oracle was definitely seen as the more mature and resilient (and expensive!) RDBMS in all the years I worked in that space. It also ran on Unix/Linux whereas SQL Server was windows only. Many enterprises didn't like running Microsoft servers, for lots of (usually good) reasons.
We still have an Informix DB for an old early-2000s application we have to support. Shit runs on CentOS 5, lmao. It's actually not too bad: around v12 there are CDC capabilities (you have to build your own agent service to consume the data) that made the exercise of real-time replicating the app DB into our EDW a cakewalk. Which, ironically, has greatly extended the lifespan of the application, since no one has to query Informix directly anymore.
My experience at the time was that it was perceived as not serious enough and lacking important features. If my memory isn't very bad, I believe as late as 2000 SQL Server still only supported AFTER triggers.
In my experience in the late 90s and early 00s, besides Oracle and Sybase, DB/2 and Informix were also regarded as good. Oracle was considered the best though.
Thanks for the clarification, I guess my memory is very bad after all! :)
Do you remember if that was a recent addition?
Full disclosure: I was quite the newbie back then and most of what I "knew" about SQL Server was what the more experienced coworkers told me. This was a very IBM-biased place, so I'm not surprised they would have stuck to some old shortcoming, like people who still talk about bad MySQL defaults that were changed at least 10 years ago.
Up until that job (which was my second Actual Formal Job), all my DB experience had been with either dBase (III Plus or IV, I think) or Access, so this was a whole new world for me.
It was through MS SQL Server that a colleague taught me about backups and recovery, after I ran an update in prod but forgot to include the where clause ... :)
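The lesson generalizes: wrap risky DML in an explicit transaction and check the row count before committing. A sketch using Python's built-in sqlite3 (standing in for SQL Server here; T-SQL's BEGIN TRAN / ROLLBACK works on the same principle):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 200)])
conn.commit()

# The classic mistake: an UPDATE with the WHERE clause forgotten...
cur = conn.execute("UPDATE accounts SET balance = 0")
print(cur.rowcount)          # 2 -- every row was touched, not just one

# ...but while the transaction is still open, the row count is the tell,
# and a rollback undoes the damage before anyone notices.
conn.rollback()
print(conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0])  # 100
```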
SQL Server was Sybase until (I think) version 4.9, just rebranded as Microsoft SQL Server.
Then the two versions split and I don't think that any of the Sybase source code remains in what is SQL Server today.
That said, a lot of the concepts (like a significant number of system stored procedures) and also T-SQL remain almost the same, with small differences (except for system functions, where SQL Server has a lot more functionality).
When you come from the Sybase world, getting a start on SQL Server is quite straightforward when it comes to handling the database.
Internals and other low level nuts and bolts differ nowadays, of course.
MS SQL Server was a cheaper, friendlier drop-in replacement for Sybase in the early 2000s.
I built apps in an active-active bidirectional replication telecom Sybase environment and was deeply involved in migrating it to MS SQL server in the early 2000s. I remember a fair amount of paranoia and effort around the transition as our entire business and customers' phone calls depended on it (for "reasons") but in hindsight it went quite smoothly and there were no regrets afterwards.
Then Microsoft went and added a nice BI stack to the whole thing, which added a new dimension of value creation at a new low price point.
With open arms, especially given its graphical tooling.
Starting with version 7.5 it was quite alright; however, being Microsoft, it has mostly been used in Microsoft shops, alongside VB, MFC two-tier applications, ASP, .NET, SharePoint, Dynamics, ...
IIRC they also had the first native (100% Java) JDBC driver, so you could run from any platform and without weird JNI locking issues when using threads.
> A very long time ago (circa 2000) there were basically 2 databases that worked for use cases where you needed high availability and vertical scalability
... and both of them were Postgres.
I used it in the late 90s for the backend for websites written in PHP3, but everyone said this was ridiculous and silly and don't you know that everyone's using the MySQL thing.
So I used this MySQL thing, but by about 2005 I'd gone back to powering my lawnmower with a 500bhp Scania V8 because I just preferred having that level of ridiculous overkill in an engine.
Nowadays? Key/Value store in RAM is probably fine for testing -> Sqlite is often Good Enough -> Ah sod it, fire Postgres into a docker container and warn the neighbours, we're going down the Scanny V8 route yet again.
Sort of. The VirtualBox Extension Pack is free for personal or educational use. It is explicitly not free for work use[0]. You can download it for free, but accepting the license to do so obligates you to pay them for it.
I was around back then and I call Bullshit on everything you claim. There were more database options in 2000 than there were in 1996. Even before that there was FoxPro… c’mon man. Oracle’s only value was they built a NO EXIT clause into their contracts…
Oracle was the ONLY game in town if you were serious. It was like buying IBM in the 80s. Source: programmed PL/SQL and embedded SQL at the Toronto Stock Exchange in the early 90s, on SCO Unix and Oracle.
It was soooooo the only game in town that they were like NVDA now: yeah, you've got alternatives, but you really don't, and hence you charge insane prices and everyone pays up with a grin on their faces. Oracle was 100% the only game in town if you were serious!
You'd have to assume businesses were insane/stupid to go with Oracle to the tune of billions and billions of dollars if you believe that they had zero value to sell.
Oracle buys smaller enterprise companies with rich customers that were already using Oracle DB, or makes them rely on it, then cashes in on licensing.
So for example, they bought Micros (most EFTPOS terminals in the world are powered by them, I think), they bought Cerner (big supplier of IT to healthcare companies), they bought PeopleSoft. If your big company isn't using SAP, it's probably using that. Mundane but essential things for large businesses: CRM, ERP, payroll/HR.
So that's what you'd use Oracle for. Or perhaps you wouldn't use Oracle, then Oracle would buy your IT supplier and either you have to change your IT supplier (costing you millions) or congrats you're an Oracle customer now.
Oracle used to have EBS which competed primarily in SCM/Procurement (or ERP) domain.
They acquired the leaders in HCM (PeopleSoft) and CRM (Siebel) domains and combined them all (SCM, HCM, CRM) into a single (new) product called Fusion.
Their pitch was also that PeopleSoft and Siebel required people with different skill sets (PeopleCode for PeopleSoft, Siebel eScript for Siebel), but with Fusion, customization would not require any programming language knowledge, and in the rare cases that it did, you just needed Java. This meant it was cheaper for enterprises, which was a big selling point and helped a lot of them reduce their IT cost.
I presume what that actually means is that the Java programmer needs to learn the now-Java APIs of what PeopleCode did, and the now-Java APIs of what eScript did, and they're all probably JavaBeans...
But I suppose that does help a company get staff in the door if the job advert just says "Java programmer", they won't be immediately scared off.
Your phone calls and SMS messages that touch the phone network, likely touch Oracle. Yes, nearly all of them.
For a tech-adjacent example of an acquisition of an entrenched supplier, look at Tekelec, a telecom hardware and software vendor which Oracle purchased in 2013[1].
Tekelec had a number of products but Oracle really cared about one: the EAGLE family, which is a suite of hardware and software for handling network signaling and routing over SS7. For any customer, EAGLE sits at the core of their networks and it is why your calls actually get connected and billed correctly.
EAGLE had a customer base that included nearly all of the important global telecom carriers. From the press release:
> Tekelec’s technology enables service providers to deliver, control and monetize innovative and personalized communications services and is utilized by more than 300 service providers in over 100 countries.
Verizon[2][3] runs EAGLE STP in their core, as does AT&T[4] (f/k/a SBC). Old business win press releases from Tekelec mean Bell Canada and Rogers still likely do. Based on job postings, Vodafone and Virgin Mobile use EAGLE STP for exchanging SS7 messages to/from roaming partners. And from public RFPs, the US Department of Defense[5] runs their own private phone networks, with EAGLE STP at the core.
Given how prevalent EAGLE deployments were in the early 2000s, how SS7 is needed to make the phone network functional, and how STPs are fixtures that do NOT get swapped out often, I feel very confident in saying that Oracle has had a supporting hand in most, if not all, of the phone calls and text messages you've placed since 2013.
It's not used in 4G/5G; Diameter is used instead. Most cellular telcos are ending or planning to end their 2G/3G networks (3G moreso than 2G). In the US, the FCC continues to push for IP-only networks, and AT&T is turning off their landline services (though they keep pushing out the date, it's currently at 2029). Obviously, the US is not the only country, but this seems to be the global direction.
Nonetheless, I can imagine that Oracle will still worm its way into telco recordkeeping and billing systems even if the protocol changes...
Oracle and Java are deeply embedded in US gov work. How deep? Let's just say a large number of classified developer jobs hire for Java. Ellison has been a huge proponent of a surveillance state, and that likely ingratiates him with certain three letter agencies.
The only developers I know who write Java full time work in systems that take pictures of things from far away.
My employer is actively hiring java engineers and we don't "take pictures of things from far away".
There are vibrant java user's groups all around the world. There are many java community conferences. The most recent redmonk language rankings[0] show java at #3.
> The only developers I know who write Java full time work in systems that take pictures of things from far away.
We all have different circles. I work for a bank and the bulk of the LOB code here is Java (or something that runs under a JVM). There are no Oracle databases as far as I know, but my visibility is limited.
Yeah, lots of corporate backend code is Java, and Java is a great choice for backend/server code. I've never seen Oracle anywhere, though, not in banks and not in governments. I've mostly seen Postgres and MSSQL and some MongoDB.
I've been working in Wall St. banks for the past 30 years, and I've never used an Oracle database. The investment banks were all Sybase shops in the 90's, and a bunch of them still are. In my experience those that do move are most likely to go to SQL Server, since its Sybase roots make the transition a little easier.
When something has been there for 20+ years switching costs are big.
I work for a pretty big one and we’ve got an ExaCC or twelve.
Regulatory thing for us: some workloads need production support for the data tier for various boring legal and compliance reasons, so our choices are kinda limited to Oracle and, these days, Mongo, who have made massive inroads into enterprise in the last couple of years.
There was a time (around the beginnings of Mac OS X) when Java was considered a first class citizen in Mac OS X, next to Objective-C.
Some NeXT products like WebObjects got ported to Java (and ran not only the iTunes backend but also things like the original Dell online store) and there was something called the Java bridge which allowed you to program Cocoa applications with Java.
It wasn't just Apple; in the late 90s/early 2000s there was a not insignificant number of folks in business/academia who thought Java would take over the world. Windows XP also shipped with an embedded JVM for running Java apps out of the box at one stage too, just before Microsoft doubled down on C#/.NET.
Along with Mac OS X, Apple's Xcode IDE even had native Java project support briefly in this era as well.
Yup, this brings back my academia years in 1998, sitting with KDE 1.0 and Java 1.1. It was mostly Java, then Perl as this fabulous scripting/glue language, teeny bit of C and MIPS Assembler for the low level courses.
We didn't touch a fairly esoteric language called Python much. Because we saw the future. Java and IPv6 was about to change everything.
Are you talking about Minecraft? Minecraft was known for working only because it is so simple graphically compared to other games. It was said to allocate and deallocate hundreds of megabytes of memory every frame.
Minecraft still runs, and it may look graphically simple but it's actually pretty complex (as it has millions of blocks in memory at any time and has to cull which to not render, etc).
Minecraft does do some horrible things to the JVM, but it's strong and can take it.
Because it is graphically simple. That's not even a CPU issue.
> millions of blocks in memory at any time and has to cull which to not render, etc
128x128x128 is already 2 million voxels. Minecraft and any other game like that can use an octree or some variation to avoid dealing with blocks individually. When things are in the distance, occluded, or empty space, you cull at a coarser level of the octree.
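A toy sketch of that coarse culling idea (the chunk size and world layout are illustrative): bucket voxels into 16^3 chunks and only descend into non-empty chunks, so a 128^3 world costs 512 chunk tests instead of ~2 million per-voxel tests.

```python
# Coarse-grained culling: group voxels into chunks and skip whole
# empty chunks -- one level of the octree idea described above.
CHUNK = 16  # 16x16x16 voxels per chunk

def occupied_chunks(blocks):
    """blocks: set of (x, y, z) filled voxel coords -> set of non-empty chunk coords."""
    return {(x // CHUNK, y // CHUNK, z // CHUNK) for (x, y, z) in blocks}

# A 128^3 world has 2,097,152 voxel positions but only (128 // CHUNK)**3 = 512 chunks.
# Fill just one small corner of the world:
blocks = {(x, y, z) for x in range(8) for y in range(8) for z in range(8)}
chunks = occupied_chunks(blocks)
print(len(chunks))  # 1 -- the renderer tests 512 chunks, then descends into this one
```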
Java can be fast compared to scripting languages, but I don't know why Minecraft would be an example. It is a simple game that was poorly written and had to be rewritten in C++ for other platforms. It got by on being simple while running on full PCs.
Yeah I can see that, even when a recruiter contacted me a few years back for a data engineering position, they were looking strictly for experience in JVM-based languages.
Yeah, I wish they did more Clojure as well. As far as I could tell, it was kind of snuck in about ~12 years ago, and it kind of grew from there.
To be fair, I know people hate on it, but I honestly do kind of think Objective C is kind of a cool language. I think it's ugly but I think the message-passing style semantics are kind of neat.
Adding Smalltalk message passing as an extension to C was very clever and allowed writing very efficient code and dynamic high level UI code in a single language. The semantics were kept clear by the distinctive syntax of message passing. And allowed access to any existing C libraries.
Objective C is neat inasmuch as it managed to add a simple but practical object system to C without all the added baggage of C++. It wasn't without its downsides - in particular, the overhead of a method call was significantly higher than in C++ - but I still appreciate it for its minimalism.
> entire backend for iTunes (Jingle) was written in Java
Wasn't that because iTunes started out as a NextStep WebObjects application? WebObjects started on Objective C, transitioned to a framework for Java in early 2000's, came to Apple with the Next acquisition.
The financial market infrastructure heavily relies on Java. Transactions at commercial banks across North America are mostly executed on Java codebases.
There are probably millions of corporate projects written in Java. One of the reasons Oracle bought Sun Microsystems (who invented Java) was because Oracle itself had written so much middleware crap in Java.
Both Java and C#/.NET are super-popular in Enterprise land, with the choice between them mainly being if the enterprise is a Microsoft shop or not.
Everything SAP touches is written in Java too, and it's boring old payroll stuff. There's the entire Android user interface with millions of Java-only app developers.
Oracle may well be in bed with the spooks, but it's not a Java-specific thing.
I worked for a drug discovery company doing Java [1] since we were using Kafka Streams very liberally, but everything was done with the OpenJDK Temurin distribution. It was drilled into our heads on the first day: do not install anything from Oracle. I think they were afraid of some weird lawsuit unless they bought an expensive license.
I totally get it, but it made me a bit sad, because they were even wary of something like GraalVM for some projects where startup time was becoming an issue. I think the Community Edition of GraalVM would have been fine, but they had this "we don't touch anything with the Oracle name directly attached with a ten-foot pole" attitude. Which is totally fair.
[1] It's not hard to find which one but I politely ask that you do not post it here in relation to this thread.
The Ahead of Time compilation is pretty nice for some stuff. Generally startup time is significantly improved, so if you're writing command line tools in particular it can be cool.
Yes but Java has historically had pretty long startup times. If you want to write Java specifically then GraalVM is probably still the best option for AOT.
Useful for letting us know that GP has a limited network and situational awareness.
I always find these “relative to me” claims not very informative on the internet. But it's fun when, every once in a while, you notice the claimer is Bill Joy or Linus Torvalds or someone where the relativeness holds weight.
> The only developers I know who write Java full time work in systems that take pictures of things from far away.
Huh??? Google, the search engine part, is written in Java as far as I know. Yandex uses Java extensively. Odnoklassniki, once second most popular Russian social network, is written in Java. Banks like Java. Android apps are written in Java (and Kotlin, which I consider an abstraction over Java).
And that's only what I can remember right away. A sizable chunk of the world runs on Java.
Oracle effectively still largely controls the evolution of the language and of OpenJDK, and Java is still a registered Oracle trademark. While it could be forked and renamed if necessary (as happened in the javax –> jakarta transition), that would likely end up being quite disruptive and costly.
That being said, Oracle’s valuation is based on their huge integrated ecosystem. That they also control Java, while not insignificant, probably only plays a minor role there.
I think that overstates it; there is a lot of Java in the enterprise still. It's losing share to Go and TypeScript, and in certain cases Rust, but it's still around and doing just fine (to my annoyance).
That is a silly take. The absolute majority of Java devs in the world do not work in spy agencies (sounds like it’s more about your personal network being close to that world).
> Why would go $58B in debt to support a new feature that no one will want after alienating everyone above?
Short-term shareholder equity gains during an over-exuberant hype cycle that you don't know when (or whether) might repeat.
"As long as the music is playing, you've got to get up and dance." -- Citigroup CEO Chuck Prince (symbolizing Wall Street's reckless persistence in risky lending despite signs of a market downturn)
> The trap is an almost inevitable consequence of what many managers might regard as a blessing, because it occurs when the capital markets overvalue a company’s equity—and especially when stock overvaluation is common in a particular sector. In the following pages, we’ll describe the trap, show how it has played out in various industries, and suggest where it may be playing out once again.
"If you're playing a poker game and you look around the table and and can't tell who the sucker is, it's you." -- Paul Newman
Most enterprises don't seem to be running ZFS with Linux, and the only large target using FreeBSD I can think of is Netflix, but AFAIR they don't use ZFS either.
Oracle sues when there's $$$ to make, but I don't think ZFS would warrant them much.
> the only large target using FreeBSD I can think of is Netflix, but AFAIR they don't use ZFS either.
I can't quite remember, but I think they might have mentioned using ZFS rather than UFS for the OS, but I'm pretty sure they're not using it for the CDN data partitions. I love ZFS, but for CDN nodes, I think it would be more harmful than helpful; especially how ARC is separate from the FreeBSD 'Unified Buffer Cache', and how much work Netflix has done to reduce the number of times data hits RAM on the way from disk to the user.
> Oracle sues when there's $$$ to make, but I don't think ZFS would warrant them much.
(Agreeing with you), if they are using ZFS for the OS and Oracle makes ZFS toxic, it shouldn't take long to ditch it.
It’s why I run btrfs on my lab machines. I’ve used ZFS for (looks at calendar) oh, wow, literally decades now. It’s fantastic. But the miasma of Oracle’s infection keeps me from recommending it for anything commercial.
ZFS, in a vacuum is fantastic. But it’s not in a vacuum.
There's no zfs grenade. It's CDDL, feel free to use it wherever you want. Oracle can't come after you for violating the gpl even if somehow using zfs on linux violates the gpl.
Everything I have read says the CDDL is not compatible with binary deployments of ZFS on Linux, so wouldn't that mean they could press the issue if you bundled it with GPL code? Actual lawyers have said yes, it could be pressed, which is what I am referring to. However, I think the actual answer is that Oracle has created a laches defense through inaction on this subject for so long now.
CDDL is more permissive than gpl. It's not a violation of cddl to intermingle with code under a different license. GPL is the issue and it's the individual contributors to linux that _could_ sue.
I'm not a lawyer. I don't know what Oracle's lawyers can and can't do. Even if I'm legally in the right, Oracle's lawyers could break me if they wanted. I can't know if there is a ZFS grenade, and neither can you. But we can choose to not deal with Oracle.
At that point, if they wanted to, they could sue mort96 for saying something bad about Oracle. It's unlikely they'll do that and perhaps a bit less unlikely they'll sue over ZFS.
Most of their legal shenanigans appear to be restricted to companies that already license some software from them.
The present US government would delight in finding a new way to funnel more yachtloads of cash to Larry Ellison, legally or otherwise, as long as someone in the regime gets something out of it.
There's more to it than just pure databases. They have a pretty large vertical of SaaS apps, specifically ERP. Oracle SaaS (their ERP platform) is used by thousands of customers. These are systems implemented with SIs that run super-critical functions like payroll, manufacturing, etc., and they are really hard to rip out once they're put in place. This has been fueling their growth for some time, and it seems like OCI is picking up now from a pure infra POV. But yeah, I don't think I'd ever use any Oracle components voluntarily, or at the very least I'd find ways to have exit paths.
Yeah, many of these comments are extremely misinformed. Oracle has been an application software vendor for a long time.
For example, Oracle sell Opera. Opera manages hotels, both individual and chains. And integrates with their amusement park management software.
People complain about them, but software like that is much closer to an SDK than a finished product. It is generally customized for each buyer's needs. And the quality of the customization is more on the buyer than on Oracle.
Oracle have a giant suite of these products for POS, guest experiences, amusement parks, hospitality, marketing (b2b and b2c), etc. And companies buy from Oracle because they're not good at making software and because you do leverage some economies of scale.
On this point: the NetSuite ecosystem is huge, and there are not that many options for SMEs that are too big/complex for Intacct or Quicken (even Campfire/Rillet, etc.) but don't want to get anywhere near SAP, Infor, IFS, or Oracle (outside NetSuite).
Oracle database has unparalleled scalability. Ask someone who works in Microsoft's SQL Server division what their bug database looks like. They will tell you that a single SQL Server instance cannot scale to hold the bug tracker for the entire SQL Server division. Oracle, on the other hand, has a single database for the entire company. No other database is this scalable.
But Oracle is not just a database company. Oracle started as a database company, but today they are more an applications company than a database company. They have ERP back-office applications (finance, operations, HR) and CRM front-office applications (sales, marketing, service). Oracle bought a large number of applications software companies such as Siebel, PeopleSoft, JD Edwards, NetSuite and Cerner to become this big.
Of course, Oracle is also a major cloud services provider, offering AI superclusters and GPU instances from NVIDIA and AMD (context for today's layoffs).
I'm actually impressed by the amount of abuse our Oracle instances are able to take from our developers.
Massive amounts of parallel single reads and writes with millisecond responses, mixed with mega-joins of incorrectly indexed tables that work flawlessly "on their machine" and limp along well enough to sneak past performance testing, with just the planner silently writhing in agony.
The original question does discount the capability of Oracle's database too much, as only something "golf executives" buy. When you have a large problem that is best solved with a relational model, Oracle delivers and can indeed be worth all the money and license hell involved.
I think the big use case for Oracle products are for businesses that are not in the IT space. A lot of reasons for this, a big one is the breadth of Oracle's products is very solid and, similar to Microsoft, you can be sold on Oracle solving all your database needs across your business: HR, asset management, customer relationship stuff, your actual business, all with a single vendor. Non-IT management will be told that it all integrates seamlessly, you don't need to hire IT staff dealing with software from 10 different vendors, just the one.
For instance, I work in the utility industry. They offer specialized utility-specific software for managing data from our meters, our customer and billing system, asset management, HR, accounting, and reporting from all these systems. Even more specialized stuff exists that we don't buy. No doubt if we had a different use case, Oracle would sell us on their ability to handle it. I think this is the model they follow. They are not trying to sell to startups, tech platforms, software companies, etc. They are trying to sell to your bank.
The comments on this subthread are a bit out of touch in a very coastal-tech way -- yes, Oracle is a monster, yes, their tech is garbage, yes, their products are awful.
But Oracle owns Cerner Health (now Oracle Health, but to most users it is still Cerner), i.e. 25% market share of the EHR space, and PeopleSoft, which you are painfully familiar with if you work for a bigcorp or anywhere in the public sector in North America. Their database product is very far from their only LOB.
You would use it to keep your job when your company goes with it against all technical recommendation due to the push of a higher up that wouldn't let the idea go for stupid or suspicious reasons.
I actually sat through an Oracle sales presentation around 1999 (I was the product engineer, along with company executives) and honestly it sounded pretty good. At the time we were using Lotus Notes for a database so even pencil and paper would have been better. Oracle absolutely was the market leader and there was no doubt about its technical chops. Oracle Parallel Server could run active-active across two sites separated by many miles of fibre, which was a remarkable thing to do back then.
Oracle came back with a quote that was so far outside what our company could afford that we went with Informix (not a cheap database). Pretty lucky escape.
A year or two later I ported the whole stack to PostgreSQL and it worked absolutely fine since we didn't have that much scale. Unfortunately when I left the guy who took over was a huge Informix fan so he deleted all the PG code and went back.
Oracle was (and I assume is) able to provide "hardware"-level support for their software. If you paid enough and your database was shitting itself at 2 AM, they'd have engineers on the phone and maybe on a plane to get you back up and running.
For smaller companies and businesses? Not needed. For big multinationals where a minute of downtime is millions of dollars in revenue? Oracle is cheap.
I didn't save the tweet I saw it in but I saved the joke - "I wish I had enough money to run Oracle instead of Postgres." "Why do you want to do that?" "I don't, I just wish I had enough money to."
> Every company I've worked for has avoided Oracle software of any kind.
Lucky you. Sadly, not all companies are new enough to be able to do that. Some embarked on Java when it was Sun, and Oracle when the only alternative would have been SQL Server (or DB2 on AIX, AS/400, or MVS).
A past company I worked for made SaaS and On-prem and we supported Oracle with the on-prem. The simple act of supporting and testing that option was enormous for our company, but the customers we had that needed it were highly lucrative.
I worked for one company that used it. Everybody on the ground hated it but the costs of migrating away were enormous because every system they relied upon was tightly coupled to every other system. It would have been a multiyear project to get off it.
Their software wasn't just more expensive than using open source equivalents it was worse, too. It's just very, very sticky.
At the same time the sales team wine and dine key decision makers and try to strike the fear of god in to them so they don't rock the boat.
We heard you're planning to migrate away from Oracle. We understand, but unfortunately, that means we have to get rid of the 75% discount we gave you, so we'll make a decade of revenue in the two years it'll take you to get rid of us. Still planning to migrate away?
They sell to cash-rich organisations who are a bit clueless about technology and so can't or wouldn't want to either roll their own, or go with a better but smaller provider?
e.g. I was unsurprised when I spotted that Novartis (no connection, btw) was deep in with Oracle. Big pharma, lots of money, typically-clueless-big-org-IT-leadership, etc.
And their products have worked correctly for decades.
So if you have a lot of money and don't want to take any risk, you go the Oracle route. It's not the best product today, but you won't have any surprises, except cost, which you can justify because it's Oracle.
Which is the same as using a tank to go grocery shopping because you're afraid of an accident on the way.
You need everything in house to support a tank: special garage, specifically trained crew, specific fuel...
Precisely this. They prey on outsource-happy big orgs that have a million different SaaS products all tied together by scotch tape (because their IT dept. is also outsourced).
The whole medical industry's business model is that they create so much compliance regulation that you need every compliance product under the sun to comply, thereby keeping out competition.
I have a theory that being cash-rich creates an atmosphere of technological cluelessness, or more specifically weaponized incompetence. A cash-rich company attracts sociopathic executives, who are focused on the prestige of working at a top company. These executives display a unified front outwardly, but internally they are all stabbing each other in the back constantly. And any executive who champions in-house software is just giving other executives ammunition whenever said software has the smallest bug.
I think saying that Oracle is a provider of database software is something that you hear from old technical folks who have not used Oracle since the 90s. In fact Oracle's revenue from database software is very tiny. Much of their revenue comes from enterprise and cloud software.
If you are a publicly traded company that needs to report audited financial results every quarter to shareholders, there are fewer than 10 ERP systems in the world for that, and Oracle owns 5 of them: Fusion, PeopleSoft, NetSuite, JDE and EBS.
Also, in the last decade a big chunk of their revenue has come from cloud services, as these enterprises move away from physical hardware and onto the cloud. Here too, Oracle provides some of the most generous free limits compared to GCP, AWS and Azure. They also provide some unique options that I have not seen in any other cloud, like Bring Your Own License, where you can keep running your old enterprise databases in the cloud while paying only for the compute.
A few years back they also bought Cerner, which was the largest EHR company at the time, and it pushed their headcount up by 28,000.
They grew their headcount massively during covid like any other software company and simultaneously took on too much debt to build datacenters.
But with rising cost of these capex builds, they are in consolidation mode and reducing headcounts just like other companies.
Oracle the company has not been about Oracle the DB server for 20+ years.
Oracle the company specializes in acquiring software, integrating it in their ecosystem, selling the installations, and living off the recurring licensing fees (NetSuite is one example).
The focus not being the DB for 20 years is mostly true, with the exception that all of their applications are well-served by having a very scalable and very bulletproof database in-house.
If you're a massive organization and have need for a mission critical database that cannot fail for any reason, with massive scale and scalability, security, reliability, high availability, high throughput, etc., and you want instant expertise available to solve any DB problems, then you're not using Postgres, a cloud database or rolling your own noSQL solution.
I'm talking about huge, billion dollar institutions like banks, financial services, governments, logistics, manufacturing, software, etc. These are the companies run by the "golf executives" who want guaranteed database dependability and are willing to pay for it.
In this case, you'll use Oracle or IBM DB2.
For example, if you're TikTok dealing with billions of interactions, or Boeing, maintaining critical databases of millions of parts, you'll be totally willing to spend huge amounts of money to make sure your data is rock solid, even if it's overpriced (Apparently, Microsoft is one of Oracle's biggest customers of all companies!)
They acquired a lot of applications - ERP, CRM, finance - I suspect actual database licensing revenues are only a small part of their revenues these days.
Years ago I had some fun integrating with Hyperion Financial Management (HFM) - which is actually a pretty impressive beast if you need consolidated financial reporting!
It's exactly as you said: the dependency on old-school legacy implementations that can't be turned off. To be fair to Microslop, even they had the foresight to open source .NET Core and tried to bring some things to the open source community. Oracle actively turned into a patent troll.
They make a lot of money off of the cloud services and their layered "enterprise" applications. Selling "just the database" isn't what Oracle's been about for a very long time.
Government contracts. You get good at bidding, there's money to be made there. And those bidding processes are way more than just the tech. That's their main value prop I think. Having the bureaucratic machine to bid and win contracts.
If you want to build a system of monolith services and be locked into a 30 year old waterfall development model, then Oracle is for you.
I’ve had this argument with several DBAs. They always claim “Oracle is the most performant,” and while that’s quite possibly true technically, they also tend to run a single massive instance that inevitably leads to a complete failure of the site under heavy load. Oracle is often designed to be the single point of failure; I believe that is by design. The same problems can be solved with modern event-driven architectures, better caching, horizontal dynamic scaling, etc.
I had to explain this to some slightly younger colleagues recently. It's hard to believe now, but in ye olde days hardware was not as cheap and abundant as it is now. So you invested heavily in your database servers and to justify the hardware and software cost, ran as many workloads as possible on it to spread the pain.
This is also the same incentives that resulted in many classic architectures from 80s and 90s relying heavily on stored procedures. It was the only place where certain data could be crunched in a performant way. Middleware servers lacked the CPU and memory to crunch large datasets, and the network was more of a performance bottleneck.
> and be locked into a 30 year old waterfall development model,
Oracle switched from a waterfall development model to sprints years ago. They also switched from yearly to quarterly releases (for their Apps), which means they deliver a lot of features in a year.
> They also switched from yearly to quarterly releases (for their Apps) which means they deliver a lot of features in a year.
Without commenting on whether this is true of Oracle, that conclusion doesn't inherently follow from the given. If I'm driving 60 miles per hour, then recalculate it in miles per minute, that doesn't actually mean I'm going faster. Oracle could easily be delivering 1/8 a year's worth of features in 1/4 a year due to release process overhead for all I'd know.
Totally agreed, looking at them from a development and cutting-edge viewpoint. They own what were once very competitive platforms and languages, which they still support. They have largely transitioned into rent seekers.
From the investment standpoint they still have a lot of value to siphon, but it's all rent-seeking behavior; it's not producing new ecosystems like they or Sun did in the past. Long-term blue chip play.
Though all the Paramount stuff is loosely coupled to them now, so tough to say if it's a good long-term play anymore.
Oracle is used for mission-critical legacy applications, which is common in the federal government IT space. Replacing with OSS is a nontrivial undertaking, but it is happening. For the most part, replacing Oracle’s Java with OpenJDK is relatively painless, but some agencies preferred the licensed version because it includes support. Replacing a database, however, is a much scarier task, even when you have experienced and competent DBAs.
The technology is irrelevant. At one point in my career I worked for a ~200-person company that was acquired by another massive (20,000+ seat) company that was even less useful than Oracle. Trying to figure out the rough mechanics of why such a useless tower of people is built and maintained, I realized the magic: big juicy government contracts!
Their biggest asset is ERP. That's how they get orgs locked in, because migrating ERP systems after deployment can take decades of work and cost multitudes more than just eating Oracle's renewal increases. Could orgs jettison them into the sun? Totally. Is it fiscally sensible? Yeah, absolutely. Can you sell that to the board? Nope.
The best way to kill Oracle - because such a toxic organization absolutely deserves to fail - is to avoid building anything atop their infrastructure ever again going forward. Don't use their Java tooling, don't use their software suites, don't use their cloud services.
Just don't use Oracle for anything new, and work to get the fuck off of it for anything that remains.
The only reason Oracle survives is because rich dumb fucks keep giving them money.
A while ago we were looking at migrating ERP. NetSuite was not a good price proposition and candidly feels a bit dated, but when you mapped features it was pretty impressive, and for a lot of businesses that have some complexity (multi-entity, multi-currency, multi-site mfg or inventory) there are not a whole lot of good alternatives, because you can't use Quicken but you definitely don't want SAP.
The irony is that the ERP space is ripe for innovation and disruption, but nobody wants to get into ERP because it's a goddamn nightmare.
Every business runs slightly differently than everyone else, and ERP tries to be this all-encompassing monolith. I wonder if the solution to ERP isn't just targeted microservices exposing data via APIs...
Yeah, but I think that is the problem: everyone wants to customize / thinks their business is super unique, but honestly the customization is what makes ERP so painful. Industry customization is important (motels are different from a small manufacturer), but for SMEs a standard solution that fits their industry and can be implemented in a manner that's not akin to open-heart surgery would be far more valuable to these firms.
> I'm genuinely confused at what the value prop of Oracle is.
Legal blame transfer.
Oracle has every single compliance checkbox you need for any certification you can name.
So, if your end customer (generally BigCorp or BigGovt) wants "NitWit Certification v4007", you call up the Oracle sales rep, get a quote, and pass the cost along with a significant markup.
Support, that's what you'd use them for. Something breaks and your team can't figure it out? You make a phone call and someone will be there in a jiffy to work things out. And if they can't either, they'll fly in a whole team from a different city or even a different country until they solve it.
I just read a tech "policy" document of one of the largest packaged food makers in Mexico. They explicitly say they ONLY use paid services/software to ensure there is liability and support.
There are A Lot of businesses that are happy to burn cash for a false sense of security. They don't know better.
> They explicitly say they ONLY use paid services/software to ensure there is liability and support.
Hahahahaha. Yes, by all means, call IBM or Oracle and ask them to compensate you for downtime. Support, sure, of a sort. If you throw enough money at them, you'll get an answer from them in an arbitrarily short time frame.
I assume you would use Oracle Cloud if, for whatever reason possibly related to legal or competition, you cannot use AWS, or GCP, or Azure. It's hard for me to imagine a startup that needs cloud and would onboard to Oracle Cloud and not to any of the top 3 providers instead.
Also, their inability to make a NewSQL DB rivaling Spanner or Cockroach (they basically just had to clone one of these or acquire Cockroach) puts them out of any serious competition for the future of databases. Their "Oracle Autonomous DB" ain't it.
Oracle has made a large bet on being a cloud, but nobody wants their terrible cloud, which is reflected in their dollar-store prices. They staffed up and built facilities that they can't sell so I am not surprised they are now swinging the axe.
Oracle Cloud isn't actually "terrible cloud", but it definitely isn't geared toward smaller users like startups and individuals. It's downright hostile to casual use. But for fortune 500 companies who don't mind being in bed with Oracle, the price can be right.
I posted a response from Gemini where I asked Gemini, "Do you have citations proving or disproving whether Oracle Cloud is still attracting cloud customers away from Azure, AWS and GCP?" and it seems it disagrees with your "nobody wants their terrible cloud" view.
You are missing the business model - buy has-been platforms and frameworks and charge big bucks for maintenance. Customers eventually manage to migrate off you, but it is fine cause you buy some other has-been stack and then overcharge for that.
From small-scale use over the course of several years, I've found their "cloud" (OCI) to be a solid and well-planned product. Additionally, I've experienced not one single outage or hiccup so far (Stockholm region).
They do own some vertically integrated products like PeopleSoft that use these lower level foundational pieces. But I agree, I don't know why anyone in their right mind would use Oracle outside of golf and sailing bribes.
If you are not yet in Oracle's clutches, you have to be extremely naive or shortsighted to be using Oracle Cloud. Obviously the low prices are because they have a shit product and a shit reputation, and the moment they think they've captured a large enough audience they are going to hike them.
Yup. They're not offering those prices out of generosity. They're offering them because that's the most they can charge big players who understand what it means to buy from Oracle.
They aren't a database company. They are a full spectrum B2B SaaS contract company. They make far more by up selling services than they do from databases total. Half of their stack will run on whatever db you want.
Aren't their databases behemoths that satisfy requirements (especially of regulatory nature) of large banks and such? I don't think they have much in common with the needs of your run-of-the-mill startup.
Oracle is deeply embedded in enterprise, and a lot of other enterprise solutions also use it. They have no value proposition for startups; likely just existing clients and people who end up using stuff that requires their products.
Well, if you're an elected official, and you're in charge of government organizations that could be used to enrich billionaire donors by using a donor's services - Oracle fits that niche very well!
> From my chair, they make an expensive database they try to sell to golf executives.
This is basically it. You wouldn’t want to use oracle for anything, and they know that. What they also know, very very well, is that they can get their fingers into high-dollar orgs and shmooze people that have little knowledge on the matter to lock themselves into basically never ending contracts for garbage products.
Oracle is a perfect distillation of capitalism in that way.
I have a cousin who owns a business that decided to contract with Oracle, and as he described this my first thought was “why tf would you ever sign an agreement with Oracle, don’t you know about them?”
But as he described the whole dealings (for some SEO product, I’m not sure of the specifics) it became clear to me they bamboozled him, gave him a bait and switch, and left him on the hook for a huge bill he never thought he’d have to pay.
So to answer your question I don’t think there is a value prop, I think it’s actually a giant grift.
Clearly you are in the USA. That's not how their business works, and startups are not their target. Lobbying governments across the world with questionable practices is.
For Jason R., it was an exciting time. His company was trying to break into the telecom market with a new product that they'd get to build almost entirely from scratch. The only part that he wasn't excited about was that the major customers had very specific requirements that his team would have to meticulously follow. In this case, some bigtime POTS operators demanded that all servers must come from Sun, and any databases must be built on Oracle 8i.
One of the applications they were building had to interface with the clients' call data records (or CDRs). The most important use of CDRs is for phone bill calculation, so naturally they were stored in properly designed and indexed tables. The CDRs were stored alongside all billing records, and were frequently accessed by mission-critical internal applications, and they weren't prepared to expose all of that to a third party. So instead, Jason's company would have to construct CDRs on their own from the signaling message flow. Because the CDRs would be processed right away, they wouldn't even need to store them. The tentative architecture called for an Oracle database for CDR pipelining from the front end to the application backend.
When the analysis was being conducted, the team grew concerned with the costs — both in terms of budget and disk I/O. Oracle licenses are incredibly expensive, and there would be a huge volume of CDR data written to and read from the database. Finally, it dawned on someone that the database was completely superfluous since records were processed as they came in. In fact, a single, low-end Sun server with a few hundred megs of RAM could easily handle the CDR generation and application backend.
Excited about their good news, they called up a meeting with the product managers. "We've discovered that we can deliver the product at a fraction of what our original estimates were." The managers left the room, some looking happy, others just looking incredulous.
Later that day, Jason got a call from the VP of Engineering. "Jason, while I understand what you're proposing is technically valid, you have upset the marketing team."
"I'm sorry... did I say something?"
"It's just that they've promised the customer that our product would use Oracle 8i, and now they're going to be made liars. Can you just humor me and add Oracle 8i to the design somewhere?"
"Uh..."
"I have enough trouble politically as it is. I really appreciate this favor!" click
After delivering the news to his team, they argued a bit on what to use Oracle for. Ultimately they delivered the final product with an Oracle database that had a single table which was used to store a handful of configuration parameters.
It was the most expensive individual table Jason had ever created in his entire career.
Google Cloud's bandwidth pricing is much more reasonable. That's one thing I see. Google, Microsoft, and Amazon still charge the standard big cloud absolutely ludicrous bandwidth markup, stuff like $0.08/gig.
If you know what bandwidth actually costs, that's like $500/gallon gasoline.
Oracle is still expensive relative to wholesale bandwidth but it's at least not absolutely insane.
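The "$500/gallon gasoline" comparison can be made concrete with back-of-the-envelope math. A minimal sketch, where the $0.50/Mbps/month wholesale IP transit figure is an illustrative assumption (real quotes vary widely by region and volume), not a number from the thread:

```python
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

# Assumed wholesale IP transit price (illustrative only).
transit_usd_per_mbps_month = 0.50

# Data moved by a 1 Mbps link sustained for a month, in GB:
# bits/s * seconds -> bits, /8 -> bytes, /1e9 -> GB.
gb_per_mbps_month = 1e6 * SECONDS_PER_MONTH / 8 / 1e9  # ~324 GB

wholesale_usd_per_gb = transit_usd_per_mbps_month / gb_per_mbps_month
retail_usd_per_gb = 0.08  # the big-cloud egress rate cited above

markup = retail_usd_per_gb / wholesale_usd_per_gb
print(f"wholesale ~ ${wholesale_usd_per_gb:.4f}/GB, markup ~ {markup:.0f}x")
```

Even if the assumed transit price is off by a factor of a few, the markup stays in the tens, which is the point the comment is making.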
The Owens River Gorge has the highest concentration of sport climbs anywhere in North America, but there's not much variation; they're mostly edge ladders on weirdly slippery rock.
Not far away are the world's most photogenic boulders, the Buttermilks, and when I visited (from Canada) I was surprised to find that the boulders are on LA municipal property, with the pipe that takes the Owens River's water over the Sierras nearby.
People went ballistic on me a few months ago for bringing this up, but this is exactly the kind of outage that makes me really, really worried about extremely short lived certificates. https://news.ycombinator.com/item?id=46118371
I'm not sure I follow. This outage seems like it occurred for less than 1 day. The post you link to is about having certificates expire after 45 days. What's the connection you see?
Some CAs are experimenting with shorter, 7 day certificates as well.
Still not an outage that would endanger anyone's ability to renew in time, but for small or extremely shitty CAs (and there are a lot of those) such an outage might take long enough to cause issues, in theory at least.
It doesn't have to be small or more shitty than average. If Google has a compliance issue and can meet it in 8 hours then it's a pretty clear one. They could have an issue that needs round trips of discussions with auditors before resuming, etc. I'm not familiar with 24/7 auditor services.
That's only if you delay renewal until the last day of the lifetime of the certificate. If you renew at day 30 you'd only get in trouble if there's more than two weeks of downtime.
You’re supposed to renew your cert way in advance of the expiration time. For 47-day certs the general expectation is that you renew them monthly, so in the worst case you’d need more than two weeks of CA outage before anything went wrong.
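The arithmetic behind that claim is simple enough to sketch (the 47-day lifetime and monthly renewal cadence are taken from the comment above):

```python
cert_lifetime_days = 47   # proposed maximum certificate lifetime
renew_after_days = 30     # "renew them monthly", per the comment above
outage_budget = cert_lifetime_days - renew_after_days
print(outage_budget)      # days of CA downtime you can absorb: 17
```

So with monthly renewal, a CA would have to be down for more than two weeks before any already-issued certificate expired.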
You're joking, but still: that's one very possible outcome of both requiring centrally issued certificates for security reasons and browsers refusing to display websites without.
Effectively certificates are now a license to publish.
It's the same problem. Search for "fauux" (one of our more popular web sites) and you'll see other sites talking about one of the more popular sites but you won't see a link to the site itself.
I would love to see some evidence for the huge increase in protein on this new pyramid. I'm not challenging it, I'm genuinely curious if there's substantial evidence that a lot of it is actually good for most people.
I don't like to admit this, but at this point honestly I think ipv6 is largely a failure, and I say this as someone who wrote a blog post for APNIC on how to turn on ipv6.
I'll get endless pushback for this, but the reality is that adoption isn't at 100%, and it essentially needs to be; there are still entire ISPs that only assign ipv4, to say nothing of routers people are buying and installing that don't have ipv6 enabled out of the box.
A much better solution here would have been an incredibly conservative "written on a napkin" change to ipv4 to expand the number of available address space. It still would have been difficult to adopt, but it would have the benefit of being a simple change to a system everyone already understands and on top of a stack that largely already exists.
I'm not proposing to abandon ipv6, but at this point I'm really not sure how we proceed here. The status quo is maintaining two separate competing protocols forever, which was not the ultimate intention.
> A much better solution here would have been an incredibly conservative change to ipv4 to expand the number of available address space
"And what do you base this belief on?
Fact is you'd run into exactly the same problems as with IPv6. Sure, network-enabled software might be easier to rewrite to support 40-bit IPv4+, but any hardware-accelerated products (routers, switches, network cards, etc.) would still need replacement (just as with IPv6), and you'd still need everyone to be assigned unique IPv4+ addresses in order to communicate with each other (just as with IPv6)."[0]
> Fact is you'd run into exactly the same problems as with IPv6.
If you treat IPv4 addresses as a routable prefix (same as today), then the internet core routers don't change at all.
Only the edge equipment would need to be IPv4+ aware. And even that awareness could be quite gradual, since you would have NAT to fall back on when receiving an IPv4 classic packet at the network. It can even be customer deployed. Add an IPv4+ box on the network, assign it the DMZ address, and have it hand out public IPV4+ addresses and NAT them to the local IPv4 private subnet.
IPv6 seems to be a standard that suffered from re-design by committee. Lots of good ideas were incorporated, but it resulted in a stack that had only complicated backwards compatibility. It has taken the scale of mobile carriers to finally make IPv6 more appealing in some cases than IPv4+NAT, but I think we are still a long way from any ISP being able to disable IPv4 support.
> Only the edge equipment would need to be IPv4+ aware.
"Only"? That's still the networking stack of every desktop, laptop, phone, printer, room presentation device, IoT thing-y. Also every firewall device. Then recompile every application to use the new data structures with more bits for addresses.
And let's not forget you have to update all the DNS code because A records are hardcoded to 32-bits, so you need a new record type, and a mechanism to deal with getting both long and short addresses in the reply (e.g., Happy Eyeballs). Then how do you deal with a service that only has a "IPv4+" address but application code that is only IPv4-plain?
Basically all the code and infrastructure that needed to be updated and deployed for IPv6 would have to be done for IPv4+.
But the desktop/laptop/phone/printer was the EASIEST thing to change in that 30 year history. And it would have been the easiest thing to demand a change req from a company for.
Yes: but the process would have been exactly the same whether for a hypothetical IPv4+ or the IPng/IPv6 that was decided on; pushing new code to every last corner of the IP universe.
How could it have been otherwise given the original network structures were all of fixed lengths of 32 bits?
If we have IPv4 address 1.2.3.4, and the hypothetical IPv4+ adds 1.2.3.4.1.2.3.4 (or longer), how would a IPv4-only router handle 1.2.3.4.1.2.3.4? If an IPv4-only host or application gets a DNS response with 1.2.3.4.1.2.3.4, how is it supposed to use it?
As I see it, the transition mechanism for some IPv4+ that 'only' has longer addresses is exactly the same as for IPv6: new code paths that use new data structures, with a gradual rollout with tech refreshes and code updates where hosts slowly go from IPv4-only to IPv4-and-IPv4+ at different rates in different organizations.
If you think it's somehow different, can you explain how it is so? What proposal available (especially when IPng was being decided on in the 1990s) would have allowed for a transition that is different than the one described above (gradual, uncoördinated rollout)?
The proposal is that IPv4+ would be interpretable as an IPv4 packet. Either the IP header is extended, or we add another protocol layer for the IPv4+ bits (IPv4+ is another envelope for the user payload).
DNS is like today: A and AAAA records for IPv4 and IPv4+ respectively.
Core routers do not need to know about IPv4+, and might never know.
The transition is similar to 6to4. The edge router does translation to allow IPv4+ hosts to connect to IPv4 hosts. IPv4 hosts are unable to connect to IPv4+ directly (only via NAT). So it has the similar problem to IPv6 that you ideally want all servers to have a full IPv4 address.
What you don't have is a completely parallel addressing system, requirements to upgrade all routers (only edge routers for 4+ networks), requirements to have your ISP cooperate (they can just give you an IPv4 and you handle IPv4+ with your own router), and no need that the clients have two stacks operating at once.
It's essentially a better NAT, one where the clients behind other NATs can directly connect, and where the NAT gradually disappears completely.
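To make the hypothetical concrete, here's a toy Python sketch of the scheme described above. IPv4+ is not a real protocol; the `EdgeRouter` class, the dotted-suffix address notation, and everything else here are made-up illustrations of "the core routes the 32-bit prefix, the edge routes the extra octets":

```python
# Hypothetical IPv4+ addressing sketch: an "extended" address is a classic
# 32-bit IPv4 routing prefix plus extra suffix octets handled only at the edge.
class EdgeRouter:
    def __init__(self, public_v4: str):
        self.public_v4 = public_v4   # the only part core routers ever see
        self.hosts = {}              # suffix -> internal host

    def assign(self, suffix: str, host: str) -> str:
        """Hand out a public IPv4+ address: core-routed prefix + edge-routed suffix."""
        self.hosts[suffix] = host
        return f"{self.public_v4}.{suffix}"

    def deliver(self, dest_addr: str):
        """Route an incoming IPv4+ packet by its suffix; no translation needed."""
        prefix = self.public_v4 + "."
        if dest_addr.startswith(prefix):
            return self.hosts.get(dest_addr[len(prefix):])
        return None   # classic-IPv4 destination: fall back to NAT

edge = EdgeRouter("203.0.113.7")
addr = edge.assign("1.2", "laptop")
print(addr)                 # 203.0.113.7.1.2
print(edge.deliver(addr))   # laptop
```

The point of the sketch is that translation state is only needed for legacy peers; extended hosts are directly addressable through the edge box.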
If you hand UTF-8 text that actually uses anything beyond ASCII to something that can only render ASCII, the text will be garbled. People can read garbled text OK if it's a few missing accented characters in a western language, but it's no good for Japanese or Arabic.
In networking terms, this is like a protocol which can reach ipv4 hosts only but loses packets to the ipv4+ hosts randomly depending on what it passes through. Who would adopt a networking technology that fails randomly?
v6 has nearly 3 billion users. How is that abysmal?
We've never done something like the v4->v6 migration before, on this sort of scale. It's not clear what the par time for something like this is. Maybe 30 years is a normal amount of time for it to take?
HTTP->HTTPS was this kind of scale, and it was smooth because they changed as little as possible while also being very careful about default behaviors.
3 billion people sorta use ipv6, but not really, because almost all of those also rely on ipv4 and no host can really go ipv6-only. Meanwhile, many sites are HTTPS-only.
And because it's a layer 7 thing, so it only required updating the server and client software, not the OS... and only the client and server endpoints and not the routers in between... and because we only have two browser vendors who between them can push the ecosystem around, and maybe half a dozen relevant web server daemons.
Layer 3 of the Internet is the one that requires support in all software and on all routers in the network path, and those are run by millions of people in hundreds of countries with no central entity that can force them to do anything.
HTTP->HTTPS is only similar in terms of number of users, not in terms of the deployment itself. The network effects for IP are much stronger than for HTTP.
They don't "sorta" use v6, they're properly using it, and you can certainly go v6-only. I'm posting from a machine with no v4. Also, if you want to go there: HTTPS was released before IPv6, and yet still no browser is HTTPS only, despite how much easier it is to deploy it.
I know they aren't very comparable in a technical way, but look at the mindset. IPv6 included decisions that knowingly made it more different from v4 than strictly needed, because they wanted it to be perfect on day 1. If they'd done HTTPS like this, it'd be tied to HTTP/2.
Most browsers now discourage plain HTTP with a warning. Any customer-facing server basically needs to use HTTPS now. And you're rare if you actually have no ipv4, not even via a tunnel.
The compromise "ipv4+" idea a bunch of people keep asking for wouldn't require changing the spec down the road. ISPs would just need to clean up their routes later, and SLAAC could still exist as an optional (rather than default) feature for anyone inclined to enable it later. Btw, the IPv6 spec was only finalized in 2017; it wasn't exactly one-shot.
I don't know if HTTP's job is easier. Maybe on the client side, since there were never that many browsers, but you have load-balancers, CDNs, servers, etc. HTTP/2 adoption is still dragging out because of how many random things don't support it. Might be a big reason why gRPC isn't so popular too.
> HTTP->HTTPS was this kind of scale, and it was smooth because they changed as little as possible while also being very careful about default behaviors.
HTTP->HTTPS is not equivalent in any way. The payload in HTTP and HTTPS is exactly the same; HTTPS simply adds a wrapper (e.g., stunnel can be used with an HTTP-only web server). Further, HTTP(S) lives only on the endpoints, and specifically in the application layer: your OS, switch, firewall, CPE, ISP router(s), etc., can all be left alone.
If you're not running a web browser or web server (i.e., FTP, SMTP, DNS, database) then there are zero changes that need to be made to any code on a system. This is not true for changing the number of bits the addressing space: every piece of code that calls socket(), bind(), connect(), etc, has to be touched.
Whereas the primary purpose of IPng was to expand the address space, which means your OS, switch, firewall, CPE, ISP router(s), etc, all have to be modified to handle more address bits in the Layer 3 protocol data unit.
Plus stuff at the application layer like DNS (since A records are 32-bit only, you need an entire new network type): entire new library functions had to be created (e.g., gethostbyname() replaced by getaddrinfo()).
I hear people say the IETF/IP Wizards of the 1990s should have "just" picked an IPng that was a larger address space, but don't explain how IPv4 and hypothetical IPv4+ would actually work. Instead of 1.1.1.1, a packet comes in with 1.1.1.1.1.1.1.1: how would a non-IPv4+ router know what to do with that? How would non-updated routers and firewalls be able to handle longer addresses? How would non-updated DNS code be able to handle new record types with >32 bits?
HTTP->HTTPS looks easy in hindsight, but there were plenty of ways it could have gone wrong. They took the path of least resistance, unlike ipv6. I know they're different layers ofc.
To answer the last question, routers would need IPv4+ support, just like ipv6 which already happened. The key is it's much easier for users to switch after. No dual stack, you get the same address, routes, DNS, and middleboxes like NAT initially. ISPs can't hand out longer addrs like /40 until things like DNS are upgraded in-place to support that, but again those are pretty invisible changes throughout the stack.
> To answer the last question, routers would need IPv4+ support, just like ipv6 which already happened.
So exactly like IPv6: you need to roll out new code everywhere.
> The key is it's much easier for users to switch after. No dual stack, you get the same address, routes, DNS, and middleboxes like NAT initially. ISPs can't hand out longer addrs like /40 until things like DNS are upgraded in-place to support that, but again those are pretty invisible changes throughout the stack.
So exactly like IPv6: you need to roll out new code everywhere.
Would organization have rolled out in IPv4+ any differently than IPv6? Some early, some later, some questioning the need at all. It's the exact same coördination / herding cats problem.
It's a simple toggle on vs asking orgs to redo their entire network. In both cases you need routers and network stacks to support the new packet format, but that isn't the hard part of ipv6, we already got there and people still aren't switching.
Sorry, I'm still not seeing how a IPv4+ would be any less complicated (or as simple) as IPv6. In either case you would still have to:
* roll out new code everywhere
* enable the protocol on your routers
* get address block(s) assigned to you
* put those blocks into BGP
* enable the protocol on middleware boxes
* have translation boxes so new-protocol hosts can talk to old-protocol-only hosts
* enable the protocol on end hosts
And just because you do it, does not mean anyone else would do in the same timeframe (or ever). You're back in the chicken-and-egg of whether servers/services do it first ("where are the clients?"), or end-devices ("where are the services?").
Redo all your addresses and routes, reconfigure or replace NAT and DHCP, reconfigure firewall, change your DNS entries at minimum. If it's a home or small business and you don't want to fight the defaults, you go from NAT to NATless.
No, routers would have to be fixed anyway, because even if you put extra bits into an extension header, we have 30 years of experience showing that routers and ISPs will regularly fuck around with those extra bits; it's related to why we have the TLS GREASE mechanism.
Application rework would be exactly the same as with v6, because the issue was not with v6 but with BSD Sockets API exposing low-level details to userland.
> Only the edge equipment would need to be IPv4+ aware. And even that awareness could be quite gradual, since you would have NAT to fall back on when receiving an IPv4 classic packet at the network. It can even be customer deployed. Add an IPv4+ box on the network, assign it the DMZ address, and have it hand out public IPV4+ addresses and NAT them to the local IPv4 private subnet.
Congratulations, you’ve re-invented CGNAT, with none of the benefits, and the additional hassle of it being an entirely new protocol!
No. No “extra bits” on an IPv4 address would have ever worked. NAT itself is a bug. Suggesting that as an intentional design is disingenuous.
I have not "reinvented CGNAT". It is hierarchical public addressing, similar to IPv4 and IPv6.
The edge router has an IPv4+ subnet (either a classic v4 address, or part of a v4+ address). It maintains an L2 routing table with ARP+, and routes IPv4+ packets to the endpoint without translation. Private subnetting and NAT is only needed to support legacy IPv4 clients.
CGNAT pools IPv4 public addresses and has an expanded key for each connection, and translates either 4 to 6 or into a private IPv4 subnet. My proposal needs no pooling and only requires translation if the remote host is IPv4 classic and the edge router is not assigned a full IPv4+/24.
Not just the edge router. Every router between the ISP edge and the destination edge.
And since the goal is “backwards-compatibility”, you’d always need to pool, because a “legacy” IPv4 client would also be unable to send packets to the IPv4+ destination, or receive packets with an IPv4+ source address.
And it would be an absolute nightmare to maintain: CGNAT plus a quasi-backwards-compatible protocol where the backwards compatibility wouldn’t work in practice.
So you would have exactly the same problem as IPv6. I can say the same about v4 and v6 today. You could just turn off IPv4 on the internet, and we’d only need to do translation on the edge for the legacy clients that would still use IPv4. You can even put IPv4 addresses in IPv6 packets!
I think you've actually reinvented 6to4, or something morally very close to it.
Each v4 address has a corresponding /48 of IPv6 tunnelled to it. The router with that IP receives the tunnelled v6 packets, extracts them and routes them on natively to the end host. This is something that v6 already does, so you don't need to make posts complaining about how dumb they were for not doing it.
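The 6to4 mapping described here is mechanical: the 32 bits of the IPv4 address are placed directly after the reserved 2002::/16 prefix, yielding a /48. A quick sketch with Python's stdlib ipaddress module:

```python
import ipaddress

def sixtofour_prefix(v4: str) -> ipaddress.IPv6Network:
    """Derive the 6to4 /48 prefix (2002:VVVV:VVVV::/48) for an IPv4 address."""
    v4int = int(ipaddress.IPv4Address(v4))
    # 2002::/16 in the top 16 bits, the IPv4 address in bits 16..47
    prefix = (0x2002 << 112) | (v4int << 80)
    return ipaddress.IPv6Network((prefix, 48))

print(sixtofour_prefix("192.0.2.1"))   # 2002:c000:201::/48
```

So any host holding a public v4 address automatically holds 2^80 v6 addresses routable to it via the 6to4 mechanism.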
That's quite true, but in this counterfactual, IPv4+ doesn't pretend that 6to4 is just a transition mechanism to an all-IPv6 future. That is, IPv4+ is as-if 6to4 was the default, preferred, or only mechanism, and core routers were never demanded to upgrade.
It's an edge based solution similar to NAT, but directly addressable. And given that it extends IPv4, I think it would have been much more "marketable" than IPv6 was.
But again, this is all counterfactual. The IETF standardized IPv6, and 30 years on it's still unclear that we will deprecate IPv4 anytime soon.
I agree with that belief, and I've been saying it for over 20 years.
I base it on comparing how the IPv2 to IPv4 rollout went, versus the IPv4 to IPv6 rollout. The fact that it was incredibly obvious how to route IPv2 over IPv4 made it a no-brainer for the core Internet to be upgraded to IPv4.
By contrast it took over a decade for IPv6 folks to accept that IPv6 was never going to rule the world unless you can route IPv4 over it. Then we got DS-Lite. Which, because IPv6 wasn't designed to do that, adds a tremendous amount of complexity.
Will we eventually get to an IPv6 only future? We have to. There is no alternative. But the route is going to be far more painful than it would have been if backwards compatibility was part of the original design.
Of course the flip side is that some day we won't need IPv4 backwards compatibility. But that's still decades from now. How many of the original IPv6 designers will even be alive to see it?
The IPv2 to IPv4 migration involved sysadmins at fewer than 50 institutions (primarily universities and research labs) updating things they considered to be a research project. There was no specialised network hardware that knew anything about IP, and any networked software was written either by the sysadmins themselves or by people whose office one of them could walk down the corridor to. Oh, and several months of downtime if someone was too busy to update right now was culturally acceptable. It’s not remotely the same environment as existed at the time IPv6 was being designed.
Hardware would catch up. And IPv4 would never go away. If you connect to 1.1.1.1 it would still be good ole IPv4. You would only have in addition the option to connect to 1.1.1.1.1.1.1.2 if the entire chain supports it. And if not, it could still be worked around through software with proxies and NAT.
So... just a less ambitious IPv6 that would still require dual-stack networking setups? The current adoption woes would've happened regardless, unless someone comes up with a genius idea that doesn't require any configuration/code changes.
I disagree. The current adoption woes are exactly because IPv6 is so different from IPv4. Everyone who tries it out learns the hard way that most of what they know from IPv4 doesn't apply. A less ambitious successor to IPv4 is exactly what we need in order to make any progress.
It’s not _that_ different. Larger address space, more emphasis on multicast for some basic functions. If you understand those functions in IPv4, learning IPv6 is very straightforward. There’s some footguns once you get to enterprise scale deployments but that’s just as true of IPv4.
Lol! IPv4 uses zero multicast (I know, I know, technically there's multicast, but we all just understand broadcast). The parts of an IPv4 address and their meaning have almost no correlation to the parts of an IPv6 address and their meaning. Those are pretty fundamental differences.
IP addresses in both protocols are just a sequence of bits. Combined with a subnet mask (or prefix length, the more modern term for the same concept) they divide into a network portion and a host portion. The former tells you what network the host is on, the latter uniquely identifies the host on that network. This is exactly the same for both protocols.
Or what do you mean by “parts of an IPv4 address and their meaning”?
That multicast on IPv4 isn’t used as much is irrelevant. It functions the same way in both protocols.
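The claim that addressing works identically in both protocols can be checked with Python's stdlib ipaddress module; the prefix length splits network bits from host bits the same way for v4 and v6:

```python
import ipaddress

# An interface is an address plus a prefix length, in either protocol.
v4 = ipaddress.ip_interface("192.168.1.42/24")
v6 = ipaddress.ip_interface("2001:db8::42/64")

# The prefix determines the network portion; the remaining bits identify
# the host on that network. Same mechanism in both protocols.
print(v4.network)   # 192.168.1.0/24
print(v6.network)   # 2001:db8::/64
```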
The biggest difference is often overlooked because it's not part of the packet format or anything: IPv4 /32s were not carried over to IPv6. If you owned 1.1.1.1 on ipv4, and you switch to ipv6, you get an entirely different address instead of 1.1.1.1::. Maaybe you get an ipv6-mapped-ipv4 ::ffff:1.1.1.1, but that's temporary and isn't divisible into like 1.1.1.1.2.
And then all the defaults about how basically everything works are different. Home router in v6 mode means no DHCP, no NAT, and hopefully yes firewall. In theory you can make it work a lot like v4, but by default it's not.
> The current adoption woes are exactly because IPv6 is so different from IPv4. Everyone who tries it out learns the hard way that most of what they know from IPv4 doesn't apply.
In my experience the differences are just an excuse, and however similar you made the protocol to IPv4 the people who wanted an excuse would still manage to find one. Deploying IPv6 is really not hard, you just have to actually try.
Part of the ipv6 ambition was fixing all the suboptimally allocated ipv4 routes. They considered your idea and decided against it for that reason. But had they done it, we would've already been on v6 for years and had plenty of time to build some cleaner routes too.
I think they also wanted to kill NAT and DHCP everywhere, so there's SLAAC by default. But turns out NAT is rather user-friendly in many cases! They even had to bolt on that v6 privacy extension.
> I said it's different from IPv4. At the IP layer.
In what way? Longer addresses? In what way is it "so different" that people are unable to handle whatever differences you are referring to?
We used to have IPv4, NetBEUI, AppleTalk, IPX all in regular use in the past: and that's just on Ethernet (of various flavours), never mind different Layer 2s. Have network folks become so dim over the last few years that they can't handle a different protocol now?
Current statistics are that a bit over 70% of websites are IPv4 only. A bit under 30% allow IPv6. IPv6 only websites are a rounding error.
Therefore if I'm on an IPv6 phone, odds are very good that my traffic winds up going over IPv4 internet at some point.
We're 30 years into the transition. We are still decades away from it being viable for servers to run IPv6 first. You pretty much have to do IPv4 on a server. IPv6 is an afterthought.
> We are still decades away from it being viable for servers to run IPv6 first.
Just put Cloudflare in front of it. You don’t need to use IPv4 on servers AT ALL. Only on the edge. You can easily run IPv6-only internally. It’s definitely not an afterthought for any new deployments. In fact there’s even a US gov’t mandate to go IPv6-first.
It’s the eyeballs that need IPv4. It’s a complete non-issue for servers.
Listen, you can be assured that the geek in me wants to master IPv6 and run it on my home network and feel clever because I figured it out, but there's another side of me that wants my networking stuff to just work!
If you don’t want to put Cloudflare in front of it, you can dual-stack the edge and run your own NAT46 gateway, while still keeping the internal network v6 only.
You have a point. But you still need DNS pointing to an IPv4 address. And the fact that about 70% of websites are IPv4-only means that if you're setting up a new website, odds are good that you won't do IPv6 in the first pass.
Cloudflare proxy automatically creates A and AAAA records. And you can’t even disable AAAA ones, except in the Enterprise plan. So if you use Cloudflare, your website simply is going to be accessible over both protocols, irrespective of the one you actually choose. Unless you’re on Enterprise and go out of your way to disable it.
Actually, my bad. NAT was NEVER standardized. Not only was NAT never standardized, it's never even been on the standards track. RFC 3022 is also just "Informational".
Plus, RFC 1918 doesn’t even mention NAT
So yes, NAT is a bug in history that has no right to exist. The people who invented it clearly never stopped to think on whether they should, so here we are 30 years later.
That doesn't really mean much. Basic NAT wasn't eligible to be on the standards track as it isn't a protocol. Same reason firewall RFCs are informational or BCP.
The protocols involving NAT are what end up on the standards track like FTP extensions for NAT (RFC 2428), STUN (RFC 3489), etc.
It didn’t speed up adoption and people then tried most of the other solutions people are going to suggest for IPv4+. Want the IPv4 address as the network address instead? That’s 2002:a.b.c.d/48 - many ISPs didn’t deploy that either
Think of it like phone numbers. For decades people have accepted gradual phone number prefix additions. I remember in rural Ireland my parents got an extra digit in the late 70s, two more in the 90s, and it was conceptually easy. It didn't change how phones work, turn your phone into a party line or introduce letters or special characters into the rotary dial, or allow you to skip consecutive similar digits.
For people who deal with IP addresses, the switch from ipv4 to ipv6 means moving from four dotted numbers (1.2.3.4) to this:
2001:0db8:0000:0000:0000:ff00:0042:8329
2001:db8:0:0:0:ff00:42:8329
2001:db8::ff00:42:8329
Yes, the ipv6 examples are all the same address. This is horrible. Worse than MAC addresses because it doesn't even follow a standard length and has fancy (read: complex) rules for shortening.
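That equivalence is easy to verify with Python's stdlib ipaddress module; several common spellings of one address (using the IPv6 documentation prefix) all parse identically:

```python
import ipaddress

forms = [
    "2001:0db8:0000:0000:0000:ff00:0042:8329",  # full form
    "2001:db8:0:0:0:ff00:42:8329",              # leading zeros dropped
    "2001:db8::ff00:42:8329",                   # longest zero run as ::
]
addrs = {ipaddress.IPv6Address(f) for f in forms}
print(len(addrs))   # 1 -- all three spellings are the same address
```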
Plus switching completely to ipv6 overnight means throwing away all your current knowledge of how to secure your home network. For lazy people, ipv4 NAT "accidentally" provides firewall-like features because none of your home ipv4 addresses are public. People are immediately afraid of ipv6 in the home, and now they need to know about firewalls. With ipv4, firewalls were simple enough: "My network starts with 192.168, the Internet doesn't." You need to unlearn NAT and port forwarding and realise that with already-routable ipv6 addresses you just need a firewall with default deny, plus rules that "unlock" traffic on specific ports to specific addresses. Of course more complexity gets in the way... devices use "Privacy Extensions" and change their addresses, so to make firewall rules work long-term you end up keying them on the device's MAC address. Christ on a bike.
I totally see why people open this bag of crazy shit and say to themselves "maybe next time I buy a new router I'll do this, but right now I have a home with 4 phones, 3 TVs, 2 consoles, security cameras, and some god damn kitchen appliances that want to talk to home connect or something". Personally, I try to avoid fucking with the network as much as possible to avoid the wrath of my wife (her voice "Why are you breaking shit for ideological reasons? What was broken? What new amazing thing can I do after this?").
What is confusing about that? That's like complaining that you can write an IPv4 address as 001.002.003.004 or 1.2.3.4. Even the :: isn't much different from being able to write 127.0.0.1 as 127.1 (except it now becomes explicit that you've elided the zeroes).
While it's possible to write an ipv4 address in a bunch of different ways (it's just a number, right?), nobody does, because the ipv4 standard notation is easy to remember. Ipv6 is not, and none of these attempts to simplify it really work because they change the "format". I understand it and you understand it, but the point here is that it's unfriendly to anyone who isn't familiar with it.
These are all the same address too:
1.2.3.4, 16909060, 0x1020304, 0100401404, 1.131844, 1.0x20304, 1.0401404, 1.2.772, 1.2.0x304, 1.2.01404, 1.2.3.0x4, 1.2.0x3.4, 1.2.0x3.0x4, 1.0x2.772, 1.0x2.0x304, 1.0x2.01404, 1.0x2.3.4, 1.0x2.3.0x4, 1.0x2.0x3.4, 1.0x2.0x3.0x4, 0x1.131844, 0x1.0x20304, 0x1.0401404, 0x1.2.772, 0x1.2.0x304, 0x1.2.01404, 0x1.2.3.4, 0x1.2.3.0x4, 0x1.2.0x3.4, 0x1.2.0x3.0x4, 0x1.0x2.772, 0x1.0x2.0x304, 0x1.0x2.01404, 0x1.0x2.3.4, 0x1.0x2.3.0x4, 0x1.0x2.0x3.4, 0x1.0x2.0x3.0x4
v6 has optional leading zeros and ":: splits the address in two where it appears". v4 has field merging, three different number bases, and it has optional leading zeros too but they turn the field into octal!
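A few of these equivalences can be checked directly: Python's stdlib ipaddress module accepts the plain-integer spelling, and the classic C inet_aton parser (exposed as socket.inet_aton, behavior depending on the platform C library) accepts the merged-field forms:

```python
import ipaddress
import socket

# The plain-integer spelling of 1.2.3.4:
assert ipaddress.ip_address(16909060) == ipaddress.ip_address("1.2.3.4")
assert int(ipaddress.ip_address("1.2.3.4")) == 0x01020304

# Classic inet_aton merges trailing fields: 772 == 0x0304, so "1.2.772"
# parses to the same four bytes as "1.2.3.4" (platform C library permitting).
assert socket.inet_aton("1.2.772") == socket.inet_aton("1.2.3.4")
print("all forms equal")
```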
"Why are you breaking shit for ideological reasons? What was broken? What new amazing thing can I do after this?"
LOL. Yup. What can I do after this? The answer is basically "nothing really", or "maybe go find some other internet connection that also has IPv6 and directly connect to one of my computers inside the network". Which would have been firewalled, I'd hope, so I'd have to punch a hole in the firewall so my random internet connection's IPv6 address can reach the box? How does that work? I could have just VPN'd in from the IPv4 world.
Seriously though, how do I "cherry-pick hole punch" random hotel internet connections? It's moot anyway because no hotel on earth is dishing out publicly accessible IPv6 addresses to guests...
Hardware support for ipv6 hasn't been the limiting factor in a long time. Users higher on the stack don't want to adopt something that makes so many unnecessary changes.
You’re focusing on the technical difficulty of implementing it in software. This is not the problem. IPv6 support is now present in almost every product, but people still refuse to set it up because it’s so different to what they’re used to (I’m not arguing whether the changes are good - they’re just changes). IPv4+ would’ve solved this social problem.
There’s absolutely, utterly zero chance IPv4+ would be adopted. CGNAT is the solution to the social problem.
I don’t even buy your way of thinking - unlike an “engineering” solution or an “incentives” solution, the problem with “social solutions I speculate about” is: they offer nothing until implemented. They are literally all the same, no difference between the whole world of social solutions, until they are adopted. They are meaningless. They’re the opposite of plans.
Like what’s the difference between IPv4+, which doesn’t exist, and “lets pass a law that mandates ipv6 support”? Nothing. This is what the mockery of “just pass a law” is about. I don’t like those guys, but they are right: it’s meaningless.
This is a good point. The same “why should I migrate” would affect it. However, it being so close to the predecessor, it would be much easier to do so, catching way more people who are on the fence.
The IPv4+ could pass through a router that doesn't know about it - the cloud host that receives that packet could interpret it in a special way, in fact you could stuff additional data into the next layer of the stack for routing - it's not like many services beyond TCP would need to support the scheme.
> The IPv4+ could pass through a router that doesn't know about it
It couldn't do that reliably. We don't have any flags left for that. Options are not safe. We've got one reserved flag, which is anyway set to 0, so that's not safe either.
There's the reserved bit (aka the evil bit[1]). Are you saying gear out there drops packets with reserved bit set to 1? Wouldn't surprise me, just curious.
Seems like IPv4+ would have been a good time to use that bit. Any IPv4+ packets could have more flags in the + portion of the header, if needed.
That bit is currently defined as "Bit 0: reserved, must be zero", so there will be network gear out there, that either drops the packet otherwise or resets the bit to 0 when forwarding.
That makes it effectively impossible to ever use then, so a waste of a bit. Too bad they made that mistake when writing the spec. Would have been better if they specified it like most APIs, ie ignore if you get it, carry it if you forward, and set it to zero if you send it.
It depends what you want to achieve. If we had some feature which is actually incompatible and needed everything else to set it to 0, then it would be perfect. It's not a mistake when you don't predict the future.
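For anyone who hasn't stared at the header recently, here's a minimal Python sketch of the field being discussed: the flags share a 16-bit word with the fragment offset, and bit 0 (mask 0x8000) is the reserved bit.

```python
import struct

# The three IPv4 flag bits sit at the top of a 16-bit word shared
# with the fragment offset; bit 0 (mask 0x8000) is the reserved bit,
# jokingly dubbed the "evil" bit by RFC 3514.
RESERVED = 0x8000  # "must be zero" per the IPv4 spec
DF = 0x4000        # Don't Fragment
MF = 0x2000        # More Fragments

def pack_flags_word(reserved: bool, df: bool, mf: bool, frag_offset: int) -> bytes:
    word = (RESERVED if reserved else 0) | (DF if df else 0) | (MF if mf else 0)
    word |= frag_offset & 0x1FFF  # fragment offset is the low 13 bits
    return struct.pack("!H", word)

def reserved_bit_set(two_bytes: bytes) -> bool:
    (word,) = struct.unpack("!H", two_bytes)
    return bool(word & RESERVED)
```

Because the spec says "must be zero" rather than "ignore and forward", conforming gear is entitled to drop or clear it, which is exactly the ossification problem described above.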
This whole discussion reminds me of the beautiful design of UTF-8. They made the single-byte range identical to ASCII, which made backwards compatibility so much easier. It also reminds me of the failure of Intel's Itanium and the success of AMD's x86-64. Engineers often want to abandon backwards compatibility to make a new "beautiful" design, but it's the design with full backwards compatibility that's actually impressive.
It reminds me of python 3. Basically, a huge chunk of people (in my case, scientific programming) get an enormous mess and nothing at all of value until... 3.6 maybe (the infix matrix mult operator). Stunningly, people weren't enthused about this deal.
It would maybe be okay to break some things at the router, but ffs, even in software I have to choose? Why do I need both ping and ping6? This is stupid! They really screwed up by making it a breaking change to the OS and not just to internet routing.
They didn't screw up. They made it a breaking change to OSs because it had to be a breaking change to OSs. If anyone screwed up here, it was the people who made v4, not the ones that made v6.
For ping, I think it originally had different binaries because ICMPv4 and ICMPv6 are different protocols, but Linux has had a dual-stack `ping` binary for a very long time now. You can just use `ping` for either address family.
The only solution is a gov't mandate. China went from almost no adoption to leading the world in adoption (77% of all Chinese internet users) in a few years because they explicitly prioritized it in their last 5-year-plan.
The US government has finally learnt from how vendors break mandates: there's now an IPv6 mandate if you want to sell to the federal government, and waivers are only available to buyers, not vendors, and individually each time.
I wouldn't say "failure". There are many, many IPv6 client devices out there, mostly on mobile networks. And it works great and they do well and the tools all support it very well.
But IPv4 will never, ever die. The rise of NAT as a pervasive security paradigm[1] basically neuters the one true advantage IPv6 brought to the table by hiding every client environment behind a single address, and the rise of "cloud everything" means that no one cares enough about reaching peer devices anyway. Just this morning my son asked me to share a playlist, so of course I just send him a link to a YouTube Music URL. Want to work on a spreadsheet for family finances with your spouse in the next room? It lives in a datacenter in The Dalles.
[1] And yes, we absolutely rely as a collective society on all our local devices being hidden. Yes, I understand how it works, and how firewalls could do this with globally writable addresses too, yada yada. But in practice NAT is best. It just is.
> I wouldn't say "failure". There are many, many IPv6 client devices out there, mostly on mobile networks.
Honestly it's a huge success due to this fact alone.
IPv6 is failure only if you measure success by replacing IPv4 or if you called "time" on it before the big mobile providers rolled it out. The fact that all mobile phones support it and many mobile networks exclusively deploy it tells you what you really need to know.
IPv6 is a backbone of the modern Internet for clients, even if your servers don't have to care about it due to nat64.
I toyed with using IPv6 in my local network just to learn it, and what a headache that was. Ultimately not worth the hassle. I can remember most of the important devices' IPv4 addresses on my network; I can't say the same for v6.
This is the first time I've heard this critique. I think most people don't care if their IP address is easily human readable/memorizable. In my experience when people do deal with ipv4/v6 addresses directly, they just copy-paste.
Man, readability of IP addresses is an important thing. You are not always in a situation where you can simply copy the address.
I can tell what is what simply from the IPv4 address, but when it's IPv6, my dyslexia is going to kick my behind.
Readability reduces errors, and IPv6 is extremely unreadable. And we haven't even talked yet about prefixes, postfixes, that :: range indicator, ... Reading an IPv6 network stack is just headache inducing, whereas IPv4 is not always fun but way more readable.
They could have just extended IPv4 with an extra range, like 1.192.120.121.122, 2.... and you have another 255 IPv4 spaces... They did the same thing for Belgian number plates (1-abc-001) and they will run out somewhere in the year 11990 lol...
The problem is that IPv6 is over-engineered and had no proper IPv4-to-IPv6 transition built in, and that is why, 30 years later, we are still dealing with the fallout.
Genuinely speaking, that sounds like a process issue if you really can't copy/paste. Perhaps you don't have control over whichever scenario you're talking about but not describing, but data entry is famously error prone regardless of whether it's 12 characters or 32, and if you're trying to focus on reliability and avoiding errors, you should be avoiding manual entry at all costs.
Circa 1999 I was working for Cisco as a sysadmin. I got my CCNP through internal training and considered making a career of network administration, but ipv6 changed my mind. It seemed so much more difficult and unpleasant to deal with. I didn't want that to be my day to day work.
I think the same thing happens on a different scale with ISPs. They don't want to deal with it until they have to for largely the same reason.
> It seemed so much more difficult and unpleasant to deal with.
In my experience it's much easier and much more pleasant to deal with. Every VLAN is a /64 exactly. Subnetting? Just increment on a nibble boundary. Every character can be split 16 ways. It's trivial.
You don’t even need to use a subnet calculator for v6, because you can literally do that in your head.
Network of 2a06:a003:1234:5678::555a:bcd7/64? Easy - the first 4 groups.
Network of 10.254.158.58/27? Your cheapest shotgun and one shell please.
If you have a /48 assigned, you’ll burn the prefix in your brain. Leaves 16 bits for the network address.
e.g. you’ll get 2a06:a003:1234::/48 from the ISP - what you’ll really need to remember is the 2a06:a003:1234:xxxx::/64 part. And I use the VLAN id for the xxxx part. Trivial.
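The nibble-aligned scheme above can be sketched with Python's `ipaddress` module. The /48 here is the example prefix from the comment, and mapping the VLAN id onto the fourth group is that commenter's convention, not a standard:

```python
import ipaddress

# Carve per-VLAN /64s out of a delegated /48: the 16 bits between
# the /48 site prefix and the 64-bit host half hold the VLAN id.
SITE = ipaddress.IPv6Network("2a06:a003:1234::/48")  # example prefix

def vlan_subnet(site: ipaddress.IPv6Network, vlan_id: int) -> ipaddress.IPv6Network:
    if site.prefixlen != 48 or not 0 <= vlan_id <= 0xFFFF:
        raise ValueError("expected a /48 and a 16-bit VLAN id")
    # Shift the VLAN id into bits 64..79, i.e. the fourth group.
    return ipaddress.IPv6Network((int(site.network_address) | (vlan_id << 64), 64))

print(vlan_subnet(SITE, 0x10))  # -> 2a06:a003:1234:10::/64
```

No subnet calculator needed: the arithmetic is a shift and an OR, which is the "do it in your head" point being made above.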
At first I thought so too, but IPv6 is actually easier. Instead of CIDR you always have 64 bits for network and 64 for host. You get a public /48 IPv6 prefix that allows for 16 bits of subnets, and then the host addresses can just start at 1 if you really want. So addresses can be prefix_1_1 if you want. And the prefix is easy to memorize since it never changes.
I DO think using 64 bits for hosts was stupid but oh well.
That seems oddly rigid though. I need to known in advance which networks will definitely never need subnetting so I can assign them a /64.
Why have so, so many address bits and then give us so few for subnetting? People shame ISPs endlessly for only giving out /56s instead of /48s, pointing at the RFCs and such. But we still have 64 entire bits left over there on the right! For what? SLAAC? Was DHCP being stateful really such a huge problem that it deserves sacrificing half of our address bits?
The actual intention has always been that there be no hard-coded boundaries within addresses, and that Classless Inter-Domain Routing (CIDR) continues to apply to all bits of the routing prefixes.
I actually think it would have had a better chance of success if ipv6 had embraced the breaking changes to add some killer feature that would have made it worthwhile to upgrade even for entities who didn't need to worry about running out of ipv4 addresses.
IPv6's failure was mostly caused by the IETF's ivory tower dwellers, who seem to generally have no practical experience or understanding whatsoever of how networks are actually built and run today, especially at the small to mid scale.
Small site multihoming, for example, is an absolute disaster. Good luck if you're trying to add a cellular backup to your residential DSL connection.
IETF says you should either have multiple routers advertising multiple provider-assigned prefixes (a manageability nightmare), or that you should run BGP with provider independent address space; have fun getting your residential ISP or cellular carrier onboard with this idea.
IETF has a history of being hostile to network operators. I mean actual network operators - not the people who show up at conferences or work the mailing list who just happen to get a paycheck from a company that runs a network (and have zero production access / not on call / not directly involved in running shit). It's gotten better in the last few years in certain areas (and credit to the people who have been willing to fight the good fight). But it's very much a painful experience where you see good ideas shot down and tons of people who want to put their fingerprint on drafts/proposals - it's still a very vendor heavy environment.
Even the vendor representatives are mostly getting paid to post on mailing lists and show up at conferences.
They're not building products, and they're not supporting, visiting or even talking to their customers. Design-by-committee is a full time job that people actually building things for a living tend to not have time for.
The fact is that already in 1993 routing tables were just too big, and the fact is that having a "flat" address space was always going to mean huge routing tables, and the fact is that because IPv6 is still "flat" routing tables only got larger.
The fix would have been to have a subset of the address space that is routed as usual for bootstrapping ex-router address->AS number mapping, and then do all other routing on the basis of AS numbers _only_. This would have allowed us to move prefix->AS number mappings into.. well, DNS or something like it (DNS sucks for prefix mapping, but it could have been extended to not suck for prefix mapping), and all routing would be done based on AS numbers, making routing tables in routers _very small_ by comparison to now. Border routers could then have had tiny amounts of RAM and worked just fine. The IP packets could have borne AS numbers in addition to IP addresses, and all the routers in the middle would use only the AS numbers, and all the routers at the destination AS would know the routes to the destination IPs.
But, no. Great missed chance.
Well, we still could do this with IPv6, but it would be a lot of heavy lifting now.
EDIT: Ah, I see draft-savola-multi6-asn-pi existed.
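For what it's worth, the table-size win of that sketch is easy to illustrate in miniature. Everything here is invented for illustration (the prefixes are documentation ranges, the AS numbers are from the private range); it models the idea, not any real protocol:

```python
import ipaddress

# Toy model of the scheme above: an off-router mapping service
# resolves destination prefix -> AS number, and a core router
# forwards on AS number alone, so its table scales with the number
# of ASes rather than the number of prefixes.
PREFIX_TO_ASN = {  # the role DNS-or-something-like-it would play
    ipaddress.ip_network("198.51.100.0/24"): 64501,
    ipaddress.ip_network("203.0.113.0/24"): 64502,
}

CORE_TABLE = {64501: "port-1", 64502: "port-2"}  # one entry per AS

def lookup_asn(dst: str) -> int:
    addr = ipaddress.ip_address(dst)
    for net, asn in PREFIX_TO_ASN.items():
        if addr in net:
            return asn
    raise KeyError(f"no mapping for {dst}")

def forward(dst: str) -> str:
    # In the proposal the packet would carry the ASN after the first
    # hop, so core routers never touch the (large) prefix table.
    return CORE_TABLE[lookup_asn(dst)]
```

Only edge routers would need the full prefix-to-AS mapping; everything in the middle keeps a table the size of the AS count.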
> a cellular backup to your residential DSL connection
Hmm, what's the problem? I suppose your home devices should never be exposed to the public internet, and should only be accessible via a VPN like Wireguard. NAT64 is a thing if your home network is IPv4.
BTW what's the trouble with multi-homing? Can't an interface have two separate IPv6 addresses configured on it, the same way as IPv4 addresses?
> BTW what's the trouble with multi-homing? Can't an interface have two separate IPv6 addresses configured on it, the same way as IPv4 addresses?
Yes, an interface can have two separate IPv6 addresses, but that doesn't make it easy.
If you do the easy and obvious thing of setting up two routers to advertise their prefix with your preferred priority when they're available (and advertise it as unavailable when they're not), your devices are likely to configure themselves for addresses on both prefixes, which is great.
Then when you open a new TCP connection (for example), they'll pick a source address more or less randomly... There's an RFC suggestion to select the source address with the largest matching prefix with the destination address, which is useful if the prefix is pretty long, but not so useful when the prefix is 2001:: vs 2602::
Anyway, once the source address is selected, the machine will send the packet to whichever router most recently sent an announcement. Priorities only count among prefixes in the same announcement. If you manage to get a connection established, future packets will use the same source address, but will be sent as appropriate for the most recently received advertisement.
This is pretty much useless, if you want it to work well, you're better off with NAT66 and a smart NAT box.
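The "largest matching prefix" rule mentioned above (rule 8 of RFC 6724's source address selection) is simple to sketch, and the sketch shows why it helps so little when the candidate prefixes come from unrelated providers:

```python
import ipaddress

# Longest-common-prefix source selection: count the leading bits a
# candidate source address shares with the destination. Unrelated
# provider prefixes (2001:... vs 2602:...) share almost nothing, so
# the rule barely discriminates between the two uplinks.
def common_prefix_len(a: str, b: str) -> int:
    diff = int(ipaddress.IPv6Address(a)) ^ int(ipaddress.IPv6Address(b))
    return 128 - diff.bit_length()

dst = "2001:db8::1"
print(common_prefix_len(dst, "2001:db8:ffff::5"))  # 32 shared bits
print(common_prefix_len(dst, "2602:1111::5"))      # only 5 shared bits
```

The addresses here are documentation/illustrative values; the point is the comparison, not the specific prefixes.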
This is so, and it's the same if you use IPv4. IPv6 does not bring any regression here; sadly, no progress either. If you have a server that listens for requests though, such as an HTTP server, I don't see how this setup would be grossly inadequate for the purpose.
I would experiment with advertising two default routes, one with a significantly higher metric than the other. Most / all outgoing traffic would go through one link then. If you want to optimally load both uplinks, you likely need a more intelligent (reverse) load balancer.
> If you have a server that listens to requests though, such as an HTTP server, I don't see how this setup would be grossly inadequate for the purpose.
That's the problem. It sounds like it would work if you do this. The documentation suggests multi homing like this would work. When your server gets a request, it sends back the response from the address it received on... but the problem is what router it sends to; when it sends to the correct router, everything is good, when it sends to the wrong router, that router's ISP should drop the packets, because they come from a prefix they don't know about.
> I would experiment with advertising two default routes, one with a significantly higher metric than the other.
Sounds like it would work, but as far as I've found, the priority metric only works if the prefixes are in the same advertisement. If each router advertises its own prefix, the actual metric used is most recent advertisement wins as default route.
As I recall, I tried Windows, Linux, and FreeBSD and it was circa 2020. 25 years in, bad OS support for a supposed feature means the feature doesn't work.
> BTW what's the trouble with multi-homing? Can't an interface have two separate IPv6 addresses configured on it, the same way as IPv4 addresses?
Because it breaks your network when that router goes away. Your switch ACLs, firewall rules, and DNS records all become invalid because they contain addresses that no longer exist, that your devices continue trying to reach anyway.
Ah, I understand what you likely mean saying "small site multihoming": not a Web site (where it would be trivial), but e.g. a small office.
But with multi-homing you would need to actively test which of your uplinks has Internet access anyway, won't you? And you would have to react somehow when one of your uplinks goes down.
It's easiest to do by abstracting your site away. Make it use a LAN, and do port-forwarding and proxying through a box that knows about the multiple uplinks, and handles the switch-over when one of them goes down. I don't see how it might be easier with IPv4 than with IPv6.
I still assume that you don't want the internals of your office network directly accessible via the public Internet, even when you easily can; VPNs exist for a reason.
In the IPv4 world, it's easy. Just use NAT, and forward everything over your preferred bearer. Have your router ping 8.8.8.8 or something periodically from that WAN interface to verify reachability. If your preferred link goes down, make your backup link the primary route, clear your NAT translation table, and your local devices remain mostly oblivious that anything happened.
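The watchdog logic described above is simple enough to sketch. The reachability probe is injected as a callable so the policy is testable without a network; the interface names are invented, and a real router would probe out of each WAN interface and also flush NAT state on switchover:

```python
from typing import Callable, Sequence

# Pick the first healthy uplink in preference order -- the core of
# the ping-8.8.8.8 failover scheme described above.
def pick_uplink(links: Sequence[str], is_up: Callable[[str], bool]) -> str:
    for link in links:  # listed in preference order, e.g. DSL first
        if is_up(link):
            return link
    raise RuntimeError("all uplinks down")

# e.g. pick_uplink(["dsl0", "lte0"], probe) falls back to "lte0" as
# soon as the DSL probe fails; on switchover the router clears its
# NAT translation table and repoints the default route.
```

The whole decision fits in one function precisely because NAT hides the switchover from every device on the LAN, which is the contrast being drawn with the v6 story above.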
> It's easiest to do by abstracting your site away. Make it use a LAN, and do port-forwarding and proxying through a box that knows about the multiple uplinks, and handles the switch-over when one of them goes down. I don't see how it might be easier with IPv4 than with IPv6.
In the IPv6 world, this is pretty much what you have to do. A whole lot of extra complexity and expense that you didn't have previously.
Extra complexity and expense? You're describing basically the same thing they are. A router that does NAT and decides which link to send the packets over based on connection testing.
And IPv6 has the benefit of a significantly simpler 1:1 NAT.
NPTv6 is rarely used, and so its real world implementations tend to be poorly tested and buggy.
The answer in this case ends up being solutions like explicit web proxies, or alternatively a VPN concentrator or the like from which you can receive a routable prefix delegation, and then run multiple tunnels to satisfy your own availability or policy routing needs. Either way, you’re building some complex infrastructure to overcome regressions imposed upon you at layer 3.
You should be using dynamic DNS, and firewall rules should be on the subnet boundary in this scenario; any decent firewall (including the free pfSense/OPNsense) supports ACLs that follow IPv6 address changes.
That doesn't solve the problem. DNS remains broken until each and every device, assuming VERY generously that it is capable of dynamic DNS at all, realises that one of its prefixes has disappeared and it updates its DNS records. With DNS TTL and common default timeouts for prefix lifetime and router lifetime, that can take anywhere from 30 minutes to 30 days.
> and firewall rules should be on the subnet boundary in this scenario; any decent firewall (including the free pfSense/OPNsense) supports ACLs that follow IPv6 address changes.
This requires you to assign one VLAN per device, unless perhaps you've got lots of money, space, and power to buy high end switches that can do EVPN-VXLAN so that you can map MAC addresses to SGTs and filter on those instead.
> each and every device ... updates its DNS records.
What device on your office LAN should maintain its own DNS records? Advertise your own caching DNS server over DHCP(6), give its responses a short TTL (10 sec), make it expire the relevant entries, or the whole cache, when one of your links goes down. I suppose dnsmasq should handle this easily.
It seems that the discussion turned away from a multi-homed setup (pooling the bandwidths of two normally reliable links) to an HA/failover setup (with two unreliable links, each regularly down).
It either needs to be able to update DNS by itself (a la Active Directory), or it needs to be able to give the DHCP server a sensible hostname in order for DHCP to make this update on its behalf, which most IoT devices cannot.
The amount of ignorance in these IPv6 posts is astounding (there seems to be one every two months). It isn't hard at all. I'm just a homelabber and I have a dual-stack setup for WAN access (an HE tunnel is set up on the router since Bell [my ISP] still doesn't give IPv6 addresses/prefixes to non-mobile users), but my OpenStack and Ceph clusters are all IPv6-only; it's easy peasy. Plus subnetting is a heck of a lot less annoying than with IPv4, not that that was difficult either.
“it’s easy peasy” says guy who demonstrably already knows and has time to learn a bunch of shit 99.9% of people don’t have the background or inclination to.
People like you talking about IPv6 have the same vibe as someone bewildered by the fact that 99.9% of people can’t explain even the most basic equation of differential or integral calculus. That bewilderment is ignorance.
"The shit about IPv4" was easy to learn and well documented and supported.
"The shit about IPv6" is a mess of approaches that even the biggest fanboys can't agree on and are even less available on equipment used by people in prod.
IPv6 has failed to achieve wide adoption in 3 decades; calling it "easy" is outright denying reality and shows the utter obliviousness of the people pushing it and failing to realize where the issues are.
Could you share a list of IPv6 issues that IPv4 does not exhibit? Something that becomes materially harder with IPv6? E.g., "IPv6 addresses are long and unwieldy, hard to write down or remember". What else?
Traffic shaping in v6 is harder than v4. At least it was for me, because NDP messages were going into the shaping queue, but then getting lost since the queue only had a 128-bit address field, and 128 bits isn't actually enough for link-local addresses (you also need the interface index). When the traffic shaping allowed traffic immediately, the NDP traffic would be sent, but if it needed to be queued, the adapter index would get lost (or something) and the packets disappeared. So I'd get little bursts of v6 until NDP entries timed out, and small queues meant a long time before it would work again.
Not an issue in ipv4 because ARP isn't IPv4 so IP traffic shaping ignores it automatically.
Software support is a big one. I ran pfSense. It did not support changing IPv6 prefixes. It still barely does. So something as simple as having reliable IPv6 connectivity and firewall rules with pfSense was impossible just a few years ago for me.
Android doesn't support DHCPv6 so I can't tell it my preferred NTP server, and Android silently ignores your local DNS server if it is advertised with an IPv4 address and the Android device got an IPv6 address.
Without DHCPv6 then dynamic DNS is required for all servers. Even a 56 bit prefix is too much to remember, especially when it changes every week. So then you need to install and configure a dynamic DNS client on all servers in your network.
"I already know enough to be productive, can the rest of the world please freeze and stop changing?"
This is not even that unreasonable. Sadly, the number of IP devices in the world by now far exceeds the IPv4 address space, and other folks want to do something about that. They hope the world won't freeze but would sort of progress.
Network engineering is a profession requiring specific education. At a high level it’s not different from calculus. You learn certain things and then you learn how to apply them in the real life situations.
It’s not hard for people who get an appropriate education and put some effort into it. Your lack of education is not my ignorance.
The company where I work has deployments across the world, with a few hundred thousand hardware hosts (in datacenters), VMs and containers, plus deployments in a few clouds. Also a bunch of random hardware from a multitude of vendors, multiple lines linking datacenters and clouds, and some lines to more specific service providers that we use.
All of it IPv4 based. IPv6 maybe in the distant future, somewhere on the edge, in case our clients demand it.
I find this completely fine. I don't see much (if any) upside in migrating a large existing network to anything new at all, as long as the currently deployed IPv4 is an adequate solution inside it (and it obviously is).
Public-interfacing parts can (and should) support IPv6, but I don't see much trouble exposing your public HTTP servers (and maybe mail servers) using IPv6, because most likely your hosting / cloud providers do 99.9% of it already, out of the box (unless it's AWS, haha), and the rare remaining cases, like, I don't know, a custom VPN gateway, are not such a big deal to handle.
I ran the network team at an organization with hundreds of thousands of hardware hosts in tens-of-megawatts data centers, millions of VMs and containers, links between data centers, links to ISPs and IXes. We ran out of RFC 1918 addresses around 2011-2012 and went IPv6-only. IPv4 is delivered as a service to nodes requiring it via an overlay network. We intentionally simplified the network design by doing so.
I should have been gentler and less arrogant, yes. Sincerely though, please explain how IPv6 is in any way more difficult than a properly set up IPv4 enterprise. What tools are not available?
I left my job as an NE/architect over 15 years ago, but the show stopper back then revolved around how to handle routing with firewalling - firewalling being the biggest roadblock due to needing traffic symmetry. I'm doing my best to remember why we stopped at just providing v6 at the edge for site-specific Internet-hosted services and never pushed it further.
Mind you, our team discussed this numerous times over a few years and never came up with a solution that didn't look like it would require us to completely fork-lift what we were doing. The whole team was FOR getting us to v6, so there was no dogmatic opposition.
Consider this:
25k employee company. Four main datacenter hubs spread out across the USA with 200 remote offices evenly dual-homed into any two of the four.
All four of the DCs had multi-ISP Internet access advertising their separate v4 blocks and hosting Internet services. The default route was redistributed into the IGP from only two locations, sites A and B; i.e., two of the four DCs were egress for Internet traffic from the population of users and all non-Internet-facing servers. IGP metrics were gently massaged so as to use both sites fairly equally.
All outbound traffic flowed naturally out of the eastern or western sites based on IGP metrics. This afforded us a tertiary failover for outbound traffic in the event that both of the Internet links into one of the two egress sites was down. e.g., if both of site A's links (say, level-3 and att) were down, the route through site A was lost, and all the egress traffic was then routed out site B (and vice-versa). This worked well with ipv4 because we used NAT to masquerade all the internal v4 space as site X's public egress block. Therefore all the return traffic was routed appropriately.
BGP advertisements were either as-path prepended or supernetted (don't remember which) such that if site A went down, site B, C, or D would get its traffic, and tunnel it via GRE to the appropriate DC hub's external segment.
The difficulty was that traffic absolutely had to flow symmetrically because of the firewalls in place, and easily could for v4 because NAT was happening at every edge.
With v6 it just didn't seem like there was any way to achieve the same routing architecture / flexibility, particularly with multi-homing into geographically disparate sites.
I'm not sure anymore where we landed, but I remember it being effectively insurmountable. I don't think it was difficult for Internet-hosted services, but the effort seemed absolutely not worth it for everything on the inside of the network.
I want to send my ssh via my low latency reliable connection, I want to route my streaming via another connection. That’s just a routing rule and srcnat in ipv4
That’s before you go on to using PBR. I want to route traffic with different dscp via different routes.
Ultimately I want the routing to be handled by the network, not by the client.
Without NAT, my understanding is that the right way in v6 is to issue addresses from every network, then send a message to each end device asking it to use a specific IP address to route traffic, and hope every client implements RFC 4191 in the right way.
The "proper" way would be to get your own ASN and use BGP to route the traffic.
If you're wanting to use a secondary WAN link as a backup for when the other goes down you could have the backup link's LAN have a lower priority. (So I guess hope everything implements RFC 4191 like you said).
You can use NAT66/NPTv6 if you want (though it's icky I guess).
With NAT on v4, it's trivial. But IPv6 people tell me how terrible NAT is, despite it being the only solution in both the v6 and v4 worlds.
Sadly my 4G provider will not peer via BGP with me, even if I could provide an AS and a sufficiently large IP range.
I think my home ISP will actually peer with me, but I’d have to tunnel to them over my non-fibre connection, and there’s reduced resilience in that case.
At work that wouldn’t help at all, there are very few providers for many of our branch offices.
So once again ipv6 only works with “icky” nat, or on simple 1990s style connections, and not in the real world of multiple providers. Now sure I can do npt which means I don’t need to keep track of state, but then if I didn’t keep track of state I lose the benefits of a stateful firewall.
As such the only benefits of nat on v6 is that source ports will never need to change even if client 1 and client 2 both send to server 1 port 1234 from source port 5555. This helps with a handful of crappy protocols which embed the layer 4 data (port number) in a layer 6 or 7 protocol.
I've been thinking we could simply extend the ipv4 address to be 11 bytes by (ab)using the options field. That is, add an option that holds more bytes for the source and destination address, which are to be appended to the address already present in the header.
I am thinking that since an option starts with 2 bytes and everything must be padded to a multiple of 4 bytes, we can add 16 bytes to the packet, which would hold 7 extra address bytes per source and destination, giving us 11 byte addresses. ISPs would be given a bunch of 4-byte toplevel addresses and can generate 7-byte suffixes dynamically for their subscribers, in a way that is almost the same as CGNAT used today but without all the problems that has.
Most routers will only need to be updated to pass along the option and otherwise route as normal, because the top level address is already enough to route the packet to the ISP's routers. Then only at the edge will you need to do extra work to route the packet to the host. Not setting the option would be equivalent to setting it to all 0s, so all existing public hosts will be automatically addressable with the new scheme.
There will of course need to be a lot more work done for DNS, DHCP, syntax in programs, etc, but it would be a much easier and more gradual transition than IPv6 is demanding.
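That option could look roughly like the sketch below. The option type number is invented (real numbers are IANA-assigned), and as other comments in this thread note, middleboxes that strip or drop unknown options would be the practical obstacle:

```python
import struct

# Hypothetical "IPv4+" option: a 2-byte option header plus 7 extra
# source bytes and 7 extra destination bytes = 16 bytes total, a
# clean multiple of 4 as IPv4 option padding requires.
OPT_TYPE = 0x9E  # made up for illustration, not a registered option
OPT_LEN = 16

def pack_v4plus_option(src_extra: bytes, dst_extra: bytes) -> bytes:
    if len(src_extra) != 7 or len(dst_extra) != 7:
        raise ValueError("each address extension is exactly 7 bytes")
    return struct.pack("!BB", OPT_TYPE, OPT_LEN) + src_extra + dst_extra

def full_address(classic: bytes, extra: bytes) -> bytes:
    # 4-byte classic address + 7-byte suffix = 11-byte address.
    return classic + extra

# A legacy host is simply one whose extension bytes are all zero,
# matching the "option absent == all zeros" rule described above.
legacy = pack_v4plus_option(b"\x00" * 7, b"\x00" * 7)
```

The all-zeros rule is what makes existing public hosts addressable under the extended scheme without any change on their end.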
I don't think so. It would be more confusion because no one will know if a network is ipv4 or ipv4+, leading to edge case bugs and confusion and people will similarly be lazy and choose to only implement ipv4 knowing it will always be reverse compatible and the cost is transferred to the consumer.
Plus, it's only 2048x the address space. It's within the realm of possibility that we will need to upgrade again once this place is swarming with robots.
x2048 is a lot though! Maybe we should let the robots figure out their own solution, rather than trying to make every atom on Earth individually addressable :)
ipv6 adoption is still steadily rising. Not as fast as anyone hoped, but at least steadily. There is no way it can be abandoned at this point even if we wanted to.
I wonder if it could still be usurped by another standard that is somehow more popular. If adoption of that leapfrogs over IPv6, then maybe it will have just been a waypoint along the way.
What would a new standard do that would make it more popular? IPv6, for all its faults, is designed to be the last Internet Protocol we will ever need.
In the new standard every publicly routable packet will include a cryptographically signed passport number of the responsible person.
Then the government could, for example, limit criminals' access to the internet by mandating that their packets be dropped on most major ISPs, or at least deprioritised.
Funny enough I actually looked at a scheme for corporate networks where your personal corporate ID is encoded as part of the host bits of the IPv6 packet and policy could be applied based on who you are instead of what machine it is (or both). It was kind of neat but the complexity was too high for it to gain traction, and also it turns out that most corporate networks are allergic to IPv6 and government networks doubly so.
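As an illustration of that scheme (the prefix and the ID are made up for the example, not from any real deployment), the standard library's `ipaddress` module makes it easy to drop a numeric corporate ID into the host bits of a /64:

```python
import ipaddress

def employee_addr(prefix: ipaddress.IPv6Network,
                  employee_id: int) -> ipaddress.IPv6Address:
    """Place a numeric ID in the low host bits of a /64 (illustrative only)."""
    assert prefix.prefixlen == 64
    # An IPv6Address plus an int yields a new address with the ID in
    # the host bits, so policy engines can match on the low 64 bits.
    return prefix.network_address + employee_id

net = ipaddress.ip_network("2001:db8:0:42::/64")  # example prefix (RFC 3849)
print(employee_addr(net, 0x1234))  # 2001:db8:0:42::1234
```

A firewall could then apply per-person policy by matching the host bits regardless of which subnet (the high bits) the person is currently on, which is presumably what made the idea attractive before the complexity killed it.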
Stripped of all the other baggage that came with it (e.g. SLAAC, IPsec, etc) IPv6 is an incredibly conservative addressing extension. The only thing even more conservative than v6 would have been to drop the lower 64 bits of the address and the associated EUI-64 local addressing scheme. Which... to be fair, that turned out to be a very bad idea, but the length of the field isn't what was holding up v6 adoption.
I suspect by "incredibly conservative" you mean "backwards compatible", which... no. You can't make an addressing extension backwards compatible with hardware that doesn't read all of the address. Of course, we did that anyway with CGNAT, and predictably it causes huge problems with end-to-end connectivity, which is the whole point of IPv6. You're probably thinking more along the lines of an explicit "extension addressing header" for v4. Problem is, that'd mean a more awkward version of IPv6's /64 address split[0], combined with all sorts of annoying connectivity problems. The same corporate middleboxes that refuse to upgrade to IPv6 also choke on anything that isn't TCP traffic to ports 80 and 443. So you'd need Happy Eyeballs style racing between CGNAT IPv4 and "extended IPv4".
Also, that would just be a worse version of 6in4. Because they also thought of just tunneling IPv6 traffic in IPv4 links. I don't think you understand how incredibly conservative IPv6 actually is.
The problem with "incredibly conservative" IP extensions is that nothing beats the conservatism of doing literally nothing. IT infrastructure is never ripped out and replaced unless there is a business case for doing so. The current problem with IPv6 adoption is that nobody has yet said "let's stop processing IPv4 traffic", they've only said "let's get more dual-stack hosts online", which is a process that only asymptotes to 100% IPv6, and never reaches it.
IPv4 was not the first version of the Internet protocol. That honor goes to Network Control Protocol (NCP). The reason why we don't have an asymptotic long tail of Internet hosts still demanding NCP connectivity is because this was back when "having a connection to the Internet" meant "having a connection to ARPANET". The US military could just refuse to process NCP packets and actively did this to force people onto IPv4. Now imagine if someone big like Google said "we're going to stop accepting IPv4 connections" - people would jump onto v6 immediately.
[0] Let's say we add a 32-bit extension header onto IPv4
"Stripped of all the other baggage that came with it..."
But that baggage is a huge part of the problem. Almost nothing you know about IPv4 applies when you switch to IPv6, and most of us found that out the hard way when we tried to make the switch. Leaves a pretty bad taste in your mouth.
I mean this is just wrong. Routing and switching behave exactly the same in V6 vs V4. Details on how you get an IP and what it looks like changed but there's TONS of knowledge shared between the two.
When I configure a new router at my home, routing is barely a blip on the radar. I mean, everything that's not local goes upstream. Switches just switch; I plug in cables and they work.
The things I need to think about are precisely the things that changed radically. Firewall rules aren't the same due to prefix changes and no NAT. DHCP isn't the same, DNS isn't quite the same, distributing NTP servers isn't the same.
Almost nothing of what I knew about configuring my home router for IPv4 has transferred to IPv6 configuration.
"Details on how you get an IP and what it looks like changed but..."
This is exactly what I'm talking about. When you have problems with your IP network, that's the first thing you try and figure out, "what's my address? Why is that my address? Did it change? If so, why? Are other devices able to get packets? What are their addresses? Why can those addresses get packets but this address can't?"
> The current problem with IPv6 adoption is that nobody has yet said "let's stop processing IPv4 traffic"
Mobile carriers have done that between consumer devices and network towers. That forced a lot of innovation (including tools like better DNS64 and "happy eyeballs" protocols) and network stack hardening.
The rollout of CGNAT in some cases is "let's drop IPv4 traffic randomly", and "happy eyeballs" in consumer devices is transparently driving a lot of consumer traffic to IPv6.
This is why mobile and consumer devices are leading the pack on IPv6 adoption.
Maybe it's not all of Google that next needs to say "we're going to stop accepting IPv4 traffic"; maybe it's more specifically GCP (and AWS and Azure) that need to do that to drive the non-consumer IPv6 push we need. The next best thing would be for all the cloud providers to at least start raising IPv4 address prices until their clients start to feel them.
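The "happy eyeballs" racing mentioned above (RFC 8305) is easy to sketch with asyncio. This toy version simulates the connection attempts with timed sleeps instead of real sockets, but the control flow is the real thing: start IPv6 first, start IPv4 a beat later, keep whichever wins.

```python
import asyncio

async def attempt(family: str, connect_delay: float) -> str:
    # Stand-in for a TCP connect; the delay simulates connection time.
    await asyncio.sleep(connect_delay)
    return family

async def happy_eyeballs(v6_delay: float, v4_delay: float,
                         head_start: float = 0.25) -> str:
    """RFC 8305-style racing: prefer IPv6, fall back to IPv4 quickly."""
    v6 = asyncio.create_task(attempt("IPv6", v6_delay))
    timer = asyncio.create_task(asyncio.sleep(head_start))
    done, _ = await asyncio.wait({v6, timer},
                                 return_when=asyncio.FIRST_COMPLETED)
    if v6 in done:                      # IPv6 connected within its head start
        timer.cancel()
        return v6.result()
    v4 = asyncio.create_task(attempt("IPv4", v4_delay))
    done, pending = await asyncio.wait({v6, v4},
                                       return_when=asyncio.FIRST_COMPLETED)
    winner = done.pop().result()
    for t in pending:                   # abandon the losing attempt
        t.cancel()
    return winner

# Healthy IPv6 wins; broken/slow IPv6 falls back to IPv4 after ~250 ms:
assert asyncio.run(happy_eyeballs(v6_delay=0.05, v4_delay=0.05)) == "IPv6"
assert asyncio.run(happy_eyeballs(v6_delay=5.0, v4_delay=0.05)) == "IPv4"
```

In practice you don't write this yourself: Python's `asyncio.open_connection()` accepts a `happy_eyeballs_delay` argument (since 3.8) that does the racing over real address families.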
> The current problem with IPv6 adoption is that nobody has yet said "let's stop processing IPv4 traffic"…
One of the giant CDNs translates all IPv4 traffic to IPv6 at the edge (stateless NAT46) and is IPv6-only in its core network (for one of its primary product networks; like everybody they have multiple networks.)
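Stateless translation like that works because a 32-bit IPv4 address embeds directly into an IPv6 prefix, so no per-flow state table is needed. Here's a sketch of the embedding using the RFC 6052 well-known prefix 64:ff9b::/96 (the CDN's actual internal prefix may well differ):

```python
import ipaddress

WKP = ipaddress.IPv6Network("64:ff9b::/96")  # RFC 6052 well-known prefix

def v4_to_v6(v4: str) -> ipaddress.IPv6Address:
    # Stateless mapping: the IPv4 address becomes the low 32 bits.
    return WKP.network_address + int(ipaddress.IPv4Address(v4))

def v6_to_v4(v6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    # The reverse mapping just masks the low 32 bits back out.
    return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)

mapped = v4_to_v6("192.0.2.33")
print(mapped)                               # 64:ff9b::c000:221
assert str(v6_to_v4(mapped)) == "192.0.2.33"
```

Because the mapping is a pure function in both directions, any translator at the edge can handle any packet, which is what makes an IPv6-only core practical.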
Multiple networks do the same - both T-Mobile (at least in the EU) and Orange no longer actually support v4 other than through funky 464XLAT, and by funky I mean really funky at times.
Truth is, there are too many devices that only speak IPv4 or have an untested IPv6 stack. People still can't even agree on how an IPv6 address should be represented.
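To that last point: the same IPv6 address has several legal spellings, and RFC 5952 defines a single canonical form (lowercase, longest zero run compressed). Python's `ipaddress` module normalizes to it:

```python
import ipaddress

# All of these spell the same address; RFC 5952 picks one canonical form.
forms = [
    "2001:0db8:0000:0000:0000:0000:0000:0001",  # fully expanded
    "2001:db8:0:0:0:0:0:1",                     # zeros not compressed
    "2001:DB8::1",                              # uppercase hex
]
canonical = {str(ipaddress.IPv6Address(f)) for f in forms}
print(canonical)  # {'2001:db8::1'}
```

This is exactly the kind of thing that bites naive string comparisons in configs, ACLs, and log greppers.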
I think this is defeatist talk where it's not warranted. I remember IPX networks in the 90s were still a thing because people believed they could eke out a little more performance for their games. It's taking a long time to move to IPv6 in some parts of the world, e.g. anyone who doesn't feel the pain of the IPv4 address crunch, likely due to having had a large chunk to begin with. Many influential organizations in North America definitely fall into that category.
IPv6 is a success IMHO because it is used in so many places. Google’s IPv6 traffic graph shows close to 50% adoption and still trending up. We can’t possibly expect the world to be near 100% overnight… the internet is a big place with the whole spectrum of humans influencing IT; There will always be someone who will cling to IPv4 for dear life.
> I'm not proposing to abandon ipv6, but at this point I'm really not sure how we proceed here. The status quo is maintaining two separate competing protocols forever, which was not the ultimate intention.
The end game will be a cryptographically large address space allocated based on some cryptographic operation, rather than a committee carving up the space arbitrarily.
Tor already does this; address allocation is not a problem.
I think they used to use hashes, but now use Ed25519 public keys.
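For the curious, a v3 onion address is derived from the 32-byte Ed25519 public key roughly like this (a sketch following the published v3 onion-service spec, not production code):

```python
import base64
import hashlib

def onion_v3_address(pubkey: bytes) -> str:
    """Derive a v3 .onion address from a 32-byte Ed25519 public key."""
    assert len(pubkey) == 32
    version = b"\x03"
    # Two-byte checksum over a fixed tag, the key, and the version byte.
    checksum = hashlib.sha3_256(
        b".onion checksum" + pubkey + version
    ).digest()[:2]
    # base32(pubkey || checksum || version) -> 56-character label
    label = base64.b32encode(pubkey + checksum + version).decode().lower()
    return label + ".onion"

addr = onion_v3_address(bytes(32))  # dummy all-zero key for illustration
print(addr, len(addr))              # 56-char label + ".onion"
```

So the "allocation" is just key generation: whoever holds the private key owns the address, no registry required, which is the property the parent comment is pointing at.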
Obviously, Tor is not suitable for most tasks.
No one should have to pay for the extra latency if they don't need the anonymity.
The real problem is routing in these address spaces, and there have been a few projects like CJDNS which try to solve it.
Imagine every address along a major road is 3 digits, and some shortsighted post office code assumes 3. Your business is 845 Oak St. One day they say hey, this road is getting too long, let's update that code to support 10 digits and we never worry about this again.
Oh and btw, your address is now 9245593924 Oak St.