Intel Foundry Services and Arm announce collaboration on SoC design (intel.com)
438 points by mepian on April 12, 2023 | 189 comments


This seems absolutely inevitable.

If IFS is serious it needs to be fabbing SoCs with Arm CPUs and to be as competitive as possible with TSMC.

Nothing forces IFS to make Arm CPUs that compete with Intel's desktop and server products.

Of course, the two companies have a long history of collaboration including Intel having an Arm architecture license via StrongARM. There is an interesting anecdote from Robin Saxby of Arm about when Steve Jobs phoned him and tried to persuade him to block the sale of StrongARM to Intel but Saxby convinced Jobs otherwise.


Intel sold off XScale (née StrongARM) just before the touchscreen smartphone boom started. Seems like they would have been well positioned to take the lead in this market. Apple probably would have used an XScale SoC for the iPhone if Intel had taken it seriously.

Strategic blunder of the century so far?


Xscale sale was announced 7 months before the iPhone was announced, but presumably way after work had started and Apple had asked Intel for a processor. A truly poor decision.

> There was just one problem: The PC era was about to end. Apple was already working on the iPhone, which would usher in the modern smartphone era. Intel turned down an opportunity to provide the processor for the iPhone, believing that Apple was unlikely to sell enough of them to justify the development costs.

https://www.vox.com/2016/4/20/11463818/intel-iphone-mobile-r...


> 7 months before the iPhone was announced, but presumably way after work had started and Apple had asked Intel for a processor

You say that as if Intel should have known that the iPhone would be an explosive hit. They probably assumed it would be yet another Blackberry competitor, not really a threat to their desktop hegemony.


Think of why the iPhone existed in the first place. It was because Jobs saw the writing on the wall and knew that cell phones were bound to destroy the market for the iPod, Apple's moneymaker. Before making the iPhone he made the Rokr, and then Jobs threw massive amounts of resources into the iPhone at large risk to Apple. He bought up the patents for multi-touch back in 2005.

I honestly think people overlook this aspect of Jobs' leadership: basically from the time he returned to Apple he doubled down on mobile computing repeatedly, throwing 50% of Apple's marketing budget into the unproven iPod, for instance. Why was Jobs able to see the writing on the wall and not Intel? I think Steve Jobs himself said it best:

https://www.youtube.com/watch?v=P4VBqTViEx4

Intel itself was NOT totally blindsided by the mobile revolution. Intel Atom development started in 2004. Intel Atom could have dominated the market. There were Intel execs pushing Atom. But all the R&D was pumped into the desktop and server business, the quality silicon was reserved for the desktop and server business, Intel's execs didn't want these inexpensive chips for what were presumed to be people's secondary computers to cannibalise its expensive desktop and server chips, and the strategic vision was simply bad. You can apologise for it, but the mistake may have doomed the entire business, and plenty of companies did not make the same mistake.

Intel wouldn't even exist today if it hadn't already made just such a shift away from the memory market into the microprocessor market, firing most of the company in the process. Intel had leaders with strategic vision and guts back in the day, and the later senior leadership simply weren't good enough to commit to mobile when that bridge had to be crossed.


> Intel Atom could have dominated the market

Depends on what you call Atom, but if you mean Atom as it was released then absolutely not, it had exactly zero fighting chance. IIRC the first Atoms were Pentium 4-ish designs fabricated on a smaller process, resulting in lower power draw. The only meaningful use for early Atoms was crappy netbooks.

However, it was, or could have been, an important stepping stone towards x86-based low-power devices. Remember, it was also the time when Linux-on-ARM started booming. It could have been an alternative to ARM SoCs, but there were simply too few attractive offerings, and Broadcom/Freescale/TI effectively ate the market.


I think you’re missing the broader point. The “netbook” form factor was absolutely on to something, but it failed to deliver. The Atom processor is what enabled the form factor, and practically set the trajectory for laptops for the next 10 years.

Everyone has rose-tinted glasses and short memories when it comes to just how far computers have come in the last 15 years. For reference, netbooks predate iPads by 3 years. They came out about the same time as the first iPhone.

The low-power, long-battery-life computing platform was for the longest time a dream. Netbooks with 4-8 hour batteries existed at a time when a top-of-the-line laptop would get only 3 hours of battery life and a normal consumer-grade laptop would top out at 1-2 hours, and they cost a quarter of the price.

If you squint hard enough, there is no discernible difference between the aspiration of the netbook platform and a modern Chromebook or iPad with a keyboard, but Intel doesn’t make processors for either platform.


> The “netbook” form factor was absolutely on to something, but it failed to deliver.

Completely agree here; I call Atom an important stepping stone. However, as you mentioned, Intel does not make processors for the descendant platforms.

As I have elaborated in a different comment, I guess the major reason Atom failed was market positioning. They were effectively eaten by ARM chips getting more powerful from one end and mobile CPUs getting more power efficient from the other end.

So while I agree that Atom-based netbooks helped start the whole mobile compute market, my argument is that the market positioning of Atom products meant that they were destined to be replaced. IMO success of Atom required somewhat more useful PC-like devices AND highly integrated SoCs for appliances.


The first Atoms were their own kind, with design principles closer to the P5 than even the PPro. It was slow. I'm unsure if it was slower than contemporary Arm cores, but for sure it had a way lower IPC than the Pentium 4.


IMO, ARM won not because it was measurably faster, but rather because there was an ARM-based offering across the whole spectrum of SoC needs. In the appliance/embedded market Atom was firmly at the high end, and yet in the PC-ish market Atom was very much low end.


Yes, but Nokia, Ericsson, Sony and Microsoft could have done it, if it weren't for their mismanagement.

People also overlook the mobile phone market outside the US.


At that time, that would not have been a bad take.

The big telcos were handing out data in miserly amounts and at eye-bleeding prices (think 50 bucks for 1MB, then another 30-50 for SMS). AT&T's willingness to pair it with an unlimited plan set the iPhone wildly ahead of all the other phones in that category. VZ messed up big on that one. They were touting how amazing their network was while their competition was letting people surf the net from their phones; VZ was meanwhile still charging silly rates for SMS. Meanwhile I could open a connection on an iPhone and use an online chat service or the unlimited SMS. All of the other phones from that time could do similar things; it was the per-month charge to do simple internet things that people balked at. Then on top of that it was Apple, so it had a bit of cool to go with it. I personally think, though, that without that data plan the thing would have been a dud.


> The big telcos were handing out data in miserly amounts and at eye-bleeding prices (think 50 bucks for 1MB, then another 30-50 for SMS). AT&T's willingness to pair it with an unlimited plan set the iPhone wildly ahead of all the other phones in that category.

From what I recall, the original iPhone only supported 2G, while contemporary phones supported 3G. I remember a lot of criticism around the iPhone web experience because of that, even though, to everyone who used it, the interface was clearly superior.

It looks like there were relatively competitive data plans available[1] that offered unlimited data. I'm not really convinced that that was the problem.

---

1. https://arstechnica.com/gadgets/2007/04/data-plans-the-barri...


> From what I recall, the original iPhone only supported 2G, while contemporary phones supported 3G.

Yep. Only EDGE and quad-band GSM ("2G"). The second generation iPhone was called the "iPhone 3G" specifically because it supported tri-band UMTS ("3G") with HSDPA. In fact, some even back-named the first generation iPhone to "iPhone 2G".


They didn't have to know that the iPhone would be an explosive hit, but they should have seen that smartphone/PDA-like device sales were increasing at 30-50% a year at that time, and there was a definite feeling that this was the future.

Even if the iPhone hadn't been an explosive hit and smartphone growth hadn't exploded, 10% of Blackberry's market would have meant millions of processors sold to Apple in the first years, plus to any other Blackberry competitor that was interested.


I don't want to defend Intel, but Apple's own announced hope at the time of the iPhone introduction was to "eventually reach 1% of the phone market".


Apple was just reassuring customers and the market, by telling them that their tiny size and newbie market status was an advantage, not a disadvantage.

But they clearly had high ambitions for their new revolutionary ergonomic mobile "phone" + "music player" + "internet". They wanted to define a new indispensable market and they wanted to own it. They already had a multi-year product path and design pipeline for regular major upgrades in place.

The iPhone was intended to be a "next big thing" from the start.

And it seems unlikely Steve was selling Intel on doing a new chip for 1% of the phone market. He was telling Intel this was the future, in some form.

Apple was an objective market demonstration, with no ambiguity, that there was now a need for low-power-first (as opposed to speed-first) chips that would go to other suppliers if Intel didn't move.

This was a leadership failure at Intel. They happen.


I always thought it was a response to other cell-phone makers integrating music-player functionality into their cell phones, threatening Apple's music-player revenue stream.


Me too. Ballmer's reaction to the iPhone was not obviously stupid at the time, especially given Microsoft's business model of direct enterprise sales.


> there was a definite feeling that this was the future

Some people thought that, yeah. But not everybody, and not in the same places Intel sold their chips to.

Anyway, Intel sold their foundry exactly because its per-unit profit was way too small for them. Since then, the company only got more expensive, so they wouldn't have been able to keep it.


ARM is open. So that means they would have to compete with the whole world on both chip design and fabbing, for a low-power chip, with margins lower than what they were used to.

Success (as measured in valuation back then, not today; that's how managers measure themselves) was far from certain.


Right. Big-company disease caused Intel to miss the biggest CPU market and relegated them to big metal servers because they got undercut. It's still a miss for the business and the CEO regardless of how managers/VPs measure themselves. CEOs are supposed to take a longer-term strategic view. Since the mobile revolution, Intel stock peaked at 2x its pre-2007 level, while Qualcomm has done about 2x better than that. Of course, part of that is cellular chips/licensing. But there were also other CPUs competing with Qualcomm. Maybe that's why Intel didn't bother to compete (a Qualcomm SoC is CPU+cellular, if I recall correctly).

It's still a lesson on the importance of keeping a foothold in even the lowest-volume market, even if you're not making any money, as long as the cost is sustainable: cutting every unprofitable part of the business leaves you open to disruption. The same thing happened to Microsoft with smartphones and Google with social.


>Of course, part of that is cellular chips/licensing. But there were also other CPUs competing with Qualcomm. Maybe that's why Intel didn't bother to compete (a Qualcomm SoC is CPU+cellular, if I recall correctly).

Intel had a cellular modem business which they formed in 2011 and sold to Apple in 2019.


That was 4 years after smartphones became a thing. It was a bit too late, and no one really bought their chips for high-end phones.


> Intel had a cellular modem business which they formed in 2011

They acquired it* from Infineon. And completely messed it up.


Sadly staying in everything is death by a thousand cuts. It is hard to have all the strategies at once.

The magic is finding out which of those can fly, and doing well with new fields is a leader quality, not a manager one. Although sometimes managers have a better feel for the obvious short-term moves.


It is also a lesson for the cost-cutting that is popular right now. Sure, trim fat and waste, but R&D and test projects in new areas are important for potential future markets. Keep some of that alive even if it currently isn't profitable, so that you have paths to future markets.


I mean, Blackberry was huge and the market was still rapidly growing with no signs of slowing down. The addressable market was even bigger since even back then lots of people didn’t have cell phones and the majority were still feature phones (but more and more people were ditching landlines instead).

Sure, it can be hard to know when to stay in, but the “death by a thousand cuts” is not a financial problem. These are often rounding errors to continue. It’s a question of focus. And Intel clearly made the mistake of thinking they didn’t need to focus on mobile.


> ARM is open

Qualcomm still has a 60-70%+ market share in the $300+ Android phone market, and their gross profit margin was similar to Intel's until last year, so it's obviously doable. And XScale/Intel was probably the best-positioned company to dominate the ARM SoC market back in the mid-2000s.


Well, they also have success in WiFi and cellular, ironically two markets that Intel also participated in and never succeeded in, ever. Truly an astonishingly mismanaged company.


I feel that when every management book starts to bring up a company as an example, one should just sell the stock within 2-3 years. Like Intel and its OKRs.


ARM is not open... You can license the IP.


That wasn't the only poor decision they made while Craig Barrett was CEO. They also doubled down on the Itanic processor, refused to make a 64-bit x86 CPU until AMD forced them to, and were pushing mega-expensive RAMBUS memory with their power-guzzling P4 processor. Nearly everything they were doing was a bad decision. If anything, it's a case study in why engineers should not be CEOs. (Barrett was an engineer; he was replaced by Otellini, who was not, and who turned things around for Intel.)


It’s worse than that. Apple had just switched from PPC to Intel, and Jobs approached Intel in 2005 to make the iPhone SoC.

From Paul Otellini exit interview in 2013

>"We ended up not winning it or passing on it, depending on how you want to view it. And the world would have been a lot different if we'd done it… The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do... At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn't see it. It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought.”

The title of the article is "Paul Otellini's Intel: Can the Company That Built the Future Survive It?” and that is still a good question a decade later.

https://archive.is/B0Bbs


Well, that is Intel's view. Inside Apple, their engineers refused to use anything Intel and threatened Jobs that they would quit.


The biggest mistake of Paul Otellini for sure. The world would look very, very different if he had said yes.


Before becoming Intel's CEO, Paul Otellini served as executive VP of sales and marketing. This uber-blunder is a great example of what happens when you let a "bean-counter" run a tech company.

EDIT: corrected factual error. Paul was never a CFO, but much of his career was in sales and marketing. Sales do count as "bean-counters" because their main metric is P&L.


That's not correct. Otellini did not serve as CFO at any time and did not have a finance background. Please check your facts before making a comment like this.

Edit: I see you've changed your comment without acknowledgment. Still wrong: 'sales and marketing' isn't 'bean counting', and Otellini had extensive 'tech' experience.


Thanks for pointing out a factual error. I corrected it and added an explicit EDIT paragraph.


Some of their worst years were under Krzanich, who had an engineering background… I think it is hard to guess who'll be a good CEO for a tech company. Someone like Lisa Su, with a background in engineering but also lots of R&D, seems like the best pick. The CEO doesn't need to do the i-dotting, t-crossing engineering stuff anyway; having a good conceptual idea of what is upcoming and possible is more important.


Seems like what you need is a triumvirate... an idea guy (The Jobs), a tech guy (The Wozniak), and a 3rd person whose main job is to keep the first two from killing each other, and who ideally has a sense for operations and PR.


Jobs wasn't too bad at tech, he passed Al Alcorn's job interview at Atari after all.


Probably "The Eric Schmidt" would be that third person


I mean you should probably just own the whole mistake — in no world are bean-counters sales people.

https://dictionary.cambridge.org/dictionary/english/bean-cou...


Disagree. Sales leaders are generally also looking for continued growth 3 years out. Sales know there are sprints, but you are also running a marathon where continued YoY growth is expected. They should have seen the smartphone potential and started chipping away to grow their base long term.

Similar to why the best sales leaders only do deals where there are renewal possibilities 2-3 years out.


Have there been any successful major companies run by former CFOs? My experience here is that it is often a death sentence via a long road or stagnation.


Heh, I was at Intel in the late 90s when they were internally worried as heck about Transmeta (no relation to LGBTQ or the company Meta :-)

https://www.zdnet.com/article/arm-shares-tumble-as-transmeta...

In 1997 Intel was terrified of Transmeta, and it lit a fire under their ass to get the Celeron out more quickly...

A colleague and I worked an F-ton testing not only the Celeron, but AGP, SIMD, OpenGL, Unreal Engine, and all the latest games...

But Intel paid game devs a million marketing bucks if they would optimize their games (and general software) for the SIMD instructions, to make the game feel subjectively more responsive on the Celeron as opposed to AMD and the seemingly incoming Transmeta.

So I think that Transmeta was the big blunder, but in reverse: Intel should have been all over gaming, and not been afraid of some plucky startup...

But recall that all the senior engineers at Intel from before 1995 had slide rules and pocket protectors... they didn't see the entertainment wave of computing early enough, made far too many mistakes, and tried to catch up... personally, I think Linux saved Intel from bankruptcy.


Transmeta was attractive to embedded HW developers because it had a PCI host controller built in; you didn't need a separate northbridge chip, and the speed was less important.


Blackberry phones peaked in 2009; there was plenty of time to reconsider. This is just blatant revisionism; there were plenty of on-ramp opportunities within the product timeline.

Apple didn't move off Samsung entirely until the A8, and they still rely on many Samsung hardware IPs within the SoC to this day. It was not just a fab arrangement from day 1. https://twitter.com/calebccff/status/1472517091970494465

Apple relied on many of the same suppliers as the rest of the industry for a very long time, except the ones they acquired and closed off. Purely on a "rising tide lifts all boats" basis, Intel would still have had access to the dominant player if the ARM-based touchscreen era wiped out the incumbents. However, this is still a very low-margin business, despite requiring more engineering effort than desktop.

You could argue that $1500 phones were unthinkable at the time, but as a commodity, chipset manufacturers see less than 5-10% of that, even on flagship phones.


Intel actually designed a CPU for the iPhone. Apple didn't use it because it failed to meet their performance/thermal requirements. Intel was not happy about this.


Sourcing for this? This is a very under-reported story IMO; the main source we have is self-serving comments from the guy who screwed it up, saying “we passed because it wasn’t going to be profitable at the projected volumes”. But I don’t see how Intel would even have been capable of being the sort of partner that mobile SoC development requires.


Consider this a rumor until I can remember the source.

There was a deep-dive article I came across about this a while ago, and the only other interesting bits I've never seen anywhere else were that:

- Intel was offering a chip based on core/atom and couldn't hit the required performance/TDP

- Apple wanted a custom chip meeting the required specs, but this is where Intel balked due to estimates of low demand to design and fab it

- Apple went with ARM, and the PowerVR GPU was an important part of this story

Sometimes I wish I had a searchable full text index of articles I found interesting but didn't see value in bookmarking


> Sometimes I wish I had a searchable full text index of articles I found interesting but didn't see value in bookmarking

I hope you won't mind this reply that isn't related to the topic, but this part of your comment stood out to me.

The way I get around this is by just saving the bits of articles I find interesting for later full-text search, and on the side I make topic-specific feeds of what I'm saving available via RSS for others to subscribe to.[1]

This paradigm shift from saving/bookmarking entire articles to saving highlights has worked really well for me, maybe it could work for you?

[1]: https://notado.app/feeds/jado - I run the service my highlights and feeds are hosted on, but if you're not interested in a subscription service that does this I don't think it's particularly difficult to spin up something yourself over a weekend or two
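
If you do want to spin something up yourself, here's a minimal sketch of a personal highlight index using nothing but Python's standard library (assuming your sqlite3 build includes the FTS5 extension, which most do; the table and function names here are just illustrative):

    import sqlite3

    db = sqlite3.connect("highlights.db")
    # FTS5 virtual table: every column is full-text indexed
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS highlights USING fts5(url, excerpt)")

    def save(url, excerpt):
        # store just the interesting fragment, not the whole article
        db.execute("INSERT INTO highlights (url, excerpt) VALUES (?, ?)", (url, excerpt))
        db.commit()

    def search(query):
        # full-text match, best-ranked hits first
        return db.execute(
            "SELECT url, excerpt FROM highlights WHERE highlights MATCH ? ORDER BY rank",
            (query,),
        ).fetchall()

From there, a topic-specific RSS feed is just a template rendered over the rows you've saved.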


Thanks for the kind suggestion, I'll look into this


Was it an x86 core rather than Xscale?


Intel doesn't invest in anything besides x86. They let XScale languish while they owned it and even if the iPhone used XScale, Intel still would have found a reason to under-invest in it.


But Intel did respond with x86 SoCs for smartphones. Merrifield, Moorefield and Airmont.


I still have an Asus android tablet with an Intel CPU, circa 2014.

It wasn't that great then and it deserves death now. Zero ability to upgrade or get third party OS on it due to those processors not taking off.


And they were plagued with compatibility and performance problems.

Almost bought my niece an Android tablet with an x86 CPU. They were priced super low. Turns out there was a reason for that. Apps ran like shit and any games written using the Android NDK just straight-up didn't work.


To me this seems to have been an attempt to leverage their dominant position at the time to further strengthen it, and it eventually failed to strong-arm the industry.


Xscale didn't have a GPU built into the SoC.


It only cost Otellini his cushy job


The non-touchscreen smartphone boom had been going on for years prior.

The problem was that embedded OSes like Symbian, A200, and PalmOS were running circles around Windows Mobile. Most of them used Samsung SoCs which were a few times slower than StrongARM.

StrongARM was seen exclusively in the context of being used to "lift" extremely bloated WinMo smartphones, and was otherwise thought of as overkill for other uses.


I recently went from a massive Intel gaming rig (10000-series i9, 3080, and the 1200W power supply to match) to the basic $1999 Mac Studio (albeit joined with several 2TB external SSDs from the old machine).

Night and day... and the Mac is actually faster for programming type stuff and the audio stuff I mess around with - on top of being a much more pleasant environment for both.

Even for gaming it's a lot more competent than you'd expect, especially for games that have been ported to Silicon. WoW runs at over 100fps (NB: 1440p, 165hz gaming monitor) with settings turned up pretty high. Factorio runs awesome, and just got a silicon port that bumped performance another 20% or so.

Obviously it's not so good at AAA games, but I have a PS5 for that, anyway. A lot more of my Steam library was available than I expected... all the Paradox grand strategy stuff, lots of tycoon/city/base builders...

On a day-to-day level it's just so much more pleasant... effectively silent (not recording-studio silent, there's a very, very slight noise, but it's well below the noise floor if the household HVAC is running, or you have any sound at all playing).

Even maxed out (like a chess engine using all cores) the fans barely come off idle and the machine temp peaks in the high 50s. At idle, it sits in the very low 30s, only 10C above ambient - and things that are on the edge of the SoC, like the RAM, are only 5C above ambient.

The old i9 beast would idle in the 60s and go all the way into the 90s under load, with fans sounding like jet turbines.


How does this description of your hardware purchases relate to the post you replied to? Or to the news article?

Yes, the 10900K, as the biggest 14nm consumer chip, was a power guzzler and looks bad compared to Intel/AMD/Apple's current lineups on new processes.


These 10900K or 11-series anecdotes always crack me up. I switched from an M1 Pro to a Ryzen 6800H and it feels just as fast as the much more expensive Apple laptop. It's not even using TSMC 5nm like the new Zen 4 parts.


I think the majority of what made the M1 lineup so successful was that they were on TSMC's 5nm before anyone else and had that exclusivity.

The M1 Mac mini was released in November 2020... Zen 4, the next CPU to use the same node, came along in September 2022. Nearly a full 2 years.

Cinebench scores... The M1 Ultra got 24189 multithreaded... not that impressive, but its TDP is 60W... at 65W the 7950X (eco mode) scored 31308 in the same test, down from 38291 at full power usage (170W).

It will be interesting to see if apple pushes for another exclusivity deal with TSMC for 3nm.


> It will be interesting to see if apple pushes for another exclusivity deal with TSMC for 3nm.

It’s pretty much guaranteed, as it’s part of the deal Apple has with TSMC. They bankroll a significant part of the costs for a new process node; in exchange they get exclusive access to the node for a number of years.


AMD is yet to release a TSMC 5nm mobile part (low power 5nm), so the exclusive foundry lead Apple's been paying for is more like 2.5 years. It is my understanding that things are pretty much the same with TSMC's 3nm node. Apple's lead is all but guaranteed for at least a couple more years.


I'm curious if anyone has a similar story but running Linux instead of macOS? I have a DIY AMD 5600X system (128 GB RAM, two M.2 drives) that is fast but warm. In the summer it can be uncomfortable in my upstairs office, despite A/C. I could be tempted into trying an ARM workstation if I thought it would make a big difference on heat.

I don't really do gaming, except occasional Minecraft with the kids. My GPU is a 10+ years old GeForce GTX 560, and it seems fine. (But maybe that only works with x64? I don't know.)

I've already had a taste of Apple's ARM speeds with my M1 Macbook Air. Running `make clean && make` on Postgres is practically instant. I don't understand what's going on there to make it so fast. But for my daily work I'd rather be in Xubuntu. Is anyone out there doing something similar?


No experience myself but I've heard good things about Asahi Linux (https://asahilinux.org/). Not sure how usable it already is as a daily driver.


For the heat, try undervolting the CPU; there's even a utility by 1usmus to help you get started with the settings. If you are willing to tinker, you can get some incredible power efficiency.


Counterpoint to the "but I have a PS5 for that": if you develop video games or work in CAD every day, Macs just aren't a sensible option.

Regarding recording studios, PCs are generally kept in a machine room away from everything else.

Great that it works for you, but for a lot of creative industries, the Mac doesn't make sense in 2023.


Yea, I’m aware how recording studios work. Those of us making amateur recordings at home don’t always have the luxury.


I'm really curious what is holding CAD companies back from porting their software to the Mac?


This is a great question that people were asking even 15-20 years ago, likely even before then, but that was before my time.

Essentially, the demand for CAD on Mac just isn't high enough to justify it, especially when anybody serious about CAD wants the absolute cutting edge of hardware, and the ability to expand upon it as soon as the rest of the industry is able to take advantage of the latest GPU. Being competitive is everything.

It's never been a case of software needing to be ported. Some CAD apps have come and gone on Mac, but the demand just isn't there in terms of sales to make it worthwhile. That's not to say that there aren't CAD apps on Macs, there are, but the big boy industrial toys are elsewhere.

Keep in mind we used to run Parallels and Bootcamp in the '00s to get Windows software on Macs!


Plus the dominant platform (Solidworks) is totally built using MS UI libraries… ribbon, etc.


I thought AutoCAD was ported to M1?


No one uses AutoCAD anymore. It's as relevant to the modern CAD world as MS Basic is to us. Even the architects have moved on to stuff like Revit.


(Like, to further clarify, when I did my Engineering Technology degree, we were trained on AutoCAD (but also 3D CAD), and it was obvious then that AutoCAD was antiquated and was soon to be abandoned en masse.

This was in 2004.)


I'd assume their rendering engine is written for a single API, such as OpenGL or Vulkan. Porting it to Metal would be quite a large project.


Isn't there a reasonable OpenGL shim on top of Vulkan?


CAD programs barely work on the systems they’re designed for. I love Solidworks, but it’s a real turd if you aren’t using one of the correct “certified” drivers for your graphics card.


I made a similar transition earlier this year, from an i7 8700K/GTX 1080 Ti Windows desktop to a Mac Studio, and much like you I'm very happy with that decision for a lot of the reasons you already described.

One of the more interesting aspects of the switch I hadn't considered though is that it actually (positively) impacted my power bill - I'm paying about $5-$10 less per month since the switch.

The difference in form factor is welcome as well - some days I find myself astonished at how much power fits in such a (relatively) small box on my desk.


Still, the performance vs. price seems pretty bad. From what I see, their 10-core model with a 500GB SSD + 32GB RAM costs as much as a 7950X (16 cores) + RX 7900XT + 64GB RAM + 1TB SSD, which should have better performance, and you can actually play AAA games (at very high performance and quality, also 4K).

I like Apple stuff (I have an iPhone, MBP) but their high-end stuff is really overpriced for what it offers.


Well, let's try to compare like for like...

7950X = $599
7900XT = $800
64GB of RAM = $150

That's $1550 right there, before motherboard, storage, case, cooling, power supply, etc. By the time you add all that (plus labor, either in dollars or in time) I bet you're in the $3500 range with those specs.

You have to consider the total package... the Studio is 7" x 7" x 4"... that's smaller than a microATX board, never mind some giant dual-slot graphics card (which also draws 300W btw, over 3x what the entire Studio draws).

The lack of noise and heat, and the power efficiency, have real value. Some of us are at a point where we want stuff that just works, not to fiddle with components and BIOS settings. Plus, frankly, nobody on the PC side has anything even close to AppleCare.


I think you went way overboard with the costs - a good AM5 mobo is around $200, case $150, AIO cooling or a good fan $100, and a good PSU would be $200. That's a total of $650. Add to that assembly and sanity checking for, let's say, $100 (many times it comes free for these upper-end setups) and we get $750.

That's a total, with your estimates for the CPU and GPU prices, of $2300, which is around $1000 lower than the Studio while having more power. I agree the power efficiency is worse, but also note that the Mac simply doesn't have access to the same compute power. Also, the CPU and GPU are redlined by default, and going down to almost 50% TDP only causes a 5% drop in performance, so it is not that bad.

Of course there are advantages that you mentioned (form factor, better support) but, at least for me, that is not worth a $1k+ premium. Also, using macOS, for me at least, is a pain compared to Linux.


You're comparing against the Ultra. I'm using (and talking about) the base model Max, which is $1999 all in. (The Ultra is 20-core, btw. It's essentially two Maxes fused together.)

I will agree that the Ultra ($3999) is probably not a great value for most people: outside of synthetic benchmarks it's usually more like 10-20% faster, not 100%, since outside of editing 8K video or AI there really isn't much that scales well to that many threads.

Another thing I'll mention, and that I think really is a big part of the special sauce, is the insane memory bandwidth.

A 7950x has a maximum memory bandwidth of 83.7GB/sec. An M1 Max has 409GB/sec.

It's really hard to outrun RAM that's essentially soldered directly to the CPU.
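
For anyone wondering where those figures come from, peak bandwidth is just interface width times transfer rate. A rough back-of-the-envelope check (assuming the commonly cited 512-bit LPDDR5-6400 interface on the M1 Max and stock dual-channel, 128-bit DDR5-5200 on the 7950X):

    # peak theoretical bandwidth in GB/s = (bus width in bytes) * (mega-transfers per second) / 1000
    def peak_bw_gbs(bus_bits, mega_transfers_per_s):
        return bus_bits / 8 * mega_transfers_per_s / 1000

    print(peak_bw_gbs(512, 6400))  # M1 Max: ~409.6 GB/s
    print(peak_bw_gbs(128, 5200))  # 7950X, dual-channel DDR5-5200: ~83.2 GB/s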


I’m generally a pretty big fan of self-built systems, and of Intel in particular (what can I say, every couple of years during my childhood they pulled a rabbit out of their hat; it was really magical).

But that memory bandwidth is some envy inducing stuff.


The RAM is regular package-on-package stuff; it's not the reason for the high bandwidth.

The bandwidth is due to Apple giving more area and pins to more channels.


The Mac Studio was a bit of a sleeper. For AI inferencing at 128GB it is higher performance than any affordable consumer solution with that much memory available; it's quiet, it's small, it's power efficient, and it supports 5 displays. It costs a lot less than an equivalent MBP.


It's the desktop a lot of us have been waiting for... no built-in screen, no forced purchase of hundreds of dollars in "magic" (i.e. garbage) keyboard/mouse, and ports out the wazoo... even the base model I have has 4x Thunderbolt 4, 4x USB 3 (two Type-C on the front, 2 Type-A on the back), 10Gb Ethernet, even a headphone jack and SD slot.


Interesting comment about the Apple keyboard. I have bought Apple Magic keyboards for all my Windows machines. Need a correct driver to make the fn key work and remap the delete key, but otherwise I have not yet found a better TKL keyboard.


Hotswap mechanical or nothing for me. I hate low travel keys (and flat keycaps).

I actually just bought this last week for the work machine..

https://www.keychron.com/products/keychron-k8-pro-qmk-via-wi...

Not as good as the board on my personal PC, but that is a highly tweaked out, kit built, full metal gasket mount board that I have mumble hundreds of dollars in to...


People are different. I grew up with clunky and noisy IBM keyboards. Really love the low key travel.


Model Ms are terrible. A nice modern mechanical with light linear switches is neither.


I buy mechanical keyboards for all my macs and skip the flat feel of the chiclet keyboards. It is too bad that such an experience can’t be very portable.


I used to like apple keyboards but I got RSI using them. I was probably holding it wrong but switching to mechanical keyboards fixed it for me.


I have a similar 8700k system (I’ve got a weaker GPU, though). I already kind of feel like anything that gets even close to stressing it would be much happier running off on some cluster or server somewhere.


Chess engines, I've got to say, are an argument against Apple Silicon; at the same power draw any AMD or Intel solution is going to beat the pants off of it.


Seems super fast to me? Benchmarking now...

bench 1024 8 26 in latest stockfish (well, whatever brew installed two days ago)

    Total time (ms) : 128108
    Nodes searched  : 1605433485
    Nodes/second    : 12531875

12,531,875 NPS is competitive with a 2950X Threadripper, which is an $1100 CPU with a TDP of 180W. The M1 Max has a power draw under load of about 90W max (and that's the full SoC), but under typical high loads it's more like 50-60W.
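
(If anyone wants to sanity-check that number, NPS is just nodes searched divided by wall-clock time from the bench output above:)

    nodes = 1_605_433_485          # "Nodes searched" from the bench run
    seconds = 128.108              # "Total time (ms)" converted to seconds
    print(round(nodes / seconds))  # ~12,531,875 nodes/second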


The 2950X is a weird comparison when it was two generations older than what AMD had available at the time. Also, the 2950X still did 37,422,972 NPS, which is roughly 3x higher. I think it goes without saying that if you look back enough generations you will eventually find an AMD/Intel product that gives less performance/watt.

The 5950X was also available for sale at the time, a system based on it was cheaper, and it does 54,029,460 NPS in Stockfish, which is over 4x faster.

It's not like the Mac Studio is too slow for chess engines to be usable for purposes like analysis, but the thing is, if you make that argument then why not get a much cheaper Ryzen 5 or Intel i5 based system, which is still faster than the Mac Studio? Say a Ryzen 5600G, which is also pretty power efficient and can fit in a nice little box and push a few displays?

https://openbenchmarking.org/test/pts/stockfish&eval=7596252...


Because running chess engines is like 0.5% of what I use a computer for.


> Even for gaming it's a lot more competent than you'd expect, especially for games that have been ported to Silicon. WoW runs at over 100fps (NB: 1440p, 165hz gaming monitor) with settings turned up pretty high. Factorio runs awesome, and just got a silicon port that bumped performance another 20% or so.

WoW doesn't need much hardware to run. What is the fps on the gaming rig for comparison?


About the same. It's CPU-limited, not GPU. But either is plenty of fps combined with an adaptive refresh rate monitor… even the Studio hits the monitor's 165Hz cap indoors, and it's 100+ in most outdoor zones… some of the ones with tons of fog get it down into maybe the 80s, but it's still really smooth… no jitter or microstuttering.


I'd say that much of the M2 speed comes from its ultrawide RAM interface, only possible because RAM is soldered basically directly to the CPU die(s). It can't scale to server sizes, but it makes perfect sense for a laptop or a smaller desktop.


Apple's wide RAM interface is an advantage. But they aren't doing anything exotic to achieve it. AMD & Intel could offer consumer CPUs with more memory channels, but they choose not to, likely for cost and market-segmentation reasons.


Isn’t having the memory integrated into the dies a bit exotic for a performance desktop?


"On Package" isn't the same thing as on die. Apple's M1 LPDDR memory setup isn't really any different from what you would find in a normal PC laptop. By putting the memory as close as possible to the CPU it makes it easier to maintain signal integrity, but it's not really any different from anyone else's approach.


Yeah, it's insane. Even my M1 Max is 400GB/sec.


> While 243GB/s is massive, and overshadows any other design in the industry, it’s still quite far from the 409GB/s the chip is capable of. More importantly for the M1 Max, it’s only slightly higher than the 204GB/s limit of the M1 Pro, so from a CPU-only workload perspective, it doesn’t appear to make sense to get the Max if one is focused just on CPU bandwidth.

https://www.anandtech.com/show/17024/apple-m1-max-performanc...


It's shared with the GPU, so limiting it to CPU-only doesn't seem very fair. In fact, I think not having to transfer data to the GPU is another big part of why, at least for casual gaming, it packs way more punch than it really has any right to.


To throw a similar experience into the mix... I agree on all counts, particularly around silence & efficiency - especially on the MBPs where the fans will actually turn off when not needed.

I've been a long time Windows user (20+ yrs) and have built my own PCs for nearly as long. My most recent 5900X + 3080 FE setup with 2x Gen 4 NVMes is amazing, but it's not the most efficient. ~95W idle and can top out around 200-400W depending on the use.

The (base) 14" M1 Pro I got launch day has been great overall, just really smooth/snappy in real-world use, where lately I've hit snags on Windows and MS software. The 14" M1 Pro maxes out around 35W or so (including its display). Desktop's more like 500W including displays. Great battery life on all the new M1s, it can do a full day at 8-14+ hours of medium/heavy use depending on your use/multitasking. It doesn't have to throttle down for thermal or power reasons like many recent Intel laptops (including a previous and recent-gen XPS 15 - which was much noisier by comparison).

In terms of quietness, I have the 3080 FE undervolted to typically only top out around 240W with under 2% performance loss. It typically runs around 55-65c in games without the fans being noticeable. 5900X is cooled by a 240mm AIO so it too is able to maintain reasonable temps running nice low fan speeds - only up to around 1050 RPM for sustained max loads which is still quiet compared to most other peoples' desktop PCs I've seen.

The 14" M1 Pro has never been audible above the idle fan speeds on the desktop (which run at ~550-600rpm and are also nice and quiet). The MBP is virtually always silent unless you manually crank the fan. Direct sunlight or somewhere very hot the fans might kick up a little more in response, but for me it's never really noticeable or even on or audible under normal use, and I like quiet devices.


IFS needs to branch off from Intel's umbrella as an independent fab competing with TSMC; vertical integration for Intel has failed to compete over the last 5 years.


The only real Arm competitor to Intel servers is Ampere. There are literally thousands of TSMC customers that Intel can serve, as long as they don't just focus on whales. Gelsinger needs to realize Intel isn't in the pole position and hasn't been for years, but he keeps acting like it is.


From what I understand Arm for servers is not really a thing right now. What sorts of industry shifts would it take for this adoption? What would be the challenges?


Why? AWS has Graviton and Ampere seems to be growing fast; both seem to be very competitive in certain use cases.


It's very much a thing! AWS and GCP and Oracle have had it for years, probably others as well. Just a few days ago, Hetzner launched Ampere based VPS.


What are the use cases?


They're more energy efficient, which means they're cheaper to run. That's about it


> The collaboration will focus on mobile SoC designs first, but allow for potential design expansion into automotive, Internet of Things (IoT), data center, aerospace and government applications.

Note the wording. They are definitely making a mobile SoC for somebody, with Qualcomm being the likeliest client.

But then they instantly dilute it with other buzzwords, and try to break the line of thinking about how it will compete with Intel's own products.


'Data center' doesn't necessarily mean Xeon competitors.


This is the definitive comment in this thread ^.


I read the press release twice and it doesn't really say much. I'm a digital physical design engineer.

What it probably means is that ARM will port their standard cells and memory compilers to Intel's process. The other IP, like the ARM CPUs, is delivered as Verilog RTL, and a customer can synthesize it to standard cells using any foundry's standard cell library.

When I worked at ARM they would work with any foundry, even those that made competing IP. If my fabless semiconductor company decided to use Intel as our fab, then we would still be using ARM CPUs in our chips. We aren't going to use Intel, though, based on Intel's prior history over the last 20 years of saying they will be a fab for external customers and then backing away and cancelling those plans. Based on talking to former coworkers, most of the people talking to Intel as a fab are using it to negotiate lower prices with TSMC or Samsung.


Huge world-view shift from Intel. Likely the precursor to splitting the fab part of Intel from the design part with the intention of spinning one or the other off.

It also underscores the reality that Intel doesn't think it can keep its fabs busy/profitable with just its own designs anymore.


> Intel doesn't think it can keep its fabs busy/profitable with just its own designs anymore

Intel has been saying so for the last 2-3 years.



Not super unexpected though. Pat Gelsinger has been teasing this move[0] for as long as he's been CEO, and it makes sense. The industry is verging on RISC again, and Intel does have meaningful knowledge they can apply here. If their business model is right, it could be a profitable side pot.

Honestly, this says less about Intel's desperation to me and more about ARM's. Just a few months ago they were rushing to renegotiate their contracts, and there have been rumblings for a while about ARM's eventual response to RISC-V. They know they need buy-in from legacy companies to make ARM a lasting ISA; simply trusting Apple not to throw them under the bus will get them PowerPC'd in an instant.

[0] https://www.macworld.com/article/677947/intels-ceo-wants-app...


Now that you mention RISC-V, maybe a bolder move from Intel would have been to throw their weight behind RISC-V and try to be the leader of the new tech instead of a co-leader of the old tech.

On the other hand, ARM is very popular right now, and maybe Intel feels like what it needs right now is to take some of that market to deprive competing fabs of revenue and give itself better economies of scale.


>a bolder move from Intel would have been to throw their weight behind RISC-V

Intel axed their involvement with RISC-V.[1]

[1]: https://www.theregister.com/2023/01/30/intel_ris_v_pathfinde...


They will still release some devboard with chips co-designed with SiFive, and offer their foundries to clients who want to fab chips based on RISC-V.

Just not under the Pathfinder program.


That was just one part of their involvement.


Couldn’t disagree more with your second paragraph. Customers making SoCs with Arm CPUs can happily go to TSMC. Intel is desperate to get IFS working and if it hasn’t got an Arm offering it’s basically locked out of mobile SoCs.

Also Apple is a very small part of Arm’s business.


Apple is a fraction of ARM's current business, but represents the entirety of their future. The microcontroller licensing business is pretty much over (not that ARM China refuted that), so their money will be on licensing core designs and the ISA to mid-to-high-range SoCs. RISC-V won't be there for a while, but it's already starting to replace STM32s in devices that only need simple digital controllers.

The writing is on the wall - RISC will be democratized, and ARM's intellectual property is bleeding value by the day. They know this, and they're making good moves to secure their positioning by working with Intel. Even still, their stakeholders are getting iffy and even their largest customers don't see them as essential to their business. Hundreds of Chinese manufacturers already kicked them to the curb years ago.

> Intel is desperate to get IFS working and if it hasn’t got an Arm offering it’s basically locked out of mobile SoCs.

Or, less charitably, ARM is getting iffy about TSMC's density roadmap and wants to hedge their bet a little. They lose nothing by reaching out to Intel, and Intel loses nothing by expanding their lithography to 2 popular ISAs. It is a mutual moneymaking move, and one that would not exist if ARM corporate was confident in their ability to wait-out Intel in the long term.


I wonder how this affects TSMC's business in the long run, and if this is a move specifically geared towards bringing foundries under US ownership. Interesting regardless.


This is huge. For one, it can decrease geopolitical risks for US-based firms. It also offers a chance to obtain higher-end silicon. Some companies can’t afford to use TSMC’s latest and greatest silicon because big companies have soaked up all the capacity. This offers an alternative route for them. Hopefully it will make the environment much more competitive.


> Some companies can’t afford to use TSMC’s latest and greatest silicon because big companies have soaked up all the capacity.

Apple is single handedly bankrolling the latest and greatest from TSMC. If it wasn’t for them, the latest and greatest would likely be even more expensive/out of reach for “some companies”.


Not single-handedly, Nvidia also puts up multi-billion dollar investments in TSMC's fab technology. That's what put them on the 4nm node so early.


This is totally true, especially if Intel can be competitive. The dark-side outcome is that if it is really successful, it can really destabilize Asia by making the outcome of invading Taiwan less important to the west.


I would imagine TSMC and the Taiwanese government are keeping a really close eye on this and doing everything they can to mitigate the erosion of their Silicon Shield. I wonder what, exactly, though...


>making the outcome of invading Taiwan less important to the west.

That's the whole point of the recent semiconductor investment boom, though. Reduce dependence on Taiwanese silicon so that we can care less about the geopolitics of the region.

The west is essentially getting tired of playing bodyguard to Taiwan, because as has been evident with the Ukraine invasion, the west is not willing to go to war to protect peace.


Sure, let's clown on the western democracies adequately measuring the threat of nuclear warfare.

Russia and China cannot win a conventional war against the West. For the West, it wouldn't be easy, it wouldn't be quick, but it would be a certainty. Maybe it would even take a couple of wars, like Europe in the 20th century. The only way for those two terrible countries to win is if they use nuclear weapons, and use them quickly. Western democracies know this, and act accordingly.


Is anyone trying to make IoT-type devices in geopolitically stable places (insofar as there are any)? Espressif seems like a pretty clear leader right now, but perhaps that is only from a hobbyist's perspective? If bad things happened we would have bigger problems than wifi-enabled toothbrushes, but it seems like a niche someone should make more robust and diverse.


Future US-made IoT devices will mostly use ARM Cortex-M33 and M0+ microcontrollers. They have the latest cryptography and safety features built in, and nice libraries. It's surprisingly difficult to find out what fabs build them. Maybe someone here can chime in?


Those things mostly don't have built-in wifi, which is annoying, because then you need a separate chip for the wifi. Atmel is the exception; Arduino even makes a dev board with it, but it is $35 vs $8 for an ESP32 devboard.


Can't IoT devices be made on older fabs, like the 32/40nm-and-up fabs available in, for instance, Germany (GloFo)? There are more of them in other more-or-less stable regions.


Of course, but I imagine Intel's spare 14nm+++++++++++ capacity is pretty significant at the moment, which is best in class (non-TSMC division).


Shouldn't IoT be most sensitive for power consumption?


Usually more sensitive to price, I think? Many IoT devices plug into wall outlets. For those that don't, I would expect the wireless capability to dwarf the power consumption of the chip itself, but it seems like newer fabs would definitely be a good thing.


Yeah. Depends on a lot, but only small independent things really care about power. For the rest, put a bigger battery on it or plug it in... and nearly the entire consumer market for IoT can just plug it in.


TSMC will be gobbled up by US and Chinese foundries. The talent will most likely go to Chinese fabs, given they have access to a bigger customer base as more and more Chinese people move into the middle-income class. But not only that, China might get better market access in BRICS and SCO nations, which is a bigger bloc than the NATO-and-partners bloc.

US fabs will mostly service NATO and partner nations; I don't see them being able to compete in the global south on price, given China's infrastructure, labor cost, and domestic market size advantages, which can fuel the R&D of Chinese semi design houses and fabs.


More competition is better, since GlobalFoundries dropped out of 7nm and Samsung isn't doing any better.


Intel’s first announced partnership to be a third party fab is to manufacture chips based on processes that were state of the art in 2013.

TSMC doesn’t have much to fear in the near future.

Edit: I see where the confusion lies.

https://www.forbes.com/sites/patrickmoorhead/2022/08/10/inte...

This was announced two years ago. Intel hasn't proven that it can be a third-party manufacturer for anything close to cutting edge.

Intel announced they would be producing chips for MediaTek two years ago.


What on earth are you talking about? The article is discussing partnering with Arm using their 18A node. At 20A, Intel will be transitioning from its FinFET design over to a new type of transistor, known as a Gate-All-Around transistor, or GAAFET. In Intel’s case, the marketing name they are giving their version is RibbonFET. This is a technology that only started to come to a head in early 2020. The other arm of Intel’s 20A and onwards design is what the company is calling ‘PowerVia’. This is usually referred to as ‘backside power delivery’ in the industry. Although slightly older, we're still only talking about 2017 at the earliest.

Intel has its 2nm node (called 20A, followed by 18A six months later) lined up for production to start in the first half of 2024, which compares to TSMC’s equivalent node (called N2) which is scheduled for the second half of 2025. 18A will be half a node denser than N2, while also being 12 months earlier to market.

As to IFS as a concept, it was only recently revamped in 2021 after a lackluster attempt in 2013, as you mentioned. So far they've announced partnerships with Arm, MediaTek, Qualcomm, AWS, and the DoD. That's a pretty heavy-hitting pack, not including all of Tower Semiconductor's customers. Intel wouldn't be building another 3 fabs if it wasn't serious this time about building capacity for other companies.


Intel has missed so many deliverables that I take any of their statements with a healthy dose of skepticism.

Like their 7nm process has been delayed, what, 4 times? Have they even shipped it? It's the same story they had with the 10nm process too. IIRC they were behind like 5-6 years on that one.


Intel Raptor Lake CPUs are widely available these days and are built on Intel 7. https://en.m.wikipedia.org/wiki/Raptor_Lake?cmdf=intel+13th+...


Intel 7 isn't their 7nm process, but their renamed 10nm process.


    TSMC 7nm HPC:    66M Transistors/mm^2
    TSMC 7nm Mobile: 96M Transistors/mm^2
    Intel 7:        100M Transistors/mm^2
    TSMC 7nm+:      115M Transistors/mm^2
 
It was perfectly fair for them to rename it, but yes, in the context of deliverables it's the same slipped process that used to be called 10nm.


Yeah, Intel has had a great roadmap for the past decade. They haven't executed most of it, and when they did, they were years late to market. They have a couple of years to fix that or they'll live on government subsidies.


> Arm, MediaTek, Qualcomm, AWS, and the DoD

Remember that unlike the others, Arm won't be an actual customer here. They don't require any capacity themselves.


Interesting to know that "2nm" and "20A" (Armstrong) are just marketing names; they don't reflect physical sizes

- see: https://en.wikipedia.org/wiki/2_nm_process#:~:text=The%20ter...


My understanding is that hasn't been the case for a long time now. Wikipedia is saying the last node actually measured by transistor length was from 1994.

https://en.m.wikipedia.org/wiki/Semiconductor_device_fabrica...


Probably an autocorrect mishap, but in 20A the A is for angstrom.


> 18A will be half a node denser than N2, while also being 12 months earlier to market.

It would be interesting to hear what the consensus view is on Intel actually being able to meet this timetable. Surely a strong degree of scepticism is warranted given their recent record.


A recent interview with Shang-Yi Chiang, former Vice President of R&D at TSMC (who also held positions at TI, HP, and SMIC), had insightful commentary on the speed of bringing up a new node.

"We all take two years to develop one generation, how come you guys can do it in one or one-and-a-half year?" And they asked if some of your customer transfer technology to you or what not? And I told him, "No," I told him that, "That's not true." I think he probably implied we steal technology from customer, the way he talk.

And I say, "I'll tell you why." I said that, "When we develop one node, basically you have some learning cycles. First, you do some simulation. And you have some idea, then you run wafers to prove that. So, you run a group of wafers according to simulation and you have some splits. The wafer runs through the fab, they come out and you measure them, you analyze them, and you try to improve and you run this again. This again, you run. So, this is learning cycle." At that time, "It takes about six learning cycle, roughly, to complete one generation." Of course, you had some short loops and not just one. I said that, "My R&D wafer in the fab run much faster than yours, because my R&D engineer works three shifts and you only work one shift. So, your R&D wafer move eight hours a day, my work/move 24-hours a day. So, my wafers go three times faster, even if you are twice smarter than me, I still beat you up." <laughter>

https://www.computerhistory.org/collections/catalog/10279267...


Some skepticism, but not too much. There's a decent possibility the self-sabotage at Intel stopped after new CEO Pat Gelsinger took over.

Previous leadership was seriously mismanaging things. The following comment sheds some light on their stagnation during the 10nm/'Intel 7' node development period:

https://news.ycombinator.com/item?id=31759034


Looks like Intel 4nm will only start shipping 2023H2. I'm not sure I see Intel 2nm shipping less than 6 months later, especially given Intel's recent history of stumbling on execution. How firm do you believe the 2024H2 delivery date of N2 to be?


Intel’s first announcement when they decided to manufacture for third parties wasn’t Arm; it was MediaTek.

Intel has a long way to go to prove they can catch up.

https://www.forbes.com/sites/patrickmoorhead/2022/08/10/inte...

Intel isn’t fabbing cutting-edge chips for them.


Are you perhaps confusing 18A with 18nm? 18A is 1.8nm and is scheduled for the second half of 2024.


They're referencing the first round of IFS, which launched in 2013 and never really became much of anything, but had a lot of announcements and fanfare at the time. Their foundry would have offered the most advanced available foundry processes at the time had it launched as planned. From Feb 2013: https://www.reuters.com/article/us-altera-intel-manufacturin...


Actually I was referencing the MediaTek announcement

https://www.forbes.com/sites/patrickmoorhead/2022/08/10/inte...


Interesting to see Intel (or at least a part of Intel) 'working with' a competitor.

Although this might help Intel become more of a foundry player like TSMC, it could also help ARM get a greater share of the server market over Intel's Xeon processors.


Glad that Pat is making good on his strategy to offer IFS to everyone, even on bleeding-edge process nodes. I really think this is the only way to truly mitigate the dependence on fabs that are entirely in the East China Sea region. If Intel can ensure that IP is kept safe between IFS and a potential competitor to Intel, then I think this is all for the better.


Intel Foundry Services (IFS) specifically.


The original title says "Intel Foundry" but I had to make it fit into HN's character limit.


Ahhh another example of the hardest problem in CS - naming things!


I wonder if this can be solved by coming up with the acronym you want (because let’s be honest, the clever acronym is what matters) and asking ChatGPT to work out what it means.


It's solved by asking Dang for a charitable exception


"Intel Foundry Services and Arm to collaborate on SoC design?"


I can't edit the parent comment anymore so just for the context, my original title was "Intel and Arm Announce Multigeneration Collaboration on Leading-Edge SoC Design" and someone changed it to the current title.


I might have abbreviated "Collab" instead.


I hope they can make money from IFS, because based on Intel's latest earnings, they can't even make money on their own designs made in their own fabs.


So what does this mean for RISC-V?


Not much.

All we've learned here is that Intel foundry will fab some ARM chips.


Glad to hear Leading Edge is back in the game![1]

[1] https://en.wikipedia.org/wiki/Leading_Edge_Products


So Intel might use TSMC's better process for some of its own CPUs while fabbing ARM CPUs in its foundries?


These are processes several years down the line. Intel is saying their processes will be competitive at that point.


"Competition is for losers"


Back to Intel StrongARM Pocket PC PDAs?



