Today’s ARM chips are widely used in low-end devices such as Apple’s iPhone and iPad. Great for hand-held personal computing, but architecturally limited to 32-bit memory addressing, which caps addressable memory at 4 GB. Now 4 GB is eight times what the iPhone 4 uses for OS memory, so 32-bit addressing won’t inhibit smartphone capabilities anytime soon. Or any other market where the ARM core plays today.
Nope. But the 32-bit addressing limit does inhibit an ARM role in the datacenter.
Today’s servers and, most recently, PCs run a 64-bit processor with a 64-bit-aware OS. A 64-bit OS allows for control of many more resources than a mere 4 GB. Many two-socket servers today support up to 192 GB of DRAM, for instance. This Tier-1 class of server is the datacenter workhorse, used as a building block for front-ending large numbers of users in enterprise applications (e.g., Oracle and SAP) or Internet apps (e.g., Apache web servers). Tier-2 and Tier-3 servers use even more memory and processors.
The next-generation ARM silicon will support 40-bit addressing, with hardware that translates 40-bit memory addresses into something the 32-bit core can handle. It will also enable virtual machine (VM) technology. The means to do this is very mature, dating back to mainframes in the 1970s. How ARM does it, and the fact that the architecture is 40 bits rather than 64, is not important to this blog or to most IT managers.
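For readers who like to see the arithmetic: the jump from 32-bit to 40-bit addressing is just powers of two. A quick back-of-the-envelope sketch (the bit widths come from the discussion above; the helper function is mine, purely for illustration):

```python
# Back-of-the-envelope: how much memory can N address bits reach?
def addressable_bytes(bits: int) -> int:
    """Maximum bytes addressable with the given number of address bits."""
    return 2 ** bits

GIB = 2 ** 30  # one gibibyte

for bits in (32, 40, 64):
    print(f"{bits}-bit addressing: {addressable_bytes(bits) // GIB:,} GiB")

# 32-bit addressing: 4 GiB  (today's ARM ceiling)
# 40-bit addressing: 1,024 GiB, i.e. 1 TiB -- comfortably above the
#                    192 GB found in today's two-socket servers
# 64-bit addressing: 17,179,869,184 GiB (16 EiB)
```

So 40 bits, while short of the full 64-bit architectures Intel and AMD ship, is more than enough headroom for the Tier-1 server configurations described above.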
What is important is that it will be easy to port Linux to a new ARM chip. And Linux represents about half of enterprise Tier-1 servers today (with Microsoft getting the lion’s share of the rest). That means an ARM/Linux server could be widely available from multiple vendors by mid-decade.
To Intel and AMD, which share almost all of the lucrative Tier-1 server market, ARM is an unwanted market entrant. The economic argument is simple. A new 40-bit ARM core won’t cost much more to manufacture than the $25 or so it costs to put an ARM processor in an iPhone 4. That compares to the hundreds to thousands of dollars for an Intel or AMD CPU. So it is logical that an ARM-based market entrant would aim at lowering server processor costs, perhaps dramatically.
Yes, I am skipping over the considerable technology that IT managers demand in their enterprise servers that would not be present in ARM’s chips. But a lot of IT executives will think long and hard before deciding that an ARM-based blade server handling web traffic, for example, really needs the entire reliability-availability-serviceability (RAS) stack found in Intel and AMD servers. They’d rather pay a lot less per blade. So, the risk to Intel and AMD is a race to the bottom on server pricing. True, Intel could counter easily with an Atom-based server core, but it can’t sell that part at Xeon prices, and thus would lose the margin dollars that fund a lot of R&D (and profits).
Complicating the scenario for Intel is the possibility that Apple, which sole-sources processors from Intel for MacBooks, iMacs, and Mac Pros, could tweak the new ARM core and use it to run a new iOS-based operating system that includes all half-million App Store applications plus components of OS X. In other words, Apple ARM-based laptops and desktops. I’m not betting on that scenario, but be assured that it’s being considered at Apple HQ.
Instead of getting simpler, IT decision-making looks to get more complicated as ARM, a licensed architecture manufactured by multiple sources, gets into the computing mainstream by mid-decade. And competitive pressure gets hotter for Intel and AMD.