Enterprise Computing Jumps on the Supply-Demand Curve

The traditional enterprise computing server suppliers are in an ever-faster game of musical chairs with cloud computing competitors. Recent cloud price cuts will accelerate enterprise adoption of the cloud, to the economic detriment of IBM, HP, and Oracle's Sun.

Many IT executives sat down to a cup of coffee this morning with the Wall Street Journal opened to the Marketplace lede, “Price War Erupts in Cloud Services.” Cloud computing from the likes of Amazon, Google, and Microsoft is “changing the math for corporate executives who spend roughly $140 billion a year to buy computers, Internet cables, software and other gear for corporate-technology nerve centers.” This graphic raises the question,

50 Million Page View Web Site Costs

“Gee, maybe my data-center computing model for the company needs a strategic re-think?” And while the usual business-transformation consulting suspects run a very active practice on this question, the no-cost answer is: yes, cloud computing is a valid model that most enterprises and applications should move to over time.

This blog post, though, is not about the nuances of cloud computing today. Rather, we need to take a look at how the supply-demand curve for enterprise computing must impact the traditional enterprise server business — hard. (And yes, I am breaking a vow made during Economics 101 to never mention economics in polite company).

Cloud computing is sucking the profits out of the traditional server business.

For decades (over fifty years in IBM's case), the traditional server companies, including HP and Sun, sold big iron, proprietary operating software and storage, and lots of services at high margins. In the past two decades, Intel's mass-market silicon evolved into the Xeon family, which took away a large percentage of that proprietary “big iron”. Yet Intel-specialist firms such as NCR and Sequent never could beat the Big Three server suppliers, who took on Xeon-based server lines of their own.

IBM is selling its Xeon business to Lenovo, and is likely to considerably reduce its hardware business. Oracle's Sun business looks like a cash cow to this writer, with little innovation coming out of R&D. HP is in denial.

All the traditional server companies have cloud offerings, of course. But only IBM has jettisoned its own servers in favor of the bare-metal, do-it-yourself offerings from Amazon, Google, and lately Microsoft.

Price-war-driven lower cloud computing prices will only generate more demand for cloud computing. Google and Microsoft have other businesses that are very profitable; these two can run their cloud offerings lean and mean. (Amazon makes up for tiny margins with huge volume.) To recall that Economics 101 chart:

Supply-Demand Curve
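For readers who skipped that Economics 101 lecture, the chart's logic can be sketched with a toy linear model. All numbers below are illustrative assumptions, not market data: the point is only that a price war acts like an outward supply shift, so the equilibrium price falls and the quantity demanded rises.

```python
# Toy linear supply-demand model; all numbers are illustrative assumptions.
# Demand: Qd = a - b*P (quantity demanded falls as price rises)
# Supply: Qs = c + d*P (quantity supplied rises with price)

def equilibrium(a, b, c, d):
    """Return (price, quantity) where Qd equals Qs."""
    price = (a - c) / (b + d)
    quantity = a - b * price
    return price, quantity

# Before the price war
p0, q0 = equilibrium(a=100, b=2, c=10, d=3)

# A price war acts like an outward supply shift: more capacity at every price
p1, q1 = equilibrium(a=100, b=2, c=40, d=3)

print(f"before: P={p0:.0f}, Q={q0:.0f}")  # before: P=18, Q=64
print(f"after:  P={p1:.0f}, Q={q1:.0f}")  # after:  P=12, Q=76
```

The direction of the result, lower price and higher quantity, holds for any downward-sloping demand curve, which is all the argument here needs.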

The strategic issue for IT executives (and traditional-supplier investors) is what happens over the next five years as lower server profits hollow out the traditional suppliers' ability to innovate and deliver affordable hardware and software. Expect less support, and examine your application software stacks; you'll want to make migration to a cloud implementation possible and economical. The book isn't even written yet on cloud operations, backup, recovery, performance, and other issues that are now well understood in your existing data centers.

Meanwhile, what are your users up to? Just as PCs sprouted without IT's blessing a generation ago, cost-conscious (or IT-schedule-averse) users are likely playing with the cloud using your enterprise data. Secure? Regulatory requirements met? Lots to think about.

Follow me on Twitter @PeterSKastner

POWER to the People: IBM is Too Little, Too Late

“On August 6, Google, IBM, Mellanox, NVIDIA and Tyan today announced plans to form the OpenPOWER Consortium – an open development alliance based on IBM’s POWER microprocessor architecture. The Consortium intends to build advanced server, networking, storage and GPU-acceleration technology aimed at delivering more choice, control and flexibility to developers of next-generation, hyperscale and cloud data centers.”

IBM Hardware Is Not Carrying Its Weight
As the last computer manufacturer with its own silicon fab, IBM has a financial dilemma. The cost of silicon fab investments is increasing, while hardware revenues are declining. There are fewer new Z-series mainframes and POWER-based midrange computers on which to allocate hardware R&D, product development, fab capex, and other amortized costs. POWER revenues were down 25% in the latest quarter. Bloomberg reports furloughs of the hardware staff this month in an effort to cut costs.

The cloud-based future data center is full of Intel Xeon-based servers, as practiced by Google, Amazon, Facebook et al. But margins on Intel-architecture servers (IBM's instantiation is the X Series) are eroding. Widely believed rumors earlier this year had IBM selling off its X Series business to Lenovo, as IBM spun off its PC business in 2005.

Clearly, the IBM hardware business is the subject of much ongoing discussion in Armonk, NY.

The OpenPOWER Consortium is a Strategic Mistake
Our view is that IBM has made a strategic mistake with this announcement by admitting proprietary defeat and opening POWER up to an open-source consortium. The signal IBM is sending is that it is no longer totally committed to the long-term future of its mainframe and POWER hardware. The sensitive ears of IBM's global data center customers will pick this message up and, over time, accelerate plans to migrate off of IBM hardware and software.

Proprietary hardware and software business success depends a great deal on customer trust, more than is commonly assumed. Customers want a long-term planning horizon in order to keep investing in IBM, which is not the lowest-cost solution. When trust is broken, a hardware business can crash precipitously. One such example is Prime Computer, a 1980s Massachusetts darling that was acquired, dropped plans for future processors, and watched its installed base decline at fifty percent per annum. On the other hand, H-P keeps Digital Equipment and Tandem applications going to this day.

By throwing doubt on its future hardware business horizon, IBM risks its entire business model. Yes, that is a far-fetched statement but worth considering: the IBM services and software business is built around supporting, first and foremost, IBM hardware. Lose proprietary hardware customers, and services and high-margin software business will decline.

So, we think IBM is risking a lot by stirring up its customer base in return for a few million dollars in POWER consortium licensing revenue.

What About Google?
To see how this deal could turn even worse for IBM, let’s look at the motives of the headline consortium member, Google.

First, IBM just gave Google the “Amdahl coffee mug”. In the mainframe heyday of the 1970s, it was a common sales tactic for Amdahl, a mainframe clone company in fierce competition with IBM, to leave a coffee mug with the CIO. Properly placed on a desk, it sent the message to the IBM sales team to drop prices because there was competition for the order. A POWER mug, backed by open POWER servers, will send a pricing signal to Intel, which sells thousands of Xeon chips directly to Google. That action won't move the needle much today.

POWER servers are most likely to appear in Open Compute form, as blades in an open-hardware rack tray. These are the cost-reduced server architectures we see sucking the margin out of the entire server industry. This throws gas on the fire of that trend.

And we don't see Google needing to build its own Tier-3 backend database servers, a common role for POWER servers. However, Google customizing POWER chips with nVidia GPU technology for some distant product is believable. For example, we're puzzling over how Google will reduce the $85,000 technology cost of its driverless automobile to mass-market levels, and the consortium could become part of that solution.

Open POWER Software Too?
IBM is emphatically not throwing POWER operating systems (i.e., AIX Unix and OS/400) and systems software into the open consortium. That would give away the IBM family jewels. So the open-source hardware folks will quickly turn to the Linux-on-POWER OSes. Given a choice, buyers will turn to open-source (that is, free or lower-cost) versions of IBM system software equivalents. We see little software-revenue upside to IBM's POWER consortium move. Nor much services upside, either.

Fortunately, IBM did not suggest that POWER licensing would extend to the fast-growing mobile world of tablets and smartphones, because that would be a bridge way too far. By licensing customized designs à la ARM Holdings, IBM may stanch some of the embedded POWER chip business lost to ARM's customers and Intel in recent years.

Thoughts and Observations
In conclusion, we see nothing good happening to IBM's bottom line as a result of the OpenPOWER Consortium announcement. And if it weren't about the bottom line, why risk long-term customer trust in IBM's hardware platform commitments? The revenue from POWER licensing will not come close to compensating for the weakness that IBM displays with this consortium strategy.

I ask this without drama or bombast: can we now see the dim horizon where IBM is no longer a major player in the computer hardware business? That’s a huge question which until now has never been asked nor needed to be asked. Moreover, no IBM hardware products would mean no IBM fab is needed.

The real implications are about IBM's declining semiconductor business. POWER (including embedded POWER) is a volume product for IBM Microelectronics, along with current-generation video game chips. The video game business dries up by year end as Sony and Microsoft introduce the next-generation consoles, sans IBM content. POWER licensing through the OpenPOWER Consortium might generate some fab business for IBM's East Fishkill, NY fab, but that business could also go to Global Foundries (GloFo) or Taiwan Semiconductor (TSMC). Where's the chip volume going to come from?

IBM will not be able to keep profitably investing in cutting-edge semiconductor fabs if it does not have the fab volume needed to amortize costs. Simple economies of scale. But note that IBM fab technology has been of enormous help to GloFo and TSMC in getting to recent semiconductor technology nodes. Absent IBM's help, this progress would be delayed.

Any move by IBM to cut expenses by slowing fab technology investments will have a cascading negative impact on global merchant semiconductor fab innovation, hurting, for example, the ARM chip ecosystem. Is the canary still singing in the IBM semiconductor fab?

Your comments and feedback are invited.

Follow @PeterSKastner on Twitter

IBM POWER Linux Server


The 2013-2014 Computing Forest – Part 1: Processors

Ignoring the daily tech trees that fall in the woods, let’s explore the computer technology forest looking out a couple of years.
Those seeking daily comments should follow @peterskastner on Twitter.

Part 1: Processors

Architectures and Processes

Intel’s Haswell and Broadwell

We'll see a new X86 architecture in the first half of 2013, code-named Haswell. The Haswell chips will use the 22 nm fabrication process introduced with third-generation Intel Core chips (aka Ivy Bridge). Haswell is important for extending electrical efficiency, improving performance per clock tick, and as the vehicle for Intel's first system on a chip (SoC), which combines a dual-core processor, graphics, and IO in one unit.

Haswell is an architecture, and the benefits of the architecture carry over to the various usage models discussed in the next section.

I rate energy efficiency as the headline story for Haswell. Lightweight laptops like Ultrabooks (an Intel design) and Apple's MacBook Air will sip the battery at around 8 watts, roughly half of today's 17 watts. This will dramatically improve the battery life of laptops, but also of smartphones and tablets, two markets that Intel has literally built $5 billion fabs to supply.
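The battery-life claim follows from simple arithmetic: runtime is battery capacity (watt-hours) divided by platform draw (watts). A quick sketch, where the 50 Wh battery is my assumption for a typical Ultrabook-class machine, not a figure from Intel:

```python
# Battery-life arithmetic behind the Haswell efficiency claim.
# Runtime = battery capacity (watt-hours) / platform draw (watts).

battery_wh = 50          # assumed: a typical Ultrabook-class battery
today_draw_w = 17        # today's low-voltage platform draw, per the article
haswell_draw_w = 8       # Haswell's projected draw, per the article

print(battery_wh / today_draw_w)    # ~2.9 hours on today's platform
print(battery_wh / haswell_draw_w)  # 6.25 hours -> better than 2x the runtime
```

Halving the draw doubles the runtime for the same battery, which is why the watts number, not clock speed, is the headline.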

The on-chip graphics capabilities have improved by an order of magnitude in the past couple of years and get better over the next two. Like the main processor, the GPU benefits from improved electrical efficiency. In essence, on-board graphics are now “good enough” for the 80th percentile of users. By 2015, the market for add-on graphics cards will start well above $100, shrinking the market so much that the roles reverse: today, consumer GPUs lead high-performance computing (HPC); then, HPC will be the demand that drives off-shoot high-end consumer GPUs.

In delivering a variety of SoC processors in 2013, Intel learns valuable technology lessons for the smartphone, tablet, and mobile PC markets that will carry forward into the future. Adjacent markets, notably automotive and television, also require highly integrated SoCs.

Broadwell is the code name for the 2014 process shrink of the Haswell architecture from 22nm to 14nm. I'd expect better electrical efficiency, better graphics, and more mature SoCs. This is the technology sword Intel carries into its full-fledged assault on the smartphone and tablet markets (more below).

AMD

AMD enters 2013 with plans for “Vishera” for the high-end desktop, “Richland”, an SoC for low-end and mainstream users, and “Kabini”, a low-power SoC for tablets.

The 2013 server plan is to deliver the third generation of the current Opteron architecture, code-named Steamroller. The company also plans to move from a 32nm SOI process to a 28nm bulk silicon process.

In 2014, AMD will be building Opteron processors based on a 64-bit ARM architecture, and may well be first to market. These chips will incorporate the IO fabric acquired with microserver-builder Seamicro. In addition, AMD is expected to place small ARM cores on its X86 processors in order to deliver a counter to Intel’s Trusted Execution Technology. AMD leads the pack in processor chimerism.

Intel's better-performing high-end chips have kept AMD largely on the outside looking in for the past two years. Worse, low-end markets such as netbooks have been eroded by the upward charge of ARM-based tablets and web laptops (i.e., Chromebook, Kindle, Nook).

ARM

ARM Holdings licenses processor and SoC designs that licensees can modify to meet particular uses. The company's 32-bit chips started out as embedded industrial and consumer designs. However, the past five years have seen a fast-rising tide as ARM chip designs were chosen for Apple's iPhone and iPad, Google's Android phones and tablets, and a plethora of other consumer gadgets. Recent design wins include Microsoft's Surface RT. At this point, quad-core (plus one, with nVidia) 32-bit processors are commonplace. Where to go next?

The next step is a 64-bit design expected in 2014. This design will first be used by AMD, Calxeda, Marvell, and other, undisclosed suppliers to deliver microservers. The idea behind microservers is to harness many (hundreds to start) low-power/modest-performance processors costing tens of dollars each, running multiple instances of a web application in parallel, such as Apache web servers. This approach aims to compete on price/performance, energy/performance, and density versus traditional big-iron servers (e.g., Intel Xeon).

In one sentence, the 2013-2014 computer industry dynamics will largely center on how well ARM users defend against Intel’s Atom SoCs in smartphones and tablets, and how well Intel defends its server market from ARM microserver encroachment. If the Microsoft Surface RT takes off, the ARM industry has a crack at the PC/laptop industry, but that’s not my prediction. Complicating the handicapping is fabrication process leadership, where Intel continues to excel over the next two years; smaller process nodes yield less expensive chips with voltage/performance advantages.

Stronger Ties Between Chip Use and Parts

The number of microprocessor models has skyrocketed the past few years, confusing everybody and costing chip makers a fortune in inventory management (e.g., write-downs). This really can't continue, as every chip variation goes through an expensive set of usability and compatibility tests running up to millions of dollars per SKU (stock-keeping unit, e.g., a unique microprocessor model spec). That suggests we'll see a much closer match between uses for specific microprocessor variations and the chips fabricated to meet the specific market and competitive needs of those uses. By 2015, I believe we'll see a much more delineated set of chip uses and products:

Smartphones – the low-end of consumer processors. Phone features are reaching maturity: there are only so many pixels and videos one can fit on a 4″ (5″?) screen, and gaming performance is at the good-enough stage. Therefore, greater battery life and smarter use of the battery budget become front and center.

The reason for all the effort is a 400 million unit global smartphone market. For cost and size reasons, prowess in mating processors with radios and support functions into systems on a chip (SoCs) is paramount.

The horse to beat is ARM Holdings, whose architecture is used by the phone market leaders including Samsung, Apple, nVidia, and Qualcomm. The dark horse is Intel, which wants very much to grab, say, 5% of the smartphone market.

Reusing chips for multiple uses is becoming a clever way to glean profits in an otherwise commodity chip business. So I’ll raise a few eyebrows by predicting we’ll see smartphone chips used by the hundreds in microservers (see Part 2) inside the datacenter.

Tablets – 7″ to 10″ information consumption devices iconized by Apple’s iPad and iPad Mini. These devices need to do an excellent job on media, web browsing, and gaming at the levels of last year’s laptops. The processors and the entire SoCs need more capabilities than smartphones. Hence a usage category different from smartphones. Like smartphones, greater battery life and smarter use of the electrical budget are competitive differentiators.

Laptops, Mainstream Desktops, and All-in-One PCs – Mainstream PCs bifurcate over the next couple of years in different ways than in the past. I'm taking my cue here from Intel's widely leaked decision to make 2013-generation (i.e., Haswell) SoCs that solder permanently to the motherboard instead of being socketed. This is not a bad idea, because almost no one upgrades a laptop processor, and only enthusiasts upgrade desktops during the typical 3-5 year useful PC life. Getting rid of sockets reduces costs, improves quality, and allows for thinner laptops.

The point is that there will be a new class of parts with the usual speed and thermal variations that are widely used to build quad-core laptops, mainstream consumer and enterprise desktops, and all-in-one PCs (which are basically laptops with big built-in monitors).

The processor energy-efficiency drive pays off in much lower laptop-class electrical consumption, allowing instant-on and much longer battery life. Carrying extra batteries on airplanes becomes an archaic practice (not to mention a fire hazard). The battle is MacBook Air versus Ultrabooks. Low-voltage becomes its own usage sub-class.

Low End Desktops and Laptops – these are X86 PCs running Windows, not super-sized tablet chips. The market is low-cost PCs for developed markets and mainstream in emerging markets. Think $299 Pentium laptop sale at Wal-Mart. The processors for this market are soldered, dual-core, and SoC to reduce costs.

Servers, Workstations, and Enthusiasts – the high end of the computing food chain. These are socketed, high-performance devices used for business, scientific, and enthusiast applications where performance trumps other factors. That said, architecture improvements, energy efficiency, and process shrinks make each new generation of server-class processors more attractive. Intel is the market and technology leader in this computing usage class, and has little to fear from server-class competitors over the next two years.

There is already considerable overlap in server, workstation, and enthusiast processor capabilities. I see the low-end Xeon 1200 moving to largely soldered models. The Xeon E5-2600 and Core i7 products gain more processor cores and better electrical efficiency over the Haswell generation.

Part 2: Form-Factors

Part 3: Application of Computing

Dell Inspiron 15z


Next-gen ARM cores aim for server and PC role

Next-gen ARM cores break memory barrier, add hypervisor support – News – Linux for Devices.

Today's ARM chips are widely used in low-end devices such as Apple's iPhone and iPad. Great for hand-held personal computing. But they are architecturally limited to 32-bit addressing, which caps memory at 4 GB. Now, 4 GB is eight times what the iPhone 4 uses for OS memory, so 32-bit addressing is not inhibiting smartphone capabilities anytime soon. Or any other market where the ARM core plays today.

Nope, the 32-bit addressing limits inhibit an ARM role in the datacenter.

Today's servers and, most recently, PCs run a 64-bit processor with a 64-bit-aware OS. A 64-bit OS allows for control of many more resources than a mere 4 GB. Many two-socket servers today support up to 192 GB of DRAM, for instance. This Tier-1 class of server is the datacenter workhorse, used as a building block for front-ending large numbers of users in enterprise applications (e.g., Oracle and SAP) or Internet apps (e.g., Apache web servers). Tier-2 and Tier-3 servers use even more memory and processors.

The next-generation ARM silicon will support 40-bit addressing and have hardware that translates 40-bit memory addresses into something the 32-bit core can execute. It will also enable virtual machine (VM) technology. The means to do this are very mature, dating back to mainframes in the 1970s. How ARM does it, and the fact that the architecture is 40 bits and not 64 bits, is not important to this blog or most IT managers.
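The address-space arithmetic behind these numbers is worth a moment: an n-bit physical address reaches 2^n bytes, so the jump from 32 to 40 bits is a 256-fold increase in reachable memory.

```python
# Address-space arithmetic behind the 32-bit vs. 40-bit discussion.
# An n-bit physical address can reach 2**n bytes of memory.

def addressable_gib(bits):
    """Bytes reachable with an n-bit address, expressed in GiB."""
    return 2**bits / 2**30

print(addressable_gib(32))  # 4.0    -> the 4 GB ceiling of today's ARM cores
print(addressable_gib(40))  # 1024.0 -> 1 TB, far above a 192 GB two-socket server
print(addressable_gib(64))  # ~1.7e10 GiB, beyond any near-term server need
```

Forty bits is plenty for the Tier-1 servers described above, which is why the 40-versus-64 distinction matters so little to IT managers.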

What is important is that it will be easy to port Linux to a new ARM chip. And Linux represents about half of enterprise Tier-1 servers today (with Microsoft getting the lion’s share of the rest). That means an ARM/Linux server could be widely available from multiple vendors by mid-decade.

To Intel and AMD, who share almost all of the lucrative Tier-1 server market, ARM is an unwanted market entrant. The economic argument is simple. A new 40-bit ARM core won’t cost much more to manufacture than the $25 or so it costs to put an ARM processor in an iPhone 4. That compares to the hundreds to thousands of dollars for an Intel or AMD CPU. So it is logical that an ARM-based market entrant would aim at lowering server processor costs, perhaps dramatically.
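To make that economic argument concrete, here is a back-of-the-envelope sketch using the article's chip prices. The $600 Xeon figure is a mid-range point in the "hundreds to thousands" range, and the 8x performance ratio is purely my assumption for illustration, not a benchmark result:

```python
# Rough cost-per-throughput comparison using the article's chip prices and an
# assumed performance ratio (the 8x figure is an illustration, not a benchmark).

arm_chip_cost = 25       # dollars, per the article's iPhone-class ARM estimate
xeon_chip_cost = 600     # dollars, assumed mid-point of "hundreds to thousands"
xeon_vs_arm_perf = 8     # assumed: one Xeon does the web work of ~8 ARM cores

# Cost of matching one Xeon's throughput with many cheap ARM parts
arm_equivalent_cost = arm_chip_cost * xeon_vs_arm_perf
print(arm_equivalent_cost)                    # 200 -> still a third of the Xeon
print(xeon_chip_cost / arm_equivalent_cost)   # 3.0x processor-cost advantage
```

Even if one Xeon replaces eight of the cheap parts, the ARM approach wins on raw processor cost, and the gap widens as the assumed price difference grows.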

Yes, I am skipping over the considerable technology that IT managers demand in their enterprise servers that would not be present in ARM's chips. But a lot of IT executives will think long and hard, and then decide that an ARM-based blade server handling web traffic, for example, doesn't need the entire server reliability-availability-serviceability (RAS) stack found in Intel and AMD servers. They'd rather pay a lot less per blade. So the risk to Intel and AMD is a race to the bottom on server pricing. True, Intel could easily counter with an Atom-based server core, but Intel can't sell that part for Xeon prices, and thus would lose the margin dollars that fund a lot of R&D (and profits).

Complicating the scenario for Intel is the possibility that Apple, which sole-sources processors from Intel for MacBooks, iMacs, and Mac Pros, could tweak the new ARM core and use it to run a new iOS-based operating system including all the half-million App Store applications and components of OS X. In other words, Apple ARM-based laptops and desktops. I'm not betting on that scenario, but be assured that it's being considered at Apple HQ.

Instead of getting simpler, IT decision-making looks to get more complicated as ARM, a licensed architecture manufactured by multiple sources, gets into the computing mainstream by mid-decade. And competitive pressure gets hotter for Intel and AMD.