Enterprise Computing Jumps on the Supply-Demand Curve

The traditional enterprise computing server suppliers are in an ever-faster game of musical chairs with cloud computing competitors. Recent cloud price cuts will accelerate enterprise adoption of the cloud, to the economic detriment of IBM, HP, and Oracle’s Sun.

Many IT executives sat down to a cup of coffee this morning with the Wall Street Journal opened to the Marketplace lede, “Price War Erupts in Cloud Services.” Cloud computing from the likes of Amazon, Google, and Microsoft is “changing the math for corporate executives who spend roughly $140 billion a year to buy computers, Internet cables, software and other gear for corporate-technology nerve centers.” The accompanying graphic (“50 Million Page View Web Site Costs”) raises the question,

“Gee, maybe my data-center computing model for the company needs a strategic re-think?” And while the usual business-transformation consulting suspects run a very active practice on that question, the no-cost answer is: yes, cloud computing is a valid model that most enterprises and applications should move to over time.

This blog post, though, is not about the nuances of cloud computing today. Rather, we need to take a look at how the supply-demand curve for enterprise computing must impact the traditional enterprise server business — hard. (And yes, I am breaking a vow made during Economics 101 to never mention economics in polite company).

Cloud computing is sucking the profits out of the traditional server business.

For decades (over fifty years in IBM’s case), the traditional server companies, including HP and Sun, sold big iron, proprietary operating software and storage, and lots of services at high margins. In the past two decades, Intel’s mass-market silicon evolved into the Xeon family, which took away a large percentage of that proprietary “big iron”. Yet the Intel specialist firms such as NCR and Sequent never could beat the Big Three server suppliers, who took on Xeon-based server lines of their own.

IBM is selling its Xeon business to Lenovo, and is likely to considerably reduce its hardware business. Oracle’s Sun business looks like a cash cow to this writer, with little innovation coming out of R&D. HP is in denial.

All the traditional server companies have cloud offerings, of course. But only IBM has jettisoned its own servers, betting instead on bare-metal cloud infrastructure that competes with the do-it-yourself offerings from Amazon, Google, and lately Microsoft.

Price-war-driven cuts in cloud computing prices will only generate more demand for cloud computing. Google and Microsoft have other very profitable businesses; these two can run their cloud offerings lean and mean. (Amazon makes up for tiny margins with huge volume.) To recall that Economics 101 chart:

Supply-Demand Curve
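The Economics 101 relationship behind that chart (lower price, higher quantity demanded) can be sketched as a toy linear demand curve; all numbers below are hypothetical and purely illustrative:

```python
# Toy linear demand curve: quantity demanded rises as price falls.
# All intercepts, slopes, and prices are hypothetical illustration values.
def quantity_demanded(price, intercept=100.0, slope=2.0):
    """Q = intercept - slope * P, floored at zero."""
    return max(0.0, intercept - slope * price)

# Successive cloud price cuts pull more workloads onto the curve.
for price in (30, 20, 10):
    print(f"price {price:>2} -> quantity {quantity_demanded(price):.0f}")
# price 30 -> quantity 40
# price 20 -> quantity 60
# price 10 -> quantity 80
```

The sketch only captures movement along the demand curve; the post’s larger point is that the cloud suppliers can keep cutting price profitably while traditional server vendors cannot.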

The strategic issue for IT executives (and traditional-supplier investors) is what happens over the next five years as lower server profits hollow out the traditional suppliers’ ability to innovate and deliver affordable hardware and software. Expect less support, and examine your application software stacks; you’ll want to make migration to a cloud implementation possible and economical. The book has yet to be written on cloud operations, backup, recovery, performance, and other issues that are already well understood in your existing data centers.

Meanwhile, what are your users up to? Just as PCs sprouted without IT’s blessing a generation ago, cost-conscious (or IT-schedule-averse) users are likely playing with the cloud using your enterprise data. Secure? Regulatory requirements met? Lots to think about.

Follow me on Twitter @PeterSKastner

Why IBM Will Exit the X86 Server Business

With hardware profits almost non-existent, IBM’s server hardware strategy needs a hurry-up fix. Jettisoning the X86 business and its sales and marketing employees will free up much-needed cash flow. But the System z and Power series remain expensive to support.

Q4-2013 Was a Hardware Business Debacle
IBM’s systems and technology division (S&T), also known as hardware, saw sales fall 26%, as pre-tax earnings fell by $768 million to $200 million. As the press release says in grim, adjective-free prose:

Total systems revenues decreased 25 percent. Revenues from System z mainframe server products decreased 37 percent compared with the year-ago period. Total delivery of System z computing power, as measured in MIPS (millions of instructions per second), decreased 26 percent versus the prior year. Revenues from Power Systems decreased 31 percent compared with the 2012 period. Revenues from System x decreased 16 percent. Revenues from System Storage decreased 13 percent. Revenues from Microelectronics OEM decreased 33 percent.

Against IBM’s total pre-tax income of $7.0 billion, S&T’s $0.2 billion contribution represented a mere 2.9%.

For the year 2013, S&T segment revenues were $14.4 billion, a decrease of 19 percent (down 18 percent, adjusting for currency). Corporate revenues for 2013 totaled $99.8 billion. S&T gross margins were down 3.5 points to 35.6%, compared to rising overall IBM margins of 48.6%.

IBM generated free cash flow of $15.0 billion, down approximately $3.2 billion year over year. A lot of that shortfall can be laid at the doorstep of the S&T division.

IBM’s hardware division is a declining business, falling from 21.3% of company revenues in 2007 to 14.4% in 2013, now with inadequate profits. Moreover, the S&T division requires a billion-dollar-plus annual R&D budget and bears the costs of IBM’s semiconductor fabs — on obviously declining unit volumes. S&T is not pulling its weight.
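The percentages above follow from simple arithmetic on the figures cited in this post; a quick sketch of the check (figures as reported, rounded to the precision IBM disclosed):

```python
# Back-of-the-envelope check of the S&T percentages cited above,
# using the dollar figures quoted from IBM's Q4-2013/FY-2013 releases.
st_pretax = 0.2          # S&T pre-tax contribution, $B
ibm_pretax = 7.0         # IBM total pre-tax income, $B
st_revenue_2013 = 14.4   # S&T FY-2013 segment revenue, $B
ibm_revenue_2013 = 99.8  # IBM FY-2013 corporate revenue, $B

pretax_share = st_pretax / ibm_pretax * 100
revenue_share = st_revenue_2013 / ibm_revenue_2013 * 100

print(f"S&T share of pre-tax income: {pretax_share:.1f}%")  # 2.9%
print(f"S&T share of 2013 revenue:  {revenue_share:.1f}%")  # 14.4%
```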

Those are the problems driving a strategy to sell off the X86 commodity server portion of S&T.

The Hardware Market is Changing Rapidly
Last April, I argued emphatically that the whole of IBM was better off retaining the X86 business. IBM hardware, including X86, drives software and services revenues in other parts of IBM, and supports a robust partner community that services small and medium establishments too small for IBM direct sales to cover efficiently.

What’s happened since then is IBM’s acquisition of SoftLayer Technologies, a cloud “Infrastructure as a Service” supplier, which specializes in bare-metal X86 servers with options for using IBM’s Power servers. SoftLayer is now IBM’s cloud strategy instantiated.

I still believe that killing off hardware choices for IBM customers will result in a declining IBM top line. But the financial situation outlined in the previous section demands a look at IBM’s options.

The Corner Office View
The sale of IBM’s X86 business has the following benefits:

  • Generates cash from the sale
  • Allows a reduction in sales and marketing expenses such as X86 advertising and trade shows
  • Allows for a permanent reduction in staff in X86 R&D, marketing, and sales
  • Creates a multi-billion dollar software and service recurring revenue opportunity at SoftLayer.

Unlike a year ago, IBM’s X86 customers can be encouraged to move their X86 workloads to the SoftLayer cloud and rent the computing they require. No more fork-lift upgrades, data center floor-space, HVAC limits, and all the other considerations of running your own data center. The same high-quality IBM software is available. Much work has been completed on cloud auditability and compliance, making SoftLayer attractive for large enterprise workloads.

With some effort, the IBM partners can be incentivized to get their small-business customers into the cloud. “The corporate data center is so twentieth century.” This limits customer, channel, and revenue loss. It’s a viable cannibalization strategy.

Exiting the X86 server business, IBM no longer has to engineer, develop and qualify X86 servers to its very high standards, nor bear the costs of that quality. What replaces X Series X86-based customer products at SoftLayer can be built to lower cloud-quality standards — “if it breaks, reboot on another instance.” In short, IBM can squeeze costs at its own SoftLayer data centers by moving to commodity cloud servers it builds instead of using over-engineered and -differentiated X Series machines designed for customer data centers.

All indications are that IBM wants to get this done soon.

What About the Rest of S&T?
The three pieces of S&T are servers, storage, and microelectronics.

Microelectronics exists to lower the costs of fabricating the proprietary System z mainframe and Power Systems servers, which still anchor an enormously profitable ecosystem. IBM still has its own semiconductor fab, and partners with GlobalFoundries to share costs on semiconductor R&D.

The competitive pressures on System z mainframe and Power Systems servers come mostly from X86 servers of all sorts. IBM is not contemplating exiting the System z or Power hardware market. But it does have a declining margin problem and an inexorable workload trend that favors commodity X86 computing. Expect no immediate upheavals in the proprietary server segment.

Storage is an expected component in a system-level hardware sale. There are no commodity threats to IBM’s storage business, but there are options that include the cloud. Expect no immediate upheavals in the storage segment.

Nevertheless, an unbiased cost-cutter would take a hard look at exiting Microelectronics. That is, exiting the semiconductor fabrication business — revenues down 33% in Q4 to a run-rate of under $2 billion — and working with a fab partner on future System z and Power Systems server designs. Intel would fit that bill.

However, the likely IBM reaction to losing control of the semiconductor fabrication of its key proprietary hardware can be politely summed up as “over my dead, blue body.” But the numbers don’t lie: without the X86 business, z and Power have an additional fab-based financial burden to bear that is impossible to hide. Storage and Microelectronics can’t make it up. If S&T revenues continue to decline as they have for the past seven years, another server shoe must eventually drop.

[Update January 24, 2014: IBM announced a definitive agreement to sell its X86 business to Lenovo for $2.3 billion in cash and Lenovo stock.]

Follow me on Twitter @PeterSKastner

IBM X-series


IT Industry Hopes for Q4 Holiday Magic

I am floored that almost all of 2013’s new tech products get to market in the fourth quarter. For the most part, the other three quarters of the year were not wasted so much as not used to smooth supply and demand. What is to be done?

2013 products arrive in Q4
Here are some of the data points I used to conclude that 2013 is one backend-loaded product year:

  • Data Center: Xeon E3-1200 v3 single-socket chips based on the Haswell architecture started shipping this month. Servers follow next quarter. Xeon E5 dual-socket chips based on Ivy Bridge announced and anticipated in shipping servers in Q4. New Avoton and Rangely Atom chips for micro-servers and storage/comms are announced and anticipated in product in Q4.
  • PCs: my channel checks show 2013 Gen 4 Core (Haswell) chips in about 10% of SKUs at retail, mostly quad-core. Dual-core chips are now arriving and we’ll see lower-end Haswell notebooks and desktops arriving imminently. Apple, for instance, launched its Haswell-based 2013 iMac all-in-ones September 24th. But note the 2013 Mac Pro announced in June has not shipped and the new MacBooks are missing in action.
  • Tablets: Intel’s Bay Trail Atom chips announced in June are now shipping. They’ll be married to Android or Windows 8.1, which ships in late October. Apple’s 2013 iPad products have not been announced. Android tabs this year have mostly seen software updates, not significant hardware changes.
  • Phones: Apple’s new phones started selling this week. The 5C is last year’s product with a cost-reduced plastic case. The iPhone 5S is the hot product. Unless you stood all day in line last weekend, you’ll be getting your ordered phone … in Q4. Intel’s Merrifield Atom chips for smartphones, announced in June, have yet to be launched. I’m thinking Merrifield gets the spotlight at the early January ’14 CES show.

How did we get so backend loaded?
I don’t think an economics degree is needed to explain what has happened. The phenomenal unit growth over the past decade in personal computers, including mobility, has squarely placed the industry under the forces of global macro-economics. The recession in Europe, the pull-back in emerging countries led by China, and slow growth in the USA all contribute to a sub-par global economy. Unit volume growth rates have fallen.

The IT industry has reacted by slowing new product introductions in order to sell more of the existing products, which reduces the per-unit R&D and overhead costs of existing products, and increases profits.

Unfortunately, products are typically built to a forecast. The forecast for 2012-2013 was higher than reality. More product was built than planned or sold. There are warehouses full of last year’s technology.

The best laugh I’ve gotten in the past year from industry executives is to suggest that “I know a guy who knows a guy in New Jersey who could maybe arrange a warehouse fire.” After about a second of mental arithmetic, I usually get a broad smile back and a response like “Hypothetically, that would certainly be very helpful.” (Industry execs must think I routinely wear a wire.)

So, with warehouses full of product which will depreciate dramatically upon new technology announcements, the industry has said “Give us more time to unload the warehouses.”

Meanwhile, getting the new base technology out the door on schedule is harder, not easier. Semiconductor fabrication, new OS releases, new sensors and drivers, etc. all contribute to friction in the product development schedule. But flaws are unacceptable because of the replacement costs. For example, if a computing flaw is found in Apple’s new iOS 7, which shipped five days ago, Apple will have to fix the install on over 100 million devices and climbing — and deal with class action lawsuits and reputation damage; costs over $1 billion are the starting point.

In short, the industry has slowed its cadence over the past several years to the point where all the sizzle in the market with this year’s products happens at the year-end holidays. (Glad I’m not a Wall Street analyst.)

What happens next?
The warehouses will still be stuffed entering 2014. But there will be less 2012 tech on those shelves, now replaced by 2013 tech.

Marching soldiers are taught that when they get out of step, they skip once and get back in cadence.

The ideal consumer cadence for the IT industry has products shipping in Q2 and fully ramped by mid-Q3; that’s in time for the back-to-school major selling season, second only to the holidays. The data center cadence is more centered on a two-year cycle, while enterprise PC buying prefers predictability.

Consumer tech in 2014 broadly moves to a smaller process node and doubles up to quad-cores. Competitively, Intel is muscling its way into tablets and smartphones. The A7 processor in the new Apple iPhone 5S is Apple’s first shot in response. Intel will come back with 14nm Atoms in 2014, and Apple will have an A8.

Notebooks will see a full generation of innovation as Intel delivers 14nm chips that are on an efficiency path towards threshold voltages — as low as possible — that deliver outstanding battery life. A variation on the same tech gets to Atom by the 2014 holidays.

The biggest visible product changes will be in form-factors, as two-in-one notebooks in many designs compete with tablets in many sizes. The risk-averse product manufacturers (who own that product in the warehouses) have to innovate or die, macro-economic conditions be damned. Dell comes to mind.

On the software side, Apple’s iOS 7 looks and acts a lot more like Android than ever before. Who would have guessed that? Microsoft tries again with Windows version 8.1.

Consumer buyers will be information-hosed with more changes than they have seen in years, making decision-making harder.

Intel has been very cagey about what 2014 brings to desktops; another year of Haswell refreshes before a 2015 new architecture is entirely possible. Otherwise, traditional beige boxes are being replaced with all-in-ones and innovative small form-factor machines.

The data center is in step and a skip is unnecessary. The 2014 market battle will answer the question: what place do micro-servers have in the data center? However, there is too much server-supplier capacity chasing a more commodity datacenter. Reports have IBM selling off its server business, and Dell is going private to focus long-term.

The bright spot is that tech products of all classes seem to wear out after about 4-5 years, demanding replacement. Anyone still have an iPhone 3G?

The industry is likely to continue dragging out its cycles until global macro-economic conditions improve and demand catches up with more of the supply. But moving product availability earlier by even two months in the calendar would improve new-product flow-through by catching the back-to-school season.

Catch me on Twitter @peterskastner


POWER to the People: IBM is Too Little, Too Late

“On August 6, Google, IBM, Mellanox, NVIDIA and Tyan today announced plans to form the OpenPOWER Consortium – an open development alliance based on IBM’s POWER microprocessor architecture. The Consortium intends to build advanced server, networking, storage and GPU-acceleration technology aimed at delivering more choice, control and flexibility to developers of next-generation, hyperscale and cloud data centers.”

IBM Hardware Is Not Carrying Its Weight
As the last computer manufacturer with its own silicon fab, IBM has a financial dilemma. The cost of silicon fab investments is increasing. Hardware revenues are declining. There are fewer new Z-series mainframes and POWER-based midrange computers on which to allocate hardware R&D, product development, fab capex, and other amortized costs. POWER revenues were down 25% in the latest quarter. Bloomberg reports furloughs of the hardware staff this month in an effort to cut costs.

The cloud-based future data center is full of Intel Xeon-based servers as practiced by Google, Amazon, Facebook et al. But margins on Intel-architecture servers — IBM’s instantiation is the X Series — are eroding. Widely believed rumors earlier this year had IBM selling off its X Series business to Lenovo, like IBM spun off its PC business in 2005.

Clearly, the IBM hardware business is the subject of much ongoing discussion in Armonk, NY.

The OpenPOWER Consortium is a Strategic Mistake
Our view is that IBM has made a strategic mistake with this announcement by admitting proprietary defeat and opening POWER up to an open-source consortium. The signal IBM is sending is that it is no longer totally committed to the long-term future of its mainframe and POWER hardware. The sensitive ears of IBM’s global data center customers will pick this message up and, over time, accelerate plans to migrate off of IBM hardware and software.

Proprietary hardware and software business success depends a great deal on customer trust — more than is commonly assumed. Customers want a long-term planning horizon in order to continue investing in IBM, which is not the lowest-cost solution. When trust is broken, a hardware business can crash precipitously. One such example is Prime Computer, a 1980s Massachusetts darling that was acquired, dropped plans for future processors, and watched its installed base decline at a fifty-percent-per-annum rate. On the other hand, HP keeps Digital Equipment and Tandem applications going to this day.

By throwing doubt on its future hardware business horizon, IBM risks its entire business model. Yes, that is a far-fetched statement but worth considering: the IBM services and software business is built around supporting, first and foremost, IBM hardware. Lose proprietary hardware customers, and services and high-margin software business will decline.

So, we think IBM is risking a lot by stirring up its customer base in return for a few million dollars in POWER consortium licensing revenue.

What About Google?
To see how this deal could turn even worse for IBM, let’s look at the motives of the headline consortium member, Google.

First, IBM just gave Google the “Amdahl coffee mug”. In the mainframe heyday of the 1970s, it was a common sales tactic for Amdahl, a mainframe clone company in fierce competition with IBM, to leave a coffee mug with the CIO. Properly placed on a desk, it sent the message to the IBM sales team to drop prices because there was competition for the order. A POWER mug — backed by open POWER servers — will send a pricing signal to Intel, which sells thousands of Xeon chips directly to Google. That action won’t move the needle much today.

POWER servers are most likely to appear in Open Compute form, as blades in an open-hardware rack-tray. These are the cost-reduced server architectures we see sucking the margin out of the entire server industry. Gas on the fire of that trend.

And we don’t see Google needing to build its own Tier-3 backend database servers, a common role for POWER servers. However, Google customizing POWER chips with NVIDIA GPU technology for some distant product is believable. For example, we’re puzzling over how Google will reduce the $85,000 technology cost of its driverless automobile to mass-market levels, and the consortium could become part of that solution.

Open POWER Software Too?
IBM is emphatically not throwing POWER operating systems (i.e., AIX Unix and OS/400) and systems software into the open consortium. That would give away the IBM family jewels. So the open-source hardware folks will quickly turn to the Linux-on-POWER operating systems. Given a choice, buyers will turn to open-source — that is, free or lower-cost — equivalents of IBM system software. We see little software-revenue upside to IBM’s POWER consortium move, nor a services upside either.

Fortunately, IBM did not suggest that POWER licensing would extend to the fast-growing mobile world of tablets and smartphones, because that would be a bridge way too far. IBM may staunch some of the embedded-POWER chip business lost to ARM’s customers and Intel in recent years by licensing customizable designs a la ARM Holdings.

Thoughts and Observations
In conclusion, we see nothing good happening to IBM’s bottom line as a result of the OpenPOWER Consortium announcement. And if it wasn’t about the bottom line, why risk long-term customer trust in IBM’s long-term hardware platform commitments? The revenue from POWER licensing will not come close to compensating for the weakness that IBM displays with this consortium strategy.

I ask this without drama or bombast: can we now see the dim horizon where IBM is no longer a major player in the computer hardware business? That’s a huge question which until now has never been asked nor needed to be asked. Moreover, no IBM hardware products would mean no IBM fab is needed.

The real implications are about IBM’s declining semiconductor business. POWER (including embedded POWER) is a volume product for IBM Microelectronics, along with current-generation video game chips. The video game business dries up by year end as Sony and Microsoft introduce next-generation consoles, sans IBM content. POWER licensing through the OpenPOWER Consortium might generate some fab business for IBM’s East Fishkill, NY fab, but that business could also go to GlobalFoundries (GloFo) or Taiwan Semiconductor (TSMC). Where’s the chip volume going to come from?

IBM will not be able to keep profitably investing in cutting-edge semiconductor fabs if it does not have the fab volume needed to amortize costs. Simple economies of scale. But note that IBM fab technology has been of enormous help to GloFo and TSMC in getting to recent semiconductor technology nodes. Absent IBM’s help, this progress would be delayed.
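The economies-of-scale point is straightforward arithmetic: fixed fab capex and R&D spread over fewer chips means a higher cost per chip. A minimal sketch, with all dollar figures and volumes hypothetical:

```python
# Per-chip cost of running a fab: fixed costs amortized over unit volume.
# All figures are hypothetical, chosen only to illustrate the scaling.
def cost_per_chip(fixed_cost_usd, volume_chips, variable_cost_usd=20.0):
    """Amortized fixed cost per chip plus per-chip variable cost."""
    return fixed_cost_usd / volume_chips + variable_cost_usd

fixed = 2_000_000_000  # e.g., fab capex plus process R&D, in dollars

print(cost_per_chip(fixed, 100_000_000))  # high volume: 40.0 ($ per chip)
print(cost_per_chip(fixed, 10_000_000))   # one-tenth volume: 220.0 ($ per chip)
```

Cut the volume by 10x and the amortized cost per chip balloons, which is exactly the squeeze a shrinking POWER and game-console business puts on the East Fishkill fab.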

Any move by IBM to cut expenses by slowing fab technology investments will have a cascading negative impact on global merchant semiconductor fab innovation, hurting, for example, the ARM chip ecosystem. Is the canary still singing in the IBM semiconductor fab?

Your comments and feedback are invited.

Follow @PeterSKastner on Twitter

IBM POWER Linux Server


Two-In-One Tablet/Notebooks: Mirage or Miracle?

Intel’s sales and marketing senior executive Tom Kilroy said at Computex last week, “The two-in-one concept is really going to be the new wave,” citing computers such as the Lenovo Yoga, which can be used as a laptop, as a tablet, and in ‘tent’ mode with a viewing screen that stands up on its own. “The days of carrying around a smartphone, a tablet, and a notebook are numbered – the discrete tablet as we know it will go by the wayside and the 2-in-1 will be the future. If you’re doing content creation it just doesn’t happen on the phone.”

We’re Heading in the Right Direction
I could not agree more that the endpoint of all the phone, tablet, and laptop/notebook convergence talk is fewer devices for most people. For one, cost alone prohibits many from affording three devices at roughly $600 unsubsidized per device. The phone is most likely to stand alone because it is the most pocket-portable, and can do it all, albeit in a tiny form factor. That leaves the battle between the tablet and the laptop as the device most likely to morph dramatically this decade.

Many will settle on a converged laptop-tablet (laptab) that combines the media consumption strengths of the tablet with the data and media production strengths of the laptop/notebook. Such a laptab could do the jobs of both a laptop and a tablet with few if any compromises. Done right, the laptab will be the converged non-phone device.

Intel’s Haswell and Bay Trail announcements on June 3rd set the stage for that company to rapidly become a much bigger player in the smartphone and laptop business, as well as for Ultrabook laptops with convertible features. What’s changed with this latest generation of technology is much-improved performance-per-watt, idle power, and battery life.

The mainstream laptop is now in the same ballpark as tablets with keyboards in baseline mobility and weight. Convertible laptops sans keyboards give up little to dedicated tablets in hardware. Tablets and laptops now both have adequate screen resolution, processor speed, memory and storage, and battery life for media consumption — the tablet’s tour de force.

However, while the industry is now headed in the direction of laptab convergence, I don’t think we are yet on course. As the old New England adage goes, “If you don’t know where you’re going, any road will get you there.”

Let’s look more deeply at what a converged two-in-one should look like. We’d hate for the promise of convergence to be a mirage.

A Two-in-One Should Be Just That
Our qualitative market research shows consumers really do want one mobile device that can do the jobs now performed separately by laptops and tablets. As usual, the devil is in the details.

To the users in our research panels, converged really does mean “coming together” in hardware and software including apps. That’s not the public industry directional focus we’ve seen.

On laptops, users want to “run tablet apps on a productivity OS (like Windows or OS X).” On smartphones and tablets, users want to “run productivity apps I am familiar with, and have access to my home and work data.” Queried further, users tell us they want a merged hardware feature-set combining a tablet and laptop, together with the ability to run their tablet apps on the laptop. (You gotta love non-technology users for wanting technology miracles. It’s what drives innovation.)

A converged laptab hardware set would include:

  • Processor, memory, storage, touch screen
  • All-day (and probably into the night) battery life
  • Keyboard, preferably removable for weight reasons
  • Tablet sensors: GPS, accelerometer, 1080p video/still camera
  • Radios: Bluetooth, WiFi, optional 3G/LTE cellular, and NFC once retailers enable the NFC eCommerce market.

The heavy lift from these pesky consumers is in the software stack. They want to run tablet apps (e.g., Angry Birds) on a full operating system. For example, iOS apps on an OS X MacBook.

Importantly, they do not want to buy a second copy of an app (e.g., $4.99 Angry Birds for OS X) on a full operating system. “One app with one set of data” is what we heard, along with complaints about the intricacies of syncing. Now, we suggested to Apple execs five years ago this summer that running iOS apps on OS X was a good idea. They replied, “Yeah, we’ve heard that.” But they have done nothing about it.

So, consider consumers with jobs who need to stay compatible with applications and files they use at work. Think Microsoft Office. These folks can’t give up the laptop and its office-productivity OS for a consumption-oriented tablet in an either-or decision. Let’s call this the business laptab market.

Conversely, tablets are go-to devices for watching video, surfing the Internet, email, and reading, but not for writing books or building budget spreadsheets. It’s the consume(r) market.

How the tech industry responds to the above specs for the two divergent markets will dictate the course and duration of converged laptab demand.

There are two positive signs we’ve noticed recently, both involving Intel. First, the company acquired ST-Ericsson’s global navigation satellite system (GNSS) business. That will bring GPS capabilities to Intel’s communications chip business, and hence to Intel mobile products like tablets and laptops. Second, the Silvermont architecture that includes the Bay Trail tablet chip supports virtualization, so hypothetically Android could run alongside Windows, or iOS alongside OS X. Just a small matter of programming and licensing.

On the negative side, a number of products have been announced with dual-boot capabilities, especially Android and Windows. This plays to the “laptop converts to tablet” form of convertibles. However, our research says dual-boot is not the destination.

To watch laptab convergence play out, keep an eye on the three OS players: Microsoft Windows, Apple’s iOS and OS X, and Google. We view Google as a wild card because it could rather easily merge Chrome OS and Android with Google apps (but has said that won’t happen over the next two years). The hardware industry really cannot deliver the converged laptab described above without the active support of the OS players.

Laptab users may want to think through their long-term options. Browser-based apps, especially those using HTML5 and the cloud, are quite interchangeable. The trend is towards cross-OS applications. Apple’s iWork office apps will soon run on iOS, OS X, and Windows. Google Docs and Microsoft Office 365 are available in the cloud. Microsoft just delivered Office for iPhone.

Summary Observations
The direction of laptab convergence we’ve seen to date is headed in the right direction, but the finish line is not in sight.

Real consumers with experience in tablets and laptops see the need to bring all the sensor and media hardware in tablets to laptops; convertible laptops with removable keyboards do not go far enough into the desired experience to substitute for tablets.

The convergence miracle is a combined tablet and notebook OS software and user apps. The hardware to do this is now close at hand. But the willingness of the industry to push the software, licensing, and marketing investment is not apparent.

Thus, widespread laptab substitution for tablets and laptops is not in the foreseeable future. The market, especially the business laptab market, will remain additive.

Follow @peterskastner on Twitter

Lenovo Ideapad Yoga convertible laptop


Pulse Check: How Intel is Scaling to Meet the Decade’s Opportunities

Eighteen months ago, Intel announced it would address the world’s rapidly growing computing continuum by investing in variations on the Intel Architecture (IA). It was met with a ho-hum. Now, many product families are beginning to emerge from the development labs and head towards production. All with IA DNA, these chip families are designed to be highly competitive in literally dozens of new businesses for Intel, produced in high volumes, and delivering genuine value to customers and end users.

Intel is the only company with an architecture, cash flow, fabs, and R&D capable of scaling its computing engines up and down to meet the decade’s big market opportunities. What is Intel doing, and how can it pull this off?

The 2010’s Computing Continuum
Today’s computing is a continuum that ranges from smartphones to mission-critical datacenter machines, and from desktops to automobiles. These devices represent a total addressable market (TAM) approaching a billion processors a year, one that will explode to more than two billion by the end of the decade. Of that, traditional desktop microprocessors account for about 175 million chips this year, and notebooks 225 million.

For more than four decades, solving all the world’s computing problems required multiple computer architectures, operating systems, and applications. That is hardly efficient for the world’s economy, but putting an IBM mainframe into a cell phone wasn’t practical. So we made do with multiple architectures and their inefficiencies.

In the 1990’s, I advised early adopters NCR and Sequent in their plans for Intel 486-based servers. Those were desktop PC chips harnessed into datacenter server roles. Over twenty years, Intel learned from its customers to create and improve the Xeon server family of chips, and has achieved a dominant role in datacenter servers.

Now, Intel Corporation is methodically using its world-class silicon design and fabrication capabilities to scale its industry-standard processors down to fit smartphones and embedded applications, and up into high-performance computing applications, as two examples. Scaling in other directions is still in the labs and under wraps.

The Intel Architecture (IA) Continuum
IA is Intel’s architecture and instruction set, common (with feature differentiation) to the Atom, Core, and Xeon microprocessors already used in the consumer electronics, desktop and notebook, and server markets, respectively. These microprocessors can run a common stack of software such as Java, Linux, or Microsoft Windows. IA also represents the hardware foundation for hundreds of billions of dollars in software application investments by enterprises and application package developers; those investments remain valuable assets as long as hardware platforms can run them, and backwards compatibility in IA has protected them.

To meet the widely varying requirements of this decade’s computing continuum, Intel is using the DNA of IA to create application-specific variants of its microprocessors. Think of this as silicon gene-splicing. Each variant has its own micro-architecture suited to its class of computing requirements (e.g., Sandy Bridge for 2011 desktops and notebooks). These genetically related processors will extend Intel into new markets, and include instruction-set compatible microprocessors:

  • Embedded processors and electronics known as “systems on a chip” (SOCs) with an Atom core and customized circuitry for controlling machines, display signage, automobiles, and industrial products;
  • Atom, the general-purpose computer heart of consumer electronics mobile devices, tablets, and soon smartphones;
  • Core i3, i5, and i7 processors for business and consumer desktops and notebooks, with increasing numbers of variants for form-factor, low power, and geography;
  • Xeon processors for workstations and servers, with multi-processor capability advancing well into the mainframe-class, mission-critical computing segment;
  • Xeon datacenter infrastructure processor variants (e.g., storage systems, with communications management a logical follow-on).

A Pause to Bring You Up To Date
Please do not be miffed: all of the above was published in February 2011, more than two years ago. We included it here because it sets the stage for reviewing where Intel stands in delivering on its long-term strategy for the IA computing continuum, and to remind readers that Intel’s strategy has been hiding in plain sight for going on five years.

In that piece two years ago, we concluded that IA fits the market requirements of the vast majority of the decade’s computing work requirements, and that Intel is singularly capable of creating the products to fill the expanding needs of the computing market (e.g., many core).

With the launch of the 4th Generation Core 22nm microprocessors (code-name Haswell) this week and the announcement of the code-name Baytrail 22nm Atom systems on a chip (SoCs), it’s an appropriate time to take the pulse of Intel’s long-term stated direction and the products that map to the strategy.

Systems on a Chip (SoCs)
The Haswell/Baytrail launch would be a lot less impressive if Intel had not mastered the SoC.

The benefits of an SoC compared to the traditional multi-chip approach Intel has used up until now are: fewer components, less board space, greater integration, lower power consumption, lower production and assembly costs, and better performance. Phew! Intel could not build a competitive smartphone until it could put all of the logic for a computer onto one chip.

This week’s announcements include SoCs for low-voltage notebooks, tablets, and smartphones. The data center Atom SoCs, code-name Avoton, are expected later this year.

For the first time, Intel’s mainstream PC, data center, and mobile businesses include highly competitive SoCs.

SoCs are all about integration. The announcement last month at Intel’s annual investor meeting that “integration to innovation” was an additional strategy vector for the company hints at using many more variations of SoCs to meet Intel’s market opportunities with highly targeted SoC-based variants of Atom, Core, and Xeon.

Baytrail, The Forthcoming Atom Hero
With the Baytrail SoCs in tablets and Merrifield in smartphones, Intel can for the first time seriously compete for mobile market share against ARM competitors on performance and performance-per-watt. These devices are likely to run the Windows 8, Android, and Chrome operating systems. They will be sold to carriers globally, with variants for local markets (e.g., China and Africa).

The smartphone and tablet markets combined exceed the PC market. By delivering competitive chips that run thousands of legacy apps, Intel has finally caught up on the technology front of the mobile business.

Along with almost the entire IT industry, Intel missed the opportunity that became the Apple iPhone. Early Atom processors were not SoCs, had poor battery life, and were relatively expensive. That’s a deep hole to climb out of. But Intel has done just that. There are a lot fewer naysayers than two years ago. The pendulum is now swinging Intel’s way on Atom. 2014 will be the year Intel starts garnering serious market share in mobile devices.

4th Generation Core for Mainstream Notebooks and PCs
Haswell is a new architecture implemented in new SoCs for long-battery-life notebooks, and with traditional chipsets for mainstream notebooks and desktops. The architecture moves the bar markedly higher in graphics performance, power management, and floating point (e.g., scientific) computations.

We are rethinking our computing model as a result of Haswell notebooks and PCs. Unless you are an intense gamer or workstation-class content producer, we think a notebook-technology device is the best solution.

Compared to four-year-old notebooks in Intel’s own tests, Haswell-era notebooks are half the weight and half the height, get work done 1.8x faster, convert videos 23x faster, play popular games 26x faster, wake up in a few seconds, and offer 3x the battery life for HD movie playback. Why be tethered to a desktop?

Black, breadbox-size desktops are giving way to all-in-one (AIO) designs like the Apple iMac used to write this blog. That iMac has been running at 100% CPU utilization for two years with no problems. (It folds proteins for medical research in the background.) New PC designs use notebook-like components to fit behind the screen. You’ll see AIOs this fall that lie flat as large tablets or stand vertical on a rear kickstand. With a touch screen, wireless Internet, and Bluetooth peripherals, these new AIOs are easily transportable around the house. That’s the way we see the mainstream desktop PC evolving.

And PCs need to evolve quickly. Sales are down almost 10% this year. One reason is global macro-economic conditions. But everybody knows the PC replacement cycle has slowed to a crawl. Intel’s challenge is to spark the PC replacement cycle. Haswell PCs and notebooks, as noted above, deliver a far superior experience to users than they are putting up with in their old, obsolescent devices.

Xeon processors for workstations, servers, storage, and communications
The data center is a very successful story for Intel. The company has steadily gained workloads from traditional (largely legacy Unix) systems; grown share in the big-ticket Top 500 high-performance computing segment; evolved with mega-datacenter customers such as Amazon, Facebook, and Google; and extended Xeon into storage and communications processors inside the datacenter.

The Haswell architecture includes two additions of great benefit to data-center computing. First, a new floating-point architecture and instructions should improve scientific and technical computing throughput by up to 60%, a huge gain over the installed server base. Second, transactional memory makes it easier for programmers to deliver fine-grained parallelism, and hence to take advantage of multiple cores with multi-threaded programs, including making operating systems and systems software like databases run more efficiently.
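The idea behind transactional memory is that a thread optimistically executes a critical section and retries only if another thread’s update conflicts, instead of always serializing on a lock. As an illustrative sketch only, here is that optimistic pattern in plain Python, with a version counter standing in for the conflict detection Haswell does in hardware; this is not Intel’s actual TSX instruction interface:

```python
import threading

class TxConflict(Exception):
    """Raised when another thread committed first."""

class VersionedCell:
    """A value guarded by a version counter; the version check stands in
    for the cache-based conflict detection done by hardware."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()  # used only to make commits atomic

    def read(self):
        with self._lock:
            return self.value, self.version

    def commit(self, expected_version, new_value):
        with self._lock:
            if self.version != expected_version:
                raise TxConflict  # another thread committed first: abort
            self.value = new_value
            self.version += 1

def atomic_add(cell, delta, retries=10000):
    """Optimistically apply an update, retrying on conflict."""
    for _ in range(retries):
        value, version = cell.read()             # speculative read
        try:
            cell.commit(version, value + delta)  # commits only if unchanged
            return
        except TxConflict:
            continue                             # transaction aborted: retry
    raise RuntimeError("too much contention; fall back to a plain lock")

counter = VersionedCell(0)
threads = [threading.Thread(target=atomic_add, args=(counter, 1))
           for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 100
```

In hardware the uncontended case pays almost nothing, which is the whole appeal; but the retry loop with a lock fallback looks much like this sketch.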

In the past year, the company met one data-center threat, GPU-based computing, with Xeon Phi, a server add-in card containing dozens of IA cores that run a version of Linux to enable massively parallel processing. Xeon Phi competes with GPU-based challengers from AMD and NVIDIA.

Another challenge, micro-servers, is more a vision than a market today. Nevertheless, Intel created the code-name Avoton Atom SoC for delivery later this year. Avoton will compete against emerging AMD- and ARM-based micro-server designs.

Challenges
1. The most difficult technology challenge Intel faces this decade remains software, not hardware. Internally, the growing list of must-deliver software drivers for hardware such as processor-integrated graphics means that the rigid two-year tick-tock hardware model must also accommodate software delivery schedules.

Externally, Intel’s all-out assault on the mobile market requires exquisite tact in dealing with complex relationships with key software/platform merchants: Apple (iOS), Google (Android), and Microsoft (Windows), all of whom are tough competitors.

In the consumer space such as smartphones, Intel’s ability to deliver applications and a winning user experience are limited by the company’s OEM distribution model. More emphasis needs to be placed on the end-user application ecosystem, both quality and quantity. We’re thinking more reference platform than reference hardware.

2. By the end of the decade, silicon fabrication will be under 10 nm, and it is a lot less clear how Moore’s Law will perform in the 2020’s. Nevertheless, we are optimistic about the next 10-12 years.

3. The company missed the coming of the iPhone and lost out on a lot of market potential. That can’t happen again. The company last month set up a new emerging-devices division charged with finding the next big thing around the same time others do.

4. In the past, we’ve believed that mobile devices — tablets and smartphones — were additive to PCs and notebooks, not substitutional. The new generation of Haswell and Baytrail mobile devices, especially when running Microsoft Windows, offer the best of the portable/consumption world together with the performance and application software (i.e., Microsoft Office) to produce content and data. Can Intel optimize the market around this pivot point?

Observations and Conclusions
Our summary observations have not changed in two years, and are reinforced by the Haswell/Baytrail SoCs that are this week’s proof point:

  • Intel is taking its proven IA platforms and modifying them to scale competitively as existing markets evolve and as new markets such as smartphones emerge.
  • IA scales from handhelds to mission-critical enterprise applications, all able to benefit from a common set of software development tools and protecting the vast majority of the world’s software investments. Moreover, IA and Intel itself are evolving to specifically meet the needs of a spectrum of computing made personal, the idea that a person will have multiple computing devices that match the time, place, and needs of the user.
  • Intel is the only company with an architecture, cash flow, fabs, and R&D capable of scaling its computing engines up and down to meet the decade’s big market opportunities.

Looking forward, Intel has fewer and less critical technology challenges than at any point since the iPhone launch in 2007. Instead, the company’s largely engineering-oriented talent must help the world through a complex market-development challenge as we all sort out what devices are best suited for what tasks. We’ve only scratched the surface of convertible tablet/notebook designs. How can Intel help consumers decide what they want and need so the industry can make them profitably? How fast can Intel help the market to make up its mind? Perhaps the “integration to innovation” initiative needs a marketing component.

If the three-year evolving Ultrabook campaign is an example of how Intel can change consumer tastes, then we think industry progress will be slower than optimal. A “win the hearts and minds” campaign is needed, learning from the lessons of the Ultrabook evolution. It will take skillsets in influencing and moving markets, skillsets Intel will need more of as personal computing changes over the next decade, for example as perceptual computing morphs the user interface.

Absent a macro-economic melt-down, Intel is highly likely to enjoy the fruits of five years of investments over the coming two-year life of the Haswell architecture. And there’s no pressing need today to focus beyond 2015.

Biography

Peter S. Kastner is an industry analyst with over forty-five years of experience in application development, datacenter operations, computer industry marketing, PCs, and market research. He was a co-founder of industry-watcher Aberdeen Group in 1989. His firm, Scott-Page LLC, consults with technology companies and technology users.

Twitter: @peterskastner

Haswell Core i7 desktop microprocessor
