Why IBM Will Exit the X86 Server Business

With hardware profits almost non-existent, IBM’s server hardware strategy needs a hurry-up fix. Jettisoning the X86 business and its sales/marketing employees will free up much-needed cash flow. But the System z and Power series remain expensive to support.

Q4-2013 Was a Hardware Business Debacle
IBM’s systems and technology division (S&T), also known as hardware, saw sales fall 26%, while pre-tax earnings fell by $768 million to $200 million. As the press release says in grim, adjective-free prose:

Total systems revenues decreased 25 percent.  Revenues from System z mainframe server products decreased 37 percent compared with the year-ago period.  Total delivery of System z computing power, as measured in MIPS (millions of instructions per second), decreased 26 percent versus the prior year.  Revenues from Power Systems decreased 31 percent compared with the 2012 period.  Revenues from System x decreased 16 percent.  Revenues from System Storage decreased 13 percent.  Revenues from Microelectronics OEM decreased 33 percent.

Against IBM’s company-wide pre-tax income of $7.0 billion, S&T’s $0.2 billion contribution represented a mere 2.9%.

For the year 2013, S&T segment revenues were $14.4 billion, a decrease of 19 percent (down 18 percent, adjusting for currency).   Corporate revenues for 2013 totaled $99.8 billion. S&T gross margins were down 3.5 points to 35.6%, compared to rising overall IBM margins of 48.6%.

IBM generated free cash flow of $15.0 billion, down approximately $3.2 billion year over year. A lot of that shortfall can be laid at the doorstep of the S&T division.

IBM’s hardware division is a declining business, falling from 21.3% of company revenues in 2007 to 14.4% in 2013, now with inadequate profits. Moreover, the S&T division requires a billion-dollar-plus annual R&D budget and bears the costs of IBM’s semiconductor fabs — on obviously declining unit volumes. S&T is not pulling its weight.

Those are the problems driving a strategy to sell off the X86 commodity server portion of S&T.

The Hardware Market is Changing Rapidly
Last April, I argued emphatically that the whole of IBM was better off retaining the X86 business. IBM hardware, including X86, drives software and services revenues in other parts of IBM, and supports a robust partner community that services small and medium establishments too small for IBM direct sales to cover efficiently.

What’s happened since then is IBM’s acquisition of SoftLayer Technologies, a cloud “Infrastructure as a Service” supplier, which specializes in bare-metal X86 servers with options for using IBM’s Power servers. SoftLayer is now IBM’s cloud strategy instantiated.

I still believe that killing off hardware choices for IBM customers will result in a declining IBM top line. But the financial situation outlined in the previous section demands a look at IBM’s options.

The Corner Office View
The sale of IBM’s X86 business delivers the following benefits:

  • Generates cash from the sale
  • Allows a reduction in sales and marketing expenses such as X86 advertising and trade shows
  • Allows for a permanent reduction in staff in X86 R&D, marketing, and sales
  • Creates a multi-billion-dollar software and services recurring-revenue opportunity at SoftLayer

Unlike a year ago, IBM’s X86 customers can now be encouraged to move their X86 workloads to the SoftLayer cloud and rent the computing they require. No more fork-lift upgrades, data center floor space, HVAC limits, and all the other considerations of running your own data center. The same high-quality IBM software is available. And a lot of work has been completed on cloud auditability and compliance, making SoftLayer attractive for large enterprise workloads.

With some effort, the IBM partners can be incentivized to get their small-business customers into the cloud. “The corporate data center is so twentieth century.” This limits customer, channel, and revenue loss. It’s a viable cannibalization strategy.

By exiting the X86 server business, IBM no longer has to engineer, develop, and qualify X86 servers to its very high standards, nor bear the costs of that quality. What replaces X Series X86-based customer products at SoftLayer can be built to lower cloud-quality standards — “if it breaks, reboot on another instance.” In short, IBM can squeeze costs at its own SoftLayer data centers by moving to commodity cloud servers it builds itself, instead of using over-engineered, differentiated X Series machines designed for customer data centers.
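
That operating model is easy to sketch in code. Below is a minimal, hypothetical supervisor loop illustrating the “reboot on another instance” pattern; the node names, health probe, and failure rate are all invented for illustration, not anything IBM or SoftLayer has published.

```python
import random
import time

def is_healthy(instance: str) -> bool:
    # Stand-in health probe; a real check would hit an HTTP health
    # endpoint or the hypervisor API. ~5% of probes fail here.
    return random.random() > 0.05

def reschedule(workload: str, failed: str, pool: list) -> str:
    # Move the workload to any node other than the one that failed.
    replacement = random.choice([n for n in pool if n != failed])
    print(f"{workload}: {failed} failed, restarting on {replacement}")
    return replacement

pool = [f"node-{n}" for n in range(8)]
placement = {"web-frontend": "node-0", "enrollment-api": "node-1"}

for _ in range(3):  # a few supervision cycles
    for workload, instance in list(placement.items()):
        if not is_healthy(instance):
            placement[workload] = reschedule(workload, instance, pool)
    time.sleep(1)
```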

All indications are that IBM wants to get this done soon.

What About the Rest of S&T?
The three pieces of S&T are servers, storage, and microelectronics.

Microelectronics exists to lower the costs of fabricating the proprietary System z mainframe and Power Systems servers, which are still an enormously profitable ecosystem. IBM still has its own semiconductor fab, and partners with GlobalFoundries to share costs on semiconductor R&D.

The competitive pressures on System z mainframe and Power Systems servers come mostly from X86 servers of all sorts. IBM is not contemplating exiting the System z or Power hardware market. But it does have a declining-margin problem and an inexorable workload trend that favors commodity X86 computing. Expect no immediate upheavals in the proprietary server segment.

Storage is an expected component in a system-level hardware sale. There are no commodity threats to IBM’s storage business, but there are options that include the cloud. Expect no immediate upheavals in the storage segment.

Nevertheless, an unbiased cost-cutter would take a hard look at exiting Microelectronics. That is, exiting the semiconductor fabrication business — revenues down 33% in Q4 to a run-rate of under $2 billion — and working with a fab partner on future System z and Power Systems server designs. Intel would fit that bill.

However, the likely IBM reaction to losing control of the fabrication of its key proprietary hardware can be politely summed up as “over my dead, blue body.” But the numbers don’t lie: without the X86 business, z and Power have an additional fab-based financial burden to bear that is impossible to hide. Storage and Microelectronics can’t make it up. If S&T revenues continue to decline as they have for the past seven years, another server shoe must eventually drop.

[Update January 24, 2014: IBM announced a definitive agreement to sell its X86 business to Lenovo for $2.3 billion in cash and Lenovo stock.]

Follow me on Twitter @PeterSKastner

IBM X-series

HealthCare.Gov: IT Rules Broken, Mistakes Made

Numerous friends, neighbors, and clients have asked me about the IT fiasco in the eight weeks since the Obamacare federal exchange project, HealthCare.Gov, was launched. “How did it happen and what went wrong?” they ask. Lacking subpoena power, I can only draw from experience and common sense. There were lots of information technology (IT) rules broken and mistakes made. Someone else can write the book.

Performance, delivery date, features, and quality are a zero-sum game
The project was executed with the expectation of complex features, very high volume and performance metrics, robust production quality, and a drop-dead date of October 1, 2013. Even with all the money in the federal budget, tradeoffs are still necessary in the zero-sum equation of successful project management. Those tradeoffs were plainly not made, the system went live October 1, and the results speak for themselves.

The Feds are different
Federal IT project procurement and management differ from the private sector like night and day. Some of the major factors include:

  • Politics squared.
  • IT procurement regulations are a gamed system that’s broken. Everybody in Washington knows it and nobody will do anything about it.
  • The federal government does little programming and development in-house. Most is contracted out.
  • The culture lacks accountability and real performance metrics.

The HealthCare.gov website is really a complex online marketplace. It’s not the first, though. HealthCare.gov has taken longer to complete than World Wars I and II, the building of the atomic bomb, and putting a man in space.

Too many cooks in the kitchen
The specifications were never really frozen in a version approved by all stakeholders. That meant the programmers never worked with a fixed design.

The HealthCare.gov project was always surrounded by politics and executive branch oversight that led to design changes, such as the late summer decision to graft a rigorous registration process into the site before users could see policy choices.

No surprise that this high-visibility project would have lots of micro-management. But the many — over fifty — IT contractors working on the project had no incentive to trade off time-to-completion against feature changes. They weren’t in charge. The timeline slipped, a lot.

Who’s in charge? Can the project manager do the job?
There was no take-charge project manager responsible for this half-billion-dollar undertaking. The Centers for Medicare & Medicaid Services (CMS) was assigned oversight by an executive without extensive complex system-integration project experience. Day-to-day coordination of the over fifty contractors working on HealthCare.gov was clearly lacking, and project management was sub-par; tellingly, the first remedy in October was assigning government technicians with exactly that experience.

The White House trusted its own policy and political teams rather than bringing in outsiders with more experience putting in place something as technically challenging as HealthCare.gov and the infrastructure to support it.

Absent a take-charge project management team, the individual IT contractors pretty much did their own thing by programming the functions assigned to them and little else. This is obvious from all the finger-pointing about parts of the site that did not work October 1st. The lack of systems integration is telling.

After October 1st, a lead project manager (a contractor) was appointed.

We don’t need no stinking architecture.
Why was the federal website at Healthcare.gov set up the way it was? How were the choices of architecture made — the overall technology design for the site?

Everybody now knows the site needs to handle volumes of millions of subscribers during a short eligibility and sign-up window between October 1 and December 15th (now extended to the 23rd). The HealthCare.gov website handles 36 states. Each state has different insurance plans at different tiers and prices, but an individual state site works identically to every other state’s. Each has bronze, silver, and gold policies from one or more insurers to compare.

The Healthcare.gov website was architected as one humongous site for all 36 states handling all the visitors in a single system. All the eggs were put in one basket.

An alternative architecture would immediately route visitors from the home page to 36 separate but technologically identical state-level systems. More operations management, but far greater opportunities to scale out and scale up for volume.
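
To make the idea concrete, here is a toy sketch of that front door. The state codes, function name, and URLs are hypothetical; a production version would live in a load balancer or DNS, not application code.

```python
# Hypothetical front-door routing: each visitor goes to one of 36
# identical, independently scaled state stacks instead of a single
# monolithic site.

FFM_STATES = ["TX", "FL", "OH", "PA", "NC", "GA"]  # ...36 in all

def backend_for(state_code: str) -> str:
    if state_code not in FFM_STATES:
        raise ValueError(f"{state_code} runs its own exchange")
    # One pool of identical servers per state, so a traffic spike in
    # one state cannot take the other 35 down.
    return f"https://{state_code.lower()}.healthcare.example.gov"

print(backend_for("TX"))  # https://tx.healthcare.example.gov
```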

Another benefit of a single application code-base replicated dozens of times is that the risk and expense of states that chose to run their own sites can be mitigated. The 14 states that built their own sites all did it differently, to no national benefit. While California succeeded, Oregon has yet to enroll a customer online. We paid 14 times.

Another questionable architectural decision is the real-time “federal data hub” that makes eligibility and subsidy decisions. As designed, the website queries numerous agencies, including Social Security, Homeland Security, Internal Revenue, and Immigration, to make a complicated, regulation-driven decision on whether the (waiting) customer is qualified to buy a policy and what federal subsidies, if any, the customer gets to reduce policy premium costs.

This design approach puts a strain on all of the agency systems, and leads to inevitable response time delays as all the data needed to make a determination is gathered in real time. It also requires that agency systems not designed for 24/7 online operations reinvent their applications and operations. This hasn’t happened.

An alternative design would make some or all of the determination in batch programs run ahead of time, much as a credit score is instantly available to a merchant when you apply for new credit. That would greatly simplify the website and reduce each user’s time online.
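
Here is a toy sketch of the shape of that batch design: precompute overnight, then answer the website with one keyed lookup. The record layouts and subsidy arithmetic are invented; the real eligibility rules are far more involved.

```python
# Sketch: a nightly batch job folds agency extracts (SSA, IRS, etc.)
# into precomputed determinations, so the website does a single
# lookup instead of fanning out real-time queries to many agencies.

def nightly_batch(ssa_extract: dict, irs_extract: dict) -> dict:
    determinations = {}
    for person_id, citizen_ok in ssa_extract.items():
        income = irs_extract.get(person_id, 0)
        # Invented subsidy formula, purely for illustration.
        subsidy = max(0, 5000 - income // 10) if citizen_ok else 0
        determinations[person_id] = {"eligible": citizen_ok,
                                     "subsidy_usd": subsidy}
    return determinations  # loaded into a fast key-value store

# Website path: one lookup, no cross-agency round trips.
cache = nightly_batch({"p1": True, "p2": False}, {"p1": 32000})
print(cache["p1"])  # {'eligible': True, 'subsidy_usd': 1800}
```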

Security as an add-on, not a built-in.
The HealthCare.gov website needs the most personal financial information to make eligibility decisions. Therefore, it’s shocking that data security and privacy were not an integral and critical part of the system design, according to testimony by security experts to Congress. This is a glaring lapse.

We’re not going to make it, are we?
By April 2013, the project was well underway. An outside study by McKinsey examined the project and pointed out the likelihood of missing the October deadline, along with the unconventional approaches used on the project to date.

In an IT project with a fixed deadline, there is almost always a moment of reckoning when the odds clearly begin to favor missing it.

The HealthCare.gov project broke at least three IT project management best practice rules: overloading the roster, succumbing to threatening incentives, and ignoring human resistance to being cornered.

Let’s throw more people at the project.
More labor was thrown at the project both before and after October 1st. That slowed the project down even more. An IT management mentor explained it to me early in my career this way: “If you need a baby in a month, nine women can’t get that job done. Not possible. Try something different, like kidnapping.” The principle is enshrined in Brooks’ Law: “adding manpower to a late software project makes it later.”
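
The arithmetic behind Brooks’ Law is worth a glance. Each added person must coordinate with everyone already there, so pairwise communication paths grow quadratically while capacity grows only linearly; a quick sketch:

```python
# Pairwise communication paths on a team of n people: n(n-1)/2.
# Coordination overhead grows far faster than headcount.

def comm_paths(n: int) -> int:
    return n * (n - 1) // 2

for team in (5, 10, 25, 50):
    print(f"{team:>3} people -> {comm_paths(team):>5} paths")
# 5 -> 10, 10 -> 45, 25 -> 300, 50 -> 1225
```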

“Failure is not an option!”
When managers yell “Failure is not an option!”, is it any surprise that project management reporting immediately becomes a meaningless exercise? The managers were flogging the troops to make the deadline. So the troops reported all sorts of progress that hadn’t actually happened. Passive resistance is real.

It therefore comes as no surprise when managers post-launch “concluded that some of the people working in the trenches on the website were not forthcoming about the problems.” It’s a fairytale world where nobody has ever read the real world depicted in Scott Adams’ Dilbert comic strip.

The project passed six technology reviews, and CMS told Congress all was well. CMS IT management approved the green-light schedule status on the website. It’s still online here. Completely inaccurate.

“We’ll just all work 24/7 until we’re over the hump.”
As I write this a week ahead of the December 1 “drop-dead date 2.0,” it’s hard to fathom how many weeks the project teams have been working late nights and weekends. Soldiers are kept on the front line for only days or weeks, except in the most dire circumstances. The military knows from experience that troops need R&R. So do IT troops.

“Let’s turn it on all at once.”
Sketchy project progress. Testing done at the integration level; no time left to do system testing or performance testing. It’s September 30, 2013. The decision is made: “Let’s turn it on all at once.”

Turning the whole system on at once is known as a “light-switch conversion”. You flip the switch and it’s on. In the case of HealthCare.gov, the circuit breaker blew, and that has pretty much been the case since day one. Now what to do?

“Where’s the problem at?”
From the moment the website was turned on, the system throughput — enrollments per day — was somewhere between none and awful. We’ve all heard the stories of more than a million abandoned registrations and a pitiful number of successful enrollments. Where were the bottlenecks? The project team did not know.

They didn’t know because there was practically no internal instrumentation in the software. We know this because the tech SWAT team brought in after October 1st said so.
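
Instrumentation does not have to be elaborate. A few timers around each pipeline step, as in this hypothetical sketch, turn “where is the bottleneck?” from a guess into a report; the step names are invented.

```python
import time
from contextlib import contextmanager

timings: dict = {}

@contextmanager
def timed(step: str):
    # Accumulate wall-clock time per named step.
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[step] = timings.get(step, 0.0) + time.perf_counter() - start

with timed("identity_check"):
    time.sleep(0.05)  # stand-in for the real agency call
with timed("plan_lookup"):
    time.sleep(0.02)  # stand-in for the real database query

for step, seconds in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{step:15s} {seconds * 1000:7.1f} ms")
```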

What happens next?
The next drop-dead date is December 1. There is no public report that the system has reached its 60,000-simultaneous-user design goal, nor that the “critical problem list” has been solved.
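
For a sense of scale, Little’s Law (concurrency = throughput × time in system) converts that concurrency goal into throughput. The 30-minute average session below is my assumption, not a published figure:

```python
# Little's Law: concurrent_users = completions_per_second * session_seconds
concurrent_users = 60_000          # design goal cited above
avg_session_seconds = 30 * 60      # assumed average session length

completions_per_second = concurrent_users / avg_session_seconds
print(f"{completions_per_second:.0f} completions/second")  # ~33
print(f"{completions_per_second * 86_400:,.0f} per day")   # ~2,880,000
```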

All the body language and weasel-wording suggest the website will support more simultaneous users than at launch two months ago. Do not be surprised if a press release admitting failure is buried in the Thanksgiving holiday rush.

To make the exchange insurance process work, a couple of million (or more, when you include cancelled policies that must be replaced) enrollments need to take place in December. If that’s not possible, there needs to be a clock reset on the law’s implementation. The pressure to do so will be irresistible.

There is no assurance that the enrollments made to date supply accurate data. Insurance companies are reviewing each enrollment manually. That process is not scalable.

It was sobering to hear Congressional testimony this week that 30%-40% of the project coding had yet to be completed. Essentially, the backend financial system is not done or tested. That system reconciles customer policy payments and government subsidies, and makes payments to insurers. If the insurer does not have a payment, you’re not a customer. That makes the remaining 30% of the system yet another “failure is not an option” deadline: get it working correctly by Christmas.

There is no backup plan. Telephone and paper enrollments all get entered into the same HealthCare.gov website.

There is a high probability of a successful attack. One thorough security hack and public confidence will dissolve. (There’s a difference between “successful” and “thorough,” which leaves a lot of room for spin.)

If individual healthcare insurance is 5% of the market, and we’ve had all these problems, what happens next year when the other 95% is subjected to ACA regulations? No one knows.

Comment here or tweet me @peterskastner

The author has ten years’ experience as a programmer, analyst, and group project manager for a systems integrator; established government marketing programs at two companies; has over twenty years’ experience in IT consulting and market research; and has served as an expert witness regarding failed computer projects.

HealthCare.gov improvements are in the works

IT Industry Hopes for Q4 Holiday Magic

I am floored that almost all of 2013’s new tech products are getting to market in the fourth quarter. For the most part, the other three quarters of the year were not wasted so much as not used to smooth supply and demand. What is to be done?

2013 products arrive in Q4
Here are some of the data points that led me to conclude 2013 is one backend-loaded product year:

  • Data Center: Xeon E3-1200 v3 single-socket chips based on the Haswell architecture started shipping this month. Servers follow next quarter. Xeon E5 dual-socket chips based on Ivy Bridge announced and anticipated in shipping servers in Q4. New Avoton and Rangely Atom chips for micro-servers and storage/comms are announced and anticipated in product in Q4.
  • PCs: my channel checks show 2013 Gen 4 Core (Haswell) chips in about 10% of SKUs at retail, mostly quad-core. Dual-core chips are now arriving and we’ll see lower-end Haswell notebooks and desktops arriving imminently. Apple, for instance, launched its Haswell-based 2013 iMac all-in-ones September 24th. But note the 2013 Mac Pro announced in June has not shipped and the new MacBooks are missing in action.
  • Tablets: Intel’s Bay Trail Atom chips announced in June are now shipping. They’ll be married to Android or Windows 8.1, which ships in late October. Apple’s 2013 iPad products have not been announced. Android tabs this year have mostly seen software updates, not significant hardware changes.
  • Phones: Apple’s new phones started selling this week. The 5C is last year’s product with a cost-reduced plastic case. The iPhone 5S is the hot product. Unless you stood all day in line last weekend, you’ll be getting your ordered phone … in Q4. Intel’s Merrifield Atom chips for smartphones, announced in June, have yet to be launched. I’m thinking Merrifield gets the spotlight at the early January ’14 CES show.

How did we get so backend loaded?
I don’t think an economics degree is needed to explain what has happened. The phenomenal unit growth over the past decade in personal computers, including mobile devices, has squarely placed the industry under the forces of global macro-economics. The recession in Europe, pull-back in emerging countries led by China, and slow growth in the USA all contribute to a sub-par global economy. Unit volume growth rates have fallen.

The IT industry has reacted by slowing new product introductions in order to sell more of the existing products, which reduces the per-unit cost of R&D and overhead. And increases profits.

Unfortunately, products are typically built to a forecast. The forecast for 2012-2013 was higher than reality. More product was built than planned or sold. There are warehouses full of last year’s technology.

The best laugh I’ve gotten in the past year from industry executives is to suggest that “I know a guy who knows a guy in New Jersey who could maybe arrange a warehouse fire.” After about a second of mental arithmetic, I usually get a broad smile back and a response like “Hypothetically, that would certainly be very helpful.” (Industry execs must think I routinely wear a wire.)

So, with warehouses full of product which will depreciate dramatically upon new technology announcements, the industry has said “Give us more time to unload the warehouses.”

Meanwhile, getting the new base technology out the door on schedule is getting harder, not easier. Semiconductor fabrication, new OS releases, new sensors and drivers, etc. all contribute to friction in the product development schedule. But flaws are unacceptable because of the replacement costs. For example, if a computing flaw were found in Apple’s new iOS 7, which shipped five days ago, Apple would have to fix the install base of over 100 million devices and climbing — and deal with class-action lawsuits and reputation damage; costs over $1 billion are the starting point.

In short, the industry has slowed its cadence over the past several years to the point where all the sizzle in the market with this year’s products happens at the year-end holidays. (Glad I’m not a Wall Street analyst.)

What happens next?
The warehouses will still be stuffed entering 2014. But there will be less 2012 tech on those shelves, now replaced by 2013 tech.

Marching soldiers are taught that when they get out of step, they skip once and get back in cadence.

The ideal consumer cadence for the IT industry has products shipping in Q2 and fully ramped by mid-Q3; that’s in time for the back-to-school major selling season, second only to the holidays. The data center cadence is more centered on a two-year cycle, while enterprise PC buying prefers predictability.

Consumer tech in 2014 broadly moves to a smaller process node and doubles up to quad-cores. Competitively, Intel is muscling its way into tablets and smartphones. The A7 processor in the new Apple iPhone 5S is Apple’s first shot in response. Intel will come back with 14nm Atoms in 2014, and Apple will have an A8.

Notebooks will see a full generation of innovation as Intel delivers 14nm chips on an efficiency path toward threshold voltages — as low as possible — that deliver outstanding battery life. A variation on the same tech gets to Atom by the 2014 holidays.

The biggest visible product changes will be in form-factors, as two-in-one notebooks in many designs compete with tablets in many sizes. The risk-averse product manufacturers (who own that product in the warehouses) have to innovate or die, macro-economic conditions be damned. Dell comes to mind.

On the software side, Apple’s iOS 7 looks and acts a lot more like Android than ever before. Who would have guessed that? Microsoft tries again with Windows version 8.1.

Consumer buyers will be information-hosed with more changes than they have seen in years, making decision-making harder.

Intel has been very cagey about what 2014 brings to desktops; another year of Haswell refreshes before a 2015 new architecture is entirely possible. Otherwise, traditional beige boxes are being replaced with all-in-ones and innovative small form-factor machines.

The data center is in step, and a skip is unnecessary. The 2014 market battle will answer the question: what place do micro-servers have in the data center? However, there is too much server-supplier capacity chasing an increasingly commoditized data center. Reports have IBM selling off its server business, and Dell going private to focus on the long term.

The bright spot is that tech products of all classes seem to wear out after about 4-5 years, demanding replacement. Anyone still have an iPhone 3G?

The industry is likely to keep stretching its cycles until global macro-economic conditions improve and demand catches up with more of the supply. But moving product availability back even two months in the calendar would improve new-product flow-through by catching the back-to-school season.

Catch me on Twitter @peterskastner


Google As “The Cross-Platform Apps Company”

A beta version of Google’s Chrome Browser now supports Chrome App Launcher. This opens up the Chrome Store apps to Windows, Linux, and Mac OS desktops plus Google Android and Apple iOS mobile phones and tablets. Not to mention Google’s Chrome OS. Cross-platform is good, users say, because they increasingly recognize the utility of apps and their data across the devices in their lives.

Google’s Chrome App Launcher

Common apps running on a familiar user interface and operating system across a wide variety of hardware platforms is an idea that crops up frequently in the history of the computer industry. Unix and Windows NT come immediately to mind. Google is apparently bringing the cross-platform idea back into play.

The Chrome browser runs on Android, Windows, Linux, and Mac OS, and has more recently appeared on iOS. Bookmarks, tabs, and settings are synchronized in Google’s cloud, including Drive storage, and are available on any device at any time. Chrome apps add much more than typical browser extensions: they are real apps, albeit with cloud and local data. Docs, Sheets, and Slides are the functional equivalents of Word, Excel, and PowerPoint in the Microsoft universe, and of Pages, Numbers, and Keynote in the Apple universe.

Chrome apps plus the already cross-platform Chrome browser give Google a wider breadth of platforms than the competition. As more data and usage is moved to the cloud (e.g., Office365), the benefits will become more apparent to cloud-migration users.

Perhaps my personal journey is illustrative. Like many professional users, I’ve followed Microsoft’s Office apps for generations. But over the past decade — Vista comes to mind — I started using a Mac. And I still have PCs. However, I never invested heavily in the Apple iWork office suite, using it mostly for Microsoft-compatible import and export or, lately, to make cross-platform PDFs of finished documents or presentations. I have expertise and a software investment in Microsoft PC office apps and have no foreseeable intention of moving to Microsoft Office 365.

Since more of my consumption and production is happening on tablets and even smartphones, I’m a good candidate to drop Apple iWork and move to Google apps. These appear on the Mac desktop and launch just like Mac apps. Or Windows apps.

Moreover, the mobile apps I use from the Chrome Store are all there too: WorkFlowy, TweetDeck, QuickBooks, and Evernote. It’s not just cloud office.

Let’s leave aside the issue of whether your data is secure in the cloud. That applies to all apps everywhere, and is worth pondering another day.

Being able to run a familiar, common set of apps across all the major hardware and OS platforms, and over time, is a valuable competitive advantage.

I don’t see the technology industry yet recognizing that Google is quietly setting up to be the only supplier that can run the same apps on any broadly used platform.

Follow me on Twitter @peterskastner. Your comments are invited.

POWER to the People: IBM is Too Little, Too Late

“On August 6, Google, IBM, Mellanox, NVIDIA and Tyan today announced plans to form the OpenPOWER Consortium – an open development alliance based on IBM’s POWER microprocessor architecture. The Consortium intends to build advanced server, networking, storage and GPU-acceleration technology aimed at delivering more choice, control and flexibility to developers of next-generation, hyperscale and cloud data centers.”

IBM Hardware Is Not Carrying Its Weight
As the last computer manufacturer with its own silicon fab, IBM has a financial dilemma. The cost of silicon fab investments is increasing. Hardware revenues are declining. There are fewer new Z-series mainframes and POWER-based midrange computers on which to allocate hardware R&D, product development, fab capex, and other amortized costs. POWER revenues were down 25% in the latest quarter. Bloomberg reports furloughs of the hardware staff this month in an effort to cut costs.

The cloud-based future data center is full of Intel Xeon-based servers, as practiced by Google, Amazon, Facebook et al. But margins on Intel-architecture servers — IBM’s instantiation is the X Series — are eroding. Widely believed rumors earlier this year had IBM selling off its X Series business to Lenovo, as IBM sold off its PC business in 2005.

Clearly, the IBM hardware business is the subject of much ongoing discussion in Armonk, NY.

The OpenPOWER Consortium is a Strategic Mistake
Our view is that IBM has made a strategic mistake with this announcement by admitting proprietary defeat and opening POWER up to an open-source consortium. The signal IBM is sending is that it is no longer totally committed to the long-term future of its mainframe and POWER hardware. The sensitive ears of IBM’s global data center customers will pick this message up and, over time, accelerate plans to migrate off of IBM hardware and software.

Proprietary hardware and software business success depends a great deal on customer trust — more than is commonly assumed. Customers want a long-term planning horizon in order to keep investing in IBM, which is not the lowest-cost solution. When trust is broken, a hardware business can crash precipitously. One such example is Prime Computer, a 1980s Massachusetts darling that was acquired, dropped plans for future processors, and watched its installed base decline at a fifty-percent annual rate. On the other hand, H-P keeps Digital Equipment and Tandem applications going to this day.

By throwing doubt on its future hardware business horizon, IBM risks its entire business model. Yes, that is a far-fetched statement but worth considering: the IBM services and software business is built around supporting, first and foremost, IBM hardware. Lose proprietary hardware customers, and services and high-margin software business will decline.

So, we think IBM is risking a lot by stirring up its customer base in return for a few million dollars in POWER consortium licensing revenue.

What About Google?
To see how this deal could turn even worse for IBM, let’s look at the motives of the headline consortium member, Google.

First, IBM just gave Google the “Amdahl coffee mug”. In the mainframe heyday of the 1970s, it was a common sales tactic for Amdahl, a mainframe clone company in fierce competition with IBM, to leave a coffee mug with the CIO. Properly placed on a desk, it sent the message to the IBM sales team to drop prices because there was competition for the order. A POWER mug — backed by open POWER servers — will send a pricing signal to Intel, which sells thousands of Xeon chips directly to Google. That action won’t budge the needle much today.

POWER servers are most likely to appear in Open Compute form, as blades in an open-hardware rack-tray. These are the cost-reduced server architectures we see sucking the margin out of the entire server industry. Gas on the fire of that trend.

And we don’t see Google needing to build its own Tier-3 backend database servers, a common role for POWER servers. However, Google customizing POWER chips with NVIDIA GPU technology for some distant product is believable. For example, we’re puzzling over how Google will reduce the $85,000 technology cost of its driverless automobile to mass-market levels, and the consortium could become part of that solution.

Open POWER Software Too?
IBM is emphatically not throwing the POWER operating systems (i.e., AIX Unix and OS/400) and systems software into the open consortium. That would give away the IBM family jewels. So the open-source hardware folks will quickly turn to Linux on POWER. Given a choice, buyers will turn to open-source — that is, free or lower-cost — equivalents of IBM system software. We see little software-revenue upside to IBM’s POWER consortium move. Nor services, either.

Fortunately, IBM did not suggest that POWER licensing would extend to the fast-growing mobile world of tablets and smartphones, because that would be a bridge way too far. IBM may staunch some of the embedded-POWER chip business lost to ARM’s customers and Intel in recent years by licensing customizable designs à la ARM Holdings.

Thoughts and Observations
In conclusion, we see nothing good happening to IBM’s bottom line as a result of the OpenPOWER Consortium announcement. And if it wasn’t about the bottom line, why risk customer trust in IBM’s long-term hardware platform commitments? The revenue from POWER licensing will not come close to compensating for the weakness that IBM displays with this consortium strategy.

I ask this without drama or bombast: can we now see the dim horizon where IBM is no longer a major player in the computer hardware business? That’s a huge question which until now has never been asked nor needed to be asked. Moreover, no IBM hardware products would mean no IBM fab is needed.

The real implications are about IBM’s declining semiconductor business. POWER (including embedded POWER) is a volume product for IBM Microelectronics, along with current-generation video game chips. The video game business dries up by year-end as Sony and Microsoft introduce the next-generation consoles, sans IBM content. POWER licensing through the OpenPOWER Consortium might generate some fab business for the East Fishkill, NY IBM fab, but that business could also go to GlobalFoundries (GloFo) or Taiwan Semiconductor (TSMC). Where’s the chip volume going to come from?

IBM will not be able to keep profitably investing in cutting-edge semiconductor fabs if it does not have the fab volume needed to amortize costs. Simple economies of scale. But note that IBM fab technology has been of enormous help to GloFo and TSMC in getting to recent semiconductor technology nodes. Absent IBM’s help, that progress would have been delayed.
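
The amortization arithmetic is easy to illustrate. The numbers below are hypothetical, chosen only to show how quickly per-chip cost falls as volume rises:

```python
# Fixed fab investment spread over unit volume; all figures invented.
fixed_fab_cost = 4_000_000_000   # assumed annual capex + R&D, USD
variable_cost_per_chip = 40      # assumed wafer/test/package cost, USD

for volume in (1_000_000, 10_000_000, 100_000_000):
    per_chip = fixed_fab_cost / volume + variable_cost_per_chip
    print(f"{volume:>11,} chips/year -> ${per_chip:,.0f} per chip")
# 1M -> $4,040; 10M -> $440; 100M -> $80
```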

Any move by IBM to cut expenses by slowing fab technology investments will have a cascading negative impact on global merchant semiconductor fab innovation, hurting, for example, the ARM chip ecosystem. Is the canary still singing in the IBM semiconductor fab?

Your comments and feedback are invited.

Follow @PeterSKastner on Twitter

IBM POWER Linux Server

Apple’s Q2-2013: Q4 Anticipation

I’m on the road but wanted to update you on Apple’s second quarter.  Revenue was flat and profits were down compared to last year, while iPhone sales were up, and iPad and Mac sales were down. I expect the current third quarter to be constrained by anticipation of expected product announcements in September. Then, product supply issues will be unable to fully meet Q4 holiday demand for iPhone and iPad.

It sure looks like Apple has managed to compress a year’s worth of opportunities into three or four months. Think how much smoother things might be if product came forth across the entire twelve months of the year.

The text below was supplied by Apple PR. While I cannot vouch for its accuracy, I have no reason at all to dispute it. It’s a useful condensation of the numbers.

This afternoon Apple announced third quarter results, including record June quarter iPhone sales and our highest-ever Education revenue. You can find our earnings press release here, and a replay of the call with Tim Cook and Peter Oppenheimer is available here.
Overall:
– Apple reported quarterly revenue of $35.3 billion and net profit of $6.9 billion, compared to $35 billion and $8.8 billion, respectively, a year ago
– Gross margin was 36.9%, compared with 42.8% in the year-ago quarter
– International sales accounted for 57% of total quarterly revenue
– Apple generated $7.8 billion in cash and has returned $18.8 billion in cash to shareholders through dividends and share repurchases
iPhone:
– Apple sold 31.2 million iPhones, up from 26 million in the year-ago quarter
– iPhone leads in customer satisfaction and loyalty, according to numerous third-party research firms, including J.D. Power & Associates, ChangeWave and Kantar
– Apple reduced iPhone inventory by 600,000 units in the quarter
– iPhone remains strong in the enterprise, and has captured 62.5% of the US commercial market, according to IDC
iPad:
– Apple sold 14.6 million iPads in the quarter, compared with 17 million in the year-ago quarter
– iPad faced a tough June comparison, as the first iPad with a Retina display was launched in the year-ago quarter and we ramped up inventory
– iPad channel inventory was reduced by 700,000 units, so sell-through was down just 3% year-over-year
– iPad usage share remains incredibly high, and grew to 84.3% last month, according to Chitika
Mac:
– Apple sold 3.8 million Macs, down from 4 million in the year-ago quarter
– The updated MacBook Air line was launched at WWDC in June, making it available for just three weeks of the quarter.
– Mac sales were down 7% but again outperformed the market, which contracted 11%, according to IDC
– We look forward to the launch of OS X Mavericks this fall and of the all-new Mac Pro later this year
Music/Services:
– iTunes, software and services together generated $4 billion in quarterly revenue
– We now have more than 320 million iCloud accounts and 240 million Game Center accounts
– There are more than 900,000 apps in the App Store, with more than 375,000 designed specifically for iPad
– Customers have downloaded more than 50 billion apps
– Apple has paid more than $11 billion to developers, half of which was earned in the last four quarters
Education:
– Our education division experienced its highest ever quarterly revenue
– 1.1 million iPads were sold in education, and the Mac experienced strong sales as well
– Maine’s statewide education technology program saw 94% of the state’s elementary and high schools choosing Apple products
– The first phase of Los Angeles Unified School District’s plan to provide 660,000 students with a tablet was approved, resulting in an initial $30 million iPad sale
Retail:
– Apple retail stores generated $4.1 billion in revenue, about equal to a year ago
– iPhone saw strong sales growth in our own retail stores
– MacBook Air had its most successful Retail launch to date
– We opened six new stores across five countries and now have 408 stores, 156 outside the US
Apple iPads

Peak Technology or Technology Peak?

The theory of peak oil — the point at which the Earth’s oil production begins an irreversible decline — was a hot and much-debated topic last decade. There are lots of signs that we are at a technology demand peak. Is this peak permanent, or how will we get past it?

The last-decade argument that oil production had permanently peaked proved to be laughably incorrect. Hydraulic fracturing (“fracking”) technology developed in the United States changed the slope of the oil production curve upwards. This analyst has no intention of becoming a laughingstock by suggesting that digital technology innovation has peaked. Far from it. However, few things in nature are a straight line; it certainly appears that digital technology adoption — demand — has slowed. We are in a trough and can’t foresee the other side.

One good place to look for demand forecasts is the stock market.

Smart Phones and Tablets
Last month, gadget profit-leaders Samsung and Apple both took hits based on slower growth forecasts. “Pretty much everyone who can afford a smartphone or tablet has one, so where does the profit growth come from?” was the story line. Good question.

This month, AT&T and T-Mobile announced they would lease smartphones to customers instead of selling them outright with a carrier discount. The phones and tablets coming off lease will be re-sold into the burgeoning used-gadget market. It’s now too easy to get new-enough gadget technology in the used market. After all, last year’s hardware can still run this year’s free, new software upgrade.

On the surface, it appears that the global market for $600 smartphones and tablets is at or close to saturation — a peak.

Desktop and Notebook PCs
The stock market is not treating traditional technology makers very well. H-P is coming back from a near-death experience. Its stock is half what it was two years ago. Dell wants to go private so it can restructure and deal with market forces that are crushing margins and profits. Even staid and predictable IBM has lost its mojo over the past five quarters. Microsoft missed.

These technology makers are dealing with PCs, the data center, and services. They are not major players in the smartphone/gadget market. Their focus is on doing what they used to do more efficiently. That strategy is not working.

The desktop and notebook PC markets are almost all replacement units in developed countries. Macro-economics has dramatically slowed emerging market growth in formerly hot places like Brazil, Russia, India, and China (BRIC). The new customers are being added more slowly and at higher costs, and existing customers have increasingly voted to not upgrade as frequently. My 2008 Apple MacBook Air, cutting edge and quite expensive at the time, is still adequate for road trips. My Sandy Bridge Generation-1 Ultrabook has adequate battery life. There’s no compelling reason, most buyers tell us, to accelerate the PC replacement cycle.

Well, one temporary accelerator is the end of Windows XP support next year. With auditors and consultants screaming about liability issues, non-profits and governments are rolling in new PCs to replace their ten-year-old kit. Thank goodness. But seriously, ten-year-old PCs have been getting the job done, as defined by user management.

Note also that a new PC likely means a user-training upgrade to Windows 8. Both consumers and businesses are avoiding this upgrade more or less like the plague. There is no swell of press optimism that Windows 8.1 this fall will be the trick. PC replacement is a pain already, so few want to jump on an OS generation change as well.

Data Center
The data center market shows some points of light. Public cloud data centers run by the big boys like Apple, Google, Facebook, and Amazon are growing like gangbusters. So is High Performance Computing, where ever-more-complex models consume as many teraflops as one can afford to throw at the problem. Recent press reports suggest that “national security” is a growing technology consumer. [understatement]

However, enterprise data centers, driven by cautious corporate management, are growing more slowly than five years ago; this market outsizes the points of light. Moreover, the latest generation of server technology really does support more users and apps than the gear being replaced. With headcount down and fewer new enterprise apps, fewer racks are now getting the computing workload done. (Storage, of course, is growing exponentially.) We also expect a growing trend towards “open computing” servers, a trend that will suck hardware margin and services revenue from the big server-technology makers.

Navigating From the Trough
So, mobile gadgets, traditional PCs, and the data center — the three legs of the digital technology stool — are all growing more slowly than in the recent past. This is the “technology demand peak” as we see it. We are presently past the peak and into the trough.

How deep is the trough and how long will it last? LOL. If we knew that, we could comfortably retire! Really, there are roughly a couple of trillion dollars of market cap at stake here. If digital tech market growth remains anemic beyond another twelve months, then there will be too many tech players and too few chairs when the music stops. Any market observer can see that.

Our own view is that it will take a number of technology innovations that will propel replacement demand and drive new markets. The solution is new tech, not better-faster-smaller old tech. Where’s the digital equivalent of fracking? (Actually, fracking would not be possible without a lot of newly invented, computer-based technology.)

First, the global macro-economic slowdown is likely to resolve itself positively, perhaps soon. We don’t buy the global depression arguments. There are billions of potential middle-class new computer consumers and the data center backend to support them.

Next, mobile gadgets and PCs are on the verge of exciting new user interfaces. Things like holographic 3D displays that put you in the picture, and keyboards projected on any flat surface. Conference-room projection capabilities in every smartphone. New user interfaces, shared with PCs and notebooks, that are based on perceptual computing — the (wo)man-machine interface that recognizes voice, gestures, and eye movement, for starters.

Big data and the cloud are data-center conversation pieces. But these technologies are really toddlers, at best. Data-sifting technologies like the grandson of Hadoop will enable more real-time enterprise intelligence and wisdom. HPC has limits only of money available to invest. Traditional data centers will re-plumb with faster I/O, distributed computing, and the scale-up and scale-down capacity of an electric utility — while needing less from the electrical utility.

We don’t have all the answers, but we are convinced it will take an industry kick in the pants to get us to the next peak. More of the same is not a recipe for a solution. We are in a temporary downturn, not past a permanent technology peak.

Your thoughts and comments are welcome.

Photo Credit: Eugene Richards