IRS Loses Lois Lerner Emails

The IRS told Congress yesterday that two years of emails on Tax Exempt Organizations department manager Lois Lerner’s desktop were irretrievably lost due to a hard drive crash in 2011. Since this is a technology blog, let’s ask: how could this event happen?

The Internal Revenue Service has 90,000 employees working in a complex financial-services organization. Like its private-sector counterparts, the IRS has a sophisticated Information Technology organization because the IRS mission is implementing the tax laws of the United States. The IRS is the epitome of a paper-pushing organization, and by 2011 paper-pushing was done by email.

1. The IRS first installed Microsoft’s enterprise email products, Exchange in the data center and Outlook on client desktops, in 1998, about the same time as many Fortune 500 organizations. By 2011, the IRS had over a decade of operational experience.

2. Hard drives are the weak link in IT installations. These mechanical devices fail at a rate of about 5% a year. With 90,000 employees, that works out to an average of 4,500 failures a year, or roughly 18 per work day. The IRS IT staff is very familiar with the consequences of user-PC hard drive failures. Data center storage management is a page from the same book. (A quick failure-rate sketch follows this list.)

3. The IRS reported to Congress that senior executive Lerner’s hard drive failed, that it was forensically tested, and that nothing could be recovered from it. As a result, the IRS claims, there is no record of the emails sent or received from Ms. Lerner’s computer. The thousands of emails recovered to date were extracted from the mailboxes of IRS senders and recipients, not from Lerner’s files. There is no record of emails to other federal departments, to outside organizations, or to personal accounts.

4. The implication is that the Lerner email history resided only on her computer. There is no other IT explanation. Yet Microsoft Exchange in the data center stores copies of all email chains on multiple hard drives across multiple, synchronized email servers. That’s the way all enterprise email systems have to work. So the facts as stated make no IT sense.
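
To put numbers on the failure-rate arithmetic in point 2, here is a minimal sketch. The one-PC-per-employee and 250-work-day figures are my assumptions; the 5% rate is the industry rule of thumb cited above:

```python
# Back-of-the-envelope: expected PC hard drive failures at the IRS.
# Assumptions (illustrative): one PC per employee, a 5% annual drive
# failure rate, and ~250 work days per year.
EMPLOYEES = 90_000
ANNUAL_FAILURE_RATE = 0.05
WORK_DAYS_PER_YEAR = 250

failures_per_year = EMPLOYEES * ANNUAL_FAILURE_RATE
failures_per_day = failures_per_year / WORK_DAYS_PER_YEAR

print(f"Expected failures per year: {failures_per_year:,.0f}")    # 4,500
print(f"Expected failures per work day: {failures_per_day:.0f}")  # 18
```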

But let’s look at the implications of a strategy where the Lerner email history resided only on her computer and the hard drive failed so completely that nothing could be recovered:

  • Where are the Lerner PC backups? Given a 5% annual failure rate, industry-wide PC backup strategies are as old as centralized email. There should be Lerner PC backups made by IRS IT. Leave it up to the user to make backups? No organization the size of the IRS allows that, for all the obvious reasons that come to mind, starting with the fact that it doesn’t work in practice.
  • How could Lois Lerner do her work? The hard drive was lost and there were no PC backups. Besides losing two years’ worth of emails, GS-15 department head Lerner would also have lost all the data of a digital business life: calendar, contacts, personnel notes, work-in-process plans, schedules, meeting notes, reviews, budget spreadsheets, and official IRS rulings.
    It is inconceivable that a modern executive could be stripped of all her business data and not face-plant within a week. Could you? Not me. Nobody has paper backup for everything anymore. Your business smartphone backs up to your PC.
  • The Exchange servers log every email coming into and going out of the IRS. Did the whole set of IRS backup tapes fail in an unreported catastrophe? Only that primary (but undiscovered) failure would make the routine failure of Lerner’s PC unrecoverable. (A back-of-the-envelope probability sketch follows this list.)
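
Just how unlikely is total loss under standard practice? Every rate below is an illustrative assumption, not an IRS figure; the point is that independent safeguards multiply:

```python
# Probability that one user's email history is truly unrecoverable,
# assuming the safeguards fail independently. All rates are guesses
# for illustration, not IRS statistics.
p_drive_dies_unrecoverably = 0.05  # PC drive dies in a year, data unrecoverable
p_pc_backup_missing = 0.05         # IT's PC backup absent or corrupt
p_server_copies_gone = 0.01        # all synchronized Exchange copies lost
p_backup_tapes_gone = 0.01         # server tape backups also lost

p_total_loss = (p_drive_dies_unrecoverably * p_pc_backup_missing
                * p_server_copies_gone * p_backup_tapes_gone)
print(f"P(total loss) = {p_total_loss:.1e}")  # 2.5e-07, about 1 in 4 million
```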

I cannot think of an acceptable reason for the unexplained yet unrecoverable loss of the data on Lerner’s PC under the usual practices of every IT organization I have worked with over the decades. Which leaves only two alternatives: a much clearer explanation from IRS IT professionals of how these events could happen, or something nefarious is going on.

Follow me on Twitter @peterskastner

The author’s experience with federal email and records management began with the Ronald Reagan White House in 1982.

Email Inbox

Enterprise Computing Jumps on the Supply-Demand Curve

The traditional enterprise computing server suppliers are in an ever-faster game of musical chairs with cloud computing competitors. Recent cloud price cuts will accelerate enterprise adoption of the cloud, to the economic detriment of IBM, HP, and Oracle’s Sun.

Many IT executives sat down to a cup of coffee this morning with the Wall Street Journal opened to the Marketplace lede, “Price War Erupts in Cloud Services.” Cloud computing from the likes of Amazon, Google, and Microsoft is “changing the math for corporate executives who spend roughly $140 billion a year to buy computers, Internet cables, software and other gear for corporate-technology nerve centers.” This graphic begs the question:

50 Million Page View Web Site Costs

“Gee, maybe my data-center computing model for the company needs a strategic re-think?” And while there’s a very active consulting business by the usual business-transformation consulting suspects, the no-cost answer is: yes, cloud computing is a valid model that most enterprises and applications should move to over time.

This blog post, though, is not about the nuances of cloud computing today. Rather, we need to take a look at how the supply-demand curve for enterprise computing must impact the traditional enterprise server business — hard. (And yes, I am breaking a vow made during Economics 101 to never mention economics in polite company).

For over fifty years in IBM’s case, the traditional server companies, including HP and Sun, sold big iron, proprietary operating software and storage, and lots of services at high margins. In the past two decades, Intel’s mass-market silicon evolved into the Xeon family, which took away a large percentage of that proprietary “big iron”. Yet Intel specialist firms such as NCR and Sequent never could beat the Big Three server suppliers, who took on Xeon-based server lines of their own.

Cloud computing is sucking the profits out of the traditional server business. IBM is selling its Xeon business to Lenovo, and is likely to considerably reduce its hardware business. Oracle’s Sun business looks like a cash cow to this writer, with little innovation coming out of R&D. HP is in denial.

All the traditional server companies have cloud offerings, of course. But only IBM has jettisoned its own servers to go head-to-head with the bare-metal, do-it-yourself offerings from Amazon, Google, and lately Microsoft.

Price-war-driven lower cloud computing prices will only generate more demand for cloud computing. Google and Microsoft have other businesses that are very profitable; these two can run their cloud offerings lean and mean. (Amazon makes up for tiny margins with huge volume.) To recall that Economics 101 chart:

Supply-Demand Curve
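
For anyone who skipped that lecture, a toy linear model makes the point; all coefficients below are invented for illustration. A price war acts like a rightward shift of the supply curve: the equilibrium price falls and the quantity of cloud computing consumed rises.

```python
# Toy linear supply-demand model (illustrative coefficients only).
# Demand: quantity falls as price rises. Supply: quantity rises with price.
def equilibrium(demand_intercept, supply_intercept, slope_d=1.0, slope_s=1.0):
    # Solve: demand_intercept - slope_d * p == supply_intercept + slope_s * p
    price = (demand_intercept - supply_intercept) / (slope_d + slope_s)
    quantity = demand_intercept - slope_d * price
    return price, quantity

p0, q0 = equilibrium(demand_intercept=100, supply_intercept=20)
# A price war is effectively more supply offered at every price point.
p1, q1 = equilibrium(demand_intercept=100, supply_intercept=40)

print(f"Before price war: price={p0:.0f}, quantity={q0:.0f}")  # 40, 60
print(f"After price war:  price={p1:.0f}, quantity={q1:.0f}")  # 30, 70
```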

The strategic issue for IT executives (and traditional-supplier investors) is what happens over the next five years as lower server profits hollow out the traditional suppliers’ ability to innovate and deliver affordable hardware and software. Expect less support, and examine your application software stacks; you’ll want to make migration to a cloud implementation possible and economical. The book isn’t even written yet on cloud operations, backup, recovery, performance, and the other issues that are now well understood in your existing data centers.

Meanwhile, what are your users up to? Just as PCs sprouted without IT’s blessing a generation ago, cost-conscious (or IT-schedule-averse) users are likely playing with the cloud using your enterprise data. Secure? Regulatory requirements met? Lots to think about.

Follow me on Twitter @PeterSKastner

Why IBM Will Exit the X86 Server Business

With hardware profits almost non-existent, IBM’s server hardware strategy needs a hurry-up fix. Jettisoning the X86 business and its sales/marketing employees will free up much-needed cash flow. But the System z and Power series remain expensive to support.

Q4-2013 Was a Hardware Business Debacle
IBM’s systems and technology division (S&T), also known as hardware, saw sales fall 26%, as pre-tax earnings fell by $768 million to $200 million. As the press release says in grim, adjective-free prose:

Total systems revenues decreased 25 percent.  Revenues from System z mainframe server products decreased 37 percent compared with the year-ago period.  Total delivery of System z computing power, as measured in MIPS (millions of instructions per second), decreased 26 percent versus the prior year.  Revenues from Power Systems decreased 31 percent compared with the 2012 period.  Revenues from System x decreased 16 percent.  Revenues from System Storage decreased 13 percent.  Revenues from Microelectronics OEM decreased 33 percent.

With IBM pre-tax income of $7.0 billion, S&T’s $0.2 billion contribution represented a mere 2.9% of company pre-tax profits.

For the year 2013, S&T segment revenues were $14.4 billion, a decrease of 19 percent (down 18 percent, adjusting for currency). Corporate revenues for 2013 totaled $99.8 billion. S&T gross margins were down 3.5 points to 35.6%, compared to rising overall IBM margins of 48.6%.

IBM generated free cash flow of $15.0 billion, down approximately $3.2 billion year over year. A lot of that shortfall can be laid at the doorstep of the S&T division.

IBM’s hardware division is a declining business, falling from 21.3% of company revenues in 2007 to 14.4% in 2013, now with inadequate profits. Moreover, the S&T division requires a billion-dollar-plus annual R&D budget and bears the costs of IBM’s semiconductor fabs — on obviously declining unit volumes. S&T is not pulling its weight.
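
A quick sanity check of those ratios, using only the figures quoted above:

```python
# Sanity-check the S&T figures quoted from IBM's Q4-2013 release.
st_pretax = 0.2          # $B, S&T pre-tax earnings
ibm_pretax = 7.0         # $B, IBM pre-tax income
st_revenue_2013 = 14.4   # $B
ibm_revenue_2013 = 99.8  # $B

print(f"S&T share of pre-tax profit: {st_pretax / ibm_pretax:.1%}")              # 2.9%
print(f"S&T share of 2013 revenue:   {st_revenue_2013 / ibm_revenue_2013:.1%}")  # 14.4%
```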

Those are the problems driving a strategy to sell off the X86 commodity server portion of S&T.

The Hardware Market is Changing Rapidly
Last April, I argued emphatically that the whole of IBM was better off retaining the X86 business. IBM hardware, including X86, drives software and services revenues in other parts of IBM, and supports a robust partner community that services small and medium establishments too small for IBM direct sales to cover efficiently.

What’s happened since then is IBM’s acquisition of SoftLayer Technologies, a cloud “Infrastructure as a Service” supplier, which specializes in bare-metal X86 servers with options for using IBM’s Power servers. SoftLayer is now IBM’s cloud strategy instantiated.

I still believe killing off hardware choices for IBM customers will result in a declining IBM top line. But the financial situation outlined in the previous section begs for a look at IBM’s options.

The Corner Office View
The sale of IBM’s X86 business has the following pieces:

  • Generates cash from the sale
  • Allows a reduction in sales and marketing expenses such as X86 advertising and trade shows
  • Allows for a permanent reduction in staff in X86 R&D, marketing, and sales
  • Creates a multi-billion dollar software and service recurring revenue opportunity at SoftLayer.

Unlike a year ago, IBM’s X86 customers can be encouraged to move their X86 workloads to the SoftLayer cloud and rent the computing they require. No more fork-lift upgrades, data center floor-space, HVAC limits, and all the other considerations of running your own data center. Same high-quality IBM software available. Lots of work completed on cloud auditability and compliance, making SoftLayer attractive for large enterprise workloads.
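
To see why “rent, don’t own” can pencil out, here is a minimal rent-versus-own sketch over a four-year server life. Every dollar figure is an invented placeholder, not IBM or SoftLayer pricing:

```python
# Rent-vs-own sketch for one server over a 4-year life.
# All dollar figures are invented placeholders, not vendor pricing.
YEARS = 4

# Owning: capital cost plus annual overhead (floor space, power/HVAC, admin).
server_capex = 8_000
annual_overhead = 3_000
own_cost = server_capex + annual_overhead * YEARS

# Renting: monthly fee for a comparable cloud instance.
monthly_rent = 350
rent_cost = monthly_rent * 12 * YEARS

print(f"Own:  ${own_cost:,}")   # $20,000
print(f"Rent: ${rent_cost:,}")  # $16,800
```

Under these placeholders renting wins, and it also converts a capital expense into an operating expense; change the assumptions and the answer changes, which is exactly the analysis cloud pricing now forces on every workload.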

With some effort, the IBM partners can be incentivized to get their small-business customers into the cloud. “The corporate data center is so twentieth century.” This limits customer, channel, and revenue loss. It’s a viable cannibalization strategy.

Exiting the X86 server business, IBM no longer has to engineer, develop and qualify X86 servers to its very high standards, nor bear the costs of that quality. What replaces X Series X86-based customer products at SoftLayer can be built to lower cloud-quality standards — “if it breaks, reboot on another instance.” In short, IBM can squeeze costs at its own SoftLayer data centers by moving to commodity cloud servers it builds instead of using over-engineered and -differentiated X Series machines designed for customer data centers.

All indications are that IBM wants to get this done soon.

What About the Rest of S&T?
The three pieces of S&T are servers, storage, and microelectronics.

Microelectronics exists to lower the costs of fabricating the proprietary System z mainframe and Power Systems servers, which are still an enormously profitable ecosystem. IBM still has its own semiconductor fab, and partners with GlobalFoundries to share costs on semiconductor R&D.

The competitive pressures on System z mainframe and Power Systems servers come mostly from X86 servers of all sorts. IBM is not contemplating exiting the System z or Power hardware market. But it does have a declining margin problem and an inexorable workload trend that favors commodity X86 computing. Expect no immediate upheavals in the proprietary server segment.

Storage is an expected component in a system-level hardware sale. There are no commodity threats to IBM’s storage business, but there are options that include the cloud. Expect no immediate upheavals in the storage segment.

Nevertheless, an unbiased cost-cutter would take a hard look at exiting Microelectronics. That is, exiting the semiconductor fabrication business — revenues down 33% in Q4 to a run-rate of under $2 billion — and working with a fab partner on future System z and Power Systems server designs. Intel would fit that bill.

However, the likely IBM reaction to losing control of its key proprietary hardware semiconductor fabrication can be politely summed up as “over my dead, blue body.” But the numbers don’t lie: without the X86 business, z and Power have an additional fab-based financial burden to bear that is impossible to hide. Storage and Microelectronics can’t make it up. If S&T revenues continue to decline as they have for the past seven years, another server shoe must eventually drop.

[Update January 24, 2014: IBM announced a definitive agreement to sell its X86 business to Lenovo for $2.3 billion in cash and Lenovo stock.]

Follow me on Twitter @PeterSKastner

IBM X-series

IT Industry Hopes for Q4 Holiday Magic

I am floored that almost all of the new 2013 tech products get to market in the fourth quarter of the year. For the most part, the other three quarters of the year were not wasted so much as not used to smooth supply and demand. What is to be done?

2013 products arrive in Q4
Here are some of the data points I used to conclude that 2013 is one backend-loaded product year:

  • Data Center: Xeon E3-1200 v3 single-socket chips based on the Haswell architecture started shipping this month. Servers follow next quarter. Xeon E5 dual-socket chips based on Ivy Bridge announced and anticipated in shipping servers in Q4. New Avoton and Rangely Atom chips for micro-servers and storage/comms are announced and anticipated in product in Q4.
  • PCs: my channel checks show 2013 Gen 4 Core (Haswell) chips in about 10% of SKUs at retail, mostly quad-core. Dual-core chips are now arriving and we’ll see lower-end Haswell notebooks and desktops arriving imminently. Apple, for instance, launched its Haswell-based 2013 iMac all-in-ones September 24th. But note the 2013 Mac Pro announced in June has not shipped and the new MacBooks are missing in action.
  • Tablets: Intel’s Bay Trail Atom chips announced in June are now shipping. They’ll be married to Android or Windows 8.1, which ships in late October. Apple’s 2013 iPad products have not been announced. Android tabs this year have mostly seen software updates, not significant hardware changes.
  • Phones: Apple’s new phones started selling this week. The 5C is last year’s product with a cost-reduced plastic case. The iPhone 5S is the hot product. Unless you stood all day in line last weekend, you’ll be getting your ordered phone … in Q4. Intel’s Merrifield Atom chips for smartphones, announced in June, have yet to be launched. I’m thinking Merrifield gets the spotlight at the early January ’14 CES show.

How did we get so backend loaded?
I don’t think an economics degree is needed to explain what has happened. The phenomenal unit growth over the past decade in personal computers, including mobility, has squarely placed the industry under the forces of global macro-economics. The recession in Europe, pull-back in emerging countries led by China, and slow growth in the USA all contribute to a sub-par macro-economic global economy. Unit volume growth rates have fallen.

The IT industry has reacted by slowing new product introductions in order to sell more of the existing products, which reduces their per-unit R&D and overhead costs. And increases profits.

Unfortunately, products are typically built to a forecast. The forecast for 2012-2013 was higher than reality. More product was built than planned or sold. There are warehouses full of last year’s technology.

The best laugh I’ve gotten in the past year from industry executives is to suggest that “I know a guy who knows a guy in New Jersey who could maybe arrange a warehouse fire.” After about a second of mental arithmetic, I usually get a broad smile back and a response like “Hypothetically, that would certainly be very helpful.” (Industry execs must think I routinely wear a wire.)

So, with warehouses full of product which will depreciate dramatically upon new technology announcements, the industry has said “Give us more time to unload the warehouses.”

Meanwhile, getting the new base technology out the door on schedule is harder, not easier. Semiconductor fabrication, new OS releases, new sensors and drivers, etc. all contribute to friction in the product development schedule. But flaws are unacceptable because of the replacement costs. For example, if a computing flaw is found in Apple’s new iOS 7, which shipped five days ago, Apple will have to fix the install on over 100 million devices and climbing — and deal with class action lawsuits and reputation damage; costs over $1 billion are the starting point.
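
The arithmetic behind that billion-dollar starting point is simple; the per-device cost below is my assumption, not an Apple figure:

```python
# Rough cost of a must-fix flaw shipped to a huge installed base.
installed_base = 100_000_000  # devices running the flawed release
cost_per_device = 10          # $, assumed blended fix/support/legal cost

total_cost = installed_base * cost_per_device
print(f"${total_cost / 1e9:.0f} billion, and climbing")  # $1 billion
```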

In short, the industry has slowed its cadence over the past several years to the point where all the sizzle in the market with this year’s products happens at the year-end holidays. (Glad I’m not a Wall Street analyst.)

What happens next?
The warehouses will still be stuffed entering 2014. But there will be less 2012 tech on those shelves, now replaced by 2013 tech.

Marching soldiers are taught that when they get out of step, they skip once and get back in cadence.

The ideal consumer cadence for the IT industry has products shipping in Q2 and fully ramped by mid-Q3; that’s in time for the back-to-school major selling season, second only to the holidays. The data center cadence is more centered on a two-year cycle, while enterprise PC buying prefers predictability.

Consumer tech in 2014 broadly moves to a smaller process node and doubles up to quad-cores. Competitively, Intel is muscling its way into tablets and smartphones. The A7 processor in the new Apple iPhone 5S is Apple’s first shot in response. Intel will come back with 14nm Atoms in 2014, and Apple will have an A8.

Notebooks will see a full generation of innovation as Intel delivers 14nm chips on an efficiency path towards threshold voltages — as low as possible — that deliver outstanding battery life. A variation on the same tech gets to Atom by the 2014 holidays.

The biggest visible product changes will be in form-factors, as two-in-one notebooks in many designs compete with tablets in many sizes. The risk-averse product manufacturers (who own that product in the warehouses) have to innovate or die, macro-economic conditions be damned. Dell comes to mind.

On the software side, Apple’s iOS 7 looks and acts a lot more like Android than ever before. Who would have guessed that? Microsoft tries again with Windows version 8.1.

Consumer buyers will be information-hosed with more changes than they have seen in years, making decision-making harder.

Intel has been very cagey about what 2014 brings to desktops; another year of Haswell refreshes before a 2015 new architecture is entirely possible. Otherwise, traditional beige boxes are being replaced with all-in-ones and innovative small form-factor machines.

The data center is in step and a skip is unnecessary. The 2014 market battle will answer the question: what place do micro-servers have in the data center? However, there is too much server-supplier capacity chasing a more commodity datacenter. Reports have IBM selling off its server business, and Dell is going private to focus long-term.

The bright spot is that tech products of all classes seem to wear out after about 4-5 years, demanding replacement. Anyone still have an iPhone 3G?

The industry is likely to continue to dawdle its cycles until global macro-economic conditions improve and demand catches up with more of the supply. But moving product availability even two months earlier in the calendar would improve new-product flow-through by catching the back-to-school season.

Catch me on Twitter @peterskastner


POWER to the People: IBM is Too Little, Too Late

“On August 6, Google, IBM, Mellanox, NVIDIA and Tyan today announced plans to form the OpenPOWER Consortium – an open development alliance based on IBM’s POWER microprocessor architecture. The Consortium intends to build advanced server, networking, storage and GPU-acceleration technology aimed at delivering more choice, control and flexibility to developers of next-generation, hyperscale and cloud data centers.”

IBM Hardware Is Not Carrying Its Weight
As the last computer manufacturer with its own silicon fab, IBM has a financial dilemma. The cost of silicon fab investments is increasing. Hardware revenues are declining. There are fewer new Z-series mainframes and POWER-based midrange computers on which to allocate hardware R&D, product development, fab capex, and other amortized costs. POWER revenues were down 25% in the latest quarter. Bloomberg reports furloughs of the hardware staff this month in an effort to cut costs.

The cloud-based future data center is full of Intel Xeon-based servers, as practiced by Google, Amazon, Facebook et al. But margins on Intel-architecture servers — IBM’s instantiation is the X Series — are eroding. Widely believed rumors earlier this year had IBM selling off its X Series business to Lenovo, just as IBM spun off its PC business in 2005.

Clearly, the IBM hardware business is the subject of much ongoing discussion in Armonk, NY.

The OpenPOWER Consortium is a Strategic Mistake
Our view is that IBM has made a strategic mistake with this announcement by admitting proprietary defeat and opening POWER up to an open-source consortium. The signal IBM is sending is that it is no longer totally committed to the long-term future of its mainframe and POWER hardware. The sensitive ears of IBM’s global data center customers will pick this message up and, over time, accelerate plans to migrate off of IBM hardware and software.

Proprietary hardware and software business success depends a great deal on customer trust — more than is commonly assumed. Customers want a long term future planning horizon in order to continue investing in IBM, which is not the lowest-cost solution. When trust is broken, a hardware business can crash precipitously. One such example is Prime Computer, a 1980s Massachusetts darling that was acquired, dropped plans for future processors, and watched its installed base decline at a fifty-percent per annum rate. On the other hand, H-P keeps Digital Equipment and Tandem applications going to this day.

By throwing doubt on its future hardware business horizon, IBM risks its entire business model. Yes, that is a far-fetched statement but worth considering: the IBM services and software business is built around supporting, first and foremost, IBM hardware. Lose proprietary hardware customers, and services and high-margin software business will decline.

So, we think IBM is risking a lot by stirring up its customer base in return for a few million dollars in POWER consortium licensing revenue.

What About Google?
To see how this deal could turn even worse for IBM, let’s look at the motives of the headline consortium member, Google.

First, IBM just gave Google the “Amdahl coffee mug”. In the mainframe heyday of the 1970s, it was a common sales tactic for Amdahl, a mainframe clone company in fierce competition with IBM, to leave a coffee mug for the CIO. Properly placed on a desk, it sent the message to the IBM sales team to drop prices because there was competition for the order. A POWER mug — backed by open POWER servers — will send a pricing signal to Intel, which sells thousands of Xeon chips directly to Google. That action won’t move the needle much today.

POWER servers are most likely to appear in Open Compute form, as blades in an open-hardware rack-tray. These are the cost-reduced server architectures we see sucking the margin out of the entire server industry. Gas on the fire of that trend.

And we don’t see Google needing to build its own Tier-3 backend database servers, a common role for POWER servers. However, Google customizing POWER chips with nVidia GPU technology for some distant product is believable. For example, we’re puzzling over how Google will reduce the $85,000 technology cost of its driverless automobile to mass-market levels, and the consortium could become part of that solution.

Open POWER Software Too?
IBM is emphatically not throwing the POWER operating systems (i.e., AIX Unix and OS/400) and systems software into the open consortium. That would give away the IBM family jewels. So the open-source hardware folks will quickly turn to the Linux-on-POWER OSes. Given a choice, buyers will turn to open-source — that is, free or lower-cost — versions of IBM software equivalents for system software. We see little software-revenue upside to IBM’s POWER consortium move. Nor services, either.

Fortunately, IBM did not suggest that POWER licensing would extend to the fast-growing mobile world of tablets and smartphones, because that would be a bridge way too far. IBM may stanch some of the embedded POWER chip business lost to ARM’s customers and Intel in recent years by licensing customized designs à la ARM Holdings.

Thoughts and Observations
In conclusion, we see nothing good happening to IBM’s bottom line as a result of the OpenPOWER Consortium announcement. And if it wasn’t about the bottom line, why risk long-term customer trust in IBM’s long-term hardware platform commitments? The revenue from POWER licensing will not come close to compensating for the weakness that IBM displays with this consortium strategy.

I ask this without drama or bombast: can we now see the dim horizon where IBM is no longer a major player in the computer hardware business? That’s a huge question which until now has never been asked nor needed to be asked. Moreover, no IBM hardware products would mean no IBM fab is needed.

The real implications are about IBM’s declining semiconductor business. POWER (including embedded POWER) is a volume product for IBM Microelectronics, along with current-generation video game chips. The video game business dries up by year end as Sony and Microsoft introduce the next-generation consoles, sans IBM content. POWER licensing through the OpenPOWER Consortium might generate some fab business for the East Fishkill, NY IBM fab, but that business could also go to GlobalFoundries (GloFo) or Taiwan Semiconductor (TSMC). Where’s the chip volume going to come from?

IBM will not be able to keep profitably investing in cutting-edge semiconductor fabs if it does not have the fab volume needed to amortize costs. Simple economies of scale. But note that IBM fab technology has been of enormous help to GloFo and TSMC in getting to recent semiconductor technology nodes. Absent IBM’s help, this progress would be delayed.
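
The scale arithmetic is unforgiving, as a minimal sketch with an invented capex figure shows: cut the volume by a factor of ten and the per-chip burden grows tenfold.

```python
# Amortizing a fixed fab investment over annual chip volume.
# The $5B capex figure is invented for illustration.
fab_capex = 5_000_000_000
for chips_per_year in (100_000_000, 10_000_000, 1_000_000):
    print(f"{chips_per_year:>11,} chips/year -> "
          f"${fab_capex / chips_per_year:,.0f} capex per chip")
# 100,000,000 chips/year -> $50 capex per chip
#  10,000,000 chips/year -> $500 capex per chip
#   1,000,000 chips/year -> $5,000 capex per chip
```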

Any move by IBM to cut expenses by slowing fab technology investments will have a cascading negative impact on global merchant semiconductor fab innovation, hurting, for example, the ARM chip ecosystem. Is the canary still singing in the IBM semiconductor fab?

Your comments and feedback are invited.

Follow @PeterSKastner on Twitter

IBM POWER Linux Server

Peak Technology or Technology Peak?

The theory of peak oil — the point at which the Earth’s oil production begins an irreversible decline — was a hot and debated topic last decade. There are lots of signs that we are at a technology demand peak. Is this peak permanent, or how will we get past it?

The last-decade argument that oil production had permanently peaked proved to be laughably incorrect. Hydraulic fracturing (“fracking”) technology developed in the United States changed the slope of the oil production curve upwards. This analyst has no intention of becoming a laughing stock by suggesting that digital technology innovation has peaked. Far from it. However, few things in nature are a straight line; it certainly appears that digital technology adoption — demand — has slowed. We are in a trough and can’t foresee the other side.

One good place to look for demand forecasts is the stock market.

Smart Phones and Tablets
Last month, gadget profit-leaders Samsung and Apple both took hits based on slower growth forecasts. “Pretty much everyone who can afford a smartphone or tablet has one, so where does the profit growth come from?” was the story line. Good question.

This month, AT&T and T-Mobile announced they would lease smartphones to customers instead of selling them outright with a carrier discount. The phones and tablets coming off lease will be re-sold into the burgeoning used-gadget market. It’s now too easy to get new-enough gadget technology in the used market. After all, last year’s hardware can still run this year’s free, new software upgrade.

On the surface, it appears that the global market for $600 smartphones and tablets is at or close to saturation — a peak.

Desktop and Notebook PCs
The stock market is not treating traditional technology makers very well. H-P is coming back from a near-death experience. Its stock is half what it was two years ago. Dell wants to go private so it can restructure and deal with market forces that are crushing margins and profits. Even staid and predictable IBM has lost its mojo over the past five quarters. Microsoft missed.

These technology makers are dealing with PCs, the data center, and services. They are not major players in the smartphone/gadget market. Their focus is on doing what they used to do more efficiently. That strategy is not working.

The desktop and notebook PC markets are almost all replacement units in developed countries. Macro-economics has dramatically slowed emerging market growth in formerly hot places like Brazil, Russia, India, and China (BRIC). The new customers are being added more slowly and at higher costs, and existing customers have increasingly voted to not upgrade as frequently. My 2008 Apple MacBook Air, cutting edge and quite expensive at the time, is still adequate for road trips. My Sandy Bridge Generation-1 Ultrabook has adequate battery life. There’s no compelling reason, most buyers tell us, to accelerate the PC replacement cycle.

Well, one temporary accelerator is the demise of Windows XP support next year. With auditors and consultants screaming about liability issues, non-profits and government are rolling in new PCs to replace their ten-year-old kit. Thank goodness. But seriously, ten-year-old PCs have been getting the job done, as defined by user management.

Note also that a new PC likely means a user-training upgrade to Windows 8. Both consumers and businesses are avoiding this upgrade more or less like the plague. There is no swell of press optimism that Windows 8.1 this fall will be the trick. PC replacement is a pain already, so few want to jump on an OS generation change as well.

Data Center
The data center market shows some points of light. Public cloud data centers run by the big boys like Apple, Google, Facebook, and Amazon are growing like gangbusters. So is High Performance Computing, where ever more complex models consume as many teraflops as one can afford to throw at the problem. Recent press reports suggest that “national security” is a growing technology consumer. [understatement]

However, enterprise data centers, driven by cautious corporate management, are growing more slowly than five years ago, and this market outsizes the points of light. Moreover, the latest generation of server technology really does support more users and apps than the gear being replaced. With headcount down and fewer new enterprise apps, fewer racks are now getting the computing workload done. (Storage, of course, is growing exponentially.) We also expect a growing trend towards “open computing” servers, a trend that will suck hardware margin and services revenue from the big server-technology makers.

Navigating From the Trough
So, mobile gadgets, traditional PCs, and the data center — the three legs of the digital technology stool — are all growing more slowly than in the recent past. This is the “technology demand peak” as we see it. We are presently past the peak and into the trough.

How deep is the trough and how long will it last? LOL. If we knew that, we could comfortably retire! Really, there are roughly a couple of trillion dollars in market cap at stake here. If the digital tech market growth remains anemic beyond another twelve months, then there will be too many tech players and too few chairs when the music stops. Any market observer can see that.

Our own view is that it will take a number of technology innovations that will propel replacement demand and drive new markets. The solution is new tech, not better-faster-smaller old tech. Where’s the digital equivalent of fracking? (Actually, fracking would not be possible without a lot of newly invented, computer-based technology.)

First, the global macro-economic slowdown is likely to resolve itself positively, perhaps soon. We don’t buy the global depression arguments. There are billions of potential middle-class new computer consumers and the data center backend to support them.

Next, mobile gadgets and PCs are on the verge of exciting new user interfaces: holographic 3D displays that put you in the picture, keyboards projected on any flat surface, and conference-room projection capabilities in every smartphone. Add new user interfaces, shared with PCs and notebooks, based on perceptual computing: the (wo)man-machine interface that recognizes voice, gestures, and eye movement, for starters.

Big data and the cloud are data-center conversation pieces. But these technologies are really toddlers, at best. Data-sifting technologies like the grandson of Hadoop will enable more real-time enterprise intelligence and wisdom. HPC has limits only of money available to invest. Traditional data centers will re-plumb with faster I/O, distributed computing, and the scale-up and scale-down capacity of an electric utility — while needing less from the electrical utility.

We don’t have all the answers, but are convinced it will take an industry kick in the pants to get us towards the next peak. More of the same is not a recipe for a solution. We are in a temporary downturn, not just past peak technology.

Your thoughts and comments are welcome.

Photo Credit: Eugene Richards