IT Industry Hopes for Q4 Holiday Magic

I am floored that almost all of 2013's new tech products are reaching market in the fourth quarter. For the most part, the other three quarters of the year were not wasted so much as not used to smooth supply and demand. What is to be done?

2013 products arrive in Q4
Here are some of the data points I used to conclude that 2013 is one backend-loaded product year:

  • Data Center: Xeon E3-1200 v3 single-socket chips based on the Haswell architecture started shipping this month. Servers follow next quarter. Xeon E5 dual-socket chips based on Ivy Bridge have been announced and are anticipated in shipping servers in Q4. New Avoton and Rangeley Atom chips for micro-servers and storage/comms are announced and anticipated in products in Q4.
  • PCs: my channel checks show 2013 Gen 4 Core (Haswell) chips in about 10% of SKUs at retail, mostly quad-core. Dual-core chips are now arriving and we’ll see lower-end Haswell notebooks and desktops arriving imminently. Apple, for instance, launched its Haswell-based 2013 iMac all-in-ones September 24th. But note the 2013 Mac Pro announced in June has not shipped and the new MacBooks are missing in action.
  • Tablets: Intel’s Bay Trail Atom chips announced in June are now shipping. They’ll be married to Android or Windows 8.1, which ships in late October. Apple’s 2013 iPad products have not been announced. Android tabs this year have mostly seen software updates, not significant hardware changes.
  • Phones: Apple’s new phones started selling this week. The 5C is last year’s product with a cost-reduced plastic case. The iPhone 5S is the hot product. Unless you stood all day in line last weekend, you’ll be getting your ordered phone … in Q4. Intel’s Merrifield Atom chips for smartphones, announced in June, have yet to launch. I’m thinking Merrifield gets the spotlight at the early January ’14 CES show.

How did we get so backend loaded?
I don’t think an economics degree is needed to explain what has happened. The phenomenal unit growth over the past decade in personal computers, including mobile devices, has squarely placed the industry under the forces of global macro-economics. The recession in Europe, the pull-back in emerging countries led by China, and slow growth in the USA all contribute to a sub-par global economy. Unit volume growth rates have fallen.

The IT industry has reacted with slowed new product introductions in order to sell more of the existing products, which spreads the R&D and overhead costs of existing products across more units and reduces cost-per-unit. And increases profits.
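The amortization arithmetic behind that reaction is easy to sketch. All figures in the Python snippet below are hypothetical, chosen only to show the shape of the effect:

```python
# Hypothetical figures: fixed R&D/overhead amortized over units shipped.
def cost_per_unit(fixed_costs, marginal_cost, units_shipped):
    """Per-unit cost = amortized fixed costs + marginal production cost."""
    return fixed_costs / units_shipped + marginal_cost

FIXED = 500_000_000   # hypothetical R&D + overhead for one product generation
MARGINAL = 40.0       # hypothetical build cost per chip

# Stretching a generation's life from 50M to 80M units cuts per-unit cost:
print(cost_per_unit(FIXED, MARGINAL, 50_000_000))  # 50.0
print(cost_per_unit(FIXED, MARGINAL, 80_000_000))  # 46.25
```

Every additional unit sold on an old design carries less fixed cost, which is exactly why delaying a new product introduction fattens margins.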

Unfortunately, products are typically built to a forecast. The forecast for 2012-2013 was higher than reality. More product was built than planned or sold. There are warehouses full of last year’s technology.

The best laugh I’ve gotten in the past year from industry executives is to suggest that “I know a guy who knows a guy in New Jersey who could maybe arrange a warehouse fire.” After about a second of mental arithmetic, I usually get a broad smile back and a response like “Hypothetically, that would certainly be very helpful.” (Industry execs must think I routinely wear a wire.)

So, with warehouses full of product which will depreciate dramatically upon new technology announcements, the industry has said “Give us more time to unload the warehouses.”

Meanwhile, getting the new base technology out the door on schedule is harder, not easier. Semiconductor fabrication, new OS releases, new sensors and drivers, and more all contribute to friction in the product development schedule. But flaws are unacceptable because of the replacement costs. For example, if a serious flaw is found in Apple’s new iOS 7, which shipped five days ago, Apple will have to fix the install on over 100 million devices and climbing, and deal with class-action lawsuits and reputation damage; costs of over $1 billion are the starting point.

In short, the industry has slowed its cadence over the past several years to the point where all the sizzle in the market with this year’s products happens at the year-end holidays. (Glad I’m not a Wall Street analyst.)

What happens next?
The warehouses will still be stuffed entering 2014. But there will be less 2012 tech on those shelves, now replaced by 2013 tech.

Marching soldiers are taught that when they get out of step, they skip once and get back in cadence.

The ideal consumer cadence for the IT industry has products shipping in Q2 and fully ramped by mid-Q3; that’s in time for the back-to-school major selling season, second only to the holidays. The data center cadence is more centered on a two-year cycle, while enterprise PC buying prefers predictability.

Consumer tech in 2014 broadly moves to a smaller process node and doubles up to quad-cores. Competitively, Intel is muscling its way into tablets and smartphones. The A7 processor in the new Apple iPhone 5S is Apple’s first shot in response. Intel will come back with 14nm Atoms in 2014, and Apple will have an A8.

Notebooks will see a full generation of innovation as Intel delivers 14nm chips on an efficiency path toward near-threshold voltages, as low as possible, which deliver outstanding battery life. A variation on the same tech gets to Atom by the 2014 holidays.

The biggest visible product changes will be in form-factors, as two-in-one notebooks in many designs compete with tablets in many sizes. The risk-averse product manufacturers (who own that product in the warehouses) have to innovate or die, macro-economic conditions be damned. Dell comes to mind.

On the software side, Apple’s iOS 7 looks and acts a lot more like Android than ever before. Who would have guessed that? Microsoft tries again with Windows 8.1.

Consumer buyers will be information-hosed with more changes than they have seen in years, making decision-making harder.

Intel has been very cagey about what 2014 brings to desktops; another year of Haswell refreshes before a 2015 new architecture is entirely possible. Otherwise, traditional beige boxes are being replaced with all-in-ones and innovative small form-factor machines.

The data center is in step and a skip is unnecessary. The 2014 market battle will answer the question: what place do micro-servers have in the data center? However, there is too much server-supplier capacity chasing a more commodity datacenter. Reports have IBM selling off its server business, and Dell is going private to focus long-term.

The bright spot is that tech products of all classes seem to wear out after about 4-5 years, demanding replacement. Anyone still have an iPhone 3G?

The industry is likely to continue to stretch its cycles until global macro-economic conditions improve and demand catches up with more of the supply. But moving product availability up even two months in the calendar would improve new-product flow-through by catching the back-to-school season.

Catch me on Twitter @peterskastner




POWER to the People: IBM is Too Little, Too Late

“On August 6, Google, IBM, Mellanox, NVIDIA and Tyan today announced plans to form the OpenPOWER Consortium – an open development alliance based on IBM’s POWER microprocessor architecture. The Consortium intends to build advanced server, networking, storage and GPU-acceleration technology aimed at delivering more choice, control and flexibility to developers of next-generation, hyperscale and cloud data centers.”

IBM Hardware Is Not Carrying Its Weight
As the last computer manufacturer with its own silicon fab, IBM has a financial dilemma. The cost of silicon fab investments is increasing while hardware revenues are declining. There are fewer new Z-series mainframes and POWER-based midrange computers on which to allocate hardware R&D, product development, fab capex, and other amortized costs. POWER revenues were down 25% in the latest quarter. Bloomberg reports furloughs of the hardware staff this month in an effort to cut costs.

The cloud-based future data center is full of Intel Xeon-based servers as practiced by Google, Amazon, Facebook et al. But margins on Intel-architecture servers — IBM’s instantiation is the X Series — are eroding. Widely believed rumors earlier this year had IBM selling off its X Series business to Lenovo, like IBM spun off its PC business in 2005.

Clearly, the IBM hardware business is the subject of much ongoing discussion in Armonk, NY.

The OpenPOWER Consortium is a Strategic Mistake
Our view is that IBM has made a strategic mistake with this announcement by admitting proprietary defeat and opening POWER up to an open-source consortium. The signal IBM is sending is that it is no longer totally committed to the long-term future of its mainframe and POWER hardware. The sensitive ears of IBM’s global data center customers will pick this message up and, over time, accelerate plans to migrate off of IBM hardware and software.

Proprietary hardware and software business success depends a great deal on customer trust — more than is commonly assumed. Customers want a long term future planning horizon in order to continue investing in IBM, which is not the lowest-cost solution. When trust is broken, a hardware business can crash precipitously. One such example is Prime Computer, a 1980s Massachusetts darling that was acquired, dropped plans for future processors, and watched its installed base decline at a fifty-percent per annum rate. On the other hand, H-P keeps Digital Equipment and Tandem applications going to this day.

By throwing doubt on its future hardware business horizon, IBM risks its entire business model. Yes, that is a far-fetched statement but worth considering: the IBM services and software business is built around supporting, first and foremost, IBM hardware. Lose proprietary hardware customers, and services and high-margin software business will decline.

So, we think IBM is risking a lot by stirring up its customer base in return for a few million dollars in POWER consortium licensing revenue.

What About Google?
To see how this deal could turn even worse for IBM, let’s look at the motives of the headline consortium member, Google.

First, IBM just gave Google the “Amdahl coffee mug”. In the mainframe heyday of the 1970s, it was a common sales tactic for Amdahl, a mainframe clone company in fierce competition with IBM, to leave a coffee mug for the CIO. Properly placed on a desk, it sent the message to the IBM sales team to drop prices because there was competition for the order. A POWER mug — backed by open POWER servers — will send a pricing signal to Intel, which sells thousands of Xeon chips directly to Google. That action won’t budge the needle much today.

POWER servers are most likely to appear in Open Compute form, as blades in an open-hardware rack-tray. These are the cost-reduced server architectures we see sucking the margin out of the entire server industry. Gas on the fire of that trend.

And we don’t see Google needing to build its own Tier-3 backend database servers, a common role for POWER servers. However, Google customizing POWER chips with nVidia GPU technology for some distant product is believable. For example, we’re puzzling over how Google will reduce the $85,000 technology cost of its driverless automobile to mass-market levels, and the consortium could become part of that solution.

Open POWER Software Too?
IBM is emphatically not throwing the POWER operating systems (i.e., AIX Unix and OS/400) and systems software into the open consortium. That would give away the IBM family jewels. So, the open-source hardware folks will quickly turn to Linux on POWER. Given a choice, buyers will turn to open-source — that is, free or lower-cost — equivalents of IBM system software. We see little software-revenue upside to IBM’s POWER consortium move. Nor services either.

Fortunately, IBM did not suggest that POWER licensing would extend to the fast-growing mobile world of tablets and smartphones, because that would be a bridge way too far. By licensing designs à la ARM Holdings, IBM may stanch some of the embedded POWER chip business lost to ARM licensees and Intel in recent years.

Thoughts and Observations
In conclusion, we see nothing good happening to IBM’s bottom line as a result of the OpenPOWER Consortium announcement. And if it wasn’t about the bottom line, why risk long-term customer trust in IBM’s long-term hardware platform commitments? The revenue from POWER licensing will not come close to compensating for the weakness that IBM displays with this consortium strategy.

I ask this without drama or bombast: can we now see the dim horizon where IBM is no longer a major player in the computer hardware business? That is a huge question which, until now, never needed to be asked. Moreover, no IBM hardware products would mean no IBM fab is needed.

The real implications are about IBM’s declining semiconductor business. POWER (including embedded POWER) is a volume product for IBM Microelectronics, along with current-generation video game chips. The video game business dries up by year end as Sony and Microsoft introduce the next generation of consoles, sans IBM content. POWER licensing through the OpenPOWER Consortium might generate some fab business for the East Fishkill, NY IBM fab, but that business could also go to GlobalFoundries (GloFo) or Taiwan Semiconductor (TSMC). Where’s the chip volume going to come from?

IBM will not be able to keep profitably investing in cutting-edge semiconductor fabs if it does not have the fab volume needed to amortize costs. Simple economies of scale. But note that IBM fab technology has been of enormous help to GloFo and TSMC in reaching recent semiconductor technology nodes. Absent IBM’s help, this progress would be delayed.

Any move by IBM to cut expenses by slowing fab technology investments will have a cascading negative impact on global merchant semiconductor fab innovation, hurting, for example, the ARM chip ecosystem. Is the canary still singing in the IBM semiconductor fab?

Your comments and feedback are invited.

Follow @PeterSKastner on Twitter

IBM POWER Linux Server


Pulse Check: How Intel is Scaling to Meet the Decade’s Opportunities

Eighteen months ago, Intel announced it would address the world’s rapidly growing computing continuum by investing in variations on the Intel Architecture (IA). It was met with a ho-hum. Now, many product families are beginning to emerge from the development labs and head towards production. All with IA DNA, these chip families are designed to be highly competitive in literally dozens of new businesses for Intel, produced in high volumes, and delivering genuine value to customers and end users.

Intel is the only company with an architecture, cash flow, fabs, and R&D capable of scaling its computing engines up and down to meet the decade’s big market opportunities. What is Intel doing and how can they pull this off?

The 2010’s Computing Continuum
Today’s computing is a continuum that ranges from smartphones to mission-critical datacenter machines, and from desktops to automobiles.  These devices represent a total addressable market (TAM) approaching a billion processors a year, and will explode to more than two billion by the end of the decade.  Of that, traditional desktop microprocessors are about 175 million chips this year, and notebooks, 225 million.

For more than four decades, solving all the world’s computing opportunities required multiple computer architectures, operating systems, and applications. That is hardly efficient for the world’s economy, but putting an IBM mainframe into a cell phone wasn’t practical. So we made do with multiple architectures and inefficiencies.

In the 1990’s, I advised early adopters NCR and Sequent in their plans for Intel 486-based servers. Those were desktop PC chips harnessed into datacenter server roles. Over twenty years, Intel learned from its customers to create and improve the Xeon server family of chips, and has achieved a dominant role in datacenter servers.

Now, Intel Corporation is methodically using its world-class silicon design and fabrication capabilities to scale its industry-standard processors down to fit smartphones and embedded applications, and up into high-performance computing applications, as two examples. Scaling in other directions is still in the labs and under wraps.

The Intel Architecture (IA) Continuum
IA is Intel’s instruction-set architecture, common (with feature differentiation) across the Atom, Core, and Xeon microprocessors already used in the consumer electronics, desktop and notebook, and server markets, respectively.  These microprocessors are able to run a common stack of software such as Java, Linux or Microsoft Windows.  IA also represents the hardware foundation for hundreds of billions of dollars in software application investments by enterprises and software application package developers; those investments remain valuable assets as long as hardware platforms can run them, and backwards compatibility in IA has protected them.

To meet the widely varying requirements of this decade’s computing continuum, Intel is using the DNA of IA to create application-specific variants of its microprocessors.  Think of this as silicon gene-splicing.  Each variant has its own micro-architecture that is suited for its class of computing requirements (e.g., Sandy Bridge for 2011 desktops and notebooks). These genetically-related processors will extend Intel into new markets, and include instruction-set compatible microprocessors:

  • Embedded processors and electronics known as “systems on a chip” (SOCs) with an Atom core and customized circuitry for controlling machines, display signage, automobiles, and industrial products;
  • Atom, the general-purpose computer heart of consumer electronics mobile devices, tablets, and soon smartphones;
  • Core i3, i5, and i7 processors for business and consumer desktops and notebooks, with increasing numbers of variants for form-factor, low power, and geography;
  • Xeon processors for workstations and servers, with multi-processor capability advancing well into the mainframe-class, mission-critical computing segment;
  • Xeon datacenter infrastructure processor variants (e.g., storage systems, and with communications management a logical follow-on);

A Pause to Bring You Up To Date
Please do not be miffed: all of the above was published in February, 2011, more than two years ago. We included it here because it sets the stage for reviewing where Intel stands in delivering on its long-term strategy and plans of the IA computing continuum, and to remind readers that Intel’s strategy is hiding in plain sight for going on five years.

In that piece two years ago, we concluded that IA fits the market requirements of the vast majority of the decade’s computing work requirements, and that Intel is singularly capable of creating the products to fill the expanding needs of the computing market (e.g., many core).

With the launch of the 4th Generation Core 22nm microprocessors (code-name Haswell) this week and the announcement of the code-name Baytrail 22nm Atom systems on a chip (SoCs), it’s an appropriate time to take the pulse on Intel’s long-term stated direction and the products that map to the strategy.

Systems on a Chip (SoCs)
The Haswell/Baytrail launch would be a lot less impressive if Intel had not mastered the SoC.

The benefits of an SoC compared to the traditional multi-chip approach Intel has used up until now are: fewer components, less board space, greater integration, lower power consumption, lower production and assembly costs, and better performance. Phew! Intel could not build a competitive smartphone until it could put all of the logic for a computer onto one chip.

This week’s announcements include SoCs for low-voltage notebooks, tablets, and smartphones. The data center Atom SoCs, code-name Avoton, are expected later this year.

For the first time, Intel’s mainstream PC, data center, and mobile businesses include highly competitive SoCs.

SoCs are all about integration. The announcement last month at Intel’s annual investor meeting that “integration to innovation” was an additional strategy vector for the company hints at using many more variations of SoCs to meet Intel’s market opportunities with highly targeted SoC-based variants of Atom, Core, and Xeon.

Baytrail, The Forthcoming Atom Hero
With the Baytrail SoCs in tablets and Merrifield in smartphones, Intel can for the first time seriously compete for mobile market share against ARM competitors on performance and performance-per-watt. These devices are likely to run the Windows 8, Android, and Chrome operating systems. They will be sold to carriers globally. There will be variants for local markets (e.g., China and Africa).

The smartphone and tablet markets combined exceed the PC market. By delivering competitive chips that run thousands of legacy apps, Intel has finally caught up on the technology front of the mobile business.

Along with almost the entire IT industry, Intel missed the opportunity that became the Apple iPhone. Early Atom processors were not SoCs, had poor battery life, and were relatively expensive. That’s a deep hole to climb out of. But Intel has done just that. There are a lot fewer naysayers than two years ago. The pendulum is now swinging Intel’s way on Atom. 2014 will be the year Intel starts garnering serious market share in mobile devices.

4th Generation Core for Mainstream Notebooks and PCs
Haswell is a new architecture implemented in new SoCs for long-battery-life notebooks, and with traditional chipsets for mainstream notebooks and desktops. The architecture moves the bar markedly higher in graphics performance, power management, and floating point (e.g., scientific) computations.

We are rethinking our computing model as a result of Haswell notebooks and PCs. Unless you are an intense gamer or workstation-class content producer, we think a notebook-technology device is the best solution.

Compared to four-year old notebooks in Intel’s own tests, Haswell era notebooks are: half the weight, half the height, get work done 1.8x faster, convert videos 23x faster, play popular games 26x faster, wake up and go in a few seconds, and with 3x battery life for HD movie playing. Why be tethered to a desktop?

Black, breadbox-size desktops are giving way to all-in-one (AIO) designs like the Apple iMac used to write this blog. That iMac has been running for two years at 100% CPU utilization with no problems. (It folds proteins for medical research in the background.) New PC designs use notebook-like components to fit behind the screen. You’ll see AIOs this fall that lie flat as large tablets or go vertical with a rear kick-stand. With touch screens, wireless Internet, and Bluetooth peripherals, these new AIOs are easily transportable around the house. That’s the way we see the mainstream desktop PC evolving.

And PCs need to evolve quickly. Sales are down almost 10% this year. One reason is global macro-economic conditions. But everybody knows the PC replacement cycle has slowed to a crawl. Intel’s challenge is to spark the PC replacement cycle. Haswell PCs and notebooks, as noted above, deliver a far superior experience to users than they are putting up with in their old, obsolescent devices.

Xeon processors for workstations, servers, storage, and communications
The data center is a very successful story for Intel. The company has steadily gained workloads from traditional (largely legacy Unix) systems; grown share in the big-ticket Top 500 high-performance computing segment; evolved with mega-datacenter customers such as Amazon, Facebook, and Google; and extended Xeon into storage and communications processors inside the datacenter.

The Haswell architecture includes two additions of great benefit to data-center computing. First, new floating-point architecture and instructions should improve scientific and technical computing throughput by up to 60%, a huge gain over the installed server base. Second, transactional memory is a technology that makes it easier for programmers to deliver fine-grained parallelism, and hence to take advantage of multi-core chips with multi-threaded programs, including making operating systems and systems software like databases run more efficiently.
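Hardware transactional memory works at the cache-line level inside the processor. As a rough software analogue, the sketch below shows the optimistic execute-validate-retry pattern it enables; all class and function names here are invented for illustration and are not Intel's API:

```python
import threading

# Conceptual sketch of optimistic (transactional) execution: run the
# critical section without a fine-grained lock, detect conflicting
# writers via a version counter, and retry on conflict. Hardware
# transactional memory does this transparently at cache-line
# granularity; everything below is an illustrative software analogue.

class VersionedCell:
    def __init__(self, value=0):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()  # fallback path for repeated aborts

    def transact(self, fn, max_retries=10):
        for _ in range(max_retries):
            start_version = self.version          # "transaction begins"
            new_value = fn(self.value)            # speculative work, no lock held
            with self._lock:
                if self.version == start_version: # no conflicting writer ran
                    self.value = new_value        # commit
                    self.version += 1
                    return new_value
            # conflict detected: retry, like an aborted hardware transaction
        with self._lock:                          # give up, take the lock
            self.value = fn(self.value)
            self.version += 1
            return self.value

cell = VersionedCell(0)
cell.transact(lambda v: v + 1)
print(cell.value)  # 1
```

The payoff is that in the common, uncontended case threads never serialize on a coarse lock, which is the "fine-grained parallelism" benefit described above.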

In the past year, the company met one data-center threat, GPU-based computing, with Xeon Phi, a server add-in card that contains dozens of IA cores running a version of Linux to enable massively parallel processing. Xeon Phi competes with GPU-based challengers from AMD and nVidia.

Another challenge, micro-servers, is more a vision than a market today. Nevertheless, Intel created the code-name Avoton Atom SoC for delivery later this year. Avoton will compete against emerging AMD- and ARM-based micro-server designs.

Challenges Ahead

1. The most difficult technology challenge that Intel faces this decade remains software, not hardware.  Internally, the growing list of must-deliver software drivers for hardware such as processor-integrated graphics means that the rigid two-year, tick-tock hardware model must also accommodate software delivery schedules.

Externally, Intel’s full-fray assault on the mobile market requires exquisite tact in dealing with the complex relationships with key software/platform merchants: Apple (iOS), Google (Android), and Microsoft (Windows), who are tough competitors.

In the consumer space such as smartphones, Intel’s ability to deliver applications and a winning user experience are limited by the company’s OEM distribution model. More emphasis needs to be placed on the end-user application ecosystem, both quality and quantity. We’re thinking more reference platform than reference hardware.

2. By the end of the decade, silicon fabrication will be under 10 nm, and it is a lot less clear how Moore’s Law will perform in the 2020’s. Nevertheless, we are optimistic about the next 10-12 years.

3. The company missed the coming iPhone and lost out on a lot of market potential. That can’t happen again. The company last month set up a new emerging-devices division charged with finding the next big thing around the same time others do.

4. In the past, we’ve believed that mobile devices — tablets and smartphones — were additive to PCs and notebooks, not substitutional. The new generation of Haswell and Baytrail mobile devices, especially when running Microsoft Windows, offer the best of the portable/consumption world together with the performance and application software (i.e., Microsoft Office) to produce content and data. Can Intel optimize the market around this pivot point?

Observations and Conclusions
Our summary observations have not changed in two years, and are reinforced by the Haswell/Baytrail SoCs that are this week’s proof point:

  • Intel is taking its proven IA platforms and modifying them to scale competitively as existing markets evolve and as new markets such as smartphones emerge.
  • IA scales from handhelds to mission-critical enterprise applications, all able to benefit from a common set of software development tools and protecting the vast majority of the world’s software investments.  Moreover, IA and Intel itself are evolving to specifically meet the needs of a spectrum of computing made personal, the idea that a person will have multiple computing devices that match the time, place and needs of the user.
  • Intel is the only company with an architecture, cash flow, fabs, and R&D capable of scaling its computing engines up and down to meet the decade’s big market opportunities.

Looking forward, Intel has fewer and less critical technology challenges than at any point since the iPhone launch in 2007. Instead, the company’s largely engineering-oriented talent must help the world through a complex market-development challenge as we all sort out what devices are best suited for what tasks. We’ve only scratched the surface of convertible tablet/notebook designs. How can Intel help consumers decide what they want and need so the industry can make them profitably? How fast can Intel help the market to make up its mind? Perhaps the “integration to innovation” initiative needs a marketing component.

If the three-year evolving Ultrabook campaign is an example of how Intel can change consumer tastes, then we think industry progress will be slower than optimal. A “win the hearts and minds” campaign is needed, learning from the lessons of the Ultrabook evolution. Influencing and moving markets is a skillset Intel will need more of as personal computing changes over the next decade, for example as perceptual computing morphs the user interface.

Absent a macro-economic melt-down, Intel is highly likely to enjoy the fruits of five years of investments over the coming two-year life of the Haswell architecture. And there’s no pressing need today to focus beyond 2015.


Peter S. Kastner is an industry analyst with over forty-five years experience in application development, datacenter operations, computer industry marketing, PCs, and market research.  He was a co-founder of industry-watcher Aberdeen Group in 1989.  His firm, Scott-Page LLC, consults with technology companies and technology users.

Twitter: @peterskastner

Haswell Core i7 desktop microprocessor


On the Impact of Paul Otellini’s CEO Years at Intel

Intel’s CEO Paul Otellini is retiring this week. His 40-year career at Intel now ending, it’s a timely opportunity to look at his impact on Intel.

Source: New York Times


Intel As Otellini Took Over

In September 2004, when it was announced that Paul Otellini would take over as CEO, Intel was #46 on the Fortune 100 list and had ramped production to 1 million Pentium 4s a week (today, over a million processors a day). The year ended with revenues of $34.2 billion. Otellini, who joined Intel with a new MBA in 1974, had 30 years of experience at the company.

The immediate challenges the company faced fell into four areas: technology, growth, competition, and finance:

Technology: Intel’s processor architecture had pushed ever more transistors clocking ever faster, generating more heat. The solution was to use the benefits of Moore’s Law to put more cores on each chip and run them at controllable — and eventually much reduced — voltages.

Growth: The PC market was 80% desktops and 20% notebooks in 2004 with the North America and Europe markets already mature. Intel had chip-making plants (aka fabs) coming online that were scaled to a continuing 20%-plus volume growth rate. Intel needed new markets.

Competition: AMD was ascendant, and a growing menace.  As Otellini was taking over, a market research firm reported AMD had over 52% market share at U.S. retail, and Intel had fallen to #2. Clearly, Intel needed to win with better products.

Finance: Revenue in 2004 recovered to beat 2000, the Internet bubble peak. Margins were in the low 50% range — good but inadequate to fund both robust growth and high returns to shareholders.

Where Intel Evolved Under Paul Otellini

Addressing these challenges, Otellini changed the Intel culture, setting higher expectations, and moving in many new directions to take the company and the industry forward. Let’s look at major changes at Intel in the past eight years in the four areas: technology, growth, competition, and finance:


Design for Manufacturing: Intel’s process technology in 2004 was at 90nm. To reliably achieve a new process node and architecture every two years, Intel introduced the Tick-Tock model: a “tick” shrinks the process node, and the following year’s “tock” delivers a new microarchitecture on that node. The engineering and manufacturing fab teams work together to design microprocessors that can be manufactured in high volume with few defects. Other key accomplishments include High-K Metal Gate transistors at 45nm, 32nm products, 3D tri-gate transistors at 22nm, and a 50% reduction in wafer production time.
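The Tick-Tock cadence over the window this piece covers can be written out directly; the code names and nodes below are the publicly documented ones, and the tiny Python table is just an illustration of the alternation:

```python
# Tick-Tock cadence: each process node carries two product generations,
# a shrink ("tick") followed by a new microarchitecture ("tock").
schedule = [
    (2010, "tick", "Westmere",     "32nm"),  # shrink to 32nm
    (2011, "tock", "Sandy Bridge", "32nm"),  # new architecture, same node
    (2012, "tick", "Ivy Bridge",   "22nm"),  # shrink to 22nm
    (2013, "tock", "Haswell",      "22nm"),  # new architecture, same node
]
for year, phase, codename, node in schedule:
    print(f"{year}: {phase} -> {codename} ({node})")
```

The alternation lets the fab teams debug a new node on a known design before the architects bet a new design on it.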

Multi-core technology: The multi-core Intel PC was born in 2006 with the Core 2 Duo. Now, Intel uses Intel Architecture (IA) as a technology lever for computing across tiny (Atom), mainstream (Core and Xeon), and massive (Phi) workloads. There is a deliberate continuum across computing needs, all supported by a common IA and an industry of IA-compatible software tools and applications.

Performance per Watt: Otellini led Intel’s transformational technology initiative to deliver 10X more power-efficient processors. Lower processor power requirements allow innovative form factors in tablets and notebooks and are a home run in the data center. The power-efficiency initiative comes to maturity with the launch of the fourth generation of Core processors, codename Haswell, later this quarter. Power efficiency is critical to growth in mobile, discussed below.


Growth

When Otellini took over, the company focused on the chips it made, leaving the rest of the PC business to its ecosystem partners. Recent unit growth in these mature markets comes from a greater focus on a broader range of customers’ computing needs, and from bringing leading technology to market rapidly and consistently. In so doing, the company gained market share in all the PC and data center product categories.

The company shifted marketing emphasis from the mature North America and Europe to emerging geographies, notably the BRIC countries — Brazil, Russia, India, and China. That formula accounted for a significant fraction of revenue growth over the past five years.

Intel’s future growth requires developing new opportunities for microprocessors:

Mobile: The early Atom processors introduced in late 2008 were designed for low-cost netbooks and nettops, not phones and tablets. Mobile was a market where the company had to reorganize, dig in, and catch up. The energy-efficiency work that benefits Haswell, the communications silicon from the 2010 Infineon acquisition, and the forthcoming 14nm process in 2014 will finally allow the company to stand toe-to-toe with competitors Qualcomm, Nvidia, and Samsung under the Atom brand. Mobile is a huge growth opportunity.

Software: The company acquired Wind River Systems, a specialist in real-time software, in 2009, and McAfee in 2010. Both added to Intel’s own developer-tools business. The software services business accelerates customers’ time to market with new, Intel-based products. The company also stepped up efforts in consumer device software, optimizing the operating systems from Google (Android), Microsoft (Windows), and Samsung (Tizen). Why? Consumer devices sell best when an integrated hardware/software/ecosystem story like Apple’s iPhone exists.

Intelligent Systems: Specialized Atom systems on a chip (SoCs) with Wind River software and Infineon mobile communications radios are increasingly being designed into medical devices, factory machines, automobiles, and new product categories such as digital signage. While the global “embedded systems” market lacks the pizzazz of mobile, it is well north of $20 billion in size.


Competition

AMD today is a considerably reduced competitive threat, and Intel has regained #1 market share in PCs, notebooks, and the data center.

Growth into the mobile markets is opening a new set of competitors which all use the ARM chip architecture. Intel’s first hero products for mobile arrive later this year, and the battle will be on.


Finance

Intel has delivered solid, improved financial results to stakeholders under Otellini. With ever more efficient fabs, the company has improved gross margins. Free cash flow supports a dividend yielding above 4%, a $5B stock buyback program, and a multi-year capital expense program targeted at building industry-leading fabs.

The changes in financial results are summarized in the table below, showing the year before Otellini took over as CEO through the end of 2012.

GAAP                2004      2012    Change
Revenue           $34.2B    $53.3B    +55.8%
Operating Income  $10.1B    $14.6B    +44.6%
Net Income         $7.5B    $11.0B    +46.7%
EPS                $1.16     $2.13    +83.6%
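The percentage changes in the table follow directly from the raw figures; a quick sketch using the GAAP numbers above reproduces them:

```python
# Reproduce the percentage changes in the table from the 2004 vs. 2012 figures.
figures = {
    "Revenue":          (34.2, 53.3),   # $B
    "Operating Income": (10.1, 14.6),   # $B
    "Net Income":       (7.5, 11.0),    # $B
    "EPS":              (1.16, 2.13),   # $ per share
}

for metric, (y2004, y2012) in figures.items():
    change = (y2012 - y2004) / y2004 * 100
    print(f"{metric}: +{change:.1f}%")
# Revenue: +55.8%, Operating Income: +44.6%, Net Income: +46.7%, EPS: +83.6%
```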

The Paul Otellini Legacy

There will be books written about Paul Otellini and his eight years at the helm of Intel. A leader should be measured by the institution he or she leaves behind. I conclude those books will describe Intel in 2013 as excelling in managed innovation, systematic growth, and shrewd risk-taking:

Managed Innovation: Intel and other tech companies always are innovative. But Intel manages innovation among the best, on a repeatable schedule and with very high quality. That’s uncommon and exceedingly difficult to do with consistency. For example, the Tick-Tock model is a business school case study: churning out ground-breaking transistor technology, processors, and high-quality leading-edge manufacturing at a predictable, steady pace of engineering to volume manufacturing. This repeatable process is Intel’s crown jewel, and is a national asset.

Systematic Growth: Under Otellini, Intel made multi-billion dollar investments in each of the mobile, software, and intelligent systems markets. Most of the payback growth will come in the future, and will be worth tens of billions in ROI.

The company looks at the Total Addressable Market (TAM) for digital processors, decides what segments are most profitable now and in the near future, and develops capacity and go-to-market plans to capture top-three market share. TAM models are very common in the tech industry. But Intel is the only company constantly looking at the entire global TAM for processors and related silicon. With an IA computing continuum of products in place, plans to achieve more growth in all segments are realistic.

Shrewd Risk-Taking: The company is investing $35 billion in capital expenses for new chip-making plants and equipment, creating manufacturing flexibility, foundry opportunities, and demonstrating a commitment to keep at the forefront of chip-making technology. By winning the battle for cheaper and faster transistors, Intel ensures itself a large share of a growing pie while keeping competitors playing catch-up.

History and not analysts will grade the legacy of Paul Otellini as CEO at Intel. I am comfortable in predicting he will be well regarded.

Follow me on Twitter @PeterSKastner

Silvermont: Atom Steps Into the Spotlight

Intel unveiled its Silvermont architecture for 22nm and 14nm Atom chips yesterday. The billboard numbers are 5x lower power consumption and 3x more performance than the current Atom chips, which use the Saltwell architecture at 32nm. The first chips based on the Silvermont architecture, codenamed Bay Trail for tablets and Merrifield for smartphones, should start shipping by the end of 2013.

Highlight: Performance and Power Excellence vs. ARM

Intel projects the architecture will deliver significantly better performance, at lower power draw, than its ARM-based competition. Let’s get right to the fisticuffs.

Silvermont Performance/Power

In the chart above, Intel claims Silvermont-based Atom systems-on-a-chip (SoCs) will deliver more performance at lower battery draw in both dual-core (e.g., smartphone) and quad-core (e.g., tablet) uses, as of the product launch later this year. Moreover, Intel confidently predicts the dual-core Atom will beat quad-core ARM chips in performance and power usage. The gloves just came off.

Note, though, the fine print: these are projected CPU performance figures based on architectural simulations. We’ll have to wait for the product launch for real benchmark comparisons.

Is Intel just bluffing about wiping the floor with ARM on performance and power? We are strongly convinced that Intel is not bluffing: the launch videoconference was hosted at Intel’s investor relations portal, where SEC-material announcements are made. Who in their right mind would bring the SEC and the class-action bar down on their heads with unwarranted and unsupportable benchmarketing claims?

Architecture Highlights

Our readers don’t want the full computer-science firehose on how the architecture works; AnandTech has a good review. The important take-away points are:

  • Silvermont is a tour de force design that marries a custom version of Intel’s industry-leading, 22nm process with modern SoC design. It is optimized for low-power usage; new power-efficient design libraries were built and can be carried into other Intel architecture endeavors (i.e., Core).
  • Supports two to eight cores, in pairs. Each core has out-of-order execution (an Atom first), modern branch prediction, SIMD instructions, AES-NI security instructions, and Intel’s Virtualization Technology (VT). Each pair of cores shares 1MB of level-2 cache. The design goal was low power consumption without sacrificing performance.
  • Like Atom’s big brother, Core, there is extensive on-chip digital power management, including new power states. The SoC dynamically manages bursts of higher clock speeds and looks, at first glance, to be very sophisticated.
  • The overall dynamic power range is more efficient than ARM’s big.LITTLE approach.

Where Will Silvermont Be Used?

The obvious places are smartphones and tablets. Beyond noting the market attractiveness of full Windows 8 on a tablet, the choice of Google’s Android, and maybe even a dual boot, let’s leave the smartphone and tablet war for another day, when we can compare real products.

What we don’t hear today is talk about the likely growth for Silvermont-based Atom SoCs in markets other than phones and tablets. That’s a mistake, because Intel surely has these markets in its sights:

  • Netbooks: Remember the 2008 low-cost Internet-consumption notebooks killed by ARM/Android by 2011? They’ll be back in spades. Lump Google Chromebooks in this category too.
  • Automotive: The abject failure of Ford’s MyFord Touch entertainment system, using ARM and Microsoft embedded Windows, is the joke of the auto industry. Atom can play a role here, as automobiles are today a processor-rich environment.
  • Retail Systems: Point-of-sale and checkout systems cry for low-power, small form-factor devices. Ditto ATMs.
  • Digital Signage: The market for personal ads on digital signage is just arriving. This will become a large market later in the decade.
  • Embedded Systems: Intel’s 2009 acquisition of Wind River Systems aimed to do more in real-time, embedded systems for healthcare, manufacturing, distribution, automation, and other activities. Silvermont-generation Atom chips are a big step forward for these markets.

Closing Thoughts

An architecture is not a testable or buyable product. Nevertheless, Silvermont looks to be the real deal for performance and power, and ought to be giving ARM licensees heartburn.

With the introduction of products based on the Silvermont architecture, Atom becomes a hero. Not a hero brand, but a hero family of chips that moves out of the also-ran category and into the spotlight as a front-line performer in Intel’s many-chip continuum-of-computing strategy.

Silvermont is an important waypoint for measuring Intel’s commitment to, and delivery of, chips with competitive power consumption, SoC maturity, and a new phone/tablet/embedded-system workload target, all without dropping the ball in the rest of the business. The proof of the architecture will be the Bay Trail and Merrifield SoCs that start arriving by the holidays. And next month’s Haswell announcement will clearly show that Intel can juggle multiple balls.

On balance, we are very pleased with the benchmark points that Intel promises to meet or exceed. That’s the proof of the pudding.

Why CPU Upgrades Won’t End With 2014’s Broadwell

Soldered to the Motherboard

Intel announced that its 2014-era microprocessors, code-named Broadwell, will come in a Ball Grid Array (BGA) package. In English, that means a circuit package designed to be soldered to the motherboard.

Until Broadwell, desktop PCs were generally packaged to fit into mechanical sockets. The key benefits of a socket are twofold: the microprocessor CPU and motherboard can be sold separately and assembled by a do-it-yourselfer or systems builder; and the PC can later be upgraded with a (better, faster) microprocessor compatible with the socket. For example, you can put a 2012 Ivy Bridge microprocessor into a 2011 Sandy Bridge motherboard with modest effort.

A BGA future is a desktop problem, as recent notebooks have used soldered-down BGA packaging to achieve a slimmer height. More importantly, just about nobody pops open a notebook to upgrade the processor.

Desktop Upgrade Denial, Anger, Bargaining, Depression, Acceptance

With no sockets in Broadwell and subsequent chip families, the desktop PC enthusiast community has gone into the denial and anger phases of the five stages of grief.

The community’s belief is that they’ll face a one-shot motherboard-with-CPU purchase with no opportunity for a future performance upgrade. Moreover, there’s gnashing of teeth over the problems that arise when motherboard suppliers must make big-dollar inventory bets by soldering a particular microprocessor to a particular feature set on a motherboard product. The fears are much-reduced feature choice and a very expensive dead-on-arrival return process.

The desktop enthusiast community may be absolutely correct in their projections for the world after this year’s Haswell. However, I suspect the glass is more than half-full, not approaching empty.

Welcome to Upgrades as a Service (UaaS)

What if Broadwell-generation chips are indeed soldered to notebooks and desktop motherboards? That doesn’t mean upgrades are impossible. I think there is plenty of evidence that Intel has been quietly gearing up for the soldered-down future, a future where upgrades are possible and practical.

In 2010, Intel rolled out a two-phase project that allowed a few microprocessor versions to be upgraded over the Internet, unlocking features with an electronic payment. A lot of thought and e-commerce back-end software development went into this “experiment”. This is the secret sauce that would allow upgrades online.

My thought is that Intel is now about ready to roll out online updates to soldered-down Broadwell-generation microprocessors.

Want more cores, cache, CPU features, or Turbo headroom? There’s a price for each and a bundle for all.
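To make the idea concrete, here is a purely hypothetical sketch of such an upgrade catalog; the feature names, prices, and bundle discount are invented for illustration, not Intel offerings:

```python
# Hypothetical upgrade catalog: feature names and prices are invented
# for illustration only; Intel has announced no such SKUs or pricing.
UPGRADES = {
    "extra_cores":    50.0,   # unlock additional cores
    "extra_cache":    25.0,   # unlock more last-level cache
    "turbo_headroom": 30.0,   # unlock higher Turbo limits
}
BUNDLE_DISCOUNT = 0.20        # assumed 20% off for the full bundle

def quote(selected):
    """Price a set of unlocks; the full bundle gets the discount."""
    total = sum(UPGRADES[name] for name in selected)
    if set(selected) == set(UPGRADES):
        total *= 1 - BUNDLE_DISCOUNT
    return round(total, 2)

print(quote(["extra_cores"]))   # 50.0
print(quote(list(UPGRADES)))    # 84.0 (105 less 20%)
```

The point of a catalog like this is that the pricing logic lives server-side, so prices and bundles could be tuned without touching the silicon.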

UaaS Impact

For PC manufacturers, the uplifts could be applied at the factory, and the end product priced by feature set. The manufacturer’s benefit is fewer microprocessor SKUs (i.e., stock-keeping units), at the cost of an added feature-provisioning step in production.

Online and retail stores would also need fewer unique SKUs, since the upgrade could be done in-store or online by the end customer. The result: lower inventory costs and fewer margin-reducing sales of slow-selling chips.

Enterprise customers could upgrade individual knowledge-worker PCs with more performance for a special project, at a small fraction of the initial acquisition cost. In fact, it could be done at the line-of-business level with a company credit card.

The upgrade technology is also applicable to notebooks, creating a new upgrade revenue stream.

Intel itself would need far fewer SKUs, and would carry less inventory cost for each. This is not to say there would be one infinitely customizable Broadwell chip. But there would be no need for the 35 desktop SKUs we have today with Ivy Bridge.

There are gotchas with the online upgrade scheme, but the obvious problems also exist with today’s upgradable sockets. For example, the heat-dissipation envelope must stay aligned with the microprocessor’s heat generation: more voltage, more heat.
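The voltage-heat point follows from the standard dynamic-power approximation, P ≈ C·V²·f. A small sketch with illustrative numbers (the capacitance, voltages, and clocks below are placeholders, not chip specifications):

```python
# Dynamic power approximation: P ≈ C * V^2 * f.
# The capacitance and baseline operating points are illustrative only.
def dynamic_power(capacitance_nF, voltage_V, freq_GHz):
    """Switching power in watts: nF * V^2 * GHz cancels to W."""
    return capacitance_nF * 1e-9 * voltage_V**2 * freq_GHz * 1e9

base  = dynamic_power(1.0, 1.0, 3.0)   # 3.0 W at 1.0 V, 3.0 GHz
boost = dynamic_power(1.0, 1.2, 3.5)   # raise voltage and clocks for an upgrade
print(boost / base)                    # ~1.68x the heat to dissipate
```

Raising voltage 20% and clocks about 17% in this sketch raises dissipated power by roughly two-thirds, which is why any unlocked upgrade must respect the cooler the system shipped with.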

Final Thoughts

I think there’s a high probability that Intel will offer online upgrades to Broadwell desktops.

The idea reduces the industry’s increasing SKU complexity, leading to a leaner PC industry, which means higher potential profits. It gives enthusiasts a continued opportunity to pay more to get more performance.

Intel turns most chip sales into starter-homes with an upgrade annuity stream delivering software-industry margins to a hardware company. What’s not to like about that, Wall Street?

The technology to deliver the upgrades online has been in the field since 2010. The how-to-deliver-this lessons have been learned and tweaked.

So, I conclude online upgrades are the solution to Intel’s permanently soldered-down microprocessors.

Comments: Twitter @peterskastner

Intel microprocessor socket

Is Intel’s Tick-Tock About to Stutter-Step?

A widely reported story at TechSpot about a flaw in the forthcoming 2013 generation of Core microprocessors for PCs and notebooks (codename Haswell) led to speculation that Intel would delay some, but not all, Haswell chips until a fix in silicon could be made after a June launch. None of this is confirmed by Intel, so take the analysis in this blog post with a grain of salt.

The reason I’m writing anything is that Intel can ill afford a delay in its mainstream processor delivery schedule, for clear business reasons that I’ll outline below. And waiting on another stepping of Haswell would clearly cross a dateline that leads to all sorts of follow-on inefficiencies, ad infinitum. However, a stutter-step in production timing might get everything back on track in 2014.

The Ideal PC Annual Launch Schedule

The PC industry has a cadence which Intel has fine-tuned with its microprocessor launch and production schedule. My stylized version looks like this:

  • Year – 1: all the testing and sharing with partners for design purposes
  • Year – 1, December: begin shipping revenue units for partners’ early production and demo units. This looks good when reporting Q4 results to Wall Street in mid-January.
  • January: launch the year’s generation of technology at, say, the Consumer Electronics Show. This gets loads of press attention and helps freeze technology purchases until the products ship. January automobile shows serve the same purpose.
  • February and March: partners begin shipping hero products aimed at enthusiasts and thought-leaders, but not in high volumes. Also, get evaluation units into the hands of IT departments. The chip fabs are ramping efficiency now.
  • April and May: IT departments select the year’s PC and notebook standards and begin an annual refresh cycle by July.
  • By June: mainstream chip launch and the partners’ fall product roll-out. Collect orders from retailers.
  • July and August: Asia makes volume PCs and notebooks and ships them into retail.
  • August – September: Back-to-school is a major consumer refresh period, where the new products are on display and students, in particular, look for retail new tech purchases.
  • October: Asia PC manufacturers gear up for the holiday selling season. By now, the Intel fabs are running like clock-work, at high yields. The third major chip product launch of the year, typically Xeons for servers (but it could be Atoms for mobile devices in the future) occurs now.
  • November – December: Holiday tech purchasing is the largest-volume selling period of the year. Then the cycle repeats, with revenue shipments of the next year’s products before January.

We’re Not on the Ideal Launch Schedule

Haswell has not launched yet, and probably won’t until June, not January 2013. How did this happen? That’s not today’s topic, so I’ll spare you the drawn-out details. The synopsis: Sandy Bridge in 2011 had a chipset flaw, which halted shipments in January and slowed that year’s roll-out by a quarter, so that in January 2012 the industry was not ready (e.g., it had inventory and equipment to write off) for Ivy Bridge. In late 2012, a PC industry slowdown left everybody with inventory they wanted to burn off before starting on new kit for Haswell, which requires a new motherboard incompatible with Sandy or Ivy Bridge. So that’s how we got here.

Refer back to the ideal launch schedule. By June 30th, the industry needs to have introduced all of its performance and mainstream products, taken orders, and contracted Asia to produce the tens of millions of units needed for a successful back-to-school and holiday selling period.

That’s not going to happen in 2013. A big-bang product-and-production push by the PC industry starting in June, to catch up with August back-to-school, is too risky a business move to contemplate. The industry would not move as one; expect problems and finger-pointing. On top of that scenario, throw in the reported Haswell chip flaw that, at minimum, Intel wants fixed, but which may not delay the Haswell launch.

So, what’s the second half outlook for the PC industry?

Status Quo or Stutter Step?

Here are the three scenarios I see for the second half:

1. Launch Haswell in June with a big bang. Try to get the new products into retail by August, in depth and in volume, by compressing the introduction and production calendar. This is the least likely and highest-risk scenario.

2. Launch Haswell in June. Pretend June is January on the ideal calendar, and ramp Haswell in the second half alongside refreshed Ivy Bridge (i.e., third-generation Core) products. This creates consumer confusion, with two generations on sale at retail at once, but that’s been the reality anyway; it leaves the merchandising problem to the retail channel. This scenario is the low-risk, low-reward bet.

3. A stutter step in marching is a quick step or skip that gets an out-of-step marcher back in cadence. In this scenario, Intel would ramp Haswell in the second half alongside refreshed Ivy Bridge, as in Scenario 2. Haswell is the tock in Intel’s Tick-Tock cadence, a tock being a new architecture and a tick a shrink of that architecture. The stutter step in January 2014 would launch the 14nm shrink of Haswell, code-named Broadwell:

  • Bringing on Broadwell in January would get Intel and the industry back on the ideal schedule. The assumption is that Intel is ready for 14nm, and I have heard nothing to dissuade me from that assumption.
  • The ideal schedule is the lowest annual risk and the highest profits for the industry.
  • The status quo is a time bomb waiting for the next flaw. As discussed above, June is already a crapshoot for a big-bang launch into the holiday season. Any future delay (and I think the complexity of today’s chips favors flaws and delays) would likely overlap the biggest selling season with the newest technology launch. Ouch: cognitive dissonance for buyers.
  • Since Haswell and Broadwell are interchangeable at the motherboard level, risk is mitigated because a 14 nm fab hiccup could be remediated with Haswell chips.
  • Any move to speed up the annual cadence adds risk, and the stutter-step scenario cuts the economic life of Haswell to six-plus months, not twelve-plus. However, 14nm in volume is a double-digit percentage less expensive to manufacture, lowering chip costs and improving margins. Gentlemen, start your net-present-value spreadsheets.
  • Delaying 14 nm production until mid-2014 puts a huge crimp in Intel’s mobile strategy, which desperately needs to get ahead of the technology competition the sooner the better.
  • There are plenty of remaining uses for the idle 22nm fab equipment, including Ivy Bridge, remaining Haswell, and the 22nm Xeon and Atom products yet to launch in 2013.
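The net-present-value argument in the bullets above can be sketched with placeholder numbers; the cost reduction, unit volumes, and discount rate below are invented for illustration, not Intel data:

```python
# Toy NPV comparison for the stutter-step: pull 14 nm cost savings
# forward by two quarters. All figures are illustrative placeholders.
def npv(cashflows, annual_rate=0.10):
    """Discount quarterly cash flows at an annual rate."""
    q = (1 + annual_rate) ** 0.25          # quarterly discount factor
    return sum(cf / q**t for t, cf in enumerate(cashflows))

unit_cost_22nm = 40.0                      # assumed $ per chip at 22 nm
unit_cost_14nm = unit_cost_22nm * 0.85     # assume ~15% cheaper at 14 nm
units_per_quarter = 50e6                   # assumed shipment volume
savings = (unit_cost_22nm - unit_cost_14nm) * units_per_quarter

# Status quo: 14 nm volume arrives two quarters later (savings in quarters 2-5).
# Stutter step: 14 nm launches in January (savings in quarters 0-5).
status_quo   = npv([0.0, 0.0] + [savings] * 4)
stutter_step = npv([savings] * 6)
print((stutter_step - status_quo) / 1e9)   # extra present value, in $B
```

Under these assumptions, pulling the 14nm savings forward two quarters is worth several hundred million dollars in present value; plug in real numbers and the spreadsheet answers itself.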

Send your comments to Twitter: @peterskastner