
How Intel is Scaling to Meet the Decade’s Opportunities


Eighteen months ago, Intel announced it would address the world’s rapidly growing computing continuum by investing in variations on the Intel Architecture (IA). The announcement was met with a ho-hum. Now, many product families are beginning to emerge from the development labs and head toward production. All with IA DNA, these chip families are designed to be highly competitive in dozens of businesses that are new to Intel, to be produced in high volumes, and to deliver genuine value to customers and end users.

Intel is the only company with an architecture, cash flow, fabs, and R&D capable of scaling its computing engines up and down to meet the decade’s big market opportunities. What is Intel doing, and how can it pull this off?

The 2010s Computing Continuum
Today’s computing is a continuum that ranges from smartphones to mission-critical datacenter machines, and from desktops to automobiles. These devices represent a total addressable market (TAM) approaching a billion processors a year, a figure that will explode to more than two billion by the end of the decade. Of that total, traditional desktop microprocessors account for about 175 million chips this year, and notebooks for about 225 million.

For more than four decades, meeting all the world’s computing needs required multiple computer architectures, operating systems, and applications. That is hardly efficient for the world’s economy, but putting an IBM mainframe into a cell phone wasn’t practical. So we made do with multiple architectures and their inefficiencies.

In the 1990s, I advised early adopters NCR and Sequent in their plans for Intel 486-based servers. Those were desktop PC chips harnessed into datacenter server roles. Over the following twenty years, Intel learned from its customers to create and improve the Xeon server family of chips, and it has achieved a dominant role in datacenter servers.

Now, Intel Corporation is methodically using its world-class silicon design and fabrication capabilities to scale its industry-standard processors down to fit smartphones and embedded applications, and up into high-performance computing applications, as two examples. Scaling in other directions is still in the labs and under wraps.

The Intel Architecture (IA) Continuum
IA is Intel’s architecture: an instruction set that is common (with feature differentiation) to the Atom, Core, and Xeon microprocessors already used in the consumer electronics, desktop and notebook, and server markets, respectively. These microprocessors are able to run a common stack of software such as Java, Linux, or Microsoft Windows. IA also represents the hardware foundation for hundreds of billions of dollars in software application investments by enterprises and software package developers, investments that remain valuable assets as long as hardware platforms can run them. Backwards compatibility in IA has protected those software investments.
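
To make “common instruction set with feature differentiation” concrete, here is a minimal sketch in C, assuming GCC or Clang on an x86 machine (my illustration, not Intel sample code). Software queries CPUID at runtime to discover which optional extensions a given Atom, Core, or Xeon part offers, while the base instruction set stays the same:

    /* A minimal sketch: runtime feature detection on any IA processor.
       Assumes GCC/Clang on x86. Compile: gcc -o features features.c */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1 reports feature flags on Atom, Core, and Xeon alike. */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "CPUID leaf 1 not supported\n");
            return 1;
        }

        /* The base IA instruction set is identical; optional extensions differ. */
        printf("SSE2:   %s\n", (edx & bit_SSE2)   ? "yes" : "no");
        printf("SSE4.2: %s\n", (ecx & bit_SSE4_2) ? "yes" : "no");
        printf("AES-NI: %s\n", (ecx & bit_AES)    ? "yes" : "no");
        return 0;
    }

An application can fall back to plain-IA code paths when an extension is absent, which is how one software stack spans the whole product line.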

To meet the widely varying requirements of this decade’s computing continuum, Intel is using the DNA of IA to create application-specific variants of its microprocessors.  Think of this as silicon gene-splicing.  Each variant has its own micro-architecture that is suited for its class of computing requirements (e.g., Sandy Bridge for 2011 desktops and notebooks). These genetically-related processors will extend Intel into new markets, and include instruction-set compatible microprocessors:

  • Atom, the general-purpose computing heart of consumer-electronics mobile devices, tablets, and soon smartphones;
  • Embedded processors and electronics known as “systems on a chip” (SOCs) with an Atom core and customized circuitry for controlling machines, display signage, automobiles, and industrial products;
  • Core i3, i5, and i7 processors for business and consumer desktops and notebooks, with increasing numbers of variants for form-factor, low power, and geography;
  • Xeon processors for workstations and servers, with multiprocessor-capable variants advancing well into the mainframe-class, mission-critical computing segment;
  • Xeon variants for datacenter infrastructure (e.g., storage systems, with communications management a logical follow-on);
  • And others to follow as new technology markets unfold.

Can Intel Scale One Architecture Across the 2010s Computing Continuum?
It is fair to question whether Intel is capable of such a massive expansion of its base microprocessor product set. After all, smartphones have little in common with datacenter number crunchers, right? Actually, they have a lot in common: many of these variants are in essence “general purpose” processors. Processor speed, thermal envelope, wattage and power management, manageability, and chip geometry are all variables that are demonstrably well understood by Intel.
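
One reason those variables scale so predictably is the textbook dynamic-power relation, P ≈ αCV²f: power rises linearly with clock frequency but with the square of supply voltage. A back-of-the-envelope sketch in C, using openly hypothetical numbers rather than Intel specifications, shows why one set of circuit techniques can serve both a desktop socket and a handheld:

    /* Back-of-the-envelope dynamic power: P = alpha * C * V^2 * f.
       All operating-point numbers are illustrative assumptions,
       not published Intel specifications. */
    #include <stdio.h>

    static double dynamic_power(double activity, double cap_farads,
                                double volts, double hertz) {
        return activity * cap_farads * volts * volts * hertz;
    }

    int main(void) {
        /* Hypothetical desktop-class point: 1.2 V at 3.0 GHz. */
        double desktop  = dynamic_power(0.5, 1e-9, 1.2, 3.0e9);
        /* Hypothetical handheld point: 0.8 V at 1.0 GHz. */
        double handheld = dynamic_power(0.5, 1e-9, 0.8, 1.0e9);

        printf("desktop-class:  %.2f W\n", desktop);   /* 2.16 W */
        printf("handheld-class: %.2f W\n", handheld);  /* 0.32 W */
        return 0;
    }

Dropping voltage by a third and clock by two-thirds cuts dynamic power by roughly a factor of seven. That is the kind of lever Intel pulls when it scales one micro-architecture across the continuum.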

Moreover, Intel is the market leader at fabricating consistent (think “tick-tock” delivery process), reliable, high-volume silicon-based products.  It is also very adept at delivering the software tools and ecosystem needed to harness the hardware effectively.

At this juncture, only one technology company in the world could credibly propose a common computing architecture that scales from consumer electronics devices to datacenters: Intel Corporation. The company has the technology tools and know-how, cash flow, human resources, ability to execute, and vision to glue the billions of Internet-connected computing devices together in a single computing continuum. Intel views delivering such a continuum of IA-based hardware and software, matched to the computing continuum itself, as its brass-ring chance of the decade.

I conclude that IA, indeed, fits the vast majority of the decade’s computing requirements, and that Intel is singularly capable of creating the products to fill the expanding needs of the computing market (e.g., many-core).

The go-to-market strategy for each new market builds on the commonality of the IA continuum. The secret sauce is taking the DNA of IA and arduously and expensively scaling it to meet the specific technology requirements of individual markets. Let’s look at the very low end and the very high end as examples of such scaling.

Medfield, The Forthcoming Atom Hero
This week at the Mobile World Congress in Barcelona, Intel announced it was sampling chips for the next-generation Atom, code-named Medfield. This IA chip is the first to arguably approach the technology requirements of the mainstream smartphone market: the entire computer will fit onto a board the size of a postage stamp, compared to a credit card for the current Moorestown generation, and a 3″x5″ card for the previous generation, Menlow. Beyond confirming that Medfield is built on the same year-old 32 nm process as Westmere and Sandy Bridge, the company has not released additional details of its capabilities. Expect to see Medfield in smartphone and tablet products next year.

Meanwhile, the past year has seen more Atom-related activity than can be catalogued here: design wins, acquisitions (e.g., Infineon’s wireless unit), partnerships, and alliances.

The five-year big picture starts in 2006 with the first Atom, cobbled from an old Pentium design and built on a last-generation process. The result was a low-capability processor that nonetheless fanned the netbook and nettop markets. Today, Intel has thousands of employees working on new designs and go-to-market strategies to fit a wide variety of emerging markets:

  • Atom-N for netbooks;
  • Atom-D for nettops, the basic, entry-level desktop PCs;
  • Atom-CE for set-top boxes (e.g., Google TV, Boxee);
  • Atom-E for embedded applications (e.g., automotive, signage), an under-appreciated opportunity;
  • Atom-T for tablets;
  • Atom-Z for handhelds (e.g., smartphones).

Moreover, as I forecast last month, Intel is making multi-billion-dollar capital investments in next-generation fabs so that it can accelerate Atom designs faster than Moore’s Law, and so that it has the capacity to deliver more chips on the latest fabrication process. Having the fastest, smallest, lowest-leakage transistors is a competitive advantage worth pursuing. Having fab capacity is a prerequisite to taking market share.

In short, believing two years ago that Intel could scale Atom down to a competitive smartphone offering took a leap of faith or an insider’s perspective. Today, the evidence is strong enough that I would bet on Intel taking meaningful market share in Atom-centric markets over the next several years. It took Intel more than ten years to win a majority share of the server chip market, and we’re approaching five years on the Atom path to maturity. I take a long-term view here.

Next Up, Many Integrated Cores
At the other end of the computing continuum from Atom applications is high-performance computing (HPC). Intel is scaling the high end with Many Integrated Cores (MIC), a forthcoming micro-architecture built from IA cores. The world’s number one supercomputer today is in China; it consists of thousands of Xeon server-class processors mixed with NVIDIA Tesla graphics processor cards.

Intel is investing in MIC in order to have a scalable, all-IA solution to the world’s toughest scientific, technical, engineering, and research workloads, with one software application base instead of the need to learn and program multiple architectures.
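
Intel has not yet published the MIC programming model, so take this as a sketch of the “one software base” idea using standard OpenMP in C, nothing more: the same source code, recompiled, spreads its work across however many IA cores the target offers, whether a handful in a Xeon or dozens in a MIC part.

    /* One source, any number of IA cores: a standard OpenMP sketch
       (my illustration; Intel's actual MIC toolchain is unannounced).
       Compile: gcc -fopenmp -O2 dot.c */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    static double a[N], b[N];

    int main(void) {
        double sum = 0.0;

        for (int i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; }

        /* The OpenMP runtime divides the iterations among all cores. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i] * b[i];

        printf("dot = %.1f using up to %d threads\n",
               sum, omp_get_max_threads());
        return 0;
    }

The appeal to HPC shops is that code like this needs no rewrite as core counts climb; contrast that with porting to a separate GPU programming model.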

MIC’s heritage is Larrabee, a graphics processor card that was cancelled in 2009 without ever being brought to market. The architecture, hardware, and software technology lessons learned are recast in the first MIC product, code-named Knights Corner, which I expect to be announced this year, perhaps as early as the Intel Developer Forum in September.

Challenges
1. The most difficult technology challenge that Intel faces this decade is not hardware but software. Internally, the growing list of must-deliver software, such as drivers for processor-integrated graphics, means that the rigid two-year tick-tock hardware model must also accommodate software delivery schedules.

Larrabee was an apparent example of misaligned hardware and software schedules. The need for longer software development schedules, especially for testing, is forcing Intel to have hardware working earlier in the tick-tock cycle. That leads to better quality, but more software creates greater inherent schedule and business risks.

2. In the consumer space, such as smartphones, Intel’s ability to deliver applications and a winning user experience is limited by the company’s OEM distribution model. More emphasis needs to be placed on the end-user application ecosystem, in both quality and quantity.

3. By the end of the decade, silicon fabrication will be under 10 nm, and it is a lot less clear how Moore’s Law will perform in the 2020s.

4. Externally, software parallelism remains a sticky problem: Intel creates more parallel threads in hardware, but programmers are slow to distribute their applications across that hardware. This is a computer science problem of considerable difficulty (see the sketch following this list). Intel is investing heavily with professional product developers, and its parallel software tools are widely used and admired. However, that may not be enough to unblock the parallel software bottleneck.

5. The market challenges for Intel are also great opportunities. About 10% of cell phones are smartphones today, but that share will grow toward 80% by the end of the decade. The competitive feeding frenzy led by Apple’s iPhone and now iPad will drive competition and innovation at an enormous rate.
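
To illustrate the parallelism problem from challenge 4, here is a hedged sketch in C of how quickly things go wrong: slapping a parallel-for pragma on an innocent-looking loop creates a data race, and the correct version forces the programmer to reason about shared state, which is exactly the scarce skill.

    /* Why hardware threads don't parallelize software for free.
       A naive "#pragma omp parallel for" alone would race on hist[]:
       two threads can read-modify-write the same bucket at once and
       lose updates. The atomic below restores correctness at a cost.
       Compile: gcc -fopenmp histogram.c */
    #include <stdio.h>

    #define N 1000000

    static int data[N];

    int main(void) {
        long hist[16] = {0};

        for (int i = 0; i < N; i++) data[i] = i % 16;

        #pragma omp parallel for
        for (int i = 0; i < N; i++) {
            #pragma omp atomic
            hist[data[i]]++;
        }

        /* Every bucket should hold exactly N/16 = 62500 entries. */
        printf("bucket 0 holds %ld entries\n", hist[0]);
        return 0;
    }

Restructuring such loops so that threads rarely touch shared data is where the real effort, and Intel’s tools investment, goes.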

What’s the Competition Doing?
Intel’s competition comes on two fronts: AMD in PCs, notebooks, and servers (and in tablets by year-end), and ARM Holdings’ 1,200 licensees in the smartphone, tablet, and embedded markets.

For AMD, 2011 is shaping up as a game of Battleship. In December, AMD fired off Brazos, and Intel responded in January with Sandy Bridge. The two companies will have at least three other exchanges this year. From a scaling standpoint, AMD is competitive from netbooks up to servers. But AMD is conceding the smartphone space to Intel and ARM.

ARM micro-architectures are very power-efficient RISC designs that are well suited to electronic gadgets and embedded applications. The 1,200 ARM licensees all do things a little differently in assembling their chips, so it’s not an apples-to-apples comparison to talk about “ARM vs. Intel”. Recent announcements of quad-core ARM chip designs should not be directly compared to Intel’s quad-cores; Intel (and AMD) get much more work done in each clock tick. ARM will need quad-cores and more to move up into entry-PC applications that require more processor horsepower. As for 64-bit ARM in the server markets, I’ll believe it when I see it. Lots of wimpy (ARM) cores are not the answer to datacenter scaling issues.
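
The “work done in each clock tick” argument reduces to a toy throughput model: throughput ≈ cores × clock × instructions per clock (IPC). The C sketch below uses openly hypothetical numbers, since neither camp publishes comparable IPC figures, but it shows why counting cores alone misleads.

    /* Toy throughput model: cores x clock x IPC.
       The clock and IPC values are hypothetical placeholders,
       not measured vendor data. */
    #include <stdio.h>

    static double gips(int cores, double ghz, double ipc) {
        return cores * ghz * ipc;  /* billions of instructions/sec */
    }

    int main(void) {
        /* Hypothetical quad-core, low-power RISC gadget chip. */
        double gadget  = gips(4, 1.0, 1.0);
        /* Hypothetical quad-core desktop-class IA chip. */
        double desktop = gips(4, 3.0, 2.5);

        printf("gadget:  %.1f GIPS\n", gadget);   /*  4.0 */
        printf("desktop: %.1f GIPS\n", desktop);  /* 30.0 */
        return 0;
    }

Same core count, very different work per second; that is the gap ARM licensees must close to move up-market, and the power budget Intel must shrink to move down.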

Observations and Conclusions
Intel is likely to get a small piece of the smartphone pie over the next couple of years, but could, with continuing investment and sustained concentration, be a major player by mid-decade. Surely that is the company’s goal. SOCs are another multi-billion-dollar opportunity, one that will unfold slowly because product design cycles are much longer. Yet autos alone are a 35-million-unit opportunity, growing to 60 million late in the decade.

Intel’s new market investments are best watched over a period of years; success will be hard to measure quarter-by-quarter.

Therefore, and here I speak to Wall Street’s quarter-driven analysts, give the smartphone, tablet, and embedded initiatives at least two more years before calling long-term winners and losers.

Intel is taking its proven IA platforms and modifying them to scale competitively as existing markets evolve and as new markets such as smartphones emerge.  To do this, Intel discarded the PC beige-box mentality in favor of application-specific silicon form-factors and product requirements implemented in individual micro-architectures.

IA scales from handhelds to mission-critical enterprise applications, all able to benefit from a common set of software development tools while protecting the vast majority of the world’s software investments. Moreover, IA and Intel itself are evolving to meet the needs of a spectrum of computing made personal: the idea that a person will have multiple computing devices that match the user’s time, place, and needs.

Intel is the only company with an architecture, cash flow, fabs, and R&D capable of scaling its computing engines up and down to meet the decade’s big market opportunities.

My conclusion is that Intel’s self-interest in growth is pushing the company to innovate in new markets, which will benefit computer buyers.  But the company is also hugely invested in the worldwide growth of a middle class in emerging markets, which expands the universe by hundreds of millions of potential customers.

A decade hence, we will all look back, amazed at how far personal computing has come.

Biography

Peter S. Kastner is an industry analyst with over forty years’ experience in application development, datacenter operations, computer industry marketing, and market research. He was a co-founder of industry-watcher Aberdeen Group in 1989. His firm, Scott-Page LLC, consults with technology companies and technology users.
