Pulse Check: How Intel is Scaling to Meet the Decade’s Opportunities

Eighteen months ago, Intel announced it would address the world’s rapidly growing computing continuum by investing in variations on the Intel Architecture (IA). It was met with a ho-hum. Now, many product families are beginning to emerge from the development labs and head towards production. All with IA DNA, these chip families are designed to be highly competitive in literally dozens of businesses new to Intel, to be produced in high volumes, and to deliver genuine value to customers and end users.

Intel is the only company with an architecture, cash flow, fabs, and R&D capable of scaling its computing engines up and down to meet the decade’s big market opportunities. What is Intel doing, and how can it pull this off?

The 2010’s Computing Continuum
Today’s computing is a continuum that ranges from smartphones to mission-critical datacenter machines, and from desktops to automobiles. These devices represent a total addressable market (TAM) approaching a billion processors a year, one that will explode to more than two billion by the end of the decade. Of that, traditional desktop microprocessors account for about 175 million chips this year, and notebooks for 225 million.

For more than four decades, solving all the world’s computing opportunities required multiple computer architectures, operating systems, and applications. That is hardly efficient for the world’s economy, but putting an IBM mainframe into a cell phone wasn’t practical. So we made do with multiple architectures and their inefficiencies.

In the 1990’s, I advised early adopters NCR and Sequent in their plans for Intel 486-based servers. Those were desktop PC chips harnessed into datacenter server roles. Over twenty years, Intel learned from its customers to create and improve the Xeon server family of chips, and has achieved a dominant role in datacenter servers.

Now, Intel Corporation is methodically using its world-class silicon design and fabrication capabilities to scale its industry-standard processors down to fit smartphones and embedded applications, and up into high-performance computing applications, as two examples. Scaling in other directions is still in the labs and under wraps.

The Intel Architecture (IA) Continuum
IA is Intel’s architecture and an instruction set that is common (with feature differentiation) to the Atom, Core, and Xeon microprocessors already used in the consumer electronics, desktop and notebook, and server markets, respectively. These microprocessors are able to run a common stack of software such as Java, Linux, or Microsoft Windows. IA also represents the hardware foundation for hundreds of billions of dollars in software application investments by enterprises and software package developers; those investments remain valuable assets as long as hardware platforms can run them, and backwards compatibility in IA has protected them.

To meet the widely varying requirements of this decade’s computing continuum, Intel is using the DNA of IA to create application-specific variants of its microprocessors.  Think of this as silicon gene-splicing.  Each variant has its own micro-architecture that is suited for its class of computing requirements (e.g., Sandy Bridge for 2011 desktops and notebooks). These genetically-related processors will extend Intel into new markets, and include instruction-set compatible microprocessors:

  • Embedded processors and electronics known as “systems on a chip” (SOCs) with an Atom core and customized circuitry for controlling machines, display signage, automobiles, and industrial products;
  • Atom, the general-purpose computer heart of consumer electronics mobile devices, tablets, and soon smartphones;
  • Core i3, i5, and i7 processors for business and consumer desktops and notebooks, with increasing numbers of variants for form-factor, low power, and geography;
  • Xeon processors for workstations and servers, with multi-processor-capable advances well into the mainframe-class, mission-critical computing segment;
  • Xeon datacenter infrastructure processor variants (e.g., storage systems, and with communications management a logical follow-on);

A Pause to Bring You Up To Date
Please do not be miffed: all of the above was published in February 2011, more than two years ago. We included it here because it sets the stage for reviewing where Intel stands in delivering on its long-term strategy and plans for the IA computing continuum, and to remind readers that Intel’s strategy has been hiding in plain sight for going on five years.

In that piece two years ago, we concluded that IA fits the vast majority of the decade’s computing requirements, and that Intel is singularly capable of creating the products to fill the expanding needs of the computing market (e.g., many-core).

With the launch of the 4th Generation Core 22nm microprocessors (code-name Haswell) this week and the announcement of the code-name Baytrail 22nm Atom systems on a chip (SoCs), it’s an appropriate time to take the pulse of Intel’s long-term stated direction and the products that map to the strategy.

Systems on a Chip (SoCs)
The Haswell/Baytrail launch would be a lot less impressive if Intel had not mastered the SoC.

The benefits of an SoC compared to the traditional multi-chip approach Intel has used up until now are: fewer components, less board space, greater integration, lower power consumption, lower production and assembly costs, and better performance. Phew! Intel could not build a competitive smartphone until it could put all of the logic for a computer onto one chip.

This week’s announcements include SoCs for low-voltage notebooks, tablets, and smartphones. The data center Atom SoCs, code-name Avoton, are expected later this year.

For the first time, Intel’s mainstream PC, data center, and mobile businesses include highly competitive SoCs.

SoCs are all about integration. The announcement last month at Intel’s annual investor meeting that “integration to innovation” was an additional strategy vector for the company hints at using many more variations of SoCs to meet Intel’s market opportunities with highly targeted SoC-based variants of Atom, Core, and Xeon.

Baytrail, The Forthcoming Atom Hero
With the SoCs for Baytrail in tablets and Merrifield in smartphones, Intel can for the first time seriously compete for mobile market share against ARM competitors on performance-per-watt and performance. These devices are likely to run the Windows 8, Android, and Chrome operating systems. They will be sold to carriers globally. There will be variants for local markets (e.g., China and Africa).

The smartphone and tablet markets combined exceed the PC market. By delivering competitive chips that run thousands of legacy apps, Intel has finally caught up on the technology front of the mobile business.

Along with almost the entire IT industry, Intel missed the opportunity that became the Apple iPhone. Early Atom processors were not SoCs, had poor battery life, and were relatively expensive. That’s a deep hole to climb out of. But Intel has done just that. There are a lot fewer naysayers than two years ago. The pendulum is now swinging Intel’s way on Atom. 2014 will be the year Intel starts garnering serious market share in mobile devices.

4th Generation Core for Mainstream Notebooks and PCs
Haswell is a new architecture implemented in new SoCs for long-battery-life notebooks, and with traditional chipsets for mainstream notebooks and desktops. The architecture moves the bar markedly higher in graphics performance, power management, and floating point (e.g., scientific) computations.

We are rethinking our computing model as a result of Haswell notebooks and PCs. Unless you are an intense gamer or workstation-class content producer, we think a notebook-technology device is the best solution.

Compared to four-year-old notebooks in Intel’s own tests, Haswell-era notebooks are half the weight and half the height, get work done 1.8x faster, convert videos 23x faster, play popular games 26x faster, wake up and go in a few seconds, and deliver 3x the battery life for HD movie playback. Why be tethered to a desktop?

Black, breadbox-size desktops are giving way to all-in-one (AIO) designs like the Apple iMac used to write this blog. That iMac has been running for two years at 100% CPU utilization with no problems. (It does medical research in the background folding proteins). New PC designs use notebook-like components to fit behind the screen. You’ll see AIOs this fall that lie flat as large tablets or go vertical with a rear kick-stand. With touch screen, wireless Internet and Bluetooth peripherals, these new AIOs are easily transportable around the house. That’s the way we see the mainstream desktop PC evolving.

And PCs need to evolve quickly. Sales are down almost 10% this year. One reason is global macro-economic conditions. But everybody knows the PC replacement cycle has slowed to a crawl. Intel’s challenge is to spark that replacement cycle. Haswell PCs and notebooks, as noted above, deliver a far superior experience than users are putting up with on their old, obsolescent devices.

Xeon processors for workstations, servers, storage, and communications
The data center is a very successful story for Intel. The company has steadily gained workloads from traditional (largely legacy Unix) systems; grown share in the big-ticket Top 500 high-performance computing segment; evolved with mega-datacenter customers such as Amazon, Facebook, and Google; and extended Xeon into storage and communications processors inside the datacenter.

The Haswell architecture includes two additions of great benefit to data-center computing. First, a new floating-point architecture and instructions should improve scientific and technical computing throughput by up to 60%, a huge gain over the installed server base. Second, transactional memory is a technology that makes it easier for programmers to deliver fine-grain parallelism, and hence to take advantage of multiple cores with multi-threaded programs, including making operating systems and systems software like databases run more efficiently.
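Transactional memory’s programming model is optimistic: run the critical section speculatively, then commit only if no other thread interfered, retrying otherwise. Haswell does this in hardware (TSX), but the read/compute/commit-or-retry shape can be sketched in plain Python. The `VersionedCell` class below is purely illustrative, not Intel’s API; the internal lock merely stands in for the atomic hardware commit:

```python
import threading

class VersionedCell:
    """A cell supporting optimistic, TM-style updates: read, compute,
    then commit only if no other thread wrote in between."""
    def __init__(self, value=0):
        self.value = value
        self.version = 0
        self._lock = threading.Lock()  # stands in for the hardware commit step

    def read(self):
        with self._lock:
            return self.value, self.version

    def try_commit(self, expected_version, new_value):
        # Succeeds only if the cell is unchanged since our read; a failed
        # commit is analogous to an aborted hardware transaction.
        with self._lock:
            if self.version != expected_version:
                return False
            self.value = new_value
            self.version += 1
            return True

def transactional_add(cell, delta):
    while True:  # retry loop: re-execute after an "abort"
        value, version = cell.read()
        if cell.try_commit(version, value + delta):
            return

cell = VersionedCell()
threads = [threading.Thread(target=transactional_add, args=(cell, 1))
           for _ in range(100)]
for t in threads: t.start()
for t in threads: t.join()
print(cell.value)  # 100 -- no lost updates, and no programmer-managed locking
```

The appeal for systems software is exactly this: the caller writes a simple retry loop instead of designing a fine-grain locking scheme, and the hardware sorts out the (usually rare) conflicts.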

In the past year, the company met one data-center threat, GPU-based computing, with Xeon Phi, a server add-in card that contains dozens of IA cores running a version of Linux to enable massively parallel processing. Xeon Phi competes with GPU-based challengers from AMD and nVidia.

Another challenge, micro-servers, is more a vision than a market today. Nevertheless, Intel created the code-name Avoton Atom SoC for delivery later this year. Avoton will compete against emerging AMD- and ARM-based micro-server designs.

Challenges
1. The most difficult technology challenge that Intel faces this decade remains software, not hardware.  Internally, the growing list of must-deliver software drivers for hardware such as processor-integrated graphics means that the rigid two-year, tick-tock hardware model must also accommodate software delivery schedules.

Externally, Intel’s full-on assault on the mobile market requires exquisite tact in dealing with the complex relationships with key software/platform merchants: Apple (iOS), Google (Android), and Microsoft (Windows), all of whom are also tough competitors.

In the consumer space such as smartphones, Intel’s ability to deliver applications and a winning user experience are limited by the company’s OEM distribution model. More emphasis needs to be placed on the end-user application ecosystem, both quality and quantity. We’re thinking more reference platform than reference hardware.

2. By the end of the decade, silicon fabrication will be under 10 nm, and it is a lot less clear how Moore’s Law will perform in the 2020’s. Nevertheless, we are optimistic about the next 10-12 years.

3. The company missed the coming iPhone and lost out on a lot of market potential. That can’t happen again. Last month the company set up a new emerging-devices division charged with finding the next big thing around the same time others do.

4. In the past, we’ve believed that mobile devices — tablets and smartphones — were additive to PCs and notebooks, not substitutional. The new generation of Haswell and Baytrail mobile devices, especially when running Microsoft Windows, offer the best of the portable/consumption world together with the performance and application software (i.e., Microsoft Office) to produce content and data. Can Intel optimize the market around this pivot point?

Observations and Conclusions
Our summary observations have not changed in two years, and are reinforced by the Haswell/Baytrail SoCs that are this week’s proof point:

  • Intel is taking its proven IA platforms and modifying them to scale competitively as existing markets evolve and as new markets such as smartphones emerge.
  • IA scales from handhelds to mission-critical enterprise applications, all able to benefit from a common set of software development tools and protecting the vast majority of the world’s software investments.  Moreover, IA and Intel itself are evolving to specifically meet the needs of a spectrum of computing made personal, the idea that a person will have multiple computing devices that match the time, place and needs of the user.
  • Intel is the only company with an architecture, cash flow, fabs, and R&D capable of scaling its computing engines up and down to meet the decade’s big market opportunities.

Looking forward, Intel has fewer and less critical technology challenges than at any point since the iPhone launch in 2007. Instead, the company’s largely engineering-oriented talent must help the world through a complex market-development challenge as we all sort out what devices are best suited for what tasks. We’ve only scratched the surface of convertible tablet/notebook designs. How can Intel help consumers decide what they want and need so the industry can make them profitably? How fast can Intel help the market to make up its mind? Perhaps the “integration to innovation” initiative needs a marketing component.

If the three-year evolving Ultrabook campaign is an example of how Intel can change consumer tastes, then we think industry progress will be slower than optimal. A “win the hearts and minds” campaign is needed, one that learns from the lessons of the Ultrabook evolution. Influencing and moving markets takes skillsets Intel will need more of as personal computing changes over the next decade, for example as perceptual computing morphs the user interface.

Absent a macro-economic melt-down, Intel is highly likely to enjoy the fruits of five years of investments over the coming two-year life of the Haswell architecture. And there’s no pressing need today to focus beyond 2015.

Biography

Peter S. Kastner is an industry analyst with over forty-five years’ experience in application development, datacenter operations, computer industry marketing, PCs, and market research. He was a co-founder of industry-watcher Aberdeen Group in 1989. His firm, Scott-Page LLC, consults with technology companies and technology users.

Twitter: @peterskastner

Haswell Core i7 desktop microprocessor

Silvermont: Atom Steps Into the Spotlight

Intel unveiled its Silvermont architecture for Atom 22nm and 14nm chips yesterday. The billboard numbers are 5x lower power consumption and 3x more performance than the current Atom chips, which use the Saltwell architecture at 32nm. The first chips based on the Silvermont architecture, codenamed Baytrail for tablets and Merrifield for smartphones, should start shipping by the end of 2013.

Highlight: Performance and Power Excellence vs. ARM

Intel projects the architecture will deliver significantly better performance, at lower power draw, than its ARM-based competition. Let’s get right to the fisticuffs.

Silvermont Performance/Power

In the chart above, Intel claims Silvermont-based Atom systems-on-a-chip (SoCs) will deliver more performance at lower battery draw in both dual-core (e.g., smartphone) and quad-core (e.g., tablet) uses — at the time of product launch later this year. Moreover, Intel confidently predicts the dual-core Atom will beat quad-core ARM chips in performance and power usage. The gloves just came off.

Note, though, in the fine print that these are projected CPU performance figures based on architectural simulations. We’ll have to wait for the product launch for real benchmark comparisons.

Is Intel just bluffing about wiping the floor with ARM on performance and power? We are strongly convinced that Intel is not bluffing; the launch videoconference was hosted at INTC.com, Intel’s investor-relations portal where SEC-material announcements are made. Who in their right mind would want to bring the SEC and the class-action bar down on their heads with unwarranted and unsupportable benchmarketing claims?

Architecture Highlights

Our readers don’t want the full computer-science firehose on how the architecture works; AnandTech has a good review. The important take-away points are:

  • Silvermont is a tour de force design that marries a custom version of Intel’s industry-leading, 22nm process with modern SoC design. It is optimized for low-power usage; new power-efficient design libraries were built and can be carried into other Intel architecture endeavors (i.e., Core).
  • Supports 2-8 cores in pairs. Each core has out-of-order execution (an Atom first), modern branch prediction, SIMD instructions, AES-NI security instructions, and Intel’s Virtualization Technology (VT). Each pair of cores shares 1MB of level 2 cache. The design goal was low power consumption without sacrificing performance.
  • Like Atom’s big brother, Core, there is extensive on-chip digital power management, including new power states. The SoC dynamically manages bursts of higher clock speeds, and looks at first glance to be very sophisticated.
  • The overall dynamic power range is more efficient than ARM’s big.LITTLE approach.

Where Will Silvermont Be Used?

The obvious places are smartphones and tablets. Other than mentioning the market attractiveness of full Windows 8 on a tablet, the choice of Google’s Android, and maybe even a dual boot, let’s leave the smartphone and tablet war until another day, when we can compare real products.

What we don’t hear today is talk about the likely growth for Silvermont-based Atom SoCs in markets other than phones and tablets. That’s a mistake because Intel surely has these markets in its sights:

  • Netbooks: Remember the 2008 low-cost Internet-consumption notebooks killed by ARM/Android by 2011? They’ll be back in spades. Lump Google Chromebooks in this category too.
  • Automotive: The abject failure of Ford’s MyFord Touch entertainment system, using ARM and Microsoft Embedded Windows, is the joke of the auto industry. Atom can play a role here, as automobiles are today a processor-rich environment.
  • Retail Systems: Point-of-sale and checkout systems cry for low-power, small form-factor devices. Ditto ATMs.
  • Digital Signage: The market for personal ads on digital signage is just arriving. This will become a large market later in the decade.
  • Embedded Systems: Intel’s 2009 acquisition of Wind River Systems aimed to do more in real-time, embedded systems for healthcare, manufacturing, distribution, automation, and other activities. Silvermont-generation Atom chips are a big step forward for these markets.

Closing Thoughts

An architecture is not a testable or buyable product. Nevertheless, Silvermont looks to be the real deal for performance and power, and ought to be giving ARM licensees heartburn.

With the introduction of products based on the Silvermont architecture, Atom becomes a hero. Not a hero brand, but a hero family of chips that move out of the also-ran category to being in the spotlight as front-line performers in Intel’s many-chip continuum of computing strategy.

Silvermont is an important waypoint for measuring Intel’s commitment to, and delivery of, chips with competitive power consumption, SoC maturity, and a new phone/tablet/embedded-system workload target, all without dropping the ball in the rest of the business. The proof of the architecture will be the Baytrail and Merrifield SoCs that start arriving by the holidays. And the Haswell announcement next month will clearly show that Intel can juggle multiple balls.

On balance, we are very pleased with the benchmark points that Intel promises to meet or exceed. That’s the proof of the pudding.

Why CPU Upgrades Won’t End With 2014’s Broadwell

Soldered to the Motherboard

Intel announced that its 2014-era microprocessors code-name Broadwell will come in a Ball Grid Array (BGA) package. In English, that means a circuit package made to be soldered to the motherboard.

Until Broadwell, desktop PC processors have generally been packaged to fit into mechanical sockets. The key benefits of a socket are twofold: the microprocessor and motherboard can be sold separately and assembled by a do-it-yourselfer or systems builder; and the PC can later be upgraded with a (better, faster) microprocessor compatible with the socket. For example, you can put a 2012 Ivy Bridge microprocessor into a 2011 Sandy Bridge motherboard with modest effort.

A BGA future is a desktop problem, as recent notebooks have used soldered-down BGA packaging to achieve a slimmer height. More importantly, just about nobody pops open a notebook to upgrade the processor.

Desktop Upgrade Denial, Anger, Bargaining, Depression, Acceptance

With no sockets in Broadwell and subsequent chip families, the desktop PC enthusiast community has gone into the denial and anger phases of the five stages of grief.

The community belief is that they’ll face a one-shot motherboard-with-CPU purchase with no opportunity for a future performance upgrade. Moreover, there’s gnashing of teeth over the likely problems when motherboard suppliers have to make big-dollar inventory bets by soldering a microprocessor to a particular feature-set on a motherboard product. The fears are much-decreased feature choice and a very expensive dead-on-arrival return process.

The desktop enthusiast community may be absolutely correct in their projections for the world after this year’s Haswell. However, I suspect the glass is more than half-full, not approaching empty.

Welcome to Upgrades as a Service (UaaS)

What if Broadwell-generation chips are indeed soldered to notebooks and desktop motherboards? That doesn’t mean upgrades are impossible. I think there is plenty of evidence that Intel has been quietly gearing up for the soldered-down future, a future where upgrades are possible and practical.

In 2010, Intel rolled out a two-phase project that allowed a few microprocessor versions to be upgraded over the Internet, unlocking features with an electronic payment. A lot of thought and e-commerce back-end software development went into this “experiment”. This is the secret sauce that would allow upgrades online.

My thought is that Intel is now about ready to roll out online updates to soldered-down Broadwell-generation microprocessors.

Want more cores, cache, CPU features, or Turbo headroom? There’s a price for each and a bundle for all.
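Intel has published no details of how such unlocks would work, but conceptually the mechanism only needs a signed entitlement that the processor (or its firmware) verifies before enabling features. Here is a minimal sketch of that idea; every name in it (`issue_entitlement`, the feature flags, the signing key) is hypothetical, not Intel’s actual scheme:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"factory-provisioned secret"  # hypothetical per-vendor key

def issue_entitlement(chip_serial, features):
    """Vendor side: sign a feature bundle for one specific chip."""
    payload = json.dumps({"serial": chip_serial, "features": sorted(features)},
                         sort_keys=True).encode()
    tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_entitlement(chip_serial, payload, tag):
    """Chip side: enable features only if the signature checks out."""
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return []  # tampered token: unlock nothing
    data = json.loads(payload)
    return data["features"] if data["serial"] == chip_serial else []

payload, tag = issue_entitlement("CPU-0001", ["extra_cores", "turbo_headroom"])
print(verify_entitlement("CPU-0001", payload, tag))  # unlocks both features
print(verify_entitlement("CPU-0002", payload, tag))  # [] -- wrong chip
```

The binding to a chip serial number is what makes the annuity model work: a paid unlock travels with one processor and cannot be replayed across a fleet.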

UaaS Impact

For PC manufacturers, the uplifts could be made at the factory, and the end product priced based on feature set. The manufacturer benefit is fewer microprocessor SKUs (i.e., stock-keeping units), at the cost of a new feature-configuration step on the production line.

Online and retail stores would also require fewer unique SKUs, since the upgrade could be done in-store or online by the end customer. That means lower inventory costs and fewer margin-reducing clearance sales of slow-selling chips.

Enterprise customers could upgrade individual knowledge-worker PCs with more performance for a special project at a small fraction of the initial acquisition costs. In fact, it would be done at the line-of-business level with a company credit card.

The upgrade technology is also applicable to notebooks, creating a new upgrade revenue stream.

Intel itself would need far fewer SKUs, and would carry lower inventory costs for each. This is not to say that there would be one Broadwell chip that could be infinitely customized. But there would be no need for the 35 desktop SKUs that we have today with Ivy Bridge.

There are gotchas with the online upgrade scheme, but the obvious problems also exist with today’s upgradable sockets. For example, keeping the heat-dissipation envelope aligned with the microprocessor’s heat generation: more voltage, more heat.

Final Thoughts

I think there’s a high probability that Intel will offer online upgrades to Broadwell desktops.

The idea reduces the industry’s increasing SKU complexity, leading to a leaner PC industry, which means higher potential profits. It gives enthusiasts a continued opportunity to pay more to get more performance.

Intel turns most chip sales into starter-homes with an upgrade annuity stream delivering software-industry margins to a hardware company. What’s not to like about that, Wall Street?

The technology to deliver the upgrades online has been in the field since 2010. The how-to-deliver-this lessons have been learned and tweaked.

So, I conclude online upgrades are the solution to Intel’s permanently soldered-down microprocessors.

Comments: Twitter @peterskastner

Intel microprocessor socket

The 2013-2014 Computing Forest – Part 1: Processors

Ignoring the daily tech trees that fall in the woods, let’s explore the computer technology forest looking out a couple of years.
Those seeking daily comments should follow @peterskastner on Twitter.

Part 1: Processors

Architectures and Processes

Intel’s Haswell and Broadwell

We’ll see a new X86 architecture in the first half of 2013, code-name Haswell. The Haswell chips will use the 22 nm fabrication process introduced in third-generation Intel Core chips (aka Ivy Bridge). Haswell is important for extending electrical efficiency, improving performance per clock tick, and as the vehicle for Intel’s first system on a chip (SoC), which combines a dual-core processor, graphics, and IO in one unit.

Haswell is an architecture, and the benefits of the architecture carry over to the various usage models discussed in the next section.

I rate energy efficiency as the headline story for Haswell. Lightweight laptops like Ultrabooks (an Intel design) and Apple’s MacBook Air will sip the battery at around 8 watts, half of today’s 17 watts. This will dramatically improve the battery life of laptops but also smartphones and tablets, two markets that Intel has literally built $5 billion fabs to supply.

The on-chip graphics capabilities have improved by an order of magnitude in the past couple of years and will get better over the next two. Like the main processor, the GPU benefits from improved electrical efficiency. In essence, on-board graphics are now “good enough” for the 80th percentile of users. By 2015, the market for add-on graphics cards will start well above $100, shrinking that market so much that the drivers switch: today, consumer GPUs lead and high-performance computing (HPC) follows; then, HPC will be the demand that supplies off-shoot high-end consumer GPUs.

In delivering a variety of SoC processors in 2013, Intel learns valuable technology lessons for the smartphone, tablet, and mobile PC markets that will carry forward into the future. Adjacent markets, notably automotive and television, also require highly integrated SoCs.

Broadwell is the code-name for the 2014 process shrink of the Haswell architecture from 22nm to 14nm. I’d expect better electrical efficiency, better graphics, and more mature SoCs. This is the technology sword Intel carries into its full-fledged assault on the smartphone and tablet markets (more below).

AMD

AMD enters 2013 with plans for “Vishera” for the high-end desktop, “Richland”, an SoC for low-end and mainstream users, and “Kabini”, a low-power SoC for tablets.

The 2013 server plans are to deliver the third generation of the current Opteron architecture, code-name Steamroller. The company also plans to move from a 32nm SOI process to a 28nm bulk-silicon process.

In 2014, AMD will be building Opteron processors based on a 64-bit ARM architecture, and may well be first to market. These chips will incorporate the IO fabric acquired with microserver-builder Seamicro. In addition, AMD is expected to place small ARM cores on its X86 processors in order to deliver a counter to Intel’s Trusted Execution Technology. AMD leads the pack in processor chimerism.

Intel’s better performing high-end chips have kept AMD largely outside looking in for the past two years. Worse, low-end markets such as netbooks have been eroded by the upward charge of ARM-based tablets and web laptops (i.e., Chromebook, Kindle, Nook).

ARM

ARM Holdings licenses processor and SoC designs that licensees can modify to meet particular uses. The company’s 32-bit chips started out as embedded industrial and consumer designs. However, the past five years have seen fast-rising tides as ARM chip designs were chosen for Apple’s iPhone and iPad, Google’s Android phones and tablets, and a plethora of other consumer gadgets. Recent design wins include Microsoft’s Surface RT. At this point, quad-core (plus one, with nVidia) 32-bit processors are commonplace. Where to go next?

The next step is a 64-bit design expected in 2014. This design will first be used by AMD, Calxeda, Marvell, and undisclosed other suppliers to deliver microservers. The idea behind microservers is to harness many (hundreds, to start) low-power, modest-performance processors costing tens of dollars each, running multiple instances of web applications in parallel, such as Apache web servers. This approach aims to compete on price/performance, energy/performance, and density versus traditional big-iron servers (e.g., Intel Xeon).
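The microserver pitch reduces to simple arithmetic. The figures below are illustrative assumptions of mine, not vendor numbers, but they show the shape of the comparison microserver vendors make:

```python
# Hypothetical comparison: one big-iron server vs. a tray of microservers.
# All figures are illustrative assumptions, not measured data.
big_iron = {"requests_per_sec": 100_000, "watts": 400, "dollars": 6_000}
micro_node = {"requests_per_sec": 1_500, "watts": 5, "dollars": 50}
nodes = 100  # a tray of a hundred low-power nodes

# Aggregate the tray by scaling every per-node figure.
tray = {k: v * nodes for k, v in micro_node.items()}

def perf_per_watt(server):
    return server["requests_per_sec"] / server["watts"]

def perf_per_dollar(server):
    return server["requests_per_sec"] / server["dollars"]

print(perf_per_watt(tray), perf_per_watt(big_iron))      # 300.0 vs 250.0
print(perf_per_dollar(tray), perf_per_dollar(big_iron))  # 30.0 vs ~16.7
```

Under these assumptions the tray wins on both energy/performance and price/performance, which is exactly the microserver argument; whether real parts hit such numbers against Xeon is the open question.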

In one sentence, the 2013-2014 computer industry dynamics will largely center on how well ARM users defend against Intel’s Atom SoCs in smartphones and tablets, and how well Intel defends its server market from ARM microserver encroachment. If the Microsoft Surface RT takes off, the ARM industry has a crack at the PC/laptop industry, but that’s not my prediction. Complicating the handicapping is fabrication process leadership, where Intel continues to excel over the next two years; smaller process nodes yield less expensive chips with voltage/performance advantages.

Stronger Ties Between Chip Use and Parts

The number of microprocessor models has skyrocketed in the past few years, confusing everybody and costing chip makers a fortune in inventory management (e.g., write-downs). This can’t continue, as every chip variation goes through an expensive set of usability and compatibility tests costing up to millions of dollars per SKU (stock-keeping unit, i.e., a unique microprocessor model spec). That suggests we’ll see a much closer match between the uses for specific microprocessor variations and the chips fabricated to meet the specific market and competitive needs of those uses. By 2015, I believe we’ll see a much more delineated set of chip uses and products:

Smartphones – the low-end of consumer processors. Phone features are reaching maturity: there are only so many pixels and videos one can fit on a 4″ (5″?) screen, and gaming performance is at the good-enough stage. Therefore, greater battery life and smarter use of the battery budget become front and center.

The reason for all the effort is a 400 million unit global smartphone market. For cost and size reasons, prowess in mating processors with radios and support functions into systems on a chip (SoCs) is paramount.

The horse to beat is ARM Holdings, whose architecture is used by the phone market leaders including Samsung, Apple, nVidia, and Qualcomm. The dark horse is Intel, which wants very much to grab, say, 5% of the smartphone market.

Reusing chips for multiple uses is becoming a clever way to glean profits in an otherwise commodity chip business. So I’ll raise a few eyebrows by predicting we’ll see smartphone chips used by the hundreds in microservers (see Part 2) inside the datacenter.

Tablets – 7″ to 10″ information consumption devices iconized by Apple’s iPad and iPad Mini. These devices need to do an excellent job on media, web browsing, and gaming at the levels of last year’s laptops. The processors and the entire SoCs need more capabilities than smartphones. Hence a usage category different from smartphones. Like smartphones, greater battery life and smarter use of the electrical budget are competitive differentiators.

Laptops, Mainstream Desktops, and All-in-One PCs – Mainstream PCs will bifurcate over the next couple of years in different ways than in the past. I’m taking my cue here from Intel’s widely leaked decision to make 2013-generation (i.e., Haswell) SoCs that solder permanently to the motherboard instead of being socketed. This is not a bad idea because almost no one upgrades a laptop processor, and only enthusiasts upgrade desktops during the typical 3-5 year useful PC life. Getting rid of sockets reduces costs, improves quality, and allows for thinner laptops.

The point is that there will be a new class of parts with the usual speed and thermal variations that are widely used to build quad-core laptops, mainstream consumer and enterprise desktops, and all-in-one PCs (which are basically laptops with big built-in monitors).

The processor energy-efficiency drive pays dividends in much lower laptop-class electrical consumption, allowing instant-on and much longer battery life. Carrying extra batteries on airplanes becomes an archaic practice (not to mention a fire hazard). The battle is MacBook Air versus Ultrabooks. Low-voltage becomes its own usage sub-class.

Low End Desktops and Laptops – these are X86 PCs running Windows, not super-sized tablet chips. The market is low-cost PCs for developed markets and mainstream in emerging markets. Think $299 Pentium laptop sale at Wal-Mart. The processors for this market are soldered, dual-core, and SoC to reduce costs.

Servers, Workstations, and Enthusiasts – the high end of the computing food chain. These are socketed, high-performance devices used for business, scientific, and enthusiast applications where performance trumps other factors. That said, architecture improvements, energy efficiency, and process shrinks make each new generation of server-class processors more attractive. Intel is the market and technology leader in this computing usage class, and has little to fear from server-class competitors over the next two years.

There is already considerable overlap in server, workstation, and enthusiast processor capabilities. I see the low-end Xeon E3-1200 moving to largely soldered models. The Xeon E5-2600 and Core i7 products gain more processor cores and better electrical efficiency over the Haswell generation.

Part 2: Form-Factors

Part 3: Application of Computing

Dell Inspiron 15z

2013 PCs: Accelerated Innovation Is Coming Soon

After a day and a half at Intel Developer Forum 2012, it’s apparent that the 2013 PC ecosystem will see a lot more innovation than we’ve been accustomed to of late. A year from now, PCs will consume much less power, perform as well or better, and have new capabilities centered around perceptual computing.

Haswell Architecture Enables New Power-On Life Expectations

Hard to believe, but Intel did not lead IDF with the microprocessor hardware. In fact, Intel said very little about the speeds and feeds of next year’s new fourth-generation Core microprocessor, code-named Haswell. Yet Haswell is designed from the ground up to make Ultrabook laptops highly desirable and very mainstream.

The undisputed fact is that Haswell will use Intel’s 22 nm three-dimensional transistor process.

The eye-opener is how much power consumption is improving. Intel says the dual-core system-on-a-chip, low-voltage processor for Ultrabooks delivered in 2013 will draw 10 watts or less. This power level is half that of 2012 third-generation Core processors (i.e., Ivy Bridge), and a 20-fold improvement from the circa-2008 Core 2-based MacBook Air I am using to write this blog.

These power-to-performance metrics will allow all-day computing, eliminating once and for all the need to compute near an electrical power outlet. You’ll think of your Ultrabook as energy-sipping as your tablet (although that’s not actually the case).

Haswell Makes Ultrabooks Even Better

Ultrabooks will come of age technologically with Haswell. The Ultrabook ecosystem has gained momentum from Intel’s large industry push and incentives. Entering 2013, there will be over 140 Ivy Bridge Ultrabook designs in the market or on the way.

We are seeing much more innovation in the current and future generations of Ultrabooks, accelerated by touch screens, Windows 8 and its touch support, and new tablet-laptop hybrids. These hybrids use clever ways to disconnect, flip, or slide the keyboard from the tablet-size logic and screen. These new form-factors will be at retail point-of-sale for the holidays, and are worth checking out.

Windows 8 and touch support will rapidly gain consumer acceptance, especially as the $100 price premium declines next year. Touch, which most consumers now know from smartphones and tablets, is a natural evolution of the user interface. It will take off.

Next year’s Haswell adds icing to the Ultrabook cake. The candles are new technology centered on perceptual computing.

Intel’s Perceptual Computing Software Brings the Gaming Console to the PC

Those readers familiar with Microsoft’s Xbox 360 Kinect, a camera that senses interactive gestures and speech, or Nintendo’s Wii will instantly understand how Intel’s beta-stage perceptual computing software and third-party hardware will take the PC user interface to a whole new level. Available now as a free developer’s kit, the software will drive perceptual computing into application products arriving around the delivery time of Haswell next year.

Perceptual computing gets us beyond keyboard commands and mouse clicks. The hardware, initially by Creative, “sees” nearby gestures and actions, such as hand, finger, and facial gestures like smiling. Facial analysis tracks the mouth, nose, and eyes as the head moves. 2D and 3D object tracking can augment the reality experience by, for example, showing a user what a particular style of glasses would look like on her face. Finally, speech technology from Nuance, the iPhone Siri-enabling folks, will drive voice command innovation. But Siri’s magic happens in the cloud at Apple’s data centers. Intel’s perceptual computing will happen lightning fast because it’s local, as we saw in demos.

Will 2013 See the End of PC Cables?

Intriguingly, Intel teased that it will deliver wireless near-field technology with Haswell PCs that allows wireless device charging and cable-free communications. Any road warrior who once left a certain USB cable at home will be delighted. The rest of us will celebrate the end of PC set-up times where cabling nests are just part of the nomadic lifestyle.

The Desktop is Alive and Well

Windows 8, touch, and perceptual computing are also stirring juices in the desktop space. We’ve seen numerous all-in-one PCs that create giant canvases for artists, interactive television, and gaming, plus collapsible form-factors that result in a very big tablet. Visualize a convertible all-in-one.

All of this is goodness as product innovation will drive industry-specific uses and consumer lifestyle-specific uses. The volumes won’t be mass market, but we see the industry breaking out of the beige-box mentality at higher gross margins. In fact, all-in-ones are a shining star at 65% compound annual growth rates.

How to Approach Buying a New PC or Laptop in Late 2012

The innovation described above starts in the 2012 holiday season and continues over the next year through the Haswell microprocessor-based product roll-out in Q2 and Q3 of 2013.

My 2008 MacBook Air is working but falling way behind the technology curve, a bad place for a technology analyst to be. I’m in the market to buy fairly soon.

First, I like the data and entertainment consumption ease of use of tablets. For my needs, tablet features that carry into a new laptop include:

  • A 3G or LTE cell radio for always-on connectivity; WiFi has too many limits
  • Multiple radios, especially GPS for location-based apps and Bluetooth for peripherals
  • Touch support
  • Windows 8
  • Thin and light

Those requirements put me into the specs of a feature-rich Ultrabook. Moreover, a convertible Ultrabook might get me off an iPad and MacBook onto one Ultrabook, eliminating a lot of synching issues.

Since I don’t typically do a lot of always-on media consumption, I probably don’t need to wait for Haswell to get its great battery life; third-generation Core “Ivy Bridge” will get the job done and still last several years into the technology future.

Getting ready for perceptual computing apps late next year will likely require careful spec checking to get good hardware support for 3D video tracking and dual-array microphones that improve voice command accuracy. No immediate answer here.

I’ll update this blog when I find the above laptop specs in a product we can both buy at retail. Or post your comments and suggestions.

For a desktop, I’ll wait for a Haswell-based touch-enabled all-in-one.

How Intel is Scaling to Meet the Decade’s Opportunities

Eighteen months ago, Intel announced it would address the world’s rapidly growing computing continuum by investing in variations on the Intel Architecture (IA). It was met with a ho-hum. Now, many product families are beginning to emerge from the development labs and head towards production. All with IA DNA, these chip families are designed to be highly competitive in literally dozens of new businesses for Intel, produced in high volumes, and delivering genuine value to customers and end users.

Intel is the only company with an architecture, cash flow, fabs, and R&D capable of scaling its computing engines up and down to meet the decade’s big market opportunities. What is Intel doing, and how can it pull this off?

The 2010s Computing Continuum
Today’s computing is a continuum that ranges from smartphones to mission-critical datacenter machines, and from desktops to automobiles. These devices represent a total addressable market (TAM) approaching a billion processors a year, and will explode to more than two billion by the end of the decade. Of that, traditional desktop microprocessors are about 175 million chips this year, and notebooks, 225 million.

For more than four decades, solving all the world’s computing opportunities required multiple computer architectures, operating systems, and applications. That is hardly efficient for the world’s economy, but putting an IBM mainframe into a cell phone wasn’t practical. So we made do with multiple architectures and inefficiencies.

In the 1990s, I advised early adopters NCR and Sequent in their plans for Intel 486-based servers. Those were desktop PC chips harnessed into datacenter server roles. Over twenty years, Intel learned from its customers to create and improve the Xeon server family of chips, and has achieved a dominant role in datacenter servers.

Now, Intel Corporation is methodically using its world-class silicon design and fabrication capabilities to scale its industry-standard processors down to fit smartphones and embedded applications, and up into high-performance computing applications, as two examples. Scaling in other directions is still in the labs and under wraps.

The Intel Architecture (IA) Continuum
IA is Intel’s architecture and an instruction set that is common (with feature differentiation) to the Atom, Core, and Xeon microprocessors already used in the consumer electronics, desktop and notebook, and server markets, respectively. These microprocessors are able to run a common stack of software such as Java, Linux, or Microsoft Windows. IA also represents the hardware foundation for hundreds of billions of dollars in software application investments by enterprise and software application package developers. Those investments remain valuable assets as long as hardware platforms can run the software, and backwards compatibility in IA has protected them.

To meet the widely varying requirements of this decade’s computing continuum, Intel is using the DNA of IA to create application-specific variants of its microprocessors.  Think of this as silicon gene-splicing.  Each variant has its own micro-architecture that is suited for its class of computing requirements (e.g., Sandy Bridge for 2011 desktops and notebooks). These genetically-related processors will extend Intel into new markets, and include instruction-set compatible microprocessors:

  • Atom, the general-purpose computer heart of consumer electronics mobile devices, tablets, and soon smartphones;
  • Embedded processors and electronics known as “systems on a chip” (SOCs) with an Atom core and customized circuitry for controlling machines, display signage, automobiles, and industrial products;
  • Core i3, i5, and i7 processors for business and consumer desktops and notebooks, with increasing numbers of variants for form-factor, low power, and geography;
  • Xeon processors for workstations and servers, with multi-processor capabilities advancing well into the mainframe-class, mission-critical computing segment;
  • Xeon datacenter infrastructure processor variants (e.g., storage systems, and with communications management a logical follow-on);
  • And others to follow as new technology markets unfold.

Can Intel Scale One Architecture Across the 2010’s Computing Continuum?
It is proper to question whether Intel is capable of such a massive expansion of its base microprocessor product set.  After all, smartphones have little in common with datacenter number crunchers, right?  Actually, they do have a lot in common as many variants are in essence “general purpose” processors.  Processor speed, thermal envelope, wattage and power management, manageability, and chip geometry are all variables that are demonstrably well understood by Intel.

Moreover, Intel is the market leader at fabricating consistent (think “tick-tock” delivery process), reliable, high-volume silicon-based products.  It is also very adept at delivering the software tools and ecosystem needed to harness the hardware effectively.

At this juncture, only one technology company in the world could credibly propose a common computing architecture that scales from consumer electronics devices to datacenters: Intel Corporation.  The company has the technology tools and knowhow, cash flow, human resources, ability to execute, and vision to glue the billions of Internet-connected computing devices together in a single computing continuum.  Intel views succeeding in delivering such a continuum of IA-based hardware and software to match the computing continuum as its brass-ring chance of the decade.

I conclude that IA, indeed, fits the vast majority of the decade’s computing requirements, and that Intel is singularly capable of creating the products to fill the expanding needs of the computing market (e.g., many-core).

The go-to market strategy for each new market builds on the commonality of the IA continuum. The secret sauce is taking the DNA of IA and arduously and expensively scaling it to meet the specific technology requirements of individual markets. Let’s look at the very low end and the very high end as examples of such scaling.

Medfield, The Forthcoming Atom Hero
This week at Mobile World Congress in Barcelona, Intel announced it was sampling chips for the next-generation Atom, code-named Medfield. This IA chip is the first to arguably approach the technology requirements of the mainstream smartphone market: the entire computer will fit onto a board the size of a postage stamp, compared to a credit card for the current Moorestown generation, and a 3″x5″ card for the previous generation, Menlow. Medfield is built on the same year-old 32 nm process as Westmere and Sandy Bridge; the company has not released additional details of its capabilities. Expect to see Medfield in smartphone and tablet products next year.

Meanwhile, the past year has seen activities too numerous to mention in the Atom space: design wins, acquisitions (e.g., Infineon), partnerships, and alliances.

The five-year big picture starts in 2006 with the first Atom, cobbled from an old Pentium design and built on a last-generation process. The result was a low-capability processor that fanned the netbook and nettop markets. Today, Intel has thousands of employees working on new designs and go-to-market strategies to fit a wide variety of emerging markets:

  • Atom-N for netbooks;
  • Atom-D for nettop basic, entry-level PCs;
  • Atom-CE for set-top boxes (e.g., Google TV, Boxee);
  • Atom-E for embedded applications (e.g., automotive, signage), an under-appreciated opportunity;
  • Atom-T for tablets;
  • Atom-Z for handhelds (e.g., smartphones).

Moreover, as I forecast last month, Intel is making multi-billion dollar cap ex investments in next-generation fabs so that it can accelerate Atom designs faster than Moore’s Law, and to have the capacity to deliver more chips on the latest fabrication process. Having the fastest, smallest, low-leakage transistors is a competitive advantage worth pursuing. Having fab capacity is a prerequisite to taking market share.

In short, believing two years ago that Intel could scale Atom down to a competitive smartphone offering took a leap of faith or an insider’s perspective. Today, the evidence is strong enough that I would bet on Intel taking meaningful market share in Atom-centric markets over the next several years. It took Intel more than ten years to win a majority share of the server chip market, and we’re approaching five years on the Atom path to maturity. I take a long-term view here.

Next Up, Many Integrated Cores
At the other end of the computing continuum from Atom applications is high-performance computing (HPC). Intel is scaling the high end with Many Integrated Cores (MIC), a forthcoming all-IA micro-architecture. The world’s number one supercomputer today is in China; it consists of thousands of Xeon server-class processors mixed with nVidia Tesla graphics processor cards.

Intel is investing in MIC in order to have a scalable, all-IA solution to the world’s toughest scientific, technical, engineering, and research workloads, and with one software application base instead of the need to learn and program multiple architectures.

MIC’s heritage is Larrabee, a graphics processor card developed for 2009 but never brought to market. The architecture, hardware, and software technology lessons learned are recast in the first MIC product, codename Knights Corner, which I expect to be announced this year, perhaps as early as Intel Developer Forum in September.

Challenges
1. The most difficult technology challenge that Intel faces this decade is not hardware but software.  Internally, the growing list of must-deliver software drivers for hardware such as processor integrated graphics means that the rigid two-year, tick-tock hardware model must also accommodate software delivery schedules.

Larrabee was an apparent example of mis-aligned hardware and software schedules.  The need for longer software development schedules, especially testing, is forcing Intel to have hardware working earlier in the tick-tock cycle.  This leads to better quality, but more software creates greater inherent schedule and business risks.

2. In the consumer space such as smartphones, Intel’s ability to deliver applications and a winning user experience are limited by the company’s OEM distribution model. More emphasis needs to be placed on the end-user application ecosystem, both quality and quantity.

3. By the end of the decade, silicon fabrication will be under 10 nm, and it is a lot less clear how Moore’s Law will perform in the 2020’s.

4. Externally, software parallelism remains a sticky problem: Intel creates more parallel threads in hardware, but programmers are slow to distribute their applications across that hardware. This is a computer science problem of considerable difficulty. Intel is investing heavily with professional product developers, and its parallel software tools are widely used and admired. However, that may not be enough to unblock the parallel software bottleneck.

5. The market challenges for Intel are also great opportunities.  About ten percent of cell phones are smartphones today, but that will grow towards 80% by the end of the decade.  The competitive feeding frenzy led by Apple’s iPhone and now iPad will drive competition and innovation at an enormous rate.
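The parallelism challenge in point 4 is easy to quantify with Amdahl’s law: a program’s speedup is capped by its serial fraction no matter how many cores the hardware offers. A quick back-of-the-envelope sketch (my numbers, purely illustrative):

```python
# Amdahl's law: speedup on n cores when a fraction p of the
# program's work can be parallelized (the rest stays serial).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A program that is only 50% parallel speeds up by less than 2x
# even on 16 hardware threads, which is why thread counts
# outpacing software parallelism is a real bottleneck.
for p in (0.50, 0.90, 0.99):
    print(f"parallel fraction {p:.0%}: "
          f"4 cores -> {amdahl_speedup(p, 4):.2f}x, "
          f"16 cores -> {amdahl_speedup(p, 16):.2f}x")
```

Even at 90% parallel, sixteen cores deliver only a 6.4x speedup; the serial 10% dominates.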

What’s the Competition Doing?
Intel’s competition comes on two fronts: AMD in PCs, notebooks, and servers (and tablets by year-end), and ARM Holdings’ 1,200 licensees in the smartphone, tablet, and embedded markets.

For AMD, 2011 is shaping up as a game of Battleship. In December, AMD fired off Brazos, and Intel responded in January with Sandy Bridge. The two companies will have at least three other exchanges this year. From a scaling standpoint, AMD is competitive from netbooks up to servers. But AMD is conceding the smartphone space to Intel and ARM.

ARM micro-architectures are very power-efficient RISC designs that are well suited to electronic gadgets and embedded applications. The 1,200 ARM licensees all do things a little differently in assembling their chips, so it’s not an apples-to-apples comparison to talk about “ARM vs. Intel”. Recent announcements of quad-core ARM chip designs should not be directly compared to Intel’s quad-cores; Intel (and AMD) get much more work done in each clock tick. ARM will need quad-cores and more to move up into entry-PC applications that require more processor horsepower. As for 64-bit ARM in the server markets, I’ll believe it when I see it. Lots of wimpy (ARM) cores are not the answer to datacenter scaling issues.
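To see why core counts alone mislead, consider a toy throughput model: delivered performance is roughly cores × clock × instructions-per-clock (IPC). The figures below are hypothetical, chosen only for illustration, and are not measurements of any real ARM or Intel part:

```python
# Toy model: sustained throughput ~ cores x clock (GHz) x IPC.
# All figures are made up to show that core count alone says
# nothing about delivered performance.
def throughput(cores, ghz, ipc):
    return cores * ghz * ipc  # billions of instructions per second

wide_dual = throughput(cores=2, ghz=2.0, ipc=3.0)    # high-IPC design
narrow_quad = throughput(cores=4, ghz=1.2, ipc=1.0)  # low-IPC design

print(f"high-IPC dual-core: {wide_dual:.1f} GIPS")
print(f"low-IPC quad-core:  {narrow_quad:.1f} GIPS")
```

In this sketch the high-IPC dual-core outruns the low-IPC quad-core 12.0 to 4.8, despite having half the cores.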

Observations and Conclusions
Intel is likely to get a small piece of the smartphone pie over the next couple of years, but could, with continuing investment and sustained concentration, be a major player by mid-decade. Surely that is the company’s goal. SoCs are another multi-billion-dollar opportunity that will unfold slowly, as product design cycles are much longer. Yet autos alone are a 35-million-unit opportunity growing to 60 million late in the decade.

Intel’s new market investments are best watched over a period of years; success will be hard to measure quarter-by-quarter.

Therefore, and here I speak to Wall Street quarter-driven analysts, give the smartphone/tablet/embedded initiatives at least two more years before calling long-term winners and losers.

Intel is taking its proven IA platforms and modifying them to scale competitively as existing markets evolve and as new markets such as smartphones emerge.  To do this, Intel discarded the PC beige-box mentality in favor of application-specific silicon form-factors and product requirements implemented in individual micro-architectures.

IA scales from handhelds to mission-critical enterprise applications, all able to benefit from a common set of software development tools and protecting the vast majority of the world’s software investments.  Moreover, IA and Intel itself are evolving to specifically meet the needs of a spectrum of computing made personal, the idea that a person will have multiple computing devices that match the time, place and needs of the user.

Intel is the only company with an architecture, cash flow, fabs, and R&D capable of scaling its computing engines up and down to meet the decade’s big market opportunities.

My conclusion is that Intel’s self-interest in growth is pushing the company to innovate in new markets, which will benefit computer buyers.  But the company is also hugely invested in the worldwide growth of a middle class in emerging markets, which expands the universe by hundreds of millions of potential customers.

We will all look back a decade hence at how far personal computing has come in ten years.

Biography

Peter S. Kastner is an industry analyst with over forty years experience in application development, datacenter operations, computer industry marketing, and market research.  He was a co-founder of industry-watcher Aberdeen Group in 1989.  His firm, Scott-Page LLC, consults with technology companies and technology users.



Tablets to Be Supercharged by Quad-core Chips

More than 60% of desktops and 95% of notebooks shipped in 2010 had two or fewer cores. Lots of market research says users are generally satisfied with the performance of their machines. So why on earth do we need quad-core gadgets like tablets and cell phones?

The answer: we don’t need quad cores. But the manufacturers are in a global, commodity race to the bottom where differentiation is hard to find. The technology is coming to market. So the product planners buckle under to the marketing suits and deliver quad cores.

As an analyst with considerable performance benchmark experience, I always have some small part of my brain thinking about performance when I use software apps or devices.

I think quad cores are a bad tradeoff for most users most of the time. The tradeoffs are weight, memory, and battery life.

Driving quad cores instead of dual cores requires considerably more power, which depletes the battery. To keep the battery life the same, it must be bigger, weigh more, and cost more.

Unmentioned in any quad-core gadget discussion I have seen to date is software support. A symmetric multi-processing operating system which efficiently harnesses all four cores will require a considerable amount of new code, testing, and work by app developers. The code itself is available off the shelf in Linux/Unix, but shoe-horning server OS kernel code into a smartphone — efficiently — is no easy task. Also, it will eat up a lot more main memory, a rather precious commodity in gadgets.

Lastly, knowing that the gadget also has many cores dedicated to graphics and video processing, I have to ask: just what will these four cores be doing almost all the time?

Rather than quad cores looking for a job, I would be much happier with gadgets that used sophisticated thermal envelope management such as Intel’s Turbo Boost 2. Speeding up a couple of cores briefly, then shutting them down is an elegant way to deliver higher performance with good battery life.
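That thermal-envelope idea can be sketched as a simple control loop: run above the sustained power limit while an accumulated energy budget lasts, then fall back. This is my toy model of the concept, not Intel’s actual Turbo Boost 2 algorithm, and all the numbers are invented:

```python
# Toy thermal-envelope model: the chip may exceed its sustained
# power limit until an accumulated energy budget is spent, then
# must fall back -- a crude sketch of turbo-style boosting.
def simulate(seconds, tdp_watts=10.0, boost_watts=20.0, budget_joules=50.0):
    budget = budget_joules
    trace = []
    for _ in range(seconds):
        excess = boost_watts - tdp_watts     # extra power drawn while boosting
        if budget >= excess:                 # can we afford one second of boost?
            budget -= excess                 # spend energy from the budget
            trace.append(boost_watts)
        else:
            trace.append(tdp_watts)          # throttle to the sustained limit
    return trace

print(simulate(8))  # boosts for 5 seconds (50 J / 10 W excess), then throttles
```

A real implementation would also replenish the budget during idle periods and modulate frequency rather than power directly; the point is that brief bursts above the sustained limit fit inside a fixed thermal envelope.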

Source: PCWorld