Self-Driving Software: Why We Need E Pluribus Unum

Today, numerous companies around the world, large and small, are working diligently to perfect their self-driving software. They include all the large traditional automobile companies, technology firms such as Google, Intel, and Microsoft, and even Uber. These companies are working in true twentieth-century capitalist fashion: each is doing it all independently and secretly. This approach leads to sub-optimal technology and foreseeable tragedies.

Self-Driving Vehicles Use Artificial Intelligence (AI)

Programming a self-driving vehicle (SDV) by traditional software-development methods is so fraught with complexity that no one, to my knowledge, is attempting it. So scrap that idea. Instead, developers have flocked to artificial intelligence, a red-hot technology built on rather old ideas about neural networks.

There’s a lot to AI technology beyond the scope of this blog. A quick Internet search will get you started on a deep dive. For today, let’s sketch a common approach to AI application development:

  • First, an AI rules-based model is fed real-world scenarios, rules, and practical knowledge. For example, “turning left into oncoming traffic (in the USA but not the UK) is illegal and hazardous and will likely result in a crash. Don’t do that.” This first phase is the AI Learning Phase.
  • Second, the neural network created in the learning phase is executed in a vehicle, often on a specialized chip, graphics processing unit (GPU) or multi-processor. This is the Execution Phase.
  • Third, the execution unit records real-world observations while driving, eventually feeding them back into the learning model.
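
The three-phase loop above can be sketched in miniature. This is a toy illustration only: a real SDV trains deep neural networks on sensor data, and every function name and scenario string below is hypothetical.

```python
# Toy sketch of the learn / execute / feedback loop described above.
# A real SDV trains a deep neural network; this stand-in "model" is
# just a scenario -> action lookup table.

def learn(examples):
    """Learning phase: build a model from labeled scenarios."""
    return {scenario: action for scenario, action in examples}

def execute(model, scenario):
    """Execution phase: the in-vehicle unit chooses an action."""
    # Fall back to a conservative default for unseen scenarios.
    return model.get(scenario, "slow down and alert driver")

def feedback(examples, observation):
    """Feedback phase: fold a real-world observation back into the training set."""
    return examples + [observation]

examples = [("left turn into oncoming traffic", "yield"),
            ("red light ahead", "stop")]
model = learn(examples)
print(execute(model, "red light ahead"))    # stop
print(execute(model, "deer in roadway"))    # default: slow down and alert driver

# A new real-world event flows back into the next learning cycle.
examples = feedback(examples, ("deer in roadway", "brake hard"))
model = learn(examples)
print(execute(model, "deer in roadway"))    # brake hard
```

The debate in this post boils down to a single question: whose `examples` list does each vehicle get trained on?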

The Problem of Many

Here’s the rub. Every SDV developer is on its own, creating a proprietary AI model with its own set of learning criteria. Each AI model is only as good as the data fed into its learning engine.

No single company is likely to encounter or imagine all of the three-standard-deviation, Black Swan events that can and will lead to vehicle tragedies and loss of life. Why should Tesla and the state of Florida be the only beneficiaries of the lessons from a particular fatal crash? The industry should learn from the experience too. That’s how society progresses.

Cue the class-action trial lawyers.

E Pluribus Unum

E Pluribus Unum is Latin for “out of many, one.” (Yes, it’s the motto of the United States.) My proposal is simple:

  • The federal government should insist that all self-driving vehicles use an AI execution unit trained, in its learning phase, with an open-source database of events, scenarios, and real-world feedback. Out of many AI training models, one model.
  • The Feds preempt state regulation of core AI development and operation.
  • Vehicles that use the federalized learning database for training receive limited class-action immunity, just as we now do with vaccines.
  • The Feds charge the auto industry fees that cover the costs of the program.


From a social standpoint, there’s no good reason for wild-west capitalism over proprietary AI learning engines that leads to avoidable crashes. With one common AI learning database, all SDVs will get smarter, faster, because they benefit from the collective experience of the entire industry. By allowing and encouraging innovation in AI execution engines, the industry will focus on areas that make products better, faster, cheaper, and smaller, not on avoiding human-risk situations. Performance benchmarks are a well-understood concept.

Philosophically, I don’t turn first to government regulation. But air traffic control, railroads, and numerous aspects of medicine are regulated without controversy. Vehicle AI is ripe for regulation before production ramps to millions of vehicles over the next decade.

I am writing this blog because I don’t see the subject being discussed. It ought to be.

Comments and feedback are welcome. See my feed on Twitter @peterskastner.


“My ISP is a Solar-Powered Drone.”

Google, the ad-driven search giant, and Facebook, the social connections giant, are fighting over airplane drone technology companies. What’s that all about?

When they’re ready for the mass market in the next five years, solar-powered drones will be able to fly for weeks or months. They can take 2D and 3D photos, resulting in better and more up-to-date maps. And they could serve as aerial Internet connections. It’s the latter that got my attention, because it threatens the status quo in developed nations and opens new markets in developing nations.

Aerial Internet Drones (AIDs) suggest a breakout technology that solves — or at least remediates — the “wireless everywhere” mantra of the past decade. In developed countries such as the United States, intractable wireless problems include inadequate bandwidth in high-device-density areas (e.g., midtown New York), necessitating more cell towers and greater slices of the electromagnetic spectrum. Moreover, not-in-my-neighborhood opposition to new towers and inadequate capital make it politically and economically difficult to build a superior wireless broadband network, such as LTE, in suburban and rural areas.

In underdeveloped geographies, which represent attractive new markets for the global technology and wireless companies, sparse and inadequate mobile broadband infrastructure creates a chicken-and-egg problem: without subscribers there is no business case to build networks, and without networks there are no subscribers.

So, the vision to solve both developing and developed wireless broadband demand is to put up a global network of drones that serve as radio relays for wireless Internet connections. AIDs would be a new form of Internet router, loitering around a more-or-less fixed point in the sky.

At the right altitude, an AID has better line-of-sight than a cell tower located over the hill. The AID theoretically offers greater geographic coverage and often better signal quality than today’s cell tower networks. With a cost of less than $10 million per equipped AID, my envelope calculations suggest AID network costs compare favorably with cell-tower networks for comparable geographic coverage.
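
Those envelope numbers are easy to reproduce. The sketch below compares cost per square kilometer of coverage; the coverage radii and the tower cost are my own illustrative assumptions, not reported figures.

```python
import math

# Back-of-envelope: AID vs. cell-tower cost per km^2 of coverage.
# All inputs are illustrative assumptions, not reported figures.
AID_COST = 10_000_000        # $ per equipped drone (ceiling from the text)
AID_RADIUS_KM = 50           # assumed line-of-sight radius at high altitude
TOWER_COST = 250_000         # assumed cost of one rural cell tower
TOWER_RADIUS_KM = 5          # assumed rural tower coverage radius

def cost_per_km2(cost, radius_km):
    """Cost divided by the circular coverage area."""
    return cost / (math.pi * radius_km ** 2)

aid_cost = cost_per_km2(AID_COST, AID_RADIUS_KM)
tower_cost = cost_per_km2(TOWER_COST, TOWER_RADIUS_KM)
print(f"AID:   ${aid_cost:,.0f} per km^2")    # ~ $1,273
print(f"Tower: ${tower_cost:,.0f} per km^2")  # ~ $3,183
```

Under these assumptions one drone covers the area of roughly a hundred towers, which is where the favorable comparison comes from; shrink the drone’s usable radius and the advantage shrinks with it.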

In developing areas such as Africa, an AID network is a solution to creating metro- and rural-area Internet wireless infrastructure rapidly and without the difficulties of building land-line-connected cell towers.

Cellphone networks use land lines to connect cell towers to each other and to an Internet wired backhaul. An AID network needs to connect wirelessly to a) client cellphones and the Internet of Things and b) a radio ground-station connected to an Internet wired backhaul. The radio ground-station is the crux of the difficulties I foresee.

The ground-station requires radio spectrum to communicate up to and down from the AID network. It represents a new demand on the over-burdened and highly political use of the electromagnetic spectrum. Where does the spectrum come from, whose ox is gored, and how are the skids greased?  Think lobbying.

Moreover, the incumbent cable and wireless ISPs (i.e., Comcast, Verizon, AT&T, Sprint, Dish, et al) are not likely to give up their near monopolies on Internet access by devices, homes, and businesses without a knockdown, drag-out political fight followed by years of litigation.

Add citizen privacy concerns over drone picture-taking to this highly volatile Internet-industrial-complex wireless food fight and you can expect great spectator sport. In developing countries, though, the issue will be framed as “drone spying by the NSA.”

Like many, I would greatly appreciate and even pay more for better wireless coverage and higher wireless device bandwidth. First, Google and Facebook have to solve the real technology problems of getting the AIDs into the sky. Second, they have to muscle a (much needed) rethink of wireless spectrum use and the roles of future ISPs through the political sausage factory, and nail down the new spectrum they need. Combined, this is a heavy lift.

So, with a sigh of regret, I suspect it will be quite a while before I can say “My ISP is a Solar-Powered Drone.”

Follow me on Twitter @PeterSKastner.

Image: solar-powered drone (Titan Aerospace/Associated Press)

IT Industry Hopes for Q4 Holiday Magic

I am floored that almost all of 2013’s new tech products are getting to market in the fourth quarter. For the most part, the other three quarters of the year were not wasted so much as not used to smooth supply and demand. What is to be done?

2013 products arrive in Q4
Here are some of the data points I used to conclude that 2013 is one backend-loaded product year:

  • Data Center: Xeon E3-1200 v3 single-socket chips based on the Haswell architecture started shipping this month. Servers follow next quarter. Xeon E5 dual-socket chips based on Ivy Bridge are announced and anticipated in shipping servers in Q4. New Avoton and Rangely Atom chips for micro-servers and storage/comms are announced and anticipated in products in Q4.
  • PCs: my channel checks show 2013 Gen 4 Core (Haswell) chips in about 10% of SKUs at retail, mostly quad-core. Dual-core chips are now arriving, and we’ll see lower-end Haswell notebooks and desktops imminently. Apple, for instance, launched its Haswell-based 2013 iMac all-in-ones on September 24th. But note that the 2013 Mac Pro announced in June has not shipped, and the new MacBooks are missing in action.
  • Tablets: Intel’s Bay Trail Atom chips announced in June are now shipping. They’ll be married to Android or Windows 8.1, which ships in late October. Apple’s 2013 iPad products have not been announced. Android tabs this year have mostly seen software updates, not significant hardware changes.
  • Phones: Apple’s new phones started selling this week. The 5C is last year’s product in a cost-reduced plastic case. The iPhone 5S is the hot product. Unless you stood in line all day last weekend, you’ll be getting your ordered phone … in Q4. Intel’s Merrifield Atom chips for smartphones, announced in June, have yet to be launched. I’m thinking Merrifield gets the spotlight at the early-January ’14 CES show.

How did we get so backend loaded?
I don’t think an economics degree is needed to explain what has happened. The phenomenal unit growth of the past decade in personal computers, including mobility, has squarely placed the industry under the forces of global macro-economics. The recession in Europe, the pull-back in emerging countries led by China, and slow growth in the USA all contribute to a sub-par global economy. Unit volume growth rates have fallen.

The IT industry has reacted by slowing new product introductions in order to sell more of the existing products, which reduces the per-unit R&D and overhead costs of existing products and thereby increases profits.

Unfortunately, products are typically built to a forecast. The forecast for 2012-2013 was higher than reality. More product was built than planned or sold. There are warehouses full of last year’s technology.

The best laugh I’ve gotten in the past year from industry executives is to suggest that “I know a guy who knows a guy in New Jersey who could maybe arrange a warehouse fire.” After about a second of mental arithmetic, I usually get a broad smile back and a response like “Hypothetically, that would certainly be very helpful.” (Industry execs must think I routinely wear a wire.)

So, with warehouses full of product which will depreciate dramatically upon new technology announcements, the industry has said “Give us more time to unload the warehouses.”

Meanwhile, getting the new base technology out the door on schedule is harder, not easier. Semiconductor fabrication, new OS releases, new sensors and drivers, etc. all contribute to friction in the product development schedule. But flaws are unacceptable because of the replacement costs. For example, if a computing flaw is found in Apple’s new iOS 7, which shipped five days ago, Apple will have to fix the install on over 100 million devices and climbing — and deal with class action lawsuits and reputation damage; costs over $1 billion are the starting point.

In short, the industry has slowed its cadence over the past several years to the point where all the sizzle in the market with this year’s products happens at the year-end holidays. (Glad I’m not a Wall Street analyst.)

What happens next?
The warehouses will still be stuffed entering 2014. But there will be less 2012 tech on those shelves, now replaced by 2013 tech.

Marching soldiers are taught that when they get out of step, they skip once and get back in cadence.

The ideal consumer cadence for the IT industry has products shipping in Q2 and fully ramped by mid-Q3; that’s in time for the back-to-school major selling season, second only to the holidays. The data center cadence is more centered on a two-year cycle, while enterprise PC buying prefers predictability.

Consumer tech in 2014 broadly moves to a smaller process node and doubles up to quad-cores. Competitively, Intel is muscling its way into tablets and smartphones. The A7 processor in the new Apple iPhone 5S is Apple’s first shot in response. Intel will come back with 14nm Atoms in 2014, and Apple will have an A8.

Notebooks will see a full generation of innovation as Intel delivers 14nm chips that are on an efficiency path towards threshold voltages — as low as possible — that deliver outstanding battery life. A variation on the same tech gets to Atom by the 2014 holidays.

The biggest visible product changes will be in form-factors, as two-in-one notebooks in many designs compete with tablets in many sizes. The risk-averse product manufacturers (who own that product in the warehouses) have to innovate or die, macro-economic conditions be damned. Dell comes to mind.

On the software side, Apple’s iOS 7 looks and acts more like Android than ever before. Who would have guessed that? Microsoft tries again with Windows 8.1.

Consumer buyers will be information-hosed with more changes than they have seen in years, making decision-making harder.

Intel has been very cagey about what 2014 brings to desktops; another year of Haswell refreshes before a 2015 new architecture is entirely possible. Otherwise, traditional beige boxes are being replaced by all-in-ones and innovative small-form-factor machines.

The data center is in step, and a skip is unnecessary. The 2014 market battle will answer the question: what place do micro-servers have in the data center? However, there is too much server-supplier capacity chasing an increasingly commoditized data center. Reports have IBM selling off its server business, and Dell is going private to focus on the long term.

The bright spot is that tech products of all classes seem to wear out after about 4-5 years, demanding replacement. Anyone still have an iPhone 3G?

The industry is likely to continue to dawdle its cycles until global macro-economic conditions improve and demand catches up with more of the supply. But moving the availability of products back even two months in the calendar would improve new-product flow-through by catching the back-to-school season.

Catch me on Twitter @peterskastner



On the Impact of Paul Otellini’s CEO Years at Intel

Intel’s CEO Paul Otellini is retiring this week. His 40-year career at Intel now ending, it’s a timely opportunity to look at his impact on Intel.

Source: New York Times


Intel As Otellini Took Over

In September 2004, when it was announced that Paul Otellini would take over as CEO, Intel was #46 on the Fortune 100 list and had ramped production to 1 million Pentium 4s a week (today, over a million processors a day). The year ended with revenues of $34.2 billion. Otellini, who joined Intel with a new MBA in 1974, had 30 years of experience at the company.

The immediate challenges the company faced fell into four areas: technology, growth, competition, and finance:

Technology: Intel processor architecture had pushed more transistors clocking faster, generating more heat. The solution was to use the benefits of Moore’s Law to put more cores on each chip and run them at controllable — and eventually much reduced — voltages.

Growth: The PC market was 80% desktops and 20% notebooks in 2004 with the North America and Europe markets already mature. Intel had chip-making plants (aka fabs) coming online that were scaled to a continuing 20%-plus volume growth rate. Intel needed new markets.

Competition: AMD was ascendant, and a growing menace.  As Otellini was taking over, a market research firm reported AMD had over 52% market share at U.S. retail, and Intel had fallen to #2. Clearly, Intel needed to win with better products.

Finance: Revenue in 2004 recovered to beat 2000, the Internet bubble peak. Margins were in the low 50% range — good but inadequate to fund both robust growth and high returns to shareholders.

Where Intel Evolved Under Paul Otellini

Addressing these challenges, Otellini changed the Intel culture, setting higher expectations, and moving in many new directions to take the company and the industry forward. Let’s look at major changes at Intel in the past eight years in the four areas: technology, growth, competition, and finance:

Technology

Design for Manufacturing: Intel’s process technology in 2004 was at 90nm. To reliably achieve a new process node and architecture every two years, Intel introduced the Tick-Tock model, where odd years deliver a new architecture and even years deliver a new, smaller process node. The engineering and manufacturing fab teams work together to design microprocessors that can be manufactured in high volume with few defects. Other key accomplishments include High-K Metal Gate transistors at 45nm, 32nm products, 3D tri-gate transistors at 22nm, and a 50% reduction in wafer production time.

Multi-core technology: The multi-core Intel PC was born in 2006 in the Core 2 Duo. Now, Intel uses Intel Architecture (IA) as a technology lever for computing across small and tiny (Atom), average (Core and Xeon), and massive (Phi) workloads. There is a deliberate continuum across computing needs, all supported by a common IA and an industry of IA-compatible software tools and applications.

Performance per Watt: Otellini led Intel’s transformational technology initiative to deliver 10X more power-efficient processors. Lower processor power requirements allow innovative form factors in tablets and notebooks and are a home run in the data center. The power-efficiency initiative comes to maturity with the launch of the fourth generation of Core processors, codename Haswell, later this quarter. Power efficiency is critical to growth in mobile, discussed below.

Growth

When Otellini took over, the company focused on the chips it made, leaving the rest of the PC business to its ecosystem partners. Recent unit growth in these mature markets comes from a greater focus on a broader range of customers’ computing needs, and from bringing leading technology to market rapidly and consistently. In so doing, the company gained market share in all the PC and data center product categories.

The company shifted marketing emphasis from the mature North America and Europe to emerging geographies, notably the BRIC countries — Brazil, Russia, India, and China. That formula accounted for a significant fraction of revenue growth over the past five years.

Intel’s future growth requires developing new opportunities for microprocessors:

Mobile: The early Atom processors introduced in late 2008 were designed for low-cost netbooks and nettops, not phones and tablets. Mobile was a market where the company had to reorganize, dig in, and catch up. The energy-efficiency that benefits Haswell, the communications silicon from the 2010 Infineon acquisition, and the forthcoming 14nm process in 2014 will finally allow the company to stand toe-to-toe with competitors Qualcomm, nVidia, and Samsung using the Atom brand. Mobile is a huge growth opportunity.

Software: The company acquired Wind River Systems, a specialist in real-time software, in 2009, and McAfee in 2010. These added to Intel’s own developer-tools business. The software services business accelerates customers’ time to market with new, Intel-based products. The company stepped up efforts in consumer device software, optimizing the operating systems from Google (Android), Microsoft (Windows), and Samsung (Tizen). Why? Consumer devices sell best when an integrated hardware/software/ecosystem like Apple’s iPhone exists.

Intelligent Systems: Specialized Atom systems on a chip (SoCs) with Wind River software and Infineon mobile communications radios are increasingly being designed into medical devices, factory machines, automobiles, and new product categories such as digital signage. While the global “embedded systems” market lacks the pizzazz of mobile, it is well north of $20 billion in size.

Competition

AMD today is a considerably reduced competitive threat, and Intel has gained back #1 market share in PCs, notebooks, and data center.

Growth into the mobile markets is opening a new set of competitors which all use the ARM chip architecture. Intel’s first hero products for mobile arrive later this year, and the battle will be on.

Finance

Intel has delivered solid, improved financial results to stakeholders under Otellini. With ever more efficient fabs, the company has improved gross margins. Free cash flow supports a dividend yielding above 4%, a $5B stock buyback program, and a multi-year capital expense program targeted at building industry-leading fabs.

The changes in financial results are summarized in the table below, showing the year before Otellini took over as CEO through the end of 2012.

GAAP                2004      2012      Change
Revenue             $34.2B    $53.3B    55.8%
Operating Income    $10.1B    $14.6B    44.6%
Net Income          $7.5B     $11.0B    46.7%
EPS                 $1.16     $2.13     83.6%
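
The Change column follows directly from the 2004 and 2012 figures; a quick arithmetic check:

```python
# Verify the percentage changes in the table above
# (dollar figures in billions, except EPS).
def pct_change(start, end):
    return round((end - start) / start * 100, 1)

print(pct_change(34.2, 53.3))   # Revenue: 55.8
print(pct_change(10.1, 14.6))   # Operating income: 44.6
print(pct_change(7.5, 11.0))    # Net income: 46.7
print(pct_change(1.16, 2.13))   # EPS: 83.6
```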

The Paul Otellini Legacy

There will be books written about Paul Otellini and his eight years at the helm of Intel. A leader should be measured by the institution he or she leaves behind. I conclude those books will describe Intel in 2013 as excelling in managed innovation, systematic growth, and shrewd risk-taking:

Managed Innovation: Intel and other tech companies are always innovative. But Intel manages innovation among the best, on a repeatable schedule and with very high quality. That’s uncommon and exceedingly difficult to do consistently. For example, the Tick-Tock model is a business-school case study: churning out ground-breaking transistor technology, processors, and high-quality leading-edge manufacturing at a predictable, steady pace from engineering to volume manufacturing. This repeatable process is Intel’s crown jewel, and a national asset.

Systematic Growth: Under Otellini, Intel made multi-billion dollar investments in each of the mobile, software, and intelligent systems markets. Most of the payback growth will come in the future, and will be worth tens of billions in ROI.

The company looks at the Total Addressable Market (TAM) for digital processors, decides what segments are most profitable now and in the near future, and develops capacity and go-to-market plans to capture top-three market share. TAM models are very common in the tech industry. But Intel is the only company constantly looking at the entire global TAM for processors and related silicon. With an IA computing continuum of products in place, plans to achieve more growth in all segments are realistic.

Shrewd Risk-Taking: The company is investing $35 billion in capital expenses for new chip-making plants and equipment, creating manufacturing flexibility, foundry opportunities, and demonstrating a commitment to keep at the forefront of chip-making technology. By winning the battle for cheaper and faster transistors, Intel ensures itself a large share of a growing pie while keeping competitors playing catch-up.

History and not analysts will grade the legacy of Paul Otellini as CEO at Intel. I am comfortable in predicting he will be well regarded.

Follow me on Twitter @PeterSKastner

The 2013-2014 Computing Forest – Part 1: Processors

Ignoring the daily tech trees that fall in the woods, let’s explore the computer technology forest looking out a couple of years.
Those seeking daily comments should follow @peterskastner on Twitter.

Part 1: Processors

Architectures and Processes

Intel’s Haswell and Broadwell

We’ll see a new X86 architecture in the first half of 2013, code-named Haswell. The Haswell chips will use the 22nm fabrication process introduced in third-generation Intel Core chips (aka Ivy Bridge). Haswell is important for extending electrical efficiency, improving performance per clock tick, and as the vehicle for Intel’s first system on a chip (SoC), which combines a dual-core processor, graphics, and IO in one unit.

Haswell is an architecture, and the benefits of the architecture carry over to the various usage models discussed in the next section.

I rate energy efficiency as the headline story for Haswell. Lightweight laptops like Ultrabooks (an Intel design) and Apple’s MacBook Air will sip the battery at around 8 watts, roughly half of today’s 17 watts. This will dramatically improve the battery life of laptops, but also of smartphones and tablets, two markets that Intel has literally built $5 billion fabs to supply.

The on-chip graphics capabilities have improved by an order of magnitude in the past couple of years and will get better over the next two. Like the main processor, the GPU benefits from improved electrical efficiency. In essence, on-board graphics are now “good enough” for the 80th percentile of users. By 2015, the market for add-on graphics cards will start well above $100, shrinking the market so much that the roles reverse: today, consumer GPUs lead and high-performance computing (HPC) follows; by then, HPC will be the demand that supplies off-shoot high-end consumer GPUs.

In delivering a variety of SoC processors in 2013, Intel learns valuable technology lessons for the smartphone, tablet, and mobile PC markets that will carry forward into the future. Adjacent markets, notably automotive and television, also require highly integrated SoCs.

Broadwell is the code-name for the 2014 process shrink of the Haswell architecture from 22nm to 14nm. I’d expect better electrical efficiency, better graphics, and more mature SoCs. This is the technology sword Intel carries into its full-fledged assault on the smartphone and tablet markets (more below).

AMD

AMD enters 2013 with plans for “Vishera” for the high-end desktop, “Richland”, an SoC for low-end and mainstream users, and “Kabini”, a low-power SoC for tablets.

The 2013 server plans are to deliver the third generation of the current Opteron architecture, code-named Steamroller. The company also plans to move from a 32nm SOI process to a 28nm bulk-silicon process.

In 2014, AMD will be building Opteron processors based on a 64-bit ARM architecture, and may well be first to market. These chips will incorporate the IO fabric acquired with microserver-builder SeaMicro. In addition, AMD is expected to place small ARM cores on its X86 processors to deliver a counter to Intel’s Trusted Execution Technology. AMD leads the pack in processor chimerism.

Intel’s better-performing high-end chips have kept AMD largely on the outside looking in for the past two years. Worse, low-end markets such as netbooks have been eroded by the upward charge of ARM-based tablets and web laptops (i.e., Chromebook, Kindle, Nook).

ARM

ARM Holdings licenses processor and SoC designs that licensees can modify to meet particular uses. The company’s 32-bit chips started out as embedded industrial and consumer designs. However, the past five years have seen fast-rising tides as ARM chip designs were chosen for Apple’s iPhone and iPad, Google’s Android phones and tablets, and a plethora of other consumer gadgets. Recent design wins include Microsoft’s Surface RT. At this point, quad-core (plus one, with nVidia) 32-bit processors are commonplace. Where to go next?

The next step is a 64-bit design expected in 2014. This design will first be used by AMD, Calxeda, Marvell, and other, undisclosed suppliers to deliver microservers. The idea behind microservers is to harness many (hundreds, to start) low-power/modest-performance processors costing tens of dollars each, running multiple instances of a web application in parallel, such as Apache web servers. This approach aims to compete on price/performance, energy/performance, and density versus traditional big-iron servers (e.g., Intel Xeon).
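
The microserver pitch above is easy to frame as arithmetic. Every figure in this sketch is a hypothetical assumption chosen only to show the shape of the comparison, not a benchmark of any real product:

```python
# Hypothetical comparison: one big-iron server vs. a tray of
# microserver nodes on an embarrassingly parallel web workload.
# All numbers below are illustrative assumptions, not benchmarks.
big_iron = {"cost_usd": 8_000, "watts": 400, "req_per_sec": 40_000}
node = {"cost_usd": 50, "watts": 5, "req_per_sec": 700}
node_count = 200

# Aggregate the tray of nodes into one comparable "server".
micro = {key: value * node_count for key, value in node.items()}

def efficiency(server):
    """Requests per dollar of hardware and per watt of power."""
    return {"req_per_dollar": server["req_per_sec"] / server["cost_usd"],
            "req_per_watt": server["req_per_sec"] / server["watts"]}

print(efficiency(big_iron))  # req_per_dollar: 5.0,  req_per_watt: 100.0
print(efficiency(micro))     # req_per_dollar: 14.0, req_per_watt: 140.0
```

The catch, of course, is that this only holds for workloads that parallelize cleanly across many slow cores, which is why the argument starts with web front-ends.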

In one sentence, the 2013-2014 computer industry dynamics will largely center on how well ARM users defend against Intel’s Atom SoCs in smartphones and tablets, and how well Intel defends its server market from ARM microserver encroachment. If the Microsoft Surface RT takes off, the ARM industry has a crack at the PC/laptop industry, but that’s not my prediction. Complicating the handicapping is fabrication process leadership, where Intel continues to excel over the next two years; smaller process nodes yield less expensive chips with voltage/performance advantages.

Stronger Ties Between Chip Use and Parts

The number of microprocessor models has skyrocketed in the past few years, confusing everybody and costing chip makers a fortune in inventory management (e.g., write-downs). This can’t continue, because every chip variation goes through an expensive set of usability and compatibility tests costing up to millions of dollars per SKU (stock-keeping unit, e.g., a unique microprocessor model spec). That suggests we’ll see a much closer match between uses for specific microprocessor variations and the chips fabricated to meet the specific market and competitive needs of those uses. By 2015, I believe we’ll see a much more delineated set of chip uses and products:

Smartphones – the low-end of consumer processors. Phone features are reaching maturity: there are only so many pixels and videos one can fit on a 4″ (5″?) screen, and gaming performance is at the good-enough stage. Therefore, greater battery life and smarter use of the battery budget become front and center.

The reason for all the effort is a 400 million unit global smartphone market. For cost and size reasons, prowess in mating processors with radios and support functions into systems on a chip (SoCs) is paramount.

The horse to beat is ARM Holdings, whose architecture is used by the phone market leaders including Samsung, Apple, nVidia, and Qualcomm. The dark horse is Intel, which wants very much to grab, say, 5% of the smartphone market.

Reusing chips across markets is becoming a clever way to glean profits in an otherwise commodity chip business. So I’ll raise a few eyebrows by predicting we’ll see smartphone chips used by the hundreds in microservers (see Part 2) inside the datacenter.

Tablets – 7″ to 10″ information consumption devices iconized by Apple’s iPad and iPad Mini. These devices need to do an excellent job on media, web browsing, and gaming at the levels of last year’s laptops. The processors and the entire SoCs need more capabilities than smartphones. Hence a usage category different from smartphones. Like smartphones, greater battery life and smarter use of the electrical budget are competitive differentiators.

Laptops, Mainstream Desktops, and All-in-One PCs – Mainstream PCs bifurcate over the next couple of years in different ways than in the past. I’m taking my cue here from Intel’s widely leaked decision to make 2013-generation (i.e., Haswell) SoCs that are soldered permanently to the motherboard instead of socketed. This is not a bad idea, because almost no one upgrades a laptop processor, and only enthusiasts upgrade desktops during the typical 3-5 year useful PC life. Getting rid of sockets reduces costs, improves quality, and allows for thinner laptops.

The point is that there will be a new class of parts with the usual speed and thermal variations that are widely used to build quad-core laptops, mainstream consumer and enterprise desktops, and all-in-one PCs (which are basically laptops with big built-in monitors).

The processor energy-efficiency drive pays dividends in much lower laptop-class electrical consumption, allowing instant-on and much longer battery life. Carrying extra batteries on airplanes becomes an archaic practice (not to mention a fire hazard). The battle is MacBook Air versus Ultrabooks. Low-voltage becomes its own usage sub-class.

Low End Desktops and Laptops – these are X86 PCs running Windows, not super-sized tablet chips. The market is low-cost PCs for developed markets and mainstream in emerging markets. Think $299 Pentium laptop sale at Wal-Mart. The processors for this market are soldered, dual-core, and SoC to reduce costs.

Servers, Workstations, and Enthusiasts – the high end of the computing food chain. These are socketed, high-performance devices used for business, scientific, and enthusiast applications where performance trumps other factors. That said, architecture improvements, energy efficiency, and process shrinks make each new generation of server-class processors more attractive. Intel is the market and technology leader in this computing usage class, and has little to fear from server-class competitors over the next two years.

There is already considerable overlap in server, workstation, and enthusiast processor capabilities. I see the low-end Xeon 1200 moving largely to soldered models. The Xeon E5-2600 and Core i7 products gain more processor cores and better electrical efficiency over the Haswell generation.

Part 2: Form-Factors

Part 3: Application of Computing

Dell Inspiron 15z

Say Goodbye to Your Favorite Teacher

John Thomas writes:

Don’t bother taking an apple to school for your favorite teacher, unless you want to leave it in front of a machine. The schoolteacher is about to join the sorry ranks of the service station attendant, the elevator operator, and the telephone operator, whose professions have been rendered obsolete by technology.

The next big social trend in this country will be to replace teachers with computers. It is being forced by the financial crisis afflicting states and municipalities, which are facing red ink as far as the eye can see. From a fiscal point of view, of the 50 US states, we really have 30 Portugals, 10 Italys, 10 Irelands, 5 Greeces, and 5 Spains.

The painful cost cutting, layoffs, and downsizing that have swept the corporate world for the past 30 years are now being jammed down the throat of the public sector, the last refuge of slothful management and indifferent employees. Some 60% of high school students are already exposed to online educational programs, which enable teachers to handle class sizes far larger than the 40 students now common in California.

It also makes it far easier to impose pay-for-productivity incentives on teachers, like linking teacher pay to student test scores, since a performance review is only a few mouse clicks away. These programs also qualify for government funding programs, like “Race to the Top.” Costly textbooks can be dispensed with.

Blackboard (BBBB) is active in the area, selling its wares to beleaguered school districts as student/teacher productivity software. The company has recently been rumored as a takeover target of big technology and publishing companies eager to get into the space.

The alternative is to bump classroom sizes up to 80, or close down schools altogether. State deficits are so enormous that I can see public schools shutting down, privatizing their sports programs, and sending everyone home with a laptop. The cost savings would be enormous. No more pep rallies, prom nights, or hanging around your girlfriend’s locker. Of course, our kids may turn out a little different, but they appear to be at the bottom of our current list of priorities.

Creative destruction is also at work in higher education. Sixteen universities have created free online courses taught by popular professors. When the University of Illinois announced it would offer online courses for free, fourteen thousand prospective students came running. It would appear that the Economics 101 supply-demand curve goes exponential when the price is zero, as it should.
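The zero-price quip can be made concrete with a toy constant-elasticity demand curve. The functional form is standard textbook economics, but the parameters below are my own illustration, not from any cited enrollment data:

```python
# Constant-elasticity demand: q = a / p**elasticity.
# As price falls toward zero, quantity demanded explodes rather than
# growing linearly: the "goes exponential" effect described above.

def demand(price: float, a: float = 1000.0, elasticity: float = 1.5) -> float:
    """Quantity demanded at a given price under constant elasticity."""
    return a / price ** elasticity

for p in (100, 10, 1, 0.1):
    print(f"price {p:>5}: demand ~{demand(p):,.0f}")
```

Strictly speaking the curve diverges rather than going exponential, but the practical point stands: cut tuition toward zero and demand blows up.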

But is online education just wasted time in front of a screen? In the first study of its kind, Ithaka reports that, in a randomized study of 600 statistics students, one classroom session a week augmented by online courseware yields the same final-exam results as a conventional three-session-a-week course.

At the university level, costs are up 42% in the past decade (even after adjusting for aid). At the K-12 level, local budget pressure is cutting school budgets to the bone.
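For scale, it helps to translate that decade figure into an annual rate. The 42% comes from the paragraph above; the per-year number below is derived compound-growth arithmetic, not a reported statistic:

```python
# Convert "costs up 42% over 10 years" into an equivalent annual rate:
# (1 + total) = (1 + annual) ** years, solved for annual.

total_increase = 0.42   # 42% over the decade, per the text
years = 10
annual_rate = (1 + total_increase) ** (1 / years) - 1
print(f"equivalent annual increase: {annual_rate:.2%}")  # about 3.6% per year
```

Roughly 3.6% a year, compounding, which is how “only” a few percent annually becomes 42% in a decade.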

My conclusion is that online education is reaching the mainstream, aided by enormous pressures to cut costs and deliver predictable outcomes. Keep an eye on quality and avoid fads.

Somewhere in the not too distant future employers will have to decide on a big change from traditional credentials in hiring decisions. A college diploma today is a ticket to a white-collar job (at least it would be again if the economy picked up). Will employers hire students who have passed 120 credits of free, online college courses? Or will they demand a sheepskin that costs $200,000 and accompanies students who have passed 120 credits of paid, mostly-online college courses?

Will the motivated free-college students who lack back-breaking debt be better entry employees? I suspect so.

The Good Old Days of Education

Thoughts on Intel’s New 22nm 3D Transistors

Yesterday, Intel announced the next generation of its transistor process at 22 nm. New products based on the 22 nm transistors will begin arriving with the Ivy Bridge family in early 2012. What was not expected was that Intel would bet the fab on a radically new way of laying down transistors, one that puts the company a generation ahead of the silicon industry.

After briefings by Intel executives yesterday, my conclusion is that Intel has really pulled ahead. I’ve sat through four previous generations of new-transistor announcements, and this latest one is the first to get my serious attention.

The science of what Intel has done is relatively easy to explain (see more depth here and here): for fifty years, integrated circuits have been laid out like city street maps in two dimensions, called “planar”. As the fabrication process has now shrunk to 22 nm, the shrinking physical area of each transistor creates huge problems in current leakage and gate current. Intel could have done another generation by shrinking its Sandy Bridge 32 nm transistors to 22 nm planar transistors, picking up modest performance and power improvements (10%-20% was widely speculated). While Intel would be first to market with 22 nm, a ten-percent-or-so improvement is a ho-hum to the (jaded) computer industry. But Intel announced a switch to an industry-first three-dimensional transistor, fooling the market watchers.

3D transistors are laid on top of the usual circuit layout (see photo below). 3D transistor gates are wrapped around three sides of a vertical fin, hence the name tri-gate. Those three-sided contact points make the transistor much more efficient than planar transistors:

  • working with much less input current
  • allowing a doubling of density, hence smaller chips
  • requiring about half as many power transistors

The Benefits of Intel’s 22nm Transistors
Compared to today’s Sandy Bridge 32 nm transistors, 22nm 3D transistor microprocessors will perform as well at half the power. For mobile, your battery life doubles. This has enormous implications for mainstream thin-and-light Core family notebooks in 2012, followed by Atom-based tablets and smartphones. For desktops and servers, you’ll see a growing family of more power-efficient processors slotted into electricity-constrained environments.

At the same power levels as today’s Sandy Bridge 32 nm transistors, you’ll find Ivy Bridge performing about 37% faster, a lot more eye-opening than the 10%-20% we anticipated.
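A back-of-the-envelope model shows why lower-voltage transistors buy both claims above. This uses the generic CMOS dynamic-power relation (P = C·V²·f), my own illustration rather than Intel’s disclosed figures:

```python
# Dynamic power in CMOS scales as P = C * V^2 * f
# (switched capacitance, supply voltage, clock frequency).
# A transistor that hits the same clock at lower voltage saves
# power quadratically; alternatively the headroom can be spent on speed.

def relative_power(v_scale: float, f_scale: float = 1.0) -> float:
    """Power relative to a baseline chip when voltage and frequency scale."""
    return (v_scale ** 2) * f_scale

# Same clock at ~29% lower supply voltage: half the power.
print(round(relative_power(0.707), 2))      # -> 0.5

# Same voltage, 37% higher clock: 37% more power under this simple model,
# headroom a better transistor claws back elsewhere (leakage, gate drive).
print(round(relative_power(1.0, 1.37), 2))  # -> 1.37
```

Real parts trade along this curve per SKU, which is how the same process yields both half-power mobile chips and noticeably faster desktop parts.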

The much improved power-performance ratio that we’ll see in Ivy Bridge with 22nm silicon gives Intel great leeway in creating enticing products. I expect:

  • Overclockers will push 6GHz while performance SKUs exceed 4.0GHz
  • An expanded Turbo range of 1-1.5GHz allowing for a performance boost at whim
  • Apple MacBook Air for the masses. Or at least the mainstream 1.5lb $600 Windows business laptop.
  • By 2013, the tablet and smartphone war with ARM will see pitched battles. Ultra-low voltage Intel microprocessors finally have close-enough battery life to compete head-to-head.
  • With a wider range of performance and power-sipping product opportunities, we’ll see more and different form factors.

Where 22nm 3D Transistors Place Intel
With 22nm planar transistors, Intel would lead the microprocessor industry. But three-dimensional transistors on top of a 22nm process likely put Intel two nodes ahead of the competition. Intel will lead in production-ready transistor technology for at least four years.

The fact that Intel will produce all of its 2012 Ivy Bridge microprocessors using 100% 3D tri-gate 22nm transistors tells me the company is convinced it has the technology completely in hand.

All in all, 22nm 3D transistors are truly revolutionary. By expanding the gate area with a 3D vertical fin, Intel is showing a higher-probability path to continuing Moore’s Law at 10nm and below in the 2015 timeframe. That assurance alone is worth tens of billions to the technology industry.

22nm tri-gate transistor