Self-Driving Software: Why We Need E Pluribus Unum

Today, numerous large and small companies around the world are working diligently on perfecting self-driving software. All the large traditional automobile companies are involved, as are big technology firms such as Google, Intel, and Microsoft, and even Uber. These companies are working in true twentieth-century capitalist fashion: they’re doing it all independently and secretly. This approach leads to sub-optimal technology and foreseeable tragedies.

Self-Driving Vehicles Use Artificial Intelligence (AI)

Programming a self-driving vehicle (SDV) by traditional software-development methods is so fraught with complexity that no one, to my knowledge, is attempting it. So scrap that idea. Instead, developers have flocked to artificial intelligence, a red-hot technology built on rather old ideas about neural networks.

There’s a lot to AI technology beyond the scope of this blog. A quick Internet search will get you started on a deep dive. For today, let’s sketch a common approach to AI application development:

  • First, an AI rules-based model is fed real-world scenarios, rules, and practical knowledge. For example, “turning left into oncoming traffic (in the USA but not the UK) is illegal and hazardous and will likely result in a crash. Don’t do that.” This first phase is the AI Learning Phase.
  • Second, the neural network created in the learning phase is executed in a vehicle, often on a specialized chip, graphics processing unit (GPU) or multi-processor. This is the Execution Phase.
  • Third, the execution unit records real-world observations while driving, eventually feeding them back into the learning model (a minimal sketch of this loop follows the list).
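To make the three phases concrete, here is a minimal sketch of the learn-execute-feedback loop in Python. The scenario data, the toy lookup "model", and the function names are illustrative assumptions only; a production SDV stack would use a trained neural network, not a dictionary.

```python
# Minimal sketch of the learn / execute / feed-back loop described above.
# Everything here is illustrative: the scenarios, the "model," and the
# decision logic are stand-ins, not any manufacturer's actual SDV software.

# Phase 1: Learning. Feed the model labeled scenarios, rules, and knowledge.
training_scenarios = [
    {"situation": "left turn into oncoming traffic", "region": "USA", "action": "do_not_turn"},
    {"situation": "clear intersection, green light", "region": "USA", "action": "proceed"},
]

def train(scenarios):
    """Build a lookup 'model' from labeled scenarios (a stand-in for a neural network)."""
    return {(s["situation"], s["region"]): s["action"] for s in scenarios}

model = train(training_scenarios)

# Phase 2: Execution. The trained model runs in the vehicle and picks an action.
def decide(model, situation, region):
    return model.get((situation, region), "hand_control_to_driver")

# Phase 3: Feedback. Real-world observations are logged and folded back into training.
observation_log = []

def observe(situation, region, outcome):
    observation_log.append({"situation": situation, "region": region, "action": outcome})

observe("left turn into oncoming traffic", "UK", "proceed")    # legal in the UK
model = train(training_scenarios + observation_log)             # retrain with feedback

print(decide(model, "left turn into oncoming traffic", "UK"))   # -> proceed
```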

The Problem of Many

Here’s the rub. Every SDV developer is on its own, creating a proprietary AI model with its own set of learning criteria. Each AI model is only as good as the data fed into its learning engine.

No single company is likely to encounter or imagine all of the three-standard-deviation, Black Swan events that can and will lead to vehicle tragedies and loss of life. Why should Tesla and the state of Florida be the only beneficiaries of the lessons from a particular fatal crash? The industry should learn from the experience too. That’s how society progresses.

Cue the class-action trial lawyers.

E Pluribus Unum

E Pluribus Unum is Latin for “out of many, one”. (Yes, it’s the motto of the United States.) My proposal is simple:

  • The federal government should insist that all self-driving vehicles use an AI execution unit that is trained, in its learning phase, with an open-source database of events, scenarios, and real-world feedback. Out of many AI training models, one model. (A hypothetical record from such a database is sketched after this list.)
  • The Feds should preempt state regulation of core AI development and operation.
  • Vehicles that use the federalized learning database for training should receive limited class-action immunity, much as vaccine makers do today.
  • The Feds should charge the auto industry fees that cover the costs of the program.
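As a thought experiment, here is a minimal sketch, in Python, of what one record in the proposed open-source training database might look like. The class, field names, and values are hypothetical illustrations, not a proposed standard.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of one record in the proposed shared training database.
# Field names, categories, and values are illustrative assumptions only.

@dataclass
class DrivingEvent:
    event_id: str                  # unique identifier assigned by the registry
    jurisdiction: str              # e.g., "US-FL"; rules differ by region
    scenario: str                  # human-readable description of the situation
    sensor_summary: dict           # anonymized sensor readings at the time of the event
    outcome: str                   # "crash", "near_miss", "uneventful", ...
    correct_response: str          # the action the model should learn to take
    contributed_by: str = "anonymized"        # submitting manufacturer, anonymized
    tags: List[str] = field(default_factory=list)

example = DrivingEvent(
    event_id="2016-000001",
    jurisdiction="US-FL",
    scenario="crossing truck not detected against bright sky",
    sensor_summary={"camera_confidence": 0.2, "radar_track": "filtered_out"},
    outcome="crash",
    correct_response="brake_and_alert_driver",
    tags=["perception_failure", "black_swan"],
)
```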

Conclusion

From a social standpoint, there’s no good reason for wild-west capitalism over proprietary AI learning engines that leads to avoidable crashes and accidents. With one common AI learning database, all SDVs will get smarter, faster, because they benefit from the collective experience of the entire industry. By allowing and encouraging innovation in AI execution engines, the industry can focus on areas that yield better-faster-cheaper-smaller products rather than on avoiding human-risk situations. Performance benchmarks are a well-understood concept.

Philosophically, I don’t turn first to government regulation. But air traffic control, railroads, and numerous areas of medicine are regulated without controversy. Vehicle AI is ripe for regulation before production vehicles roll out by the millions over the next decade.

I am writing this blog because I don’t see the subject being discussed. It ought to be.

Comments and feedback are welcome. See my feed on Twitter @peterskastner.

Buying a PC for Your Third-World Adventure

A reader of this blog asked “What PC should I buy that can survive the erratic electricity of a third-world residency?” The answer, of course, is “It depends how much you want to spend.” But having reliable computing in a less-developed setting need not break the bank.

Assumptions

You’re an average, modern computer user with professional (i.e., office), social, and personal computing needs preparing to reside outside a first-world power grid. You could be in the mountains of Colombia or Colorado, or, like me, at the end of a one-kilometer driveway. You need to be able to use your PC at any time, but not necessarily all the time. You have a budget.

My previous stories on this subject are here. Your problem is spotty power that can come and go at any moment, day or night, and stay off for hours. Your collateral problem is poor power quality: spikes, low and high voltage, surges, and intermittent on/off cycles. These can and will destroy an unprotected PC power supply in short order.

Strategy

The strategy is to put as much inexpensive stored electricity (i.e., batteries) in front of the computer’s power supply as practical. Duh! The easiest implementation is to use a laptop, which comes with a built-in battery. Modern laptops have hours of self-contained power while you wait for the power grid, backup generator, or tomorrow’s sun to renew your power supply.

Still easy but more expensive choices are a desktop all-in-one (such as an Apple iMac) or a regular desktop. In both desktop cases, you’ll want an uninterruptible power supply (UPS), which stores AC grid power in a battery and delivers it to your electronic devices.

With those assumptions and strategy in mind, here is a prioritized list of what to buy and why to buy it:

The Basics

  • A laptop. Commercial grade (e.g., Dell XPS) has higher build quality than consumer grade (e.g., Dell Inspiron). You get what you pay for. Consider: 17″ screen-size as desktop replacement; SSD for reliability and speed. Your choice: Windows, Mac, even Chromebook.
  • A high-quality surge protector to filter as much electrical grief as possible. Mandatory unless you use a UPS.
  • A bigger and/or backup laptop battery. Greater off-grid time. More efficient than a UPS. Lowest cost when bought bundled with a new laptop.

The Upgrades

  • A powerful UPS, whose capacity is rated in volt-amperes (VA). Over 1,000VA is better; below 500VA is probably pointless with a laptop (a rough runtime estimate is sketched after this list). The UPS has receptacles for other electrical necessities, so it becomes your electrical hub. UPS systems also include power-conditioning circuitry, so your PC will always get clean power. Finally, PC applications and a USB connection to the UPS can automatically and safely shut any PC down before the UPS itself exhausts its batteries.
  • A portable hard drive storage device to back up your PC. If this were me, it would rank in the Basics as a “must have”. The portable hard drives require no electrical power beyond a USB cable. With electricity (from your UPS), there are faster/greater capacity options.
  • A USB 3.0 Hub for greater I/O connectivity. Your laptop or all-in-one will never have enough USB ports for the printers, backup storage, Bluetooth speakers, and mobile devices that need charging. Your choices are four or seven ports. Go with the powered seven-port hub. After all, everyone in your house (office) will want to leech off your clean power. Plan accordingly.
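To see why a small UPS adds little for a laptop, here is a back-of-envelope runtime estimate in Python. The battery capacities, loads, and efficiency figure are assumed, typical-consumer values, not specifications for any particular model; check a manufacturer's runtime chart before buying.

```python
# Back-of-envelope UPS runtime estimate. All numbers are assumptions for
# illustration; real units publish runtime charts that should be used instead.

def runtime_minutes(battery_wh, load_watts, inverter_efficiency=0.85):
    """Approximate minutes of runtime for a given load."""
    return battery_wh * inverter_efficiency / load_watts * 60

laptop_load = 40        # watts: laptop plus modem/router, assumed
desktop_load = 150      # watts: all-in-one desktop plus peripherals, assumed

small_ups_wh = 70       # assumed usable battery energy in a ~500VA consumer UPS
large_ups_wh = 180      # assumed usable battery energy in a ~1500VA consumer UPS

print(f"500VA-class UPS, desktop load:  {runtime_minutes(small_ups_wh, desktop_load):.0f} min")
print(f"1500VA-class UPS, desktop load: {runtime_minutes(large_ups_wh, desktop_load):.0f} min")
print(f"1500VA-class UPS, laptop load:  {runtime_minutes(large_ups_wh, laptop_load):.0f} min")
```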

The Options

Here’s where the budget goes out the window, but your level of electricity paranoia is nobody else’s business:

  • A secondary monitor scales your laptop’s screen to desktop size or becomes a second screen with more real estate.
  • Backup generator sized to your home electricity load. Best purchased locally as you will require service eventually. Requires (clean) gasoline.
  • Solar power generation requires solar panels, an AC inverter, and a distribution hub. It can have its own battery for storage or use the UPS already in our specs. The money problem: a 300-400 watt solar installation can easily cost as much as or more than our laptop computing device.
  • The ultimate upgrade for this scenario is a Ford C-Max Energi plug-in hybrid car with internal 7.4 kWh batteries, 2 AC power outlets, USB charging, and 12-volt power. You can also drive it. $31,770 and up.

Is a Tablet an Alternative?

A tablet or a laptop/tablet (i.e., a two-in-one) is worth considering. Portable, mobile, self-contained cellular network option. Some have a desktop operating system. The keyboard and mouse can use easily rechargeable AA batteries. Device operating life often exceeds eight hours. Rechargeable from a small solar panel. Connects to Bluetooth peripherals and to a video monitor/TV via an HDMI cable.

Minimalist computing dramatically simplifies backup power requirements.

Consolidated electronics, such as a tablet connected to an LCD monitor that doubles as a TV, make planning easier and redundancy less necessary.

The Network

Getting on the Internet has its own set of problems and costs. You’ll need local knowledge to make cost-effective decisions.

Assuming a controllable data budget, the easiest Internet on-ramp is to use your smartphone as a hotspot and connect your laptop via Bluetooth. You won’t find unlimited data plans in the third world, so this approach needs careful usage-based planning; a rough budget sketch follows below.
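Here is a rough monthly data-budget sketch in Python for the hotspot approach. The per-activity figures, plan cap, and overage price are assumptions; substitute local numbers before deciding.

```python
# Rough monthly data budget for a smartphone-hotspot Internet connection.
# The per-activity figures below are assumptions; substitute your own usage.

daily_usage_mb = {
    "email_and_messaging": 50,
    "web_browsing": 150,
    "cloud_office_documents": 100,
    "software_updates_average": 100,
    "video_30_min_at_4mb_per_min": 120,
}

monthly_gb = sum(daily_usage_mb.values()) * 30 / 1024
print(f"Estimated monthly usage: {monthly_gb:.1f} GB")

plan_gb = 5           # assumed monthly cap on a local prepaid plan
overage_per_gb = 10   # assumed cost per GB beyond the cap, in dollars

overage = max(0, monthly_gb - plan_gb)
print(f"Projected overage: {overage:.1f} GB (~${overage * overage_per_gb:.0f})")
```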

A conventional desktop or laptop setup will require one or more network access devices for the cable, wireless broadband, or satellite network. Plan to power-protect these devices too by plugging them into your UPS. However, that limits PC placement to being close to the network access point.

Follow @PeterSKastner on Twitter

Note: the products linked in this blog post are not endorsed by the author. The author has no financial ties to any product mentioned in this blog post.

 

“My ISP is a Solar-Powered Drone.”

Google, the ad-driven search giant, and Facebook, the social connections giant, are fighting over airplane drone technology companies. What’s that all about?

Solar-powered drones would, when they’re ready for mass-market in the next five years, be able to fly for weeks or months. They can take 2D and 3D photos resulting in better and more up-to-date maps. And they could serve as aerial Internet connections. It’s the latter that got my attention because it threatens the status quo in developed nations and opens new markets in developing nations.

Aerial Internet Drones (AIDs) suggest a breakout technology that solves — or at least remediates — the “wireless everywhere” mantra of the past decade. In developed countries such as the United States, intractable wireless problems include inadequate wireless bandwidth in high-device-density areas (e.g., mid-town New York), necessitating more cell towers and greater slices of the electromagnetic spectrum. Moreover, “poor wireless coverage meets not-in-my-neighborhood” resistance and inadequate capital make it politically and economically difficult to add enough cell towers to deliver reliable wireless broadband such as LTE in suburban and rural areas.

In underdeveloped geographies, which represent attractive new markets for the global technology and wireless companies, the expense of building adequate mobile broadband infrastructure creates a chicken-and-egg problem.

So, the vision to solve both developing and developed wireless broadband demand is to put up a global network of drones that serve as radio relays for wireless Internet connections. AIDs would be a new form of Internet router, loitering around a more-or-less fixed point in the sky.

At the right altitude, an AID has better line-of-sight than a cell tower located over the hill. The AID theoretically offers greater geographic coverage and often better signal quality than today’s cell tower networks. At a cost of less than $10 million per equipped AID, my envelope calculations suggest AID network costs compare favorably with cell towers for comparable geographic coverage.
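My envelope math runs roughly like the following Python sketch. The coverage radii, tower cost, and target area are assumptions chosen for illustration, not sourced industry figures; only the sub-$10 million per-AID cost comes from the paragraph above.

```python
import math

# Back-of-envelope comparison of aerial Internet drones (AIDs) vs. cell towers
# for covering a large rural region. Every figure below is an assumption chosen
# to illustrate the shape of the calculation, not a sourced industry number.

def units_needed(region_km2, coverage_radius_km):
    """How many units to blanket a region, ignoring overlap and terrain."""
    per_unit_km2 = math.pi * coverage_radius_km ** 2
    return math.ceil(region_km2 / per_unit_km2)

region_km2 = 100_000          # roughly a small country or a large state

tower_radius_km = 10          # assumed practical rural cell-tower radius
tower_cost = 250_000          # assumed build-out cost per tower, plus backhaul

drone_radius_km = 100         # assumed usable radius from stratospheric altitude
drone_cost = 10_000_000       # "less than $10 million per equipped AID"

towers = units_needed(region_km2, tower_radius_km)
drones = units_needed(region_km2, drone_radius_km)

print(f"Towers: {towers} units, ~${towers * tower_cost / 1e6:.0f}M")
print(f"Drones: {drones} units, ~${drones * drone_cost / 1e6:.0f}M")
```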

In developing areas such as Africa, an AID network is a solution to creating metro- and rural-area Internet wireless infrastructure rapidly and without the difficulties of building land-line-connected cell towers.

Cellphone networks connect cell towers with land line connections to each other and to an Internet wired backhaul. An AID network needs to connect wirelessly to a) client cellphones and the Internet of Things and b) to a radio ground-station connected to an Internet wired backhaul. The radio ground-station is the crux of the difficulties I foresee.

The ground-station requires radio spectrum to communicate up to and down from the AID network. It represents a new demand on the over-burdened and highly political use of the electromagnetic spectrum. Where does the spectrum come from, whose ox is gored, and how are the skids greased?  Think lobbying.

Moreover, the incumbent cable and wireless ISPs (i.e., Comcast, Verizon, AT&T, Sprint, Dish, et al) are not likely to give up their near monopolies on Internet access by devices, homes, and businesses without a knockdown, drag-out political fight followed by years of litigation.

Add citizen privacy related to drone picture taking to this highly volatile Internet-industrial-complex wireless food fight and you can expect great spectator sport. Although in developing countries, the issue will be described as “drone spying by the NSA”.

Like many, I would greatly appreciate and even pay more for better wireless coverage and higher wireless device bandwidth. First, Google and Facebook have to solve the real technology problems of getting the AIDs into the sky. Second, they have to muscle a (much needed) rethink of wireless spectrum use and the roles of future ISPs through the political sausage factory, and nail down the new spectrum they need. Combined, this is a heavy lift.

So, with a sigh of regret, I suspect it will be quite a while before I can say “My ISP is a Solar-Powered Drone.”

Follow me on Twitter @PeterSKastner.

Solar drone (photo: Titan Aerospace/Associated Press)

Peak Technology or Technology Peak?

The theory of peak oil — the point at which the Earth’s oil production begins to dwindle — was a hot and debatable topic last decade. There are lots of signs that we are at a technology demand peak. Is this peak permanent, or how will we get past it?

The last-decade argument that oil production had permanently peaked proved to be laughably incorrect. Hydraulic fracturing (“fracking”) technology developed in the United States changed the slope of the oil production curve upwards. This analyst has no intention of becoming a laughing stock by suggesting that digital technology innovation has peaked. Far from it. However, few things in nature follow a straight line; it certainly appears that digital technology adoption — demand — has slowed. We are in a trough and can’t foresee the other side.

One good place to look for demand forecasts is the stock market.

Smart Phones and Tablets
Last month, gadget profit-leaders Samsung and Apple both took hits based on slower growth forecasts. “Pretty much everyone who can afford a smartphone or tablet has one, so where does the profit growth come from?” was the story line. Good question.

This month, AT&T and T-Mobile announced they would lease smartphones to customers instead of selling them outright with a carrier discount. The phones and tablets coming off lease will be re-sold into the burgeoning used-gadget market. It’s now all too easy to get new-enough gadget technology on the used market. After all, last year’s hardware can still run this year’s free, new software upgrade.

On the surface, it appears that the global market for $600 smartphones and tablets is at or close to saturation — a peak.

Desktop and Notebook PCs
The stock market is not treating traditional technology makers very well. H-P is coming back from a near-death experience. Its stock is half what it was two years ago. Dell wants to go private so it can restructure and deal with market forces that are crushing margins and profits. Even staid and predictable IBM has lost its mojo over the past five quarters. Microsoft missed.

These technology makers are dealing with PCs, the data center, and services. They are not major players in the smartphone/gadget market. Their focus is on doing what they used to do more efficiently. That strategy is not working.

The desktop and notebook PC markets are almost all replacement units in developed countries. Macro-economics has dramatically slowed emerging-market growth in formerly hot places like Brazil, Russia, India, and China (BRIC). New customers are being added more slowly and at higher cost, and existing customers have increasingly voted not to upgrade as frequently. My 2008 Apple MacBook Air, cutting edge and quite expensive at the time, is still adequate for road trips. My first-generation Sandy Bridge Ultrabook has adequate battery life. There’s no compelling reason, most buyers tell us, to accelerate the PC replacement cycle.

Well, one temporary accelerator is the end of support next year for Windows XP. With auditors and consultants screaming about liability issues, non-profits and government are rolling in new PCs to replace their ten-year-old kit. Thank goodness. But seriously, ten-year-old PCs have been getting the job done, as defined by user management.

Note also that a new PC likely means a user-training upgrade to Windows 8. Both consumers and businesses are avoiding this upgrade more or less like the plague. There is no swell of press optimism that Windows 8.1 this fall will do the trick. PC replacement is a pain already, so few want to jump on an OS generation change as well.

Data Center
The data center market shows some points of light. Public cloud data centers run by the big boys like Apple, Google, Facebook, and Amazon are growing like gangbusters. So is high-performance computing, where ever more complex models consume as many teraflops as one can afford to throw at the problem. Recent press reports suggest that “national security” is a growing technology consumer. [understatement]

However, enterprise data centers, driven by cautious corporate management, are growing more slowly than five years ago, and this market outsizes the points of light. Moreover, the latest generation of server technology really does support more users and apps than the gear being replaced. With headcount down and fewer new enterprise apps, fewer racks are now getting the computing workload done. (Storage, of course, is growing exponentially.) We also expect a growing trend towards “open computing” servers, a trend that will suck hardware margin and services revenue from the big server-technology makers.

Navigating From the Trough
So, mobile gadgets, traditional PCs, and the data center — the three legs of the digital technology stool — are all growing more slowly than in the recent past. This is the “technology demand peak” as we see it. We are presently past the peak and into the trough.

How deep is the trough and how long will it last? LOL. If we knew that, we could comfortably retire! Really, there are roughly a couple of trillion dollars in market cap at stake here. If the digital tech market growth remains anemic beyond another twelve months, then there will be too many tech players and too few chairs when the music stops. Any market observer can see that.

Our own view is that it will take a number of technology innovations that will propel replacement demand and drive new markets. The solution is new tech, not better-faster-smaller old tech. Where’s the digital equivalent of fracking? (Actually, fracking would not be possible without a lot of newly invented, computer-based technology.)

First, the global macro-economic slowdown is likely to resolve itself positively, perhaps soon. We don’t buy the global depression arguments. There are billions of potential middle-class new computer consumers and the data center backend to support them.

Next, mobile gadgets and PCs are on the verge of exciting new user interfaces: holographic 3D displays that put you in the picture, keyboards projected on any flat surface, and conference-room projection capabilities in every smartphone. New user interfaces, shared with PCs and notebooks, will be based on perceptual computing, the (wo)man-machine interface that recognizes voice, gestures, and eye movement, for starters.

Big data and the cloud are data-center conversation pieces. But these technologies are really toddlers, at best. Data-sifting technologies like the grandson of Hadoop will enable more real-time enterprise intelligence and wisdom. HPC is limited only by the money available to invest. Traditional data centers will re-plumb with faster I/O, distributed computing, and the scale-up and scale-down capacity of an electric utility — while needing less from the electrical utility.

We don’t have all the answers, but are convinced it will take an industry kick in the pants to get us towards the next peak. More of the same is not a recipe for a solution. We are in a temporary downturn, not just past peak technology.

Your thoughts and comments are welcome.

Photo Credit: Eugene Richards

Pulse Check: How Intel is Scaling to Meet the Decade’s Opportunities

Eighteen months ago, Intel announced it would address the world’s rapidly growing computing continuum by investing in variations on the Intel Architecture (IA). It was met with a ho-hum. Now, many product families are beginning to emerge from the development labs and head towards production. All with IA DNA, these chip families are designed to be highly competitive in literally dozens of new businesses for Intel, produced in high volumes, and delivering genuine value to customers and end users.

Intel is the only company with an architecture, cash flow, fabs, and R&D capable of scaling its computing engines up and down to meet the decade’s big market opportunities. What is Intel doing and how can they pull this off?

The 2010’s Computing Continuum
Today’s computing is a continuum that ranges from smartphones to mission-critical datacenter machines, and from desktops to automobiles.  These devices represent a total addressable market (TAM) approaching a billion processors a year, and will explode to more than two billion by the end of the decade.  Of that, traditional desktop microprocessors are about 175 million chips this year, and notebooks, 225 million.

For more than four decades, solving all the world’s computing opportunities required multiple computer architectures, operating systems, and applications. That is hardly efficient for the world’s economy, but putting an IBM mainframe into a cell phone wasn’t practical. So we made do with multiple architectures and inefficiencies.

In the 1990’s, I advised early adopters NCR and Sequent in their plans for Intel 486-based servers. Those were desktop PC chips harnessed into datacenter server roles. Over twenty years, Intel learned from its customers to create and improve the Xeon server family of chips, and has achieved a dominant role in datacenter servers.

Now, Intel Corporation is methodically using its world-class silicon design and fabrication capabilities to scale its industry-standard processors down to fit smartphones and embedded applications, and up into high-performance computing applications, as two examples. Scaling in other directions is still in the labs and under wraps.

The Intel Architecture (IA) Continuum
IA is Intel’s architecture and instruction set, common (with feature differentiation) to the Atom, Core, and Xeon microprocessors already used in the consumer electronics, desktop and notebook, and server markets, respectively.  These microprocessors are able to run a common stack of software such as Java, Linux or Microsoft Windows.  IA also represents the hardware foundation for hundreds of billions of dollars in software application investments by enterprises and software package developers, investments that remain valuable assets as long as hardware platforms can run them — and backwards compatibility in IA has protected those software investments.

To meet the widely varying requirements of this decade’s computing continuum, Intel is using the DNA of IA to create application-specific variants of its microprocessors.  Think of this as silicon gene-splicing.  Each variant has its own micro-architecture that is suited for its class of computing requirements (e.g., Sandy Bridge for 2011 desktops and notebooks). These genetically-related processors will extend Intel into new markets, and include instruction-set compatible microprocessors:

  • Embedded processors and electronics known as “systems on a chip” (SOCs) with an Atom core and customized circuitry for controlling machines, display signage, automobiles, and industrial products;
  • Atom, the general-purpose computer heart of consumer electronics mobile devices, tablets, and soon smartphones;
  • Core i3, i5, and i7 processors for business and consumer desktops and notebooks, with increasing numbers of variants for form-factor, low power, and geography;
  • Xeon processors for workstations and servers, with multi-processor capabilities advancing well into the mainframe-class, mission-critical computing segment;
  • Xeon datacenter infrastructure processor variants (e.g., storage systems, and with communications management a logical follow-on).

A Pause to Bring You Up To Date
Please do not be miffed: all of the above was published in February, 2011, more than two years ago. We included it here because it sets the stage for reviewing where Intel stands in delivering on its long-term strategy and plans of the IA computing continuum, and to remind readers that Intel’s strategy is hiding in plain sight for going on five years.

In that piece two years ago, we concluded that IA fits the market requirements of the vast majority of the decade’s computing work requirements, and that Intel is singularly capable of creating the products to fill the expanding needs of the computing market (e.g., many core).

With the launch of the 4th Generation Core 22nm microprocessors (code-name Haswell) this week and the announcement of the code-name Baytrail 22nm Atom systems on a chip (SoCs), it’s an appropriate time to take the pulse on Intel’s long-term stated direction and the products that map to the strategy.

Systems on a Chip (SoCs)
The Haswell/Baytrail launch would be a lot less impressive if Intel had not mastered the SoC.

The benefits of an SoC compared to the traditional multi-chip approach Intel has used up until now are: fewer components, less board space, greater integration, lower power consumption, lower production and assembly costs, and better performance. Phew! Intel could not build a competitive smartphone until it could put all of the logic for a computer onto one chip.

This week’s announcements include SoCs for low-voltage notebooks, tablets, and smartphones. The data center Atom SoCs, code-name Avoton, are expected later this year.

For the first time, Intel’s mainstream PC, data center, and mobile businesses include highly competitive SoCs.

SoCs are all about integration. The announcement last month at Intel’s annual investor meeting that “integration to innovation” was an additional strategy vector for the company hints at using many more variations of SoCs to meet Intel’s market opportunities with highly targeted SoC-based variants of Atom, Core, and Xeon.

Baytrail, The Forthcoming Atom Hero
With the SoCs for Baytrail in tablets and Merrifield in smartphones, Intel can for the first time seriously compete for mobile market share against ARM competitors on performance-per-watt and performance. These devices are likely to run the Windows 8, Android, and Chrome operating systems. They will be sold to carriers globally. There will be variants for local markets (e.g., China and Africa).

The smartphone and tablet markets combined exceed the PC market. By delivering competitive chips that run thousands of legacy apps, Intel has finally caught up on the technology front of the mobile business.

Along with almost the entire IT industry, Intel missed the opportunity that became the Apple iPhone. Early Atom processors were not SoCs, had poor battery life, and were relatively expensive. That’s a deep hole to climb out of. But Intel has done just that. There are a lot fewer naysayers than two years ago. The pendulum is now swinging Intel’s way on Atom. 2014 will be the year Intel starts garnering serious market share in mobile devices.

4th Generation Core for Mainstream Notebooks and PCs
Haswell is a new architecture implemented in new SoCs for long-battery-life notebooks, and with traditional chipsets for mainstream notebooks and desktops. The architecture moves the bar markedly higher in graphics performance, power management, and floating point (e.g., scientific) computations.

We are rethinking our computing model as a result of Haswell notebooks and PCs. Unless you are an intense gamer or workstation-class content producer, we think a notebook-technology device is the best solution.

Compared to four-year-old notebooks in Intel’s own tests, Haswell-era notebooks are: half the weight, half the height, get work done 1.8x faster, convert videos 23x faster, play popular games 26x faster, wake up and go in a few seconds, and with 3x battery life for HD movie playing. Why be tethered to a desktop?

Black, breadbox-size desktops are giving way to all-in-one (AIO) designs like the Apple iMac used to write this blog. That iMac has been running for two years at 100% CPU utilization with no problems. (It does medical research in the background folding proteins). New PC designs use notebook-like components to fit behind the screen. You’ll see AIOs this fall that lie flat as large tablets or go vertical with a rear kick-stand. With touch screen, wireless Internet and Bluetooth peripherals, these new AIOs are easily transportable around the house. That’s the way we see the mainstream desktop PC evolving.

And PCs need to evolve quickly. Sales are down almost 10% this year. One reason is global macro-economic conditions. But everybody knows the PC replacement cycle has slowed to a crawl. Intel’s challenge is to spark the PC replacement cycle. Haswell PCs and notebooks, as noted above, deliver a far superior experience than users are putting up with in their old, obsolescent devices.

Xeon processors for workstations, servers, storage, and communications
The data center is a very successful story for Intel. The company has steadily gained workloads from traditional (largely legacy Unix) systems; grown share in the big-ticket Top 500 high-performance computing segment; evolved with mega-datacenter customers such as Amazon, Facebook, and Google; and extended Xeon into storage and communications processors inside the datacenter.

The Haswell architecture includes two additions of great benefit to data-center computing. First, new floating-point architecture and instructions should improve scientific and technical computing throughput by up to 60%, a huge gain over the installed server base. Second, transactional memory is a technology that makes it easier for programmers to deliver fine-grained parallelism, and hence to take advantage of multi-core chips with multi-threaded programs, including making operating systems and systems software like databases run more efficiently.

In the past year, the company met one data-center threat in GPU-based computing with Xeon Phi, a server add-in card that contains dozens of IA cores running a version of Linux to enable massively parallel processing. Xeon Phi competes with GPU-based challengers from AMD and nVidia.

Another challenge, micro-servers, is more a vision than a market today. Nevertheless, Intel created the code-name Avoton Atom SoC for delivery later this year. Avoton will compete against emerging AMD- and ARM-based micro-server designs.

Challenges
1. The most difficult technology challenge that Intel faces this decade remains software, not hardware.  Internally, the growing list of must-deliver software drivers for hardware such as processor-integrated graphics means that the rigid two-year, tick-tock hardware model must also accommodate software delivery schedules.

Externally, Intel’s full-fray assault on the mobile market requires exquisite tact in dealing with the complex relationships with key software/platform merchants: Apple (iOS), Google (Android), and Microsoft (Windows), who are tough competitors.

In the consumer space such as smartphones, Intel’s ability to deliver applications and a winning user experience are limited by the company’s OEM distribution model. More emphasis needs to be placed on the end-user application ecosystem, both quality and quantity. We’re thinking more reference platform than reference hardware.

2. By the end of the decade, silicon fabrication will be under 10 nm, and it is a lot less clear how Moore’s Law will perform in the 2020’s. Nevertheless, we are optimistic about the next 10-12 years.

3. The company missed the coming iPhone and lost out on a lot of market potential. That can’t happen again. The company last month set up a new emerging-devices division charged with finding the next big thing around the same time others do.

4. In the past, we’ve believed that mobile devices — tablets and smartphones — were additive to PCs and notebooks, not substitutional. The new generation of Haswell and Baytrail mobile devices, especially when running Microsoft Windows, offer the best of the portable/consumption world together with the performance and application software (i.e., Microsoft Office) to produce content and data. Can Intel optimize the market around this pivot point?

Observations and Conclusions
Our summary observations have not changed in two years, and are reinforced by the Haswell/Baytrail SoCs that are this week’s proof point:

  • Intel is taking its proven IA platforms and modifying them to scale competitively as existing markets evolve and as new markets such as smartphones emerge.
  • IA scales from handhelds to mission-critical enterprise applications, all able to benefit from a common set of software development tools and protecting the vast majority of the world’s software investments.  Moreover, IA and Intel itself are evolving to specifically meet the needs of a spectrum of computing made personal, the idea that a person will have multiple computing devices that match the time, place and needs of the user.
  • Intel is the only company with an architecture, cash flow, fabs, and R&D capable of scaling its computing engines up and down to meet the decade’s big market opportunities.

Looking forward, Intel has fewer and less critical technology challenges than at any point since the iPhone launch in 2007. Instead, the company’s largely engineering-oriented talent must help the world through a complex market-development challenge as we all sort out what devices are best suited for what tasks. We’ve only scratched the surface of convertible tablet/notebook designs. How can Intel help consumers decide what they want and need so the industry can make them profitably? How fast can Intel help the market to make up its mind? Perhaps the “integration to innovation” initiative needs a marketing component.

If the three-year evolving Ultrabook campaign is an example of how Intel can change consumer tastes, then we think industry progress will be slower than optimal. A “win the hearts and minds” campaign is needed, learning from the lessons of the Ultrabook evolution. It will take skills in influencing and moving markets, skills Intel will need more of as personal computing changes over the next decade, for example, as perceptual computing morphs the user interface.

Absent a macro-economic melt-down, Intel is highly likely to enjoy the fruits of five years of investments over the coming two-year life of the Haswell architecture. And there’s no pressing need today to focus beyond 2015.

Biography

Peter S. Kastner is an industry analyst with over forty-five years experience in application development, datacenter operations, computer industry marketing, PCs, and market research.  He was a co-founder of industry-watcher Aberdeen Group in 1989.  His firm, Scott-Page LLC, consults with technology companies and technology users.

Twitter: @peterskastner

Haswell Core i7 desktop microprocessor


Apps Will Eat Us Out of House & Home

No single mobile application (app) is a problem. In fact, the iPhone and its Apple App Store five years ago opened a whole generation of technology for the benefit of humankind. It’s the proliferation of apps that is the problem. We need to think about an end game.

As is often the case, I came on this blog opportunity by living my digital life. Over the past year, I’ve pared down my app collection of free and paid apps from about 150 to 120. I now have separate stacks for tablet versus smartphone. I did this because I was running out of expensive device memory, and because there were so many I didn’t use frequently enough to carry. Oh, you too?

But app pruning is not the problem for today. Rather, let’s think about the proliferation of all apps. Six months ago, there were over 800,000 active apps in Apple’s app store, according to 148Apps.biz. There are also hundreds of thousands of Android apps in the various app stores for that operating system. That’s a lot of apps!

So many apps that they exceed what anybody could productively use.

More importantly, and the nugget of this blog thought, is that there are too many apps to categorize and keep track of.

Meanwhile, just as a decade ago we saw the land rush by businesses to reach consumers through web sites, the business-to-consumer (B2C) push for unique-to-the-business apps is overwhelming the stores and the ability of consumers to keep up. Every day now I click on a mobile URL and get waylaid by a screen asking “would you like the app for our web site?” Seems everybody now has an app.

So how do we as a digital society manage all those apps? I concede that app review sites help. If you’re looking for hobby apps, say photography, you’ll do well to consider your peers’ reviews. But what about your local grocery store? Or office supply store. Or home improvement store. Lesser justification, methinks.

What I can’t get out of my head is the idea that B2C apps end up in the digital equivalent of the oh-so-20th-century Yellow Pages, where there are 2,000 business categories and local listings under each, some paid.

In any event, the proliferation of mobile apps cannot grow forever. We are already running into the problems of large numbers. App stores may scale technologically to millions of apps, but we really need an end game, a different way of sorting out and harnessing the apps we most need at the moment. That’s it: just-in-time apps (JIT-apps).

The old fashioned way

@peterskastner

 

The 2013-2014 Computing Forest – Part 1: Processors

Ignoring the daily tech trees that fall in the woods, let’s explore the computer technology forest looking out a couple of years.
Those seeking daily comments should follow @peterskastner on Twitter.

Part 1: Processors

Architectures and Processes

Intel’s Haswell and Broadwell

We’ll see a new X86 architecture in the first half of 2013, code-name Haswell. The Haswell chips will use the 22 nm fabrication process introduced in third-generation Intel Core chips (aka Ivy Bridge). Haswell is important for extending electrical efficiency, improving performance per clock tick, and as the vehicle for Intel’s first system on a chip (SoC), which combines a dual-core processor, graphics, and IO in one unit.

Haswell is an architecture, and the benefits of the architecture carry over to the various usage models discussed in the next section.

I rate energy efficiency as the headline story for Haswell. Lightweight laptops like Ultrabooks (an Intel design) and Apple’s MacBook Air will sip the battery at around 8 watts, half of today’s 17 watts. This will dramatically improve the battery life of laptops but also smartphones and tablets, two markets that Intel has literally built $5 billion fabs to supply.

The on-chip graphics capabilities have improved by an order of magnitude in the past couple of years and get better over the next two. Like the main processor, the GPU benefits from improved electrical efficiency. In essence, on-board graphics are now “good enough” for the 80th percentile of users. By 2015, the market for add-on graphics cards will start well above $100, shrinking the market so much that the demand drivers switch: consumer GPUs lead high-performance computing (HPC) today, but that will swap, with HPC becoming the demand that spins off high-end consumer GPUs.

In delivering a variety of SoC processors in 2013, Intel learns valuable technology lessons for the smartphone, tablet, and mobile PC markets that will carry forward into the future. Adjacent markets, notably automotive and television, also require highly integrated SoCs.

Broadwell is the code-name for the 2014 process shrink of the Haswell architecture from 22nm to 14nm. I’d expect better electrical efficiency, graphics, and more mature SoCs. This is the technology sword Intel carries into its full-fledged assault on the smartphone and tablet markets (more below).

AMD

AMD enters 2013 with plans for “Vishera” for the high-end desktop, “Richland”, an SoC  for low-end and mainstream users, and “Kabini”, a low-power SoC  for tablets.

The 2013 server plans are to deliver its third-generation of the current Opteron architecture, code name Steamroller. The company also plans to move from a 32nm SOI process to a 28nm bulk silicon process.

In 2014, AMD will be building Opteron processors based on a 64-bit ARM architecture, and may well be first to market. These chips will incorporate the IO fabric acquired with microserver-builder Seamicro. In addition, AMD is expected to place small ARM cores on its X86 processors in order to deliver a counter to Intel’s Trusted Execution Technology. AMD leads the pack in processor chimerism.

Intel’s better performing high-end chips have kept AMD largely on the outside looking in for the past two years. Worse, low-end markets such as netbooks have been eroded by the upward charge of ARM-based tablets and web laptops (i.e., Chromebook, Kindle, Nook).

ARM

ARM Holdings licenses processor and SoC designs that licensees can modify to meet particular uses. The company’s 32-bit chips started out as embedded industrial and consumer designs. However, the past five years have seen fast-rising tides as ARM chip designs were chosen for Apple’s iPhone and iPad, Google’s Android phones and tablets, and a plethora of other consumer gadgets. Recent design wins include Microsoft’s Surface RT. At this point, quad-core (plus one, with nVidia) 32-bit processors are commonplace. Where to go next?

The next step is a 64-bit design expected in 2014. This design will first be used by AMD, Calxeda, Marvell, and undisclosed other suppliers to deliver microservers. The idea behind microservers is to harness many (hundreds, to start) low-power, modest-performance processors costing tens of dollars each and running multiple instances of web applications in parallel, such as Apache web servers. This approach aims to compete on price/performance, energy/performance, and density versus traditional big-iron servers (e.g., Intel Xeon).
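The microserver pitch is easiest to see as arithmetic. Here is a toy Python comparison of a rack of many small nodes against a rack of big-iron servers; every figure is an assumed placeholder, not a benchmark result.

```python
# Toy price/performance and performance-per-watt comparison between a
# hypothetical microserver rack and a traditional big-iron server rack.
# Every figure below is an assumption for illustration, not a benchmark.

def per_dollar(requests_per_sec, cost):
    return requests_per_sec / cost

def per_watt(requests_per_sec, watts):
    return requests_per_sec / watts

# Assumed: 400 small ARM/Atom-class nodes, each cheap, slow, and frugal.
micro_nodes, micro_rps, micro_cost, micro_watts = 400, 500, 100, 6

# Assumed: 20 dual-socket Xeon-class servers, each fast but costly and hot.
big_nodes, big_rps, big_cost, big_watts = 20, 15_000, 5_000, 300

micro = (micro_nodes * micro_rps, micro_nodes * micro_cost, micro_nodes * micro_watts)
big = (big_nodes * big_rps, big_nodes * big_cost, big_nodes * big_watts)

for name, (rps, cost, watts) in [("microserver rack", micro), ("big-iron rack", big)]:
    print(f"{name}: {rps} req/s, {per_dollar(rps, cost):.1f} req/s per $, "
          f"{per_watt(rps, watts):.0f} req/s per watt")
```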

In one sentence, the 2013-2014 computer industry dynamics will largely center on how well ARM users defend against Intel’s Atom SoCs in smartphones and tablets, and how well Intel defends its server market from ARM microserver encroachment. If the Microsoft Surface RT takes off, the ARM industry has a crack at the PC/laptop industry, but that’s not my prediction. Complicating the handicapping is fabrication process leadership, where Intel continues to excel over the next two years; smaller process nodes yield less expensive chips with voltage/performance advantages.

Stronger Ties Between Chip Use and Parts

The number of microprocessor models has skyrocketed the past few years, confusing everybody and costing chip makers a fortune in inventory management (e.g., write-downs). This really can’t continue, as every chip variation goes through an expensive set of usability and compatibility tests costing up to millions of dollars per SKU (stock-keeping unit, e.g., a unique microprocessor model spec). That suggests we’ll see a much closer match between uses for specific microprocessor variations and the chips fabricated to meet the specific market and competitive needs of those uses. By 2015, I believe we’ll see a much more delineated set of chip uses and products:

Smartphones – the low-end of consumer processors. Phone features are reaching maturity: there are only so many pixels and videos one can fit on a 4″ (5″?) screen, and gaming performance is at the good-enough stage. Therefore, greater battery life and smarter use of the battery budget become front and center.

The reason for all the effort is a 400 million unit global smartphone market. For cost and size reasons, prowess in mating processors with radios and support functions into systems on a chip (SoCs) is paramount.

The horse to beat is ARM Holdings, whose architecture is used by the phone market leaders including Samsung, Apple, nVidia, and Qualcomm. The dark horse is Intel, which wants very much to grab, say, 5% of the smartphone market.

Reusing chips for multiple uses is becoming a clever way to glean profits in an otherwise commodity chip business. So I’ll raise a few eyebrows by predicting we’ll see smartphone chips used by the hundreds in microservers (see Part 2) inside the datacenter.

Tablets – 7″ to 10″ information consumption devices iconized by Apple’s iPad and iPad Mini. These devices need to do an excellent job on media, web browsing, and gaming at the levels of last year’s laptops. The processors and the entire SoCs need more capabilities than smartphones. Hence a usage category different from smartphones. Like smartphones, greater battery life and smarter use of the electrical budget are competitive differentiators.

Laptops, Mainstream Desktops, and All-in-One PCs – Mainstream PCs bifurcate over the next couple of years in different ways than in the past. I’m taking my cue here from Intel’s widely leaked decision to make 2013-generation (i.e., Haswell) SoCs that solder permanently to the motherboard instead of being socketed. This is not a bad idea because almost no one upgrades a laptop processor, and only enthusiasts upgrade desktops during the typical 3-5 year useful PC life. Getting rid of sockets reduces costs, improves quality, and allows for thinner laptops.

The point is that there will be a new class of parts with the usual speed and thermal variations that are widely used to build quad-core laptops, mainstream consumer and enterprise desktops, and all-in-one PCs (which are basically laptops with big built-in monitors).

The processor energy-efficiency drive pays benefits in much lower laptop-class electrical consumption, allowing instant on and much longer battery life. Carrying extra batteries on airplanes becomes an archaic practice (not to mention a fire hazard). The battle is MacBook Air versus Ultrabooks. Low-voltage becomes its own usage sub-class.

Low End Desktops and Laptops – these are X86 PCs running Windows, not super-sized tablet chips. The market is low-cost PCs for developed markets and mainstream in emerging markets. Think $299 Pentium laptop sale at Wal-Mart. The processors for this market are soldered, dual-core, and SoC to reduce costs.

Servers, Workstations, and Enthusiasts – the high end of the computing food chain. These are socketed, high-performance devices used for business, scientific, and enthusiast applications where performance trumps other factors. That said, architecture improvements, energy efficiency, and process shrinks make each new generation of server-class processors more attractive. Intel is the market and technology leader in this computing usage class, and has little to fear from server-class competitors over the next two years.

There is already considerable overlap in server, workstation, and enthusiast processor capabilities. I see the low end Xeon 1200 moving to largely soldered models. The Xeon E5-2600 and Core i7 products gain more processor cores and better electrical efficiency over the Haswell generation.

Part 2: Form-Factors

Part 3: Application of Computing

Dell Inspiron 15z