
“Itanium is Dead.” “No, It’s Not!”


Oracle dumped on HP (and Intel) this week by announcing it will stop all software development for the Itanium processor, the mainstay of HP’s Integrity line of servers. That started a back-and-forth between the two competitors that brings LOL tears to my eyes. Why? Because the computer industry has been too stodgy in the decade since the Internet Bubble burst in 2001.

There used to be tussles like this all the time in the computer industry! I had forgotten how much fun it was to be a bystander.

So, to the question of the day: will Intel kill off Itanium like Oracle claims?

No: Intel says two more generations of Itanium processors are under development. Beyond that, I say, there’s no public roadmap for Itanium or any other Intel processor. Itanium generations arrive more slowly than most processor generations because datacenter IT managers want slow change and great stability in their purchases. There’s as much visibility on Itanium as can be expected.

How will anybody know when Itanium is headed for sunset? I’d say to watch for engineer defections followed by outright layoffs at Intel’s design labs in lovely Fort Collins, Colorado. That’s the hub of things Itanium.

What started all this? Oracle, which now owns server maker Sun, has been trying to make life miserable for competitor HP since the Sun acquisition closed last year. And especially since ex-HP CEO Mark Hurd joined Oracle last September.

The reality is that Intel’s widely used Xeon datacenter processors are much closer to Itanium in performance, and getting closer every year. IT departments don’t need me to tell them that. But HP continues to ride the Itanium horse, with good-enough success.

For those enterprises invested in Itanium, I can see no compelling reason to change server architectures over the next three years.


4 thoughts on ““Itanium is Dead.” “No, It’s Not!””

  1. Pingback: World Spinner

  2. Itanium is not quite dead yet, but it was never healthy and it is basically in a coma on life-support:

    http://blog.truebob.com/2011/03/is-itanium-finally-dead.html

    It is doomed, and I expect its engineers to be redeployed within a few years:

    http://blog.truebob.com/2011/03/redeploying-itanium-chip-designers.html

    I am currently writing up some notes on what I think they should do. I don’t think the engineers were bad ones. I also do not think the *notion* of the Itanium architecture is incorrect. However, it is plain enough that disastrous mistakes were made. Except in the most arcane of situations, there is no place for the Itanium.

    Regardless of your opinion on the technical merits of the Itanium, it is clear that it simply does not have enough mind-share and market-share to give it any economies of scale. That means that software costs *so* much relative to any prospective returns that even giants in the industry with captive markets can’t afford to keep producing software for it.

    Companies that should have some idea of the merits (or lack thereof) of processors like this, such as IBM and Cray, have ditched the Itanium. One company (SGI), whose last word on Itanium was “SGI is 100% committed to Itanium,” talks about their upcoming Itanium-based platform like this:

    “Our first priority is to develop a Xeon version of Ultraviolet, based on the strong feedback we have received from many customers.”

    For the uninitiated, Xeon is *not* Itanium.

    Itanium is not thriving and I do not expect it to. Despite offering some answers to the problems of the x86 architecture, it was too little, too late. From my point of view, it failed to address some of the most severe problems with 20th-century architectures. The other chips failed at that as well, but they are at least less expensive.

    • By far the largest beneficiary of Itanium technology is HP, which replaced VAX, HPPA, and Tandem hardware with one common Itanium platform.

      I don’t disagree that Itanium is not gaining hardware resellers or OEMs, and is losing software giants — but Oracle tweaking HP is another story.

      However, Itanium remains a viable platform with a couple of generations on the future roadmap. To Itanium owners, inertia is a powerful force. I expect half of today’s Itanium installed base will still be running in 2020.

      Lastly, Itanium development started 20 years ago. That’s a decent and honorable life for any computer architecture, especially when sibling Xeon has gotten most of Intel’s attention for a decade.

  3. Re: By far the largest beneficiary of Itanium technology is HP, which replaced VAX, HPPA, and Tandem hardware with one common Itanium platform.

    All true, but for one reason or another all of the former platforms, including HP’s, were destined to be retired anyway. They had to go somewhere. At the time this transition (or the commitment to it) started, Itanium was the only choice. Given their situation, I would have (reluctantly) done the same thing. However, I would have cut my losses and switched much sooner.

    Re: Oracle tweaking HP is another story

    Yes and no. I understand that Oracle has other reasons besides an (alleged) expectation that Itanium would die anyway. However, there *was* some merit in the notion that Itanium was a dead end. Now that Oracle (and Microsoft before them) have withdrawn support for Itanium, it is hard to see how it can be kept alive. If you recall, this was supposed to be the mainstream 64-bit chip; that role has been ceded entirely to x86-64 chips.

    Re: couple of generations on the future roadmap

    Roadmaps change. Intel once said never, never, never ever would they embrace x86-64 even a bit; it was an invention of their rival AMD. Intel said that the only way to get an Intel 64-bit chip would be to switch to Itanium. They projected that Itanium would become the dominant chip. Nobody embraced the Itanium, and when it became clear that people (like me) who needed 64-bit chips were going to AMD, Intel changed its tune. Now x86-64 is entirely their main focus for 64-bit chips. Intel also said that Rambus was the answer to the memory bandwidth bottleneck and that they would not go DDR. They changed that as well.

    Re: I expect half of today’s Itanium installed base will still be running in 2020

    That is hardly a vote of confidence for Itanium. It is only likely if starvation does not cause Intel to stop Itanium development. If Intel stops development, the power of an Itanium system nine years from now will not be worth the electricity to keep it running. Businesses held on to 32-bit x86 stuff for a long, long time, but only because it continued to run on ever more capable, larger, and faster machines. Had that stuff been constrained to 100 MHz single-core machines with memory maximums under 256 MB, disks smaller than 4 GB, 10 Mb networks, no DVD support, no USB support, etc., businesses would have moved on entirely. Intel is shifting Itanium onto a common platform with Xeon, but you can bet that the optimizations will be for Xeon, not Itanium. The 50% you cite is possible, but it is not a sure thing by any means, and as a ‘best case’ it is not encouraging. If I were managing a data center with Itanium boxen, I would not be planning further acquisitions of Itanium hardware.

    Re: Itanium development started 20 years ago. That’s a decent and honorable life for any computer architecture, especially when sibling Xeon has gotten most of Intel’s attention for a decade

    I am quoting a big block there, but it fits nicely together. In my opinion, the 20 years was not honorable by any means. I am quite certain that no other chip in history has had anything close to the resources showered on it with such a meager return. Arguably, during the entire 20 years it has *only* been in development; it never really started to live. If it did live, it did not start any sort of life until 2001 with McKinley, and that was not much of a life. It debuted with clock speeds around half of contemporary x86 chips. Clock speed is not everything, but it certainly is something, and it packs a big psychological impact in the marketplace. It is true that even by then the Itanium was executing many CPU instructions per clock. However, this came at an enormous price in terms of development, and it may have been trumped by bandwidth issues anyway.
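
    To put rough numbers on the clock-versus-width tradeoff (the figures here are purely illustrative, not measurements of any real chip), throughput is roughly clock times sustained instructions per clock, so a wide design only wins if the compiler actually keeps its issue slots full. A minimal sketch in Python:

        # Back-of-the-envelope: throughput ~ clock * sustained IPC.
        # All numbers are hypothetical, for illustration only.

        def gips(clock_ghz, sustained_ipc):
            """Billions of instructions per second."""
            return clock_ghz * sustained_ipc

        wide_slow = gips(0.8, 4.0)     # Itanium-like: slow clock, wide issue
        narrow_fast = gips(1.6, 2.0)   # x86-like: fast clock, narrower issue

        print(wide_slow, narrow_fast)  # 3.2 vs 3.2 -- a dead heat on paper

        # But if cache misses and branches drop the wide design's sustained
        # IPC to 2.5, it falls behind while the clock deficit remains.
        print(gips(0.8, 2.5))          # 2.0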

    The fact that Xeon has gotten most of the attention this past decade is a telling criticism of the Itanium. The collective wisdom of the marketplace has deemed the Itanium a poor competitor to x86-64. For my purposes it certainly has been.

    I am not slamming EPIC per se, nor am I saying the people who developed Itanium and the systems based on it are inferior to the people in the x86 ecosystem. I am not even saying that the ‘lessons learned’ by the Itanium teams (positive and negative) are not useful for future development. What I am saying is that Itanium as we know it cannot last. Not all of the reasons are economic; some are technical. Itanium as such (not necessarily stuff like EPIC) is a technological dead end.

    The x86 lineage is, perhaps, exceptionally monstrous, but I would say that all modern systems suffer horribly from deep legacy architectural ills (like, say, the von Neumann bottleneck). I am hoping to get an article out about the various things that I see ailing modern systems. In my opinion, something has gone horribly wrong with our systems across the board, and I think the problems stem from a few easily correctable philosophical errors.

    Prime among these, or perhaps the root of them all, is a total lack of imagination: a failure to see that (back when, for instance) more than 640 KB might ever be required. What should happen is that we invent the appropriate protocols to allow components to interact as quickly, and in as much volume, as the physical system will allow. Instead of creeping forward to 128-bit, 256-bit, 512-bit computing, we should be looking at ‘width unbounded’ computing. The fact that hardware designers can’t see a use for a 65536-bit word does not mean one will never exist. Similarly, memory addressing should be devised to operate without limit; the fact that you cannot envision a need for addressing beyond every single particle in the universe does not mean it won’t be needed.

    In short, all modern systems suffer from a legacy prejudice toward what I might call ‘boundaryism’. Things like ‘system on a chip’ should be natural extensions of the underlying systems, not a radical departure.
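
    To make ‘width unbounded’ concrete at the software level, here is a minimal sketch using Python’s built-in arbitrary-precision integers (an existing software example of the idea, not a hardware proposal):

        # Python integers have no fixed word width; they grow as needed.
        # The hardware underneath still works in fixed-width chunks, which
        # is exactly the 'boundaryism' complained about above.

        x = 1 << 65536           # a 65537-bit value; no 64- or 512-bit ceiling
        y = x * x + 1

        print(y.bit_length())    # 131073 -- the width simply expanded to fit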
