Self-Driving Software: Why We Need E Pluribus Unum

Today, numerous large and small companies around the world are working diligently on perfecting self-driving software. They include all the large traditional automobile companies as well as big technology firms such as Google, Intel and Microsoft, and even Uber. These companies are working in true twentieth-century capitalist fashion: they’re doing it all independently and secretly. This approach leads to sub-optimal technology and foreseeable tragedies.

Self-Driving Vehicles Use Artificial Intelligence (AI)

Programming a self-driving vehicle (SDV) by traditional software-development methods is so fraught with complexity that no one, to my knowledge, is attempting it. So scrap that idea. Instead, developers have flocked to artificial intelligence, a red-hot technology built on rather old ideas about neural networks.

There’s a lot to AI technology beyond the scope of this blog. A quick Internet search will get you started on a deep dive. For today, let’s sketch a common approach to AI application development:

  • First, an AI rules-based model is fed real-world scenarios, rules, and practical knowledge. For example, “turning left into oncoming traffic (in the USA but not the UK) is illegal and hazardous and will likely result in a crash. Don’t do that.” This first phase is the AI Learning Phase.
  • Second, the neural network created in the learning phase is executed in a vehicle, often on a specialized chip, graphics processing unit (GPU) or multi-processor. This is the Execution Phase.
  • Third, the execution unit records real-world observations while driving, eventually feeding them back into the learning model.
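The three phases above can be sketched as a simple loop. This is a hypothetical illustration (the function names, rule format, and fallback action are my own inventions), not any vendor’s actual pipeline:

```python
# Hypothetical sketch of the learn / execute / feed-back cycle described above.

def learning_phase(scenarios):
    """Phase 1: build a 'model' from labeled scenarios; here, a simple lookup."""
    return {s["situation"]: s["action"] for s in scenarios}

def execution_phase(model, situation):
    """Phase 2: on the road, map an observed situation to an action."""
    return model.get(situation, "proceed-with-caution")

def feedback_phase(model, observations):
    """Phase 3: fold real-world observations back into the next training set."""
    scenarios = [{"situation": k, "action": v} for k, v in model.items()]
    return scenarios + observations

seed = [{"situation": "left-turn-into-oncoming-traffic", "action": "do-not-attempt"}]
model = learning_phase(seed)
print(execution_phase(model, "left-turn-into-oncoming-traffic"))  # do-not-attempt
```

In a real SDV the “model” is a trained neural network rather than a lookup table, but the data flow among the three phases is the same.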

The Problem of Many

Here’s the rub. Every SDV developer is on its own, creating a proprietary AI model with its own set of learning criteria. Each AI model is only as good as the data fed into its learning engine.

No single company is likely to encounter or imagine all of the three-sigma, Black Swan events that can and will lead to vehicle tragedies and loss of life. Why should Tesla and the state of Florida be the only beneficiaries of the lessons from a particular fatal crash? The industry should learn from the experience too. That’s how society progresses.

Cue the class-action trial lawyers.

E Pluribus Unum

E Pluribus Unum is Latin for “out of many, one”. (Yes, it’s the motto of the United States). My proposal is simple:

  • The federal government should insist that all self-driving vehicles use an AI execution unit that is trained in its learning phase with an open-source database of events, scenarios, and real-world feedback. Out of many AI training models, one model.
  • The Feds should preempt state regulation of core AI development and operation.
  • Vehicles that use the federalized learning database for training receive limited class-action immunity, just as we now grant vaccine manufacturers.
  • The Feds charge fees to the auto industry that cover the costs of the program.


From a social standpoint, there’s no good reason for wild-west capitalism over proprietary AI learning engines that leads to avoidable crashes and accidents. With one common AI learning database, all SDVs will get smarter faster, because they benefit from the collective experience of the entire industry. By allowing and encouraging innovation in AI execution engines, the industry can focus on areas that make products better, faster, cheaper, and smaller, not on avoiding human-risk situations. Performance benchmarks are a well-understood concept.

Philosophically, I don’t turn first to government regulation. But air traffic control, railroads, and numerous areas of medicine are regulated without controversy. Vehicle AI is ripe for regulation before production vehicles are produced by the millions over the next decade.

I am writing this blog because I don’t see the subject being discussed. It ought to be.

Comments and feedback are welcome. See my feed on Twitter @peterskastner.

“My ISP is a Solar-Powered Drone.”

Google, the ad-driven search giant, and Facebook, the social connections giant, are fighting over airplane drone technology companies. What’s that all about?

Solar-powered drones, when they’re ready for the mass market in the next five years, will be able to fly for weeks or months at a time. They can take 2D and 3D photos, resulting in better and more up-to-date maps. And they could serve as aerial Internet connections. It’s the latter that got my attention, because it threatens the status quo in developed nations and opens new markets in developing nations.

Aerial Internet Drones (AIDs) suggest a breakout technology that solves — or at least remediates — the “wireless everywhere” mantra of the past decade. In developed countries such as the United States, intractable wireless problems include inadequate wireless bandwidth in high device-density areas (e.g., mid-town New York), necessitating more cell towers and greater slices of the electromagnetic spectrum. Moreover, “poor wireless coverage meets not-in-my-neighborhood” resistance and inadequate capital make it politically and economically difficult to add enough cell towers to deliver superior wireless broadband such as LTE in suburban and rural areas.

In underdeveloped geographies, which represent attractive new markets for the global technology and wireless companies, inadequate mobile broadband infrastructure creates a chicken-and-egg problem: no subscribers without coverage, and no business case for coverage without subscribers.

So, the vision to solve both developing and developed wireless broadband demand is to put up a global network of drones that serve as radio relays for wireless Internet connections. AIDs would be a new form of Internet router, loitering around a more-or-less fixed point in the sky.

At the right altitude, an AID has better line-of-sight than a cell tower located over the hill. The AID theoretically offers greater geographic coverage and often better signal quality than today’s cell tower networks. At a cost of less than $10 million per equipped AID, my back-of-the-envelope calculations suggest AID network costs compare favorably with cell towers for comparable geographic coverage.
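For the curious, here is the shape of that envelope calculation. Only the sub-$10 million AID price comes from the text; the coverage footprints and the tower cost below are hypothetical placeholders meant to show the structure of the comparison, not measured data:

```python
# Back-of-the-envelope cost-per-coverage comparison. All figures except the
# AID price ceiling are illustrative assumptions.

AID_COST = 10_000_000         # dollars per equipped drone (upper bound from the text)
AID_COVERAGE_KM2 = 5_000      # assumed line-of-sight footprint per drone

TOWER_COST = 250_000          # assumed cost per cell tower, including backhaul
TOWER_COVERAGE_KM2 = 100      # assumed footprint per tower

def cost_per_km2(unit_cost, coverage_km2):
    return unit_cost / coverage_km2

print(f"AID:   ${cost_per_km2(AID_COST, AID_COVERAGE_KM2):,.0f} per km^2")
print(f"Tower: ${cost_per_km2(TOWER_COST, TOWER_COVERAGE_KM2):,.0f} per km^2")
```

Change any assumption and the conclusion moves; the point is that the two approaches land in the same ballpark.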

In developing areas such as Africa, an AID network is a solution to creating metro- and rural-area Internet wireless infrastructure rapidly and without the difficulties of building land-line-connected cell towers.

Cellphone networks connect cell towers with land line connections to each other and to an Internet wired backhaul. An AID network needs to connect wirelessly to a) client cellphones and the Internet of Things and b) to a radio ground-station connected to an Internet wired backhaul. The radio ground-station is the crux of the difficulties I foresee.

The ground-station requires radio spectrum to communicate up to and down from the AID network. It represents a new demand on the over-burdened and highly political use of the electromagnetic spectrum. Where does the spectrum come from, whose ox is gored, and how are the skids greased?  Think lobbying.

Moreover, the incumbent cable and wireless ISPs (i.e., Comcast, Verizon, AT&T, Sprint, Dish, et al) are not likely to give up their near monopolies on Internet access by devices, homes, and businesses without a knockdown, drag-out political fight followed by years of litigation.

Add citizens’ privacy concerns over drone photography to this highly volatile Internet-industrial-complex wireless food fight and you can expect great spectator sport. In developing countries, meanwhile, the issue will be framed as “drone spying by the NSA”.

Like many, I would greatly appreciate and even pay more for better wireless coverage and higher wireless device bandwidth. First, Google and Facebook have to solve the real technology problems of getting the AIDs into the sky. Second, they have to muscle a (much needed) rethink of wireless spectrum use and the roles of future ISPs through the political sausage factory, and nail down the new spectrum they need. Combined, this is a heavy lift.

So, with a sigh of regret, I suspect it will be quite a while before I can say “My ISP is a Solar-Powered Drone.”

Follow me on Twitter @PeterSKastner.

Photo: a solar-powered drone (Titan Aerospace/Associated Press)

HealthCare.Gov: IT Rules Broken, Mistakes Made

Numerous friends, neighbors and clients have asked me about the IT fiasco in the eight weeks since the Obamacare federal exchange, HealthCare.Gov, was launched. “How did it happen and what went wrong?”, they ask. Lacking subpoena power, I can only draw from experience and common sense. There were lots of information technology (IT) rules broken and mistakes made. Someone else can write the book.

Performance, delivery date, features, and quality are a zero sum game
The project was executed with the expectation of complex features, very high volume and performance metrics, robust production quality expectations, and a drop-dead date of October 1, 2013. Even with all the money in the federal budget, tradeoffs are still necessary in the zero-sum equation of successful project management. The tradeoffs were obviously not made, the system went live October 1, and the results are obvious.

The Feds are different
Federal IT project procurement and management differ from the private sector like night and day. Some of the major factors include:

  • Politics squared.
  • IT procurement regulations are a gamed system that’s broken. Everybody in Washington knows it and nobody will do anything about it.
  • The federal government does little programming and development in-house. Most is contracted out.
  • The culture lacks accountability and real performance metrics.

The website is really a complex online marketplace, and hardly the first of its kind. Yet HealthCare.Gov has taken longer to complete than World Wars I and II, the building of the atomic bomb, and putting a man in space.

Too many cooks in the kitchen
The specifications were never really frozen in a version approved by all stakeholders. That meant the programmers never worked with a fixed design.

The project was always surrounded by politics and executive branch oversight that led to design changes, such as the late summer decision to graft a rigorous registration process into the site before users could see policy choices.

No surprise that this high visibility project would have lots of micro-management. But the many — over fifty — IT contractors working on the project had no incentive to tradeoff time-to-completion with feature changes. They weren’t in charge. The timeline slipped, a lot.

Who’s in charge? Can the project manager do the job?
There was no take-charge project manager responsible for this half-billion-dollar undertaking. The Centers for Medicare & Medicaid Services (CMS) assigned oversight to an executive without extensive complex-systems-integration experience. Day-to-day coordination of the more than fifty contractors working on HealthCare.Gov was plainly lacking, and project management was sub-par; tellingly, the first remedy in October was to bring in government technologists who had exactly that experience.

The White House trusted its own policy and political teams rather than bringing in outsiders with more experience putting in place something as technically challenging as HealthCare.Gov and the infrastructure to support it.

Absent a take-charge project management team, the individual IT contractors pretty much did their own thing by programming the functions assigned to them and little else. This is obvious from all the finger-pointing about parts of the site that did not work October 1st. The lack of systems integration is telling.

After October 1st, a lead project manager (a contractor) was appointed.

We don’t need no stinking architecture.
Why was the federal website at HealthCare.Gov set up the way it was? How were the choices of architecture made — the overall technology design for the site?

Everybody now knows the site needs to handle volumes of millions of subscribers during a short eligibility and sign-up window between October 1 and December 15 (now extended to the 23rd). The website handles 36 states. Each state has different insurance plans at different tiers and prices, but every state site works identically: each offers bronze, silver, and gold policies from one or more insurers to compare.

The website was architected as one humongous site for all 36 states handling all the visitors in a single system. All the eggs were put in one basket.

An alternative architecture would immediately route visitors from the home page to 36 separate but technologically identical state-level systems. More operations management, but far greater opportunities to scale out and scale up for volume.
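The routing idea is trivial to sketch. A minimal illustration (the hostname scheme and state list are invented for the example):

```python
# Sketch of the alternative architecture: the home page does nothing but route
# each visitor to one of 36 identical, independently scalable state systems.

FEDERAL_STATES = {"TX", "FL", "OH", "PA"}  # ...36 federally served states in all

def route(state_code):
    """Return the state-level system a visitor should be sent to."""
    if state_code not in FEDERAL_STATES:
        raise ValueError(f"{state_code} runs its own exchange")
    return f"https://{state_code.lower()}.exchange.example.gov"

print(route("TX"))
```

Each state system could then be sized, replicated, and restarted on its own; a failure in one state would no longer take down the other 35.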

Another benefit of a single application code-base replicated dozens of times is that the risk and expense of states that chose to run their own sites can be mitigated. The 14 states that built their own sites all did it differently, to no national benefit. While California succeeded, Oregon has yet to enroll a customer online. We paid 14 times.

Another questionable architectural decision is the real-time “federal data hub” that makes eligibility and subsidy decisions. As designed, the website queries numerous agencies including Social Security, Homeland Security, Internal Revenue, Immigration and other agencies to make a complicated, regulation-driven decision on whether the (waiting) customer is qualified to buy a policy and what federal subsidies, if any, the customer gets to reduce policy premium costs.

This design approach puts a strain on all of the agency systems, and leads to inevitable response time delays as all the data needed to make a determination is gathered in real time. It also requires that agency systems not designed for 24/7 online operations reinvent their applications and operations. This hasn’t happened.

An alternative design would make some or all of the determination in batch programs run ahead of time, much like a credit score is instantly available to a merchant when you’re applying for new credit. That would greatly un-complicate the website and reduce the time online by each user.
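A minimal sketch of that batch alternative (the record layout, identifiers, and field names are hypothetical):

```python
# Overnight batch runs merge agency extracts into one precomputed record per
# applicant, so the website answers eligibility with a single lookup instead
# of real-time queries against Social Security, IRS, DHS, and the rest.

precomputed = {}  # applicant id -> eligibility record

def batch_run(agency_extracts):
    """Run ahead of time: fold the latest agency extracts into the store."""
    precomputed.update(agency_extracts)

def check_eligibility(applicant_id):
    """Run at click time: one lookup, no cross-agency round-trips."""
    default = {"eligible": False, "reason": "no record; defer to manual review"}
    return precomputed.get(applicant_id, default)

batch_run({"applicant-42": {"eligible": True, "subsidy": 250}})
print(check_eligibility("applicant-42"))
```

As with a credit score, the answer may be hours stale, but the user waits milliseconds instead of minutes.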

Security as an add-on, not a built-in.
The website requires the most personal of financial information to make eligibility decisions. It is therefore shocking that, according to testimony by security experts to Congress, data security and privacy were not an integral and critical part of the system design. This is a glaring lapse.

We’re not going to make it are we?
By April 2013, the project was well underway. An outside study by McKinsey looked at the project and pointed out the likelihood of missing the October deadline, and the unconventional approaches used to date on the project.

There’s almost always a cathartic moment in an IT project with a fixed deadline when the odds favor a missed deadline.

The project broke at least three IT project management best practice rules: overloading the roster, succumbing to threatening incentives, and ignoring human resistance to being cornered.

Let’s throw more people at the project.
More labor was thrown at the project both before and after October 1st. That slowed the project down even more. An IT management mentor explained it to me early in my career this way: “If you need a baby in a month, nine women can’t get that job done. Not possible. Try something different, like kidnapping.” This later became known as Brooks’ Law which says “adding manpower to a late software project makes it later”.
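One mechanism behind Brooks’ Law is plain arithmetic: pairwise communication channels grow quadratically with head count, so each added worker adds coordination overhead faster than output. A quick illustration:

```python
# Pairwise communication channels in a team of n people: n * (n - 1) / 2.

def comm_paths(n):
    return n * (n - 1) // 2

for n in (5, 10, 50):
    print(f"{n:>2} people -> {comm_paths(n)} channels")
# Doubling a 5-person team to 10 more than quadruples the channels (10 -> 45).
```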

“Failure is not an option!”
When managers yell “Failure is not an option!” is it any surprise that project management reporting immediately becomes a meaningless exercise? The managers were flogging the troops to make the deadline. So the troops reported all sorts of progress that hadn’t actually happened. Passive resistance is real.

It therefore comes as no surprise when managers post-launch “concluded that some of the people working in the trenches on the website were not forthcoming about the problems.” It’s a fairytale world where nobody has ever read the real world depicted in Scott Adams’ Dilbert comic strip.

The project passed six technology reviews, and CMS told Congress all was well. CMS IT management approved green-light schedule status for the website, a status that proved completely inaccurate.

“We’ll just all work 24/7 until we’re over the hump.”
As I write this a week ahead of the December 1 “drop-dead date 2.0”, it’s hard to fathom how many weeks the project teams have been working late nights and weekends. Soldiers are not kept on the front line for weeks on end except in the most dire circumstances. The military knows from experience that troops need R&R. So do IT troops.

“Let’s turn it on all at once.”
Sketchy project progress. Testing done at the integration level; no time left to do system testing or performance testing. It’s September 30, 2013. The decision is made: “Let’s turn it on all at once.”

Turning the whole system on at once is known as a “light-switch conversion”. You flip the switch and it’s on. In the case of HealthCare.Gov, the circuit breaker blew, and it has pretty much stayed blown since day one. Now what to do?

“Where’s the problem at?”
From the moment the website was turned on, the system throughput — enrollments per day — was somewhere between none and awful. We’ve all heard the stories of more than a million abandoned registrations and a pitiful number of successful enrollments. Where were the bottlenecks? The project team did not know.

They didn’t know because there was practically no internal instrumentation of the software. We know this because the tech SWAT team brought in after October first said so.
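Instrumentation need not be elaborate. Here is a sketch of the bare minimum, a timing wrapper around each pipeline step (the step shown and its cost are invented for illustration):

```python
import time
from functools import wraps

timings = {}  # step name -> cumulative seconds spent

def instrumented(fn):
    """Wrap a pipeline step so its cost shows up in a bottleneck report."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            timings[fn.__name__] = timings.get(fn.__name__, 0.0) \
                + (time.perf_counter() - start)
    return wrapper

@instrumented
def verify_identity(user):
    time.sleep(0.01)  # stand-in for a slow cross-agency call
    return True

verify_identity("applicant-1")
print(max(timings, key=timings.get))  # the current worst bottleneck
```

With even this much in place, “where’s the problem at?” becomes a query rather than a guess.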

What happens next?
The next drop-dead date is December 1. There is no public report that the system has reached its 60,000-simultaneous-user design goal, nor that the “critical problem list” has been solved.

All the body language and weasel-wording suggests the website will support more simultaneous users than at launch two months ago. Do not be surprised at a press release admitting failure hidden during the Thanksgiving holiday rush.

To make the exchange insurance process work, a couple of million (or more, when you include cancelled policies that must be replaced) enrollments need to take place in December. If that’s not possible, there needs to be a clock reset on the law’s implementation. The pressure to do so will be irresistible.

There is no assurance that the enrollments made to date supply accurate data. Insurance companies are reviewing each enrollment manually. That process is not scalable.

It was sobering to hear Congressional testimony this week that 30%-40% of the project coding had yet to be completed. Essentially, the backend financial system is not done or tested. That system reconciles customer policy payments and government subsidies, and makes payments to insurers. If the insurer does not have a payment, you’re not a customer. Which makes the remaining 30% of the system yet another “failure is not an option” deadline: get it working correctly by Christmas.

There is no backup plan. Telephone and paper enrollments all get entered into the same website.

There is a high probability of a successful attack. One thorough security hack and public confidence will dissolve. There’s a difference between a successful attack and a thorough one, which allows a lot of room for spin.

If individual healthcare insurance is 5% of the market, and we’ve had all these problems, what happens next year when the other 95% is subjected to ACA regulations? No one knows.

Comment here or tweet me @peterskastner

The author has ten years’ experience as a programmer, analyst, and group project manager for a systems integrator; established government marketing programs at two companies; has over twenty years’ experience in IT consulting and market research; and has served as an expert witness regarding failed computer projects.

Say Goodbye to Your Favorite Teacher

John Thomas writes:

Don’t bother taking an apple to school to give your favorite teacher, unless you want to leave it in front of a machine. The schoolteacher is about to join the sorry ranks of the service station attendant, the elevator operator, and the telephone operators whose professions have been rendered useless by technology.

The next big social trend in this country will be to replace teachers with computers. It is being forced by the financial crisis afflicting states and municipalities, which are facing red ink as far as the eye can see. From a fiscal point of view, of the 50 US states, we really have 30 Portugals, 10 Italys, 10 Irelands, 5 Greeces, and 5 Spains.

The painful cost cutting, layoffs, and downsizing that has swept the corporate area for the past 30 years is now being jammed down the throat of the public sector, the last refuge of slothful management and indifferent employees. Some 60% of high school students are already exposed to online educational programs, which enable teachers to handle far larger class sizes than the 40 students now common in California.

It makes it far easier to impose pay for productivity incentives on teachers, like linking teacher pay to student test scores, as a performance review is only a few mouse clicks away. These programs also qualify for government funding programs, like “Race to the Top.” Costly textbooks can be dispensed with.

Blackboard (BBBB) is active in the area, selling its wares to beleaguered school districts as student/teacher productivity software. The company has recently been rumored as a takeover target of big technology and publishing companies eager to get into the space.

The alternative is to bump classroom sizes up to 80, or close down schools altogether. State deficits are so enormous that I can see public schools shutting down, privatizing their sports programs, and sending everyone home with a laptop. The cost savings would be enormous. No more pep rallies, prom nights, or hanging around your girlfriend’s locker. Of course, our kids may turn out a little different, but they appear to be at the bottom of our current list of priorities.

Creative destruction is also at work in higher education. Sixteen universities have banded together to offer free online courses taught by popular professors. When the University of Illinois announced it would offer online courses for free, fourteen thousand prospective students came running. It would appear that the Economics 101 supply-demand curve goes exponential when price is zero, as it should.

But is online education wasted time in front of a screen? In the first study of its kind, Ithaka reports that in a randomized study of 600 statistics students, one classroom session a week augmented by online courseware yielded the same final-exam results as a three-session-a-week conventional course.

At the university level, costs are up 42% in the past decade (even after adjusting for aid). At the K-12 level, local budget pressure is cutting school budgets to the bone.

My conclusion is that online education is reaching the mainstream, aided by enormous pressures to cut costs and deliver predictable outcomes. Keep an eye on quality and avoid fads.

Somewhere in the not too distant future employers will have to decide on a big change from traditional credentials in hiring decisions. A college diploma today is a ticket to a white-collar job (at least it would be again if the economy picked up). Will employers hire students who have passed 120 credits of free, online college courses? Or will they demand a sheepskin that costs $200,000 and accompanies students who have passed 120 credits of paid, mostly-online college courses?

Will the motivated free-college students who lack back-breaking debt be better entry employees? I suspect so.

The Good Old Days of Education

AT&T + T-Mobile: Spectrum, Regulation, and a New Business Model

The proposed $39 billion merger of T-Mobile into AT&T started a firestorm of criticism from all directions, mostly on antitrust grounds. My bet is that the deal is not approved. The question on the table is “then what”? The demand for wireless bandwidth is growing exponentially. AT&T is trying to get more bandwidth by buying T-Mobile’s. Without more bandwidth divided by demand across the wireless carriers, the mobile revolution starts faltering by 2015, when bandwidth supply constraints hit exponential demand growth. My solution: consider applying the U.S. electricity generation regulatory model to wireless communications.

“It’s the Bandwidth, Stupid”
To me, it’s a given that wireless mobile devices (gadgets) are a key growth industry for the global economy. The reason Apple has the #2 market cap right now rests firmly on the shoulders of iPhone and now iPad growth. Or check out this video of Corning’s view of the future. All the tech companies are fixated on this rapidly growing market, with tablet sales alone projected at over 100 million units in a couple of years. So, huge, growing demand in devices that need mobile wireless bandwidth.

Second, the gadgets are consuming more data per device. That’s why Verizon and AT&T capped monthly wireless bandwidth last year for new subscribers. The growth of wireless video, video chat, navigation maps, e-book downloading, digital news and more is rapidly driving up wireless data consumption.

How fast does AT&T see the wireless network demands growing over the next five years? Think exponential.

The chart above shows gadget device growth excluding smartphones from roughly 10 million units in 2011 to roughly 60 million in 2014: an additional 50 million units, plus smartphone growth of another 50 million or so.

AT&T projects an order of magnitude growth in bandwidth demand by 2015 to 50 petabytes a month.
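As a sanity check on what “an order of magnitude by 2015” implies: assuming a 2011 baseline and steady compounding (the four-year horizon is my reading of the projection), the annual growth rate is startling:

```python
# Annual growth rate implied by a 10x increase over four years (2011 -> 2015).

growth_factor = 10
years = 4

annual_rate = growth_factor ** (1 / years) - 1
print(f"{annual_rate:.0%} compounded annually")  # roughly 78% per year
```

No plausible schedule of tower construction and spectrum refarming keeps up with 78% a year for long.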

Meanwhile, the wireless bandwidth spectrum is not growing. Oh, more cell towers are going up in poor reception areas and rural areas, but the basic supply of wireless electromagnetic spectrum is pretty much fixed (unless the government frees up more of its own reserved spectrum, but don’t count on that).

Which leads to point number three: fixed spectrum supply is about to be overwhelmed, inundated, creamed, smothered, drowned by a sea of gadget demand for wireless spectrum. Yes, 4G technologies will use that spectrum more efficiently, but there are limits to annual cap ex spending and nothing can ameliorate a 10x increase in demand over the next five years.

But the wireless spectrum is fragmented across multiple carriers by law in order to support market competition. AT&T customers compete with Verizon, T-Mobile, Sprint and other wireless spectrum lessees, reducing the spectrum available to any one carrier’s customers. It’s like being trapped in the slow TSA line while other lines might be going faster. Point four: there is no legal basis for pooling the finite wireless spectrum, so every wireless provider does the best it can with the spectrum it leases.

Why Did AT&T Go After T-Mobile Now?
I am convinced AT&T went after T-Mobile for its bandwidth, plain and simple. There is no way AT&T can build enough cell towers in San Francisco or New York City to meet today’s demand, let alone an order of magnitude more bandwidth demand in five years. AT&T is betting it can beat the antitrust odds and gain more precious spectrum.

What If The Merger Does Not Go Through?
But what if politics and antitrust correctness deny the merger? The status quo starts rolling towards 2015:

  • 10X demand still happens
  • Wireless carriers put fingers in the dike
  • Everybody is mad at the national disgrace of a wireless system with horrible response times
  • Disgusted, users first slow usage and then don’t upgrade devices. This kills off tech innovation.

Don’t think that could happen? It has happened before. CB radio in the 1970s: crushed by too much demand, too few channels, and foul-mouthed truckers. Dead as a mass market in two years.

Is There an Alternative to Impending Disaster?
I am a free-market thinker. However, in this case, it’s hard not to look at finite wireless spectrum as a natural monopoly. Let’s open up a public discussion.

My thoughts since the AT&T announcement Sunday have led me to the U.S. electricity industry, which in the 1990s was broken up into two components: electricity generators and transmission companies; and electricity marketing and delivery companies.

  • The electricity producer/wholesalers are highly regulated and make a return on capital. They are economically encouraged to grow low-cost and peak demand generation capacity. They band together into grids to deal with spikes in demand and outages;
  • The electricity retailers are less regulated and profit from a markup on wholesale electricity. They compete on price, service, and support (not much, but there is competition).

I argue that the best national interest would be served if the wireless network function was given to a regulated monopoly charged with providing as much bandwidth as possible and as inexpensively as possible.

I’d take away public service spectrum and replace public service equipment like in-car police dispatching terminals with wireless digital-compatible equipment. Create an infrastructure boom that frees up lots of wall-penetrating spectrum. Voila, more very useful spectrum.

Since we wouldn’t have five or more wireless companies fighting for spectrum in every market, the local wireless network monopoly could focus on quality of service and improving gadget throughput.

The wireless service companies would sell price, service and support as retailers. They’d also sell phones and gadgets. AT&T and Verizon could keep their landline businesses.

Yes, there are some big issues like which wireless network technology to support, or whether to support them all going forward. But I think I’ve stirred the pot enough for today.

As usual, comments and feedback appreciated.

End of the Net Neut Fetish

Holman W. Jenkins, Jr.: “End of the Net Neut Fetish,” The Wall Street Journal

Jenkins hits the nail on the head. The know-nothings running around like spoiled brats demanding “net neutrality” for the past three years are finally seeing some adult competition. Among the notable religious conversions is Google. Hey, this is business, after all.

Google’s epiphany is that Verizon and AT&T, the major Internet carriers in the U.S., are not afraid to change the up-to-now “all you can eat for a fixed price” model of Internet bandwidth. AT&T’s new iPhone 4 data pricing in June eliminated the unlimited data plan for new customers. That was the gauntlet thrown down that led Google to its religious conversion from leading net-neutrality advocate to a posture more agreeable to the company’s long-term bottom line.

The argument for net neutrality is that all traffic should be treated equally under all conditions. Sounds fair on the face of it, and that’s why egalitarian types at the Federal Communications Commission (and a bunch of Congressmen) embraced the idea. A bad idea at that.

The reality of Internet traffic is simple to understand. Bandwidth is fixed and requires large capital expenses. It doesn’t take many college students in my neighborhood swapping pirated digital movies to slow down everybody else’s e-mail, web surfing, stock quotes and other Internet applications.  Basically, net neutrality as the Internet exists today allows bandwidth hogs to degrade the service of everybody else. How fair is that?

While fixed Internet bandwidth from cable or fiber degrades slowly, wireless bandwidth is and will always be a finite commodity. What’s become obvious to the FCC and Google is that a wireless free lunch is a sure way to bog down access for everyone to unacceptable levels.  Just ask AT&T iPhone customers in San Francisco and New York. And capacity caps on wireless data shreds Google’s business plans to make lots of revenue growth from serving wireless advertising.

As Jenkins puts it, “Suddenly, those net neut advocates who live in the real world (e.g., Google) had to face where their advocacy was leading—to usage-based pricing for mobile Web users, a dagger aimed at the heart of their own business models. After all, who would click on a banner ad if it meant paying to do so?”

Once users understand that they can continue to get good rates for decent Internet access and bandwidth, they’ll spurn the principled but unworkable business model of the net neuts. After all, the comparison is already available in Europe, where wireless data costs about ten euros per megabyte. Over the past two months at European rates, I would have racked up $4,000 in data fees versus the roughly $100 I actually spent with AT&T here. I don’t watch movies or stream music on my iPhone, so I’m no bandwidth hog myself. And there’s no way my employer or I would pay $2,000 a month for the walk-around data access we already have here for much less.
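A back-of-the-envelope version of that comparison is easy to reproduce. The usage figure and euro-dollar exchange rate below are assumptions for illustration, not figures from my bills:

```python
# Rough two-month data-cost comparison; usage and exchange rate are assumed.
usage_mb = 300          # assumed data consumed over two months
eur_per_mb = 10         # the European rate cited above
usd_per_eur = 1.30      # assumed 2010 exchange rate
european_cost = usage_mb * eur_per_mb * usd_per_eur
att_cost = 100          # roughly what I actually paid AT&T

print(f"Europe: ${european_cost:,.0f} vs. AT&T: ${att_cost}")
```

At those assumed numbers the European bill lands near the $4,000 figure above, against about $100 domestically.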

So, after three years of net neut nonsense, the major businesses affected by a net-neutral Internet are finally waking up to the bad-for-business world they were about to create. Now Google and its partners will have to do more than a press-conference about-face. They’ll need to lobby Washington to change its net-neutrality views as well.

Good luck with that. Killing the fallacies of net neutrality will take years of effort, Google. But maybe Google really has come around to seeing how evil net neutrality could be for its own business as well as for the average consumers lining up for digital gadgets.

Legacy of the Flash Crash: Lessons in Systemic Complexity

Those readers with concerns about the effects of computer-system complexity on our society have a must-read in the Wall Street Journal staff article on the legacy of the May 6, 2010 “Flash Crash.” Kudos to the WSJ for answering questions I’ve had about what really happened that day, when the market gyrated, crashed, halted, and came back, albeit roughed up in the process.

“The whole system failed,” says John Bogle, founder of fund company Vanguard Group. “In an era of intense technology, bad things can happen so rapidly. Technology can accelerate things to the point that we lose control.” And that day, we did.

The solution implemented to date, on a pilot basis, is a five-minute trading halt for any S&P 500 stock that moves ten percent within five minutes. These “collars” act as circuit breakers, allowing humans to intervene in otherwise automated trading systems. The question for the Securities and Exchange Commission, which will issue its own Flash Crash report this fall, is whether circuit-breaker trading halts are a sufficient defense against another crash.
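The mechanics of such a collar are simple to express. The sketch below is my own simplified illustration of the pilot rule, not the exchanges’ actual implementation; the function name, data layout, and window handling are assumptions:

```python
from collections import deque

WINDOW_SECONDS = 5 * 60   # look-back window for the price move
TRIGGER_PCT = 0.10        # a ten-percent move trips the collar

def should_halt(ticks, now):
    """ticks: deque of (timestamp_seconds, price), oldest first.
    Returns True if the latest price has moved at least TRIGGER_PCT
    against the oldest price still inside the five-minute window."""
    # Discard ticks that have aged out of the look-back window.
    while ticks and now - ticks[0][0] > WINDOW_SECONDS:
        ticks.popleft()
    if not ticks:
        return False
    reference = ticks[0][1]   # oldest in-window price
    latest = ticks[-1][1]
    return abs(latest - reference) / reference >= TRIGGER_PCT
```

A stock falling from $100.00 to $89.00 within the window (an 11% move) would trip the halt; a drift to $95.00 would not.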

My own conclusion is that circuit breakers alone are insufficient to prevent a similar meltdown. Based on the WSJ article and my own discussions with market participants, four areas require in-depth analysis and decision-making:

  • All or None: The weak sister among the exchanges was NYSE Arca, whose quotations lagged other exchanges by two seconds, an eternity when computer traders work in millionths of a second. NASDAQ, followed by other exchanges, stopped routing orders to Arca. But Arca handles up to 30% of ETF trading, so cutting it off from the rest of the market severed the widely traded S&P 500 ETF (SPY) from its option, individual-stock, and futures brethren. Bad things were bound to happen after that moment. Which raises the all-or-none question: if one exchange falters, is it better to cut it off and keep trading, or to halt trading until the system of all markets can be synchronized again? The Flash Crash results suggest that treating all exchanges as a single integrated system that either runs or does not, all or none, deserves serious consideration.
  • Capacity: The evidence is clear that inadequate computer and network capacity was a major contributor to the Flash Crash rout. That was an avoidable human planning error (or a negligent cost-avoidance decision). After all the warning the 2008–2009 financial crisis gave us about “black swan” events being more frequent than most expect, inadequate computer capacity would seem a no-brainer to anticipate, and it is certainly easy to solve. (Note to HP Sales: what are you waiting for?)
  • Who’s Watching the Programmers? Two factors struck me as systems-design and programming issues that should have been discovered and handled differently, rather than left to yield wrong or unexpected results. The first is stub quotes, the placeholder bids and offers that market makers post far below or above any plausible price. They aren’t meant to be executed, yet on Flash Crash day numerous trades (e.g., Accenture at $0.01 a share) executed at stub-quote prices. The second anomaly involved an Apple trade with insufficient supply to meet demand at a given price. Rather than declining the portion of the order with no supply at the market price, the system filled it at $99,999.99 a share. That fact should stop the heart of anyone who has ever placed a “market order” to buy at the market price. An arbitrary $100,000 a share is a design flaw.
  • ETF Disintermediation: Like the mortgage securities that brought our economy to its knees, stock indexes, exchange-traded funds (ETFs), futures, and options are all inextricably tied together. Ten or more years ago, if trading in a single stock was erratic, the market did not falter. Today, hundreds of millions of shares underlying the S&P 500 are continuously bought and sold to keep the S&P 500 ETF, a proxy for the U.S. equity markets, in line with its 500 component stocks. Now layer on S&P 500 futures, arbitrage, and the short side of the market. This is one Gordian knot of complexity. It’s unlikely we can prevent another Flash Crash until the pieces and the whole of ETF disintermediation and its market effects are fully understood.
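The Apple anomaly above is, at bottom, a matching-engine design decision, and the safer rule is easy to state: fill what the order book can actually supply and report the remainder unfilled, never substitute a sentinel price. A minimal sketch of that rule, with the function name and data layout as my own assumptions:

```python
def fill_market_order(quantity, book):
    """Fill a market buy against an order book.
    book: list of (price, available_shares) asks, best price first.
    Returns (fills, unfilled) -- the shares actually matched, plus any
    remainder the book could not supply, left unexecuted rather than
    filled at an arbitrary placeholder like $99,999.99."""
    fills, remaining = [], quantity
    for price, available in book:
        if remaining == 0:
            break
        take = min(remaining, available)
        fills.append((price, take))
        remaining -= take
    return fills, remaining
```

A 200-share market order against a book offering only 150 shares comes back with 150 shares filled at real prices and 50 shares reported as unfilled, which is what a “market order” customer would reasonably expect.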

The author’s resume includes design and implementation of a trust securities system for a money-center bank, financial industry marketing at two computer companies, and extensive experience in computer systems performance measurement and auditing.