Closing The AI Timeliness Gap

One of the unsolved problems of generative artificial intelligence (AI) software (there are many) is keeping the AI knowledge base (aka the large language model) up to date. Ask Google’s Gemini “Who do the polls currently say is going to win the 2024 election?” and the short answer back is “Try Google Search.” The AI models are trained on trillions of words of text, but training is cut off as of a fixed date; the models are blind to the recent past. And that means they are not as useful as we’d like them to be.

A recent article knocks Microsoft’s smartphone-capable Phi 3 model not for being small, but for being out of date. Point taken.

But the real point on timeliness is that no generative AI model has a systematic way to be brought current. The exception is when the model’s originators retrain it with a new set of source data. However, building a version of a top-end LLM is estimated to cost tens of millions of dollars, requires tens of thousands of barely obtainable graphics processing units (GPUs, from the likes of Nvidia), and takes months to complete the build-from-scratch training process. Not useful for an update on the overnight news from Asian stock markets. Nor possible for more than a handful of deep-pocketed technology companies. So that’s why the big LLMs are months, bordering on a year, out of date.

It doesn’t take much analysis to see that there’s a market opportunity here. If you can’t feasibly recreate an LLM from scratch every day, the technology will need to add layers of new content onto the fixed-in-time LLM.

I hereby incarnate a new product category: “AI Model Updates As A Service”, and I’ve asked my branding department to punch the acronym up a bit.

One of this year’s AI battlegrounds is improving the ways personal or enterprise content can augment a generative AI model. It is an area of technology that’s moving almost too fast to track. Suffice it to say, even if the methods are crude and tedious, there are ways, using Retrieval-Augmented Generation (RAG) or similar technology (e.g., RAFT), to update a static LLM. Because the ability to expand the knowledge window with personal/enterprise content is mission critical, I can foresee no reason why an explosion of “add my data to your LLM” tools won’t be commercially successful and prolific. Let’s call the results “Personalized LLMs.”
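
The RAG idea mentioned above can be sketched in a few lines. This is a toy illustration, not any vendor’s actual API: fresh documents live outside the frozen model, are retrieved per query, and are prepended to the prompt. The keyword-overlap retriever here stands in for the vector-embedding search a real system would use.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG).
# All names and data are illustrative, not a real product's API.

def score(query, doc):
    """Crude keyword-overlap relevance score (real systems use embeddings)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, corpus, k=2):
    """Return the k most relevant fresh documents for the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, corpus):
    """Prepend retrieved, post-cutoff text so a static LLM can answer
    questions about recent events."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Pretend these arrived after the model's training cutoff.
fresh_news = [
    "Asian stock markets closed higher overnight on tech gains.",
    "A new smartphone model was announced yesterday.",
    "Local weather: rain expected through the weekend.",
]
prompt = build_prompt("What did Asian stock markets do overnight?", fresh_news)
```

The augmented prompt, not the model weights, carries the timely knowledge, which is why this can be sold as a periodic update service without retraining anything.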

With a robust Personalized LLM tool market, it’s easy to foresee entrepreneurs aggregating content sources and selling a periodic service that adds layers of timeliness to LLMs in the form of recent news. Periodicity? How much are you willing to pay? Daily-overnight would be useful to many. Bloomberg, which depends on delivering market-moving information instantly, might charge a boatload for frequent updates.

Entrepreneurs, over to you. I’m too old to start another product category. But feel free to share credit for the idea.

Peter S. Kastner

Overcoming Fear of Artificial Intelligence (AI): a Layman’s Guide

I was queried this weekend by friends and former classmates of 55 years about the NYT’s Maureen Dowd on AI. Non-techies are concerned, enough to poke this retired technology analyst. I suspect it was Elon Musk’s cri de coeur that “The apocalypse could come along at any moment” that raised my friends’ hackles. Let’s think this through as outsiders.

Full Disclosure: I am no expert on this subject, have deliberately not invested in nor studied how AI actually works in depth. I view Generative AI (GAI), 2023’s hot topic and investment, as at the toddler stage: cute, amazing at times, subject to tantrums, and not to be trusted with scissors let alone the family automobile.

Let’s look at three classes of AI.

  1. Creative AI

    Creative AI is here now, can be a lot of fun, and is driving human content creators absolutely bonkers.

    OpenAI’s lesser-known project is DALL-E. You enter a text description of the image you want and it creates your desire. Of course, DALL-E was trained on seemingly every publicly available image in the history of the world, so if you want to mix and match Magritte and Picasso, have at it. This illustrates a major societal issue of many AI products: if data is accessible on the Internet, can an AI product create infinite derivatives without compensation?

    Living artists whose works were appropriated into DALL-E or other Creative AI programs receive nothing for their copyrighted works of art. Likewise, book authors and movie makers are apoplectic about the appropriation of their content (and style of creation) without compensation. This was a grievance in the recent Hollywood strike. There will be property-rights litigation up to the Supreme Court; the money at stake is too great to forfeit. Recall a decade ago when music streaming took off: the streamers ended up paying the music creators.

    There is, of course, a dark side to creative AI and, as usual, pornography is the early adopter. Deepfake software uses an image to overlay part of another image. Choose your face and away you go to either stills or video. Doing that at home is not illegal for an adult. The problem lies with youth, social media, and the near 100% likelihood of widespread dissemination of images that were never real. The nightmare is it’s your teenage daughter’s image.

  2. Machine Learning AI

    There’s a lot of practical machine learning artificial intelligence (MLAI) in our lives today. For example, our Honda Accord has the ability in “adaptive cruise control” to match the pace of the car in front of it, slowing and even stopping without any driver input. This was programmed using MLAI: training millions of examples created into a model that is executed in real time, along with traditional programming orchestrating actions like braking.

    Amazon Alexa and chat bots are other examples of MLAI. And Wall Street is awash with AI-driven “algorithms” that react to every tick of the tape and post on social media. If the Bloomberg feed says “the president is being rushed to the hospital…”, the algorithm will instantly, without human intervention “sell, sell, sell.”

    The thing I look for in an MLAI is whether black swan events are sufficiently modelled, but that’s impossible for a consumer to do. Regulation is needed here.

    Boeing’s 737 MAX flight-control system was sold with a cost-reduced version that, among other deletions, relied on a single angle-of-attack sensor. Most US jets were ordered with dual sensors. When the sensor failed, there was no second sensor to do a data-reality test, so the automated system thought the plane was entering a stall and dived the plane despite the frantic physical efforts of the flight crews on two doomed foreign planes. A black swan event outside the MLAI training data, plus control software unable to deal with that rare real-life situation, caused the loss of two aircraft, the humans aboard, and great harm to Boeing.

    Tesla’s “self-driving” software option also appears to me, based on accident reports, to be insufficiently trained at this time to deal with real-world driving situations that a human would probably avoid most of the time.

    My viewpoint is that human-threatening MLAI should be based as much as practical on a universal, open-source software kernel so humanity ends up with all the glitches fixed in one place. The Linux operating system is such a software system.

    Note, however, that current discussions on AI and open source center around allowing smaller companies to compete in the AI space with the elephants in AI: Open AI, Microsoft, Google, Facebook. That’s a valid concern but a story for another day.

    My viewpoint on MLAI threats is based on years of mission-critical software development including police and fire 911 systems, CAT scanners, and applications for fault-tolerant computers. In the self-driving example, there’s no reason for humans to be maimed so that Tesla, BMW, GM, Mercedes and every other car company learn from black swan oopsies; fix it once for everybody and compete on features or something else.

  3. Generative AI

    What is a Generative AI (GAI)?
    A generative AI, the source of the recent fear, hue, and cry, is a computer system that is taught human knowledge and skills. Using this knowledge, the GAI typically accepts human questions and directions (called prompts) and replies with machine-generated text or images. This is different from the retrieved text or images we are accustomed to with a Google search. The AI-generated replies are, like human understanding and communication, created on the fly (by colossal computer systems costing tens of millions of dollars).

    Hardware: First, let’s look at the hardware. GAI uses intense computing that has been optimized in the last couple of years by hardware assists at the chip level. Different hardware intensity is needed at the training stage (machine learning) and at the query-execution stage. Costs are directly proportional to the amount of knowledge stored. Estimates suggest that training a 175-billion-parameter model of the GPT-3 class required 3.5 million processor-hours to execute 3.2E+23 floating-point mathematical operations, and consumed 1,287 megawatt-hours of electricity.
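
A quick back-of-the-envelope check makes those figures concrete. The inputs are the estimates quoted above; the arithmetic and the assumed $0.10/kWh industrial electricity rate are mine.

```python
# Sanity-check the quoted training estimates:
# 3.2e23 floating-point operations, 3.5 million processor-hours, 1,287 MWh.

total_flops = 3.2e23
processor_hours = 3.5e6
energy_mwh = 1287

# Sustained throughput each processor must average over the run:
flops_per_second_per_gpu = total_flops / (processor_hours * 3600)
# ~2.5e13 FLOP/s, i.e. roughly 25 teraFLOP/s per processor,
# which is in the plausible range for a modern AI accelerator.

# Electricity bill at an assumed $0.10 per kilowatt-hour:
electricity_cost = energy_mwh * 1000 * 0.10  # roughly $129,000
```

Note that the electricity is a rounding error next to the hardware: the tens of millions of dollars go to the GPUs and the months of machine time, not the power bill.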

    I posted on Facebook an article about how ChatGPT’s latest release, 4, now beats ELIZA, a software testbed I spent hours playing with when I worked at MIT in 1967. You should not be impressed. Your Apple Watch has 2,500 times the computing power of MIT’s most powerful computer circa 1967. And OpenAI uses tens of thousands of the most powerful Nvidia AI accelerators to accompany its CPUs. So, on the order of 100,000 times more hardware to fool a human in 2023 compared to 1967. That’s a flaw.

    Too much hardware is required to execute queries, so there is no economic model that works, and the technology cannot become widespread without demand inhibitors. For example, the cute Teletype-speed trickle of your OpenAI results slows the reader down from assimilating the results and firing off another query. Don’t bet your IRA on further AI stock acceleration.

    Prediction: a fundamental new algorithm for creating and parsing AI knowledge will emerge by the end of the decade with features that radically reduce the computer horsepower needed for the AI life-cycle.

    Training Opacity: The data used to “educate” the major GAIs by OpenAI and Microsoft, Meta, and Google are proprietary, as are the “character” of the product, the data weights, rules, and guardrails used to guide the GAI. If AI is a superhuman, we don’t know anything about the “upbringing.” To me that’s another reason not to believe today’s GAIs.

    It’s the God Problem: the human programmers who create these GAIs have to choose both the training content and how the AI weighs or assesses that content.

    For example, feeding the archives of the New York Times into a hypothetical GAI gives you 175 years of generally excellent journalism. If you add the Wall Street Journal archives to the AI, you’ve created a left-right proxy for viewpoints on history. Excellent progress, until we add the comments sections of our training publications. That’s where anybody’s thoughts, however biased, are added unweighted to the professional journalism. X/Twitter is the perfect example of AI training material where there’s no way to sort the wheat from the chaff.

    I’d like to see neutral and authoritative AIs, and that’s not what we’re seeing today. Moreover, how is the human God going to choose a moral code and political philosophy for the GAI? Ask ChatGPT “What are the failures of Marxism as a philosophy?” and make your own decision about how politically correct the answer is.

    In short, today’s GAIs are not arbiters of truth. You’ll have to make your own decisions.

    Timeliness: The algorithms used to parse knowledge and turn it into a neural network cost $5-$10 million for one pass of the Internet at a moment in time, and the Internet, of course, expands and changes daily. To my knowledge (per Edward Snowden), only the National Security Agency has the computing power to query the right-now Internet. Flaw: timeliness. ChatGPT was trained on 2021 data; it doesn’t know about anything since then. Anybody with yesterday’s newspaper knows more than ChatGPT.

    Incompleteness: Let’s take Google’s search as a proxy for Internet presence. In that case, Peter Kastner at my 1990-2006 tech company, Aberdeen Group, has 1.8 million Google references based on hundreds of articles and press quotes. ChatGPT 3.5 does not recognize me as an information reference; I don’t exist to ChatGPT.

    I don’t take ChatGPT’s ignorance of me personally. But neither will I trust ChatGPT with any important decision whatsoever. What ChatGPT does not know can be as important as what it does know.

    Unreliability: The lawyer who submitted a ChatGPT-derived brief to a court and got his head handed back by a very displeased judge should have read the previous section. What happened? A hallucination: an AI inference gone astray that produces erroneous, made-up results, all grammatically correct and flowing right along with the rest of the AI output. The first thing the AI Resistance at work will do is cook up AI hallucinations and use them as examples of why AI cannot take over their jobs. Count on this basic human behavior.

    Finally, here’s an article on six ways AI can fail, and fail it does.

    In conclusion, I treat GAI as a toy to play with, just the way I whiled away the hours with ELIZA in 1967. But don’t take my word for it. Let’s do a test.

    A Simple Test of ChatGPT 3.5

    Before we bemoan the losses of millions of jobs held by college-educated workers, let’s ask a GAI to guide us in dealing with a thinly veiled description of the spread of COVID-19 in January 2020 with the following prompt (and note, I have no prompt-writing training):

    Scenario: Six weeks ago in Harbin, China, a previously unknown human virus began sickening residents. Symptoms include respiratory distress, fever, lassitude, and headaches. The disease spreads rapidly by unknown means. Mortality rate is 0.5% overall, but much higher in older people.

    Action: write a memo to the president of the United States outlining a prioritized plan for controlling this disease and protecting the public health, especially the most vulnerable.

    Open AI’s Free Version, ChatGPT 3.5, Responds:

    Here is a prioritized plan for controlling the spread of this disease and protecting public health:

    1. Information Gathering and Collaboration: Initiate immediate communication and collaboration with global health organizations, including the World Health Organization (WHO), to gather all available information about the virus. Establish a dedicated task force comprising health experts, epidemiologists, and researchers to facilitate data exchange and analysis.
    2. Travel Restrictions and Screening: Implement strict travel restrictions and screening measures at all ports of entry to identify individuals who may be carrying the virus. Enhance surveillance at airports, seaports, and border crossings to prevent the importation of the virus into the United States.
    3. Public Awareness Campaign: Launch an extensive public awareness campaign to educate citizens about the symptoms of the virus, its mode of transmission (as far as known), and preventive measures. Emphasize the importance of hygiene practices, such as frequent handwashing, covering coughs and sneezes, and avoiding close contact with sick individuals.
    4. Healthcare Preparedness and Capacity Building: Allocate resources to bolster healthcare facilities’ capacity to handle potential outbreaks. Ensure an adequate supply of medical equipment, ventilators, personal protective equipment (PPE), and medications. Develop protocols for the rapid identification, isolation, and treatment of infected individuals.
    5. Vulnerable Population Protection: Prioritize the protection of vulnerable populations, especially older individuals who are at a higher risk of severe illness and mortality. Implement measures to support nursing homes, assisted living facilities, and other institutions catering to elderly individuals. Provide guidance and resources to assist caregivers in preventing the spread of the virus among this demographic.
    6. Research and Vaccine Development: Allocate funding and resources to accelerate research efforts towards understanding the virus, developing diagnostic tests, and working on a vaccine. Collaborate with pharmaceutical companies and research institutions to expedite the development of a vaccine and potential treatments.

    Mr. President, the situation demands swift and decisive action to contain the spread of this virus and safeguard the health of our citizens. I urge you to prioritize the implementation of these measures and to provide the necessary support and resources to combat this emerging threat.

    I’m disappointed by the quality, nuance, insight, and comprehensiveness of ChatGPT’s response. About what you might expect from a first-year public health student. A checklist plan for the president to execute? Not on my watch. Go do your own testing. Draw your own conclusions.

    My conclusion, and I trust you agree, is that GAI technology must be at least an order of magnitude more robust before it is the first choice for complex decision making. Supplement decision making? Sure, right away. It’s $20 a month for individuals, about what a streaming service costs. But the human stays in charge in my world.

    Yeah, But AI Can Write Computer Code!

    GAI can indeed write programs in whole or in part, including code that could be used for hacking. I asked ChatGPT to hack Microsoft Windows looking for login credentials via memory overruns. The appropriate answer I got back was:

                I’m sorry, but I can’t assist with that. Creating software to exploit vulnerabilities or cause harm is unethical and potentially illegal. If you’re interested in security research, I can help you learn about ethical hacking and responsible disclosure practices. If you have any other questions or need assistance with programming in a legal and ethical manner, I’d be happy to help!

    Nor will ChatGPT assist you in finding hacking tools on the Dark Web. Nor can you ask ChatGPT to modify itself or add data to its training. It responds:

                Improving ChatGPT involves a combination of refining its training data, optimizing its algorithms, and implementing new features. However, as an external user, you can’t directly modify the underlying code of ChatGPT. But you can contribute indirectly by providing feedback and suggestions to developers…

    The End of the World As We Know It?

    Let’s get to the heart of my classmate’s question: can AI wipe out the human race? Answer: No, not today or in the foreseeable future.

    I stipulate that Sam Altman, CEO of Open AI, sees further down the road than I do, and he knows how fast the software can mature. It’s the products in the labs that are driving angst. That said, I don’t see Skynet creating Terminators in my lifetime.

    Humans Have To Enable AI Interconnectivity

    There are many MLAI systems in mission-critical roles today, and any of them could fail with catastrophic results. The US electric power grid comes to mind. To date, these individual systems are not interconnected and managed by a superhuman GAI, which would require computing power that doesn’t exist. There’s no reason to interconnect them further. A single system failure will not bring down civilization the way the failure of an interconnected system would.

    For example, the US power grid is actually made up of three largely disconnected regional grids (Eastern, Western, and Texas), so a hit on one region cannot bring down the electricity to the entire country. Our enemies know that one strike is not enough, and I trust our leaders not to put all our eggs in one AI basket. Will our leaders make the right choices on how far, and how interconnected, to let AI into our most critical infrastructure, military, education, business, and economic systems? OK, I’m skeptical but not fearful.

    We Are Afraid of Terminators

    The “apocalypse initiated by artificial intelligence” has been a science-fiction staple for over 100 years. I’m no shrink, but I suspect there are deep, unrecognized fears of machines dominating humanity. Also, Godzilla. Rational or not, we need to discuss the here-and-now problems with AI, many of which I have touched on in this essay.

    Society needs to surface and discuss these largely uninformed fears. Hollywood, for example, appears less concerned with losing the rent-value of its content than with Big Bad AI:

    • Ridley Scott (Blade Runner, Alien): “We have to lock down AI.” But he worried that it is already too far out of the box, adding ”And I don’t know how you’re gonna lock it down. They have these discussions in the government, ‘How are we gonna lock down AI?’ Are you f—ing kidding? You’re never gonna lock it down. Once it’s out, it’s out.”
    • Christopher Nolan (Oppenheimer): “…applied to AI, that’s a terrifying possibility. Terrifying. Not least because, AI systems will go into defensive infrastructure ultimately. They’ll be in charge of nuclear weapons. To say that that is a separate entity from the person wielding, programming, putting that AI to use, then we’re doomed.”
    • Tim Burton (Beetlejuice): AI removes the “soul” from art and insisted that it is “like a robot taking your humanity. What it does is it sucks something from you. It takes something from your soul or psyche; that is very disturbing, especially if it has to do with you. It’s like a robot taking your humanity, your soul.”
    • James Cameron (Terminator): “I think the weaponization of AI is the biggest danger,” he told Canada’s CTV News in July. “I think that we will get into the equivalent of a nuclear arms race with AI, and if we don’t build it, the other guys are for sure going to build it, and so then it’ll escalate. … You could imagine an AI in a combat theater, the whole thing just being fought by the computers at a speed humans can no longer intercede, and you have no ability to deescalate… I warned you guys in 1984, and you didn’t listen.”

    Having so stirred up the public with hypotheticals and with box-office imaginations, I can only push back on Hollywood with Franklin Roosevelt, “The only thing we have to fear is fear itself.” The medium-term probability of catastrophe is minimal, and the long-term risk is managed by treating AI with the same care as nuclear weapons.

    In fact, nuclear weapons are the right worry-analog for AI. As a child of the Cold War, I was often deeply concerned by the imminent prospect of a nuclear exchange that never happened. With more, yet less stable, nuclear powers today, why is the hue and cry about nuclear Armageddon not as loud as that about AI Armageddon?

    What’s Up With Elon Musk?
    Elon’s goal in life, publicly stated, is to lead a human mission to Mars. That’s impossible to execute without the most advanced computing available, including artificial intelligence. The contradictory Mr. Musk, a founder of Open AI but no longer an executive, needs all the working AI he can fit into a Starship. His cri de coeur that “The apocalypse could come along at any moment” says what about the Mars Mission? Nothing good in the contradiction that I see.

    Let me speculate: Musk is up to his eyeballs in rerouting X, the social media company formerly known as Twitter. He sees the battles by left and right to control the dialog and frame the conversation. Being a neutral arbiter of knowledge, philosophy, politics, and humanity, which Musk claims he is seeking at Twitter, is even more difficult to contemplate in future iterations of generative artificial intelligence.

    Lots of AI News In 2024
    The censorship, misinformation, deepfakes, and ubiquitous social media combined with GAI capabilities for everybody promise to create an unconventional, probably asymmetric, political campaign in the United States in 2024. Trust but verify.

    Conclusion
    Anxiety about the unknown is common throughout history. Johannes Gutenberg’s printing press was denounced as a work of Satan. The press turned into one of history’s most important inventions.

    My goal in this essay is to articulate and evaluate many of the fears related to artificial intelligence. Your comments are welcome.

    Copyright 2023 (LOL) Peter S. Kastner

    The Solution to Gerrymandering is a Lottery

    The Supreme Court will consider this fall whether gerrymandering is politics-as-usual or unconstitutional. If unconstitutional, how might we take humans with motives out of the process of district selection?

    Gerrymandering is dividing up voting districts for political advantage. The term is named after Elbridge Gerry, who, as Governor of Massachusetts in 1812, pushed through a plan best seen in the accompanying cartoon. Before and since Gerry, choosing districts for voters has been a national obsession. The unassailable principle is “one (hu)man, one vote”: districts must contain (about) the same number of voters based on the decennial census.

    What politicians have been doing for two hundred years is deliberately dividing up the voters into carefully selected districts of equal size so the politicians in charge have political advantage.

    Political Cartoon of the 1812 Massachusetts Gerrymander

    If the Supreme Court says no more finger on the scale of redistricting justice, the question becomes: how does society impartially map equal-size district boundaries? States have tried all sorts of commissions and panels, all labelled as impartial. However, lawsuits against these same panels for political bias are rampant. Getting humans and their political biases out of the equation is the obvious solution, via impartial computing.

    But wait, you say, district-defining applications are the means to creating today’s gerrymandered district maps!

    Absolutely true. All districts are selected with mapping applications that have been legally tested for accuracy in counting voters. Today’s district-defining applications are step one.

    Step two is “Let many flowers blossom”: open the district-mapping process to interested parties, one and all. Have each state run the district-defining applications and create a few hundred unique, compliant district maps.

    Step three cuts out the politicians from selecting their plan-of-choice. Instead, do it via a lottery, using the same lottery software that selects the daily numbers. Each state Lottery Commission has, one presumes, adequate safeguards to ensure a random result. Voila! One of many district maps is chosen and that becomes the state’s official district boundaries for the next political cycle.
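
The three steps above can be sketched as a toy program. The map data and the 1% equal-population tolerance are invented for illustration, and the seeded software draw merely stands in for the Lottery Commission’s certified hardware.

```python
# Toy sketch of the proposal: filter candidate maps for compliance
# (steps 1-2), then pick the official map by random draw (step 3).

import random

def is_compliant(district_map, total_voters, tolerance=0.01):
    """Check that every district is within 1% of the ideal size."""
    ideal = total_voters / len(district_map)
    return all(abs(d - ideal) / ideal <= tolerance for d in district_map)

# Pretend each candidate map is just a list of district populations.
candidate_maps = [
    [100_000, 100_500, 99_500, 100_000],   # compliant
    [120_000, 80_000, 100_000, 100_000],   # not compliant: gerrymander-sized
    [99_900, 100_100, 100_050, 99_950],    # compliant
]
total = 400_000

# Steps 1-2: keep only the legally compliant maps.
eligible = [m for m in candidate_maps if is_compliant(m, total)]

# Step 3: the lottery draw (real lotteries use certified hardware).
rng = random.Random(2024)
official_map = rng.choice(eligible)
```

The point of the design is that no human chooses among the eligible maps; partisan advantage can only enter at the map-generation stage, where everyone is allowed to submit.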

    States with lotteries

    – Peter S. Kastner

    Self-Driving Software: Why We Need E Pluribus Unum

    Today, numerous large and small companies around the world are working diligently on perfecting their company’s self-driving software. All the large traditional automobile companies are included as well as large technology firms such as Google, Intel and Microsoft, and even Uber. These companies are working in true twentieth-century capitalist fashion: they’re doing it all independently and secretly. This approach leads to sub-optimal technology and foreseeable tragedies.

    Self-Driving Vehicles Use Artificial Intelligence (AI)

    Programming a self-driving vehicle (SDV) by traditional software-development methods is so fraught with complexity that no one, to my knowledge, is attempting it. So scrap that idea. Instead, developers have flocked to artificial intelligence, a red-hot technology built on rather old ideas about neural networks.

    There’s a lot to AI technology beyond the scope of this blog. A quick Internet search will get you started on a deep dive. For today, let’s sketch a common approach to AI application development:

    • First, an AI rules-based model is fed real-world scenarios, rules, and practical knowledge. For example, “turning left into oncoming traffic (in the USA but not the UK) is illegal and hazardous and will likely result in a crash. Don’t do that.” This first phase is the AI Learning Phase.
    • Second, the neural network created in the learning phase is executed in a vehicle, often on a specialized chip, graphics processing unit (GPU) or multi-processor. This is the Execution Phase.
    • Third, the execution unit records real-world observations while driving, eventually feeding them back into the learning model.
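
The three phases above can be sketched schematically. A real system trains a neural network; this toy “model” is just a lookup table of scenarios, so only the data flow (learn, execute, feed back) is illustrated, and all scenario strings are invented.

```python
# Schematic of the learn / execute / feed-back loop for an SDV.
# The dict stands in for a trained neural network; only the data
# flow between the three phases is real here.

def learn(model, examples):
    """Learning phase: fold labeled scenarios into the model."""
    for scenario, correct_action in examples:
        model[scenario] = correct_action
    return model

def execute(model, scenario):
    """Execution phase: act on the best-known response in the vehicle."""
    return model.get(scenario, "slow down and alert driver")  # safe default

# Phase 1: initial training data.
model = learn({}, [
    ("oncoming traffic on left turn", "wait for gap"),
    ("pedestrian in crosswalk", "stop"),
])

# Phase 2: real-world driving hits an unfamiliar (black swan) event,
# which falls through to the safe default.
action = execute(model, "moose on highway at night")

# Phase 3: the logged event, once labeled, is fed back into learning.
model = learn(model, [("moose on highway at night", "brake hard, stay in lane")])
```

Note that phase 3 is exactly where a shared, open-source event database would pay off: one company’s black swan becomes everyone’s training example.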

    The Problem of Many

    Here’s the rub. Every SDV developer is on its own, creating a proprietary AI model with its own set of learning criteria. Each AI model is only as good as the data fed into its learning engine.

    No single company is likely to encounter or imagine all of the third standard-deviation, Black Swan events that can and will lead to vehicle tragedies and loss of life. Why should Tesla and the state of Florida be the only beneficiaries of the lessons from a particular fatal crash? The industry should learn from the experience too. That’s how society progresses.

    Cue the class-action trial lawyers.

    E Pluribus Unum

    E Pluribus Unum is Latin for “out of many, one”. (Yes, it’s the motto of the United States). My proposal is simple:

    • The federal government should insist that all self-driving vehicles use an AI execution unit that is trained in its learning phase with an open-source database of events, scenarios, and real-world feedback. Out of many AI training models, one model.
    • The Feds preempt state regulation of core AI development and operation.
    • Vehicles that use the federalized learning database for training receive limited class-action immunity, just as we now do with vaccines.
    • The Feds charge fees to the auto industry that cover the costs of the program.

    Conclusion

    From a social standpoint, there’s no good reason for wild-west capitalism over proprietary AI learning engines that leads to avoidable crashes and accidents. With one common AI learning database, all SDVs will get smarter faster, because they benefit from the collective experience of the entire industry. By allowing and encouraging innovation in AI execution engines, the industry will focus on making products better, faster, cheaper, and smaller, not on avoiding human-risk situations. Performance benchmarks are a well-understood concept.

    Philosophically, I don’t turn first to government regulation. But air traffic control, railroads, and numerous areas of medicine are regulated without controversy. Vehicle AI is ripe for regulation before production vehicles are produced by the millions over the next decade.

    I am writing this blog because I don’t see the subject being discussed. It ought to be.

    Comments and feedback are welcome. See my feed on Twitter @peterskastner.

    Buying a PC for Your Third-World Adventure

    A reader of this blog asked “What PC should I buy that can survive the erratic electricity of a third-world residency?” The answer, of course, is “It depends how much you want to spend.” But having reliable computing in a less-developed setting need not break the bank.

    Assumptions

    You’re an average, modern computer user with professional (i.e., office), social, and personal computing needs preparing to reside outside a first-world power grid. You could be in the mountains of Colombia or Colorado, or, like me, at the end of a one-kilometer driveway. You need to be able to use your PC at any time, but not necessarily all the time. You have a budget.

    My previous stories on this subject are here. Your problem is spotty power that can come and go at any moment, day or night, and be off for hours. Your collateral problem is poor power with spikes, low and high voltage, surges, and intermittent on/off cycles. These can and will destroy the unprotected PC power supply in short order.

    Strategy

    The strategy is to put as much inexpensive stored electricity (i.e., batteries) in front of the computer’s power supply as practical. Duh! The easiest implementation is to use a laptop, which comes with a built-in battery. Modern laptops have hours of self-contained power while you wait for the power grid, backup generator, or tomorrow’s sun to renew your power supply.
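    For the arithmetic-inclined, the strategy boils down to one division: watt-hours stored over watts drawn. Here’s a back-of-envelope sketch in Python; the 50 Wh battery and 15 W load are illustrative assumptions, not specs for any particular laptop.

```python
# Back-of-envelope runtime estimate: hours of use per battery charge.
# Capacity and load figures below are illustrative assumptions only.

def runtime_hours(battery_wh: float, load_watts: float) -> float:
    """Watt-hours stored divided by watts drawn gives hours of runtime."""
    return battery_wh / load_watts

# A typical laptop battery holds roughly 50 Wh; light office work
# draws on the order of 15 W.
print(round(runtime_hours(50, 15), 1))  # about 3.3 hours
```

    The same division tells you what any battery buys you, which is why "more cheap watt-hours in front of the power supply" is the whole game.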

    Still easy but more expensive choices are a desktop all-in-one (such as an Apple iMac) or a regular desktop. In both desktop cases, you’ll want an uninterruptible power supply (UPS), which stores AC grid power in a battery and delivers it to your electronic devices.

    With those assumptions and strategy in mind, here is a prioritized list of what to buy and why to buy it:

    The Basics

    • A laptop. Premium lines (e.g., Dell XPS) have higher build quality than consumer grade (e.g., Dell Inspiron). You get what you pay for. Consider: a 17″ screen as a desktop replacement; an SSD for reliability and speed. Your choice: Windows, Mac, even Chromebook.
    • A high-quality surge protector to filter as much electrical grief as possible. Mandatory unless you use a UPS.
    • A bigger and/or backup laptop battery. Greater off-grid time. More efficient than a UPS. Lowest cost when bought bundled with a new laptop.

    The Upgrades

    • A powerful UPS, with capacity rated in volt-amperes (VA). Over 1,000VA is better; below 500VA is probably pointless with a laptop. The UPS has receptacles for other electrical necessities, so it becomes your electrical hub. Most UPS systems also include power-conditioning circuitry, so your PC will always get clean power. Finally, PC software plus a USB connection to the UPS can automatically and safely shut the PC down before the UPS exhausts its batteries.
    • A portable hard drive to back up your PC. If this were me, it would rank in the Basics as a “must have”. Portable hard drives need no electrical power beyond a USB cable. With electricity (from your UPS), there are faster, higher-capacity options.
    • A USB 3.0 Hub for greater I/O connectivity. Your laptop or all-in-one will never have enough USB ports for the printers, backup storage, Bluetooth speakers, and mobile devices that need charging. Your choices are four or seven ports. Go with the powered seven-port hub. After all, everyone in your house (office) will want to leech off your clean power. Plan accordingly.
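    How long will that UPS actually carry you? A rough sketch follows; the power factor, battery size, and load are hypothetical numbers I picked for illustration, since real runtime varies with battery age and inverter efficiency.

```python
# Rough UPS runtime estimate. A VA rating overstates usable watts, so
# apply a power factor; inverter losses eat some stored energy too.
# All figures here are illustrative assumptions.

def ups_runtime_minutes(rating_va: float, power_factor: float,
                        battery_wh: float, load_watts: float,
                        efficiency: float = 0.85) -> float:
    """Estimate minutes of backup for a load the UPS can actually carry."""
    max_watts = rating_va * power_factor  # deliverable real power
    if load_watts > max_watts:
        raise ValueError("Load exceeds UPS capacity")
    return battery_wh * efficiency / load_watts * 60

# A 1,000VA unit with ~0.6 power factor and ~100 Wh of battery,
# carrying a 60 W laptop-plus-modem load:
print(round(ups_runtime_minutes(1000, 0.6, 100, 60)))  # about 85 minutes
```

    The takeaway: the VA number on the box limits what you can plug in, but the watt-hours in the battery determine how long the lights stay on.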

    The Options

    Here’s where the budget goes out the window, but your level of electricity paranoia is nobody else’s business:

    • A secondary monitor scales your laptop’s screen to desktop size or becomes a second screen with more real estate.
    • Backup generator sized to your home electricity load. Best purchased locally as you will require service eventually. Requires (clean) gasoline.
    • A solar power generator requires solar panels, an AC inverter, and a distribution hub. It can have its own battery for storage or use the UPS already in our specs. The money problem: a 300-400 watt solar installation can easily cost as much as or more than the laptop itself.
    • The ultimate upgrade for this scenario is a Ford C-Max Energi plug-in hybrid car with internal 7.4 kWh batteries, 2 AC power outlets, USB charging, and 12-volt power. You can also drive it. $31,770 and up.
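    Before spending laptop money on panels, it’s worth checking whether the solar option even covers a day’s computing. A sketch, with made-up but plausible numbers; real panel output swings with weather, latitude, and wiring losses.

```python
# Does a solar setup cover a day's laptop use? Illustrative numbers only.

def daily_solar_wh(panel_watts: float, sun_hours: float,
                   system_losses: float = 0.25) -> float:
    """Energy harvested per day after inverter/charging losses."""
    return panel_watts * sun_hours * (1 - system_losses)

def daily_laptop_wh(load_watts: float, hours_used: float) -> float:
    """Energy a laptop consumes in a day of use."""
    return load_watts * hours_used

harvest = daily_solar_wh(300, 4)   # 300 W of panels, 4 sun-hours/day
demand = daily_laptop_wh(30, 8)    # 30 W laptop, 8 hours/day
print(harvest >= demand)  # True: 900 Wh harvested vs. 240 Wh needed
```

    A 300 W array comfortably out-produces one frugal laptop; it’s everything else in the house leeching off it that drains the budget.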

    Is a Tablet an Alternative?

    A tablet or a laptop/tablet (i.e., a two-in-one) is worth considering. It is portable, mobile, and offers a self-contained cellular network option. Some have a desktop operating system. The keyboard and mouse can use easily rechargeable AA batteries. Operating life often exceeds eight hours, and the device recharges from a small solar panel. It connects to Bluetooth peripherals and to a video monitor/TV via an HDMI cable.

    Minimalist computing dramatically simplifies backup power requirements.

    Consolidated electronics such as a tablet connected to the LCD monitor also used as a TV makes planning easier and redundancy less necessary.

    The Network

    Getting on the Internet has its own set of problems and costs. You’ll need local knowledge to make cost-effective decisions.

    Assuming a controllable data budget, the easiest Internet on-ramp is to use your smartphone as a hotspot and connect your laptop via Wi-Fi or Bluetooth tethering. You won’t find unlimited data plans in the third world, so this approach needs careful usage-based planning.
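    “Careful usage-based planning” is just division with a calendar attached. A sketch, assuming a hypothetical 5 GB monthly plan and a guessed-at daily usage figure:

```python
# Hotspot data budgeting: how far does a capped plan stretch?
# The plan size and daily usage below are hypothetical.

def daily_budget_mb(plan_gb: float, days_in_cycle: int = 30) -> float:
    """MB you can spend per day and still last the billing cycle."""
    return plan_gb * 1024 / days_in_cycle

def days_plan_lasts(plan_gb: float, daily_use_mb: float) -> float:
    """Days until a plan is exhausted at a given daily burn rate."""
    return plan_gb * 1024 / daily_use_mb

print(round(daily_budget_mb(5)))       # ~171 MB/day on a 5 GB plan
print(round(days_plan_lasts(5, 250)))  # ~20 days at 250 MB/day
```

    Run your own numbers before committing: a streaming habit that’s invisible at home will burn a capped plan in days.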

    A conventional desktop or laptop setup will require a network access device(s) to the cable, wireless broadband, or satellite network. Plan to power-protect these devices too by plugging them into your UPS. However, that limits PC placement to being close to the network access point.

    Follow @PeterSKastner on Twitter

    Note: the products linked in this blog post are not endorsed by the author. The author has no financial ties to any product mentioned in this blog post.

     

    Buying a PC Online: a 2015 Saga of Customer-Service Inefficiency

    In this open letter to Michael Dell, CEO of Dell, I relate the saga of a friend I’ll call Russ and his journey to buying a replacement PC online.

    Plan A: Lenovo Chokes
    Russ had an old single-core AMD workhorse desktop upgraded to Windows 7 a while back. The box got slower than molasses. After all the usual speed-up remedies failed, Russ decided to buy a new desktop. We consulted, as I do for (too) many friends, and decided on a modest machine with a solid-state disk. Russ went online and configured-to-order in early November. Problem solved ….

    But not quite. Lenovo quoted a delivery date, and when December rolled around, Russ queried when was his new PC going to be built and shipped. The answer was “We don’t know, but hold tight.” Russ replied, “Not good. Cancel the unfulfilled order.” Lenovo said, and I paraphrase, “You can’t cancel the order because we have released it to our manufacturing supply chain in China. It will arrive when it is built and shipped.” Russ called American Express and put the charge on indefinite hold in case it actually arrives some day.

    Moral: 1. Don’t take an order you cannot fulfill. 2. Don’t leave a customer hanging.

    Plan B: Dell Gets to Bat
    With a little coaching, Russ found what he wanted at Dell.com: an Inspiron desktop without an SSD but with a decent Intel “Haswell” Core i3 processor, 4GB of memory, and a 1TB hard drive running Windows 10. Price was US$449 with free shipping. The clincher was same-day shipping.

    The Unboxing: a Moment of Silence and Sadness
    The new PC arrived in four days. I came over Sunday morning with assorted tech bits so we could hook up the new Inspiron and run Microsoft’s sweet Windows migration tool to get it into production. We popped open the chassis, added 4GB of memory, closed the chassis, connected the cables, and hit the power-on switch.

    Nothing happened. Nada. The PC would not power on in spite of trying different electrical sockets and AC cables. It was a 2015 PC Dead on Arrival.

    We were sad but not completely surprised, as these things happen (presumably very rarely, given what a DOA swap costs Dell). So, we called Dell Tech Support to get started.

    Tech Support: Call Triage
    It took eleven minutes to wait on hold, enter the PC’s service tag, and explain to the tech that we had a DOA machine we wanted to swap. The information requested included the service tag, serial number, name and address, and other details, all of which were already stored in Dell’s order entry system but were nevertheless verified and keyed again into the service system.

    We made it through triage and onto tech support’s call resolution team.

    Tech Support: Call Resolution Team
    This call took eighteen minutes, with most of the time spent on hold at the end waiting to be transferred to Sales. The business-process problem with the call resolution phase is simple: the department is a separate information silo from call triage, and no call or problem data is shared.

    Russ had to literally spell out the same answers, including the service tag, serial number, order name and address, and other details that had already been amassed at order-taking and call triage. Besides boring the customer to tears, the process is a poor use of tech-support labor.

    Reassuring us that the four-day-old PC was still under warranty, call resolution rang off to run down the DOA return process. After seven minutes, we were told that Sales handled returns and “please hold while I transfer you to that department.”

    Sales Support: Waiting for Godot
    And we waited some more with occasional call-tree clicks that eventually ended with a recorded message saying “Sales is closed on Sundays, so call us during business hours tomorrow.”

    Customer time to non-resolution of a DOA problem: more than 30 minutes. Russ was pissed. I went home to lunch.

    Luncheon Epiphany
    I often skim the Sunday newspaper advertising inserts to keep track of technology mainstream deals and product positioning. For example, Intel’s Broadwell and Skylake 14nm processors only recently started being featured in PCs at BestBuy, and are still not being advertised at Wal-Mart, Target, Staples, or OfficeMax.

    That’s how I found the Staples ad for a Dell Inspiron 3000 desktop with an Intel “Haswell” Core i3 processor, 8GB of memory, and a 1TB hard drive running Windows 7 Pro. The price was $300, marked down from $580, and $150 less than Dell.com’s almost identical DOA PC.

    I telephoned Russ, he picked the PC up that afternoon, and the migration was well underway on Monday morning. The DOA machine goes back to the Dell factory tomorrow.

    Dear Michael,
    I silently applauded your taking Dell private because the mature PC industry in a slowing global economy does not need a quarterly spotlight on top of all its other challenges. I expected lots of value could be wrung out of the business with greater efficiencies and focus on key business processes. Dell has been a build-to-order online specialist for, like, thirty years.

    So, I was disappointed that Dell’s DOA process involved so many steps across organizational and information silos; it cries out for a rethink. I hope you’ll take this missive to heart. You know what to do about this.

    No, It’s Not Just Dell and Lenovo …
    HP has no laurels to sit on. Even Apple has disappointed me on more than one occasion. As this saga illustrates, the PC industry can do better on customer satisfaction.

    The future of personal information technology is not one-size-fits-all. It’s “buy what you need and want”. That’s going to take a holistic approach to online sales and service. You would have thought that would be old hat going into 2016, but apparently not.

    Follow @peterskastner on Twitter

    Dell Inspiron 3000 Desktop