/* */ Computing – Vine Maple Farm

LLMs and The Dot Com Bust

The gut-wrenching catastrophe of my mid-career as a software developer, The Dot Com Bust, signaled the beginning of the Internet and the World Wide Web as the primary communications platform of the twenty-first century. Tim Berners-Lee, a British computer expert working for CERN in Switzerland, formulated the World Wide Web: HyperText Markup Language (HTML) documents addressed and linked with Uniform Resource Locators (URLs). In the 1990s, many people recognized its power, but few had any idea how to use that power.

Nevertheless, the technical behemoths of today were born in the chaos of the Dot Com Bust.

The mountains of wealth eventually extracted from the computer network had no obvious source in the 1990s. Companies sprang up with the notion that disseminating information online, local high school sports scores for example, had value. Thousands of small and not-so-small companies were formed and attracted investors. The information on these sites was useful and people flocked to visit them. This was obviously important and world-changing, but there was no money to be gained from the sites. The boom fell flat when no return on the investments appeared and massive developer layoffs ensued.

In the early two-thousands, Mark Zuckerberg sat in a Harvard dorm room, a typical undergraduate male, scheming ways to strike up relationships with girls. Colleges at that time often printed a pamphlet of photographs, mostly high school graduation portraits, of the entering class to help them get to know each other. Zuckerberg latched onto the notion that he could post those photos online and make them interactive. His classmates could introduce themselves and exchange comments from the networked computers in their dorm rooms. The same idea was popping up all over; it was an easy and natural application of the World Wide Web.

As we all know, the idea was a tremendous success. Facebook, Friendster, MySpace, Instagram, LinkedIn, and a flock of similar sites began to tie together the lonely geographic diaspora of families and social groups that has marked the late twentieth and early twenty-first centuries. Computer screens and network connections became access ports into a society that was no longer constrained by spatial proximity and the ground-speed of tons of paper.

The fortunes of the twenty-first century were made from this new role for computers and their network, but only after the plumbing for network commerce was established.

Today’s richest man, Elon Musk, with fellow Silicon Valley mogul Peter Thiel, began his fortune by building a tool for monetary exchange on the computer network: PayPal. Jeff Bezos developed the technology for his online store and brought retail to the network. Larry Page and Sergey Brin opened easy access to the web through their search engine and then laid the foundation for profiting from the deluge of network traffic through interactive online advertising.

The majority of projects that failed during the Dot Com Bust saw the power of the computer network but they did not use it with an effective business plan. Musk, Bezos, and many others saw the power and used it with business plans that made them tons of money.

Today, there’s something magical when ChatGPT produces a report in seconds that reads well and contains information and insights that would have taken a human hours or weeks to research and compose. The letdown is that the report contains errors that take hours to find and correct. You publish a raw generative AI report at your peril. Worse, in the end, the report is lifeless and boring, lacking in human spirit, the kind of report that gives bureaucracy a bad name.

The magic is undoubtedly there, like the value of a web site in 1998, but have we found a way to effectively use the magic? I think not.

I am confident that the fortunes of the mid twenty-first century will be made from LLMs and generative AI, but not quite yet. I am prescient enough to tell you that the Elon Musks and Jeff Bezoses of the mid-twenty-first century are coming for LLMs. Judging from the reports in the business and economic journals, still smarting twenty-five years later from Dot Com giddiness, we are on the edge of an LLM bubble burst, but I guarantee that hundreds of as-yet-unknown innovators are at work in basements, garages, and bedrooms on ideas that will establish a new set of moguls who will dominate the global economy in years to come.

And I can assure you that they will have nothing to do with tariffs or federal troops in Portland or Chicago.

Large Language Models: Perspective

Revised: 9/28/25

The computing technology industry has sent wave after wave of change cascading over business beginning with mid-twentieth century mainframes. All of society was affected when personal computers appeared in the nineteen-eighties. In the last decade, the computer network (the Internet), cloud, and blockchain computing all changed both business and society. The current wave brings Large Language Models (LLMs), Chatbots, and AI.

Cutting back the underbrush

LLMs are now soaring high on the hype curve for several reasons.

Technology analysts—like Gartner, Forrester, and IDC—track new technologies through the “hype cycle.” These analysts are paid by tech companies and companies that consume tech to publicize technical trends, which are typically oversold by ambitious companies eager for the media limelight and product sales that come with the publicity.

Five years ago, the tech industry got an unexpected boost from the pandemic when the Covid lockdowns forced school children, businesses, churches, social clubs, and government agencies (almost everyone) online. Network traffic watchers estimated that computer activity saw ten years of projected growth in three months.

However, as the pandemic ebbed, sales of new hardware, software, and tech services declined. The public was relieved to have fewer Zoom calls, but the tech industry starved for “the next big thing to revolutionize the world as we know it.”

The pandemic was a godsend to the tech industry because it temporarily warded off a slump that had been deepening steadily for over a decade. Today, people use their personal computer, which is now most likely a slim laptop or smartphone, for email, browsing the web and social media, word processing, and spreadsheets. This functionality hasn’t changed for over ten years. Old laptops work fine. Between Windows 7 and 11, Microsoft has repainted, redecorated, and polished up the old system, but what’s new?

The wide success of Chromebooks, thrifty low-powered computers running applications on the cloud instead of a local machine, shows that the personal computer market is getting stale. Some life may be left in video game and audio innovation, but those opportunities are for storytellers and artists, not computer engineers.

Computer applications and services for specialized professionals in fields like medicine, engineering, or scientific research have continued to improve and expand, but mass market technology used by everyone is stranded on a plateau.

To fund Silicon Valley billionaires beyond the mid-twenty-twenties, something amazing must make the scene.

Hype cycles occur for all tech. LLMs are not special in this, but the hype surrounding LLMs is exceptionally thick, and it makes evaluating LLMs difficult.

LLMs

I hesitate to say this because it sounds like I have been taken in by the hype, but I suspect LLMs may be an exceptionally important technical innovation, approaching writing, printing presses, radio and television, and computer networks in significance. Each of these technologies changed relationships among us and our environment. And each has inspired the trepidation and confusion that surround massive change.

My own trepidation—see my previous blog, which whines about LLMs ruining Google search—is a clue to their power. Hype skepticism is normal; trepidation is more significant.

In Plato’s Phaedrus, the Egyptian god Theuth praised his invention, writing, but Plato quoted Thamus, king of Egypt: “this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.” Perhaps true, but writing still blossomed.

Many say the Reformation began when followers started reading printed scripture. In our lifetimes, we have watched broadcasting and the computer network change the world.

Transformations

Today, I see the threshold of a new transformation based on LLMs. Looking back from today, previous transformations are easy to understand. Writing depicts thought with physical pen and paper. Printing quickly generates swarms of documents. Broadcasting instantaneously projects voices and images to crowds separated by long distances. On the computer network, everyone broadcasts like a television station.

Human hands write. Machines print. Electronic devices transmit and receive. Computer networks send and receive packets of encoded information to and from addresses. LLMs survey oceans of data in response to questions, putting an army of not-too-bright researchers at a single person’s beck and call.

LLMs’ armies are not very bright because they are reporters, not creators. ChatGPT literally does not know what it is talking about. It can summarize the contents of its vast store of information quickly and with surprising accuracy, but it cannot judge the truth of its extracts, and it sometimes makes egregious mistakes, called hallucinations, like suggesting glue as a pizza topping. An LLM is like an English major reporting on Einstein: the major may get the facts straight but wildly miss on predictions from Einstein’s theories because they have no basis for understanding Einstein.

Does that invalidate journalism and reporting? No. But it explains why careful readers always check a reporter’s credentials: who they are, who they represent, and what they are likely to know. Only with that background can we choose what use to make of their reporting.

When the printing press appeared, Luther and Gutenberg’s work combined. Luther translated the most important book in his life, the Bible, into the common language; printers using Gutenberg’s invention spread the translation, and the Christian church was changed forever. In the process, the role of copyists in monasteries dwindled, but theologians flourished.

Futures

Will there be a Luther and Gutenberg for LLMs? No one can be sure, but the possibility is real. Will the LLM “killer app” wreck our lives?

Spreadsheets, the personal computer’s killer app, did not destroy the accounting profession, but word processors ended typing pools. I lived through the disappearance of typing pools. Some typists may have missed long days pounding on a keyboard, but many went on to more interesting and challenging jobs. The same will happen when the Luther and Gutenberg of LLMs appears.

The future is unpredictable, but I can say this with certainty: if LLMs are as important as I suspect they are, a Luther and Gutenberg use of LLMs will appear. I can’t say when, and I know even less about what form its appearance will take or what changes it will engender, but it will come.

We may not recognize LLMs’ Luther and Gutenberg until decades after it comes, but if nothing comes, LLMs are not what I think they are.  

Angry At Google and Its AI

I’m a heavy Google search tool user. I’ve tried several of the alternatives (Bing, DuckDuckGo, and so on) but Google finds references they miss and I appreciate that. Google’s web crawlers, site ranking algorithms, and caching for rapid retrieval have been the best for decades.

Lately Google has added generative AI to their results.

This is not progress. It’s a disaster.

First, people should understand that until AI, Google did not offer answers. Their results were references to sites with content that matched your search criteria; the sites most frequently referenced were placed first with a few lines from each source. The user was left to draw their own conclusions, and who better than the user to draw those conclusions?
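The core intuition behind that old-style ranking can be sketched in a few lines. This is a toy illustration under a big simplifying assumption—Google’s actual algorithm weighs many more signals than a raw reference count—but it shows the “most frequently referenced sites first” idea:

```python
# Toy link-based ranking (NOT Google's real algorithm): order pages by how
# many other pages reference them, most-referenced first.
from collections import Counter

def rank_by_inbound_links(links):
    """links: list of (from_page, to_page) pairs; returns pages ranked
    by inbound reference count, highest first."""
    inbound = Counter(to_page for _, to_page in links)
    return [page for page, _ in inbound.most_common()]

# Hypothetical link data for illustration.
links = [
    ("a.example", "thoreau.example"),
    ("b.example", "thoreau.example"),
    ("c.example", "hoax.example"),
]
print(rank_by_inbound_links(links))  # thoreau.example first: two inbound links
```

The user still reads the ranked sources and draws their own conclusions; the ranking only decides what appears at the top of the pile.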

Now, Google uses a generative AI Large Language Model (LLM) to produce a summary of the information its search gathers. The search results follow the AI summary. This is supposed to make life easier for users.

Unfortunately, Google’s summaries are unreliable trash. Don’t expect Google’s AI results to be factual, only to sound plausible. That’s what generative AI and Large Language Models are all about. Plausibility, not facts.
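The plausibility problem falls straight out of how generative models produce text. A drastically simplified sketch (the word probabilities below are invented for illustration; real models work over enormous learned distributions) shows the mechanism: each next word is drawn by probability, with no check against truth anywhere in the loop.

```python
# Toy illustration of plausibility-driven generation: the next word is
# sampled from learned probabilities, never checked against facts.
import random

# Hypothetical probabilities for the word following "the chapter is":
next_word_probs = {"Economy": 0.5, "Sounds": 0.3, "Visitors": 0.2}

def sample_next(probs):
    """Pick one candidate word, weighted by probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Any of the three answers can come out, and every one of them sounds
# plausible--but at most one can be the right chapter for a given quotation.
print(sample_next(next_word_probs))
```

Every output is fluent and statistically likely; whether it is true is simply not part of the computation.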

If plausibility is all you want, AI is fine. But what kind of person are you for whom plausibility is good enough?

I had a great uncle Adlepate (that’s not his real name) who was a great storyteller. According to him, he led an exciting life as a bootlegger running whisky from Canada, and his garden always had the earliest ripening and largest vegetables. But his stories would never pass fact checking. I soon learned not to count on the truth of Uncle Adlepate’s stories, and I realized that he was a repetitive and colossal bore.

Pay attention to Google AI summaries and prepare to join Uncle Adlepate.

I knew better, but until today, I had begun slipping into paying attention to Google’s summaries. Today, Google revealed itself as Uncle Adlepate.

I am working on turning some of these posts in Vine Maple Farm into a book about life on Waschke Road when I was a kid. One of the posts I intend to include in the book refers to Thoreau’s famous phrase “a hound, a bay horse, and a turtledove.” The context in the original post was a little hazy, so I wanted to reread Thoreau. I looked the phrase up with Google. The summary told me which chapter the phrase was from. The wrong chapter. I wasted (well, waste is a bit strong; reading Thoreau is never a waste) a good half hour of my day on the misdirection.

When I went back to Google and ignored the AI summary, I quickly found the phrase.

So much for Google. I’m looking for a good way to turn the AI summaries off. I’m an old man. I don’t have time for them.

A late addition: I used the uBlock Origin advertisement blocker to turn Google AI summaries off in Firefox. It took more time to find the method than to apply it. Just add “www.google.com###Odp5De” (no quotes) to the “My filters” tab in uBlock Origin’s settings.

A later addition: The uBlock Origin block has ceased working. Google AI marches on. I’ve tried Kagi, suggested by Steve Stroh in a comment below. It’s promising, but I will use it more before I dump Google.