Vine Maple Farm – Pioneers, books, computing, and our future

Reading The News

This morning I was scanning the list of websites in my browser bookmark tab labelled “News.” I may have been doom-scrolling, but I soon lost interest in today’s events and opinions and began to think about what “News” was like when I was growing up on Waschke Road, in the region that denizens occasionally call “The Fourth Corner,” referring, perhaps pretentiously, to the last corner of the U.S. to be dominated by Europeans.

At first, we lived in the upstairs of my grandparents’ house. When I entered the first grade, my grandparents bought the house and five acres across the road and moved there, leaving the old farmhouse to my parents.

My grandparents subscribed to the local daily newspaper, The Bellingham Herald. The Herald arrived by mail the day after it was published. No home delivery on Waschke Road back then. When my grandparents finished reading the paper, they gave it to my parents, usually just before supper at five-thirty. Thus, we read the evening newspaper about twenty-four hours after it was published.

We got a TV when I was in the first grade, but we didn’t watch the evening news much because Dad switched off the television when the news started. That was the signal to go to the barn and milk the cows, taking all our attention until eight or eight-thirty. That schedule was fixed by the sun and the cows. Bovines must be milked every twelve hours or they stop lactating. Milking had to be at six in the evening and six in the morning, or the dairy interfered with raising summer field crops.

Sometimes, we turned on the radio at noon dinner break, but more pressing farm issues often dominated the midday.

Knowledge from off our road also came from magazines: The Saturday Evening Post, Washington Farmer, Farm Journal, McCall’s, Sunset Magazine, Time, U.S. News and World Report, and Saturday Review all graced our rural mailbox at one time or another.

No dearth of content threatened the old farmhouse, but the cadence of our news sources was far different from my sources today. Our most constant news source, the daily newspaper, had a twenty-four hour delay built in. Everything else was either weekly or monthly.

Compare that to today. I have close to twenty websites listed in my news bookmark tab. I could easily add more. These are all updated continuously and I open them several times a day. I have almost instant news from all over the globe.

Am I better informed than I was in the 1950s and 1960s? Depends on what you mean by “better.” I certainly wallow in half-baked and ill-considered data, but am I more aware of what is important in my surroundings?

I don’t know.

Large Language Models: Perspective

Revised: 9/28/25

The computing technology industry has sent wave after wave of change cascading over business, beginning with mid-twentieth-century mainframes. All of society was affected when personal computers appeared in the nineteen-eighties. In the last decade, the computer network (the Internet), cloud computing, and blockchain all changed both business and society. The current wave brings Large Language Models (LLMs), chatbots, and AI.

Cutting back the underbrush

LLMs are now soaring high on the hype curve for several reasons.

Technology analysts—like Gartner, Forrester, and IDC—track new technologies through the “hype cycle.” These analysts are paid by tech companies, and by companies that consume tech, to publicize technical trends, which are typically oversold by ambitious companies eager for the media limelight and the product sales that come with the publicity.

Five years ago, the tech industry got an unexpected boost from the pandemic when the Covid lockdowns forced school children, businesses, churches, social clubs, and government agencies—almost everyone—online. Network traffic watchers estimated that computer activity saw ten years of projected growth in three months.

However, as the pandemic ebbed, sales of new hardware, software, and tech services declined. The public was relieved to have fewer Zoom calls, but the tech industry was starved for “the next big thing to revolutionize the world as we know it.”

The pandemic was a godsend to the tech industry because it temporarily warded off a decline that had been deepening steadily for over a decade. Today, people use their personal computer, which is now most likely a slim laptop or smartphone, for email, browsing the web and social media, word processing, and spreadsheets. This functionality hasn’t changed for over ten years. Old laptops work fine. Between Windows 7 and 11, Microsoft has repainted, redecorated, and polished up the old system, but what’s new?

The wide success of Chromebooks, thrifty low-powered computers running applications on the cloud instead of a local machine, shows that the personal computer market is getting stale. Some life may be left in video game and audio innovation, but those opportunities are for story-tellers and artists, not computer engineers.

Computer applications and services for specialized professionals in fields like medicine, engineering, or scientific research have continued to improve and expand, but mass market technology used by everyone is stranded on a plateau.

To fund Silicon Valley billionaires beyond the mid-twenty-twenties, something amazing must make the scene.

Hype cycles occur for all tech. LLMs are not special in this, but the hype surrounding LLMs is exceptionally thick, and it makes evaluating them difficult.

LLMs

I hesitate to say this because it sounds like I have been taken in by the hype, but I suspect LLMs may be an exceptionally important technical innovation, approaching writing, printing presses, radio and television, and computer networks in significance. Each of these technologies changed relationships among us and our environment. And each has inspired the trepidation and confusion that surround massive change.

My own trepidation—see my previous blog, which whines about LLMs ruining Google search—is a clue to their power. Hype skepticism is normal; trepidation is more significant.

In Plato’s Phaedrus, the Egyptian god Theuth praised his invention, writing, but Plato quoted Thamus, king of Egypt: “this discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.” Perhaps true, but writing still blossomed.

Many say the Reformation began when followers started reading printed scripture. In our lifetimes, we have watched broadcasting and the computer network change the world.

Transformations

Today, I see the threshold of a new transformation based on LLMs. Looking back from today, previous transformations are easy to understand. Writing depicts thought with physical pen and paper. Printing quickly generates swarms of documents. Broadcasting instantaneously projects voices and images to crowds separated by long distances. On the computer network, everyone broadcasts like a television station.

Human hands write. Machines print. Electronic devices transmit and receive. Computer networks send and receive packets of encoded information to and from addresses. LLMs survey oceans of data in response to questions, putting an army of not-too-bright researchers at a single person’s beck and call.

LLMs’ armies are not very bright because they are reporters, not creators. ChatGPT literally does not know what it is talking about, but it can summarize the contents of its vast store of information quickly and with surprising accuracy. Still, it cannot judge the truth of its extracts, and it sometimes makes egregious mistakes, called hallucinations, like suggesting glue as a pizza topping. An LLM is like an English major reporting on Einstein: the facts may be straight, but predictions from Einstein’s theories will wildly miss because the reporter has no basis for understanding Einstein.

Does that invalidate journalism and reporting? No. But it explains why careful readers always check a reporter’s credentials: who they are, who they represent, and what they are likely to know. Only with that background can we choose what use to make of their reporting.

The printing press set the stage for Martin Luther. Luther translated the most important book in his life, the Bible, into the common language. Printers using Gutenberg’s invention spread Luther’s translation, and the Christian church was changed forever. In the process, the role of copyists in monasteries dwindled, but theologians flourished.

Futures

Will there be a Luther and Gutenberg for LLMs? No one can be sure, but the possibility is real. Will the LLM “killer app” wreck our lives?

Spreadsheets, the personal computer’s killer app, did not destroy the accounting profession, but word processors ended typing pools. I lived through the disappearance of typing pools. Some typists may have missed long days pounding on a keyboard, but many went on to more interesting and challenging jobs. The same will happen when the Luther and Gutenberg for LLMs appears.

The future is unpredictable, but I can say this with certainty: if LLMs are as important as I suspect they are, a Luther and Gutenberg use of LLMs will appear. I can’t say when, and I know even less about what form its appearance will take or what changes it will engender, but it will come.

We may not recognize LLMs’ Luther and Gutenberg until decades after it comes, but if nothing comes, LLMs are not what I think they are.  

Angry At Google and Its AI

I’m a heavy Google search tool user. I’ve tried several of the alternatives (Bing, DuckDuckGo, and so on) but Google finds references they miss and I appreciate that. Google’s web crawlers, site ranking algorithms, and caching for rapid retrieval have been the best for decades.

Lately Google has added generative AI to their results.

This is not progress. It’s a disaster.

First, people should understand that until AI, Google did not offer answers. Their results were references to sites with content that matched your search criteria; the sites most frequently referenced were placed first with a few lines from each source. The user was left to draw their own conclusions, and who better than the user to draw those conclusions?

Now, Google uses generative AI to extract a summary of the information it gathers from its Large Language Model (LLM). The search results follow the AI summary. This is supposed to make life easier for users.

Unfortunately, Google’s summaries are unreliable trash. Don’t expect Google’s AI results to be factual, only to sound plausible. That’s what generative AI and Large Language Models are all about. Plausibility, not facts.

If plausibility is all you want, AI is fine. But what kind of person are you for whom plausibility is good enough?

I had a great uncle Adlepate (that’s not his real name) who was a great storyteller. According to him, he led an exciting life as a bootlegger running whisky from Canada, and his garden always had the earliest-ripening and largest vegetables. But his stories would never pass fact checking. I soon learned not to count on the truth of Uncle Adlepate’s stories, and I realized that he was a repetitive and colossal bore.

Pay attention to Google AI summaries and prepare to join Uncle Adlepate.

I knew better, but until today, I had begun slipping into paying attention to Google’s summaries. Today, Google revealed itself as Uncle Adlepate.

I am working on turning some of these posts in Vine Maple Farm into a book about life on Waschke Road when I was a kid. One of the posts I intend to include in the book refers to Thoreau’s famous phrase “a hound, a bay horse, and a turtledove.” The context in the original post was a little hazy, so I wanted to reread Thoreau. I looked the phrase up with Google. The summary told me which chapter the phrase was from. The wrong chapter. I wasted—well, waste is a bit strong, reading Thoreau is never a waste—a good half hour of my day on the misdirection.

When I went back to Google and ignored the AI summary, I quickly found the phrase.

So much for Google. I’m looking for a good way to turn the AI summaries off. I’m an old man. I don’t have time for them.

A late addition: I used the uBlock Origin advertisement blocker to turn Google AI summaries off on Firefox. Took more time to find the method than to apply it. Just add “www.google.com###Odp5De” (no quotes) to the “my uBlock filters” tab in uBlock settings.

A later addition: The uBlock Origin block has ceased working. Google AI marches on. I’ve tried Kagi, suggested by Steve Stroh in a comment below. It’s promising, but I will use it more before I dump Google.