In late January, speaking to Therapedic manufacturers at the Las Vegas Market, I presented on how to re-humanize generative AI. My goals were to de-mystify and de-stigmatize this powerful, controversial collaborative tool that is creeping into our work whether we want it to or not.
Essentially, the presentation featured steps toward AI literacy, which Anthropic’s genAI bot Claude defines as “empowering individuals to be informed, thoughtful and effective users of AI technologies across personal, professional and societal contexts. It helps ensure people can harness the benefits of AI while also navigating its complexities and limitations.”
I couldn’t have put it better myself, which is part of the point here. It’s also not a surprise, because genAI runs on neural networks loosely modeled on the human brain.
At the Therapedic annual meeting, we were interested specifically in LLMs, or Large Language Models, because they can generate their own text by connecting words, letters and symbols of all sorts in ways similar to humans. These LLMs do lateral “thinking,” or processing, creating new-ish things by recombining what they “know” or can access. This is essentially what we do with our own brains, only LLMs have “read” and can “remember” basically everything digitally published.

To more fully appreciate how genAI is fundamentally different from, and vastly more powerful than, say, Google search, consider the interface of most LLMs, such as ChatGPT, Claude and Copilot. This interface is basically identical to that used for chat, texting and instant messaging: a text box, a “send” button, and that’s really it. I’m sure you’ve heard of the duck test. If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck. Well, if an interface looks like texting, behaves like texting and produces results like texting, it’s probably just like texting. And texting is simply the digital version of conversation.
This simplicity is the key to unlocking the potential of LLMs in the workplace and even at home.
Just talk to it
Now imagine that on the other side of the interface is a human, a companion, and you are on your way to AI dexterity. This re-humanizing turns that LLM into, depending on the need, a life coach, job coach, writing coach, HR consultant, librarian, paralegal and even a marriage counselor, to name just a few. All of these “people” are permanently on staff. They never ask for vacation or sick days, they don’t take smoke breaks and they cost pennies on the human resource dollar.
To quote the singer Seal, “It’s here, and it’s part of our evolution. You can’t fight it, and you really can’t see it as this enemy that’s going to be the end of mankind. I started out by thinking it was a machine, but once I started to relate to it as though I were talking to a person, this incredible collaboration started.”
So, this was the chief takeaway of the presentation: Using genAI is mostly about having a human conversation, and that conversation produces good, sometimes great, results. Even the use of etiquette, such as including “please” and “thank you” in your queries, produces better results.
Another corollary is that integrating AI into the workplace is less about replacing humans than about collaborating with them to get more done faster, because genAI relies on human creativity, human expression and human skill. Perhaps the key difference is that LLMs can extrapolate in ways and on a scale that we cannot.
Another takeaway from the session: All genAI chatbots are not created equal, and as with good bourbon, you get what you pay for. Different generators are better at different tasks, and the paid versions are all significantly better than the free versions their makers offer to get you using their products.
For the kind of work I routinely do, Semantic Scholar is a good choice, because it can quickly identify research relevant to a particular query and provide summaries of that research. Anthropic’s Claude.ai is another good choice for me because it’s especially good at creative work, collaborative projects, computer coding, multilingual processing and developing training materials.
My employer, Berry College, furnishes us with Microsoft’s Copilot, in part because our version is firewalled off from the data Copilot uses for training. In other words, what we put into the query box doesn’t end up getting sucked into Copilot’s “brain” to use for future queries, so copyrighted material doesn’t get compromised. But I wouldn’t grade Copilot as highly overall as either Claude or ChatGPT.
The secret sauce here, regardless of which generator you choose, is prompt engineering. Making the most of AI generators is about crafting the best prompts, and like any new skill, this takes time, trial and error, collaboration with those more advanced and just plain curiosity.
Toward better prompts
I’m getting better at prompt engineering with each focused session, but I know I have a long way to go. The incentive, at least for me, is how quickly and substantively even a slight change in a prompt can improve the result. The generators themselves “want” better prompts, so I often include a prompt asking the engine for tips on crafting a better prompt on the specific topic or task.
I also recently picked up that ChatGPT is really good at working with Adobe PDFs. For example, I asked ChatGPT to summarize a long journal article into five key points and to reverse-engineer research questions that the article answers. I dragged the article into the interface and batta-bing, batta-bang, done.
Q-and-A is often the best part of these sorts of presentations, and the Therapedic meeting was no exception. We had an attorney in the room, and naturally he wanted to know why genAI has what are called hallucinations. There is a growing roster of lawyers who have submitted court filings with hallucinated citations. The ethics of using AI to write a court motion notwithstanding, why do the AI bots simply make up stuff?
I don’t have a definitive answer, but I know that, like a lot of American voters, the bots are terrible at distinguishing truth from fiction, fact from mis- and disinformation. And all LLMs are ever doing is statistical prediction, stitching together bits of text based on patterns in their training data. These bits are called tokens. Because they rely entirely on data and information we created, you and me, ordinary folk, error is quite predictable, because we make stuff up, too. There is a ghost in the machine. There is bias, error and a very human, or fallible, sense of the truth.
While not perfect, generative AI is going to make us all more productive. Will it also be disruptive? Absolutely. We’ve lived through this already with the World Wide Web. And there have been other precursors: the camera, the phonograph and the spreadsheet. We thought the advent of the spreadsheet would eliminate the need for accountants. We believed that the camera would put a lot of artists out of business.
And like the camera, the gifts of genAI include speed, a new sort of knowing, the opportunity for creativity and even a new way of seeing.
I’ll close with a not-so-bold prediction: The speed of work will keep getting faster. Those who begin learning how to use genAI and intelligently parse its results will have a distinct advantage over those who for whatever reasons stick to their more traditional methods of iteration, information retrieval, summarization and analysis.