Comment – March 2026

I’m wary of the coming impact of artificial intelligence on just about everything we do. There is potential for help as well as harm. No doubt it will deliver both.

A friend of mine told me that using ChatGPT had provided very useful information she had not found elsewhere. I decided it was time to try it before I knocked it.

Over the last few weeks I posed three questions to the chatbots: one about international tax law, one about domestic law, and a third about the ramifications of a health diagnosis that had been confirmed by medical professionals.

Within ten seconds of submitting my questions, answers began to appear on my screen: thorough, helpful responses, all couched in broad caveats about not making any decisions based on the information without checking with a professional. No problem there; I was asking for a friend…

It became clear that the robot advisor wanted to continue our discussion, asking more questions, suggesting next steps and offering to devise a game plan of sorts. It seemed eager for the conversation to continue. The tone was friendly, helpful and supportive, as if there were genuine concern behind the automated responses. I was reassured that I was doing the right thing and that more help was available anytime. Sometimes the responses bordered on gushy.

Reports have emerged about people developing “relationships” with their chatbot buddies, who, unlike real friends, are always agreeable and available. Perhaps it’s no wonder these tools have gained traction since the pandemic.

Maybe it is time to go back to the office a few days a week, or to join that euchre, softball or theatre group where time is spent with real people whose reactions are sincere and spontaneous. They offer honest reflections of ourselves instead of reinforcing our own self-perceptions. Those reflections are not as flattering, but they are a lot more helpful.

Submit your letters to the editor: thetimes@nexicom.net.