Artificial Intelligence (AI) is here to stay. Over time, whether we like it or not, it will seamlessly become an integral part of what we do as healthcare providers.
As a self-proclaimed techno-junkie, sitting in front of four screens with three servers daily, I embrace every technological advancement available to make my time and work more efficient. I also use AI at least 20 times a day for research and writing, both personally and for business.
What’s going right…and what isn’t
The upside of AI is information and speed. It can find anything on the web faster than I can.
The downside of AI is accuracy. It finds what I search for, then creates its own version of what it deems the best context for my thoughts. The problem is that, most of the time, the result needs tweaking for accuracy.
If you use AI for any purpose, the caveat is that you MUST read what it created, every single time, every single word, to ensure it says what you want your message to say.
Now, with an article like this, if it is not 100% accurate, so what? Only my reputation as a writer is on the line. Professional notes are another matter: if they are consistently inaccurate because you relied on a third-party notes vendor, precise regulatory requirements mean the errors can cost you your time, license, money or even your freedom.
How I went my own way
I created an electronic medical records (EMR) platform, as I was tired of the garbage our industry was putting out for notes software.
I went to a medical software program that was easily eight years ahead of our industry and wrote a complete chiropractic template that does everything we need, from content to grammar, with computer processes, not AI. It was great.
One year into the project, which was five months ago, the software makers added AI. Loving technology, I immediately dug in.
What I got underwhelmed me.
Non-AI version vs. the AI version
Here’s how the new, AI-enhanced version measured up.
First, the time I spent reviewing and revising each record before printing was more than I would have spent just using the non-AI processes available.
The report structure came out in a different format, not the polished version I had created, which chiropractic requires for E&Ms and SOAP notes, both for compliance and for the reputation that drives referrals.
The AI had made subtle changes in grammar that actually changed my findings.
I deleted AI from the system, as it wasn’t ready for “prime time,” and went back to the almost-perfect non-AI system I had created.
Keep on testing
Since then, I have reviewed multiple AI platforms in many current chiropractic EMR systems and obtained the same results: underwhelming. You enter your findings, then have to ask the AI to provide choices, which takes time and often doesn’t match the report structure.
Ambient mode (in which you talk and it types) is worse. Judging by how the reports read, the AI makes too many clinical decisions and forces you into rewriting, which takes far more time than using the computer’s non-AI processes set up correctly with macros.
It is still the future
AI is the future, but that future is (still) a little way off. Maybe one or two years.
Remember, the AI behind these tools is a “large language model”: it is trained on millions of words, reports and findings in order to synthesize a single accurate report. For AI to achieve consistent accuracy in healthcare, it needs a little more time and more practice on more words, reports and findings.
I have had this conversation with family members and friends in almost every discipline in healthcare. As I did, they all “jumped in,” then jumped back out for the above reasons. AI for note-taking isn’t yet ready for any discipline. Close, but not yet.
If you need further evidence of this truth, look to the legal field. If you are dealing with medical-legal cases and lawyers, for example, one of the first questions opposing counsel asks is, “Did AI help you generate this report?”
If your answer is yes, that fact is held against you because your report was created by an unlicensed programmer and not you. The courts are still leery of the technology and disallow most AI-generated reports.
So, in my opinion, AI is not ready for prime time. Yet.
Mark Studin, DC, FPSC, FASBE(C), DAAPM, is an adjunct assistant professor at the University of Bridgeport School of Chiropractic and an adjunct postdoctoral professor at Cleveland University-Kansas City, College of Chiropractic. He is also an adjunct associate clinical professor at the State University of New York at Buffalo, Jacobs School of Medicine and Biomedical Sciences, Department of Family Medicine. He earned his fellowship in Primary Spine Care through courses certified in joint providership by the State University of New York at Buffalo, Jacobs School of Medicine and Biomedical Sciences, Office of Continuing Medical Education, and Cleveland University-Kansas City, College of Chiropractic. He also runs the Academy of Chiropractic’s Personal Injury Program. He can be reached at 631-786-4253 or drmark@academyofchiropractic.com.