In late June, Doody Consulting held a Luncheon of Gratitude for Chicago-area clients and friends from the 15 years Dan Doody and I have worked as a consulting practice. In addition to thanking these people for their support, the occasion offered a chance for me to reflect on my 48 years in medical publishing. The title of the talk was “Where We’ve Been, Where We Are, and Where We’re Going.” This post, the first of three adapted from that talk, discusses artificial intelligence as one element of where we’re going.

A lot of people think that artificial intelligence is the next big thing. In the last two or three years, vendors have cropped up in the scholarly publishing world using the .ai domain, and others offer services that include natural language processing, machine learning, and so on. From where I stand, AI is clearly on the minds of the giant scientific/medical publishers. When they are not launching journals that they own, they are bidding lavishly for the rights to publish journals and even books published by societies. Acquiring access to content is easy to understand in the AI context – the more data you have, the more data you can feed into a machine learning system, and the more intelligent these systems can become.

So what is AI likely to mean to health practitioners, and to their patients? I think, within a few years, AI is going to be a reasonably big deal. By that, I mean that I see AI having an impact comparable to medical imaging – it’s going to be a critical component of diagnosis and treatment of many kinds, and in some areas it will be ubiquitous. But doctors and other health professionals are still going to have the central role in treating patients and communicating with them.

And what will AI mean to publishers and other society staff members in their professional work over the next few years? First and maybe most importantly, everyone has an obligation to learn what the heck AI is. That means going to meetings and workshops where experts will attempt to explain AI and its potential applications. As a practical matter, publishers and society staff should also make a point of talking to the vendors who claim to be applying AI to medicine, or AI to information retrieval. If they’re exhibitors at a meeting, it’s worthwhile to chat them up, read their literature, and let them give a quick demo at the booth. From time to time, organizations should also give an AI vendor an hour or so of quality meeting time in the office so that a group of colleagues can listen to their pitch. These meetings will also provide a good opportunity for society staff and publishers to pitch the vendors on major challenges, and maybe they’ll come back with an application that makes a meaningful difference.

At the same time, I’d argue that AI won’t radically change society staff members’ interactions with their members, particularly the hard-working volunteers. While many clinicians, particularly younger ones, now grasp the power of AI, I think that most of them will be more comfortable if they know that humans – specifically, their fellow society members – are evaluating the evidence.

The sweet spot for the future may be a hybrid – a committee of society volunteers interacting with artificial intelligence. The volunteers can help assemble the petabytes of information that will “teach” an AI system, then continually evaluate the output of the system until it’s genuinely useful to their fellow members. This will be a lot of work, but I think most societies have a critical mass of members who will be willing to do it.

Whatever the future of AI in medicine, one thing is almost certain to be true: Medical care will still involve people taking care of, communicating with, and often comforting other human beings.