Small Language Models vs. Large Language Models in Healthcare

Large language models (LLMs) use AI to answer questions in a conversational manner. Our friend ChatGPT is an LLM. It can respond to a virtually unlimited range of queries because it is built on billions of parameters and trained on enormous amounts of data. Small language models (SLMs) are their pocket-sized cousins.

An SLM specializes in tasks associated with smaller, more focused datasets. For instance, a hospital could leverage an SLM to enhance clinical documentation. How? Train the SLM on the patient’s medical and family history, feed it the day’s vitals, and the program generates a clear summary for the doctor to review before stepping into the room. These summaries are then saved automatically to the patient’s electronic health record.
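As a rough illustration, here is a minimal sketch of that workflow in Python. The `PatientRecord` structure and the `slm_generate` callable are purely hypothetical stand-ins for a clinic’s EHR schema and its locally hosted SLM, not any particular vendor’s API:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PatientRecord:
    # Hypothetical, simplified stand-in for an EHR entry.
    name: str
    history: list[str]               # prior diagnoses, family history, etc.
    vitals_today: dict[str, str]     # e.g. {"bp": "128/82", "hr": "74"}
    summaries: list[str] = field(default_factory=list)

def build_prompt(record: PatientRecord) -> str:
    """Assemble the focused context the SLM was trained to summarize."""
    history = "; ".join(record.history)
    vitals = ", ".join(f"{k}={v}" for k, v in record.vitals_today.items())
    return (
        f"Patient history: {history}\n"
        f"Today's vitals: {vitals}\n"
        "Write a brief pre-visit summary for the attending physician."
    )

def pre_visit_summary(record: PatientRecord, slm_generate) -> str:
    """slm_generate stands in for whatever locally hosted SLM call the clinic uses."""
    summary = slm_generate(build_prompt(record))
    # Save the result back into the patient's record, as described above.
    record.summaries.append(f"{date.today()}: {summary}")
    return summary

# Stubbed usage (the lambda replaces a real model call):
patient = PatientRecord("Jane Doe", ["hypertension", "father: type 2 diabetes"],
                        {"bp": "128/82", "hr": "74"})
print(pre_visit_summary(patient, lambda prompt: "Stable vitals; review BP medication."))
```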

However, such an SLM would be limited to answering questions within its defined scope. It wouldn’t be able to handle subjects outside the data it was trained on, for example, the way a general-purpose LLM can.

If we depend on LLMs to streamline all our tasks, they become jacks-of-all-trades but masters of none. If we leverage SLMs instead, we gain the added benefits of:

  • Computational efficiency and smaller memory requirements (both of which make the software cheaper to run).
  • Minimized data privacy concerns as there’s no need for extensive data transfer to central servers.
  • A smaller, more manageable cybersecurity perimeter.

CB Insights found that SLMs are up to 88 times smaller than ChatGPT. Yet, they still rank in the top 6 in the Stanford Holistic Evaluation of Language Models (HELM), a benchmark used to evaluate language models’ accuracy in specific scenarios. So, if SLMs are measuring up to LLMs, do companies even need one (large) GenAI to rule them all? Connecting to established LLMs via APIs is not the only answer. 
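One way to picture the difference: a small open model can run entirely on a clinic’s own hardware, so patient text never leaves the building. The sketch below uses the Hugging Face transformers pipeline; the specific checkpoint is only an example of a small model and would need to be chosen for your hardware, licensing, and accuracy requirements:

```python
# A small open model running on-premise keeps patient text off third-party servers.
from transformers import pipeline

# Example checkpoint only; a few billion parameters, runnable on modest hardware.
generator = pipeline("text-generation", model="microsoft/phi-2")

note = "Pt reports 3 days of productive cough, low-grade fever, no chest pain."
prompt = f"Summarize the following triage note in one sentence:\n{note}\nSummary:"

result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```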

Small language models are more governed and specialized than LLMs

How to Use SLMs in Healthcare

The healthcare system contends with overworked staff and painfully long wait times. Automating its repetitive tasks has historically been constrained by red tape. Navigating these complexities exposes three pain points:

  • Health information must remain private and accurate, lest someone be held liable for data leaks or medical errors. 
  • Reimbursements are slow and bureaucratic. 
  • Getting on a doctor’s books when switching health plans is something of a battlefield. When I enrolled in a new plan a few years back, I saw a nurse for 14 months because in-network doctors nearby weren’t accepting new patients. 

We can solve these three headaches with tailored SLMs. Healthcare is a good candidate for SLMs because it relies on focused medical data, not the entire contents of millions of miscellaneous articles. There’s less room for error, and a smaller, focused system is easier to secure from hackers, a major concern for LLMs in 2024.

Doctors Can Use SLMs to Up Patient Count 

A doctor can set up a tailored AI to gather patient history, drill down into symptoms, and provide preliminary recommendations. As a bonus, the bot is accurate: it can iron out spelling mistakes, spot inconsistencies in data (two things that delay reimbursements), and replace manual form filling. Another efficiency boost comes from integrating patient data from electronic health records (EHR) directly into reports. If specific templates are required, reports can be generated to fit them.
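A toy example of that template-filling and consistency-checking step might look like the following. The EHR fields and the plausibility threshold are invented for illustration; a real integration would pull from the clinic’s EHR system and apply clinically validated checks:

```python
from datetime import date

# Invented EHR export; a real integration would pull this from the EHR system.
ehr_record = {
    "patient_id": "A-1042",
    "name": "Jane Doe",
    "dob": "1985-04-12",
    "allergies": ["penicillin"],
    "vitals": {"bp": "128/82", "hr": 74, "temp_c": 36.8},
}

REPORT_TEMPLATE = """Visit report ({visit_date})
Patient: {name} (ID {patient_id}, DOB {dob})
Allergies: {allergies}
Vitals: {vitals}
"""

def check_consistency(record: dict) -> list[str]:
    """Flag obviously implausible values before they delay a reimbursement."""
    issues = []
    hr = record["vitals"].get("hr", 0)
    if not 30 <= hr <= 220:                     # illustrative threshold only
        issues.append(f"Implausible heart rate: {hr}")
    return issues

def fill_report(record: dict) -> str:
    """Drop EHR fields straight into the required report template."""
    return REPORT_TEMPLATE.format(
        visit_date=date.today(),
        name=record["name"],
        patient_id=record["patient_id"],
        dob=record["dob"],
        allergies=", ".join(record["allergies"]) or "none recorded",
        vitals=", ".join(f"{k}={v}" for k, v in record["vitals"].items()),
    )

issues = check_consistency(ehr_record)
print("\n".join(issues) if issues else fill_report(ehr_record))
```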

Integrating SLMs with a clinic’s calendar can also simplify appointment scheduling. Imagine a chatbot that schedules appointments, gathers patient information, and prepares patients with personalized recommendations before their visits.
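A stripped-down sketch of the scheduling piece, using an in-memory calendar in place of a real scheduling system:

```python
from datetime import datetime, timedelta

# Toy in-memory calendar; a real integration would talk to the clinic's
# scheduling system instead.
open_slots = [datetime(2025, 7, 1, 9, 0) + timedelta(minutes=30 * i) for i in range(6)]
booked = {}

def book_appointment(patient_name: str, reason: str) -> str:
    """Take the next open slot and attach the intake details the doctor will need."""
    if not open_slots:
        return "No openings this week; we'll add you to the waitlist."
    slot = open_slots.pop(0)
    booked[slot] = {"patient": patient_name, "reason": reason}
    return f"{patient_name}, you're booked for {slot:%A %H:%M} regarding: {reason}."

print(book_appointment("Jane Doe", "persistent cough"))
```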

Furthermore, SLM integration can streamline real-time insurance eligibility checks, while NLP can help flag potential fraud during claims processing. Together, these capabilities reduce the time needed to handle pre-authorization requests and ensure accurate insurance claims are generated promptly, so patients can proceed with treatment without delay.
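At its simplest, the eligibility check reduces to a lookup against payer rules before the claim is drafted. The plan table below is invented for illustration; production systems query payer APIs (for example, X12 270/271 eligibility transactions):

```python
# Invented payer rules; production systems query payer APIs instead.
PLAN_RULES = {
    "PLAN-GOLD": {"covers": {"mri", "physical_therapy"}, "preauth": {"mri"}},
    "PLAN-BASIC": {"covers": {"physical_therapy"}, "preauth": set()},
}

def check_eligibility(plan_id: str, procedure: str) -> dict:
    """Answer the two questions that gate a claim: is it covered, and does it need pre-auth?"""
    rules = PLAN_RULES.get(plan_id)
    if rules is None:
        return {"eligible": False, "reason": "unknown plan"}
    covered = procedure in rules["covers"]
    return {"eligible": covered, "needs_preauth": covered and procedure in rules["preauth"]}

print(check_eligibility("PLAN-GOLD", "mri"))   # {'eligible': True, 'needs_preauth': True}
```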

It’s important to note that the SLM won’t begin prescribing drugs or substituting for a medical professional. Instead, it saves precious minutes during appointments. GenAI speeds up administrative processes, helping doctors manage patients more efficiently and streamline diagnoses.

A Case for SLMs as Effective Communicators

Prompt communication and availability are crucial in healthcare. When doctors or clinic staff are unavailable, SLMs can connect with patients 24/7, on any day of the week, holiday or not. With a bit of code work, SLMs can even become multilingual, enhancing inclusivity in a doctor’s clinic. This feature is particularly valuable for telehealth products that monitor and serve patients remotely.

Effective communication involves not only speaking the patient’s language but also adjusting to their level of understanding. These AI models can translate complex medical jargon into easily understandable information. In the near future, we can expect these models to become culturally fluent, adjusting their communication with patients to their social and cultural context.
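The jargon-translation step can be as simple as a well-scoped prompt. In this sketch, `slm_generate` is a placeholder for whatever SLM interface the clinic runs, and the stubbed response only demonstrates the shape of the call:

```python
def plain_language(slm_generate, clinical_text: str, reading_level: str = "6th-grade") -> str:
    """Ask the model to restate clinical wording at a patient-friendly level.
    slm_generate is a placeholder for whatever SLM interface the clinic runs."""
    prompt = (
        f"Rewrite the following for a patient at a {reading_level} reading level, "
        "keeping every medical fact intact:\n"
        f"{clinical_text}"
    )
    return slm_generate(prompt)

# Stubbed usage; the lambda stands in for a real model call.
stub = lambda prompt: "The lower chambers of your heart are beating out of rhythm."
print(plain_language(stub, "ECG indicates ventricular arrhythmia."))
```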

Doctors Can Use GenAI as Research Concierges 

Tenured, senior university professors all have them: a league of junior graduate students who work as a collective research concierge. Medical specialists could do with this perk, too. Their schedules are packed with patient visits and surgeries, yet they must stay on top of the latest research in their specialty.

Each medical specialization (oncology, dermatology, etc.) could have its own SLM that scans and summarizes the latest research from medical journals. Such a tool frees up hours doctors would otherwise spend buried in research papers, and it could save lives: informed doctors mean effective treatments.
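A per-specialty digest could be little more than fetch, summarize, and format. In this sketch, both `fetch_recent_abstracts` (standing in for a literature source such as a PubMed query) and `summarize` (the SLM call) are hypothetical callables supplied by the clinic’s own stack:

```python
def weekly_digest(specialty: str, fetch_recent_abstracts, summarize, limit: int = 5) -> str:
    """Build a short digest: fetch recent abstracts for a specialty, summarize each one."""
    abstracts = fetch_recent_abstracts(specialty)[:limit]
    bullets = [f"- {a['title']}: {summarize(a['abstract'])}" for a in abstracts]
    return f"{specialty.title()} digest\n" + "\n".join(bullets)

# Stubbed usage; both callables stand in for a literature source and an SLM.
fake_fetch = lambda s: [{"title": "Trial X", "abstract": "Full abstract text..."}]
fake_summarize = lambda text: "One-sentence takeaway."
print(weekly_digest("oncology", fake_fetch, fake_summarize))
```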

The Data Analytics Partner Healthcare Asked for

The healthcare industry is renowned for generating vast amounts of data. It’s projected that by 2025, 36% of the world’s data will be healthcare-related. SLMs can help analyze and uncover patterns within this largely untapped data. This advancement will significantly enhance predictive analytics, enabling better anticipation of potential complications and adverse reactions to medications.

It’s not just predictive analytics that will benefit. Descriptive, diagnostic, and prescriptive analytics will also leverage the capabilities of SLMs. This will result in highly personalized patient care, where healthcare providers can offer tailored treatment options. By training SLMs on lifestyle habits and genetic data, we can enhance preventive care, promote wellness, and ultimately improve quality of life.
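To make the predictive-analytics idea concrete, here is a deliberately simplified risk model trained on synthetic lifestyle features with scikit-learn. Real work would use de-identified clinical data, proper validation, and clinical oversight; this only illustrates the direction:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# Synthetic features: smoker flag, exercise hours/week, family-history flag.
smoker = rng.integers(0, 2, size=n)
exercise = rng.uniform(0, 10, size=n)
family_history = rng.integers(0, 2, size=n)
X = np.column_stack([smoker, exercise, family_history])

# Synthetic outcome: risk rises with smoking and family history, falls with exercise.
score = 0.6 * smoker - 0.05 * exercise + 0.5 * family_history + rng.normal(0, 0.2, n)
y = (score > 0.5).astype(int)

model = LogisticRegression().fit(X, y)
new_patient = np.array([[1, 2.0, 1]])   # smoker, 2 h exercise/week, family history
print(f"Estimated risk: {model.predict_proba(new_patient)[0, 1]:.2f}")
```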


About Diego Espada

Diego Espada, VP of Delivery, helps safeguard the integrity of BairesDev’s development practices through the growth the company experiences each year. Working across all areas of development, Espada ensures that every team applies BairesDev’s stringent methodologies and quality standards.