Imagine you’re an IT professional for a large health system. A resident physician calls and says he must urgently send in a patient prescription, but he’s locked out of the system and needs a password or account reset. You recognize his voice. He provides his ID number and other credentials for verification. You restore his access so he can log in – only, it wasn’t the doctor you just spoke to. It was a cybercriminal.
Increasingly, threat actors are using social engineering – the practice of psychological manipulation that tricks people into sharing sensitive information or granting unauthorized access to accounts – as their method of choice to attack health systems. Advances in AI and increasing accessibility to AI deepfake technology for voice, video and images have rapidly lowered the barrier to entry for cybercriminals.
These attacks most often serve as the entry point for malware, including the ransomware currently targeting the healthcare sector. Eerily accurate voice impersonations have become an effective ingredient in fraudsters’ complex cocktail of attack methods. In fact, since the launch of ChatGPT, vishing attacks have increased by 1,265%.
These attacks are primarily aimed at extracting ransom payments, but they can also compromise medical records and company data, cost millions in recovery and litigation, and have deadly consequences for patients by interrupting care. Meanwhile, healthcare companies are largely unprepared for the onslaught. To prevent a looming patient safety crisis, hospitals, medical clinics, and healthcare organizations must take immediate action to strengthen cybersecurity protections against the latest form of social engineering: AI voice deepfakes deployed in voice phishing, or vishing.
AI Voice Impersonation: Vishing Is the New Phishing
Arguably the most sinister form of social engineering, vishing involves taking snippets of someone’s voice and using AI to impersonate that individual convincingly over the phone. A recent global survey from McAfee reported that nearly half of adults say they share voice data online or in voice notes up to ten times a week. These public voice recordings are an easy target for malicious hacking, theft, or sharing. The same report identified more than a dozen free online AI voice cloning tools that are simple to use; in one instance, only three seconds of audio was needed to create an 85% match to the original voice. With a bit of tinkering, the match can be pushed to 95%.
Black Basta, a hacking group deemed to be a major threat to the healthcare industry, has extorted more than $100 million from its victims. As one of the most active ransomware-as-a-service (RaaS) threat actors today, Black Basta has set its sights on the healthcare sector, claiming responsibility for the recent attack on St. Louis-based Ascension’s hospital system. The group recently began using a dual-threat approach that combines spear phishing (a type of phishing aimed at a specific person or group, often with information of interest to the target) with vishing. Black Basta’s latest strategy is to send multiple spam emails to a group of people in a healthcare organization, and then call the victims posing as members of the organization’s IT staff. These faux IT personnel offer to help resolve the spam issue, a “fix” that typically involves clicking a malicious link or downloading software.
And they’re not the only threat. During the height of the COVID pandemic, bad actors posing as employees of Michigan-based Spectrum Health used vishing to call patients in search of their member numbers and other protected health information. The calls appeared to be from a legitimate Spectrum Health phone number, rendering the patients’ caller ID useless.
The similarity of these tactics to those employed by the Scattered Spider threat cluster – also known as UNC3944, Scatter Swine, 0ktapus, or Muddled Libra – is uncanny. Known for their aggressive phishing, MFA bombing, and SIM swapping strategies, Scattered Spider is notorious for impersonating IT personnel to manipulate customer service staff into compromising security protocols. The group’s audacious attacks, including the takedown of MGM Resorts with BlackCat/ALPHV ransomware, underscore the critical need to protect IT help desks and customer service departments against social engineering. With the rising use of AI in social engineering attacks, the already high bar for remote identity verification rises even further beyond the reach of underprepared healthcare organizations.
Patient Health and Safety Hang in the Balance
In a 2023 Ponemon Institute survey of health IT professionals, nearly half said their organizations had experienced a ransomware attack in the past two years and 45% said these attacks caused complications in medical procedures; 21% said ransomware attacks had an adverse impact on mortality rates.
One nurse at an Ascension hospital in Baltimore reported having to speak to physicians on the phone or in person before placing IVs in patients after the breach. At one Ascension medical center in Austin, glucometers that measure blood sugar became useless because they require scanning a patient’s wristband.
Even if not directly attacked, hospitals that are adjacent to targeted facilities are often affected. A recent study indexed in the National Library of Medicine’s PubMed database found that nearby hospitals handling an influx of additional patients may face “resource constraints affecting time-sensitive care for conditions such as acute stroke.” Yet research from Enea reveals that 76% of businesses lack sufficient voice and messaging fraud protection.
How to Fortify Defenses Against Social Engineering Attacks
The following recommendations from the World Economic Forum, the American Hospital Association, and the Department of Health and Human Services’ Health Sector Cybersecurity Coordination Center (HC3) sector alert are essential for healthcare organizations, but they should be considered only a starting point; on their own, they cannot prevent attacks from groups like Black Basta and Scattered Spider.
- Train employees to recognize and avoid vishing. Employees should be alert to vishing techniques and have mitigation strategies in place to avoid them, such as the use of a code word to verify the identity of callers. Organizations should also run regular vishing “drills” to test their readiness in combating vishing. Consider, though, how realistic it is to expect healthcare staff to match the sophistication of the world’s top hacking groups.
- Revalidate all payer website users. Revalidation will likely trigger a surge of account lockouts and help desk tickets, which opens the door for a threat actor to impersonate a locked-out employee and take over the account. To handle this surge safely, ensure your help desk and customer service agents are equipped with strong identity verification tools.
- Require in-person requests for highly sensitive transactions. In-person requests are unsustainable at scale and, even when possible, cause care delays. Automated visual verification solutions offer the security of face-to-face meetings combined with the convenience of remote video calls.
- Adopt the zero-trust model. Reducing the number of individuals who have access to healthcare data resources can mitigate risk. A zero-trust approach means the enforcement of least privilege policies, granting only the minimal credentials required for specific tasks. Every level of data is accessed strictly on a need-to-know basis, minimizing the opportunities for unauthorized access. But, again, it only takes one person to fall victim to a scam.
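The out-of-band verification idea behind the code-word and help-desk recommendations above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names and the in-memory directory are assumptions, not any vendor’s API): the help desk never trusts credentials recited by the caller – which a visher can obtain and recite just as easily – and instead challenges a device that was enrolled before the incident.

```python
# Hypothetical help-desk verification flow: unlock an account only when the
# caller proves control of a device enrolled BEFORE the support call, rather
# than by reciting credentials (IDs, birthdates) that a visher could know.
import secrets

# Illustrative employee records; a real system would query an identity provider.
DIRECTORY = {
    "dr_jones": {"enrolled_device": "+1-555-0100", "locked": True},
}

def issue_challenge(user_id, directory):
    """Generate a one-time code destined for the pre-enrolled device.

    Returns the code (for the trusted delivery channel), or None if the
    user is unknown. The code is never read aloud to the caller.
    """
    record = directory.get(user_id)
    if record is None:
        return None
    code = f"{secrets.randbelow(10**6):06d}"  # unpredictable 6-digit code
    record["pending_code"] = code
    # In practice: push/SMS the code to record["enrolled_device"].
    return code

def verify_and_unlock(user_id, supplied_code, directory):
    """Unlock only if the caller reads back the code from the enrolled device.

    The pending code is consumed on every attempt (single-use), and the
    comparison is constant-time to avoid leaking partial matches.
    """
    record = directory.get(user_id)
    if not record or record.get("pending_code") is None:
        return False
    if secrets.compare_digest(record.pop("pending_code"), supplied_code):
        record["locked"] = False
        return True
    return False
```

The design choice worth noting is that the secret travels help desk → enrolled device → caller, never caller → help desk; a voice clone alone, however convincing, cannot complete the loop without physical control of the doctor’s phone.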
Since healthcare is a fundamental component of every nation’s critical infrastructure, just like energy and water, it’s time to fortify defenses and prevent a coming patient safety crisis. Emerging tools such as AI-powered identity verification solutions, mobile cryptography, machine learning, and advanced biometric recognition can provide essential support for IT help desks and customer service departments, bolstering their ability to combat identity fraud and prevent impersonators from being authenticated.
Healthcare networks must be diligent in educating IT staffers, employees, and visiting partners alike to stay on high alert for these threat actors and their sophisticated impersonation tactics. Advanced ID verification methods that account for cybercriminals’ AI-scaled efforts will be an essential safeguard for every healthcare facility.
About Aaron Painter
Aaron Painter is a global AI deepfake and cybersecurity expert. Driven by his personal experiences with online fraud and identity theft, Aaron founded Nametag to create the next generation of account protection. As CEO of Nametag, Aaron helps businesses prevent cyberattacks by stopping sophisticated threat actors, even those using AI deepfakes. Prior to his tenure at Nametag, Aaron served as CEO of London-based Cloudreach, a Blackstone portfolio company and the world’s leading independent multi-cloud solutions provider. He also spent nearly 14 years at Microsoft in various leadership roles, most notably VP and GM of Business Solutions. He is the author of the best-seller LOYAL, a book that details the key to leadership: fostering a culture of listening. In addition to being an active speaker, advisor, and investor, he is a Fellow at the Royal Society of Arts, a Founder Fellow at On Deck, a member of the Forbes Business Council, and a senior external advisor to Bain & Company.