Why Medical Software Manufacturers Avoid Strict Regulations, and What It Means for Patients

Lidziya Tarasenka

Clinical Editor at Andersen

Healthcare
Jun 24, 2025
Reading time: 9 mins

    The market for medical software is booming, but behind this growth lie mounting risks. In the race for hypergrowth, both startups and large companies are increasingly launching products before fully validating their safety and effectiveness. This approach lets them grab market share quickly, but it can also threaten patient safety and lead to serious regulatory trouble.

    Today we’re speaking with Mike Pogose, Director of Quality Assurance and Regulatory Affairs at Hardian Health. We’ll explore why implementing robust standards and ensuring transparency throughout the software lifecycle isn’t just bureaucratic red tape; it’s literally a matter of safety.

    Andersen: In one of your LinkedIn comments, you said: “Ones that bite the bullet, grasp the nettle, and take up every other such metaphor to get their scribe software appropriately certified as medical devices will be the ones creating the regulatory mold around their business.” What did you mean?

    Mike: Many ambient scribe manufacturers promote the benefits of their software in lessening the administrative burden on clinicians, often citing two features: clinical coding and time saving. At the same time, I believe that manufacturers are avoiding regulation. Not evading, but avoiding regulation because of grey areas in legislation and in official guidance. For instance, clinical coding could be seen as a clinical function, not an administrative one, as incorrect coding could lead to delays in downstream diagnostic or therapeutic procedures, or to the wrong procedure being ordered and performed.

    Time saving implies that the clinician may not check the scribe’s output, which may contain errors and could again lead to incorrect or inappropriate downstream care. The companies that embrace the software as a medical device regulatory pathway will be in a stronger position if and when the grey areas in the regulations of the various jurisdictions, such as the UK, the EU, and the USA, are made black and white.

    Andersen: What are the biggest problems in the software as a medical device and AI as a medical device industry?

    Mike: Underfunding of startup manufacturers. Possibly it’s caused by the naivety of both the founders and the investors, and by investors I also mean grant-awarding bodies, whether they award government or charitable grants. Software may seem like an easy, low-investment route to a monetizable product. Or, in the case of grants, something that could eventually find use in national healthcare systems.

    But I repeatedly find that unless this conversation happens early on with the founders and/or the investors, the time it takes to build the software is not well understood or allowed for in the funding runway.

    First, the manufacturer needs to implement the software according to the standards and regulations for medical device software lifecycle management. These standards are there to demonstrate that the software is safe, effective, and cybersecure.

    Second, they need to generate adequate preclinical and clinical evidence to demonstrate that the software is actually safe, effective, and cybersecure in the hands of real-world users, in real-world environments.

    And third, the documentation showing that it’s safe, effective, and cybersecure needs to be presented to the regulatory authorities for their sign-off.

    That’s why I’m saying underfunding — not having enough runway to cover all of these areas.

    Andersen: More and more ambient clinical documentation tools are entering the market without MDR or equivalent certification. What are the short-term benefits and long-term risks of such an approach?

    Mike: In my view, the short-term gain is growth through blitzscaling. That’s a term coined by Reid Hoffman, the co-founder of LinkedIn; I came across a really good synopsis of the book he wrote about it. What I see is that the manufacturers are making business strategy decisions and committing to them, even though their confidence in the safety, effectiveness, and cybersecurity of the software is not really 100%, and the regulatory position is not fully understood. Because there are so many of them trying this now, they are vying for market share, so it’s more urgent and important to them to get on the market and into use, carving out a piece of it for themselves. So this is the classic argument of blitzscaling: get to market quickly, do something, and figure out the consequences afterwards.

    Andersen: Our fellow physicians often share how these tools actually work in practice. One story stood out: a few years ago, a very fancy system with lots of features was introduced at a clinic, and now only two forms are left; everything else was gradually scrapped. Maybe this is an example of FOMO (fear of missing out), which affects not only ordinary people but also the CEOs of large companies trying to stay ahead no matter what. So does that explain blitzscaling?

    Mike: Yes. And it’s not only my opinion, or Reid Hoffman’s. That’s what it really feels like out there in the field, and I honestly think that’s a lot of what it comes down to.

    If you look at investors, what a good investor will look for is preferably hypergrowth: something that grows very, very quickly. Because in one way, this is all a kind of gamble, with different bets. They don’t need every startup to return something; they just need one to take off, and then the rest don’t matter.

    All of this makes me more and more convinced that blitzscaling isn’t just a buzzword; it’s an accurate description of what’s going on. The generic idea of the blitzscaling strategy is that you’re not 100% sure what you’re doing. And because it’s healthcare, what you need to be doing is something that’s safe, effective, and cybersecure. And actually, in the end, that’s what the regulations are for: to protect consumers and to protect public health.

    Andersen: Why do medical device regulations exist in the first place?

    Mike: Because people’s lives depend on it. Medical device regulation was born out of drug regulation. You could say that the “mothership” of pharmaceutical regulation was thalidomide. It was a drug (well, maybe I’m a bit older than you) that was prescribed in the late 1950s and early 1960s for certain conditions, including to pregnant women. And then it turned out it caused horrific deformities in the limbs of babies.

    The core issue was: there wasn’t enough testing. No one properly understood its potential effects on pregnancy or on the fetus. And that led to a massive scandal and, eventually, to regulatory reform — because the drug was being prescribed all over the world.

    It’s because of cases like that that we now have this clear understanding: the safety and effectiveness of drugs and medical devices must be regulated.

    Now, back to software: say, a digital therapeutic, something that helps you manage a mental health condition instead of taking a drug. That’s still therapy. It has safety considerations, effectiveness considerations, and, because it’s digital, cybersecurity considerations too.

    It may sound like science fiction, but it’s entirely realistic that a company could suffer a ransomware attack, and someone could push out a malicious software update that harms patients.

    Or someone could poison the training data — insert something malicious, whether it’s medical, political, or otherwise — and now the AI starts giving consistently wrong results. In healthcare, that kind of vulnerability or delay is simply unacceptable.

    Andersen: You’ve repeatedly emphasized that risk management for software as a medical device should be use case-driven. Could you give a concrete example where misunderstanding the intended use led to regulatory misalignment?

    Mike: Sure, I can give you an example we often use in presentations or training. It’s not software, but it’s very relatable — let’s talk about a wooden tongue depressor.

    When you go to the doctor, and they want to look in your throat, they use a wooden stick to push your tongue down. That piece of wood is used once and then thrown away. Yes — like an ice lolly stick, but wooden.

    The intended use of that device is to depress the tongue. But it was found that people started using it differently — as a splint for a child with a broken leg or arm, because the size was just right for that. They would tape it to the limb. But the problem is: it’s made of wood, and now it’s in a warm, damp environment, potentially exposed to airborne bacteria. Plus, fractures often come with open wounds.

    As a result, the child could end up with a bacterial infection — from a wooden tongue depressor that was originally intended only for pressing down tongues.

    That kind of use is called off-label — not according to the intended purpose.

    So now, you need a newly regulated tongue depressor — one that clearly states: “Do not use for splinting fractures. Only to be used for depressing tongues. Discard after single use.”

    Andersen: Modern regulations often rely heavily on the correct application of the rules for software, with safety frequently managed through warnings and instructions. What are the risks for patients or healthcare workers when trustworthiness depends more on user behavior than on design?

    Mike: If we take, for example, the EU medical device regulation, it outlines three priorities for risk mitigation. These priorities come from the international standard on risk management for medical devices.

    The first priority is to make the device inherently safe by design. That’s what the standard says.

    The second priority — if you can’t make something inherently safe by design — is to implement protective measures on the device itself or in the manufacturing process. For software, the manufacturing process means the software development lifecycle — the standards that have to be applied to ensure the software is written, deployed, and tested to be safe, effective, and cybersecure.

    If neither of those two approaches is possible for a particular feature, then comes the third priority: provide information for safety and/or training to users.

    You could say that the third priority is potentially overused, because it’s easier than the first two. You’ll often see disclaimers like “This is not to be used for medical purposes” written on the device itself or in the terms of use — the screen you must click through before you can use the software.

    But in my view, this is just avoidance of the issue.
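
    To make these three priorities concrete in software terms, here is a minimal, hypothetical Python sketch contrasting a protective measure built into the device itself (the second priority) with a warning that relies on user behavior (the third). The dose limits, function names, and values are invented purely for illustration; they are not drawn from any real product, standard, or clinical guidance.

```python
# Hypothetical illustration of the risk-control hierarchy for medical device software.
# Priority 2: a protective measure built into the device itself - the code refuses
#             to pass on a value outside a validated range.
# Priority 3: information for safety - a warning the user is free to ignore.
# All limits and names below are invented for illustration, not clinical guidance.

SAFE_DOSE_RANGE_MG = (0.5, 10.0)  # illustrative limits only


def suggest_dose_with_protective_measure(dose_mg: float) -> float:
    """Priority 2: the software itself blocks an out-of-range value."""
    low, high = SAFE_DOSE_RANGE_MG
    if not low <= dose_mg <= high:
        raise ValueError(f"Dose {dose_mg} mg is outside the validated range {low}-{high} mg.")
    return dose_mg


def suggest_dose_with_warning_only(dose_mg: float) -> float:
    """Priority 3: the software merely warns and relies on the user's behavior."""
    low, high = SAFE_DOSE_RANGE_MG
    if not low <= dose_mg <= high:
        print(f"WARNING: {dose_mg} mg is outside the usual range {low}-{high} mg.")
    return dose_mg  # the risky value still goes through


if __name__ == "__main__":
    print(suggest_dose_with_warning_only(50.0))    # prints a warning, then 50.0
    # suggest_dose_with_protective_measure(50.0)   # would raise ValueError
```

    The point of the sketch is simply that the second priority removes the hazard in the design itself, while the third leaves the hazard in place and shifts the burden onto the user.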

    Andersen: There’s data showing that 76% of US physicians use ChatGPT for clinical questions — meaning real-life medical cases.

    Mike: If they’re using the consumer version of ChatGPT — the one you can download and access via web or the app on your phone — then, if you think about it, around 99.5% of people never read the license agreement before ticking “I accept” and starting to use it. And that agreement clearly says: “This is not to be used for safety-critical purposes, including medical.”

    To be used in a medical setting, you’d need the professional version of ChatGPT — the one a company pays for, which is essentially paying for OpenAI’s product liability insurance. But the consumer version — and I suspect the same goes for similar tools like Gemini — comes with the same kind of restrictions in its terms of use.

    Andersen: Yes, but who really cares?

    Mike: Well, this is a whole other topic: liability. Who is responsible if ChatGPT says something wrong and the clinician acts on it? In UK law, that’s very clearly established. And now the European Union has updated its product liability legislation, with a whole section about software and its use.

    But in the end, it’s still the clinician’s responsibility. They may look things up using ChatGPT, but if something goes wrong, they can’t sue OpenAI, because they ticked the box that says: “This is not to be used for medical purposes.”

    Andersen: Continuous monitoring is important to ensure that AI tools remain safe and effective over time. How well are today’s regulations prepared for the future, and how can we make them future-proof?

    Mike: In the UK and EU, there’s quite a strict approach to what we call post-market surveillance — that is, continuing to monitor safety, effectiveness, and cybersecurity after the product has gone to market. This includes watching out for drift and bias in AI performance under real-world conditions.

    Because things change — the training data may gradually become misaligned with the demographic of the population in the region where a particular tool is being used, or with clinical practice, or something else entirely.

    Now, if the AI system has gone through a proper regulatory route and post-market surveillance is mandated, or if the manufacturer even says, “We will have a sensible post-market surveillance regime” — that’s already a good sign.

    But it’s not just about the regulator — it’s also about the customer. For example, a hospital buying the product: what do they expect? What’s written in the contract?

    As for the US regulations, they’re not so strong yet, let’s say. There is still room for improvement.
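
    One common way to watch for the kind of input drift Mike describes is to compare the distribution of a model input seen in production against the distribution seen at training time, for example with a population stability index (PSI). The sketch below is a minimal, hypothetical Python example; the chosen feature (patient age), the sample sizes, and the 0.2 alert threshold are illustrative assumptions, not regulatory requirements or part of any named standard.

```python
# Minimal, hypothetical sketch of post-market drift monitoring for an AI tool.
# Assumption: we can periodically sample one model input (here, patient age)
# from production and compare it with the distribution seen at training time.
import numpy as np


def population_stability_index(baseline, recent, bins: int = 10) -> float:
    """Compare two samples of the same feature; a larger PSI means more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Clip to avoid division by zero / log(0) on empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_ages = rng.normal(55, 12, 5_000)    # population the model was trained on
    production_ages = rng.normal(67, 10, 1_000)  # older population now using the tool
    psi = population_stability_index(training_ages, production_ages)
    if psi > 0.2:  # 0.2 is a common rule-of-thumb alert threshold, not a regulation
        print(f"PSI = {psi:.2f}: significant drift - investigate before continued use.")
    else:
        print(f"PSI = {psi:.2f}: input distribution looks stable.")
```

    In a real post-market surveillance regime, checks like this would sit alongside complaint handling, performance audits against clinical outcomes, and contractual reporting obligations to the customer, rather than replacing them.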
