Artificial intelligence in healthcare

Artificial intelligence (AI) has been playing a critical role in industries for decades. As costs reduce and more test cases emerge, AI is now beginning to take a leading role in transforming healthcare. However, there remains a prevailing uncertainty around these technologies and what they mean for the future of health professionals. In this panel, we delved into the world of AI, machine learning and robotics to see what’s possible and uncover what healthcare professionals need to know about this emerging technology.

Meet Our Panellists

  • Dan Draper, VP Engineering, MedicalDirector (Moderator)
  • Professor Anna Peeters, Director, Institute of Health Transformation, Deakin University
  • Dr Ronald Shnier, Chief Medical Officer, I-MED Radiology Network
  • Kate Quirke, Chief Executive Officer, Alcidion
  • Dr John Lambert, Chief Medical Officer, Harrison.ai

Key insights

Over the last few decades, we’ve witnessed the rise of AI technology across a wide variety of industries, with investment pouring in from private and government entities alike. The Australian Government announced in December 2019 that it would invest AU$7.5 million in research into the use of AI in healthcare.

As the healthcare sector becomes increasingly digitised, it’s only a matter of time before this technology becomes mainstream. But there’s still a fair amount of uncertainty around when and how this might happen, and many healthcare professionals are asking one important question: “Will AI ever completely replace doctors?”

Dr John Lambert, Chief Medical Officer at Harrison.ai, says it’s unlikely, at least in the foreseeable future.

“We are so far away from a general purpose AI, let alone an AI that could somehow replicate the experience of going to a caring professional,” he said.

“I’ve worked in technology and healthcare for an awfully long time and one thing that I remain as convinced of now as when I started, is that nothing is going to take away human touch, the healing hands, and emotional connection of healthcare professionals.”

There was a consensus among the panel that AI technology will augment, rather than replace, healthcare professionals, but that it needs to be a collaborative process, with the intention of improving the quality, safety and equity of healthcare.

“I believe there’s more value in combining the human and the computer to create a better outcome than either one alone can provide. But we need to make sure that the AI solutions are clinically sensible and are solving problems that clinicians, and more importantly patients, actually need solved”, Dr Lambert said.

AI is creating more health equity

Kate Quirke is the CEO of Alcidion, an Australian health informatics company. She said AI can be used to improve health outcomes and assist doctors in providing more effective and efficient treatment.

“Every doctor will have thousands of algorithms in their head that they know that they need to check for when they’re treating a patient. If you can assist by augmenting that and actually sorting through all of the noise that goes on, then you’re definitely a step towards improving outcomes.”

The panel discussed a few encouraging use cases where AI is being used to augment healthcare. Some notable examples included using AI to pick up irregularities in chest x-rays, using natural language processing to populate electronic health records, and combining AI and robotics to operate within sub-millimetre range and adjust for tremors in neurosurgery.

However, AI is not limited to a clinical setting, Kate explained.

“I think there’s a perception that the technology we’re using needs to play a role purely in a clinical setting, but that’s not necessarily the case.”

The panel discussed some promising applications of AI including using data to predict health outcomes, speeding up traditional research methodologies, and using supermarket spending data to map population dietary trends.

Dr Ronald Shnier, Chief Medical Officer at I-MED Radiology Network, said one of the most important benefits AI will deliver is the globalisation of healthcare.

“Australia has access to world-class healthcare but there are many jurisdictions in the world where there is no medical expertise on the ground. To have an algorithm that can at least sort out the serious from the non-serious, where there’s no infrastructure to do that, that could be a huge service”, he observed.

Dr Shnier believes AI will do to medicine what the internet did to communication. “The internet is this great infrastructure and people sometimes use it well, and sometimes they use it terribly. Similarly, AI will have a great ability to deliver amazing value, but you need safeguards,” he warned.

Kate Quirke added that ensuring everyone can access the benefits of AI is critical to creating equity in healthcare. “We’ve already got enough barriers. Let’s not create yet another barrier that’s dependent on whether you’re technology literate or not.”

Dr Lambert agreed that AI has both incredible potential and significant risks. He reasoned that risks can be mitigated by using one of the fundamentals of medicine: research.

“This is no different to a new surgical procedure or a new drug. We’re going to have to implement these tools in clinical environments, measure the impact, and be wise about the impact of usability.”

“And if you test it that way, then a lot of these risks will be identified and can then be mitigated,” he argued.

Getting the foundations right will lead to success

The foundation of AI technology is undoubtedly clean, reliable and relevant data. However, it’s estimated that as much as 80% of healthcare data is unstructured, such as handwritten notes, emails, radiological images, and pathology slides. This presents a major challenge when it comes to harnessing the full power of AI.

“An AI model is only as good as the data it was trained on”, said Dr Lambert. He pinpointed two big challenges when it comes to data: getting access to consistent data, and creating the “ground truth labels”, which can be a very labour-intensive process. But once a basis of data is established, the possibilities of AI are endless.

“I believe that once that data is gathered, AI will be able to connect dots that humans wouldn’t be able to connect.”

However, we need to walk before we can run, cautioned Professor Anna Peeters, Director of the Institute of Health Transformation at Deakin University.

She said data is important, but the systems that underpin it are just as important to get right.

“I think the harder part is not the generation of those technologies and applications, it’s actually the generation of the systems which we need to embed them in.” She advises people to carefully consider things like governance, privacy, and funding mechanisms, as well as the human capabilities that we need to enable AI to work.

“We need to spend a lot more time giving attention to how you integrate these technologies into the system to ensure that it’s actually seamless for the patient and for the healthcare worker.”

Prepare now for the AI-enabled future

Prof. Peeters stressed there’s no one-size-fits-all approach to AI. She pointed out there are going to be capabilities and functions that healthcare professionals will be very happy for AI to take over, and others that we may never be able to shift.

“I don’t think anyone’s suggesting that everything suddenly shifts to AI. What we need to do is make sure we choose carefully and strategically, that we co-design and collaborate. And then we continually evaluate the outcomes, costs, benefits, and the risks”, she concluded.

Transcript

Dan Draper (00:15)
Artificial intelligence has been evolving for decades. As costs reduce and more test cases emerge, AI is now beginning to take a leading role in transforming healthcare. However, there remains a prevailing uncertainty around these technologies and what they mean for the future of health professionals.

Dan Draper (00:33)
In this panel, we dive into the world of AI, machine learning and robotics, see what’s possible, and ask the question that’s on everyone’s mind: will AI ever completely replace doctors? Today, we are joined by Professor Anna Peeters, director of the Institute of Health Transformation at Deakin University.

Professor Anna Peeters (00:51)
Hi, thanks very much. I’m looking forward to it.

Dan Draper (00:54)
Kate Quirke, CEO of Alcidion.

Kate Quirke (00:56)
Thanks, Dan, good to be here.

Dan Draper (00:58)
Dr. Ronald Shnier, chief medical officer at I-MED Radiology Network.

Dr Ron Shnier (01:02)
Thanks for the invitation.

Dan Draper (01:04)
And Dr. John Lambert, chief medical officer of Harrison.ai.

Dr John Lambert (01:08)
Thanks for having me, Dan.

Dan Draper (01:09)
Welcome, everybody. So, let’s get started. So, the foundation of health care is based on personal relationships, but the ever-increasing use of technology leads us to the question: are doctors being replaced by technology? Ron, let’s start with you.

Dr Ron Shnier (01:25)
So, it is my belief that doctors won’t be replaced in the foreseeable future. However, the expectation of patients is changing, particularly with millennials and younger people that actually want an outcome rather than a relationship. And this does play into technology. The other thing that’s playing towards technology is the huge amounts of data available in every aspect of medicine.

Dan Draper (01:49)
We’ll go to Anna, what’s your take?

Professor Anna Peeters (01:53)
So, I think you’re right. There’s a popular view that there’s a big risk to the health care professions with the advent of AI. But I mean, obviously that’s not the goal. The goal is to improve patient outcomes, to improve patient quality and safety, to improve equity. And as you rightly said, for many of those things, we know that you need a health care touch point. And so, many of the best AI interventions, I think now are really being built through co-design with the patients, with the communities, with the healthcare workers to ensure that actually the patients and the healthcare workers get the best of both worlds.

Dan Draper (02:27)
What’s the assumption amongst GPs, Kate, do they feel like AI is going to replace them?

Kate Quirke (02:36)
Look, I don’t think so. I mean, unfortunately the GP market is probably the market that I’ve least interacted with from a doctor’s perspective, because I tend to operate more with the specialty sector and the hospital sector. But I don’t think the perception is that, and certainly where we’re coming from is not around replacing doctors, but around augmenting them. I think it’s about the process around delivery of healthcare, around using data more effectively to decrease the cognitive load on doctors, in order to support better outcomes and support better decision making. That’s where I think we’re heading, certainly in the short to medium term.

Dan Draper (03:16)
Ron, do you think doctors are concerned?

Dr Ron Shnier (03:19)
Doctors are concerned. If I take radiologists as an example, the analogy would be: at a radiology conference today, the sessions on AI are standing room only. About 10% of people understand it. 45% of people are there because they’re worried about their jobs. And 45% are there to say it won’t happen. And 90% of them are wrong. But one of the points I’d like to make is one thing AI will deliver is globalization of healthcare. So for example, there are many jurisdictions in the world where there is no medical expertise on the ground, and AI can, in some ways, act as a fantastic triage tool for those people.

Dan Draper (03:58)
So, John, this is obviously something of a pervasive fear, maybe, one would argue, in the industry. Healthcare is about mixing fact and feeling, perhaps, critical thinking with the interpersonal skills that come with being a human. What role can AI play to facilitate that relationship and to improve health outcomes without necessarily taking away the human element of it?

Dr John Lambert (04:30)
Look, I think there is tremendous opportunity to augment doctors in their practice. And you made a statement that there’s a lot of fear out there. To be honest, I think there’s a lot of ignorance out there more than fear. I think most doctors don’t even know what AI is capable of. And if they’ve heard a little bit, they might be a bit scared because they can extrapolate in a potentially dangerous way.

Dr John Lambert (04:52)
But I come from a highly technical specialty, intensive care medicine. I’ve worked in computers and technology and health care for an awfully long time. And one thing that I remain as convinced of now as when I started, is that nothing is going to take away human touch, the healing hands, the emotional connection. And we are so far away from a general purpose AI, let alone an AI that could somehow replicate the experience of going to a caring professional.

Dr John Lambert (05:20)
So, I really think anybody who talks about the risk of actually replacing doctors, and especially GPs, is really hyping up a reality that, if it ever occurs, is an awfully long way away. On the other hand, general practice in particular has been digitized for an awfully long time. The health care system is increasingly digitized. And as Ron said, there’s a tremendous amount of information out there, or data. And as Kate said, the trouble is the sensory overload of clinicians is such that they can’t deal with it or they can’t process it all. So, I think there’s a very valid and almost low-hanging-fruit opportunity, which is here and now. And this is the sort of thing I know Kate’s team work on and so does my team: how can we leverage that information, both to add value and also to make it a little bit more comprehensible for clinicians.

Dan Draper (06:13)
So, you touched on an interesting point there, which is around this idea of information overload. And I guess there’s two factors here at play, probably more, but two that I can think of. One is the sheer quantity of information, but also just the fallibility of a human brain, the uncertainty, you might call it. Kate, can you tell us a little bit about how Alcidion are tackling this, and what your perspectives are on the role of AI in solving for the uncertainty of a human decision?

Kate Quirke (06:46)
We’re committed, first and foremost, to getting the data in a standardized format, so that you can actually agree that the data, the baseline from which you’re driving your decisions, is in fact equal and can be audited and managed. And that is really challenging. We’ve got multiple systems in health care. So, the first thing is about getting that data all together in a single area and understanding where that data comes from, and that it’s consistent in your use of it. So, any AI needs to understand that. And that’s why sometimes it’s easier to tackle particular specialties or particular areas initially, where you can get some uniformity to the data. But if you’re looking across a hospital system, even when they have one electronic medical record, they have got many other variants of data coming into that.

Kate Quirke (07:28)
So, before you do anything, you’ve got to make sure that you’ve got that accurate data in the platform. And then for us, we’ve been really focused on adverse events in healthcare. We know that the third largest killer of US citizens, after cancer and cardiovascular events, is adverse events in health care. So, if you can bring that data together, you can actually start to highlight associated risks by deploying algorithms that have been proven over many, many years, right? Every doctor here, every doctor that is listening to this, will have thousands of algorithms in their head that they know that they need to check for when they’re treating a patient. If you can assist by augmenting that and actually sorting through all of the noise that goes on every day, then you’re definitely taking a step towards improving outcomes. Once the data is accurate, then it’s available for a lot more safe application of AI, safe application of algorithms.
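
For technically minded readers, the rule-based checking Kate describes can be pictured with a minimal Python sketch: a crude early-warning flag computed over standardised vital-sign data. The field names and thresholds below are illustrative placeholders, not a validated clinical score or Alcidion’s actual algorithms.

```python
# A minimal sketch of rule-based deterioration flagging over standardised
# vitals. Thresholds are illustrative only, not a validated clinical score.

def deterioration_score(vitals: dict) -> int:
    """Return a crude risk score from one standardised vitals record."""
    score = 0
    if vitals["respiratory_rate"] >= 25 or vitals["respiratory_rate"] <= 8:
        score += 3
    if vitals["spo2"] < 92:
        score += 3
    if vitals["heart_rate"] >= 130:
        score += 2
    if vitals["systolic_bp"] <= 90:
        score += 3
    if vitals["temperature"] >= 39.0:
        score += 1
    return score

patient = {"respiratory_rate": 26, "spo2": 90, "heart_rate": 112,
           "systolic_bp": 100, "temperature": 38.2}
if deterioration_score(patient) >= 5:
    print("Flag for clinical review")  # surface the risk; the clinician decides
```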

Professor Anna Peeters (08:24)
If I just add, I mean, I think the other thing that’s really interesting to us is there’s not just one AI, right? It’s everything from large data analytics to virtual health platforms, assistive technologies. But it’s also really applicable to health, even outside the healthcare data that Kate is talking about. So, there’s really interesting ways to use data that currently exists for population prevention, for example, precision prevention, or even with the sustainability agenda, trying, as Kate was saying, to shine that data spotlight, if you like, on where the waste in the system might be.

Professor Anna Peeters (08:55)
So, I think there’s a lot of applications for AI in health that will eventually improve health outcomes that have nothing to do with the healthcare professionals actual day-to-day role. So, I think we should make sure we’re as broad as possible in our thinking about the opportunities.

Kate Quirke (09:11)
Yeah, that’s really interesting. I think there’s a perception that the technology we’re using needs to play a role purely in a clinical setting, but that’s not necessarily the case. And it might even be, for example, that as we’re increasingly wearing wearable technology, it can be used for early detection and things like that, and so forth.

Dan Draper (09:29)
We’ll talk more about data in a moment. But one thing I wanted to pick up on, Kate, you were talking about adverse situations. And actually, Anna, I’d like to go to you: who is to blame when there’s a mistake? This is, I guess, a fundamental problem across all kinds of AI. The common one is self-driving vehicles. If a car needs to make a decision about whether to kill one pedestrian or the passengers of the car, that’s an interesting philosophical problem. But how do we think about that in healthcare? Who’s to blame?

Professor Anna Peeters (10:04)
Yeah. No, I think it’s an excellent question. I’m not going to answer the question of who’s to blame. But what I will definitely say is, I think one part of the AI conversation is completely the conversation we’ve been having: what are the technologies or the analytics available and how can you apply them? But actually, I think the harder part almost is not the generation of those technologies and applications, it’s actually the generation of the systems which we need to embed them in. So, what is the governance? What is the privacy? What are the funding mechanisms? What are the human capabilities that we need, whether it’s the system, the service, the practitioners, or the patients?

Professor Anna Peeters (10:38)
So, I know I’m not answering your question, but I think it’s a fundamental question. And I actually think we need to spend a lot more time giving attention to those questions about how you integrate these technologies into the system to ensure that it’s actually seamless for the patient and for the healthcare worker. And exactly as you say, I think something like hospital in the home is going to make this an increasingly key issue: is it the specialist who discharged the patient, their primary care practitioner, or a community healthcare worker who has to pick up responsibility if there is an adverse event? And we haven’t nailed that yet.

Dr John Lambert (11:14)
Yep. And sorry, Dan, if I might add, the question of who’s to blame already exists with IT systems. And they don’t have to be AI systems. I mean, you can design an EMR where the medications screen shows six entries, the patient is on eight medications, and for some reason the scroll bar doesn’t appear. So, the clinician doesn’t realize there are two more meds. Who’s to blame? Anna alluded, in my opinion, to the usability of the system that the AI might be embedded in, or the workflow it might be embedded in. All of those things contribute to the outcome.

Dr John Lambert (11:46)
And so, it’s not just the AI modeling. In fact, compared to some of the cognitive interactions of a human-computer interface, AI is relatively easy to explain. Understanding how a human interacts with the user interface is actually a lot harder to describe. And I often use the statement: does any clinician you speak to know how they make decisions? Probably not. So, asking us to have explainability over an algorithm-defined solution, when we really have no idea how we make decisions ourselves, I find a bit ironic, even though the question will still be asked.

Dan Draper (12:20)
Yeah. It’s interesting. Ron, do you have a take on that?

Dr Ron Shnier (12:22)
Yeah. Look, I think that the aim of the AI is to improve the efficiency and safety of the doctor. We also have to realign our expectations. So, going to the example of driverless cars, the Google car, I think there was one death in four million miles or something like that, yet that was deemed not safe. Whereas for the same amount of human interaction, there would be many thousands of deaths. So, it’s also about the expectation of the technology.

Dan Draper (12:54)
Yeah. I think that’s a key point you’ve made right there, Ron. Yes, there might still be deaths, but if you look at the bigger picture, the point is that there’s probably going to be a significantly reduced number of health issues, and certainly deaths.

Dr Ron Shnier (13:10)
And just one comment to the blame, invariably in systems, it’s more than one person or one process that contributes. So, most times it’s a coincidence of a series of things that go wrong. It’s very rare to be just one thing.

Dan Draper (13:26)
Just to pick up on that point about multi-system failure. Kate, how do you see technology playing a role in addressing those concerns, particularly in the AI context? Is there a role for AI in understanding the overall health journey of a patient through the system?

Kate Quirke (13:44)
Yeah. I mean, we’ve been using it, certainly. And it’s steering a little bit away, I suppose, from that area of direct patient diagnosis and treatment, to looking at predictive analytics based on historical data that we can look through. We can use machine learning to go back and say, based on hundreds, thousands of interactions of patients’ journeys through the healthcare system, predict what might occur for certain cohorts of patients or particular types of patients, if a certain set of activities are in play.

Kate Quirke (14:22)
[inaudible 00:14:22] is particularly useful when starting to look at the social determinants of care, marrying that in with the data from primary care and secondary care, and being able to look at the projected trajectory of what might happen to a patient after they’ve had an acute care incident, or looking to the future of hospital in the home. So, I’m really fascinated with that. And it’s a less risky area in some respects, in that it’s not directly interacting with patient diagnosis.
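
As an illustration of the predictive-analytics pattern Kate outlines, a hedged sketch in Python might train on historical patient-journey features and score risk for a new cohort. The features, data and outcome below are synthetic placeholders, not Alcidion’s models.

```python
# A toy sketch: learn outcome risk from historical patient journeys, then
# score a new cohort. All data here is synthetic and for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# columns: age (scaled), prior admissions, length of stay, comorbidity count
X_hist = rng.normal(size=(1000, 4))
# synthetic "adverse outcome" label driven by those features plus noise
y_hist = (X_hist @ np.array([0.4, 0.8, 0.3, 0.6]) + rng.normal(size=1000)) > 1

model = GradientBoostingClassifier().fit(X_hist, y_hist)
new_cohort = rng.normal(size=(5, 4))
print(model.predict_proba(new_cohort)[:, 1])  # per-patient predicted risk
```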

Dan Draper (14:50)
So, at the beginning of this whole journey that a patient might go through, and this is true of so many illnesses and diseases, is the idea that early detection is so important. And this is perhaps one area where AI can play a significant role, because it can compare tiny, subtle differences to, say, our wider population. Is there any chance that AI will become more capable or more accurate than humans in performing diagnosis of this kind? Let’s start with you, John, perhaps.

Dr John Lambert (15:24)
So, I think one of the things to remember about AI is that an AI model is only as good as the data it was trained on. And in a lot of these questions you’re talking about, we don’t actually know what truth is. We don’t know what best practice is. And in the public health type setting that you’re speaking of, we don’t even have access to the data. So, we might complain busily about the lack of standardization between EMR and practice management systems, and the people that use them, more importantly than the systems themselves. But if you extend out to the pre-clinical interaction, the human, the citizen out in the population in their healthy state, in theory, we’ve really got no consistency in gathering that data.

Dr John Lambert (16:04)
But I believe that once that data is gathered, AI will be able to connect dots that humans wouldn’t be able to connect. So, it will be able to connect 16 different features of your life, ranging from how well you sleep, how much you exercise, what your weight is, what your dietary intake is, and connect that with various disease outcomes in a much more effective way than humans might be able to.

Dr John Lambert (16:26)
But in terms of determining the best treatment, you have to have a ground truth to train the model. And that’s often one of our challenges. The two big challenges we find is getting access to consistent data. And Kate has mentioned that a couple of times as well. And I’d support her in that. But the second part is getting the ground truth labels, which can be very labor intensive to create if they don’t already exist.

Dan Draper (16:46)
So, you can’t have a conversation about AI or machine learning without talking about data at some level. And certainly this conversation is no exception. And John, you touched on a couple of issues there. Some of the other challenges that I noted down before the panel, and I’m sure there are many others: one of them is that 80% of data is unstructured. It could be handwritten notes, or even in a digital setting, the data could be just free text. The other one, which I think is often not talked about as much as it could be, is bias in the data, racial and gender bias in the data, perhaps.

Dan Draper (17:21)
And Anna, what changes do we need to see in how we manage the data and how to think about data in the healthcare system, before we can start to fully trust and embrace AI technology?

Professor Anna Peeters (17:33)
Yeah. I think that’s a really good question. And that is one of the key points that I would have wanted to make, so I’m glad you brought that up: we can really only answer questions for which we have the data. And if we don’t know what we don’t know, then that’s always going to be a gap. And as you say, from the United States, there have been some really important papers come out around gaps in our predictive ability for Black Americans, because they haven’t been in the data. We know, even pre-AI, that in heart disease treatments, women were not often in trials. And so, often the clinical algorithms don’t work for women as well as they do for men. So, this is not a new area.

Professor Anna Peeters (18:12)
But I think AI offers both a new challenge and a new opportunity. The new challenge, as you say, is if we don’t at least give a bit of an analysis to the data coming in, we won’t know what the gaps are and what potential biases are. So, I do think there’s a whole program of work that needs to happen, that’s much more routine and systematic than it is now, which is to analyze which population groups, which areas, which regions, which disease types are coming in through the data, and where are the data gaps.

Professor Anna Peeters (18:41)
But the opportunity with AI, is actually you can use AI to help identify data gaps as well. So, it’s very good at working out where holes are and what you’re not finding, and what it’s not telling you. So, I think that’s another area of AI application that I will imagine will burgeon over time.

Dr John Lambert (18:57)
Anna mentioned a really key value proposition for AI, which is detecting gaps in the data, even if you can’t tell what the gap is. There’s some amazing work I’ve seen in that space. But on top of that, there is a ray of light also: AI is fantastic at standardizing data. So, I mean, you talked mostly about textual or numeric data, but of course, for image data, video data, audio data, we’ve already got fantastic examples of how AI can classify and standardize that.

Dr John Lambert (19:25)
I mean, you might have trouble identifying all the people in a population, let’s say New South Wales, all the people that have COPD or bullous lung disease. Well, we could run an AI model over every chest x-ray taken in New South Wales Health and define all the patients that have bullous lung disease. But that’s a classification task that’s relatively easy.

Dr John Lambert (19:45)
And the alternative, how would you do it? Nobody codes it consistently in EMRs. People refer to that disease with a thousand different phrasings. So, natural language processing would probably struggle. But you can solve it in a different way. So, yeah, in addition to what Anna said about finding holes, AI can be very, very effective as a standardization tool to add value out of the data that you’ve been collecting, that you might’ve previously thought was not useful. I think that’s going to change.

Dan Draper (20:15)
Kate, how do you think about the collection and management of data, particularly for the purposes of AI? Are there guiding principles that Alcidion follows, or that you yourself follow?

Kate Quirke (20:28)
The way we do it is obviously very much structured around our platform. So, we extract data in any form. We don’t mind if it’s structured, unstructured, HL7. I hate to use all these terms because not everyone knows them, but that’s a standard way of pulling things in. And we pull it into the platform. We convert it all to FHIR, the Fast Healthcare Interoperability Resources standard. And we use that as the basis. And then we use CSIRO’s Ontoserver to allow us to standardize that data. So, we’re really focused on what comes in at the beginning, at the front end.

Kate Quirke (21:01)
And then we’ve built governance capability into the platform. So again, you can structure the governance around that data and understand the ownership of it: who’s changed it, when it was changed, how many times it’s been changed. So that if you are applying these models and you look at the results, you have that path to understand where it had come from originally.
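
To make the standardisation step concrete, here is a minimal sketch of normalising one incoming vital-sign reading into the shape of a FHIR R4 Observation resource. The helper function and its inputs are illustrative; real pipelines use full HL7/FHIR tooling rather than hand-built dictionaries.

```python
# A minimal sketch of mapping an incoming reading to a FHIR R4
# Observation-shaped resource. Illustrative only, not Alcidion's pipeline.
import json

def to_fhir_observation(patient_id: str, loinc_code: str, display: str,
                        value: float, unit: str) -> dict:
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": loinc_code, "display": display}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueQuantity": {"value": value, "unit": unit},
    }

# LOINC 8867-4 is the standard code for heart rate
obs = to_fhir_observation("12345", "8867-4", "Heart rate", 112, "beats/minute")
print(json.dumps(obs, indent=2))
```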

Kate Quirke (21:24)
So, we’ve very much focused on the standardization. However, unstructured data doesn’t always come in the way we want it. We actually decided that we needed to build NLP into the platform. So, we created our own NLP. And we’re having a lot of fun with that. And our CMOs are having a lot of fun with that.

Kate Quirke (21:42)
But it’s amazing: if you can actually assist the clinicians to capture the data in a structured way, by embedding NLP as they’re doing their noting, then you have actually addressed one of the key issues, which is about dealing with data that was captured in an unstructured way in the first instance. So, we’ve recognized that we need to do something at the front end, as well as just taking what’s available and getting it in that standardized way.

Dan Draper (22:09)
Just for our audience who might not know that term, NLP, that’s natural language processing?

Kate Quirke (22:14)
Yeah. So basically, we’ve got a capability that, as you’re writing your notes into the system, starts presenting you with potential codes and stats. But we actually get to the point of presenting new pathways as you’re documenting your engagement with the patient. And behind that there’s a decision support engine that is creating all the rules. That means that when you type in COPD, it presents you with a list of possible conditions: are they present or are they not present? And then you can keep working your way through, structuring. So, you can engage with the system in an unstructured way and get a structured capability out of it, without asking the doctors at the front end to think in a structured way about what they’re doing.
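
A toy sketch of that suggest-as-you-type idea: scan free-text notes for known terms and surface candidate codes. The term-to-code table below is a tiny illustrative stand-in for a real terminology service and decision support engine, not the one Kate describes.

```python
# A toy suggester: scan a free-text note and surface candidate ICD-10 codes.
# The lookup table is a tiny placeholder for a real terminology service.
SUGGESTIONS = {
    "copd": ("J44.9", "Chronic obstructive pulmonary disease"),
    "asthma": ("J45.909", "Asthma, unspecified"),
    "pneumonia": ("J18.9", "Pneumonia, unspecified organism"),
}

def suggest_codes(note: str) -> list:
    text = note.lower()
    return [code for term, code in SUGGESTIONS.items() if term in text]

print(suggest_codes("Long-standing COPD, now with likely pneumonia."))
# prints the COPD and pneumonia code tuples
```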

Dan Draper (22:58)
Absolutely, makes sense.

Dr John Lambert (23:00)
And Dan, I’ve even seen examples of that. They’re prototypes, and look, natural language processing is getting better all the time, let’s put it that way. But ones that will just simply listen to the conversation between a GP and a patient, and automatically filter question versus answer, categorize the data, and pop it into structured fields in the EMR or patient management systems.

Dr John Lambert (23:19)
So, yes, you could argue at this stage it’s still experimental, research based. The quality may not be quite where you need it to be in real-time use. But I tell you one thing, all of these Zoom-based calls and equivalent technologies have a very key advantage, because you can separate the audio stream. So, you know when it’s Kate talking versus John, versus Anna. And that can make these tools actually much more accurate, which is quite useful with all the telehealth that’s going on at the moment.

Dan Draper (23:47)
Yeah. It’s fascinating that the world is changing in so many ways. It’s making all kinds of things possible that weren’t previously, well, not that they weren’t possible, but certainly much more accessible.

Professor Anna Peeters (23:57)
I could add just one more point about the data. And it builds on what John was saying: I think one of the other kind of silver linings, if you like, or positive opportunities it’s opening up is data sources that previously weren’t accessible to us, and are now potentially accessible for health-promoting purposes. So, just as an example, which I find really cool.

Professor Anna Peeters (24:19)
The Australian Institute of Health and Welfare and the Australian Bureau of Statistics, we’ve been working with them for a while now to think about population diets. So traditionally, you do a survey and you’re really dependent on who responds to the survey, and then they have to fill out this long food frequency questionnaire or dietary recalls. So, as you can imagine, it’s really difficult to get validated data about what people have eaten at any point in time.

Professor Anna Peeters (24:40)
So, how do you look at population diet trends? What we’ve been working with them on is to look at supermarket sales data, since 70% to 80% of the food that we eat in Australia comes from supermarkets. And now they can get some fantastic data around what the dietary trends look like across the population, based on the different foods that are selling. So, that’s also in its early days. But the potential is huge.

Dan Draper (25:03)
That’s really interesting. And actually, we’ve talked mostly in this conversation somewhat in the abstract, I suppose. And I’d love to hear some more about some specific technologies or applications of AI technology that might be emerging. In particular, I wanted to point out that the Australian government recently announced, in December, that seven and a half million dollars was allocated to research into AI in healthcare. Perhaps we can talk a bit about what some of the specific applications of AI in healthcare might be. I mean, the food one is really interesting, but some others, perhaps. Ron, let’s start with you.

Dr Ron Shnier (25:36)
So, John and I are involved in a company that looks at image data. So, radiology generates huge amounts of data. There are very few specialties that at some point don’t pass through radiology. So, we took an approach that it has to fulfill a couple of things. First of all, it has to be a seamless solution for the doctors, or otherwise they won’t use it. We then took very robust processes to curate that data, because it’s only as good as your data curation. So, not only do we extract the data, for example chest x-rays, where we have an algorithm that solves chest x-ray, but we have very strict criteria for people to pass a certain level of competency to be able to even label. And everything is triple labeled and checked. So, we put a lot of checks and balances in. And we now have a product that can read a chest x-ray.

Dr Ron Shnier (26:29)
But one of the really, really important things, it’s all very well for me to say that our algorithm could outperform an expert radiologist. But where the doctor is critical is context. So, the AI will make a finding, but it’s the clinical context that is most important.

Dr Ron Shnier (26:45)
It’s a really exciting time to be in medicine because this whole notion of precision medicine, we now recognize that a patient’s disease is unique to that patient. And why some people who have the same problem die and others don’t, is because their whole environment is different. And it might be genetic. It might be environmental. It might be the type of drugs they take. And to have a tool that can start to assimilate that data as a solution for that particular patient is very exciting. And AI can do that.

Dan Draper (27:16)
It’s the concept of a digital twin, I think, where you’ve got a digital representation of all of the elements of another thing. And in this case, we’re talking about the digital twin of a human being.

Dr Ron Shnier (27:26)
Of a person. Yeah.

Dan Draper (27:28)
Yeah. Anyone else want to throw in some ideas about where AI technology is going and what some of the more interesting applications might be? Anna?

Professor Anna Peeters (27:37)
Yeah. I’ll give you a different one, perhaps, to everybody else. I’ll give you a research application, given that we’re a research institute. Because I think there’s huge potential in that space. So, one of the interesting things that’s come out of a collaboration between our obesity prevention researchers and our AI specialists has been a new rapid trial methodology.

Professor Anna Peeters (27:56)
And so, normally a clinical trial that’s randomized would take one, two or three years to have a look at an impact on something like physical activity. Using Bayesian and AI algorithms, what this enables you to do is choose your top seven interventions, with dichotomous, yes/no outcomes. And every week you can adapt your trial methodology based on the results across your different clinicians, your different health services, or your different wards. And so, you tweak your trial in terms of changing which interventions you offer to whom.

Professor Anna Peeters (28:29)
So, much more rapidly: in this case, we had something around physical activity scripts for GPs, and in seven weeks we were able to get a result from the trial. So, even things like that, being able to turn over traditional research, if you like, in a much more rapid time period, is really exciting.
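
One standard way to run this kind of rapid adaptive trial is Thompson sampling over Beta posteriors; the transcript does not name the exact method, so the Python sketch below is an assumption about the general approach: seven candidate interventions, yes/no outcomes, and weekly re-allocation toward the arms that are performing best.

```python
# A sketch of a Bayesian adaptive trial via Thompson sampling (assumed
# method). Dichotomous outcomes; allocation is re-weighted each week.
import numpy as np

rng = np.random.default_rng(42)
n_arms = 7
successes = np.ones(n_arms)  # Beta(1, 1) uniform priors
failures = np.ones(n_arms)
true_rates = rng.uniform(0.2, 0.6, n_arms)  # unknown in a real trial

for week in range(7):
    for _ in range(20):                           # participants this week
        samples = rng.beta(successes, failures)   # draw from each posterior
        arm = int(np.argmax(samples))             # allocate to the best draw
        outcome = rng.random() < true_rates[arm]  # observe yes/no outcome
        successes[arm] += outcome
        failures[arm] += 1 - outcome

print("posterior mean success rates:", successes / (successes + failures))
```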

Dan Draper (28:45)
Yes. So, fast synthesis of evidence. And presumably, that even goes as far as creating guidelines and things like that as well.

Professor Anna Peeters (28:53)
Yes, that’s right. It’s a much more rapid learning health system. Right?

Dan Draper (28:56)
Kate, any additions you’d like to share?

Kate Quirke (29:00)
Okay. I guess a topical one is really from a COVID perspective, in terms of looking at the outcome data, the course of the trajectory of the disease over time, and bringing all that data together. Now, we never got enough data here in Australia, probably, to adequately create the algorithms from a localized perspective. But certainly pulling in a lot of the experience that Europe was having through March, April, May and June, we’ve been working with a couple of hospitals, one in Sydney and one regional New South Wales hospital, around treatment of patients in the home. They’ve got armbands. They’re collecting constant device data from patients.

Kate Quirke (29:39)
Now, obviously we haven’t had a huge number of COVID patients in either of those jurisdictions. But by marrying with that the algorithms from the work that’s been coming out of some of the Spanish hospitals that have been using AI in this area as well, we’ve been able to do some preliminary work around predicting the course of the outcome of patients with particular co-morbidities, patients that presented with a temperature at a certain time in the trajectory of the disease, and so forth.

Kate Quirke (30:06)
And obviously this is topical because it’s COVID, but it can be applied to anything that we’re trying to treat that’s chronic or affects a large number of the population, in terms of diabetes or other chronic conditions, and so forth. So, I’m excited about the potential for that as we start to see more and more of this technology deployed in earnest, as we’re treating patients.

Dan Draper (30:30)
What are the other practical implications of AI technology here? I mean, is there a risk that we, especially clinicians in the medical profession, or even just as individuals, become too dependent on what AI is telling us?

Professor Anna Peeters (30:47)
And I can start, if you like. I’m sure there’s lots of conversation there. But I guess I see the bigger issues as the ones I was touching on at the beginning, which is that we need to get all the other elements of the system right and ready for the AI. Because I think they’re the bits that are going to make it clunky.

Professor Anna Peeters (31:06)
And a really simple example, and this is not going anywhere near AI, is the shift to telehealth during COVID. So, there was a huge shift. But over 90% of those consults were on the phone, because whether it was the patient or the practitioner or the service, they could not actually adapt to the video consults. So, I just think we need to walk before we can run. And we do need to make sure that all the things we’ve talked about before, like governance, privacy and funding, enable the best use of AI, rather than it just being elite and on the fringes.

Dan Draper (31:41)
I see a few nods there. And Kate, have you got a take on that?

Kate Quirke (31:42)
No, no, I absolutely agree. I’ve said this way back when we started moving [inaudible 00:31:48] to hospital in the home. I was in a discussion around this, and I felt that equity of access was something that we really need to consider, particularly from a telehealth perspective. I mean, a lot of what I’ve been talking about is much more intense treatment of patients with conditions. But if we’re really talking about the telehealth component, and even patients in hospital in the home, we must understand how they get access to that type of technology. And we must make sure we don’t end up with a two-tier, or many, many, many-tier system, where it depends whether you’re video literate or IT literate as to what healthcare you’re going to get.

Kate Quirke (32:26)
We’ve already got enough barriers. Let’s not create yet another barrier where your healthcare outcome depends on whether you’re technology literate or not. So, obviously we need to be mindful of that. But I don’t think that’s insurmountable. I really don’t. If the will is there, then there are lots of people doing amazing stuff around access to technology.

Dan Draper (32:44)
So, it’s interesting. The question isn’t necessarily about whether we’re going to overuse AI; it’s more about making sure that everyone can access the benefits of AI. Sorry, John, I cut you off.

Dr John Lambert (32:56)
Oh, no, no, no. That actually ties in well with something I was going to suggest. I mean, you’ve got to prove those benefits. So, I totally support that AI is a wonderful way to improve equity of care, and not just access to care, but the quality of care. So, one thing about AI is it tends to perform consistently. Humans are incredibly variable beasts. And I know that me at 2:00 AM after a week on call is not quite the same as me at 8:30 in the morning on the Monday when I start my run. So, even if the AI is average, it might still outperform the poor performing clinicians. So, there’s a whole lot of interesting things. But we have to test it all.

Dr John Lambert (33:37)
At Harrison, we believe that any of the AI models we create should be a medical device. It should go through regulatory approvals, just like any other medical device. It should be subject to scientific peer-reviewed research to prove that it has a benefit, and that it doesn’t cause dependency.

Dr John Lambert (33:51)
I mean, people talk about becoming reliant on AI, and it is possible. And look, it’s partly about how you design the AI and what purpose it is there for. We’re very keen to design tools that are meant for clinicians, as Ron mentioned earlier. But the reality is that a lot of AI solutions may well improve the performance of clinicians.

Dr John Lambert (34:10)
So, Ron was talking about a model we’re building with I-MED that looks at chest x-rays. Now, say you have junior staff that are really, let’s be honest, a bit average at looking at all the fine details of a chest x-ray. I know I was. I’ll speak for myself. I was crap when I was an intern.

Dr John Lambert (34:26)
And if every time I looked at a chest x-ray, I had this thing sitting up there on screen, over my shoulder, in my ear, whatever you’d like, telling me, “Now, by the way, John, this x-ray is showing this, this and this.” Then I’d think, “Oh, wow, that’s two things. Oh my God, there’s actually eight things there.” If that happened every time I looked at an x-ray, I have a feeling that if you then took me out to Nyngan, where I was placed once as an RMO2 to replace the GP there, I’d probably be better at looking at the chest x-ray, even if the AI wasn’t available in Nyngan.

Dr John Lambert (34:56)
So, it does go both ways. And the answer will be found through research. We’re going to have to implement these tools in clinical environments, measure the impact, be wise about the impact of usability, understand cognitive science and human factors. And we’re going to have answers to these things.

Dr John Lambert (35:12)
And I think that’s one thing healthcare has historically been very good at, is examining our practice and research. And I think as long as you take that approach, this is no different to a new surgical procedure or a new drug. And if you test it that way, then a lot of these risks will be identified and can then be mitigated.

Dan Draper (35:29)
So, the message there is that the technology is not replacing human capacity. It’s more collaborating with human capacity, and it’s enhancing [crosstalk 00:35:40].

Dr John Lambert (35:40)
Well designed technology should do that. I’m not saying there aren’t companies out there trying to replace clinicians. I think many of them will fail. But yeah, it is a choice whether you do that or not. And so, I just put that caveat on it. You can try to replace functionality, but I believe there’s more value in combining the human and the computer to create a better outcome than either one alone can provide.

Dr Ron Shnier (36:05)
Yes, to John’s point, consistency is very important. I know I’ll just be sitting in my office, looking at a complex case. The phone goes, it’s a referring doctor. Then I get called to go do an injection or a biopsy. Then I come back and my wife gives me a call. And then, where was I, right? It would help to have a tool to say, actually, you forgot to mention this. Or, for example, in mammography, as a second reader. Not to belabour the point I mentioned before.

Dr Ron Shnier (36:38)
Australia, we’re a very sophisticated country and we actually get world-class healthcare. But most of the world doesn’t have that opportunity. To have an algorithm that can at least sort out the serious from the non-serious, where there’s no infrastructure to do that, could be a huge service.

Dan Draper (36:54)
I have trouble getting through my mornings sometimes, when I’m getting lots of distractions. I can’t imagine what it’s like to do a very, very important diagnosis.

Dan Draper (37:03)
One of the other ways that we as humans collaborate with technology is through robotics. Now, I know robotics, you would argue, is maybe not technically AI, because it’s often the humans that are still controlling it, but it’s an interesting area nonetheless. And there has been an increase in instances of robots being used in a medical context, in particular in surgery. I’m interested in understanding how robotics have been playing a role in surgery, and also what impact there is on safety and efficiency, clinical outcomes, cost. Anyone got some takes on that?

Dr Ron Shnier (37:44)
I have a little comment. So, we do a lot of prostate cancer work. And there’s a robotic machine that urologists now use for surgery. And interestingly, we have all these governance issues and we rely on published medical data, but someone decided that the robot was safer than a surgeon. And people were using robotics to do prostatectomies two years before there was any published data to show that it made any difference at all.

Dr Ron Shnier (38:15)
And what’s actually come out of it is, number one, there is a huge learning curve to be good. And the big benefit wasn’t so much patient survival, but much shorter bed stays and less blood loss. So, I think the take-home message is that sometimes we let technology slip through because it’s marketed as better, without the proper governance around it.

Dan Draper (38:38)
Yeah, that’s interesting. I mean, to be honest, the idea of having a robot do surgery on me scares me. I saw a video on the internet recently of a banana being operated on by a robot. And I imagine the challenges, technically, would be pretty significant.

Dr Ron Shnier (38:58)
But it has a big role. So, for example, we can do very high resolution imaging for [inaudible 00:39:02] surgery. We can do sub-millimeter resolution. And to have a robot controlled by a surgeon that does not have a tremor, and that can magnify the field, so that you can do tiny pieces of surgery where one or two millimeters does make a big difference to the outcome, is certainly of value.

Dan Draper (39:21)
Yeah, understood. So once again, the emphasis being on the collaborative nature of the technology.

Dr John Lambert (39:28)
I think that’s the challenge I have when you ask me a question like that. I mean, robotics, if we talk about the example Ron just made: the original robots in prostate surgery, I mean, we call them robots, but they weren’t automatic. They were probably better described as remote manipulators, because you’d sit in a console and you would just manipulate instruments at a distance from you. And there were micro-adjusted movements and a whole lot of other things.

Dr John Lambert (39:53)
And then you can add AI in lots of different ways. You can put safety margins on it. You can put resistance on the controller when you approach a blood vessel recognized by AI in the image. You can put stabilizers into the software, so that you might have a tremor, but the tool doesn’t. They’re all examples. And you don’t have to use AI for some of those. And AI is better for others.

Dr John Lambert (40:13)
That’s a long way from what you described, Dan, where you’ve got a robot doing the operation autonomously. That’s like… Yeah, take that [crosstalk 00:40:21].

Dan Draper (40:20)
Science fiction.

Dr John Lambert (40:21)
I’m sorry. [inaudible 00:40:22] operative environments. So, I think that’s one of the challenges. What do you mean by AI in robotics, in surgery? There are so many different ways they can assist. So, as somebody said, I’m not sure who: we have to crawl before we walk, before we run.

Dr Ron Shnier (40:38)
Let’s differentiate between human controlled robotics and autonomous robotics. So, already today, you can use remote controlled robotics, where you could be in Sydney and someone could be in Melbourne, and you could do the same surgery, but that is human controlled, it’s not autonomous.

Dan Draper (40:53)
And I suppose from what you’re saying, there’s another dimension to it, which might be it’s human controlled, but it’s human oriented because it’s removing tremors or what have you. Yeah. Understood.

Professor Anna Peeters (41:03)
So, I think it’s a really interesting point that’s been made, which is basically that it’s not one size fits all, whether it’s robotics or whether it’s AI. And I think that’s a thread that we really need to make sure continues through the whole conversation. Because there are going to be capabilities and functions that we’ll be very happy, I think, for AI to take over, or robots to take over. And I think immediately of the washing machine: nobody wants that capability back, really, do we?

Professor Anna Peeters (41:29)
So, there are clearly functions that we’re going to be very happy with, whereas there are others where we may never be able to shift. So, even in that really basic point before that I was talking about, in terms of the shift to telehealth, which is not about AI: in this evaluation we’ve been doing with primary care practitioners, one of the comments has been, “Well, sometimes I need to use my sense of smell. I can tell from somebody’s breath if something’s wrong.” And so, there are things that we’re never going to be able to pick up routinely with AI, or not for a very long time.

Professor Anna Peeters (41:57)
So, I don’t think anyone’s suggesting that everything suddenly shifts to AI. And I think that’s really important. What we need to do as John said, is make sure we choose carefully and strategically, that we co-design and collaborate. And then we continually evaluate the outcomes, the costs, the benefits, the risks. And then we’d go back again, right? And go around the cycle again and do the next iteration.

Dan Draper (42:19)
There’s an interesting discussion there around how a lot of practices simply weren’t prepared for telehealth. I mean, I guess in many ways we weren’t really prepared for the pandemic. But isolating the technology element of this discussion and thinking forward, what can practices start to think about, start to do, to prepare themselves for an AI-enabled future? Are there indeed any things that we can do now?

Dr John Lambert (42:48)
Better internet would be a good start for a lot of them. It might be a webcam on every computer. It’s the really basic stuff that has stopped so many practices getting on board with this thing, really.

Professor Anna Peeters (43:01)
And even basic data integration. I know with the COVID response, and cohealth, and the integration for the first time of social care and health care, which has been fantastic, one of the biggest barriers was people saying, we’ve got 25 different dashboards and nothing integrates. So, I do think that’s the kind of system-level change that needs to happen. And that really has to be driven from somewhere like the department. And they can do that. And then there’ll be much better connectivity, interoperability, et cetera.

Kate Quirke (43:27)
If I can say, it’s a bit of both. But bear in mind, a theme for my entire career in healthcare IT has been the integration of primary and secondary care. We don’t actually even standardize the data between those two very clear areas. And then add in the social determinants of health that we want to know about a patient. Unless we deal with it as a whole-of-system issue, how are we really going to use data to effectively change any of these outcomes?

Dr Ron Shnier (43:54)
Yeah. I mean, one of the issues with informatics is it always becomes a problem of commercial interests. And I’m not talking about making money, but about protecting IP. And often we have these fantastic programs with fantastic data, and we want them to speak to each other. But for the vendors to protect their IP, it’s an impossible hurdle. And we run into that quite often. So, the person that can invent the thing in the middle, the sandwich filling that puts these pieces of bread together, so no one is threatened and that data can be shared in a useful way, that would be fantastic.

Dan Draper (44:29)
So, the takeaway here, everyone, is that the robots aren’t taking over anytime soon, but with the right data, AI may be able to provide better health outcomes in the long term.

Dr Ron Shnier (44:38)
Absolutely.

Dr John Lambert (44:38)
And when you’re talking about raw data, Anna mentioned a really great thing, that computers currently can’t smell. But I would also want to point out that they can do some things that humans can’t do. For instance, infrared vision is no different to a computer than optical wavelength vision. And imagine if I could see every one of my patients in infrared and see where the hotspots are. I happen to have an infrared camera that I put on my iPhone every now and again. And the world in infrared is a fascinating place. And I can think of quite a lot of AI applications driven by infrared cameras, which could be really powerfully used, especially in an acute care hospital environment, but also in a general practice.

Dr John Lambert (45:18)
And one of the nice things is that infrared cameras can have what appear to be opaque lenses. So, the idea of being watched by big brother all the time is a significant barrier in video applications and image applications in healthcare. But infrared cameras can’t see you that well, yet can provide a lot of information. So, that’s just one example. There are plenty of others.

Dr John Lambert (45:37)
But one of the cool things about AI that I’ve found since being more involved in it, is that it truly has the ability to sense in a sensorium beyond human capacity. Even in CT scans, there are 16,000 levels of gray in the Hounsfield units. We can only perceive up to 700. So, a computer can actually see in higher definition than humans can ever possibly see.
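
The gap John points to is why CT images are “windowed” for display: the wide Hounsfield-unit (HU) range is compressed into the roughly 256 grey levels an 8-bit screen can show, one window at a time. A small sketch with synthetic values:

```python
# Windowing a CT slice: map a chosen Hounsfield-unit range to 0-255 for
# display, clipping everything outside it. Pixel values here are synthetic.
import numpy as np

def window(hu: np.ndarray, center: float, width: float) -> np.ndarray:
    lo, hi = center - width / 2, center + width / 2
    return np.clip((hu - lo) / (hi - lo) * 255, 0, 255).astype(np.uint8)

slice_hu = np.random.default_rng(1).integers(-1000, 3000, size=(512, 512))
lung_view = window(slice_hu, center=-600, width=1500)  # typical lung window
bone_view = window(slice_hu, center=400, width=1800)   # typical bone window
print(lung_view.dtype, lung_view.min(), lung_view.max())
```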

Dr John Lambert (45:59)
And so, the trouble we have is that the labeling is still done by humans. So, we’ve got to go beyond human capability sometimes to actually create the AI models. Anna is right, they can’t do a lot of things, but they can also do a lot of things that augment human sensory capability. And I’d like to see how we can leverage some of that in healthcare, because I think it would be wonderful to see what we can do.

Dan Draper (46:19)
It reminds me of the old Steve Jobs quote: a computer is like a bicycle for the mind. It’s a nice way of articulating how we think about the relationship we have with technology. Thinking through this conversation topic, I really appreciate everybody’s contribution. Is there anything anyone would like to add, and would like, in particular, our audience to know about the topic?

Dr Ron Shnier (46:42)
I just have one comment. It’s my belief that AI will do to medicine what the internet did to communication. So, the internet is this great infrastructure and people sometimes use it well, and sometimes they use it terribly badly. Similarly, AI will have this great ability to deliver amazing value, but you need safeguards.

Dan Draper (47:04)
Anna?

Professor Anna Peeters (47:05)
I guess one of the threads through this conversation that I really enjoyed, has been the focus on health equity. And I guess I’d like to underscore that I think AI actually has real potential to drive a step change in health inequalities in places like Australia. And the more that we can focus our attention on achieving those kinds of equity gains, I think that we’ll all be the better off for that.

Dr John Lambert (47:28)
Yeah. Look, I think the message, I feel, is that most people don’t fully understand what AI is and what it’s capable of. Sometimes, to make it easier, I say, look, it’s just statistics on steroids: predictive models, like regression models, with an infinite number of inputs. Well, maybe it’s not an infinite number, but it could be millions of inputs to a regression graph, of which we can only cope with two dimensions currently, maybe three if you’re lucky. And yet computers can handle n dimensions, where n is an impossible number for humans to deal with.

Dr John Lambert (48:08)
Even looking at a chest x-ray: it’s a 4,000 by 3,000 pixel image. As far as the AI is concerned, every one of those pixels is an input point. So, do the math, that’s 12 million data points that it has to connect together and understand their relationships with each other, to come up with a prediction of what it sees on that x-ray. Every problem in medicine is a multi-parameter differential equation that we all do in our heads.
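
That “regression with millions of inputs” framing can be made literal with a toy model: a logistic regression treating every pixel of a tiny synthetic image as one input feature. A real 4,000 by 3,000 chest x-ray would give 12,000,000 such inputs; everything below is illustrative, not Harrison.ai’s method.

```python
# A toy "statistics on steroids" model: logistic regression over every pixel
# of small synthetic images. Data and labels are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
images = rng.random((200, 32 * 32))  # 200 flattened 32x32 "images"
labels = (images.mean(axis=1) > 0.5).astype(int)  # synthetic finding yes/no

clf = LogisticRegression(max_iter=1000).fit(images, labels)
print(clf.predict_proba(rng.random((1, 32 * 32)))[:, 1])  # one new image
```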

Dr John Lambert (48:31)
So, the application is just tremendous. And it’s no more scary than statistics, really, if you want to make it easier for people to deal with. So, I think for the 10% that Ron spoke about who understand AI, our job and our responsibility is actually to help the other 90% understand what it’s all about.

Dr John Lambert (48:49)
And I think we need people like Anna, who are focused on the research, the validation, the exploration, the science, to make sure we aren’t just making leaps which don’t make any sense. I think clinicians, and I don’t know everybody’s background here, but I know Ron and I are both clinicians, we need to make sure that the AI solutions are clinically sensible and are solving problems that clinicians actually need solved. And more importantly, patients actually need solved. And that we’re not just coming up with gimmicks that have no real purpose.

Dr John Lambert (49:18)
So, there are a lot of roles to play for people who are not necessarily AI engineers. I’m not an AI engineer. I live with a team that blows my mind every single day. But I keep them connected to the problems that really matter. So, I think it’s an amazing environment to work in. It’s nothing to be scared of. But it’s certainly something people have to actively make an effort to learn about, to get the most value out of it.

Dan Draper (49:41)
Final remarks, Kate.

Kate Quirke (49:43)
Well, look, I liked a lot of the comments around, we’ve got to walk before we can run. I think there’s a lot of exciting stuff happening with this. But at the heart of this, we cannot lose sight of the patient and the outcomes for the patient. And the step to get to the patient, for me, is very much the clinician. And so much technology has been done to clinicians over the years, inflicted upon them. And we’ve told them this is the best way to do things.

Kate Quirke (50:08)
And what I’d really like to see here is clinician-driven user interaction that fits into their workflow, that supports what they’re trying to achieve. And we’re doing that because, at the end of the day, we’ve got the patient’s outcomes at heart. And so, it’s got to be done really collaboratively. And that’s why I love to see that Harrison have clinicians heavily involved in what they’re doing, because this should not be done to people. This should be something that we are walking through together, with the best intentions of improving healthcare for everyone.

Dan Draper (50:39)
Nobody wants technology done to them. So, well, with that remark, look, I’ve really enjoyed this conversation. There’s been some fascinating topics. Thank you all for joining us.

Dr Ron Shnier (50:49)
Thank you.

Professor Anna Peeters (50:49)
Thank you.

Kate Quirke (50:50)
Thank you.

Dr John Lambert (50:51)
Thanks for having us.
