I really think that ML could be a huge thing in healthcare, but it’s constantly phrased in such a way that it’ll never work. You don’t want a robot replacing your doctor because your doctor is not really important as a fount of information and prediction but instead as a position of authority, trust, and ethics.
Dr. Metwally suggests this already: that the doctor’s job is to know how to care for a sick child in their mother’s arms. Knowing the diagnosis is just one (small) part of this—especially since likely diagnoses evolve over time but care must remain constant.
So it seems obvious to me that ML should never be represented as antagonistic to doctors. It should, it must, be augmenting them. ML should be decision support. Unfortunately, much of the time when I see people trying ML in medicine, they want to disrupt the doctor. Further, many ML techniques don’t lend themselves to human interpretation, which means there isn’t a good synthesis. I think there’s actually a technology and human-technology-interface problem here to be solved.
A few comments on that page agree with you. As do I. In particular, rule-based systems seem to be the best fit here because they can show the physician how they reached a conclusion from specific data and probabilities. One might even design the app to allow physicians to introduce new data or tell it to ignore something. That’s to account for those things a computer can’t pick up so well but which the physician already noticed. Let them explore alternative possibilities, then use their own judgment for the final diagnosis.
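To make that concrete, here is a minimal sketch of what such a system could look like: each rule fires on specific findings, the output includes the explanation trace (which findings supported which conclusion), and the physician can tell it to ignore a finding before re-running. All rule names, findings, and weights below are invented for illustration, not clinical content.

```python
RULES = [
    # (condition suggested, findings required, illustrative weight)
    ("influenza",    {"fever", "cough", "myalgia"}, 0.7),
    ("strep throat", {"fever", "sore throat"},      0.6),
    ("common cold",  {"cough", "runny nose"},       0.4),
]

def diagnose(findings, ignore=frozenset()):
    """Return candidate conditions along with the findings that support them."""
    active = set(findings) - set(ignore)
    results = []
    for condition, required, weight in RULES:
        if required <= active:  # every required finding is present
            results.append({
                "condition": condition,
                "weight": weight,
                "because": sorted(required),  # the explanation trace
            })
    # Strongest candidates first
    return sorted(results, key=lambda r: -r["weight"])

# Physician workflow: run, inspect the "because" trace, then override.
first = diagnose({"fever", "cough", "myalgia", "runny nose"})
# Physician judges the fever reading unreliable and re-runs without it:
second = diagnose({"fever", "cough", "myalgia", "runny nose"},
                  ignore={"fever"})
```

The point is not the toy rules but the shape of the interaction: the system never hides why it concluded something, and the physician stays in control of the inputs.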
You don’t want a robot replacing your doctor because your doctor is not really important as a fount of information and prediction but instead as a position of authority, trust, and ethics.
I don’t know, in most cases I personally would trust an algorithm much more than a doctor. An algorithm isn’t going to act like its time is more valuable than mine, allow its judgment to be influenced by pharmaceutical reps, or get fatigued, for example.
Edit: Potentially algorithm-based healthcare could make pricing and billing much more transparent too.
Facebook, ad networks, Palantir, the NSA, high-frequency trading (err, time advantages), sales analysis for pharmaceuticals, and the developers of all these rely on algorithms for their schemes. Trust them more now?
Accidentally found this doing a quick Google on AI or ML deployment in medicine for another thread. The answers on this page, especially those from doctors, include some excellent ones. I’m normally a Quora skeptic given what Google or StackOverflow do for me. Yet the impact this could have on getting new researchers started, with ideas and a reality check, is so high it almost looks to single-handedly justify the original investment in Quora. Well, maybe not for the VC’s, but for the public good anyway. :)
Physicians in larger groups face perpetual pressure to see more patients in less time
This is a problem because the supply of physicians is restricted by the government-granted power of licensing boards to decide who can or cannot practice medicine. So there is a perpetual shortage of doctors, since the number trained each year is limited by the number of medical schools and by limits put on immigration.
Of course, the argument is that they want to make sure doctors are the best and so they can only take the top students each year. But if this argument is true, then they ought to only train one doctor a year. Anything else and they are drawing a line in the sand, in which case I ought to be the one deciding who’s giving me medical advice.
Medicine is one of the industries most prone to regulation in the world, because of people’s propensity to pretend that economics does not apply to healthcare, since lives are ‘priceless’. That may be true, but doctors, nurses, support staff, hospitals, equipment, and medicines are not.
“Of course, the argument is that they want to make sure doctors are the best and so they can only take the top students each year. But if this argument is true, then they ought to only train one doctor a year.”
The first point made sense, but this is nuts. Prior to regulation, the medical industry was like the dark ages: tons of BS, ineffective treatments, dangerous ones (opiates for kids)… the list goes on. The consequences and politics involved were high enough to establish ways of vetting doctors and drugs. Any training and certification process can only aim to reach a certain standard. Nobody is looking for the one perfect doctor, just a baseline of people who do a decent job relative to the prior free-for-all, plus steady improvement. Regulations and courts also add accountability to the mix.
On your first comment, our training system is essentially a form of apprenticeship. You need the pros there to guide the amateurs as they gain experience doing the difficult work of both diagnosis and care. That already limits how many doctors one can bring in under such a system. Nurse practitioners, too, as they do key work in diagnosis and see the widest range of cases. So the system would basically have to put more trainees alongside the doctors, learning from them. There’d be people in about every specialty constantly learning. Probably also some way of anonymously collecting patient data to feed into A.I. work or even human analysis.
Sounds like a lot of work. Gotta wonder who is going to pay for it, and what the incentives will be for new doctors once supply exceeds demand.
Let’s not pretend the doctors themselves have no influence on the number of credentials handed out each year.