Timeline for Why would patient management systems not assert limits for certain biometric data?
Current License: CC BY-SA 4.0
4 events
| when | type | action | by | license | comment |
|---|---|---|---|---|---|
| Feb 17, 2021 at 21:41 | history | edited | IMSoP | CC BY-SA 4.0 | added 92 characters in body |
| Feb 17, 2021 at 19:27 | comment | added | Sphaerica Pullus | | The AI/ML part does definitely come across as more worrying. From what I understand, the training data at least does need to be valid, although I don't know how big a dataset you need to accurately train a medical model. I gather it's also not really possible to validate an ML model analytically, in the sense that it's a black box. Having a trained medical professional evaluating every output seems ethically/legally necessary now, but I can't imagine it being the case in the future! |
| Feb 17, 2021 at 19:21 | comment | added | Sphaerica Pullus | | This makes sense, I think I neglected to consider the whole system in asking the question. It's the links and assumptions that are made which cause problems! Soft validation at each step seems like one of the best options, at least without considering potential programming effort. And the 'final' stage of reporting and acting on the data must assume un-validated data anyway, given that it's necessarily aggregating data from a wide variety of sources (some of which, I suppose, might also be self-reported, rather than be entered by a doctor). |
| Feb 17, 2021 at 19:04 | history | answered | IMSoP | CC BY-SA 4.0 | |
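
The comment from 19:21 mentions "soft validation at each step" as an alternative to hard limits on biometric data. Below is a minimal sketch of what that could look like, assuming a simple record-and-flag policy: out-of-range values are stored unchanged but carry a warning for later review. The field names and plausibility ranges are illustrative assumptions, not values taken from the thread or from any particular patient management system.

```python
from dataclasses import dataclass, field

# Illustrative plausibility ranges only; a real system would source these
# from clinical guidance rather than hard-coding them.
PLAUSIBLE_RANGES = {
    "heart_rate_bpm": (20, 250),
    "body_temp_c": (30.0, 43.0),
    "systolic_bp_mmhg": (50, 250),
}


@dataclass
class ValidatedReading:
    name: str
    value: float
    warnings: list[str] = field(default_factory=list)


def soft_validate(name: str, value: float) -> ValidatedReading:
    """Store the value as entered; only flag it if it falls outside the plausible range."""
    reading = ValidatedReading(name, value)
    bounds = PLAUSIBLE_RANGES.get(name)
    if bounds is not None:
        low, high = bounds
        if not (low <= value <= high):
            reading.warnings.append(
                f"{name}={value} outside plausible range [{low}, {high}]; review before use"
            )
    return reading


if __name__ == "__main__":
    # A clearly implausible heart rate is recorded but flagged, not rejected.
    print(soft_validate("heart_rate_bpm", 1200))
```

The design choice matches the answer's framing: the data entry step never blocks a reading, so nothing is silently lost, while the reporting stage (which must assume un-validated data anyway) can surface the warnings to a human reviewer.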