
Driving change

5 February 2020


Digital transformation: it won’t be easy. Interview with Kathrin Cresswell [Read the Italian version]
How do you know if innovation is working? Interview with Lavinia Ferrante di Ruffano [Read the Italian version]

Digital transformation: it won’t be easy

Despite their complexity, many frameworks could actually help to assess digital innovation. However, revolution is not the only thing to consider: we should also invest in the quality of basic infrastructures, which are obsolete and require updating.

Interview with Kathrin Cresswell, Chief Scientist Office Chancellor’s Fellow and Director of Innovation, Usher Institute, University of Edinburgh

What are the main obstacles and challenges faced by large-scale digital transformation initiatives? How can they be overcome?
The unhelpful answer to this question is that challenges are endless and there is no simple recipe to overcome them. That said, there are some strategies that can be put in place to tackle these issues and set up conditions in which initiatives are more likely to work. These include considerations at the technology, people, organisational and macro-environmental levels. We are in the process of publishing a framework that takes decision makers through these dimensions as they relate to individual implementations. At the large-scale programme level, these dimensions become even more complicated, and there is no agreed way of achieving the intended outcomes. If I had to pick one crucial dimension, though, it would be functioning technology that is usable and brings benefits to end-users, whether these relate to the safety, quality or efficiency of care.

What kind of digital technologies are considered the most interesting by health care systems? How are they being used? By whom?
Again, a very big question. I’m not sure if I’m the best person to answer this. My initial reaction would be to be cautious of new technologies that are at the top of the hype cycle. There are many basic needs that may deserve resources but that do not come with the same political kudos: we still use fax machines in many parts of the NHS, for example, but this just does not sound as sexy as new AI robots delivering care. It probably takes a parallel strategy of investing in existing infrastructures to address basic needs whilst also building new tools that may be potentially risky and have unknown unintended consequences. The focus on data is unavoidable, though, and a data strategy needs to be firmly positioned in every organisational and national strategy.

What are the potential applications and the potential benefits for patients in the use of novel health information and digital technologies? Also, for which patients? Are there categories that can benefit more than others from these applications?
Sticking my neck out a little bit, I think there is significant potential for technologies that promote greater involvement of patients with chronic conditions. There are, however, risks associated with these that have to be taken into account, for example in relation to data interpretation and integration with other health information systems. Formative qualitative evaluations that track emerging risks can help to mitigate potential adverse consequences.

“The future is a dream and anyone’s guess”.

What are the main safety concerns related to digital health technologies? Could you give a few examples of safe and effective uses of digital health technologies?
There are many examples of safe use of technologies but also many of unsafe use. By nature, these relate to both technological and social factors. For example, clinical decision support systems can be very effective in improving prescribers’ decision making, but they can also introduce new safety threats associated with alert fatigue, where prescribers are faced with so many alerts that they ignore them and miss potentially important ones.

What does “digital excellence” mean in health care? Why is it important to define, measure and assess it?
Digital excellence is a moving target and means different things to different people. This makes it difficult to define, measure and assess. Nevertheless, organisations and health systems will never know how they are progressing if measurement is not even attempted.

How can we measure and assess it? Are there tools? Which ones? Are there tools being studied (such as the framework you propose)?
There is the HIMSS Analytics® Electronic Medical Record Adoption Model (EMRAM), and there are various related frameworks such as the Infrastructure Adoption Model (INFRAM) and the Continuity of Care Maturity Model (CCMM). The problem is that these are very much based on a North American model of a health system, and many of their components may not translate well to other contexts. They also assume that digital maturity is an end-goal that can be achieved by progressing through a series of stages, and they very much focus on technology rather than on the social dimensions of change. Our proposed Evolve in Context model of digital excellence in health care addresses these shortcomings, but it also provides less straightforward answers, as it reflects the complexity and constantly evolving landscape in which digital transformations take place. As a result, it does not provide the clear roadmap to change associated with the existing HIMSS models.

How do you imagine this digital revolution will shape the future of healthcare, in the near future as well as in the more distant one (2050)? What can we expect to see, and which scenarios still belong to sci-fi books and movies?
The future is a dream and anyone’s guess. I would certainly like to see the increasing development of learning health systems facilitated by data, and also the establishment of learning ecosystems where organisations learn from each other’s digital experience. Automation to the extent of routine application of care robots is still quite a long way off, I think.

January 2020


How do you know if innovation is working?

We need rigorous experimental studies to assess our use of artificial intelligence. We spoke about it with a researcher who is working on integrating AI extensions into the CONSORT and SPIRIT guidelines.

Interview with Lavinia Ferrante di Ruffano, Test Evaluation Research Group, Institute of Applied Health Research, University of Birmingham

What are the current limits of artificial intelligence (AI) algorithms applied to patient care?
AI has a broad range of applications to patient healthcare, from patient identification all the way through to diagnosis and treatment prescription. While these algorithms have the potential to transform healthcare in a myriad of different ways (such as providing earlier or more accurate diagnosis, enabling faster and more efficient service delivery, and facilitating access to medical care), the key limit at the moment is the dearth of evidence that the use of these interventions does more good than harm to patients. This is one of the key reasons underlying the slow uptake of AI healthcare technologies around the world. Instead, the majority of AI intervention studies so far have been validation studies (for example, diagnostic accuracy studies), and even then few studies present externally validated results or compare the performance of AI with that of health-care professionals in the same patient sample [1]. In order to translate the potential of AI into clinical practice, studies are needed that evaluate patient and health service outcomes as a result of using an AI intervention, compared with current practice. Optimal reporting of these studies is critical to ensure that their results can be used to inform policy decisions and health technology assessments.

Do clinical trials still have a role in evaluating healthcare interventions such as AI algorithms?
Prior to their implementation in practice, all healthcare interventions must be evaluated rigorously to demonstrate that their use will do more good than harm to patient health. Randomised controlled trials provide the highest quality evidence for the effectiveness of healthcare interventions, and we do not see AI interventions as an exception to this. In the case of black-box algorithms, where the intended and unintended consequences of implementation may be unpredictable, the need for this level of evaluation will be even more critical.

“The key limit at the moment is the dearth of evidence that the use of these interventions does more good than harm to patients”.

What are the critical points missed by the current CONSORT and SPIRIT guidance regarding AI algorithms?
The original SPIRIT and CONSORT guidance was designed for the evaluation of therapeutic treatments (such as a drug or a surgical intervention), and so the AI extensions were conceived to identify and incorporate the additional or different challenges involved in evaluating AI interventions. Through discussion with all interested stakeholders, we are currently in the process of identifying all potential critical additions; however, we hypothesise that elements which will require detailed and specific reporting include the study setting and its ability to administer a machine learning intervention in real time, the criteria for inclusion at the input-data level as well as at the participant level, the interactions between the human and the algorithm and their potential knock-on effects downstream, and the effects of adaptive machine learning technologies (which have the potential to continuously improve in performance) [2].

How will this problem be addressed by the CONSORT-AI and SPIRIT-AI steering group?
The CONSORT-AI and SPIRIT-AI steering group has designed an international project to develop AI extensions to the existing CONSORT and SPIRIT checklists and guidance documents, which will focus specifically on clinical trials in which the intervention includes a machine learning or other AI component. Using the EQUATOR (Enhancing Quality and Transparency of Health Research) Network methodological framework for guideline development [3], the extensions will be produced in four stages: initial generation of additional items, two phases of Delphi participation, and a final consensus meeting to vote on the most accepted additions. Our initiative is complementary to the efforts of others working on reporting standards, such as the TRIPOD-ML initiative of Collins and Moons (TRIPOD: Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis), which seeks to improve the reporting of machine-learning-driven predictive model development and validation [4].

Are you going to involve all the relevant stakeholders in the consensus process?
In a consensus project like the CONSORT and SPIRIT AI extensions, the integrity of the output is directly related to the breadth of stakeholders who can contribute to the project. The CONSORT-AI and SPIRIT-AI steering group has given serious and lengthy consideration to ensuring that representatives from all identified stakeholder groups, and from a range of nations, are involved in initial item generation, as well as in the Delphi stages and the final consensus meeting. We can confirm that individuals from the following stakeholder groups have already contributed or agreed to take part (listed in no particular order): patient representatives, policy-makers (government bodies, medical bodies and research institutes), regulatory bodies, medical journals, AI developers and industry, methodologists, statisticians, trialists, AI standardisation groups, clinicians from a range of specialties, AI health research institutes, computational scientists, machine learning scientists, clinical/health informatics specialists, ethicists and research funding bodies.

Why do you consider the role of medical journal editors so important?
Any guidance document will only be successful if it is visible and can be easily applied to all relevant evaluations. Medical journals, represented by their editors, therefore play a critical role in the success of reporting and methods guidelines. They achieve this in two ways: 1) by participating in the generation and discussion of new CONSORT and SPIRIT items, journal editors allow us to incorporate the unique perspective of those experienced in seeing across the breadth of submitted and published AI research, as well as their extensive experience in implementing existing checklists with authors; and 2) medical journals play a substantive role in disseminating and publicising the existence of reporting guidelines and checklists, ensuring that authors around the globe see the checklists, as well as requesting that submitting authors use them.

Do you expect the guidance will have an impact on the FDA regulatory process?
As important stakeholders in the evaluation of healthcare interventions, we are engaging with several international regulatory bodies as part of the consensus process for producing the CONSORT-AI and SPIRIT-AI checklists. Changing or influencing current regulatory processes is not within the remit of this project. Instead, our central aim is to improve the reporting and design of trials used to evaluate the effectiveness of AI healthcare interventions, so that regulatory and health technology assessment bodies have access to an evidence base of sufficient quality to facilitate the introduction of effective AI interventions into healthcare.

References

[1] Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digital Health 2019;1:e271-97.
[2] CONSORT-AI and SPIRIT-AI Steering Group. Reporting guidelines for clinical trials evaluating artificial intelligence interventions are needed. Nature Medicine 2019, Sep 24.
[3] EQUATOR Network. Reporting guidelines under development. EQUATOR Network (accessed 4 August 2019).
[4] Collins GS, Moons KGM. Reporting of artificial intelligence prediction models. Lancet 2019;393:1577-9.

January 2020
