8: Diagnostics underpinning digital health

Sophie Brice, David Sly, Sing Chee Tan

Learning outcomes


On completion of this chapter, learners will have the capability to:

Key terms

consumer

diagnosis

digital

health

health and social care

medicine

technology

user

COMPASS statement

The COMPASS statement (Mather & Almond 2022) situates how Chapter 8 uses the implementation model. This chapter addresses Context, Person-centred and Analysis in particular as aspects of the COMPASS implementation model mnemonic.

  • Context—the circumstance of digital transformation and influence on health and health and social care safety and quality.
    • How the diagnostic process works
    • The evolution of technology facilitating digital diagnostics
  • Optimisation—evidence-based therapeutic actions, interventions or situations that influence effective use of digital technologies to prevent, manage or treat a health disorder or disease.
    • Impact of diagnostic technologies on diagnostic processes
    • Implementation challenges of digital diagnostic technologies
  • Model—representations of digital technology within health, healthcare and learning approaches that contribute to safety and quality.
    • Evaluating digital diagnostic technologies in context of their implementation
  • Person-centred—digital technology-enabled care that gives individuals the authority to better engage with, and control, their health.
    • How digital diagnostic technology implementation creates a drive for health and care to be person-centred
  • Analysis—a detailed examination of digital transformations in health and healthcare to understand the nature of or determine essential features
    • Evaluating digital diagnostic technologies in context of their implementation
    • Functionality and purpose considered as design startpoints and endpoints
  • Systematic—a transformative approach in digital health and healthcare that is methodical, repeatable and able to be learned as an organised approach
    • Considerations of ongoing drivers in the transformation of digital diagnostic technology in health and social care
  • Solution—resolving a concern using a transformative approach in digital health and healthcare
    • We invite the reader to take the learnings from this chapter forward and envisage what solution they see ahead of them—and aim for it!

Introduction

Technological advances have paved the way for rapid growth of diagnostic technologies to support the delivery of digital healthcare. This chapter aims to support the development of an informed approach to navigate and unpack the bewildering range of diagnostic technologies that are now available in digital health. The aim will be achieved by presenting the perspectives of technology, clinical application and the user experience in a holistic and practical discussion relevant to all participants in health.

The context for diagnostic technology: Understanding the diagnostic process

In this section, we explore how a diagnosis is reached and how understanding this process is foundational to the effective design and implementation of digital diagnostic technologies.

Understanding the concept of a ‘diagnosis’

The term ‘diagnosis’ is derived from two root terms: gnosis (recognise / know) and dia (apart). Diagnosis reflects both the process and the schema used to establish the right clinical condition that explains the user’s symptomatology (Balogh et al 2015b). It is crucial to recognise that a diagnosis is not the clinical condition itself (which has a pathological definition), but rather a decision allocating the most likely condition that aligns with the symptoms and signs the individual is experiencing.

The diagnostic process is complex, being shaped by numerous factors such as stakeholder values, political and economic factors, clinical judgment, available information and prevailing diagnostic criteria, all of which are variable. The need to weigh up competing priorities throughout this process highlights the importance of recognising that diagnostics goes beyond a particular test and requires ongoing clinician involvement.

What is the value of a diagnosis?

Clinical misdiagnosis is often identified as a safety and quality issue. However, the importance of establishing the right diagnosis extends much further.

At an individual user level, the right diagnosis is needed to:

  • ensure the right treatment is administered
  • ensure that the benefits of the treatments outweigh the associated risk
  • identify prognosis and planning
  • establish a follow-up plan.

At a public health level, where government and community organisations allocate resources towards large-scale intervention plans and funding decisions, the right diagnosis:

  • supports accurate reporting of health in the community that is used to inform public health decision-making
  • influences the results of public health intervention assessments; that is, measuring the impact or success of activities directed at public health.

It is important to recognise that deriving the right diagnosis may not always be critical. A diagnosis, being a piece of information, has value primarily in relation to the ability to act on it. The purpose of the diagnosis must be placed in the broader context of the user, to ensure that the information from a diagnosis can be acted on for the user’s benefit.

  • It may have limited impact on user outcome; for example, the diagnosis of prostate cancer in elderly men, where the slow progression of the cancer means that death is usually due to other causes rather than the cancer itself (Mistry et al 2009).
  • It may not have any impact on treatment decisions; for example, antenatal genetic testing for parents who would not consider an abortion.
  • The diagnostic process may have risks that outweigh benefits; for example, bone density scanning as a routine screening exposes the population to significant costs and radiation, with minimal evidence of improvement in clinical outcomes (Winzenberg & Jones 2011).

How is a diagnosis developed?

To design more effective diagnostic technologies in digital health, we need to explore how a clinician arrives at a diagnosis. This can be understood within a three-stage framework of:

  1. gathering data (data collection)
  2. analysing data (interpretation)
  3. making the diagnosis (action) (Fig 8.1).
Fig 8.1 The diagnostic process. Source: Balogh et al 2015a

Gathering data

The first part of the diagnostic process is an exchange of information. The clinical encounter takes place between the person experiencing symptoms, clinically (and in this text) referred to as the ‘user’, and the clinician. The encounter may be a conversation with a general practitioner, an introductory conversation upon arriving at a hospital, or any interaction with a health professional intended to assist with health and clinical needs.

The clinical encounter can be subdivided into three distinct phases (Balogh et al 2015a), designed to help guide the clinician towards the right diagnosis.

History.

This involves collecting information on the user’s current issue, past clinical history and social history, and is geared towards uncovering all relevant information. An aphorism in medicine is that the information collected during this phase is enough to establish approximately 80% of diagnoses.

Examination.

Physically examining the user comes next to identify signs that suggest a particular condition is or is not present.

Investigations.

Where information from the history and examination is inadequate, additional tests are ordered to help with the diagnosis. Although investigations are often construed as being key to a diagnosis, they are frequently not required. In addition, there is sometimes uncertainty in how an investigation or test result relates to actual user-relevant measures. For example, many healthy individuals with no back pain have an abnormal spine magnetic resonance imaging (MRI) image, which confounds attributing the back pain of some individuals to the appropriate cause (Brinjikji et al 2015).

Analysing data

Data collected during a clinical encounter needs to be analysed to identify the diagnosis. This involves transformation of clinical data to information (data in context), which is then used to derive a diagnosis.

There are various unconscious cognitive processes employed to match the clinical presentation with a diagnosis. The processes employed can be broadly aligned with popular descriptions of ‘system 1’ and ‘system 2’ thinking (Tay et al 2016, p. 2).

  1. Reflex pattern recognition (also known as ‘System 1’). This is a rapid cognitive process that relies on pattern recognition and trained responses to a given scenario. Clinical experience forms the basis of identifying the associations between clinical features and diagnosis. Computational prediction models based on historical data aim to emulate this process.
  2. Hypothetical deductive (also known as ‘System 2’). This is a deliberate, logical process that involves careful review of clinical data and reasoned interpretation of them to inform a diagnostic decision, and may include applying learned rules and principles in new ways. This mode of thinking is particularly useful for uncommon presentations or unfamiliar combinations of symptoms. It is difficult to automate and presents an important reminder of the value of human judgment and expertise.
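As a rough illustration of ‘System 1’-style pattern recognition, a prediction model can be sketched as a simple feature-matching score over candidate conditions. The condition names and feature profiles below are invented for illustration only; they are not clinical rules, and real models learn such associations from historical data rather than from hand-written lists.

```python
# Toy 'System 1' sketch: each candidate condition has a profile of expected
# clinical features; candidates sharing more features with the presentation
# are ranked higher. Profiles here are hypothetical, not clinical guidance.

CONDITION_PROFILES = {
    "influenza": {"fever", "cough", "myalgia", "fatigue"},
    "common cold": {"cough", "sneezing", "sore throat"},
    "pneumonia": {"fever", "cough", "dyspnoea", "chest pain"},
}

def rank_conditions(presenting_features):
    """Score each condition by the fraction of its profile present."""
    scores = {}
    for condition, profile in CONDITION_PROFILES.items():
        overlap = profile & presenting_features
        scores[condition] = len(overlap) / len(profile)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_conditions({"fever", "cough", "myalgia"})
print(ranking[0][0])  # → influenza (highest-scoring candidate)
```

Note what this sketch cannot do: it has no notion of prior likelihood, feature severity or unusual combinations, which is exactly where the deliberate ‘System 2’ reasoning described above remains essential.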

The cognitive processes required to convert data into useful clinical information potentially introduce bias and error. Although the use of algorithms embedded within automated decision support systems has been advocated to address these issues, there is a growing recognition of the inherent biases within algorithms and technology itself. This highlights the critical need for clinical judgment to be exercised in the analysis of data, and the ongoing need to ensure that this occurs in a context relevant to users and their families.

Making the diagnosis

Using clinical information to derive a diagnosis is an iterative process, which always involves a degree of uncertainty. This uncertainty is driven by several factors, including variations in:

  • pathology and disease process
  • human biology and the response to illness
  • user values
  • changes in domain knowledge of physiology and pathology
  • clinician experience and opinion.

A rational and consistent approach to evaluating the available evidence is therefore crucial. The following are the main principles a clinician will adhere to in making a clinical diagnosis.

  • A list of possibilities (also known as the ‘differential diagnosis’) is developed based on initial information. The list is refined as more information becomes available.
  • Additional data dynamically modifies the probability of a diagnosis—this data may increase or decrease the probability of individual diagnosis. Most clinical reasoning processes implement Bayes’ theorem (usually unconsciously) as a tool to alter the likelihood of a diagnosis, based on new information becoming available. Studies have often assessed unconscious, expert clinician ‘gestalt’ as being equivalent to detailed probabilities derived from epidemiological data.
  • Some diagnoses may be associated with more uncertainty than others, indicating the importance of a closed-loop system that allows for review of diagnoses and revision of the diagnosis as additional information becomes available. This has traditionally been a challenge in health and social care due to the limits of traditional clinical consultation (time and geographically constrained).
  • Where uncertainty exists, there is a need for the clinician to exercise judgment in how to act on incomplete information. This may include:
    • treating diagnoses that are unlikely, but potentially life-threatening
    • prioritising competing tests/investigations based on likelihood of the result changing the treatment strategy
    • creating a follow-up plan to assess for evolution of a condition for greater clarity of a diagnosis.

What all of this means is that there is no single measure (i.e. no single test or result) that establishes, with absolute certainty, the exact diagnosis. A final diagnosis is derived from an aggregate of the entire body of data that has been collected, interpreted through the lens of the clinician, who must then establish a diagnosis to be acted on (ideally ‘beyond reasonable doubt’). The technologies and tests available are the tools that support the collection of relevant clinical data that guides the diagnostic decision to be made.
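The Bayesian updating of diagnostic likelihood described above can be made concrete with a small sketch. The prevalence, sensitivity and specificity figures below are illustrative placeholders, not the characteristics of any real test.

```python
# Bayes' theorem applied to a diagnosis: a positive test result shifts the
# pre-test (prior) probability of disease to a post-test (posterior)
# probability. All numbers are illustrative only.

def posterior_given_positive(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    # Total probability of a positive result: true positives + false positives
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# A pre-test probability of 10%, with a test of 90% sensitivity and
# 95% specificity, yields a post-test probability of about 67%.
post = posterior_given_positive(prior=0.10, sensitivity=0.90, specificity=0.95)
print(round(post, 3))  # → 0.667
```

The point of the sketch is the shape of the reasoning, not the numbers: even a fairly accurate test leaves substantial uncertainty when the condition is uncommon, which is why a single result rarely establishes a diagnosis on its own.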

LEARNING ACTIVITY

Diagnosing the cause of chest pain

A heart attack is a condition in which the blood supply to the heart is blocked, and it requires urgent treatment to restore the blood supply and limit long-term damage to the heart. The treatment for a heart attack (called a percutaneous coronary intervention [PCI]) carries certain side effects, such as bleeding, damage to blood vessels and radiation exposure.

The main presenting symptom of a heart attack is chest pain, which is a very common complaint for many conditions.

The following users may all be presenting to the medical practitioner with chest pain.

  • An 18-year-old who was hit in the chest by a basketball while playing sport
  • A 25-year-old who has influenza and has been coughing badly and has chest pain whenever he coughs
  • A 40-year-old who fell off a ladder and broke his ribs
  • A 60-year-old whose parents and siblings all had major heart attacks in their 50s
  1. How does the additional information provided alter the likelihood of a diagnosis of a heart attack?
  2. Who would most likely be having a heart attack and would benefit from a PCI despite the risks?

Technological advancements facilitating digital diagnostics

Technology, particularly digital technology, plays an ever-increasing role in diagnostics. As information (data) is fundamental to informing clinical decisions, and as the amount of health data collected and stored digitally increases, so too does the reliance on digital technology in diagnostics. For the purposes of this chapter, we will focus on the role of digital technology in diagnostics, where the information gathered is in a digital format by design.

The application of a technology can change or lead to new objectives, making the purpose of a technology also important in recognising innovation. The case for innovation, therefore, is one of functionality and purpose. Next in this chapter, we will focus on the technology itself in the diagnostic process, exploring how advancements in hardware and software have enabled digital health technologies and their use in healthcare. We will also look at how these advancements impact the needs of users and stakeholders, with a focus on how they have driven change and innovation in health and social care.

Key technological developments

Technology has traditionally been separated into hardware and software; however, the evolution of these two components is linked. The invention of the silicon transistor and microchip in the 1950s made it possible to acquire and store information digitally in silicon switches as a series of 1 or 0 binary digits (bits) (Isaacson 2014). Prior to this, effective sharing and collaboration were limited by the impracticalities of printed or handwritten hardcopies, which could not easily be copied or shared, or support iterative writing, editing or analysis. The new digital ‘bits’ allowed information to be represented as numbers, and storage of and calculation on these numbers could be incorporated into health and social care information, which could be shared more easily. Digitisation also led to the development of software, which made data acquisition, data entry and calculations more accessible and increased the capacity to work with larger amounts of data more efficiently than was possible with hardcopy formats (i.e. paper and printouts). Digital technology thus depends on hardware, especially the microchip, with the software layer allowing increasingly rapid advances in what can be achieved with digitised data, including health and social care data.
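The digitisation described above can be shown with a toy example: a health measurement stored as bits can be copied, shared and computed on without any loss, unlike a handwritten record. The heart-rate value below is an arbitrary illustration.

```python
# Toy illustration of digitisation: a measurement stored as a sequence of
# binary digits (bits) can be decoded back to exactly the same value.

heart_rate = 72                      # a measurement, in beats per minute
bits = format(heart_rate, "08b")     # the same value encoded as eight bits
print(bits)                          # → 01001000
print(int(bits, 2) == heart_rate)    # → True: decoding is lossless
```

This losslessness is what makes the sharing, aggregation and automated analysis discussed throughout this section possible.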

Up until the end of the 1970s, most development of hardware and software technology was a result of the drive for military advances and space exploration (Isaacson 2014). From the 1970s until today, computing has made its way into people’s homes and their personal and working lives in what is now known as consumer technology. In the 1970s and 1980s, video games and consoles were the driving force for advances in computing hardware and software. In the mid-1980s, Nintendo and other companies developed wildly popular handheld video games.

From the 1980s, cellular mobile telephones launched onto the user market (see Box 8.1), and in the early 2000s companies such as Palm developed smartphones. Smartphones added additional sensors and faster wireless communications hardware, including Bluetooth and 2G (second-generation cellular network). The popularity of the internet and its application to health led to the term ‘eHealth’, with people seeing the potential for improvement in health via digitisation of health data platforms. From 2007, with the launch of the Apple iPhone, and later the Fitbit and Apple Watch, developers began to see the potential in smart devices as diagnostic platforms (mHealth). Both the eHealth and mHealth terms are less popular today and are largely incorporated under the ‘digital health’ umbrella term.

BOX 8.1

Foundational Technological Advances that Supported Advances in Digital Health Technology

  • Internet and telecommunications (particularly since the World Wide Web (WWW) in the mid-1990s) enabling interconnected devices and sharing of health data
  • Mobile phones, smartphones and apps (which advanced rapidly with the launch of app stores in 2008) to support the software component of wearable technology
  • Miniaturisation of sensors and hardware circuits and open-source development platforms such as Arduino, reducing the barrier to entry and innovation for start-ups, university spin-out companies, small-to-medium enterprises and large companies to rapidly prototype and advance new diagnostic technology
  • Computer-aided design (CAD) software existed by the 1950s; however, developments in the 1980s led to greater uptake and accessibility for prototyping of health devices, and more recently, the application of 3D printers has complemented the versatility of this tool
  • Big data, and in particular new software, enables aggregation, analysis and visualisation of health data; databases allow storage of huge volumes of information, and with this have arisen new software tools to allow extraction and useful visualisation of datasets

The consumer electronic device market has been a primary reason for new advances in hardware and, due to the size of the market, user demand. Advances in microchips, including central processing units (CPUs), graphics processing units (GPUs), wireless technologies, virtual reality (VR), augmented reality (AR) and the Internet of Things (IoT), have all been driven largely by demand for consumer products.

These advances have also facilitated the information technology revolution, where datasets rapidly became more accessible and connected via the internet. Networks became commonplace, initially as small, local networks within one building or organisation: an ‘intranet’. Later, networks covered larger regions, such as Minitel in France and broadcast teletext in the United Kingdom through the 1990s. These offered the general public their first opportunity to access a common network and ‘look up’ information such as a company phone number or cinema listings. By the end of the 1990s, the internet as we know it became common in households in many communities, quickly allowing global unified access to the same sources of information.

Hardware advances continue to feed into the technological growth cycle. The current phase of improvements in sensor technology, such as miniaturisation and better accuracy, enables new and increased functionality and innovation in purpose. Hardware improvements also continue to increase the amount of data that can be collected and stored, supporting the growth of data science. Access is a consequential advantage of digitising information, as people gain the ability to share and copy data and the capacity to automate calculations (e.g. analysis). Arguably, the greatest advantage to the birth of digital technology has been the ‘computational’ capacity gained that has paved the way for analysis of increasing complexity, such as in the realms of meta-data and artificial intelligence (AI) science.

The triad of improvements in data storage capabilities, advances in computational capability (particularly parallel processing [Gaurav 2020]) and better communication methods to transfer this data has allowed for complex analysis of large data sets using novel machine learning (ML) methods and neural networks for AI (Warner et al 1964). This has seen mass market applications in areas such as object recognition; for example, Facebook’s automatic tagging of faces and recommendation systems such as those used by Netflix.

Another more recent area of advancement is AR and VR. The idea of AR was brought to public attention in the science fiction novel Airframe by Michael Crichton (of Jurassic Park fame), in which aircraft maintenance workers could use a headset to overlay visual electronic information about parts of an aircraft being inspected, an idea that has since been adapted in many films (Crichton 1996). However, a much older conceptualisation we can all recognise is the Star Trek Holodeck, which was a VR and AR space used to investigate, explore and train. Application of AR accelerated with consumer technology, particularly with the release of Pokémon Go in 2016. The increasing availability of VR and AR technologies for the mass market, such as the Microsoft HoloLens, has raised interest in their application to health and social care (Thomas 2021).

These developments in hardware, information systems and computational capabilities form the backbone for recent advances in digital technologies for the purposes of diagnostics, as we will explore in the next section.

LEARNING ACTIVITY

Big data analytics in healthcare

IBM Watson Health (now Merative) was initially a multibillion dollar attempt at bringing AI to healthcare. To account for the rapidly changing nature of healthcare practice, the AI was given synthetic, hypothetical user data (as compared to real user data) to ‘train’ itself. As a result, it provided diagnostic and treatment recommendations that were unusual and sometimes unsafe.

  1. How do you think the clinical professionals testing Watson felt once they discovered it was developed using artificial user data?
  2. How would it impact their trust and interpretation of Watson’s recommendations?

Application of digital technologies in healthcare diagnostics

In the past decade, the companies most responsible for these advances in technology, such as those developing mobile telephones, the internet, apps, software, sensors and consumer electronics, have become much larger in market size than clinical device companies. These user product companies have recently begun advancing their own digital health technologies and strategies more rapidly. Apple, Samsung, Google, Microsoft, Meta and Amazon all have digital health technologies in software and/or hardware, a large number of patents in digital health diagnostics and a recognition that this is an enormous market.

These advances have included the development of ‘wearables’ with a focus on health, such as the Apple Watch and Fitbit. In many cases, wearables collect both health and clinical diagnostic information. These advances have also meant that some diagnostic technologies have been able to move from the specialised hospital and clinic to general practice, ambulatory monitoring, as well as home and consumer use. For example, accelerometers now fitted in hearing aids combined with heart rate detectors have been linked to create alert systems (Tan & Goyette 2019) (see Box 8.2).

BOX 8.2

Evolution of Electrocardiogram (ECG) Technology

The following are examples of the recent shift to diagnostic digital health technology deployed in the home.

  • Holter monitoring technology for remote capture of ECG data has been available since 1949. However, it is only in recent decades that this information has been captured and stored digitally.
    • The hardware has shrunk in size, becoming more comfortable for users and increasing the storage capacity. This has allowed longer recording periods and more data to be collected.
    • Recently, more digital Holter monitors include full 12-lead rather than 3-lead readings, allowing for more granular ECG recordings to improve diagnostic utility (Fig 8.2). More cardiac leads allow clinicians to better diagnose the precise location of electrical issues in cardiac conduction.
    • As the data is digital, companies supplying the device have also developed service models where the data is logged and streamed in real time and teams of data scientists operating remotely are able to analyse the data.
  • Apple Watches can include sensors for detecting oxygen saturation, heart rate, ECG and falls.
    • Initially, the sensors used in phones and smart watches for falls detection (accelerometer) and heart rate (photoplethysmography) were for gaming or wellness applications. FDA approvals were gained later, and these sensors then found application and use in medical diagnostics.
    • Leveraging user-orientated health technologies can thus lead to medically diagnostic information. For example, both heart rate and ECG sensors now found in some wearables can detect atrial fibrillation.
Fig 8.2 Comparison of 3-lead and 12-lead ECG

Sensor hardware technology has been evolving rapidly in recent years, making this area a prominent driver of mobile diagnostics. Sensors essentially convert one form of energy into another. In health technology, sensors may detect chemical, physical or other physiological changes and almost always now convert these into electrical signals that can be digitised into data by a silicon chip. Given that computing, smartphone and other mobile digital platforms now see largely incremental improvements in storage and speed, it is the addition of new sensors to these platforms that is rapidly adding new diagnostic capabilities. Thus, in recent times smart watches have added photoplethysmography (PPG) sensors to detect heart rate and oxygen saturation, electrical sensors to detect ECG and accelerometers to detect falls. There is rapid competition to add further sensors and diagnostic capabilities to these platforms, such as for seizure detection and blood glucose monitoring. Other examples include the deployment of ‘smart user’ hospital beds, particularly in the United States, that can monitor vital signs.
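To give a sense of how a raw sensor signal becomes diagnostic information, the sketch below estimates heart rate from a PPG-like waveform by detecting pulse peaks. The synthetic sine wave stands in for a real sensor trace; actual PPG processing involves filtering, motion-artifact rejection and validated algorithms well beyond this toy example.

```python
import math

# Minimal sketch: estimate heart rate from a PPG-like signal by counting
# pulse peaks over the recording window. The signal here is synthetic,
# standing in for a real photoplethysmography trace.

FS = 50          # sampling rate in Hz
DURATION = 10    # recording length in seconds
PULSE_HZ = 1.2   # 1.2 pulses per second = 72 beats per minute

signal = [math.sin(2 * math.pi * PULSE_HZ * n / FS)
          for n in range(FS * DURATION)]

# Count positive local maxima as pulse peaks
peaks = [i for i in range(1, len(signal) - 1)
         if signal[i] > 0 and signal[i - 1] < signal[i] >= signal[i + 1]]

bpm = 60 * len(peaks) / DURATION
print(bpm)  # → 72.0
```

The same pattern of sense, digitise, analyse underlies the smart-watch capabilities described above; what varies is the sensor and the sophistication of the analysis.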

Many of these sensors are in standalone diagnostic devices in hospitals, clinics and now wearable devices. We are also beginning to see sensors located in the environment, the hospital or the home that can monitor health and wellbeing and link remotely to a database for clinical management (CSIRO 2021). Sensors and digital devices are also driving an increase in point-of-care (POC) diagnostics. POC testing occurs outside a laboratory setting and provides immediate results; settings include bedside testing, remote testing, mobile testing and rapid diagnostics.

Digital health technologies have put diagnostic technologies in the hands of more health professionals, across varied roles, as well as in the hands of consumers. Examples include Seer Medical’s electroencephalogram (EEG) and ECG sensor systems and platforms. Seer’s platform allows in-home 24/7 monitoring of neurology users with suspected epilepsy. While in-home Holter monitoring has been available for many decades, systems such as Seer’s are fully digital in that all the data can be harvested in real time; modern analytics and ML can help process the data to inform diagnostics and allow data mining for future algorithms that aim to predict seizure onset and improve the lives of users (Stirling et al 2021).

At present, most existing and newer digital health technologies are ‘assistive’ to the clinician; that is, they provide opportunity for greater data gathering. However, they often require clinical intervention or interpretation. The data gathered may not always be of immediate use for a diagnosis; however, as we have learned so far, it is the information yielded from a technology that contributes to the diagnostic process rather than the technology itself. As such, a technology may be considered part of a diagnostic toolkit if the information it provides can lead to diagnosis.

The application of computational methods to healthcare diagnostics has been explored since the 1960s (Warner et al 1964), but has recently seen a resurgence due to the advances discussed earlier. Developments such as the widespread uptake of electronic medical records (EMRs), and the ability to capture and store high-resolution radiology and pathology images, are now providing the essential datasets for developing ML and AI algorithms. For example, there is growing interest in the application of object recognition technologies in areas such as radiology and histopathology for automated diagnostics (Dikici et al 2020, Ibrahim et al 2019), and the use of real-time user data analytics for clinical decision support (Wachter 2015).

The impact of information exchange systems has been particularly significant in radiology, with the development of picture archiving and communication systems (PACS) facilitating access and interpretation of radiology images such as x-rays. The digitisation of these images has also allowed for the development of AI and ML tools to facilitate automated image interpretation (Dikici et al 2020, Huang 2011).

The progressive digitisation of health information within EMRs across Australia has provided an opportunity for automated analysis of this data by clinical decision support systems (CDSSs). CDSSs may range from basic (e.g. highlighting test results that are out of a predefined normal range) to more complex (e.g. prediction algorithms to identify users who are at risk of developing an illness).
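A basic CDSS rule of the kind just described can be sketched as a reference-range check over incoming results. The test names and ranges below are illustrative placeholders, not clinical reference values; a real CDSS would draw validated ranges adjusted for age, sex and context from the EMR.

```python
# Basic clinical decision support sketch: flag test results that fall
# outside a predefined reference range. Ranges are illustrative only.

REFERENCE_RANGES = {
    "potassium_mmol_L": (3.5, 5.2),
    "haemoglobin_g_L": (130, 180),
}

def flag_abnormal(results):
    """Return the names of results outside their reference range."""
    flags = []
    for test, value in results.items():
        low, high = REFERENCE_RANGES[test]
        if not (low <= value <= high):
            flags.append(test)
    return flags

print(flag_abnormal({"potassium_mmol_L": 6.1, "haemoglobin_g_L": 145}))
# → ['potassium_mmol_L']
```

The more complex prediction algorithms mentioned above extend this same flag-and-alert pattern, which is why the alert-fatigue and trust issues discussed later apply across the whole CDSS spectrum.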

While there has been significant interest in the role of computational analysis via ML or AI to aid in complex diagnostic decision-making, there remain numerous barriers to effective deployment of these technologies. These include variable data quality, restricted access to ‘siloed’ EMR interfaces and data, and limited clinician trust of ‘black-box’ algorithms (i.e. where the processing of data is opaque, leading to uncertainty about how the algorithm’s recommendation was derived). Additional barriers to the effective deployment of CDSS technology pertain to the underdevelopment of the human–computer interface and understanding of how to effectively deliver the output of advanced analytics in a manner that supports clinical workflow and decision-making (e.g. minimising unnecessary alerts to avoid alert fatigue).

Nonetheless, the use of ML and AI to leverage big data in healthcare may continue to play a large part in future advances in digital health diagnostics. As more data is aggregated from an array of clinical records and health sensors, logged across time, there will be growing opportunities for computational methods to analyse these datasets for diagnostic purposes at various levels—ranging from individual risk prediction, to diagnostic process optimisation and even accelerating the development of diagnostic technologies. The confluence of these computational advances, with novel designs for the human–computer interface (such as AR), presents potential opportunities for improving the diagnostic process. The impact these technology developments can have on the diagnostic process is at an early yet very exciting stage!

The impact of digital technologies on the diagnostic process

We have learned that digital health technology innovations can challenge and shape various aspects of the diagnostic process (Fig 8.3). Modern technologies both support and create smarter applications and clinical capabilities. Their impacts are broad, and can be grouped into three vital areas (context, data gathering and the cognition process) to understand how digital health technologies can underpin shifts away from traditional practices towards the future of diagnostics.

Fig 8.3 Potential impact of digital technologies on the diagnostic process. Source: adapted from the National Academies of Science.

Impact on context

Context refers to the interpersonal relationships and environmental factors that shape the diagnostic process, and can be influenced by digital technologies in the following manner.

  • Clinician-derived knowledge to user knowledge
    • Individual clinicians are no longer the sole source of information on diagnostic knowledge. Digital technologies such as internet search engines, databases and mobile applications provide increasingly tailored information to users and their families.
  • Expert-driven to multidisciplinary team
    • As digital technologies become increasingly complex, there is a need for robust engagement between domain-specific experts to understand and interpret the data being generated.
  • Illness-based to health-focused encounters
    • As diagnostic technology increasingly becomes available to the public outside the physical clinical environment, individuals start using these technologies without necessarily experiencing illness.
    • The end user of technology is no longer just the user (as the individual experiencing illness), but increasingly also the consumer (an individual who is otherwise well and wanting to improve health and wellbeing) and their families.
  • Time and geographically constrained to real-time and mobile diagnostics
    • The miniaturisation of monitoring technologies has improved accessibility to diagnostic tools, as they become increasingly portable and even embedded within publicly available consumer products (e.g. oxygen saturation [SpO2] sensors, ECGs in phones and watches).
    • This enables data collection beyond the space and time of the clinical consultations.
    • Digitisation means that information can be communicated, accessed and shared independently of the clinician–user interaction. The clinical decision merely requires access to the information rather than to the tool.
  • Population epidemiology to personalised care
    • The 20th century push for evidence-based medicine was, by necessity, based on a ‘law of averages’; that is, statistical means and averages from populations were used to establish test performances and treatment effect. This was a necessary approach to establish statistical relationships.
    • However, the increasing volume of personalised data allows for unique clinical profiles to be generated that may account for heterogeneity of treatment effect and allow for the delivery of more personalised treatment recommendations. In some ways, this is the return of the previous style of the old ‘physician notebook’, where clinicians would record the peculiarities of each user they saw and use it to personalise treatments.
  • Expansion of roles to users
    • While diagnostic tools have traditionally been applied to patients (someone experiencing illness who seeks healthcare), the increasing availability of these tools outside the illness context has engaged a broader range of consumers (individuals who use diagnostic technology outside of a clinical consult), both described as users in this textbook.
    • These shifts to inclusive practice may be driven by a market potential; however, there are important positive implications for modern health and social care, such as fulfilling the ideals of person-centred care (Eberts 2019) and providing opportunity for innovation in health and social care.

Impact on data gathering

Digital technologies can impact the process of gathering data that support the diagnostic process in a number of ways.

  • Taking the initial history no longer means asking users about their issues from a blank slate. Information technologies allow retrieval and synthesis of information from a broad range of sources, with consumers asked only to fill in the gaps. This reduces one of the greatest frustrations of a health and social care user: having to relay the same information over and over!
  • The miniaturisation of technology increasingly allows in-depth assessment of the user’s physiology as part of the physical examination. This has blurred the distinction between ‘investigation’ and ‘examination’, allowing for more rapid assessment of user pathology for decision-making.
  • Rather than the post-consult being a traditional ‘review in 3 months’, the diagnostic process may continue to be refined in the time before the next face-to-face review. Data from community-based monitors may be uploaded and made available for clinicians and users/consumers, allowing for refinement of the diagnosis and treatment plan.

Impact on cognition process

Digital technologies may influence the cognitive processes that are used to formulate a diagnosis, regardless of whether ‘fast’ or ‘slow’ cognitive processes are used. For example, early machine learning/AI models were often examined in ‘human versus machine’-type studies, but failed to demonstrate a significant improvement over expert clinician diagnosis. This has led to a reconsideration of the role of these models, such as augmenting diagnostic processes to minimise error and bias in diagnosis. This may involve:

  • recommendation systems; for example, ML algorithms that suggest diagnoses that the clinician may have failed to consider
  • smart information systems that organise large volumes of information to support interpretation; for example, pathology systems that provide a visual cue for abnormal results.

What all of this means is that in digital health, diagnostic technologies have introduced innovative functions and capabilities into the diagnostic process, many of which were previously considered too complex or impractical for routine clinical practice. Vast arrays of new information can now be gathered, but accurate interpretation of data to achieve a diagnosis remains a crucial process (see Box 8.3).

BOX 8.3

Technologies that have impacted Diagnostic Service Provision

The following examples demonstrate the user and clinician as partners in the diagnostic process, in ways that were market-led first and practice-changing later. These evolving relationships further drive innovation in health and social care.

  • Hearing aid amplification design that paved the way for self-led hearing screening. An Australian invention of an amplification strategy in hearing aids was driven by the understanding that hearing is a medically and perceptually subjective experience (Blamey 2005). An alternative method for programming hearing aids allowed the person to drive the process, accommodating their experience-based judgment in matching what they hear with what they desire. Clinically and practically, this method uncoupled the clinician from the programming process to enable inclusion of personal experience-based judgment, and was arguably the root of person-centred processes in this field (Brice & Saunders 2020, Saunders 2019). The use of person-led data was further expanded in this system to incorporate self-led SPTs as part of the self-led fitting process (Blamey & Saunders 2015, Blamey et al 2015). A self-fit hearing aid technology was born. Australia has continued to lead with self-led triage and screening tests, designed to work as smoothly with self-led tools as they have with traditional clinician-led tools (Dillon et al 2016, Hearing Australia 2021, Hearing Test 2021).
    • Innovations that were initially software-based created a need for specialised hardware and system support.
    • Revising the roles of person and professional as tasks were reassigned challenged service design in a bottom-up call to action.
  • Self-report to assist self-led triage and screening. The reality of self-fit hearing care, and the commercial growth of products that do not require clinical input, highlighted the need to assist screening and triage of disease risk for those who may not previously have sought clinical guidance. The Consumer Ear Disease Risk Assessment was created and validated for the purpose of assisting appropriate referrals towards clinical guidance (Kleindienst et al 2017, Klyn et al 2019). This tool has been adopted by multiple providers of hearing care products that support direct-to-consumer access. These providers can then ensure that the self-led user receives the advice appropriate for them, and that their data is recorded for clinical review and able to be shared with the user, supporting the user in finding the clinical guidance they may need. By linking user data to the clinical encounter, this system fulfils the role of providing diagnostically relevant information.
    • A call to action from commercialisation of user-led hearing care challenged how to achieve appropriate triage and assure early screening potential. The innovation response was the use of digitally inclusive design.
  • Mobilisation of clinical tasks challenges the ethical usage in practice. The transthoracic echocardiogram (TTE) is a diagnostic task that traditionally requires a specially trained technician to operate. Individuals spend several years training for this role, so the use of a TTE technician requires planning within multidisciplinary healthcare teams (Liebo et al 2011). The design of a pocket-model echocardiography device (PME), now available commercially, has made the task far easier, so that medical professionals can perform their own TTE rather than utilise a technician. In practice, physicians who are able to perform the TTE themselves may do so; however, as it is not their speciality, any liability incurred is a serious problem for a physician who was not qualified to be performing the task. Removing the barrier of who can access and perform a TTE may allow a physician to complete their diagnosis and treatment planning more efficiently, in turn shifting to a more multidisciplinary application instead of a siloed, speciality-restricted one. This raises questions about balancing the efficiency of reaching a diagnosis with a multidisciplinary but not necessarily specialised team against the use of a larger, more specialised team where fewer liability concerns exist. Liability is thus an important factor that must not be overlooked when the application of a technology affects decisions on who should, not just could, perform a clinical task. Very clear roles and responsibilities (i.e. clinical governance) are needed for all stakeholders!
    • Innovations were in hardware, and led to ethical questions of application design and workforce planning.
  • Assessment and administrative burden unpacked from barriers to early detection. Mild cognitive impairment (MCI) is very difficult for primary care providers to diagnose, and can also create assessment burden in healthcare when inappropriate referrals are made. Leveraging the ease of use and access of self-led surveys, an online early screening tool, MyMemCheck, was developed to support more efficient triage and facilitate appropriate referrals (Mansbach et al 2020).
    • Call to action from clinical burden and inefficiencies challenged how to achieve appropriate triage and assure early screening potential. The innovation response was using digitally inclusive design.

LEARNING ACTIVITY

Digital diagnosis of seizures

There are several in-home and wearable devices that now aim to detect and predict seizures in users. Such devices include the Embrace2 wristband by Empatica, approved by the US Food and Drug Administration (FDA). Accurate and early detection and prediction of seizures could save lives by alerting caregivers to immediate or imminent danger for users.

Attaining perfect accuracy in automated detection and prediction of seizures is difficult.

  1. What do you think are the potential dangers and risks of false diagnosis of seizures by a digital device?
  2. How accurate should seizure detection and prediction devices be before they are released to market?
  3. How do we determine such accuracy?

Evaluating digital health technologies

Prior to implementing any digital health technology for diagnostic purposes, a robust evaluation process is needed to determine whether it is fit for purpose. ‘Purpose’ in this case varies according to the stakeholder lens being used for evaluation, as stakeholders bring differing priorities for the diagnostic technology to address.

Specific stakeholder views to be considered are:

  1. the clinician
  2. the user or their family
  3. the business analyst/project manager
  4. the technologist/engineer
  5. the governing agencies.

Sources of evidence

Evaluation of any diagnostic technology requires assessment of the available evidence base; the various perspectives outlined above will require and use different types of evidence. The choice to create different types of evidence also impacts the technology itself, ultimately shaping how it could and should be applied in digital health.

  • Evidence-base types vary depending on whether a digital device provides a true clinical ‘diagnostic’ outcome or is limited to providing guidance about health and wellness.
  • There are a vast number of sources to use for evaluating diagnostic technologies, all with varying purposes based on the intended audience. This includes (but is not limited to):
    • the Therapeutic Goods Administration, focused on the regulation of technology within the Australian context
    • published literature (e.g. PubMed) focused on the scientific assessment and validation of the technology
    • industrial publications (e.g. white papers) focused on providing key information relevant to a product or service offered by a company.
  • Despite the guidelines of the regulatory bodies, it is not clear if consumers or clinicians are always aware of the evidence base or regulatory level of newer digital health products and diagnostics.
  • Device companies now often develop strategies whereby they market products first in the health and wellness space while collecting data and conducting research with the device. They may then seek regulatory approval to claim a diagnostic level for the digital device.

Perspectives

The decision to use a technology, as well as to evaluate whether to keep using it, has varying aims depending on who the stakeholder is and what context the technology is being evaluated for. We will break down five key perspectives and describe the elements and priorities of evaluation for these.

Clinician

Clinician assessment of diagnostic technologies would focus on the following.

  1. How well does the diagnostic technology perform?
  2. Does the technology improve clinical outcomes?

The performance of diagnostic technologies should be evaluated in well-designed studies that detail the technology’s performance against a clearly defined gold standard diagnostic test, and provide appropriate outcome measures such as sensitivity, specificity and accuracy. Although adequate performance is a key requirement for any diagnostic technology, it must also be demonstrated to ultimately improve relevant clinical outcomes. Relevant clinical outcomes reflect actual improvement in health, and may include measures such as mortality, length of illness or length of hospital stay.
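These performance measures follow directly from the four cells of a confusion matrix comparing the technology against the gold standard. A minimal sketch, with illustrative counts only:

```python
# Sketch: computing sensitivity, specificity and accuracy from the four
# cells of a confusion matrix. The counts below are illustrative only.

def diagnostic_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # proportion of true cases detected
        "specificity": tn / (tn + fp),   # proportion of non-cases correctly cleared
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# e.g. 90 true positives, 10 false negatives, 20 false positives, 180 true negatives
print(diagnostic_performance(tp=90, fp=20, fn=10, tn=180))
```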

This concept implies that the mantra of ‘more data is always better’ may not hold true in diagnostics, as the data needs to be acted on to achieve a change in outcome. Unfortunately, the best way to act on additional information is not always clear.

For example, the potential disconnect between ‘more data’ and a meaningful relationship to clinical outcomes was demonstrated by the PACMAN trial, which examined the use of a specialised monitoring device, the pulmonary artery catheter (PAC), in intensive care. The PAC provided huge amounts of data that in theory would help doctors better diagnose and manage sick patients. However, this high-quality study showed that patients had the same outcomes regardless of whether a PAC was used, ultimately leading the PAC to fall out of favour as a routine diagnostic tool in the ICU.

More recently, patient-reported outcome measures (PROMs) such as quality of life and level of function are of increasing interest as outcomes. These can be challenging to measure due to the complex interaction of factors that may lead to a change in these outcomes.

Users

Users are those who experience the health and social care journey. While users cannot be assumed to be trained or knowledgeable in the aspects that shape the care they will receive, they are, and always will be, the experts in their own experience. This in turn has implications for what information is sought, valued and accepted by the user in their health choices. Market-driven, populist views and economic information are examples of non-scientific information that can form part of users’ ‘accepted’ information. It is of increasing importance to accept the reality of the user’s perspective.

An example of this is the COVID-19 rapid antigen test (RAT) versus the polymerase chain reaction (PCR) test. The arrival of the COVID-19 pandemic has made most of us familiar with the forms of testing for the virus. The common options are a PCR test, which requires an uncomfortable swab to be taken, or a more comfortable yet less accurate RAT, which requires a less-invasive swab and is suitable for self-administration. When considering what steps to take in seeking a COVID-19 diagnosis, the consumer will weigh the cost of the test, access to the test, the time implications for travel, waiting and work, and turnaround time, beyond simply knowing that the PCR test is more accurate and reliable.

Successful application of a diagnostic technology impacts the validity of the data produced, contributing to the evidence base for the technology and, ultimately, the success of the technology itself. A brilliant technology loses its value if it is too difficult to use or implement. From the user perspective, factors affecting compliance become very important to consider. Technology that affords consumers control over their healthcare journey opens a potential market comprising the much larger group of potential users, rather than the much smaller population of highly trained and specialised health professionals. The fact that a health professional will have exponentially more opportunities to use a tool may be overshadowed by the number of users who may use a tool only once.

The digitalisation of diagnostic technology here has two core domains to consider:

  • opportunities to keep users engaged in the diagnostic task—user experience design for app design, considerations in monitoring, business model for associated support services (especially in chronic health management)
  • data management for the healthcare professional (HCP)—ease of sharing data to reduce personal load between user and HCP, integration with clinic systems and many more aspects covered in this textbook.

The user and the clinician are both users of a diagnostic technology; however, without the user on board, there will ultimately be no viable uptake.

The designer

The development of any product is a process that requires input from stakeholders who, by contributing to the design, are in effect designers, regardless of their qualified or allocated position; the allocated designer is not the only one who shapes the product. All stakeholders have differing wants, needs, objectives and perspectives, meaning the initial idea or intention for a product is only one part of a process in which many iterations of development can change the purpose, intended user, implementation or objective altogether. The Australian innovation of using consumer-led data to prepare hearing aid fittings was a novel step in redesigning clinical digital testing technology for a consumer, not a clinician. This change required a user experience-led design brief in which the user was the consumer, which was not common practice at the time. The application of the testing tool had two facets to its evaluation: scientific evaluation for clinical acceptance, and usability and access for consumer acceptance. The design of this tool therefore evolved from its initial scientific design needs to a much longer post-implementation phase of application-based design needs that would not compromise its scientific integrity (Blamey & Saunders 2015, Blamey et al 2015).

Governing agencies

Governing agencies have a variety of roles in relation to the evaluation and approval of diagnostic technologies, such as safety and efficacy, as described in the previous sections. A key area of assessment, particularly in the context of the Australian taxpayer-funded universal health system, is a health economic assessment. A robust health economic evaluation would include a balanced assessment of the potential costs involved in providing subsidies to access this technology, weighed against the benefit in terms of improvement in population health. An example is the introduction of subsidies for continuous glucose monitors for diabetes, which has been demonstrated to reduce diabetic complications and associated costs with managing those complications. However, subsidies are only available for certain diabetic populations, where the potential long-term benefit to improving health and social care outcomes is greater.

Clinical devices and diagnostics are tightly controlled by regulatory bodies such as the TGA in Australia and the FDA in the United States. Clinical diagnostic devices monitor and assist the diagnosis of disease; in most cases, a medical professional still makes the diagnosis. Such diagnostic devices include medical imaging (MRI, CT, PET, ultrasound), clinical monitoring devices (vital signs monitoring) and pathology tests. For a clinical device or diagnostic technology to be approved by the regulatory authority, clinical efficacy and safety data must be supplied. Data on accuracy, precision and sensitivity/specificity of testing compared with the most relevant ‘gold standard’, along with receiver operating characteristic (ROC) curves that map where acceptable cut-offs for decision-making lie, are all factored into the evaluation by the regulatory bodies. In recent years, with the rise of digital health, many companies have rapidly developed wearables and software that may claim a diagnostic-level capability without having passed through such regulatory approvals. Recently, the FDA and TGA have provided updated guidelines about digital health products and the level of evidence required to justify a diagnostic (or treatment) claim. Software as a medical device (SaMD) has required new guidelines.
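The idea behind an ROC analysis can be illustrated with a small sketch that maps candidate cut-offs to their sensitivity/false-positive trade-off. The scores, labels and thresholds below are invented for illustration, not real device data:

```python
# Sketch: mapping candidate cut-offs to points on an ROC curve.
# Each point pairs sensitivity (true positive rate) with
# 1 - specificity (false positive rate). Data is illustrative only.

def roc_points(scores, labels, thresholds):
    """For each threshold, classify score >= threshold as positive and
    return (threshold, TPR, FPR) tuples."""
    positives = sum(labels)
    negatives = len(labels) - positives
    points = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((t, tp / positives, fp / negatives))
    return points

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]   # device output per subject
labels = [1,   1,   0,   1,   0,   0]     # gold-standard diagnosis
for t, tpr, fpr in roc_points(scores, labels, [0.2, 0.5, 0.85]):
    print(f"cut-off {t}: sensitivity {tpr:.2f}, false positive rate {fpr:.2f}")
```

A lower cut-off catches more true cases at the price of more false positives; the regulator's question is which point on this curve is acceptable for the claimed intended use.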

What all of this means is that the intended use is crucial in defining how a technology should be evaluated as a diagnostic tool. A digital health technology may well prove useful for a task other than what it was designed or intended for, or for a stakeholder other than the intended user. Any given context with a digital diagnostic technology has many potential factors that may change in importance as the circumstance, application or outcomes change. Wherever this occurs, it is thus important to recognise the innovation at hand and qualify it wherever possible for clarity and guidance on how a technology should or should not be used.

LEARNING ACTIVITY

Arrhythmia detection by smart watches—how do we evaluate technology?

Smart watches equipped with ECG sensors (see Box 8.2) are increasingly common and are validated in their ability to detect arrhythmias such as atrial fibrillation (AF). With most smart watch users being from a younger demographic, this ability to detect AF in young, otherwise healthy individuals is novel and presents significant challenges in clinical management.

Since AF is a known cause of strokes, the assumption is that treatment will be beneficial. However, the treatment involves blood-thinning medications that can cause significant, and sometimes life-threatening, complications such as bleeding into the brain or stomach. Furthermore, because most strokes occur in the elderly population, the majority of the research on the use of blood-thinners is in this elderly age group; we do not know if the benefits of giving blood-thinners to younger people with AF outweigh the risks.

  1. How useful is a diagnosis if there is no proven treatment?
  2. What do you think are the issues in using research data on AF treatments from elderly users and applying it to younger users?
  3. In the absence of clear research evidence, how can clinicians partner with users to interpret (and act on) novel diagnostic data?

Implementation

The act of putting a decision into effect is called implementation. A digital technology operated by a consumer rather than a medical professional is an opportunity to assess how useful and effective a tool can be. Information must first be gathered, by the consumer or the professional, before the most relevant information can be extracted as part of, or to complement, the clinical encounter. For the technology to gather appropriate information at all, many factors of implementation equally shape how effective it can be for its intended use, regardless of the planning, design and validation that led to the creation of the technology in the first place. Technology is a tool to gather and provide information. This section of the chapter will consider the implementation of technologies that could provide diagnostically relevant information.

Sociotechnical considerations

The ‘sociotechnical’ model recognises that technology is never used in isolation; it is deployed within a complex network of interacting social elements, including people and culture. These social elements influence the way a particular technology is used, and should be addressed to facilitate uptake of the technology for improving health outcomes (Carayon et al 2011). This requires consideration of the intended purposes of the technology and the context into which it is deployed, which then influences how the same technology may be used in different ways.

The intended purpose of digital diagnostic technologies can be grouped as follows.

  • Sensing—technology-dependent measurement; for example, blood glucose monitoring equipment
  • Testing—for example, speech testing apps
  • Analytics—interpretation of data to provide a recommendation; for example, ML-based analysis of chest X-rays to diagnose COVID-19
  • Tracking—historical trends; for example, self-report surveys of symptoms

Depending on how a diagnostic technology is used, more than one of these groups can apply to any given task.

An example relates to blood glucose: measuring blood glucose over the 2 to 4 hours after a meal assists the diagnosis of diabetes by capturing the rate and peak of the blood glucose response and its recovery over that period. However, if a person is at high risk of gestational diabetes, continuous monitoring of blood glucose over the weeks when onset of gestational diabetes is most likely can be used to alert a clinical team as soon as symptoms start. This reduces the risk of harm to the fetus by initiating treatment or management of the condition before the fetus is exposed to dangerous levels of blood glucose. Speech testing apps can likewise be used as a screening test, or periodically to monitor for changes in hearing over time, or in the case of sudden hearing loss, which has a vital window of 72 hours in which to seek clinical attention to recover the loss. Sensor technologies are increasingly able to perform continuous monitoring, allowing use as a screening tool as well as a tracking tool to detect change. The capacity to manage high volumes of data supports the ability to perform more readings, such as continuous monitoring set-ups, turning what once would have been impractical into a manageable and powerful tool.
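The continuous-monitoring alerting described above amounts to a simple threshold rule applied to a stream of readings. A minimal sketch; the threshold and readings are illustrative assumptions, not clinical values:

```python
# Sketch of a continuous-monitoring alert rule: notify on the first
# reading that crosses above a threshold, not on every elevated reading.
# Threshold and readings are illustrative only.

def glucose_alerts(readings, threshold_mmol_L=7.8):
    """Given (timestamp, mmol/L) readings in time order, return the
    timestamps where the value first crosses above the threshold."""
    alerts = []
    above = False
    for ts, value in readings:
        if value > threshold_mmol_L and not above:
            alerts.append(ts)
        above = value > threshold_mmol_L
    return alerts

readings = [("08:00", 5.2), ("09:00", 8.4), ("10:00", 8.1),
            ("13:00", 6.0), ("14:00", 9.0)]
print(glucose_alerts(readings))  # → ['09:00', '14:00']
```

Alerting only on the first crossing, rather than every elevated reading, is one simple way to limit alert volume for the clinical team.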

Translational science

Translational science is the discipline of using observations, whether collected clinically or at a community level, to inform and guide the development of interventions that aim to improve the health of individuals and the population at large. Observations in this manner can impact diagnostics, therapeutics, clinical procedures and more. The growth of technology that allows self-tracking or measurement of human health variables has been discussed since 2010 under the idea of the ‘quantified self’ (Swan 2013, Wolf 2010). A clinical problem for the quantified self has been the application of data gathered from a vast array of technologies, each based on a sample of one (Mirza et al 2017). Statistically, data collected from technology designed for the common consumer, where each application involves a subject pool of one, is challenged by the lack of standardisation when trying to analyse or interpret across anything more than the single subject. Digital health technologies do, however, show great promise in overcoming two key barriers to quantified-self data contributing to translational science (Wolf & De Groot 2020). The first barrier is data sharing: the ability to effectively aggregate data from multiple consumers is constrained by security and privacy factors, which are discussed elsewhere in this textbook. The second is support for using and applying technologies at large, in the hands of the consumer, in ways that are clinically helpful and improve the path to clinical validation. Beyond these barriers lies a huge potential for the personal science generated by quantified-self applications of potentially diagnostic technologies (Wolf & De Groot 2020), and the translational science it can inform, to contribute to an ever-increasing standard of health and clinical symptomatology knowledge (see Box 8.4).

BOX 8.4

Translating the SPT to Clinical Practice

The speech perception test (SPT) developed at Blamey Saunders Hears is an example of digital health technology allowing new learnings to come from use of the technology, in turn shaping clinical guidance. This test is a clinically validated tool that is available for use by the public in the comfort of their own environment (Blamey & Saunders 2015, Blamey et al 2015). Each use of the test is a subject of one and as such fits the idea of the quantified self. Making this test accessible to the public, as well as for clinically controlled use, has allowed a very large data set to accumulate. In addition to basic identifier questions, the user is asked whether or not they are wearing hearing aids. This parameter allowed an analysis that looked for test results with hearing aids, paired with results for the same user without hearing aids, as is performed in the clinic to verify whether hearing aids are providing benefit (Blamey 2019). The capacity to analyse the data generated by this test is enabled by the parameters encoded within it: matching tests performed by the same user, whether hearing aids were worn, and the time interval between tests (immediate repeats, or tests matched across long intervals to look for change). Analysis of this data set yielded a simple formula that can advise how much improvement in hearing a person can expect from their hearing aids, and in turn advise if they are not receiving an acceptable amount of benefit. The clinical impact is quite versatile, assisting verification of benefit from a hearing aid and assessment of whether there is an issue with a hearing aid; for example, that it needs repair. This is an example of how the quantified self can help inform translational science, in turn assisting the diagnostic process where appropriate.
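The matched-pair idea in this box can be illustrated with a hypothetical sketch. The field names, scores and benefit threshold below are invented for illustration and do not reflect the actual SPT data model or clinical criteria:

```python
# Hypothetical sketch: pair each user's aided and unaided test scores and
# compute hearing aid benefit. Field names, scores and the threshold are
# illustrative assumptions, not the real SPT data model.

def aided_benefit(tests: list, min_benefit: float = 10.0) -> dict:
    """Match aided/unaided scores per user and flag users whose benefit
    (aided score minus unaided score) falls below the threshold."""
    by_user = {}
    for t in tests:
        key = "aided" if t["wearing_aids"] else "unaided"
        by_user.setdefault(t["user_id"], {})[key] = t["score"]
    report = {}
    for user, scores in by_user.items():
        if "aided" in scores and "unaided" in scores:
            benefit = scores["aided"] - scores["unaided"]
            report[user] = {"benefit": benefit, "review": benefit < min_benefit}
    return report

tests = [
    {"user_id": "u1", "wearing_aids": False, "score": 55.0},
    {"user_id": "u1", "wearing_aids": True,  "score": 80.0},
    {"user_id": "u2", "wearing_aids": False, "score": 60.0},
    {"user_id": "u2", "wearing_aids": True,  "score": 63.0},
]
print(aided_benefit(tests))
```

Here the second user's small improvement would prompt a review, mirroring the clinical workflow of verifying hearing aid benefit described above.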

Clinical judgment is the key

An exciting change of tide in how digital health diagnostic technologies are shaping clinical practice can be seen in hearing care. Teleaudiology is the use of telehealth as it pertains to the speciality of audiology. Much like telehealth as a whole, there have been various long-standing applications of teleaudiology in which the end user is assisted by a local facilitator, as well as applications in which the user acts independently in their own care, supported remotely by clinical guidance (Saunders et al 2019). The uncoupling of clinical presence from clinical guidance has taken time to be well recognised by the industry at large (Brice et al In Press), finally being acknowledged in a revision of Australia’s clinical audiology guidelines released in 2022 (Audiology Australia 2022a) and further supported by separate guidelines specifically focusing on teleaudiology (Audiology Australia 2022b). What has changed with regard to incorporating digital health technologies, both diagnostic and rehabilitative, is acceptance of the variety of tools available, with more to come. More specifically, clinical governance is defined not by the choice and application of a particular digital health technology, but is derived from the clinical judgment exercised in choosing and using such tools. This subtle but crucial shift acknowledges that technologies will continue to come and go, each providing their own clinical value; for example, screening tools like the speech perception test (SPT), performance tests to gauge hearing aid functionality or auditory processing, or tinnitus assessments and other specialised clinical measures. The shift also acknowledges that clinical governance and quality of care are a product of the clinician and the care provided, not of the tools the clinician chooses to use. Clinical judgment has been, and will continue to be, key to the diagnostic process and to quality of care.

The point of personalisation in practice

Personalisation of data and medicine is the concept that the more you know about an individual and their characteristics, the more precisely their health can be managed. Clinically, personalisation really began with the ability to match genomic (genetic-level) information about an individual to known outcomes from other people with similar characteristics. Matching details about the person to potential prognosis in this way was termed ‘precision medicine’, attributed to Langreth and Waldholz, and later popularised as ‘personalised medicine’ (Jørgensen 2019). Personalised medicine has had great value in pharmacology, gaining initial prominence in the treatment of cancers, and continues to expand what information can be gathered about a person to better inform care planning, treatment and expected outcomes, based on knowing how treatment went for people with similar characteristics. Technology developments were instrumental in redefining which tests could be taken to gather what information, and continue to shape how much of modern clinical care can be personalised and to what degree. While person-centred care was a concept born out of psychology, where behaviour was the fundamental part of treatment, it has since become an integral part of the ideals of modern healthcare, as discussed in Chapter 4. Medicine, however, is not primarily focused on behaviour and as such is arguably interested less in the person than in their biology. The impact of incorporating the person’s behaviour into treatment planning and delivery was demonstrated in the 1990s to lead to improved clinical outcomes, prompting a call for person-centred practices, especially regarding chronic health management (see Table 8.1).
Taken together, the application of person-centred care in medicine followed by technology-enabled personalisation of medicine provides knowledge and tools to keep improving clinical outcomes by adopting a holistically personalised model of medicine and care from diagnostics to delivery.

TABLE 8.1

Historical Iterations of the Concept of Person-Centred Care

Decade | Author | Concept | Description | Application
1940s | Rogers | Person-centred | Person needs to be actively involved to achieve behaviour change | Shared care
1990s | Clark | Behaviour-dependent clinical outcomes | Active role and improved clinical outcomes for medical behaviour-dependent management | Clinical outcomes
1990s | Wagner | Chronic care management | Refined into the chronic care model | Clinical outcomes
1990s | Bandura | Self-efficacy in health management | Applies the psychology theory of self-regulation to behavioural self-regulation in health psychology, introducing the value of self-efficacy in chronic health management | Care planning and delivery
2000s | Langreth and Waldholz | Precision (personalised) medicine | Medical and technology advancements allow genetic and pharmacological personalisation of medicine for better-informed, tailored medicine, led by medicine rather than the person; however, it is the first step in bridging medicine with an individualised view of care planning | Information-led personalisation of medicine
2010s | Wolf and de Groot | Personal science, quantified self | Technology-enabled data collection to support individualised health and medical science via diagnostic data collection to inform decision-making and care planning | Data-led personalisation of health and care
2010s | | Personalised health and care | From diagnostic to outcome management, personalising modelling, planning and delivery of health and social care; biology to behaviour are incorporated in an integrated, multidisciplinary model of medicine | Holistically personalised medicine, data and behaviour

The past couple of decades have seen the diagnostic part of the clinical/health journey gain capacity to measure and manage vast amounts of data from one individual. The wealth of information that can now be gathered from any individual is at the core of the quantified-self movement and creates the discipline of ‘personal science’. The ubiquitous nature of person-based sensing and monitoring digital technologies capable of populating vast databases, where population-level multi-variate analysis can occur to support increasingly complex applications (Awad et al 2021, Li et al 2017), makes the reality of personalising health information for the consumer, outside of traditional clinical care, almost inevitable. Only recently have such large-scale applications and analyses become realistic, with huge potential to inform the diagnostic process in clinical encounters and, importantly, in the interpretation of the individual symptomology to a level not seen before (Wolf & De Groot 2020).

Digital health technologies can provide diagnostic information, such as through continuous monitoring tools for self-tracking, as part of further improving outcomes. Supporting positive and active engagement from the beginning of the health journey to better manage personal health—that is, potentially before illness or disease (Kooiman et al 2020, Lupton 2017)—can be further enhanced by learnings from the use of gamification in health (Edwards et al 2016). Chronic health management can thus consider diagnostically relevant digital health technology, focused on personal as much as clinical use, to have a potentially reciprocal relationship with improving outcomes via engaged and active consumers.

Screening tools are a useful example of building on symptomology knowledge for a condition and applying diagnostically useful questions that can help triage risk of disease and provide initial guidance to appropriate referral (Klyn et al 2019, Mansbach et al 2020). It is worth remembering that technologies that allow measuring or gathering of new information are equally crucial in changing our understanding of health conditions, as data sets and sophisticated data analysis allow us to find new observations and patterns that lead to new learnings. This potential for a reciprocal relationship between collecting new data and learning more about the data we should collect can be considered a cornerstone of diagnostics knowledge bases.
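The triage role of a screening tool can be sketched as a simple scoring function. The questionnaire items, thresholds and referral wording below are entirely hypothetical and do not come from the cited screening instruments (Klyn et al 2019, Mansbach et al 2020); the sketch only shows the general pattern of mapping symptom-based answers to an initial referral recommendation.

```python
def triage_hearing_risk(answers, refer_threshold=3):
    """Sum yes/no answers to hypothetical screening items into a risk
    score and map the score to an initial referral recommendation."""
    score = sum(1 for answer in answers.values() if answer)
    if score >= refer_threshold:
        return score, "refer to audiologist for diagnostic assessment"
    if score >= 1:
        return score, "re-screen in 12 months"
    return score, "no action indicated"

score, advice = triage_hearing_risk({
    "difficulty_in_noise": True,
    "tv_volume_complaints": True,
    "asks_for_repetition": True,
    "tinnitus": False,
})
```

A real instrument would weight items and validate cut-offs against diagnostic outcomes; the point here is only that the screening output is guidance towards a clinical encounter, not a diagnosis in itself.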

Digital health technology can enable the application of personal science. A biomedical perspective of person-centred health could incorporate the consumer-led and consumer-focused application of technology, with the principles of personal science being addressed using such technology. From diagnosis and prognosis all the way to rehabilitation and outcomes, implementation of digital health technologies is fundamentally person-centred.

Design and practice

Traditional clinic-driven diagnostic technology has been supported by the professional’s motivation to use and apply a technology correctly. The consumer, however, carries no such professional responsibility or liability. The motivations of, and outcomes for, the consumer therefore need to be understood from an entirely different perspective from that of the clinician or the technology designer if we want the consumer to use the technology at all, let alone correctly. The issue can be described as one of ‘compliance’. This chapter has considered various diagnostic digital health technologies; here we review two of them again in the context of user compliance (see Box 8.5).

BOX 8.5

Examples of Human Factors Impacting Implementation of Diagnostic Technologies

Speech screening tests

The objective of the SPT described earlier, along with other available screening tests, is to assess the degree of impact on correctly hearing conversational speech, a tool that can be used as part of diagnosing hearing impairment. Available freely online, the public can use this test on any device, in any setting of their choice. Improving the validity of the test requires users to maintain the volume at an appropriate level, resist raising it as the test continues, take the test in a calm environment, ideally use speakers rather than noise-cancelling headphones, and answer all the questions themselves. Human nature means that most of these factors cannot be controlled. An individual not ready to acknowledge the extent of their hearing difficulty could be taking this test at maximum volume, with good-quality headphones—if they take the test at all, of course.

Blood glucose monitoring

Successful diabetes control requires a degree of monitoring of glucose levels at certain parts of the day, such as after main meals. The healthcare professional managing a person’s care will decide on the regimen of medication, insulin and other factors according to these readings, which indicate how well the person’s body is managing diabetes. While it is equally in the interest of the person with diabetes to take readings at the appropriate times, it is ultimately in the person’s control whether they choose to consume a meal much higher in calories than recommended and then wait 4 hours instead of 2 hours after the meal to take a reading, giving a poor representation of how their body responded to that meal.
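The timing problem in the box can be made concrete with a small sketch. The 2-hour target and 30-minute tolerance below are illustrative assumptions, not clinical guidance; the function name is hypothetical. The sketch shows only how software could flag whether a post-meal reading was taken inside the intended window, since the reading value alone cannot reveal this.

```python
from datetime import datetime, timedelta

def reading_in_window(meal_time, reading_time,
                      target=timedelta(hours=2),
                      tolerance=timedelta(minutes=30)):
    """Return True if a post-meal glucose reading was taken within the
    recommended window (target +/- tolerance), else False."""
    elapsed = reading_time - meal_time
    return abs(elapsed - target) <= tolerance

meal = datetime(2023, 5, 1, 12, 0)
ok = reading_in_window(meal, datetime(2023, 5, 1, 14, 10))    # ~2 h 10 min
late = reading_in_window(meal, datetime(2023, 5, 1, 16, 0))   # 4 h later
```

A connected glucose meter that timestamps readings could apply a check like this automatically, surfacing the compliance issue to the clinician rather than leaving it invisible in the data.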

What all this means is that digital health technologies that can assist the diagnostic process offer an ever-increasing standard of health and clinical symptomology knowledge. However, the capacity to gather useful information depends on the functionality of these technologies, which is a challenge of design, and on the purpose of these tools, which is a challenge of implementation. Effective implementation of these technologies offers a genuine practice of person-centred care that understands the individual nature of all health and social care. Coupled with the computational maturity that exists today, person-centred technologies have great potential to advance our knowledge of human health, starting with a biomedical understanding of symptomology and ending with the clinical judgment that remains key to the diagnostic decision that defines quality of care.

The future/the crystal ball

As diagnostic technology in digital health continues to evolve, we consider the potential directions this field will take in the coming years. Across the ideas discussed, we will see various ways in which digital technologies can increase coverage and outreach with reduced cost and footprint of the devices used.

Shift from device to profile centricity

The current landscape involves an array of independent devices, each with proprietary interfaces and siloed data storage. There is likely to be a move towards adoption of data standards, allowing output of data onto a single platform. This will allow integration of data from various devices, linked into a single user profile.

Systems-based integration between services and devices

The boundary between digital health technologies and other smart devices will blur; as interoperability becomes commonplace, diagnostic technologies may leverage other smart technology to improve capabilities and enhance the value delivered to users. There will also be a shift away from focusing only on isolated diagnostic technologies and thinking about how they are embedded in broader health and social care systems.

Automated data analysis with AI and ML

Diagnostic technologies will increasingly leverage ‘big data’ for predictive analytics and automated decision support.

Patient-reported outcome measures (PROMs) and patient-reported experience measures (PREMs)—person-centred metrics and ‘wellness’ values

Clinical studies have traditionally used relatively simple outcome measures such as mortality and duration of illness. These are important but relatively crude measures that may not adequately capture outcomes relevant to users, such as functional level or quality of life (e.g. users may survive an illness but be left with severe disability). Digital technologies and monitoring introduce a method to capture these PROMs (e.g. step counters to reflect activity levels), as well as PREMs (e.g. real-time feedback, a point-of-care ‘stress’ index).
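The step-counter example can be sketched as a simple summary function. The thresholds and category labels below are invented for illustration; any real PROM derived from wearable data would need clinically validated cut-offs. The sketch only shows how raw monitoring data could be condensed into a person-centred outcome category.

```python
def activity_summary(daily_steps):
    """Summarise a run of daily step counts into an average and a
    simple activity category (hypothetical thresholds)."""
    avg = sum(daily_steps) / len(daily_steps)
    if avg >= 10000:
        return avg, "active"
    if avg >= 5000:
        return avg, "moderately active"
    return avg, "low activity"

avg, category = activity_summary([4000, 6000, 5000, 7000, 8000, 3000, 9000])
```

Reported over time, a category like this could serve as a functional-level outcome alongside traditional clinical endpoints.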

Growth in personalised care based on development of phenotypes

The increased volume of data generated by individual users (or even small clusters of users) allows for the construction of unique profiles that may be correlated with particular risk profiles or responsiveness to treatments. This may be driven by genomics, proteomics or the development of digital phenotypes from physiological measurements, and allows for the delivery of personalised therapies.

For example, BRCA1 is a gene in which certain variants are known to clearly indicate risk of breast cancer and can be screened for easily. Biomarkers of other types are increasingly being screened for, applying new knowledge of such profiles to improve treatment planning. Probable diagnosis is based on objective and available information from tests. Possible diagnosis, however, is further informed by subjective information, such as descriptions of pain, discomfort or sensory experience, or by new areas of research or knowledge that have not yet gathered enough data to be included in the probabilistic diagnostic process. The balance between objective and subjective information will always be a challenge for the healthcare professional aiming to derive a diagnosis, especially in atypical or under-recognised presentations of illness.

Moderation of expectations for technology

While there has been an overwhelming optimism regarding the ability of technology to disrupt healthcare models and ‘revolutionise’ healthcare, a more balanced consideration of the role of digital technology is likely to develop.

The last few years have highlighted challenges in the health technology industry, with technology companies such as IBM and Google restructuring or winding down their health technology initiatives (Combs 2021, Moreno 2021). Other diagnostic tools that have been highly lauded for accuracy in one setting have also failed to demonstrate similar performance when deployed into new health and social care environments (Habib et al 2021).

The challenges of operationalising health technologies were highlighted during the COVID-19 pandemic. There was also significant momentum in using the pandemic as an opportunity to showcase the power of digital health. A vast array of AI-powered tools were developed; however, few (if any) of these tools have been properly evaluated and deployed into a clinical environment (Heaven 2021). Expensive automated contact-tracing apps failed to be successfully deployed, with the backbone of contact tracing in Australia falling onto a traditional manual check-in system based on QR (quick response) codes (White & Basshuysen 2021).

The future of digital technology for diagnostics will likely take a middle ground between relentless optimism and dismissive pessimism, with increasing uptake of technology into existing models of care, and potential extensions of those models, without necessarily requiring the large-scale ‘revolutionary’ changes that are sometimes touted.

Conclusion

The rapid growth in diagnostic technologies for digital health necessitates the development of an informed, rational approach to their development, evaluation and implementation. This chapter has provided a foundational overview of the diagnostic process and the role of technology within that process. It has outlined a range of approaches to evaluating diagnostic technology and highlighted key points in the implementation of diagnostic technology within a complex healthcare setting. Finally, it has provided some considerations on the future directions and developments in the field of diagnostic technologies within digital health.

Key concepts

  • A clear understanding of the intended purpose and application as technology continues to become ubiquitous to our lives will be key to defining the appropriate evaluation criteria.
  • Evaluation of diagnostic technology will continue to evolve, as stakeholders change and the types of evidence available and required shift. This will potentially reshape the key tenets for evaluating a diagnostic technology, as iteratively as advances in technology development themselves.
  • Implementation of diagnostic digital technology has been a catalyst for change in healthcare. These technologies require a constant focus on the sociotechnical elements of health and social care and require continuous learning and development to support the iterative nature of technology use and implementation.
  • The role of the clinician in the diagnostic process can be reframed because of advances in digital diagnostic technology. The role of a human clinician in interpreting the output of these technologies to establish a diagnosis needs to be continuously considered and maintained.
  • The future and continued use of diagnostic technology in health and social care will continuously require careful consideration to assure the technologies and the implications of their use remain beneficial and effective and support safety and quality care for the future of health and social care.

Discussion questions

  1. Describe how a diagnosis is developed, and how digital technologies can impact this process.
  2. Describe how a new digital diagnostic technology is evaluated.
  3. Outline the key challenges in implementation of a new digital diagnostic technology.

CASE STUDY

The hearing test has changed a lot since 1879, from a machine-based test (the ‘audiometer’) to an electronic version of the machine in 1919, so that more people could be tested more easily. It eventually became digitised and linked to databases for ease of record keeping and connection to clinical networks. Today, audiometric tests have been recreated in many online formats freely available to the public. These iterations show great steps forward in functionality: from a clinically restricted set-up where the clinician chooses and invites the person to be tested, to improved access within the clinical setting, to a testing set-up that any person can find and take, bringing the results to a clinician if they choose to. The purpose has also changed, though more subtly, with each generation of hearing testing. The original motivation for testing hearing was to determine the likely nature of the hearing difficulty (i.e. diagnostic testing), which would inform whether amplification with a hearing aid would be helpful for that person. This purpose has not strictly changed; however, the development of various formats for testing hearing capability (e.g. within background noise, with speech or pure [single] tones, at different volumes) all contribute more information about the function of a person’s auditory system; that is, a combination of tests could assist a differential diagnosis. A person at home taking any number of online hearing tests has the capacity to take the same test a year apart, or pre- and post-surgery, compare their results and confirm a suspicion that their hearing has changed. Where a sudden loss of hearing occurs, as can happen after certain medications or surgical procedures, it is vital that the person becomes aware of this and reports it as soon as possible. The test the person took at home or from a hospital bed could therefore provide diagnostically relevant information.
The clinician may refer to these results for advice, further guided by the information gathered in the clinical encounter to investigate further and reach a decision of diagnosis.

Audiology today uses an array of clinical tests to diagnose a hearing loss. Many of these tests have versions that are also available online for people to take at home on their own, with results delivered in a manner that is understandable to everyone. Soon enough there will be a digitally enabled version of every audiological testing method for the user to access. Openly shared tests on the internet can now improve screening capability for under-resourced or out-of-reach communities, essentially acting as an early triage for referral to a clinical encounter, applicable to the full population. The changing purposes of digital health technologies have therefore challenged who these tools are for. The information gathered by a diagnostic digital health technology is now designed for the user, and less so for the clinician. There is also no longer just a user, but an active and increasingly informed user. The user owns and manages more and more of their own data, and can share it with the clinician for a user-driven rather than clinically driven encounter. This shift is occurring across various sectors of healthcare, some of which will and should remain clinically led, whereas others, such as hearing care, can now arguably safely embrace a user-led market with benefits for all involved.

Questions

  1. A user with hearing loss who regularly wears hearing aids undergoes routine surgery. A nurse assisting post-surgery notices the user expressing that they do not think they can hear properly, with or without their hearing aids. The user reports to the nurse that they have contacted their treating hearing clinician (audiologist) through a specialised app to request a remote consultation. The nurse also notifies the appropriate hospital-based hearing test clinician. The surgery finished 18 hours prior. The user is still groggy and not in a fit state to leave their current location or care. Sudden hearing losses have a 72-hour window in which, if treatment is sought, the loss can be rescued; otherwise it may remain permanent. Consider how digital technology creates new lines of communication that may not necessarily be aligned. What issues might this situation raise?
  2. A user has had three separate hearing tests with different health providers: one from an audiologist, one from a general practitioner and one from an online hearing service. All three tests show different results. The user is confused because the treatment advice conflicts due to the differing results. What are some factors that the user might consider in trying to resolve this issue?

References