Sophie Brice, David Sly, Sing Chee Tan
Learning outcomes
On completion of this chapter, learners will have the capability to:
Key terms
consumer
diagnosis
digital
health
health and social care
medicine
technology
user
The COMPASS statement (Mather & Almond 2022) situates how this chapter uses the implementation model. The chapter addresses the Context, Person-centred and Analysis aspects of the COMPASS implementation model mnemonic.
Technological advances have paved the way for rapid growth of diagnostic technologies to support the delivery of digital healthcare. This chapter aims to support the development of an informed approach to navigate and unpack the bewildering range of diagnostic technologies that are now available in digital health. The aim will be achieved by presenting the perspectives of technology, clinical application and the user experience in a holistic and practical discussion relevant to all participants in health.
In this section, we explore how a diagnosis is reached and how understanding this process is foundational to the effective design and implementation of digital diagnostic technologies.
The term ‘diagnosis’ is derived from two root terms: gnosis (recognise/know) and dia (apart). Diagnosis reflects both the process and the schema used to establish the right clinical condition that explains the user’s symptomology (Balogh et al 2015b). It is crucial to recognise that a diagnosis is not the clinical condition itself (which is defined pathologically), but rather a decision allocating the most likely condition that aligns with the symptoms and signs the individual is experiencing.
The diagnostic process is complex, being shaped by numerous factors such as stakeholder values, political and economic factors, clinical judgment, available information and prevailing diagnostic criteria, all of which are variable. The need to weigh up competing priorities throughout this process highlights the importance of recognising that diagnostics goes beyond a particular test and requires ongoing clinician involvement.
Clinical misdiagnosis is often identified as a safety and quality issue. However, the importance of establishing the right diagnosis extends much further.
At an individual user level, the right diagnosis is needed to:
At a public health level, where government and community organisations allocate resources towards large-scale intervention plans and funding decisions, the right diagnosis:
It is important to recognise that deriving the right diagnosis may not always be critical. A diagnosis, being a piece of information, has value primarily in relation to the ability to act on that information. The purpose of the diagnosis must therefore be placed in the broader context of the user, to ensure that the information from a diagnosis can be acted on for the user’s benefit.
To design more effective diagnostic technologies in digital health, we need to explore how a clinician arrives at a diagnosis. This can be understood within a three-stage framework of:
The first part of the diagnostic process is an exchange of information. The clinical encounter is between the person experiencing symptoms, clinically (and in this text) referred to as the ‘user’, and the clinician. The encounter may be a conversation with a general practitioner, an introductory conversation upon arriving at a hospital, or any interaction with a health professional intended to assist with health and clinical needs.
The clinical encounter can be subdivided into three distinct phases (Balogh et al 2015a), designed to help guide the clinician towards the right diagnosis.
This involves collecting information on the user’s current issue, past clinical history and social history, and is geared towards uncovering all relevant information. An aphorism in medicine is that the information collected during this phase is enough to establish approximately 80% of diagnoses.
Physically examining the user comes next to identify signs that suggest a particular condition is or is not present.
Where information from the history and examination is inadequate, additional tests are ordered to help with the diagnosis. Although testing is often construed as being key to a diagnosis, it is frequently not required. In addition, there is sometimes uncertainty in how an investigation or test result relates to actual user-relevant measures. For example, many healthy individuals with no back pain have abnormal spine magnetic resonance imaging (MRI) findings, which confounds attributing the back pain of some individuals to the appropriate cause (Brinjikji et al 2015).
Data collected during a clinical encounter needs to be analysed to identify the diagnosis. This involves transformation of clinical data to information (data in context), which is then used to derive a diagnosis.
There are various unconscious cognitive processes employed to match the clinical presentation with a diagnosis. The processes employed can be broadly aligned with popular descriptions of ‘system 1’ and ‘system 2’ thinking (Tay et al 2016, p. 2).
The cognitive processes required to convert data into useful clinical information potentially introduce bias and error. Although the use of algorithms embedded within automated decision support systems has been advocated to address these issues, there is a growing recognition of the inherent biases within algorithms and technology itself. This highlights the critical need for clinical judgment to be exercised in the analysis of data, and the ongoing need to ensure that this occurs in a context relevant to users and their families.
Using clinical information to derive a diagnosis is an iterative process, which always involves a degree of uncertainty. This uncertainty is driven by several factors, including variations in
A rational and consistent approach to evaluating the available evidence is therefore crucial. The main principles a clinician will adhere to in making their clinical diagnosis are as follows.
What all of this means is that there is no single measure (i.e. no single test or result) that establishes, with absolute certainty, the exact diagnosis. A final diagnosis is derived from an aggregate of the entire body of data that has been collected, interpreted through the lens of the clinician, who must then establish a diagnosis to be acted on (ideally ‘beyond reasonable doubt’). The technologies and tests available are the tools that support the collection of relevant clinical data that guides the diagnostic decision to be made.
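As a concrete illustration of how evidence shifts, but rarely settles, a diagnosis, the following minimal Python sketch (not from the chapter; the pre-test probability, sensitivity and specificity figures are invented for illustration) applies Bayes’ theorem to update the probability of a condition after a single test result.

```python
# Minimal sketch: Bayesian updating of diagnostic probability.
# Figures are illustrative assumptions, not clinical values.

def post_test_probability(pre_test_prob: float,
                          sensitivity: float,
                          specificity: float,
                          test_positive: bool) -> float:
    """Update the probability of disease after one test result."""
    if test_positive:
        likelihood_ratio = sensitivity / (1 - specificity)
    else:
        likelihood_ratio = (1 - sensitivity) / specificity
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# A positive result lifts a 10% clinical suspicion to about 67%.
print(round(post_test_probability(0.10, 0.90, 0.95, True), 2))
```

A positive result here lifts a 10% clinical suspicion to roughly 67%: informative, yet still well short of ‘beyond reasonable doubt’, which is why the aggregate of history, examination and testing matters.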
Technology, particularly digital technology, plays an ever-increasing role in diagnostics. As information (data) is fundamental to informing clinical decisions, and as the amount of health data collected and stored digitally increases, so too does the reliance on digital technology in diagnostics. For the purposes of this chapter, we will focus on the role of digital technology in diagnostics, where the information gathered is in a digital format by design.
The application of a technology can change or lead to new objectives, making the purpose of a technology also important in recognising innovation. The case for innovation, therefore, is one of functionality and purpose. Next in this chapter, we will focus on the technology itself in the diagnostic process, exploring how advancements in hardware and software have enabled digital health technologies and their use in healthcare. We will also look at how these advancements in technology impact stakeholders and their needs, with a focus on how they have resulted in change and innovation in health and social care.
Technology has traditionally been separated into hardware and software; however, the evolution of these two components is linked. The invention of the silicon transistor and microchip hardware in the 1950s made it possible to acquire and store information digitally in silicon switches as a series of 1 or 0 binary digits (bits) (Isaacson 2014). Prior to this time, effective sharing and collaboration were limited by the impracticalities of paper: printed or written hardcopies could not easily be copied or shared, nor could they support iterative writing, editing or analysis. The new digital ‘bits’ allowed information to be represented as numbers, and storage of and calculation on those numbers could be applied to health and social care information, which could then be shared more easily. Digitisation also led to the development of software, which made data acquisition, data entry and calculations more accessible and increased the capacity to work with larger amounts of data more efficiently than was possible with hardcopy formats (i.e. paper and printouts). Digital technology thus depends on hardware, especially the microchip, with the software layer allowing increasingly rapid advances in what can be achieved with digitised data, including health and social care data.
Up until the end of the 1970s, most development of hardware and software technology was a result of the drive for military advances and space exploration (Isaacson 2014). From the 1970s until today, computing has made its way into people’s homes and their personal and working lives in what is now known as consumer technology. In the 1970s and 1980s, video games and consoles were the driving force for advances in computing hardware and software. In the mid-1980s, Nintendo and other companies developed wildly popular handheld video games.
From the 1980s, cellular mobile telephones launched onto the user market (see Box 8.1), and in the early 2000s companies such as Palm developed smartphones. Smartphones added additional sensors and faster wireless communications hardware, including Bluetooth and 2G (second-generation cellular network). The popularity of the internet and its application to health led to the term ‘eHealth’, with people seeing the potential for improvement in health via digitisation of health data platforms. From 2007, with the launch of the Apple iPhone, and later the Fitbit and Apple Watch, developers began to see the potential in smart devices as diagnostic platforms (mHealth). Both the eHealth and mHealth terms are less popular today, having largely been incorporated under the ‘digital health’ umbrella term.
The consumer electronic device market, by virtue of its size and user demand, has been a primary driver of new advances in hardware. Advances in microchips, including central processing units (CPUs), graphics processing units (GPUs), wireless technologies, virtual reality (VR), augmented reality (AR) and the Internet of Things (IoT), have all been driven largely by demand for consumer products.
These advances have also facilitated the information technology revolution, where datasets rapidly became more accessible and connected via the internet. Networks became commonplace, initially as small, local networks within one building or organisation: an ‘intranet’. After this, networks covered larger regions, such as Minitel in France and broadcast teletext in the United Kingdom through the 1990s. These gave the general public their first opportunity to access a common network and ‘look up’ information such as a company phone number or cinema listings. By the end of the 1990s, the internet as we know it became common in households in many communities, quickly allowing global, unified access to the same sources of information.
Hardware advances continue to feed into the technological growth cycle. The current phase of improvements in sensor technology, such as miniaturisation and better accuracy, enables new and increased functionality and innovation in purpose. Hardware improvements also continue to increase the amount of data that can be collected and stored, supporting the growth of data science. Access is a consequential advantage of digitising information, as people gain the ability to share and copy data and the capacity to automate calculations (e.g. analysis). Arguably, the greatest advantage to the birth of digital technology has been the ‘computational’ capacity gained that has paved the way for analysis of increasing complexity, such as in the realms of meta-data and artificial intelligence (AI) science.
The triad of improvements in data storage capabilities, advances in computational capability (particularly parallel processing [Gaurav 2020]) and better communication methods to transfer this data has allowed for complex analysis of large datasets using novel machine learning (ML) methods and neural networks for AI (Warner et al 1964). This has seen mass-market applications in areas such as object recognition (for example, Facebook’s automatic tagging of faces) and recommendation systems such as those used by Netflix.
Another more recent area of advancement is AR and VR. The idea of AR was brought to public attention in the science fiction novel Airframe by Michael Crichton (of Jurassic Park fame), in which aircraft maintenance workers could use a headset to overlay visual electronic information about the parts of an aircraft being inspected, an idea that has since been adapted in many films (Crichton 1996). An older conceptualisation we can all recognise is the Star Trek Holodeck, a VR and AR space used to investigate, explore and train. Application of AR accelerated with consumer technology, particularly with the release of Pokémon Go in 2016. The increasing availability of VR and AR technologies for the mass market, such as the Microsoft HoloLens, has raised interest in their application to health and social care (Thomas 2021).
These developments in hardware, information systems and computational capabilities form the backbone for recent advances in digital technologies for the purposes of diagnostics, as we will explore in the next section.
In the past decade, the companies most responsible for these advances in technology, such as those developing mobile telephones, the internet, apps, software, sensors and consumer electronics, have become much larger in market size than clinical device companies. These user product companies have recently begun advancing their own digital health technologies and strategies more rapidly. Apple, Samsung, Google, Microsoft, Meta and Amazon all have digital health technologies in software and/or hardware, hold large numbers of patents in digital health diagnostics, and recognise that this is an enormous market.
These advances have included the development of ‘wearables’ with a focus on health, such as the Apple Watch and Fitbit. In many cases, wearables collect both health and clinical diagnostic information. These advances have also meant that some diagnostic technologies have been able to move from the specialised hospital and clinic to general practice, ambulatory monitoring, and home and consumer use. For example, accelerometers now fitted in hearing aids combined with heart rate detectors have been linked to create alert systems (Tan & Goyette 2019) (see Box 8.2).
Sensor hardware technology has been evolving rapidly in recent years, making this area a prominent driver of mobile diagnostics. Sensors essentially convert one form of energy into another. In health technology, sensors may detect chemical, physical or other physiological changes, and almost always now convert these into electrical signals that can be digitised into data by a silicon chip. Given that computing, smartphone and other mobile digital platforms now yield largely incremental improvements in storage and speed, it is the addition of new sensors to these platforms that is rapidly adding new diagnostic capabilities. Thus, in recent times smart watches have added photoplethysmography (PPG) sensors to detect heart rate and oxygen saturation, electrical sensors to detect ECG and accelerometers to detect falls. There is rapid competition to add further sensors and diagnostic capabilities to these platforms, such as for seizure detection and blood glucose monitoring. Other examples include the deployment of ‘smart’ hospital beds, particularly in the United States, that can monitor vital signs.
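To illustrate the sensor-to-data pathway just described, the following minimal Python sketch (a simplification; real devices use far more robust signal processing, and the synthetic waveform here is invented) estimates heart rate from a digitised PPG waveform by counting its peaks.

```python
# Minimal sketch: estimating heart rate from a digitised PPG waveform.
import math

def estimate_heart_rate(ppg: list[float], sample_rate_hz: float) -> float:
    """Estimate beats per minute by counting local maxima above a crude baseline."""
    threshold = sum(ppg) / len(ppg)  # crude baseline: the mean of the signal
    peaks = [
        i for i in range(1, len(ppg) - 1)
        if ppg[i] > threshold and ppg[i] > ppg[i - 1] and ppg[i] >= ppg[i + 1]
    ]
    duration_s = len(ppg) / sample_rate_hz
    return 60.0 * len(peaks) / duration_s

# Synthetic 10-second trace sampled at 25 Hz with a 1.2 Hz pulse (~72 bpm).
trace = [math.sin(2 * math.pi * 1.2 * n / 25) for n in range(250)]
print(round(estimate_heart_rate(trace, 25.0)))  # ~72
```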
Many of these sensors sit in standalone diagnostic devices in hospitals, clinics and, now, wearable devices. We are also beginning to see sensors located in the environment, whether the hospital or the home, that can monitor health and wellbeing and link remotely to a database for clinical management (CSIRO 2021). Sensors and digital devices are also driving an increase in point-of-care (POC) diagnostics. POC testing occurs outside a laboratory setting and provides immediate results; such settings include bedside testing, remote testing, mobile testing and rapid diagnostics.
Digital health technologies have put diagnostic technologies in the hands of more health professionals, across varied roles, as well as in the hands of consumers. Examples include Seer Medical’s electroencephalogram (EEG) and ECG sensor systems and platforms. Seer’s platform allows in-home 24/7 monitoring of neurology users with suspected epilepsy. While in-home Holter monitoring has been available for many decades, systems such as Seer’s are fully digital in that all the data can be harvested in real time, and modern analytics and ML can help process the data to inform diagnostics and allow data mining for future algorithms that aim to predict seizure onset and improve the lives of users (Stirling et al 2021).
At present, most existing and newer digital health technologies are ‘assistive’ to the clinician; that is, they provide the opportunity for greater data gathering. However, they often require clinical intervention or interpretation. The data gathered may not always be of immediate use for a diagnosis; however, as we have learned so far, it is the information yielded by a technology that contributes to the diagnostic process rather than the technology itself. As such, a technology may be considered part of a diagnostic toolkit if the information it provides can lead to diagnosis.
The application of computational methods to healthcare diagnostics has been explored since the 1960s (Warner et al 1964), but has recently seen a resurgence due to the advances discussed earlier. Developments such as the widespread uptake of electronic medical records (EMRs), and the ability to capture and store high-resolution radiology and pathology images, are now providing the essential datasets for developing ML and AI algorithms. For example, there is growing interest in the application of object recognition technologies in areas such as radiology and histopathology for automated diagnostics (Dikici et al 2020, Ibrahim et al 2019), and the use of real-time user data analytics for clinical decision support (Wachter 2015).
The impact of information exchange systems has been particularly significant in radiology, with the development of picture archiving and communication systems (PACS) facilitating access to and interpretation of radiology images such as x-rays. The digitisation of these images has also allowed for the development of AI and ML tools to facilitate automated image interpretation (Dikici et al 2020, Huang 2011).
The progressive digitisation of health information within EMRs across Australia has provided an opportunity for automated analysis of this data by clinical decision support systems (CDSSs). CDSSs may range from basic (e.g. highlighting test results that are out of a predefined normal range) to more complex (e.g. prediction algorithms to identify users who are at risk of developing an illness).
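As a concrete illustration of the ‘basic’ end of that spectrum, the following minimal Python sketch flags results outside a predefined normal range; the analytes, reference ranges and function names are invented for illustration, and real CDSSs operate on EMR data with clinically governed ranges.

```python
# Minimal sketch: rule-based flagging of out-of-range test results.
NORMAL_RANGES = {  # analyte: (low, high); values are illustrative only
    "potassium_mmol_L": (3.5, 5.2),
    "haemoglobin_g_L": (115, 165),
}

def flag_abnormal(results: dict[str, float]) -> list[str]:
    """Return human-readable alerts for results outside the reference range."""
    alerts = []
    for analyte, value in results.items():
        low, high = NORMAL_RANGES[analyte]
        if value < low:
            alerts.append(f"{analyte} LOW: {value} (range {low}-{high})")
        elif value > high:
            alerts.append(f"{analyte} HIGH: {value} (range {low}-{high})")
    return alerts

print(flag_abnormal({"potassium_mmol_L": 6.1, "haemoglobin_g_L": 120}))
# ['potassium_mmol_L HIGH: 6.1 (range 3.5-5.2)']
```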
While there has been significant interest in the role of computational analysis via ML or AI to aid in complex diagnostic decision-making, there remain numerous barriers to the effective deployment of these technologies. These include variable data quality, restricted access to ‘siloed’ EMR interfaces and data, and limited clinician trust of ‘black-box’ algorithms (i.e. where the processing of data is opaque, leading to uncertainty about how the algorithm’s recommendation was derived). Additional barriers to the effective deployment of CDSS technology pertain to the underdevelopment of the human–computer interface and of our understanding of how to deliver the output of advanced analytics in a manner that supports clinical workflow and decision-making (e.g. minimising unnecessary alerts to avoid alert fatigue).
Nonetheless, the use of ML and AI to leverage big data in healthcare may continue to play a large part in future advances in digital health diagnostics. As more data is aggregated from an array of clinical records and health sensors, logged across time, there will be growing opportunities for computational methods to analyse these datasets for diagnostic purposes at various levels, ranging from individual risk prediction, to diagnostic process optimisation, and even to accelerating the development of diagnostic technologies. The confluence of these computational advances with novel designs for the human–computer interface (such as AR) presents potential opportunities for improving the diagnostic process. The impact of these technology developments on the diagnostic process is at an early yet very exciting stage!
We have learned that digital health technology innovations can challenge and shape various aspects of the diagnostic process (Fig 8.3). Modern technologies both support and create smarter applications and clinical capabilities. The impacts of these technologies are also quite broad. These can be grouped into three vital areas (context, data gathering and cognition process) to understand how digital health technologies can underpin shifts away from traditional practices to the future of diagnostics.
Context refers to the interpersonal relationships and environmental factors that shape the diagnostic process, and can be influenced by digital technologies in the following manner.
Digital technologies can impact the process of gathering data that support the diagnostic process in a number of ways.
Digital technologies may influence the cognitive processes that are used to formulate a diagnosis, regardless of whether ‘fast’ or ‘slow’ cognitive processes are used. For example, early machine learning/AI models were often examined in ‘human versus machine’-type studies but failed to demonstrate a significant improvement over expert clinician diagnosis. This has led to a reconsideration of the role of these models, such as augmenting diagnostic processes to minimise error and bias in diagnosis. This may involve:
What all of this means is that in digital health, diagnostic technologies have introduced innovative functions and capabilities into the diagnostic process, many of which were previously considered too complex or impractical for routine clinical practice. Vast arrays of new information can now be gathered, but accurate interpretation of data to achieve a diagnosis remains a crucial process (see Box 8.3).
Prior to implementing any digital health technology for diagnostic purposes, a robust evaluation process is needed to establish whether it is fit for purpose. ‘Purpose’ in this case varies according to the stakeholder lens being used for evaluation, as stakeholders bring differing priorities for the diagnostic technology to address.
Specific stakeholder views to be considered are:
Evaluation of any diagnostic technology requires assessment of the available evidence base; the various perspectives outlined above will require and use different types of evidence. The choice to create different types of evidence also impacts the technology itself, ultimately shaping how it could and should be applied in digital health.
The decision to use a technology, as well as to evaluate whether to keep using it, has varying aims depending on who the stakeholder is and what context the technology is being evaluated for. We will break down five key perspectives and describe the elements and priorities of evaluation for these.
Clinician assessment of diagnostic technologies would focus on the following.
The performance of diagnostic technologies should be evaluated in well-designed studies that detail the technology’s performance against a clearly defined gold standard diagnostic test, and provide appropriate outcome measures such as sensitivity, specificity and accuracy. Although adequate performance is a key requirement for any diagnostic technology, it must also be demonstrated to ultimately improve relevant clinical outcomes. Relevant clinical outcomes reflect actual improvement in health, and may include measures such as mortality, length of illness or length of hospital stay.
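To make these outcome measures concrete, the following minimal Python sketch computes sensitivity, specificity and accuracy from a hypothetical validation study comparing a new test against a gold-standard diagnosis; all counts are invented for illustration.

```python
# Minimal sketch: diagnostic performance from a 2x2 confusion matrix.
def diagnostic_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # proportion of true disease cases detected
        "specificity": tn / (tn + fp),   # proportion of healthy cases correctly cleared
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# 90 true positives, 40 false alarms, 10 missed cases, 860 true negatives.
print(diagnostic_performance(tp=90, fp=40, fn=10, tn=860))
# sensitivity 0.9, specificity ~0.956, accuracy 0.95
```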
This concept implies that the mantra of ‘more data is always better’ may not hold true in diagnostics, as the data needs to be acted on to achieve a change in outcome. Unfortunately, the best way to act on additional information is not always clear.
For example, the potential disjoints between ‘more data’ and a meaningful relationship to clinical outcomes was demonstrated by the PACMAN trial, which examined the use of a special monitoring device called the pulmonary artery catheter (PAC) in intensive care. The PAC provided huge amounts of data that in theory would help doctors better diagnose and manage sick patients. However, this high-quality study showed that patients had the same outcome, regardless of whether a PAC was used, ultimately leading the PAC to fall out of favour as a diagnostic tool routinely used in ICU.
More recently, patient-reported outcome measures (PROMs), such as quality of life and level of function, are of interest as outcomes. These can be challenging to measure due to the complex interaction of factors that may lead to a change in these outcomes.
Users are those who experience the health and social care journey. While users cannot be assumed to be trained or knowledgeable in the aspects that shape the care they receive, they are, and always will be, the experts in their own experience. This in turn has implications for what information is sought, valued and accepted by the user in their health choices. Market-driven, populist and economic information are examples of non-scientific information that can form part of users’ ‘accepted’ information. It is of increasing importance to accept the reality of the user’s perspective.
An example of this is the COVID-19 rapid antigen test (RAT) versus the polymerase chain reaction (PCR) test. The COVID-19 pandemic made most of us familiar with the forms of testing for the virus. The common options are a PCR test, which requires an uncomfortable swab to be taken, or a much more comfortable yet less accurate RAT, which requires a less-invasive swab and is acceptable for self-administration. When considering what steps to take in seeking a COVID-19 diagnosis, the consumer will weigh the cost of the test, access to the test, the time implications for travel, waiting and work, and the turnaround time, beyond knowing that the PCR test will be more accurate and reliable.
Successful application of a diagnostic technology impacts the validity of the data produced, contributing to the evidence base for the technology and, ultimately, the success of the technology itself. A brilliant technology loses its value if it is too difficult to use or implement. From the user perspective, factors affecting compliance therefore become very important. The attraction of creating technology that affords the consumer control of their healthcare journey is that the potential market becomes the much larger group of potential users, rather than the much smaller population of highly trained and specialised health professionals. The fact that a health professional will have exponentially more opportunities to use a tool may be overshadowed by the number of users who may well use a tool only once.
The digitalisation of diagnostic technology here has two core domains to consider:
The user and the clinician are both users of a diagnostic technology; however, without the user on board, there will ultimately be no viable use.
The development of any product is a process that requires input from stakeholders who, by contributing to the design, are in effect designers, regardless of their qualified or allocated position; the allocated designer is not the only one who shapes the product. All stakeholders have differing wants, needs, objectives and perspectives, meaning that the initial idea or intention for a product is only one part of a process in which many iterations of development can change the purpose, intended user, implementation or objective altogether. The Australian innovation of using consumer-led data to prepare hearing aid fittings was a novel step in redesigning clinical digital testing technology for a consumer, not a clinician. This change required a user experience-led design brief in which the user was the consumer, which was not common practice at the time. The application of the testing tool had two facets to its evaluation: scientific evaluation for clinical acceptance, and usability and access for consumer acceptance. The design of this tool therefore evolved from its initial scientific design needs to a much longer post-implementation phase of application-based design needs that would not compromise scientific integrity (Blamey & Saunders 2015, Blamey et al 2015).
Governing agencies have a variety of roles in relation to the evaluation and approval of diagnostic technologies, such as safety and efficacy, as described in the previous sections. A key area of assessment, particularly in the context of the Australian taxpayer-funded universal health system, is a health economic assessment. A robust health economic evaluation would include a balanced assessment of the potential costs involved in providing subsidies to access this technology, weighed against the benefit in terms of improvement in population health. An example is the introduction of subsidies for continuous glucose monitors for diabetes, which has been demonstrated to reduce diabetic complications and associated costs with managing those complications. However, subsidies are only available for certain diabetic populations, where the potential long-term benefit to improving health and social care outcomes is greater.
Clinical devices and diagnostics are tightly controlled by regulatory bodies such as the TGA in Australia and the FDA in the United States. Clinical diagnostic devices monitor and assist the diagnosis of disease; in most cases, a medical professional still makes the diagnosis. Such diagnostic devices include medical imaging (MRI, CT, PET, ultrasound), clinical monitoring devices (vital signs monitoring) and pathology tests. For a clinical device or diagnostic technology to be approved by the regulatory authority, clinical efficacy and safety data must be supplied. Data on the accuracy, precision and sensitivity/specificity of testing compared to the most relevant ‘gold standard’, along with the associated receiver operating characteristic (ROC) curves that map where acceptable cut-offs for decision-making lie, are all factored into the evaluation by the regulatory bodies. In recent years, with the rise of digital health, many companies have rapidly developed wearables and software that often claim a diagnostic-level capability without having passed through such regulatory approvals. Recently, the FDA and TGA have provided updated guidelines about digital health products and the level of evidence required to justify a diagnostic (or treatment) claim. Software as a medical device (SaMD) has required new guidelines.
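To show what an ROC curve captures, the following minimal Python sketch (with invented test scores and disease labels) sweeps candidate cut-offs over a continuous test measurement and reports the sensitivity/specificity trade-off at each, the information from which an acceptable operating point is chosen.

```python
# Minimal sketch: building ROC points by sweeping cut-offs over test scores.
def roc_points(scores: list[float], diseased: list[bool]) -> list[tuple]:
    points = []
    for cutoff in sorted(set(scores)):
        tp = sum(s >= cutoff and d for s, d in zip(scores, diseased))
        fn = sum(s < cutoff and d for s, d in zip(scores, diseased))
        fp = sum(s >= cutoff and not d for s, d in zip(scores, diseased))
        tn = sum(s < cutoff and not d for s, d in zip(scores, diseased))
        points.append((cutoff, tp / (tp + fn), tn / (tn + fp)))
    return points

# Invented scores from a hypothetical test, with gold-standard labels.
scores = [0.2, 0.4, 0.5, 0.6, 0.7, 0.9]
diseased = [False, False, True, False, True, True]
for cutoff, sens, spec in roc_points(scores, diseased):
    print(f"cut-off {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```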
What all of this means is that the intended use is crucial in defining how a technology should be evaluated as a diagnostic tool. A digital health technology may well prove useful for a task other than what it was designed or intended for, or for a stakeholder other than the intended user. Any given context with a digital diagnostic technology has many potential factors that may change in importance as the circumstance, application or outcomes change. Where this occurs, it is important to recognise the innovation at hand and qualify it wherever possible, for clarity and guidance on how a technology should or should not be used.
The act of putting a decision into effect is called implementation. A digital technology operated by a consumer rather than a medical professional is an opportunity to assess how useful and effective a tool can be. Information must first be gathered by the consumer or the professional before the most relevant information can be extracted, either as part of, or to complement, the clinical encounter. For the technology to gather appropriate information at all, there are many factors of implementation that equally shape how effective it can be for its intended use, regardless of the planning, design and validation that led to the creation of the technology in the first place. Technology is a tool to gather and provide information. This section of the chapter will consider the implementation of technologies that could provide diagnostically relevant information.
The ‘sociotechnical’ model recognises that technology is never used in isolation; it is deployed within a complex network of interacting social elements, including people and culture. These social elements influence the way a particular technology is used, and should be addressed to facilitate uptake of the technology for improving health outcomes (Carayon et al 2011). This requires consideration of the intended purposes of the technology and the context into which it is deployed, which then influences how the same technology may be used in different ways.
The intended purpose of digital diagnostic technologies can be grouped as follows.
How a diagnostic technology is used can mean that more than one of these groups can apply for any given task.
An example of this is measuring blood glucose in the 2 hours after a meal to assist the diagnosis of diabetes; that is, observing the rate of rise, the peak and the recovery of blood glucose in the period up to 4 hours post meal. However, if a person is at high risk of gestational diabetes, continuous monitoring of blood glucose over the weeks when onset of gestational diabetes is most likely can be used to alert a clinical team as soon as symptoms start. This reduces the risk of harm to the fetus by initiating treatment or management of the condition before the fetus can be exposed to dangerous levels of blood glucose. Speech testing apps can likewise be used as a screening test, or periodically to monitor for changes in hearing over time, or in the case of sudden hearing loss, which has a vital 72-hour window in which to seek clinical attention to recover the loss. Sensor technologies are increasingly able to perform continuous monitoring, allowing use as a screening tool as well as a tracking tool to detect change. The capacity to manage high volumes of data supports the ability to perform more readings, such as continuous monitoring set-ups, turning what once would have been impractical into a manageable and powerful tool.
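A minimal Python sketch of that continuous-monitoring alert pattern follows; the 7.8 mmol/L threshold, timestamps and readings are invented for illustration and are not clinical guidance.

```python
# Minimal sketch: raising an alert from a stream of glucose readings.
ALERT_THRESHOLD_MMOL_L = 7.8  # illustrative threshold, not clinical guidance

def first_alert(readings: list[tuple[str, float]]) -> str | None:
    """Return an alert for the first reading above threshold, if any."""
    for timestamp, glucose in readings:
        if glucose > ALERT_THRESHOLD_MMOL_L:
            return f"ALERT at {timestamp}: glucose {glucose} mmol/L"
    return None

stream = [("08:00", 5.1), ("08:30", 6.9), ("09:00", 8.4), ("09:30", 7.2)]
print(first_alert(stream))  # ALERT at 09:00: glucose 8.4 mmol/L
```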
Translational science is the discipline of using observations, whether collected clinically or at a community level, to inform and guide the development of interventions that aim to improve the health of individuals and the population at large. Observations of this kind can impact diagnostics, therapeutics, clinical procedures and more. The growth of technology allows self-tracking or measuring of human health variables, discussed since 2010 under the idea of the ‘quantified self’ (Swan 2013, Wolf 2010). A clinical problem for the quantified self has been the application of the data gathered from a vast array of technologies, each based on a sample of one (Mirza et al 2017). Statistically, data collected from technology designed for the common consumer, where each application involves a subject pool of one, is challenged by a lack of standardisation when analysing or interpreting across anything more than the single subject. Digital health technologies do, however, show great promise in overcoming two key barriers to quantified-self data, bringing potential for it to contribute to translational science (Wolf & De Groot 2020). The first barrier is data sharing: effectively aggregating data from multiple consumers is constrained by security and privacy factors, which are discussed elsewhere in this textbook. The second is support for using and applying technologies at large, in the hands of the consumer, so that they are clinically helpful and the path to clinical validation is improved. Beyond these barriers lies a huge potential for the personal science generated by quantified-self applications of potentially diagnostic technologies (Wolf & De Groot 2020), and for translational science, to contribute to an ever-increasing standard of health and clinical symptomology knowledge (see Box 8.4).
An exciting change of tides in how digital health diagnostic technologies are shaping clinical practice can be seen in hearing care. Teleaudiology is the use of telehealth as it pertains to the speciality of audiology. Much like telehealth as a whole, there have been various long-standing applications of teleaudiology, in which the end user is assisted by a local facilitator, or acts independently in their own care, supported remotely by clinical guidance (Saunders et al 2019). The uncoupling of clinical presence from clinical guidance has taken time to be well recognised by the industry at large (Brice et al In Press), finally being recognised in a revision of Australia’s clinical audiology guidelines released in 2022 (Audiology Australia 2022a) and further supported by separate guidelines specifically focusing on teleaudiology (Audiology Australia 2022b). What has changed with regard to incorporating digital health technologies, both diagnostic and rehabilitative, is the acceptance of the variety of tools available, with more to come. More specifically, clinical governance is to be derived not from the choice and application of a particular digital health technology, but from the clinical judgment exercised in choosing and using such tools. This subtle but crucial shift acknowledges that technologies will continue to come and go, each providing its own clinical value; for example, screening tools such as the speech perception test (SPT), performance tests to gauge hearing aid functionality or auditory processing, or tinnitus assessments and other specialised clinical measures. The shift also acknowledges that clinical governance and quality of care are products of the clinician and the care provided, not of the tools the clinician chooses to use. Clinical judgment has been, and will continue to be, key to the diagnostic process and quality of care.
Personalisation of data and medicine is the concept that the more you know about an individual and their characteristics, the more precisely their health can be managed. Clinically, personalisation began with the ability to match genomic (genetic-level) information about an individual to known outcomes from other people with similar characteristics. Matching details about the person to potential prognosis in this way was termed ‘precision medicine’, attributed to Langreth and Waldholz, and later popularised as ‘personal medicine’ (Jørgensen 2019). Personalised medicine has had great value in pharmacology, gaining initial prominence in the treatment of cancers, and continues to expand what information can be gathered about a person to better inform care planning, treatment and expected outcomes, based on knowing how treatment went for people with similar characteristics. Technology developments were instrumental in redefining which tests could be taken to gather what information, and continue to shape how much of modern clinical care can be personalised, and to what degree. While person-centred care was a concept born out of psychology, where behaviour was the fundamental part of treatment, it has since become an integral part of the ideals of modern healthcare, as discussed in Chapter 4. However, medicine is not primarily focused on behaviour and, as such, is arguably interested less in the person than in their biology. Incorporating the person’s behaviour into treatment planning and delivery was demonstrated in the 1990s to lead to improved clinical outcomes, prompting a call for person-centred practices, especially in chronic health management (see Table 8.1). Taken together, the application of person-centred care in medicine, followed by technology-enabled personalisation of medicine, provides the knowledge and tools to keep improving clinical outcomes by adopting a holistically personalised model of medicine and care, from diagnostics to delivery.

| Decade | Author | Concept | Description | Application |
|---|---|---|---|---|
| 1940s | Rogers | Person centred | Person needs to be actively involved to achieve behaviour change | Shared care |
| 1990s | Clark | Behaviour-dependent clinical outcomes | Active role and improved clinical outcomes for medical behaviour-dependent management | Clinical outcomes |
| 1990s | Wagner | Chronic care management | Refined into the chronic care model | Clinical outcomes |
| 1990s | Bandura | Self-efficacy in health management | Applies psychology theory of self-regulation to behavioural self-regulation in health psychology to introduce the value of self-efficacy in chronic health management | Care planning and delivery |
| 2000s | Langreth and Waldholz | Precision (personalised) medicine | Medical and technology advancements allow genetic and pharmacological personalisation of medicine for better-informed, tailored medicine, led by medicine rather than any person; however, it is the first step in bridging medicine with an individualised view of care planning | Information-led personalisation of medicine |
| 2010s | Wolf and de Groot | Personal science, quantified self | Technology-enabled data collection to support individualised health and medical science via diagnostic data and collection to inform decision-making and care planning | Data-led personalisation of health and care |
| 2010s | | Personalised health and care | From diagnostic to outcome management, personalising modelling, planning and delivery of health and social care; biology to behaviour are incorporated in an integrated, multidisciplinary model of medicine | Holistically personalised medicine, data and behaviour |
The past couple of decades have seen the diagnostic part of the clinical/health journey gain capacity to measure and manage vast amounts of data from one individual. The wealth of information that can now be gathered from any individual is at the core of the quantified-self movement and creates the discipline of ‘personal science’. The ubiquitous nature of person-based sensing and monitoring digital technologies capable of populating vast databases, where population-level multi-variate analysis can occur to support increasingly complex applications (Awad et al 2021, Li et al 2017), makes the reality of personalising health information for the consumer, outside of traditional clinical care, almost inevitable. Only recently have such large-scale applications and analyses become realistic, with huge potential to inform the diagnostic process in clinical encounters and, importantly, in the interpretation of the individual symptomology to a level not seen before (Wolf & De Groot 2020).
Digital health technologies, such as continuous monitoring tools for self-tracking, can provide diagnostic information as part of further improving outcomes. Supporting positive and active engagement from the beginning of the health journey to better manage personal health, potentially before illness or disease (Kooiman et al 2020, Lupton 2017), can be further enhanced by the learnings and knowledge from the use of gamification in health (Edwards et al 2016). Chronic health management can thus consider digital health diagnostically relevant technology, focused on personal as much as clinical use, to have a potentially reciprocal relationship with supporting the improvement of outcomes via engaged and active consumers.
Screening tools are a useful example of building on symptomology knowledge for a condition and applying diagnostically useful questions that can help triage risk of disease and provide initial guidance to appropriate referral (Klyn et al 2019, Mansbach et al 2020). It is worth remembering that technologies that allow measuring or gathering of new information are equally crucial in changing our understanding of health conditions, as data sets and sophisticated data analysis allow us to find new observations and patterns that lead to new learnings. This potential for a reciprocal relationship between collecting new data and learning more about the data we should collect can be considered a cornerstone of diagnostics knowledge bases.
Digital health technology can enable the application of personal science. A biomedical perspective of person-centred health could incorporate the consumer-led and consumer-focused application of technology, with the principles of personal science being addressed using such technology. From diagnosis and prognosis all the way to rehabilitation and outcomes, the implementation of digital health technologies is fundamentally person-centred.
Traditional clinic-driven diagnostic technology has been supported by the professional’s motivation to use and apply a technology correctly. The consumer, however, has no professional responsibility or liability at stake. The motivations of, and outcomes for, the consumer therefore need to be understood from an entirely different perspective from that of the clinical mind or the technology designer if we want the consumer to use the technology at all, let alone correctly. The issue can be described as one of ‘compliance’. This chapter has considered various diagnostic digital health technologies, and here we will review two of them again in the context of user compliance (see Box 8.5).
What all this means is that digital health technologies that can assist the diagnostic process offer an ever-increasing standard of health and clinical symptomology knowledge. However, the capacity to gather useful information depends on the functionality of these technologies, which is a challenge of design, along with the purpose of these tools, which is a challenge of implementation. Effective implementation of these technologies offers a genuine practice of person-centred care that understands the individual nature of all health and social care. Coupled with computational maturity that exists today, person-centred technologies have great potential to advance our knowledge of human health, starting at a biomedical understanding of symptomology and ending with the clinical judgment that remains key to the diagnostic decision that defines quality of care.
As diagnostic technology in digital health continues to evolve, we consider the potential directions this field will take in the coming years. Across the ideas discussed, we will see various ways in which digital technologies can increase coverage and outreach with reduced cost and footprint of the devices used.
The current landscape involves an array of independent devices, each with proprietary interfaces and siloed data storage. There is likely to be a move towards the incorporation of data standards, allowing output of data onto a single platform. This will allow integration of data from various devices, linked into a single user profile.
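As a sketch of what that integration might look like in code, the following minimal Python example normalises observations from two hypothetical device payloads into one shared shape under a single user profile. Real systems would typically use an interoperability standard such as HL7 FHIR’s Observation resource; the field names and payloads here are invented for illustration.

```python
# Minimal sketch: normalising device data into one shared observation shape.
from dataclasses import dataclass

@dataclass
class Observation:
    user_id: str
    code: str        # what was measured, e.g. "heart_rate"
    value: float
    unit: str
    source: str      # which device reported it

def from_watch(user_id: str, payload: dict) -> Observation:
    # Hypothetical smart-watch payload: {"hr": <beats per minute>}
    return Observation(user_id, "heart_rate", payload["hr"], "bpm", "watch")

def from_glucose_monitor(user_id: str, payload: dict) -> Observation:
    # Hypothetical continuous glucose monitor payload
    return Observation(user_id, "glucose", payload["mmol_per_L"], "mmol/L", "cgm")

# Observations from different devices, linked into a single user profile.
profile = [
    from_watch("user-1", {"hr": 72.0}),
    from_glucose_monitor("user-1", {"mmol_per_L": 5.4}),
]
for obs in profile:
    print(obs)
```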
The boundary between digital health technologies and other smart devices will blur; as interoperability becomes commonplace, diagnostic technologies may leverage other smart technology to improve capabilities and enhance the value delivered to users. There will also be a shift away from focusing only on isolated diagnostic technologies and thinking about how they are embedded in broader health and social care systems.
Diagnostic technologies will increasingly leverage ‘big data’ for predictive analytics and automated decisions.
Clinical studies have used relatively simple outcome measures such as mortality and length of illness. These are important measures but are relatively crude and may not adequately capture outcomes relevant to users, such as functional level or quality of life (e.g. users may survive an illness but be left with severe disability). Digital technologies and monitoring introduce a method to capture these PROMs (e.g. step counters to reflect activity levels), as well as patient-reported experience measures (PREMs) (e.g. real-time feedback, a POC ‘stress’ index).
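As a simple illustration of turning raw monitoring data into a PROM-style activity measure, the following minimal Python sketch aggregates step-counter readings into daily activity categories; the category boundaries and counts are invented for illustration.

```python
# Minimal sketch: step counts as a proxy activity-level measure.
def activity_category(daily_steps: int) -> str:
    """Map a raw daily step count to an illustrative activity category."""
    if daily_steps < 3000:
        return "low"
    if daily_steps < 8000:
        return "moderate"
    return "high"

week = {"Mon": 2100, "Tue": 6400, "Wed": 9800, "Thu": 4200}
summary = {day: activity_category(steps) for day, steps in week.items()}
print(summary)
# {'Mon': 'low', 'Tue': 'moderate', 'Wed': 'high', 'Thu': 'moderate'}
```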
The increased volume of data generated by individual users (or even small clusters of users) allows for the construction of unique profiles that may be correlated with particular risk profiles or likely responses to treatment. This may be driven by genomics, proteomics or the development of digital phenotypes from physiological measurements, and allows for the delivery of personalised therapies.
For example, pathogenic variants of the BRCA1 gene are known to clearly indicate risk of breast cancer and can be screened for easily. Biomarkers of other types are increasingly being screened for, to apply new knowledge regarding such profiles to improve treatment planning. Probable diagnosis is based on objective and available information from tests. Possible diagnosis, however, is further informed by subjective information, such as descriptions of pain/discomfort/sensory experience, or by new areas of research or knowledge that have not yet gathered enough data to be included as part of the probabilistic diagnostic process. The balance between the use of objective and subjective information will always be a challenge for the healthcare professional aiming to derive a diagnosis, especially in atypical or under-recognised presentations of illness.
While there has been an overwhelming optimism regarding the ability of technology to disrupt healthcare models and ‘revolutionise’ healthcare, a more balanced consideration of the role of digital technology is likely to develop.
The last few years have highlighted challenges in the health technology industry, with technology companies such as IBM and Google restructuring or winding down their health technology initiatives (Combs 2021, Moreno 2021). Other diagnostic tools that have been highly lauded for accuracy in one setting have also failed to demonstrate similar performance when deployed into new health and social care environments (Habib et al 2021).
The challenges of operationalising health technologies were highlighted during the COVID-19 pandemic. There was also significant momentum in using the pandemic as an opportunity to showcase the power of digital health. A vast array of AI-powered tools were developed; however, few (if any) of these tools have been properly evaluated and deployed into a clinical environment (Heaven 2021). Expensive automated contact-tracing apps failed to be successfully deployed, with the backbone of contact tracing in Australia falling onto a traditional manual check-in system based on QR (quick response) codes (White & Basshuysen 2021).
The future of digital technology for diagnostics will likely take a middle ground between relentless optimism and dismissive pessimism, with increasing uptake of technology into existing models of care, and potential extensions of those models, without necessarily requiring the large-scale ‘revolutionary’ changes that are sometimes touted.
The rapid growth in diagnostic technologies for digital health necessitates the development of an informed, rational approach to their development, evaluation and implementation. This chapter has provided a foundational overview of the diagnostic process and the role of technology within that process. It has outlined a range of approaches to evaluating diagnostic technology and highlighted key points in the implementation of diagnostic technology within a complex healthcare setting. Finally, it has provided some considerations on the future directions and developments in the field of diagnostic technologies within digital health.