Defining clinical characteristics and predictive factors of evolution of patients with COVID-19 across Europe
through Big Data, Artificial Intelligence (AI) and Natural Language Processing
Big Data in healthcare
Big Data defines a new way of generating knowledge, made possible by two complementary phenomena:
- Exponential accumulation of data, thanks to network collaboration (internet)
- Growing computing capacity to process it.
Big Data implies reusing data across different large-volume databases for exploratory purposes other than those for which they were originally structured and populated.
Big Data is an extension of Statistics: it manages virtually «the entire» set of events related to the studied phenomenon, interrelating a number of variables large enough to infer correlations where the human mind cannot.
Medical information is estimated to double every 5 years.
«Probably, no human activity generates as much data as healthcare»
AI in healthcare
AI describes the technology that allows computers to perform operations originally attributed uniquely to humans, such as pattern recognition, language, prediction…
The process by which machines learn after being shown numerous examples of the same element is known as «machine learning».
This computational task has become exponentially more efficient thanks to artificial neural networks that imitate those of the human brain, in what is called «deep learning».
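The idea of learning from examples can be sketched with the simplest possible artificial «neuron»: a perceptron that, shown only labelled examples of the logical AND function, adjusts its weights until it reproduces the rule. This is a minimal illustrative sketch in plain Python; all names, values and the training task are invented for the example.

```python
# Minimal sketch of «machine learning»: a single artificial neuron
# (perceptron) learns the logical AND function purely from labelled
# examples, without being explicitly programmed with the rule.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn two weights and a bias from (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            # Prediction: 1 if the weighted sum crosses the threshold
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            # Nudge weights in the direction that reduces the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Four examples of AND are enough for the neuron to learn the rule
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
print([predict(w, b, x1, x2) for (x1, x2), _ in examples])  # → [0, 0, 0, 1]
```

The same error-driven weight adjustment, stacked in many layers over millions of examples, is what «deep learning» scales up.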
The main applications of artificial intelligence in healthcare are:
- Medical natural language processing, which makes it possible to exploit the free or unstructured text of electronic health records.
- Identification of diagnostic or therapeutic patterns, which mirror the common practice of physicians but are derived from insights gained across the collective data sets.
- Predictive analysis, which enables the anticipation of clinical events from patient clusters, thereby stratifying their risk.
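As a hedged illustration of the last point, the sketch below groups a toy cohort of patients by two invented features (age and oxygen saturation) using a from-scratch k-means, so that each cluster can be read as a risk stratum. The data, the features and the cluster count are assumptions made purely for the example, not a clinical model.

```python
# Illustrative sketch of patient clustering for risk stratification:
# a tiny k-means written from scratch (no external libraries).

def kmeans(points, k, iters=50):
    """Cluster `points` (tuples of numbers) into k groups."""
    centroids = [list(p) for p in points[:k]]  # deterministic init: first k points
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to the nearest centroid
        for i, p in enumerate(points):
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            assign[i] = dists.index(min(dists))
        # Update step: each centroid moves to the mean of its members
        for j in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == j]
            if members:
                centroids[j] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centroids

# Invented cohort: (age, SpO2 %) — younger patients with normal
# saturation versus older patients with low saturation.
patients = [(34, 98), (29, 97), (41, 96), (72, 88), (80, 85), (68, 90)]
labels, centers = kmeans(patients, k=2)
print(labels)  # → [1, 1, 1, 0, 0, 0]
```

In this toy cohort the algorithm separates a low-risk stratum (young, well-saturated) from a high-risk one (older, hypoxaemic) without being told which is which.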
Electronic Medical Records (EMR) Reuse
The data recorded by clinicians during their usual practice generate a huge amount of valuable information: a representation of what happens to the cases actually attended in the real world, under the uncertain conditions of the environment.
A fundamental requirement for accelerating and expanding this extraction process is the implementation of electronic health record systems, which allow not only the sharing of information (interoperability among levels of care) but also its reuse.
We can divide the reuse of the EMR into images, numbers and text:
- Images: the most easily exploitable by artificial intelligence, with existing systems that already surpass human predictive and diagnostic capacity (e.g. Google DeepMind diagnosing diabetic retinopathy, melanoma, breast cancer, or pneumonia on chest X-ray…)
- Numbers: also with existing success cases today (prediction in the ICU, prediction of hospital mortality, readmissions, length of stay…)
- Text: the least structured and hardest to exploit, addressed in the next section.
Medical Natural Language Processing
It is simply not possible for the clinician to structure or codify every concept recorded in health records.
The necessary granularity of information cannot be achieved without free-text interpretation techniques capable of capturing all its richness. These techniques are grouped under a branch of artificial intelligence called «Natural Language Processing».
It combines several techniques to detect concepts and the relationships among them. Relying on linguistic tools, statistics, databases and medical knowledge, this technology is able to disambiguate the particularities of human language (spelling mistakes, different ways of expressing negation or speculation, use of acronyms, subjectivity, anaphora, subordinate clauses…) and determine which univocal concepts correspond to each clinical annotation.
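One of the particularities listed above, negation, can be illustrated with a deliberately simplified, NegEx-style trigger-word check: the question is whether a clinical concept mentioned in free text is actually affirmed or denied. The trigger list and context window below are invented for the example and are far smaller than what a real clinical NLP system would use.

```python
# Toy sketch of clinical negation detection: a concept is considered
# negated if a negation trigger appears within a few words before it.
NEGATION_TRIGGERS = ("no", "denies", "without", "negative for")

def is_negated(text, concept, window=5):
    """True if a negation trigger occurs within `window` words before `concept`."""
    words = text.lower().split()
    target = concept.lower().split()
    for i in range(len(words) - len(target) + 1):
        if words[i:i + len(target)] == target:
            # Pad with spaces so triggers match whole words only
            scope = " " + " ".join(words[max(0, i - window):i]) + " "
            return any(f" {trig} " in scope for trig in NEGATION_TRIGGERS)
    return False

print(is_negated("Patient denies fever and chills", "fever"))  # → True
print(is_negated("Patient presents with fever", "fever"))      # → False
```

Real systems must additionally handle the scope of the negation, speculation markers, misspelled triggers and cross-sentence anaphora, which is why statistical and knowledge-based resources are combined with rules like this one.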