Academic literature on the topic 'Disease Prediction and Monitoring Modelling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Disease Prediction and Monitoring Modelling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Disease Prediction and Monitoring Modelling"

1

Orakwue, Stella I., and Nkolika O. Nwazor. "Plant Disease Detection and Monitoring Using Artificial Neural Network." International Journal of Scientific Research and Management 10, no. 01 (January 3, 2022): 715–22. http://dx.doi.org/10.18535/ijsrm/v10i1.ec01.

Full text
Abstract:
Fungi have been identified as a major threat to crop production worldwide. In this study, methods of improving the performance of plant disease detection and prediction using artificial neural network techniques are presented. A hyperspectral fungal dataset covering 21 plant species was collected and trained using the backpropagation algorithm of an artificial neural network to improve on the conventional hyperspectral sensor. The system was modelled using self-defining equations and universal modelling diagrams and then implemented in the neural network toolbox in Matlab. The system was tested and validated, and the results showed a fungal detection accuracy of 96.61%, an improvement of 19.53%.
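The backpropagation training loop the abstract describes can be sketched with a bare-bones two-layer network in numpy. Everything below is illustrative: the synthetic "spectral" features, layer sizes and learning rate stand in for the study's hyperspectral data and its Matlab toolbox model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hyperspectral reflectance features:
# 200 samples, 10 bands; label 1 ("infected") when early-band energy is high.
X = rng.normal(size=(200, 10))
y = (X[:, :5].sum(axis=1) > 0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units, trained with plain batch gradient descent.
W1 = rng.normal(scale=0.5, size=(10, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1));  b2 = np.zeros(1)
lr = 0.5

for _ in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of mean cross-entropy w.r.t. each weight.
    dz2 = (p - y) / len(X)
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * h * (1 - h)
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The labels here are a linear function of the inputs, so even this tiny network separates them well; real hyperspectral disease data would need far more capacity and a held-out validation split.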
APA, Harvard, Vancouver, ISO, and other styles
2

Kaimi, I., and P. J. Diggle. "A hierarchical model for real-time monitoring of variation in risk of non-specific gastrointestinal infections." Epidemiology and Infection 139, no. 12 (February 9, 2011): 1854–62. http://dx.doi.org/10.1017/s0950268811000057.

Full text
Abstract:
SUMMARY: The AEGISS (Ascertainment and Enhancement of Disease Surveillance and Statistics) project uses spatio-temporal statistical methods to identify anomalies in the incidence of gastrointestinal infections in the UK. The focus of this paper is the modelling of temporal variation in incidence using data from the Southampton area in southern England. We identified and fitted a hierarchical stochastic model for the time series of daily incident cases to enable probabilistic prediction of temporal variation in risk, and demonstrated the resulting gains in predictive accuracy by comparison with a conventional analysis based on an over-dispersed Poisson log-linear regression model. We used Bayesian methods of inference in order to incorporate parameter uncertainty in our predictive inference of risk. Incorporation of our model in the overall spatio-temporal model will contribute to the accurate and timely prediction of unusually high food-poisoning incidence, and thus to the identification and prevention of future outbreaks.
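The over-dispersion that motivates the authors' hierarchical model can be illustrated with the simplest hierarchical count model, a gamma-Poisson mixture: daily rates are drawn from a gamma distribution and counts are Poisson around those rates. The parameters below are arbitrary, not fitted AEGISS values.

```python
import numpy as np

rng = np.random.default_rng(1)

mean_rate, shape = 20.0, 4.0   # illustrative values only

# Hierarchical (gamma-Poisson) daily case counts: variance exceeds the mean.
daily_rate = rng.gamma(shape, mean_rate / shape, size=5000)
hier_counts = rng.poisson(daily_rate)

# Plain Poisson counts with the same mean: variance ~= mean.
pois_counts = rng.poisson(mean_rate, size=5000)

# The dispersion ratio (variance / mean) reveals the over-dispersion a
# hierarchical model captures and a plain Poisson model cannot.
print("hierarchical dispersion:", hier_counts.var() / hier_counts.mean())
print("plain Poisson dispersion:", pois_counts.var() / pois_counts.mean())
```

For this mixture the theoretical dispersion is 1 + mean/shape = 6, versus 1 for the plain Poisson, which is why a log-linear Poisson baseline understates the predictive uncertainty of real surveillance counts.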
3

Wang, Y. P., N. H. Idris, F. M. Muharam, N. Asib, and Alvin M. S. Lau. "Comparison of different variable selection methods for predicting the occurrence of Metisa Plana in oil palm plantation using machine learning." IOP Conference Series: Earth and Environmental Science 1274, no. 1 (December 1, 2023): 012008. http://dx.doi.org/10.1088/1755-1315/1274/1/012008.

Full text
Abstract:
Monitoring and predicting the spatio-temporal distribution of crop pests and assessing related risks are crucial for effective pest management strategies. Machine learning techniques have shown potential in analysing agricultural data and providing accurate predictions. Variable selection plays a critical role in crop pest analysis by identifying the most informative and influential features that contribute to pest distribution and risk prediction. The current practice of choosing variable selection methods is mostly based on previous experience and may involve a certain degree of subjectivity. This paper aims to provide empirical comparisons of different variable selection methods for machine learning applications in crop pest spatio-temporal distribution and risk prediction. The study applied various variable selection methods, including filter methods (information gain, chi-square test, mutual information), wrapper methods (RFE), and embedded methods (Random Forest), using the bagworm pest (Metisa plana) in oil palm trees as the experimental subject. The initial set of variables included bioclimatic, vegetation index, and terrain variables. The experimental results indicated some overlap in the selected variables across different methods: bioclimatic variables (rainfall (RF), relative humidity (RH)) were selected as important by several methods, while variables deemed non-important, such as NDVI and elevation, clearly improved prediction accuracy when added to the ANN model. These empirical findings can guide relevant data monitoring in the prediction of crop pest and disease outbreaks. Additionally, the results can serve as a reference for variable selection in spatio-temporal prediction of pests and diseases in other agricultural and forestry crops.
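The three families of selection methods the abstract names (filter, wrapper, embedded) can be sketched with scikit-learn on synthetic data; the feature names below are placeholders, not the study's actual bioclimatic or terrain variables.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression

# Synthetic pest-occurrence data: 6 candidate predictors, 2 truly informative
# (with shuffle=False the informative features are the first two columns).
X, y = make_classification(n_samples=300, n_features=6, n_informative=2,
                           n_redundant=0, shuffle=False, random_state=0)
names = ["rainfall", "humidity", "temperature", "NDVI", "elevation", "slope"]

# Filter method: rank features by mutual information with the label.
mi = SelectKBest(mutual_info_classif, k=2).fit(X, y)

# Wrapper method: recursive feature elimination around a base model.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=2).fit(X, y)

# Embedded method: impurity-based importances from a random forest.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top_rf = np.argsort(rf.feature_importances_)[-2:]

for label, mask in [("filter", mi.get_support()),
                    ("wrapper", rfe.support_),
                    ("embedded", np.isin(np.arange(6), top_rf))]:
    print(label, [n for n, keep in zip(names, mask) if keep])
```

Comparing the three selected subsets, as the paper does for its real variables, shows where the methods agree and where each family's bias shows through.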
4

Sharma, V., S. K. Ghosh, and S. Khare. "A Proposed Framework for Surveillance of Dengue Disease and Prediction." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-M-1-2023 (April 21, 2023): 317–23. http://dx.doi.org/10.5194/isprs-archives-xlviii-m-1-2023-317-2023.

Full text
Abstract:
Recurring outbreaks of dengue during past decades have affected public health and burdened resource-constrained health systems across the world. Transmission of such diseases is driven by a combination of complex factors including vector dynamics, transmission mechanisms, environmental conditions, cultural behaviours, and public health policies. Modelling and predicting early outbreaks is the key to an effective response to control the spread of disease. In this study, a comprehensive framework is proposed to model dengue disease by integrating significant factors from different inputs, such as remote sensing, epidemiological data, and health infrastructure. This Dengue Disease Monitoring (DDM) framework provides a conceptual architecture for integrating different data sources, visualization and assessment of disease status, and prediction analysis. The developed model will help forewarn the public health administration about an outbreak so that interventions can be planned to limit the spread of dengue. Further, this forecasting model may be applied to manage existing public health resources for medical and health infrastructure, and to determine the efficacy of vector surveillance and intervention programmes.
5

Velasquez-Camacho, Luisa, Marta Otero, Boris Basile, Josep Pijuan, and Giandomenico Corrado. "Current Trends and Perspectives on Predictive Models for Mildew Diseases in Vineyards." Microorganisms 11, no. 1 (December 27, 2022): 73. http://dx.doi.org/10.3390/microorganisms11010073.

Full text
Abstract:
Environmental and economic costs demand a rapid transition to more sustainable farming systems, which are still heavily dependent on chemicals for crop protection. Despite their widespread application, powdery mildew (PM) and downy mildew (DM) continue to generate serious economic penalties for grape and wine production. To reduce these losses and minimize environmental impacts, it is important to predict infections with high confidence and accuracy, allowing timely and efficient intervention. This review provides an appraisal of the predictive tools for PM and DM in a vineyard, a specialized farming system characterized by high crop protection cost and increasing adoption of precision agriculture techniques. Different methodological approaches, from traditional mechanistic or statistic models to machine and deep learning, are outlined with their main features, potential, and constraints. Our analysis indicated that strategies are being continuously developed to achieve the required goals of ease of monitoring and timely prediction of diseases. We also discuss that scientific and technological advances (e.g., in weather data, omics, digital solutions, sensing devices, data science) still need to be fully harnessed, not only for modelling plant–pathogen interaction but also to develop novel, integrated, and robust predictive systems and related applied technologies. We conclude by identifying key challenges and perspectives for predictive modelling of phytopathogenic disease in vineyards.
6

Alodat, Iyas. "Analysing and predicting COVID-19 AI tracking using artificial intelligence." International Journal of Modeling, Simulation, and Scientific Computing 12, no. 03 (April 17, 2021): 2141005. http://dx.doi.org/10.1142/s1793962321410051.

Full text
Abstract:
In this paper, we discuss prediction methods to restrict the spread of the disease by tracking, via a mobile application, individuals who have been in contact with people infected with the COVID-19 virus. We track individuals using Bluetooth technology and save the information in a central database when they come into contact. Monitoring cases and avoiding infected persons supports social distancing. We also propose that sensors worn by people to obtain blood oxygen saturation level and body temperature be used alongside Bluetooth monitoring. The estimation of the frequency of the disease is based on the data entered by the patient and on the data gathered from users of the application on the state of the disease. In this study, we also propose a way to restrict the spread of COVID-19 by using artificial intelligence methods to predict the disease in Jordan using Tensorflow.
7

Helget, Lindsay N., David J. Dillon, Bethany Wolf, Laura P. Parks, Sally E. Self, Evelyn T. Bruner, Evan E. Oates, and Jim C. Oates. "Development of a lupus nephritis suboptimal response prediction tool using renal histopathological and clinical laboratory variables at the time of diagnosis." Lupus Science & Medicine 8, no. 1 (August 2021): e000489. http://dx.doi.org/10.1136/lupus-2021-000489.

Full text
Abstract:
Objective: Lupus nephritis (LN) is an immune complex-mediated glomerular and tubulointerstitial disease in patients with SLE. Prediction of outcomes at the onset of LN diagnosis can guide decisions regarding intensity of monitoring and therapy for treatment success. Currently, no machine learning model of outcomes exists. Several outcome-modelling works have used univariate or linear modelling but were limited by the heterogeneity of the disease. We hypothesised that a combination of renal pathology results and routine clinical laboratory data could be used to develop and cross-validate a clinically meaningful machine learning early decision support tool that predicts LN outcomes at approximately 1 year.
Methods: To address this hypothesis, patients with LN from a prospective longitudinal registry at the Medical University of South Carolina enrolled between 2003 and 2017 were identified if they had renal biopsies with International Society of Nephrology/Renal Pathology Society pathological classification. Clinical laboratory values at the time of diagnosis and outcome variables at approximately 1 year were recorded. Machine learning models were developed and cross-validated to predict suboptimal response.
Results: Five machine learning models predicted suboptimal response status under 10-fold cross-validation with receiver operating characteristic area under the curve values >0.78. The most predictive variables were interstitial inflammation, interstitial fibrosis, activity score and chronicity score from renal pathology, and urine protein-to-creatinine ratio, white blood cell count and haemoglobin from the clinical laboratories. A web-based tool was created for clinicians to enter these baseline clinical laboratory and histopathology variables to produce a probability score of suboptimal response.
Conclusion: Given the heterogeneity of disease presentation in LN, it is important that risk prediction models incorporate several data elements. This report provides, for the first time, a clinical proof-of-concept tool that uses the five most predictive models and simplifies understanding of them through a web-based application.
8

Chua, Felix, Rama Vancheeswaran, Adrian Draper, Tejal Vaghela, Matthew Knight, Rahul Mogal, Jaswinder Singh, et al. "Early prognostication of COVID-19 to guide hospitalisation versus outpatient monitoring using a point-of-test risk prediction score." Thorax 76, no. 7 (March 10, 2021): 696–703. http://dx.doi.org/10.1136/thoraxjnl-2020-216425.

Full text
Abstract:
Introduction: Risk factors of adverse outcomes in COVID-19 are defined, but stratification of mortality using non-laboratory measured scores, particularly at the time of prehospital SARS-CoV-2 testing, is lacking.
Methods: Multivariate regression with bootstrapping was used to identify independent mortality predictors in patients admitted to an acute hospital with a confirmed diagnosis of COVID-19. Predictions were externally validated in a large random sample of the ISARIC cohort (N=14 231) and a smaller cohort from Aintree (N=290).
Results: 983 patients (median age 70, IQR 53–83; in-hospital mortality 29.9%) were recruited over an 11-week study period. Through sequential modelling, a five-predictor score termed SOARS (SpO2, Obesity, Age, Respiratory rate, Stroke history) was developed to correlate COVID-19 severity across low, moderate and high strata of mortality risk. The score discriminated well for in-hospital death, with area under the receiver operating characteristic values of 0.82, 0.80 and 0.74 in the derivation, Aintree and ISARIC validation cohorts, respectively. Its predictive accuracy (calibration) in both external cohorts was consistently higher in patients with milder disease (SOARS 0–1), the same individuals who could be identified for safe outpatient monitoring. Prediction of a non-fatal outcome in this group was accompanied by high score sensitivity (99.2%) and negative predictive value (95.9%).
Conclusion: The SOARS score uses constitutive and readily assessed individual characteristics to predict the risk of COVID-19 death. Deployment of the score could potentially inform clinical triage in preadmission settings where expedient and reliable decision-making is key. The resurgence of SARS-CoV-2 transmission provides an opportunity to further validate and update its performance.
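As a rough illustration of how a point-of-test score of this kind works, a five-item risk count can be coded directly. The thresholds below are hypothetical placeholders, not the published SOARS cut-offs.

```python
# Hypothetical sketch of a SOARS-style five-item risk count.
# Thresholds are placeholders; consult the published score for real cut-offs.
def soars_style_score(spo2, obese, age, resp_rate, stroke_history):
    points = 0
    points += spo2 < 92             # S: low oxygen saturation
    points += bool(obese)           # O: obesity
    points += age >= 65             # A: older age
    points += resp_rate >= 24       # R: elevated respiratory rate
    points += bool(stroke_history)  # S: prior stroke
    return points

def risk_stratum(points):
    # Illustrative low / moderate / high banding of the 0-5 point total.
    return "low" if points <= 1 else "moderate" if points <= 3 else "high"

patient = soars_style_score(spo2=95, obese=False, age=58,
                            resp_rate=18, stroke_history=False)
print(patient, risk_stratum(patient))  # prints: 0 low
```

The appeal of such scores is exactly what this sketch shows: every input is observable at the point of test, with no laboratory turnaround in the loop.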
9

Masih, Adven, and Alexander N. Medvedev. "Evaluating the performance of support vector machines based on different kernel methods for forecasting air pollutants." Вестник ВГУ. Серия: Системный анализ и информационные технологии, no. 3 (September 30, 2020): 5–14. http://dx.doi.org/10.17308/sait.2020.3/3035.

Full text
Abstract:
The alarming level of air pollution in urban centres is an urgent threat to human health. Its consequences can be measured in terms of health issues experienced by children, an increasing number of heart and lung diseases, and, most importantly, the number of pollution-related deaths. That is why a lot of attention has recently been paid to air pollution monitoring and prediction modelling. In order to develop prediction models, the study uses Support Vector Machines (SVM) with linear, polynomial, radial basis function, normalised polynomial, and Pearson VII function kernels to predict the hourly concentration of pollutants in the air. The paper analyses a monitoring dataset of air pollutants and meteorological parameters as input variables to predict the concentrations of various air pollutants. The prediction performance of the models was assessed using evaluation metrics, namely the correlation coefficient, root mean squared error, relative absolute error, and relative root squared error. To validate the model, the accuracy of the predictive algorithm was tested against two widely applied regression approaches, multilayer perceptron and linear regression. Furthermore, a back-check prediction test was performed to examine the consistency of the models. According to the results, the Pearson VII function and normalised polynomial kernels yield the most accurate results in terms of the correlation coefficient and error values for predicting the concentrations of atmospheric pollutants, as compared to other SVM kernels and traditional prediction models.
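A small experiment in the spirit of the paper's kernel comparison, assuming scikit-learn's SVR. The synthetic data stand in for the pollutant and meteorological series, and only the three kernels scikit-learn ships are shown; the normalised-polynomial and Pearson VII kernels the paper tests are Weka-specific.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in for meteorological inputs -> pollutant concentration;
# the target is deliberately nonlinear so kernel choice matters.
X = rng.uniform(-2, 2, size=(400, 3))      # e.g. wind, temperature, humidity
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit one SVR per kernel and compare held-out RMSE.
rmses = {}
for kernel in ["linear", "poly", "rbf"]:
    model = SVR(kernel=kernel).fit(X_tr, y_tr)
    rmses[kernel] = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{kernel:>6} kernel RMSE: {rmses[kernel]:.3f}")
```

On a nonlinear target like this, the RBF kernel should beat the linear one by a clear margin, mirroring how the paper's nonlinear kernels outperform the linear baseline.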
10

Mrara, Busisiwe, Fathima Paruk, Constance Sewani-Rusike, and Olanrewaju Oladimeji. "Development and validation of a clinical prediction model of acute kidney injury in intensive care unit patients at a rural tertiary teaching hospital in South Africa: a study protocol." BMJ Open 12, no. 7 (July 2022): e060788. http://dx.doi.org/10.1136/bmjopen-2022-060788.

Full text
Abstract:
Introduction: Acute kidney injury (AKI) is a decline in renal function lasting hours to days. The rising global incidence of AKI, and the associated costs of renal replacement therapy, make it a public health priority. With the only therapeutic option being supportive therapy, prevention and early diagnosis will facilitate timely interventions to prevent progression to chronic kidney disease. While many factors have been identified as predictive of AKI, none have shown adequate sensitivity or specificity on their own. Many tools have been developed in developed-country cohorts with higher rates of non-communicable disease, and few have been validated and practically implemented. The development and validation of a predictive tool incorporating clinical, biochemical and imaging parameters, as well as quantification of their impact on the development of AKI, should make timely and improved prediction of AKI possible. This study is positioned to develop and validate an AKI prediction tool in critically ill patients at a rural tertiary hospital in South Africa.
Method and analysis: Critically ill patients will be followed from admission until discharge or death. Risk factors for AKI will be identified and their impact quantified using statistical modelling. Internal validation of the developed model will be done on separate patients admitted at a different time. Furthermore, patients developing AKI will be monitored for 3 months to assess renal recovery and quality of life. The study will also explore the utility of endothelial monitoring using the biomarker Syndecan-1 and capillary leak measurements in predicting persistent AKI.
Ethics and dissemination: The study has been approved by the Walter Sisulu University Faculty of Health Science Research Ethics and Biosafety Committee (WSU No. 005/2021) and the Eastern Cape Department of Health Research Ethics (approval number: EC 202103006). The findings will be shared with facility management and presented at relevant conferences and seminars.

Dissertations / Theses on the topic "Disease Prediction and Monitoring Modelling"

1

Nimlang, Nanlok Henry. "Modélisation et prévision du risque de maladie à l'aide de la télédétection et du SIG : Application aux cas de paludisme au Nigeria." Electronic Thesis or Diss., IMT Mines Alès, 2024. http://www.theses.fr/2024EMAL0004.

Full text
Abstract:
Throughout the thesis, the research aims, objectives and research questions presented are followed by detailed analyses, processes and methods used to achieve them. This takes the form of contributions that aim to answer the research questions through their respective methods and results. The contributions presented in this thesis fall mainly into two areas: identification of the spatial risk-factor parameters (ecological, meteorological, socio-economic, and epidemiological), and geostatistical and geospatial analysis, modelling, justification, and validation. The first contribution, geospatial modelling of malaria risk factors, uses expert intuition to determine the relative importance of risk factors such as ecological, meteorological, socio-economic, and epidemiological data. This model allows the assessment of the spatial distribution of malaria within the study area and a comprehensive understanding of the complex interactions between vectors, host and environment, with the aim of developing an effective tool specifically tailored to effective management decisions in the context of introducing vector control measures and strategies. This contribution aims to develop a model that can be used to examine the relationship between environmental variables and their causative incidences of the disease. This will be used to understand the spatial spread of the risk, develop early warning systems, build an appropriate intervention mechanism and assess the transmission dynamics of the disease. The overall implementation of the Malaria Risk Model is aimed at better understanding the complexity of malaria transmission risk in the study area and in Nigeria as a whole.
Furthermore, by identifying endemic malaria-prone regions and using various risk-factor parameters covering the different domains of social composition, environment, climate and socio-economic activities, this thesis provides policymakers with the necessary tools to target malaria intervention measures for planning and implementation, alongside appropriate vector surveillance and optimal use of scarce resources. Lastly, given the lack of entomological data on vector distribution, the risk model can also help authorities identify the geographical regions where vector control programmes and surveillance should focus.
2

Marmara, Vincent Anthony. "Prediction of Infectious Disease outbreaks based on limited information." Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/24624.

Full text
Abstract:
The last two decades have seen several large-scale epidemics of international impact, including human, animal and plant epidemics. Policy makers face health challenges that require epidemic predictions based on limited information. There is therefore a pressing need to construct models that allow us to frame all available information to predict an emerging outbreak and to control it in a timely manner. The aim of this thesis is to develop an early-warning modelling approach that can predict emerging disease outbreaks. Based on Bayesian techniques ideally suited to combining information from different sources into a single modelling and estimation framework, I developed a suite of approaches to epidemiological data that can deal with data from different sources and of varying quality. The SEIR model, a particle filter algorithm and a number of influenza-related datasets were utilised to examine various models and methodologies to predict influenza outbreaks. The data included a combination of consultations and diagnosed influenza-like illness (ILI) cases for five influenza seasons. I showed that for the pandemic season, different proxies lead to similar behaviour of the effective reproduction number. For influenza datasets, there exists a strong relationship between consultation and diagnosed datasets, especially when considering time-dependent models. Individual parameters for different influenza seasons provided similar values, thereby offering an opportunity to utilise such information in future outbreaks. Moreover, my findings showed that when the temperature drops below 14°C, this triggers the first substantial rise in the number of ILI cases, highlighting that temperature data is an important signal to trigger the start of the influenza epidemic. Further probing was carried out among Maltese citizens and estimates of the under-reporting rate of seasonal influenza were established.
Based on these findings, a new epidemiological model and framework were developed, providing accurate real-time forecasts with a clear early warning signal to the influenza outbreak. This research utilised a combination of novel data sources to predict influenza outbreaks. Such information is beneficial for health authorities to plan health strategies and control epidemics.
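The SEIR compartmental dynamics underpinning the thesis's models can be sketched with a simple Euler integration; the rates below are illustrative, not the thesis's fitted influenza parameters.

```python
# SEIR compartments: Susceptible, Exposed, Infectious, Recovered.
# Parameters are illustrative only.
beta, sigma, gamma = 0.6, 1 / 3, 1 / 4  # transmission, incubation, recovery
N, dt, days = 1_000_000, 0.1, 120

S, E, I, R = N - 10.0, 0.0, 10.0, 0.0
peak_I, peak_day = I, 0.0
for step in range(int(days / dt)):
    new_exposed = beta * S * I / N * dt      # S -> E
    new_infectious = sigma * E * dt          # E -> I
    new_recovered = gamma * I * dt           # I -> R
    S -= new_exposed
    E += new_exposed - new_infectious
    I += new_infectious - new_recovered
    R += new_recovered
    if I > peak_I:
        peak_I, peak_day = I, step * dt

R0 = beta / gamma
print(f"R0 = {R0:.1f}, epidemic peaks near day {peak_day:.0f} "
      f"with {peak_I:.0f} infectious")
```

In the thesis's setting the state of such a model is not observed directly; a particle filter is layered on top to update the compartment estimates from noisy consultation and ILI counts, which this deterministic sketch omits.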
3

Memedi, Mevludin. "Mobile systems for monitoring Parkinson's disease." Doctoral thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:du-13797.

Full text
Abstract:
A challenge for the clinical management of Parkinson's disease (PD) is the large within- and between-patient variability in symptom profiles as well as the emergence of motor complications which represent a significant source of disability in patients. This thesis deals with the development and evaluation of methods and systems for supporting the management of PD by using repeated measures, consisting of subjective assessments of symptoms and objective assessments of motor function through fine motor tests (spirography and tapping), collected by means of a telemetry touch screen device. One aim of the thesis was to develop methods for objective quantification and analysis of the severity of motor impairments being represented in spiral drawings and tapping results. This was accomplished by first quantifying the digitized movement data with time series analysis and then using them in data-driven modelling for automating the process of assessment of symptom severity. The objective measures were then analysed with respect to subjective assessments of motor conditions. Another aim was to develop a method for providing comparable information content as clinical rating scales by combining subjective and objective measures into composite scores, using time series analysis and data-driven methods. The scores represent six symptom dimensions and an overall test score for reflecting the global health condition of the patient. In addition, the thesis presents the development of a web-based system for providing a visual representation of symptoms over time allowing clinicians to remotely monitor the symptom profiles of their patients. The quality of the methods was assessed by reporting different metrics of validity, reliability and sensitivity to treatment interventions and natural PD progression over time. 
Results from two studies demonstrated that the methods developed for the fine motor tests had good metrics indicating that they are appropriate to quantitatively and objectively assess the severity of motor impairments of PD patients. The fine motor tests captured different symptoms; spiral drawing impairment and tapping accuracy related to dyskinesias (involuntary movements) whereas tapping speed related to bradykinesia (slowness of movements). A longitudinal data analysis indicated that the six symptom dimensions and the overall test score contained important elements of information of the clinical scales and can be used to measure effects of PD treatment interventions and disease progression. A usability evaluation of the web-based system showed that the information presented in the system was comparable to qualitative clinical observations and the system was recognized as a tool that will assist in the management of patients.
4

Ahmed, Siraj. "Prediction of Rate of Disease Progression in Parkinson’s Disease Patients Based on RNA-Sequence Using Deep Learning." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41411.

Full text
Abstract:
The advent of recent high-throughput sequencing technologies has produced a largely unexplored wealth of genomic and transcriptomic data that might help answer various research questions about Parkinson's disease (PD) progression. While the literature describes various predictive models that use longitudinal clinical data for disease progression, there is no predictive model based on RNA-Seq data from PD patients. This study investigates how to predict PD progression at a patient's next medical visit by capturing longitudinal temporal patterns in the RNA-Seq data. Data provided by the Parkinson's Progression Markers Initiative (PPMI) include 423 PD patients with a variable number of visits over a period of 4 years. We propose a predictive model based on a Recurrent Neural Network (RNN) with dense connections. The results show that the proposed architecture is able to predict PD progression from high-dimensional RNA-Seq data with a Root Mean Square Error (RMSE) of 6.0 and a rank-order correlation of (r=0.83, p<0.0001) between the predicted and actual disease status of PD. We show empirical evidence that the addition of dense connections and batch normalization into RNN layers boosts its training and generalization capability.
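A bare-bones numpy forward pass gives the flavour of a recurrent model with a dense skip connection over visit-level expression vectors. The dimensions and random weights are toy values, not the trained PPMI model, and training, batch normalization and the real gene count are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

n_genes, hidden = 50, 16                 # toy sizes; real RNA-seq has tens of
visits = rng.normal(size=(4, n_genes))   # thousands of genes; 4 yearly visits

# Vanilla RNN cell plus a dense (skip) connection into the readout:
# the prediction sees both the final hidden state and the last visit's
# input, echoing the densely connected recurrent layers described above.
Wx = rng.normal(scale=0.1, size=(n_genes, hidden))
Wh = rng.normal(scale=0.1, size=(hidden, hidden))
w_out = rng.normal(scale=0.1, size=(hidden + n_genes,))

h = np.zeros(hidden)
for x in visits:                         # recurrent update over visits
    h = np.tanh(x @ Wx + h @ Wh)

# Readout over [hidden state, last input]: the skip connection lets the
# final visit influence the prediction directly, not only through h.
score = np.concatenate([h, visits[-1]]) @ w_out
print(f"predicted progression score: {score:.3f}")
```

The skip connection is the design point worth noting: gradients (and information) from the last visit reach the readout without passing through every recurrent step.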
APA, Harvard, Vancouver, ISO, and other styles
5

Cresswell, Mark Philip. "Developing an integrated approach to epidemic forecasting, through the monitoring and prediction of meteorological variables associated with disease." Thesis, University of Liverpool, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kavanagh, Madeline. "The Rational Design of LRRK2 Inhibitors for Parkinson's Disease." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9762.

Full text
Abstract:
Parkinson’s disease is a chronic neurodegenerative disorder that affects 1-2% of the world’s population over the age of 65. Current treatments that reduce the severity of symptoms cause numerous side-effects and lose efficacy over the course of disease progression. Leucine-rich repeat kinase 2 (LRRK2) is a novel drug target for the development of disease modifying therapeutics for Parkinson’s disease. LRRK2 mutants have elevated kinase activity and, as such, chemical inhibitors have therapeutic potential. The physiological benefits that arise from chemically inhibiting LRRK2 have been proven through the use of generic kinase inhibitors and more recently the selective benzodiazepinone compound LRRK2IN1. LRRK2IN1 is a highly potent inhibitor, exhibiting a half-maximal inhibitory concentration (IC50) of 9 nM in cellular assays. However, LRRK2IN1 is not biologically available in the brain because it has poor physicochemical and pharmacokinetic properties. In previous research we rationally designed a LRRK2IN1 analogue (IN1_G) that was predicted to have improved metabolic stability and blood-brain barrier permeability. Preliminary biological analysis indicated that both LRRK2IN1 and IN1_G potently inhibited LRRK2-associated neuro-inflammation in vitro. However, the high molecular weight, topological polar surface area and lipophilicity of LRRK2IN1 and IN1_G were predicted to be incompatible with functional activity in vivo. Structural modifications were thus required to optimise compounds as neuro-protective treatments for Parkinson’s disease. Biological evaluation of the structural components of LRRK2IN1 and IN1_G indicated that the aniline-bipiperidine 1 motif was a moderately potent inhibitor of neuro-inflammation, whilst the tricyclic diazepinone motif IN1_H had no anti-inflammatory efficacy. 
In the current research a series of truncated LRRK2IN1/IN1_G analogues were rationally designed to determine if the diazepinone motif could be replaced with low molecular weight bioisosteres without affecting functional potency. In silico property predictions and scoring functions were used to guide the design of truncated analogues. The Schrödinger suite programs LigPrep, QikProp and Marvin were used to predict the physicochemical and pharmacokinetic properties of analogues. The recently described central nervous system multi-parameter optimisation score was used to select analogues that were likely to possess favourable pharmacokinetic and safety profiles. Analogues were docked in a homology model of the LRRK2 kinase domain that was developed in our previous research. Analogues that conformed to the binding mode of known kinase inhibitors and were predicted by GLIDE to bind to the LRRK2 homology model with high affinity were prioritised for synthesis. Twenty analogues were synthesised using methods known in the literature. The substrate scope of Buchwald-Hartwig chemistry was explored. Novel “all-water” chemistry was employed to synthesise N-benzyl aniline analogues. Methodology recently developed in our group was used to synthesise diazepine and oxazepine analogues of IN1_H. Analogues were assessed for anti-inflammatory efficacy in two cell-based assays. Four truncated analogues — 25, 30, 31 and 39 — had equivalent functional efficacy to LRRK2IN1/IN1_G, inhibiting the secretion of pro-inflammatory cytokines from stimulated primary human microglia by more than 43% at concentrations of 1 µM. These analogues were all predicted to have improved pharmacokinetic properties compared to LRRK2IN1/IN1_G and are excellent candidates for further development. The synthetic intermediate 63 was found to be highly potent (57% inhibition of cytokine secretion at 1 µM), which has suggested options for the development of future analogues. 
The potency of analogues 25, 30, 31 and 39 indicated that the tricyclic diazepinone motif was not essential for anti-inflammatory efficacy. Analogues from this research have been used to identify a role for LRRK2 in the pathology of severe brain cancer glioblastoma. Although their mechanisms of action have not yet been determined, it is clear that analogues developed in this research have potential applications in the treatment of numerous disorders driven by an inflammatory microenvironment. Further optimisation of the analogues developed in this research will provide the first disease-modifying therapeutics for Parkinson’s disease.
APA, Harvard, Vancouver, ISO, and other styles
7

Revie, James Alexander Michael. "Model-based cardiovascular monitoring in critical care for improved diagnosis of cardiac dysfunction." Thesis, University of Canterbury. Mechanical Engineering, 2013. http://hdl.handle.net/10092/7876.

Full text
Abstract:
Cardiovascular disease is a large problem in the intensive care unit (ICU) due to its high prevalence in modern society. In the ICU, intensive monitoring is required to help diagnose cardiac and circulatory dysfunction. However, complex interactions between the patient, disease, and treatment can hide the underlying disorder. As a result, clinical staff must often rely on their skill, intuition, and experience to choose therapy, increasing variability in care and patient outcome. To simplify this clinical scenario, model-based methods have been created to track subject-specific, disease- and treatment-dependent changes in patient condition, using only clinically available measurements. The approach has been tested in two pig studies on acute pulmonary embolism and septic shock and in a human study on surgical recovery from mitral valve replacement. The model-based method was able to track known pathophysiological changes in the subjects and identified key determinants of cardiovascular health such as cardiac preload, afterload, and contractility. These metrics, which can otherwise be difficult to determine clinically, can be used to provide targets for goal-directed therapies, helping deliver the optimal level of therapy to the patient. Hence, this model-based approach provides a feasible and potentially practical means of improving patient care in the ICU.
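As an illustration of the general idea of subject-specific, model-based monitoring (not the thesis's actual cardiovascular model), the sketch below identifies the parameters of a simple two-element Windkessel circulation model from a simulated pressure signal. The model form, parameter values, inflow waveform, and grid-search identification are all assumptions chosen for illustration.

```python
import numpy as np

def simulate(R, C, Q, dt=0.01, P0=80.0):
    """Two-element Windkessel: C * dP/dt = Q(t) - P/R (Euler integration)."""
    P = np.empty(len(Q))
    P[0] = P0
    for k in range(1, len(Q)):
        P[k] = P[k - 1] + dt * (Q[k - 1] - P[k - 1] / R) / C
    return P

t = np.arange(0.0, 5.0, 0.01)
Q = 5.0 + 2.0 * np.sin(2 * np.pi * t)          # synthetic pulsatile inflow
P_meas = simulate(R=20.0, C=1.5, Q=Q)          # "measured" pressure

# Subject-specific identification: grid search over (R, C) minimising
# the squared error between model output and measurement.
best = min((np.sum((simulate(R, C, Q) - P_meas) ** 2), R, C)
           for R in np.linspace(10.0, 30.0, 21)
           for C in np.linspace(0.5, 3.0, 26))
_, R_hat, C_hat = best
print(R_hat, round(C_hat, 3))  # recovers R = 20.0, C = 1.5
```

The recovered resistance and compliance play the role of the clinically hard-to-measure determinants (afterload- and preload-like quantities) that the abstract says the method identifies from available measurements.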
APA, Harvard, Vancouver, ISO, and other styles
8

Van, Zyl Jacobus. "Modelling chaotic systems with neural networks : application to seismic event predicting in gold mines." Thesis, Stellenbosch : University of Stellenbosch, 2001. http://hdl.handle.net/10019.1/4580.

Full text
Abstract:
Thesis (MSc (Computer Science))-- University of Stellenbosch, 2001.
ENGLISH ABSTRACT: This thesis explores the use of neural networks for predicting difficult, real-world time series. We first establish and demonstrate methods for characterising, modelling and predicting well-known systems. The real-world system we explore is seismic event data obtained from a South African gold mine. We show that this data is chaotic. After preprocessing the raw data, we show that neural networks are able to predict seismic activity reasonably well.
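As a toy illustration of the workflow the abstract describes (characterise a chaotic series, then predict it), the sketch below uses the logistic map as a stand-in for the seismic series and a delay-embedding nearest-neighbour predictor in place of a neural network; the map parameter, embedding dimension, and train/test split are arbitrary choices.

```python
import numpy as np

# Logistic map in its chaotic regime, as a stand-in for the seismic series
x = np.empty(2000)
x[0] = 0.4
for k in range(1999):
    x[k + 1] = 3.9 * x[k] * (1.0 - x[k])

m = 3                                          # embedding dimension (arbitrary)
emb = np.stack([x[i:len(x) - m + i] for i in range(m)], axis=1)
train, test = emb[:1500], emb[1500:]
target = x[m + 1500:]                          # one-step-ahead truth

# Predict by analogy: find the closest past window, reuse its successor
preds = np.array([x[np.argmin(np.sum((train - v) ** 2, axis=1)) + m]
                  for v in test])
err = np.mean((preds - target) ** 2)
print(err < 0.05, err < np.var(x))
```

Short-horizon prediction works here because, although the series is chaotic, nearby states on the attractor evolve similarly for a short time, which is the same property the thesis exploits with neural networks.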
Integrated Seismic Systems International
APA, Harvard, Vancouver, ISO, and other styles
9

Westerlund, Per. "Condition measuring and lifetime modelling of disconnectors, circuit breakers and other electrical power transmission equipment." Doctoral thesis, KTH, Elektroteknisk teori och konstruktion, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214984.

Full text
Abstract:
The supply of electricity is important in modern society, so outages of the electric grid should be few and short, especially for the transmission grid. A summary of the history of the Swedish electrical system is presented. The objective is to be able to plan maintenance better by following the condition of the equipment. The risk matrix can be used to choose which component to maintain. The risk matrix is improved by adding a dimension, the uncertainty of the probability. The risk can be reduced along any dimension: better measurements, preventive maintenance or more redundancy. The number of dimensions can be reduced to two by following iso-risk lines calculated for the beta distribution. This thesis lists twenty surveys about circuit breakers and disconnectors, with statistics about the failures and the lifetime. It also presents about forty condition-measuring methods for circuit breakers and disconnectors, mostly applicable to the electric contacts and the mechanical parts. A method for scheduling thermography based on analysis of variance of the current is tried. Its aim is to reduce the uncertainty of thermography, and it is able to explain two thirds of the variation using the time of day, the day of the week and the week number as explanatory variables. However, the main problem remains, as the current is in general too low. A system with IR sensors has been installed at the nine contacts of six disconnectors with the purpose of avoiding outages for maintenance if the contacts are in a good condition. The measured temperatures are sent by radio and regressed against the square of the current, which was found to be the best exponent. The coefficient of determination $R^2$ is high, greater than 0.9. The higher the regression coefficient is, the more heat is produced at the contact, so this ranks the different contacts. Finally, a framework for lifetime modelling and condition measuring is presented.
Lifetime modelling consists in associating a distribution of time to failure with each subpopulation. Condition measuring means measuring a parameter and estimating its value in the future. If it exceeds a threshold, maintenance should be carried out. The effect of maintenance of the contacts is shown for four disconnectors. An extension of the risk matrix with uncertainty, a survey of statistics and condition monitoring methods, a system with IR sensors at contacts, a thermography scheduling method and a framework for lifetime modelling and condition measuring are presented. They can improve the planning of outages for maintenance.
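The regression scheme described (contact temperature against the square of the load current, with the fitted coefficient ranking the contacts by heat produced) can be sketched as follows; the currents, noise level, and per-contact coefficients are synthetic stand-ins, not measurements from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
I = rng.uniform(100, 600, size=200)           # load current in A (synthetic)

# Hypothetical heating coefficients: temperature rise = k * I^2 + noise,
# so a larger fitted k means more heat per ampere squared (a worse contact).
contacts = {"A": 4e-5, "B": 9e-5, "C": 6e-5}

def fit_k_r2(T, I):
    """Least-squares fit of T = k * I^2 through the origin, with R^2."""
    X = I ** 2
    k = np.sum(X * T) / np.sum(X * X)
    r2 = 1 - np.sum((T - k * X) ** 2) / np.sum((T - T.mean()) ** 2)
    return k, r2

ranking = []
for name, k_true in contacts.items():
    T = k_true * I ** 2 + rng.normal(0.0, 0.5, size=I.size)  # noisy sensor
    k_hat, r2 = fit_k_r2(T, I)
    ranking.append((k_hat, name))
ranking.sort(reverse=True)                    # hottest-running contact first
order = [name for _, name in ranking]
print(order)  # ['B', 'C', 'A']
```

Sorting by the fitted coefficient reproduces the thesis's point: the regression both confirms the square-law dependence (high $R^2$) and orders the contacts by maintenance need.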


APA, Harvard, Vancouver, ISO, and other styles
10

Zecchin, Chiara. "Online Glucose Prediction in Type-1 Diabetes by Neural Network Models." Doctoral thesis, Università degli studi di Padova, 2014. http://hdl.handle.net/11577/3423574.

Full text
Abstract:
Diabetes mellitus is a chronic disease characterized by dysfunctions of the normal regulation of glucose concentration in the blood. In Type 1 diabetes the pancreas is unable to produce insulin, while in Type 2 diabetes derangements in insulin secretion and action occur. As a consequence, glucose concentration often exceeds the normal range (70-180 mg/dL), with short- and long-term complications. Hypoglycemia (glycemia below 70 mg/dL) can progress from measurable cognition impairment to aberrant behaviour, seizure and coma. Hyperglycemia (glycemia above 180 mg/dL) predisposes to invalidating pathologies, such as neuropathy, nephropathy, retinopathy and diabetic foot ulcers. Conventional diabetes therapy aims at maintaining glycemia in the normal range by tuning diet, insulin infusion and physical activity on the basis of 4-5 daily self-monitoring of blood glucose (SMBG) measurements, obtained by the patient using portable minimally invasive lancing sensor devices. New scenarios in diabetes treatment have opened in the last 15 years, when minimally invasive continuous glucose monitoring (CGM) sensors, able to monitor glucose concentration in the subcutis continuously (i.e. with a reading every 1 to 5 min) over several days (7-10 consecutive days), entered clinical research. CGM allows tracking glucose dynamics much more effectively than SMBG, and glycemic time series can be used both retrospectively, e.g. to optimize metabolic control therapy, and in real-time applications, e.g. to generate alerts when glucose concentration exceeds the normal range thresholds, or in the so-called "artificial pancreas" as inputs of the closed-loop control algorithm. For real-time applications, the possibility of preventing critical events is clearly even more appealing than merely detecting them as they occur. This would be doable if glucose concentration were known in advance, approximately 30-45 min ahead in time.
The quasi-continuous nature of the CGM signal renders feasible the use of prediction algorithms which could allow the patient to take therapeutic decisions on the basis of future instead of current glycemia, possibly mitigating or avoiding imminent critical events. Since the introduction of CGM devices, various methods for short-time prediction of glucose concentration have been proposed in the literature. They are mainly based on black-box time series models, and the majority of them use only the history of the CGM signal as input. However, glucose dynamics are influenced by many factors, e.g. the quantity of ingested carbohydrates, administration of drugs including insulin, physical activity, stress and emotions, and inter- and intra-individual variability is high. For these reasons, prediction of the glucose time course is a challenging topic and results obtained so far may be improved. The aim of this thesis is to investigate the possibility of predicting future glucose concentration, in the short term, using new models based on neural networks (NN) exploiting, apart from CGM history, other available information. In particular, we first develop an original model which uses, as inputs, the CGM signal and information on the timing and carbohydrate content of ingested meals. The prediction algorithm is based on a feedforward NN in parallel with a linear predictor. Results are promising: the predictor outperforms widely used state-of-the-art techniques, and forecasts are accurate and allow obtaining a satisfactory time anticipation. Then we propose a second model, which exploits a different NN architecture, a jump NN, which combines the benefits of both the feedforward NN and the linear algorithm, obtaining performance similar to the previously developed predictor despite its simpler structure.
To conclude the analysis, information on the doses of injected insulin boluses is added as input to the jump NN, and the relative importance of every input signal in determining the NN output is investigated by developing an original sensitivity analysis. All the proposed predictors are assessed on real data of Type 1 diabetic patients, collected during the European FP7 project DIAdvisor. To evaluate the clinical usefulness of prediction in improving diabetes management, we also propose a new strategy to quantify, in an in silico environment, the reduction of hypoglycemia when alerts and the related therapy are triggered on the basis of prediction, obtained with our NN algorithm, instead of CGM. Finally, possible inclusion of additional pieces of information, such as physical activity, is investigated, though at a preliminary level. The thesis is organized as follows. Chapter 1 gives an introduction to the diabetes disease and current technologies for CGM, presents state-of-the-art techniques for short-time prediction of the glucose concentration of diabetic patients, and states the aim and the novelty of the thesis. Chapter 2 discusses NN paradigms from a theoretical point of view and specifies technical details common to the design and implementation of all the NN algorithms proposed in the following. Chapter 3 describes the first prediction model we propose, based on a NN in parallel with a linear algorithm. Chapter 4 presents an alternative, simpler architecture, based on a jump NN, and demonstrates its equivalence, in terms of performance, with the previously proposed algorithm. Chapter 5 further improves the jump NN, by adding new inputs and investigating their effective utility by a sensitivity analysis. Chapter 6 points out possible future developments, such as the possibility of exploiting information on physical activity, reporting also a preliminary analysis.
Finally, Chapter 7 describes the application of NN for generation of preventive hypoglycemic alerts and evaluates improvement of diabetes management in a simulated environment. Some concluding remarks end the thesis.
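A jump network, as described in the abstract, adds a direct linear input-to-output path to an ordinary feedforward hidden layer. The minimal forward-pass sketch below uses arbitrary sizes and random untrained weights purely to show the structure; the thesis trains such networks on CGM, meal, and insulin inputs, none of which are modelled here.

```python
import numpy as np

rng = np.random.default_rng(2)

def jump_nn(x, W1, b1, W2, Wlin, b2):
    """Jump network: a tanh hidden layer plus a direct linear
    input-to-output 'jump' connection, summed at the output node."""
    h = np.tanh(x @ W1 + b1)       # nonlinear feedforward path
    return h @ W2 + x @ Wlin + b2  # plus the linear shortcut

n_in, n_hid = 6, 3                 # e.g. six past CGM samples (assumed)
W1 = rng.normal(size=(n_in, n_hid)) * 0.1
b1 = np.zeros(n_hid)
W2 = rng.normal(size=(n_hid, 1)) * 0.1
Wlin = rng.normal(size=(n_in, 1)) * 0.1
b2 = np.zeros(1)

x = rng.normal(size=n_in)          # stand-in for recent CGM history
y = jump_nn(x, W1, b1, W2, Wlin, b2)

# With the nonlinear path zeroed, the model reduces exactly to a
# linear predictor -- the appealing property of the jump design.
y_lin = jump_nn(x, 0 * W1, b1, 0 * W2, Wlin, b2)
print(y.shape, np.allclose(y_lin, x @ Wlin))  # (1,) True
```

Because the linear path is built in, the hidden layer only has to learn the nonlinear residual, which is consistent with the abstract's claim of comparable performance from a simpler structure.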
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Disease Prediction and Monitoring Modelling"

1

National Symposium on Hydrology (India) (11th 2004 Roorkee, India). Water quality: Monitoring, modelling, and prediction. Edited by Jain C. K, Trivedi R. C, Sharma K. D, National Institute of Hydrology (India), and India. Central Pollution Control Board. New Delhi: Allied Publishers, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Plant Virus Epidemics: Monitoring, Modelling and Predicting Outbreaks. Academic Press, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

McLean, George D., Ronald G. Garrett, and William G. Ruesink, eds. Plant Virus Epidemics: Monitoring, Modelling and Predicting Outbreaks. Academic Press, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jain, C. K. Water Quality: Monitoring, Modelling and Prediction. Allied Publishers Pvt. Ltd., 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Madhu, G., Sandeep Kautish, A. Govardhan, and Avinash Sharma, eds. Emerging Computational Approaches in Telehealth and Telemedicine: A Look at The Post-COVID-19 Landscape. BENTHAM SCIENCE PUBLISHERS, 2022. http://dx.doi.org/10.2174/97898150792721220101.

Full text
Abstract:
This book gives an overview of innovative approaches in telehealth and telemedicine. The goal of the content is to inform readers about recent computer applications in e-health, including Internet of Things (IoT) and Internet of Medical Things (IoMT) technology. The 9 chapters will guide readers in determining the urgency to intervene in specific medical cases and in assessing risk to healthcare workers. The focus on telehealth, along with telemedicine, encompasses a broader spectrum of remote healthcare services for the reader to understand. Chapters cover the following topics: - A COVID-19 care system for virus precaution, prevention, and treatment - The Internet of Things (IoT) in Telemedicine - Artificial Intelligence for Remote Patient Monitoring systems - Machine Learning in Telemedicine - Convolutional Neural Networks for the detection and prediction of melanoma in skin lesions - COVID-19 virus contact tracing via mobile apps - IoT and Cloud convergence in healthcare - Lung cancer classification and detection using deep learning - Telemedicine in India. This book will assist students, academics, and medical professionals in learning about cutting-edge telemedicine technologies. It will also inform beginner researchers in medicine about upcoming trends, problems, and future research paths in telehealth and telemedicine for infectious disease control and cancer diagnosis.

Book chapters on the topic "Disease Prediction and Monitoring Modelling"

1

Urošević, Vladimir, Nikola Vojičić, Aleksandar Jovanović, Borko Kostić, Sergio Gonzalez-Martinez, María Fernanda Cabrera-Umpiérrez, Manuel Ottaviano, Luca Cossu, Andrea Facchinetti, and Giacomo Cappon. "BRAINTEASER Architecture for Integration of AI Models and Interactive Tools for Amyotrophic Lateral Sclerosis (ALS) and Multiple Sclerosis (MS) Progression Prediction and Management." In Digital Health Transformation, Smart Ageing, and Managing Disability, 16–25. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-43950-6_2.

Abstract:
The presented platform architecture, and its deployed implementation in real-life clinical and home care settings at four Amyotrophic Lateral Sclerosis (ALS) and Multiple Sclerosis (MS) study sites, integrates the novel working tools for improved disease management with the initial releases of the AI models for disease monitoring. The described robust, industry-standard, scalable platform is intended as a reference example of an integration approach based on loosely coupled APIs and open-standard, human-readable, language-independent interface specifications, and its successful baseline implementation supports further upcoming releases of additional and more advanced AI models and supporting pipelines (such as for ALS and MS progression prediction, patient stratification, and environmental exposure modelling).
2

Riccetti, M., M. Romano, and S. Talice. "Coastal Discharges Monitoring by an Airborne Remote Sensing System." In Water Pollution: Modelling, Measuring and Prediction, 369–80. Dordrecht: Springer Netherlands, 1991. http://dx.doi.org/10.1007/978-94-011-3694-5_26.

3

Mohanty, U. C., and Akhilesh Gupta. "Deterministic Methods for Prediction of Tropical Cyclone Tracks." In Modelling and Monitoring of Coastal Marine Processes, 141–70. Dordrecht: Springer Netherlands, 2008. http://dx.doi.org/10.1007/978-1-4020-8327-3_10.

4

Fusco, Terence, Yaxin Bi, Haiying Wang, and Fiona Browne. "Infectious Disease Prediction Modelling Using Synthetic Optimisation Approaches." In Communications in Computer and Information Science, 141–59. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-26636-3_7.

5

Neetoo, Hudaa, Yasser Chuttur, Azina Nazurally, Sandhya Takooree, and Nooreen Mamode Ally. "Crop Disease Prediction Using Multiple Linear Regression Modelling." In Soft Computing and its Engineering Applications, 312–26. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05767-0_25.

6

Kaur, Harmohanjeet, Pooja Shah, Samya Muhuri, and Suchi Kumari. "A Disease Prediction Framework Based on Predictive Modelling." In Data Science and Network Engineering, 271–83. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-6755-1_21.

7

Thodberg, Hans Henrik, Anders Juul, Jens Lomholt, David D. Martin, Oskar G. Jenni, Jon Caflisch, Michael B. Ranke, and Sven Kreiborg. "Adult Height Prediction Models." In Handbook of Growth and Growth Monitoring in Health and Disease, 27–57. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-1795-9_3.

8

Bansal, Devyanshi, Supriya Raheja, and Manoj Kumar. "Fatty Liver Disease Prediction: Using Machine Learning Algorithms." In Predictive Data Modelling for Biomedical Data and Imaging, 279–94. New York: River Publishers, 2024. http://dx.doi.org/10.1201/9781003516859-13.

9

Dai, Zili, and Yu Huang. "The State of the Art of SPH Modelling for Flow-slide Propagation." In Modern Technologies for Landslide Monitoring and Prediction, 155–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-45931-7_8.

10

Kanungo, D. P. "Ground Based Real Time Monitoring System Using Wireless Instrumentation for Landslide Prediction." In Landslides: Theory, Practice and Modelling, 105–20. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77377-3_6.


Conference papers on the topic "Disease Prediction and Monitoring Modelling"

1

Kommineni, Sivaram, Sanvitha Muddana, and Rajiv Senapati. "Explainable Artificial Intelligence based ML Models for Heart Disease Prediction." In 2024 3rd International Conference on Computational Modelling, Simulation and Optimization (ICCMSO), 160–64. IEEE, 2024. http://dx.doi.org/10.1109/iccmso61761.2024.00042.

2

Meena, Jaishree, Vivek Shukla, Amit Jain, Vishan Kumar Gupta, and Paras Jain. "IoT and BSN Applications for Real Time Monitoring and Disease Prediction." In 2024 OPJU International Technology Conference (OTCON) on Smart Computing for Innovation and Advancement in Industry 4.0, 1–5. IEEE, 2024. http://dx.doi.org/10.1109/otcon60325.2024.10688283.

3

Silagpo, Gilbert M., Elman John M. Cabacang, Ronald L. Ilustrisimo, Miguelito R. Inajada, and Jayson C. Jueco. "Monitoring and Prediction of Household Power Consumption using Internet of Things and ARIMA." In 2024 3rd International Conference on Computational Modelling, Simulation and Optimization (ICCMSO), 170–75. IEEE, 2024. http://dx.doi.org/10.1109/iccmso61761.2024.00044.

4

Rajliwall, Nitten S., Girija Chetty, and Rachel Davey. "Chronic disease risk monitoring based on an innovative predictive modelling framework." In 2017 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2017. http://dx.doi.org/10.1109/ssci.2017.8285257.

5

Fusco, Terence, Yaxin Bi, Haiying Wang, and Fiona Browne. "Synthetic Optimisation Techniques for Epidemic Disease Prediction Modelling." In 7th International Conference on Data Science, Technology and Applications. SCITEPRESS - Science and Technology Publications, 2018. http://dx.doi.org/10.5220/0006823800950106.

6

Ganesh, Chilukuri Sai, Royyala Sai Kishore, K. Varshith Goud, Bhaskerreddy Kethireddy, Saroja Kumar Rout, and S. Ranith Reddy. "Data-Driven Disease Prediction and Lifestyle Monitoring System." In 2024 1st International Conference on Cognitive, Green and Ubiquitous Computing (IC-CGU). IEEE, 2024. http://dx.doi.org/10.1109/ic-cgu58078.2024.10530759.

7

Li, Yanran, Yitong Liu, Jin Luo, and Xiao Sun. "Heart disease prediction model using tree-based method." In 2nd International Conference on Applied Mathematics, Modelling, and Intelligent Computing (CAMMIC 2022), edited by Chi-Hua Chen, Xuexia Ye, and Hari Mohan Srivastava. SPIE, 2022. http://dx.doi.org/10.1117/12.2639449.

8

Hirose, Hideo, Toru Nakazono, Masakazu Tokunaga, Takenori Sakumura, Sirajummonira Sumi, and Junaida Sulaiman. "Seasonal Infectious Disease Spread Prediction Using Matrix Decomposition Method." In 2013 Fourth International Conference on Intelligent Systems, Modelling and Simulation (ISMS 2013). IEEE, 2013. http://dx.doi.org/10.1109/isms.2013.9.

9

Sakellarios, Antonis I., Vasileios C. Pezoulas, Christos Bourantas, Katerina K. Naka, Lampros K. Michalis, Patrick W. Serruys, Gregg Stone, Hector M. Garcia-Garcia, and Dimitrios I. Fotiadis. "Prediction of atherosclerotic disease progression combining computational modelling with machine learning." In 2020 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) in conjunction with the 43rd Annual Conference of the Canadian Medical and Biological Engineering Society. IEEE, 2020. http://dx.doi.org/10.1109/embc44109.2020.9176435.

10

Priyanga, P., and N. C. Naveen. "Web Analytics Support System for Prediction of Heart Disease Using Naive Bayes Weighted Approach (NBwa)." In 2017 Asia Modelling Symposium (AMS). 11th International Conference on Mathematical Modelling & Computer Simulation. IEEE, 2017. http://dx.doi.org/10.1109/ams.2017.12.


Reports on the topic "Disease Prediction and Monitoring Modelling"

1

Van Lancker, V., L. Kint, G. Montereale-Gavazzi, N. Terseleer, V. Chademenos, T. Missiaen, R. De Mol, et al. How subsurface voxel modelling and uncertainty analysis contribute to habitat-change prediction and monitoring. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 2017. http://dx.doi.org/10.4095/305937.

2

Stebbing, Nicola, Claire Witham, Frances Beckett, Helen Webster, Lois Huggett, and David Thomson. Can we improve plume dispersal modelling for fire related emergency response operations by utilising short-range dispersion schemes? Met Office, September 2024. http://dx.doi.org/10.62998/wnnr5415.

Abstract:
Large fires that produce plumes of smoke and other contaminants can cause harm to both people and the environment. To support UK emergency responders, the Met Office Environmental Monitoring and Response Centre (EMARC) provides dedicated weather advice and forecasts of the plume in the form of CHEmical METeorological (CHEMET) reports. The plume’s expected location, extent and relative air concentrations of pollutants are predicted using the Numerical Atmospheric-dispersion Modelling Environment (NAME), which simulates the transport and dispersion of pollutants using numerical weather prediction data. During major fires, air quality monitoring equipment is deployed to confirm the presence of elevated concentrations of contaminants. We use ground-level air concentration measurements from multiple events to evaluate the operational set-up of NAME. We investigate both the output averaging depth used to calculate air concentrations and the use of three optional NAME schemes that are designed to improve the representation of short-range dispersal dynamics: the near-source scheme, the plume-rise scheme, and the urban scheme. We find that using the current operational output averaging depth of 100 m produces model air concentrations that compare best to point observations at the surface, and that using the near-source and urban schemes further improves the fit. However, using these more computationally expensive schemes has little impact on the modelled location and extent of the plume, suggesting they may offer no advantage over using the current operational set-up to produce CHEMETs. Using the plume-rise scheme strongly influences the predicted plume location, extent and surface concentrations. Further work is needed to understand whether its application is appropriate for simulating plumes from fires. 
We conclude that the current operational set-up can be maintained while the significance of the impact the optional schemes have on CHEMET plume dispersal forecasts is considered further.
3

Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.

Abstract:
Background
Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated to be 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning.
Evidence Check questions
This review aimed to address the following questions:
1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals?
2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals?
3. What are the main components of recent major lung cancer screening programs or trials?
4. What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)?
Summary of methods
The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which 8 were contemporary) and 64 original publications met the inclusion criteria set across the four questions.
Key findings
Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals?
There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials (of high-risk individuals) to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Cancer Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8% – 26.7%; P=0.004) relative reduction in mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility criteria were defined as people aged 55–74 years with a smoking history of ≥30 pack-years (years in which a smoker has consumed 20-plus cigarettes each day) and, for former smokers, ≥30 pack-years and having quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI, 1.2% – 13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), have found an even greater reduction of 26% (95% CI, 9% – 41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials. These seven trials demonstrated a significantly greater proportion of early stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, when considering results across mortality outcomes and early stage cancers diagnosed, LDCT screening is considered to be clinically effective.
Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals? The harms of LDCT lung cancer screening include false positive tests and the consequences of unnecessary invasive follow-up procedures for conditions that are eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings when compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as refinements are made to LDCT screening protocols through: i) the application of risk prediction models (e.g. the PLCOm2012), which enable a more accurate selection of the high-risk population through the use of specific criteria (beyond age and smoking history); ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules); and, iii) more judicious selection of patients for invasive procedures. Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regard to smoking cessation, there is no evidence to suggest screening participation invokes a false sense of assurance in smokers, nor a reduction in motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates, compared with the general population, suggest those who participate in screening trials may already be motivated to quit.
Question 3: What are the main components of recent major lung cancer screening programs or trials? There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised this into key groups to form a concise set of components for potential implementation of a national lung cancer screening program in Australia:
1. Identifying the high-risk population: recruitment, eligibility, selection and referral
2. Educating the public, people at high risk and healthcare providers; this includes creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making
3. Components necessary for health services to deliver a screening program:
a. Planning phase: e.g. human resources to coordinate the program, electronic data systems that integrate medical records information and link to an established national registry
b. Implementation phase: e.g. human and technological resources required to conduct LDCT examinations, interpretation of reports and communication of results to participants
c. Monitoring and evaluation phase: e.g. monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program
4. Data reporting and research, e.g. audit and feedback to multidisciplinary teams, reporting outcomes to enhance international research into LDCT screening
5. Incorporation of smoking cessation interventions, e.g. specific programs designed for LDCT screening or referral to existing community or hospital-based services that deliver cessation interventions.
Most original studies are single-institution evaluations that contain descriptive data about the processes required to establish and implement a high-risk population-based screening program.
Across all studies there is a consistent message as to the challenges and complexities of establishing LDCT screening programs to attract people at high risk who will receive the greatest benefits from participation. With regard to smoking cessation, evidence from one systematic review indicates the optimal strategy for incorporating smoking cessation interventions into a LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a ‘teachable moment’ for cessation advice, especially among those people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9)
Question 4: What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (in the presence of LDCT screening) as opposed to later (in the absence of LDCT screening). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies—five modelling studies, one discrete choice experiment and seven articles—that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in the settings of the US and Europe. Two studies—one from Australia and one from New Zealand—reported LDCT screening would not be cost-effective using NLST-like protocols.
We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will likely be updated with new data that reduce uncertainty about factors that influence modelling outcomes, including the findings of indeterminate nodules.
Gaps in the evidence
There is a large and accessible body of evidence as to the effectiveness (Q1) and harms (Q2) of LDCT screening for lung cancer. Nevertheless, there are significant gaps in the evidence about the program components that are required to implement an effective LDCT screening program (Q3). Questions about LDCT screening acceptability and feasibility were not explicitly included in the scope. However, as the evidence is based primarily on US programs and UK pilot studies, the relevance to the local setting requires careful consideration. The Queensland Lung Cancer Screening Study provides feasibility data about clinical aspects of LDCT screening but little about program design. The International Lung Screening Trial is still in the recruitment phase and findings are not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to “inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia”.(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about transferability of criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to “important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability”.(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia.
Yet they are part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to include new data from the NELSON trial and incorporate data about the costs of targeted- and immuno-therapies as these treatments become more widely available in Australia.