Academic literature on the topic 'TD2 prediction'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'TD2 prediction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "TD2 prediction"

1

Ramani, A. L. "Prediction of First Lactation Milk Yield on The Basis of Test Day Yield Using Multiple Linear Regression in Gir Cows." Indian Journal of Pure & Applied Biosciences 12, no. 3 (June 30, 2024): 33–36. http://dx.doi.org/10.18782/2582-2845.9086.

Abstract:
The test-day model is a method of choice for the study of milk yield traits, and this method is very important in countries like India, where herd size is generally smaller and a well-established milk recording system is lacking. The present investigation was carried out on 365 records of Gir cows maintained at a cattle breeding farm from 1986 to 2014 with the objective of predicting the first lactation milk yield using monthly test day milk records. Multiple linear regression (MLR) was used with the backward elimination method. The optimum equation had a total of five variables (test days), viz. TD1, TD2, TD3, TD4 and TD5. This equation gave a prediction accuracy of 77.71%. Therefore, it is concluded that first lactation 305-day milk yield can be predicted as early as the 5th month of lactation with a high degree of accuracy.
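Since the abstract sketches a standard backward-elimination multiple linear regression, a minimal Python illustration may help; the TD1-TD10 column names, the 0.05 significance threshold, and the file name are assumptions for illustration, not details taken from the paper.

```python
import pandas as pd
import statsmodels.api as sm

def backward_elimination(X: pd.DataFrame, y: pd.Series, alpha: float = 0.05):
    """Drop the least significant predictor until all p-values are below alpha."""
    features = list(X.columns)
    while features:
        model = sm.OLS(y, sm.add_constant(X[features])).fit()
        pvals = model.pvalues.drop("const")   # p-value per remaining test day
        worst = pvals.idxmax()
        if pvals[worst] < alpha:              # every predictor significant: stop
            return model, features
        features.remove(worst)                # eliminate the weakest test day
    raise ValueError("No predictor survived elimination")

# Hypothetical usage: monthly test-day yields TD1..TD10 predicting 305-day yield.
# df = pd.read_csv("gir_lactation.csv")
# model, kept = backward_elimination(df[[f"TD{i}" for i in range(1, 11)]], df["Y305"])
# print(kept, model.rsquared)
```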
2

Ríos, Rafael, Carmen Belén Lupiañez, Daniele Campa, Alessandro Martino, Joaquin Martínez-López, Manuel Martínez-Bueno, Judit Varkonyi, et al. "Type 2 diabetes-related variants influence the risk of developing multiple myeloma: results from the IMMEnSE consortium." Endocrine-Related Cancer 22, no. 4 (August 2015): 545–59. http://dx.doi.org/10.1530/erc-15-0029.

Abstract:
Type 2 diabetes (T2D) has been suggested to be a risk factor for multiple myeloma (MM), but the relationship between the two traits is still not well understood. The aims of this study were to evaluate whether 58 genome-wide association study (GWAS)-identified common variants for T2D influence the risk of developing MM and to determine whether predictive models built with these variants might help to predict the disease risk. We conducted a case–control study including 1420 MM patients and 1858 controls ascertained through the International Multiple Myeloma (IMMEnSE) consortium. Subjects carrying the KCNQ1 rs2237892 T allele or the CDKN2A-2B rs2383208 G/G, IGF1 rs35767 T/T and MADD rs7944584 T/T genotypes had a significantly increased risk of MM (odds ratio (OR) = 1.32–2.13) whereas those carrying the KCNJ11 rs5215 C, KCNJ11 rs5219 T and THADA rs7578597 C alleles or the FTO rs8050136 A/A and LTA rs1041981 C/C genotypes showed a significantly decreased risk of developing the disease (OR = 0.76–0.85). Interestingly, a prediction model including those T2D-related variants associated with the risk of MM showed a significantly improved discriminatory ability to predict the disease when compared to a model without genetic information (area under the curve (AUC) = 0.645 vs AUC = 0.629; P = 4.05×10⁻⁶). A gender-stratified analysis also revealed a significant gender effect modification for the ADAM30 rs2641348 and NOTCH2 rs10923931 variants (P(interaction) = 0.001 and 0.0004, respectively). Men carrying the ADAM30 rs2641348 C and NOTCH2 rs10923931 T alleles had a significantly decreased risk of MM whereas an opposite but not significant effect was observed in women (OR = 0.71 and 0.66 in men vs OR = 1.22 and 1.15 in women, respectively). These results suggest that T2D-related variants may influence the risk of developing MM and their genotyping might help to improve MM risk prediction models.
3

Klein, Matthias S., and Jane Shearer. "Metabolomics and Type 2 Diabetes: Translating Basic Research into Clinical Application." Journal of Diabetes Research 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/3898502.

Abstract:
Type 2 diabetes (T2D) and its comorbidities have reached epidemic proportions, with more than half a billion cases expected by 2030. Metabolomics is a fairly new approach for studying metabolic changes connected to disease development and progression and for finding predictive biomarkers to enable early interventions, which are most effective against T2D and its comorbidities. In metabolomics, the abundance of a comprehensive set of small biomolecules (metabolites) is measured, thus giving insight into disease-related metabolic alterations. This review shall give an overview of basic metabolomics methods and will highlight current metabolomics research successes in the prediction and diagnosis of T2D. We summarized key metabolites changing in response to T2D. Despite large variations in predictive biomarkers, many studies have replicated elevated plasma levels of branched-chain amino acids and their derivatives, aromatic amino acids and α-hydroxybutyrate ahead of T2D manifestation. In contrast, glycine levels and lysophosphatidylcholine C18:2 are depressed in both predictive studies and with overt disease. The use of metabolomics for predicting T2D comorbidities is gaining momentum, as are our approaches for translating basic metabolomics research into clinical applications. As a result, metabolomics has the potential to enable informed decision-making in the realm of personalized medicine.
4

Luo, Yufang, Zi Guo, Honghui He, Youbo Yang, Shaoli Zhao, and Zhaohui Mo. "Predictive Model of Type 2 Diabetes Remission after Metabolic Surgery in Chinese Patients." International Journal of Endocrinology 2020 (October 8, 2020): 1–13. http://dx.doi.org/10.1155/2020/2965175.

Abstract:
Introduction. Metabolic surgery is an effective treatment for type 2 diabetes (T2D). At present, there is no authoritative standard for predicting postoperative T2D remission in clinical use. In general, East Asian patients with T2D have a lower body mass index and worse islet function than westerners. We aimed to look for clinical predictors of T2D remission after metabolic surgery in Chinese patients, which may provide insights for patient selection. Methods. Patients with T2D who underwent metabolic surgery at the Third Xiangya Hospital between October 2008 and March 2017 were enrolled. T2D remission was defined as an HbA1c level below 6.5% and an FPG concentration below 7.1 mmol/L for at least one year in the absence of antidiabetic medications. Results. (1) Independent predictors of short-term T2D remission (1-2 years) were age and C-peptide area under the curve (C-peptide AUC); independent predictors of long-term T2D remission (4–6 years) were C-peptide AUC and fasting plasma glucose (FPG). (2) The optimal cutoff value for C-peptide AUC in predicting T2D remission was 30.93 ng/ml, with a specificity of 67.3% and sensitivity of 75.8% in the short term and with a specificity of 61.9% and sensitivity of 81.5% in the long term, respectively. The areas under the ROC curves are 0.674 and 0.623 in the short term and long term, respectively. (3) We used three variables (age, C-peptide AUC, and FPG) to construct a remission prediction score (ACF), a multidimensional 9-point scale, along which greater scores indicate a better chance of T2D remission. We compared our scoring system with other reported models (ABCD, DiaRem, and IMS). The ACF scoring system had the best distribution of patients and prognostic significance according to the ROC curves. Conclusion. Presurgery age, C-peptide AUC, and FPG are independent predictors of T2D remission after metabolic surgery. Among these, C-peptide AUC plays a decisive role in both short- and long-term remission prediction, and the optimal cutoff value for C-peptide AUC in predicting T2D remission was 30.93 ng/ml, with moderate predictive values. The ACF score is a simple reliable system that can predict T2D remission among Chinese patients.
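The ACF score itself is described only at a high level, so a toy sketch of a 9-point scale built from the three predictors may clarify the idea. Only the 30.93 ng/ml C-peptide AUC cutoff comes from the abstract; every band and point weight below is invented for illustration.

```python
def acf_score(age_years: float, cpeptide_auc_ng_ml: float, fpg_mmol_l: float) -> int:
    """Toy 9-point remission score from age, C-peptide AUC and FPG.

    Only the 30.93 ng/ml C-peptide cutoff is taken from the abstract; the age
    and FPG bands and the point weights here are hypothetical.
    """
    score = 0
    score += 3 if age_years < 40 else (2 if age_years < 50 else 1)        # assumed bands
    score += 3 if cpeptide_auc_ng_ml >= 30.93 else 1                      # cutoff from the paper
    score += 3 if fpg_mmol_l < 8.0 else (2 if fpg_mmol_l < 11.0 else 1)   # assumed bands
    return score  # ranges 3..9; higher suggests a better chance of remission

print(acf_score(38, 35.2, 7.4))  # -> 9 under these assumed bands
```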
5

Ortiz Zuñiga, Angel Michael, Rafael Simó, Octavio Rodriguez-Gómez, Cristina Hernández, Adrian Rodrigo, Laura Jamilis, Laura Campo, Montserrat Alegret, Merce Boada, and Andreea Ciudin. "Clinical Applicability of the Specific Risk Score of Dementia in Type 2 Diabetes in the Identification of Patients with Early Cognitive Impairment: Results of the MOPEAD Study in Spain." Journal of Clinical Medicine 9, no. 9 (August 24, 2020): 2726. http://dx.doi.org/10.3390/jcm9092726.

Abstract:
Introduction: Although the Diabetes Specific Dementia Risk Score (DSDRS) was proposed for predicting the risk of dementia at 10 years, its usefulness as a screening tool is unknown. For this purpose, the European consortium MOPEAD included the DSDRS within the specific strategy for screening of cognitive impairment in type 2 diabetes (T2D) patients attended in a third-level hospital. Material and Methods: T2D patients > 65 years, without known cognitive impairment, attended in a third-level hospital, were evaluated. As per the MOPEAD protocol, patients with MMSE ≤ 27 or DSDRS ≥ 7 were referred to the memory clinic for complete neuropsychological assessment. Results: 112 T2D patients were recruited. A total of 82 fulfilled the criteria for referral to the memory unit (43 of them declined referral: 48.8% for associated comorbidities, 37.2% for lack of interest, 13.95% for lack of social support). At the Fundació ACE's Memory Clinic, 34 cases (87.2%) of mild cognitive impairment (MCI) and 3 cases (7.7%) of dementia were diagnosed. The predictive value of DSDRS ≥ 7 as a screening tool for cognitive impairment was AUROC = 0.739, p = 0.024, 95% CI (0.609–0.825). Conclusions: We found a high prevalence of previously unknown cognitive impairment in T2D patients attended in a third-level hospital. The DSDRS was found to be a useful screening tool. The presence of associated comorbidities was the main reason for declining referral.
6

Vettoretti, Martina, Enrico Longato, Alessandro Zandonà, Yan Li, José Antonio Pagán, David Siscovick, Mercedes R. Carnethon, Alain G. Bertoni, Andrea Facchinetti, and Barbara Di Camillo. "Addressing practical issues of predictive models translation into everyday practice and public health management: a combined model to predict the risk of type 2 diabetes improves incidence prediction and reduces the prevalence of missing risk predictions." BMJ Open Diabetes Research & Care 8, no. 1 (July 2020): e001223. http://dx.doi.org/10.1136/bmjdrc-2020-001223.

Abstract:
Introduction: Many predictive models for incident type 2 diabetes (T2D) exist, but these models are not used frequently for public health management. Barriers to their application include (1) the problem of model choice (some models are applicable only to certain ethnic groups), (2) missing input variables, and (3) the lack of calibration. While (1) and (2) lead to missing predictions, (3) causes inaccurate incidence predictions. In this paper, a combined T2D risk model for public health management that addresses these three issues is developed. Research design and methods: The combined T2D risk model combines eight existing predictive models by weighted average to overcome the problem of missing incidence predictions. Moreover, the combined model implements a simple recalibration strategy in which the risk scores are rescaled based on the T2D incidence in the target population. The performance of the combined model was compared with that of the eight existing models using data from two test datasets extracted from the Multi-Ethnic Study of Atherosclerosis (MESA; n=1031) and the English Longitudinal Study of Ageing (ELSA; n=4820). Metrics of discrimination, calibration, and missing incidence predictions were used for the assessment. Results: The combined T2D model performed well in terms of both discrimination (concordance index: 0.83 on MESA; 0.77 on ELSA) and calibration (expected to observed event ratio: 1.00 on MESA; 1.17 on ELSA), similarly to the best-performing existing models. However, while the existing models yielded a large percentage of missing predictions (17%–45% on MESA; 63%–64% on ELSA), this was negligible with the combined model (0% on MESA, 4% on ELSA). Conclusions: Leveraging existing T2D predictive models from the literature, a simple approach based on risk score rescaling and averaging was shown to provide accurate and robust incidence predictions, overcoming the problems of recalibration and missing predictions in practical applications of predictive models.
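The recalibration-and-averaging scheme described above can be stated concretely. This is a minimal sketch under assumed interfaces (each base model returns a risk in [0, 1] or None when its required inputs are missing; the weights and per-model development incidences are taken as given) and is not the authors' code.

```python
from typing import Callable, Optional, Sequence

Model = Callable[[dict], Optional[float]]  # returns None when inputs are missing

def combined_risk(subject: dict, models: Sequence[Model], weights: Sequence[float],
                  model_incidence: Sequence[float], target_incidence: float) -> Optional[float]:
    """Rescale each available score to the target population's T2D incidence,
    then take a weighted average over the models that produced a prediction."""
    num, den = 0.0, 0.0
    for model, w, inc in zip(models, weights, model_incidence):
        risk = model(subject)
        if risk is None:          # e.g. the subject's ethnicity or inputs unsupported
            continue
        rescaled = min(1.0, risk * target_incidence / inc)  # naive incidence rescaling
        num += w * rescaled
        den += w
    return num / den if den > 0 else None  # None only if every model abstained
```

The appeal of this construction is exactly what the abstract claims: a subject gets a missing prediction only when all eight base models abstain, which is far rarer than any single model abstaining.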
7

Wen, Min, Song Yang, Augustin Vintzileos, Wayne Higgins, and Renhe Zhang. "Impacts of Model Resolutions and Initial Conditions on Predictions of the Asian Summer Monsoon by the NCEP Climate Forecast System." Weather and Forecasting 27, no. 3 (June 1, 2012): 629–46. http://dx.doi.org/10.1175/waf-d-11-00128.1.

Abstract:
A series of 60-day hindcasts by the Climate Forecast System (CFS) of the National Centers for Environmental Prediction is analyzed to understand the impacts of atmospheric model resolutions and initial conditions on predictions of the Asian summer monsoon. The experiments, for the time period 2002–06 and with 14 ensemble members, are conducted at resolutions of T62, T126, and T254. They are initialized every 5 days from May to August, using the operational global atmospheric data assimilation system and operational global ocean data assimilation. It is found that, in predicting the magnitude and the timing of monsoon rainfall over lands, high model resolutions overall perform better than lower model resolutions. The increase in prediction skills with model resolution is more apparent over South Asia than over Southeast Asia. The largest improvement is seen over the Tibetan Plateau, at least for precipitation. However, the increase in model resolution does not enhance the skill of the predictions over oceans. Overall, model resolution has larger impacts than do the initial conditions on predicting the development of the Asian summer monsoon in the early season. However, higher model resolutions such as T382 may be needed for the CFS to simulate and predict many features of the monsoon more realistically.
8

Kumar, Mukkesh, Li Ting Ang, Cindy Ho, Shu E. Soh, Kok Hian Tan, Jerry Kok Yen Chan, Keith M. Godfrey, et al. "Machine Learning–Derived Prenatal Predictive Risk Model to Guide Intervention and Prevent the Progression of Gestational Diabetes Mellitus to Type 2 Diabetes: Prediction Model Development Study." JMIR Diabetes 7, no. 3 (July 5, 2022): e32366. http://dx.doi.org/10.2196/32366.

Abstract:
Background: The increasing prevalence of gestational diabetes mellitus (GDM) is concerning, as women with GDM are at high risk of type 2 diabetes (T2D) later in life. The magnitude of this risk highlights the importance of early intervention to prevent the progression of GDM to T2D. Rates of postpartum screening are suboptimal, often as low as 13% in Asian countries. The lack of preventive care through structured postpartum screening in several health care systems and low public awareness are key barriers to postpartum diabetes screening. Objective: In this study, we developed a machine learning model for early prediction of postpartum T2D following routine antenatal GDM screening. The early prediction of postpartum T2D during prenatal care would enable the implementation of effective strategies for diabetes prevention interventions. To the best of our knowledge, this is the first study that uses machine learning for postpartum T2D risk assessment in antenatal populations of Asian origin. Methods: Prospective multiethnic data (Chinese, Malay, and Indian ethnicities) from 561 pregnancies in Singapore's most deeply phenotyped mother-offspring cohort study—Growing Up in Singapore Towards healthy Outcomes—were used for predictive modeling. The feature variables included were demographics, medical or obstetric history, physical measures, lifestyle information, and GDM diagnosis. Shapley values were combined with CatBoost tree ensembles to perform feature selection. Our game-theoretical approach for predictive analytics enables population subtyping and pattern discovery for data-driven precision care. The predictive models were trained using 4 machine learning algorithms: logistic regression, support vector machine, CatBoost gradient boosting, and artificial neural network. We used 5-fold stratified cross-validation to preserve the same proportion of T2D cases in each fold. Grid search pipelines were built to evaluate the best performing hyperparameters. Results: A high-performance prediction model for postpartum T2D comprising 2 midgestation features—midpregnancy BMI after gestational weight gain and diagnosis of GDM—was developed (BMI_GDM CatBoost model: AUC=0.86, 95% CI 0.72-0.99). Prepregnancy BMI alone was inadequate in predicting postpartum T2D risk (ppBMI CatBoost model: AUC=0.62, 95% CI 0.39-0.86). A 2-hour postprandial glucose test (BMI_2hour CatBoost model: AUC=0.86, 95% CI 0.76-0.96) showed a stronger postpartum T2D risk prediction effect compared to the fasting glucose test (BMI_Fasting CatBoost model: AUC=0.76, 95% CI 0.61-0.91). The BMI_GDM model was also robust when using a modified 2-point International Association of the Diabetes and Pregnancy Study Groups (IADPSG) 2018 criteria for GDM diagnosis (BMI_GDM2 CatBoost model: AUC=0.84, 95% CI 0.72-0.97). Total gestational weight gain was inversely associated with postpartum T2D outcome, independent of prepregnancy BMI and diagnosis of GDM (P=.02; OR 0.88, 95% CI 0.79-0.98). Conclusions: Midgestation weight gain effects, combined with the metabolic derangements underlying GDM during pregnancy, signal future T2D risk in Singaporean women. Further studies will be required to examine the influence of metabolic adaptations in pregnancy on postpartum maternal metabolic health outcomes. The state-of-the-art machine learning model can be leveraged as a rapid risk stratification tool during prenatal care. Trial Registration: ClinicalTrials.gov NCT01174875; https://clinicaltrials.gov/ct2/show/NCT01174875
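As a hedged outline of the pipeline the abstract names (CatBoost ensembles, Shapley-value feature importance, 5-fold stratified cross-validation), the sketch below uses placeholder file and column names and hyperparameters rather than the study's.

```python
import numpy as np
import pandas as pd
from catboost import CatBoostClassifier, Pool
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

# Placeholder feature and outcome names standing in for the cohort variables.
df = pd.read_csv("cohort.csv")
X, y = df[["midpreg_bmi", "gdm_diagnosis"]], df["postpartum_t2d"]

aucs = []
for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    model = CatBoostClassifier(iterations=300, depth=4, verbose=False)
    model.fit(X.iloc[train], y.iloc[train])
    aucs.append(roc_auc_score(y.iloc[test],
                              model.predict_proba(X.iloc[test])[:, 1]))
    # Per-sample Shapley values (last column is the bias term); their mean
    # absolute value per feature is one way to drive the selection step.
    shap = model.get_feature_importance(Pool(X.iloc[train], y.iloc[train]),
                                        type="ShapValues")
    ranking = np.abs(shap[:, :-1]).mean(axis=0)

print(f"mean AUC across folds: {np.mean(aucs):.2f}")
```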
9

Di Camillo, Barbara, Liisa Hakaste, Francesco Sambo, Rafael Gabriel, Jasmina Kravic, Bo Isomaa, Jaakko Tuomilehto, et al. "HAPT2D: high accuracy of prediction of T2D with a model combining basic and advanced data depending on availability." European Journal of Endocrinology 178, no. 4 (April 2018): 331–41. http://dx.doi.org/10.1530/eje-17-0921.

Abstract:
Objective: Type 2 diabetes arises from the interaction of physiological and lifestyle risk factors. Our objective was to develop a model for predicting the risk of T2D which could use various amounts of background information. Research design and methods: We trained a survival analysis model on 8483 people from three large Finnish and Spanish data sets, to predict the time until incident T2D. All studies included anthropometric data, fasting laboratory values, an oral glucose tolerance test (OGTT) and information on co-morbidities and lifestyle habits. The variables were grouped into three sets reflecting different degrees of information availability. Scenario 1 included background and anthropometric information; Scenario 2 added routine laboratory tests; Scenario 3 also added results from an OGTT. The predictive performance of these models was compared with the FINDRISC and Framingham risk scores. Results: The three models predicted T2D risk with an average integrated area under the ROC curve equal to 0.83, 0.87 and 0.90, respectively, compared with 0.80 and 0.75 obtained using the FINDRISC and Framingham risk scores. The results were validated on two independent cohorts. Glucose values, and particularly 2-h glucose during the OGTT (2h-PG), had the highest predictive value. Smoking, marital and professional status, waist circumference, blood pressure, age and gender were also predictive. Conclusions: Our models provide an estimation of the patient's risk over time and outperform the traditional FINDRISC and Framingham scores for prediction of T2D risk. Of note, the models developed in Scenarios 1 and 2 only exploited variables easily available at general patient visits.
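A sketch of the scenario comparison follows: fit one survival model per nested feature set and compare discrimination. The lifelines library, the column names, and the feature groupings are my assumptions; the paper's actual survival model may differ.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Nested feature sets mirroring the three scenarios (variable names assumed).
scenarios = {
    "S1_background": ["age", "sex", "bmi", "waist", "smoking"],
    "S2_plus_labs":  ["age", "sex", "bmi", "waist", "smoking",
                      "fasting_glucose", "hdl"],
    "S3_plus_ogtt":  ["age", "sex", "bmi", "waist", "smoking",
                      "fasting_glucose", "hdl", "ogtt_2h_glucose"],
}

df = pd.read_csv("cohort.csv")  # needs columns time_to_t2d and t2d_event
for name, cols in scenarios.items():
    cph = CoxPHFitter().fit(df[cols + ["time_to_t2d", "t2d_event"]],
                            duration_col="time_to_t2d", event_col="t2d_event")
    # Negative partial hazard: higher risk should mean shorter time to event.
    ci = concordance_index(df["time_to_t2d"],
                           -cph.predict_partial_hazard(df), df["t2d_event"])
    print(name, round(ci, 2))
```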
10

Zhu, Jianlong, Dehui Guo, Liying Liu, and Jing Zhong. "Serum Galectin-3 Predicts Mortality in Venoarterial Extracorporeal Membrane Oxygenation Patients." Cardiology Research and Practice 2023 (September 30, 2023): 1–8. http://dx.doi.org/10.1155/2023/3917156.

Abstract:
Objective. We investigated the potential use of galectin-3 (Gal-3) as a prognostic indicator for patients with cardiogenic shock and developed a predictive mortality model for venoarterial extracorporeal membrane oxygenation (VA-ECMO). Methods. We prospectively studied patients (survivors and nonsurvivors) who received VA-ECMO for cardiogenic shock from 2019 to 2021. We recorded baseline data, Gal-3, and B-type natriuretic peptide (BNP) before ECMO and 24–72 h after ECMO. We used multivariable logistic regression to analyze significant risk factors and construct a VA-ECMO death prediction model. Receiver operating characteristic (ROC) curves were plotted to assess the predictive efficacy of the model. Results. We enrolled 73 patients with cardiogenic shock who received VA-ECMO support; 38 (52.05%) died in hospital. The median age was 57 years (interquartile range (IQR): 48–67 years); the median duration of ECMO therapy was 5.8 days (IQR: 4.62–7.57 days); and the median intensive care unit stay was 19.04 days (IQR: 13.92–26.15 days). Compared with the nonsurvivors, survivors had lower acute physiology and chronic health evaluation (APACHE) II scores (p < 0.001), increased left ventricular ejection fraction (p < 0.05), lower Gal-3 levels at 24 and 72 h (both p = 0.001), lower BNP levels at 24 and 72 h (both p = 0.001), and higher platelet counts (p = 0.009). Further multivariable analysis showed that APACHE II score, BNP-T72, and Gal-3-T72 were independent risk factors for death in VA-ECMO patients. Gal-3 and BNP were positively correlated (p < 0.05) and decreased significantly during ECMO treatment. The areas under the ROC curve (AUC) for APACHE II score, Gal-3-T72, and BNP-T72 were 0.687, 0.799, and 0.723, respectively. We constructed a combined prediction model with an AUC of 0.884 (p < 0.01). Conclusion. Gal-3 may serve as a prognostic indicator for patients receiving VA-ECMO for cardiogenic shock. The combined early warning score is a simple and effective tool for predicting mortality in VA-ECMO patients.

Dissertations / Theses on the topic "TD2 prediction"

1

Dursoniah, Danilo. "Modélisation computationnelle de l’absorption intestinale du glucose pour la prédiction du diabète de Type 2." Electronic Thesis or Diss., Université de Lille (2022-....), 2024. http://www.theses.fr/2024ULILB023.

Abstract:
Research on type 2 diabetes (T2D) has so far predominantly focused on the role of pancreatic beta function and insulin sensitivity. Numerous indices, of varying precision and relevance, have been proposed to measure these factors. These indices are calculated using more or less complex models based on static fasting glucose data or dynamic oral glucose test data. Bariatric surgery has highlighted the existence of a third parameter that could potentially be a cause of T2D: intestinal glucose absorption (IGA). Unlike pancreatic beta function and insulin sensitivity, no index has yet been proposed to measure the effect of this parameter on T2D. Experimentally measuring intestinal glucose absorption requires access to the portal vein, which is practically impossible in humans. An experimental multi-tracer technique using labeled glucose has been proposed as an alternative, but it remains very difficult to implement and requires expertise that prevents its routine clinical use. It should also be noted that the modeling approaches proposed so far to predict the postprandial glucose response require this gold standard. The few existing models are only partially mechanistic and relatively complex. This thesis proposes to overcome these problems. Thus, as a first contribution, we initially reproduce the postprandial model of Dalla Man and the simulations from the reference article (Dalla Man et al., 2007). Since this model is exclusively described using ODEs, we have partially transcribed it into a system of chemical reactions to put the relevant physiological mechanisms into perspective. This implementation first allowed us to carry out reproducibility work - despite the absence of the original data from the reference article - and then to compare the model with our OBEDIAB clinical data, thus showing its limitations in terms of estimations and identifiability. As a second major contribution, to circumvent the use of the multi-tracer gold standard, we used D-xylose, a glucose analog, as a biomarker to directly observe IGA, available in our pre-clinical dataset from experiments conducted on minipigs. To our knowledge, we developed the first D-xylose model. This model was selected through parameter estimation on our datasets, followed by a practical identifiability analysis and a global sensitivity analysis. These analyses also allowed us to study the relative contributions of gastric emptying and intestinal absorption on the D-xylose dynamic profile. Finally, we explore the links between blood glucose modeling and postprandial D-xylose response modeling while considering the clinical applications and limitations of the D-xylose model. Keywords: Systems biology, modeling, chemical reaction networks, ordinary differential equations, parameter estimation, identifiability analysis, type 2 diabetes, D-xylose
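The parameter-estimation workflow summarized above (an ODE model fit to tracer concentration data) can be sketched briefly. The one-compartment absorption/elimination structure, the sampling times, and the observed values below are illustrative assumptions, not the thesis's D-xylose model.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def dxylose(y, t, ka, ke):
    """Toy two-state model: gut amount g absorbed into plasma x, then eliminated."""
    g, x = y
    return [-ka * g, ka * g - ke * x]

def residuals(theta, t_obs, x_obs, dose):
    ka, ke = theta
    traj = odeint(dxylose, [dose, 0.0], t_obs, args=(ka, ke))
    return traj[:, 1] - x_obs  # misfit between simulated and observed plasma levels

t_obs = np.array([0.0, 0.5, 1, 2, 3, 5])             # hours (hypothetical sampling)
x_obs = np.array([0.0, 8.1, 11.9, 10.2, 7.4, 3.1])   # plasma D-xylose (hypothetical)
fit = least_squares(residuals, x0=[1.0, 0.5], args=(t_obs, x_obs, 25.0),
                    bounds=([0, 0], [np.inf, np.inf]))
print("estimated ka, ke:", fit.x)
```

In the thesis's setting, the absorption rate (ka here) is the quantity of clinical interest, since it stands in for IGA; the identifiability and sensitivity analyses mentioned above ask whether the data actually constrain such parameters.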
2

Bostock, Adam K. "Prediction and reduction of traffic pollution in urban areas." Thesis, University of Nottingham, 1994. http://eprints.nottingham.ac.uk/14352/.

Abstract:
This thesis is the result of five years of research into road traffic emissions of air pollutants. It includes a review of traffic pollution studies and models, and a description of the PREDICT model suite and the PREMIT emissions model. These models were used to evaluate environmentally sensitive traffic control strategies, some of which were based on the use of Advanced Transport Telematics (ATT). This research has improved our understanding of traffic emissions. It studied emissions of the following pollutants: carbon monoxide (CO), hydrocarbons (HC) and oxides of nitrogen (NOx). PREMIT modelled emissions from each driving mode (cruise, acceleration, deceleration and idling) and, consequently, predicted relatively complex emission characteristics for some scenarios. Results suggest that emission models should represent emissions by driving mode, instead of using urban driving cycles or average speeds. Emissions of NOx were more complex than those of CO and HC. The change in NOx caused by a particular strategy could be similar or opposite to the changes in CO and HC. Similarly, for some scenarios, a reduction in stops and delay did not reduce emissions of NOx. It was also noted that the magnitude of changes in emissions of NOx was usually much smaller than the corresponding changes in CO and HC. In general, the traffic control strategies based on the adjustment of signal timings were not effective in reducing total network emissions. However, high emissions of pollutants on particular links could, potentially, be reduced by changing signal timings. For many links, mutually exclusive strategies existed for reducing emissions of CO and HC, and emissions of NOx. Hence, a decision maker may have to choose which pollutants are to be reduced and which can be allowed to increase. The environmental area licensing strategy gave relatively large reductions in emissions of all pollutants. This strategy was superior to the traffic signal timing strategies because it had no detrimental impact on the efficiency of the traffic network and gave simultaneous reductions in emissions of CO, HC and NOx.
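The abstract's central methodological point, computing emissions per driving mode rather than from average speeds, can be illustrated with a toy calculation; all emission factors and mode durations below are invented placeholders, not PREMIT's values.

```python
# Emission factors in g/s per driving mode (purely illustrative numbers).
FACTORS = {
    "cruise":     {"CO": 0.010, "HC": 0.0012, "NOx": 0.0030},
    "accelerate": {"CO": 0.025, "HC": 0.0030, "NOx": 0.0065},
    "decelerate": {"CO": 0.006, "HC": 0.0009, "NOx": 0.0010},
    "idle":       {"CO": 0.004, "HC": 0.0007, "NOx": 0.0004},
}

def link_emissions(seconds_in_mode: dict) -> dict:
    """Sum mode-specific emissions over a link, in the spirit of PREMIT's
    per-mode decomposition (cruise, acceleration, deceleration, idling)."""
    totals = {"CO": 0.0, "HC": 0.0, "NOx": 0.0}
    for mode, secs in seconds_in_mode.items():
        for pollutant, rate in FACTORS[mode].items():
            totals[pollutant] += rate * secs
    return totals

# One vehicle spending 40 s cruising, 8 s accelerating, 8 s decelerating, 20 s idling.
print(link_emissions({"cruise": 40, "accelerate": 8, "decelerate": 8, "idle": 20}))
```

A signal-timing strategy that trades idling for extra acceleration can then raise NOx while lowering CO, which is the kind of opposing behaviour the abstract reports.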
3

Reynolds, Shirley Anne. "Monitoring and prediction of air pollution from traffic in the urban environment." Thesis, University of Nottingham, 1996. http://eprints.nottingham.ac.uk/11740/.

Abstract:
Traffic-related air pollution is now a major concern. The Rio Earth Summit and the Government's commitment to Agenda 21 have led to Local Authorities taking responsibility to manage the growing number of vehicles and to reduce the impact of traffic on the environment. There is an urgent need to effectively monitor urban air quality at reasonable cost and to develop long- and short-term air pollution prediction models. The aim of the research described was to investigate relationships between traffic characteristics and kerbside air pollution concentrations. Initially, the only pollution monitoring equipment available was basic and required constant supervision. The traffic data was made available from the demand-responsive traffic signal control systems in Leicestershire and Nottinghamshire. However, it was found that the surveys were too short to produce statistically significant results, and no useful conclusions could be drawn. Subsequently, an automatic, remote kerbside monitoring system was developed specifically for this research. The data collected was analysed using multiple regression techniques in an attempt to obtain an empirical relationship which could be used to predict roadside pollution concentrations from traffic and meteorological data. However, the residual series were found to be autocorrelated, which meant that the statistical tests were invalid. It was then found to be possible to fit an accurate model to the data using time series analysis, but it could not predict levels even in the short term. Finally, a semi-empirical model was developed by estimating the proportion of vehicles passing a point in each operating mode (cruising, accelerating, decelerating and idling) and using real data to derive the coefficients. Unfortunately, it was again not possible to define a reliable predictive relationship. However, suggestions have been made about how this research could be progressed to achieve its aim.
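The pitfall reported here, regression residuals that are autocorrelated, is straightforward to check for; a minimal sketch with statsmodels follows, using made-up file and variable names.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

df = pd.read_csv("kerbside.csv")  # hypothetical columns
X = sm.add_constant(df[["flow_veh_h", "mean_speed", "wind_speed"]])
ols = sm.OLS(df["co_ppm"], X).fit()

# Durbin-Watson near 2 suggests no first-order autocorrelation; values well
# below 2 (as this thesis found) invalidate the usual OLS t- and F-tests.
print("DW statistic:", durbin_watson(ols.resid))
```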
4

Zerbeto, Ana Paula. "Melhor preditor empírico aplicado aos modelos beta mistos." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-09042014-132109/.

Abstract:
The mixed beta regression models are extensively used to analyse data with hierarchical structure that take values in a restricted and known interval. In order to propose a prediction method for their random components, results previously obtained in the literature for the empirical Bayes predictor were extended to beta regression models with a normally distributed random intercept. The proposed predictor, called the empirical best predictor (EBP), can be applied in two situations: when the interest is to predict individual effects for new elements of groups that were already part of the fitted model's data, and also for elements of new groups. Simulation studies were designed, and their results indicated that the performance of the EBP was efficient and satisfactory in most scenarios. Using the proposal to analyse two health databases, the same results as in the simulations were observed in both cases of application, and good performances were observed. Thus, the proposed method is promising for use in predictions for mixed beta regression models.
5

Porto, Faimison Rodrigues. "Cross-project defect prediction with meta-Learning." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-21032018-163840/.

Abstract:
Defect prediction models assist tester practitioners on prioritizing the most defect-prone parts of the software. The approach called Cross-Project Defect Prediction (CPDP) refers to the use of known external projects to compose the training set. This approach is useful when the amount of historical defect data of a company to compose the training set is inappropriate or insufficient. Although the principle is attractive, the predictive performance is a limiting factor. In recent years, several methods were proposed aiming at improving the predictive performance of CPDP models. However, to the best of our knowledge, there is no evidence of which CPDP methods typically perform best. Moreover, there is no evidence on which CPDP methods perform better for a specific application domain. In fact, there is no machine learning algorithm suitable for all domains. The decision task of selecting an appropriate algorithm for a given application domain is investigated in the meta-learning literature. A meta-learning model is characterized by its capacity of learning from previous experiences and adapting its inductive bias dynamically according to the target domain. In this work, we investigate the feasibility of using meta-learning for the recommendation of CPDP methods. In this thesis, three main goals were pursued. First, we provide an experimental analysis to investigate the feasibility of using Feature Selection (FS) methods as an internal procedure to improve the performance of two specific CPDP methods. Second, we investigate which CPDP methods present typically best performances. We also investigate whether the typically best methods perform best for the same project datasets. The results reveal that the most suitable CPDP method for a project can vary according to the project characteristics, which leads to the third investigation of this work. We investigate the several particularities inherent to the CPDP context and propose a meta-learning solution able to learn from previous experiences and recommend a suitable CDPD method according to the characteristics of the project being predicted. We evaluate the learning capacity of the proposed solution and its performance in relation to the typically best CPDP methods.
6

Kattekola, Sravanthi. "Weather Radar image Based Forecasting using Joint Series Prediction." ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/1238.

Abstract:
Accurate rainfall forecasting using weather radar imagery has always been a crucial and predominant task in the field of meteorology [1], [2], [3] and [4]. Competitive Radial Basis Function Neural Networks (CRBFNN) [5] are one of the methods used for weather radar image based forecasting. Recently, an alternative CRBFNN-based approach [6] was introduced to model precipitation events. The difference between the techniques presented in [5] and [6] is in the approach used to model the rainfall image. Overall, it was shown that the modified CRBFNN approach [6] is more computationally efficient than the CRBFNN approach [5]. However, both techniques [5] and [6] share the same prediction stage. In this thesis, a different GRBFNN approach is presented for forecasting Gaussian envelope parameters. The proposed method investigates the concept of parameter dependency among Gaussian envelopes. Experimental results are also presented to illustrate the advantage of joint parameter prediction over independent series prediction.
7

Conroy, Sean F. "Nonproliferation Regime Compliance: Prediction and Measure Using UNSCR 1540." ScholarWorks@UNO, 2017. http://scholarworks.uno.edu/td/2308.

Abstract:
This dissertation investigates factors that predict compliance with international regimes, specifically the Non-Proliferation (NP) regime. Generally accepted in the international relations literature is Krasner's (1983) definition that regimes are "sets of implicit or explicit principles, norms, rules, and decision-making procedures around which actor expectations converge in a given [issue] area of international relations." Using institutionalization as a framework, I hypothesize that compliance is a function of the respect a nation has for the rule of law. I investigate the NP regime through the lens of United Nations Security Council Resolution (UNSCR) 1540, a mandate for member nations to enact domestic legislation criminalizing the proliferation of Weapons of Mass Destruction. Using NP regime compliance and implementation of UNSCR 1540's mandates as dependent variables, I test the hypotheses with the following independent variables: rule of law, political competition, and regional compliance. I also present qualitative case studies on Argentina, South Africa, and Malaysia. The quantitative results of these analyses indicated a strong relationship between rule of law and regional compliance, on the one hand, and a nation's compliance with the overall NP regime and implementation of UNSCR 1540, on the other. These results indicate a nation will institutionalize the NP norms and comply with the specifics of implementation. The results of in-depth analysis of Argentina, South Africa, and Malaysia showed that predicting an individual nation's compliance is more complex than descriptions of government capacity or geography. Argentina and South Africa, expected by the hypotheses to exhibit low to medium compliance and implementation, scored high and well above their region for both measures. Malaysia, expected to score high in compliance, scored low. Findings thus reveal that rule of law is probably less influential on individual cases, and that regional compliance and cooperation are better predictors of a nation's compliance with a security regime.
8

Mishra, Avdesh. "Effective Statistical Energy Function Based Protein Un/Structure Prediction." ScholarWorks@UNO, 2019. https://scholarworks.uno.edu/td/2674.

Abstract:
Proteins are an important component of living organisms, composed of one or more polypeptide chains, each containing hundreds or even thousands of amino acids of 20 standard types. The structure of a protein from the sequence determines crucial functions of proteins such as initiating metabolic reactions, DNA replication, cell signaling, and transporting molecules. In the past, proteins were considered to always have a well-defined stable shape (structured proteins), however, it has recently been shown that there exist intrinsically disordered proteins (IDPs), which lack a fixed or ordered 3D structure, have dynamic characteristics and therefore, exist in multiple states. Based on this, we extend the mapping of protein sequence not only to a fixed stable structure but also to an ensemble of protein conformations, which help us explain the complex interaction within a cell that was otherwise obscured. The objective of this dissertation is to develop effective ab initio methods and tools for protein un/structure prediction by developing effective statistical energy function, conformational search method, and disulfide connectivity patterns predictor. The key outcomes of this dissertation research are: i) a sequence and structure-based energy function for structured proteins that includes energetic terms extracted from hydrophobic-hydrophilic properties, accessible surface area, torsion angles, and ubiquitously computed dihedral angles uPhi and uPsi, ii) an ab initio protein structure predictor that combines optimal energy function derived from sequence and structure-based properties of proteins and an effective conformational search method which includes angular rotation and segment translation strategies, iii) an SVM with RBF kernel-based framework to predict disulfide connectivity pattern, iv) a hydrophobic-hydrophilic property based energy function for unstructured proteins, and v) an ab initio conformational ensemble generator that combines energy function and conformational search method for unstructured proteins which can help understand the biological systems involving IDPs and assist in rational drugs design to cure critical diseases such as cancer or cardiovascular diseases caused by challenging states of IDPs.
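The "statistical energy function" named in outcome i), a weighted sum of sequence- and structure-derived terms, can be sketched generically; the term definitions, weights, and example numbers below are placeholders, not the dissertation's actual function.

```python
from typing import Callable, Dict

# Each term maps a candidate conformation to a raw score (definitions assumed).
TermFn = Callable[[dict], float]

def total_energy(conf: dict, terms: Dict[str, TermFn],
                 weights: Dict[str, float]) -> float:
    """Weighted sum of energy terms; lower is better, as in most statistical potentials."""
    return sum(weights[name] * fn(conf) for name, fn in terms.items())

terms = {
    "hydrophobic_burial": lambda c: c["exposed_hydrophobic_area"],
    "torsion":            lambda c: c["torsion_penalty"],
    "accessible_surface": lambda c: c["asa_deviation"],
}
weights = {"hydrophobic_burial": 1.0, "torsion": 0.6, "accessible_surface": 0.4}

conf = {"exposed_hydrophobic_area": 120.0, "torsion_penalty": 35.0, "asa_deviation": 18.0}
print(total_energy(conf, terms, weights))  # 120 + 21 + 7.2 = 148.2
```

A conformational search method of the kind the abstract mentions (angular rotation, segment translation) would then propose moves and accept or reject them based on this score.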
9

Satiro, Lucas Santos. "Crop prediction and soil response to sugarcane straw removal." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/11/11140/tde-03052018-171843/.

Abstract:
Concerns about global warming and climate change have triggered a growing demand for renewable energy. In this scenario, interest in using sugarcane straw as a raw material for energy production has increased. However, straw plays an important role in maintaining soil quality. In addition, uncertainties as to the amount of straw produced and the impact of straw removal on stalk yield have raised doubts as to the use of this raw material. In this sense, the objective of this study was to evaluate the short-term (2-year) impacts of sugarcane straw removal on the soil, and to model sugarcane stalk and straw yield using soil attributes from different layers. Two experiments were carried out in São Paulo state, Brazil: one at Capivari (sandy clay loam soil) and another at Valparaíso (sandy loam soil). We tested five rates of straw removal (i.e., equivalent to 0, 25, 50, 75 and 100%). Soil samples were taken from the 0-2.5, 2.5-5, 5-10, 10-20 and 20-30 cm layers to analyze pH, total C and N, P, K, Ca, Mg, bulk density and soil penetration resistance. Plant samples were collected to determine the straw and stalk yield. The impacts caused by straw removal differed between the areas, but were concentrated in the more superficial soil layer. In the sandy clay loam soil, straw removal led to organic carbon depletion and soil compaction, while in the sandy loam soil the chemical attributes (i.e., Ca and Mg contents) were the most impacted. In general, the results suggest that straw removal causes a more significant reduction in soil quality in the sandy clay loam soil. The results indicate that about half the straw deposited on the soil surface can be removed (8.7 Mg ha-1 of straw remaining) without severe implications for the quality of this soil. In contrast, although any rate of straw removal was sufficient to alter the quality of the sandy loam soil, these impacts were less intense and were not magnified as straw removal increased. It was possible to model sugarcane straw and stalk yield using soil attributes. The 0-20 cm layer was the most important in defining stalk yield, whereas the 0-5 cm layer, in which the impacts caused by straw removal were concentrated, was less important. Thus, we noticed that the impacts caused to the soil by straw removal have little influence on crop productivity. Straw prediction proved more complex and possibly requires additional information (e.g., crop and climate information) for good results to be obtained. Overall, the results suggest that the planned removal of straw for energy purposes can occur in a sustainable way, but should take into account site conditions, e.g., soil properties. However, long-term research with different approaches is still necessary, both to follow up and confirm our results, and to develop ways to reduce the damage caused by this activity.
10

Segura, Gustavo Alonso Nuñez. "Energy consumption prediction in software-defined wireless sensor networks." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-04052018-113551/.

Abstract:
Energy conservation is a main concern in Wireless Sensor Networks (WSN). To reduce energy consumption, it is important to know how energy is spent and how much is available during node and network operation. Several previous works have proposed energy consumption models focused on the communication module, while neglecting the processing and sensing activities. Other works presented more complex and complete models, but lacked experiments to demonstrate their accuracy in real deployments. The main objective of this work is to design and to evaluate an accurate energy consumption model for WSN which considers the usage of the sensing, processing, and communication modules. This model was used to implement two energy consumption prediction mechanisms: one based on Markov chains and the other based on time series analysis. The metrics used to evaluate the model's and the prediction mechanisms' performance were: energy consumption estimation accuracy, energy consumption prediction accuracy, and the node's communication and processing resource usage. The prediction mechanisms' performance was compared using two implementation schemes: running the prediction algorithm on the sensor node and running the prediction algorithm on a Software-Defined Networking controller. The implementation was conducted using IT-SDN, a Software-Defined Wireless Sensor Network framework. For the evaluation, simulation and emulation used COOJA, while testbed experiments used TelosB devices. Results showed that, by considering the sensing, processing, and communication energy consumption in the model, it is possible to obtain an accurate energy consumption estimate for Wireless Sensor Networks. Also, the use of a Software-Defined Networking controller for processing complex prediction algorithms can improve the prediction accuracy.
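One of the two mechanisms named above is a Markov-chain predictor; a minimal sketch follows. The three duty-cycle states and all transition and consumption numbers are invented for illustration and are not taken from the thesis.

```python
import numpy as np

states = ["sleep", "sense_process", "transmit"]
# Assumed transition probabilities between duty-cycle states (rows sum to 1).
P = np.array([[0.80, 0.15, 0.05],
              [0.30, 0.50, 0.20],
              [0.60, 0.25, 0.15]])
# Assumed energy cost per time slot in each state (millijoules).
cost = np.array([0.01, 1.2, 6.5])

def expected_energy(start: int, slots: int) -> float:
    """Expected consumption over `slots` steps: sum of state distribution dot cost."""
    dist, total = np.eye(3)[start], 0.0
    for _ in range(slots):
        total += dist @ cost    # expected cost of the current slot
        dist = dist @ P         # propagate the chain one step
    return total

print(f"{expected_energy(states.index('sleep'), 100):.1f} mJ over 100 slots")
```

Running such a predictor on an SDN controller rather than on the node, as the thesis compares, trades communication overhead for the freedom to use heavier models.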
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "TD2 prediction"

1

Dang, L. D., E. E. Coats, and George C. Marshall Space Flight Center, eds. Engineering and Programming Manual: Two-Dimensional Kinetic Reference Computer Program (TDK). Marshall Space Flight Center, AL: George C. Marshall Space Flight Center, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Moulton, Calum D. Novel pharmacological targets. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780198789284.003.0013.

Full text
Abstract:
There is a bidirectional relationship between depression and type 2 diabetes (T2D). Patients with comorbid depression and T2D are at high risk of complications and premature mortality. Conventional treatments for depression do not consistently improve diabetes outcomes, despite improving depressive symptoms. Shared mechanisms may underpin both depression and T2D, providing novel pharmacological targets to treat both conditions simultaneously. There are several candidate pathways. For inflammation and vitamin D deficiency, there is good cross-sectional evidence to support an association with depression in T2D. Prospective epidemiological studies are needed to test biological pathways as predictive biomarkers of depression and T2D. Intervention studies are needed to test the modifiability of these pathways. Repurposing of established diabetes treatments may provide a ‘multiple hit’ strategy. The identification and modification of novel biological targets has the potential to treat both depression and T2D, as well as reducing longer term morbidity and mortality.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "TD2 prediction"

1

Kariya, Takeaki, and Hiroshi Tsuda. "Prediction of Individual Bond Prices via the TDM Model." In Modelling and Prediction Honoring Seymour Geisser, 350–56. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-2414-3_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Novitski, Pavel, Cheli Melzer Cohen, Avraham Karasik, Varda Shalev, Gabriel Hodik, and Robert Moskovitch. "All-Cause Mortality Prediction in T2D Patients." In Artificial Intelligence in Medicine, 3–13. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59137-3_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Busatto, Anna, Jonathan Krauß, Evianne Kruithof, Hermenegild Arevalo, and Ilse van Herck. "Electromechanical In Silico Testing Alters Predicted Drug-Induced Risk to Develop Torsade de Pointes." In Computational Physiology, 19–29. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25374-4_2.

Full text
Abstract:
Torsade de Pointes (TdP) is a type of ventricular tachycardia that can occur as a side effect of several medications. The Comprehensive in vitro Proarrhythmia Assay (CiPA) is a novel testing paradigm that utilizes single-cell electrophysiological simulations to predict TdP risk for drugs that could potentially be used clinically. However, the effects on mechanical performance and mechano-electrical feedback are neglected. Here, we demonstrate that including electromechanical simulations in CiPA testing can provide additional insights into the predicted drug-induced TdP risk. In this work, we analyzed six drugs, namely flecainide, ibutilide, metronidazole, mexiletine, quinidine and ranolazine. We compared previously classified risks (low, intermediate, high) with our fully coupled electromechanical simulation results based upon the action potential, the electromechanical window, and the maximum active tension [1]. For ranolazine and metronidazole, the predicted risk changed from low to intermediate and from intermediate to high, respectively. For the latter, while electrophysiological markers indicated a low risk, the active tension decreased by 58%, which can result in a loss of heart function. Therefore, adding mechanics to CiPA testing results in an altered prediction of drug-related TdP risk.
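As a side note, the electromechanical window mentioned above can be illustrated with a toy computation. This is only a sketch of one common definition (end of mechanical activity minus end of electrical activity); the numbers are invented and the single-threshold risk call is a simplification.

```python
# Toy illustration of the electromechanical window (EMw): the delay between
# the end of mechanical contraction and the end of the action potential
# (APD90). Negative EMw values are commonly associated with elevated TdP
# risk; both durations below are made-up example values.
apd90_ms = 310.0            # action potential duration at 90% repolarization
tension_end_ms = 285.0      # time when active tension returns to baseline

emw_ms = tension_end_ms - apd90_ms
print(f"EMw = {emw_ms:+.0f} ms ->",
      "elevated risk" if emw_ms < 0 else "normal")
```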
APA, Harvard, Vancouver, ISO, and other styles
4

Ozturk, Berk, Tom Lawton, Stephen Smith, and Ibrahim Habli. "Balancing Acts: Tackling Data Imbalance in Machine Learning for Predicting Myocardial Infarction in Type 2 Diabetes." In Studies in Health Technology and Informatics. IOS Press, 2024. http://dx.doi.org/10.3233/shti240491.

Full text
Abstract:
Type 2 Diabetes (T2D) is a prevalent lifelong health condition. It is predicted that over 500 million adults will be diagnosed with T2D by 2040. T2D can develop at any age, and if it progresses, it may cause serious comorbidities. One of the most critical T2D-related comorbidities is Myocardial Infarction (MI), known as heart attack. MI is a life-threatening medical emergency, and it is important to predict it and intervene in a timely manner. The use of Machine Learning (ML) for clinical prediction is gaining pace, but class imbalance in predictive models is a key challenge for establishing a trustworthy deployment of the technology. It may lead to bias and overfitting in the ML models, and it may cause misleading interpretations of the ML outputs. In our study, we showed how systematic use of Class Imbalance Handling (CIH) techniques may improve the performance of the ML models. We used the Connected Bradford dataset, consisting of over one million real-world health records. Three commonly used CIH techniques, Oversampling, Undersampling, and Class Weighting (CW), have been used for Naive Bayes (NB), Neural Network (NN), Random Forest (RF), Support Vector Machine (SVM), and Ensemble models. We report that CW outperforms the other techniques, with the highest Accuracy and F1 values of 0.9948 and 0.9556, respectively. Applying the most appropriate CIH techniques to ML models using real-world healthcare data provides promising results for helping to reduce the risk of MI in patients with T2D.
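A minimal sketch of the class-weighting idea with scikit-learn, using synthetic imbalanced data as a stand-in for the Connected Bradford records (which are not reproduced here):

```python
# Compare an unweighted vs. class-weighted Random Forest on an imbalanced
# binary task (97% negatives), reporting F1 for the minority class.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.97, 0.03], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for cw in (None, "balanced"):
    clf = RandomForestClassifier(class_weight=cw, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"class_weight={cw}: F1 = {f1_score(y_te, clf.predict(X_te)):.3f}")
```

With `class_weight="balanced"`, misclassifying a rare positive (here, an MI case) is penalized in inverse proportion to its frequency, which is the mechanism behind the CW technique the abstract reports as best-performing.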
APA, Harvard, Vancouver, ISO, and other styles
5

Pirdavani, Ali, Tom Bellemans, Tom Brijs, Bruno Kochan, and Geert Wets. "Traffic Safety Implications of Travel Demand Management Policies." In Transportation Systems and Engineering, 1082–107. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-8473-7.ch055.

Full text
Abstract:
Travel Demand Management (TDM) consists of a variety of policy measures that affect the transportation system's effectiveness by changing travel behavior. Although the primary objective of implementing such TDM strategies is not to improve traffic safety, their impact on traffic safety should not be neglected. The main purpose of this study is to investigate differences in the traffic safety consequences of two TDM scenarios: a fuel-cost increase scenario (i.e. increasing the fuel price by 20%) and a teleworking scenario (i.e. 5% of the working population engages in teleworking). Since TDM strategies are usually conducted at a geographically aggregated level, crash prediction models that are used to evaluate such strategies should also be developed at an aggregate level. Moreover, given that crash occurrences are often spatially heterogeneous and are affected by many spatial variables, the existence of spatial correlation in the data is also examined. The results indicate the necessity of accounting for the spatial correlation when developing crash prediction models. Therefore, Zonal Crash Prediction Models (ZCPMs) within the geographically weighted generalized linear modeling framework are developed to incorporate the spatial variations in the association between the Number Of Crashes (NOCs) (including fatal, severe, and slight injury crashes recorded between 2004 and 2007) and a set of explanatory variables. Different exposure, network, and socio-demographic variables of 2200 traffic analysis zones in Flanders, Belgium, are considered as predictors of crashes. An activity-based transportation model is adopted to produce exposure metrics. This enables a more detailed and reliable assessment, as TDM strategies are inherently modeled in the activity-based models. In this chapter, several ZCPMs with different severity levels and crash types are developed to predict the NOCs. The results show considerable traffic safety benefits from conducting both TDM scenarios at an average level. However, there are certain differences when considering changes in NOCs by different crash types.
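The zonal modelling idea can be sketched compactly. The study uses geographically weighted GLMs; the global Poisson GLM below, with invented zone-level data and assumed column names, is only the non-spatial starting point such models refine:

```python
# Sketch of a zonal crash prediction model: a Poisson GLM relating
# zone-level crash counts to exposure and socio-demographic predictors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
zones = pd.DataFrame({
    "vkt": rng.gamma(5.0, 2.0, 2200),        # vehicle-km travelled (exposure)
    "intersections": rng.poisson(12, 2200),  # network variable
    "population": rng.gamma(3.0, 1.5, 2200), # socio-demographic variable
})
# Synthetic crash counts generated from an assumed log-linear relationship
lam = np.exp(-1.0 + 0.15 * zones["vkt"] + 0.02 * zones["intersections"])
zones["crashes"] = rng.poisson(lam)

X = sm.add_constant(zones[["vkt", "intersections", "population"]])
model = sm.GLM(zones["crashes"], X, family=sm.families.Poisson()).fit()
print(model.summary())
```

A geographically weighted version would refit this regression locally around each zone, letting the coefficients vary over space, which is what the abstract's spatial-correlation findings motivate.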
APA, Harvard, Vancouver, ISO, and other styles
6

Pirdavani, Ali, Tom Bellemans, Tom Brijs, Bruno Kochan, and Geert Wets. "Traffic Safety Implications of Travel Demand Management Policies." In Data Science and Simulation in Transportation Research, 115–40. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4920-0.ch007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Öztayşi, Başar, Ahmet Tezcan Tekin, Cansu Özdikicioğlu, and Kerim Caner Tümkaya. "Personalized Content Recommendation Engine for Web Publishing Services Using Textmining and Predictive Analytics." In Advances in Business Information Systems and Analytics, 113–24. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-2148-8.ch007.

Full text
Abstract:
Recommendation systems have become very important, especially for internet-based businesses such as e-commerce and web publishing. While content-based filtering and collaborative filtering are the most commonly used approaches in recommendation systems, research into new approaches continues. In this study, a personalized recommendation system based on text mining and predictive analytics is proposed for a real-world web publishing company. The approach given in this chapter first preprocesses existing web content, integrates the structured data with the history of a specific user, and creates an extended term-document matrix (TDM) for the user. This data is then used to predict the user's interest in new content. To that end, SVM, K-NN and Naïve Bayes methods are used. Finally, the best-performing method is used for determining the user's interest level in a new content item. Based on the forecasted interest levels, the system recommends among the alternatives.
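A compact sketch of such a pipeline with scikit-learn, using toy documents and labels in place of the company's real content and user histories:

```python
# Build a term-document matrix from page text, then compare SVM, k-NN,
# and Naive Bayes for predicting a user's interest in content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

docs = ["stock markets rally on earnings", "team wins championship final",
        "new phone released this week", "election results announced",
        "central bank raises rates", "transfer rumours before deadline"] * 20
interested = [1, 0, 1, 0, 1, 0] * 20  # 1 = user read similar content before

X = TfidfVectorizer().fit_transform(docs)  # the term-document matrix (TDM)
for name, clf in [("SVM", LinearSVC()), ("k-NN", KNeighborsClassifier()),
                  ("NB", MultinomialNB())]:
    score = cross_val_score(clf, X, interested, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.2f}")
```

The best-scoring classifier would then rank unseen articles by predicted interest, mirroring the selection step described in the abstract.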
APA, Harvard, Vancouver, ISO, and other styles
8

Console, Davide, Marta Lenatti, Davide Simeone, Karim Keshavjee, Aziz Guergachi, Maurizio Mongelli, and Alessia Paglialonga. "Exploring Prediabetes Pathways Using Explainable AI on Data from Electronic Medical Records." In Studies in Health Technology and Informatics. IOS Press, 2024. http://dx.doi.org/10.3233/shti240519.

Full text
Abstract:
This study leverages data from a Canadian database of primary care Electronic Medical Records to develop machine learning models predicting type 2 diabetes mellitus (T2D), prediabetes, or normoglycemia. These models are used as a basis for extracting counterfactual explanations and deriving personalized changes in biomarkers to prevent T2D onset, particularly in the still-reversible prediabetic state. The models achieve satisfactory performance. Furthermore, feature importance analysis underscores the significance of fasting blood sugar and glycated hemoglobin, while counterfactual explanations emphasize the centrality of keeping body mass index and cholesterol indicators within or close to the clinically desirable ranges. This research highlights the potential of machine learning and counterfactual explanations in guiding preventive interventions that may help slow down the progression from prediabetes to T2D on an individual basis, eventually fostering a recovery from prediabetes to a normoglycemic state.
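The counterfactual idea can be illustrated with a toy greedy search; real counterfactual explainers add sparsity and plausibility constraints, and the feature names here (BMI, total cholesterol) are assumptions for illustration only:

```python
# Nudge modifiable biomarkers until a trained classifier flips its
# prediction from 'prediabetes' (1) to 'normoglycemia' (0).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Features: [BMI, total cholesterol (mmol/L)]; synthetic labels
X = np.column_stack([rng.normal(27, 4, 500), rng.normal(5.2, 1.0, 500)])
y = ((X[:, 0] > 27) & (X[:, 1] > 5.0)).astype(int)
clf = LogisticRegression().fit(X, y)

def counterfactual(x, step=0.1, max_iter=200):
    """Greedily move against the risk gradient until the class flips."""
    x = x.copy()
    for _ in range(max_iter):
        if clf.predict([x])[0] == 0:              # reached normoglycemia
            return x
        grad = clf.coef_[0]                       # linear model: constant gradient
        x -= step * grad / np.linalg.norm(grad)   # lower the decision score
    return None

patient = np.array([31.0, 6.1])
print("suggested biomarker targets:", counterfactual(patient))
```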
APA, Harvard, Vancouver, ISO, and other styles
9

Ott, Katharina, Santiago Cepeda, Dennis Hartmann, Frank Kramer, and Dominik Müller. "Predicting Overall Survival of Glioblastoma Patients Using Deep Learning Classification Based on MRIs." In Studies in Health Technology and Informatics. IOS Press, 2024. http://dx.doi.org/10.3233/shti240878.

Full text
Abstract:
Introduction: Glioblastoma (GB) is one of the most aggressive tumors of the brain. Despite intensive treatment, the average overall survival (OS) is 15–18 months. Therefore, it is helpful to be able to assess a patient's OS to tailor treatment more specifically to the course of the disease. Automated analysis of routinely generated MRI sequences (FLAIR, T1, T1CE, and T2) using deep learning-based image classification has the potential to enable accurate OS predictions. Methods: In this work, a method was developed and evaluated that classifies the OS into three classes – "short", "medium" and "long". For this purpose, the four MRI sequences of a patient were corrected using bias-field correction and merged into one image. The pipeline was realized by a bagging model using 5-fold cross-validation and the ResNet50 architecture. Results: The best model achieved an F1-score of 0.51 and an accuracy of 0.67. In addition, this work enabled a largely clear differentiation between the "short" and "long" classes, which offers high clinical value as decision support. Conclusion: Automated analysis of MRI scans using deep learning-based image classification has the potential to enable accurate OS prediction in glioblastomas.
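A minimal sketch of the described architecture in PyTorch; the 4-channel input standing in for the merged FLAIR/T1/T1CE/T2 image is an assumption about the merging step, and the bagging and cross-validation wrappers are omitted:

```python
# ResNet50 backbone with a 3-class head ("short" / "medium" / "long" OS).
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None)
# Accept 4 input channels (FLAIR, T1, T1CE, T2) instead of RGB
model.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 3)  # short / medium / long

x = torch.randn(2, 4, 224, 224)  # batch of merged MRI inputs
logits = model(x)
print(logits.shape)  # torch.Size([2, 3])
```

The reported bagging model would train five such networks on cross-validation folds and average their class probabilities at inference time.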
APA, Harvard, Vancouver, ISO, and other styles
10

"Introduction." In Examining a New Automobile Global Manufacturing System, 1–4. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-8746-1.ch001.

Full text
Abstract:
In this book, by predicting the form of next generation automobile manufacturing, the author examines a new automobile global manufacturing system (NAGMS) that contains the hardware system with five core elements—TDS, TPS, TMS, TIS, and TJS (total development system, total production system, total marketing system, total intelligence management system, and total job quality management system)—for transforming management technology into automobile management strategy—surpassing JIT.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "TD2 prediction"

1

Thornburgh, Robert, Andrew Kreshock, and Matthew Wilbur. "A Dynamic Calibration Method for Experimental and Analytical Hub Load Comparison." In Vertical Flight Society 71st Annual Forum & Technology Display, 1–14. The Vertical Flight Society, 2015. http://dx.doi.org/10.4050/f-0071-2015-10271.

Full text
Abstract:
This paper presents the results from an ongoing effort to produce improved correlation between analytical hub force and moment predictions and those measured during wind-tunnel testing on the Aeroelastic Rotor Experimental System (ARES), a conventional rotor testbed commonly used at the Langley Transonic Dynamics Tunnel (TDT). A frequency-dependent transformation between loads at the rotor hub and outputs of the testbed balance is produced from frequency response functions measured during vibration testing of the system. The resulting transformation is used as a dynamic calibration of the balance to transform hub loads predicted by comprehensive analysis into predicted balance outputs. In addition to detailing the transformation process, this paper also presents a set of wind-tunnel test cases, with comparisons between the measured balance outputs and transformed predictions from the comprehensive analysis code CAMRAD II. The modal response of the testbed is discussed and compared to a detailed finite-element model. Results reveal that the modal response of the testbed exhibits a number of characteristics that make accurate dynamic balance predictions challenging, even with the use of the balance transformation.
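The core of such a dynamic calibration can be sketched numerically: predicted hub-load time histories are transformed through a frequency-dependent transfer matrix H(f) identified from the measured FRFs. In the sketch below the transfer matrix is random placeholder data, and the sample rate and channel counts are assumptions:

```python
# Apply a frequency-dependent calibration: balance_outputs(f) = H(f) @ loads(f).
import numpy as np

fs = 1024                      # sample rate (Hz), assumed
n = 4096                       # samples of predicted hub-load time history
n_loads, n_outputs = 6, 6      # 3 forces + 3 moments; 6 balance channels

rng = np.random.default_rng(0)
hub_loads = rng.standard_normal((n, n_loads))   # stand-in for analysis output
freqs = np.fft.rfftfreq(n, d=1 / fs)
# Placeholder transfer matrix per frequency line (identity + perturbation)
H = rng.standard_normal((freqs.size, n_outputs, n_loads)) * 0.1 + np.eye(6)

L = np.fft.rfft(hub_loads, axis=0)              # loads in frequency domain
B = np.einsum("fij,fj->fi", H, L)               # apply H(f) per frequency line
balance_pred = np.fft.irfft(B, n=n, axis=0)     # predicted balance time history
print(balance_pred.shape)                       # (4096, 6)
```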
APA, Harvard, Vancouver, ISO, and other styles
2

Lant, T., C. Keefe, C. Davies, B. McGhee, N. Simms, and T. Fry. "Modeling Fireside Corrosion of Heat Exchanger Materials in Advanced Energy Systems." In AM-EPRI 2010, edited by D. Gandy, J. Shingledecker, and R. Viswanathan, 255–67. ASM International, 2010. http://dx.doi.org/10.31399/asm.cp.am-epri-2010p0255.

Full text
Abstract:
Abstract This paper outlines a comprehensive UK-based research project (2007-2010) focused on developing fireside corrosion models for heat exchangers in ultra-supercritical plants. The study evaluates both conventional materials like T22 and advanced materials such as Super 304H, examining their behavior under various test environments with metal skin temperatures ranging from 425°C to 680°C. The research aims to generate high-quality data on corrosion behavior for materials used in both furnace and convection sections, ultimately producing reliable corrosion prediction models for boiler tube materials operating under demanding conditions. The project addresses some limitations of existing models for these new service conditions and provides a brief review of the fuels and test environments used in the program. Although modeling is still limited, preliminary results have been presented, focusing on predicting fireside corrosion rates for furnace walls, superheaters, and reheaters under various service environments. These environments include those created by oxyfuel operation, coal-biomass co-firing, and more traditional coal firing.
APA, Harvard, Vancouver, ISO, and other styles
3

Dharsan, M. Vishaal, M. Vishnu, S. Surya Pravesh, and S. Kaliappan. "Next-Gen RO Purifier with Smart TDS Adjustment and AI Filter Lifespan Prediction." In 2024 Second International Conference on Intelligent Cyber Physical Systems and Internet of Things (ICoICI), 1514–18. IEEE, 2024. http://dx.doi.org/10.1109/icoici62503.2024.10696384.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yang, Heecheol. "A Comparison of Channel Prediction Models for TDD Systems with Imperfect Channel Reciprocity." In 2024 15th International Conference on Information and Communication Technology Convergence (ICTC), 1404–5. IEEE, 2024. https://doi.org/10.1109/ictc62082.2024.10827648.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Alkhalaqi, Aysha, Fatima Alnaimi, Rouda Qassmi, and Hiba Bawadi. "Predictive Power of Body Visceral Adiposity Index, Body Adiposity Index and Body Mass Index for Type 2 Diabetes in Qatari Population." In Qatar University Annual Research Forum & Exhibition. Qatar University Press, 2020. http://dx.doi.org/10.29117/quarfe.2020.0208.

Full text
Abstract:
Background: The prevalence of type 2 diabetes (T2D) has increased recently in Qatar. Body mass index (BMI) is a predictor of T2D in many populations. However, BMI is based on height and weight measurements and not on body adiposity. Therefore, the utility of BMI for predicting the risk of T2D has been questioned, and the visceral adiposity index (VAI) appears to be a better predictor of T2D. Objective: This study aimed to assess the relative effectiveness of the visceral adiposity index (VAI) and body adiposity index (BAI), in comparison with body mass index (BMI), for predicting T2D among Qatari adults. Methodology: A random sample of 1103 adult Qatari nationals over 20 years old was included in this study; the data were obtained from the Qatar National Biobank. We performed a multivariate logistic regression to examine the association between VAI, BAI, BMI, and T2D, and computed z-scores for VAI, BAI and BMI. Results: VAI z-scores showed the strongest association with the risk of T2D (OR, 1.44; 95% CI: 1.24–1.68) compared with the z-scores for BAI (OR, 1.15; 95% CI: 0.93–1.43) and BMI (OR, 1.33; 95% CI: 1.11–1.59). Subgroup analyses indicated that the association between VAI and T2D was stronger in Qatari women than in men. Conclusion: VAI was a strong and independent predictor of T2D among the Qatari adult population. Therefore, VAI could be a useful tool for predicting the risk of T2D among Qatari adults.
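The z-score-plus-logistic-regression step can be sketched as follows; synthetic data stands in for the biobank sample, so the printed odds ratio will not match the reported values:

```python
# Standardize an adiposity index, then fit a logistic regression so that
# exp(coefficient) is the odds ratio per 1-SD increase.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1103
vai = rng.lognormal(0.5, 0.5, n)                 # synthetic VAI values
# Synthetic T2D labels from an assumed logistic relationship
t2d = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 0.4 * np.log(vai)))))

z_vai = (vai - vai.mean()) / vai.std()           # z-score, as in the study
res = sm.Logit(t2d, sm.add_constant(z_vai)).fit(disp=0)
or_, lo, hi = np.exp(res.params[1]), *np.exp(res.conf_int()[1])
print(f"OR per 1-SD VAI: {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```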
APA, Harvard, Vancouver, ISO, and other styles
6

Shen, Yu-Yi, Guannan Deng, Xin Wang, Amy T. Kan, and Mason B. Tomson. "Improved Scale Prediction for High Calcium Containing Produced Brine at High Temperature and High Pressure Conditions." In SPE International Conference on Oilfield Chemistry. SPE, 2023. http://dx.doi.org/10.2118/213827-ms.

Full text
Abstract:
Abstract Scale prevention is one of the most important problems in the oil and gas industry. Because production has recently become more aggressive, high temperature, high pressure, and high TDS conditions are encountered more often. This study focuses on improving scale prediction at high temperature (up to 210°C) and high TDS (total dissolved solids, over 300,000 mg/L), with calcium concentrations up to 2.0 molal (m). A hydrothermal autoclave reactor was developed for solubility measurement. The solubility of anhydrite was measured in CaCl2-NaCl-H2O solutions at a constant ionic strength of 4 m. Results show that the ionic strength effect and Ca-SO4 ion association increase the anhydrite solubility, while the common ion effect decreases it. The measured solubility data were used to develop the virial coefficient for the ion interaction of Ca2+ and SO42−. This virial coefficient can then be applied in Pitzer models to improve the calculation of the saturation index of scale. Quantifying the Ca-SO4 interaction parameters enables better prediction of mineral solubility at high calcium concentrations. The results improve not only anhydrite predictions but all sulfate scale predictions at high temperature and high TDS conditions. This study offers a reliable and efficient method for obtaining solubility under high temperature conditions and extends scale prediction for production brines with high calcium concentrations to higher temperature and pressure limits.
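For reference, the saturation index that such Pitzer-model activity coefficients feed into is the standard one; for anhydrite it can be written as follows (a sketch of the usual definition, with γ the activity coefficients and m the molalities):

```latex
% Saturation index of anhydrite (CaSO4), with activities a_i = gamma_i * m_i
% obtained from a Pitzer ion-interaction model:
\[
  \mathrm{SI} = \log_{10}\frac{a_{\mathrm{Ca}^{2+}}\, a_{\mathrm{SO_4^{2-}}}}{K_{sp}(T,P)}
              = \log_{10}\frac{\gamma_{\mathrm{Ca}^{2+}}\, m_{\mathrm{Ca}^{2+}}\,
                \gamma_{\mathrm{SO_4^{2-}}}\, m_{\mathrm{SO_4^{2-}}}}{K_{sp}(T,P)}
\]
% SI > 0: supersaturated (scaling risk); SI < 0: undersaturated.
```

The improved Ca2+/SO42− virial coefficients enter through the γ terms, which is why quantifying them sharpens the SI prediction at high calcium concentrations.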
APA, Harvard, Vancouver, ISO, and other styles
7

Ugoyah, Joy Chiekumali, Joseph Atubokiki Ajienka, Virtue Urunwo Wachikwu-Elechi, and Sunday Sunday Ikiensikimama. "Prediction of Scale Precipitation by Modelling its Thermodynamic Properties using Machine Learning Engineering." In SPE Nigeria Annual International Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/212010-ms.

Full text
Abstract:
Abstract During oil and gas production, scaling is a flow assurance problem commonly experienced in most regions. For scale control to be effective and inexpensive, accurate prediction of scaling as soon as deposition commences is important. This paper provides a model for the prediction of Barium Sulphate (BaSO4) and Calcium Carbonate (CaCO3) oilfield scales built using machine learning. Thermodynamic and compositional properties, including temperature, pressure, pH, CO2 mole fraction, Total Dissolved Solids (TDS), and ion compositions of water samples from wells where BaSO4 and CaCO3 scales were observed, are analysed and used to train the machine learning model. The results of the modelling indicate that the Decision Tree model with an Area Under the Curve (AUC) score of 0.91 performed better in predicting scale precipitation in the wells than the other Decision Tree models, which had AUC scores of 0.88 and 0.87. The model can guide early prediction and control of scaling during oil and gas production operations.
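A minimal sketch of the modelling step with scikit-learn, using synthetic water-sample features named after those listed in the abstract; the feature values, labels, and threshold logic are invented for illustration:

```python
# Train a decision tree on thermodynamic/compositional features and score
# it by AUC, mirroring the reported evaluation metric.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "temperature_C": rng.uniform(40, 150, n),
    "pressure_psi": rng.uniform(500, 10_000, n),
    "pH": rng.uniform(4.5, 8.5, n),
    "co2_mole_frac": rng.uniform(0, 0.1, n),
    "tds_mg_l": rng.uniform(10_000, 300_000, n),
    "barium_mg_l": rng.uniform(0, 500, n),
})
# Synthetic "scale observed" labels from an assumed rule
scale = ((df["barium_mg_l"] > 200) & (df["pH"] > 6)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(df, scale, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```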
APA, Harvard, Vancouver, ISO, and other styles
8

Hejazi, Rasoul, Andrew Grime, Mark Randolph, and Mike Efthymiou. "A Bayesian Machine Learning Approach for Efficient Integrity Management of Steel Lazy Wave Risers." In ASME 2020 39th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/omae2020-18190.

Full text
Abstract:
Abstract In-service integrity management (IM) of steel lazy wave risers (SLWRs) can benefit significantly from quantitative assessment of the overall risk of system failure, as it can provide an effective tool for decision making. SLWRs are prone to fatigue failure within their touchdown zone (TDZ). This failure mode needs to be evaluated rigorously in riser IM processes because fatigue is an ongoing degradation mechanism threatening the structural integrity of risers throughout their service life. However, accurately evaluating the probability of fatigue failure for riser systems within a useful time frame is challenging due to the need to run a large number of nonlinear, dynamic numerical time domain simulations. Applying the Bayesian framework for machine learning, through the use of Gaussian Processes (GP) for regression, offers an attractive solution to overcome the burden of prohibitive simulation run times. GPs are stochastic, data-driven predictive models which incorporate the underlying physics of the problem in the learning process, and facilitate rapid probabilistic assessments with limited loss in accuracy. This paper proposes an efficient framework for practical implementation of a GP to create predictive models for the estimation of fatigue responses at SLWR hotspots. Such models are able to perform stochastic response prediction within a few milliseconds, thus enabling rapid prediction of the probability of SLWR fatigue failure. A realistic North West Shelf (NWS) case study is used to demonstrate the framework, comprising a 20" SLWR connected to a representative floating facility located in 950 m water depth. A full hindcast metocean dataset with associated statistical distributions is used for the riser long-term fatigue loading conditions. Numerical simulation and sampling techniques are adopted to generate a simulation-based dataset for training the data-driven model. In addition, a recently developed dimensionality reduction technique is employed to improve efficiency and reduce complexity of the learning process. The results show that the stochastic predictive models developed by the suggested framework can predict the long-term TDZ fatigue damage of SLWRs due to vessel motions with an acceptable level of accuracy for practical purposes.
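A minimal sketch of a GP surrogate with scikit-learn; the sea-state inputs and the toy damage function below stand in for the expensive time-domain riser simulations, and the kernel choice is an assumption:

```python
# Train a Gaussian Process on a limited set of "simulation" results, then
# predict fatigue damage with uncertainty for an unseen sea state.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Sea-state parameters: significant wave height Hs (m), peak period Tp (s)
X_train = np.column_stack([rng.uniform(1, 8, 60), rng.uniform(6, 16, 60)])
# Toy per-sea-state fatigue damage standing in for simulation output
y_train = (1e-6 * X_train[:, 0] ** 2.5 / X_train[:, 1]
           + rng.normal(0, 1e-7, 60))

gp = GaussianProcessRegressor(kernel=RBF([1.0, 1.0]) + WhiteKernel(),
                              normalize_y=True).fit(X_train, y_train)
mean, std = gp.predict([[5.0, 10.0]], return_std=True)
print(f"predicted damage: {mean[0]:.3e} +/- {std[0]:.1e}")
```

The millisecond-scale prediction, together with the returned standard deviation, is what makes summing probabilistic damage over a long-term hindcast scatter diagram tractable.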
APA, Harvard, Vancouver, ISO, and other styles
9

Dave, Eshan V., Sofie Leon, and Kyoungsoo Park. "Thermal Cracking Prediction Model and Software for Asphalt Pavements." In First Congress of Transportation and Development Institute (TDI). Reston, VA: American Society of Civil Engineers, 2011. http://dx.doi.org/10.1061/41167(398)64.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Shen, Yu-Yi, Guannan Deng, Xin Wang, Yuqing Ye, Amit Reiss, Xuanzhu Yao, Daniel Pimentel, Cianna Leschied, Amy T. Kan, and Mason B. Tomson. "Impact of High Calcium Concentrations on Barite Scale Prediction Under High Temperature and High Pressure Conditions." In SPE Oilfield Scale Symposium. SPE, 2024. http://dx.doi.org/10.2118/218707-ms.

Full text
Abstract:
Abstract Scale prediction and inhibition is one of the crucial challenges in the oil and gas industry. Thriving demand for gasoline drives the oil and gas industry into intensified production, and many unconventional sites face high temperature and high pressure (HTHP) conditions. This study focuses on improving barite scale prediction for calcium concentrations up to 2 m, with pressures up to 18,000 psi, temperatures up to 200°C, and TDS (total dissolved solids) over 300,000 mg/L. A flow-through apparatus capable of simulating HTHP conditions was developed, and barite solubility was measured. The study assesses the solubility of barite in feed solutions containing different concentrations of CaCl2, NaCl, and Na2SO4. A reliable solubility prediction model, based on Pitzer ion-interaction theory, is developed for barite to encompass a wide range of brine compositions as well as extended temperature and pressure conditions (T < 200°C, P < 18,000 psi, and Ca < 2 m). Findings reveal that barite solubility increases with ionic strength, while some ion interactions remain unclear at HTHP conditions. Quantifying ion interaction parameters related to divalent ions (Ca2+, Ba2+, SO42−) gives more reliable predictions of mineral solubility at high calcium concentrations. An accurate prediction of barite scale formation in oilfield brines enables better control of inhibitor dosage and reduces unnecessary environmental impacts.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "TD2 prediction"

1

Cazenave, Pablo. PR-328-153721-R01 Development of an Industry Test Facility and Qualification Process for ILI Technology. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), December 2016. http://dx.doi.org/10.55274/r0011020.

Full text
Abstract:
The project "Development of an Industry Test Facility and Qualification Processes for in-line inspection (ILI) technology Evaluation and Enhancements" aims to expand knowledge of ILI technology performance and identify gaps where new technology is needed. Additionally, this project aims to provide a continuing resource for ILI technology developers, researchers and pipeline operators to have access to test samples with a range of pipeline integrity threats and vintages and in-line technology test facilities at the Pipeline Research Council International, Inc. (PRCI) Technology Development and Deployment Center (TDC), a PRCI managed facility available for future industry and PHMSA research projects. An ILI pull test facility was designed and constructed as part of this project based on industry state of the art and opportunities for capability improvement. The major ILI technology provid-ers, together with pipeline operator team members, reviewed the TDC sample inventory and de-signed a series of ILI performance tests illustrating one of multiple possible research objectives, culminating in 16 inch and 24 inch nominal diameter test strings. The ILI technology providers proposed appropriate inspection tools based on limited knowledge of the integrity conditions in the test strings, a series of pull tests of the provided ILI tools were performed and the technology providers delivered reports of integrity anomaly location and physical dimensions for perfor-mance evaluation. PRCI engaged Blade Energy Partners, Ltd. (Blade) to conduct the evaluation of the ILI data obtained from repeated testing on the 16 and 24 inch pipeline strings at the TDC. Blade Energy was also requested by the PRCI Project Team to incorporate prior work concerning the development of the PRCI ILI test facility to serve as a final report for the PRCI project. The resulting data was analyzed, aligned, compared to truth data and evaluated by Blade, with the findings presented in this report. Quantitative measures of detection and sizing performance were disclosed in-confidence to the individual ILI technology providers. For instances where ILI predictions were outside of claimed performance, the vendors were given a limited sample of actual defect data to enable re-analysis, thus demonstrating the potential for improved integrity assessment with validation measurements. This report has a related webinar.
APA, Harvard, Vancouver, ISO, and other styles
2

Friedman, Shmuel, Jon Wraith, and Dani Or. Geometrical Considerations and Interfacial Processes Affecting Electromagnetic Measurement of Soil Water Content by TDR and Remote Sensing Methods. United States Department of Agriculture, 2002. http://dx.doi.org/10.32747/2002.7580679.bard.

Full text
Abstract:
Time Domain Reflectometry (TDR) and other in-situ and remote sensing dielectric methods for determining the soil water content have become standard in both research and practice over the last two decades. Limitations of existing dielectric methods in some soils, and the introduction of new agricultural measurement devices or approaches based on soil dielectric properties, mandate improved understanding of the relationship between the measured effective permittivity (dielectric constant) and the soil water content. Mounting evidence indicates that consideration must be given not only to the volume fractions of soil constituents, as most mixing models assume, but also to soil attributes and ambient temperature in order to reduce errors in interpreting measured effective permittivities. The major objective of the present research project was to investigate the effects of the soil geometrical attributes and interfacial processes (bound water) on the effective permittivity of the soil, and to develop a theoretical frame for improved, soil-specific effective permittivity - water content calibration curves, which are based on easily attainable soil properties. After initializing the experimental investigation of the effective permittivity - water content relationship, we realized that the first step for water content determination by the TDR method, namely, the TDR measurement of the soil effective permittivity, still requires standardization and improvement, and we also made more efforts than originally planned towards this objective. The findings of the BARD project, related to these two consequential steps involved in TDR measurement of the soil water content, are expected to improve the accuracy of soil water content determination by existing in-situ and remote sensing dielectric methods and to help evaluate new water content sensors based on soil electrical properties. A more precise water content determination is expected to result in reduced irrigation levels, a matter which is beneficial first to American and Israeli farmers, and also to hydrologists and environmentalists dealing with production and assessment of contamination hazards of this progressively more precious natural resource. The improved understanding of the way the soil geometrical attributes affect its effective permittivity is expected to contribute to our understanding and predicting capability of other, related soil transport properties such as electrical and thermal conductivity, and diffusion coefficients of solutes and gas molecules. In addition to the originally planned research activities, we also investigated other related problems and made many contributions of short- and longer-term benefit. These efforts include: developing a method and a special TDR probe for using TDR systems to determine also the soil's matric potential; developing a methodology for utilizing the thermodielectric effect, namely, the variation of the soil's effective permittivity with temperature, to evaluate its specific surface area; developing a simple method for characterizing particle shape by measuring the repose angle of a granular material avalanching in water; measuring and characterizing the pore-scale, saturation-degree-dependent anisotropy factor for electrical and hydraulic conductivities; and studying the dielectric properties of cereal grains towards improved determination of their water content. A reliable evaluation of the soil textural attributes (e.g. the specific surface area mentioned above) and its water content is essential for intensive irrigation and fertilization processes and within extensive precision agriculture management. The findings of the present research project are expected to improve the determination of cereal grain water content by on-line dielectric methods. A precise evaluation of grain water content is essential for pricing and evaluation of drying-before-storage requirements, issues involving energy savings and commercial aspects of major economic importance to American agriculture. The results and methodologies developed within the above-mentioned side studies are expected to be beneficial also to other industrial and environmental practices requiring water content determination and characterization of granular materials.
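One widely used baseline for the permittivity - water content calibration curves discussed above is the empirical third-order polynomial of Topp et al. (1980); soil-specific models of the kind pursued in this project aim to improve on such general curves. A small sketch:

```python
# Topp et al. (1980) empirical calibration: volumetric water content from
# the measured relative (effective) permittivity of the soil.
def topp_water_content(eps_r: float) -> float:
    """Volumetric water content (m^3/m^3) from relative permittivity."""
    return (-5.3e-2 + 2.92e-2 * eps_r
            - 5.5e-4 * eps_r ** 2 + 4.3e-6 * eps_r ** 3)

print(topp_water_content(20.0))  # ~0.35 m^3/m^3 for a moist soil
```

The project's findings imply that the coefficients of such a curve should be adjusted per soil, using attributes like texture and specific surface area, rather than applied universally.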
APA, Harvard, Vancouver, ISO, and other styles
3

MR MSK Cartilage for Joint Disease, Consensus Profile. Chair Thomas Link and Xiaojuan Li. Radiological Society of North America (RSNA) / Quantitative Imaging Biomarkers Alliance (QIBA), September 2021. http://dx.doi.org/10.1148/qiba/20210925.

Full text
Abstract:
The goal of a QIBA Profile is to help achieve a useful level of performance for a given biomarker. The Claim (Section 2) describes the biomarker performance. The Activities (Section 3) contribute to generating the biomarker. Requirements are placed on the Actors that participate in those activities as necessary to achieve the Claim. Assessment Procedures (Section 4) for evaluating specific requirements are defined as needed. This QIBA Profile (MR-based cartilage compositional biomarkers, T1ρ and T2) addresses the application of T1ρ and T2 for the quantification of cartilage composition, which can be used as an imaging biomarker to diagnose, predict and monitor early osteoarthritis. It places requirements on Acquisition Devices, Technologists, MRI Physicists, Radiologists, Reconstruction Software and Image Analysis Tools involved in Subject Handling, Image Data Acquisition, Image Data Reconstruction, Image Quality Assurance (QA) and Image Analysis. The requirements are focused on achieving sufficient reproducibility and accuracy for measuring cartilage composition. The clinical performance target is to achieve a reproducibility of 4-5% for measurements of global cartilage composition with T2 and T1ρ relaxation time measurements, and a 95% confidence level for a true/critical change in cartilage composition (least significant change) with a precision of 11-14%, or 9-12% if only an increase is expected (in which case the claim is one-sided). The target applies to 3T MR scanners of one manufacturer with identical scan parameters across different sites. It does not apply to scanners from different manufacturers. This document is intended to help clinicians basing decisions on this biomarker, imaging staff generating this biomarker, vendor staff developing related products, purchasers of such products and investigators designing trials with imaging endpoints. Note that this document only states requirements to achieve the claim, not "requirements on standard of care." Conformance to this Profile is secondary to properly caring for the patient. Summary for Clinical Trial Use The MR-based cartilage compositional biomarkers profile defines the behavioral performance levels and quality control specifications for T1ρ and T2 scans used in single- and multi-center clinical trials of osteoarthritis and other trials assessing cartilage composition longitudinally, with a focus on therapies to treat degenerative joint disease. While the emphasis is on clinical trials, this process is also intended to be applied in clinical practice. The specific claims for accuracy are detailed below in the Claims. The specifications that must be met to achieve conformance with this Profile correspond to acceptable levels specified in the T1ρ and T2 Protocols. The aim of the QIBA Profile specifications is to minimize intra- and inter-subject, intra- and inter-platform, and inter-institutional variability of quantitative scan data due to factors other than the intervention under investigation. T1ρ and T2 studies performed according to the technical specifications of this QIBA Profile in clinical trials can provide quantitative data for single-timepoint assessments (e.g. disease burden, investigation of predictive and/or prognostic biomarker(s)) and/or for multi-timepoint comparative assessments (e.g., response assessment, investigation of predictive and/or prognostic biomarkers of treatment efficacy).
A motivation for the development of this Profile is that while a typical MR T1ρ and T2 measurement may be stable over days or weeks, this stability cannot be expected over the time that it takes to complete a clinical trial. In addition, there are well known differences between scanners and the operation of the same type of scanner at different imaging sites. The intended audiences of this document include: Biopharmaceutical companies, rheumatologists and orthopedic surgeons, and clinical trial scientists designing trials with imaging endpoints. Clinical research professionals. Radiologists, technologists, physicists and administrators at healthcare institutions considering specifications for procuring new MRI equipment for cartilage measurements. Radiologists, technologists, and physicists designing T1ρ and T2 acquisition protocols. Radiologists, and other physicians making quantitative measurements from T1ρ and T2 sequence protocols. Regulators, rheumatologists, orthopedic surgeons, and others making decisions based on quantitative image measurements. Technical staff of software and device manufacturers who create products for this purpose. Note that specifications stated as 'requirements' in this document are only requirements to achieve the claim, not 'requirements on standard of care.' Specifically, meeting the goals of this Profile is secondary to properly caring for the patient.
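The claim arithmetic above follows from standard repeatability algebra. Assuming the stated 4-5% reproducibility is a within-subject coefficient of variation (wCV), the least significant change (LSC) works out to the quoted ranges:

```latex
% Least significant change (LSC) from a within-subject coefficient of
% variation (wCV), for a two-timepoint comparison:
\[
  \mathrm{LSC}_{\text{two-sided}} = 1.96 \cdot \sqrt{2} \cdot \mathrm{wCV}
                                  \approx 2.77 \cdot \mathrm{wCV}, \qquad
  \mathrm{LSC}_{\text{one-sided}} = 1.645 \cdot \sqrt{2} \cdot \mathrm{wCV}
                                  \approx 2.33 \cdot \mathrm{wCV}
\]
% With wCV = 4-5% (the stated reproducibility):
%   two-sided: 2.77 x (4-5%) = 11-14%  (true change in either direction)
%   one-sided: 2.33 x (4-5%) = 9-12%   (if only an increase is expected)
```

Any measured change in T1ρ or T2 smaller than these thresholds is indistinguishable from test-retest noise at the 95% confidence level.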
APA, Harvard, Vancouver, ISO, and other styles
