Journal articles on the topic 'Efficacité des algorithmes'

Consult the top 36 journal articles for your research on the topic 'Efficacité des algorithmes.'


1

Distinguin, L., M. Blanchard, I. Rouillon, M. Parodi, and N. Loundon. "Réimplantation cochléaire chez l’enfant : efficacité des algorithmes décisionnels." Annales françaises d'Oto-rhino-laryngologie et de Pathologie Cervico-faciale 135, no. 4 (September 2018): 235–39. http://dx.doi.org/10.1016/j.aforl.2017.11.006.

2

Berriche, Amira, Dominique Crié, and Michel Calciu. "Une Approche Computationnelle Ancrée : Étude de cas des tweets du challenge #Movember en prévention de santé masculine." Décisions Marketing N° 112, no. 4 (January 25, 2024): 79–103. http://dx.doi.org/10.3917/dm.112.0079.

Abstract:
• Objective: The aim of this study is to present the computational grounded theory approach, in which researchers interpret the themes detected by artificial intelligence (AI) algorithms, and then to apply it to the #Movember case. • Methodology: Unsupervised classification by LDA and sentiment analysis were performed on 144,906 tweets from various participating countries (France, Italy, Belgium, Australia, USA, UK, Saudi Arabia, etc.). • Results: The results show that the process of individual engagement in the #Movember social movement comprises three main elements: (1) four segments of individual engagement (sympathizers, aware, engaged and maintainers), (2) collective emotions (positive and negative) and (3) cognitive and motivational factors (benefit-cost calculation, collective efficacy and identity). • Managerial implications: The results suggest marketing actions tailored to each segment to help both the organizers of the #Movember movement and health professionals (HPs) reach two main objectives: (1) screening and (2) awareness, recruitment and fundraising, through big data, by targeting people with a family history. • Originality: Research on #Movember usually relies on supervised algorithms, which have several limitations such as confirmation bias, lack of repeatability and high time requirements. This work uses the unsupervised LDA model to let the machine identify latent concepts, within a Computational Grounded Theory (CGT) perspective.
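The unsupervised pipeline this abstract describes (LDA topic detection followed by researcher interpretation) can be sketched in a few lines. This is an illustrative toy with a made-up four-tweet corpus, not the authors' code or data:

```python
# Toy LDA topic detection in the spirit of computational grounded theory:
# the machine proposes latent topics, which researchers then interpret and label.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [  # hypothetical stand-ins for the 144,906 #Movember tweets
    "movember moustache challenge donate",
    "prostate cancer screening awareness",
    "grow your moustache for movember",
    "get screened early for prostate cancer",
]
X = CountVectorizer().fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)  # per-tweet topic mixture, one row per tweet
```

Each row of `doc_topics` sums to 1; inspecting the top-weighted words per topic (via `lda.components_`) is where the "grounded" interpretation step happens.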
3

Geoffroy, P. A. "Les objets connectés sont-ils le futur de la sémiologie psychiatrique ?" European Psychiatry 30, S2 (November 2015): S78. http://dx.doi.org/10.1016/j.eurpsy.2015.09.353.

Abstract:
Connected health devices are proliferating and are widely used by the general public. They are now part of everyday life and represent a major economic stake. These tools may be mobile applications or connected objects such as actimetry wristbands, pedometers, blood pressure monitors, heart rate sensors, etc. [1,2]. Can the data produced by these connected devices be processed for medical purposes? While there is no doubt that these devices facilitate access to care, their semiological value and therapeutic efficacy are only too rarely tested scientifically. This presentation will assess the health benefits physicians can expect from these connected devices [1,4]. Their use will have to meet requirements of efficacy for individual and public health, but also ethical requirements concerning the protection of the collected data and health safety. This new era of e-health will entail the necessary development of new algorithms for screening, diagnosis and therapeutic decision-making.
4

Bibault, J. E., X. Mirabel, T. Lacornerie, C. Dewas, A. Jouin, E. Bogart, E. Tresch, and É. Lartigau. "Radiothérapie stéréotaxique des carcinomes bronchiques de petit stade chez 205 patients : efficacité, toxicité et comparaison dosimétrique de deux algorithmes de calcul de la dose." Cancer/Radiothérapie 18, no. 5-6 (October 2014): 621. http://dx.doi.org/10.1016/j.canrad.2014.07.100.

5

Yu, Bole. "Comparative Analysis of Machine Learning Algorithms for Sentiment Classification in Amazon Reviews." Highlights in Business, Economics and Management 24 (January 22, 2024): 1389–400. http://dx.doi.org/10.54097/eqmavw44.

Abstract:
Sentiment analysis serves as a crucial approach for gauging public opinion across various sectors, including the realm of product reviews. This study focuses on the evaluation of customer sentiments in Amazon reviews using an array of machine learning algorithms—Logistic Regression, Random Forest, Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM) networks. The methodology employed is robust, characterized by meticulous parameter tuning based on both theoretical and empirical considerations. Comparative analysis of these algorithms, grounded in accuracy and other performance metrics, offers valuable insights into their respective efficacies and limitations for sentiment analysis tasks. The findings of this study contribute to an enhanced understanding of the performance of different machine learning algorithms in sentiment classification and provide a foundation for future research in this domain.
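A comparison of the kind described here can be set up compactly. The sketch below (toy labelled reviews, two of the four model families) is illustrative, not the study's code:

```python
# Compare two sentiment classifiers on a toy review corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

reviews = ["great product love it", "terrible broke quickly",
           "excellent quality works well", "awful waste of money"] * 5
labels = [1, 0, 1, 0] * 5  # 1 = positive, 0 = negative

models = {
    "logistic_regression": make_pipeline(TfidfVectorizer(), LogisticRegression()),
    "random_forest": make_pipeline(TfidfVectorizer(), RandomForestClassifier(random_state=0)),
}
scores = {name: m.fit(reviews, labels).score(reviews, labels) for name, m in models.items()}
```

In a real comparison, held-out accuracy (e.g. via cross-validation) rather than training accuracy would be reported, alongside the other performance metrics the abstract mentions.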
6

Mohammed, Zouiten, Chaaouan Hanae, and Setti Larbi. "Comparative study on machine learning algorithms for early fire forest detection system using geodata." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 5 (October 1, 2020): 5507. http://dx.doi.org/10.11591/ijece.v10i5.pp5507-5513.

Abstract:
Forest fires have caused considerable losses to ecologies, societies and economies worldwide. To minimize these losses, modelling and predicting the occurrence of forest fires is worthwhile because it can support fire prevention and management. In recent years, the convolutional neural network (CNN) has become an important state-of-the-art deep learning algorithm, and its implementation has enriched many fields. We therefore propose a competitive spatial prediction model for automatic early detection of wildfires using machine learning algorithms. This model can help researchers predict forest fires and identify risk zones. A system applying machine learning algorithms to geodata can notify the interested parties and authorities in real time by issuing alerts and presenting them on maps built from geographical processing, for more effective analysis of the situation. This research extends the application of machine learning algorithms for early forest fire prediction to detection and representation on geographical information system (GIS) maps.
7

Sel, Artun, Bilgehan Sel, Umit Coskun, and Cosku Kasnakoglu. "Comparative Study of an EKF-Based Parameter Estimation and a Nonlinear Optimization-Based Estimation on PMSM System Identification." Energies 14, no. 19 (September 25, 2021): 6108. http://dx.doi.org/10.3390/en14196108.

Abstract:
In this study, two different parameter estimation algorithms are studied and compared: an iterated EKF and a nonlinear optimization algorithm based on on-line search methods, implemented to estimate the parameters of a given permanent magnet synchronous motor whose dynamics are assumed to be known and nonlinear. In addition to the parameters, the initial conditions of the dynamical system are also considered unknown, which constitutes one of the differences between the two algorithms. The implementation of the algorithms for the problem is detailed, along with adaptations of the methods to other variations of the problem reported in the literature. As for the computational aspect of the study, a convexity study is conducted to obtain a spherical neighborhood of the unknown terms around their correct values in the parameter space; obtaining such a region is important for determining the convexity properties of the optimization problem underlying the estimation. The design steps of both the EKF-based and the optimization-based methods are detailed, and their efficacies and shortcomings are discussed in light of the numerical simulations.
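The optimization-based half of such a comparison can be sketched on a much simpler plant. This assumes a hypothetical scalar system x[k+1] = a*x[k] + b*u[k] and, as the abstract describes, jointly estimates (a, b) and the unknown initial condition x0; the iterated-EKF branch is omitted for brevity:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical scalar plant standing in for the PMSM: x[k+1] = a*x[k] + b*u[k].
a_true, b_true, x0_true = 0.9, 0.5, 2.0
u = np.sin(0.3 * np.arange(50))  # known input signal

def simulate(a, b, x0):
    x = np.empty(51)
    x[0] = x0
    for k in range(50):
        x[k + 1] = a * x[k] + b * u[k]
    return x

y = simulate(a_true, b_true, x0_true)  # noiseless "measurements"

def cost(p):  # squared output error over the whole trajectory
    return float(np.sum((simulate(*p) - y) ** 2))

# Derivative-free search over [a, b, x0] from a deliberately poor initial guess.
est = minimize(cost, x0=[0.5, 0.1, 0.0], method="Nelder-Mead").x
```

With noiseless data and a smooth cost, the search recovers the true triple closely; the convexity question raised in the abstract is exactly about how far from the true values such a search may start and still converge.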
8

Shenify, Mohamed, Fokrul Alom Mazarbhuiya, and A. S. Wungreiphi. "Detecting IoT Anomalies Using Fuzzy Subspace Clustering Algorithms." Applied Sciences 14, no. 3 (February 2, 2024): 1264. http://dx.doi.org/10.3390/app14031264.

Abstract:
There are many applications of anomaly detection in the Internet of Things domain. IoT technology consists of a large number of interconnected digital devices that not only generate huge amounts of data continuously but also perform real-time computations. Since IoT devices are highly exposed to the Internet, they frequently face illegitimate access in the form of intrusions, anomalies, fraud, etc. Identifying these illegitimate accesses is an interesting research problem. In numerous applications, fuzzy clustering, rough set theory, or both have been successfully employed. As the data generated in IoT domains are high-dimensional, clustering methods designed for lower-dimensional data cannot be applied efficiently, and the few methods proposed for such applications to date have limited efficacy. So there is a need to address the problem. In this article, mixed approaches combining nano topology and fuzzy clustering techniques are proposed for anomaly detection in the IoT domain. The methods first use the nano topology of rough set theory to generate a CORE as a subspace and then employ a couple of well-known fuzzy clustering techniques on it for the detection of anomalies. As the anomalies are detected in the lower-dimensional space, and fuzzy clustering algorithms are involved, the performance of the proposed approaches improves comparatively. The effectiveness of the methods is evaluated using time-complexity analysis and experimental studies with a synthetic dataset and a real-life dataset. Experimentally, the proposed approaches were found to outperform traditional fuzzy clustering algorithms in terms of detection rates, accuracy rates, false alarm rates and computation times. Furthermore, the nano topological and common Mahalanobis distance-based fuzzy c-means algorithm (NT-CM-FCM) is the best among all traditional or nano topology-based algorithms, with accuracy rates of 84.02% and 83.21%, detection rates of 80.54% and 75.37%, and false alarm rates of 7.89% and 9.09% on the KDDCup’99 dataset and the Kitsune Network Attack Dataset, respectively.
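The fuzzy-clustering half of the proposed hybrid can be illustrated with a minimal fuzzy c-means loop. The nano-topology/CORE subspace step is omitted, and the two-blob toy data below stand in for (much higher-dimensional) IoT traffic:

```python
import numpy as np

# Minimal fuzzy c-means (FCM): alternate between centers and soft memberships.
def fuzzy_cmeans(X, c=2, m=2.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # random initial memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # u_ij = d_ij^(-2/(m-1)) / sum_k d_ik^(-2/(m-1))
    return U, centers

# Two well-separated toy blobs, standing in for normal vs. anomalous traffic.
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.2, (20, 2)),
               np.random.default_rng(2).normal(3.0, 0.2, (20, 2))])
U, centers = fuzzy_cmeans(X)
labels = U.argmax(axis=1)
```

Points whose largest membership is low sit between clusters, which is one simple way such fuzzy methods flag anomalies.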
9

Mortazavi, Amin, Amir Rashidi, Mostafa Ghaderi-Zefrehei, Parham Moradi, Mohammad Razmkabir, Ikhide G. Imumorin, Sunday O. Peters, and Jacqueline Smith. "Constraint-Based, Score-Based and Hybrid Algorithms to Construct Bayesian Gene Networks in the Bovine Transcriptome." Animals 12, no. 10 (May 19, 2022): 1305. http://dx.doi.org/10.3390/ani12101305.

Abstract:
Bayesian gene networks are powerful for modelling causal relationships and incorporating prior knowledge for making inferences about relationships. We used three algorithms to construct Bayesian gene networks around genes expressed in the bovine uterus and compared the efficacies of the algorithms. Dataset GSE33030 from the Gene Expression Omnibus (GEO) repository was analyzed using different algorithms for hub gene expression due to the effect of progesterone on bovine endometrial tissue following conception. Six different algorithms (grow-shrink, max-min parent children, tabu search, hill-climbing, max-min hill-climbing and restricted maximum) were compared in three higher categories, including constraint-based, score-based and hybrid algorithms. Gene network parameters were estimated using the bnlearn bundle, which is a Bayesian network structure learning toolbox implemented in R. The results obtained indicated the tabu search algorithm identified the highest degree between genes (390), Markov blankets (25.64), neighborhood sizes (8.76) and branching factors (4.38). The results showed that the highest number of shared hub genes (e.g., proline dehydrogenase 1 (PRODH), Sam-pointed domain containing Ets transcription factor (SPDEF), monocyte-to-macrophage differentiation associated 2 (MMD2), semaphorin 3E (SEMA3E), solute carrier family 27 member 6 (SLC27A6) and actin gamma 2 (ACTG2)) was seen between the hybrid and the constraint-based algorithms, and these genes could be recommended as central to the GSE33030 data series. Functional annotation of the hub genes in uterine tissue during progesterone treatment in the pregnancy period showed that the predicted hub genes were involved in extracellular pathways, lipid and protein metabolism, protein structure and post-translational processes. The identified hub genes obtained by the score-based algorithms had a role in 2-arachidonoylglycerol and enzyme modulation. 
In conclusion, different algorithms and subsequent topological parameters were used to identify hub genes to better illuminate pathways acting in response to progesterone treatment in the bovine uterus, which should help with our understanding of gene regulatory networks in complex trait expression.
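The topological parameters the study compares (Markov blanket size, neighbourhood size, branching factor) are straightforward to compute from a learned structure. A sketch on a hypothetical five-node DAG, not the bovine network:

```python
# Toy DAG as a set of directed edges (parent, child).
edges = {("A", "C"), ("B", "C"), ("C", "D"), ("B", "E")}
nodes = {n for edge in edges for n in edge}

def parents(n):
    return {u for u, v in edges if v == n}

def children(n):
    return {v for u, v in edges if u == n}

def markov_blanket(n):
    # parents + children + other parents of the node's children ("spouses")
    spouses = {p for ch in children(n) for p in parents(ch)} - {n}
    return parents(n) | children(n) | spouses

mb_sizes = {n: len(markov_blanket(n)) for n in nodes}
branching_factor = sum(len(children(n)) for n in nodes) / len(nodes)  # mean out-degree
```

Averaging these quantities over all nodes yields network-level statistics comparable to the per-algorithm figures reported in the abstract.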
10

Abdul Latiff, Liza, Huda Adibah Mohd Ramli, Ani Liza Asnawi, and Nur Haliza Abdul Wahab. "A STUDY ON CHANNEL AND DELAY-BASED SCHEDULING ALGORITHMS FOR LIVE VIDEO STREAMING IN THE FIFTH GENERATION LONG TERM EVOLUTION-ADVANCED." IIUM Engineering Journal 23, no. 1 (January 4, 2022): 233–43. http://dx.doi.org/10.31436/iiumej.v23i1.2115.

Abstract:
This paper investigates the performance of several channel- and delay-based scheduling algorithms for efficient QoS (Quality of Service) provision to a larger number of live video streaming users over the Fifth Generation Long-Term Evolution-Advanced (5G LTE-A) network. These algorithms were developed for legacy wireless networks, and minor changes were made to enable them to perform packet scheduling in the downlink of 5G LTE-A. The efficacies of the EXP and M-LWDF algorithms in maximizing the number of live video streaming users at the desired transmission reliability, minimizing average network delay and maximizing network throughput are shown via simulations. As the M-LWDF has a simpler mathematical formulation than the EXP, it is more favoured for implementation in the complex downlink of 5G LTE-A.
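The simplicity of M-LWDF, which the abstract cites in its favour, is visible in its textbook form: the priority of user i is k_i = a_i * W_i * r_i / R_i, where W_i is the head-of-line delay, r_i the instantaneous rate, R_i the average rate, and a_i = -log(delta_i) / tau_i encodes the delay bound tau_i and drop probability delta_i. The numbers below are illustrative, not the paper's simulation parameters:

```python
import math

# Textbook M-LWDF priority metric (illustrative sketch, not the paper's simulator).
def mlwdf_priority(hol_delay, inst_rate, avg_rate, delay_bound, drop_prob):
    a = -math.log(drop_prob) / delay_bound   # QoS weight a_i = -log(delta_i) / tau_i
    return a * hol_delay * inst_rate / avg_rate

users = {
    "video1": mlwdf_priority(hol_delay=0.040, inst_rate=2.0e6, avg_rate=1.0e6,
                             delay_bound=0.10, drop_prob=0.05),
    "video2": mlwdf_priority(hol_delay=0.010, inst_rate=1.0e6, avg_rate=1.0e6,
                             delay_bound=0.10, drop_prob=0.05),
}
scheduled = max(users, key=users.get)  # highest metric wins the resource block
```

The metric jointly rewards long queueing delay and good instantaneous channel quality, which is why it balances delay bounds against throughput.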
11

Darmon, Patrice, Jean-François Yale, Stewart B. Harris, Lori Berard, Mélanie Groleau, Pasha Javadil, and John Stewart. "Efficacité et sécurité d’un algorithme pragmatique de titration à 1 unité par jour par le patient (INSIGHT) pour l’insuline Glargine 300 U/ml (Gla-300)." Diabetes & Metabolism 43, no. 2 (March 2017): A60—A61. http://dx.doi.org/10.1016/s1262-3636(17)30275-6.

12

Routh, Devin, Lindsi Seegmiller, Charlie Bettigole, Catherine Kuhn, Chadwick D. Oliver, and Henry B. Glick. "Improving the Reliability of Mixture Tuned Matched Filtering Remote Sensing Classification Results Using Supervised Learning Algorithms and Cross-Validation." Remote Sensing 10, no. 11 (October 23, 2018): 1675. http://dx.doi.org/10.3390/rs10111675.

Abstract:
Mixture tuned matched filtering (MTMF) image classification capitalizes on the increasing spectral and spatial resolutions of available hyperspectral image data to identify the presence, and potentially the abundance, of a given cover type or endmember. Previous studies using MTMF have relied on extensive user input to obtain a reliable classification. In this study, we expand the traditional MTMF classification by using a selection of supervised learning algorithms with rigorous cross-validation. Our approach removes the need for subjective user input to finalize the classification, ultimately enhancing the replicability and reliability of the results. We illustrate this approach with an MTMF classification case study focused on leafy spurge (Euphorbia esula), an invasive forb in Western North America, using free 30-m hyperspectral data from the National Aeronautics and Space Administration’s (NASA) Hyperion sensor. For our data, our protocol shows a potential overall accuracy inflation of between 18.4% and 30.8% when cross-validation is omitted, depending on the supervised learning algorithm used. We propose this new protocol as a final step for the MTMF classification algorithm and suggest future researchers report a greater suite of accuracy statistics to affirm their classifications’ underlying efficacies.
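The accuracy inflation the authors quantify is easy to reproduce in miniature: on data with no real signal, training-set (resubstitution) accuracy stays high while cross-validated accuracy falls to chance. A generic illustration, unrelated to the MTMF data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, 100)          # labels independent of X: no real signal

clf = RandomForestClassifier(random_state=0)
resub = clf.fit(X, y).score(X, y)                 # resubstitution (training) accuracy
cv = cross_val_score(clf, X, y, cv=5).mean()      # honest cross-validated estimate
inflation = resub - cv
```

Here the flexible model memorizes the noise, so `resub` approaches 1.0 while `cv` hovers near 0.5, the same mechanism behind the 18.4-30.8% inflation reported above.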
13

Gomi, Tsutomu, and Yukio Koibuchi. "Use of a Total Variation Minimization Iterative Reconstruction Algorithm to Evaluate Reduced Projections during Digital Breast Tomosynthesis." BioMed Research International 2018 (June 19, 2018): 1–14. http://dx.doi.org/10.1155/2018/5239082.

Abstract:
Purpose. We evaluated the efficacies of the adaptive steepest descent projection onto convex sets (ASD-POCS), simultaneous algebraic reconstruction technique (SART), filtered back projection (FBP), and maximum likelihood expectation maximization (MLEM) total variation minimization iterative algorithms for reducing exposure doses during digital breast tomosynthesis for reduced projections. Methods. Reconstructions were evaluated using normal (15 projections) and half (i.e., thinned-out normal) projections (seven projections). The algorithms were assessed by determining the full width at half-maximum (FWHM), and the BR3D Phantom was used to evaluate the contrast-to-noise ratio (CNR) for the in-focus plane. A mean similarity measure of structural similarity (MSSIM) was also used to identify the preservation of contrast in clinical cases. Results. Spatial resolution tended to deteriorate in ASD-POCS algorithm reconstructions involving a reduced number of projections. However, the microcalcification size did not affect the rate of FWHM change. The ASD-POCS algorithm yielded a high CNR independently of the simulated mass lesion size and projection number. The ASD-POCS algorithm yielded a high MSSIM in reconstructions from reduced numbers of projections. Conclusions. The ASD-POCS algorithm can preserve contrast despite a reduced number of projections and could therefore be used to reduce radiation doses.
14

Nayanjyoti Mazumdar, Et al. "Significance of Data Structures and Data Retrieval Techniques on Sequence Rule Mining Efficacy." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (October 30, 2023): 508–18. http://dx.doi.org/10.17762/ijritcc.v11i9.8838.

Abstract:
Sequence mining aims to discover rules from diverse datasets by implementing rule mining algorithms with efficient data structures and data retrieval techniques. Traditional algorithms struggle to handle variable support measures, which may involve repeated reconstruction of the underlying data structures as thresholds change. To address these issues, the premiere sequence mining algorithm, AprioriAll, is implemented against an educational and a financial dataset, using the HASH and the TRIE data structures with scan reduction techniques. The primary idea is to study the impact of data structures and retrieval techniques on the rule mining process in handling diverse datasets. Performance evaluation metrics (support, confidence and lift) are considered for testing the efficacy of the algorithm in terms of memory requirements and execution time complexities. Results unveil the excellence of hashing in tree construction time and memory overhead for fixed sets of pre-defined support thresholds, whereas TRIE may avoid reconstruction and is capable of handling dynamic support thresholds, leading to shorter rule discovery times but higher memory consumption. This study highlights the effectiveness of the hash and TRIE data structures with respect to dataset characteristics during rule mining, and underscores the importance of choosing appropriate data structures based on dataset features, scanning techniques, and user-defined parameters.
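The TRIE side of the comparison can be sketched with a counting trie: each node stores how many input sequences pass through it, so support for a pattern is read off without rescanning the dataset, whatever the current threshold. Note this toy counts prefix support only (a simplification of the subsequence support AprioriAll actually uses), and the sequences are made up:

```python
# Minimal counting trie for candidate-sequence (prefix) support.
class TrieNode:
    def __init__(self):
        self.count = 0
        self.children = {}

def build_trie(sequences):
    root = TrieNode()
    for seq in sequences:
        node = root
        for item in seq:
            node = node.children.setdefault(item, TrieNode())
            node.count += 1          # one more sequence passes through this node
    return root

def support(root, pattern):
    node = root
    for item in pattern:
        if item not in node.children:
            return 0
        node = node.children[item]
    return node.count

root = build_trie([("a", "b", "c"), ("a", "b"), ("a", "c"), ("b", "c")])
```

Because supports live in the node counters, changing the user-defined threshold never forces a rebuild, which is exactly the dynamic-threshold advantage the abstract attributes to TRIE.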
15

Massago, Miyoko, Mamoru Massago, Pedro Henrique Iora, Sanderland José Tavares Gurgel, Celso Ivam Conegero, Idalina Diair Regla Carolino, Maria Muzanila Mushi, et al. "Applicability of machine learning algorithm to predict the therapeutic intervention success in Brazilian smokers." PLOS ONE 19, no. 3 (March 4, 2024): e0295970. http://dx.doi.org/10.1371/journal.pone.0295970.

Abstract:
Smoking cessation is an important public health policy worldwide. However, as far as we know, there is a lack of screening of variables related to the success of therapeutic intervention (STI) in Brazilian smokers by machine learning (ML) algorithms. To address this gap in the literature, we evaluated the ability of eight ML algorithms to correctly predict the STI in Brazilian smokers who were treated at a smoking cessation program in Brazil between 2006 and 2017. The dataset was composed of 12 variables, and the efficacies of the algorithms were measured by accuracy, sensitivity, specificity, positive predictive value (PPV) and area under the receiver operating characteristic curve. We plotted a decision tree flowchart and also measured the odds ratio (OR) between each independent variable and the outcome, and the importance of each variable for the best model based on PPV. The mean global values for the metrics described above were, respectively, 0.675±0.028, 0.803±0.078, 0.485±0.146, 0.705±0.035 and 0.680±0.033. The support vector machine was the best-performing algorithm, with a PPV of 0.726±0.031. Smoking cessation drug use was the root of the decision tree, with an OR of 4.42 and a variable importance of 100.00. An increase in the number of relapses also promoted a positive outcome, while higher consumption of cigarettes resulted in the opposite. In summary, the best model predicted 72.6% of positive outcomes correctly. Smoking cessation drug use and a higher number of relapses contributed to quitting smoking, while higher consumption of cigarettes showed the opposite effect. Expanding services and drug treatment for smokers are therefore important strategies to reduce the number of smokers and increase STI.
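The evaluation metrics used in this study derive directly from the confusion matrix. The counts below are made up for illustration (chosen so the results land near the reported means):

```python
# Accuracy, sensitivity, specificity and PPV from hypothetical confusion counts.
def classification_metrics(tp, fp, fn, tn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv":         tp / (tp + fp),   # positive predictive value
    }

m = classification_metrics(tp=58, fp=22, fn=14, tn=26)
```

With these invented counts, accuracy is 0.700, sensitivity 0.806, specificity 0.542 and PPV 0.725, close in spirit to the mean values reported above.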
16

Cuevas, Erik, Jorge Gálvez, Salvador Hinojosa, Omar Avalos, Daniel Zaldívar, and Marco Pérez-Cisneros. "A Comparison of Evolutionary Computation Techniques for IIR Model Identification." Journal of Applied Mathematics 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/827206.

Abstract:
System identification is a complex optimization problem which has recently attracted attention in science and engineering. In particular, the use of infinite impulse response (IIR) models for identification is preferred over their equivalent FIR (finite impulse response) models, since the former yield more accurate models of physical plants for real-world applications. However, IIR structures tend to produce multimodal error surfaces whose cost functions are significantly difficult to minimize. Evolutionary computation techniques (ECT) are used to estimate the solution to complex optimization problems. They are often designed to meet the requirements of particular problems because no single optimization algorithm can solve all problems competitively. Therefore, when new algorithms are proposed, their relative efficacies must be appropriately evaluated. Several comparisons among ECT have been reported in the literature. Nevertheless, they suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. This study presents a comparison of various evolutionary computation optimization techniques applied to IIR model identification. Results over several models are presented and statistically validated.
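A toy version of the task: recover the coefficients of a first-order IIR plant y[n] = a*y[n-1] + b*x[n] with a simple elitist evolution strategy. This is a generic illustration of evolutionary IIR identification, not one of the specific algorithms the study compares:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)                 # excitation signal

def iir(a, b):                           # first-order IIR filter response
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = a * y[n - 1] + b * x[n]
    return y

target = iir(0.7, 1.3)                   # "measured" plant output

def mse(p):
    return float(np.mean((iir(p[0], p[1]) - target) ** 2))

# Elitist (1+20) evolution strategy with a slowly shrinking mutation step.
best = np.array([0.0, 0.0])
sigma = 0.3
for _ in range(150):
    offspring = best + sigma * rng.normal(size=(20, 2))
    champ = offspring[int(np.argmin([mse(p) for p in offspring]))]
    if mse(champ) < mse(best):
        best = champ
    sigma *= 0.97
```

Population-based search of this kind needs no gradients, which is why ECT cope with the multimodal error surfaces that IIR structures produce.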
17

Singh, Manpreet, Priyankar Choudhary, Anterpreet Kaur Bedi, Saurav Yadav, and Rishi Singh Chhabra. "Compressive Strength Estimation of Waste Marble Powder Incorporated Concrete Using Regression Modelling." Coatings 13, no. 1 (December 30, 2022): 66. http://dx.doi.org/10.3390/coatings13010066.

Abstract:
A tremendous volumetric increase in waste marble powder as industrial waste has recently raised serious environmental concerns over water, soil and air pollution. In this paper, we exploit the capabilities of machine learning for compressive strength prediction of concrete incorporating waste marble powder for future use. Experimentation was carried out using different compositions of waste marble powder in concrete and varying water-binder ratios of 0.35, 0.40 and 0.45. The effect of different dosages of superplasticizer was also considered. Different regression algorithms, viz., multiple linear regression, K-nearest neighbour, support vector regression, decision tree, random forest, extra trees and gradient boosting, are exploited to analyse the effect of waste marble powder on concrete, and their efficacies are compared using various statistical metrics. Experiments reveal random forest to be the best model for compressive strength prediction, with an R2 value of 0.926 and a mean absolute error of 1.608. Further, Shapley additive explanations and variance inflation factor analysis showcase the capability of the best regression model in optimizing the use of marble powder as a partial replacement of cement in concrete.
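A reduced version of the modelling setup, on synthetic data whose columns mimic the mix variables (marble powder content, water-binder ratio, superplasticizer dose); the generating formula is invented purely for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: marble powder % (0-25), water-binder ratio (0.35-0.45), superplasticizer dose (0-2).
X = rng.uniform([0.0, 0.35, 0.0], [25.0, 0.45, 2.0], size=(300, 3))
# Invented strength law: higher w/b and marble content reduce strength, SP helps slightly.
y = 70.0 - 200.0 * (X[:, 1] - 0.35) - 0.4 * X[:, 0] + 3.0 * X[:, 2] + rng.normal(0.0, 1.0, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
r2, mae = r2_score(y_te, pred), mean_absolute_error(y_te, pred)
```

Held-out R2 and MAE are the same two statistics the study uses to crown random forest; swapping in the other regressors from the list is a one-line change.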
18

Amadieu, Romain, Adéla Chahine, and Sophie Breinig. "Place et utilisation de la procalcitonine en réanimation pédiatrique et néonatale." Médecine Intensive Réanimation 30, no. 3 (August 23, 2021): 257–70. http://dx.doi.org/10.37051/mir-00072.

Abstract:
Procalcitonin (PCT) is a biomarker of bacterial infection increasingly used in clinical practice. In an intensive care unit, the interpretation of blood PCT levels can be affected by many inflammatory situations (burns, trauma, extensive surgery including cardiac surgery, massive transfusion, renal failure, etc.). A review of the paediatric literature was carried out, focusing on the use of PCT in three areas: as a diagnostic marker of bacterial infection; as a marker ruling out bacterial infection; and as a guide to the duration of antibiotic therapy. In paediatric intensive care, PCT has moderate accuracy for the diagnosis of bacterial infection. Suspicion of bacterial infection must remain clinical and must lead, whatever the PCT value, to the early administration of broad-spectrum empirical antibiotic therapy, subsequently adapted to the identified pathogen and its susceptibility profile. Given its high negative predictive value, a PCT-guided algorithm seems attractive in paediatric intensive care to shorten the duration of total and broad-spectrum antibiotic therapy without increasing reinfection, using criteria such as PCT <0.5 ng/mL or a decrease of ≥50-80% from the peak value. Such algorithms have demonstrated their efficacy and safety with a high level of evidence in adult intensive care. However, none of the previously published paediatric studies were randomized. Two paediatric randomized controlled trials are currently under way: a large French multicentre study in neonatal intensive care and a single-centre American study in paediatric intensive care. The number of subjects included must be large enough to validate the safety objective (non-inferiority in terms of mortality). Finally, adherence to existing guidelines could by itself reduce current durations of antibiotic therapy.
19

Yeng, Prosper Kandabongee, Ashenafi Zebene Woldaregay, Terje Solvoll, and Gunnar Hartvigsen. "Cluster Detection Mechanisms for Syndromic Surveillance Systems: Systematic Review and Framework Development." JMIR Public Health and Surveillance 6, no. 2 (May 26, 2020): e11512. http://dx.doi.org/10.2196/11512.

Abstract:
Background The time lag in detecting disease outbreaks remains a threat to global health security. The advancement of technology has made health-related data and other indicator activities easily accessible for syndromic surveillance of various datasets. At the heart of disease surveillance lies the clustering algorithm, which groups data with similar characteristics (spatial, temporal, or both) to uncover significant disease outbreak. Despite these developments, there is a lack of updated reviews of trends and modelling options in cluster detection algorithms. Objective Our purpose was to systematically review practically implemented disease surveillance clustering algorithms relating to temporal, spatial, and spatiotemporal clustering mechanisms for their usage and performance efficacies, and to develop an efficient cluster detection mechanism framework. Methods We conducted a systematic review exploring Google Scholar, ScienceDirect, PubMed, IEEE Xplore, ACM Digital Library, and Scopus. Between January and March 2018, we conducted the literature search for articles published to date in English in peer-reviewed journals. The main eligibility criteria were studies that (1) examined a practically implemented syndromic surveillance system with cluster detection mechanisms, including over-the-counter medication, school and work absenteeism, and disease surveillance relating to the presymptomatic stage; and (2) focused on surveillance of infectious diseases. We identified relevant articles using the title, keywords, and abstracts as a preliminary filter with the inclusion criteria, and then conducted a full-text review of the relevant articles. We then developed a framework for cluster detection mechanisms for various syndromic surveillance systems based on the review. Results The search identified a total of 5936 articles. Removal of duplicates resulted in 5839 articles. After an initial review of the titles, we excluded 4165 articles, with 1674 remaining. 
Reading of abstracts and keywords eliminated 1549 further records. An in-depth assessment of the remaining 125 articles resulted in a total of 27 articles for inclusion in the review. The result indicated that various clustering and aberration detection algorithms have been empirically implemented or assessed with real data and tested. Based on the findings of the review, we subsequently developed a framework to include data processing, clustering and aberration detection, visualization, and alerts and alarms. Conclusions The review identified various algorithms that have been practically implemented and tested. These results might foster the development of effective and efficient cluster detection mechanisms in empirical syndromic surveillance systems relating to a broad spectrum of space, time, or space-time.
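The aberration-detection stage of such a framework can be made concrete with a toy temporal detector. The sketch below (an illustration in the spirit of the EARS C1 family of algorithms, not code from any of the reviewed systems; the visit counts are invented) flags days whose syndromic count exceeds the moving-baseline mean by a z-score threshold:

```python
from statistics import mean, stdev

def c1_alerts(counts, baseline=7, threshold=3.0):
    """Flag indices whose count exceeds mean + threshold*sd of the
    preceding `baseline` days (a simplified EARS-C1-style detector)."""
    alerts = []
    for t in range(baseline, len(counts)):
        window = counts[t - baseline:t]
        mu, sd = mean(window), stdev(window)
        if counts[t] > mu + threshold * max(sd, 1e-9):
            alerts.append(t)
    return alerts

daily_ed_visits = [12, 10, 11, 13, 12, 11, 10, 12, 11, 30, 12]
print(c1_alerts(daily_ed_visits))  # day 9 (count 30) is flagged
```

Real systems layer spatial scan statistics and alert visualization on top of such temporal baselines, as the framework developed in the review describes.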
APA, Harvard, Vancouver, ISO, and other styles
20

Huang, Lei, David Brunell, Clifford Stephan, James Mancuso, Xiaohui Yu, Bin He, Timothy C. Thompson, et al. "Driver network as a biomarker: systematic integration and network modeling of multi-omics data to derive driver signaling pathways for drug combination prediction." Bioinformatics 35, no. 19 (February 15, 2019): 3709–17. http://dx.doi.org/10.1093/bioinformatics/btz109.

Full text
Abstract:
Abstract Motivation Drug combinations that simultaneously suppress multiple cancer driver signaling pathways increase therapeutic options and may reduce drug resistance. We have developed a computational systems biology tool, DrugComboExplorer, to identify driver signaling pathways and predict synergistic drug combinations by integrating the knowledge embedded in vast amounts of available pharmacogenomics and omics data. Results This tool generates driver signaling networks by processing DNA sequencing, gene copy number, DNA methylation and RNA-seq data from individual cancer patients using an integrated pipeline of algorithms, including bootstrap aggregating-based Markov random field, weighted co-expression network analysis and supervised regulatory network learning. It uses a systems pharmacology approach to infer the combinatorial drug efficacies and synergy mechanisms through drug functional module-induced regulation of target expression analysis. Application of our tool on diffuse large B-cell lymphoma and prostate cancer demonstrated how synergistic drug combinations can be discovered to inhibit multiple driver signaling pathways. Compared with existing computational approaches, DrugComboExplorer had higher prediction accuracy based on in vitro experimental validation and probability concordance index. These results demonstrate that our network-based drug efficacy screening approach can reliably prioritize synergistic drug combinations for cancer and uncover potential mechanisms of drug synergy, warranting further studies in individual cancer patients to derive personalized treatment plans. Availability and implementation DrugComboExplorer is available at https://github.com/Roosevelt-PKU/drugcombinationprediction. Supplementary information Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
21

Singhal, Jyoti, and Neamat ElGayar. "SENTIMENT ANALYSIS OF COVID-19 VACCINE TWEETS." International Research Journal of Computer Science 9, no. 4 (April 30, 2022): 61–88. http://dx.doi.org/10.26562/irjcs.2021.v0904.003.

Full text
Abstract:
In the last decade, social media has emerged as the largest centralized source of opinions, expressions, blogs and micro-blogs, news, and other information. It has presented a great opportunity for researchers, industries, and governments to understand the behavior of their customers and constituents to better align their products and services with their customers’ and citizens’ requirements. Among the social media sources, Twitter is a unique source in that its data (microblogs) is unstructured and available for free. Twitter is used widely across the globe, and its microblog concept lends itself to analyzing the underlying sentiment. A recent debate has been on the COVID vaccines – whether the potential benefits outweigh the side effects. Currently, there are many vaccines available with different claimed efficacy against the virus. The varying efficacies of these vaccines have attracted a public discourse. This research aims to analyze COVID-19 vaccine-related tweets to better understand the pattern of public sentiments and opinions about the vaccines with respect to their side effects, potency, availability, and efficacy. The tweets are categorized and analyzed based on their polarity and subjectivity towards the vaccines. To perform the classification of tweets based on aspects, machine learning techniques such as Logistic Regression (LR), Multinomial Naïve Bayes (MNB), and Support Vector Machine (SVM), along with the deep learning technique Long Short-Term Memory (LSTM), are used. All these classification algorithms are then compared and evaluated on the measures of precision, recall, accuracy score and F1-score. Apart from categorization and classification, the topic modelling method LDA is used to extract topics based on their similarity and frequency that can sum up the sentiment of the general public towards the whole process of COVID vaccination. Based on 60,000 tweets between 1-March-2021 and 31-May-2021, overall public sentiment for the vaccines indicated a positive trend.
Among the vaccine aspects analyzed, efficacy turned out to be the most positive, encouraging people to advocate for vaccines. On evaluation and comparison of model performance, the bidirectional LSTM outperformed all the machine learning algorithms with 92% accuracy. Among the machine learning algorithms based on different vectorization techniques, Logistic Regression achieved the highest accuracy of 73% with the count vectorizer, while SVM and MNB reached only 64% accuracy. Topic modeling with LDA on the whole dataset of 60,000 tweets yielded four as the optimum number of topics, with a coherence score of 0.625. Most of the topics shared the common themes of availability, efficacy, and side effects.
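The evaluation measures named above all reduce to counts from a confusion matrix. A minimal, self-contained illustration (not the authors' pipeline; the labels and data are made up) for a binary positive/negative split:

```python
def binary_metrics(y_true, y_pred, positive="pos"):
    """Precision, recall, accuracy and F1 from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall,
            "accuracy": correct / len(y_true), "f1": f1}

truth = ["pos", "pos", "neg", "neg", "pos", "neg"]
pred  = ["pos", "neg", "neg", "pos", "pos", "neg"]
print(binary_metrics(truth, pred))
```

F1 is the harmonic mean of precision and recall, which is why studies like this one report it alongside plain accuracy on imbalanced sentiment classes.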
APA, Harvard, Vancouver, ISO, and other styles
22

Mukherjee, Taniya, Isha Sangal, Biswajit Sarkar, and Tamer M. Alkadash. "Mathematical estimation for maximum flow of goods within a cross-dock to reduce inventory." Mathematical Biosciences and Engineering 19, no. 12 (2022): 13710–31. http://dx.doi.org/10.3934/mbe.2022639.

Full text
Abstract:
Supply chain management has recently renovated its strategy by implementing a cross-docking scheme. Cross-docking is a calculated logistics strategy where freight emptied from inbound vehicles is handled straightforwardly onto outbound vehicles, eliminating the intermediate storage process. The cross-docking approach thrives on the minimum storage time of goods in the inventory. Most cross-docks avail temporary storage docks where items can be stored for up to 24 hours before being packed up for transportation. The storage capacity of the cross-dock varies depending on the nature of ownership. In rented cross-dock centers, the temporary storage docks are considered of infinite capacity. This study assumes that the temporary storage facilities owned by the cross-dock center are of finite capacity, which subsequently affects the waiting time of the goods. The flow rate of goods within the cross-dock is expected to be maximum to avoid goods waiting long in the queue. This paper uses a series of max-flow algorithms, namely Ford-Fulkerson, Edmonds-Karp, and Dinic's, to optimize the flow of goods between the inbound port and the outbound dock and presents a logical explanation to reduce the waiting time of the trucks. A numerical example is analyzed to prove the efficacy of the algorithm in finding maximum flow. The result demonstrates that Dinic's algorithm performs better than the Ford-Fulkerson and Edmonds-Karp algorithms at addressing the problem of maximum flow at the cross-dock. The algorithm effectively provided the best result regarding iteration and time complexity. In addition, it also suggested the bottleneck paths of the network in determining the maximum flow.
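Of the three max-flow algorithms compared, Edmonds-Karp is the easiest to sketch: it repeatedly augments along the shortest path found by breadth-first search in the residual graph. The toy network below (one inbound dock feeding two storage docks feeding one outbound dock) is an illustration only, not the paper's cross-dock model:

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    """Max flow via shortest augmenting paths (BFS). `capacity` is a
    dict-of-dicts of residual capacities, mutated in place."""
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in capacity.get(u, {}).items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Recover the path, find its bottleneck, push flow, update residuals
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] for u, v in path)
        for u, v in path:
            capacity[u][v] -= bottleneck
            capacity.setdefault(v, {}).setdefault(u, 0)
            capacity[v][u] += bottleneck
        flow += bottleneck

# Inbound dock S feeds storage docks A and B, which feed outbound dock T
net = {"S": {"A": 10, "B": 5}, "A": {"T": 7}, "B": {"T": 8}}
print(edmonds_karp(net, "S", "T"))  # 12 (7 through A, 5 through B)
```

The saturated edges left in the residual graph are exactly the bottleneck paths the abstract mentions; Dinic's algorithm improves on this scheme by augmenting along many shortest paths per BFS phase.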
APA, Harvard, Vancouver, ISO, and other styles
23

Mokgautsi, Ntlotlang, Yu-Chi Wang, Bashir Lawal, Harshita Khedkar, Maryam Rachmawati Sumitra, Alexander T. H. Wu, and Hsu-Shan Huang. "Network Pharmacological Analysis through a Bioinformatics Approach of Novel NSC765600 and NSC765691 Compounds as Potential Inhibitors of CCND1/CDK4/PLK1/CD44 in Cancer Types." Cancers 13, no. 11 (May 21, 2021): 2523. http://dx.doi.org/10.3390/cancers13112523.

Full text
Abstract:
Cyclin D1 (CCND1) and cyclin-dependent kinase 4 (CDK4) both play significant roles in regulating cell cycle progression, while polo-like kinase 1 (PLK1) regulates cell differentiation and tumor progression, and activates cancer stem cells (CSCs), with the cluster of differentiation 44 (CD44) surface marker mostly being expressed. These oncogenes have emerged as promoters of metastasis in a variety of cancer types. In this study, we employed comprehensive computational and bioinformatics analyses to predict drug targets of our novel small molecules, NSC765600 and NSC765691, respectively derived from diflunisal and fostamatinib. The target prediction tools identified CCND1/CDK4/PLK1/CD44 as target genes for the NSC765600 and NSC765691 compounds. Additionally, the results of our in silico molecular docking analysis showed unique ligand–protein interactions with putative binding affinities of NSC765600 and NSC765691 with CCND1/CDK4/PLK1/CD44 oncogenic signaling pathways. Moreover, we used drug-likeness precepts as our guidelines for drug design and development, and found that both compounds passed the drug-likeness criteria of molecular weight, polarity, solubility, saturation, flexibility, and lipophilicity, and also exhibited acceptable pharmacokinetic properties. Furthermore, we used Developmental Therapeutics Program (DTP) algorithms and identified similar fingerprints and mechanisms of NSC765600 and NSC765691 with synthetic compounds and standard anticancer agents in the NCI database. We found that NSC765600 and NSC765691 displayed antiproliferative and cytotoxic effects against a panel of NCI-60 cancer cell lines. Based on these findings, NSC765600 and NSC765691 exhibited satisfactory levels of safety with regard to toxicity, and met all of the required criteria for drug-likeness precepts.
Currently, further in vitro and in vivo investigations in tumor-bearing mice are in progress to study the potential treatment efficacies of the novel NSC765600 and NSC765691 small molecules.
APA, Harvard, Vancouver, ISO, and other styles
24

RAJDEV, POOJA, MATTHEW WARD, and PEDRO IRAZOQUI. "EFFECT OF STIMULUS PARAMETERS IN THE TREATMENT OF SEIZURES BY ELECTRICAL STIMULATION IN THE KAINATE ANIMAL MODEL." International Journal of Neural Systems 21, no. 02 (April 2011): 151–62. http://dx.doi.org/10.1142/s0129065711002730.

Full text
Abstract:
Preliminary results from animal and clinical studies demonstrate that electrical stimulation of brain structures can reduce seizure frequency in patients with refractory epilepsy. Since most researchers derive stimulation parameters by trial and error, it is unclear what stimulation frequency, amplitude and duration constitutes a set of optimal stimulation parameters for aborting seizure activity in a given patient. In this investigation, we begin to quantify the independent effects of stimulation parameters on electrographic seizures, such that they could be used to develop an efficient closed-loop prosthesis that intervenes before the clinical onset of a seizure and seizure generalization. Biphasic stimulation is manually delivered to the hippocampus in response to a visually detected electrographic seizure. Such focal, responsive stimulation allows for anti-seizure treatment delivery with improved temporal and spatial specificity over conventional open-loop stimulation paradigms, with the possibility of avoiding tissue damage stemming from excessive exposure to electrical stimulation. We retrospectively examine the effects of stimulation frequency (low, medium and high), pulse-width (low and high) and amplitude (low and high) in seizures recorded from 23 kainic acid treated rats. We also consider the effects of total charge delivered and the rate of charge delivery, and identify stimulation parameter sets that induce after-discharges or more seizures. Among the stimulation parameters evaluated, we note 2 major findings. First, stimulation frequency is a key parameter for inhibiting seizure activity; the anti-seizure effect cannot be attributed to only the charge delivered per phase. Second, an after-discharge curve shows that as the frequency and pulse-width of stimulation increases, smaller pulse amplitudes are capable of eliciting an after-discharge. 
It is expected that stimulation parameter optimization will lead to devices with enhanced treatment efficacies and reduced side-effect profiles, especially when used in conjunction with seizure prediction or detection algorithms in a closed-loop control application.
APA, Harvard, Vancouver, ISO, and other styles
25

Vidwans, Niraj Ashutosh, Bhupesh Pydiraju Y, Eshan Sandhu, Pushkar P. Lele, and Sreeram Vaddiraju. "Using Cell Motility and Particle Tracking to Deduce Mechanisms and Kinetics Underlying Photocatalytic Water Disinfection in Real Time." ECS Meeting Abstracts MA2023-02, no. 18 (December 22, 2023): 1198. http://dx.doi.org/10.1149/ma2023-02181198mtgabs.

Full text
Abstract:
Photocatalysis is a promising water disinfection method that has the potential to complement traditional approaches to water remediation. Large-scale implementation of photocatalysis for water remediation requires a two-fold improvement to the current state-of-the-art: understanding of the mechanisms underlying photocatalyst-pathogen interaction and pathogen inactivation during photocatalysis, and a process for rapid catalyst discovery that provides for evaluating photocatalyst efficacies in a reliable manner. Toward this goal, we will discuss the use of novel tools developed in our laboratory to track the viability loss of bacterial cells inactivated by photocatalytic treatment in real time. Using optical microscopy and particle-tracking algorithms, we observed that, when exposed to titanium dioxide (TiO2) nanowire-assisted photocatalytic stressors, the change in the motility of Escherichia coli (E. coli) cells tracks viability loss precisely. Furthermore, using phase and fluorescence optical microscopy, real-time observations of the interactions between the cells and the photocatalyst nanowires were also performed. Our findings suggest that these interactions occur through collisions between the nanowires and bacteria. Based on these observations, we developed a phenomenological model explaining the pseudo-first-order kinetics of E. coli inactivation. Through the use of fluorescence-based microscopy, we observed that the motility loss (and hence viability loss) was due to the dissipation of the proton motive force that powers motility, caused by cell membrane integrity loss. Our experiments show a good match between the kinetics of motility loss and viability loss for various operating conditions, showing that the methods presented here are versatile in applicability. Overall, our results indicate that these in situ methods offer significant time-savings for characterizing viability loss of pathogens over the traditional ex-situ methods. 
As indicated above, these methods will also help accelerate the evaluation of novel antibacterial agents, including, but not limited to novel photocatalysts, against emerging treatment-resistant pathogens, for applications ranging from water disinfection to equipment decontamination in healthcare facilities.
APA, Harvard, Vancouver, ISO, and other styles
26

RAJAOUI, Nordine. "BAYÉSIEN VERSUS CMA-ES : OPTIMISATION DES HYPERPARAMÈTRES ML [PARTIE 2]." Management & Data Science, 2023. http://dx.doi.org/10.36863/mds.a.25154.

Full text
Abstract:
In the first part, we presented and illustrated the workings of two optimization algorithms: Bayesian optimization and CMA-ES. While the former is well known in the data science community, notably through Python's Hyperopt library, the latter is less often used when searching for the best hyperparameters of a given model to improve its performance. It is therefore worth asking whether this algorithm has a place in ML, in particular for hyperparameter search. In this article, we analyze the different factors on which the relevance of using CMA-ES in ML depends, comparing its effectiveness with Hyperopt.
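For readers unfamiliar with the evolution-strategy family that CMA-ES belongs to, a toy (1+λ) ES with 1/5th-rule step-size adaptation captures the sample-and-adapt loop; the full covariance adaptation that gives CMA-ES its name is deliberately omitted, and the objective below is a stand-in, not a real hyperparameter surface:

```python
import random

def one_plus_lambda_es(f, x0, sigma=1.0, lam=8, iters=60, seed=42):
    """Minimize f with a (1+lambda) evolution strategy and 1/5th-rule
    step-size adaptation -- a toy relative of CMA-ES (no covariance
    matrix adaptation)."""
    rng = random.Random(seed)
    best, best_val = list(x0), f(x0)
    for _ in range(iters):
        # Sample lam Gaussian offspring around the current best point
        children = [[xi + sigma * rng.gauss(0, 1) for xi in best] for _ in range(lam)]
        vals = [f(c) for c in children]
        improved = min(vals) < best_val
        if improved:
            i = vals.index(min(vals))
            best, best_val = children[i], vals[i]
        # 1/5th success rule: widen the search on success, shrink on failure
        sigma *= 1.3 if improved else 0.85
    return best, best_val

sphere = lambda x: sum(v * v for v in x)
x, fx = one_plus_lambda_es(sphere, [3.0, -2.0])
print(round(fx, 4))
```

Bayesian optimization, by contrast, fits a surrogate model of f and picks the next point by maximizing an acquisition function, which is why it tends to be more sample-efficient on expensive ML training runs while ES-style methods shine when evaluations are cheap and parallel.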
APA, Harvard, Vancouver, ISO, and other styles
27

Mann, Juliana, Victoria Cox, Sean Gorman, and Piera Calissi. "Barriers to and Facilitators of Delabelling of Antimicrobial Allergies: A Qualitative Meta-synthesis." Canadian Journal of Hospital Pharmacy 77, no. 1 (February 14, 2024). http://dx.doi.org/10.4212/cjhp.3490.

Full text
Abstract:
Background: Patients who report penicillin allergies may receive alternative antibiotics. Such substitution contributes to antimicrobial resistance, lower treatment efficacy, increased frequency of adverse events, and increased costs. Approximately 90% of individuals who report a penicillin allergy can tolerate a penicillin. Objective: To identify the barriers to and facilitators of removal by health care workers of inaccurate antimicrobial allergies from patient records, known as delabelling. Data Sources: The MEDLINE database was searched from inception to December 29, 2020. Study Selection and Data Extraction: Qualitative studies evaluating health care professionals’ perceptions of barriers to and/or facilitators of the act of delabelling a patient’s antimicrobial allergies were included in the meta-synthesis. Data Synthesis: The Theoretical Domains Framework was used to code and group individual utterances from the included studies, which were mapped to the Behaviour Change Wheel and corresponding intervention function and policy categories. Results: Four studies met the inclusion criteria. Eight themes were identified as representing barriers to delabelling: delabelling skills, patient education skills, knowledge, electronic health records (EHRs), communication frameworks, time, fear about allergic reactions, and professional roles. Behaviour change interventions that may overcome these barriers include education, training, algorithms and toolkits, changes to EHRs, use of dedicated personnel, policies, incentivization of correct labelling, and an audit system. Conclusions: Eight themes were identified as barriers to delabelling of antimicrobial allergies. Future behaviour change interventions to address these barriers were proposed. Confidence in the findings of this study was judged to be moderate, according to the GRADE CERQual approach.
APA, Harvard, Vancouver, ISO, and other styles
28

Jenkins, Phillip R., Matthew J. Robbins, and Brian J. Lunday. "Approximate Dynamic Programming for Military Medical Evacuation Dispatching Policies." INFORMS Journal on Computing, August 14, 2020. http://dx.doi.org/10.1287/ijoc.2019.0930.

Full text
Abstract:
Military medical planners must consider how aerial medical evacuation (MEDEVAC) assets will be dispatched when preparing for and supporting high-intensity combat operations. The dispatching authority seeks to dispatch MEDEVAC assets to prioritized requests for service, such that battlefield casualties are effectively and efficiently transported to nearby medical-treatment facilities. We formulate and solve a discounted, infinite-horizon Markov decision process (MDP) model of the MEDEVAC dispatching problem. Because the high dimensionality and uncountable state space of our MDP model renders classical dynamic programming solution methods intractable, we instead apply approximate dynamic programming (ADP) solution methods to produce high-quality dispatching policies relative to the currently practiced closest-available dispatching policy. We develop, test, and compare two distinct ADP solution techniques, both of which utilize an approximate policy iteration (API) algorithmic framework. The first algorithm uses least-squares temporal differences (LSTD) learning for policy evaluation, whereas the second algorithm uses neural network (NN) learning. We construct a notional, yet representative planning scenario based on high-intensity combat operations in southern Azerbaijan to demonstrate the applicability of our MDP model and to compare the efficacies of our proposed ADP solution techniques. We generate 30 problem instances via a designed experiment to examine how selected problem features and algorithmic features affect the quality of solutions attained by our ADP policies. Results show that the respective policies determined by the NN-API and LSTD-API algorithms significantly outperform the closest-available benchmark policies in 27 (90%) and 24 (80%) of the problem instances examined. Moreover, the NN-API policies significantly outperform the LSTD-API policies in each of the problem instances examined. 
Compared with the closest-available policy for the baseline problem instance, the NN-API policy decreases the average response time of important urgent (i.e., life-threatening) requests by 39 minutes. These research models, methodologies, and results inform the implementation and modification of current and future MEDEVAC tactics, techniques, and procedures, as well as the design and purchase of future aerial MEDEVAC assets.
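The object the authors' ADP algorithms approximate is the MDP value function; on a state space small enough to enumerate, it can be computed exactly by value iteration. The two-state model below is illustrative only (the states, rewards, and transitions are invented, not taken from the paper's MEDEVAC scenario):

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """Exact value iteration: V(s) = max_a [ R(s,a) + gamma * sum_s' P[s,a][s'] V(s') ].
    ADP methods like LSTD-API and NN-API replace the table V with a learned
    approximation when the state space is too large for this sweep."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s, a] + gamma * sum(p * V[s2] for s2, p in P[s, a].items())
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy dispatch model: dispatching while "busy" pays little; holding frees the asset
states, actions = ["idle", "busy"], ["dispatch", "hold"]
P = {("idle", "dispatch"): {"busy": 1.0}, ("idle", "hold"): {"idle": 1.0},
     ("busy", "dispatch"): {"busy": 1.0}, ("busy", "hold"): {"idle": 1.0}}
R = {("idle", "dispatch"): 5.0, ("idle", "hold"): 0.0,
     ("busy", "dispatch"): 1.0, ("busy", "hold"): 0.0}
V = value_iteration(states, actions, P, R)
print({s: round(v, 2) for s, v in V.items()})
```

In this toy model the optimal policy dispatches when idle and holds when busy, mirroring the intuition that a myopic closest-available rule can be beaten by policies that account for future request arrivals.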
APA, Harvard, Vancouver, ISO, and other styles
29

Malik Muhammad Hussain, Farrukh Shehzad, Muhammad Islam, Ashique Ali Chohan, Rashid Ahmed, and H. M. Muddasar Jamil Shera. "Measuring the Performance of Supervised Machine Learning Algorithms for Optimizing Wheat Productivity Prediction Models: A Comparative Study." Proceedings of the Pakistan Academy of Sciences: A. Physical and Computational Sciences 60, no. 4 (December 12, 2023). http://dx.doi.org/10.53560/ppasa(60-4)820.

Full text
Abstract:
The issue of precise crop prediction has gained worldwide attention amid food security concerns. In this study, the efficacies of different machine learning (ML) algorithms, i.e., multiple linear regression (MLR), decision tree regression (DTR), random forest regression (RFR), and support vector regression (SVR), are compared for predicting wheat productivity. The performances of the ML algorithms are then measured to obtain the optimized model. The updated dataset was collected from the Crop Reporting Service for various agronomical constraints. Randomized data partitions, hyper-parameter tuning, complexity analysis, cross-validation measures, learning curves, evaluation metrics, and prediction errors are used to select the optimized model. The ML models are trained on 75% of the dataset and tested on the remaining 25%. RFR achieved the highest R2 value of 0.90 for the training model, followed by DTR, MLR, and SVR. In the testing model, RFR also achieved an R2 value of 0.74, followed by MLR, DTR, and SVR. The lowest prediction error (P.E.) is found for RFR, followed by DTR, MLR, and SVR. K-fold cross-validation measures also show that RFR is the optimized model when compared with DTR, MLR, and SVR.
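The two evaluation tools the study leans on, R2 and K-fold cross-validation, are simple to state precisely. The sketch below (plain Python with invented yield figures, not the study's Crop Reporting Service data) shows both:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def k_fold_indices(n, k):
    """Yield (train, test) index lists for k contiguous folds."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        train = [j for j in range(n) if j not in set(test)]
        yield train, test

yields_true = [30.0, 32.5, 28.0, 35.0]   # hypothetical wheat yields
yields_pred = [29.0, 33.0, 27.5, 34.0]   # hypothetical model predictions
print(round(r_squared(yields_true, yields_pred), 3))
```

A training R2 of 0.90 against a testing R2 of 0.74, as reported for RFR, is the kind of gap K-fold cross-validation is designed to expose before a model is declared optimal.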
APA, Harvard, Vancouver, ISO, and other styles
30

Yildirim, M., C. Salbach, B. R. Milles, C. Reich, N. Frey, E. Giannitsis, and M. Mueller-Hennessen. "Comparison of the clinical chemistry score to other biomarker algorithms for rapid rule-out of acute myocardial infarction and risk stratification." European Heart Journal: Acute Cardiovascular Care 13, Supplement_1 (April 2024). http://dx.doi.org/10.1093/ehjacc/zuae036.058.

Full text
Abstract:
Abstract Funding Acknowledgements None. Background The clinical chemistry score (CCS) comprising high-sensitivity (hs) cardiac troponins (cTn), glucose and estimated glomerular filtration rate has been previously validated with superior accuracy for detection and risk stratification of acute myocardial infarction (AMI) compared to hs-cTn alone. Methods The CCS was directly compared to other biomarker-based algorithms for rapid rule-out and prognostication of AMI, including the hs-cTnT limit-of-blank (LoB, <3 ng/L) or limit-of-detection (LoD, <5 ng/L) and the dual marker strategy (DMS) (copeptin <10 pmol/L and hs-cTnT ≤14 ng/L), in 1506 patients presenting to the emergency department (ED) with symptoms suggestive of acute coronary syndrome. Negative predictive values (NPV) and sensitivities for rule-out of AMI were assessed, and outcomes included rates of the combined end-point of all-cause mortality, myocardial re-infarction and stroke within 12 months. Results NPVs of 100% (98.3-100%) could be found for a CCS=0, hs-cTnT LoB and hs-cTnT LoD, with rule-out efficacies of 11.1%, 7.6% and 18.3% as well as specificities of 13.0% (9.9-16.6%), 8.8% (7.3-10.5%) and 21.4% (19.2-23.8%), respectively. A CCS ≤1 achieved a rule-out in 32.2% of all patients with a NPV of 99.6% (98.4-99.9%) and specificity of 37.4% (34.2-40.5%), compared to a rule-out efficacy of 51.2%, NPV of 99.0% (98.0-99.5%) and specificity of 59.7% (57.0-62.4%) for the DMS. Rates of the combined end-point of death/AMI within 30 days ranged between 0.0% and 0.5% for all fast-rule-out protocols. Conclusions The CCS enables a reliable rule-out of AMI with low outcome rates in short and long-term follow-up for a specific population of ED patients. However, compared to a single or dual biomarker strategy, the CCS rule-out is attenuated by a loss of specificity and lower efficacy. Thus, the clinical benefit of the CCS in clinical practice seems to be negligible.
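All of the reported quantities (NPV, sensitivity, specificity, rule-out efficacy) derive from the four cells of a 2×2 triage table. A sketch with invented counts (not the study's data) that reproduces the CCS=0 pattern of a perfect NPV paired with low specificity:

```python
def rule_out_metrics(tp, fp, tn, fn):
    """Diagnostic metrics for a rule-out test: a 'negative' result means
    AMI ruled out, so NPV = TN / (TN + FN) and the rule-out efficacy is
    the share of patients triaged to the rule-out arm."""
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),           # missed-AMI rate is 1 - sensitivity
        "specificity": tn / (tn + fp),
        "npv": tn / (tn + fn),
        "rule_out_efficacy": (tn + fn) / total,  # fraction of patients ruled out
    }

# Hypothetical cohort: 150 AMI and 1350 non-AMI patients
m = rule_out_metrics(tp=150, fp=1070, tn=280, fn=0)
print({k: round(v, 3) for k, v in m.items()})
```

The trade-off the conclusion describes is visible here: zero false negatives buys a 100% NPV, but only the small true-negative fraction is actually ruled out, so specificity and efficacy stay low.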
APA, Harvard, Vancouver, ISO, and other styles
31

Khan, Waleed Ali, Zhenhua Rui, Ting Hu, Yueliang Liu, Fengyuan Zhang, and Yang Zhao. "Application of Machine Learning and Optimization of Oil Recovery and CO2 Sequestration in the Tight Oil Reservoir." SPE Journal, March 1, 2024, 1–21. http://dx.doi.org/10.2118/219731-pa.

Full text
Abstract:
Summary In recent years, shale and tight reservoirs have become an essential source of hydrocarbon production since advanced multistage and horizontal drilling techniques were developed. Tight oil reservoirs contain huge oil reserves but suffer from low recovery factors. For tight oil reservoirs, CO2-water alternating gas (CO2-WAG) is one of the preferred tertiary methods to enhance the overall cumulative oil production while also sequestering significant amounts of injected CO2. However, the evaluation of CO2-WAG is strongly dependent on the injection parameters, which renders numerical simulations computationally expensive. In this study, a novel approach has been developed that utilized machine learning (ML)-assisted computational workflow in optimizing a CO2-WAG project for a low-permeability oil reservoir considering both hydrocarbon recovery and CO2 storage efficacies. To make the predictive model more robust, two distinct proxy models—multilayered neural network (MLNN) models coupled with particle swarm optimization (PSO) and genetic algorithms (GAs)—were trained and optimized to forecast the cumulative oil production and CO2 storage. Later, the optimized results from the two algorithms were compared. The optimized workflow was used to maximize the predefined objective function. For this purpose, a field-scaled numerical simulation model of the Changqing Huang 3 tight oil reservoir was constructed. By December 2060, the base case predicts a cumulative oil production of 0.368 million barrels (MMbbl) of oil, while the MLNN-PSO and MLNN-GA forecast 0.389 MMbbl and 0.385 MMbbl, respectively. As compared with the base case (USD 150.5 million), MLNN-PSO and MLNN-GA predicted a further increase in the oil recovery factor by USD 159.2 million and USD 157.6 million, respectively. In addition, the base case predicts a CO2 storage amount of 1.09×10^5 tons, whereas the estimates from MLNN-PSO and MLNN-GA are 1.26×10^5 tons and 1.21×10^5 tons, respectively.
Compared with the base case, CO2 storage for the MLNN-PSO and MLNN-GA increased by 15.5% and 11%, respectively. In terms of the performance analysis of the two algorithms, both showed remarkable performance. PSO-developed proxies were 16 times faster and GA proxies were 10 times faster as compared with the reservoir simulation in finding the optimal solution. The developed optimization workflow is extremely efficient and computationally robust. The experiences and lessons will provide valuable insights into the decision-making process and in optimizing the Changqing Huang 3 low-permeability oil reservoir.
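A minimal particle swarm optimizer shows the mechanics behind the MLNN-PSO proxy search described above; the swarm parameters, bounds, and objective here are toy stand-ins, not the paper's reservoir model or its trained neural-network proxy:

```python
import random

def pso_maximize(f, bounds, n_particles=20, iters=80, seed=7):
    """Minimal particle swarm optimizer (global-best topology) -- a toy
    stand-in for the MLNN-PSO proxy optimization described above."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity pulled toward personal best and global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy smooth objective standing in for the proxy's predicted recovery
objective = lambda x: -(x[0] - 0.5) ** 2 - (x[1] - 2.0) ** 2
best, val = pso_maximize(objective, [(0.0, 1.0), (0.0, 4.0)])
print([round(b, 3) for b in best])
```

The speedups reported (proxies 10-16 times faster than the simulator) come from exactly this substitution: the expensive reservoir simulation is called only to train the proxy, and the swarm then queries the cheap proxy thousands of times.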
APA, Harvard, Vancouver, ISO, and other styles
32

Blais, Louise. "Biopolitique." Anthropen, 2019. http://dx.doi.org/10.17184/eac.anthropen.105.

Full text
Abstract:
We owe the notion of biopolitics to Michel Foucault, who proposed it as early as 1974 and attributed its heritage to his teacher, Georges Canguilhem. Since then, the notion of biopolitics has occupied a significant place in fields and disciplines as varied as private or public management, health and social services, commerce, and the humanities and social sciences (literature, philosophy, sociology, anthropology…). Biopolitics lies at the heart of processes of normalization and social control. To quote Foucault from the outset: "Society's control over individuals is exercised not only through consciousness or ideology, but also in the body and with the body. The body is a biopolitical reality; medicine is a biopolitical strategy" (Foucault, 1994: 210). Biopolitics, Foucault argues, is a political strategy of governance that must be situated in the context that gave rise to it: the emergence of liberalism (Foucault, 2004). Biopolitics designates the new object of governance of liberal societies over the past two centuries: the population as the set of the governed in their biological existence (Gros et al., 2013). Biopolitics is at once a political strategy, an instrument of knowledge/power, and a governmental/institutional practice. Its task, its responsibility, its mandate is to attend to the "health" of populations: birth rates, mortality, morbidity, hygiene, diet, sexuality, pollution, poverty, behaviours… air, water, buildings, sewers…. The field of health then extends indefinitely, through a panopticon, that is, the apparatus that makes possible the idea of an all-encompassing gaze bearing on each individual (Foucault, 1994: 261).
It is in this sense that, for Foucault, medicine cannot be reduced to the figure of the physician alone; it is a "biopolitical strategy" deployed and embodied in an institutional and professional apparatus indispensable to the governance of (neo)liberal societies (Foucault, 1994: 210). It is also in this sense that Guillaume le Blanc (2006: 154) argues: "The medicalization of human life is the major event of biopolitics." From this point of view, population and epidemiological studies, the first of which date back to the 19th century (Blais, 2006), take on their full importance as instruments of governance. On the one hand, they feed the choices and decisions of the governing concerning the populations to be governed, choices and decisions that are at once political, economic, social and cultural, and that are embedded in power relations. On the other hand, they model the representations of populations (of the governed) in their biological and social existence. Biopolitics is in this sense a mode of knowledge, both of populations as aggregates of individuals and of oneself as an individual within the collectivity. Biopolitics is, for Foucault, an instrument that forges norms, an instrument essential to governance and its sites of practice: justice, of course, but also, and notably, the institutions of health, social services, education, work… It establishes visual norms (appearances, behaviours, performances, biological existences…) and discursive norms (ways of naming things, of saying them, the sayable, what is admissible, speech, expression, argumentation…). It models the representations made of the norm, representations as much of the other, the different, the non-norm, as of oneself as individual(s) with respect to and in relation to others and one's place in the collectivity.
As le Blanc (2006: 9) points out, for Foucault life is qualified by norms that are at once norms of knowledge and norms of power. Yet social control is not a merely unidirectional, hierarchical or "top-down" process; such a view would fail to capture the complexity of its mode of operation. Judith Revel (2008: 28) thus summarizes how neoliberal biopolitics functions and what makes it effective in Foucault's thought, effective in the sense of "how it works". Social control, she says, is "an economy of power that manages society according to normative models" of the state apparatus and its institutions. At the same time, so that it is not mere authoritarian repression, social control operates through the internalization of the norm by individuals, a "fine penetration of power into the meshes of life" that Foucault called "capillary power". As a mode of knowledge, biopolitics produces knowledge and therefore, in the well-worn formula, power. On the one hand, there is the knowledge that feeds the governing in their exercise of power. The ever more differentiated classifications and categories of biopolitics produce objectified individuals of a population to be managed, the individual-object upon which the institutions of governance act (Blais, 2006). On this point, Foucault joins authors such as Illich (1975), Goffman (1968) and Castel (1981, 1979, 1977), who analyzed and exposed the counterproductive, stigmatizing, subjugating or normalizing effects of classificatory thought and practice insofar as they confine individuals within categories. On the other hand, there is the knowledge that also feeds the governed in their relation to the norm, in the ways of internalizing it through the choices, decisions and practices that weave daily life.
A knowledge that thus produces an individual-subject, a thinking and acting subject. In other words, the subject emerges through the categories that define it. Biopolitics inevitably raises the question of the manner (or the art, Foucault would say) of governing (Gros et al., 2013: 6). In the era of digital technology, Big Data and algorithms, which have expanded globally since Foucault's death in 1984, is the notion of biopolitics still an effective tool for analyzing the modalities of control and government of populations? For some, including Pierre Dardot and Christian Laval (2016), the shift from the government of bodies, that is, a form of power exercised over bodies through individualized surveillance, to the government of oneself implies a new mode of governance. The one taking shape, they argue, rests less on the norms and controls of biopolitics than on the idea of the freedom of the subjects to be governed, who are steered through incentives and measures that leave them apparently free to act, by channelling, even manipulating, the interests of individuals and populations. This is what Foucault called the "conduct of conducts". As an example of such measures, Dardot and Laval cite the highway code, where freedom is the "choice" of route and destination, but according to the rules of the road (speed limits, licences, etc.). Others contend that the capacity of Facebook, Google and the other major internet players to accumulate masses of data outlines a new art of governance in which surveillance has given way to profiling. From a regime of normalization we pass to a regime of neutralization, argues Antoinette Rouvroy (2018: 63). And for Mondher Kilani, biopolitics now holds a "… multiplied power of surveillance and engulfment of individuals and consciences…" (Kilani, 2018: 292).
The task, then, is to study contemporary biopolitics where it constantly redefines itself (Fassin, 2006: 40). If the categories of biopolitics tend to objectify individuals, they also contain a source of re-subjectivation. For Foucault, the process of re-subjectivation is not reducible to the individual: shedding the objectifying marks of classificatory thought and practice is not done alone. The creation of new practices also comes from below, as witnessed by the impact of the feminist, ecological, homosexual, transgender and psychiatric-survivor movements…. This is why Foucault was interested in micro-practices (in prisons, psychiatric settings, etc.) as practices of freedom and sites of de-subjugation. Hence the importance for the humanities and social sciences of studying and exposing the new operating modes of biopolitics, but also the micro-practices of resistance, of freedom, the counter-powers created in the interstices of society. For "political life" is constituted by a permanent debate between the governed and the governing (Gros et al., 2013: 7).
APA, Harvard, Vancouver, ISO, and other styles
33

Roesel, R., D. Christoforidis, S. G. Popeskou, S. Faes, A. Vanoni, J. Galafassi, A. Ferrario Di Tor Vajana E Di Medea, and L. M. Piotet. "Ondansetron for Low Anterior Resection Syndrome (LARS): A Double Blind, Placebo Controlled, Cross-Over, Randomized Study." British Journal of Surgery 110, Supplement_5 (June 2023). http://dx.doi.org/10.1093/bjs/znad178.026.

Full text
Abstract:
Abstract Background Low Anterior Resection Syndrome (LARS) after rectal resection is common and debilitating. Current management strategies include behavioural and dietary modifications, physiotherapy, antidiarrheal drugs, enemas and neuromodulation, but results are not always satisfactory. Aims This study examines the efficacy and safety of Ondansetron, a serotonin receptor antagonist, in treating patients with LARS. Methods This is a randomized, multicentre, double-blinded, placebo-controlled, cross-over study. Patients with LARS (LARS score >20) no more than 2 years after rectal resection were randomized to receive either 4 weeks of Ondansetron followed by 4 weeks of placebo (O-P group) or 4 weeks of placebo followed by 4 weeks of Ondansetron (P-O group). The primary endpoint was LARS severity measured using the LARS score; secondary endpoints were incontinence (Vaizey score) and quality of life (IBS-QoL questionnaire). Scores and questionnaires were completed at baseline and after each 4-week treatment period. Results Of 46 randomized patients, 38 were included in the analysis. From baseline to the end of the first period, in the O-P group, the mean (SD) LARS score decreased by 25% (from 36.6 (5.6) to 27.3 (11.5)) and the proportion of patients with major LARS (score >30) fell from 15/17 (88%) to 7/17 (41%) (p=0.001). In the P-O group, the mean (SD) LARS score decreased by 12% (from 37 (4.8) to 32.6 (9.1)), and the proportion with major LARS went from 19/21 (90%) to 16/21 (76%). After cross-over, LARS scores deteriorated again in the O-P group receiving placebo but further improved in the P-O group receiving Ondansetron. Mean Vaizey and IBS-QoL scores followed a similar pattern. Conclusions Ondansetron, a safe and simple treatment that appears to improve both symptoms and quality of life, should be included in the treatment algorithms for LARS after low anterior resection for rectal cancer.
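The reported effect sizes can be reproduced from the abstract's own figures; this is a quick arithmetic check, not additional data:

```python
# LARS means as reported in the abstract; percent change = (baseline - week4) / baseline.
o_p_baseline, o_p_week4 = 36.6, 27.3   # Ondansetron-then-placebo arm, first period
p_o_baseline, p_o_week4 = 37.0, 32.6   # Placebo-then-Ondansetron arm, first period

o_p_drop_pct = round(100 * (o_p_baseline - o_p_week4) / o_p_baseline)  # reported as 25%
p_o_drop_pct = round(100 * (p_o_baseline - p_o_week4) / p_o_baseline)  # reported as 12%

# Proportion of major LARS (score > 30) in the O-P group before and after Ondansetron.
major_before, major_after = 15 / 17, 7 / 17  # reported as 88% and 41%
```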
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Man, Wei Cui, Xiaole Bai, Yating Fang, Hongbin Yao, Xingru Zhang, Fanzhang Lei, and Bofeng Zhu. "Comprehensive evaluations of individual discrimination, kinship analysis, genetic relationship exploration and biogeographic origin prediction in Chinese Dongxiang group by a 60-plex DIP panel." Hereditas 160, no. 1 (March 29, 2023). http://dx.doi.org/10.1186/s41065-023-00271-2.

Full text
Abstract:
Abstract Background The Dongxiang group is an important minority residing in Gansu province, northwest China; forensic detection systems with more loci need to be studied to improve the efficiency of forensic case investigation in this group. Methods A 60-plex system including 57 autosomal deletion/insertion polymorphisms (A-DIPs), 2 Y-chromosome DIPs (Y-DIPs) and the sex-determination locus (Amelogenin) was explored to evaluate the forensic efficiencies of individual discrimination, kinship analysis and biogeographic origin prediction in the Gansu Dongxiang group, based on the 60-plex genotype results of 233 unrelated Dongxiang individuals. The 60-plex genotype results of 4582 unrelated individuals from 33 reference populations on five continents were also collected to analyze the genetic background of the Dongxiang group and its genetic relationships with other continental populations. Results The system showed high individual discrimination power: the cumulative power of discrimination (CPD), cumulative power of exclusion (CPE) for trios and cumulative match probability (CMP) were 0.99999999999999999999997297, 0.999980 and 2.7029E−24, respectively. Based on the simulated family samples, the system could distinguish 98.12%, 93.78%, 82.18%, 62.35% and 39.32% of full-sibling pairs from unrelated pairs when the likelihood ratio (LR) limit was set to 1, 10, 100, 1000 and 10,000, respectively. Additionally, the genetic affinity and genetic background analyses of the Dongxiang group and the 33 reference populations showed that the Dongxiang group is genetically close to East Asian populations, with a particularly intimate relationship with Chinese Han populations. For biogeographic origin inference, different artificial intelligence algorithms showed different efficacies.
Among them, the random forest (RF) and extreme gradient boosting (XGBoost) models could accurately predict the biogeographic origins of 99.7% and 90.59% of three- and five-continent individuals, respectively. Conclusion This 60-plex system performed well for individual discrimination, kinship analysis and biogeographic origin prediction in the Dongxiang group and could serve as a powerful tool for case investigation.
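For independent loci, the cumulative match probability (CMP) reported above is the product of per-locus random match probabilities. A minimal sketch for biallelic DIP loci under Hardy-Weinberg assumptions follows; the allele frequencies are simulated placeholders, not the panel's published values:

```python
import numpy as np

def locus_match_prob(p):
    """Random match probability for a biallelic DIP locus with insertion
    frequency p: the chance two random individuals share a genotype,
    assuming Hardy-Weinberg genotype proportions."""
    g = np.array([p**2, 2 * p * (1 - p), (1 - p)**2])  # II, ID, DD frequencies
    return float(np.sum(g**2))

def cumulative_match_prob(freqs):
    """CMP across independent loci is the product of per-locus values."""
    return float(np.prod([locus_match_prob(p) for p in freqs]))

rng = np.random.default_rng(1)
freqs = rng.uniform(0.3, 0.7, size=57)  # hypothetical frequencies for 57 A-DIPs
cmp_57 = cumulative_match_prob(freqs)   # on the order of 1e-24 to 1e-21
```

Even with only moderately informative biallelic markers, 57 independent loci drive the CMP to the vanishingly small magnitudes quoted in the abstract.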
APA, Harvard, Vancouver, ISO, and other styles
35

Zhang, Lei, Yongquan Chen, Weijing Hu, Bo Wu, Linfeng Ye, Dongwen Wang, and Tao Bai. "A novel necroptosis-related long noncoding RNA model for predicting clinical features, immune characteristics, and therapeutic response in clear cell renal cell carcinoma." Frontiers in Immunology 14 (August 2, 2023). http://dx.doi.org/10.3389/fimmu.2023.1230267.

Full text
Abstract:
Background Necroptosis is an immune-related cell death pathway involved in the regulation of the tumor microenvironment (TME). Here, we aimed to explore the role of necroptosis in clear cell renal cell carcinoma (ccRCC) and construct a necroptosis-related lncRNA (NRL) model to assess its potential association with clinical characteristics and immune status. Methods Gene expression profiles and clinical data for ccRCC patients were obtained from the Cancer Genome Atlas (TCGA). Pearson's correlation, univariate Cox, and least absolute shrinkage and selection operator analyses were used to develop an NRL model. Kaplan–Meier (K-M) and receiver operating characteristic (ROC) curve analyses were used to determine the prognostic value of the NRL model, and clinical information was used to assess its diagnostic value. The TME, immune function, immune cell infiltration, and immune checkpoints associated with the NRL model risk score were studied using the ESTIMATE, GSEA, ssGSEA, and CIBERSORT algorithms. The immunophenoscore (IPS) and half-maximal inhibitory concentration (IC50) were used to compare the efficacies of immunotherapy and chemotherapy based on the NRL model. Finally, in vitro assays were performed to confirm the biological roles of NRLs. Results A total of 18 necroptosis-related genes and 285 NRLs in ccRCC were identified. A four-NRL model was constructed and showed good performance in the diagnosis and prognosis of ccRCC patients. The ESTIMATE scores, tumor mutation burden, and tumor stemness indices were significantly correlated with the NRL model risk score. Immune functions such as chemokine receptor and immune receptor activity differed between risk groups. The infiltration of immunosuppressive cells such as Tregs was higher in high-risk than in low-risk patients. High-risk patients were more sensitive to immunotherapy and to some chemotherapy drugs, such as sunitinib and temsirolimus.
Finally, the expression of the NRLs included in the model was verified, and knocking down these NRLs in tumor cells affected cell proliferation, migration, and invasion. Conclusion Necroptosis plays an important role in the progression of ccRCC. The NRL model we constructed can be used to predict the clinical characteristics and immune features of ccRCC patients.
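At prediction time, a signature model of this kind reduces to a weighted sum of the signature lncRNAs' expression followed by a median split into high- and low-risk groups. A minimal sketch with simulated data; the coefficients and expression matrix are hypothetical stand-ins, not the fitted values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Cox coefficients for a four-lncRNA signature and simulated
# (e.g. log-transformed) expression values for 200 patients.
coefs = np.array([0.8, -0.5, 0.6, 0.4])
expr = rng.normal(size=(200, 4))

# Risk score = sum over signature lncRNAs of coefficient * expression.
risk = expr @ coefs

# Patients above the median risk score form the high-risk group.
high_risk = risk > np.median(risk)
n_high, n_low = int(high_risk.sum()), int((~high_risk).sum())
```

The K-M and ROC analyses in the abstract would then compare survival and classification performance between these two groups.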
APA, Harvard, Vancouver, ISO, and other styles
36

Lu, Xuefang, Weiyin Vivian Liu, Yuchen Yan, Wenbing Yang, Changsheng Liu, Wei Gong, Guangnan Quan, Jiawei Jiang, Lei Yuan, and Yunfei Zha. "Evaluation of deep learning-based reconstruction late gadolinium enhancement images for identifying patients with clinically unrecognized myocardial infarction." BMC Medical Imaging 24, no. 1 (May 31, 2024). http://dx.doi.org/10.1186/s12880-024-01308-2.

Full text
Abstract:
Abstract Background The presence of infarction in patients with unrecognized myocardial infarction (UMI) is a critical feature in predicting adverse cardiac events. This study aimed to compare the detection rate of UMI using conventional and deep learning reconstruction (DLR)-based late gadolinium enhancement (LGEO and LGEDL, respectively) and to evaluate optimal quantification parameters to enhance the diagnosis and management of patients with suspected UMI. Methods This prospective study included 98 patients (68 men; mean age: 55.8 ± 8.1 years) with suspected UMI treated at our hospital from April 2022 to August 2023. LGEO and LGEDL images were obtained using conventional and commercially available inline DLR algorithms. The myocardial signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and percentage of enhanced area (Parea) were analyzed using the signal threshold versus reference mean (STRM) approach, which compares the signal intensity (SI) within areas of interest with the mean SI of normal regions. Analysis was performed using standard deviation (SD) thresholds (2SD-5SD) and the full width at half maximum (FWHM) method. The diagnostic efficacies based on LGEDL and LGEO images were calculated. Results The SNR and CNR of LGEDL were twice those of LGEO (P < 0.05). Parea was higher for LGEDL than for LGEO with the SD-threshold methods (P < 0.05), but no intergroup difference was found with the FWHM method (P > 0.05). Parea also differed between quantification methods, except between 2SD and 3SD and between 4SD/5SD and FWHM (P < 0.05). Receiver operating characteristic curve analysis revealed that each SD method had good diagnostic efficacy for detecting UMI, with Parea on LGEDL at the 5SD threshold performing best (P < 0.05). Overall, the LGEDL images had better image quality.
Strong diagnostic efficacy for UMI identification was achieved with STRM thresholds of ≥ 4SD for LGEDL and ≥ 3SD for LGEO. Conclusions STRM threshold selection for LGEDL magnetic resonance images helps improve diagnostic accuracy and clinical decision-making for patients with UMI, supporting better cardiovascular care.
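The two quantification approaches compared above can be sketched on synthetic data. The thresholding follows the STRM n-SD and FWHM definitions; the image size, patch location, and noise levels are illustrative assumptions, not acquisition parameters from the study:

```python
import numpy as np

def enhanced_area_pct(img, remote_mask, n_sd):
    """STRM n-SD method: voxels brighter than the remote-myocardium mean
    plus n standard deviations count as enhanced; returns the enhanced
    fraction of the image in percent."""
    mu, sd = img[remote_mask].mean(), img[remote_mask].std()
    return 100.0 * np.mean(img > mu + n_sd * sd)

def enhanced_area_fwhm(img):
    """FWHM method: threshold at half of the maximum signal intensity."""
    return 100.0 * np.mean(img > 0.5 * img.max())

rng = np.random.default_rng(2)
img = rng.normal(100, 10, size=(64, 64))   # background at remote-myocardium level
img[:16, :16] += 200                       # synthetic bright (infarct-like) patch

remote = np.zeros_like(img, dtype=bool)    # reference region: background only
remote[32:, 32:] = True

p5sd = enhanced_area_pct(img, remote, n_sd=5)  # ~6.25% (the patch's true share)
pfwhm = enhanced_area_fwhm(img)                # ~6.25% on this high-contrast example
```

On real LGE images the two methods diverge with noise and contrast, which is why the abstract finds the optimal SD threshold differs between LGEDL and LGEO.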
APA, Harvard, Vancouver, ISO, and other styles
