
Journal articles on the topic "Algorithme de tri"


Below are the top 50 journal articles for research on the topic "Algorithme de tri" (sorting algorithms).


The full text of each scholarly publication can be downloaded in PDF format, and its abstract is shown where this information is included in the metadata.

Browse journal articles from a range of disciplines and organize your bibliography correctly.

1. El Hage, W. "Vers une mise à jour des recommandations sur la prise en charge pharmacologique de la dépression résistante." European Psychiatry 28, S2 (November 2013): 42. http://dx.doi.org/10.1016/j.eurpsy.2013.09.108.

Abstract:
The French national network of care specifically focused on treatment-resistant depression presents a decision algorithm for the appropriate pharmacological management of non-bipolar major depression. This algorithm takes into account data from the international literature, as well as the various national and international guidelines. The goal is to provide French clinicians with a practical therapeutic decision algorithm that reflects the scientific evidence and the compounds currently available in France. Beyond the initial choice of an antidepressant, these recommendations consider, according to the level of resistance, switching antidepressants, the use of antidepressant combinations including first- and second-generation antidepressants (tri- or tetracyclics, mirtazapine, mianserin), as well as the various possible augmentation strategies (lithium, carbamazepine, lamotrigine, divalproate, thyroid hormones, buspirone, bupropion, pindolol, atypical antipsychotics, stimulants, etc.). These prescribing recommendations are intended for broad clinical use, with the aim of facilitating and improving the therapeutic management of treatment-resistant depression.
2. Lin, H. X. "Graph Transformation and Designing Parallel Sparse Matrix Algorithms beyond Data Dependence Analysis." Scientific Programming 12, no. 2 (2004): 91–100. http://dx.doi.org/10.1155/2004/169467.

Abstract:
Algorithms are often parallelized based on data dependence analysis, either manually or by means of parallel compilers. Some vector/matrix computations, such as matrix-vector products with simple data dependence structures (data parallelism), can be easily parallelized. For problems with more complicated data dependence structures, parallelization is less straightforward. The data dependence graph is a powerful means for designing and analyzing parallel algorithms. However, for sparse matrix computations, parallelization based solely on exploiting the existing parallelism in an algorithm does not always give satisfactory results. For example, the conventional Gaussian elimination algorithm for the solution of a tri-diagonal system is inherently sequential, so algorithms designed specifically for parallel computation have to be devised. After briefly reviewing different parallelization approaches, a powerful graph formalism for designing parallel algorithms is introduced. This formalism is discussed using a tri-diagonal system as an example. Its application to general matrix computations is also discussed. Its power in designing parallel algorithms beyond the ability of data dependence analysis is shown by means of a new algorithm called ACER (Alternating Cyclic Elimination and Reduction algorithm).
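The tri-diagonal example above is easiest to see in code. The following is a minimal sketch of classical cyclic (odd-even) reduction, the family of elimination schemes that ACER builds on; it is not the ACER algorithm itself, and the recursive formulation and function name are my own. Each level's eliminations are mutually independent, which is exactly the parallelism the abstract refers to.

    import numpy as np

    def cyclic_reduction(a, b, c, d):
        """Solve a tridiagonal system by cyclic (odd-even) reduction.
        a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal
        (c[-1] unused), d: right-hand side. Returns the solution x."""
        n = len(b)
        if n == 1:
            return d / b
        # Eliminate the odd-indexed unknowns; keep the equations at even indices.
        idx = np.arange(0, n, 2)
        m = len(idx)
        na, nb, nc, nd = np.zeros(m), np.zeros(m), np.zeros(m), np.zeros(m)
        for j, i in enumerate(idx):
            alpha = -a[i] / b[i - 1] if i > 0 else 0.0
            beta = -c[i] / b[i + 1] if i < n - 1 else 0.0
            na[j] = alpha * a[i - 1] if i > 1 else 0.0
            nb[j] = (b[i]
                     + (alpha * c[i - 1] if i > 0 else 0.0)
                     + (beta * a[i + 1] if i < n - 1 else 0.0))
            nc[j] = beta * c[i + 1] if i < n - 2 else 0.0
            nd[j] = (d[i]
                     + (alpha * d[i - 1] if i > 0 else 0.0)
                     + (beta * d[i + 1] if i < n - 1 else 0.0))
        x = np.zeros(n)
        x[idx] = cyclic_reduction(na, nb, nc, nd)  # half-size system of the same form
        for i in range(1, n, 2):  # back-substitute the eliminated odd unknowns
            right = c[i] * x[i + 1] if i < n - 1 else 0.0
            x[i] = (d[i] - a[i] * x[i - 1] - right) / b[i]
        return x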
3. Wang, Li Guo, Yue Shuang Yang, and Ting Ting Lu. "Semi-Supervised Classification for Hyperspectral Image Based on Tri-Training." Applied Mechanics and Materials 687-691 (November 2014): 3644–47. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.3644.

Abstract:
Hyperspectral image classification is difficult due to its high-dimensional features and limited training samples. Tri-training is a widely used semi-supervised classification method that addresses the lack of labeled examples. In this paper, a novel semi-supervised learning algorithm based on the tri-training method is proposed. The proposed algorithm combines the margin sampling (MS) technique and the differential evolution (DE) algorithm to select the most informative samples and perturb them randomly. The samples thus obtained, which fit the labeled data distribution and introduce diversity into the multiple classifiers, are then added to the training set to train the base classifiers for tri-training. The proposed algorithm is experimentally validated on real hyperspectral data sets, indicating that the combination of MS and DE can significantly reduce the need for labeled samples while achieving high accuracy compared with state-of-the-art algorithms.
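Tri-training recurs throughout this list (entries 4, 6, 8, 19, 26 and 38 all build on it), so a minimal sketch of the base procedure may be useful. The sketch below follows the classic scheme of Zhou and Li (2005): three classifiers trained on bootstrap samples pseudo-label the unlabeled points on which the other two agree. It omits the error-rate safeguards of the full algorithm, and the fixed rounds stopping rule and function names are my own simplifications.

    import numpy as np
    from sklearn.base import clone
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.utils import resample

    def tri_training(X_l, y_l, X_u, base=DecisionTreeClassifier(), rounds=5):
        """Three classifiers teach each other on unlabeled data where the
        other two agree. Labels are assumed to be small non-negative ints."""
        clfs = []
        for seed in range(3):  # bootstrap-sample the labeled set for diversity
            Xb, yb = resample(X_l, y_l, random_state=seed)
            clfs.append(clone(base).fit(Xb, yb))
        for _ in range(rounds):
            for i in range(3):
                j, k = [m for m in range(3) if m != i]
                pj, pk = clfs[j].predict(X_u), clfs[k].predict(X_u)
                agree = pj == pk  # pseudo-label only where the two peers agree
                if not agree.any():
                    continue
                X_aug = np.vstack([X_l, X_u[agree]])
                y_aug = np.concatenate([y_l, pj[agree]])
                clfs[i] = clone(base).fit(X_aug, y_aug)
        return clfs

    def vote(clfs, X):  # final prediction: majority vote of the three classifiers
        votes = np.stack([c.predict(X) for c in clfs])
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)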
4. Yuan, Yali, Liuwei Huo, Yachao Yuan, and Zhixiao Wang. "Semi-supervised tri-Adaboost algorithm for network intrusion detection." International Journal of Distributed Sensor Networks 15, no. 6 (June 2019): 155014771984605. http://dx.doi.org/10.1177/1550147719846052.

Abstract:
Network intrusion detection is a relatively mature research topic, but one that remains challenging, particularly as technologies and the threat landscape evolve. Here, a semi-supervised tri-Adaboost (STA) algorithm is proposed. In the algorithm, three different Adaboost algorithms are used as the weak classifiers (for both continuous and categorical data), constituting the decision stumps in the tri-training method. In addition, the chi-square method is used to reduce the feature dimension and improve computational efficiency. We then conduct extensive numerical studies using different training and testing samples from the KDDcup99 dataset, and the findings demonstrate that (1) high accuracy can be obtained using a training dataset that consists of a small number of labeled samples and a large number of unlabeled samples; (2) the proposed algorithm is reproducible and consistent over different runs; (3) the proposed algorithm outperforms other existing learning algorithms, even with only a small amount of labeled data in the training phase; and (4) the proposed algorithm has a short execution time and a low false positive rate, while providing a desirable detection rate.
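The two ingredients named above, chi-square feature reduction and boosted decision stumps, compose naturally in scikit-learn. The sketch below is only an illustration under my own assumptions (scikit-learn >= 1.2 for the estimator argument; non-negative feature values, as the chi-square test requires), not the authors' STA pipeline, which additionally wraps three such learners in tri-training.

    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.tree import DecisionTreeClassifier

    def chi2_reduced_adaboost(X, y, k=10):
        """Chi-square feature reduction followed by AdaBoost over decision
        stumps (depth-1 trees). X must be non-negative for the chi2 test."""
        selector = SelectKBest(chi2, k=k).fit(X, y)   # keep the k most dependent features
        model = AdaBoostClassifier(
            estimator=DecisionTreeClassifier(max_depth=1),  # decision stumps
            n_estimators=50,
        ).fit(selector.transform(X), y)
        return selector, model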
5. You, Qi, Jun Sun, Feng Pan, Vasile Palade, and Bilal Ahmad. "DMO-QPSO: A Multi-Objective Quantum-Behaved Particle Swarm Optimization Algorithm Based on Decomposition with Diversity Control." Mathematics 9, no. 16 (August 16, 2021): 1959. http://dx.doi.org/10.3390/math9161959.

Abstract:
The decomposition-based multi-objective evolutionary algorithm (MOEA/D) has shown remarkable effectiveness in solving multi-objective problems (MOPs). In this paper, we integrate the quantum-behaved particle swarm optimization (QPSO) algorithm with the MOEA/D framework so that QPSO can solve MOPs effectively, with the advantages of QPSO fully used. We also employ a diversity-controlling mechanism to avoid premature convergence, especially at the later stage of the search process, and thus further improve the performance of our proposed algorithm. In addition, we introduce a number of nondominated solutions to generate the global best for guiding other particles in the swarm. Experiments are conducted to compare the proposed algorithm, DMO-QPSO, with four multi-objective particle swarm optimization algorithms and one multi-objective evolutionary algorithm on 15 test functions, including both bi-objective and tri-objective problems. The results show that the performance of the proposed DMO-QPSO is better than that of the other five algorithms in solving most of these test problems. Moreover, we further study the impact of two different decomposition approaches, i.e., the penalty-based boundary intersection (PBI) and Tchebycheff (TCH) approaches, as well as of the polynomial mutation operator, on the algorithmic performance of DMO-QPSO.
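Of the two decomposition approaches mentioned at the end, the Tchebycheff one is compact enough to show inline. The sketch below is a generic illustration of how a weight vector and an ideal point turn a multi-objective comparison into a scalar one in MOEA/D-style frameworks; the variable names and toy values are mine, not the paper's.

    import numpy as np

    def tchebycheff(f, w, z_star):
        """Tchebycheff scalarization g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.
        Each weight vector w defines one scalar subproblem of the MOP."""
        return float(np.max(w * np.abs(f - z_star)))

    z_star = np.array([0.0, 0.0])   # ideal point: best value seen per objective
    w = np.array([0.7, 0.3])        # one subproblem's weight vector
    f_a, f_b = np.array([0.2, 0.9]), np.array([0.5, 0.4])
    # The particle with the smaller scalarized value is better for this subproblem.
    winner = "a" if tchebycheff(f_a, w, z_star) < tchebycheff(f_b, w, z_star) else "b"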
6. Zhao, Ya Hui, Hong Li Wang, and Rong Yi Cui. "Abnormal Voice Detection Algorithm Based on Semi-Supervised Co-Training Algorithm." Advanced Materials Research 461 (February 2012): 117–22. http://dx.doi.org/10.4028/www.scientific.net/amr.461.117.

Abstract:
In this paper, the AR-Tri-training algorithm is proposed and applied to abnormal voice detection, and voice detection software is designed using mixed Matlab/VC programming. Firstly, training samples are collected and the features of each sample are extracted, including the centroid, spectral entropy, wavelet and MFCC features. Secondly, an assistant learning strategy is proposed, and the AR-Tri-training algorithm is designed by combining it with the rich-information strategy. Finally, classifiers are trained using the AR-Tri-training algorithm, and the integrated classifier is applied to voice detection. As can be drawn from the experimental results, AR-Tri-training not only removes mislabeled examples during the training process, but also takes full advantage of the unlabeled examples and the wrongly learned examples on the validation set.
7. Gumaida, Bassam, Chang Liu, and Juan Luo. "GTMA: Localization in Wireless Sensor Network Based a Group of Tri-Mobile Anchors." Journal of Computational and Theoretical Nanoscience 14, no. 1 (January 1, 2017): 847–57. http://dx.doi.org/10.1166/jctn.2017.6287.

Abstract:
Positioning sensors with high accuracy is a cornerstone of Wireless Sensor Networks (WSNs). In outdoor environments, which can be hostile and unreachable, the mobile anchor method is considered an appropriate solution for locating unknown nodes. In this case, the key problems are the trajectory mapping and the number of mobile anchors needed. These mobile anchors should travel along the trajectory in order to determine the positions of unknown nodes with minimum localization error. In this paper, a localization algorithm named Group of Tri-Mobile Anchors (GTMA) is proposed, which is based on a group of tri-mobile anchors with an adjustable square trajectory in the deployment area. The position of a target node is calculated by trilateration. Simulation results show that the performance of the proposed GTMA algorithm is better than that of other algorithms that adopt a single mobile anchor, e.g., the HILBERT, LMAT and SPIRAL algorithms. This is clearly evident in both the localization accuracy and the trajectory planning.
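Trilateration, the positioning step named above, reduces to a small linear least-squares problem once the first range equation is subtracted from the others. The following is a generic 2D sketch with invented anchor positions, not code from the paper.

    import numpy as np

    def trilaterate(anchors, dists):
        """Least-squares trilateration: subtracting the first range equation
        from the others linearizes ||x - a_i||^2 = d_i^2 into A x = b."""
        a0, d0 = anchors[0], dists[0]
        A = 2.0 * (anchors[1:] - a0)
        b = (d0**2 - dists[1:]**2
             + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

    # Three anchor positions (e.g., a tri-mobile anchor group) locate a node in 2D.
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    node = np.array([3.0, 4.0])
    dists = np.linalg.norm(anchors - node, axis=1)
    print(trilaterate(anchors, dists))  # ~[3. 4.]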
8. Cui, Long Jie, Hong Li Wang, and Rong Yi Cui. "AR-Tri-Training: Tri-Training with Assistant Strategy." Applied Mechanics and Materials 513-517 (February 2014): 1840–44. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.1840.

Abstract:
The classification performance of the classifier is weakened because noise samples are introduced by the use of unlabeled samples in Tri-training. In this paper, a new Tri-training-style algorithm named AR-Tri-training (Tri-training with assistant and rich strategy) is proposed. Firstly, an assistant learning strategy is proposed. Then, a supporting learner is designed by combining the assistant learning strategy with the rich-information strategy. The number of mislabeled samples produced in the iterations, in which the three classifiers label data for one another, is reduced by use of the supporting learner; moreover, the unlabeled samples and the misclassified samples of the validation set can be fully used. The proposed algorithm is applied to voice recognition. The experimental results show that the AR-Tri-training algorithm can compensate for the shortcomings of the Tri-training algorithm and further improve the test accuracy.
9. Wang, Xin, Yuan Shan Lin, Di Wu, D. W. Yang, X. K. Wang, and Shun De Gao. "An Algorithm for Distance Computing Based on Tri-Mesh." Materials Science Forum 626-627 (August 2009): 669–74. http://dx.doi.org/10.4028/www.scientific.net/msf.626-627.669.

Abstract:
Along with the development of simulation technology, three-dimensional simulation systems are being used in more and more domains. The distance between objects is one of the important measures used to estimate the relationship between objects' positions and to weigh the attributes of their own movement. Distance computing plays an important role in 3D simulation systems. In this paper, a two-phase (broad-phase and narrow-phase) algorithm for computing distance is presented. Using the algorithm, the minimum distance between objects is obtained, and the feasibility and accuracy of the algorithm are demonstrated by an example.
10. Wang, Ende, Jinlei Jiao, Jingchao Yang, Dongyi Liang, and Jiandong Tian. "Tri-SIFT: A Triangulation-Based Detection and Matching Algorithm for Fish-Eye Images." Information 9, no. 12 (November 26, 2018): 299. http://dx.doi.org/10.3390/info9120299.

Abstract:
Keypoint matching is of fundamental importance in computer vision applications. Fish-eye lenses are convenient in applications that involve a very wide angle of view. However, their use has been limited by the lack of an effective matching algorithm. The Scale Invariant Feature Transform (SIFT) algorithm is an important technique in computer vision for detecting and describing local features in images. We therefore present the Tri-SIFT algorithm, which applies a set of modifications to the SIFT algorithm that improve the descriptor accuracy and matching performance for fish-eye images, while preserving its original robustness to scale and rotation. After the keypoint detection of the SIFT algorithm is completed, the points in and around the keypoints are back-projected to a unit sphere following a fish-eye camera model. To simplify the calculations once the image is on the sphere, the descriptor is based on a modification of the Gradient Location and Orientation Histogram (GLOH). In addition, to improve the invariance to scale and rotation in fish-eye images, the gradient magnitudes are replaced by the area of the surface, and the orientation is calculated on the sphere. Extensive experiments demonstrate that the performance of our modified algorithm exceeds that of SIFT and other related algorithms on fish-eye images.
11. Arini, Arini, Luh Kesuma Wardhani, and Dimas Octaviano. "Perbandingan Seleksi Fitur Term Frequency & Tri-Gram Character Menggunakan Algoritma Naïve Bayes Classifier (NBC) Pada Tweet Hashtag #2019gantipresiden." KILAT 9, no. 1 (April 25, 2020): 103–14. http://dx.doi.org/10.33322/kilat.v9i1.878.

Abstract:
Ahead of the 2019 election year, many mass campaigns were conducted through social media networks, one of them on Twitter. One very popular online campaign was the one under the hashtag #2019GantiPresiden. Sentiment analysis of the #2019GantiPresiden hashtag requires a classifier and a robust feature selection method that yields high accuracy. One such classifier is the Naïve Bayes Classifier (NBC), combined with Character Tri-Gram or Term-Frequency feature selection, which previous research has shown to achieve fairly high accuracy. The purpose of this study was to implement the Naïve Bayes Classifier (NBC) with each feature selection method, compare the two, and obtain accuracy results for both. The authors used observation to collect data and ran simulations. Using 1,000 tweets from the hashtag #2019GantiPresiden collected on 15 September 2018, the authors split the data into 950 training tweets and 50 test tweets, with labeling performed by a lexicon-based sentiment method. The study showed an accuracy of 76% for the Naïve Bayes Classifier (NBC) with Character Tri-Gram feature selection and 74% with Term-Frequency, indicating that Character Tri-Gram feature selection performs better than Term-Frequency.
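A character tri-gram Naïve Bayes setup of the kind compared above takes only a few lines in scikit-learn. The sketch below is a toy illustration with invented two-tweet data; it does not reproduce the study's corpus or its lexicon-based labeling step.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Character tri-grams ("gan", "ant", "nti", ...) instead of whole terms.
    model = make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(3, 3)),
        MultinomialNB(),
    )
    tweets = ["ganti presiden sekarang", "tetap dukung presiden"]  # toy data
    labels = ["negative", "positive"]
    model.fit(tweets, labels)
    print(model.predict(["presiden harus ganti"]))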
12. Lim, Myeong Jun, Jin Ho Cho, Young Sun Cho, and Tae Seong Kim. "Directional Human Fall Recognition Using a Pair of Accelerometer and Gyroscope Sensors." Applied Mechanics and Materials 135-136 (October 2011): 449–54. http://dx.doi.org/10.4028/www.scientific.net/amm.135-136.449.

Abstract:
Human falls in the elderly population are one of the major causes of injury or bone fracture: a fall can cause various injuries (e.g., fracture, concussion, and joint inflammation) and, in severe cases, may even lead to death. To detect human falls, various fall detection algorithms have been devised. Most fall detection algorithms rely on signals from a single accelerometer or gyroscope and use a threshold-based method to detect the fall. However, these algorithms need careful adjustment of a threshold for each subject and cannot detect the direction of falls. In this study, we propose a novel fall recognition algorithm using a pair of a tri-axial accelerometer and a tri-axial gyroscope. Our fall recognition algorithm utilizes a set of augmented features including autoregressive (AR) modeling coefficients of the signals, the signal magnitude area (SMA), and the gradients of angles from the sensors. After Linear Discriminant Analysis (LDA) of the augmented features, an Artificial Neural Network (ANN) is utilized to recognize four directional human falls: namely, forward fall, backward fall, right-side fall, and left-side fall. Our recognition results show a mean recognition rate of 95.8%. Our proposed fall recognition technique should be useful in the investigation of fall-related injuries and possibly in the prevention of falls for the elderly.
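Of the features listed, the signal magnitude area is the simplest to state precisely. One common convention is the time-normalized integral of the summed absolute accelerations, approximated below by a sample mean; the paper's exact windowing and normalization may differ.

    import numpy as np

    def signal_magnitude_area(ax, ay, az):
        """Signal Magnitude Area of a window of tri-axial accelerometer samples:
        the time-normalized integral of |ax| + |ay| + |az|, approximated here by
        the sample mean (one common convention; normalizations vary)."""
        return float(np.mean(np.abs(ax) + np.abs(ay) + np.abs(az)))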
13. Chen, Xinying, and Yihui Qiu. "Research on an Improved Adaptive Image Enhancement Algorithm." Journal of Physics: Conference Series 2560, no. 1 (August 1, 2023): 012019. http://dx.doi.org/10.1088/1742-6596/2560/1/012019.

Abstract:
This paper addresses the image degradation problems that occur during industrial activities, and the shortcomings of traditional fuzzy enhancement algorithms in dealing with them, namely high algorithmic complexity and poor enhancement results. Based on the tuned tri-threshold fuzzy intensification algorithm, this paper proposes an adaptive image enhancement algorithm, which is verified in simulation on three kinds of real-world degraded images (i.e., images from dusty, night and foggy environments), comparing the visual effects of the enhanced images and making an objective quantitative evaluation. The experimental results show that the proposed algorithm can not only effectively improve the contrast of the image, keep the detailed information of the image, and give the image a more natural visual effect, but also improve the quality evaluation indices of the image, with the variance value improved by more than 5% compared with the other algorithms.
14. Ge, Shaodi, Hongjun Li, and Liuhong Luo. "Constrained Dual Graph Regularized Orthogonal Nonnegative Matrix Tri-Factorization for Co-Clustering." Mathematical Problems in Engineering 2019 (December 26, 2019): 1–17. http://dx.doi.org/10.1155/2019/7565640.

Abstract:
Co-clustering approaches for grouping data points and features have recently been receiving extensive attention. In this paper, we propose a constrained dual graph regularized orthogonal nonnegative matrix tri-factorization (CDONMTF) algorithm to solve co-clustering problems. The new method clearly improves clustering performance by employing hard constraints to retain the prior label information of samples, establishing two nearest-neighbor graphs to encode the geometric structure of the data manifold and the feature manifold, and combining these with bi-orthogonal constraints as well. In addition, we have also derived the iterative optimization scheme of CDONMTF and proved its convergence. Clustering experiments on 5 UCI machine-learning data sets and 7 image benchmark data sets show that the performance of the proposed algorithm is superior to that of some existing clustering algorithms.
15. Tran, Mickaël, Denis Maurel, and Agata Savary. "Implantation d'un tri lexical respectant la particularité des noms propres." Lingvisticæ Investigationes. International Journal of Linguistics and Language Resources 28, no. 2 (March 28, 2006): 303–23. http://dx.doi.org/10.1075/li.28.2.07tra.

Abstract:
The computational treatment of multilingual language resources (as in the Prolex project, cf. Grass et al. 2002) should respect the lexical conventions admitted by each language's native speakers. These conventions may vary from one language to another, as in the case of the alphabetical sorting algorithm. This algorithm must take a number of universal as well as language-dependent particularities into account, such as the distinction between upper- and lowercase letters, the sorting bi-directionality (from left to right or conversely), the role of diacritics (resulting either in variants of a letter, as é, è and ê in French, or in independent letters, as å in Danish or ą in Polish), the role of punctuation characters, multi-character letters (as ch or ll in Spanish, or dzs in Hungarian) and ligatures (as œ in French, or ß in German). We describe a Unicode-based sorting algorithm inspired by (LaBonté 1998) for proper names. In the particular case of the sorting of proper names, three additional points are to be taken into account: the presence of numerical values (Arabic or Roman numerals), the variation in the spelling of ligatures, and permutation in the sorting of multi-word units. Apart from the word list to be sorted, its input is a language-dependent code table which defines the language's alphabet, the number of the algorithm's passes, the direction of each pass, and the order of letters or groups of letters in each pass. The algorithm is implemented as a finite-state transducer which allows a fast assignment of sort keys to words. The algorithm proved correct for European languages such as English, French, and Polish, as well as for Thai. It outperforms other sorting algorithms, such as those implemented in the Intex (Silberztein 1993) and Unitex (Paumier 2003) systems.
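The multi-pass idea is easy to illustrate. The sketch below builds a two-pass sort key for a small French fragment: base letters decide the primary order, and accented spellings only break ties. It is a deliberately simplified illustration (a tiny hand-written code table, forward passes only), not the paper's transducer-based implementation with its bidirectional passes and proper-name handling.

    # Primary pass: base letters in the language's order; secondary pass:
    # accents break ties (so "cote" < "coté" < "côte" < "côté").
    ORDER = {ch: i for i, ch in enumerate("abcdefghijklmnopqrstuvwxyz")}
    BASE = str.maketrans("éèêëàâîïôùûç", "eeeeaaiiouuc")

    def sort_key(word):
        w = word.lower()
        primary = tuple(ORDER.get(c, len(ORDER)) for c in w.translate(BASE))
        secondary = tuple(w)  # original letters decide between accent variants
        return (primary, secondary)

    words = ["côte", "coté", "cote", "côté"]
    print(sorted(words, key=sort_key))  # ['cote', 'coté', 'côte', 'côté']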
16. Rathee, Manisha, and T. V. Vijay Kumar. "DNA Fragment Assembly Using Multi-Objective Genetic Algorithms." International Journal of Applied Evolutionary Computation 5, no. 3 (July 2014): 84–108. http://dx.doi.org/10.4018/ijaec.2014070105.

Abstract:
DNA Fragment Assembly Problem (FAP) is concerned with the reconstruction of the target DNA, using the several hundreds (or thousands) of sequenced fragments, by identifying the right order and orientation of each fragment in the layout. Several algorithms have been proposed for solving FAP. Most of these have solely dwelt on the single objective of maximizing the sum of the overlaps between adjacent fragments in order to optimize the fragment layout. This paper aims to formulate this FAP as a bi-objective optimization problem, with the two objectives being the maximization of the overlap between the adjacent fragments and the minimization of the overlap between the distant fragments. Moreover, since there is greater desirability for having lesser number of contigs, FAP becomes a tri-objective optimization problem where the minimization of the number of contigs becomes the additional objective. These problems were solved using the multi-objective genetic algorithm NSGA-II. The experimental results show that the NSGA-II-based Bi-Objective Fragment Assembly Algorithm (BOFAA) and the Tri-Objective Fragment Assembly Algorithm (TOFAA) are able to produce better quality layouts than those generated by the GA-based Single Objective Fragment Assembly Algorithm (SOFAA). Further, the layouts produced by TOFAA are also comparatively better than those produced using BOFAA.
17. Mandal, Moumita, Pawan Kumar Singh, Muhammad Fazal Ijaz, Jana Shafi, and Ram Sarkar. "A Tri-Stage Wrapper-Filter Feature Selection Framework for Disease Classification." Sensors 21, no. 16 (August 18, 2021): 5571. http://dx.doi.org/10.3390/s21165571.

Abstract:
In machine learning and data science, feature selection is considered a crucial step of data preprocessing. When we directly apply raw data for classification or clustering purposes, we sometimes observe that the learning algorithms do not perform well. One possible reason for this is the presence of redundant, noisy, and non-informative features or attributes in the datasets. Hence, feature selection methods are used to identify the subset of relevant features that can maximize the model performance. Moreover, due to the reduction in feature dimension, both the training time and the storage required by the model can be reduced as well. In this paper, we present a tri-stage wrapper-filter-based feature selection framework for the purpose of medical report-based disease detection. In the first stage, an ensemble was formed by four filter methods (Mutual Information, ReliefF, Chi Square, and Xvariance), and then each feature from the union set was assessed by three classification algorithms (support vector machine, naïve Bayes, and k-nearest neighbors), and an average accuracy was calculated. The features with higher accuracy were selected to obtain a preliminary subset of optimal features. In the second stage, Pearson correlation was used to discard highly correlated features. In these two stages, the XGBoost classification algorithm was applied to obtain the most contributing features that, in turn, provide the best optimal subset. Then, in the final stage, we fed the obtained feature subset to a meta-heuristic algorithm, called the whale optimization algorithm, in order to further reduce the feature set and to achieve higher accuracy. We evaluated the proposed feature selection framework on four publicly available disease datasets taken from the UCI machine learning repository, namely, arrhythmia, leukemia, DLBCL, and prostate cancer. Our results confirm that the proposed method can perform better than many state-of-the-art methods and can detect important features as well. Fewer features mean fewer medical tests for a correct diagnosis, thus saving both time and cost.
18. Trojan, Flavio, Pablo Isaias Rojas Fernandez, Marcio Guerreiro, Lucas Biuk, Mohamed A. Mohamed, Pierluigi Siano, Roberto F. Dias Filho, Manoel H. N. Marinho, and Hugo Valadares Siqueira. "Class Thresholds Pre-Definition by Clustering Techniques for Applications of ELECTRE TRI Method." Energies 16, no. 4 (February 15, 2023): 1936. http://dx.doi.org/10.3390/en16041936.

Abstract:
The sorting problem in Multi-criteria Decision Analysis (MCDA) has been used to address issues whose solutions involve the allocation of alternatives to classes. Traditional multi-criteria methods are commonly used for this task, such as ELECTRE TRI, AHP-Sort, UTADIS, PROMETHEE, GAYA, etc. When using these approaches to perform the sorting procedure, decision-makers define profiles (thresholds) for the classes against which the alternatives are compared. However, most such applications rely on subjective input, i.e., the decision-makers' expertise, which can be imprecise. To fill that gap, in this paper, a comparative analysis using the multi-criteria method ELECTRE TRI and clustering algorithms is performed to obtain an auxiliary procedure to define initial thresholds for the ELECTRE TRI method. In the proposed methodology, the K-Means, K-Medoids and Fuzzy C-Means algorithms, as well as bio-inspired metaheuristics for clustering such as PSO, Differential Evolution, and the Genetic algorithm, are tested on a dataset from a fundamental sorting problem in Water Distribution Networks. The computational performances indicate that Fuzzy C-Means was the most suitable for achieving the desired response. The practical contribution is a relevant procedure to provide an initial view of the boundaries in multi-criteria sorting methods, based on datasets from specific applications. Theoretically, it is a new development to pre-define the initial class limits for the sorting problem in the multi-criteria approach.
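As a rough illustration of how cluster structure can seed class boundaries, the sketch below clusters alternatives with K-Means and takes midpoints between ordered centroids as candidate limit profiles. The midpoint rule is my own simplification for illustration; the paper's procedure for turning clusters into ELECTRE TRI thresholds is more elaborate.

    import numpy as np
    from sklearn.cluster import KMeans

    # Alternatives scored on 3 criteria; cluster them into 3 classes and use
    # the midpoints between sorted centroids as initial class boundaries.
    rng = np.random.default_rng(0)
    X = rng.random((60, 3))
    centroids = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X).cluster_centers_
    centroids = centroids[np.argsort(centroids.sum(axis=1))]  # order classes worst -> best
    thresholds = (centroids[:-1] + centroids[1:]) / 2  # one boundary profile per class limit
    print(thresholds)  # candidate limit profiles b1, b2 for ELECTRE TRI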
19. Wang, Jundi, Xing Wang, Yuanrong Tian, Zhenkun Chen, and You Chen. "A Radar Emitter Recognition Mechanism Based on IFS-Tri-Training Classification Processing." Electronics 11, no. 7 (March 29, 2022): 1078. http://dx.doi.org/10.3390/electronics11071078.

Abstract:
The Radar Warning Receiver (RWR) is one of the basic pieces of combat equipment necessary for the electromagnetic situational awareness of aircraft in modern operations, and it requires both rapid response and accuracy. This paper proposes a data processing flow for radar warning devices based on a hierarchical processing mechanism, to address the inability of existing algorithms to balance real-time performance and accuracy. In the front-level information processing module, multi-attribute decision-making under intuitionistic fuzzy information (IFS) is used to process radar signals with certain prior knowledge, in order to achieve rapid performance. In the post-level information processing module, an improved tri-training method is used to ensure accurate recognition of signals with low front-level recognition accuracy. To improve the performance of tri-training in identifying radar emitters, the original algorithm is combined with the modified Hyperbolic Tangent Weight (MHTW) to address the data imbalance in the radar identification problem. Simultaneously, cross-entropy is employed to enhance the sample selection mechanism, allowing the algorithm to converge rapidly.
20. Ye, Bangyu. "A Set Intersection Algorithm Via x-Fast Trie." Journal of Computers 11, no. 2 (March 2016): 91–98. http://dx.doi.org/10.17706/jcp.11.2.91-98.
21. Yang, Song, William S. Olson, Jian-Jian Wang, Thomas L. Bell, Eric A. Smith, and Christian D. Kummerow. "Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part II: Evaluation of Estimates Using Independent Data." Journal of Applied Meteorology and Climatology 45, no. 5 (May 1, 2006): 721–39. http://dx.doi.org/10.1175/jam2370.1.

Abstract:
Abstract Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). In Part I of this series, improvements of the TMI algorithm that are required to introduce latent heating as an additional algorithm product are described. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r ∼0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain-rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous 0.5°-resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons with collocated radar. Error model modifications for nonraining situations will be required, however. Sampling error represents only a portion of the total error in monthly 2.5°-resolution TMI estimates; the remaining error is attributed to random and systematic algorithm errors arising from the physical inconsistency and/or nonrepresentativeness of cloud-resolving-model-simulated profiles that support the algorithm.
22. Gao, Lyuzhou, Liqin Cao, Yanfei Zhong, and Zhaoyang Jia. "Field-Based High-Quality Emissivity Spectra Measurement Using a Fourier Transform Thermal Infrared Hyperspectral Imager." Remote Sensing 13, no. 21 (November 5, 2021): 4453. http://dx.doi.org/10.3390/rs13214453.

Abstract:
Emissivity information derived from thermal infrared (TIR) hyperspectral imagery has the advantages of both high spatial and spectral resolutions, which facilitate the detection and identification of the subtle spectral features of ground targets. Despite the emergence of several different TIR hyperspectral imagers, there are still no universal spectral emissivity measurement standards for TIR hyperspectral imagers in the field. In this paper, we address the problems encountered when measuring emissivity spectra in the field and propose a practical data acquisition and processing framework for a Fourier transform (FT) TIR hyperspectral imager—the Hyper-Cam LW—to obtain high-quality emissivity spectra in the field. This framework consists of three main parts. (1) The performance of the Hyper-Cam LW sensor was evaluated in terms of the radiometric calibration and measurement noise, and a data acquisition procedure was carried out to obtain the useful TIR hyperspectral imagery in the field. (2) The data quality of the original TIR hyperspectral imagery was improved through preprocessing operations, including band selection, denoising, and background radiance correction. A spatial denoising method was also introduced to preserve the atmospheric radiance features in the spectra. (3) Three representative temperature-emissivity separation (TES) algorithms were evaluated and compared based on the Hyper-Cam LW TIR hyperspectral imagery, and the optimal TES algorithm was adopted to determine the final spectral emissivity. These algorithms are the iterative spectrally smooth temperature and emissivity separation (ISSTES) algorithm, the improved Advanced Spaceborne Thermal Emission and Reflection Radiometer temperature and emissivity separation (ASTER-TES) algorithm, and the Fast Line-of-sight Atmospheric Analysis of Hypercubes-IR (FLAASH-IR) algorithm. The emissivity results from these different methods were compared to the reference spectra measured by a Model 102F spectrometer. The experimental results indicated that the retrieved emissivity spectra from the ISSTES algorithm were more accurate than the spectra retrieved by the other methods on the same Hyper-Cam LW field data and had close consistency with the reference spectra obtained from the Model 102F spectrometer. The root-mean-square error (RMSE) between the retrieved emissivity and the standard spectra was 0.0086, and the spectral angle error was 0.0093.
23. Bajaj, Anu, and Om Prakash Sangwan. "Tri-level regression testing using nature-inspired algorithms." Innovations in Systems and Software Engineering 17, no. 1 (January 18, 2021): 1–16. http://dx.doi.org/10.1007/s11334-021-00384-9.
24. Mazur, Jan, and Zbigniew Świętach. "On Some DOA Algorithms for Tri-axial Geophone." International Journal of Electronics and Telecommunications 59, no. 1 (March 1, 2013): 67–73. http://dx.doi.org/10.2478/eletel-2013-0008.

Abstract:
In this paper, a short study of some basic methods for the DOA estimation of a seismic wave using a so-called tri-axial geophone is presented. The proposed methods exploit the properties of the Rayleigh surface plane wave to find the DOA of an incoming seismic wave using inner products of appropriately filtered signals recorded by the geophones. The advantage of the proposed method is its simplicity and ease of implementation on small DSPs or application processors, while still retaining good accuracy. A number of example results for real data are given.
25. Huskey, Richard, Amelia Couture Bue, Allison Eden, Clare Grall, Dar Meshi, Kelsey Prena, Ralf Schmälzle, Christin Scholz, Benjamin O. Turner, and Shelby Wilcox. "Marr's Tri-Level Framework Integrates Biological Explanation Across Communication Subfields." Journal of Communication 70, no. 3 (June 1, 2020): 356–78. http://dx.doi.org/10.1093/joc/jqaa007.

Abstract:
Abstract In this special issue devoted to speaking across communication subfields, we introduce a domain general explanatory framework that integrates biological explanation with communication science and organizes our field around a shared explanatory empirical model. Specifically, we draw on David Marr’s classical framework, which subdivides the explanation of human behavior into three levels: computation (why), algorithm (what), and implementation (how). Prior theorizing and research in communication has primarily addressed Marr’s computational level (why), but has less frequently investigated algorithmic (what) or implementation (how all communication phenomena emerge from and rely on biological processes) explanations. Here, we introduce Marr’s framework and apply it to three research domains in communication science—audience research, persuasion, and social comparisons—to demonstrate what a unifying framework for explaining communication across the levels of why, what, and how can look like, and how Marr’s framework speaks to and receives input from all subfields of communication inquiry.
26. Wang, Kun, Jinggeng Gao, Xiaohua Kang, and Huan Li. "Improved tri-training method for identifying user abnormal behavior based on adaptive golden jackal algorithm." AIP Advances 13, no. 3 (March 1, 2023): 035030. http://dx.doi.org/10.1063/5.0147299.

Abstract:
The identification of abnormal user behavior helps reduce non-technical losses and regulatory operating costs for power marketing departments. Therefore, this paper proposes a tri-training method improved by adaptive golden jackal algorithm optimization to identify abnormal user behavior. First, this paper constructs multiple weak learners based on the abnormal-behavior data of users, combined with bootstrap sampling (sampling with replacement), and uses a filtering method to select the tri-training base models. Second, aiming at the problems that traditional optimization algorithms converge slowly and easily fall into local optima, the adaptive golden jackal algorithm is used to optimize the parameters of tri-training. Based on the electricity consumption data of a certain region over the past five years, the model is found to provide stable identification results: accuracy = 0.987, F1-score = 0.973.
27. Parise, Martina, Sergio Di Molfetta, Roberta Teresa Graziano, Raffaella Fiorentino, Antonio Cutruzzolà, Agostino Gnasso, and Concetta Irace. "A Head-to-Head Comparison of Two Algorithms for Adjusting Mealtime Insulin Doses Based on CGM Trend Arrows in Adult Patients with Type 1 Diabetes: Results from an Exploratory Study." International Journal of Environmental Research and Public Health 20, no. 5 (February 23, 2023): 3945. http://dx.doi.org/10.3390/ijerph20053945.

Abstract:
Background: Continuous glucose monitoring (CGM) users are encouraged to consider trend arrows before injecting a meal bolus. We evaluated the efficacy and safety of two different algorithms for trend-informed bolus adjustments, the Diabetes Research in Children Network/Juvenile Diabetes Research Foundation (DirectNet/JDRF) and the Ziegler algorithm, in type 1 diabetes. Methods: We conducted a cross-over study of type 1 diabetes patients using Dexcom G6. Participants were randomly assigned to either the DirectNet/JDRF or the Ziegler algorithm for two weeks. After a 7-day wash-out period with no trend-informed bolus adjustments, they crossed to the alternative algorithm. Results: Twenty patients, with an average age of 36 ± 10 years, completed this study. Compared to the baseline and the DirectNet/JDRF algorithm, the Ziegler algorithm was associated with a significantly higher time in range (TIR) and lower time above range and mean glucose. A separate analysis of patients on CSII and MDI revealed that the Ziegler algorithm provides better glucose control and variability than DirectNet/JDRF in CSII-treated patients. The two algorithms were equally effective in increasing TIR in MDI-treated patients. No severe hypoglycemic or hyperglycemic episode occurred during the study. Conclusions: The Ziegler algorithm is safe and may provide better glucose control and variability than the DirectNet/JDRF over a two-week period, especially in patients treated with CSII.
28. Zagrodnik, Joseph P., and Haiyan Jiang. "Investigation of PR and TMI Version 6 and Version 7 Rainfall Algorithms in Landfalling Tropical Cyclones Relative to the NEXRAD Stage-IV Multisensor Precipitation Estimate Dataset." Journal of Applied Meteorology and Climatology 52, no. 12 (December 2013): 2809–27. http://dx.doi.org/10.1175/jamc-d-12-0274.1.

Abstract:
Rainfall estimates from versions 6 (V6) and 7 (V7) of the Tropical Rainfall Measuring Mission (TRMM) precipitation radar (PR) 2A25 and Microwave Imager (TMI) 2A12 algorithms are compared relative to the Next Generation Weather Radar (NEXRAD) Multisensor Precipitation Estimate stage-IV hourly rainfall product. The dataset consists of 252 TRMM overpasses of tropical cyclones from 2002 to 2010 within a 230-km range of southeastern U.S. Weather Surveillance Radar-1988 Doppler (WSR-88D) sites. All rainfall estimates are averaged to a uniform 1/7° square grid. The grid boxes are also divided by their TMI surface designation (land, ocean, or coast). A detailed statistical analysis is undertaken to determine how changes to the TRMM rainfall algorithms in the latest version (V7) are influencing the rainfall retrievals relative to ground reference data. Version 7 of the PR 2A25 is the best-performing algorithm over all three surface types. Over ocean, TMI 2A12 V7 is improved relative to V6 at high rain rates. At low rain rates, the new ocean TMI V7 probability-of-rain parameter creates ambiguity in differentiating light rain (≤0.5 mm h−1) and nonraining areas. Over land, TMI V7 underestimates stage IV more than V6 does at a wide range of rain rates, resulting in an increased negative bias. Both versions of the TMI coastal algorithm are also negatively biased at both moderate and heavy rain rates. Some of the TMI biases can be explained by uncertain relationships between rain rate and 85-GHz ice scattering.
29. Shige, Shoichi, Satoshi Kida, Hiroki Ashiwake, Takuji Kubota, and Kazumasa Aonashi. "Improvement of TMI Rain Retrievals in Mountainous Areas." Journal of Applied Meteorology and Climatology 52, no. 1 (January 2013): 242–54. http://dx.doi.org/10.1175/jamc-d-12-074.1.

Abstract:
Heavy rainfall associated with shallow orographic rainfall systems has been underestimated by passive microwave radiometer algorithms owing to weak ice scattering signatures. The authors improve the performance of estimates made using a passive microwave radiometer algorithm, the Global Satellite Mapping of Precipitation (GSMaP) algorithm, from data obtained by the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) for orographic heavy rainfall. An orographic/nonorographic rainfall classification scheme is developed on the basis of orographically forced upward vertical motion and the convergence of surface moisture flux estimated from ancillary data. Lookup tables derived from orographic precipitation profiles are used to estimate rainfall for an orographic rainfall pixel, whereas those derived from original precipitation profiles are used to estimate rainfall for a nonorographic rainfall pixel. Rainfall estimates made using the revised GSMaP algorithm are in better agreement with estimates from data obtained by the radar on the TRMM satellite and by gauge-calibrated ground radars than are estimates made using the original GSMaP algorithm.
30. Azadmanesh, Alireza, and Majid Sina. "Application of Cukoo Algorithm to Improve Economic Scheduling In Grid Computing." Environment Conservation Journal 16, SE (December 5, 2015): 115–25. http://dx.doi.org/10.36953/ecj.2015.se1612.

Abstract:
In this article, the Cuckoo algorithm was used to improve economic scheduling in grid computing. Different economic calculation methods are evaluated, given the importance of synchronizing the time and cost criteria. Building on TCI algorithms, a new algorithm based on the Cuckoo algorithm was proposed. Evaluating its implementation required a further algorithm, based on the previous proposal, able to take prerequisite tasks into account in order to deliver optimum results. To compare and evaluate the capabilities of the proposed method, a simulator is needed that can represent many different tasks in a DAG and so provide a model against which the proposed algorithm can be assessed. The results of our proposed method show a large degree of optimality compared to the TCI method. It was also shown that population growth during the search increases the speed of reaching the optimal value, and that more repetitions reduce the time needed to reach optimal results.
31. Renaudin, Valérie, Muhammad Haris Afzal, and Gérard Lachapelle. "Complete Triaxis Magnetometer Calibration in the Magnetic Domain." Journal of Sensors 2010 (2010): 1–10. http://dx.doi.org/10.1155/2010/967245.

Abstract:
This paper presents an algorithm for calibrating erroneous tri-axis magnetometers in the magnetic field domain. Unlike existing algorithms, no simplification is made on the nature of errors to ease the estimation. A complete error model, including instrumentation errors (scale factors, nonorthogonality, and offsets) and magnetic deviations (soft and hard iron) on the host platform, is elaborated. An adaptive least squares estimator provides a consistent solution to the ellipsoid fitting problem and the magnetometer's calibration parameters are derived. The calibration is experimentally assessed with two artificial magnetic perturbations introduced close to the sensor on the host platform and without additional perturbation. In all configurations, the algorithm successfully converges to a good estimate of the said errors. Comparing the magnetically derived headings with a GNSS/INS reference, the results show a major improvement in terms of heading accuracy after the calibration.
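The core of such a calibration is an ellipsoid fit to the raw field samples. The sketch below shows a simplified, axis-aligned variant solved by ordinary linear least squares; unlike the paper's complete model and adaptive estimator, it ignores non-orthogonality and general soft-iron effects, and the completing-the-square algebra is standard rather than taken from the article.

    import numpy as np

    def calibrate_axis_aligned(m):
        """Fit an axis-aligned ellipsoid a*x^2 + b*y^2 + c*z^2 + d*x + e*y + f*z = 1
        to raw magnetometer samples m (N x 3), then return offsets and scale
        factors that map the readings onto a unit sphere."""
        x, y, z = m.T
        A = np.column_stack([x**2, y**2, z**2, x, y, z])
        coef, *_ = np.linalg.lstsq(A, np.ones(len(m)), rcond=None)
        a, b, c, d, e, f = coef
        offset = np.array([-d / (2 * a), -e / (2 * b), -f / (2 * c)])
        # Completing the square gives the squared semi-axes via this constant:
        g = 1 + d**2 / (4 * a) + e**2 / (4 * b) + f**2 / (4 * c)
        scale = np.sqrt(np.array([a, b, c]) / g)
        return offset, scale  # calibrated reading = (m - offset) * scale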
32. Reddy, Mamatha, Vaasanthi Chintala, and Bairam Balaji. "Predicting mortality using prognostic scores and electrocardiographic parameters in ST-elevation myocardial infarction patients undergoing thrombolysis." Journal of Ideas in Health 5, no. 4 (December 10, 2022): 760–65. http://dx.doi.org/10.47108/jidhealth.vol5.iss4.253.

Abstract:
Background: The short- and long-term outcomes of thrombolysis have been predicted by various scores and models based on the electrocardiogram. This study aimed to compare various mortality predictors in ST-elevation myocardial infarction (STEMI) patients undergoing thrombolysis. Methods: A prospective, case-control, single-center study was performed at MGM Hospital, Warangal, India, between November 2019 and November 2021. A total of 100 STEMI patients were enrolled, of which 50 were controls (patients who survived after seven days of thrombolysis) and 50 were cases (patients who died after seven days of thrombolysis). The Aldrich score, TIMI risk index (TRI), Sclarovsky-Birnbaum Ischemia Grading (SB-IG) algorithm, presence of Q waves, total ST-segment deviation, and the number of leads with ST-segment elevation (STE) in anterior wall MI (AWMI) were calculated. Results: The mean age of the case group was 55.3 ± 11.6 years, and that of the control group was 55.5 ± 10.1 years. Males comprised 46.0% and 66.0% of the case and control groups, respectively. The c-statistic of TRI was found to be the highest (c = 0.68; P = 0.001), followed by the SB-IG algorithm (c = 0.58; P = 0.021), the sum of R waves in AWMI (c = 0.5; P = 0.019), the number of leads with STE in AWMI (c = 0.47; P = 0.778), total ST-segment deviation (c = 0.47; P = 0.552), the Aldrich score for AWMI (c = 0.43; P = 0.590), the presence of Q waves (c = 0.40; P = 0.676), and the Aldrich score for inferior wall MI (c = 0.32; P = 0.071). Conclusion: The TRI and SB-IG algorithms had moderate accuracy in predicting seven-day mortality in STEMI patients undergoing thrombolysis. The other scores and parameters, viz. the Aldrich score, presence of Q waves, total ST-segment deviation, and the number of leads with STE in AWMI, had very poor accuracy in predicting in-hospital outcomes. More extensive studies with longer durations are required to validate our findings.
33. Mohammad, Umar, Yusuf Hamdan, Aarah Sardesai, and Merve Gokgol. "A Novel Algorithm for Professor Recommendation in Higher Education." London Journal of Social Sciences, no. 6 (September 17, 2023): 12–19. http://dx.doi.org/10.31039/ljss.2023.6.98.

Abstract:
This paper introduces a novel professor-recommendation system designed specifically for community college and university courses. Building upon an existing algorithm for one-on-one teacher recommendations, we leveraged insights from the literature on Massive Open Online Course (MOOC) recommender algorithms. By analysing various approaches, we combined and refined ideas to develop an optimised system. Our approach utilises a tri-module framework that incorporates supervised and unsupervised learning techniques. The first module employs a Gradient-boosted Decision Tree algorithm, augmented with multiple factors and student dropout rates as ground truth, to generate a ranking score. The second module applies Apriori Association and Density-based Spatial Clustering of Applications with Noise (DBSCAN) algorithms to analyse these factors and identify professors with similar characteristics. In the third module, item-based collaborative filtering is employed, incorporating user ratings and the cosine similarity algorithm. The outputs from these three modules are subsequently integrated through a weighted average. This addition enables the system to prioritise opportunities for new professors, thereby ensuring a balanced recommendation approach. The resulting combined ranking score provides accurate recommendations for course instructors. This approach can be integrated into university course selection software for the benefit of both students and educational institutions.
34. Jang, Doyoung, Jongmann Kim, Yong Bae Park, and Hosung Choo. "Study of an Atmospheric Refractivity Estimation from a Clutter Using Genetic Algorithm." Applied Sciences 12, no. 17 (August 26, 2022): 8566. http://dx.doi.org/10.3390/app12178566.

Abstract:
In this paper, a method for estimating atmospheric refractivity from sea and land clutters is proposed. To estimate the atmospheric refractivity, clutter power spectrums based on an artificial tri-linear model are calculated using an Advanced Refractive Prediction System (AREPS) simulator. Then, the clutter power spectrums are again obtained based on the measured atmospheric refractivity data using the AREPS simulator. In actual operation, this spectrum from measured reflectivity can be replaced with real-time clutter spectrums collected from radars. A cost function for the genetic algorithm (GA) is then defined based on the difference between the two clutter power spectrums to predict the atmospheric refractivity using the artificial tri-linear model. The optimum variables of the tri-linear model are determined at a minimum cost in the GA process. The results demonstrate that atmospheric refractivity can be predicted using the proposed method from the clutter powers.
35. Luo, Jianfu, Jinsheng Zhou, Xi Jiang, and Haodong Lv. "A Modification of the Imperialist Competitive Algorithm with Hybrid Methods for Multi-Objective Optimization Problems." Symmetry 14, no. 1 (January 16, 2022): 173. http://dx.doi.org/10.3390/sym14010173.

Abstract:
This paper proposes a modification of the imperialist competitive algorithm with hybrid methods for solving multi-objective optimization problems (MOHMICA), based on an earlier modification of the imperialist competitive algorithm with hybrid methods (HMICA). The rationale for this is an obvious limitation of HMICA: it can only solve single-objective optimization problems, not multi-objective ones. In order to adapt to the characteristics of multi-objective optimization problems, this paper improves the establishment of the initial empires, the colony allocation mechanism and the empire competition of HMICA, and introduces an external archiving strategy. A total of 12 benchmark functions are calculated, including 10 bi-objective and 2 tri-objective benchmarks. Four metrics are used to verify the quality of MOHMICA. Then, a new comprehensive evaluation method is proposed, called the "radar map method", which can comprehensively evaluate the convergence and distribution performance of a multi-objective optimization algorithm. As can be seen from its four coordinate axes, the radar map is a symmetrical evaluation method. For this evaluation method, the larger the radar map area, the better the calculation result of the algorithm. Using this new evaluation method, the algorithm proposed in this paper is compared with seven other high-quality algorithms. The radar map area of MOHMICA is at least 14.06% larger than that of the other algorithms. Therefore, it is proven that MOHMICA has advantages as a whole.
36. Wang, Ruoran, Xihang Zeng, Yujuan Long, Jing Zhang, Hong Bo, Min He, and Jianguo Xu. "Prediction of Mortality in Geriatric Traumatic Brain Injury Patients Using Machine Learning Algorithms." Brain Sciences 13, no. 1 (January 3, 2023): 94. http://dx.doi.org/10.3390/brainsci13010094.

Abstract:
Background: The number of geriatric traumatic brain injury (TBI) patients is increasing every year due to the population’s aging in most of the developed countries. Unfortunately, there is no widely recognized tool for specifically evaluating the prognosis of geriatric TBI patients. We designed this study to compare the prognostic value of different machine learning algorithm-based predictive models for geriatric TBI. Methods: TBI patients aged ≥65 from the Medical Information Mart for Intensive Care-III (MIMIC-III) database were eligible for this study. To develop and validate machine learning algorithm-based prognostic models, included patients were divided into a training set and a testing set, with a ratio of 7:3. The predictive value of different machine learning based models was evaluated by calculating the area under the receiver operating characteristic curve, sensitivity, specificity, accuracy and F score. Results: A total of 1123 geriatric TBI patients were included, with a mortality of 24.8%. Non-survivors had higher age (82.2 vs. 80.7, p = 0.010) and lower Glasgow Coma Scale (14 vs. 7, p < 0.001) than survivors. The rate of mechanical ventilation was significantly higher (67.6% vs. 25.9%, p < 0.001) in non-survivors while the rate of neurosurgical operation did not differ between survivors and non-survivors (24.3% vs. 23.0%, p = 0.735). Among different machine learning algorithms, Adaboost (AUC: 0.799) and Random Forest (AUC: 0.795) performed slightly better than the logistic regression (AUC: 0.792) on predicting mortality in geriatric TBI patients in the testing set. Conclusion: Adaboost, Random Forest and logistic regression all performed well in predicting mortality of geriatric TBI patients. Prognostication tools utilizing these algorithms are helpful for physicians to evaluate the risk of poor outcomes in geriatric TBI patients and adopt personalized therapeutic options for them.
37

Bolduc, David L., Vilmar Villa, David J. Sandgren, G. David Ledney, William F. Blakely and Rolf Bünger. "Application of Multivariate Modeling for Radiation Injury Assessment: A Proof of Concept". Computational and Mathematical Methods in Medicine 2014 (2014): 1–17. http://dx.doi.org/10.1155/2014/685286.

Full text
Abstract:
Multivariate radiation injury estimation algorithms were formulated for estimating severe hematopoietic acute radiation syndrome (H-ARS) injury (i.e., response category three, or RC3) in a rhesus monkey total-body irradiation (TBI) model. Classical CBC and serum chemistry blood parameters were examined prior to irradiation (d 0) and on d 7, 10, 14, 21, and 25 after irradiation in 24 nonhuman primates (NHP) (Macaca mulatta) given 6.5-Gy ⁶⁰Co γ-ray (0.4 Gy min−1) TBI. A correlation matrix was formulated with the RC3 severity level designated as the "dependent variable" and independent variables down-selected based on their radioresponsiveness and relatively low multicollinearity using stepwise linear regression analyses. The final candidate independent variables included CBC counts (absolute numbers of neutrophils, lymphocytes, and platelets), forming the "CBC" RC3 estimation algorithm. Additionally, a diagnostic CBC and serum chemistry "CBC-SCHEM" RC3 algorithm expanded on the CBC model with the addition of hematocrit and the serum enzyme levels of aspartate aminotransferase, creatine kinase, and lactate dehydrogenase. Both algorithms estimated RC3 with over 90% predictive power. Only the CBC-SCHEM RC3 algorithm, however, met the three critical assumptions of linear least squares; it demonstrated slightly greater precision for radiation injury estimation and significantly lower prediction error, indicating greater statistical robustness.
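A minimal sketch of the kind of least-squares model described above, under assumed variable names; the authors' stepwise down-selection and assumption checks are omitted here.

```python
# Hypothetical illustration: OLS estimation of RC3 severity from CBC counts.
import numpy as np
import statsmodels.api as sm

def fit_rc3_model(neutrophils, lymphocytes, platelets, rc3):
    """Regress RC3 severity on the three down-selected CBC predictors."""
    X = sm.add_constant(np.column_stack([neutrophils, lymphocytes, platelets]))
    model = sm.OLS(rc3, X).fit()
    print(model.rsquared)   # predictive power, cf. the >90% reported above
    return model
```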
38

Kong, Yating, Jide Li, Liangpeng Hu and Xiaoqiang Li. "Semi-Supervised Learning Matting Algorithm Based on Semantic Consistency of Trimaps". Applied Sciences 13, no. 15 (26 July 2023): 8616. http://dx.doi.org/10.3390/app13158616.

Full text
Abstract:
Image matting methods based on deep learning have achieved tremendous success. However, this success typically relies on massive amounts of pixel-level labeled data, which are time-consuming and costly to obtain. This paper proposes a semi-supervised deep learning matting algorithm based on the semantic consistency of trimaps (Tri-SSL), which uses trimaps to provide weakly supervised signals for unlabeled data and thereby reduce the labeling cost. Tri-SSL is a single-stage semi-supervised algorithm consisting of a supervised branch and a weakly supervised branch that share the same network within one training iteration. The supervised branch is consistent with standard supervised matting methods. In the weakly supervised branch, trimaps of different granularities serve as weakly supervised signals for unlabeled images, and the two trimaps are naturally perturbed samples. Orientation consistency constraints are imposed on the prediction results for trimaps of different granularity and on the intermediate features of the network. Experimental results show that Tri-SSL improves model performance by effectively utilizing unlabeled data.
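The weakly supervised branch can be pictured roughly as below. This is a deliberate simplification (a plain L1 consistency between the two trimap views) of the paper's orientation consistency constraints, and all names are hypothetical.

```python
import torch.nn.functional as F

def weakly_supervised_loss(model, image, trimap_fine, trimap_coarse):
    """One unlabeled image, two naturally perturbed views: the same network
    predicts an alpha matte under trimaps of different granularity, and the
    two predictions are constrained to agree."""
    alpha_fine = model(image, trimap_fine)      # prediction from fine trimap
    alpha_coarse = model(image, trimap_coarse)  # prediction from coarse trimap
    return F.l1_loss(alpha_fine, alpha_coarse)
```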
39

Bruschetta, Roberta, Gennaro Tartarisco, Lucia Francesca Lucca, Elio Leto, Maria Ursino, Paolo Tonin, Giovanni Pioggia and Antonio Cerasa. "Predicting Outcome of Traumatic Brain Injury: Is Machine Learning the Best Way?" Biomedicines 10, no. 3 (16 March 2022): 686. http://dx.doi.org/10.3390/biomedicines10030686.

Full text
Abstract:
One of the main challenges with traumatic brain injury (TBI) patients is achieving an early and definite prognosis. Despite the recent development of artificial intelligence algorithms for identifying prognostic factors relevant to clinical practice, the literature lacks a rigorous comparison between classical regression and machine learning (ML) models. This study provides such a comparison on a sample of TBI patients evaluated at baseline (T0), 3 months after the event (T1), and at discharge (T2). A classical Linear Regression Model (LM) was compared with the independent performances of Support Vector Machine (SVM), k-Nearest Neighbors (k-NN), Naïve Bayes (NB) and Decision Tree (DT) algorithms, together with an ensemble ML approach. Accuracy was similar between the LM and the ML algorithms on the analyzed sample when a two-class outcome (Positive vs. Negative) approach was used, whereas the NB algorithm showed the worst performance. This study highlights the utility of comparing traditional regression modeling to ML, particularly when using a small number of reliable predictor variables after TBI. The dataset of clinical data used to train the ML algorithms will be made publicly available to other researchers for future comparisons.
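The comparison can be sketched as follows with scikit-learn; logistic regression stands in for the classical regression baseline on the two-class outcome, and data handling and hyper-parameters are elided placeholders rather than the authors' protocol.

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

clfs = {
    "SVM": SVC(probability=True),
    "k-NN": KNeighborsClassifier(),
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(),
}
# soft-voting ensemble of the four single classifiers
ensemble = VotingClassifier(estimators=list(clfs.items()), voting="soft")

def compare_models(X, y):
    """Cross-validated accuracy of each model on a Positive/Negative outcome."""
    all_models = {**clfs, "Ensemble": ensemble,
                  "LM": LogisticRegression(max_iter=1000)}
    for name, clf in all_models.items():
        acc = cross_val_score(clf, X, y, scoring="accuracy").mean()
        print(f"{name}: accuracy = {acc:.3f}")
```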
40

Bello, A. U., and M. O. Nnakwe. "An asynchronous inertial algorithm for solving convex feasibility problems with strict pseudo-contractions in Hilbert spaces". Proceedings of the Edinburgh Mathematical Society 65, no. 1 (February 2022): 229–43. http://dx.doi.org/10.1017/s0013091522000049.

Full text
Abstract:
Finding a common point in a finite intersection of sets, say $C=\cap_{i=1}^{n} F(T_i)$, where each $T_i$ is a non-expansive-type mapping, is a central task in mathematics, as it cuts across different areas of application, such as road design and medical image reconstruction. There are many algorithms for approximating solutions of such problems. Of particular interest in the implementation of these algorithms are cost and speed, owing to the large computations to be performed at each step of the iterative process. One of the most efficient methods for optimizing computation time and implementation cost is the asynchronous-parallel algorithm approach. In this paper, we prove a weak convergence theorem for the asynchronous sequential inertial (ASI) algorithm (introduced by Heaton and Censor in [H. Heaton and Y. Censor, Asynchronous sequential inertial iterations for common fixed points problems with an application to linear systems, J. Glob. Optim. 74 (2019), 95–119]) for strictly pseudo-contractive mappings in Hilbert spaces. Under additional mild conditions, we also obtain a strong convergence theorem. Finally, we apply the ASI algorithm to solving convex minimization problems and Hammerstein integral equations.
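To make the iteration concrete, here is a toy numeric sketch of a sequential inertial scheme of the kind analyzed: x_{k+1} = T_{i_k}(x_k + α(x_k − x_{k−1})), cycling through the operators. Asynchrony and the strict pseudo-contraction setting are omitted; projections onto Euclidean balls serve as simple non-expansive operators.

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection onto a closed Euclidean ball (non-expansive)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def inertial_iteration(ops, x0, alpha=0.3, iters=200):
    x_prev, x = x0.copy(), x0.copy()
    for k in range(iters):
        y = x + alpha * (x - x_prev)          # inertial extrapolation step
        x_prev, x = x, ops[k % len(ops)](y)   # apply operators cyclically
    return x

balls = [(np.array([0.0, 0.0]), 2.0), (np.array([1.0, 0.0]), 2.0)]
ops = [lambda z, c=c, r=r: project_ball(z, c, r) for c, r in balls]
print(inertial_iteration(ops, np.array([5.0, 5.0])))  # approaches the intersection
```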
41

Wu, Yafei, Chao He, Yao Shan, Shuai Zhao and Shunhua Zhou. "An Optimized Instance Segmentation of Underlying Surface in Low-Altitude TIR Sensing Images for Enhancing the Calculation of LSTs". Sensors 24, no. 9 (5 May 2024): 2937. http://dx.doi.org/10.3390/s24092937.

Full text
Abstract:
The calculation of land surface temperatures (LSTs) from low-altitude thermal infrared (TIR) remote sensing images at the block scale is gaining attention. However, accurate calculation of LSTs requires a precise determination of the extent of the various underlying surfaces in the TIR images, and existing approaches struggle to segment these underlying surfaces effectively. To address this challenge, this study proposes a deep learning (DL) methodology for the instance segmentation and quantification of underlying surfaces in a low-altitude TIR image dataset. Mask region-based convolutional neural networks were utilized for pixel-level classification and segmentation on a dataset of 1350 annotated TIR images of an urban rail transit hub with a complex distribution of underlying surfaces. The hyper-parameters and architecture were then optimized for precise classification of the underlying surfaces. The algorithms were validated on 150 new TIR images, and four evaluation indicators demonstrated that the optimized algorithm outperformed the alternatives. High-quality segmented masks of the underlying surfaces were generated, and the area of each instance was obtained by counting the true-positive pixels with a value of 1. This research promotes the accurate calculation of LSTs from low-altitude TIR sensing images.
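The final quantification step is simple enough to state in a few lines; the ground-sampling distance below is an assumed parameter, not a value from the paper.

```python
import numpy as np

def instance_area(mask: np.ndarray, gsd_m: float) -> float:
    """Area in square metres of one segmented underlying-surface instance:
    count of true-positive pixels (value 1) times the pixel footprint."""
    return int((mask == 1).sum()) * gsd_m ** 2

mask = np.array([[0, 1, 1],
                 [0, 1, 0]])
print(instance_area(mask, gsd_m=0.05))  # 3 pixels x 0.0025 m^2 = 0.0075 m^2
```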
42

Zhao, Jia, Yuhang Luo, Renbin Xiao, Runxiu Wu and Tanghuai Fan. "Tri-Training Algorithm for Adaptive Nearest Neighbor Density Editing and Cross Entropy Evaluation". Entropy 25, no. 3 (9 March 2023): 480. http://dx.doi.org/10.3390/e25030480.

Full text
Abstract:
Tri-training expands the training set by adding pseudo-labels to unlabeled data, which effectively improves the generalization ability of the classifier. However, unlabeled data are easily mislabeled, introducing training noise that damages the learning efficiency of the classifier, and the explicit decision mechanism tends to let this training noise degrade the accuracy of the classification model in the prediction stage. This study proposes a Tri-training algorithm with adaptive nearest neighbor density editing and cross-entropy evaluation (TTADEC), designed to reduce the training noise formed during classifier iteration and to remedy the inaccurate predictions of the explicit decision mechanism. First, the TTADEC algorithm uses nearest neighbor editing to label high-confidence samples. It then defines the local density of samples from their relative nearest neighbors to screen the pre-training samples, and dynamically expands the training set with an adaptive technique. Finally, the decision process uses cross-entropy to evaluate the trained base classifiers and assigns appropriate weights to them to construct a decision function. The effectiveness of the TTADEC algorithm is verified on UCI datasets, and the experimental results show that, compared with the standard Tri-training algorithm and its improved variants, TTADEC achieves better classification performance and can effectively handle semi-supervised classification problems where the training set is insufficient.
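The cross-entropy evaluation step can be sketched as follows. The inverse-cross-entropy weighting is an illustrative assumption, not necessarily the paper's exact formula; classifiers are assumed to expose scikit-learn's predict_proba interface.

```python
import numpy as np
from sklearn.metrics import log_loss

def classifier_weights(classifiers, X_val, y_val):
    """Lower cross-entropy on held-out data -> larger voting weight."""
    ce = np.array([log_loss(y_val, c.predict_proba(X_val)) for c in classifiers])
    w = 1.0 / (ce + 1e-12)          # inverse cross-entropy as raw weight
    return w / w.sum()              # normalized ensemble weights

def weighted_decision(classifiers, weights, X):
    """Decision function: weighted average of base-classifier probabilities."""
    probs = sum(w * c.predict_proba(X) for w, c in zip(weights, classifiers))
    return probs.argmax(axis=1)
```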
43

Ramos, Thaís A. R., Nilbson R. O. Galindo, Raúl Arias-Carrasco, Cecília F. da Silva, Vinicius Maracaja-Coutinho and Thaís G. do Rêgo. "RNAmining: A machine learning stand-alone and web server tool for RNA coding potential prediction". F1000Research 10 (26 April 2021): 323. http://dx.doi.org/10.12688/f1000research.52350.1.

Full text
Abstract:
Non-coding RNAs (ncRNAs) are important players in the cellular regulation of organisms from different kingdoms. One of the key steps in ncRNA research is the ability to distinguish coding from non-coding sequences. We applied seven machine learning algorithms (Naive Bayes, SVM, KNN, Random Forest, XGBoost, ANN and DL) across 15 model organisms from different evolutionary branches, and created a stand-alone and web server tool (RNAmining) to distinguish coding and non-coding sequences using the algorithm with the best performance (XGBoost). First, we used coding/non-coding sequences downloaded from Ensembl (April 14th, 2020). The sequences were then balanced, their tri-nucleotide counts analysed, and the counts normalized by sequence length. In total, we built 180 models. All machine learning tests were performed using 10-fold cross-validation, and we selected the algorithm with the best results (XGBoost) for implementation in RNAmining. Best F1-scores ranged from 97.56% to 99.57% depending on the organism. Moreover, we benchmarked against other tools in the literature (CPAT, CPC2, RNAcon and Transdecoder) and our results outperformed them, opening opportunities for the further development of RNAmining, which is freely available at https://rnamining.integrativebioinformatics.me/.
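The feature extraction described above (tri-nucleotide counts normalized by sequence length) can be sketched as follows; the exact normalization and the model settings are illustrative assumptions, not RNAmining's released configuration.

```python
from itertools import product
from xgboost import XGBClassifier

TRIMERS = ["".join(p) for p in product("ACGT", repeat=3)]  # 64 features

def trinucleotide_features(seq: str) -> list:
    """Overlapping 3-mer counts of a sequence, divided by its length."""
    seq = seq.upper()
    counts = {t: 0 for t in TRIMERS}
    for i in range(len(seq) - 2):
        k = seq[i:i + 3]
        if k in counts:          # skips 3-mers containing N or other symbols
            counts[k] += 1
    return [counts[t] / max(len(seq), 1) for t in TRIMERS]

# X = [trinucleotide_features(s) for s in sequences]   # sequences: assumed input
# XGBClassifier().fit(X, y)                            # y: 0/1 coding labels
```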
44

Ye, Xin, Rongyuan Liu, Jian Hui and Jian Zhu. "Land Surface Temperature Estimation from Landsat-9 Thermal Infrared Data Using Ensemble Learning Method Considering the Physical Radiance Transfer Process". Land 12, no. 7 (26 June 2023): 1287. http://dx.doi.org/10.3390/land12071287.

Full text
Abstract:
Accurately estimating land surface temperature (LST) is a critical concern in thermal infrared (TIR) remote sensing. According to the thermal radiance transfer equation, the observed data in each channel are coupled with both emissivity and atmospheric parameters in addition to the LST. To solve this ill-posed problem, classical algorithms often require external inputs such as land surface emissivity and atmospheric profiles, which are difficult to obtain accurately and in a timely manner; this may introduce additional errors and limits the applicability of LST retrieval algorithms. To reduce the dependence on external parameters, this paper proposes a new algorithm that estimates the LST directly from the top-of-atmosphere brightness temperatures in the two Landsat-9 TIR channels (channels 10 and 11), without external parameters. The proposed algorithm takes full advantage of the ensemble learning method's aptitude for nonlinear problems. It accounts for the physical radiance transfer process and adds the ground-leaving brightness temperature and an atmospheric water vapor index to the input feature set. The experimental results show that the new algorithm achieves accurate LST estimates compared with ground-measured LST and is consistent with the Landsat-9 LST product. Future work will develop end-to-end deep learning models, mine deeper features between the TIR channels, and reduce the effect of spatial heterogeneity on accuracy validation.
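A hedged sketch of the retrieval idea: LST is regressed directly on the two-channel brightness temperatures plus the two physics-motivated features named above. Feature names and the boosting learner are assumptions; the paper's exact ensemble model may differ.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_lst_model(bt10, bt11, bt_ground_leaving, water_vapor_index, lst):
    """Inputs: per-pixel channel-10/11 TOA brightness temperatures and the two
    added features; target: land surface temperature (e.g., from simulation)."""
    X = np.column_stack([bt10, bt11, bt_ground_leaving, water_vapor_index])
    return GradientBoostingRegressor().fit(X, lst)

# lst_pred = fit_lst_model(...).predict(X_new)   # no emissivity/profile inputs
```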
45

Ou, Depin, Kun Tan, Qian Du, Jishuai Zhu, Xue Wang and Yu Chen. "A Novel Tri-Training Technique for the Semi-Supervised Classification of Hyperspectral Images Based on Regularized Local Discriminant Embedding Feature Extraction". Remote Sensing 11, no. 6 (18 March 2019): 654. http://dx.doi.org/10.3390/rs11060654.

Full text
Abstract:
This paper introduces a novel semi-supervised tri-training classification algorithm based on regularized local discriminant embedding (RLDE) for hyperspectral imagery. In this algorithm, the RLDE method is used for optimal feature extraction, overcoming the singular-value and over-fitting problems that are the main shortcomings of the local discriminant embedding (LDE) and local Fisher discriminant analysis (LFDA) methods. An active learning method is then used to select the most useful and informative samples from the candidate set. In the experiments undertaken in this study, the three base classifiers were multinomial logistic regression (MLR), k-nearest neighbor (KNN), and random forest (RF). To confirm the effectiveness of the proposed RLDE method, experiments were conducted on two real hyperspectral datasets (Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Reflective Optics System Imaging Spectrometer (ROSIS)), and the proposed RLDE tri-training algorithm was compared with counterparts based on tri-training alone, LDE, and LFDA. The experiments confirmed that the proposed approach can effectively improve the classification accuracy for hyperspectral imagery.
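The active-learning selection step can be pictured as below, using predictive-entropy of the averaged base-classifier output as an informativeness proxy; the paper's exact criterion may differ, and all names are illustrative.

```python
import numpy as np

def select_informative(probas, k):
    """probas: three (n_samples, n_classes) probability arrays from the MLR,
    KNN and RF base classifiers; returns indices of the k candidate samples
    whose averaged prediction is most uncertain."""
    mean_p = np.mean(probas, axis=0)
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-12), axis=1)
    return np.argsort(entropy)[-k:]
```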
46

Kang, Taekyung, Gwang Sil Kim, Young Sup Byun, Jongwoo Kim, Sollip Kim, Jeonghyun Chang and Soo Jin Yoo. "An Algorithmic Approach Is Superior to the 99th Percentile Upper Reference Limits of High Sensitivity Troponin as a Threshold for Safe Discharge from the Emergency Department". Medicina 57, no. 10 (12 October 2021): 1083. http://dx.doi.org/10.3390/medicina57101083.

Full text
Abstract:
Background and Objectives: High-sensitivity cardiac troponin I (hs-TnI) is an important indicator of acute myocardial infarction (AMI) among patients presenting with chest discomfort at the emergency department (ED). We aimed to determine a reliable hs-TnI cut-off by comparing various values for a single baseline measurement and an algorithmic approach. Materials and Methods: We retrospectively reviewed the hs-TnI values of patients who presented to our ED with chest discomfort between June 2019 and June 2020. We evaluated the diagnostic accuracy for AMI of the Beckman Coulter Access hs-TnI assay by comparing the 99th percentile upper reference limits (URLs) based on the manufacturer's claims, the newly designated URLs for the Korean population, and an algorithmic approach. Results: A total of 1296 patients who underwent hs-TnI testing in the ED were reviewed, and 155 (12.0%) were diagnosed with AMI. With a single measurement, a baseline hs-TnI cut-off of 18.4 ng/L performed best for the whole population, with a sensitivity of 78.7%, specificity of 95.7%, negative predictive value (NPV) of 97.1%, and positive predictive value (PPV) of 71.3%. An algorithm using baseline and 2–3 h hs-TnI values showed 100% sensitivity, 97.7% specificity, an NPV of 100%, and a PPV of 90.1%. This algorithm used a cut-off of <4 ng/L for a single measurement 3 h after symptom onset, or an initial level of <5 ng/L and a change of <5 ng/L, to rule a patient out, and a cut-off of ≥50 ng/L for a single measurement, or a change of ≥20 ng/L, to rule a patient in. Conclusions: The algorithmic approach using serial measurements could help differentiate AMI patients from patients who can be safely discharged from the ED, ensuring that patients are triaged accurately and do not undergo unnecessary testing. The cut-off values from previous studies in different countries were effective in the Korean population.
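The reported cut-offs translate directly into a small triage function, written out here for clarity only (not clinical guidance); interpreting "change" as an absolute difference is an assumption.

```python
def triage_hs_tni(first, second=None, hours_since_onset=None):
    """Return 'rule-in', 'rule-out' or 'observe' per the reported cut-offs.
    Values in ng/L; hours_since_onset is time from symptom onset."""
    delta = None if second is None else abs(second - first)
    if first >= 50 or (delta is not None and delta >= 20):
        return "rule-in"                  # single >=50 ng/L, or change >=20 ng/L
    if hours_since_onset is not None and hours_since_onset >= 3 and first < 4:
        return "rule-out"                 # single <4 ng/L, >=3 h after onset
    if first < 5 and delta is not None and delta < 5:
        return "rule-out"                 # initial <5 ng/L and change <5 ng/L
    return "observe"                      # neither criterion met

print(triage_hs_tni(3.2, 3.9, hours_since_onset=2))  # 'rule-out'
```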
47

Dong, Shuda, and Heng Wang. "A Robust Tri-Electromagnet-Based 6-DoF Pose Tracking System Using an Error-State Kalman Filter". Sensors 24, no. 18 (13 September 2024): 5956. http://dx.doi.org/10.3390/s24185956.

Full text
Abstract:
Magnetic pose tracking is a non-contact, accurate, and occlusion-free method that is increasingly employed to track intra-corporeal medical devices, such as endoscopes, in computer-assisted medical interventions. In magnetic pose-tracking systems, a nonlinear estimation algorithm is needed to recover the pose information from the magnetic measurements. In existing pose estimation algorithms such as the extended Kalman filter (EKF), the 3-DoF orientation on the S³ manifold is normally parametrized with unit quaternions that are simply treated as vectors in Euclidean space, which violates the unity constraint of quaternions and reduces pose-tracking accuracy. In this paper, a pose estimation algorithm based on the error-state Kalman filter (ESKF) is proposed to improve the accuracy and robustness of electromagnetic tracking systems. The proposed system consists of three electromagnetic coils for magnetic field generation and a tri-axial magnetic sensor, attached to the target object, for field measurement. A strategy of sequential coil excitation is developed to separate the magnetic fields from the different coils and to reject magnetic disturbances. Simulations and experiments are conducted to evaluate the pose-tracking performance of the proposed ESKF algorithm, which is also compared with the standard EKF and a constrained EKF. It is shown that the ESKF can effectively maintain quaternion unity and thus achieve better tracking accuracy, i.e., a Euclidean position error of 2.23 mm and an average orientation angle error of 0.45°. The disturbance rejection performance of the electromagnetic tracking system is also experimentally validated.
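The key difference from a quaternion-as-vector EKF can be sketched as follows: the ESKF estimates a small 3-DoF error rotation and injects it multiplicatively, so the unit norm is preserved by construction. This is a generic illustration, not the paper's filter; the Hamilton [w, x, y, z] convention is an assumption.

```python
import numpy as np

def quat_mul(q, p):
    """Hamilton quaternion product, q = [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def inject_error(q_nominal, dtheta):
    """Fold the estimated 3-DoF error rotation dtheta (rad) into the nominal
    quaternion multiplicatively, then renormalize to exact unity."""
    dq = np.concatenate([[1.0], 0.5 * dtheta])   # small-angle error quaternion
    q = quat_mul(q_nominal, dq)
    return q / np.linalg.norm(q)

q = np.array([1.0, 0.0, 0.0, 0.0])
print(inject_error(q, np.array([0.01, -0.02, 0.005])))  # still unit-norm
```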
48

Lei, Hao Ran, Shuai Chen, Yao Wei Chang and Lei Jie Wang. "Research on the Hardware-in-the-Loop Simulation for High Dynamic SINS/GNSS Integrated Navigation System". Advanced Materials Research 846-847 (November 2013): 378–82. http://dx.doi.org/10.4028/www.scientific.net/amr.846-847.378.

Full text
Abstract:
In the development of guided munitions, ground tests can only verify the performance of an integrated navigation system under low-dynamic conditions, and verification methods such as flight tests and drop experiments are costly and risky. This paper proposes a hardware-in-the-loop simulation (HILS) scheme with a tri-axial turntable for verifying the performance of the navigation system under high-dynamic conditions. The quaternion method and the four-sample rotation vector algorithm are used as attitude-update algorithms for comparison. Based on an analysis of the characteristics of a tactical missile and of the HILS system, the error sources of the integrated navigation system in simulations with and without the turntable are discussed in detail. The HILS results show that the integrated navigation system performs well in a high-dynamic environment; moreover, for a fiber optic gyroscope (FOG) inertial measurement unit (IMU) that outputs angular rate, the quaternion method outperforms the four-sample rotation vector algorithm.
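For reference, the single-sample quaternion attitude update being compared can be sketched as below; the four-sample rotation-vector variant would instead build a coning-corrected rotation vector from four gyro increments. Conventions (Hamilton, [w, x, y, z]) and names are assumptions.

```python
import numpy as np

def quat_update(q, omega, dt):
    """One attitude update q_{k+1} = q_k (x) exp(0.5 * omega * dt), where
    omega is the body angular rate (rad/s) from the FOG IMU."""
    phi = omega * dt                          # incremental rotation vector
    a = np.linalg.norm(phi)
    if a < 1e-12:
        dq = np.array([1.0, *(0.5 * phi)])    # first-order small-angle form
    else:
        dq = np.concatenate([[np.cos(a / 2)], np.sin(a / 2) * phi / a])
    w1, x1, y1, z1 = q                        # Hamilton product q (x) dq
    w2, x2, y2, z2 = dq
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

# q_next = quat_update(np.array([1.0, 0, 0, 0]), np.array([0.1, 0, 0.02]), 0.005)
```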
49

Kannangai, Rajesh, Sandeep Ramalingam, Selvaraj Pradeepkumar, Kannan Damodharan and Gopalan Sridharan. "Hospital-Based Evaluation of Two Rapid Human Immunodeficiency Virus Antibody Screening Tests". Journal of Clinical Microbiology 38, no. 9 (2000): 3445–47. http://dx.doi.org/10.1128/jcm.38.9.3445-3447.2000.

Full text
Abstract:
Two rapid human immunodeficiency virus (HIV) screening assays, HIV TRI-DOT and HIV-SPOT, were compared with standard enzyme-linked immunosorbent assays according to a testing algorithm. Sensitivities and specificities in the real-time evaluation were 99.5 and 99.9% for TRI-DOT and 98.2 and 99.7% for HIV-SPOT, respectively. These two tests are suitable for use where facilities and laboratory expertise are limited.
50

Li, Sheng Miao, Ke Yan Xiao, Xiao Ya Luo, Chun Hua Wen and Xi Gan. "Research on the Application of 3D Modeling and Visualization Method in Construction Mine Model". Advanced Materials Research 926-930 (May 2014): 3208–11. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.3208.

Full text
Abstract:
This study analyzes and processes the spatial data of a mine. The work mainly includes: calculating the 3D coordinates of points along each drill hole axis; calculating the 3D coordinates of the intersections of drill hole axes with stratum surfaces; inserting virtual drill holes and calculating the 3D coordinates of their orifices; and dividing and numbering the strata of the study area. A drill hole database is then designed to store and manage the mine's geological data. The study also reviews the classification and characteristics of 3D spatial data models. Based on the distribution characteristics of the mine data and the intended application of the 3D model, the quasi tri-prism is chosen as the basic volume element for building the 3D geological model. Improvements to the data structure and to the quasi tri-prism modeling algorithm allow it to better adapt to modeling complex geological bodies. The research studies the triangle expansion rule and the quasi tri-prism modeling algorithm, and finally designs a geological-body database to store and manage the geological modeling data.
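As a toy illustration of the basic volume element mentioned above, a quasi tri-prism can be represented as two triangles joined by lateral faces; the fields below are a guess at a minimal structure, not the paper's data model.

```python
from dataclasses import dataclass

Point3D = tuple  # (x, y, z) coordinates
Triangle = tuple  # three Point3D vertices

@dataclass
class QuasiTriPrism:
    """Basic volume element: a triangle on an upper stratum surface joined to
    a (possibly degenerate) triangle on the stratum surface below it."""
    top: Triangle
    bottom: Triangle
    stratum_id: int  # which numbered stratum this cell belongs to
```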