Scientific literature on the topic "Selective classifier"
Contents
Consult the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Selective classifier".
You can also download the full text of a publication as a PDF and read its abstract online when this information is included in the metadata.
Journal articles on the topic "Selective classifier"
Pernkopf, Franz. "Bayesian network classifiers versus selective k-NN classifier". Pattern Recognition 38, no. 1 (January 2005): 1–10. http://dx.doi.org/10.1016/j.patcog.2004.05.012.
Wares, Scott, John Isaacs, and Eyad Elyan. "Burst Detection-Based Selective Classifier Resetting". Journal of Information & Knowledge Management 20, no. 02 (April 23, 2021): 2150027. http://dx.doi.org/10.1142/s0219649221500271.
Li, Kai, and Hong Tao Gao. "A Subgraph-Based Selective Classifier Ensemble Algorithm". Advanced Materials Research 219-220 (March 2011): 261–64. http://dx.doi.org/10.4028/www.scientific.net/amr.219-220.261.
Wiener, Yair, and Ran El-Yaniv. "Agnostic Pointwise-Competitive Selective Classification". Journal of Artificial Intelligence Research 52 (January 26, 2015): 171–201. http://dx.doi.org/10.1613/jair.4439.
Wang, Yan, Xiu Xia Wang, and Sheng Lai. "A Kind of Combination Feature Division and Diversity Measure of Multi-Classifier Selective Ensemble Algorithm". Applied Mechanics and Materials 63-64 (June 2011): 55–58. http://dx.doi.org/10.4028/www.scientific.net/amm.63-64.55.
Liu, Li Min, and Xiao Ping Fan. "A Survey: Clustering Ensemble Selection". Advanced Materials Research 403-408 (November 2011): 2760–63. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.2760.
Nikhar, Sonam, and A. M. Karandikar. "Prediction of Heart Disease Using Different Classification Techniques". APTIKOM Journal on Computer Science and Information Technologies 2, no. 2 (July 1, 2017): 68–76. http://dx.doi.org/10.11591/aptikom.j.csit.106.
Tao, Xiaoling, Yong Wang, Yi Wei, and Ye Long. "Network Traffic Classification Based on Multi-Classifier Selective Ensemble". Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering) 8, no. 2 (September 9, 2015): 88–94. http://dx.doi.org/10.2174/235209650802150909112547.
Wei, Leyi, Shixiang Wan, Jiasheng Guo, and Kelvin KL Wong. "A novel hierarchical selective ensemble classifier with bioinformatics application". Artificial Intelligence in Medicine 83 (November 2017): 82–90. http://dx.doi.org/10.1016/j.artmed.2017.02.005.
Zhang, Xiao Hua, Zhi Fei Liu, Ya Jun Guo, and Li Qiang Zhao. "Selective Facial Expression Recognition Using fastICA". Advanced Materials Research 433-440 (January 2012): 2755–61. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.2755.
Texte intégralThèses sur le sujet "Selective classifier"
Sayin, Günel Burcu. "Towards Reliable Hybrid Human-Machine Classifiers". Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/349843.
BOLDT, F. A. "Classifier Ensemble Feature Selection for Automatic Fault Diagnosis". Universidade Federal do Espírito Santo, 2017. http://repositorio.ufes.br/handle/10/9872.
"An efficient ensemble feature selection scheme applied for fault diagnosis is proposed, based on three hypotheses: a. A fault diagnosis system does not need to be restricted to a single feature extraction model; on the contrary, it should use as many feature models as possible, since the extracted features are potentially discriminative and the feature pool is subsequently reduced with feature selection; b. The feature selection process can be accelerated, without loss of classification performance, by combining feature selection methods, such that faster and weaker methods reduce the number of potentially non-discriminative features, sending a filtered, smaller feature set to slower and stronger methods; c. The optimal feature set for a multi-class problem might be different for each pair of classes. Therefore, feature selection should be done using a one-versus-one scheme, even when multi-class classifiers are used. However, since the number of classifiers grows exponentially with the number of classes, expensive techniques like Error-Correcting Output Codes (ECOC) might have a prohibitive computational cost for large datasets. Thus, a fast one-versus-one approach must be used to alleviate such a computational demand. These three hypotheses are corroborated by experiments. The main hypothesis of this work is that using these three approaches together makes it possible to significantly improve the classification performance of a classifier identifying conditions in industrial processes. Experiments have shown such an improvement for the 1-NN classifier in industrial processes used as a case study."
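The cascaded selection idea in hypothesis (b) of the abstract above can be sketched in a few lines. This is an illustrative reconstruction, not code from the thesis: the variance filter and the label-correlation score stand in for whatever fast/weak and slow/strong selectors an implementation would actually pair.

```python
import numpy as np

def cascade_select(X, y, keep_fast=10, keep_strong=5):
    """Two-stage feature selection: a fast, weak filter first prunes the
    pool, then a slower, stronger score ranks only the survivors."""
    # Stage 1 (fast, weak): keep the highest-variance features.
    survivors = np.argsort(X.var(axis=0))[::-1][:keep_fast]
    # Stage 2 (slower, stronger): absolute correlation with the label,
    # computed only on the reduced set passed on by stage 1.
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in survivors])
    return sorted(survivors[np.argsort(corr)[::-1][:keep_strong]].tolist())
```

The point of the cascade is that the expensive stage-2 score is evaluated on `keep_fast` features rather than on the full pool.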
Thapa, Mandira. "Optimal Feature Selection for Spatial Histogram Classifiers". Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1513710294627304.
Gustafsson, Robin. "Ordering Classifier Chains using filter model feature selection techniques". Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14817.
Duangsoithong, Rakkrit. "Feature selection and causal discovery for ensemble classifiers". Thesis, University of Surrey, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.580345.
Ko, Albert Hung-Ren. "Static and dynamic selection of ensemble of classifiers". Thesis, Montréal: École de technologie supérieure, 2007. http://proquest.umi.com/pqdweb?did=1467895171&sid=2&Fmt=2&clientId=46962&RQT=309&VName=PQD.
"A thesis presented to the École de technologie supérieure in partial fulfillment of the thesis requirement for the degree of Ph.D. engineering". Bibliography: leaves [237]-246. Also available in electronic form.
McCrae, Richard. "The Impact of Cost on Feature Selection for Classifiers". Thesis, Nova Southeastern University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=13423087.
Supervised machine learning models are increasingly being used for medical diagnosis. The diagnostic problem is formulated as a binary classification task in which trained classifiers make predictions based on a set of input features. In diagnosis, these features are typically procedures or tests with associated costs. The cost of applying a trained classifier for diagnosis may be estimated as the total cost of obtaining values for the features that serve as inputs to the classifier. Obtaining classifiers based on a low-cost set of input features with acceptable classification accuracy is of interest to practitioners and researchers. What makes this problem even more challenging is that the costs associated with features vary with patients and service providers and change over time.
This dissertation aims to address this problem by proposing a method for obtaining low-cost classifiers that meet specified accuracy requirements under dynamically changing costs. Given a set of relevant input features and accuracy requirements, the goal is to identify all qualifying classifiers based on subsets of the feature set. Then, for any arbitrary costs associated with the features, the cost of the classifiers may be computed and candidate classifiers selected based on the cost-accuracy tradeoff. Since the number of relevant input features k tends to be large for typical diagnosis problems, training and testing classifiers based on all 2^k − 1 possible non-empty subsets of features is computationally prohibitive. Under the reasonable assumption that the accuracy of a classifier is no lower than that of any classifier based on a subset of its input features, this dissertation aims to develop an efficient method to identify all qualifying classifiers.
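The monotonicity assumption in the paragraph above is what makes the search over the 2^k − 1 subsets tractable: once a subset fails the accuracy threshold, none of its own subsets can qualify, so the lattice can be pruned top-down. A minimal sketch of that pruning (ours, not the dissertation's; `accuracy_of` is a placeholder for training and evaluating a classifier on a feature subset):

```python
def qualifying_subsets(features, accuracy_of, threshold):
    """Enumerate all feature subsets meeting the accuracy threshold,
    starting from the full set and recursing only into children of
    subsets that still qualify (monotonicity assumption)."""
    qualifying, seen = set(), set()
    stack = [frozenset(features)]
    while stack:
        s = stack.pop()
        if s in seen or not s:
            continue
        seen.add(s)
        if accuracy_of(s) >= threshold:
            qualifying.add(s)
            # Only children of qualifying sets can still qualify.
            for f in s:
                stack.append(s - {f})
    return qualifying
```

With a toy accuracy proportional to subset size, the search recovers exactly the subsets above the threshold without visiting the pruned part of the lattice.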
This study used two types of classifiers, artificial neural networks and classification trees, that have proved promising for numerous problems as documented in the literature. The approach was to measure the accuracy obtained with the classifiers when all features were used. Then, reduced accuracy thresholds were established that could be satisfied by subsets of the complete feature set. Threshold values for three measures (true positive rates, true negative rates, and overall classification accuracy) were considered for the classifiers. Two cost functions were used for the features: one used unit costs and the other random costs. Additional manipulation of costs was also performed.
The order in which features were removed was found to have a material impact on the effort required (removing the most important features first was most efficient, removing the least important features first was least efficient). The accuracy and cost measures were combined to produce a Pareto-optimal frontier. There were consistently few elements on this frontier: at most 15 subsets were on the frontier even when there were hundreds of thousands of acceptable feature sets. Most of the computational time is spent training and testing the models. Given costs, the models on the Pareto-optimal frontier can be efficiently identified and presented to decision makers. Both the neural networks and the classification trees performed comparably, suggesting that either type of classifier could be employed.
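The Pareto-optimal frontier mentioned above is cheap to compute once each candidate model carries a cost and an accuracy; a minimal sketch (ours, not the dissertation's) sorts by cost and keeps a model only if it beats every cheaper model's accuracy:

```python
def pareto_frontier(models):
    """models: iterable of (cost, accuracy, name) tuples.
    Returns the models for which no other model is both cheaper
    and at least as accurate, sorted by increasing cost."""
    frontier = []
    for cost, acc, name in sorted(models):  # ascending cost
        # Keep only if strictly more accurate than the best cheaper model.
        if not frontier or acc > frontier[-1][1]:
            frontier.append((cost, acc, name))
    return frontier
```

Decision makers then choose along this short list rather than among the full set of acceptable feature subsets.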
McCrae, Richard Clyde. "The Impact of Cost on Feature Selection for Classifiers". Diss., NSUWorks, 2018. https://nsuworks.nova.edu/gscis_etd/1057.
Pinagé, Felipe Azevedo. "Handling Concept Drift Based on Data Similarity and Dynamic Classifier Selection". Universidade Federal do Amazonas, 2017. http://tede.ufam.edu.br/handle/tede/5956.
Texte intégralApproved for entry into archive by Divisão de Documentação/BC Biblioteca Central (ddbc@ufam.edu.br) on 2017-10-16T18:54:52Z (GMT) No. of bitstreams: 2 license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5) Tese - Felipe A. Pinagé.pdf: 1786179 bytes, checksum: 25c2a867ba549f75fe4adf778d3f3ad0 (MD5)
Made available in DSpace on 2017-10-16T18:54:52Z (GMT). No. of bitstreams: 2 license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5) Tese - Felipe A. Pinagé.pdf: 1786179 bytes, checksum: 25c2a867ba549f75fe4adf778d3f3ad0 (MD5) Previous issue date: 2017-07-28
FAPEAM - Fundação de Amparo à Pesquisa do Estado do Amazonas
In real-world applications, machine learning algorithms can be employed for spam detection, environmental monitoring, fraud detection, web click-stream analysis, among others. Most of these problems present an environment that changes over time due to the dynamic generation process of the data and/or due to streaming data. Classifying continuous data streams has become one of the major challenges of machine learning in recent decades because, since the data is not known in advance, it must be learned as it becomes available; in addition, fast predictions about the data are needed to support decisions that are often taken in real time. Currently in the literature, methods based on accuracy monitoring are commonly used to detect changes explicitly. However, these methods may become infeasible in some real-world applications, especially for two reasons: they may need human operator feedback, and they may depend on a significant decrease in accuracy before a change can be detected. Moreover, most of these methods are incremental learning-based, updating the decision model for every incoming example, which may lead the system to unnecessary updates. To overcome these problems, this thesis proposes two semi-supervised methods that detect changes explicitly by estimating and monitoring a pseudo-error; the decision model is updated only after change detection. In the first method, the pseudo-error is calculated using similarity measures, by monitoring the dissimilarity between past and current data distributions. The second method employs dynamic classifier selection to improve the pseudo-error measurement and, as a consequence, allows online self-training of the classifier ensemble. The experiments conducted show that the proposed methods achieve competitive results, even when compared to fully supervised incremental learning methods.
These methods, especially the second one, are relevant because they make change detection and reaction applicable to many practical problems while reaching high accuracy rates, in settings where it is usually not possible to obtain the true labels of instances fully and immediately after classification.
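The pseudo-error idea in the abstract above, flagging a change from distribution dissimilarity rather than from true labels, can be illustrated with a simple windowed mean-shift monitor on a univariate stream. This sketch is ours, not the thesis's: a crude pooled-standard-deviation distance stands in for its similarity measures, and the reference window plays the role of the decision model that is updated only after a detection.

```python
import numpy as np

def detect_drift(stream, window=50, threshold=1.0):
    """Flag a change when the current window's mean drifts far
    (in pooled-std units) from the reference window's mean.
    The reference is replaced only after a detection, mirroring
    'update the model only after change detection'."""
    ref = np.asarray(stream[:window])
    changes = []
    for start in range(window, len(stream) - window + 1, window):
        cur = np.asarray(stream[start:start + window])
        pooled = np.sqrt((ref.std() ** 2 + cur.std() ** 2) / 2) + 1e-12
        if abs(cur.mean() - ref.mean()) / pooled > threshold:
            changes.append(start)
            ref = cur  # react: refresh the reference only on detection
    return changes
```

No labels are consulted anywhere, which is the property that makes this family of detectors usable when true labels arrive late or never.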
Ha, David. "Boundary uncertainty-based classifier evaluation". Doctoral thesis, Doshisha University, 2019. https://doors.doshisha.ac.jp/opac/opac_link/bibid/BB13128126/?lang=0.
We propose a general method that makes accurate evaluation of any classifier model for realistic tasks, both in a theoretical sense despite the finiteness of the available data, and in a practical sense in terms of computation costs. The classifier evaluation challenge arises from the bias of the classification error estimate that is based only on finite data. We bypass this difficulty by proposing a new classifier evaluation measure called "boundary uncertainty", whose estimate based on finite data can be considered a reliable representative of its expectation based on infinite data, and demonstrate the potential of our approach on three classifier models and thirteen datasets.
Doctor of Philosophy in Engineering, Doshisha University.
Books on the topic "Selective classifier"
Krimmel, Michael B., and Emilie K. Hartz, eds. Prison librarianship: A selective, annotated, classified bibliography, 1945-1985. Jefferson, N.C.: McFarland, 1987.
Mitchell, Alastair. Classified selective list of reading and other published material for the community worker. 2nd ed. London: National Federation of Community Organisations, 1988.
Ridgway, Peggi. Romancing in the personal ads: How to find your partner in the classifieds. La Mirada, CA: Wordpictures, 1996.
Broom, Herbert. Selection of Legal Maxims: Classified and Illustrated. Creative Media Partners, LLC, 2018.
Broom, Herbert. Selection of Legal Maxims, Classified and Illustrated. Creative Media Partners, LLC, 2018.
Broom, Herbert. Selection of Legal Maxims: Classified and Illustrated. Creative Media Partners, LLC, 2018.
Selection of Legal Maxims: Classified and Illustrated. Creative Media Partners, LLC, 2022.
Selection of Legal Maxims: Classified and Illustrated. Creative Media Partners, LLC, 2022.
Broom, Herbert. Selection of Legal Maxims, Classified and Illustrated. Creative Media Partners, LLC, 2018.
Broom, Herbert. A Selection of Legal Maxims: Classified and Illustrated. Franklin Classics, 2018.
Trouver le texte intégralChapitres de livres sur le sujet "Selective classifier"
Li, Nan, Yuan Jiang, and Zhi-Hua Zhou. "Multi-label Selective Ensemble". In Multiple Classifier Systems, 76–88. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20248-8_7.
Li, Nan, and Zhi-Hua Zhou. "Selective Ensemble under Regularization Framework". In Multiple Classifier Systems, 293–303. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02326-2_30.
Li, Nan, and Zhi-Hua Zhou. "Selective Ensemble of Classifier Chains". In Multiple Classifier Systems, 146–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38067-9_13.
Lu, Xuyao, Yan Yang, and Hongjun Wang. "Selective Clustering Ensemble Based on Covariance". In Multiple Classifier Systems, 179–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38067-9_16.
Krasotkina, Olga, Oleg Seredin, and Vadim Mottl. "Supervised Selective Combination of Diverse Object-Representation Modalities for Regression Estimation". In Multiple Classifier Systems, 89–99. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20248-8_8.
Tatarchuk, Alexander, Eugene Urlov, Vadim Mottl, and David Windridge. "A Support Kernel Machine for Supervised Selective Combining of Diverse Pattern-Recognition Modalities". In Multiple Classifier Systems, 165–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12127-2_17.
Tatarchuk, Alexander, Valentina Sulimova, David Windridge, Vadim Mottl, and Mikhail Lange. "Supervised Selective Combining Pattern Recognition Modalities and Its Application to Signature Verification by Fusing On-Line and Off-Line Kernels". In Multiple Classifier Systems, 324–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02326-2_33.
Velasco, Horacio M. González, Carlos J. García Orellana, Miguel Macías Macías, and Ramón Gallardo Caballero. "Selective Color Edge Detector Based on a Neural Classifier". In Advanced Concepts for Intelligent Vision Systems, 84–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11558484_11.
Ashwini, S. S., M. Z. Kurian, and M. Nagaraja. "Lung Cancer Detection and Prediction Using Customized Selective Segmentation Technique with SVM Classifier". In Emerging Research in Computing, Information, Communication and Applications, 37–44. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1342-5_4.
Hue, Carine, Marc Boullé, and Vincent Lemaire. "Online Learning of a Weighted Selective Naive Bayes Classifier with Non-convex Optimization". In Advances in Knowledge Discovery and Management, 3–17. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-45763-5_1.
Texte intégralActes de conférences sur le sujet "Selective classifier"
Germi, Saeed Bakhshi, Esa Rahtu, and Heikki Huttunen. "Selective Probabilistic Classifier Based on Hypothesis Testing". In 2021 9th European Workshop on Visual Information Processing (EUVIP). IEEE, 2021. http://dx.doi.org/10.1109/euvip50544.2021.9483967.
Ahmad, Irshad, Abdul Muhamin Naeem, Muhammad Islam, and Azween Bin Abdullah. "Statistical Based Real-Time Selective Herbicide Weed Classifier". In 2007 IEEE International Multitopic Conference (INMIC). IEEE, 2007. http://dx.doi.org/10.1109/inmic.2007.4557689.
Chen, Jingnian, and Li Xu. "A Hybrid Selective Classifier for Categorizing Incomplete Data". In 2009 Sixth International Conference on Fuzzy Systems and Knowledge Discovery. IEEE, 2009. http://dx.doi.org/10.1109/fskd.2009.257.
Fan, Yawen, Husheng Li, and Chao Tian. "Selective Sampling Based Efficient Classifier Representation in Distributed Learning". In GLOBECOM 2016 - 2016 IEEE Global Communications Conference. IEEE, 2016. http://dx.doi.org/10.1109/glocom.2016.7842257.
Ning, Bo, XianBin Cao, YanWu Xu, and Jun Zhang. "Virus-evolutionary genetic algorithm based selective ensemble classifier for pedestrian detection". In the first ACM/SIGEVO Summit. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1543834.1543893.
Boulle, M. "Regularization and Averaging of the Selective Naïve Bayes classifier". In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.246637.
Bai, Lixia, Hong Li, and Weifeng Gao. "A Selective Ensemble Classifier Using Multiobjective Optimization Based Extreme Learning Machine Algorithm". In 2021 17th International Conference on Computational Intelligence and Security (CIS). IEEE, 2021. http://dx.doi.org/10.1109/cis54983.2021.00017.
Ortiz-Bayliss, Jose C., Hugo Terashima-Marin, and Santiago E. Conant-Pablos. "Using learning classifier systems to design selective hyper-heuristics for constraint satisfaction problems". In 2013 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2013. http://dx.doi.org/10.1109/cec.2013.6557885.
Honda, Toshifumi, Ryo Nakagaki, Kenji Obara, and Yuji Takagi. "Fuzzy selective voting classifier with defect extraction based on comparison within an image". In International Symposium on Multispectral Image Processing and Pattern Recognition, edited by S. J. Maybank, Mingyue Ding, F. Wahl, and Yaoting Zhu. SPIE, 2007. http://dx.doi.org/10.1117/12.750528.
Balasubramanian, Ram, M. A. El-Sharkawi, R. J. Marks, Jae-Byung Jung, R. T. Miyamoto, G. M. Andersen, C. J. Eggen, and W. L. J. Fox. "Self-selective clustering of training data using the maximally-receptive classifier/regression bank". In 2009 IEEE International Conference on Systems, Man and Cybernetics - SMC. IEEE, 2009. http://dx.doi.org/10.1109/icsmc.2009.5346820.
Texte intégralRapports d'organisations sur le sujet "Selective classifier"
Searcy, Stephen W., and Kalman Peleg. Adaptive Sorting of Fresh Produce. United States Department of Agriculture, August 1993. http://dx.doi.org/10.32747/1993.7568747.bard.
Webb, Geoffrey, and Mark Carman. Dynamic Dimensionality Selection for Bayesian Classifier Ensembles. Fort Belvoir, VA: Defense Technical Information Center, March 2015. http://dx.doi.org/10.21236/ada614917.
Zchori-Fein, Einat, Judith K. Brown, and Nurit Katzir. Biocomplexity and Selective modulation of whitefly symbiotic composition. United States Department of Agriculture, June 2006. http://dx.doi.org/10.32747/2006.7591733.bard.
Dzanku, Fred M., and Louis S. Hodey. Achieving Inclusive Oil Palm Commercialisation in Ghana. Institute of Development Studies (IDS), February 2022. http://dx.doi.org/10.19088/apra.2022.007.
Zhao, Bingyu, Saul Burdman, Ronald Walcott, Tal Pupko, and Gregory Welbaum. Identifying pathogenic determinants of Acidovorax citrulli toward the control of bacterial fruit blotch of cucurbits. United States Department of Agriculture, January 2014. http://dx.doi.org/10.32747/2014.7598168.bard.
Brosh, Arieh, Gordon Carstens, Kristen Johnson, Ariel Shabtay, Joshuah Miron, Yoav Aharoni, Luis Tedeschi, and Ilan Halachmi. Enhancing Sustainability of Cattle Production Systems through Discovery of Biomarkers for Feed Efficiency. United States Department of Agriculture, July 2011. http://dx.doi.org/10.32747/2011.7592644.bard.
Multiple Engine Faults Detection Using Variational Mode Decomposition and GA-K-means. SAE International, March 2022. http://dx.doi.org/10.4271/2022-01-0616.