Academic literature on the topic "Selective classifier"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic "Selective classifier".
Next to every source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Selective classifier"
Pernkopf, Franz. "Bayesian network classifiers versus selective k-NN classifier". Pattern Recognition 38, no. 1 (January 2005): 1–10. http://dx.doi.org/10.1016/j.patcog.2004.05.012.
Wares, Scott, John Isaacs, and Eyad Elyan. "Burst Detection-Based Selective Classifier Resetting". Journal of Information & Knowledge Management 20, no. 02 (April 23, 2021): 2150027. http://dx.doi.org/10.1142/s0219649221500271.
Li, Kai, and Hong Tao Gao. "A Subgraph-Based Selective Classifier Ensemble Algorithm". Advanced Materials Research 219-220 (March 2011): 261–64. http://dx.doi.org/10.4028/www.scientific.net/amr.219-220.261.
Wiener, Yair, and Ran El-Yaniv. "Agnostic Pointwise-Competitive Selective Classification". Journal of Artificial Intelligence Research 52 (January 26, 2015): 171–201. http://dx.doi.org/10.1613/jair.4439.
Wang, Yan, Xiu Xia Wang, and Sheng Lai. "A Kind of Combination Feature Division and Diversity Measure of Multi-Classifier Selective Ensemble Algorithm". Applied Mechanics and Materials 63-64 (June 2011): 55–58. http://dx.doi.org/10.4028/www.scientific.net/amm.63-64.55.
Liu, Li Min, and Xiao Ping Fan. "A Survey: Clustering Ensemble Selection". Advanced Materials Research 403-408 (November 2011): 2760–63. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.2760.
Nikhar, Sonam, and A. M. Karandikar. "Prediction of Heart Disease Using Different Classification Techniques". APTIKOM Journal on Computer Science and Information Technologies 2, no. 2 (July 1, 2017): 68–76. http://dx.doi.org/10.11591/aptikom.j.csit.106.
Tao, Xiaoling, Yong Wang, Yi Wei, and Ye Long. "Network Traffic Classification Based on Multi-Classifier Selective Ensemble". Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering) 8, no. 2 (September 9, 2015): 88–94. http://dx.doi.org/10.2174/235209650802150909112547.
Wei, Leyi, Shixiang Wan, Jiasheng Guo, and Kelvin KL Wong. "A novel hierarchical selective ensemble classifier with bioinformatics application". Artificial Intelligence in Medicine 83 (November 2017): 82–90. http://dx.doi.org/10.1016/j.artmed.2017.02.005.
Zhang, Xiao Hua, Zhi Fei Liu, Ya Jun Guo, and Li Qiang Zhao. "Selective Facial Expression Recognition Using fastICA". Advanced Materials Research 433-440 (January 2012): 2755–61. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.2755.
Dissertations and theses on the topic "Selective classifier"
Sayin, Günel Burcu. "Towards Reliable Hybrid Human-Machine Classifiers". Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/349843.
Texto completoBOLDT, F. A. "Classifier Ensemble Feature Selection for Automatic Fault Diagnosis". Universidade Federal do Espírito Santo, 2017. http://repositorio.ufes.br/handle/10/9872.
Texto completo"An efficient ensemble feature selection scheme applied for fault diagnosis is proposed, based on three hypothesis: a. A fault diagnosis system does not need to be restricted to a single feature extraction model, on the contrary, it should use as many feature models as possible, since the extracted features are potentially discriminative and the feature pooling is subsequently reduced with feature selection; b. The feature selection process can be accelerated, without loss of classification performance, combining feature selection methods, in a way that faster and weaker methods reduce the number of potentially non-discriminative features, sending to slower and stronger methods a filtered smaller feature set; c. The optimal feature set for a multi-class problem might be different for each pair of classes. Therefore, the feature selection should be done using an one versus one scheme, even when multi-class classifiers are used. However, since the number of classifiers grows exponentially to the number of the classes, expensive techniques like Error-Correcting Output Codes (ECOC) might have a prohibitive computational cost for large datasets. Thus, a fast one versus one approach must be used to alleviate such a computational demand. These three hypothesis are corroborated by experiments. The main hypothesis of this work is that using these three approaches together is possible to improve significantly the classification performance of a classifier to identify conditions in industrial processes. Experiments have shown such an improvement for the 1-NN classifier in industrial processes used as case study."
Thapa, Mandira. "Optimal Feature Selection for Spatial Histogram Classifiers". Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1513710294627304.
Gustafsson, Robin. "Ordering Classifier Chains using filter model feature selection techniques". Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14817.
Duangsoithong, Rakkrit. "Feature selection and causal discovery for ensemble classifiers". Thesis, University of Surrey, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.580345.
Ko, Albert Hung-Ren. "Static and dynamic selection of ensemble of classifiers". Thesis, Montréal: École de technologie supérieure, 2007. http://proquest.umi.com/pqdweb?did=1467895171&sid=2&Fmt=2&clientId=46962&RQT=309&VName=PQD.
Texto completo"A thesis presented to the École de technologie supérieure in partial fulfillment of the thesis requirement for the degree of the Ph.D. engineering". CaQMUQET Bibliogr. : f. [237]-246. Également disponible en version électronique. CaQMUQET
McCrae, Richard. "The Impact of Cost on Feature Selection for Classifiers". Thesis, Nova Southeastern University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=13423087.
Supervised machine learning models are increasingly being used for medical diagnosis. The diagnostic problem is formulated as a binary classification task in which trained classifiers make predictions based on a set of input features. In diagnosis, these features are typically procedures or tests with associated costs. The cost of applying a trained classifier for diagnosis may be estimated as the total cost of obtaining values for the features that serve as inputs for the classifier. Obtaining classifiers based on a low cost set of input features with acceptable classification accuracy is of interest to practitioners and researchers. What makes this problem even more challenging is that costs associated with features vary with patients and service providers and change over time.
This dissertation aims to address this problem by proposing a method for obtaining low cost classifiers that meet specified accuracy requirements under dynamically changing costs. Given a set of relevant input features and accuracy requirements, the goal is to identify all qualifying classifiers based on subsets of the feature set. Then, for any arbitrary costs associated with the features, the cost of the classifiers may be computed and candidate classifiers selected based on cost-accuracy tradeoff. Since the number of relevant input features k tends to be large for typical diagnosis problems, training and testing classifiers based on all 2^k − 1 possible non-empty subsets of features is computationally prohibitive. Under the reasonable assumption that the accuracy of a classifier is no lower than that of any classifier based on a subset of its input features, this dissertation aims to develop an efficient method to identify all qualifying classifiers.
This study used two types of classifiers—artificial neural networks and classification trees—that have proved promising for numerous problems as documented in the literature. The approach was to measure the accuracy obtained with the classifiers when all features were used. Then, reduced accuracy thresholds were established that could be satisfied with subsets of the complete feature set. Threshold values for three measures (true positive rates, true negative rates, and overall classification accuracy) were considered for the classifiers. Two cost functions were used for the features: one used unit costs and the other random costs. Additional manipulation of costs was also performed.
The order in which features were removed was found to have a material impact on the effort required (removing the most important features first was most efficient, removing the least important features first was least efficient). The accuracy and cost measures were combined to produce a Pareto-Optimal Frontier. There were consistently few elements on this Frontier. At most 15 subsets were on the Frontier even when there were hundreds of thousands of acceptable feature sets. Most of the computational time is taken for training and testing the models. Given costs, models in the Pareto-Optimal Frontier can be efficiently identified and the models may be presented to decision makers. Both the Neural Networks and the Decision Trees performed in a comparable fashion suggesting that any classifier could be employed.
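Once each qualifying feature subset has a cost and an accuracy, the Pareto-Optimal Frontier described above can be extracted cheaply: keep a subset only if no alternative is both cheaper and at least as accurate. A minimal sketch with hypothetical numbers, not the dissertation's models or data:

```python
def pareto_frontier(candidates):
    """candidates: list of (cost, accuracy) pairs.
    Returns the non-dominated pairs, sorted by increasing cost."""
    frontier, best_acc = [], float("-inf")
    for cost, acc in sorted(candidates):  # ascending cost
        if acc > best_acc:                # strictly improves accuracy
            frontier.append((cost, acc))
            best_acc = acc
    return frontier

# Hypothetical (cost, accuracy) pairs for qualifying feature subsets.
subsets = [(3, 0.91), (5, 0.93), (2, 0.88), (7, 0.93), (4, 0.95)]
print(pareto_frontier(subsets))  # [(2, 0.88), (3, 0.91), (4, 0.95)]
```

The sort makes the scan O(n log n), so even hundreds of thousands of acceptable feature sets reduce to the handful of frontier elements in negligible time; the expensive part remains training and testing the models, as the abstract notes.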
McCrae, Richard Clyde. "The Impact of Cost on Feature Selection for Classifiers". Diss., NSUWorks, 2018. https://nsuworks.nova.edu/gscis_etd/1057.
Pinagé, Felipe Azevedo. "Handling Concept Drift Based on Data Similarity and Dynamic Classifier Selection". Universidade Federal do Amazonas, 2017. http://tede.ufam.edu.br/handle/tede/5956.
Texto completoApproved for entry into archive by Divisão de Documentação/BC Biblioteca Central (ddbc@ufam.edu.br) on 2017-10-16T18:54:52Z (GMT) No. of bitstreams: 2 license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5) Tese - Felipe A. Pinagé.pdf: 1786179 bytes, checksum: 25c2a867ba549f75fe4adf778d3f3ad0 (MD5)
Made available in DSpace on 2017-10-16T18:54:52Z (GMT). No. of bitstreams: 2 license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5) Tese - Felipe A. Pinagé.pdf: 1786179 bytes, checksum: 25c2a867ba549f75fe4adf778d3f3ad0 (MD5) Previous issue date: 2017-07-28
FAPEAM - Fundação de Amparo à Pesquisa do Estado do Amazonas
In real-world applications, machine learning algorithms can be employed to perform spam detection, environmental monitoring, fraud detection, web click stream analysis, among others. Most of these problems present an environment that changes over time due to the dynamic generation process of the data and/or due to streaming data. The problem involving classification tasks on continuous data streams has become one of the major challenges of the machine learning domain in the last decades because, since data is not known in advance, it must be learned as it becomes available. In addition, fast predictions about the data must be made to support decisions that are often taken in real time. Currently in the literature, methods based on accuracy monitoring are commonly used to detect changes explicitly. However, these methods may become infeasible in some real-world applications, especially due to two aspects: they may need human operator feedback, and they may depend on a significant decrease in accuracy to be able to detect changes. In addition, most of these methods are also based on incremental learning, updating the decision model for every incoming example, which may lead the system to unnecessary updates. To overcome these problems, this thesis proposes two semi-supervised methods, based on estimating and monitoring a pseudo error, to detect changes explicitly. The decision model is updated only after a change is detected. In the first method, the pseudo error is calculated using similarity measures, by monitoring the dissimilarity between past and current data distributions. The second proposed method employs dynamic classifier selection to improve the pseudo error measurement. As a consequence, this second method allows online self-training of classifier ensembles. The experiments conducted show that the proposed methods achieve competitive results, even when compared to fully supervised incremental learning methods.
These methods, especially the second, are relevant because they make change detection and reaction applicable to several practical problems while reaching high accuracy rates, in settings where it is usually not possible to generate the true labels of the instances fully and immediately after classification.
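The core loop of the first method, estimating a pseudo error by monitoring the dissimilarity between past and current data distributions and retraining only when it crosses a threshold, can be sketched as follows. This is our simplified illustration, not the thesis algorithm; the histogram-based total-variation measure, the window sizes, and the threshold value are all assumptions for the example.

```python
import numpy as np

def histogram_divergence(ref, cur, bins=20):
    """Dissimilarity between two 1-D samples: total variation
    distance between their normalized histograms."""
    lo = min(ref.min(), cur.min())
    hi = max(ref.max(), cur.max())
    p, _ = np.histogram(ref, bins=bins, range=(lo, hi))
    q, _ = np.histogram(cur, bins=bins, range=(lo, hi))
    p = p / (p.sum() + 1e-12)
    q = q / (q.sum() + 1e-12)
    return 0.5 * np.abs(p - q).sum()

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, 500)     # window the model was trained on
stream_ok = rng.normal(0.0, 1.0, 500)     # same concept: no change
stream_drift = rng.normal(3.0, 1.0, 500)  # shifted concept: drift

THRESHOLD = 0.3  # hypothetical, tuned per application
print(histogram_divergence(reference, stream_ok) > THRESHOLD)     # False
print(histogram_divergence(reference, stream_drift) > THRESHOLD)  # True
```

No true labels are consulted anywhere in the loop, which is what makes this style of detection semi-supervised: the model is only retrained (with whatever labels can then be obtained) after the distributional alarm fires.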
Ha, David. "Boundary uncertainty-based classifier evaluation". Thesis, Doshisha University, 2019. https://doors.doshisha.ac.jp/opac/opac_link/bibid/BB13128126/?lang=0.
We propose a general method that makes accurate evaluation of any classifier model possible for realistic tasks, both in a theoretical sense, despite the finiteness of the available data, and in a practical sense, in terms of computation costs. The classifier evaluation challenge arises from the bias of the classification error estimate, which is based only on finite data. We bypass this difficulty by proposing a new classifier evaluation measure called "boundary uncertainty", whose estimate based on finite data can be considered a reliable representative of its expectation based on infinite data, and demonstrate the potential of our approach on three classifier models and thirteen datasets.
Doctor of Philosophy in Engineering
Doshisha University
Books on the topic "Selective classifier"
Krimmel, Michael B., and Emilie K. Hartz, eds. Prison librarianship: A selective, annotated, classified bibliography, 1945-1985. Jefferson, N.C.: McFarland, 1987.
Mitchell, Alastair. Classified selective list of reading and other published material for the community worker. 2nd ed. London: National Federation of Community Organisations, 1988.
Ridgway, Peggi. Romancing in the personal ads: How to find your partner in the classifieds. La Mirada, CA: Wordpictures, 1996.
Broom, Herbert. Selection of Legal Maxims: Classified and Illustrated. Creative Media Partners, LLC, 2018.
Broom, Herbert. Selection of Legal Maxims, Classified and Illustrated. Creative Media Partners, LLC, 2018.
Broom, Herbert. Selection of Legal Maxims: Classified and Illustrated. Creative Media Partners, LLC, 2018.
Selection of Legal Maxims: Classified and Illustrated. Creative Media Partners, LLC, 2022.
Selection of Legal Maxims: Classified and Illustrated. Creative Media Partners, LLC, 2022.
Broom, Herbert. Selection of Legal Maxims, Classified and Illustrated. Creative Media Partners, LLC, 2018.
Broom, Herbert. A Selection of Legal Maxims: Classified and Illustrated. Franklin Classics, 2018.
Book chapters on the topic "Selective classifier"
Li, Nan, Yuan Jiang, and Zhi-Hua Zhou. "Multi-label Selective Ensemble". In Multiple Classifier Systems, 76–88. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20248-8_7.
Li, Nan, and Zhi-Hua Zhou. "Selective Ensemble under Regularization Framework". In Multiple Classifier Systems, 293–303. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02326-2_30.
Li, Nan, and Zhi-Hua Zhou. "Selective Ensemble of Classifier Chains". In Multiple Classifier Systems, 146–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38067-9_13.
Lu, Xuyao, Yan Yang, and Hongjun Wang. "Selective Clustering Ensemble Based on Covariance". In Multiple Classifier Systems, 179–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38067-9_16.
Krasotkina, Olga, Oleg Seredin, and Vadim Mottl. "Supervised Selective Combination of Diverse Object-Representation Modalities for Regression Estimation". In Multiple Classifier Systems, 89–99. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20248-8_8.
Tatarchuk, Alexander, Eugene Urlov, Vadim Mottl, and David Windridge. "A Support Kernel Machine for Supervised Selective Combining of Diverse Pattern-Recognition Modalities". In Multiple Classifier Systems, 165–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12127-2_17.
Tatarchuk, Alexander, Valentina Sulimova, David Windridge, Vadim Mottl, and Mikhail Lange. "Supervised Selective Combining Pattern Recognition Modalities and Its Application to Signature Verification by Fusing On-Line and Off-Line Kernels". In Multiple Classifier Systems, 324–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02326-2_33.
Velasco, Horacio M. González, Carlos J. García Orellana, Miguel Macías Macías, and Ramón Gallardo Caballero. "Selective Color Edge Detector Based on a Neural Classifier". In Advanced Concepts for Intelligent Vision Systems, 84–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11558484_11.
Ashwini, S. S., M. Z. Kurian, and M. Nagaraja. "Lung Cancer Detection and Prediction Using Customized Selective Segmentation Technique with SVM Classifier". In Emerging Research in Computing, Information, Communication and Applications, 37–44. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1342-5_4.
Hue, Carine, Marc Boullé, and Vincent Lemaire. "Online Learning of a Weighted Selective Naive Bayes Classifier with Non-convex Optimization". In Advances in Knowledge Discovery and Management, 3–17. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-45763-5_1.
Conference papers on the topic "Selective classifier"
Germi, Saeed Bakhshi, Esa Rahtu, and Heikki Huttunen. "Selective Probabilistic Classifier Based on Hypothesis Testing". In 2021 9th European Workshop on Visual Information Processing (EUVIP). IEEE, 2021. http://dx.doi.org/10.1109/euvip50544.2021.9483967.
Ahmad, Irshad, Abdul Muhamin Naeem, Muhammad Islam, and Azween Bin Abdullah. "Statistical Based Real-Time Selective Herbicide Weed Classifier". In 2007 IEEE International Multitopic Conference (INMIC). IEEE, 2007. http://dx.doi.org/10.1109/inmic.2007.4557689.
Chen, Jingnian, and Li Xu. "A Hybrid Selective Classifier for Categorizing Incomplete Data". In 2009 Sixth International Conference on Fuzzy Systems and Knowledge Discovery. IEEE, 2009. http://dx.doi.org/10.1109/fskd.2009.257.
Fan, Yawen, Husheng Li, and Chao Tian. "Selective Sampling Based Efficient Classifier Representation in Distributed Learning". In GLOBECOM 2016 - 2016 IEEE Global Communications Conference. IEEE, 2016. http://dx.doi.org/10.1109/glocom.2016.7842257.
Ning, Bo, XianBin Cao, YanWu Xu, and Jun Zhang. "Virus-evolutionary genetic algorithm based selective ensemble classifier for pedestrian detection". In the first ACM/SIGEVO Summit. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1543834.1543893.
Boulle, M. "Regularization and Averaging of the Selective Naïve Bayes classifier". In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.246637.
Bai, Lixia, Hong Li, and Weifeng Gao. "A Selective Ensemble Classifier Using Multiobjective Optimization Based Extreme Learning Machine Algorithm". In 2021 17th International Conference on Computational Intelligence and Security (CIS). IEEE, 2021. http://dx.doi.org/10.1109/cis54983.2021.00017.
Ortiz-Bayliss, Jose C., Hugo Terashima-Marin, and Santiago E. Conant-Pablos. "Using learning classifier systems to design selective hyper-heuristics for constraint satisfaction problems". In 2013 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2013. http://dx.doi.org/10.1109/cec.2013.6557885.
Honda, Toshifumi, Ryo Nakagaki, Obara Kenji, and Yuji Takagi. "Fuzzy selective voting classifier with defect extraction based on comparison within an image". In International Symposium on Multispectral Image Processing and Pattern Recognition, edited by S. J. Maybank, Mingyue Ding, F. Wahl, and Yaoting Zhu. SPIE, 2007. http://dx.doi.org/10.1117/12.750528.
Balasubramanian, Ram, M. A. El-Sharkawi, R. J. Marks, Jae-Byung Jung, R. T. Miyamoto, G. M. Andersen, C. J. Eggen, and W. L. J. Fox. "Self-selective clustering of training data using the maximally-receptive classifier/regression bank". In 2009 IEEE International Conference on Systems, Man and Cybernetics - SMC. IEEE, 2009. http://dx.doi.org/10.1109/icsmc.2009.5346820.
Reports on the topic "Selective classifier"
Searcy, Stephen W., and Kalman Peleg. Adaptive Sorting of Fresh Produce. United States Department of Agriculture, August 1993. http://dx.doi.org/10.32747/1993.7568747.bard.
Webb, Geoffrey, and Mark Carman. Dynamic Dimensionality Selection for Bayesian Classifier Ensembles. Fort Belvoir, VA: Defense Technical Information Center, March 2015. http://dx.doi.org/10.21236/ada614917.
Zchori-Fein, Einat, Judith K. Brown, and Nurit Katzir. Biocomplexity and Selective modulation of whitefly symbiotic composition. United States Department of Agriculture, June 2006. http://dx.doi.org/10.32747/2006.7591733.bard.
Dzanku, Fred M., and Louis S. Hodey. Achieving Inclusive Oil Palm Commercialisation in Ghana. Institute of Development Studies (IDS), February 2022. http://dx.doi.org/10.19088/apra.2022.007.
Zhao, Bingyu, Saul Burdman, Ronald Walcott, Tal Pupko, and Gregory Welbaum. Identifying pathogenic determinants of Acidovorax citrulli toward the control of bacterial fruit blotch of cucurbits. United States Department of Agriculture, January 2014. http://dx.doi.org/10.32747/2014.7598168.bard.
Brosh, Arieh, Gordon Carstens, Kristen Johnson, Ariel Shabtay, Joshuah Miron, Yoav Aharoni, Luis Tedeschi, and Ilan Halachmi. Enhancing Sustainability of Cattle Production Systems through Discovery of Biomarkers for Feed Efficiency. United States Department of Agriculture, July 2011. http://dx.doi.org/10.32747/2011.7592644.bard.
Multiple Engine Faults Detection Using Variational Mode Decomposition and GA-K-means. SAE International, March 2022. http://dx.doi.org/10.4271/2022-01-0616.