A selection of scholarly literature on the topic "Selective classifier"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Selective classifier".
Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Selective classifier"
Pernkopf, Franz. "Bayesian network classifiers versus selective k-NN classifier." Pattern Recognition 38, no. 1 (January 2005): 1–10. http://dx.doi.org/10.1016/j.patcog.2004.05.012.
Wares, Scott, John Isaacs, and Eyad Elyan. "Burst Detection-Based Selective Classifier Resetting." Journal of Information & Knowledge Management 20, no. 02 (April 23, 2021): 2150027. http://dx.doi.org/10.1142/s0219649221500271.
Li, Kai, and Hong Tao Gao. "A Subgraph-Based Selective Classifier Ensemble Algorithm." Advanced Materials Research 219-220 (March 2011): 261–64. http://dx.doi.org/10.4028/www.scientific.net/amr.219-220.261.
Wiener, Yair, and Ran El-Yaniv. "Agnostic Pointwise-Competitive Selective Classification." Journal of Artificial Intelligence Research 52 (January 26, 2015): 171–201. http://dx.doi.org/10.1613/jair.4439.
Wang, Yan, Xiu Xia Wang, and Sheng Lai. "A Kind of Combination Feature Division and Diversity Measure of Multi-Classifier Selective Ensemble Algorithm." Applied Mechanics and Materials 63-64 (June 2011): 55–58. http://dx.doi.org/10.4028/www.scientific.net/amm.63-64.55.
Liu, Li Min, and Xiao Ping Fan. "A Survey: Clustering Ensemble Selection." Advanced Materials Research 403-408 (November 2011): 2760–63. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.2760.
Nikhar, Sonam, and A. M. Karandikar. "Prediction of Heart Disease Using Different Classification Techniques." APTIKOM Journal on Computer Science and Information Technologies 2, no. 2 (July 1, 2017): 68–76. http://dx.doi.org/10.11591/aptikom.j.csit.106.
Tao, Xiaoling, Yong Wang, Yi Wei, and Ye Long. "Network Traffic Classification Based on Multi-Classifier Selective Ensemble." Recent Advances in Electrical & Electronic Engineering (Formerly Recent Patents on Electrical & Electronic Engineering) 8, no. 2 (September 9, 2015): 88–94. http://dx.doi.org/10.2174/235209650802150909112547.
Wei, Leyi, Shixiang Wan, Jiasheng Guo, and Kelvin KL Wong. "A novel hierarchical selective ensemble classifier with bioinformatics application." Artificial Intelligence in Medicine 83 (November 2017): 82–90. http://dx.doi.org/10.1016/j.artmed.2017.02.005.
Zhang, Xiao Hua, Zhi Fei Liu, Ya Jun Guo, and Li Qiang Zhao. "Selective Facial Expression Recognition Using fastICA." Advanced Materials Research 433-440 (January 2012): 2755–61. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.2755.
Повний текст джерелаДисертації з теми "Selective classifier"
Sayin, Günel Burcu. "Towards Reliable Hybrid Human-Machine Classifiers." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/349843.
Boldt, F. A. "Classifier Ensemble Feature Selection for Automatic Fault Diagnosis." Universidade Federal do Espírito Santo, 2017. http://repositorio.ufes.br/handle/10/9872.
Повний текст джерела"An efficient ensemble feature selection scheme applied for fault diagnosis is proposed, based on three hypothesis: a. A fault diagnosis system does not need to be restricted to a single feature extraction model, on the contrary, it should use as many feature models as possible, since the extracted features are potentially discriminative and the feature pooling is subsequently reduced with feature selection; b. The feature selection process can be accelerated, without loss of classification performance, combining feature selection methods, in a way that faster and weaker methods reduce the number of potentially non-discriminative features, sending to slower and stronger methods a filtered smaller feature set; c. The optimal feature set for a multi-class problem might be different for each pair of classes. Therefore, the feature selection should be done using an one versus one scheme, even when multi-class classifiers are used. However, since the number of classifiers grows exponentially to the number of the classes, expensive techniques like Error-Correcting Output Codes (ECOC) might have a prohibitive computational cost for large datasets. Thus, a fast one versus one approach must be used to alleviate such a computational demand. These three hypothesis are corroborated by experiments. The main hypothesis of this work is that using these three approaches together is possible to improve significantly the classification performance of a classifier to identify conditions in industrial processes. Experiments have shown such an improvement for the 1-NN classifier in industrial processes used as case study."
Thapa, Mandira. "Optimal Feature Selection for Spatial Histogram Classifiers." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1513710294627304.
Gustafsson, Robin. "Ordering Classifier Chains using filter model feature selection techniques." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14817.
Duangsoithong, Rakkrit. "Feature selection and causal discovery for ensemble classifiers." Thesis, University of Surrey, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.580345.
Ko, Albert Hung-Ren. "Static and dynamic selection of ensemble of classifiers." Thesis, Montréal: École de technologie supérieure, 2007. http://proquest.umi.com/pqdweb?did=1467895171&sid=2&Fmt=2&clientId=46962&RQT=309&VName=PQD.
Повний текст джерела"A thesis presented to the École de technologie supérieure in partial fulfillment of the thesis requirement for the degree of the Ph.D. engineering". CaQMUQET Bibliogr. : f. [237]-246. Également disponible en version électronique. CaQMUQET
McCrae, Richard. "The Impact of Cost on Feature Selection for Classifiers." Thesis, Nova Southeastern University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=13423087.
Supervised machine learning models are increasingly being used for medical diagnosis. The diagnostic problem is formulated as a binary classification task in which trained classifiers make predictions based on a set of input features. In diagnosis, these features are typically procedures or tests with associated costs. The cost of applying a trained classifier for diagnosis may be estimated as the total cost of obtaining values for the features that serve as inputs for the classifier. Obtaining classifiers based on a low-cost set of input features with acceptable classification accuracy is of interest to practitioners and researchers. What makes this problem even more challenging is that costs associated with features vary with patients and service providers and change over time.
This dissertation aims to address this problem by proposing a method for obtaining low-cost classifiers that meet specified accuracy requirements under dynamically changing costs. Given a set of relevant input features and accuracy requirements, the goal is to identify all qualifying classifiers based on subsets of the feature set. Then, for any arbitrary costs associated with the features, the cost of the classifiers may be computed and candidate classifiers selected based on cost-accuracy tradeoff. Since the number of relevant input features k tends to be large for typical diagnosis problems, training and testing classifiers based on all 2^k - 1 possible non-empty subsets of features is computationally prohibitive. Under the reasonable assumption that the accuracy of a classifier is no lower than that of any classifier based on a subset of its input features, this dissertation aims to develop an efficient method to identify all qualifying classifiers.
This study used two types of classifiers—artificial neural networks and classification trees—that have proved promising for numerous problems as documented in the literature. The approach was to measure the accuracy obtained with the classifiers when all features were used. Then, reduced accuracy thresholds were arbitrarily established that could be satisfied with subsets of the complete feature set. Threshold values for three measures (true positive rate, true negative rate, and overall classification accuracy) were considered for the classifiers. Two cost functions were used for the features; one used unit costs and the other random costs. Additional manipulation of costs was also performed.
The order in which features were removed was found to have a material impact on the effort required (removing the most important features first was most efficient, removing the least important features first was least efficient). The accuracy and cost measures were combined to produce a Pareto-Optimal Frontier. There were consistently few elements on this Frontier. At most 15 subsets were on the Frontier even when there were hundreds of thousands of acceptable feature sets. Most of the computational time is taken for training and testing the models. Given costs, models in the Pareto-Optimal Frontier can be efficiently identified and the models may be presented to decision makers. Both the Neural Networks and the Decision Trees performed in a comparable fashion suggesting that any classifier could be employed.
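As an illustration of the cost-accuracy trade-off step described in the abstract above, here is a small, hypothetical Python sketch: given feature subsets whose accuracies have already been measured, and a current price per feature, it keeps only the subsets on the Pareto-optimal frontier (no other subset is both cheaper and at least as accurate). The feature names, costs, and accuracies are invented for illustration and are not taken from the dissertation.

```python
# Hypothetical sketch of selecting Pareto-optimal (cost, accuracy) classifiers
# from pre-evaluated feature subsets; all numbers below are made up.
from typing import Dict, FrozenSet, List, Tuple

def subset_cost(subset: FrozenSet[str], feature_cost: Dict[str, float]) -> float:
    """Total cost of obtaining every feature in the subset."""
    return sum(feature_cost[f] for f in subset)

def pareto_frontier(accuracy: Dict[FrozenSet[str], float],
                    feature_cost: Dict[str, float]) -> List[Tuple[FrozenSet[str], float, float]]:
    """Return (subset, cost, accuracy) triples that no other subset dominates."""
    scored = [(subset_cost(s, feature_cost), acc, s) for s, acc in accuracy.items()]
    scored.sort(key=lambda t: (t[0], -t[1]))   # cheapest first, ties by accuracy
    frontier, best_acc = [], -1.0
    for cost, acc, s in scored:
        if acc > best_acc:                     # keep only strict accuracy gains
            frontier.append((s, cost, acc))
            best_acc = acc
    return frontier

# Accuracies measured once; costs may change later without any re-training.
accuracy = {frozenset({"ecg"}): 0.81,
            frozenset({"ecg", "troponin"}): 0.90,
            frozenset({"ecg", "mri"}): 0.88,
            frozenset({"ecg", "troponin", "mri"}): 0.91}
feature_cost = {"ecg": 20.0, "troponin": 35.0, "mri": 400.0}

for subset, cost, acc in pareto_frontier(accuracy, feature_cost):
    print(sorted(subset), f"cost={cost:.0f}", f"accuracy={acc:.2f}")
```

Because costs enter only at this final ranking step, the same bank of trained and evaluated classifiers can be re-ranked whenever provider prices change, which matches the dynamically changing cost scenario the abstract describes.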
McCrae, Richard Clyde. "The Impact of Cost on Feature Selection for Classifiers." Diss., NSUWorks, 2018. https://nsuworks.nova.edu/gscis_etd/1057.
Pinagé, Felipe Azevedo. "Handling Concept Drift Based on Data Similarity and Dynamic Classifier Selection." Universidade Federal do Amazonas, 2017. http://tede.ufam.edu.br/handle/tede/5956.
FAPEAM - Fundação de Amparo à Pesquisa do Estado do Amazonas
In real-world applications, machine learning algorithms can be employed for spam detection, environmental monitoring, fraud detection, web click-stream analysis, and other tasks. Most of these problems involve an environment that changes over time, because the data are generated dynamically and/or arrive as a stream. Classification over continuous data streams has become one of the major challenges in machine learning in recent decades: since the data are not known in advance, they must be learned as they become available, and predictions must be fast enough to support decisions that are often made in real time. In the current literature, methods based on accuracy monitoring are commonly used to detect changes explicitly. However, these methods may be infeasible in some real-world applications for two main reasons: they may require feedback from a human operator, and they may depend on a significant drop in accuracy before a change can be detected. In addition, most of these methods are based on incremental learning, updating the decision model for every incoming example, which can lead to unnecessary updates. To overcome these problems, this thesis proposes two semi-supervised methods that detect changes explicitly by estimating and monitoring a pseudo-error; the decision model is updated only after a change is detected. In the first method, the pseudo-error is computed from similarity measures that track the dissimilarity between past and current data distributions. The second method employs dynamic classifier selection to improve the pseudo-error estimate and, as a consequence, enables online self-training of a classifier ensemble. The experiments show that the proposed methods achieve competitive results even when compared with fully supervised incremental learning methods. This is relevant, especially for the second method, because it makes change detection and reaction applicable, with high accuracy, to many practical problems in which the true labels of instances cannot usually be obtained immediately after classification.
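To make the first proposed method more concrete, here is a minimal, hypothetical Python sketch of drift detection by distribution dissimilarity: a reference window is compared with each incoming unlabeled batch through an average per-feature Hellinger distance (a stand-in for the pseudo-error), and the model is retrained only when that score crosses a threshold. The window sizes, the distance measure, and the threshold are illustrative assumptions, not Pinagé's actual algorithm.

```python
# Hypothetical sketch of semi-supervised drift detection via a distribution
# dissimilarity "pseudo error"; sizes, measure, and threshold are illustrative.
import numpy as np

def hellinger(p: np.ndarray, q: np.ndarray) -> float:
    """Hellinger distance between two discrete distributions (histograms)."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def batch_dissimilarity(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Average per-feature Hellinger distance between two data windows."""
    dists = []
    for j in range(reference.shape[1]):
        lo = min(reference[:, j].min(), current[:, j].min())
        hi = max(reference[:, j].max(), current[:, j].max())
        edges = np.linspace(lo, hi, bins + 1)
        p, _ = np.histogram(reference[:, j], bins=edges)
        q, _ = np.histogram(current[:, j], bins=edges)
        p = p / max(p.sum(), 1)
        q = q / max(q.sum(), 1)
        dists.append(hellinger(p, q))
    return float(np.mean(dists))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(500, 3))   # window the model was trained on
threshold = 0.25                                   # illustrative drift threshold

for t in range(10):
    shift = 0.0 if t < 5 else 2.0                  # abrupt drift injected at t = 5
    batch = rng.normal(shift, 1.0, size=(200, 3))  # unlabeled incoming batch
    score = batch_dissimilarity(reference, batch)
    if score > threshold:
        print(f"t={t}: drift detected (dissimilarity {score:.2f}), retrain and reset reference")
        reference = batch                          # the new concept becomes the reference
    else:
        print(f"t={t}: no drift (dissimilarity {score:.2f}), keep the current classifier")
```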
Ha, David. "Boundary uncertainty-based classifier evaluation." Thesis, https://doors.doshisha.ac.jp/opac/opac_link/bibid/BB13128126/?lang=0, 2019. https://doors.doshisha.ac.jp/opac/opac_link/bibid/BB13128126/?lang=0.
We propose a general method that enables accurate evaluation of any classifier model for realistic tasks, both in a theoretical sense despite the finiteness of the available data, and in a practical sense in terms of computation costs. The classifier evaluation challenge arises from the bias of the classification error estimate that is only based on finite data. We bypass this existing difficulty by proposing a new classifier evaluation measure called "boundary uncertainty" whose estimate based on finite data can be considered a reliable representative of its expectation based on infinite data, and demonstrate the potential of our approach on three classifier models and thirteen datasets.
Doctor of Philosophy in Engineering
Doshisha University
Books on the topic "Selective classifier"
Krimmel, Michael B., and Emilie K. Hartz, eds. Prison librarianship: A selective, annotated, classified bibliography, 1945-1985. Jefferson, N.C.: McFarland, 1987.
Mitchell, Alastair. Classified selective list of reading and other published material for the community worker. 2nd ed. London: National Federation of Community Organisations, 1988.
Ridgway, Peggi. Romancing in the personal ads: How to find your partner in the classifieds. La Mirada, CA: Wordpictures, 1996.
Broom, Herbert. Selection of Legal Maxims: Classified and Illustrated. Creative Media Partners, LLC, 2018.
Broom, Herbert. Selection of Legal Maxims, Classified and Illustrated. Creative Media Partners, LLC, 2018.
Selection of Legal Maxims: Classified and Illustrated. Creative Media Partners, LLC, 2022.
Broom, Herbert. A Selection of Legal Maxims: Classified and Illustrated. Franklin Classics, 2018.
Book chapters on the topic "Selective classifier"
Li, Nan, Yuan Jiang, and Zhi-Hua Zhou. "Multi-label Selective Ensemble." In Multiple Classifier Systems, 76–88. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20248-8_7.
Повний текст джерелаLi, Nan, and Zhi-Hua Zhou. "Selective Ensemble under Regularization Framework." In Multiple Classifier Systems, 293–303. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02326-2_30.
Повний текст джерелаLi, Nan, and Zhi-Hua Zhou. "Selective Ensemble of Classifier Chains." In Multiple Classifier Systems, 146–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38067-9_13.
Повний текст джерелаLu, Xuyao, Yan Yang, and Hongjun Wang. "Selective Clustering Ensemble Based on Covariance." In Multiple Classifier Systems, 179–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38067-9_16.
Повний текст джерелаKrasotkina, Olga, Oleg Seredin, and Vadim Mottl. "Supervised Selective Combination of Diverse Object-Representation Modalities for Regression Estimation." In Multiple Classifier Systems, 89–99. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-20248-8_8.
Повний текст джерелаTatarchuk, Alexander, Eugene Urlov, Vadim Mottl, and David Windridge. "A Support Kernel Machine for Supervised Selective Combining of Diverse Pattern-Recognition Modalities." In Multiple Classifier Systems, 165–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12127-2_17.
Повний текст джерелаTatarchuk, Alexander, Valentina Sulimova, David Windridge, Vadim Mottl, and Mikhail Lange. "Supervised Selective Combining Pattern Recognition Modalities and Its Application to Signature Verification by Fusing On-Line and Off-Line Kernels." In Multiple Classifier Systems, 324–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02326-2_33.
Повний текст джерелаVelasco, Horacio M. González, Carlos J. García Orellana, Miguel Macías Macías, and Ramón Gallardo Caballero. "Selective Color Edge Detector Based on a Neural Classifier." In Advanced Concepts for Intelligent Vision Systems, 84–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11558484_11.
Повний текст джерелаAshwini, S. S., M. Z. Kurian, and M. Nagaraja. "Lung Cancer Detection and Prediction Using Customized Selective Segmentation Technique with SVM Classifier." In Emerging Research in Computing, Information, Communication and Applications, 37–44. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1342-5_4.
Повний текст джерелаHue, Carine, Marc Boullé, and Vincent Lemaire. "Online Learning of a Weighted Selective Naive Bayes Classifier with Non-convex Optimization." In Advances in Knowledge Discovery and Management, 3–17. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-45763-5_1.
Conference papers on the topic "Selective classifier"
Germi, Saeed Bakhshi, Esa Rahtu, and Heikki Huttunen. "Selective Probabilistic Classifier Based on Hypothesis Testing." In 2021 9th European Workshop on Visual Information Processing (EUVIP). IEEE, 2021. http://dx.doi.org/10.1109/euvip50544.2021.9483967.
Ahmad, Irshad, Abdul Muhamin Naeem, Muhammad Islam, and Azween Bin Abdullah. "Statistical Based Real-Time Selective Herbicide Weed Classifier." In 2007 IEEE International Multitopic Conference (INMIC). IEEE, 2007. http://dx.doi.org/10.1109/inmic.2007.4557689.
Chen, Jingnian, and Li Xu. "A Hybrid Selective Classifier for Categorizing Incomplete Data." In 2009 Sixth International Conference on Fuzzy Systems and Knowledge Discovery. IEEE, 2009. http://dx.doi.org/10.1109/fskd.2009.257.
Fan, Yawen, Husheng Li, and Chao Tian. "Selective Sampling Based Efficient Classifier Representation in Distributed Learning." In GLOBECOM 2016 - 2016 IEEE Global Communications Conference. IEEE, 2016. http://dx.doi.org/10.1109/glocom.2016.7842257.
Ning, Bo, XianBin Cao, YanWu Xu, and Jun Zhang. "Virus-evolutionary genetic algorithm based selective ensemble classifier for pedestrian detection." In the first ACM/SIGEVO Summit. New York, New York, USA: ACM Press, 2009. http://dx.doi.org/10.1145/1543834.1543893.
Boulle, M. "Regularization and Averaging of the Selective Naïve Bayes classifier." In The 2006 IEEE International Joint Conference on Neural Network Proceedings. IEEE, 2006. http://dx.doi.org/10.1109/ijcnn.2006.246637.
Bai, Lixia, Hong Li, and Weifeng Gao. "A Selective Ensemble Classifier Using Multiobjective Optimization Based Extreme Learning Machine Algorithm." In 2021 17th International Conference on Computational Intelligence and Security (CIS). IEEE, 2021. http://dx.doi.org/10.1109/cis54983.2021.00017.
Ortiz-Bayliss, Jose C., Hugo Terashima-Marin, and Santiago E. Conant-Pablos. "Using learning classifier systems to design selective hyper-heuristics for constraint satisfaction problems." In 2013 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2013. http://dx.doi.org/10.1109/cec.2013.6557885.
Honda, Toshifumi, Ryo Nakagaki, Obara Kenji, and Yuji Takagi. "Fuzzy selective voting classifier with defect extraction based on comparison within an image." In International Symposium on Multispectral Image Processing and Pattern Recognition, edited by S. J. Maybank, Mingyue Ding, F. Wahl, and Yaoting Zhu. SPIE, 2007. http://dx.doi.org/10.1117/12.750528.
Balasubramanian, Ram, M. A. El-Sharkawi, R. J. Marks, Jae-Byung Jung, R. T. Miyamoto, G. M. Andersen, C. J. Eggen, and W. L. J. Fox. "Self-selective clustering of training data using the maximally-receptive classifier/regression bank." In 2009 IEEE International Conference on Systems, Man and Cybernetics - SMC. IEEE, 2009. http://dx.doi.org/10.1109/icsmc.2009.5346820.
Reports of organizations on the topic "Selective classifier"
Searcy, Stephen W., and Kalman Peleg. Adaptive Sorting of Fresh Produce. United States Department of Agriculture, August 1993. http://dx.doi.org/10.32747/1993.7568747.bard.
Webb, Geoffrey, and Mark Carman. Dynamic Dimensionality Selection for Bayesian Classifier Ensembles. Fort Belvoir, VA: Defense Technical Information Center, March 2015. http://dx.doi.org/10.21236/ada614917.
Zchori-Fein, Einat, Judith K. Brown, and Nurit Katzir. Biocomplexity and Selective modulation of whitefly symbiotic composition. United States Department of Agriculture, June 2006. http://dx.doi.org/10.32747/2006.7591733.bard.
Dzanku, Fred M., and Louis S. Hodey. Achieving Inclusive Oil Palm Commercialisation in Ghana. Institute of Development Studies (IDS), February 2022. http://dx.doi.org/10.19088/apra.2022.007.
Zhao, Bingyu, Saul Burdman, Ronald Walcott, Tal Pupko, and Gregory Welbaum. Identifying pathogenic determinants of Acidovorax citrulli toward the control of bacterial fruit blotch of cucurbits. United States Department of Agriculture, January 2014. http://dx.doi.org/10.32747/2014.7598168.bard.
Brosh, Arieh, Gordon Carstens, Kristen Johnson, Ariel Shabtay, Joshuah Miron, Yoav Aharoni, Luis Tedeschi, and Ilan Halachmi. Enhancing Sustainability of Cattle Production Systems through Discovery of Biomarkers for Feed Efficiency. United States Department of Agriculture, July 2011. http://dx.doi.org/10.32747/2011.7592644.bard.
Multiple Engine Faults Detection Using Variational Mode Decomposition and GA-K-means. SAE International, March 2022. http://dx.doi.org/10.4271/2022-01-0616.