Academic literature on the topic 'ANN Classifiers'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'ANN Classifiers.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Dissertations / Theses on the topic "ANN Classifiers"

1. Eldud, Omer Ahmed Abdelkarim. "Prediction of protein secondary structure using binary classification trees, naive Bayes classifiers and the Logistic Regression Classifier." Thesis, Rhodes University, 2016. http://hdl.handle.net/10962/d1019985.

Abstract:
The secondary structure of proteins is predicted using various binary classifiers. The data are taken from the RS126 database and consist of protein primary and secondary structure sequences, originally encoded as alphabetic letters; these are re-encoded as unary vectors comprising only ones and zeros. Three binary classifiers, namely naive Bayes, logistic regression and classification trees, are trained on the encoded data using both hold-out and 5-fold cross validation. For each classifier, three classification tasks are considered: helix against not helix (H/∼H), sheet against not sheet (S/∼S) and coil against not coil (C/∼C). The performance of these binary classifiers is compared using overall accuracy in predicting the protein secondary structure for various window sizes. Our results indicate that hold-out validation achieved higher accuracy than 5-fold cross validation. The naive Bayes classifier, using 5-fold cross validation, achieved the lowest accuracy for predicting helix against not helix, while the classification tree classifiers, using 5-fold cross validation, achieved the lowest accuracies for both coil against not coil and sheet against not sheet. The accuracy of the logistic regression classifier depends on the window size, with a positive relationship between accuracy and window size. The logistic regression classifier achieved the highest accuracy of the three classifiers on every task: 77.74 percent for helix against not helix, 81.22 percent for sheet against not sheet and 73.39 percent for coil against not coil. It is noted that comparing classifiers would be easier if the entire classification process could be carried out in R; alternatively, assessing the logistic regression classifiers would be easier if SPSS had a function to determine their accuracy.
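
As a rough illustration of the workflow this abstract describes, the sketch below one-hot encodes sliding windows of a protein primary sequence into unary vectors and compares logistic regression, naive Bayes and a classification tree under 5-fold cross validation. The toy sequences, the window size of 5, the H/∼H labelling, the invented helper names and the scikit-learn stand-ins are all illustrative assumptions; the thesis itself works with the RS126 data and also uses hold-out validation.

    # A minimal sketch of window-based unary encoding and binary classification,
    # assuming toy sequences in place of the RS126 data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.tree import DecisionTreeClassifier

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def encode_window(window):
        """Encode a residue window as a unary (one-hot) vector of ones and zeros."""
        vec = np.zeros(len(window) * len(AMINO_ACIDS))
        for i, aa in enumerate(window):
            vec[i * len(AMINO_ACIDS) + AMINO_ACIDS.index(aa)] = 1.0
        return vec

    def make_windows(primary, secondary, size=5):
        """Slide a window over the primary sequence and label the centre residue
        as helix (1) or not helix (0), i.e. the H/~H task."""
        X, y = [], []
        for i in range(len(primary) - size + 1):
            X.append(encode_window(primary[i:i + size]))
            y.append(1 if secondary[i + size // 2] == "H" else 0)
        return np.array(X), np.array(y)

    # Toy primary/secondary structure pair (illustrative only).
    primary   = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
    secondary = "CCHHHHHHHHCCCEEEECCHHHHHHHHHHCCCC"
    X, y = make_windows(primary, secondary, size=5)

    for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                      ("naive Bayes", BernoulliNB()),
                      ("classification tree", DecisionTreeClassifier(random_state=0))]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: mean 5-fold CV accuracy = {acc:.3f}")
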
2. Joo, Hyonam. "Binary tree classifier and context classifier." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/53076.

Abstract:
Two methods of designing a point classifier are discussed in this paper: one is a binary decision tree classifier that uses Fisher's linear discriminant function as the decision rule at each nonterminal node, and the other is a contextual classifier that assigns each pixel the most probable label given a substantially sized context containing the pixel. Experiments were performed on both a simulated image and real images to illustrate the improvement in classification accuracy over the conventional single-stage Bayes classifier under a Gaussian distribution assumption.
Master of Science.
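
The per-node decision rule named in this abstract, Fisher's linear discriminant, can be sketched as below. The synthetic two-class data, the midpoint threshold and the node_decision helper are illustrative assumptions, not the thesis's actual tree design.

    # A minimal sketch of Fisher's linear discriminant used as a binary split
    # rule at a nonterminal node, on synthetic two-class data.
    import numpy as np

    rng = np.random.default_rng(0)
    class_a = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))
    class_b = rng.normal(loc=[3.0, 2.0], scale=1.0, size=(200, 2))

    m_a, m_b = class_a.mean(axis=0), class_b.mean(axis=0)
    # Within-class scatter matrix (sum of the two class scatter matrices).
    S_w = np.cov(class_a, rowvar=False) * (len(class_a) - 1) \
        + np.cov(class_b, rowvar=False) * (len(class_b) - 1)
    # Fisher direction w = S_w^{-1} (m_a - m_b); threshold at the projected midpoint.
    w = np.linalg.solve(S_w, m_a - m_b)
    threshold = w @ (m_a + m_b) / 2.0

    def node_decision(x):
        """Route a sample left or right at a nonterminal node."""
        return "left" if w @ x > threshold else "right"

    print(node_decision(np.array([0.5, 0.2])))   # routed with class A
    print(node_decision(np.array([3.2, 1.8])))   # routed with class B
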
3. Billing, Jeffrey J. (Jeffrey Joel) 1979. "Learning classifiers from medical data." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8068.

Abstract:
Thesis (M.Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (leaf 32).
The goal of this thesis was to use machine-learning techniques to discover classifiers from a database of medical data. Using two software programs, C5.0 and SVMLight, we analyzed a database of 150 patients who had been operated on by Dr. David Rattner of the Massachusetts General Hospital. C5.0 is an algorithm that learns decision trees from data, while SVMLight learns support vector machines. We performed cross-validation analysis with both techniques, and neither produced acceptable error rates; no classifiers could be found that performed well under cross-validation. Nonetheless, this paper provides a thorough examination of the issues that arise during the analysis of medical data, describes the techniques that were used, and discusses the issues with the data that affected their performance.
By Jeffrey J. Billing. M.Eng. and S.B.
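
A rough analogue of the evaluation described here, assuming scikit-learn's decision tree and SVM as stand-ins for C5.0 and SVMLight and synthetic data in place of the 150 patient records:

    # Cross-validation comparison of a decision tree and an SVM, as a sketch of
    # the kind of analysis the thesis performed; data and tools are stand-ins.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for the patient dataset (shape is an assumption).
    X, y = make_classification(n_samples=150, n_features=20, n_informative=5,
                               random_state=0)

    for name, clf in [("decision tree (C5.0 stand-in)", DecisionTreeClassifier(random_state=0)),
                      ("SVM (SVMLight stand-in)", SVC(kernel="linear"))]:
        scores = cross_val_score(clf, X, y, cv=10)
        print(f"{name}: mean 10-fold CV error = {1 - scores.mean():.3f}")
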
4. Siegel, Kathryn I. (Kathryn Iris). "Incremental random forest classifiers in Spark." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106105.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016. Cataloged from PDF version of thesis. Includes bibliographical references (page 53).
The random forest is a machine learning algorithm that has gained popularity due to its resistance to noise, good performance, and training efficiency. Random forests are typically constructed from a static dataset; to accommodate new data, they are usually regrown from scratch. This thesis presents two main strategies for updating random forests incrementally rather than entirely rebuilding them. I implement these two strategies, incrementally growing existing trees and replacing old trees, in Spark Machine Learning (ML), a commonly used library for running ML algorithms in Spark. My implementation draws from existing methods in the online learning literature but includes several novel refinements. I evaluate the two implementations, as well as a variety of hybrid strategies, by recording their error rates and training times on four different datasets. My benchmarks show that the optimal strategy for incremental growth depends on the batch size and the presence of concept drift in a data workload. Workloads with large batches should be classified using a strategy that favors tree regrowth, while workloads with small batches should be classified using a strategy that favors incremental growth of existing trees. Overall, the system demonstrates significant efficiency gains compared to the standard method of regrowing the random forest.
By Kathryn I. Siegel. M. Eng.
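
A much simplified, single-machine sketch of the "replace old trees" strategy the abstract mentions; the thesis implements its strategies inside Spark ML, which is not reproduced here, and the ReplacementForest class, tree counts, batch sizes and replacement rate below are all illustrative assumptions.

    # Incremental forest update by evicting the oldest trees as new batches
    # arrive, instead of regrowing the whole forest.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    class ReplacementForest:
        def __init__(self, n_trees=20, replace_per_batch=5):
            self.trees = []
            self.n_trees = n_trees
            self.replace_per_batch = replace_per_batch

        def partial_fit(self, X, y):
            """Grow a few new trees on the incoming batch and drop the oldest ones."""
            new_trees = [
                DecisionTreeClassifier(max_features="sqrt", random_state=i).fit(X, y)
                for i in range(self.replace_per_batch)
            ]
            self.trees = (self.trees + new_trees)[-self.n_trees:]

        def predict(self, X):
            # Majority vote across the current ensemble.
            votes = np.stack([tree.predict(X) for tree in self.trees])
            return np.round(votes.mean(axis=0)).astype(int)

    # Feed the forest a stream of batches, as an incremental workload would.
    X_all, y_all = make_classification(n_samples=1300, n_features=10, random_state=0)
    X_test, y_test = X_all[1200:], y_all[1200:]

    forest = ReplacementForest()
    for X_batch, y_batch in zip(np.array_split(X_all[:1200], 6),
                                np.array_split(y_all[:1200], 6)):
        forest.partial_fit(X_batch, y_batch)

    print("held-out accuracy:", (forest.predict(X_test) == y_test).mean())
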
5. Palmer-Brown, Dominic. "An adaptive resonance classifier." Thesis, University of Nottingham, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334802.

6. Xue, Jinghao. "Aspects of generative and discriminative classifiers." Thesis, University of Glasgow, 2008. http://theses.gla.ac.uk/272/.

Abstract:
Thesis (Ph.D.) - University of Glasgow, 2008. Ph.D. thesis submitted to the Department of Statistics, Faculty of Information and Mathematical Sciences, University of Glasgow, 2008. Includes bibliographical references. Print version also available.
7. Frankowsky, Maximilian, and Dan Ke. "Humanness and classifiers in Mandarin Chinese." Universitätsbibliothek Leipzig, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-224789.

Abstract:
Mandarin Chinese numeral classifiers receive considerable attention in linguistic research. The status of the general classifier 个 gè remains unresolved. Many linguists suggest that the use of 个 gè as a noun classifier is arbitrary; this view is challenged in the current study. Relying on the CCL corpus of Peking University and data from Google, we investigated which nouns for living beings are most likely to be classified by the general classifier 个 gè. The results suggest that the use of the classifier 个 gè is motivated by an anthropocentric continuum as described by Köpcke and Zubin in the 1990s. We tested Köpcke and Zubin's approach with Chinese native speakers, examining 76 animal expressions to explore the semantic interdependence of numeral classifiers and their nouns. Our study shows that nouns with the semantic feature [+ animate] are more likely to be classified by 个 gè if their denotatum is either very close to or very far from the anthropocentric center. In contrast, animate nouns whose denotata are located at some intermediate distance from the anthropocentric center are less likely to be classified by 个 gè.
8. Lee, Yuchun. "Classifiers: adaptive modules in pattern recognition systems." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/14496.

9. Chungfat, Neil C. (Neil Caye) 1979. "Context-aware activity recognition using TAN classifiers." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87220.

Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (p. 73-77). By Neil C. Chungfat. M.Eng.
10. Li, Ming. "Sequence and text classification: features and classifiers." Thesis, University of East Anglia, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.426966.

