Academic literature on the topic 'Evaluation of extreme classifiers'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Evaluation of extreme classifiers.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Evaluation of extreme classifiers"
Balasubramanian, Kishore, and N. P. Ananthamoorthy. "Analysis of hybrid statistical textural and intensity features to discriminate retinal abnormalities through classifiers." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 233, no. 5 (March 20, 2019): 506–14. http://dx.doi.org/10.1177/0954411919835856.
Michau, Gabriel, Yang Hu, Thomas Palmé, and Olga Fink. "Feature learning for fault detection in high-dimensional condition monitoring signals." Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability 234, no. 1 (August 24, 2019): 104–15. http://dx.doi.org/10.1177/1748006x19868335.
Raza, Ali, Furqan Rustam, Hafeez Ur Rehman Siddiqui, Isabel de la Torre Diez, Begoña Garcia-Zapirain, Ernesto Lee, and Imran Ashraf. "Predicting Genetic Disorder and Types of Disorder Using Chain Classifier Approach." Genes 14, no. 1 (December 26, 2022): 71. http://dx.doi.org/10.3390/genes14010071.
Afolabi, Hassan A., and Aburas A. Abdurazzag. "Statistical performance assessment of supervised machine learning algorithms for intrusion detection system." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 1 (March 1, 2024): 266. http://dx.doi.org/10.11591/ijai.v13.i1.pp266-277.
Thiamchoo, Nantarika, and Pornchai Phukpattaranont. "Evaluation of feature projection techniques in object grasp classification using electromyogram signals from different limb positions." PeerJ Computer Science 8 (May 6, 2022): e949. http://dx.doi.org/10.7717/peerj-cs.949.
Kamaruddin, Ami Shamril, Mohd Fikri Hadrawi, Yap Bee Wah, and Sharifah Aliman. "An evaluation of nature-inspired optimization algorithms and machine learning classifiers for electricity fraud prediction." Indonesian Journal of Electrical Engineering and Computer Science 32, no. 1 (October 1, 2023): 468. http://dx.doi.org/10.11591/ijeecs.v32.i1.pp468-477.
Tian, Zhang, Chen, Geng, and Wang. "Selective Ensemble Based on Extreme Learning Machine for Sensor-Based Human Activity Recognition." Sensors 19, no. 16 (August 8, 2019): 3468. http://dx.doi.org/10.3390/s19163468.
Guo, Weian, Yan Zhang, Ming Chen, Lei Wang, and Qidi Wu. "Fuzzy performance evaluation of Evolutionary Algorithms based on extreme learning classifier." Neurocomputing 175 (January 2016): 371–82. http://dx.doi.org/10.1016/j.neucom.2015.10.069.
Al-Gethami, Khalid M., Mousa T. Al-Akhras, and Mohammed Alawairdhi. "Empirical Evaluation of Noise Influence on Supervised Machine Learning Algorithms Using Intrusion Detection Datasets." Security and Communication Networks 2021 (January 15, 2021): 1–28. http://dx.doi.org/10.1155/2021/8836057.
Okwonu, Friday Zinzendoff, Nor Aishah Ahad, Nicholas Oluwole Ogini, Innocent Ejiro Okoloko, and Wan Zakiyatussariroh Wan Husin. "Comparative Performance Evaluation of Efficiency for High Dimensional Classification Methods." Journal of Information and Communication Technology 21, no. 3 (July 17, 2022): 437–64. http://dx.doi.org/10.32890/jict2022.21.3.6.
Dissertations / Theses on the topic "Evaluation of extreme classifiers"
Legrand, Juliette. "Simulation and assessment of multivariate extreme models for environmental data." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASJ015.
Full textAccurate estimation of the occurrence probabilities of extreme environmental events is a major issue for risk assessment. For example, in coastal engineering, the design of structures installed at or near the coasts must be such that they can withstand the most severe events they may encounter in their lifetime. This thesis focuses on the simulation of multivariate extremes, motivated by applications to significant wave height, and on the evaluation of models predicting the occurrences of extreme events.In the first part of the manuscript, we propose and study a stochastic simulator that, given offshore conditions, produces jointly offshore and coastal extreme significant wave heights (Hs). We rely on bivariate Peaks over Threshold and develop a non-parametric simulation scheme of bivariate generalised Pareto distributions. From such joint simulator, we derive a conditional simulation model. Both simulation algorithms are applied to numerical experiments and to extreme Hs near the French Brittanny coast. A further development is addressed regarding the marginal modelling of Hs. To take into account non-stationarities, we adapt the extended generalised Pareto model, letting the marginal parameters vary with the peak period and the peak direction.The second part of this thesis provides a more theoretical development. To evaluate different prediction models for extremes, we study the specific case of binary classifiers, which are the simplest type of forecasting and decision-making situation: an extreme event did or did not occur. Risk functions adapted to binary classifiers of extreme events are developed, answering our second question. Their properties are derived under the framework of multivariate regular variation and hidden regular variation, allowing to handle finer types of asymptotic independence. This framework is applied to extreme river discharges
Lavesson, Niklas. "Evaluation and Analysis of Supervised Learning Algorithms and Classifiers." Licentiate thesis, Karlskrona: Blekinge Institute of Technology, 2006. http://www.bth.se/fou/Forskinfo.nsf/allfirst2/c655a0b1f9f88d16c125714c00355e5d?OpenDocument.
Nygren, Rasmus. "Evaluation of hyperparameter optimization methods for Random Forest classifiers." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301739.
Full textFör att skapa en maskininlärningsmodell behöver en ofta välja olika hyperparametrar som konfigurerar modellens egenskaper. Prestandan av en sådan modell beror starkt på valet av dessa hyperparametrar, varför det är relevant att undersöka hur optimering av hyperparametrar kan påverka klassifikationssäkerheten av en maskininlärningsmodell. I denna studie tränar och utvärderar vi en Random Forest-klassificerare vars hyperparametrar sätts till särskilda standardvärden och jämför denna med en klassificerare vars hyperparametrar bestäms av tre olika metoder för optimering av hyperparametrar (HPO) - Random Search, Bayesian Optimization och Particle Swarm Optimization. Detta görs på tre olika dataset, och varje HPO- metod utvärderas baserat på den ändring av klassificeringsträffsäkerhet som den medför över dessa dataset. Vi fann att varje HPO-metod resulterade i en total ökning av klassificeringsträffsäkerhet på cirka 2-3% över alla dataset jämfört med den träffsäkerhet som kruleslassificeraren fick med standardvärdena för hyperparametrana. På grund av begränsningar i form av tid och data kunde vi inte fastställa om den positiva effekten är generaliserbar till en större skala. Slutsatsen som kunde dras var istället att användbarheten av metoder för optimering av hyperparametrar är beroende på det dataset de tillämpas på.
Dang, Robin, and Anders Nilsson. "Evaluation of Machine Learning classifiers for Breast Cancer Classification." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280349.
Full textBröstcancer är en vanlig och dödlig sjukdom bland kvinnor globalt där en tidig upptäckt är avgörande för att förbättra prognosen för patienter. I dagens digitala samhälle kan datorer och komplexa algoritmer utvärdera och diagnostisera sjukdomar mer effektivt och med större säkerhet än erfarna läkare. Flera studier har genomförts för att automatisera tekniker med medicinska avbildningsmetoder, genom maskininlärnings tekniker, för att förutsäga och upptäcka bröstcancer. I den här rapport utvärderas och jämförs lämpligheten hos fem olika maskininlärningsmetoder att klassificera huruvida bröstcancer är av god- eller elakartad karaktär. Vidare undersöks hur metodernas effektivitet, med avseende på klassificeringssäkerhet samt exekveringstid, påverkas av förbehandlingsmetoden Principal component analysis samt ensemble metoden Bootstrap aggregating. I teorin skall båda förbehandlingsmetoder gynna vissa maskininlärningsmetoder och således öka klassificeringssäkerheten. Undersökningen är baserat på ett välkänt bröstcancer dataset från Wisconsin som används till att träna algoritmerna. Resultaten är evaluerade genom applicering av statistiska metoder där träffsäkerhet, känslighet och exekveringstid tagits till hänsyn. Följaktligen jämförs resultaten mellan de olika klassificerarna. Undersökningen visade att användningen av varken Principal component analysis eller Bootstrap aggregating resulterade i några nämnvärda förbättringar med avseende på klassificeringssäkerhet. Dock visade resultaten att klassificerarna Support vector machines Linear och RBF presterade bäst. I och med att undersökningen var begränsad med avseende på antalet dataset samt val av olika evalueringsmetoder med medförande justeringar är det därför osäkert huruvida det erhållna resultatet kan generaliseras över andra dataset och populationer.
Fischer, Manfred M., Sucharita Gopal, Petra Staufer-Steinnocher, and Klaus Steinocher. "Evaluation of Neural Pattern Classifiers for a Remote Sensing Application." WU Vienna University of Economics and Business, 1995. http://epub.wu.ac.at/4184/1/WSG_DP_4695.pdf.
Full textSeries: Discussion Papers of the Institute for Economic Geography and GIScience
Alorf, Abdulaziz Abdullah. "Primary/Soft Biometrics: Performance Evaluation and Novel Real-Time Classifiers." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/96942.
Full textDoctor of Philosophy
The relevance of faces in our daily lives is indisputable. We learn to recognize faces as newborns, and faces play a major role in interpersonal communication. Faces probably represent the most accurate biometric trait in our daily interactions. It is therefore not surprising that so much effort from computer vision researchers has been invested in the analysis of faces. The automatic detection and analysis of faces within images has therefore received much attention in recent years. The spectrum of computer vision research about face analysis includes, but is not limited to, face detection and facial attribute classification, which are the focus of this dissertation. The face is a primary biometric because by itself it reveals the subject's identity, while facial attributes (such as hair color and eye state) are soft biometrics because by themselves they do not reveal the subject's identity. Soft biometrics have many uses in the field of biometrics: (1) they can be utilized in a fusion framework to strengthen the performance of a primary biometric system. For example, fusing a face with voice accent information can boost the performance of face recognition. (2) They can also be used to create qualitative descriptions about a person, such as being an "old bald male wearing a necktie and eyeglasses." Face detection and facial attribute classification are not easy problems because of many factors, such as image orientation, pose variation, clutter, facial expressions, occlusion, and illumination, among others. In this dissertation, we introduced novel techniques to classify more than 40 facial attributes in real time. Our techniques followed the general facial attribute classification pipeline, which begins by detecting a face and ends by classifying facial attributes. We also introduced a new facial attribute related to Middle Eastern headwear along with its detector. The new facial attribute was fused with a face detector to improve the detection performance. In addition, we proposed a new method to evaluate the robustness of face detection, which is the first process in the facial attribute classification pipeline. Detecting the states of human facial attributes in real time is highly desired by many applications. For example, the real-time detection of a driver's eye state (open/closed) can prevent severe accidents. These systems are usually called driver drowsiness detection systems. For classifying 40 facial attributes, we proposed a real-time model that preprocesses faces by localizing facial landmarks to normalize them, and then crops them based on the intended attribute. The face was cropped only if the intended attribute was inside the face region. After that, 7 types of classical and deep features were extracted from the preprocessed faces. Lastly, these 7 types of feature sets were fused together to train three different classifiers. Our proposed model yielded an average accuracy of 91.93%, outperforming 7 state-of-the-art models. It also achieved state-of-the-art performance in classifying 14 out of 40 attributes. We also developed a real-time model that classifies the states of three human facial attributes: (1) eyes (open/closed), (2) mouth (open/closed), and (3) eyeglasses (present/absent). Our proposed method consisted of six main steps: (1) In the beginning, we detected the human face. (2) Then we extracted the facial landmarks. (3) Thereafter, we normalized the face, based on the eye location, to the full frontal view.
(4) We then extracted the regions of interest (i.e., the regions of the mouth, left eye, right eye, and eyeglasses). (5) We extracted low-level features from each region and then described them. (6) Finally, we learned a binary classifier for each attribute to classify it using the extracted features. Our developed model achieved 30 FPS with a CPU-only implementation, and our eye-state classifier achieved the top performance, while our mouth-state and glasses classifiers were tied as the top performers with deep learning classifiers. We also introduced a new facial attribute related to Middle Eastern headwear along with its detector. After that, we fused it with a face detector to improve the detection performance. The traditional Middle Eastern headwear that men usually wear consists of two parts: (1) the shemagh or keffiyeh, which is a scarf that covers the head and usually has checkered and pure white patterns, and (2) the igal, which is a band or cord worn on top of the shemagh to hold it in place. The shemagh causes many unwanted effects on the face; for example, it usually occludes some parts of the face and adds dark shadows, especially near the eyes. These effects substantially degrade the performance of face detection. To improve the detection of people who wear the traditional Middle Eastern headwear, we developed a model that can be used as a head detector or combined with current face detectors to improve their performance. Our igal detector consists of two main steps: (1) learning a binary classifier to detect the igal and (2) refining the classifier by removing false positives. Due to the similarity in real-life applications, we compared the igal detector with state-of-the-art face detectors, where the igal detector significantly outperformed the face detectors with the lowest false positives. We also fused the igal detector with a face detector to improve the detection performance. Face detection is the first process in any facial attribute classification pipeline. As a result, we reported a novel study that evaluates the robustness of current face detectors based on: (1) diffraction blur, (2) image scale, and (3) the IoU classification threshold. This study would enable users to pick a robust face detector for their intended applications. Biometric systems that use face detection suffer from large performance fluctuations. For example, users of biometric surveillance systems that utilize face detection sometimes notice that state-of-the-art face detectors do not show good performance compared with outdated detectors. Although state-of-the-art face detectors are designed to work in the wild (i.e., no need to retrain, revalidate, and retest), they still heavily depend on the datasets they were originally trained on. This condition in turn leads to variation in the detectors' performance when they are applied to a different dataset or environment. To overcome this problem, we developed a novel optics-based blur simulator that automatically introduces diffraction blur at different image scales/magnifications. Then we evaluated different face detectors on the output images using different IoU thresholds. Users, in the beginning, choose their own values for these three settings and then run our model to determine the most efficient face detector under the selected settings. That means our proposed model would enable users of biometric systems to pick the most efficient face detector based on their system setup.
Our results showed that sometimes outdated face detectors outperform state-of-the-art ones under certain settings and vice versa.
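The abstract above evaluates face detectors as a function of the IoU classification threshold. The sketch below shows that matching criterion for axis-aligned bounding boxes given as (x1, y1, x2, y2); the 0.5 threshold is an illustrative choice, not one taken from the dissertation.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection counts as a true positive if its IoU with a ground-truth
# face box reaches the chosen threshold (0.5 here, purely illustrative).
detection, ground_truth = (48, 40, 198, 210), (50, 50, 200, 200)
print(iou(detection, ground_truth) >= 0.5)
```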
Ayhan, Tezer Bahar. "Damage evaluation of civil engineering structures under extreme loadings." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00975488.
Zuzáková, Barbora. "Exchange market pressure: an evaluation using extreme value theory." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-199589.
Buolamwini, Joy Adowaa. "Gender shades: intersectional phenotypic and demographic evaluation of face datasets and gender classifiers." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/114068.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (pages 103-116).
This thesis (1) characterizes the gender and skin type distribution of IJB-A, a government facial recognition benchmark, and Adience, a gender classification benchmark, (2) outlines an approach for capturing images with more diverse skin types which is then applied to develop the Pilot Parliaments Benchmark (PPB), and (3) uses PPB to assess the classification accuracy of Adience, IBM, Microsoft, and Face++ gender classifiers with respect to gender, skin type, and the intersection of skin type and gender. The datasets evaluated are overwhelmingly lighter skinned: 79.6% - 86.24%. IJB-A includes only 24.6% female and 4.4% darker female, and features 59.4% lighter males. By construction, Adience achieves rough gender parity at 52.0% female but has only 13.76% darker skin. The Parliaments method for creating a more skin-type-balanced benchmark resulted in a dataset that is 44.39% female and 47% darker skin. An evaluation of four gender classifiers revealed that a significant gap exists when comparing gender classification accuracies of females vs males (9 - 20%) and darker skin vs lighter skin (10 - 21%). Lighter males were in general the best classified group, and darker females were the worst classified group. 37% - 83% of classification errors resulted from the misclassification of darker females. Lighter males contributed the least to overall classification error (0.4% - 3%). For the best performing classifier, darker females were 32 times more likely to be misclassified than lighter males. To increase the accuracy of these systems, more phenotypically diverse datasets need to be developed. Benchmark performance metrics need to be disaggregated not just by gender or skin type but by the intersection of gender and skin type. At a minimum, human-focused computer vision models should report accuracy on four subgroups: darker females, lighter females, darker males, and lighter males. The thesis concludes with a discussion of the implications of misclassification and the importance of building inclusive training sets and benchmarks.
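The disaggregated reporting recommended here, accuracy for darker females, lighter females, darker males, and lighter males, can be sketched with a small pandas computation; the column names and example records below are hypothetical, not drawn from the Pilot Parliaments Benchmark itself.

```python
import pandas as pd

# Hypothetical per-image evaluation records: true and predicted gender
# plus a binarised skin type, mirroring the PPB-style evaluation setup.
results = pd.DataFrame({
    "gender":    ["female", "female", "male",   "male",   "female", "male"],
    "skin_type": ["darker", "lighter", "darker", "lighter", "darker", "darker"],
    "predicted": ["male",   "female",  "male",   "male",    "female", "male"],
})
results["correct"] = results["gender"] == results["predicted"]

# Accuracy disaggregated by the intersection of skin type and gender.
subgroup_accuracy = results.groupby(["skin_type", "gender"])["correct"].mean()
print(subgroup_accuracy)
```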
by Joy Adowaa Buolamwini.
S.M.
Pydipati, Rajesh. "Evaluation of classifiers for automatic disease detection in citrus leaves using machine vision." [Gainesville, Fla.]: University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0006991.
Books on the topic "Evaluation of extreme classifiers"
Margineantu, Dragos D. Bootstrap methods for the cost-sensitive evaluation of classifiers. [Corvallis, OR]: Oregon State University, Dept. of Computer Science, 2000.
Find full textResearch, United States Office of Federal Coordinator for Meteorological Services and Supporting. Report on wind chill temperature and extreme heat indices: Evaluation and improvement projects. Washington, D.C: U.S. Department of Commerce, National Oceanic and Atmospheric Administration, Office of the Federal Coordinator for Meteorological Services and Supporting Research, 2003.
Khan, S. M. Zubair Ali., Proshikhsan Shikhsa Kaj (Organization : Bangladesh), Proshikhsan Shikhsa Kaj (Organization : Bangladesh). Impact Monitoring and Evaluation Cell., and Great Britain. Dept. for International Development, Bangladesh., eds. Inclusion of the extreme poor to PROSHIKA activities. Dhaka: Impact Monitoring and Evaluation Cell, PROSHIKA, 2003.
Matin, M. A. Risk assessment and evaluation of probability of extreme hydrological events: Case study from Noakhali Sadar and Subarnachar Upazilas. Dhaka: IUCN Bangladesh Country Office, 2008.
Daishinpuku jishindō to kenchikubutsu no taishinsei hyōka: Kyodai kaikōgata jishin, nairiku jishin ni sonaete = Extreme ground motions and seismic performance evaluation of buildings : how to prepare for mega subduction and inland earthquakes. Tōkyō-to Minato-ku: Nihon Kenchiku Gakkai, 2013.
The Evaluation of Competing Classifiers. Storming Media, 2000.
Meacham, Brian J. Extreme Event Mitigation in Buildings: Analysis and Design. National Fire Protection Association, 2006.
Evaluation in the Extreme: Research, Impact and Politics in Violently Divided Societies. SAGE Publications India Pvt, Ltd., 2015.
Bush, Kenneth, and Colleen Duggan. Evaluation in the Extreme: Research, Impact and Politics in Violently Divided Societies. SAGE Publications India Pvt, Ltd., 2015.
Book chapters on the topic "Evaluation of extreme classifiers"
Seewald, Alexander K., and Johannes Fürnkranz. "An Evaluation of Grading Classifiers." In Advances in Intelligent Data Analysis, 115–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-44816-0_12.
Alonzo, Todd A., and Margaret Sullivan Pepe. "Development and Evaluation of Classifiers." In Topics in Biostatistics, 89–116. Totowa, NJ: Humana Press, 2007. http://dx.doi.org/10.1007/978-1-59745-530-5_6.
Lóczy, Dénes. "Evaluation of Geomorphological Impact." In Geomorphological impacts of extreme weather, 363–70. Dordrecht: Springer Netherlands, 2013. http://dx.doi.org/10.1007/978-94-007-6301-2_23.
Ashok, Pranav, Tomáš Brázdil, Krishnendu Chatterjee, Jan Křetínský, Christoph H. Lampert, and Viktor Toman. "Strategy Representation by Decision Trees with Linear Classifiers." In Quantitative Evaluation of Systems, 109–28. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30281-8_7.
Torzilli, Guido, Guido Costa, Fabio Procopio, Luca Viganó, and Matteo Donadon. "Intraoperative Evaluation of Resectability." In Extreme Hepatic Surgery and Other Strategies, 177–93. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-13896-1_11.
Szadkowski, Rudolf, Jan Drchal, and Jan Faigl. "Basic Evaluation Scenarios for Incrementally Trained Classifiers." In Lecture Notes in Computer Science, 507–17. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30484-3_41.
Viechnicki, Peter. "A performance evaluation of automatic survey classifiers." In Grammatical Inference, 244–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0054080.
Cieslak, Kasia P., Roelof J. Bennink, and Thomas M. van Gulik. "Preoperative Evaluation of Liver Function." In Extreme Hepatic Surgery and Other Strategies, 31–52. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-13896-1_3.
Chao, J. Carlos Aguado. "Artificial Intelligence Classifiers and Their Social Impact." In Soft Computing for Risk Evaluation and Management, 170–94. Heidelberg: Physica-Verlag HD, 2001. http://dx.doi.org/10.1007/978-3-7908-1814-7_11.
Nirkhi, Smita. "Evaluation of Classifiers for Detection of Authorship Attribution." In Computational Intelligence: Theories, Applications and Future Directions - Volume I, 227–36. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1132-1_18.
Conference papers on the topic "Evaluation of extreme classifiers"
Britto, Larissa, and Luciano Pacífico. "Classificação de Espécies de Plantas Usando Extreme Learning Machine" [Plant Species Classification Using Extreme Learning Machine]. In Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2019. http://dx.doi.org/10.5753/eniac.2019.9268.
Flores, Christian, Christian Fonseca, David Achanccaray, and Javier Andreu-Perez. "Performance Evaluation of a P300 Brain-Computer Interface Using a Kernel Extreme Learning Machine Classifier." In 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2018. http://dx.doi.org/10.1109/smc.2018.00629.
Itikawa, M. A., V. R. R. Ahón, T. A. Souza, A. M. V. Carrasco, J. C. Q. Neto, J. L. S. Gomes, R. R. H. Cavalcante, et al. "Automatic Cement Evaluation Using Machine Learning." In Offshore Technology Conference Brasil. OTC, 2023. http://dx.doi.org/10.4043/32961-ms.
Gautam, Chandan, Aruna Tiwari, and Sriram Ravindran. "Construction of multi-class classifiers by Extreme Learning Machine based one-class classifiers." In 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016. http://dx.doi.org/10.1109/ijcnn.2016.7727445.
Fein-Ashley, Jacob, Tian Ye, Rajgopal Kannan, Viktor Prasanna, and Carl Busart. "Benchmarking Deep Learning Classifiers for SAR Automatic Target Recognition." In 2023 IEEE High Performance Extreme Computing Conference (HPEC). IEEE, 2023. http://dx.doi.org/10.1109/hpec58863.2023.10363455.
Sivaguru, Raaghavi, Chhaya Choudhary, Bin Yu, Vadym Tymchenko, Anderson Nascimento, and Martine De Cock. "An Evaluation of DGA Classifiers." In 2018 IEEE International Conference on Big Data (Big Data). IEEE, 2018. http://dx.doi.org/10.1109/bigdata.2018.8621875.
Yusa, Mochammad, and Ema Utami. "Classifiers evaluation: Comparison of performance classifiers based on tuples amount." In 2017 4th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI). IEEE, 2017. http://dx.doi.org/10.1109/eecsi.2017.8239204.
Goldman, Alfredo, and Paulo Floriano. "An evaluation system." In the 3rd Extreme Conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2414393.2414401.
Assayed, Suha Khalil, Khaled Shaalan, Manar Alkhatib, and Safwan Maghaydah. "Machine Learning Chatbot for Sentiment Analysis of Covid-19 Tweets." In 10th International Conference on Computer Networks & Communications (CCNET 2023). Academy and Industry Research Collaboration Center (AIRCC), 2023. http://dx.doi.org/10.5121/csit.2023.130404.
Yoo, Youngwoo, and Se-Young Oh. "Fast training of convolutional neural network classifiers through extreme learning machines." In 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016. http://dx.doi.org/10.1109/ijcnn.2016.7727403.
Reports on the topic "Evaluation of extreme classifiers"
KAB LABS INC SAN DIEGO CA. Feature Set Evaluation for Classifiers. Fort Belvoir, VA: Defense Technical Information Center, March 1989. http://dx.doi.org/10.21236/ada226903.
KAB LABS INC SAN DIEGO CA. Feature Set Evaluation for Classifiers. Fort Belvoir, VA: Defense Technical Information Center, January 1989. http://dx.doi.org/10.21236/ada226905.
Liguori, Giovanni, and Nadia Pinardi. Evaluation of Extreme Forecast Indices (WP5+6). EuroSea, 2023. http://dx.doi.org/10.3289/eurosea_d4.11.
Asenath-Smith, Emily, Terry Melendy, Amelia Menke, Andrew Bernier, and George Blaisdell. Evaluation of airfield damage repair methods for extreme cold temperatures. Engineer Research and Development Center (U.S.), March 2019. http://dx.doi.org/10.21079/11681/32298.
Ruby, Brent C. Evaluation of the Human/Extreme Environment Interaction: Implications for Enhancing Operational Performance and Recovery. Fort Belvoir, VA: Defense Technical Information Center, October 2011. http://dx.doi.org/10.21236/ada592672.
Ruby, Brent C. Evaluation of the Human/Extreme Environment Interaction: Implications for Enhancing Operational Performance and Recovery. Fort Belvoir, VA: Defense Technical Information Center, October 2012. http://dx.doi.org/10.21236/ada592673.
Ruby, Brent C. Evaluation of the Human/Extreme Environment Interaction: Implications for Enhancing Operational Performance and Recovery. Fort Belvoir, VA: Defense Technical Information Center, February 2014. http://dx.doi.org/10.21236/ada600954.
Bäumler, Maximilian, and Matthias Lehmann. Generating representative test scenarios: The FUSE for Representativity (fuse4rep) process model for collecting and analysing traffic observation data. TU Dresden, 2024. http://dx.doi.org/10.26128/2024.2.
Truffer-Moudra, Dana, Sarah Azmi-Wendler, Robbin Garber-Slaght, Prateek Shrestha, Qwerty Mackey, and Conor Dennehy. Performance Evaluation and Costs of a Combined Ground Source Heat Pump and Solar Photovoltaic Storage System in an Extreme Cold Climate. Office of Scientific and Technical Information (OSTI), June 2023. http://dx.doi.org/10.2172/1986504.
Huntington, Dale. Anti-trafficking programs in South Asia: Appropriate activities, indicators and evaluation methodologies. Population Council, 2002. http://dx.doi.org/10.31899/rh2002.1019.