Academic literature on the topic 'Support Vector Machine'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Support Vector Machine.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Support Vector Machine"

1

Xia, Tian. "Support Vector Machine Based Educational Resources Classification." International Journal of Information and Education Technology 6, no. 11 (2016): 880–83. http://dx.doi.org/10.7763/ijiet.2016.v6.809.

2

Aruna Sankari, R. "Cervical Cancer Detection Using Support Vector Machine." International Journal of Emerging Trends in Science and Technology 04, no. 03 (March 31, 2017): 5033–38. http://dx.doi.org/10.18535/ijetst/v4i3.08.

3

Heo, Gyeong-Yong, and Seong-Hoon Kim. "Context-Aware Fusion with Support Vector Machine." Journal of the Korea Society of Computer and Information 19, no. 6 (June 30, 2014): 19–26. http://dx.doi.org/10.9708/jksci.2014.19.6.019.

4

Huimin, Yao. "Research on Parallel Support Vector Machine Based on Spark Big Data Platform." Scientific Programming 2021 (December 17, 2021): 1–9. http://dx.doi.org/10.1155/2021/7998417.

Abstract:
With the development of cloud computing and distributed cluster technology, the concept of big data has been expanded and extended in terms of capacity and value, and machine learning technology has also received unprecedented attention in recent years. Traditional machine learning algorithms cannot be parallelized effectively, so a parallelized support vector machine based on the Spark big data platform is proposed. Firstly, the big data platform is designed with the Lambda architecture, which is divided into three layers: Batch Layer, Serving Layer, and Speed Layer. Secondly, in order to improve the training efficiency of support vector machines on large-scale data, when merging two support vector machines, the "special points" other than support vectors are considered, that is, the points where the nonsupport vectors in one subset violate the training results of the other subset, and a cross-validation merging algorithm is proposed. Then, a parallelized support vector machine based on cross-validation is proposed, and the parallelization process of the support vector machine is realized on the Spark platform. Finally, experiments on different datasets verify the effectiveness and stability of the proposed method. Experimental results show that the proposed parallelized support vector machine has outstanding performance in speed-up ratio, training time, and prediction accuracy.
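The merging step this abstract describes can be sketched outside Spark. The following is an illustrative reconstruction with scikit-learn on synthetic data, not the paper's implementation — the subset split, RBF kernel, and `keep_indices` helper are assumptions: each partial model keeps its own support vectors plus the points that the other model misclassifies ("special points"), and a new SVM is retrained on the merged, reduced set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X1, y1, X2, y2 = X[:200], y[:200], X[200:], y[200:]   # two disjoint subsets

m1 = SVC(kernel="rbf").fit(X1, y1)
m2 = SVC(kernel="rbf").fit(X2, y2)

def keep_indices(model_own, model_other, Xs, ys):
    sv = set(model_own.support_)                        # this subset's support vectors
    viol = np.where(model_other.predict(Xs) != ys)[0]   # points violating the other model
    return sorted(sv.union(viol))

i1 = keep_indices(m1, m2, X1, y1)
i2 = keep_indices(m2, m1, X2, y2)
X_merged = np.vstack([X1[i1], X2[i2]])
y_merged = np.concatenate([y1[i1], y2[i2]])
merged = SVC(kernel="rbf").fit(X_merged, y_merged)      # retrain on the reduced set
```

In the paper this merge runs in parallel over many partitions on Spark; the sketch only shows why cross-checking each subset against the other's model retains the points that matter.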
5

Padmanabha Reddy, V. "Human Cognitive State Classification Using Support Vector Machine." Journal of Advanced Research in Dynamical and Control Systems 12, no. 01-Special Issue (February 13, 2020): 46–54. http://dx.doi.org/10.5373/jardcs/v12sp1/20201045.

6

Jung, Kang-Mo. "Robust Algorithm for Multiclass Weighted Support Vector Machine." SIJ Transactions on Advances in Space Research & Earth Exploration 4, no. 3 (June 10, 2016): 1–5. http://dx.doi.org/10.9756/sijasree/v4i3/0203430402.

7

Dhaifallah, Mujahed Al, and K. S. Nisar. "Support Vector Machine Identification of Subspace Hammerstein Models." International Journal of Computer Theory and Engineering 7, no. 1 (February 2014): 9–15. http://dx.doi.org/10.7763/ijcte.2015.v7.922.

8

Yang, Zhi-Min, Yuan-Hai Shao, and Jing Liang. "Unascertained Support Vector Machine." Acta Automatica Sinica 39, no. 6 (March 25, 2014): 895–901. http://dx.doi.org/10.3724/sp.j.1004.2013.00895.

9

Zhang, L., W. Zhou, and L. Jiao. "Wavelet Support Vector Machine." IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 34, no. 1 (February 2004): 34–39. http://dx.doi.org/10.1109/tsmcb.2003.811113.

10

Navia-Vázquez, A., and E. Parrado-Hernández. "Support vector machine interpretation." Neurocomputing 69, no. 13-15 (August 2006): 1754–59. http://dx.doi.org/10.1016/j.neucom.2005.12.118.


Dissertations / Theses on the topic "Support Vector Machine"

1

Cardamone, Dario. "Support Vector Machine a Machine Learning Algorithm." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Abstract:
This master's thesis examines the Support Vector Machine classification algorithm. In particular, it considers its formulation as a Mixed Integer Programming optimization problem for the supervised binary classification of a data set.
2

McChesney, Charlie. "External Support Vector Machine Clustering." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/409.

Abstract:
The external-Support Vector Machine (SVM) clustering algorithm clusters data vectors with no a priori knowledge of each vector's class. The algorithm works by first running a binary SVM against a data set, with each vector in the set randomly labeled, until the SVM converges. It then relabels data points that are mislabeled and lie a large distance from the SVM hyperplane. The SVM is then iteratively rerun, followed by more label swapping, until no more progress can be made. After this process, a high percentage of the previously unknown class labels of the data set will be known. With sub-cluster identification upon iterating the overall algorithm on the positive and negative clusters identified (until the clusters are no longer separable into sub-clusters), this method provides a way to cluster data sets without prior knowledge of the data's clustering characteristics or the number of clusters.
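The label-swapping loop the abstract outlines can be sketched as follows. This is a hedged toy version on synthetic blobs — the linear kernel, the distance threshold of 0.5, and the iteration cap are assumptions, not the thesis's implementation:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, _ = make_blobs(n_samples=200, centers=2, random_state=1)
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=len(X))    # start from random labels

svm = None
for _ in range(20):
    if len(np.unique(labels)) < 2:          # degenerate labeling; stop
        break
    svm = SVC(kernel="linear").fit(X, labels)
    margin = svm.decision_function(X)       # signed distance to the hyperplane
    wrong = margin * (2 * labels - 1) < -0.5   # mislabeled AND far from the plane
    if not wrong.any():                     # converged: no confident mislabels left
        break
    labels[wrong] = 1 - labels[wrong]       # swap those labels and rerun the SVM
```

After convergence, `labels` is a candidate two-way clustering obtained without any a priori class information, in the spirit of the abstract.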
3

Armond, Kenneth C. Jr. "Distributed Support Vector Machine Learning." ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/711.

Abstract:
Support Vector Machines (SVMs) are used for a growing number of applications. A fundamental constraint on SVM learning is the management of the training set. This is because the order of computations goes as the square of the size of the training set. Typically, training sets of 1000 (500 positives and 500 negatives, for example) can be managed on a PC without hard-drive thrashing. Training sets of 10,000 however, simply cannot be managed with PC-based resources. For this reason most SVM implementations must contend with some kind of chunking process to train parts of the data at a time (10 chunks of 1000, for example, to learn the 10,000). Sequential and multi-threaded chunking methods provide a way to run the SVM on large datasets while retaining accuracy. The multi-threaded distributed SVM described in this thesis is implemented using Java RMI, and has been developed to run on a network of multi-core/multi-processor computers.
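The sequential chunking idea can be illustrated in a few lines. This sketch describes a generic chunking scheme, not the thesis's Java RMI code — the chunk size, RBF kernel, and the carry-forward of support vectors are assumptions: the model trains on one chunk at a time, keeping only the support vectors found so far so that memory stays bounded by the chunk size.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
chunk_size = 500
carry_X = np.empty((0, X.shape[1]))
carry_y = np.empty(0, dtype=y.dtype)

for start in range(0, len(X), chunk_size):
    Xc = np.vstack([carry_X, X[start:start + chunk_size]])
    yc = np.concatenate([carry_y, y[start:start + chunk_size]])
    svm = SVC(kernel="rbf").fit(Xc, yc)
    # carry only the support vectors into the next chunk
    carry_X, carry_y = Xc[svm.support_], yc[svm.support_]
```

Because training cost grows roughly with the square of the working-set size, each fit here touches at most one chunk plus the accumulated support vectors rather than the full data set.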
4

Zigic, Ljiljana. "Direct L2 Support Vector Machine." VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4274.

Abstract:
This dissertation introduces a novel model for solving the L2 support vector machine dubbed Direct L2 Support Vector Machine (DL2 SVM). DL2 SVM represents a new classification model that transforms the SVM's underlying quadratic programming problem into a system of linear equations with nonnegativity constraints. The devised system of linear equations has a symmetric positive definite matrix and a solution vector has to be nonnegative. Furthermore, this dissertation introduces a novel algorithm dubbed Non-Negative Iterative Single Data Algorithm (NN ISDA) which solves the underlying DL2 SVM's constrained system of equations. This solver shows significant speedup compared to several other state-of-the-art algorithms. The training time improvement is achieved at no cost, in other words, the accuracy is kept at the same level. All the experiments that support this claim were conducted on various datasets within the strict double cross-validation scheme. DL2 SVM solved with NN ISDA has faster training time on both medium and large datasets. In addition to a comprehensive DL2 SVM model we introduce and derive its three variants. Three different solvers for the DL2's system of linear equations with nonnegativity constraints were implemented, presented and compared in this dissertation.
5

Park, Yongwon. "Dynamic task scheduling onto heterogeneous machines using Support Vector Machine." Auburn, Ala, 2008. http://repo.lib.auburn.edu/EtdRoot/2008/SPRING/Computer_Science_and_Software_Engineering/Thesis/Park_Yong_50.pdf.

6

Tsang, Wai-Hung. "Scaling up support vector machines." View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20TSANG.

7

Perez, Daniel Antonio. "Performance comparison of support vector machine and relevance vector machine classifiers for functional MRI data." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34858.

Abstract:
Multivariate pattern analysis (MVPA) of fMRI data has been growing in popularity due to its sensitivity to networks of brain activation. It is performed in a predictive modeling framework which is natural for implementing brain state prediction and real-time fMRI applications such as brain computer interfaces. Support vector machines (SVM) have been particularly popular for MVPA owing to their high prediction accuracy even with noisy datasets. Recent work has proposed the use of relevance vector machines (RVM) as an alternative to SVM. RVMs are particularly attractive in time sensitive applications such as real-time fMRI since they tend to perform classification faster than SVMs. Despite the use of both methods in fMRI research, little has been done to compare the performance of these two techniques. This study compares RVM to SVM in terms of time and accuracy to determine which is better suited to real-time applications.
8

Wen, Tong. "Support Vector Machine algorithms: analysis and applications." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8404.

Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 2002.
Includes bibliographical references (p. 89-97).
Support Vector Machines (SVMs) have attracted recent attention as a learning technique to attack classification problems. The goal of my thesis work is to improve computational algorithms as well as the mathematical understanding of SVMs, so that they can be easily applied to real problems. SVMs solve classification problems by learning from training examples. From the geometry, it is easy to formulate the finding of SVM classifiers as a linearly constrained Quadratic Programming (QP) problem. However, in practice its dual problem is actually computed. An important property of the dual QP problem is that its solution is sparse. The training examples that determine the SVM classifier are known as support vectors (SVs). Motivated by the geometric derivation of the primal QP problem, we investigate how the dual problem is related to the geometry of SVs. This investigation leads to a geometric interpretation of the scaling property of SVMs and an algorithm to further compress the SVs. A random model for the training examples connects the Hessian matrix of the dual QP problem to Wishart matrices. After deriving the distributions of the elements of the inverse Wishart matrix Wn-1(n, nI), we give a conjecture about the summation of the elements of Wn-1(n, nI). It becomes challenging to solve the dual QP problem when the training set is large. We develop a fast algorithm for solving this problem. Numerical experiments show that the MATLAB implementation of this projected Conjugate Gradient algorithm is competitive with benchmark C/C++ codes such as SVMlight and SvmFu. Furthermore, we apply SVMs to time series data.
In this application, SVMs are used to predict the movement of the stock market. Our results show that using SVMs has the potential to outperform the solution based on the most widely used geometric Brownian motion model of stock prices.
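The dual QP this abstract discusses can be made concrete with a toy solver. The sketch below uses plain projected gradient ascent on the bias-free, linear-kernel dual (the thesis itself develops a projected conjugate gradient method; the data, step size, and iteration count here are illustrative assumptions). The solution vector `alpha` is sparse: the examples with nonzero entries are the support vectors.

```python
import numpy as np

# Toy two-class data: two Gaussian clouds in the plane
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (30, 2)), rng.normal(2.0, 1.0, (30, 2))])
y = np.concatenate([-np.ones(30), np.ones(30)])

K = X @ X.T                              # linear kernel matrix
Q = (y[:, None] * y[None, :]) * K        # Hessian of the dual QP
C, lr = 1.0, 0.001
alpha = np.zeros(len(y))
for _ in range(2000):
    grad = 1.0 - Q @ alpha               # gradient of sum(alpha) - 0.5 * alpha' Q alpha
    alpha = np.clip(alpha + lr * grad, 0.0, C)   # ascent step, projected onto the box [0, C]

w = (alpha * y) @ X                      # recover primal weights from the sparse dual solution
train_acc = np.mean(np.sign(X @ w) == y)
```

Projected conjugate gradient, as in the thesis, follows the same box-constrained structure but converges far faster on large training sets.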
9

Liu, Yufeng. "Multicategory psi-learning and support vector machine." Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1085424065.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains x, 71 p.; also includes graphics Includes bibliographical references (p. 69-71). Available online via OhioLINK's ETD Center
10

Merat, Sepehr. "Clustering Via Supervised Support Vector Machines." ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/857.

Abstract:
An SVM-based clustering algorithm is introduced that clusters data with no a priori knowledge of input classes. The algorithm initializes by first running a binary SVM classifier against a data set with each vector in the set randomly labeled. Once this initialization step is complete, the SVM confidence parameters for classification on each of the training instances can be accessed. The lowest confidence data (e.g., the worst of the mislabeled data) then has its labels switched to the other class label. The SVM is then re-run on the data set (with partly re-labeled data). The repetition of the above process improves the separability until there is no misclassification. Variations on this type of clustering approach are shown.

Books on the topic "Support Vector Machine"

1

Christmann, Andreas, ed. Support vector machines. New York: Springer, 2008.

2

Campbell, Colin. Learning with support vector machines. San Rafael, CA: Morgan & Claypool, 2011.

3

Diederich, Joachim, ed. Rule extraction from support vector machines. Berlin: Springer, 2008.

4

Boyle, Brandon H. Support vector machines: Data analysis, machine learning, and applications. Hauppauge, N.Y: Nova Science Publishers, 2011.

5

Hamel, Lutz. Knowledge discovery with support vector machines. Hoboken, N.J: John Wiley & Sons, 2009.

6

[Name missing]. Least squares support vector machines. Singapore: World Scientific, 2002.

7

Ertekin, Şeyda. Algorithms for efficient learning systems: Online and active learning approaches. Saarbrücken: VDM Verlag Dr. Müller, 2009.

8

Support vector machines for pattern classification. 2nd ed. London: Springer, 2010.

9

Joachims, Thorsten. Learning to classify text using support vector machines. Boston: Kluwer Academic Publishers, 2002.

10

Suykens, Johan A. K., Marco Signoretto, and Andreas Argyriou, eds. Regularization, optimization, kernels, and support vector machines. Boca Raton: Taylor & Francis, 2014.


Book chapters on the topic "Support Vector Machine"

1

Zhou, Zhi-Hua. "Support Vector Machine." In Machine Learning, 129–53. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-1967-3_6.

2

Zhang, Dengsheng. "Support Vector Machine." In Texts in Computer Science, 179–205. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-17989-2_8.

3

Ukil, Abhisek. "Support Vector Machine." In Power Systems, 161–226. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-73170-2_4.

4

Suzuki, Joe. "Support Vector Machine." In Statistical Learning with Math and R, 171–92. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7568-6_9.

5

Yu, Hwanjo. "Support Vector Machine." In Encyclopedia of Database Systems, 1–4. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4899-7993-3_557-2.

6

Yu, Hwanjo. "Support Vector Machine." In Encyclopedia of Database Systems, 2890–92. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_557.

7

Adankon, Mathias M., and Mohamed Cheriet. "Support Vector Machine." In Encyclopedia of Biometrics, 1–9. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-3-642-27733-7_299-3.

8

Aberham, Jana, and Fabrizio Kuruc. "Support Vector Machine." In Wie Maschinen lernen, 95–103. Wiesbaden: Springer Fachmedien Wiesbaden, 2019. http://dx.doi.org/10.1007/978-3-658-26763-6_13.

9

El Morr, Christo, Manar Jammal, Hossam Ali-Hassan, and Walid El-Hallak. "Support Vector Machine." In International Series in Operations Research & Management Science, 385–411. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16990-8_13.

10

Li, Hang. "Support Vector Machine." In Machine Learning Methods, 127–77. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3917-6_7.


Conference papers on the topic "Support Vector Machine"

1

Qi, Xiaomin, Sergei Silvestrov, and Talat Nazir. "Data classification with support vector machine and generalized support vector machine." In ICNPAA 2016 WORLD CONGRESS: 11th International Conference on Mathematical Problems in Engineering, Aerospace and Sciences. Author(s), 2017. http://dx.doi.org/10.1063/1.4972718.

2

Le, Trung, Dat Tran, Wanli Ma, Thien Pham, Phuong Duong, and Minh Nguyen. "Robust Support Vector Machine." In 2014 International Joint Conference on Neural Networks (IJCNN). IEEE, 2014. http://dx.doi.org/10.1109/ijcnn.2014.6889587.

3

Lv, Xutao. "Randomized Support Vector Forest." In British Machine Vision Conference 2014. British Machine Vision Association, 2014. http://dx.doi.org/10.5244/c.28.61.

4

Zhang, Qilong, Ganlin Shan, and Xiusheng Duan. "Weighted Support Vector Machine Based Clustering Vector." In 2008 International Conference on Computer Science and Software Engineering. IEEE, 2008. http://dx.doi.org/10.1109/csse.2008.1454.

5

Yao, Chih-Chia, and Pao-Ta Yu. "Effective Training of Support Vector Machines using Extractive Support Vector Algorithm." In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370441.

6

Kong, Bo, and Hong-wei Wang. "Reduced Support Vector Machine Based on Margin Vectors." In 2010 International Conference on Computational Intelligence and Software Engineering (CiSE). IEEE, 2010. http://dx.doi.org/10.1109/cise.2010.5677026.

7

Zhou, Xu, Shu-Xia Lu, Li-Sha Hu, and Meng Zhang. "Imbalanced extreme support vector machine." In 2012 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2012. http://dx.doi.org/10.1109/icmlc.2012.6358971.

8

Fung, Glenn, and Olvi L. Mangasarian. "Proximal support vector machine classifiers." In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, New York, USA: ACM Press, 2001. http://dx.doi.org/10.1145/502512.502527.

9

Kuo, R. J., and C. M. Chen. "Evolutionary-based support vector machine." In 2011 IEEE MTT-S International Microwave Workshop Series on Innovative Wireless Power Transmission: Technologies, Systems, and Applications (IMWS 2011). IEEE, 2011. http://dx.doi.org/10.1109/imws.2011.6114985.

10

Li, WanLing, Peng Chen, and Xiangjun Song. "Improved Weighted Support Vector Machine." In 2016 5th International Conference on Advanced Materials and Computer Science (ICAMCS 2016). Paris, France: Atlantis Press, 2016. http://dx.doi.org/10.2991/icamcs-16.2016.4.


Reports on the topic "Support Vector Machine"

1

Gertz, E. M., and J. D. Griffin. Support vector machine classifiers for large data sets. Office of Scientific and Technical Information (OSTI), January 2006. http://dx.doi.org/10.2172/881587.

2

Alali, Ali. Application of Support Vector Machine in Predicting the Market's Monthly Trend Direction. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.1495.

3

O'Neill, Francis, Kristofer Lasko, and Elena Sava. Snow-covered region improvements to a support vector machine-based semi-automated land cover mapping decision support tool. Engineer Research and Development Center (U.S.), November 2022. http://dx.doi.org/10.21079/11681/45842.

Abstract:
This work builds on the original semi-automated land cover mapping algorithm and quantifies improvements to class accuracy, analyzes the results, and conducts a more in-depth accuracy assessment in conjunction with test sites and the National Land Cover Database (NLCD). This algorithm uses support vector machines trained on data collected across the continental United States to generate a pre-trained model for inclusion into a decision support tool within ArcGIS Pro. Version 2 includes an additional snow cover class and accounts for snow cover effects within the other land cover classes. Overall accuracy across the continental United States for Version 2 is 75% on snow-covered pixels and 69% on snow-free pixels, versus 16% and 66% for Version 1. However, combining the “crop” and “low vegetation” classes improves these values to 86% for snow and 83% for snow-free, compared to 19% and 83% for Version 1. This merging is justified by their spectral similarity, the difference between crop and low vegetation falling closer to land use than land cover. The Version 2 tool is built into a Python-based ArcGIS toolbox, allowing users to leverage the pre-trained model—along with image splitting and parallel processing techniques—for their land cover type map generation needs.
4

Arun, Ramaiah, and Shanmugasundaram Singaravelan. Classification of Brain Tumour in Magnetic Resonance Images Using Hybrid Kernel Based Support Vector Machine. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, October 2019. http://dx.doi.org/10.7546/crabs.2019.10.12.

5

Liu, Y. Support vector machine for the prediction of future trend of Athabasca River (Alberta) flow rate. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 2017. http://dx.doi.org/10.4095/299739.

6

Qi, Yuan. Learning Algorithms for Audio and Video Processing: Independent Component Analysis and Support Vector Machine Based Approaches. Fort Belvoir, VA: Defense Technical Information Center, August 2000. http://dx.doi.org/10.21236/ada458739.

7

Luo, Yuzhou, Rui Wang, Zhongwei Jiang, and Xiqing Zuo. Assessment of the Effect of Health Monitoring System on the Sleep Quality by Using Support Vector Machine. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, January 2018. http://dx.doi.org/10.7546/crabs.2018.01.16.

9

Lasko, Kristofer, and Elena Sava. Semi-automated land cover mapping using an ensemble of support vector machines with moderate resolution imagery integrated into a custom decision support tool. Engineer Research and Development Center (U.S.), November 2021. http://dx.doi.org/10.21079/11681/42402.

Abstract:
Land cover type is a fundamental remote sensing-derived variable for terrain analysis and environmental mapping applications. The currently available products are produced only for a single season or a specific year. Some of these products have a coarse resolution and quickly become outdated, as land cover type can undergo significant change over a short time period. In order to enable on-demand generation of timely and accurate land cover type products, we developed a sensor-agnostic framework leveraging pre-trained machine learning models. We also generated land cover models for Sentinel-2 (20m) and Landsat 8 imagery (30m) using either a single date of imagery or two dates of imagery for mapping land cover type. The two-date model includes 11 land cover type classes, whereas the single-date model contains 6 classes. The models’ overall accuracies were 84% (Sentinel-2 single date), 82% (Sentinel-2 two date), and 86% (Landsat 8 two date) across the continental United States. The three different models were built into an ArcGIS Pro Python toolbox to enable a semi-automated workflow for end users to generate their own land cover type maps on demand. The toolboxes were built using parallel processing and image-splitting techniques to enable faster computation and for use on less-powerful machines.
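As a hedged illustration of the core classification step the report describes, a multiclass SVM can map per-pixel band values to land cover classes. Synthetic "pixel spectra" stand in for Sentinel-2 or Landsat 8 bands here; the class count, band count, and noise level are assumptions, not the toolbox's pre-trained model:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes, n_bands = 6, 4          # hypothetical: 6 land cover classes, 4 spectral bands
centers = rng.uniform(0.0, 1.0, size=(n_classes, n_bands))   # mean reflectance per class
labels = rng.integers(0, n_classes, size=1200)
pixels = centers[labels] + rng.normal(0.0, 0.05, size=(1200, n_bands))  # noisy band values

X_train, X_test, y_train, y_test = train_test_split(pixels, labels, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))   # scale bands, then classify
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)
```

In the report's workflow the trained model is applied to every pixel of a split image in parallel; the sketch covers only the per-pixel classification it is built on.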
10

Alwan, Iktimal, Dennis D. Spencer, and Rafeed Alkawadri. Comparison of Machine Learning Algorithms in Sensorimotor Functional Mapping. Progress in Neurobiology, December 2023. http://dx.doi.org/10.60124/j.pneuro.2023.30.03.

Abstract:
Objective: To compare the performance of popular machine learning algorithms (ML) in mapping the sensorimotor cortex (SM) and identifying the anterior lip of the central sulcus (CS). Methods: We evaluated support vector machines (SVMs), random forest (RF), decision trees (DT), single layer perceptron (SLP), and multilayer perceptron (MLP) against standard logistic regression (LR) to identify the SM cortex, employing validated features from six minutes of NREM sleep icEEG data and applying standard common hyperparameters and 10-fold cross-validation. Each algorithm was tested using vetted features based on the statistical significance of classical univariate analysis (p<0.05) and extended () 17 features representing power/coherence of different frequency bands, entropy, and interelectrode-based distance. The analysis was performed before and after weight adjustment for imbalanced data (w). Results: 7 subjects and 376 contacts were included. Before optimization, ML algorithms performed comparably employing conventional features (median CS accuracy: 0.89, IQR [0.88-0.9]). After optimization, neural networks outperformed others in terms of accuracy (MLP: 0.86), the area under the curve (AUC) (SLPw, MLPw, MLP: 0.91), recall (SLPw: 0.82, MLPw: 0.81), precision (SLPw: 0.84), and F1-scores (SLPw: 0.82). SVM achieved the best specificity performance. Extending the number of features and adjusting the weights improved recall, precision, and F1-scores by 48.27%, 27.15%, and 39.15%, respectively, with gains or no significant losses in specificity and AUC across CS and Function (correlation r=0.71 between the two clinical scenarios in all performance metrics, p<0.001). Interpretation: Computational passive sensorimotor mapping is feasible and reliable. Feature extension and weight adjustments improve the performance and counterbalance the accuracy paradox. Optimized neural networks outperform other ML algorithms even in binary classification tasks.
The best-performing models and the MATLAB® routine employed in signal processing are available to the public at (Link 1).
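The comparison protocol in this abstract — several classifiers under 10-fold cross-validation, with and without class-weight adjustment for imbalance — can be sketched generically. Synthetic imbalanced data replaces the icEEG features, and the model set is a reduced, assumed subset of the algorithms the study evaluates:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Imbalanced binary problem: ~80% majority class, ~20% minority class
X, y = make_classification(n_samples=400, weights=[0.8, 0.2], random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "SVMw": SVC(class_weight="balanced"),          # weight-adjusted variant ("w")
    "RF": RandomForestClassifier(random_state=0),
}
# Mean accuracy over 10 folds for each model
scores = {name: cross_val_score(m, X, y, cv=10).mean() for name, m in models.items()}
```

As the abstract notes, plain accuracy can be misleading on imbalanced data (the "accuracy paradox"), so recall, precision, F1, and AUC would be compared alongside it in practice.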
