Ready-made bibliography on the topic "Support Vector Machine"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Support Vector Machine".

Next to every work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, provided the relevant details are present in the work's metadata.

Journal articles on the topic "Support Vector Machine"

1

Xia, Tian. "Support Vector Machine Based Educational Resources Classification". International Journal of Information and Education Technology 6, no. 11 (2016): 880–83. http://dx.doi.org/10.7763/ijiet.2016.v6.809.

2

BE, R. Aruna Sankari. "Cervical Cancer Detection Using Support Vector Machine". International Journal of Emerging Trends in Science and Technology 04, no. 03 (31.03.2017): 5033–38. http://dx.doi.org/10.18535/ijetst/v4i3.08.

3

Heo, Gyeong-Yong, and Seong-Hoon Kim. "Context-Aware Fusion with Support Vector Machine". Journal of the Korea Society of Computer and Information 19, no. 6 (30.06.2014): 19–26. http://dx.doi.org/10.9708/jksci.2014.19.6.019.

4

Huimin, Yao. "Research on Parallel Support Vector Machine Based on Spark Big Data Platform". Scientific Programming 2021 (17.12.2021): 1–9. http://dx.doi.org/10.1155/2021/7998417.

Abstract:
With the development of cloud computing and distributed cluster technology, the concept of big data has been expanded and extended in terms of capacity and value, and machine learning technology has also received unprecedented attention in recent years. Traditional machine learning algorithms cannot solve the problem of effective parallelization, so a parallelization support vector machine based on Spark big data platform is proposed. Firstly, the big data platform is designed with Lambda architecture, which is divided into three layers: Batch Layer, Serving Layer, and Speed Layer. Secondly, in order to improve the training efficiency of support vector machines on large-scale data, when merging two support vector machines, the “special points” other than support vectors are considered, that is, the points where the nonsupport vectors in one subset violate the training results of the other subset, and a cross-validation merging algorithm is proposed. Then, a parallelized support vector machine based on cross-validation is proposed, and the parallelization process of the support vector machine is realized on the Spark platform. Finally, experiments on different datasets verify the effectiveness and stability of the proposed method. Experimental results show that the proposed parallelized support vector machine has outstanding performance in speed-up ratio, training time, and prediction accuracy.
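For readers skimming this entry, the following Python sketch illustrates, on a single machine, the merging idea the abstract describes: keep each sub-model's support vectors plus the "special points" of one subset that the other subset's model misclassifies, then retrain on the union. It assumes scikit-learn and NumPy; the function name and parameters are illustrative and are not taken from the cited paper or its Spark implementation.

import numpy as np
from sklearn.svm import SVC

def merge_and_retrain(X1, y1, X2, y2, C=1.0, kernel="rbf"):
    # Illustrative sketch only, not the paper's Spark code: train an SVM on each
    # partition, then retrain on the union of their support vectors plus the
    # points that the *other* partition's model gets wrong ("special points").
    svm1 = SVC(C=C, kernel=kernel).fit(X1, y1)
    svm2 = SVC(C=C, kernel=kernel).fit(X2, y2)

    # Support vectors of each sub-model.
    sv1_X, sv1_y = X1[svm1.support_], y1[svm1.support_]
    sv2_X, sv2_y = X2[svm2.support_], y2[svm2.support_]

    # Points of one partition that violate the other partition's training result.
    viol1 = svm2.predict(X1) != y1
    viol2 = svm1.predict(X2) != y2

    X_merge = np.vstack([sv1_X, sv2_X, X1[viol1], X2[viol2]])
    y_merge = np.concatenate([sv1_y, sv2_y, y1[viol1], y2[viol2]])
    return SVC(C=C, kernel=kernel).fit(X_merge, y_merge)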

5

V., Dr Padmanabha Reddy. "Human Cognitive State classification using Support Vector Machine". Journal of Advanced Research in Dynamical and Control Systems 12, no. 01-Special Issue (13.02.2020): 46–54. http://dx.doi.org/10.5373/jardcs/v12sp1/20201045.

6

Jung, Kang-Mo. "Robust Algorithm for Multiclass Weighted Support Vector Machine". SIJ Transactions on Advances in Space Research & Earth Exploration 4, no. 3 (10.06.2016): 1–5. http://dx.doi.org/10.9756/sijasree/v4i3/0203430402.

7

Dhaifallah, Mujahed Al, and K. S. Nisar. "Support Vector Machine Identification of Subspace Hammerstein Models". International Journal of Computer Theory and Engineering 7, no. 1 (February 2014): 9–15. http://dx.doi.org/10.7763/ijcte.2015.v7.922.

8

YANG, Zhi-Min, Yuan-Hai SHAO, and Jing LIANG. "Unascertained Support Vector Machine". Acta Automatica Sinica 39, no. 6 (25.03.2014): 895–901. http://dx.doi.org/10.3724/sp.j.1004.2013.00895.

9

Zhang, L., W. Zhou, and L. Jiao. "Wavelet Support Vector Machine". IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics) 34, no. 1 (February 2004): 34–39. http://dx.doi.org/10.1109/tsmcb.2003.811113.

10

Navia-Vázquez, A., and E. Parrado-Hernández. "Support vector machine interpretation". Neurocomputing 69, no. 13-15 (August 2006): 1754–59. http://dx.doi.org/10.1016/j.neucom.2005.12.118.

Doctoral dissertations on the topic "Support Vector Machine"

1

Cardamone, Dario. "Support Vector Machine a Machine Learning Algorithm". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Abstract:
This thesis considers the Support Vector Machine classification algorithm. More specifically, it studies the algorithm's formulation as a Mixed Integer Program optimization problem for the supervised binary classification of a data set.

2

McChesney, Charlie. "External Support Vector Machine Clustering". ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/409.

Abstract:
The external-Support Vector Machine (SVM) clustering algorithm clusters data vectors with no a priori knowledge of each vector's class. The algorithm works by first running a binary SVM against a data set, with each vector in the set randomly labeled, until the SVM converges. It then relabels data points that are mislabeled and a large distance from the SVM hyperplane. The SVM is then iteratively rerun followed by more label swapping until no more progress can be made. After this process, a high percentage of the previously unknown class labels of the data set will be known. With sub-cluster identification upon iterating the overall algorithm on the positive and negative clusters identified (until the clusters are no longer separable into sub-clusters), this method provides a way to cluster data sets without prior knowledge of the data's clustering characteristics, or the number of clusters.
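As a reader's aid, here is a compact Python sketch of the iterative relabeling loop summarized above: random initial labels, train a binary SVM, swap the labels of confidently misclassified points, repeat until no progress. It assumes scikit-learn; the stopping rule, swap fraction, and kernel are our own illustrative choices rather than the thesis's settings.

import numpy as np
from sklearn.svm import SVC

def svm_cluster(X, n_iter=20, swap_frac=0.05, seed=0):
    # Illustrative sketch of SVM-based clustering by iterative label swapping.
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, size=len(X))             # random initial labels
    for _ in range(n_iter):
        if len(np.unique(labels)) < 2:
            break                                         # degenerate labeling; stop
        svm = SVC(kernel="rbf").fit(X, labels)
        margin = svm.decision_function(X)                 # signed distance-like score
        wrong = np.where(svm.predict(X) != labels)[0]     # mislabeled w.r.t. current SVM
        if len(wrong) == 0:
            break                                         # converged: no more progress
        k = max(1, int(swap_frac * len(X)))
        worst = wrong[np.argsort(-np.abs(margin[wrong]))[:k]]
        labels[worst] = 1 - labels[worst]                 # swap the worst offenders
    return labels                                         # cluster assignments (0/1)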

3

Armond, Kenneth C. Jr. "Distributed Support Vector Machine Learning". ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/711.

Abstract:
Support Vector Machines (SVMs) are used for a growing number of applications. A fundamental constraint on SVM learning is the management of the training set. This is because the order of computations goes as the square of the size of the training set. Typically, training sets of 1000 (500 positives and 500 negatives, for example) can be managed on a PC without hard-drive thrashing. Training sets of 10,000 however, simply cannot be managed with PC-based resources. For this reason most SVM implementations must contend with some kind of chunking process to train parts of the data at a time (10 chunks of 1000, for example, to learn the 10,000). Sequential and multi-threaded chunking methods provide a way to run the SVM on large datasets while retaining accuracy. The multi-threaded distributed SVM described in this thesis is implemented using Java RMI, and has been developed to run on a network of multi-core/multi-processor computers.
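A minimal Python sketch of the sequential chunking idea mentioned here: train on one chunk, carry the resulting support vectors into the next chunk, and repeat. It assumes scikit-learn and NumPy and is only schematic; the thesis's multi-threaded Java RMI implementation is not reproduced.

import numpy as np
from sklearn.svm import SVC

def chunked_svm(X, y, chunk_size=1000, C=1.0, kernel="rbf"):
    # Sequential chunking: support vectors found so far are carried forward and
    # retrained together with each new chunk of data.
    carry_X = np.empty((0, X.shape[1]))
    carry_y = np.empty((0,), dtype=y.dtype)
    model = None
    for start in range(0, len(X), chunk_size):
        Xc = np.vstack([carry_X, X[start:start + chunk_size]])
        yc = np.concatenate([carry_y, y[start:start + chunk_size]])
        model = SVC(C=C, kernel=kernel).fit(Xc, yc)
        carry_X, carry_y = Xc[model.support_], yc[model.support_]   # keep only SVs
    return model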

4

Zigic, Ljiljana. "Direct L2 Support Vector Machine". VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4274.

Abstract:
This dissertation introduces a novel model for solving the L2 support vector machine dubbed Direct L2 Support Vector Machine (DL2 SVM). DL2 SVM represents a new classification model that transforms the SVM's underlying quadratic programming problem into a system of linear equations with nonnegativity constraints. The devised system of linear equations has a symmetric positive definite matrix and a solution vector has to be nonnegative. Furthermore, this dissertation introduces a novel algorithm dubbed Non-Negative Iterative Single Data Algorithm (NN ISDA) which solves the underlying DL2 SVM's constrained system of equations. This solver shows significant speedup compared to several other state-of-the-art algorithms. The training time improvement is achieved at no cost, in other words, the accuracy is kept at the same level. All the experiments that support this claim were conducted on various datasets within the strict double cross-validation scheme. DL2 SVM solved with NN ISDA has faster training time on both medium and large datasets. In addition to a comprehensive DL2 SVM model we introduce and derive its three variants. Three different solvers for the DL2's system of linear equations with nonnegativity constraints were implemented, presented and compared in this dissertation.
5

Park, Yongwon Baskiyar Sanjeev. "Dynamic task scheduling onto heterogeneous machines using Support Vector Machine". Auburn, Ala, 2008. http://repo.lib.auburn.edu/EtdRoot/2008/SPRING/Computer_Science_and_Software_Engineering/Thesis/Park_Yong_50.pdf.

6

Tsang, Wai-Hung. "Scaling up support vector machines /". View abstract or full-text, 2007. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20TSANG.

7

Perez, Daniel Antonio. "Performance comparison of support vector machine and relevance vector machine classifiers for functional MRI data". Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34858.

Abstract:
Multivariate pattern analysis (MVPA) of fMRI data has been growing in popularity due to its sensitivity to networks of brain activation. It is performed in a predictive modeling framework which is natural for implementing brain state prediction and real-time fMRI applications such as brain computer interfaces. Support vector machines (SVM) have been particularly popular for MVPA owing to their high prediction accuracy even with noisy datasets. Recent work has proposed the use of relevance vector machines (RVM) as an alternative to SVM. RVMs are particularly attractive in time sensitive applications such as real-time fMRI since they tend to perform classification faster than SVMs. Despite the use of both methods in fMRI research, little has been done to compare the performance of these two techniques. This study compares RVM to SVM in terms of time and accuracy to determine which is better suited to real-time applications.
8

Wen, Tong 1970. "Support Vector Machine algorithms : analysis and applications". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/8404.

Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 2002.
Includes bibliographical references (p. 89-97).
Support Vector Machines (SVMs) have attracted recent attention as a learning technique to attack classification problems. The goal of my thesis work is to improve computational algorithms as well as the mathematical understanding of SVMs, so that they can be easily applied to real problems. SVMs solve classification problems by learning from training examples. From the geometry, it is easy to formulate the finding of SVM classifiers as a linearly constrained Quadratic Programming (QP) problem. However, in practice its dual problem is actually computed. An important property of the dual QP problem is that its solution is sparse. The training examples that determine the SVM classifier are known as support vectors (SVs). Motivated by the geometric derivation of the primal QP problem, we investigate how the dual problem is related to the geometry of SVs. This investigation leads to a geometric interpretation of the scaling property of SVMs and an algorithm to further compress the SVs. A random model for the training examples connects the Hessian matrix of the dual QP problem to Wishart matrices. After deriving the distributions of the elements of the inverse Wishart matrix W_n^{-1}(n, nI), we give a conjecture about the summation of the elements of W_n^{-1}(n, nI). It becomes challenging to solve the dual QP problem when the training set is large. We develop a fast algorithm for solving this problem. Numerical experiments show that the MATLAB implementation of this projected Conjugate Gradient algorithm is competitive with benchmark C/C++ codes such as SVMlight and SvmFu. Furthermore, we apply SVMs to time series data. In this application, SVMs are used to predict the movement of the stock market. Our results show that using SVMs has the potential to outperform the solution based on the most widely used geometric Brownian motion model of stock prices.
by Tong Wen.
Ph.D.
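Since the abstract repeatedly refers to the primal and dual QP formulations without stating them, the standard soft-margin pair is reproduced below for reference (textbook form in LaTeX, not taken from the thesis):

% Primal soft-margin SVM: a linearly constrained quadratic program.
\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\lVert w\rVert^2 + C\sum_{i=1}^{n}\xi_i
\quad \text{s.t.} \quad y_i\,(w^{\top}x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0,\ \ i=1,\dots,n.

% Dual QP solved in practice; its solution \alpha is sparse, and the training
% examples with \alpha_i > 0 are the support vectors.
\max_{\alpha}\ \sum_{i=1}^{n}\alpha_i - \tfrac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j\,y_i y_j\,K(x_i, x_j)
\quad \text{s.t.} \quad 0 \le \alpha_i \le C,\ \ \sum_{i=1}^{n}\alpha_i y_i = 0.
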
9

Liu, Yufeng. "Multicategory psi-learning and support vector machine". Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1085424065.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains x, 71 p.; also includes graphics. Includes bibliographical references (p. 69-71). Available online via OhioLINK's ETD Center.
10

Merat, Sepehr. "Clustering Via Supervised Support Vector Machines". ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/857.

Abstract:
An SVM-based clustering algorithm is introduced that clusters data with no a priori knowledge of input classes. The algorithm initializes by first running a binary SVM classifier against a data set with each vector in the set randomly labeled. Once this initialization step is complete, the SVM confidence parameters for classification on each of the training instances can be accessed. The lowest confidence data (e.g., the worst of the mislabeled data) then has its labels switched to the other class label. The SVM is then re-run on the data set (with partly re-labeled data). The repetition of the above process improves the separability until there is no misclassification. Variations on this type of clustering approach are shown.

Books on the topic "Support Vector Machine"

1

Christmann, Andreas, ed. Support vector machines. New York: Springer, 2008.

2

Campbell, Colin. Learning with support vector machines. San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA): Morgan & Claypool, 2011.

3

Diederich, Joachim, ed. Rule extraction from support vector machines. Berlin: Springer, 2008.

4

Boyle, Brandon H. Support vector machines: Data analysis, machine learning, and applications. Hauppauge, N.Y: Nova Science Publishers, 2011.

5

Hamel, Lutz. Knowledge discovery with support vector machines. Hoboken, N.J: John Wiley & Sons, 2009.

6

[name missing]. Least squares support vector machines. Singapore: World Scientific, 2002.

7

Ertekin, Şeyda. Algorithms for efficient learning systems: Online and active learning approaches. Saarbrücken: VDM Verlag Dr. Müller, 2009.

8

Support vector machines for pattern classification. 2nd ed. London: Springer, 2010.

9

Joachims, Thorsten. Learning to classify text using support vector machines. Boston: Kluwer Academic Publishers, 2002.

10

Suykens, Johan A. K., Marco Signoretto, and Andreas Argyriou, eds. Regularization, optimization, kernels, and support vector machines. Boca Raton: Taylor & Francis, 2014.

Book chapters on the topic "Support Vector Machine"

1

Zhou, Zhi-Hua. "Support Vector Machine". In Machine Learning, 129–53. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-1967-3_6.

2

Zhang, Dengsheng. "Support Vector Machine". In Texts in Computer Science, 179–205. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-17989-2_8.

3

Ukil, Abhisek. "Support Vector Machine". In Power Systems, 161–226. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-73170-2_4.

4

Suzuki, Joe. "Support Vector Machine". In Statistical Learning with Math and R, 171–92. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7568-6_9.

5

Yu, Hwanjo. "Support Vector Machine". In Encyclopedia of Database Systems, 1–4. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4899-7993-3_557-2.

6

Yu, Hwanjo. "Support Vector Machine". In Encyclopedia of Database Systems, 2890–92. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_557.

7

Adankon, Mathias M., and Mohamed Cheriet. "Support Vector Machine". In Encyclopedia of Biometrics, 1–9. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-3-642-27733-7_299-3.

8

Aberham, Jana, and Fabrizio Kuruc. "Support Vector Machine". In Wie Maschinen lernen, 95–103. Wiesbaden: Springer Fachmedien Wiesbaden, 2019. http://dx.doi.org/10.1007/978-3-658-26763-6_13.

9

El Morr, Christo, Manar Jammal, Hossam Ali-Hassan, and Walid El-Hallak. "Support Vector Machine". In International Series in Operations Research & Management Science, 385–411. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16990-8_13.

10

Li, Hang. "Support Vector Machine". In Machine Learning Methods, 127–77. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3917-6_7.

Conference abstracts on the topic "Support Vector Machine"

1

Qi, Xiaomin, Sergei Silvestrov, and Talat Nazir. "Data classification with support vector machine and generalized support vector machine". In ICNPAA 2016 WORLD CONGRESS: 11th International Conference on Mathematical Problems in Engineering, Aerospace and Sciences. Author(s), 2017. http://dx.doi.org/10.1063/1.4972718.

2

Le, Trung, Dat Tran, Wanli Ma, Thien Pham, Phuong Duong, and Minh Nguyen. "Robust Support Vector Machine". In 2014 International Joint Conference on Neural Networks (IJCNN). IEEE, 2014. http://dx.doi.org/10.1109/ijcnn.2014.6889587.

3

Lv, Xutao. "Randomized Support Vector Forest". In British Machine Vision Conference 2014. British Machine Vision Association, 2014. http://dx.doi.org/10.5244/c.28.61.

4

Qilong, Zhang, Shan Ganlin, and Duan Xiusheng. "Weighted Support Vector Machine Based Clustering Vector". In 2008 International Conference on Computer Science and Software Engineering. IEEE, 2008. http://dx.doi.org/10.1109/csse.2008.1454.

5

Yao, Chih-Chia, and Pao-Ta Yu. "Effective Training of Support Vector Machines using Extractive Support Vector Algorithm". In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370441.

6

Kong, Bo, and Hong-wei Wang. "Reduced Support Vector Machine Based on Margin Vectors". In 2010 International Conference on Computational Intelligence and Software Engineering (CiSE). IEEE, 2010. http://dx.doi.org/10.1109/cise.2010.5677026.

7

Xu Zhou, Shu-Xia Lu, Li-Sha Hu, and Meng Zhang. "Imbalanced extreme support vector machine". In 2012 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2012. http://dx.doi.org/10.1109/icmlc.2012.6358971.

8

Fung, Glenn, and Olvi L. Mangasarian. "Proximal support vector machine classifiers". In the seventh ACM SIGKDD international conference. New York, New York, USA: ACM Press, 2001. http://dx.doi.org/10.1145/502512.502527.

9

Kuo, R. J., and C. M. Chen. "Evolutionary-based support vector machine". In 2011 IEEE MTT-S International Microwave Workshop Series on Innovative Wireless Power Transmission: Technologies, Systems, and Applications (IMWS 2011). IEEE, 2011. http://dx.doi.org/10.1109/imws.2011.6114985.

10

Li, WanLing, Peng Chen, and Xiangjun Song. "Improved Weighted Support Vector Machine". In 2016 5th International Conference on Advanced Materials and Computer Science (ICAMCS 2016). Paris, France: Atlantis Press, 2016. http://dx.doi.org/10.2991/icamcs-16.2016.4.

Organizational reports on the topic "Support Vector Machine"

1

Gertz, E. M., and J. D. Griffin. Support vector machine classifiers for large data sets. Office of Scientific and Technical Information (OSTI), January 2006. http://dx.doi.org/10.2172/881587.

2

Alali, Ali. Application of Support Vector Machine in Predicting the Market's Monthly Trend Direction. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.1495.
3

O'Neill, Francis, Kristofer Lasko, and Elena Sava. Snow-covered region improvements to a support vector machine-based semi-automated land cover mapping decision support tool. Engineer Research and Development Center (U.S.), November 2022. http://dx.doi.org/10.21079/11681/45842.

Abstract:
This work builds on the original semi-automated land cover mapping algorithm and quantifies improvements to class accuracy, analyzes the results, and conducts a more in-depth accuracy assessment in conjunction with test sites and the National Land Cover Database (NLCD). This algorithm uses support vector machines trained on data collected across the continental United States to generate a pre-trained model for inclusion into a decision support tool within ArcGIS Pro. Version 2 includes an additional snow cover class and accounts for snow cover effects within the other land cover classes. Overall accuracy across the continental United States for Version 2 is 75% on snow-covered pixels and 69% on snow-free pixels, versus 16% and 66% for Version 1. However, combining the “crop” and “low vegetation” classes improves these values to 86% for snow and 83% for snow-free, compared to 19% and 83% for Version 1. This merging is justified by their spectral similarity, the difference between crop and low vegetation falling closer to land use than land cover. The Version 2 tool is built into a Python-based ArcGIS toolbox, allowing users to leverage the pre-trained model—along with image splitting and parallel processing techniques—for their land cover type map generation needs.

4

Arun, Ramaiah, and Shanmugasundaram Singaravelan. Classification of Brain Tumour in Magnetic Resonance Images Using Hybrid Kernel Based Support Vector Machine. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, October 2019. http://dx.doi.org/10.7546/crabs.2019.10.12.

5

Liu, Y. Support vector machine for the prediction of future trend of Athabasca River (Alberta) flow rate. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 2017. http://dx.doi.org/10.4095/299739.

6

Qi, Yuan. Learning Algorithms for Audio and Video Processing: Independent Component Analysis and Support Vector Machine Based Approaches. Fort Belvoir, VA: Defense Technical Information Center, August 2000. http://dx.doi.org/10.21236/ada458739.

7

Luo, Yuzhou, Rui Wang, Zhongwei Jiang, and Xiqing Zuo. Assessment of the Effect of Health Monitoring System on the Sleep Quality by Using Support Vector Machine. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, January 2018. http://dx.doi.org/10.7546/crabs.2018.01.16.

8

Luo, Yuzhou, Rui Wang, Zhongwei Jiang, and Xiqing Zuo. Assessment of the Effect of Health Monitoring System on the Sleep Quality by Using Support Vector Machine. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, January 2018. http://dx.doi.org/10.7546/grabs2018.1.16.

9

Lasko, Kristofer, and Elena Sava. Semi-automated land cover mapping using an ensemble of support vector machines with moderate resolution imagery integrated into a custom decision support tool. Engineer Research and Development Center (U.S.), November 2021. http://dx.doi.org/10.21079/11681/42402.

Abstract:
Land cover type is a fundamental remote sensing-derived variable for terrain analysis and environmental mapping applications. The currently available products are produced only for a single season or a specific year. Some of these products have a coarse resolution and quickly become outdated, as land cover type can undergo significant change over a short time period. In order to enable on-demand generation of timely and accurate land cover type products, we developed a sensor-agnostic framework leveraging pre-trained machine learning models. We also generated land cover models for Sentinel-2 (20m) and Landsat 8 imagery (30m) using either a single date of imagery or two dates of imagery for mapping land cover type. The two-date model includes 11 land cover type classes, whereas the single-date model contains 6 classes. The models’ overall accuracies were 84% (Sentinel-2 single date), 82% (Sentinel-2 two date), and 86% (Landsat 8 two date) across the continental United States. The three different models were built into an ArcGIS Pro Python toolbox to enable a semi-automated workflow for end users to generate their own land cover type maps on demand. The toolboxes were built using parallel processing and image-splitting techniques to enable faster computation and for use on less-powerful machines.

10

Alwan, Iktimal, Dennis D. Spencer, and Rafeed Alkawadri. Comparison of Machine Learning Algorithms in Sensorimotor Functional Mapping. Progress in Neurobiology, December 2023. http://dx.doi.org/10.60124/j.pneuro.2023.30.03.

Abstract:
Objective: To compare the performance of popular machine learning algorithms (ML) in mapping the sensorimotor cortex (SM) and identifying the anterior lip of the central sulcus (CS). Methods: We evaluated support vector machines (SVMs), random forest (RF), decision trees (DT), single layer perceptron (SLP), and multilayer perceptron (MLP) against standard logistic regression (LR) to identify the SM cortex employing validated features from six-minute of NREM sleep icEEG data and applying standard common hyperparameters and 10-fold cross-validation. Each algorithm was tested using vetted features based on the statistical significance of classical univariate analysis (p<0.05) and extended () 17 features representing power/coherence of different frequency bands, entropy, and interelectrode-based distance. The analysis was performed before and after weight adjustment for imbalanced data (w). Results: 7 subjects and 376 contacts were included. Before optimization, ML algorithms performed comparably employing conventional features (median CS accuracy: 0.89, IQR [0.88-0.9]). After optimization, neural networks outperformed others in means of accuracy (MLP: 0.86), the area under the curve (AUC) (SLPw, MLPw, MLP: 0.91), recall (SLPw: 0.82, MLPw: 0.81), precision (SLPw: 0.84), and F1-scores (SLPw: 0.82). SVM achieved the best specificity performance. Extending the number of features and adjusting the weights improved recall, precision, and F1-scores by 48.27%, 27.15%, and 39.15%, respectively, with gains or no significant losses in specificity and AUC across CS and Function (correlation r=0.71 between the two clinical scenarios in all performance metrics, p<0.001). Interpretation: Computational passive sensorimotor mapping is feasible and reliable. Feature extension and weight adjustments improve the performance and counterbalance the accuracy paradox. Optimized neural networks outperform other ML algorithms even in binary classification tasks. The best-performing models and the MATLAB® routine employed in signal processing are available to the public at (Link 1).
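For orientation only, a short Python sketch of the kind of evaluation protocol this abstract describes: several classifiers with shared settings, 10-fold cross-validation, and class-weight adjustment for imbalanced data. It uses scikit-learn on synthetic data; the feature set, metric, and model choices are illustrative and do not reproduce the authors' MATLAB pipeline.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic, imbalanced stand-in for the 17-feature data described above.
X, y = make_classification(n_samples=400, n_features=17, weights=[0.8, 0.2], random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000, class_weight="balanced"),
    "SVM": SVC(class_weight="balanced"),
    "RF": RandomForestClassifier(class_weight="balanced", random_state=0),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),   # MLPClassifier has no class_weight option
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="f1")   # 10-fold CV, F1 score
    print(f"{name}: mean F1 = {scores.mean():.2f}")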