A selection of scientific literature on the topic "Unary classifier"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Unary classifier".

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Unary classifier"

1

Rangkuti, Rizki Perdana, Vektor Dewanto, Aprinaldi, and Wisnu Jatmiko. "Utilizing Google Images for Training Classifiers in CRF-Based Semantic Segmentation". Journal of Advanced Computational Intelligence and Intelligent Informatics 20, no. 3 (May 19, 2016): 455–61. http://dx.doi.org/10.20965/jaciii.2016.p0455.

Full text of the source
Annotation:
One promising approach to pixel-wise semantic segmentation is based on conditional random fields (CRFs). CRF-based semantic segmentation requires ground-truth annotations to train, in a supervised manner, the classifier that generates the unary potentials. However, the number of (publicly) annotated images available for training is small. We observe that the Internet can provide relevant images for any given keywords. Our idea is to convert keyword-related images into pixel-wise annotated images and then use them as training data. In particular, we rely on saliency filters to identify the salient object (foreground) of a retrieved image, which mostly agrees with the given keyword. We use this saliency information in foreground/background CRF-based semantic segmentation to obtain pixel-wise ground-truth annotations. Experimental results show that training data from Google Images improve both the learning performance and the accuracy of semantic segmentation. This suggests that our proposed method is promising for harvesting substantial training data from the Internet for training the classifier in CRF-based semantic segmentation.
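The conversion of classifier outputs into CRF unary potentials described above is commonly done via negative log-probabilities, and saliency maps can be thresholded into foreground/background pseudo-annotations. A minimal numpy sketch (function names are ours, not from the paper):

```python
import numpy as np

def unary_potentials(prob_map, eps=1e-10):
    """Turn per-pixel class probabilities (H, W, C) from a trained
    classifier into CRF unary potentials via negative log-likelihood."""
    return -np.log(np.clip(prob_map, eps, 1.0))

def pseudo_labels_from_saliency(saliency, threshold=0.5):
    """Binarize a saliency map (H, W) into foreground (1) / background (0)
    pseudo-annotations; the foreground is assumed to match the keyword."""
    return (saliency >= threshold).astype(np.int64)
```

The most probable class receives the lowest unary cost, so an energy-minimizing CRF solver prefers it unless the pairwise terms vote otherwise.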
APA, Harvard, Vancouver, ISO, and other citation styles
2

MONSERRAT, M., F. ROSSELLÓ, and J. TORRENS. "WHEN IS A CATEGORY OF MANY-SORTED PARTIAL ALGEBRAS CARTESIAN-CLOSED?" International Journal of Foundations of Computer Science 06, no. 01 (March 1995): 51–66. http://dx.doi.org/10.1142/s0129054195000056.

Annotation:
In this paper we study the cartesian closedness of the five most natural categories whose objects are all partial many-sorted algebras of a given signature. In particular, we prove that, of these categories, only the usual one and the one whose morphisms are the closed homomorphisms can be cartesian closed. The first is cartesian closed exactly when the signature contains no operation symbol, in which case it is a slice category of sets. The second is cartesian closed if and only if all operations are unary; in this case, we identify it as a functor category and exhibit some relevant constructions in it, such as its subobject classifier and its exponentials.
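To make the unary case concrete: a total one-sorted algebra all of whose operations are unary is the same thing as a set with an action of the free monoid $M$ generated by the operation symbols, so such algebras form the presheaf topos of $M$-sets. For this special total case (a simplification of the paper's partial many-sorted setting), the constructions mentioned in the abstract take a familiar form:

```latex
% M-sets: exponential object and subobject classifier (total, one-sorted case)
Y^X \;\cong\; \mathrm{Hom}_M(M \times X,\; Y), \qquad
\Omega \;=\; \{\, S \subseteq M \;:\; MS \subseteq S \,\} \quad \text{(left ideals of } M\text{)}
```

with the action $n \cdot S = \{\, m \in M : mn \in S \,\}$ on $\Omega$ and characteristic maps $\chi_A(x) = \{\, m \in M : m \cdot x \in A \,\}$.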
3

Novack, T., and U. Stilla. "DISCRIMINATION OF URBAN SETTLEMENT TYPES BASED ON SPACE-BORNE SAR DATASETS AND A CONDITIONAL RANDOM FIELDS MODEL". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3/W4 (March 11, 2015): 143–48. http://dx.doi.org/10.5194/isprsannals-ii-3-w4-143-2015.

Annotation:
In this work we focus on the classification of Urban Settlement Types (USTs) based on two datasets from the TerraSAR-X satellite acquired at ascending and descending look directions. These datasets comprise the intensity, amplitude, and coherence images from the ascending and descending passes. In accordance with most official UST maps, the urban blocks of our study site were taken as the elements to be classified. The UST classes considered in this paper are: Vegetated Areas, Single-Family Houses, and Commercial and Residential Buildings. Three different groups of image attributes were used, namely: Relative Areas, Histograms of Oriented Gradients, and geometrical and contextual attributes extracted from the nodes of a Max-Tree Morphological Profile. These image attributes were fed to three powerful soft multi-class classification algorithms, so that each classifier outputs a membership value for each of the classes. These membership values were then treated as the potentials of the unary factors of a Conditional Random Fields (CRF) model. The pairwise factors of the CRF model were parameterised with a Potts function. The reclassification performed with the CRF model enabled a slight increase of the classification accuracy, from 76% to 79%, over 1926 urban blocks.
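The energy minimized in such a model adds unary costs from the soft classifier memberships to Potts pairwise costs over adjacent urban blocks. A small numpy sketch of the energy (a generic illustration, not the authors' code):

```python
import numpy as np

def potts_energy(labels, memberships, adjacency, beta=1.0, eps=1e-10):
    """CRF energy of a labelling of blocks: unary factors are negative log
    soft-classifier memberships, and pairwise factors add a constant Potts
    penalty beta for each pair of neighbouring blocks with unequal labels."""
    labels = np.asarray(labels)
    unary = -np.log(np.clip(memberships[np.arange(len(labels)), labels], eps, 1.0))
    pairwise = beta * sum(1 for i, j in adjacency if labels[i] != labels[j])
    return unary.sum() + pairwise
```

Reclassification then amounts to searching for the labelling of minimal energy, trading classifier confidence against spatial smoothness.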
4

Zhang, Bin, Cunpeng Wang, Yonglin Shen, and Yueyan Liu. "Fully Connected Conditional Random Fields for High-Resolution Remote Sensing Land Use/Land Cover Classification with Convolutional Neural Networks". Remote Sensing 10, no. 12 (November 27, 2018): 1889. http://dx.doi.org/10.3390/rs10121889.

Annotation:
The interpretation of land use and land cover (LULC) is an important issue in the fields of high-resolution remote sensing (RS) image processing and land resource management. Fully training a new or existing convolutional neural network (CNN) architecture for LULC classification requires a large amount of remote sensing imagery, so fine-tuning a pre-trained CNN for LULC detection is required. To improve the classification accuracy for high-resolution remote sensing images, it is necessary to use an additional feature descriptor and to adopt a classifier for post-processing. A fully connected conditional random field (FC-CRF), using the fine-tuned CNN layers, spectral features, and fully connected pairwise potentials, is proposed for the classification of high-resolution remote sensing images. First, an existing CNN model is adopted, and its parameters are fine-tuned on the training datasets; the probabilities of image pixels belonging to each class type are then calculated. Second, the spectral features and the digital surface model (DSM) are combined with a support vector machine (SVM) classifier to determine the probabilities of belonging to each LULC class type. Combined with the probabilities obtained by the fine-tuned CNN, new feature descriptors are built. Finally, the FC-CRF is introduced to produce the classification results, where the unary potentials are derived from the new feature descriptors and the SVM classifier, and the pairwise potentials from the three-band RS imagery and the DSM. Experimental results show that the proposed classification scheme achieves good performance, with a total accuracy of about 85%.
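The second step, deriving SVM class probabilities from spectral/DSM features and concatenating them with the CNN softmax into a new descriptor, can be sketched with scikit-learn; all names and the random stand-in data are illustrative, not taken from the paper:

```python
import numpy as np
from sklearn.svm import SVC

# Random stand-in data: per-pixel CNN softmax scores, spectral bands + DSM.
rng = np.random.default_rng(0)
n, n_classes = 200, 3
cnn_probs = rng.dirichlet(np.ones(n_classes), size=n)  # stand-in for fine-tuned CNN output
spectral = rng.normal(size=(n, 4))                     # three spectral bands + DSM value
labels = cnn_probs.argmax(axis=1)                      # stand-in training labels

# SVM on the spectral/DSM features, with per-class probability estimates.
svm = SVC(probability=True, random_state=0).fit(spectral, labels)
svm_probs = svm.predict_proba(spectral)

# New feature descriptor: CNN and SVM class probabilities side by side;
# these would feed the unary potentials of the fully connected CRF.
descriptor = np.concatenate([cnn_probs, svm_probs], axis=1)
```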
5

Alwaghid, Alhanoof, and Nurul Sarkar. "Exploring Malware Behavior of Webpages Using Machine Learning Technique: An Empirical Study". Electronics 9, no. 6 (June 23, 2020): 1033. http://dx.doi.org/10.3390/electronics9061033.

Annotation:
Malware is one of the most common security threats experienced by a user when browsing webpages. A good understanding of the features of webpages (e.g., internet protocol, port, URL, Google index, and page rank) is required to analyze and mitigate the behavior of malware in webpages. The main objective of this paper is to analyze the key features of webpages and to mitigate the behavior of malware in webpages. To this end, we conducted an empirical study to identify the features that are most vulnerable to malware attacks, and its results are reported. To improve the feature selection accuracy, a machine learning technique called bagging is employed using the Weka program. To analyze these behaviors, phishing and botnet data were obtained from the University of California Irvine machine learning repository. We validate our research findings by applying a honeypot infrastructure using the Modern Honeypot Network (MHN) set up on a Linode server. As the data suffer from high variance in the type of data in each row, bagging was chosen because it can classify binary classes, date classes, missing values, nominal classes, numeric classes, unary classes, and empty classes. As the base classifier of bagging, random tree was applied because it can handle similar types of data as bagging but better than other classifiers, being faster and more accurate. Random tree achieved 88.22% test accuracy with the lowest run time (0.2 sec) and an area under the receiver operating characteristic curve of 0.946. Results show that all features in the botnet dataset are equally important for identifying the malicious behavior, as all scored more than 97%, with the exception of TCP and UDP. The accuracy on the phishing and botnet datasets is more than 89% on average in both cross-validation and test analysis. Recommendations are made for best practices that can assist in future malware identification.
6

Zhao, Ma, Zhong, Zhao, and Cao. "Self-Training Classification Framework with Spatial-Contextual Information for Local Climate Zones". Remote Sensing 11, no. 23 (November 28, 2019): 2828. http://dx.doi.org/10.3390/rs11232828.

Annotation:
Local climate zones (LCZ) have become a generic criterion for climate analysis among global cities, as they describe not only the urban climate but also the morphology inside the city. LCZ mapping based on remote sensing classification is a fundamental task, and the protocol proposed by the World Urban Database and Access Portal Tools (WUDAPT) project, which consists of random forest classification and filter-based spatial smoothing, is the most common approach. However, the classification and spatial smoothing lack a unified framework, which causes small, isolated areas to appear in the LCZ maps. In this paper, a spatial-contextual information-based self-training classification framework (SCSF) is proposed to solve this LCZ classification problem. In SCSF, a conditional random field (CRF) is used to integrate the classification and spatial smoothing into one model, and a self-training method is adopted, since the lack of sufficient expert-labeled training samples is always a major issue, especially for the complex LCZ scheme. Moreover, in the unary potentials of the CRF model, pseudo-label selection through a self-training process is used to train the classifier, which fuses regional spatial information through segmentation and local neighborhood information through moving windows to provide a more reliable probabilistic classification map. In the pairwise potential function, SCSF effectively improves the classification accuracy by integrating spatial-contextual information through the CRF. The experimental results show that the proposed framework is effective compared to the traditional WUDAPT mapping product for LCZ classification.
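The pseudo-label selection idea, retraining a classifier on its own most confident predictions, can be sketched with scikit-learn's generic self-training wrapper; the data here are a synthetic stand-in for LCZ samples, not the authors' pipeline:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

# Stand-in for LCZ samples: few expert labels, many unlabelled pixels.
X, y = make_classification(n_samples=500, n_features=10, random_state=1)
y_partial = y.copy()
rng = np.random.default_rng(1)
unlabelled = rng.choice(len(y), size=450, replace=False)
y_partial[unlabelled] = -1  # scikit-learn convention: -1 marks unlabelled

# Self-training: predictions above the confidence threshold are fed back
# as pseudo-labels, loosely mirroring the pseudo-label selection in SCSF.
model = SelfTrainingClassifier(
    RandomForestClassifier(random_state=1), threshold=0.9
).fit(X, y_partial)
accuracy = model.score(X, y)
```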
7

Yao, W., P. Polewski, and P. Krzystek. "SEMANTIC LABELLING OF ULTRA DENSE MLS POINT CLOUDS IN URBAN ROAD CORRIDORS BASED ON FUSING CRF WITH SHAPE PRIORS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7 (September 13, 2017): 971–76. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w7-971-2017.

Annotation:
In this paper, a labelling method for the semantic analysis of ultra-high point density MLS data (up to 4000 points/m²) in urban road corridors is developed by combining a conditional random field (CRF) for the context-based classification of 3D point clouds with shape priors. The CRF uses a Random Forest (RF) to generate the unary potentials of nodes and a variant of the contrast-sensitive Potts model for the pairwise potentials of node edges. The classification is based on various geometric features derived from covariance matrices and a local accumulation map of spatial coordinates over local neighbourhoods. Meanwhile, to cope with the ultra-high point density, a plane-based region growing method combined with a rule-based classifier is applied to first fix semantic labels for man-made objects. Once such points, which usually account for the majority of the data, are pre-labeled, the CRF classifier can be solved by optimizing the discriminative probability for nodes within a subgraph structure that excludes the pre-labeled nodes. The process can be viewed as an evidence fusion step that infers a degree of belief for point labelling from different sources. The MLS data used for this study were acquired with a vehicle-borne Z+F phase-based laser scanner, which permits the generation of a point cloud with an ultra-high sampling rate and accuracy. The test sites are parts of the city of Munich, assumed to consist of seven object classes including impervious surfaces, tree, building roof/facade, low vegetation, vehicle, and pole. The competitive classification performance can be explained by several factors: for example, the above-ground height highlights the vertical dimension of houses, trees, and even cars; it is also attributable to the decision-level fusion of the graph-based contextual classification approach with shape priors. The use of context-based classification mainly contributed to smoothing the labelling by removing outliers and to improving underrepresented object classes. In addition, the routine operation of context-based classification for such high-density MLS data becomes much more efficient, comparable to non-contextual classification schemes.
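The covariance-based geometric features mentioned above are typically eigenvalue ratios of the local 3-D covariance matrix; a Random Forest trained on them would then supply the per-class CRF unary potentials. A minimal sketch of the feature computation (a common recipe, not necessarily the paper's exact feature set):

```python
import numpy as np

def covariance_features(points, neighbour_idx):
    """Eigenvalue-based geometric features of a local point neighbourhood,
    computed from the 3x3 covariance matrix of its coordinates; the names
    (linearity, planarity, scattering) follow common point-cloud usage."""
    cov = np.cov(points[neighbour_idx].T)
    l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
    return np.array([(l1 - l2) / l1,   # linearity: 1-D structures (poles)
                     (l2 - l3) / l1,   # planarity: surfaces (roads, facades)
                     l3 / l1])         # scattering: volumetric clutter (vegetation)
```

Stacked per point, features like these are what the RF consumes to emit class probabilities for the CRF's unary terms.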
8

Mishra, Bharavi, and K. K. Shukla. "Software Defect Prediction Based on GUHA Data Mining Procedure and Multi-Objective Pareto Efficient Rule Selection". International Journal of Software Science and Computational Intelligence 6, no. 2 (April 2014): 1–29. http://dx.doi.org/10.4018/ijssci.2014040101.

Annotation:
Software defect prediction, if effective, enables developers to distribute their testing efforts efficiently and to focus on defect-prone modules. It would be very resource-consuming to test all modules when defects lie in only a fraction of them. Information about the fault-proneness of classes and methods can be used to develop new strategies that help reduce the overall development cost and increase customer satisfaction. Several machine learning strategies have been used in the recent past to identify defective modules. These models are built using publicly available historical software defect datasets. Most of the proposed techniques cannot deal with the class imbalance problem efficiently. It is therefore desirable to develop a prediction model that consists of small, simple, and comprehensible rules. Considering these facts, in this paper the authors propose a novel defect prediction approach named the GUHA-based Classification Association Rule Mining algorithm (G-CARM), where "GUHA" stands for General Unary Hypothesis Automaton. The G-CARM approach is primarily based on Classification Association Rule Mining and deploys a two-stage process involving attribute discretization and rule generation using GUHA. GUHA is one of the oldest yet most powerful methods of pattern mining; the basic idea of the GUHA procedure is to mine interesting attribute patterns that indicate defect proneness. The new method has been compared against five other models reported in the recent literature, viz. Naive Bayes, Support Vector Machine, RIPPER, J48, and Nearest Neighbour classifiers, using several measures including AUC and probability of detection. The experimental results indicate that the prediction performance of the G-CARM approach is better than that of the other approaches. The authors' approach achieved 76% mean recall and 83% mean precision for defective modules and 93% mean recall and 83% mean precision for non-defective modules on the CM1, KC1, KC2, and Eclipse datasets. Furthermore, the rule generation process often produces a large number of rules, which require considerable effort when used as a defect predictor; hence, a rule subset selection process is also proposed to select the best set of rules according to the requirements. Evaluation criteria for defect prediction, such as sensitivity, specificity, and precision, often compete against each other; it is therefore important to use multi-objective optimization algorithms for selecting prediction rules. In this paper the authors report prediction rules that are Pareto-efficient in the sense that no further improvement in the rule set is possible without sacrificing some performance criterion. A Non-Dominated Sorting Genetic Algorithm has been used to find the Pareto front and the defect prediction rules.
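Pareto efficiency of rule sets can be checked by non-dominated sorting over the competing criteria. A minimal sketch (a generic illustration, not the authors' NSGA-II implementation):

```python
def pareto_front(points):
    """Indices of the non-dominated points when every objective (e.g.
    sensitivity, specificity, precision of a candidate rule set) is
    maximized: a point is dropped only if some other point is at least
    as good in all objectives and strictly better in at least one."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] >= p[k] for k in range(len(p)))
            and any(q[k] > p[k] for k in range(len(p)))
            for j, q in enumerate(points)
            if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

NSGA-II builds on exactly this dominance test, evolving a population toward the front rather than enumerating all pairs.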
9

Shu, Zhen, Kai Sun, Kaijin Qiu, and Kou Ding. "PAIRWISE-SVM FOR ON-BOARD URBAN ROAD LIDAR CLASSIFICATION". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B1 (June 2, 2016): 109–13. http://dx.doi.org/10.5194/isprsarchives-xli-b1-109-2016.

Annotation:
A common method for LiDAR classification is Markov random fields (MRF). Spectral and directional features are extracted from on-board urban point clouds to construct the MRF energy function, which consists of unary and pairwise potentials. The unary terms are computed by SVM classification; the initial labeling is obtained mainly from geometrical shapes. The pairwise potential is estimated by Naïve Bayes: from the training data, the probability of adjacent objects is computed as prior knowledge. The final labeling is obtained by reweighted message passing to minimize the energy function. Since the MRF model has difficulty handling large-scale misclassification, we propose a super-voxel clustering method for over-segmentation and segment grouping of large objects. Trees, poles, ground, and buildings are classified in this paper. The experimental results show that this method improves both the classification accuracy and the computation speed.
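The overall pipeline, classifier-based unary costs plus pairwise costs minimized over a neighbourhood graph, can be illustrated with a simple greedy minimizer; note that this sketch uses ICM (iterated conditional modes) as a stand-in for the reweighted message passing used by the authors:

```python
import numpy as np

def icm(unary, adjacency, pair_cost, n_iters=10):
    """Greedy MRF energy minimization (ICM). The unary costs would come
    from a probabilistic classifier such as an SVM; pair_cost[a, b] is the
    cost of adjacent labels (a, b), e.g. from learned co-occurrence priors.

    unary: (N, C) cost of each label at each node
    adjacency: list of (i, j) neighbour pairs
    pair_cost: (C, C) cost matrix for neighbouring label pairs
    """
    labels = unary.argmin(axis=1)  # initial labeling from unary terms only
    neigh = {i: [] for i in range(len(unary))}
    for i, j in adjacency:
        neigh[i].append(j)
        neigh[j].append(i)
    for _ in range(n_iters):
        for i in range(len(unary)):
            # cost of each candidate label at node i given its neighbours
            costs = unary[i] + sum(pair_cost[:, labels[j]] for j in neigh[i])
            labels[i] = costs.argmin()
    return labels
```

With a Potts-style pair cost, a weakly supported label that disagrees with its neighbours is flipped, which is exactly the smoothing effect the MRF provides.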

Dissertations on the topic "Unary classifier"

1

Beneš, Jiří. "Unární klasifikátor obrazových dat" [Unary classifier of image data]. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442432.

Annotation:
The work begins with an introduction to classification algorithms. It then divides classifiers into unary, binary, and multi-class, and describes the different types of classifiers. The work compares the individual classifiers and their areas of use. For unary classifiers, practical examples and a list of architectures used are given. The work contains a chapter comparing the effects of hyperparameters on the quality of unary classification for the individual architectures. Part of the submission is a practical reimplementation of the unary classifier.
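A minimal runnable example of a unary (one-class) classifier, the topic of this thesis, using scikit-learn's OneClassSVM (an illustration of the general technique, not the thesis code):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# A unary (one-class) classifier is trained only on "in-class" samples
# and must flag everything else as an outlier at prediction time.
rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # the known class
outliers = rng.normal(loc=6.0, scale=0.5, size=(20, 2))  # an unseen class

clf = OneClassSVM(nu=0.05, gamma="scale").fit(normal)
pred = clf.predict(outliers)  # +1 = in-class, -1 = outlier
```

The `nu` parameter bounds the fraction of training samples the model may reject, so the decision boundary hugs the known class rather than splitting two classes.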
