Academic literature on the topic 'Unimplementation of unary classifier'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Unimplementation of unary classifier.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Unimplementation of unary classifier"

1

Rangkuti, Rizki Perdana, Vektor Dewanto, Aprinaldi, and Wisnu Jatmiko. "Utilizing Google Images for Training Classifiers in CRF-Based Semantic Segmentation." Journal of Advanced Computational Intelligence and Intelligent Informatics 20, no. 3 (May 19, 2016): 455–61. http://dx.doi.org/10.20965/jaciii.2016.p0455.

Abstract:
One promising approach to pixel-wise semantic segmentation is based on conditional random fields (CRFs). CRF-based semantic segmentation requires ground-truth annotations for supervised training of the classifier that generates the unary potentials. However, the amount of (publicly available) annotated training data is small. We observe that the Internet can provide relevant images for any given keywords. Our idea is to convert keyword-related images into pixel-wise annotated images, then use them as training data. In particular, we rely on saliency filters to identify the salient object (foreground) of a retrieved image, which mostly agrees with the given keyword. We utilize this saliency information for foreground-background CRF-based semantic segmentation to obtain pixel-wise ground-truth annotations. Experimental results show that training data from Google Images improves both the learning performance and the accuracy of semantic segmentation. This suggests that our proposed method is promising for harvesting substantial training data from the Internet for training the classifier in CRF-based semantic segmentation.
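
For readers unfamiliar with the unary potentials mentioned in this and several of the following abstracts, the standard pairwise CRF energy for pixel-wise labelling (a textbook formulation, not quoted from the paper) can be written as:

```latex
% Generic pairwise CRF energy over a labelling y of image x (textbook form).
% The unary term psi_u is typically the negative log-probability produced by
% a per-pixel classifier; the pairwise term psi_p penalises label
% disagreement between neighbouring pixels i ~ j.
E(\mathbf{y} \mid \mathbf{x}) =
    \sum_{i} \psi_u(y_i \mid \mathbf{x})
  + \sum_{i \sim j} \psi_p(y_i, y_j \mid \mathbf{x}),
\qquad
\psi_u(y_i \mid \mathbf{x}) = -\log P(y_i \mid \mathbf{x}).
```

Training the classifier behind the unary term is the step that requires the annotated data the authors harvest from Google Images.
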
2

MONSERRAT, M., F. ROSSELLÓ, and J. TORRENS. "WHEN IS A CATEGORY OF MANY-SORTED PARTIAL ALGEBRAS CARTESIAN-CLOSED?" International Journal of Foundations of Computer Science 06, no. 01 (March 1995): 51–66. http://dx.doi.org/10.1142/s0129054195000056.

Abstract:
In this paper we study the cartesian closedness of the five most natural categories whose objects are all partial many-sorted algebras of a given signature. In particular, we prove that, among these categories, only the usual one and the one having closed homomorphisms as morphisms can be cartesian closed. In the first case, it is cartesian closed exactly when the signature contains no operation symbol, in which case the category is a slice category of sets. In the second case, it is cartesian closed if and only if all operations are unary. In this case, we identify it as a functor category and we exhibit some relevant constructions in it, such as its subobject classifier and its exponentials.
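
As a toy illustration of the unary case (our own example, not taken from the paper): a single-sorted total algebra with one unary operation is just a set with an endomap, and a homomorphism between two such algebras must commute with the operations. A minimal Python sketch:

```python
# Minimal illustration (our own, not from the paper): a total algebra with
# one unary operation is a carrier set M together with a map f: M -> M, and
# a homomorphism h: (M, f) -> (N, g) must satisfy h(f(x)) == g(h(x)).

def is_homomorphism(h, f, g, M):
    """Check h(f(x)) == g(h(x)) for every x in the carrier set M."""
    return all(h[f[x]] == g[h[x]] for x in M)

M = {0, 1, 2}
f = {0: 1, 1: 2, 2: 0}          # unary operation on M (a cyclic shift)
N = {"a", "b"}
g = {"a": "b", "b": "a"}        # unary operation on N
h = {0: "a", 1: "b", 2: "a"}    # candidate homomorphism M -> N

print(is_homomorphism(h, f, g, M))  # False: h(f(2)) = 'a' but g(h(2)) = 'b'
```
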
3

Zhang, Bin, Cunpeng Wang, Yonglin Shen, and Yueyan Liu. "Fully Connected Conditional Random Fields for High-Resolution Remote Sensing Land Use/Land Cover Classification with Convolutional Neural Networks." Remote Sensing 10, no. 12 (November 27, 2018): 1889. http://dx.doi.org/10.3390/rs10121889.

Abstract:
The interpretation of land use and land cover (LULC) is an important issue in the fields of high-resolution remote sensing (RS) image processing and land resource management. Fully training a new or existing convolutional neural network (CNN) architecture for LULC classification requires a large number of remote sensing images; thus, fine-tuning a pre-trained CNN for LULC detection is required. To improve the classification accuracy for high-resolution remote sensing images, it is necessary to use an additional feature descriptor and to adopt a classifier for post-processing. A fully connected conditional random field (FC-CRF) model, combining the fine-tuned CNN layers, spectral features, and fully connected pairwise potentials, is proposed for the classification of high-resolution remote sensing images. First, an existing CNN model is adopted and its parameters are fine-tuned on training datasets; the probability that each image pixel belongs to each class is then calculated. Second, the spectral features and the digital surface model (DSM) are combined with a support vector machine (SVM) classifier to determine a second set of per-class probabilities. Combined with the probabilities from the fine-tuned CNN, these form new feature descriptors. Finally, the FC-CRF is applied to produce the classification results, where the unary potentials are derived from the new feature descriptors and the SVM classifier, and the pairwise potentials from the three-band RS imagery and the DSM. Experimental results show that the proposed classification scheme achieves good performance, with a total accuracy of about 85%.
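
A minimal sketch of the probability-fusion step as we read it from the abstract; the array shapes, the mixing weight alpha, and the synthetic inputs are all our assumptions. The resulting unary potentials would then be handed to a fully connected CRF solver:

```python
import numpy as np

# Hypothetical per-pixel class probabilities from the fine-tuned CNN and
# from the SVM trained on spectral + DSM features (H x W x n_classes).
H, W, n_classes = 64, 64, 5
rng = np.random.default_rng(0)
cnn_probs = rng.dirichlet(np.ones(n_classes), size=(H, W))
svm_probs = rng.dirichlet(np.ones(n_classes), size=(H, W))

# Fuse the two probability maps; alpha is an assumed mixing weight.
alpha = 0.5
fused = alpha * cnn_probs + (1.0 - alpha) * svm_probs

# Unary potentials are negative log-probabilities, reshaped to
# (n_classes, H*W) as dense-CRF implementations commonly expect.
unary = -np.log(fused.clip(1e-8)).transpose(2, 0, 1).reshape(n_classes, -1)
print(unary.shape)  # (5, 4096)
```
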
4

Novack, T., and U. Stilla. "DISCRIMINATION OF URBAN SETTLEMENT TYPES BASED ON SPACE-BORNE SAR DATASETS AND A CONDITIONAL RANDOM FIELDS MODEL." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3/W4 (March 11, 2015): 143–48. http://dx.doi.org/10.5194/isprsannals-ii-3-w4-143-2015.

Abstract:
In this work we focus on the classification of Urban Settlement Types (USTs) based on two datasets from the TerraSAR-X satellite acquired at ascending and descending look directions. These datasets comprise the intensity, amplitude and coherence images from the ascending and descending acquisitions. In accordance with most official UST maps, the urban blocks of our study site were considered as the elements to be classified. The UST classes considered in this paper are: Vegetated Areas, Single-Family Houses, and Commercial and Residential Buildings. Three different groups of image attributes were utilized, namely: Relative Areas, Histograms of Oriented Gradients, and geometrical and contextual attributes extracted from the nodes of a Max-Tree Morphological Profile. These image attributes were submitted to three powerful soft multi-class classification algorithms, so that each classifier outputs a membership value for each of the classes. These membership values were then treated as the potentials of the unary factors of a Conditional Random Fields (CRF) model. The pairwise factors of the CRF model were parameterised with a Potts function. The reclassification performed with the CRF model enabled a slight increase in classification accuracy, from 76% to 79%, over the 1926 urban blocks.
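
To make the unary-plus-Potts construction concrete, here is a small self-contained sketch; the toy block graph, the smoothing weight beta, and the ICM solver are our assumptions, not the authors' inference method:

```python
import numpy as np

# Toy version of the model above: each urban block (node) has soft class
# memberships; neighbouring blocks are encouraged to share a label via a
# Potts penalty. We minimise the energy with simple ICM sweeps.
probs = np.array([[0.7, 0.2, 0.1],     # per-node class memberships
                  [0.4, 0.5, 0.1],
                  [0.1, 0.2, 0.7],
                  [0.2, 0.2, 0.6]])
edges = [(0, 1), (1, 2), (2, 3)]       # block adjacency (assumed)
beta = 0.8                             # Potts smoothing weight (assumed)

unary = -np.log(probs)                 # memberships -> unary potentials
labels = probs.argmax(axis=1)          # initialise at the soft maximum
neighbours = {i: [] for i in range(len(probs))}
for a, b in edges:
    neighbours[a].append(b)
    neighbours[b].append(a)

for _ in range(10):                    # ICM: greedily relabel each node
    for i in range(len(probs)):
        costs = unary[i] + beta * np.array(
            [sum(labels[j] != k for j in neighbours[i])
             for k in range(probs.shape[1])])
        labels[i] = costs.argmin()
print(labels)
```
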
5

Yao, W., P. Polewski, and P. Krzystek. "SEMANTIC LABELLING OF ULTRA DENSE MLS POINT CLOUDS IN URBAN ROAD CORRIDORS BASED ON FUSING CRF WITH SHAPE PRIORS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7 (September 13, 2017): 971–76. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w7-971-2017.

Abstract:
In this paper, a labelling method for the semantic analysis of ultra-high-density MLS point clouds (up to 4000 points/m²) in urban road corridors is developed, based on combining a conditional random field (CRF) for the context-based classification of 3D point clouds with shape priors. The CRF uses a Random Forest (RF) for generating the unary potentials of nodes and a variant of the contrast-sensitive Potts model for the pairwise potentials of node edges. The classification is founded on various geometric features derived from covariance matrices and a local accumulation map of spatial coordinates over local neighbourhoods. Meanwhile, in order to cope with the ultra-high point density, a plane-based region growing method combined with a rule-based classifier is first applied to fix semantic labels for man-made objects. Once such points, which usually account for the majority of the data, are pre-labeled, the CRF classifier can be solved by optimizing the discriminative probability for nodes within a subgraph structure that excludes the pre-labeled nodes. The process can be viewed as an evidence fusion step inferring a degree of belief for point labelling from different sources. The MLS data used for this study were acquired by a vehicle-borne Z+F phase-based laser scanner, which permits the generation of a point cloud with an ultra-high sampling rate and accuracy. The test sites are parts of the city of Munich, which is assumed to consist of seven object classes, including impervious surfaces, trees, building roofs/facades, low vegetation, vehicles and poles. The competitive classification performance can be explained by several factors: for example, the above-ground height highlights the vertical dimension of houses, trees and even cars; it is also attributed to the decision-level fusion of the graph-based contextual classification approach with the shape priors. The context-based classification mainly contributed to smoothing the labelling by removing outliers and to improvements in underrepresented object classes. In addition, the routine operation of context-based classification for such high-density MLS data becomes much more efficient, comparable to non-contextual classification schemes.
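
A minimal scikit-learn sketch of the "RF generates the unary potentials, pre-labeled points are excluded from the subgraph" idea; the feature arrays, the pre-labeling mask, and all parameters are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for per-point geometric features and training labels.
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(200, 6)), rng.integers(0, 7, 200)
X_all = rng.normal(size=(1000, 6))

# Points already fixed by the plane-based region growing + rule-based
# classifier (assumed mask); the CRF only reasons about the remainder.
pre_labeled = rng.random(1000) < 0.6

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)

# Unary potentials only for the nodes of the remaining subgraph.
probs = rf.predict_proba(X_all[~pre_labeled])
unary = -np.log(probs.clip(1e-8))
print(unary.shape)  # (number of unlabeled points, number of classes)
```
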
6

Zhao, Ma, Zhong, Zhao, and Cao. "Self-Training Classification Framework with Spatial-Contextual Information for Local Climate Zones." Remote Sensing 11, no. 23 (November 28, 2019): 2828. http://dx.doi.org/10.3390/rs11232828.

Abstract:
Local climate zones (LCZ) have become a generic criterion for climate analysis among global cities, as they can describe not only the urban climate but also the morphology inside the city. LCZ mapping based on remote sensing classification is a fundamental task, and the protocol proposed by the World Urban Database and Access Portal Tools (WUDAPT) project, which consists of random forest classification and filter-based spatial smoothing, is the most common approach. However, the classification and spatial smoothing lack a unified framework, which causes small, isolated areas to appear in the LCZ maps. In this paper, a spatial-contextual information-based self-training classification framework (SCSF) is proposed to solve this LCZ classification problem. In SCSF, a conditional random field (CRF) is used to integrate the classification and spatial smoothing into one model, and a self-training method is adopted, considering that the lack of sufficient expert-labeled training samples is always a big issue, especially for the complex LCZ scheme. Moreover, in the unary potentials of the CRF model, pseudo-label selection through self-training is used to train the classifier, which fuses regional spatial information through segmentation and local neighborhood information through moving windows to provide a more reliable probabilistic classification map. In the pairwise potential function, SCSF effectively improves the classification accuracy by integrating spatial-contextual information through the CRF. The experimental results show that the proposed framework outperforms the traditional WUDAPT mapping product in LCZ classification.
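
The self-training step can be sketched as a generic pseudo-labelling loop; the confidence threshold, the base classifier, and the synthetic data below are our assumptions, not the authors' exact procedure:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for the labeled and unlabeled samples.
rng = np.random.default_rng(2)
X_lab, y_lab = rng.normal(size=(50, 8)), rng.integers(0, 4, 50)
X_unl = rng.normal(size=(500, 8))

clf = RandomForestClassifier(n_estimators=100, random_state=0)
for _ in range(3):                       # a few self-training rounds
    clf.fit(X_lab, y_lab)
    probs = clf.predict_proba(X_unl)
    confident = probs.max(axis=1) > 0.8  # assumed confidence threshold
    if not confident.any():
        break
    # Promote confident predictions to pseudo-labels and grow the set.
    pseudo = clf.classes_[probs[confident].argmax(axis=1)]
    X_lab = np.vstack([X_lab, X_unl[confident]])
    y_lab = np.concatenate([y_lab, pseudo])
    X_unl = X_unl[~confident]

print(len(y_lab), "training samples after self-training")
```
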
7

Alwaghid, Alhanoof, and Nurul Sarkar. "Exploring Malware Behavior of Webpages Using Machine Learning Technique: An Empirical Study." Electronics 9, no. 6 (June 23, 2020): 1033. http://dx.doi.org/10.3390/electronics9061033.

Abstract:
Malware is one of the most common security threats a user experiences when browsing webpages. A good understanding of the features of webpages (e.g., internet protocol, port, URL, Google index, and page rank) is required to analyze and mitigate the behavior of malware in webpages. The main objective of this paper is to analyze the key features of webpages and to mitigate the behavior of malware in webpages. To this end, we conducted an empirical study to identify the features that are most vulnerable to malware attacks, and we report its results. To improve the feature selection accuracy, a machine learning technique called bagging is employed using the Weka program. To analyze these behaviors, phishing and botnet data were obtained from the University of California Irvine machine learning repository. We validate our research findings by applying a honeypot infrastructure using the Modern Honeypot Network (MHN) set up on a Linode server. As the data suffer from high variance in the type of data in each row, bagging is chosen because it can classify binary classes, date classes, missing values, nominal classes, numeric classes, unary classes and empty classes. As the base classifier for bagging, random tree was applied because it can handle the same types of data as bagging but is faster and more accurate than other classifiers. Random tree had 88.22% test accuracy with the lowest run time (0.2 s) and an area under the receiver operating characteristic curve of 0.946. Results show that all features in the botnet dataset are equally important for identifying malicious behavior, as all scored more than 97%, with the exception of TCP and UDP. The accuracy on the phishing and botnet datasets is more than 89% on average in both cross-validation and test analysis. Recommendations are made for best practices that can assist future malware identification.
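
A rough scikit-learn analogue of the Weka setup described above; the synthetic dataset is a placeholder and the randomised tree only approximates Weka's Bagging + RandomTree combination:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Placeholder binary dataset standing in for the phishing/botnet features.
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Bagging over trees that pick split features at random, a rough analogue
# of the Bagging + RandomTree combination used in the study.
model = BaggingClassifier(
    DecisionTreeClassifier(splitter="random", max_features="sqrt"),
    n_estimators=10, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("ROC AUC :", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```
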
8

Mishra, Bharavi, and K. K. Shukla. "Software Defect Prediction Based on GUHA Data Mining Procedure and Multi-Objective Pareto Efficient Rule Selection." International Journal of Software Science and Computational Intelligence 6, no. 2 (April 2014): 1–29. http://dx.doi.org/10.4018/ijssci.2014040101.

Abstract:
Software defect prediction, if effective, enables developers to distribute their testing efforts efficiently and to focus on defect-prone modules. It would be very resource-consuming to test all the modules when defects lie in only a fraction of them. Information about the fault-proneness of classes and methods can be used to develop new strategies that help mitigate the overall development cost and increase customer satisfaction. Several machine learning strategies have been used in the recent past to identify defective modules. These models are built using publicly available historical software defect data sets. Most of the proposed techniques are not able to deal with the class imbalance problem efficiently; it is therefore desirable to develop a prediction model that consists of small, simple and comprehensible rules. Considering these facts, in this paper the authors propose a novel defect prediction approach named the GUHA-based Classification Association Rule Mining algorithm (G-CARM), where "GUHA" stands for General Unary Hypothesis Automaton. The G-CARM approach is primarily based on classification association rule mining and deploys a two-stage process involving attribute discretization and rule generation using GUHA. GUHA is one of the oldest yet most powerful methods of pattern mining; the basic idea of the GUHA procedure is to mine interesting attribute patterns that indicate defect proneness. The new method has been compared against five other models reported in the recent literature, viz. Naive Bayes, Support Vector Machine, RIPPER, J48 and the Nearest Neighbour classifier, using several measures, including AUC and probability of detection. The experimental results indicate that the prediction performance of the G-CARM approach is better than that of the other approaches. The authors' approach achieved 76% mean recall and 83% mean precision for defective modules, and 93% mean recall and 83% mean precision for non-defective modules, on the CM1, KC1, KC2 and Eclipse data sets. Further, the rule generation process often produces a large number of rules, which require considerable effort when used as a defect predictor; hence, a rule subset selection process is also proposed to select the best set of rules according to the requirements. Evaluation criteria for defect prediction, such as sensitivity, specificity and precision, often compete against each other; it is therefore important to use multi-objective optimization algorithms for selecting prediction rules. In this paper the authors report prediction rules that are Pareto-efficient in the sense that no further improvement in the rule set is possible without sacrificing some performance criterion. A Non-Dominated Sorting Genetic Algorithm has been used to find the Pareto front of defect prediction rules.
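
To give a flavour of classification association rule mining of the kind G-CARM builds on, here is a deliberately simplified toy, not the GUHA procedure itself; the attributes, records and thresholds are invented:

```python
from itertools import combinations

# Toy discretized module records: sets of (attribute, value) pairs plus a
# defect flag, standing in for discretized software metrics.
records = [
    ({("loc", "high"), ("complexity", "high")}, True),
    ({("loc", "high"), ("complexity", "low")},  True),
    ({("loc", "low"),  ("complexity", "low")},  False),
    ({("loc", "low"),  ("complexity", "high")}, False),
    ({("loc", "high"), ("complexity", "high")}, True),
]

min_support, min_confidence = 0.2, 0.8
items = set().union(*(attrs for attrs, _ in records))

# Mine rules "antecedent => defective" by exhaustive support/confidence
# counting over 1- and 2-item antecedents.
for size in (1, 2):
    for ante in combinations(sorted(items), size):
        ante = set(ante)
        covered = [d for attrs, d in records if ante <= attrs]
        support = len(covered) / len(records)
        if support >= min_support and covered:
            confidence = sum(covered) / len(covered)
            if confidence >= min_confidence:
                print(f"{sorted(ante)} => defective "
                      f"(support={support:.2f}, confidence={confidence:.2f})")
```

A multi-objective selector such as NSGA-II would then pick a Pareto-efficient subset of such rules, trading off measures like sensitivity and precision.
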

Dissertations / Theses on the topic "Unimplementation of unary classifier"

1

Beneš, Jiří. "Unární klasifikátor obrazových dat" [Unary Classifier of Image Data]. Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-442432.

Abstract:
The work opens with an introduction to classification algorithms. It then divides classifiers into unary, binary and multi-class, and describes the different types of classifiers. The work compares the individual classifiers and their areas of use. For unary classifiers, practical examples and a list of the architectures used are given. The work contains a chapter comparing the effects of hyperparameters on the quality of unary classification for the individual architectures. Part of the submission is a practical example of a reimplementation of the unary classifier.
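
For context, a unary (one-class) classifier is trained on samples of a single class and rejects everything that does not resemble them. A minimal scikit-learn sketch (our illustration, unrelated to the specific architectures compared in the thesis):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Train on feature vectors from the "normal" class only (synthetic here;
# in the thesis these would be features extracted from image data).
rng = np.random.default_rng(3)
X_train = rng.normal(loc=0.0, scale=1.0, size=(300, 16))

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
clf.fit(X_train)

X_inlier = rng.normal(loc=0.0, scale=1.0, size=(5, 16))
X_outlier = rng.normal(loc=6.0, scale=1.0, size=(5, 16))
print(clf.predict(X_inlier))   # mostly +1 (accepted as the known class)
print(clf.predict(X_outlier))  # mostly -1 (rejected as outliers)
```
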
