To view the other types of publications on this topic, follow this link: Similarity model.

Dissertations on the topic "Similarity model"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the top 50 dissertations for your research on the topic "Similarity model".

Next to every source in the list of references there is an "Add to bibliography" option. Use it, and a bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, if the relevant fields are present in its metadata.

Browse dissertations from many different areas of study and compile your bibliography correctly.

1

Lu, Junde. „Model migration based on process similarity /“. View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?CBME%202008%20LU.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Wang, Zhiwei. „Riemann space model and similarity-based Web retrieval“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ60214.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Anandan, Srinivasan. „Similarity metrics applied to graph based design model authoring“. Connect to this title online, 2008. http://etd.lib.clemson.edu/documents/1219855195/.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Jenkins, Gavin Wesley. „A task-general dynamic neural model of object similarity judgments“. Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1648.

Full text of the source
Annotation:
The similarity between objects is judged in a wide variety of contexts, from visual search to categorization to face recognition. There is a correspondingly rich history of similarity research, including empirical work and theoretical models. However, the field lacks an account of the real-time neural processing dynamics of different similarity judgment behaviors. Some accounts focus on the lower-level processes that support similarity judgments, but they do not capture a wide range of canonical behaviors, and they do not account for the moment-to-moment stability and interaction of realistic neural object representations. The goal of this dissertation is to address this need and present a broadly applicable and neurally implemented model of object similarity judgments. I accomplished this by adapting and expanding an existing neural process model of change detection to capture a set of canonical, task-general similarity judgment behaviors. Target behaviors to model were chosen by reviewing the similarity judgment literature and identifying prominent and consistent behavioral effects. I tested each behavior for task-generality across three experiments using three diverse similarity judgment tasks. The following behaviors, observed across all three tasks, served as modeling targets: the effect of feature value comparisons, attentional modulation of feature dimensions, sensitivity to patterns of objects encountered over time, violations of minimality and of the triangle inequality, and a sensitivity to circular feature dimensions like color hue. The model captured each effect. The neural processes implied by capturing these behaviors are discussed, along with the broader theoretical implications of the model and possibilities for its future expansion.
APA, Harvard, Vancouver, ISO, and other citation styles
5

Shah, Yashna Jitendra. „The Impact of Role Model Similarity on Women's Leadership Outcomes“. Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/78144.

Full text of the source
Annotation:
Role models can serve as a means to counteract the prevalent 'Think Leader, Think Male' stereotype. This study was designed to assess the impact of role model similarity on women's leadership self-efficacy, task performance and future leadership behavior, using two conceptualizations of similarity: match with the leadership self-concept and attainability of the role model. Additionally, the process by which one's self-perceptions of leadership influence judgments of one's own behavior was also investigated. Participants were presented with a role model vignette in a laboratory setting, after which they completed a leadership task. Results indicated that the interaction of the two role model manipulations had no significant effect on the various leadership outcomes. However, the match of the role model with one's self-concept did impact one's leadership self-efficacy. Results also indicated that agentic leader prototypes partially mediated the relation between individuals' self-concept and self-judgments, such that participants whose self-concept matched the role model activated the agentic leader prototype. Overall, the findings suggest that match with one's self-concept plays an important role in role models being perceived as similar to the self, which can have important implications for women's leadership development.
Master of Science
APA, Harvard, Vancouver, ISO, and other citation styles
6

Esin, Yunus Emre. „Improvement Of Corpus-based Semantic Word Similarity Using Vector Space Model“. Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610759/index.pdf.

Full text of the source
Annotation:
This study presents a new approach for finding semantically similar words in corpora using window-based context methods. Previous studies mainly concentrate on either finding new combinations of distance-weight measurement methods or proposing new context methods. The main difference of this new approach is that it reprocesses the outputs of the existing methods, updating the representations of the related word vectors used for measuring semantic distance between words, in order to improve the results further. Moreover, this novel technique provides a solution to the data sparseness of vectors, a common problem in methods that use the vector space model. The main advantage of this new approach is that it is applicable to many of the existing word similarity methods based on the vector space model. Most importantly, it improves the performance of several of these existing word similarity measurement methods.
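The window-based vector-space pipeline this line of work builds on can be sketched in a few lines. This is a generic illustration of co-occurrence vectors and cosine similarity, not the thesis's reprocessing step; the toy corpus and window size are invented for the example:

```python
from collections import defaultdict
import math

def cooccurrence_vectors(tokens, window=2):
    """Build window-based co-occurrence vectors: for each word, count
    the words appearing within +/- `window` positions of it."""
    vectors = defaultdict(lambda: defaultdict(int))
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                vectors[word][tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (dicts)."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = "the cat sat on the mat the dog sat on the rug".split()
vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" occur in near-identical contexts, so they come out
# more similar than words with less context overlap.
print(round(cosine(vecs["cat"], vecs["dog"]), 3))
```

Real systems typically reweight the raw counts (e.g. with pointwise mutual information) before comparing; the sparse dictionary representation only stores non-zero entries, which is how such implementations usually cope with the sparseness the abstract mentions.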
APA, Harvard, Vancouver, ISO, and other citation styles
7

Owens, Charles Ray. „Donating Behavior in Children: The Effect of the Model's Similarity with the Model and Parental Models“. DigitalCommons@USU, 1985. https://digitalcommons.usu.edu/etd/5318.

Full text of the source
Annotation:
Model similarity and familiarity were investigated for adult and similar-aged models demonstrating prosocial behavior. Third, fourth and fifth graders (75 male and 75 female) participated. Subjects were given questionnaires regarding their most and least preferred peers and their most preferred parent. The models were described as similar to the subject for some groups. Subjects were given instructions concerning a sorting task and cash certificates they would earn. Fifty control subjects viewed a video that contained neither prosocial nor antisocial behavior. For the remaining subjects, a 2 (sex of subject) X 2 (similar-age model versus adult model) X 5 (treatment) factorial design was employed. The 5 treatment factors were: unfamiliar models described as a) similar, b) dissimilar, c) with no similarity mentioned, and familiar models who were d) preferred (either a best friend or preferred parent), and e) least preferred (either a least preferred peer or parent). Subjects (except the control group) saw a videotaped model who demonstrated a sorting task and collected 20 certificates. All models shared 10 certificates by placing them in a canister marked "for the poor children". Subjects completed the task and had an opportunity to share while alone. Significantly more sharing occurred in the similar-age group than in the adult-model group, both of which imitated more than the control group. There was no difference in the imitation of males and females overall. There was no difference between the groups that saw unfamiliar models who were described as similar and the groups that saw unfamiliar models with no similarity mentioned. Each of these produced more imitative donating than the control, the familiar preferred model, and the unfamiliar model described as dissimilar groups. The familiar least preferred model group shared more than the control group. There were significant interaction effects between sex and treatment and between sex, treatment, and age of model.
Unfamiliar models with no similarity mentioned and peer models each produced more sharing than parent models. Subjects who observed an unfamiliar model described as similar donated more than those seeing an unfamiliar model described as dissimilar. An unfamiliar age-mate model produced more sharing than a familiar and preferred friend. Donations were greater when the subject observed a least preferred peer rather than a best friend. This difference was due to the female subjects' performance.
APA, Harvard, Vancouver, ISO, and other citation styles
8

Hu, Rong (RongRong). „Image annotation with discriminative model and annotation refinement by visual similarity matching“. Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/61311.

Full text of the source
Annotation:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 65-67).
A large percentage of photos on the Internet cannot be reached by search engines because of the absence of textual metadata. Such metadata come from the descriptions and tags that uploaders attach to their photos. Despite decades of research, neither model-based nor model-free approaches provide high-quality image annotations. In this thesis, I present a hybrid annotation pipeline that combines both approaches in hopes of increasing the accuracy of the resulting annotations. Given an unlabeled image, the first step is to suggest some words via a trained model optimized for the retrieval of images from text. Though the trained model cannot always provide highly relevant words, its suggestions can be used as initial keywords to query a large web image repository and obtain the text associated with the retrieved images. We then use perceptual features (e.g., color, texture, shape, and local characteristics) to match the retrieved images with the query photo and use visual similarity to rank the relevance of the suggested annotations for the query photo.
by Rong Hu.
M.Eng.
APA, Harvard, Vancouver, ISO, and other citation styles
9

COUTINHO, Ana Emília Victor Barbosa. „Similarity-based test suite reduction in the context of Model-Based Testing“. Universidade Federal de Campina Grande, 2015. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/588.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Klinkmüller, Christopher. „Adaptive Process Model Matching“. Doctoral thesis, Universitätsbibliothek Leipzig, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-224884.

Full text of the source
Annotation:
Process model matchers automate the detection of activities that represent similar functionality in different models. Thus, they provide support for various tasks related to the management of business processes, including model collection management and process design. Yet, prior research primarily demonstrated the matchers’ effectiveness, i.e., the accuracy and the completeness of the results. In this context (i) the size of the empirical data is often small, (ii) all data is used for the matcher development, and (iii) the validity of the design decisions is not studied. As a result, existing matchers yield a varying and typically low effectiveness when applied to different datasets, as demonstrated, among others, by the process model matching contests in 2013 and 2015. With this in mind, the thesis studies the effectiveness of matchers by separating development from evaluation data and by empirically analyzing the validity and the limitations of design decisions. In particular, the thesis develops matchers that rely on different sources of information. First, the activity labels are considered as natural-language descriptions and the Bag-of-Words Technique is introduced, which achieves a high effectiveness in comparison to the state of the art. Second, the Order Preserving Bag-of-Words Technique analyzes temporal dependencies between activities in order to automatically configure the Bag-of-Words Technique and to improve its effectiveness. Third, expert feedback is used to adapt the matchers to the domain characteristics of process model collections. Here, the Adaptive Bag-of-Words Technique is introduced, which outperforms the state-of-the-art matchers and the other matchers from this thesis.
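The core idea of a bag-of-words matcher, stripped of the thesis's configuration and adaptation machinery, can be sketched as follows. The activity labels, the Jaccard scoring, and the threshold are illustrative assumptions, not the exact measure defined in the thesis:

```python
def bag_of_words(label):
    """Lowercase an activity label and split it into its set of words."""
    return set(label.lower().split())

def label_similarity(a, b):
    """Jaccard overlap between the word sets of two activity labels."""
    wa, wb = bag_of_words(a), bag_of_words(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def match_activities(model_a, model_b, threshold=0.3):
    """Propose correspondences between activities of two process models:
    every cross-model pair whose label similarity reaches the threshold."""
    return [(a, b) for a in model_a for b in model_b
            if label_similarity(a, b) >= threshold]

# Hypothetical activity labels from two small process models.
invoice = ["receive invoice", "check invoice", "pay invoice"]
order = ["receive order", "verify invoice data", "archive order"]
print(match_activities(invoice, order))
```

In practice such matchers also handle stemming, stop words, and word-level similarity (e.g. synonyms), which is where the configuration and adaptation studied in the thesis come in.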
APA, Harvard, Vancouver, ISO, and other citation styles
11

OLIVEIRA, NETO Francisco Gomes de. „Investigation of similarity-based test case selection for specification-based regression testing“. Universidade Federal de Campina Grande, 2014. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/360.

Full text of the source
Annotation:
During software maintenance, several modifications can be performed in a specification model in order to satisfy new requirements. Performing regression testing on modified software is known to be a costly and laborious task. Test case selection, test case prioritization, and test suite minimisation, among other methods, aim to reduce these costs by selecting or prioritizing a subset of test cases so that less time, effort, and thus money are involved in performing regression testing. In this doctoral research, we explore the general problem of automatically selecting test cases in a model-based testing (MBT) process where specification models were modified. Our technique, named Similarity Approach for Regression Testing (SART), selects subsets of test cases that traverse modified regions of a software system's specification model. The strategy relies on similarity-based test case selection, in which similarities between test cases from different software versions are analysed to identify modified elements in a model. In addition, we propose an evaluation approach named Search Based Model Generation for Technology Evaluation (SBMTE), based on stochastic model generation and search-based techniques, which generates large samples of realistic models to enable experiments with model-based techniques. Based on SBMTE, researchers can develop model generator tools that create a space of models based on statistics from real industrial models, and then generate samples from that space in order to perform experiments. Here we developed a generator that creates instances of Annotated Labelled Transition Systems (ALTS), to be used as input for our MBT process, and then performed an experiment with SART. In this experiment, we concluded that SART's test suite size reduction is robust: it selects subsets with, on average, 92% fewer test cases, while ensuring coverage of all model modifications and revealing defects linked to those modifications. Both SART and our experiment are executable through the LTS-BT tool, enabling researchers to use our selection strategy and reproduce our experiment.
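A minimal sketch of the similarity-based selection idea, under the assumption that each test case is abstracted to the set of model transitions it covers. The transition names and the cutoff are invented for illustration; SART's actual similarity function and selection strategy are defined in the thesis:

```python
def jaccard(a, b):
    """Similarity between two test cases, each given as an
    iterable of covered transitions."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def select_modified(new_suite, old_suite, cutoff=0.99):
    """Select test cases from the new model version whose transition
    coverage matches no old-version test closely -- a proxy for tests
    that traverse modified regions of the specification."""
    selected = []
    for name, transitions in new_suite.items():
        best = max((jaccard(transitions, old) for old in old_suite.values()),
                   default=0.0)
        if best < cutoff:
            selected.append(name)
    return selected

old = {"t1": ["a->b", "b->c"], "t2": ["a->b", "b->d"]}
new = {"t1": ["a->b", "b->c"],          # unchanged path
       "t3": ["a->b", "b->e", "e->f"]}  # traverses a new region
print(select_modified(new, old))
```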
APA, Harvard, Vancouver, ISO, and other citation styles
12

Cree, George S. „An attractor model of lexical conceptual processing, statistical feature relationships and semantic similarity priming“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0001/MQ30733.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
13

Nimpfer, John Adam. „The Self-Evaluation Maintenance Model as a Moderator of Similarity-Attraction Vs Dissimilarity-Repulsion“. W&M ScholarWorks, 1997. https://scholarworks.wm.edu/etd/1539626145.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
14

Arockiasamy, Savarimuthu. „Using the SKOS Model for Standardizing Semantic Similarity and Relatedness Measures for Ontological Terminologies“. Miami University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=miami1250095881.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
15

Wolff, D. „Spot the odd song out : similarity model adaptation and analysis using relative human ratings“. Thesis, City University London, 2014. http://openaccess.city.ac.uk/5916/.

Full text of the source
Annotation:
Understanding how listeners relate and compare pieces of music is a fundamental challenge in music research as well as for commercial applications: Today’s large-scale applications for music recommendation and exploration utilise various models for similarity prediction to satisfy users’ expectations. Perceived similarity is specific to the individual and influenced by a number of factors such as cultural background and age. Thus, adapting a generic model to human similarity data is useful for personalisation and can help to better understand such differences. This thesis presents new and state-of-the-art machine learning techniques for modelling music similarity and their first evaluation on relative music similarity data. We expand the scope for future research with methods for similarity data collection and a new dataset. In particular, our models are evaluated on their ability to “spot the odd song out” of three given songs. While a few methods are readily available, others had to be adapted for their first application to such data. We explore the potential for learning generalisable similarity measures, presenting algorithms for metrics and neural networks. A generic modelling workflow is presented and implemented. We report the first evaluation of the methods on the MagnaTagATune dataset showing learning is possible and pointing out particularities of algorithms and feature types. The best results with up to 74% performance on test sets were achieved with a combination of acoustic and cultural features, but model training proved most powerful when only acoustic information is available. To assess the generalisability of the findings, we provide a first systematic analysis of the dataset itself. We also identify a bias in standard sampling methods for cross-validation with similarity data and present a new method for unbiased evaluation, providing use cases for the different validation strategies. 
Furthermore, we present an online game that collects a new similarity dataset, including participant attributes such as age, location, language and music background. It is based on our extensible framework which manages storage of participant input, context information as well as selection of presented samples. The collected data enables a more specific adaptation of music similarity by including user attributes into similarity models. Distinct similarity models are learnt from geographically defined user groups in a first experiment towards the more complex task of culture-aware similarity modelling. In order to improve training of the specific models on small datasets, we implement the concept of transfer learning for music similarity models.
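The "odd song out" judgment that these models are evaluated on can be emulated for any pairwise similarity measure: pick the item in a triplet that is least similar to the other two. The feature vectors below are hypothetical placeholders for the acoustic or cultural features mentioned above:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense feature vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def odd_one_out(features, triplet):
    """Return the clip judged least similar to the other two: for each
    candidate, sum its similarity to the remaining pair, pick the minimum."""
    def support(i):
        rest = [j for j in triplet if j != i]
        return sum(cosine(features[i], features[j]) for j in rest)
    return min(triplet, key=support)

# Hypothetical feature vectors for three songs.
features = {"song_a": [0.9, 0.1, 0.0],
            "song_b": [0.8, 0.2, 0.1],
            "song_c": [0.0, 0.1, 0.9]}
print(odd_one_out(features, ["song_a", "song_b", "song_c"]))
```

A learned similarity model would replace `cosine` with its adapted metric; the evaluation then counts how often the model's odd-one-out prediction agrees with the human majority vote.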
APA, Harvard, Vancouver, ISO, and other citation styles
16

Bashon, Yasmina M. „Contributions to fuzzy object comparison and applications. Similarity measures for fuzzy and heterogeneous data and their applications“. Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/6305.

Full text of the source
Annotation:
This thesis makes an original contribution to knowledge in the field of data object comparison, where the objects are described by attributes of fuzzy or heterogeneous (numeric and symbolic) data types. Many real-world database systems and applications require information management components that provide support for managing such imperfect and heterogeneous data objects. For example, with new online information made available from various sources, in semi-structured, structured or unstructured representations, new information usage and search algorithms must consider that such data collections may contain objects/records with different types of data — fuzzy, numerical and categorical — for the same attributes. New approaches to similarity have been presented in this research to support such data comparison. A generalisation of both geometric and set-theoretical similarity models has enabled us to propose the new similarity measures presented in this thesis, which handle the vagueness (fuzzy data type) within data objects. A framework of new and unified similarity measures for comparing heterogeneous objects described by numerical, categorical and fuzzy attributes has also been introduced. Examples are used to illustrate, compare and discuss the applications and efficiency of the proposed approaches to heterogeneous data comparison.
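One common way to realize such a unified measure is to average per-attribute similarities, with a type-appropriate function for each attribute. The sketch below uses simple, generic choices (range-normalized distance for numbers, exact match for symbols, and a Chen-style measure for triangular fuzzy numbers normalised to [0, 1]); the thesis's own measures differ in detail, and the example records are invented:

```python
def numeric_sim(x, y, span):
    """Similarity of two numbers given the attribute's value range."""
    return 1.0 - abs(x - y) / span

def categorical_sim(x, y):
    """Exact-match similarity for symbolic values."""
    return 1.0 if x == y else 0.0

def fuzzy_sim(x, y):
    """Chen-style similarity of two triangular fuzzy numbers whose
    points (low, peak, high) are normalised to [0, 1]."""
    return 1.0 - sum(abs(p - q) for p, q in zip(x, y)) / 3.0

def object_similarity(a, b, schema, spans):
    """Unweighted average of per-attribute similarities for two records
    described by numeric, categorical and fuzzy attributes."""
    total = 0.0
    for attr, kind in schema.items():
        if kind == "numeric":
            total += numeric_sim(a[attr], b[attr], spans[attr])
        elif kind == "categorical":
            total += categorical_sim(a[attr], b[attr])
        else:  # fuzzy
            total += fuzzy_sim(a[attr], b[attr])
    return total / len(schema)

schema = {"age": "numeric", "city": "categorical", "income": "fuzzy"}
spans = {"age": 80}
p1 = {"age": 30, "city": "Bradford", "income": (0.2, 0.3, 0.4)}
p2 = {"age": 38, "city": "Bradford", "income": (0.3, 0.4, 0.5)}
print(round(object_similarity(p1, p2, schema, spans), 4))
```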
Libyan Embassy
APA, Harvard, Vancouver, ISO, and other citation styles
17

Smith, Gregory J. „TOWARD A TWO-STAGE MODEL OF FREE CATEGORIZATION“. CSUSB ScholarWorks, 2015. https://scholarworks.lib.csusb.edu/etd/234.

Full text of the source
Annotation:
This research examines how comparison of objects underlies free categorization, an essential component of human cognition. Previous results using our binomial labeling task have shown that classification probabilities are affected in a graded manner as a function of similarity, i.e., the number of features shared by two objects. In a similarity rating task, people also rated objects sharing more features as more similar. However, the effect of matching features was approximately linear in the similarity task, but superadditive (exponential) in the labeling task. We hypothesize that this difference is due to the fact that people must select specific objects to compare prior to deciding whether to put them in the same category in the labeling task, while they were given specific pairs to compare in the rating task. Thus, the number of features shared by two objects could affect both stages (selection and comparison) in the labeling task, which might explain their super-additive effect, whereas it affected only the latter comparison stage in the similarity rating task. In this experiment, participants saw visual displays consisting of 16 objects from three novel superordinate artificial categories, and were asked to generate binomial (letter-number) labels for each object to indicate their super-and-subordinate category membership. Only one object could be viewed at a time, and these objects could be viewed in any order. This made it possible to record what objects people examine when labeling a given object, which in turn permits separate assessment of stage 1 (selection) versus stage 2 (comparison/decision). Our primary objective in this experiment was to determine whether the increase in category labeling probabilities as a function of level of match (similarity) can be explained by increased sampling alone (stage 1 model), an increased perception of similarity following sampling (stage 2 model), or some combination (mixed model). 
The results were consistent with earlier studies in showing that the number of matching discrete features shared by two objects affected the probability of same-category label assignment. However, there was no effect of the level of match on the probability of visiting the first matching object while labeling the second. This suggests that the labeling effect is not due to differences in the likelihood of comparing matching objects (stage 1) as a function of the level of match. Thus, the present data provide support for a stage-2-only model, in which the evaluation of similarity is the primary component underlying the level-of-match effect on free categorization.
APA, Harvard, Vancouver, ISO, and other citation styles
18

Bashon, Yasmina Massoud. „Contributions to fuzzy object comparison and applications : similarity measures for fuzzy and heterogeneous data and their applications“. Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/6305.

Full text of the source
Annotation:
This thesis makes an original contribution to knowledge in the field of data object comparison, where the objects are described by attributes of fuzzy or heterogeneous (numeric and symbolic) data types. Many real-world database systems and applications require information management components that provide support for managing such imperfect and heterogeneous data objects. For example, with new online information made available from various sources, in semi-structured, structured or unstructured representations, new information usage and search algorithms must consider that such data collections may contain objects/records with different types of data — fuzzy, numerical and categorical — for the same attributes. New approaches to similarity have been presented in this research to support such data comparison. A generalisation of both geometric and set-theoretical similarity models has enabled us to propose the new similarity measures presented in this thesis, which handle the vagueness (fuzzy data type) within data objects. A framework of new and unified similarity measures for comparing heterogeneous objects described by numerical, categorical and fuzzy attributes has also been introduced. Examples are used to illustrate, compare and discuss the applications and efficiency of the proposed approaches to heterogeneous data comparison.
APA, Harvard, Vancouver, ISO, and other citation styles
19

MacLean, Angus. „A lightweight, graph-theoretic model of class-based similarity to support object-oriented code reuse“. Thesis, Robert Gordon University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.249740.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
20

Muscoloni, Alessandro, und Carlo Vittorio Cannistraci. „A nonuniform popularity-similarity optimization (nPSO) model to efficiently generate realistic complex networks with communities“. Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-236957.

Full text of the source
Annotation:
The investigation of the hidden metric space behind complex network topologies is a fervid topic in current network science, and the hyperbolic space is one of the most studied because it seems associated with the structural organization of many real complex systems. The popularity-similarity optimization (PSO) model simulates how random geometric graphs grow in the hyperbolic space, generating realistic networks with clustering, small-worldness, scale-freeness and rich-clubness. However, it fails to reproduce an important feature of real complex networks, which is the community organization. The geometrical-preferential-attachment (GPA) model was recently developed in order to endow the PSO with a soft community structure as well, obtained by forcing different angular regions of the hyperbolic disk to have a variable level of attractiveness. However, the number and size of the communities cannot be explicitly controlled in the GPA, which is a clear limitation for real applications. Here, we introduce the nonuniform PSO (nPSO) model. Differently from GPA, the nPSO generates synthetic networks in the hyperbolic space where heterogeneous angular node attractiveness is forced by sampling the angular coordinates from a tailored nonuniform probability distribution (for instance, a mixture of Gaussians). The nPSO differs from GPA in three further respects: it allows one to explicitly fix the number and size of communities; it allows one to tune their mixing property by means of the network temperature; and it is efficient at generating networks with high clustering. Several tests on the detectability of the community structure in nPSO synthetic networks and wide investigations of their structural properties confirm that the nPSO is a valid and efficient model to generate realistic complex networks with communities.
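The nonuniform sampling step that distinguishes nPSO from PSO is easy to illustrate: draw each node's angular coordinate from an equal-weight mixture of Gaussians, one component per community, with component means evenly spaced on the circle. The community count, spread, and seed below are arbitrary choices for the sketch, not the paper's parameters:

```python
import math
import random

def sample_angles(n, n_communities, sigma=0.2, seed=42):
    """Sample angular coordinates on [0, 2*pi) from an equal-weight
    mixture of Gaussians, one component per community, centred at
    evenly spaced angles -- the nonuniform distribution that gives
    nPSO networks their community structure."""
    rng = random.Random(seed)
    centres = [2 * math.pi * k / n_communities for k in range(n_communities)]
    angles = []
    for _ in range(n):
        mu = rng.choice(centres)              # pick a community uniformly
        theta = rng.gauss(mu, sigma)          # sample around its centre
        angles.append(theta % (2 * math.pi))  # wrap onto the circle
    return angles

angles = sample_angles(1000, n_communities=4)
```

In the full model these angles are combined with radial coordinates from the PSO growth rule; the angular clustering alone is what induces the communities.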
APA, Harvard, Vancouver, ISO, and other citation styles
21

Hoplamazian, Gregory J. „Cultural cues in advertising: Context effects on perceived model similarity, identification processes, and advertising outcomes“. The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1308295797.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
22

Milajevs, Dmitrijs. „A study of model parameters for scaling up word to sentence similarity tasks in distributional semantics“. Thesis, Queen Mary, University of London, 2018. http://qmro.qmul.ac.uk/xmlui/handle/123456789/36225.

Der volle Inhalt der Quelle
Annotation:
Representation of sentences that captures semantics is an essential part of natural language processing systems, such as information retrieval or machine translation. The representation of a sentence is commonly built by combining the representations of the words that the sentence consists of. Similarity between words is widely used as a proxy to evaluate semantic representations. Word similarity models are well studied and have been shown to correlate positively with human similarity judgements. Current evaluation of models of sentential similarity builds on the results obtained in lexical experiments. The main focus is how the lexical representations are used, rather than what they should be. It is often assumed that the optimal representations for word similarity are also optimal for sentence similarity. This work discards this assumption and systematically looks for lexical representations that are optimal for similarity measurement between sentences. We find that the best representation for word similarity is not always the best for sentence similarity, and vice versa. The best models in word similarity tasks perform best with additive composition. However, the best result on compositional tasks is achieved with Kronecker-based composition. There are representations that are equally good in both tasks when used with multiplicative composition. The systematic study of the parameters of similarity models reveals that the more information lexical representations contain, the more attention should be paid to noise. In particular, word vectors in models with a feature size at the magnitude of the vocabulary size should be sparse, but if a small number of context features is used then the vectors should be dense. Given the right lexical representations, compositional operators achieve state-of-the-art performance, improving over models that use neural word embeddings. To avoid overfitting, either several test datasets should be used or parameter selection should be based on parameters' average behaviours.
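The three compositional operators the abstract compares (additive, multiplicative, and Kronecker-based composition of word vectors) can be illustrated with a small sketch on plain-Python vectors; the function names are ours, not the thesis's:

```python
def additive(u, v):
    """Vector addition: the most common baseline composition."""
    return [a + b for a, b in zip(u, v)]

def multiplicative(u, v):
    """Elementwise (Hadamard) product: keeps only shared features."""
    return [a * b for a, b in zip(u, v)]

def kronecker(u, v):
    """Kronecker (outer) product, flattened: every pairwise feature
    interaction gets its own dimension."""
    return [a * b for a in u for b in v]
```

Kronecker composition blows the dimensionality up to |u|·|v|, which is why it can capture pairwise feature interactions that additive and multiplicative composition collapse into a single dimension.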
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Miller, Gina L. „An empirical investigation of a categorization based model of the evaluation formation process as it pertains to set membership prediction“. Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/29984.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Whitmore, Corrie Baird. „Trust Development: Testing a New Model in Undergraduate Roommate Relationships“. Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/37380.

Der volle Inhalt der Quelle
Annotation:
Interpersonal trust reflects a vital component of all social relationships. Trust has been linked to a wide variety of individual and group outcomes in the literature, including personal satisfaction and motivation, willingness to take risks, and organizational success (Dirks & Ferrin, 2001; Pratt & Dirks, 2007; Simpson, 2007). In this dissertation I tested a new conceptual model evaluating the roles of attachment, propensity to trust, perceived similarity of trustee to self, and social exchange processes in trust development with randomly assigned, same-sex undergraduate roommates. Two hundred and fourteen first-year students (60% female, 85% Caucasian, mean age = 18) at a large south-eastern university completed self-report measures once per week during the first five weeks of the fall semester. Perceived similarity measured in the second week of classes and social exchange measured three weeks later combined to provide the best prediction of participants' final trust scores. Attachment and propensity to trust, more distal predictors, did not have a significant relationship with trust. This study demonstrated that trust is strongly related to perceived similarity, as well as to social exchange. A prime contribution of this study is the longitudinal, empirical test of a model of trust development in a new and meaningful relationship. Future work may build on this research design and these findings by focusing on early measurement of constructs, measuring dyads rather than individuals, and incorporating behavioral measures of trust.
Ph. D.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Menlove, Kit J. „Model Detection Based upon Amino Acid Properties“. BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2253.

Der volle Inhalt der Quelle
Annotation:
Similarity searches are an essential component of most bioinformatic applications. They form the basis of structural motif identification, gene identification, and insights into functional associations. With the rapid increase in genetic data available through a wide variety of databases, similarity searches are an essential tool for accessing these data in an informative and productive way. In our chapter, we provide an overview of similarity searching approaches, related databases, and parameter options to achieve the best results for a variety of applications. We then provide a worked example and some notes for consideration. Homology detection is one of the most basic and fundamental problems at the heart of bioinformatics. It is central to problems currently under intense investigation in protein structure prediction, phylogenetic analyses, and computational drug development. Currently, discriminative methods for homology detection, which are not readily interpretable, are substantially more powerful than their more interpretable counterparts, particularly when sequence identity is very low. Here I present a computational graph-based framework for homology inference using physicochemical amino acid properties, which aims both to reduce the gap in accuracy between discriminative and generative methods and to provide a framework for easily identifying the physicochemical basis for the structural similarity between proteins. The accuracy of my method slightly improves on that of PSI-BLAST, the most popular generative approach, and underscores the potential of this methodology given a more robust statistical foundation.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Bär, Daniel [Verfasser], Iryna [Akademischer Betreuer] Gurevych, Ido [Akademischer Betreuer] Dagan und Torsten [Akademischer Betreuer] Zesch. „A Composite Model for Computing Similarity Between Texts / Daniel Bär. Betreuer: Iryna Gurevych ; Ido Dagan ; Torsten Zesch“. Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2013. http://d-nb.info/1107772079/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Dousa, Dominic. „Assessing the degree of relatedness between key areas : a quantitative model of key distance and tonal similarity“. Virtual Press, 2003. http://liblink.bsu.edu/uhtbin/catkey/1259312.

Der volle Inhalt der Quelle
Annotation:
In this dissertation, I propose a model for comparing the degree to which diatonic major and minor keys are related to one another. I consider specific tonal and interval relationship factors (e.g., circle-of-fifths distance, tonic-to-tonic interval class) and their influence on the perception of how two keys are related. For all forty-eight possible relationships of two diatonic keys, I assign numerical values for each factor. These values are based on theoretical concepts where appropriate (e.g., the number of steps between the keys on the circle of fifths) and on an intuitive assessment in cases where there is no accepted numerical designation (e.g., the direction along the circle of fifths one travels from the first key to the second). I weight the values according to the relative significance of each factor and sum the weighted values to obtain a single numerical measure that describes the "distance" from the first key to the second. This abstract idea of distance represents the degree of relatedness between two keys. Larger distance values denote a lesser degree of relatedness. The model incorporates the idea of key-distance asymmetry, namely that the perceived distance between two keys depends on the order in which they occur. I devote one chapter to a general discussion of key relationships and another to the application of the model as a tool for analyzing the harmonic structure of tonal compositions from the standard literature. Using the key-distance model I develop in Chapter 2, I also provide a harmonic analysis of a composition for symphonic band which I have written to complement the theoretical portion of this dissertation. The score of this piece is included as an appendix.
School of Music
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Mebratu, Ashagrie Kefyalew. „Does religious similarity influence the direction of trade? : Evidence from US bilateral trade with other 168 countries“. Thesis, Södertörns högskola, Institutionen för samhällsvetenskaper, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-17478.

Der volle Inhalt der Quelle
Annotation:
Despite interest in the influence of religion on economic activity by early economists like Adam Smith, modern economists have done little research on the subject. In light of the apparent religious fervour in many parts of the global economy, economists' seeming lack of interest in studying how religious cultures enhance or retard the globalization of economic activity is especially surprising. In general, trade theories have given little weight to demand-side explanations of why countries trade. Contrary to the H-O theory, Linder proposed a theoretically sound and empirically consistent trade theory that locates the reasons why countries trade on the demand side. To fill this gap, I use international survey data on religiosity for a broad panel of countries trading with the US to investigate the effects of church attendance and religious beliefs on trade. The beliefs are, in turn, the principal output of the religion sector, and believers' alignment with a specific denomination measures the inputs to this sector. Hence, I use an extended gravity model of international trade to control for a variety of factors that determine trade, and two regression methods, OLS and WLS, to exploit the model to its fullest. I find that the sharing of the same religious culture by people in different countries has a significantly positive influence on bilateral trade, all other things being equal. These results accord with a perspective in which religious beliefs influence individual traits that enhance trade and economic performance in general. My attempt to highlight religion as a means to trade is only a derivation of Linder's overlapping demand theory.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Rorissa, Abebe. „Perceived features and similarity of images: An investigation into their relationships and a test of Tversky's contrast model“. Thesis, University of North Texas, 2005. https://digital.library.unt.edu/ark:/67531/metadc4749/.

Der volle Inhalt der Quelle
Annotation:
The creation, storage, manipulation, and transmission of images have become less costly and more efficient. Consequently, the numbers of images and their users are growing rapidly. This poses challenges to those who organize and provide access to them. One of these challenges is similarity matching. Most current content-based image retrieval (CBIR) systems which can extract only low-level visual features such as color, shape, and texture, use similarity measures based on geometric models of similarity. However, most human similarity judgment data violate the metric axioms of these models. Tversky's (1977) contrast model, which defines similarity as a feature contrast task and equates the degree of similarity of two stimuli to a linear combination of their common and distinctive features, explains human similarity judgments much better than the geometric models. This study tested the contrast model as a conceptual framework to investigate the nature of the relationships between features and similarity of images as perceived by human judges. Data were collected from 150 participants who performed two tasks: an image description and a similarity judgment task. Qualitative methods (content analysis) and quantitative (correlational) methods were used to seek answers to four research questions related to the relationships between common and distinctive features and similarity judgments of images as well as measures of their common and distinctive features. Structural equation modeling, correlation analysis, and regression analysis confirmed the relationships between perceived features and similarity of objects hypothesized by Tversky (1977). 
Tversky's (1977) contrast model, based upon a combination of two methods for measuring common and distinctive features and two methods for measuring similarity, produced statistically significant structural coefficients between the independent latent variables (common and distinctive features) and the dependent latent variable (similarity). This model fit the data well for a sample of 30 (435 pairs of) images and 150 participants (χ² = 16.97, df = 10, p = .07508, RMSEA = .040, SRMR = .0205, GFI = .990, AGFI = .965). The goodness-of-fit indices showed the model did not significantly deviate from the actual sample data. This study is the first to test the contrast model in the context of information representation and retrieval. The results should provide the foundations for future research that will further test the contrast model and assist designers of image organization and retrieval systems by pointing toward alternative document representations and similarity measures that more closely match human similarity judgments.
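Tversky's contrast model equates the similarity of stimuli a and b to θ·f(A∩B) - α·f(A-B) - β·f(B-A), a linear combination of their common and distinctive feature sets. A minimal sketch with set-valued features (the parameter defaults are ours, chosen for illustration):

```python
def tversky_similarity(features_a, features_b, theta=1.0, alpha=0.5, beta=0.5):
    """Contrast-model similarity: reward common features, penalize
    distinctive ones. f is taken here as simple set cardinality."""
    a, b = set(features_a), set(features_b)
    common = len(a & b)        # f(A ∩ B)
    only_a = len(a - b)        # f(A - B)
    only_b = len(b - a)        # f(B - A)
    return theta * common - alpha * only_a - beta * only_b
```

When α ≠ β the measure is asymmetric, so s(a, b) ≠ s(b, a); this is exactly the violation of the metric axioms in human judgement data that geometric models cannot express.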
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Finlay, Richard. „The Variance Gamma (VG) Model with Long Range Dependence“. University of Sydney, 2009. http://hdl.handle.net/2123/5434.

Der volle Inhalt der Quelle
Annotation:
Doctor of Philosophy (PhD)
This thesis mainly builds on the Variance Gamma (VG) model for financial assets over time of Madan & Seneta (1990) and Madan, Carr & Chang (1998), although the model based on the t distribution championed in Heyde & Leonenko (2005) is also given attention. The primary contribution of the thesis is the development of VG models, and the extension of t models, which accommodate a dependence structure in asset price returns. In particular it has become increasingly clear that while returns (log price increments) of historical financial asset time series appear as a reasonable approximation of independent and identically distributed data, squared and absolute returns do not. In fact squared and absolute returns show evidence of being long range dependent through time, with autocorrelation functions that are still significant after 50 to 100 lags. Given this evidence against the assumption of independent returns, it is important that models for financial assets be able to accommodate a dependence structure.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Williams, David W. „Why Do Different New Ventures Internationalize Differently? A Cognitive Model of Entrepreneurs' Internationalization Decisions“. Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/managerialsci_diss/21.

Der volle Inhalt der Quelle
Annotation:
What makes entrepreneurs select one international opportunity while rejecting or ignoring others? Furthermore, what makes entrepreneurs decide to exploit an international opportunity earlier or later? Two theories of internationalization provide answers to these questions: the Uppsala Model and International Entrepreneurship theory. However, these two theories provide competing answers to these questions, and empirical research offers inconsistent evidence about what influences entrepreneurs to select an international opportunity – and when to exploit the opportunity. To address these issues, I develop a cognitive model that explains when and why the predictions of these theories do (and do not) explain entrepreneurs’ behavior regarding new venture internationalization. More specifically, I propose that entrepreneurs’ internationalization decision making rests, in part, on cognitive processes of similarity comparison and structural alignment. I use a multi-method / multi-study approach to answer the above questions. In the first study, I use verbal protocol techniques to analyze the cognitive processes of entrepreneurs as they ‘think out loud’ while making decisions on international opportunity selection and age at entry. In the second study, I use a survey plus secondary data to test if the actual decisions made by entrepreneurs on international opportunity selection and age at entry correspond to the dissertation’s predictions. Results show that cognitive processes of similarity comparison and structural alignment underpin entrepreneurs’ internationalization decisions. Entrepreneurs rely heavily on commonalities and look for high levels of similarity between the home and host country when deciding when to internationalize their firms. Regarding entrepreneurs’ decisions on international opportunity selection, their decisions reflect the influence of both comparable and noncomparable opportunity features. 
Interestingly, I observe that prior international knowledge directly impacts entrepreneurs’ internationalization decisions, but also moderates the relationship between similarity considerations and entrepreneurs’ decisions on international opportunity selection. Ultimately, I reconcile and integrate two competing internationalization theories by resolving tensions between them. I demonstrate that the different predictions of the two internationalization theories can be explained by the differential focus that entrepreneurs place on comparable and noncomparable attributes of their opportunity set. I also show the importance of taking an individual-level and cognitive view to understanding these decisions.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Shaban, Khaled. „A Semantic Graph Model for Text Representation and Matching in Document Mining“. Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2860.

Der volle Inhalt der Quelle
Annotation:
The explosive growth in the number of documents produced daily necessitates the development of effective alternatives to explore, analyze, and discover knowledge from documents. Document mining research work has emerged to devise automated means to discover and analyze useful information from documents. This work has been mainly concerned with constructing text representation models, developing distance measures to estimate similarities between documents, and utilizing that in mining processes such as document clustering, document classification, information retrieval, information filtering, and information extraction.

Conventional text representation methodologies consider documents as bags of words and ignore the meanings and ideas their authors want to convey. It is this deficiency that causes similarity measures to fail to perceive contextual similarity of text passages due to the variation of the words the passages contain, or at least perceive contextually dissimilar text passages as being similar because of the resemblance of words the passages have.

This thesis presents a new paradigm for mining documents by exploiting semantic information of their texts. A formal semantic representation of linguistic inputs is introduced and utilized to build a semantic representation scheme for documents. The representation scheme is constructed through accumulation of syntactic and semantic analysis outputs. A new distance measure is developed to determine the similarities between contents of documents. The measure is based on inexact matching of attributed trees. It involves the computation of all distinct similarity common sub-trees, and can be computed efficiently. It is believed that the proposed representation scheme along with the proposed similarity measure will enable more effective document mining processes.

The proposed techniques to mine documents were implemented as vital components in a mining system. A case study of semantic document clustering is presented to demonstrate the working and the efficacy of the framework. Experimental work is reported, and its results are presented and analyzed.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Pierro, Gabriel Vicente de. „Consultas por similaridade no modelo relacional“. Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-11092015-094738/.

Der volle Inhalt der Quelle
Annotation:
The Relational Database Management Systems (RDBMS) were originally conceived to store and retrieve large volumes of data. Traditionally, these systems support only numbers, small strings of characters and dates, which can be compared by identity or by an order relationship (OR). However, it has become increasingly necessary to organize, store and retrieve more complex data, such as multimedia (images, audio and video), time series etc. Dealing with those data types requires a paradigm shift, as comparisons between elements are made by similarity rather than by the traditionally used identity or OR; the most common similarity operators are the range query (Rq) and the k-Nearest Neighbors (k-NN) query. Despite many studies in the field, when dealing with similarity queries a large part of the effort has been directed towards the data structures and the operations needed to execute only the similarity side of the query, without pursuing a more homogeneous integration of queries that involve both operator types simultaneously in RDBMS environments. One of the main obstacles to such integration is a peculiarity of the k-NN operator: identity and OR operators are commutative and associative amongst themselves, but the k-NN operator is neither. As such, SQL queries, which can usually be expressed without regard to the order in which predicates appear, must now take ordering into account. Furthermore, queries that use k-NN may generate multiple ties, and the lack of a methodology to resolve them can lead to an arbitrary or context-detached untying process in which users have little or no control to intervene. In some applications, the lack of a controlled untying process may even cause the same query to yield distinct results if the underlying structures are subject to change, as is the case with concurrent transactions in an RDBMS. This work focuses on the problems that arise from the integration of similarity-based operators, more specifically k-NN, into RDBMS, and proposes new ways to represent queries with multiple predicates (similarity, identity or OR), as well as new operators derived from k-NN that are better suited to an RDBMS environment supporting hybrid queries and that also give control over the untying process.
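The non-commutativity the thesis discusses is easy to demonstrate: applying an identity predicate before or after a k-NN operator yields different answers. A toy sketch (hypothetical data, not from the thesis):

```python
def knn(rows, k, key, center):
    """k nearest rows to `center` on attribute `key` (stable sort breaks ties)."""
    return sorted(rows, key=lambda r: abs(r[key] - center))[:k]

products = [
    {"id": 1, "price": 10, "rating": 5},
    {"id": 2, "price": 12, "rating": 2},
    {"id": 3, "price": 30, "rating": 5},
]

# "The 2 products nearest to price 11 with rating = 5": the answer
# depends on whether the k-NN or the identity predicate runs first.
knn_then_filter = [r for r in knn(products, 2, "price", 11) if r["rating"] == 5]
filter_then_knn = knn([r for r in products if r["rating"] == 5], 2, "price", 11)
```

Running k-NN first keeps products 1 and 2 and then discards product 2 (rating 2), returning only product 1; filtering first keeps products 1 and 3 and returns both. Identity and OR predicates can be freely reordered; k-NN cannot.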
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Baioco, Gisele Busichia. „Modelo de custo para consultas por similaridade em espaços métricos“. Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-07052007-155746/.

Der volle Inhalt der Quelle
Annotation:
This thesis presents a cost model to estimate the number of disk accesses (I/O cost) and the number of distance calculations (CPU cost) for similarity queries executed over dynamic metric access methods. The goal of the model is the optimization of similarity queries in relational and object-relational Database Management Systems. Two types of similarity queries were considered: range queries and k-nearest neighbor queries. The dynamic metric access method Slim-Tree was used as the basis for the cost model. The model estimates the intrinsic dimension of the data set by its correlation fractal dimension. Experiments on real and synthetic data sets of various sizes and dimensions validate the model, showing that its estimates generally fall within the range of variation measured in real queries.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Hampl, Filip. „Podobnost obrazu na základě barvy“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-220534.

Der volle Inhalt der Quelle
Annotation:
This diploma thesis deals with image similarity based on colour. It first covers the theoretical background needed for the topic: the colour models implemented in the work, the principle of histogram construction, and histogram comparison. The next chapter summarizes recent progress in the field of image comparison and surveys several of the most widely used methods. The practical part introduces a training image database, against which the success rate of each implemented method is reported. The methods are described individually, including their principles and the results achieved. The thesis closes with a description of the user interface, which provides a clear presentation of the results for the chosen method.
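The pipeline the thesis describes, building a colour histogram and then comparing histograms, can be sketched as follows (a simplified RGB quantization with histogram intersection; the details are our assumptions, not the thesis's exact method):

```python
def color_histogram(pixels, bins=4):
    """Quantize each RGB channel into `bins` levels (bins**3 buckets)
    and return a normalized histogram over the pixels."""
    hist = [0] * bins ** 3
    for r, g, b in pixels:
        idx = (r * bins // 256) * bins * bins + (g * bins // 256) * bins + (b * bins // 256)
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical colour distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

Because the histogram discards spatial layout, two images with the same colour distribution but different arrangements compare as identical; that is the usual trade-off of purely colour-based similarity.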
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Sun, Xuebo, und Yudan Wang. „An Application of Dimension Reduction for Intention Groups in Reddit“. Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-56500.

Der volle Inhalt der Quelle
Annotation:
Reddit (www.reddit.com) is a social news platform for sharing and exchanging information. The amount of data, in terms of both observations and dimensions, is enormous, because a large number of users publish comments expressing all aspects of their lives and knowledge. While it is easy for a human being to understand Reddit comments on an individual basis, extracting insights from them with a computer is a tremendous challenge. In this thesis, we seek an algorithmic approach to analyze both the unique Reddit data structure and the relations between comment authors with similar features. We explore the various types of communication between two people with common characteristics and build a communication model that characterizes the potential relationship between two users via their messages. We then seek a dimensionality-reduction methodology that can merge users with similar behavior into the same groups. Along the way, we develop a computer program to collect the data, define attributes based on the communication model, and apply a rule-based group-merging algorithm. We then evaluate the results to show the effectiveness of this methodology. Our results show reasonable success in producing user groups that have recognizable characteristics and share similar intentions.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Tocatlidou, Athena. „The use of evidential support logic and a new similarity measurement for fuzzy sets to model the decision making process“. Thesis, University of Bristol, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337159.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Wesolkowski, Slawomir. „Color Image Edge Detection and Segmentation: A Comparison of the Vector Angle and the Euclidean Distance Color Similarity Measures“. Thesis, University of Waterloo, 1999. http://hdl.handle.net/10012/937.

Der volle Inhalt der Quelle
Annotation:
This work is based on Shafer's Dichromatic Reflection Model as applied to color image formation. The color spaces RGB, XYZ, CIELAB, CIELUV, rgb, l1l2l3, and the new h1h2h3 color space are discussed from this perspective. Two color similarity measures are studied: the Euclidean distance and the vector angle. The work in this thesis is motivated from a practical point of view by several shortcomings of current methods. The first problem is the inability of all known methods to properly segment objects from the background without interference from object shadows and highlights. The second shortcoming is the non-examination of the vector angle as a distance measure that is capable of directly evaluating hue similarity without considering intensity especially in RGB. Finally, there is inadequate research on the combination of hue- and intensity-based similarity measures to improve color similarity calculations given the advantages of each color distance measure. These distance measures were used for two image understanding tasks: edge detection, and one strategy for color image segmentation, namely color clustering. Edge detection algorithms using Euclidean distance and vector angle similarity measures as well as their combinations were examined. The list of algorithms is comprised of the modified Roberts operator, the Sobel operator, the Canny operator, the vector gradient operator, and the 3x3 difference vector operator. Pratt's Figure of Merit is used for a quantitative comparison of edge detection results. Color clustering was examined using the k-means (based on the Euclidean distance) and Mixture of Principal Components (based on the vector angle) algorithms. A new quantitative image segmentation evaluation procedure is introduced to assess the performance of both algorithms. 
Quantitative and qualitative results on many color images (artificial, staged scenes and natural scene images) indicate good edge detection performance using a vector version of the Sobel operator on the h1h2h3 color space. The results using combined hue- and intensity-based difference measures show a slight qualitative improvement over using each measure independently in RGB. Quantitative and qualitative results for image segmentation on the same set of images suggest that the best image segmentation results are obtained using the Mixture of Principal Components algorithm on the RGB, XYZ and rgb color spaces. Finally, poor color clustering results in the h1h2h3 color space suggest that some assumptions in deriving a simplified version of the Dichromatic Reflectance Model might have been violated.
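To make the contrast between the two measures concrete, here is a minimal sketch (not code from the thesis) of the Euclidean distance and the vector angle on RGB triples:

```python
import math

def euclidean_rgb(c1, c2):
    """Euclidean distance between two RGB colours: sensitive to intensity."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def vector_angle_rgb(c1, c2):
    """Angle (radians) between two RGB colour vectors: sensitive to hue,
    largely insensitive to intensity."""
    dot = sum(a * b for a, b in zip(c1, c2))
    n1 = math.sqrt(sum(a * a for a in c1))
    n2 = math.sqrt(sum(b * b for b in c2))
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

# The same hue at half the intensity (e.g. a shadowed region):
bright = (200, 100, 50)
shadow = (100, 50, 25)
# euclidean_rgb(bright, shadow) is large (~115) while
# vector_angle_rgb(bright, shadow) is 0.0
```

The example illustrates the shadow problem the thesis raises: the Euclidean distance separates a surface from its own shadow, while the vector angle does not.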
39

Liu, Jinghui. „Approaches to improve the precision of similarity patterns and reproducibility for cluster analysis infinite mixture model based cluster analyses for gene expression data /“. Cincinnati, Ohio : University of Cincinnati, 2008. http://rave.ohiolink.edu/etdc/view.cgi?acc_num=ucin1211903300.

40

Vaníček, Jan. „Termomechanický model pneumatiky“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2021. http://www.nusl.cz/ntk/nusl-445170.

Annotation:
This diploma thesis deals with the thermomechanics of passenger car tires. The research part, covering existing tire models, is followed by the practical part, which is based on the design of thermomechanical models. The first model determines the dependence of the air pressure inside a tire on its temperature as the temperature changes. The second thermomechanical model captures all the heat fluxes that affect a tire while the vehicle is in motion. The third thermomechanical model calculates the temperatures of individual parts of the tire during driving tests. All models are programmed in MATLAB.
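The first model's pressure–temperature relation can be illustrated with the constant-volume ideal-gas law; this is a plausible sketch of such a model under that assumption, not the thesis's MATLAB code:

```python
def tyre_pressure(p1_kpa, t1_c, t2_c):
    """Absolute tyre pressure after a temperature change, assuming a
    constant-volume ideal gas (Gay-Lussac's law): p2 = p1 * T2 / T1,
    with temperatures converted to kelvin."""
    T1 = t1_c + 273.15  # kelvin
    T2 = t2_c + 273.15
    return p1_kpa * T2 / T1

# 320 kPa absolute at 20 °C, warmed to 50 °C by driving: roughly 353 kPa
hot = tyre_pressure(320.0, 20.0, 50.0)
```

Real tire models add heat generation from rolling resistance and exchange with the road and air, which is what the second and third models in the thesis address.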
41

Carlassare, Giulio. „Similarità semantica e clustering di concetti della letteratura medica rappresentati con language model e knowledge graph di eventi“. Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23138/.

Annotation:
A large amount of information is available on the web, mainly in textual form, and the spread of social networks has increased its production. The lack of structure makes it difficult to use the knowledge it contains, which is generally expressed as facts representable as relations (two entities linked by a predicate) or events (where one word carries semantics involving possibly many entities). Research has recently turned its attention to Knowledge Graphs, which encode knowledge in a graph whose nodes represent entities and whose edges indicate the relations between them. Although their construction still requires a great deal of manual work, recent advances in Natural Language Understanding offer increasingly sophisticated tools: in particular, transformer-based language models underlie many solutions for the automatic extraction of knowledge from text. The topics covered in this thesis apply directly to rare diseases: the scarcity of information has led to the rise of online patient communities in which highly relevant first-hand experiences are exchanged. Capturing the "voice of the patients" can be very important for making physicians aware of how those directly affected perceive the disease. The case study concerns a specific rare disease, oesophageal achalasia, and a dataset of posts published in a Facebook group dedicated to it. A modular reference architecture is proposed and then implemented with previously analysed methodologies. Finally, a solution is presented in which interactions in the form of events, extracted partly with a language model, are effectively represented in a vector space that reflects their semantic content, where it is possible to perform clustering, compute their similarity and consequently aggregate them into a single knowledge graph.
42

Zubrik, Tomáš. „Segmentace stránky ve webovém prohlížeči“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445512.

Annotation:
This thesis deals with web page segmentation in a web browser. An implementation of the Box Clustering Segmentation (BCS) method was created in JavaScript using an automated browser. The implementation consists of two main steps: extraction of boxes (leaf DOM nodes) from the browser context and their subsequent clustering based on the similarity model defined in BCS. The main result of this thesis is a functional implementation of the BCS method usable for web page segmentation. The functionality and accuracy of the implementation are evaluated by comparison with a reference implementation written in Java.
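The two BCS steps, box extraction followed by similarity-based clustering, can be sketched as below; the box distance and the single-link clustering here are illustrative stand-ins, not the BCS similarity model itself:

```python
def box_distance(a, b):
    """Distance between two axis-aligned boxes (x, y, w, h): zero when
    they touch or overlap; a simple stand-in for a box similarity model."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    dx = max(0, max(ax, bx) - min(ax + aw, bx + bw))
    dy = max(0, max(ay, by) - min(ay + ah, by + bh))
    return (dx * dx + dy * dy) ** 0.5

def cluster_boxes(boxes, threshold):
    """Single-link clustering via union-find: boxes closer than
    `threshold` end up in the same cluster (returns lists of indices)."""
    parent = list(range(len(boxes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if box_distance(boxes[i], boxes[j]) < threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(boxes)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

With `[(0, 0, 10, 10), (12, 0, 10, 10), (100, 100, 5, 5)]` and a threshold of 5, the first two boxes merge into one segment and the distant box stays alone.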
43

Goyal, Vivek. „A Recommendation System Based on Multiple Databases“. University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1368027581.

44

Bani-Ahmad, Sulieman Ahmad. „RESEARCH-PYRAMID BASED SEARCH TOOLS FOR ONLINE DIGITAL LIBRARIES“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1207228115.

45

Kunze, Matthias. „Searching business process models by example“. Phd thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6884/.

Annotation:
Business processes are fundamental to the operations of a company. Each product manufactured and every service provided is the result of a series of actions that constitute a business process. Business process management is an organizational principle that makes the processes of a company explicit and offers capabilities to implement procedures, control their execution, analyze their performance, and improve them. Therefore, business processes are documented as process models that capture these actions and their execution ordering, and make them accessible to stakeholders. As these models are an essential knowledge asset, they need to be managed effectively. In particular, the discovery and reuse of existing knowledge become challenging for companies maintaining hundreds or thousands of process models. In practice, searching process models has been solved only superficially by means of free-text search over process names and descriptions. Scientific contributions are limited in scope, as they either present measures for process similarity or elaborate on query languages to search for particular aspects; they fall short of addressing efficient search, the presentation of search results, and support for reusing discovered models. This thesis presents a novel search method, where a query is expressed by an exemplary business process model that describes the behavior of a possible answer. This method builds upon a formal framework that captures and compares the behavior of process models by the execution ordering of actions. The framework contributes a conceptual notion of behavioral distance that quantifies the commonalities and differences of a pair of process models, and enables process model search. Based on behavioral distances, a set of measures is proposed that evaluate the quality of a particular search result to guide the user in assessing the returned matches.
A projection of behavioral aspects to a process model enables highlighting relevant fragments that led to a match and facilitates its reuse. The thesis further elaborates on two search techniques that provide concrete behavioral distance functions as an instantiation of the formal framework. Querying enables search with a notion of behavioral inclusion with regard to the query. In contrast, similarity search obtains process models that are similar to a query, even if the query is not precisely matched. For both techniques, indexes are presented that enable efficient search. Methods to evaluate the quality and performance of process model search are introduced and applied to the techniques of this thesis. They show good results with regard to human assessment and scalability in a practical setting.
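One toy way to make a behavioral distance concrete (a stand-in for intuition, not the thesis's formal framework) is to compare the execution-ordering relations induced by two sets of traces:

```python
def ordering_relations(traces):
    """Weak ordering relation of a process: the pair (a, b) is included
    if action a occurs before action b in at least one trace."""
    rel = set()
    for trace in traces:
        for i, a in enumerate(trace):
            for b in trace[i + 1:]:
                rel.add((a, b))
    return rel

def behavioural_similarity(traces1, traces2):
    """Jaccard similarity of the two ordering relations (1.0 = identical
    observed ordering behavior, 0.0 = nothing in common)."""
    r1, r2 = ordering_relations(traces1), ordering_relations(traces2)
    if not r1 and not r2:
        return 1.0
    return len(r1 & r2) / len(r1 | r2)
```

Swapping two actions, for example comparing the trace `a, b, c` with `a, c, b`, leaves two of the four observed ordering pairs in common, giving a similarity of 0.5.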
46

Mao, Bo. „Visualisation and Generalisation of 3D City Models“. Doctoral thesis, KTH, Geoinformatik och Geodesi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-48174.

Annotation:
3D city models have been widely used in various applications such as urban planning, traffic control and disaster management. Efficient visualisation of 3D city models at different levels of detail (LODs) is one of the pivotal technologies to support these applications. In this thesis, a framework is proposed to visualise 3D city models online. Generalisation methods are then studied and tailored to create 3D city scenes at different scales dynamically. Multiple representation structures are designed to preserve the generalisation results at each level. Finally, the quality of the generalised 3D city models is evaluated by measuring their visual similarity with the original models.

In the proposed online visualisation framework, City Geography Markup Language (CityGML) is used to represent city models; 3D scenes in Extensible 3D (X3D) are then generated from the CityGML data and dynamically updated to the user side for visualisation in Web-based Graphics Library (WebGL) supported browsers with the X3D Document Object Model (X3DOM) technique. The proposed framework can be implemented in mainstream browsers without specific plugins, but it can only support online visualisation of small areas. Visualising large data volumes requires generalisation methods and multiple representation structures.

To reduce the 3D data volume, various generalisation methods are investigated to increase visualisation efficiency. On the city block level, aggregation and typification methods are improved to simplify the 3D city models. On the street level, buildings are selected according to their visual importance and the results are stored in indexes for dynamic visualisation. On the building level, a new LOD, the shell model, is introduced: the exterior shell of the LOD3 model, in which objects such as windows, doors and smaller facilities are projected onto the walls. On the facade level, especially for textured 3D buildings, image processing and analysis methods are employed to compress the texture.

After the generalisation processes on the different levels, multiple representation data structures are required to store the generalised models for dynamic visualisation. On the city block level, the CityTree, a novel structure representing groups of buildings, is tested for building aggregation. According to the results, the generalised 3D city model creation time is reduced by more than 50% by using the CityTree. Meanwhile, a Minimum Spanning Tree (MST) is employed to detect linear building group structures in the city models, which are typified with different strategies. On the building and street levels, a visible building index is created along the road to support building selection. On the facade level, the TextureTree, a structure representing building facade textures, is created based on texture segmentation.

Different generalisation strategies lead to different outcomes, so it is critical to evaluate the quality of the generalised models. Visually salient features of the textured building models, such as size, colour and height, are employed to calculate the visual difference between the original and the generalised models. Visual similarity is the criterion for street-view-level building selection. In this thesis, visual similarity is evaluated both locally and globally. On the local level, the projection area and the colour difference between the original and the generalised models are considered. On the global level, the visual features of the 3D city models are represented by Attributed Relational Graphs (ARG) and their similarity distances are calculated with the Nested Earth Mover's Distance (NEMD) algorithm.

The overall contribution of this thesis is that 3D city models are generalised at different scales (block, street, building and facade) and the results are stored in multiple representation structures for efficient dynamic visualisation, especially online.
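The local visual-difference evaluation (projected area plus colour difference) can be sketched as a weighted score; the weights and normalisation below are illustrative assumptions, not values from the thesis:

```python
def visual_difference(area1, colour1, area2, colour2,
                      w_area=0.5, w_colour=0.5):
    """Local visual difference between an original and a generalised
    model: a weighted sum of the relative projected-area change and the
    normalised RGB colour distance. Both terms lie in [0, 1], so the
    score does too when the weights sum to 1."""
    d_area = abs(area1 - area2) / max(area1, area2)
    d_colour = (sum((a - b) ** 2
                    for a, b in zip(colour1, colour2)) ** 0.5) / (3 ** 0.5 * 255)
    return w_area * d_area + w_colour * d_colour
```

A generalised building with half the projected area but the same colour scores 0.25 under the default equal weights; identical models score 0.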
47

Lidija, Krstanović. „Mera sličnosti između modela Gausovih smeša zasnovana na transformaciji prostora parametara“. Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2017. https://www.cris.uns.ac.rs/record.jsf?recordId=104904&source=NDLTD&language=en.

Annotation:
This thesis studies the possibility that the parameters of the Gaussian components of a particular Gaussian Mixture Model (GMM) lie approximately on a lower-dimensional surface embedded in the cone of positive definite matrices. For that case, we deliver a novel, more efficient similarity measure between GMMs, by LPP-like projection of the components of a particular GMM from the high-dimensional original parameter space to a much lower-dimensional space. Thus, finding the distance between two GMMs in the original space is reduced to finding the distance between sets of lower-dimensional Euclidean vectors, weighted by the corresponding mixture weights. The proposed measure is suitable for applications that utilise high-dimensional feature spaces and/or a large overall number of Gaussian components. We confirm our results on artificial as well as real experimental data.
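Once the components are projected to low-dimensional vectors, the distance between two GMMs reduces to a distance between weighted point sets; the symmetric nearest-neighbour matching below is an illustrative stand-in for the measure actually derived in the thesis:

```python
import math

def weighted_set_distance(comps1, comps2):
    """Distance between two GMMs whose components have already been
    projected to low-dimensional vectors. Each GMM is a list of
    (weight, vector) pairs; every component is matched to its nearest
    counterpart, weighted by its mixture weight, and the two one-way
    distances are averaged to make the measure symmetric."""
    def one_way(a, b):
        return sum(w * min(math.dist(v, u) for _, u in b) for w, v in a)
    return 0.5 * (one_way(comps1, comps2) + one_way(comps2, comps1))
```

Identical mixtures are at distance zero; two single-component mixtures are simply separated by the Euclidean distance of their projected vectors.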
48

Khan, Wasiq. „A Novel Approach for Continuous Speech Tracking and Dynamic Time Warping. Adaptive Framing Based Continuous Speech Similarity Measure and Dynamic Time Warping using Kalman Filter and Dynamic State Model“. Thesis, University of Bradford, 2014. http://hdl.handle.net/10454/14802.

Annotation:
Dynamic speech properties such as time warping, silence removal and background noise interference are the most challenging issues in continuous speech signal matching. Among them, matching of time-warped speech signals is of particular interest and has been a tough challenge for researchers. An adaptive-framing-based continuous speech tracking and similarity measurement approach is introduced in this work, following comprehensive research across diverse areas of speech processing. A dynamic state model is introduced, based on a system of linear motion equations, which models the input (test) speech frame as an object moving unidirectionally along the template speech signal. The most similar corresponding frame position in the template speech is estimated and fused with a feature-based similarity observation and the noise variances using a Kalman filter. The Kalman filter provides the final estimated frame position in the template speech at the current time, which is further used to predict a new frame size for the next step. In addition, a keyword spotting approach is proposed, introducing a wavelet-decomposition-based dynamic noise filter and a combination of beliefs; Dempster's theory of belief combination is deployed for the first time in a keyword spotting task. The performance of both the speech tracking and keyword spotting approaches is evaluated using statistical metrics and gold standards for binary classification. Experimental results show the superiority of the proposed approaches over existing methods.
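For reference, the classic dynamic time warping recurrence that such adaptive-framing approaches build on can be sketched as follows (a textbook baseline, not the thesis's Kalman-filter method):

```python
def dtw_distance(x, y, dist=lambda a, b: abs(a - b)):
    """Classic dynamic time warping: minimal cumulative frame-to-frame
    cost over all monotone alignments of the two sequences."""
    inf = float("inf")
    n, m = len(x), len(y)
    # D[i][j] = cost of the best alignment of x[:i] with y[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(x[i - 1], y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

A time-warped copy of a sequence (a repeated sample, say) aligns at zero cost, which is exactly the property that makes DTW attractive for speech matching.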
49

Jadrníček, Zbyněk. „Shlukování slov podle významu“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234899.

Annotation:
This thesis is focused on the problem of semantic similarity of words in the English language. The reader is first introduced to the theory of word sense clustering, after which selected methods and tools related to the topic are described. In the practical part, a system for determining semantic similarity is designed and implemented using the Word2Vec tool, focusing in particular on biomedical texts from the MEDLINE database. The thesis concludes by discussing the results achieved and suggesting ideas for improving the system.
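The core similarity computation on Word2Vec embeddings is cosine similarity; a minimal sketch with hand-made toy vectors (real vectors would come from a Word2Vec model trained, e.g., on MEDLINE abstracts):

```python
def cosine_similarity(u, v):
    """Cosine similarity between two word vectors: 1.0 for parallel
    vectors, 0.0 for orthogonal ones."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Hypothetical 3-dimensional "embeddings" for illustration only:
vectors = {
    "heart":   (0.9, 0.1, 0.0),
    "cardiac": (0.8, 0.2, 0.1),
    "protein": (0.0, 0.1, 0.9),
}
```

With these toy vectors, "heart" is far more similar to "cardiac" than to "protein", which is the behavior word sense clustering relies on.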
50

Lehmann, Rüdiger. „Transformation model selection by multiple hypotheses testing“. Hochschule für Technik und Wirtschaft Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:520-qucosa-211719.

Annotation:
Transformations between different geodetic reference frames are often performed by first determining the transformation parameters from control points. If we do not know from the outset which of the numerous transformation models is appropriate, we can set up a multiple hypotheses test. The paper extends the common method of testing transformation parameters for significance to the case where constraints on such parameters are also tested. This provides more flexibility when setting up such a test: one can formulate a general model with a maximum number of transformation parameters and specialise it by adding constraints on those parameters that need to be tested. The proper test statistic in a multiple test is shown to be either the extreme normalised or the extreme studentised Lagrange multiplier; these are shown to perform better than the more intuitive test statistics derived from misclosures. It is shown how model selection by multiple hypotheses testing relates to the use of information criteria such as AICc and Mallows' Cp, which are based on an information-theoretic approach. Nevertheless, wherever they are comparable, the results of an exemplary computation almost coincide.
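One of the information criteria mentioned, AICc, has a simple closed form for least-squares model selection; a sketch under the usual Gaussian-error assumption (not code from the paper):

```python
import math

def aicc(rss, n, k):
    """Corrected Akaike information criterion for a least-squares model:
    rss = residual sum of squares, n = number of observations,
    k = number of estimated parameters. The model with the lower AICc
    is preferred; the correction term penalises small-sample overfitting."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)
```

A candidate transformation model wins when its better fit (smaller rss) outweighs the penalty for its extra parameters, which is the trade-off the paper compares against multiple hypothesis testing.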