
Theses on the topic « Fuzzy formal concept analysis »

Consult the 50 best theses for your research on the topic « Fuzzy formal concept analysis ».


1

De Maio, Carmen. « Fuzzy concept analysis for semantic knowledge extraction ». Doctoral thesis, Universita degli studi di Salerno, 2012. http://hdl.handle.net/10556/1307.

Abstract:
The availability of controlled vocabularies, ontologies, and so on is an enabling feature for providing added value in terms of knowledge management. Nevertheless, the design, maintenance and construction of domain ontologies are a human-intensive and time-consuming task. Knowledge Extraction consists of automatic techniques aimed at identifying and defining relevant concepts and relations of the domain of interest by analyzing structured (relational databases, XML) and unstructured (text, documents, images) sources. Specifically, the knowledge extraction methodology defined in this research work is aimed at enabling automatic ontology/taxonomy construction from existing resources in order to obtain useful information. For instance, the experimental results take into account data produced with Web 2.0 tools (e.g., RSS feeds, enterprise wikis, corporate blogs, etc.), text documents, and so on. The final results of the Knowledge Extraction methodology are taxonomies or ontologies represented in a machine-oriented manner by means of semantic web technologies such as RDFS, OWL and SKOS. The resulting knowledge models have been applied to different goals. On the one hand, the methodology has been applied in order to extract ontologies and taxonomies and to semantically annotate text. On the other hand, the resulting ontologies and taxonomies are exploited in order to enhance information retrieval performance, to categorize incoming data and to provide an easy way to find interesting resources (such as faceted browsing). Specifically, the following objectives have been addressed in this research work:
- Ontology/Taxonomy Extraction: the automatic extraction of hierarchical conceptualizations (i.e., taxonomies) and relations expressed by means of typical description logic constructs (i.e., ontologies).
- Information Retrieval: the definition of a technique to perform concept-based retrieval of information according to user queries.
- Faceted Browsing: automatically providing faceted browsing capabilities according to the categorization of the extracted contents.
- Semantic Annotation: the definition of a text analysis process aimed at automatically annotating the subjects and predicates identified.
The experimental results have been obtained in several application domains: e-learning, enterprise human resource management, and clinical decision support systems. Future challenges go in the following direction: investigating approaches to support ontology alignment and merging applied to knowledge management.
2

Konecny, Jan. « Isotone fuzzy Galois connections and their applications in formal concept analysis ». Diss., Online access via UMI:, 2009.

Abstract:
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009.
Includes bibliographical references.
3

Glodeanu, Cynthia Vera. « Conceptual Factors and Fuzzy Data ». Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-103775.

Abstract:
With the growing number of large data sets, the necessity of complexity reduction applies today more than ever before. Moreover, some data may also be vague or uncertain. Thus, whenever we have an instrument for data analysis, the questions of how to apply complexity reduction methods and how to treat fuzzy data arise rather naturally. In this thesis, we discuss these issues for the very successful data analysis tool Formal Concept Analysis. In fact, we propose different methods for complexity reduction based on qualitative analyses, and we elaborate on various methods for handling fuzzy data. These two topics split the thesis into two parts. Data reduction is mainly dealt with in the first part of the thesis, whereas we focus on fuzzy data in the second part. Although each chapter may be read almost on its own, each one builds on and uses results from its predecessors. The main crosslink between the chapters is given by the reduction methods and fuzzy data. In particular, we will also discuss complexity reduction methods for fuzzy data, combining the two issues that motivate this thesis.
4

Ayouni, Sarra. « Etude et Extraction de règles graduelles floues : définition d'algorithmes efficaces ». Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20015/document.

Abstract:
L'Extraction de connaissances dans les bases de données est un processus qui vise à extraire un ensemble réduit de connaissances à fortes valeurs ajoutées à partir d'un grand volume de données. La fouille de données, l'une des étapes de ce processus, regroupe un certain nombre de taches, telles que : le clustering, la classification, l'extraction de règles d'associations, etc.La problématique d'extraction de règles d'association nécessite l'étape d'extraction de motifs fréquents. Nous distinguons plusieurs catégories de motifs : les motifs classiques, les motifs flous, les motifs graduels, les motifs séquentiels. Ces motifs diffèrent selon le type de données à partir desquelles l'extraction est faite et selon le type de corrélation qu'ils présentent.Les travaux de cette thèse s'inscrivent dans le contexte d'extraction de motifs graduels, flous et clos. En effet, nous définissons de nouveaux systèmes de clôture de la connexion de Galois relatifs, respectivement, aux motifs flous et graduels. Ainsi, nous proposons des algorithmes d'extraction d'un ensemble réduit pour les motifs graduels et les motifs flous.Nous proposons également deux approches d'extraction de motifs graduels flous, ceci en passant par la génération automatique des fonctions d'appartenance des attributs.En se basant sur les motifs flous clos et graduels clos, nous définissons des bases génériques de toutes les règles d'association graduelles et floues. Nous proposons également un système d'inférence complet et valide de toutes les règles à partir de ces bases
Knowledge discovery in databases is a process aiming at extracting a reduced set of valuable knowledge from a huge amount of data. Data mining, one step of this process, includes a number of tasks, such as clustering, classification, association rule mining, etc. The problem of mining association rules requires a step of frequent pattern extraction. We distinguish several categories of frequent patterns: classical patterns, fuzzy patterns, gradual patterns, sequential patterns, etc. All these patterns differ in the type of data from which the extraction is done and in the type of relationship that they represent. In this thesis, we contribute in particular with the proposal of a fuzzy and gradual pattern extraction method. Indeed, we define new closure systems of the Galois connection for, respectively, fuzzy and gradual patterns. Thus, we propose algorithms for extracting a reduced set of fuzzy and gradual patterns. We also propose two approaches for automatically defining fuzzy modalities that allow obtaining relevant fuzzy gradual patterns. Based on closed fuzzy patterns and closed gradual patterns, we define generic bases of fuzzy and gradual association rules. We then propose a complete and valid inference system to derive all fuzzy and gradual association rules from these bases.
5

Novi, Daniele. « Knowledge management and Discovery for advanced Enterprise Knowledge Engineering ». Doctoral thesis, Universita degli studi di Salerno, 2014. http://hdl.handle.net/10556/1466.

Abstract:
The research work mainly addresses issues related to the adoption of models, methodologies and knowledge management tools that implement a pervasive use of the latest Semantic Web technologies for the improvement of business processes and Enterprise 2.0 applications. The first phase of the research focused on the study and analysis of the state of the art and the problems of Knowledge Discovery in Databases, paying particular attention to data mining systems. The most innovative approaches investigated for "Enterprise Knowledge Engineering" are listed below. In detail, the problems analyzed are those relating to architectural aspects and the integration of legacy systems (or not). The intended research contribution consists in the identification and definition of a uniform and general model, a "Knowledge Enterprise Model", original with respect to the canonical approaches of enterprise architecture (for example with respect to the Object Management Group - OMG - standard). The introduction of the tools and principles of Enterprise 2.0 into the company has been investigated and, simultaneously, appropriate Semantic Enterprise based solutions have been defined for the problem of information fragmentation and for improving the processes of knowledge discovery and functional knowledge sharing. All studies and analyses are finalized and validated by defining a methodology and related supporting software tools for the improvement of processes related to the life cycles of best practices across the enterprise. Collaborative tools, knowledge modeling, algorithms, and knowledge discovery and extraction are applied synergistically to support these processes. [edited by author]
6

Dao, Ngoc Bich. « Réduction de dimension de sac de mots visuels grâce à l’analyse formelle de concepts ». Thesis, La Rochelle, 2017. http://www.theses.fr/2017LAROS010/document.

Abstract:
In several scientific fields such as statistics, computer vision and machine learning, reducing redundant and/or irrelevant information in the data description (dimension reduction) is an important step. This process comprises two different categories: feature extraction and feature selection, of which feature selection in unsupervised learning is hitherto an open question. In this manuscript, we discuss feature selection on image datasets using Formal Concept Analysis (FCA), with a focus on the concept lattice structure and lattice theory. The images in a dataset are described as sets of visual words by the bag of visual words model. Two algorithms are proposed in this thesis to select relevant features, and they can be used in both unsupervised and supervised learning. The first algorithm, RedAttsSansPerte, is based on the lattice structure and lattice theory, which ensure its ability to remove redundant features using the precedence graph. The formal definition of the precedence graph is given in this thesis. We also demonstrate its properties and the relationship between this graph and the AC-poset. Results from experiments indicate that the RedAttsSansPerte algorithm reduces the size of the feature set while maintaining performance as evaluated by classification. Secondly, the RedAttsFloue algorithm, an extension of RedAttsSansPerte, is proposed. This extension uses the fuzzy precedence graph. The formal definition and the properties of this graph are demonstrated in this manuscript. The RedAttsFloue algorithm removes redundant and irrelevant features while retaining relevant information according to the flexibility threshold of the fuzzy precedence graph. The quality of the retained information is evaluated by classification. The RedAttsFloue algorithm is suggested to be more robust than RedAttsSansPerte in terms of reduction.
7

Diner, Casri. « Visualizing Data With Formal Concept Analysis ». Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1046325/index.pdf.

Abstract:
In this thesis, we wanted to stress the tendency toward the geometry of data. This should be applicable in almost every branch of science where data are of great importance, and also in every kind of industry, economy, medicine etc. Since the hard-disk capacities used for storing data and the amount of data reachable through the internet are increasing day by day, there is a need to turn this information into knowledge. This is one of the reasons for studying formal concept analysis. We wanted to point out how this application is related to algebra and logic. The beginning of the first chapter emphasizes the relation between closure systems, Galois connections, and lattice theory as a mathematical structure, and concept analysis. It then describes the basic step in the formalization: an elementary form of the representation of data is defined mathematically. The second chapter explains the logic of formal concept analysis. It also shows how implications between attributes, which can be regarded as special formulas on a set, can be expressed by fewer implications, a so-called generating set of implications. These mathematical tools are then used in the last chapter to describe complex concept lattices by means of decomposition methods in examples.
8

Krajča, Petr. « Advanced algorithms for formal concept analysis ». Diss., Online access via UMI:, 2009.

Abstract:
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009.
Includes bibliographical references.
9

Petersen, Wiebke, and Petja Heinrich. « Qualitative Citation Analysis Based on Formal Concept Analysis ». Universitätsbibliothek Chemnitz, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200801464.

Abstract:
One of the tasks of bibliometrics is citation analysis (Kessler 1963), that is, the analysis of co-citations (two texts are co-cited if there is a text in which both are cited) and of bibliographic coupling (two texts are bibliographically coupled if both share a common citation). The talk shows that Formal Concept Analysis (FCA) provides suitable means for a qualitative citation analysis. A particular property of FCA is that it allows heterogeneous (qualitative and scalar) attributes to be combined. Using suitable scales also addresses the problem that the large number of texts to be analyzed in qualitative approaches usually leads to cluttered citation graphs whose content cannot be grasped. The bibliographic-coupling relation is closely related to the neighborhood contexts developed by Priss, which are used for the analysis of lexicons. In several example analyses, the most important notions of citation analysis are modeled in formal contexts and concept lattices. It turns out that the hierarchical concept lattices of FCA are superior to ordinary citation graphs in many respects, since their hierarchical lattice structure captures certain regularities explicitly. It is also shown how frequent sources of error, such as courtesy citations, habitual citations, etc., can be countered by combining suitable attributes (doctoral advisor, institute, department, citation frequency, keywords) and scales.
10

Sertkaya, Baris. « Formal Concept Analysis Methods for Description Logics ». Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1215598189927-85390.

Abstract:
This work presents mainly two contributions to Description Logics (DLs) research by means of Formal Concept Analysis (FCA) methods: supporting bottom-up construction of DL knowledge bases, and completing DL knowledge bases. Its contribution to FCA research is on the computational complexity of computing generators of closed sets.
11

Lungley, Deirdre. « Adaptive information retrieval employing formal concept analysis ». Thesis, University of Essex, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.573750.

Abstract:
In this thesis we propose the use of an adaptive interactive interface to allow user exploration of the context of an intranet query. The underlying domain model is that of a Formal Concept Analysis (FCA) lattice. Understanding the difficulty of achieving optimum document descriptors, essential for a browsable lattice, we propose harnessing implicit user feedback in learning document/term associations. We utilise a task-based methodology to evaluate subjects' perception of the usefulness of our interactive interface and that of one adapted through usage data. Results validated the usefulness of an interactive lattice and our adaptation methodology. As an intermediate method of evaluating the usefulness of a lattice structure, we performed a series of technical evaluations in different domains to evaluate the query suggestions provided by the lattice structure. The results of these evaluations were very positive. To this end also, we utilised TREC Session Track datasets - a search challenge matching very closely the information sources employed in our methodology. The results achieved validate the usefulness of the FCA approach in a close-to-real setting in a Web environment. In summation, our research validates the usefulness of log adaptation as a means of overcoming the challenge of creating a query-based FCA document lattice.
12

Sertkaya, Baris. « Formal Concept Analysis Methods for Description Logics ». Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A23613.

Abstract:
This work presents mainly two contributions to Description Logics (DLs) research by means of Formal Concept Analysis (FCA) methods: supporting bottom-up construction of DL knowledge bases, and completing DL knowledge bases. Its contribution to FCA research is on the computational complexity of computing generators of closed sets.
13

Sertkaya, Barış. « Formal concept analysis methods for description logics ». [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1215598189927-85390.

14

Kim, Bong-Seop. « Advanced web search based on formal concept analysis ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ62230.pdf.

15

Tilley, Thomas. « Formal concept analysis : applications to requirements engineering and design ». [St. Lucia, Qld.], 2003. http://adt.library.uq.edu.au/public/adt-QU20050223.204947/index.html.

16

González González, Larry Javier. « Modelling dynamics of RDF graphs with formal concept analysis ». Thesis, Universidad de Chile, 2018. http://repositorio.uchile.cl/handle/2250/168144.

Abstract:
The Semantic Web is a web of data organized in such a way that it can be manipulated directly both by humans and by computers. RDF is the framework recommended by the W3C for representing information on the Semantic Web. RDF uses a graph-based data model that does not require any fixed schema, which makes RDF graphs easy to extend and integrate, but also difficult to query, understand, explore, summarize, etc. In this thesis, inspired by formal concept analysis (a subfield of applied mathematics based on the formalization of concepts and conceptual hierarchies, called lattices), we propose a data-driven schema for large, heterogeneous RDF graphs. The main idea is that if we can define a formal context from an RDF graph, then we can extract its formal concepts and compute a lattice from them, which yields our proposed hierarchical schema for RDF graphs. We then propose an algebra over such lattices that allows us to (1) compute deltas between two lattices (for example, to summarize the changes from one version of a graph to another), and (2) add a delta to a lattice (for example, to project future changes). While this structure (and its associated algebra) may have several applications, we focus on the use case of modelling and predicting the dynamic behaviour of RDF graphs. We evaluate our methods by analyzing how Wikidata changed over 11 weeks. We first extract the sets of properties associated with individual entities in a scalable manner using the MapReduce framework. These property sets (also known as characteristic sets) are annotated with their associated entities and then with their cardinality. Second, we propose an algorithm to construct the lattice over the characteristic sets based on the subset relation. We evaluate the efficiency and scalability of both procedures. Finally, we use the algebraic methods to predict how Wikidata's hierarchical schema would evolve. We contrast our results with a linear regression model as a baseline. Our proposal outperforms the linear model by a large margin, achieving a root mean square error 12 times smaller than the baseline. We conclude that, based on formal concept analysis, we can define and generate a hierarchical schema from an RDF graph, and that we can use these schemas to predict, at a high level, how RDF graphs will evolve over time.
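The two-step construction described in this abstract (characteristic sets per entity, then a hierarchy induced by subset inclusion) is compact enough to illustrate. The following Python sketch is an invented toy illustration, not the thesis's MapReduce implementation; the triples and all names are assumptions made for the example:

    from collections import defaultdict
    from itertools import combinations

    # Toy RDF-like data: (subject, property, object) triples (invented).
    triples = [
        ("q1", "name", "Alice"), ("q1", "birthPlace", "X"),
        ("q2", "name", "Bob"),
        ("q3", "name", "Carol"), ("q3", "birthPlace", "Y"), ("q3", "spouse", "q2"),
    ]

    # Step 1: the characteristic set of an entity is the set of properties it uses.
    char_set = defaultdict(set)
    for s, p, o in triples:
        char_set[s].add(p)

    # Annotate each distinct characteristic set with its cardinality (entity count).
    counts = defaultdict(int)
    for props in char_set.values():
        counts[frozenset(props)] += 1
    for cs, n in counts.items():
        print(sorted(cs), "entities:", n)

    # Step 2: order the characteristic sets by subset inclusion; these edges
    # (the full order here, not yet reduced to a Hasse diagram) induce the
    # hierarchical schema.
    sets = list(counts)
    for a, b in combinations(sets, 2):
        if a < b:
            print(sorted(a), "is subsumed by", sorted(b))
        elif b < a:
            print(sorted(b), "is subsumed by", sorted(a))

Wikidata-scale data would of course require the distributed route the abstract describes; this sketch only fixes the idea.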
17

Almuhisen, Feda. « Leveraging formal concept analysis and pattern mining for moving object trajectory analysis ». Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0738/document.

Abstract:
This dissertation presents a trajectory analysis framework, which includes both a preprocessing phase and a trajectory mining process. Furthermore, the framework offers visual functions that reflect the evolution behavior of trajectory patterns. The originality of the mining process is to leverage frequent and emerging pattern mining and formal concept analysis for moving object trajectories. These methods detect and characterize pattern evolution behaviors bound to time in trajectory data. Three contributions are proposed: (1) a method for analyzing trajectories based on frequent formal concepts is used to detect different trajectory pattern evolutions over time. These behaviors are "latent", "emerging", "decreasing", "lost" and "jumping". They characterize the dynamics of mobility related to urban spaces and time. The detected behaviors are automatically visualized on generated maps with different spatio-temporal levels to refine the analysis of mobility in a given area of the city; (2) a second trajectory analysis framework based on sequential concept lattice extraction is also proposed to exploit the movement direction in the evolution detection process; and (3) a prediction method based on Markov chains is presented to predict the evolution behavior in the future period for a region. These three methods are evaluated on two real-world datasets. The experimental results obtained from these data show the relevance of the proposal and the utility of the generated maps.
18

Kriegel, Francesco. « Visualization of Conceptual Data with Methods of Formal Concept Analysis ». Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-125309.

Abstract:
Draft and proof of an algorithm computing incremental changes within a labeled, laid-out concept lattice upon insertion or removal of an attribute column in the underlying formal context. Furthermore, some implementation details and mathematical background knowledge are presented.
19

Kiraly, Bret D. « An Experimental Application of Formal Concept Analysis to Research Communities ». Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1228497076.

20

Kanade, Parag M. « Fuzzy ants as a clustering concept ». [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000397.

21

Hanika, Tom. « Discovering Knowledge in Bipartite Graphs with Formal Concept Analysis ». Kassel : Universitätsbibliothek Kassel, 2019. http://d-nb.info/1180660811/34.

22

Watmough, Martin John. « Discovering the hidden knowledge in transaction data through formal concept analysis ». Thesis, Sheffield Hallam University, 2013. http://shura.shu.ac.uk/7706/.

Abstract:
The aim of this research is to discover whether hitherto hidden knowledge exists in transaction data and how it can be exposed through the application of Formal Concept Analysis (FCA). Enterprise systems capture data in a transaction structure so that they can provide information that seeks to align with the knowledge that decision-makers use to achieve business goals. With the emergence of service-oriented architecture and developments in business intelligence, data in its own right is becoming significant, suggesting that data in itself may be capable of capturing human behaviour and offering novel insights from a 'bottom-up' perspective. The constraints of hard-coded top-down analysis can thus be addressed by agile systems that use components based on the discovery of the hidden knowledge in the transaction data. There is a need to connect the user's human-oriented approach to problem solving with the formal structures that computer applications need to bring their productivity to bear. FCA offers a natural approach that meets these requirements, as it provides a mathematical theory based on concepts and logical relationships that can be represented and understood by humans. By taking an action research and case study approach, an experimental environment was designed along two avenues. The first was a study in an educational setting that would combine the generation of the data with the behaviour of the users (students) at the time, thereby capturing their actions as reflected in the transaction data. To create a representative environment, the students used an industry-standard SAP enterprise system with the business simulator ERPsim. This applied study provided an evaluation of FCA and contemporary tools while maintaining a relevant pedagogic outcome for the students. The second avenue was a discovery experiment based on user activity logs from an actual organisation's production system, applying and developing the methods applied previously. Analysis of user logs from this system using FCA revealed the hitherto hidden knowledge in its transaction data by discovering patterns and relationships made visible through the multi-dimensional representation of data. The evidence gathered by this research supports FCA for exposing and discovering hidden knowledge from transactional data; it can contribute towards systems and humans working together more effectively.
23

Arévalo, Gabriela Beatriz. « High-level views in object-oriented systems using formal concept analysis ». [S.l.] : [s.n.], 2004. http://www.zb.unibe.ch/download/eldiss/04arevalo_g.pdf.

24

Smith, David T. « A Formal Concept Analysis Approach to Association Rule Mining : The QuICL Algorithms ». NSUWorks, 2009. http://nsuworks.nova.edu/gscis_etd/309.

Abstract:
Association rule mining (ARM) is the task of identifying meaningful implication rules exhibited in a data set. Most research has focused on extracting frequent item (FI) sets and has thus fallen short of the overall ARM objective. The FI miners fail to identify the upper covers that are needed to generate a set of association rules whose size can be exploited by an end user. An alternative to FI mining can be found in formal concept analysis (FCA), a branch of applied mathematics. FCA derives a concept lattice whose concepts identify closed FI sets and whose connections identify the upper covers. However, most FCA algorithms construct a complete lattice and therefore include item sets that are not frequent. An iceberg lattice, on the other hand, is a concept lattice whose concepts contain only FI sets. Only three algorithms to construct an iceberg lattice were found in the literature. Given that an iceberg concept lattice provides an analysis tool to succinctly identify association rules, this study investigated additional algorithms to construct an iceberg concept lattice. This report presents the development and analysis of the Quick Iceberg Concept Lattice (QuICL) algorithms. These algorithms provide incremental construction of an iceberg lattice. QuICL uses recursion instead of iteration to navigate the lattice and establish connections, thereby eliminating costly processing incurred by past algorithms. The QuICL algorithms were evaluated against leading FI miners and FCA construction algorithms using benchmarks cited in the literature. Results demonstrate that QuICL provides performance on the order of FI miners yet additionally derives the upper covers. QuICL, when combined with known algorithms to extract a basis of association rules from a lattice, offers a "best known" ARM solution. Beyond this, the QuICL algorithms have proved to be very efficient, providing order-of-magnitude gains over other incremental lattice construction algorithms. For example, on the Mushroom data set, QuICL completes in less than 3 seconds; past algorithms exceed 200 seconds. On T10I4D100k, QuICL completes in less than 120 seconds; past algorithms approach 10,000 seconds. QuICL is proved to be the "best known" all-around incremental lattice construction algorithm. Runtime complexity is shown to be O(l d i), where l is the cardinality of the lattice, d is the average degree of the lattice, and i is a mean function on the frequent item extents.
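For readers unfamiliar with the iceberg lattice mentioned above, the object it encodes is the set of frequent closed item sets together with their order. The following Python sketch is a naive brute-force illustration on invented toy transactions; it is emphatically not the QuICL algorithm, which builds the same structure incrementally and far more efficiently:

    from itertools import chain, combinations

    # Toy transaction database and support threshold (both assumed).
    transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
    minsup = 2  # absolute minimum support

    def closure(itemset):
        # Galois closure: intersect all transactions containing the item set.
        covering = [t for t in transactions if itemset <= t]
        if not covering:
            return None, 0
        return frozenset(set.intersection(*covering)), len(covering)

    items = sorted(set().union(*transactions))
    candidates = chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

    iceberg = {}
    for c in candidates:
        closed, support = closure(set(c))
        if closed is not None and support >= minsup:
            iceberg[closed] = support  # duplicates collapse onto one closed set

    # Intents of the iceberg lattice, smallest first.
    for intent in sorted(iceberg, key=lambda s: (len(s), sorted(s))):
        print(sorted(intent), "support =", iceberg[intent])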
25

Rudolph, Sebastian. « Relational Exploration : Combining Description Logics and Formal Concept Analysis for Knowledge Specification ». Doctoral thesis, Technische Universität Dresden, 2006. https://tud.qucosa.de/id/qucosa%3A25002.

Abstract:
Facing the growing amount of information in today's society, the task of specifying human knowledge in a way that can be unambiguously processed by computers becomes more and more important. Two acknowledged fields in this evolving scientific area of Knowledge Representation are Description Logics (DL) and Formal Concept Analysis (FCA). While DL concentrates on characterizing domains via logical statements and inferring knowledge from these characterizations, FCA builds conceptual hierarchies on the basis of present data. This work introduces Relational Exploration, a method for acquiring complete relational knowledge about a domain of interest by successively consulting a domain expert without ever asking redundant questions. This is achieved by combining DL and FCA: DL formalisms are used for defining FCA attributes while FCA exploration techniques are deployed to obtain or refine DL knowledge specifications.
26

Ducrou, Jon. « Design for conceptual knowledge processing : case studies in applied formal concept analysis ». Access electronically, 2007. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20080919.093612/index.html.

27

Horner, Vincent Zion. « Developing a consumer health informatics decision support system using formal concept analysis ». Diss., Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-05052008-112403/.

28

Everts, TJ. « Using Formal Concept Analysis with a Push-based Web Document Management System ». Honours thesis, University of Tasmania, 2004. https://eprints.utas.edu.au/116/1/EvertsT_Hons_Thesis2004.pdf.

Abstract:
The significant increase in amount of information readily available on the World Wide Web (WWW) makes it difficult for users to locate the information they desire in a timely manner. Modern information gathering and retrieval methods focus on simplifying this task by enabling the user to retrieve only a small subset of information that is more relevant and manageable. However, often the majority of users will not find an immediate use for the information. Therefore, it is necessary to provide a method to store it effectively so it can be utilised as a future knowledge resource. A commonly adopted approach is to classify the retrieved information based on its content. A technique that has been found to be suitable for this purpose is Multiple Classification Ripple Down Rules (MCRDR). MCRDR constructs a classification knowledge base over time using an incremental learning process. This incremental method of acquiring classification knowledge suits the nature of Web information because it is constantly evolving and being updated. However, despite this advantage, the classification knowledge of MCRDR is not often utilised for browsing the classified information. This is because MCRDR does not directly organise the knowledge in a way that is suitable for browsing. As a result, often an alternate structure is utilised for browsing the information which is usually based on a user's abstract understanding of the information domain. This study investigated the feasibility of utilising the classification knowledge acquired through the use of MCRDR as a resource for browsing information retrieved from the WWW. A system was implemented that used the concept lattice based browsing scheme of Formal Concept Analysis (FCA) to support the browsing of documents based on MCRDR classification knowledge. The feasibility of utilising classification knowledge as a resource for browsing documents was evaluated statistically. This was achieved by comparing the concept lattice-based browsing approach to a standard one that utilises abstract knowledge of a domain as a resource for browsing the same documents.
29

Abid, Ahmed. « Improvement of web service composition using semantic similarities and formal concept analysis ». Thesis, Tours, 2017. http://www.theses.fr/2017TOUR4007.

Abstract:
Service Oriented Architectures (SOA) have progressively established themselves as an essential tool in inter-company exchanges thanks to their strategic and technological potential. Their implementation is realised through Web services. One of the main assets of services is their composability. With the emergence of the Semantic Web, the discovery and composition of semantic Web services become a real challenge. The discovery process is generally based on traditional registries with syntactic descriptions where services are statically grouped. This poses a problem related to the heterogeneity of syntactic descriptions and the rigidity of the classification. The composition process depends on the quality of the Web service matching performed in the discovery phase. We propose in this dissertation an architecture of a framework that covers all the phases of the composition process. Then, we propose a semantic similarity measure for Web services. The Web service discovery process relies on the proposed similarity measure, the Formal Concept Analysis (FCA) formalism, and the organisation of services into a lattice. The composition is then based on the establishment of coherent and relevant composite services for the expected functionality. The main strengths of this architecture are the adaptation and integration of semantic technologies, the calculation of semantic similarity, and the use of this semantic similarity and the FCA formalism in order to optimise the composition process.
30

Berthold, Stefan. « Linkability of communication contents : Keeping track of disclosed data using Formal Concept Analysis ». Thesis, Karlstad University, Faculty of Economic Sciences, Communication and IT, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-369.

Abstract:
A person who is communicated about (the data subject) has to keep track of all of his revealed data in order to protect his right of informational self-determination. This is important when data is going to be processed in an automatic manner and, in particular, in the case of automatic inquiries. A data subject should, therefore, be enabled to recognize useful decisions with respect to data disclosure, only by using data which is available to him.

For the scope of this thesis, we assume that a data subject is able to protect his communication contents and the corresponding communication context against a third party by using end-to-end encryption and Mix cascades. The objective is to develop a model for analyzing the linkability of communication contents by using Formal Concept Analysis. In contrast to previous work, only the knowledge of a data subject is used for this analysis instead of a global view on the entire communication contents and context.

As a first step, the relation between disclosed data is explored. It is shown how data can be grouped by types and how data implications can be represented. As a second step, the behavior, i.e. actions and reactions, of the data subject and his communication partners is included in this analysis in order to find critical data sets which can be used to identify the data subject.

Typical examples are used to verify this analysis, followed by a conclusion about the pros and cons of this method for anonymity and linkability measurement. Results can be used, later on, in order to develop a similarity measure for human-computer interfaces.
31

Distel, Felix. « Learning Description Logic Knowledge Bases from Data Using Methods from Formal Concept Analysis ». Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-70199.

Abstract:
Description Logics (DLs) are a class of knowledge representation formalisms that can represent terminological and assertional knowledge using a well-defined semantics. Often, knowledge engineers are experts in their own fields, but not in logics, and require assistance in the process of ontology design. This thesis presents three methods that can extract terminological knowledge from existing data and thereby assist in the design process. They are based on similar formalisms from Formal Concept Analysis (FCA), in particular the Next-Closure Algorithm and Attribute-Exploration. The first of the three methods computes terminological knowledge from the data, without any expert interaction. The two other methods use expert interaction where a human expert can confirm each terminological axiom or refute it by providing a counterexample. These two methods differ only in the way counterexamples are provided.
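The Next-Closure algorithm referenced in this abstract is a standard FCA procedure and small enough to sketch. The following Python sketch enumerates all concept intents of a tiny invented formal context in lectic order; the context, the names and the attribute ordering are assumptions for illustration, not code from the thesis:

    def intent_closure(context, attributes, attr_subset):
        # Derivation twice: objects sharing the attributes, then their common attributes.
        extent = [g for g in context if attr_subset <= context[g]]
        if not extent:
            return frozenset(attributes)  # empty extent closes to all attributes
        common = set(attributes)
        for g in extent:
            common &= context[g]
        return frozenset(common)

    def next_closure(context, attributes, current):
        # Return the lectically next closed attribute set after `current`, or None.
        attrs = sorted(attributes)  # fixed linear order on attributes
        for m in reversed(attrs):
            if m in current:
                current = current - {m}
            else:
                candidate = intent_closure(context, attributes, current | {m})
                if all(a in current for a in candidate if a < m):
                    return candidate
        return None

    # Toy context: objects mapped to their attribute sets.
    ctx = {"g1": {"a", "b"}, "g2": {"b", "c"}, "g3": {"a", "c"}}
    atts = {"a", "b", "c"}
    intent = intent_closure(ctx, atts, set())
    while intent is not None:
        print(sorted(intent))  # prints every concept intent exactly once
        intent = next_closure(ctx, atts, intent)

The lectic order guarantees each intent is produced exactly once without storing previously generated intents, which is what keeps the approach memory-light.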
32

Ma, Junheng. « Contributions to Numerical Formal Concept Analysis, Bayesian Predictive Inference and Sample Size Determination ». Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1285341426.

33

Sinha, Aditya. « Formal Concept Analysis for Search and Traversal in Multiple Databases with Effective Revision ». University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1245088303.

34

Shen, Gongqin. « Formal Concepts and Applications ». Case Western Reserve University School of Graduate Studies / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=case1121454398.

35

Meschke, Christian. « Concept Approximations ». Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-86642.

Abstract:
In this thesis, we present a lattice-theoretical approach to the field of approximations. Given a pair consisting of a kernel system and a closure system on an underlying lattice, one receives a lattice of approximations. We describe the theory of these lattices of approximations. Furthermore, we put a special focus on the case of concept lattices. As it turns out, approximations of formal concepts can be interpreted as traces, which are preconcepts in a subcontext.
36

Ozdemir, Ali Yucel. « An Inquiry Into The Concept Of ». Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12615034/index.pdf.

Abstract:
This thesis makes a survey of the conception of “surface” in the works of Peter Eisenman. In doing so, the concept of “surface” is discussed under three titles: “surface” as an element of architectural vocabulary (as a formal element), as an analytical tool (as a grammar), and as a diagrammatic tool. Correspondingly, the thesis is intended to examine how “surface” is conceptualized and handled through critical readings of Eisenman's writings, and projects are referred to in order to support and visualize the discussions. In this context, Eisenman's dissertation, The Formal Basis of Modern Architecture (1963), reveals the definition of the architectural surface in relation to the architectural language proposed by him. Through the formal analysis of Giuseppe Terragni's building, Casa Giuliani Frigerio, he utilizes surface as an analytical tool. Considering the design processes of his projects, as discussed in the book Diagram Diaries (1999), surface becomes a dominant tool for generating architectural form. As a result, in this thesis, surface is evaluated in various aspects (as a formal, analytical and diagrammatic tool) that are essential for the understanding of architectural form. In the case of Eisenman, its significance dominates the way of developing his architecture.
37

Rudolph, Sebastian. « Relational exploration : combining description logics and formal concept analysis for knowledge specification ». Karlsruhe : Univ.-Verl. Karlsruhe, 2007. http://d-nb.info/983756430/34.

38

Joseph, Daniel. « Linking information resources with automatic semantic extraction ». Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/linking-information-resources-with-automatic-semantic-extraction(ada2db36-4366-441a-a0a9-d76324a77e2c).html.

Abstract:
Knowledge is a critical dimension in the problem solving processes of human intelligence. Consequently, enabling intelligent systems to provide advanced services requires that their artificial intelligence routines have access to knowledge of relevant domains. Ontologies are often utilised as the formal conceptualisation of domains, in that they identify and model the concepts and relationships of the targeted domain. However, complexities inherent in ontology development and maintenance have limited their availability. Separate from the conceptualisation component, domain knowledge also encompasses the concept membership of object instances within the domain. The need to capture both the domain model and the current state of instances within the domain has motivated the import of Formal Concept Analysis into intelligent systems research. Formal Concept Analysis, which provides a simplified model of a domain, has the advantage that not only does it define concepts in terms of their attribute descriptions, but object instances are simultaneously ascribed to their appropriate concepts. Nonetheless, a significant drawback of Formal Concept Analysis is that when applied to a large dataset, the lattice with which it models a domain is often composed of a copious number of concepts, many of which are arguably unnecessary or invalid. In this research a novel measure is introduced which assigns a relevance value to concepts in the lattice. This measure is termed the Collapse Index and is based on the minimum number of object instances that need be removed from a domain in order for a concept to be expunged from the lattice. The mathematics that underpins its origin and behaviour is detailed in the thesis, showing that, if the relevance of a concept is defined by the Collapse Index, a concept will eventually lose relevance if one of its immediate subconcepts increasingly acquires object instance support, and a concept has its highest relevance when its immediate subconcepts have equal or near-equal object instance support. In addition, experimental evaluation is provided where the Collapse Index demonstrated comparable or better performance than the current prominent alternatives in: being consistent across samples; the ability to recall concepts in noisy lattices; and efficiency of calculation. It is also demonstrated that the Collapse Index affords concepts with low object instance support the opportunity to have a higher relevance than those of high support. The second contribution to knowledge is that of an approach to semantic extraction from a dataset where the Collapse Index is included as a method of selecting concepts for inclusion in a final concept hierarchy. The utility of the approach is demonstrated by reviewing its inclusion in the implementation of a recommender system. This recommender system serves as the final contribution, featuring a unique design where lattices represent user profiles and concepts in these profiles are pruned using the Collapse Index. Results showed that the pruning of profile lattices enabled by the Collapse Index improved the success levels of movie recommendations when appropriate thresholds are set.
39

De Alburquerque Melo, Cassio. « Real-time Distributed Computation of Formal Concepts and Analytics ». PhD thesis, Ecole Centrale Paris, 2013. http://tel.archives-ouvertes.fr/tel-00966184.

Abstract:
The advances in technology for the creation, storage and dissemination of data have dramatically increased the need for tools that effectively provide users with means of identifying and understanding relevant information. Despite the great computing opportunities that distributed frameworks such as Hadoop provide, the need for means of identifying and understanding relevant information has only increased. Formal Concept Analysis (FCA) may play an important role in this context, by employing more intelligent means in the analysis process. FCA provides an intuitive understanding of generalization and specialization relationships among objects and their attributes in a structure known as a concept lattice. The present thesis addresses the problem of mining and visualising concepts over a data stream. The proposed approach is comprised of several distributed components that carry out the computation of concepts from a basic transaction, filter and transform data, store data, and provide analytic features to visually explore it. The novelty of our work consists of: (i) a distributed processing and analysis architecture for mining concepts in real-time; (ii) the combination of FCA with visual analytics visualisation and exploration techniques, including association rule analytics; (iii) new algorithms for condensing and filtering conceptual data; and (iv) a system that implements all the proposed techniques, called Cubix, and its use cases in biology, complex system design and space applications.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Cellier, Peggy, Felix Distel et Bernhard Ganter. « Contributions to the 11th International Conference on Formal Concept Analysis : Dresden, Germany, May 21–24, 2013 ». Technische Universität Dresden, 2013. https://tud.qucosa.de/id/qucosa%3A26885.

Texte intégral
Résumé :
Formal concept analysis (FCA) is a mathematical formalism based on order and lattice theory for data analysis. It has found applications in a broad range of neighboring fields including Semantic Web, data mining, knowledge representation, data visualization and software engineering. ICFCA is a series of annual international conferences that started in 2003 in Darmstadt and has been held on several continents: Europe, Australia, America and Africa. ICFCA has evolved to be the main forum for researchers working on theoretical or applied aspects of formal concept analysis worldwide. In 2013 the conference returned to Dresden, where it was previously held in 2006. This year the selection of contributions was especially competitive. This volume is one of two volumes containing the papers presented at ICFCA 2013; the other volume is published by Springer Verlag as LNAI 7880 in its LNCS series. In addition to the regular contributions, we have included an extended abstract: Jean-Paul Doignon reviews recent results connecting formal concept analysis and knowledge space theory in his contribution “Identifiability in Knowledge Space Theory: a Survey of Recent Results”. The high quality of the program of the conference was ensured by the much-appreciated work of the authors, the Program Committee members, and the Editorial Board members. Finally, we wish to thank the local organization team, who provided the support that let ICFCA 2013 proceed smoothly in a pleasant atmosphere.
Contents:
EXTENDED ABSTRACT
Jean-Paul Doignon: Identifiability in Knowledge Space Theory: a survey of recent results, p. 1
REGULAR CONTRIBUTIONS
Ľubomír Antoni, Stanislav Krajči, Ondrej Krídlo and Lenka Pisková: Heterogeneous environment on examples, p. 5
Robert Jäschke and Sebastian Rudolph: Attribute Exploration on the Web, p. 19
Adam Krasuski and Piotr Wasilewski: The Detection of Outlying Fire Service’s Reports. The FCA Driven Analytics, p. 35
Xenia Naidenova and Vladimir Parkhomenko: An Approach to Incremental Learning Based on Good Classification Tests, p. 51
Alexey A. Neznanov, Dmitry A. Ilvovsky and Sergei O. Kuznetsov: FCART: A New FCA-based System for Data Analysis and Knowledge Discovery, p. 65
Styles APA, Harvard, Vancouver, ISO, etc.
41

Kandasamy, Meenakshi. « Approaches to Creating Fuzzy Concept Lattices and an Application to Bioinformatics Annotations ». Miami University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=miami1293821656.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
42

Distel, Felix, et Daniel Borchmann. « Expected Numbers of Proper Premises and Concept Intents ». Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-71153.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
43

Potter, Dustin Paul. « A combinatorial approach to scientific exploration of gene expression data : An integrative method using Formal Concept Analysis for the comparative analysis of microarray data ». Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/28792.

Texte intégral
Résumé :
Functional genetics is the study of the genes present in a genome of an organism, the complex interplay of all genes and their environment being the primary focus of study. The motivation for such studies is the premise that gene expression patterns in a cell are characteristic of its current state. The availability of the entire genome for many organisms now allows scientists unparalleled opportunities to characterize, classify, and manipulate genes or gene networks involved in metabolism, cellular differentiation, development, and disease. System-wide studies of biological systems have been made possible by the advent of high-throughput and large-scale tools such as microarrays which are capable of measuring the mRNA levels of all genes in a genome. Tools and methods for the integration, visualization, and modeling of the large-scale data obtained in typical systems biology experiments are indispensable. Our work focuses on a method that integrates gene expression values obtained from microarray experiments with biological functional information related to the genes measured in order to make global comparisons of multiple experiments. In our method, the integrated data is represented as a lattice and, using appropriate measures, a reference experiment can be compared to samples from a database of similar experiments, and a ranking of similarity is returned. In this work, support for the validity of our method is demonstrated both theoretically and empirically: a mathematical description of the lattice structure with respect to the integrated information is developed and the method is applied to data sets of both simulated and reported microarray experiments. A fast algorithm for constructing the lattice representation is also developed.
Ph. D.
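As a hedged sketch of the comparison step (the similarity measure below is a generic Jaccard overlap chosen for illustration; the thesis develops its own measures and lattice structure), suppose each experiment's lattice has been summarized as a set of concept intents mixing expression state with functional annotation:

def lattice_similarity(ref_intents, other_intents):
    """Jaccard overlap between two sets of concept intents (frozensets)."""
    union = ref_intents | other_intents
    return len(ref_intents & other_intents) / len(union) if union else 1.0

# Hypothetical intents; all names below are invented for illustration.
reference = {frozenset({"kinase", "upregulated"}), frozenset({"membrane"})}
database = {
    "expA": {frozenset({"kinase", "upregulated"})},
    "expB": {frozenset({"ribosome"})},
}
ranking = sorted(database,
                 key=lambda e: lattice_similarity(reference, database[e]),
                 reverse=True)
print(ranking)  # ['expA', 'expB'] -- most similar experiment first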
Styles APA, Harvard, Vancouver, ISO, etc.
44

Peng, Jin-De, et 彭晉德. « Apply Document Processing Techniques to Improve Fuzzy Formal Concept Analysis Concept Quality ». Thesis, 2013. http://ndltd.ncl.edu.tw/handle/13267447827823741877.

Texte intégral
Résumé :
Master's thesis
National Yunlin University of Science and Technology
Master's Program, Department of Information Management
101 (2012)
Traditional Formal Concept Analysis (FCA) has been criticized for its inability to deal with uncertain information; moreover, its management and retrieval performance deteriorates when coping with large document collections, particularly in broad domains. To address these drawbacks, this study incorporated fuzzy theory into FCA and applied an event-detection clustering technique, using an experimental dataset drawn from Yahoo! News. We extracted news features through syntax rules and assigned normalized TF-IDF values as membership grades. Event-detection clustering was then carried out to decrease the complexity of the document set, enhance retrieval quality, and shorten processing time. Assessing the quality of the concept lattice through the fuzzy rate, the results showed that our proposed method achieved a higher fuzzy rate than traditional FCA, and that quality increased as the α-cut was raised. Furthermore, an appropriate α-cut for building a more precise concept lattice can be found through user satisfaction; experimental results showed that users judged the concepts to express the news contents best when the α-cut was 0.06.
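A minimal sketch of the preprocessing described above, assuming that "normalized TF-IDF as membership grade" means TF-IDF scores rescaled to [0, 1] within each document and that the α-cut turns the fuzzy context into a crisp one (the corpus and all names below are illustrative):

import math
from collections import Counter

docs = {
    "news1": "typhoon strikes taiwan coast",
    "news2": "taiwan election results announced",
    "news3": "typhoon damage taiwan recovery",
}

def crisp_context(docs, alpha):
    tokenized = {d: text.split() for d, text in docs.items()}
    df = Counter(t for toks in tokenized.values() for t in set(toks))
    n = len(docs)
    context = {}
    for d, toks in tokenized.items():
        tf = Counter(toks)
        tfidf = {t: (tf[t] / len(toks)) * math.log(n / df[t]) for t in tf}
        top = max(tfidf.values()) or 1.0   # rescale to [0, 1] per document
        context[d] = {t for t, v in tfidf.items() if v / top >= alpha}
    return context

# A higher alpha-cut keeps only the most characteristic terms per document.
print(crisp_context(docs, alpha=0.5))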
Styles APA, Harvard, Vancouver, ISO, etc.
45

Lin, Shin-Yang, et 林欣洋. « Knowledge Exploration in Drug Interaction using Fuzzy Formal Concept Analysis ». Thesis, 2007. http://ndltd.ncl.edu.tw/handle/43035149416098183090.

Texte intégral
Résumé :
Master's thesis
National Cheng Kung University
Graduate Institute of Information Management
95 (2006)
Improvements in pharmaceutics, accompanied by medicinal and technological advances, have greatly expanded the diversity of pharmaceuticals, turning the world of pharmacology into a complex web of drugs, their interactions, and, most important of all, their effects on patients. The increasing number of pharmaceuticals inevitably complicates the interactions between them, making their thorough understanding essential to preventing possible pathogenic symptoms; this is underscored by the seriousness of drug misuse and its consequences. This study utilizes Fuzzy Formal Concept Analysis in the process of medicinal data analysis to uncover the connections between formal concepts and the strengths of these relationships. The tacit knowledge extracted helps experts gain better insight into pharmaceuticals and possible drug interactions, consequently improving their usage in terms of effectiveness and reducing mistreatment.
Styles APA, Harvard, Vancouver, ISO, etc.
46

Cheng, HsuFeng, et 鄭旭峰. « The Study on “Information Security” News Retrieval by using Fuzzy Formal Concept Analysis ». Thesis, 2013. http://ndltd.ncl.edu.tw/handle/73187591349099605567.

Texte intégral
Résumé :
Master's thesis
Management College, National Defense University
Department of Information Management
101 (2012)
In today's ever-expanding universe of knowledge, digitized documentation is the trend and online information forms the main body of knowledge resources. How to organize this massive volume of data, and how to categorize and search relevant information efficiently, are therefore critical issues for current research. Index and search features are available on plenty of websites, with navigation based on keywords, but keyword search alone is not an efficient way to explore one specific topic. If, instead, a search engine is built around specific subjects, the categories are typically constructed manually, which is time-consuming and strenuous for such large volumes of data. The main purpose of this study is therefore an automatic classification mechanism for search applications, aimed at improving effectiveness in knowledge representation and discovery. The study has three main objectives: first, to acquire domain-specific terms and, using Fuzzy Formal Concept Analysis (FFCA), to automatically build the ontology and concept relationships for the information security domain; second, to use news reports from "Information Security" websites as the main training material, with query expansion as the principal navigation method; third, to construct a query model for information security news so that the system can recommend content, expand search results, and increase retrieval efficiency. The study validates that ontology construction time is shortened by the automatic approach, which is superior to the manual one, and that constructing a domain ontology based on FFCA is more beneficial than using classical Formal Concept Analysis. Experimental results on the information security news retrieval system illustrate an efficient way to expand relevant contents for every user.
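A hedged sketch of the query-expansion step (the ontology here is a toy dictionary with invented terms and relatedness degrees; in the thesis it is built automatically with FFCA):

ontology = {  # term -> (related term, relatedness degree), illustrative only
    "phishing": [("social engineering", 0.8), ("spoofing", 0.6)],
    "malware": [("virus", 0.9), ("trojan", 0.85)],
}

def expand_query(terms, ontology, threshold=0.7):
    """Append related terms whose relatedness passes the threshold."""
    expanded = list(terms)
    for t in terms:
        expanded += [r for r, deg in ontology.get(t, []) if deg >= threshold]
    return expanded

print(expand_query(["phishing", "malware"], ontology))
# ['phishing', 'malware', 'social engineering', 'virus', 'trojan']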
Styles APA, Harvard, Vancouver, ISO, etc.
47

Glodeanu, Cynthia Vera. « Conceptual Factors and Fuzzy Data ». Doctoral thesis, 2012. https://tud.qucosa.de/id/qucosa%3A26470.

Texte intégral
Résumé :
With the growing number of large data sets, the necessity of complexity reduction applies today more than ever before. Moreover, some data may also be vague or uncertain. Thus, whenever we have an instrument for data analysis, the questions of how to apply complexity reduction methods and how to treat fuzzy data arise rather naturally. In this thesis, we discuss these issues for the very successful data analysis tool Formal Concept Analysis. In fact, we propose different methods for complexity reduction based on qualitative analyses, and we elaborate on various methods for handling fuzzy data. These two topics split the thesis into two parts. Data reduction is mainly dealt with in the first part of the thesis, whereas we focus on fuzzy data in the second part. Although each chapter may be read almost on its own, each one builds on and uses results from its predecessors. The main crosslink between the chapters is given by the reduction methods and fuzzy data. In particular, we will also discuss complexity reduction methods for fuzzy data, combining the two issues that motivate this thesis.
Styles APA, Harvard, Vancouver, ISO, etc.
48

Lin, Hun-Ching, et 林紘靖. « Automatic Document Classification Using Fuzzy Formal Concept Analysis ». Thesis, 2009. http://ndltd.ncl.edu.tw/handle/12015487957874690408.

Texte intégral
Résumé :
Master's thesis
National Cheng Kung University
Graduate Institute of Information Management
97 (2008)
With the popularization of computers, the development of the internet, and the coming of the knowledge age, the number of digital documents is increasing ever faster. Search engines on the internet always return a huge number of results, and it has become more and more difficult to find a specified document in a database. People have therefore started looking for ways to locate required documents in huge collections, and automatic categorization of documents has become an important issue in managing document data. In recent years, more and more research has applied formal concept analysis (FCA) to information retrieval. However, classical formal concept analysis cannot present the fuzzy information inherent in document categorization (Tho et al., 2006), so some research combines fuzzy theory with FCA into fuzzy FCA (Burusco and Fuentes-Gonzales, 1994), and research on FCA has kept growing. The research proposed here analyses documents with information retrieval technology to find the most important keywords of a specified dataset, assigns fuzzy membership degrees, and then categorizes the documents with fuzzy FCA. The categorization is computed from the concept lattice produced by the FCA process, in order to find an application of the concept lattice beyond presenting domain knowledge; we hope this will be helpful to research on document categorization using FCA. The results show that categorization using a concept lattice combined with fuzzy logic is precise, and the results are steady across all categories.
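One plausible reading of the classification step, sketched in Python (not the thesis's exact algorithm): treat each formal concept as a candidate class description, match a new document's terms against the concept intents, and vote with the category labels of the best-matching concept's extent.

from collections import Counter

def classify(doc_terms, concepts, labels):
    """concepts: list of (extent, intent) pairs; labels: doc -> category."""
    def overlap(concept):
        extent, intent = concept
        return len(doc_terms & intent) / (len(intent) or 1)
    extent, intent = max(concepts, key=overlap)
    return Counter(labels[d] for d in extent).most_common(1)[0][0]

# Toy lattice fragment with labelled training documents.
concepts = [
    ({"d1", "d2"}, {"stock", "market"}),
    ({"d3"}, {"soccer", "league"}),
]
labels = {"d1": "finance", "d2": "finance", "d3": "sports"}
print(classify({"market", "stock", "index"}, concepts, labels))  # finance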
Styles APA, Harvard, Vancouver, ISO, etc.
49

Lin, Yu-Ting, et 林于婷. « Document Overlapping Clustering Using Formal Concept Analysis ». Thesis, 2015. http://ndltd.ncl.edu.tw/handle/15119561186477320841.

Texte intégral
Résumé :
Master's thesis
National Chung Hsing University
Department of Information Management
103 (2014)
In recent years, information and data have been growing and spreading fast, and many studies try to find useful patterns or knowledge within the growing data. Text document clustering is a data mining technique that addresses this problem by grouping documents into several clusters based on the similarities among them. Most traditional clustering algorithms build disjoint clusters, but clusters should be allowed to overlap, because in the real world a document may belong to two or more categories. For example, an article discussing the Apple Watch may be categorized into 3C, Fashion, or even Clothing and Shoes, and could then be seen by more internet users. In this paper, we propose an overlapping clustering algorithm that uses Formal Concept Analysis and lets an article belong to two or more clusters. Owing to the hierarchical structure of the formal concept lattice, an article can belong to more than one formal concept; by extracting suitable formal concepts and transforming them into conceptual vectors, an overlapping clustering result can be obtained. Moreover, because our algorithm reduces the dimensionality of the vector space, it performs more efficiently than traditional clustering approaches based on the Vector Space Model.
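The mechanism can be sketched as follows (a simplified illustration; the thesis's concept-selection and vector-transformation steps are richer): each selected formal concept acts as one cluster, and a document joins every cluster whose extent contains it, so overlapping membership falls out naturally.

def overlapping_clusters(concepts, min_support=2):
    """concepts: (extent, intent) pairs; keep concepts with enough support."""
    clusters = {}
    for extent, intent in concepts:
        if len(extent) >= min_support and intent:  # crude selection criterion
            clusters[frozenset(intent)] = set(extent)
    return clusters

# Toy lattice: the Apple Watch article sits in two concepts at once.
concepts = [
    ({"apple_watch_article", "iphone_article"}, {"3C"}),
    ({"apple_watch_article", "handbag_article"}, {"fashion"}),
]
for intent, extent in overlapping_clusters(concepts).items():
    print(set(intent), "->", extent)
# apple_watch_article appears in both the 3C and the fashion cluster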
Styles APA, Harvard, Vancouver, ISO, etc.
50

黃大偉. « Event Tree Analysis Using Fuzzy Concept ». Thesis, 1997. http://ndltd.ncl.edu.tw/handle/11517306333510215915.

Texte intégral
Résumé :
Master's thesis
National Tsing Hua University
Graduate Institute of Industrial Engineering
85 (1996)
Event tree analysis (ETA) is a straightforward and simple approach to risk assessment. It can be used to identify accident sequences and their causes, and to give the analyst a clear picture of which top event dominates the safety of the system. Traditional ETA uses a single probability to represent each top event; however, it is unreasonable to evaluate the occurrence of an event with a crisp value that ignores the inherent uncertainty and imprecision of a state. Since fuzzy set theory provides a framework for dealing with this kind of phenomenon, it is the tool used in this study. The main purpose of this study is to construct a simple methodology for evaluating human error and to integrate it into ETA using fuzzy concepts. In addition, a systematic fuzzy ETA (FETA) algorithm is developed to evaluate the risk of a large-scale system, and a practical example of an ATWS event in a nuclear power plant is used to demonstrate the procedure. The fuzzy outcomes are defuzzified using the total integral value, parameterized by the decision maker's degree of optimism. Finally, two indices are used to provide further information about the importance and uncertainty of the top events.
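As a small worked sketch of the defuzzification mentioned above, assuming triangular fuzzy numbers (a, m, b) for top-event probabilities and the total integral value of Liou and Wang (1992), I = w·(m+b)/2 + (1−w)·(a+m)/2 with degree of optimism w in [0, 1] (the figures below are invented; componentwise multiplication is the usual triangular approximation along an accident sequence):

def fuzzy_mult(p, q):
    """Triangular approximation of fuzzy multiplication, componentwise."""
    return tuple(x * y for x, y in zip(p, q))

def total_integral(tfn, optimism=0.5):
    """Liou-Wang total integral value of a triangular fuzzy number."""
    a, m, b = tfn
    return optimism * (m + b) / 2 + (1 - optimism) * (a + m) / 2

# Two top events along one hypothetical accident sequence:
seq = fuzzy_mult((0.01, 0.02, 0.04), (0.1, 0.2, 0.3))
print(seq)                        # (0.001, 0.004, 0.012)
print(total_integral(seq, 0.5))   # crisp sequence frequency: 0.00525

The degree of optimism w shifts weight between the left and right integrals of the fuzzy number, which is how the decision maker's attitude enters the crisp result.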
Styles APA, Harvard, Vancouver, ISO, etc.
