
Dissertations / Theses on the topic 'Graph-based application'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Graph-based application.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ferrer Sumsi, Miquel. "Theory and Algorithms on the Median Graph. Application to Graph-based Classification and Clustering." Doctoral thesis, Universitat Autònoma de Barcelona, 2008. http://hdl.handle.net/10803/5788.

Full text
Abstract:
Given a set of objects, the generic concept of median is defined as the object with the smallest sum of distances to all the objects in the set. It is often used to obtain a representative of the set.
In structural pattern recognition, graphs are normally used to represent structured objects. In the graph domain, the concept analogous to the median is known as the median graph. By extension, it has the same potential applications as the generic median, namely serving as the representative of a set of graphs.
Despite its simple definition and potential applications, its computation has been shown to be an extremely complex task. All the existing algorithms can only deal with small sets of graphs, and their application has been constrained in most cases to synthetic data with no real meaning. Thus, the median graph has largely remained a theoretical concept.
The main objective of this work is to further investigate both the theory and the algorithms underlying the concept of the median graph, with the final objective of extending its applicability and bringing its full potential to the world of real applications. To this end, new theory and new algorithms for its computation are reported. From a theoretical point of view, this thesis makes two main contributions. On the one hand, we introduce the new concept of the spectral median graph. On the other hand, we show that some of the existing theoretical properties of the median graph can be improved under specific conditions. In addition to these theoretical contributions, we propose five new ways to compute the median graph. One of them is a direct consequence of the spectral median graph concept. In addition, we provide two new algorithms based on the new theoretical properties. Finally, we present a novel technique for median graph computation based on graph embedding into vector spaces; with this technique, two more new algorithms are presented.
The experimental evaluation of the proposed methods on one semi-artificial and two real-world datasets, representing graphical symbols, molecules and webpages, shows that these methods are much more efficient than the existing ones. In addition, we have been able to prove for the first time that the median graph can be a good representative of a class in large datasets. We have performed classification and clustering experiments that validate this hypothesis and allow us to foresee a successful application of the median graph to a variety of machine learning algorithms.
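The median concept defined in this abstract is easy to state in code. The sketch below is illustrative only: it computes the *set median* (the member of the set minimizing the sum of distances, a common surrogate; the thesis concerns the much harder generalized median over all possible graphs), with a toy Hamming distance on strings standing in for a graph distance such as graph edit distance.

```python
def set_median(objects, dist):
    """Return the element of `objects` with the smallest sum of
    distances to all elements of the set (the 'set median')."""
    return min(objects, key=lambda o: sum(dist(o, p) for p in objects))

# Toy stand-in distance: Hamming distance between equal-length strings,
# playing the role a graph distance would play for graphs.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

words = ["graph", "grape", "grasp", "groph"]
print(set_median(words, hamming))  # "graph": smallest summed distance (4)
```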
APA, Harvard, Vancouver, ISO, and other styles
2

Huang, Zan. "GRAPH-BASED ANALYSIS FOR E-COMMERCE RECOMMENDATION." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1167%5F1%5Fm.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhu, Ruifeng. "Contribution to graph-based manifold learning with application to image categorization." Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCA015.

Full text
Abstract:
Graph-based manifold learning algorithms are regarded as a powerful technique for feature extraction and dimensionality reduction in the pattern recognition, computer vision and machine learning fields. These algorithms use the pairwise sample similarities and the weighted graph matrix to reveal the intrinsic geometric structure of the manifold, recovering the low-dimensional structure hidden in high-dimensional data. This motivates us to develop graph-based manifold learning techniques for pattern recognition, specifically for application to image categorization. The experimental datasets of the thesis correspond to several categories of public image datasets, such as face datasets, indoor and outdoor scene datasets, object datasets and so on. Several approaches are proposed in this thesis: 1) A novel nonlinear method called Flexible Discriminant graph-based Embedding with Feature Selection (FDEFS) is proposed. We seek a nonlinear and a linear representation of the data that can be suitable for generic learning tasks such as classification and clustering. Besides, a byproduct of the proposed embedding framework is the selection of the original features, where the estimated linear transformation matrix can be used for feature ranking and selection. 2) We investigate strategies and related algorithms for a joint graph-based embedding and explicit feature weighting, in order to obtain a flexible and inductive nonlinear data representation on manifolds. The proposed criterion explicitly estimates the feature weights together with the projected data and the linear transformation, such that data smoothness and large margins are achieved in the projection space. Moreover, this chapter introduces a kernel variant of the model in order to obtain an inductive nonlinear embedding that is close to a real nonlinear subspace, for a good approximation of the embedded data. 3) We propose Graph Convolution based Semi-supervised Embedding (GCSE).
It provides a new perspective on nonlinear data embedding research and makes a link to signal processing on graph methods. The proposed method uses and exploits graphs in two ways. First, it enforces data smoothness over graphs. Second, its regression model is built on the joint use of the data and their graph, in the sense that the regression model works with convolved data. The convolved data are obtained by feature propagation. 4) A flexible deep learning framework that can overcome the limitations and weaknesses of single-layer learning models is introduced. We call this strategy an elastic graph-based embedding with a deep architecture, which deeply explores the structural information of the data. The resulting framework can be used in semi-supervised and supervised settings. Besides, the resulting optimization problems can be solved efficiently.
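As a minimal sketch of the graph-based embedding idea underlying these methods (a plain Laplacian spectral embedding, not the thesis's FDEFS or GCSE algorithms), the following computes a one-dimensional embedding from a weighted similarity graph; the Fiedler eigenvector separates the two weakly connected clusters.

```python
import numpy as np

def laplacian_embedding(W, dim=1):
    """Spectral embedding from a symmetric similarity matrix W:
    eigenvectors of the graph Laplacian L = D - W associated with
    the smallest non-zero eigenvalues."""
    L = np.diag(W.sum(axis=1)) - W
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return vecs[:, 1:1 + dim]        # skip the constant eigenvector

# Two triangles joined by one weak edge: the embedding separates them.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1              # weak bridge between the clusters
Y = laplacian_embedding(W)
print(Y.ravel().round(2))            # opposite signs for the two triangles
```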
APA, Harvard, Vancouver, ISO, and other styles
4

Martineau, Maxime. "Deep learning onto graph space : application to image-based insect recognition." Thesis, Tours, 2019. http://www.theses.fr/2019TOUR4024.

Full text
Abstract:
The goal of this thesis is to investigate insect recognition as an image-based pattern recognition problem. Although this problem has been extensively studied over the previous three decades, one element was, to the best of our knowledge, still to be experimented with as of 2017: deep approaches. Therefore, one contribution consists in determining to what extent deep convolutional neural networks (CNNs) can be applied to image-based insect recognition. The major limitations are that images are very scarce and class cardinalities are highly imbalanced; to mitigate them, transfer learning and weighting of the cost function were employed. Graph-based representations and methods have also been proposed and tested. Two attempts are presented: the former consists in designing a graph-perceptron classifier, and the latter defines a convolution operator on graphs in order to build graph convolutional neural networks (GCNNs). The last chapter of the thesis applies most of the aforementioned methods to insect image recognition problems. Two datasets are proposed. The first one consists of lab-based images with a constant background. The second one is generated from an ImageNet subset and is composed of field-based images. CNNs trained with transfer learning are the most successful method on these datasets.
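The cost-function weighting mentioned for imbalanced classes can be sketched as follows. This uses the generic inverse-frequency ("balanced") heuristic, which is an assumption for illustration; the thesis may use a different weighting scheme.

```python
import numpy as np

def class_weights(counts):
    """Inverse-frequency class weights, n_samples / (n_classes * count).
    The weighted average over training samples is exactly 1."""
    counts = np.asarray(counts, dtype=float)
    return counts.sum() / (len(counts) * counts)

def weighted_nll(probs, labels, weights):
    """Class-weighted negative log-likelihood over a batch of
    predicted probability vectors."""
    return float(np.mean([-weights[y] * np.log(p[y])
                          for p, y in zip(probs, labels)]))

# Toy setting: 3 insect classes with 100, 10 and 2 training images.
w = class_weights([100, 10, 2])
print(w.round(2))  # rare classes get larger weights
```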
APA, Harvard, Vancouver, ISO, and other styles
5

Kim, Pilho. "E-model event-based graph data model theory and implementation /." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29608.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Madisetti, Vijay; Committee Member: Jayant, Nikil; Committee Member: Lee, Chin-Hui; Committee Member: Ramachandran, Umakishore; Committee Member: Yalamanchili, Sudhakar. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
6

GRASSI, FRANCESCO. "Statistical and Graph-Based Signal Processing: Fundamental Results and Application to Cardiac Electrophysiology." Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2710580.

Full text
Abstract:
The goal of cardiac electrophysiology is to obtain information about the mechanism, function, and performance of the electrical activities of the heart, to identify deviations from the normal pattern, and to design treatments. By offering better insight into the comprehension and management of cardiac arrhythmias, signal processing can help the physician enhance treatment strategies, in particular in the case of atrial fibrillation (AF), a very common atrial arrhythmia associated with significant morbidities, such as an increased risk of mortality, heart failure, and thromboembolic events. Catheter ablation of AF is a therapeutic technique which uses radiofrequency energy to destroy the atrial tissue involved in sustaining the arrhythmia, typically aiming at the electrical disconnection of the pulmonary vein triggers. However, the recurrence rate is still very high, showing that the very complex and heterogeneous nature of AF still represents a challenging problem. Leveraging the tools of non-stationary and statistical signal processing, the first part of our work has a twofold focus. Firstly, we compare the performance of two different ablation technologies, based on contact-force sensing and on remote magnetic control, using signal-based criteria as surrogates for lesion assessment; furthermore, we investigate the role of ablation parameters in lesion formation using late-gadolinium-enhanced magnetic resonance imaging. Secondly, we hypothesized that in human atria the frequency content of the bipolar signal is directly related to the local conduction velocity (CV), a key parameter characterizing substrate abnormality and influencing atrial arrhythmias. Comparing the degree of spectral compression among signals recorded at different points of the endocardial surface in response to a decreasing pacing rate, our experimental data demonstrate a significant correlation between CV and the corresponding spectral centroids.
However, the complex spatio-temporal propagation patterns characterizing AF spurred the need for new signal acquisition and processing methods. Multi-electrode catheters allow whole-chamber panoramic mapping of electrical activity but produce an amount of data which needs to be preprocessed and analyzed to provide clinically relevant support to the physician. Graph signal processing (GSP) has shown its potential in a variety of applications involving high-dimensional data on irregular domains and complex networks. Nevertheless, though state-of-the-art graph-based methods have been successful for many tasks, so far they predominantly ignore the time dimension of data. To address this shortcoming, in the second part of this dissertation we put forth a time-vertex signal processing framework, as a particular case of multi-dimensional graph signal processing. Linking time-domain signal processing techniques with the tools of GSP, time-vertex signal processing facilitates the analysis of graph-structured data which also evolve in time. We motivate our framework by leveraging the notion of partial differential equations on graphs. We introduce joint operators, such as time-vertex localization, and we present a novel approach that significantly improves the accuracy of fast joint filtering. We also illustrate how to build time-vertex dictionaries, providing conditions for efficient invertibility and examples of constructions. The experimental results on a variety of datasets suggest that the proposed tools can bring significant benefits in various signal processing and learning tasks involving time series on graphs. We close the gap between the two parts by illustrating the application of graph and time-vertex signal processing to the challenging case of multi-channel intracardiac signals.
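The spectral centroid used as a surrogate in the first part can be sketched in a few lines. This is a generic FFT-based centroid for uniformly sampled signals, not the thesis's clinical pipeline.

```python
import numpy as np

def spectral_centroid(x, fs):
    """Centroid of the magnitude spectrum (in Hz): the 'center of
    mass' of the signal's frequency content."""
    X = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float((f * X).sum() / X.sum())

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
low = np.sin(2 * np.pi * 5 * t)    # 5 Hz tone
high = np.sin(2 * np.pi * 50 * t)  # 50 Hz tone
c_low = spectral_centroid(low, fs)
c_high = spectral_centroid(high, fs)
print(round(c_low, 1), round(c_high, 1))  # centroid tracks the tone frequency
```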
APA, Harvard, Vancouver, ISO, and other styles
7

Bush, Stephen J. Baker Erich J. "Automated sequence homology using empirical correlations to create graph-based networks for the elucidation of protein relationships /." Waco, Tex. : Baylor University, 2008. http://hdl.handle.net/2104/5221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhu, Xiaoting. "Systematic Assessment of Structural Features-Based Graph Embedding Methods with Application to Biomedical Networks." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1592394966493963.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Yan. "Improving the efficiency of graph-based data mining with application to public health data." Online access for everyone, 2007. http://www.dissertations.wsu.edu/Thesis/Fall2007/y_zhang_112907.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Loureiro, Rui. "Bond graph model based on structural diagnosability and recoverability analysis : application to intelligent autonomous vehicles." Thesis, Lille 1, 2012. http://www.theses.fr/2012LIL10079/document.

Full text
Abstract:
This work deals with structural fault recoverability analysis using the bond graph model. The objective is to exploit the structural and causal properties of the bond graph tool in order to perform both diagnosis and control analysis in the presence of faults. Indeed, the bond graph tool makes it possible to verify the structural conditions of fault recoverability not only from a control perspective but also from a diagnosis one. In this way, the set of faults that can be recovered from is obtained prior to industrial implementation. In addition, a novel way of estimating the fault as a disturbing power supplied to the system made it possible to extend the structural fault recoverability results by performing a local adaptive compensation directly from the bond graph model. Finally, the obtained structural results are validated on a redundant intelligent autonomous vehicle.
APA, Harvard, Vancouver, ISO, and other styles
11

Nguyen, Vu Ngoc Tung. "Analysis of biochemical reaction graph : application to heterotrophic plant cell metabolism." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0023/document.

Full text
Abstract:
Nowadays, systems biology faces the challenge of analysing huge amounts of biological data and large-scale metabolic networks. Although several methods have been developed in recent years to address this problem, studying these data and interpreting the results comprehensively remains hard. This thesis focuses on the analysis of structural properties, the computation of elementary flux modes and the determination of minimal cut sets of the heterotrophic plant cell metabolic network. In our research, we collaborated with biologists to reconstruct a mid-size metabolic network of this heterotrophic plant cell, containing about 90 nodes and 150 edges. In a first step, we analysed the structural properties using graph-theoretic measures, with the aim of uncovering the network's organisation. The central points, or hub reactions, found in this step do not clearly explain the network structure. Small-world and scale-free attributes were also investigated, but they do not give more useful information. In a second step, one of the promising analysis methods, elementary flux modes, yields a large number of solutions, around hundreds of thousands of feasible metabolic pathways, which are difficult to handle manually. In a third step, minimal cut set computation, a dual approach to elementary flux modes, was used to enumerate all minimal and unique sets of reactions that stop the feasible pathways found in the previous step. Unlike the number of elementary flux modes, the number of minimal cut sets does not tend to grow exponentially with network size. We also combined elementary flux mode analysis and minimal cut set computation to find the relationship between the two sets of results. The findings reveal the importance of minimal cut sets in seeking the hierarchical structure of this network through elementary flux modes. We studied a particular scenario: what happens if the glucose entry is absent?
By analysing small minimal cut sets, we were able to find the set of reactions that must be present to produce the different sugars or metabolites of interest in the absence of glucose entry. Minimal cut sets of size 2 were used to identify 8 reactions which play the role of the skeleton/core of our network. In addition to these first results, by using minimal cut sets of size 3, we identified five reactions as the starting points of new branches in the creation of feasible pathways. These 13 reactions create a hierarchical classification of the set of elementary flux modes, helping us understand more clearly the production of metabolites of interest within plant cell metabolism.
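The minimal-cut-set idea described in this abstract can be illustrated by brute force on a toy network (a hypothetical four-reaction example; real metabolic MCS tools use dedicated algorithms, and the metabolite names here are only placeholders).

```python
from itertools import combinations

def reachable(edges, src, dst):
    """Depth-first search: is dst reachable from src?"""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    seen, stack = {src}, [src]
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def minimal_cut_sets(reactions, src, dst, max_size=3):
    """Enumerate inclusion-minimal sets of reactions whose removal
    disconnects src from dst (a toy analogue of metabolic MCSs)."""
    cuts = []
    for k in range(1, max_size + 1):
        for combo in combinations(reactions, k):
            if any(set(c) <= set(combo) for c in cuts):
                continue  # not minimal: contains a smaller cut
            kept = [r for r in reactions if r not in combo]
            if not reachable(kept, src, dst):
                cuts.append(combo)
    return cuts

# Toy network: two parallel routes from glucose to pyruvate.
rxns = [("Glc", "G6P"), ("G6P", "Pyr"), ("Glc", "F6P"), ("F6P", "Pyr")]
print(minimal_cut_sets(rxns, "Glc", "Pyr"))  # 4 minimal cuts of size 2
```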
APA, Harvard, Vancouver, ISO, and other styles
12

Fruth, Jana. "Sensitivity analysis and graph-based methods for black-box functions with an application to sheet metal forming." Thesis, Saint-Etienne, EMSE, 2015. http://www.theses.fr/2015EMSE0779/document.

Full text
Abstract:
The general field of the thesis is the sensitivity analysis of black-box functions. Sensitivity analysis studies how the variation of the output can be apportioned to the variation of the input sources. It is an important tool in the construction, analysis, and optimization of computer experiments. The total interaction index is presented, which can be used for the screening of interactions. Several variance-based estimation methods are suggested, and their properties are analyzed theoretically as well as in simulations. A further chapter concerns sensitivity analysis for models that take functions as input variables and return a scalar value as output. A very economical sequential approach is presented, which not only discovers the sensitivity of those functional variables as a whole but also identifies relevant regions in the functional domain. As a third concept, support index functions, functions of sensitivity indices over the input distribution support, are suggested. Finally, all three methods are successfully applied in the sensitivity analysis of sheet metal forming models.
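Variance-based interaction screening of the kind this thesis builds on can be sketched with the standard Saltelli/Jansen pick-and-freeze estimators. This is a generic illustration of first-order versus total Sobol indices, whose gap reveals interactions; it is not the thesis's total interaction index estimator.

```python
import numpy as np

def sobol_indices(f, d, n=20000, seed=0):
    """Monte Carlo estimates of first-order (S) and total (ST) Sobol
    indices via pick-and-freeze sampling. Interactions involving
    input i show up as ST[i] - S[i] > 0."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    V = np.var(np.concatenate([fA, fB]))
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # freeze all but input i
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / V      # Saltelli estimator
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / V  # Jansen estimator
    return S, ST

# x1 and x2 interact multiplicatively; x3 is purely additive.
f = lambda X: X[:, 0] * X[:, 1] + X[:, 2]
S, ST = sobol_indices(f, 3)
print(S.round(2), ST.round(2))
```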
APA, Harvard, Vancouver, ISO, and other styles
13

Madi, Kamel. "Inexact graph matching : application to 2D and 3D Pattern Recognition." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1315/document.

Full text
Abstract:
Graphs are powerful mathematical modeling tools used in various fields of computer science, in particular in Pattern Recognition. Graph matching is the main operation in graph-based Pattern Recognition. Finding solutions to the graph matching problem that ensure optimality in terms of accuracy and time complexity is a difficult and topical research challenge. In this thesis, we investigate this problem in two fields: 2D and 3D Pattern Recognition. Firstly, we address the problem of geometric graph matching and its applications to 2D Pattern Recognition. Kite (archaeological structure) recognition in satellite images is the main application considered in this first part. We present a complete graph-based framework for Kite recognition in satellite images. We propose two main contributions. The first is an automatic process transforming Kites from real images into graphs, together with a process for randomly generating synthetic Kite graphs. This allowed us to construct a benchmark of Kite graphs (real and synthetic) structured into different levels of deformation. The second contribution of this part is a new graph similarity measure adapted to geometric graphs and consequently to Kite graphs. The proposed approach combines graph invariants with a geometric graph edit distance computation. Secondly, we address the problem of recognizing deformable 3D objects represented by graphs, i.e., triangular tessellations. We propose a new decomposition of triangular tessellations into a set of substructures that we call triangle-stars. Based on this decomposition, we propose a new graph matching algorithm to measure the distance between triangular tessellations. The proposed algorithm offers a better measure by ensuring a minimum number of triangle-stars covering a larger neighbourhood, and uses a set of descriptors that are invariant, or at least tolerant, under the most common deformations.
Finally, we propose a more general graph matching approach founded on a new formalization based on the stable marriage problem. The proposed approach is optimal in terms of execution time, i.e. the time complexity is quadratic, O(n²), and flexible in terms of applicability (2D and 3D). The analysis of the time complexity of the proposed algorithms and the extensive experiments conducted on Kite graph data sets (real and synthetic) and on standard data sets (2D and 3D) attest to the effectiveness, high performance, and accuracy of the proposed approaches, and show that they are extensible and quite general.
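The stable-marriage formalization above builds on the classic Gale-Shapley algorithm, which runs in quadratic time. The sketch below matches substructures of two graphs via preference lists; the substructure names and rankings are made up for illustration and are not data from the thesis.

```python
# Gale-Shapley stable matching sketch: substructures of one graph "propose"
# to substructures of the other, each side ranking the other by some
# (hypothetical) similarity score encoded as a preference list.

def stable_match(pref_a, pref_b):
    """pref_a/pref_b: dict mapping each element to its preference list
    (most preferred first). Returns a stable matching as a dict a -> b."""
    free = list(pref_a)                        # proposers not yet matched
    next_choice = {a: 0 for a in pref_a}       # index of next b to propose to
    engaged = {}                               # b -> a
    rank_b = {b: {a: i for i, a in enumerate(lst)} for b, lst in pref_b.items()}
    while free:
        a = free.pop()
        b = pref_a[a][next_choice[a]]
        next_choice[a] += 1
        if b not in engaged:
            engaged[b] = a
        elif rank_b[b][a] < rank_b[b][engaged[b]]:
            free.append(engaged[b])            # b prefers a; old partner freed
            engaged[b] = a
        else:
            free.append(a)                     # b rejects a
    return {a: b for b, a in engaged.items()}

prefs_a = {"s1": ["t1", "t2"], "s2": ["t1", "t2"]}
prefs_b = {"t1": ["s2", "s1"], "t2": ["s1", "s2"]}
print(stable_match(prefs_a, prefs_b))  # {'s2': 't1', 's1': 't2'}
```

The matching is stable: no pair of substructures would both prefer each other over their assigned partners.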
APA, Harvard, Vancouver, ISO, and other styles
14

Bertarelli, Lorenza. "Analysis and simulation of cryptographic techniques based on sparse graph with application to satellite and airborne communication systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15537/.

Full text
Abstract:
In view of the future spread of quantum computers, many of the cryptographic systems currently in use must be rethought. This drastic but concrete prospect has led us to reflect on a possible solution based on error-correction code theory. Indeed, many lines of thought agree on the possibility of using error-correcting codes not only for channel encoding, but also for encryption. In 1978 McEliece was the first to propose this idea; at the time it received little attention because its performance in cryptography was worse than that of other techniques, but today it has been re-evaluated. Low-Density Parity-Check (LDPC) codes are the state of the art in error correction, since they can be decoded efficiently close to the Shannon capacity. Thanks to new advances in the algorithmic aspects of coding theory and progress on linear-time encodable/decodable codes, it is possible to achieve capacity even against adversarial noise. This thesis work mainly focuses on the decoding (in cryptographic terms, the decryption) of LDPC codes through the implementation of hard-decision iterative decoding, with the aim of studying the error-correction capability of different codes belonging to the same LDPC family.
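Hard-decision iterative decoding is typically a bit-flipping scheme: recompute the parity checks, then flip the bit(s) involved in the most unsatisfied checks. A minimal sketch under that assumption, using a toy (7,4) Hamming parity-check matrix as a stand-in rather than any code from the thesis:

```python
# Bit-flipping (hard-decision iterative) decoding sketch for a tiny
# parity-check matrix H. The H used here is a toy example.

def bit_flip_decode(H, r, max_iter=20):
    """H: list of parity-check rows (lists of 0/1); r: received hard bits."""
    c = list(r)
    n = len(c)
    for _ in range(max_iter):
        # syndrome: one parity result per check row
        synd = [sum(h[j] * c[j] for j in range(n)) % 2 for h in H]
        if not any(synd):
            return c                       # all checks satisfied
        # count unsatisfied checks touching each bit
        votes = [sum(synd[i] for i in range(len(H)) if H[i][j])
                 for j in range(n)]
        worst = max(votes)
        for j in range(n):                 # flip every bit with the max count
            if votes[j] == worst:
                c[j] ^= 1
    return c

H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
received = [0, 0, 1, 0, 0, 0, 0]           # all-zero codeword, one bit error
print(bit_flip_decode(H, received))        # [0, 0, 0, 0, 0, 0, 0]
```

Real LDPC decoders work the same way but on large sparse H matrices, where the sparsity keeps each iteration cheap.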
APA, Harvard, Vancouver, ISO, and other styles
15

Morimitsu, Henrique. "A graph-based approach for online multi-object tracking in structured videos with an application to action recognition." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-13012016-101607/.

Full text
Abstract:
In this thesis we propose a novel approach for tracking multiple objects using structural information. The objects are tracked by combining particle filter and frame description with Attributed Relational Graphs (ARGs). We start by learning a structural probabilistic model graph from annotated images. The graphs are then used to evaluate the current tracking state and to correct it, if necessary. By doing so, the proposed method is able to deal with challenging situations such as abrupt motion and tracking loss due to occlusion. The main contribution of this thesis is the exploration of the learned probabilistic structural model. By using it, the structural information of the scene itself is used to guide the object detection process in case of tracking loss. This approach differs from previous works, that use structural information only to evaluate the scene, but do not consider it to generate new tracking hypotheses. The proposed approach is very flexible and it can be applied to any situation in which it is possible to find structural relation patterns between the objects. Object tracking may be used in many practical applications, such as surveillance, activity analysis or autonomous navigation. In this thesis, we explore it to track multiple objects in sports videos, where the rules of the game create some structural patterns between the objects. Besides detecting the objects, the tracking results are also used as an input for recognizing the action each player is performing. This step is performed by classifying a segment of the tracking sequence using Hidden Markov Models (HMMs). The proposed tracking method is tested on several videos of table tennis matches and on the ACASVA dataset, showing that the method is able to continue tracking the objects even after occlusion or when there is a camera cut.
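The particle-filter backbone of such a tracker — predict, weight by the observation likelihood, resample — can be sketched generically. The 1D random-walk motion model and Gaussian observation model below are illustrative placeholders, not the thesis's actual models.

```python
# Minimal 1D particle-filter step: predict / weight / resample.
import math
import random

def particle_filter_step(particles, observation, motion_noise=1.0, obs_noise=2.0):
    # predict: propagate each particle with a random-walk motion model
    predicted = [p + random.gauss(0.0, motion_noise) for p in particles]
    # weight: Gaussian likelihood of the observation given each particle
    weights = [math.exp(-((observation - p) ** 2) / (2 * obs_noise ** 2))
               for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # resample: draw particles proportionally to their weights
    return random.choices(predicted, weights=weights, k=len(predicted))

random.seed(0)
particles = [random.uniform(0, 10) for _ in range(500)]
for obs in [2.0, 2.5, 3.0]:            # noisy measurements of the tracked object
    particles = particle_filter_step(particles, obs)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))              # concentrates near the observations
```

In the thesis's setting, the weighting step is where the graph-based structural evaluation comes in, scoring tracking hypotheses against the learned scene model.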
APA, Harvard, Vancouver, ISO, and other styles
16

Olsson, Fredrik. "A Lab System for Secret Sharing." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2385.

Full text
Abstract:

Finnegan Lab System is a graphical computer program for learning how secret sharing works. With its focus on the algorithms and the data streams, the user does not have to consider machine-specific low-level details. It is highly modularised and is not restricted to secret sharing, but can easily be extended with new functions, such as building blocks for Feistel networks or signal processing.

This thesis describes what secret sharing is, the development of a new lab system designed for secret sharing and how it can be used.
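As an illustration of the concept the lab system teaches, here is a minimal sketch of Shamir's (k, n) threshold secret sharing over a prime field; it is not code from the Finnegan Lab System itself, and the field size is an arbitrary small prime chosen for the example.

```python
# Shamir secret sharing sketch: the secret is the constant term of a random
# degree-(k-1) polynomial over GF(P); any k shares reconstruct it exactly.
import random

P = 2087                               # small prime modulus (illustrative)

def make_shares(secret, k, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation evaluated at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return secret

random.seed(1)
shares = make_shares(1234, k=3, n=5)
print(reconstruct(shares[:3]))         # 1234, from any 3 of the 5 shares
```

Any subset of fewer than k shares reveals nothing about the secret, which is the property a lab system on secret sharing would let students experiment with.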

APA, Harvard, Vancouver, ISO, and other styles
17

Nguyen, Thi Kim Ngan. "Generalizing association rules in n-ary relations : application to dynamic graph analysis." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00995132.

Full text
Abstract:
Pattern discovery in large binary relations has been extensively studied. An emblematic success in this area concerns frequent itemset mining and its post-processing that derives association rules. In this case, we mine binary relations that encode whether some properties are satisfied or not by some objects. It is however clear that many datasets correspond to n-ary relations where n > 2. For example, adding spatial and/or temporal dimensions (location and/or time when the properties are satisfied by the objects) leads to the 4-ary relation Objects x Properties x Places x Times. Therefore, we study the generalization of association rule mining within arbitrary n-ary relations: the datasets are now Boolean tensors and not only Boolean matrices. Unlike standard rules that involve subsets of only one domain of the relation, in our setting, the head and the body of a rule can include arbitrary subsets of some selected domains. A significant contribution of this thesis concerns the design of interestingness measures for such generalized rules: besides a frequency measure, two different views on rule confidence are considered. The concept of non-redundant rules and the efficient extraction of the non-redundant rules satisfying the minimal frequency and minimal confidence constraints are also studied. To increase the subjective interestingness of rules, we then introduce disjunctions in their heads. It requires to redefine the interestingness measures again and to revisit the redundancy issues. Finally, we apply our new rule discovery techniques to dynamic relational graph analysis. Such graphs can be encoded into n-ary relations (n ≥ 3). Our use case concerns bicycle renting in the Vélo'v system (self-service bicycle renting in Lyon). It illustrates the added value of some rules that can be computed thanks to our software prototypes.
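For reference, the classical binary-relation measures that the thesis generalizes — support (frequency) and confidence — can be sketched in a few lines. The bicycle-themed transactions below are made up for illustration and are not Vélo'v data.

```python
# Classic association-rule measures on a binary relation (transactions
# as sets of items): support and confidence.

def support(itemset, transactions):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(body, head, transactions):
    """conf(body -> head) = support(body ∪ head) / support(body)."""
    return support(body | head, transactions) / support(body, transactions)

transactions = [
    {"bike", "station_A", "morning"},
    {"bike", "station_A", "evening"},
    {"bike", "station_B", "morning"},
    {"walk", "station_A", "morning"},
]
print(support({"bike"}, transactions))                    # 0.75
print(confidence({"station_A"}, {"bike"}, transactions))  # 2/3
```

In the n-ary setting of the thesis, transactions become Boolean tensors and the body and head of a rule may span several domains (e.g., properties and time stamps), which is what forces the redesign of these measures.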
APA, Harvard, Vancouver, ISO, and other styles
18

Wan, Wei. "A New Approach to the Decomposition of Incompletely Specified Functions Based on Graph Coloring and Local Transformation and Its Application to FPGA Mapping." PDXScholar, 1992. https://pdxscholar.library.pdx.edu/open_access_etds/4698.

Full text
Abstract:
The thesis presents a new approach to the decomposition of incompletely specified functions and its application to FPGA (Field Programmable Gate Array) mapping. Five methods: Variable Partitioning, Graph Coloring, Bond Set Encoding, CLB Reusing and Local Transformation are developed in order to efficiently perform decomposition and mapping to lookup-table based FPGAs. 1) Variable Partitioning is a high-quality heuristic method used to find the "best" partitions, avoiding the very time-consuming testing of all possible decomposition charts, which is impractical when the input function has many input variables. 2) Graph Coloring is another high-quality heuristic, used to perform a quasi-optimum don't-care assignment, making it possible for the program to accept incompletely specified functions and assign the unspecified part of the function quasi-optimally. 3) The Bond Set Encoding algorithm is used to simplify the decomposed blocks during the process of decomposition. 4) The CLB Reusing algorithm is used to reduce the number of CLBs used in the final mapped circuit. 5) The Local Transformation concept is introduced to transform non-decomposable functions into decomposable ones, thus making it possible to apply the decomposition method to FPGA mapping. All the above methods are incorporated into a program named TRADE, which performs global optimization over the input functions, while most existing methods recursively perform local optimization over some kind of network-like graph, and few of them can handle incompletely specified functions. Cube calculus is used in the TRADE program; the operations are global and very fast. A short description of the TRADE program and an evaluation of the results are provided at the end of the thesis. For many benchmarks the TRADE program gives better results than any program published in the literature.
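The graph-coloring idea — adjacent nodes of an incompatibility graph must receive different colors — can be sketched with a simple greedy largest-degree-first heuristic. TRADE's actual heuristic for don't-care assignment is more refined; the graph below is a toy example.

```python
# Greedy graph-coloring sketch: color nodes in decreasing-degree order,
# giving each node the smallest color not used by an already-colored
# neighbour. Adjacent nodes always end up with different colors.

def greedy_coloring(adj):
    """adj: dict node -> set of neighbours. Returns dict node -> color index."""
    colors = {}
    for node in sorted(adj, key=lambda v: -len(adj[v])):   # high degree first
        taken = {colors[nb] for nb in adj[node] if nb in colors}
        c = 0
        while c in taken:
            c += 1
        colors[node] = c
    return colors

adj = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}
coloring = greedy_coloring(adj)
print(coloring)   # triangle a-b-c needs 3 colors; d reuses one of them
```

In the don't-care assignment setting, two unspecified cube patterns that must not be merged are connected by an edge, and each color class becomes one consistent assignment.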
APA, Harvard, Vancouver, ISO, and other styles
19

Wang, Peng. "Historical handwriting representation model dedicated to word spotting application." Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4019/document.

Full text
Abstract:
As more and more documents, especially historical handwritten documents, are converted into digitized versions for long-term preservation, the demand for efficient information retrieval techniques in such document images is increasing. The objective of this research is to establish an effective representation model for handwriting, especially for historical manuscripts. The proposed model is intended to support navigation in historical document collections. Specifically, we developed our handwriting representation model with regard to the word spotting application. As a specific pattern recognition task, handwritten word spotting faces many challenges, such as high intra-writer and inter-writer variability. It is now accepted that OCR techniques are unsuccessful on offline handwritten documents, especially historical ones. Therefore, characterization and comparison methods dedicated to handwritten word spotting are strongly required. In this work, we explore several techniques that allow the retrieval of single-style handwritten document images with a query image. The proposed representation model captures two facets of handwriting: morphology and topology. Based on the skeleton of the handwriting, graphs are constructed with the structural points as vertices and the strokes as edges. By assigning the Shape Context descriptor as the vertex label, contextual information about the handwriting is also integrated. Moreover, we develop a coarse-to-fine system for large-scale handwritten word spotting using our representation model. In the coarse selection, graph embedding is adopted for its simple and fast computation. In the fine selection, over the selected regions of interest, a specific similarity measure based on graph edit distance is designed. Given the importance of stroke order in handwriting, a dynamic time warping assignment with block merging is added.
The experimental results on benchmark handwriting datasets demonstrate the power of the proposed representation model and the efficiency of the developed word spotting approach. The main contribution of this work is the proposed graph-based representation model, which provides a comprehensive description of handwriting, especially historical script. Our structure-based model captures the essential characteristics of handwriting without redundancy, while remaining robust to intra-writer variation and specific noise. With additional experiments, we have also shown the potential of the proposed representation model in other symbol recognition applications, such as handwritten musical and architectural symbol classification.
APA, Harvard, Vancouver, ISO, and other styles
20

Raveaux, Romain. "Fouille de graphes et classification de graphes : application à l’analyse de plans cadastraux." Thesis, La Rochelle, 2010. http://www.theses.fr/2010LAROS311/document.

Full text
Abstract:
The work presented in this thesis addresses, from several complementary angles, a broad and ambitious subject: the interpretation of color cadastral maps. In this context, our approach lies at the confluence of several research areas, such as signal and image processing, pattern recognition, artificial intelligence, and knowledge engineering. Although these scientific fields differ in their foundations, they are complementary, and their respective contributions are essential for designing an interpretation system. The core of this work is the automatic processing of 19th-century cadastral documents. The problem is treated within a project bringing together historians, geomatics specialists, and computer scientists. We considered the problem from a systemic point of view, addressing every stage of the processing chain, while also aiming to develop methodologies applicable in other contexts. Cadastral documents have been the subject of many studies, but our work is original in focusing on document interpretation and in basing the study on graph-based models. Appropriate processing steps and methodologies are proposed. In the case of the cadastral maps studied, an answer is given to the problem of bridging the semantic gap between the image and its interpretation.
This thesis tackles the problem of technical document interpretation applied to ancient, colored cadastral maps. This subject is at the crossroads of different fields like signal and image processing, pattern recognition, artificial intelligence, man-machine interaction and knowledge engineering. Indeed, each of these different fields can contribute to building a reliable and efficient document interpretation device. This thesis points out the necessity and importance of services dedicated to historical documents, and a related project named ALPAGE. Subsequently, the main focus of this work is introduced: Content-Based Map Retrieval within an ancient collection of color cadastral maps.
APA, Harvard, Vancouver, ISO, and other styles
21

Raymond, John W. "Applications of graph-based similarity in cheminformatics." Thesis, University of Sheffield, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.251413.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Tierny, Julien. "Reeb graph based 3D shape modeling and applications." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2008. http://tel.archives-ouvertes.fr/tel-00838246.

Full text
Abstract:
With the recent development of 3D technologies, 3D shapes have become an interactive multimedia data type of primary importance. Their most common representation, the polygon mesh, however suffers from high variability under canonical shape-preserving transformations. It is therefore necessary to design intrinsic shape modeling techniques. In this thesis, we explore topological modeling through the study of structures based on Reeb graphs. In particular, we introduce a new shape abstraction, called the enhanced topological skeleton, which enables the study not only of the topological evolution of the level lines of Morse functions but also of their geometric evolution. We demonstrate the utility of this intrinsic shape representation on three research problems related to Computer Graphics and Computer Vision. First, we introduce the notion of geometric calculus on Reeb graphs for the automatic and stable computation of control skeletons for interactive shape manipulation. Then, by introducing the notions of Reeb charts and Reeb patterns, we propose a new method for partial similarity estimation between 3D shapes. We show that this approach outperforms the methods participating in the 2007 international Shape Retrieval Contest (SHREC 2007) by a gain of 14%. Finally, we present two techniques for producing a functional decomposition of a 3D shape, considering both heuristics from the theory of human perception and time-varying 3D data. Concrete application examples illustrate the utility of our approach for each of these research problems.
APA, Harvard, Vancouver, ISO, and other styles
23

Couprie, Camille. "Graph-based variational optimization and applications in computer vision." Phd thesis, Université Paris-Est, 2011. http://tel.archives-ouvertes.fr/tel-00666878.

Full text
Abstract:
Many computer vision applications, such as image filtering, segmentation and stereovision, can be formulated as optimization problems. Recently, discrete, convex, globally optimal methods have received a lot of attention. Many graph-based methods suffer from metrication artefacts: segmented contours are blocky in areas where contour information is lacking. In the first part of this work, we develop a discrete yet isotropic energy minimization formulation for the continuous maximum flow problem that prevents metrication errors. This new convex formulation leads to a provably globally optimal solution. The employed interior point method can optimize the problem faster than existing continuous methods. The energy formulation is then adapted and extended to multi-label problems, and shows improvements over existing methods. Fast parallel proximal optimization tools have been tested and adapted for the optimization of this problem. In the second part of this work, we introduce a framework that generalizes several state-of-the-art graph-based segmentation algorithms, namely graph cuts, random walker, shortest paths, and watershed. This generalization allowed us to exhibit a new case, for which we developed a globally optimal optimization method, named "power watershed". Our proposed power watershed algorithm computes a unique global solution to multi-label problems, and is very fast. We further generalize and extend the framework to applications beyond image segmentation, for example image filtering optimizing an L0-norm energy, stereovision, and fast and smooth surface reconstruction from a noisy cloud of 3D points.
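One member of the unified family, seeded segmentation by shortest paths, is easy to sketch: each element takes the label of the seed with the cheapest geodesic path to it. The tiny 1D "image" below is illustrative; real implementations work on 2D/3D pixel grids.

```python
# Seeded segmentation by shortest (geodesic) paths, via Dijkstra from all
# seeds at once: every node inherits the label of its nearest seed.
import heapq

def shortest_path_labels(weights, seeds):
    """weights[i]: cost of stepping onto node i; seeds: dict node -> label."""
    n = len(weights)
    dist = {s: 0.0 for s in seeds}
    label = dict(seeds)
    pq = [(0.0, s) for s in seeds]
    heapq.heapify(pq)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v in (u - 1, u + 1):           # 1D neighbourhood
            if 0 <= v < n:
                nd = d + weights[v]
                if nd < dist.get(v, float("inf")):
                    dist[v], label[v] = nd, label[u]
                    heapq.heappush(pq, (nd, v))
    return [label[i] for i in range(n)]

# a sharp "edge" in the middle makes crossing it expensive
weights = [1, 1, 1, 9, 1, 1, 1]
print(shortest_path_labels(weights, {0: "bg", 6: "fg"}))
# → ['bg', 'bg', 'bg', 'bg', 'fg', 'fg', 'fg']
```

Graph cuts, random walker, and watershed fit the same seeded-energy template with different exponents on the edge weights, which is the observation behind the power watershed.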
APA, Harvard, Vancouver, ISO, and other styles
24

Lan, Ching Fu. "Design techniques for graph-based error-correcting codes and their applications." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3329.

Full text
Abstract:
In Shannon's seminal paper, "A Mathematical Theory of Communication", he defined "channel capacity", which predicted the ultimate performance that transmission systems can achieve, and suggested that capacity is achievable by error-correcting (channel) coding. The main idea of error-correcting codes is to add redundancy to the information to be transmitted, so that the receiver can exploit the correlation between the transmitted information and the redundancy to correct or detect channel errors. The discovery of turbo codes and the rediscovery of Low-Density Parity-Check (LDPC) codes have revived research in channel coding, with novel ideas and techniques on code concatenation, iterative decoding, graph-based construction, and design based on density evolution. This dissertation focuses on the design of graph-based channel codes such as LDPC and Irregular Repeat-Accumulate (IRA) codes via density evolution, and uses this technique to design IRA codes for scalable image/video communication and LDPC codes for distributed source coding, which can be considered a channel coding problem. The first part of the dissertation covers the design and analysis of rate-compatible IRA codes for scalable image transmission systems. It presents a density-evolution analysis of the effect of puncturing applied to IRA codes and an asymptotic analysis of system performance. In the second part of the dissertation, we consider the design of source-optimized IRA codes. The idea is to take advantage of the Unequal Error Protection (UEP) capability that IRA codes have against errors because of their irregularity. In video and image transmission systems, performance is measured by Peak Signal-to-Noise Ratio (PSNR), and we propose an approach to design IRA codes optimized for this criterion. In the third part of the dissertation, we investigate the Slepian-Wolf coding problem using LDPC codes.
The problems addressed include coding with multiple sources and non-binary sources, and coding using multi-level codes and non-binary codes.
APA, Harvard, Vancouver, ISO, and other styles
25

Poudel, Prabesh. "Security Vetting Of Android Applications Using Graph Based Deep Learning Approaches." Bowling Green State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1617199500076786.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

MANZO, MARIO. "ATTRIBUTED RELATIONAL SIFT-BASED REGIONS GRAPH (ARSRG):DESCRIPTION, MATCHING AND APPLICATIONS." Doctoral thesis, Università degli Studi di Milano, 2014. http://hdl.handle.net/2434/233320.

Full text
Abstract:
Finding correspondences between images is a crucial activity in many overlapping fields of research, such as Image Retrieval and Pattern Recognition. Many existing techniques address this problem using local invariant image features, instead of color, shape and texture, which to some degree lose the large-scale structure of the image. In this thesis, in order to account for spatial relations among the local invariant features and to improve the image representation, first a graph data structure is introduced, where local features are represented by nodes and spatial relations by edges; second, an algorithm able to find matches between local invariant features organized in graph structures is built; third, a mapping procedure from graph to vector space is proposed, in order to speed up the classification process. The effectiveness of the proposed framework is demonstrated through applications in image-based localization and art painting. The literature offers many approximate algorithms for these problems, so a comparison with the state of the art is performed at each step of the process. By using both local and spatial information, the proposed framework outperforms its competitors on the image correspondence problems.
APA, Harvard, Vancouver, ISO, and other styles
27

Cheng, Sibo. "Error covariance specification and localization in data assimilation with industrial application Background error covariance iterative updating with invariant observation measures for data assimilation A graph clustering approach to localization for adaptive covariance tuning in data assimilation based on state-observation mapping Error covariance tuning in variational data assimilation: application to an operating hydrological model." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPAST067.

Full text
Abstract:
Data assimilation techniques are widely applied in industrial problems of field reconstruction or parameter identification. The error covariance matrices in data assimilation, especially the background matrix, are often difficult to specify. In this thesis, we are interested in the specification and localization of covariance matrices in multivariate and multidimensional systems in an industrial context. We propose to improve the covariance specification by iterative processes, and to that end we developed two new iterative methods for constructing the background matrix. The power of these methods is demonstrated numerically in twin experiments with errors that are independent of, or relative to, the true states. We then propose a new concept of localization and apply it to error covariance tuning. Instead of relying on spatial distance, this localization is established purely on the links between state variables and observations. Finally, we apply these new approaches, together with more classical methods for comparison, to a multivariate hydrological model. Variational assimilation is implemented to correct the observed precipitation in order to obtain a better river flow forecast.
APA, Harvard, Vancouver, ISO, and other styles
28

Streib, Kevin. "IMPROVED GRAPH-BASED CLUSTERING WITH APPLICATIONS IN COMPUTER VISION AND BEHAVIOR ANALYSIS." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1331063343.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Koushaeian, Reza. "An Ontology And Conceptual Graph Based Best Matching Algorithm For Context-aware Applications." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613216/index.pdf.

Full text
Abstract:
Context-aware computing is based on using knowledge about the current context. Interpreting the current context into understandable knowledge is carried out by reasoning over context and, in some cases, by matching the current context with a desired context. This thesis concentrates on the context-matching issue in the context-aware computing domain. Context matching can be done in various ways, as in other matching processes. Our approach is best matching, so as to generate graded similarity results rather than being limited to Boolean values. We use an ontology as the encoded domain knowledge for our matching method. The context-matching method depends on how context is represented, and we selected conceptual graphs to represent it. We propose a generic algorithm for context matching, based on ontological information, that benefits from conceptual graph theory and its advantages.
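A minimal illustration of best matching returning graded rather than Boolean results, using a plain Jaccard score over attribute sets as a hypothetical stand-in for conceptual-graph matching:

```python
def graded_match(context_a, context_b):
    """Best matching returns a similarity in [0, 1] rather than a Boolean;
    a Jaccard score over attribute sets stands in for the (richer)
    conceptual-graph matching described in the abstract."""
    if not context_a and not context_b:
        return 1.0  # two empty contexts match trivially
    inter = len(context_a & context_b)
    union = len(context_a | context_b)
    return inter / union
```

The best match among candidate contexts is then simply `max(candidates, key=lambda c: graded_match(current, c))`.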
APA, Harvard, Vancouver, ISO, and other styles
30

Watkins, Gregory Shroll. "A framework for interpreting noisy, two-dimensional images, based on a fuzzification of programmed, attributed graph grammars." Thesis, Rhodes University, 1998. http://hdl.handle.net/10962/d1004862.

Full text
Abstract:
This thesis investigates a fuzzy syntactic approach to the interpretation of noisy two-dimensional images. This approach is based on a modification of the attributed graph grammar formalism to utilise fuzzy membership functions in the applicability predicates. As far as we are aware, this represents the first such modification of graph grammars. Furthermore, we develop a method for programming the resultant fuzzy attributed graph grammars through the use of non-deterministic control diagrams. To do this, we modify the standard programming mechanism to allow it to cope with the fuzzy certainty values associated with productions in our grammar. Our objective was to develop a flexible framework which can be used for the recognition of a wide variety of image classes, and which is adept at dealing with noise in these images. Programmed graph grammars are specifically chosen for the ease with which they allow one to specify a new two-dimensional image class. We implement a prototype system for Optical Music Recognition using our framework. This system allows us to test the capabilities of the framework for coping with noise in the context of handwritten music score recognition. Preliminary results from the prototype system show that the framework copes well with noisy images.
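The fuzzification of applicability predicates can be illustrated with a membership function; the triangular shape and the 90-degree join scenario below are assumptions for the sketch, not the thesis's actual predicates:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at or below a, rising to 1 at b,
    falling back to 0 at or above c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def applicability(measured_angle):
    """Hypothetical fuzzy applicability of a production that expects a
    roughly 90-degree join between two strokes in a noisy image."""
    return triangular(measured_angle, 60.0, 90.0, 120.0)
```

A noisy 75-degree measurement still fires the production with certainty 0.5 instead of failing a crisp test.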
APA, Harvard, Vancouver, ISO, and other styles
31

TESFAYE, YONATAN TARIKU. "Applications of a graph theoretic based clustering framework in computer vision and pattern recognition." Doctoral thesis, Università IUAV di Venezia, 2018. http://hdl.handle.net/11578/282321.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Vellambi, Badri Narayanan. "Applications of graph-based codes in networks: analysis of capacity and design of improved algorithms." Diss., Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/37091.

Full text
Abstract:
The conception of turbo codes by Berrou et al. has created a renewed interest in modern graph-based codes. Several encouraging results that have come to light since then have fortified the role these codes shall play as potential solutions for present and future communication problems. This work focuses on both practical and theoretical aspects of graph-based codes. The thesis can be broadly categorized into three parts. The first part of the thesis focuses on the design of practical graph-based codes of short lengths. While both low-density parity-check codes and rateless codes have been shown to be asymptotically optimal under the message-passing (MP) decoder, the performance of short-length codes from these families under MP decoding is starkly sub-optimal. This work first addresses the structural characterization of stopping sets to understand this sub-optimality. Using this characterization, a novel improved decoder that offers several orders of magnitude improvement in bit-error rates is introduced. Next, a novel scheme for the design of a good rate-compatible family of punctured codes is proposed. The second part of the thesis aims at establishing these codes as a good tool to develop reliable, energy-efficient and low-latency data dissemination schemes in networks. The problems of broadcasting in wireless multihop networks and of unicast in delay-tolerant networks are investigated. In both cases, rateless coding is seen to offer an elegant means of achieving the goals of the chosen communication protocols. The ratelessness and the randomness in the encoding process make this scheme specifically suited to such network applications. The final part of the thesis investigates an application of a specific class of codes called network codes to finite-buffer wired networks. This part of the work aims at establishing a framework for the theoretical study and understanding of finite-buffer networks.
The proposed Markov chain-based method extends existing results to develop an iterative Markov chain-based technique for general acyclic wired networks. The framework not only estimates the capacity of such networks, but also provides a means to monitor network traffic and packet drop rates on various links of the network.
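As a single-link illustration of the Markov-chain view of finite buffers (not the iterative general-network technique the thesis develops), the steady-state drop probability of one M/M/1/K buffer can be computed in closed form:

```python
def mm1k_drop_probability(lam, mu, K):
    """Steady-state blocking (packet-drop) probability of a single link
    with a finite buffer of size K, modelled as an M/M/1/K birth-death
    Markov chain with arrival rate `lam` and service rate `mu`."""
    rho = lam / mu
    if rho == 1.0:
        return 1.0 / (K + 1)  # uniform occupancy distribution
    # occupancy distribution p_n = (1 - rho) rho^n / (1 - rho^(K+1))
    probs = [(1 - rho) * rho ** n / (1 - rho ** (K + 1)) for n in range(K + 1)]
    return probs[K]  # arrivals finding a full buffer are dropped
```

For a half-loaded link (rho = 0.5) with a one-packet buffer, a third of the arrivals are dropped.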
APA, Harvard, Vancouver, ISO, and other styles
33

Lu, Qifeng. "Bivariate Best First Searches to Process Category Based Queries in a Graph for Trip Planning Applications in Transportation." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/26444.

Full text
Abstract:
With the technological advancement in computer science, Geographic Information Science (GIScience), and transportation, more and more complex path finding queries including category based queries are proposed and studied across diverse disciplines. A category based query, such as Optimal Sequenced Routing (OSR) queries and Trip Planning Queries (TPQ), asks for a minimum-cost path that traverses a set of categories with or without a predefined order in a graph. Due to the extensive computing time required to process these complex queries in a large scale environment, efficient algorithms are highly desirable whenever processing time is a consideration. In Artificial Intelligence (AI), a best first search is an informed heuristic path finding algorithm that uses domain knowledge as heuristics to expedite the search process. Traditional best first searches are single-variate in terms of the number of variables to describe a state, and thus not appropriate to process these queries in a graph. 
In this dissertation, 1) two new types of category based queries, the Category Sequence Traversal Query (CSTQ) and the Optimal Sequence Traversal Query (OSTQ), are proposed; 2) the existing single-variate best first searches are extended to multivariate best first searches in terms of the state specified, and a class of new concepts (state graph, sub state graph, sub state graph space, local heuristic, local admissibility, local consistency, global heuristic, global admissibility, and global consistency) is introduced into best first searches; 3) two bivariate best first search algorithms, C* and O*, are developed to process CSTQ and OSTQ in a graph, respectively; 4) for each of C* and O*, theorems on optimality and optimal efficiency in a sub state graph space are developed and identified; 5) a family of algorithms including C*-P, C-Dijkstra, O*-MST, O*-SCDMST, O*-Dijkstra, and O*-Greedy is identified, and case studies are performed on path finding in transportation networks and/or fully connected graphs, either directed or undirected; and 6) O*-SCDMST is adopted to efficiently retrieve optimal solutions for OSTQ using a network distance metric in a large transportation network.
Ph. D.
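C* and O* extend classic single-variate best-first search; for reference, the single-variate baseline (Dijkstra's algorithm) on an adjacency-dict graph can be sketched as follows (graph layout and names are illustrative):

```python
import heapq

def dijkstra(graph, source, target):
    """Classic single-variate best-first search: `graph` maps a node to
    {neighbor: edge_cost}; returns the minimum cost from source to target,
    or infinity if the target is unreachable."""
    dist = {source: 0}
    done = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        if u == target:
            return d
        done.add(u)
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")
```

A category based query additionally tracks which categories have been visited, which is exactly the extra state variable that motivates the bivariate searches above.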
APA, Harvard, Vancouver, ISO, and other styles
34

Ali, Ismael Ali. "Using and Improving Computational Cognitive Models for Graph-Based Semantic Learning and Representation from Unstructured Text with Applications." Kent State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=kent1524217759138453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Koopman, Bevan Raymond. "Semantic search as inference : applications in health informatics." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/71385/1/Bevan_Koopman_Thesis.pdf.

Full text
Abstract:
This thesis developed new search engine models that elicit the meaning behind the words found in documents and queries, rather than simply matching keywords. These new models were applied to searching medical records: an area where search is particularly challenging yet can have significant benefits to our society.
APA, Harvard, Vancouver, ISO, and other styles
36

Xing, Yihan. "An inertia-capacitance beam substructure formulation based on bond graph terminology with applications to rotating beam and wind turbine rotor blades." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for marin teknikk, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-11637.

Full text
Abstract:
In this thesis, an Inertia-Capacitance (IC) beam substructure formulation based on bond graph terminology is developed. The IC beam is formulated in the centre of mass body fixed coordinate system which allows for easy interfacing of the IC beam in a multibody system setting. This multibody floating frame approach is also computationally cheaper than nonlinear finite element methods. Elastic deformations in the IC beam are assumed to be small and described by modal superposition. The formulation couples rigid body motions and elastic deformations in a nonlinear fashion. Detailed derivations for a two-dimensional planar IC beam with bending modes are presented. Brief derivations are also presented for the two-dimensional IC beam with both bending and axial modes and for the three-dimensional IC beam with bending modes. A modal acceleration method via the decoupling of modes is developed for use in the IC beam. The Karnopp-Margolis method is used in the model set-ups to ensure complete integral causality. This results in an efficient numerical system. The large deflection cantilevered beam and the rotating beam spin-up maneuver problems are solved. Convergence studies of various model parameters are performed. The effects of axial modes in the spin-up maneuver problem are also investigated. Investigations are also made on the hinges used for the substructure interconnections. The IC beam is shown to be capable of solving these problems accurately and efficiently. Lastly, the methodology to apply the IC beam formulation to the wind turbine rotor blades is presented.
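Modal superposition of small elastic deflections, as used in the IC beam, can be illustrated numerically; the pinned-pinned sine mode shapes below are an assumption for the sketch, not the thesis's actual shape functions:

```python
import math

def beam_deflection(x, L, modal_coords):
    """Small elastic deflection by modal superposition for a pinned-pinned
    beam of length L: w(x) = sum_i q_i * sin(i * pi * x / L), where the
    q_i are the modal coordinates (the time-varying states in the model)."""
    return sum(q * math.sin((i + 1) * math.pi * x / L)
               for i, q in enumerate(modal_coords))
```

Truncating `modal_coords` to a few modes is the usual accuracy/cost trade-off that the convergence studies in the abstract investigate.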
APA, Harvard, Vancouver, ISO, and other styles
37

Bodvill, Jonatan. "Enterprise network topology discovery based on end-to-end metrics : Logical site discovery in enterprise networks based on application level measurements in peer- to-peer systems." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-227803.

Full text
Abstract:
In data intensive applications deployed in enterprise networks, especially applications utilizing peer-to-peer technology, locality is of high importance. Peers should aim to maximize data exchange with other peers where the connectivity is the best. In order to achieve this, locality information on which peers can base their decisions must be available. This information is not trivial to find, as there is no readily available global knowledge of which nodes have good connectivity. Having each peer try other peers randomly until it finds good enough partners is costly and lowers the locality of the application until it converges. In this thesis a solution is presented which creates a logical topology of a peer-to-peer network, grouping peers into clusters based on their connectivity metrics. This can then be used to aid the peer-to-peer partner selection algorithm to allow for intelligent partner selection. A graph model of the system is created, where peers in the system are modelled as vertices and connections between peers are modelled as edges, with a weight in relation to the quality of the connection. The problem is then modelled as a weighted graph clustering problem, which is a well-researched problem with a lot of published work tied to it. State-of-the-art graph community detection algorithms are researched, selected depending on factors such as performance and scalability, optimized for the current purpose, and implemented. The results of running the algorithms on the streaming data are evaluated against known information. The results show that unsupervised graph community detection algorithms create useful insights into a network's connectivity structure and can be used in peer-to-peer contexts to find the best partners to exchange data with.
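The clustering step described above is a weighted graph community detection problem; a minimal, deterministic label-propagation sketch (an illustrative algorithm, not necessarily the one selected in the thesis):

```python
from collections import defaultdict

def label_propagation(edges):
    """Asynchronous label propagation on a weighted, undirected graph.
    `edges` is a list of (u, v, weight); returns node -> community label.
    Each node repeatedly adopts the label with the largest total neighbor
    weight; ties go to the smallest label so the run is deterministic."""
    nbrs = defaultdict(dict)
    for u, v, w in edges:
        nbrs[u][v] = w
        nbrs[v][u] = w
    labels = {n: n for n in nbrs}
    changed = True
    while changed:
        changed = False
        for n in sorted(nbrs):
            score = defaultdict(float)
            for m, w in nbrs[n].items():
                score[labels[m]] += w
            best = min(score, key=lambda l: (-score[l], l))
            if labels[n] != best:
                labels[n] = best
                changed = True
    return labels
```

On two triangles joined by one low-weight link (a poor connection between sites), the nodes of each triangle end up sharing a label.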
APA, Harvard, Vancouver, ISO, and other styles
38

Singh, Saurabh. "Characterizing applications by integrating andimproving tools for data locality analysis and programperformance." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492741656429829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Teng, Sin Yong. "Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-433427.

Full text
Abstract:
As new technologies for energy-intensive industries continue to develop, existing plants gradually fall behind in efficiency and productivity. Fierce market competition and environmental legislation force such traditional plants out of operation and into shutdown. Process improvement and retrofit projects are essential to maintaining the operational performance of these plants. Current approaches to process improvement are mainly process integration, process optimization, and process intensification. These areas generally rely on mathematical optimization, the solver's experience, and operational heuristics, and they serve as the foundation for process improvement; their performance, however, can be further enhanced by modern computational intelligence. The purpose of this work is therefore to apply advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. The work approaches this problem by simulating industrial systems and contributes the following: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modelling and optimization of individual units; (ii) application of dimensionality reduction (e.g., principal component analysis, autoencoders) for multi-objective optimization of processes with multiple units; (iii) design of a new tool, bottleneck tree analysis (BOTA), for analyzing and removing problematic parts of a system, together with an extension that handles multidimensional problems with a data-driven approach; (iv) demonstration of the effectiveness of Monte Carlo simulation, neural networks, and decision trees for decision-making when integrating a new process technology into existing processes; (v) comparison of Hierarchical Temporal Memory (HTM) and dual optimization with several predictive tools for supporting real-time operations management; (vi) implementation of an artificial neural network within an interface to the conventional process graph (P-graph); and (vii) an outlook on the future of artificial intelligence and process engineering in biosystems through a commercially oriented multi-omics paradigm.
APA, Harvard, Vancouver, ISO, and other styles
40

Hon, Tze-lap, and 韓子立. "Dynamic Graph-Based Software Watermarking – CT Algorithm: Analysis, Improvement and Application in Java." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/35083753207335613295.

Full text
Abstract:
碩士
國立臺灣科技大學
資訊工程系
95
Software watermarking is an important technique for protecting programs and intellectual property rights. In this thesis, we discuss the development of software watermarks and the problems that arise in dynamic graph-based software watermarking when the CT algorithm is applied to Java technology. Because Java differs from traditional programming languages, we use an object-oriented analysis and design approach to solve these problems. By embedding watermarks into the input sequences of objects with strong relations, we can prevent watermark tampering. Our experimental results demonstrate that our approach not only effectively increases the difficulty of, and the time required for, tampering with watermarks, but also reduces the memory usage and the resources required for loading these watermarks.
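The CT algorithm embeds a watermark number in the topology of a graph structure built at run time; the base-k digit encoding below only illustrates the underlying radix idea, not the actual graph construction:

```python
def encode_watermark(n, base=4):
    """Encode a non-negative integer watermark as a chain of digit
    values (a toy stand-in for a CT-style radix graph, where each
    digit would become a node in a heap-allocated structure)."""
    digits = []
    while True:
        digits.append(n % base)
        n //= base
        if n == 0:
            break
    return digits  # least-significant digit first

def decode_watermark(digits, base=4):
    """Recover the watermark integer from the digit chain."""
    return sum(d * base ** i for i, d in enumerate(digits))
```

An attacker must recover and alter the whole structure consistently to destroy the watermark, which is what makes tampering costly.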
APA, Harvard, Vancouver, ISO, and other styles
41

Hung, Pei-Hsuan, and 洪培軒. "Downsampling of Graph Signals and Object Detection Application Using Fast Region-based Convolutional Networks." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/54802948944695108964.

Full text
Abstract:
碩士
國立臺灣大學
電信工程學研究所
104
This thesis consists of two parts. In the first part, we study downsampling methods for graph signals. Graph signal processing is an emerging field of signal processing that lets us analyze signals with irregular structure, and it is becoming increasingly significant. Operations on such datasets as graph signals have been the subject of many recent studies, especially basic signal operations such as shifting, modulating, and downsampling. However, the graphs arising in applications can be very large, which poses many computational and technical challenges for storage and analysis. To compress these datasets on graphs more effectively, we propose a pre-filtering classifier that selectively downsamples signals while also considering the distribution of the signals on the graph. Compared with other methods, such as color-based and topology-based methods, our proposed method achieves better performance in terms of higher SNR. Moreover, it is more efficient and effective, with shorter computing time and fewer vertices in use during compression. The second part of this thesis describes how to use Fast Region-based Convolutional Networks (Fast R-CNN) to develop object detection applications, from setting up the environment, including the GPU and the parallel computing platform, to the training and testing processes of the Fast R-CNN algorithm. Thanks to region-based convolutional neural networks, object detection accuracy has improved greatly in recent years, and the Fast R-CNN algorithm achieves near real-time rates even with very deep networks. To explore this efficient and powerful method further, several applications based on it are proposed. In addition, a machine learning technique is applied to graph signal processing.
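Object detectors such as Fast R-CNN are commonly evaluated by the overlap between predicted and ground-truth boxes; a standard Intersection-over-Union sketch (box format is an assumed (x1, y1, x2, y2) convention):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2),
    the overlap criterion commonly used when matching detections to
    ground truth."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

Two 2x2 boxes offset by one unit in each direction share one unit of area out of seven, giving IoU 1/7.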
APA, Harvard, Vancouver, ISO, and other styles
42

Chuang, Yu-Hsuan, and 莊育瑄. "Design and Implementation of Social Application for Elderly Care Based on Open Graph Protocol." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/40106388707820215009.

Full text
Abstract:
碩士
國立臺灣大學
電子工程學研究所
100
Social Network Services (SNS) such as Facebook and Twitter have become the next-generation paradigm for obtaining data, information, and knowledge on the web. With the growth of the aged population, more and more elderly people also hope to share their feelings, videos, and photos, and to get more opportunities to interact with their families and friends through social networks. However, it is quite hard for elderly people with little knowledge of the Internet to learn to use social network services with complex functions. The aim of this work is to develop social network software designed for elderly people: it customizes the graphical user interface (GUI), supports voice input and output, and simplifies the application to its major functions. Furthermore, we propose an innovative mechanism for an elderly social network based on the Open Graph Protocol (OGP), which provides more of the related information elderly users favor, including third-party services and various applications and websites, so that the service centers on the user's associations with people and things. Future network application services will be full of individualized experiences, which will also apply to social networks for elderly people, letting users benefit from the social network anywhere.
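Open Graph Protocol data is carried in `<meta property="og:...">` tags in a page's head; a minimal stdlib sketch for extracting them (the example markup is made up, the property names follow the OGP convention):

```python
from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collect Open Graph Protocol <meta property="og:..."> tags."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            prop = d.get("property", "")
            if prop.startswith("og:") and "content" in d:
                self.og[prop] = d["content"]

def parse_og(html):
    """Return a dict of og: properties found in an HTML fragment."""
    p = OGParser()
    p.feed(html)
    return p.og
```

A service built on OGP would use such properties (title, type, image, url) to present third-party content uniformly to the user.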
APA, Harvard, Vancouver, ISO, and other styles
43

"Structured graphs: a visual formalism for scalable graph based tools and its application to software structured analysis." University of Technology, Sydney. School of Computing Sciences, 1996. http://hdl.handle.net/2100/296.

Full text
Abstract:
Very large graphs are difficult for a person to browse and edit on a computer screen. This thesis introduces a visual formalism, structured graphs, which supports the scalable browsing and editing of very large graphs. This approach is relevant to a given application when it incorporates a large graph which is composed of named nodes and links, and abstraction hierarchies which can be defined on these nodes and links. A typical browsing operation is the selection of an arbitrary group of nodes and the display of the network of nodes and links for these nodes. Typical editing operations are: adding a new link between two nodes, adding a new node in the hierarchy, and moving sub-graphs to a new position in the node hierarchy. These operations are scalable when the number of user steps involved remains constant regardless of how large the graph is. This thesis shows that with structured graphs, these operations typically take one user step. We demonstrate the utility of the structured graph formalism in an application setting. Computer aided software engineering tools, and in particular structured analysis tools, are the chosen application area for this thesis, as they are graph based, and existing tools, though adequate for medium sized systems, lack scalability. In this thesis examples of an improved design for a structured analysis tool, based on structured graphs, are given. These improvements include scalable browsing and editing operations to support an individual software analyst, and component composition operations to support the construction of large models by a group of software analysts. Finally, we include proofs of key properties and descriptions of two text-based implementations.
APA, Harvard, Vancouver, ISO, and other styles
44

Chung, Wei-Shih, and 鍾維時. "Application of Two Phases Model Based on Directed Acyclic Graph Relevance Vector Machine for Multi-Class Credit Rating Problems." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/20427235240370726030.

Full text
Abstract:
碩士
國立暨南國際大學
資訊管理學系
99
Over the past decades, corporate credit rating has been extensively studied by researchers in a turbulent economic environment; rating performance has a potential impact on bank decision-making. Traditional corporate credit rating models employ statistical methods to estimate rating status, but models established with statistical methods do not perform satisfactorily on increasingly complex data. Researchers have therefore begun to use machine learning techniques to cope with this problem, since machine learning approaches are not bound by strict statistical assumptions. In this work, we apply the Relevance Vector Machine (RVM) and Directed Acyclic Graph (DAG) methods to multi-class classification (namely DAGRVM), and the experimental results can serve as a reference for bankers making suitable credit-granting decisions. To overcome the opaque nature of the RVM, the investigation utilizes Rough Set Theory (RST) to derive intuitive decision rules from the RVM; comprehensible decision rules enhance practical applicability. The experimental results show that the DAGRVM method is an effective technique for credit rating classification, obtaining better classification accuracy (88%) than the Directed Acyclic Graph Support Vector Machine (DAGSVM). Moreover, the rules extracted from the RV model can be used effectively as a reference for enterprises.
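The DAG part of DAGRVM evaluates one-vs-one classifiers along a decision DAG, eliminating one candidate class per node; a sketch with made-up rating classes and threshold rules standing in for trained RVMs:

```python
def ddag_classify(classes, pairwise, x):
    """Decision-DAG evaluation over one-vs-one classifiers.
    `pairwise[(a, b)]` is a callable returning the winning class (a or b);
    each comparison of the first and last remaining candidates eliminates
    the loser, until one class is left."""
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]
        winner = pairwise[(a, b)](x)
        if winner == a:
            remaining.pop()     # b is eliminated
        else:
            remaining.pop(0)    # a is eliminated
    return remaining[0]
```

With N classes, only N - 1 pairwise evaluations are needed per sample, which is why the DAG layout is attractive for multi-class credit rating.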
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Wei-An, and 陳韋安. "Harmony Graph, a Social-Network Based Model for Symbolic Music Content, and its Application to Music Visualization and Genre Classification." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/01915270528749095540.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Sheng-Feng, and 吳聲鋒. "Incorporating Centrality-based Plane Graph Drawing and Force-directed Method to Visualize Small-World Graphs and its Application to Semiconductor Wafer Fabrication." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/24v6m5.

Full text
Abstract:
碩士
國立交通大學
工業工程與管理系所
103
Analysis of large and complex network graphs is an important issue. Small-world networks are a special type of complex network graph whose structure cannot be effectively recognized by conventional graph drawing algorithms, which makes such networks difficult to identify and analyze. To solve this problem, this thesis proposes a visualization approach that utilizes centrality to remove some links between nodes, uses a plane graph drawing method to lay out the reduced subgraph without any edge crossing, applies a force-directed graph drawing method based on node-edge repulsion to improve the layout, and finally adds back the removed links. In our experimental analysis, the results not only convey the same information as previous methods but also reveal additional useful information: we gain a better understanding of the relationships between nodes and uncover structure that could not be seen before. An application of this approach to a semiconductor wafer fabrication example is demonstrated.
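The first step above removes links using a centrality measure; the sketch below uses degree centrality and a keep-fraction parameter, both illustrative choices since the abstract does not fix them:

```python
def prune_by_centrality(edges, keep_fraction=0.5):
    """Drop the links whose endpoints have the highest combined degree
    centrality, keeping the `keep_fraction` least-central edges; the
    removed links would later be added back onto the improved layout."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    ranked = sorted(edges, key=lambda e: degree[e[0]] + degree[e[1]])
    keep = max(1, int(len(edges) * keep_fraction))
    return ranked[:keep]
```

In a small-world graph, this tends to strip the hub-to-hub "shortcut" links first, leaving a sparser skeleton that a plane drawing method can lay out without crossings.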
APA, Harvard, Vancouver, ISO, and other styles
47

Morkhande, Rahul Raj Kumar. "Characterization of Divergence resulting from Workload, Memory and Control-Flow behavior in GPGPU Applications." Thesis, 2018. https://etd.iisc.ac.in/handle/2005/5453.

Full text
Abstract:
GPGPUs have emerged as high-performance computing platforms and are used for boosting the performance of general non-graphics applications from various scientific domains. These applications span varied areas like social networks, defense, bioinformatics, data science, medical imaging, fluid dynamics, etc. [3]. In order to efficiently exploit the computing potential of these accelerators, the application should be well mapped to the underlying architecture. As a result, different characteristics of behaviors can be seen in applications running on GPGPUs. Applications are characterized as regular or irregular based on their behavior. Regular applications typically operate on array-like data structures whose run-time behavior can be statically predicted, whereas irregular applications operate on pointer-based data structures like graphs, trees, etc. [2]. Irregular applications are generally characterized by the presence of a high degree of data-dependent control flow and memory accesses. In the literature, we find various efforts to characterize such applications, particularly the irregular ones which exhibit behavior that results in run-time bottlenecks. Burtscher et al. [2] investigated various irregular GPGPU applications by quantifying control flow and memory access behaviors on a real GPU device. Molly et al. [4] analyzed performance aspects of these behaviors on a cycle-accurate GPU simulator [1]. Qiumin Xu et al. [5] studied execution characteristics of graph-based applications on GPGPUs. All of these works focused on characterizing the divergences at the kernel level but not at the thread level. In this work, we provide an in-depth characterization of three divergences resulting from 1) workload distribution, 2) memory access and 3) control-flow behaviors at different levels of the GPU thread hierarchy, with the purpose of analyzing and quantifying the divergence characteristics at warp, thread block, and kernel level.
In Chapter 1, we review certain aspects of CPUs, GPUs and how they are different from each other. Then we discuss various characteristics of GPGPU applications. In Chapter 2, we provide background on GPU architectures, CUDA programming models, and the GPU SIMD execution model. We briefly explain key programming concepts of CUDA like the GPU thread hierarchy and different addressable memory spaces. We describe various behaviors that cause divergence across the parallel threads. We then review the related work in the context of the divergence studied in this work, followed by this thesis's contributions. In Chapter 3, we explain our methodology for quantifying the workload and branch divergence across the threads at various levels of thread organization. We then present our characterization methodology to quantify divergent aspects of memory instructions. In Chapter 4, we present our chosen benchmarks taken from various suites and show the baseline GPGPU-Sim configuration we used for evaluating our methodology. Then we discuss our characterization results for workload and branch divergence at warp, thread-block and kernel level for some interesting kernels of applications. We examine graph-based application divergence behaviors and show how they vary across threads. We present our characterization of the memory access behaviors of irregular applications using instruction classification based on spatial locality. We then discuss the relationship between the throughput and divergence measures by studying their correlation coefficients. To summarize, we quantified and analyzed the control-flow and workload divergence across the threads at warp, thread-block and kernel level for a diverse collection of 12 GPGPU applications which exhibit both regular and irregular behaviors. By using threads' hardware utilization efficiency and a measure we call 'Average normalized instructions per thread', we quantify branch and workload divergence respectively.
Our characterization technique for memory divergence classifies memory instructions into four different groups based on the property of intra-warp spatial locality of instructions. We then quantify the impact of memory divergence using the behavior of the GPU L1 data cache.
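The two divergence measures named in this summary can be sketched as below. The exact definitions used in the thesis may differ, so treat this as one plausible reading of "hardware utilization efficiency" (branch divergence) and "average normalized instructions per thread" (workload divergence):

```python
def warp_utilization(active_mask_trace, warp_size=32):
    """SIMD utilization: fraction of lanes active, averaged over the
    dynamic instructions a warp issues (1.0 = no branch divergence).
    active_mask_trace holds the number of active lanes per issued
    instruction."""
    if not active_mask_trace:
        return 0.0
    return sum(active_mask_trace) / (len(active_mask_trace) * warp_size)

def avg_normalized_instructions(per_thread_insts):
    """Workload divergence: each thread's instruction count normalized
    by the maximum in the group (warp, thread block, or kernel), then
    averaged. 1.0 means a perfectly balanced workload."""
    peak = max(per_thread_insts)
    if peak == 0:
        return 0.0
    return sum(c / peak for c in per_thread_insts) / len(per_thread_insts)
```

Because both measures are normalized to [0, 1], they can be compared across warps, thread blocks, and kernels, which is what enables the multi-level characterization described above.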
APA, Harvard, Vancouver, ISO, and other styles
48

Lin, Ko-Jui, and 林克叡. "Graph-based Asymmetric Opportunistic Networks with Applications." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/74697114481332384267.

Full text
Abstract:
Master's
Chung Hua University
Master's Program, Department of Computer Science and Information Engineering
102
An opportunistic network is an unreliable network architecture in which each mobile device can only communicate directly with the mobile devices within its communication range. Previous results usually assume that mobile devices in an opportunistic network can move in an area without obstacles, but this assumption is unrealistic because many spaces, such as sky or ocean areas, contain obstacles. In this thesis, we propose a graph-based asymmetric opportunistic network, which is more appropriate for real applications. The graph-based asymmetric opportunistic network also reflects that popular places in a deployed area have higher road usage rates than others. This thesis further discusses how to deploy a fixed number of information exchange stations to improve the packet delivery ratio and delay time of a given graph-based asymmetric opportunistic network. We define a mobility model using graph techniques and use this model to decide where to locate the information exchange stations. Finally, we demonstrate the contribution of the proposed ideas by conducting simulations on a bicycle asymmetric opportunistic network.
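One simple way to realize "place stations where road usage is highest" is to estimate per-node visit frequency and pick the top-k nodes. The sketch below uses random walks on the road graph as a crude stand-in for the thesis's mobility model; the function name and parameters are illustrative:

```python
import random
from collections import Counter

def place_stations(adj, k, walkers=200, steps=50, seed=0):
    """Place k information-exchange stations at the nodes that mobile
    devices visit most often. Visit frequency is estimated by
    simulating random walks over the road graph, where adj maps each
    node to the list of its reachable neighbors."""
    rng = random.Random(seed)
    visits = Counter()
    nodes = list(adj)
    for _ in range(walkers):
        cur = rng.choice(nodes)
        for _ in range(steps):
            visits[cur] += 1
            nbrs = adj[cur]
            if not nbrs:
                break                 # dead end: walker stops
            cur = rng.choice(nbrs)
    return [n for n, _ in visits.most_common(k)]
```

On a star-shaped road graph the hub collects roughly half of all visits, so it is chosen first, which matches the intuition that popular places should host the stations.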
APA, Harvard, Vancouver, ISO, and other styles
49

"Graph-Based Sparse Learning: Models, Algorithms, and Applications." Doctoral diss., 2014. http://hdl.handle.net/2286/R.I.27437.

Full text
Abstract:
Sparse learning is a powerful tool for generating models of high-dimensional data with high interpretability, and it has many important applications in areas such as bioinformatics, medical image processing, and computer vision. Recently, a priori structural information has been shown to be powerful for improving the performance of sparse learning models. A graph is a fundamental way to represent structural information about features. This dissertation focuses on graph-based sparse learning. The first part of this dissertation aims to integrate a graph into sparse learning to improve performance. Specifically, the problem of feature grouping and selection over a given undirected graph is considered. Three models are proposed along with efficient solvers to achieve simultaneous feature grouping and selection, enhancing estimation accuracy. One major challenge is that it remains computationally difficult to solve large-scale graph-based sparse learning problems. An efficient, scalable, and parallel algorithm for one widely used graph-based sparse learning approach, anisotropic total variation regularization, is therefore proposed, explicitly exploiting the structure of the graph. The second part of this dissertation focuses on uncovering the graph structure from the data. Two issues in graphical modeling are considered: one is the joint estimation of multiple graphical models using a fused lasso penalty, and the other is the estimation of hierarchical graphical models. The key technical contribution is to establish the necessary and sufficient condition for the graphs to be decomposable. Based on this key property, a simple screening rule is presented, which reduces the size of the optimization problem and dramatically cuts the computational cost.
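The goal of feature grouping and selection over a graph can be illustrated with a deliberately crude scheme: take the connected components of the feature graph as groups, tie each group to a shared coefficient, and drop groups whose shared value is small. This is only an intuition pump, not the penalized models or solvers of the dissertation; all names are hypothetical:

```python
def connected_components(n, edges):
    """Union-find over n feature indices; edges encode the feature graph."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

def group_and_select(coefs, edges, tau=0.1):
    """Tie graph-connected features to their group-mean coefficient and
    zero out groups whose shared value falls below tau -- a crude
    stand-in for simultaneous grouping and selection."""
    out = [0.0] * len(coefs)
    for group in connected_components(len(coefs), edges):
        mean = sum(coefs[i] for i in group) / len(group)
        if abs(mean) >= tau:
            for i in group:
                out[i] = mean
    return out
```

The actual models in the dissertation achieve this effect through convex penalties rather than a hard post-hoc rule, which is what makes efficient large-scale solvers both necessary and nontrivial.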
Dissertation/Thesis
Doctoral Dissertation Computer Science 2014
APA, Harvard, Vancouver, ISO, and other styles
50

Hsu, Chi-Yu, and 胥吉友. "Improved Image Segmentation Techniques Based on Superpixels and Graph Theory with Applications of Saliency Detection." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/40870211370266280310.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Communication Engineering
101
Image segmentation is a fundamental problem in computer vision and image processing. Though this topic has been researched for many years, it is still a challenging task. Recently, research on superpixels has made great progress. This new technique makes traditional segmentation algorithms more efficient and improves their performance. On the other hand, saliency detection is another new topic in image processing, and its performance is usually closely related to the segmentation techniques used. In this thesis, we propose two algorithms, for image segmentation and saliency detection respectively. For image segmentation, an effective graph-based image segmentation algorithm using a superpixel-based graph representation is introduced. The techniques of SLIC superpixels, 5-D spectral clustering, and boundary-focused region merging are adopted in the proposed algorithm. With SLIC superpixels, the original image segmentation problem is transformed into a superpixel labeling problem, which makes the proposed algorithm more efficient than pixel-based segmentation algorithms. With the proposed methods of 5-D spectral clustering and boundary-focused region merging, position information is considered during clustering and the threshold for region merging can be adapted. These techniques make the segmentation result more consistent with human perception. Simulations on the Berkeley segmentation database show that our proposed method outperforms state-of-the-art methods. For saliency detection, a very effective saliency detection algorithm is proposed. Our algorithm is mainly based on two new techniques. First, the discrete cosine transform (DCT) is used for constructing the block-wise saliency map. Then, superpixel-based segmentation is applied.
Since DCT coefficients can reflect the color features of each block in the frequency domain and superpixels can well preserve object boundaries, with these two techniques the performance of saliency detection can be significantly improved. Simulations performed on a database of 1000 images with human-marked ground truths show that our proposed method extracts the salient region very accurately and outperforms existing saliency detection methods.
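The block-wise DCT saliency idea can be sketched as follows: transform each block, take the magnitudes of its DCT coefficients as a frequency-domain feature, and score each block by how far its feature lies from the average over all blocks. This is an illustrative reconstruction under assumptions, not the thesis's exact formulation (which also integrates superpixel segmentation):

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a small square block (list of lists);
    normalization constants are omitted since only relative
    magnitudes matter here."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * y + 1) * v / (2 * n)))
            out[u][v] = s
    return out

def block_saliency(blocks):
    """Block-wise saliency: distance of each block's DCT-magnitude
    feature from the mean feature over all blocks, so blocks whose
    frequency content stands out from the image score high."""
    feats = []
    for b in blocks:
        c = dct2(b)
        feats.append([abs(v) for row in c for v in row])
    mean = [sum(f[i] for f in feats) / len(feats)
            for i in range(len(feats[0]))]
    return [math.sqrt(sum((fi - mi) ** 2 for fi, mi in zip(f, mean)))
            for f in feats]
```

A textured block surrounded by flat blocks gets the highest score, matching the intuition that salient regions differ from the global background.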
APA, Harvard, Vancouver, ISO, and other styles
