
Theses on the topic "Stream graphs"



Consult the 48 best theses for your research on the topic "Stream graphs".

Next to every source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Gillani, Syed. "Semantically-enabled stream processing and complex event processing over RDF graph streams". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSES055/document.

Full text
Abstract
There is a paradigm shift in the nature and processing of today's data: data used to be mostly static, stored in large databases to be queried. Today, with the advent of new applications and means of collecting data, most applications on the Web and in enterprises produce data in a continuous manner, in the form of streams. The users of these applications thus expect to process large volumes of data with fresh, low-latency results. This has led to the introduction of Data Stream Management Systems (DSMSs) and the Complex Event Processing (CEP) paradigm, each with a distinctive aim: DSMSs are mostly employed to process traditional query operators (mostly stateless), while CEP systems focus on temporal pattern matching (stateful operators) to detect changes in the data that can be thought of as events. In the past decade or so, a number of scalable and performance-intensive DSMSs and CEP systems have been proposed. Most of them, however, are based on relational data models, which raises the question of support for heterogeneous data sources, i.e., the variety of the data. Work on RDF stream processing (RSP) systems partly addresses the challenge of variety by promoting the RDF data model. Nonetheless, challenges like volume and velocity are overlooked by existing approaches. These challenges require customised optimisations which consider RDF as a first-class citizen and scale the process of continuous graph pattern matching. To gain insights into these problems, this thesis focuses on developing scalable RDF graph stream processing and semantically-enabled CEP systems (i.e., Semantic Complex Event Processing, SCEP). In addition to our optimised algorithmic and data structure methodologies, we also contribute to the design of a new query language for SCEP. Our contributions in these two fields are as follows:
• RDF Graph Stream Processing. We first propose an RDF graph stream model, where each data item/event within a stream comprises an RDF graph (a set of RDF triples). Second, we implement customised indexing techniques and data structures to continuously process RDF graph streams in an incremental manner.
• Semantic Complex Event Processing. We extend the idea of RDF graph stream processing to enable SCEP over such RDF graph streams, i.e., temporal pattern matching. Our first contribution in this context is a new query language that encompasses the RDF graph stream model and employs a set of expressive temporal operators such as sequencing, Kleene-+, negation, optional, conjunction, disjunction and event selection strategies. Based on this, we implement a scalable system that employs a non-deterministic finite automata model to evaluate these operators in an optimised manner.
We leverage techniques from diverse fields, such as relational query optimisation, incremental query processing, and sensor and social networks, in order to solve real-world problems. We have applied our proposed techniques to a wide range of real-world and synthetic datasets to extract knowledge from RDF structured data in motion. Our experimental evaluations confirm our theoretical insights and demonstrate the viability of our proposed methods.
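As a rough sketch of the RDF graph stream model and the sequencing (SEQ) operator mentioned in this abstract: each stream item is a timestamped RDF graph (a set of triples), and a toy SEQ operator reports when an event matching pattern A is later followed by one matching pattern B. The Python below is purely illustrative; the predicates, the data and the skip-till-next-match policy are assumptions, not the thesis's actual language or automaton.

```python
# Toy model of an RDF graph stream: each event is a timestamped set of triples.
# Illustrative only -- a real system indexes triples and compiles patterns to automata.
from typing import Callable, Set, Tuple

Triple = Tuple[str, str, str]          # (subject, predicate, object)
Event = Tuple[float, Set[Triple]]      # (timestamp, RDF graph)

def seq(stream, match_a: Callable[[Set[Triple]], bool],
        match_b: Callable[[Set[Triple]], bool]):
    """Yield (t_a, t_b) whenever an event matching A is later followed by one matching B."""
    pending_a = None
    for t, graph in sorted(stream, key=lambda e: e[0]):
        if pending_a is not None and match_b(graph):
            yield (pending_a, t)
            pending_a = None
        if match_a(graph):
            pending_a = t              # remember the most recent A (skip-till-next-match)

events = [
    (1.0, {(":sensor1", ":reports", ":highPressure")}),
    (2.0, {(":sensor1", ":reports", ":normal")}),
    (3.0, {(":valve7", ":status", ":closed")}),
]
has = lambda p, o: lambda g: any(pred == p and obj == o for _, pred, obj in g)
print(list(seq(events, has(":reports", ":highPressure"), has(":status", ":closed"))))
# [(1.0, 3.0)]
```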
2

Rannou, Léo. "Temporal Connectivity and Path Computation for Stream Graph". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS418.

Full text
Abstract
For a long time, structural data and temporal data have been analysed separately. Many real-world complex networks have a temporal dimension, such as contacts between individuals or financial transactions. Graph theory provides a wide set of tools to model and analyse static connections between entities. Unfortunately, this approach does not take into account the temporal nature of interactions. Stream graph theory is a formalism for modelling highly dynamic networks in which nodes and/or links arrive and/or leave over time. The number of applications of stream graph theory has risen rapidly, along with the number of theoretical concepts and algorithms to compute them. Several theoretical concepts such as connected components and temporal paths in stream graphs were defined recently, but no algorithm was provided to compute them. Moreover, the algorithmic complexities of these problems are unknown, as is the insight they may shed on real-world stream graphs of interest. In this thesis, we present several solutions to compute notions of connectivity and path concepts in stream graphs. We also present alternative representations of stream graphs: data structures designed to facilitate specific computations. We provide implementations and experimentally compare our methods in a wide range of practical cases. We show that these concepts indeed give much insight into features of large-scale datasets. Straph, a Python library, was developed in order to provide a reliable resource for manipulating, analysing and visualising stream graphs, for designing algorithms and models, and for rapidly evaluating them.
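To make the path notions concrete, here is a minimal sketch of earliest-arrival computation on a link stream of instantaneous links (t, u, v). Real stream graphs also carry node and link presence intervals, which this simplification ignores, and the single time-sorted pass assumes no instantaneous multi-hop chains.

```python
# Earliest-arrival reachability in a link stream: links are instantaneous
# triples (t, u, v), traversable at time t if the node was reached by then.
import math

def earliest_arrival(links, source, t_start=0.0):
    arrival = {source: t_start}
    for t, u, v in sorted(links):               # scan links in time order
        for a, b in ((u, v), (v, u)):           # links are undirected
            if arrival.get(a, math.inf) <= t and t < arrival.get(b, math.inf):
                arrival[b] = t                  # b is reached earlier via this link
    return arrival

links = [(1, "a", "b"), (2, "b", "c"), (3, "a", "c"), (4, "c", "d")]
print(earliest_arrival(links, "a"))   # {'a': 0.0, 'b': 1, 'c': 2, 'd': 4}
```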
3

Faleiros, Thiago de Paulo. "Propagação em grafos bipartidos para extração de tópicos em fluxo de documentos textuais". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-10112016-105854/.

Full text
Abstract
Handling large amounts of data is a requirement for modern text mining algorithms. In some applications, documents are published constantly, which demands high long-term storage costs. Easily adaptable methods are therefore needed for an approach that considers documents as a stream and analyses the data in a single pass, without requiring costly storage. Another requirement is that such an approach be able to exploit heuristics in order to improve the quality of results. Several models for the automatic extraction of latent information from a collection of documents have been proposed in the literature, among which probabilistic topic models are prominent. Probabilistic topic models achieve good practical results and have been extended into several models with different types of information included. However, properly describing these models, deriving them, and then obtaining appropriate inference algorithms are difficult tasks, requiring a rigorous mathematical treatment of the operations performed in the latent dimension discovery process. Thus, for the development of a simple and efficient method to tackle the problem of latent dimension discovery, a proper representation of the data is required. The hypothesis of this thesis is that, by using a bipartite graph representation of textual data, one can address the task of discovering latent patterns present in the relationships between documents and words in a simple and intuitive way. To validate this hypothesis, we developed a framework based on a label propagation algorithm using the bipartite graph representation. The framework, called PBG (Propagation in Bipartite Graph), was initially applied in the unsupervised context to a static collection of documents. Then a semi-supervised version was proposed, which needs only a small amount of labelled documents for the transductive classification task. Finally, it was applied in the dynamic context, in which a stream of textual documents was considered. Comparative analyses were performed, and the results indicated that PBG is a viable and competitive alternative for tasks in the unsupervised and semi-supervised contexts.
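A highly simplified sketch of label propagation on a document-word bipartite graph, in the spirit of PBG but not its actual update rules: topic weight flows from documents to the words they contain and back, with per-document normalisation. The seed labels and data are invented for the example.

```python
# Simplified bipartite label propagation: topic weights alternate between
# the document side and the word side of the graph, then are normalised.
from collections import defaultdict

def propagate(doc_words, doc_topics, iterations=10, n_topics=2):
    word_topics = defaultdict(lambda: [0.0] * n_topics)
    for _ in range(iterations):
        # document -> word step
        word_topics.clear()
        for d, words in doc_words.items():
            for w in words:
                for k in range(n_topics):
                    word_topics[w][k] += doc_topics[d][k]
        # word -> document step, then renormalise each document
        for d, words in doc_words.items():
            mixed = [sum(word_topics[w][k] for w in words) for k in range(n_topics)]
            total = sum(mixed) or 1.0
            doc_topics[d] = [x / total for x in mixed]
    return doc_topics, dict(word_topics)

docs = {"d1": ["graph", "stream"], "d2": ["graph", "kernel"], "d3": ["virus", "wheat"]}
topics = {"d1": [1.0, 0.0], "d2": [1.0, 0.0], "d3": [0.0, 1.0]}  # seed labels
print(propagate(docs, topics)[0])
```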
4

Arnoux, Thibaud. "Prédiction d'interactions dans les flots de liens. Combiner les caractéristiques structurelles et temporelles". Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS229.

Full text
Abstract
The link stream formalism is an approach that captures the dynamics of a system while providing a solid framework for understanding its behaviour. A link stream is a sequence of triplets (t, u, v) indicating that an interaction occurred between u and v at time t. The strong importance of the system's dynamics in prediction over link streams places the problem at the crossroads of link prediction in graphs and time series prediction. We explore several formalisations of the problem of prediction in link streams. We then focus on activity prediction, that is, predicting the number of interactions occurring in the future between each pair of nodes during a given period. We introduce the protocol we developed, which coherently combines the characteristics of the data in order to predict activity. We study the behaviour of our protocol in several experiments on four datasets and evaluate the quality of each prediction. We also study how the use of node-pair classes preserves the diversity of predicted link types while improving the prediction. Our goal is to define a general prediction framework allowing in-depth studies of the relationship between structural and temporal characteristics in prediction tasks.
5

Baudin, Alexis. "Cliques statiques et temporelles : algorithmes d'énumération et de détection de communautés". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS609.

Full text
Abstract
Graphs are mathematical objects used to model interactions or connections between entities of various types. A graph can represent, for example, a social network that connects users to each other, a transport network like the metro where stations are connected to each other, or a brain with the billions of interacting neurons it contains. In recent years, the highly dynamic nature of these structures has been highlighted, as well as the importance of taking into account the temporal evolution of these networks to understand their functioning. While many concepts and algorithms have been developed on graphs to describe static network structures, much remains to be done to formalise and develop relevant algorithms to describe the dynamics of real networks. This thesis aims to better understand how massive real-world graphs are structured, and to develop tools to extend our understanding to structures that evolve over time. It has been shown that these graphs have particular properties, which distinguish them from theoretical or randomly drawn graphs. Exploiting these properties then enables the design of algorithms that solve certain difficult problems much more quickly on these instances than in the general case. The thesis focuses on cliques, which are groups of elements that are all connected to each other. We study the enumeration of cliques in static and temporal graphs and the community detection it enables. The communities of a graph are sets of vertices such that, within a community, the vertices interact strongly with each other and little with the rest of the graph. Their study helps to understand the structural and functional properties of networks. We evaluate our algorithms on massive real-world graphs, opening up new perspectives for understanding interactions within these networks. We first work on graphs, without taking into account the temporal component of interactions. We begin with the clique percolation method of community detection, highlighting its memory limitations, which prevent it from being applied to overly massive graphs. By introducing an approximate resolution algorithm, we overcome this limitation. Next, we improve the enumeration of maximal cliques in the particular case of bipartite graphs. These correspond to interactions between groups of vertices of different types, e.g. links between people and viewed content, participation in events, etc. We then consider interactions that take place over time, using the link stream formalism. We seek to extend the algorithms presented in the first part, to exploit their advantages in the study of temporal interactions. We provide a new algorithm for enumerating maximal cliques in link streams, which is much more efficient than the state of the art on massive datasets. Finally, we focus on communities in link streams by clique percolation, developing an extension of the method used on graphs. The results show a significant improvement over the state of the art, and we analyse the communities obtained to provide relevant information on the organisation of temporal interactions in link streams. This thesis work has provided new insights into the study of massive real-world networks. It shows the importance of exploring the potential of graphs in a real-world context, and could contribute to the emergence of innovative solutions for the complex challenges of our modern society.
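For context, the sketch below is the textbook Bron-Kerbosch algorithm with pivoting for enumerating maximal cliques in a static graph; it is the classical starting point that the thesis extends to bipartite graphs and link streams, not the thesis's own algorithm.

```python
# Textbook Bron-Kerbosch maximal clique enumeration with pivoting.
# R: current clique, P: candidate vertices, X: already-explored vertices.
def bron_kerbosch(R, P, X, adj, out):
    if not P and not X:
        out.append(set(R))                              # R is a maximal clique
        return
    pivot = max(P | X, key=lambda u: len(adj[u] & P))   # pivot prunes branches
    for v in list(P - adj[pivot]):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P = P - {v}
        X = X | {v}

adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2}}      # adjacency sets
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(cliques)   # [{1, 2, 3}, {2, 4}]
```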
6

Wang, Changliang. "Continuous subgraph pattern search over graph streams /". View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?CSED%202009%20WANG.

Full text
7

Navarin, Nicolò <1984&gt. "Learning with Kernels on Graphs: DAG-based kernels, data streams and RNA function prediction". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6578/1/navarin_nicolo_tesi.pdf.

Full text
Abstract
In many application domains data can be naturally represented as graphs. When the application of analytical solutions for a given problem is unfeasible, machine learning techniques could be a viable way to solve the problem. Classical machine learning techniques are defined for data represented in a vectorial form. Recently some of them have been extended to deal directly with structured data. Among those techniques, kernel methods have shown promising results from both the computational complexity and the predictive performance points of view. Kernel methods avoid an explicit mapping to a vectorial form by relying on kernel functions, which, informally, are functions that calculate a similarity measure between two entities. However, the definition of good kernels for graphs is a challenging problem because of the difficulty of finding a good tradeoff between computational complexity and expressiveness. Another problem we face is learning on data streams, where a potentially unbounded sequence of data is generated by some sources. There are three main contributions in this thesis. The first contribution is the definition of a new family of kernels for graphs based on Directed Acyclic Graphs (DAGs). We analysed two kernels from this family, achieving state-of-the-art results from both the computational and the classification points of view on real-world datasets. The second contribution consists in making the application of learning algorithms to streams of graphs feasible. Moreover, we defined a principled approach to memory management. The third contribution is the application of machine learning techniques for structured data to non-coding RNA function prediction. In this setting, the secondary structure is thought to carry relevant information. However, existing methods considering the secondary structure have prohibitively high computational complexity. We propose to apply kernel methods in this domain, obtaining state-of-the-art results.
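To illustrate the kernel interface in its simplest possible form, the sketch below computes graph similarity as the dot product of vertex-label histograms. The thesis's DAG-based kernels are far more expressive, but they plug into learning algorithms (e.g. an SVM) through the same k(G1, G2) interface; the labels here are invented for the example.

```python
# A deliberately simple graph kernel: similarity as the dot product of
# vertex-label histograms. Far weaker than DAG-based kernels, but it shows
# the common interface: k(G1, G2) -> non-negative similarity score.
from collections import Counter

def label_histogram_kernel(labels1, labels2):
    h1, h2 = Counter(labels1), Counter(labels2)
    return sum(h1[l] * h2[l] for l in h1.keys() & h2.keys())

g1 = ["C", "C", "O", "H"]        # vertex labels of graph 1
g2 = ["C", "O", "O", "H"]        # vertex labels of graph 2
print(label_histogram_kernel(g1, g2))   # 2*1 + 1*2 + 1*1 = 5
```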
8

Navarin, Nicolò <1984&gt. "Learning with Kernels on Graphs: DAG-based kernels, data streams and RNA function prediction". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6578/.

Full text
9

Reyes, Juan C. (Juan Carlos) 1980. "A graph editing framework for the StreamIt language". Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/17980.

Full text
Abstract
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (leaves 55-56).
A programming language is more useful if it provides a level of abstraction that makes programming more intuitive and also allows the development of tools that take advantage of the language's internal representation. StreamIt, a language for the development of streaming applications, has a hierarchical and structural nature that lends itself to a graphical programming tool. I created a prototype StreamIt Graph Editor (SGE) to facilitate the development of streaming applications using StreamIt. The SGE provides intuitive visualization tools that allow developers to work more efficiently by automating certain processes. Thus, the programmer can focus more on design issues than on low level details that slow down the development process.
by Juan C. Reyes.
M.Eng.
10

Karczmarek, Michal 1977. "Constrained and phased scheduling of synchronous data flow graphs for StreamIt language". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87333.

Full text
Abstract
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2003.
Includes bibliographical references (p. 107-109).
by Michal Karczmarek.
S.M.
11

Popa, Tiberiu. "Compiling Data Dependent Control Flow on SIMD GPUs". Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1186.

Full text
Abstract
Current Graphics Processing Units (GPUs) (circa 2003/2004) have programmable vertex and fragment units. Often these units are implemented as SIMD processors employing parallel pipelines. Data-dependent conditional execution on SIMD architectures implemented using processor idling is inefficient. I propose a multi-pass approach based on conditional streams which allows dynamic load balancing of the fragment units of the GPU and better theoretical performance on programs using data-dependent conditionals and loops. The proposed system can be used to turn the fragment unit of a SIMD GPU into a stream processor with data-dependent control flow.
12

McKeon, Sean Patrick. "A GPU Stream Computing Approach to Terrain Database Integrity Monitoring". Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/cs_theses/65.

Full text
Abstract
Synthetic Vision Systems (SVS) provide an aircraft pilot with a virtual 3-D image of surrounding terrain which is generated from a digital elevation model stored in an onboard database. SVS improves the pilot's situational awareness at night and in inclement weather, thus reducing the chance of accidents such as controlled flight into terrain. A terrain database integrity monitor is needed to verify the accuracy of the displayed image due to potential database and navigational system errors. Previous research has used existing aircraft sensors to compare the real terrain position with the predicted position. We propose an improvement to one of these models by leveraging the stream computing capabilities of commercial graphics hardware. "Brook for GPUs," a system for implementing stream computing applications on programmable graphics processors, is used to execute a streaming ray-casting algorithm that correctly simulates the beam characteristics of a radar altimeter during all phases of flight.
13

Wilmet, Audrey. "Détection d'anomalies dans les flots de liens : combiner les caractéristiques structurelles et temporelles". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS402.

Full text
Abstract
A link stream is a set of links {(t, u, v)} in which a triplet (t, u, v) models an interaction between two entities u and v at time t. In many situations, data result from the measurement of interactions between several million entities over time and can thus be studied through the link stream formalism. This is the case, for instance, of phone calls, email exchanges, money transfers, contacts between individuals, IP traffic, online shopping, and many more. The goal of this thesis is the detection of sets of abnormal links in a link stream. In the first part, we design a method that constructs different contexts, a context being a set of characteristics describing the circumstances of an anomaly. These contexts allow us to find relevant unexpected behaviours, according to several dimensions and perspectives. In the second part, we design a method to detect anomalies in heterogeneous distributions whose behaviour is constant over time, by comparing a sequence of similar heterogeneous distributions. We apply our methodological tools to temporal interactions coming from retweets on Twitter and IP traffic from the MAWI group.
14

Gregory, Linda Mae Alice. "The Lakes and Streams Project: A curriculum for elementary and middle grades on a local environmental issue". CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2175.

Full text
Abstract
This project covers the environmental issues of the proposed Lakes and Streams Project for the City of San Bernardino. The water history of San Bernardino, from the hot springs to the development of the current municipal water system, is also detailed. Two curriculum units teach students how to use environmental issue analysis skills. One focuses on the water history of San Bernardino and is aimed at grades three to five. The other immerses middle grade students directly into the Lakes and Streams issue.
15

Segura, Salvador Albert. "High-performance and energy-efficient irregular graph processing on GPU architectures". Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671449.

Full text
Abstract
Graph processing is an established and prominent domain that is the foundation of new emerging applications in areas such as Data Analytics and Machine Learning, empowering applications such as road navigation, social networks and automatic speech recognition. The large amount of data employed in these domains requires high-throughput architectures such as GPGPU. Although the processing of large graph-based workloads exhibits a high degree of parallelism, memory access patterns tend to be highly irregular, leading to poor efficiency due to memory divergence. In order to ameliorate these issues, GPGPU graph applications perform stream compaction operations which process active nodes/edges so that subsequent steps work on a compacted dataset. We propose to offload this task to the Stream Compaction Unit (SCU), a hardware extension tailored to the requirements of these operations, which additionally performs pre-processing by filtering and reordering the elements processed. We show that memory divergence inefficiencies prevail in GPGPU irregular graph-based applications, yet we find that it is possible to relax the strict relationship between a thread and the data it processes to enable new optimizations. As such, we propose the Irregular accesses Reorder Unit (IRU), a novel hardware extension integrated in the GPU pipeline that reorders and filters data processed by the threads on irregular accesses, improving memory coalescing. Finally, we leverage the strengths of both previous approaches to achieve synergistic improvements. We do so by proposing the IRU-enhanced SCU (ISCU), which employs the efficient pre-processing mechanisms of the IRU to improve SCU stream compaction efficiency and to alleviate NoC throughput limitations due to SCU pre-processing operations. We evaluate the ISCU with state-of-the-art graph-based applications, achieving a 2.2x performance improvement and a 10x gain in energy efficiency.
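For intuition, the sketch below shows what stream compaction computes, in scalar Python: an exclusive prefix sum over the active flags gives each element its output slot, so active nodes are gathered into a dense array. On a GPU these writes happen in parallel; the SCU proposed in the thesis moves the whole operation into dedicated hardware and adds filtering and reordering.

```python
# Scalar sketch of stream compaction: an exclusive prefix sum over the
# active flags assigns each active element a slot in the dense output.
def exclusive_scan(xs):
    out, total = [], 0
    for x in xs:
        out.append(total)
        total += x
    return out, total

def compact(flags, values):
    offsets, count = exclusive_scan(flags)
    result = [None] * count
    for i, f in enumerate(flags):
        if f:
            result[offsets[i]] = values[i]   # each element writes independently
    return result

flags = [0, 1, 0, 0, 1, 1, 0, 1]             # which frontier nodes are active
nodes = list(range(8))
print(compact(flags, nodes))                  # [1, 4, 5, 7]
```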
16

Ito, Dai. "Evaluation of susceptibility to wheat streak mosaic virus among small grains and alternative hosts in the Great Plains". Thesis, Montana State University, 2011. http://etd.lib.montana.edu/etd/2011/ito/ItoD0511.pdf.

Full text
Abstract
Wheat streak mosaic virus (WSMV), endemic in small grains production areas of the Great Plains, causes annual wheat yield losses of 2 to 5%. Yield loss in individual fields can reach 100%. Control relies on cultural practices to control the vector, the wheat curl mite (Aceria tosichella Keifer, WCM), and on the use of resistant or tolerant varieties. WSMV and WCM depend on living tissue, including common grassy weeds, for survival and reproduction. Little is known about the relative importance of these weeds as alternative hosts of WSMV. The purpose of these studies was to evaluate the risk of infection with WSMV in commonly grown wheat varieties and various grassy weed species, information useful for understanding WSMV epidemiology and control. Winter wheat, spring wheat and barley varieties in Montana were evaluated in the field by measuring the effect of fall vs. spring inoculation and variety on incidence, symptom severity, and yield components. Winter wheat varieties from five states, and spring wheat and barley varieties from Montana, were tested for incidence and absorbance in the greenhouse. Fall-inoculated winter wheat was less affected by WSMV inoculation than spring-inoculated winter wheat. Yields of spring wheat varieties were greatly reduced by WSMV inoculation. There was no correlation between yield and incidence or symptom severity. In greenhouse studies, the highest incidence was observed in varieties from Idaho and Nebraska, whereas the highest relative absorbance was observed in varieties from Montana. In 2008 and 2009, surveys of common grassy weeds were conducted. Grass species from croplands in six states were selected and mechanically inoculated to determine their susceptibility to WSMV. Grassy weeds were also evaluated as a source of WSMV by measuring transmission efficiency with viruliferous WCM. Bromus tectorum was the most prevalent grassy weed and the most frequent viral host. Aegilops cylindrica and Avena fatua had the highest incidence and relative absorbance. There were no differences in the susceptibility of grass species to WSMV by their state of origin. The WCM transmission study indicated that transmission from infected grass species was less efficient than from infected wheat. These studies will help producers in Montana assess their risk of WSMV based on variety selection and the presence of grassy weeds.
17

Nzekon, Nzeko'o Armel Jacques. "Système de recommandation avec dynamique temporelle basée sur les flots de liens". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS454.

Full text
Abstract
Recommending appropriate items to users is crucial in many e-commerce platforms that propose a large number of items. Recommender systems are a favourite solution for this task. Most research in this area is based on explicit ratings that users give to items, while most of the time ratings are not available in sufficient quantities. In these situations, it is important that recommender systems use implicit data, namely link streams connecting users to items with timestamps, i.e. users' browsing, purchase and streaming history. We exploit this type of implicit data in this thesis. One common approach consists in selecting, for a given N, the N most relevant items for each user, which is called top-N recommendation. To do so, recommender systems rely on various kinds of information, like content-based features of items, users' past interest in items, and trust between users. However, they often use only one or two such pieces of information simultaneously, which can limit their performance, because a user's interest in an item can depend on more than two types of side information. To address this limitation, we make three contributions in the field of graph-based recommender systems. The first is an extension of the Session-based Temporal Graph (STG) introduced by Xiang et al., which is a dynamic graph combining long-term and short-term preferences in order to better capture user preferences over time. STG ignores content-based features of items and makes no distinction between the weights of newer and older edges. The new proposed graph, Time-weight Content-based STG, addresses these limitations by adding a new node type for content-based features of items and a penalisation of older edges. The second contribution is the Link Stream Graph (LSG) for temporal recommendations. This graph is inspired by a formal representation of link streams, and has the particularity of considering time in a continuous way, unlike other state-of-the-art graphs, which either ignore the temporal dimension, like the classical bipartite graph (BIP), or consider time discontinuously, like STG, where time is divided into slices. The third contribution is GraFC2T2, a general graph-based framework for top-N recommendation. This framework integrates basic recommender graphs and enriches them with content-based features of items, the temporal dynamics of users' preferences, and trust relationships between users. Implementations of these three contributions on the CiteULike, Delicious, Last.fm, Ponpare, Epinions and Ciao datasets confirm their relevance.
18

Hachi, Ryma. "Explorer l'effet de la morphologie des réseaux viaires sur leurs conditions d'accessibilité : une approche empirique fondée sur la théorie des graphes". Thesis, Paris 1, 2020. http://www.theses.fr/2020PA01H072.

Full text
Abstract
This thesis aims to explore the relationship between the morphology of street networks and the accessibility offered to individuals during their trips in urban space. Accessibility is defined here as a set of conditions favourable to travel (short distances to cover, a low level of congestion, and so on). This relationship is the subject of much tacit knowledge in urban design. Typical network morphologies, or typical interventions on existing networks, are recommended by urban designers for the accessibility conditions they are supposed to offer. However, the actual effects of these morphologies and interventions on accessibility conditions are rarely evaluated in a formalised and systematic way. To fill this gap, we adopt a quantitative approach based on graph theory. This allows an analysis of the morphology and accessibility conditions of networks by means of descriptors computed on graphs, followed by the study of the relationship between morphological descriptors and accessibility descriptors. Our work is exploratory. It concerns a set of ten empirical case studies, chosen to be representative of theoretical cases recommended in urban design. We constituted two corpuses of study. The first brings together networks with a typical morphology: organic networks such as Paris in the Middle Ages, grid networks such as Manhattan, and tree-like networks such as those of some American suburbs. The second corpus is made up of successive states of a network in which typical interventions, recommended in the literature, were carried out; in this case, the creation of star-shaped breakthroughs in the street network of Paris in the 19th century. The quantitative description of the morphological characteristics and accessibility conditions, carried out on the two corpuses, reveals specificities of each typical network and intervention analysed, both in terms of morphology and in terms of accessibility. Furthermore, our results allow us to identify trends in the relationship between the morphological characteristics of the studied networks and their accessibility conditions. In particular, we show that these trends are more marked for the corpus of networks with a typical morphology than for the Parisian network at different dates: in Paris, strong variations in morphological descriptors are often accompanied by weak variations in accessibility descriptors. From a thematic point of view, this result suggests that the major works carried out in the 19th century by Haussmann certainly affected the morphology of the street network, but had little effect on the accessibility conditions offered by this network. Finally, we conclude that adopting a quantitative approach to the relationship between the morphology of a street network and its accessibility conditions requires going back and forth between the knowledge and interpretations specific to urban design and the methods and measures of other disciplines, in this case network science.
19

De, Oliveira Joffrey. "Gestion de graphes de connaissances dans l'informatique en périphérie : gestion de flux, autonomie et adaptabilité". Electronic Thesis or Diss., Université Gustave Eiffel, 2023. http://www.theses.fr/2023UEFL2069.

Full text
Abstract
Les travaux de recherche menés dans le cadre de cette thèse de doctorat se situent à l'interface du Web sémantique, des bases de données et de l'informatique en périphérie (généralement dénotée Edge computing). En effet, notre objectif est de concevoir, développer et évaluer un système de gestion de bases de données (SGBD) basé sur le modèle de données Resource Description Framework (RDF) du W3C, qui doit être adapté aux terminaux que l'on trouve dans l'informatique périphérique. Les applications possibles d'un tel système sont nombreuses et couvrent un large éventail de secteurs tels que l'industrie, la finance et la médecine, pour n'en citer que quelques-uns. Pour preuve, le sujet de cette thèse a été défini avec l'équipe du laboratoire d'informatique et d'intelligence artificielle (CSAI) du ENGIE Lab CRIGEN. Ce dernier est le centre de recherche et de développement d'ENGIE dédié aux gaz verts (hydrogène, biogaz et gaz liquéfiés), aux nouveaux usages de l'énergie dans les villes et les bâtiments, à l'industrie et aux technologies émergentes (numérique et intelligence artificielle, drones et robots, nanotechnologies et capteurs). Le CSAI a financé cette thèse dans le cadre d'une collaboration de type CIFRE. Les fonctionnalités d'un système satisfaisant ces caractéristiques doivent permettre de détecter de manière pertinente et efficace des anomalies et des situations exceptionnelles depuis des mesures provenant de capteurs et/ou actuateurs. Dans un contexte industriel, cela peut correspondre à la détection de mesures, par exemple de pression ou de débit sur un réseau de distribution de gaz, trop élevées qui pourraient potentiellement compromettre des infrastructures ou même la sécurité des individus. Le mode opératoire de cette détection doit se faire au travers d'une approche conviviale pour permettre au plus grand nombre d'utilisateurs, y compris les non-programmeurs, de décrire les situations à risque. L'approche doit donc être déclarative, et non procédurale, et doit donc s'appuyer sur un langage de requêtes, par exemple SPARQL. Nous estimons que l'apport des technologies du Web sémantique peut être prépondérant dans un tel contexte. En effet, la capacité à inférer des conséquences implicites depuis des données et connaissances explicites constitue un moyen de créer de nouveaux services qui se distinguent par leur aptitude à s'ajuster aux circonstances rencontrées et à prendre des décisions de manière autonome. Cela peut se traduire par la génération de nouvelles requêtes dans certaines situations alarmantes ou bien en définissant un sous-graphe minimal de connaissances dont une instance de notre SGBD a besoin pour répondre à l'ensemble de ses requêtes. La conception d'un tel SGBD doit également prendre en compte les contraintes inhérentes de l'informatique en périphérie, c'est-à-dire les limites en terme de capacité de calcul, de stockage, de bande passante et parfois énergétique (lorsque le terminal est alimenté par un panneau solaire ou bien une batterie). Il convient donc de faire des choix architecturaux et technologiques satisfaisant ces limitations. Concernant la représentation des données et connaissances, notre choix de conception s'est porté sur les structures de données succinctes (SDS) qui offrent, entre autres, les avantages d'être très compactes et ne nécessitant pas de décompression lors du requêtage. 
De même, il a été nécessaire d'intégrer la gestion de flux de données au sein de notre SGBD, par exemple avec le support du fenêtrage dans des requêtes SPARQL continues, et des différents services supportés par notre système. Enfin, la détection d'anomalies étant un domaine où les connaissances peuvent évoluer, nous avons intégré le support des modifications au niveau des graphes de connaissances stockés sur les instances des clients de notre SGBD. Ce support se traduit par une extension de certaines structures SDS utilisées dans notre prototype
The research work carried out as part of this PhD thesis lies at the interface between the Semantic Web, databases and edge computing. Indeed, our objective is to design, develop and evaluate a database management system (DBMS) based on the W3C Resource Description Framework (RDF) data model, which must be adapted to the terminals found in Edge computing. The possible applications of such a system are numerous and cover a wide range of sectors such as industry, finance and medicine, to name but a few. As proof of this, the subject of this thesis was defined with the team from the Computer Science and Artificial Intelligence Laboratory (CSAI) at ENGIE Lab CRIGEN. The latter is ENGIE's research and development centre dedicated to green gases (hydrogen, biogas and liquefied gases), new uses of energy in cities and buildings, industry and emerging technologies (digital and artificial intelligence, drones and robots, nanotechnologies and sensors). CSAI financed this thesis as part of a CIFRE-type collaboration. The functionalities of a system satisfying these characteristics must enable anomalies and exceptional situations to be detected in a relevant and effective way from measurements taken by sensors and/or actuators. In an industrial context, this could mean detecting excessively high measurements, for example of pressure or flow rate in a gas distribution network, which could potentially compromise infrastructure or even the safety of individuals. This detection must be carried out using a user-friendly approach to enable as many users as possible, including non-programmers, to describe risk situations. The approach must therefore be declarative, not procedural, and must be based on a query language, such as SPARQL. We believe that Semantic Web technologies can make a major contribution in this context. Indeed, the ability to infer implicit consequences from explicit data and knowledge is a means of creating new services that are distinguished by their ability to adjust to the circumstances encountered and to make autonomous decisions. This can be achieved by generating new queries in certain alarming situations, or by defining a minimal sub-graph of knowledge that an instance of our DBMS needs in order to respond to all of its queries. The design of such a DBMS must also take into account the inherent constraints of Edge computing, i.e. the limits in terms of computing capacity, storage, bandwidth and sometimes energy (when the terminal is powered by a solar panel or a battery). Architectural and technological choices must therefore be made to meet these limitations. With regard to the representation of data and knowledge, our design choice fell on succinct data structures (SDS), which offer, among other advantages, the fact that they are very compact and do not require decompression during querying. Similarly, it was necessary to integrate data flow management within our DBMS, for example with support for windowing in continuous SPARQL queries, and for the various services supported by our system. Finally, as anomaly detection is an area where knowledge can evolve, we have integrated support for modifications to the knowledge graphs stored on the client instances of our DBMS. This support translates into an extension of certain SDS structures used in our prototype
Los estilos APA, Harvard, Vancouver, ISO, etc.
20

Gaumont, Noé. "Groupes et Communautés dans les flots de liens : des données aux algorithmes". Electronic Thesis or Diss., Paris 6, 2016. http://www.theses.fr/2016PA066271.

Texto completo
Resumen
Les interactions sont partout : il peut s'agir de contacts entre individus, d'emails, d'appels téléphoniques, etc. Toutes ces interactions sont définies par deux entités interagissant sur un intervalle de temps: par exemple, deux individus se rencontrant entre 12h et 14h. Nous modélisons ces interactions par des flots de liens qui sont des ensembles de quadruplets (b, e, u, v), où chaque quadruplet représente un lien entre les noeuds u et v existant durant l'intervalle [b,e]. Dans un graphe, une communauté est un sous-ensemble plus densément connecté qu’une référence. Dans le formalisme de flot de liens, les notions même de densité et de référence sont à définir. Nous étudions donc comment étendre la notion de communauté aux flots de liens. Pour ce faire, nous nous appuyons sur des données réel où une structure communautaire est connue. Puis, nous développons une méthode permettant de trouver automatiquement des sous-flots qui sont jugés pertinents. Ces sous-flots, c’est-à-dire des sous-ensembles de liens, sont trouvés grâce à une méthode de détection de communautés appliquée sur une projection du flot sur un graphe statique. Un sous-flot est jugé pertinent s’il est plus dense que les sous-flots qui lui sont proches temporellement et topologiquement. Ainsi nous approfondissons les notions de voisinage et référence dans les flots de liens. Nous appliquons cette méthode sur plusieurs jeux de données d’interactions réelles et obtenons des groupes pertinents qui n’auraient pas pu être détectés par les méthodes existantes. Enfin, nous abordons la génération de flots de liens avec une structure communautaire donnée et à la manière d'évaluer une telle partition
Interactions are everywhere: in the contexts of face-to-face contacts, emails, phone calls, IP traffic, etc. In all of them, an interaction is characterized by two entities and a time interval: for instance, two individuals meet from 1pm to 3pm. We model such interactions as a link stream, i.e., a set of quadruplets (b,e,u,v) where each quadruplet means that a link exists between u and v from time b to time e. In graphs, a community is a subset which is more densely connected than a reference. Within the link stream formalism, the notions of density and reference have to be redefined. Therefore, we study how to extend the notion of community to link streams. To this end, we use a real data set where a community structure is known. Then, we develop a method that automatically finds substreams considered relevant. These substreams, defined as subsets of links, are discovered by a classical community detection algorithm applied to a projection of the link stream onto a static graph. A substream is considered relevant if it is denser than the substreams that are close to it temporally and structurally. In this way, we deepen the notions of neighbourhood and reference in link streams. We apply our method to several real-world interaction networks and find relevant substreams that would not have been found by existing methods. Finally, we discuss the generation of link streams with a given community structure, as well as a proper way to evaluate such a community structure
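To make the quadruplet model concrete, the sketch below builds a tiny link stream and computes a simple density for a substream: realized link time divided by the time every node pair could have been linked over the substream's span. This is only an illustration under that assumed definition; the `Link` tuple and `substream_density` helper are hypothetical names, and the thesis's actual reference measures may differ.

```python
from collections import namedtuple

# A link stream is a set of quadruplets (b, e, u, v): a link between
# nodes u and v that exists over the time interval [b, e].
Link = namedtuple("Link", ["b", "e", "u", "v"])

def substream_density(links):
    # Assumed density: total realized link duration divided by the
    # duration of all possible pairwise links over the substream's span.
    if not links:
        return 0.0
    t_min = min(l.b for l in links)
    t_max = max(l.e for l in links)
    nodes = {n for l in links for n in (l.u, l.v)}
    span = t_max - t_min
    possible = span * len(nodes) * (len(nodes) - 1) / 2
    covered = sum(l.e - l.b for l in links)
    return covered / possible if possible > 0 else 0.0

stream = [Link(12, 14, "alice", "bob"), Link(13, 15, "bob", "carol")]
print(substream_density(stream))  # 4 / 9 ≈ 0.444
```

A relevant substream, in this reading, is one whose density computed this way exceeds that of the substreams close to it in time and topology.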
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

Gaumont, Noé. "Groupes et Communautés dans les flots de liens : des données aux algorithmes". Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066271/document.

Texto completo
Resumen
Les interactions sont partout : il peut s'agir de contacts entre individus, d'emails, d'appels téléphoniques, etc. Toutes ces interactions sont définies par deux entités interagissant sur un intervalle de temps: par exemple, deux individus se rencontrant entre 12h et 14h. Nous modélisons ces interactions par des flots de liens qui sont des ensembles de quadruplets (b, e, u, v), où chaque quadruplet représente un lien entre les noeuds u et v existant durant l'intervalle [b,e]. Dans un graphe, une communauté est un sous-ensemble plus densément connecté qu’une référence. Dans le formalisme de flot de liens, les notions même de densité et de référence sont à définir. Nous étudions donc comment étendre la notion de communauté aux flots de liens. Pour ce faire, nous nous appuyons sur des données réel où une structure communautaire est connue. Puis, nous développons une méthode permettant de trouver automatiquement des sous-flots qui sont jugés pertinents. Ces sous-flots, c’est-à-dire des sous-ensembles de liens, sont trouvés grâce à une méthode de détection de communautés appliquée sur une projection du flot sur un graphe statique. Un sous-flot est jugé pertinent s’il est plus dense que les sous-flots qui lui sont proches temporellement et topologiquement. Ainsi nous approfondissons les notions de voisinage et référence dans les flots de liens. Nous appliquons cette méthode sur plusieurs jeux de données d’interactions réelles et obtenons des groupes pertinents qui n’auraient pas pu être détectés par les méthodes existantes. Enfin, nous abordons la génération de flots de liens avec une structure communautaire donnée et à la manière d'évaluer une telle partition
Interactions are everywhere: in the contexts of face-to-face contacts, emails, phone calls, IP traffic, etc. In all of them, an interaction is characterized by two entities and a time interval: for instance, two individuals meet from 1pm to 3pm. We model such interactions as a link stream, i.e., a set of quadruplets (b,e,u,v) where each quadruplet means that a link exists between u and v from time b to time e. In graphs, a community is a subset which is more densely connected than a reference. Within the link stream formalism, the notions of density and reference have to be redefined. Therefore, we study how to extend the notion of community to link streams. To this end, we use a real data set where a community structure is known. Then, we develop a method that automatically finds substreams considered relevant. These substreams, defined as subsets of links, are discovered by a classical community detection algorithm applied to a projection of the link stream onto a static graph. A substream is considered relevant if it is denser than the substreams that are close to it temporally and structurally. In this way, we deepen the notions of neighbourhood and reference in link streams. We apply our method to several real-world interaction networks and find relevant substreams that would not have been found by existing methods. Finally, we discuss the generation of link streams with a given community structure, as well as a proper way to evaluate such a community structure
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Jin, Ruoming. "New techniques for efficiently discovering frequent patterns". Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1121795612.

Texto completo
Resumen
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xvii, 170 p.; also includes graphics. Includes bibliographical references (p. 160-170). Available online via OhioLINK's ETD Center
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Počatko, Boris. "Dynamický definovatelný dashboard". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236436.

Texto completo
Resumen
This thesis deals with the design and implementation of a dynamic, user-definable dashboard. The user will be able to define conditions dynamically, which will filter out and save only the data he needs. The application will support changing the condition definitions and displaying the graphs after they have been created. The implementations currently available on the internet are usually solutions designed to fit only one type of project and do not follow general guidelines for a dashboard. The dashboard is designed to cooperate smoothly with high-load databases, so that it does not slow down the overall solution.
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

Raza, Asim. "SSVEP based EEG Interface for Google Street View Navigation". Thesis, Linköpings universitet, Medie- och Informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-104276.

Texto completo
Resumen
Brain-computer interface (BCI), or Brain Machine Interface (BMI), provides a direct communication channel between a user's brain and an external device without requiring any physical movement by the user. BCI has primarily been employed in medical sciences to assist patients with severe motor, visual and aural impairments. More recently, many BCIs are also being used for entertainment. BCI differs from neuroprosthetics, a study within neuroscience, in terms of its usage: the former connects the brain with a computer or external device, while the latter connects the nervous system to an implanted device. A BCI receives modulated input from the user either invasively or non-invasively. The modulated input, concealed in a huge amount of noise, contains distinct brain patterns depending on the type of activity the user is performing at that point in time. The primary task of a typical BCI is to find those distinct brain patterns and translate them into a meaningful set of communication commands. Cursor controllers, spellers, and wheelchair and robot controllers are classic examples of BCI applications. This study investigates an Electroencephalography (EEG) based non-invasive BCI in general, and its interaction with a web interface in particular. Different aspects related to BCI are covered in this work, including feedback techniques, BCI frameworks, commercial BCI hardware, and different BCI applications. The BCI paradigm of Steady State Visually Evoked Potentials (SSVEP) is the focus of this study. A hybrid solution was developed during this study, employing the general-purpose BCI framework OpenViBE, comprising a low-level stimulus management and control module and a web-based Google Street View client application. This study shows that a BCI can not only provide a way of communication for impaired subjects but can also be a multipurpose tool for a healthy person. During this study, it was established that the major hurdles hampering the performance of a BCI system are training protocols, BCI hardware and signal processing techniques. It was also observed that a controlled environment and expert assistance are required to operate a BCI system.
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Passerat-Palmbach, Jonathan. "Contributions to parallel stochastic simulation : application of good software engineering practices to the distribution of pseudorandom streams in hybrid Monte Carlo simulations". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00858735.

Texto completo
Resumen
The race for computing power intensifies every day in the simulation community. A few years ago, scientists started to harness the computing power of Graphics Processing Units (GPUs) to parallelize their simulations. As with any parallel architecture, not only does the simulation model implementation have to be ported to the new parallel platform, but all the tools must be reimplemented as well. In the particular case of stochastic simulations, one of the major elements of the implementation is the source of pseudorandom numbers. Employing pseudorandom numbers in parallel applications is not a straightforward task, and it has to be done with caution in order not to introduce biases in the results of the simulation. This problem, called pseudorandom stream distribution, has been studied ever since parallel architectures became available. While the literature is full of solutions for handling pseudorandom stream distribution on CPU-based parallel platforms, the young GPU programming community cannot yet display the same experience. In this thesis, we study how to correctly distribute pseudorandom streams on GPUs. From the existing solutions, we identified a need for good software engineering solutions, coupled with sound theoretical choices in the implementation. We propose a set of guidelines to follow when a PRNG has to be ported to the GPU, and put this advice into practice in a software library called ShoveRand. This library is used in a stochastic polymer folding model that we have implemented in C++/CUDA. Pseudorandom stream distribution on manycore architectures is also one of our concerns. It resulted in a contribution named TaskLocalRandom, which targets parallel Java applications using pseudorandom numbers and task frameworks. Finally, we share a reflection on methods for choosing the right parallel platform for a given application. To this end, we propose to automatically build prototypes of the parallel application running on a wide set of architectures. This approach relies on existing software engineering tools from the Java and Scala communities, most of them generating OpenCL source code from a high-level abstraction layer.
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Pranke, Nico. "Skalierbares und flexibles Live-Video Streaming mit der Media Internet Streaming Toolbox". Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola&quot, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-26652.

Texto completo
Resumen
This thesis deals with the development and application of various concepts and algorithms for scalable live video streaming, and with their implementation in the Media Internet Streaming Toolbox. The toolbox provides an extensible, platform-independent infrastructure for building all parts of a live streaming system, from video acquisition through media processing and encoding to delivery. The focus is on the flexible description of media processing and stream construction, and on the generation of client-specific stream formats with different levels of quality of service for the largest possible number of clients, together with their distribution over the Internet. An integrated graph-based concept is designed that combines Component Encoding Stream Construction, the use of compresslets, and automated flow-graph construction. The parts of the flow graph relevant to stream construction are executed for groups of clients with identical state, decoupled from the rest. This leads to a maximum computational load that is independent of the number of clients, which is both shown theoretically and confirmed by concrete measurements. As examples of the use of the toolbox, two wavelet-based stream formats, among others, are developed, integrated, and compared with each other with respect to coding efficiency and scalability
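The load-decoupling idea can be sketched in a few lines: group clients by identical stream-construction state and execute the expensive part of the flow graph once per group. The sketch below is an assumption-laden illustration, not the toolbox's API; `construct_for_state` and the client dictionaries are hypothetical.

```python
from collections import defaultdict

def construct_streams(clients, construct_for_state):
    # Group clients sharing the same (format, QoS) state, run the
    # stream-construction subgraph once per group, and fan the result
    # out cheaply: the heavy work scales with the number of distinct
    # states, not with the number of clients.
    groups = defaultdict(list)
    for client in clients:
        groups[client["state"]].append(client)
    output = {}
    for state, members in groups.items():
        packet = construct_for_state(state)   # executed once per group
        for client in members:
            output[client["id"]] = packet     # per-client cost is trivial
    return output

clients = [{"id": 1, "state": ("wavelet", "high")},
           {"id": 2, "state": ("wavelet", "high")},
           {"id": 3, "state": ("wavelet", "low")}]
print(construct_streams(clients, lambda s: f"stream{s}"))
```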
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Pranke, Nico. "Skalierbares und flexibles Live-Video Streaming mit der Media Internet Streaming Toolbox". Doctoral thesis, TU Bergakademie Freiberg, 2009. https://tubaf.qucosa.de/id/qucosa%3A22696.

Texto completo
Resumen
This thesis deals with the development and application of various concepts and algorithms for scalable live video streaming, and with their implementation in the Media Internet Streaming Toolbox. The toolbox provides an extensible, platform-independent infrastructure for building all parts of a live streaming system, from video acquisition through media processing and encoding to delivery. The focus is on the flexible description of media processing and stream construction, and on the generation of client-specific stream formats with different levels of quality of service for the largest possible number of clients, together with their distribution over the Internet. An integrated graph-based concept is designed that combines Component Encoding Stream Construction, the use of compresslets, and automated flow-graph construction. The parts of the flow graph relevant to stream construction are executed for groups of clients with identical state, decoupled from the rest. This leads to a maximum computational load that is independent of the number of clients, which is both shown theoretically and confirmed by concrete measurements. As examples of the use of the toolbox, two wavelet-based stream formats, among others, are developed, integrated, and compared with each other with respect to coding efficiency and scalability
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Sarin, Anika. "open / close: assimilating immersive spaces in visual communication". VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/4876.

Texto completo
Resumen
I am interested in two spaces obverse to each other: open and closed. An open space develops organically based on how people inhabit it. Interacting with an open space is a dynamic, sporadic, multisensory, immersive, and subjective experience. In such spaces, we are confronted with an alternative aesthetic, one that is in conflict with the seamlessness of a closed space. A closed space is anchored on definite variables like structure, use and boundaries. While interaction between people and space is important, the space is tightly controlled and interaction is designed. Through this thesis project, I present a method that metaphorically transforms the experience of a walk through a closed space into an open-ended and immersive experience. When space develops as a response to our actions, it affords intimacy and a sense of belonging. It facilitates deeper expressiveness through engagement. By applying a method that uses fragmentation, recurrence and motion, I metaphorically transform a closed urban space into an open one. Through this transformation I create a fresh person-space dialogue that temporarily destabilizes perception and encourages physical sensation, which allows for an intimate experience of the space. An immersive interaction with an open space transgresses the urban sterility of a closed space and is capable of creating a diversity of distinct experiences.
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Skalický, Martin. "Cyklistický/běžecký tréninkový deník využívající GPS data". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-237039.

Texto completo
Resumen
The practical goal of this master's thesis is to create an application with a useful graphical user interface that allows importing training data from a GPS device. It will also generate graphical and statistical outputs of the achieved results, with an option to export to HTML and to spreadsheet formats. The theoretical part of this thesis gives an introduction to keeping a training diary and a short description of how the GPS system works, and then describes GPS data storage formats and the application design.
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Glomm, Anna Sandaker. "Graphic revolt! : Scandinavian artists' workshops, 1968-1975 : Røde Mor, Folkets Ateljé and GRAS". Thesis, University of St Andrews, 2012. http://hdl.handle.net/10023/3171.

Texto completo
Resumen
This thesis examines the relationship between the three artists' workshops Røde Mor (Red Mother), Folkets Ateljé (The People's Studio) and GRAS, which worked between 1968 and 1975 in Denmark, Sweden and Norway. Røde Mor was from the outset an articulated Communist graphic workshop loosely organised around collective exhibitions. It developed into a highly productive and professionalised group of artists that made posters by commission for political and social movements. Its artists developed a familiar and popular artistic language characterised by imaginative realism and socialist imagery. Folkets Ateljé, which has never been studied before, was a close-knit underground group which created quick and immediate responses to current political issues. This group was founded on the example of Atelier Populaire in France and is strongly related to its practices. Within this comparative study it is the group that comes closest to collective practices around 1968 outside Scandinavia, namely the democratic assembly. The silkscreen workshop GRAS stemmed from the idea of economic and artistic freedom; although socially motivated and politically involved, the group never implemented any doctrine for participation. The aim of this transnational study is to reveal common denominators in the three groups' poster art as it was produced in connection with a Scandinavian experience of 1968. By '1968' is meant the period from the late 1960s till the end of the 1970s. The study examines the socio-political conditions under which the groups flourished and shows how these groups operated in conjunction with the political environment of 1968. The thesis explores the relationship between political movements and the collective art-making process as it appeared in Scandinavia. To present a comprehensible picture of the impact of 1968 on these groups, their artworks, manifestos, and activities outside of the collective space are discussed. The argument has presented itself that even though these groups had very similar ideological stances, their posters and techniques differ. This affected the artists involved to different degrees, yet made it possible to express the same political goals. This is suggested to be linked with the Scandinavian social democracies and the common experience of the radicalisation that took place mostly in the aftermath of 1968 proper. By comparing these three groups it has been uncovered that, even with the same socio-political circumstances and ideological stances, divergent styles developed to embrace these issues.
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

"Scalable Algorithms and Systems for Graph Analytics and Stream Processing". 2016. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1292713.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Chen, Mei-Hsuan y 陳美璇. "Data Flow Graph Partitioning for Stream Processing in Multi-FPGA Reconfigurable System". Thesis, 2003. http://ndltd.ncl.edu.tw/handle/05549707611165024588.

Texto completo
Resumen
Master's thesis
National Chiao Tung University
Department of Computer Science and Information Engineering
Academic year 91 (ROC calendar)
Reconfigurable computing offers the computational ability of hardware to increase performance, while keeping the flexibility of a software solution. A multi-FPGA reconfigurable system provides a means for dealing with applications that are too large to fit within a single FPGA but may be partitioned over the multiple FPGAs available. Such systems have a limited number of I/O pins connecting the FPGAs together, and therefore I/O pins must be used carefully. The objective of this thesis is to exploit the potential throughput of stream processing in a multi-FPGA reconfigurable system. We propose two approaches that schedule a data flow graph onto the multi-FPGA system. The first method makes use of the data flow graph to find the ideal size and connectivity of the FPGAs for a multi-FPGA reconfigurable system. The second approach increases throughput by decreasing the communication overhead in a current multi-FPGA reconfigurable system. In our simulation, we use DSP kernel algorithms as benchmarks. The results are promising.
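Because inter-FPGA edges consume scarce I/O pins, any schedule wants communicating nodes co-located. The toy greedy assignment below illustrates that objective only; it is not one of the thesis's two approaches, and all names and capacities are hypothetical.

```python
def greedy_partition(dfg_edges, num_fpgas, capacity):
    # Assign data-flow-graph nodes to FPGAs, preferring the FPGA that
    # already holds a communicating neighbour: every edge crossing the
    # partition consumes scarce inter-FPGA I/O pins.
    assignment, load = {}, [0] * num_fpgas
    for (u, v) in dfg_edges:
        for node, peer in ((u, v), (v, u)):
            if node in assignment:
                continue
            pref = assignment.get(peer)
            if pref is not None and load[pref] < capacity:
                assignment[node] = pref
            else:
                assignment[node] = min(range(num_fpgas), key=load.__getitem__)
            load[assignment[node]] += 1
    cut_edges = sum(1 for (u, v) in dfg_edges if assignment[u] != assignment[v])
    return assignment, cut_edges

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]
print(greedy_partition(edges, num_fpgas=2, capacity=2))
# ({'a': 0, 'b': 0, 'c': 1, 'd': 1}, 2)
```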
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

蔡明瑾. "Incremental Detection for Frequent Sub-Graph Patterns Changing on Data Streams". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/01052596127875168457.

Texto completo
Resumen
Master's thesis
National Taiwan Normal University
Department of Information and Computer Education
Academic year 94 (ROC calendar)
A graph is a kind of structural data, which is applied to model various relations among data in the real world. Mining frequent sub-graph patterns, which entails solving the sub-graph isomorphism problem, is an NP-hard problem. Therefore, mining frequent sub-graph patterns in data streams is an even more complicated problem. In this thesis, the graph data at every time point are collected for mining the frequent sub-graph patterns at that time point. We assume that a change in the frequent sub-graph patterns takes several time points; therefore, it is not necessary to re-mine frequent sub-graph patterns at each time point. The frequent sub-graph patterns discovered at the first time point are named base patterns. An efficient method, named the FGCD algorithm, is proposed to detect the change of base patterns at the following time points. The FGCD algorithm approximately counts the frequencies of the base patterns in the set of newly arriving graphs and calculates the percentage of patterns remaining frequent, in order to decide whether the trend of frequent sub-graph patterns is changing and to trigger re-mining of the frequent sub-graph patterns. Storage structures for the graphs are designed, and the downward closure property among frequent sub-graphs is applied in the proposed method to efficiently match sub-graph patterns. According to the experimental results, FGCD can approximately estimate the percentage of base patterns that remain frequent. When the trend of frequent sub-graph patterns does not change, the FGCD algorithm provides a more efficient way than re-mining to approximately maintain the frequent sub-graph patterns.
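The detection loop itself is simple to sketch once a sub-graph containment test is assumed. Below, `contains` stands in for the (expensive) sub-graph isomorphism check that the thesis accelerates with its storage structures and the downward-closure property; the function names and the 0.8 threshold are hypothetical.

```python
def remaining_frequent_ratio(base_patterns, new_graphs, min_support, contains):
    # Fraction of base patterns that are still frequent in the newly
    # arrived set of graphs; contains(graph, pattern) is an assumed
    # sub-graph containment test.
    still_frequent = 0
    for pattern in base_patterns:
        count = sum(1 for g in new_graphs if contains(g, pattern))
        if count / len(new_graphs) >= min_support:
            still_frequent += 1
    return still_frequent / len(base_patterns)

def should_remine(base_patterns, new_graphs, min_support, contains,
                  change_threshold=0.8):
    # Trigger a full re-mining only when too few base patterns survive.
    ratio = remaining_frequent_ratio(base_patterns, new_graphs,
                                     min_support, contains)
    return ratio < change_threshold
```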
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Singh, Paramvir. "Fast and scalable triangle counting in graph streams: the hybrid approach". Thesis, 2020. http://hdl.handle.net/1828/12445.

Texto completo
Resumen
Triangle counting is a major graph problem with several applications in social network analysis, anomaly detection, etc. A considerable amount of work has contributed to approximately computing global triangle counts under several computational models. One of the most popular streaming models considered is edge streaming, in which the edges arrive in the form of a graph stream. We categorize the existing literature into two categories: the Fixed Memory (FM) approach and the Fixed Probability (FP) approach. As the size of graphs grows, several challenges arise, such as memory space limitations and prohibitively long running times. Therefore, both the FM and FP categories exhibit some limitations. FP algorithms fail to scale for massive graphs. We identified a limitation of the FM category, i.e., FM algorithms have higher computational time than their FP variants. In this work, we present a new category, called the hybrid approach, that overcomes the limitations of both the FM and FP approaches. We present two new algorithms that belong to the hybrid category: Neighbourhood Hybrid Multisampling (NHMS) and Triest/ThinkD Hybrid Sampling (THS), for estimating the number of global triangles in graphs. These algorithms are highly scalable and have better running times than the FM and FP variants. We experimentally show that both NHMS and THS outperform state-of-the-art algorithms in space-efficient environments.
Graduate
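As background for the FM category the thesis builds on, here is a simplified reservoir-sampling estimator in the spirit of TRIEST-IMPR; the exact update and scaling rules of NHMS and THS differ, and this sketch is illustrative only (it assumes a plain undirected edge stream without deletions).

```python
import random

def estimate_triangles(edge_stream, M):
    # Fixed-memory estimator: keep a uniform reservoir of at most M
    # edges; each wedge closed by an arriving edge is weighted by the
    # (approximate) inverse probability that both of its edges were
    # sampled. Requires M >= 2.
    assert M >= 2
    S = []                     # edge reservoir
    neighbors = {}             # adjacency restricted to sampled edges
    estimate, t = 0.0, 0
    for (u, v) in edge_stream:
        t += 1
        eta = max(1.0, (t - 1) * (t - 2) / (M * (M - 1)))
        common = neighbors.get(u, set()) & neighbors.get(v, set())
        estimate += eta * len(common)
        if len(S) < M:
            S.append((u, v))
        elif random.random() < M / t:
            i = random.randrange(M)
            a, b = S[i]                 # evict a random sampled edge
            neighbors[a].discard(b)
            neighbors[b].discard(a)
            S[i] = (u, v)
        else:
            continue                    # edge not sampled
        neighbors.setdefault(u, set()).add(v)
        neighbors.setdefault(v, set()).add(u)
    return estimate
```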
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Liu, Che-Ming y 劉哲銘. "Mining Representative Patterns over Data Streams with a Lexical Order Graph". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/57925974869510144889.

Texto completo
Resumen
Master's thesis
Chung Yuan Christian University
Graduate Institute of Information and Computer Engineering
Academic year 96 (ROC calendar)
Data in recent applications over data streams, such as network monitoring and stock and financial analysis, often flow into the system continuously and rapidly. As storage space is limited, a proper mechanism for data update and compression is required so that the important information can be preserved. Both of the previously proposed types of representative patterns, RP and δ-TCFI, pick the largest itemsets to represent their subsets under a threshold. This thesis combines the concept of representative patterns from static databases with techniques for pattern update and count estimation over data streams. We propose an algorithm for mining the two types of representative patterns. Moreover, we adapt the data structure proposed for mining closed frequent patterns from static databases to batch processing of transactions from data streams. With our mining algorithm, comparing a frequent pattern with the representative patterns discovered so far is efficient. The experimental results show that the two types of representative patterns lead to different performance. When mining δ-TCFI, we obtain good efficiency, precision and recall; when mining RP, we obtain a lower error rate. Users can set either one as the mining target according to their application needs.
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

"Application of stream processing to hydraulic network solvers". Thesis, 2011. http://hdl.handle.net/10210/3907.

Texto completo
Resumen
M.Ing.
The aim of this research was to investigate the use of stream processing on the graphics processing unit (GPU) and to apply it to the hydraulic modelling of a water distribution system. The stream processing model was programmed and compared to programming on the conventional sequential platform, namely the CPU. The use of the GPU as a parallel processor has been widely adopted in many different non-graphics applications, and the benefits of implementing parallel processing in these fields have been significant. GPUs have the capacity to perform billions to trillions of floating-point operations per second using programmable shader programs. These great advances in GPU architecture have been driven by the gaming industry and a demand for better gaming experiences. The computational performance of the GPU is much greater than that of CPU processors. Hydraulic modelling has become vital to the construction of new water distribution systems, because water distribution networks are very complex and nonlinear in nature; further, modelling is able to anticipate and prevent problems in a system without physically building it. The hydraulic model used was the Gradient Method, which is the hydraulic model used in the EPANET software package. The Gradient Method produces a linear system which is both positive definite and symmetric. The Cholesky method is currently used in the EPANET algorithm to solve the linear equations produced by the Gradient Method. Thus, a linear solution method had to be selected that was suitable both for parallel processing on the GPU and for use as a hydraulic network solver. The Conjugate Gradient algorithm was selected as ideal, since it works well with the hydraulic solver and can be converted into a parallel algorithm on the GPU. The Conjugate Gradient Method is one of the best-known iterative techniques for solving sparse symmetric positive definite linear systems. The Conjugate Gradient Method was implemented in both the sequential programming model and the stream processing model, using the CPU and the GPU respectively, on two different computer systems. The Cholesky method was also programmed in the sequential programming model on both computer systems, and the Cholesky and Conjugate Gradient Methods were compared in order to evaluate the two methods relative to each other. The findings of this study show that stream processing on the GPU can be used to perform general-purpose algorithms on the parallel GPU architecture. The results further affirm that iterative linear solution methods should only be used for large linear systems.
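Since the abstract hinges on the Conjugate Gradient method for the symmetric positive-definite systems the Gradient Method produces, a minimal sequential sketch may be useful. This is the textbook algorithm, not the thesis's GPU code; the tolerance and the example system are illustrative.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    # Plain Conjugate Gradient for a symmetric positive-definite A x = b.
    # Each iteration needs one mat-vec and a few dot products / AXPYs,
    # which is what makes the method a natural fit for GPU streams.
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD example
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))          # ~ [0.0909, 0.6364]
```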
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

CHANG, CHIUNG-FANG y 張瓊方. "Detecting Texts and Graphs in Street View Images by Convolutional Neural Networks". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/83408631354295778349.

Texto completo
Resumen
Master's thesis
National Central University
Department of Computer Science and Information Engineering
Academic year 105 (ROC calendar)
Considering that traffic and shop signs appearing in street view images contain useful information, such as the locations of scenes or the effects of advertising billboards, a text and graph detection mechanism for street view images is proposed in this research. Many of these artificial objects in street view images are not easy to extract with a fixed template. Besides, cluttered backgrounds containing items such as buildings or trees may block parts of the signs, increasing the challenge of detection. Weather and light conditions further complicate the detection process. The proposed detection mechanism is divided into two parts: first, we use a Fully Convolutional Network (FCN) to train a detection model that effectively locates the positions of signs in street view images. In the second part, we extract the texts and graphs in the selected areas by exploiting their characteristics. Observing that, regardless of their various shapes, texts and graphs are usually superimposed on smooth areas, we construct smooth-region maps according to the gradient magnitudes and then confirm the actual sign areas. The texts and graphs can then be extracted by Maximally Stable Extremal Regions (MSER), which is well suited to text detection. Experimental results show that this mechanism can effectively extract texts and graphs in different types of complex street scenes.
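A rough sketch of the two low-level ingredients the abstract names, smooth-region maps from gradient magnitudes and MSER extraction, is given below using OpenCV. The threshold value and the composition into the full pipeline (FCN detection, sign-area confirmation) are assumptions, not the thesis's actual parameters.

```python
import cv2
import numpy as np

def smooth_region_map(gray, grad_thresh=30.0):
    # Binary map of low-gradient pixels: signs superimpose text and
    # graphics on smooth backgrounds, so low gradient magnitude hints
    # at candidate sign areas (the threshold here is illustrative).
    gx = cv2.Sobel(gray.astype(np.float32), cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray.astype(np.float32), cv2.CV_32F, 0, 1)
    magnitude = cv2.magnitude(gx, gy)
    return (magnitude < grad_thresh).astype(np.uint8)

def text_candidates(gray):
    # MSER-based extraction of text/graph candidates; gray must be a
    # single-channel uint8 image.
    mser = cv2.MSER_create()
    _, bboxes = mser.detectRegions(gray)
    return bboxes  # one (x, y, w, h) box per stable region
```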
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Graps, Amara Lynn [Verfasser]. "Io revealed in the Jovian dust streams / presented by Amara Lynn Graps". 2001. http://d-nb.info/963611534/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Peng, Yi-Cheng y 彭以程. "Concept-Based Event Identification from Social Streams Using Evolving Social Graph Sequences". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/99327486875315576507.

Texto completo
Resumen
Master's thesis
National Tsing Hua University
Institute of Information Systems and Applications
Academic year 102 (ROC calendar)
Social networks, which have become extremely popular in the 21st century, contain a tremendous amount of user-generated content about real-world events. This user-generated content relays real-world events as they happen, sometimes even ahead of the newswire. The goal of this work is to identify events from social streams. The proposed model utilizes sliding-window-based statistical techniques to extract event candidates from social streams. Subsequently, the “concept-based evolving graph sequences” (cEGS) approach is employed to verify the information propagation trends of event candidates and to identify actual events. The experimental results show the usefulness of our approach in identifying real-world events in social streams.
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Åleskog, Christoffer. "Graph-based Multi-view Clustering for Continuous Pattern Mining". Thesis, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21850.

Texto completo
Resumen
Background. In many smart monitoring applications, such as smart healthcare, smart buildings, autonomous cars, etc., data are collected from multiple sources and contain information about different perspectives/views of the monitored phenomenon, physical object or system. In addition, in many of those applications the availability of relevant labelled data is often low or even non-existent. Inspired by this, in this thesis we propose a novel algorithm for multi-view stream clustering. The algorithm can be applied for continuous pattern mining and labeling of streaming data. Objectives. The main objective of this thesis is to develop and implement a novel multi-view stream clustering algorithm. In addition, the potential of the proposed algorithm is studied and evaluated on two datasets: one synthetic and one real-world. The conducted experiments study the new algorithm's performance compared to a single-view clustering algorithm and to an algorithm without transfer of knowledge between chunks. Finally, the obtained results are analyzed, discussed and interpreted. Methods. Initially, we study the state-of-the-art multi-view (stream) clustering algorithms. Then we develop our multi-view clustering algorithm for streaming data by implementing a transfer-of-knowledge feature. We present and explain the developed algorithm in detail, motivating each choice made during the algorithm design phase. Finally, the algorithm configuration, the experimental setup and the datasets chosen for the experiments are presented and motivated. Results. Different configurations of the proposed algorithm have been studied and evaluated under different experimental scenarios on the two datasets. The proposed multi-view clustering algorithm demonstrated higher performance on the synthetic data than on the real-world dataset, mainly due to the poor quality of the real-world data used. Conclusions. The proposed algorithm demonstrated higher performance on the synthetic dataset than on the real-world dataset. It can generate high-quality clustering solutions with respect to the used evaluation metrics. In addition, the transfer-of-knowledge feature has been shown to have a positive effect on the algorithm's performance. A further study of the proposed algorithm on other, richer and more suitable datasets, e.g., data collected from numerous sensors used for monitoring some phenomenon, is planned as future work.
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Zhao, Z. W. y I.-Ming Chen. "Optimizing the Dynamic Distribution of Data-stream for High Speed Communications". 2004. http://hdl.handle.net/1721.1/7459.

Texto completo
Resumen
The performance of high-speed network communications frequently rests on the distribution of the data stream. In this paper, a dynamic data-stream balancing architecture based on link information is first introduced and discussed. Then, algorithms are proposed for rapidly and simultaneously acquiring the passing nodes and links of a path between any two source-destination nodes, as well as a dynamic data-stream distribution plan. Some related topics, such as data fragment disposal and fair service, are further studied and discussed. Besides, the performance and efficiency of the proposed algorithms, especially for fair service and convergence, are evaluated through a demonstration with regard to the rate of bandwidth utilization. We hope the discussion presented here can be helpful to application developers in selecting an effective strategy for planning the distribution of data streams.
Singapore-MIT Alliance (SMA)
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Azevedo, José Maria Pantoja Mata Vale e. "Image Stream Similarity Search in GPU Clusters". Master's thesis, 2018. http://hdl.handle.net/10362/58447.

Texto completo
Resumen
Images are an important part of today's society. They are everywhere on the internet and in computing, from news articles to areas as diverse as medicine, autonomous vehicles and social media. This enormous number of images requires massive amounts of processing power to process, upload, download and search. The ability to search for an image and find similar images in a library of millions of others empowers users with great advantages. Different fields have different constraints, but all benefit from the quick processing that can be achieved. Problems arise when creating a solution for this: calculating the similarity between many images, performing thousands of comparisons every second, is a challenge, and the results of such computations are very large, posing a further challenge when attempting to process them. Solutions to these problems often take advantage of graphs in order to index images and their similarity; the graph can then be used for the querying process. Creating and processing such a graph in an acceptable time frame poses yet another challenge. In order to tackle these challenges, we take advantage of a cluster of machines equipped with Graphics Processing Units (GPUs), enabling us to parallelize the process of describing an image visually and finding similar images in an acceptable time frame. GPUs are incredibly efficient at processing data such as images and graphs, through algorithms that are heavily parallelizable. We propose a scalable and modular system that takes advantage of GPUs, distributed computing and fine-grained parallelism to detect image features, index images in a graph and allow users to search for similar images. The solution we propose is able to compare up to 5000 images every second. It is also able to query a graph with thousands of nodes and millions of edges in a matter of milliseconds, achieving a very efficient query speed. The modularity of our solution allows the interchange of algorithms and of different steps in the solution, which provides great adaptability to any needs.
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Lai, Chih-Chia y 賴志嘉. "On Constructing the Registration Graph of a 3-D Scene Using RGB-D Image Streams". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/42601737493375454708.

Texto completo
Resumen
Master's thesis
National Chi Nan University
Department of Computer Science and Information Engineering
Academic year 101 (ROC calendar)
The key problem in using a mobile robot equipped with an RGB-D camera to explore an unknown environment is how to fuse the information contained in the acquired images. Due to the limited field of view of the camera, it is inevitable to register the acquired images. If we represent each image as a node and each pairwise registration result as an edge linking two registered images, then the complete registration results can be expressed as a registration graph. Constructing a registration graph from a series of input images can greatly simplify the 3-D scene reconstruction problem. Notably, the critical issue in registration graph construction is to determine whether a given pair of images overlap. If two images are determined to overlap, then the second problem is to determine their registration parameters and to add an edge linking those two images. In this work, we use the number of SIFT feature correspondences to select possibly overlapping images. However, the computational complexity of the traditional SIFT feature matching method is too high. Hence, we propose a fast SIFT feature matching algorithm based on the visual word (VW) technique. We first quantize the SIFT features via vector quantization with a specified codebook. If two SIFT features are quantized to different VWs, then those two SIFT features are deemed not matched. Therefore, when matching SIFT features, we only have to consider those features having the same VW, and thus the computation cost can be greatly reduced. The matched SIFT features computed with the VW approach are further verified with the RANSAC algorithm to remove incorrect matching results and to estimate the registration parameters. Experimental results show that the proposed method can improve the computation speed 38-fold without sacrificing too much matching accuracy.
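The speed-up idea is easy to sketch: quantize descriptors to visual words, then match only within buckets that share a word. The code below is an illustrative reconstruction under that reading, with a Lowe-style ratio test added as an assumption; the RANSAC verification step the abstract mentions is omitted, and all function names are hypothetical.

```python
import numpy as np

def assign_visual_words(features, codebook):
    # Vector-quantize each descriptor to its nearest codebook entry.
    # features: (n, d) array of descriptors; codebook: (k, d).
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def match_within_words(feat_a, feat_b, codebook, ratio=0.8):
    # Compare descriptors only when they were quantized to the same
    # visual word, which prunes most of the n*m distance computations.
    words_a = assign_visual_words(feat_a, codebook)
    words_b = assign_visual_words(feat_b, codebook)
    matches = []
    for w in np.intersect1d(words_a, words_b):
        ia = np.where(words_a == w)[0]
        ib = np.where(words_b == w)[0]
        for i in ia:
            dists = np.linalg.norm(feat_b[ib] - feat_a[i], axis=1)
            j = dists.argmin()
            if len(ib) > 1:
                second = np.partition(dists, 1)[1]
                if dists[j] < ratio * second:   # Lowe-style ratio test
                    matches.append((i, ib[j]))
            else:
                matches.append((i, ib[j]))
    return matches
```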
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Udupa, Abhishek. "Efficient Compilation Of Stream Programs Onto Multi-cores With Accelerators". Thesis, 2009. https://etd.iisc.ac.in/handle/2005/971.

Texto completo
Resumen
Over the past two decades, microprocessor manufacturers have typically relied on wider issue widths and deeper pipelines to obtain performance improvements for single-threaded applications. However, in recent years, with power dissipation and wire delays becoming primary design constraints, this approach can no longer be effectively used to yield performance improvements. Thus processor designers and vendors are universally moving towards multi-core designs. Examples of these are commodity general-purpose multi-core processors, the CellBE accelerator from IBM, and Graphics Processing Units from NVIDIA and ATI. Although these many- and multi-core architectures can provide enormous performance benefits, it is difficult to program for them due to the complexity of writing explicitly parallel code. The ubiquity of computationally intensive media processing applications makes it imperative to consider new programming frameworks and languages that can express parallelism in an easy, portable manner. The StreamIt programming language has been proposed to efficiently exploit parallelism at various levels on general-purpose multi-core architectures and stream processors, and to allow media processing and DSP applications to be developed in an easy and portable fashion. The StreamIt model allows programmers to specify a program as a set of filters connected by FIFO communication channels. The graphs thus specified by StreamIt programs describe task, data and pipeline parallelism, which can potentially be exploited on modern Graphics Processing Units (GPUs); these have emerged as powerful commodity stream processors that support abundant parallelism in hardware. The first part of this thesis deals with the challenges in mapping StreamIt programs to GPUs and proposes an efficient technique to software pipeline the execution of stream programs on GPUs. We formulate this problem, covering both scheduling and the assignment of filters to processors, as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available on GPUs. The proposed scheduling utilizes both the scalar units in the GPU, to exploit data parallelism, and the multiprocessors, to exploit task and pipeline parallelism. We have evaluated our approach on a platform equipped with an NVIDIA GeForce 8800 GTS 512 GPU, and our approach yields a (geometric) mean speedup of 5.02X, with a maximum speedup of 36.83X, across a set of StreamIt benchmarks, with the speedup measured relative to an optimized single-threaded CPU execution. While the approach of software pipelining the execution of stream programs on GPUs is efficient and performs well, it does not utilize the CPU cores to perform useful computation. Further, it does not support programs with stateful filters, which are filters that are not data parallel owing to a dependence between successive firings that is carried through the implicit state of the filter. The second part of the thesis addresses these issues and describes a novel method to orchestrate the execution of a StreamIt program on the multiple cores of a system and on GPUs in a synergistic manner. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and on the accelerator.
We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies of data transfers, the limited DMA bandwidth available, and the buffer layout transformations required by the partitioning, as an integrated Integer Linear Program (ILP), which can then be solved by an ILP solver. Since solving an ILP is NP-hard in the general case and may thus require a large amount of time, we also propose an efficient heuristic algorithm for partitioning the work between the CPU and the GPU; it provides solutions that are within 9.05% of the optimal ILP solutions on average across the benchmark suite, while requiring 2–3 orders of magnitude less time than the ILP approach. The partitioned tasks are then software pipelined to execute on the multiple CPU cores and on the Streaming Multiprocessors (SMs) of the GPU. The software pipelining algorithm orchestrates the execution between the CPU cores and the GPU by emitting the code for the CPU and the GPU, together with the code for the required data transfers. Our experiments on a platform with eight CPU cores, of which four were used, and a GeForce 8800 GTS 512 GPU show a (geometric) mean speedup of 6.84X, with a maximum of 51.96X, over single-threaded CPU execution across a set of StreamIt benchmarks.
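The flavour of such a heuristic can be sketched with a toy greedy rule that weighs profiled CPU and GPU firing times against a DMA penalty for edges crossing the partition. This is a hypothetical illustration only, not the thesis's algorithm; it ignores buffer layout transformations, stateful-filter constraints and software pipelining.

```python
def partition_filters(filters, dma_cost):
    # filters: list of (name, cpu_time, gpu_time, predecessors), in
    # topological order; dma_cost is charged for each predecessor that
    # ends up on the other side of the partition.
    side = {}
    for name, cpu_t, gpu_t, preds in filters:
        cost_cpu = cpu_t + dma_cost * sum(side.get(p) == "gpu" for p in preds)
        cost_gpu = gpu_t + dma_cost * sum(side.get(p) == "cpu" for p in preds)
        side[name] = "cpu" if cost_cpu <= cost_gpu else "gpu"
    return side

pipeline = [("src", 1.0, 5.0, []),
            ("fir", 9.0, 1.5, ["src"]),
            ("sink", 1.0, 4.0, ["fir"])]
print(partition_filters(pipeline, dma_cost=2.0))
# {'src': 'cpu', 'fir': 'gpu', 'sink': 'cpu'}
```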
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Udupa, Abhishek. "Efficient Compilation Of Stream Programs Onto Multi-cores With Accelerators". Thesis, 2009. http://hdl.handle.net/2005/971.

Texto completo
Resumen
Over the past two decades, microprocessor manufacturers have typically relied on wider issue widths and deeper pipelines to obtain performance improvements for single-threaded applications. However, in recent years, with power dissipation and wire delays becoming primary design constraints, this approach can no longer be effectively used to yield performance improvements. Thus processor designers and vendors are universally moving towards multi-core designs. Examples of these are commodity general-purpose multi-core processors, the CellBE accelerator from IBM, and Graphics Processing Units from NVIDIA and ATI. Although these many- and multi-core architectures can provide enormous performance benefits, it is difficult to program for them due to the complexity of writing explicitly parallel code. The ubiquity of computationally intensive media processing applications makes it imperative to consider new programming frameworks and languages that can express parallelism in an easy, portable manner. The StreamIt programming language has been proposed to efficiently exploit parallelism at various levels on general-purpose multi-core architectures and stream processors, and to allow media processing and DSP applications to be developed in an easy and portable fashion. The StreamIt model allows programmers to specify a program as a set of filters connected by FIFO communication channels. The graphs thus specified by StreamIt programs describe task, data and pipeline parallelism, which can potentially be exploited on modern Graphics Processing Units (GPUs); these have emerged as powerful commodity stream processors that support abundant parallelism in hardware. The first part of this thesis deals with the challenges in mapping StreamIt programs to GPUs and proposes an efficient technique to software pipeline the execution of stream programs on GPUs. We formulate this problem, covering both scheduling and the assignment of filters to processors, as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available on GPUs. The proposed scheduling utilizes both the scalar units in the GPU, to exploit data parallelism, and the multiprocessors, to exploit task and pipeline parallelism. We have evaluated our approach on a platform equipped with an NVIDIA GeForce 8800 GTS 512 GPU, and our approach yields a (geometric) mean speedup of 5.02X, with a maximum speedup of 36.83X, across a set of StreamIt benchmarks, with the speedup measured relative to an optimized single-threaded CPU execution. While the approach of software pipelining the execution of stream programs on GPUs is efficient and performs well, it does not utilize the CPU cores to perform useful computation. Further, it does not support programs with stateful filters, which are filters that are not data parallel owing to a dependence between successive firings that is carried through the implicit state of the filter. The second part of the thesis addresses these issues and describes a novel method to orchestrate the execution of a StreamIt program on the multiple cores of a system and on GPUs in a synergistic manner. The proposed approach identifies, using profiling, the relative benefits of executing a task on the superscalar CPU cores and on the accelerator.
We formulate the problem of partitioning the work between the CPU cores and the GPU, taking into account the latencies of data transfers, the limited DMA bandwidth available, and the buffer layout transformations required by the partitioning, as an integrated Integer Linear Program (ILP), which can then be solved by an ILP solver. Since solving an ILP is NP-hard in the general case and may thus require a large amount of time, we also propose an efficient heuristic algorithm for partitioning the work between the CPU and the GPU; it provides solutions that are within 9.05% of the optimal ILP solutions on average across the benchmark suite, while requiring 2–3 orders of magnitude less time than the ILP approach. The partitioned tasks are then software pipelined to execute on the multiple CPU cores and on the Streaming Multiprocessors (SMs) of the GPU. The software pipelining algorithm orchestrates the execution between the CPU cores and the GPU by emitting the code for the CPU and the GPU, together with the code for the required data transfers. Our experiments on a platform with eight CPU cores, of which four were used, and a GeForce 8800 GTS 512 GPU show a (geometric) mean speedup of 6.84X, with a maximum of 51.96X, over single-threaded CPU execution across a set of StreamIt benchmarks.
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Lau, Sin Ki Braundt. "Human centric routing algorithm for urban cyclists and the influence of street network spatial configuration". Master's thesis, 2020. http://hdl.handle.net/10362/95144.

Texto completo
Resumen
Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Science in Geospatial Technologies
Understanding the wayfinding behavior of cyclists aids decision makers in designing better cities in favor of this sustainable active transport. Many have modelled the physical influence of the built environment on wayfinding behavior, cyclist route choices and routing algorithms. Incorporating a cognitive wayfinding approach with Space Syntax techniques not only adds a human-centric element to routing-algorithm modelling, but also opens the door to evaluating the spatial configuration of cities and its effect on cyclist behavior. This thesis combines novel Space Syntax techniques with Graph Theory to develop a reproducible Human Centric Routing Algorithm, and evaluates how the spatial configuration of cities influences modelled wayfinding behavior. Valencia, a concentric gridded city, and Cardiff, with a complex spatial configuration, are chosen as the case study areas. Significant differences in route distributions exist between the cities, suggesting that the spatial configuration of a city influences the modelled routes. Street Network Analysis is used to further quantify such differences and confirms that the simpler spatial configuration of Valencia has higher connectivity, which could facilitate cyclist wayfinding. There are clear implications for urban design: a spatial configuration with higher connectivity indicates legibility, which is key to building resilient and sustainable communities. The methodology demonstrates automatic, scalable and reproducible tools to create a Human Centric Routing Algorithm anywhere in the world. Reproducibility self-assessment (https://osf.io/j97zp/): 3, 3, 3, 2, 1 (Input data, Preprocessing, Methods, Computational Environment and Results).
47

Guo, T. "Real-time analytics for complex structure data". Thesis, 2015. http://hdl.handle.net/10453/38990.

Full text
Abstract
University of Technology Sydney. Faculty of Engineering and Information Technology.
The advancement of data acquisition and analysis technology has resulted in many real-world data sources being dynamic and containing rich content and structured information. More specifically, with the fast development of information technology, many current real-world data sources feature constant change, such as new instances, new nodes and edges, and modifications to node content. Unlike traditional data, which are represented as feature vectors, data with complex relationships are often represented as graphs that denote both the content of the data entries and their structural relationships, where instances (nodes) are not only characterized by their content but are also subject to dependency relationships. Moreover, real-time availability is one of the outstanding features of today's data. Real-time analytics is dynamic analysis and reporting based on data entered into a system before the actual time of use; it emphasizes deriving immediate knowledge from dynamic data sources such as data streams, so knowledge discovery and pattern mining now face complex, dynamic data sources. However, how to combine structure information and node content information for accurate, real-time data mining remains a major challenge. Accordingly, this thesis focuses on real-time analytics for complex structure data. We explore instance correlation in complex structure data and utilize it to make mining tasks more accurate and applicable. Specifically, our objective is to combine node correlation with node content and utilize them for three different tasks: (1) graph stream classification, (2) super-graph classification and clustering, and (3) streaming network node classification. Understanding the role of structured patterns in graph classification: the thesis first reviews existing work on data mining from a complex-structure perspective. We then propose a graph-factorization-based fine-grained representation model whose main objective is to use linear combinations of a set of discriminative cliques to represent graphs for learning. The optimization-oriented factorization approach ensures minimum information loss in the graph representation and also avoids the expensive sub-graph isomorphism validation process. Based on this idea, we propose a novel framework for fast graph stream classification. A new structured-data classification algorithm: the second contribution introduces a new super-graph classification and clustering problem. Owing to the inherently complex structure representation, existing graph classification methods cannot be applied to super-graphs. We propose a weighted random walk kernel that calculates the similarity between two super-graphs by assessing (a) the similarity between the super-nodes of the super-graphs and (b) the common walks of the super-graphs. Our key contributions are: (1) a new super-node and super-graph structure that enriches existing graph representations for real-world applications; (2) a weighted random walk kernel considering node and structure similarities between graphs; (3) a mixed similarity considering the structured content inside super-nodes and the structural dependencies between super-nodes; and (4) an effective kernel-based super-graph classification method with a sound theoretical basis. Empirical studies show that the proposed methods significantly outperform state-of-the-art methods.
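For flavour, here is a minimal numpy sketch of one common way to build such a kernel: a geometric random walk kernel on the direct product graph, with node similarities weighting the walks. The toy adjacency matrices, the similarity matrix S, and the decay factor are invented for the example, and the thesis's actual weighted kernel may differ in its exact weighting scheme.

import numpy as np

# Toy graphs as adjacency matrices (hypothetical), plus a node-similarity
# matrix S with S[i, j] = similarity(node i of G1, node j of G2).
A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
A2 = np.array([[0, 1], [1, 0]], dtype=float)
S = np.array([[1.0, 0.2], [0.3, 0.9], [0.5, 0.5]])

# Walks in the direct product graph correspond to simultaneous walks in
# G1 and G2; the similarity weights make walks through similar node pairs
# count more towards the kernel value.
w = S.reshape(-1)                       # weight of each product-graph node
Wx = np.kron(A1, A2) * np.outer(w, w)   # similarity-weighted product graph

# Geometric series over walk lengths: K = (1/n^2) * 1^T (I - lam*Wx)^-1 1,
# with lam small enough for the series to converge.
lam = 0.1
n = Wx.shape[0]
K = np.ones(n) @ np.linalg.solve(np.eye(n) - lam * Wx, np.ones(n)) / n ** 2
print(K)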
A real-time analytics framework for dynamic complex structure data: for streaming networks, the essential challenge is to properly capture the dynamic evolution of node content and node interactions in order to support node classification. While streaming networks evolve dynamically, over a short temporal period a subset of salient features is essentially tied to the network's content and structure and can therefore be used to characterize the network for classification. To achieve this, we propose to carry out streaming network feature selection (SNF) on the network and use the selected features as a gauge to classify unlabeled nodes. A Laplacian-based quality criterion is proposed to guide the node classification, where the Laplacian matrix is generated from node labels and the network's topology. Node classification is achieved by finding the class label that yields the minimal gauging value with respect to the selected features. By frequently updating the features selected from the network, node classification can quickly adapt to changes in the network for maximal performance gain. Experiments and comparisons on real-world networks demonstrate that the proposed approach captures the dynamics of network structure and node content, and outperforms baseline approaches with significant performance gains.
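As an illustrative sketch of what a Laplacian-based quality criterion for feature selection can look like, the snippet below uses the classical Laplacian Score as a stand-in; the thesis's criterion additionally folds node labels into the Laplacian, and the network and content below are synthetic.

import numpy as np

# Synthetic snapshot of a streaming network: adjacency matrix A and a
# node-content matrix X (one row of features per node). All made up.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))

deg = A.sum(axis=1)                     # node degrees
L = np.diag(deg) - A                    # unnormalized graph Laplacian

def laplacian_score(f):
    """Lower score = the feature varies less across connected nodes."""
    f = f - (f @ deg) / deg.sum()       # degree-weighted mean centering
    return (f @ L @ f) / (f @ (deg * f))

scores = np.array([laplacian_score(X[:, j]) for j in range(X.shape[1])])
selected = np.argsort(scores)[:2]       # keep the two smoothest features
print(scores, selected)

Re-running such a selection as new edges and content arrive is what allows the classifier to track the evolving network, matching the frequent feature updates described above.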
48

RIMA, Matteo. "Il romanzo testamento". Doctoral thesis, 2012. http://hdl.handle.net/11562/396537.

Full text
Abstract
The aim of this doctoral thesis is to identify and define a new, previously untreated literary sub-genre: the "testamentary novel". The term embraces all works of literature written within the "dimension of death", that is, the stage of life in which the thought of death has become dominant. This usually happens for one of three reasons: old age, severe illness, or suicidal intent; each motivation corresponds to a chapter analyzing four different texts (novels, short stories, or comics). The three situations originate three different kinds of narrative: a writer who faces death in old age can write relatively peacefully, knowing he has naturally completed his course; a writer who dies prematurely, of illness, regrets the years he will not live and produces works that vibrate with narrative tension; a writer who voluntarily ends his own life addresses the world as if to defy it, yet writes in a cold, detached style. The chapters are followed by an appendix analyzing three further novels: originally placed in the three initial chapters, they were later set apart because they escaped precise categorization and fitted poorly alongside the others, yet they were too pertinent to ignore, so they are treated in a dedicated section.
Chapter 1. The old writer and death. The novels analyzed are Deux anglaises et le continent (Henri-Pierre Roché, 1956), Mercy of a Rude Stream (Henry Roth, 1994-1998), The Captain Is Out to Lunch and the Sailors Have Taken Over the Ship (Charles Bukowski, 1998) and Ravelstein (Saul Bellow, 2000). Written by aged authors (from Bukowski at 72 to Roth at 89), these four works are entirely or partially autobiographical: Roché fictionalizes a phase of his long-gone youth; Roth retraces, in a four-volume novel of about 1,500 pages, the thirteen years he lived in Harlem between 1914 and 1927; Bukowski keeps an actual diary of his daily life; Bellow gives an account of his friendship with the recently deceased Jewish intellectual Abe Ravelstein. Only Bukowski uses his real name; the other three adopt recognizable alter egos that conceal little or nothing of the characters' real identities.
Chapter 2. The writer and illness. The chapter opens with the last two novels of Leonardo Sciascia, Il cavaliere e la morte (1988) and Una storia semplice (1989); it continues with the shortest text examined in this research, "Nel frattempo", a six-page comics story written and drawn in 1996 by Magnus (Roberto Raviola's pen name), and closes with Le soleil des mourants, written by Jean-Claude Izzo in 1999. These works were produced in the imminence of death (Una storia semplice, "Nel frattempo") or in full awareness that life was about to end (Il cavaliere e la morte, Le soleil des mourants). Although each contains autobiographical elements, none is purely autobiographical: Sciascia writes two detective novels, Magnus a dark comedy, Izzo a dramatic road story. The four protagonists share one thing: all confront illness, whether real (Il cavaliere e la morte, Le soleil des mourants) or metaphorical (Una storia semplice, "Nel frattempo"). Only Magnus's character emerges victorious from this confrontation; the others are defeated to varying degrees (the defeat is total in Le soleil des mourants and Il cavaliere e la morte, partial in Una storia semplice).
Chapter 3. The writer and suicide. The texts analyzed are Le feu follet (Pierre Drieu la Rochelle, 1931), Dissipatio H.G. (Guido Morselli, 1973), "Good Old Neon" (David Foster Wallace, 2004) and Suicide (Édouard Levé, 2008). Written by authors who later took their own lives, these works tell the stories of four suicides: three are biographical accounts of people who really lived (Le feu follet fictionalizes the end of Jacques Rigaut; "Good Old Neon" and Suicide are inspired by the deaths of acquaintances of their authors), while the fourth (Dissipatio H.G.) is entirely fictional. Despite these biographical references, biographical accuracy is never a priority, and the four protagonists are characterized so as to become partial alter egos of the writers. Two of the works (Le feu follet and Suicide) have an extremely realistic background, while the other two (Dissipatio H.G. and "Good Old Neon") unfold in evocative fantastic or science-fictional settings, as if to suggest the authors' desire to abandon the world they still live in.
Appendix. (Un)awareness of dying. The three novels collected here are Palomar (Italo Calvino, 1983), Gli ultimi giorni di Pompeo (Andrea Pazienza, 1987) and Camere separate (Pier Vittorio Tondelli, 1989). The last was written by an author who knew he had AIDS and would not survive long (though the nature of the illness allowed him to hope the end was still distant); the other two were written by authors in good health who did not suspect they would die shortly afterwards, yet at the end of their novels both kill their protagonists (both alter egos). The appendix sets out to identify the connection, evident or hidden, between the fate of the character and that of the author.
The condition in which one reaches the end of life inevitably shapes the approach to writing. The relative serenity of dying in old age leads the old writer mainly toward openly autobiographical narrative that recalls the past, so that he can relive it once more before leaving. The writer who dies prematurely and blamelessly, of illness, looks with regret at the future years he will not live: writing in this state of mind produces works with a didactic component that aim to convey a universal message; the desire to reach a wide readership leads him to genre fiction, the underlying intent being to exert a form of control over a future he will not witness in person. The suicidal writer, finally, turns his last novel into a long farewell letter: he shows his will to escape the world by inventing elaborate imaginary scenarios or by describing a reality in which he feels displaced and out of place; either way, he wants to flee this world to explore the other one. Whatever kind of death awaits them, writers who have reached the final stage of life use neither metaphors nor circumlocutions: in their works they plainly present their own situation. Hence the protagonists of their testamentary novels are old men reflecting on their approaching death, or mortally ill people, or young men with clear suicidal tendencies: in short, characters who are total or partial alter egos of their creators.