Dissertations on the topic "[519.1"
Browse the top 50 dissertations for research on the topic "[519.1".
Villamizar, Vergel Michael Alejandro. "Efficient approaches for object class detection." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/96775.
Computer vision, and more specifically object recognition, has demonstrated impressive progress in recent years that has led to the emergence of new and useful technologies that facilitate daily activities and improve some industrial processes. Currently, we can find algorithms for object recognition in computers, video cameras, mobile phones, tablets or websites, for the accomplishment of specific tasks such as face detection, gesture and scene recognition, detection of pedestrians, augmented reality, etc. However, these applications are still open problems that receive more attention in the computer vision community each year. This is demonstrated by the fact that hundreds of articles addressing these problems are published in international conferences and journals annually. In a broader view, recent work attempts to improve the performance of classifiers, to face new and more challenging detection problems, and to increase the computational efficiency of the resulting algorithms so that they can be implemented commercially in diverse electronic devices. Although robust and reliable approaches for detecting objects exist nowadays, most of these methods have a high computational cost that makes their application to real-time tasks impossible. In particular, the computational cost and performance of any recognition system is determined by the type of features, the recognition method and the methodology used for localizing objects within images. The main objective of these methods is to produce detection systems that are not only effective but also efficient. Throughout this dissertation, different approaches are presented for addressing the detection of objects efficiently and discriminatively in diverse and difficult imaging conditions. Each of the proposed approaches is especially designed for, and focuses on, a different detection problem, such as object categorization, detection under in-plane rotations, or the detection of objects from multiple views. The proposed methods combine several ideas and techniques to obtain object detectors that are both highly discriminative and efficient. This is demonstrated experimentally on several state-of-the-art databases, where our results are competitive with other recent and successful methods. In particular, this dissertation studies and develops fast features, learning algorithms, methods for reducing the computational cost of the classifiers, and integral image representations for speeding up feature computation.
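The closing sentence mentions integral image representations for speeding up feature computation. As a purely illustrative sketch (not the author's code), the following NumPy snippet shows the basic trick: after one precomputation pass, the sum of any rectangular region is obtained with four array lookups.

```python
import numpy as np

def integral_image(img):
    """Cumulative sums over rows and columns: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) using four lookups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.random.rand(240, 320)
ii = integral_image(img)
assert np.isclose(box_sum(ii, 10, 20, 60, 90), img[10:60, 20:90].sum())
```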
Martínez, Ruiz Alba. "Patent value models: partial least squares path modelling with mode C and few indicators." Doctoral thesis, Universitat Politècnica de Catalunya, 2011. http://hdl.handle.net/10803/116489.
Two general objectives were set in this thesis. First, to establish a PLS model for patent value and to investigate the causal relationships among the variables that determine patent value. Second, to investigate the performance of the Partial Least Squares (PLS) Path Modelling procedure with Mode C in the context of patent value models. The thesis is organized into 10 chapters. Chapter 1 presents an introduction to the thesis, including the objectives, the scope of the research and the structure of the document. Chapter 2 gives a general overview of the different approaches to patent valuation from the perspective of technological change. The necessary definitions related to patent documents and patent indicators are also provided. Chapter 3 describes the patent sample used in this research, presenting the criteria used to retrieve the data, the procedure followed to compute the patent indicators and the statistical description of the sample. Chapter 4 provides an introduction to structural equation models (SEMs), including their origins, basic background and recent developments. Guidelines for model specification and the modelling process for SEMs are also given. This chapter discusses with special emphasis the determination of the reflective or formative nature of the measurement models. Chapter 5 presents the main PLS algorithms: NIPALS, PLS Regression and PLS Path Modelling. Two implementations of PLS Path Modelling are presented: the Lohmöller and the Wold procedures. In addition, previous results are analysed concerning the sensitivity of the procedure to the initial value of the weight vectors and to the weighting scheme, as well as the properties of the algorithm, such as consistency, consistency "at large" and convergence. Some extensions of the procedure and its relation to other methods are also briefly reviewed. The chapter ends by describing the validation techniques. Chapter 6 provides evidence on the accuracy and precision with which PLS Path Modelling with Mode C recovers true values in SEMs with few indicators per construct. Monte Carlo simulations and computational experiments are carried out to study the performance of the algorithm. Chapter 7 deals with the formulation and estimation of patent value models. This comprises the identification and definition of the observable and unobservable variables, the determination of the blocks of manifest variables and the structural relationships, the specification of the first- and second-order models of patent value, and their estimation with PLS Path Modelling. In Chapter 8, the evolution of patent value over time is investigated using longitudinal SEMs. Two set-ups are explored: the first longitudinal model considers time-dependent manifest variables and the second includes time-dependent latent variables. The SEMs are estimated using PLS Path Modelling. In Chapter 9, the Two-Step PLS Path Modelling with Mode C (TsPLS) procedure is implemented to study nonlinear and interaction effects among formative constructs. Monte Carlo simulations are carried out to generate data and determine the accuracy and precision with which this approach recovers true values.
This chapter includes an application of the procedure to the patent models. Finally, Chapter 10 provides a summary of the conclusions and guidelines for future research. The main contribution of this thesis is to propose PLS models for patent value, and around this objective we have also contributed in two main areas. Contributions in the area of Technological Change comprise: (1) Empirical evidence of the role of the knowledge stock, the technological scope and the international scope as determinants of patent value and technological usefulness. A stable pattern of path coefficients was found when estimating the models with samples from different time periods. (2) Conceptualizing patent value as a potential value and a recognized value, and providing evidence that the potential value is small compared with the value that patents acquire later on. (3) Evidence of the importance of considering the longitudinal nature of the indicators in the patent valuation problem, especially of the citations received, the most widely used value indicator. (4) Introducing a multidimensional perspective into the patent valuation problem. This new approach can offer a robust understanding of the different variables that determine patent value. Contributions in the area of PLS Path Modelling comprise: (5) Empirical evidence on the performance of PLS Path Modelling with Mode C. Properly implemented, the procedure can adequately capture some of the complex dynamic relationships in the models. Our research shows that PLS Path Modelling with Mode C behaves according to the theoretical framework established for PLS procedures and PLS models (Wold, 1982; Krämer, 2006; Hanafi, 2007; Dijkstra, 2010). That is, (a) PLS estimates are always biased, (b) the inner relationships are underestimated, (c) the outer relationships are overestimated, (d) Mode A lacks the monotone convergence property, (e) Mode B has the monotone convergence property. (6) Empirical evidence on the convergence "at large" of PLS Path Modelling with Mode A. (7) Empirical evidence for formative models with few indicators. (8) Empirical evidence on the performance of the Two-Step PLS Path Modelling with Mode C procedure for estimating nonlinear and interaction effects among formative constructs.
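For readers unfamiliar with PLS path models, the following is a minimal, generic LaTeX sketch of the two layers such a model relates; it only illustrates the standard inner/outer structure and composite estimation, not the exact specification or the Mode C weighting scheme used in the thesis.

```latex
% Generic PLS path model relations (illustrative sketch only)
% Inner (structural) model: each endogenous latent variable is a linear
% combination of its predecessor latent variables plus a disturbance term
\xi_j = \sum_{i \in \mathrm{pred}(j)} \beta_{ji}\, \xi_i + \zeta_j
% Outer (formative) measurement: the latent variable is formed by its indicators
\xi_j = \sum_{k=1}^{K_j} w_{jk}\, x_{jk} + \delta_j
% PLS estimates latent scores as standardized weighted composites of the indicators
\hat{\xi}_j \;\propto\; \sum_{k=1}^{K_j} \hat{w}_{jk}\, x_{jk}
```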
Perarnau, Llobet Guillem. "Random combinatorial structures with low dependencies : existence and enumeration." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/362940.
In this thesis we study different problems in combinatorics and graph theory by means of the probabilistic method. This method, introduced by Erdős, has become an extremely powerful tool for providing existential proofs for certain problems in different mathematical branches where other methods had failed utterly. One of its main concerns is to study the behavior of random variables. In particular, one common situation arises when these random variables count the number of bad events that occur in a combinatorial structure. The idea of the Poisson paradigm is to estimate the probability that none of these bad events happens at the same time when the dependencies among them are weak or rare. If this is the case, this probability should behave similarly to the case where all the events are mutually independent. This idea is reflected in several well-known tools, such as the Lovász Local Lemma or Suen's inequality. The goal of this thesis is to study these techniques by setting new versions or refining the existing ones for particular cases, as well as providing new applications of them for different problems in combinatorics and graph theory. Next, we enumerate the main contributions of this thesis. The first part of this thesis extends a result of Erdős and Spencer on Latin transversals [1]. They showed that an integer matrix such that no number appears many times admits a Latin transversal. This is equivalent to studying rainbow matchings of edge-colored complete bipartite graphs. Under the same hypothesis, we provide enumerative results on such rainbow matchings. The second part of the thesis deals with identifying codes, a set of vertices such that all vertices in the graph have distinct neighborhoods within the code. We provide bounds on the size of a minimal identifying code in terms of the degree parameters and partially answer a question of Foucaud et al. In a different chapter of the thesis, we show that any dense enough graph has a very large spanning subgraph that admits a small identifying code. In some cases, proving the existence of a certain object is trivial. However, the same techniques allow us to obtain enumerative results. The study of permutation patterns is a good example of that. In the third part of the thesis we devise a new approach to estimate how many permutations of a given length avoid a consecutive copy of a given pattern. In particular, we provide upper and lower bounds for them. One of the consequences derived from our approach is a proof of the CMP conjecture, stated by Elizalde and Noy, as well as some new results on the behavior of most patterns. In the last part of this thesis, we focus on the Lonely Runner Conjecture, posed independently by Wills and Cusick, which has multiple applications in different mathematical fields. This well-known conjecture states that for any set of runners running along the unit circle with constant, pairwise different speeds and starting at the same point, there is a moment when all of them are far enough from the origin. We improve the result of Chen on the gap of loneliness by studying the time when two runners are close to the origin. We also show an invisible-runner-type result, extending a result of Czerwinski and Grytczuk.
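The abstract refers to the Lovász Local Lemma as one of the tools reflecting the Poisson paradigm. For reference, its standard symmetric form can be stated as follows (this is the textbook statement, not a result of the thesis):

```latex
% Symmetric Lovász Local Lemma (standard textbook form)
\text{If } \Pr[A_i] \le p \text{ for every } i, \text{ each } A_i \text{ is mutually independent of all but at most } d
\text{ of the other events, and } e\, p\, (d+1) \le 1, \text{ then }
\Pr\!\Big[\bigcap_{i=1}^{n} \overline{A_i}\Big] > 0 .
```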
Mitjana, Margarida. "Propagació d'informació en grafs i digrafs que modelen xarxes d'interconnexió simètriques." Doctoral thesis, Universitat Politècnica de Catalunya, 1999. http://hdl.handle.net/10803/315841.
Salas, Piñón Julián. "On the structure of graphs without short cycles." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/124508.
Castellanos, Carrazana Abel. "Performance model for hybrid MPI+OpenMP master/worker applications." Doctoral thesis, Universitat Autònoma de Barcelona, 2014. http://hdl.handle.net/10803/283403.
In the current environment, various branches of science need auxiliary high-performance computing to obtain results in relatively short times. This is mainly due to the high volume of information that needs to be processed and the computational cost demanded by these calculations. The benefit of performing this processing using distributed and parallel programming mechanisms is that it achieves shorter waiting times to obtain the results. To support this, there are basically two widespread programming models: the message-passing model based on the standard MPI libraries, and the shared-memory model using OpenMP. Hybrid applications are those that combine both models in order to exploit the specific parallelism potential of each one in each case. Unfortunately, experience has shown that using this combination of models does not necessarily guarantee an improvement in the behavior of applications. There are several parameters that must be considered to determine the configuration of the application that provides the best execution time. The number of processes to be used, the number of threads on each node, the data distribution among processes and threads, and so on, are parameters that seriously affect the performance of the application. On the one hand, the appropriate value of such parameters depends on the architectural features of the system (communication latency, communication bandwidth, cache memory size and architecture, computing capabilities, etc.), and, on the other hand, on the features of the application. The main contribution of this thesis is a novel technique for predicting the performance and efficiency of parallel hybrid master/worker applications. This technique is based on model-based regression trees, known from the field of machine learning. The experimental results obtained allow us to be optimistic about the use of this algorithm both for predicting these metrics and for selecting the best application execution parameters.
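As a rough illustration of the prediction idea described above (and not the thesis' actual model-based regression trees), a plain regression tree can map configuration parameters to measured runtimes; all feature names and numbers below are made up.

```python
# Illustrative sketch: predict execution time of a hybrid MPI+OpenMP master/worker
# run from its configuration parameters with a regression tree (hypothetical data).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# columns: [mpi_processes, omp_threads_per_process, chunk_size_kb]
X = np.array([[2, 4, 64], [4, 4, 64], [4, 8, 128], [8, 8, 128],
              [8, 16, 256], [16, 8, 256], [16, 16, 512], [32, 16, 512]])
y = np.array([520.0, 270.0, 150.0, 95.0, 70.0, 60.0, 48.0, 41.0])  # seconds (made up)

model = DecisionTreeRegressor(max_depth=3).fit(X, y)
candidate = np.array([[16, 16, 256]])
print("predicted runtime (s):", model.predict(candidate)[0])
```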
De, la Cruz Raúl. "Leveraging performance of 3D finite difference schemes in large scientific computing simulations." Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/325139.
Gone are the days when engineers and scientists conducted their experiments empirically. During those decades, real tests were carried out to verify the robustness and reliability of upcoming products and to validate theoretical models. With the arrival of the computational era, scientific computing has become a feasible alternative to empirical methods in terms of effort, cost and reliability. Supercomputers have reduced simulation times and improved numerical results thanks to domain refinement. Several numerical methods coexist for solving Partial Differential Equations (PDEs). Methods such as Finite Elements (FE) and Finite Volumes (FV) are well suited to problems where unstructured meshes are common. Unfortunately, this flexibility does not come for free: these schemes incur higher latencies due to irregular data accesses. In contrast, the Finite Difference (FD) scheme has proven to be an efficient solution when structured meshes fit the requirements. This thesis focuses on improving FD schemes to boost the performance of scientific computing simulations. Different techniques are proposed, such as the Semi-stencil, a new algorithm that increases the FLOP/Byte ratio for medium- and high-order stencil operators by reducing accesses and promoting data reuse. The algorithm is orthogonal and can be combined with techniques such as spatial or time blocking, adding further improvements. The new trends towards symmetric multi-processing (SMP) systems, where tens of cores are replicated on the same processor, pose new challenges due to the exacerbation of the memory-bandwidth problem. To alleviate this problem, our research focuses on strategies to reduce pressure on the cache hierarchy, particularly when several threads share resources due to Simultaneous Multi-Threading (SMT). We introduce several domain-decomposition schedulers to balance the load, ensuring near-optimal results without jeopardizing overall performance. We combine these schedulers with spatial-blocking and auto-tuning techniques, exploring the parameter space and reducing misses in the last-level cache. As an alternative to the brute-force methods used in auto-tuning, where a parameter space must be traversed to find a candidate, performance models are a feasible solution. Performance models can predict performance on different architectures, selecting near-optimal parameters almost instantaneously. In this thesis we devise a flexible and extensible performance model for stencils. The model is able to support multi-core architectures, including complex features such as prefetchers, SMT and algorithmic optimizations. Our model can be used not only to predict execution times but also to decide the best algorithmic parameters. Furthermore, it can be included in run-time optimizers to decide the best SMT configuration. Some industries rely on FD techniques for their codes. However, not all the aspects that arise in industry have been subject to research.
In this regard, we have designed and implemented from scratch an FD infrastructure that covers the most important features an industrial application must include. Some of the optimization techniques proposed in this thesis have been included to contribute to the overall performance at an industrial level. We show results for a couple of strategic industrial applications: an atmospheric transport model that simulates the dispersion of volcanic ash, and a seismic imaging model used in the oil and gas industry to identify hydrocarbon-rich reservoirs.
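The kernels discussed above are classical finite-difference stencils. The following minimal NumPy sketch of a 3D 7-point stencil sweep (illustrative grid size and coefficients, unrelated to the thesis' Semi-stencil algorithm) shows the memory-bound access pattern such optimizations target.

```python
# Minimal sketch of a 3D 7-point finite-difference (Laplacian-like) stencil sweep.
import numpy as np

def stencil_7pt(u, c0=-6.0, c1=1.0):
    """One Jacobi-style sweep over the interior points of a 3D grid."""
    out = np.zeros_like(u)
    out[1:-1, 1:-1, 1:-1] = (
        c0 * u[1:-1, 1:-1, 1:-1]
        + c1 * (u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1]
                + u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1]
                + u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2])
    )
    return out

u = np.random.rand(64, 64, 64)
v = stencil_7pt(u)   # each output point reads 7 neighbours: low FLOP/Byte ratio
```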
Sales, i. Inglès Vicenç. "Aportaciones al estudio de los sistemas electorales." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/385426.
An important question that modern societies must decide is the election of individuals to represent them and to take certain decisions on their behalf. The mechanisms in charge of doing this are called electoral systems. In fact, there are many of them and they are very different. In this work, we present some ideas for analysing them mathematically. The first chapter is a global analysis of electoral systems. It begins with the definition of an electoral system, with the help of probability theory, and the introduction of the most important simple examples. We then define two operations on electoral systems, obtaining two different generalizations of each of the examples. We also introduce some important properties that electoral systems may possess (superadditivity, monotonicity, growth and stability) and study which of the examples satisfy them, as well as how these properties behave with respect to the previously defined operations. Finally, stability makes it possible to define majority and proportional electoral systems. In the second chapter we study electoral systems from the individual point of view. To this end, we introduce electoral expectations, obtained by fixing the vote vector of an arbitrary candidacy. We then study their relation to the operations defined earlier, and we end the chapter by introducing some individual-type properties that electoral systems may possess and analysing what happens when the previous operations are considered. The third chapter deals with a parameter introduced to evaluate whether or not an arbitrary candidacy is favoured by an electoral system. This is done by taking the mean of the electoral expectations introduced in the second chapter, which yields the concept of mean electoral expectation. We close the chapter by again studying its behaviour with respect to the operations and the examples introduced. In the fourth chapter we analyse three questions about weighted majority games that will be used in the following chapter: another way of defining them, a new operation between them, and their convergence. Finally, in the fifth chapter we replace the number of representatives of each candidacy by its Shapley-Shubik power index and study electoral systems using this new indicator. In this way we obtain the concept of power system and, analogously, those of power expectation and mean power expectation. We also introduce some new properties for each of the previous concepts and analyse their relation to the analogous properties of electoral systems.
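The fifth chapter replaces seat counts by the Shapley-Shubik power index. As a small illustration of that index (the standard definition, not the thesis' construction), the sketch below computes it for a hypothetical weighted majority game by enumerating orderings and counting pivotal positions.

```python
# Illustrative sketch: Shapley-Shubik power index of a weighted majority game,
# computed by brute-force enumeration (fine for a handful of parties).
from itertools import permutations

def shapley_shubik(weights, quota):
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        running = 0
        for player in order:
            running += weights[player]
            if running >= quota:        # this player turns the coalition winning
                pivots[player] += 1
                break
    total = sum(pivots)                 # equals n! (one pivot per ordering)
    return [p / total for p in pivots]

# Hypothetical seat weights and a simple-majority quota
print(shapley_shubik([45, 35, 15, 5], quota=51))
```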
Font, Valverde Martí. "Bayesian analysis of textual data." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/384329.
In this thesis I develop statistical methodology for analyzing discrete data to be applied to stylometry problems, always with the Bayesian approach in mind. The statistical analysis of literary style has long been used to characterize the style of texts and authors, and to help settle authorship attribution problems. Early work in the literature used word length, sentence length, and the proportion of nouns, articles, adjectives or adverbs to characterize literary style. I use count data that range from word frequency counts to the simultaneous analysis of word length counts and counts of the most frequent function words. All of them are characteristic features of the style of an author and at the same time rather independent of the context in which he writes. Here we introduce a Bayesian analysis of word frequency counts, which have a reverse J-shaped distribution with extraordinarily long upper tails. It is based on extending Sichel's non-Bayesian methodology for frequency count data using the inverse Gaussian Poisson model. The model is checked by exploring the posterior distribution of the Pearson errors and by implementing posterior predictive consistency checks. The posterior distribution of the inverse Gaussian mixing density also provides a useful interpretation, because it can be seen as an estimate of the vocabulary distribution of the author, from which measures of richness and of diversity of the author's writing can be obtained. An alternative analysis is proposed based on the inverse Gaussian-zero truncated Poisson mixture model, which is obtained by switching the order of the mixing and truncation stages. An analysis of the heterogeneity of the style of a text is then proposed that strikes a compromise between change-point analysis, which looks at sudden changes in style, and cluster analysis, which does not take order into consideration, by incorporating the fact that parts of the text that are close together are more likely to belong to the same author than parts of the text that are far apart. The approach is illustrated by revisiting the authorship attribution of Tirant lo Blanc. A statistical analysis of the heterogeneity of literary style in a set of texts that simultaneously uses different stylometric characteristics, like word length and the frequency of function words, is also proposed. It clusters the rows of all contingency tables simultaneously into groups with homogeneous style based on a finite mixture of sets of multinomial models. This has some advantages over the usual heuristic cluster analysis approaches, as it naturally incorporates the text size, the discrete nature of the data, and the dependence between categories. Everything is illustrated with the analysis of the style in plays by Shakespeare, El Quijote, and Tirant lo Blanc. Finally, authorship attribution and verification problems, which are usually treated separately, are treated jointly. That is done by assuming an open-set classification framework for attribution problems, contemplating the possibility that none of the candidate authors, with training texts known to have been written by them, is the author of the disputed texts. The verification problem then becomes a special case of the attribution problem. A formal Bayesian multinomial model for this more general authorship attribution is given and a closed-form solution for it is derived.
The approach to the verification problem is illustrated by exploring whether a court ruling sentence could have been written by the judge who signs it or not, and the approach to the attribution problem is illustrated by revisiting the authorship attribution.
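To make the closed-form multinomial idea concrete, here is a hedged sketch of a Dirichlet-multinomial comparison between two candidate authors; the counts, the choice of prior, and the use of training counts as pseudo-counts are illustrative assumptions, not the model derived in the thesis.

```python
# Hedged sketch of a Dirichlet-multinomial authorship comparison (made-up counts).
import numpy as np
from scipy.special import gammaln

def log_dirichlet_multinomial(counts, alpha):
    """log p(counts | alpha), marginalizing the multinomial probabilities
    (the multinomial coefficient cancels when comparing authors)."""
    counts = np.asarray(counts, dtype=float)
    a0, n = alpha.sum(), counts.sum()
    return (gammaln(a0) - gammaln(a0 + n)
            + np.sum(gammaln(alpha + counts) - gammaln(alpha)))

# word-category counts (e.g. frequent function words) in the disputed text
disputed = np.array([30, 12, 7, 51])
# training counts for two candidate authors, folded into the prior as pseudo-counts
author_a = np.array([300, 110, 80, 510])
author_b = np.array([150, 200, 60, 590])
prior = np.ones(4)

log_post = np.array([log_dirichlet_multinomial(disputed, prior + author_a),
                     log_dirichlet_multinomial(disputed, prior + author_b)])
probs = np.exp(log_post - np.logaddexp.reduce(log_post))  # equal prior over authors
print("posterior P(author A), P(author B):", probs)
```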
Argollo, de Oliveira Dias Júnior Eduardo. "Performance prediction and tuning in a multi-cluster environment." Doctoral thesis, Universitat Autònoma de Barcelona, 2006. http://hdl.handle.net/10803/5761.
Clusters of computers represent an alternative for speeding up scientific applications. Nevertheless, applications grow in complexity and need more resources. Joining distributed clusters over the Internet into a multi-cluster can provide these resources.
A problem in reaching effective collaboration between multiple clusters is the increase in computation and communication heterogeneity. This factor increases the complexity of such a system and makes it harder to use.
The goal of this thesis is to reduce the execution time of applications, originally written for a single cluster, by efficiently using a multi-cluster. In order to reach this goal we propose a system architecture, an analytical model, and a performance and tuning methodology.
The proposed system architecture aims to obtain a multi-cluster virtual machine that is transparent to the application and provides scalability and robustness, tolerating possible faults in the Internet communication between clusters. This architecture is organized around a hierarchical master-worker scheme with communication managers. Communication managers are a key element responsible for robustness, security and transparency in the communication between clusters over the Internet.
The analytical performance model was developed to estimate the execution time and efficiency of an application executing in a multi-cluster. The precision of the estimations is over 90%.
The proposed performance prediction and application tuning methodology is a procedure that defines the steps to predict the execution time and efficiency, to guarantee an efficiency threshold and to guide on the application tuning, evaluating the execution bottlenecks.
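As a purely illustrative back-of-the-envelope sketch of what an analytical master/worker estimate can look like (not the thesis' model, whose reported precision is over 90%), one can bound the execution time by the dominant of computation and communication; every number and the formula itself are assumptions.

```python
# Toy analytical estimate for a heterogeneous master/worker multi-cluster run.
def estimate_time(total_work_mflop, workers_mflops, task_mb, tasks, bw_mbps):
    compute = total_work_mflop / sum(workers_mflops)    # perfectly balanced compute
    communicate = tasks * task_mb * 8.0 / bw_mbps       # serialized master link
    return max(compute, communicate)                    # dominant phase wins

workers = [800.0] * 16 + [400.0] * 16                   # two clusters, different speeds
t = estimate_time(total_work_mflop=2.0e6, workers_mflops=workers,
                  task_mb=0.5, tasks=4000, bw_mbps=100.0)
efficiency = (2.0e6 / sum(workers)) / t                  # ideal time / estimated time
print(f"estimated time: {t:.1f} s, estimated efficiency: {efficiency:.2f}")
```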
Barrera, Campo José Fernando. "Multimodal Stereo from Thermal Infrared and Visible Spectrum." Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/117596.
Recent advances in thermal infrared imaging (LWIR) have allowed its use in applications beyond the military domain. Nowadays, this new sensor family is included in diverse technical and scientific applications. These sensors offer features that facilitate tasks such as the detection of pedestrians, hot spots, and differences in temperature, among others, which can significantly improve the performance of systems where persons are expected to play the principal role, for instance video surveillance, monitoring, and pedestrian detection. This dissertation poses the following question: could a pair of sensors measuring different bands of the electromagnetic spectrum, such as the visible and thermal infrared, provide depth information? Although this is a complex question, we show that a system with those characteristics is possible, as well as its advantages, drawbacks, and potential opportunities. The fusion and matching of data coming from different sensors, such as the emissions registered in the visible and infrared bands, represent a special challenge, because it has been shown that these signals are weakly correlated, indeed uncorrelated. Therefore, many traditional techniques of image processing and computer vision are not helpful, requiring adjustments in order to perform correctly in each modality. This research carries out an experimental study that compares different cost functions and matching approaches in order to build a multimodal stereo system. Furthermore, the problems common to visible/visible and infrared/visible stereo are identified, especially in outdoor scenes. A contribution of this dissertation is the isolation achieved between the different stages that compose a multimodal stereo system. Our framework summarizes the architecture of a generic stereo algorithm at different levels: computational, functional, and structural, and it can be extended toward high-level (semantic) fusion and high-order priors. The proposed framework is intended to explore novel multimodal stereo matching approaches, going from sparse to dense representations (both disparity and depth maps). Moreover, context information is added in the form of priors and assumptions. Finally, this dissertation shows a promising way toward the integration of multiple sensors for recovering three-dimensional information.
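Since the abstract stresses that visible and infrared intensities are essentially uncorrelated, window-based costs built on mutual information are a common choice in multimodal matching. The snippet below is a generic illustration of such a cost, not the dissertation's implementation.

```python
# Sketch of a mutual-information matching cost between a visible and a thermal patch.
import numpy as np

def mutual_information(a, b, bins=16):
    hist_2d, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

visible_patch = np.random.rand(32, 32)
thermal_patch = np.random.rand(32, 32)
cost = -mutual_information(visible_patch, thermal_patch)  # lower cost = better match
```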
Carmona, Mejías Ángeles. "Grafos y digrafos con máxima conectividad y máxima distancia conectividad." Doctoral thesis, Universitat Politècnica de Catalunya, 1995. http://hdl.handle.net/10803/6716.
Muntaner, Batlle Francesc Antoni. "Magic graphs." Doctoral thesis, Universitat Politècnica de Catalunya, 2001. http://hdl.handle.net/10803/7017.
If a graph G admits a super edge magic labeling, then G is called a super edge magic graph. The thesis is mainly devoted to studying the set of graphs which admit super edge magic labelings, as well as to establishing and studying relations with other well known labelings.
For instance, graceful and harmonic labelings, among others, since many relations among labelings can be obtained using super edge magic labelings as the link.
In the thesis we also provide a new approach to the already famous conjecture that claims that every tree is super edge magic. We attack this problem by finding, for any given tree T, a super edge magic tree T' that contains T as a subgraph, where the order of T' is not too large compared with the order of T.
A similar problem, in the sense of finding small host super edge magic graphs for certain types of graphs, is completely solved in the thesis and reads as follows.
Problem: Find the smallest order of a connected super edge magic graph G that contains the complete graph Kn as a subgraph.
The solution of this problem is of particular interest since it relates super edge magic labelings to the additive number theoretical concept of a weak Sidon set, also known as a well spread set. In fact, this is not the only time that this concept appears in the thesis.
Also when studying the super edge magic deficiency, additive number theory and in particular well spread sets have proven to be very useful. The super edge magic deficiency of a graph is a way of measuring how close a graph is to being super edge magic.
Properly speaking, the super edge magic deficiency of a graph G is defined to be the minimum number of isolated vertices whose union with G makes the resulting graph super edge magic. If no matter how many isolated vertices we add to G the resulting graph is never super edge magic, then the super edge magic deficiency is defined to be infinity. In the thesis, we compute the super edge magic deficiency of many important families of graphs and we also provide some general results involving this concept.
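A small sketch of the definition used throughout this entry may help: a labeling of V(G) ∪ E(G) with the integers 1, …, |V| + |E| is super edge magic when the vertex labels are exactly 1, …, |V| and every edge uv satisfies f(u) + f(v) + f(uv) = k for a single constant k. The checker below and the path example are illustrative only.

```python
# Check whether a given labeling of a graph is super edge magic.
def is_super_edge_magic(vertex_labels, edge_labels):
    p, q = len(vertex_labels), len(edge_labels)
    labels = list(vertex_labels.values()) + list(edge_labels.values())
    if sorted(labels) != list(range(1, p + q + 1)):
        return False                          # not a bijection onto {1, ..., p+q}
    if sorted(vertex_labels.values()) != list(range(1, p + 1)):
        return False                          # vertices must take the smallest labels
    sums = {vertex_labels[u] + vertex_labels[v] + k
            for (u, v), k in edge_labels.items()}
    return len(sums) == 1                     # a single magic constant

# Path a-b-c (a tree) with a known super edge magic labeling, magic constant 9
vertices = {"a": 1, "b": 3, "c": 2}
edges = {("a", "b"): 5, ("b", "c"): 4}
print(is_super_edge_magic(vertices, edges))   # True
```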
Finally, and in order to bring this document to its end, I will just mention that many examples that improve the clarity of the thesis and makes it easy to read, can be found along the hole work.
Roca, Vilà Jordi. "Constancy and inconstancy in categorical colour perception." Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/117476.
To recognise objects is perhaps the most important task an autonomous system, either biological or artificial, needs to perform. In the context of human vision, this is partly achieved by recognizing the colour of surfaces despite changes in the wavelength distribution of the illumination, a property called colour constancy. Correct surface colour recognition may be adequately accomplished by colour category matching without the need to match colours precisely; therefore categorical colour constancy is likely to play an important role for object identification to be successful. The main aim of this work is to study the relationship between colour constancy and categorical colour perception. Previous studies of colour constancy have shown the influence of factors such as the spatio-chromatic properties of the background, individual observers' performance, semantics, etc. However, there is very little systematic study of these influences. To this end, we developed a new approach to colour constancy which includes individual observers' categorical perception, the categorical structure of the background, and their interrelations, resulting in a more comprehensive characterization of the phenomenon. In our study, we first developed a new method to analyse the categorical structure of 3D colour space, which allowed us to characterize individual categorical colour perception as well as quantify inter-individual variations in terms of the shape and centroid location of 3D categorical regions. Second, we developed a new colour constancy paradigm, termed chromatic setting, which allows measuring the precise location of nine categorically-relevant points in colour space under immersive illumination. Additionally, we derived from these measurements a new colour constancy index which takes into account the magnitude and orientation of the chromatic shift, memory effects and the interrelations among colours, and a model of colour naming tuned to each observer/adaptation state. Our results lead to the following conclusions: (1) There exist large inter-individual variations in the categorical structure of colour space, and thus colour naming ability varies significantly, but this is not well predicted by low-level chromatic discrimination ability; (2) Analysis of the average colour naming space suggested the need for three additional basic colour terms (turquoise, lilac and lime) for optimal colour communication; (3) Chromatic setting improved the precision of more complex linear colour constancy models and suggested that mechanisms other than cone gain might be best suited to explain colour constancy; (4) The categorical structure of colour space is broadly stable under illuminant changes for categorically balanced backgrounds; (5) Categorical inconstancy exists for categorically unbalanced backgrounds, thus indicating that categorical information perceived in the initial stages of adaptation may constrain further categorical perception.
Fernandez, Alonso Eduard. "Offloading Techniques to Improve Performance on MPI Applications in NoC-Based MPSoCs." Doctoral thesis, Universitat Autònoma de Barcelona, 2014. http://hdl.handle.net/10803/284889.
Future embedded Systems-on-Chip (SoC) will probably be made up of tens or hundreds of heterogeneous Intellectual Property (IP) cores, which will execute one parallel application or even several applications running in parallel. These systems are possible due to the constant evolution in technology that follows Moore's law, which will lead us to integrate more transistors on a single die, or the same number of transistors on a smaller die. In embedded MPSoC systems, NoCs can provide a flexible communication infrastructure in which several components such as microprocessor cores, MCUs, DSPs, GPUs, memories and other IP components can be interconnected. In this thesis, we first present a complete development process created for developing MPSoCs on reconfigurable clusters by complementing the current SoC development process with additional steps to support parallel programming and software optimization. This work systematically explains problems and solutions encountered in achieving an FPGA-based MPSoC following our systematic flow, and offers tools and techniques to develop parallel applications for such systems. Additionally, we review several programming models for embedded MPSoCs, propose the adoption of MPI for such systems, and show some implementations created in this thesis over shared and distributed memory architectures. Finally, the focus is set on the overhead produced by the MPI library and on finding solutions to minimize this overhead and thus accelerate the execution of the application, offloading some parts of the software stack to the Network Interface Controller.
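Since the thesis proposes adopting MPI as the programming model for such systems, the following mpi4py sketch shows the master/worker pattern in its simplest form; it is a generic desktop illustration (run with at least two ranks, e.g. `mpirun -n 4`), not code for the custom NoC-based MPI layer discussed above.

```python
# Minimal mpi4py master/worker sketch: the master scatters index chunks,
# the workers return partial sums of squares, the master aggregates.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:                                   # master
    chunks = [list(range(i, 100, size - 1)) for i in range(size - 1)]
    for worker, chunk in enumerate(chunks, start=1):
        comm.send(chunk, dest=worker, tag=1)
    results = [comm.recv(source=w, tag=2) for w in range(1, size)]
    print("total:", sum(results))
else:                                           # worker
    data = comm.recv(source=0, tag=1)
    comm.send(sum(x * x for x in data), dest=0, tag=2)
```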
Gong, Wenjuan. "3D Motion Data aided Human Action Recognition and Pose Estimation." Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/116189.
In this work, we explore human action recognition and pose estimation problems. Unlike traditional work that learns from 2D images or video sequences and their annotated output, we seek to solve these problems with additional 3D motion capture information, which helps to fill the gap between 2D image features and human interpretations.
García, Alfaro Joaquín. "Platform of intrusion management design and implementation." Doctoral thesis, Universitat Autònoma de Barcelona, 2006. http://hdl.handle.net/10803/3053.
Since computer infrastructures are currently getting more vulnerable than ever, traditional security mechanisms are still necessary but not suficient. We need to design effective response techniques to circumvent intrusions when they are detected. We present in this dissertation the design of a platform which is intended to act as a central point to analyze and verify network security policies, and to control and configure -without anomalies or errors- both prevention and detection security components. We also present in our work a response mechanism based on a library that implements different types of countermeasures. The objective of such a mechanism is to be a support tool in order to help the administrator to choose, in this library, the appropriate counter-measure when a given intrusion occurs. We finally present an infrastructure for the communication between the components of our platform, as well as a mechanism for the protection of such components. All these approaches and proposals have been implemented and evaluated. We present the obtained results within the respectives sections of this dissertation.
This thesis has mainly been funded by the Agency for Administration of University and Research Grants (AGAUR) of the Ministry of Education and Universities (DURSI) of the Government of Catalonia (reference number 2003FI00126). The research was jointly carried out at the Universitat Autònoma de Barcelona and at the Ecole Nationale Superieure des Télécommunications de Bretagne.
Keywords: Security policies, intrusion detection, response, counter-measures, event correlation, communication publish/subscribe, access control, components protection.
Balaguer, Herrero Pedro. "Information and control supervision of adaptive/iterative schemes." Doctoral thesis, Universitat Autònoma de Barcelona, 2007. http://hdl.handle.net/10803/5807.
The design of a controller is a process that requires acquiring and processing information in order to obtain a satisfactory control system. It is also widely recognized that if new information is added to the controller design process, the performance of the resulting controller can be improved. This is the philosophy behind adaptive control. However, the concept of information applied to the controller design problem, although widespread, lacks a clear and unified formalization, since there is no conceptual framework for it.
This thesis addresses the controller design problem from the point of view of the information required for that purpose. The contributions of the thesis are divided into two parts.
In the first part, the objective is to characterize the concept of information in the context of the controller design problem and to analyse all the available sources of information. This is achieved by developing a conceptual framework in which the fundamental relations needed to increase the information of the elements of the control problem can be established. In a second step, this framework is used to analyse and compare classical adaptive control techniques with iterative control. The framework allows both controller design techniques to be compared in terms of how each of them manages information, thus providing a reference point for comparing different ways of managing information for controller design.
The second part of the thesis addresses the problem of validating the information contained in a model. The objective is to be able to validate a model in such a way that the outcome of the validation is not simply a binary "validated/invalidated" result, but provides decision guidelines on how to manage the information of the elements of the control problem in order to increase the information of the model. To this end, the FDMV (Frequency Domain Model Validation) algorithm is developed, which makes the outcome of the validation of a model frequency dependent. It follows that the same model can be validated for a certain frequency range while being invalidated for a different one. This frequency-dependent validation improves information management with respect to (i) the experimental design for a new identification, (ii) the selection of the appropriate model order, and (iii) the selection of the controller bandwidth acceptable for the model at hand. The FDMV algorithm proves to be an especially appropriate tool for iterative control techniques.
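To give a flavour of frequency-wise validation (a hedged illustration only; the actual FDMV algorithm is defined in the thesis), one can compare a model's frequency response against an estimate of the plant and accept or reject it frequency by frequency with a tolerance band. The plant, the low-order model, and the tolerance below are arbitrary choices.

```python
# Toy frequency-wise validation: accept the model only where its response
# stays within a frequency-dependent tolerance of the plant's response.
import numpy as np
from scipy import signal

plant = signal.TransferFunction([1.0], [1.0, 0.8, 1.0])   # "true" system (unknown in practice)
model = signal.TransferFunction([1.0], [1.2, 1.0])        # deliberately low-order model

w = np.logspace(-1, 1, 200)
_, h_plant = signal.freqresp(plant, w)
_, h_model = signal.freqresp(model, w)

error = np.abs(h_plant - h_model)
bound = 0.1 + 0.05 * np.abs(h_plant)       # assumed frequency-dependent tolerance
validated = error <= bound                 # boolean mask: validated per frequency
print(f"{validated.sum()} of {w.size} frequency points validated")
```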
Solà, Belda Carles. "Economic action and reference points: an experimental analysis." Doctoral thesis, Universitat Autònoma de Barcelona, 2001. http://hdl.handle.net/10803/4019.
This thesis analyzes several aspects of the motivations that drive individuals and their implications in economic processes. In particular, I analyze in detail normative criteria that individuals apply, such as those of fairness and reciprocity. In the Introduction I define the use I make of the concepts of reciprocity, fairness, menu dependence and reference points that will be used in the course of the different chapters. The methodology developed in this thesis employs theoretical models of the behavior of individuals in strategic interactions, using elements of Game Theory and Experimental Economics. In the second chapter, "On Rabin's Concept of Fairness and the Private Provision of Public Goods", I analyze in detail the implications of Rabin's (1993) theory of individual behavior. This model introduces, apart from the economic payoffs that the individual obtains in a strategic interaction, psychological phenomena, mainly a sense of fairness in the relation with other agents. In this chapter I analyze the implications of an extended version of this theory in a field where there exists a vast amount of experimental evidence contradicting the behavior predicted by standard game theoretical models. I show that Rabin's theory is consistent with one piece of evidence repeatedly found in experiments, the so-called "splitting". I also show that the model is inconsistent with another piece of evidence in the field, the "MPCR effect". The third chapter, "Reference Points and Negative Reciprocity in Simple Sequential Games", analyzes the influence that certain payoff vectors, the "reference points", not attainable at that time, may have on the preference for other payoff vectors. This is connected with the attribution of certain intentions to the other players when selecting some courses of action. By using experiments I obtain results that confirm the importance of these reference points in the reciprocity considerations that individuals apply. Chapter four, "Distributional Concerns and Reference Points", analyzes some aspects that may interact with the reference points in the attribution of intentions. These aspects are the payoff to the agent from a given course of action, his/her relative payoff and the joint payoff. The experimental results show that none of these elements is able to explain the results by itself. Finally, the fifth chapter, "The Sequential Prisoner's Dilemma Game: Reciprocity and Group Size Effects", analyzes how aspects of individual motivations interact with social aspects. In particular, it studies how the reactions of individuals change with the size of the group in certain processes. The experimental results obtained show that in the prisoner's dilemma game (two-person and three-person games) the behavior of subjects may be consistent with reciprocity considerations and with inequality aversion considerations.
Guimarans, Serrano Daniel. "Hybrid algorithms for solving routing problems." Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/96386.
An important component of many distribution systems is routing vehicles to serve customers. The Vehicle Routing Problem (VRP) provides a theoretical framework for approaching this class of logistics problems dealing with physical distribution. Because of its complexity and applicability, this class of logistics problems is among the most popular research areas in combinatorial optimization. This PhD thesis introduces three different yet related hybrid methodologies to solve the VRP. These methodologies have been especially designed to be flexible, in the sense that they can be used, with minor adaptations, to solve different variants of the VRP present in industrial application cases. In the three methodologies described in this work, different technologies are used to achieve the desired flexibility, efficiency, and robustness. Constraint Programming (CP) has been chosen as the modeling paradigm to describe the main constraints involved in the VRP. CP provides the pursued flexibility for the three methodologies, since adding the side constraints present in most real application cases becomes a modeling issue and does not affect the search algorithm definition. In the first two hybrid methodologies, the CP model is used to check a solution's feasibility during search. The third methodology presents a richer model for the VRP capable of tackling different problem variants; in this case, the search is performed and controlled from a CP perspective. Lagrangian Relaxation (LR) and a probabilistic version of the Clarke and Wright Savings (CWS) heuristic are used for specific purposes within the proposed methodologies. The former is used to minimize the total traveled distance and the latter to provide a good initial solution. Both methods provide an efficient approach to the problems they address. Moreover, the use of LR reduces the computational complexity of the local search processes performed and therefore reduces the computational time required to solve the VRP. All methodologies are based on the Variable Neighborhood Search (VNS) metaheuristic. VNS is formed by a family of algorithms that systematically exploit the idea of neighborhood changes both in the search phase, to find a local minimum, and in the perturbation phase, to escape from the corresponding valley. Although it is a widespread method, there are few examples of its application to the VRP. However, interesting results have been obtained even when applying the simplest VNS algorithms to this problem. The present thesis aims to contribute to the current research on the application of the VNS metaheuristic to the VRP. VNS has been chosen as the framework in which the mentioned techniques are embedded. Hence, the metaheuristic is used to guide the search, while the desired efficiency is provided by the component methods. On the other hand, using CP as the modeling paradigm provides the required flexibility. This characteristic is enhanced in the last described methodology, in which the CP search is guided by a combination of the VNS and Large Neighborhood Search (LNS) metaheuristics. This methodology represents an initial approach for efficiently tackling more complex and richer VRPs, similar to real application cases.
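The initial solution mentioned above comes from a probabilistic version of the Clarke and Wright Savings heuristic. The sketch below computes the classical (deterministic) savings list on made-up coordinates, just to illustrate the quantity the heuristic ranks; the thesis' randomized variant and the CP/VNS machinery are not reproduced here.

```python
# Clarke & Wright savings: s(i, j) = d(depot, i) + d(depot, j) - d(i, j).
import math

def savings_list(depot, customers):
    """Return (saving, i, j) triples sorted in decreasing order of saving."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    sav = []
    for i in range(len(customers)):
        for j in range(i + 1, len(customers)):
            s = dist(depot, customers[i]) + dist(depot, customers[j]) \
                - dist(customers[i], customers[j])
            sav.append((s, i, j))
    return sorted(sav, reverse=True)

depot = (0.0, 0.0)
customers = [(2, 9), (8, 4), (5, 5), (9, 9)]        # illustrative coordinates
for s, i, j in savings_list(depot, customers)[:3]:
    print(f"merge routes of customers {i} and {j}: saving {s:.2f}")
```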
Nuñez, Castillo Carlos Heriberto. "Predictive and Distributed Routing Balancing for High Speed Interconnection Networks." Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/120240.
Повний текст джерелаIn high performance clusters, the communication needs of current parallel applications, such as traffic pattern, communication volume, etc., change over time and are difficult to know in advance. Such needs often exceed or do not match available resources, causing resource-use imbalance, network congestion, throughput reduction and message latency increase, thus degrading the overall system performance. Studies of parallel applications show repetitive behavior that can be characterized by a set of representative phases. This work presents the Predictive and Distributed Routing Balancing (PR-DRB) technique, a new method developed to gradually control network congestion, based on path expansion, traffic distribution, the repetitiveness of application patterns and speculative adaptive routing, in order to maintain low latency values. PR-DRB monitors message latencies at routers and saves the solutions found to congestion, in order to respond quickly in similar future situations. Traffic congestion experiments were conducted to evaluate the performance of the method, and improvements were observed.
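A very rough sketch of the underlying idea, reacting to high latency by expanding the set of paths and remembering what worked for a repetitive traffic pattern, is given below. The class, its fields, the threshold rule and the pattern key are simplified stand-ins invented for illustration; they are not the actual PR-DRB data structures or algorithm.

from collections import defaultdict

class PathBalancer:
    """Toy latency-driven path expansion with memory of past solutions."""

    def __init__(self, base_paths, latency_threshold):
        self.latency_threshold = latency_threshold
        # One initial path per flow, e.g. {"flowA": ("r1", "r2", "r5")}.
        self.active_paths = {flow: [path] for flow, path in base_paths.items()}
        self.memory = defaultdict(list)   # traffic pattern -> paths that worked before

    def on_latency_sample(self, flow, pattern, latency, alternative_paths):
        """Expand the path set for a flow when its latency exceeds the threshold."""
        if latency <= self.latency_threshold:
            return self.active_paths[flow]
        # Prefer solutions remembered for a similar (repetitive) pattern.
        remembered = self.memory.get(pattern, [])
        expansion = remembered if remembered else alternative_paths
        for path in expansion:
            if path not in self.active_paths[flow]:
                self.active_paths[flow].append(path)
        self.memory[pattern] = list(self.active_paths[flow])
        return self.active_paths[flow]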
Morales, López Leopoldo. "Combinatorial dynamics of strip patterns of quasiperiodic skew products in the cylinder." Doctoral thesis, Universitat Autònoma de Barcelona, 2016. http://hdl.handle.net/10803/384935.
Повний текст джерелаThe thesis consists of two parts. In the first we aim at extending the results and techniques from Fabbri et al. 2005 to study the Combinatorial Dynamics of strip patterns of quasiperiodic skew products in the cylinder.
Moriña, Soler David. "Nous models per a sèries temporals." Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/125658.
Повний текст джерелаIn this work we consider two kinds of problems. The first concerns the issue of seasonality in the context of discrete time series, while the second part deals with first-order autoregressive models with non-Gaussian innovations. Integer-valued time series appear in many different contexts. The classical analysis of continuous time series can be inadequate when modeling phenomena based on counts, as the assumption of normal random variation is hardly met in the case of integer series. This motivates the study of models based on discrete distributions (Poisson, negative binomial, …). However, in the context of standard models of discrete time series (INAR, INMA, …) there is a lack of techniques for dealing with possible seasonal behavior in the data, and therefore a need for suitable tools to model phenomena presenting this feature, such as, for example, the incidence of diseases like flu, allergies or pneumonia. In this work we propose a variation of the INAR(2) model that includes a seasonal component, and we study how it can be applied to analyze data on hospital admissions caused by influenza. Following the same example, we consider several methods to make short- and long-term predictions about the future occupation of hospital beds based on this type of count time-series model. The second issue addressed in this work appears as a problem of characterization of distributions in the context of first-order autoregressive models, prompted by a surprising result by McKenzie: for a process Y_t with AR(1) structure and the exponentiated series X_t = exp(Y_t), the autocorrelation function of X_t is the same as that of the original series Y_t if and only if the stationary distribution of X_t is gamma. Using this result as a starting point, our main goal has been to generalize McKenzie's result in the sense of characterizing the distribution of the innovations in this context, and to develop a goodness-of-fit test based on the empirical distribution function in order to decide whether it is reasonable to assume, at some level of confidence, that the innovations follow a specific distribution. In particular, this test can be used in the classical situation, in order to check whether the innovations of a first-order autoregressive model are Gaussian. The test has been applied to a data set on fish catches in the Atlantic Ocean and the Gulf of St. Lawrence River between 1990 and 1996 to study whether the assumption of normality of the innovations is reasonable or not. As a second example, the test has been applied to data on the deflator of the Spanish gross domestic product from 1962 to 2011. Finally, a study of the power of the test in different situations is presented, considering different values of the first autocorrelation coefficient, different sample sizes and different marginal distributions.
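To make the INAR idea concrete, the snippet below simulates a Poisson INAR(1)-type count series via binomial thinning, with a seasonally varying innovation mean. It is only a toy stand-in, with invented parameter values, for the seasonal INAR(2) model actually proposed in the thesis.

import numpy as np

def simulate_seasonal_inar1(n, alpha=0.5, base_rate=2.0, season_amp=1.0,
                            period=52, seed=0):
    """Poisson INAR(1) with a seasonal innovation mean (toy example).

    X_t = alpha o X_{t-1} + eps_t, where 'o' denotes binomial thinning and
    eps_t ~ Poisson(base_rate + season_amp * sin(2*pi*t/period)).
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n, dtype=int)
    for t in range(1, n):
        thinned = rng.binomial(x[t - 1], alpha)          # alpha o X_{t-1}
        rate = base_rate + season_amp * np.sin(2 * np.pi * t / period)
        x[t] = thinned + rng.poisson(max(rate, 0.0))     # seasonal innovations
    return x

series = simulate_seasonal_inar1(300)
print(series[:20])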
Cartas, Rosado Raúl. "Modelos de calibración n−dimensionales para lenguas electrónicas." Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/98338.
Повний текст джерелаThe computational tools described in this thesis are meant to be alternative solutions for building multivariate calibration models from multi-way data obtained with arrays of electrochemical sensors. Both the experimental and the computational applications described herein are aimed at building electronic tongues of the potentiometric and voltammetric types. The proposed solutions are based on computational techniques designed to explore large databases in search of consistent patterns and/or systematic relationships between variables, which then allows these models to be applied to new data to predict or estimate expected results. Some of the tools were implemented using multilayer perceptron neural networks with complex transfer functions (of little or no use in the chemical area) in the hidden-layer neurons. To make the structure of most of the data used in this thesis compatible with the input of the neural networks, the electrochemical information was pretreated using mono- or multi-dimensional processing techniques in order to reduce the number of variables and dimensions. In addition to the structures based on neural networks, we also propose building models using basis functions of the truncated-spline and B-spline types. The first approach is known as Multivariate Adaptive Regression Splines (MARS) and the second as Multivariate Adaptive Regression B-Splines (B-MARS). In addition to the tools described above and implemented as proposed solutions, we also successfully built calibration models using multi-way partial least squares regression (N-PLS).
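N-PLS itself is not available in scikit-learn, but a rough feel for calibrating multi-way sensor data can be obtained by unfolding the three-way array (samples x potentials x sensors) into a matrix and applying ordinary PLS regression. The sketch below does exactly that on synthetic data; the array shapes, the hidden factor and the number of components are arbitrary assumptions, and unfolded PLS is only an approximation of true multi-way N-PLS.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Synthetic three-way data: 40 samples, 50 potential steps, 4 sensors,
# with a response depending linearly on a hidden factor.
n_samples, n_potentials, n_sensors = 40, 50, 4
hidden = rng.normal(size=n_samples)
X = rng.normal(scale=0.1, size=(n_samples, n_potentials, n_sensors))
X += hidden[:, None, None] * rng.normal(size=(n_potentials, n_sensors))
y = 3.0 * hidden + rng.normal(scale=0.1, size=n_samples)

# Unfold the data cube into (samples, potentials * sensors) and fit a PLS model.
X_unfolded = X.reshape(n_samples, -1)
pls = PLSRegression(n_components=3)
pls.fit(X_unfolded, y)
print("R^2 on training data:", pls.score(X_unfolded, y))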
Bascompte, Viladrich David. "Models for bacteriophage systems, Weak convergence of Gaussian processes and L2 modulus of Brownian local time." Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/129911.
Повний текст джерелаIn this dissertation three different problems are treated. In Chapter 1 we construct two families of processes that converge, in the sense of the finite-dimensional distributions, towards two independent Gaussian processes. Chapter 2 is devoted to the study of a model of bacteriophage treatments for bacterial infections. Finally, in Chapter 3 we study some aspects of the L2 modulus of continuity of Brownian local time. In the first chapter we consider two independent Gaussian processes that can be represented in terms of a stochastic integral of a deterministic kernel with respect to the Wiener process, and we construct, from a single Poisson process, two families of processes that converge, in the sense of the finite-dimensional distributions, towards these Gaussian processes. We use this result to prove convergence-in-law results towards some other processes, such as sub-fractional Brownian motion. In Chapter 2 we construct and study several models that aim to describe how a bacteriophage treatment behaves in certain farm animals. This problem was brought to our attention by the Molecular Biology Group of the Department of Genetics and Microbiology at the Universitat Autònoma de Barcelona. Starting from a basic model, we study several variations, first from a deterministic point of view, finding several results on equilibria and stability, and later in a noisy context, producing concentration-type results. Finally, in Chapter 3 we study the Wiener chaos decomposition of the L2 modulus of continuity of Brownian local time. More precisely, we find a Central Limit Theorem for each Wiener chaos element of the L2 modulus of continuity of Brownian local time. This result provides an example of a family of random variables that converges in law to a normal distribution although its chaos elements of even order do not converge.
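For readers who want a feel for the deterministic side of such bacteria-phage models, the sketch below integrates a simple mass-action system with scipy. The equations and parameter values are generic textbook-style choices assumed only for illustration; they are not the specific model developed with the UAB group in the thesis.

import numpy as np
from scipy.integrate import solve_ivp

def bacteria_phage(t, state, r, K, a, b, m):
    """Toy bacteria-phage model: logistic bacterial growth, mass-action infection."""
    B, P = state
    dB = r * B * (1 - B / K) - a * B * P      # bacteria: growth minus infection
    dP = b * a * B * P - m * P                # phages: burst from infections minus decay
    return [dB, dP]

sol = solve_ivp(bacteria_phage, t_span=(0, 50), y0=[1e5, 1e4],
                args=(0.7, 1e7, 1e-7, 50, 0.3), dense_output=True)
print("final bacteria and phage levels:", sol.y[:, -1])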
Guillamet, Monfulleda David. "Statistical Local Appearance Models for Object Recognition." Doctoral thesis, Universitat Autònoma de Barcelona, 2004. http://hdl.handle.net/10803/3044.
Повний текст джерелаDuring the last few years, there has been a growing interest in object recognition techniques directly based on images, each corresponding to a particular appearance of the object. These techniques, which use only image information, are called appearance-based models, and the interest in them is due to their success in recognizing objects. Earlier appearance-based approaches focused on holistic representations. Although global representations have been successfully used in a broad set of computer vision applications (e.g. face recognition, robot positioning), there are still some problems that cannot be easily solved. Partial object occlusions, severe lighting changes, complex backgrounds, object scale changes and different viewpoints or orientations of objects remain problematic when they are addressed from a holistic perspective. Local appearance approaches then emerged, as they reduce the effect of some of these problems and provide a richer representation to be used in more complex environments.
Usually, local appearance methods use high-dimensional descriptors to describe local regions of objects. The curse of dimensionality problem then appears and object classification degrades. A typical way to alleviate the curse of dimensionality is to use techniques based on dimensionality reduction. Among possible reduction techniques, one can use linear data transformations. We can benefit from linear data transformations if the projection improves or maintains the information of the original high-dimensional space and produces reliable classifiers. The main goal is therefore to model low-dimensional pattern structures present in high-dimensional data.
The first part of this thesis is mainly focused on the use of color histograms, a local descriptor which provides a good source of information directly related to the photometric variations of local image regions. These high-dimensional descriptors are projected to low-dimensional spaces using several techniques. Principal Component Analysis (PCA), Non-negative Matrix Factorization (NMF) and a weighted version of NMF, the Weighted Non-negative Matrix Factorization (WNMF), are three linear transformations of data which are introduced in this thesis to reduce dimensionality and provide reliable low-dimensional spaces. Once introduced, these three linear techniques are widely compared in terms of performance using several databases. A first attempt to merge these techniques in a unified framework is also shown, and the results seem very promising. Another goal of this thesis is to determine when and which linear transformation should be used depending on the data we are dealing with. To this end, we introduce Independent Component Analysis (ICA) to model probability density functions in the original high-dimensional spaces, as well as its extension to model subspaces obtained using PCA. ICA is a linear feature extraction technique that aims to minimize higher-order dependencies in the extracted features. When its assumptions are met, statistically independent features can be obtained from the original measurements. We adapt ICA to the particular problem of statistical pattern recognition of high-dimensional data. This is done by means of class-conditional representations and a specifically adapted Bayesian decision scheme. Due to the independence assumption, this scheme results in a modification of the naive Bayes classifier.
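As a concrete, minimal illustration of the kind of linear projections compared in this part, the snippet below reduces synthetic non-negative histogram-like descriptors with PCA and NMF using scikit-learn. The data, the number of bins and the number of components are arbitrary assumptions, and the WNMF and ICA variants discussed in the thesis are not shown.

import numpy as np
from sklearn.decomposition import PCA, NMF

rng = np.random.default_rng(0)

# Synthetic "color histogram" descriptors: 200 local regions, 64 bins each,
# non-negative and normalized to sum to one, as a histogram would be.
H = rng.gamma(shape=2.0, scale=1.0, size=(200, 64))
H /= H.sum(axis=1, keepdims=True)

pca = PCA(n_components=8)
Z_pca = pca.fit_transform(H)                 # signed, orthogonal components

nmf = NMF(n_components=8, init="nndsvda", max_iter=500)
Z_nmf = nmf.fit_transform(H)                 # non-negative, parts-based components

print("explained variance (PCA):", pca.explained_variance_ratio_.sum())
print("reconstruction error (NMF):", nmf.reconstruction_err_)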
The main disadvantage of the previous linear data transformations is that they do not take into account the spatial relationships among local features. Consequently, we present a method for recognizing three-dimensional objects in intensity images of cluttered scenes, using a model learned from a single image of the object. This method is directly based on local visual features extracted from relevant keypoints of objects and takes into account the relationships between them. This new scheme reduces the ambiguity of previous representations. In fact, we describe a general methodology for obtaining a reliable estimation of the joint distribution of local feature vectors at multiple salient points (keypoints). We define the concept of a k-tuple in order to represent the local appearance of the object at k different points, as well as the statistical dependencies among them. Our method is adapted to real, complex and cluttered environments, and the object detection results we present in these scenarios are very promising.
Cesar, Galobardes Eduardo. "Definition of Framework-based Performance Models for Dynamic Performance Tuning." Doctoral thesis, Universitat Autònoma de Barcelona, 2006. http://hdl.handle.net/10803/5760.
Повний текст джерелаFor example, in the analysis phase of parallel/distributed programs, the programmer has to decompose the problem (data and/or code) to find the concurrency of the algorithm. In the design phase, the programmer has to be aware of the communication and synchronization conditions between tasks. In the implementation phase, the programmer has to learn how to use specific communication libraries and runtime environments but also to find a way of debugging programs. Finally, to obtain the best performance, the programmer has to tune the application by using monitoring tools, which collect information about the application's behavior. Tuning can be a very difficult task because it can be difficult to relate the information gathered by the monitor to the application's source code. Moreover, tuning can be even more difficult for those applications that change their behavior dynamically because, in this case, a problem might happen or not depending on the execution conditions.
It can be seen that these issues require a high degree of expertise, which prevents the more widespread use of this kind of solution. One of the best ways to solve these problems would be to develop, as has been done in sequential programming, tools to support the analysis, design, coding, and tuning of parallel/distributed applications.
In the particular case of performance analysis and/or tuning, it is important to note that the best way of analyzing and tuning parallel/distributed applications depends on some of their behavioral characteristics. If the application to be tuned behaves in a regular way then a static analysis (predictive or trace based) would be enough to find the application's performance bottlenecks and to indicate what should be done to overcome them. However, if the application changes its behavior from execution to execution or even dynamically changes its behavior in a single execution then the static analysis cannot offer efficient solutions for avoiding performance bottlenecks.
In this case, dynamic monitoring and tuning techniques should be used instead. However, in dynamic monitoring and tuning, decisions must be taken efficiently, which means that the outcome of the application's performance analysis must be accurate and timely in order to tackle problems effectively; at the same time, intrusion on the application must be minimized, because the instrumentation inserted in the application in order to monitor and tune it alters its behavior and could introduce performance problems that were not there before the instrumentation.
This is more difficult to achieve if there is no information about the structure and behavior of the application; therefore, blind automatic dynamic tuning approaches have limited success, whereas cooperative dynamic tuning approaches can cope with more complex problems at the cost of asking for user collaboration. We have proposed a third approach. If a programming tool based on the use of skeletons or frameworks has been used in the development of the application, then much information about the structure and behavior of the application is available, and a performance model associated with the structure of the application can be defined for use by the dynamic tuning tool. The resulting tuning tool should produce the outcome of a collaborative one while behaving like an automatic one from the point of view of the application developer.
Baró, i. Solé Xavier. "Probabilistic Darwin Machines: A new approach to develop Evolutionary Object Detection Systems." Doctoral thesis, Universitat Autònoma de Barcelona, 2009. http://hdl.handle.net/10803/5793.
Повний текст джерелаEver since computers were invented, we have wondered whether they might perform some of our everyday human tasks. One of the most studied and yet least understood problems is the capacity to learn from our experiences and to generalize the knowledge we acquire.
One of these tasks, performed unconsciously by people and attracting growing interest in different scientific areas since the beginning, is what is known as pattern recognition. The creation of models that represent the world around us helps us recognize objects in our environment, predict situations, identify behaviors, and so on. All this information allows us to adapt to and interact with our environment. The capacity of individuals to adapt to their environment has even been related to the number of patterns they are capable of identifying.
When we speak about pattern recognition in the field of Computer Vision, we refer to the ability to identify objects using the information contained in one or more images. Despite the progress of recent years, and the fact that nowadays we are already able to obtain "useful" results in real environments, we are still very far from having a system with the same capacity for abstraction and the same robustness as the human visual system.
In this thesis, the face detector of Viola & Jones is studied as the paradigmatic and most widespread approach to the object detection problem. Firstly, we analyze the way objects are described using comparisons of illumination values in adjacent zones of the images, and how this information is later organized to create more complex structures. As a result of this study, two weak points are identified in this family of methods: the first concerns the description of the objects, and the second is a limitation of the learning algorithm, which hampers the use of better descriptors.
Describing objects using Haar-like features limits the extracted information to connected regions of the object. If we want to compare distant zones, large contiguous regions must be used, which causes the obtained values to depend more on the average illumination of the object than on the regions we actually want to compare. With the goal of using this type of non-local information, we introduce the Dissociated Dipoles into the object detection framework.
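Both kinds of contrast descriptor are cheap to evaluate with an integral image, where any box sum costs four table lookups. The sketch below computes a two-rectangle Haar-like feature and a dissociated-dipole-style difference between two non-adjacent boxes; the image, the coordinates and the box sizes are arbitrary illustrative assumptions.

import numpy as np

def integral_image(img):
    """Cumulative sum table so that any box sum costs four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, height, width):
    """Sum of img[top:top+height, left:left+width] from the integral image ii."""
    bottom, right = top + height - 1, left + width - 1
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.random.default_rng(0).random((24, 24))
ii = integral_image(img)

# Two-rectangle Haar-like feature: adjacent left and right halves of a box.
haar = box_sum(ii, 4, 4, 8, 4) - box_sum(ii, 4, 8, 8, 4)

# Dissociated-dipole-style feature: two boxes that need not be adjacent.
dipole = box_sum(ii, 2, 2, 4, 4) - box_sum(ii, 16, 18, 4, 4)
print(haar, dipole)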
The problem with using this type of descriptor is that the great cardinality of the feature set makes the use of Adaboost as the learning algorithm unfeasible. The reason is that during the learning process an exhaustive search is made over the space of hypotheses, and since this space is enormous, the time necessary for learning becomes prohibitive. Although we studied this phenomenon on the Viola & Jones approach, it is a general problem for most approaches, where learning methods impose a limitation on the descriptors that can be used and, therefore, on the quality of the object description. In order to remove this limitation, we introduce evolutionary methods into the Adaboost algorithm and study the effects of this modification on the learning ability. Our experiments conclude that not only does it continue to learn, but its convergence speed is not significantly altered.
This new Adaboost with evolutionary strategies opens the door to the use of feature sets with arbitrary cardinality, which allows us to investigate new ways to describe our objects, such as the use of Dissociated Dipoles. We first compare the learning ability of this evolutionary Adaboost using Haar-like features and Dissociated Dipoles, and from the results of this comparison we conclude that both types of descriptors have similar representation power although, depending on the problem to which they are applied, one adapts slightly better than the other. With the aim of obtaining a descriptor capable of sharing the strong points of both Haar-like features and Dissociated Dipoles, we propose a new type of feature, the Weighted Dissociated Dipoles, which combines the robustness of the structure detectors present in Haar-like features with the Dissociated Dipoles' ability to use non-local information. In the experiments we carried out, this new feature set obtains better results in all the problems tested, compared with Haar-like features and Dissociated Dipoles.
In order to test the performance of each method, and to compare the different methods, we use a set of public databases covering face detection, text detection, pedestrian detection, and car detection. In addition, our methods are tested on a traffic sign detection problem, over large databases containing both road and urban scenes.
Gunes, Baydin Atilim. "Evolutionary adaptation in case-based reasoning. An application to cross-domain analogies for mediation." Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/129294.
Повний текст джерелаAnalogy plays a fundamental role in problem solving and lies behind many processes central to human cognitive capacity, to the point that it has been considered "the core of cognition". Analogical reasoning functions through the process of transfer, the use of knowledge learned in one situation in another for which it was not targeted. The case-based reasoning (CBR) paradigm presents a highly related, but slightly different, model of reasoning mainly used in artificial intelligence, different in part because analogical reasoning commonly focuses on cross-domain structural similarity whereas CBR is concerned with the transfer of solutions between semantically similar cases within one specific domain. In this dissertation, we join these interrelated approaches from cognitive science, psychology, and artificial intelligence in a CBR system where case retrieval and adaptation are accomplished by the Structure Mapping Engine (SME) and are supported by commonsense reasoning integrating information from several knowledge bases. To enable this, we use a case representation structure based on semantic networks. This gives us a CBR model capable of recalling and adapting solutions from seemingly different, but structurally very similar, domains, which forms one of the contributions of this study. Adaptation has traditionally been a weak point of research on CBR systems, where most applications settle for a very simple "reuse" of the solution from the retrieved case, mostly through null adaptation or substitutional adaptation. The difficulty of adaptation is even more obvious in our case of cross-domain CBR using semantic networks. Solving this difficulty paves the way to another contribution of this dissertation, where we introduce a novel generative adaptation technique based on evolutionary computation that enables the spontaneous creation or modification of semantic networks according to the needs of CBR adaptation. For the evaluation of this work, we apply our CBR system to the problem of mediation, an important method in conflict resolution. The mediation problem is non-trivial and presents a very good real-world example where we can spot structurally similar problems in domains as seemingly distant as international relations, family disputes, and intellectual rights.
Viles, Cuadros Noèlia. "Continuïtat respecte el paràmetre de Hurst de les lleis d'alguns funcionals del moviment Brownià fraccionari." Doctoral thesis, Universitat Autònoma de Barcelona, 2009. http://hdl.handle.net/10803/3112.
Повний текст джерела
Ferrer, Sumsi Miquel. "Theory and Algorithms on the Median Graph. Application to Graph-based Classification and Clustering." Doctoral thesis, Universitat Autònoma de Barcelona, 2008. http://hdl.handle.net/10803/5788.
Повний текст джерелаGiven a set of objects, the generic concept of median is defined as the object with the smallest sum of distances to all the objects in the set. It has often been used as a good alternative for obtaining a representative of the set.
In structural pattern recognition, graphs are normally used to represent structured objects. In the graph domain, the concept analogous to the median is known as the median graph. By extension, it has the same potential applications as the generic median in order to be used as the representative of a set of graphs.
Despite its simple definition and potential applications, its computation has been shown to be an extremely complex task. All the existing algorithms can only deal with small sets of graphs, and their application has been constrained in most cases to the use of synthetic data with no real meaning. Thus, the median graph has mainly remained a theoretical concept.
The main objective of this work is to further investigate both the theory and the algorithms underlying the concept of the median graph, with the final objective of extending its applicability and bringing all its potential to the world of real applications. To this end, new theory and new algorithms for its computation are reported. From a theoretical point of view, this thesis makes two main contributions. On the one hand, we introduce the new concept of the spectral median graph. On the other hand, we show that some of the existing theoretical properties of the median graph can be improved under some specific conditions. In addition to these theoretical contributions, we propose five new ways to compute the median graph. One of them is a direct consequence of the spectral median graph concept. In addition, we provide two new algorithms based on the new theoretical properties. Finally, we present a novel technique for median graph computation based on graph embedding into vector spaces. With this technique, two more new algorithms are presented.
The experimental evaluation of the proposed methods on one semi-artificial and two real-world datasets, representing graphical symbols, molecules and webpages, shows that these methods are much more efficient than the existing ones. In addition, we have been able to prove for the first time that the median graph can be a good representative of a class in large datasets. We have performed classification and clustering experiments that validate this hypothesis and allow us to foresee a successful application of the median graph to a variety of machine learning algorithms.
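A simple baseline related to the median graph is the set median: the graph of the set that minimizes the sum of graph edit distances to all the others. The sketch below computes it with networkx's (exponential-time) graph_edit_distance on tiny example graphs; it is only a toy illustration of the concept, not any of the algorithms proposed in the thesis.

import networkx as nx

def set_median_graph(graphs):
    """Return the graph in 'graphs' minimizing the sum of edit distances to the rest."""
    best, best_sod = None, float("inf")
    for g in graphs:
        sod = sum(nx.graph_edit_distance(g, h) for h in graphs if h is not g)
        if sod < best_sod:
            best, best_sod = g, sod
    return best, best_sod

graphs = [nx.path_graph(4), nx.cycle_graph(4), nx.star_graph(3)]
median, sod = set_median_graph(graphs)
print(sorted(median.edges()), sod)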
García, García Gloria. "Cuantificación de la no invariancia y su aplicación en estadística." Doctoral thesis, Universitat de Barcelona, 2001. http://hdl.handle.net/10803/1553.
Повний текст джерелаThe aim of this work is precisely to contribute to point estimation by analyzing the problem of an estimator for those "grey areas": situations that range from the existence of a classical invariance to its total absence. The different "shades of grey" will be the orders of invariance that we define in the context of Informative Differential Geometry.
Fornés, Bisquerra Alicia. "Writer Identification by a Combination of Graphical Features in the Framework of Old Handwritten Music Scores." Doctoral thesis, Universitat Autònoma de Barcelona, 2009. http://hdl.handle.net/10803/3063.
Повний текст джерела
Ferdinandova, Ivana. "Models of Reasoning." Doctoral thesis, Universitat Autònoma de Barcelona, 2004. http://hdl.handle.net/10803/4044.
Повний текст джерелаThis thesis focuses on studying the way in which individuals' adaptation mechanisms influence their behavior and the outcomes in different games. In all the models presented here the emphasis is put on the adaptation process and its elements, rather than on the equilibrium behavior of the players.
The thesis consists of three papers.
The first one focuses on the importance of the way information is exchanged in the context of the Repeated Prisoner's Dilemma game. In Chapter 2 we build a simulation model imitating the structure of human reasoning in order to study how people face a Repeated Prisoner's Dilemma game. The results range from individual learning, in which case the worst result, defection, is obtained, through partial imitation, where individuals could end up in cooperation or defection, to the other extreme of social learning, where mutual cooperation can be obtained. The influence of some particular strategies on the attainment of cooperation is also considered. The differences in the results of the three scenarios we have constructed suggest that one should be very careful when deciding which one to choose.
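A stripped-down version of this kind of simulation can be written in a few lines: agents repeatedly play the Prisoner's Dilemma and either imitate the strategy of a better-performing agent (social learning) or switch individually towards the action that paid better in the last round (individual learning). The payoff matrix, population size and update rules below are arbitrary choices for illustration only, not the model built in the thesis.

import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play_round(strategies, rng):
    """Random pairwise matching; returns the payoff earned by each agent."""
    order = list(range(len(strategies)))
    rng.shuffle(order)
    payoffs = [0] * len(strategies)
    for i, j in zip(order[::2], order[1::2]):
        pi, pj = PAYOFF[(strategies[i], strategies[j])]
        payoffs[i], payoffs[j] = pi, pj
    return payoffs

def simulate(n_agents=20, rounds=200, social=True, seed=0):
    rng = random.Random(seed)
    strategies = [rng.choice("CD") for _ in range(n_agents)]
    for _ in range(rounds):
        payoffs = play_round(strategies, rng)
        if social:
            # Imitation: a random agent copies the best-earning agent of this round.
            best = max(range(n_agents), key=lambda k: payoffs[k])
            imitator = rng.randrange(n_agents)
            strategies[imitator] = strategies[best]
        else:
            # Individual learning: a random agent adopts the action with the
            # higher average payoff in this round.
            learner = rng.randrange(n_agents)
            avg = {a: sum(p for p, s in zip(payoffs, strategies) if s == a) /
                      max(1, strategies.count(a)) for a in "CD"}
            strategies[learner] = max("CD", key=lambda a: avg[a])
    return strategies.count("C") / n_agents

print("share of cooperators (social learning):", simulate(social=True))
print("share of cooperators (individual learning):", simulate(social=False))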
Chapter 3 studies the process of coalition formation when players are unsure about the true benefit of belonging to a given coalition. Under such a strong incomplete-information scenario, we use a Case-Based Decision Theory approach to study the underlying dynamic process. We show that such a process can be modeled as a non-stationary Markov process. Our main result shows that any rest point of such dynamics can be approached by a sequence of similar "perturbed" dynamics in which players learn all the information about the value of each possible coalition.
In Chapter 4 we study the dynamics of an experience-good market using a two-sided adaptation Agent-Based Computational Economics (ACE) model. The main focus of the analysis is the influence of consumers' habits on market structure. Our results show that certain characteristics of consumer behavior can sustain diversity in the market, both in terms of quality and in terms of firm size. We observe that the more adaptive one side of the market is, the more the market reflects its interests.
Ruiz, Muñoz José Luis. "Completion and decomposition of hypergraphs by domination hypergraphs." Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/458247.
Повний текст джерелаA graph consists of a non-empty set of vertices and a set of unordered pairs of vertices called edges. A set of vertices D is dominating if every vertex not in D is adjacent to some vertex of D. A hypergraph over a finite set X is a collection of subsets of X, none of which is a subset of any other. The domination hypergraph of a graph is the collection of the minimal dominating sets of the graph. A hypergraph is a domination hypergraph if it is the domination hypergraph of some graph. In Chapter 1 we introduce two distributive lattice structures on the set of hypergraphs over a finite set, and we also define some operations: the complement of a hypergraph and the two transversal operations corresponding to each of the lattice structures. We study the behavior of these operations with respect to the partial orders and the lattice structures. In Chapter 2 we introduce several hypergraphs associated with a graph, the most important being the domination hypergraph and the independence-domination hypergraph of the graph, whose elements are the maximal independent sets of the graph, and we establish several relations between them. We then compute the domination hypergraph of all graphs of order 5, up to isomorphism. We also investigate when a hypergraph is a domination hypergraph, and we find all domination hypergraphs in some cases. In Chapter 3 we present the problem of approximating a hypergraph by hypergraphs of a given family. Given a hypergraph, we define four families of approximations, which we call completions, depending on the partial order used and on the side from which we approximate the hypergraph. We establish sufficient conditions for the existence of completions, introduce the sets of minimal or maximal completions of a hypergraph, and study the concept of decomposition, which leads to the decomposition index of a hypergraph. Avoidance properties turn out to be crucial in the study of the existence of decompositions. In Chapter 4 we present computational techniques and compute the upper minimal domination completions and the decomposition indices of some hypergraphs. In the appendices we provide the SAGE code developed to carry out the computations of this thesis, together with the list of the domination hypergraphs of all graphs of order 5 and of all graphs of order 5 that share the same domination hypergraph.
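For small graphs, the domination hypergraph defined above can be enumerated by brute force: list every dominating set and keep the inclusion-minimal ones. The snippet below does this with networkx and itertools on a 5-vertex cycle; it is a naive illustration of the definition, not the SAGE code referred to in the appendices.

from itertools import combinations
import networkx as nx

def domination_hypergraph(G):
    """Return the set of minimal dominating sets of G (brute force, small graphs only)."""
    nodes = list(G.nodes())
    dominating = []
    for r in range(1, len(nodes) + 1):
        for subset in combinations(nodes, r):
            D = set(subset)
            covered = D | {v for u in D for v in G.neighbors(u)}
            if covered == set(nodes):
                dominating.append(frozenset(D))
    # Keep only the inclusion-minimal dominating sets.
    return {D for D in dominating
            if not any(E < D for E in dominating)}

G = nx.cycle_graph(5)       # a 5-cycle
for D in sorted(domination_hypergraph(G), key=sorted):
    print(sorted(D))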
Huerta, Muñoz Diana Lucia. "The flexible periodic vehicle routing problem: modeling alternatives and solution techniques." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/471455.
Повний текст джерелаThis thesis presents and studies the Flexible Periodic Vehicle Routing Problem. In this problem, a carrier must establish a distribution plan to serve a given set of customers over a planning horizon using a fleet of vehicles with homogeneous capacity. The total demand of each customer is known for the time horizon and can be satisfied by visiting the customer in several time periods; however, there is a limit on the maximum quantity that can be delivered in each visit. The objective is to minimize the total routing cost. This problem can be seen as a generalization of the classical Periodic Vehicle Routing Problem which, in contrast, has fixed service schedules and fixed delivery quantities per visit. On the other hand, the Flexible Periodic Vehicle Routing Problem shares some features with the Inventory Routing Problem, in which inventory levels are considered in each time period, the quantity delivered is a decision variable and, typically, an inventory cost is involved in the objective function. The relationship between these periodic routing problems is discussed, and a worst-case analysis is presented which shows the advantages of the studied problem with respect to the aforementioned periodic problems. Furthermore, alternative mixed-integer programming formulations are described and tested computationally. Given the difficulty of solving the studied problem to optimality even for small instances, a matheuristic is developed that can solve large instances efficiently. An extensive computational experience illustrates the characteristics of the problem solutions and shows that, also in practice, allowing flexible policies may yield substantial savings in routing costs compared with the Periodic Vehicle Routing Problem and the Inventory Routing Problem.
Coulson, Matthew John. "Threshold phenomena involving the connected components of random graphs and digraphs." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/673350.
Повний текст джерелаIn this thesis we consider several models of random graphs and random digraphs, and we investigate their behavior near the thresholds for the appearance of certain types of connected components. First, we study the critical window for the appearance of a strongly connected component in binomial (Erdős–Rényi) random digraphs. In particular, we prove several results on the limiting probability that the strongly connected component is very large or very small. Next, we study the configuration model for undirected graphs and show new upper bounds on the size of the largest connected component in the subcritical and barely subcritical regimes. We also show that, in general, these bounds cannot be improved. Finally, we study the configuration model for random digraphs. We focus on the barely subcritical region and show that this model behaves similarly to the binomial model, whose behavior was studied by Łuczak and Seierstad in the barely subcritical and barely supercritical regions. Moreover, we prove the existence of a threshold function for the existence of a giant weak component, as predicted by Kryven.
Applied Mathematics
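The threshold behaviour described in the abstract above can be observed experimentally with networkx: generate binomial random digraphs with edge probability around 1/n and record the size of the largest strongly connected component. The graph size, number of trials and probability grid below are small illustrative values only.

import networkx as nx

def largest_scc_size(n, c, seed):
    """Largest strongly connected component of a binomial digraph D(n, c/n)."""
    D = nx.gnp_random_graph(n, c / n, seed=seed, directed=True)
    return max(len(s) for s in nx.strongly_connected_components(D))

n, trials = 2000, 10
for c in (0.5, 1.0, 1.5):         # below, at, and above the threshold c = 1
    avg = sum(largest_scc_size(n, c, seed) for seed in range(trials)) / trials
    print(f"c = {c}: average largest SCC size = {avg:.1f}")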
Entenza, Rodriguez Ana Isabel. "Elementos básicos de las representaciones visuales funcionales: análisis crítico de las aportaciones realizadas desde diversas disciplinas." Doctoral thesis, Universitat Autònoma de Barcelona, 2008. http://hdl.handle.net/10803/299367.
Повний текст джерелаOur research stems from the need to develop a theoretical discourse capable of overcoming the problem of knowledge based on trial and error—common in our classrooms—and of providing more solid guidelines and general criteria to ‘read’ and ‘write’ this type of message. Our aims are: to identify and define those minimum elements without meaning in two-dimensional, functional, fixed visual representations that illustrate how their actual meaning is conveyed; and to propose how those elements might be analyzed, based on the contributions of the different authors we consulted. The current situation is that studies in visual representation do not have their own specialty and are dispersed among different disciplines (art, psychology, communications, semiotics, etc.) and do not have a stable body of theory (thus changing according to each discipline). We must therefore adopt an interdisciplinary approach and broach each theoretical proposal separately in order to formalize something like a ‘minimum common denominator’ of the constituent elements based, again, on the contributions of the different authors we consulted. A detailed analysis of sources and references allows the literature to be broken down into four large blocks: • Those that look to Da Vinci and Kandinsky (Villafañe, Wong, Arnheim, etc.); • Those that follow to some degree the path marked by Barthes (Joly and Groupe μ, among others); • Those that reflect on the production of visual representations and the idea of code, fundamentally through the prisms of Prieto and Eco; • Those that use a visual (rather than linguistic) focus and consider characteristics such as ‘spaceness’ (Saint Martin, Cossette). In our review of the basic elements of functional visual representations, independently of periods and styles, we use the theoretical corpus that linguistics offers—even though verbal and visual structures are not always comparable—because it is an inevitable point of reference for an analysis of this nature, either to co-opt its concepts or to differ from them. We immediately situate ourselves in the field of visual semiotics as soon as we try to dig deeper into the construction of the signifier because (although there is no general agreement in this regard) semiotics should be the discipline that elaborates a theory of meaning and—as in the case of the semiotics of images—finds visual equivalents to the linguistic concepts of phoneme, word, speech, and so on.
Daoudi, Jalila. "Nuevos modelos y técnicas estadísticas para el estudio de datos financieros." Doctoral thesis, Universitat Autònoma de Barcelona, 2009. http://hdl.handle.net/10803/130794.
Повний текст джерелаOur line of research has developed in the field of statistics applied to finance. Our aim is to find and analyse new statistical models to fit financial data and new techniques to study the behavior of the tails. An application of this work is the study of operational risk. The banking business has changed deeply through the processes of liberalization and financial and technological innovation. This has driven an evolution in the modeling processes used for the measurement and quantification of risk. Operational risk of loss originates in events that cannot be attributed to market risk or to credit risk; it results from inadequate or failed internal processes, people and systems, or from external events. This definition includes legal risk but excludes strategic and reputational risk. The most recent method for covering operational risk is the Advanced Measurement Approach (AMA), which consists in modeling the aggregate distribution of losses (Loss Distribution Approach, or LDA), an approach that has been used successfully in the field of insurance. Assuming that the severities are independent of each other and of the frequency of the events, the LDA methodology requires modeling the frequency and the severity separately. The VaR is then calculated as the percentile of the aggregate distribution of losses for a probability level of 99.9%. In Chapter one, we give an overview of heavy-tailed distributions. Historically, they have been used in the insurance world, specifically the subexponential distributions. In the last decade, this methodology has moved to the world of finance. In Chapter two, it is shown that the price formation mechanism may explain some amount of non-normality. Assuming normality for bid and ask prices, the observed transaction prices may be a mixture of normal distributions or a mixture of left-right truncated normal distributions, the latter case being more likely. The statistical properties of the mixture of left-right truncated normal distributions are developed. It is proved that there is only one maximum of the likelihood function and that this maximum is closely related to the coefficient of variation. Our results show that continuity at zero of this distribution can be avoided in statistical analysis. Empirical work suggests that in financial data non-normality is also produced by a large number of values close to the mean, here referred to as "inliers". This could explain why the Laplace approximation is often a better approach than the normal distribution for daily returns. The approach based on modeling the severity distribution with semi-heavy-tailed distributions, such as the lognormal, the inverse Gaussian and the mixture of normals, provides robust and stable estimates of the VaR. The approach based on extreme value theory, which fits the values over a given threshold with the generalized Pareto distribution, is important for obtaining stable values of the risk measures. In Chapter three, we provide precise arguments to explain the anomalous behavior of the likelihood surface when sampling from the generalized Pareto distribution for small or moderate samples. The behavior of the profile-likelihood function is characterized in terms of the empirical coefficient of variation. A sufficient condition is given for the global maximum of the likelihood function of the Pareto distribution to be at a finite point.
In Chapter 4, we develop a preliminary and complementary methodology to the parametric studies, in order to assess the model from an empirical point of view. New methods to decide between polynomial or exponential tails are introduced, as an alternative to the classical ME-plot and Hill-plot methods. The key idea is based on a characterization of the exponential distribution and uses the residual coefficient of variation as a random process. A graphical method, called the CV plot, is introduced to distinguish between exponential and polynomial tails. Moreover, new statistics introduced from a multivariate point of view allow testing exponentiality using several thresholds simultaneously. The usefulness of our approach is clearly shown with the daily returns of the exchange rate between the US dollar and the Japanese yen. One of the difficulties of the generalized Pareto distribution is that it includes bounded distributions, which leads to convergence problems for the estimators. Moreover, for small samples the likelihood equations may have no solution. Thus, we propose the TNP distribution, which joins the truncated normal, the exponential and the Pareto distributions, as an alternative to the GPD for modeling financial data.
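A minimal sketch of the CV-plot idea described above: the residual coefficient of variation of the excesses is computed over a grid of thresholds; for exponential tails it stays near 1, while heavier tails push it above 1. This is an illustrative reconstruction, not the thesis code, and the simulated data set is arbitrary.

```python
import numpy as np

def cv_plot(data, n_thresholds=50):
    """Residual coefficient of variation of the excesses over a grid of thresholds."""
    x = np.sort(np.asarray(data, dtype=float))
    thresholds = np.quantile(x, np.linspace(0.0, 0.98, n_thresholds))
    cvs = []
    for u in thresholds:
        excesses = x[x > u] - u
        cvs.append(excesses.std(ddof=1) / excesses.mean())
    return thresholds, np.array(cvs)

# Exponential data: the residual CV hovers around 1 for every threshold.
rng = np.random.default_rng(1)
u, cv = cv_plot(rng.exponential(scale=2.0, size=5000))
print(cv.round(2))
```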
Higueras, Hernáez Manuel. "Advanced Statistical Methods for Cytogenetic Radiation Biodosimetry." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/314194.
Повний текст джерела
The original aim of this thesis, entitled ‘Advanced Statistical Methods for Cytogenetic Radiation Biodosimetry’, was to ‘develop novel solutions for statistical analysis of cytogenetic data’, and the question to be addressed by this project was ‘how best to correctly quantify cytogenetic damage that is induced by radiation in complex scenarios of radiation exposure, where it is recognised that the current techniques are insufficient’. The project was created with the aim of providing increased statistical accuracy of cytogenetic dose estimates carried out both for Public Health England's biological dosimetry service and for research purposes, which will correspondingly further inform health risk assessment for radiation protection. The specific objectives were: 1. Identification of further limitations in the existing cytogenetic methodologies (in addition to those that had been previously identified); 2. Identification of scenarios that pose particular difficulties for practical cytogenetic biodosimetry, for which advancement of statistical methods may offer solutions; 3. Identification and comparison of solutions that have been previously developed or proposed in the literature and their further development and application (as appropriate) for the scenarios found in 2; 4. Development and testing of further novel solutions, in the Bayesian and classical frameworks, and implementation of these for use in practical cytogenetic biodosimetry (e.g. in Java); 5. Publication of the results. This project has addressed the original aim and the above objectives through original research into statistical methods for biological dosimetry, resulting in six peer-reviewed publications. The main results from each of these publications are as follows: a review of the literature to identify radiation exposure scenarios requiring further work, which outlines the rationale behind a Bayesian approach and gives a practical example; development of techniques to address the identified gaps, in particular new models for inverse regression and partial body irradiation; demonstration of the application of these novel methodologies to existing and new cytogenetic data in a number of different scenarios; and development of two R packages, one to support biological dose estimation using the novel methodology and another to manage the family of Generalized Hermite distributions.
Padilla, Cozar Maria. "Mètodes gràfics i estadístics per a la detecció de valors extrems." Doctoral thesis, Universitat Autònoma de Barcelona, 2016. http://hdl.handle.net/10803/368209.
Повний текст джерела
Extreme value theory (EVT) is the only statistical discipline that develops techniques and models to describe unusual rather than usual behavior, and whose main objective is the estimation of quantiles corresponding to very extreme events. In many areas of application, a typical requirement is to estimate a value at risk (VaR) high enough that the probability of exceeding it is below a given amount. The probability theory associated with EVT is well established thanks to the results of Fisher and Tippett (1928), Balkema and de Haan (1974), and Pickands (1975). Two main approaches were developed: the block maxima method and the method of excesses over a threshold, whose application presents difficulties when it comes to statistical tools, Diebold et al. (1998). The determination of the threshold from which the limit distribution can be used, and its behavior, are the main problems to deal with. To distinguish the data in general from those under study, we use the concept of tail, which refers to values above a sufficiently high value. For excesses over a threshold, the distribution that characterizes the asymptotic behavior of the tail is the generalized Pareto distribution (GPD); its shape parameter, called the extreme value index, classifies tails into heavy, exponential and light. The application of the GPD model is extensively detailed in McNeil et al. (2005) and Embrechts et al. (1997), but there are limitations, for example when there are no finite moments, or the subjectivity that arises when graphical methods are used. The aim of this thesis is to present new tools for EVT which can be used for threshold selection and extreme value index estimation and which solve some existing problems. Chapter 1 is a review of statistical theory for extreme values. The most widely used graphical methods are recalled, the mean excess plot and the Hill plot, as well as the estimation methods available for the GPD. Finally, a new graphical method called the CV plot, Castillo et al. (2014), and a recently proposed approach to heavy and exponential tails, Castillo et al. (2013), are presented. In Chapter 2 the fact that the coefficient of variation characterizes residual distributions, Gupta and Kirmani (2000), is used to find the theoretical CV plot for some distributions and to apply the asymptotic theory to the case of the GPD, provided the first four moments are finite. Thanks to a transformation, presented in Chapter 3, the CV plot can be applied in any situation. Chapter 4 presents statistics that allow us to estimate the extreme value index and to test the GPD hypothesis, and which are used in an automated threshold selection algorithm. This third point is a major breakthrough for the application of the peaks-over-threshold method, which requires choosing the threshold with a graphical method and estimating the extreme value index by maximum likelihood, Coles (2001), since the researcher's subjectivity disappears when the threshold is estimated automatically. Chapter 5 is dedicated to the study of 16 data sets on embedded systems. The CV plot and the Tm statistic have been used, obtaining good results in most cases. In Chapter 6, we again apply the new tools to the Danish fire insurance data, McNeil (1997), and to financial data analyzed in Gomes and Pestana (2007). Finally, Chapter 7 presents the conclusions of this work and new lines of research.
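As a complement to the abstract, a minimal peaks-over-threshold sketch: for each candidate threshold the excesses are fitted with a generalized Pareto distribution and the estimated shape (the extreme value index) is reported; roughly stable estimates across thresholds suggest an adequate threshold choice. This is a generic illustration, not the automated selection algorithm developed in the thesis, and the simulated data are arbitrary.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
data = rng.pareto(3.0, size=5000)        # heavy-tailed sample (true extreme value index ~ 1/3)

for q in (0.90, 0.95, 0.98):
    u = np.quantile(data, q)
    excesses = data[data > u] - u
    shape, loc, scale = genpareto.fit(excesses, floc=0.0)   # fit a GPD to the excesses
    print(f"threshold quantile {q:.2f}: xi = {shape:.2f}, scale = {scale:.2f}")
```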
Soto, Silva Wladimir Eduardo. "Optimizing planning decisions in the fruit supply chain." Doctoral thesis, Universitat de Lleida, 2017. http://hdl.handle.net/10803/457541.
Повний текст джерела
The Chilean agro-industry has experienced a steady increase in industrialized fruit exports over the last decade, reaching a total increase of 107% in volume and 185% in value. This growth means that the fresh fruit supply chain, whether for preserved, frozen, dehydrated or fresh fruit or juices, requires support in order to make its management increasingly more efficient. So far, some problems directly related to the need to improve the sector's competitiveness have not yet been addressed. In the last few years production costs have risen, mainly due to labor shortage and poor quality of raw material (fresh fruit). That is why improving supply chain efficiency, and thus the agro-industry's competitiveness, particularly in the center-south region of the country, requires new tools that could support decision making regarding the fresh fruit supply chain. Within this context, the general objective of this research was to develop a set of tools aiming to support tactical decisions that could enhance the management of fresh fruit purchasing, cold storage, transport, and the opening of cold chambers. Three important contributions are made in this research study. The first one has to do with the state of the art of supply chain management, by reviewing optimization models applied to fresh fruit supply chains. The second one consists in providing four tools to support tactical decisions regarding fresh fruit supply chains: specifically, three mathematical models for the optimization of decisions that support the selection of growers and the purchasing of fresh fruit, its subsequent storage and transportation, and a proposed mathematical model for cold storage management. The third contribution is the proposal of a Decision Support System (DSS), which aids in decisions about grower selection and the purchasing of fresh fruit, as well as its subsequent storage and transportation. Finally, there is an important additional contribution that involves the application of the models to real cases. All the models proposed were created and validated with the support of agro-industries from the center-south region of the country having problems with their supply chain, which were addressed in this research study.
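The kind of tactical model the abstract describes can be illustrated, in a deliberately toy form, as a linear program that chooses how much fruit to buy from each grower and where to store it at minimum cost. The growers, costs, capacities and demand below are invented for illustration and bear no relation to the models in the thesis.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 3 growers x 2 cold-storage plants.
buy_cost = np.array([0.30, 0.28, 0.35])          # $/kg offered by each grower
store_cost = np.array([0.05, 0.07])              # $/kg at each cold-storage plant
supply = np.array([40_000, 25_000, 30_000])      # kg available per grower
capacity = np.array([50_000, 45_000])            # kg per plant
demand = 80_000                                  # kg the agro-industry must secure

# Decision variables x[g, p]: kg bought from grower g and stored at plant p.
c = (buy_cost[:, None] + store_cost[None, :]).ravel()

A_ub, b_ub = [], []
for g in range(3):                               # cannot exceed each grower's supply
    row = np.zeros(6); row[g * 2:(g + 1) * 2] = 1
    A_ub.append(row); b_ub.append(supply[g])
for p in range(2):                               # cannot exceed each plant's capacity
    row = np.zeros(6); row[p::2] = 1
    A_ub.append(row); b_ub.append(capacity[p])

A_eq = [np.ones(6)]                              # total purchased must meet demand
b_eq = [demand]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(3, 2), res.fun)
```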
Font, Clos Francesc. "On the Scale Invariance of certain Complex Systems." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/308310.
Повний текст джерела
Complexity Science is an interdisciplinary field of study that applies ideas and methods, mostly from statistical physics and critical phenomena, to a variety of systems in almost any other field, from Biology to Economics, to Geology or even Sports Science. In essence, it attempts to challenge the reductionist approach to scientific inquiry by claiming that "the total is more than the sum of its parts" and that, therefore, reductionism shall ultimately fail: when a problem or system is analyzed by studying its constituent units, and these units are subsequently analyzed in terms of even simpler units, and so on, a descending hierarchy of realms of study is formed. And while the system might be somewhat understood in terms of different concepts at each different level, from the coarser description down to its most elementary units, there is no guarantee of a successful bottom-up, comprehensive "reconstruction" of the system. Reductionism only provides a way down the hierarchy of theories, i.e., towards those supposedly more basic and elementary; Complexity aims at finding a way back home, that is, from the basic elementary units up to the original object of study. Scale invariance is the property of being invariant under a scale transformation. Thus, scale-invariant systems lack characteristic scales, as rescaling their variables leaves them unchanged. This is considered of importance in Complexity Science because it provides a bridge between different realms of physics, linking the microscopic world with the macroscopic world. This thesis studies the scale-invariant properties of the frequency-count representation of Zipf's law in natural languages, the type-token growth curve of general Zipf's systems and the distribution of event durations in a thresholded birth-death process. It is shown that some properties of these systems can be expressed as scaling laws, and are therefore scale-invariant. The associated scaling exponents and scaling functions are determined.
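As an illustration of the scale-invariance idea (not the thesis code), the sketch below estimates the exponent of a Zipf-like rank-frequency relation f(r) ∝ r^(−α) with a quick, admittedly naive, least-squares fit in log-log space, using synthetic word counts; the exponent value and corpus size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic Zipfian counts: frequency of the r-th most common type decays as r**(-alpha).
alpha_true = 1.1
ranks = np.arange(1, 2001)
expected = 1e6 * ranks ** (-alpha_true) / np.sum(ranks ** (-alpha_true))
counts = rng.poisson(expected)
counts = counts[counts > 0]

# A power law is a straight line in log-log space; its slope is the scaling exponent.
r = np.arange(1, len(counts) + 1)
slope, intercept = np.polyfit(np.log(r), np.log(np.sort(counts)[::-1]), 1)
print(f"estimated Zipf exponent: {-slope:.2f} (true {alpha_true})")
```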
Silva, dos Santos Guna Alexander. "Performability issues of fault tolerance solutions for message-passing systems: the case of RADIC." Doctoral thesis, Universitat Autònoma de Barcelona, 2009. http://hdl.handle.net/10803/5774.
Повний текст джерела
This thesis is framed within the study of the relationship between performance and availability when a computer cluster based on the message-passing model uses a fault tolerance protocol based on rollback-recovery with pessimistic message logging. This relationship is also known as performability.
The main factors that influence performability when the RADIC fault tolerance architecture is used are identified and studied. The fundamental factors are the message delivery latency, which increases when pessimistic logging is used and thus implies a performance loss; the replication of the redundant data (checkpoints and logs) needed to increase availability in RADIC; and the change in the process-per-node distribution caused by faults, which can cause performance degradation, as can the stops for preventive maintenance.
To deal with these problems, design alternatives based on a performability analysis are proposed. The performance loss caused by logging and replication has been mitigated using a pipeline technique. The change in the process-per-node distribution can be avoided or restored using a flexible and transparent dynamic redundancy mechanism that has been proposed, which allows the dynamic insertion of spare or replacement nodes.
The results obtained show that the presented contributions are able to improve the performability of a computer cluster when a fault tolerance solution such as RADIC is used.
Is a fast but fragile system good? Is an available but slow system good? These two questions demonstrate the importance of performance and availability in computer clusters.
This thesis addresses issues correlated to performance and availability when a rollback-recovery, pessimistic message-log-based fault tolerance protocol is applied to a computer cluster based on the message-passing model. Such a correlation is also known as performability.
The root factors influencing performability when using the RADIC (Redundant Array of Distributed Independent Fault Tolerance Controllers) fault tolerance architecture are raised and studied. These factors include the message delivery latency, which increases when using pessimistic logging and causes performance overhead; the replication of redundant data (logs and checkpoints) needed to increase availability in RADIC; and the process-per-node distribution changed by faults, which may cause performance degradation, as may preventive maintenance stops.
In order to face these problems, some alternatives are presented based on a performability analysis. Using a pipeline approach, the performance overhead of message logging and redundant data replication was mitigated. Changes in the process-per-node distribution can be avoided or restored using the proposed flexible and transparent mechanism for dynamic redundancy, or using dynamic insertion of spare or replacement nodes.
The obtained results show that the presented contributions could improve the performability of a computer cluster when using a fault tolerance solution such as RADIC.
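The latency trade-off discussed above can be illustrated with a toy sketch (not RADIC code, and a strong simplification of what a real pessimistic logger must guarantee): a "pessimistic" sender blocks until the log record is persisted before delivering each message, while a pipelined sender overlaps logging with the next send by pushing log records to a background thread. All costs are simulated with sleeps.

```python
import queue, threading, time

def persist(record):
    time.sleep(0.001)                  # simulated cost of writing one log record

def send(message):
    time.sleep(0.001)                  # simulated network latency of one send

def pessimistic_send(messages):
    for m in messages:
        persist(m)                     # block until the record is logged ...
        send(m)                        # ... and only then deliver the message

def pipelined_send(messages):
    q = queue.Queue()
    def logger():
        while (m := q.get()) is not None:
            persist(m)
    t = threading.Thread(target=logger); t.start()
    for m in messages:
        q.put(m)                       # push the record to the logging pipeline ...
        send(m)                        # ... and deliver immediately, overlapping both
    q.put(None); t.join()

msgs = list(range(200))
for fn in (pessimistic_send, pipelined_send):
    t0 = time.perf_counter()
    fn(msgs)
    print(f"{fn.__name__}: {time.perf_counter() - t0:.2f}s")
```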
Ballester, Pla Coralio. "On Peer Networks and Group Formation." Doctoral thesis, Universitat Autònoma de Barcelona, 2005. http://hdl.handle.net/10803/4064.
Повний текст джерела
To obtain our results, we use the concept of NP-completeness, which is a well-established model of time complexity in Computer Science. In particular, we concentrate on group stability and individual stability in hedonic games. Hedonic games are a simple class of cooperative games in which each individual's utility is entirely determined by the working group she belongs to. Our complexity results, expressed in terms of NP-completeness, cover a wide spectrum of individual preference domains, including strict preferences, indifference in preferences, or free preferences over group sizes. These results also hold if we restrict ourselves to the case in which the maximum group size is small (two or three players).
In the article "Who is Who in Networks. Wanted: The Key Player" (joint with Antoni Calvó Armengol and Yves Zenou), we analyze a model of peer effects in which agents interact in a game of bilateral influences. Finite-population non-cooperative games with linear-quadratic utilities, in which each player decides how much effort to exert, can be interpreted as network games with payoff complementarities, together with a global and uniform substitutability component and an own-concavity effect.
For such games, the action of each player in a Nash equilibrium is proportional to her Bonacich centrality in the network of complementarities, thus establishing a bridge with the social networks literature. This Bonacich-Nash linkage implies that aggregate equilibrium increases with the size and density of the network.
We also analyze a policy that consists in selecting the key player, that is, the player who, once removed from the game, induces the optimal change in aggregate activity. We provide a geometric characterization of the key player, identified with an inter-centrality measure, which takes into account both each player's centrality and her contribution to the centrality of the others.
In the article "Optimal Targets in Peer Networks" (joint with Antoni Calvó Armengol and Yves Zenou), we focus on the practical consequences and limitations that follow from the model of criminal decisions. The main goals addressed in this work are the following. First, the notion of the key criminal in a network is extended to that of the key group. In that situation, the aim is to optimally select the set of criminals to remove or neutralize, given the budgetary restrictions on applying measures. This problem presents an inherent computational complexity that can only be overcome through the use of approximate, greedy or probabilistic procedures. We also address the key criminal problem in the context of dynamic networks, in which individuals initially decide about their future as criminals or as citizens earning a fixed market wage. In that situation, the choice of the key criminal is more complex, since the objective of reducing crime must take into account the chain effects that the removal of one or several criminals may bring about. Finally, we study the computational complexity of the optimal choice problem and exploit the submodularity property of group inter-centrality, which allows us to bound the relative error of the approximation based on a greedy algorithm.
The aim of this thesis work is to contribute to the analysis of the interaction of agents in social networks and groups.
In the chapter "NP-completeness in Hedonic Games", we identify some significant limitations in standard models of cooperation in games: It is often impossible to achieve a stable organization of a society in a reasonable amount of time. The main implications of these results are the following. First, from a positive point of view, societies are bound to evolve permanently, rather than reach a steady state configuration rapidly. Second, from a normative perspective, a planner should take into account practical time limitations in order to implement a stable social order.
In order to obtain our results, we use the notion of NP-completeness, a well-established model of time complexity in Computer Science. In particular, we concentrate on group stability and individual stability in hedonic games. Hedonic games are a simple class of cooperative games in which each individual's utility is entirely determined by her group. Our complexity results, phrased in terms of NP-completeness, cover a wide spectrum of preference domains, including strict preferences, indifference in preferences or undemanding preferences over sizes of groups. They also hold if we restrict the maximum size of groups to be very small (two or three players).
The last two chapters deal with the interaction of agents in a social setting. They focus on games played by agents who interact among themselves. The actions of each player generate consequences that spread to all other players through a complex pattern of bilateral influences.
In "Who is Who in Networks. Wanted: The Key Player" (joint with Antoni Calvó-Armengol and Yves Zenou), we analyze a model of peer effects where agents interact in a game of bilateral influences. Finite-population non-cooperative games with linear-quadratic utilities, where each player decides how much action she exerts, can be interpreted as a network game with local payoff complementarities, together with a globally uniform payoff substitutability component and an own-concavity effect.
For these games, the Nash equilibrium action of each player is proportional to her Bonacich centrality in the network of local complementarities, thus establishing a bridge with the sociology literature on social networks. This Bonacich-Nash linkage implies that aggregate equilibrium increases with network size and density. We then analyze a policy that consists in targeting the key player, that is, the player who, once removed, leads to the optimal change in aggregate activity. We provide a geometric characterization of the key player identified with an inter-centrality measure, which takes into account both a player's centrality and her contribution to the centrality of the others.
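A minimal numerical sketch of the objects just described: the Bonacich centrality vector b = (I − φG)^(−1)·1 and the inter-centrality c_i = b_i² / m_ii (with M = (I − φG)^(−1)), whose maximizer is the key player. The small network and the decay parameter φ are arbitrary choices for illustration.

```python
import numpy as np

# Adjacency matrix of a small undirected network (arbitrary example).
G = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
phi = 0.2                                    # decay factor, below 1 / largest eigenvalue of G

M = np.linalg.inv(np.eye(len(G)) - phi * G)  # M = (I - phi*G)^(-1)
b = M @ np.ones(len(G))                      # Bonacich centralities (Nash efforts are proportional)
c = b ** 2 / np.diag(M)                      # inter-centralities
print("Bonacich centralities:", b.round(2))
print("key player:", int(np.argmax(c)))      # whose removal reduces aggregate activity the most
```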
Finally, in the last chapter, "Optimal Targets in Peer Networks" (joint with Antoni Calvó-Armengol and Yves Zenou), we analyze the previous model in depth and study the properties and the applicability of network design policies.
In particular, the key group is the optimal choice for a planner who wishes to maximally reduce aggregate activity. We show that this problem is computationally hard and that a simple greedy algorithm used for maximizing submodular set functions can be used to find an approximation. We also endogenize participation in the game and describe some of the properties of the key group. The use of greedy heuristics can be extended to other related problems, such as the removal or addition of links in the network.
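A hedged sketch of the greedy heuristic mentioned above for the key-group problem: at each step, remove the player whose deletion most reduces aggregate Bonacich centrality (proportional to aggregate Nash activity), which for submodular objectives enjoys the usual greedy approximation guarantee. The network is the same toy example as in the previous sketch.

```python
import numpy as np

def aggregate_activity(G, phi):
    """Sum of Bonacich centralities, proportional to aggregate Nash activity."""
    n = len(G)
    return np.linalg.inv(np.eye(n) - phi * G).sum()

def greedy_key_group(G, phi, size):
    """Greedily remove, one at a time, the player whose deletion cuts activity the most."""
    removed = []
    for _ in range(size):
        candidates = [i for i in range(len(G)) if i not in removed]
        best = min(candidates, key=lambda i: aggregate_activity(
            np.delete(np.delete(G, removed + [i], 0), removed + [i], 1), phi))
        removed.append(best)
    return removed

# Same toy network as in the previous sketch.
G = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
print("key group of size 2:", greedy_key_group(G, phi=0.2, size=2))
```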
Reyes, Sotomayor Ricardo. "Stabilized reduced order models for low speed flows." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/669102.
Повний текст джерела
This thesis presents a stabilized reduced-order model for low-speed flows using a variational multiscale approach. To develop this formulation we use the finite element method for the full-order model and an eigenvalue decomposition of it to construct the basis. In addition to the formulation of the reduced model, we present two techniques that can be formulated with this approach: a further reduction of the domain, based on mesh reduction, where we use an adaptive refinement technique, and a domain decomposition scheme for the reduced model. To illustrate and test the proposed formulation, we use four different physical models: a convection-diffusion-reaction equation, the incompressible Navier-Stokes equations, a Boussinesq approximation of the Navier-Stokes equations, and a low-Mach-number approximation of the Navier-Stokes equations.
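A minimal sketch of the basis-construction step described above (an illustration, not the thesis implementation): snapshots of the full-order solution are collected column-wise and a proper orthogonal decomposition, computed here via an SVD, yields the reduced basis onto which the equations would then be projected. The snapshot matrix below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical snapshot matrix: each column is a full-order solution (e.g. nodal values)
# sampled at one time step or parameter value.
n_dofs, n_snapshots = 2000, 60
snapshots = rng.standard_normal((n_dofs, 5)) @ rng.standard_normal((5, n_snapshots))
snapshots += 0.01 * rng.standard_normal((n_dofs, n_snapshots))      # small noise

# Proper orthogonal decomposition: left singular vectors ordered by energy content.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
n_modes = int(np.searchsorted(energy, 0.9999) + 1)   # keep modes capturing 99.99% of the energy
basis = U[:, :n_modes]

# A reduced-order approximation of any snapshot is its projection onto the POD basis.
u = snapshots[:, 0]
u_rom = basis @ (basis.T @ u)
print(n_modes, np.linalg.norm(u - u_rom) / np.linalg.norm(u))
```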
Alvarado, Orellana Sergio. "Aportes metodológicos en la estimación de tamaños de muestra en estudios poblacionales de prevalencia." Doctoral thesis, Universitat Autònoma de Barcelona, 2014. http://hdl.handle.net/10803/283363.
Повний текст джерела
This dissertation addresses the application of six statistical approaches used to estimate sample sizes in multinomial populations: Angers (1974), Tortora (1978), Thompson (1987), Cochran (1977), Bromaghin (1993) and Fitzpatrick & Scott (1987). These approaches are widely discussed in the statistical sampling literature, but they have generated controversy when applied in health studies because they do not always allow costs, representativeness and adequate sample sizes to be combined under a simple random sampling scheme and in complex populations where the design or study variable follows a multinomial distribution. It first discusses how, for a design variable with k = 2 categories, using the maximum-variance value P = 0.50 to estimate the sample size when no previous values of the estimator are known delivers biased estimates of the prevalence categories. Theoretical populations were then simulated for variables with k = 3, 4, 5, 6 and 7 categories, generating 25 different populations of size N = 1,000,000 with proportions varying across categories. From these populations, samples of different sizes were drawn by simple random sampling, with the sizes estimated using the six approaches mentioned above under different values of the sampling error; the performance of the approaches was then assessed in terms of: 1) sample size, 2) actual confidence level, 3) average of the estimator, 4) bias and 5) mean squared error. The discussion then focuses on determining which method delivers the best sample sizes and estimates in scenarios where the number of categories ranges from k = 3 to k = 7. Finally, the use of uncertainty measures such as Shannon entropy is proposed and discussed to study the uncertainty associated with the vectors estimated by the different methods.
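The kind of assessment described above can be sketched as follows (illustrative only, with an arbitrary prevalence vector and tolerance): for a candidate sample size n, simulate repeated multinomial samples and estimate the actual confidence level, i.e. the proportion of samples in which every category estimate falls within the tolerated error d of its true value.

```python
import numpy as np

rng = np.random.default_rng(5)

def actual_confidence(p, n, d=0.05, reps=10_000):
    """Share of simulated samples where all category proportions land within +/- d."""
    p = np.asarray(p, dtype=float)
    counts = rng.multinomial(n, p, size=reps)        # reps samples of size n
    phat = counts / n
    return np.mean(np.all(np.abs(phat - p) <= d, axis=1))

p = [0.45, 0.25, 0.15, 0.10, 0.05]                   # hypothetical k = 5 prevalence vector
for n in (200, 400, 510, 800):
    print(n, round(actual_confidence(p, n), 3))
```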
Abío, Roig Ignasi. "Solving hard industrial combinatorial problems with SAT." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/117608.
Повний текст джерелаGonzález, Dan José Roberto. "Introducción del factor humano al análisis de riesgo." Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/325427.
Повний текст джерела
The frequency of accidents is a very important aspect in the field of risk analysis. Variables such as the human factor are generally not considered explicitly in its evaluation. This is due to the uncertainty generated by the lack of information and the complexity of calculating this factor. Nevertheless, the human factor is one of the major causes of unwanted events in the process industries. In this thesis, a modifier of the accident frequency was developed with the aim of introducing the human factor into risk estimation. This modifier takes into account variables such as the organizational factor, the job characteristics factor and the personal characteristics factor. The human factor was included through the application of two methodologies: fuzzy logic and Monte Carlo simulation. The first is based on the way human reasoning works, which is why the contribution of international experts in the area of study was needed, gathered through a questionnaire. In the second, Monte Carlo, the variables are represented by probability functions through a probabilistic treatment. The modifiers were applied to four case studies of real chemical industries: two involving companies that store flammable products and two that store toxic and flammable products. The new frequency values, after applying the modifier obtained, are considered more realistic because they already include the human factor. In addition, the models were validated by comparing the results obtained with the internationally accepted method for risk assessment: Quantitative Risk Analysis (QRA). Consequently, the final risk assessment is more conservative, although in line with the results obtained from a QRA.
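A hedged sketch of the Monte Carlo variant described above: each human-factor variable is represented by a probability distribution, the three factors are combined into a frequency modifier, and the modified accident frequency is summarized by its distribution. The distributions, weights, mapping to a modifier and base frequency below are all invented for illustration, not the values or model of the thesis.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Illustrative scores in [0, 1] for the three factors (1 = worst condition).
organizational = rng.triangular(0.1, 0.3, 0.7, size=n)
job_characteristics = rng.triangular(0.2, 0.4, 0.8, size=n)
personal = rng.triangular(0.1, 0.2, 0.6, size=n)

weights = np.array([0.5, 0.3, 0.2])                       # assumed relative importance
score = weights @ np.vstack([organizational, job_characteristics, personal])

base_frequency = 1e-4                                     # generic accident frequency (per year)
modifier = 0.5 + score                                    # maps the human-factor score to [0.5, 1.5]
modified = base_frequency * modifier

print("mean modified frequency:", modified.mean())
print("5%-95% range:", np.quantile(modified, [0.05, 0.95]))
```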
Artés, Vivancos Tomàs. "Multi-core hybrid architectures applied to forest fire spread prediction." Doctoral thesis, Universitat Autònoma de Barcelona, 2015. http://hdl.handle.net/10803/311111.
Повний текст джерела
Forest fires are a type of natural disaster that represents a great challenge for society because of its high economic and human costs. In order to avoid the costs derived from this natural disaster and to improve its suppression, forest fire spread simulators can be used to try to anticipate the behavior of the fire and to help achieve a more efficient and safer suppression. When a prediction of forest fire spread is provided, there are two key elements: the accuracy and the time needed to compute the prediction. In the context of natural disaster simulation, it is well known that part of the prediction error comes from the uncertainty in the input data used by the simulator. For this reason, the scientific community has created different calibration methods to reduce the uncertainty of the input data and thus improve the prediction error. In this work, a two-stage calibration methodology is used, which has been tested in previous works with good results. This calibration method implies a considerable need for computational resources and increases the computation time because it uses a Genetic Algorithm as the method for searching for the best input data for the simulator. The time restrictions under which a fire prediction system works must be taken into account. It is necessary to maintain an adequate balance between accuracy and the computation time used in order to provide a good prediction on time. To be able to use the aforementioned calibration technique, one must solve the problem posed by the fact that some solutions are unfeasible because they imply very long execution times, which can prevent a timely response in a hypothetical operational context. This PhD thesis uses multi-core architectures with the aim of accelerating the two-stage calibration method and being able to provide a prediction under the delivery times that would arise in a real context. For this reason, a core allocation policy based on the available execution time is defined. This allocation policy assigns a given number of resources to a given simulation before it is executed. The allocation policy is based on decision trees created with the simulation parameters used. However, two methods are proposed for those cases where the genetic algorithm tends to create individuals whose execution times make it impossible to finish the calibration on time: Re-TAC and Soft-TAC. The Re-TAC proposal uses the resolution of the simulations to solve the problem. Specifically, Re-TAC tries to find the minimum reduction in resolution that allows those simulations that are too long to be executed while keeping the accuracy under control. On the other hand, Soft-TAC uses populations of dynamic size. That is, individuals are not killed when they reach the execution time limit assigned to a generation of the Genetic Algorithm; instead, the simultaneous execution of individuals from different generations is allowed, making the population size dynamic. All the proposed prediction strategies have been tested with real cases, obtaining satisfactory results in terms of accuracy and the computation time used.
Large forest fires are a kind of natural hazard that represents a big threat to society because it implies a significant number of economic and human costs. To avoid major damage and to improve forest fire management, one can use forest fire spread simulators to predict fire behaviour. When providing forest fire predictions, there are two main considerations: accuracy and computation time. In the context of natural hazard simulation, it is well known that part of the final forecast error comes from uncertainty in the input data. For this reason, several input data calibration methods have been developed by the scientific community. In this work, we use the Two-Stage calibration methodology, which has been shown to provide good results. This calibration strategy is computationally intensive and time-consuming because it uses a Genetic Algorithm as an optimization strategy. Taking into account the aspect of urgency in forest fire spread prediction, we need to maintain a balance between accuracy and the time needed to calibrate the input parameters. In order to take advantage of this technique, we must deal with the problem that some of the obtained solutions are impractical, since they involve simulation times that are too long, preventing the prediction system from being deployed at an operational level. This PhD Thesis exploits the benefits of current multi-core architectures with the aim of accelerating the Two-Stage forest fire prediction scheme, making it able to deliver predictions under strict real-time constraints. For that reason, a Time-Aware Core allocation (TAC) policy has been defined to determine in advance the most appropriate number of cores assigned to a given forest fire spread simulation. Each execution configuration is obtained by considering the particular values of the input data needed for each simulation and applying a dynamic decision tree. However, in those cases where the optimization process would drive the system to solutions whose simulation time would prevent the system from finishing on time, two enhanced schemes have been defined: Re-TAC and Soft-TAC. The Re-TAC approach deals with the resolution of the simulation. In particular, Re-TAC finds the minimum resolution reduction for such long simulations, keeping the accuracy loss within a known interval. On the other hand, Soft-TAC treats the GA's population size as dynamic, in the sense that no individual is killed for exceeding the internal generation deadline; instead, it keeps executing, and the population size of the subsequent GA generation is modified accordingly. All proposed prediction strategies have been tested with past real cases, obtaining satisfactory results both in terms of prediction accuracy and in the time required to deliver the prediction.
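A toy sketch of the core-allocation idea described above (all data, features and the model are invented for illustration and are not the thesis's TAC policy): a regression tree trained on past runs predicts simulation time from the input parameters and the number of cores, and the policy picks the smallest core count whose predicted time fits the available deadline.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)

# Synthetic training set: [wind_speed, fuel_moisture, cores] -> execution time (minutes).
X = np.column_stack([rng.uniform(0, 20, 500),
                     rng.uniform(2, 30, 500),
                     rng.choice([1, 2, 4, 8, 16], 500)])
work = 50 + 8 * X[:, 0] - 1.5 * X[:, 1]              # assumed workload model
y = work / X[:, 2] + rng.normal(0, 2, 500)           # time shrinks with the core count

model = DecisionTreeRegressor(max_depth=6).fit(X, y)

def allocate_cores(wind, moisture, deadline, core_options=(1, 2, 4, 8, 16)):
    """Smallest number of cores whose predicted simulation time meets the deadline."""
    for c in core_options:
        if model.predict([[wind, moisture, c]])[0] <= deadline:
            return c
    return core_options[-1]                          # fall back to the largest configuration

print(allocate_cores(wind=15.0, moisture=5.0, deadline=20.0))
```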