Theses on the topic "INFORMATION THEORETIC MEASURES"

To see the other types of publications on this topic, follow this link: INFORMATION THEORETIC MEASURES.

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles


Consult the 23 best theses for your research on the topic "INFORMATION THEORETIC MEASURES".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Iyengar, Giridharan Ranganathan, 1969-. "Information theoretic measures for encoding video". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/61531.

2

Kariuki, Stella Waithiegeni. "Information-theoretic performance measures for effective scheduling in manufacturing". Thesis, University of Oxford, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.400193.

3

Foster, Peter. "Information-theoretic measures of predictability for music content analysis". Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/9021.

Abstract:
This thesis is concerned with determining similarity in musical audio, for the purpose of applications in music content analysis. With the aim of determining similarity, we consider the problem of representing temporal structure in music. To represent temporal structure, we propose to compute information-theoretic measures of predictability in sequences. We apply our measures to track-wise representations obtained from musical audio; thereafter we consider the obtained measures as predictors of musical similarity. We demonstrate that our approach benefits music content analysis tasks based on musical similarity. For the intermediate-specificity task of cover song identification, we compare contrasting discrete-valued and continuous-valued measures of pairwise predictability between sequences. In the discrete case, we devise a method for computing the normalised compression distance (NCD) which accounts for correlation between sequences. We observe that our measure improves average performance over NCD, for sequential compression algorithms. In the continuous case, we propose to compute information-based measures as statistics of the prediction error between sequences. Evaluated on 300 Jazz standards and on the Million Song Dataset, we observe that continuous-valued approaches outperform discrete-valued approaches. Further, we demonstrate that continuous-valued measures of predictability may be combined to improve performance with respect to baseline approaches. Using a filter-and-refine approach, we demonstrate state-of-the-art performance using the Million Song Dataset. For the low-specificity tasks of similarity rating prediction and song year prediction, we propose descriptors based on computing track-wise compression rates of quantised audio features, using multiple temporal resolutions and quantisation granularities. We evaluate our descriptors using a dataset of 15 500 track excerpts of Western popular music, for which we have 7 800 web-sourced pairwise similarity ratings. Combined with bag-of-features descriptors, we obtain performance gains of 31.1% and 10.9% for similarity rating prediction and song year prediction. For both tasks, analysis of selected descriptors reveals that representing features at multiple time scales benefits prediction accuracy.
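As a rough illustration of the normalised compression distance at the core of this abstract, here is a minimal sketch of the standard NCD (not the thesis's correlation-aware variant), using zlib as a stand-in sequential compressor; the byte strings are purely hypothetical:

```python
import zlib

def compressed_len(data: bytes) -> int:
    """Length in bytes of the zlib-compressed input (a sequential compressor)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalised compression distance: close to 0 for similar sequences,
    close to 1 for unrelated ones."""
    cx, cy, cxy = compressed_len(x), compressed_len(y), compressed_len(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

motif = b"abcabcabc" * 50
print(ncd(motif, motif))                  # near 0: maximally similar
print(ncd(motif, bytes(range(256)) * 2))  # noticeably larger: dissimilar
```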
4

Castelló Boscá, Pascual. "Viewpoint-driven Simplification of Polygonal Models using Information Theoretic measures". Doctoral thesis, Universitat Jaume I, 2007. http://hdl.handle.net/10803/10484.

Abstract:
Polygonal models currently dominate interactive computer graphics, owing to their mathematical simplicity, which allows the most common rendering algorithms to be implemented directly in hardware. However, the complexity of these models (measured by the number of polygons) is growing faster than the ability of graphics hardware to render them interactively. Polygon simplification techniques offer a solution for handling such complex models. These methods simplify the polygonal geometry, reducing the rendering cost of the model without loss of the visual content of the object. Although this is an old idea in computer graphics, it remains current. A wide range of simplification methods has emerged in recent years. Most have approached the simplification problem from a geometric point of view, that is, by devising metrics that guide the simplification and compute the error introduced at each step with a purely geometric measure. More recently, new methods have been developed that attempt to guide the simplification process with a measure of visual similarity; in other words, so that the simplified models look alike when rendered.
Geometric error is one of the factors that influence visual similarity, but it is not the only one. Other factors, such as silhouettes, occlusions and transparencies, and surface attributes, have a notable influence. This thesis presents a new method for simplifying polygonal meshes. The method performs a viewpoint-driven simplification whose goal is to guarantee visual similarity, which benefits many applications aimed at interactive visualisation, such as computer games. Several metrics, all based on Information Theory, are proposed to drive the simplification method, and the choice of metric leads to three broad types of simplification. The first group yields a simplification whose primary criterion is visual similarity, producing simplified models whose hidden geometry has been largely or entirely simplified away.
The second group of metrics addresses simplification with visual similarity as the goal while better preserving the hidden geometry of the model, so the geometric error is lower than in the previous group at the cost of a larger visual error. Finally, the third group performs a simplification that tries to preserve the visually salient regions of the mesh by applying the concept of mesh saliency, defined in terms of the Jensen-Shannon divergence. These small regions are better preserved with this method; otherwise they would be removed, since their simplification cost is low.
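The mesh-saliency metric in the third group builds on the Jensen-Shannon divergence. A minimal sketch of the JS divergence between two discrete distributions follows; using it on distributions of projected polygon areas seen from neighbouring viewpoints is a hypothetical example, not the thesis's exact construction:

```python
import numpy as np

def jensen_shannon(p, q, eps=1e-12):
    """Jensen-Shannon divergence (nats) between two discrete distributions."""
    p = np.asarray(p, float); p = p / p.sum()
    q = np.asarray(q, float); q = q / q.sum()
    m = 0.5 * (p + q)  # the mixture distribution
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# e.g. projected-area distributions from two nearby viewpoints (toy data)
print(jensen_shannon([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))
```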
5

Bonaventura Brugués, Xavier. "Perceptual information-theoretic measures for viewpoint selection and object recognition". Doctoral thesis, Universitat de Girona, 2015. http://hdl.handle.net/10803/302540.

Abstract:
Viewpoint selection has been an emerging area in computer graphics for some years, and it is now getting maturity with applications in fields such as scene navigation, volume visualization, object recognition, mesh simplification, and camera placement. But why is viewpoint selection important? For instance, automated viewpoint selection could play an important role when selecting a representative model by exploring a large 3D model database in as little time as possible. Such an application could show the model view that allows for ready recognition or understanding of the underlying 3D model. An ideal view should strive to capture the maximum information of the 3D model, such as its main characteristics, parts, functionalities, etc. The quality of this view could affect the number of models that the artist can explore in a certain period of time. In this thesis, we present an information-theoretic framework for viewpoint selection and object recognition. From a visibility channel between a set of viewpoints and the polygons of a 3D model we obtain several viewpoint quality measures from the respective decompositions of mutual information. We also review and compare in a common framework the most relevant viewpoint quality measures for polygonal models presented in the literature. From the information associated to the polygons of a model, we obtain several shading approaches to improve the object recognition and the shape perception. We also use this polygonal information to select the best views of a 3D model and to explore it. We use these polygonal information measures to enhance the visualization of a 3D terrain model generated from textured geometry coming from real data. Finally, we analyze the application of the viewpoint quality measures presented in this thesis to compute the shape similarity between 3D polygonal models. The information of the set of viewpoints is seen as a shape descriptor of the model. Then, given two models, their similarity is obtained by performing a registration process between the corresponding sets of viewpoints.
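A minimal sketch of the visibility channel described above, assuming uniformly distributed viewpoints and p(z|v) proportional to the projected area of polygon z from viewpoint v; the random areas are placeholder data:

```python
import numpy as np

def viewpoint_mutual_information(area):
    """I(V; Z) in bits for a visibility channel, where area[v, z] is the
    projected area of polygon z as seen from viewpoint v."""
    p_zv = area / area.sum(axis=1, keepdims=True)      # p(z | v)
    p_v = np.full(area.shape[0], 1.0 / area.shape[0])  # uniform p(v)
    p_z = p_v @ p_zv                                   # marginal p(z)
    ratio = np.where(p_zv > 0, p_zv / p_z, 1.0)        # avoid log(0)
    return float(np.sum(p_v[:, None] * p_zv * np.log2(ratio)))

areas = np.random.rand(6, 20)  # 6 viewpoints, 20 polygons (toy data)
print(viewpoint_mutual_information(areas))
```

Decomposing this mutual information per viewpoint or per polygon yields viewpoint-quality and polygonal-information measures of the kind the thesis works with.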
6

Kabata, Apphiah Justice. "Convergence Performance of Information Theoretic Similarity Measures for Robust Matching". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186525.

Abstract:
Image matching is an area of significant use in the medical field, as there is a need to match images captured with different modalities to overcome the limitations that can occur when dealing with individual modalities. However, performing matching of multimodal images may not be a trivial task: multimodality entails changes in brightness and contrast that might be an obstacle when performing a match using similarity measures. This study investigated the convergence performance of information-theoretic similarity measures. The similarity measures analysed in this study are mutual information (MI), the cross-cumulative residual entropy (CCRE), and the sum of conditional variances (SCV). To analyse the convergence performance of these measures, an experiment was conducted on one data set introducing the concept of multimodality, and on two single images displaying a significant variation in texture. This was to investigate the impact of multimodality and variations in texture on the convergence performance of similarity measures. The experiment investigated the ability of similarity measures to find convergence on MRI and CT medical images after a displacement has occurred. The results of the experiment showed that the convergence performance of similarity measures varies depending on the texture of the images. MI is best suited to high-textured images, while CCRE is more applicable to low-textured images. SCV is the most stable similarity measure, as it is little affected by variation in texture. The experiment also reveals that the convergence performance of the similarity measures identified in the case of unimodality can be preserved in the context of multimodality. This study gives better awareness of the convergence performance of similarity measures, which could improve the use of similarity measures in the medical field and yield better diagnoses of patients' conditions.
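For concreteness, a common way to evaluate the mutual information measure discussed above is from a joint intensity histogram of the two (candidate-aligned) images; a registration loop would re-evaluate this score over candidate displacements and keep the maximum. A minimal sketch with synthetic data:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI (bits) between two aligned greyscale images via a joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of img_b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

img = np.random.rand(64, 64)
print(mutual_information(img, img))                     # high: identical images
print(mutual_information(img, np.random.rand(64, 64)))  # near 0: unrelated
```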
7

Wang, Fei. "Information theoretic measures and their applications to image registration and segmentation". [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0015631.

8

Zhang, Jie. "New information theoretic distance measures and algorithms for multimodality image registration". [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0011552.

9

Deza, Juan Ignacio. "Climate networks constructed by using information-theoretic measures and ordinal time-series analysis". Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/286281.

Abstract:
This thesis is devoted to the construction of global climate networks (CNs) built from time series (surface air temperature anomalies, SAT) using nonlinear analysis. Several information-theory measures have been used, including mutual information (MI) and conditional mutual information (CMI). The ultimate goal of the study is to improve the present understanding of climatic variability by means of networks, focusing on the different spatial and time scales of climate phenomena. An introduction to the main components of this interdisciplinary work is offered in the first three chapters: climate variability and patterns are introduced in Chapter 1, network theory in Chapter 2, and nonlinear time-series analysis, especially information-theoretic methodology, in Chapter 3. In Chapter 4, the statistical similarity of SAT anomalies in different regions of the world is assessed using MI. These climate networks are constructed from time series of monthly averaged SAT anomalies, and from their symbolic ordinal representation, which allows an analysis of these interdependencies on different time scales. This analysis allows identifying topological changes in the networks when using ordinal patterns (OPs) of different time intervals. Intra-seasonal (a few months), inter-seasonal (covering a year) and inter-annual (several years) timescales are considered. The nature of the interdependencies is then explored in Chapter 5 by using SAT data from an ensemble of atmospheric general circulation model (AGCM) runs, all of them forced by the same historical sea surface temperature (SST). It is possible to separate atmospheric variability into a forced component and another intrinsic to the atmosphere. In this way, it is possible to obtain climate networks for both types of variability and characterize them. Furthermore, an analysis using OPs allows constructing CNs for several time scales and evaluating the connectivity of each network. Selecting both the time scale and the variability type provides further insight into the study of SAT anomalies. The connectivity of the constructed CNs allows assessing the influence of two main climate phenomena: ENSO and the North Atlantic Oscillation (NAO). In Chapter 6, a natural extension of the network-construction methodology is implemented in order to infer the direction of the links. A directionality index (DI) is used. DI can be defined as the difference of the CMI between two time series x(t) and y(t), calculated in two ways: i) considering the information about x(t) contained τ time units in the past of y(t), and ii) considering the information about y(t) contained τ time units in the past of x(t). DI is used to quantify the direction of information flow between the series, indicating the direction of the links of the network. Two SAT datasets, one monthly averaged and another daily averaged, are used. The links of the obtained networks are interpreted in terms of known atmospheric tropical and extra-tropical variability phenomena. Specific and relevant geographical regions are selected, the net direction of propagation of the atmospheric patterns is analyzed, and the direction of the inferred links is tested using surrogate data. These patterns are also found to be acting on various time scales, such as synoptic atmospheric waves in the extra-tropics or longer-time-scale events in the tropics. The final Chapter 7 presents the main conclusions and a discussion of future work.
The goal of this thesis is the construction of climate networks (CNs) from a global set of surface air temperature (SAT) time series, using nonlinear time-series analysis techniques. Several methodologies are applied to the study of climate variability, including mutual information (MI) and conditional mutual information (CMI). The main objective is to study climate variability through network analysis, emphasising the different spatial and temporal patterns of the climate system. An introduction to the main components of this interdisciplinary work is given in the first three chapters: climate variability and atmospheric patterns in Chapter 1, network theory in Chapter 2, and nonlinear time-series analysis, especially information-theoretic methods, in Chapter 3. In Chapter 4, the statistical similarity of SAT anomalies in different regions of the world is assessed using MI. These global climate networks are built from SAT time series averaged on monthly scales, and from their symbolic representation, allowing an analysis of these interdependencies on several time scales. Topological changes between the networks are identified as the construction interval of the ordinal patterns (OPs) varies. Intra-seasonal (a few months), inter-seasonal (covering a year) and inter-annual (several years) scales are considered. Increasing the spacing of the ordinal patterns (and thus the time scale of the ordinal analysis) yields climate networks with increased connectivity in the equatorial Pacific. Conversely, the number of significant connections decreases when the ordinal analysis is performed on a shorter time scale (that is, comparing consecutive months). This effect is interpreted as a consequence of El Niño-Southern Oscillation (ENSO) acting on longer time scales and of greater stochasticity in the time series on shorter ones. The nature of the interdependencies is explored in Chapter 5 using SAT data from an ensemble of atmospheric general circulation model (AGCM) runs, all forced by the same sea surface temperature (SST). It is possible to separate atmospheric variability into a forced component and another intrinsic to the atmosphere; climate networks are obtained for both types of variability, making it possible to characterise them. An OP analysis allows CNs to be built for different time scales and the OP scale at which the different networks show the highest connectivity to be found. This double selection process makes it possible to study the variability of SAT anomalies from a new point of view. The connectivity of the climate networks built in this way makes it possible to evaluate the influence of two climate phenomena, ENSO and the North Atlantic Oscillation (NAO), by comparing the original networks with networks built from time series from which these phenomena were linearly removed. A key result of this analysis is that the connectivity of the forced-variability network is strongly affected by ENSO: removing the NINO3.4 index (which characterises ENSO) causes a general loss of connectivity in the network.
The fact that even connections between areas far from the equatorial Pacific are lost when the index is removed suggests that these regions are not directly connected, but are both influenced by the ENSO-dominated zone, especially on inter-annual time scales. On the other hand, in the internal-variability network, which is independent of the SST forcing, the connections of the Labrador Sea with the rest of the world are significantly affected by NAO, with a maximum on intra-annual scales. Although the strongest non-local connections turn out to be those forced by the ocean, the presence of teleconnections associated with internal variability is also shown. In Chapter 6, a natural extension of the network-construction methodology is implemented, allowing the direction of the connections to be inferred. A directionality index (DI) can be defined as the difference between the CMI of two time series x(t) and y(t) computed in two ways: i) considering the information about x(t) contained τ time units in the past of y(t), and ii) considering the information about y(t) contained τ time units in the past of x(t). This DI index is used to quantify the direction of information flow between the series, which corresponds to the direction of the connection between the respective network nodes. Two sets of time series, one monthly averaged and the other daily averaged, are used. The connections of the resulting networks are interpreted in terms of known tropical and extra-tropical variability phenomena. Specific, relevant regions are selected, and the net direction of propagation of the atmospheric patterns is analysed and contrasted with a statistical inference test. Different variability patterns are found to act on several time scales, such as synoptic atmospheric waves in the extra-tropics or longer time scales in the tropics. The dependence of the DI values on τ is investigated: on the synoptic scale (τ up to about 10 days), DI depends on τ, with a minimum in the tropics and maxima (in the form of wave trains) in the extra-tropics. For larger values of τ, the links turn out to be relatively robust to the choice of the parameter, showing high connectivity in the tropics and low connectivity in the extra-tropics. The analysis demonstrates the ability of DI to infer the net direction of climate interactions and to improve the current understanding of climate phenomena and climate predictability. The resulting network is in full agreement with current knowledge of climate phenomena, validating this methodology for inferring, directly from the data, the net direction of climate interactions. Finally, Chapter 7 presents the conclusions and a discussion of future work.
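The ordinal-pattern symbolisation used throughout Chapters 4-6 can be sketched as follows (Bandt-Pompe style); varying the lag changes the time scale of the analysis, and MI or CMI between the symbol sequences of two grid points then defines a network link. The sinusoidal series below is placeholder data:

```python
import numpy as np
from itertools import permutations

def ordinal_symbols(x, order=3, lag=1):
    """Map a time series to ordinal-pattern symbols: each window of `order`
    samples spaced `lag` apart is encoded by the index of its rank pattern."""
    lookup = {p: i for i, p in enumerate(permutations(range(order)))}
    n = len(x) - (order - 1) * lag
    windows = np.stack([x[i:i + n] for i in range(0, order * lag, lag)], axis=1)
    return np.array([lookup[tuple(np.argsort(w))] for w in windows])

sat_anomaly = np.sin(np.linspace(0, 20, 300)) + 0.1 * np.random.randn(300)
print(np.bincount(ordinal_symbols(sat_anomaly, order=3, lag=1), minlength=6))
```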
10

Kenneway, Debra A. "An Investigation of the Two-Dimensional Ising Spin Glass Using Information Theoretic Measures". Fogler Library, University of Maine, 2005. http://www.library.umaine.edu/theses/pdf/KennewayDA2005.pdf.

11

Yusuf, Isse Jamila, and Chaimae El Ghouch. "Information Theoretic Similarity Measures for Robust Image Matching : Multimodal Imaging - Infrared and Visible light". Thesis, KTH, Teoretisk datalogi, TCS, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186450.

Abstract:
This study aimed to investigate the applicability of three different information-theoretic similarity measures in image matching: mutual information (MI), cross-cumulative residual entropy (CCRE) and the sum of conditional variances (SCV). An experiment was conducted to assess the impact on the performance of the similarity measures when dealing with multimodality, in this case in the context of infrared and visible light. This was achieved by running simulations of four different scenarios using images taken in infrared and visible light, additionally varying the amount of detail to create different experimental setups: setup A, unimodal data sets with more and less detail, and setup B, multimodal data sets with more and less detail. The results showed that multimodality has a statistically significant effect on the performance of all similarity measures. The measures' performance also differed from each other when matching images with different amounts of detail. This provided a basis for judging which measure to use, depending on the variation in detail in the data, so as to give results that are as clear and sound as possible. The study concluded that CCRE gave the clearest and soundest results in the multimodal infrared/visible-light setting, for both the more and the less detailed cases. Even though the other similarity measures performed well in some cases, CCRE would be the one to recommend on the basis of this study. Keywords: image matching, image registration, information theoretic similarity measures, multimodal imaging, similarity measures, MI, CCRE, SCV, infrared, visible light.
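Of the three measures compared here, the sum of conditional variances is the simplest to sketch. One common formulation (an assumption on our part; implementations differ in binning and weighting) bins the reference image's intensities and sums the variance of the floating image's intensities within each bin, so lower is better:

```python
import numpy as np

def sum_of_conditional_variances(ref, flt, bins=16):
    """SCV similarity: bin the reference intensities, then sum the variance of
    the floating image's intensities inside each bin (lower = better match)."""
    ref, flt = ref.ravel(), flt.ravel()
    edges = np.histogram_bin_edges(ref, bins=bins)
    idx = np.digitize(ref, edges[1:-1])  # bin index in 0..bins-1
    return float(sum(flt[idx == j].var() for j in range(bins) if np.any(idx == j)))

a = np.random.rand(64, 64)
print(sum_of_conditional_variances(a, a))                       # small
print(sum_of_conditional_variances(a, np.random.rand(64, 64)))  # larger
```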
12

Bloch, Matthieu. "Physical-layer security". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24658.

Abstract:
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008.
Committee Chair: McLaughlin, Steven; Committee Member: Barros, Joao; Committee Member: Bellissard, Jean; Committee Member: Fekri, Faramarz; Committee Member: Lanterman, Aaron
13

Zegers, Pablo, B. Frieden, Carlos Alarcón and Alexis Fuentes. "Information Theoretical Measures for Achieving Robust Learning Machines". MDPI AG, 2016. http://hdl.handle.net/10150/621411.

Abstract:
Information theoretical measures are used to design, from first principles, an objective function that can drive a learning machine process to a solution that is robust to perturbations in parameters. Full analytic derivations are given and tested with computational examples showing that indeed the procedure is successful. The final solution, implemented by a robust learning machine, expresses a balance between Shannon differential entropy and Fisher information. This is also surprising in being an analytical relation, given the purely numerical operations of the learning machine.
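The two quantities said to be balanced here are standard functionals of the model density; schematically, in the usual notation (a sketch of the ingredients only; the exact objective combining them is derived in the paper):

```latex
h(p_\theta) = -\int p_\theta(x)\,\ln p_\theta(x)\,dx ,
\qquad
I(\theta) = \int p_\theta(x)\,\bigl(\partial_\theta \ln p_\theta(x)\bigr)^{2}\,dx .
```

A trade-off of the schematic form h - λI is one plausible reading of the balance the abstract describes: maximising entropy favours spread-out solutions, while small Fisher information makes the solution insensitive to parameter perturbations.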
14

Kim, Hyoung-soo. "INFORMATION-THEORETIC OPTIMIZATION OF WIRELESS SENSOR NETWORKS AND RADAR SYSTEMS". Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/193668.

Abstract:
Three information measures are discussed and used as objective functions for optimization of wireless sensor networks (WSNs) and radar systems. In addition, a long-term system performance measure is developed for evaluating the performance of slow-fading WSNs. Three system applications are considered: a distributed detection system, a distributed multiple-hypothesis system, and a radar target recognition system. First, we consider sensor power optimization for distributed binary detection systems. The system communicates over slow-fading orthogonal multiple access channels. In earlier work, it was demonstrated that system performance could be improved by adjusting transmit power to maximize the J-divergence measure of a binary detection system. We define outage probability for slow-fading systems as a long-term performance measure, and analytically develop the detection outage for the given system model. Based on the analytical result for the outage probability, diversity gain is derived and shown to be proportional to the number of sensor nodes. Then, we extend the optimized power control strategy to a distributed multiple-hypothesis system, and enhance the power optimization by exploiting a priori probabilities and local sensor statistics. We also extend outage probability to the distributed multiple-hypothesis problem. The third application is radar waveform design with a new performance measure: Task-Specific Information (TSI). TSI is an information-theoretic measure formulated for one or more specific sensor tasks by encoding the task(s) directly into the signal model via source variables. For example, we consider the problem of correctly classifying a linear system from a set of known alternatives, where the source variable takes the form of an indicator vector that selects the transfer function of the true hypothesis. We then compare the performance of TSI with conventional waveforms and other information-theoretic waveform designs via simulation, applying radar-specific constraints and signal models to the waveform optimization.
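As a concrete anchor for the first application, the J-divergence between two Gaussian sensor-output hypotheses has a closed form, sketched below; the power-allocation problem then maximises this quantity over the fading channel states (the parameters here are toy values, not from the dissertation):

```python
import numpy as np

def kl_gauss(m0, s0, m1, s1):
    """KL divergence (nats) from N(m0, s0^2) to N(m1, s1^2)."""
    return np.log(s1 / s0) + (s0**2 + (m0 - m1) ** 2) / (2 * s1**2) - 0.5

def j_divergence(m0, s0, m1, s1):
    """Symmetrised KL (J-divergence) between the two hypothesis densities."""
    return kl_gauss(m0, s0, m1, s1) + kl_gauss(m1, s1, m0, s0)

# Separation between H0: N(0, 1) and H1: N(1, 1) at a local sensor
print(j_divergence(0.0, 1.0, 1.0, 1.0))  # 1.0 for unit variance, unit shift
```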
15

Harrison, Willie K. "Physical-layer security : practical aspects of channel coding and cryptography". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44818.

Abstract:
In this work, a multilayer security solution for digital communication systems is provided by considering the joint effects of physical-layer security channel codes with application-layer cryptography. We address two problems: first, the cryptanalysis of error-prone ciphertext; second, the design of a practical physical-layer security coding scheme. To our knowledge, the cryptographic attack model of the noisy-ciphertext attack is a novel concept; cryptanalysis is traditionally performed under the assumption that the attacker has the exact ciphertext. However, with the ever-increasing amount of viable research in physical-layer security, it now becomes essential to perform the analysis when ciphertext is unreliable. We do so for the simple substitution cipher using an information-theoretic framework, and for stream ciphers by characterizing the success or failure of fast-correlation attacks when the ciphertext contains errors. We then present a practical coding scheme that can be used in conjunction with cryptography to ensure positive error rates in an eavesdropper's observed ciphertext, while guaranteeing error-free communications for legitimate receivers. Our codes are called stopping set codes, and provide a blanket of security that covers nearly all possible system configurations and channel parameters. The codes require a public authenticated feedback channel. The solutions to these two problems indicate the inherent strengthening of security that can be obtained by confusing an attacker about the ciphertext, and then give a practical method for providing the confusion. The aggregate result is a multilayer security solution for transmitting secret data that showcases security enhancements over standalone cryptography.
16

Streicher, Simon. "Plant-wide fault and disturbance screening using combined network centrality and information-theoretic causality measure analysis". Diss., University of Pretoria, 2018. http://hdl.handle.net/2263/70898.

Abstract:
Finding the source of a disturbance in complex systems such as industrial chemical processing plants can be a difficult task requiring a significant number of engineering hours. In many cases, a systematic elimination procedure is considered to be the only feasible approach but can cause significant process upsets. Practitioners desire robust alternative approaches. This study evaluates methods for ranking process elements according to the magnitude of their influence in a complex system. The use of data-driven causality estimation techniques to infer an information transfer network among process elements is studied. Graph centrality measures are then applied to rank the process elements according to their overall effect. A software implementation of the proposed methods forms part of this work.
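A minimal sketch of the two-stage idea, assuming a precomputed pairwise causality matrix (e.g. transfer-entropy estimates) and using PageRank on the reversed graph as one plausible centrality choice so that upstream sources of information transfer rank highly; the element names, threshold, and scores are hypothetical:

```python
import numpy as np
import networkx as nx

def rank_by_influence(causality, names, threshold=0.1):
    """causality[i, j]: estimated information transfer from element i to j.
    Threshold into a directed graph, then rank likely disturbance sources."""
    g = nx.DiGraph()
    g.add_nodes_from(names)
    for i, src in enumerate(names):
        for j, dst in enumerate(names):
            if i != j and causality[i, j] > threshold:
                g.add_edge(src, dst, weight=float(causality[i, j]))
    # PageRank on the reversed graph scores strong *sources* of transfer highly.
    scores = nx.pagerank(g.reverse(), weight="weight")
    return sorted(scores.items(), key=lambda kv: -kv[1])

m = np.array([[0.0, 0.5, 0.4], [0.05, 0.0, 0.3], [0.02, 0.06, 0.0]])
print(rank_by_influence(m, ["feed", "reactor", "column"]))  # "feed" ranks first
```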
Dissertation (MEng)--University of Pretoria, 2018.
Sasol
Chemical Engineering
MEng
Unrestricted
17

Oyenubi, Adeola. "Information theoretic measure of complexity and stock market analysis : using the JSE as a case study". Master's thesis, University of Cape Town, 2010. http://hdl.handle.net/11427/10967.

Abstract:
Includes bibliographical references (leaves 37-39).
Bozdogan [8] [6] [7] developed a new model selection criterion called the information measure of complexity (ICOMP). In contrast to Akaike's [1] information criterion (AIC) and other AIC-type criteria that are traditionally used for regression analysis, ICOMP takes into account the interdependencies of the parameter estimates. This paper is divided into two parts. In the first part we compare and contrast ICOMP with AIC and other AIC-type selection criteria for model selection in regression analysis involving stock market securities. In the second part we apply the definition of the information-theoretic measure of complexity to portfolio analysis: we compare the complexity of a portfolio of securities with its measure of diversification (PDI) and examine the similarities and differences between the two quantities as they affect portfolio management.
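To make the contrast concrete, here is a sketch of AIC next to one variant of Bozdogan's ICOMP for an ordinary least-squares regression; this simplified ICOMP penalises only the covariance of the coefficient estimates, whereas Bozdogan's ICOMP(IFIM) uses the full inverse Fisher information matrix, and the data below are synthetic:

```python
import numpy as np

def c1_complexity(cov):
    """Bozdogan's maximal information complexity of a covariance matrix."""
    s = cov.shape[0]
    return 0.5 * s * np.log(np.trace(cov) / s) - 0.5 * np.linalg.slogdet(cov)[1]

def aic_and_icomp(X, y):
    """AIC and a simplified ICOMP for OLS (X should include an intercept)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n                  # ML variance estimate
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    cov_beta = sigma2 * np.linalg.inv(X.T @ X)  # covariance of the estimates
    return -2 * loglik + 2 * (k + 1), -2 * loglik + 2 * c1_complexity(cov_beta)

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=100)
print(aic_and_icomp(X, y))
```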
18

Liginlal, Divakaran. "Building fuzzy front-end decision support systems for new product information in global telecommunication markets : A measure theoretical approach". Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/284835.

Abstract:
In today's highly competitive business environment, innovation and new product introduction are recognized as the sustaining forces of corporate success. The early phases of new product development, collectively known as the 'front end', are crucial to the success of new products. Building a fuzzy front-end decision support system, balancing the needs for analytical soundness and model robustness while incorporating the decision-maker's subjectivity and adaptability to different business situations, is a challenging task. A process model and a structural model focusing on the different forms of uncertainty involved in new product introduction in a global telecommunication market are presented in this dissertation. Fuzzy measure theory and fuzzy set theory are used to build a quantitative model of the executive decision process at the front end. Solutions to the problem of exponential complexity in defining fuzzy measures are also proposed. The notion of constrained fuzzy integrals demonstrates how the fuzzy measure-theoretical model integrates resource allocation in the presence of project interactions. Forging links between business strategies and expert evaluations of critical success factors is attempted through fuzzy rule-based techniques in the framework of the proposed model. Interviews with new product managers of several American business firms have confirmed the need for building an intelligent front-end decision support system for new product development. The outline of a fuzzy systems development methodology and the design of a proof-of-concept prototype serve as significant contributions of this research work toward this end. In the context of executive decision making, a usability inspection of the prototype is carried out and results are discussed. A computational analysis, based upon methods of tactical systems simulation, measures the rank-order consistency of the fuzzy measure-theoretical approach in comparison with two competing fuzzy multiple-attribute decision models under structural variations of the underlying models. The results demonstrate that (1) the modeling of the fuzzy numbers representing the linguistic variables, (2) the selection of the granularity of the linguistic scales, and (3) the selection of the model dimensions significantly affect the quality of the decisions suggested by the decision aid. A comprehensive plan for future validation of the decision aid is also presented.
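The fuzzy integrals at the heart of such a model aggregate criterion scores against a fuzzy measure that captures interactions between criteria. A minimal sketch of a discrete Choquet integral follows, with a hypothetical two-criterion measure exhibiting synergy (the dissertation's constrained variant adds resource-allocation constraints on top of this):

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of attribute scores w.r.t. a fuzzy measure.
    `mu` maps frozensets of attributes to [0, 1]; mu(empty)=0, mu(all)=1,
    and mu must be monotone under set inclusion."""
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending scores
    names = [k for k, _ in items]
    total, prev = 0.0, 0.0
    for i, (_, value) in enumerate(items):
        total += (value - prev) * mu[frozenset(names[i:])]  # attrs scoring >= value
        prev = value
    return total

mu = {frozenset(): 0.0, frozenset({"market"}): 0.3,
      frozenset({"tech"}): 0.4, frozenset({"market", "tech"}): 1.0}  # synergy
print(choquet_integral({"market": 0.8, "tech": 0.6}, mu))  # 0.66
```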
19

Schäwel, Johanna [Verfasser], and Nicole [Akademischer Betreuer] Krämer. "How to Raise Users’ Awareness of Online Privacy : An Empirical and Theoretical Approach for Examining the Impact of Persuasive Privacy Support Measures on Users’ Self-Disclosure on Online Social Networking Sites / Johanna Schäwel ; Betreuer: Nicole Krämer". Duisburg, 2019. http://d-nb.info/1199265012/34.

20

Hamrouni-Chtourou, Sameh. "Approches variationnelles statistiques spatio-temporelles pour l'analyse quantitative de la perfusion myocardique en IRM". PhD thesis, Institut National des Télécommunications, 2012. http://tel.archives-ouvertes.fr/tel-00814577.

Abstract:
Quantitative analysis of myocardial perfusion, i.e., estimating segmental perfusion indices and comparing them with normative values, is a major issue for the screening, treatment and follow-up of ischaemic cardiomyopathies, which are among the leading causes of death in Western countries. Over the last decade, perfusion magnetic resonance imaging (p-MRI) has become the preferred modality for non-invasive dynamic exploration of cardiac perfusion. p-MRI consists in acquiring time series of short-axis cardiac images at several slice levels along the long axis of the heart during the transit of a vascular contrast agent through the cardiac cavities and muscle. The resulting p-MRI exams exhibit strong nonlinear contrast variations and cardio-respiratory motion artefacts. Under these conditions, quantitative analysis of myocardial perfusion faces the difficult problems of registering and segmenting non-rigid cardiac structures in p-MRI exams. This thesis aims to automate the quantitative analysis of myocardial perfusion by developing an unsupervised computer-aided diagnosis tool dedicated to first-pass cardiac perfusion MRI, comprising four processing steps: (1) automatic selection of a region of interest centred on the heart; (2) non-rigid compensation of cardio-respiratory motion over the entire exam; (3) segmentation of the cardiac contours; (4) quantification of myocardial perfusion. Our answers to the challenges identified at each step revolve around a common idea: exploiting the information carried by the transit kinematics of the contrast agent through the tissues to discriminate anatomical structures and to guide the data registration process. The latter constitutes the central work of this thesis. Non-rigid image registration methods based on the optimisation of information measures are a reference in medical imaging. Their usual setting is the alignment of image pairs by statistical matching of luminance distributions, manipulated through their marginal and joint probability densities estimated with kernel methods. Effective for joint densities with well-separated classes, or reducible to simple mixtures, these approaches reach their limits for nonlinear mixtures in which pixel luminance is too crude an attribute to support a discriminative statistical decision, as well as for mono-modal data with nonlinear variations and for multi-modal data. This thesis introduces a generic mathematical model for multi-attribute/multi-view informational registration that addresses the identified challenges: (i) simultaneous alignment of the entire analysed p-MRI exam against an atlas, natural or synthetic, in which the heart is motionless, using the pixel-wise enhancement curves as a dense set of primitives; and (ii) the ability to incorporate high-dimensional composite image primitives, spatial or spatio-temporal. This model, available both in the classical Shannon framework and in the generalised Ali-Silvey framework, is based on new k-nearest-neighbour geometric estimators of information measures that are consistent in arbitrary dimension.
We study their variational optimisation by deriving analytical expressions for their gradients over finite- and infinite-dimensional spaces of regular spatial transformations, and by proposing efficient numerical and algorithmic gradient-descent schemes. This general-purpose model is then instantiated in the targeted medical setting, and its performance, notably in terms of accuracy and robustness, is evaluated within an experimental protocol that is both qualitative and quantitative.
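The k-nearest-neighbour estimators mentioned above generalise the classical Kozachenko-Leonenko entropy estimator, a minimal sketch of which follows (consistent in arbitrary dimension; the Gaussian test data are illustrative, not from the thesis):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(x, k=4):
    """Kozachenko-Leonenko k-NN estimate of differential entropy (nats).
    x has shape (n_samples, n_dims)."""
    x = np.asarray(x, float)
    n, d = x.shape
    r = cKDTree(x).query(x, k=k + 1)[0][:, -1]  # distance to k-th neighbour
    log_unit_ball = 0.5 * d * np.log(np.pi) - gammaln(0.5 * d + 1)
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(r))

x = np.random.default_rng(1).normal(size=(2000, 1))
print(knn_entropy(x), 0.5 * np.log(2 * np.pi * np.e))  # estimate vs true ~1.419
```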
21

KANDPAL, MANU. "PREDICTION OF CRITICAL RESIDUES IN PFEMP1 USING INFORMATION THEORETIC MEASURES". Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/15332.

Abstract:
PfEMP1 (Plasmodium falciparum erythrocyte membrane protein 1) is an important target for protective immunity and is implicated in the pathology of malaria through its ability to adhere to host endothelial receptors. PfEMP1 has specific domains that are important for its cytoadherence function. PfEMP1 binds to CD36, an 88 kDa glycoprotein found in several cell types, including platelets, monocytes, dendritic cells, and microvascular endothelial cells. This cytoadherence of PfEMP1 to the CD36 receptor is mediated by a specific domain called the CIDR1α domain. We hypothesize that the cytoadherence of CIDR1α to the CD36 receptor is facilitated by various conserved motifs, which may be targeted to disrupt the parasite's cytoadherence system. In-depth knowledge of the structure and function of the conserved motifs of CIDR1α is necessary for effective drug and vaccine design. Herein, we employ computational approaches to predict the fold-critical and functionally critical residues of the CIDR1α domain. For this, information-theoretic scores that are variants of relative entropy are calculated from a multiple sequence alignment (MSA), considering distinct physico-chemical properties. Residues of CIDR1α with high RE and CRE are predicted to be fold-critical and functionally significant, respectively.
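A minimal sketch of the per-column relative entropy computation described here, scored against a uniform background of amino-acid frequencies; the uniform background and the toy alignment are assumptions for illustration, and the thesis's property-specific variants would weight residues by physico-chemical classes:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
BACKGROUND = {a: 1 / len(AMINO_ACIDS) for a in AMINO_ACIDS}  # uniform assumption

def column_relative_entropy(msa):
    """Relative entropy (bits) of each MSA column against the background;
    high values flag conserved, potentially critical positions."""
    scores = []
    for col in zip(*msa):
        counts = {a: col.count(a) for a in set(col) if a in BACKGROUND}
        n = sum(counts.values()) or 1
        scores.append(sum((c / n) * np.log2((c / n) / BACKGROUND[a])
                          for a, c in counts.items()))
    return scores

msa = ["MKWVLA", "MKWVIA", "MKYVLS", "MKWVLA"]  # toy alignment rows
print([round(s, 2) for s in column_relative_entropy(msa)])
```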
22

Ganesh, Natesh. "Physical Information Theoretic Bounds on Energy Costs for Error Correction". 2011. https://scholarworks.umass.edu/theses/677.

Abstract:
With diminishing returns in performance from scaling of traditional transistor devices, there is a growing need to understand and improve potential replacement technologies. Sufficient reliability has not been established in these devices, and additional redundancy through the use of fault tolerance and error correction codes is necessary. There is a price to pay, in terms of energy and area, for this additional redundancy. It is of utmost importance to determine this energy cost and relate it to the increased reliability offered by the use of error correction codes. In this thesis, we have determined the lower bound for energy dissipation associated with error correction using a linear (n,k) block code. The bound obtained is implementation-independent, is derived from fundamental considerations, and allows for quantum effects in the channel and decoder. We have also developed information-theoretic efficacy measures that can quantify the performance of the error correction and their relationship to the corresponding energy cost.
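The prototype of such implementation-independent bounds is the Landauer limit, sketched here for orientation only; this is our reading, and the thesis's actual bound additionally accounts for the (n,k) code structure and for quantum effects in the channel and decoder:

```latex
E_{\text{dissipated}} \;\ge\; k_B T \ln 2 \cdot \Delta I ,
```

where ΔI is the number of bits of information irreversibly erased; in a block decoder, one would expect the syndrome information discarded during error correction to play this role.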
23

Roos, Anneliese. "The law of data (privacy) protection : a comparative and theoretical study". Thesis, 2003. http://hdl.handle.net/10500/1463.

Abstract:
In present-day society more and more personal information is being collected. The nature of the collection has also changed: more sensitive and potentially prejudicial information is collected. The advent of computers and the development of new telecommunications technology, linking computers in networks (principally the Internet) and enabling the transfer of information between computer systems, have made information increasingly important, and boosted the collection and use of personal information. The risks inherent in the processing of personal information are that the data may be inaccurate, incomplete or irrelevant, accessed or disclosed without authorisation, used for a purpose other than that for which they were collected, or destroyed. The processing of personal information poses a threat to a person's right to privacy. The right to identity is also infringed when incorrect or misleading information relating to a person is processed. In response to the problem of the invasion of the right to privacy by the processing of personal information, many countries have adopted "data protection" laws. Since the common law in South Africa does not provide adequate protection for personal data, data protection legislation is also required. This study is undertaken from a private law perspective. However, since privacy is also protected as a fundamental right, the influence of constitutional law on data protection is also considered. After analysing different foreign data protection laws and legal instruments, a set of core data protection principles is identified. In addition, certain general legal principles that should form the basis of any statutory data protection legislation in South Africa are proposed. Following an analysis of the theoretical basis for data protection in South African private law, the current position as regards data protection in South-Africa is analysed and measured against the principles identified. The conclusion arrived at is that the current South African acts can all be considered to be steps in the right direction, but not complete solutions. Further legislation incorporating internationally accepted data protection principles is therefore necessary. The elements that should be incorporated in a data protection regime are discussed.
Jurisprudence
LL. D. (Jurisprudence)
