Theses on the topic "Texture description"

For other types of publications on this topic, follow the link: Texture description.

Browse the 50 best theses for your research on the topic "Texture description".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and the bibliographic reference for the chosen source is generated automatically in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Siqueira, Fernando Roberti de, 1989-. « Multi-scale approaches to texture description = Abordagens multiescala para descrição de textura ». [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275604.

Full text
Abstract:
Advisors: Hélio Pedrini, William Robson Schwartz
Master's dissertation - Universidade Estadual de Campinas, Instituto de Computação
Computer vision and image processing techniques play an important role in several fields, including object detection and image classification, which are very important tasks with applications in medical imagery, remote sensing, forensic analysis, and skin detection, among others. These tasks strongly depend on visual information extracted from images that can be used to describe them efficiently. Texture is one of the main characteristics used to describe information such as spatial distribution, brightness, and the structural arrangement of surfaces. For image recognition and classification, a large set of texture descriptors was investigated in this work, of which only a small fraction is actually multi-scale. Gray-level co-occurrence matrices (GLCM) have been widely used in the literature and are known to be an effective texture descriptor. However, this descriptor only discriminates information at a single scale, that is, the original image. Scales can offer important information in image analysis, since texture can be perceived as different patterns at distinct scales. To that end, two different strategies for extending the GLCM to multiple scales are presented: (i) a Gaussian scale-space representation, constructed by smoothing the image with a low-pass filter, and (ii) an image pyramid, defined by sampling the image in both space and scale. This texture descriptor is evaluated against others on different data sets. The proposed texture descriptor is then applied in a skin-detection context as a means of improving the accuracy of the detection process. Experimental results demonstrate that the multi-scale GLCM extension yields remarkable improvements on the tested data sets, outperforming many other feature descriptors, including the original GLCM.
Master's degree in Computer Science (Mestre em Ciência da Computação)
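The pyramid-based strategy (ii) in the abstract above can be sketched in a few lines. The following is a simplified, hypothetical illustration only (a single horizontal offset and a crude 2×2 mean pyramid), not the thesis implementation, which also covers the Gaussian scale-space variant and richer offsets:

```python
def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for offset (dx, dy)."""
    h, w = len(img), len(img[0])
    m = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y][x]][img[y + dy][x + dx]] += 1
            total += 1
    return [[c / total for c in row] for row in m]

def downsample(img):
    """Halve resolution by averaging (and rounding) 2x2 blocks."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[round((img[2*y][2*x] + img[2*y][2*x+1] +
                    img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4)
             for x in range(w)] for y in range(h)]

def multiscale_glcm(img, levels, n_scales=2):
    """Concatenate flattened GLCMs computed at each pyramid level."""
    feats = []
    for _ in range(n_scales):
        feats.extend(v for row in glcm(img, levels) for v in row)
        img = downsample(img)
    return feats
```

Each per-scale matrix sums to 1, so the concatenated descriptor of `n_scales` levels sums to `n_scales`; comparing such descriptors (e.g. by histogram distance) is what a classifier would then consume.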
2

Brady, Karen. « A probabilistic framework for adaptive texture description ». Nice, 2003. http://www.theses.fr/2003NICE4048.

Full text
Abstract:
This thesis deals with the issue of texture description. We start from the fact that, in order to model texture accurately, one needs a probability distribution on a space of infinite images. From this we generate distributions on finite regions by marginalization. For a Gaussian distribution, the computational requirement of diagonalisation and the modelling requirement of adaptivity together lead naturally to adaptive wavelet-packet models, which capture the principal periodicities present in the textures and allow long-range correlations while preserving the independence of the wavelet-packet coefficients. The resulting models are used within two different segmentation schemes for the purposes of analysing mosaics of natural textures from the Brodatz album and high-resolution remote sensing images.
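A wavelet-packet decomposition, the building block this abstract refers to, can be illustrated with the simplest (Haar) filter pair. This is a generic sketch of the transform itself, not the adaptive, probabilistic models of the thesis:

```python
def haar_step(sig):
    """One Haar analysis step: (approximation, detail) coefficient lists."""
    a = [(sig[2*i] + sig[2*i+1]) / 2 for i in range(len(sig) // 2)]
    d = [(sig[2*i] - sig[2*i+1]) / 2 for i in range(len(sig) // 2)]
    return a, d

def wavelet_packet(sig, depth):
    """Full wavelet-packet tree: recursively split BOTH bands (a plain
    wavelet transform would recurse only on the approximation band)."""
    if depth == 0:
        return [sig]
    a, d = haar_step(sig)
    return wavelet_packet(a, depth - 1) + wavelet_packet(d, depth - 1)
```

An adaptive scheme like the one described above would prune this full tree, keeping only the subbands that best capture the texture's periodicities.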
3

Spann, Michael. « Texture description and segmentation in image processing ». Thesis, Aston University, 1985. http://publications.aston.ac.uk/8057/.

Full text
Abstract:
Textured regions in images can be defined as those regions containing a signal which has some measure of randomness. This thesis is concerned with the description of homogeneous texture in terms of a signal model and with developing a means of spatially separating regions of differing texture. A signal model is presented which is based on the assumption that a large class of textures can adequately be represented by their Fourier amplitude spectra only, with the phase spectra modelled by a random process. It is shown that, under mild restrictions, the above model leads to a stationary random process. Results indicate that this assumption is valid for those textures lacking significant local structure. A texture segmentation scheme is described which separates textured regions based on the assumption that each texture has a different distribution of signal energy within its amplitude spectrum. A set of bandpass quadrature filters is applied to the original signal and the envelope of the output of each filter taken. The filters are designed to have maximum mutual energy concentration in both the spatial and spatial-frequency domains, thus providing high spatial and class resolutions. The outputs of these filters are processed using a multi-resolution classifier, which applies a clustering algorithm to the data at a low spatial resolution and then performs a boundary estimation operation in which processing is carried out over a range of spatial resolutions. Results demonstrate a high performance, in terms of the classification error, for a range of synthetic and natural textures.
4

Ylioinas, J. (Juha). « Towards optimal local binary patterns in texture and face description ». Doctoral thesis, Oulun yliopisto, 2016. http://urn.fi/urn:isbn:9789526214498.

Full text
Abstract:
Local binary patterns (LBP) are among the most popular image description methods and have been successfully applied to a diverse set of computer vision problems, covering texture classification, material categorization, face recognition, and image segmentation, to name only a few. The popularity of the LBP methodology is reflected in the vast number of existing studies on its variations and extensions, and it is now acknowledged as one of the milestones in face recognition research. The starting point of this research is to gain a better understanding of the principles on which the original LBP descriptor is based. Building on that insight, improvements are proposed for several steps of the LBP pipeline, which consists of image pre-processing, pattern sampling, pattern encoding, binning, and further histogram post-processing. The main contribution of this thesis is a set of novel LBP extensions that partly unify some of the existing derivatives and extensions. The new LBP methodology is designed to be as data-driven as possible while minimizing the need for hand tuning. Prior to local binary pattern extraction, the thesis presents an image upsampling step dubbed image pre-interpolation. As a natural consequence of upsampling, a greater number of patterns can be extracted and binned into a histogram, improving the representational performance of the final descriptor. To improve the following two steps of the LBP pipeline, namely pattern sampling and encoding, three different learning-based methods are introduced. Finally, a unifying model is presented for the last step of the LBP pipeline, namely local binary pattern histogram post-processing. As a special case of this, a novel histogram smoothing scheme is proposed, which largely shares its motivation and effects with image pre-interpolation.
Deriving descriptors for face recognition problems such as face verification or age estimation remains among the most popular domains in which LBP has been applied, and this study is no exception: the main investigations and conclusions are drawn on the basis of how the proposed LBP variations perform in face recognition problems. The experimental part of the study demonstrates that the proposed methods, validated on publicly available texture and face datasets, yield results comparable to the best-performing LBP variants reported in the literature on the corresponding benchmarks.
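For reference, the original LBP descriptor that this pipeline starts from can be computed as follows. This is the textbook 8-neighbour formulation, a baseline sketch rather than any of the learned variants proposed in the thesis:

```python
def lbp_8(img, y, x):
    """Basic 8-neighbour local binary pattern code at pixel (y, x):
    each neighbour at least as bright as the centre contributes one bit."""
    c = img[y][x]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offs):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels --
    the descriptor that the post-processing steps above operate on."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_8(img, y, x)] += 1
    return hist
```

On a perfectly flat patch every neighbour ties with the centre, so every interior pixel produces the all-ones code 255; real textures spread mass over many bins, and that histogram is what gets smoothed and compared.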
5

Sitepu, Husinsyah. « March-type models for the description of texture in granular materials ». Thesis, Curtin University, 1998. http://hdl.handle.net/20.500.11937/2314.

Full text
Abstract:
Texture in crystalline materials, i.e. preferred orientation (PO), is of interest in terms of texture-property relationships and also in X-ray diffraction science, because PO can cause serious systematic errors in quantitative phase analysis using diffraction data. The single-parameter pole-density distribution function (PDDF) proposed by March (1932) to represent PO in diffraction analysis is used widely in Rietveld pattern-fitting following a suggestion by Dollase (1986). While the March model is an excellent descriptor of PO for gibbsite [Al(OH)3] X-ray powder diffraction (XRPD) data (O'Connor, Li and Sitepu, 1991), the model has proved to be deficient for Rietveld modelling with molybdite [MoO3], calcite [CaCO3] and kaolinite [Al2O3·2SiO2·2H2O] XRPD data (Sitepu, 1991; O'Connor, Li and Sitepu, 1992; Sitepu, O'Connor and Li, 1996). Therefore, the March model should not be regarded as a general-purpose PDDF descriptor. This study has examined the validity of the March model using XRPD and neutron powder diffraction (NPD) instruments operated, respectively, by the Curtin Materials Research Group in Perth and by the Australian Nuclear Science and Technology Organisation at the HIFAR reactor facility at Lucas Heights near Sydney. Extensive suites of XRPD and NPD data were measured for uniaxially-pressed powders of molybdite and calcite, for which the compression was systematically varied. It is clear from the various Rietveld refinements that the March model becomes increasingly unsatisfactory as the uniaxial pressure (and, therefore, the level of PO) increases. The March model has been tested with a physical relationship developed by the author which links the March r-parameter to the uniaxial pressure via the powder bulk modulus, B. The agreement between the results obtained from directly measured values of B and from Rietveld analysis with the March model is promising in terms of deducing the powder bulk modulus from the March r-parameter. An additional test of the March model was made with NPD data for specimens mounted, first, parallel to the instrument rotation axis and, then, normal to the axis. The results provide some further indication that the March model is deficient for the materials considered in the study. During the course of the study, it was found that there are distinct differences between the direction of the near-surface texture in calcite, as measured by XRPD, and the bulk texture characterised by NPD. The NPD-derived textures appear to be correct descriptions of the bulk material in uniaxially-pressed powders, whereas the XRPD textures are heavily influenced by the pressing procedure. An additional outcome of the NPD work was the discovery, made jointly with Dr Brett Hunter of ANSTO, that the popular LHPM Rietveld code did not allow for the inclusion of PO contributions from symmetry-equivalent reflections. Revision of the code by Dr Hunter showed that there is substantial bias in Rietveld-March r-parameters if these reflections are not factored correctly into the calculations. Finally, examination of pole-figure data has underlined the extent to which the March model oversimplifies the true distributions. It is concluded that spherical-harmonics modelling should be used rather than the March model as a general PO modelling tool.
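The single-parameter PDDF under discussion is the standard March-Dollase function, W(α) = (r² cos²α + r⁻¹ sin²α)^(-3/2), where α is the angle between the scattering vector and the PO axis and r is the March r-parameter (r = 1 means no preferred orientation). A minimal sketch of evaluating it:

```python
import math

def march_dollase(alpha, r):
    """March-Dollase pole density W(alpha) = (r^2 cos^2 a + sin^2 a / r)^(-3/2).
    r = 1 gives W = 1 everywhere (a randomly oriented powder)."""
    return (r**2 * math.cos(alpha)**2 + math.sin(alpha)**2 / r) ** -1.5
```

A useful property of this form is that it is self-normalizing: integrating W(α)·sin α over α in [0, π/2] gives exactly 1 for any r > 0, which is why a single refinable parameter suffices in Rietveld codes.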
6

Sitepu, Husinsyah. « March-type models for the description of texture in granular materials ». Curtin University of Technology, School of Physical Sciences, 1998. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=10543.

Full text
Abstract:
Texture in crystalline materials, i.e. preferred orientation (PO), is of interest in terms of texture-property relationships and also in X-ray diffraction science, because PO can cause serious systematic errors in quantitative phase analysis using diffraction data. The single-parameter pole-density distribution function (PDDF) proposed by March (1932) to represent PO in diffraction analysis is used widely in Rietveld pattern-fitting following a suggestion by Dollase (1986). While the March model is an excellent descriptor of PO for gibbsite [Al(OH)3] X-ray powder diffraction (XRPD) data (O'Connor, Li and Sitepu, 1991), the model has proved to be deficient for Rietveld modelling with molybdite [MoO3], calcite [CaCO3] and kaolinite [Al2O3·2SiO2·2H2O] XRPD data (Sitepu, 1991; O'Connor, Li and Sitepu, 1992; Sitepu, O'Connor and Li, 1996). Therefore, the March model should not be regarded as a general-purpose PDDF descriptor. This study has examined the validity of the March model using XRPD and neutron powder diffraction (NPD) instruments operated, respectively, by the Curtin Materials Research Group in Perth and by the Australian Nuclear Science and Technology Organisation at the HIFAR reactor facility at Lucas Heights near Sydney. Extensive suites of XRPD and NPD data were measured for uniaxially-pressed powders of molybdite and calcite, for which the compression was systematically varied. It is clear from the various Rietveld refinements that the March model becomes increasingly unsatisfactory as the uniaxial pressure (and, therefore, the level of PO) increases. The March model has been tested with a physical relationship developed by the author which links the March r-parameter to the uniaxial pressure via the powder bulk modulus, B. The agreement between the results obtained from directly measured values of B and from Rietveld analysis with the March model is promising in terms of deducing the powder bulk modulus from the March r-parameter. An additional test of the March model was made with NPD data for specimens mounted, first, parallel to the instrument rotation axis and, then, normal to the axis. The results provide some further indication that the March model is deficient for the materials considered in the study. During the course of the study, it was found that there are distinct differences between the direction of the near-surface texture in calcite, as measured by XRPD, and the bulk texture characterised by NPD. The NPD-derived textures appear to be correct descriptions of the bulk material in uniaxially-pressed powders, whereas the XRPD textures are heavily influenced by the pressing procedure. An additional outcome of the NPD work was the discovery, made jointly with Dr Brett Hunter of ANSTO, that the popular LHPM Rietveld code did not allow for the inclusion of PO contributions from symmetry-equivalent reflections. Revision of the code by Dr Hunter showed that there is substantial bias in Rietveld-March r-parameters if these reflections are not factored correctly into the calculations. Finally, examination of pole-figure data has underlined the extent to which the March model oversimplifies the true distributions. It is concluded that spherical-harmonics modelling should be used rather than the March model as a general PO modelling tool.
7

Esling, Claude Baro R. « Description de la texture des solides polycristallins et de leur déformation plastique ». Metz : Université de Metz, 2008. ftp://ftp.scd.univ-metz.fr/pub/Theses/1972/Esling.Claude.SMZ7202.pdf.

Full text
8

Wu, Jimin. « Description quantitative et modélisation de la texture d'un granite : granite de Guéret (France) ». Bordeaux 1, 1995. http://www.theses.fr/1995BOR10600.

Full text
Abstract:
The work presented consists of an analysis of the density, size, shape and spatial distribution of the main minerals (quartz, feldspars, biotite and muscovite) of the Guéret granite (France). After a review of image-analysis methods and techniques, three image-acquisition methods are developed: a macro-photographic method on thin sections, a direct-capture method under the microscope, and a semi-automatic method based on manual drawing of the texture. The measured parameters are subjected to a statistical analysis and to a critical analysis comparing the results obtained with each of the methods. The Weibull, Laplace-Gauss and Poisson laws are used, respectively, to model the size distribution, the shape and the spatial distribution of the minerals. A detailed analysis of the size and spatial distribution of biotite reveals a preferred orientation of the biotite parallel to a geometrically well-identified fracture family. The models of the spatial distribution of the minerals are analysed by fitting their experimental distribution to a Poisson law.
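Testing whether a spatial distribution of grains is consistent with a Poisson law is often done with quadrat counts: under complete spatial randomness the variance-to-mean ratio of the counts is close to 1. A minimal, hypothetical sketch of that check (not the thesis's actual fitting procedure):

```python
def dispersion_index(points, cell, nx, ny):
    """Variance-to-mean ratio of quadrat counts over an nx-by-ny grid of
    square cells of side `cell`: ~1 for Poisson (random) point patterns,
    < 1 for regular patterns, > 1 for clustered ones."""
    counts = [0] * (nx * ny)
    for x, y in points:
        counts[int(y // cell) * nx + int(x // cell)] += 1
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean
```

A perfectly regular pattern (one grain per cell) gives a ratio of 0, while piling all grains into one cell gives a ratio well above 1.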
9

Favier, Eric. « Contribution de l'analyse multi-résolution à la description des contours et des textures ». Saint-Etienne, 1994. http://www.theses.fr/1994STET4020.

Full text
Abstract:
This thesis is devoted to the multiresolution study of discrete contours and of grey-level texture images. The aim is to provide a description of these objects at different scales of analysis and to try to determine the scale or scales best suited to their analysis. The first part of this work concerns the study of binary images and, more specifically, of discrete contours. The notion of the scale of analysis of a contour is defined, together with algorithms for choosing it. For each contour, the scale or scales of analysis allowing it to be described optimally are determined. Algorithms are described for computing the curvature at each point of the contour and for determining dominant points as a function of the chosen scale. A definition of convexity is given as a function of the chosen scales, as well as the notion of the t-convex hull. A distance on the set of discrete contours is also presented: a convexity distance at a given scale of analysis, which allows two contours to be compared independently of their size on a convexity criterion. In addition, a link is drawn between these algorithms and the granulometry and opening operations known from mathematical morphology. The second part of this thesis addresses the study of multi-textured grey-level images; here too the aim is to describe these images as a function of the scale of analysis and to find good parameters for analysing this type of image. The methods are statistical, and the processes involved are linked to multiresolution approaches. A Gaussian model of texture images is presented. Each image is studied at different scales of analysis, and the choice of the best scale is addressed, leading to automatic methods for detecting zones of similar texture.
Results are presented for examples of multi-textured images, and an analysis of the results shows that our methods segment very satisfactorily certain images that cause problems for many existing algorithms.
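The idea of analysing a contour at a chosen scale can be illustrated with the simplest possible smoothing: averaging each contour point with its k neighbours on each side, with k playing the role of the scale parameter. This is a generic sketch, not the thesis's algorithms:

```python
def smooth_contour(pts, k):
    """Smooth a closed contour by replacing each point with the mean of
    itself and its k neighbours on each side; larger k = coarser scale."""
    n = len(pts)
    out = []
    for i in range(n):
        xs = [pts[(i + j) % n][0] for j in range(-k, k + 1)]
        ys = [pts[(i + j) % n][1] for j in range(-k, k + 1)]
        out.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return out
```

Because every original point contributes to exactly 2k+1 windows with equal weight, the centroid of the contour is preserved while sharp corners are progressively rounded off, which is what makes curvature and dominant-point estimates scale-dependent.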
10

Kellokumpu, V. P. (Vili-Petteri). « Vision-based human motion description and recognition ». Doctoral thesis, Oulun yliopisto, 2011. http://urn.fi/urn:isbn:9789514296758.

Full text
Abstract:
This thesis investigates vision-based description and recognition of human movements. Automated vision-based human motion analysis is a fundamental technology for creating video-based human-computer interaction systems, and because of its wide range of potential applications the topic has become an active area of research in the computer vision community. This thesis proposes the use of low-level descriptions of dynamics for human movement description and recognition. Two groups of approaches are developed: first, texture-based methods that extract dynamic features for human movement description, and second, a framework that considers ballistic dynamics for human movement segmentation and recognition. Two texture-based descriptions for human movement analysis are introduced. The first method uses temporal templates as a preprocessing stage and extracts a motion description using local binary pattern texture features. This approach is then extended to the spatiotemporal domain, and a dynamic-texture-based method that uses local binary patterns from three orthogonal planes is proposed. The method needs no accurate segmentation of silhouettes; rather, it is designed to work directly on image data. The dynamic-texture-based description is also applied to gait recognition. The proposed descriptions have been experimentally validated on publicly available databases. Psychological studies of human movement indicate that common movements such as reaching and striking are ballistic by nature. Based on these psychological observations, this thesis considers the segmentation and recognition of ballistic movements using low-level motion features. Experimental results on motion capture and video data show the effectiveness of the method.
11

Heurtier, Philippe. « Contribution de l'analyse d'images à l'exploitation des banques d'images de tissus par description automatique ». Saint-Etienne, 1996. http://www.theses.fr/1996STET4012.

Full text
Abstract:
The main subject of this thesis is the exploitation of image databases as knowledge bases. In order to obtain this complementary information, contained intrinsically in each image, classical image-analysis methods have been adapted. Until now, only a prior manual entry, long and tedious, would have made it possible to use this information; automating this image description turns out to be the only solution that is reliable, precise, and offers the necessary integrity. All of this information has been structured according to a nomenclature; the tree structure of this way of archiving information is very important when it is used in a database. The research on description concerns, first of all, the study of colours and graphic design on a textile surface (the importance of the weave for the appearance of the pattern). Then, given the volume of data to be analysed, classical multidimensional analysis methods were studied, with particular attention paid to the interpretation of the results of these statistical analyses. A method for ordering the fabrics of a database is presented, one of the many possible uses of these automatic description methods. At each step, our objective has been the industrial application of the developed algorithms, without losing sight of the theoretical aspect.
12

Jouini, Mohamed Soufiane. « Caractérisation des réservoirs basée sur des textures des images scanners de carottes ». Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13769/document.

Full text
Abstract:
Cores extracted during well drilling are essential data for reservoir characterization. A medical CT scanner is used for their acquisition; it provides high-resolution images that improve interpretation capacity. The main goal of the thesis is to establish links between these images and petrophysical data. Parametric texture modelling can be used to achieve this goal and should provide a reliable set of descriptors. A possible solution is to focus on parametric methods that allow synthesis. Even though this approach is not mathematically proven, it provides high confidence in the set of descriptors and allows them to be interpreted through synthetic textures. In this thesis, methods and algorithms were developed to achieve the following goals: 1. Segment the main representative texture zones on cores. This is achieved automatically by learning and classifying textures based on a parametric model. 2. Find links between scanner images and petrophysical parameters. This is achieved through calibration and prediction of petrophysical data from images (a supervised learning process).
13

Garnier, Mickaël. « Modèles descriptifs de relations spatiales pour l'aide au diagnostic d'images biomédicales ». Thesis, Paris 5, 2014. http://www.theses.fr/2014PA05S015/document.

Full text
Abstract:
Digital pathology has developed in recent years thanks to advances in image analysis algorithms and computing power. In particular, it increasingly relies on histology images. This data format has the particularity of revealing the biological objects sought by experts, using specific stains, while keeping the tissue architecture as intact as possible. Many computer-aided diagnosis methods based on these images have recently been developed to guide pathologists with quantitative measurements when establishing a diagnosis. The work presented in this thesis aims to address the challenges of histology image analysis and to develop a diagnostic-aid model based mainly on spatial relations, information that existing methods rarely exploit. A multi-scale texture analysis technique is first proposed to detect the presence of diseased tissue in the images. An object descriptor, named Force Histogram Decomposition (FHD), is then introduced to extract the shapes and spatial organisation of the regions defining an object. Finally, histology images are described by the FHDs measured on their different tissue types and on the stained biological objects they contain. Intermediate experiments showed that the FHDs correctly recognise objects on uniform backgrounds, including cases where the spatial relations are not expected to carry relevant information. Likewise, the texture analysis method proves satisfactory in two different types of medical applications, histology and fundus images, and its performance is demonstrated through a comparison with similar methods classically used for diagnostic aid.
Finally, the method as a whole was applied to diagnostic aid for grading cancer severity on two sets of histology images: one of metastatic mouse livers in the context of the ANR SPIRIT project, and the other of human breasts as part of the ICPR 2014 Nuclear Atypia challenge. The two-scale analysis of spatial relations and shapes correctly recognises metastatic cancer grades in 87.0% of cases and provides indications of the degree of nuclear atypia, which demonstrates the effectiveness of the method and the value of encoding spatial organisation in this particular type of image.
During the last decade, digital pathology has improved thanks to advances in image analysis algorithms and computing power. In particular, it is more and more based on histology images. This imaging modality has the advantage of showing only the biological objects targeted by the pathologists, using specific stains, while preserving the tissue structure as unharmed as possible. Numerous computer-aided diagnosis methods using these images have been developed in the past few years in order to assist medical experts with quantitative measurements. The studies presented in this thesis aim at addressing the challenges related to histology image analysis, as well as at developing an assisted-diagnosis model mainly based on spatial relations, information that currently used methods rarely exploit. A multiscale texture analysis is first proposed and applied to detect the presence of diseased tissue. A descriptor named Force Histogram Decomposition (FHD) is then introduced in order to extract the shapes and spatial organisation of regions within an object. Finally, histology images are described by the FHDs measured on their different types of tissue and also on the stained biological objects inside each type of tissue. Preliminary studies showed that the FHD is able to accurately recognise objects on uniform backgrounds, including when spatial relations are supposed to hold no relevant information. Besides, the texture analysis method proved satisfactory in two different medical applications, namely histology images and fundus photographs. The performance of these methods is highlighted by a comparison with the usual approaches in their respective fields. Finally, the complete method has been applied to assess the severity of cancers on two sets of histology images. The first one is given as part of the ANR project SPIRIT and presents metastatic mouse livers. The other one comes from the ICPR 2014 Nuclear Atypia challenge and contains human breast tissues. The analysis of spatial relations and shapes at two different scales achieves correct recognition of metastatic cancer grades in 87.0% of cases and gives insight into the nuclear atypia grade. This proves the efficiency of the method as well as the relevance of measuring spatial organisation in this particular type of image.
14

Chierici, Carlos Eduardo de Oliveira. « Classificação de texturas com diferentes orientações baseada em descritores locais ». Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/18/18152/tde-27102015-103555/.

Full text
Abstract:
Several approaches have been employed for texture description, among them fuzzy set theory and fuzzy logic. The Local Fuzzy Pattern (LFP) is a texture descriptor that differs from the other fuzzy-system-based methods in that it does not use linguistic rules, but rather fuzzy numbers, which are used to encode a local grey-scale pattern. Previous results indicated the LFP as an effective descriptor for classifying textures from rotated or non-rotated samples. This work proposes a more comprehensive analysis of its feasibility for each of these problems, and also proposes a modification to the descriptor, adapting it to capture patterns at multiple resolutions: the Sampled LFP. The performance of the LFP and the Sampled LFP on the texture classification problem was evaluated through a series of tests involving rotated and non-rotated image samples from the Outex, Brodatz album and VisTex databases, where the sensitivity obtained by these descriptors was compared with a reference descriptor, the Local Binary Pattern (LBP) variant best suited to the test at hand. The results showed that the LFP is not recommended for applications working exclusively with non-rotated samples, since the LBP proved more effective for this type of problem. For the analysis of rotated samples, the Sampled LFP proved to be the best of the compared descriptors. However, it was found that the Sampled LFP only outperforms the LBP at analysis resolutions of 32x32 pixels or larger, and that the former is more sensitive than the latter to the number of training samples; it is therefore a descriptor suited to the rotated-sample classification problem, whenever images of at least 32x32 pixels are available and the number of training samples can be maximised.
Several approaches have been employed for describing textures, including fuzzy set theory and fuzzy logic. The Local Fuzzy Pattern (LFP) is a texture descriptor that differs from other methods based on fuzzy systems, which use linguistic rules to codify a texture; instead, fuzzy numbers are applied in order to encode a local grayscale pattern. Previous results indicated the LFP as an effective descriptor for characterizing both statically oriented and rotated texture samples. This work proposes a more comprehensive analysis of its feasibility for each of these problems, besides proposing a modification to this descriptor, adapting it to capture patterns in multiresolution: the Sampled LFP. The performance evaluation of the LFP and the Sampled LFP on the texture classification problem was conducted by applying a series of tests involving image samples, rotated or not, from image databases such as Outex, the Brodatz album and VisTex, where the sensitivity obtained by these descriptors was compared with a reference descriptor, the Local Binary Pattern (LBP) variant best suited to the test at hand. The results indicated that the LFP is not suitable for applications that work exclusively with non-rotated samples, since the LBP showed greater efficacy for this kind of problem. As for the analysis of rotated samples, the Sampled LFP proved the best descriptor among those compared. However, it was determined that the Sampled LFP only overcomes the LBP when the analysis resolution is greater than or equal to 32x32 pixels; besides that, the former descriptor is more sensitive to the number of training samples than the latter. It is therefore indicated for the rotated-sample classification problem, whenever it is possible to work with resolutions from 32x32 pixels upward and to maximize the number of samples used for training.
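As context for the comparisons above, the basic 8-neighbour Local Binary Pattern used as the reference descriptor can be sketched as follows. This is a minimal illustrative version; the function name and the clockwise bit ordering are our own choices, not taken from the thesis:

```python
def lbp_code(patch):
    """Basic 8-neighbour Local Binary Pattern code for a 3x3 patch.

    Each neighbour is thresholded against the centre pixel; the
    resulting bits are packed clockwise starting at the top-left
    corner, giving a code in the range 0..255.
    """
    centre = patch[1][1]
    # Clockwise neighbour coordinates, starting at the top-left.
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r][c] >= centre:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # → 241 (bits 0, 4, 5, 6, 7 set)
```

A texture is then typically described by the histogram of these codes over all pixels; rotation-invariant and "uniform" LBP variants, as mentioned in the abstract, post-process the codes further.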
15

Langoni, Virgílio de Melo. « Novos descritores de texturas dinâmicas utilizando padrões locais e fusão de dados ». Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18152/tde-07112017-112730/.

Full text
Abstract:
In recent decades, dynamic textures, or temporal textures (textures with movement), have become objects of intense interest among researchers in digital image processing and computer vision. Several techniques have been developed, or refined, for feature extraction based on dynamic textures. In many cases these techniques combine two or more pre-existing methodologies that aim only at feature extraction, not at improving the quality of the extracted features. Moreover, when the features are "poor" in quality, the final result of the processing may show degraded performance. This work therefore proposes descriptors that extract dynamic features from video sequences and perform information fusion, seeking to increase overall performance in the segmentation and/or recognition of textures or moving scenes. Results obtained on two video databases show that the proposed descriptors, called D-LMP and D-SLMP, outperformed the literature descriptor to which they were compared, LBP-TOP. Besides presenting higher overall accuracy, precision and sensitivity rates, the proposed descriptors extract features in less time than LBP-TOP, which makes them more practical for most applications. Fusing data from regions with different dynamic characteristics increased the descriptors' performance, demonstrating that the technique can be applied not only to the classification of dynamic textures themselves, but also to the classification of general scenes in videos.
In the last decades, dynamic textures or temporal textures, which are textures with movement, have become objects of intense interest among researchers in the areas of digital image processing and computer vision. Several techniques have been developed, or refined, for feature extraction based on dynamic textures. These techniques, in several cases, are combinations of two or more pre-existing methodologies that aim only at feature extraction and not at improving the quality of the extracted features. Moreover, in cases where the features are "poor" in quality, the final result of the processing may show low performance. Thus, this work proposes descriptors that extract dynamic features from video sequences and perform information fusion, seeking to increase the overall performance in the segmentation and/or recognition of textures or moving scenes. The results obtained using two video databases show that the proposed descriptors, called D-LMP and D-SLMP, were superior to the literature descriptor to which they were compared, LBP-TOP. In addition to presenting higher overall accuracy, precision and sensitivity rates, the proposed descriptors extract features in a shorter time than the LBP-TOP descriptor, which makes them more practical for most applications. The fusion of data from regions with different dynamic characteristics increased the performance of the descriptors, thus demonstrating that the technique can be applied not only to the classification of dynamic textures themselves, but also to the classification of general scenes in videos.
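LBP-TOP, the baseline against which D-LMP and D-SLMP are compared, computes local binary patterns on three orthogonal planes of the video volume (XY for appearance, XT and YT for motion) and concatenates their histograms. A minimal sketch of extracting those planes follows; the function name and the nested-list volume layout are our assumptions for illustration:

```python
def orthogonal_planes(volume, t, y, x):
    """Return the XY, XT and YT slices of a T x H x W video volume
    (nested lists) passing through voxel (t, y, x): the three
    orthogonal planes on which LBP-TOP computes its patterns."""
    T, H, W = len(volume), len(volume[0]), len(volume[0][0])
    xy = [[volume[t][r][c] for c in range(W)] for r in range(H)]  # spatial appearance
    xt = [[volume[f][y][c] for c in range(W)] for f in range(T)]  # horizontal motion
    yt = [[volume[f][r][x] for r in range(H)] for f in range(T)]  # vertical motion
    return xy, xt, yt

# Toy 2-frame, 2x3 video: the hundreds digit encodes the frame index.
video = [[[100, 101, 102],
          [110, 111, 112]],
         [[200, 201, 202],
          [210, 211, 212]]]
xy, xt, yt = orthogonal_planes(video, 0, 1, 2)
print(xy)  # the full first frame
print(xt)  # row y=1 across both frames
print(yt)  # column x=2 across both frames
```

In the full descriptor, an LBP histogram is computed per plane and the three histograms are concatenated into one feature vector.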
16

Tania, Sheikh. « Efficient texture descriptors for image segmentation ». Thesis, Federation University Australia, 2022. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/184087.

Full text
Abstract:
Colour and texture are the most common features used in image processing and computer vision applications. Unlike colour, a local texture descriptor needs to express the unique variation pattern in the intensity differences of pixels in the neighbourhood of the pixel-of-interest (POI) so that it can sufficiently discriminate different textures. Since the descriptor needs spatial manipulation of all pixels in the neighbourhood of the POI, the approximation of texture impacts not only the computational cost but also the performance of the applications. In this thesis, we aim to develop novel texture descriptors, especially for hierarchical image segmentation techniques, which have recently gained popularity for their wide range of applications in medical imaging, video surveillance, autonomous navigation, and computer vision in general. To pursue this aim, we focus on reducing the length of the texture feature and on directly modelling the distribution of intensity variation in the parametric space of a probability density function (pdf). In the first contributory chapter, we enhance the state-of-the-art Weber local descriptor (WLD) by considering the mean value of neighbouring pixel intensities along radial directions instead of sampling pixels at three scales. Consequently, the proposed descriptor, named Radial Mean WLD (RM-WLD), is three-fold shorter than WLD and performs slightly better than WLD in hierarchical image segmentation. The statistical distributions of pixel intensities in different image regions are diverse by nature. In the second contributory chapter, we propose a novel texture feature, called 'joint scale,' by directly modelling the probability distribution of intensity differences. The Weibull distribution, one of the extreme value distributions, is selected for this purpose as it can represent a wide range of probability distributions with just two parameters.
In addition, the gradient orientation feature is calculated from all pixels in the neighbourhood with an extended Sobel operator, instead of using only the vertical and horizontal neighbours as in WLD. The length of the texture descriptor combining the joint scale and gradient orientation features remains the same as RM-WLD, but it exhibits significantly improved discrimination capability for better image segmentation. Initial regions in hierarchical segmentation play an important role in approximating texture features. Traditional arbitrary-shaped initial regions maintain the uniform colour property and thus may not retain the texture pattern of the segment they belong to. In the final contributory chapter, we introduce regular-shaped initial regions by enhancing the cuboidal partitioning technique, which has recently gained popularity in image/video coding research. Since the regions (cuboids) of cuboidal partitioning are rectangular, they do not follow the colour-based boundary adherence of traditional initial regions. Consequently, the cuboids retain sufficient texture pattern cues to provide better texture approximation and discriminating capability. We have used benchmark segmentation datasets and metrics to evaluate the proposed texture descriptors. Experimental results on benchmark metrics and computational time are promising when the proposed texture features are used in the state-of-the-art iterative contraction and merging (ICM) image segmentation technique.
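The Weber-style "differential excitation" underlying WLD, together with the radial-mean idea described above, can be sketched as follows. This is our reading of the abstract with illustrative names and a 5x5 neighbourhood; it is not the thesis implementation:

```python
import math

def radial_means(patch5):
    """Mean intensity along each of the 8 radial directions of a
    5x5 neighbourhood (two pixels per direction, centre excluded),
    a sketch of the radial-mean idea used by RM-WLD."""
    dirs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    means = []
    for dr, dc in dirs:
        vals = [patch5[2 + k * dr][2 + k * dc] for k in (1, 2)]
        means.append(sum(vals) / len(vals))
    return means

def differential_excitation(centre, neighbours, eps=1e-6):
    """Weber-style differential excitation: arctan of the summed
    neighbour-minus-centre differences over the centre intensity
    (eps guards against a zero centre pixel)."""
    total = sum(n - centre for n in neighbours)
    return math.atan(total / (centre + eps))

flat = [[8] * 5 for _ in range(5)]
print(radial_means(flat))                              # eight identical means
print(differential_excitation(8, radial_means(flat)))  # 0.0 on a flat patch
```

Replacing the eight sampled neighbours by eight radial means is what shortens the descriptor relative to sampling the neighbourhood at three separate scales.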
Doctor of Philosophy
17

Carkacioglu, Abdurrahman. « Texture Descriptors For Content-based Image Retrieval ». PhD thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/4/1035534/index.pdf.

Full text
Abstract:
Content Based Image Retrieval (CBIR) systems represent images in the database by color, texture, and shape information. In this thesis, we concentrate on texture features and introduce a new generic texture descriptor, namely, Statistical Analysis of Structural Information (SASI). Moreover, in order to increase the retrieval rates of a CBIR system, we propose a new method that can also adapt an image retrieval system into a configurable one without changing the underlying feature extraction mechanism and the similarity function. SASI is based on statistics of clique autocorrelation coefficients, calculated over structuring windows. SASI defines a set of clique windows to extract and measure various structural properties of texture by using a spatial multi-resolution method. Experimental results, performed on various image databases, indicate that SASI is more successful than the Gabor Filter descriptors in capturing small granularities and discontinuities such as sharp corners and abrupt changes. Due to the flexibility in designing the clique windows, SASI reaches higher average retrieval rates compared to Gabor Filter descriptors. However, the price of this performance is increased computational complexity. Since retrieving similar images of a given query image is a subjective task, it is desirable that the retrieval mechanism should be configurable by the user. In the proposed method, the original feature space of a content-based retrieval system is nonlinearly transformed into a new space, where the distance between the feature vectors is adjusted by learning. The transformation is realized by an Artificial Neural Network architecture. A cost function is defined for learning and optimized by the simulated annealing method. Experiments are done on the texture image retrieval system, which uses SASI and Gabor Filter features. The results indicate that the configured image retrieval system is significantly better than the original system.
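The clique autocorrelation coefficients at the heart of SASI can be illustrated with a plain normalised autocorrelation of a window at a given lag. This is a simplified sketch with names of our own choosing; SASI's actual clique windows and the statistics aggregated over them are more elaborate:

```python
def autocorrelation(window, dr, dc):
    """Normalised autocorrelation of a 2-D window at lag (dr, dc):
    the sum of mean-centred products between each pixel and its
    lagged counterpart, divided by the window's variance term."""
    n, m = len(window), len(window[0])
    vals = [v for row in window for v in row]
    mean = sum(vals) / len(vals)
    num = den = 0.0
    for r in range(n):
        for c in range(m):
            den += (window[r][c] - mean) ** 2
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < n and 0 <= c2 < m:
                num += (window[r][c] - mean) * (window[r2][c2] - mean)
    return num / den if den else 0.0

stripes = [[0, 1, 0, 1] for _ in range(4)]  # vertical stripes of period 2
print(autocorrelation(stripes, 0, 1))  # → -0.75, adjacent columns anti-correlated
print(autocorrelation(stripes, 0, 2))  # → 0.5, one full period apart
```

The sign and magnitude of such coefficients at different lags capture the sharp periodic structure that, per the abstract, Gabor filters tend to smooth over.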
18

Vieira, Raissa Tavares. « Descritores robustos à rotação de texturas baseados na abordagem LMP com acréscimo da informação de Magnitude e Sinal ». Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18152/tde-04102017-110333/.

Full text
Abstract:
Classification of texture images, especially those with significant changes in rotation, illumination, scale and viewpoint, is a fundamental and challenging problem in computer vision. This thesis proposes two simple yet efficient image descriptors, called Sampled Local Mapped Pattern Magnitude (SLMP_M) and Completed Local Mapped Pattern (CLMP), applied to texture classification. The proposed descriptors are enhancements of the Local Mapped Pattern (LMP) descriptor designed to work efficiently with rotated texture images. The proposed methods require a parameter pre-adjustment, performed with particle swarm optimisation, and are discriminative and robust for describing textures rotated at arbitrary angles. To validate the proposed descriptors, two image databases are used: the Kylberg Sintorn Rotation Dataset and the Brodatz Texture Rotation Dataset, a new database developed by the author, composed of rotated texture images from the Brodatz album. Both databases contain natural texture images that were rotated physically at capture time and rotated by computational procedures. The influence of interpolation methods on the image rotation process is also evaluated and compared across different descriptors from the literature. Five interpolation methods are investigated: Lanczos, B-spline, cubic, linear and nearest neighbour. Experimental results show that the descriptors proposed in this thesis outperform the Completed Local Binary Pattern (CLBP) descriptors, as well as the descriptors that combine the generalised version of Fourier features with variations of the Local Binary Pattern (LBP) descriptor: LBPDFT, ILBPDFT, LTPDFT and ILTPDFT. The results also show that the choice of interpolation method in the image rotation process influences recognition capability.
Texture image classification, especially with significant changes of rotation, illumination, scale and viewpoint, is a fundamental and challenging problem in the field of computer vision. This thesis proposes two simple but efficient image descriptors, called Sampled Local Mapped Pattern Magnitude (SLMP_M) and Completed Local Mapped Pattern (CLMP), applied to texture classification. The proposed descriptors are part of an enhancement of the Local Mapped Pattern (LMP) descriptor to work efficiently with rotated texture images. They require a parameter preset obtained by the particle swarm optimization method, and they are discriminating and robust for the description of textures rotated at arbitrary angles. For the validation of the proposed descriptors two image datasets are used: the Kylberg Sintorn Rotation Dataset and the Brodatz Texture Rotation Dataset, a newly introduced texture dataset containing rotated texture images from Brodatz's album. Both databases contain images of natural textures that have been rotated physically and by computational procedures. An evaluation of the influence of interpolation methods on the image rotation process is also presented and compared with different descriptors in the literature. Five interpolation methods are investigated: Lanczos, B-spline, cubic, linear and nearest neighbour. The experimental results show that the descriptors proposed in this thesis outperform the Completed Local Binary Pattern (CLBP) descriptors, as well as the descriptors that combine the generalised version of the Fourier features with variations of the Local Binary Pattern (LBP) descriptor: LBPDFT, ILBPDFT, LTPDFT and ILTPDFT. The results also show that the choice of interpolation method in the image rotation process influences recognition capability.
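Since the thesis finds that the interpolation used when rotating samples (Lanczos, B-spline, cubic, linear, nearest neighbour) affects recognition, the simplest of these, inverse-mapped nearest-neighbour rotation, can be sketched as follows. This is illustrative only; a real pipeline would typically use a library routine such as scipy.ndimage.rotate, whose order parameter selects the spline interpolation degree:

```python
import math

def rotate_nearest(img, angle_deg):
    """Rotate a square grayscale image (list of lists) about its
    centre using inverse mapping with nearest-neighbour
    interpolation; output pixels mapped from outside the image
    are set to 0."""
    n = len(img)
    a = math.radians(angle_deg)
    cx = (n - 1) / 2.0
    out = [[0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            # Inverse rotation: which source pixel lands on (r, c)?
            x = math.cos(a) * (c - cx) + math.sin(a) * (r - cx) + cx
            y = -math.sin(a) * (c - cx) + math.cos(a) * (r - cx) + cx
            sr, sc = round(y), round(x)
            if 0 <= sr < n and 0 <= sc < n:
                out[r][c] = img[sr][sc]
    return out

img = [[1, 0, 0],
       [0, 0, 0],
       [0, 0, 0]]
print(rotate_nearest(img, 90))  # the marked corner pixel moves to the opposite corner of its row
```

Nearest-neighbour rounding introduces jagged artefacts at arbitrary angles, which is exactly why the choice among the five interpolators changes the texture statistics a descriptor sees.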
19

Fraczak, Lidia. « Description d'itinéraires : de la référence au texte ». Paris 11, 1998. http://www.theses.fr/1998PA112240.

Full text
Abstract:
We propose a cognitive and discursive model of route description generation. Two processes are involved: determining the route and describing it. The interface between them is a referential representation of the route. Based on an analysis of route descriptions produced by humans, our contribution concerns more particularly the description process and the discursive knowledge on which it is founded. The description process consists of conceptual structuring and textual structuring, the corresponding principles being part of discursive knowledge. Our modelling of the conceptual and textual levels of processing, and of the interactions between them, makes it possible to account for discursive phenomena such as the distinction between explicit and implicit information, and the use of different verb forms (e.g. present and future tense) in a description. The model gave rise to a software application that automatically generates route descriptions in the metro.
20

Schumacher, Pol [Verfasser]. « Workflow Extraction from Textual Process Descriptions / Pol Schumacher ». München : Verlag Dr. Hut, 2016. http://d-nb.info/1100967893/34.

Full text
21

Li, Jie. « Description of Jersey knitted fabrics using image processing ». Thesis, Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/8595.

Full text
22

Hernandez, Nicolas. « Description et détection automatique de structures de texte ». Paris 11, 2004. http://www.theses.fr/2004PA112329.

Full text
Abstract:
Information retrieval systems are not suited to intra-document navigation (dynamic summarisation), yet such navigation is often necessary to assess the relevance of a document. Our work is set in a Semantic Web perspective. Our objective is to enrich documents in order to provide systems, or even the user directly, with information describing and organising document content. The descriptive information concerns, on the one hand, the identification of the thematic expressions of the discourse and, on the other hand, the identification of the type of semantic or rhetorical information contained in a given sentence (for example the presentation of the author's goal, the statement of a definition, the exposition of a result, etc.). Topic identification implements two distinct approaches, one based on anaphora resolution, the second on the construction of lexical chains. Regarding the identification of sentence information types, we propose a method for the automatic acquisition of meta-discursive markers. The detection of discourse organisation is addressed through two approaches. The first consists of a global top-down analysis of the text, combining segmentation by lexical cohesion with the detection of linguistic markers of the frame-introducer type (e.g. "Concerning X", "In Korea", "First of all", etc.). The second approach aims at a finer detection of discourse organisation by identifying informational dependency relations between sentences (subordination and coordination).
Information Retrieval (IR) systems are not well adapted for text browsing and visualization (dynamic summarization), yet this is often necessary for users to evaluate the relevance of a document. Our work follows a Semantic Web perspective. We aim at annotating documents with abstract information about content description and discourse organization in order to give IR systems more capabilities. Descriptive information concerns both topic identification and the semantic and rhetorical classification of text extracts (with information such as "Our aim is...", "This paper deals with..."). We implement a system to identify topical linguistic expressions based on a robust anaphora-resolution system and lexical chain building. We also propose a method to automatically acquire meta-discursive material. We perform the detection of text structure through two complementary approaches. The first offers a top-down analysis based on the segmentation provided by lexical cohesion and by linguistic markers such as frame introducers. The second concerns local text organization, through the detection of informational relations (coordination and subordination) between subsequent sentences.
23

Blanchard, Coralie. « Etude des facteurs influençant la structure et la texture de produits céréaliers alvéolés de cuisson semi-humide : une approche instrumentale et sensorielle de caractérisation de la texture ». Thesis, Dijon, 2014. http://www.theses.fr/2014DIJOS001/document.

Full text
Abstract:
Texture, the sensory manifestation of the structural, mechanical and surface properties of a material, is a key parameter in the evaluation of food products. It reflects their quality and freshness and influences consumer acceptance of the product, determining the intention to repurchase. In the scientific literature, most work on the texture of cereal products has studied food matrices such as bread or biscuits, while studies on cake-type products are rarer. The objective of this work is therefore to characterise the softness of a cake-type product, from its development to its evolution during storage, using instrumental and sensory methods. First, we studied the influence of the type of flour, the manufacturing process and product aeration on softness through instrumental and sensory methods. Instrumental characterisation of the soft products and their crumb structure was carried out by rheological measurements (texturometer, DMTA) and imaging (X-ray tomography). Sensory characterisation was conducted by establishing a sensory texture profile with a trained panel evaluating the appearance of the products and the sensations perceived by touch and in the mouth. Second, we studied the functional properties of the flours and their components in model and complex media using various physico-chemical methods (dough rheology, differential scanning calorimetry, microscopy, X-ray diffraction). Finally, the sensory and instrumental measurements were related through multiple factor analysis in order to determine instrumental methods capable of characterising the softness of cake-type products.
The results show that crumb aeration and flour composition are the key factors for softness in this type of product. Evaluating and selecting a product on the basis of its physico-chemical characteristics (elasticity, firmness, aeration) proves possible, given the stability of its texture over time, so that consumer acceptance can be anticipated as early as possible in the development process.
Since texture is the manifestation of the structural, mechanical and surface properties of a material, it represents a key characteristic of food materials. It reflects food quality and freshness perception, influencing consumer acceptance. Studies in the scientific literature devoted to the texture of cereal-based foods mostly concern bread, sometimes biscuits, and rarely cakes. This study, entitled 'study of the different factors influencing the structure and the texture of semi-humid baked aerated cereal products: sensory and instrumental dimensions of texture', focuses on the characterisation, development and evolution of cake softness. First, the influence of soft wheat flour origin, the making process and aeration properties on cake texture is investigated. Instrumental characterization of cake texture properties was performed through large-deformation testing using TPA and relaxation tests. Several approaches were used to determine cake crumb structure, including rheology, microscopy, image analysis and X-ray tomography. Sensory characterization of cake texture was achieved through a descriptive texture profile established with our trained panel. Second, we examine the functional properties of wheat flour and of its gluten and starch components, using physico-chemical methods including fluid rheology, differential scanning calorimetry, optical microscopy and X-ray powder diffraction. The results are discussed in terms of the physical and chemical changes that cake dough ingredients undergo during the making process. This investigation highlights that several parameters are substantially involved in the development of cake structure and the final texture perception. A suitable choice of flour (composition, component quality) and aeration management are critical factors for producing a product perceived as soft as possible.
Also, regarding the evolution of texture, it is possible to select one product over another at early development stages, allowing consumer acceptance to be anticipated.
Styles APA, Harvard, Vancouver, ISO, etc.
24

Homlong, Siri. « The Language of Textiles : Description and Judgement on Textile Pattern Composition ». Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis (AUU), 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7216.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
25

Youssef, Wael Farid. « Instanciation d'un schéma de description textuel de scènes de vidéo surveillance ». Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30249.

Texte intégral
Résumé :
Les systèmes de vidéosurveillance sont des outils importants pour les agences chargées de l'application de la loi dans la lutte contre la criminalité. Les salles de contrôle de la vidéosurveillance ont deux fonctions principales : surveiller en direct les zones de surveillance et résoudre les infractions en enquêtant dans les archives. Pour soutenir ces tâches difficiles, plusieurs solutions significatives issues des domaines de la recherche et du marché ont été proposées. Cependant, le manque de modèles génériques et précis pour la représentation du contenu vidéo fait de la construction d'un système intelligent et automatisé capable d'analyser et de décrire des vidéos une tâche ardue. De plus, le domaine d'application montre toujours un écart important entre le domaine de la recherche et les besoins réels, ainsi qu'un écart entre ces besoins réels et les outils d'analyse vidéo du marché. Par conséquent, jusqu'à présent dans les systèmes de surveillance conventionnels, la surveillance en direct et la recherche dans les archives reposent principalement sur des opérateurs humains. Cette thèse propose une nouvelle approche pour la description textuelle de contenus importants dans des scènes de vidéosurveillance, basée sur une nouvelle "ontologie VSSD" générique, sans contexte, centrée sur les interactions entre deux objets. L'ontologie proposée est générique, flexible et extensible, dédiée à la description de scènes de vidéosurveillance. Tout en analysant les différentes scènes vidéo, notre approche introduit de nombreux nouveaux concepts et méthodes concernant la médiation et l'action à distance, la description synthétique, ainsi qu'une nouvelle façon de segmenter la vidéo et de classer les scènes. Nous introduisons une nouvelle méthode heuristique de distinction entre les objets déformables et non déformables dans les scènes.
Nous proposons également des caractéristiques importantes pour une meilleure classification des interactions entre les objets vidéo, basée sur l'apprentissage, et une meilleure description. [...]
Surveillance systems are important tools for law enforcement agencies in fighting crime. Surveillance control rooms have two main duties: live monitoring of the surveillance areas, and crime solving by investigating the archives. To support these difficult tasks, several significant solutions from research and the market have been proposed. However, the lack of generic and precise models for video content representation makes building a fully automated, intelligent video analysis and description system a challenging task. Furthermore, the application domain still shows a big gap between the research field and real practical needs, as well as a gap between these real needs and on-market video analytics tools. Consequently, in conventional surveillance systems, live monitoring and investigating the archives still rely mostly on human operators. This thesis proposes a novel approach for textually describing important content in video surveillance scenes, based on a new generic, context-free "VSSD ontology", focused on interactions between two objects. The proposed ontology is generic, flexible and extensible, dedicated to video surveillance scene description. While analysing and understanding a variety of video scenes, our approach introduces many new concepts and methods concerning mediation and action at a distance, abstraction in the description, and a new way of categorizing scenes. It introduces a new heuristic to discriminate between deformable and non-deformable objects in the scenes. It also highlights and exports important features for better learning-based classification of interactions between video objects and for better description. These features, if used as key parameters in video analytics tools, are well suited to supporting surveillance system operators through alert generation and intelligent search.
Moreover, our system's outputs can support police incident reports, according to investigators' needs, with many types of automatic textual description based on new well-structured rule-based schemas or templates. [...]
Styles APA, Harvard, Vancouver, ISO, etc.
26

Wang, Josiah Kwok-Siang. « Learning visual recognition of fine-grained object categories from textual descriptions ». Thesis, University of Leeds, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.597096.

Texte intégral
Résumé :
This thesis investigates the task of learning visual object category recognition from textual descriptions. The work contributes primarily to the recognition of fine-grained object categories, such as animal and plant species, where it may be difficult to collect many images for training, but where textual descriptions are readily available, for example from online nature guides. The idea of using textual descriptions for fine-grained object category recognition is explored in three separate but related tasks. The first is the task of learning recognition of object categories solely from textual descriptions; no category-specific training images are used. Our proposed framework comprises three components: (i) natural language processing to build object category models from textual descriptions; (ii) visual processing to extract visual attributes from test images; (iii) a generative model connecting textual terms and visual attributes from images. As an 'upper bound' we also evaluate how well humans perform in a similar task. The proposed method was evaluated on a butterfly dataset as an example, performing substantially better than chance and, interestingly, comparably to non-native English speakers. The second task is an extension of the first. Here we focus on the problem of learning models for attribute terms (e.g. "orange bands") from a set of training classes disjoint from the test classes. Attribute models are learnt independently for each attribute term in a weakly supervised fashion from textual descriptions, and are used in conjunction with textual descriptions of the test classes to build probabilistic models for object category recognition. A modest accuracy was achieved with our method when evaluated on a butterfly dataset, although performance was substantially improved with some human supervision to combine similar attribute terms.
The third task explores how textual descriptions can be used to automatically harvest training images for each object category. Starting with just the category name, a textual description and no example images, web pages are gathered from search engines, and images are filtered based on how similar their surrounding texts are to the given textual description. The idea is that images in close proximity to texts similar to the textual description are more likely to be example images of the desired category. The proposed method is demonstrated on a set of butterfly categories, where images were successfully re-ranked based on their corresponding text blocks alone, with many categories achieving higher precision than their baselines at early stages of recall. The proposed approaches to exploiting textual descriptions, although still in their infancy, show potential for visual object recognition tasks, effectively reducing the amount of human supervision required for annotating images.
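The text-based filtering step described above can be sketched as a simple bag-of-words cosine ranking. This is a toy illustration of the general idea, not the thesis's actual similarity measure; the function names and tokenisation are assumptions.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two whitespace-tokenised texts."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_images(description, candidates):
    """Order (image_id, surrounding_text) pairs by how similar each
    image's surrounding text is to the category's textual description."""
    scored = sorted(candidates,
                    key=lambda item: cosine_similarity(description, item[1]),
                    reverse=True)
    return [image_id for image_id, _ in scored]
```

A real pipeline would use TF-IDF weighting and stemming, but the ranking principle (images near description-like text first) is the same.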
Styles APA, Harvard, Vancouver, ISO, etc.
27

Costa, Marcela de Rezende. « Processamento acelerado de presunto cru com uso de transglutaminase em carne desossada : perfis sensorial, colorimetrico e de textura em comparação com produtos tradicionais ». [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/255373.

Texte intégral
Résumé :
Orientadores: Pedro Eduardo de Felicio, Expedito Tadeu Facco Silveira
Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia de Alimentos
Made available in DSpace on 2018-08-04T02:37:23Z (GMT). No. of bitstreams: 1 Costa_MarceladeRezende_M.pdf: 744526 bytes, checksum: 179be5d7190c191d9c90cd41e69373b7 (MD5) Previous issue date: 2005
Resumo: O presunto cru é um produto fermentado tradicional e com longo período de maturação, apreciado particularmente nos países da Península Ibérica e Itália. No Brasil seu consumo é pequeno, devido ao preço relativamente elevado. Assim, foi realizado um trabalho visando o desenvolvimento de um produto com características similares aos presuntos crus tradicionais, mas com um tempo de processo menor e com provável redução de custos de produção. Através de avaliações sensoriais e instrumentais foram analisados 4 produtos tradicionais e 2 produtos elaborados no presente trabalho de pesquisa com 3,5% ou 5,0% de sal, denominados CTC 3,5% e CTC 5,0%. Foram realizadas medidas de cor do sistema Hunter Lab. As amostras se dividiram em dois grupos com relação à luminosidade e teor de vermelho: luminosidade - produtos mais claros (presuntos CTC) e produtos mais escuros (Serrano, Tipo Serrano, Italiano e Tipo Parma), e cor vermelha - presuntos com maior teor de vermelho (Italiano e Tipo Serrano) e presuntos com menor teor (CTC 3,5%, CTC 5,0%, Serrano e Tipo Parma). Na avaliação instrumental de textura através de TPA - Análise de Perfil de Textura, os produtos que mostraram perfis texturais mais distintos foram os presuntos do CTC em relação ao Italiano. Os presuntos CTC obtiveram os maiores valores de dureza, elasticidade, coesividade e mastigabilidade, enquanto o presunto Italiano apresentou os menores valores. Análise Descritiva Quantitativa (ADQ) foi aplicada para definição de atributos sensoriais de presuntos crus. Os dados foram analisados utilizando análise de variância, teste de Tukey e Análise de Componentes Principais (ACP). Na ADQ, 18 descritores foram utilizados para caracterizar as amostras. 
Os produtos se diferenciaram pelos seguintes atributos: CTC 3,5% - sabor mais ácido e menor intensidade de aroma e sabor de ranço, vermelho e suculência; CTC 5,0% - maior fibrosidade e menores notas de intensidade e persistência de sabor, e maciez; Serrano - maiores intensidades de aroma de ranço, vermelho, intensidade e persistência de sabor e menor sabor salgado; Tipo Serrano - maior intensidade de sabor de ranço e menor intensidade de sabor doce; Italiano - maiores intensidades de sabor salgado e maciez; Tipo Parma - sabor de carne, marmoreado e amarelo da gordura mais intensos. A ACP separou as amostras em dois grupos: (1) um formado pelos presuntos CTC 3,5%, Serrano e Italiano, caracterizado principalmente pelos atributos aroma e sabor de carne, aroma e sabor ácido, aroma e sabor doce e fibrosidade, e (2) outro formado pelos presuntos CTC 5,0%, Tipo Serrano e Tipo Parma, que se diferenciou do primeiro principalmente pelos descritores sabor salgado, marmoreado e maciez. As mesmas amostras foram avaliadas com relação a sua aceitação, utilizando escalas hedônicas de 9 pontos, e preferência pelos consumidores. Foram realizados dois testes afetivos: (1) um teste de aceitação onde as 6 amostras foram avaliadas e (2) um teste de aceitação e de preferência onde apenas os dois produtos elaborados no CTC foram avaliados. Os dados dos dois testes foram analisados utilizando análise de variância e teste de Tukey. Além disso, os dados de aceitação global do primeiro teste foram analisados através do Mapa de Preferência Interno (MDPREF). Os resultados dos testes afetivos mostraram que todos os tipos de presunto cru analisados obtiveram a maioria de suas notas da avaliação pelos consumidores entre gostei ligeiramente e gostei muitíssimo. De maneira geral, o presunto Tipo Serrano foi o produto que obteve o maior percentual de notas na faixa de aceitação da escala e o Serrano foi o que obteve o menor percentual. 
O MDPREF evidenciou quatro segmentações das amostras, em ordem decrescente de aceitação pelos consumidores: (1) um composto pelas amostras de presunto cru Italiano e Tipo Serrano, (2) um referente às duas amostras do CTC, (3) um formado pelo presunto Tipo Parma e (4) o último representado pelo presunto Serrano. Os dois produtos CTC (3,5 e 5,0%) apresentaram níveis similares de aceitação pelo consumidor e não diferiram (p>0,05) no teste de preferência pelos consumidores
Abstract: Dry-cured ham is a traditional fermented product with a long maturation period, appreciated particularly in the countries of the Iberian Peninsula and in Italy. In Brazil its consumption is small because of its relatively high price. Thus, this work aimed at developing a product with characteristics similar to traditional dry-cured hams, but with a shorter process time and a probable reduction in production costs. Through sensory and instrumental evaluations, 4 traditional products and 2 products elaborated in the present research, with 3.5% or 5.0% salt and named CTC 3.5% and CTC 5.0%, were analyzed. Color was measured with the Hunter Lab system. The samples divided into two groups with regard to lightness and red content: lightness - lighter products (CTC hams) and darker products (Serrano, Serrano Type, Italian and Parma Type); red color - hams with higher red content (Italian and Serrano Type) and hams with lower content (CTC 3.5%, CTC 5.0%, Serrano and Parma Type). In the instrumental evaluation of texture through TPA (Texture Profile Analysis), the products showing the most distinct texture profiles were the CTC hams relative to the Italian one. The CTC dry-cured hams had the highest values of hardness, springiness, cohesiveness and chewiness, while the Italian ham presented the lowest values. Quantitative Descriptive Analysis (QDA) was applied to define the sensory attributes of dry-cured ham. The data were analyzed using analysis of variance, Tukey's test and Principal Component Analysis (PCA). In the QDA, 18 descriptors were used to characterize the samples.
The products were differentiated by the following attributes: CTC 3.5% - more acid flavor and lower intensity of rancid odor and flavor, redness and juiciness; CTC 5.0% - higher fibrousness and lower scores for flavor intensity and persistence, and softness; Serrano - higher intensities of rancid odor, redness, flavor intensity and persistence, and less salty flavor; Serrano Type - higher intensity of rancid flavor and less sweet flavor; Italian - highest salty flavor and softness; Parma Type - most intense meaty flavor, marbling and fat yellowness. PCA separated the samples into two groups: (1) one formed by the CTC 3.5%, Serrano and Italian dry-cured hams, characterized mainly by the attributes meaty odor and flavor, acid odor and flavor, sweet odor and flavor, and fibrousness; and (2) another formed by the CTC 5.0%, Serrano Type and Parma Type hams, differentiated from the first mainly by the descriptors salty flavor, marbling and softness. The same samples were evaluated for consumer acceptance, using 9-point hedonic scales, and for preference. Two affective tests were carried out: (1) an acceptance test in which the 6 samples were evaluated, and (2) an acceptance and preference test in which only the two products elaborated at the CTC were evaluated. The data from the two tests were analyzed using analysis of variance and Tukey's test. Moreover, the overall acceptance data from the first test were analyzed through an Internal Preference Map (MDPREF). The results of the affective tests showed that all the types of dry-cured ham analyzed received the majority of their consumer scores between 'liked slightly' and 'liked very much'. In general, the Serrano Type ham obtained the highest percentage of scores in the acceptance region of the scale and the Serrano the lowest.
The MDPREF revealed four sample segments, in decreasing order of consumer acceptance: (1) one composed of the Italian and Serrano Type dry-cured hams, (2) one corresponding to the two CTC samples, (3) one formed by the Parma Type ham, and (4) the last represented by the Serrano ham. The two CTC products (3.5% and 5.0%) presented similar levels of consumer acceptance and did not differ (p>0.05) in the consumer preference test.
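The TPA parameters reported above (hardness, springiness, cohesiveness, chewiness) follow standard two-bite definitions that can be sketched from the force curve. This is a simplified illustration of the textbook formulas, not the exact instrument protocol used in the study; the springiness approximation in particular is an assumption.

```python
import numpy as np

def tpa_parameters(force1, force2):
    """Classic two-bite Texture Profile Analysis parameters.

    force1/force2: force samples of the first and second compression
    cycles, assumed uniformly sampled at the same rate.
    """
    f1 = np.asarray(force1, dtype=float)
    f2 = np.asarray(force2, dtype=float)
    hardness = f1.max()                 # peak force of the first bite
    area1, area2 = f1.sum(), f2.sum()   # crude integrals (unit time step)
    cohesiveness = area2 / area1        # resistance to a second deformation
    # springiness approximated here by the ratio of times-to-peak of the
    # two compressions (instruments use the recovered-height definition)
    springiness = (np.argmax(f2) + 1) / (np.argmax(f1) + 1)
    chewiness = hardness * cohesiveness * springiness
    return {"hardness": hardness, "cohesiveness": cohesiveness,
            "springiness": springiness, "chewiness": chewiness}
```

With such definitions, a harder, springier, more cohesive sample (like the CTC hams above) scores higher on chewiness, since chewiness is the product of the other three.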
Mestrado
Tecnologia de Alimentos
Mestre em Tecnologia de Alimentos
Styles APA, Harvard, Vancouver, ISO, etc.
28

JUNIOR, FERNANDO ALBERTO CORREIA DOS SANTOS. « AUTOMATIC GENERATION OF EXAMPLES OF USE FROM THE TEXTUAL DESCRIPTION OF USE CASES ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2017. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=30736@1.

Texte intégral
Résumé :
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
Esta dissertação apresenta uma solução que permite a geração automática de exemplos de uso a partir da descrição textual de casos de uso. Os casos de uso descrevem especificações em um nível de formalização suficiente para a geração dos exemplos. Um exemplo gerado é um texto em linguagem natural que é o resultado da paráfrase de um possível comportamento do software, extraído de um caso de uso e aplicado a um contexto real, em que atores são convertidos em personagens fictícios e os atributos são valorados de acordo com as regras de negócios especificadas no caso de uso. O formato proposto para a construção de exemplos tem como objetivo permitir que clientes possam ler, entender e julgar se o comportamento que está sendo proposto é o desejado. Com isso é esperado que o próprio cliente possa validar as especificações e que, quando defeitos forem encontrados, a especificação possa logo ser corrigida e refletida de volta nos exemplos. Ao mesmo tempo a especificação formalizada na forma de um caso de uso auxiliará desenvolvedores a criar soluções mais próximas do correto por construção, quando comparado com especificações textuais convencionais.
This master's dissertation presents a solution for the automatic generation of examples of use from the textual description of use cases. Use cases describe specifications in a sufficiently formal way to allow usage examples to be generated automatically. A generated example is a natural-language text that paraphrases one possible way of using the software, extracted from the use case and applied to a real context in which actors are converted into fictitious personas and attributes are given values according to the business rules specified in the use case. The proposed format for presenting the examples aims to allow clients to read, understand and judge whether the expressed behavior is in fact what they want. With this approach, it is expected that customers themselves can approve the specifications and that, when defects are found, the specification can quickly be corrected and reflected back in the examples. At the same time, the specification formalized as a use case will help developers create solutions that are, by construction, closer to the correct one compared to conventional textual specifications.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Nguyen, Vu Lam. « Approches complémentaires pour une classification efficace des textures ». Thesis, Cergy-Pontoise, 2018. http://www.theses.fr/2018CERG0974/document.

Texte intégral
Résumé :
Dans cette thèse, nous nous intéressons à la classification des images de textures sans aucune connaissance a priori sur les conditions de numérisation. Cette classification selon des définitions pré-établies de matériaux repose sur des algorithmes qui extraient des descripteurs visuels. A cette fin, nous introduisons tout d'abord une variante de descripteurs par motifs binaires locaux (Local Binary Patterns). Dans cette proposition, une approche statistique est suivie pour représenter les textures statiques. Elle incorpore la quantité d'information complémentaire des niveaux de gris des images dans des opérateurs basés LBP. Nous avons nommé cette nouvelle méthode "Completed Local Entropy Binary Patterns" (CLEBP). CLEBP capture la distribution des relations entre les mesures statistiques des données aléatoires d'une image, l'ensemble étant calculé pour tous les pixels au sein d'une structure locale. Sans la moindre étape préalable d'apprentissage, ni de calibration automatique, les descriptions CLEBP contiennent à la fois des informations locales et globales des textures, tout en étant robustes aux variations externes. En outre, nous utilisons le filtrage inspiré par la biologie, ou biologically-inspired filtering (BF), qui simule la rétine humaine, via une phase de prétraitement. Nous montrons que notre approche est complémentaire des LBP conventionnels, et que les deux combinées offrent de meilleurs résultats que chacune des deux méthodes seule. Les résultats expérimentaux sur quatre bases de textures, Outex, KTH-TIPS-2b, CUReT et UIUC, montrent que notre approche est plus performante que les méthodes actuelles. Nous introduisons également un cadre formel basé sur une combinaison de descripteurs pour la classification de textures. Au sein de ce cadre, nous combinons des descripteurs LBP invariants en rotation et en échelle, et de faible dimension, avec les réseaux de dispersion, ou scattering networks (ScatNet). Les résultats expérimentaux montrent que l'approche proposée est
capable d'extraire des descripteurs riches à de nombreuses orientations et échelles. Les textures sont modélisées par une concaténation des codes LBP et des valeurs moyennes des coefficients ScatNet. Nous proposons également d'utiliser le filtrage inspiré par la biologie (BF) pour améliorer la résistance des descripteurs LBP. Nous démontrons par l'expérience que ces nouveaux descripteurs présentent de meilleurs résultats que les approches usuelles de l'état de l'art. Ces résultats sont obtenus sur des bases réelles qui contiennent de nombreuses classes avec des variations significatives. Nous proposons aussi un nouveau réseau conçu par expertise, appelé réseau de convolution normalisée (normalized convolution network). Celui-ci est inspiré du modèle des ScatNet, auquel deux modifications ont été apportées. La première repose sur l'utilisation de la convolution normalisée en lieu et place de la convolution standard. La deuxième propose de remplacer le calcul de la valeur moyenne des coefficients du réseau par une agrégation avec la méthode des vecteurs de Fisher. Les expériences montrent des résultats compétitifs sur de nombreuses bases de textures. Enfin, tout au long de cette thèse, nous avons montré par l'expérience qu'il est possible d'obtenir de très bons résultats de classification en utilisant des techniques peu coûteuses en ressources.
This thesis investigates complementary approaches to classifying texture images. It begins by proposing a Local Binary Pattern (LBP) variant for efficient texture classification. In this method, a statistical approach to static texture representation is developed that incorporates the complementary quantity of information in image intensity into LBP-based operators. We name our LBP variant 'the completed local entropy binary patterns (CLEBP)'. CLEBP captures the distribution of the relationships between statistical measures of image data randomness, calculated over all pixels within a local structure. Without any pre-learning process or additional parameters to be learned, the CLEBP descriptors convey both global and local information about texture while being robust to external variations. Furthermore, we use biologically-inspired filtering (BF), which simulates the behavior of the human retina, as a preprocessing technique. It is shown that our approach and conventional LBP have complementary strengths and that combining them yields better results than either considered separately. Experimental results on four large texture databases show that our approach is more efficient than contemporary ones. We then introduce a feature-combination framework for texture classification. In this framework, we combine Local Binary Pattern (LBP) features with low-dimensional, rotation- and scale-invariant counterparts from the handcrafted scattering network (ScatNet). The experimental results show that the proposed approach is capable of extracting rich features at multiple orientations and scales. Textures are modeled by concatenating the histogram of LBP codes and the mean values of the ScatNet coefficients. We then propose using the biologically-inspired filtering (BF) preprocessing technique to enhance the robustness of the LBP features.
We have demonstrated by experiment that the features extracted with the proposed framework achieve superior performance compared to their traditional counterparts when benchmarked on real-world databases containing many classes with significant imaging variations. In addition, we propose a novel handcrafted network called the normalized convolution network. It is inspired by the ScatNet model with two important modifications. First, normalized convolution substitutes for standard convolution in the ScatNet model to extract richer texture features. Second, instead of using the mean values of the network coefficients, the Fisher vector is exploited as an aggregation method. Experiments show that the proposed network obtains competitive classification results on many difficult texture benchmarks. Finally, throughout the thesis, we have shown by experiment that the proposed approaches achieve good classification results with low resource requirements.
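For reference, the LBP family of descriptors that the CLEBP variant builds on starts from the basic (P=8, R=1) operator: each pixel is encoded by thresholding its 8 neighbours against it, and the image is described by the histogram of those codes. A minimal sketch in plain NumPy (the basic operator only, not the CLEBP variant; function names are illustrative):

```python
import numpy as np

def lbp_8_1(image):
    """Basic 8-neighbour, radius-1 LBP code for each interior pixel."""
    img = np.asarray(image, dtype=np.int32)
    center = img[1:-1, 1:-1]
    # 8 neighbours, enumerated clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        # set the bit when the neighbour is at least as bright as the center
        codes |= (neigh >= center).astype(np.int32) << bit
    return codes

def lbp_histogram(image):
    """256-bin normalised histogram of LBP codes, used as the descriptor."""
    codes = lbp_8_1(image)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Because the code depends only on sign comparisons, the descriptor is invariant to monotonic illumination changes, which is one reason LBP variants are so widely used for texture.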
Styles APA, Harvard, Vancouver, ISO, etc.
30

Romero, Mier y. Teran Andrés. « Real-time multi-target tracking : a study on color-texture covariance matrices and descriptor/operator switching ». Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-01002065.

Texte intégral
Résumé :
Visual recognition is the problem of learning visual categories from a limited set of samples and identifying new instances of those categories; the problem is often separated into two types: the specific case and the generic category case. In the specific case the objective is to identify instances of a particular object, place or person, whereas in the generic category case we seek to recognize different instances that belong to the same conceptual class: cars, pedestrians, road signs and mugs. Specific object recognition works by matching and geometric verification. In contrast, generic object categorization often includes a statistical model of appearance and/or shape. This thesis proposes a computer vision system for detecting and tracking multiple targets in videos. A preliminary part of this thesis consists of adapting color representation according to lighting variations and the relevance of the color. The literature shows a wide variety of tracking methods, each with advantages and limitations depending on the object to track and the context. Here, a deterministic method is developed to automatically adapt the tracking method to the context through the cooperation of two complementary techniques. A first proposition combines covariance matching, which models color-texture characteristics, with the optical flow (KLT) of a set of points uniformly distributed on the object. A second technique associates covariance matching with Mean-Shift. In both cases, the cooperation provides good tracking robustness whatever the nature of the target, while reducing overall execution times. The second contribution is the definition of descriptors that are both discriminative and compact, to be included in the target representation. To improve the visual recognition ability of the descriptors, two approaches are proposed. The first is an adaptation of LBP (Local Binary Patterns) operators for inclusion in the covariance matrices.
This method is called ELBCM, for Enhanced Local Binary Covariance Matrices. The second approach is based on the analysis of different color spaces and invariants to obtain a descriptor that is discriminative and robust to illumination changes. The third contribution addresses the problem of multi-target tracking, whose difficulties are matching ambiguities, occlusions, and the merging and splitting of trajectories. Finally, to speed up the algorithms and provide a quick, usable solution for embedded applications, this thesis proposes a series of optimizations to accelerate matching using covariance matrices. Data-layout transformations, vectorizing the calculations (using SIMD instructions) and some loop transformations made possible the real-time execution of the algorithm not only on classic Intel platforms but also on embedded ones (ARM Cortex A9 and Intel U9300).
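The region covariance construction behind such descriptors can be illustrated in a few lines: every pixel of a region is mapped to a feature vector and the region is summarised by the covariance matrix of those vectors. A minimal sketch with an assumed grayscale feature set (x, y, intensity, gradient magnitudes), not the ELBCM color-texture features of the thesis:

```python
import numpy as np

def region_covariance(patch):
    """5x5 covariance descriptor of a grayscale region.

    Each pixel contributes the feature vector (x, y, I, |Ix|, |Iy|);
    the region is described by the covariance of these vectors.
    """
    patch = np.asarray(patch, dtype=float)
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]          # pixel coordinates
    iy, ix = np.gradient(patch)          # first derivatives (axis 0, axis 1)
    feats = np.stack([xs.ravel().astype(float), ys.ravel().astype(float),
                      patch.ravel(), np.abs(ix).ravel(), np.abs(iy).ravel()])
    return np.cov(feats)                 # symmetric positive semi-definite
```

Matching such descriptors is typically done with a metric on the manifold of positive-definite matrices (e.g. a generalized-eigenvalue or log-Euclidean distance) rather than a plain Euclidean one, which is also where the SIMD optimizations mentioned above pay off.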
Styles APA, Harvard, Vancouver, ISO, etc.
31

Tremblay, Steeve. « Le vocabulaire de l'environnement aquatique : élaboration d'un corpus textuel et amorce de description lexicale ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ61841.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
32

Tremblay, Steeve. « Le vocabulaire de l'environnement aquatique : élaboration d'un corpus textuel et amorce de description lexicale ». Sherbrooke : Université de Sherbrooke, 2000.

Trouver le texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
33

Loescher, Eleonore. « Evaluations instrumentales et sensorielles de la texture de produits alimentaires de type semi-liquide : application aux cas de fromages blancs et de compotes ». Massy, ENSIA, 2003. http://www.theses.fr/2003EIAA0128.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
34

Leopold, Henrik, der Aa Han van, Fabian Pittke, Manuel Raffel, Jan Mendling et Hajo A. Reijers. « Searching textual and model-based process descriptions based on a unified data format ». Springer Berlin Heidelberg, 2019. http://dx.doi.org/10.1007/s10270-017-0649-y.

Texte intégral
Résumé :
Documenting business processes using process models is common practice in many organizations. However, not all process information is best captured in process models. Hence, many organizations complement these models with textual descriptions that specify additional details. The problem with this supplementary use of textual descriptions is that existing techniques for automatically searching process repositories are limited to process models. They are not capable of taking the information from textual descriptions into account and, therefore, provide incomplete search results. In this paper, we address this problem and propose a technique that is capable of searching textual as well as model-based process descriptions. It automatically extracts activity-related and behavioral information from both description types and stores it in a unified data format. An evaluation with a large Austrian bank demonstrates that the additional consideration of textual descriptions allows us to identify more relevant processes from a repository.
Styles APA, Harvard, Vancouver, ISO, etc.
35

Figl, Kathrin, et Jan Recker. « Process Innovation as Creative Problem-Solving : An Experimental Study of Textual Descriptions and Diagrams ». Elsevier, 2016. http://dx.doi.org/10.1016/j.im.2016.02.008.

Texte intégral
Résumé :
The use of process models to support business analysts' idea-generation tasks has been a long-standing topic of interest in process improvement. We examine how two types of representations of organizational processes - textual and diagrammatic - assist analysts in developing innovative solutions to process-redesign tasks. The results of our study clarify the types of process-redesign ideas generated by analysts who work with text versus those who work with models. We find that the volume and originality of process-redesign ideas do not differ significantly but that appropriateness of ideas varies. We discuss the implications of these findings for research and practice in process improvement.
APA, Harvard, Vancouver, ISO styles, etc.
36

Ottensooser, Avner, Alan Fekete, Hajo A. Reijers, Jan Mendling et Con Menictas. « Making Sense of Business Process Descriptions : An Experimental Comparison of Graphical and Textual Notations ». Elsevier, 2012. http://dx.doi.org/10.1016/j.jss.2011.09.023.

Full text
Abstract:
How effective is a notation in conveying the writer's intent correctly? This paper identifies understandability of design notations as an important aspect which calls for an experimental comparison. We compare the success of university students in interpreting business process descriptions, for an established graphical notation (BPMN) and for an alternative textual notation (based on written use-cases). Because a design must be read by diverse communities, including technically-trained professionals such as developers and business analysts, as well as end-users and stakeholders from a wider business setting, we used different types of participants in our experiment. Specifically, we included those who had formal training in process description, and others who had not. Our experiments showed significant increases by both groups in their understanding of the process from reading the textual model. This was not so for the graphical model, where only the trained readers showed significant increases. This finding points at the value of educating readers of graphical descriptions in that particular notation when they become exposed to such models in their daily work.
APA, Harvard, Vancouver, ISO styles, etc.
37

Brown, Marissa. « Sensory characteristics and classification of commercial and experimental plain yogurts ». Thesis, Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/4114.

Full text
APA, Harvard, Vancouver, ISO styles, etc.
38

ZEN, Ja Hu, et Kanenori SUWA. « Autoclastic subvolcanic rocks in the Tonglu basin, Zhejiang Province, China : a description of "pearlitic border" textures in an adamellite porphyry ». Dept. of Earth and Planetary Sciences, Nagoya University, 2002. http://hdl.handle.net/2237/2855.

Full text
APA, Harvard, Vancouver, ISO styles, etc.
39

Gautier-Dalché, Patrick. « La "Description mappe mundi" de Hugues de Saint-Victor texte inédit avec introduction et commentaire ». Lille 3 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb375978797.

Full text
APA, Harvard, Vancouver, ISO styles, etc.
40

PINTO, THIAGO DELGADO. « A TOOL FOR THE AUTOMATIC GENERATION AND EXECUTION OF FUNCTIONAL TESTS BASED ON THE TEXTUAL USE CASE DESCRIPTION ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2013. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=24924@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
CENTRO FEDERAL DE EDUCAÇÃO TECNOLÓGICA CELSO SUCKOW FONSECA
This master's dissertation presents a solution for the automatic generation and execution of functional tests based on textual use case descriptions, aiming to verify whether a given application meets the functional requirements defined by this documentation. The constructed tool is capable of generating valued semantic test cases, transforming them into source code (for Java Swing and the TestNG and FEST frameworks, in the current version), executing them, collecting the results, and analyzing whether the application's use cases meet their requirements. The solution's main differentials include the coverage of test scenarios that involve more than one use case, the coverage of scenarios containing recursive flows, the possibility of defining business rules using data existing in test databases, the automatic generation of test values, and the generation of semantic functional tests in a format independent of programming languages and test frameworks.
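A minimal sketch of the kind of scenario expansion described, covering alternative flows and scenarios that span multiple use cases; the data format and function names are hypothetical and far simpler than the actual tool:

```python
from itertools import product

def scenarios(use_case, library):
    """Expand a textual use-case description into concrete test scenarios.
    A step is either a plain string, a choice ({'alt': [...]}), or a call to
    another use case ({'include': name}), so scenarios can span use cases."""
    expanded = []  # per step: the list of alternative step-sequences
    for step in use_case:
        if isinstance(step, str):
            expanded.append([[step]])
        elif "alt" in step:
            expanded.append([[s] for s in step["alt"]])
        elif "include" in step:
            expanded.append(scenarios(library[step["include"]], library))
    # Cartesian product of the alternatives, flattened into step lists
    return [[s for part in combo for s in part] for combo in product(*expanded)]

library = {
    "login": ["enter credentials", {"alt": ["valid password", "invalid password"]}],
}
checkout = [{"include": "login"}, "add item to cart",
            {"alt": ["pay by card", "pay by invoice"]}]
result = scenarios(checkout, library)
```

Each resulting step list could then be mapped to executable test code, which is the role the dissertation assigns to its language-independent semantic test format.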
APA, Harvard, Vancouver, ISO styles, etc.
41

Cândido, Gilberto Gomes. « O ato narrativo e a ética na descrição do documento de arquivo / ». Marília, 2020. http://hdl.handle.net/11449/192628.

Full text
Abstract:
Advisor: João Batista Ernesto de Moraes
The description of the archive document, as one of the functions of archival procedure, is elaborated to allow diffusion and user access; this method seeks to provide elements/characters of the document's formal content in order to elaborate research tools. Description is therefore a process of representing the intrinsic and extrinsic elements of the archive document and should be trustworthy. With this, the aim was to understand whether and how the narrative act and ethics occur in the representation of the archive document, demonstrating that such a representative act is not objective but subjective, arising from interpretations, in order to contribute to discussion and deepening in the area of representation, as well as to present elements for understanding the subjectivity of the representation process through the description of the archive document; to describe the methodological procedures; to reproduce methodological dialogues; and to illustrate the application of the description process to the dossiers of the Comissão Pastoral da Terra, CNBB - Conferência Nacional dos Bispos do Brasil - Norte 2, with reflections on the methodological and philosophical procedures presented. The results identified that descriptive processes are interpretative and may attribute evaluative and categorical critical judgments through the archivist's worldview during representation. The description procedure applied by the archivist in his professional practice uses cognitive aspects, based on his interpretations, which seek to identify and extract the elements/... (Complete abstract: click electronic access below)
Doctorate
APA, Harvard, Vancouver, ISO styles, etc.
42

Chappuy, Sylviane Vauquois Bernard. « Formalisation de la description des niveaux d'interprétation des langues naturelles étude menée en vue de l'analyse et de la génération au moyen de transducteurs / ». S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00306957.

Full text
APA, Harvard, Vancouver, ISO styles, etc.
43

Coleman, Graham Keith. « Descriptor control of sound transformations and mosaicing synthesis ». Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/392138.

Full text
Abstract:
Sampling, as a musical or synthesis technique, is a way to reuse recorded musical expressions. In this dissertation, several ways to expand sampling synthesis are explored, especially mosaicing synthesis, which imitates target signals by transforming and compositing source sounds, in the manner of a mosaic made of broken tile. One branch of extension consists of the automatic control of sound transformations towards targets defined in a perceptual space. The approach chosen uses models that predict how the input sound will be transformed as a function of the selected parameters. In one setting, the models are known, and numerical search can be used to find sufficient parameters; in the other, they are unknown and must be learned from data. Another branch focuses on the sampling itself. By mixing multiple sounds at once, perhaps it is possible to make better imitations, e.g. in terms of the harmony of the target. However, using mixtures leads to new computational problems, especially if properties like continuity, important to high quality sampling synthesis, are to be preserved. A new mosaicing synthesizer is presented which incorporates all of these elements: supporting automatic control of sound transformations using models, mixtures supported by perceptually relevant harmony and timbre descriptors, and preservation of continuity of the sampling context and transformation parameters. Using listening tests, the proposed hybrid algorithm was compared against classic and contemporary algorithms, and the hybrid algorithm performed well on a variety of quality measures.
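The first branch described, numerical search for transformation parameters when the descriptor model is known, can be sketched as follows; the linear loudness model and the parameter range are invented for illustration and are not the dissertation's actual models:

```python
def loudness_model(gain):
    """Hypothetical model predicting a perceptual loudness descriptor
    from a gain parameter applied to the source sound."""
    return 0.2 + 0.7 * gain  # assumed linear response

def search_parameter(model, target, lo=0.0, hi=1.0, steps=1000):
    """Numerical search: scan a parameter grid and pick the value whose
    predicted descriptor is closest to the target in perceptual space."""
    return min((lo + (hi - lo) * i / steps for i in range(steps + 1)),
               key=lambda g: abs(model(g) - target))

# find the gain that should produce a loudness descriptor of 0.55
gain = search_parameter(loudness_model, target=0.55)
```

When the model is unknown, the same search structure applies, but the prediction function must first be learned from examples of transformed sounds.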
APA, Harvard, Vancouver, ISO styles, etc.
44

Gautier, Dalché Patrick. « La "Descriptio mappe mundi" de Hugues de Saint-Victor : texte établi avec introduction et commentaire ». Paris 1, 1986. http://www.theses.fr/1986PA010613.

Full text
Abstract:
In his De archa mystica, Hugh of Saint-Victor announces a Descriptio mappe mundi that he would write later. The authenticity of this work, undiscovered until now, has long been debated. The attribution of the edited text to Hugh of Saint-Victor rests on several arguments. It bears the title Descriptio mappe mundi. It is found in two manuscripts from the second half of the twelfth century, one of which comes from the Cistercian monastery of Ourscamp, linked to Saint-Victor. The prologue bears the mark of the style and of the pedagogical and exegetical concerns of Hugh of Saint-Victor. Finally, it is the description of a large, now-lost map, a reduced model of which is found in a manuscript that belonged to the Celestines of Marcoussis, whose library was essentially of Parisian origin. The text of the Descriptio mappe mundi is dated to the years 1130-1135. The model of the map described by Hugh of Saint-Victor is a map of late antiquity (fifth-sixth century). Its author had used literary and cartographic sources, some of which have since disappeared. The Descriptio mappe mundi is itself original in two respects. It breaks with the traditional practices of the teaching of geography (whose history, like that of the expression mappa mundi, is recalled) by relying above all on the map rather than on texts. Through the confidence it places in the map, it reveals its author's attitude as a geographer, who identifies essential facts that ancient geography had neglected, notably the role of the Alps as the backbone of Europe, or the fact that Egypt belongs to the African continent.
APA, Harvard, Vancouver, ISO styles, etc.
45

Mamadou, Diarra. « Extraction et fusion de points d'intérêt et textures spectraux pour l'identification, le contrôle et la sécurité ». Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCK031/document.

Full text
Abstract:
Biometrics is an emerging technology that proposes new methods of control, identification and security. Biometric systems are often subject to threats. Face recognition is popular, and several existing approaches use images in the visible spectrum. These traditional systems operating in the visible spectrum suffer from several limitations due to changes in lighting, pose and facial expression. The methodology presented in this thesis is based on multispectral face recognition using infrared and visible imaging, to improve recognition performance and overcome the deficiencies of the visible spectrum. The multispectral images used in this study are obtained by fusing visible and infrared images. The recognition techniques are based on the extraction of features such as texture and points of interest, using the following techniques: hybrid feature extraction, binary feature extraction, and a similarity measure that takes the extracted features into account.
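A minimal sketch of the pixel-level fusion of visible and infrared images mentioned in this abstract; the equal weighting is an assumption for illustration, and real systems may instead fuse at the feature or decision level:

```python
def fuse(visible, infrared, alpha=0.5):
    """Pixel-wise weighted fusion of a visible and an infrared image,
    given as equally sized 2-D lists of grey levels."""
    return [[round(alpha * v + (1 - alpha) * ir)
             for v, ir in zip(vrow, irow)]
            for vrow, irow in zip(visible, infrared)]

# toy 2x2 grey-level images standing in for registered face captures
visible = [[200, 10], [40, 90]]
infrared = [[100, 50], [80, 30]]
fused = fuse(visible, infrared)
```

Feature extraction (texture descriptors, points of interest) would then operate on the fused image rather than on either spectrum alone.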
APA, Harvard, Vancouver, ISO styles, etc.
46

Rogerson, Michelle. « The utility of applying textual analysis to descriptions of offender modus operandi for the prevention of high volume crime ». Thesis, University of Huddersfield, 2016. http://eprints.hud.ac.uk/id/eprint/28709/.

Full text
Abstract:
Police crime information systems contain modus operandi (MO) fields, which provide brief text descriptions of the circumstances surrounding crime events and the actions taken by offenders to commit them. This thesis aims to assess the feasibility of systematically analysing these descriptions for high-volume crimes. In particular, it asks three questions: 1) Are police-recorded MO data a potential source of actionable intelligence to inform crime prevention? 2) Can techniques drawn from computer-aided text analysis be used to identify meaningful patterns in MO data for high-volume crimes? 3) Do conceptual frameworks add value to the analysis and interpretation of patterns in MOs? The study focuses on a sample of theft from the person and robbery of personal property offences (n~30,000). Although existing studies have used similar data, they have tended to focus on crime detection and have been beset by problems of data quality. To explore these aims, it was first necessary to conduct a thorough review of MO fields to identify the challenges they present for analysis. The problems identified include various types of error, but a more prominent challenge is the inherent flexibility of natural language, i.e. human language as opposed to artificially constructed languages. Based on the data review, it was possible to select and develop appropriate techniques of computer-aided content analysis to prepare the data for further statistical investigation. In particular, a cluster analysis successfully identified and classified groups of offences based on similarities in their MO fields. The findings were interpreted using two conceptual frameworks, the conjunction of criminal opportunity and crime scripts, both informed by situational crime theories. The thesis identified twofold benefits of these frameworks. As methods of analysis, the frameworks ensure that the interpretation of results is systematic. As theoretical frameworks, they provide an explicit link between patterns in the data, findings from previous literature, theories of crime causation and methods of prevention. Importantly, using the two frameworks together helps to build an improved understanding of offenders' ability both to cope with and to exploit crime situations. The thesis successfully demonstrates that MO fields contain a potential source of intelligence relevant to both practical crime prevention and research, and that it is possible to extract this information using innovative computer-aided textual analysis techniques. The research served as a pathfinding exercise, developing what amounts to a replicable technique applicable to datasets from other localities and other crime types. However, the analysis process is neither fully objective nor automated. The thesis concludes that criminological frameworks are a prerequisite for the interpretation of this intelligence, although the research questioned the strict categories and hierarchies imposed by the frameworks, which do not entirely reflect the flexibility of real-life crime commission.
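The clustering step described, grouping offences by similarity of their free-text MO fields, can be sketched with a toy bag-of-words similarity measure; the greedy single-pass clustering and the threshold are illustrative assumptions, not the thesis's actual method:

```python
import math
import re

def vectorize(text):
    """Bag-of-words term counts for a free-text MO description."""
    counts = {}
    for w in re.findall(r"[a-z]+", text.lower()):
        counts[w] = counts.get(w, 0) + 1
    return counts

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def cluster(descriptions, threshold=0.5):
    """Greedy single-pass clustering: put each description in the first
    cluster whose seed it resembles, else start a new cluster."""
    clusters = []
    for d in descriptions:
        v = vectorize(d)
        for c in clusters:
            if cosine(v, c["seed"]) >= threshold:
                c["members"].append(d)
                break
        else:
            clusters.append({"seed": v, "members": [d]})
    return [c["members"] for c in clusters]

mo = [
    "offender snatched phone from victim's hand and ran off",
    "suspect snatched phone from hand of victim and ran",
    "distracted victim at cash machine and took wallet",
]
groups = cluster(mo)
```

The resulting groups would then need criminological interpretation, which is where the thesis brings in the conjunction of criminal opportunity and crime scripts.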
APA, Harvard, Vancouver, ISO styles, etc.
47

Doualan, Gaëlle. « Étude historique, épistémologique et descriptive de la synonymie ». Thesis, Paris 4, 2015. http://www.theses.fr/2015PA040149.

Full text
Abstract:
This thesis studies the theoretical and epistemological aporias of synonymy. These problems are concentrated in the theoretical weakness of synonymy compared with its empirical success in both usage and dictionaries. This theoretical weakness originates in the history of the notion: synonymy was first defined by Aristotle and was subsequently transformed over the following centuries. With the beginning of modern linguistics, the scientific definition of synonymy was built from the distinctive synonymy of the French synonymists. The fundamental notions of modern linguistics, such as the opposition between language and discourse, were applied to synonymy even though it had been elaborated before their conception. Synonymy can hardly be submitted to the theoretical frameworks of modern linguistics without generating theoretical difficulties. The distinctive approach centres the study of synonymy on semantic differences, whereas synonymy is based on approximate semantic equivalence. The history of synonymy sheds light on these aporias and helps to step back from them, recentring the notion on semantic equivalences, since they are what make synonymy possible, and on discourse, which is where sense emerges. This breaks with a synonymy based solely on semantic differences between synonymous lexical items. An onomasiological and textual approach is set up to propose a new scientific framework for synonymy: this approach consists in the detection of lexical networks showing semantic relations that appear in context. To test this approach, lexical networks of the vocabulary of vice and virtue are studied in seventeenth-century French texts treating moral themes.
APA, Harvard, Vancouver, ISO styles, etc.
48

Joar, Hedvall, et Claesson Charlie. « Character description by the use of level design and game mechanics : A study on how to convey a character based narrative within a game ». Thesis, Uppsala universitet, Institutionen för speldesign, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-260644.

Full text
Abstract:
This study investigates the possibilities of creating levels that convey a character and its personal narrative using three level design methods. The methods used are: the use of smaller objects and details to convey information, the use of the player's personal references and memories, and having a clear goal for the player. The results were gathered through qualitative interviews with the participants in the study, examining how they interpreted the narrative in game levels designed using these three level design methods. The study uses the game Project Rewind as a test bed, which was partially developed for the purposes of this thesis. The test results showed that players took more notice of clusters of smaller visual objects, and memorized interactive objects much better than stationary objects. They also used their own personal and cultural references, memories and stereotypes when analysing the objects in the levels to create an image of the character presented to them. Based on the results from the interviews, we found that having a clear target demographic is important when designing a narrative level that needs to be easy to understand, because certain things can be perceived differently depending on each player's personal references. The test results also show that the three level design methods mentioned can, to a certain degree, be used to convey a narrative when used to design a level.
APA, Harvard, Vancouver, ISO styles, etc.
49

Marinič, Michal. « Rozpoznávání textu z obrazových dat ». Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-220584.

Full text
Abstract:
This thesis is concerned with optical character recognition from image data, using different methods for character classification. The first, theoretical part explains all the important components of a system for optical character recognition. The second, practical part describes an example of image segmentation, the implementation of artificial neural networks for character recognition, and the creation of a simple training data set for evaluating the network. It also describes the process of training the Tesseract tool and its use in a simple application, EasyTessOCR, for character recognition.
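The character-classification step of such an OCR system can be sketched with a toy nearest-neighbour classifier over binary glyph bitmaps; this stands in for the neural-network classifier the thesis actually implements, and the 3x3 glyphs are invented for illustration:

```python
def distance(a, b):
    """Hamming distance between two equally sized binary glyph bitmaps."""
    return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def classify(glyph, training_set):
    """Nearest-neighbour character classification over a labelled training set."""
    return min(training_set, key=lambda item: distance(glyph, item[1]))[0]

# toy 3x3 glyphs standing in for segmented character images
training_set = [
    ("I", [[0, 1, 0], [0, 1, 0], [0, 1, 0]]),
    ("L", [[1, 0, 0], [1, 0, 0], [1, 1, 1]]),
    ("T", [[1, 1, 1], [0, 1, 0], [0, 1, 0]]),
]
noisy_L = [[1, 0, 0], [1, 0, 0], [1, 1, 0]]  # an "L" with one flipped pixel
label = classify(noisy_L, training_set)
```

In the full pipeline, segmentation first isolates each character image, and the classifier assigns it a label despite pixel noise, as the flipped pixel here shows.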
APA, Harvard, Vancouver, ISO styles, etc.
50

Bordry, Guillaume. « "La musique est un texte" : histoire, typologie, et fonctions de la description littéraire de la musique, en particulier dans l'oeuvre d'Hector Berlioz (1803-1869) ». Paris 3, 2005. http://www.theses.fr/2005PA030068.

Full text
Abstract:
Starting from a phrase by Honoré de Balzac, "music is a text", this work attempts to define a specific kind of textual practice: the literary description of music, which consists in recreating musical art by means other than technical ones. This kind of description focuses on the emotions and imagination of the listener, who transcribes his impressions through literary devices. Musical experience thus takes the shape of a literary practice: reading. Composers, Berlioz first among them, use the evocative power of these texts in their own musical creation. The first part of this work outlines a history and a typology of the literary description of music. The second part illustrates the first by focusing on a specific case, Hector Berlioz's, illuminating his joint manipulation of text and music. Berlioz is at once a composer, a writer, a listener and a reader, which allows his musical descriptions to play a specific role in his writings as well as in his music; this work attempts to define the forms and meaning of that role.
APA, Harvard, Vancouver, ISO styles, etc.