Dissertations on the topic "Reconnaissance optique de texte"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for your research on the topic "Reconnaissance optique de texte."
Next to every entry in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, when these are available in the metadata.
Browse dissertations across a wide range of disciplines and compile an accurate bibliography.
Mullot, Rémy. "Segmentation d'images et extraction de primitives pour la reconnaissance optique de texte." Rouen, 1991. http://www.theses.fr/1991ROUES001.
Vincent, Nicole. "Contribution à la reconnaissance de textes multipolices." Lyon, INSA, 1988. http://www.theses.fr/1988ISAL0011.
Namane, Abderrahmane. "Degraded printed text and handwritten recognition methods : Application to automatic bank check recognition." Université Louis Pasteur (Strasbourg) (1971-2008), 2007. http://www.theses.fr/2007STR13048.
Character recognition is a significant stage in all document recognition systems. It is treated as the problem of assigning a class to a given character, and is an active research subject in many disciplines. This thesis is mainly concerned with the recognition of degraded printed and handwritten characters, and brings new solutions to the field of document image analysis (DIA). The first solution concerns the development of two recognition methods for handwritten numerals: one based on the Fourier-Mellin transform (FMT) and the self-organizing map (SOM), and a parallel combination of HMM-based classifiers that uses a new projection technique for parameter extraction. The second solution is a new holistic recognition method for handwritten words, applied to French legal amounts. The third solution presents two neural-network-based recognition methods for degraded printed characters, applied to Algerian postal checks: the first is based on a sequential combination, while the second uses a serial combination built mainly on the introduction of a relative distance measuring the quality of a degraded character. During this work, preprocessing methods were also developed, in particular handwritten numeral slant correction and detection of the central zone and slope of handwritten words.
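The projection technique used for parameter extraction is not detailed in this abstract; as a generic illustration of the projection-profile features commonly used in such handwritten-digit recognizers, here is a minimal sketch (the glyph and the exact features are hypothetical, not taken from the thesis):

```python
def projection_profiles(bitmap):
    """Horizontal and vertical projection profiles of a binary character image.

    `bitmap` is a list of rows, each row a list of 0/1 pixels.
    Returns (row_sums, col_sums): ink counts per row and per column,
    a simple feature vector often fed to sequence or template classifiers.
    """
    row_sums = [sum(row) for row in bitmap]
    col_sums = [sum(col) for col in zip(*bitmap)]
    return row_sums, col_sums

# Tiny 3x3 example: a rough "C"-like glyph.
glyph = [
    [1, 1, 1],
    [1, 0, 0],
    [1, 1, 1],
]
rows, cols = projection_profiles(glyph)
```

The two profiles together give a compact, translation-tolerant description of where the ink mass lies in the character.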
Saidane, Zohra. "Reconnaissance de texte dans les images et les vidéos en utilisant les réseaux de neurones à convolutions." Phd thesis, Télécom ParisTech, 2008. http://pastel.archives-ouvertes.fr/pastel-00004685.
Minetto, Rodrigo. "Reconnaissance de zones de texte et suivi d'objets dans les images et les vidéos." Paris 6, 2012. http://www.theses.fr/2012PA066108.
In this thesis we address three computer vision problems: (1) the detection and recognition of flat text objects in images of real scenes; (2) the tracking of such text objects in a digital video; and (3) the tracking of an arbitrary three-dimensional rigid object with known markings in a digital video. For each problem we developed innovative algorithms, which are at least as accurate and robust as other state-of-the-art algorithms. Specifically, for text recognition we developed (and extensively evaluated) a new HOG-based descriptor specialized for Roman script, which we call T-HOG, and showed its value as a post-filter for an existing text detector (SnooperText). We also improved the SnooperText algorithm by using a multi-scale technique to handle widely different letter sizes while limiting the sensitivity of the algorithm to various artifacts. For text tracking, we describe four basic ways of combining a text detector and a text tracker, and we developed a specific tracker based on a particle filter that exploits the T-HOG recognizer. For rigid object tracking we developed a new accurate and robust algorithm (AFFTrack) that combines the KLT feature tracker with an improved camera calibration procedure. We extensively tested our algorithms on several benchmarks well known in the literature. We also created benchmarks (publicly available) for the evaluation of text detection and tracking and rigid object tracking algorithms.
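The T-HOG descriptor itself is not specified in this abstract; the sketch below illustrates only the generic HOG ingredient it builds on, a gradient-orientation histogram for one image cell (simplified: no block normalization and no bilinear vote interpolation):

```python
import math

def hog_cell_histogram(cell, bins=9):
    """Orientation histogram of one cell of a grayscale image (list of lists).

    Gradients are computed via central differences; each interior pixel votes
    its gradient magnitude into an unsigned-orientation bin (0..180 degrees).
    """
    h, w = len(cell), len(cell[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[min(int(ang / (180.0 / bins)), bins - 1)] += mag
    return hist

# A vertical edge (dark left, bright right): all votes land in the 0-degree bin.
edge = [[0, 0, 255, 255]] * 4
h = hog_cell_histogram(edge)
```

Concatenating such histograms over a grid of cells, then normalizing them over blocks, yields the full HOG feature that text/non-text classifiers are trained on.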
Paquet, Thierry. "Segmentation et classification de mots en reconnaissance optique de textes manuscrits." Rouen, 1992. http://www.theses.fr/1992ROUES007.
Henry, Jean-Luc. "Reconnaissance et contexte : une approche coopérative pour la lecture de textes imprimés." Lyon, INSA, 1996. http://www.theses.fr/1996ISAL0027.
Printed document analysis is not based solely on optical character recognition; it also uses statistical, typographic and contextual information. A contextual stage that is independent of recognition does not give good results. The topic of this work is to build a cooperation between the recognition stage and the contextual stage. The recognition stage gives information to the syntactic analysis stage in order to improve the correction process; the contextual analysis stage then provides the recognition stage with the information it needs to correct its decision criteria and automatically improve recognition performance during reading. This work is divided into two parts. The first part addresses character recognition from the patterns alone, and the second studies recognition with the help of contextual information, mainly based on syntactic correction. The work starts with a presentation of classic methods for extracting features from patterns and comparing feature descriptions. We then introduce a compact pattern representation, obtained by mutually comparing characters in order to collect all identical patterns over the entire text, called prototypes. To reconstruct the recognized text, we identify these prototypes with an original pretopological recognition approach, based on classification by adaptive neighborhoods. The second part of this work deals with contextual processing and the cooperation abilities between the two main stages involved in the recognition process. The contextual analysis corrects recognition errors using pattern redundancy information and a trie dictionary. The system reorganizes its pattern representation by modifying the parameters that intervene in the recognition process. The global recognition rate reaches an optimum that no longer depends on the training set, but on the choice of features and the comparison method used.
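The trie-dictionary correction mentioned above can be illustrated with a minimal sketch (the lexicon is hypothetical, and matching is restricted to character substitutions, unlike a full edit-distance search):

```python
class Trie:
    """Minimal trie dictionary, of the kind used for contextual error correction."""

    def __init__(self, words=()):
        self.root = {}
        for w in words:
            self.insert(w)

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker

    def corrections(self, word, max_subst=1):
        """Dictionary words reachable from `word` with at most
        `max_subst` character substitutions (same length only)."""
        out = []

        def walk(node, i, budget, prefix):
            if i == len(word):
                if "$" in node:
                    out.append(prefix)
                return
            for ch, child in node.items():
                if ch == "$":
                    continue
                cost = 0 if ch == word[i] else 1
                if budget - cost >= 0:
                    walk(child, i + 1, budget - cost, prefix + ch)

        walk(self.root, 0, max_subst, "")
        return sorted(out)

lexicon = Trie(["texte", "teste", "tarte"])
candidates = lexicon.corrections("taxte")  # OCR misread of "texte" or "tarte"
```

Walking the trie prunes impossible prefixes early, so correction cost stays low even for large lexicons.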
Soua, Mahmoud. "Extraction hybride et description structurelle de caractères pour une reconnaissance efficace de texte dans les documents hétérogènes scannés : Méthodes et Algorithmes parallèles." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1069/document.
Optical Character Recognition (OCR) converts text images into editable text documents. Today, such systems are widely used in dematerialization applications such as mail sorting, bill management, etc. In this context, the aim of this thesis is to propose an OCR system offering a better compromise between recognition rate and processing speed, so as to provide reliable, real-time document dematerialization. The text is first extracted from the background, then segmented into disjoint characters that are described by their structural characteristics; finally, the characters are recognized by comparing their descriptors with predefined ones. Text extraction based on binarization methods remains difficult in heterogeneous scanned documents with a complex, noisy background, where text may be confused with a textured background or with noise. The description of characters and the extraction of segments, for their part, are often complex, relying on the calculation of geometric transformations or polygonal approximations and on a large number of characteristics, or give low discrimination when the selected characteristics are sensitive to variations of scale, style, etc. We therefore adapt our algorithms to heterogeneous scanned documents, and obtain high discrimination between characters with a description based on the structure of the characters according to their horizontal and vertical projections. To ensure real-time processing, we parallelize the developed algorithms on the graphics processor (GPU). The main contributions of our proposed OCR system are as follows.
A new binarization method for heterogeneous scanned documents containing text regions with complex or homogeneous backgrounds. An image analysis process is followed by a classification of the document areas into images (text with a complex background) and text (text with a homogeneous background). For text regions, text extraction is performed using a hybrid method based on a k-means classification algorithm (CHK) that we developed for this purpose. This method combines local and global approaches; it improves the quality of text/background separation while minimizing the distortion of text extracted from scanned documents that are noisy because of the digitization process. Image areas are enhanced with gamma correction before applying CHK. In our experiments, this text extraction method yields a 98% character recognition rate on heterogeneous scanned documents. A unified character descriptor based on the study of character structure. It employs a sufficient number of characteristics, resulting from the unification of the horizontal and vertical projection descriptors of the characters, for efficient discrimination. Its advantages are both high performance and simple computation; it supports the recognition of alphanumeric and multi-scale characters, and provides a 100% character recognition rate for a given font face and size. Parallelization of the proposed character recognition system. The GPU was used as the parallelization platform: flexible and powerful, this architecture provides an effective solution for accelerating intensive image processing algorithms. Our implementation combines coarse- and fine-grained parallelization strategies to speed up the steps of the OCR chain, avoids CPU-GPU communication overheads, and ensures good memory management. Its effectiveness is validated through extensive experiments.
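The k-means-based text/background separation (CHK) described above can be sketched in a highly simplified two-cluster form on raw grayscale values (the real method combines local and global approaches; everything below is an illustrative assumption):

```python
def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means on grayscale values."""
    lo, hi = min(values), max(values)
    c0, c1 = float(lo), float(hi)  # initialize centers at the extremes
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return c0, c1

def binarize(image):
    """Label each pixel with the nearest cluster center:
    0 = dark (text), 1 = bright (background)."""
    flat = [v for row in image for v in row]
    c0, c1 = kmeans_1d(flat)
    dark, bright = min(c0, c1), max(c0, c1)
    return [[0 if abs(v - dark) <= abs(v - bright) else 1 for v in row]
            for row in image]

page = [
    [250, 245, 10, 240],
    [255, 12, 8, 250],
]
mask = binarize(page)
```

A hybrid method would run this globally for a first estimate and locally (per window) to handle uneven illumination.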
Wolf, Christian. "Détection de textes dans des images issues d'un flux vidéo pour l'indexation sémantique." Lyon, INSA, 2003. http://theses.insa-lyon.fr/publication/2003ISAL0074/these.pdf.
This work is situated within the framework of image and video indexing. One way to include semantic knowledge in the indexing process is to use the text contained in images and video sequences: it is rich in information and easy to exploit. Existing methods for text detection are simple: most are based on texture estimation or edge detection followed by an accumulation of these characteristics. We suggest using geometrical features very early in the detection chain: a first coarse detection calculates a text "probability" image; then, for each pixel, we calculate geometrical properties of the eventual surrounding text rectangle, which are added to the features of the first step and fed into a support vector machine classifier. For video sequences, we propose an algorithm that detects text on a frame-by-frame basis, tracks the found text rectangles across multiple frames, and robustly integrates the frames into a single image. We also tackle the character segmentation problem with two different methods: the first maximizes a criterion based on local contrast in the image; the second exploits a priori knowledge of the spatial binary distribution of the pixels. This prior knowledge, in the form of a Markov random field model, is integrated into a Bayesian estimation framework in order to obtain an estimate of the original binary image.
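The robust multi-frame integration step can be illustrated with a per-pixel temporal median over aligned crops of the same text rectangle, a common robust estimator for this task (a sketch under that assumption, not the thesis's exact method):

```python
def integrate_frames(frames):
    """Integrate several aligned grayscale crops of one text rectangle into a
    single image via a per-pixel temporal median, suppressing transient
    background motion behind overlay text."""
    h, w = len(frames[0]), len(frames[0][0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            pix = sorted(f[y][x] for f in frames)
            row.append(pix[len(pix) // 2])  # median of an odd-length list
        out.append(row)
    return out

frames = [
    [[200, 0]],   # frame 1: bright background, dark text pixel
    [[30, 0]],    # frame 2: a dark object passes behind the text
    [[210, 0]],   # frame 3
]
clean = integrate_frames(frames)
```

The median discards the outlier frame where the background momentarily darkened, while a plain average would not.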
Nosary, Ali. "Reconnaissance automatique de textes manuscrits par adaptation au scripteur." Rouen, 2002. http://www.theses.fr/2002ROUES007.
This thesis deals with the problem of off-line handwritten text recognition. It describes a text recognition system that exploits an original principle of adaptation to the handwriting to be recognized. The adaptation principle, inspired by contextual effects observed in human readers, is based on automatically learning, during recognition, the graphical characteristics of the handwriting (writer invariants). Word recognition proceeds according to an analytical approach based on a segmentation-recognition principle. The on-line adaptation of the recognition system relies on iterating two steps: a word recognition step, which labels the writer's representations (allographs) over the whole text, and a re-evaluation step of the character models. Implementing our adaptation strategy requires an interactive recognition scheme able to make treatments at various contextual levels interact; the interaction model retained is based on the multi-agent paradigm.
Wolf, Christian Jolion Jean-Michel. "Détection de textes dans des images issues d'un flux vidéo pour l'indexation sémantique." Villeurbanne : Doc'INSA, 2005. http://docinsa.insa-lyon.fr/these/pont.php?id=wolf.
Thesis written in English, with the introduction and general conclusion in French. The second part gathers selected articles in French with abstracts, keywords and bibliographic references. Title taken from the title screen. Bibliography p. 147-154; author's publications p. 155-157.
Chen, Yong. "Analyse et interprétation d'images à l'usage des personnes non-voyantes : application à la génération automatique d'images en relief à partir d'équipements banalisés." Thesis, Paris 8, 2015. http://www.theses.fr/2015PA080046/document.
Visual information is a very rich source of information to which blind and visually impaired (BVI) people do not always have access. The presence of images is a real handicap for the BVI, and transcription into an embossed image can increase an image's accessibility to them. Our work takes into account the aspects of tactile cognition and the rules and recommendations for designing an embossed image. We focused on analyzing and comparing digital image processing techniques in order to find suitable methods for an automatic embossing procedure. At the end of this research, we tested the embossed images created by our system with blind users, evaluating two important points: the degree of understanding of an embossed image, and the time required for its exploration. The results suggest that images made by this system are accessible to blind users who know braille. The implemented system can be regarded as an effective tool for creating embossed images: it offers an opportunity to generalize and formalize the creation procedure, and gives a very quick and easy solution. The system can process pedagogical images with simplified semantic content and can be used as a practical tool for making digital images accessible. It also offers the possibility of cooperating with other modalities of presenting images to blind people, for example a traditional interactive map.
Anigbogu, Julian Chukwuka. "Reconnaissance de textes imprimés multifontes à l'aide de modèles stochastiques et métriques." Nancy 1, 1992. http://www.theses.fr/1992NAN10150.
Yousfi, Sonia. "Embedded Arabic text detection and recognition in videos." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI069/document.
This thesis focuses on the detection and recognition of embedded Arabic text in videos. We propose different approaches that are robust to the variability of Arabic text (fonts, scales, sizes, etc.) as well as to environmental and acquisition challenges (contrast, degradation, complex backgrounds, etc.). We introduce several machine-learning-based solutions for robust text detection that do not rely on any pre-processing: the first method is based on convolutional neural networks (ConvNets), while the others use a specific boosting cascade to select relevant hand-crafted text features. For text recognition, our methodology is segmentation-free: text images are transformed into sequences of features using a multi-scale scanning scheme. Departing from the dominant methodology of hand-crafted features, we propose to learn relevant text representations from data using different deep learning methods, namely deep auto-encoders, ConvNets and unsupervised learning models, each leading to a specific OCR (optical character recognition) solution. Sequence labeling is performed without any prior segmentation using a recurrent connectionist learning model. The proposed solutions are compared to other methods based on non-connectionist, hand-crafted features. In addition, we enhance the recognition results using recurrent neural network language models able to capture long-range linguistic dependencies; both OCR and language-model probabilities are incorporated in a joint decoding scheme where additional hyper-parameters are introduced to boost recognition results and reduce response time. Given the lack of public multimedia Arabic datasets, we propose novel annotated datasets derived from Arabic videos. The OCR dataset, called ALIF, is publicly available for research purposes; to the best of our knowledge, it is the first public dataset dedicated to Arabic video OCR. Our solutions were extensively evaluated, and the results highlight the genericity and efficiency of our approaches, reaching a word recognition rate of 88.63% on the ALIF dataset and outperforming a well-known commercial OCR engine by more than 36%.
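A joint decoding scheme combining OCR and language-model probabilities with extra hyper-parameters can be sketched as a weighted log-linear rescoring (the exact form, the weights, and the n-best list below are illustrative assumptions, not the thesis's values):

```python
import math

def joint_score(ocr_logprob, lm_logprob, n_chars, alpha=0.7, beta=0.1):
    """Combine OCR and language-model log-probabilities into one decoding
    score: `alpha` weights the language model and `beta` is a per-character
    bonus, both tunable hyper-parameters."""
    return ocr_logprob + alpha * lm_logprob + beta * n_chars

def rescore(hypotheses, alpha=0.7, beta=0.1):
    """Pick the (text, ocr_logprob, lm_logprob) hypothesis with the best
    combined score."""
    return max(hypotheses,
               key=lambda h: joint_score(h[1], h[2], len(h[0]), alpha, beta))

# Hypothetical n-best list for one text line.
nbest = [
    ("the news", math.log(0.40), math.log(0.30)),
    ("thc news", math.log(0.45), math.log(0.01)),
]
best = rescore(nbest)
```

The language model overturns the slightly higher OCR score of the misspelled hypothesis; with `alpha=0` the raw OCR ranking would win instead.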
Leroux, Manuel. "Reconnaissance de textes manuscrits à vocabulaire limité avec application à la lecture automatique des chèques." Rouen, 1991. http://www.theses.fr/1991ROUES045.
Elagouni, Khaoula. "Combining neural-based approaches and linguistic knowledge for text recognition in multimedia documents." Thesis, Rennes, INSA, 2013. http://www.theses.fr/2013ISAR0013/document.
This thesis focuses on the recognition of textual clues in images and videos. In this context, OCR (optical character recognition) systems able to recognize caption texts as well as natural scene texts captured anywhere in the environment have been designed. Novel approaches, robust to text variability (different fonts, colors, sizes, etc.) and acquisition conditions (complex backgrounds, non-uniform lighting, low resolution, etc.), have been proposed. In particular, two kinds of text recognition methods are provided: a segmentation-based approach that computes nonlinear separations between characters, well adapted to the local morphology of images; and two segmentation-free approaches that integrate a multi-scale scanning scheme, the first relying on a graph model and the second on a particular connectionist recurrent model able to handle spatial constraints between characters. In addition to the originality of each approach, two further contributions of this work lie in the design of a character recognition method based on a neural classification model and in the incorporation of linguistic knowledge that takes the lexical context into account. The proposed OCR systems were tested and evaluated on two datasets: a caption-text video dataset and a natural-scene-text dataset (namely the public database ICDAR 2003). Experiments demonstrated the efficiency of our approaches and made it possible to compare their performance to that of state-of-the-art methods, highlighting their advantages and limits.
Barrère, Killian. "Architectures de Transformer légères pour la reconnaissance de textes manuscrits anciens." Electronic Thesis or Diss., Rennes, INSA, 2023. http://www.theses.fr/2023ISAR0017.
Transformer architectures deliver low error rates but are challenging to train due to the limited annotated data available in handwritten text recognition. We propose lightweight Transformer architectures adapted to the limited amounts of annotated handwritten text available. We introduce a fast encoder-based Transformer architecture, processing up to 60 pages per second, and present architectures using a Transformer decoder to incorporate language modeling into character recognition. To train our architectures effectively, we offer algorithms for generating synthetic data adapted to the visual style of modern and historical documents, and we propose strategies for learning with limited data and reducing prediction errors. Our architectures, combined with synthetic data and these strategies, achieve competitive error rates on lines of text from modern documents. For historical documents, they train effectively with minimal annotated data, surpassing state-of-the-art approaches. Remarkably, just 500 annotated lines are sufficient to reach character error rates close to 5%.
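The character error rate quoted above is the standard Levenshtein-based metric; a minimal reference implementation (not code from the thesis):

```python
def cer(reference, hypothesis):
    """Character error rate: Levenshtein edit distance between hypothesis and
    reference, divided by the reference length, computed with the standard
    dynamic program."""
    m, n = len(reference), len(hypothesis)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[m][n] / m
```

For example, one substituted character in an 11-character line gives a CER of 1/11, about 9%.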
Trupin, Eric. "Segmentation de documents : Application a un systeme de lecture pour non-voyants." Rouen, 1993. http://www.theses.fr/1993ROUES009.
Peyrard, Clément. "Single image super-resolution based on neural networks for text and face recognition." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI083/document.
This thesis focuses on super-resolution (SR) methods for improving automatic recognition systems (optical character recognition, face recognition) in realistic contexts. SR methods generate high-resolution images from low-resolution ones; unlike upsampling methods such as interpolation, they restore spatial high frequencies and compensate for artefacts such as blur or jagged edges. In particular, example-based approaches learn and model the relationship between low- and high-resolution spaces via pairs of low- and high-resolution images. Artificial neural networks are among the most efficient systems for addressing this problem, and this work demonstrates the interest of neural-network-based SR methods for improved automatic recognition systems. By adapting the data, such machine learning algorithms can be trained to produce high-resolution images. Convolutional neural networks are especially efficient, as they are trained to simultaneously extract relevant non-linear features while learning the mapping between low- and high-resolution spaces. On document text images, the proposed method improves OCR accuracy by +7.85 points compared with simple interpolation. The creation of an annotated image dataset and the organization of an international competition (ICDAR 2015) highlighted the interest and relevance of such approaches. Moreover, when a priori knowledge is available, it can be exploited by a suitable network architecture. For facial images, facial features are critical for automatic recognition: a two-step method is proposed in which image resolution is first improved and specialized models then focus on the essential features, improving the performance of an off-the-shelf face verification system by +6.91 to +8.15 points. Finally, to address the variability of real-world low-resolution images, deep neural networks can absorb the diversity of the blurring kernels that characterize them: with a single model, high-resolution images are produced with natural image statistics, without any knowledge of the actual observation model of the low-resolution image.
Do, Thanh Ha. "Sparse representations over learned dictionary for document analysis." Electronic Thesis or Diss., Université de Lorraine, 2014. http://www.theses.fr/2014LORR0021.
In this thesis, we focus on how sparse representations can help increase the performance of noise removal, text region extraction, pattern recognition and symbol spotting in graphical documents. We first give a survey of sparse representations and their applications in image processing, then present the motivation for building a learned dictionary and efficient algorithms for constructing one. After describing the general idea of sparse representations and learned dictionaries, we make several contributions in the fields of symbol recognition and document processing that achieve better performance than the state of the art. These contributions begin by answering the following questions. First, how can we remove noise from a document when we have no assumptions about the noise model found in these images? Second, how can sparse representations over a learned dictionary separate the text and graphic parts of a graphical document? Third, how can sparse representation be applied to symbol recognition? We complete this thesis by proposing a symbol spotting approach that uses sparse representations for the coding of a visual vocabulary.
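As a rough illustration of the sparse coding at work here, a matching pursuit sketch (a simpler cousin of the pursuit algorithms usually paired with learned dictionaries, e.g. those trained by K-SVD; the toy dictionary below is fixed, not learned from documents):

```python
def matching_pursuit(signal, atoms, n_iter=2):
    """Greedy sparse coding of `signal` over a dictionary of unit-norm
    `atoms` (lists of equal length). Returns a {atom_index: coefficient}
    sparse code: at each step, pick the atom most correlated with the
    residual and subtract its contribution."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    residual = list(signal)
    code = {}
    for _ in range(n_iter):
        scores = [dot(residual, atom) for atom in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(scores[i]))
        code[k] = code.get(k, 0.0) + scores[k]
        residual = [r - scores[k] * a for r, a in zip(residual, atoms[k])]
    return code

# Orthonormal toy dictionary: the signal is exactly 3*atom0 + 2*atom2.
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
code = matching_pursuit([3.0, 0.0, 2.0], atoms)
```

Denoising then amounts to reconstructing the patch from its few selected atoms, which cannot represent the incoherent noise component.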
Kanoun, Slim. "Identification et analyse de textes arabes par approche affixale." Rouen, 2002. http://www.theses.fr/2002ROUES040.
The work presented in this thesis tackles the problems of differentiating and recognizing off-line text in multilingual Arabic and Latin documents. The first part concerns a method of differentiation between Arabic and Latin texts, both printed and handwritten. The second part proposes a new approach, called the affixal approach, for Arabic word recognition and text analysis. This approach is characterized by modelling morpho-syntactic entities (basic word morphemes), integrating the morpho-phonological aspects of the Arabic vocabulary into the recognition process, in contrast to traditional approaches that model graphic entities (word, letter, pseudo-word). The tests carried out clearly show the contribution of the approach to simplifying recognition and to the morpho-syntactic categorization of words in an Arabic text.
Dang, Quoc Bao. "Information spotting in huge repositories of scanned document images." Thesis, La Rochelle, 2018. http://www.theses.fr/2018LAROS024/document.
This work aims at developing a generic framework able to produce camera-based applications for information spotting in huge repositories of heterogeneous-content document images via local descriptors. The targeted systems take a portion of an acquired image as a query and return the focused portion of a database image that best matches the query. We first propose a set of generic feature descriptors for camera-based document image retrieval and spotting systems: SRIF, PSRIF, DELTRIF and SSKSRIF, built from the spatial arrangement of the nearest keypoints around a keypoint, where keypoints are extracted from the centroids of connected components and invariant geometrical features are incorporated into the descriptor. SRIF and PSRIF are computed from a local set of the m nearest keypoints around a keypoint, while DELTRIF and SSKSRIF combine local shape descriptions without parameters via a Delaunay triangulation formed from the set of keypoints extracted from a document image. Furthermore, we propose a framework for computing descriptors based on the spatial arrangement of dedicated keypoints (e.g. SURF, SIFT or ORB) so that they can handle heterogeneous-content camera-based document image retrieval and spotting. In practice, a large-scale indexing system with an enormous number of descriptors puts a burden on memory when they are stored, and the high dimensionality of descriptors can reduce indexing accuracy. We therefore propose three robust indexing frameworks that can be employed without storing local descriptors in memory, saving memory and speeding up retrieval by discarding distance validation. The randomized clustering tree indexing inherits from the kd-tree, the k-means tree and random forests, selecting K random dimensions combined with the highest-variance dimension at each node of the tree. We also propose a weighted Euclidean distance between two data points, computed and oriented along the highest-variance dimension. The second proposed method relies on a hashing-based indexing system that employs a single hash table for indexing and retrieving, without storing database descriptors; in addition, we propose an extended hashing-based method for indexing multiple kinds of features coming from multiple layers of the image. Along with the proposed descriptors and indexing frameworks, we propose a simple, robust way to compute the shape orientation of MSER regions so that they can be combined with dedicated descriptors (e.g. SIFT, SURF, ORB, etc.) in a rotation-invariant way. When descriptors can capture neighborhood information around MSER regions, we propose extending the regions by increasing the radius of each one; this strategy can also be applied to other detected regions in order to make descriptors more distinctive. This system applies not only to a uniform feature type but also to multiple feature types from separate layers. Finally, in order to assess the performance of our contributions, and given that no public dataset exists for camera-based document image retrieval and spotting systems, we built a new dataset that has been made freely and publicly available to the scientific community. It contains portions of document images acquired via a camera as queries, and is composed of three kinds of information: textual content, graphical content and heterogeneous content.
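The single-hash-table indexing idea (retrieval by voting, with no stored descriptors and no distance computation) can be sketched as follows; the coarse quantization function is a hypothetical stand-in, not the thesis's hashing scheme:

```python
def hash_key(descriptor, step=0.25):
    """Quantize a real-valued descriptor into a coarse integer tuple
    used as a hash key, so the descriptor itself need not be stored."""
    return tuple(int(v // step) for v in descriptor)

class HashIndex:
    """One hash table mapping quantized descriptors to document ids; queries
    vote for documents instead of computing distances."""

    def __init__(self):
        self.table = {}

    def add(self, doc_id, descriptors):
        for d in descriptors:
            self.table.setdefault(hash_key(d), []).append(doc_id)

    def query(self, descriptors):
        votes = {}
        for d in descriptors:
            for doc_id in self.table.get(hash_key(d), []):
                votes[doc_id] = votes.get(doc_id, 0) + 1
        return max(votes, key=votes.get) if votes else None

index = HashIndex()
index.add("invoice", [[0.10, 0.90], [0.55, 0.20]])
index.add("letter", [[0.80, 0.80]])
hit = index.query([[0.12, 0.88], [0.56, 0.22]])  # slightly perturbed query
```

Because matching descriptors fall into the same bucket despite small perturbations, the correct document accumulates the most votes.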
Montreuil, Florent. "Extraction de structures de documents par champs aléatoires conditionnels : application aux traitements des courriers manuscrits." Phd thesis, Rouen, 2011. http://www.theses.fr/2011ROUES047.
The automatic processing of written documents is a very active field in industry. Indeed, given the mass of written documents to process, automatic analysis becomes a necessity, but the performance of current systems is highly variable according to the types of documents processed. For example, the treatment of unconstrained handwritten documents remains an unsolved issue because of two technological obstacles that hinder the development of reliable automatic processing: the first is the recognition of the handwriting in those documents; the second is the wide variability of document structures. This thesis focuses on removing the second obstacle in the case of unconstrained handwritten documents. To this end, we have developed reliable and robust methods for analyzing document structures based on Conditional Random Fields. The choice of Conditional Random Fields is motivated by the ability of these graphical models to take into account the relationships between the various entities of the document (words, phrases, blocks, ...) and to integrate contextual knowledge; in addition, the use of probabilistic models endowed with learning overcomes the inherent variability of the documents to be processed. The originality of the thesis also lies in the proposal of a hierarchical approach for jointly extracting the physical structure (segmentation of the document into blocks, lines, ...) and the logical structure (functional interpretation of the physical structure) by combining low-level physical features (position, graphics, ...) and high-level logical ones (keyword spotting). Experiments carried out on handwritten letters show that the proposed model is an interesting solution because of its discriminative character and its natural ability to integrate and contextualize characteristics of different kinds.
Montreuil, Florent. "Extraction de structures de documents par champs aléatoires conditionnels : application aux traitements des courriers manuscrits." Phd thesis, Université de Rouen, 2011. http://tel.archives-ouvertes.fr/tel-00652301.
Nguyen, Chu Duc. "Localization and quality enhancement for automatic recognition of vehicle license plates in video sequences." Thesis, Ecully, Ecole centrale de Lyon, 2011. http://www.theses.fr/2011ECDL0018.
Automatic reading of vehicle license plates is considered an approach to mass surveillance. It allows a vehicle to be identified in images or video sequences through detection/localization and optical character recognition. Many applications, such as traffic monitoring, detection of stolen vehicles, tolling, or the management of parking entrances and exits, use this method. Yet in spite of the important progress made since the appearance of the first prototypes in 1979, with recognition rates that are sometimes impressive thanks to advances in science and sensor technology, the constraints imposed on the operation of such systems limit their scope. Indeed, the optimal use of techniques for localizing and recognizing license plates in operational scenarios requires controlled lighting conditions and restrictions on the pose, velocity, or simply the type of plate. Automatic reading of vehicle license plates therefore remains an open research problem. The major contribution of this thesis is threefold. First, a new approach to robust license plate localization in images or image sequences is proposed. Then, improving the quality of the plates is treated with a localized adaptation of a super-resolution technique. Finally, a unified model of localization and super-resolution is proposed to reduce the time complexity of the two combined approaches.
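A classic baseline for the localization step (illustrative only, not the thesis method) exploits the fact that plate characters produce dense vertical gradients: score each horizontal band of the image by its vertical-edge energy and keep the densest band.

```python
import numpy as np

def vertical_edge_energy(gray):
    """Sum of horizontal gray-level differences per row (vertical edges)."""
    gx = np.abs(np.diff(gray.astype(float), axis=1))
    return gx.sum(axis=1)

def best_band(gray, band_height):
    """Row interval of height band_height with the highest edge density."""
    energy = vertical_edge_energy(gray)
    sums = np.convolve(energy, np.ones(band_height), mode="valid")
    top = int(sums.argmax())
    return top, top + band_height
```

Real systems refine such a candidate band with aspect-ratio and character-count checks before recognition.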
Capitaine, Thierry. "Reconnaissance optique de partitions musicales." Compiègne, 1995. http://www.theses.fr/1995COMPD852.
Zemirli, Zouhir. "Synthèse vocale de textes arabes voyellés." Toulouse 3, 2004. http://www.theses.fr/2004TOU30262.
Text-to-speech synthesis consists in creating speech by analysis of a text that is subject to no restriction. The object of this thesis is to describe the modeling and integration of the phonetic, phonological, morpho-lexical and syntactic knowledge necessary for the development of a complete speech synthesis system for diacritized Arabic texts. The automatic generation of the prosodic-phonetic sequence required the development of several components. The morphosyntactic tagger "TAGGAR" carries out grammatical tagging, syntactic marking and grouping, and the automatic insertion of pauses. Grapheme-to-phoneme conversion is ensured by using lexicons, syntactic grammars, and morpho-orthographical and phonological rules. A multiplicative model for predicting phoneme duration is described, and a model for generating prosodic contours based on word accents and syntactic groups is presented.
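The lexicon-plus-rules style of grapheme-to-phoneme conversion described above can be sketched as follows; the entries, rules and phone symbols here are toy examples, not the thesis resources.

```python
# Exceptions are looked up first; regular words fall through to rules.
LEXICON = {"salam": ["s", "a", "l", "a", "m"]}
# Longest-match rewrite rules (toy inventory, invented for the sketch).
RULES = {"sh": ["S"], "a": ["a"], "b": ["b"], "s": ["s"],
         "l": ["l"], "m": ["m"]}

def g2p(word):
    """Convert a word to a phone list via lexicon lookup, then rules."""
    if word in LEXICON:
        return LEXICON[word]
    phones, i = [], 0
    while i < len(word):
        for n in (2, 1):          # try a digraph before a single letter
            chunk = word[i:i + n]
            if chunk in RULES:
                phones.extend(RULES[chunk])
                i += n
                break
        else:
            i += 1                # unknown grapheme: skip it
    return phones
```

A production system would add morpho-orthographical context to the rule matching, as the thesis does.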
Kesiman, Made Windu Antara. "Document image analysis of Balinese palm leaf manuscripts." Thesis, La Rochelle, 2018. http://www.theses.fr/2018LAROS013/document.
The collection of palm leaf manuscripts is an important part of Southeast Asian people's culture and life. Following the increase in digitization projects of heritage documents around the world, the collections of palm leaf manuscripts in Southeast Asia finally attracted the attention of researchers in document image analysis (DIA). The research work conducted for this dissertation focused on the heritage documents of the collection of palm leaf manuscripts from Indonesia, especially the palm leaf manuscripts from Bali. This dissertation took part in exploring DIA research for palm leaf manuscript collections. These collections offer new challenges for DIA research because they use palm leaves as writing media and a language and script that have never been analyzed before. Motivated by the contextual situation and real condition of the palm leaf manuscript collections in Bali, this research tried to bring added value to digitized palm leaf manuscripts by developing tools to analyze, transliterate and index their content. These systems aim at making palm leaf manuscripts more accessible, readable and understandable to a wider audience and to scholars and students all over the world. This research developed a DIA system for document images of palm leaf manuscripts that includes several image processing tasks, beginning with digitization of the document and ground truth construction, through binarization, text line and glyph segmentation, and ending with glyph and word recognition, transliteration, and document indexing and retrieval. In this research, we created the first corpus and dataset of Balinese palm leaf manuscripts for the DIA research community. We also developed a glyph recognition system and an automatic transliteration system for the Balinese palm leaf manuscripts. This dissertation proposed a complete scheme of spatially categorized glyph recognition for the transliteration of Balinese palm leaf manuscripts.
The proposed scheme consists of six tasks: text line and glyph segmentation, the glyph ordering process, the detection of the spatial position of each glyph category, global and categorized glyph recognition, option selection for glyph recognition, and transliteration with a phonological rules-based machine. An implementation of knowledge representation and phonological rules for the automatic transliteration of Balinese script on palm leaf manuscripts is proposed. The adaptation of a segmentation-free, LSTM-based transliteration system with the generated synthetic dataset and training schemes at two different levels (word level and text line level) is also proposed.
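The first two tasks of the scheme, glyph segmentation and ordering, can be sketched with connected-component labelling on a binarized line image followed by left-to-right ordering. This is a simplification for illustration; the thesis's segmentation and ordering handle far harder cases.

```python
def glyph_components(binary):
    """binary: list of rows of 0/1 ink pixels. Returns connected
    components (4-connectivity) as sets of (row, col), ordered left
    to right by their leftmost column."""
    seen, comps = set(), []
    h, w = len(binary), len(binary[0])
    for r in range(h):
        for c in range(w):
            if binary[r][c] and (r, c) not in seen:
                stack, comp = [(r, c)], set()
                seen.add((r, c))
                while stack:                    # flood fill one glyph
                    y, x = stack.pop()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w and binary[ny][nx]
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                comps.append(comp)
    # reading order: sort glyphs by leftmost ink column
    return sorted(comps, key=lambda comp: min(x for _, x in comp))
```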
Duthil, Benjamin. "De l'extraction des connaissances à la recommandation." Phd thesis, Montpellier 2, 2012. http://tel.archives-ouvertes.fr/tel-00771504.
Ennaji, Abdellatif. "Classification et parallélisme en reconnaissance optique de caractères." Rouen, 1993. http://www.theses.fr/1993ROUES027.
Lardier, Melody. "Système optoélectronique de reconnaissance multicapteurs par filtrage optique." Rennes 1, 2003. http://www.theses.fr/2003REN10117.
Benjelloun, Mohammed. "Etude théorique et expérimentale du filtrage numérique de l'image d'un texte en relief Braille pour sa transcription en texte noir." Lille 1, 1986. http://www.theses.fr/1986LIL10035.
Pitou, Cynthia. "Extraction d'informations textuelles au sein de documents numérisés : cas des factures." Thesis, La Réunion, 2017. http://www.theses.fr/2017LARE0015.
Document processing is the transformation of human-understandable data into a format understandable by a computer system. Document analysis and understanding are the two phases of document processing. Considering a document containing lines, words and graphical objects such as logos, the analysis of such a document consists in extracting and isolating the words, lines and objects and then grouping them into blocks. The document understanding subsystem builds relationships (to the right, left, above, below) between the blocks. A document processing system must be able to: locate textual information, identify whether that information is relevant compared to other information contained in the document, and extract that information in a format understandable by a computer system. In the realization of such a system, major difficulties arise from the variability of document characteristics, such as: the type (invoice, form, quotation, report, etc.), the layout (font, style, disposition), the language, the typography and the quality of scanning. This work is concerned with scanned documents, also known as document images. We are particularly interested in locating textual information in invoice images. Invoices are widely used and well-regulated documents, but they are not unified. They contain mandatory information (invoice number, unique identifier of the issuing company, VAT amount, net amount, etc.) which, depending on the issuer, can take various locations in the document. The present work falls within the framework of region-based textual information localization and extraction. First, we present a region-based method guided by quadtree decomposition. The principle of the method is to decompose the document image into four equal regions, each region into four new regions, and so on. Then, with a free optical character recognition (OCR) engine, we try to extract precise textual information in each region.
A region containing the expected amount of textual information is not decomposed further. Our method makes it possible to determine accurately, in document images, the regions containing the textual information one wants to locate and retrieve, quickly and efficiently. In another approach, we propose a textual information extraction model consisting of a set of prototype regions along with pathways for browsing through these prototype regions. The life cycle of the model comprises five steps:
- Produce synthetic invoice data from real-world invoice images containing the textual information of interest, along with their spatial positions.
- Partition the produced data.
- Derive the prototype regions from the obtained partition clusters.
- Derive pathways for browsing through the prototype regions from the concept lattice of a suitably defined formal context.
- Update incrementally the set of prototype regions and the set of pathways when additional data has to be added.
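The quadtree-guided search can be sketched as a recursion that splits the page into four equal regions and descends into a quadrant as long as the target information is still found in it. The `contains` callback below stands in for the OCR check; the whole sketch is illustrative, not the thesis implementation.

```python
def quadtree_search(x, y, w, h, contains, min_size=2):
    """Return the smallest region (x, y, w, h) still containing the
    target, assuming the initial region does contain it."""
    if w <= min_size or h <= min_size:
        return (x, y, w, h)
    hw, hh = w // 2, h // 2
    quadrants = [(x, y, hw, hh), (x + hw, y, w - hw, hh),
                 (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]
    for qx, qy, qw, qh in quadrants:
        if contains(qx, qy, qw, qh):          # OCR still finds the info here
            return quadtree_search(qx, qy, qw, qh, contains, min_size)
    return (x, y, w, h)   # target straddles quadrants: keep current region
```

The stopping rule mirrors the abstract: a region whose content matches what is expected is not decomposed further.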
Ghorbel, Adam. "Generalized Haar-like filters for document analysis : application to word spotting and text extraction from comics." Thesis, La Rochelle, 2016. http://www.theses.fr/2016LAROS008/document.
The thesis presented here follows two directions. The first proposes a technique for text and graphics separation in comics. The second presents a learning-free, segmentation-free word spotting framework based on the query-by-string problem for manuscript documents. The two approaches are based on characteristics of human perception. Indeed, they were inspired by several characteristics of human vision, such as preattentive processing. These characteristics guided us to introduce two multi-scale approaches for two different document analysis tasks: text extraction from comics and word spotting in manuscript documents. These two approaches are based on applying generalized Haar-like filters globally to each document image, whatever its type. By describing and detailing the use of such features throughout this thesis, we offer researchers in the document image analysis field a new line of research to be explored further in the future. The two approaches are free of layout segmentation, and the generalized Haar-like filters are applied globally to the image. Moreover, no binarization step is applied to the processed document, in order to avoid losing data that may influence the accuracy of the two frameworks. Likewise, no learning step is performed: we avoid extracting features a priori, since feature extraction is performed automatically, taking into consideration the different characteristics of the documents.
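The mechanism that makes Haar-like filters attractive at multiple scales is the integral image: once it is computed, any rectangle sum costs four lookups, so a two-band (light/dark) filter evaluates in constant time at any size. The sketch below illustrates that mechanism only, not the thesis's generalized filters.

```python
import numpy as np

def integral_image(img):
    """Cumulative sums along both axes."""
    return img.astype(float).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Inclusive rectangle sum in O(1) from the integral image."""
    total = ii[r1, c1]
    if r0 > 0: total -= ii[r0 - 1, c1]
    if c0 > 0: total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_band(ii, r0, c0, r1, c1):
    """Left-half sum minus right-half sum of a rectangle: a basic
    vertical two-band Haar-like response."""
    mid = (c0 + c1) // 2
    return rect_sum(ii, r0, c0, r1, mid) - rect_sum(ii, r0, mid + 1, r1, c1)
```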
Ghanmi, Nabil. "Segmentation d'images de documents manuscrits composites : application aux documents de chimie." Electronic Thesis or Diss., Université de Lorraine, 2016. http://www.theses.fr/2016LORR0109.
This thesis deals with chemistry document segmentation and structure analysis. This work aims to help chemists by providing information on experiments which have already been carried out. The documents are handwritten, heterogeneous and multi-writer. Although their physical structure is relatively simple, since it consists of a succession of three regions representing the chemical formula of the experiment, a table of the products used, and one or more text blocks describing the experimental procedure, several difficulties are encountered. In fact, the lines located at the region boundaries and the imperfections of the table layout make the separation task a real challenge. The proposed methodology takes these difficulties into account by performing segmentation at several levels and treating region separation as a classification problem. First, the document image is segmented into linear structures using an appropriate horizontal smoothing. The horizontal threshold, combined with a vertical overlapping tolerance, favors the consolidation of fragmented elements of the formula without merging too much of the text. These linear structures are classified as text or graphics based on discriminant structural features. Then, segmentation is continued on the text lines to separate the rows of the table from the lines of the raw text blocks. For this classification, we proposed a CRF model that determines the optimal labelling of the line sequence. The choice of this kind of model was motivated by its ability to absorb the variability of the lines and to exploit contextual information. For the segmentation of the table into cells, we proposed a hybrid method that includes two levels of analysis: structural and syntactic. The first relies on the presence of graphic lines and on the alignment of both text and spaces. The second tends to exploit the coherence of the syntax of the cell content.
In this context, we proposed a recognition-based approach that uses contextual knowledge to detect the numeric fields present in the table. The thesis was carried out in the framework of a CIFRE agreement, in collaboration with the eNovalys company. We have implemented and tested all the steps of the proposed system on a substantial dataset of chemistry documents.
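The recognition-based numeric field detection can be illustrated as follows: a cell is accepted as a numeric field when, after mapping frequent handwriting confusions, its content parses as a quantity. The confusion table here is a toy example invented for the sketch, not the thesis model.

```python
# Hypothetical recognizer confusions (letter read for a digit).
CONFUSIONS = {"O": "0", "o": "0", "l": "1", "I": "1", "S": "5"}

def normalize(cell):
    """Map confusable characters to digits and trim whitespace."""
    return "".join(CONFUSIONS.get(ch, ch) for ch in cell.strip())

def is_numeric_field(cell):
    """Accept the cell as a numeric field if its normalized content
    parses as a number (comma tolerated as decimal separator)."""
    text = normalize(cell).replace(",", ".")
    try:
        float(text)
        return True
    except ValueError:
        return False
```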
Moradkhan, Romel. "Détection des points critiques d'une forme : application à la reconnaissance de caractères manuscrits." Paris 9, 1993. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1993PA090012.
The representation of two-dimensional patterns by their contours is of great importance, since many patterns, such as handwritten or printed characters, can be recognized by their contours. Because of its complexity, the detection of dominant points of digitized contours continues to be an important area of research. The first part of our work covers dominant point detection methods for digitized curves (contours). After a survey of existing techniques, we propose two new and efficient methods: the first is based on the notion of "co-angularity"; the second on the notion of "axis of symmetry". In the second part, we focus on the problem of handwritten character recognition. We propose a hierarchical algorithm based on structural matching which is both flexible and continuous.
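A textbook baseline for dominant point detection, simpler than the co-angularity and symmetry-axis methods proposed in the thesis, keeps a contour point when the turn angle formed with its k-th neighbours on each side deviates strongly from a straight line:

```python
import math

def turn_angle(p_prev, p, p_next):
    """Angle between the incoming and outgoing contour directions at p."""
    ax, ay = p[0] - p_prev[0], p[1] - p_prev[1]
    bx, by = p_next[0] - p[0], p_next[1] - p[1]
    dot = ax * bx + ay * by
    na, nb = math.hypot(ax, ay), math.hypot(bx, by)
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def dominant_points(contour, k=1, threshold=math.pi / 4):
    """Indices of contour points whose turn angle exceeds the threshold
    (the contour is treated as closed)."""
    n = len(contour)
    return [i for i in range(n)
            if turn_angle(contour[(i - k) % n], contour[i],
                          contour[(i + k) % n]) > threshold]
```

On a clean square contour this keeps exactly the four corners; noisy contours call for larger k or adaptive support, which is where the thesis's methods improve on the baseline.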
Loy, Wee Wang Landau I. D. "Reconnaissance en ligne de caractères alphanumériques manuscrits." S. l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00297291.
Ambs, Pierre. "Traitement de l'information par processeurs optiques : application à la reconnaissance de formes." Mulhouse, 1987. http://www.theses.fr/1987MULH0049.
Yao, Jianping. "Algorithmes pour la reconnaissance de formes optique invariante aux distorsions." Toulon, 1997. http://www.theses.fr/1997TOUL0014.
Ben, Tara Walid. "Reconnaissance invariante sous transformations d'intensité : étude de performances." Master's thesis, Université Laval, 2008. http://hdl.handle.net/20.500.11794/20452.
Poulard, Fabien B. "Détection de dérivation de texte." Nantes, 2011. http://www.theses.fr/2011NANT2023.
Thanks to the Internet, the production and publication of content is possible with ease and speed. This possibility raises the issue of controlling the origins of this content. This work focuses on detecting derivation links between texts. A derivation link associates a derivative text with the pre-existing texts from which it was written. We focused on the task of identifying derivative texts given a source text, for various forms of derivation. Our first contribution is the definition of a theoretical framework that defines the concept of derivation, as well as a model framing the different forms of derivation. Then, we set up an experimental framework consisting of free software tools, evaluation corpora and evaluation metrics based on information retrieval. The Piithie and Wikinews corpora we have developed are, to our knowledge, the only ones in French for the evaluation of the detection of derivation links. Finally, we explored different detection methods based on the signature approach. In particular, we introduced the notions of specificity and invariance to guide the choice of the descriptors used to model the texts in anticipation of their comparison. Our results show that the choice of motivated descriptors, including linguistically motivated ones, can reduce the size of the text models, and therefore the cost of the method, while offering performance comparable to much more voluminous state-of-the-art approaches.
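A minimal signature-based detector, in the spirit of the approach described above, reduces each text to a set of word n-grams (the descriptors) and ranks candidate derivatives by fingerprint overlap with the source. Plain 3-grams are used here purely for illustration; choosing more specific or more invariant descriptors is exactly the trade-off the thesis studies.

```python
def fingerprint(text, n=3):
    """Set of word n-grams acting as the text's signature."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(source, candidate, n=3):
    """Fraction of the candidate's n-grams also found in the source."""
    fs, fc = fingerprint(source, n), fingerprint(candidate, n)
    return len(fs & fc) / len(fc) if fc else 0.0
```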
Yatim, Farhat. "Reconnaissance de caractères multifontes par une structure pluri-procédures." Lille 1, 1988. http://www.theses.fr/1988LIL10033.
Le, Berre Guillaume. "Vers la mitigation des biais en traitement neuronal des langues." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0074.
It is well known that deep learning models are sensitive to biases that may be present in the data used for training. These biases, which can be defined as information that is useless or detrimental to the task in question, can be of different kinds: one can, for example, find biases in the writing styles used, but also much more problematic biases relating to the sex or ethnic origin of individuals. These biases can come from different sources, such as the annotators who created the databases, or from the annotation process itself. My thesis deals with the study of these biases and, in particular, is organized around the mitigation of the effects of biases on the training of Natural Language Processing (NLP) models. I have worked extensively with pre-trained models such as BERT, RoBERTa or UnifiedQA, which have become essential in recent years in all areas of NLP and which, despite their extensive pre-training, are very sensitive to these bias problems. My thesis is organized in three parts, each presenting a different way of managing the biases present in the data. The first part presents a method that uses the biases present in an automatic summarization database in order to increase the variability and the controllability of the generated summaries. Then, in the second part, I am interested in the automatic generation of a training dataset for the multiple-choice question-answering task. The advantage of such a generation method is that it makes it possible not to call on annotators, and therefore to eliminate the biases coming from them in the data. Finally, I am interested in training a multitask model for optical text recognition. I show in this last part that it is possible to increase the performance of our models by using different types of data (handwritten and typed) during their training.
Al, Falou Ayman. "Implantation optique de correlateurs multivoies appliques a la reconnaissance des formes." Rennes 1, 1999. http://www.theses.fr/1999REN10099.
Bastos, Dos Santos José Eduardo. "L'identification de texte en images de chèques bancaires brésiliens." Compiègne, 2003. http://www.theses.fr/2003COMP1453.
Identifying and distinguishing text in document images are tasks whose current solutions are mainly based on the use of contextual information, such as layout information or information from the physical structure. In this research work, an alternative for this task is investigated, based only on features observed in the textual elements themselves, giving more independence to the process. The whole process was developed considering textual elements fragmented into small portions (samples), in order to provide an alternative solution to questions like scale and the overlapping of textual elements. From these samples, a set of features is extracted and serves as input to a classifier mainly charged with extracting text from the document and also with distinguishing between handwritten and machine-printed text. Moreover, since the only information employed is observed directly from the textual elements, the process is more independent, as it uses neither heuristics nor a priori information about the treated document. Classification rates of around 93% confirm the efficacy of the process.
Elagouni, Khaoula. "Combinaison d'approches neuronales et de connaissances linguistiques pour la reconnaissance de texte dans les documents multimédias." Phd thesis, INSA de Rennes, 2013. http://tel.archives-ouvertes.fr/tel-00864923.
Chauvet, Philippe. "Système d'analyse, reconnaissance et description de documents complexes /." Paris : Ecole nationale supérieure des télécommunications, 1993. http://catalogue.bnf.fr/ark:/12148/cb35562138m.
Duhem, Olivier. "Contribution à l'étude de composants de l'optique guidée associant des cristaux liquides [Texte imprimé]." Artois, 1999. http://www.theses.fr/1999ARTO0403.
Fasquel, Jean-Baptiste. "Une méthode opto-informatique de détection et de reconnaissance d'objets d'intérêt : Application à la détection des lésions cancéreuses du foie et à la vérification en temps-réel des signatures manuscrites." Université Louis Pasteur (Strasbourg) (1971-2008), 2002. http://www.theses.fr/2002STR13234.
Due to recent technological advances, optical processors have become faster than specialized digital processors, essentially for linear filtering. The purpose of this thesis is to point out, for two applications, the potential of coupling a specialized digital processor with a Vander Lugt optical correlator, by developing an original hybrid "opto-electronic" method for object detection and recognition. The proposed object detection method is based on the digital statistical recombination of a set of optical smoothings, within regions of interest which are previously detected using a fast hybrid technique. It is shown that this hybrid method allows the unsupervised detection of noisy objects of varying sizes. Experimental results validate its potential for the fast detection of liver tumors. The proposed object recognition method, dedicated to the fast verification of handwritten signatures, consists of several statistical classifiers. Each one is based on a set of specific optical filterings allowing the measurement of the similarity between underlying structures of the signature to be verified and the reference signatures. The different decisions and their fusion are performed with a digital processor. Experimental results validate the proposed hybrid object recognition method.
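What a Vander Lugt correlator computes optically, correlation of a scene with a reference filter, can be emulated digitally in a few lines: multiply the scene spectrum by the conjugate reference spectrum and take the inverse FFT; the correlation peak marks the object position. This sketch shows the principle only, not the thesis's hybrid architecture.

```python
import numpy as np

def correlate(scene, reference):
    """Circular cross-correlation via the frequency domain, as a matched
    filter would compute it."""
    S = np.fft.fft2(scene)
    R = np.fft.fft2(reference, s=scene.shape)  # zero-pad to scene size
    return np.real(np.fft.ifft2(S * np.conj(R)))

def detect(scene, reference):
    """Position of the correlation peak (row, col)."""
    plane = correlate(scene, reference)
    return np.unravel_index(plane.argmax(), plane.shape)
```

The optical advantage is that the Fourier transforms are performed at the speed of light by lenses; only the peak detection and decision fusion stay digital.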