Dissertations / Theses on the topic "Analyse de données textuelles"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations / theses for your research on the topic "Analyse de données textuelles".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication as a PDF and read its abstract online, whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Chartron, Ghislaine. "Analyse des corpus de données textuelles, sondage de flux d'informations." Paris 7, 1988. http://www.theses.fr/1988PA077211.
Ramadasse, Harry. "L'observation du travail digital des cols blancs : le développement d'un Observatoire au sein du Groupe Michelin." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASI002.
This thesis investigates the digital work practices of white-collar workers within the Michelin Group. During the first year of field immersion, a diagnostic of digital intelligence was conducted, leading to the following industrial question: "How can the digital dexterity of the Michelin Group's white-collar workers be measured?" By translating this question into theoretical concepts, the following research question was identified: "How can the digital work practices of white-collar employees in a pre-digital industrial organization be observed?" Drawing on the literature on digital transformation, digital work, and observatories, this thesis advocates the development of an observatory based on a "Big Quali" sample of over 200,000 textual responses collected internally. The contributions of this work include a theoretical framework for digital work, a methodology for developing an observatory that can become an organizational management tool, and a Big Quali/AI methodological approach for researchers and practitioners aiming to analyze large textual databases.
Dubois, Vincent. "Apprentissage approximatif et extraction de connaissances à partir de données textuelles." Nantes, 2003. http://www.theses.fr/2003NANT2001.
Rigouste, Loïs. "Méthodes probabilistes pour l'analyse exploratoire de données textuelles." PhD thesis, Télécom ParisTech, 2006. http://pastel.archives-ouvertes.fr/pastel-00002424.
Lespinats, Sylvain. "Style du génome exploré par analyse textuelle de l'ADN." PhD thesis, Université Pierre et Marie Curie - Paris VI, 2006. http://tel.archives-ouvertes.fr/tel-00151611.
Starting from these observations, we set up procedures for evaluating distances between signatures, so as to make more explicit the biological information on which our analyses rest. A non-linear neighbourhood-projection method is associated with them, which sidesteps high-dimensionality problems and makes it possible to visualize the space occupied by the data. Analyzing the relations between signatures raises the question of how much each variable (each word) contributes to the distance between signatures. An original Z-score, based on the variation of word frequencies along genomes, quantifies these contributions. Studying the variation of the full set of frequencies along a genome makes it possible to extract atypical segments, and a signal-analysis method then segments these atypical zones precisely.
This set of methods yields biological results. In particular, we exhibit an organization of the space of genomic signatures that is consistent with the taxonomy of species. We also observe the presence of a DNA syntax: there are "syntactic" words and "semantic" words, with the signature resting mainly on the syntactic ones. Finally, analyzing signatures along the genome allows precise detection and segmentation of RNAs and of probable horizontal transfers; a convergence of the style of horizontal transfers towards the host's signature could moreover be observed.
Varied results have thus been obtained through signature analysis. The simplicity and speed of sequence analysis by signatures make it a powerful tool for extracting biological information from genomes.
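The genomic-signature idea above can be sketched in a few lines: count word (k-mer) frequencies along a sequence and compare signatures with a distance. This is a minimal illustration, not the thesis's exact procedure; the function names and the plain Euclidean distance are assumptions.

```python
from collections import Counter

def genomic_signature(sequence, k=2):
    """Relative frequency of each k-mer ('word') in a DNA sequence.

    A toy stand-in for genomic signatures; the thesis works with longer
    words and refined distance measures.
    """
    kmers = [sequence[i:i + k] for i in range(len(sequence) - k + 1)]
    total = len(kmers)
    counts = Counter(kmers)
    return {kmer: n / total for kmer, n in counts.items()}

def signature_distance(sig_a, sig_b):
    """Euclidean distance between two signatures over their joint word space."""
    words = set(sig_a) | set(sig_b)
    return sum((sig_a.get(w, 0.0) - sig_b.get(w, 0.0)) ** 2 for w in words) ** 0.5
```

Two sequences with similar word usage then have a small signature distance, regardless of their lengths.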
Marteau, Hubert. "Une méthode d'analyse de données textuelles pour les sciences sociales basée sur l'évolution des textes." Tours, 2005. http://www.theses.fr/2005TOUR4028.
This PhD thesis aims at giving sociologists a data-processing tool that allows them to analyse semi-directive open interviews. The proposed tool works in two steps: an indexation of the interviews followed by a classification. Usually, indexing methods rely on a general statistical analysis. Such methods are suited to texts with rich content and structure (literary texts, scientific texts, ...). These texts have more vocabulary and structure than interview transcripts (which are limited to roughly 1,000 words). On the basis of the assumption that sociological membership strongly shapes the form of the speech, we propose various methods to evaluate the structure and the evolution of the texts. The methods attempt to find new representations of texts (as images or signals) and to extract values from these new representations. The selected classification is a tree-based classification (neighbour joining, NJ): it has low complexity and respects distances, which makes it a good way to support classification.
Dermouche, Mohamed. "Modélisation conjointe des thématiques et des opinions : application à l'analyse des données textuelles issues du Web." Thesis, Lyon 2, 2015. http://www.theses.fr/2015LYO22007/document.
This work is located at the junction of two domains: topic modeling and sentiment analysis. The problem we propose to tackle is the joint and dynamic modeling of topics (subjects) and sentiments (opinions) on the Web. In the literature, the task is usually divided into sub-tasks that are treated separately; models that operate this way fail to capture the topic-sentiment interaction and association. In this work, we propose a joint modeling of topics and sentiments, taking into account the associations between them. We are also interested in the dynamics of topic-sentiment associations. To this end, we adopt a statistical approach based on probabilistic topic models. Our main contributions can be summarized in two points: 1. TS (Topic-Sentiment model): a new probabilistic topic model for the joint extraction of topics and sentiments. This model characterizes the extracted topics with distributions over the sentiment polarities; the goal is to discover the sentiment proportions specific to each of the extracted topics. 2. TTS (Time-aware Topic-Sentiment model): a new probabilistic model to characterize the topic-sentiment dynamics. Relying on the documents' time information, TTS characterizes the quantitative evolution of each of the extracted topic-sentiment pairs. We also present two other contributions: a new evaluation framework for measuring the performance of topic-extraction methods, and a new hybrid method for sentiment detection and classification from text, based on combining supervised machine learning with prior knowledge. All of the proposed methods are tested on real-world data using adapted evaluation frameworks.
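The quantity the TS model characterizes, a per-topic distribution over sentiment polarities, can be illustrated with simple counting, assuming token-level (topic, sentiment) assignments are already available. In the model these are latent variables to be inferred, so this sketch shows only the output quantity, not the inference; all names are illustrative.

```python
from collections import Counter, defaultdict

def topic_sentiment_distributions(assignments):
    """Per-topic distribution over sentiment polarities.

    `assignments` is a list of (topic, sentiment) pairs, e.g. the latent
    assignments a joint topic-sentiment model would infer for each token.
    """
    by_topic = defaultdict(Counter)
    for topic, sentiment in assignments:
        by_topic[topic][sentiment] += 1
    return {
        topic: {s: n / sum(counts.values()) for s, n in counts.items()}
        for topic, counts in by_topic.items()
    }
```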
Trouvilliez, Benoît. "Similarités de données textuelles pour l'apprentissage de textes courts d'opinions et la recherche de produits." Thesis, Artois, 2013. http://www.theses.fr/2013ARTO0403/document.
This Ph.D. thesis is about establishing textual-data similarities in the customer-relations domain. Two subjects are mainly considered: the automatic analysis of short messages written in response to satisfaction surveys; and the search for products matching criteria expressed in natural language by a human through a conversation with a program. The first subject concerns the statistical information drawn from the survey answers: the ideas expressed in the answers are identified, organized according to a taxonomy, and quantified. The second subject concerns the transcription of criteria about products into queries to be interpreted by a database management system. The range of criteria under consideration is wide, from the simplest, such as material or brand, to the most complex, such as colour or price. The two subjects meet on the problem of establishing textual-data similarities using NLP techniques. The main difficulties come from the fact that the texts to be processed, written in natural language, are short and contain many spelling errors and negations. Establishing semantic similarities between words (synonymy, antonymy, ...) and syntactic relations between syntagms (conjunction, opposition, ...) are other issues considered in our work. We also study automatic clustering and classification methods in order to analyse answers to satisfaction surveys.
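A common building block for matching short, misspelled texts is a normalized edit distance. The sketch below shows the standard Levenshtein formulation, not the thesis's own similarity measures:

```python
def levenshtein(a, b):
    """Classic edit distance: tolerant matching for short, noisy texts."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a, b):
    """Edit distance normalized to [0, 1]; 1.0 means identical strings."""
    if not a and not b:
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b))
```

With such a measure, "color" and "colour" remain close despite the spelling variation that defeats exact matching.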
Andreewsky, Marina. "Construction automatique d'un système de type expert pour l'interrogation de bases de données textuelles." Paris 11, 1989. http://www.theses.fr/1989PA112310.
Priam, Rodolphe. "Méthodes de carte auto-organisatrice par mélange de lois contraintes. Application à l'exploration dans les tableaux de contingence textuels." Rennes 1, 2003. http://www.theses.fr/2003REN10166.
El, Haddadi Anass. "Fouille multidimensionnelle sur les données textuelles visant à extraire les réseaux sociaux et sémantiques pour leur exploitation via la téléphonie mobile." Toulouse 3, 2011. http://thesesups.ups-tlse.fr/1378/.
Competition is a fundamental concept of the liberal economic tradition that requires companies to resort to Competitive Intelligence (CI) in order to position themselves advantageously on the market, or simply to survive. Nevertheless, it is well known that it is not the strongest of organizations that survives, nor the most intelligent, but the one most adaptable to change, the dominant factor in society today. Companies are therefore required to remain constantly watchful for any change, so as to devise appropriate responses in real time. A successful watch, however, cannot be content with monitoring opportunities; above all, it must anticipate risks. External risk factors have never been so numerous: extremely dynamic and unpredictable markets, new entrants, mergers and acquisitions, sharp price reductions, rapid changes in consumption patterns and values, and the fragility of brands and their reputation. To face these challenges, our research proposes a Competitive Intelligence System (CIS) designed to provide online services. Through descriptive and exploratory statistical methods, Xplor EveryWhere displays, in a very short time, new strategic knowledge such as the profile of the actors, their reputation, their relationships, their sites of action, their mobility, emerging issues and concepts, terminology, and promising fields. The need for security in Xplor EveryWhere arises from the strategic nature of the information conveyed, which has substantial value. Such security should not be considered an optional extra that distinguishes one CIS from another, especially as information leaks are generally not the result of inherent weaknesses in corporate computer systems but are, above all, an organizational issue. With Xplor EveryWhere we completed the reporting service, especially its mobility aspect.
Lastly, with this system it is possible to view up-to-date information, since we have real-time access to our strategic database server, itself fed daily by watchers, who can enter information at trade shows, during customer visits, or after meetings.
Houle, Annie. "Délit de langue et paternité textuelle : une approche informatisée." Thesis, Université Laval, 2013. http://www.theses.ulaval.ca/2013/29405/29405.pdf.
Dendani, Mohamed. "La faisabilité et la fécondité de l'application de logiciels d'analyse de données textuelles, sur un corpus substantiel d'entretiens." Aix-Marseille 1, 1992. http://www.theses.fr/1992AIX10040.
Afzali, Said Abdoul Razeq. "Analyse morphosyntaxique automatique du Dari (persan d'Afghanistan) et mise au point d'un système d'interrogation de bases de données textuelles en langage naturel." Paris 5, 1986. http://www.theses.fr/1986PA05H042.
Abbé, Adeline. "Analyse de données textuelles d'un forum médical pour évaluer le ressenti exprimé par les internautes au sujet des antidépresseurs et des anxyolitiques." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS385/document.
The analysis of textual data is facilitated by text mining (TM), which automates content analysis and has several applications in healthcare, including exploring the content of posts shared online. We performed a systematic literature review to identify applications of TM in psychiatry. In addition, we used TM to explore users' concerns on an online forum dedicated to antidepressants and anxiolytics between 2013 and 2015, analysing word frequencies, co-occurrences, topic models (LDA), and the popularity of topics. The four TM applications in psychiatry retrieved are the analysis of patients' narratives (psychopathology), of feelings expressed online, and of the content of medical records, and the screening of biomedical literature. Four topics were identified on the forum: withdrawal (the most frequent), escitalopram, anxiety related to treatment effects, and side effects. While concerns about treatment side effects declined, questions about withdrawal effects and changing medication increased for several antidepressants. Content analysis of online textual data allows us to better understand patients' major concerns and the support they receive, and to improve treatment adherence.
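Two of the analyses mentioned, word frequencies and co-occurrences, can be sketched with standard counting; the tokenization rule here is an illustrative assumption, not the study's actual preprocessing:

```python
import re
from collections import Counter
from itertools import combinations

def word_frequencies(posts):
    """Token frequencies over a collection of forum posts."""
    tokens = [w for post in posts
              for w in re.findall(r"[a-zà-ÿ']+", post.lower())]
    return Counter(tokens)

def cooccurrences(posts):
    """Count unordered word pairs appearing in the same post."""
    pairs = Counter()
    for post in posts:
        words = sorted(set(re.findall(r"[a-zà-ÿ']+", post.lower())))
        pairs.update(combinations(words, 2))
    return pairs
```

Frequent pairs then hint at associations (e.g. a drug name co-occurring with a symptom) that topic models such as LDA capture more systematically.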
Loubier, Eloïse. "Analyse et visualisation de données relationnelles par morphing de graphe prenant en compte la dimension temporelle." PhD thesis, Université Paul Sabatier - Toulouse III, 2009. http://tel.archives-ouvertes.fr/tel-00423655.
Our work leads to graphical techniques for understanding human activities, their interactions, and their evolution, from a decision-making perspective. We designed a tool combining ease of use with analytical precision, based on two complementary types of visualization: static and dynamic.
The static aspect of our visualization model rests on a representation space in which the precepts of graph theory are applied. Specific semiologies, such as the choice of representation forms, granularity, and meaningful colours, allow a more accurate and precise visualization of the whole data set. With the user at the heart of our concerns, our contribution rests on specific functionalities that support the identification and detailed analysis of graph structures. We propose algorithms to pinpoint the role of data within the structure and to analyse their neighbourhood, such as filtering, the k-core, and transitivity, as well as to return to the source documents, partition the graph, or focus on its structural specificities.
A major characteristic of strategic data is its strong evolutivity. Yet statistical analysis does not always make it possible to study this component, to anticipate incurred risks, to identify the origin of a trend, or to observe the actors or terms playing a decisive role within evolving structures.
The central point of our contribution for dynamic graphs, representing data that is both relational and temporal, is graph morphing. The objective is to bring out significant trends by first representing a global graph covering all periods, then animating the transitions between the successive visualizations of the graphs attached to each period. This process makes it possible to identify structures or events, to situate them in time, and to read them predictively.
Our contribution thus allows the representation of information and, more specifically, the identification, analysis, and restitution of the underlying strategic structures that connect, at given moments, the actors of a domain and the keywords and concepts they use.
Loubier, Éloïse. "Analyse et visualisation de données relationnelles par morphing de graphe prenant en compte la dimension temporelle." Toulouse 3, 2009. http://thesesups.ups-tlse.fr/2264/.
With worldwide exchanges, companies must face increasingly strong competition and massive information flows. They have to remain continuously informed about innovations, competition strategies, and markets, while keeping control of their environment. The development of the Internet and globalization have reinforced this requirement while also providing the means to collect information. Once summarized and synthesized, information generally takes a relational form. To analyze such data, graph visualization gives users a relevant means to interpret a form of knowledge that would otherwise be difficult to understand. Our research results in graphical techniques that allow understanding human activities, their interactions, and their evolution, from a decision-making point of view. We also designed a tool that combines ease of use with analytical precision. It is based on two complementary types of visualization: static and dynamic. The static aspect of our visualization model rests on a representation space in which the precepts of graph theory are applied. Specific semiologies, such as the choice of representation forms, granularity, and meaningful colors, allow better and more precise visualizations of the data set. The user being a core component of our model, our work rests on the specification of new types of functionalities, which support the detection and analysis of graph structures. We propose algorithms that make it possible to pinpoint the role of data within the structure and to analyze its environment, such as the filtering tool, the k-core, and transitivity, as well as to go back to the source documents and to focus on structural specificities. One of the main characteristics of strategic data is its strong evolution.
However, statistical analysis does not make it possible to study this component, to anticipate the incurred risks, to identify the origin of a trend, or to observe the actors or terms playing a decisive role in the evolution of structures. With regard to dynamic graphs, our major contribution is to represent relational and temporal data at the same time, through what is called graph morphing. The objective is to emphasize significant tendencies by first representing a graph that includes all the periods, then animating the transitions between the successive visualizations of the graphs attached to each period. This process makes it possible to identify structures or events, to locate them in time, and to make a predictive reading of them. Our contribution thus allows the representation of advanced information and, more precisely, the identification, analysis, and restitution of the underlying strategic structures that connect the actors of a domain and the keywords and concepts they use, taking their evolution into account.
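The k-core filtering mentioned among the structural tools can be sketched as iterative pruning of low-degree nodes; this is a generic illustration of the graph-theoretic notion, not the tool's actual implementation:

```python
def k_core(adjacency, k):
    """Iteratively remove nodes of degree < k; return the surviving subgraph.

    `adjacency` maps each node to the set of its neighbours.
    """
    graph = {n: set(nbrs) for n, nbrs in adjacency.items()}
    changed = True
    while changed:
        low = [n for n, nbrs in graph.items() if len(nbrs) < k]
        changed = bool(low)
        for node in low:
            for nbr in graph.pop(node):
                if nbr in graph:
                    graph[nbr].discard(node)
    return graph
```

Applied to a co-occurrence or collaboration graph, the surviving 2-core (or higher) isolates the densely connected actors a visual analysis would focus on.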
Ghalamallah, Ilhème. "Proposition d'un modèle d'analyse exploratoire multidimensionnelle dans un contexte d'intelligence économique." Toulouse 3, 2009. http://www.theses.fr/2009TOU30293.
A successful business is often conditioned by its ability to identify, collect, process, and disseminate information for strategic purposes. Moreover, information and knowledge technologies impose constraints to which companies must adapt: a continuous stream of information, circulating ever faster through increasingly complex techniques, with the risk of being swamped by it and no longer able to distinguish the essential from the trivial. Indeed, with the advent of a new economy dominated by the market, the problems facing industrial and commercial enterprises have become very complex: to be competitive, a company must know how to manage its intangible capital. Competitive Intelligence (CI) is a response to the upheavals of the overall environment of businesses and, more broadly, of any organization. In an economy where everything moves faster and becomes more complex, the management of strategic information has become a key driver of overall business performance. CI is an organizational process through which a company can become more competitive by monitoring its environment and its dynamics. In this context, we found that much strategically significant information is relational: links between actors in the field, semantic networks, alliances, mergers, acquisitions, collaborations, co-occurrences of all kinds. Our work consists in proposing a model of multidimensional exploratory analysis dedicated to CI. This approach is based on extracting knowledge by analyzing the evolution of relational data. We offer a model for understanding the activity of actors in a given field, as well as their interactions, their development, and their strategy, within a decision-making perspective. The approach relies on designing a generic online information-analysis system that homogenizes and organizes textual data in relational form, and from there extracts implicit knowledge whose content and presentation are adapted to decision-makers who are not specialists in knowledge extraction.
Aouini, Mourad. "Approche multi-niveaux pour l'analyse des données textuelles non-standardisées : corpus de textes en moyen français." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCC003.
This thesis presents an approach to the analysis of non-standardized texts, consisting of a processing-chain model that allows the automatic annotation of texts: grammatical annotation using a morphosyntactic tagging method, and semantic annotation by operating a named-entity recognition system. In this context, we present an analysis system for Middle French, a language in the course of evolution whose spelling, inflectional system, and syntax are not stable. Texts in Middle French are distinguished mainly by the absence of a normalized orthography and by the geographical and chronological variability of medieval lexicons. The main objective is to present a system dedicated to the construction of linguistic resources, in particular electronic dictionaries, based on morphological rules. We then present the steps we carried out to build a morphosyntactic tagger that aims at automatically producing contextual analyses using disambiguation grammars. Finally, we retrace the path that led us to set up local grammars for finding named entities. To this end, we built MEDITEXT, a corpus of Middle French texts from between the end of the thirteenth and the fifteenth centuries.
Chartron, Ghislaine. "Analyse des corpus de données textuelles : sondage de flux d'informations." Grenoble 2 : ANRT, 1988. http://catalogue.bnf.fr/ark:/12148/cb37612583z.
Bouillot, Flavien. "Classification de textes : de nouvelles pondérations adaptées aux petits volumes." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS167.
Classification is omnipresent in everyday life, and often unconscious. In decision-making, for example, when faced with something (an object, an event, a person), we instinctively think of similar elements in order to adapt our choices and behaviour. This assignment to a particular category is based on past experience and on the characteristics of the element: the larger and more accurate the experience, the more relevant the decision. The same holds when we need to categorize a document based on its content, for example to detect whether it is a children's story or a philosophical treatise. This processing is of course more effective if we have a large number of works of each category and if the books contain many words. In this thesis we address precisely the problem of decision-making when few training documents are available and when the documents contain a limited number of words. For this we propose a new approach based on new term weightings, which lets us accurately determine the weight to be given to the words composing a document. To optimize processing, we propose a configurable approach: five parameters make our approach adaptable, regardless of the classification problem at hand. Numerous experiments have been conducted on various types of documents, in different languages and in different configurations. Depending on the corpus, they show that our proposal achieves results superior to the best approaches in the literature on small-dataset problems. The use of parameters adds complexity, since optimal values must then be determined. Detecting the best settings and best algorithms is a difficult task, whose difficulty is theorized in the No-Free-Lunch theorem. We treat this second problem by proposing a new meta-classification approach based on the concepts of distance and semantic similarity.
Specifically, we propose new meta-features for the classification of documents. This original approach achieves results similar to the best approaches in the literature while providing additional features. In conclusion, the work presented in this manuscript has been integrated into various technical implementations: one in the Weka software, one in an industrial prototype, and a third in the product of the company that funded this work.
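The classical starting point for such term weightings is TF-IDF; the thesis's own parameterized weightings differ, so the sketch below only shows the baseline scheme such proposals build upon:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Baseline term weighting: tf * idf for each term of each document.

    `docs` is a list of token lists. Words occurring in every document
    get an idf of zero and thus carry no weight.
    """
    n = len(docs)
    df = Counter(word for doc in docs for word in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({w: (c / len(doc)) * math.log(n / df[w])
                        for w, c in tf.items()})
    return weights
```

On small corpora with short documents, frequency estimates like these are noisy, which is precisely the regime the thesis's adapted weightings target.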
Flaminio, Silvia. "(Se) représenter les barrages : (a)ménagement, concessions et controverses." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEN071/document.
The aim of this PhD thesis is to study representations of and narratives about dams, which are often controversial infrastructures. While the symbolic role of dams has been underlined in the literature, few studies actually focus on the perception of dams and on their spatial and temporal trajectories. Building on the literature of social and cultural geography on representation, and on the writings of political ecology on discourse, this thesis confronts different sources (newspapers, interviews, and archives), study areas (in France and Australia), and methodological approaches (quantitative and qualitative) in order to follow the discursive evolution of hydraulic infrastructure. The points of view of various stakeholders are also considered: inhabitants, engineers and hydraulic institutions, opponents of dams, administrations in charge of nature protection, and scientists who produce environmental knowledge. From a methodological perspective, the dissertation highlights the limits of certain materials and illustrates the necessity of considering different sources in parallel. The results show the evolution of waterscapes and of hydrosocial spaces and cycles (the gradual concessions made to environmentalists at the expense of hydraulic bureaucracies), but they also illustrate, from a broader perspective, the production and circulation of discourses on the environment (the disaggregation of a Promethean discourse on nature and the multiplication of different, sometimes opposing, representations of the environment), particularly during conflicts and controversies.
Morvan, Jérémy. "La gouvernance d'entreprise managériale : positionnement et rôle des gérants de fonds socialement responsables." PhD thesis, Université de Bretagne occidentale - Brest, 2005. http://tel.archives-ouvertes.fr/tel-00011421.
In the first part, we develop a theoretical approach to governance. In the first chapter, we present agency theory and stakeholder theory in order to identify the actors of the productive process. In the second chapter, we seek to move the paradigm forward by presenting a model of the legitimacy of power within the firm.
In the second part, we produce an empirical approach to governance. The objective is to understand how the firm's pragmatic, cognitive, and moral legitimacies intertwine in its search for partners' support. In the third chapter, an analysis of textual data identifies the financial, stakeholder-oriented, and citizenship expectations that socially responsible (SR) funds direct at companies. In the fourth chapter, we compare the performance of SR and conventional funds and indices.
Tagny, Ngompe Gildas. "Méthodes D'Analyse Sémantique De Corpus De Décisions Jurisprudentielles." Thesis, IMT Mines Alès, 2020. http://www.theses.fr/2020EMAL0002.
A case law is a corpus of judicial decisions representing the way in which laws are interpreted to resolve disputes. It is essential for the lawyers who analyze it in order to understand and anticipate judges' decision-making. Its exhaustive manual analysis is difficult because of its immense volume and the unstructured nature of the documents, and the estimation of judicial risk by individuals is thus impossible, as they are also confronted with the complexity of the judicial system and its language. Automating the analysis of decisions enables an exhaustive extraction of the knowledge relevant to structuring case law for descriptive and predictive analyses. In order to make the comprehension of a body of case law exhaustive and more accessible, this thesis deals with the automation of some important tasks in the expert analysis of court decisions. First, we study the application of probabilistic sequence-labeling models to the detection of the sections that structure court decisions, of legal entities, and of citations of legal rules. Then, the identification of the parties' demands is studied. The proposed approach for the recognition of the requested and granted amounts exploits the proximity between sums of money and automatically learned key phrases. We also show that the direction of the judges' ruling is identifiable either from predefined keywords or by a classification of decisions. Finally, for a given category of demands, the situations or factual circumstances in which those demands are made are discovered by clustering the decisions. For this purpose, a method for learning a similarity distance is proposed and compared with established distances. The thesis discusses the experimental results obtained on manually annotated real data and concludes with a demonstration of applications to the descriptive analysis of a large corpus of French court decisions.
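The proximity idea used for requested and granted amounts can be illustrated by matching sums of money near key phrases. In the thesis the key phrases are learned automatically, whereas this toy version takes them as input; the regex and function names are assumptions.

```python
import re

AMOUNT = re.compile(r"(\d[\d\s]*(?:[.,]\d+)?)\s*(?:euros?|€)", re.IGNORECASE)

def requested_amounts(text, keyphrases, window=60):
    """Sums of money found within `window` characters of a key phrase.

    Returns the matched amounts with spaces removed and commas
    normalized to decimal points.
    """
    hits = []
    for m in AMOUNT.finditer(text):
        left = max(0, m.start() - window)
        context = text[left:m.end() + window].lower()
        if any(kp.lower() in context for kp in keyphrases):
            hits.append(m.group(1).strip().replace(" ", "").replace(",", "."))
    return hits
```

A demand sentence such as "sollicite ... 1500 euros" then yields the amount, while sums far from any demand cue are ignored.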
Lapierre, Dominique. "Diversité culturelle et religieuse dans le Devisement du monde de Marco Polo." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMR117.
The main goal of this dissertation is to propose a new reading of Marco Polo's Travels, also known as the Devisement du monde, the Description of the World, or Il Milione. This study is based both on Marco Polo's description of the peoples living on the other side of the world and on the critical reception of his book. When Marco Polo left Venice, the prevailing opposition between West and East was mainly grounded in the duality opposing Christians and Saracens. Through his travels and his stay at Kubilai Khan's Mongol court, however, the young man encountered more complex issues relating to religious beliefs and to practices linked to philosophical movements barely known in the Western world. "The observer of religions", as the historian P. Ménard calls him, seems fascinated by the cultural and religious diversity he encounters in the Mongol empire; yet many differences from his own culture were difficult to grasp and to report. In this study, we focus particularly on the many versions and translations of these descriptions, written in a context of political and religious turmoil. Following the works of C. Dutschke and C. Gadrat on reception theory applied to the Travels, this diachronic research is founded on ten manuscripts and six editions dating from the early 14th century to the late 19th century. The impact of Marco Polo's account is not studied here through the circulation of his Travels or the number of authors mentioning it in their own writings, but rather in relation to the text itself, along with its paratext, miniatures, and illustrations. All these elements provide valuable information about its reception through the ages and about the expectations of its potential audience, which also evolved over time. All the versions and translations in our corpus have been digitized, and text-analysis tools allowed us to reconcile close reading and data processing while analyzing the text.
Torres, Aguilar Sergio. "Un modèle de reconnaissance automatique des entités nommées et des structures textuelles pour les corpus diplomatiques médiolatins." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLV081.
In this thesis, we present two computer models to structure textual information for large databases of medieval charters. The two models, one applied to the recognition of named entities, the other to the detection of parts of the diplomatics discourse, are supervised Conditional random fields (CRF) models trained on a hand-annotated corpus of medieval charters. ( orpus Burgundiae Medii Aevi or CBMA).The main Named Entity Recognition model has proven to be robust in its application to widely varying corpora in size, chronology and origin. The secondary model detecting parts of the diplomatic discourse, although less efficient, remains valid as a structuring tool. At the moment both can be used for indexing and studying a wide variety of diplomatics sources, thus saving huge human efforts.We have developed different solutions to overcome the gap between model's dependence on its original training-set and its ability to be applied to other corpora. Similarly, various corrections and additions were made to the golden-corpus from several historical and linguistic analysis concerning writing phenomena in charters, which greatly helped to improve the initial performance.In a later step we applied our automatic tools in the recognition of names of people, places and parts of the diplomatics discourse on thousands of charters from the CBMA corpus in order to study different questions concerning historical science and diplomatics. These studies concern the semi-automatic dating of a non-dated cartulary; the evolution of the spatial vocabulary in the charters of the central Middle Ages and the indexing of charters from their scriptural modules, in particular formulae of the charter protocols. 
This studies has a twofold purpose: on the one hand have shown different strategies for abstracting and adapting to the automatic processing well-known methods of research in history; on the other hand, seek to provide us tools with an applicative framework to obtain relevant knowledge to the historical science using massive processing
Hô, Dinh Océane. "Caractérisation différentielle de forums de discussion sur le VIH en vietnamien et en français : Éléments pour la fouille comportementale du web social." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCF022/document.
The standard discourse produced by official organisations is confronted with the unofficial or informal discourse of the social web. Empowering people to express themselves results in a new balance of authority, when it comes to knowledge and changes the way people learn. Social web discourse is available to each and everyone and its size is growing fast, which opens up new fields for both humanities and social sciences to investigate. The latter, however, are not equipped to engage with such complex and little-analysed data. The aim of this dissertation is to investigate how far social web discourse can help supplement official discourse. In it we set out a method to collect and analyse data that is in line with the characteristics of a digital environment, namely data size, anonymity, transience, structure. We focus on forums, where such discourse is built, and test our method on a specific social issue, ie the HIV/AIDS epidemic in Vietnam. This field of investigation encompasses several related questions that have to do with health, society, the evolution of morals, the mismatch between different kinds of discourse. Our study is also grounded in the analysis of a comparable French corpus dealing with the same topic, whose genre and discourse characteristics are equivalent to those of the Vietnamese one: this two-pronged research highlights the specific features of different socio-cultural environments
Cherrabi, El Alaoui Nezha. "Un prisme sémantique des brevets par thésaurus interposés : positionnement, essais et applications." Electronic Thesis or Diss., Toulon, 2020. http://www.theses.fr/2020TOUL4003.
We live in an information society, characterized by an explosion of data available on the web and in different databases. Researchers in the field of information stress the need for relevant information. Information literacy has always been the challenge for humanity to maintain its survival, now information must be of a sufficient degree of reliability to avoid polluting knowledge. The patent is a multidimensional source, a leading source of information. The instrumented analysis of patent data is becoming a necessity and constitutes, for companies, industrialists and the State, a resource for the most efficient measurement of inventive activity, for an objective approach. Searching patent databases is a complex task for several reasons, the number of existing patents is very high and increasing rapidly, keyword searches do not yield satisfactory results, large companies use professionals capable of performing targeted and efficient searches, which is often not the case for university researchers, students and other profiles.Hence the need for the machine to help experts and non-experts alike to better exploit patent information. Thus, we propose a method to accompany the user in the use of this documentation. This method is based on a standardized reference system of man-made technical principles, which are themselves described by terminology sets that we combine with natural language processing (NLP) tools to dispense with the editorial forms of patents and to extend the associated vocabularies
Afzali, Said Abdoul Razeq. "Analyse morphosyntaxique automatique du dari, persan d'Afghanistan, et mise au point d'un système d'interrogation de bases de données textuelles en langage naturel." Lille 3 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb375953243.
Marine, Cadoret. "Analyse factorielle de données de catégorisation. : Application aux données sensorielles." Rennes, Agrocampus Ouest, 2010. http://www.theses.fr/2010NSARG006.
In sensory analysis, holistic approaches in which objects are considered as a whole are increasingly used to collect data. Their interest comes on a one hand from their ability to acquire other types of information as the one obtained by traditional profiling methods and on the other hand from the fact they require no special skills, which makes them feasible by all subjects. Categorization (or free sorting), in which subjects are asked to provide a partition of objects, belongs to these approaches. The first part of this work focuses on categorization data. After seeing that this method of data collection is relevant, we focus on the statistical analysis of these data through the research of Euclidean representations. The proposed methodology which consists in using factorial methods such as Multiple Correspondence Analysis (MCA) or Multiple Factor Analysis (MFA) is also enriched with elements of validity. This methodology is then illustrated by the analysis of two data sets obtained from beers on a one hand and perfumes on the other hand. The second part is devoted to the study of two data collection methods related to categorization: sorted Napping® and hierarchical sorting. For both data collections, we are also interested in statistical analysis by adopting an approach similar to the one used for categorization data. The last part is devoted to the implementation in the R software of functions to analyze the three kinds of data that are categorization data, hierarchical sorting data and sorted Napping® data
Gomes, Da Silva Alzennyr. "Analyse des données évolutives : application aux données d'usage du Web." Phd thesis, Université Paris Dauphine - Paris IX, 2009. http://tel.archives-ouvertes.fr/tel-00445501.
Gomes, da Silva Alzennyr. "Analyse des données évolutives : Application aux données d'usage du Web." Paris 9, 2009. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2009PA090047.
Nowadays, more and more organizations are becoming reliant on the Internet. The Web has become one of the most widespread platforms for information change and retrieval. The growing number of traces left behind user transactions (e. G. : customer purchases, user sessions, etc. ) automatically increases the importance of usage data analysis. Indeed, the way in which a web site is visited can change over time. These changes can be related to some temporal factors (day of the week, seasonality, periods of special offer, etc. ). By consequence, the usage models must be continuously updated in order to reflect the current behaviour of the visitors. Such a task remains difficult when the temporal dimension is ignored or simply introduced into the data description as a numeric attribute. It is precisely on this challenge that the present thesis is focused. In order to deal with the problem of acquisition of real usage data, we propose a methodology for the automatic generation of artificial usage data over which one can control the occurrence of changes and thus, analyse the efficiency of a change detection system. Guided by tracks born of some exploratory analyzes, we propose a tilted window approach for detecting and following-up changes on evolving usage data. In order measure the level of changes, this approach applies two external evaluation indices based on the clustering extension. The proposed approach also characterizes the changes undergone by the usage groups (e. G. Appearance, disappearance, fusion and split) at each timestamp. Moreover, the refereed approach is totally independent of the clustering method used and is able to manage different kinds of data other than usage data. The effectiveness of this approach is evaluated on artificial data sets of different degrees of complexity and also on real data sets from different domains (academic, tourism, e-business and marketing)
Peng, Tao. "Analyse de données loT en flux." Electronic Thesis or Diss., Aix-Marseille, 2021. http://www.theses.fr/2021AIXM0649.
Since the advent of the IoT (Internet of Things), we have witnessed an unprecedented growth in the amount of data generated by sensors. To exploit this data, we first need to model it, and then we need to develop analytical algorithms to process it. For the imputation of missing data from a sensor f, we propose ISTM (Incremental Space-Time Model), an incremental multiple linear regression model adapted to non-stationary data streams. ISTM updates its model by selecting: 1) data from sensors located in the neighborhood of f, and 2) the near-past most recent data gathered from f. To evaluate data trustworthiness, we propose DTOM (Data Trustworthiness Online Model), a prediction model that relies on online regression ensemble methods such as AddExp (Additive Expert) and BNNRW (Bagging NNRW) for assigning a trust score in real time. DTOM consists: 1) an initialization phase, 2) an estimation phase, and 3) a heuristic update phase. Finally, we are interested predicting multiple outputs STS in presence of imbalanced data, i.e. when there are more instances in one value interval than in another. We propose MORSTS, an online regression ensemble method, with specific features: 1) the sub-models are multiple output, 2) adoption of a cost sensitive strategy i.e. the incorrectly predicted instance has a higher weight, and 3) management of over-fitting by means of k-fold cross-validation. Experimentation with with real data has been conducted and the results were compared with reknown techniques
Sibony, Eric. "Analyse mustirésolution de données de classements." Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0036/document.
This thesis introduces a multiresolution analysis framework for ranking data. Initiated in the 18th century in the context of elections, the analysis of ranking data has attracted a major interest in many fields of the scientific literature : psychometry, statistics, economics, operations research, machine learning or computational social choice among others. It has been even more revitalized by modern applications such as recommender systems, where the goal is to infer users preferences in order to make them the best personalized suggestions. In these settings, users express their preferences only on small and varying subsets of a large catalog of items. The analysis of such incomplete rankings poses however both a great statistical and computational challenge, leading industrial actors to use methods that only exploit a fraction of available information. This thesis introduces a new representation for the data, which by construction overcomes the two aforementioned challenges. Though it relies on results from combinatorics and algebraic topology, it shares several analogies with multiresolution analysis, offering a natural and efficient framework for the analysis of incomplete rankings. As it does not involve any assumption on the data, it already leads to overperforming estimators in small-scale settings and can be combined with many regularization procedures for large-scale settings. For all those reasons, we believe that this multiresolution representation paves the way for a wide range of future developments and applications
Vidal, Jules. "Progressivité en analyse topologique de données." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS398.
Topological Data Analysis (TDA) forms a collection of tools that enable the generic and efficient extraction of features in data. However, although most TDA algorithms have practicable asymptotic complexities, these methods are rarely interactive on real-life datasets, which limits their usability for interactive data analysis and visualization. In this thesis, we aimed at developing progressive methods for the TDA of scientific scalar data, that can be interrupted to swiftly provide a meaningful approximate output and that are able to refine it otherwise. First, we introduce two progressive algorithms for the computation of the critical points and the extremum-saddle persistence diagram of a scalar field. Next, we revisit this progressive framework to introduce an approximation algorithm for the persistence diagram of a scalar field, with strong guarantees on the related approximation error. Finally, in a effort to perform visual analysis of ensemble data, we present a novel progressive algorithm for the computation of the discrete Wasserstein barycenter of a set of persistence diagrams, a notoriously computationally intensive task. Our progressive approach enables the approximation of the barycenter within interactive times. We extend this method to a progressive, time-constraint, topological ensemble clustering algorithm
Sibony, Eric. "Analyse mustirésolution de données de classements." Electronic Thesis or Diss., Paris, ENST, 2016. http://www.theses.fr/2016ENST0036.
This thesis introduces a multiresolution analysis framework for ranking data. Initiated in the 18th century in the context of elections, the analysis of ranking data has attracted a major interest in many fields of the scientific literature : psychometry, statistics, economics, operations research, machine learning or computational social choice among others. It has been even more revitalized by modern applications such as recommender systems, where the goal is to infer users preferences in order to make them the best personalized suggestions. In these settings, users express their preferences only on small and varying subsets of a large catalog of items. The analysis of such incomplete rankings poses however both a great statistical and computational challenge, leading industrial actors to use methods that only exploit a fraction of available information. This thesis introduces a new representation for the data, which by construction overcomes the two aforementioned challenges. Though it relies on results from combinatorics and algebraic topology, it shares several analogies with multiresolution analysis, offering a natural and efficient framework for the analysis of incomplete rankings. As it does not involve any assumption on the data, it already leads to overperforming estimators in small-scale settings and can be combined with many regularization procedures for large-scale settings. For all those reasons, we believe that this multiresolution representation paves the way for a wide range of future developments and applications
Périnel, Emmanuel. "Segmentation en analyse de données symboliques : le cas de données probabilistes." Paris 9, 1996. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1996PA090079.
Aaron, Catherine. "Connexité et analyse des données non linéaires." Phd thesis, Université Panthéon-Sorbonne - Paris I, 2005. http://tel.archives-ouvertes.fr/tel-00308495.
Darlay, Julien. "Analyse combinatoire de données : structures et optimisation." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00683651.
Operto, Grégory. "Analyse structurelle surfacique de données fonctionnelles cétrébrales." Aix-Marseille 3, 2009. http://www.theses.fr/2009AIX30060.
Functional data acquired by magnetic resonance contain a measure of the activity in every location of the brain. If many methods exist, the automatic analysis of these data remains an open problem. In particular, the huge majority of these methods consider these data in a volume-based fashion, in the 3D acquisition space. However, most of the activity is generated within the cortex, which can be considered as a surface. Considering the data on the cortical surface has many advantages : on one hand, its geometry can be taken into account in every processing step, on the other hand considering the whole volume reduces the detection power of usually employed statistical tests. This thesis hence proposes an extension of the application field of volume-based methods to the surface-based domain by adressing problems such as projecting data onto the surface, performing surface-based multi-subjects analysis, and estimating results validity
Le, Béchec Antony. "Gestion, analyse et intégration des données transcriptomiques." Rennes 1, 2007. http://www.theses.fr/2007REN1S051.
Aiming at a better understanding of diseases, transcriptomic approaches allow the analysis of several thousands of genes in a single experiment. To date, international standard initiatives have allowed the utilization of large quantity of data generated using transcriptomic approaches by the whole scientific community, and a large number of algorithms are available to process and analyze the data sets. However, the major challenge remaining to tackle is now to provide biological interpretations to these large sets of data. In particular, their integration with additional biological knowledge would certainly lead to an improved understanding of complex biological mechanisms. In my thesis work, I have developed a novel and evolutive environment for the management and analysis of transcriptomic data. Micro@rray Integrated Application (M@IA) allows for management, processing and analysis of large scale expression data sets. In addition, I elaborated a computational method to combine multiple data sources and represent differentially expressed gene networks as interaction graphs. Finally, I used a meta-analysis of gene expression data extracted from the literature to select and combine similar studies associated with the progression of liver cancer. In conclusion, this work provides a novel tool and original analytical methodologies thus contributing to the emerging field of integrative biology and indispensable for a better understanding of complex pathophysiological processes
Abdali, Abdelkebir. "Systèmes experts et analyse de données industrielles." Lyon, INSA, 1992. http://www.theses.fr/1992ISAL0032.
To analyses industrial process behavio, many kinds of information are needed. As tye ar mostly numerical, statistical and data analysis methods are well-suited to this activity. Their results must be interpreted with other knowledge about analysis prcess. Our work falls within the framework of the application of the techniques of the Artificial Intelligence to the Statistics. Its aim is to study the feasibility and the development of statistical expert systems in an industrial process field. The prototype ALADIN is a knowledge-base system designed to be an intelligent assistant to help a non-specialist user analyze data collected on industrial processes, written in Turbo-Prolong, it is coupled with the statistical package MODULAD. The architecture of this system is flexible and combing knowledge with general plants, the studied process and statistical methods. Its validation is performed on continuous manufacturing processes (cement and cast iron processes). At present time, we have limited to principal Components analysis problems
David, Claire. "Analyse de XML avec données non-bornées." Paris 7, 2009. http://www.theses.fr/2009PA077107.
The motivation of the work is the specification and static analysis of schema for XML documents paying special attention to data values. We consider words and trees whose positions are labeled both by a letter from a finite alphabet and a data value from an infinite domain. Our goal is to find formalisms which offer good trade-offs between expressibility, decidability and complexity (for the satisfiability problem). We first study an extension of first-order logic with a binary predicate representing data equality. We obtain interesting some interesting results when we consider the two variable fragment. This appraoch is elegant but the complexity results are not encouraging. We proposed another formalism based data patterns which can be desired, forbidden or any boolean combination thereof. We drw precisely the decidability frontier for various fragments on this model. The complexity results that we get, while still high, seems more amenable. In terms of expressivity theses two approaches are orthogonal, the two variable fragment of the extension of FO can expressed unary key and unary foreign key while the boolean combination of data pattern can express arbitrary key but can not express foreign key
Carvalho, Francisco de. "Méthodes descriptives en analyse de données symboliques." Paris 9, 1992. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1992PA090025.
Royer, Jean-Jacques. "Analyse multivariable et filtrage des données régionalisées." Vandoeuvre-les-Nancy, INPL, 1988. http://www.theses.fr/1988NAN10312.
Faye, Papa Abdoulaye. "Planification et analyse de données spatio-temporelles." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22638/document.
Spatio-temporal modeling allows to make the prediction of a regionalized variable at unobserved points of a given field, based on the observations of this variable at some points of field at different times. In this thesis, we proposed a approach which combine numerical and statistical models. Indeed by using the Bayesian methods we combined the different sources of information : spatial information provided by the observations, temporal information provided by the black-box and the prior information on the phenomenon of interest. This approach allowed us to have a good prediction of the variable of interest and a good quantification of incertitude on this prediction. We also proposed a new method to construct experimental design by establishing a optimality criterion based on the uncertainty and the expected value of the phenomenon
Jamal, Sara. "Analyse spectrale des données du sondage Euclid." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0263.
Large-scale surveys, as Euclid, will produce a large set of data that will require the development of fully automated data-processing pipelines to analyze the data, extract crucial information and ensure that all requirements are met. From a survey, the redshift is an essential quantity to measure. Distinct methods to estimate redshifts exist in the literature but there is no fully-automated definition of a reliability criterion for redshift measurements. In this work, we first explored common techniques of spectral analysis, as filtering and continuum extraction, that could be used as preprocessing to improve the accuracy of spectral features measurements, then focused on developing a new methodology to automate the reliability assessment of spectroscopic redshift measurements by exploiting Machine Learning (ML) algorithms and features of the posterior redshift probability distribution function (PDF). Our idea consists in quantifying, through ML and zPDFs descriptors, the reliability of a redshift measurement into distinct partitions that describe different levels of confidence. For example, a multimodal zPDF refers to multiple (plausible) redshift solutions possibly with similar probabilities, while a strong unimodal zPDF with a low dispersion and a unique and prominent peak depicts of a more "reliable" redshift estimate. We assess that this new methodology could be very promising for next-generation large spectroscopic surveys on the ground and space such as Euclid and WFIRST
Bobin, Jérôme. "Diversité morphologique et analyse de données multivaluées." Paris 11, 2008. http://www.theses.fr/2008PA112121.
Lambert, Thierry. "Réalisation d'un logiciel d'analyse de données." Paris 11, 1986. http://www.theses.fr/1986PA112274.
Fraisse, Bernard. "Automatisation, traitement du signal et recueil de données en diffraction x et analyse thermique : Exploitation, analyse et représentation des données." Montpellier 2, 1995. http://www.theses.fr/1995MON20152.