Academic literature on the topic 'Plongements de documents'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Plongements de documents.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Plongements de documents"
Konstantinovskaya, Elena, Gennady Ivanov, Jean-Louis Feybesse, and Jean-Luc Lescuyer. "Structural Features of the Central Labrador Trough: A Model for Strain Partitioning, Differential Exhumation and Late Normal Faulting in a Thrust Wedge under Oblique Shortening." Geoscience Canada, March 29, 2019, 5–30. http://dx.doi.org/10.12789/geocanj.2019.46.143.
Full text
Dissertations / Theses on the topic "Plongements de documents"
Mazoyer, Béatrice. "Social Media Stories. Event detection in heterogeneous streams of documents applied to the study of information spreading across social and news media." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASC009.
Full text
Social Media, and Twitter in particular, has become a privileged source of information for journalists in recent years. Most of them monitor Twitter in search of newsworthy stories. This thesis aims to investigate and quantify the effect of this technological change on editorial decisions. Does the popularity of a story affect the way it is covered by traditional news media, regardless of its intrinsic interest? To highlight this relationship, we take a multidisciplinary approach at the crossroads of computer science and economics. First, we design a novel approach to collect a representative sample of 70% of all French tweets emitted during an entire year. Second, we study different types of algorithms to automatically discover tweets that relate to the same stories, testing several vector representations of tweets based on both text and text-image features. Third, we design a new method to group together Twitter events and media events. Finally, we design an econometric instrument to identify a causal effect of the popularity of an event on Twitter on its coverage by traditional media. We show that the popularity of a story on Twitter does have an effect on the number of articles devoted to it by traditional media, with an increase of about one article per 1,000 additional tweets.
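As a purely illustrative companion to the abstract above, the sketch below shows a greedy first-story-detection baseline that groups tweets into events whenever their embeddings (here, hypothetical pre-computed vectors) are similar enough; it is not the algorithm evaluated in the thesis.

```python
import numpy as np

def detect_events(tweet_vectors, threshold=0.7):
    """Assign each tweet to an existing event if its cosine similarity to the
    event centroid exceeds `threshold`; otherwise open a new event."""
    centroids, assignments = [], []
    for v in tweet_vectors:
        v = np.asarray(v, dtype=float)
        v = v / (np.linalg.norm(v) + 1e-12)
        if centroids:
            sims = np.array([c @ v for c in centroids])
            best = int(sims.argmax())
            if sims[best] >= threshold:
                assignments.append(best)
                updated = centroids[best] + v
                centroids[best] = updated / (np.linalg.norm(updated) + 1e-12)
                continue
        centroids.append(v)
        assignments.append(len(centroids) - 1)
    return assignments
```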
Morbieu, Stanislas. "Leveraging textual embeddings for unsupervised learning." Electronic Thesis or Diss., Université Paris Cité, 2020. http://www.theses.fr/2020UNIP5191.
Full text
Textual data is ubiquitous and is a useful pool of information for many companies. In particular, the web provides an almost inexhaustible source of textual data that can be used for recommendation systems, business or technological watch, information retrieval, etc. Recent advances in natural language processing have made it possible to capture the meaning of words in their context in order to improve automatic translation systems, text summarization, or the classification of documents into predefined categories. However, the majority of these applications often rely on significant human intervention to annotate corpora: in the context of supervised classification, for example, this annotation consists in providing algorithms with examples of category assignments for documents. The algorithm therefore learns to reproduce human judgment in order to apply it to new documents. The object of this thesis is to take advantage of these recent advances, which capture the semantics of text, and to use them in an unsupervised framework. The contributions of this thesis revolve around three main axes. First, we propose a method to transfer the information captured by a neural network to the co-clustering of documents and words. Co-clustering consists in partitioning the two dimensions of a data matrix simultaneously, thus forming both groups of similar documents and groups of coherent words. This facilitates the interpretation of a large corpus of documents, since it is possible to characterize groups of documents by groups of words, thereby summarizing a large corpus of text. More precisely, we train the Paragraph Vectors algorithm on an augmented dataset by varying the different hyperparameters, classify the documents from the different vector representations, and apply a consensus algorithm on the different partitions. A constrained co-clustering of the co-occurrence matrix between terms and documents is then applied to maintain the consensus partitioning. This method yields significantly better document partitioning on various document corpora and offers the interpretability provided by co-clustering. Secondly, we present a method for evaluating co-clustering algorithms that exploits vector representations of words, called word embeddings. Word embeddings are vectors constructed from large volumes of text, one major characteristic of which is that two semantically close words have embeddings that are close in cosine distance. Our method measures the match between the partition of the documents and the partition of the words, thus offering, in a totally unsupervised setting, a measure of the quality of the co-clustering. Thirdly, we are interested in recommending classified ads. We present a system that recommends similar classified ads when one is being viewed. The descriptions of classified ads are often short and syntactically incorrect, and the use of synonyms makes it difficult for traditional systems to accurately measure semantic similarity. In addition, the high renewal rate of classified ads that are still valid (product not sold) requires choices that keep computation time low. Our method, simple to implement, responds to this use case and is again based on word embeddings.
Using word embeddings has advantages but also involves some difficulties: the creation of such vectors requires choosing the values of some parameters, and the difference between the corpus on which the word embeddings were built upstream and the one on which they are used raises the problem of out-of-vocabulary words, which have no vector representation. To overcome these problems, we present an analysis of the impact of the different parameters on word embeddings, as well as a study of methods for dealing with out-of-vocabulary words.
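The following minimal sketch illustrates the general idea of an embedding-based recommender for short classified ads (averaging word embeddings, skipping out-of-vocabulary words, ranking by cosine similarity); it is only an illustration of the principle, not the system described in the thesis, and `embeddings` is assumed to be a plain word-to-vector dictionary.

```python
import numpy as np

def ad_vector(text, embeddings):
    """Average the embeddings of the known words; out-of-vocabulary words are skipped."""
    vectors = [embeddings[w] for w in text.lower().split() if w in embeddings]
    if not vectors:
        return None
    v = np.mean(vectors, axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

def recommend(query_ad, candidate_ads, embeddings, k=5):
    """Return the indices of the k candidate ads closest to the query by cosine similarity."""
    q = ad_vector(query_ad, embeddings)
    if q is None:
        return []
    scored = []
    for i, text in enumerate(candidate_ads):
        v = ad_vector(text, embeddings)
        if v is not None:
            scored.append((float(q @ v), i))
    return [i for _, i in sorted(scored, reverse=True)[:k]]
```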
Liu, Guogang. "Sur les lacets positifs des plongements legendriens lâches." Thesis, Nantes, 2016. http://www.theses.fr/2016NANT4045/document.
Full text
In the thesis, we have studied the problem of positive Legendrian isotopies, that is, isotopies that preserve the contact structure and whose Hamiltonian functions are positive. We have proved that for a loose Legendrian there exists a positive loop of Legendrian embeddings based at it. We treated this result in two cases. In the lower-dimensional case, we constructed positive loops by hand. In the higher-dimensional case, we applied advanced h-principle techniques. Given a loose Legendrian embedding, we first constructed, by holonomic approximation, a loop of Legendrian embeddings based at it which is positive away from a finite number of disks. Secondly, we deformed it into a positive loop using the idea of convex integration. The result has two immediate applications. First, we reprove, without holomorphic curve techniques, the theorem that spaces of contact elements are tight. Second, we proved that the contact product of an overtwisted contact manifold is overtwisted and that the diagonal is loose; furthermore, the diagonal lies in a positive loop. In the end, we have defined a partial order on the universal cover of the contactomorphism group by positive Legendrian isotopies in the contact product. It will help us study the properties of contactomorphisms via positive Legendrian isotopies.
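For readers unfamiliar with the terminology, a common formulation of positivity is sketched below; the notation is chosen here for illustration rather than taken from the thesis. With a contact form α for the cooriented contact structure ξ = ker α, a loop of Legendrian embeddings is positive when its contact Hamiltonian is positive.

```latex
% Sketch of the standard definition; notation is illustrative.
\[
  H_t(x) \;=\; \alpha\!\left(\frac{\partial \varphi_t}{\partial t}(x)\right) \;>\; 0
  \qquad \text{for all } t \in [0,1],\; x \in \Lambda,
\]
% where $\varphi_t : \Lambda \to (M, \xi = \ker\alpha)$ is the loop of Legendrian
% embeddings; positivity does not depend on the choice of $\alpha$ within a fixed coorientation.
```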
Gaillard, Loïc. "Espaces de Müntz, plongements de Carleson, et opérateurs de Cesàro." Thesis, Artois, 2017. http://www.theses.fr/2017ARTO0406/document.
Full text
For a sequence Λ = (λn) satisfying the Müntz condition Σn 1/λn < +∞ and for p ∈ [1, +∞), we define the Müntz space MpΛ as the closed subspace of Lp([0, 1]) spanned by the monomials yn : t ↦ tλn. The space M∞Λ is defined in the same way as a subspace of C([0, 1]). When the sequence (λn + 1/p)n is lacunary with a large ratio, we prove that the sequence of normalized Müntz monomials (gn) in Lp is (1 + ε)-isometric to the canonical basis of lp. In the case p = +∞, the monomials (yn) form a sequence which is (1 + ε)-isometric to the summing basis of c. These results are asymptotic refinements of a well-known theorem for lacunary sequences. On the other hand, for p ∈ [1, +∞), we investigate the Carleson measures for Müntz spaces, defined as the Borel measures μ on [0, 1) such that the embedding operator Jμ,p : MpΛ ⊂ Lp(μ) is bounded. When Λ is lacunary, we prove that if the (gn) are uniformly bounded in Lp(μ), then for any q > p, the measure μ is a Carleson measure for MqΛ. These questions are closely related to the behaviour of μ in the neighborhood of 1. We also find some geometric conditions on the behaviour of μ near the point 1 that ensure the compactness of Jμ,p, or its membership of some thinner operator ideals. More precisely, we estimate the approximation numbers of Jμ,p in the lacunary case, and we even obtain equivalents for particular lacunary sequences Λ. At last, we show that the essential norm of the Cesàro-mean operator Γp : Lp → Lp coincides with its norm, which is p'. This result is also valid for the Cesàro sequence operator. We introduce some Müntz subspaces of the Cesàro function spaces Cesp, for p ∈ [1, +∞]. We show that the value of the essential norm of the multiplication operator TΨ is ∥Ψ∥∞ in the Cesàro spaces. In the Müntz-Cesàro spaces, the essential norm of TΨ is equal to |Ψ(1)|.
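For reference, the classical Hardy inequality behind the last claims reads as follows; this is a standard statement, consistent with the abstract's value p' for the norm of the Cesàro-mean operator.

```latex
\[
  (\Gamma_p f)(x) \;=\; \frac{1}{x}\int_0^x f(t)\,dt,
  \qquad
  \|\Gamma_p\|_{L^p \to L^p} \;=\; p' \;=\; \frac{p}{p-1}, \qquad 1 < p < \infty ,
\]
% and the abstract asserts that the essential norm of $\Gamma_p$ coincides with this value.
```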
Catusse, Nicolas. "Spanners pour des réseaux géométriques et plongements dans le plan." Thesis, Aix-Marseille 2, 2011. http://www.theses.fr/2011AIX22119/document.
Full text
In this thesis, we study several problems related to the design of geometric networks and isometric embeddings into the plane. We start by considering the generalization of the classical Minimum Manhattan Network problem to all normed planes. We search for a minimum network that connects each pair of terminals by a shortest path in this norm. We propose a factor-2.5 approximation algorithm running in time O(mn^3), where n is the number of terminals and m is the number of directions of the unit ball. The second problem presented is an oriented version of the Minimum Manhattan Network problem: we want to obtain a minimum oriented network such that for each pair u, v of terminals, there is a shortest rectilinear path from u to v and another path from v to u. We describe a factor-2 approximation algorithm with complexity O(n^3), where n is the number of terminals, for this problem. Then we study the problem of finding a planar spanner (a subgraph which approximates the distances) of the Unit Disk Graph (UDG), which is used to model wireless ad hoc networks. We present an algorithm for computing a planar spanner with constant hop stretch factor for all UDGs. This algorithm uses only local properties and can be implemented in a distributed manner. Finally, we study the problem of recognizing metric spaces that can be isometrically embedded into the rectilinear plane, and we provide an optimal O(n^2)-time algorithm to solve this problem. We also study the generalization of this problem to all normed planes whose unit ball is a centrally symmetric convex polygon.
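As an illustration of the Manhattan-network property only (not of the approximation algorithms from the thesis), the sketch below checks whether a given weighted rectilinear network joins every pair of terminals by a path of length equal to their L1 distance; it assumes the network is given as a networkx graph whose nodes are the (x, y) points.

```python
import networkx as nx

def is_manhattan_network(graph, terminals):
    """Return True if every pair of terminals is joined by a path whose weighted
    length equals their L1 (rectilinear) distance."""
    for i, u in enumerate(terminals):
        for v in terminals[i + 1:]:
            l1 = abs(u[0] - v[0]) + abs(u[1] - v[1])
            dist = nx.shortest_path_length(graph, u, v, weight="weight")
            if dist > l1 + 1e-9:  # no shortest rectilinear path between u and v
                return False
    return True
```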
Netillard, François. "Plongements grossièrement Lipschitz et presque Lipschitz dans les espaces de Banach." Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCD020/document.
Full text
The central theme of this thesis is the study of embeddings of metric spaces into Banach spaces. The first study focuses on coarse Lipschitz embeddings between the James spaces Jp for p > 1 and p finite. We obtain that, for p and q different, Jq does not coarse Lipschitz embed into Jp. We also obtain, in the case where q < p, that the compression exponent of Jq in Jp is less than or equal to q/p. Another natural question is whether we have similar results for the dual spaces of James spaces. We obtain that, for p and q different, Jp* does not coarse Lipschitz embed into Jq*. Following this work, we establish a more general result about the coarse Lipschitz embeddability of a Banach space with a q-AUS norm into a Banach space with a p-AMUC norm for p < q. With the help of a renorming theorem, we also deduce a result about the Szlenk index. Moreover, after defining quasi-Lipschitz embeddability, which differs slightly from almost Lipschitz embeddability, we obtain the following result: for two Banach spaces X and Y, if X is crudely finitely representable with constant C (where C > 1) in every subspace of Y of finite codimension, then every proper subset M of X quasi-Lipschitz embeds into Y. To conclude, we obtain the following corollary: let X be a locally minimal Banach space, and let Y be a Banach space which is crudely finitely representable in X. Then, for M a proper subspace of Y, M quasi-Lipschitz embeds into X.
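The definitions underlying these statements are standard ones, sketched here with generic notation: the coarse Lipschitz embedding and the compression exponent.

```latex
% A map $f : X \to Y$ between metric spaces is a coarse Lipschitz embedding if
% there exist constants $A \ge 1$ and $B \ge 0$ such that
\[
  \frac{1}{A}\, d_X(x,y) - B \;\le\; d_Y\big(f(x), f(y)\big) \;\le\; A\, d_X(x,y) + B
  \qquad \text{for all } x, y \in X .
\]
% The compression exponent $\alpha_Y(X)$ is the supremum of the $\alpha \in [0,1]$ for which
% some coarse Lipschitz map $f$ satisfies $d_Y(f(x), f(y)) \gtrsim d_X(x,y)^{\alpha}$
% whenever $d_X(x,y)$ is large.
```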
Dutailly, Bruno. "Plongement de surfaces continues dans des surfaces discrètes épaisses." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0444/document.
Full text
In the context of the archaeological sciences, 3D images produced by computed tomography scanners are segmented into regions of interest corresponding to virtual objects, in order to carry out scientific analyses. These virtual objects are often used for the purpose of performing accurate measurements. Some of these analyses require extracting the surface of the regions of interest. This PhD falls within this framework and aims to improve the accuracy of surface extraction. We present in this document our contributions: first of all, the weighted HMH algorithm, whose objective is to precisely position a point at the interface between two materials. Applied to surface extraction, however, this method often leads to topology problems on the resulting surface. We therefore proposed two other methods: the discrete HMH method, which refines the 3D object segmentation, and the surface HMH method, which performs a constrained surface extraction ensuring a topologically correct surface. It is possible to chain these two methods on a pre-segmented 3D image in order to obtain a precise surface extraction of the objects of interest. These methods were evaluated on simulated CT-scan acquisitions of synthetic objects and on real acquisitions of archaeological artefacts.
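A minimal sketch of the underlying principle, assuming HMH refers to the classical half-maximum-height criterion for locating an interface along a 1D intensity profile; this is only an illustration, not the weighted, discrete, or surface HMH methods of the thesis.

```python
import numpy as np

def hmh_position(profile, low, high):
    """Return the sub-sample index where `profile` first crosses the mid-level
    between the two material intensities `low` and `high` (linear interpolation)."""
    half = 0.5 * (low + high)
    profile = np.asarray(profile, dtype=float)
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if (a - half) * (b - half) <= 0 and a != b:
            return i + (half - a) / (b - a)
    return None  # no crossing found on this profile
```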
Boroş, Emanuela. "Neural Methods for Event Extraction." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS302/document.
Full text
With the increasing amount of data and the exploding number of data sources, extracting information about events, whether from the perspective of acquiring knowledge or from a more directly operational perspective, becomes a more and more obvious need. This extraction nevertheless comes up against a recurring difficulty: most of the information is present in documents in textual form, thus unstructured and difficult for the machine to grasp. From the point of view of Natural Language Processing (NLP), the extraction of events from texts is the most complex form of Information Extraction (IE), which more generally encompasses the extraction of named entities and of the relations that bind them in texts. The event extraction task can be represented as a complex combination of relations linked to a set of empirical observations from texts. Compared to relations involving only two entities, there is therefore a new dimension that often requires going beyond the scope of the sentence, which constitutes an additional difficulty. In practice, an event is described by a trigger and a set of participants in that event whose values are text excerpts. While IE research has benefited significantly from manually annotated datasets for learning patterns for text analysis, the availability of these resources remains a significant problem. These datasets are often obtained through the sustained efforts of research communities, potentially complemented by crowdsourcing. In addition, many machine-learning-based IE approaches rely on the ability to extract large sets of manually defined features from text using sophisticated NLP tools. As a result, adaptation to a new domain is an additional challenge. This thesis presents several strategies for improving the performance of an Event Extraction (EE) system using neural approaches that exploit the morphological, syntactic, and semantic properties of word embeddings. These have the advantage of not requiring a priori domain-knowledge modeling and of automatically generating a much larger set of features from which to learn a model. More specifically, we proposed different deep learning models for two sub-tasks of EE: event detection, and argument detection and classification. Event Detection (ED) is considered an important subtask of event extraction since argument detection depends very directly on its outcome. ED specifically involves identifying instances of events in texts and classifying them into specific event types. Classically, the same event may appear under different expressions, and these expressions may themselves represent different events in different contexts, hence the difficulty of the task. Argument detection relies on the detection of the expression considered as triggering the event and ensures the recognition of the participants of the event. Among the difficulties to take into account, it should be noted that an argument can be common to several events and that it is not necessarily an easily recognizable named entity. As a preliminary to the introduction of our proposed models, we begin by presenting in detail a state-of-the-art model which constitutes the baseline. In-depth experiments are conducted on the use of different types of word embeddings and on the influence of the different hyperparameters of the model, using the ACE 2005 evaluation framework, a standard evaluation for this task. We then propose two new models to improve an event detection system.
One increases the context taken into account when predicting an event instance by using a sentential context, while the other exploits the internal structure of words by taking advantage of seemingly less obvious but essentially important morphological knowledge. We also reconsider argument detection as high-order relation extraction, and we analyze the dependence of arguments on the ED task.
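To make the event-detection setting concrete, here is a hedged sketch of a typical convolutional trigger classifier over word embeddings, in the spirit of the baseline discussed above; layer sizes and names are illustrative and are not the exact architecture evaluated in the thesis.

```python
import torch
import torch.nn as nn

class TriggerClassifier(nn.Module):
    """Toy convolutional sentence encoder that scores each sentence for event types."""
    def __init__(self, vocab_size, embed_dim, num_event_types, kernel=3, channels=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, channels, kernel, padding=kernel // 2)
        self.out = nn.Linear(channels, num_event_types)

    def forward(self, token_ids):                   # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)   # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))                 # (batch, channels, seq_len)
        x = x.max(dim=2).values                      # max-pooling over the sentence
        return self.out(x)                           # (batch, num_event_types)
```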
Bérard, Alexandre. "Neural machine translation architectures and applications." Thesis, Lille 1, 2018. http://www.theses.fr/2018LIL1I022/document.
Full text
This thesis is centered on two main objectives: adaptation of Neural Machine Translation techniques to new tasks, and research replication. Our efforts towards research replication have led to the production of two resources: MultiVec, a framework that facilitates the use of several techniques related to word embeddings (Word2vec, Bivec and Paragraph Vector); and a framework for Neural Machine Translation that implements several architectures and can be used for regular MT, Automatic Post-Editing, and Speech Recognition or Translation. These two resources are publicly available and now extensively used by the research community. We extend our NMT framework to work on three related tasks: Machine Translation (MT), Automatic Speech Translation (AST) and Automatic Post-Editing (APE). For the machine translation task, we replicate pioneering neural-based work and carry out a case study on TED talks where we advance the state of the art. Automatic speech translation consists in translating speech in one language into text in another language. In this thesis, we focus on the unexplored problem of end-to-end speech translation, which does not use an intermediate source-language text transcription. We propose the first model for end-to-end AST and apply it to two benchmarks: translation of audiobooks and of basic travel expressions. Our final task is automatic post-editing, which consists in automatically correcting the outputs of an MT system in a black-box scenario, by training on data produced by human post-editors. We replicate and extend published results on the WMT 2016 and 2017 tasks, and propose new neural architectures for low-resource automatic post-editing.
Mabrouki, Mbarka. "Etude de la préservation des propriétés temporelles des réseaux de régulation génétique au travers du plongement : vers une caractérisation des systèmes complexes par l'émergence de propriétés." Thesis, Evry-Val d'Essonne, 2010. http://www.theses.fr/2010EVRY0039/document.
Full text
The thesis proposes a generic framework for denoting specifications of basic system components and for characterizing the notion of a complex system by the presence of emergent properties, which are either in conflict with the properties attached to the constituent subsystems or directly due to the cooperation of the subsystems. The framework is instantiated for the cases of relative systems and genetic regulatory networks.