Academic literature on the topic "Modèles transformeurs"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Modèles transformeurs".
Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Modèles transformeurs"
Giband, David, and Corinne Siino. "La rénovation urbaine en France : entre pilotage à distance et fabrique urbaine". Sociologie et sociétés 45, no. 2 (February 21, 2014): 153–76. http://dx.doi.org/10.7202/1023177ar.
Brunier, Sylvain, and Samuel Pinaud. "Au rythme du capital". Revue française de sociologie 63, no. 3 (July 20, 2023): 527–54. http://dx.doi.org/10.3917/rfs.633.0527.
Hamon, Benoît, and Philippe Frémeaux. ""Transformer en profondeur notre modèle"". Les dossiers d’alternatives économiques Hors-série 5, HS5 (January 1, 2017): 90–91. http://dx.doi.org/10.3917/dae.hs5.0090.
Nicolaï, Robert. "How Languages Change and How They Adapt: Some Challenges for the Future. Dynamique du langage et élaboration des langues : quelques défis à relever". Journal of Language Contact 2, no. 1 (2008): 311–52. http://dx.doi.org/10.1163/000000008792525345.
Bach, Jean-Christophe. "Une approche hybride GPL-DSL pour transformer des modèles". Techniques et sciences informatiques 33, no. 3 (February 2014): 175–201. http://dx.doi.org/10.3166/tsi.33.175-201.
Lefebvre, Maxime. "L’Europe, modèle de paix structurelle". Questions internationales 99-100, no. 4 (October 23, 2019): 129–36. http://dx.doi.org/10.3917/quin.099.0129.
Homs, Oriol. "L'évolution de la fonction technique dans les industries espagnoles. La situation des ingénieurs dans les années 1980". Sociétés contemporaines 6, no. 2 (July 1, 1991): 81–92. http://dx.doi.org/10.3917/soco.p1991.6n1.0081.
Gryson, Olivier. "Qui initiera la transformation digitale de la santé?" médecine/sciences 34, no. 6-7 (June 2018): 587–89. http://dx.doi.org/10.1051/medsci/20183406019.
Picon, Antoine. "Révolution numérique, paternité et propriété intellectuelle". Le Visiteur N° 23, no. 1 (March 1, 2018): 75–85. http://dx.doi.org/10.3917/visit.023.0072.
Tanguy, François. "La gestion des ressources humaines à la DGFiP, d’une gestion administrative à une approche personnalisée et prospective". Gestion & Finances Publiques, no. 5 (September 2021): 74–82. http://dx.doi.org/10.3166/gfp.2021.5.012.
Theses on the topic "Modèles transformeurs"
Ait Saada, Mira. "Unsupervised learning from textual data with neural text representations". Electronic Thesis or Diss., Université Paris Cité, 2023. http://www.theses.fr/2023UNIP7122.
The digital era generates enormous amounts of unstructured data such as images and documents, requiring specific processing methods to extract value from them. Textual data presents an additional challenge as it contains no numerical values. Word embeddings are techniques that transform text into numerical data, enabling machine learning algorithms to process it. Unsupervised tasks are a major challenge in industry, as they allow value creation from large amounts of data without requiring costly manual labeling. In this thesis, we explore the use of Transformer models for unsupervised tasks such as clustering, anomaly detection, and data visualization. We also propose methodologies to better exploit multi-layer Transformer models in an unsupervised context, improving the quality and robustness of document clustering while avoiding the choice of which layer to use and of the number of classes. Additionally, we investigate Transformer language models more deeply and their application to clustering, examining in particular transfer learning methods that fine-tune pre-trained models on a different task to improve their quality for future tasks. We demonstrate through an empirical study that post-processing methods based on dimensionality reduction are more advantageous than the fine-tuning strategies proposed in the literature. Finally, we propose a framework for detecting text anomalies in French adapted to two cases: one where the data concerns a specific topic and one where the data has multiple sub-topics. In both cases, we obtain results superior to the state of the art with significantly lower computation time.
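To make the clustering pipeline described above concrete, here is a minimal sketch of the general approach: embed documents with a pre-trained Transformer, apply dimensionality reduction as a post-processing step, then cluster. The checkpoint name, the choice of PCA, and the cluster count are illustrative assumptions, not the thesis's actual setup.

```python
# Minimal sketch: document clustering from Transformer embeddings with a
# dimensionality-reduction post-processing step. Assumes the
# sentence-transformers and scikit-learn packages; the checkpoint and
# hyperparameters are placeholders, not those used in the thesis.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

docs = [
    "The transformer architecture relies on self-attention.",
    "Attention layers let models weigh context tokens.",
    "Soccer teams compete in the national league.",
    "The championship final drew a record crowd.",
]

# 1. Embed each document with a pre-trained Transformer encoder.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(docs)            # shape: (n_docs, 384)

# 2. Post-process the embeddings with dimensionality reduction.
reduced = PCA(n_components=2).fit_transform(embeddings)

# 3. Cluster in the reduced space.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(reduced)
print(labels)  # e.g. [0 0 1 1]
```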
Belkaid, Zahir. "Modèles et outils pour la conception de composants magnétiques HF dédiés à l'électronique de puissance". Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTS016/document.
Magnetic components are essential constituents of power electronic converters in terms of volume and cost, particularly in switching power supplies. It is therefore essential to develop methods and software tools that can optimize the magnetic device design in relation to the conversion parameters. The design and optimization of a magnetic component involves several constraints imposed by the specifications, including the choice of electrical conductors and magnetic circuits, both in terms of materials and of their geometries. It is necessary to calculate the losses in these parts and to have thermal models that allow a better design by considering the major constraint, namely the operating temperatures of the different parts of the component. The current work describes the basics of a generic tool for the optimal design of magnetic components, based on both analytical and numerical modeling.
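Loss estimation of the kind mentioned above is often bootstrapped from the textbook Steinmetz equation; the form below is a standard starting point for such design tools, not necessarily the loss model implemented in this thesis.

```latex
% Steinmetz equation: specific core loss as a function of frequency f and
% peak flux density \hat{B}. k, \alpha, \beta are fitted per material datasheet.
P_v = k \, f^{\alpha} \, \hat{B}^{\beta}
```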
Hihat, Nabil. "Modèles quasi 3D pour l'analyse de structures présentant une anisotropie 3D". Thesis, Artois, 2010. http://www.theses.fr/2010ARTO0206/document.
This thesis focuses on the analysis and modeling of the magnetic flux distribution in electrical machines with an anisotropic laminated magnetic circuit. The anisotropy of magnetic sheets in transformers induces complex 3D phenomena in step-lap magnetic joints, where the sheets overlap. Moreover, in order to increase the energy efficiency of rotating machines, new structures based on grain-oriented electrical steel are being developed. However, an accurate 3D simulation of a laminated core with thin sheets and insulation of a few microns leads to very large computation times. In this context, we present a homogenization method whose purpose is to define the equivalent magnetic characteristics of any laminated core made of sheets and air gaps. Its formulation is based on energy minimization and magnetic flux conservation. The results of this method applied to a step-lap magnetic joint are compared with experimental measurements and with a 3D finite element model. The latter requires knowing the magnetic characteristics of the sheets in the rolling, transverse, and normal directions. The determination of the sheets' permeability in the normal direction is problematic and constitutes an original point of our study. Two methods, one analytical and one numerical, based on measurements obtained with a static characterization bench, make it possible to determine the normal permeability.
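For orientation, the classical mixing rules below give the equivalent permeability of a laminated stack with stacking factor f (iron fraction), sheet permeability \mu_s, and insulation permeability \mu_0. They follow from flux conservation and are textbook results, not necessarily the exact formulation derived in the thesis.

```latex
% Equivalent permeability of a sheet/air-gap stack (classical mixing rules).
% Flux parallel to the laminations: permeabilities combine in parallel.
\mu_{\parallel} = f\,\mu_s + (1 - f)\,\mu_0
% Flux normal to the laminations: permeabilities combine in series, which is
% why the normal direction is dominated by the thin insulation gaps.
\mu_{\perp} = \left( \frac{f}{\mu_s} + \frac{1 - f}{\mu_0} \right)^{-1}
```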
Mbengue, Serigne Saliou. "Étude des déformations induites par l'aimantation des dispositifs électrotechniques : développement d'un modèle magnéto-élastique macroscopique". Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2265/document.
The work presented in this document is part of a project (dBET: diminution des Bruits Electriques de Trains) which aims at a better understanding of electromagnetic-origin vibration phenomena (and, indirectly, noise) in electrical devices (transformers, inductors, motors) in trains. This project results from the collaboration of several laboratories and companies, including Alstom, ESI Group... Our contribution to this project consists in building a relevant model to predict magnetostrictive strain, considered one of the causes of the electromagnetic-origin noise of electrical devices. A process for identifying the model parameters from experimental data is presented. The model is used to compute the magnetostrictive strain of a test bench with the finite element method. Model results are compared with measurements on a single ferromagnetic sheet and on the test bench, which is a stack of ferromagnetic sheets.
Pasquiou, Alexandre. "Deciphering the neural bases of language comprehension using latent linguistic representations". Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG041.
In the last decades, language models (LMs) have reached human-level performance on several tasks. They can generate rich representations (features) that capture various linguistic properties such as semantics or syntax. Following these improvements, neuroscientists have increasingly used them to explore the neural bases of language comprehension. Specifically, LM features computed from a story are used to fit the brain data of humans listening to the same story, allowing the examination of multiple levels of language processing in the brain. If LM features closely align with a specific brain region, this suggests that both the model and the region are encoding the same information. LM-brain comparisons can then teach us about language processing in the brain. Using the fMRI brain data of fifty US participants listening to "The Little Prince" story, this thesis 1) investigates the reasons why LM features fit brain activity and 2) examines the limitations of such comparisons. The comparison of several pre-trained and custom-trained LMs (GloVe, LSTM, GPT-2, and BERT) revealed that Transformers fit fMRI brain data better than LSTM and GloVe. Yet, none are able to explain all of the fMRI signal, suggesting limitations related either to the encoding paradigm or to the LMs. Focusing specifically on Transformers, we found that no brain region is better fitted by a specific attention head or layer. Our results caution that the nature and the amount of training data greatly affect the outcome, indicating that using off-the-shelf models trained on small datasets is not effective in capturing brain activations. We showed that LM training influences the ability to fit fMRI brain data, and that perplexity is not a good predictor of brain score. Still, training particularly improves fitting performance in core semantic regions, irrespective of the architecture and training data. Moreover, we showed a partial convergence between the brain's and the LM's representations: they first converge during model training before diverging from one another. This thesis further investigates the neural bases of syntax, semantics, and context-sensitivity by developing a method that can probe specific linguistic dimensions. This method makes use of "information-restricted LMs", that is, customized LM architectures trained on feature spaces containing a specific type of information, in order to fit brain data. First, training LMs on semantic and syntactic features revealed good fitting performance in a widespread network, albeit with varying relative degrees. The quantification of this relative sensitivity to syntax and semantics showed that the brain regions most attuned to syntax tend to be more localized, while semantic processing remains widely distributed over the cortex. One notable finding from this analysis was that the extent of semantically and syntactically sensitive brain regions was similar across hemispheres. However, the left hemisphere had a greater tendency to distinguish between syntactic and semantic processing compared to the right hemisphere. In a last set of experiments, we designed "masked-attention generation", a method that controls the attention mechanisms in Transformers in order to generate latent representations that leverage fixed-size contexts. This approach provides evidence of context-sensitivity across most of the cortex. Moreover, this analysis found that the left and right hemispheres tend to process shorter and longer contextual information, respectively.
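The encoding approach mentioned above (fitting LM features to fMRI responses and scoring the fit) is commonly implemented as a regularized linear regression. The sketch below illustrates that generic recipe with random arrays standing in for real features and BOLD data; it is an assumption-laden illustration, not the thesis's actual pipeline.

```python
# Minimal sketch of an fMRI encoding model: ridge regression maps
# language-model features onto voxel activity, and the "brain score" is
# the correlation between predicted and measured signal on held-out data.
# Random arrays stand in for real LM features and BOLD time series.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 768))    # one LM feature vector per fMRI volume
Y = rng.standard_normal((500, 1000))   # BOLD amplitude per voxel

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

# Per-voxel Pearson correlation between prediction and measurement.
scores = np.array(
    [np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])]
)
print(f"mean brain score: {scores.mean():.3f}")  # ~0 on random data
```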
Lefébure, Alessia. "Transformer la culture administrative par les marges : l’introduction en Chine du Master in Public Administration (MPA)". Thesis, Paris, Institut d'études politiques, 2016. http://www.theses.fr/2016IEPP0011/document.
Changing nothing to change everything: innovation and continuity in the reforms of Chinese administrative training. At the end of the Maoist era, the Communist Party of China (CPC) attempted to create a more highly-skilled bureaucracy to achieve economic development in a stable political context. Reforms concerned not just recruitment, management, and civil service organization, but also the training of officials in order to improve their skills. The introduction of the Master of Public Administration (MPA) in 1999 enhanced the scientific character of administrative knowledge while pre-existing selective mechanisms were retained. The MPA supports the country's modernization by enabling the State-Party to undertake continuous reform of public administration. It also enables the emergence of a new ethos and a community of vision among the new generation of civil servants, whose competences can be adjusted to several possible political scenarios.
Tran, Tuan Vu. "Problèmes combinatoires et modèles multi-niveaux pour la conception optimale des machines électriques". PhD thesis, Ecole Centrale de Lille, 2009. http://tel.archives-ouvertes.fr/tel-00425590.
Leplus, François. "Sur la modélisation numérique des transformateurs monophasé et triphasé : Application aux montages redresseurs et gradateurs". Lille 1, 1989. http://www.theses.fr/1989LIL10073.
Douzon, Thibault. "Language models for document understanding". Electronic Thesis or Diss., Lyon, INSA, 2023. http://www.theses.fr/2023ISAL0075.
Every day, countless documents are received and processed by companies worldwide. In an effort to reduce the cost of processing each document, the largest companies have turned to document automation technologies. In an ideal world, a document can be processed automatically without any human intervention: its content is read, and information is extracted and forwarded to the relevant service. State-of-the-art techniques have evolved quickly in the last decades, from rule-based algorithms to statistical models. This thesis focuses on machine learning models for document information extraction. Recent advances in model architecture for natural language processing have shown the importance of the attention mechanism. Transformers have revolutionized the field by generalizing the use of attention and by pushing self-supervised pre-training to the next level. In the first part, we confirm that transformers with appropriate pre-training are able to perform document understanding tasks with high performance. We show that, when used as token classifiers for information extraction, transformers learn the task much more efficiently than recurrent networks. Transformers need only a small proportion of the training data to come close to maximum performance, which highlights the importance of self-supervised pre-training for subsequent fine-tuning. In the following part, we design specialized pre-training tasks to better prepare the model for specific data distributions such as business documents. By acknowledging the specificities of business documents, such as their table structure and their over-representation of numeric figures, we are able to target specific skills useful for the model in its future tasks. We show that these new tasks improve the model's downstream performance, even for small models. Using this pre-training approach, we are able to reach the performance of significantly bigger models without any additional cost during fine-tuning or inference. Finally, in the last part, we address one drawback of the transformer architecture: its computational cost on long sequences. We show that efficient architectures derived from the classic transformer require fewer resources and perform better on long sequences. However, because of how they approximate the attention computation, efficient models suffer from a small but significant performance drop on short sequences compared to classical architectures. This motivates the use of different models depending on the input length and enables concatenating multimodal inputs into a single sequence.
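As an illustration of the token-classification setup described above, the sketch below tags each token of a document string with a field label using the Hugging Face transformers API. The checkpoint and label set are placeholders, and the classification head is randomly initialized here, so predictions are arbitrary until the model is fine-tuned; this is not the thesis's actual model or schema.

```python
# Minimal sketch: Transformer token classification for document information
# extraction. Checkpoint and labels are placeholders; the untrained head
# outputs arbitrary labels until fine-tuned on annotated documents.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-INVOICE_NUMBER", "B-TOTAL_AMOUNT"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

text = "Invoice 4711, total due 1024.00 EUR"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1).squeeze(0)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, pred_ids):
    print(f"{token:>12} -> {labels[pred]}")
```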
Le Moine Veillon, Clément. "Neural Conversion of Social Attitudes in Speech Signals". Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS034.pdf.
As social animals, humans communicate with each other by transmitting various types of information about the world and about themselves. At the heart of this process, the voice carries linguistic messages denoting a strict meaning that can be decoded by the interlocutor. By conveying other information, such as attitudes or emotions, that connotes the strict meaning, the voice enriches and enhances the communication process. In the last few decades, the digital world has become an important part of our lives. In many everyday situations, we are moving away from keyboards, mice, and even touch screens toward interactions with voice assistants or even virtual agents that enable human-like communication with machines. In the emergence of a hybrid world where physical and virtual reality coexist, it becomes crucial to enable machines to capture, interpret, and replicate the emotions and attitudes conveyed by the human voice. This research focuses on social attitudes in speech, which can be defined, in a context of interaction, as speech dispositions towards others, and aims to develop algorithms for their conversion. Fulfilling this objective requires data, i.e. a collection of audio recordings of utterances conveying various vocal attitudes. This research thus starts from this initial step of gathering raw material: a dataset dedicated to social attitudes in speech. Designing such algorithms requires a thorough understanding of what these attitudes are, both in terms of production (how do individuals use their vocal apparatus to produce attitudes?) and perception (how do they decode those attitudes in speech?). We therefore conducted two studies: a first uncovering the production strategies of speech attitudes, and a second, based on a Best Worst Scaling (BWS) experiment, mainly hinting at the biases involved in the perception of such vocal attitudes, thus providing a twofold account of how speech attitudes are communicated by French individuals. These findings grounded the choice of speech signal representation as well as the architectural and optimization choices in the design of a speech attitude conversion algorithm. In order to extend the knowledge on the perception of vocal attitudes gathered in the second study to the whole database, we built a BWS-Net that detects mis-communicated attitudes, thus providing clean data for conversion learning. To learn how to convert vocal attitudes, we adopted a transformer-based approach in a many-to-many conversion paradigm with mel-spectrograms as the speech signal representation. Since early experiments revealed a loss of intelligibility in the converted utterances, we proposed a linguistic conditioning of the conversion algorithm through the incorporation of a speech-to-text module. Both objective and subjective measures show that the resulting algorithm outperforms the baseline transformer in terms of both intelligibility and conveyed attitude.
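For reference, the sketch below shows how a mel-spectrogram, the input/output representation used in such spectrogram-to-spectrogram conversion models, can be extracted with torchaudio. The file name and parameter values are common defaults chosen for illustration, not the settings used in the thesis.

```python
# Minimal sketch: extracting a log mel-spectrogram as the speech
# representation for a spectrogram-to-spectrogram conversion model.
# "utterance.wav" is a placeholder; parameters are common TTS defaults.
import torch
import torchaudio

waveform, sample_rate = torchaudio.load("utterance.wav")
mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate,
    n_fft=1024,        # ~46 ms analysis window at 22.05 kHz
    hop_length=256,    # one frame every ~12 ms
    n_mels=80,         # 80 mel bands is a common choice for conversion/TTS
)
mel = mel_transform(waveform)                 # (channels, n_mels, frames)
log_mel = torch.log(mel.clamp(min=1e-5))      # log compression stabilizes training
print(log_mel.shape)
```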
Book chapters on the topic "Modèles transformeurs"
NOUCHER, Matthieu. "La communication cartographique sur le Géoweb : entre cartes et données". In Communication cartographique, 147–71. ISTE Group, 2022. http://dx.doi.org/10.51926/iste.9091.ch5.
HYGOUNENC, Emmanuel. "Réaliser un modèle de conception simulable". In Ingénierie des systèmes, 253–70. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9108.ch14.
Conference proceedings on the topic "Modèles transformeurs"
Cattan, Oralie, Sahar Ghannay, Christophe Servan, and Sophie Rosset. "Etude comparative de modèles Transformers en compréhension de la parole en Français". In XXXIVe Journées d'Études sur la Parole -- JEP 2022. ISCA: ISCA, 2022. http://dx.doi.org/10.21437/jep.2022-76.
Reports on the topic "Modèles transformeurs"
Dufour, Quentin, David Pontille, and Didier Torny. Contracter à l’heure de la publication en accès ouvert. Une analyse systématique des accords transformants. Ministère de l'enseignement supérieur et de la recherche, April 2021. http://dx.doi.org/10.52949/2.