Theses / dissertations on the topic "Computationnelle"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
See the 50 best theses / dissertations for studies on the topic "Computationnelle".
Browse theses / dissertations from a wide variety of scientific fields and compile an accurate bibliography.
Debarnot, Valentin. "Microscopie computationnelle". Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30156.
The contributions of this thesis are numerical and theoretical tools for the resolution of blind inverse problems in imaging. We first focus on the case where the observation operator is unknown (e.g. microscopy, astronomy, photography). A very popular approach consists in estimating this operator from an image containing point sources (microbeads or fluorescent proteins in microscopy, stars in astronomy). Such an observation provides a measure of the impulse response of the degradation operator at several points in the field of view. Processing this observation requires robust tools that can use the data rapidly. We propose a toolbox that estimates a degradation operator from an image containing point sources. The estimated operator has the property that at any location in the field of view, its impulse response is expressed as a linear combination of elementary estimated functions. This makes it possible to estimate spatially invariant (convolution) and variant (product-convolution expansion) operators. An important specificity of this toolbox is its high level of automation: a small number of easily accessible parameters covers a large majority of practical cases. The size of the point source (e.g. bead), the background and the noise are also taken into consideration in the estimation. This tool, coined PSF-Estimator, comes in the form of a module for the Fiji software, and is based on a parallelized implementation in C++. The operators generated by an optical system usually change for each experiment, which ideally requires a calibration of the system before each acquisition. To overcome this, we propose to represent an optical system not by a single operator (e.g. a convolution blur with a fixed kernel across experiments), but by a subspace of operators. This set makes it possible to represent all the possible states of a microscope. We introduce a method for estimating such a subspace from a collection of low-rank operators (such as those estimated by the toolbox PSF-Estimator). We show that under reasonable assumptions, this subspace is low-dimensional and consists of low-rank elements. In a second step, we apply this process in microscopy on large fields of view and with spatially varying operators. This implementation is made possible by additional methods to process real images (e.g. background, noise, discretization of the observation). The construction of an operator subspace is only one step in the resolution of blind inverse problems. It is then necessary to identify the degradation operator in this set from a single observed image. In this thesis, we provide a mathematical framework for this operator identification problem in the case where the original image consists of point sources. Theoretical conditions arise from this work, allowing a better understanding of the conditions under which this problem can be solved. We illustrate how this formal study allows the resolution of a blind deblurring problem on a microscopy example. [...]
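To make the product-convolution idea in this abstract concrete, here is a minimal sketch — not the PSF-Estimator code; the function name, shapes, and the weight-then-blur ordering are illustrative assumptions — of how a spatially varying operator built from a few elementary impulse responses can be applied to an image:

```python
# Hypothetical sketch of a product-convolution expansion: a spatially
# varying blur H is approximated as H(u) = sum_k h_k * (m_k . u), where
# each h_k is an elementary impulse response (estimated from point sources)
# and each m_k is a smooth spatial weight map.
import numpy as np
from scipy.signal import fftconvolve

def apply_product_convolution(u, kernels, weight_maps):
    """Apply the expansion; `kernels` and `weight_maps` are lists of 2D arrays."""
    out = np.zeros_like(u, dtype=float)
    for h_k, m_k in zip(kernels, weight_maps):
        out += fftconvolve(m_k * u, h_k, mode="same")  # weight, then blur
    return out
```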
Touret, Alain. "Vers une géométrie computationnelle". Nice, 2000. http://www.theses.fr/2000NICE5491.
Le Calvez, Rozenn. "Approche computationnelle de l'acquisition précoce des phonèmes". Paris 6, 2007. http://www.theses.fr/2007PA066345.
Panel, Nicolas. "Étude computationnelle du domaine PDZ de Tiam1". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX062/document.
Small protein domains often direct protein-protein interactions and regulate eukaryotic signalling pathways. PDZ domains are among the most widespread and best-studied. They specifically recognize the 4-10 C-terminal amino acids of target proteins. Tiam1 is a Rac GTP exchange factor that helps control cell migration and proliferation and whose PDZ domain binds the proteins syndecan-1 (Sdc1), Caspr4, and Neurexin. Short peptides and peptidomimetics can potentially inhibit or modulate its action and act as bioreagents or therapeutics. We used computational protein design (CPD) and molecular dynamics (MD) free energy simulations to understand and engineer its peptide specificity. CPD uses a structural model and an energy function to explore the space of sequences and structures and identify stable and functional protein or peptide variants. We used our in-house Proteus CPD package to completely redesign the Tiam1 PDZ domain. The designed sequences were similar to natural PDZ domains, with similarity and fold recognition scores comparable to the widely-used Rosetta CPD package. Selected sequences, containing around 60 mutated positions out of 90, were tested by microsecond MD simulations and biophysical experiments. Four of five sequences tested experimentally (by our collaborators) displayed reversible unfolding around 50°C. Proteus also accurately scored the binding specificity of several protein and peptide variants. As a more refined model for specificity, we parameterized a semi-empirical free energy model of the Poisson-Boltzmann Linear Interaction Energy (PB/LIE) form, which scores conformations extracted from explicit-solvent MD simulations of PDZ:peptide complexes. With three adjustable parameters, the model accurately reproduced the experimental binding affinities of 41 variants, with a mean unsigned error of just 0.4 kcal/mol, and gave predictions for 10 new variants. The PB/LIE model was tested further by comparing to non-empirical, alchemical MD free energy simulations, which have no adjustable parameters and were found to give chemical accuracy for 12 Tiam1:peptide complexes. The tools and insights obtained should help discover new tight-binding peptides or peptidomimetics and have broad implications for engineering PDZ:peptide interactions.
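As a rough illustration of the three-parameter PB/LIE scoring described in this abstract, here is a minimal sketch; the variable names and the exact energy terms are assumptions, and in the original work the inputs would be averages over MD conformations:

```python
# Hypothetical PB/LIE-style model: dG ~ a*<dE_vdw> + b*<dE_elec_PB> + c,
# with (a, b, c) the three adjustable parameters fitted by least squares
# to experimental binding affinities of the peptide variants.
import numpy as np

def fit_pb_lie(dE_vdw, dE_elec, dG_exp):
    X = np.column_stack([dE_vdw, dE_elec, np.ones(len(dE_vdw))])
    params, *_ = np.linalg.lstsq(X, np.asarray(dG_exp), rcond=None)
    return params  # (a, b, c)

def predict_pb_lie(params, dE_vdw, dE_elec):
    a, b, c = params
    return a * np.asarray(dE_vdw) + b * np.asarray(dE_elec) + c
```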
Pauwels, Edouard. "Applications de l'apprentissage statistique à la biologie computationnelle". PhD thesis, École Nationale Supérieure des Mines de Paris, 2013. http://pastel.archives-ouvertes.fr/pastel-00958432.
Jacob, Laurent. "A priori structurés pour l'apprentissage supervisé en biologie computationnelle". PhD thesis, École Nationale Supérieure des Mines de Paris, 2009. http://pastel.archives-ouvertes.fr/pastel-00005743.
Zheng, Léon. "Frugalité en données et efficacité computationnelle dans l'apprentissage profond". Electronic Thesis or Diss., Lyon, École normale supérieure, 2024. http://www.theses.fr/2024ENSL0009.
This thesis focuses on two challenges of frugality and efficiency in modern deep learning: data frugality and computational resource efficiency. First, we study self-supervised learning, a promising approach in computer vision that does not require data annotations for learning representations. In particular, we propose a unification of several self-supervised objective functions under a framework based on rotation-invariant kernels, which opens up prospects for reducing the computational cost of these objective functions. Second, given that matrix multiplication is the predominant operation in deep neural networks, we focus on the construction of fast algorithms that allow matrix-vector multiplication with nearly linear complexity. More specifically, we examine the problem of sparse matrix factorization under the constraint of butterfly sparsity, a structure common to several fast transforms like the discrete Fourier transform. The thesis establishes new theoretical guarantees for butterfly factorization algorithms, and explores the potential of butterfly sparsity to reduce the computational costs of neural networks during their training or inference phase. In particular, we explore the efficiency of GPU implementations for butterfly sparse matrix multiplication, with the goal of truly accelerating sparse neural networks.
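To illustrate the butterfly sparsity mentioned in this abstract, here is a minimal sketch with illustrative dimensions and random factor values: an N x N matrix expressed as a product of log2(N) factors, each with only two nonzeros per row, costs O(N log N) to apply instead of O(N^2). The random factors below merely stand in for the structured factors of, e.g., the discrete Fourier transform.

```python
# Hypothetical butterfly-sparse factors: at level l, row i couples index i
# with the index obtained by flipping bit l (the FFT wiring pattern).
import numpy as np

def random_butterfly_factors(n):  # n must be a power of 2
    rng = np.random.default_rng(0)
    factors = []
    for level in range(int(np.log2(n))):
        B = np.zeros((n, n))
        for i in range(n):
            j = i ^ (1 << level)          # partner index at this level
            B[i, i], B[i, j] = rng.normal(), rng.normal()
        factors.append(B)
    return factors

def butterfly_matvec(factors, x):
    for B in reversed(factors):
        x = B @ x    # each product is O(n) when B is stored as sparse
    return x

y = butterfly_matvec(random_butterfly_factors(8), np.ones(8))
```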
Sander, David. "Approche computationnelle des mécanismes émotionnels : test de l'hypothèse de polarité". Lyon 2, 2002. http://demeter.univ-lyon2.fr:8080/sdx/theses/lyon2/2002/sander_d.
Texto completo da fonteSander, David Koenig Olivier. "Approche computationnelle des mécanismes émotionnels test de l'hypothèse de polarité /". Lyon : Université Lumière Lyon 2, 2002. http://demeter.univ-lyon2.fr:8080/sdx/theses/lyon2/2002/sander_d.
Bérenger, François. "Nouveaux logiciels pour la biologie structurale computationnelle et la chémoinformatique". Thesis, Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1047/document.
This thesis introduces five software packages useful in three different areas: parallel and distributed computing, computational structural biology and chemoinformatics. The package for parallel and distributed computing is PAR, which allows independent experiments to be executed in a parallel and distributed way. The packages for computational structural biology are Durandal, EleKit and Fragger. Durandal exploits the propagation of geometric constraints to accelerate the exact clustering algorithm for protein models. EleKit measures the electrostatic similarity between a chemical molecule and the protein it is designed to replace at a protein-protein interface. Fragger is a fragment picker able to select protein fragments from the whole Protein Data Bank. Finally, the chemoinformatics package is ACPC, which encodes a chemical molecule in a rotation-translation invariant way in any one, or a combination, of three chemical spaces (electrostatic, steric or hydrophobic). ACPC is a ligand-based virtual screening tool supporting consensus queries, query molecule annotation and multi-core computers.
Pizzolato, Marco. "IRM computationnelle de diffusion et de perfusion en imagerie cérébrale". Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4017/document.
Diffusion and Perfusion Magnetic Resonance Imaging (dMRI & pMRI) are two modalities that allow sensing important, different but complementary aspects of brain imaging. This thesis presents a theoretical and methodological investigation of the MRI modalities based on diffusion-weighted (DW) and dynamic susceptibility contrast (DSC) images. For both modalities, the contributions of the thesis are related to the development of new methods to improve and better exploit the quality of the obtained signals. With respect to contributions in diffusion MRI, the nature of the complex DW signal is investigated to explore a new potential contrast related to tissue microstructure. In addition, the complex signal is exploited to correct a bias induced by acquisition noise in DW images, thus improving the estimation of structural scalar metrics. With respect to contributions in perfusion MRI, the DSC signal processing is revisited in order to account for the bias due to bolus dispersion. This phenomenon prevents the correct estimation of perfusion metrics but, at the same time, can give important insights into the pathological condition of the brain tissue. The contributions of the thesis are presented within a theoretical and methodological framework, validated on both synthetic and real images.
Magnaudet, Mathieu Stéphane. "Le défi anti-représentationnaliste : dynamicisme et théorie computationnelle de l'esprit". Bordeaux 2, 2006. http://www.theses.fr/2006BOR21314.
Chateau-Laurent, Hugo. "Modélisation Computationnelle des Interactions Entre Mémoire Épisodique et Contrôle Cognitif". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0019.
Episodic memory is often illustrated with the madeleine de Proust excerpt as the ability to re-experience a situation from the past following the perception of a stimulus. This simplistic scenario should not lead one to think that memory works in isolation from other cognitive functions. On the contrary, memory operations act on highly processed information and are themselves modulated by executive functions in order to inform decision making. This complex interplay can give rise to higher-level functions such as the ability to imagine potential future sequences of events by combining contextually relevant memories. How the brain implements this construction system is still largely a mystery. The objective of this thesis is to employ cognitive computational modeling methods to better understand the interactions between episodic memory, which is supported by the hippocampus, and cognitive control, which mainly involves the prefrontal cortex. It provides elements as to how episodic memory can help an agent to act. It is shown that Neural Episodic Control, a fast and powerful method for reinforcement learning, is in fact mathematically close to the traditional Hopfield Network, a model of associative memory that has greatly influenced the understanding of the hippocampus. Neural Episodic Control indeed fits within the Universal Hopfield Network framework, and it is demonstrated that it can be used to store and recall information, and that other kinds of Hopfield networks can be used for reinforcement learning. The question of how executive functions can control episodic memory operations is also tackled. A hippocampus-inspired network is constructed with as few assumptions as possible and modulated with contextual information. The evaluation of performance according to the level at which contextual information is sent provides design principles for controlled episodic memory. Finally, a new biologically inspired model of one-shot sequence learning in the hippocampus is proposed. The model performs very well on multiple datasets while reproducing biological observations. It ascribes a new role to the recurrent collaterals of area CA3 and the asymmetric expansion of place fields, namely to disambiguate overlapping sequences by making retrospective splitter cells emerge. Implications for theories of the hippocampus are discussed and novel experimental predictions are derived.
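The Universal Hopfield Network view mentioned in this abstract sees recall as a pipeline of similarity, separation and projection. Here is a minimal sketch under that assumed notation; with dot-product similarity and a softmax separation this is the modern (attention-like) Hopfield update, and an episodic memory read in the style of Neural Episodic Control has the same structure (a kernel similarity over stored keys followed by a weighted sum over stored values):

```python
# Hypothetical Universal-Hopfield-style recall:
#   output = projection( separation( similarity(memories, query) ) )
import numpy as np

def universal_hopfield_recall(keys, values, query, beta=8.0):
    """keys: (M, d) stored patterns; values: (M, k) associated contents."""
    sim = keys @ query                            # similarity scores (M,)
    w = np.exp(beta * sim - np.max(beta * sim))
    w /= w.sum()                                  # softmax separation
    return w @ values                             # projection onto values
```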
Lopez Orozco, Francisco. "Modélisation cognitive computationnelle de la recherche d'information utilisant des données oculomotrices". PhD thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00910178.
Mancheva, Lyuba. "Modélisation cognitive computationnelle de la lecture de textes chez les enfants". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAS045/document.
Learning to read requires the coordination of eye movements, attention, and lexical processing in order to achieve smooth reading. Contrary to what we perceive, the behavior of our eyes is very complex and depends on several factors linked to the properties of the language, lexical knowledge, reading skills and the goal of the task. The study of interindividual differences could help us to better understand the reading processes. Eye movement control during reading can be analyzed with statistical methods and with computational models which simulate the reading processes spatially and temporally. In this work, we used both methods in order to better understand why we observe differences in reading patterns between groups of readers of different ages (adults vs children) and of the same age (poor and good readers). The observed differences between groups could be explained by differences in oculomotor development and/or by differences in lexical knowledge. One fundamental question is to understand the causes of these differences at the level of eye movement control. The crucial questions that we investigate in this work are the following: what are the differences in eye movements between groups of different reading ages? Why do we observe these differences? What are the patterns of eye movements for groups of poor and good readers of the same age? How can we simulate the observed patterns with computational modeling of reading?
Lopez Orozco, Francisco. "Modélisation cognitive computationnelle de la recherche d'information utilisant des données oculomotrices". Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENS013/document.
This computer science thesis presents a computational cognitive modeling work using eye movements of people faced with different information search tasks on textual material. We studied situations of everyday life when people are seeking information in a newspaper or on a web page. People had to judge whether a piece of text is semantically related or not to a goal expressed by a few words. Because time is quite often a constraint, texts may not be entirely processed before the decision occurs. More specifically, we analyzed eye movements during two information search tasks: reading a paragraph with the task of quickly deciding i) if it is related or not to a given goal and ii) whether it is better related to a given goal than another paragraph processed previously. One model is proposed for each of these situations. Our simulations are done at the level of eye fixations and saccades. In particular, we predicted the time at which participants would decide to stop reading a paragraph because they have enough information to make their decision. The models make predictions at the level of words that are likely to be fixated before a paragraph is abandoned. Human semantic judgments are mimicked by computing the semantic similarities between sets of words using Latent Semantic Analysis (LSA) (Landauer et al., 2007). We followed a statistical parametric approach in the construction of our models. The models are based on a Bayesian classifier. We proposed a two-variable linear threshold to account for the decision to stop reading a paragraph, based on the Rank of the fixation and either i) the semantic similarity (Cos) between the paragraph and the goal or ii) the difference of semantic similarities (Gap) between each paragraph and the goal. For both models, the performance results showed that we are able to replicate on average people's behavior faced with the information search tasks studied throughout the thesis. The thesis includes two main parts: 1) designing and carrying out psychophysical experiments in order to acquire eye movement data and 2) developing and testing the computational cognitive models.
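As an illustration of the kind of stopping rule described in this abstract, here is a minimal sketch with made-up coefficients: reading stops at the first fixation whose Rank crosses a linear threshold driven by the semantic similarity (Cos) between the paragraph and the goal. In the original work the word vectors would come from an LSA space; here they are plain arrays.

```python
# Hypothetical two-variable linear threshold for abandoning a paragraph.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def stop_reading(rank, cos_sim, a=-20.0, b=12.0):
    """Abandon the paragraph once the fixation Rank exceeds a + b * Cos:
    low-similarity paragraphs are abandoned after fewer fixations."""
    return rank > a + b * cos_sim
```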
Filipis, Luiza. "Etude optique et computationnelle de la fonction des canaux ioniques neuronaux". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAY078.
The physiology of ion channels is a major topic of interest in modern neuroscience, since the functioning of these molecules is the biophysical basis of the electrical and chemical behaviour of neurons. Ion channels are diverse membrane proteins that allow the selective passage of ions across the lipid bilayer of cells. They are involved in a variety of fundamental physiological processes, from electrical signal integration, action potential generation and propagation to cell growth and even apoptosis, while their dysfunction is the cause of several diseases. Ion channels have been extensively studied using electrode methods, in particular the patch-clamp technique, but these approaches are limited for studying native channels during physiological activity in situ. In particular, electrodes give limited spatial information, while it is recognised that the contribution of channels to all these processes is a function not only of their discrete biophysical properties but also of their distribution across the neuron's surface in the different compartments. Optical techniques, in particular those involving fluorescence imaging, can overcome intrinsic limitations of electrode techniques, as they allow electrical and ionic signals to be recorded with high spatial and temporal resolution. Finally, optical techniques combined with neuronal modelling can potentially give pivotal information, significantly advancing our understanding of how neurons work. The ambitious goal of my thesis was to progress in this direction by developing novel approaches that combine cutting-edge imaging techniques with modelling to extract ion currents and channel kinetics in specific neuronal regions. The body of this work is divided into three methodological pieces, each of them described in a dedicated chapter.
Lartillot-Nakamura, Olivier. "Fondements d'un système d'analyse musicale computationnelle suivant une modélisation cognitiviste de l'écoute". PhD thesis, Université Pierre et Marie Curie - Paris VI, 2004. http://tel.archives-ouvertes.fr/tel-00006909.
Texto completo da fonteDa, Sylva Lyne. "Interprétation linguistique et computationnelle des valeurs par défaut dans le domaine syntaxique". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0001/NQ39734.pdf.
Texto completo da fonteRoussel, Julien. "Analyse théorique et numérique de dynamiques non-réversibles en physique statistique computationnelle". Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1115/document.
This thesis deals with four topics related to non-reversible dynamics, each the subject of a chapter which can be read independently. The first chapter is a general introduction presenting the problematics and some major results of computational statistical physics. The second chapter concerns the numerical resolution of hypoelliptic partial differential equations, i.e. those involving an invertible but non-coercive differential operator. We prove the consistency of the Galerkin method as well as convergence rates for the error. The analysis is also carried out for a saddle-point formulation, which is the most appropriate in the cases of interest to us. We show that our assumptions are met in a simple case and numerically check our theoretical predictions on this example. In the third chapter we propose a general strategy for constructing control variates for nonequilibrium dynamics. In particular, this method reduces the variance of transport coefficient estimators computed by ergodic means. This variance reduction is quantified in a perturbative regime. The control variate is based on the solution of a partial differential equation; in the case of the Langevin equation this equation is hypoelliptic, which motivates the previous chapter. The proposed method is tested numerically on three examples. The fourth chapter is connected to the third since it uses the same idea of a control variate. The aim is to estimate the mobility of a particle in the underdamped regime, where the dynamics are close to being Hamiltonian. This work was done in collaboration with G. Pavliotis during a stay at Imperial College London. The last chapter deals with Piecewise Deterministic Markov Processes, which allow sampling measures in high dimension. We prove the exponential convergence towards equilibrium of several dynamics of this type under a general formalism including the Zig-Zag process (ZZP), the Bouncy Particle Sampler (BPS) and Randomized Hybrid Monte Carlo (RHMC). The bounds we obtain on the convergence rate depend explicitly on the parameters of the problem. This makes it possible, in particular, to control the size of the confidence intervals for empirical averages when the size of the underlying phase space is large. This work was done in collaboration with C. Andrieu, A. Durmus and N. Nüsken.
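To illustrate the control-variate idea described in this abstract, here is a minimal sketch on a toy i.i.d. example rather than a nonequilibrium dynamics: to estimate E[f(X)], one subtracts a correlated quantity g(X) with known expectation. In the thesis the control variate is built from an approximate solution of a partial differential equation; the ideal choice below (which cancels the variance exactly) is purely illustrative.

```python
# Toy control variate: estimate E[x^2] = 1 for x ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)

f = x**2              # quantity of interest, E[f] = 1
g = x**2 - 1          # control variate with known mean 0, correlated with f

plain = f.mean()
cv = (f - g).mean()   # here f - g is exactly constant: zero variance

print(f"plain estimator: {plain:.4f} +/- {f.std() / np.sqrt(f.size):.4f}")
print(f"with control variate: {cv:.4f}")
```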
Tondo Yoya, Ariel Christopher. "Imagerie computationnelle active et passive à l’aide d’une cavité chaotique micro-ondes". Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S130/document.
This Ph.D. thesis focuses on active and passive microwave computational imaging. The use of a chaotic cavity as a compressive component is studied both theoretically (mathematical model, algorithmic resolution of the inverse problem) and experimentally. The underlying idea is to replace an array of antennas with a single reverberant cavity with an array of openings on the front panel, which encodes the spatial information of a scene in the temporal response of the cavity. The reverberation of electromagnetic waves inside the cavity provides the degrees of freedom necessary to reconstruct an image of the scene. It is thus possible to create a high-resolution image of a scene in real time from a single impulse response. Applications include security screening and imaging through walls. In this work, the design and characterization of an open chaotic cavity is performed. Using this device, active computational imaging is demonstrated to produce images of targets of various shapes. The number of degrees of freedom is further improved by changing the boundary conditions with the addition of commercial fluorescent lamps. The interaction of the waves with these plasma elements allows new cavity configurations to be created, thus improving image resolution. Compressive imaging is next applied to the passive detection and localization of natural thermal radiation from noise sources, based on the correlation of signals received over two channels. Finally, an innovative method of interferometric target imaging is presented. It is based on the reconstruction of the impulse response between two antennas from the microwave thermal noise emitted by an array of neon lamps. This work constitutes a step towards future imaging systems.
Chanceaux, Myriam. "Modélisation cognitive computationnelle de trajets oculomoteurs lors d'une tâche de recherche d'information". PhD thesis, Grenoble 1, 2009. http://www.theses.fr/2009GRE10232.
This thesis examines the combination of visual and semantic processes in an information seeking task on textual interfaces such as web pages. The methodology used is the simulation of cognitive models. This approach aims to design a program, based on theoretical cognitive models, that replicates human behaviour. Our model simulates the oculomotor scan path of an average user searching for information. The processes involved in this kind of task are modelled to replicate human eye movements recorded during different experiments. Models of visual and semantic processes are added to a model of the memory processes inherent in information retrieval tasks. For the visual part, we rely on saliency maps, which predict the areas of the display that attract attention, according to low-level information (color, orientation and contrast) and the physiological properties of human vision. For the semantic part, the method used to measure semantic similarities between the search goal of the user and the different parts of the page is LSA (Latent Semantic Analysis) (Landauer, 1998). For the memory part, the mechanism of inhibition of return (Klein, 1999) and the Variable Memory Model (Horowitz, 2006) are used. The thesis includes three parts: designing a theoretical model of interaction, designing a simulation tool, and developing psychophysical experiments with eye-tracking techniques to validate and refine the proposed model.
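Here is a minimal sketch, with invented weights, of the combination at the heart of the model described in this abstract: the next fixation is chosen from a map mixing bottom-up visual saliency, semantic similarity to the search goal, and an inhibition-of-return trace over already-visited zones.

```python
# Hypothetical fixation selection over 2D maps of equal shape.
import numpy as np

def next_fixation(saliency, semantic, visited, w_vis=0.4, w_sem=0.6):
    interest = w_vis * saliency + w_sem * semantic
    interest = interest * (1.0 - visited)   # inhibition of return
    return np.unravel_index(np.argmax(interest), interest.shape)
```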
Chanceaux, Myriam. "Modélisation cognitive computationnelle de trajets oculomoteurs lors d'une tâche de recherche d'information". PhD thesis, Université Joseph Fourier (Grenoble), 2009. http://tel.archives-ouvertes.fr/tel-00430624.
Texto completo da fonteCollins, Anne. "Apprentissage et contrôle cognitif : une théorie computationnelle de la fonction exécutive préfontale humaine". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2010. http://tel.archives-ouvertes.fr/tel-00814840.
Texto completo da fonteUm, Nlend Ingrid. "Analyse computationnelle des protéines kinases surexprimées dans le cancer du sein «Triple-négatif»". Mémoire, Université de Sherbrooke, 2014. http://hdl.handle.net/11143/5386.
Texto completo da fonteCazé, Romain. "Le rôle des sommations non-linéaires dendritiques dans la puissance computationnelle du neurone". Paris 7, 2012. http://www.theses.fr/2012PA077238.
Seminal computational models of the neuron assume that excitatory post-synaptic potentials (EPSPs) sum linearly in dendrites. Nevertheless, the sum of multiple EPSPs can be larger than their arithmetic sum, a superlinear summation known as a dendritic spike. The impact of dendritic spikes on computation remains a matter of debate. Moreover, the sum of multiple EPSPs can also be smaller than their arithmetic sum; these saturations are sometimes presented as a glitch which should be corrected by dendritic spikes. I provide here arguments against these claims: I show that dendritic saturations as well as dendritic spikes, even when they cannot directly make the neuron fire, enhance single neuron computation. I use binary neuron models to demonstrate that a single dendritic non-linearity, either spiking or saturating, combined with the somatic non-linearity, enables a neuron to compute linearly non-separable functions. I call these functions spatial functions, because the neuron's output depends more on the labeling of the active inputs than on their number. Secondly, I show that realistic biophysical models of the neuron are capable of computing spatial functions; within these models the dendritic and somatic non-linearities are tightly coupled. I use this biophysical model to predict that some neurons will be more likely to fire when inputs are scattered over their dendritic tree than when they are clustered, and that such a neuron is capable of computing spatial functions. These results suggest a new perspective on neural networks, suggesting for instance that memory can be stored in the neurons themselves.
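The central claim of this abstract can be illustrated with a minimal sketch using hand-picked weights (all values here are invented for illustration): a binary neuron with two saturating dendritic subunits plus a somatic threshold responds to scattered but not clustered inputs, a "spatial" function that no single linear threshold unit can compute.

```python
# Hypothetical two-subunit neuron with saturating dendrites.
def dendrite(inputs, cap=1.0):
    return min(sum(inputs), cap)          # saturating subunit

def neuron(x, theta=1.5):
    # x = (x1, x2, x3, x4); subunit A sees x1, x2; subunit B sees x3, x4
    return dendrite(x[:2]) + dendrite(x[2:]) >= theta

print(neuron((1, 1, 0, 0)))  # clustered on one branch -> False (sum = 1.0)
print(neuron((1, 0, 1, 0)))  # scattered across branches -> True (sum = 2.0)
```

Both inputs activate two synapses, yet only the scattered pattern fires the cell: the output depends on which inputs are active, not how many.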
Gris, Barbara. "Approche modulaire sur les espaces de formes, géométrie sous-riemannienne et anatomie computationnelle". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN069/document.
This thesis is dedicated to the development of a new deformation model to study shapes. Deformations, and diffeomorphisms in particular, have played a tremendous role in the field of statistical shape analysis, as a proxy to measure and interpret differences between similar objects but with different shapes. Diffeomorphisms usually result from the integration of a flow of regular velocity fields, whose parameters have so far not enabled full control of the local behaviour of the deformation. We propose a new model in which velocity fields are built by combining a few local and interpretable vector fields. These vector fields are generated thanks to a structure which we name a deformation module. Deformation modules generate vector fields of a particular type (e.g. a scaling) chosen in advance: they make it possible to incorporate a constraint in the deformation model. These constraints can correspond either to additional knowledge one has about the shapes under study, or to a point of view from which one wants to study these shapes. In a first chapter we introduce this notion of deformation module and give several examples to show how diverse they can be. We also explain how one can easily build complex deformation modules adapted to complex constraints by combining simple deformation modules. Then we introduce the construction of modular large deformations as flows of vector fields generated by a deformation module. Vector fields generated by a deformation module are parametrized by two variables: a geometrical one named the geometrical descriptor, and a control one. We build large deformations so that the geometrical descriptor follows the deformation of the ambient space. Defining a modular large deformation then amounts to defining an initial geometrical descriptor and a trajectory of controls. We also associate a notion of cost with each pair of geometrical descriptor and control. In a second chapter we explain how a given deformation module can be used to study data shapes. We first build a sub-Riemannian structure on the space defined as the product of the data shape space and the space of geometrical descriptors. The sub-Riemannian metric comes from the chosen cost: we equip the new (shape) space with a chosen metric, which is not in general the pull-back of a metric on vector fields but takes into account the way vector fields are built under the chosen constraints. Thanks to this structure we define a sub-Riemannian distance on this new space and we show the existence, under some mild assumptions, of geodesics (trajectories whose length equals the distance between the starting and ending points). The study of geodesics amounts to an optimal control problem, and they can be estimated within a Hamiltonian framework: in particular we show that they can be parametrized by an initial variable named the momentum. Afterwards we introduce optimal modular large deformations transporting a source shape onto a target shape. We also define the modular atlas of a population of shapes, which is made of a mean shape and one modular large deformation per shape. In the discussion we study an alternative model where geodesics are parametrized in lower dimension. In a third chapter we present the algorithm that was implemented in order to compute these modular large deformations, and the gradient descent used to estimate the optimal ones as well as mean shapes. In a last chapter we introduce several numerical examples through which we study specific aspects of our model. In particular we show that the choice of deformation module influences the form of the estimated mean shape, and that by choosing an adapted deformation module we are able to perform simultaneous rigid and non-linear registration in a satisfying and robust way. In the last example we study shapes without any prior knowledge; we then use a module corresponding to weak constraints and show that the atlas computation still gives interesting results.
Lefort, Sébastien. ""How much is 'about'?" modélisation computationnelle de l'interprétation cognitive des expressions numériques approximatives". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066421/document.
Approximate Numerical Expressions (ANEs) are imprecise linguistic expressions involving numerical values, illustrated by "about 100". We first focus on ANE interpretation, in both its human and computational aspects. After defining original arithmetical and cognitive dimensions that characterize ANEs, we conducted an empirical study to collect the intervals of values denoted by ANEs. We show that the proposed dimensions are involved in ANE interpretation. In a second step, we propose two interpretation models, based on the same principle of a compromise between the cognitive salience of the endpoints and their distance to the ANE reference value, formalized by Pareto frontiers. The first model estimates the denoted interval, the second one generates a fuzzy interval representing the associated imprecision. The experimental validation of the models, based on real data, shows that they offer better performance than existing models. We also show the relevance of the fuzzy model by implementing it in the framework of flexible database queries. We then show, by means of an empirical study, that the semantic context has little effect on the collected intervals. Finally, we focus on additions and products of ANEs, for instance to assess the area of a room whose walls are "about 10" and "about 20" meters long. We conducted an empirical study whose results indicate that the imprecisions associated with the operands are not taken into account during the calculations.
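Here is a minimal sketch, under assumed scoring functions, of the Pareto compromise described in this abstract for interpreting "about 100": candidate interval endpoints trade off cognitive salience (crudely proxied below by the number of trailing zeros) against distance to the reference value, and the retained endpoints are the non-dominated ones.

```python
# Hypothetical Pareto filter over candidate endpoints.
def roundness(n):
    """Crude salience proxy: number of trailing zeros of n."""
    s = 0
    while n != 0 and n % 10 == 0:
        n //= 10
        s += 1
    return s

def pareto_endpoints(ref, candidates):
    """Keep candidates not dominated on (salience up, distance down)."""
    kept = []
    for c in candidates:
        dominated = any(
            roundness(o) >= roundness(c) and abs(o - ref) <= abs(c - ref)
            and (roundness(o) > roundness(c) or abs(o - ref) < abs(c - ref))
            for o in candidates
        )
        if not dominated:
            kept.append(c)
    return kept

print(pareto_endpoints(100, [90, 95, 98, 99, 101, 102, 105, 110, 120, 150]))
# -> [90, 99, 101, 110]: plausible endpoints for "about 100"
```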
Ben Salamah, Janan. "Extraction de connaissances dans des textes arabes et français par une méthode linguistico-computationnelle". Thesis, Paris 4, 2017. http://www.theses.fr/2017PA040137.
In this thesis, we propose a generic multilingual approach to automatic information extraction, in particular the extraction of price-variation events and of temporal information linked to a temporal referential. Our approach is based on the constitution of several semantic maps by textual analysis, in order to formalize the linguistic traces expressed by categories. We created a database for an expert system to identify and annotate information (categories and their characteristics) based on groups of contextual rules. Two algorithms, AnnotEC and AnnotEV, were applied in the SemanTAS platform to validate our assumptions. We obtained satisfactory results: accuracy and recall are around 80%. The extracted knowledge is presented in a summary file. In order to confirm the multilingual aspect of our approach, we carried out experiments on French and Arabic. We confirmed scalability by annotating large corpora.
Domenech, Philippe. "Une approche neuro-computationnelle de la prise de décision et de sa régulation contextuelle". PhD thesis, Université Claude Bernard - Lyon I, 2011. http://tel.archives-ouvertes.fr/tel-00847494.
Texto completo da fonteSantolini, Marc. "Analyse computationnelle des éléments cis-régulateurs dans les génomes des drosophiles et des mammifères". Phd thesis, Université Paris-Diderot - Paris VII, 2013. http://tel.archives-ouvertes.fr/tel-00865159.
Texto completo da fonteMercier, Steeve. "Phénomènes d'interface, stades d'acquisition et variabilité : explications en termes de degrés de complexité computationnelle". Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/27898/27898.pdf.
Passot, Jean-Baptiste. "Etude computationnelle du rôle du cervelet dans les mouvements volontaires et la navigation spatiale". Paris 6, 2011. http://www.theses.fr/2011PA066382.
Texto completo da fonteGuyader, Nathalie. "Scènes visuelles : catégorisation basée sur des modèles de perception. Approches (neuro) computationnelle et psychophysique". Université Joseph Fourier (Grenoble), 2004. http://www.theses.fr/2004GRE10082.
The aim is to use the biology of the human visual system in order to describe images of natural scenes. This work is based on two main approaches: a computational one and a psychophysical one. We show that humans use the information contained in the amplitude spectrum of images in order to categorize them, whereas the phase spectrum does not play a role. This is shown only for rapid categorization.
Mercier, Jonathan. "Logique paracohérente pour l’annotation fonctionnelle des génomes au travers de réseaux biologiques". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLE007/document.
One consequence of increasing sequencing capacity is the accumulation of in silico predictions in biological sequence databanks. This amount of data exceeds human curation capacity and, despite methodological progress, numerous errors are made in the prediction of protein functions. Therefore, tools are required to guide human expertise in the evaluation of bioinformatics predictions, taking into account background knowledge on the studied organism. GROOLS (for "Genomic Rule Object-Oriented Logic System") is an expert system that is able to reason on incomplete and contradictory information. It was developed with the objective of assisting biologists in the process of genome functional annotation by integrating large quantities of information from various sources. GROOLS adopts a generic representation of knowledge using a directed acyclic graph of concepts that represent the different components of a biological process (e.g. a metabolic pathway), connected by two types of relations (i.e. "part-of" and "subtype-of"). These concepts are called "Prior Knowledge concepts" and correspond to theories whose presence in an organism needs to be elucidated. They serve as the basis for the reasoning and are evaluated from observations of "Prediction" (e.g. a predicted enzymatic activity) or "Expectation" (e.g. growth phenotypes) type. Indeed, GROOLS implements a paraconsistent logic on sets of facts that are observations. Using different rules, "Prediction" and "Expectation" values are propagated on the graph as sets of truth values. At the end of the reasoning, a conclusion is given on each "Prior Knowledge concept" by combining "Prediction" and "Expectation" values. Conclusions may, for example, indicate a "Confirmed-Presence" (i.e. the function is predicted and expected), a "Missing" concept (i.e. the function is expected but not predicted) or an "Unexpected-Presence" (i.e. the function is predicted but not expected in the organism). GROOLS reasoning was applied on several organisms and with different sources of "Predictions" (i.e. annotations from UniProtKB or MicroScope) and biological processes (i.e. GenomeProperties and UniPathway). For "Expectations", growth phenotype data and amino-acid biosynthesis pathways were used. GROOLS results are useful to quickly evaluate the overall annotation quality of a genome and to propose annotations to be completed or corrected by a biocurator. More generally, the GROOLS software can be used to improve the reconstruction of the metabolic network of an organism, which is an essential step in obtaining a high-quality metabolic model.
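Here is a minimal sketch of the final combination step described in this abstract, deliberately simplified to boolean values (the actual system propagates sets of truth values and has more conclusion types than shown): each Prior Knowledge concept ends up with Prediction and Expectation evidence, and a conclusion is drawn from the pair.

```python
# Hypothetical, simplified conclusion table in the spirit of GROOLS.
def conclude(predicted, expected):
    table = {
        (True,  True):  "Confirmed-Presence",   # predicted and expected
        (False, True):  "Missing",              # expected but not predicted
        (True,  False): "Unexpected-Presence",  # predicted but not expected
        (False, False): "Absent",
    }
    return table[(predicted, expected)]

print(conclude(False, True))   # -> Missing
```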
Maignan, Luidnel. "Points, distances, et automates cellulaires : algorithmique géométrique et spatiale". Paris 11, 2010. http://www.theses.fr/2010PA112325.
Spatial computing aims at providing a scalable framework where computation is distributed on a uniform computing medium and communication happens locally between nearest neighbors. We study the particular framework of cellular automata, using a regular grid and synchronous updates. We propose to develop primitives allowing the medium to be structured around a set of particles. We consider three problems of a geometrical nature: moving the particles on the grid in order to uniformize their density, constructing their convex hull, and constructing a connected proximity graph establishing connections between nearest particles. The last two problems are considered for multidimensional grids, while uniformization is solved specifically for the one-dimensional grid. Our approach is to consider the metric space underlying the cellular automaton topology and to construct generic mathematical objects based solely on this metric. As a result, the algorithms derived from the properties of those objects generalize over arbitrary regular grids. We implemented the usual ones, including the hexagonal grid and the 4-neighbor and 8-neighbor square grids. All the solutions are based on the same basic component: the distance field, which associates to each site of the space its distance to the nearest particle. While the distance values are not bounded, it is shown that the difference between the values of neighboring sites is bounded, enabling the encoding of the gradient of the distance field into a finite-state field. Our algorithms are expressed in terms of movements according to this gradient, and of detecting patterns in the gradient; they can thus be encoded with a finite number of automaton states, using only about a dozen states.
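Here is a minimal sketch of the distance-field component described in this abstract, on a 1D grid with synchronous updates (the grid size and particle positions are illustrative): each site relaxes towards 0 if it hosts a particle, else 1 + the minimum of its neighbors. At convergence the field equals the distance to the nearest particle, and neighboring values never differ by more than 1, which is what allows the gradient to be stored in a finite number of automaton states.

```python
# Hypothetical synchronous distance-field relaxation on a 1D grid.
def step(field, particles):
    n = len(field)
    new = []
    for i in range(n):
        if particles[i]:
            new.append(0)
        else:
            neigh = [field[j] for j in (i - 1, i + 1) if 0 <= j < n]
            new.append(1 + min(neigh))
    return new

particles = [False, False, True, False, False, False, True, False]
field = [len(particles)] * len(particles)   # start from an upper bound
for _ in range(len(particles)):
    field = step(field, particles)
print(field)   # -> [2, 1, 0, 1, 2, 1, 0, 1]
```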
Pichené, Matthieu. "Analyse multi-niveaux en biologie systémique computationnelle : le cas des cellules HeLa sous traitement apoptotique". Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S026/document.
This thesis examines a new way to study the impact of a given pathway on the dynamics of a tissue through Multi-Level Analysis. The analysis is split into two main parts. The first part considers models describing the pathway at the cellular level. Using these models, one can compute in a tractable manner the dynamics of a group of cells, representing the states of the pathway across the group by a multivariate distribution over the concentrations of key molecules. The second part proposes a 3D model of tissue growth that considers the cell population as a set of subpopulations, partitioned such that each subpopulation shares the same external conditions. For each subpopulation, the tractable model presented in the first part can be used. This thesis focuses mainly on the first part, whereas a chapter covers a draft of a model for the second part.
Najjar, Mohamed Mehdi. "Une approche cognitive computationnelle de représentation de la connaissance au sein des environnements informatiques d'apprentissage". Thesis, Université de Sherbrooke, 2006. http://savoirs.usherbrooke.ca/handle/11143/5069.
Texto completo da fonteRobinet, Vivien. "Modélisation cognitive computationnelle de l'apprentissage inductif de chunks basée sur la théorie algorithmique de l'information". Grenoble INPG, 2009. http://www.theses.fr/2009INPG0108.
This thesis presents a computational cognitive model of inductive learning based on both the MDL principle and the chunking mechanism. The chunking process is used as a basis for many computational cognitive models. The MDL principle is a formalisation of the simplicity principle; it implements the notion of simplicity through the well-defined concept of codelength. The theoretical results justifying the simplicity principle are established through algorithmic information theory, and the MDL principle can be considered a computable approximation of measures defined in this theory. Using these two mechanisms, the model automatically generates the shortest representation of discrete stimuli. Such a representation can be compared to those produced by human participants facing the same set of stimuli. Through the proposed model and experiments, the purpose of this thesis is to assess both the theoretical and the practical effectiveness of the simplicity principle for cognitive modeling.
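Here is a minimal sketch, with a deliberately simplified code, of the MDL criterion this abstract relies on: a representation of a stimulus sequence is scored by the codelength it induces (below, the naive -log2 of empirical frequencies for the recoded sequence plus the chunk dictionary), and the chunking that minimizes the total length is preferred.

```python
# Hypothetical MDL scoring of a chunked representation.
from collections import Counter
from math import log2

def codelength(symbols):
    counts = Counter(symbols)
    total = len(symbols)
    return sum(c * -log2(c / total) for c in counts.values())

raw = list("abababab")
chunked = ["ab"] * 4                 # candidate chunking of the same data

print(codelength(raw))                               # -> 8.0 bits
print(codelength(chunked) + codelength(list("ab")))  # -> 2.0 bits: chunking wins
```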
Cohendet, Romain. "Prédiction computationnelle de la mémorabilité des images : vers une intégration des informations extrinsèques et émotionnelles". Thesis, Nantes, 2016. http://www.theses.fr/2016NANT4033/document.
The study of image memorability in computer science is a recent topic. First attempts were based on learning algorithms used to infer the extent to which a picture is memorable from a set of low-level visual features. In this dissertation, we first investigate the theoretical foundations of image memorability; we especially focus on the emotions the images convey, which are closely related to their memorability. In this light, we propose to widen the scope of image memorability prediction to incorporate not only intrinsic but also extrinsic image information, related to the context of presentation and to the observers. Accordingly, we build a new database for the study of image memorability; this database will be useful to test existing models, trained on the single database available so far. We then introduce deep learning for image memorability prediction: our model obtains the best performance to date. To improve its prediction accuracy, we try to model contextual and individual influences on image memorability. In the final part, we test the performance of computational models of visual attention, which attract growing interest for memorability prediction, on images which vary in their degree of memorability and the emotion they convey. Finally, we present an "emotional" interactive movie, which enables us to study the links between emotion and visual attention for videos.
Faur, Caroline. "Approche computationnelle du regulatory focus pour des agents interactifs : un pas vers une personnalité artificielle". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS352/document.
The development of affective computing is leading to the design of artificial devices endowed with a form of social and emotional intelligence. The study of human-computer interaction in this context offers many research tracks. Among them is the question of personality: how can some characteristics of an artificial personality be modeled? How will these characteristics influence the course of interaction with users? This goal raises several research questions: how should personality be defined? On which models and theories from psychology should we rely to define an artificial personality? Which methodology will help to address the implementation of such a complex psychological concept? What could artificial personality bring to the field of human-computer interaction? And to the psychology of personality? How can these contributions be evaluated experimentally? To address these issues, this thesis takes a multidisciplinary approach, at the crossroads of computer science and psychology. Given its relevance to a computational approach, we modeled self-regulation as a component of personality. This concept is approached through regulatory focus theory. On this theoretical basis, a conceptual framework and a computational model are proposed. Our theoretical proposals led to two data-driven implementations (dimensional vs. socio-cognitive) which endowed our artificial agents with regulatory focus by using machine learning. A French questionnaire measuring regulatory focus was designed and validated. Two user studies (brief interaction with artificial agents vs. repeated sessions with animated agents), where the regulatory focus of agents is conveyed via game strategies, enabled the study of regulatory focus perception and its impact on the interaction. Our results support the use of regulatory focus in affective computing and open perspectives on the theoretical and methodological links between computer science and psychology.
Ahn, Yun-Kang. "L'analyse musicale computationnelle : rapport avec la composition, la segmentation et la représentation à l'aide de graphes". PhD thesis, Université Pierre et Marie Curie - Paris VI, 2009. http://tel.archives-ouvertes.fr/tel-00447778.
Texto completo da fonteNdiaye, Dieynaba. "Rôle des mitochondries dans la régulation des oscillations de calcium des hépatocytes : approches expérimentale et computationnelle". Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00998301.
Texto completo da fonteAhn, Yun-Kang. "L'analyse musicale computationnelle : rapport avec la composition, la segmentation et la représentation à l'aide des graphes". Paris 6, 2009. https://tel.archives-ouvertes.fr/tel-00447778.
Chen, Dexiong. "Modélisation de données structurées avec des machines profondes à noyaux et des applications en biologie computationnelle". Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM070.
Developing efficient algorithms to learn appropriate representations of structured data, including sequences or graphs, is a central challenge in machine learning. To this end, deep learning has become popular for structured data modeling. Deep neural networks have drawn particular attention in various scientific fields such as computer vision, natural language understanding or biology. For instance, they provide computational tools for biologists to possibly understand and uncover biological properties or relationships among macromolecules within living organisms. However, most of the success of deep learning methods in these fields essentially relies on the guidance of empirical insights as well as huge amounts of annotated data. Exploiting more data-efficient models is necessary, as labeled data is often scarce. Another line of research is kernel methods, which provide a systematic and principled approach for learning non-linear models from data of arbitrary structure. In addition to their simplicity, they exhibit a natural way to control regularization and thus to avoid overfitting. However, the data representations provided by traditional kernel methods are only defined by simple hand-crafted features, which makes them perform worse than neural networks when enough labeled data are available. More complex kernels, inspired by prior knowledge used in neural networks, have thus been developed to build richer representations and thus bridge this gap. Yet, they are less scalable. By contrast, neural networks are able to learn a compact representation for a specific learning task, which allows them to retain the expressivity of the representation while scaling to large sample sizes. Incorporating complementary views of kernel methods and deep neural networks to build new frameworks is therefore useful to benefit from both worlds. In this thesis, we build a general kernel-based framework for modeling structured data by leveraging prior knowledge from classical kernel methods and deep networks. Our framework provides efficient algorithmic tools for learning representations without annotations as well as for learning more compact representations in a task-driven way. Our framework can be used to efficiently model sequences and graphs with simple interpretation of predictions. It also offers new insights about designing more expressive kernels and neural networks for sequences and graphs.
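As background for the kernel-versus-network trade-off discussed in this abstract, here is a minimal sketch — not the thesis code — of one standard way to make kernel representations compact and scalable, the Nyström approximation: a finite-dimensional feature map is built from a small set of anchor points so that inner products of features approximate kernel values.

```python
# Hypothetical Nystrom feature map for a Gaussian kernel.
import numpy as np

def gaussian_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_features(X, anchors, gamma=0.5):
    K_aa = gaussian_kernel(anchors, anchors, gamma)
    U, s, _ = np.linalg.svd(K_aa)
    K_xa = gaussian_kernel(X, anchors, gamma)
    # phi(x) such that <phi(x), phi(y)> ~ k(x, y)
    return K_xa @ U / np.sqrt(np.maximum(s, 1e-12))
```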
Da Silva Figueiredo Celestino, Priscila. "Étude des mécanismes oncogéniques d'activation et de résistance des récepteurs tyrosine kinase de type III". Thesis, Cachan, École normale supérieure, 2015. http://www.theses.fr/2015DENS0026/document.
The receptor tyrosine kinases (RTKs) CSF-1R and KIT are important mediators of signal transduction. Their normal function is altered by gain-of-function mutations associated with cancers. A secondary effect of these mutations is the alteration of the receptors' sensitivity to imatinib, a drug employed in cancer treatment. Our goals in this thesis are (i) to study the structural and dynamical effects induced by the D802V mutation in CSF-1R, and (ii) to characterize imatinib's affinity for the wild-type (WT) and mutant forms of KIT (V560G, S628N and D816V) and CSF-1R (D802V). By means of molecular dynamics (MD) simulations, we have shown that the D802V mutation disrupts the allosteric communication between the activation loop and the auto-inhibitory juxtamembrane region (JMR). However, this disruption is not sufficient to induce the JMR's departure. The subtle effect of this mutation in CSF-1R was associated with differences between the primary sequences of CSF-1R and KIT in the JMR region. The affinity of imatinib for the different targets was estimated by docking, MD and binding energy calculations. The electrostatic interactions proved to be the main force driving the resistance, with mutations D802/816V being the most deleterious in their energy contribution. As a general conclusion, we have established that the D802V mutation in CSF-1R does not provoke the same structural effects as its equivalent in KIT. In addition, the study of both receptors in their WT and mutant forms complexed with imatinib indicates that the conformational changes induced by the mutations, combined with the electrostatic interactions with the ligand, could explain the resistance phenomena.
Aliakbari Khoei, Mina. "Une approche computationnelle de la dépendance au mouvement du codage de la position dans le système visuel". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4041/document.
Coding the position of moving objects is an essential ability of the visual system in fulfilling precise and robust tracking tasks. This thesis focuses on the following question: how does the visual system efficiently encode the position of moving objects, despite various sources of uncertainty? This study deploys the hypothesis that the visual system uses prior knowledge on the temporal coherency of motion (Burgi et al 2000; Yuille and Grzywacz 1989). We implemented this prior by extending the modeling framework previously proposed to explain the aperture problem (Perrinet and Masson, 2012), so-called motion-based prediction (MBP). This model is a Bayesian motion estimation framework implemented by particle filtering. Based on it, we introduced a theory of motion-based position coding, to investigate how neural mechanisms encoding the instantaneous position of moving objects might be affected by motion. The results of this thesis suggest that motion-based position coding might be a generic neural computation across all stages of the visual system. This mechanism might partially compensate for the cumulative and restrictive effects of neural delays in position coding. It may also account for motion-based position shifts such as the flash-lag effect. As a specific case, the diagonal MBP model reproduced the anticipatory response of neural populations in the primary visual cortex of the macaque monkey. Our results imply that efficient and robust position coding might be highly dependent on trajectory integration, and that it constitutes a key neural signature for studying the more general problem of predictive coding in sensory areas.
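Here is a minimal sketch of motion-based prediction as a particle filter, with all dynamics and noise parameters invented for illustration: particles carry (position, velocity); the temporal-coherence prior enters through a weak velocity diffusion at prediction time, and noisy position observations reweight the particles.

```python
# Hypothetical 1D particle filter with a motion-coherence prior.
import numpy as np

rng = np.random.default_rng(0)
N = 1000
pos = rng.normal(0.0, 1.0, N)            # particle positions
vel = rng.normal(0.0, 1.0, N)            # particle velocities
dt, sigma_v, sigma_obs = 0.1, 0.05, 0.5

def mbp_step(pos, vel, z):
    # predict: coherent trajectories, weak velocity diffusion (the prior)
    vel = vel + rng.normal(0.0, sigma_v, N)
    pos = pos + vel * dt
    # update: weight by likelihood of the observed position z, then resample
    w = np.exp(-0.5 * ((z - pos) / sigma_obs) ** 2)
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)
    return pos[idx], vel[idx]

for t in range(50):                       # track a target moving at speed 1
    z = 1.0 * t * dt + rng.normal(0.0, sigma_obs)
    pos, vel = mbp_step(pos, vel, z)
print(pos.mean(), vel.mean())             # estimates near 4.9 and 1.0
```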
Carpentier, Grégoire. "Approche computationnelle de l'orchestration musicale - Optimisation multicritère sous contraintes de combinaisons instrumentales dans de grandes banques de sons". PhD thesis, Université Pierre et Marie Curie - Paris VI, 2008. http://tel.archives-ouvertes.fr/tel-00417606.
Texto completo da fonteLes rares outils actuels ramènent le problème de l'orchestration à la découverte, au sein de banques d'échantillons sonores instrumentaux, de combinaisons approchant au mieux un timbre fixé par le compositeur. Cette approche sera également la nôtre. Mais là où les méthodes actuelles contournent systématiquement le problème combinatoire de l'orchestration par le recours à des principes de décomposition ou à des algorithmes de matching pursuit, l'originalité de notre démarche est de placer les enjeux combinatoires au coeur de nos travaux et de traiter l'orchestration à la mesure de sa complexité.
Envisageant tout d'abord la question comme un problème de sac à dos multi-objectifs, nous montrons que les non-linéarités dans les modèles de perception du timbre imposent un cadre théorique plus large pour l'aide à l'orchestration. Nous proposons une formalisation générique et extensible en nous plaçant dans un cadre de recherche combinatoire multicritère sous contraintes, dans lequel plusieurs dimensions perceptives sont optimisées conjointement pour approcher un timbre cible défini par le compositeur.
Nous validons dans un premier temps notre approche théorique en montrant, sur un ensemble de problèmes de petite taille et pour une caractérisation exclusivement spectrale du timbre, que les solutions du problème formel correspondent à des propositions d'orchestration pertinentes. Nous présentons alors un algorithme évolutionnaire permettant de découvrir en un temps raisonnable un ensemble de solutions optimales. S'appuyant sur la prédiction des propriétés acoustiques des alliages instrumentaux, cette méthode propose des solutions d'orchestration en fonction de critères perceptifs et encourage ainsi la découverte de mélanges de timbres auxquels le savoir et l'expérience n'auraient pas nécessairement conduit.
En outre, la recherche peut-être à tout moment orientée dans une direction privilégiée. Parallèlement, nous définissons un cadre formel pour l'expression de contraintes globales et introduisons une métaheuristique innovante de résolution, permettant de guider la recherche vers des orchestrations satisfaisant un ensemble de propriétés symboliques en lien direct avec l'écriture musicale.
Nous présentons enfin un prototype expérimental d'outil d'aide à l'orchestration utilisable directement par les compositeurs, dans lequel l'exploration des possibilités de timbres est facilitée à travers une représentation multi-points de vue des solutions et un mécanisme interactif des préférences d'écoute. Nous terminons avec une série d'exemples d'application de nos travaux à des problèmes compositionnels concrets.