Dissertations on the topic "Prédiction des processus"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for your research on the topic "Prédiction des processus".
Next to each work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, whenever these are available in the work's metadata.
Browse dissertations from many different fields and put together a well-formed bibliography.
Cheze-Payaud, Nathalie. „Régression, prédiction et discrétisation des processus à temps continu“. Paris 6, 1994. http://www.theses.fr/1994PA066524.
Breton, Nicolas. „Prédiction des structures secondaires séquentiellement optimales de l'ARN“. Université de Marne-la-Vallée, 1998. http://www.theses.fr/1998MARN0020.
Wintenberger, Olivier. „Contributions à la statistique des processus : estimation, prédiction et extrêmes“. Habilitation à diriger des recherches, Université Paris Dauphine - Paris IX, 2012. http://tel.archives-ouvertes.fr/tel-00757756.
Koval, Morgane. „Visualisations prédictives pour des processus décisionnels personnels informés“. Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0039.
In everyday life, one needs to make a variety of micro-decisions which may have a greater or lesser impact on comfort, and greater or lesser consequences on other aspects of life (planning, finances, ecological impact). Many such decisions involve projecting oneself into the future (e.g., how long will it take to pick up a parcel? Will there be enough time left to go to the bakery before it closes?). This thesis explores the use of predictive visualizations to inform different decision processes in casual (i.e., informal, non-professional) contexts. Predictive visualizations are visualizations of uncertainty that show plausible future outcomes. This work builds on the assumption that visualizing data about uncertain future events, generated by simulations, can support a better mental representation and understanding of what may happen, and thus informed decisions. Two cases are studied to evaluate the potential of such an approach: temporal uncertainty (how long a task may take) and space-based temporal uncertainty (how long walking between two points may take). However, data other than temporal data could also benefit from predictive visualizations. To present the issues involved in implementing and studying predictive visualizations using new technologies (e.g., augmented reality) and simulations whose models are, to date, difficult to implement, a final case is studied, focusing on the representation of food quantities for predictive purposes. This allows detailing and commenting on the future directions this field may seek to address, and on the potential of predictive visualizations for informed decisions in casual contexts.
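The simulation-based idea behind such predictive visualizations can be sketched in a few lines (a hypothetical lognormal model of task duration and plain-text quantile summaries stand in for the actual visualizations studied in the thesis):

```python
import math
import random

def simulate_durations(median_min=12.0, sigma=0.4, n=10_000, seed=42):
    """Draw plausible 'time to pick up a parcel' outcomes (in minutes)
    from an assumed lognormal model of task duration."""
    rng = random.Random(seed)
    mu = math.log(median_min)  # lognormal median = exp(mu)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

def quantile_dots(samples, probs=(0.05, 0.25, 0.5, 0.75, 0.95)):
    """Summarize simulated futures as quantiles -- the raw material of a
    predictive visualization such as a quantile dot plot."""
    xs = sorted(samples)
    return {p: xs[int(p * (len(xs) - 1))] for p in probs}

if __name__ == "__main__":
    for p, v in quantile_dots(simulate_durations()).items():
        print(f"{int(p * 100):>2}% of simulated runs finish within {v:5.1f} min")
```

Replacing the text summary with a graphical encoding of the same quantiles is what turns this simulation output into a predictive visualization.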
Mangin, Christian. „Simulation, estimation des paramètres et prédiction pour un processus de Kendall“. Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/6749.
Espinasse, Thibault. „Champs et processus gaussiens indexés par des graphes, estimation et prédiction“. Toulouse 3, 2011. http://thesesups.ups-tlse.fr/1578/.
In this work, we study Gaussian processes indexed by graphs. We aim at providing tools for modeling, estimation, and prediction that use the structure of the underlying graphs. In the first chapter, we deal with the blind prediction problem and compute, in the case of short-range dependency, the rate of convergence of the bias in the prediction error. This rate depends on the regularity of the spectral density of the process. Then, we use the eigenstructure of the adjacency operator of a graph to propose models for covariance operators of Gaussian fields indexed by this graph. This leads to a spectral representation for the operator, which can be used to extend the Whittle approximation and quasi-maximum likelihood estimation. Finally, this construction may be extended to the spatio-temporal case, where the Szegö lemma still holds.
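The spectral construction of covariance operators alluded to above can be illustrated schematically: applying a positive function to the adjacency spectrum yields a valid covariance for a Gaussian field on the graph (the graph and the spectral density below are illustrative, not taken from the thesis):

```python
import numpy as np

def spectral_covariance(adjacency, spectral_density):
    """Covariance operator K = f(A) of a Gaussian field indexed by a graph,
    built from the eigendecomposition of the symmetric adjacency matrix A.
    f must be positive on the spectrum of A for K to be positive definite."""
    eigvals, eigvecs = np.linalg.eigh(adjacency)
    return (eigvecs * spectral_density(eigvals)) @ eigvecs.T

if __name__ == "__main__":
    # 4-cycle graph; its adjacency spectrum is {2, 0, 0, -2}, so
    # f(lam) = 1 / (3 - lam) is positive on the whole spectrum.
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    K = spectral_covariance(A, lambda lam: 1.0 / (3.0 - lam))
    rng = np.random.default_rng(0)
    field = rng.multivariate_normal(np.zeros(4), K)  # one realization on the graph
    print(np.round(K, 3))
```

The same recipe extends to any symmetric graph operator; estimating the function f from data is what the Whittle-type approximation addresses.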
Deregnaucourt, Thomas. „Prédiction spatio-temporelle de surfaces issues de l'imagerie en utilisant des processus stochastiques“. Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC088.
The prediction of a surface is now an important problem due to its use in multiple domains, such as computer vision and the simulation of avatars for cinematography or video games. Since a surface can be static or dynamic, i.e. evolving with time, the problem can be separated into two classes: a spatial prediction problem and a spatio-temporal one. In order to propose a new approach for each of these problems, this thesis is divided into two parts. First, we sought to predict a static surface, assumed cylindrical, known only partially through curves. The proposed approach deforms a cylinder onto the known curves in order to reconstruct the surface of interest. A correspondence between the known curves and the cylinder is first generated with the help of shape analysis tools. Once this step is done, an interpolation of the deformation field, assumed Gaussian, is estimated using maximum likelihood and Bayesian inference. This methodology has then been applied to real data from two imaging domains: medical imaging and computer graphics. The results show that the proposed approach outperforms existing methods in the literature, with better results when using Bayesian inference. Second, we were interested in the spatio-temporal prediction of dynamic surfaces. The objective was to predict a dynamic surface from its initial surface. Since the prediction needs to learn from known observations, we first developed a spatio-temporal surface analysis tool. This analysis is based on shape analysis tools and allows better learning. Once this preliminary step is done, we estimated the temporal deformation of the dynamic surface of interest. More precisely, an adaptation of the usual statistical estimators, usable on the space of surfaces, was employed. Applying this estimated deformation to the initial surface yields an estimate of the dynamic surface.
This process has then been applied to the prediction of 4D facial expressions, allowing us to generate visually convincing expressions.
Chagneau, Pierrette. „Modélisation bayésienne hiérarchique pour la prédiction multivariée de processus spatiaux non gaussiens et processus ponctuels hétérogènes d'intensité liée à une variable prédite : application à la prédiction de la régénération en forêt tropicale humide“. Montpellier 2, 2009. http://www.theses.fr/2009MON20157.
One of the weak points of forest dynamics models is recruitment. Classically, ecologists assume that recruitment mainly depends on both the spatial pattern of mature trees and the environment. A detailed inventory of the stand and of the environmental conditions enabled them to show the effects of these two factors on the local density of seedlings. In practice, such information is not available: only a part of the seedlings is sampled and the environment is only partially observed. The aim of this work is to propose an approach for predicting the spatial distribution and genotype of seedlings on the basis of a reasonable sampling of seedlings, mature trees, and environmental conditions. The spatial pattern of the seedlings is assumed to be a realization of a marked point process. The intensity of the process is related not only to seed and pollen dispersal but also to sapling survival. Sapling survival depends on the environment, so the environment must be predicted over the whole study area. The environment is characterized through spatial variables of different natures, and predictions are obtained using a hierarchical spatial model. Unlike existing models, which assume the environmental covariates to be known exactly, the recruitment model we propose takes into account the error related to the prediction of the environment. The prediction of seedling recruitment in the tropical rainforest of French Guiana illustrates our approach.
Carraro, Laurent. „Questions de prédiction pour le mouvement brownien et le processus de Wiener à plusieurs paramètres“. Lyon 1, 1985. http://www.theses.fr/1985LYO11660.
Kozhemyak, Alexey. „Modélisation de séries financières à l'aide de processus invariants d'échelle. Application à la prédiction du risque“. Phd thesis, Ecole Polytechnique X, 2006. http://pastel.archives-ouvertes.fr/pastel-00002224.
Kozhemyak, Alexey. „Modélisation de séries financières à l'aide de processus invariants d'échelles. Application à la prédiction du risque“. Palaiseau, Ecole polytechnique, 2006. http://www.theses.fr/2006EPXX0054.
Chauvel, Cecile. „Processus empiriques pour l'inférence dans le modèle de survie à risques non proportionnels“. Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066399/document.
In this thesis, we focus on particular empirical processes on which we can base inference in the non-proportional hazards model. This time-varying coefficient model generalizes the widely used proportional hazards model in the field of survival analysis. Our focus is on the standardized score process, a sequential sum of standardized model-based residuals. We first consider the process with one covariate in the model, before looking at its extension to multiple, possibly correlated, covariates. The manuscript is composed of three parts. In the first part, we establish the limit properties of the process under the model as well as under a misspecified model. In the second part, we use these convergence results to derive tests for the value of the model parameter. We show that one proposed test is asymptotically equivalent to the log-rank test, which is a benchmark for comparing the survival experience of two or more groups. We construct more powerful tests under some alternatives. Finally, in the last part, we propose a methodology linking prediction and goodness of fit in order to construct models. The resulting models have a good fit and optimize predictive ability. We also introduce a goodness-of-fit test for the proportional hazards model. The performance of our methods, whether tests for the parameter value or goodness-of-fit tests, is compared to standard methods via simulations. The methods are illustrated on real-life datasets.
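A schematic version of such a standardized score process, as a sequential sum of standardized residuals, might look as follows (the residuals are toy numbers; the actual process uses model-based residuals from a fitted hazards model):

```python
import math

def standardized_score_process(residuals):
    """Sequential, standardized cumulative sum of residuals. Under a
    well-specified model the path should look like noise around zero;
    a systematic drift suggests a time-varying coefficient. Because the
    residuals are centered here, the path always ends at exactly zero."""
    n = len(residuals)
    mean = sum(residuals) / n
    sd = math.sqrt(sum((r - mean) ** 2 for r in residuals) / n)
    path, s = [], 0.0
    for r in residuals:
        s += (r - mean) / (sd * math.sqrt(n))  # sqrt(n) scaling for a limit process
        path.append(s)
    return path

if __name__ == "__main__":
    w = standardized_score_process([0.3, -1.1, 0.8, 0.2, -0.4, 1.5, -0.9, -0.3])
    print([round(x, 3) for x in w])
```

Inspecting where the path exits a confidence band is the intuition behind the tests the thesis derives from the limit distribution of this process.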
Brunet, François. „Etudes des modes de déformation angulaire des molécules tétraédriques XY4 : prédiction de transitions multiphotoniques du silane“. Dijon, 1991. http://www.theses.fr/1991DIJOS003.
Fulgenzi, Chiara. „Navigation autonome en environnement dynamique utilisant des modèles probabilistes de perception et de prédiction du risque de collision“. Phd thesis, Grenoble INPG, 2009. http://www.theses.fr/2009INPG0078.
In this document we address the problem of autonomous navigation in dynamic, unknown environments. The key of the problem is to guarantee safety for all the agents moving in the space (people, vehicles, and the robot itself) while moving toward a predefined goal. In contrast with static or controlled environments, highly dynamic environments present many difficult issues: the detection and tracking of moving obstacles, the prediction of the future state of the world, and on-line motion planning and navigation. Our starting point is the fact that the decision about motion must take into account the limits of sensor perception and the velocity and future trajectory of the moving obstacles, and that it must be tied to the on-line updating of the world perception. Our approach is based on probabilistic frameworks to represent the uncertainty about the static environment and the dynamic obstacles, and we have developed decision methods that explicitly take such information into account for the autonomous navigation task. At first we focused our attention on reactive approaches. The developed approach integrates a probabilistic extension of the Velocity Obstacle framework within a dynamic occupancy grid (the Bayesian Occupancy Filter). The dynamic occupancy grid gives a probabilistic and explicit representation of the perception limits and of the model error and noise, while the Probabilistic Velocity Obstacles compute a probability of collision over time. The probability of collision, and hence the choice of the next control, takes into account limited sensor range, occlusions, obstacle shapes, and velocity estimation errors. Simulation results show the robot adapting its behaviour to different perception conditions, avoiding static and moving obstacles and moving toward the goal.
In the second part of the document we move toward more complex strategies, integrating a partial motion planning algorithm with a probabilistic representation of the environment and collision risk prediction. We propose a novel probabilistic extension of the well-known Rapidly-exploring Random Trees framework and integrate this search algorithm into an anytime partial planning approach, updating the robot's decisions on-line with the latest observations. We first consider the case of a short-term prediction based on a classic multi-target tracking algorithm. Second, we consider the case of a longer-term prediction based on a priori known typical patterns. We compare two kinds of pattern representation: Hidden Markov Models and continuous Gaussian Processes. The search and planning algorithm is adapted to these different kinds of prediction, and simulation results for navigation among multiple moving obstacles are shown.
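The probabilistic velocity obstacle idea, a collision probability that accounts for velocity estimation errors, can be sketched with a Monte Carlo estimate (all geometry and noise parameters below are invented for illustration):

```python
import math
import random

def collision_probability(robot_vel, obs_pos, obs_vel_mean, obs_vel_std,
                          radius=0.5, horizon=5.0, dt=0.1,
                          n_samples=2000, seed=1):
    """Monte Carlo sketch of a probabilistic velocity obstacle: the
    probability that a candidate robot velocity leads to collision within
    the horizon, given a Gaussian estimate of the obstacle's velocity."""
    rng = random.Random(seed)
    hits, steps = 0, int(horizon / dt)
    for _ in range(n_samples):
        # Sample an obstacle velocity and work in the obstacle-relative frame.
        rvx = robot_vel[0] - rng.gauss(obs_vel_mean[0], obs_vel_std)
        rvy = robot_vel[1] - rng.gauss(obs_vel_mean[1], obs_vel_std)
        for k in range(1, steps + 1):
            t = k * dt
            dx = obs_pos[0] - rvx * t
            dy = obs_pos[1] - rvy * t
            if math.hypot(dx, dy) < 2 * radius:  # disc-disc overlap
                hits += 1
                break
    return hits / n_samples

if __name__ == "__main__":
    toward = collision_probability((1.0, 0.0), (4.0, 0.0), (0.0, 0.0), 0.2)
    away = collision_probability((-1.0, 0.0), (4.0, 0.0), (0.0, 0.0), 0.2)
    print(f"P(collision) heading toward the obstacle: {toward:.2f}")
    print(f"P(collision) heading away:                {away:.2f}")
```

A reactive planner would evaluate this probability for each admissible velocity and pick a low-risk one that still makes progress toward the goal.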
Fulgenzi, Chiara. „Navigation autonome en environnement dynamique utilisant des modèles probabilistes de perception et de prédiction du risque de collision“. Phd thesis, Grenoble INPG, 2009. http://tel.archives-ouvertes.fr/tel-00398055.
Coste, Nicolas. „Vers la prédiction de performance de modèles compositionnels dans les architectures GALS“. Phd thesis, Grenoble, 2010. http://www.theses.fr/2010GRENM028.
Validation, comprising functional verification and performance evaluation, is critical for complex hardware designs. Indeed, due to the high level of parallelism in modern designs, a functionally verified design may not meet its performance specifications. In addition, the later a design error is identified, the greater its cost. Thus, validation of designs should start as early as possible. This thesis proposes a compositional modeling framework, taking into account functional and time aspects of hardware systems, and defines a performance evaluation approach to analyze constructed models. The modeling framework, called Interactive Probabilistic Chain (IPC), is a discrete-time process algebra, representing delays as probabilistic phase type distributions. We defined a branching bisimulation and proved that it is a congruence with respect to parallel composition, a crucial property for compositional modeling. IPCs can be considered as a transposition of Interactive Markov Chains in a discrete-time setting, allowing a precise and compact modeling of fixed hardware delays. For performance evaluation, a fully specified IPC is transformed, assuming urgency of actions, into a discrete-time Markov chain, which can then be analyzed. Additionally, we defined a performance measure, called latency, and provided an algorithm to compute its long-run average distribution. The modeling approach and the computation of latency distributions have been implemented in a tool-chain relying on the CADP toolbox. Using this tool-chain, we studied communication aspects of an industrial hardware design, the xSTtream architecture, developed at STMicroelectronics
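The final analysis step, computing long-run behavior of the discrete-time Markov chain obtained from a fully specified IPC, can be illustrated with a generic power iteration (the toy two-state chain below is not from the thesis):

```python
def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Long-run distribution of a discrete-time Markov chain with
    row-stochastic transition matrix P, by power iteration. Converges for
    irreducible, aperiodic chains; long-run averages of measures such as
    latency are then expectations under this distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi

if __name__ == "__main__":
    # Toy chain: a hardware unit alternating between 'busy' and 'idle'.
    P = [[0.9, 0.1],
         [0.5, 0.5]]
    print([round(x, 4) for x in stationary_distribution(P)])  # -> [0.8333, 0.1667]
```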
Jung, Sung-Cheol. „La personnalité dans le processus d’évaluation : stabilité structurale de l’inventaire NEO-PI-R et prédiction des comportements au travail“. Paris 10, 2007. http://www.theses.fr/2007PA100021.
The Five Factor Model of personality, which has very good cross-cultural and trans-situational factorial stability, made it possible to undertake scientific studies on the relationships between personality and performance or counter-productive behaviors at work. The principal aims of this study are to examine 1) the effects of social desirability on the factor structure and the scores of the NEO-PI-R in a recruitment situation, 2) the predictive power of the NEO-PI-R for performance and counter-productive behaviors at work, and 3) the interpretation of personality disorders by means of the NEO-PI-R and its applicability to the prediction of counter-performance. We conducted our research with 21,349 job applicants in a French public transportation company. Our results show that the factor structure of the NEO-PI-R was not affected by social desirability; nevertheless, its scores were affected in the recruitment situation. The applicants presented themselves as more "Conscientious" and less "Neurotic" than the individuals of the normative sample. The personality disorder scores calculated by means of the NEO-PI-R have correct sensitivities in spite of the asymmetry of the distributions. As for the relationship between personality and performance, the "Conscientiousness" dimension positively predicts success in training and negatively predicts counter-productive behaviors, while the "Neuroticism" dimension positively predicts bus drivers' accidents. The examination of the predictability of performance with the facets and the personality disorders offers additional information.
Bou, nader Ralph. „Enhancing email management efficiency : A business process mining approach“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAS017.
Business Process Management (BPM) involves continuous improvement through stages such as design, modeling, execution, monitoring, optimization, and automation. A key aspect of BPM is Business Process (BP) mining, which analyzes event logs to identify process inefficiencies and deviations, focusing on process prediction and conformance checking. This thesis explores the challenges of BP mining within email-driven processes, which are essential for streamlining operations and maximizing productivity.Conformance checking ensures that actual process execution aligns with predicted models, maintaining adherence to predefined standards. Process prediction forecasts future behavior based on historical data, aiding in resource optimization and workload management. Applying these techniques to email-driven processes presents unique challenges, as these processes lack the formal models found in traditional BPM systems and thus require tailored methodologies.The unique structure of email-derived event logs, featuring attributes such as interlocutor speech acts and relevant business data, complicates the application of standard BP mining methods. Integrating these attributes into existing business process techniques and email systems demands advanced algorithms and substantial customization, further complicated by the dynamic context of email communications.To address these challenges, this thesis aims to implement multi-perspective conformance checking and develop a process-activity-aware email response recommendation system. This involves creating a process model based on sequential and contextual constraints specified by a data analyst/expert, developing algorithms to identify fulfilling and violating events, leveraging event logs to predict BP knowledge, and recommending email response templates.
The guiding principles include context sensitivity, interdisciplinarity, consistency, automation, and integration. The contributions of this research include a comprehensive framework for analyzing email-driven processes, combining process prediction and conformance checking to enhance email communication by suggesting appropriate response templates and evaluating emails for conformance before sending. Validation is achieved through real email datasets, providing a practical basis for comparison and future research.
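A minimal stand-in for this kind of conformance checking, testing an event trace against "activity a must eventually be followed by activity b" constraints, could look as follows (the constraint set and activity names are invented):

```python
def violations(trace, follows):
    """Return the (a, b) obligations that a single event trace leaves
    unfulfilled, for constraints of the form 'a must eventually be
    followed by b'. A sketch of one perspective of conformance checking;
    the thesis adds contextual and data-aware constraints on top."""
    open_obligations = []
    for event in trace:
        # An occurrence of b discharges any pending (a, b) obligation.
        open_obligations = [(a, b) for (a, b) in open_obligations if b != event]
        if event in follows:
            open_obligations.append((event, follows[event]))
    return open_obligations  # obligations never fulfilled by the trace

if __name__ == "__main__":
    follows = {"request_received": "reply_sent", "quote_asked": "quote_sent"}
    trace = ["request_received", "quote_asked", "quote_sent"]
    print(violations(trace, follows))  # -> [('request_received', 'reply_sent')]
```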
Coste, Nicolas. „Vers la prédiction de performance de modèles compositionnels dans les architectures GALS“. Phd thesis, Université de Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00538425.
Richard, Hugues. „Prédiction de la localisation cellulaire des protéines à l'aide de leurs séquences biologiques“. Phd thesis, Université d'Evry-Val d'Essonne, 2005. http://tel.archives-ouvertes.fr/tel-00011707.
Thus, the majority of this thesis work addresses the problem of predicting the cellular compartment of a protein from its primary sequence.
We set out to propose descriptive alternatives to existing methods for predicting subcellular localization, using: (1) new descriptors derived from the nucleic acid sequence, and (2) an approach based on hidden Markov models (HMMs) and decision trees. The HMM approach is biologically justified a posteriori, since it allows targeting signals to be modeled jointly with the overall composition. Moreover, the hierarchical classification step using trees clearly improves the classification results. Comparisons with existing methods based on global-composition descriptors show similar performance.
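A hidden-Markov-model score of the kind such a localization predictor compares across compartments can be sketched with the scaled forward algorithm (the two-state model and all its parameters are invented for illustration):

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a symbol sequence under a discrete HMM, computed
    with the scaled forward algorithm (rescaling avoids underflow on long
    sequences). A localization predictor would compare this score across
    per-compartment models."""
    states = list(start)
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    loglik = 0.0
    for t in range(len(obs)):
        if t > 0:
            alpha = {s: emit[s][obs[t]]
                        * sum(alpha[r] * trans[r][s] for r in states)
                     for s in states}
        norm = sum(alpha.values())
        loglik += math.log(norm)
        alpha = {s: a / norm for s, a in alpha.items()}
    return loglik

# Toy two-state model over a hydrophobic ('h') / polar ('p') alphabet:
START = {"M": 0.5, "S": 0.5}
TRANS = {"M": {"M": 0.9, "S": 0.1}, "S": {"M": 0.1, "S": 0.9}}
EMIT = {"M": {"h": 0.8, "p": 0.2}, "S": {"h": 0.3, "p": 0.7}}

if __name__ == "__main__":
    print(forward_loglik("hhhhhh", START, TRANS, EMIT))
    print(forward_loglik("pppppp", START, TRANS, EMIT))
```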
Rynkiewicz, Joseph. „Modèles hybrides intégrant des réseaux de neurones artificiels à des modèles de chaînes de Markov cachées : application à la prédiction de séries temporelles“. Paris 1, 2000. http://www.theses.fr/2000PA010077.
Aussem, Alexandre. „Théorie et applications des réseaux de neurones récurrents et dynamiques à la prédiction, à la modélisation et au contrôle adaptif des processus dynamiques“. Paris 5, 1995. http://www.theses.fr/1995PA05S002.
Boeken, Tom. „Modèles mathématiques pour la radiologie interventionnelle personnalisée : Application au traitement du cancer“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX036.
The integration of computer vision into image-guided interventions has the potential to change medical practice. This work lays some of the groundwork for the future of autonomous interventions in our specific field, regarding cancer patients, addressing key components necessary for its realization. We first explore the transformative impact of AI on the physical abilities of interventional radiologists, emphasizing the need to navigate technical and ethical challenges. Interdisciplinary collaboration and robust evaluation processes are highlighted as essential for the safe integration of AI into clinical practice. We then propose an organ-agnostic method for detecting focal anomalies in volumetric cross-sectional imaging. Leveraging the Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework, this approach showcases enhanced object reconstruction and precise lesion localization. In the same framework, we propose a classifier for a setting where patient selection presents unique challenges due to complex benefit/risk ratios. To go beyond images, clinical data from tumor DNA analysis is integrated into a prospective study specifically conducted for this work. Generative Adversarial Networks (GANs) and atlas modelling using the Markov Chain Monte Carlo - Stochastic Approximation Expectation-Maximization (MCMC-SAEM) algorithm are used to predict patient trajectories. This approach enables the exploration of new trajectories, enhancing our understanding of disease progression and treatment response in relation to circulating tumor DNA. Lastly, we explore advanced visualization techniques for in vivo and ex vivo 3D vasculature, proposing a planar representation of previously undescribed anatomy that offers a promising avenue for further exploration and understanding. Together, these sections offer solutions to parts of the realization of autonomous interventions in our field.
Delecroix, Michel. „Sur l'estimation et la prévision non paramétrique des processus ergodiques : cycles économiques et taches solaires“. Lille 1, 1987. http://www.theses.fr/1987LIL10146.
Renaud, Bertrand. „Aide à la décision médicale par les règles de prédiction clinique au service d'urgence : l'exemple de la pneumopathie aigue communautaire“. Paris 6, 2009. http://www.theses.fr/2009PA066543.
The exponentially increasing amount of medical knowledge compromises its transfer to medical practice and results in suboptimal quality of care. This is of particular concern in emergency medicine. Indeed, in few other domains of medicine is there such variety, novelty, distraction, and chaos, all juxtaposed with a need for expeditious and judicious thinking, and in no other area of medicine is decision density as high. Emergency medicine is therefore particularly likely to reveal the cognitive limits of medical decision making. Indeed, medical decisions mainly depend on emergency physicians' ability to predict patients' outcomes based on data available at presentation. Clinical prediction rules are the best evidence for guiding medical decisions. The following text reports several studies conducted by the emergency department team of the H. Mondor university hospital on the usefulness of a clinical prediction rule for guiding the medical decision-making process for patients presenting with community-acquired pneumonia (CAP). First, the European validation of the Pneumonia Severity Index (PSI), initially developed in North America, is reported. The second study reports the impact of routine use of the PSI in French emergency departments. Then, we report an evaluation of professional practices consisting in the implementation of a comprehensive strategy that included PSI assessment via the emergency department's computerized medical file. Finally, the last two reports present, on the one hand, the development of a new clinical prediction rule for severe CAP (REA-ICU: Risk of Early Admission to Intensive Care Unit) and, on the other hand, a demonstration by recurrence of the actual usefulness of this new rule, which could significantly modify medical practices.
Es-Seddiqi, Mouna. „Le rôle de la voie amygdalo-nigro-striée dans les processus attentionnels dans les apprentissages instrumentaux, classiques et temporels“. Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066072/document.
Associative learning is a highly complex mechanism involving several processes at the same time. The attentional process is one of the first to be mobilized during an association; it is also thought to be involved in extracting the temporal parameters associated with a biologically meaningful unconditional stimulus even before any effective association (Balsam, Drew, and Yang 2002). Some studies have shown the involvement of certain neurobiological structures that may underlie attentional processes. For P. C. Holland's team, for example, orientation responses to a conditioned stimulus (top-down attention) (Lee et al., 2005) involve the central nucleus and nigro-striatal dopaminergic projections, whereas the presentation of a new stimulus during an association (bottom-up attention) would rather involve the substantia innominata, modulated by the central nucleus of the amygdala (CeA) and the parietal cortex (Holland and Gallagher 2006). At the same time, in temporal discrimination, where associative learning requires, besides discrete sensory stimuli, performance related to the judgment of durations, the mechanism of the attentional process mobilizes other conceptual models, centered mainly on the internal clock model and, in particular, the striatal beat frequency model, which also offers neurobiological explanations (Matell & Meck, 2004). In this work, we aimed at understanding the role of the amygdalo-nigro-striatal (ANS) circuit in the development of the attentional process in associative learning oriented toward discrete and temporal sensory stimuli in the rat. We also aimed at examining the role of this circuit in the evolution of the attentional process after over-training permitting the development of habits.
In order to achieve this objective, we compared the performance of rats with a crossed lesion, altering the CeA in one hemisphere and the nigro-striatal circuit in the other (amygdalo-nigro-striatal disconnection; Contra group), to rats with lesions in the same hemisphere (CeA and nigro-striatal circuit: Ipsi group). A third group received bilateral lesions of the CeA (Amy group). A control group had sham lesions (Sham group). Through our three experimental groups (Contra, Ipsi, and Amy) and the control group (Sham), we have shown the involvement of the CeA in the modulation of the attentional process when a novelty was introduced into the experimental situation (surprise), both in the presence of an appetitive discrete sensory stimulus and of a temporal stimulus in an aversive context. We have also shown that the ANS circuit is involved in habit formation and that there is probably a differential effect between the posterior and anterior parts of the CeA. Our work also highlighted the implication of the nigro-striatal circuit in temporal discrimination, and of the ANS circuit in attentional processing in temporal perception tasks, this effect differing depending on whether the discrimination concerns short or long durations.
Zgheib, Rawad. „Algorithmes adaptatifs d'identification et de reconstruction de processus AR à échantillons manquants“. Phd thesis, Université Paris Sud - Paris XI, 2007. http://tel.archives-ouvertes.fr/tel-00273585.
Botterman, Hông-Lan. „Corrélations dans les graphes d'information hétérogène : prédiction et modélisation de liens à partir de méta-chemins“. Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS083.
Many entities, possibly of different natures, are linked by physical or virtual links that may also be of different natures. Such data can be represented by a heterogeneous information network (HIN). In addition, there are often correlations between real-life entities or events. Once represented by suitable abstractions (such as HINs), these correlations can therefore be found in the HIN. Motivated by these considerations, this thesis investigates the effects of possible correlations between the links of an HIN on its structure. The present work aims at answering questions such as: are there indeed correlations between different types of links? If so, is it possible to quantify them? What do they mean? How can they be interpreted? Can these correlations be used to predict the occurrence of links? To model co-evolution dynamics? The examples studied can be divided into two categories. First, the use of correlations for the prediction of link weights is studied. It is shown that correlations between links, and more specifically between paths, can be used to recover and, to some extent, predict the weight of other links of a specified type. Second, a link weight dynamics is considered. It is shown that link co-evolution can be used, for example, to define a model of attention between individuals and subjects. The preliminary results are in agreement with others in the literature, mainly related to models of opinion dynamics. Overall, this work illustrates the importance of correlations between the links of an HIN. In addition, it supports the general fact that different types of nodes and links abound in nature and that it could be important and instructive to take this diversity into account in order to understand the organization and functioning of a system.
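The path statistics underlying such correlations can be illustrated by counting meta-path instances between node pairs in a toy HIN (node and type names are invented):

```python
def metapath_counts(edges_ab, edges_bc):
    """Number of A->B->C meta-path instances between each (a, c) pair in a
    heterogeneous information network, given the A-B and B-C link types as
    adjacency dicts. Such counts are the path features whose correlation
    with direct A-C link weights can support link weight prediction."""
    counts = {}
    for a, bs in edges_ab.items():
        for b in bs:
            for c in edges_bc.get(b, ()):
                counts[(a, c)] = counts.get((a, c), 0) + 1
    return counts

if __name__ == "__main__":
    # Toy HIN: users tagged with topics, topics attached to documents.
    user_topics = {"u1": ["ml", "nlp"], "u2": ["ml"]}
    topic_docs = {"ml": ["d1", "d2"], "nlp": ["d2"]}
    print(metapath_counts(user_topics, topic_docs))
    # -> {('u1', 'd1'): 1, ('u1', 'd2'): 2, ('u2', 'd1'): 1, ('u2', 'd2'): 1}
```

Longer meta-paths are handled the same way by chaining further link types; the thesis then measures how well these counts track the observed link weights.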
Legrand, Valentine. „Modélisation des processus de précipitation et prédiction des propriétés mécaniques résultantes dans les alliages d’aluminium à durcissement structural : Application au soudage par Friction Malaxage (FSW) de tôles AA2024“. Thesis, Paris, ENMP, 2015. http://www.theses.fr/2015ENMP0090/document.
Der volle Inhalt der Quelle
In the aeronautic industry, the friction stir welding (FSW) process is seen as an interesting option to lighten aircraft structures by replacing the standard riveting technology used to join parts. Numerical simulation is chosen to improve understanding of the different mechanisms occurring during FSW. The aluminum alloy studied is an AA2024-T3 grade. Its mechanical properties mainly derive from structural hardening mechanisms. An accurate model of precipitate evolution is essential to define the hardness profile of the weld. The chosen simulation has to be robust and time-efficient in order to be suitable for FSW process modeling. It must consider the two families of precipitates (GPB zones and S phase) and model nucleation, growth and coarsening phenomena. A PSD model is chosen and coupled with thermodynamic equilibrium calculations. To define the growth kinetics of precipitates, an exact analytical solution is extended to a multi-component alloy. Knowing the precipitate size distribution, the mechanical properties are defined based on an empirical model. The amount and properties of phases are initialized through non-isothermal DSC calibration and comparison between experimental and simulated heat fluxes. Isothermal tests are selected to establish the link between the precipitation state and mechanical properties. The model is applied to the simulation of microstructural evolution in FSW in order to predict the final properties of the weld. Thermal changes are determined through the use of a macroscopic model developed during a twin project within the Chair Daher. Numerical results are compared with instrumented experiments and show a good estimate of hardness. The experimental profiles are recovered, as well as the characteristics of the different areas. This validates the approach and its efficiency in simulating the evolution of the precipitation process.
Lafitte, Anthony. „Prédiction de l'aéroacoustique de jets subsoniques confinés à l'aide d'une méthode stochastique de génération de la turbulence“. Phd thesis, Ecole Centrale de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00805414.
Der volle Inhalt der Quelle
Ravel, Sabrina. „Implication des neurones à activité tonique du striatum dans les processus cognitifs et motivationnels : Etude électrophysiologique de l'influence de la prédiction temporelle et de la signification affective des stimuli chez le singe“. Aix-Marseille 2, 2001. http://www.theses.fr/2001AIX22004.
Der volle Inhalt der Quelle
Maes, Francis. „Learning in Markov decision processes for structured prediction : applications to sequence labeling, tree transformation and learning for search“. Paris 6, 2009. http://www.theses.fr/2009PA066500.
Der volle Inhalt der Quelle
Hourcade-Potelleret, Florence. „De la dose à l'effet clinique : utilisation de la modélisation dans les différentes étapes du processus de prédiction du critère clinique : Exemple avec un nouveau médicament en prévention secondaire de la morbidité-mortalité cardiovasculaire“. Phd thesis, Université Jean Monnet - Saint-Etienne, 2012. http://tel.archives-ouvertes.fr/tel-00979667.
Der volle Inhalt der Quelle
Nguyen, Thi Thu Tam. „Learning techniques for the load forecasting of parcel pick-up points“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG034.
Der volle Inhalt der Quelle
Pick-Up Points (PUPs) represent an alternative delivery option for purchases from online retailers (Business-to-Customer, B2C) or online Customer-to-Customer (C2C) marketplaces. Parcels are delivered at a reduced cost to a PUP and wait until they are picked up by customers, or are returned to the original warehouse if their sojourn time is exceeded. When the chosen PUP is overloaded, a parcel may be refused and delivered to the next available PUP on the carrier tour. PUP load forecasting is an efficient method for the PUP management company (PMC) to better balance the load of each PUP and reduce the number of rerouted parcels. This thesis aims to describe the parcel flows in a PUP and to propose models to forecast the evolution of the load. For the PUP load associated with the B2C business, the parcel life-cycle has been taken into account in the forecasting process via models of the flow of parcel orders, the delivery delays, and the pick-up process. Model-driven and data-driven approaches are compared in terms of load-prediction accuracy. For the PUP load associated with the C2C business, the daily number of parcels dropped off with a given PUP as target is described by a Markov-Switching AutoRegressive model to account for the non-stationarity of second-hand shopping activity. The life-cycle of each parcel is modeled by a Markov jump process. Model parameters are evaluated from previous parcel drop-off, delivery, and pick-up records. The probability mass function of the future load of a PUP is then evaluated using all information available on parcels with this PUP as target. In both cases, the proposed model-driven approaches give, in most cases, better forecasting performance than the data-driven models, involving LSTM, Random forest, Holt-Winters, and SARIMA models, up to four days ahead in the B2C case and up to six days ahead in the C2C case.
The first approach, applied to the B2C parcel load, yields an MAE of 3 parcels for the one-day-ahead prediction and 8 parcels for the four-day-ahead prediction. The second approach, applied to the C2C parcel load, yields an MAE of 5 parcels for the one-day-ahead prediction and 8 parcels for the seven-day-ahead prediction. These prediction horizons are consistent with the delivery delays associated with these parcels (1-3 days in the case of a B2C parcel and 4-5 days in the case of a C2C parcel). Future research directions aim at optimizing the prediction accuracy, especially in predicting future orders, and at studying a load-balancing approach to better share the load between PUPs.
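As an illustrative aside (not taken from the thesis), the parcel life-cycle described in this abstract can be sketched as a small discrete-time Markov chain with absorbing "picked up" and "returned" states. The states and transition probabilities below are invented for the example, not fitted values from the thesis.

```python
import numpy as np

# Hypothetical parcel life-cycle as a discrete-time Markov chain.
# All probabilities are illustrative, not fitted values from the thesis.
STATES = ["in_transit", "in_pup", "picked_up", "returned"]
P = np.array([
    [0.4, 0.6, 0.0, 0.0],    # in transit: may arrive at the PUP
    [0.0, 0.5, 0.45, 0.05],  # in PUP: picked up, or returned after timeout
    [0.0, 0.0, 1.0, 0.0],    # picked up (absorbing)
    [0.0, 0.0, 0.0, 1.0],    # returned (absorbing)
])

def expected_load(n_parcels, days):
    """Expected number of parcels sitting in the PUP `days` days from now,
    given `n_parcels` currently in transit toward it."""
    dist = np.array([1.0, 0.0, 0.0, 0.0])          # each parcel starts in transit
    dist = dist @ np.linalg.matrix_power(P, days)  # state distribution after `days` steps
    return n_parcels * dist[STATES.index("in_pup")]
```

Summing such expectations over all announced parcels gives a simple point forecast of the load; the thesis goes further and derives the full probability mass function of the future load.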
Salhi, Asma. „Towards a combined statistical shape and musculoskeletal modeling framework for pediatric shoulder joint“. Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2019. http://www.theses.fr/2019IMTA0137/document.
Der volle Inhalt der Quelle
Obstetric Brachial Plexus Palsy (OBPP) is a common birth injury in children leading to shoulder joint deformity and abnormal function. While the management of OBPP focuses on restoring shoulder joint function, the underlying pathomechanics is not yet clearly understood. Computational models are effective in providing such insights; however, there is no pediatric shoulder joint model with which to study OBPP. Thus, the global aim of this research work was to build a computational framework combining the advances in statistical shape modeling (SSM) and multi-body musculoskeletal modeling (MSKM). Due to a lack of sufficient data in the pediatric cohort, I first developed the framework for the adult shoulder joint. For this, I illustrated the accuracy of SSM in predicting 1) the missing part of the scapula, and 2) muscle insertion regions on the scapula and humerus bones. This method was then integrated with adult shoulder MSKMs to show the differences between generic and subject-specific constructs. For the second aim of this thesis, I developed a pediatric MSKM of the shoulder joint complex using OpenSim software. The pediatric MSKM represented the scapulothoracic, sternoclavicular, acromioclavicular, and glenohumeral joints with 13 degrees of freedom, actuated by 52 musculotendon actuators representing 14 shoulder muscles. Using inverse kinematics and inverse dynamics approaches, the model was used to determine the differences in joint kinematics and joint dynamics between the healthy and unhealthy sides of a single OBPP subject. Future work is focused on completing the framework for the pediatric population and understanding the pathomechanics of OBPP.
Rousseeuw, Kévin. „Modélisation de signaux temporels hautes fréquences multicapteurs à valeurs manquantes : Application à la prédiction des efflorescences phytoplanctoniques dans les rivières et les écosystèmes marins côtiers“. Thesis, Littoral, 2014. http://www.theses.fr/2014DUNK0374/document.
Der volle Inhalt der Quelle
Because of the growing interest in environmental issues and the need to identify direct and indirect effects of anthropogenic activities on ecosystems, environmental monitoring programs increasingly rely on high-resolution, autonomous, multi-sensor instrumented stations. These systems are deployed in harsh environments, and measurements must be stopped for calibration, servicing, or simply because of sensor failure. Consequently, the data can be noisy, missing, or out of range, and require pre-processing or filtering steps to complete and validate the raw data before any further investigation. In this context, the objective of this work is to design an automatic numerical system able to manage such amounts of data in order to further knowledge of water quality, and more precisely of phytoplankton determinism and dynamics. The main phase is the methodological development of phytoplankton bloom forecasting models, giving end-users the opportunity to apply well-adapted protocols. We propose to use a hybrid Hidden Markov Model to detect and forecast environmental states (identification of the main phytoplankton bloom steps and the associated hydrological conditions). The added value of our approach is to hybridize our model with a spectral clustering algorithm, so that all HMM parameters (states, characterization and dynamics of these states) are built by unsupervised learning. This approach was applied to three databases: the first from the marine instrumented station MAREL Carnot (Ifremer) (2005-2009), the second from a FerryBox system implemented in the eastern English Channel in 2012, and the third from a freshwater fixed station in the river Deûle in 2009 (Artois Picardie Water Agency).
These works fall within the scope of a collaboration between IFREMER, LISIC/ULCO and the Artois Picardie Water Agency to develop optimised systems for studying the effects of anthropogenic activities on the functioning of aquatic systems, in a regional context of massive blooms of the harmful alga Phaeocystis globosa.
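To make the hidden-state idea behind such a model concrete, here is a minimal, self-contained Viterbi decoder for a toy two-state HMM ("calm" vs. "bloom") observing binned fluorescence. All states, transition and emission probabilities below are invented for illustration; in the thesis these parameters are learned without supervision via spectral clustering.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for a discrete-emission HMM."""
    # V[t][s]: probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Toy two-state model: "calm" vs "bloom", observing binned fluorescence.
states = ("calm", "bloom")
start = {"calm": 0.8, "bloom": 0.2}
trans = {"calm": {"calm": 0.9, "bloom": 0.1},
         "bloom": {"calm": 0.2, "bloom": 0.8}}
emit = {"calm": {"low": 0.9, "high": 0.1},
        "bloom": {"low": 0.2, "high": 0.8}}
```

Decoding a short series such as `["low", "low", "high", "high", "low"]` labels the high-fluorescence stretch as a "bloom" episode, which is the kind of state segmentation the forecasting models build on.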
Sànchez, Pérez Andrés. „Agrégation de prédicteurs pour des séries temporelles, optimalité dans un contexte localement stationnaire“. Thesis, Paris, ENST, 2015. http://www.theses.fr/2015ENST0051/document.
Der volle Inhalt der Quelle
This thesis gathers our results on dependent time series prediction. The work is divided into three main chapters in which we tackle different problems. The first is the aggregation of predictors of Causal Bernoulli Shifts using a Bayesian approach. The second is the aggregation of predictors of what we define as sub-linear processes. Locally stationary time-varying autoregressive processes receive particular attention; we investigate an adaptive prediction scheme for them. In the last main chapter we study the linear regression problem for a general class of locally stationary processes.
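As a hedged illustration of what "aggregation of predictors" means in practice, the sketch below implements a plain exponentially weighted average of expert forecasts under squared loss. This is only a minimal member of the same family as the Bayesian aggregation schemes studied in the thesis, not the thesis's own procedure; all names and the learning rate are invented for the example.

```python
import math

def ewa_aggregate(expert_preds, outcomes, eta=1.0):
    """Exponentially weighted average aggregation of expert forecasts.

    expert_preds: list of per-expert prediction sequences (one value per round).
    outcomes: observed values, revealed after each round's aggregate prediction.
    Returns the sequence of aggregate predictions and the final expert weights."""
    n_experts = len(expert_preds)
    weights = [1.0 / n_experts] * n_experts   # uniform prior over experts
    agg = []
    for t, y in enumerate(outcomes):
        # Predict with the current weighted average, then observe y.
        agg.append(sum(w * p[t] for w, p in zip(weights, expert_preds)))
        # Reweight each expert by exp(-eta * squared loss) and renormalize.
        losses = [(p[t] - y) ** 2 for p in expert_preds]
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return agg, weights
```

With one accurate and one inaccurate expert, the weight mass concentrates on the accurate one within a few rounds, so the aggregate prediction tracks it.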
Dujardin, Alain. „Prédiction des mouvements du sol dus à un séisme : différences de décroissance entre petits et gros séismes et simulations large bande par fonctions de Green empiriques“. Thesis, Nice, 2015. http://www.theses.fr/2015NICE4070/document.
Der volle Inhalt der Quelle
The prediction of ground motion generated by an earthquake is a major issue for the consideration of seismic risk. This is one of the objectives of the SIGMA project, within which this thesis was carried out. It consists of two parts. The first focuses on the magnitude dependence of the decay of ground-motion parameters with distance. This is a concern both for the use of attenuation relations (GMPEs) and for methods based on the use of small events as empirical Green's functions (EGFs). We have shown that at shorter distances (less than the length of the fault), the saturation effect due to the fault size is preponderant, while at larger distances the anelastic attenuation effect becomes predominant. We have thus shown that it can be tricky to mix data from different regions in GMPEs, and we have validated the use of empirical Green's functions at all distances. In the second part, three different simulation methods are tested in a complex context: a code combining a finite fault source in k-2 with EGFs, a point-source code with EGFs, and a stochastic code. We chose to work on the Mw 5.9 earthquake of May 29, 2012, which occurred in a deep sedimentary basin (the Po plain) and generated seismograms often dominated by surface waves. We show that, without a priori knowledge of the propagation medium, methods based on EGFs can reproduce the surface waves, the values of PGA and PGV, and the durations of the generated signals.
Platini, Marc. „Apprentissage machine appliqué à l'analyse et à la prédiction des défaillances dans les systèmes HPC“. Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM041.
Der volle Inhalt der Quelle
As supercomputers grow in size, the number of failures or abnormal events also increases, reducing the availability of these systems. To manage these failures and reduce their impact on HPC systems, it is important to implement solutions to understand and predict them. HPC systems produce a large amount of monitoring data that contains useful information about their status. However, the analysis of these data is difficult and can be very tedious because the data reflect the complexity and size of HPC systems. The work presented in this thesis proposes to use machine-learning-based solutions to analyse these data in an automated way. More precisely, this thesis presents two main contributions: the first focuses on the prediction of processor overheating events in HPC systems; the second focuses on the analysis and the highlighting of the relationships between the events present in the system logs. Both contributions are evaluated on real data from a large HPC system used in production. To predict CPU overheating events, we propose a solution that uses only the temperature of the CPUs. It is based on the analysis of the general shape of the temperature curve prior to an overheating event and on the automated learning of the correlations between this shape and overheating events using a supervised learning model. The use of the general curve shape and a supervised learning model allows learning from temperature data with low accuracy and from a limited number of overheating events. The evaluation of the solution shows that it is able to predict overheating events several minutes in advance with high precision and recall.
Furthermore, the evaluation of these results shows that it is possible to use preventive actions based on the predictions made by the solution to reduce the impact of overheating events on the system. To analyze and automatically extract the causal relations between the events described in the HPC system logs, we propose an unconventional use of a deep machine learning model. Indeed, this type of model is classically used for prediction tasks. Thanks to the addition of a new layer proposed by state-of-the-art contributions from the machine learning community, it is possible to determine the weight of the algorithm's inputs associated with its prediction. Using this information, we are able to detect the causal relations between the different events. The evaluation of the solution shows that it is able to extract the causal relations of the vast majority of events occurring in an HPC system. Moreover, its evaluation by administrators validates the highlighted correlations. Both contributions and their evaluations show the benefit of using machine learning solutions for understanding and predicting failures in HPC systems by automating the analysis of supervision data.
Bogadhi, Amarender R. „Une étude expérimentale et théorique de l'intégration de mouvement pour la poursuite lente : Un modèle Bayesien récurrent et hiérarchique“. Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM5009/document.
Der volle Inhalt der Quelle
This thesis presents two studies of smooth pursuit eye movements for a translating tilted-bar stimulus: first, the dynamic integration of local visual motion signals originating from the retina, and second, the influence of extra-retinal signals on motion integration. It also proposes a more generalized, hierarchical recurrent Bayesian framework for smooth pursuit. The first study investigated dynamic motion integration for varying contrasts and speeds using a tilted-bar stimulus. Results show that higher speeds and lower contrasts result in a higher initial direction bias, and that the subsequent dynamics of motion integration are slower for lower contrasts. It proposes an open-loop version of a recurrent Bayesian model in which a recurrent Bayesian network is cascaded with an oculomotor plant to generate smooth pursuit responses. The model responses qualitatively account for the different dynamics observed in smooth pursuit responses to the tilted-bar stimulus at different speeds and contrasts. The second study investigated the dynamic interactions between retinal and extra-retinal signals in dynamic motion integration for smooth pursuit, by transiently blanking the target at different moments during the open-loop and steady-state phases of pursuit. The results suggest that the weights given to retinal and extra-retinal signals are dynamic in nature, and that extra-retinal signals dominate retinal signals on target reappearance after a blank introduced during the open-loop phase of pursuit, compared to a blank introduced during the steady-state phase. The previous version of the model is updated to a closed-loop version and extended to a hierarchical recurrent Bayesian model.
Perais, Arthur. „Increasing the performance of superscalar processors through value prediction“. Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S070/document.
Der volle Inhalt der Quelle
Although currently available general purpose microprocessors feature more than 10 cores, many programs remain mostly sequential. This can either be due to an inherent property of the algorithm used by the program, to the program being old and written during the uni-processor era, or simply to time to market constraints, as writing and validating parallel code is known to be hard. Moreover, even for parallel programs, the performance of the sequential part quickly becomes the limiting improvement factor as more cores are made available to the application, as expressed by Amdahl's Law. Consequently, increasing sequential performance remains a valid approach in the multi-core era. Unfortunately, conventional means to do so - increasing the out-of-order window size and issue width - are major contributors to the complexity and power consumption of the chip. In this thesis, we revisit a previously proposed technique that aimed to improve performance in an orthogonal fashion: Value Prediction (VP). Instead of increasing the execution engine aggressiveness, VP improves the utilization of existing resources by increasing the available Instruction Level Parallelism. In particular, we address the three main issues preventing VP from being implemented. First, we propose to remove validation and recovery from the execution engine, and do it in-order at Commit. Second, we propose a new execution model that executes some instructions in-order either before or after the out-of-order engine. This reduces pressure on said engine and allows to reduce its aggressiveness. As a result, port requirement on the Physical Register File and overall complexity decrease. Third, we propose a prediction scheme that mimics the instruction fetch scheme: Block Based Prediction. This allows predicting several instructions per cycle with a single read, hence a single port on the predictor array.
These three propositions form a possible implementation of Value Prediction that is both realistic and efficient.
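To give a flavor of what a value predictor does, the toy sketch below implements a classic stride predictor: per-instruction entries track the last result and the stride between the last two results, with a small confidence counter gating predictions. This is a textbook baseline for illustration only, not the block-based design proposed in the thesis; the table layout and threshold are invented.

```python
class StridePredictor:
    """Toy stride value predictor, indexed by instruction address (PC).

    Illustrative baseline only; real designs (e.g. the thesis's block-based
    scheme) differ substantially in organization and confidence management."""

    def __init__(self, threshold=2):
        self.table = {}          # pc -> [last_value, stride, confidence]
        self.threshold = threshold

    def predict(self, pc):
        entry = self.table.get(pc)
        if entry and entry[2] >= self.threshold:
            return entry[0] + entry[1]   # last value + stride
        return None                      # not confident enough: no prediction

    def update(self, pc, value):
        """Train with the architecturally committed result of the instruction."""
        entry = self.table.get(pc)
        if entry is None:
            self.table[pc] = [value, 0, 0]
            return
        stride = value - entry[0]
        if stride == entry[1]:
            entry[2] = min(entry[2] + 1, 3)  # saturating confidence counter
        else:
            entry[2] = 0                     # stride changed: reset confidence
        entry[0], entry[1] = value, stride
```

After seeing a loop counter produce 10, 20, 30, 40 at the same PC, the predictor has twice confirmed a stride of 10 and starts predicting the next value (50) before the instruction executes.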
Kalaitzidis, Kleovoulos. „Advanced speculation to increase the performance of superscalar processors“. Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S007.
Der volle Inhalt der Quelle
Even in the multicore era, making single cores faster is paramount to achieving high-performance computing, given the existence of programs that are either inherently sequential or expose non-negligible sequential parts. Sequential performance has essentially been improving with the scaling of the processor structures that enable instruction-level parallelism (ILP). However, as modern microarchitectures continue to extract more ILP by employing larger instruction windows, true data dependencies remain a major performance bottleneck. Value Prediction (VP) and Load-Address Prediction (LAP) are two developing techniques that make it possible to overcome this obstacle and harvest more ILP by enabling the execution of instructions in a data-wise speculative manner. This thesis proposes mechanisms related to VP and LAP that lead to effectively higher performance improvements. First, VP is examined in an ISA-aware manner, which discloses the impact of certain ISA particularities on the anticipated speedup. Second, a novel binary-based VP model is introduced, namely VSEP, which allows the exploitation of certain value patterns that, although encountered frequently, cannot be captured by previous works. VSEP improves the obtained speedup by 19% and, by virtue of its structure, mitigates the cost of predicting values wider than 64 bits. Adapting this approach to perform LAP allows the prediction of the memory addresses of 48% of the committed loads. Finally, a microarchitecture that carefully leverages this LAP mechanism can execute 32% of the committed loads early.
Ruiz, Carmona Victor Manuel. „Commande prédictive adaptative des procédés chimiques“. Toulouse, INPT, 1990. http://www.theses.fr/1990INPT042G.
Der volle Inhalt der Quelle
Fonte, Christophe. „Une nouvelle régulation pour les processus industriels : la commande adaptative à multiples modèles de référence“. Lille 1, 1986. http://www.theses.fr/1986LIL10102.
Der volle Inhalt der Quelle
Jardillier, Rémy. „Evaluation de différentes variantes du modèle de Cox pour le pronostic de patients atteints de cancer à partir de données publiques de séquençage et cliniques“. Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALS008.
Der volle Inhalt der Quelle
Cancer has been the leading cause of premature mortality (death before the age of 65) in France since 2004. For the same organ, each cancer is unique, and personalized prognosis is therefore an important aspect of patient management and follow-up. The decrease in sequencing costs over the last decade has made it possible to measure the molecular profiles of many tumors on a large scale. Thus, the TCGA database provides RNA-seq data of tumors, clinical data (age, sex, grade, stage, etc.), and follow-up times of the associated patients over several years (including patient survival, possible recurrence, etc.). New discoveries are thus made possible in terms of biomarkers built from transcriptomic data, with individualized prognoses. These advances require the development of large-scale data analysis methods adapted to take into account survival data (right-censored), clinical characteristics, and the molecular profiles of patients. In this context, the main goal of the thesis is to compare and adapt methodologies to construct prognostic risk scores for the survival or recurrence of patients with cancer from sequencing and clinical data. The (semi-parametric) Cox model is widely used to model these survival data and allows them to be linked to explanatory variables. The RNA-seq data from TCGA contain more than 20,000 genes for only a few hundred patients. The number p of variables then exceeds the number n of patients, and parameter estimation is subject to the “curse of dimensionality”. The two main strategies to overcome this issue are penalty methods and gene pre-filtering. Thus, the first objective of this thesis is to compare the classical penalization methods of Cox's model (i.e. ridge, lasso, elastic net, adaptive elastic net). To this end, we use real and simulated data to control the amount of information contained in the transcriptomic data.
Then, the second issue addressed concerns the univariate pre-filtering of genes before using a multivariate Cox model. We propose a methodology to increase the stability of the genes selected, and to choose the filtering thresholds by optimizing the predictions. Finally, although the cost of sequencing (RNA-seq) has decreased drastically over the last decade, it remains too high for routine use in practice. In a final section, we show that the sequencing depth of miRNAs can be reduced without degrading the quality of predictions for some TCGA cancers, but not for others
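To make the penalized-Cox idea concrete, here is a minimal sketch of the quantity such methods optimize: the negative Cox log partial likelihood plus a ridge penalty. It assumes no tied event times and is written for illustration only; the data, penalty weight, and function name are invented, and the thesis compares ridge with lasso and elastic-net variants rather than this bare form.

```python
import numpy as np

def cox_ridge_loss(beta, X, time, event, lam):
    """Negative Cox log partial likelihood (assuming no tied event times)
    plus a ridge penalty lam * ||beta||^2."""
    order = np.argsort(-time)             # sort by descending survival time so
    X, event = X[order], event[order]     # the risk set at each event is a prefix
    eta = X @ beta                        # linear predictors x_i' beta
    log_risk = np.logaddexp.accumulate(eta)  # log sum of exp(eta) over each risk set
    # Sum (eta_i - log sum_{j at risk} exp(eta_j)) over observed events only.
    return -np.sum((eta - log_risk)[event == 1]) + lam * np.sum(beta ** 2)
```

On toy data where a single covariate strongly orders the survival times, the loss at a nonzero coefficient of the right sign is lower than at zero, which is what the (penalized) maximization exploits; the lam term then shrinks the estimate toward zero to cope with p >> n.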
Vignot, Stéphane. „Analyse différentielle Tumeur Primitive / Métastase : impact sur la détermination des facteurs prédictifs de réponse et sur la compréhension des mécanismes du processus métastatique“. Thesis, Paris 11, 2013. http://www.theses.fr/2013PA11T068.
Der volle Inhalt der Quelle
The question of the spatial and temporal variability of biomarkers in solid tumors is a key issue in an era where personalized therapy is strongly advocated by clinicians, researchers and patients. The purpose of this work is to compare the molecular profiles of primary tumors and matched metastases in order to characterize tumor heterogeneity during metastatic progression and to investigate pathways potentially involved in the metastatic process. Frozen samples from two cohorts of patients were considered: non-small cell lung cancer (15 patients) and colorectal cancer (13 patients, 11 with healthy control tissues). These samples were analyzed by high-throughput sequencing and gene expression profiling. Highly conserved genomic profiles were observed during the first metastatic progression for known recurrent genes in non-small cell lung cancer and colorectal cancer. While gene expression studies do not highlight a specific profile of metastasis, they provide useful data on the conservation of some important oncogenic pathways, identify genes of interest for the study of metastatic progression, and highlight the putative impact of healthy tissue on expression analyses.
Koubaa, Mohamed Ali. „Contribution à la commande prédictive : mise en oeuvre pour le pilotage d'un autoclave de teinture“. Lille 1, 1997. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/1997/50376-1997-25.pdf.
Der volle Inhalt der Quelle
Haquin, Thierry. „Séquences de branchements : prédiction de branchements et optimisation du chargement des instructions“. Toulouse 3, 2003. http://www.theses.fr/2003TOU30160.
Der volle Inhalt der Quelle
Hassad, Hassane. „Contribution à l'analyse de la stabilité de la commande prédictive généralisée : application à un processus thermique“. Nancy 1, 1993. http://www.theses.fr/1993NAN10053.
Der volle Inhalt der Quelle