Dissertations on the topic "Processus modeling"
Format your sources according to APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for your research on the topic "Processus modeling".
Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read the online abstract of the work, if these are available in the metadata.
Browse dissertations across a wide variety of disciplines and organize your bibliography correctly.
Dávila-Felipe, Miraine. "Pathwise decompositions of Lévy processes : applications to epidemiological modeling." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066651.
This dissertation is devoted to the study of some pathwise decompositions of spectrally positive Lévy processes, and duality relationships for certain (possibly non-Markovian) branching processes, driven by the use of the latter as probabilistic models of epidemiological dynamics. More precisely, we model the transmission tree of a disease as a splitting tree, i.e. individuals evolve independently from one another, have i.i.d. lifetimes (periods of infectiousness) that are not necessarily exponential, and give birth (secondary infections) at a constant rate during their lifetime. The incidence of the disease under this model is a Crump-Mode-Jagers (CMJ) process; the overarching goal of the first two chapters is to characterize the law of this incidence process through time, jointly with the partially observed (inferred from sequence data) transmission tree. In Chapter I we obtain a description, in terms of probability generating functions, of the conditional likelihood of the number of infectious individuals at multiple times, given the transmission network linking individuals that are currently infected. In the second chapter, a more elegant version of this characterization is given, by way of a general result of invariance under time reversal for a class of branching processes. Finally, in Chapter III we are interested in the law of the (sub)critical branching process seen from its extinction time. We obtain a duality result that implies, in particular, that (sub)critical CMJ processes and the excursion away from 0 of the critical Feller diffusion (the width process of the continuum random tree) are invariant under time reversal from their extinction time.
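As a rough, hypothetical illustration of the splitting-tree model described in this abstract (not the thesis's actual code), one can simulate a transmission tree in which individuals have i.i.d. non-exponential lifetimes and infect at a constant rate while alive, then read off the incidence (CMJ) process from the event log. The gamma lifetime distribution and all parameter values below are illustrative assumptions.

```python
import random

def simulate_splitting_tree(birth_rate, lifetime_sampler, t_max, seed=0):
    """Simulate a splitting tree: individuals have i.i.d. lifetimes
    (not necessarily exponential) and give birth at a constant rate
    while alive. Returns a time-sorted event log of (time, +1/-1)."""
    rng = random.Random(seed)
    events = []
    stack = [0.0]  # birth times of individuals still to process
    while stack:
        t_birth = stack.pop()
        t_death = t_birth + lifetime_sampler(rng)
        events.append((t_birth, +1))
        if t_death <= t_max:
            events.append((t_death, -1))
        t = t_birth
        while True:  # Poisson birth times along this lifetime
            t += rng.expovariate(birth_rate)
            if t >= min(t_death, t_max):
                break
            stack.append(t)
    events.sort()
    return events

def incidence_at(events, t):
    """Number of individuals alive (infectious) at time t."""
    return sum(delta for (s, delta) in events if s <= t)
```

For instance, `simulate_splitting_tree(1.5, lambda r: r.gammavariate(2.0, 1.0), t_max=4.0)` uses a gamma lifetime, which is not exponential, so the resulting incidence process is non-Markovian, as in the thesis.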
Suri, Kunal. "Modeling the internet of things in configurable process models." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLL005/document.
On the one hand, a growing number of multi-national organizations have embraced Process-Aware Information Systems (PAIS) to reap the benefits of using streamlined processes that are based on predefined models, also called Business Process (BP) models. However, today's dynamic business environment demands flexibility and systematic reuse of BPs, which is provided by the use of Configurable Process Models (CPMs). It avoids the development of processes from scratch, which is both time-consuming and error-prone, and facilitates the sharing of a family of BP variants that can be customized based on concrete business requirements. On the other hand, the adoption of Internet of Things (IoT) resources in various cross-organizational BPs is also on the rise. However, to attain the desired business value, these IoT resources must be used efficiently. These IoT devices are heterogeneous due to their diverse properties and manufacturers (proprietary standards), which leads to issues related to interoperability. Further, being resource-constrained, they need to be allocated (and consumed) keeping in mind relevant constraints, such as energy cost and computation cost, to avoid failures at the time of their consumption in the processes. Thus, it is essential to explicitly model the IoT resource perspective in the BP models during the process design phase. In the literature, various research works in the Business Process Management (BPM) domain are usually focused on the control-flow perspective. While there do exist some approaches that focus on the resource perspective, they are typically dedicated to the human resource perspective. Thus, there is limited work on integrating the IoT resource perspective into BPs, and none of it focuses on solving issues related to heterogeneity in the IoT domain. Likewise, in the context of CPMs, there is no configuration support to model IoT resource variability at the CPM level.
This variability is a result of specific IoT resource features, such as Shareability and Replication, that are relevant in the context of BPs. In this thesis, we address the aforementioned limitations by proposing an approach to integrate the IoT perspective into the BPM domain and to support the development of IoT-aware CPMs. This work contributes in the following manner: (1) it provides a formal description of the IoT resource perspective and its relationships with the BPM domain using semantic technology, and (2) it provides novel concepts to enable configurable IoT resource allocation in CPMs. To validate our approach and to show its feasibility, we do the following: (1) implement proof-of-concept tools that assist in the development of IoT-aware BPs and IoT-aware CPMs, and (2) perform experiments on process model datasets. The experimental results show the effectiveness of our approach and affirm its feasibility.
Beccuti, Marco. "Modeling and analisys of probabilistic system : Formalism and efficient algorithm." Paris 9, 2008. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=2008PA090060.
Yongsiriwit, Karn. "Modeling and mining business process variants in cloud environments." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLL002/document.
More and more organizations are adopting cloud-based Process-Aware Information Systems (PAIS) to manage and execute processes in the cloud as an environment in which to optimally share and deploy their applications. This is especially true for large organizations having branches operating in different regions with a considerable number of similar processes. Such organizations need to support many variants of the same process due to their branches' local cultures, regulations, etc. However, developing a new process variant from scratch is error-prone and time-consuming. Motivated by the "Design by Reuse" paradigm, branches may collaborate to develop new process variants by learning from their similar processes. These processes are often heterogeneous, which prevents easy and dynamic interoperability between different branches. A process variant is an adjustment of a process model in order to flexibly adapt to specific needs. Much research in both academia and industry aims to facilitate the design of process variants. Several approaches have been developed to assist process designers by searching for similar business process models or using reference models. However, these approaches are cumbersome, time-consuming and error-prone. Likewise, such approaches recommend entire process models, which is not practical for process designers who need to adjust a specific part of a process model. In fact, process designers can better develop process variants with an approach that recommends a well-selected set of activities from a process model, referred to as a process fragment. Large organizations with multiple branches execute BP variants in the cloud as an environment in which to optimally deploy and share common resources. However, these cloud resources may be described using different cloud resource description standards, which prevents interoperability between different branches.
In this thesis, we address the above shortcomings by proposing an ontology-based approach to semantically populate a common knowledge base of processes and cloud resources and thus enable interoperability between an organization's branches. We construct our knowledge base by extending existing ontologies. We thereafter propose an approach to mine this knowledge base to assist the development of BP variants. Furthermore, we adopt a genetic algorithm to optimally allocate cloud resources to BPs. To validate our approach, we develop two proofs of concept and perform experiments on real datasets. Experimental results show that our approach is feasible and accurate in real use-cases.
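The genetic algorithm mentioned above for allocating cloud resources to BPs could be sketched, in a highly simplified form, as follows. The cost matrix, fitness function, and GA parameters are illustrative assumptions, not taken from the thesis.

```python
import random

def genetic_allocate(costs, generations=200, pop_size=30, mut_rate=0.1, seed=0):
    """Toy genetic algorithm: assign one cloud resource to each BP activity.
    costs[i][j] is a (hypothetical) cost of running activity i on resource j.
    Returns the best assignment found and its total cost."""
    rng = random.Random(seed)
    n_act, n_res = len(costs), len(costs[0])

    def fitness(ind):
        return sum(costs[i][ind[i]] for i in range(n_act))

    pop = [[rng.randrange(n_res) for _ in range(n_act)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_act) if n_act > 1 else 0  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut_rate:  # mutation: reassign one activity
                child[rng.randrange(n_act)] = rng.randrange(n_res)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return best, fitness(best)
```

A real allocation would also encode IoT/cloud constraints (energy, computation cost, shareability) in the fitness function; this sketch only minimizes a flat cost matrix.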
Domingues, Rémi. "Probabilistic Modeling for Novelty Detection with Applications to Fraud Identification." Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS473.pdf.
Novelty detection is the unsupervised problem of identifying anomalies in test data which differ significantly from the training set. While numerous novelty detection methods have been designed to model continuous numerical data, tackling datasets composed of mixed-type features, such as numerical and categorical data, or temporal datasets describing discrete event sequences, is a challenging task. In addition to the supported data types, the key criteria for efficient novelty detection methods are the ability to accurately dissociate novelties from nominal samples, interpretability, scalability, and robustness to anomalies located in the training data. In this thesis, we investigate novel ways to tackle these issues. In particular, we propose (i) a survey of state-of-the-art novelty detection methods applied to mixed-type data, including extensive scalability, memory consumption and robustness tests, (ii) a survey of state-of-the-art novelty detection methods suitable for sequence data, (iii) a probabilistic nonparametric novelty detection method for mixed-type data based on Dirichlet process mixtures and exponential-family distributions, and (iv) an autoencoder-based novelty detection model with the encoder/decoder modelled as deep Gaussian processes. The learning of this last model is made tractable and scalable through the use of random feature approximations and stochastic variational inference. The method is suitable for large-scale novelty detection problems and data with mixed-type features. The experiments indicate that the proposed model achieves competitive results with state-of-the-art novelty detection methods.
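A drastically simplified, hypothetical sketch of likelihood-based novelty scoring on mixed-type data (a naive single-component stand-in for the Dirichlet process mixture of contribution (iii), with per-feature exponential-family models) could look like this:

```python
import math
from collections import Counter

def fit_mixed_model(rows, cat_idx):
    """Fit a naive per-feature model for mixed-type data: Gaussian for
    numerical columns, empirical frequencies for categorical ones."""
    n = len(rows)
    model = {}
    for j in range(len(rows[0])):
        col = [r[j] for r in rows]
        if j in cat_idx:
            freq = Counter(col)
            model[j] = ("cat", {k: v / n for k, v in freq.items()})
        else:
            mu = sum(col) / n
            var = sum((x - mu) ** 2 for x in col) / n + 1e-9  # jitter
            model[j] = ("num", (mu, var))
    return model

def novelty_score(model, row, eps=1e-6):
    """Negative log-likelihood of a sample: higher means more novel."""
    score = 0.0
    for j, (kind, params) in model.items():
        if kind == "cat":
            score -= math.log(params.get(row[j], eps))  # unseen category -> eps
        else:
            mu, var = params
            score -= -0.5 * math.log(2 * math.pi * var) - (row[j] - mu) ** 2 / (2 * var)
    return score
```

Unlike this sketch, the thesis's nonparametric mixture adapts its number of components to the data and handles feature dependence; the scoring principle (low likelihood means novel) is the same.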
Petit, Sébastien. "Improved Gaussian process modeling : Application to Bayesian optimization." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG063.
This manuscript focuses on Bayesian modeling of unknown functions with Gaussian processes. This task arises notably in industrial design, with numerical simulators whose computation time can reach several hours. Our work focuses on the problem of model selection and validation and proceeds in two directions. The first part studies empirically the current practices for stationary Gaussian process modeling. Several issues in Gaussian process parameter selection are tackled. A study of parameter selection criteria is the core of this part. It concludes that the choice of a family of models is more important than that of the selection criterion. More specifically, the study shows that the regularity parameter of the Matérn covariance function is more important than the choice between a likelihood and a cross-validation criterion. Moreover, the analysis of the numerical results shows that this parameter can be selected satisfactorily by the criteria, which leads to a practical recommendation. Then, particular attention is given to the numerical optimization of the likelihood criterion. Observing important inconsistencies between the different libraries available for Gaussian process modeling, as documented by Erickson et al. (2018), we propose elementary numerical recipes making it possible to obtain significant gains both in terms of likelihood and model accuracy. Finally, the analytical formulas for computing cross-validation criteria are revisited from a new angle and enriched with similar formulas for the gradients. This last contribution aligns the computational cost of a class of cross-validation criteria with that of the likelihood. The second part presents a goal-oriented methodology. It is designed to improve the accuracy of the model in an (output) range of interest. This approach consists in relaxing the interpolation constraints on a relaxation range disjoint from the range of interest. We also propose an approach for automatically selecting the relaxation range.
This new method can implicitly manage potentially complex regions of interest in the input space with few parameters. Outside the range of interest, it learns non-parametrically a transformation that improves the predictions on the range of interest. Numerical simulations show the benefits of the approach for Bayesian optimization, where one is interested in low values in the minimization framework. Moreover, the theoretical convergence of the method is established under some assumptions.
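The role of the Matérn regularity parameter discussed in the first part can be illustrated with a minimal, self-contained sketch of marginal-likelihood-based selection. The half-integer Matérn formulas and the Gaussian-process marginal likelihood are standard; the selection routine itself is an illustrative assumption, not the thesis's implementation.

```python
import math

def matern(d, ell, nu):
    """Matérn covariance at distance d for regularity nu in {0.5, 1.5, 2.5}."""
    if nu == 0.5:
        return math.exp(-d / ell)
    if nu == 1.5:
        a = math.sqrt(3.0) * d / ell
        return (1.0 + a) * math.exp(-a)
    if nu == 2.5:
        a = math.sqrt(5.0) * d / ell
        return (1.0 + a + a * a / 3.0) * math.exp(-a)
    raise ValueError("unsupported nu")

def cholesky(A):
    """Cholesky factor L of a symmetric positive-definite matrix (A = L L^T)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

def log_marginal_likelihood(x, y, ell, nu, noise=1e-6):
    """Zero-mean GP log marginal likelihood under a Matérn kernel."""
    n = len(x)
    K = [[matern(abs(x[i] - x[j]), ell, nu) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    L = cholesky(K)
    z = [0.0] * n                      # forward solve L z = y
    for i in range(n):
        z[i] = (y[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i]
    alpha = [0.0] * n                  # backward solve L^T alpha = z
    for i in reversed(range(n)):
        alpha[i] = (z[i] - sum(L[k][i] * alpha[k] for k in range(i + 1, n))) / L[i][i]
    logdet = 2.0 * sum(math.log(L[i][i]) for i in range(n))
    return (-0.5 * sum(y[i] * alpha[i] for i in range(n))
            - 0.5 * logdet - 0.5 * n * math.log(2.0 * math.pi))

def select_regularity(x, y, ell=1.0):
    """Pick the Matérn regularity maximizing the marginal likelihood."""
    return max([0.5, 1.5, 2.5], key=lambda nu: log_marginal_likelihood(x, y, ell, nu))
```

The thesis also selects the lengthscale and compares likelihood against cross-validation criteria; here only the regularity is swept, over a small candidate grid, to keep the sketch short.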
Ben Salem, Malek. "Model selection and adaptive sampling in surrogate modeling : Kriging and beyond." Thesis, Lyon, 2018. https://tel.archives-ouvertes.fr/tel-03097719.
Surrogate models are used to replace an expensive-to-evaluate function in order to speed up the estimation of a feature of that function (optimum, contour line, etc.). Three aspects of surrogate modeling are studied in this work: 1/ We propose two surrogate model selection algorithms, based on a novel criterion called the penalized predictive score. 2/ The main advantage of the probabilistic approach is that it provides a measure of uncertainty associated with the prediction. This uncertainty is an efficient tool for constructing strategies for various problems such as prediction enhancement, optimization or inversion. We define a universal approach to uncertainty quantification that can be applied to any surrogate model. It is based on a weighted empirical probability measure supported by the predictions of cross-validation sub-models. 3/ We present the so-called Split-and-Doubt algorithm, which performs feature estimation and dimension reduction sequentially.
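The cross-validation-based uncertainty measure of point 2/ might be illustrated, under strong simplifying assumptions (a 1-D linear surrogate standing in for Kriging, uniform weights), by collecting leave-one-out sub-model predictions at a new point:

```python
def linear_fit(xs, ys):
    """Ordinary least squares for a 1-D linear surrogate y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((xs[i] - mx) * (ys[i] - my) for i in range(n))
         / sum((xi - mx) ** 2 for xi in xs))
    return (my - b * mx, b)

def linear_predict(model, x):
    a, b = model
    return a + b * x

def loo_prediction_ensemble(xs, ys, x_new, fit=linear_fit, predict=linear_predict):
    """Refit the surrogate on each leave-one-out fold and collect the
    sub-models' predictions at x_new; their empirical distribution acts
    as a model-agnostic uncertainty measure."""
    preds = []
    for i in range(len(xs)):
        sub_x = xs[:i] + xs[i + 1:]
        sub_y = ys[:i] + ys[i + 1:]
        preds.append(predict(fit(sub_x, sub_y), x_new))
    return preds
```

The spread of the returned predictions gives a crude uncertainty band; the thesis's contribution lies in how the empirical measure is weighted, which this sketch omits.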
Larvaron, Benjamin. "Modeling battery health degradation with uncertainty quantification." Electronic Thesis or Diss., Université de Lorraine, 2024. http://www.theses.fr/2024LORR0028.
With the acceleration of climate change, significant measures must be taken to decarbonize the economy. This includes a transformation of the transportation and energy production sectors. These changes increase the use of electrical energy and raise the need for storage, particularly through lithium-ion batteries. In this thesis, we focus on modeling battery health degradation. To quantify the risks associated with performance guarantees, uncertainties must be taken into account. Degradation is a complex phenomenon involving various interacting physical mechanisms. It varies depending on the battery type and usage conditions. We first addressed the issue of temporal degradation under a reference experimental condition using a data-driven approach based on Gaussian processes. This approach allows for learning complex models while incorporating uncertainty quantification. Building upon the state of the art, we proposed an adaptation of Gaussian process regression. By designing appropriate kernels, the model explicitly considers performance variability among batteries. However, Gaussian process regression generally relies on a stationarity assumption, which is too restrictive to account for uncertainty evolution over time. Therefore, we have leveraged the broader framework of chained Gaussian process regression, based on variational inference. With a suitable choice of likelihood function, this framework allows for adjusting a non-parametric model of the evolution of the variability among batteries, significantly improving uncertainty quantification. While this approach yields a model that fits observed cycles well, it does not generalize effectively to predict future degradation with consistent physical behavior. Specifically, monotonicity and concavity of the degradation curves are not always preserved. To address this, we proposed an approach to incorporate these constraints into chained Gaussian process regression.
As a result, we have enhanced predictions over several hundred cycles, potentially reducing the necessary battery testing time, a significant cost for manufacturers. We then expanded the problem to account for the effect of experimental conditions on battery degradation. Initially, we attempted to adapt Gaussian process-based methods by including experimental factors as additional explanatory variables. This approach yielded interesting results in cases with similar degradation conditions. However, for more complex settings, the results became inconsistent with physical knowledge and were no longer usable. As a result, we proposed an alternative two-step approach, separating the temporal evolution from the effect of the factors. In the first step, the temporal evolution was modeled using the previously mentioned Gaussian process methods. The second, more complex step utilized the results from the previous stage (Gaussian distributions) to learn a model of the experimental conditions. This required a regression approach for complex data. We suggest using Wasserstein conditional barycenters, which are well-suited to the case of distributions. Two models were introduced. The first model, within the structured regression framework, incorporates a physical degradation model. The second model, using Fréchet regression, improves the results by interpolating experimental conditions and accounting for multiple experimental factors.
Azzi, Soumaya. "Surrogate modeling of stochastic simulators." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT009.
This thesis is a contribution to surrogate modeling and sensitivity analysis for stochastic simulators. Stochastic simulators are a particular type of computational model: they inherently contain some sources of randomness and are generally computationally prohibitive. To overcome this limitation, this manuscript proposes a method to build a surrogate model for stochastic simulators based on the Karhunen-Loève expansion. This thesis also aims to perform sensitivity analysis on such computational models. This analysis consists in quantifying the influence of the input variables on the output of the model. In this thesis, the stochastic simulator is represented by a stochastic process, and the sensitivity analysis is then performed on the differential entropy of this process. The proposed methods are applied to a stochastic simulator assessing a population's exposure to radio frequency waves in a city. Randomness is an intrinsic characteristic of the stochastic city generator: a given set of city parameters (e.g. street width, building height and anisotropy) does not define a unique city. The context of the electromagnetic dosimetry case study is presented, and a surrogate model is built. The sensitivity analysis is then performed using the proposed method.
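The Karhunen-Loève-based surrogate idea can be sketched in a deliberately minimal form (a single retained mode, extracted by pure-Python power iteration from sample trajectories; this is a toy reduction, not the thesis's actual method):

```python
import math

def kl_first_mode(trajectories):
    """One-mode Karhunen-Loève sketch: empirical mean function plus the
    leading eigenpair of the empirical covariance (via power iteration).
    Each trajectory is one simulator output sampled on a common grid."""
    n, m = len(trajectories), len(trajectories[0])
    mean = [sum(t[j] for t in trajectories) / n for j in range(m)]
    centered = [[t[j] - mean[j] for j in range(m)] for t in trajectories]
    C = [[sum(c[j] * c[k] for c in centered) / n for k in range(m)]
         for j in range(m)]  # empirical covariance matrix
    v = [1.0] * m
    for _ in range(200):  # power iteration for the leading eigenvector
        w = [sum(C[j][k] * v[k] for k in range(m)) for j in range(m)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(v[j] * sum(C[j][k] * v[k] for k in range(m)) for j in range(m))
    return mean, v, lam

def surrogate_sample(mean, v, lam, rng):
    """Draw one surrogate trajectory: mean + xi * sqrt(lam) * v, xi ~ N(0,1)."""
    xi = rng.gauss(0.0, 1.0)
    return [mean[j] + xi * math.sqrt(lam) * v[j] for j in range(len(mean))]
```

A practical surrogate would retain several modes and model the distribution of the coefficients; one mode suffices to show the mechanism of replacing the simulator by cheap samples.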
Liu, Baolong. "Dynamic modeling in sustainable operations and supply chain management." Thesis, Cergy-Pontoise, Ecole supérieure des sciences économiques et commerciales, 2018. http://www.theses.fr/2018ESEC0006.
This thesis articulates several important issues in sustainable operations and supply chain management, not only to provide insights for enhancing the performance of firms but also to appeal to enterprises to adopt appropriate means for a better environment in our society. The link from the firm level to the society level is that improving green performance through better operations management efficiency in firms and supply chains is an indispensable element of ameliorating the environment in our society. Take China as an example: for several years (The Straitstimes, 2017; Stanway & Perry, 2018), the government has spared no effort in resolving air pollution problems. An important and useful means is to impose strict regulations and monitor the efforts of firms, which face serious fines if certain standards are not met during random inspections. Therefore, firms have to cooperate, for the betterment of both their profitability and, more importantly, their environmental impact. Throughout this endeavor, despite the uncertain future situation, air quality has gradually improved in China (Zheng, 2018). This thesis, in a more general setting, aims to provide important insights to firms so that they are not only able to meet the regulations but can genuinely contribute to building a better environment for future generations. Basically, our goal is to obtain a deep understanding of the trade-offs with which companies are faced, and to model the problems in order to seek possible solutions and help firms and supply chains enhance their performance from a theoretical point of view. Then, indirectly, this work will help firms realize the importance of sustainable operations and supply chain management for our society. The structure of the thesis is organized as follows. Chapter 2 introduces the thesis in French. Chapter 3 is the first essay, Environmental Collaboration and Process Innovation in Supply Chain Management with Coordination.
Chapter 4 includes the contents of the second essay, Remanufacturing of Multi-Component Systems with Product Substitution, and the third essay, Joint Dynamic Pricing and Return Quality Strategies Under Demand Cannibalization, is introduced in Chapter 5. Chapter 6 gives the general concluding remarks on the three essays, followed by the reference list and the appendices.
Démoulin, Baptiste. "QM/MM modeling of the retinal chromophore in complex environments." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEN049/document.
We have used our QM/MM interface to model the photochemical and photophysical properties of the retinal chromophore in several environments. First, we proved that methylation of the retinal backbone, which converts a slow photochemistry into an ultra-fast, protein-like behaviour in methanol solution, modifies the interplay between the retinal excited states, favouring the formation of a photo-active transient intermediate. Then, we studied the direct effect of the environment in the case of rhodopsin mimics, where point mutations of a few amino acids lead to systems that can absorb across the wide visible range. Combined with ultra-fast pump-probe spectroscopy, our method has shown that the electrostatic potential around the retinal can affect the shape of the excited potential energy surface, and is able to tune the excited-state lifetime as well as the location of the photoisomerization. Next, we showed that the currently accepted protonation state of the amino acids in the vicinity of the retinal in bacteriorhodopsin leads to a strongly blue-shifted absorption, while protonation of Asp212 leads to accurate results; we now aim at a validation of this protonation through computation of the fluorescence and the excited-state lifetime. Finally, we have modeled the photophysics of the unprotonated Schiff base in a UV pigment, where an original and previously unreported photochemistry takes place, notably with the direct involvement of a doubly excited state. These studies have shown the reliability of our QM/MM potential for modeling a wide range of different environments.
Soueidan, Hayssam. "Discrete event modeling and analysis for systems biology models." Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13916/document.
A general goal of systems biology is to acquire a detailed understanding of the dynamics of living systems by relating functional properties of whole systems to the interactions of their constituents. Often this goal is tackled through computer simulation. A number of different formalisms are currently used to construct numerical representations of biological systems, and a certain wealth of models is proposed using ad hoc methods. An interesting question arises: to what extent can these models be reused and composed, together or in a larger framework? In this thesis, we propose BioRica as a means to circumvent the difficulty of incorporating disparate approaches in the same modeling study. BioRica is an extension of the AltaRica specification language for describing hierarchical non-deterministic General Semi-Markov processes. We first extend the syntax and automata semantics of AltaRica in order to account for stochastic labeling. We then provide a semantics for BioRica programs in terms of stochastic transition systems, which are transition systems with stochastic labeling. We then develop numerical methods to symbolically compute the probability of a given finite path in a stochastic transition system. We then define algorithms and rules to compile a BioRica system into a stand-alone C++ simulator that simulates the underlying stochastic process. We also present language extensions that enable the modeler to include, in a BioRica hierarchical system, nodes that use numerical libraries (e.g. Mathematica, Matlab, GSL). Such nodes can be used to perform numerical integration or flux balance analysis during discrete event simulation. We then consider the problem of using models with uncertain parameter values. Quantitative models in systems biology depend on a large number of free parameters, whose values completely determine the behavior of the models.
Some ranges of parameter values produce similar system dynamics, making it possible to define general trends for the trajectories of the system (e.g. oscillating behavior) for some parameter values. In this work, we defined an automata-based formalism to describe the qualitative behavior of a system's dynamics. Qualitative behaviors are represented by finite transition systems whose states contain predicate valuations and whose transitions are labeled by probabilistic delays. We provide algorithms to automatically build such automata representations by random sampling over the parameter space, and algorithms to compare and cluster the resulting qualitative transition systems. Finally, we validate our approach by studying a rejuvenation effect in yeast cell populations, using a hierarchical population model defined in BioRica. Models of ageing for yeast cells aim to provide insight into the general biological processes of ageing. For this study, we used the BioRica framework to generate a hierarchical simulation tool that allows dynamic creation of entities during simulation. The predictions of our hierarchical mathematical model have been validated experimentally by the microbiology laboratory of Gothenburg.
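The probability of a finite path in a stochastic transition system, mentioned in this abstract, reduces in a discrete-time simplification (ignoring the continuous delays of general semi-Markov processes, and computed numerically rather than symbolically as in the thesis) to a product of transition probabilities:

```python
def path_probability(transitions, path):
    """Probability of a finite path in a discrete-time stochastic transition
    system, given transitions[state] = {successor: probability}.
    Missing transitions have probability 0."""
    p = 1.0
    for s, t in zip(path, path[1:]):
        p *= transitions.get(s, {}).get(t, 0.0)
    return p
```

For instance, in a two-state system where "a" moves to "a" or "b" with probability 0.5 each and "b" always returns to "a", the path a, b, a has probability 0.5 * 1.0 = 0.5.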
Zhang, Min. "Modélisation et analyse de processus biologiques dans des algèbres de processus." Phd thesis, Université Paris-Diderot - Paris VII, 2007. http://tel.archives-ouvertes.fr/tel-00155301.
In the first part, we model signal transduction, and more specifically the activation process of the "ras" protein. We introduce a new extension of the π-calculus, the Iπ-calculus, to model this biological process in the presence of aberrance. The calculus is obtained by adding two aberrant actions to the π-calculus. The Iπ-calculus, although already expressive enough to capture aberrance, can be further refined by introducing additional information into the syntax, either as "tags" or as types. Tags are more intuitive, but they introduce redundancy, which is eliminated when this information is presented as types. We show the equivalence between the two kinds of decoration. The type/tag system presented here is very rudimentary, but our hope is to enrich it so as to integrate quantitative parameters such as temperature, concentration, etc. into the modeling of biological processes.
In the second part, we address, from a formal point of view, the question of self-assembly in the κ-calculus, a language for describing protein-protein interactions. We define a subset of reversible computation rules that allows us to ensure a deadlock-free encoding of the "coarse-grained" calculus (the κ-calculus) into a "fine-grained" one (the mκ-calculus). We prove the correctness of this simulation internally (using the reversible rules), thereby improving on the results of Danos and Laneve.
Finally, in a more prospective part, we suggest how bigraphs can be used to model and analyze biological processes. We first show how to encode the "ras" example in this formalism. We then indicate, through an example, how the κ-calculus can be translated into bigraphs.
Fodra, Pietro. "Modeling of the price microstructure and applications of stochastic control to algorithmic trading." Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCC090.
In this thesis, we address the modelling of asset prices in the limit order book and the application of optimal control techniques to algorithmic trading, in particular market making. For small-tick assets, we develop a market-making algorithm in a book where order arrivals follow a Poisson law whose intensity decreases exponentially with the distance of the order from the mid-price. Thanks to asymptotic expansion techniques, we obtain explicit results for a very wide class of models, of which we only assume the first two moments to be known. For large-tick assets, we propose a new model based on a semi-Markov process, with which we are able to replicate stylized facts such as mean-reverting noise, large-scale Brownian behavior, and the dependence of the variance estimator on the sampling frequency. In this setting, we describe a market-making algorithm using optimal control and asymptotic expansion techniques, drastically reducing the numerical work. Finally, we improve the previous model by using variable-length Markov chains (VLMC), which describe the long memory of the price and which, although explicit formulae are lost, yield interesting applications in algorithmic trading.
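In the small-tick model described above, a quote at distance delta from the mid-price is filled at Poisson rate A·exp(-k·delta); ignoring inventory risk, the expected gain per unit time of such a quote is delta·A·exp(-k·delta), maximized at delta = 1/k. A minimal sketch with hypothetical parameters (a drastically simplified stand-in for the thesis's control problem):

```python
import numpy as np

# Small-tick sketch: orders at distance delta from the mid-price are
# filled at Poisson rate A * exp(-k * delta). Ignoring inventory risk,
# the expected gain per unit time of quoting at delta is
# delta * A * exp(-k * delta), which is maximized at delta = 1/k.
A, k = 1.5, 0.3          # hypothetical intensity parameters

def fill_rate(delta):
    return A * np.exp(-k * delta)

deltas = np.linspace(0.01, 20.0, 4000)
gain = deltas * fill_rate(deltas)
delta_star = deltas[np.argmax(gain)]   # grid optimum, close to 1/k
```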
Li, Haizhou. "Modeling and verification of probabilistic data-aware business processes." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22563/document.
There is a wide range of new applications that stress the need for business process models that are able to handle imprecise data. This thesis studies the underlying modelling and analysis issues. It uses as formal model to describe process behaviours a labelled transitions system in which transitions are guarded by conditions defined over a probabilistic database. To tackle verification problems, we decompose this model to a set of traditional automata associated with probabilities named as world-partition automata. Next, this thesis presents an approach for testing probabilistic simulation preorder in this context. A complexity analysis reveals that the problem is in 2-exptime, and is exptime-hard, w.r.t. expression complexity while it matches probabilistic query evaluation w.r.t. data-complexity. Then P-LTL and P-CTL model checking methods are studied to verify this model. In this context, the complexity of P-LTL and P-CTL model checking is in exptime. Finally a prototype called ”PRODUS” which is a modeling and verification tool is introduced and we model a realistic scenario in the domain of GIS (graphical information system) by using our approach
Lagache, Thibault. "Modeling the early steps of viral infection : a stochastic approach." Paris 6, 2009. http://www.theses.fr/2009PA066470.
Carré, Jérôme. "The modeling and detection of gravitational waves for Lisa." Paris 7, 2012. http://www.theses.fr/2012PA077194.
Gravitational waves (GWs) are spacetime metric perturbations propagating at the speed of light; they are a direct prediction of the general theory of relativity. To reach the 10^-4 to 1 Hz frequency band, ESA and NASA proposed, in a joint mission, the first space-based GW detector, the Laser Interferometer Space Antenna (LISA). The high sensitivity of LISA to GWs obliges the community to develop and implement new data-analysis methods such as analytical waveform modeling, numerical relativity and stochastic search algorithms. Our work takes its place in this effort, and we treat several problems. After giving an overview of GW astronomy, particularly for space-based detectors, we treat the presence of gaps in the data stream, caused by technical reasons. We propose a first attempt at dealing with gaps and examine the effect of their frequency on parameter estimation for white dwarf - white dwarf galactic binary systems. We decided to use a window function to smooth the resulting discontinuity and found that the best choice is a Hann function. Looking at the impact of the gap frequency, we conclude that one gap per week is a better choice than one gap per day. We then turn to extreme mass-ratio inspirals, whose waveforms we generate, in the case of circular equatorial orbits, using a post-Newtonian approximation for the flux. We build a more complete post-Newtonian flux than previous works and try different variants of the Padé and Chebyshev re-summation techniques. We found that the flux needs to be treated differently depending on the spin of the central black hole and that, in general, the best results were obtained with an inverted Chebyshev flux. Finally, we examine a new method for parameter estimation for supermassive black holes, the Hamiltonian Markov chain (HMC). We present the HMC and show how to tune it for supermassive black hole binaries.
We then compare the HMC with Markov chain Monte Carlo (MCMC) and show that the HMC converges faster than the MCMC.
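The HMC idea above can be illustrated on a trivial target. The sketch below runs leapfrog-based HMC on a one-dimensional standard Gaussian (log-density -q²/2); it is a deliberately minimal illustration, nothing like the tuning required for supermassive-black-hole likelihoods:

```python
import numpy as np

# Minimal Hamiltonian Monte Carlo for a standard Gaussian target.
rng = np.random.default_rng(0)

def grad_neg_log_post(q):          # gradient of q^2 / 2
    return q

def hmc_step(q, eps=0.2, n_leap=10):
    p = rng.normal()                       # resample momentum
    q_new, p_new = q, p
    for _ in range(n_leap):                # leapfrog integration
        p_new -= 0.5 * eps * grad_neg_log_post(q_new)
        q_new += eps * p_new
        p_new -= 0.5 * eps * grad_neg_log_post(q_new)
    # Metropolis accept/reject on the Hamiltonian
    h_old = 0.5 * q**2 + 0.5 * p**2
    h_new = 0.5 * q_new**2 + 0.5 * p_new**2
    return q_new if rng.random() < np.exp(h_old - h_new) else q

q, samples = 3.0, []                       # start far in the tail
for _ in range(2000):
    q = hmc_step(q)
    samples.append(q)
samples = np.asarray(samples[200:])        # drop burn-in
# empirical mean ~ 0 and standard deviation ~ 1 for the N(0,1) target
```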
Charpentier, Frédéric. "Maîtrise du processus de modélisation géométrique et physique en conception mécanique." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2014. http://tel.archives-ouvertes.fr/tel-01016665.
Coron, Camille. "Stochastic modeling and eco-evolution of a diploid population." Palaiseau, Ecole polytechnique, 2013. http://pastel.archives-ouvertes.fr/docs/00/91/82/57/PDF/These.pdf.
We study the random modeling and genetic evolution of diploid populations in an eco-evolutionary context. The population is modeled by a multi-type birth-and-death process with interaction, whose birth rates are designed to model Mendelian reproduction. In particular, the population size is not assumed to be constant and can be small. In a first part, we provide a probabilistic study of the mutational meltdown, a phenomenon in which the size of a small population decreases ever more rapidly due to ever more frequent fixations of deleterious mutations. We give a formula for the fixation probability of a slightly deleterious allele as a function of the initial genetic composition of the population, and we prove the existence of a mutational meltdown under a rare-mutations hypothesis. Besides, we give numerical results and detailed biological interpretations of the observed behaviors. In particular, we study the impact of the mutational meltdown on the dynamics of the mean population size and quantify this phenomenon in terms of demographic parameters. In a second part, under an approximation of large population size and frequent birth and death events, we first study the convergence toward a slow-fast stochastic dynamics and the quasi-stationary behavior of a diploid population characterized by its genetic composition at one bi-allelic locus. In particular, we study the possibility of a long-time coexistence of the two alleles in the population conditioned on non-extinction. Next, we generalize this slow-fast dynamics to a population presenting an arbitrary finite number of alleles. The population is finally modeled by a measure-valued stochastic process that converges, as the number of alleles goes to infinity, toward a generalized Fleming-Viot superprocess with randomly evolving population size and diploid additive selection.
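Birth-and-death processes with interaction, of the kind described above, are typically simulated event by event (Gillespie's algorithm). A much-simplified, single-type sketch with hypothetical logistic rates (the thesis's multi-type diploid models are far richer):

```python
import random

# Gillespie-style simulation of a birth-and-death process with logistic
# competition: per-capita birth rate b, per-capita death rate d + c*(n-1).
def simulate(n0, b, d, c, t_max, seed=1):
    random.seed(seed)
    t, n, path = 0.0, n0, [(0.0, n0)]
    while t < t_max and n > 0:
        birth, death = b * n, (d + c * (n - 1)) * n
        total = birth + death
        t += random.expovariate(total)                 # time to next event
        n += 1 if random.random() < birth / total else -1
        path.append((t, n))
    return path

path = simulate(n0=20, b=2.0, d=1.0, c=0.02, t_max=50.0)
# the population fluctuates around the equilibrium (b - d) / c = 50
```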
Lin, Yishuai. "An organizational ontology for multiagent-based Enterprise Process modeling and automation." Phd thesis, Université de Technologie de Belfort-Montbeliard, 2013. http://tel.archives-ouvertes.fr/tel-00977726.
Zhang, Qiang. "Process modeling of innovative design using systems engineering." Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD007/document.
Following the design research methodology (DRM), we develop a series of process models to comprehensively describe and effectively manage innovative design, in order to achieve an adequate balance between innovation and control. Firstly, we introduce a descriptive model of innovative design. This model reflects the actual process and pattern of innovative design, locates innovation opportunities in the process, and supports a systematic perspective focused on the external and internal factors affecting the success of innovative design. Secondly, we perform an empirical study to investigate how control and flexibility can be balanced to manage uncertainty in innovative design. After identifying project practices that cope with these uncertainties in terms of control and flexibility, a case-study sample based on five innovative design projects from an automotive company is analyzed and shows that control and flexibility can coexist. Based on the managerial insights of the empirical study, we develop a procedural process model and an activity-based adaptive model of innovative design. The former provides a conceptual framework to balance innovation and control through process structuration at the project level and the integration of flexible practices at the operation level. The latter considers innovative design as a complex adaptive system, and thereby proposes a process-design method that dynamically constructs the process architecture of innovative design. Finally, the two models are verified by supporting a number of process analyses and simulations within a series of innovative design projects.
Lin, Yanhui. "A holistic framework of degradation modeling for reliability analysis and maintenance optimization of nuclear safety systems." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC002/document.
Components of nuclear safety systems are in general highly reliable, which makes it difficult to model their degradation and failure behaviors given the limited amount of data available. Besides, the complexity of this modeling task is increased by the fact that these systems are often subject to multiple competing degradation processes, that these processes can be dependent under certain circumstances, and that they are influenced by a number of external factors (e.g. temperature, stress, mechanical shocks, etc.). In this complicated setting, this PhD work aims to develop a holistic framework of models and computational methods for the reliability analysis and maintenance optimization of nuclear safety systems, taking into account the available knowledge on the systems, their degradation and failure behaviors, their dependencies, the external influencing factors and the associated uncertainties. The original scientific contributions of the work are: (1) For single components, we integrate random shocks into multi-state physics models for component reliability analysis, considering general dependencies between the degradation and two types of random shocks.
(2) For multi-component systems (with a limited number of components): (a) a piecewise-deterministic Markov process modeling framework is developed to treat degradation dependency in a system whose degradation processes are modeled by physics-based models and multi-state models; (b) epistemic uncertainty due to incomplete or imprecise knowledge is considered, and a finite-volume scheme is extended to assess the (fuzzy) system reliability; (c) the mean absolute deviation importance measures are extended to components with multiple dependent competing degradation processes and subject to maintenance; (d) the optimal maintenance policy considering epistemic uncertainty and degradation dependency is derived by combining the finite-volume scheme, differential evolution and non-dominated sorting differential evolution; (e) the modeling framework of (a) is extended to include the impacts of random shocks on the dependent degradation processes. (3) For multi-component systems (with a large number of components), a reliability assessment method considering degradation dependency is proposed, combining binary decision diagrams and Monte Carlo simulation to reduce computational costs.
Arpón, Javier. "Statistical analysis and modeling of nuclear architecture in Arabidopsis Thaliana." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS586/document.
Eukaryotic cell nuclei contain distinct, functionally or structurally defined compartments at multiple scales that present a highly ordered spatial arrangement. Several studies in plants and animals have pointed out links between nuclear organization and genome functions. A key challenge is to identify the rules according to which nuclear compartments are organized in space and to describe how these rules may vary according to physiological or experimental conditions. Spatial statistics have rarely been used for this purpose, and were generally limited to the evaluation of complete spatial randomness. In this thesis, we develop a spatial statistical approach combining cytology, image analysis and spatial modeling to better understand spatial configurations inside the nucleus. Our first contribution is a methodological framework for testing models of spatial organization at the population level. Several spatial models have been developed and implemented, including randomness, orbital randomness, maximum regularity, territorial randomness and orbital territorial randomness of biological objects within finite 3D domains of arbitrary shape. New spatial descriptors, in combination with classical ones, are used to compare observed patterns to the configurations expected under the tested models. An unbiased version of a previously published statistical test is proposed to evaluate the goodness-of-fit of spatial models over populations of observed patterns. In the second part of this thesis, we investigate the properties of the proposed approach using simulated and real data. The robustness of the approach to segmentation errors and the reliability of the spatial evaluations are examined. Besides, the basis for a method to compare spatial distributions between different experimental groups is proposed. In the last part of this work, the proposed approach is applied to A. thaliana leaf cell nucleus images to analyze the spatial distribution of chromocenters, which are dynamic and plastic heterochromatic compartments with an important structural role in the genome. A first application was to analyze isolated and cryo-sectioned nuclei from wild-type plants. We show that chromocenters present a highly regular distribution, confirming previously published results. Using new spatial statistical descriptors, we then demonstrate objectively and quantitatively, for the first time, that chromocenters exhibit a preferentially peripheral localization. Employing a new spatial model, we then reject the hypothesis that the regular organization is explained solely by the peripheral positioning. We further demonstrate that chromocenter organization displays a close-to-maximum spatial regularity at the global scale, but not at the local one. Lastly, we use simulations to examine a model according to which chromocenter positioning is constrained by chromosome territories and by interactions with the nuclear boundary. The second application was to elucidate whether the chromocenter distribution could be altered by different mutations. We analyze nucleus data from crwn and kaku mutants, which are known to affect nuclear morphology. The results suggest that these mutations impact nuclear morphology and heterochromatin features, but do not alter the regularity of the chromocenter distribution even when the relative distance to the border is affected. The genericity of the proposed framework for analyzing object distributions in finite 3D domains, and its expandability to additional spatial models and descriptors, should be of interest for deciphering spatial organizations within biological systems at various scales.
Deschatre, Thomas. "Dependence modeling between continuous time stochastic processes : an application to electricity markets modeling and risk management." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLED034/document.
In this thesis, we study some dependence-modeling problems between continuous-time stochastic processes, with applications to the modeling and risk management of electricity markets. In a first part, we propose new copulae to model the dependence between two Brownian motions and to control the distribution of their difference. We show that the class of admissible copulae for the Brownian motions contains asymmetric copulae. These copulae allow the survival function of the difference between two Brownian motions to take higher values in the right tail than in the Gaussian copula case. The results are applied to the joint modeling of electricity and other energy commodity prices. In a second part, we consider a stochastic process which is the sum of a continuous semimartingale and a mean-reverting compound Poisson process, and which is discretely observed. An estimation procedure is proposed for the mean-reversion parameter of the Poisson process in a high-frequency framework with finite time horizon, assuming this parameter is large. The results are applied to the modeling of spikes in electricity price time series. In a third part, we consider a doubly stochastic Poisson process whose stochastic intensity is a function of a continuous semimartingale. A local polynomial estimator is considered to infer the intensity function, and a method is given to select the optimal bandwidth. An oracle inequality is derived. Furthermore, a test is proposed to determine whether the intensity function belongs to some parametric family. Using these results, we model the dependence between the intensity of electricity spikes and exogenous factors such as wind production.
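Nonparametric intensity estimation of the kind used in the third part can be sketched with a kernel (local-constant) estimator, a simpler cousin of the local polynomial estimator; the intensity function, the thinning simulation and the fixed bandwidth below are all illustrative assumptions, not the thesis's procedure:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an inhomogeneous Poisson process on [0, 10] with intensity
# lam(t) = 5 + 4*sin(t) by thinning, then estimate the intensity with a
# Gaussian kernel (bandwidth fixed here, rather than optimally selected).
T, lam_max = 10.0, 9.0
lam = lambda t: 5.0 + 4.0 * np.sin(t)

cand = rng.uniform(0, T, rng.poisson(lam_max * T))        # dominating process
events = np.sort(cand[rng.uniform(0, lam_max, cand.size) < lam(cand)])

def intensity_hat(t, h=0.5):
    u = (t - events) / h
    return np.sum(np.exp(-0.5 * u**2)) / (h * np.sqrt(2 * np.pi))

# the estimate should be larger near the peak (t ~ pi/2, lam = 9)
# than near the trough (t ~ 3*pi/2, lam = 1)
```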
Ulmer, Jean-Stéphane. "Approche générique pour la modélisation et l'implémentation des processus." Thesis, Toulouse, INPT, 2011. http://www.theses.fr/2011INPT0002/document.
A company must be able to describe and remain responsive to endogenous or exogenous events. Such flexibility can be obtained through Business Process Management (BPM). In a BPM approach, different transformations operate on process models developed by the business analyst and the IT expert. A misalignment is created between these heterogeneous models during their manipulation: this is the "business-IT gap" described in the literature. The objective of our work is to propose a methodological framework for better management of business processes, in order to reach a systematic alignment from their modelling to their implementation within the target system. Using concepts from model-driven enterprise and information-system engineering, we define a generic approach ensuring intermodal consistency. Its role is to maintain and provide all information related to model structure and semantics. By allowing a full restitution of a transformed model, in the sense of reverse engineering, our platform enables synchronization between the analysis model and the implementation model. The manuscript also presents the possible match between process engineering and BPM through a multi-perspective scale.
Nivot, Christophe. "Analyse et étude des processus markoviens décisionnels." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0057/document.
We investigate the potential of Markov decision process theory through two applications. The first part of this work is dedicated to the numerical study of an industrial launcher integration process, in cooperation with Airbus DS. It is a particular case of inventory control problems in which a launch calendar plays a key role. The model we propose implies that standard optimization techniques cannot be used, so we investigate two simulation-based algorithms. They return non-trivial optimal policies which can be applied in actual practice. The second part of this work deals with the study of partially observable optimal stopping problems. We propose an approximation method using optimal quantization for problems with a general state space. We study the convergence of the approximated optimal value towards the real optimal value, as well as the convergence rate. We apply our method to a numerical example.
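For a finite inventory-control MDP, the textbook dynamic-programming baseline is value iteration. The toy model below (stock levels, order costs, demand probability all hypothetical) illustrates that baseline; it is not one of the simulation-based algorithms the thesis develops for the launcher problem, where such standard techniques do not apply:

```python
# Value iteration for a toy inventory MDP: stock in {0..4}, order a in
# {0..2} units, one unit demanded with prob 0.7, holding cost 0.1 per
# unit, stock-out penalty 2, order cost 0.5 per unit (all hypothetical).
STATES, ACTIONS, GAMMA, P_DEMAND = range(5), range(3), 0.95, 0.7

def step(s, a, demand):
    """Post-order stock (capped at 4), one-period cost, next state."""
    stock = min(s + a, 4)
    cost = 0.5 * a + 0.1 * stock + (2.0 if demand and stock == 0 else 0.0)
    return max(stock - demand, 0), cost

V = {s: 0.0 for s in STATES}
for _ in range(500):                      # Bellman iterations
    V = {s: min(
            sum(p * (c + GAMMA * V[s2])
                for demand, p in ((1, P_DEMAND), (0, 1 - P_DEMAND))
                for s2, c in [step(s, a, demand)])
            for a in ACTIONS)
         for s in STATES}
# V[s] now approximates the minimal expected discounted cost from stock s;
# starting with more stock should never cost more than starting empty.
```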
Deng, Yuxin. "Axiomatisations et types pour des processus probabilistes et mobiles." Phd thesis, École Nationale Supérieure des Mines de Paris, 2005. http://tel.archives-ouvertes.fr/tel-00155225.
... for modern distributed systems. Two important features of models for these systems are probabilities and typed mobility: probabilities can be used to quantify uncertain or unpredictable behaviours, and types can be used to guarantee safe behaviours in mobile systems. In this thesis we develop algebraic techniques and type-based techniques for the behavioural study of probabilistic and mobile processes.
In the first part of the thesis we study the algebraic theory of a process calculus that combines nondeterministic and probabilistic behaviour in the style of the probabilistic automata proposed by Segala and Lynch. We consider various strong and weak behavioural equivalences, and we provide complete axiomatisations for finite-state processes, restricted to guarded recursion in the case of the weak equivalences.
In the second part of the thesis we study the algebraic theory of the π-calculus in the presence of capability types, which are very useful in mobile process calculi. Capability types distinguish the capability to read on a channel, the capability to write on a channel, and the capability to both read and write; they also introduce a natural and powerful subtyping relation. We consider two variants of typed bisimilarity, in their delayed and early versions. For both variants, we give complete axiomatisations for closed terms. For one of the two variants, we provide a complete axiomatisation for all finite terms.
In the last part of the thesis we develop type-based techniques for verifying the termination property of certain mobile processes. We provide four type systems to guarantee this property. The type systems are obtained by successive refinements of the types of the simply-typed π-calculus. The termination proofs use techniques from term rewriting systems. These type systems can be used to reason about the termination behaviour of some nontrivial examples: the encodings of primitive recursive functions, the protocol for encoding separate choice in terms of parallel composition, and a symbol table implemented as a dynamic chain of cells.
These results lay the basis for a future study of more advanced models that may combine probabilities with types. They also highlight the robustness of the algebraic and type-based techniques for behavioural reasoning.
Cohen, Robert. "Modélisation, filtrage et contrôle de processus stochastiques." Paris, ENST, 1987. http://www.theses.fr/1987ENST0015.
Pan-Yu, Yiyan. "Spectres de processus de Markov." Phd thesis, Université Joseph Fourier (Grenoble), 1997. http://tel.archives-ouvertes.fr/tel-00004959.
Pirayesh, Neghab Amir. "Évaluation basée sur l'interopérabilité de la performance des collaborations dans le processus de conception." Thesis, Paris, ENSAM, 2014. http://www.theses.fr/2014ENAM0033/document.
A design process, whether for a product or for a service, is composed of a large number of activities connected by many data and information exchanges. The quality of these exchanges, called collaborations, requires being able to send and receive data and information that are useful, understandable and unambiguous to the different designers involved. The research question thus focuses on the definition and evaluation of the performance of collaborations and, by extension, of the design process in its entirety. This performance evaluation requires the definition of several key elements, such as the object(s) to be evaluated, the performance indicator(s) and the action variable(s). In order to define the object of evaluation, this research relies on a literature study resulting in a meta-model of collaborative processes. The measurement of the performance of these collaborations is, for its part, based on the concept of interoperability; this measure estimates the technical and conceptual interoperability of the different elementary collaborations. This work concludes by proposing a tooled methodological framework for collaboration performance evaluation. Through a two-step approach (modeling and evaluation), this framework facilitates the identification of inefficient collaborations and their causes. The framework is illustrated and partially validated using an academic example and a case study in the design domain.
Paredes, Moreno Daniel. "Modélisation stochastique de processus d'agrégation en chimie." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30368/document.
We focus on the population balance equation (PBE). This equation describes the time evolution of systems of colloidal particles in terms of their number density function (NDF), where processes of aggregation and breakage are involved. In the first part, we investigate the formation of groups of particles using the available variables and the relative importance of these variables in the formation of the groups. We use data from (Vlieghe 2014) and exploratory techniques such as principal component analysis, cluster analysis and discriminant analysis. We apply this scheme of analysis to the initial population of particles as well as to the resulting populations under different hydrodynamic conditions. In the second part, we study the use of the PBE in terms of the moments of the NDF, together with the quadrature method of moments (QMOM) and generalized minimal extrapolation (GME), in order to recover the time evolution of a finite set of standard moments of the NDF. The QMOM method uses an application of the product-difference algorithm, and GME recovers a discrete non-negative measure given a finite set of its standard moments. In the third part, we propose a discretization scheme to find a numerical approximation to the solution of the PBE. We use three cases where the analytical solution is known (Silva et al. 2011) to compare the theoretical solution to the approximation found with the discretization scheme. In the last part, we propose a method for estimating the parameters involved in the modeling of aggregation and breakage processes in the PBE. The method uses the numerical approximation found, as well as the extended Kalman filter, and estimates the parameters iteratively at each time step using a non-linear least squares estimator.
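The moment-inversion step at the heart of QMOM, recovering an N-point quadrature (nodes and weights) from the first 2N moments of the NDF, can be sketched via a Hankel linear system; the thesis uses the product-difference algorithm, and the route below is only an equivalent small-N illustration:

```python
import numpy as np

# QMOM-style moment inversion sketch: recover an N-point quadrature
# (nodes, weights) from the first 2N moments of a measure.
def invert_moments(m):
    n = len(m) // 2
    # Orthogonality conditions give a Hankel system for the coefficients
    # of the monic polynomial whose roots are the quadrature nodes.
    H = np.array([[m[i + j] for j in range(n)] for i in range(n)])
    c = np.linalg.solve(H, -np.array(m[n:2 * n]))       # c = [c0, ..., c_{n-1}]
    nodes = np.sort(np.roots(np.append(1.0, c[::-1])).real)
    # Weights solve the Vandermonde system  sum_j w_j x_j^i = m_i.
    V = np.vander(nodes, increasing=True).T
    weights = np.linalg.solve(V, np.array(m[:n]))
    return nodes, weights

# Moments of the measure 0.5*delta_1 + 0.5*delta_2: m_k = 0.5 * (1 + 2^k).
moms = [0.5 * (1 + 2.0**k) for k in range(4)]
nodes, weights = invert_moments(moms)
# recovers nodes [1, 2] and weights [0.5, 0.5]
```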
Kessouri, Fayçal. "Cycles biogéochimiques de la mer Méditerranée : processus et bilans." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30367/document.
The Mediterranean Sea is characterized by various trophic regimes due to river inputs, as well as by heterogeneous nitrogen-to-phosphorus ratios and hydrodynamic processes, in particular vertical mixing. The objective of this thesis, carried out in the framework of the MISTRALS program, is to study the biogeochemical cycles of the various regimes that compose the Mediterranean. It is mainly based on in situ observations collected during the DeWEx project and on 3D physical/biogeochemical modeling at the scale of the entire basin and at the regional scale of the western sub-basin. The first study is a ten-year, basin-scale study allowing an ecological classification that yields bioregions as a function of their physical and biogeochemical properties. Southern oligotrophic areas, in both the eastern and western sub-basins, are characterized by weak blooms and deep nutriclines. In the north, near Rhodes Island, in the southern Adriatic Sea and in the Liguro-Provencal sub-basin, the summer oligotrophic regime is altered annually by an intense vertical dynamics, deep convection. This process is considered a driving force of the thermohaline circulation and entrains an enrichment of nutrients in the surface layer, which allows rapid and intense blooms in spring. The Mediterranean receives organic matter from the Atlantic Ocean, while it is a source of inorganic matter for the Atlantic. In the second part of this thesis, a high-resolution model of the western sub-basin was embedded in the basin model to quantify the physical and biological processes that determine the temporal variability of the nutrient stocks and their stoichiometry. The north-western Mediterranean turns from a eutrophic regime during deep convection to an oligotrophic regime after the spring bloom. During the eutrophic regime, new production dominates primary production at the surface, while during the oligotrophic regime primary production is essentially regenerated production in the subsurface. Water masses adjacent to the convection zone and south-western regions are characterized by a dominance of regenerated production throughout the year.
Bansept, Florence. "Biophysical modeling of bacterial population dynamics and the immune response in the gut." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS397.
The first part of this thesis focuses on the colonization dynamics of a bacterial population in early infection of the gut; the aim is to infer biologically relevant parameters from indirect data. We discuss the optimal observable for characterizing the variability in genetic tag distributions. In a first, one-population model, biological arguments and inconsistencies between several experimental observables lead to the study of a second model with two subpopulations replicating at different rates. As expected, this model allows broader possibilities for combining observables, even though no clear conclusion can be drawn from a data set on Salmonella in mice. The second part concerns the mechanisms that make the immune response effective. The main effector of the immune system in the gut, the antibody IgA, enchains daughter bacteria into clonal clusters upon replication. Our model, which predicts the ensuing reduction of diversity in the bacterial population, contributes to evidencing this phenomenon, called "enchained growth". Inside the host, the interplay of cluster growth and fragmentation results in preferentially trapping faster-growing and potentially noxious bacteria away from the epithelium, which could be a way for the immune system to regulate the microbiota composition. At the scale of the host population, in the context of the evolution of antibiotic resistance, if bacteria are transmitted via clonal clusters, the probability of transmitting a resistant bacterium is reduced in immune populations. We thus use statistical-physics tools to identify some generic mechanisms in biology.
Sene, Alsane. "Modélisation et structuration des connaissances dans les processus de télémédecine dédiés aéronautique." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30251/document.
There is an inherent risk in the practice of medicine that can affect the conditions of medical activities (whether for diagnostic or therapeutic purposes). The management of uncertainty is an integral part of decision-making processes in the medical field. In the case of a medical incident during air travel, this uncertainty has three additional sources: (1) variability of the aeronautical conditions, (2) individual variability of the patient's condition, (3) individual variability of the intervener's skills. In-flight medical incidents are currently estimated worldwide at 350 per day, and when they occur they are handled in 95% of cases by health professionals who happen to be passengers. It is often a first experience for them. The main reason for the reluctance of health professionals to respond to the aircraft captain's call is the need to improvise: having to make a diagnosis and assess the severity of the patient's condition under difficult conditions. Apart from telemedicine with remote assistance, the intervener, often alone with his doubts and uncertainty, has no other decision-aid tool on board. Civil aviation also has feedback systems to manage the complexity of such processes. Event collection and analysis policies are in place internationally, for example ECCAIRS (European Co-ordination Center for Accident and Incident Reporting Systems) and the ASRS (Aviation Safety Reporting System). In this work, we first propose a semantic formalization based on ontologies to conceptually clarify the vocabulary of medical incidents occurring during commercial flights. Then, we implement a knowledge extraction process over the data available in existing databases to identify the patterns of the different groups of incidents. Finally, we propose a Clinical Decision Support System (CDSS) architecture that integrates the management of the uncertainties present in both the collected data and the skill levels of the medical professionals involved
Guerrier, Claire. "Multi-scale modeling and asymptotic analysis for neuronal synapses and networks." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066518/document.
In the present PhD thesis, we study neuronal structures at different scales, from synapses to neural networks. Our goal is to develop mathematical models and their analysis, in order to determine how the properties of synapses at the molecular level shape their activity and propagate to the network level. This change of scale can be formulated and analyzed using several tools, such as partial differential equations, stochastic processes and numerical simulations. In the first part, we compute the mean time for a Brownian particle to arrive at a narrow opening, defined as the small cylinder joining two tangent spheres. The method relies on Möbius conformal transformations applied to the Laplace equation. We also estimate, when the particle starts inside a boundary layer near the hole, the splitting probability of reaching the hole before leaving the boundary layer, which is also expressed using a mixed boundary-value Laplace equation. Using these results, we develop model equations and the corresponding stochastic simulations to study vesicular release at neuronal synapses, taking into account their specific geometry. We then investigate the role of several parameters, such as channel positioning, the number of entering ions, and the organization of the active zone. In the second part, we build a model for the pre-synaptic terminal, formulated initially as a reaction-diffusion problem in a confined microdomain, where Brownian particles have to bind to small target sites. We coarse-grain this model into two reduced ones. The first couples a system of mass-action equations to a set of Markov equations, which allows us to obtain analytical results. We develop in a second phase a stochastic model based on Poissonian rate equations, derived from mean first passage time theory and the previous analysis. This model allows fast stochastic simulations that give the same results as the corresponding naïve, and far slower, Brownian simulations.
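A minimal Monte Carlo sketch can illustrate the kind of mean-first-passage-time quantity computed in the first part (in a 2D disk rather than the thesis's two-sphere geometry, and by naive simulation rather than the Möbius-transform analysis; all parameter values are illustrative): a Brownian particle in the unit disk, absorbed on a small boundary arc and reflected elsewhere.

```python
import math
import random

def narrow_escape_mfpt(eps=0.3, D=1.0, dt=1e-3, n_walkers=100, seed=0):
    """Estimate the mean first passage time of a Brownian particle started
    at the center of the unit disk to a small absorbing arc of angular
    half-width eps around angle 0; the rest of the boundary reflects."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * D * dt)  # per-step standard deviation
    total_time = 0.0
    for _ in range(n_walkers):
        x = y = t = 0.0
        while True:
            x += rng.gauss(0.0, sigma)
            y += rng.gauss(0.0, sigma)
            t += dt
            r = math.hypot(x, y)
            if r >= 1.0:
                if abs(math.atan2(y, x)) <= eps:
                    total_time += t  # absorbed at the small window
                    break
                # reflect back inside the disk (first-order approximation)
                x *= (2.0 - r) / r
                y *= (2.0 - r) / r
    return total_time / n_walkers
```

The escape time grows only logarithmically as the window shrinks, which is what makes asymptotic formulas such as those derived in the thesis far more practical than brute-force simulation.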
In the final part, we present a neural network model of bursting oscillations in the context of the respiratory rhythm. We build a mass-action model for the synaptic dynamics of a single neuron and show how the synaptic activity between individual neurons leads to the emergence of oscillations at the network level. We benchmark the model against several experimental studies, and confirm that the respiratory rhythm in resting mice is controlled by recurrent excitation arising from the spontaneous activity of the neurons within the network
Bézenac, Emmanuel de. "Modeling physical processes with deep learning : a dynamical systems approach." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS203.
Deep Learning has emerged as a predominant tool for AI, and already has abundant applications in fields where data is plentiful and access to prior knowledge is difficult. This is not necessarily the case for the natural sciences, and in particular for physical processes. Indeed, these have been the object of study for centuries, a vast amount of knowledge has been acquired, and elaborate algorithms and methods have been developed. Thus, this thesis has two main objectives. The first considers the role that deep learning has to play in this vast ecosystem of knowledge, theory and tools. We attempt to answer this general question through a concrete problem: modelling complex physical processes, leveraging deep learning methods in order to make up for lacking prior knowledge. The second objective is somewhat its converse: it focuses on how perspectives, insights and tools from the study of physical processes and dynamical systems can be applied in the context of deep learning, in order to gain a better understanding and develop novel algorithms
Waycaster, Garrett. "Stratégies d'optimisation par modélisation explicite des différents acteurs influençant le processus de conception." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30122/document.
The commercial success or failure of engineered systems has always been significantly affected by their interactions with competing designs, end users, and regulatory bodies. Designs that deliver too little performance, cost too much, or are deemed unsafe or harmful will inevitably be overcome by competing designs that better meet the needs of customers and society as a whole. Recent efforts to address these issues have led to techniques such as design for customers or design for market systems. In this dissertation, we utilize a game-theoretic framework in order to directly incorporate the effect of these interactions into a design optimization problem that seeks to maximize designer profitability. This approach allows designers to consider the effects of uncertainty both from traditional design variability and from uncertain future market conditions, with customers and competitors acting as dynamic decision makers. Additionally, we develop techniques for modeling and understanding the nature of these complex interactions from observed data by utilizing causal models. Finally, we examine the complex effects of safety on design by examining the history of federal regulation of the transportation industry. These efforts lead to several key findings. First, by considering the effect of interactions, designers may choose vastly different design concepts than would otherwise be considered; this is demonstrated through several case studies with applications to the design of commercial transport aircraft. Second, we develop a novel method for selecting causal models that allows designers to gauge the level of confidence in their understanding of stakeholder interactions, including uncertainty in the impact of potential design changes.
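The interaction-aware optimization can be sketched in miniature with a hypothetical two-firm price game (logit demand split, grid-search best responses); this toy example is not the dissertation's aircraft case study, and all names and parameter values are illustrative.

```python
import math

def profit(p_own, p_rival, cost=10.0, market=1000.0, beta=0.3):
    """Designer profit when demand splits by a logit rule that favors the
    cheaper of the two competing designs (illustrative demand model)."""
    share = 1.0 / (1.0 + math.exp(beta * (p_own - p_rival)))
    return (p_own - cost) * share * market

def best_response(p_rival, lo=10.0, hi=60.0, n=2000):
    """Own price maximizing profit against a fixed rival price (grid search)."""
    return max((lo + (hi - lo) * i / n for i in range(n + 1)),
               key=lambda p: profit(p, p_rival))

def nash_by_iteration(p0=30.0, iters=50):
    """Iterate best responses until the two designers' prices settle at a
    (symmetric) Nash equilibrium."""
    pa = pb = p0
    for _ in range(iters):
        pa = best_response(pb)
        pb = best_response(pa)
    return pa, pb
```

Under these illustrative parameters the iteration converges to a symmetric equilibrium whose markup over cost is set by the price sensitivity beta of the demand: ignoring the competitor, each designer would price very differently, which is the essence of why modeling the interaction changes the chosen design.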
Finally, we demonstrate through our review of regulations and other safety improvements that the demand for safety improvement is not simply related to the ratio of dollars spent to lives saved; instead, the level of personal responsibility and the nature and scale of potential safety concerns are found to have a causal influence on the demand for increased safety in the form of new regulations
Goyet, Sophie. "Modélisation du processus d'application des connaissances entre Recherche et Santé publique." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20077/document.
Health research generates a growing body of scientific literature. However, this scientific production is not systematically integrated into public health. Researchers and policy makers have modes of operation and constraints that do not naturally facilitate exchanges and knowledge translation (KT) from research into health policy. This thesis focuses on the gap between research and health policy and analyzes the determinants of the success or failure of KT between research and health policies in Cambodia. The first chapter defines the KT process and reviews the scarce KT interventions reported in the literature. This review shows that KT is not a new concept, even though it remains under-applied. In this chapter, we also look at the tools used to model processes in health research. We conclude that UML (Unified Modeling Language) appears to be the best modeling tool available to analyze the KT process. The second chapter describes a KT intervention we implemented and subsequently analyzes its impact and the determinants of its partial success, using UML tools. Most of the identified barriers were related either to a lack of synchronization between the production of knowledge and health policy making, or to a lack of mutual understanding between researchers and policymakers. Among the contributing factors, we identified the key roles of an actor who was both policymaker and researcher, and of organizations that acted as communication vectors between researchers and policymakers. The third chapter first presents a quantitative and qualitative analysis of the scientific production of health research in Cambodia. It shows that even though more than 85% of published articles were accessible free of charge, they do not cover all the public health priorities of Cambodia. The following study identifies the main sources of information of the policy makers who contributed to the preparation of the first national health policy against antibiotic resistance.
We show that, as elsewhere, the scientific literature is not an appropriate medium for communicating with the Cambodian health authorities. Finally, in the last chapter, we integrate the various findings from the previous chapters into an analysis of the determinants of KT. From this analysis we draw a generic UML model (a class diagram), which we test on four research projects also conducted in Cambodia. This model may be used in Cambodia or in other countries with limited resources. We conclude that while the principles of KT can be summarized in a few simple rules, they face many barriers when operationally implemented. KT is a dynamic, complex, iterative, and highly context-dependent process. A number of the barriers to KT identified in Cambodia are identical to those found in the West. Among the facilitating factors for KT, we show that the connection between research institutions and national or provincial health authorities is a major asset
Nembé, Jocelyn. "Estimation de la fonction d'intensité d'un processus ponctuel par complexité minimale." Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00346118.
Lu, Wei. "New Results on Stochastic Geometry Modeling of Cellular Networks : Modeling, Analysis and Experimental Validation." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS253/document.
The increasing heterogeneity and irregular deployment of emerging wireless networks pose enormous challenges to the conventional hexagonal model for abstracting the geographical locations of wireless transmission nodes. Against this backdrop, a new network paradigm that models the wireless nodes as a Poisson Point Process (PPP), leveraging the mathematical tools of stochastic geometry for tractable analysis, has been proposed, with the capability of fairly accurately estimating the performance of practical cellular networks. This dissertation investigates the mathematical tractability of the PPP-based approach by proposing new mathematical methodologies and fair approximations incorporating practical channel propagation models. First, a new mathematical framework, referred to as the Equivalent-in-Distribution (EiD)-based approach, is proposed for computing the exact error probability of cellular networks based on random spatial networks. The proposed approach is easy to compute and is shown to be applicable to a wide range of MIMO setups where the modulation techniques and signal recovery techniques are explicitly considered. Second, the performance of relay-aided cooperative cellular networks, where the relay nodes, the base stations, and the mobile terminals are modeled according to three independent PPPs, is analyzed under flexible cell association criteria. The mathematical framework shows that the performance highly depends on the path-loss exponents of one-hop and two-hop links, and that the relays provide negligible performance gains if the system is not adequately designed. Third, the PPP modeling of cellular networks with a unified signal attenuation model is generalized by taking into account the effect of line-of-sight (LOS) and non-line-of-sight (NLOS) channel propagation. A tractable yet accurate link state model is proposed to approximate other models available in the literature.
It is shown that an optimal density for BS deployment exists when the LOS/NLOS links are distinguished in saturated-load cellular networks. In addition, Monte Carlo simulation results for real BS deployments with empirical building blockages are compared with those for PPP-distributed BSs under the proposed link state approximation at the end of this dissertation, as supplementary material. In general, a good match is observed
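A stdlib-only Monte Carlo sketch of the basic PPP abstraction (Rayleigh fading, nearest-BS association, interference-limited regime) shows the kind of coverage estimate that the analytical frameworks above compute in closed form; the window size, intensity and path-loss exponent below are illustrative.

```python
import math
import random

def sample_poisson(rng, mean):
    """Poisson sample by Knuth's method (adequate for moderate means)."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def coverage_probability(lam=1.0, alpha=4.0, theta_db=0.0,
                         half_window=5.0, n_trials=2000, seed=1):
    """SIR coverage probability of a typical user at the origin: base
    stations form a homogeneous PPP of intensity lam in a square window,
    links see Rayleigh fading and power path loss r^-alpha, and the user
    attaches to the nearest base station."""
    rng = random.Random(seed)
    theta = 10.0 ** (theta_db / 10.0)  # SIR threshold (linear scale)
    area = (2.0 * half_window) ** 2
    covered = 0
    for _ in range(n_trials):
        n = sample_poisson(rng, lam * area)
        if n == 0:
            continue
        dists, powers = [], []
        for _ in range(n):
            x = rng.uniform(-half_window, half_window)
            y = rng.uniform(-half_window, half_window)
            r = max(math.hypot(x, y), 1e-3)
            h = rng.expovariate(1.0)  # Rayleigh fading power gain
            dists.append(r)
            powers.append(h * r ** (-alpha))
        serving = min(range(n), key=lambda i: dists[i])
        interference = sum(powers) - powers[serving]
        if interference == 0.0 or powers[serving] / interference > theta:
            covered += 1
    return covered / n_trials
```

For these parameters the estimate lands near the well-known closed-form stochastic-geometry result of roughly 0.56 coverage at a 0 dB threshold with alpha = 4, which is what makes the PPP model attractive: a single integral replaces such simulations.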
Krug, Jean. "Intéractions calottes polaires/océan : modélisation des processus de vêlage au front des glaciers émissaires." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENU033/document.
Polar ice-sheet discharge and the subsequent sea-level rise are a major concern. A warming climate affects the behaviour of tidewater outlet glaciers and increases their ice discharge. As these glaciers drain the ice flow toward the ocean, it is pivotal to incorporate their dynamics when modelling the ice-sheet response to global warming. However, tidewater glacier dynamics is still complicated to understand, as it is believed to involve many feedbacks, among which the feedback between calving-margin dynamics and the glacier's general dynamics is fundamental. This PhD thesis focuses on modelling the calving front of outlet glaciers, in order to enhance the representation of the physical processes occurring at the margin. To do so, we build a new framework for calving based on damage mechanics and fracture mechanics. This allows us to represent the slow degradation of the rheological properties of ice from a virgin state to the appearance of a crevasse field, as well as the rapid fracture propagation associated with calving events. Our model is then constrained within a 2D flow-line representation of Helheim Glacier, Greenland. We find parameter sets for which the glacier behaviour is consistent with its past evolution. Sensitivity tests are carried out and reveal the significance of each model parameter. This new calving law is then employed to study the impact of submarine frontal melting and of ice mélange (a heterogeneous mixture of sea ice and icebergs) on glacier dynamics, the two forcings usually suspected of being responsible for the seasonal variations of the calving margin. Our results show that both forcings affect front dynamics: melting only slightly changes the front position, whereas ice mélange can force the glacier front to move by up to a few kilometers. Additionally, while melting at the front is not sufficient to affect the inter-annual mass balance, this is not obvious under ice-mélange forcing.
Finally, our model highlights a feature specific to floating glaciers: for the strongest forcings, the glacier equilibrium may be modified, as well as its pluri-annual mass balance
Lambert-Lacroix, Sophie. "Fonction d'autocorrélation partielle des processus à temps discret non stationnaires et applications." Phd thesis, Université Joseph Fourier (Grenoble), 1998. http://tel.archives-ouvertes.fr/tel-00004893.
Fontaine, Gauthier. "Modélisation théorique et processus associés pour Architectes Modèle dans un environnement multidisciplinaire." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLY004.
Multi-disciplinary and multi-physics simulation represents a major scientific and industrial challenge. Simulation has essentially been considered by physicists (mechanical domain, electromagnetic domain, ...) as a numerical problem on specific case studies, but has never been addressed from a system perspective. The general problems induced by the numerical simulation of complex systems include model composition, multi-objective optimization, the semantics and formal verification of compositions, and the framework of systems engineering. This thesis proposes an original approach establishing the theoretical and methodological foundations for a seamless process linking systems engineering, multi-objective optimization and multi-physics simulation. Automotive case studies show the validity of this approach, based on the Modelica language
Monteiro, Julian Geraldes. "Modeling and analysis of reliable peer-to-peer storage systems." Nice, 2010. http://www.theses.fr/2010NICE4038.
This thesis aims at providing tools to analyze and predict the performance of general large-scale data storage systems. We use these tools to analyze the impact of different system design choices on various performance metrics. For instance, the bandwidth consumption, the storage space overhead, and the probability of data loss should all be as small as possible. Different techniques are studied and applied. First, we describe a simple Markov chain model that captures the dynamics of a storage system under the effects of peer failures and data repair. We then provide closed-form formulas that give good approximations of the model. These formulas allow us to understand the interactions between the system parameters. In particular, a lazy repair mechanism is studied, and we describe how to tune the system parameters to obtain an efficient utilization of bandwidth. We confirm by comparison with simulations that this model gives correct approximations of the system's average behavior, but does not capture its variations over time. We then propose a new stochastic model based on a fluid approximation that does capture the deviations around the mean behavior. These variations are most of the time neglected in previous works, despite being very important for correctly allocating the system resources. We additionally study several other aspects of a distributed storage system: we propose queuing models to calculate the repair time distribution under limited-bandwidth scenarios; we discuss the trade-offs of hybrid coding (mixing erasure codes and replication); and finally we study the impact of different ways of distributing data fragments among peers, i.e., placement strategies
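The simple Markov chain idea can be sketched for one erasure-coded block: s available fragments out of n (any k reconstruct the block), each fragment failing at rate lam, and a repair process that is either eager or lazy. This is a generic textbook-style birth-death model with illustrative rates, not the thesis's exact model.

```python
def mttdl(n=8, k=4, lam=0.01, mu=1.0, lazy_threshold=None):
    """Mean time to data loss (MTTDL) of one block, by first-step analysis
    on the birth-death chain over s = k..n available fragments (s = k-1 is
    the absorbing 'data lost' state). Repairs run at rate mu whenever
    s < n (eager) or only once s <= lazy_threshold (lazy repair)."""
    states = list(range(k, n + 1))
    idx = {s: i for i, s in enumerate(states)}
    m = len(states)
    # First-step equations: T_s = 1/R_s + (fail_s/R_s) T_{s-1} + (rep_s/R_s) T_{s+1}
    A = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    for s in states:
        i = idx[s]
        fail = s * lam
        repair = mu if s < n and (lazy_threshold is None or s <= lazy_threshold) else 0.0
        rate = fail + repair
        A[i][i] = 1.0
        b[i] = 1.0 / rate
        if s - 1 >= k:                   # failure that does not yet lose data
            A[i][idx[s - 1]] -= fail / rate
        if repair > 0.0:                 # repair restores one fragment
            A[i][idx[s + 1]] -= repair / rate
    # Solve A T = b by Gauss-Jordan elimination (the system is tiny)
    for c in range(m):
        piv = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(m):
            if r != c:
                f = A[r][c] / A[c][c]
                for cc in range(c, m):
                    A[r][cc] -= f * A[c][cc]
                b[r] -= f * b[c]
    return b[idx[n]] / A[idx[n]][idx[n]]
```

Comparing `mttdl()` (eager) with `mttdl(lazy_threshold=5)` shows the trade-off discussed above: lazy repair saves bandwidth by batching repairs, but shortens the expected time before data loss.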
Ebbinghaus, Maximilian. "Stochastic modeling of intracellular processes : bidirectional transport and microtubule dynamics." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00592078.
Coeurjolly, Jean-François. "Inférence statistique pour les mouvements browniens fractionnaires et multifractionnaires." Phd thesis, Université Joseph Fourier (Grenoble), 2000. http://tel.archives-ouvertes.fr/tel-00006736.
El, Jirari Soukaïna. "Modélisation numérique du processus de creusement pressurisé des tunnels." Thesis, Lyon, 2021. http://www.theses.fr/2021LYSET002.
Due to traffic congestion and environmental factors, urban development nowadays involves the construction of new tunnels for transportation purposes or underground networks. Especially in urban areas, where the control of soil deformation during excavation is the predominant problem, the use of pressurized tunnel boring machines (TBMs) is becoming increasingly frequent. This excavation process is the best adapted to guarantee stringent control of surface settlements and displacements near the tunnel, and consequently to preserve the dense and vulnerable housing at the ground surface, as well as existing underground constructions nearby. However, pressurized TBM tunnelling remains a complex process. In particular, the interaction between the machine and the surrounding soil involves various mechanisms that need to be well understood in order to be well controlled. This PhD work falls within this context. It aims to describe and analyse the interaction between the TBM and the surrounding soil by means of a numerical approach based on the finite element method. In a first part, a 3D numerical model is developed in order to reproduce as faithfully as possible the different stages of the tunnelling process and the various volume losses occurring in the ground around the machine. Based on a parametric study, the influence of the machine control parameters and of the mechanical characteristics of the soil on the kinematics of the ground around the tunnel is studied and quantified. In a second part, a simplified 2D numerical model is developed for use in pre-project studies by design offices. This 2D model aims to predict the amplitude of ground movements around the current section of the tunnel according to the distance to the tunnel face.
The hypotheses of this “equivalent” 2D model are deduced from the results of the 3D numerical analysis, and the well-known convergence-confinement method is used to take the 3D nature of the problem into account. In a last part, the capabilities and limitations of these two numerical models are analysed and discussed by comparing the numerical results with data from three monitored sections of two construction sites. For this comparison, the geological, hydrogeological and geotechnical conditions encountered, as well as the pressurization mode of the TBM (EPB or slurry TBM) and the control conditions of the machine, are taken into account
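The convergence-confinement method invoked here admits a compact closed-form sketch in the simplest textbook case (linear-elastic ground, isotropic in-situ stress, Panet-type deconfinement profile); this is an illustrative simplification with assumed constants, not the calibrated models of the thesis.

```python
def deconfinement_ratio(x, R, lam0=0.25, m=0.75):
    """Panet-type deconfinement ratio lam(x): lam0 at the tunnel face
    (x = 0), tending to 1 far behind the face (x = distance behind the
    face, R = tunnel radius; lam0 and m are illustrative constants)."""
    return lam0 + (1.0 - lam0) * (1.0 - (m * R / (m * R + x)) ** 2)

def wall_convergence(lam, sigma0, R, E, nu):
    """Radial displacement at the wall of an unsupported circular tunnel in
    elastic ground under isotropic in-situ stress sigma0 (ground reaction
    curve): u = lam * sigma0 * R * (1 + nu) / E."""
    return lam * sigma0 * R * (1.0 + nu) / E
```

The 2D "equivalent" model works on this principle: for each distance to the face, a deconfinement ratio lam(x) selects the fraction of the in-situ stress released on the tunnel boundary, thereby emulating the 3D face effect in a plane-strain computation.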
Giannakas, Theodoros. "Joint modeling and optimization of caching and recommendation systems." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS317.
Caching content closer to the users has been proposed as a win-win scenario, offering better rates to the users while saving costs for the operators. Nonetheless, caching can only be successful if the cached files attract a lot of requests. To this end, we take advantage of the fact that the internet is becoming more entertainment-oriented and propose to couple recommendation systems and caching in order to increase the hit rate. We model a user who requests multiple contents from a network equipped with a cache. We propose a modeling framework for such a user based on Markov chains, departing from the Independent Reference Model (IRM). We delve into different versions of the problem and derive optimal and suboptimal solutions according to the case examined. Finally, we examine a variation of the recommendation-aware caching problem and propose practical algorithms with performance guarantees. For the former, the results indicate high gains for the operators and show that myopic schemes, lacking foresight, are heavily suboptimal; for the latter, we conclude that caching decisions can improve significantly when the underlying recommendations are taken into consideration
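The departure from the IRM can be sketched with a toy Markovian request model (an illustrative 4-content transition matrix, not the thesis's model): the next request depends on the current content, as when recommendations drive browsing, and the long-run hit rate of a static cache is the stationary probability mass it captures.

```python
def stationary(P, iters=500):
    """Stationary distribution of the content Markov chain by power
    iteration (P[i][j] = probability the next request is content j given
    the last request was content i)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def hit_rate(P, cached):
    """Long-run hit rate of a fixed set of cached contents under the model."""
    pi = stationary(P)
    return sum(pi[j] for j in cached)

# Toy 4-content catalogue; each row gives the recommendation-driven
# transition probabilities out of one content (rows sum to 1).
P = [
    [0.1, 0.6, 0.2, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.3, 0.3, 0.2, 0.2],
    [0.2, 0.3, 0.4, 0.1],
]
pi = stationary(P)
top2 = sorted(range(len(P)), key=lambda j: pi[j], reverse=True)[:2]
```

Caching the two contents with the largest stationary mass beats any other static cache of the same size under this model; the thesis goes further by also shaping P itself, i.e., nudging the recommendations toward cached contents.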