Theses on the topic "Indice de sensibilité à la désertification"
Cite a source in APA, MLA, Chicago, Harvard and many other styles
See the top 19 dissertations (degree or doctoral theses) on the topic "Indice de sensibilité à la désertification".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read the abstract online, if it is present in the metadata.
Browse theses from many scientific fields and compile a correct bibliography.
Kone, Alassane. "Modelling and Decision Support for a Desertification Issue Using Cellular Automata Approach". Electronic Thesis or Diss., Guyane, 2023. http://www.theses.fr/2023YANE0001.
Full text
Desertification, a significant challenge to life on Earth, has extensive consequences that degrade human life quality, daily activities and livelihoods. In response, international organizations have implemented actions to slow or stop its progress and reduce its impacts. This thesis focuses on combating desertification by modelling the process of land degradation leading to desertification. Two models are developed: the first combines continuous cellular automata with the MEDALUS assessment, evaluating desertification based on soil, vegetation, climate and management. The second simulates land degradation using a cellular automata approach enriched with anthropogenic factors such as land-use practices, an exploitability factor and ownership, forming the Enhanced Model of Desertification. This model serves as the basis for the DESERTIfication Cellular Automata Software (DESERTICAS), which simulates the spatio-temporal evolution of land degradation. DESERTICAS facilitates scenario exploration by simulating land degradation progression over time and space. The models incorporate dynamic processes into the MEDALUS model, expanding classical cellular automata to continuous states. Among the factors influencing desertification, management emerges as the predominant one, affecting the other factors indirectly: positive management actions can interrupt degradation sources, slowing or halting land degradation. The thesis also applies control theory to the cellular automata model, aiming to influence the predominant factor using genetic algorithms. By integrating land protection actions into desertification simulations, the DESERTICAS software becomes a decision support tool.
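The combination described above can be illustrated generically: the MEDALUS framework conventionally aggregates the four quality indices into a sensitivity score by a geometric mean, and a continuous cellular automaton lets that score evolve over a grid. A minimal sketch with an illustrative neighbourhood rule and made-up values, not the thesis's actual DESERTICAS model:

```python
import numpy as np

def medalus_esai(sqi, cqi, vqi, mqi):
    """Environmental sensitivity index as the geometric mean of the four
    MEDALUS quality indices (conventionally each in [1, 2], 1 = best)."""
    return (sqi * cqi * vqi * mqi) ** 0.25

def ca_step(grid, coupling=0.2):
    """One continuous cellular-automaton step: each cell drifts toward the
    mean sensitivity of its 4-neighbourhood (illustrative update rule)."""
    padded = np.pad(grid, 1, mode="edge")
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1]
             + padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return (1 - coupling) * grid + coupling * neigh

# A 3x3 landscape of ESAI values; degradation diffuses from a hot spot.
grid = medalus_esai(np.full((3, 3), 1.2), 1.3, 1.1, 1.5)
grid[1, 1] = 2.0  # one severely degraded cell
print(ca_step(grid).round(3))
```

The update rule is a stand-in for the thesis's enriched transition function; it only shows how a continuous-state automaton propagates sensitivity between neighbouring cells.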
Beaulieu, Lucie. "Analyse de sensibilité d'un indice de risque de perte de phosphore en zone cultivée". Thesis, Université Laval, 2005. http://www.theses.ulaval.ca/2005/22790/22790.pdf.
Full text
San Emeterio Cabañes, José Luis. "Désertification ou reverdissement ? Etude multiscalaire de l'évolution du couvert végétal en Afrique Sahélienne à partir de données de télédétection". Sorbonne Paris Cité, 2015. http://www.theses.fr/2015USPCC079.
Full text
The Sahel region has become the archetype of desertification and land degradation since the major droughts of the 1970s and 1980s. However, the rainfall recovery since the mid-1990s and the re-greening trend observed in remotely sensed vegetation indexes have challenged the view of an advancing desertification in the Sahel. Nevertheless, the relation between these indexes and land degradation is very complex, and the conclusions drawn are sometimes contradictory. In fact, the high climate variability and the important landscape mutations due to demographic growth make land degradation assessment a difficult task in this region. The strong interdependency between the temporal and spatial scales of land degradation led us to carry out a multi-scalar analysis to understand the actual situation of the Sahel concerning land degradation, and the most effective way to assess this phenomenon at a regional scale. This analysis was done for the entire Sahel region over the period 1982-2011 using the NDVI GIMMS-3g vegetation index and rainfall products. It was later transposed to south-west Niger using the NDVI MODIS index and aerial and satellite photographs of recent decades.
Petitcolin, Marie-Anne. "Vieillissement artériel et sensibilité au calcium de la contraction : couplage entre récepteurs α1-adrénergiques et protéines Gi/o". Nancy 1, 2000. http://www.theses.fr/2000NAN12012.
Testo completoJacques, Julien. "Contributions à l'analyse de sensibilité et à l'analyse discriminante généralisée". Phd thesis, Université Joseph Fourier (Grenoble), 2005. http://tel.archives-ouvertes.fr/tel-00011169.
Full text
Global sensitivity analysis of a mathematical model studies how the model's output variables react to perturbations of its inputs. Variance-based methods quantify the shares of the variance of the model response due to each input variable and each subset of input variables. The first problem addressed is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: uncertainty due to a mutation of the reference model, and uncertainty due to the use of a simplified model in place of the reference model. A second problem related to sensitivity analysis, studied during this thesis, is that of models with correlated inputs. Since the classical sensitivity indices have no meaningful interpretation in the presence of correlated inputs, we propose a multidimensional approach that expresses the sensitivity of the model output to groups of correlated variables. Applications in nuclear engineering illustrate this work.
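The variance-based indices discussed here (and in several of the theses below) are commonly estimated by a pick-freeze Monte Carlo scheme. A generic sketch of the classical first-order Sobol estimator for independent uniform inputs, not any particular thesis's method:

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol indices
    S_i = Var(E[Y|X_i]) / Var(Y) for independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = f(A), f(B)
    f0, var = yA.mean() * yB.mean(), yA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # keep ("freeze") X_i, resample the rest
        S[i] = (yB @ f(ABi) / n - f0) / var
    return S

# Linear test model Y = X1 + 2*X2: the exact indices are 0.2 and 0.8.
print(first_order_sobol(lambda X: X[:, 0] + 2 * X[:, 1], d=2).round(2))
```

For correlated inputs this estimator is exactly what loses its interpretation, which motivates the group-of-variables approach described in the abstract.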
Generalized discriminant analysis consists of classifying the individuals of a test sample into groups, using the information contained in a training sample, when the two samples do not come from the same population. This work extends existing methods from the Gaussian framework to the case of binary data. A public health application illustrates the usefulness of the generalized discrimination models thus defined.
Al-Rajeh, Mohammad Rasoul. "Evaluation de la sensibilité des mesures accélérométriques comme indice de performance locomotrice : évaluation thérapeutique de la pathologie coxarthrosique par arthroplastie et par différentes voies d'abord". Rouen, 2013. https://theses.hal.science/tel-00918921.
Full text
Walking is the most convenient way to travel short distances. Free joint mobility and appropriate muscle force increase walking efficiency. Coxarthrosis is a non-inflammatory degenerative disease of the hip joint which usually appears in late middle or old age. Hip arthrosis is treated mainly to decrease pain, by medication and physical therapy; in cases of advanced coxarthrosis, total hip arthroplasty is used. In this research we evaluated a recent technology, an accelerometer fixed to the sacral vertebra, and compared the results of our apparatus with those of other systems (Vicon and locometer). We also compared locometer results for two surgical approaches, the first the anatomical MI approach and the other traditional, and found that recovery is better with the anatomical approach.
Also, Alio Ramatou. "Influence de la densité mammaire, du traitement hormonal substitutif et de l'indice de masse corporelle sur la sensibilité et la spécificité de la mammographie de dépistage : Programme Québécois de Dépistage du Cancer du Sein (PQDCS) 2000-2005". Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/28624/28624.pdf.
Testo completoChastaing, Gaëlle. "Indices de Sobol généralisés par variables dépendantes". Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM046.
Full text
A mathematical model aims at characterizing a complex system or process that is too expensive to experiment with. However, in such a model, which is often strongly non-linear, input parameters can be affected by large uncertainty, including measurement errors or a lack of information. Global sensitivity analysis is a stochastic approach whose objective is to identify and rank the input variables that drive the uncertainty of the model output. Through this analysis, it is then possible to reduce the model dimension and the variation in the model output. To reach this objective, the Sobol indices are commonly used. Based on the functional ANOVA decomposition of the output, also called the Hoeffding decomposition, they rest on the assumption that the inputs are independent. Our contribution is the extension of Sobol indices to models with dependent inputs. On the one hand, we propose a generalized functional decomposition whose components are subject to specific orthogonality constraints. This decomposition leads to the definition of generalized sensitivity indices able to quantify the dependent inputs' contribution to the model variability. On the other hand, we propose two numerical methods to estimate these indices. The first is well suited to models with independent pairs of dependent input variables; it proceeds by solving linear systems involving suitable projection operators. The second method can be applied to more general models; it relies on the recursive construction of functional systems satisfying the orthogonality properties of the summands of the generalized decomposition. In parallel, we illustrate the two methods on numerical examples to test the efficiency of the techniques.
Causse, Mathieu. "Contributions à l'extension de la méthode des Sparse Grids pour les calculs de fiabilité en modélisation de processus". Toulouse 3, 2010. http://www.theses.fr/2010TOU30336.
Full text
The aim of this thesis is to show the efficiency of the Sparse Grid approximation method applied to high-dimensional real-life problems, for which detecting the main parameters is fundamental. First we introduce the Sparse Grid approximation method and emphasize its adaptive form. Then we demonstrate the method on standard test functions to show the specificities of Sparse Grids in main-parameter detection. Owing to the excellent performance of the method, we apply it to real-life problems and obtain accurate results at a reduced computational cost. The first application is dedicated to a pollutant diffusion problem; the second evaluates the performance of a power network.
Andrianandraina. "Approche d'éco-conception basée sur la combinaison de l'analyse de cycle de vie et de l'analyse de sensibilité : Cas d'application sur le cycle de vie du matériau d'isolation thermique biosourcé, le béton de chanvre". Ecole centrale de Nantes, 2014. http://www.theses.fr/2014ECDN0005.
Full text
The purpose of this PhD thesis is to establish an ecodesign method, based on Life Cycle Assessment, that identifies action levers specific to each economic actor in the life cycle of a product, for improved environmental performance. Life Cycle Assessment was coupled with two methods of sensitivity analysis in five steps: (i) definition of objectives and system; (ii) modelling the calculation of inventory and impact indicators, with different approaches for foreground and background sub-systems; (iii) characterization of parameters using a typology specific to the control possibilities of the considered economic actor; (iv) application of two sensitivity analysis methods (Morris and Sobol); and (v) interpretation of results to identify potentially efficient improvements. The approach was applied to the hemp concrete insulation product, covering agricultural production, industrial transformation of hemp fibers, and use of hemp concrete as a thermal insulator for buildings. The approach provides potential technological scenarios that improve environmental performance for each economic actor in the product's life cycle. Applying the method currently requires additional information, but this effort will probably be paid back in the future by driving more robust choices for a given product.
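The Morris method named in step (iv) screens inputs by averaging elementary effects over random base points. A minimal radial one-at-a-time sketch with a made-up test function, not the thesis's LCA model:

```python
import numpy as np

def morris_mu_star(f, d, r=50, delta=0.1, seed=0):
    """Mean absolute elementary effect mu* for each of d inputs,
    averaged over r random base points in the unit hypercube."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(d)
    for _ in range(r):
        x = rng.random(d) * (1 - delta)   # keep x + delta inside [0, 1]
        y0 = f(x)
        for i in range(d):
            xi = x.copy()
            xi[i] += delta                # perturb one input at a time
            mu[i] += abs(f(xi) - y0) / delta
    return mu / r

# Inputs with large mu* are the screening candidates for deeper analysis.
print(morris_mu_star(lambda x: 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2], d=3).round(2))
```

Morris screening is cheap and is typically used, as in the abstract, to shortlist parameters before the more expensive variance-based Sobol analysis.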
Wu, QiongLi. "Sensitivity Analysis for Functional Structural Plant Modelling". Phd thesis, Ecole Centrale Paris, 2012. http://tel.archives-ouvertes.fr/tel-00719935.
Testo completoKamari, Halaleh. "Qualité prédictive des méta-modèles construits sur des espaces de Hilbert à noyau auto-reproduisant et analyse de sensibilité des modèles complexes". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASE010.
Full text
In this work, the problem of estimating a meta-model of a complex model, denoted m, is considered. The model m depends on d input variables X1, ..., Xd that are independent and have a known law. The meta-model, denoted f∗, approximates the Hoeffding decomposition of m and allows its Sobol indices to be estimated. It belongs to a reproducing kernel Hilbert space (RKHS), denoted H, which is constructed as a direct sum of Hilbert spaces (Durrande et al. (2013)). The estimator of the meta-model, denoted f^, is calculated by minimizing a least-squares criterion penalized by the sum of the Hilbert norm and the empirical L2-norm (Huet and Taupin (2017)). This procedure, called RKHS ridge group sparse, allows one both to select and to estimate the terms of the Hoeffding decomposition, and therefore to select the non-zero Sobol indices and estimate them. It makes it possible to estimate even high-order Sobol indices, a point known to be difficult in practice. This work consists of a theoretical part and a practical part. In the theoretical part, I established upper bounds on the empirical L2 risk and the L2 risk of the estimator f^, that is, upper bounds, in the L2-norm and the empirical L2-norm, on the distance between the model m and its estimator f^ in the RKHS H. In the practical part, I developed an R package, called RKHSMetaMod, that implements the RKHS ridge group sparse procedure and a special case of it called the RKHS group lasso procedure. This package can be applied to a known model that is calculable at all points, or to an unknown regression model. In order to optimize execution time and storage memory, all of the functions of the RKHSMetaMod package, except one function written in R, are written using the C++ libraries GSL and Eigen. These functions are then interfaced with the R environment to provide a user-friendly package.
The performance of the package functions, in terms of the predictive quality of the estimator and the estimation of the Sobol indices, is validated by a simulation study.
Niang, Ibrahima. "Quantification et méthodes statistiques pour le risque de modèle". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1015/document.
Full text
In finance, model risk is the risk of loss resulting from using models. It is a complex risk which covers many different situations, especially estimation risk and the risk of model misspecification. This thesis focuses on the model risk inherent in yield and credit curve construction methods, and on the analysis of the consistency of Sobol indices with respect to stochastic orderings of model parameters. It is divided into three chapters. Chapter 1 focuses on the model risk embedded in yield and credit curve construction methods. We analyse in particular the uncertainty associated with the construction of yield curves or credit curves, and in this context we derive arbitrage-free bounds for the discount factor and the survival probability at the most liquid maturities. In Chapter 2, we quantify the impact of parameter risk through global sensitivity analysis and the theory of stochastic orders. We analyse in particular how Sobol indices are transformed following an increase of parameter uncertainty with respect to the dispersive or excess wealth orders. Chapter 3 focuses on the contrast quantile index. We link the latter with the risk measure CTE, and then analyse under which circumstances an increase of parameter uncertainty, in the sense of the dispersive or excess wealth orders, implies an increase of the contrast quantile index. We finally propose an estimation procedure for this index and prove, under some conditions, that our estimator is consistent and asymptotically normal.
Mtibaa, Mohamed. "Optimisation de couplage Procédé/Propriétés/Fiabilité des Structures en Matériaux Composites Fonctionnels". Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMLH03.
Full text
This research focuses on the challenges and interactions between manufacturing processes (Resin Transfer Molding (RTM) and Compression Resin Transfer Molding (CRTM)), mechanical properties, and the reliability of composite material structures, more specifically functional composites. Several numerical models have been developed to simulate the impregnation of the suspension (resin + particles) through the fibrous medium in the RTM and CRTM processes. These models are validated by comparing their results with experimental, semi-analytical and analytical results from the literature. A parametric study demonstrates the impact of various process parameters on the particle distribution in the final composite, and the injection and compression modes are compared. The results of this part show that the distribution of particles in the final part depends on the initial concentration, the distance travelled and the initial fiber volume fraction, but is independent of the injection and compression parameter values. It is also observed that the CRTM process with imposed-pressure injection and imposed-force compression is the most favorable scenario for producing composite parts. For the purpose of controlling the final particle distribution in composites manufactured by the RTM process, two key steps have been identified. The first is a sensitivity analysis examining three parameters: the temporal evolution of the initially injected particle concentration, the injection pressure field, and the initial fiber porosity. The conclusions indicate a minimal impact of the initial porosity and the injection pressure field, while the evolution of the initial concentration of injected particles has a dominant effect. In a second step, an optimization algorithm is implemented in the numerical model of the RTM process.
It is used to determine the optimal configuration of the evolution of the initially injected particle concentration, in order to bring the particle distribution in the final composite close to the desired profiles. The results obtained with the genetic algorithm provide very satisfactory control of this distribution. To complete this section, a model estimating the mechanical properties of the manufactured part is developed. A positive correlation is found between the particle fraction and certain mechanical properties, namely the elastic moduli E11 and E22 and the shear moduli G12 and G23, while the Poisson's ratio (Nu12) is inversely proportional to the particle fraction; the shear modulus G12 is the most significantly influenced by this fraction. Next, control of the mechanical properties of composite parts manufactured by the CRTM process is targeted and compared to the results of the RTM process. The conclusions reveal that the RTM process offers better control of these properties, whereas the CRTM process considerably improves the mechanical properties of the parts through its compression phase, which increases the fiber volume fraction and consequently enhances these properties. Finally, a static analysis is conducted with a numerical model based on the finite element method (Ansys APDL). This model is combined with those of the CRTM process and the mechanical-property calculation. An optimization algorithm is integrated into our global model to adapt the mechanical properties of the composite part according to the configuration (cantilever or simply supported) and the load distribution; moreover, it minimizes the composite part's weight while ensuring that predetermined mechanical constraints, such as the maximum deformation limit, are respected. The results obtained correspond perfectly to these objectives.
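The positive correlation reported between reinforcement fraction and stiffness is the behaviour captured by the textbook Voigt/Reuss rule-of-mixtures bounds. A sketch with illustrative constituent moduli (glass fiber in epoxy), not the homogenization model actually used in the thesis:

```python
def voigt_E11(vf, Ef, Em):
    """Longitudinal modulus, iso-strain (Voigt) bound:
    E11 = Vf*Ef + (1 - Vf)*Em."""
    return vf * Ef + (1.0 - vf) * Em

def reuss_E22(vf, Ef, Em):
    """Transverse modulus, iso-stress (Reuss) bound:
    1/E22 = Vf/Ef + (1 - Vf)/Em."""
    return 1.0 / (vf / Ef + (1.0 - vf) / Em)

# Glass fibers (~72 GPa) in epoxy (~3.5 GPa): both bounds rise with Vf,
# mirroring the abstract's correlation between fraction and stiffness.
for vf in (0.3, 0.5, 0.7):
    print(vf, round(voigt_E11(vf, 72.0, 3.5), 1), round(reuss_E22(vf, 72.0, 3.5), 1))
```

The Voigt and Reuss expressions bracket any realistic homogenized modulus, which is why increasing the fiber volume fraction in the compression phase necessarily stiffens the part.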
Parveaud, Claude-Eric. "Propriétés radiatives des couronnes de Noyers (Juglans nigra x J. regia) et croissance des pousses annuelles - Influence de la géométrie du feuillage, de la position des pousses et de leur climat radiatif". Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2006. http://tel.archives-ouvertes.fr/tel-00087909.
Testo completoVauchel, Nicolas. "Estimation des indices de Sobol à l'aide d'un métamodèle multi-éléments : application à la dynamique du vol". Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILN008.
Full text
The thesis addresses a concrete issue in aircraft safety. The post-stall flight domain is a complex flight domain where flows around an airfoil may be highly unstable and massively stalled. In this domain, which can be reached on purpose or accidentally, the usual controls are less efficient or completely inefficient, which can endanger the pilot and passengers. The thesis concerns the determination of flight predictions in the post-stall flight domain, their dependence on the selected model structure, and the uncertainties of the experimental data on which the model relies. The dynamics of the motion of the aircraft are governed by a dynamic system of non-linear ordinary differential equations. In these equations, the effects of the fluid on the aircraft are represented by the global aerodynamic coefficients, the dimensionless forces and moments applied by the fluid on the aircraft. These coefficients depend non-linearly on a large number of variables, among them the geometry of the aircraft, its velocity and rotation rates relative to the Earth, and characteristics of the surrounding flow. A representation model with a selected structure is determined for every aerodynamic coefficient in order to represent these complex dependences. This model relies on experimental data obtained on a scale model, free-flight data on a real aircraft being too expensive and too risky to obtain in the post-stall domain. Another way of obtaining data would be computational simulation; nevertheless, the complex and unsteady flows around the 3D geometry of the aircraft make simulation too expensive with current resources, even though some recent studies have begun to explore this direction of research. The models selected in the thesis are built on experimental data only. In the dynamic system, the global aerodynamic coefficients are evaluated by interpolation in these databases according to the selected model structure.
The choice of a simplified model structure makes the model deficient; moreover, as these models rely on experimental data, they are uncertain. The gaps and uncertainties of the model have an impact on the flight predictions, and the initial objective of the thesis is to study this impact. During the thesis, new scientific objectives appeared, going beyond the scope of flight dynamics. First, a new multi-element surrogate model for uncertainty quantification, based on modern machine learning methods, is developed. Multi-element surrogate models were introduced to address the loss of accuracy of polynomial chaos models in the presence of discontinuities. Then, a formula linking the Sobol sensitivity indices to the coefficients of a multi-element surrogate model is derived. These results are used in flight dynamics to address the issue raised in the initial objective of the thesis. The numerous bifurcations of the dynamic system can manifest as discontinuities and/or irregularities in the evolution of the state variables with respect to the uncertain parameters; the sensitivity analysis and uncertainty quantification methods developed in the thesis are therefore good candidates for analysing the system.
Passelergue, Jean-Christophe. "Interactions des dispositifs FACTS dans les grands réseaux électriques". Phd thesis, Grenoble INPG, 1998. http://www.theses.fr/1998INPG0148.
Full text
Power flow increases and environmental constraints in power systems have led to the insertion of FACTS (Flexible AC Transmission Systems) devices in order to improve power system operation. These devices are able to carry out functions such as voltage support, power transfer control and the increase of power transfer capability. Moreover, due to their fast response time, they are an efficient tool for damping low-frequency oscillations. This application of FACTS devices is important as power systems become more and more interconnected and thereby more sensitive to inter-area electromechanical oscillations. However, the use of several FACTS devices in a power system requires a careful study of the possible controller interaction phenomena, between FACTS devices and with other system elements. This thesis deals with the analysis and resolution of dynamic phenomena due to interaction problems resulting from the insertion of one or several shunt FACTS devices. Sensitivity and influence indices are defined from the controllability and observability notions, respectively, in order to predict the importance of interaction phenomena due to FACTS device insertion and to identify the influence areas of a FACTS device. These indices are applied to a two-area four-machine test system and to a simplified real 29-machine power system. Two coordination methods (the "minimax" method and the decentralized linear quadratic method) are used to coordinate FACTS devices with each other, and a FACTS device with a PSS (Power System Stabilizer), in the two-area four-machine test system.
Gohore, Bi Goue D. "Évaluation et contrôle de l'irrégularité de la prise médicamenteuse : proposition et développement de stratégies rationnelles fondées sur une démarche de modélisations pharmacocinétiques et pharmacodynamiques". Thèse, 2010. http://hdl.handle.net/1866/4535.
Full text
The heterogeneity of PK and/or PD profiles in patients undergoing the same treatment regimen should be avoided during treatment or clinical trials. Two traditional approaches are used to achieve this purpose. One builds on the interactive synergy between the health caregiver and the patient, encouraging the patient to become an active participant in his or her own compliance. The other is to develop drugs or drug dosing regimens that forgive poor compliance. The main objective of this thesis was to develop new methodologies for assessing and monitoring the impact of irregular drug intake on the therapeutic outcome. Specifically, the first phase of this research was to develop algorithms for evaluating the efficacy of a treatment by extending classical breakpoint estimation methods to the situation of variable drug disposition. This method introduces the "efficiency" of a PK profile by using the efficacy function as a weight in the area under the curve (AUC) formula. It gives a more powerful PK/PD link and reveals, through some examples, interesting issues about the uniqueness of therapeutic outcome indices and antibiotic resistance problems. The second part of this thesis was to determine optimal sampling times by accounting for the inter-variability in drug disposition in collectively treated pigs. For this, we developed an advanced mathematical model able to generate different PK profiles for various feeding strategies. Three algorithms were designed to identify the optimal sampling times with the criterion of minimizing the PK inter-variability. The median-based method yielded suitable sampling periods in terms of convenience for farm staff and animal welfare. The last part of our research was to establish a rational way to rank drugs in terms of their "forgiveness", based on their PK/PD properties. For this, a global sensitivity analysis (GSA) was performed to identify the parameters most sensitive to dose omissions.
We then proposed a comparative drug forgiveness index to rank drugs in terms of their tolerability to non-compliance, with application to four calcium channel blockers. The classification of these molecules in terms of drug forgiveness is in concordance with what has been reported in experimental studies. The strategies developed in this Ph.D. project, essentially based on the analysis of the complex relationships between drug intake history and pharmacokinetic and pharmacodynamic properties, can assess and regulate the impact of non-compliance with acceptable uncertainty. In general, the algorithms that implement these approaches will undoubtedly be efficient tools for patient monitoring during dosing regimens. Moreover, they will contribute to controlling the harmful impact of non-compliance by supporting the development of new drugs able to tolerate sporadic dose omission.
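The "efficacy-weighted AUC" idea described above can be sketched generically: weight the concentration-time curve by an efficacy function before integrating, so that a missed dose shrinks the index. The PK model, Emax weighting and all parameter values below are illustrative assumptions, not the thesis's actual formulation:

```python
import numpy as np

def conc_profile(t, doses, ka=1.5, ke=0.3):
    """Illustrative one-compartment oral PK: superpose an
    absorption/elimination term for each (dose_time, amount) pair."""
    c = np.zeros_like(t)
    for t0, d in doses:
        dt = np.clip(t - t0, 0.0, None)
        c += d * (np.exp(-ke * dt) - np.exp(-ka * dt)) * (t >= t0)
    return c

def emax_weighted_auc(t, c, ec50=0.5):
    """Trapezoidal AUC of the Emax-weighted profile H(C) = C / (EC50 + C)."""
    h = c / (ec50 + c)
    return float(np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(t)))

t = np.linspace(0.0, 48.0, 2000)
full = [(0.0, 1.0), (12.0, 1.0), (24.0, 1.0), (36.0, 1.0)]
missed = [(0.0, 1.0), (24.0, 1.0), (36.0, 1.0)]      # 12 h dose omitted
print(emax_weighted_auc(t, conc_profile(t, full)),
      emax_weighted_auc(t, conc_profile(t, missed)))
```

Because the Emax weight saturates at high concentrations, the weighted index penalizes troughs caused by omitted doses more than it rewards peaks, which is the intuition behind ranking drugs by forgiveness.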