Dissertations / Theses on the topic 'Non-parametric and semiparametric model'
Yan, Boping. "Double kernel non-parametric estimation in semiparametric econometric models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ35817.pdf.
Mays, James Edward. "Model robust regression: combining parametric, nonparametric, and semiparametric methods." Diss., Virginia Polytechnic Institute and State University, 1995. http://hdl.handle.net/10919/49937.
Ph. D.
Zhang, Tianyang. "Partly parametric generalized additive model." Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/913.
Song, Rui, Shikai Luo, Donglin Zeng, Hao Helen Zhang, Wenbin Lu, and Zhiguo Li. "Semiparametric single-index model for estimating optimal individualized treatment strategy." INST MATHEMATICAL STATISTICS, 2017. http://hdl.handle.net/10150/625783.
Starnes, Brett Alden. "Asymptotic Results for Model Robust Regression." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/30244.
Ph. D.
Abdel-Salam, Abdel-Salam Gomaa. "Profile Monitoring with Fixed and Random Effects using Nonparametric and Semiparametric Methods." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/29387.
Ph. D.
Margevicius, Seunghee P. "Modeling of High-Dimensional Clinical Longitudinal Oxygenation Data from Retinopathy of Prematurity." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1523022165691473.
Bijou, Mohammed. "Qualité de l'éducation, taille des classes et mixité sociale : Un réexamen à partir des méthodes à variables instrumentales et semi-paramétriques sur données multiniveaux - Cas du Maroc -." Electronic Thesis or Diss., Toulon, 2021. http://www.theses.fr/2021TOUL2004.
The objective of this thesis is to examine the quality of the Moroccan education system using data from the 2011 TIMSS and PIRLS programmes. The thesis is structured around three chapters. The first chapter examines the influence of individual student and school characteristics on school performance, as well as the important role of the school environment (effects of class size and social composition). In the second chapter, we seek to estimate the optimal class size that ensures widespread success for all students at two levels, namely the fourth year of primary school and the second year of lower secondary school (collège). The third chapter studies the relationship between the social and economic composition of the school and academic performance, while demonstrating the role of social mix in student success. To study these relationships, we mobilise different econometric approaches, applying a multilevel model with correction for endogeneity (chapter 1), a hierarchical semi-parametric model (chapter 2) and a contextual hierarchical semi-parametric model (chapter 3). The results show that academic performance is determined by several factors, some intrinsic to the student and others contextual. Indeed, a smaller class size and a school with a mixed social composition are the two essential elements of a favourable environment and assured learning for all students. According to our results, governments should give priority to reducing class size by limiting it to a maximum of 27 students. In addition, the school map should be made more flexible in order to promote social mixing at school. The results obtained allow a better understanding of the Moroccan school system in its qualitative aspects and justify relevant educational policies to improve the quality of the Moroccan education system.
Avramidis, Panagiotis. "Estimation of the volatility function : non-parametric and semiparametric approaches." Thesis, London School of Economics and Political Science (University of London), 2004. http://etheses.lse.ac.uk/1793/.
Tachet des Combes, Rémi. "Non-parametric model calibration in finance." PhD thesis, Ecole Centrale Paris, 2011. http://tel.archives-ouvertes.fr/tel-00658766.
Martinez-Sanchis, Elena. "Essays on identification and estimation of structural parametric and semiparametric models in microeconomics." Thesis, University College London (University of London), 2005. http://discovery.ucl.ac.uk/1444804/.
McNeney, William Bradley. "Asymptotic efficiency in semiparametric models with non-i.i.d. data." Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/9604.
Bartcus, Marius. "Bayesian non-parametric parsimonious mixtures for model-based clustering." Thesis, Toulon, 2015. http://www.theses.fr/2015TOUL0010/document.
This thesis focuses on statistical learning and multi-dimensional data analysis, in particular on unsupervised learning of generative models for model-based clustering. We study Gaussian mixture models, both in the context of maximum likelihood estimation via the EM algorithm and in the Bayesian context of maximum a posteriori estimation via Markov Chain Monte Carlo (MCMC) sampling techniques. We mainly consider parsimonious mixture models, which are based on a spectral decomposition of the covariance matrix and provide a flexible framework particularly suited to the analysis of high-dimensional data. We then investigate non-parametric Bayesian mixtures, which are based on general flexible processes such as the Dirichlet process and the Chinese Restaurant Process. This non-parametric formulation is relevant both for learning the model and for dealing with the issue of model selection. We propose new Bayesian non-parametric parsimonious mixtures and derive an MCMC sampling technique in which the mixture model and the number of mixture components are learned simultaneously from the data; the selection of the model structure is performed using Bayes factors. By their non-parametric and sparse formulation, these models are useful for the analysis of large data sets when the number of classes is undetermined and increases with the data, and when the dimension is high. The models are validated on simulated data and standard real data sets, and then applied to a difficult real problem: the automatic structuring of complex bioacoustic data from whale song signals. Finally, we open Markovian perspectives via hierarchical Dirichlet process hidden Markov models.
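To make the Chinese Restaurant Process mentioned in this abstract concrete, here is a minimal Python sketch of its table-assignment rule (a generic textbook construction with an assumed concentration parameter alpha, not the sampler developed in the thesis):

```python
import numpy as np

def crp_assignments(n, alpha, seed=None):
    """Draw table assignments for n customers from a Chinese Restaurant
    Process: customer i joins an existing table with probability
    proportional to its current size, or opens a new one with
    probability proportional to alpha."""
    rng = np.random.default_rng(seed)
    assignments, counts = [0], [1]        # first customer opens table 0
    for _ in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):          # a new table is opened
            counts.append(1)
        else:
            counts[table] += 1
        assignments.append(table)
    return assignments

# The number of occupied tables (mixture components) grows with the data,
# roughly as alpha * log(n), instead of being fixed in advance.
print(len(set(crp_assignments(1000, alpha=1.0, seed=42))))
```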
PINTO, MARIANA DA PAIXAO. "A MIXED PARAMETRIC AND NON PARAMETRIC INTERNAL MODEL TO UNDERWRITING RISK FOR LIFE INSURANCE." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=31951@1.
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE EXCELENCIA ACADEMICA
Following the bankruptcies that occurred in the insurance sector in recent decades, a movement arose to develop mathematical models capable of assisting in the management of risk, the so-called internal models. In Brazil, SUSEP, following the worldwide trend, required companies interested in operating in the country to use an internal model for underwriting risk, so developing an internal model has become vital for insurance companies there. The model proposed in this work, illustrated for the underwriting risk of life insurance, is based on Markov chains and the Central Limit Theorem for the parametric part, and on Monte Carlo simulation for the non-parametric part. Its structure accounts for the dependence between the policyholder and dependents. An application to masked real data was carried out to analyse the model: the minimum required capital calculated using the hybrid method was compared with the value obtained using only the parametric method, and a sensitivity analysis of the model was then performed.
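As a toy illustration of the two ingredients this hybrid model combines (a CLT-based parametric part and a Monte Carlo non-parametric part), the following Python sketch computes a 99.5% aggregate-claims figure both ways for a hypothetical portfolio; all numbers are invented and the thesis's Markov-chain structure is omitted:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical portfolio: per-policy death probability and sum insured.
q = rng.uniform(0.001, 0.01, size=1_000)       # mortality rates (assumed)
s = rng.uniform(10_000, 100_000, size=1_000)   # sums insured (assumed)

# Parametric part (CLT): aggregate claims approximately Normal(mu, sigma^2).
mu = np.sum(q * s)
sigma = np.sqrt(np.sum(q * (1 - q) * s**2))
capital_clt = norm.ppf(0.995, loc=mu, scale=sigma)

# Non-parametric part (Monte Carlo): simulate Bernoulli deaths directly.
deaths = (rng.random((10_000, q.size)) < q).astype(float)
sims = deaths @ s
capital_mc = np.quantile(sims, 0.995)

print(f"99.5% aggregate claims: CLT {capital_clt:,.0f} vs MC {capital_mc:,.0f}")
```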
Hoare, Armando. "Parametric, non-parametric and statistical modeling of stony coral reef data." [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002470.
Full textJavier, Hidalgo Moreno Francisco. "Estimation of semiparametric econometric time-series models with non-linear or heteroscedastic disturbances." Thesis, London School of Economics and Political Science (University of London), 1990. http://etheses.lse.ac.uk/2581/.
Full textHeinz, Daniel. "Hyper Markov Non-Parametric Processes for Mixture Modeling and Model Selection." Research Showcase @ CMU, 2010. http://repository.cmu.edu/dissertations/11.
Full textSchildcrout, Jonathan Scott. "Marginal modeling of longitudinal, binary response data : semiparametric and parametric estimation with long response series and an efficient outcome dependent sampling design /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/9540.
Full textJoshi, Niranjan Bhaskar. "Non-parametric probability density function estimation for medical images." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:ebc6af07-770b-4fee-9dc9-5ebbe452a0c1.
Full textRen, Yan. "A Non-parametric Bayesian Method for Hierarchical Clustering of Longitudinal Data." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1337085531.
Full textMONTEIRO, ANDRE MONTEIRO DALMEIDA. "NON-PARAMETRIC ESTIMATIONS OF INTEREST RATE CURVES : MODEL SELECTION CRITERION: MODEL SELECTION CRITERIONPERFORMANCE DETERMINANT FACTORS AND BID-ASK S." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2002. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=2684@1.
This thesis investigates interest rate curve estimation from a non-parametric point of view. The text is divided into two parts. The first investigates the criterion used to select the best-performing method for interpolating the Brazilian interest rate curve in a given sample. A selection criterion is proposed that measures out-of-sample performance by combining leave-k-out cross-validation resampling strategies applied to the whole sample of curves, where 1 ≤ k ≤ K and K is a function of the number of contracts observed on each curve; particularities of the problem substantially reduce the required computational effort, making the criterion feasible. The sample is daily, from January 1997 to February 2001. The proposed criterion selected the natural cubic spline, used as a perfect-fitting estimation method, as the best-performing method. Given trade-rate precision, this spline proved unbiased; quantitative analysis of its performance, however, identified heteroskedasticity in the out-of-sample errors. From a specification of the conditional variance of these errors and some assumptions, a security-interval scheme is proposed for interest rates estimated by the perfect-fitting natural cubic spline. A backtest suggests that the proposed scheme is consistent, accommodating the assumptions and approximations involved. The second part estimates the US interest rate curve built from dollar-Libor interest rate swap contracts using the Support Vector Machine (SVM), a method from Statistical Learning Theory. SVM research has achieved important theoretical advances, although implementations on real regression problems remain scarce. The SVM has attractive features for yield curve modelling: it can introduce into the estimation itself a priori information about the shape of the curve and about aspects of rate formation and the liquidity of each of the contracts from which the curve is built, the latter quantified by each contract's bid-ask spread (BAS). The basic SVM formulation is modified to incorporate different BAS values without losing its properties, and special attention is given to extracting a priori information from the typical shape of the curve for selecting the SVM parameters. The sample is daily, from March 1997 to April 2001. The out-of-sample performance of several SVM specifications was compared with that of other estimation methods; the SVM best controlled the trade-off between bias and variance of the out-of-sample errors.
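The perfect-fitting natural cubic spline selected in the first part, together with the leave-k-out idea behind the selection criterion, can be sketched in a few lines of Python (hypothetical maturities and rates; leave-one-out shown for simplicity):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical curve: observed maturities (years) and rates (% p.a.).
maturities = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0])
rates = np.array([18.9, 19.4, 20.1, 21.0, 21.4, 21.8])

# Perfect-fitting natural cubic spline (zero second derivative at the ends).
curve = CubicSpline(maturities, rates, bc_type='natural')
print(float(curve(1.5)))                     # interpolated 1.5y rate

# Leave-one-out flavour of the selection criterion: refit without one
# contract and measure the error at the omitted maturity.
errors = []
for k in range(1, len(maturities) - 1):      # interior contracts only
    keep = np.arange(len(maturities)) != k
    refit = CubicSpline(maturities[keep], rates[keep], bc_type='natural')
    errors.append(float(rates[k] - refit(maturities[k])))
print(np.round(errors, 4))
```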
Remella, Siva Rama Karthik. "Steady State Mathematical Modeling of Non-Conventional Loop Heat Pipes: A Parametric and a Design Approach." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1353154991.
Brandt, James M. "A parametric cost model for estimating operating and support costs of US Navy (Non-Nuclear) surface ships." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA363539.
"June 1999". Thesis advisor(s): Timothy P. Anderson, Samuel E. Buttrey. Includes bibliographical references (p. 171). Also available online.
Rydén, Patrik. "Estimation of the reliability of systems described by the Daniels Load-Sharing Model." Licentiate thesis, Umeå universitet, Matematisk statistik, 1999. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-46724.
Full textLanhede, Daniel. "Non-parametric Statistical Process Control : Evaluation and Implementation of Methods for Statistical Process Control at GE Healthcare, Umeå." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-104512.
Statistical process control (SPC) is a collection of tools for detecting changes in the distribution of the outcomes of a process, and can be a valuable resource for maintaining high quality in a manufacturing process. This report is based on work evaluating and implementing methods for SPC in an assembly process for chromatography instruments at GE Healthcare, Umeå. Eight control charts, three for phase I analysis and five for phase II analysis, are studied. The usefulness of the charts is judged by how easy they are to interpret and by their ability to detect distributional changes, the latter evaluated by simulation. The outcome of the project is the adoption of the RS/P method of Capizzi et al. (2013) for phase I analysis: among the evaluated methods (and simulation scenarios), the RS/P chart has the highest overall probability of detecting a variety of distributional changes, and its graphical display is easy to interpret, which facilitates the analysis. For phase II analysis, two control charts have been implemented: one based on the Mann-Whitney U statistic, proposed by Chakraborti et al. (2008), and one based on Mood's test statistic for dispersion, proposed by Ghute et al. (2014). The strength of these charts lies mainly in their simple interpretation. For faster identification of process changes, the chart based on the Cramér-von Mises statistic, proposed by Ross et al. (2012), can be used; being based on individual observations rather than subgroups, it updates more frequently, though this comes at the cost of more false alarms and a chart that is considerably harder for the SPC practitioner to interpret.
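A minimal Python sketch of the phase II idea described above, comparing each incoming subgroup against an in-control reference sample with a Mann-Whitney test (a simplified stand-in for the chart of Chakraborti et al., with assumed data and a false-alarm rate chosen to mimic 3-sigma limits):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
reference = rng.normal(10.0, 1.0, size=100)   # in-control phase I sample (assumed)

def mw_signal(subgroup, reference, alpha=0.0027):
    """Flag a subgroup whose location differs from the reference sample,
    using a two-sided Mann-Whitney test with a false-alarm rate roughly
    matching the usual 3-sigma limits."""
    _, p = mannwhitneyu(subgroup, reference, alternative='two-sided')
    return p < alpha

print(mw_signal(rng.normal(10.0, 1.0, 5), reference))   # in control -> False (usually)
print(mw_signal(rng.normal(12.0, 1.0, 5), reference))   # shifted    -> True  (usually)
```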
Deng, Chunqin. "Statistical Approach to Detect and Estimate Hormesis." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1004369636.
Dellagi, Hatem. "Estimations paramétrique et non paramétrique des données manquantes : application à l'agro-climatologie." Paris 6, 1994. http://www.theses.fr/1994PA066546.
Pavão, André Luis. "Modelos de duração aplicados à sobrevivência das empresas paulistas entre 2003 e 2007." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/12/12140/tde-24072013-154206/.
This thesis presents the main factors that determined the survival or bankruptcy of enterprises located in the State of São Paulo from 2003 to 2007. The models used in this work were made possible by a partnership with SEBRAE, the Brazilian small business support service in the State of São Paulo, which provided the database for this research; its final version comprised 662 enterprises and 33 variables, collected in a survey conducted by SEBRAE with each enterprise and made available for research of this kind for the first time. The analysis relied on econometric duration models, which identified the most important factors for enterprise survival, distinguishing two groups: enterprises that survive and grow, and those that fail. Three models were used, parametric, non-parametric and proportional hazards, all presenting similar results. The proportional hazards approach was applied by economic sector and enterprise size. For micro-sized businesses, the entrepreneur's age and the resources applied to employee qualification were important in reducing the risk of failure over time, whereas for small enterprises, variables such as innovation and the preparation of a business plan were the most important. In the commerce sector, enterprises that kept attention on financial results (cash flow) presented a lower risk of failure; in the service sector, variables such as the entrepreneur's age, investment in employee qualification and enterprise size were the most important in explaining differences in the risk of failure across enterprises. The thesis also presents the hazard of failure, which indicates the likelihood of an enterprise leaving its business activity over time: the parametric model using the Weibull distribution indicates that this risk grows over the first five years. This result must, however, be evaluated carefully, since longer-term data would be necessary to confirm it.
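The shape of the failure risk over time reported here can be illustrated with a Weibull fit; the Python sketch below uses simulated, uncensored lifetimes (the thesis's data are not reproduced, and a real duration analysis must handle censoring):

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical firm lifetimes in years (uncensored, for simplicity).
lifetimes = weibull_min.rvs(1.4, scale=6.0, size=500, random_state=2)

shape, loc, scale = weibull_min.fit(lifetimes, floc=0)   # location fixed at 0
t = np.linspace(0.5, 5.0, 10)
hazard = weibull_min.pdf(t, shape, loc, scale) / weibull_min.sf(t, shape, loc, scale)

# shape > 1 means the failure hazard rises with firm age, qualitatively
# like the growing risk over the first five years reported in the abstract.
print(round(shape, 2), np.round(hazard, 3))
```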
Gebremeskel, Haftu Gebrehiwot. "Implementing hierarchical bayesian model to fertility data: the case of Ethiopia." Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424458.
Background: Ethiopia is a nation divided into 9 administrative regions (defined on an ethnic basis) and two cities, and is often cited as an example of high fertility and rapid population growth. Despite government efforts, fertility and population growth remain high, especially at the regional level, so the study of fertility in Ethiopia and its regions, characterised by high variability, is of vital importance. A simple way of capturing the different characteristics of the fertility distribution is to build a suitable model by specifying different mathematical functions. In this sense, it is worth focusing on age-specific fertility rates, which show a precise shape common to all populations; many countries, however, show a "symmetrisation" that many models fail to capture adequately. Parametric models have therefore been used to capture the shape of age-specific rates, but their use is still very limited in Africa, and in Ethiopia in particular. Objective: This work uses a new model for fertility in Ethiopia with four specific aims: (1) to examine the shape of Ethiopia's age-specific fertility rates at national and regional level; (2) to propose a model that best captures the various shapes of the age-specific rates at both national and regional level, comparing its performance with that of other existing models; (3) to fit the proposed fertility function through a hierarchical Bayesian model and show that this model is flexible enough to estimate fertility in the individual regions, where estimates can be imprecise because of small sample sizes; (4) to compare the estimates obtained with those provided by non-hierarchical methods (maximum likelihood or simple Bayesian estimation). Methodology: We propose a 4-parameter model, the Skew Normal, for age-specific fertility rates, and show that it is flexible enough to capture their shapes adequately at both national and regional level. To assess its performance, a preliminary analysis compared it with ten other parametric and non-parametric models used in the demographic literature: the quadratic spline, the cubic spline, the Coale-Trussell, Beta, Gamma, Hadwiger, polynomial, Gompertz and Peristera-Kostaki models, and the Adjustment Error Model. The models were estimated using non-linear least squares (nls), with the Akaike Information Criterion used to determine performance. Since estimation for the individual regions can be difficult where sample sizes vary widely, we propose hierarchical procedures, which yield more reliable estimates than non-hierarchical models (complete pooling or no pooling) for the regional analysis. We formulate a hierarchical Bayesian model, obtaining the posterior distribution of the parameters of the regional age-specific fertility rates and the associated uncertainty; non-hierarchical methods (simple Bayesian and maximum likelihood) are also used for comparison. Gibbs sampling and Metropolis-Hastings algorithms are used to sample from the posterior distribution of each parameter, and data augmentation is also used to obtain the estimates. The robustness of the results is checked through a sensitivity analysis, and convergence diagnostics for the algorithms are reported in the text. In all cases, non-informative prior distributions were used. Results: The preliminary analysis shows that the Skew Normal model has the lowest AIC in the Addis Ababa, Dire Dawa, Harari, Affar, Gambela and Benshangul-Gumuz regions, and also for the national estimates. In the other regions (Tigray, Oromiya, Amhara, Somali and SNNP) the Skew Normal is not the best model, but it still fits the data well; overall, it is the best in 6 of the 11 regions and for the age-specific rates of the whole country. Conclusions: The Skew Normal model is therefore globally the best. Building on this initial result, the hierarchical Bayesian, simple Bayesian and maximum likelihood models were constructed; the comparison of these three approaches shows that the hierarchical model provides more precise estimates than the others.
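A minimal Python sketch of the four-parameter Skew Normal fertility schedule at the core of this thesis, fitted by non-linear least squares to hypothetical age-specific fertility rates (the thesis itself uses nls and hierarchical Bayesian estimation on Ethiopian data):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import skewnorm

# Hypothetical age-specific fertility rates at ages 15, 20, ..., 45.
ages = np.array([15, 20, 25, 30, 35, 40, 45], dtype=float)
asfr = np.array([0.05, 0.21, 0.26, 0.22, 0.15, 0.07, 0.02])

def sn_fertility(x, c, xi, omega, alpha):
    """Four-parameter Skew Normal schedule: level c, location xi,
    scale omega, shape alpha (alpha > 0 gives a right skew)."""
    return c * skewnorm.pdf(x, alpha, loc=xi, scale=omega)

p0 = [1.5, 25.0, 8.0, 2.0]                     # rough starting values
params, _ = curve_fit(sn_fertility, ages, asfr, p0=p0, maxfev=10_000)
print(np.round(params, 3))                     # c, xi, omega, alpha
```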
Le, Corff Sylvain. "Estimations pour les modèles de Markov cachés et approximations particulaires : Application à la cartographie et à la localisation simultanées." Phd thesis, Telecom ParisTech, 2012. http://tel.archives-ouvertes.fr/tel-00773405.
ABUABIAH, MOHAMMAD IBRAHIM FAREED. "A set-membership approach to direct data-driven control design." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2737672.
Full textBelhedi, Amira. "Modélisation du bruit et étalonnage de la mesure de profondeur des caméras Temps-de-Vol." Thesis, Clermont-Ferrand 1, 2013. http://www.theses.fr/2013CLF1MM08/document.
3D cameras open new possibilities in fields such as 3D reconstruction, augmented reality and video-surveillance, since they provide depth information at high frame rates. However, they have limitations that affect the accuracy of their measurements. For TOF cameras in particular, two types of error can be distinguished: the stochastic camera noise and the depth distortion. In the state of the art on TOF cameras, the noise is not well studied, and the depth distortion models are difficult to use and do not guarantee the accuracy required by some applications. The objective of this thesis is to study, model and propose a calibration method for these two errors of TOF cameras that is accurate and easy to set up. For both the noise and the depth distortion, two solutions are proposed, each addressing a different concern: the first aims at an accurate model, while the second promotes simplicity of set-up. Thus, for the noise, whereas the majority of the proposed models are based only on the amplitude information, we propose a first model that also integrates the pixel position in the image; for better accuracy, we propose a second model in which the amplitude is replaced by the depth and the integration time. Regarding the depth distortion, we propose a first solution based on a non-parametric model that guarantees better accuracy, and then use prior knowledge of the planar geometry of the observed scene to provide a solution that is easier to use than the previous one and than those in the literature.
GITTO, SIMONE. "The measurement of productivity and efficiency: theory and applications." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2009. http://hdl.handle.net/2108/835.
This thesis presents several methods for measuring productivity and efficiency. Recent methodological improvements are discussed and three applications are reported. In particular, I present an application of Törnqvist index numbers to measure the total factor productivity of Alitalia, the main Italian airline; a study of the Italian airport sector using bootstrapped DEA; and an investigation of the efficiency of public Italian hospitals using a hyperbolic alpha-quantile estimator.
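The Törnqvist index used in the first application has a simple closed form: a weighted geometric mean of quantity relatives with average value shares as weights. A Python sketch with invented two-period airline data (not Alitalia's figures):

```python
import numpy as np

def tornqvist_index(q0, q1, shares0, shares1):
    """Törnqvist quantity index between periods 0 and 1: a weighted
    geometric mean of quantity relatives, with weights equal to the
    average value shares of the two periods."""
    w = 0.5 * (np.asarray(shares0) + np.asarray(shares1))
    return float(np.exp(np.sum(w * np.log(np.asarray(q1) / np.asarray(q0)))))

# Hypothetical airline data: outputs and inputs for two years.
output_idx = tornqvist_index([100, 50], [108, 55], [0.7, 0.3], [0.72, 0.28])
input_idx = tornqvist_index([80, 40, 30], [82, 44, 30],
                            [0.5, 0.3, 0.2], [0.48, 0.32, 0.2])
print(round(output_idx / input_idx, 4))  # TFP growth factor
```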
Eamrurksiri, Araya. "Applying Machine Learning to LTE/5G Performance Trend Analysis." Thesis, Linköpings universitet, Statistik och maskininlärning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139126.
Balzotti, Christopher Stephen. "Multidisciplinary Assessment and Documentation of Past and Present Human Impacts on the Neotropical Forests of Petén, Guatemala." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2129.
Full textChikhaoui, Khaoula. "Conception robuste de structures périodiques à non-linéarités fonctionnelles." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCD029/document.
Dynamic analysis of large-scale structures including several uncertain parameters and localized or distributed nonlinearities may be computationally unaffordable. To overcome this issue, approximation models can be developed to reproduce the structural response accurately at low computational cost. The purpose of the first part of this thesis is to develop numerical models that are robust against structural modifications (localized nonlinearities, parametric uncertainties or perturbations) and reduce the size of the initial problem. These models are created, following direct condensation and component mode synthesis respectively, by enriching truncated modal reduction bases and Craig-Bampton transformations with static residual vectors accounting for the structural modifications. To propagate uncertainties through these first-level and second-level reduced-order models, we focus particularly on the generalized polynomial chaos method; combining these methods yields first-level and second-level metamodels respectively. The two proposed metamodels are compared with other metamodels based on the polynomial chaos method and the Latin hypercube method applied to reduced and full models. The proposed metamodels approximate the structural behaviour at low computational cost without significant loss of accuracy. The second part of the thesis is devoted to the dynamic analysis of nonlinear periodic structures in the presence of imperfections (parametric perturbations or uncertainties), for which deterministic and stochastic analyses, respectively, are carried out. For both configurations, a generic discrete analytical model is proposed: the multiple scales method and perturbation theory are applied to solve the equation of motion, and the resulting solution is projected onto standing wave modes. The proposed model leads to a set of coupled complex algebraic equations that depend on the number and positions of the imperfections in the structure. Uncertainty propagation through the proposed model is finally carried out using the Latin hypercube method and the generalized polynomial chaos expansion. The robustness of the collective dynamics against imperfections is studied through statistical analysis of the dispersion of the frequency responses and of the basins of attraction in the multistability domain. Numerical results show that the presence of imperfections in a periodic structure strengthens its nonlinearity, expands its multistability domain and generates a multiplicity of multimodal branches.
Channarond, Antoine. "Recherche de structure dans un graphe aléatoire : modèles à espace latent." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112338/document.
This thesis addresses the clustering of the nodes of a graph in the framework of random models with latent variables. To each node i is allocated an unobserved (latent) variable Zi, and the probability of nodes i and j being connected depends conditionally on Zi and Zj. Unlike in the Erdős-Rényi model, connections are not independent and identically distributed; the latent variables rule the connection distribution of the nodes. These models are thus heterogeneous, and their structure is fully described by the latent variables and their distribution, which we therefore aim to infer from the graph, the only observed data. In both original works of this thesis, we propose consistent inference methods with a computational cost at most linear in the number of nodes or edges, so that large graphs can be processed in reasonable time. Both are based on a study of the distribution of the degrees, normalized in a way convenient for the model. The first work deals with the Stochastic Blockmodel. We show the consistency of an unsupervised classification algorithm using concentration inequalities, and deduce from it a parametric estimation method, a model selection method for the number of latent classes, and a clustering test (testing whether there is one cluster or more), all of which are proved to be consistent. In the second work, the latent variables are positions in ℝ^d with density f, and the connection probability depends on the distance between the node positions. The clusters are defined as connected components of a level set of f, and the goal is to estimate the number of such clusters from the observed graph alone. We estimate the density at the latent positions of the nodes from their degrees, which establishes a link between clusters and connected components of subgraphs of the observed graph obtained by removing low-degree nodes. In particular, we derive an estimator of the number of clusters and show its consistency in a certain sense.
Nacanabo, Amade. "Impact des chocs climatiques sur la sécurité alimentaire dans les pays sahéliens : approches macroéconomiques et microéconomiques." Electronic Thesis or Diss., Toulon, 2021. http://www.theses.fr/2021TOUL2007.
Often used metaphorically to refer to the southern fringes of the Sahara, the Sahel occupies a geographical position that makes it vulnerable to climate change. Its agriculture is largely rain-fed and therefore dependent on climatic conditions, so if food security is to be achieved in the Sahel, climate change must be taken into account. Combining empirical and theoretical work, this thesis aims to contribute to a better understanding of the impact of climate change on food security in the Sahel at the microeconomic and macroeconomic levels. The first chapter examines the food security situation in the Sahel at the macroeconomic level, after analysing its demographic dynamism. The results of this chapter show that the Sahel has not yet begun its demographic transition: the rate of population growth is high compared with the average for sub-Saharan Africa. Undernourishment is declining but remains prevalent in the region, and reducing it necessarily involves agricultural production, which depends on the vagaries of the climate. The second chapter therefore looks at the effects of climate change on the yields of certain crops (millet, sorghum and maize) in the Sahel; the results indicate that climate change has an overall negative impact on agricultural yields. This macroeconomic analysis is then complemented by two chapters that focus, at the microeconomic level, on the behaviour of farmers in the Sahel. The third chapter analyses the impact of climatic shocks, as measured by farmers' perceptions, on the inefficiency of agricultural plots, and shows that climatic shocks increase this inefficiency. Through lower yields and plot inefficiency, climate change may affect the poverty and food vulnerability of Burkinabé farming households. To this end, the fourth chapter identifies the individual and contextual determinants of poverty and food vulnerability among farming households in Burkina Faso. The results show that, in addition to the individual characteristics of farm households, such as their size or the head of household's level of education, the climatic context in which they live helps to explain their poverty and food vulnerability.
Iuga, Relu Adrian. "Modélisation et analyse statistique de la formation des prix à travers les échelles, Market impact." Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1090/document.
The development of organized electronic markets puts constant pressure on academic research in finance. A central issue is market impact, i.e. the impact on the price of a transaction involving a large number of shares over a short period of time. Monitoring and controlling market impact is of great interest to practitioners, and its modeling has thus become a central topic of quantitative finance research. Historically, stochastic calculus gradually became the standard tool in finance, under the assumption that the price follows a diffusive dynamic. This assumption is not appropriate at the level of price formation, however, i.e. when looking at the fine scales of market participants, and new mathematical techniques such as point processes are needed: at very short time scales (milliseconds) the price (last trade, mid-price) appears as events on a discrete grid, the order book, and Brownian motion becomes rather a macroscopic description of the complex price formation process. In the first chapter, we review the properties of electronic markets, recall the limits of diffusive models and introduce Hawkes processes; in particular, we survey market impact research and present the advances of this thesis. In the second part, we introduce a new model for market impact in continuous time, living on a discrete space, based on Hawkes processes. We show that this model takes the market microstructure into account and is able to reproduce recent empirical results such as the concavity of the temporary impact. In the third chapter, we investigate the impact of large orders on the price formation process at the intraday scale and at a larger scale (several days after the execution of a meta-order), and use our model to discuss stylized facts discovered in the database. In the fourth part, we focus on non-parametric estimation for univariate Hawkes processes. Our method relies on the link between the auto-covariance function and the kernel of the process; in particular, we study the performance of the estimator under squared error loss over Sobolev spaces and over a certain class containing "very" smooth functions.
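Hawkes processes, central to this thesis, can be simulated with Ogata's thinning algorithm; here is a minimal univariate Python sketch with an exponential kernel and assumed parameters (the thesis's market impact model is richer than this):

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, seed=None):
    """Ogata thinning for a univariate Hawkes process with exponential
    kernel: lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)).
    Stationarity requires alpha / beta < 1."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < horizon:
        # Upper bound on the intensity after time t (the intensity decays
        # between events, so lambda(t) + alpha dominates it).
        lam_bar = mu + alpha * sum(np.exp(-beta * (t - s)) for s in events) + alpha
        t += rng.exponential(1.0 / lam_bar)           # candidate point
        lam_t = mu + alpha * sum(np.exp(-beta * (t - s)) for s in events)
        if t < horizon and rng.random() <= lam_t / lam_bar:
            events.append(t)                          # accept the candidate
    return np.array(events)

ts = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=200.0, seed=3)
print(len(ts), "events; theoretical mean rate:", 0.5 / (1 - 0.8 / 1.2))
```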
Kamari, Halaleh. "Qualité prédictive des méta-modèles construits sur des espaces de Hilbert à noyau auto-reproduisant et analyse de sensibilité des modèles complexes." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASE010.
In this work, the problem of estimating a meta-model of a complex model, denoted m, is considered. The model m depends on d input variables X1, ..., Xd that are independent and have a known law. The meta-model, denoted f∗, approximates the Hoeffding decomposition of m and allows its Sobol indices to be estimated. It belongs to a reproducing kernel Hilbert space (RKHS), denoted H, which is constructed as a direct sum of Hilbert spaces (Durrande et al. (2013)). The estimator of the meta-model, denoted f̂, is calculated by minimizing a least-squares criterion penalized by the sum of the Hilbert norm and the empirical L2-norm (Huet and Taupin (2017)). This procedure, called RKHS ridge group sparse, both selects and estimates the terms in the Hoeffding decomposition, and therefore selects the Sobol indices that are non-zero and estimates them; it makes it possible to estimate even high-order Sobol indices, a point known to be difficult in practice. This work consists of a theoretical part and a practical part. In the theoretical part, I established upper bounds on the empirical L2 risk and the L2 risk of the estimator f̂, that is, upper bounds with respect to the L2-norm and the empirical L2-norm for the distance between the model m and its estimator f̂ in the RKHS H. In the practical part, I developed an R package, called RKHSMetaMod, that implements the RKHS ridge group sparse procedure and a special case of it called the RKHS group lasso procedure. This package can be applied to a known model that is calculable at all points or to an unknown regression model. In order to optimize execution time and storage memory, all of the functions of the RKHSMetaMod package except one written in R are implemented using the C++ libraries GSL and Eigen, and are then interfaced with the R environment to provide a user-friendly package. The performance of the package's functions, in terms of the predictive quality of the estimator and the estimation of the Sobol indices, is validated by a simulation study.
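Sobol indices, which the RKHS meta-model is designed to estimate, can be approximated for a known model by the standard pick-freeze Monte Carlo scheme; a Python sketch on a toy function (this is the generic estimator, not the RKHSMetaMod procedure):

```python
import numpy as np

def first_order_sobol(model, d, n=50_000, seed=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices,
    S_j = Var(E[Y|X_j]) / Var(Y), for d independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA = model(A)
    out = []
    for j in range(d):
        C = B.copy()
        C[:, j] = A[:, j]            # freeze X_j, resample everything else
        yC = model(C)
        out.append((np.mean(yA * yC) - np.mean(yA) * np.mean(yC)) / np.var(yA))
    return np.array(out)

# Toy test function with known index ordering: X1 matters most, X3 barely.
g = lambda X: np.sin(2*np.pi*X[:, 0]) + 0.5*np.sin(2*np.pi*X[:, 1]) + 0.1*X[:, 2]
print(np.round(first_order_sobol(g, d=3, seed=4), 3))
```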
Magnant, Clément. "Approches bayésiennes pour le pistage radar de cibles de surface potentiellement manoeuvrantes." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0136/document.
In the context of ground or maritime surveillance by airborne radar, one of the main objectives is to detect and track a wide variety of targets over time. These treatments are generally based on Bayesian filtering to estimate recursively the kinematic parameters (position, velocity and acceleration) of the targets. They rely on the state-space representation and, more particularly, on prior modeling of the target motion (uniform motion, uniformly accelerated motion, rotational movement, etc.). If maneuvering targets are tracked, several motion models, each with a predefined dynamic, are typically combined in a multiple-model structure. Although these approaches are relevant, improvements can be made at several levels, including the way the models are selected and defined a priori. In this framework, several issues must be addressed. 1/ When using a multiple-model structure, two or three models are generally considered; this choice is made at the algorithm design stage according to system knowledge and user expertise, but to our knowledge there are no tools or rules to define the types of motion and their associated parameters. 2/ It is preferable that the choice of the motion model(s) be consistent with the type of target to be tracked. 3/ When a motion model is used, its parameters are fixed a priori, but these values are not necessarily appropriate in all phases of the movement; one of the major challenges is how to define the covariance matrix of the model noise and to model its evolution. The work presented in this thesis consists of algorithmic solutions to the previous problems in order to improve the estimation of target trajectories. First, we establish a dissimilarity measure based on the Jeffreys divergence between probability densities associated with two different state models. It is applied to the comparison of motion models and then used to compare sets of several state models; this study then provides a method for the a priori selection of the models constituting multiple-model algorithms. Next, we present Bayesian non-parametric (BNP) models using the Dirichlet process to estimate the model noise statistics. This approach has the advantage of representing multimodal noise without specifying a priori the number of modes and their features. Two cases are treated: in the first, the precision matrix of the model noise is estimated for a single motion model without any prior assumption on its structure; in the second, we take advantage of the structural forms of the precision matrices associated with motion models to estimate only a small number of hyperparameters. For both approaches, the joint estimation of the kinematic parameters of the target and the precision matrix of the model noise is carried out by particle filtering, and the contributions include the calculation of the optimal importance distribution in each case. Finally, we take advantage of joint tracking and classification (JTC) methods to carry out simultaneously the classification of the target and the inference of its parameters. In this case, each target class is associated with a set of evolution models. To achieve the classification, we use the target position measurements and target extent measurements corresponding to the projection of the target length on the radar-target line of sight. Note that this approach is applied both in a single-target tracking context and in a multiple-target environment.
Cardozo, Sandra Vergara. "Função da probabilidade da seleção do recurso (RSPF) na seleção de habitat usando modelos de escolha discreta." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-11032009-143806/.
In ecology, the behavior of animals is often studied to better understand their preferences for different types of habitat and food. The present work is concerned with this topic and is divided into three chapters. The first concerns the estimation of a resource selection probability function (RSPF) compared with a discrete choice model (DCM), using chi-squared to obtain estimates; the best estimates were obtained by the DCM method. Nevertheless, the animals' selection is not based on a single choice, and with the RSPF the maximum likelihood estimates used with logistic regression still did not reach the objectives, since the animals have more than one choice. R and Minitab software and the FORTRAN programming language were used for the computations in this chapter. The second chapter develops further the likelihood presented in the first chapter: a new likelihood for an RSPF is presented which takes into account the units used and not used, and parametric and non-parametric bootstrapping are employed to study the bias and variance of the parameter estimators, using a FORTRAN program for the calculations. In the third chapter, the new likelihood presented in chapter 2, combined with a discrete choice model, is used to resolve part of the problem presented in the first chapter: a nested structure is proposed for modelling selection by 28 spotted owls (Strix occidentalis), together with a generalized nested logit model using random utility maximization and a random RSPF. Numerical optimization methods and the SAS system were employed to estimate the nested structural parameters.
Wu, Seung Kook. "Adaptive traffic control effect on arterial travel time charateristics." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31839.
Committee Chair: Hunter, Michael; Committee Member: Guensler, Randall; Committee Member: Leonard, John; Committee Member: Rodgers, Michael; Committee Member: Roshan J. Vengazhiyil. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Sjöwall, Fredrik. "Alternative Methods for Value-at-Risk Estimation : A Study from a Regulatory Perspective Focused on the Swedish Market." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-146217.
The importance of sound financial risk management has been increasingly emphasized in recent years, particularly since the financial crisis of 2007-08. The Basel Committee sets international standards and regulations for banks and financial institutions, and for market risk in particular it prescribes the internal application of the Value-at-Risk measure. However, the most established non-parametric Value-at-Risk model, historical simulation, has been criticized for some of its unrealistic assumptions. This thesis investigates alternative approaches to non-parametric Value-at-Risk estimation by reviewing and comparing the performance of three weighting methods for historical simulation: an exponentially decaying time-weighting technique, a volatility-updating method, and a more general weighting approach that allows the central moments of a return distribution to be specified. The models are evaluated on real financial data from a performance-based perspective, in terms of accuracy and capital efficiency, but also with respect to their suitability under the existing regulatory framework, with particular focus on the Swedish market. The empirical study shows that the performance of historical simulation improves significantly, from both performance perspectives, with the introduction of a weighting method. Moreover, the results mainly indicate that the volatility-updating model with a 500-day observation window is the most useful weighting method in all respects considered. The conclusions of this thesis contribute substantially both to the existing research on Value-at-Risk and to the quality of banks' and financial institutions' internal market risk management.
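The exponentially decaying time-weighting technique evaluated here (in the spirit of Boudoukh, Richardson and Whitelaw) modifies historical simulation only through the observation weights; a Python sketch with simulated returns and assumed parameters:

```python
import numpy as np

def var_time_weighted(returns, p=0.99, lam=0.98):
    """Historical-simulation VaR with exponentially decaying observation
    weights: recent losses count more. `returns` is ordered oldest to
    newest; the p-level VaR is returned as a positive loss number."""
    returns = np.asarray(returns, dtype=float)
    n = returns.size
    w = lam ** np.arange(n - 1, -1, -1)      # newest observation gets lam^0
    w /= w.sum()
    order = np.argsort(returns)              # sort from worst return upward
    cum = np.cumsum(w[order])
    k = np.searchsorted(cum, 1.0 - p)        # first return with cum weight >= 1-p
    return -returns[order][k]

rng = np.random.default_rng(5)
rets = rng.standard_t(df=5, size=500) * 0.01   # hypothetical daily returns
print(round(var_time_weighted(rets, p=0.99, lam=0.98), 4))
```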
Salloum, Zahraa. "Maximum de vraisemblance empirique pour la détection de changements dans un modèle avec un nombre faible ou très grand de variables." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1008/document.
In this PhD thesis, we propose a nonparametric method based on the empirical likelihood for detecting changes in the parameters of nonlinear regression models and in the coefficients of linear regression models, when the number of model variables may increase with the sample size. Firstly, we test the null hypothesis of no change against the alternative of one change in the regression parameters. Under the null hypothesis, the consistency and the convergence rate of the regression parameter estimators are proved, and the asymptotic distribution of the test statistic is obtained, which allows the asymptotic critical value to be found; on the other hand, we prove that the proposed test statistic has asymptotic power equal to 1. The epidemic model, a particular case of a model with two change-points under the alternative hypothesis, is also studied. Afterwards, we use the empirical likelihood method to construct confidence regions for the difference between the parameters of a two-phase nonlinear model with random design, and show that the empirical likelihood ratio has an asymptotic χ2 distribution. The empirical likelihood method is also used to construct confidence regions for the difference between the parameters of a two-phase nonlinear model with response variables missing at random (MAR). In order to construct the confidence regions of the parameter in question, we propose three empirical likelihood statistics: an empirical likelihood based on complete-case data, a weighted empirical likelihood, and an empirical likelihood with imputed values; we prove that all three empirical likelihood ratios have asymptotic χ2 distributions. Another aim of this thesis is to test for a change in the coefficients of linear regression models in the high-dimensional case, which amounts to testing the null hypothesis of no change against the alternative of one change in the regression coefficients. Based on the theoretical asymptotic behaviour of the empirical likelihood ratio statistic, we propose, for a deterministic design, a simpler test statistic that is easier to use in practice. The asymptotic normality of the proposed test statistic under the null hypothesis is proved, a result that differs from the χ2 law obtained for a model with a fixed number of variables; under the alternative hypothesis, the test statistic diverges.
Koch, Erwan. "Outils et modèles pour l'étude de quelques risques spatiaux et en réseaux : application aux extrêmes climatiques et à la contagion en finance." Thesis, Lyon 1, 2014. http://www.theses.fr/2014LYO10138/document.
This thesis aims at developing tools and models relevant for the study of some spatial risks and risks in networks, and is divided into five chapters. The first is a general introduction containing the state of the art related to each study as well as the main results. Chapter 2 develops a new multi-site precipitation generator: it is crucial to have models able to produce statistically realistic precipitation series. Whereas previously introduced models in the literature deal with daily precipitation, we develop an hourly model. It involves only one equation and thus introduces dependence between occurrence and intensity, two processes that the aforementioned literature assumes to be independent. Our model contains a common factor taking large-scale atmospheric conditions into account and a multivariate autoregressive contagion term accounting for the local propagation of rainfall. Despite its relative simplicity, the model shows an impressive ability to reproduce real intensities, lengths of dry periods and the spatial dependence structure. In Chapter 3, we propose an estimation method for max-stable processes based on simulated likelihood techniques. Max-stable processes are ideally suited to the statistical modeling of spatial extremes, but their inference is difficult: the multivariate density function is not available, so standard likelihood-based estimation methods cannot be applied. Under appropriate assumptions, our estimator is efficient as both the temporal dimension and the number of simulation draws tend to infinity. This simulation-based approach can be used for many classes of max-stable processes and can give better results than composite-likelihood methods, especially when only a few temporal observations are available and the spatial dependence is high.
Hadrich, Ben Arab Atizez. "Étude des fonctions B-splines pour la fusion d'images segmentées par approche bayésienne." Thesis, Littoral, 2015. http://www.theses.fr/2015DUNK0385/document.
In this thesis we treat the problem of the nonparametric estimation of probability distributions. At first, we assume that the unknown density f can be approximated by a mixture on a basis of quadratic B-splines. We then propose a new estimate of the unknown density function f based on quadratic B-splines, with two estimation methods: the first based on maximum likelihood and the second on the Bayesian MAP estimation method. We then generalize the estimation study to the mixture framework and propose a new estimator for mixtures of unknown distributions based on the two adapted estimation methods. Secondly, we treat the problem of semi-supervised statistical image segmentation based on the hidden Markov model and B-spline functions, and show the contribution of hybridizing the hidden Markov model with B-spline functions in unsupervised Bayesian statistical image segmentation. Thirdly, we present a fusion approach based on the maximum likelihood method, through the nonparametric estimation of probabilities for each pixel of the image, and apply this approach to multi-spectral and multi-temporal images segmented by our nonparametric, unsupervised algorithm.
Caron, Emmanuel. "Comportement des estimateurs des moindres carrés du modèle linéaire dans un contexte dépendant : Étude asymptotique, implémentation, exemples." Thesis, Ecole centrale de Nantes, 2019. http://www.theses.fr/2019ECDN0036.
In this thesis, we consider the usual linear regression model in the case where the error process is assumed to be strictly stationary. We use a result of Hannan (1973), who proved a Central Limit Theorem for the usual least squares estimator under general conditions on the design and on the error process. For any design and error process satisfying Hannan's conditions, we define an estimator of the asymptotic covariance matrix of the least squares estimator and prove its consistency under very mild conditions. We then show how to modify the usual tests on the parameters of the linear model in this dependent context, and propose various methods for estimating the covariance matrix in order to correct the type I error rate of the tests. The R package slm that we have developed contains all of these statistical methods. The procedures are evaluated through different sets of simulations, and two particular example datasets are studied. Finally, in the last chapter, we propose a non-parametric penalized method to estimate the regression function in the case where the errors are Gaussian and correlated.
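The kind of correction at stake can be illustrated with a generic kernel (Newey-West) long-run variance estimate for the OLS slope under autocorrelated errors; the Python sketch below is a textbook stand-in, not the estimator implemented in the slm package:

```python
import numpy as np

def nw_slope_se(x, y, lags=10):
    """OLS slope of a simple linear model with a Newey-West (Bartlett
    kernel) standard error, robust to autocorrelation in stationary
    errors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    xc = x - x.mean()
    beta = np.sum(xc * (y - y.mean())) / np.sum(xc**2)
    u = (y - y.mean()) - beta * xc            # residuals
    v = xc * u                                # moment contributions
    lrv = np.mean(v * v)                      # lag-0 term
    for k in range(1, lags + 1):
        gk = np.mean(v[k:] * v[:-k])
        lrv += 2 * (1 - k / (lags + 1)) * gk  # Bartlett weights
    se = np.sqrt(lrv / n) / np.mean(xc**2)
    return beta, se

rng = np.random.default_rng(6)
e = np.zeros(500)
for t in range(1, 500):                       # AR(1) errors, phi = 0.6
    e[t] = 0.6 * e[t - 1] + rng.normal()
x = rng.normal(size=500)
beta, se = nw_slope_se(x, 2.0 + 1.5 * x + e)  # true slope 1.5 (assumed)
print(round(beta, 3), round(se, 3))
```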
Rodrigues, Christelle. "Optimisation des posologies des antiépileptiques chez l’enfant à partir de données pharmacocinétiques pédiatriques et adultes Population pharmacokinetics of oxcarbazepine and its monohydroxy derivative in epileptic children A population pharmacokinetic model taking into account protein binding for the sustained-release granule formulation of valproic acid in children with epilepsy Conditional non-parametric bootstrap for non-linear mixed effect models Pharmacokinetics evaluation of vigabatrin dose for the treatment of refractory focal seizures in children using adult and pediatric data Pharmacokinetic extrapolation from adult to children in case of nonlinear elimination: a case study." Thesis, Sorbonne Paris Cité, 2018. https://wo.app.u-paris.fr/cgi-bin/WebObjects/TheseWeb.woa/wa/show?t=2398&f=17336.
Full text
Children differ greatly from adults, not only in size but also physiologically. Indeed, developmental changes due to maturation occur during growth. These processes are nonlinear and can cause pharmacokinetic and pharmacodynamic differences. Thus, contrary to common practice, it is not appropriate to scale pediatric doses directly and linearly from adult doses. The study of pharmacokinetics in children is therefore essential to determine pediatric dosages. The most commonly used methodology is population analysis through nonlinear mixed effects models. This method allows the analysis of sparse and unbalanced data; in return, the scarcity of individual data has to be offset by the inclusion of more individuals. This can be a problem when the indication is a rare disease, as are the epileptic syndromes of childhood. In this case, extrapolating adult pharmacokinetic models to the pediatric population may be of interest. The objective of this thesis was to evaluate the dosage recommendations of antiepileptic drugs when pediatric pharmacokinetic data are sufficient to be modeled and, when they are not, to extrapolate adult information adequately. First, a parent-metabolite model of oxcarbazepine and its monohydroxy derivative (MHD) was developed in epileptic children aged 2 to 12 years. This model showed that younger children, as well as patients co-treated with enzyme inducers, require higher doses. A model was also developed for epileptic children aged 1 to 18 years treated with a sustained-release microsphere formulation of valproic acid. This model took into account the flip-flop kinetics associated with the formulation and the nonlinear relationship between clearance and dose caused by saturable protein binding. Again, the need for higher doses in younger children was highlighted. Then, an adult model of vigabatrin was extrapolated to children to determine which doses achieve exposures similar to those of adults in resistant focal onset seizures. From the results obtained, which agree with the conclusions of clinical trials, we were able to propose an ideal maintenance dose for this indication. Finally, we studied the relevance of extrapolation by theoretical allometry in a context of nonlinearity, using the example of stiripentol. We concluded that this method seems to provide good predictions from the age of 8, unlike molecules with linear elimination, for which it seems adequate from the age of 5. In conclusion, we were able to test and compare different approaches to help determine dosing recommendations in children. The study of pediatric pharmacokinetics in dedicated trials remains essential for the proper use of drugs.
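For context, the theoretical allometry mentioned in the last part scales clearance with body weight raised to the 3/4 power, CL_child = CL_adult × (W_child / 70)^0.75, and a maintenance dose targeting the same steady-state exposure (AUC = dose / CL) then scales proportionally to clearance. The sketch below illustrates this standard computation only; the adult clearance, dose and weights are made-up numbers, and the maturation effects that the thesis shows to matter in younger children are deliberately omitted.

```python
def allometric_clearance(cl_adult_l_h: float, weight_kg: float,
                         adult_weight_kg: float = 70.0) -> float:
    """Theoretical allometric scaling of clearance (fixed exponent 0.75)."""
    return cl_adult_l_h * (weight_kg / adult_weight_kg) ** 0.75

# Dose matching the adult steady-state exposure, since AUC = dose / CL:
cl_adult = 4.0        # L/h, assumed adult clearance
dose_adult = 400.0    # mg/day, assumed adult maintenance dose
for w in (10, 20, 40):  # child body weights in kg
    cl_child = allometric_clearance(cl_adult, w)
    dose_child = dose_adult * cl_child / cl_adult
    print(f"{w} kg: CL = {cl_child:.2f} L/h, dose = {dose_child:.0f} mg/day")
```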
Aabid, Sami El. "Méthode basée modèle pour le diagnostic de l'état de santé d'une pile à combustible PEMFC en vue de sa maintenance." Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0011.
Full text
Nowadays, fuel cells (FCs) are considered an attractive technological solution for energy storage. In addition to their high-efficiency conversion to electrical energy and their high energy density, FCs are a potential candidate to reduce the environmental impact of aircraft. The present PhD thesis falls within this context and contributes in particular to the development of methodologies dedicated to monitoring the state of health (SoH) of Proton Exchange Membrane Fuel Cells (PEMFCs). FCs are subject to ageing and to various operating conditions leading to failures or abnormal operation modes. Hence, there is a need to develop tools dedicated to diagnosis and fuel cell ageing monitoring. One reliable approach to FC SoH monitoring is based on the parametric identification of a model from experimental data. Widely used for FC characterization, the polarization curve (V-I) and Electrochemical Impedance Spectroscopy (EIS), coupled with a model describing the phenomena involved, can provide further information about the FC SoH. Two models were thus developed: a quasi-static model whose parameters are identified from the polarization curve, and a dynamic one identified from EIS data. The need to develop a dynamic model whose formulation may vary over time, without a priori assumptions, is also discussed in this thesis. The original approach of this thesis is to consider both characterizations jointly throughout the proposed analysis process. This global strategy ensures the separation of the different fuel cell phenomena in the quasi-static and dynamic domains by introducing into each parametrization process (one for the quasi-static model and one for the dynamic model) parameters and/or laws stemming from the other part. The global process, from a priori knowledge to the identification of the model parameters, is developed over the chapters of this thesis. In addition to reproducing the experimental data well and separating the losses in both the static and dynamic domains, the method makes it possible to monitor the FC SoH via the evolution of the model parameters. Taking into account the coupling between the quasi-static and dynamic models revealed the notion of a "residual impedance". This impedance makes it possible to address a recurrent experimental observation made by daily users of EIS: there is a not clearly explained difference between the low-frequency resistance from EIS and the slope of the polarization curve at a given current density, although the two quantities should theoretically tend towards the same value. In other words, part of the impedance spectrum is not clearly and easily exploitable to characterize fuel cell performance. This topic has been discussed in the literature in recent years, and an attempt to explain the physico-chemical phenomena related to this impedance is also one of the objectives of this thesis. From an experimental point of view, before applying this method to ageing monitoring, it was necessary to "calibrate" it given its relative complexity. To this end, experiments with a single cell equipped with different sets of internal components (different membrane thicknesses and different platinum loadings in the Active Layer (AL)) were carried out and analysed with the proposed method. The method was then evaluated in the framework of three ageing campaigns carried out with three 1 kW PEM stacks.
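A common entry point to the kind of quasi-static parametrisation described here is to fit the polarization curve with activation (Tafel), ohmic and concentration loss terms by nonlinear least squares. The sketch below uses the textbook form V(j) = E_oc − A·ln(j) − R·j − m·exp(n·j), which is not necessarily the model developed in the thesis; the synthetic data and initial guesses are assumptions of the illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def polarization(j, e_oc, a, r, m, n):
    """Quasi-static cell voltage: Tafel, ohmic and concentration losses.
    The exchange current density is absorbed into e_oc, since E0 and
    A*ln(j0) are not separately identifiable from a single V-I curve."""
    return e_oc - a * np.log(j) - r * j - m * np.exp(n * j)

# Synthetic "measured" polarization curve (assumed parameters + noise).
rng = np.random.default_rng(3)
j = np.linspace(0.05, 1.2, 40)  # current density, A/cm^2
v = polarization(j, 0.95, 0.06, 0.15, 2e-4, 4.0) + rng.normal(0, 2e-3, j.size)

popt, _ = curve_fit(polarization, j, v,
                    p0=[1.0, 0.05, 0.1, 1e-4, 3.0], maxfev=10000)
print(dict(zip(["E_oc", "A", "R", "m", "n"], np.round(popt, 4))))
```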