Dissertations on the topic "Kernel testing"
Format your source in APA, MLA, Chicago, Harvard, and other styles
Consult the top 25 dissertations for your research on the topic "Kernel testing".
Next to each work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these are available in the metadata.
Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.
Lee, Kevin Sung-ho. "Kernel-adaptor interface testing of Project Timeliner." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/49939.
Full text available
Ozier-Lafontaine, Anthony. "Kernel-based testing and their application to single-cell data." Electronic Thesis or Diss., Ecole centrale de Nantes, 2023. http://www.theses.fr/2023ECDN0025.
Full text available
Single-cell technologies generate data at the single-cell level. These datasets are composed of hundreds to thousands of observations (i.e. cells) and tens of thousands of variables (i.e. genes). New methodological challenges have arisen to fully exploit the potential of these complex data. A major statistical challenge is to distinguish biological information from technical noise in order to compare conditions or tissues. This thesis explores the application of kernel testing to single-cell datasets in order to detect and describe potential differences between compared conditions. To overcome the limitations of existing kernel two-sample tests, we propose a kernel test inspired by the Hotelling-Lawley test that can be applied to any experimental design. We implemented these tests in an R and Python package called ktest, which is their first user-oriented implementation. We demonstrate the performance of kernel testing on simulated datasets and on various experimental single-cell datasets. The geometrical interpretation of these methods allows the observations driving a detected difference to be identified. Finally, we propose an efficient Nyström-based implementation of these kernel tests as well as a range of diagnostic and interpretation tools.
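The kernel two-sample testing mentioned in this abstract can be illustrated with a small, self-contained sketch. The code below is not the ktest package and does not reproduce the Hotelling-Lawley-type statistic of the thesis; it is a minimal permutation test based on the unbiased MMD² estimator with a Gaussian kernel, the classical baseline such work builds on. The kernel bandwidth and the permutation count are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2_unbiased(X, Y, sigma=1.0):
    # Unbiased (U-statistic) estimate of the squared Maximum Mean Discrepancy.
    Kxx = rbf_kernel(X, X, sigma); np.fill_diagonal(Kxx, 0.0)
    Kyy = rbf_kernel(Y, Y, sigma); np.fill_diagonal(Kyy, 0.0)
    n, m = len(X), len(Y)
    return (Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1))
            - 2 * rbf_kernel(X, Y, sigma).mean())

def mmd_permutation_test(X, Y, sigma=1.0, n_perm=1000, rng=None):
    # Two-sample test: p-value obtained by permuting the group labels.
    rng = np.random.default_rng(rng)
    observed = mmd2_unbiased(X, Y, sigma)
    Z, n = np.vstack([X, Y]), len(X)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(Z))
        count += mmd2_unbiased(Z[idx[:n]], Z[idx[n:]], sigma) >= observed
    return observed, (count + 1) / (n_perm + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(100, 5))   # condition A
    Y = rng.normal(0.3, 1.0, size=(100, 5))   # condition B, shifted mean
    mmd2, pval = mmd_permutation_test(X, Y, sigma=1.0, n_perm=500, rng=1)
    print(f"MMD^2 = {mmd2:.4f}, permutation p-value = {pval:.3f}")
```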
Kotlyarova, Yulia. "Kernel estimators : testing and bandwidth selection in models of unknown smoothness." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=85179.
Full text available
We present theoretical results on the asymptotic distribution of the estimators under various smoothness assumptions and derive the limiting joint distributions for estimators with different combinations of bandwidths and kernel functions. Using these nontrivial joint distributions, we suggest a new way of improving the accuracy and robustness of the estimators by considering a linear combination of estimators with different smoothing parameters. The weights in the combination minimize an estimate of the mean squared error. Monte Carlo simulations confirm the suitability of this method for both smooth and non-smooth models.
For the original and smoothed maximum score estimators, a formal procedure is introduced to test for equivalence of the maximum likelihood estimators and these semiparametric estimators, which converge to the true value at slower rates. The test allows one to identify heteroskedastic misspecifications in the logit/probit models. The method has been applied to analyze the decision of married women to join the labour force.
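The combination idea in the abstract above (weighting kernel estimators built with different bandwidths so that an estimated error criterion is minimised) can be sketched for the simplest case of a one-dimensional Gaussian kernel density estimate. In this sketch the least-squares cross-validation criterion stands in for the mean-squared-error estimate of the thesis, and the function names and bandwidth grid are hypothetical; it illustrates the principle, not the thesis's estimator.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def combined_kde_weights(x, bandwidths):
    """Weights for a convex combination of Gaussian KDEs with different bandwidths.

    The weights minimize the least-squares cross-validation criterion
    CV(w) = w' A w - 2 w' b, an (up to a constant) estimate of the integrated
    squared error of the combined estimator.
    """
    x = np.asarray(x, dtype=float)
    n, J = len(x), len(bandwidths)
    diff = x[:, None] - x[None, :]                      # pairwise differences

    # A[j, k] = integral of fhat_j * fhat_k (Gaussian convolution identity)
    A = np.empty((J, J))
    for j, hj in enumerate(bandwidths):
        for k, hk in enumerate(bandwidths):
            s = np.sqrt(hj**2 + hk**2)
            A[j, k] = norm.pdf(diff, scale=s).sum() / n**2

    # b[j] = mean leave-one-out density under bandwidth h_j
    b = np.empty(J)
    for j, hj in enumerate(bandwidths):
        K = norm.pdf(diff, scale=hj)
        np.fill_diagonal(K, 0.0)
        b[j] = K.sum(axis=1).mean() / (n - 1)

    # Minimize the quadratic criterion over the probability simplex.
    cv = lambda w: w @ A @ w - 2 * w @ b
    res = minimize(cv, np.full(J, 1.0 / J), method="SLSQP",
                   bounds=[(0.0, 1.0)] * J,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x

def combined_kde(x_eval, x, bandwidths, weights):
    # Evaluate the weighted combination of Gaussian KDEs at the points x_eval.
    x_eval, x = np.asarray(x_eval, dtype=float), np.asarray(x, dtype=float)
    parts = [norm.pdf(x_eval[:, None] - x[None, :], scale=h).mean(axis=1)
             for h in bandwidths]
    return np.column_stack(parts) @ weights
```

For example, `combined_kde_weights(sample, [0.1, 0.3, 0.9])` returns three weights that can then be passed to `combined_kde`.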
Liero, Hannelore. "Testing the Hazard Rate, Part I." Universität Potsdam, 2003. http://opus.kobv.de/ubp/volltexte/2011/5151/.
Full text available
Friedrichs, Stefanie [Verfasser], Heike [Akademischer Betreuer] Bickeböller, Thomas [Gutachter] Kneib, and Tim [Gutachter] Beißbarth. "Kernel-Based Pathway Approaches for Testing and Selection / Stefanie Friedrichs ; Gutachter: Thomas Kneib, Tim Beißbarth ; Betreuer: Heike Bickeböller." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2017. http://d-nb.info/114137952X/34.
Full text available
Li, Yinglei. "Genetic Association Testing of Copy Number Variation." UKnowledge, 2014. http://uknowledge.uky.edu/statistics_etds/8.
Full text available
Akcin, Haci Mustafa. "Nonparametric Inferences for the Hazard Function with Right Truncation." Digital Archive @ GSU, 2013. http://digitalarchive.gsu.edu/math_diss/12.
Full text available
Li, Na. "MMD and Ward criterion in a RKHS : application to Kernel based hierarchical agglomerative clustering." Thesis, Troyes, 2015. http://www.theses.fr/2015TROY0033/document.
Full text available
Clustering, as a useful tool for unsupervised classification, is the task of grouping objects according to some measured or perceived characteristics, and it has achieved great success in exploring the hidden structure of unlabeled data sets. Kernel-based clustering algorithms have gained great prominence: they provide competitive performance compared with conventional methods owing to their ability to transform nonlinear problems into linear ones in a higher-dimensional feature space. In this work, we propose a Kernel-based Hierarchical Agglomerative Clustering algorithm (KHAC) using Ward's criterion. Our method builds on a recently proposed criterion called Maximum Mean Discrepancy (MMD). This criterion was first introduced to measure the difference between distributions and can easily be expressed in an RKHS. Close relationships have been proved between MMD and Ward's criterion. In our KHAC method, the selection of the kernel parameter and the determination of the number of clusters have been studied and provide satisfactory performance. Finally, an iterative KHAC algorithm is proposed which aims at determining the optimal kernel parameter, giving a meaningful number of clusters and partitioning the data set automatically.
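The link between MMD and Ward's criterion mentioned in this abstract can be made concrete: in the RKHS, the Ward merge cost between two clusters is the squared distance between their mean embeddings scaled by the harmonic mean of the cluster sizes, and both quantities can be read directly off the kernel matrix. The sketch below is a naive greedy agglomeration loop for illustration, not the KHAC algorithm of the thesis; the Gaussian kernel and its bandwidth are illustrative choices.

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    # Gaussian kernel Gram matrix of the data set X.
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return np.exp(-sq / (2 * sigma**2))

def ward_cost(K, A, B):
    # Ward merge cost in feature space: |A||B|/(|A|+|B|) * ||mu_A - mu_B||^2,
    # where ||mu_A - mu_B||^2 is the (biased) squared MMD between the clusters.
    mmd2 = (K[np.ix_(A, A)].mean() + K[np.ix_(B, B)].mean()
            - 2 * K[np.ix_(A, B)].mean())
    return len(A) * len(B) / (len(A) + len(B)) * mmd2

def kernel_ward_hac(K, n_clusters):
    # Naive agglomeration: start from singletons and repeatedly merge the pair
    # of clusters with the smallest Ward cost computed in the RKHS.
    clusters = [[i] for i in range(K.shape[0])]
    while len(clusters) > n_clusters:
        best, best_pair = np.inf, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                cost = ward_cost(K, clusters[i], clusters[j])
                if cost < best:
                    best, best_pair = cost, (i, j)
        i, j = best_pair
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

For example, `kernel_ward_hac(rbf_gram(X, sigma=0.5), n_clusters=3)` returns the index sets of three clusters; choosing sigma and the number of clusters is exactly the part the thesis addresses.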
Bissyande, Tegawende. "Contributions for improving debugging of kernel-level services in a monolithic operating system." PhD thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00821893.
Full text available
Singh, Yuvraj. "Regression Models to Predict Coastdown Road Load for Various Vehicle Types." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1595265184541326.
Full text available
Li, Yuyi. "Empirical likelihood with applications in time series." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/empirical-likelihood-with-applications-in-time-series(29c74808-f784-4306-8df9-26f45b30b553).html.
Full text available
Bounliphone, Wacha. "Tests d'hypothèses statistiquement et algorithmiquement efficaces de similarité et de dépendance." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLC002/document.
Full text available
The dissertation presents novel statistically and computationally efficient hypothesis tests for relative similarity and dependency, and for precision matrix estimation. The key methodology adopted in this thesis is the class of U-statistic estimators, which yield minimum-variance unbiased estimates of the parameters of interest. The first part of the thesis focuses on relative similarity tests applied to the problem of model selection. Probabilistic generative models provide a powerful framework for representing data, but model selection in this generative setting can be challenging. To address this issue, we provide a novel non-parametric hypothesis test of relative similarity and test whether a first candidate model generates a data sample significantly closer to a reference validation set than a second candidate does. Subsequently, the second part of the thesis develops a novel non-parametric statistical hypothesis test for relative dependency. Tests of dependence are important tools in statistical analysis, and several canonical tests for the existence of dependence have been developed in the literature. However, the question of whether a dependency exists at all is often secondary; for decision making it is frequently necessary to determine whether one dependence is stronger than another. We present a statistical test which determines whether one variable is significantly more dependent on a first target variable than on a second. Finally, a novel method for structure discovery in a graphical model is proposed. Making use of the result that zeros of a precision matrix encode conditional independencies, we develop a test that estimates and bounds an entry of the precision matrix. Methods for structure discovery in the literature typically make restrictive distributional (e.g. Gaussian) or sparsity assumptions that may not apply to a data sample of interest. Consequently, we derive a new test that makes use of results for U-statistics and applies them to the covariance matrix, which then implies a bound on the precision matrix.
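The relative similarity test of the first part compares two candidate models through MMD distances to a common reference sample. The sketch below computes only the underlying statistic (two unbiased MMD² U-statistics and their difference, with a Gaussian kernel assumed); turning it into a calibrated test requires the joint asymptotic variance of the two U-statistics derived in the thesis, which is not reproduced here. Function names are illustrative.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian kernel matrix between the rows of X and Y.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2u(X, Y, sigma=1.0):
    # Unbiased (U-statistic) estimate of the squared MMD between X and Y.
    Kxx = rbf_kernel(X, X, sigma); np.fill_diagonal(Kxx, 0.0)
    Kyy = rbf_kernel(Y, Y, sigma); np.fill_diagonal(Kyy, 0.0)
    n, m = len(X), len(Y)
    return (Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1))
            - 2 * rbf_kernel(X, Y, sigma).mean())

def relative_similarity_statistic(samples_model1, samples_model2, reference, sigma=1.0):
    # Negative values favour model 1 (closer to the reference), positive values model 2.
    # A complete test would studentize this difference with the joint U-statistic variance.
    d1 = mmd2u(samples_model1, reference, sigma)
    d2 = mmd2u(samples_model2, reference, sigma)
    return d1 - d2, d1, d2
```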
Chwialkowski, K. P. "Topics in kernel hypothesis testing." Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1519607/.
Full text available
Heideklang, René. "Data Fusion for Multi-Sensor Nondestructive Detection of Surface Cracks in Ferromagnetic Materials." Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/19586.
Full text available
Fatigue cracking is a dangerous and cost-intensive phenomenon that requires early detection. But at high test sensitivity, the abundance of false indications limits the reliability of conventional materials testing. This thesis exploits the diversity of physical principles that different nondestructive surface inspection methods offer, applying data fusion techniques to increase the reliability of defect detection. The first main contribution is a set of novel approaches for the fusion of NDT images. These surface scans are obtained from state-of-the-art inspection procedures in Eddy Current Testing, Thermal Testing and Magnetic Flux Leakage Testing. The implemented image fusion strategy demonstrates that simple algebraic fusion rules are sufficient for high performance, given adequate signal normalization. Data fusion reduces the rate of false positives by a factor of six over the best individual sensor for a 10 μm deep groove. Moreover, the utility of state-of-the-art image representations, like the Shearlet domain, is explored. However, the theoretical advantages of such directional transforms are not attained in practice with the given data. Nevertheless, the benefit of fusion over single-sensor inspection is confirmed a second time. Furthermore, this work proposes novel techniques for fusion at a high level of signal abstraction. A kernel-based approach is introduced to integrate spatially scattered detection hypotheses. This method explicitly deals with registration errors that are unavoidable in practice. Surface discontinuities as shallow as 30 μm are reliably found by fusion, whereas the best individual sensor requires depths of 40–50 μm for successful detection. The experiment is replicated on a similar second test specimen. Practical guidelines are given at the end of the thesis, and the need for a data-sharing initiative is stressed to promote future research on this topic.
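A minimal sketch of the low-level fusion strategy described above: normalise each registered sensor image so that different physical quantities become comparable, apply a simple algebraic fusion rule, and threshold the fused map. The robust z-score, the mean/max rules and the quantile threshold are generic illustrative choices, not the parameters used in the thesis.

```python
import numpy as np

def robust_zscore(image):
    # Normalize a sensor image with the median and the median absolute deviation,
    # so that sensors with different physical units become comparable.
    med = np.median(image)
    mad = np.median(np.abs(image - med)) + 1e-12
    return (image - med) / (1.4826 * mad)

def fuse_and_detect(images, rule="mean", quantile=0.999):
    """Fuse co-registered NDT images and flag candidate defect pixels.

    images   : list of 2-D arrays from different sensors (same shape, registered)
    rule     : 'mean' or 'max' algebraic fusion of the normalized images
    quantile : detection threshold applied to the fused map
    """
    normalized = np.stack([robust_zscore(img) for img in images])
    fused = normalized.mean(axis=0) if rule == "mean" else normalized.max(axis=0)
    threshold = np.quantile(fused, quantile)
    return fused, fused >= threshold   # fused map and boolean detection mask
```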
Coudret, Raphaël. "Stochastic modelling using large data sets : applications in ecology and genetics." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00865867.
Full text available
Nguyen, Van Hanh. "Modèles de mélange semi-paramétriques et applications aux tests multiples." PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00987035.
Full text available
Vitale, Raffaele. "Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90442.
Full text available
This doctoral thesis, conceived primarily to support and strengthen the relationship between academia and industry, was developed in collaboration with Shell Global Solutions (Amsterdam, The Netherlands) in an effort to apply and possibly extend the already consolidated latent-variable approaches (namely Principal Component Analysis - PCA - Partial Least Squares regression - PLS - and discriminant PLS - PLSDA) to the resolution of complex problems, not only in the fields of process improvement and optimisation but also in the broader setting of multivariate data analysis. To this end, every chapter proposes new, efficient algorithmic solutions for disparate tasks, from calibration transfer in spectroscopy to real-time modelling of data streams. The manuscript is divided into the following six parts, centred on various topics of interest: Part I - Preface, where we summarise this research work and give its main objectives and justifications together with a brief introduction to PCA, PLS and PLSDA; Part II - On kernel-based extensions of PCA, PLS and PLSDA, where we present the potential of kernel techniques, possibly coupled with specific variants of the recently rediscovered pseudo-sample projection formulated by the English statistician John C. Gower, and compare their performance with more classical methodologies in four applications to different scenarios: Red-Green-Blue (RGB) image segmentation, batch process discrimination and monitoring, and the analysis of mixture designs of experiments; Part III - On the selection of the number of factors in PCA by permutation testing, where we provide an extensive guide on how to select PCA components by means of permutation tests and a complete illustration of an original algorithmic procedure implemented for that purpose; Part IV - On modelling common and distinctive sources of variability in multi-set data analysis, where we discuss several practical aspects of the analysis of the common and distinctive components of two data blocks (carried out by methods such as Simultaneous Component Analysis - SCA - Distinctive and Common Simultaneous Component Analysis - DISCO-SCA - Adapted Generalized Singular Value Decomposition - Adapted GSVD - ECO-POWER, Canonical Correlation Analysis - CCA - and 2-block Orthogonal Projections to Latent Structures - O2PLS). We also present a new computational strategy for determining the number of common factors underlying two data matrices that share the same row or column dimension, and two novel approaches for calibration transfer between near-infrared spectrometers; Part V - On real-time processing and modelling of high-dimensional data streams, where we design the On-The-Fly Processing (OTFP) tool, a new system for the rational handling of multi-channel measurements recorded in real time; Part VI - Epilogue, where we present the final conclusions, outline future perspectives, and include the appendices.
Vitale, R. (2017). Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90442
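Part III of this thesis concerns selecting the number of PCA components by permutation testing. The sketch below shows a generic version of that idea, assuming independent column-wise permutations and a comparison of the observed eigenvalue spectrum against the permuted one; it is not the original algorithmic procedure developed in the thesis, and the parameter names are illustrative.

```python
import numpy as np

def pca_permutation_pvalues(X, n_components=5, n_perm=200, seed=0):
    """Permutation p-values for the leading PCA eigenvalues.

    A small p-value suggests the component captures more variance than expected
    under column-wise independence. Assumes n_components <= min(X.shape).
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                        # column-centred data
    eig = np.linalg.svd(Xc, compute_uv=False)**2   # observed eigenvalue spectrum
    eig = eig[:n_components]

    exceed = np.zeros(n_components)
    for _ in range(n_perm):
        # Permute each column independently to destroy the correlation structure
        # while preserving the marginal distributions.
        Xp = np.column_stack([rng.permutation(Xc[:, j]) for j in range(Xc.shape[1])])
        eig_p = np.linalg.svd(Xp - Xp.mean(axis=0), compute_uv=False)**2
        exceed += eig_p[:n_components] >= eig
    return (exceed + 1) / (n_perm + 1)             # retain components with small p-values
```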
Wende, Ulrich. "Modellierung und Testverfahren für CMOS-kompatible Fluxgatesensoren mit planaren weichmagnetischen Kernen - Modeling and testing of a CMOS compatible fluxgate sensor with a planar softmagnetic core." Gerhard-Mercator-Universitaet Duisburg, 2001. http://www.ub.uni-duisburg.de/ETD-db/theses/available/duett-05222001-120352/.
Full text available
Jeunesse, Paulien. "Estimation non paramétrique du taux de mort dans un modèle de population générale : Théorie et applications. A new inference strategy for general population mortality tables. Nonparametric adaptive inference of birth and death models in a large population limit. Nonparametric inference of age-structured models in a large population limit with interactions, immigration and characteristics. Nonparametric test of time dependance of age-structured models in a large population limit." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED013.
Full text available
In this thesis, we study the mortality rate in different population models, with applications to demography and biology. The mathematical framework combines statistics of processes, nonparametric estimation and analysis. In a first part, an algorithm is proposed to estimate mortality tables. This problem comes from actuarial science, and the aim is to apply our results in the insurance field. The algorithm is founded on a deterministic population model, and the new estimates we obtain improve on existing results. Their advantage is to take the global population dynamics into account, so that births are used in our model to compute the mortality rate. Finally, these estimates are linked with previous works, a point of great importance in actuarial science. In a second part, we are interested in the estimation of the mortality rate in a stochastic population model, using tools from nonparametric estimation and statistics of processes. Indeed, the mortality rate is a function of two parameters, time and age. We propose minimax-optimal and adaptive estimators for the mortality rate and the population density. We also demonstrate non-asymptotic concentration inequalities that quantify the deviation between the stochastic process and the deterministic limit used in the first part. We prove that our estimators remain optimal in a model where the mortality is influenced by interactions, as is the case, for example, for logistic populations. In a third part, we consider the testing problem of detecting the existence of interactions. This test is in fact designed to detect time dependence of the mortality rate; under the assumption that any time dependence of the mortality rate comes only from interactions, it detects the presence of interactions. Finally, we propose an algorithm to perform this test.
Friedrichs, Stefanie. "Kernel-Based Pathway Approaches for Testing and Selection." Doctoral thesis, 2017. http://hdl.handle.net/11858/00-1735-0000-0023-3F2D-5.
Full text available
Gheorghe, Marian, R. Ceterchi, F. Ipate, Savas Konur, and Raluca Lefticaru. "Kernel P systems: from modelling to verification and testing." 2017. http://hdl.handle.net/10454/11720.
Full text available
A kernel P system integrates in a coherent and elegant manner some of the most successfully used features of the P systems employed in modelling various applications. It also provides a theoretical framework for analysing these applications and a software environment for simulating and verifying them. In this paper, we illustrate the modelling capabilities of kernel P systems by showing how other classes of P systems can be represented with this formalism and by providing a number of kernel P system models for a sorting algorithm and a broadcasting problem. We also show how formal verification can be used to validate that the given models work as desired. Finally, a test generation method based on automata is extended to non-deterministic kernel P systems.
The work of MG, FI and RL was supported by a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI (project number: PN-II-ID-PCE-2011-3-0688); RCUK
Lin, Chih-Hua (林之華). "A Practical Application of the Kernel Method of Testing Equation." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/36842061472673580182.
Full text available
National Taipei University (國立臺北大學)
Department of Statistics (統計學系)
Academic year 104 (2015–2016)
When a standardized achievement test is administered repeatedly, our main concern is whether the scores of current test takers and their predecessors are comparable. To ensure the fairness of the tests, a statistical procedure that places achievement tests given at different measurement times and in different versions onto a common scale is called "equating". In this study, the basic theory of "The Kernel Method of Test Equating" (2004) was extended to the observed-score kernel method of score equating. For convenience, the "kequate" package of the free software R was used to carry out the analysis and the equating procedure of this study. According to the Item Response Theory framework described by Yu, M. N. (1992), equating rests on four basic assumptions; only when all four assumptions are satisfied can IRT be applied to analyze test data. In addition, IRT has some valuable properties, in particular that it provides standard errors of the estimated abilities of individual test takers. Furthermore, the Item Information Function is defined as the inverse of the square of this estimated standard error and serves as an indicator of estimation accuracy analogous to the "reliability" index in classical test theory. The calculus achievement test scores of NTPU freshmen in the 2014 and 2015 academic years were used as the practical data of this study to show how to apply the observed-score kernel method to equate scores between two different years. We also show how to collect data under the NEAT-CE design, how to organize the data, how to use generalized linear models to analyze the data, and, last but not least, how to choose the best model and carry out the equating stage with the "kequate" package. By comparing the descriptive statistics and the equating output, the characteristics of the test are evaluated as well.
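A heavily simplified sketch of observed-score kernel equating for an equivalent-groups design: estimate each form's score probabilities, continuize the discrete distributions with Gaussian kernels while preserving mean and variance, and map form-X scores through the composed CDFs. It uses raw empirical proportions instead of log-linear presmoothing and a fixed bandwidth, and it ignores the NEAT-CE chained design of this study, so it illustrates the idea rather than reproducing the kequate analysis.

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(x, scores, probs, h=0.6):
    # Gaussian-kernel continuization of a discrete score distribution that
    # preserves its mean and variance (von Davier et al., 2004 style).
    mu = np.sum(probs * scores)
    var = np.sum(probs * (scores - mu) ** 2)
    a = np.sqrt(var / (var + h**2))               # shrinkage keeping mean and variance
    centers = a * scores + (1 - a) * mu
    return np.sum(probs * norm.cdf((np.atleast_1d(x)[:, None] - centers) / (a * h)), axis=1)

def kernel_equate(scores_x, scores_y, x_points, h=0.6):
    """Equate form-X scores to the form-Y scale (equivalent-groups design)."""
    scores_x, scores_y = np.asarray(scores_x), np.asarray(scores_y)
    grid_x = np.arange(scores_x.min(), scores_x.max() + 1)
    grid_y = np.arange(scores_y.min(), scores_y.max() + 1)
    px = np.array([(scores_x == s).mean() for s in grid_x])   # empirical probabilities
    py = np.array([(scores_y == s).mean() for s in grid_y])

    # Invert the continuized form-Y CDF on a dense grid.
    dense_y = np.linspace(grid_y[0] - 4 * h, grid_y[-1] + 4 * h, 2000)
    G = kernel_cdf(dense_y, grid_y, py, h)
    F = kernel_cdf(np.asarray(x_points, dtype=float), grid_x, px, h)
    return np.interp(F, G, dense_y)               # e_Y(x) = G^{-1}(F(x))
```

For instance, `kernel_equate(x_sample, y_sample, x_points=np.arange(0, 41))` returns the equated form-Y score for every form-X score point from 0 to 40.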
Shen, Shu. "Three essays in econometrics." Thesis, 2011. http://hdl.handle.net/2152/26903.
Full text available
Liu, Shifang. "Statistical Methods for Testing Treatment-Covariate Interactions in Cancer Clinical Trials." Thesis, 2011. http://hdl.handle.net/1974/6758.
Full text available
Thesis (Ph.D., Mathematics & Statistics), Queen's University, 2011.
Yang, Jiasen. "Statistical Learning and Model Criticism for Networks and Point Processes." Thesis, 2019.
Find the full text of the source