Doctoral dissertations on the topic "Set-Based Methods"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Consult the 50 best doctoral dissertations for your research on the topic "Set-Based Methods".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever such details are provided in the metadata.
Browse doctoral dissertations from many different disciplines and organise your bibliography correctly.
Stoican, Florin. "Fault tolerant control based on set-theoretic methods". PhD thesis, Supélec, 2011. http://tel.archives-ouvertes.fr/tel-00633622.
Full text available.
Xu, Feng. "Diagnosis and fault-tolerant control using set-based methods". Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284831.
Full text available.
The ability of a system to tolerate faults is an important performance specification for most systems. Several civil-aviation disasters illustrate its importance: according to official investigations, some air incidents were technically avoidable had the pilots been able to take appropriate measures. Even so, relying only on the pilots' skill and experience, reliable flight decisions cannot always be guaranteed. If, instead, fault-tolerance strategies could be included in the decision-making process, flights would be much safer. Fault-tolerant control is generally classified into passive and active approaches. Passive control relies on the robustness of the controller and provides only a limited fault-tolerance capability, whereas active fault-tolerant control incorporates a fault detection and isolation module that obtains information about the faults and then actively takes actions to tolerate their effect. Active control therefore generally has stronger fault-tolerance capabilities. This thesis focuses on active fault-tolerant control, considering model predictive control together with set-based fault detection and isolation. Model predictive control is a successful control strategy in the process industry and has been widely used for chemical processes and water treatment, owing to its ability to deal with constrained multivariable systems. Nevertheless, the performance of model predictive control depends strongly on the accuracy of the system model. Realistically, it is impossible to avoid the effects of modelling errors, disturbances, noise and faults, which always lead to discrepancies between the model and the real system.
By comparison, the model error induced by faults can be handled effectively by suitable fault-tolerant control strategies. To achieve this objective, set-based fault detection and isolation methods are used in the fault-tolerance schemes proposed in this thesis. The key advantage of these set-based fault detection and isolation techniques is that they can make robust detection and isolation decisions, which is essential for taking sound fault-tolerance measures. The thesis is divided into four parts. The first part is introductory: it presents the state of the art and introduces the research tools used. The second part addresses actuator and/or sensor fault detection and isolation using set theory, interval observers and invariant sets. The third part focuses on robust model predictive control (with both robust-tube and min-max approaches) with actuator and/or sensor fault tolerance. The fourth part draws conclusions, summarises the research and gives some ideas for future work.
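Several of the theses listed here rely on the same basic set-based detection test: predict the set of states a healthy model can reach, and flag a fault when a measurement falls outside it. A minimal sketch for a scalar system follows; the model and all numerical values are illustrative, not taken from any of the theses:

```python
def interval_predict(xlo, xhi, a, b, u, wbar):
    # One-step set prediction for x+ = a*x + b*u + w with |w| <= wbar and a >= 0:
    # the state interval [xlo, xhi] is mapped to the tightest interval
    # containing every possible successor state of the healthy model
    return a * xlo + b * u - wbar, a * xhi + b * u + wbar

def is_fault(y, xlo, xhi, vbar):
    # Robust detection test: with measurement y = x + v, |v| <= vbar, a healthy
    # system can only produce y inside the predicted set inflated by vbar;
    # anything outside is a guaranteed inconsistency, hence a fault
    return not (xlo - vbar <= y <= xhi + vbar)

# Healthy prediction from x0 = 0 with a = 0.9, b = 1, u = 1, |w| <= 0.05
xlo, xhi = interval_predict(0.0, 0.0, 0.9, 1.0, 1.0, 0.05)
```

Because the decision is based on guaranteed set membership rather than a statistical threshold, a positive test cannot be a false alarm as long as the disturbance and noise bounds hold, which is the robustness property emphasised in the abstract above.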
Stankovic, Nikola. "Set-based control methods for systems affected by time-varying delay". Thesis, Supélec, 2013. http://www.theses.fr/2013SUPL0025/document.
Full text available.
We considered process regulation based on feedback affected by time-varying delays. The proposed approach relies on set-based control methods. One part of the thesis examines active control design to compensate for delays in the sensor-to-controller communication channel. This problem is treated from the general perspective of fault-tolerant control, where delays are considered a particular degradation mode of the sensor. The results are also adapted to systems with redundant sensing elements that are prone to abrupt faults. In this sense, a unified framework is proposed to address control design with outdated measurements provided by unreliable sensors. Positive invariance for linear discrete-time systems with delays is outlined in the second part of the thesis. For this class of dynamics, there are two main approaches to defining positive invariance. The first relies on rewriting the delay-difference equation in an augmented state space and applying standard analysis and control-design tools for linear systems. The second approach considers invariance in the initial state space. However, the initial state-space characterisation is still an open problem even in the linear case, and it represents our main subject of interest. As a contribution, we provide new insights into the existence of positively invariant sets in the initial state space. Moreover, a construction algorithm for the minimal robust D-invariant set is outlined, and alternative invariance concepts are discussed.
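The D-invariance notion discussed in this abstract becomes a one-line inequality in the simplest setting: a scalar delay-difference equation with a bounded disturbance. The following is a hedged sketch of that scalar special case only (the thesis treats the general linear case); all values are illustrative:

```python
def is_D_invariant(a, b, r, wbar):
    # For the scalar delay-difference equation x_{k+1} = a*x_k + b*x_{k-d} + w_k,
    # with |w_k| <= wbar, the symmetric interval [-r, r] is D-invariant iff the
    # worst-case successor stays inside: (|a| + |b|) * r + wbar <= r
    return (abs(a) + abs(b)) * r + wbar <= r

def minimal_radius(a, b, wbar):
    # Radius of the minimal robust D-invariant interval; it exists iff |a| + |b| < 1
    contraction = abs(a) + abs(b)
    if contraction >= 1.0:
        raise ValueError("no bounded D-invariant interval: |a| + |b| >= 1")
    return wbar / (1.0 - contraction)
```

In the scalar case the condition is exact because x_k and x_{k-d} can hit their extreme values independently; in higher dimensions the initial-state-space characterisation is precisely the open problem the abstract refers to.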
Tariq, Muhammad Farzan. "Set-based design rules and implementation methods in concept development phase". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118491.
Full text available.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 52).
Organizations employ numerous methodologies during concept development cycles, ranging from agile and waterfall to point-based design. One such emerging methodology is Set-Based Design (SBD). There has been a flurry of research into the SBD process, but most of the documentation highlights only its general principles and characteristics. In this thesis, I take a more focused approach by targeting the planning and concept development phases in particular. Rules to select or deselect concepts are discussed extensively, followed by an effective structure for implementing SBD in the concept development process. The distinction between form and function during the concept development cycle is examined and documented in detail. The research was conducted independently of any organization or product type; it is therefore applicable to any product development scenario and can be easily adopted by any organization.
by Muhammad Farzan Tariq.
S.M. in Engineering and Management
Léon, Cantón Plinio de. "Dependable control of uncertain linear systems based on set theoretic methods". Aachen : Shaker, 2009. http://d-nb.info/995737347/04.
Full text available.
譚玉貞 and Yuk-ching Tam. "Some practical issues in estimation based on a ranked set sample". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31221683.
Pełny tekst źródłaKern, Benjamin Verfasser], i Rolf [Gutachter] [Findeisen. "Set-based methods for interconnected control systems / Benjamin Kern ; Gutachter: Rolf Findeisen". Magdeburg : Universitätsbibliothek Otto-von-Guericke-Universität, 2019. http://d-nb.info/1220036447/34.
Full text available.
Ullah, Baseer. "Structural topology optimisation based on the boundary element and level set methods". Thesis, Durham University, 2014. http://etheses.dur.ac.uk/10659/.
Pełny tekst źródłaKern, Benjamin [Verfasser], i Rolf [Gutachter] Findeisen. "Set-based methods for interconnected control systems / Benjamin Kern ; Gutachter: Rolf Findeisen". Magdeburg : Universitätsbibliothek Otto-von-Guericke-Universität, 2019. http://d-nb.info/1220036447/34.
Pełny tekst źródłaMulagaleti, Sampath Kumar. "Invariant Set-based Methods for the Computation of Input and Disturbance Sets". Thesis, IMT Alti Studi Lucca, 2023. http://e-theses.imtlucca.it/370/1/Mulagaleti_phdthesis.pdf.
Pełny tekst źródłaBernstein, Joshua I. (Joshua Ian) 1974. "Design methods in the aerospace industry : looking for evidence of set-based practices". Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/82675.
Full text available.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 209-211).
by Joshua I. Bernstein.
M.S.
de, Léon Cantón Plinio [Verfasser]. "Dependable control of uncertain linear systems based on set-theoretic methods / Plinio de Léon Cantón". Aachen : Shaker, 2009. http://d-nb.info/1159832757/34.
Full text available.
Falkeborn, Rikard. "Evaluation of Differential Algebraic Elimination Methods for Deriving Consistency Relations from an Engine Model". Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7973.
Full text available.
New emission legislation introduced in the European Union and the U.S. has made truck manufacturers face stricter requirements for low emissions and on-board diagnostic systems. The on-board diagnostic system typically consists of several tests that are run while the truck is driving. One way to construct such tests is to use so-called consistency relations. A consistency relation is a relation among known variables that always holds in the fault-free case. Calculating a consistency relation typically involves eliminating unknown variables from a set of equations.
To eliminate variables from a differential polynomial system, methods from differential algebra can be used. In this thesis, the purely algebraic Gröbner basis algorithm and the differential Rosenfeld-Gröbner algorithm implemented in the Maple package Diffalg are compared and evaluated. The conclusion drawn is that there are no significant differences between the methods. However, since using Gröbner bases requires the differentiations to be carried out in advance, the recommendation is to use the Rosenfeld-Gröbner algorithm.
Further, attempts were made to calculate consistency relations with the Rosenfeld-Gröbner algorithm for a real application, a model of a Scania diesel engine. These attempts did not yield any successful results: only one consistency relation could be calculated, which can be explained by the high complexity of the model.
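The elimination idea behind consistency relations can be reproduced on a toy static model with SymPy's Gröbner basis routine. The thesis itself uses Maple's Diffalg package and differential algebra; this purely algebraic two-equation example is only an illustration:

```python
from sympy import symbols, groebner

# Toy static model: one unknown x, two known (measured) signals y1, y2.
# In the fault-free case, y1 = x and y2 = x**2 hold simultaneously.
x, y1, y2 = symbols('x y1 y2')
eqs = [y1 - x, y2 - x**2]

# A lex Gröbner basis with x ordered first eliminates x where possible;
# basis elements free of x involve only known signals, i.e. they are
# consistency relations usable as diagnostic tests
G = groebner(eqs, x, y1, y2, order='lex')
relations = [g for g in G.exprs if x not in g.free_symbols]
```

Here the single relation that survives elimination is y1**2 - y2: it evaluates to zero on fault-free data and becomes nonzero when a fault breaks the model equations.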
Webb, Grayson. "A Gaussian Mixture Model based Level Set Method for Volume Segmentation in Medical Images". Thesis, Linköpings universitet, Beräkningsmatematik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148548.
Full text available.
Maringanti, Rajaram Seshu. "INVERSE-DISTANCE INTERPOLATION BASED SET-POINT GENERATION METHODS FOR CLOSED-LOOP COMBUSTION CONTROL OF A CIDI ENGINE". The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1253553419.
Full text available.
Heikkinen, Tim, and Jakob Müller. "Multidisciplinary analysis of jet engine components : Development of methods and tools for design automatisation in a multidisciplinary context". Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Maskinteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-27784.
Pełny tekst źródłaGoddard, Aaron Matthew. "Applying vessel inlet/outlet conditions to patient-specific models embedded in Cartesian grids". Thesis, University of Iowa, 2015. https://ir.uiowa.edu/etd/1970.
Full text available.
Trumpp, Alexander, Johannes Lohr, Daniel Wedekind, Martin Schmidt, Matthias Burghardt, Axel R. Heller, Hagen Malberg and Sebastian Zaunseder. "Camera-based photoplethysmography in an intraoperative setting". Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-234950.
Pełny tekst źródłaRobinson, Elinirina Iréna. "Filtering and uncertainty propagation methods for model-based prognosis". Thesis, Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1189/document.
Full text available.
In this manuscript, contributions to the development of methods for on-line model-based prognosis are presented. Model-based prognosis aims at predicting the time before the monitored system reaches a failure state, using a physics-based model of the degradation. This time before failure is called the remaining useful life (RUL) of the system. Model-based prognosis is divided into two main steps: (i) estimation of the current degradation state and (ii) prediction of the future degradation state to obtain the RUL. The first step, which consists in estimating the current degradation state from the measurements, is performed with filtering techniques. The second step is realized with uncertainty propagation methods. The main challenge in prognosis is to take the different sources of uncertainty into account in order to obtain a measure of the RUL uncertainty. These are mainly model uncertainty, measurement uncertainty and future uncertainty (loading, operating conditions, etc.). Thus, probabilistic and set-membership methods for model-based prognosis are investigated in this thesis to tackle these uncertainties. The ability of an extended Kalman filter and a particle filter to perform RUL prognosis in the presence of model and measurement uncertainty is first studied using a nonlinear fatigue crack growth model based on Paris' law and synthetic data. Then, the particle filter combined with a detection algorithm (cumulative sum algorithm) is applied to a more realistic case study: fatigue crack growth prognosis in composite materials under variable-amplitude loading. This time, model uncertainty, measurement uncertainty and future loading uncertainty are taken into account, and real data are used. Then, two set-membership model-based prognosis methods, based on constraint satisfaction and an unknown-input interval observer for linear discrete-time systems, are presented.
Finally, an extension of a reliability analysis method to model-based prognosis, namely the inverse first-order reliability method (Inverse FORM), is presented. In each case study, performance evaluation metrics (accuracy, precision and timeliness) are calculated in order to compare the proposed methods.
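The particle-filter prognosis scheme described in the abstract above can be sketched in a few lines: particles carry the uncertain Paris-law coefficient, their weights are updated against noisy crack-length measurements, and the RUL is the predicted number of load cycles until a critical crack size. All numerical values below are illustrative and not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def paris_step(a, C, m=3.0, dsigma=1.0):
    # Paris' law per-cycle crack growth: da/dN = C * dK**m, dK = dsigma * sqrt(pi * a)
    dK = dsigma * np.sqrt(np.pi * a)
    return a + C * dK**m

def rul_particle_filter(measurements, n=500, a0=1.0, a_crit=10.0, sigma_y=0.05):
    # Bootstrap particle filter: each particle carries an uncertain Paris
    # coefficient C and a crack length a; a0 is assumed known
    C = rng.uniform(1e-3, 5e-3, n)
    a = np.full(n, a0)
    for y in measurements:
        a = paris_step(a, C)
        w = np.exp(-0.5 * ((y - a) / sigma_y) ** 2)   # Gaussian measurement likelihood
        w /= w.sum()
        idx = rng.choice(n, n, p=w)                   # multinomial resampling
        a, C = a[idx], C[idx]
    # RUL prediction: propagate each particle until the crack reaches a_crit,
    # which propagates the remaining parameter uncertainty into the RUL estimate
    ruls = np.empty(n)
    for i in range(n):
        ai, k = a[i], 0
        while ai < a_crit and k < 10000:
            ai = paris_step(ai, C[i])
            k += 1
        ruls[i] = k
    return float(ruls.mean()), float(ruls.std())
```

The spread of the per-particle RUL values is exactly the "measure of the RUL uncertainty" the abstract asks for; the set-membership alternatives mentioned there would instead return guaranteed RUL bounds.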
Xia, Xiaolin. "A Comparison Study on a Set of Space Syntax based Methods : Applying metric, topological and angular analysis to natural streets, axial lines and axial segments". Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-15524.
Pełny tekst źródłaRobinson, Elinirina Iréna. "Filtering and uncertainty propagation methods for model-based prognosis". Electronic Thesis or Diss., Paris, CNAM, 2018. http://www.theses.fr/2018CNAM1189.
Full text available.
Bornschlegell, Augusto Salomao. "Optimisation aérothermique d'un alternateur à pôles saillants pour la production d'énergie électrique décentralisée". Thesis, Valenciennes, 2012. http://www.theses.fr/2012VALE0023/document.
Full text available.
This work concerns the thermal optimization of an electrical machine. The lumped method is used to simulate the temperature field. This model solves the heat equation in three dimensions, in cylindrical coordinates, and in transient or steady state. We consider two transport mechanisms: conduction and convection. The evaluation of this model is performed by means of 13 design variables that correspond to the main flow rates of the equipment. We analyse the machine's cooling performance by varying these 13 flow rates. Before tackling such a complicated geometry, we picked a simpler case in order to better understand the variety of available optimization tools. The experience obtained in the simpler case is applied to the thermal optimization problem of the electrical machine. The machine is evaluated from the thermal point of view by combining two criteria: the maximum and the mean temperature. Constraints are used to keep the problem consistent. We solved the problem using gradient-based methods (Active-set and Interior-Point) and genetic algorithms.
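The lumped method mentioned in this abstract reduces the heat equation to an algebraic thermal network: each node has a temperature, each link a thermal conductance, and steady state is a linear solve. A two-node sketch with illustrative conductances and heat inputs (not the machine model of the thesis):

```python
import numpy as np

# Two-node lumped thermal network: conduction G12 between the nodes,
# convection G1a and G2a from each node to the ambient at Ta, and heat
# inputs P1, P2 injected at the nodes (all values illustrative)
G12, G1a, G2a, Ta = 2.0, 1.0, 3.0, 20.0
P = np.array([10.0, 5.0])

# Steady-state energy balance at each node gives a linear system G @ T = rhs,
# where the conductance matrix has the usual network (graph-Laplacian-like) form
G = np.array([[G12 + G1a, -G12],
              [-G12, G12 + G2a]])
rhs = P + np.array([G1a * Ta, G2a * Ta])
T = np.linalg.solve(G, rhs)   # node temperatures
```

In an optimization loop like the one described above, convective conductances would depend on the cooling flow rates, and criteria such as max(T) and mean(T) would be minimized over those rates subject to flow constraints.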
Prodan, Ionela. "Control of Multi-Agent Dynamical Systems in the Presence of Constraints". Thesis, Supélec, 2012. http://www.theses.fr/2012SUPL0019/document.
Full text available.
The goal of this thesis is to propose solutions for the optimal control of multi-agent dynamical systems under constraints. Elements from control theory and optimization are merged in order to provide useful tools, which are further applied to different problems involving multi-agent formations. The thesis considers the challenging case of agents subject to dynamical constraints. To deal with these issues, well-established concepts like set theory, differential flatness, Model Predictive Control (MPC) and Mixed-Integer Programming (MIP) are adapted and enhanced. Using these theoretical notions, the thesis concentrates on understanding the geometrical properties of the multi-agent group formation and on providing a novel synthesis framework which exploits the group structure. In particular, the formation design and the collision avoidance conditions are cast as geometrical problems, and optimization-based procedures are developed to solve them. Moreover, considerable advances in this direction are obtained by efficiently using MIP techniques (in order to derive an efficient description of the non-convex, non-connected feasible region which results from multi-agent collision and obstacle avoidance constraints) and stability properties (in order to analyze the uniqueness and existence of formation configurations). Lastly, some of the obtained theoretical results are applied to a challenging practical application: a novel combination of MPC and differential flatness (for reference generation) is used for the flight control of Unmanned Aerial Vehicles (UAVs).
Bertin, Étienne. "Robust optimal control for the guidance of autonomous vehicles". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAE012.
Full text available.
The guidance of a reusable launcher is a control problem that requires both precision and robustness: one must compute a trajectory and a control such that the system reaches the landing zone, without crashing into it or exploding mid-flight, all while using as little fuel as possible. Optimal control methods based on Pontryagin's Maximum Principle can compute an optimal trajectory with great precision, but uncertainties, i.e. the discrepancies between estimated and actual values of the initial state and parameters, cause the actual trajectory to deviate, which can be dangerous. In parallel, set-based methods, notably validated simulation, can enclose all trajectories of a system with uncertainties. This thesis combines these two approaches to enclose sets of optimal trajectories of a problem with uncertainties, in order to guarantee the robustness of the guidance of autonomous vehicles. We start by defining sets of optimal trajectories for systems with uncertainties, first for mathematically perfect trajectories, then for the trajectory of a vehicle subject to estimation errors that may or may not use sensor information to compute a new trajectory online. Pontryagin's principle characterizes those sets as solutions of a boundary value problem with dynamics subject to uncertainties. We develop algorithms that enclose all solutions of these boundary value problems using validated simulation, interval arithmetic and contractor theory. However, validated simulation with intervals is subject to significant over-approximation, which limits our methods. To remedy this we replace intervals by constrained symbolic zonotopes. We use these zonotopes to simulate hybrid systems, enclose the solutions of boundary value problems, and build an inner approximation to complement the classical outer approximation. Finally, we combine all our methods to compute sets of trajectories for aerospace systems and use those sets to assess the robustness of a control.
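The zonotope machinery referred to above rests on one closure property: the image of a zonotope under a linear map is again a zonotope, and its interval hull is cheap to compute, which is why zonotopes suffer far less wrapping effect than plain intervals. A minimal sketch with plain zonotopes only, without the constrained symbolic extension developed in the thesis:

```python
import numpy as np

def zonotope_map(c, G, A):
    # A zonotope is the set {c + G @ xi : ||xi||_inf <= 1} with centre c and
    # generator matrix G; its image under the linear map x -> A @ x is again
    # a zonotope, with centre A @ c and generators A @ G (exact, no wrapping)
    return A @ c, A @ G

def interval_hull(c, G):
    # Tightest axis-aligned box containing the zonotope: c +/- sum_j |G[:, j]|
    r = np.abs(G).sum(axis=1)
    return c - r, c + r

# Propagate the unit box [-1, 1]^2 through one step of a contracting rotation
c, G = np.zeros(2), np.eye(2)
th = np.pi / 4
A = 0.5 * np.array([[np.cos(th), -np.sin(th)],
                    [np.sin(th), np.cos(th)]])
c2, G2 = zonotope_map(c, G, A)
```

Iterating `zonotope_map` encloses all trajectories of an uncertain-initial-state linear system exactly, whereas re-boxing with `interval_hull` at every step would reproduce the over-approximation the thesis sets out to avoid.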
Lo, Shin-en. "A Fire Simulation Model for Heterogeneous Environments Using the Level Set Method". Scholarship @ Claremont, 2012. http://scholarship.claremont.edu/cgu_etd/72.
Full text available.
Mueller, Martin F. "Physics-driven variational methods for computer vision and shape-based imaging". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54034.
Full text available.
Hartmann, Daniel [Verfasser]. "A Level-Set Based Method for Premixed Combustion in Compressible Flow / Daniel Hartmann". Aachen : Shaker, 2010. http://d-nb.info/1120864143/34.
Full text available.
Yamada, Takayuki. "A Level Set-Based Topology Optimization Incorporating Concept of the Phase-Field Method". 京都大学 (Kyoto University), 2010. http://hdl.handle.net/2433/126804.
Full text available.
Jadhav, Trishul. "Knowledge Based Gene Set analysis (KB-GSA) : A novel method for gene expression analysis". Thesis, University of Skövde, School of Life Sciences, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-4352.
Full text available.
Microarray technology allows the expression levels of thousands of genes to be measured simultaneously. Several gene set analysis (GSA) methods are widely used for extracting useful information from microarrays, for example identifying differentially expressed pathways associated with a particular biological process or disease phenotype. Though GSA methods like Gene Set Enrichment Analysis (GSEA) are widely used for pathway analysis, they are based solely on statistics. Such methods can be awkward to use when knowledge of the specific pathways involved in a particular biological process is the aim of the study. Here we present a novel method (Knowledge Based Gene Set Analysis: KB-GSA) which integrates knowledge about user-selected pathways that are known to be involved in specific biological processes. The method generates an easy-to-understand graphical visualization of the changes in gene expression, complemented with some common statistics about the pathway of particular interest.
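The core computation in such pathway-focused gene set analysis is a per-pathway summary of expression change for a user-selected set of genes. A minimal sketch using a simple mean-difference score (illustrative only, not the actual KB-GSA statistic):

```python
import numpy as np

def gene_set_score(expr, genes, gene_set, cond):
    # expr: genes x samples expression matrix; genes: row labels;
    # gene_set: user-selected pathway genes; cond: 0/1 condition label per sample.
    # Score = mean expression under condition 1 minus condition 0, averaged over
    # the pathway genes that are actually present on the array
    idx = [genes.index(g) for g in gene_set if g in genes]
    sub = expr[idx]
    return float(sub[:, cond == 1].mean() - sub[:, cond == 0].mean())

expr = np.array([[1.0, 1.0, 3.0, 3.0],
                 [0.0, 0.0, 0.0, 0.0],
                 [2.0, 2.0, 4.0, 4.0]])
genes = ['g1', 'g2', 'g3']
cond = np.array([0, 0, 1, 1])
```

A positive score indicates that the selected pathway is, on average, up-regulated in condition 1; GSEA-style methods would instead rank all genes and test the set's enrichment in that ranking.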
Shopple, John P. "An interface-fitted finite element based level set method algorithm, implementation, analysis and applications /". Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3359494.
Full text available.
Title from first page of PDF file (viewed July 14, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 59-60).
Rosenthal, Paul, Vladimir Molchanov and Lars Linsen. "A Narrow Band Level Set Method for Surface Extraction from Unstructured Point-based Volume Data". Universitätsbibliothek Chemnitz, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-70373.
Full text available.
Raudberget, Dag. "Industrial Experiences of Set-based Concurrent Engineering- Effects, results and applications". Licentiate thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsmiljö Produktutveckling - Datorstödd konstruktion, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-20149.
Pełny tekst źródłaDillard, Seth Ian. "Image based modeling of complex boundaries". Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/950.
Full text available.
Ioan, Daniel. "Safe Navigation Strategies within Cluttered Environment". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG047.
Full text available.
This thesis pertains to optimization-based navigation and control in multi-obstacle environments. The design problem is commonly stated in the literature as a constrained optimization problem over a non-convex domain. Thus, building on a combination of Model Predictive Control and set-theoretic concepts, we develop several constructive methods based on geometrical interpretation. In its first part, the thesis focuses on the representation of the multi-obstacle environment, based on a thorough analysis of recent results in the field. We opted to exploit a particular class of convex sets endowed with the symmetry property to model the environment, reduce complexity and enhance performance. Furthermore, we solve an open problem in navigation within cluttered environments: partitioning the feasible space in accordance with the distribution of obstacles. The core of this methodology is the construction of a convex lifting, which boils down to convex optimization. We cover both the mathematical foundations and the computational details of the implementation. Finally, we illustrate the concepts with geometrical examples, and we complement the study by providing global feasibility guarantees and enhancing the effective control by operating at the strategic level.
Li, Min. "Numerical model building based on XFEM/level set method to simulate ledge freezing/melting in Hall-Héroult cell". Doctoral thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/27919.
Full text available.
During the Hall-Héroult process for smelting aluminium, the ledge formed by freezing of the molten bath plays a significant role in keeping the internal working condition of the cell stable. The present work aims at building a vertically two-dimensional numerical model to predict the ledge profile in the bath-ledge two-phase system by solving three interacting physical problems: the phase change problem (Stefan problem), the variation of the bath composition, and the bath motion. For the sake of simplicity, the molten bath is regarded as a binary system in chemical composition. Solving the three problems, characterized by a freely moving internal boundary and the presence of discontinuities at that boundary, is always a challenge for conventional continuum-based methods. Therefore, as an alternative, the extended finite element method (XFEM) is used to handle the local discontinuities in each solution space, while the interface between phases is captured implicitly by the level set method. In the course of model building, the following subjects are investigated by coupling the problems pairwise: 1) one-phase density-driven flow, 2) the Stefan problem without a convection mechanism in the binary system, and 3) the Stefan problem with ensuing melt flow in a pure material. The accuracy of the corresponding sub-models is verified against analytical solutions or results obtained by conventional methods. Finally, the model coupling all three physics is applied to simulate the freezing/melting of the bath-ledge system under certain scenarios. In this final application, the bath flow is described by the Stokes equations and induced either by the density jump between phases or by buoyancy forces produced by temperature and/or compositional gradients. The model is characterized by the coupling of multiple physics; in particular, the liquid density and the melting point depend on the species concentration.
XFEM also exhibits its accuracy and flexibility in dealing with different types of discontinuity based on a fixed mesh.
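As a minimal illustration of the level set idea used in the abstract above (not the XFEM/Stefan model itself), the following Python sketch advects a 1D level set function with a first-order Godunov upwind scheme on a fixed grid; the grid, the unit front speed and the interface location are invented for the example:

```python
import numpy as np

def evolve_level_set(phi, dx, speed, dt, steps):
    """Advance phi_t + F * |grad(phi)| = 0 with a first-order
    Godunov upwind scheme (1D)."""
    for _ in range(steps):
        dminus = np.diff(phi, prepend=phi[0]) / dx  # backward difference
        dplus = np.diff(phi, append=phi[-1]) / dx   # forward difference
        if speed > 0:
            grad = np.maximum(np.maximum(dminus, 0.0),
                              -np.minimum(dplus, 0.0))
        else:
            grad = np.maximum(-np.minimum(dminus, 0.0),
                              np.maximum(dplus, 0.0))
        phi = phi - dt * speed * grad
    return phi

def zero_crossing(phi, x):
    """Locate the interface (zero of phi) by linear interpolation;
    assumes phi is non-decreasing."""
    i = int(np.argmax(phi >= 0.0))
    return x[i - 1] - phi[i - 1] * (x[i] - x[i - 1]) / (phi[i] - phi[i - 1])

x = np.linspace(0.0, 1.0, 501)
dx = x[1] - x[0]
phi0 = x - 0.3              # interface initially at x = 0.3
dt = 0.5 * dx               # CFL number 0.5
phi = evolve_level_set(phi0, dx, speed=1.0, dt=dt,
                       steps=int(round(0.2 / dt)))
# the zero level set moves at unit speed toward x = 0.3 + 0.2 = 0.5
```

The interface is never tracked explicitly: it is recovered from the sign change of `phi`, which is what lets level set formulations handle topology changes on a fixed mesh.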
Ewald, Jens. "A level set based flamelet model for the prediction of combustion in homogeneous charge and direct injection spark ignition engines /". Göttingen : Cuvillier, 2006. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=014901502&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.
Pełny tekst źródła
Duramaz, Alper. "Image Segmentation Based On Variational Techniques". Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607721/index.pdf.
Pełny tekst źródła
…but for the hierarchical four-phase segmentation, it is observed that this method sometimes gives unsatisfactory results. In this work, a fast hierarchical four-phase segmentation method is proposed in which the Chan-Vese active contour method is applied following the gradient flows method. After the segmentation process, the segmented regions are denoised using diffusion filters. Additionally, for low signal-to-noise ratio applications, a prefiltering scheme using nonlinear diffusion filters is included in the proposed method. Simulations have shown that the proposed method provides an effective solution to the image segmentation and denoising problem.
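The hierarchical scheme itself is not reproduced here, but the core of the Chan-Vese model in its piecewise-constant, zero-length-penalty limit reduces to alternating nearest-mean assignment and region-mean updates. A hedged Python sketch on a synthetic image (all sizes, intensities and the noise level are invented for the example):

```python
import numpy as np

def two_phase_segment(img, iters=20):
    """Piecewise-constant two-phase segmentation: the zero-length-penalty
    limit of the Chan-Vese model, i.e. alternating nearest-mean assignment
    and region-mean updates (two-means)."""
    c1, c2 = float(img.min()), float(img.max())   # initial region means
    mask = np.zeros(img.shape, dtype=bool)
    for _ in range(iters):
        mask = (img - c1) ** 2 < (img - c2) ** 2  # pixels closer to c1
        if mask.all() or (~mask).all():
            break
        c1, c2 = float(img[mask].mean()), float(img[~mask].mean())
    return mask, c1, c2

rng = np.random.default_rng(0)
img = np.full((64, 64), 0.2)
img[16:48, 16:48] = 0.8                  # bright square on dark background
img += rng.normal(0.0, 0.05, img.shape)  # additive Gaussian noise
mask, c1, c2 = two_phase_segment(img)    # mask marks the phase nearer c1
```

The full Chan-Vese model additionally penalizes contour length via a level set evolution, which is what regularizes the boundary on noisier images.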
Mortensen, Clifton H. "A Computational Fluid Dynamics Feature Extraction Method Using Subjective Logic". BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2208.
Pełny tekst źródła
Altinoklu, Metin Burak. "Image Segmentation Based On Variational Techniques". Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610415/index.pdf.
Pełny tekst źródła
In this thesis, image segmentation methods based on the Mumford-Shah variational approach have been studied. By obtaining an optimum point of the Mumford-Shah functional, which consists of a piecewise smooth approximate image and a set of edge curves, an image can be decomposed into regions. The piecewise smooth approximate image is smooth inside regions but is allowed to be discontinuous across region boundaries. Unfortunately, because of the irregularity of the Mumford-Shah functional, it cannot be used directly for image segmentation. There are, however, several approaches to approximating the Mumford-Shah functional. In the first approach, suggested by Ambrosio and Tortorelli, the functional is regularized in a special way; the regularized (Ambrosio-Tortorelli) functional is Gamma-convergent to the Mumford-Shah functional. In the second approach, the Mumford-Shah functional is minimized in two steps. In the first minimization step, the edge set is held constant and the resulting functional is minimized. The second minimization step updates the edge set using level set methods. This second approximation to the Mumford-Shah functional is known as the Chan-Vese method. In both approaches, the resulting PDEs (the Euler-Lagrange equations of the associated functionals) are solved by finite difference methods. In this study, both approaches are implemented in a MATLAB environment. The overall performance of the algorithms has been investigated through computer simulations over a series of images ranging from simple to complicated.
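The first minimization step described above, with the edge set held fixed (here taken empty, which reduces the Mumford-Shah smoothness term to plain Tikhonov smoothing), can be sketched as a Jacobi iteration of its Euler-Lagrange equation. This is an illustrative simplification, not the thesis implementation; periodic boundaries via `np.roll` are an assumption of the sketch:

```python
import numpy as np

def smooth_step(f, alpha=1.0, iters=300):
    """Minimize ||u - f||^2 + alpha * ||grad(u)||^2 for a fixed (here empty)
    edge set, by Jacobi iteration of the Euler-Lagrange equation
    u - alpha * laplacian(u) = f (periodic boundary via np.roll)."""
    u = f.copy()
    for _ in range(iters):
        neighbors = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                     + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = (f + alpha * neighbors) / (1.0 + 4.0 * alpha)
    return u

rng = np.random.default_rng(3)
f = rng.normal(0.0, 1.0, (32, 32))   # pure-noise test image
u = smooth_step(f, alpha=2.0)        # smoothed: variance shrinks, mean kept
```

In the actual two-step scheme, the smoothing is switched off across the current edge set, so discontinuities along edges survive the minimization.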
Patuelli, Claudia. "Implementation, set up and validation of multiplex qualitative two-step RT-PCR based on TaqMan® method for the diagnosis of viruses in Vitis vinifera L". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Znajdź pełny tekst źródłaGoddard, Aaron M. "A primarily Eulerian means of applying left ventricle boundary conditions for the purpose of patient-specific heart valve modeling". Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6584.
Pełny tekst źródłaBeisler, Matthias Werner. "Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects". Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2011. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-71564.
Pełny tekst źródła
The design of hydropower plants is a complex planning process that aims to exploit the available hydropower potential as fully as possible and to maximize the plant's future economic returns. To achieve this while ensuring that a complex hydropower project remains permittable, it is imperative to identify the multitude of factors relevant to the concept design and to take them sufficiently into account during the project planning phase. In early planning stages, most of the technical and economic parameters that are decisive for the detailed design cannot be determined exactly, so the governing design parameters of the plant, such as discharge and head, must undergo an extensive optimization process. A drawback of the customary deterministic calculation approaches lies in the usually insufficient objectivity in determining the input parameters, as well as in the fact that it cannot be ensured that the parameters are captured across their full ranges and in all decisive parameter combinations. Probabilistic methods use input parameters in the form of statistical distributions or ranges, with the aim of mathematically capturing the uncertainties that arise from the information deficit unavoidable in the planning phase and incorporating them into the calculation through an alternative computational method. The investigated procedure helps to capture, objectively and mathematically, the vagueness resulting from an information deficit in the economic assessment of complex infrastructure projects and to incorporate it into the planning process.
An assessment and exemplary verification is carried out of the extent to which the random set method can be applied in determining the input variables relevant to the optimization process of hydropower plants, and of the improvements in accuracy and significance of the calculation results that can be achieved thereby.
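To illustrate the random set idea in the simplest possible terms: if an input is represented by focal elements (intervals with probability masses), a monotonically increasing model can be propagated by evaluating it at the interval endpoints, yielding lower and upper expectations. The flow intervals and the toy linear "annual return" function below are invented for the example, not taken from the thesis:

```python
def propagate_random_set(focal_elements, func):
    """Propagate a random set (a list of (interval, mass) pairs) through
    a monotonically increasing model, returning lower and upper
    expectations, the bounds of the induced probability box."""
    lower = sum(m * func(lo) for (lo, hi), m in focal_elements)
    upper = sum(m * func(hi) for (lo, hi), m in focal_elements)
    return lower, upper

# hypothetical design-flow intervals (m^3/s) elicited from two sources,
# each carrying probability mass 0.5
flow = [((10.0, 12.0), 0.5), ((11.0, 15.0), 0.5)]
# toy linear "annual return" model, invented for the example
annual_return = lambda q: 3.0 * q - 20.0
low, high = propagate_random_set(flow, annual_return)  # (11.5, 20.5)
```

The spread between `low` and `high` quantifies exactly the information deficit the abstract describes; a non-monotone model would require optimizing over each interval instead of using endpoints.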
Li, Yilun. "Numerical methodologies for topology optimization of electromagnetic devices". Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS228.
Pełny tekst źródła
Topology optimization is the conceptual design of a product. Compared with conventional design approaches, it can create a novel topology that could not have been imagined beforehand, especially for the design of a product without prior experience or knowledge. Indeed, the topology optimization technique, with its ability to find efficient topologies starting from scratch, has become a serious asset for designers. Although it originated in structural optimization, topology optimization in electromagnetics has flourished in the past two decades. Nowadays, topology optimization has become a paradigm of the predominant engineering techniques, providing a quantitative design method for modern engineering design. However, due to its inherently complex nature, the development of applicable methods and strategies for topology optimization is still in progress. To address the typical problems and challenges encountered in an engineering optimization process, and considering the existing methods in the literature, this thesis focuses on topology optimization methods based on deterministic and stochastic algorithms. The main work and achievements can be summarized as follows. Firstly, to overcome the premature convergence to a local optimum of the existing ON/OFF method, a Tabu-ON/OFF method, an improved Quantum-inspired Evolutionary Algorithm (QEA) and an improved Genetic Algorithm (GA) are proposed successively. The characteristics of each algorithm are elaborated, and their performance is compared comprehensively. Secondly, to address the intermediate-density problem encountered in density-based methods and the engineering infeasibility of the final optimized topology, two topology optimization methods are proposed, namely Solid Isotropic Material with Penalization-Radial Basis Function (SIMP-RBF) and Level Set Method-Radial Basis Function (LSM-RBF).
Both methods calculate the sensitivity information of the objective function and use deterministic optimizers to guide the optimization process. For problems with a large number of design variables, the computational cost of the proposed methods is greatly reduced compared with that of methods relying on stochastic algorithms. At the same time, thanks to the RBF data interpolation smoothing technique, the optimized topology is more amenable to actual production. Thirdly, to reduce the excessive computing cost when a stochastic search algorithm is used in topology optimization, a design variable redistribution strategy is proposed. In this strategy, the whole search process of a topology optimization is divided into layers. The solution of the previous layer is set as the initial topology for the next optimization layer, and only elements adjacent to the boundary are chosen as design variables. Consequently, the number of design variables is reduced, and the computation time is thereby shortened. Finally, a multi-objective topology optimization methodology is proposed, based on a hybrid multi-objective optimization algorithm combining the Non-dominated Sorting Genetic Algorithm II (NSGAII) and the Differential Evolution (DE) algorithm. Comparison results on test functions indicate that the proposed hybrid algorithm performs better than the traditional NSGAII and the Strength Pareto Evolutionary Algorithm 2 (SPEA2), which guarantees the good global search ability of the proposed methodology and enables a designer to handle constraint conditions in a direct way. To validate the proposed topology optimization methodologies, two study cases are optimized and analyzed.
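The intermediate-density problem mentioned above is what SIMP's power-law penalization addresses: with exponent p > 1, an intermediate density yields disproportionately little stiffness per unit of material, driving optimal designs toward 0/1. A minimal sketch of the interpolation only (constants are the usual textbook defaults, not the thesis settings):

```python
def simp_young_modulus(rho, p=3.0, e0=1.0, emin=1e-9):
    """SIMP interpolation E(rho) = Emin + rho^p * (E0 - Emin).
    A small Emin > 0 keeps the stiffness matrix non-singular for
    void elements."""
    return emin + rho ** p * (e0 - emin)

# stiffness obtained per unit of material at half density:
rho = 0.5
per_material_linear = simp_young_modulus(rho, p=1.0) / rho  # about 1.0
per_material_simp = simp_young_modulus(rho, p=3.0) / rho    # about 0.25
# with p = 3 the optimizer gains little from rho = 0.5, so intermediate
# ("gray") densities are pushed toward 0 or 1
```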
Li, Honghao. "Interpretable biological network reconstruction from observational data". Electronic Thesis or Diss., Université Paris Cité, 2021. http://www.theses.fr/2021UNIP5207.
Pełny tekst źródła
This thesis focuses on constraint-based methods, one of the basic families of causal structure learning algorithms. We use the PC algorithm as a representative, for which we propose a simple and general modification applicable to any PC-derived method. The modification ensures that all separating sets used during the skeleton reconstruction step to remove edges between conditionally independent variables remain consistent with respect to the final graph. It consists in iterating the structure learning algorithm while restricting the search for separating sets to those that are consistent with respect to the graph obtained at the end of the previous iteration. The restriction can be achieved with limited computational complexity with the help of a block-cut tree decomposition of the graph skeleton. Enforcing separating-set consistency is found to increase the recall of constraint-based methods at the cost of precision, while keeping similar or better overall performance. It also improves the interpretability and explainability of the obtained graphical model. We then introduce the recently developed constraint-based method MIIC, which adopts ideas from the maximum likelihood framework to improve the robustness and overall performance of the obtained graph. We discuss the characteristics and limitations of MIIC and propose several modifications that emphasize the interpretability of the obtained graph and the scalability of the algorithm. In particular, we implement the iterative approach to enforce separating-set consistency, opt for a conservative orientation rule, and exploit the orientation probability feature of MIIC to extend the edge notation in the final graph so as to illustrate different causal implications. The MIIC algorithm is applied to a dataset of about 400 000 breast cancer records from the SEER database as a large-scale real-life benchmark.
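Constraint-based methods such as PC remove an edge X-Z once some separating set (here {Y}) renders the pair conditionally independent. For Gaussian data the test reduces to a vanishing partial correlation, as in this small sketch (the chain data-generating model and coefficients are invented for the example):

```python
import numpy as np

def partial_corr(x, z, cond):
    """Correlation of x and z after regressing both on the conditioning
    set; for Gaussian data, zero partial correlation is equivalent to
    conditional independence."""
    def residual(v):
        if cond.shape[1] == 0:
            return v - v.mean()
        A = np.column_stack([cond, np.ones(len(v))])
        beta, *_ = np.linalg.lstsq(A, v, rcond=None)
        return v - A @ beta
    rx, rz = residual(x), residual(z)
    return float(rx @ rz / np.sqrt((rx @ rx) * (rz @ rz)))

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=n)
Y = 0.8 * X + rng.normal(size=n)   # X -> Y
Z = 0.8 * Y + rng.normal(size=n)   # Y -> Z  (chain: X -> Y -> Z)

r_marginal = partial_corr(X, Z, np.empty((n, 0)))  # clearly nonzero
r_given_y = partial_corr(X, Z, Y.reshape(-1, 1))   # near zero: {Y} separates
```

The separating set {Y} recorded for the removed X-Z edge is precisely the kind of object whose consistency with the final graph the thesis enforces.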
Kaelo, Professor. "Some Population Set-Based Methods for Unconstrained Global Optimization". Thesis, 2006. http://hdl.handle.net/10539/1771.
Pełny tekst źródła
Many real-life problems are formulated as global optimization problems with continuous variables. These problems are in most cases nonsmooth, nonconvex and often simulation based, making gradient-based methods impossible to use for solving them. Therefore, efficient, reliable and derivative-free global optimization methods for solving such problems are needed. In this thesis, we focus on improving the efficiency and reliability of some global optimization methods. In particular, we concentrate on improving some population set-based methods for unconstrained global optimization, mainly through hybridization. Hybridization has been widely recognized as one of the most attractive areas of unconstrained global optimization. Experiments have shown that, through hybridization, new methods can be formed that inherit the strengths of the original elements but not their weaknesses. We suggest a number of new hybridized population set-based methods based on differential evolution (de), controlled random search (crs2) and the real coded genetic algorithm (ga). We propose five new versions of de. In the first version, we introduce a localization, called random localization, in the mutation phase of de. In the second version, we propose a localization in the acceptance phase of de. In the third version, we form a de hybrid algorithm by probabilistically combining the point generation scheme of crs2 with that of de in the de algorithm. The fourth and fifth versions are also de hybrids; these versions hybridize the mutation of de with the point generation rule of the electromagnetism-like (em) algorithm. We also propose five new versions of crs2. The first version modifies the point generation scheme of crs2 by introducing a local mutation technique. In the second and third modifications, we probabilistically combine the point generation scheme of crs2 with the linear interpolation scheme of a trust-region based method.
The fourth version is a crs hybrid that probabilistically combines the quadratic interpolation scheme with the linear interpolation scheme in crs2. In the fifth version, we form a crs2 hybrid algorithm by probabilistically combining the point generation scheme of crs2 with that of de in the crs2 algorithm. Finally, we propose five new versions of the real coded genetic algorithm (ga) with arithmetic crossover. In the first version of ga, we introduce a local technique. We propose, in the second version, an integrated crossover rule that generates two children at a time using two different crossover rules. We introduce a local technique in the second version to obtain the third version. The fourth and fifth versions are based on the probabilistic adaptation of crossover rules. The efficiency and reliability of the new methods are evaluated through numerical experiments using a large test suite of both simple and difficult problems from the literature. Results indicate that the new hybrids are much better than their original counterparts in both reliability and efficiency. Therefore, the new hybrids proposed in this study offer an alternative to many currently available stochastic algorithms for solving global optimization problems in which gradient information is not readily available.
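For reference, the baseline de scheme that such hybrids start from is DE/rand/1/bin: mutate with a scaled difference of two random members, cross over binomially, and accept greedily. A generic sketch on a sphere test function (parameter values are common defaults, not those of the thesis):

```python
import numpy as np

def differential_evolution(f, lo, hi, pop_size=30, gens=100,
                           F=0.5, CR=0.9, seed=0):
    """Plain DE/rand/1/bin: difference-vector mutation, binomial
    crossover, greedy one-to-one acceptance."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, d))
    fit = np.array([f(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True     # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:             # greedy acceptance
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], float(fit.min())

sphere = lambda v: float(np.sum(v ** 2))
lo, hi = np.full(5, -5.0), np.full(5, 5.0)
best, f_best = differential_evolution(sphere, lo, hi)
```

The thesis hybrids modify exactly the mutation and acceptance steps visible above, e.g. localizing the choice of `a`, `b`, `c` or mixing in the crs2 point generation rule.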
"A finite element based level set method for structural topology optimization". Thesis, 2009. http://library.cuhk.edu.hk/record=b6074757.
Pełny tekst źródła
Numerical examples are included in this thesis to illustrate the reliability of the proposed method. Problems on both regular and irregular design domains are considered, and different meshes are tested and compared.
Solving the level set equation with the standard Galerkin FEM might produce unstable results because of the hyperbolic character of this equation. Therefore, the streamline diffusion finite element method (SDFEM), a stabilized method, is employed to solve the level set equation. In addition to the advantage of simplicity, this method generates a system of equations with a constant, symmetric, positive definite coefficient matrix. Furthermore, this matrix can be diagonalized by means of the lumping technique used in structural dynamics, which makes the cost of solving and storage quite low. More importantly, the lumped coefficient matrix may help to improve stability under some circumstances.
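The row-sum lumping technique mentioned above can be shown on a 1D linear-element mass matrix; the mesh below is invented for the example and is not the thesis discretization:

```python
import numpy as np

def consistent_mass_matrix(n_nodes, h):
    """Assemble the consistent mass matrix for 1D linear finite
    elements on a uniform mesh of spacing h."""
    M = np.zeros((n_nodes, n_nodes))
    m_el = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # element matrix
    for e in range(n_nodes - 1):
        M[e:e + 2, e:e + 2] += m_el
    return M

def lump(M):
    """Row-sum lumping: a diagonal matrix with the same row sums, so the
    linear system becomes trivial to solve and to store."""
    return np.diag(M.sum(axis=1))

M = consistent_mass_matrix(6, h=0.2)   # mesh of 5 elements on [0, 1]
ML = lump(M)                           # diagonal, same total "mass"
```

Lumping preserves the total integral of the shape functions (the row sums), which is why it is an acceptable trade of accuracy for cost.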
The accuracy of the finite element based level set method (FELSM) is compared with that of the finite difference based level set method (FDLSM). The FELSM is a first-order accurate algorithm, but we show that its accuracy is sufficient for the structural optimization problems considered in this study. Even when higher-order accurate FDLSM schemes are used, the numerical results remain the same as those obtained by the FELSM. It is also shown that if the Courant-Friedrichs-Lewy (CFL) number is large, the FELSM is more robust and accurate than the FDLSM.
The reinitialization equation is also solved with the SDFEM, and an extra diffusion term is added to improve stability near the boundary. We propose a criterion for selecting the factor of the diffusion term. Due to numerical errors and the diffusion term, the boundary will drift during the reinitialization process. To keep the boundary from moving, a Dirichlet boundary condition is enforced. Within the framework of the FEM, this enforcement can be conveniently performed with the Lagrange multiplier method or the penalty method.
Velocity extension is discussed in this thesis. A natural extension method and a partial differential equation (PDE)-based extension method are introduced. Some related topics, such as the "ersatz" material approach and the recovery of stresses, are discussed as well.
Xing, Xianghua.
Adviser: Michael Yu Wang.
Source: Dissertation Abstracts International, Volume: 71-01, Section: B, page: 0628.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 102-113).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. Ann Arbor, MI : ProQuest Information and Learning Company, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts in English and Chinese.
HUANG, MING-CHONG, and 黃銘崇. "Location-search-based solution methods for the set covering problem". Thesis, 1991. http://ndltd.ncl.edu.tw/handle/99011874033646558249.
Pełny tekst źródłaZhao, Kaiqiong. "Gene-pair based statistical methods for testing gene set enrichment in microarray gene expression studies". 2016. http://hdl.handle.net/1993/31796.
Pełny tekst źródła
October 2016
Tang, Yu-Chuan, and 湯育全. "Methods based on distance statistics for detection of differentially expressed genes and gene set enrichment analysis". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/4bnub4.
Pełny tekst źródła
National Taiwan University
Graduate Institute of Agronomy
107
The first part of this thesis studies the effectiveness of differentially expressed gene analysis. Statistical methods such as the t-test or SAM treat each gene as independent and separately identify whether it is a differentially expressed gene. However, the results of such tests may be biased because of the correlation between genes. Therefore, a novel statistic called the OR value has recently been proposed for identifying differentially expressed genes. The advantages of the OR value are that it requires no model assumptions and no parameter estimation, and that the Euclidean distance is used to account for the correlation between genes and the dispersion of the data. In this thesis, the multivariate normal distribution, the multivariate t distribution, and a mixture distribution are used to simulate gene expression data; the OR value is then used to identify whether a gene is differentially expressed, and the results are compared with the commonly used t-test and other non-OR methods. The results show that the weighted quantile difference method using the OR value performs well in all cases, especially under the multivariate t distribution with a high correlation coefficient and under the mixture distribution with a shift amount greater than 0. The second aim of this thesis is gene set analysis (GSA) under the self-contained hypothesis. The GSA method is adjusted using the statistics from the first part and is compared with commonly used gene set analysis methods. The results show that only under the multivariate t distribution do the distance-based methods, such as the sum of quantile differences, the sum of weighted quantile differences and the energy test, perform better than the other methods; under the other conditions, no method clearly outperforms the others. Finally, we applied the OR-based method and competing methods to a large-scale dataset from a group of breast cancer patients to perform differentially expressed gene and gene set analyses.
In summary, the OR value is a worthwhile method for performing differentially expressed gene analysis, but a more robust statistic may be needed to extend the analysis to the gene-set level.
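The energy test mentioned above compares two groups through pairwise Euclidean distances between expression profiles. A minimal sketch of the two-sample energy statistic (the sample sizes, dimensions and distributions below are invented for the example):

```python
import numpy as np

def energy_distance(x, y):
    """Two-sample energy statistic 2*E|X-Y| - E|X-X'| - E|Y-Y'| using
    pairwise Euclidean distances; larger values indicate a stronger
    distributional difference between the two groups."""
    def mean_dist(a, b):
        return float(np.mean(np.linalg.norm(a[:, None, :] - b[None, :, :],
                                            axis=-1)))
    return 2.0 * mean_dist(x, y) - mean_dist(x, x) - mean_dist(y, y)

rng = np.random.default_rng(2)
# 50 samples of a 10-gene set per group, invented for the example
null_stat = energy_distance(rng.normal(0.0, 1.0, (50, 10)),
                            rng.normal(0.0, 1.0, (50, 10)))
shift_stat = energy_distance(rng.normal(0.0, 1.0, (50, 10)),
                             rng.normal(1.0, 1.0, (50, 10)))
```

In a self-contained gene set test, significance would be calibrated by permuting group labels and recomputing the statistic; only the statistic itself is sketched here.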
Lin, Yi, and 林以. "Enhancing Decision Prediction Based on Rough Set Method". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/63553732729976697390.
Pełny tekst źródła
Southern Taiwan University of Science and Technology
Department of Information Management
97
Data mining is a fast-developing tool for extracting practical information from abundant data and building association rules for decision reference or classification prediction. Nevertheless, the number of association rules derived from data mining is usually enormous, and it is difficult to find the essential ones. Rough set theory is a mathematical instrument in data mining, usually used to extract vital knowledge from abundant data. It can also eliminate redundant information and establish effective association rules in an information table. Rough set theory is regularly applied to discrete data; however, a huge amount of information in practical applications consists of numeric data. Thus, the aim of this research is to find important rules and enhance decision prediction for numeric data via rough set theory. This research uses rough set theory to remove unnecessary data and to search for reducts containing complete information in an information table constructed from numeric data. Then, two key approaches to enhance decision prediction are proposed for the reduct rules: (1) setting rule weights for rules derived from reducts and discovering the critical decision rules on the basis of rule weight; (2) placing a threshold value on the decision rules of a reduct and exploring decision rules which pass the threshold tests under the original decision table and most of its subtables (so-called "stable" decision rules). Experiments are implemented in Matlab and compared with the well-known rough set analysis tool, the Rough Set Exploration System (RSES). The results show that the approaches proposed by this research can find important rules and enhance decision prediction.
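A reduct, as used above, is a minimal attribute subset that preserves the decision consistency of the information table. A brute-force sketch for a small discrete table (the table itself is invented; real rough set tools use far more efficient discernibility-matrix algorithms):

```python
from itertools import combinations

def blocks(rows, attrs):
    """Indiscernibility classes: group objects equal on the given attributes."""
    groups = {}
    for i, row in enumerate(rows):
        groups.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return groups.values()

def is_consistent(rows, decisions, attrs):
    """True if every indiscernibility class has a single decision value."""
    return all(len({decisions[i] for i in block}) == 1
               for block in blocks(rows, attrs))

def find_reducts(rows, decisions):
    """All minimal attribute subsets that keep the decision table
    consistent (brute force; fine for small tables)."""
    n_attr = len(rows[0])
    reducts = []
    for k in range(1, n_attr + 1):
        for attrs in combinations(range(n_attr), k):
            if any(set(r) <= set(attrs) for r in reducts):
                continue  # a subset is already a reduct: not minimal
            if is_consistent(rows, decisions, attrs):
                reducts.append(attrs)
    return reducts

# toy discrete decision table: 3 condition attributes, binary decision
rows = [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 0)]
decisions = [0, 0, 1, 1]
reducts = find_reducts(rows, decisions)  # attributes 0 and 2 each suffice
```

Each reduct then generates decision rules, to which the rule-weight and threshold schemes of the thesis would be applied.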