
Dissertations / Theses on the topic 'Dynamic design of experiments'


Consult the top 50 dissertations / theses for your research on the topic 'Dynamic design of experiments.'


1

Schillinger, Mark [Verfasser], and Oliver [Gutachter] Nelles. "Safe and dynamic design of experiments / Mark Schillinger ; Gutachter: Oliver Nelles." Siegen : Universitätsbibliothek der Universität Siegen, 2019. http://d-nb.info/1203374852/34.

Full text
2

Saied, Hussein. "On control of parallel robots for high dynamic performances : from design to experiments." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS110.

Full text
Abstract:
Parallel Kinematic Manipulators (PKMs) have gained increased popularity in the last few decades. This interest has been stimulated by the significant advantages of PKMs compared to their serial counterparts, such as better precision and higher acceleration capabilities. Efficient and performant control algorithms play a crucial role in improving the overall performance of PKMs. Control of PKMs is often considered in the literature a challenging task due to their highly nonlinear dynamics, abundant uncertainties, parameter variations, and actuation redundancy. In this thesis, we aim at improving the dynamic performance of PKMs in terms of precision and robustness towards changes of operating conditions. Thus, we propose robust control strategies being extensions of (i) the standard Robust Integral of the Sign of the Error (RISE) feedback control and (ii) the super-twisting Sliding Mode Control (SMC). Moreover, an actuator and friction dynamics formulation is proposed within a model-based control strategy to compensate for their resulting errors. Lyapunov-based stability analysis is established for all the proposed controllers, verifying the asymptotic convergence of the tracking errors. In order to validate the proposed controllers, real-time experiments are conducted on several parallel robot prototypes: the 3-DOF Delta robot at EPFL, Switzerland, the 4-DOF VELOCE robot, and the 5-DOF SPIDER4 robot at LIRMM, France. Several experiments are tested, including nominal scenarios, robustness towards speed variation, and robustness towards payload changes. The relevance of the proposed control schemes is proved through the improvement of the tracking errors at different dynamic operating conditions.
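For orientation, the second control family mentioned in this abstract, super-twisting sliding mode control, is usually written as the two-term law sketched below; this is a generic textbook form with an assumed sliding variable s and gains k_1, k_2, not the thesis's own formulation:

    u(t) = -k_1 |s(t)|^{1/2} \operatorname{sign}(s(t)) + v(t), \qquad \dot{v}(t) = -k_2 \operatorname{sign}(s(t)),

with k_1, k_2 > 0 chosen against the assumed bounds on the lumped disturbance. The RISE feedback mentioned alongside it likewise relies on an integral of the sign of the tracking error to reject bounded disturbances while keeping a continuous control signal.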
3

Galvanin, Federico. "Optimal model-based design of experiments in dynamic systems: novel techniques and unconventional applications." Doctoral thesis, Università degli studi di Padova, 2010. http://hdl.handle.net/11577/3427095.

Full text
Abstract:
Model-based design of experiments (MBDoE) techniques are a very useful tool for the rapid assessment and development of dynamic deterministic models, providing a significant support to the model identification task on a broad range of process engineering applications. These techniques allow to maximise the information content of an experimental trial by acting on the settings of an experiment in terms of initial conditions, profiles of the manipulated inputs and number and time location of the output measurements. Despite their popularity, standard MBDoE techniques are still affected by some limitations. In fact, when a set of constraints is imposed on the system inputs or outputs, factors like uncertainty on prior parameter estimation and structural system/model mismatch may lead the design procedure to plan experiments that turn out, in practice, to be suboptimal (i.e. scarcely informative) and/or unfeasible (i.e. violating the constraints imposed on the system). Additionally, standard MBDoE techniques have been originally developed considering a discrete acquisition of the information. Therefore, they do not consider the possibility that the information on the system itself could be acquired very frequently if there was the possibility to record the system responses in a continuous manner. In this Dissertation three novel MBDoE methodologies are proposed to address the above issues. First, a strategy for the online model-based redesign of experiments is developed, where the manipulated inputs are updated while an experiment is still running. Thanks to intermediate parameter estimations, the information is exploited as soon as it is generated from an experiment, with great benefit in terms of precision and accuracy of the final parameter estimate and of experimental time. Secondly, a general methodology is proposed to formulate and solve the experiment design problem by explicitly taking into account the presence of parametric uncertainty, so as to ensure by design both feasibility and optimality of an experiment. A prediction of the system responses for the given parameter distribution is used to evaluate and update suitable backoffs from the nominal constraints, which are used in the design session in order to keep the system within a feasible region with specified probability. Finally, a design criterion particularly suitable for systems where continuous measurements are available is proposed in order to optimise the information dynamics of the experiments since the very beginning of the trial. This approach allows tailoring the design procedure to the specificity of the measurement system. A further contribution of this Dissertation is aimed at assessing the general applicability of both standard and advanced MBDoE techniques to the biomedical area, where unconventional experiment design applications are faced. In particular, two identification problems are considered: one related to the optimal drug administration in cancer chemotherapy, and one related to glucose homeostasis models for subjects affected by type 1 diabetes mellitus (T1DM). Particular attention is drawn to the optimal design of clinical tests for the parametric identification of detailed physiological models of T1DM. In this latter case, advanced MBDoE techniques are used to ensure a safe and optimally informative clinical test for model identification. 
The practicability and effectiveness of a complex approach taking simultaneously into account the redesign-based and the backoff-based MBDoE strategies are also shown. The proposed experiment design procedure provides alternative test protocols that are sufficiently short and easy to carry out, and allow for a precise, accurate and safe estimation of the model parameters defining the metabolic portrait of a diabetic subject.
Modern optimal model-based design of experiments (MBDoE) techniques have proved useful and effective for developing and refining dynamic deterministic mathematical models. These techniques make it possible to maximise the information content of an identification experiment by determining the most appropriate experimental conditions to adopt, so that the model parameters can be estimated as quickly and efficiently as possible. MBDoE techniques have been applied successfully in a variety of industrial applications. Nevertheless, in their standard formulation they suffer from some limitations. In fact, when constraints are imposed on the inputs that the experimenter can manipulate or on the system responses, the uncertainty in the preliminary information the experimenter has about the physical system (in terms of model structure and precision of the parameter estimates) can deeply affect the effectiveness of the experiment design procedure. As a consequence, the designed experiment may turn out to be scarcely informative, and thus inadequate for estimating the model parameters in a statistically precise and accurate way, or it may even lead to a violation of the constraints imposed on the system under investigation. Furthermore, standard MBDoE techniques do not take into account, in the very formulation of the design problem, the specificity and characteristics of the measurement system in terms of the frequency, precision and accuracy with which measurements are available. The research described in this Dissertation develops advanced experiment design methodologies with the aim of overcoming these limitations. In particular, three new techniques for the optimal model-based design of dynamic experiments are proposed: 1. an online model-based redesign of experiments technique (OMBRE), which allows an experiment to be redesigned while it is still running; 2. a technique based on the concept of backoff from the constraints, to handle parametric and structural model uncertainty; 3. a design technique that optimises the dynamic information of an experiment (DMBDoE, dynamic model-based design of experiments) so as to take the specificity of the available measurement system into account. The standard MBDoE procedure is sequential and is organised in three successive stages. In the first stage the experiment is designed using the preliminary information available in terms of model structure and preliminary parameter estimates; the outcome is a set of optimal profiles of the manipulated variables (inputs) and the optimal allocation of the sampling times of the measurements (outputs). In the second stage the experiment is actually carried out, applying the designed experimental conditions and collecting the measurements as planned. In the third stage, the measurements are used to estimate the model parameters. Following this procedure, the information obtained from the experiment is exploited only once the experiment itself has been completed. The proposed OMBRE technique, by contrast, allows the experiment to be redesigned, and hence the manipulated input profiles to be updated over time, while the experiment is still running, by performing intermediate parameter estimations.
In this way the information is exploited progressively as the experiment proceeds. The advantages of this technique are manifold. First of all, the design procedure becomes less sensitive than the standard procedure to the quality of the preliminary parameter estimates. Secondly, it yields a statistically more satisfactory parameter estimation, thanks to the possibility of exploiting the information generated by the experiment in a progressive way. Moreover, the OMBRE technique reduces the size of the optimisation problem, with great benefit in terms of computational robustness. In some applications, it is critically important to guarantee the feasibility of the experiment, that is, compliance with the constraints imposed on the system. The Dissertation proposes and illustrates a new experiment design procedure based on the concept of backoff from the constraints, in which the effect of uncertainty in the parameter estimates and/or the structural inadequacy of the model is included in the formulation of the constraint equations by means of a stochastic simulation. This approach shrinks the space available for the design of the experiment in such a way as to ensure that the design conditions guarantee not only the identification of the model parameters, but also the feasibility of the experiment in the presence of structural and/or parametric model uncertainty. In standard design techniques, the formulation of the optimisation problem assumes that the measurements are acquired in a discrete manner, with a certain time interval between successive measurements; consequently, the information expected from the experiment is computed and maximised during the design through a discrete measure of the Fisher information. In practice, however, continuous measurement systems would make it possible to follow the process dynamics through very frequent measurements. For this reason a new design criterion (DMBDoE) is proposed, in which the information expected from the experiment is optimised in a continuous manner. The new approach generalises the design formulation by including the characteristics of the measurement system (in terms of sampling frequency, accuracy and precision of the measurements) in the optimisation problem itself. A further contribution of the research presented in this Dissertation is the extension of standard and advanced MBDoE techniques to the biomedical field. Physiological systems are characterised by high complexity and often by poor controllability and poor observability, which makes the parametric identification of detailed physiological models a particularly long and complex procedure. The research activity considered two main problems concerning the parametric identification of physiological models: the first related to a model for the optimal administration of chemotherapeutic agents for cancer treatment, the second related to complex models of glucose homeostasis for subjects affected by type 1 diabetes mellitus. In this latter case, which receives particular attention, the main objective is to identify the set of individual parameters of the diabetic subject.
This makes it possible to draw the subject's metabolic portrait, thus providing valuable support whenever the model is to be used to develop and verify advanced algorithms for the control of type 1 diabetes. Standard clinical tests exist in the literature and in medical practice, such as the oral glucose tolerance test and the post-prandial glucose load test, for the diagnosis of diabetes and the identification of glucose homeostasis models. These tests are sufficiently short and safe for the diabetic subject, but they may prove scarcely informative when the objective is to identify the parameters of complex diabetes models: the excitation provided to the subject-system during such tests, in terms of insulin infusion and glucose administration, may be insufficient to estimate the model parameters in a statistically satisfactory way. This Dissertation proposes the use of standard and advanced MBDoE techniques to design clinical tests that allow the set of parameters characterising a diabetic subject to be identified as quickly and efficiently as possible, while respecting during the test the constraints imposed on the subject's glycaemic level. Starting from the standard tests for the identification of physiological diabetes models, it is thus possible to determine modified clinical protocols capable of guaranteeing highly informative, safe, minimally invasive and sufficiently short clinical tests. In particular, it is shown that a suitably modified oral test is highly informative for identification, safe for the patient and easy for the clinician to implement. Furthermore, it is shown that the integration of advanced design techniques (such as OMBRE and backoff-based techniques) can guarantee highly significant and safe clinical tests even in the presence of structural, as well as parametric, model uncertainty. Finally, it is shown that, when very frequent glycaemia measurements are available, optimising through DMBDoE techniques the dynamic information progressively acquired by the measurement system during the test allows clinical protocols to be developed that are highly informative but of shorter duration, thus minimising the stress on the diabetic subject. The structure of the Dissertation is as follows. The first Chapter illustrates the state of the art of current optimal experiment design techniques, analysing their limitations and identifying the objectives of the research. The second Chapter contains the mathematical treatment needed to understand the standard experiment design procedure. The third Chapter presents the new OMBRE technique for the online redesign of dynamic experiments; the technique is applied to two case studies, concerning a biomass fermentation process in a fed-batch reactor and a process for the production of urethane. The fourth Chapter proposes and illustrates the backoff-based method for handling the effect of parametric and structural uncertainty within the design problem formulation; the effectiveness of the method is verified on two biomedical case studies, the first concerning the optimisation of insulin infusion for the identification of a detailed model of type 1 diabetes mellitus, the second the optimal administration of chemotherapeutic agents for cancer treatment.
The fifth Chapter is entirely devoted to the optimal design of clinical tests for the identification of a complex physiological model of type 1 diabetes mellitus; the modified clinical protocols are designed by adopting MBDoE techniques in the presence of significant parametric mismatch between the model and the diabetic subject. The sixth Chapter addresses the design of clinical tests assuming both parametric and structural model uncertainty. The seventh Chapter proposes a new design criterion (DMBDoE) that optimises the dynamic information obtainable from an experiment; the technique is applied to a complex model of type 1 diabetes mellitus and to a biomass fermentation process in a fed-batch reactor. Conclusions and possible future developments are described in the closing section of the Dissertation.
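For readers unfamiliar with the machinery behind these techniques, a conventional MBDoE formulation maximises a scalar measure of the expected Fisher information over the experiment design vector; a generic LaTeX sketch follows (the symbols below are standard assumptions, not the notation used in the thesis):

    H(\hat{\theta}, \varphi) = H_0 + \sum_{i=1}^{N} Q_i^{\mathrm{T}} \Sigma_y^{-1} Q_i, \qquad [Q_i]_{jk} = \left.\frac{\partial \hat{y}_j(t_i)}{\partial \theta_k}\right|_{\hat{\theta}}, \qquad \varphi^{*} = \arg\max_{\varphi} \, \psi\!\big( H(\hat{\theta}, \varphi) \big),

where \varphi collects the initial conditions, input profiles and sampling times, \Sigma_y is the measurement covariance, H_0 the prior information, and \psi a design metric such as the determinant (D-optimality). The online redesign, backoff and continuous-information ideas described above all modify how and when this objective is evaluated.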
4

DiPietro, Anthony Louis. "Design and experimental evaluation of a dynamic thermal distortion generator for turbomachinery research." Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-09292009-020206/.

Full text
5

Paz, Sandro. "Antiviral Resistance and Dynamic Treatment and Chemoprophylaxis of Pandemic Influenza." Scholar Commons, 2014. https://scholarcommons.usf.edu/etd/5097.

Full text
Abstract:
Public health data show the tremendous economic and societal impact of pandemic influenza in the past. Currently, the welfare of society is threatened by the lack of planning to ensure an adequate response to a pandemic. This preparation is difficult because the characteristics of the virus that would cause the pandemic are unknown, but primarily because the response requires tools to support decision-making based on scientific methods. The response to the next pandemic influenza will likely include extensive use of antiviral drugs, which will create an unprecedented selective pressure for the emergence of antiviral-resistant strains. Nevertheless, the literature still lacks comprehensive models to simulate the spread and mitigation of pandemic influenza, including infection by an antiviral-resistant strain. We build a large-scale simulation-optimization framework for the development of dynamic antiviral strategies, including treatment of symptomatic cases and chemoprophylaxis of pre- and post-exposure cases. The model considers an oseltamivir-sensitive strain and a resistant strain with a low/high fitness cost, induced by the use of the antiviral measures. The mitigation strategies incorporate age/immunity-based risk groups for treatment and pre-/post-exposure chemoprophylaxis, and the duration of pre-exposure chemoprophylaxis. The model is tested on a hypothetical region in Florida, U.S., involving more than one million people. The analysis is conducted under different virus transmissibility and severity scenarios, varying intensities of non-pharmaceutical interventions, and varying levels of antiviral stockpile availability. The model is intended to support pandemic preparedness and response policy making.
6

Simms, Christine. "Process optimisation using design experiments and some of the principles of Taguchi : resolving multi-criteria conflicts within parameter design in static and dynamic processes." Thesis, University of Ulster, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.247675.

Full text
7

Burke, Richard D. "Investigation into the interactions between thermal management, lubrication and control systems of a diesel engine." Thesis, University of Bath, 2011. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.545325.

Full text
Abstract:
Engine thermal and lubricant systems have only recently become a serious focus in engine design and in general remain under passive control. The introduction of active control has shown benefits in fuel consumption during the engine warm-up period; however, there is a lack of rigorous calibration of these devices in conjunction with other engine systems. For these systems, benefits in fuel consumption (FC) are small and accurate measurement systems are required. Analysis of both FC and NOx emissions measurement processes was conducted and showed typical errors of 1% in FC from thermal expansion and 2% in NOx per g/kg change in absolute humidity. Correction factors were derived both empirically and from first principles to account for these disturbances. These improvements are applicable to the majority of experimental facilities and will be essential as future engine developments are expected to be achieved through small incremental steps. Using prototype hardware installed on a production 2.4L Diesel engine, methodologies for optimising the design, control and integration of these systems were demonstrated. Design of experiments (DoE) based approaches were used to model the engine behaviour under transient conditions. A subsequent optimisation procedure demonstrated a 3.2% reduction in FC during warm-up from 25°C under iso-NOx conditions. This complemented a 4% reduction from reduced oil pumping work using a variable displacement pump. A combination of classical DoE and transient testing allowed the dynamic behaviour of the engine to be captured empirically when prototype hardware is available. Furthermore, the enhancement of dynamic DoE approaches to include the thermal condition of the engine can produce models that, when combined with other available simulation packages, offer a tool for design optimisation when hardware is not available. These modelling approaches are applicable to a wide range of problems to evaluate design considerations at different stages of the engine development process. They allow the transient thermal behaviour of the engine to be captured, significantly enhancing conventional model-based calibration approaches.
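To make the quoted humidity sensitivity concrete, here is a minimal Python sketch of a linear correction of roughly 2% in NOx per g/kg of absolute humidity about a reference value; the reference humidity, the exact functional form and the function name are illustrative assumptions, not the correction factors actually derived in the thesis.

    def correct_nox(nox_measured_ppm, humidity_g_per_kg,
                    ref_humidity_g_per_kg=10.7, sensitivity_per_g_kg=0.02):
        """Refer a raw NOx reading back to a reference absolute humidity.

        Assumes a linear sensitivity of ~2 % NOx per g/kg of absolute humidity,
        as quoted in the abstract; the reference humidity and the linear form
        are illustrative assumptions only.
        """
        deviation = humidity_g_per_kg - ref_humidity_g_per_kg
        # A reading taken in moister air (where less NOx forms) is corrected
        # upward toward the reference condition, and vice versa.
        return nox_measured_ppm / (1.0 - sensitivity_per_g_kg * deviation)

    # Example: a 250 ppm reading taken at 14 g/kg referred back to 10.7 g/kg
    print(round(correct_nox(250.0, 14.0), 1))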
8

Coleman, Mathew Riley. "Design and Characterization of a Coaxial Plasma Railgun for Jet Collision Experiments." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/102740.

Full text
Abstract:
Plasma railguns are electromagnetic accelerators used to produce controlled high velocity plasma jets. This thesis discusses the design and characterization of a small coaxial plasma railgun intended to accelerate argon-helium plasma jets. The railgun will be used for the study of plasma shocks in jet collisions. The railgun is mounted on a KF-40 vacuum port and operated using a 90 kA, 11 kV LC pulse forming network. Existing knowledge of coaxial railgun plasma instabilities and material interactions at vacuum and plasma interfaces is applied to the design. The design of individual gun components is detailed. Jet velocity and density are characterized by analyzing diagnostic data collected from a Rogowski coil, interferometer, and photodiode. Peak line-integrated electron number densities of approximately 8 × 10^15 cm^-2 and jet velocities of tens of km/s are inferred from the data recorded from ten experimental pulses.
Master of Science
Plasma is a gaseous state of matter which is electrically conductive and interacts with electric and magnetic fields. Plasmas are used in many everyday objects such as fluorescent lights, but some of the physics of plasmas are still not entirely understood. One set of plasma interactions that have not been fully explored are those which occur during high-velocity collisions between plasmas. Experiments aimed to further the understanding of these interactions require the generation of plasmas with specified properties at very high velocities. A device known as a plasma railgun can be used to produce plasmas which meet these experimental demands. In a plasma railgun, a short pulse of current is passed through a plasma located between two parallel electrodes, or "rails". This current generates a magnetic field which propels the plasma forward. The plasma is accelerated until it leaves the muzzle of the railgun. In coaxial plasma railguns, the electrodes are concentric. This paper discusses the design and testing of a small, relatively low power coaxial plasma railgun. Specific elements of the design are examined and the inherent physical and material difficulties of a coaxial design are explored. The experiment which was performed to confirm the properties of the plasma jets produced by the coaxial plasma railgun is explained. The results of this experiment confirm that the design succeeds in producing plasmas which meet targets for plasma properties.
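For a rough sense of the electromagnetic drive in such a device, the standard railgun force relation can be used; the inductance gradient L' of this particular gun is not stated in the abstract, so the number below is purely illustrative:

    F = \tfrac{1}{2} L' I^{2},

so at the quoted 90 kA peak current, every 1 µH/m of inductance gradient contributes roughly 0.5 × 10^{-6} × (9 × 10^{4})^{2} ≈ 4 kN of accelerating force on the plasma armature.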
9

Graf, Stefan Wilhelm [Verfasser], André [Akademischer Betreuer] Bardow, and Dieter [Akademischer Betreuer] Bathen. "A design approach for adsorption energy systems integrating dynamic modeling with small-scale experiments / Stefan Wilhelm Graf ; André Bardow, Dieter Bathen." Aachen : Universitätsbibliothek der RWTH Aachen, 2018. http://d-nb.info/1186900172/34.

Full text
10

Nygren, Kip P. "An investigation of helicopter higher harmonic control using a dynamic system coupler simulation." Diss., Georgia Institute of Technology, 1986. http://hdl.handle.net/1853/12082.

Full text
11

Bae, Suk Joo. "Analysis of dynamic robust design experiment and modeling approach for degradation testing." Diss., Available online, Georgia Institute of Technology, 2004:, 2003. http://etd.gatech.edu/theses/available/etd-04052004-180010/unrestricted/bae%5Fsuk%5Fj%5F2003%5Fphd.pdf.

Full text
12

Childers, Adam Fletcher. "Parameter Identification and the Design of Experiments for Continuous Non-Linear Dynamical Systems." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/28236.

Full text
Abstract:
Mathematical models are useful for simulation, design, analysis, control, and optimization of complex systems. One important step necessary to create an effective model is designing an experiment from which the unknown model parameters can be accurately identified and then verified. The strategy with which one approaches this problem depends on the amount of data that can be collected and the assumptions made about the behavior of the error in the statistical model. In this presentation we describe how to approach this problem using a combination of statistical and mathematical theory with reliable computation. More specifically, we present a new approach to bounded error parameter validation that approximates the membership set by solving an inverse problem rather than using the standard forward interval analysis methods. For our method we provide theoretical justification, apply this technique to several examples, and describe how it relates to designing experiments. We also address how to define infinite dimensional designs that can be used to create designs of any finite dimension. In general, finding a good design for an experiment requires a careful investigation of all available information, and we provide an effective approach to the problem.
Ph. D.
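To fix ideas, here is a naive Python sketch of a bounded-error membership set: candidate parameters are kept only if the model response stays within the error bound at every observation. This forward-screening toy (exponential-decay model, uniform candidate sampling, bound eps) only illustrates the set being approximated; the dissertation's contribution is an inverse-problem formulation rather than this kind of sampling.

    import numpy as np

    def membership_set(model, theta_samples, t_obs, y_obs, eps):
        """Keep parameter vectors whose predictions stay within +/- eps of the data.

        model(theta, t) -> predicted observations at times t (placeholder interface).
        """
        keep = []
        for theta in theta_samples:
            if np.all(np.abs(model(theta, t_obs) - y_obs) <= eps):
                keep.append(theta)
        return np.array(keep)

    # Toy example: exponential decay y = exp(-k t) with a +/- 0.05 error bound
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2.0, 10)
    y = np.exp(-0.8 * t) + rng.uniform(-0.03, 0.03, t.size)
    candidates = rng.uniform(0.1, 2.0, size=(5000, 1))   # sampled decay rates k
    kept = membership_set(lambda th, tt: np.exp(-th[0] * tt), candidates, t, y, eps=0.05)
    print(kept.min(), kept.max())                        # crude bounds on the feasible k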
13

Cardona, Cardona Luis Andrés. "Dynamic partial reconfiguration in fpgas for the design and evaluation of critical systems." Doctoral thesis, Universitat Autònoma de Barcelona, 2016. http://hdl.handle.net/10803/386416.

Full text
Abstract:
Field Programmable Gate Array (FPGA) devices persist as fundamental components in the design and evaluation of electronic systems. They are continuously reported as final implementation platforms rather than only prototype elements. The inherent reconfigurable characteristics that FPGAs offer are one of the most important advantages in the actual hardware implementation and redesign of systems. In the case of Xilinx SRAM-based FPGAs, they support Dynamic Partial Reconfiguration (DPR) by means of the Internal Configuration Access Port (ICAP). This hardwired element allows the configuration memory to be accessed at run time. DPR can then be used to change specific parts of the system while the rest continues to operate with no effect on its computations. Therefore the architecture of the system can be modified at the level of basic logic components such as Look-Up-Tables (LUTs), or bigger blocks such as IP cores, and in this way more flexible systems can be designed. It is a great advantage especially in critical and aerospace applications, where access to the system to re-design the hardware is not a trivial task. On the other hand, the main problem these FPGAs present when used for critical applications is their sensitivity to Single Event Upset (SEU) and Multi-bit Upset (MBU) in the configuration memory. It is a limiting factor that must be considered to avoid misbehavior of the implemented hardware. This thesis is focused on using DPR as a mechanism to: i) improve hardware flexibility, ii) emulate faults on ASIC designs mapped in FPGAs and iii) improve tolerance to accumulated or multiple faults in the configuration memory of Triple Modular Redundancy (TMR) circuits. This work addresses the three challenges considering, as one of the most relevant figures of merit, the speed at which the tasks can be performed; the speed-up of DPR-related tasks is therefore one of the main objectives. In the first place we developed a new high speed ICAP controller, named AC_ICAP, completely implemented in hardware. In addition to similar solutions to accelerate the management of partial bitstreams and frames, AC_ICAP also supports DPR of LUTs without requiring pre-computed partial bitstreams. This last characteristic was made possible by performing reverse engineering on the bitstream. This allows DPR of single LUTs in Virtex-5 devices to be performed in less than 5 μs, which implies a speed-up of more than 380x compared to the Xilinx XPS_HWICAP controller. In the second place, the fine-grain DPR obtained with the AC_ICAP is used in the emulation of faults to test ASIC circuits implemented in FPGAs. It is achieved by designing a CAD flow that includes a custom technology mapping of the ASIC net-list to a LUT-level FPGA net-list, the creation of fault dictionaries and the extraction of test patterns. A hardware platform takes the fault list and leverages the partial reconfiguration capabilities of the FPGA for fault injection followed by application of test patterns for fault analysis purposes. Finally, we use DPR to improve the fault tolerance of TMR circuits implemented in SRAM-based FPGAs. In these devices the accumulation of faults in the configuration memory can cause the TMR replicas to fail. Therefore fast detection and correction of faults without stopping the system is a required constraint when these FPGAs are used in the implementation of critical systems.
This is carried out by inserting flag error detectors based on XNOR and carry-chain components, isolating and constraining the three domains to known areas, and extracting partial bitstreams for each domain. The latter are used to correct faults when the flags are activated.
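To fix ideas about the voting-plus-flag arrangement described above, here is a small behavioural Python sketch of a TMR majority vote with per-domain disagreement flags; the real design is implemented in FPGA logic (XNOR gates feeding carry chains), so this is only an illustration of the logic, with made-up names.

    def tmr_vote(a: int, b: int, c: int):
        """Bitwise majority vote of three replica outputs plus disagreement flags.

        Returns (voted_value, flags) where flags[i] is True if replica i
        disagrees with the voted value, signalling that its area of the
        configuration memory should be repaired (e.g. by partial reconfiguration).
        """
        voted = (a & b) | (a & c) | (b & c)           # classic 2-of-3 majority, bit by bit
        flags = tuple(x != voted for x in (a, b, c))  # per-domain error indicators
        return voted, flags

    # Example: the second replica has a corrupted bit
    value, flags = tmr_vote(0b1011, 0b1111, 0b1011)
    print(bin(value), flags)   # voted value is 0b1011; only the faulty replica is flagged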
14

Burr, Steven Reed. "The Design and Implementation of the Dynamic Ionosphere CubeSat Experiment (DICE) Science Instruments." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/1770.

Full text
Abstract:
The Dynamic Ionosphere CubeSat Experiment (DICE) is a satellite project funded by the National Science Foundation (NSF) to study the ionosphere, more particularly Storm Enhanced Densities (SED), with a payload consisting of plasma diagnostic instrumentation. The three instruments onboard DICE are an Electric Field Probe (EFP), an Ion Langmuir Probe (ILP), and a Three Axis Magnetometer (TAM). The EFP measures electric fields over a ±8 V range and consists of three channels: a DC to 40 Hz channel, a Floating Potential Probe (FPP), and a spectrographic channel with four bands from 16 Hz to 512 Hz. The ILP measures plasma densities from 1×10^4 cm^-3 to 2×10^7 cm^-3. The TAM measures magnetic field strength with a range of ±0.5 Gauss and a sensitivity of 2 nT. Meeting the mission requirements demanded careful selection of instrument requirements and careful planning of the instrumentation design. The analog design of each instrument is described, in addition to the digital framework required to sample the science data at a 70 Hz rate and prepare the data for the Command and Data Handling (C&DH) system. Calibration results are also presented and show fulfillment of the mission and instrumentation requirements.
15

Ramesh, Periyakulam S. "Experimental design and results of 2D dynamic damping of payload motion for cranes." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-07102009-040346/.

Full text
16

Blanco, Mark Richard. "Design and Qualification of a Boundary-Layer Wind Tunnel for Modern CFD Validation Experiments." Youngstown State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1559237473563483.

Full text
17

Jain, Jhilmil Cross James H. "User experience design and experimental evaluation of extensible and dynamic viewers for data structures." Auburn, Ala., 2007. http://repo.lib.auburn.edu/2006%20Fall/Dissertations/JAIN_JHILMIL_3.pdf.

Full text
18

Whiting, Nicole Lynn. "Design and Validation of a New Experimental Setup for Dynamic Stall and Preliminary ControlResults." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1565823943355036.

Full text
19

Freye, Jeffrey T. "Design of experiment analysis for the Joint Dynamic Allocation of Fires and Sensors (JDAFS) simulation." Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion-image.exe/07Jun%5FFreye.pdf.

Full text
Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, June 2007.
Thesis Advisor(s): Lucas, Thomas W. "June 2007." Description based on title screen as viewed on August 15, 2007. Includes bibliographical references (p. 135-137). Also available in print.
20

Hahn, Casey Bernard. "Design and Validation of the New Jet Facility and Anechoic Chamber." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1311877224.

Full text
21

Karlberg, Victor. "Dynamic analysis of high-rise timber buildings : A factorial experiment." Thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-65559.

Full text
Abstract:
Today high-rise timber buildings are more popular than ever and designers all over the world have discovered the beneficial material properties of timber. In the middle of the 1990s cross-laminated timber (CLT) was developed in Austria. CLT consists of laminated timber panels that are glued together to form a strong and flexible timber element. In recent years CLT has been on the rise and today it is regarded as a good alternative to concrete and steel, particularly in the design of tall buildings. Compared to concrete and steel, timber has lower mass and stiffness. A high-rise building made out of timber is therefore more sensitive to vibration. The vibration of the building can cause the occupants discomfort and it is thus important to thoroughly analyze the building's dynamic response to external excitation. The standard ISO 10137 provides guidelines for the assessment of habitability of buildings with respect to wind-induced vibration. The comfort criterion herein is based on the first natural frequency and the acceleration of the building, along with human perception of vibration. The aim of this thesis is to identify the important structural properties affecting a dynamic analysis of a high-rise timber building. An important consequence of this study is hopefully a better understanding of the interactions between the structural properties in question. To investigate these properties and any potential interactions a so-called factorial experiment is performed. A factorial experiment is an experiment where all factors are varied together, instead of one at a time, which makes it possible to study the effects of the factors as well as any interactions between them. The factors are varied between two levels, that is, a low level and a high level. The design of a factorial experiment includes all combinations of the levels of the factors. The experiment is performed using the software FEM-Design, which is a modeling software for finite element analysis. A fictitious building is modelled using CLT as the structural system. The modeling and the subsequent dynamic analysis are repeated according to the design of the factorial experiment. The experiment is further analyzed using statistical methods and validated according to ISO 10137 in order to study performance and patterns between the different models. The statistical analysis of the experiment shows that the height of the building, the thickness of the walls and the addition of mass are important in a dynamic analysis. It also shows that interaction is present between the height of the building and the thickness of the walls, as well as between the height of the building and the addition of mass. Most of the models of the building do not satisfy the comfort criteria according to ISO 10137. However, the study still shows patterns that provide useful information about the dynamic properties of the building. Lastly, based on the natural frequency of the building, this study recognizes the stiffness as more relevant than the mass for a building with CLT as the structural system and with up to 16 floors in height.
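As an illustration of how main effects and two-factor interactions are estimated in a two-level factorial experiment like the one described above, here is a small Python sketch on made-up response values; the factor names echo the study (height, wall thickness, added mass) but the numbers are invented.

    import itertools
    import numpy as np

    # 2^3 full factorial in coded units (-1 = low level, +1 = high level)
    factors = ["height", "wall_thickness", "added_mass"]
    runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

    # Invented responses (e.g. first natural frequency in Hz) in standard run order
    y = np.array([2.9, 1.7, 3.3, 2.1, 2.7, 1.4, 3.1, 1.9])

    def effect(columns):
        """Average change in the response between the high and low level of the
        (product of) coded column(s): effect = mean(high) - mean(low)."""
        contrast = np.prod(runs[:, columns], axis=1)
        return 2.0 * np.mean(contrast * y)

    for i, name in enumerate(factors):
        print(f"main effect of {name}: {effect([i]):+.3f}")
    print(f"height x wall_thickness interaction: {effect([0, 1]):+.3f}")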
22

Keller, Andrew R. "An experimental analysis of the dynamic failure resistance of TiB₂/Al₂O₃ composites." Thesis, Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/16657.

Full text
23

Opperman, Roedolph A. (Roedolph Adriaan). "Enhanced dynamic load sensor for the International Space Station : design, development, musculoskeletal modeling and experimental evaluation." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122498.

Full text
Abstract:
Thesis: Ph. D. in Aerospace Systems Engineering, Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 163-179).
Prolonged exposure of a vertebrate musculoskeletal system to the microgravity environment of space leads to a reduction in bone mineral density, muscle mass, strength and endurance. Such deconditioning may impede critical astronaut activities and presents an increased injury risk during flight and when exposed to increased gravity like that of Earth or Mars. Exercise countermeasures are used extensively on the International Space Station to mitigate musculoskeletal deconditioning during long duration spaceflight missions. Despite vigorous exercise protocols, bone loss and muscle atrophy are often observed even when countermeasures are in effect. As a first step in understanding the mechanisms of injury and how on-orbit exercise countermeasures compare to those on the ground, an accurate load sensing system is needed to collect ground reaction force data in reduced gravity.
To date, no means of continuous, high resolution biomechanical force data collection and analysis has been realized for on-orbit exercise. Such a capability may advance the efficiency of these systems in mitigating the incidence of bone and muscle loss and injury risk by quantifying loading intensity and distribution during exercise in microgravity, thus allowing for cause-effect tracking of ISS exercise regimes and biomechanics. By measuring these forces and moments on the exercise device and correlating them with the post-flight fitness of crewmembers, the efficacy of various exercise devices may be assessed. More importantly, opportunities for improvement, including optimized loading protocols and lightweight exercise device designs will become apparent.
The overall goal of this research effort is to improve the understanding of astronaut joint loading during resistive exercise in a microgravity environment through the use of rigorous quantitative dynamic analysis, simulation and experimentation. This is accomplished with the development and evaluation of a novel, self-contained load sensing system. The sensor assembly augments existing countermeasures and measures loads imparted by the crew during exercise. Data collected with this system is used to parameterize a unique musculoskeletal model which is then used to evaluate associated joint reaction forces generated during exercise. The effects of varying body posture and load application points on joint loading were investigated and recommendations for enhancing on-orbit exercise protocols that mitigate both injury and deconditioning are discussed.
By validating the sensor and modeling joint loading during on-orbit exercise as described herein, a unique contribution is made in expanding NASA's capability to continuously record and quantify crew loading during exercise on ISS. Data obtained through the system is used to characterize joint loading, inform and optimize exercise protocols to mitigate musculoskeletal deconditioning and may aid in the design of improved, lightweight exercise equipment for use during long-duration spaceflight, including future missions to Mars.
"This research effort was supported by a NASA Phase I Small Business Innovation Research (SBIR) contract awarded to Aurora Flight Sciences Corporation with MIT as subcontractor. The contract period of performance spanned from June 2014 through August 2016. Contract number: 2012-11 NNX14CS55C"--Page 6
by Roedolph Adriaan Opperman.
Ph. D. in Aerospace Systems Engineering
Massachusetts Institute of Technology, Department of Aeronautics and Astronautics
24

Swanson, Erik Evan. "Design and Evaluation of an Automated Experimental Test Rig for Determination of the Dynamic Characteristics of Fluid-Film Bearings." Diss., Virginia Tech, 1998. http://hdl.handle.net/10919/30727.

Full text
Abstract:
Hydrodynamic journal bearings are applied in a wide range of both old and new, advanced rotating machinery designs. To maintain existing machinery, as well as to design new, state of the art machines, validated analytical models for these bearings are needed. This work documents the development and evaluation of an automated test rig for the evaluation of hydrodynamic journal bearings to provide some of the needed experimental data. This work describes the test rig in detail, including the results of experimental characterization of many of the test rig subsystems. Experimental data for a two axial groove bearing and a pressure dam bearing under steady load conditions are presented for a range of loads at two different shaft speeds. Experimental data and analytical results for dynamic loading are also discussed. The work concludes with a summary of the state of the test rig and recommendations for further work.
Ph. D.
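For context, the "dynamic characteristics" of a fluid-film journal bearing are conventionally reported as linearised stiffness and damping coefficients about a static operating point; in standard rotordynamics notation (not the author's own symbols) the oil-film reaction to a small journal perturbation is written as

    \begin{pmatrix} \Delta F_x \\ \Delta F_y \end{pmatrix} = - \begin{pmatrix} K_{xx} & K_{xy} \\ K_{yx} & K_{yy} \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} - \begin{pmatrix} C_{xx} & C_{xy} \\ C_{yx} & C_{yy} \end{pmatrix} \begin{pmatrix} \Delta \dot{x} \\ \Delta \dot{y} \end{pmatrix},

and it is these eight coefficients, as functions of load and speed, that dynamic-loading test rigs of this kind are built to measure.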
25

Sun, Allen Y. "An Experimental Study of the Dynamic Response of Spur Gears Having Tooth Index Errors." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1430749459.

Full text
26

FERREIRA, Fábio Martins Gonçalves. "Otimização de Sistema de Ancoragem equivalente em Profundidade Truncada." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/17553.

Full text
Abstract:
With the depletion of onshore and shallow-water offshore reserves, the industry has been exploring and producing oil in deep and ultra-deep water. However, the hydrodynamic verification of new floating production systems still relies on the established methodologies, especially tests carried out in ocean basin laboratories. Small-scale model tests have been used since the first projects in shallow water and continue today in ultra-deepwater projects. However, tests for depths above 1,500 m require a very high scale factor, which poses several complications, among them the difficulty of accommodating the mooring lines and the uncertainties related to very small models. Among the possible solutions, hybrid (numerical-experimental) testing is the most feasible option for experimental verification in ultra-deep water, especially the passive hybrid test. Such a test is divided into steps, the first of which is responsible for the definition of the truncated system. If this step is not performed satisfactorily, the success of the test may be compromised. Thus, in order to minimize this issue, a systematic way to find equivalent truncated systems, considering static and dynamic effects through the use of optimization tools, is proposed in this doctoral thesis. Accordingly, the approach adopted uses a numerical simulator for static and dynamic analysis of offshore structures, called Dynasim, and a gradient-based optimization algorithm provided by the Dakota computational system. Additionally, the design of experiments methodology is used to identify the factors that influence the static and dynamic responses of the problem, avoiding the use of irrelevant design variables in the optimization process. It should be emphasized that, to our knowledge, this methodology has not been used in other works in the context of truncated mooring systems. Furthermore, the optimal design of the truncated system is analyzed for several environmental conditions, the aim being to verify its agreement with the full-depth mooring system. Due to the high computational cost involved in this verification, high-performance computing with parallel processing is used to perform the analyses. As shown in this work, the proposed methodology eases the search for equivalent truncated mooring systems that preserve the static and dynamic characteristics of the full-depth mooring system. Four case studies are presented and discussed: the first two refer to simplified cases, the third is based on the literature and the fourth on a real scenario. The results in each case show that the equivalent truncated systems found can reproduce the behavior of the full-depth systems under the verified conditions.
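As a hedged illustration of the core idea described above — tuning a truncated system so that its restoring behaviour matches the full-depth one — the following sketch replaces Dynasim and Dakota with a toy static model and a SciPy gradient-based least-squares fit. The full-depth curve, the offset range and the two design variables (a linear and a cubic stiffness coefficient) are invented for illustration only; in the thesis, dynamic equivalence terms (e.g. natural periods) would enter the same residual.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical full-depth static restoring curve F(x), standing in for the
# result of a full-depth Dynasim analysis (illustrative numbers only).
offsets = np.linspace(0.0, 30.0, 31)              # platform offset [m]
F_full = 120.0 * offsets + 0.35 * offsets**3      # "target" restoring force [kN]

def truncated_restoring(params, x):
    """Restoring force of a simplified truncated mooring system."""
    k_lin, k_cub = params
    return k_lin * x + k_cub * x**3

def residuals(params):
    # Static equivalence: match the full-depth restoring curve over the offsets.
    return truncated_restoring(params, offsets) - F_full

# Gradient-based least-squares fit (Dakota would drive Dynasim in the same role).
sol = least_squares(residuals, x0=[50.0, 0.1], bounds=([0.0, 0.0], [np.inf, np.inf]))
k_lin, k_cub = sol.x
print(f"equivalent linear stiffness  : {k_lin:8.2f} kN/m")
print(f"equivalent cubic coefficient : {k_cub:8.4f} kN/m^3")
```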
APA, Harvard, Vancouver, ISO, and other styles
27

Gustafsson, Patrik. "Design of Experimental Setup for Investigation of Effect of Moisture Content on Transformer Paper Ageing during Intermittent Load." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232187.

Full text
Abstract:
In this project an experimental setup is designed to investigate the effect of intermittent load patterns in combination with moisture content on cellulose ageing. This is done by exposing groups of samples to different intermittent load patterns with varying frequency. A literature review is carried out on the transformer insulation system and cellulose degradation. Various technical solutions to different aspects of the experimental design are reviewed. The final experimental setup is explained with the primary focus on the hardware and programming of the controlling system. The controlling system consists of a Data Acquisition (DAQ) system from National Instruments and is programmed in LabVIEW. The controlling system is examined in two investigative tests, where it performed satisfactorily. Three load patterns are developed. This project suggests how to prepare the samples and what direct and indirect tests to apply to the insulation system for future analysis. Over the years, a considerable amount of scientific work has been devoted to understanding paper ageing in order to improve transformer diagnostics and investments for utility owners. However, today's transformer loading guidelines do not take intermittent load in combination with moisture content into account [1][2]. Previous work suggests that the thermal models may be improved by looking into the effects of moisture content [3]. The primary aim of the proposed experimental setup is to investigate whether an intermittent load pattern in combination with moisture content has a considerable detrimental effect on cellulose ageing. The intent is to contribute new knowledge about transformer diagnostics and, in the long term, possibly improve the current thermal models used for Dynamic Transformer Rating (DTR), which do not take this phenomenon into account. This would in particular benefit transformers with typically intermittent load patterns, e.g. a transformer connected to a wind farm or photovoltaic panels. The increasing number of renewable energy installations increases the need to develop the thermal models in the transformer loading guidelines to take unconventional load profiles into account.
I det här projektet utformas ett experiment för att undersöka inverkan av intermittenta lastmönster i kombination med fukthalt på åldrande av cellulosa. Detta görs genom att utsätta provgrupper för olika lastmönster med varierande frekvens. En litteraturgenomgång görs på transformatorisoleringssystem och nedbrytning av cellulosa. Olika tekniska lösningar för olika aspekter av experimentets design ses över. Den slutliga utformningen av experimentet förklaras med fokus på hårdvara och programmering av kontrollsystemet. Kontrollsystemet består av ett system för datainsamling från National Instruments och programmeras i LabVIEW. Kontrollsystemet utvärderas i två undersökande test där det förfor på ett tillfredsställande sätt. Tre lastmönster till experimentet har tagits fram. Det här projektet föreslår hur man förbereder proverna och vilka direkta och indirekta test som kan göras för framtida analyser.Under åren har en betydande mängd vetenskapligt arbete ägnats åt att förstå pappersåldring för att förbättra transformatordiagnostik och investeringsunderlaget för nätägare. De industriella standarderna tar dock inte hänsyn till intermittent belastning i kombination med fukthalt [1] [2]. Tidigare arbete föreslår att de termiska modellerna möjligen kan förbättras genom att undersöka effekterna av fukthalt [3].Det huvudsakliga målet med den föreslagna experimentella uppställningen är att undersöka huruvida ett intermittent belastningsmönster i kombination med fukthalt har en betydande inverkan på pappersåldrandet. Föresatsen är att bidra med ny kunskap om transformatordiagnostik och för att om möjligt långsiktigt förbättra de nuvarande termiska modellerna som används till dynamiska lastbarhetsmodeller för transformatorer. Detta skulle särskilt gynna transformatorer med typiskt intermittenta belastningsmönster, t ex en transformator ansluten till en vindkraftpark eller solcellspaneler. Ökande antal av anläggningar för förnybar energi ökar behovet av att utveckla de termiska modellerna för att ta hänsyn till okonventionella lastprofiler.
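To illustrate why intermittent loading matters for paper ageing, the sketch below evaluates a relative ageing rate of the form commonly used in transformer loading guides (ageing roughly doubling for every 6 K of hot-spot temperature above 98 °C for thermally upgraded paper) over an assumed intermittent hot-spot profile. The profile and constants are illustrative assumptions, not values from this thesis, which focuses on the additional effect of moisture that such models neglect.

```python
import numpy as np

# Illustrative hot-spot temperature profile for an intermittent load pattern
# (one hour per step); values are assumptions, not measurements.
hot_spot_C = np.array([80, 80, 105, 105, 80, 80, 110, 110, 80, 80, 98, 98], dtype=float)

# Relative ageing rate in the form commonly quoted for thermally upgraded
# paper: the rate doubles for every 6 K above a 98 degC hot-spot.
V = 2.0 ** ((hot_spot_C - 98.0) / 6.0)

loss_of_life_hours = V.sum()          # equivalent hours of ageing at 98 degC
print(f"mean relative ageing rate   : {V.mean():.2f}")
print(f"loss of life over {len(V)} h : {loss_of_life_hours:.1f} equivalent hours")
```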
APA, Harvard, Vancouver, ISO, and other styles
28

CAMMARANO, SANDRO. "STATIC AND DYNAMIC ANALYSIS OF HIGH-RISE BUILDINGS." Doctoral thesis, Politecnico di Torino, 2014. http://hdl.handle.net/11583/2565549.

Full text
Abstract:
This thesis is focused on the structural behaviour of high-rise buildings subjected to transversal loads expressed in terms of shears and torsional moments. As horizontal reinforcement, the resistant skeleton of the construction can be composed of different vertical bracings, such as shear walls, braced frames and thin-walled open-section profiles, having constant or variable geometrical properties along the height. In this way, most of the traditional structural schemes can be modelled, from moment-resisting frames up to outrigger and tubular systems. In particular, an entire chapter is devoted to the case of thin-walled open-section shear walls, which exhibit a coupled flexural-torsional behaviour, as described by Vlasov's theory of sectorial areas. From the analytical point of view, the three-dimensional formulation proposed by Al. Carpinteri and An. Carpinteri (1985) is considered and extended in order to perform dynamic analyses and encompass innovative structural solutions which can twist and taper from the bottom to the top of the building. Such an approach is based on the hypothesis of in-plane infinitely rigid floors, which ensure the connection between the vertical bracings and, consequently, reduce the number of degrees of freedom to only three for each level. By means of it, relevant design information such as the floor displacements, the distribution of the external load between the structural components, the internal actions, the free vibrations and the mode shapes can be quickly obtained. The clarity and conciseness of the matrix formulation allow a simple computer program to be devised which, starting from basic information such as the building geometry, the number and type of vertical stiffeners, the material properties and the intensity of the external forces, provides essential results for preliminary designs.
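The "three degrees of freedom per level" matrix formulation mentioned above maps naturally onto a short eigenvalue computation. The sketch below assembles lumped mass and shear-type lateral-torsional stiffness matrices for a three-storey toy building with assumed properties and extracts its natural frequencies; it is a minimal stand-in for the kind of simple computer program the abstract describes, not the Carpinteri formulation itself.

```python
import numpy as np
from scipy.linalg import eigh

n = 3                                  # storeys; DOFs per floor: u_x, u_y, theta_z
m, J = 4.0e5, 6.0e7                    # floor mass [kg] and polar inertia [kg m^2] (assumed)
kx, ky, kt = 8.0e8, 6.0e8, 9.0e10      # storey lateral/torsional stiffnesses (assumed)

ndof = 3 * n
M = np.zeros((ndof, ndof))
K = np.zeros((ndof, ndof))
k_story = np.diag([kx, ky, kt])
for i in range(n):                     # storey i connects floor i to floor i-1 (or the ground)
    M[3*i:3*i+3, 3*i:3*i+3] = np.diag([m, m, J])
    K[3*i:3*i+3, 3*i:3*i+3] += k_story
    if i > 0:
        K[3*i:3*i+3, 3*(i-1):3*(i-1)+3] -= k_story
        K[3*(i-1):3*(i-1)+3, 3*i:3*i+3] -= k_story
        K[3*(i-1):3*(i-1)+3, 3*(i-1):3*(i-1)+3] += k_story

# Generalized eigenproblem K phi = omega^2 M phi -> free vibrations and mode shapes.
eigvals, modes = eigh(K, M)
freqs_hz = np.sqrt(eigvals) / (2.0 * np.pi)
print("natural frequencies [Hz]:", np.round(freqs_hz, 3))
```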
APA, Harvard, Vancouver, ISO, and other styles
29

Seth, Ajay. "A Predictive Control Method for Human Upper-Limb Motion: Graph-Theoretic Modelling, Dynamic Optimization, and Experimental Investigations." Thesis, University of Waterloo, 2000. http://hdl.handle.net/10012/787.

Full text
Abstract:
Optimal control methods are applied to mechanical models in order to predict the control strategies in human arm movements. Optimality criteria are used to determine unique controls for a biomechanical model of the human upper-limb with redundant actuators. The motivation for this thesis is to provide a non-task-specific method of motion prediction as a tool for movement researchers and for controlling human models within virtual prototyping environments. The current strategy is based on determining the muscle activation levels (control signals) necessary to perform a task that optimizes several physical determinants of the model such as muscular and joint stresses, as well as performance timing. Currently, the initial and final location, orientation, and velocity of the hand define the desired task. Several models of the human arm were generated using a graph-theoretical method in order to take advantage of similar system topology through the evolution of arm models. Within this framework, muscles were modelled as non-linear actuator components acting between origin and insertion points on rigid body segments. Activation levels of the muscle actuators are considered the control inputs to the arm model. Optimization of the activation levels is performed via a hybrid genetic algorithm (GA) and a sequential quadratic programming (SQP) technique, which provides a globally optimal solution without sacrificing numerical precision, unlike traditional genetic algorithms. Advantages of the underlying genetic algorithm approach are that it does not require any prior knowledge of what might be a 'good' approximation in order for the method to converge, and it enables several objectives to be included in the evaluation of the fitness function. Results indicate that this approach can predict optimal strategies when compared to benchmark minimum-time maneuvers of a robot manipulator. The formulation and integration of the aforementioned components into a working model and the simulation of reaching and lifting tasks represents the bulk of the thesis. Results are compared to motion data collected in the laboratory from a test subject performing the same tasks. Discrepancies in the results are primarily due to model fidelity. However, more complex models are not evaluated due to the additional computational time required. The theoretical approach provides an excellent foundation, but further work is required to increase the computational efficiency of the numerical implementation before proceeding to more complex models.
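The hybrid GA-plus-SQP strategy can be sketched generically: a small genetic algorithm explores the bounded activation space and SciPy's SLSQP then polishes the best individual. The objective below is a stand-in cost (effort plus a task penalty) with invented numbers, not the thesis's musculoskeletal model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def objective(a):
    # Stand-in cost: "effort" plus a penalty for missing a target output;
    # a real study would evaluate the musculoskeletal simulation here.
    target = 1.0
    output = np.sum(np.sqrt(np.maximum(a, 0.0)))   # fictitious task performance
    return np.sum(a**2) + 10.0 * (output - target)**2

nvar, pop_size, n_gen = 4, 40, 60
bounds = [(0.0, 1.0)] * nvar
pop = rng.random((pop_size, nvar))

for _ in range(n_gen):                              # very small, plain GA
    fitness = np.array([objective(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[: pop_size // 2]]
    children = []
    while len(children) < pop_size - len(parents):
        p1, p2 = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(nvar) < 0.5, p1, p2)   # uniform crossover
        child += 0.05 * rng.standard_normal(nvar)          # mutation
        children.append(np.clip(child, 0.0, 1.0))
    pop = np.vstack([parents, children])

best = pop[np.argmin([objective(ind) for ind in pop])]
refined = minimize(objective, best, method="SLSQP", bounds=bounds)  # SQP polish
print("GA best  :", np.round(best, 4), objective(best))
print("GA + SQP :", np.round(refined.x, 4), refined.fun)
```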
APA, Harvard, Vancouver, ISO, and other styles
30

Tosto, Francesco. "Investigation of performance and surge behavior of centrifugal compressors through CFD simulations." Thesis, KTH, Mekanik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-226159.

Full text
Abstract:
The use of turbocharged Diesel engines is nowadays a widespread practice in the automotive sector: heavy-duty vehicles like trucks or buses, in particular, are often equipped with turbocharged engines. An accurate study of the flow field developing inside both the main components of a turbocharger, i.e. compressor and turbine, is therefore necessary: the synergistic use of CFD simulations and experimental tests allows to fulfill this requirement. The aim of this thesis is to investigate the performance and the flow field that develops inside a centrifugal compressor for automotive turbochargers. The study is carried out by means of numerical simulations, both steady-state and transient, based on RANS models (Reynolds Averaged Navier-Stokes equations). The code utilized for the numerical simulations is Ansys CFX.   The first part of the work is an engineering attempt to develop a CFD method for predicting the performance of a centrifugal compressor which is based solely on steady-state RANS models. The results obtained are then compared with experimental observations. The study continues with an analysis of the sensitivity of the developed CFD method to different parameters: influence of both position and model used for the rotor-stator interfaces and the axial tip-clearance on the global performances is studied and quantified.   In the second part, a design optimization study based on the Design of Experiments (DoE) approach is performed. In detail, transient RANS simulations are used to identify which geometry of the recirculation cavity hollowed inside the compressor shroud (ported shroud design) allows to mitigate the backflow that appears at low mass-flow rates. Backflow can be observed when the operational point of the compressor is suddenly moved from design to surge conditions. On actual heavy-duty vehicles, these conditions may arise when a rapid gear shift is performed.
Användningen av turboladdade dieselmotorer är numera utbredd inom bilindustrin: i synnerhet tunga fordon som lastbilar eller bussar är ofta utrustade med turboladdade motorer. En utförlig förståelse av flödesfältet som utvecklas inuti båda huvudkomponenterna hos en turboladdare, dvs kompressor och turbin, är därför nödvändig: den synergistiska användningen av CFD-simuleringar och experimentella tester möjliggör att detta krav uppfylls. Syftet med denna avhandling är att undersöka prestanda och det flödesfält som utvecklas i en centrifugalkompressor för turboladdare. Studien utförs genom numeriska simuleringar, både steady state och transient, baserat på RANS-modeller (Reynolds Averaged Navier-Stokes-ekvationer). Koden som används för de numeriska simuleringarna är Ansys CFX. Den första delen av arbetet är ett försök att utveckla en CFD-metod för att förutsäga prestanda för en centrifugalkompressor med hjälp av steady-state RANS-modeller. De erhållna resultaten jämförs sedan med experimentella observationer. Studien fortsätter med en analys av känsligheten hos den utvecklade CFD-metoden till olika parametrar: inflytande av både position och modell som används för rotor-statorgränssnitt samt axiellt spel mellan rotor och hus på de globala prestationerna studeras och kvantifieras. I andra delen utförs en designoptimeringsstudie baserad på Design of Experiments (DoE). I detalj används tidsupplösta RANS-simuleringar för att identifiera vilken utformning av ported shroud som minskar backflöde i kompressorn under en snabb minskning av massflöde och varvtal och därmed ger bättre prestanda i transient surge. På tunga fordon kan dessa förhållanden uppstå under växling.
APA, Harvard, Vancouver, ISO, and other styles
31

Howard, Mitchell James. "Development of a machine-tooling-process integrated approach for abrasive flow machining (AFM) of difficult-to-machine materials with application to oil and gas exploration componenets." Thesis, Brunel University, 2014. http://bura.brunel.ac.uk/handle/2438/9262.

Full text
Abstract:
Abrasive flow machining (AFM) is a non-traditional manufacturing technology used to expose a substrate to pressurised multiphase slurry, comprised of superabrasive grit suspended in a viscous, typically polymeric carrier. Extended exposure to the slurry causes material removal, where the quantity of removal is subject to complex interactions within over 40 variables. Flow is contained within boundary walls, complex in form, causing physical phenomena to alter the behaviour of the media. In setting factors and levels prior to this research, engineers had two options; embark upon a wasteful, inefficient and poor-capability trial and error process or they could attempt to relate the findings they achieve in simple geometry to complex geometry through a series of transformations, providing information that could be applied over and over. By condensing process variables into appropriate study groups, it becomes possible to quantify output while manipulating only a handful of variables. Those that remain un-manipulated are integral to the factors identified. Through factorial and response surface methodology experiment designs, data is obtained and interrogated, before feeding into a simulated replica of a simple system. Correlation with physical phenomena is sought, to identify flow conditions that drive material removal location and magnitude. This correlation is then applied to complex geometry with relative success. It is found that prediction of viscosity through computational fluid dynamics can be used to estimate as much as 94% of the edge-rounding effect on final complex geometry. Surface finish prediction is lower (~75%), but provides significant relationship to warrant further investigation. Original contributions made in this doctoral thesis include; 1) A method of utilising computational fluid dynamics (CFD) to derive a suitable process model for the productive and reproducible control of the AFM process, including identification of core physical phenomena responsible for driving erosion, 2) Comprehensive understanding of effects of B4C-loaded polydimethylsiloxane variants used to process Ti6Al4V in the AFM process, including prediction equations containing numerically-verified second order interactions (factors for grit size, grain fraction and modifier concentration), 3) Equivalent understanding of machine factors providing energy input, studying velocity, temperature and quantity. Verified predictions are made from data collected in Ti6Al4V substrate material using response surface methodology, 4) Holistic method to translating process data in control-geometry to an arbitrary geometry for industrial gain, extending to a framework for collecting new data and integrating into current knowledge, and 5) Application of methodology using research-derived CFD, applied to complex geometry proven by measured process output. As a result of this project, four publications have been made to-date – two peer-reviewed journal papers and two peer-reviewed international conference papers. Further publications will be made from June 2014 onwards.
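Numerically, the factorial and response surface methodology designs mentioned above reduce to fitting a second-order prediction equation to the measured responses. The sketch below does this by least squares for two coded factors; the factor settings and responses are placeholders, not AFM data.

```python
import numpy as np

# Coded factor settings (e.g. grit size, modifier concentration) and an
# invented response (e.g. edge radius); a real study would use measured data.
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)
y = np.array([2.1, 3.0, 2.6, 4.4, 3.1, 2.5, 3.6, 2.7, 3.4])

x1, x2 = X[:, 0], X[:, 1]
# Second-order response surface: b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

for name, b in zip(["b0", "b1", "b2", "b12", "b11", "b22"], coef):
    print(f"{name:>3s} = {b:6.3f}")

# Predicted response at a new setting (x1, x2) = (0.5, -0.5)
xn = np.array([1.0, 0.5, -0.5, 0.5 * -0.5, 0.25, 0.25])
print("prediction at (0.5, -0.5):", float(xn @ coef))
```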
APA, Harvard, Vancouver, ISO, and other styles
32

Park, Jangho. "Efficient Global Optimization of Multidisciplinary System using Variable Fidelity Analysis and Dynamic Sampling Method." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/91911.

Full text
Abstract:
Work in this dissertation is motivated by reducing the design cost at the early design stage while maintaining high design accuracy throughout all design stages. It presents four key design methods to improve the performance of Efficient Global Optimization for multidisciplinary problems. First, a fidelity-calibration method is developed and applied to lower-fidelity samples. Function values analyzed by lower fidelity analysis methods are updated to have equivalent accuracy to that of the highest fidelity samples, and these calibrated data sets are used to construct a variable-fidelity Kriging model. For the design of experiment (DOE), a dynamic sampling method is developed and includes filtering and infilling data based on mathematical criteria on the model accuracy. In the sample infilling process, multi-objective optimization for exploitation and exploration of design space is carried out. To indicate the fidelity of function analysis for additional samples in the variable-fidelity Kriging model, a dynamic fidelity indicator with the overlapping coefficient is proposed. For the multidisciplinary design problems, where multiple physics are tightly coupled with different coupling strengths, multi-response Kriging model is introduced and utilizes the method of iterative Maximum Likelihood Estimation (iMLE). Through the iMLE process, a large number of hyper-parameters in multi-response Kriging can be calculated with great accuracy and improved numerical stability. The optimization methods developed in the study are validated with analytic functions and showed considerable performance improvement. Consequentially, three practical design optimization problems of NACA0012 airfoil, Multi-element NLR 7301 airfoil, and all-moving-wingtip control surface of tailless aircraft are performed, respectively. The results are compared with those of existing methods, and it is concluded that these methods guarantee the equivalent design accuracy at computational cost reduced significantly.
Doctor of Philosophy
In recent years, as the cost of aircraft design grows rapidly and the aviation industry is interested in saving design time and cost, an accurate design result during the early design stages is particularly important to reduce the overall life-cycle cost. The purpose of this work is to reduce the design cost at the early design stage while achieving design accuracy as high as that of the detailed design. A method of efficient global optimization (EGO) with variable-fidelity analysis and multidisciplinary design is proposed. Using variable-fidelity analysis for the function evaluation, high-fidelity function evaluations can be replaced by low-fidelity analyses of equivalent accuracy, which leads to considerable cost reduction. As the aircraft system has sub-disciplines coupled by multiple physics, including aerodynamics, structures, and thermodynamics, the accuracy of an individual discipline affects that of all others, and thus the design accuracy during the early design stages. Four distinctive design methods are developed and implemented into the standard Efficient Global Optimization (EGO) framework: 1) variable-fidelity analysis based on error approximation and calibration of low-fidelity samples, 2) dynamic sampling criteria for both filtering and infilling samples, 3) a dynamic fidelity indicator (DFI) for the selection of analysis fidelity for infilled samples, and 4) a multi-response Kriging model with iterative Maximum Likelihood Estimation (iMLE). The methods are validated with analytic functions, and an improvement in cost efficiency through the overall design process, while maintaining the design accuracy, is observed in comparison with existing design methods. For practical applications, the methods are applied to the design optimization of an airfoil and of a complete aircraft configuration, respectively. The design results are compared with those of existing methods, and it is found that the method yields design accuracies equivalent to or higher than those of a high-fidelity-analysis-alone design at a cost reduced by orders of magnitude.
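The EGO loop at the heart of this work — fit a Kriging model to the samples, then infill where expected improvement is largest — can be sketched in a few lines. The version below is single-fidelity with fixed hyperparameters and a placeholder objective, so it omits the dissertation's variable-fidelity calibration, dynamic sampling and multi-response extensions.

```python
import numpy as np
from scipy.stats import norm

def f(x):                                # expensive "analysis" stand-in
    return np.sin(3 * x) + 0.5 * x

def kriging(X, y, Xs, length=0.5, nugget=1e-8):
    """GP regression with a squared-exponential kernel (fixed hyperparameters)."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = k(X, X) + nugget * np.eye(len(X))
    Ks = k(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.maximum(var, 1e-12))

X = np.array([0.0, 1.0, 2.5])            # initial design of experiments
y = f(X)
grid = np.linspace(0.0, 3.0, 301)

for _ in range(8):                       # EGO iterations
    mu, sd = kriging(X, y, grid)
    best = y.min()
    z = (best - mu) / sd
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_new = grid[np.argmax(ei)]
    X, y = np.append(X, x_new), np.append(y, f(x_new))

print(f"best sample found: x = {X[np.argmin(y)]:.3f}, f = {y.min():.3f}")
```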
APA, Harvard, Vancouver, ISO, and other styles
33

Berg, Anna, and Magdalena Enlöf. "Att leva är att känna - en pilotstudie i affektfokuserad terapi för unga vuxna." Thesis, Örebro universitet, Institutionen för juridik, psykologi och socialt arbete, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-38177.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Sahin, Emre. "Conceptual Design, Testing And Manufacturing Of An Industrial Type Electro-hydraulic Vacuum Sweeper." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613747/index.pdf.

Full text
Abstract:
Conceptual Design, Testing and Manufacturing of an Industrial Type Electro-Hydraulic Vacuum Sweeper. Sahin, Emre, M.Sc., Department of Mechanical Engineering. Supervisor: Prof. Dr. Kahraman Albayrak. Co-Supervisor: Prof. Dr. Bilgin Kaftanoglu. September 2011, 156 pages. In this thesis, the conceptual design, testing, development and manufacturing processes of the cleaning (elevator and fan) and electro-hydraulic systems of an industrial type vacuum sweeper are presented. The thesis is financially supported by the Ministry of Science, Industry and Technology (Turkey) and Müsan A.S. (Makina Üretim Sanayi ve Ticaret A.S.) under the SAN-TEZ projects numbered 00028.STZ.2007-1 and 00623.STZ.2010-1. The main purpose is to make critical design changes to the existing fan system, to design a new elevator system and eventually to obtain an efficient and powerful cleaning system. Catia and SolidWorks software are used for the design. Within the SAN-TEZ project, all CFD solutions were provided by Punto Engineering. Unlike many industrial type vacuum sweepers, the new design will be electrically and electro-hydraulically controlled. The entire cleaning system of the new 'SAN Vacuum Sweeper' will be activated by hydraulic motors (the traction system, including the hydraulic system, is driven by a brushless DC electric motor as well), and the power for all these systems is supplied by batteries placed in the middle of the vehicle. The elevator and fan system can be considered as a group responsible for a street sweeper's cleaning operations. Both systems play an important role in cleaning, since they lift heavy and small particles from the ground: the fan system sucks up small materials and dust by vacuum, while the elevator system lifts heavier materials such as stones, bottles and cans. Therefore, it is essential to design an efficient and powerful fan and elevator system for a street sweeper. The thesis work includes the design, development, supervision of manufacturing, simulation and testing of the cleaning (elevator and fan) and electro-hydraulic systems of the street cleaner.
APA, Harvard, Vancouver, ISO, and other styles
35

Abeysinghe, Mudiyanselage Chanaka Madushan Abeysinghe. "Static and dynamic performance of lightweight hybrid composite floor plate system." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/60323/1/Chanaka_Abeysinghe_Mudiyanselage_Thesis.pdf.

Full text
Abstract:
In the modern built environment, building construction and demolition consume a large amount of energy and emit greenhouse gases due to widely used conventional construction materials such as reinforced and composite concrete. These materials consume a high amount of natural resources and possess high embodied energy, and more energy is required to recycle or reuse them at the cessation of use. Therefore, it is very important to use recyclable or reusable new materials in building construction in order to conserve natural resources and reduce the energy and emissions associated with conventional materials. Advancements in materials technology have resulted in the introduction of new composite and hybrid materials in infrastructure construction as alternatives to the conventional materials. This research project has developed a lightweight and prefabricatable Hybrid Composite Floor Plate System (HCFPS) as an alternative to conventional floor systems, with desirable properties: easy to construct, economical, demountable, recyclable and reusable. Component materials of HCFPS include a central Polyurethane (PU) core, outer layers of Glass-fiber Reinforced Cement (GRC) and steel laminates at tensile regions. This research work explored the structural adequacy and performance characteristics of hybridised GRC, PU and steel laminate for the development of HCFPS. Performance characteristics of HCFPS were investigated using Finite Element (FE) method simulations supported by experimental testing. Parametric studies were conducted to develop the HCFPS to satisfy static performance requirements, using sectional configurations, spans, loading and material properties as the parameters. The dynamic response of HCFPS floors was investigated by conducting parametric studies using material properties, walking frequency and damping as the parameters. Research findings show that HCFPS can be used in office and residential buildings to provide acceptable static and dynamic performance. Design guidelines were developed for this new floor system. HCFPS is easy to construct and economical compared to conventional floor systems, as it is a lightweight and prefabricatable floor system. This floor system can also be demounted and reused or recycled at the cessation of use due to its component materials.
APA, Harvard, Vancouver, ISO, and other styles
36

Pontes, Karen Valverde. "Desenvolvimento de resinas de polietileno linear atraves de metodos de otimização." [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/267259.

Full text
Abstract:
Advisor: Rubens Maciel Filho
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Quimica
Resumo: Para o desenvolvimento de novas resinas poliméricas ou para a melhoria da qualidade de resinas já existentes, usualmente realizam-se experimentos em escala industrial ou em planta piloto. Entretanto, tais práticas são altamente imprecisas, demoradas, caras e levam à formação de produtos fora de especificação. Ferramentas baseadas em modelos matemáticos são alternativas atraentes para contornar essas desvantagens e, portanto, o incentivo para o desenvolvimento de políticas operacionais ótimas para os reatores de polimerização tem crescido muito nos últimos anos. Neste escopo, o presente estudo aborda o projeto de resinas de polietileno com propriedades feitas sob medida, as quais compreendem não apenas as médias da distribuição de peso molecular (DPM), mas também toda a DPM, já que algumas propriedades de uso final são melhores correlacionadas com certas frações da DPM. O caso de estudo é a polimerização do eteno em solução com catalisador Ziegler-Natta, que ocorre em uma série de reatores agitados e tubulares contínuos. Devido à presença de dois tipos de reatores, diversas resinas podem ser produzidas. Inicialmente, são realizados planejamentos fatoriais tipo Plackett-Burman para determinar as variáveis mais importantes, e portanto que devem ser consideradas como variáveis de decisão na futura otimização. Além disso, planejamentos fatoriais completos e superfícies de resposta permitem uma fácil identificação de comportamentos não lineares e um melhor entendimento do processo. Dois métodos diferentes de otimização são utilizados: algoritmos baseados em SQP (sequential quadratic programming) e algoritmo genético. No primeiro método, o processo é modelado como um sistema multi-estágio no estado estacionário, de modo que métodos de otimização dinâmica ou controle ótimo são adequados para resolver o problema ao se considerar a coordenada axial do reator tubular como variável independente ao invés do tempo. Ambos os métodos de otimização mostram ótima predição das propriedades desejadas do polímero, sendo que a performance do algoritmo genético melhora quando a solução da otimização baseada em SQP é incluída na população inicial.
Abstract: In order to design new polymer grades or to improve the quality of existing polymer resins, usually pilot plant or industrial scale experiments are carried out. Such practices, though, are highly imprecise, time delayed, result in high costs and lead to off-spec products. Model based tools present an attractive alternative to overcome such disadvantages and therefore, in recent years, there is an increasing incentive to develop optimal operating policies for polymeric reactors. Whithin this scope, the present study approaches the design of polyethylene resins with tailored properties that comprehend not only average properties, but also the whole molecular weight distribution (MWD), since some end-use properties are better correlated with certain fractions of the MWD. The case study is the ethylene polymerization in solution with Ziegler-Natta catalyst, which takes place is a serie of continuous stirred and tubular reactors. Due to the presence of two types of reactors, a broad range of polymer resins can be produced. Initially, Plackett-Burman designs of experiments is carried out in order to ascertain the most important variables that should be considered as degrees of freedom for the future optimization. In addition to that, complete factorial designs and surface responses allow for an easy identification of nonlinear behaviors and a better understanding of the process. Two different methods are employed for the optimization: SQP based algorithms and genetic algorithm. For the former, the process is modeled as a multi-stage system at the steady state, in such a manner that optimal control tools are suitable to solve the problem if the axial coordinate of the tubular reactor is the independent variables, replacing time. Both methods present good prediction of the desired polymer properties and the performance of the GA improves when the solution of the SQP optimization is included in the initial population.
Doctorate
Chemical Process Development
Doctor of Chemical Engineering
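A Plackett-Burman screening design of the kind used above is simply a two-level orthogonal array; for run counts that are powers of two it can be taken from a Hadamard matrix, as in the sketch below for seven factors in eight runs. Factor names and responses are placeholders, not the polymerization variables of the thesis.

```python
import numpy as np
from scipy.linalg import hadamard

# 8-run, 7-factor two-level screening design (for power-of-two run counts the
# Plackett-Burman design coincides with a Hadamard-matrix design).
H = hadamard(8)                  # entries +1 / -1, first column all +1
design = H[:, 1:]                # drop the constant column -> 7 factor columns

factors = [f"x{i+1}" for i in range(design.shape[1])]   # placeholder names
print("run  " + "  ".join(factors))
for r, row in enumerate(design, start=1):
    print(f"{r:3d}  " + "  ".join(f"{v:+d}" for v in row))

# Main-effect estimates from an invented response y (e.g. a polymer property):
rng = np.random.default_rng(2)
y = 10 + 3.0 * design[:, 0] - 2.0 * design[:, 2] + rng.normal(0, 0.3, 8)
effects = design.T @ y / (len(y) / 2)    # contrast divided by N/2
for name, e in zip(factors, effects):
    print(f"effect of {name}: {e:+.2f}")
```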
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Chutu. "The effects of CPAP tube reverse flow." Click here to access this resource online, 2008. http://hdl.handle.net/10292/659.

Full text
Abstract:
CPAP is the most common treatment for moderate to severe sleep apnea in adults. Despite its efficacy, patients' safety, comfort and compliance are issues to be considered and improved in CPAP design. The issues include condensation, carbon dioxide in inhaled air, and the humidity and temperature of inhaled air. When a CPAP user breathes deeply, some air will not be fully expelled and may be driven back into the heated air delivery tube (HADT). There has been interest in what impact this so-called reverse flow may have on CPAP use. The main objectives of this research are to quantify the reverse flow and its influence on carbon dioxide re-breathing, on the humidity delivered to the patient and on condensation in the HADT. Within this thesis, two computer models of the CPAP system have been constructed in Simulink™ in the Matlab™ environment: one covers the CPAP fluid dynamic performance and carbon dioxide re-breathing, and the other covers thermodynamic performance. The models can predict the dynamic behaviour of the CPAP machine. They are able to mimic the breath-induced airflow fluctuation and flow-direction changes over wide real working ranges of ambient conditions, settings and coefficients, and they can be used for future analysis, development, improvement and design of the machine. The fluid dynamic and thermodynamic models were experimentally validated and proved to be a valuable tool in the work. The main conclusions drawn from this study are:
• Reverse flow increases as the breath load increases and the pressure setting decreases.
• Reverse flow does not necessarily add exhaled air to the next inhalation unless the reverse flow is relatively large.
• Mask capacity does not influence the reverse flow.
• The exhaled air that is re-breathed is mainly the air that stays in the mask; therefore a larger mask capacity increases the amount of exhaled air re-breathed, and the percentage of exhaled air in the next inhalation drops when the breath load increases.
• Deep breathing does not significantly change the total evaporation in the chamber.
• When deep-breathing-induced reverse flow occurs, condensation occurs or worsens in the HADT near the mask. This happens only when the humidity of the airflow from the CPAP is much lower than that of the exhaled air and the tube wall temperature is low enough for condensation to occur.
• Deep breathing and reverse flow do not significantly influence the average inhaled air temperature.
• The overall specific humidity of inhaled air is lower under deep breathing.
• Mask capacity does not influence the thermal conditions in the HADT or the inhaled air specific humidity, nor does it significantly influence the inhaled air temperature.
APA, Harvard, Vancouver, ISO, and other styles
38

Millithaler, Pierre. "Dynamic behaviour of electric machine stators : modelling guidelines for efficient finite-element simulations and design specifications for noise reduction." Thesis, Besançon, 2015. http://www.theses.fr/2015BESA2003/document.

Full text
Abstract:
Dopées par un intérêt croissant des industries telles que l’automobile, les technologies de motorisation 100 % électriques équipent de plus en plus de véhicules à la portée du grand public. En dépit d’une opinion commune favorable sur les faibles émissions sonores des moteurs électriques, la maîtrise des performances vibratoires et acoustiques d’une telle machine reste un challenge très coûteux à relever. Associant l’expertise de l’entreprise Vibratec et du département Mécanique Appliquée de l’institut Femto-ST, cette thèse CIFRE vise à améliorer les connaissances actuelles sur le comportement mécanique de machines électriques. De nouvelles méthodes de modélisation par éléments finis sont proposées à partir d’approches d’homogénéisation, analyses expérimentales, recalage de modèles et études de variabilité en température et en fréquence, pour permettre une prédiction plus performante du comportement vibratoire d’un moteur électrique
Boosted by the increasing interest of industries such as automotive, 100% electric engine technologies power more and more affordable vehicles for the general public. In spite of a rather favourable common opinion about the low noise emitted by electric motors, controlling the vibratory and acoustic performances of such machines remains a very costly challenge to take up. Associating the expertise of the company Vibratec and the institute Femto-ST Applied Mechanics Department, this industry-oriented Ph.D. thesis aims at improving the current knowledge about the mechanical behaviour of electric machines. New finite-element modelling methods are proposed from homogenisation approaches, experimental analyses, model updating procedures and variability studies in temperature and frequency, in order to predict the behaviour of an electric motor more efficiently
APA, Harvard, Vancouver, ISO, and other styles
39

R, Kyvik Adriana. "Self-assembled monolayers for biological applications: design, processing, characterization and biological studies." Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/666882.

Full text
Abstract:
Self-assembled monolayers (SAMs) on gold surfaces have been designed, processed, characterized and used for specific biological studies. The studies performed include the control of lipid bilayer diffusion, cell adhesion and vascularization studies and also the creation of antimicrobial surfaces. More specifically, dynamic SAMs on surfaces whose properties can be modified with an electrochemical external stimulus have been developed and used to interrogate biological systems. The developed platform has been applied to two different applications to overcome present challenges when performing biological studies. Firstly, in Chapter 2, the design and synthesis of all the molecules needed to develop an electroactive platform, its processing as SAMs and the optimization of the surface confined redox process between a non-reactive Hydroquinone (HQ) termination and its corresponding reactive Benzoquinone (BQ) is reported. Two different interfacial reactions taking place on the electroactivated surfaces were studied in detail: the Diels-Alder (DA) and the Michael Addition (MA) interfacial reactions, with cyclopentadiene (Cp) or thiol tagged molecules, respectively. The comparative study between DA and MA as surface functionalization strategies with a temporal control reveals that even though MA is not commonly used for this purpose it offers an attractive strategy for stimulus activated functionalization for biological applications. In Chapter 3, the developed platform has been used to achieve a temporal control of cell adhesion and in this way mimic in vivo conditions more accurately. Cell adhesion plays fundamental roles in biological functions and as such, it is important to control cell adhesion on materials used for biomedical applications. Towards this aim, the dynamic interface developed has been used to immobilize cell adhesion promoting peptides through the two different interfacial reactions, namely the DA and the MA reaction, and a comparative study has been carried out. Moreover, a study involving immobilized VEGF-mimicking peptide Qk has been conducted demonstrating the possibility of using the novel peptide for directing cell differentiation into tubular networks for in vitro platforms, by attaching them on a surface. In Chapter 4, we have used the developed electroactive interface to control the dynamics of lipid bilayers as cell membrane models, designed for transmembrane protein characterization in a more in vivo like environment. Specifically, electroactive SAMs have been used to control the moment in which tethering of lipid bilayer deposited on them occurs and consequently decrease its diffusion. In this way, proteins and lipids can maintain their fluidity until tethering is desired, a useful platform for transmembrane protein characterization. Finally, in Chapter 5, a surface biofunctionalization strategy also based on SAMs has been used to produce a bactericidal surface by successfully immobilizing novel antimicrobial proteins produced by recombinant DNA technology. This is relevant in view of an imminent antibiotics crisis. To confirm the antimicrobial activity and biofilm growth prevention of these surfaces, a biofilm assay was performed demonstrating that proteins retain their antimicrobial effect when immobilized. All these strategies open new possibilities for controlled biomolecule immobilization for fundamental biological studies and for applications in biotechnology, at the interface of materials science and biology.
APA, Harvard, Vancouver, ISO, and other styles
40

Ceylanoglu, Arda. "An Accelerated Aerodynamic Optimization Approach For A Small Turbojet Engine Centrifugal Compressor." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611371/index.pdf.

Full text
Abstract:
Centrifugal compressors are widely used in propulsion technology. As an important part of turbo-engines, centrifugal compressors increase the pressure of the air and let the pressurized air flow into the combustion chamber. The developed pressure and the flow characteristics mainly affect the thrust generated by the engine. The design of centrifugal compressors is a challenging and time consuming process including several tests, computational fluid dynamics (CFD) analyses and optimization studies. In this study, a methodology for the geometry optimization and CFD analysis of the centrifugal compressor of an existing small turbojet engine is introduced, with increased pressure ratio as the objective. The purpose is to optimize the impeller geometry of a centrifugal compressor such that the pressure ratio at the maximum speed of the engine is maximized. The methodology introduced provides guidance on the geometry optimization of centrifugal impellers supported by CFD analysis outputs. The original geometry of the centrifugal compressor is obtained by means of optical scanning. Then, the parametric model of the 3-D geometry is created using CAD software. A design of experiments (DOE) procedure is applied to the geometrical parameters in order to decrease the computational effort and guide the optimization process. All the designs gathered through the DOE study are modelled in the CAD software and meshed for CFD analyses. CFD analyses are carried out to investigate the resulting pressure ratio and flow characteristics. The results of the CFD studies are used within an Artificial Neural Network methodology to create a fit between the geometric parameters (inputs) and the pressure ratio (output). The resulting fit is then used in the optimization study, and a centrifugal compressor with a higher pressure ratio is obtained by following a single-objective optimization process supported by the design of experiments methodology.
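The fit-then-optimize workflow described above (DOE samples, a neural-network surrogate, then optimization on the surrogate) can be sketched with a one-hidden-layer network trained in plain NumPy on invented data relating a single normalized geometric parameter to pressure ratio; it is a toy stand-in, not the thesis's CFD data or ANN architecture.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented DOE: one normalized geometric parameter x -> pressure ratio PR.
x = np.linspace(0.0, 1.0, 15).reshape(-1, 1)
pr = 3.0 + 0.8 * np.sin(2.5 * x) - 0.6 * (x - 0.4) ** 2 + rng.normal(0, 0.01, x.shape)

# One-hidden-layer network trained by full-batch gradient descent.
n_hidden, lr = 8, 0.05
W1, b1 = rng.normal(0, 1, (1, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(0, 1, (n_hidden, 1)), np.zeros(1)

for _ in range(20000):
    h = np.tanh(x @ W1 + b1)                 # forward pass
    yhat = h @ W2 + b2
    err = yhat - pr
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)        # backpropagation
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ dh / len(x);  gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Use the surrogate to pick the geometry with the highest predicted pressure ratio.
grid = np.linspace(0.0, 1.0, 501).reshape(-1, 1)
pred = np.tanh(grid @ W1 + b1) @ W2 + b2
i_best = int(np.argmax(pred))
print(f"surrogate optimum: x = {grid[i_best, 0]:.3f}, predicted PR = {pred[i_best, 0]:.3f}")
```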
APA, Harvard, Vancouver, ISO, and other styles
41

Fletcher, Nathan James. "Design and Implementation of Periodic Unsteadiness Generator for Turbine Secondary Flow Studies." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1560810428267352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Abbasi, Baharanchi Ahmadreza. "Development of a Two-Fluid Drag Law for Clustered Particles Using Direct Numerical Simulation and Validation through Experiments." FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/2489.

Full text
Abstract:
This dissertation focused on the development and utilization of numerical and experimental approaches to improve the CFD modeling of the fluidization flow of cohesive micron-size particles. The specific objectives of this research were: (1) developing a cluster prediction mechanism applicable to Two-Fluid Modeling (TFM) of gas-solid systems; (2) developing more accurate drag models for TFM of gas-solid fluidization flow in the presence of cohesive interparticle forces; (3) using the developed models to explore the improvement in accuracy of TFM in simulating the fluidization flow of cohesive powders; (4) understanding the causes and influential factors that led to the improvements, and quantifying those improvements; and (5) gathering data from a fast fluidization flow and using these data for benchmark validation. Simulation results with the two developed cluster-aware drag models showed that cluster prediction could effectively influence the results in both the first and second cluster-aware models. It was shown that the improvement in accuracy of TFM modeling using three versions of the first hybrid model was significant, and that the best improvements were obtained with the smallest values of the switch parameter, which captured the smallest chances of cluster prediction. In the case of the second hybrid model, the dependence of the critical model parameter on the Reynolds number alone meant that the improvement in accuracy was significant only in the dense section of the fluidized bed. This finding suggests that a more sophisticated particle-resolved DNS model, which can span a wide range of solid volume fractions, could be used in the formulation of the cluster-aware drag model. The results of experiments using high-speed imaging indicated the presence of particle clusters in the fluidization flow of FCC inside the riser of the FIU-CFB facility. In addition, pressure data were successfully captured along the fluidization column of the facility and used as benchmark validation data for the second hybrid model developed in the present dissertation. It was shown that the second hybrid model could predict the pressure data in the dense section of the fluidization column with better accuracy.
APA, Harvard, Vancouver, ISO, and other styles
43

Leite, Ricardo Augusto de Barros. "Planejamento de processos de peen forming baseado em modelos analíticos do jato de granalhas e do campo de tensões residuais induzidas na peça." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-28092016-103619/.

Full text
Abstract:
Peen forming é um processo de conformação plástica a frio de laminas ou painéis metálicos através do impacto de um jato regulado de pequenas esferas de aço em sua superfície, a fim de produzir uma curvatura pré-determinada. A aplicação da técnica de shot peening como um processo de conformação já é conhecida da indústria desde a década de 1940, mas a demanda crescente por produtos de grande confiabilidade tem impulsionado o desenvolvimento de novas pesquisas visando o seu aperfeiçoamento e automação. . O planejamento do processo de peen forming requer medição e controle de diversas variáveis relacionadas à dinâmica do jato de granalhas e à sua interação com o material a ser conformado. Conforme demonstrado por diversos autores, a velocidade de impacto é uma das variáveis que mais contribui para a formação do campo de tensões residuais que leva o material a se curvar. Neste trabalho é apresentado um modelo dinâmico simplificado que descreve o movimento de um grande número de pequenas esferas arrastadas por um fluxo de ar em regime permanente e sujeitas a múltiplas colisões entre si e com a peça a ser conformada. Simulações deste modelo permitiram identificar a correlação entre o campo de velocidades das granalhas e os demais parâmetros do processo. Mediante a aplicação da técnica de projeto de experimentos pôde-se estimar os valores dos parâmetros que otimizam o processo. Ao final, elaborou-se um algoritmo que permite realizar o planejamento de processos de peen forming, ou seja, determinar os valores desses parâmetros, de modo tal a produzir uma curvatura pré-determinada em uma placa metálica originalmente plana.
Peen forming is a plastic cold-work process of shaping a metallic sheet or panel through the impact of a regulated blast of small round steel shot on its surface in order to produce a predetermined curvature. The application of shot peening as a forming process has been known in industry since the 1940s, but the increasing demand for highly reliable products has pushed the development of new research to enhance and automate it. Peen forming process planning requires the measurement and control of several variables concerning the dynamics of the shot jet and its interaction with the piece to be shaped. As previously shown by several authors, impact velocity is one of the variables that most contributes to the development of the residual stress field that causes the material to bend. In this work we present a simplified dynamical model describing the motion of a large number of small spheres (shot) dragged by an air flow in steady conditions and subject to multiple collisions with each other and with the piece to be shaped. Computer simulations of this model allowed correlations to be identified between the shot velocity field and the process parameters. Applying design of experiments techniques, it was possible to estimate the parameter values that optimize the process. An algorithm was then developed that enables peen forming process planning, i.e., the determination of these parameters so as to produce a predetermined curvature in an originally flat metallic plate.
APA, Harvard, Vancouver, ISO, and other styles
44

Liu, Suo. "Minimizing distortions in dynamic light scattering experiments." Thesis, University of Nottingham, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.489892.

Full text
Abstract:
Several methods are discussed and applied for minimizing distortions in dynamic light scattering (DLS). Multiple scattering, a very common phenomenon in DLS experiments, is discussed. Distortions caused by multiple scattering and its influence on correlation data are analyzed in detail. Five different methods are applied in simulations of the suppression of multiple scattering in DLS experiments. The effects of foreign large particles, such as larger particles left by previous experiments, fast-sinking dust in the sample suspension or agglomerations of the sample particles, are discussed and analyzed. Two 'dust' removal methods are discussed and applied to anticontamination simulations and real experimental data. Investigations are also carried out for flowing samples and for distortions coming from stray light. An area-difference method is introduced for detecting a sample's velocity and stray light. From the results of simulations and real experimental data, it can be concluded that the methods discussed in this thesis perform well in minimizing distortions in DLS experiments.
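The undistorted baseline against which such distortions are judged is the ideal single-scattering correlation function. The sketch below generates an ideal g2(τ) for monodisperse particles, performs a first-cumulant fit and converts the decay rate to a hydrodynamic radius via the Stokes-Einstein relation; the instrument and sample parameters are illustrative assumptions, not those of the thesis.

```python
import numpy as np

# Instrument / sample parameters (illustrative values).
wavelength = 633e-9        # laser wavelength [m]
n_med = 1.33               # refractive index of water
theta = np.deg2rad(90.0)   # scattering angle
T, eta = 298.15, 0.89e-3   # temperature [K], viscosity [Pa s]
kB = 1.380649e-23

q = 4 * np.pi * n_med * np.sin(theta / 2) / wavelength    # scattering vector [1/m]

# Ideal intensity autocorrelation for 100 nm radius particles (Siegert relation),
# with a little noise standing in for measurement error.
R_true = 100e-9
D_true = kB * T / (6 * np.pi * eta * R_true)
tau = np.logspace(-6, -2, 200)                            # lag times [s]
g2 = 1 + 0.9 * np.exp(-2 * D_true * q**2 * tau)
g2 += np.random.default_rng(4).normal(0, 1e-3, tau.size)

# First-cumulant fit: ln(g2 - 1) is linear in tau with slope -2*Gamma.
mask = g2 > 1.001                                         # keep well-resolved points
slope, intercept = np.polyfit(tau[mask], np.log(g2[mask] - 1), 1)
Gamma = -slope / 2
D_fit = Gamma / q**2
R_fit = kB * T / (6 * np.pi * eta * D_fit)
print(f"fitted hydrodynamic radius: {R_fit * 1e9:.1f} nm (true value 100 nm)")
```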
APA, Harvard, Vancouver, ISO, and other styles
45

Sharafi, Amir. "Development and Implementation of an Advanced Remotely Controlled Vibration Laboratory." Thesis, Blekinge Tekniska Högskola, Institutionen för maskinteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-11101.

Full text
Abstract:
The term remote lab refers to laboratories in which practical experiments are directed from a separate location by remote controller devices. This study is part of developing and upgrading an advanced remote vibration laboratory. In the new remote lab, users have the ability to measure the dynamic characteristics of the test object, as in the currently existing remote lab. In addition, however, they are able to modify the dynamic properties of the test object remotely by attaching vibration test instruments such as a block of mass, a spring-mass or a non-linear spring. Performing several accurate experimental tests remotely on the test object was the toughest issue we faced as designers. In creating and developing this remote lab, a number of different approaches were adopted to produce well-defined tests. Also, instead of implementing the routine devices and techniques of regular vibration laboratories, new prototypes were designed using the finite element method (FEM) and LabVIEW. For instance, the desired test object, the attachment mechanism, useful applications, and proper software for management via the internet were prepared.
APA, Harvard, Vancouver, ISO, and other styles
46

Prasad, Badri Krishnamurthy 1959. "Experimental investigation of sleeved columns." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277134.

Full text
Abstract:
Results of experimental tests are presented for twelve 'Sleeved Column' specimens. All the specimens had an outer sleeve and an inner core, both of rectangular cross section. The outer sleeve was 23 in. long and the inner core was 23.5 in., with the axial load applied only to the core. There was a gap between the sleeve and the core for all specimens except one, which had zero gap. The parameters considered for the study were core thickness and gap. It was concluded from the study that the sleeved column system carries substantially more load than a conventional Euler column. The stiffness of the core and the gap between the sleeve and the core significantly affect the load-carrying capacity of the sleeved column system. For the same core size, specimens with the smallest gap carried more load than specimens with larger gaps.
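The benchmark against which the sleeved-column capacities are judged is the Euler load of the bare core. The following worked example uses assumed steel properties and core dimensions (not the actual specimen sizes) to show how small that baseline load is for a slender 23.5 in. core, which is why the sleeve's lateral restraint raises the capacity so markedly.

```python
import math

# Assumed pin-ended rectangular steel core (illustrative, not the test specimens).
E = 29_000_000.0        # Young's modulus [psi]
b, t = 1.0, 0.125       # core width and thickness [in]
L = 23.5                # core length [in]

I = b * t**3 / 12.0                     # weak-axis second moment of area [in^4]
P_euler = math.pi**2 * E * I / L**2     # Euler buckling load, pinned-pinned [lb]
print(f"Euler load of the bare core: {P_euler:.0f} lb")
# A sleeved column with a small gap restrains the core's lateral deflection,
# so the measured capacity exceeds this value; the margin shrinks as the gap grows.
```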
APA, Harvard, Vancouver, ISO, and other styles
47

Bousbia-Salah, Ryad. "Optimisation dynamique en temps-réel d’un procédé de polymérisation par greffage." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0242/document.

Full text
Abstract:
D'une manière schématique, l'optimisation dynamique de procédés consiste en trois étapes de base : (i) la modélisation, dans laquelle un modèle (phénoménologique) du procédé est construit, (ii) la formulation du problème, dans laquelle le critère de performance, les contraintes et les variables de décision sont définis, (iii) et la résolution, dans laquelle les profils optimaux des variables de décision sont déterminés. Il est important de souligner que ces profils optimaux garantissent l'optimalité pour le modèle mathématique utilisé. Lorsqu'ils sont appliqués au procédé, ces profils ne sont optimaux que lorsque le modèle décrit parfaitement le comportement du procédé, ce qui est très rarement le cas dans la pratique. En effet, les incertitudes sur les paramètres du modèle, les perturbations du procédé, et les erreurs structurelles du modèle font que les profils optimaux des variables de décision basés sur le modèle ne seront probablement pas optimaux pour le procédé. L'application de ces profils au procédé conduit généralement à la violation de certaines contraintes et/ou à des performances sous-optimales. Pour faire face à ces problèmes, l'optimisation dynamique en temps-réel constitue une approche tout à fait intéressante. L'idée générale de cette approche est d'utiliser les mesures expérimentales associées au modèle du procédé pour améliorer les profils des variables de décision de sorte que les conditions d'optimalité soient vérifiées sur le procédé (maximisation des performances et satisfaction des contraintes). En effet, pour un problème d'optimisation sous contraintes, les conditions d'optimalité possèdent deux parties : la faisabilité et la sensibilité. Ces deux parties nécessitent différents types de mesures expérimentales, à savoir les valeurs du critère et des contraintes, et les gradients du critère et des contraintes par rapport aux variables de décision. L'objectif de cette thèse est de développer une stratégie conceptuelle d'utilisation de ces mesures expérimentales en ligne de sorte que le procédé vérifie non seulement les conditions nécessaires, mais également les conditions suffisantes d'optimalité. Ce développement conceptuel va notamment s'appuyer sur les récents progrès en optimisation déterministe (les méthodes stochastiques ne seront pas abordées dans ce travail) de procédés basés principalement sur l'estimation des variables d'état non mesurées à l'aide d'un observateur à horizon glissant. Une méthodologie d'optimisation dynamique en temps réel (D-RTO) a été développée et appliquée à un réacteur batch dans lequel une réaction de polymérisation par greffage a lieu. L'objectif est de déterminer le profil temporel de température du réacteur qui minimise le temps opératoire tout en respectant des contraintes terminales sur le taux de conversion et l'efficacité de greffage
In a schematic way, process optimization consists of three basic steps: (i) modeling, in which a (phenomenological) model of the process is developed; (ii) problem formulation, in which the performance criterion, constraints and decision variables are defined; and (iii) resolution of the optimization problem, in which the optimal profiles of the decision variables are determined. It is important to emphasize that these optimal profiles guarantee optimality only for the model used. When applied to the process, these profiles are optimal only when the model perfectly describes the behavior of the process, which is very rarely the case in practice. Indeed, uncertainties about model parameters, process disturbances, and structural model errors mean that the optimal profiles of the model-based decision variables will probably not be optimal for the process. The objective of this thesis is to develop a conceptual strategy for using experimental measurements online so that the process satisfies not only the necessary but also the sufficient conditions of optimality. This conceptual development builds in particular on recent advances in deterministic process optimization (stochastic methods are not dealt with in this work) based on the estimation of unmeasured state variables with a moving horizon observer. A dynamic real-time optimization (D-RTO) methodology has been developed and applied to a batch reactor where polymer grafting reactions take place. The objective is to determine the on-line reactor temperature profile that minimizes the batch time while meeting terminal constraints on the overall conversion rate and grafting efficiency.
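A reduced sketch of the same dynamic optimization machinery — a parameterized temperature profile, an ODE model integrated inside the problem functions, and an NLP solver handling a terminal constraint — is given below for a generic first-order Arrhenius reaction: minimize batch time subject to a terminal conversion constraint and temperature bounds. The kinetic constants, bounds and discretization are invented; the thesis's grafting model and on-line measurement feedback are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

R, k0, Ea = 8.314, 1.0e8, 6.0e4      # invented Arrhenius parameters, k in 1/min
X_target = 0.80                      # required terminal conversion
n_seg, n_steps = 4, 400              # piecewise-constant T profile, fixed-step integrator

def terminal_conversion(z):
    """Integrate dX/dt = k(T)(1 - X) over a batch of length z[0] with T profile z[1:]."""
    t_f, T_nodes = z[0], z[1:]
    dt = t_f / n_steps
    X = 0.0
    for i in range(n_steps):
        T = T_nodes[min(int(i * n_seg / n_steps), n_seg - 1)]
        k = k0 * np.exp(-Ea / (R * T))
        X += dt * k * (1.0 - X)      # explicit Euler step
    return X

z0 = np.array([30.0] + [340.0] * n_seg)              # initial guess: 30 min at 340 K
bounds = [(1.0, 60.0)] + [(320.0, 360.0)] * n_seg
cons = {"type": "ineq", "fun": lambda z: terminal_conversion(z) - X_target}

# SQP-type solution of the discretized optimal control problem.
res = minimize(lambda z: z[0], z0, method="SLSQP", bounds=bounds, constraints=[cons])
print(f"minimum batch time : {res.x[0]:.2f} min")
print("temperature profile:", np.round(res.x[1:], 1), "K")
```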
APA, Harvard, Vancouver, ISO, and other styles
48

Yew, Zu Thur. "Enhancing the imterpretation of dynamic force spectroscopy experiments." Thesis, University of Leeds, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.531593.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Benito, David Charles. "Simulations and Experiments using a Dynamic Holographic Assembler." Thesis, University of Bristol, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.503892.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Cooper, Matthew. "Bayesian system identification for nonlinear dynamical vehicle models." Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/213212/1/Matthew_Cooper_Thesis.pdf.

Full text
Abstract:
This thesis investigates the use of novel Bayesian system identification techniques to estimate unknown parameters in nonlinear vehicle dynamics. In the first part of this thesis, a dual merging particle filter is proposed that accurately estimates non-Gaussian posterior parameter distributions for different vehicle models. In the second part of this thesis, a novel myopic sequential technique is proposed to design informative experiments for estimating the unknown parameters of a real-world robotic vehicle. This myopic technique is extended in the last part of the thesis to incorporate a rolling horizon to design superior experiments with non-myopic utilities.
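A plain bootstrap particle filter with parameter augmentation conveys the flavour of the Bayesian identification used here (the dual/merging variants and the experiment-design layer of the thesis are not reproduced). The sketch estimates a single unknown drag parameter of a toy longitudinal vehicle model from synthetic speed measurements; the model and all numbers are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy longitudinal model: v_{k+1} = v_k + dt*(u_k - c*v_k^2) + noise,
# with unknown drag parameter c. Generate synthetic "measured" speeds.
dt, c_true, n_steps = 0.1, 0.05, 200
u = 2.0 + np.sin(0.05 * np.arange(n_steps))          # known input (acceleration command)
v = np.zeros(n_steps + 1)
for k in range(n_steps):
    v[k + 1] = v[k] + dt * (u[k] - c_true * v[k] ** 2) + rng.normal(0, 0.01)
y = v[1:] + rng.normal(0, 0.05, n_steps)             # noisy speed measurements

# Bootstrap particle filter over the augmented state (v, c).
n_p = 2000
part_v = np.zeros(n_p)
part_c = rng.uniform(0.0, 0.2, n_p)                  # prior on the unknown parameter
for k in range(n_steps):
    part_c += rng.normal(0, 1e-3, n_p)               # small random walk on c
    part_v = part_v + dt * (u[k] - part_c * part_v ** 2) + rng.normal(0, 0.01, n_p)
    w = np.exp(-0.5 * ((y[k] - part_v) / 0.05) ** 2)  # measurement likelihood
    w /= w.sum()
    idx = rng.choice(n_p, size=n_p, p=w)              # multinomial resampling
    part_v, part_c = part_v[idx], part_c[idx]

print(f"posterior mean of c : {part_c.mean():.4f}  (true value {c_true})")
print(f"posterior std of c  : {part_c.std():.4f}")
```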
APA, Harvard, Vancouver, ISO, and other styles