Dissertations / Theses on the topic 'Seconds method'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Seconds method.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Badahmane, Achraf. "Méthodes de sous espaces de Krylov préconditionnées pour les problèmes de point-selle avec plusieurs seconds membres." Thesis, Littoral, 2019. http://www.theses.fr/2019DUNK0543.
Full textIn recent years there has been a surge of interest in saddle-point problems, which arise, for example, in the mechanics of fluids and solids. These problems are usually modeled by partial differential equations that are linearized and discretized. The resulting linear system is often ill-conditioned, so solving it by standard iterative methods is not appropriate; in addition, when the problem is large, it is necessary to use projection methods. This thesis develops efficient numerical methods for solving saddle-point problems. We apply Krylov subspace methods combined with suitable preconditioners for solving these types of problems. The effectiveness of these methods is illustrated by numerical experiments.
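As a rough illustration of the preconditioned-Krylov approach this abstract describes (not code from the thesis), the sketch below solves a small saddle-point system with SciPy's GMRES and an ideal block-diagonal preconditioner built from the Schur complement. The matrix sizes, blocks and preconditioner choice are illustrative assumptions:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, m = 60, 20
A = sp.diags(rng.uniform(1.0, 3.0, n)).tocsc()    # SPD (1,1) block
B = sp.csr_matrix(rng.standard_normal((m, n)))    # constraint block (full row rank a.s.)
K = sp.bmat([[A, B.T], [B, None]], format="csc")  # indefinite saddle-point matrix
b = np.ones(n + m)

# Ideal block-diagonal preconditioner diag(A, S) with Schur complement
# S = B A^{-1} B^T; a Krylov method then converges in very few iterations.
Alu = spla.splu(A)
S = B @ Alu.solve(B.T.toarray())                  # dense m-by-m Schur complement

def apply_precond(r):
    # Apply diag(A, S)^{-1} blockwise to a residual vector.
    return np.concatenate([Alu.solve(r[:n]), np.linalg.solve(S, r[n:])])

M = spla.LinearOperator((n + m, n + m), matvec=apply_precond)
x, info = spla.gmres(K, b, M=M)
res = np.linalg.norm(K @ x - b) / np.linalg.norm(b)
```

With the exact Schur complement, the preconditioned operator has only three distinct eigenvalues (a classical result for this preconditioner), which is why convergence is near-immediate; practical preconditioners replace S by a cheap approximation.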
Ekman, Filip, and Malin Molander. "Utvärdering av GNSS-baserade fri stationsetableringsmetoder : En jämförelse av realtidsuppdaterad fri station och 180-sekundersmetoden." Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-84934.
Full textThe purpose of this study is to investigate two GNSS-based methods of establishing a free total station. Due to technological advances in GNSS-based measuring, the total station is seeing less use by surveyors in the field. Despite this, there are situations where GNSS receivers might struggle and the need to use a total station arises. In these situations, there needs to be a reliable method of establishing the total station without known points and with a low uncertainty. This can be accomplished by utilizing real-time updated free station (RUFRIS) and the 180-seconds method. Both RUFRIS and the 180-seconds method are frequently used by municipalities and companies, which raises the question of which of these methods performs better. To answer this, a comparison is made between the two methods regarding their uncertainty, their user-friendliness, the situations they are best suited for, and how different time aspects might affect them. A total of 60 establishments were made over the course of three days, comparing the results to a known reference point. The results showed that RUFRIS is better suited for horizontal measurements, is quick to use and needs a larger area, while the 180-seconds method is better suited for vertical measurements, takes a bit longer and requires less space.
Slavova, Tzvetomila. "Résolution triangulaire de systèmes linéaires creux de grande taille dans un contexte parallèle multifrontal et hors-mémoire." Thesis, Toulouse, INPT, 2009. http://www.theses.fr/2009INPT016H/document.
Full textWe consider the solution of very large systems of linear equations with direct multifrontal methods. In this context the size of the factors is an important limitation for the use of sparse direct solvers. We will thus assume that the factors have been written on the local disks of our target multiprocessor machine during parallel factorization. Our main focus is the study and the design of efficient approaches for the forward and backward substitution phases after a sparse multifrontal factorization. These phases involve sparse triangular solution and have often been neglected in previous works on sparse direct factorization. In many applications, however, the time for the solution can be the main bottleneck for the performance. This thesis consists of two parts. The focus of the first part is on optimizing the out-of-core performance of the solution phase. The focus of the second part is to further improve the performance by exploiting the sparsity of the right-hand side vectors. In the first part, we describe and compare two approaches to accessing data from the hard disk. We then show that in a parallel environment the task scheduling can strongly influence the performance. We prove that a constrained ordering of the tasks is possible; it does not introduce any deadlock and it improves the performance. Experiments on large real test problems (more than 8 million unknowns) using an out-of-core version of a sparse multifrontal code called MUMPS (MUltifrontal Massively Parallel Solver) are used to analyse the behaviour of our algorithms. In the second part, we are interested in applications with sparse multiple right-hand sides, particularly those with single nonzero entries. The motivating applications arise in electromagnetism and data assimilation.
In such applications, we need either to compute the null space of a highly rank-deficient matrix or to compute entries in the inverse of a matrix associated with the normal equations of linear least-squares problems. We cast both of these problems as linear systems with multiple right-hand side vectors, each containing a single nonzero entry. We describe, implement and comment on efficient algorithms to reduce the input-output cost during an out-of-core execution. We show how the sparsity of the right-hand side can be exploited to limit both the number of operations and the amount of data accessed. The work presented in this thesis has been partially supported by the SOLSTICE ANR project (ANR-06-CIS6-010).
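The idea of exploiting a right-hand side with a single nonzero entry can be sketched for a dense lower-triangular solve: if b = e_j, the first j components of the solution are zero, so forward substitution can start at row j. This toy example only illustrates the principle; MUMPS applies the analogous pruning on the elimination tree of a sparse factorization:

```python
import numpy as np

def forward_subst_ej(L, j):
    """Solve L x = e_j for dense lower-triangular L, using the fact that
    x[:j] is identically zero, so rows above j are skipped entirely."""
    n = L.shape[0]
    x = np.zeros(n)
    x[j] = 1.0 / L[j, j]
    for i in range(j + 1, n):
        # Only columns j..i-1 can contribute, since x is zero before j.
        x[i] = -(L[i, j:i] @ x[j:i]) / L[i, i]
    return x

# Small demonstration on a random well-conditioned lower-triangular matrix.
rng = np.random.default_rng(1)
n, j = 8, 3
L = np.tril(rng.standard_normal((n, n))) + 5.0 * np.eye(n)
e_j = np.zeros(n)
e_j[j] = 1.0
x = forward_subst_ej(L, j)
```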
Karlgaard, Christopher David. "Second-Order Relative Motion Equations." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/34025.
Full textMaster of Science
Ben, Romdhane Mohamed. "Higher-Degree Immersed Finite Elements for Second-Order Elliptic Interface Problems." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/39258.
Full textPh. D.
Gulino, Sarah, and Christine Guzman. "A Comparison of Bergstrom’s 60 Second Kinetics Method with the Matzke Method of Vancomycin Kinetics." The University of Arizona, 2008. http://hdl.handle.net/10150/624272.
Full textObjectives: A novel method of predicting vancomycin trough levels at steady state was studied to determine whether it could effectively predict vancomycin trough levels compared to an established predictor method (Matzke). Methods: Adult patients who received at least two consecutive doses of vancomycin and had at least one reported vancomycin trough at steady state were considered. Data extracted and analyzed included patient gender, age, weight, height, and serum creatinine, as well as vancomycin dose and interval, number of consecutive doses prior to the trough, time between trough and preceding dose, and measured vancomycin trough level. These data were applied to each of the prediction methods to determine how accurately they predicted actual measured vancomycin trough levels at steady state. Results: Data from 103 patients were analyzed. Vancomycin trough predictions using the Bergstrom method averaged 12.2 mg/dl, with a standard deviation of 3.4. The average actual trough concentration was 10.7 mg/dl with a standard deviation of 3.9, while the Matzke method predicted an average trough concentration of 19.2 mg/dl with a standard deviation of 8.6. Predictions made using the Bergstrom method were not significantly different from the actual trough concentrations (p = 0.91). The Bergstrom method predicted concentrations within 25% of actual concentrations 42% of the time and within 50% of actual concentrations 78% of the time. Conclusions: The Bergstrom method was a more reliable predictor of vancomycin trough concentrations than the Matzke method in this patient population. Although more research is needed, the Bergstrom method may prove to be a useful tool for pharmacists to predict vancomycin trough concentrations quickly and with relative accuracy for individual patients.
Rumbe, George Otieno. "Performance evaluation of second price auction using Monte Carlo simulation." Diss., Online access via UMI:, 2007.
Find full text
Garza, Maria. "Second Language Recall in Methods of Learning." ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/6788.
Full textDobrev, Veselin Asenov. "Preconditioning of discontinuous Galerkin methods for second order elliptic problems." [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2531.
Full textEl-Sharif, Najla Saleh Ahmed. "Second-order methods for some nonlinear second-order initial-value problems with forcing." Thesis, Brunel University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309501.
Full textAuffredic, Jérémy. "A second order Runge–Kutta method for the Gatheral model." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-49170.
Full textMashalaba, Qaphela. "Implementation of numerical Fourier method for second order Taylor schemes." Master's thesis, Faculty of Commerce, 2019. http://hdl.handle.net/11427/30978.
Full textRodríguez, Cuesta Mª José. "Limit of detection for second-order calibration methods." Doctoral thesis, Universitat Rovira i Virgili, 2006. http://hdl.handle.net/10803/9013.
Full textThe lowest quantity of a substance that can be distinguished from the absence of that substance (a blank value) is called the detection limit or limit of detection (LOD). Traditionally, in the context of simple measurements where the instrumental signal depends only on the amount of analyte, the LOD is calculated as a multiple of the blank value (typically, the blank value plus three times the standard deviation of the measurement). However, the increasing complexity of the data that analytical instruments can provide for incoming samples leads to situations in which the LOD cannot be calculated as reliably as before.
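The traditional blank-based rule described above takes only a few lines to write down; the replicate blank signals and calibration slope below are hypothetical values chosen for illustration, not data from the thesis:

```python
import numpy as np

# Hypothetical replicate blank signals and calibration sensitivity (assumed values).
blanks = np.array([0.11, 0.13, 0.12, 0.10, 0.14, 0.12, 0.11, 0.13])
slope = 2.5  # signal units per concentration unit

s_blank = blanks.std(ddof=1)                 # standard deviation of the blank
y_critical = blanks.mean() + 3.0 * s_blank   # decision threshold in the signal domain
c_lod = 3.0 * s_blank / slope                # corresponding LOD in concentration units
```

A measured signal above `y_critical` is taken as evidence that the analyte is present; dividing the 3-sigma term by the calibration slope converts the threshold from signal units into a concentration-domain LOD.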
Measurements, instruments and mathematical models can be classified according to the type of data they use. Tensorial theory provides a unified language that is useful for describing the chemical measurements, analytical instruments and calibration methods. Instruments that generate two-dimensional arrays of data are second-order instruments. A typical example is a spectrofluorometer, which provides a set of emission spectra obtained at different excitation wavelengths.
The calibration methods used with each type of data have different features and complexity. In this thesis, the most commonly used calibration methods are reviewed, from zero-order (or univariate) to second-order (or multilinear) calibration models. Second-order calibration models are treated in detail since they have been applied in the thesis.
Concretely, the following methods are described:
- PARAFAC (Parallel Factor Analysis)
- ITTFA (Iterative Target Transformation Factor Analysis)
- MCR-ALS (Multivariate Curve Resolution-Alternating Least Squares)
- N-PLS (Multi-linear Partial Least Squares)
Analytical methods should be validated. The validation process typically starts by defining the scope of the analytical procedure, which includes the matrix, target analyte(s), analytical technique and intended purpose. The next step is to identify the performance characteristics that must be validated, which may depend on the purpose of the procedure, and the experiments for determining them. Finally, validation results should be documented, reviewed and maintained (if not, the procedure should be revalidated) as long as the procedure is applied in routine work.
The figures of merit of a chemical analytical process are 'those quantifiable terms which may indicate the extent of quality of the process. They include those terms that are closely related to the method and to the analyte (sensitivity, selectivity, limit of detection, limit of quantification, ...) and those which are concerned with the final results (traceability, uncertainty and representativity)' (Inczédy et al., 1998). The aim of this thesis is to develop theoretical and practical strategies for calculating the limit of detection for complex analytical situations. Specifically, I focus on second-order calibration methods, i.e. when a matrix of data is available for each sample.
The methods most often used for making detection decisions are based on statistical hypothesis testing and involve a choice between two hypotheses about the sample. The first hypothesis is the "null hypothesis": the sample is analyte-free. The second hypothesis is the "alternative hypothesis": the sample is not analyte-free. In the hypothesis test there are two possible types of decision errors. An error of the first type occurs when the signal for an analyte-free sample exceeds the critical value, leading one to conclude incorrectly that the sample contains a positive amount of the analyte. This type of error is sometimes called a "false positive". An error of the second type occurs if one concludes that a sample does not contain the analyte when it actually does and it is known as a "false negative". In zero-order calibration, this hypothesis test is applied to the confidence intervals of the calibration model to estimate the LOD as proposed by Hubaux and Vos (A. Hubaux, G. Vos, Anal. Chem. 42: 849-855, 1970).
One strategy for estimating multivariate limits of detection is to transform the multivariate model into a univariate one. This strategy has been applied in this thesis in three practical applications:
1. LOD for PARAFAC (Parallel Factor Analysis).
2. LOD for ITTFA (Iterative Target Transformation Factor Analysis).
3. LOD for MCR-ALS (Multivariate Curve Resolution - Alternating Least Squares)
In addition, the thesis includes a theoretical contribution with the proposal of a sample-dependent LOD in the context of multivariate (PLS) and multi-linear (N-PLS) Partial Least Squares.
Analytical chemistry can be divided into two types of analysis: quantitative and qualitative. Most of modern analytical chemistry is quantitative, and governments even use this science to establish regulations that control, for example, levels of exposure to toxic substances that can affect public health. The concept of the minimum amount of an analyte or component that can be detected appears in many of these regulations, generally as part of method validation, in order to guarantee the quality and validity of the results.
The minimum quantity of a substance that can be distinguished from the absence of that substance (what is known as a blank) is called the limit of detection (LOD). In procedures that work with analytical measurements due only to the amount of analyte present in the sample (a zero-order situation), the LOD can be calculated as a multiple of the blank measurement (traditionally, three times the standard deviation of that measurement). However, the evolution of analytical instruments and the growing complexity of the data they generate lead to situations in which the LOD cannot be reliably calculated in such a simple way. Measurements, instruments and calibration models can be classified according to the type of data they use. Tensorial theory has been used in this thesis to make this classification with a useful and unified language. Instruments that generate two-dimensional data are called second-order instruments; a typical example is the excitation-emission spectrofluorometer, which provides a set of emission spectra obtained at different excitation wavelengths.
The calibration methods used with each type of data have different characteristics and complexity. This thesis reviews the most common calibration models of zero order (univariate), first order (multivariate) and second order (multilinear). The second-order methods are treated in more detail since they are the ones used in the practical applications carried out.
Specifically, the following are described:
- PARAFAC (Parallel Factor Analysis)
- ITTFA (Iterative Target Transformation Factor Analysis)
- MCR-ALS (Multivariate Curve Resolution-Alternating Least Squares)
- N-PLS (Multi-linear Partial Least Squares)
As stated at the beginning, analytical methods must be validated. The validation process includes defining the scope of the analytical procedure (from the type of samples or matrices to the analyte or components of interest, the analytical technique and the purpose of the procedure). The next stage consists of identifying and estimating the figures of merit (FOM) that must be validated and, finally, documenting the validation results and maintaining them for as long as the described procedure is applied.
Some FOM of chemical measurement processes are: sensitivity, selectivity, limit of detection, accuracy, precision, etc. The main objective of this thesis is to develop theoretical and practical strategies for calculating the limit of detection for complex analytical problems. Specifically, it focuses on calibration methods that work with second-order data.
The methods most often used to define detection criteria are based on hypothesis tests and involve a choice between two hypotheses about the sample. The first is the null hypothesis: there is no analyte in the sample. The second is the alternative hypothesis: there is analyte in the sample. In this context, there are two types of decision errors. An error of the first type occurs when the sample is judged to contain analyte when it does not, and the probability of committing an error of the first type is called a false positive. An error of the second type occurs when the sample is judged not to contain analyte when it actually does, and the probability of this error is called a false negative. In zero-order calibration, this hypothesis test is applied to the confidence intervals of the calibration line to calculate the LOD by means of the formulas of Hubaux and Vos (A. Hubaux, G. Vos, Anal. Chem. 42: 849-855, 1970).
One strategy for calculating limits of detection when working with second-order data is to transform the multivariate model into a univariate one. This strategy has been used in this thesis in three different applications:
1. LOD for PARAFAC (Parallel Factor Analysis).
2. LOD for ITTFA (Iterative Target Transformation Factor Analysis).
3. LOD for MCR-ALS (Multivariate Curve Resolution - Alternating Least Squares)
In addition, the thesis includes a theoretical contribution with the proposal of a sample-specific LOD in the context of the multivariate PLS and multilinear N-PLS methods.
Snyman, H. "Second order analyses methods for stirling engine design." Thesis, Stellenbosch : University of Stellenbosch, 2007. http://hdl.handle.net/10019.1/16102.
Full text121 Leaves printed single pages, preliminary pages a-l and numbered pages 1-81.
ENGLISH ABSTRACT: In the midst of the current non-renewable energy crisis, specifically with regard to fossil fuels, various research institutions across the world have turned their focus to renewable and sustainable development. Using our available non-renewable resources as efficiently as possible has been a focal point over the past decades and will certainly remain one as long as these resources exist. Various means of utilizing the world's abundant and freely available renewable energy have been studied, and some have been introduced and installed as sustainable energy sources; electricity generation by means of wind-powered turbines, photovoltaic cells, and tidal and wave energy are but a few examples. Modern photovoltaic cells are known to have a solar-to-electricity conversion efficiency of 12% (Van Heerden, 2003), while wind turbines have an approximate wind-to-electricity conversion efficiency of 50% (Twele et al., 2002). This low solar-to-electricity conversion efficiency, together with the fact that renewable energy research is a relatively modern development, led to the investigation of methods capable of higher solar-to-electricity conversion efficiencies. One such method could be to use the relatively old technology of the Stirling cycle, developed in the early 1800s (solar-to-electricity conversion efficiency in the range of 20-24% according to Van Heerden, 2003). The Stirling cycle provides a method for converting thermal energy to mechanical power, which can be used to generate electricity. One of the main advantages of Stirling machines is that they are capable of using any form of heat source, ranging from solar to biomass and waste heat. This document provides a discussion of some of the available methods for the analysis of Stirling machines. The six (6) different methods considered include: the methods of Beale and West, the mean-pressure-power formula (MPPF), and the Schmidt, ideal adiabatic and simple analysis methods.
The first three (3) are known to be good back-of-the-envelope methods, specifically for application as synthesis tools during the initialisation of design procedures, while the latter three (3) are analysis tools finding application during Stirling engine design and analysis procedures. These analysis methods are based on the work done by Berchowitz and Urieli (1984) and form the centre of this document. The sections that follow provide a discussion of the mathematical model as well as its MATLAB implementation. Experimental tests were conducted on the Heinrici engine to verify the simulated results. Shortcomings of these analysis methods are also discussed. Recommendations regarding improvements to the simulation program, possible fields of application for Stirling technology, as well as future fields of study are made in the final chapter of this document. A review of relevant literature regarding modern applications of Stirling technology, listings of companies currently manufacturing and developing Stirling machines, and findings of research done at various other institutions are provided.
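The Beale and West back-of-the-envelope estimates named in this abstract reduce to one-line formulas. A minimal sketch, assuming the commonly quoted coefficients Bn ≈ 0.15 and Wn ≈ 0.25 and SI units; the engine figures in the example are invented for illustration, not taken from the thesis:

```python
def beale_power(p_mean, freq, v_swept, beale_number=0.15):
    """Beale estimate: P ~ Bn * p_mean * f * V_swept (SI units: Pa, Hz, m^3 -> W)."""
    return beale_number * p_mean * freq * v_swept

def west_power(p_mean, freq, v_swept, t_hot, t_cold, west_number=0.25):
    """West estimate: the Beale form scaled by the temperature ratio (Th - Tk) / (Th + Tk)."""
    return west_number * p_mean * freq * v_swept * (t_hot - t_cold) / (t_hot + t_cold)

# Illustrative engine: 5 MPa mean pressure, 25 Hz, 120 cm^3 swept volume, 900 K / 300 K.
p_beale = beale_power(5e6, 25.0, 120e-6)               # -> 2250.0 W
p_west = west_power(5e6, 25.0, 120e-6, 900.0, 300.0)   # -> 1875.0 W
```

The West form refines the Beale estimate by penalising engines with a small hot-to-cold temperature difference, which is why its prediction here is lower.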
AFRIKAANSE OPSOMMING (translated): The rate at which the world's non-renewable energy resources have been depleted in recent years has led to an increasing focus on the development of renewable alternatives. More efficient use of the world's non-renewable energy has been a focal point for researchers across the world for the past decades. The earth's abundant renewable energy sources are already being exploited by various methods; the conversion of wind, solar and tidal energy into electricity are but a few examples. The solar-to-electricity conversion efficiency of modern photovoltaic cells is in the order of 12% (Van Heerden, 2003), while the efficiency of wind-to-electricity conversion is in the order of 50% (Twele et al., 2002). This relatively low solar-to-electricity conversion efficiency, together with the fact that the renewable industry is still relatively young, leads to the search for other, more efficient possibilities. The Stirling cycle is not a modern concept, but its application in the renewable energy industry is a relatively new one, especially in terms of converting solar power into electricity (an average solar-to-electricity conversion efficiency in the order of 20-24% was found by Van Heerden, 2003). The conversion of thermal energy into mechanical energy is the main outcome of the Stirling cycle, although it also finds application in the refrigeration industry. The fact that the Stirling cycle can use any form of thermal energy (e.g. solar, biomass, as well as heat produced as a by-product of certain processes) is one of the reasons that make the technology so attractive, specifically with regard to the renewable energy sector. Six (6) methods for analysing the Stirling cycle are discussed in this document, including the Beale, West, mean-pressure-power (MPPF), Schmidt, adiabatic and simple analysis methods. The first three (3) are handy calculation methods during the initiation and synthesis phases of Stirling engine design, while the last three (3) are more focused on the full design and analysis phases of the Stirling engine design process. The three (3) analysis methods are based on the work done by Berchowitz and Urieli (1984) and form the core of this document. The mathematical model, its implementation in MATLAB, as well as the experimental verification of the results are discussed. Shortcomings of the analysis methods are also addressed in each chapter. Possible improvements with respect to the various assumptions are addressed in the final chapter of the document, where several proposed directions for future research projects are also mentioned. A short overview of the relevant literature on current applications of Stirling technology, as well as the names of companies currently developing and manufacturing these technologies, is given.
Werner, Catherine. "An Innovative method for measuring cortical acetylcholine release in animals on a second-by-second time scale." Connect to resource, 2006. http://hdl.handle.net/1811/6619.
Full textTitle from first page of PDF file. Document formatted into pages: contains 50 p.; also includes graphics. Includes bibliographical references (p. 28-37). Available online via Ohio State University's Knowledge Bank.
Beamis, Christopher Paul 1960. "Solution of second order differential equations using the Godunov integration method." Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/277319.
Full textDzacka, Charles Nunya. "A Variation of the Carleman Embedding Method for Second Order Systems." Digital Commons @ East Tennessee State University, 2009. https://dc.etsu.edu/etd/1877.
Full textBakhsh, Jameel. "SECOND LANGUAGE LEARNERS UNDERGOING CULTURE SHOCK:PERCEPTIONS OF ENGLISH LANGUAGE TEACHING METHOD." Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent160042669071272.
Full textVie, Jean-Léopold. "Second-order derivatives for shape optimization with a level-set method." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1072/document.
Full textThe main purpose of this thesis is the definition of a shape optimization method which combines second-order differentiation with the representation of a shape by a level-set function. A second-order method is first designed for simple shape optimization problems: a thickness parametrization and a discrete optimization problem. This work is divided into four parts. The first one is bibliographical and contains the different backgrounds necessary for the rest of the work. Chapter 1 presents the classical results for general optimization, notably the quadratic rate of convergence of second-order methods in well-suited cases. Chapter 2 is a review of the different modelings for shape optimization, while Chapter 3 details two particular modelings: the thickness parametrization and the geometric modeling. The level-set method is presented in Chapter 4, and Chapter 5 recalls the basics of the finite element method. The second part opens with Chapters 6 and 7, which detail the calculation of second-order derivatives for the thickness parametrization and the geometric shape modeling. These chapters also focus on the particular structures of the second-order derivative. Chapter 8 is then concerned with the computation of discrete derivatives for shape optimization. Finally, Chapter 9 deals with different methods for approximating a second-order derivative and the definition of a second-order algorithm in a general modeling. It is also the occasion for a few numerical experiments with the thickness (defined in Chapter 6) and the discrete (defined in Chapter 8) modelings. The third part is devoted to the geometric modeling for shape optimization. It starts with the definition of a new framework for shape differentiation in Chapter 10 and a resulting second-order method. This new framework for shape derivatives deals with normal evolutions of a shape given by an eikonal equation, as in the level-set method.
Chapter 11 is dedicated to the numerical computation of shape derivatives, and Chapter 12 contains different numerical experiments. Finally, the last part of this work is about the numerical analysis of shape optimization algorithms based on the level-set method. Chapter 13 is concerned with a complete discretization of a shape optimization algorithm. Chapter 14 then analyses the numerical schemes for the level-set method and the numerical error they may introduce. Finally, Chapter 15 completely details a one-dimensional shape optimization example, with an error analysis on the rates of convergence.
Andrade, Prashant William. "Implementation of second-order absorbing boundary conditions in frequency-domain computations /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.
Full textKim, Taejong. "Mesh independent convergence of modified inexact Newton methods for second order nonlinear problems." Texas A&M University, 2003. http://hdl.handle.net/1969.1/3870.
Full textCrews, Hugh Bates. "Fast FSR Methods for Second-Order Linear Regression Models." NCSU, 2008. http://www.lib.ncsu.edu/theses/available/etd-04282008-151809/.
Full textBrabazon, Keeran J. "Multigrid methods for nonlinear second order partial differential operators." Thesis, University of Leeds, 2014. http://etheses.whiterose.ac.uk/8481/.
Full textBooth, Andrew S. "Collocation methods for a class of second order initial value problems with oscillatory solutions." Thesis, Durham University, 1993. http://etheses.dur.ac.uk/5664/.
Full textZhang, Chun Yang. "A second order ADI method for 2D parabolic equations with mixed derivative." Thesis, University of Macau, 2012. http://umaclib3.umac.mo/record=b2592940.
Full textDunn, Kyle George. "An Integral Equation Method for Solving Second-Order Viscoelastic Cell Motility Models." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/578.
Full textChibi, Ahmed-Salah. "Defect correction and Galerkin's method for second-order elliptic boundary value problems." Thesis, Imperial College London, 1989. http://hdl.handle.net/10044/1/47378.
Full textKusama, Koichi. "Bilingual method in CALL software : the role of L1 in CALL software for reading." Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.247831.
Full textYildirim, Ufuk. "Assessment Of Second-order Analysis Methods Presented In Design Codes." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610498/index.pdf.
Full text#948
and P-&
#916
effects. In addition, the approximate methods defined in AISC 2005 (B1 &ndash
B2 Method), and TS648 (1980) will be discussed in detail. Then, example problems will be solved for the demonstration of theoretical formulations for members with and without end translation cases. Also, the results obtained from the structural analysis software, SAP2000, will be compared with the results acquired from the exact and the approximate methods. Finally, conclusions related to the study will be stated.
Abuazoum, Latifa Abdalla. "Advanced model updating methods for generally damped second order systems." Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/12063/.
Full text
Spencer, Dawna. "Visual and auditory metalinguistic methods for Spanish second language acquisition." Connect online, 2008. http://library2.up.edu/theses/2008_spencerd.pdf.
Full text
Fleming, Cliona Mary. "Second order chemometric methods and the analysis of complex data /." Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/8552.
Full text
Hazda, Jakub. "Analýza Stirlingova oběhu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231710.
Full text
Armstrong, Robert A. 1969. "The fifth competence : discovering the self through intensive second language immersion." Thesis, McGill University, 1999. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=30142.
Full text
Richards, Jeffrey Robert. "The Natural Approach and the Audiolingual Method: A Question of Student Gains and Retention." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4696.
Full text
Riches, Caroline. "The development of mother tongue and second language reading in two bilingual education contexts /." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=37819.
Full text
The research involved two Grade 1 classes, mainly comparing the language of initial formal reading instruction. One site was a French immersion school offering a 50% English/50% French program in which initial formal reading instruction was in English. The second site was a French school with a majority of anglophone students, where initial formal reading instruction was in French. The participants in this study were 12 children from each class, their parents, and the classroom teachers.
Three main tools of inquiry were used: classroom observations were carried out in each of the two classes during the Grade 1 school year; samples of oral reading and retellings, in English and in French, were collected from the participating children for miscue analysis, and informal interviews were conducted with all the participants.
The analysis revealed that regardless of the language of initial formal reading instruction, the children's reading abilities developed in both languages. Children tended to feel more comfortable reading in the language in which they had been formally instructed but, despite this, meaning-construction was more effective in the mother tongue. Differences in reading abilities for both groups could be accounted for by limitations in knowledge of the second language rather than by language of initial instruction. Finally, children with initial formal reading instruction in the second language easily applied their reading abilities to reading in their mother tongue.
The conclusions drawn from this inquiry are that having supportive home and community environments, exemplary teachers and constructive classroom environments enables children to use their creative abilities and language resources to make sense of reading in two languages. It is the continuities and connections between these elements which enables children to transcend any difficulties arising from the fact that reading is being encountered in two languages.
Comas, Lou Enric. "Application of the generalized rank annihilation method (GRAM) to second-order liquid chromatographic data." Doctoral thesis, Universitat Rovira i Virgili, 2005. http://hdl.handle.net/10803/8995.
Full text
In this thesis, second-order data were used, obtained with a high-performance liquid chromatograph with a diode array detector (DAD).
The HPLC-DAD instrument is quite common. Even so, the data recorded by the instrument are normally not all used to determine the concentration of the analytes of interest. The spectral mode is used only to identify the analytes or to verify peak purity, while the peak area or height is used for quantification by univariate calibration. This way of working is very useful as long as the measured response is selective for the analyte of interest.
When analyzing environmental pollutants in complex samples, such as river water samples, it is not easy to obtain selective measurements. When the responses are not selective, second-order calibration methods (those that use second-order data) can be used to quantify the analyte of interest.
This thesis is based on the study of the properties of the second-order calibration method known as the Generalized Rank Annihilation Method (GRAM). This method was developed in the mid-1980s and has very attractive properties:
1) To determine the concentration of the analyte of interest in a test sample, only one calibration sample, or standard, is needed.
2) Selective measurements are not needed, so the separation time can be reduced considerably.
Even so, GRAM has a series of limitations that keep it from being applied routinely. The objective of the thesis is to study the advantages and limitations of GRAM and to improve the necessary aspects so that it can be applied routinely.
To use GRAM, the experimental data must meet a series of mathematical requirements: (i) the measured response must be the sum of the responses of the different analytes, and (ii) the response of an analyte must be proportional across the different samples: the analyte must elute at exactly the same retention time in the standard and in the test sample. If this requirement is not met, the GRAM predictions are biased.
Ways of overcoming these difficulties have been developed. A method for aligning chromatographic peaks has been developed, based on a curve resolution method (Iterative Target Transformation Factor Analysis, ITTFA). In HPLC-DAD systems, it is quite common for the peaks of the analyte of interest to elute at different retention times.
The differences are not very large (a few seconds), but they can be enough to make the GRAM results incorrect.
GRAM is a factor-based method, and the number of factors must be supplied to compute a model. A graphical method has been developed for choosing the number of factors used to compute the GRAM model. It is based on a parameter of the GRAM algorithm (α).
Finally, a criterion for detecting discrepant samples (outliers) has been developed.
The criterion developed for detecting outliers is based on the Net Analyte Signal (NAS).
All of the above has been applied to real cases, specifically the analysis of naphthalenesulfonates and polar pollutants present in water samples from both rivers and treatment plants. This demonstrated the usefulness of GRAM in chromatography and allowed GRAM to be compared with other second-order calibration methods such as PARAFAC and MCR-ALS. All three methods were found to give comparable results.
Analytical measurements and the instruments that generate them can be classified according to the number of data points obtained when a sample is measured. When a matrix of responses is obtained, the data are known as second-order data.
In this thesis, second-order data were used, obtained from a high-performance liquid chromatograph (HPLC) coupled with a diode array detector (DAD). This instrument is quite common in analytical laboratories. However, the concentration of the analytes of interest is normally found without using all the measured data. The spectral mode is used only to identify the analytes or to verify peak purity, whereas the area or the height of the peak is used for quantification by univariate calibration. This is a very useful strategy. However, the measured response must be selective for the analyte of interest.
When environmental pollutants are analyzed in complex samples, such as water samples, it is not so easy to obtain selective measurements. When the responses are not selective, the analyte of interest can still be quantified by using second-order calibration methods, which are the methods that use second-order data.
This thesis is based on the study of the properties of the second-order calibration method Generalized Rank Annihilation Method (GRAM).
This method was developed in the mid eighties and has very attractive properties:
1) To determine the concentration of the analyte of interest in a test sample, only one calibration sample, or standard, is necessary.
2) Selective measurements are not necessary, which means the separation time can be reduced.
Despite these advantages, GRAM has some limitations that prevent it from being applied routinely. The objectives of the thesis are to study the advantages and limitations of GRAM and to improve the weak points so that GRAM can be applied routinely.
To use GRAM, the experimental data must fulfil some mathematical requirements: (i) the measured response must be the sum of the responses of the different analytes in the peak, and (ii) the response of the analyte must be proportional across the different samples: the analyte of interest must elute at the same retention time both in the calibration and in the test sample. When these conditions are not met, the GRAM predictions are biased.
Mathematical algorithms have been developed to overcome such difficulties. An algorithm to align chromatographic peaks has been developed, based on a curve resolution method (Iterative Target Transformation Factor Analysis, ITTFA). In HPLC-DAD systems it is quite common for the peaks of the analyte of interest to elute at different retention times in the calibration and in the test sample. Even though the differences are not big (a few seconds), they can be enough to make the GRAM results incorrect.
GRAM is a factor-based calibration method, and the number of factors has to be introduced as an input to build a GRAM model. A graphical criterion has been developed to determine the number of factors, based on a parameter of the GRAM algorithm (α).
Finally, a criterion to detect outlying samples has been developed, which is based on the Net Analyte Signal (NAS).
All of the above was applied to real cases, specifically the analysis of aromatic sulfonates and polar pollutants in water from river samples and wastewater plants. We were able to show the applicability of GRAM and to compare GRAM with other second-order calibration methods, such as PARAFAC and MCR-ALS. We found that the three methods provided comparable results.
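As a concrete illustration of the GRAM calibration described above, here is a minimal numerical sketch of the classic GRAM eigenproblem applied to simulated bilinear HPLC-DAD data. This is not code from the thesis: the simulation, the function names, and this particular SVD-based formulation (eigenvalues of the standard projected into the joint factor space) are illustrative assumptions.

```python
import numpy as np

# Simulated HPLC-DAD data: each sample is a (time x wavelength) matrix,
# a bilinear sum over analytes: D = sum_k c_k * profile_k (outer) spectrum_k.
t = np.linspace(0.0, 1.0, 60)   # elution-time axis
w = np.linspace(0.0, 1.0, 40)   # wavelength axis

def gauss(center, width, axis):
    return np.exp(-0.5 * ((axis - center) / width) ** 2)

# Two co-eluting analytes, i.e. a non-selective response
profiles = np.stack([gauss(0.45, 0.08, t), gauss(0.55, 0.08, t)])
spectra = np.stack([gauss(0.30, 0.10, w), gauss(0.60, 0.10, w)])

c_std = np.array([1.0, 1.0])    # concentrations in the calibration standard
c_test = np.array([0.4, 2.5])   # "unknown" concentrations in the test sample

def bilinear(conc):
    return np.einsum("k,kt,kw->tw", conc, profiles, spectra)

N = bilinear(c_std)    # standard (calibration) data matrix
M = bilinear(c_test)   # test-sample data matrix

def gram(M, N, n_factors):
    """GRAM sketch: per-analyte concentration ratios c_test / c_std."""
    U, s, Vt = np.linalg.svd(M + N)                 # joint factor space
    U, s, Vt = U[:, :n_factors], s[:n_factors], Vt[:n_factors]
    N_proj = U.T @ N @ Vt.T / s                     # standard, projected
    lam = np.linalg.eigvals(N_proj)                 # lam = c_std/(c_std+c_test)
    return np.sort((1.0 - lam.real) / lam.real)

print(gram(M, N, n_factors=2))
```

With one standard and one test matrix, the eigenvalues recover the per-analyte concentration ratios even though the two peaks fully overlap (no selective channel is needed), which is precisely the property the abstract highlights; the bilinearity and identical-retention-time requirements are what make this eigenproblem valid.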
Clack, Jhules. "Theoretical Analysis for Moving Least Square Method with Second Order Pseudo-Derivatives and Stabilization." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1418910272.
Full text
Chen, Xiaobo. "Etude des reponses du second ordre d'une structure soumise a une houle aleatoire." Nantes, 1988. http://www.theses.fr/1988NANT2040.
Full text
Ritter, Baird S. "Solution strategies for second order, nonlinear, one dimensional, two point boundary value problems by FEM analysis." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA246063.
Full text
Thesis Advisor: Salinas, D. "December 1990." Description based on title screen as viewed on April 1, 2010. DTIC Identifier(s): Boundary value problems, finite element analysis, differential equations, problem solving, theses, interpolation, iterations, one dimensional, computer programs, approximation/mathematics, linearity. Author(s) subject terms: Galerkin FEM, nonlinear, quasilinearization, linearization, interpolation, iteration, differential equation, convergence. Includes bibliographical references (p. 164). Also available in print.
Duncan, Fraser Andrew. "Monte Carlo simulation and aspects of the magnetostatic design of the TRIUMF second arm spectrometer." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28378.
Full text
Faculty of Science, Department of Physics and Astronomy (Graduate).
KORTE, MATTHEW. "Corpus Methods in Interlanguage Analysis." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1218835515.
Full text
Gorokhov, Alexei. "Séparation autodidacte de mélanges convolutifs : methodes du second ordre." Paris, ENST, 1997. http://www.theses.fr/1997ENST0007.
Full text
Haug, Nils Arne. "An investigation into applicability of second temple period Jewish Hermeneutical Methodologies to the interpretation of popular eschatology." Thesis, University of Zululand, 2003. http://hdl.handle.net/10530/1339.
Full text
This study endeavours to ascertain whether or not the eschatological scenarios propounded by certain writers of highly influential and popular "end-time" texts are biblically sustainable, according to the hermeneutical methods employed by them. Firstly, the hermeneutical methods utilised by Christianity's exegetical predecessors, namely the rabbinical Pharisees and the Qumran sectaries of the Second Temple period, are considered. Such methods, and the eschatological convictions ensuing therefrom, are apparent from canonical and non-canonical literature relevant to these two groups. Thereafter, the applicability of these methods to a Second Testament context is examined, the rationale being that if the use of such methods is significantly evident in the Second Testament, then they should, it is proposed, be germane to Christian scholars of both earlier and modern times, since Christianity arose from the matrix of early Judaism. This is particularly so as regards the writers of popular eschatology, whose end-time positions are then examined in the light of early Jewish hermeneutical methods and their own interpretative stance. The conclusion is reached that the Second Testament does reflect extensive use of the hermeneutical methods of early Judaism and that, consequently, subsequent Christian scholars should endorse these methods. It appears, though, that Christians through the ages have ignored such methods. It is further concluded that the main eschatological issues promoted by the popularisers cannot easily be defended solely through the use of the exegetical methods employed by them. However, it is submitted that many such issues can be substantially justified through the use of traditional Jewish hermeneutical methods, as employed by the Second Testament redactors and Jesus himself.
Davis, Benjamin J. "A study into discontinuous Galerkin methods for the second order wave equation." Thesis, Monterey, California: Naval Postgraduate School, 2015. http://hdl.handle.net/10945/45836.
Full text
There are numerous numerical methods for solving different types of partial differential equations (PDEs) that describe the physical dynamics of the world. For instance, PDEs are used to understand fluid flow for aerodynamics, wave dynamics for seismic exploration, and orbital mechanics. The goal of these numerical methods is to approximate the solution to a continuous PDE with an accurate discrete representation. The focus of this thesis is to explore a new Discontinuous Galerkin (DG) method for approximating the second order wave equation in complex geometries with curved elements. We begin by briefly highlighting some of the numerical methods used to solve PDEs and discuss the necessary concepts to understand DG methods. These concepts are used to develop a one- and two-dimensional DG method with an upwind flux, boundary conditions, and curved elements. We demonstrate convergence numerically and prove discrete stability of the method through an energy analysis.
Faulk, Songhui. "Exploring alternative methods for teaching English as a second language in Korea." CSUSB ScholarWorks, 1999. https://scholarworks.lib.csusb.edu/etd-project/1639.
Full text
Etcheverlepo, Adrien. "Développement de méthodes de domaines fictifs au second ordre." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00821897.
Full text
Maxwell, Wendy. "Evaluating the effectiveness of the accelerative integrated method for teaching French as a second language." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ58676.pdf.
Full text
Kilty, Susanna. "Supporting a project method for teaching English as a second language in the senior years." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ62767.pdf.
Full text
Särkkä, J. (Jussi). "A novel method for hazard rate estimates of the second level interconnections in infrastructure electronics." Doctoral thesis, University of Oulu, 2008. http://urn.fi/urn:isbn:9789514288197.
Full text
Abstract: The stresses an electronic device experiences are determined by the compatibility of its materials and by its operating environment. Stresses on the device's components or their interconnections eventually cause the device to fail. The failure frequency is affected not only by the level and type of stress but also by the properties of the device's materials. The actual failure rate, however, depends on further parameters as well, which is why failure predictions can be inaccurate. For this reason there is a need for a practical method of assessing interconnection failures. This thesis presents a new method for assessing the solder interconnections of components, with which the results of accelerated stress testing can be converted into a hazard rate estimate for the device's actual operating environment. The method uses board-level stress test results for package-type-specific failure predictions, and it can be applied as such to assess the reliability of component packages in real stress or product environments. The thesis also introduces a new method for determining the costs of failed interconnections, which in turn helps in estimating the total cost of a new interconnection technology. In addition, the work shows that the costs caused by interconnection failures can be very high if the stresses imposed on the solder joints exceed the endurance the joints were designed for. With lead-free electronics manufacturing, the compatibility of the component, solder, and printed circuit board materials becomes more critical. The thesis shows that the use of incompatible materials in components can lead to excessive warpage of the component during convection reflow soldering, and a method for estimating component warpage as a function of temperature is presented.
In summary, this thesis presents a new method for assessing the failure of component solder interconnections and the effect of such failures on total product costs. The method is based on accelerated stress test results, which can be used to assess solder interconnection failures under the product's actual operating conditions. The influence of the solder material and solder pad dimensioning on solder joint reliability has also been evaluated.
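The central step described in the abstract above, converting accelerated stress-test results into a field hazard-rate estimate, usually passes through an acceleration factor. The thesis's own model is not reproduced here; the sketch below instead uses the widely cited Norris-Landzberg model for solder-joint thermal cycling, and every condition and constant in it is an illustrative assumption.

```python
import math

def norris_landzberg_af(dT_test, dT_field, f_test, f_field,
                        Tmax_test, Tmax_field,
                        n=1.9, m=1.0 / 3.0, Ea_over_k=1414.0):
    """Norris-Landzberg acceleration factor between a thermal-cycling test
    and field conditions. Temperatures are in kelvin, f in cycles per day;
    n, m, and Ea_over_k are solder-dependent model constants."""
    return ((dT_test / dT_field) ** n
            * (f_field / f_test) ** m
            * math.exp(Ea_over_k * (1.0 / Tmax_field - 1.0 / Tmax_test)))

# Illustrative conditions: -40/+125 C cycling at 24 cycles/day in the test,
# versus a 30 K daily swing peaking at 45 C in the field.
AF = norris_landzberg_af(dT_test=165.0, dT_field=30.0,
                         f_test=24.0, f_field=1.0,
                         Tmax_test=398.0, Tmax_field=318.0)
print(AF)

# One test cycle then "uses up" roughly AF field cycles, so a characteristic
# life of 3000 test cycles maps to about 3000 * AF cycles in the field.
```

Scaling board-level test lives by such a factor, and dividing by the cycles a product accumulates per unit time in service, yields the kind of package-specific field hazard-rate estimate the abstract describes.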