
Dissertations / Theses on the topic 'Seconds method'



Consult the top 50 dissertations / theses for your research on the topic 'Seconds method.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Badahmane, Achraf. "Méthodes de sous espaces de Krylov préconditionnées pour les problèmes de point-selle avec plusieurs seconds membres." Thesis, Littoral, 2019. http://www.theses.fr/2019DUNK0543.

Full text
Abstract:
In recent years, saddle-point problems have received particular attention. The mechanics of fluids and solids, for example, often leads to saddle-point problems. These problems are usually expressed as partial differential equations that we linearize and discretize, and the resulting linear problem is often ill-conditioned, so solving it with standard iterative methods is not appropriate. Moreover, when the problem is large, projection methods become necessary. The aim of this thesis is to develop robust and efficient numerical methods for solving saddle-point problems. We apply Krylov subspace methods, combined with preconditioning techniques well suited to saddle-point problems, and the effectiveness of these methods is illustrated by numerical experiments.
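As a rough illustration of the linear algebra involved (and not the thesis's own preconditioners), the sketch below assembles a small saddle-point system and solves it with a preconditioned Krylov method from SciPy; the matrices A and B are generic placeholders.

```python
# Minimal sketch (not the thesis's preconditioners): solve a saddle-point system
#   [ A  B^T ] [u]   [f]
#   [ B  0   ] [p] = [g]
# with a Krylov method (GMRES) and a simple block-diagonal preconditioner.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 200, 50
rng = np.random.default_rng(0)

# A: symmetric positive definite (a discrete Laplacian-like matrix), B: generic coupling
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
B = sp.csr_matrix(rng.standard_normal((m, n)) / np.sqrt(n))

K = sp.bmat([[A, B.T], [B, None]], format="csr")   # saddle-point matrix
rhs = rng.standard_normal(n + m)                   # one right-hand side

# Block-diagonal preconditioner diag(A, S_hat) with a crude Schur-complement guess S_hat = I
A_lu = spla.splu(A.tocsc())

def apply_M(v):
    return np.concatenate([A_lu.solve(v[:n]), v[n:]])

M = spla.LinearOperator((n + m, n + m), matvec=apply_M)
x, info = spla.gmres(K, rhs, M=M)
print("GMRES converged" if info == 0 else f"GMRES info = {info}",
      "; residual =", np.linalg.norm(K @ x - rhs))
```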
2

Ekman, Filip, and Malin Molander. "Utvärdering av GNSS-baserade fri stationsetableringsmetoder : En jämförelse av realtidsuppdaterad fri station och 180-sekundersmetoden." Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-84934.

Full text
Abstract:
The need for total-station surveying has decreased in many areas in favour of GNSS-based techniques, as a result of their greater flexibility and often acceptable uncertainty. GNSS-based measurement can, however, be limited by various factors, which creates a continued need for the total station. A total station is traditionally established over known points, but when these are not available, other establishment methods with low uncertainty are required. The purpose of this study is to investigate two GNSS-based free-station establishment methods, both of which are frequently used by municipalities and companies, which raises the question of which performs better. Real-time updated free station (RUFRIS) is based on combined measurement, where the coordinates of at least 15 backsight targets are measured with NRTK while the total station measures distance and direction between the station and each target. The 180-seconds method is based on continuous measurement for three minutes at a minimum of three points, which are then used as backsight targets during the free-station establishment. Over three days, measurement data were collected from a total of 60 establishments at Skålsjön in Ovanåker municipality, 30 per method, carried out alternately so that both methods were subject to the same temporal conditions. The site was chosen because of a nearby high-quality control point and a realistic measurement environment. The collected data were processed and evaluated with respect to spread and measurement uncertainty, and a time analysis was also carried out. The individual standard uncertainty for RUFRIS was calculated to be 6.7 mm horizontally and 15 mm in height; for the 180-seconds method the standard uncertainty was 10 mm horizontally and 7.2 mm in height. According to the position check performed in the study, only RUFRIS met the calculated horizontal tolerance, while only the 180-seconds method was within the height tolerance, although RUFRIS also met the height tolerance once all gross errors were excluded from the calculation. The conclusion drawn is that RUFRIS is well suited to measurement situations focused on horizontal position in areas with good visibility; it is quick to use but needs a larger area. The 180-seconds method is better suited to height measurement, takes somewhat longer and requires less space, and can potentially be an alternative to levelling when the height tolerance is within 10 mm. The measurements were carried out under good ionospheric conditions, so the disturbances experienced in obtaining fixed solutions and measurements were assumed not to originate from that error source; measurement uncertainty did, however, increase during heavy snowfall, suggesting that the weather affected the results. In summary, both methods have their strengths and weaknesses, but neither proved more suitable than the other when the establishment is intended for measurement in both plan and height.
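A minimal sketch, with made-up deviations, of how the horizontal and height standard uncertainties quoted above can be computed from repeated establishments against a known reference point:

```python
# Minimal sketch with hypothetical numbers: horizontal ("plan") and height
# standard uncertainty of repeated free-station establishments relative to a
# known reference point, in the spirit of the spread analysis described above.
import numpy as np

rng = np.random.default_rng(1)
dN = rng.normal(0.0, 0.005, 30)   # north deviations of 30 establishments (m)
dE = rng.normal(0.0, 0.005, 30)   # east deviations (m)
dH = rng.normal(0.0, 0.010, 30)   # height deviations (m)

u_N = np.std(dN, ddof=1)
u_E = np.std(dE, ddof=1)
u_plan = np.hypot(u_N, u_E)       # combined horizontal standard uncertainty
u_height = np.std(dH, ddof=1)

print(f"u_plan   = {1000 * u_plan:.1f} mm")
print(f"u_height = {1000 * u_height:.1f} mm")
```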
3

Slavova, Tzvetomila. "Résolution triangulaire de systèmes linéaires creux de grande taille dans un contexte parallèle multifrontal et hors-mémoire." Thesis, Toulouse, INPT, 2009. http://www.theses.fr/2009INPT016H/document.

Full text
Abstract:
We consider the solution of very large sparse systems of linear equations by direct multifrontal methods. In this context the size of the factors is a major limitation on the use of sparse direct solvers. We therefore assume that the factors are too large to be held in the main memory of the multiprocessor machine and have been written to its local disks (out-of-core, OOC) during the parallel factorization. Our main focus is the study and design of efficient approaches for the forward and backward substitution phases after a sparse multifrontal factorization. These phases involve sparse triangular solution and have often been neglected in previous work on sparse direct methods, although in many applications the solution time can be the main performance bottleneck, sometimes even more critical than the factorization itself. This thesis consists of two parts. The first part focuses on optimizing the out-of-core performance of the solution phase. The second part further improves performance by exploiting the sparsity of the right-hand-side vectors. In the first part, we describe and compare two approaches for accessing data on the hard disk. We then show that, in a parallel environment, task scheduling can strongly influence performance, and we prove that a constrained ordering of the tasks can be introduced, that it does not cause deadlock between processes, and that it improves performance. Experiments on large industrial problems (more than 8 million unknowns), using an out-of-core version of the sparse multifrontal code MUMPS (MUltifrontal Massively Parallel Solver), are used to analyse the behaviour of our algorithms. In the second part, we are interested in applications with multiple sparse right-hand sides, particularly those with a single nonzero entry per right-hand side. The motivating applications arise in electromagnetism and data assimilation, and result from the need to compute the null space of a highly rank-deficient matrix, to compute entries of the inverse of the matrix associated with the normal equations of linear least-squares problems, or to treat highly reducible matrices in linear programming. We cast these problems as linear systems with multiple right-hand-side vectors, each containing a single nonzero entry. We describe, implement and discuss efficient algorithms to reduce the input/output cost during an out-of-core execution, and show more generally how the sparsity of the right-hand sides can be exploited to limit both the number of operations and the amount of data accessed during the solution phase. The work presented in this thesis was partially supported by the ANR project SOLSTICE (ANR-06-CIS6-010).
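The following sketch illustrates, on a toy factor, the reachability argument behind exploiting a right-hand side with a single nonzero entry: only the unknowns reachable from that entry through the nonzero pattern of L are touched during the forward solve. It is a simplified illustration of the idea, not the MUMPS implementation.

```python
# Minimal sketch (not MUMPS itself): for a lower-triangular sparse factor L and a
# right-hand side b with a single nonzero, the forward solve only touches the
# unknowns reachable from that nonzero through the nonzero pattern of L
# (Gilbert's reachability result).  Restricting work to this set is one way the
# sparsity of the right-hand side reduces operations and data accessed.
import numpy as np
import scipy.sparse as sp

def reach(L_csc, seed_rows):
    """Rows of x that can be nonzero in L x = b when b is nonzero only on seed_rows."""
    n = L_csc.shape[0]
    visited = np.zeros(n, dtype=bool)
    stack = list(seed_rows)
    while stack:
        j = stack.pop()
        if visited[j]:
            continue
        visited[j] = True
        # column j of L couples x_j to the rows below it
        rows = L_csc.indices[L_csc.indptr[j]:L_csc.indptr[j + 1]]
        stack.extend(r for r in rows if r != j and not visited[r])
    return np.flatnonzero(visited)

# Small example: a sparse unit lower-triangular factor
L = sp.csc_matrix(np.array([[1, 0, 0, 0, 0],
                            [2, 1, 0, 0, 0],
                            [0, 0, 1, 0, 0],
                            [0, 3, 0, 1, 0],
                            [0, 0, 4, 0, 1]], dtype=float))
b_nonzero = [2]                                      # b has a single nonzero in row 2
print("rows touched by the forward solve:", reach(L, b_nonzero))   # -> [2 4]
```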
4

Karlgaard, Christopher David. "Second-Order Relative Motion Equations." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/34025.

Full text
Abstract:
This thesis presents an approximate solution of second order relative motion equations. The equations of motion for a Keplerian orbit in spherical coordinates are expanded in Taylor series form using reference conditions consistent with that of a circular orbit. Only terms that are linear or quadratic in state variables are kept in the expansion. A perturbation method is employed to obtain an approximate solution of the resulting nonlinear differential equations. This new solution is compared with the previously known solution of the linear case to show improvement, and with numerical integration of the quadratic differential equation to understand the error incurred by the approximation. In all cases, the comparison is made by computing the difference of the approximate state (analytical or numerical) from numerical integration of the full nonlinear Keplerian equations of motion.
Master of Science
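For reference, the previously known linear solution mentioned above is the classical Hill / Clohessy-Wiltshire system for a circular reference orbit, quoted here for context (x radial, y along-track, z cross-track, n the mean motion); the thesis retains the quadratic terms that are dropped in these equations.

```latex
% Linearized relative-motion (Hill / Clohessy-Wiltshire) equations about a
% circular reference orbit with mean motion n.
\begin{aligned}
\ddot{x} - 2n\dot{y} - 3n^{2}x &= 0,\\
\ddot{y} + 2n\dot{x} &= 0,\\
\ddot{z} + n^{2}z &= 0.
\end{aligned}
```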
5

Ben, Romdhane Mohamed. "Higher-Degree Immersed Finite Elements for Second-Order Elliptic Interface Problems." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/39258.

Full text
Abstract:
A wide range of applications involve interface problems. In most of the cases, mathematical modeling of these interface problems leads to partial differential equations with non-smooth or discontinuous inputs and solutions, especially across material interfaces. Different numerical methods have been developed to solve these kinds of problems and handle the non-smooth behavior of the input data and/or the solution across the interface. The main focus of our work is the immersed finite element method to obtain optimal numerical solutions for interface problems. In this thesis, we present piecewise quadratic immersed finite element (IFE) spaces that are used with an immersed finite element (IFE) method with interior penalty (IP) for solving two-dimensional second-order elliptic interface problems without requiring the mesh to be aligned with the material interfaces. An analysis of the constructed IFE spaces and their dimensions is presented. Shape functions of Lagrange and hierarchical types are constructed for these spaces, and a proof for the existence is established. The interpolation errors in the proposed piecewise quadratic spaces yield optimal O(h³) and O(h²) convergence rates, respectively, in the L² and broken H¹ norms under mesh refinement. Furthermore, numerical results are presented to validate our theory and show the optimality of our quadratic IFE method. Our approach in this thesis is, first, to establish a theory for the simplified case of a linear interface. After that, we extend the framework to quadratic interfaces. We, then, describe a general procedure for handling arbitrary interfaces occurring in real physical practical applications and present computational examples showing the optimality of the proposed method. Furthermore, we investigate a general procedure for extending our quadratic IFE spaces to p-th degree and construct hierarchical shape functions for p=3.
Ph. D.
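A minimal sketch, with hypothetical error values, of how observed convergence orders such as the O(h³) and O(h²) rates quoted above are typically checked from errors on two successively refined meshes:

```python
# Minimal sketch: estimate an observed convergence order from the errors on two
# successively refined meshes.  The error values below are made up.
import math

def observed_order(e_coarse, e_fine, h_coarse, h_fine):
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Hypothetical L2 interpolation errors on meshes with h and h/2
print(observed_order(e_coarse=2.4e-4, e_fine=3.1e-5, h_coarse=0.1, h_fine=0.05))
# ~2.95, consistent with third-order convergence
```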
6

Gulino, Sarah, and Christine Guzman. "A Comparison of Bergstrom’s 60 Second Kinetics Method with the Matzke Method of Vancomycin Kinetics." The University of Arizona, 2008. http://hdl.handle.net/10150/624272.

Full text
Abstract:
Class of 2008 Abstract
Objectives: A novel method of predicting vancomycin trough levels at steady state was studied to determine whether it could effectively predict vancomycin trough levels compared to an established predictor method (Matzke). Methods: Adult patients who received at least two consecutive doses of vancomycin and had at least one reported vancomycin trough at steady state were considered. Data extracted and analyzed included patient gender, age, weight, height, and serum creatinine as well as vancomycin dose and interval, number of consecutive doses prior to the trough, time between trough and preceding dose, and measured vancomycin trough level. This data was applied to each of the prediction methods to determine how accurately they predicted actual measured vancomycin trough levels at steady state. Results: Data from 103 patients was analyzed. Vancomycin trough predictions using the Bergstrom method averaged 12.2 mg/dl, with a standard deviation of 3.4. The average actual trough concentration was 10.7 mg/dl with a standard deviation of 3.9, while the Matzke method predicted an average trough concentration of 19.2 mg/dl with a standard deviation of 8.6. Predictions made using the Bergstrom Method were not significantly different than the actual trough concentrations (p = 0.91). The Bergstrom method predicted concentrations within 25% of actual concentrations 42% of the time and within 50% of actual concentrations 78% of the time. Conclusions: The Bergstrom method was a more reliable predictor of vancomycin trough concentrations than the Matzke method in this patient population. Although more research is needed, the Bergstrom method may prove to be a useful tool for pharmacists to predict vancomycin trough concentrations quickly and with relative accuracy for individual patients.
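A minimal sketch, with hypothetical concentrations, of the kind of agreement summary reported above: mean and standard deviation of the predictions, and the fraction falling within 25% and 50% of the measured troughs.

```python
# Minimal sketch with hypothetical numbers (not the study data): summary
# statistics comparing predicted and measured vancomycin trough concentrations.
import numpy as np

measured  = np.array([9.8, 12.1, 7.5, 14.0, 10.3, 11.2])   # hypothetical troughs
predicted = np.array([11.0, 13.5, 9.0, 12.2, 12.8, 10.1])  # hypothetical predictions

rel_err = np.abs(predicted - measured) / measured
print("mean prediction:", predicted.mean(), " SD:", predicted.std(ddof=1))
print("within 25% of measured:", np.mean(rel_err <= 0.25))
print("within 50% of measured:", np.mean(rel_err <= 0.50))
```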
7

Rumbe, George Otieno. "Performance evaluation of second price auction using Monte Carlo simulation." Diss., Online access via UMI:, 2007.

Find full text
8

Garza, Maria. "Second Language Recall in Methods of Learning." ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/6788.

Full text
Abstract:
This dissertation examined the relationship between the acquisition and recall of English language vocabulary. This study explored 2 different learning recall strategies to determine which approach was the quickest or more efficient way to remember vocabulary words. Previous researchers had focused on learning a second language phonetically and had not explored different instructional strategies to study the most useful or quickest way to learn a second language for adults. However, there remains an important gap in the current research regarding how to present different methods of instruction to acquire a new second language more rapidly. The purpose of this study was to determine which method was easier and quicker to assist the second language learner to recall and acquire vocabulary. The sample came from 3 different adult second language classrooms. The participants completed a pretest to assess their English word knowledge before the treatment. The participants had a timed 15-min or 30-min period to learn the cards for recall using flash cards with words only or with words and pictures. Once the period was over, the participants completed a posttest measure of language acquisition. There were no statistically significant differences in posttest scores based on method of learning, length of time for learning, or the interaction between the two. The results of the study added to the research on determining whether different instructional methods assisted an adult second language learner to acquire a second language more swiftly.
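A minimal sketch, with hypothetical data and column names, of the two-way analysis implied above (learning method, study time and their interaction as factors for the posttest score), using statsmodels:

```python
# Minimal sketch with hypothetical data and column names: two-way ANOVA of
# posttest scores by learning method, study time and their interaction.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "score":   [12, 15, 11, 14, 13, 16, 10, 12, 14, 15, 13, 11],
    "method":  ["words", "words", "pictures", "pictures"] * 3,
    "minutes": [15, 30] * 6,
})

model = ols("score ~ C(method) * C(minutes)", data=df).fit()
print(anova_lm(model, typ=2))   # F tests for method, time and their interaction
```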
9

Dobrev, Veselin Asenov. "Preconditioning of discontinuous Galerkin methods for second order elliptic problems." [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2531.

Full text
10

El-Sharif, Najla Saleh Ahmed. "Second-order methods for some nonlinear second-order initial-value problems with forcing." Thesis, Brunel University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309501.

Full text
11

Auffredic, Jérémy. "A second order Runge–Kutta method for the Gatheral model." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-49170.

Full text
Abstract:
In this thesis, our research focuses on a weak second-order stochastic Runge–Kutta method applied to a system of stochastic differential equations known as the Gatheral model. We approximate numerical solutions to this system and investigate the rate of convergence of our method. Both call and put options are priced using Monte Carlo simulation to investigate the order of convergence. The numerical results show that our method is consistent with the theoretical order of convergence of the Monte Carlo simulation. However, in terms of the Runge–Kutta method itself, we cannot confirm consistency with the theoretical order of convergence without further research.
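As a simplified stand-in (Black-Scholes dynamics rather than the Gatheral model, and plain Monte Carlo rather than the thesis's stochastic Runge-Kutta scheme), the sketch below shows the Monte Carlo pricing step and its comparison against closed-form call and put values:

```python
# Minimal sketch, using Black-Scholes dynamics as a stand-in for the Gatheral
# model: price a European call and put by Monte Carlo and compare with the
# closed-form values.
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
rng = np.random.default_rng(0)
n_paths = 1_000_000

Z = rng.standard_normal(n_paths)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)  # exact GBM sample
disc = np.exp(-r * T)
call_mc = disc * np.maximum(ST - K, 0.0).mean()
put_mc = disc * np.maximum(K - ST, 0.0).mean()

d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
call_bs = S0 * norm.cdf(d1) - K * disc * norm.cdf(d2)
put_bs = K * disc * norm.cdf(-d2) - S0 * norm.cdf(-d1)

print(f"call: MC {call_mc:.3f} vs closed form {call_bs:.3f}")
print(f"put : MC {put_mc:.3f} vs closed form {put_bs:.3f}")
```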
12

Mashalaba, Qaphela. "Implementation of numerical Fourier method for second order Taylor schemes." Master's thesis, Faculty of Commerce, 2019. http://hdl.handle.net/11427/30978.

Full text
Abstract:
The problem of pricing contingent claims in a complete market has received a significant amount of attention in the literature since the seminal work of Black and Scholes (1973). It was also in 1973 that the theory of backward stochastic differential equations (BSDEs) was developed by Bismut (1973), but it was much later that BSDEs developed links to contingent claim pricing. This dissertation is a thorough exposition of the survey paper by Ruijter and Oosterlee (2016), in which a highly accurate and efficient Fourier pricing technique compatible with BSDEs is developed and implemented. We demonstrate our understanding of this technique by reproducing some of the numerical experiments and results of Ruijter and Oosterlee (2016) and outlining some key implementation considerations.
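A minimal sketch of the Fourier-cosine (COS) expansion idea at the heart of such Fourier pricing techniques: a density is recovered on a truncated interval from its characteristic function. The standard normal is used so the result can be checked; this is not the full BSDE pricing scheme of the dissertation.

```python
# Minimal sketch of a Fourier-cosine (COS) expansion: recover a density on
# [a, b] from its characteristic function and compare with the exact density.
import numpy as np

def cos_density(phi, x, a, b, n_terms):
    """COS reconstruction of a density from its characteristic function phi."""
    k = np.arange(n_terms)
    u = k * np.pi / (b - a)
    coeffs = (2.0 / (b - a)) * np.real(phi(u) * np.exp(-1j * u * a))
    coeffs[0] *= 0.5                                   # first term gets weight 1/2
    return coeffs @ np.cos(np.outer(u, x - a))

phi_normal = lambda u: np.exp(-0.5 * u**2)             # N(0,1) characteristic function
x = np.linspace(-3, 3, 7)
approx = cos_density(phi_normal, x, a=-10.0, b=10.0, n_terms=64)
exact = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(approx - exact)))                  # should be tiny
```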
13

Rodríguez, Cuesta Mª José. "Limit of detection for second-order calibration methods." Doctoral thesis, Universitat Rovira i Virgili, 2006. http://hdl.handle.net/10803/9013.

Full text
Abstract:
Analytical chemistry can be split into two main types, qualitative and quantitative. Most modern analytical chemistry is quantitative. Popular sensitivity to health issues is aroused by the mountains of government regulations that use science to, for instance, provide public health information to prevent disease caused by harmful exposure to toxic substances. The concept of the minimum amount of an analyte or compound that can be detected or analysed appears in many of these regulations (for example, to discard the presence of traces of toxic substances in foodstuffs) generally as a part of method validation aimed at reliably evaluating the validity of the measurements.

The lowest quantity of a substance that can be distinguished from the absence of that substance (a blank value) is called the detection limit or limit of detection (LOD). Traditionally, in the context of simple measurements where the instrumental signal only depends on the amount of analyte, a multiple of the blank value is taken to calculate the LOD (traditionally, the blank value plus three times the standard deviation of the measurement). However, the increasing complexity of the data that analytical instruments can provide for incoming samples leads to situations in which the LOD cannot be calculated as reliably as before.
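A minimal sketch of the traditional zero-order rule described above (blank mean plus three blank standard deviations, converted to a concentration through a hypothetical calibration slope); the thesis itself goes well beyond this simple situation.

```python
# Minimal sketch of the classical univariate LOD rule: decision level = blank
# mean + 3 * blank standard deviation, converted to concentration through the
# slope of a (hypothetical) calibration line.
import numpy as np

blank_signals = np.array([0.51, 0.48, 0.50, 0.53, 0.49, 0.52])   # made-up blanks
conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])                       # calibration levels
signal = np.array([0.50, 1.55, 2.48, 4.61, 8.70])                # measured responses

slope, intercept = np.polyfit(conc, signal, 1)
s_blank = blank_signals.std(ddof=1)
lod = 3.0 * s_blank / slope        # concentration whose signal sits 3*s above the blank
print(f"LOD ~ {lod:.3f} concentration units")
```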

Measurements, instruments and mathematical models can be classified according to the type of data they use. Tensorial theory provides a unified language that is useful for describing the chemical measurements, analytical instruments and calibration methods. Instruments that generate two-dimensional arrays of data are second-order instruments. A typical example is a spectrofluorometer, which provides a set of emission spectra obtained at different excitation wavelengths.

The calibration methods used with each type of data have different features and complexity. In this thesis, the most commonly used calibration methods are reviewed, from zero-order (or univariate) to second-order (or multilinear) calibration models. Second-order calibration models are treated in detail since they have been applied in the thesis.

Concretely, the following methods are described:
- PARAFAC (Parallel Factor Analysis)
- ITTFA (Iterative Target Transformation Factor Analysis)
- MCR-ALS (Multivariate Curve Resolution-Alternating Least Squares)
- N-PLS (Multi-linear Partial Least Squares)

Analytical methods should be validated. The validation process typically starts by defining the scope of the analytical procedure, which includes the matrix, target analyte(s), analytical technique and intended purpose. The next step is to identify the performance characteristics that must be validated, which may depend on the purpose of the procedure, and the experiments for determining them. Finally, validation results should be documented, reviewed and maintained (if not, the procedure should be revalidated) as long as the procedure is applied in routine work.

The figures of merit of a chemical analytical process are 'those quantifiable terms which may indicate the extent of quality of the process. They include those terms that are closely related to the method and to the analyte (sensitivity, selectivity, limit of detection, limit of quantification, ...) and those which are concerned with the final results (traceability, uncertainty and representativity)' (Inczédy et al., 1998). The aim of this thesis is to develop theoretical and practical strategies for calculating the limit of detection for complex analytical situations. Specifically, I focus on second-order calibration methods, i.e. when a matrix of data is available for each sample.

The methods most often used for making detection decisions are based on statistical hypothesis testing and involve a choice between two hypotheses about the sample. The first hypothesis is the "null hypothesis": the sample is analyte-free. The second hypothesis is the "alternative hypothesis": the sample is not analyte-free. In the hypothesis test there are two possible types of decision errors. An error of the first type occurs when the signal for an analyte-free sample exceeds the critical value, leading one to conclude incorrectly that the sample contains a positive amount of the analyte. This type of error is sometimes called a "false positive". An error of the second type occurs if one concludes that a sample does not contain the analyte when it actually does and it is known as a "false negative". In zero-order calibration, this hypothesis test is applied to the confidence intervals of the calibration model to estimate the LOD as proposed by Hubaux and Vos (A. Hubaux, G. Vos, Anal. Chem. 42: 849-855, 1970).

One strategy for estimating multivariate limits of detection is to transform the multivariate model into a univariate one. This strategy has been applied in this thesis in three practical applications:
1. LOD for PARAFAC (Parallel Factor Analysis).
2. LOD for ITTFA (Iterative Target Transformation Factor Analysis).
3. LOD for MCR-ALS (Multivariate Curve Resolution - Alternating Least Squares)

In addition, the thesis includes a theoretical contribution with the proposal of a sample-dependent LOD in the context of multivariate (PLS) and multi-linear (N-PLS) Partial Least Squares.
14

Snyman, H. "Second order analyses methods for stirling engine design." Thesis, Stellenbosch : University of Stellenbosch, 2007. http://hdl.handle.net/10019.1/16102.

Full text
Abstract:
Thesis (MScIng (Mechanical Engineering))--University of Stellenbosch, 2007.
121 leaves printed on single pages, preliminary pages a-l and numbered pages 1-81.
ENGLISH ABSTRACT: In the midst of the current non-renewable energy crisis, specifically with regard to fossil fuels, various research institutions across the world have turned their focus to renewable and sustainable development. Using our available non-renewable resources as efficiently as possible has been a focal point for the past decades and will certainly remain so as long as these resources exist. Various means of utilising the world's abundant and freely available renewable energy have been studied, and some have been introduced and installed as sustainable energy sources; electricity generation by means of wind-powered turbines, photovoltaic cells, and tidal and wave energy are but a few examples. Modern photovoltaic cells are known to have a solar-to-electricity conversion efficiency of 12% (Van Heerden, 2003), while wind turbines have an approximate wind-to-electricity conversion efficiency of 50% (Twele et al., 2002). This low solar-to-electricity conversion efficiency, together with the fact that renewable energy research is a relatively modern development, led to the investigation of methods capable of higher solar-to-electricity conversion efficiencies. One such method could be to use the relatively old technology of the Stirling cycle, developed in the early 1800s (solar-to-electricity conversion efficiency in the range of 20-24% according to Van Heerden, 2003). The Stirling cycle provides a method for converting thermal energy to mechanical power which can be used to generate electricity. One of the main advantages of Stirling machines is that they are capable of using any form of heat source, ranging from solar to biomass and waste heat. This document provides a discussion of some of the available methods for the analysis of Stirling machines. The six different methods considered include the Beale, West, mean-pressure-power-formula (MPPF), Schmidt, ideal adiabatic and simple analysis methods. The first three are known to be good back-of-the-envelope methods, specifically for application as synthesis tools during the initialisation of design procedures, while the latter three are analysis tools finding application during Stirling engine design and analysis procedures. These analysis methods are based on the work done by Berchowitz and Urieli (1984) and form the centre of this document. Sections to follow provide a discussion of the mathematical model as well as its MATLAB implementation. Experimental tests were conducted on the Heinrici engine to provide verification of the simulated results. Shortcomings of these analysis methods are also discussed in the sections to follow. Recommendations regarding improvements of the simulation program, possible fields of application for Stirling technology, as well as future fields of study, are made in the final chapter of this document. A review of relevant literature regarding modern applications of Stirling technology, listings of companies currently manufacturing and developing Stirling machines, and findings of research done at various other institutions are provided.
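A minimal sketch of the two simplest back-of-the-envelope estimates mentioned above, the Beale and West power correlations. The functional forms and the typical coefficient values (about 0.15 and 0.25) are the commonly quoted ones and are assumptions here; the thesis should be consulted for the exact versions used.

```python
# Minimal back-of-the-envelope sketch of the Beale and West power estimates.
# Assumed forms: P = Bn * p_mean * f * Vsw and
#                P = Wn * p_mean * f * Vsw * (Th - Tk) / (Th + Tk),
# with commonly quoted coefficients Bn ~ 0.15 and Wn ~ 0.25 (assumptions).
def beale_power(p_mean_pa, freq_hz, v_swept_m3, beale_number=0.15):
    return beale_number * p_mean_pa * freq_hz * v_swept_m3

def west_power(p_mean_pa, freq_hz, v_swept_m3, t_hot_k, t_cold_k, west_number=0.25):
    return (west_number * p_mean_pa * freq_hz * v_swept_m3
            * (t_hot_k - t_cold_k) / (t_hot_k + t_cold_k))

# Hypothetical small engine: 10 bar mean pressure, 25 Hz, 100 cm^3 swept volume
p_mean, f, v_sw = 10e5, 25.0, 100e-6
print("Beale estimate:", beale_power(p_mean, f, v_sw), "W")
print("West estimate :", west_power(p_mean, f, v_sw, t_hot_k=900.0, t_cold_k=330.0), "W")
```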
15

Werner, Catherine. "An Innovative method for measuring cortical acetylcholine release in animals on a second-by-second time scale." Connect to resource, 2006. http://hdl.handle.net/1811/6619.

Full text
Abstract:
Thesis (Honors)--Ohio State University, 2006.
Title from first page of PDF file. Document formatted into pages: contains 50 p.; also includes graphics. Includes bibliographical references (p. 28-37). Available online via Ohio State University's Knowledge Bank.
16

Beamis, Christopher Paul 1960. "Solution of second order differential equations using the Godunov integration method." Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/277319.

Full text
Abstract:
This MS Thesis proposes the use of an integration technique due to Godunov for the direct numerical solution of systems of second order differential equations. This method is to be used instead of the conventional technique of separating each second order equation into two first order equations and then solving the resulting system with one of the many methods available for systems of first order differential equations. Stability domains and expressions for the truncation error will be developed for this method when it is used to solve the wave equation, a passive mechanical system, and a passive electrical circuit. It will be shown both analytically and experimentally that the Godunov method compares favorably with the Adams-Bashforth third order method when used to solve both the wave equation and the mechanical system, but that there are potential problems when this method is used to simulate electrical circuits which result in integro-differential equations.
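A minimal sketch of the conventional approach the thesis compares against: the second-order equation is split into a first-order system and integrated with the third-order Adams-Bashforth method. A simple harmonic oscillator stands in for the wave, mechanical and electrical test systems studied in the thesis.

```python
# Minimal sketch of the conventional approach: rewrite y'' = -omega^2 y as a
# first-order system and integrate it with third-order Adams-Bashforth (AB3),
# bootstrapping the first two steps with classical RK4.
import numpy as np

omega = 2.0
def f(t, y):                       # y = [u, v]:  u' = v,  v' = -omega^2 u
    return np.array([y[1], -omega**2 * y[0]])

h, n_steps = 0.01, 1000
t = 0.0
y = np.array([1.0, 0.0])           # u(0) = 1, u'(0) = 0
hist = [f(t, y)]

for _ in range(2):                 # two RK4 startup steps
    k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
    k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
    y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
    hist.append(f(t, y))

# AB3: y_{n+1} = y_n + h*(23 f_n - 16 f_{n-1} + 5 f_{n-2})/12
for _ in range(n_steps - 2):
    y = y + h*(23*hist[-1] - 16*hist[-2] + 5*hist[-3])/12
    t += h
    hist.append(f(t, y))

print("numerical u(T):", y[0], " exact:", np.cos(omega * t))
```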
17

Dzacka, Charles Nunya. "A Variation of the Carleman Embedding Method for Second Order Systems." Digital Commons @ East Tennessee State University, 2009. https://dc.etsu.edu/etd/1877.

Full text
Abstract:
The Carleman Embedding is a method that allows us to embed a finite dimensional system of nonlinear differential equations into a system of infinite dimensional linear differential equations. This technique works well when dealing with first-order nonlinear differential equations. However, for higher order nonlinear ordinary differential equations, it is difficult to use the Carleman Embedding method. This project will examine the Carleman Embedding and a variation of the method which is very convenient to apply to second-order systems of nonlinear equations.
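A minimal sketch of the Carleman embedding for a simple first-order scalar equation, x' = ax + bx²: the moments y_k = x^k satisfy an infinite linear system that is truncated and solved with a matrix exponential. The coefficients and truncation level below are arbitrary choices for illustration.

```python
# Minimal sketch of the Carleman embedding for x' = a*x + b*x^2: the moments
# y_k = x^k satisfy y_k' = a*k*y_k + b*k*y_{k+1}; truncate at k = N, solve the
# resulting linear system with a matrix exponential and compare with a direct
# numerical solution.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

a, b, x0, t_final, N = -1.0, 0.5, 0.4, 2.0, 12

A = np.zeros((N, N))               # truncated Carleman matrix
for k in range(1, N + 1):
    A[k - 1, k - 1] = a * k
    if k < N:
        A[k - 1, k] = b * k        # coupling to the next moment (dropped at k = N)

y0 = np.array([x0**k for k in range(1, N + 1)])
x_carleman = (expm(A * t_final) @ y0)[0]           # first component approximates x(t)

sol = solve_ivp(lambda t, x: a * x + b * x**2, (0.0, t_final), [x0],
                rtol=1e-10, atol=1e-12)
print("Carleman:", x_carleman, " reference:", sol.y[0, -1])
```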
18

Bakhsh, Jameel. "SECOND LANGUAGE LEARNERS UNDERGOING CULTURE SHOCK: PERCEPTIONS OF ENGLISH LANGUAGE TEACHING METHOD." Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent160042669071272.

Full text
19

Vie, Jean-Léopold. "Second-order derivatives for shape optimization with a level-set method." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1072/document.

Full text
Abstract:
The main purpose of this thesis is the definition of a shape optimization method which combines second-order differentiation with the representation of a shape by a level-set function. A second-order method is first designed for simple shape optimization problems: a thickness parametrization and a discrete optimization problem. This work is divided into four parts. The first is bibliographical and contains the background needed for the rest of the work. Chapter 1 presents classical results for general optimization, notably the quadratic rate of convergence of second-order methods in well-suited cases. Chapter 2 reviews the different modelings for shape optimization, while Chapter 3 details two particular modelings: the thickness parametrization and the geometric modeling. The level-set method is presented in Chapter 4, and Chapter 5 recalls the basics of the finite element method. The second part opens with Chapters 6 and 7, which detail the calculation of second-order derivatives for the thickness parametrization and for the geometric shape modeling; these chapters also focus on the particular structure of the second-order derivative. Chapter 8 is concerned with the computation of discrete derivatives for shape optimization. Chapter 9 deals with different methods for approximating a second-order derivative and with the definition of a second-order algorithm in a general setting; it is also the occasion for first numerical experiments with the thickness modeling (defined in Chapter 6) and the discrete modeling (defined in Chapter 8). The third part is devoted to the geometric modeling for shape optimization. It starts with the definition of a new framework for shape differentiation in Chapter 10 and a resulting second-order method; this framework handles normal evolutions of a shape given by an eikonal equation, as in the level-set method. Chapter 11 is dedicated to the numerical computation of shape derivatives, and Chapter 12 contains numerical experiments. The last part concerns the numerical analysis of shape optimization algorithms based on the level-set method: Chapter 13 gives a complete discretization of a shape optimization algorithm, Chapter 14 analyses the numerical schemes for the level-set method and the numerical error they may introduce, and Chapter 15 works through a one-dimensional shape optimization example in full, with an error analysis of the rates of convergence.
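For context, the normal evolution referred to above is governed by the standard level-set transport equation, quoted here in its usual form; V is the normal speed and the shape is the region where the level-set function is negative.

```latex
% Level-set evolution in the normal direction with speed V (Hamilton-Jacobi /
% eikonal-type transport), the kind of evolution on which the thesis builds its
% second-order framework.
\frac{\partial \phi}{\partial t}(t,x) + V(t,x)\,\lvert \nabla \phi(t,x) \rvert = 0 .
```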
20

Andrade, Prashant William. "Implementation of second-order absorbing boundary conditions in frequency-domain computations /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
21

Kim, Taejong. "Mesh independent convergence of modified inexact Newton methods for second order nonlinear problems." Texas A&M University, 2003. http://hdl.handle.net/1969.1/3870.

Full text
Abstract:
In this dissertation, we consider modified inexact Newton methods applied to second order nonlinear problems. In the implementation of Newton's method applied to problems with a large number of degrees of freedom, it is often necessary to solve the linear Jacobian system iteratively. Although a general theory for the convergence of modified inexact Newton's methods has been developed, its application to nonlinear problems from nonlinear PDEs is far from complete. The case where the nonlinear operator is a zeroth order perturbation of a fixed linear operator was considered in the paper by Brown et al. The goal of this dissertation is to show that one can develop modified inexact Newton's methods which converge at a rate independent of the number of unknowns for problems with higher order nonlinearities. To do this, we are required first to set up the problem on a scale of Hilbert spaces, and second to devise a special iterative technique which converges in a higher order Sobolev norm, i.e., H^{1+α}(Ω) ∩ H^1_0(Ω) with 0 < α < 1/2. We show that the linear system solved in Newton's method can be replaced with one iterative step provided that the initial iterate is close enough. The closeness criterion can be taken independent of the mesh size. In addition, we obtain the same convergence rates of the method in the norm of H^1_0(Ω) using the discrete Sobolev inequalities.
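A minimal sketch of a modified/inexact Newton iteration in which the Jacobian system is solved only approximately by a fixed number of damped Jacobi sweeps; the small algebraic test problem is an illustration, not the PDE setting of the dissertation.

```python
# Minimal sketch of an inexact Newton iteration: the Jacobian system is solved
# only approximately, here by a fixed number of damped-Jacobi sweeps.  Test
# problem: F(x) = A x + x^3 - b = 0 with a componentwise cube (a toy system).
import numpy as np

n = 50
A = 3.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # diagonally dominant matrix
b = np.ones(n)

def F(x):
    return A @ x + x**3 - b

def J(x):
    return A + np.diag(3.0 * x**2)

def inexact_solve(Jmat, rhs, sweeps=20):
    """Approximate solution of Jmat s = rhs by damped Jacobi iterations."""
    D = np.diag(Jmat)
    s = np.zeros_like(rhs)
    for _ in range(sweeps):
        s = s + 0.6 * (rhs - Jmat @ s) / D
    return s

x = np.zeros(n)
for it in range(30):
    r = F(x)
    if np.linalg.norm(r) < 1e-10:
        break
    x = x + inexact_solve(J(x), -r)      # modified Newton step with inexact solve
print("outer iterations:", it, " residual:", np.linalg.norm(F(x)))
```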
22

Crews, Hugh Bates. "Fast FSR Methods for Second-Order Linear Regression Models." NCSU, 2008. http://www.lib.ncsu.edu/theses/available/etd-04282008-151809/.

Full text
Abstract:
Many variable selection techniques have been developed that focus on first-order linear regression models. In some applications, such as modeling response surfaces, fitting second-order terms can improve predictive accuracy. However, the number of spurious interactions can be large leading to poor results with many methods. We focus on forward selection, describing algorithms that use the natural hierarchy existing in second-order linear regression models to limit spurious interactions. We then develop stopping rules by extending False Selection Rate methodology to these algorithms. In addition, we describe alternative estimation methods for fitting regression models including the LASSO, CART, and MARS. We also propose a general method for controlling multiple-group false selection rates, which we apply to second-order linear regression models. By estimating a separate entry level for first-order and second-order terms, we obtain equal contributions to the false selection rate from each group. We compare the methods via Monte Carlo simulation and apply them to optimizing response surface experimental designs.
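A minimal sketch of forward selection that respects the second-order hierarchy described above: an interaction or square may enter only after its parent main effects are in the model. The stopping rule here is simply a fixed number of steps, not the False Selection Rate criterion developed in the thesis.

```python
# Minimal sketch: forward selection for a second-order linear model in which an
# interaction x_i*x_j (or square x_i^2) is eligible only once its parent main
# effect(s) are already in the model.  Fixed number of steps, synthetic data.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.standard_normal((n, p))
y = 2*X[:, 0] - 1.5*X[:, 1] + 1.0*X[:, 0]*X[:, 1] + 0.1*rng.standard_normal(n)

def rss(cols):
    A = np.column_stack([np.ones(n)] + [c for _, c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ beta) ** 2)

main = {(i,): X[:, i] for i in range(p)}
second = {(i, j): X[:, i] * X[:, j]
          for i, j in itertools.combinations_with_replacement(range(p), 2)}

selected = []
for _ in range(4):                              # fixed number of steps for the sketch
    in_model = {t for term, _ in selected for t in term}
    chosen_keys = [t for t, _ in selected]
    candidates = [(k, v) for k, v in main.items() if k not in chosen_keys]
    candidates += [(k, v) for k, v in second.items()
                   if set(k) <= in_model and k not in chosen_keys]
    best = min(candidates, key=lambda kv: rss(selected + [kv]))
    selected.append(best)
    print("added term", best[0], " RSS =", round(rss(selected), 3))
```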
23

Brabazon, Keeran J. "Multigrid methods for nonlinear second order partial differential operators." Thesis, University of Leeds, 2014. http://etheses.whiterose.ac.uk/8481/.

Full text
Abstract:
This thesis is concerned with the efficient numerical solution of nonlinear partial differential equations (PDEs) of elliptic and parabolic type. Such PDEs arise frequently in models used to describe many physical phenomena, from the diffusion of a toxin in soil to the flow of viscous fluids. The main focus of this research is to better understand the implementation and performance of nonlinear multigrid methods for the solution of elliptic and parabolic PDEs, following their discretisation. For the most part finite element discretisations are considered, but other techniques are also discussed. Following discretisation of a PDE the two most frequently used nonlinear multigrid methods are Newton-Multigrid and the Full Approximation Scheme (FAS). These are both very efficient algorithms, and have the advantage that when they are applied to practical problems, their execution times scale linearly with the size of the problem being solved. Even though this has yet to be proved in theory for most problems, these methods have been widely adopted in practice in order to solve highly complex nonlinear (systems of) PDEs. Many research groups use either Newton-MG or FAS without much consideration as to which should be preferred, since both algorithms perform satisfactorily. In this thesis we address the question as to which method is likely to be more computationally efficient in practice. As part of this investigation the implementation of the algorithms is considered in a framework which allows the direct comparison of the computational effort of the two iterations. As well as this, the convergence properties of the methods are considered, applied to a variety of model problems. Extensive results are presented in the comparison, which are explained by available theory whenever possible. The strength and range of results presented allows us to confidently conclude that for a practical problem, discretised using a finite element discretisation, an improved efficiency and stability of a Newton-MG iteration, compared to an FAS iteration, is likely to be observed. The relative advantage of a Newton-MG method is likely to be larger the more complex the problem being solved becomes.
24

Booth, Andrew S. "Collocation methods for a class of second order initial value problems with oscillatory solutions." Thesis, Durham University, 1993. http://etheses.dur.ac.uk/5664/.

Full text
Abstract:
We derive and analyse two families of multistep collocation methods for periodic initial-value problems of the form y'' = f(x, y); y(x0) = y0, y'(x0) = z0, involving ordinary differential equations of second order in which the first derivative does not appear explicitly. A survey of recent results and proposed numerical methods is given in chapter 2. Chapter 3 is devoted to the analysis of a family of implicit Chebyshev methods proposed by Panovsky & Richardson. We show that for each non-negative integer r, there are two methods of order 2r from this family which possess non-vanishing intervals of periodicity. The equivalence of these methods with one-step collocation methods is also established, and these methods are shown to be neither P-stable nor symplectic. In chapters 4 and 5, two families of multistep collocation methods are derived, and their order and stability properties are investigated. A detailed analysis of the two-step symmetric methods from each class is also given. The multistep Runge-Kutta-Nystrom methods of chapter 4 are found to be difficult to analyse, and the specific examples considered are found to perform poorly in the areas of both accuracy and stability. By contrast, the two-step symmetric hybrid methods of chapter 5 are shown to have excellent stability properties; in particular we show that all two-step 2N-point methods of this type possess non-vanishing intervals of periodicity, and we give conditions under which these methods are almost P-stable. P-stable and efficient methods from this family are obtained and demonstrated in numerical experiments. A simple, cheap and effective error estimator for these methods is also given.
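For context, a minimal sketch of the classical two-step (Störmer) method for y'' = f(x, y), a simple baseline for such periodic problems; it is not one of the collocation or hybrid methods analysed in the thesis.

```python
# Minimal sketch of the classical two-step (Stormer) method for y'' = f(x, y),
# which needs no first-derivative values inside the recursion.
# Test problem: y'' = -omega^2 y with a known periodic solution.
import numpy as np

omega, h, n_steps = 3.0, 0.01, 2000
f = lambda x, y: -omega**2 * y

y_prev = 1.0                                  # y(0) = 1
y_curr = np.cos(omega * h)                    # exact value at x = h to start the recursion
for n in range(1, n_steps):
    y_next = 2.0 * y_curr - y_prev + h**2 * f(n * h, y_curr)
    y_prev, y_curr = y_curr, y_next

x_end = n_steps * h
print("numerical:", y_curr, " exact:", np.cos(omega * x_end))
```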
25

Zhang, Chun Yang. "A second order ADI method for 2D parabolic equations with mixed derivative." Thesis, University of Macau, 2012. http://umaclib3.umac.mo/record=b2592940.

Full text
26

Dunn, Kyle George. "An Integral Equation Method for Solving Second-Order Viscoelastic Cell Motility Models." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/578.

Full text
Abstract:
For years, researchers have studied the movement of cells and mathematicians have attempted to model the movement of the cell using various methods. This work is an extension of the work done by Zheltukhin and Lui (2011), Mathematical Biosciences 229:30-40, who simulated the stress and displacement of a one-dimensional cell using a model based on viscoelastic theory. The report is divided into three main parts. The first part considers viscoelastic models with a first-order constitutive equation and uses the standard linear model as an example. The second part extends the results of the first to models with second-order constitutive equations. In this part, the two examples studied are the Burger model and a Kelvin-Voigt element connected with a dashpot in series. In the third part, the effects of a substrate with variable stiffness are explored. Here, the effective adhesion coefficient is changed from a constant to a spatially-dependent function. Numerical results are generated using two different functions for the adhesion coefficient. Results of this thesis show that the stress on the cell varies greatly across each part of the cell depending on the constitutive equation we use, while the position and velocity of the cell remain essentially unchanged from a large-scale point of view.
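For context, the two kinds of constitutive law contrasted above can be written in the generic linear viscoelastic forms below; the coefficients depend on the particular spring-dashpot arrangement (standard linear model, Burger model, or a Kelvin-Voigt element in series with a dashpot).

```latex
% Generic linear viscoelastic constitutive laws:
% first-order (e.g. the standard linear model):
\sigma + p_1\dot{\sigma} = q_0\varepsilon + q_1\dot{\varepsilon},
% second-order (e.g. the Burger model, or a Kelvin-Voigt element in series with a dashpot):
\sigma + p_1\dot{\sigma} + p_2\ddot{\sigma} = q_0\varepsilon + q_1\dot{\varepsilon} + q_2\ddot{\varepsilon}.
```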
APA, Harvard, Vancouver, ISO, and other styles
27

Chibi, Ahmed-Salah. "Defect correction and Galerkin's method for second-order elliptic boundary value problems." Thesis, Imperial College London, 1989. http://hdl.handle.net/10044/1/47378.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Kusama, Koichi. "Bilingual method in CALL software : the role of L1 in CALL software for reading." Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.247831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Yildirim, Ufuk. "Assessment Of Second-order Analysis Methods Presented In Design Codes." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610498/index.pdf.

Full text
Abstract:
The main objective of the thesis is to evaluate and compare the second-order elastic analysis methods defined in two different specifications, AISC 2005 and TS648 (1980). There are many theoretical approaches that can provide an exact solution to the problem; however, approximate methods are still needed for design purposes. Simple formulations for code applications were developed, and they remain valid as long as acceptable results can be obtained within admissible error limits. Within the scope of the thesis, background information related to second-order effects is first presented, with emphasis on the definition of geometric non-linearity, also known as the P-δ and P-Δ effects. In addition, the approximate methods defined in AISC 2005 (the B1–B2 method) and TS648 (1980) are discussed in detail. Example problems are then solved to demonstrate the theoretical formulations for members with and without end translation. The results obtained from the structural analysis software SAP2000 are also compared with the results acquired from the exact and the approximate methods. Finally, conclusions related to the study are stated.
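To make the amplification idea concrete, the sketch below evaluates moment amplifiers in the spirit of the AISC B1–B2 approach for a hypothetical member and storey. The member data, loads and simplified formulas are illustrative only; the governing provisions of AISC 360-05 and TS648 (1980) should be consulted for actual design.

    import math

    def b1_factor(Cm, Pr, Pe1, alpha=1.0):
        # No-translation (P-delta) amplifier, in the spirit of AISC B1.
        return max(Cm / (1.0 - alpha * Pr / Pe1), 1.0)

    def b2_factor(sum_Pnt, sum_Pe_story, alpha=1.0):
        # Sidesway (P-Delta) amplifier, in the spirit of AISC B2.
        return 1.0 / (1.0 - alpha * sum_Pnt / sum_Pe_story)

    # Hypothetical member and storey data (consistent kN, m units).
    E, I, L, K1 = 200e6, 2.0e-4, 4.0, 1.0      # kPa, m^4, m, effective length factor
    Pe1 = math.pi**2 * E * I / (K1 * L)**2      # Euler buckling load of the member
    Cm, Pr = 0.85, 5000.0                       # equivalent moment factor, required axial load
    Mnt, Mlt = 120.0, 40.0                      # kN*m moments from no-translation / translation analyses

    B1 = b1_factor(Cm, Pr, Pe1)
    B2 = b2_factor(sum_Pnt=6000.0, sum_Pe_story=30000.0)
    Mr = B1 * Mnt + B2 * Mlt                    # amplified second-order design moment
    print(f"B1 = {B1:.3f}, B2 = {B2:.3f}, Mr = {Mr:.1f} kN*m")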
APA, Harvard, Vancouver, ISO, and other styles
30

Abuazoum, Latifa Abdalla. "Advanced model updating methods for generally damped second order systems." Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/12063/.

Full text
Abstract:
This thesis is mostly about the analysis of second-order linear vibrating systems. The main purpose of this study is to extend methods which have previously been developed for undamped, proportionally damped or classically damped systems to the general case. These methods are commonly used in the aerospace industry. Ground vibration testing of aircraft is performed to identify the dynamic behaviour of the structure. New aircraft materials and joining methods - composite materials and/or novel adhesive bonding approaches in place of riveted or welded joints - cause higher levels of damping than have previously been seen in aircraft structures. Any change occurring in an original structure causes associated changes in the dynamic behaviour of the structure. Analytical finite element analyses and experimental modal testing have become essential tools for engineers; these techniques are used to determine the dynamic characteristics of mechanical structures. In Chapters 3 and 4, structural analysis and modal testing are carried out on an aircraft-like structure. Modal analysis techniques are used to extract modal data which are identified from a single column of the frequency response matrix. The proposed method fits modal peaks one by one; this technique overcomes the difficulty of conventional methods, which require a series of measured FRFs at different points of excitation. The new methods presented in this thesis are developed and implemented initially for undamped systems in all cases, and these ideas are subsequently extended to generally damped linear systems. The equations of motion of second-order damped systems are represented in state space. These methods are based on Lancaster Augmented Matrices (LAMs) and diagonalising structure-preserving equivalences (DSPEs). In Chapter 5, new methods are developed for computing the derivatives of the non-zeros of the diagonalised system and the derivatives of the diagonalising SPEs with respect to modifications in the system matrices. These methods provide a new approach to the evaluation and understanding of eigenvalue and eigenvector derivatives. This approach resolves the quandary in which eigenvalue and eigenvector derivatives become undefined when a pair of complex eigenvalues turns into a pair of real eigenvalues or vice versa, and it also covers the case where one or more of the system matrices is singular. Numerical examples illustrate the new methods and show that their results overcome certain difficulties of conventional methods. In Chapter 6, Möbius transformations are used to address the problem of a singular mass matrix. Two new transformations are investigated, called the system spectral transformation SSTNQ and the diagonalising spectral/similarity transformation DSTOQ. The transformation SSTNQ maps between the matrices of two systems having the same short eigenvectors and their diagonalised system matrices; the transformation DSTOQ maps between two diagonalising SPEs having identical eigenvalues. Modal correlation methods are implemented to evaluate and quantify the differences between the output results from these techniques. Cross-orthogonality measures represent a class of methods which have recently been used for modal correlation of damped systems. In Chapter 7, cross-orthogonality measures and mutual-orthogonality measures are developed for undamped systems.
These measures are defined in terms of real matrices - the diagonalising structure-preserving equivalences (DSPEs). The new methods are developed for ill-conditioned systems such that they work in all cases, and not only where the mass matrix is non-singular. A measure of the residuals is also introduced which does not demand invertibility of the diagonalised system matrices. Model updating methods are used to update models of systems by matching the output of analytical system models with experimentally obtained values. In Chapter 8, both cross-orthogonality measures and mutual-orthogonality measures are developed and used in the model updating of generally damped linear systems. Model updating based on the mutual-orthogonality measures exhibits monotonic convergence from every starting position; that is to say, the ball of convergence has an infinite radius, whereas updating procedures based on comparing eigenvectors exhibit a finite ball of convergence. Craig-Bampton transformations are component-mode methods which are used to reduce and decouple large structural systems. In Chapter 9, Craig-Bampton transformations are developed for undamped systems and extended to damped second-order systems in state space. The Craig-Bampton transformations are generalised and presented in SPE form, their two parts are extended to the full size of the substructure, and the extended transformations are modified so that each block of the transformed substructure matrices has the LAM format. This thesis generalises and develops the methods mentioned above and illustrates these concepts with an experimental modal test and some examples. The thesis also contains brief information about the basic vibration properties of general linear structures and a literature review relevant to this project.
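A hedged sketch of the state-space idea behind Lancaster augmented matrices: one common symmetric linearization of the quadratic pencil λ²M + λC + K is assembled and its generalized eigenvalues are checked against the original second-order system. The 2-DOF matrices are hypothetical, and the sign/ordering convention shown is one of several and may differ from the one used in the thesis.

    import numpy as np
    from scipy.linalg import eig

    # Hypothetical 2-DOF generally damped system: M q'' + C q' + K q = 0.
    M = np.array([[2.0, 0.0], [0.0, 1.0]])
    C = np.array([[0.4, -0.1], [-0.1, 0.3]])
    K = np.array([[50.0, -20.0], [-20.0, 30.0]])

    n = M.shape[0]
    Z = np.zeros((n, n))

    # One symmetric linearization (a Lancaster-augmented-matrix style pair):
    # with z = [q; q'], the system reads A1 * z' + A0 * z = 0, where
    A1 = np.block([[C, M], [M, Z]])
    A0 = np.block([[K, Z], [Z, -M]])

    # Eigenvalues of the pencil (lambda*A1 + A0): solve A0 z = -lambda A1 z.
    lam, vecs = eig(-A0, A1)

    # Verify each eigenpair against the quadratic pencil lambda^2 M + lambda C + K.
    for k in range(2 * n):
        q = vecs[:n, k]                      # the 'short' eigenvector (displacement part)
        res = (lam[k] ** 2 * M + lam[k] * C + K) @ q
        res_norm = np.linalg.norm(res) / np.linalg.norm(q)
        print("lambda =", np.round(lam[k], 4), "relative residual =", f"{res_norm:.2e}")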
APA, Harvard, Vancouver, ISO, and other styles
31

Spencer, Dawna. "Visual and auditory metalinguistic methods for Spanish second language acquisition." Connect online, 2008. http://library2.up.edu/theses/2008_spencerd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Fleming, Cliona Mary. "Second order chemometric methods and the analysis of complex data /." Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/8552.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Hazda, Jakub. "Analýza Stirlingova oběhu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231710.

Full text
Abstract:
This paper deals with the thermodynamic cycle of the Stirling engine. The ideal cycle, the Schmidt analysis and a second-order method with loss corrections implemented in the PROSA 2.4 software are applied. The results are compared with experimental data for two model engines.
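As a small, hedged companion to the abstract, the sketch below evaluates the ideal isothermal Stirling cycle with perfect regeneration, whose net work comes from the two isothermal processes and whose efficiency equals the Carnot value. The working gas, temperatures and volumes are hypothetical; this is not the Schmidt analysis or the PROSA second-order model.

    import math

    # Hypothetical working-gas and cycle data (air treated as an ideal gas).
    R = 287.0                      # J/(kg*K), specific gas constant
    m = 1.0e-3                     # kg of working gas
    T_hot, T_cold = 900.0, 300.0   # K
    V_min, V_max = 1.0e-4, 3.0e-4  # m^3

    ratio = math.log(V_max / V_min)
    W_expansion = m * R * T_hot * ratio        # isothermal expansion work (heat input)
    W_compression = m * R * T_cold * ratio     # isothermal compression work (heat rejected)
    W_net = W_expansion - W_compression
    efficiency = W_net / W_expansion           # equals 1 - T_cold/T_hot with ideal regeneration

    print(f"net work per cycle: {W_net:.1f} J")
    print(f"ideal efficiency:  {efficiency:.3f} (Carnot: {1 - T_cold/T_hot:.3f})")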
APA, Harvard, Vancouver, ISO, and other styles
34

Armstrong, Robert A. 1969. "The fifth competence : discovering the self through intensive second language immersion." Thesis, McGill University, 1999. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=30142.

Full text
Abstract:
This inquiry examines observations made by nine former participants in the 1996 Dalhousie University Summer Language Bursary Program (SLBP) in Halifax, Nova Scotia. The SLBP is a five-week residential total second language immersion characterized by its intensity. In individual interviews, the informants were encouraged to explore whether and to what extent they had perceived changes in themselves as a result of their participation in the immersion program. These changes were not related to target-language proficiency. Rather, they focused primarily on aspects of the informants' self-perceived or other-perceived identities, which are conceived of as contextual, multiple, fluid and dynamic. Analysis of these observations indicates that changes to identity may indeed be an important byproduct of intensive second language immersion. Elements of such personal growth include perceived increases in participants' senses of resourcefulness, self-confidence, wanderlust, autonomy, open-mindedness, and sociability. Informants also enumerate the SLBP's unique factors which promote changes in self-perception. Changes in participants' perspectives on identity are not viewed simply as incidental immersion outcomes. Rather, they are viewed as components of 'personal competence', both as factors in and results of successful participation in residential total second language immersion.
APA, Harvard, Vancouver, ISO, and other styles
35

Richards, Jeffrey Robert. "The Natural Approach and the Audiolingual Method: A Question of Student Gains and Retention." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4696.

Full text
Abstract:
The purpose of this study was to determine the difference in the short-term and long-term second language (L2) gains of first-year Spanish students exposed to the Audiolingual Method (ALM) and the Natural Approach. The experiment consisted of two randomly selected groups which were exposed to four presentations. Two of these presentations delivered content material following a Natural Approach lesson design while the other two delivered content material following an ALM lesson design, in such a way that both groups were exposed to two ALM lessons and two Natural Approach lessons. All subjects were pre-tested prior to the delivery of these lessons and subsequently tested after the first lessons for short-term L2 gains. They were then re-tested after several weeks to measure long-term L2 gains. The number of subjects that participated in the experiment was 249 and included all enrolled first-year Spanish students at Oregon State University for the 1992 fall term. The data were analyzed using two-way analysis of variance. The results of the investigation indicated that teaching method was not a significant factor in students' short-term and long-term L2 acquisition gains. The study thus implies that neither the Natural Approach nor the ALM can be considered superior in terms of quantifiable student gains and retention. Recommendations for further study are presented.
APA, Harvard, Vancouver, ISO, and other styles
36

Riches, Caroline. "The development of mother tongue and second language reading in two bilingual education contexts /." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=37819.

Full text
Abstract:
The effects that various forms of bilingual education may have on children's reading development are of concern to parents and educators alike. In this thesis, I investigate the development of mother tongue and second language reading in two bilingual education contexts, and assess the effects of the language of initial formal reading instruction upon this development. This study examines children's reading within the home, classroom and community environments.
The research involved two Grade 1 classes, compared mainly on the language of initial formal reading instruction. One site was a French immersion school offering a 50% English/50% French program in which initial formal reading instruction was in English. The second site was a French school with a majority of anglophone students, where initial formal reading instruction was in French. The participants in this study were 12 children from each class, their parents, and the classroom teachers.
Three main tools of inquiry were used: classroom observations were carried out in each of the two classes during the Grade 1 school year; samples of oral reading and retellings, in English and in French, were collected from the participating children for miscue analysis, and informal interviews were conducted with all the participants.
The analysis revealed that regardless of the language of initial formal reading instruction, the children's reading abilities developed in both languages. Children tended to feel more comfortable reading in the language in which they had been formally instructed but, despite this, meaning-construction was more effective in the mother tongue. Differences in reading abilities for both groups could be accounted for by limitations in knowledge of the second language rather than by language of initial instruction. Finally, children with initial formal reading instruction in the second language easily applied their reading abilities to reading in their mother tongue.
The conclusions drawn from this inquiry are that having supportive home and community environments, exemplary teachers and constructive classroom environments enables children to use their creative abilities and language resources to make sense of reading in two languages. It is the continuities and connections between these elements which enables children to transcend any difficulties arising from the fact that reading is being encountered in two languages.
APA, Harvard, Vancouver, ISO, and other styles
37

Comas, Lou Enric. "Application of the generalized rank annihilation method (GRAM) to second-order liquid chromatographic data." Doctoral thesis, Universitat Rovira i Virgili, 2005. http://hdl.handle.net/10803/8995.

Full text
Abstract:
Analytical measurements and the instruments that generate them can be classified according to the number of data values obtained when a sample is measured. When a matrix of responses is obtained per sample, the data are known as second-order data. In this thesis, second-order data were used, obtained from a high-performance liquid chromatograph (HPLC) coupled with a diode array detector (DAD). This instrument is quite common in analytical laboratories; however, the concentration of the analytes of interest is normally determined without using all the measured data. The spectral mode is only used to identify the analytes or to verify peak purity, whereas the area or height of the peak is used for quantification by univariate calibration. This is a very useful strategy, but the measured response must be selective for the analyte of interest.
When environmental pollutants are analyzed in complex samples, such as river water, it is not easy to obtain selective measurements. When the responses are not selective, the analyte of interest can still be quantified by using second-order calibration methods, i.e. the methods that use second-order data.
This thesis studies the properties of the second-order calibration method known as the Generalized Rank Annihilation Method (GRAM). This method was developed in the mid-1980s and has very attractive properties:
1) To determine the concentration of the analyte of interest in a test sample, only one calibration sample (standard) is needed.
2) Selective measurements are not necessary, so the separation time can be reduced considerably.
Despite these advantages, GRAM has some limitations that prevent it from being applied routinely. The objectives of the thesis are to study the advantages and limitations of GRAM and to improve the weak points so that GRAM can be applied routinely.
To use GRAM, the experimental data must fulfil certain mathematical requirements: (i) the measured response must be the sum of the contributions of the different analytes in the peak, and (ii) the response of each analyte must be proportional across samples, i.e. the analyte of interest must elute at exactly the same retention time in the calibration standard and in the test sample. When these conditions are not met, the GRAM predictions are biased.
Methods have been developed to overcome these difficulties. An algorithm to align chromatographic peaks has been developed, based on a curve resolution method (Iterative Target Transformation Factor Analysis, ITTFA). In HPLC-DAD systems it is quite common for the peaks of the analyte of interest to elute at different retention times in the calibration and test samples; even if the differences are small (a few seconds), they can be enough to make the GRAM results incorrect.
GRAM is a factor-based calibration method, and the number of factors has to be supplied as an input to build a GRAM model. A graphical criterion has been developed to choose the number of factors, based on a parameter of the GRAM algorithm (α).
Finally, a criterion to detect outlying samples has been developed, based on the Net Analyte Signal (NAS).
All of the above was applied to real cases, specifically the analysis of aromatic sulfonates (naphthalenesulfonates) and polar pollutants in water samples from rivers and wastewater treatment plants. This demonstrated the applicability of GRAM to chromatography and allowed GRAM to be compared with other second-order calibration methods, such as PARAFAC and MCR-ALS. The three methods were found to give comparable results.
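The sketch below illustrates the core GRAM computation on simulated bilinear HPLC-DAD data: a truncated SVD of the joint response followed by a small generalized eigenvalue problem whose eigenvalues are the test-to-standard concentration ratios. The elution profiles, spectra and concentrations are synthetic, and details such as the basis choice and scaling may differ from the variant analysed in the thesis; the estimated ratios should come out close to the true values 0.7 and 1.8.

    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(0)

    # Synthetic bilinear HPLC-DAD data for two co-eluting analytes.
    t = np.linspace(0, 1, 100)          # elution time axis
    w = np.linspace(0, 1, 50)           # wavelength axis
    elution = np.stack([np.exp(-0.5 * ((t - mu) / 0.05) ** 2) for mu in (0.45, 0.55)], axis=1)
    spectra = np.stack([np.exp(-0.5 * ((w - mu) / 0.10) ** 2) for mu in (0.30, 0.60)], axis=1)

    c_std = np.array([1.0, 1.0])        # concentrations in the calibration standard
    c_test = np.array([0.7, 1.8])       # unknown concentrations in the test sample

    M = elution @ np.diag(c_std) @ spectra.T + 1e-4 * rng.standard_normal((100, 50))
    N = elution @ np.diag(c_test) @ spectra.T + 1e-4 * rng.standard_normal((100, 50))

    # GRAM core: truncated SVD of the joint response, then a small generalized eigenproblem.
    R = 2                                # number of factors (chemical components)
    U, s, Vt = np.linalg.svd(M + N, full_matrices=False)
    U, V = U[:, :R], Vt[:R, :].T

    M_proj = U.T @ M @ V
    N_proj = U.T @ N @ V
    ratios = eig(N_proj, M_proj, right=False)   # eigenvalues = c_test / c_std per analyte

    print("estimated ratios:", np.sort(ratios.real))
    print("true ratios:     ", np.sort(c_test / c_std))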
APA, Harvard, Vancouver, ISO, and other styles
38

Clack, Jhules. "Theoretical Analysis for Moving Least Square Method with Second Order Pseudo-Derivatives and Stabilization." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1418910272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Xiaobo. "Etude des reponses du second ordre d'une structure soumise a une houle aleatoire." Nantes, 1988. http://www.theses.fr/1988NANT2040.

Full text
Abstract:
Using the perturbation method, the nonlinear diffraction-radiation problem is decomposed into first and second order. The complete second-order loads are evaluated for a bichromatic sea. The transfer function of the second-order loads is computed and presented graphically.
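A schematic, hedged sketch of how a quadratic transfer function turns a bichromatic wave into a slowly varying second-order load: the difference-frequency component is reconstructed from a single hypothetical QTF value. The amplitudes, frequencies and QTF value are illustrative and unrelated to the computations of the thesis.

    import numpy as np

    # Bichromatic incident wave: two components with amplitudes A1, A2.
    w1, w2 = 0.60, 0.55          # rad/s, close frequencies -> slow drift at (w1 - w2)
    A1, A2 = 1.5, 1.0            # m, wave amplitudes (taken real here)

    # Hypothetical difference-frequency quadratic transfer function value T(w1, w2)
    # (complex, units of force per squared wave amplitude).
    T_diff = 40.0 * np.exp(1j * 0.3)   # kN/m^2, illustrative only

    t = np.linspace(0.0, 600.0, 2000)
    # Slowly varying second-order difference-frequency force (the mean part would come
    # from T(w1, w1) and T(w2, w2) in a complete computation).
    F_diff = np.real(A1 * np.conj(A2) * T_diff * np.exp(1j * (w1 - w2) * t))

    print("difference-frequency period:", 2 * np.pi / (w1 - w2), "s")
    print("peak slow-drift force:      ", np.max(np.abs(F_diff)), "kN")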
APA, Harvard, Vancouver, ISO, and other styles
40

Ritter, Baird S. "Solution strategies for second order, nonlinear, one dimensional, two point boundary value problems by FEM analysis." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA246063.

Full text
Abstract:
Thesis (M.S. in Mechanical Engineering)--Naval Postgraduate School, December 1990.
Thesis Advisor: D. Salinas. DTIC identifiers: boundary value problems, finite element analysis, differential equations, problem solving, theses, interpolation, iterations, one dimensional, computer programs, approximation/mathematics, linearity. Author's subject terms: Galerkin FEM, nonlinear, quasilinearization, linearization, interpolation, iteration, differential equation, convergence. Includes bibliographical references (p. 164). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
41

Duncan, Fraser Andrew. "Monte Carlo simulation and aspects of the magnetostatic design of the TRIUMF second arm spectrometer." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28378.

Full text
Abstract:
The optical design of the TRIUMF Second Arm Spectrometer (SASP) has been completed and the engineering design started. The effects of the dipole shape and field clamps on the aperture fringe fields were studied. It was determined that a field clamp would be necessary to achieve the field specifications over the desired range of dipole excitations. A specification of the dipole pole edges and field clamps for the SASP is made. A Monte Carlo simulator for the SASP was written. During the design this was used to study the profiles of rays passing through the SASP. These profiles were used in determining the positioning of the dipole vacuum boxes and the SASP detector arrays. The simulator is intended to assess experimental arrangements of the SASP.
APA, Harvard, Vancouver, ISO, and other styles
42

KORTE, MATTHEW. "Corpus Methods in Interlanguage Analysis." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1218835515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Gorokhov, Alexei. "Séparation autodidacte de mélanges convolutifs : méthodes du second ordre." Paris, ENST, 1997. http://www.theses.fr/1997ENST0007.

Full text
Abstract:
This thesis addresses the general problem of blind identification of a convolutive multiple-input/multiple-output mixture, which is also the problem of identifying the transfer matrix of a linear system from the observed outputs and the statistical properties of the inputs. Among the many applications of this research, one should highlight the unsupervised (blind) equalization of digital communication channels as well as the separation of several users sharing the same radio channel in SDMA/CDMA multiple-access systems. Our study focuses in particular on the development of second-order estimators that admit an explicit implementation and perform well in low-noise environments when the number of observations is limited. Most of the results are obtained for finite impulse response linear systems characterized by a polynomial transfer matrix.
APA, Harvard, Vancouver, ISO, and other styles
44

Haug, Nils Arne. "An investigation into applicability of Second Temple period Jewish hermeneutical methodologies to the interpretation of popular eschatology." Thesis, University of Zululand, 2003. http://hdl.handle.net/10530/1339.

Full text
Abstract:
A Dissertation Submitted to the Faculty of Theology and Religion Studies in Fulfilment of the requirements for the Degree of Master of Arts (Biblical Studies), at the University of Zululand, South Africa, 2003.
This study endeavours to ascertain whether or not eschatological scenarios propounded by certain writers of highly influential and popular "end-time" texts are biblically sustainable, according to the hermeneutical methods employed by them. Firstly, the hermeneutical methods utilised by Christianity's exegetical predecessors, namely the rabbinical Pharisees and the Qumran sectaries of the Second Temple period, are considered. Such methods, and the eschatological convictions ensuing therefrom, are apparent from canonical and non-canonical literature relevant to these two groups. Thereafter, the applicability of these methods to a Second Testament context is examined, the rationale being that if the use of such methods is significantly evident in the Second Testament, then they should, it is proposed, be germane to Christian scholars of both earlier and modern times, since Christianity arose from the matrix of early Judaism. This is particularly so as regards the writers of popular eschatology, whose end-time positions are then examined in the light of early Jewish hermeneutical methods and their own interpretative stance. The conclusion is reached that the Second Testament does reflect extensive use of the hermeneutical methods of early Judaism and that, consequently, subsequent Christian scholars should endorse these methods. It appears, though, that Christians through the ages have ignored such methods. It is further concluded that the main eschatological issues promoted by the popularisers cannot easily be defended solely through the use of the exegetical methods employed by them. However, it is submitted that many such issues can be substantially justified through the use of traditional Jewish hermeneutical methods, as employed by the Second Testament redactors and Jesus himself.
APA, Harvard, Vancouver, ISO, and other styles
45

Davis, Benjamin J. "A study into discontinuous Galerkin methods for the second order wave equation." Thesis, Monterey, California: Naval Postgraduate School, 2015. http://hdl.handle.net/10945/45836.

Full text
Abstract:
Approved for public release; distribution is unlimited
There are numerous numerical methods for solving different types of partial differential equations (PDEs) that describe the physical dynamics of the world. For instance, PDEs are used to understand fluid flow for aerodynamics, wave dynamics for seismic exploration, and orbital mechanics. The goal of these numerical methods is to approximate the solution to a continuous PDE with an accurate discrete representation. The focus of this thesis is to explore a new Discontinuous Galerkin (DG) method for approximating the second order wave equation in complex geometries with curved elements. We begin by briefly highlighting some of the numerical methods used to solve PDEs and discuss the necessary concepts to understand DG methods. These concepts are used to develop a one- and two-dimensional DG method with an upwind flux, boundary conditions, and curved elements. We demonstrate convergence numerically and prove discrete stability of the method through an energy analysis.
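A small, hedged illustration of one building block mentioned above: the second-order wave equation u_tt = c² u_xx is written as a first-order system in (p, q) = (u_t, c·u_x), and an upwind numerical flux is evaluated at an element interface by characteristic splitting. The convention and scaling shown are one common choice, not necessarily the ones used in the thesis, and the interface states are invented.

    import numpy as np

    c = 1.0                                  # wave speed
    B = np.array([[0.0, -c], [-c, 0.0]])     # u_t + B u_x = 0 with u = (p, q) = (u_t, c*u_x)

    # Characteristic (flux) splitting: B = R diag(lam) R^-1; B+ and B- keep lam >= 0 / lam <= 0.
    lam, R = np.linalg.eig(B)
    Rinv = np.linalg.inv(R)
    B_plus = R @ np.diag(np.maximum(lam, 0.0)) @ Rinv
    B_minus = R @ np.diag(np.minimum(lam, 0.0)) @ Rinv

    def upwind_flux(uL, uR):
        # Upwind numerical flux at an interface: right-going information is taken from
        # the left state, left-going information from the right state.
        return B_plus @ uL + B_minus @ uR

    # Hypothetical traces from two neighbouring elements.
    uL = np.array([1.0, 0.2])
    uR = np.array([0.4, -0.1])
    print("upwind flux:", upwind_flux(uL, uR))
    # Sanity check: with identical states the numerical flux reduces to the physical flux B @ u.
    print("consistent:", np.allclose(upwind_flux(uL, uL), B @ uL))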
APA, Harvard, Vancouver, ISO, and other styles
46

Faulk, Songhui. "Exploring alternative methods for teaching English as a second language in Korea." CSUSB ScholarWorks, 1999. https://scholarworks.lib.csusb.edu/etd-project/1639.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Etcheverlepo, Adrien. "Développement de méthodes de domaines fictifs au second ordre." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00821897.

Full text
Abstract:
Simulating flows in complex geometries requires generating meshes that can be difficult to build. The penalization method proposed in this work simplifies this step: the equations governing the flow are solved on a simpler mesh that is not fitted to the geometry of the problem. The boundary conditions on the parts of the physical domain immersed in the mesh are taken into account by adding a penalization term to the equations. We focus on the approximation of the penalization term for finite-volume discretizations on staggered and collocated grids. The verification test cases show a second-order spatial convergence rate for the penalization method applied to the solution of a Poisson-type equation or the Navier-Stokes equations. Finally, results are presented for the simulation of turbulent flows around a cylinder at Re = 3900 and inside part of a fuel assembly at Re = 9500.
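A hedged one-dimensional sketch of the volume penalization idea described above: a Poisson problem is solved on a uniform grid that is not fitted to the obstacle, and the condition inside the immersed region is enforced by a penalization term (1/eta)·chi·(u - u_obstacle). The geometry, forcing and penalization parameter are illustrative; the thesis works with finite-volume schemes on staggered/collocated grids in higher dimensions.

    import numpy as np

    # Grid on [0, 1], not fitted to the immersed obstacle [0.4, 0.6].
    N = 201
    x = np.linspace(0.0, 1.0, N)
    h = x[1] - x[0]

    eta = 1e-8                                        # small penalization parameter
    chi = ((x >= 0.4) & (x <= 0.6)).astype(float)     # obstacle indicator function
    u_obstacle = 0.0                                  # value imposed inside the obstacle
    f = np.ones_like(x)                               # source term

    # Assemble -u'' + (chi/eta)*(u - u_obstacle) = f with homogeneous Dirichlet ends.
    A = np.zeros((N, N))
    b = np.zeros(N)
    A[0, 0] = A[-1, -1] = 1.0                         # u(0) = u(1) = 0
    for i in range(1, N - 1):
        A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
        A[i, i] = 2.0 / h**2 + chi[i] / eta
        b[i] = f[i] + chi[i] / eta * u_obstacle

    u = np.linalg.solve(A, b)
    # Inside the obstacle the solution is driven to u_obstacle by the penalization.
    print("max |u - u_obstacle| inside obstacle:", np.max(np.abs(u[chi > 0] - u_obstacle)))
    print("max u outside:", np.max(u[chi == 0]))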
APA, Harvard, Vancouver, ISO, and other styles
48

Maxwell, Wendy. "Evaluating the effectiveness of the accelerative integrated method for teaching French as a second language." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ58676.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Kilty, Susanna. "Supporting a project method for teaching English as a second language in the senior years." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ62767.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Särkkä, J. (Jussi). "A novel method for hazard rate estimates of the second level interconnections in infrastructure electronics." Doctoral thesis, University of Oulu, 2008. http://urn.fi/urn:isbn:9789514288197.

Full text
Abstract:
Electronic devices are subjected to various usage environments in which stresses are induced in components and their interconnections. The level of stress affects the interval between failure occurrences. When the stress level and the aging characteristics of the constituent materials are known, failure occurrence can be predicted; however, such predictions involve uncertainties, and a practical method to help assess component interconnection reliability is needed. In this thesis, a novel method for turning accelerated stress test data into hazard rate estimates is introduced. The hazard rate expectations of the interconnection elements are presented as interconnection failures-in-time (i-FIT) figures that can be used as part of conventional product reliability estimates. The method uses second-level reliability test results for packaging-type-specific failure occurrence estimates, and the results can also be used directly in component packaging reliability estimates. Moreover, a novel method to estimate interconnection failures in terms of costs is presented, in which the interconnection elements are treated as cost elements. It is also shown that the costs of interconnection failures can be very high if the stress-strength characteristics of the interconnection system are chosen incorrectly. Lead-free manufacturing has emphasized the thermal compatibility of the materials of the component, the solder and the printed wiring board. As shown in this thesis, improper materials for area-array components result in excessive component warping during reflow, and a novel method for estimating the amount of component warping during the lead-free reflow, as a function of temperature, is introduced. The resulting hazard rates can be used as part of product field performance estimates, and the effects of process variation and material properties, including the solder material and solder pad dimensioning, on lead-free solder joint reliability are also examined.
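As a loosely related, hedged illustration of turning accelerated-test failure data into a field hazard-rate figure (this is not the i-FIT method of the thesis): a two-parameter Weibull distribution is fitted to thermal-cycling failure counts, the characteristic life is scaled by an assumed acceleration factor and field usage rate, and the hazard rate at a given field age is expressed in FIT (failures per 10^9 device-hours). The failure data, acceleration factor and usage assumptions are invented.

    import numpy as np
    from scipy.stats import weibull_min

    # Hypothetical solder-joint failure data from an accelerated thermal-cycling test (cycles).
    cycles_to_failure = np.array([1850, 2100, 2400, 2650, 2900, 3200, 3600, 4100])

    # Fit a 2-parameter Weibull distribution (location fixed at zero).
    beta, _, eta_cycles = weibull_min.fit(cycles_to_failure, floc=0)

    # Assumed acceleration factor (e.g. from a Norris-Landzberg type model) and field usage.
    AF = 12.0                             # field cycles to failure per test cycle (invented)
    field_cycles_per_hour = 2.0 / 24.0    # two thermal cycles per day in the field (invented)
    eta_hours = eta_cycles * AF / field_cycles_per_hour   # characteristic life in field hours

    def hazard_fit(t_hours):
        # Weibull hazard rate h(t) = (beta/eta)*(t/eta)^(beta-1), expressed in FIT.
        h = (beta / eta_hours) * (t_hours / eta_hours) ** (beta - 1.0)
        return h * 1e9

    for years in (1, 5, 10):
        t = years * 8760.0
        print(f"{years:2d} years: ~{hazard_fit(t):.1f} FIT")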
APA, Harvard, Vancouver, ISO, and other styles