Dissertations / Theses on the topic 'Standard equation'




Consult the top 41 dissertations / theses for your research on the topic 'Standard equation.'




1

Bendaas, Saïda. "Quelques applications de l'analyse non standard aux équations aux dérivées partielles." Mulhouse, 1994. http://www.theses.fr/1994MULH0298.

2

Rakesh, Arora. "Fine properties of solutions for quasi-linear elliptic and parabolic equations with non-local and non-standard growth." Thesis, Pau, 2020. http://www.theses.fr/2020PAUU3021.

Abstract:
In this thesis, we study the fine properties of solutions to quasilinear elliptic and parabolic equations involving non-local and non-standard growth. We focus on three different types of partial differential equations (PDEs). Firstly, we study the qualitative properties of weak and strong solutions of evolution equations with non-standard growth. The importance of investigating these kinds of evolution equations lies in modeling various anisotropic features that occur in electrorheological fluid models, image restoration, filtration processes in complex media, stratigraphy problems, and heterogeneous biological interactions. We derive sufficient conditions on the initial data for the existence and uniqueness of a strong solution of the evolution equation with Dirichlet-type boundary conditions. We establish the global higher integrability and second-order regularity of the strong solution by proving new interpolation inequalities. We also study the existence, uniqueness, regularity, and stabilization of the weak solution of a doubly nonlinear equation driven by a class of Leray-Lions type operators and non-monotone sub-homogeneous forcing terms. Secondly, we study the Kirchhoff equation and system involving different kinds of non-linear operators with exponential nonlinearity of Choquard type and singular weights. These types of problems appear in many real-world phenomena, from the change in length of a stretched string during vibration, to the propagation of electromagnetic waves in plasma, to Bose-Einstein condensation, and many more. Motivated by the abundant physical applications, we prove existence and multiplicity results for the Kirchhoff equation and system with subcritical and critical exponential non-linearity, which arise out of several inequalities proved by Adams, Moser, and Trudinger.
To deal with the system of Kirchhoff equations, we prove new Adams, Moser and Trudinger type inequalities in the Cartesian product of Sobolev spaces. Thirdly, we study singular problems involving nonlocal operators. We show existence and multiplicity of classical solutions of the half-Laplacian singular problem involving exponential nonlinearity via bifurcation theory. To characterize the behavior of large solutions, we further study isolated singularities of the singular semilinear elliptic equation. We show symmetry and monotonicity properties of the classical solution of the fractional Laplacian problem using the moving plane method and a narrow maximum principle. We also study the nonlinear fractional Laplacian problem involving singular nonlinearity and singular weights. We prove existence, uniqueness, non-existence, and optimal Sobolev and Hölder regularity results by exploiting the C^{1,1} regularity of the boundary, barrier arguments, and an approximation method.
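For orientation, the operators behind this abstract's terminology can be written out; these are the standard textbook forms under commonly assumed notation, not formulas quoted from the thesis:

```latex
% p(x)-growth ("non-standard growth") parabolic model problem:
\partial_t u - \operatorname{div}\!\left(|\nabla u|^{p(x)-2}\,\nabla u\right) = f(x,t)
  \quad \text{in } \Omega\times(0,T),
% Kirchhoff-type problem (M measures the change of tension):
-\,M\!\left(\int_\Omega |\nabla u|^{2}\,dx\right)\Delta u = g(x,u)
  \quad \text{in } \Omega,
% fractional p-Laplacian (non-local operator, up to a normalizing constant):
(-\Delta)_p^{s}\,u(x) = \mathrm{P.V.}\!\int_{\mathbb{R}^n}
  \frac{|u(x)-u(y)|^{p-2}\,\bigl(u(x)-u(y)\bigr)}{|x-y|^{\,n+sp}}\,dy .
```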
3

Eslick, John. "A Dynamical Study of the Evolution of Pressure Waves Propagating through a Semi-Infinite Region of Homogeneous Gas Combustion Subject to a Time-Harmonic Signal at the Boundary." ScholarWorks@UNO, 2011. http://scholarworks.uno.edu/td/1367.

Abstract:
In this dissertation, the evolution of a pressure wave driven by a harmonic signal on the boundary during gas combustion is studied. The problem is modeled by a nonlinear, hyperbolic partial differential equation. Steady-state behavior is investigated using the perturbation method to ensure that enough time has passed for any transient effects to have dissipated. The zeroth, first and second-order perturbation solutions are obtained and their moduli are plotted against frequency. It is seen that the first and second-order corrections have unique maxima that shift to the right as the frequency decreases and to the left as the frequency increases. Dispersion relations are determined and their limiting behavior investigated in the low and high frequency regimes. It is seen that for low frequencies, the medium assumes a diffusive-like nature. However, for high frequencies the medium behaves similarly to one exhibiting relaxation. The phase speed is determined and its limiting behavior examined. For low frequencies, the phase speed is approximately equal to sqrt[ω/(n+1)] and for high frequencies, it behaves as 1/(n+1), where n is the mode number. Additionally, a maximum allowable value of the perturbation parameter, ε = 0.8, is determined that ensures boundedness of the solution. The location of the peak of the first-order correction, xmax, as a function of frequency is determined and is seen to approach the limiting value of 0.828/sqrt(ω) as the frequency tends to zero and the constant value of 2 ln 2 as the frequency tends to infinity. Analytic expressions are obtained for the approximate general perturbation solution in the low and high-frequency regimes and are plotted together with the perturbation solution in the corresponding frequency regimes, where the agreement is seen to be excellent. 
Finally, the solution obtained from the perturbation method is compared with the long-time solution obtained by the finite-difference scheme; again, ensuring that the transient effects have dissipated. Since the finite-difference scheme requires a right boundary, its location is chosen so that the wave dissipates in amplitude enough so that any reflections from the boundary will be negligible. The perturbation solution and the finite-difference solution are found to be in excellent agreement. Thus, the validity of the perturbation method is established.
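The limiting behaviors quoted in this abstract can be evaluated directly. A small arithmetic sketch, using the expressions exactly as quoted (sqrt(ω/(n+1)) and 1/(n+1) for the phase speed, 0.828/sqrt(ω) and 2 ln 2 for the peak location):

```python
import math

def phase_speed_low(omega, n):
    """Quoted low-frequency limit of the phase speed: sqrt(omega / (n + 1))."""
    return math.sqrt(omega / (n + 1))

def phase_speed_high(n):
    """Quoted high-frequency limit of the phase speed: 1 / (n + 1)."""
    return 1.0 / (n + 1)

def xmax_low(omega):
    """Quoted low-frequency limit of the peak location: 0.828 / sqrt(omega)."""
    return 0.828 / math.sqrt(omega)

XMAX_HIGH = 2 * math.log(2)  # quoted high-frequency limit of the peak location

# both phase-speed limits decrease with the mode number n
for n in range(3):
    print(n, phase_speed_low(0.01, n), phase_speed_high(n))
```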
4

Dupaigne, Louis. "Equations elliptiques semilineaires avec potentiel singulier." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2001. http://tel.archives-ouvertes.fr/tel-00002721.

Abstract:
We consider simple semilinear elliptic equations of the form Lu = F(x,u), where L is the usual Laplacian with Dirichlet boundary conditions on a smooth bounded open subset of R^n, and where F may be singular in the variable x. In particular, we obtain a sharp criterion for the existence of solutions, which manifests itself through the appearance of a new critical exponent in applications.
5

Kuncoro, Andreas. "Employing Quality Management Principles to Improve the Performance of Educational Systems: An Empirical Study of the Effect of ISO 9001 Standard on Teachers and Administrators Performance in the Indonesian Vocational Education System." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5966.

Abstract:
ISO 9001 has been implemented worldwide in both manufacturing and service organizations. Many studies have investigated the effects of ISO 9001 implementation on the performance of these organizations, and most show that it yields positive operational improvements and financial success. Building on the merits of successful implementation of the ISO 9001 quality management system in manufacturing and service, educational institutions have attempted to adopt it in their operations. Even though there are studies relating ISO implementation to education, no research has investigated the effects of ISO 9001 at the individual level. The objective of this research is to investigate the effects of ISO 9001 quality management implementation on the performance of administrators and teachers. The Indonesian vocational education system is selected as a case example, because a significant number of such institutions in Indonesia attempt to achieve ISO certification and there is a national need to improve the performance of vocational education. It is a challenge to assess objectively the degree of ISO 9001 implementation in this specific educational context because of its size and diversity. This study therefore relies on a self-reported survey that measures the respondents' perceptions; the questionnaires were developed based on an extensive literature review. Partial Least Squares Structural Equation Modeling (PLS-SEM) has been used to examine the relationships between the different elements of quality management systems, quality culture, and administrator and teacher performance.
The study is able to examine multiple interrelated dependence relationships simultaneously among factors such as teacher and administrator performance, existing quality culture, and ISO principles, and to incorporate variables that cannot be directly measured, such as leadership. The findings of this study show that ISO 9001 implementation has a significant positive effect on the performance of vocational school administrators and teachers. The study also identifies key influencing elements of the ISO quality management system and examines their direct and indirect relationships with teacher and administrator performance. This study is expected to improve current practices in implementing ISO and quality culture in educational settings, specifically in vocational education systems.
Ph.D.
Doctorate
Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering
6

Towler, Kim. "Non-standard discretizations of differential equations." Thesis, University of Kent, 2015. https://kar.kent.ac.uk/66665/.

Abstract:
This thesis explores non-standard numerical integration methods for a range of non-linear systems of differential equations, with a particular interest in the preservation of various features when moving from the continuous system to a discrete setting. Firstly, existing non-standard schemes, such as the one discovered by Hirota and Kimura (and also Kahan) [21, 32], are presented, along with general rules for creating an effective numerical integration scheme devised by Mickens [40]. We then move on to the specific example of the Lotka-Volterra system and present a method for finding the most general forms of a non-standard scheme that is both symplectic and birational. The resulting three schemes found through this method have also been discovered through an alternative method by Roeger in [52]. Next we look at discretizing examples of 3-dimensional bi-Hamiltonian systems from a list given by Gümral and Nutku [18] using the Hirota-Kimura/Kahan method, followed by the same method applied to the Hénon-Heiles case (ii) system. The Bäcklund transformation for the Hénon-Heiles system is also considered. Finally, chapter 6 looks at systems with cubic vector fields and limit cycles, with an aim to find the most general form of a non-standard scheme for two examples: first a trimolecular system, and then a Hamiltonian system that has a quartic potential.
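The Kahan (Hirota-Kimura) scheme mentioned in this abstract has a compact closed form for Lotka-Volterra. A minimal sketch, assuming the standard form x' = x(a - by), y' = y(cx - d) with illustrative parameters (not values taken from the thesis):

```python
import math

def kahan_lv_step(x, y, h, a=1.0, b=1.0, c=1.0, d=1.0):
    """One step of the Kahan (Hirota-Kimura) discretization of the
    Lotka-Volterra system x' = x(a - b*y), y' = y(c*x - d).
    Products are averaged as x*y -> (X*y + x*Y)/2, so the update is
    linear in the new point (X, Y); solve the 2x2 system by Cramer's rule."""
    A11 = 1.0 - h * a / 2 + h * b * y / 2
    A12 = h * b * x / 2
    A21 = -h * c * y / 2
    A22 = 1.0 + h * d / 2 - h * c * x / 2
    r1 = x * (1.0 + h * a / 2)
    r2 = y * (1.0 - h * d / 2)
    det = A11 * A22 - A12 * A21
    return (r1 * A22 - A12 * r2) / det, (A11 * r2 - A21 * r1) / det

def invariant(x, y):
    """Conserved quantity of the continuous flow (for a = b = c = d = 1)."""
    return x - math.log(x) + y - math.log(y)

x, y = 2.0, 1.0
H0 = invariant(x, y)
for _ in range(1000):
    x, y = kahan_lv_step(x, y, 0.01)
# for this scheme the continuous invariant should drift only slightly
print(abs(invariant(x, y) - H0))
```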
7

Khabir, Mohmed Hassan Mohmed. "Numerical singular perturbation approaches based on spline approximation methods for solving problems in computational finance." Thesis, University of the Western Cape, 2011. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_7416_1320395978.

Abstract:
Options are a special type of derivative security because their values are derived from the value of some underlying security. Most options can be grouped into one of two categories: European options, which can be exercised only on the expiration date, and American options, which can be exercised on or before the expiration date. American options are much harder to deal with than European ones, the reason being that the optimal exercise policy of these options leads to free boundary problems. Ever since the seminal work of Black and Scholes [J. Pol. Econ. 81(3) (1973), 637-659], the differential equation approach to pricing options has attracted many researchers. Recently, numerical singular perturbation techniques have been used extensively for solving many differential equation models in science and engineering. In this thesis, we explore some of those methods, based on spline approximations, to solve option pricing problems. We show a systematic construction and analysis of these methods for some European option problems and then extend the approach to the pricing of American options as well as some exotic options. The proposed methods are analyzed for stability and convergence. Thorough numerical results are presented and compared with those in the literature.
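For reference, the Black-Scholes model cited in this abstract admits a closed-form price for the European call; a minimal sketch of that textbook formula (not the thesis's spline-based or singular-perturbation schemes):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call:
    C = S*N(d1) - K*exp(-r*T)*N(d2)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 4))  # about 10.4506
```

American options have no such closed form, which is what motivates the numerical free-boundary treatment described above.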
8

Yang, Qianqian. "Novel analytical and numerical methods for solving fractional dynamical systems." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/35750/1/Qianqian_Yang_Thesis.pdf.

Abstract:
During the past three decades, the subject of fractional calculus (that is, calculus of integrals and derivatives of arbitrary order) has gained considerable popularity and importance, mainly due to its demonstrated applications in numerous diverse and widespread fields in science and engineering. For example, fractional calculus has been successfully applied to problems in systems biology, physics, chemistry and biochemistry, hydrology, medicine, and finance. In many cases these new fractional-order models are more adequate than the previously used integer-order models, because fractional derivatives and integrals enable the description of the memory and hereditary properties inherent in various materials and processes that are governed by anomalous diffusion. Hence, there is a growing need to find the solution behaviour of these fractional differential equations. However, the analytic solutions of most fractional differential equations generally cannot be obtained. As a consequence, approximate and numerical techniques are playing an important role in identifying the solution behaviour of such fractional equations and exploring their applications. The main objective of this thesis is to develop new effective numerical methods and supporting analysis, based on the finite difference and finite element methods, for solving time, space and time-space fractional dynamical systems involving fractional derivatives in one and two spatial dimensions. A series of five published papers and one manuscript in preparation will be presented on the solution of the space fractional diffusion equation, space fractional advection-dispersion equation, time and space fractional diffusion equation, time and space fractional Fokker-Planck equation with a linear or non-linear source term, and fractional cable equation involving two time fractional derivatives, respectively.
One important contribution of this thesis is the demonstration of how to choose different approximation techniques for different fractional derivatives. Special attention has been paid to the Riesz space fractional derivative, due to its important application in the field of groundwater flow, systems biology and finance. We present three numerical methods to approximate the Riesz space fractional derivative, namely the L1/L2-approximation method, the standard/shifted Grünwald method, and the matrix transform method (MTM). The first two methods are based on the finite difference method, while the MTM allows discretisation in space using either the finite difference or finite element methods. Furthermore, we prove the equivalence of the Riesz fractional derivative and the fractional Laplacian operator under homogeneous Dirichlet boundary conditions – a result that had not previously been established. This result justifies the aforementioned use of the MTM to approximate the Riesz fractional derivative. After spatial discretisation, the time-space fractional partial differential equation is transformed into a system of fractional-in-time differential equations. We then investigate numerical methods to handle time fractional derivatives, be they Caputo type or Riemann-Liouville type. This leads to new methods utilising either finite difference strategies or the Laplace transform method for advancing the solution in time. The stability and convergence of our proposed numerical methods are also investigated. Numerical experiments are carried out in support of our theoretical analysis. We also emphasise that the numerical methods we develop are applicable for many other types of fractional partial differential equations.
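The Grünwald method named in this abstract discretizes a fractional derivative as a weighted sum of function values. A minimal sketch of the generic first-order (non-shifted) Grünwald-Letnikov sum, with an illustrative test function and order, not the thesis's specific shifted scheme:

```python
from math import gamma

def grunwald_frac_deriv(f, x, alpha, n=2000):
    """Standard (non-shifted) Grunwald-Letnikov approximation of the
    order-alpha Riemann-Liouville derivative of f at x (lower limit 0):
        D^alpha f(x) ~ h^(-alpha) * sum_{k=0}^{n} g_k * f(x - k*h),
    where g_k = (-1)^k * C(alpha, k), computed by the recurrence
    g_0 = 1, g_k = g_{k-1} * (1 - (alpha + 1)/k). First-order accurate."""
    h = x / n
    g = 1.0          # g_0
    total = f(x)     # k = 0 term
    for k in range(1, n + 1):
        g *= 1.0 - (alpha + 1.0) / k
        total += g * f(x - k * h)
    return total / h**alpha

# check against the exact value D^{1/2} x^2 = Gamma(3)/Gamma(2.5) * x^{3/2}
approx = grunwald_frac_deriv(lambda t: t * t, 1.0, 0.5)
exact = gamma(3) / gamma(2.5)
print(abs(approx - exact))
```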
9

Davis, Paige N. "Localised structures in some non-standard, singularly perturbed partial differential equations." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/201835/1/Paige_Davis_Thesis.pdf.

Abstract:
This thesis addresses the existence and stability of localised solutions in some nonstandard systems of partial differential equations. In particular, it locates the linearised spectrum of a Keller-Segel model for bacterial chemotaxis with logarithmic chemosensitivity, establishes the existence of travelling wave solutions to the Gatenby-Gawlinski model for tumour invasion with the acid-mediation hypothesis using geometric singular perturbation theory, and formulates the Evans function for a trivial defect solution in a general reaction diffusion equation with an added heterogeneous defect. Extending the analysis to these non-standard problems provides a foundation and insight for more general dynamical systems.
10

Ali, Zakaria Idriss. "Stochastic quasilinear parabolic equations with non standard growth : weak and strong solutions." Thesis, University of Pretoria, 2015. http://hdl.handle.net/2263/53502.

Abstract:
This thesis consists of two main parts. The first part concerns the existence of weak probabilistic solutions (elsewhere called martingale solutions) for a stochastic quasilinear parabolic equation of generalized polytropic filtration, characterized by the presence of a nonlinear elliptic part admitting non-standard growth. The deterministic version of the equation was first introduced and studied by Samokhin in [178] as a generalized model for polytropic filtration. Our objective is to investigate the corresponding stochastic counterpart in the functional setting of generalized Lebesgue and Sobolev spaces. We establish an existence result for weak probabilistic solutions when the forcing terms do not satisfy Lipschitz conditions and the noise involves cylindrical Wiener processes. The second part is devoted to existence and uniqueness results for a class of strongly nonlinear stochastic parabolic partial differential equations. This part treats an important class of higher-order stochastic quasilinear parabolic equations involving an unbounded perturbation of zeroth order. The deterministic case was studied by Brezis and Browder (Proc. Natl. Acad. Sci. USA, 76(1): 38-40, 1979). Our main goal is to provide a detailed study of the corresponding stochastic problem. We establish the existence of a weak probabilistic solution and of a unique strong probabilistic solution. The main tools used in this part of the thesis are a regularization through a truncation procedure, which enables us to adapt the work of Krylov and Rozovskii (Journal of Soviet Mathematics, 14: 1233-1277, 1981), combined with analytic and probabilistic compactness results (the Prokhorov and Skorokhod theorems), the theory of pseudomonotone operators, and a Banach space version of the Yamada-Watanabe theorem due to Röckner, Schmuland and Zhang.
The study undertaken in this thesis is in some sense pioneering, since, to the best of our knowledge, neither class of stochastic partial differential equations has been the object of previous investigation. The results obtained are therefore original and constitute, in our view, a significant contribution to the nonlinear theory of stochastic parabolic equations.
Thesis (PhD)--University of Pretoria, 2015.
Mathematics and Applied Mathematics
PhD
Unrestricted
11

Curtis, S. McKay. "The 'Fair' Triathlon: Equating Standard Deviations Using Non-Linear Bayesian Models." Diss., 2004. http://contentdm.lib.byu.edu/ETD/image/etd428.pdf.

12

Curtis, Steven McKay. "The "Fair" Triathlon: Equating Standard Deviations Using Non-Linear Bayesian Models." BYU ScholarsArchive, 2004. https://scholarsarchive.byu.edu/etd/32.

Abstract:
The Ironman triathlon was created in 1978 by combining events with the longest distances for races then contested in Hawaii in swimming, cycling, and running. The Half Ironman triathlon was formed using half the distances of each of the events in the Ironman. The Olympic distance triathlon was created by combining events with the longest distances for races sanctioned by the major federations for swimming, cycling, and running. The relative importance of each event in overall race outcome was not given consideration when determining the distances of each of the races in modern triathlons. Thus, there is a general belief among triathletes that the swimming portion of the standard-distance triathlons is underweighted. We present a nonlinear Bayesian model for triathlon finishing times that models time and standard deviation of time as a function of distance. We use this model to create "fair" triathlons by equating the standard deviations of the times taken to complete the swimming, cycling, and running events. Thus, in these "fair" triathlons, a one standard deviation improvement in any event has an equivalent impact on overall race time.
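The equating step described above can be illustrated with a toy calculation. Everything below is hypothetical: the power-law form sd(d) = a * d**b and all coefficients are made-up stand-ins, not the thesis's fitted Bayesian model; the point is only that once sd is a monotone function of distance, the "fair" distance for each event is the inverse image of a common target sd:

```python
# Hypothetical coefficients (NOT fitted values from the thesis):
# sd of finishing time in minutes as a power law of distance d in km.
EVENTS = {
    "swim": (6.0, 1.2),
    "bike": (0.25, 1.0),
    "run":  (1.2, 1.1),
}

def distance_for_sd(a, b, target_sd):
    """Invert sd = a * d**b for the distance d giving the target sd."""
    return (target_sd / a) ** (1.0 / b)

target = 20.0  # common spread, in minutes, demanded of every event
fair = {ev: distance_for_sd(a, b, target) for ev, (a, b) in EVENTS.items()}
for ev, d in fair.items():
    a, b = EVENTS[ev]
    print(ev, round(d, 1), "km, sd back-check:", round(a * d**b, 6))
```

With equal standard deviations, a one-sd improvement in any event moves overall time by the same amount, which is the fairness criterion the abstract describes.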
13

Kama, Phumezile. "Non-standard finite difference methods in dynamical systems." Thesis, Pretoria : [s.n.], 2009. http://upetd.up.ac.za/thesis/available/etd-07132009-163422.

14

Ali, Zakaria Idriss. "Existence result for a class of stochastic quasilinear partial differential equations with non-standard growth." Diss., University of Pretoria, 2010. http://hdl.handle.net/2263/29519.

Abstract:
In this dissertation, we investigate a very interesting class of quasi-linear stochastic partial differential equations. The main purpose of this work is to prove an existence result for this type of stochastic differential equation under non-standard growth conditions. The main difficulty in the present problem is that existence cannot easily be retrieved from the well-known results that assume Lipschitz-type growth conditions [42].
Dissertation (MSc)--University of Pretoria, 2010.
Mathematics and Applied Mathematics
unrestricted
15

Zhang, Jianing. "Non-standard backward stochastic differential equations and multiple optimal stopping problems with applications to securities pricing." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16713.

Abstract:
This thesis elaborates on the wealth maximization problem of a small investor who invests in a financial market. Key tools for our studies come in the form of several classes of BSDEs with particular non-linearities, casting them outside the standard class of Lipschitz continuous BSDEs. We first give a characterization of a small investor's optimal wealth and its associated optimal strategy by means of a system of coupled equations, a forward-backward stochastic differential equation (FBSDE) with non-Lipschitz coefficients, where the backward component is of quadratic growth. We then examine how specifying concrete utility functions gives rise to another class of non-standard BSDEs. In this context, we also investigate the relationship to a modeling approach based on random-field techniques, known by now as the backward stochastic partial differential equations (BSPDEs) approach. We continue with the presentation of a numerical method for a special type of quadratic BSDEs. This method is based on a stochastic analogue of the Cole-Hopf transformation from PDE theory. We discuss its applicability to numerically solving indifference pricing problems for contingent claims in an incomplete market. We then proceed to BSDEs whose drifts explicitly incorporate path dependence. Several analytical properties of this type of non-standard BSDE are derived. Finally, we devote our attention to the problem of a small investor who is equipped with several exercise rights that allow her to collect pre-specified cashflows. We solve this problem by casting it into the language of multiple optimal stopping and develop a martingale dual approach for characterizing the optimal outcome. Moreover, we develop regression-based Monte Carlo algorithms which efficiently simulate lower and upper price bounds.
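Regression-based Monte Carlo lower bounds of the kind mentioned in this abstract are typically of Longstaff-Schwartz type. A minimal sketch for a Bermudan put with a single exercise right; all parameters are illustrative, and this is the generic textbook algorithm, not the thesis's multiple-stopping dual method:

```python
import numpy as np

def ls_bermudan_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                    steps=50, paths=20000, seed=0):
    """Longstaff-Schwartz regression Monte Carlo: an approximate lower
    bound for a Bermudan put exercisable at each time step."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)
    # simulate geometric Brownian motion paths (columns are times dt..T)
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    payoff = lambda s: np.maximum(K - s, 0.0)
    V = payoff(S[:, -1])                      # cashflow at maturity
    for t in range(steps - 2, -1, -1):        # backward induction
        V *= disc
        itm = payoff(S[:, t]) > 0.0
        if itm.any():
            # regress continuation value on 1, S, S^2, S^3 (in-the-money paths)
            A = np.vander(S[itm, t], 4)
            coef, *_ = np.linalg.lstsq(A, V[itm], rcond=None)
            exercise = payoff(S[itm, t]) > A @ coef
            V[itm] = np.where(exercise, payoff(S[itm, t]), V[itm])
    return disc * V.mean()

print(round(ls_bermudan_put(), 2))  # roughly the American put value
```

Duality methods of the kind the thesis develops complement this with an upper bound, sandwiching the true price.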
16

Seloula, Nour El Houda. "Mathematical analysis and numerical approximation of the Stokes and Navier-Stokes equations with non standard boundary conditions." Pau, 2010. http://www.theses.fr/2010PAUU3030.

Abstract:
Les travaux de la thèse portent sur la résolution des équations de Stokes, d'abord avec des conditions au bord portant sur la composante normale du champ de vitesse et la composante tangentielle du tourbillon, ensuite avec des conditions au bord portant sur la pression et la composante tangentielle du champ de vitesse. Dans chaque cas nous démontrons l'existence, l'unicité et la régularité de la solution. Nous traitons aussi le cas de solutions très faibles, par dualité. Le cadre fonctionnel que nous avons choisi est celui des espaces de Banach du type H(div) et H(rot) ou l'intersection des deux, basés sur l'espace Lp, avec 1 < p < ∞. En particulier, on se place dans des domaines non simplement connexes, avec des frontières non connexes. Nous nous intéressons en premier lieu à l'obtention d'inégalités de Sobolev pour des champs de vecteurs u ∈ Lp(Ω). Dans un second temps, nous établissons des résultats d'existence pour les potentiels vecteurs avec diverses conditions aux limites. Ceci nous permet d'abord d'effectuer des décompositions de type Helmholtz et ensuite de démontrer des conditions Inf-Sup lorsque la forme bilinéaire est un produit de rotationnels. Ces conditions aux limites font que l'équation de la pression est indépendante des autres variables. C'est la raison pour laquelle nous sommes naturellement conduits à étudier les problèmes elliptiques qui se traduisent par les systèmes de Stokes sans la pression. La résolution de ces problèmes se fait au moyen des conditions Inf-Sup qui jouent un rôle clef pour établir l'existence et l'unicité de solutions. Nous donnons une application aux systèmes de Navier-Stokes, où on obtient l'existence d'une solution en effectuant un point fixe autour du problème d'Oseen. Enfin, deux méthodes numériques sont proposées pour approcher le problème de Stokes. Nous analysons d'abord une méthode de Nitsche et puis une méthode de Galerkin discontinu. Quelques résultats numériques de convergence sont décrits, qui sont parfaitement cohérents avec l'analyse.
This work deals with the solving of the Stokes problem, first with boundary conditions on the normal component of the velocity field and the tangential component of the vorticity, then with boundary conditions on the pressure and the tangential component of the velocity field. In each case, we give existence, uniqueness and regularity of solutions. The case of very weak solutions is also treated by using a duality argument. The functional framework that we have chosen is that of Banach spaces of type H(div) and H(rot), or their intersection, based on the space Lp, with 1 < p < ∞. In particular, we suppose that the domain Ω is multiply connected and that its boundary is not connected. We are first interested in Sobolev inequalities for vector fields u ∈ Lp(Ω). Second, we give some results concerning vector potentials with different boundary conditions. This allows us to establish Helmholtz decompositions and Inf-Sup conditions when the bilinear form is a product of curls. Due to these non-standard boundary conditions, the pressure is decoupled from the system. This is why we are naturally reduced to solving elliptic problems, namely the Stokes equations without the pressure term. For this, we use the Inf-Sup conditions, which play a crucial role in the existence and uniqueness of solutions. We give an application to the Navier-Stokes equations, where the existence of a solution is obtained by applying a fixed-point theorem around the Oseen equations. Finally, two numerical methods are proposed in order to approximate the Stokes problem: first a Nitsche method, then a discontinuous Galerkin method. Some numerical convergence results verifying the theoretical predictions are given.
APA, Harvard, Vancouver, ISO, and other styles
17

Andersson, Björn. "Contributions to Kernel Equating." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-234618.

Full text
Abstract:
The statistical practice of equating is needed when scores on different versions of the same standardized test are to be compared. This thesis constitutes four contributions to the observed-score equating framework kernel equating. Paper I introduces the open source R package kequate which enables the equating of observed scores using the kernel method of test equating in all common equating designs. The package is designed for ease of use and integrates well with other packages. The equating methods non-equivalent groups with covariates and item response theory observed-score kernel equating are currently not available in any other software package. In paper II an alternative bandwidth selection method for the kernel method of test equating is proposed. The new method is designed for usage with non-smooth data such as when using the observed data directly, without pre-smoothing. In previously used bandwidth selection methods, the variability from the bandwidth selection was disregarded when calculating the asymptotic standard errors. Here, the bandwidth selection is accounted for and updated asymptotic standard error derivations are provided. Item response theory observed-score kernel equating for the non-equivalent groups with anchor test design is introduced in paper III. Multivariate observed-score kernel equating functions are defined and their asymptotic covariance matrices are derived. An empirical example in the form of a standardized achievement test is used and the item response theory methods are compared to previously used log-linear methods. In paper IV, Wald tests for equating differences in item response theory observed-score kernel equating are conducted using the results from paper III. Simulations are performed to evaluate the empirical significance level and power under different settings, showing that the Wald test is more powerful than the Hommel multiple hypothesis testing method. 
Data from a psychometric licensure test and a standardized achievement test are used to exemplify the hypothesis-testing procedure. The results show that the Wald test can lead to different conclusions from the Hommel procedure.
APA, Harvard, Vancouver, ISO, and other styles
18

Kim, Ja Young. "Factors affecting accuracy of comparable scores for augmented tests under Common Core State Standards." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/2543.

Full text
Abstract:
Under the Common Core State Standard (CCSS) initiative, states that voluntarily adopt the common core standards work together to develop a common assessment in order to supplement and replace existing state assessments. However, the common assessment may not cover all state standards, so states within the consortium can augment the assessment using locally developed items that align with state-specific standards to ensure that all necessary standards are measured. The purpose of this dissertation was to evaluate the linking accuracy of the augmented tests using the common-item nonequivalent groups design. Pseudo-test analyses were conducted by splitting a large-scale math assessment in half, creating two parallel common assessments, and by augmenting two sets of state-specific items from a large-scale science assessment. Based upon some modifications of the pseudo-data, a simulated study was also conducted. For the pseudo-test analyses, three factors were investigated: (1) the difference in ability between the new and old test groups, (2) the differential effect size for the common assessment and state-specific item set, and (3) the number of common items. For the simulation analyses, the latent-trait correlations between the common assessment and state-specific item set as well as the differential latent-trait correlations between the common assessment and state-specific item set were used in addition to the three factors considered for the pseudo-test analyses. For each of the analyses, four equating methods were used: the frequency estimation, chained equipercentile, item response theory (IRT) true score, and IRT observed score methods. 
The main findings of this dissertation were as follows: (1) as the group ability difference increased, bias also increased; (2) when the effect sizes differed for the common assessment and state-specific item set, larger bias was observed; (3) increasing the number of common items resulted in less bias, especially for the frequency estimation method when the group ability differed; (4) the frequency estimation method was more sensitive to the group ability difference than the differential effect size, while the IRT equating methods were more sensitive to the differential effect size than the group ability difference; (5) higher latent-trait correlation between the common assessment and state-specific item set was associated with smaller bias, and if the latent-trait correlation exceeded 0.8, the four equating methods provided adequate linking unless the group ability difference was large; (6) differential latent-trait correlations for the old and new tests resulted in larger bias than the same latent-trait correlations for the old and new tests, and (7) when the old and new test groups were equivalent, the frequency estimation method provided the least bias, but IRT true score and observed score equating resulted in smaller bias than the frequency estimation and chained equipercentile methods when group ability differed.
APA, Harvard, Vancouver, ISO, and other styles
19

REIS, João Alfíeres Andrade de Simões dos. "Teoria de Dirac modificada no Modelo Padrão Estendido não-mínimo." Universidade Federal do Maranhão, 2017. https://tedebc.ufma.br/jspui/handle/tede/tede/2024.

Full text
Abstract:
CAPES.
In recent years, there has been a growing interest in Lorentz-violating theories. Studies have been carried out addressing the inclusion of Lorentz-violating terms into the Standard Model (SM). This has led to the development of the Standard-Model Extension (SME), which is a framework containing modifications that are power-counting renormalizable and consistent with the gauge structure of the SM. More recently, a nonminimal version of the SME was developed for the photon, neutrino, and fermion sectors, additionally including higher-derivative terms. One of the new properties of this nonminimal version is the loss of renormalizability. In this work, we study the main aspects of a modified Dirac theory in the nonminimal Standard-Model Extension. We focus on two types of operators, namely pseudovector and two-tensor operators. These two operators display an unusual property: they break the degeneracy of spin. This new property becomes manifest in providing two different dispersion relations, one for each spin projection. To solve the Dirac equation modified by those operators, we introduce a new method that was suggested by Kostelecký and Mewes in a recent research paper. This method allows for block-diagonalizing the modified Dirac equation and thus permits us to obtain the spinors. The objectives of the current work are as follows. First, we will review the main concepts for understanding the SME. Second, we will introduce how to extend the minimal fermion sector to the nonminimal one. Third, we will describe the method that block-diagonalizes the modified Dirac equation and we will compute the field equations. And, finally, we will get the exact dispersion relations and the spinor solutions for operators of arbitrary mass dimension.
Nos últimos anos, houve um aumento significativo no interesse em teorias que violam a simetria de Lorentz. Estudos têm sido realizados na tentativa de incluir termos que violam a simetria de Lorentz no Modelo Padrão (MP). Esta tentativa culminou no surgimento do chamado Modelo Padrão Estendido (MPE). Este modelo contempla todas as possíveis modificações que são consistentes com as propriedades já bem estabelecidas, tais como renormalizabilidade e a estrutura de gauge do MP. Mais recentemente, uma versão não-mínima do MPE foi desenvolvida para os setores dos fótons, neutrinos e para os férmions. Esta versão não-mínima caracteriza-se pela presença de altas derivadas. Uma das novas propriedades nesta versão não-mínima é a perda da renormalizabilidade. Neste trabalho, estudamos os principais aspectos da teoria de Dirac modificada no MPE não-mínimo. Nós nos concentramos em dois tipos de operadores, a saber, operadores pseudovetoriais e tensoriais. Estes dois operadores exibem uma propriedade incomum: eles quebram a degenerescência de spin. Esta nova propriedade manifesta-se, por exemplo, na presença de duas relações de dispersão diferentes, uma para cada projeção do spin. Para resolver a equação de Dirac modificada por esses operadores, introduzimos um novo método que foi sugerido por Kostelecký e Mewes em um trabalho recente. Este método permite bloco-diagonalizar a equação de Dirac modificada e, assim, nos fornece uma nova maneira de obter os espinores. Os objetivos do presente trabalho são os seguintes. Primeiro, iremos rever alguns conceitos essenciais para o entendimento do MPE. Segundo, apresentaremos a extensão do setor fermiônico mínimo para o não-mínimo. Terceiro, descreveremos o método que bloco-diagonaliza a equação de Dirac modificada e calcularemos as equações de campo. Por fim, calcularemos as relações de dispersão exatas e as soluções espinoriais para cada configuração não-mínima dos operadores citados.
APA, Harvard, Vancouver, ISO, and other styles
20

Wang, Chunxin. "An investigation of bootstrap methods for estimating the standard error of equating under the common-item nonequivalent groups design." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/1188.

Full text
Abstract:
The purpose of this study was to investigate the performance of the parametric bootstrap method and to compare the parametric and nonparametric bootstrap methods for estimating the standard error of equating (SEE) under the common-item nonequivalent groups (CINEG) design with the frequency estimation (FE) equipercentile method under a variety of simulated conditions. When the performance of the parametric bootstrap method was investigated, bivariate polynomial log-linear models were employed to fit the data. With the consideration of the different polynomial degrees and two different numbers of cross-product moments, a total of eight parametric bootstrap models were examined. Two real datasets were used as the basis to define the population distributions and the "true" SEEs. A simulation study was conducted reflecting three levels for group proficiency differences, three levels of sample sizes, two test lengths and two ratios of the number of common items and the total number of items. Bias of the SEE, standard errors of the SEE, root mean square errors of the SEE, and their corresponding weighted indices were calculated and used to evaluate and compare the simulation results. The main findings from this simulation study were as follows: (1) The parametric bootstrap models with larger polynomial degrees generally produced smaller bias but larger standard errors than those with lower polynomial degrees. (2) The parametric bootstrap models with a higher order cross product moment (CPM) of two generally yielded more accurate estimates of the SEE than the corresponding models with the CPM of one. (3) The nonparametric bootstrap method generally produced less accurate estimates of the SEE than the parametric bootstrap method. However, as the sample size increased, the differences between the two bootstrap methods became smaller. 
When the sample size was equal to or larger than 3,000, the differences between the nonparametric bootstrap method and the parametric bootstrap model that produced the smallest RMSE were very small. (4) Of all the models considered in this study, parametric bootstrap models with the polynomial degree of four performed better under most simulation conditions. (5) Aside from method effects, sample size and test length had the most impact on estimating the SEE. Group proficiency differences and the ratio of the number of common items to the total number of items had little effect on a short test, but had slight effect on a long test.
APA, Harvard, Vancouver, ISO, and other styles
21

Bradford, Jennifer Wolf. "Attachment Processes, Stress Processes, and Sociocultural Standards in the Development of Eating Disturbances in College Women." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5477/.

Full text
Abstract:
Minimal empirical research using longitudinal data to explore integrative models of eating disorder development exists. The purpose of this study was to further explore multidimensional models incorporating parental attachment, history of stress, appraisal/coping processes, internalization of the thin-ideal, negative affect, body image, and eating disordered behavior using prospective, longitudinal data. The models were evaluated using 238 participants who completed an initial series of self-report questionnaires during their first semester in college and completed follow-up questionnaires 6 months and 18 months later. Structural equation modeling was used to examine the relationships among the factors. Analyses confirmed that college freshman with insecure parental attachment relationships and those with a history of previous stressful experiences appraised the adjustment to college as more stressful and reported feeling less able to cope with the transition; these conditions predicted increased negative affect and increased eating disturbances. Women who reported experiencing negative affect and those that endorsed internalization of the thin-ideal also reported higher levels of body dissatisfaction; these women engaged in more disordered eating attitudes and behaviors. A second model investigating negative affect as mediating the relationship between the appraisal/coping process and eating disturbances also revealed that experiencing difficulties with the transition to college predicted later negative mood states. Further, women who reported increased negative affect also reported increased eating disturbances. Finally, cross-lagged and simultaneous effects between selected factors were evaluated. Results from these analyses are mixed, but they provide additional information about the predictive relationships among factors that play a role in the development of eating disorders. 
The results of this study provide valuable information about the development of eating disorders that can be used to aid prevention and treatment. Examination of these models in a large independent sample might provide confirmation of these relationships, and investigation of the models during different developmental periods might also provide important information about the development of eating disturbances and those individuals who are most at risk.
APA, Harvard, Vancouver, ISO, and other styles
22

He, Yiyang. "A Physically Based Pipeline for Real-Time Simulation and Rendering of Realistic Fire and Smoke." Thesis, Stockholms universitet, Numerisk analys och datalogi (NADA), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-160401.

Full text
Abstract:
With the rapidly growing computational power of modern computers, physically based rendering has found its way into real-world applications. Real-time simulation and rendering of fire and smoke has become a major research interest in the modern video game industry, and will continue to be an important research direction in computer graphics. Visually recreating realistic dynamic fire and smoke is a complicated problem. Furthermore, solving it requires knowledge from various areas, ranging from computer graphics and image processing to computational physics and chemistry. Even though most of these areas are well studied separately, new challenges emerge when they are combined. This thesis focuses on three aspects of the problem, dynamics, real-time performance and realism, to propose a solution in the form of a GPGPU pipeline, along with its implementation. Three main areas with application to the problem are discussed in detail: fluid simulation, volumetric radiance estimation and volumetric rendering, with the emphasis laid on the first two. The results are evaluated around the three aspects, with graphical demonstrations and performance measurements. Uniform grids are used with a Finite Difference (FD) discretization scheme to simplify the computation. FD schemes are easy to implement in parallel, especially with ComputeShader, which is well supported in the Unity engine. The whole implementation can easily be integrated into real-world applications in Unity or other game engines that support DirectX 11 or higher.
APA, Harvard, Vancouver, ISO, and other styles
23

Zhang, Jianing [Verfasser], Peter [Akademischer Betreuer] Imkeller, John G. M. [Akademischer Betreuer] Schoenmakers, and Stefan [Akademischer Betreuer] Ankirchner. "Non-standard backward stochastic differential equations and multiple optimal stopping problems with applications to securities pricing / Jianing Zhang. Gutachter: Peter Imkeller ; John G. M. Schoenmakers ; Stefan Ankirchner." Berlin : Humboldt Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://d-nb.info/1033586919/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Seloula, Nour El Houda. "Analyse mathématique et approximation numérique des équations de Stokes et de Navier-Stokes avec des conditions aux limites non standard." Phd thesis, Université de Pau et des Pays de l'Adour, 2010. http://tel.archives-ouvertes.fr/tel-00687740.

Full text
Abstract:
Les travaux de la thèse portent sur la résolution des équations de Stokes, d'abord avec des conditions au bord portant sur la composante normale du champ de vitesse et la composante tangentielle du tourbillon, ensuite avec des conditions au bord portant sur la pression et la composante tangentielle du champ de vitesse. Dans chaque cas nous démontrons l'existence, l'unicité et la régularité de la solution. Nous traitons aussi le cas de solutions très faibles, par dualité. Le cadre fonctionnel que nous avons choisi est celui des espaces de Banach du type H(div) et H(rot) ou l'intersection des deux, basés sur l'espace Lp, avec 1 < p < ∞. En particulier, on se place dans des domaines non simplement connexes, avec des frontières non connexes. Nous nous intéressons en premier lieu à l'obtention d'inégalités de Sobolev pour des champs de vecteurs u ∈ Lp(Ω). Dans un second temps, nous établissons des résultats d'existence pour les potentiels vecteurs avec diverses conditions aux limites. Ceci nous permet d'abord d'effectuer des décompositions de type Helmholtz et ensuite de démontrer des conditions Inf-Sup lorsque la forme bilinéaire est un produit de rotationnels. Ces conditions aux limites font que l'équation de la pression est indépendante des autres variables. C'est la raison pour laquelle nous sommes naturellement conduits à étudier les problèmes elliptiques qui se traduisent par les systèmes de Stokes sans la pression. La résolution de ces problèmes se fait au moyen des conditions Inf-Sup qui jouent un rôle clef pour établir l'existence et l'unicité de solutions. Nous donnons une application aux systèmes de Navier-Stokes, où on obtient l'existence d'une solution en effectuant un point fixe autour du problème d'Oseen. Enfin, deux méthodes numériques sont proposées pour approcher le problème de Stokes. Nous analysons d'abord une méthode de Nitsche et puis une méthode de Galerkin discontinu. Quelques résultats numériques de convergence sont décrits, qui sont parfaitement cohérents avec l'analyse.
APA, Harvard, Vancouver, ISO, and other styles
25

Stonkutė, Alina. "KINTAMO DIFUZIJOS KOEFICIENTO PARABOLINIŲ LYGČIŲ SPRENDIMAS SKAITINIAIS METODAIS." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100903_130346-27452.

Full text
Abstract:
Magistro darbe sprendėme diferencialinę difuzijos lygtį naujais metodais. Išanalizavę standartinius kintamo difuzijos koeficiento parabolinių lygčių sprendimo metodus, mes šiame darbe pasiūlėme spręsti šias lygtis naudojant vadinamąsias „tilto“ funkcijas. Išbandėme dviejų rūšių „tilto“ funkcijas: hiperbolinio tangento ir trigonometrinio. Diferencialinės lygties sprendinio ieškojome per „tilto“ funkcijų ir polinomų sandaugų sumą: trigonometrinei „tilto“ funkcijai ir hiperbolinei tangento „tilto“ funkcijai. Gavome kompiuterinius sprendinius ir nustatėme tų sprendinių paklaidas. Palyginę trigonometrinio bei hiperbolinio tangento „tilto“ funkcijos paklaidų standartinius nuokrypius gavome, kad tikslesnis yra hiperbolinio tangento „tilto“ funkcijos metodas.
This Master's thesis solves the differential diffusion equation using new methods. Having analysed the standard methods for solving parabolic equations with a variable diffusion coefficient, in this work we propose solving these equations using so-called "bridge" functions. Two types of bridge functions were tried: hyperbolic tangent and trigonometric. The solution of the differential equation was sought as a sum of products of bridge functions and polynomials, for both the trigonometric and the hyperbolic tangent bridge functions. We obtained numerical solutions and determined their errors. Comparing the standard deviations of the errors of the trigonometric and hyperbolic tangent bridge functions, we found that the hyperbolic tangent bridge-function method is more accurate.
APA, Harvard, Vancouver, ISO, and other styles
26

Tarhini, Ahmad. "Nouvelle physique, Matière noire et cosmologie à l'aurore du Large Hadron Collider." Phd thesis, Université Claude Bernard - Lyon I, 2013. http://tel.archives-ouvertes.fr/tel-00847781.

Full text
Abstract:
Dans la première partie de cette thèse, je présenterai le 5D MSSM, qui est un modèle supersymétrique avec une dimension supplémentaire (Five Dimensional Minimal Supersymmetric Standard Model). Après compactification sur l'orbifold S1/Z2, le calcul des équations du groupe de renormalisation (RGE) à une boucle montre un changement dans l'évolution des paramètres phénoménologiques. Dès que l'énergie E = 1/R est atteinte, les états de Kaluza-Klein interviennent et donnent des contributions importantes. Plusieurs possibilités pour les champs de matière sont discutées : ils peuvent se propager dans le "bulk" ou ils sont localisés sur la "brane". Je présenterai d'une part l'évolution des équations de Yukawa dans le secteur des quarks ainsi que les paramètres de la matrice CKM, d'autre part les effets de ce modèle sur le secteur des neutrinos, notamment les masses, les angles de mélange, les phases de Majorana et de Dirac. Dans la deuxième partie, je parlerai du modèle AMSB et de ses extensions (MM-AMSB et HC-AMSB). Ces modèles sont des scénarios de brisure assez bien motivés en supersymétrie. En calculant des observables issues de la physique des particules puis en imposant des contraintes de cosmologie standard et alternative sur ces scénarios, j'ai déterminé les régions qui respectent les contraintes de la matière noire et les limites de la physique des saveurs. Je reprendrai ensuite l'analyse de ces modèles en utilisant de nouvelles limites pour les observables. La nouvelle analyse est faite en ajoutant les mesures récentes sur la masse du Higgs et les rapports de branchement pour plusieurs canaux de désintégration.
APA, Harvard, Vancouver, ISO, and other styles
27

Domrow, Nathan Craig. "Design, maintenance and methodology for analysing longitudinal social surveys, including applications." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16518/1/Nathan_Domrow_Thesis.pdf.

Full text
Abstract:
This thesis describes the design, maintenance and statistical analysis involved in undertaking a longitudinal survey. A longitudinal survey (or study) obtains observations or responses from individuals at several times over a defined period. This enables the direct study of changes in an individual's response over time. In particular, it distinguishes an individual's change over time from the baseline differences among individuals within the initial panel (or cohort). This is not possible in a cross-sectional study. As such, longitudinal surveys give correlated responses within individuals. Longitudinal studies therefore require different considerations for sample design, selection and analysis from standard cross-sectional studies. This thesis looks at the methodology for analysing social surveys. Most social surveys comprise variables described as categorical variables. This thesis outlines the process of sample design and selection, interviewing and analysis for a longitudinal study. Emphasis is given to categorical response data typical of a survey. Included in this thesis are examples relating to the Goodna Longitudinal Survey and the Longitudinal Survey of Immigrants to Australia (LSIA). Analysis in this thesis also utilises data collected from these surveys. The Goodna Longitudinal Survey was conducted by the Queensland Office of Economic and Statistical Research (a portfolio office within Queensland Treasury) and began in 2002. It ran for two years, over which two waves of responses were collected.
APA, Harvard, Vancouver, ISO, and other styles
28

Domrow, Nathan Craig. "Design, maintenance and methodology for analysing longitudinal social surveys, including applications." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16518/.

Full text
Abstract:
This thesis describes the design, maintenance and statistical analysis involved in undertaking a longitudinal survey. A longitudinal survey (or study) obtains observations or responses from individuals at several times over a defined period. This enables the direct study of changes in an individual's response over time. In particular, it distinguishes an individual's change over time from the baseline differences among individuals within the initial panel (or cohort). This is not possible in a cross-sectional study. As such, longitudinal surveys give correlated responses within individuals. Longitudinal studies therefore require different considerations for sample design, selection and analysis from standard cross-sectional studies. This thesis looks at the methodology for analysing social surveys. Most social surveys comprise variables described as categorical variables. This thesis outlines the process of sample design and selection, interviewing and analysis for a longitudinal study. Emphasis is given to categorical response data typical of a survey. Included in this thesis are examples relating to the Goodna Longitudinal Survey and the Longitudinal Survey of Immigrants to Australia (LSIA). Analysis in this thesis also utilises data collected from these surveys. The Goodna Longitudinal Survey was conducted by the Queensland Office of Economic and Statistical Research (a portfolio office within Queensland Treasury) and began in 2002. It ran for two years, over which two waves of responses were collected.
APA, Harvard, Vancouver, ISO, and other styles
29

Santiago, Pinazo Sonia. "Advanced Features in Protocol Verification: Theory, Properties, and Efficiency in Maude-NPA." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/48527.

Full text
Abstract:
The area of formal analysis of cryptographic protocols has been an active one since the mid 80’s. The idea is to verify communication protocols that use encryption to guarantee secrecy and that use authentication of data to ensure security. Formal methods are used in protocol analysis to provide formal proofs of security, and to uncover bugs and security flaws that in some cases had remained unknown long after the original protocol publication, such as the case of the well known Needham-Schroeder Public Key (NSPK) protocol. In this thesis we tackle problems regarding the three main pillars of protocol verification: modelling capabilities, verifiable properties, and efficiency. This thesis is devoted to investigate advanced features in the analysis of cryptographic protocols tailored to the Maude-NPA tool. This tool is a model-checker for cryptographic protocol analysis that allows for the incorporation of different equational theories and operates in the unbounded session model without the use of data or control abstraction. An important contribution of this thesis is relative to theoretical aspects of protocol verification in Maude-NPA. First, we define a forwards operational semantics, using rewriting logic as the theoretical framework and the Maude programming language as tool support. This is the first time that a forwards rewriting-based semantics is given for Maude-NPA. Second, we also study the problem that arises in cryptographic protocol analysis when it is necessary to guarantee that certain terms generated during a state exploration are in normal form with respect to the protocol equational theory. We also study techniques to extend Maude-NPA capabilities to support the verification of a wider class of protocols and security properties. First, we present a framework to specify and verify sequential protocol compositions in which one or more child protocols make use of information obtained from running a parent protocol. 
Second, we present a theoretical framework to specify and verify protocol indistinguishability in Maude-NPA. This kind of properties aim to verify that an attacker cannot distinguish between two versions of a protocol: for example, one using one secret and one using another, as it happens in electronic voting protocols. Finally, this thesis contributes to improve the efficiency of protocol verification in Maude-NPA. We define several techniques which drastically reduce the state space, and can often yield a finite state space, so that whether the desired security property holds or not can in fact be decided automatically, in spite of the general undecidability of such problems.
Santiago Pinazo, S. (2015). Advanced Features in Protocol Verification: Theory, Properties, and Efficiency in Maude-NPA [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/48527
APA, Harvard, Vancouver, ISO, and other styles
30

El-Otmany, Hammou. "Approximation par la méthode NXFEM des problèmes d'interface et d'interphase dans la mécanique des fluides." Thesis, Pau, 2015. http://www.theses.fr/2015PAUU3024/document.

Full text
Abstract:
Numerical modelling and simulation of interfaces in fluid and solid mechanics are at the heart of many applications, such as cell biology (deformation of red blood cells), petroleum engineering and seismics (reservoir modelling, presence of faults, wave propagation), aerospace, civil engineering, etc. This thesis focuses on the approximation of interface and interphase problems in fluid mechanics by means of the NXFEM method, which takes into account discontinuities on non-aligned meshes. We first focused on the development of NXFEM for nonconforming finite elements in order to take into account the interface between two media. Two approaches have been proposed, for the Darcy and Stokes equations. The first approach consists in modifying the basis functions of Crouzeix-Raviart on the cut cells, and the second consists in adding stabilization terms on each part of a cut edge. We have studied them from a theoretical and a numerical point of view. Then we studied the asymptotic modelling and numerical approximation of interphase problems, involving a thin layer between two media. We first considered the Darcy equations in the presence of a highly permeable fracture. By passing to the limit in the weak formulation, we obtained an asymptotic model where the 2D fracture is described by an interface with adequate transmission conditions. A numerical method based on NXFEM with conforming finite elements has been developed for this limit problem, and its consistency and uniform stability have been proved. Numerical tests including a comparison with the literature have been presented. The asymptotic modelling has finally been extended to the Stokes equations, for which we have justified the limit problem. Finally, we considered the mechanical behaviour of red blood cells in order to better understand blood rheology.
The last part of the thesis is devoted to the modelling of the membrane of a red blood cell by a non-Newtonian viscoelastic liquid, described by the Giesekus model. For an interphase problem composed of two Newtonian fluids (the exterior and the interior of the red blood cell) and a Giesekus liquid (the membrane), we formally derived the limit problem, in which the equations in the membrane are replaced by transmission conditions on an interface.
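For orientation, NXFEM-type methods enforce the interface conditions weakly with Nitsche terms. Schematically (our notation, not the thesis's), the discrete bilinear form for a two-phase diffusion problem with interface Γ reads:

```latex
a_h(u,v) = \sum_{i=1}^{2} \int_{\Omega_i} \kappa_i \,\nabla u \cdot \nabla v \,dx
 - \int_{\Gamma} \{\kappa\,\partial_n u\}\,[v]\,ds
 - \int_{\Gamma} \{\kappa\,\partial_n v\}\,[u]\,ds
 + \int_{\Gamma} \frac{\gamma}{h}\,[u]\,[v]\,ds,
```

where [·] denotes the jump and {·} a weighted average across Γ; stabilized variants like the one described above add penalty terms on the cut edges to control the conditioning on small cut cells.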
APA, Harvard, Vancouver, ISO, and other styles
31

Dong, Shijie. "Résultats d'existence pour des équations d'ondes non-linéaires." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS086.

Full text
Abstract:
This thesis is devoted to showing the existence of global-in-time solutions to some nonlinear equations, including the wave-Klein-Gordon equations and the Dirac equations. For the wave-Klein-Gordon-Dirac equations, based on the hyperboloidal foliation method we establish several global stability results, explore the asymptotic behaviour of the solutions, and study how the solutions are affected when some mass parameters go to certain limits. As an application, we prove that several physical models are globally stable: the Dirac-Klein-Gordon model, the Dirac-Proca model, the Klein-Gordon-Zakharov model, the U(1) model of electroweak interactions, and so on. In Part I, we study the electroweak standard model. We first prove global nonlinear stability results for the U(1) model, where we obtain uniform energy bounds (modulo a slow logarithmic growth). Next we move to the full standard model and obtain global stability results in some special cases. In Part II, we analyse a class of coupled wave-Klein-Gordon equations with critical nonlinearities, and prove the existence of global-in-time solutions which enjoy sharp pointwise decay properties. Besides, we study a class of Klein-Gordon equations with possibly vanishing mass, and prove sharp pointwise decay results which are uniform in the mass. In Part III, we mainly investigate the hyperboloidal Fourier transform, and derive a new L2-type estimate for waves.
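Schematically (our notation; the precise systems vary by chapter), the coupled wave-Klein-Gordon systems studied are of the form below, analysed on hyperboloidal slices of the interior of the light cone:

```latex
-\Box u = P(u, v, \partial u, \partial v), \qquad
-\Box v + c^2 v = Q(u, v, \partial u, \partial v), \qquad
\mathcal{H}_s = \{(t,x)\in\mathbb{R}^{1+3} : t^2 - |x|^2 = s^2,\ t>0\},
```

with the mass-limit questions above corresponding to the behaviour as the coefficient c tends to 0.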
APA, Harvard, Vancouver, ISO, and other styles
32

Khalid, Adeel S. "Development and Implementation of Rotorcraft Preliminary Design Methodology using Multidisciplinary Design Optimization." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14013.

Full text
Abstract:
A formal framework is developed and implemented in this research for preliminary rotorcraft design using IPPD methodology. All the technical aspects of design are considered including the vehicle engineering, dynamic analysis, stability and control, aerodynamic performance, propulsion, transmission design, weight and balance, noise analysis and economic analysis. The design loop starts with a detailed analysis of requirements. A baseline is selected and upgrade targets are identified depending on the mission requirements. An Overall Evaluation Criterion (OEC) is developed that is used to measure the goodness of the design or to compare the design with competitors. The requirements analysis and baseline upgrade targets lead to the initial sizing and performance estimation of the new design. The digital information is then passed to disciplinary experts. This is where the detailed disciplinary analyses are performed. Information is transferred from one discipline to another as the design loop is iterated. To coordinate all the disciplines in the product development cycle, Multidisciplinary Design Optimization (MDO) techniques e.g. All At Once (AAO) and Collaborative Optimization (CO) are suggested. The methodology is implemented on a Light Turbine Training Helicopter (LTTH) design. Detailed disciplinary analyses are integrated through a common platform for efficient and centralized transfer of design information from one discipline to another in a collaborative manner. Several disciplinary and system level optimization problems are solved. After all the constraints of a multidisciplinary problem have been satisfied and an optimal design has been obtained, it is compared with the initial baseline, using the earlier developed OEC, to measure the level of improvement achieved. Finally a digital preliminary design is proposed. 
The proposed design methodology provides an automated design framework and facilitates parallel design by removing disciplinary interdependency: current and updated information is made available to all disciplines at all times through a central collaborative repository, overall design time is reduced, and an optimized design is achieved.
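As an illustration of the Overall Evaluation Criterion idea, an OEC is often formed as a weighted sum of baseline-normalised metrics, so the baseline scores exactly 1.0 and upgrades score above it. The criteria, weights and numbers below are illustrative, not those of the thesis:

```python
# Hedged sketch of an OEC: weighted, baseline-normalised criteria.
# "lower_is_better" metrics (e.g. cost, noise) are inverted so that
# improvement always pushes the OEC above 1.0.

def oec(design, baseline, weights, lower_is_better=()):
    total = 0.0
    for name, w in weights.items():
        ratio = design[name] / baseline[name]
        if name in lower_is_better:
            ratio = baseline[name] / design[name]
        total += w * ratio
    return total

weights  = {"payload": 0.3, "range": 0.3, "cost": 0.2, "noise": 0.2}
baseline = {"payload": 450.0, "range": 600.0, "cost": 1.2e6, "noise": 88.0}
upgraded = {"payload": 480.0, "range": 640.0, "cost": 1.25e6, "noise": 85.0}

print(oec(baseline, baseline, weights, ("cost", "noise")))  # ≈ 1.0
print(oec(upgraded, baseline, weights, ("cost", "noise")) > 1.0)  # True
```

The same scalar can serve both roles described above: comparing a candidate against competitors and measuring the improvement of the optimized design over the initial baseline.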
APA, Harvard, Vancouver, ISO, and other styles
33

Arenas, Tawil Abraham José. "Mathematical modelling of virus RSV: qualitative properties, numerical solutions and validation for the case of the region of Valencia." Doctoral thesis, Universitat Politècnica de València, 2010. http://hdl.handle.net/10251/8316.

Full text
Abstract:
The aim of this thesis is, first, the modelling of the behaviour of seasonal diseases by systems of differential equations, and the study of dynamical properties such as positivity, periodicity and stability of the analytical solutions, together with the construction of numerical schemes for approximating the solutions of systems of nonlinear first-order differential equations which model seasonal infectious diseases such as the transmission of the Respiratory Syncytial Virus (RSV). Two mathematical models of seasonal diseases are generalised and shown to have periodic solutions using a coincidence theorem of Jean Mawhin. To corroborate the analytical results, numerical schemes are developed using the nonstandard finite-difference techniques developed by Ronald Mickens and the differential transform method, which reproduce the dynamical behaviour of the analytical solutions, such as positivity and periodicity. Finally, numerical simulations are carried out using the implemented schemes and parameters deduced from clinical data of the region of Valencia for people infected with the RSV virus. The results are compared with those produced by the Euler and Runge-Kutta methods and Matlab's ODE45 routine, showing better approximations for step sizes larger than those normally used by these traditional schemes.
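A minimal sketch (not the thesis code) of a Mickens-type nonstandard finite-difference scheme, for a toy SIS epidemic model rather than the full RSV model: the nonlinear term is discretised nonlocally and a denominator function replaces the raw step size, which makes the update positivity-preserving for any step.

```python
import math

# Toy SIS model: dI/dt = beta*I*(1 - I) - gamma*I.
# NSFD discretisation (Mickens style):
#   (I_{n+1} - I_n)/phi = beta*I_n - beta*I_n*I_{n+1} - gamma*I_{n+1},
# with denominator function phi(h) = (1 - exp(-gamma*h)) / gamma.

def nsfd_sis(i0, beta, gamma, h, steps):
    phi = (1.0 - math.exp(-gamma * h)) / gamma
    i = i0
    for _ in range(steps):
        # Solving the implicit relation for I_{n+1} gives a positive update.
        i = i * (1.0 + phi * beta) / (1.0 + phi * (beta * i + gamma))
    return i

# Converges to the endemic equilibrium I* = 1 - gamma/beta even with a
# large step, where a standard explicit Euler step could go negative.
print(nsfd_sis(0.01, beta=0.5, gamma=0.2, h=2.0, steps=500))  # ≈ 0.6
```

The fixed points of the discrete map coincide with those of the continuous model, which is the dynamical-consistency property the abstract emphasises.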
Arenas Tawil, AJ. (2009). Mathematical modelling of virus RSV: qualitative properties, numerical solutions and validation for the case of the region of Valencia [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8316
APA, Harvard, Vancouver, ISO, and other styles
34

Dakin, Del Thomas. "In situ sensing to enable the 2010 thermodynamic equation of seawater." Thesis, 2016. http://hdl.handle.net/1828/7713.

Full text
Abstract:
The Thermodynamic Equation of Seawater 2010 (TEOS-10) is hampered by the inability to measure absolute salinity or density in situ. No new advances in in situ salinity or density measurement have taken place since the adoption of the Practical Salinity Scale in 1978. In this thesis three candidate technologies for in situ measurement are developed and assessed: phased conductivity, an in situ density sensor, and sound speed sensors. Of these, only sound speed sensors showed the potential for an in situ TEOS-10 measurement solution. To be implemented, sensor response times need to be matched and sound speed sensor accuracy must be improved. Sound speed sensor accuracy is primarily limited by the calibration reference, pure water. Test results indicate the TEOS-10 sound speed coefficients may also need to be improved. A calibration system to improve sound speed sensor accuracy and verify the TEOS-10 coefficients is discussed.
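The route sketched in the abstract — inferring salinity from an in situ sound-speed measurement plus temperature and depth — can be illustrated by inverting a sound-speed equation. For brevity this sketch uses Mackenzie's (1981) empirical equation (coefficients as commonly quoted) rather than the full TEOS-10 Gibbs-function expression; the inversion strategy is the same.

```python
# Mackenzie (1981) sound speed: T in degC, S in psu, D in metres.
def sound_speed(T, S, D):
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35.0) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35.0) - 7.139e-13 * T * D**3)

def salinity_from_sound_speed(c, T, D, lo=0.0, hi=42.0):
    # Sound speed increases monotonically with S (for oceanic T),
    # so simple bisection recovers the salinity.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if sound_speed(T, mid, D) < c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: compute c at a known S, then recover S from c.
c = sound_speed(10.0, 35.0, 1000.0)
print(round(salinity_from_sound_speed(c, 10.0, 1000.0), 6))  # ≈ 35.0
```

In practice the accuracy bottleneck is the one the thesis identifies: the sound-speed measurement and its pure-water calibration reference, not the inversion itself.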
APA, Harvard, Vancouver, ISO, and other styles
35

Wu, Jiun-Yu. "Comparing Model-based and Design-based Structural Equation Modeling Approaches in Analyzing Complex Survey Data." Thesis, 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-08-8523.

Full text
Abstract:
Conventional statistical methods assuming data sampled under simple random sampling are inadequate for complex survey data with a multilevel structure and non-independent observations. In the structural equation modeling (SEM) framework, a researcher can either use ad-hoc robust sandwich standard-error estimators to correct the standard-error estimates (design-based approach) or perform multilevel analysis to model the multilevel data structure (model-based approach) when analyzing dependent data. In a cross-sectional setting, the first study aims to examine the differences between design-based single-level confirmatory factor analysis (CFA) and model-based multilevel CFA with respect to model fit test statistics/fit indices and estimates of the fixed and random effects, with corresponding statistical inference, when analyzing multilevel data. Several design factors were considered, including cluster number, cluster size, intra-class correlation, and the structural equality of the between-/within-level models. The performance of a maximum modeling strategy with a saturated higher-level model and the true lower-level model was also examined. The simulation study showed that the design-based approach provided adequate results only under equal between/within structures. In the unequal between/within structure scenarios, however, the design-based approach produced biased fixed and random effect estimates. Maximum modeling generated consistent and unbiased within-level model parameter estimates across three different scenarios. Multilevel latent growth curve modeling (MLGCM) is a versatile tool for analyzing repeated measures sampled from a multi-stage sampling design. However, researchers often adopt latent growth curve models (LGCM) without considering the multilevel structure. The second study examined the influence of different model specifications on the model fit test statistics/fit indices, the between-/within-level regression coefficient and random effect estimates, and the mean structures.
The simulations suggested that the design-based MLGCM incorporating higher-level covariates produces consistent parameter estimates and statistical inferences comparable to those from the model-based MLGCM, and maintains adequate statistical power even with a small number of clusters.
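The core of the design-based correction discussed above can be illustrated on the simplest possible estimator, a mean: a cluster-robust ("sandwich") standard error sums residuals within clusters before squaring, so positive intra-class correlation inflates it relative to the naive i.i.d. formula. All numbers here are synthetic:

```python
import numpy as np

# 50 clusters of 20 observations, cluster effect variance = residual
# variance, so the intra-class correlation is 0.5.
rng = np.random.default_rng(0)
G, m = 50, 20
u = rng.normal(size=G)                      # cluster random effects
y = np.repeat(u, m) + rng.normal(size=G * m)
n = y.size

ybar = y.mean()
resid = y - ybar

# Naive SE assumes independent observations.
se_naive = resid.std(ddof=1) / np.sqrt(n)

# Cluster-robust SE: sum residuals within each cluster first.
cluster_sums = resid.reshape(G, m).sum(axis=1)
se_robust = np.sqrt((cluster_sums ** 2).sum()) / n

print(se_naive, se_robust)   # robust SE is several times larger here
```

With intra-class correlation 0.5 and clusters of 20, the design effect is roughly 1 + 19 × 0.5, so the honest standard error is about three times the naive one — the kind of understatement the design-based SEM correction guards against.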
APA, Harvard, Vancouver, ISO, and other styles
36

Muscolino, G., and Alessandro Palmeri. "Response of beams resting on viscoelastically damped foundation to moving oscillators." 2006. http://hdl.handle.net/10454/604.

Full text
Abstract:
The response of beams resting on a viscoelastically damped foundation under moving SDoF oscillators is scrutinized through a novel state-space formulation, in which a number of internal variables are introduced to represent the frequency-dependent behaviour of the viscoelastic foundation. A suitable single-step scheme is provided for the numerical integration of the equations of motion, and dimensional analysis is applied to define the dimensionless combinations of the design parameters that rule the responses of the beam and the moving oscillator. The effects of boundary conditions, span length and number of modes of the beam, along with those of the mechanical properties of the oscillator and foundation, are investigated in a new dimensionless form, and some interesting trends are highlighted. The inaccuracy associated with the use of effective values of stiffness and damping for the viscoelastic foundation, as is usual in the present state of practice, is also quantified.
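The internal-variable idea can be sketched on a single standard-linear-solid foundation element (illustrative numbers, not the paper's model): one internal variable z turns the frequency-dependent stiffness into a first-order ODE, and for piecewise-constant input a single-step exponential update integrates it exactly.

```python
import math

# Standard linear solid: reaction force F = k_inf*x + k1*(x - z),
# internal variable evolving as c1 * dz/dt = k1*(x - z).
k_inf, k1, c1 = 100.0, 50.0, 10.0
x0 = 0.01                      # step displacement imposed at t = 0
tau = c1 / k1                  # relaxation time of the internal variable
h = 0.001                      # time step of the single-step scheme

z, t = 0.0, 0.0
while t < 5.0:
    # Exact single-step update of z for constant x = x0 over the step.
    z = x0 + (z - x0) * math.exp(-h / tau)
    t += h

F = k_inf * x0 + k1 * (x0 - z)
print(F)   # relaxes from (k_inf + k1)*x0 = 1.5 toward k_inf*x0 = 1.0
```

The gap between the instantaneous stiffness (k_inf + k1) and the long-time stiffness k_inf is exactly what a single "effective" stiffness-plus-damping pair cannot capture, which is the inaccuracy the paper quantifies.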
APA, Harvard, Vancouver, ISO, and other styles
37

Szczerbiak, Paweł. "Własności ciemnej materii i ich związek z sektorem Higgsa w wybranych uogólnieniach Modelu Standardowego." Doctoral thesis, 2018. https://depotuw.ceon.pl/handle/item/2710.

Full text
Abstract:
The dissertation examines the mutual influence between the cold dark matter sector and the extended Higgs sector in two popular supersymmetric generalizations of the Standard Model (MSSM, NMSSM). The study focuses on the relic density and direct detection of dark matter as well as on the LHC search for new physics. Special emphasis has been placed on the estimation of the experimentally allowed parameter space of the models under consideration. Subsequently, the Boltzmann equation for relativistic species is derived and applied to the analysis of hot dark matter in S. Weinberg's Higgs portal model. The results obtained with this method are also compared with some popular approximations of the dark matter relic density calculation.
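The relic-density computation behind this kind of analysis reduces, in the standard non-relativistic (Lee-Weinberg) approximation, to one ODE for the comoving yield Y as a function of x = m/T. A toy integration with illustrative numbers (not the thesis's model or couplings) shows the characteristic freeze-out:

```python
import math

# dY/dx = -(lam / x^2) * (Y^2 - Yeq^2),  Yeq ~ a * x^1.5 * exp(-x).
# "a" and "lam" are illustrative; real values come from g, g*, m and
# the thermally averaged annihilation cross-section.
a, lam = 0.145, 1.0e7

def yeq(x):
    return a * x**1.5 * math.exp(-x)

def dY(x, Y):
    return -(lam / x**2) * (Y**2 - yeq(x)**2)

x, Y, h = 10.0, yeq(10.0), 1e-3      # start in equilibrium at x = 10
while x < 100.0:                     # classical RK4 integration
    k1 = dY(x, Y)
    k2 = dY(x + h/2, Y + h/2 * k1)
    k3 = dY(x + h/2, Y + h/2 * k2)
    k4 = dY(x + h, Y + h * k3)
    Y += (h / 6) * (k1 + 2*k2 + 2*k3 + k4)
    x += h

# Y stops tracking Yeq and freezes out near Y ~ x_f / lam.
print(Y, yeq(100.0))
```

The approximations the abstract mentions (e.g. the instantaneous freeze-out estimate Y∞ ≈ x_f/λ) are exactly shortcuts for this integration; the thesis's point is that for relativistic species the equation itself must be rederived before such comparisons are made.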
APA, Harvard, Vancouver, ISO, and other styles
38

Abhishek, Kumar *. "Seismic Microzonation Of Lucknow Based On Region Specific GMPE's And Geotechnical Field Studies." Thesis, 2012. http://etd.iisc.ernet.in/handle/2005/2559.

Full text
Abstract:
Mankind has faced earthquake hazards since prehistoric times, and many developed and developing countries are under constant threat from them. The theories of plate tectonics and engineering seismology have helped to understand earthquakes and to predict earthquake hazards on a regional scale. However, regional-scale hazard mapping in terms of seismic zonation has not been fully implemented in many developing countries like India. Agglomerations of large populations in Indian cities and poor construction have raised the risk due to various possible seismic hazards. The first and foremost step towards hazard reduction is the estimation of seismic hazards on a regional scale. The objective of this study is to estimate the seismic hazard parameters for Lucknow, a part of the Indo-Gangetic Basin (IGB), and to develop a regional-scale microzonation map. Lucknow is a highly populated city located close to the active seismic belt of the Himalaya. This belt came into existence during the Cenozoic era (40-50 million years ago) and is a constant source of seismic threats. Many devastating earthquakes have occurred here, such as 1255 Nepal, 1555 Srinagar, 1737 Kolkata, 1803 Nepal, 1833 Kathmandu, 1897 Shillong, 1905 Kangra, 1934 Bihar-Nepal, 1950 Assam and 2005 Kashmir. Historical evidence shows that many of these earthquakes caused fatalities of up to 0.1 million. At present, in the light of building strains and the non-occurrence of a great event between the 1905 Kangra and 1934 Bihar-Nepal earthquake regions, this stretch has been highlighted as the central seismic gap, which may have high potential for great earthquakes in the near future. Geodetic studies in these locations indicate a possible slip of 9.5 m, which could cause an event of magnitude 8.7 on the Richter scale in the central seismic gap. Lucknow, the capital of Uttar Pradesh, has a population of 2.8 million as per the 2011 Census.
It lies in Zone III as per IS 1893:2002 and can be considered a moderate seismic region. However, the city falls within a 350 km radial distance of the Main Boundary Thrust (MBT) and the active regional seismic source of the Lucknow-Faizabad fault. Considering the ongoing seismicity of the Himalayan region and the Lucknow-Faizabad fault, the city is under high seismic threat. Hence a comprehensive, regional-scale study of the earthquake hazards for Lucknow is needed, and in this work its seismic microzonation has been attempted. The thesis is divided into 11 chapters. A detailed discussion of the importance of this study, the seismicity of Lucknow, and the methodology adopted for detailed seismic hazard assessment and microzonation is presented in the first three chapters. The development of a region-specific Ground Motion Prediction Equation (GMPE) and seismic hazard estimation at bedrock level using highly ranked GMPEs are presented in Chapters 4 and 5, respectively. Subsurface lithology, measurement of dynamic soil properties and correlations are essential to assess region-specific site effects and liquefaction potential. A discussion of the experimental studies, subsurface profiling using geotechnical and geophysical test results, and the correlation between shear wave velocity (SWV) and standard penetration test (SPT) N values is presented in Chapter 6. Detailed shear wave velocity profiling with seismic site classification and ground response parameters considering multiple ground motion records is discussed in Chapters 7 and 8. Chapters 9 and 10 present the assessment of liquefaction potential and the determination of the hazard index with microzonation maps, respectively. Conclusions derived from each chapter are presented in Chapter 11. A brief summary of the work is presented below. Attenuation relations or GMPEs are an important component of any seismic hazard analysis, as they control the accurate prediction of hazard values.
Even though the Himalayas have experienced great earthquakes since ancient times, suitable GMPEs applicable to a wide range of distances and magnitudes are limited. Most of the available regional GMPEs were developed considering limited recorded data and/or purely synthetic ground motion data. This work presents the development of a regional GMPE considering both recorded and synthetic ground motions. In total 14 earthquakes, consisting of 10 events with recorded data and 4 historic events with isoseismal maps, are used. Synthetic ground motions based on a finite-fault model have been generated at unavailable locations for the recorded events and over the complete range of distances for the historic earthquakes. Model parameters for the synthetic ground motions were arrived at through a detailed parametric study and from the literature. A concept of Apparent Stations (AS) has been used to generate synthetic ground motions over a wide range of distances as well as directions around the epicentre. The synthetic ground motion data are validated by comparison with available recorded data and peak ground acceleration (PGA) values from isoseismal maps. A new GMPE has been developed based on a two-step stratified regression procedure considering the combined dataset of recorded and synthetic ground motions, and validated by comparison with three recently recorded earthquake events. The GMPE proposed in this study is capable of predicting PGA values close to recorded data and spectral acceleration up to a period of 2 seconds. Comparison of the new GMPE with the recorded data of recent earthquakes shows a good match of ground motion as well as response spectra. The new GMPE is applicable for a wide range of earthquake magnitudes, from 5 to 9 on the Mw scale. Reduction of future earthquake hazard is possible if hazard values are predicted precisely. A detailed seismic hazard analysis is carried out in this study considering deterministic and probabilistic approaches.
A new seismotectonic map has been generated for Lucknow considering a radial distance of 350 km around the city centre, which also covers the active Himalayan plate boundaries. Past earthquakes within the seismotectonic region have been collected from the United States Geological Survey (USGS), the Northern California Earthquake Data Center (NCEDC), the Indian Meteorological Department (IMD), the Seismic Atlas of India and its Environs (SEISAT), etc. A total of 1831 events covering all magnitude ranges were obtained. The collected events were homogenized, declustered and filtered for Mw ≥ 4, leaving a total of 496 events within the seismic study region. Well-delineated seismic sources are compiled from SEISAT. Superimposing the earthquake catalogue on the source map, a seismotectonic map of Lucknow was generated. A total of 47 faults which have experienced earthquakes of magnitude 4 and above are found and used for the seismic hazard analysis. Based on the distribution of earthquake events on the seismotectonic map, two regions have been identified: Region I, which shows a high density of seismic events in and around the Main Boundary Thrust (MBT), and Region II, which consists of the area surrounding Lucknow with a sparse distribution of earthquake events. Data completeness analysis and estimation of the seismic parameters "a" and "b" are carried out separately for both regions. Based on the analysis, the available earthquake data are complete for a period of 80 years in both regions. Using the complete dataset, regional recurrence relations have been developed, giving a "b" value of 0.86 for Region I and 0.9 for Region II, comparable with earlier studies. The maximum possible earthquake magnitude for each source has been estimated using the observed magnitudes and the doubly truncated Gutenberg-Richter relation. The study area of Lucknow is divided into 0.015° × 0.015° grids, and the PGA at each grid point has been estimated considering all sources and the three GMPEs.
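The magnitude-recurrence step above fits the Gutenberg-Richter relation log10 N = a − bM to the complete catalogue. A common closed-form route is Aki's maximum-likelihood estimator, b = log10(e) / (mean(M) − Mc), sketched here on a synthetic catalogue with a known b value (not the Lucknow data):

```python
import math
import random

# Synthetic catalogue: magnitudes above completeness Mc follow an
# exponential law with rate b*ln(10), i.e. Gutenberg-Richter.
random.seed(1)
b_true, Mc, n = 0.9, 4.0, 20000
beta = b_true * math.log(10.0)
mags = [Mc + random.expovariate(beta) for _ in range(n)]

# Aki (1965) maximum-likelihood estimate of b.
b_hat = math.log10(math.e) / (sum(mags) / n - Mc)
print(round(b_hat, 2))
```

Because the estimator depends only on the mean excess magnitude above Mc, the completeness analysis described above (80 years here) is what makes the recovered b values (0.86 and 0.9) meaningful.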
A Matlab code was written for the seismic hazard analysis, and the maximum PGA value at each grid point was determined and mapped. Deterministic seismic hazard analysis (DSHA) shows that the maximum expected PGA at bedrock level varies from 0.05g in the eastern part to 0.13g in the northern region. The response spectrum at the city centre is also developed up to a period of 2 seconds. Further, probabilistic seismic hazard analysis (PSHA) has been carried out and PGA values for 10% and 2% probability of exceedance in 50 years have been estimated and mapped. PSHA for 10% probability shows PGA varying from 0.035g in the eastern parts to 0.07g in the western and northern parts of Lucknow. Similarly, PSHA for 2% probability of exceedance indicates PGA varying from 0.07g in the eastern parts up to 0.13g in the northern parts. Uniform hazard spectra are also developed for 2% and 10% probability for periods of up to 2 seconds. The seismic hazard analyses in this study show that the northern and western parts of Lucknow are more vulnerable than the other parts. Bedrock hazard values change considerably due to subsoil properties as the motion reaches the surface. A detailed geophysical and geotechnical investigation has therefore been carried out for subsoil profiling and seismic site classification. The study area has been divided into grids of 2 km × 2 km, and roughly one geophysical test using MASW (Multichannel Analysis of Surface Waves) has been carried out in each grid to obtain the shear wave velocity (SWV) profiles of the subsoil layers. A total of 47 MASW tests have been carried out, uniformly distributed over Lucknow. In addition, 12 boreholes have been drilled with the necessary sampling and measurement of N-SPT values at 1.5 m intervals down to a depth of 30 m. Further, 11 more borelog reports were collected from the agency hired for drilling the boreholes.
The necessary laboratory tests were conducted on disturbed and undisturbed soil samples for soil classification and density measurement. Based on the subsoil information obtained from these boreholes, two cross-sections down to a depth of 30 m have been generated. These cross-sections show the presence of silty sand in the top 10 m at most locations, followed by clayey sand of low to medium compressibility down to a depth of 30 m. In between the sand and clay, traces of silt were also found at many locations. In addition to these boreholes, 20 deeper boreholes (depth ≥ 150 m) were collected from Jal Nigam (Water Corporation) Lucknow, Government of Uttar Pradesh. A typical cross-section along the alignment of these deeper boreholes has been generated down to 150 m depth. It shows the presence of fine sand near the Gomati, while other locations are occupied by surface clayey sand; medium sand has also been found in the western part of the city at a depth of 110 m, continuing down to 150 m. On careful examination of the MASW and borehole data, 17 locations were found to be very close together, with both SWV and N-SPT values available down to 30 m depth. These SWV and N-SPT values were compiled and used to develop correlations between SWV and N-SPT for sandy soil, clayey soil and all soil types. This is the first correlation for IGB soil deposits based on measured data down to 30 m. The new correlation is verified graphically using the normal consistency ratio and the standard percentage error with respect to the measured N-SPT and SWV values. The SWV and N-SPT profiles are further used in the site response and liquefaction analyses. Another important earthquake-induced hazard is liquefaction. Even though many historic earthquakes have caused liquefaction in India, very limited attempts have been made to map liquefaction potential in the IGB. In this study, a detailed liquefaction analysis has been carried out for Lucknow, a part of the Ganga Basin, to map liquefaction potential.
Initially, the liquefaction susceptibility of the soil deposits was assessed by comparing the grain-size distribution curves obtained from laboratory tests with the range of grain-size distributions for potentially liquefiable soils; most surface soil deposits in the study area are susceptible to liquefaction. At all 23 borehole locations, the measured N-SPT values are corrected for (a) overburden pressure (CN), (b) hammer energy (CE), (c) borehole diameter (CB), (d) presence or absence of a liner (CS), (e) rod length (CR) and (f) fines content (Cfines). Surface PGA values at each borehole location are used to estimate the Cyclic Stress Ratio (CSR), and the corrected N-SPT values [(N1)60CS] are used to estimate the Cyclic Resistance Ratio (CRR) of each layer. The CSR and CRR values are then used to estimate the Factor of Safety (FOS) against liquefaction in each layer. The least factor-of-safety value is identified at each location, and liquefaction factor-of-safety maps are presented for the average and maximum amplified PGA values. These maps highlight that the northern, western and central parts of Lucknow are very critical to critical against liquefaction, while the southern parts show moderately to less critical areas. The entire alignment of the river Gomati falls in very critical to critical regions for liquefaction. The least FOS represents the worst-case scenario and does not account for the thickness of the liquefiable soil layers, so these FOS values are further used to determine the Liquefaction Potential Index (LPI) of each site and to develop an LPI map. Based on the LPI map, the Gomati corridor is found to have high to very high liquefaction potential; the southern and central parts of Lucknow show low to moderate liquefaction potential, while northern and western Lucknow has moderate to high liquefaction potential. All the seismic hazard maps for Lucknow have been combined to develop the final microzonation map in terms of hazard index values.
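The per-layer screening described above follows the simplified procedure: CSR from the surface PGA and the stress ratio, CRR from the corrected blow count, and FOS = CRR/CSR. A hedged sketch using the commonly quoted clean-sand SPT curve (Youd et al., 2001 form) with illustrative inputs, not the Lucknow borehole data:

```python
# Simplified-procedure liquefaction screening (Seed-Idriss type).

def csr(a_max_g, sigma_v, sigma_v_eff, depth_m):
    # Stress-reduction factor rd (Liao-Whitman form, shallow depths).
    rd = (1.0 - 0.00765 * depth_m if depth_m <= 9.15
          else 1.174 - 0.0267 * depth_m)
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

def crr_75(n1_60cs):
    # Clean-sand CRR curve for Mw 7.5, valid for (N1)60cs < 30.
    n = n1_60cs
    return (1.0 / (34.0 - n) + n / 135.0
            + 50.0 / (10.0 * n + 45.0) ** 2 - 1.0 / 200.0)

depth, a_max = 6.0, 0.13                 # m; g (cf. the PGA maps above)
sigma_v, sigma_v_eff = 110.0, 75.0       # kPa, total and effective stress
fos = crr_75(14) / csr(a_max, sigma_v, sigma_v_eff, depth)
print(round(fos, 2))
```

Repeating this layer by layer and taking the minimum per borehole gives the factor-of-safety maps; weighting each layer's deficit by depth and thickness gives the LPI maps described next.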
Hazard index maps are prepared by combining the rock PGA map, site classification map in terms of shear wave velocity, amplification factor map, FOS map and predominant period map by adopting the Analytical Hierarchy Process (AHP). These parameters are weighted in the order given, from a maximum weight of 6 for PGA down to a weight of 1 for predominant frequency. Normalized weights of each parameter have been estimated. Depending upon the variation of each hazard parameter's values, three to five ranks are assigned and the normalized ranks are calculated. Final hazard index values have been estimated by multiplying the normalized ranks of each parameter by the normalized weights. The microzonation map has been generated by mapping the hazard index values. Three maps were generated, based on DSHA and on PSHA for 2% and 10% probability of exceedance in 50 years. The hazard index maps from DSHA and from PSHA at 2% probability show a similar pattern. Higher hazard index values were obtained in the northern and western parts of Lucknow, and lower values elsewhere. The new microzonation maps can help in dividing Lucknow into three parts: a high-hazard area (the north-western part), a moderate-hazard area (the central part) and a low-hazard area covering the southern and eastern parts of Lucknow. This microzonation differs from the current seismic code, in which the whole area is lumped into one zone without detailed assessment of the different earthquake hazard parameters. Finally, this study brings out the first region-specific GMPE considering recorded and synthetic ground motions for a wide range of magnitudes and distances. The proposed GMPE can also be used in other parts of the Himalayan region, as it matches well with the highly ranked GMPEs. A detailed rock-level PGA map has been generated for Lucknow considering DSHA and PSHA. Detailed geotechnical and geophysical experiments have been carried out in Lucknow.
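The AHP combination step described above reduces to normalizing the assigned weights and ranks and taking a weighted sum per map cell. A sketch using the stated weight ordering (6 for PGA down to 1 for predominant frequency); the parameter names and the rank values for the example cell are hypothetical:

```python
# Raw AHP weights in the stated order of importance (6 = rock PGA ... 1 = predominant frequency)
weights = {"rock_pga": 6, "site_class": 5, "amplification": 4,
           "liquefaction_fos": 3, "lpi": 2, "predominant_freq": 1}
total = sum(weights.values())
norm_w = {k: w / total for k, w in weights.items()}  # normalized weights sum to 1

def hazard_index(ranks, max_rank=5):
    """Hazard index for one grid cell: sum over parameters of
    normalized rank (rank / max_rank) times normalized weight."""
    return sum(norm_w[k] * (ranks[k] / max_rank) for k in norm_w)

# Hypothetical ranks for one cell (1 = lowest hazard ... 5 = highest hazard)
cell = {"rock_pga": 4, "site_class": 3, "amplification": 5,
        "liquefaction_fos": 4, "lpi": 3, "predominant_freq": 2}
hi = hazard_index(cell)  # lies in (0, 1]; higher values map to the high-hazard zone
```

Mapping these per-cell index values over the study area, and classing them into three bands, yields the kind of three-zone microzonation map the abstract describes.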
These results are used to develop a correlation between SWV and N-SPT values for soil deposits in the IGB, together with site classification maps for the study area. The amplification and liquefaction potential of Lucknow are estimated by considering multiple ground motion records to account for different earthquake ground motion amplitudes, durations and frequencies, which is unique among seismic microzonation studies.
APA, Harvard, Vancouver, ISO, and other styles
39

(7011101), Ricki Lauren McKee. "Implementing Common Core Standards for Mathematics: Focus on Problem Solving." Thesis, 2019.

Find full text
Abstract:

Utilizing action research as the methodology, this study was developed with the ultimate goal of describing and reflecting on my implementation of one aspect of the Common Core State Standards for Mathematics (CCSSM) in an algebra classroom. This implementation focused on the Problem-Solving Standard of Mathematical Practice (SMP) as described in CCSSM (Make sense of problems and persevere in solving them). The research question that guided my work was the following: How is the Common Core State Standards for Mathematics (CCSSM) Problem-Solving Mathematical Standard enacted in an algebra class while using a Standards-based curriculum to teach a quadratics unit?

I explored this by focusing on the following sub-questions:

  • Q1. What opportunities to enact the components of the Problem-Solving Mathematical Standard are provided by the written curriculum?
  • Q2. In what way does the teacher’s implementation of the quadratics unit diminish or enhance the opportunities to enact the components of the Problem-Solving Mathematical Standard provided by the written curriculum?
  • Q3. In what ways does the teacher’s enactment of problem-solving opportunities change over the course of the unit?

Reviewing the literature related to the relevant learning theories (sociocultural theory, the situated perspective, and communities of practice), I outlined the history of CCSSM, the National Council of Teachers of Mathematics (NCTM), the National Research Council (NRC), and the No Child Left Behind Act of 2001. Exploring the details of CCSSM’s Standards of Mathematical Content (SMCs) and Standards of Mathematical Practice (SMPs), I discussed problem solving, the Problem-Solving Components (PSCs) listed in the Problem-Solving SMP of CCSSM, teaching through problem solving, and Standards-based curricula, such as College Preparatory Mathematics (CPM), which is the algebra curriculum I chose for this study.

There are many definitions of the construct problem solving. CCSSM describes this construct in unique ways specifically related to student engagement. The challenge for teachers is to not only make sense of CCSSM’s definition of problem solving and its components, but also to enact it in the classroom so that mathematical understanding is enhanced. For this reason, studies revealing how classroom teachers implemented CCSSM, especially in terms of problem solving, are necessary.

The Critical Theoretic/Action Research Paradigm is often utilized by researchers trying to improve their own practice; thus, I opted for an action research methodology because it could be conducted by the practitioner. The methods of data collection and analysis were employed in order to capture the nature of the changes made in the classroom involving my teaching practice. I chose action research because this study met its key tenets: research in action, a collaborative partnership, inquiry concurrent with action, and a problem-solving approach.

While I knew how I wanted to change my classroom teaching style, implementing the change was harder than anticipated. From the outset, I never thought of myself as an absolute classroom authority, because I always maintained a relaxed classroom atmosphere where students were made to feel comfortable. However, this study showed me that students did view me as the authority and looked to me for correct answers, for approval, and/or for reassurance that they were on the right track. My own insecurity about how to respond to students in a way that would get them to interact more with their groups and stop looking to me for answers, combined with my discomfort with forcing students to talk in front of their peers, complicated this study. While it was easy to anticipate how I would handle situations in the classroom, it was hard to change in the moment.

The research revealed the following salient findings: while the written curriculum contained numerous opportunities for students to engage with the Focal PSCs, the teacher plays a crucial role in enacting the written curriculum. Through the teacher’s enactment of this curriculum, opportunities for students to engage with the Focal PSCs can be taken away, enacted as written, or enhanced, all by the teacher. Additionally, change was gradual and difficult due to the complexities of teaching. Reflection and constant adaptation are crucial when it comes to changing my practice.

As a classroom teacher, I value the importance of the changes that need to be made in the classroom to align with CCSSM. I feel that by being both a teacher and a researcher, my work can bridge the gap between research and classroom practice.
APA, Harvard, Vancouver, ISO, and other styles
40

Ncube, Nhlanhla Brian. "Comparing the equator principles' IFC performance standard 6 and the South African mining and biodiversity guideline to identify areas of overlap and gaps to improve biodiversity conservation in the mining sector." Thesis, 2015. http://hdl.handle.net/10539/19283.

Full text
Abstract:
A research report submitted to the Faculty of Science, in partial fulfilment of the requirements for the degree of Master of Science, University of the Witwatersrand, Johannesburg, 6 November 2015.
Environmental degradation and pollution continue to characterise the mining sector in South Africa despite a robust legislative framework aimed at enhancing sustainable mining practices. Of particular concern is the impact of mining on biodiversity. During 2013 the Departments of Environmental Affairs and Mineral Resources, together with the South African Mining and Biodiversity Forum, an alliance of stakeholders from industry, conservation organisations and government facilitated by the Chamber of Mines of South Africa, released the South African Mining and Biodiversity Guideline (SAMBG), which aims to mainstream biodiversity into the mining sector. The guideline seeks to integrate biodiversity considerations into planning processes and to manage biodiversity through the lifecycle of a mine, and so contribute to better outcomes. In addition to the guideline, mining companies that obtain funding from financial institutions that are signatory to the Equator Principles are required to implement IFC Performance Standard 6 (IFC PS6), which also deals with biodiversity conservation. There is a concern that the SAMBG adds further to the burgeoning pile of standards, guidelines and best practices that mining companies are required to meet, without necessarily adding anything new. This research project addresses this concern through a review of the SAMBG to assess its potential contribution to biodiversity conservation and to determine, through a comparative analysis, whether any overlaps and gaps exist between the guideline and IFC PS6. A qualitative methodology was used to understand how the Aichi Biodiversity Targets are addressed by the SAMBG. Based on this review, a conclusion as to the role of the SAMBG amongst the range of guidelines and standards was drawn. The research indicated that there is alignment between the SAMBG, IFC PS6, the Aichi Biodiversity Targets and South African national environmental legislation.
They all aim to achieve a similar outcome, the conservation and sustainable use of biodiversity, but provide different levels of detail and are targeted at slightly different audiences.
APA, Harvard, Vancouver, ISO, and other styles
41

Naidoo, Sugandren. "A framework for the integration of management systems in organisations." Thesis, 2017. http://hdl.handle.net/10500/26432.

Full text
Abstract:
During the last decade, the integration of management systems (this includes any management system used to achieve the goals of an organisation, for example PASCAL, ISO standards and enterprise resource planning) has become an increasingly important strategy adopted by organisations, as it represents an alternative to operating with multiple management systems in parallel (Abad, Cabrera & Medina, 2014:860). Despite the established need for the integration of management systems, research on how to carry out integration has yet to be developed fully, and an elaborated methodology of integration needs full realisation (Bernardo, Casadesús, Karapetrovic & Heras, 2012; Rocha, Searcy & Karapetrovic, 2007; Wilkinson & Dale, 1999a; Zeng, Shi & Lou, 2007). The aim of the current study was to develop a framework that organisations could use to integrate management systems in a structured manner. This study was undertaken by exploring the views and opinions of senior management through fourteen face-to-face semi-structured interviews. Thereafter, an online survey collected 220 responses from four South African multinational organisations involved with management system development and implementation. The research instrument used a seven-point Likert-type scale for the respondents to rate each question. The data was analysed statistically, primarily using factor analysis to confirm the significant factors and then structural equation modelling to test the relationships between the factors, which ultimately confirmed the developed framework. The beneficiaries of this research are primarily organisations that have three or more management systems. The framework will also be valuable to management in industry and to policymakers, since it addresses key integration issues such as employee performance, organisational culture, employee motivation and policy as factors when considering the integration of management systems.
Business Management
D.B.L.
APA, Harvard, Vancouver, ISO, and other styles
