Dissertations / Theses on the topic 'Crossed squares'

To see the other types of publications on this topic, follow the link: Crossed squares.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Crossed squares.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

PIZZAMIGLIO, LINDA. "Cohomologies of crossed modules." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/50169.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Britton, Michael C. "Practical square cross-section helical antennas." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0001/MQ43337.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Britton, Michael C. (Michael Charles). "Practical square cross-section helical antennas." Carleton University Dissertation, Engineering Electronics. Ottawa, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Skoglund, Ingegerd. "Algorithms for a Partially Regularized Least Squares Problem." Licentiate thesis, Linköping : Linköpings universitet, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wegelin, Jacob A. "Latent models for cross-covariance." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/8982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Onipede, Bolarinwa O. "Design of a cross section reduction extrusion tool for square bars." Texas A&M University, 2005. http://hdl.handle.net/1969.1/4880.

Full text
Abstract:
The objective of this project is to design a tool for moderate cross-section reduction of bars that are deformed within a channel slider tool used for equal channel angular extrusion (ECAE). The bars deformed via ECAE have an initial square cross section with a nominal area of 1.00 in² and aspect ratios (length/width) ranging between 4 and 6. A systems engineering design methodology is used to generate a top-down approach to the development of the tool's design. This includes defining a need statement, namely the "need for an area reduction extrusion tool to replace the current practice of machining ECAE-processed billets". The system functions and requirements are defined next and used to generate three concepts, which are compared to select the winning concept for further refinement. Major components of the selected tool are a container, a ram, a base plate, a punch plate, four die inserts, four wedges and four flange locks. For materials that can be processed by this tool, such as copper (C10100) and aluminum (Al6061-T6), the upper bound extrusion pressure, derived by limit analysis, is set at 192 ksi. The upper bound extrusion pressure is constrained by the buckling limit of the ram, which is 202 ksi. The maximum wall stress experienced by the container is 113 ksi. For rams with the same cross section and dimensions, fixed end conditions support larger buckling loads than other end conditions such as rounded or rounded-fixed ends. With the application of the upper bound method, an increase in the extrusion ratio of the tool causes a corresponding rise in the optimal cone angle of the die, which in turn translates into a rise in the extrusion pressure.
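The comparison of end conditions can be read off the classical Euler buckling relation, quoted below for orientation only (a textbook formula with standard effective-length factors, not a derivation or values from the thesis):

```latex
P_{\mathrm{cr}} = \frac{\pi^{2} E I}{(K L)^{2}},
\qquad K \approx 1.0 \ \text{(pinned-pinned)}, \quad
K \approx 0.7 \ \text{(fixed-pinned)}, \quad
K \approx 0.5 \ \text{(fixed-fixed)}
```

Because fixed-fixed ends halve the effective length KL, they raise the critical load by roughly a factor of four relative to pinned ends, consistent with the abstract's remark that fixed end conditions support the largest buckling loads.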
APA, Harvard, Vancouver, ISO, and other styles
7

Leong, Wa-Un Alexis. "A study of aerodynamic and mechanical interference effects between two neighbouring square towers." Thesis, University of Glasgow, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311865.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kircher, Andrew J. "Estimation of the Squared Population Cross-Validity Under Conditions of Predictor Selection." TopSCHOLAR®, 2015. http://digitalcommons.wku.edu/theses/1472.

Full text
Abstract:
The current study employed a Monte Carlo design to examine whether sample-based and formula-based estimates of cross-validated R² differ in accuracy when predictor selection is and is not performed. Analyses were conducted on three datasets with 5, 10, or 15 predictors and different predictor-criterion relationships. Results demonstrated that, in most cases, a formula-based estimate of the cross-validated R² was as accurate as a sample-based estimate. The one exception was the five-predictor case, wherein the formula-based estimate exhibited substantially greater bias than the estimate from a sample-based cross-validation study. Thus, formula-based estimates, which have an enormous practical advantage over a two-sample cross-validation study, can be used in most cases without fear of greater error.
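As a rough illustration of the comparison described above, the sketch below contrasts a split-sample estimate of the cross-validated R² with a simple formula-based shrinkage estimate; a Wherry-type adjustment is used as a stand-in for the formula evaluated in the thesis, and the data and variable names are invented for illustration:

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n, p = 200, 5                                   # sample size, number of predictors
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(scale=2.0, size=n)

def r_squared(X, y, beta):
    resid = y - X @ beta
    return 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)

# Sample-based estimate: fit on one half of the data, validate on the other half.
half = n // 2
beta_fit, *_ = lstsq(X[:half], y[:half], rcond=None)
r2_cv_sample = r_squared(X[half:], y[half:], beta_fit)

# Formula-based estimate: shrink the full-sample R^2 (Wherry-type adjustment).
beta_full, *_ = lstsq(X, y, rcond=None)
r2_full = r_squared(X, y, beta_full)
r2_cv_formula = 1.0 - (1.0 - r2_full) * (n - 1) / (n - p - 1)

print(f"split-sample R^2: {r2_cv_sample:.3f}   formula-based R^2: {r2_cv_formula:.3f}")
```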
APA, Harvard, Vancouver, ISO, and other styles
9

Fernandes, Diogo. "Low-cost implementation techniques for generic square and cross M-QAM constellations." Universidade Federal de Juiz de Fora, 2015. https://repositorio.ufjf.br/jspui/handle/ufjf/1555.

Full text
Abstract:
CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico
This work aims at introducing techniques with reduced computational complexity for the hardware implementation of high-order M-ary quadrature amplitude modulation (M-QAM), which may be feasible for broadband communication systems. The proposed techniques cover square and cross M-QAM constellations (even and odd numbers of bits), a hard decision rule, and the derivation of low-order M-QAM constellations from high-order ones. Performance analysis, in terms of bit error rate (BER), is carried out when the M-QAM symbols are corrupted by either additive white Gaussian noise (AWGN) or additive impulsive Gaussian noise (AIGN). The BER results show that the performance loss of the proposed techniques is, on average, less than 1 dB, which is a remarkable result. Additionally, the implementation of the proposed techniques in a field programmable gate array (FPGA) device is described and analysed. The FPGA results show that the proposed techniques can considerably reduce hardware resource utilization compared with techniques in the literature. A remarkable reduction in hardware resource utilization is achieved by using the generic M-QAM technique in comparison with the enhanced heuristic decision rule (HDR) technique and a previously designed technique, the HDR technique. Based on the analyses performed, the enhanced HDR technique is less complex than the HDR technique. Finally, the numerical results show that the generic M-QAM technique can be eight times faster than the other two techniques when a large number of M-QAM symbols (e.g., > 1000) are consecutively transmitted.
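A minimal sketch of the hard-decision step for a square M-QAM constellation is given below; it assumes symbols on the usual odd-integer grid and leaves out Gray mapping, cross constellations and every FPGA consideration, so it is only a simplified stand-in for the techniques proposed in the thesis:

```python
import numpy as np

def qam_hard_decision(received, m):
    """Hard-decision slicing for a square M-QAM constellation.

    Symbols are assumed to lie on the grid {-(sqrt(M)-1), ..., -3, -1, 1, 3, ..., sqrt(M)-1}
    on both the in-phase and quadrature axes (no Gray mapping shown).
    """
    max_level = int(np.sqrt(m)) - 1
    def slice_axis(x):
        s = 2 * np.floor(x / 2) + 1          # nearest odd integer
        return np.clip(s, -max_level, max_level)
    return slice_axis(received.real) + 1j * slice_axis(received.imag)

# Example: 16-QAM symbols corrupted by AWGN
rng = np.random.default_rng(1)
grid = np.array([-3.0, -1.0, 1.0, 3.0])
tx = rng.choice(grid, 1000) + 1j * rng.choice(grid, 1000)
rx = tx + 0.3 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
ser = np.mean(qam_hard_decision(rx, 16) != tx)
print(f"symbol error rate: {ser:.4f}")
```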
APA, Harvard, Vancouver, ISO, and other styles
10

Sudarsan, Rangarajan. "Numerical investigation of shear-driven flow in a toroid of square cross-section." Diss., The University of Arizona, 2001. http://hdl.handle.net/10150/279918.

Full text
Abstract:
A numerical investigation has been performed of the 3-D flow of an incompressible fluid in a torus-shaped enclosure of square cross-section, where the fluid motion is induced by sliding the top wall of the enclosure radially outwards. The flow in this geometry is characterized by two non-dimensional numbers, the curvature ratio (δ = d/Rc) and the Reynolds number (Re = u_wall·d/ν), where Rc is the radius of curvature of the torus at the center of the cavity, d is the side length of the enclosure cross-section and u_wall is the velocity of the top wall of the enclosure. Calculations were performed for 3-D flow in an almost straight enclosure with δ = 0.005 at Re = 3200 and a strongly curved one with δ = 0.25 at Re = 2400. The 3-D flow was computed by choosing a small sector of the torus and applying periodic boundary conditions along the circumferential boundary. The 3-D flow calculations were started with axisymmetric flow as the initial condition and perturbed by a small random disturbance to seed the centrifugal instability into the flow. Integral quantities defined using different components of the vorticity were monitored at different cross-sectional planes to study the development and dynamics of the 3-D flow. A volume visualization technique was used to visualize r-vorticity and θ-vorticity contours throughout the computational domain. The 3-D flow calculated for both cases, δ = 0.005 and 0.25, shows span-wise vortices, also called Taylor-Görtler-like vortices. These vortices, while being convected around by the primary recirculating flow in the torus cross-section, experience span-wise oscillation resulting from a secondary instability, accompanied by growth and collapse in size. The net effect of this dynamics is a periodic rearrangement of the vortices when viewed along the circumferential span. Volume visualization of r-vorticity contours shows the existence of two pairs of vortices wrapped around each other as they are convected around by the primary recirculating flow. The dynamics that induce the periodic rearrangement have been explained from volume visualization of the vorticity components. "Vortex tilting" of the θ-component of vorticity is identified as a mechanism explaining the interaction of the primary recirculating flow with the span-wise vortices present.
APA, Harvard, Vancouver, ISO, and other styles
11

Sanchez, Benito. "Two essays on the predictability of asset prices: "Benchmarking problems and long horizon abnormal returns" and, "Low R square in the cross section of expected returns"." ScholarWorks@UNO, 2007. http://scholarworks.uno.edu/td/1080.

Full text
Abstract:
This dissertation consists of two essays on the predictability of asset prices: "Benchmarking problems and long horizon abnormal returns" and "Low R-square in the cross section of expected returns". Long-run abnormal returns following Initial Public Offerings (IPOs), Seasoned Equity Offerings (SEOs) and other firm-level events are well documented in the finance literature. These findings are difficult to reconcile with an efficient-markets world. I examine the seriousness of potential benchmarking errors in the measurement of abnormal returns. I find that the simpler, more parsimonious models perform better in practice and that excess performance is not predictable regardless of the asset pricing model. Thus, the long-run underperformance following SEOs found in the literature is consistent with market efficiency because excess performance itself is not predictable. In the other essay, "Low R-square in the cross section of expected returns", I examine the "low R-square" phenomenon observed in the literature. The CAPM predicts an exact linear relationship between returns and betas (the security market line). This means that estimated time-series betas for firms should be related to firms' future returns. However, the estimated betas have almost no relationship with future returns: the cross-sectional R² values are surprisingly low (3% on average), while time-series R² values are higher (around 30% on average). I develop a simple asset pricing model that explains this phenomenon. Even in a perfect world where there are no errors in the benchmark measurement or in the estimation of the price of market risk, the difference in R-squares can be quite large owing to the difference in variance between the "market" and average returns. I document that market variance exceeds the variance of average returns, with few exceptions, over the last 74 years.
APA, Harvard, Vancouver, ISO, and other styles
12

Di Micco, Davide. "An intrinsic approach to the non-abelian tensor product." Doctoral thesis, Università degli Studi di Milano, 2020. http://hdl.handle.net/2434/703934.

Full text
Abstract:
The notion of a non-abelian tensor product of groups first appeared in a paper where Brown and Loday generalised a theorem on CW-complexes by using the new notion of non-abelian tensor product of two groups acting on each other, instead of the usual tensor product of abelian groups. In particular, they took two groups acting on each other and they defined their non-abelian tensor product via an explicit presentation. This led to the development of an algebraic theory based on this construction. Many results were obtained treating the properties which are satisfied by this non-abelian tensor product as well as some explicit calculations in particular classes of groups. In order to state many of their results regarding this tensor product, Brown and Loday needed to require, as an additional condition, that the two groups M and N acted on each other compatibly: these amount to the existence of a group L and of two crossed modules structures of M and N on L such that the original actions are induced from these crossed module structures. Furthermore, they proved that the non-abelian tensor product is part of a so-called crossed square of groups: this particular crossed square is the pushout of a specific diagram in the category of crossed squares of groups. Note that crossed squares are a 2-dimensional version of crossed modules of groups. Following the idea of generalising the algebraic theory arising from the study of the non-abelian tensor product of groups, Ellis gave a definition of non-abelian tensor product of Lie algebras, and obtained similar results. Further generalisations have been studied in the contexts of Leibniz algebras, restricted Lie algebras, Lie-Rinehart algebras, Hom-Lie algebras, Hom-Leibniz algebras, Hom-Lie-Rinehart algebras, Lie superalgebras and restricted Lie superalgebras. The aim of our work is to build a general version of non-abelian tensor product, having the specific definitions in the categories of groups and Lie algebras as particular instances. In order to do so we first extend the concept of a pair of compatible actions (introduced in the case of groups by Brown and Loday and in the case of Lie algebras by Ellis) to semi-abelian categories. This is indeed the most general environment in which we are able to talk about actions, due to the concept of internal actions. In this general context, we give a diagrammatic definition of the compatibility conditions for internal actions, which specialises to the particular definitions known for groups and Lie algebras. We then give a new construction of the Peiffer product in this setting and we use these tools to show that in any semi-abelian category satisfying the "Smith-is-Huq" condition, asking that two actions are compatible is the same as requiring that these actions are induced from a pair of internal crossed modules over a common base object. Thanks to this equivalence, in order to deal with the generalisation to the semi-abelian context of the non-abelian tensor product, we are able to use a pair of internal crossed modules over a common base object instead of a pair of compatible internal actions, whose formalism is far more intricate. Now we fix a semi-abelian category A satisfying "Smith-is-Huq" and we show that, for each pair of internal L-crossed modules, it is possible to construct an internal crossed square which is the pushout (in the category of crossed squares) of the general version of the diagram used by Brown and Loday in the groups case. 
The non-abelian tensor product is then defined as a piece of this internal crossed square. We show that if A is the category of groups or the category of Lie algebras, this general construction coincides with the specific notions of non-abelian tensor products already known in these settings. We construct an L-crossed module structure on this non-abelian tensor product, some additional universal properties are shown, and by using these we prove that this tensor product is a bifunctor. Once we have the non-abelian tensor product among our tools, we are also able to state the new definition of "weak crossed square": the idea behind this is to generalise the explicit presentations of crossed squares known for groups and for Lie algebras. These equivalent definitions, which (contrary to the semi-abelian one) do not rely on the formalism of internal groupoids but include some set-theoretic constructions, are shown to be equivalent to the implicit ones, where, by definition, crossed squares are crossed modules of crossed modules and hence normalisations of double groupoids. Our idea is to give an alternative explicit description of crossed squares of groups (resp. Lie algebras) using the non-abelian tensor product, so that it no longer involves the so-called crossed pairing (resp. Lie pairing), which is not a morphism in the base category but only a set-theoretic function; in its place we use a morphism from the non-abelian tensor product, which is more suitable for generalisations. Doing so, the explicit definitions can be summarised by saying that a crossed square is a commutative square of crossed modules, compatible with an additional crossed module structure on the diagonal, and endowed with a morphism out of the non-abelian tensor product. Our definition of weak crossed squares is based on that of the non-abelian tensor product and plays the role of the explicit version of the definition of internal crossed squares: in particular, we prove that it restricts to the explicit definitions for groups and Lie algebras and hence that in these cases weak crossed squares are equivalent to crossed squares. So far we have shown that any internal crossed square is automatically a weak crossed square, but we are currently missing precise conditions on the base category under which the converse is true: this means that any internal crossed square can be described explicitly as a particular weak crossed square, but this is not a complete characterisation. In order to give a direct application of our non-abelian tensor product construction, we focus on universal central extensions in the category of L-crossed modules: Casas and Van der Linden studied the theory of universal central extensions in semi-abelian categories, using the general notion of central extension (with respect to a Birkhoff subcategory) given by Janelidze and Kelly. We are mainly interested in one of their results, namely that, given a Birkhoff subcategory B of a semi-abelian category X with enough projectives, an object of X is B-perfect if and only if it admits a universal B-central extension. Edalatzadeh considered the category of L-crossed modules of Lie algebras and crossed modules with vanishing aspherical commutator as the Birkhoff subcategory B.
Since the first one is not a semi-abelian category the existing theory does not apply in this situation: nevertheless he managed to prove the same result, and furthermore he gave an explicit construction of the universal B-central extensions by using the non-abelian tensor product of Lie algebras. Using our general definition of non-abelian tensor product of L-crossed modules as given in the third chapter, we are able to extend Edalatzadeh's results to the category of L-crossed modules in any semi-abelian category A satisfying the "Smith-is-Huq" condition: this is a useful application of the construction of the non-abelian tensor product, which again manages to express in this more general setting exactly the same properties as in its known particular instances. Furthermore, taking the subcategory of abelian objects as Birkhoff subcategory of the category of crossed modules in A, we are able to show that, whenever the category A has enough projectives, our generalisation of Edalatzadeh's work is partly a consequence of Casas' and Van der Linden's theorem, reframing Edalatzadeh's result within the standard theory of universal central extensions in the semi-abelian context. There are two non-trivial consequences of this fact. First of all, besides the existence of the universal B-central extension for each B-perfect crossed module in A, we are also able to give its explicit construction by using the non-abelian tensor product: notice that this construction is completely unrelated to what has been done by Casas and Van der Linden. Secondly, this construction of universal B-central extensions is valid even when A does not have enough projectives, whereas within the general theory this is a key requirement for the result to hold.
APA, Harvard, Vancouver, ISO, and other styles
13

Harite, Shibani. "Evaluation of 10-fold cross validation and prediction error sums of squares statistic for population pharmacokinetic model validation." Scholarly Commons, 2003. https://scholarlycommons.pacific.edu/uop_etds/585.

Full text
Abstract:
The objective of the current study was to evaluate the ability of 10-fold cross-validation and the prediction error sum of squares (PRESS) statistic to distinguish population pharmacokinetic models (PPKMs) estimated from data without influence observations from PPKMs estimated from data containing influence observations. The evaluation of 10-fold cross-validation and the PRESS statistic from leave-one-out cross-validation for PPK model validation was performed in three phases. In Phase I, model parameters (theta and clearance) were estimated for datasets with and without influence observations; influence observations were found to cause an over-estimation of the model parameters. In Phase II, the statistics from the 10-fold and leave-one-out cross-validation methods were used to detect models developed from influence data. The metrics of choice are the RATIOK and RATIOPR statistics, which can be used to identify models developed from influence data and may therefore be applicable across differing drugs and models. A cut-off value of 1.05 for RATIOK and RATIOPR was proposed as a discrete breakpoint to classify models generated from influence data versus non-influence data. In Phase III, data analysis was carried out using logistic regression, and the sensitivity and specificity of the leave-one-out and 10-fold cross-validation methods were evaluated. RATIOK and RATIOPR were significant predictors when used individually in the model; multicollinearity was detected when both were present in the model at the same time. In terms of sensitivity and specificity, 10-fold cross-validation and leave-one-out cross-validation showed similar performance.
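For orientation only, the PRESS statistic in the familiar ordinary least squares setting can be computed without refitting by using the leverage identity for leave-one-out residuals; this sketch is an analogy with invented data, not the population pharmacokinetic workflow evaluated in the thesis:

```python
import numpy as np

def press_statistic(X, y):
    """Prediction error sum of squares (PRESS) for ordinary least squares.

    Uses the leverage identity: the leave-one-out residual equals the
    ordinary residual divided by (1 - h_ii), so no refitting is needed.
    """
    X1 = np.column_stack([np.ones(len(y)), X])   # design matrix with intercept
    hat = X1 @ np.linalg.pinv(X1)                # hat matrix H = X (X'X)^-1 X'
    leverage = np.diag(hat)
    residuals = y - hat @ y
    loo_residuals = residuals / (1.0 - leverage)
    return float(np.sum(loo_residuals**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -0.5, 2.0]) + rng.normal(size=100)
print(f"PRESS = {press_statistic(X, y):.2f}")
```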
APA, Harvard, Vancouver, ISO, and other styles
14

Mertens, Bart Josepha August. "Efficient cross-validatory computations and influence measures for principal component and partial least squares decompositions with applications in chemometrics." Thesis, University College London (University of London), 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Kuran, Sermet. "Fluidelastic stability of a rotated square array with multiple flexible cylinders subject to cross-flow." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=70262.

Full text
Abstract:
Over the past decade, theoretical investigations have revealed the possible existence of two distinct mechanisms instrumental in causing fluidelastic instability of cylinder arrays subjected to fluid cross-flow: a fluid-damping controlled mechanism requiring only a single degree of freedom, and a fluid-stiffness controlled mechanism requiring a system with two or more degrees of freedom. As yet, the existence of these mechanisms has not been verified experimentally, and some researchers tend to neglect one or the other of them in their theoretical studies.
In this thesis, with the objective of obtaining further insight into the nature of fluidelastic instability mechanisms, experimental and theoretical studies have been performed on a rotated square array with P/d = 2.12. Previous theoretical and experimental studies on this array have established that a single flexible cylinder, in an otherwise rigid array, is fluidelastically stable. However, multiple-flexible-cylinder dynamic (vibration) experiments undertaken in this study show that fluidelastic instability develops when the array incorporates three or more flexible cylinders. This result verifies the duality of the instability mechanisms and suggests that the cylinder motion in the present array is dominated by the fluid-stiffness controlled mechanism, rather than the fluid-damping controlled mechanism.
Detailed dynamic (vibration) experiments have been undertaken to elucidate the effect of various parameters, such as the number of cylinders, cylinder position, cylinder mass, frequency detuning and fluidelastic coupling, on the instability threshold of this array, in which the fluid-stiffness controlled mechanism prevails. It has been determined that varying the mechanical damping has a small effect on the critical velocity, whereas varying the cylinder mass generates relatively large changes in the critical velocity. A "Connors type" instability equation, or versions of it, is shown not to be applicable to this array (a generic form of this criterion is sketched after the abstract), mainly because of the strong dependence of the mass exponent on the actual value of the non-dimensional mass.
Frequency detuning of adjacent cylinders is also shown to have a significant effect on the critical velocity. Further dynamic (vibration) experiments revealed the co-existence of dynamic and static instabilities within close proximity to each other. It was possible to switch from one type of instability to the other, by varying one, or more, of the mechanical properties of the flexible cylinders.
Next, the time-averaged fluid forces acting on static cylinders were measured as functions of the monitored and surrounding cylinder displacements at different Reynolds numbers, to attain a physical understanding of the flow pattern in the array. The results complemented and verified the various dynamic and static instability findings of the vibration (dynamic) experiments.
Finally, the fluid forces were incorporated in a quasi-steady, multiple degree-of-freedom model for comparison with experimental results.
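"Connors type" stability criteria are generally written in the form sketched below, which relates the critical reduced velocity to the mass-damping parameter; b is the mass exponent discussed in the abstract, and K and b are empirical constants, with no values from the thesis implied:

```latex
\frac{U_{c}}{f_{n}\, d} = K \left( \frac{m\,\delta}{\rho\, d^{2}} \right)^{b}
```

Here U_c is the critical flow velocity, f_n the cylinder natural frequency, d the cylinder diameter, m the mass per unit length, δ the logarithmic decrement of damping and ρ the fluid density.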
APA, Harvard, Vancouver, ISO, and other styles
16

MARTINS, VICTOR KAMINSKI. "SIMULATION OF A TURBULENT FLOW IN A SQUARE CROSS-SECTION, USING THE REYNOLDS STRESS MODEL." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1994. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=18650@1.

Full text
Abstract:
The two-equation k-ε model, widely employed in the analysis of turbulent flows, is not capable of adequately modelling problems involving secondary and swirling flows in ducts, boundary-layer detachment and other situations in which the inherent anisotropy of turbulent flows must be taken into account. More complex models that take this anisotropy into account - the so-called Reynolds-stress models - are employed with the purpose of producing numerical results closer to those obtained experimentally. A geometrically simple problem, the hydrodynamically developed turbulent flow in a duct of square cross-section, in which the presence of secondary flows has been observed experimentally and documented by several authors, was modelled and solved using the Finite Volume Method. Initially, the k-ε model was implemented and proved incapable of predicting, due to its isotropic nature, the secondary flows in a duct cross-section. The Reynolds-stress model was then implemented. This model is validated by comparing the numerical results obtained with experimental and numerical results found in the literature.
APA, Harvard, Vancouver, ISO, and other styles
17

Alishahi, Reza. "Behaviour of CFRP Confined Reinforced Concrete Columns with Square Cross Section under Eccentric Compressive Loading." Thesis, Curtin University, 2018. http://hdl.handle.net/20.500.11937/70500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Badowski, Tomasz [Verfasser]. "Adaptive importance sampling via minimization of estimators of cross-entropy, mean square, and inefficiency constant / Tomasz Badowski." Berlin : Freie Universität Berlin, 2016. http://d-nb.info/1111558868/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Shrestha, Shruti. "A measurement of Z(νν̄)γ cross section and limits on anomalous triple gauge couplings at √s = 7 TeV using CMS." Diss., Kansas State University, 2013. http://hdl.handle.net/2097/15316.

Full text
Abstract:
Doctor of Philosophy
Department of Physics
Yurii Maravin
In this thesis, the first measurement of the Z(νν̄)γ cross section in pp collisions at √s = 7 TeV is presented, using data collected by the CMS detector. The measured cross section is 21.3 ± 4.2 (stat.) ± 4.3 (syst.) ± 0.5 (lumi.) fb. This measurement is based on the observation of events with missing transverse energy in excess of 130 GeV and a photon in the rapidity range |η| < 1.44 with transverse momentum in excess of 145 GeV, in a data sample corresponding to an integrated luminosity of 5 fb⁻¹. The measured cross section is in good agreement with the theoretical prediction of 21.9 ± 1.1 fb from BAUR. Further, neutral triple gauge couplings involving Z bosons and photons have been studied. No evidence for the presence of such couplings is observed, in agreement with the predictions of the standard model. We set the most stringent limits to date on these triple gauge couplings.
APA, Harvard, Vancouver, ISO, and other styles
20

Rosales, Jorge Luis. "A numerical investigation of the convective heat transfer in confined channel flow past cylinders of square cross-section." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/289081.

Full text
Abstract:
A numerical investigation was conducted to analyze the unsteady flow and heat transfer characteristics of cylinders of square cross-section in a laminar channel flow. The study focuses on differences in the drag, lift, and heat transfer coefficients for a single cylinder and a tandem pair of cylinders due to the proximity of a channel wall. Both uniform and parabolic inlet velocity profiles are considered. The cases are calculated for a fixed cylinder Reynolds number of 500 and a Prandtl number of 0.7. The heated cylinder is held at a constant temperature and is initially centered in the channel. The eddy promoter has side dimensions one-half of those of the downstream cylinder and is placed at a fixed distance upstream. The upstream cylinder is either located midway between the top and bottom cylinder surfaces (inline) or centered on the top or bottom edges (offset) of the primary cylinder. The results show that the cylinder Nusselt number decreases for both single and inline tandem cylinders as they approach the wall in a parabolic flow but remains almost constant in a uniform flow. This is primarily due to the reduced mean velocity near the wall. The time-averaged drag coefficient decreases for both single and inline tandem cylinders as they approach the wall in a parabolic flow. The presence of the upstream cylinder significantly reduces the drag on the downstream cylinder when compared to that of a single cylinder but has little effect on the cylinder lift. Additionally, the overall cylinder Nusselt number increases slightly in both uniform and parabolic flows. The Strouhal number is much larger for an inline tandem pair than for a single cylinder at all cylinder positions. The amplitude of eddy-shedding-induced oscillations is significantly dampened as the cylinder(s) approach the channel wall. Offsetting the eddy promoter causes a significant reduction in the heat transfer and a large increase in the drag coefficient for the channel-centered cylinder when compared to the inline tandem case. The offset cylinder is found to slightly reduce the overall heat transfer and increase the drag of the downstream heated cylinder for the other two cross-stream locations. The study also indicates that placing the eddy promoter in higher-velocity fluid increases the Strouhal number of the downstream cylinder.
APA, Harvard, Vancouver, ISO, and other styles
21

Vowels, Matthew James. "THE APPLICATION OF SPECTRAL AND CROSS-SPECTRAL ANALYSIS TO SOCIAL SCIENCES DATA." UKnowledge, 2018. https://uknowledge.uky.edu/hes_etds/58.

Full text
Abstract:
The primary goal of this paper is to demonstrate the application of a relatively esoteric and interdisciplinary technique, called spectral analysis, to dyadic social sciences data. Spectral analysis is an analytical and statistical technique, commonly used in engineering, that allows time series data to be analyzed for the presence of significant regular/periodic fluctuations or oscillations. These periodic fluctuations are reflected in the frequency domain as amplitude or energy peaks at certain frequencies. Furthermore, a magnitude squared coherence analysis may be used to interrogate more than one time series concurrently in order to establish the degree of frequency-domain correlation between the two series, as well as to establish the phase (lead/lag) relationship between the coherent frequency components. In order to demonstrate the application of spectral analysis, the current study utilizes a secondary dyadic dataset comprising 30 daily reports of perceived sexual desire for 65 couples. The secondary goal of this paper is to establish (a) whether there is significant periodic fluctuation in perceived levels of sexual desire for men and/or women, and at which specific frequencies; (b) how much correlation, or "cross-spectral coherence", there is between partners' sexual desire within the dyads; and (c) what the phase lead/lag relationship is between the partners at any of the identified frequency components. Sexual desire was found to have significant periodic components for both men and women, with a fluctuation of once per month being the most common frequency component across the groups of individuals under analysis. Mathematical models are presented in order to describe and illustrate these principal fluctuations. Partners in couples, on average, were found to fluctuate together at a number of identified frequencies, and the phase lead/lag relationships of these frequencies are presented.
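A minimal synthetic sketch of a magnitude squared coherence analysis of two daily series is given below; the monthly period, the three-day lag and the series length are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1.0                              # sampling rate: one report per day
days = np.arange(360)                 # roughly a year of daily reports

# Two partners sharing a roughly monthly fluctuation plus individual noise
shared = np.sin(2 * np.pi * days / 30.0)
partner_a = shared + 0.8 * rng.normal(size=days.size)
partner_b = 0.7 * np.roll(shared, 3) + 0.8 * rng.normal(size=days.size)   # 3-day lag

# Magnitude squared coherence between the two series
f, cxy = signal.coherence(partner_a, partner_b, fs=fs, nperseg=120)
idx = np.argmax(cxy[1:]) + 1          # skip the zero-frequency bin
print(f"strongest coherence near a {1.0 / f[idx]:.0f}-day period (coherence {cxy[idx]:.2f})")
```

The cross-spectral phase at the coherent frequency (for example from scipy.signal.csd) would then give the lead/lag relationship discussed above.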
APA, Harvard, Vancouver, ISO, and other styles
22

Everaerts, Pieter Bruno Bart. "W cross section measurement in the electron channel in pp collisions at √s = 7 TeV." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68870.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 165-172).
From March until November 2010 the Compact Muon Solenoid experiment recorded 36 pb⁻¹ of pp collisions at √s = 7 TeV. One of the first precision tests of the Standard Model that can be performed with this data is the measurement of the W-production cross section and of the charge asymmetry in the cross section. In this thesis, both measurements are performed in the electron decay channel. The results obtained are: σ(W → eν) = 10.48 ± 0.03 (stat.) ± 0.15 (syst.) ± 0.09 (th.) ± 0.42 (lumi.) nb and σ(W⁺ → e⁺ν)/σ(W⁻ → e⁻ν̄) = 1.430 ± 0.008 (stat.) ± 0.022 (syst.) ± 0.029 (th.). The measurements agree with state-of-the-art NNLO QCD calculations with the latest parton distribution functions.
by Pieter Bruno Bart Everaerts.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
23

Ashby, Shaun Francis. "A study of the process e⁺e⁻ → μ⁺μ⁻(γ) at √s." Thesis, University of Birmingham, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Bull, James. "Application of Quantum Mechanics to Fundamental Interactions in Chemical Physics: Studies of Atom-Molecule and Ion-Molecule Interactions Under Single-Collision Conditions: Crossed Molecular Beams; Single-Crystal Mössbauer Spectroscopy: Microscopic Tensor Properties of ⁵⁷Fe Sites in Inorganic Ferrous High-Spin Compounds." Thesis, University of Canterbury. Department of Chemistry, 2010. http://hdl.handle.net/10092/4292.

Full text
Abstract:
As part of this project, and in preparation for future experimental studies of gas-phase ion-molecule reactions, extensive modification and characterization of the crossed molecular beam machine in the Department of Chemistry, University of Canterbury has been carried out. This instrument has been configured, and some preliminary testing completed, to enable the future study of gas-phase ion-molecule collisions of H⁺₃ and Y⁻ (Y = F, Cl, Br) with dipole-oriented CZ₃X (Z = H, F and X = F, Cl, Br). Theoretical calculations (ab initio and density functional theory) are reported on the previously experimentally characterized Na + CH₃NO₂, Na + CH₃NC, and K + CH₃NC systems, and on several other systems of relevance. All gas-phase experimental and theoretical studies share the common theme of studying the collision-orientation dependence of reaction under single-collision conditions. Experimental measurements, theoretical simulations and calculations are also reported on selected ferrous (Fe²⁺) high-spin (S=2) crystals, in an attempt to resolve the microscopic contributions to two fundamental macroscopic tensor properties - the electric-field gradient (efg) and the mean square displacement (msd) - in the case where more than one symmetry-related site of low local point-group symmetry contributes to the same quadrupole doublet. These determinations have been made using the nuclear spectroscopic technique of Mössbauer spectroscopy, complemented with X-ray crystallographic measurements.
APA, Harvard, Vancouver, ISO, and other styles
25

Wrona, Bozydar Adam. "Measurement of the W± boson cross section in the electron decay channel at √s = 7 TeV with the ATLAS detector." Thesis, University of Liverpool, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.569161.

Full text
Abstract:
At the LHC, the process pp → W±X followed by the leptonic decays W⁻ → e⁻ν̄ and W⁺ → e⁺ν is investigated to test the Standard Model in a completely new kinematic range. This thesis describes W± cross-section measurements using pp collisions recorded by the ATLAS detector in 2010. The charge dependence is measured both integrated and differentially in lepton pseudorapidity η, and an analysis of the systematic uncertainties is presented. The results are compared with a recent publication by ATLAS which uses different reconstruction and background estimations. The cross-sections are also compared with theoretical predictions based on recent PDF sets determined by the CTEQ, MSTW, ABKM, HERAPDF and JR groups. The values of the W± cross-sections and their respective uncertainties, for 35.1 pb⁻¹ at 7 TeV centre-of-mass energy, determined by this analysis are: σ_fid(W⁺) × BR(W⁺ → e⁺ν) = 2.907 ± 0.015 (stat.) ± 0.113 (syst.) ± 0.099 (lumi.) nb and σ_fid(W⁻) × BR(W⁻ → e⁻ν̄) = 1.913 ± 0.012 (stat.) ± 0.077 (syst.) ± 0.065 (lumi.) nb.
APA, Harvard, Vancouver, ISO, and other styles
26

Corliss, Ross (Ross Cameron). "W boson cross sections and single spin asymmetries in polarized proton-proton collisions at √s = 500 GeV at STAR." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/79258.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 2012.
Title as it appears in MIT Degrees Awarded booklet, September 2012: W production in polarized proton-proton collisions at 500 GeV at STAR. Cataloged from PDF version of thesis.
Includes bibliographical references (p. 167-169).
Understanding the structure of the proton is an ongoing effort in the particle physics community. Existing in the region of nonperturbative QCD, the various models for proton structure must be informed and constrained by experimental data. In 2009, the STAR experiment at Brookhaven National Lab recorded over 12 pb⁻¹ of data from polarized p+p collisions at 500 GeV center-of-mass energy provided by the RHIC accelerator. This has offered a first look at the spin-dependent production of W⁺(⁻) bosons, and hence at the spin-flavor structure of the proton, where the main production mode is through d̄+u (ū+d) annihilation. Using STAR's large Time Projection Chamber and its wide-acceptance electromagnetic calorimeters, it is possible to identify the e⁺ + ν (e⁻ + ν̄) decay mode of the W bosons produced. This thesis presents the first STAR measurement of charge-separated W production, both the pseudorapidity-dependent ratio and the longitudinal single-spin asymmetry. These results show good agreement with theoretical expectations, validating the methods used and paving the way for the analysis of larger datasets that will be available soon. In the near future the range of this measurement will be augmented with the Forward GEM Tracker. A discussion of the design and implementation of this upgrade is also included, along with projections for its impact.
by Ross Corliss.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
27

Hoffman, Alan Michael. "Longitudinal double-spin asymmetry and cross section for inclusive neutral pion production in polarized proton collisions at √s = 200 GeV." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53214.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Physics, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 123-126).
Twenty years of polarized lepton-nucleon scattering experiments have found that the contribution from quark spins (½ΔΣ) to the spin of the proton is only ~35%. This has led researchers to look elsewhere, specifically to the gluon spin (ΔG), for a large contribution to the proton spin. ΔG has been only loosely constrained by polarized DIS and SIDIS experiments. Polarized proton-proton collisions at RHIC provide sensitivity to ΔG through measurements of the longitudinal double-spin asymmetry, A_LL. This work presents a measurement of A_LL for inclusive π⁰ production in polarized proton-proton collisions using the STAR detector and data from RHIC Run 6. π⁰s are abundantly produced at mid-rapidity in proton-proton collisions, making them natural candidates for studies of ΔG. Novel techniques for reconstructing π⁰s at STAR are discussed, and a measurement of the unpolarized cross section is presented. Finally, the measured A_LL is compared to perturbative QCD predictions, and from this comparison constraints are placed on ΔG.
by Alan Michael Hoffman.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
28

Beirowski, Karin. "Cultural influences on attitudes toward aggression : a comparison between Spanish, Japanese and South African students." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53341.

Full text
Abstract:
Thesis (MA)--University of Stellenbosch, 2003.
ENGLISH ABSTRACT: The primary aim of the present study was to examine whether the culture of a society influences the way in which people justify certain aggressive behaviours in certain situations. A total of 756 students from Spain, Japan and South Africa completed the CAMA, a measure of the justification of aggression. The results showed significant differences between the countries in the levels of acceptance of certain acts. Further findings also indicated differences between the males of the three countries and between the females of the three countries. It was found that cultural influences and the norms within these countries bring about differences in the justification of aggression in different situations. There were also some general trends of acceptance, with direct and indirect verbal acts (e.g. sarcasm, hindering and shouting) being more acceptable than physical acts such as hitting, killing and torture. It is hoped that the present findings will make members of society more aware of their responsibility to help reduce aggressive acts by teaching and reinforcing norms against them. It is also hoped that the international community will gain better insight into the fact that South Africa faces unique challenges because of the political and social changes in the country.
APA, Harvard, Vancouver, ISO, and other styles
29

Boruvka, Audrey. "Data-driven estimation for Aalen's additive risk model." Thesis, Kingston, Ont. : [s.n.], 2007. http://hdl.handle.net/1974/489.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Davis, Brett Andrew. "Inference for Discrete Time Stochastic Processes using Aggregated Survey Data." The Australian National University, Faculty of Economics and Commerce, 2003. http://thesis.anu.edu.au./public/adt-ANU20040806.104137.

Full text
Abstract:
We consider a longitudinal system in which transitions between the states are governed by a discrete time finite state space stochastic process X. Our aim, using aggregated sample survey data of the form typically collected by official statistical agencies, is to undertake model based inference for the underlying process X. We will develop inferential techniques for continuing sample surveys of two distinct types. First, longitudinal surveys in which the same individuals are sampled in each cycle of the survey. Second, cross-sectional surveys which sample the same population in successive cycles but with no attempt to track particular individuals from one cycle to the next. Some of the basic results have appeared in Davis et al (2001) and Davis et al (2002).

Longitudinal surveys provide data in the form of transition frequencies between the states of X. In Chapter Two we develop a method for modelling and estimating the one-step transition probabilities in the case where X is a non-homogeneous Markov chain and transition frequencies are observed at unit time intervals. However, due to their expense, longitudinal surveys are typically conducted at widely, and sometimes irregularly, spaced time points. That is, the observable frequencies pertain to multi-step transitions. Continuing to assume the Markov property for X, in Chapter Three, we show that these multi-step transition frequencies can be stochastically interpolated to provide accurate estimates of the one-step transition probabilities of the underlying process. These estimates for a unit time increment can be used to calculate estimates of expected future occupation time, conditional on an individual's state at the initial point of observation, in the different states of X.

For reasons of cost, most statistical collections run by official agencies are cross-sectional sample surveys. The data observed from an on-going survey of this type are marginal frequencies in the states of X at a sequence of time points. In Chapter Four we develop a model based technique for estimating the marginal probabilities of X using data of this form. Note that, in contrast to the longitudinal case, the Markov assumption does not simplify inference based on marginal frequencies. The marginal probability estimates enable estimation of future occupation times (in each of the states of X) for an individual of unspecified initial state. However, in the applications of the technique that we discuss (see Sections 4.4 and 4.5) the estimated occupation times will be conditional on both gender and initial age of individuals.

The longitudinal data envisaged in Chapter Two is that obtained from the surveillance of the same sample in each cycle of an on-going survey. In practice, to preserve data quality it is necessary to control respondent burden using sample rotation. This is usually achieved using a mechanism known as rotation group sampling. In Chapter Five we consider the particular form of rotation group sampling used by the Australian Bureau of Statistics in their Monthly Labour Force Survey (from which official estimates of labour force participation rates are produced). We show that our approach to estimating the one-step transition probabilities of X from transition frequencies observed at incremental time intervals, developed in Chapter Two, can be modified to deal with data collected under this sample rotation scheme. Furthermore, we show that valid inference is possible even when the Markov property does not hold for the underlying process.
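As a toy illustration of the longitudinal case, the sketch below row-normalises observed transition counts into a one-step transition matrix and accumulates expected occupation times; it assumes a time-homogeneous chain and invented counts, unlike the non-homogeneous, aggregated-data models developed in the thesis:

```python
import numpy as np

# Observed one-step transition frequencies between three labour-force states
# (e.g. employed / unemployed / not in the labour force); counts are illustrative only.
counts = np.array([[900.0,  40.0,  60.0],
                   [120.0, 300.0,  80.0],
                   [ 50.0,  70.0, 880.0]])

# Maximum-likelihood estimate of the one-step transition matrix: row-normalise the counts.
P = counts / counts.sum(axis=1, keepdims=True)

# Expected occupation time in each state over the next T periods,
# conditional on the state occupied at the initial observation.
T = 12
occupation = sum(np.linalg.matrix_power(P, k) for k in range(1, T + 1))

print(np.round(P, 3))
print(np.round(occupation, 2))   # row i: expected periods spent in each state, starting from state i
```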
APA, Harvard, Vancouver, ISO, and other styles
31

Adcock, Mark R. A. "Symbolic computation of electron-proton to slepton-squark scattering cross sections based on a left-right supersymmetric extension of the standard model." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ44883.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Eckstein, Adric. "Development of Robust Correlation Algorithms for Image Velocimetry using Advanced Filtering." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/36338.

Full text
Abstract:
Digital Particle Image Velocimetry (DPIV) is a planar measurement technique to measure the velocity within a fluid by correlating the motion of flow tracers over a sequence of images recorded with a camera-laser system. Sophisticated digital processing algorithms are required to provide high enough accuracy for quantitative DPIV results. This study explores the potential of a variety of cross-correlation filters to improve the accuracy and robustness of the DPIV estimation. These techniques incorporate the Phase Transform (PHAT) Generalized Cross Correlation (GCC) filter applied to the image cross-correlation. The use of spatial windowing is subsequently examined and shown to be ideally suited to phase correlation estimators, due to their invariance to loss-of-correlation effects. The Robust Phase Correlation (RPC) estimator is introduced, coupling the phase correlation with spatial windowing. The RPC estimator additionally incorporates a spectral filter designed from an analytical decomposition of the DPIV Signal-to-Noise Ratio (SNR). This estimator is validated in a variety of artificial image simulations, the JPIV standard image project, and experimental images, which indicate reductions in error on the order of 50% when correlating low-SNR images. Two variations of the RPC estimator are also introduced: the Gaussian Transformed Phase Correlation (GTPC), designed to optimize the subpixel interpolation, and the Spectral Phase Correlation (SPC), which estimates the image shift directly from the phase content of the correlation. While these estimators are designed for DPIV, the methodology described here provides a universal framework for digital signal correlation analysis, which could be extended to a variety of other systems.
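The phase-only (PHAT-filtered) correlation idea can be illustrated with a short sketch that recovers an integer image shift; subpixel interpolation, spatial windowing and the RPC spectral filter described above are deliberately left out:

```python
import numpy as np

def phase_correlation_shift(frame_a, frame_b):
    """Estimate the integer pixel shift of frame_a relative to frame_b.

    The cross-power spectrum is normalised by its magnitude (a PHAT-style filter),
    so only phase information contributes to the correlation peak.
    """
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12            # phase-only normalisation
    correlation = np.real(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Indices beyond the midpoint correspond to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(5, -3), axis=(0, 1))                # displace the pattern by a known amount
print(phase_correlation_shift(b, a))                      # expected output: (5, -3)
```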
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
33

Foster, Martie. "Withholding tax on services : a square peg in a round hole? : an analysis of intra-group cross border services in the context of source, related transfer pricing principles and withholding taxes." Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/13146.

Full text
Abstract:
Includes bibliographical references.
Various countries have extended the levying of withholding taxes beyond the traditional withholding taxes on royalties, dividends and interest. Withholding taxes are now often levied on services such as management services, professional services, technical services, financial services, insurance services, fees, commission, advisory services and digital services, amongst others. The purpose of this paper is to consider the impact of these withholding taxes on certain services, in particular intra-group cross-border services, in the context of source and related transfer pricing principles.
APA, Harvard, Vancouver, ISO, and other styles
34

Tang, Tian. "Infrared Spectroscopy in Combination with Advanced Statistical Methods for Distinguishing Viral Infected Biological Cells." Digital Archive @ GSU, 2008. http://digitalarchive.gsu.edu/math_theses/59.

Full text
Abstract:
Fourier Transform Infrared (FTIR) microscopy is a sensitive method for detecting differences in the morphology of biological cells. In this study FTIR spectra were obtained for uninfected cells and for cells infected with two different viruses. The spectra obtained are difficult to discriminate visually. Here we apply advanced statistical methods to the analysis of the spectra, to test if such spectra are useful for diagnosing viral infections in cells. Logistic Regression (LR) and Partial Least Squares Regression (PLSR) were used to build models which allow us to diagnose if spectral differences are related to the infection state of the cells. A three-fold, balanced cross-validation method was applied to estimate the shrinkages of the area under the receiver operating characteristic curve (AUC), and specificities at sensitivities of 95%, 90% and 80%. AUC, sensitivity and specificity were used to gauge the goodness of the discrimination methods. Our statistical results show that the spectra associated with different cellular states are very effectively discriminated. We also find that the overall performance of PLSR is better than that of LR, especially for new data validation. Our analysis supports the idea that FTIR microscopy is a useful tool for detection of viral infections in biological cells.
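The comparison described (logistic regression versus PLS regression scored by cross-validated AUC) can be sketched with scikit-learn roughly as below; the spectra here are synthetic stand-ins and the fold scheme is an ordinary stratified three-fold split rather than the balanced design of the thesis.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for FTIR spectra: 60 cells x 200 wavenumbers, binary label
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))
y = rng.integers(0, 2, size=60)
X[y == 1, :20] += 0.8                     # inject a weak class difference

auc_lr, auc_pls = [], []
for train, test in StratifiedKFold(n_splits=3, shuffle=True, random_state=1).split(X, y):
    lr = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    auc_lr.append(roc_auc_score(y[test], lr.predict_proba(X[test])[:, 1]))

    pls = PLSRegression(n_components=5).fit(X[train], y[train])
    auc_pls.append(roc_auc_score(y[test], pls.predict(X[test]).ravel()))

print("LR  mean cross-validated AUC:", np.mean(auc_lr))
print("PLS mean cross-validated AUC:", np.mean(auc_pls))
```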
APA, Harvard, Vancouver, ISO, and other styles
35

Andriatis, Alexander. "Generator-level acceptance for the measurement of the inclusive cross section of W-boson and Z-boson production in pp collisions at [square root of] s = 5 TeV with the CMS detector at the LHC." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115668.

Full text
Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Physics, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 39-42).
The inclusive cross section of vector boson production in proton-proton collisions is one of the key measurements for constraining the Standard Model and an important part of the physics program at the LHC. Measurement of the inclusive cross section requires calculating the detector acceptance of the decay products. The acceptance of the CMS detector for leptonic decays of W and Z bosons produced in pp collisions at √s = 5 TeV is calculated using Monte Carlo event simulation. Statistical and systematic uncertainties on the acceptance measurement from PDF and α_s uncertainties and from higher-order corrections are reported. The use of the calculated acceptance in combination with measurements of detector efficiency, luminosity, and particle counting to determine the inclusive cross section is outlined. A total integrated luminosity of 331.64 pb⁻¹ from 2015 and 2017 CMS data at √s = 5 TeV is available for the calculation of the inclusive cross section.
by Alexander Andriatis
S.B.
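In essence the acceptance is the fraction of generated signal events whose decay products fall inside the fiducial selection; a minimal sketch with a binomial statistical uncertainty (the counts below are invented for illustration) is:

```python
import numpy as np

# Hypothetical Monte Carlo counts (illustrative numbers only)
n_generated = 1_000_000        # generated W -> l nu events
n_in_acceptance = 612_345      # events whose lepton passes the fiducial cuts

acceptance = n_in_acceptance / n_generated
stat_unc = np.sqrt(acceptance * (1.0 - acceptance) / n_generated)  # binomial error
print(f"A = {acceptance:.4f} +/- {stat_unc:.4f} (stat.)")

# Schematically, the cross section then follows as
#   sigma = N_signal / (A * efficiency * integrated_luminosity)
```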
APA, Harvard, Vancouver, ISO, and other styles
36

Matoušek, Karel. "Řešení problematiky ohýbání dílců z tenkostěnných profilů." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-229347.

Full text
Abstract:
The thesis surveys the problems and parameters of bending square hollow profiles: identification of the deformation of the cross-section, assessment of the suitability of unannealed material for the component part, and determination of the mechanical characteristics of the welds and of the influence of their position in the bend. These findings were based on a comparison of tensile tests and were subsequently applied to a specific part, for which the suitability of unannealed material was proved.
APA, Harvard, Vancouver, ISO, and other styles
37

Sheppard, Therese. "Extending covariance structure analysis for multivariate and functional data." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/extending-covariance-structure-analysis-for-multivariate-and-functional-data(e2ad7f12-3783-48cf-b83c-0ca26ef77633).html.

Full text
Abstract:
For multivariate data, when testing homogeneity of covariance matrices arising from two or more groups, Bartlett's (1937) modified likelihood ratio test statistic is appropriate to use under the null hypothesis of equal covariance matrices where the null distribution of the test statistic is based on the restrictive assumption of normality. Zhang and Boos (1992) provide a pooled bootstrap approach when the data cannot be assumed to be normally distributed. We give three alternative bootstrap techniques to testing homogeneity of covariance matrices when it is both inappropriate to pool the data into one single population as in the pooled bootstrap procedure and when the data are not normally distributed. We further show that our alternative bootstrap methodology can be extended to testing Flury's (1988) hierarchy of covariance structure models. Where deviations from normality exist, we show, by simulation, that the normal theory log-likelihood ratio test statistic is less viable compared with our bootstrap methodology. For functional data, Ramsay and Silverman (2005) and Lee et al (2002) together provide four computational techniques for functional principal component analysis (PCA) followed by covariance structure estimation. When the smoothing method for smoothing individual profiles is based on using least squares cubic B-splines or regression splines, we find that the ensuing covariance matrix estimate suffers from loss of dimensionality. We show that ridge regression can be used to resolve this problem, but only for the discretisation and numerical quadrature approaches to estimation, and that choice of a suitable ridge parameter is not arbitrary. We further show the unsuitability of regression splines when deciding on the optimal degree of smoothing to apply to individual profiles. To gain insight into smoothing parameter choice for functional data, we compare kernel and spline approaches to smoothing individual profiles in a nonparametric regression context. Our simulation results justify a kernel approach using a new criterion based on predicted squared error. We also show by simulation that, when taking account of correlation, a kernel approach using a generalized cross validatory type criterion performs well. These data-based methods for selecting the smoothing parameter are illustrated prior to a functional PCA on a real data set.
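One generic way to bootstrap a test of covariance homogeneity without pooling the groups, in the spirit of the separate-group resampling discussed above, is sketched below: each group is transformed so that the null of a common (pooled) covariance holds, resampling is done within groups, and a Box/Bartlett-type statistic is recomputed on each replicate. This is an illustrative scheme, not any of the three specific bootstrap procedures of the thesis; all names are made up for the sketch.

```python
import numpy as np
from scipy.linalg import inv, sqrtm

def box_m_statistic(groups):
    """Bartlett/Box M-type statistic for equality of covariance matrices."""
    k = len(groups)
    ns = [g.shape[0] for g in groups]
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (sum(ns) - k)
    M = (sum(ns) - k) * np.linalg.slogdet(pooled)[1]
    for n, S in zip(ns, covs):
        M -= (n - 1) * np.linalg.slogdet(S)[1]
    return M

def bootstrap_pvalue(groups, n_boot=999, seed=0):
    """Null-enforcing, within-group bootstrap: each group is transformed to
    share the pooled covariance, then resampled separately (no pooling)."""
    rng = np.random.default_rng(seed)
    ns = [g.shape[0] for g in groups]
    covs = [np.cov(g, rowvar=False) for g in groups]
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (sum(ns) - len(groups))
    recolour = np.real(sqrtm(pooled))
    null_groups = [(g - g.mean(axis=0)) @ np.real(inv(sqrtm(S))) @ recolour
                   for g, S in zip(groups, covs)]
    m_obs = box_m_statistic(groups)
    m_boot = [box_m_statistic([g[rng.integers(0, len(g), len(g))] for g in null_groups])
              for _ in range(n_boot)]
    return np.mean(np.array(m_boot) >= m_obs)

# Example: two groups, one of them non-normal (t-distributed)
rng = np.random.default_rng(1)
g1 = rng.normal(size=(40, 3))
g2 = rng.standard_t(df=5, size=(50, 3))
print(bootstrap_pvalue([g1, g2], n_boot=499))
```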
APA, Harvard, Vancouver, ISO, and other styles
38

Novellie, Jacqueline. "Institute for African Language Studies – an exploration of the constant and transformative." Diss., Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-10122006-122215.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Lemos, Gléverson Fabner Condé. "Técnicas de detecção e implementação em FPGA de modulações QAM de ordem elevada." Universidade Federal de Juiz de Fora (UFJF), 2011. https://repositorio.ufjf.br/jspui/handle/ufjf/4724.

Full text
Abstract:
A presente dissertação versa sobre técnicas de baixo custo para detecção, modulação e demodulação de constelações M-QAM (quadrature amplitude modulation) de ordem elevada, ou seja, M = 2^n, n = {2, 3, ..., 16}. Além disso, são propostas constelações alternativas para M-QAM, M = 2^{2n}, n = {1, 2, ..., 8}, que buscam minimizar a PAPR (peak to average power ratio) quando um sistema OFDM (orthogonal frequency division multiplexing) é utilizado para a transmissão de dados. Uma implementação, de baixo custo e em dispositivo FPGA (field programmable gate array), de um esquema de modulação constante e adaptativa para sistemas OFDM, quando a modulação é M-QAM, M = 2^{2n}, n = {1, 2, ..., 8}, é descrita e analisada. O desempenho das técnicas de detecção propostas é avaliado através de simulações computacionais quando o ruído é AWGN (additive white Gaussian noise) e AIGN (additive impulsive Gaussian noise). Os resultados em termos de BER × Eb/N0 indicam que as perdas de desempenho geradas com as técnicas propostas não são significativas e, portanto, tais técnicas são candidatas adequadas para a implementação de um sistema OFDM com elevada eficiência espectral. Os resultados computacionais revelam ainda que as propostas alternativas para constelações M-QAM reduzem a PAPR, mas, em contrapartida, degradam consideravelmente a BER. Finalmente, a análise da complexidade computacional das técnicas de detecção e demodulação, as quais foram implementadas em dispositivo FPGA, indica que há uma redução do custo computacional, ou seja, redução do uso de recursos de hardware do dispositivo FPGA quando tais técnicas são implementadas para a demodulação e detecção de símbolos M-QAM de ordem elevada.
This dissertation deals with low-cost techniques for detection, modulation and demodulation of high order M-QAM (quadrature amplitude modulation) constellations, i.e., M = 2^n, n = {2, 3, ..., 16}. In addition, alternative constellations are proposed to M-QAM, M = 2^{2n}, n = {1, 2, ..., 8}, which seek to minimize the PAPR (peak to average power ratio) when an OFDM (orthogonal frequency division multiplexing) system is used for data transmission. A low-cost implementation using an FPGA (field programmable gate array) device of a modulation scheme for constant and adaptive OFDM systems when the modulation is M-QAM, M = 2^{2n}, n = {1, 2, ..., 8}, is described and analyzed. The performance of the proposed detection techniques is evaluated through computer simulations when the noise is AWGN (additive white Gaussian noise) and AIGN (additive impulsive Gaussian noise). The results in terms of BER × Eb/N0 indicate that the performance losses generated by the proposed techniques are not significant and, therefore, such techniques are appropriate candidates for the implementation of an OFDM system with high spectral efficiency. The computational results reveal that the alternative proposals for M-QAM constellations reduce the PAPR but considerably degrade the BER. Finally, the analysis of the computational complexity of the detection and demodulation techniques, which were implemented in an FPGA device, indicates that there is a computational cost reduction, i.e., a reduction in the hardware resource usage of the FPGA device when these techniques are implemented for the demodulation and detection of high-order M-QAM symbols.
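As background to the low-cost detection theme, the standard hard-decision detector for square M-QAM simply slices the in-phase and quadrature components independently onto the odd-integer grid; a minimal sketch (generic, not the specific FPGA detection techniques or the PAPR-reducing constellations of the dissertation) is:

```python
import numpy as np

def qam_hard_decision(r, M):
    """Nearest-neighbour detection for square M-QAM on the odd-integer grid
    {+-1, +-3, ..., +-(sqrt(M)-1)}: slice the I and Q components independently."""
    L = int(np.sqrt(M)) - 1                      # largest constellation level
    def slice_axis(x):
        x = 2.0 * np.floor(x / 2.0) + 1.0        # round to the nearest odd integer
        return np.clip(x, -L, L)
    return slice_axis(np.real(r)) + 1j * slice_axis(np.imag(r))

# Example: 16-QAM symbols through additive white Gaussian noise
rng = np.random.default_rng(2)
levels = np.array([-3.0, -1.0, 1.0, 3.0])
tx = rng.choice(levels, 1000) + 1j * rng.choice(levels, 1000)
rx = tx + 0.2 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))
print("symbol error rate:", np.mean(qam_hard_decision(rx, 16) != tx))
```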
APA, Harvard, Vancouver, ISO, and other styles
40

Oliveira, Cristina Maria Correia Teles Garcia de. "Função de autocorrelação estendida generalizada amostral: contributo para a identificação dos modelos de função transferência." Doctoral thesis, Instituto Superior de Economia e Gestão, 2001. http://hdl.handle.net/10400.5/9086.

Full text
Abstract:
Doctorate in Mathematics Applied to Economics and Management
Traditionally, the identification of a bivariate transfer function model is carried out through the sample cross-correlation function between the input and output time series. However, practice has shown that this function, as an identification tool, involves an appreciable degree of subjectivity in specifying the orders r and s associated with the output and input polynomials, respectively. Building on the establishment of consistent iterated least squares estimators, a generalization of the concept of the sample extended autocorrelation function is introduced and a methodology for identifying bivariate transfer function models is proposed. A practical example and a simulation study are presented, illustrating the potential of the proposed procedure.
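The traditional tool referred to above, the sample cross-correlation function between input and output, can be sketched directly in numpy (prewhitening of the input, which usually precedes this step, is omitted; names and the simulated series are illustrative):

```python
import numpy as np

def sample_ccf(x, y, max_lag=20):
    """Sample cross-correlation r_xy(k) = corr(x_t, y_{t+k}), k = 0..max_lag."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    return np.array([np.sum(x[:n - k] * y[k:]) / n for k in range(max_lag + 1)])

# Example: the output responds to the input with a dead time of 3 periods
rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = np.zeros(500)
y[3:] = 0.8 * x[:-3]
y += 0.3 * rng.normal(size=500)

r = sample_ccf(x, y, max_lag=8)
print(np.argmax(np.abs(r)))      # -> 3, suggesting the dead time of the transfer function
print(2 / np.sqrt(len(x)))       # rough +-2/sqrt(n) significance band for r_xy(k)
```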
APA, Harvard, Vancouver, ISO, and other styles
41

Radeschnig, David. "Modelling Implied Volatility of American-Asian Options : A Simple Multivariate Regression Approach." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-28951.

Full text
Abstract:
This report focuses on implied volatility for American-style Asian options, and on a least squares approximation method as a way of estimating its magnitude. Asian option prices are calculated/approximated based on Quasi-Monte Carlo simulations and least squares regression, where a known volatility is used as input. A regression tree then empirically builds a database of regression vectors for the implied volatility based on the simulated option prices. The mean squared errors between imputed and estimated volatilities are then compared using a five-fold cross-validation test as well as the non-parametric Kruskal-Wallis hypothesis test of equal distributions. The study results in a proposed semi-parametric model for estimating implied volatilities from options. The user must, however, be aware that this model may suffer from estimation bias, and it should therefore be used with caution.
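A schematic version of the evaluation loop described (a least squares fit, five-fold cross-validated squared errors, and a Kruskal-Wallis test across folds) might look as follows; the features and synthetic data are invented for illustration and do not reproduce the report's regression-tree database.

```python
import numpy as np
from scipy.stats import kruskal
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

# Synthetic stand-in: a few option features -> implied volatility
rng = np.random.default_rng(4)
n = 400
moneyness = rng.uniform(0.8, 1.2, n)
maturity = rng.uniform(0.1, 2.0, n)
iv = 0.20 + 0.30 * (moneyness - 1.0) ** 2 + 0.05 * maturity + 0.02 * rng.normal(size=n)

X = np.column_stack([moneyness, maturity, (moneyness - 1.0) ** 2])
sq_errors_by_fold = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=4).split(X):
    fit = LinearRegression().fit(X[train], iv[train])
    sq_errors_by_fold.append((fit.predict(X[test]) - iv[test]) ** 2)

print("cross-validated MSE:", np.mean(np.concatenate(sq_errors_by_fold)))
# Kruskal-Wallis test that the squared errors share one distribution across folds
print(kruskal(*sq_errors_by_fold))
```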
APA, Harvard, Vancouver, ISO, and other styles
42

Luo, Shan. "Advanced Statistical Methodologies in Determining the Observation Time to Discriminate Viruses Using FTIR." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/math_theses/86.

Full text
Abstract:
Fourier transform infrared (FTIR) spectroscopy, a method that uses electromagnetic radiation to detect specific cellular molecular structures, can be used to discriminate different types of cells. The objective is to find the minimum time (a choice among 2, 4 and 6 hours) over which to record FTIR readings such that different viruses can be discriminated. A new method is adopted for the datasets. Briefly, inner differences are created as the control group, and the Wilcoxon signed-rank test is used as the first variable-selection procedure in order to prepare the next stage of discrimination. In the second stage we propose either the partial least squares (PLS) method or simply taking significant differences as the discriminator. Finally, k-fold cross-validation is used to estimate the shrinkages of the goodness measures, such as sensitivity, specificity and area under the ROC curve (AUC). There is no doubt in our minds that 6 hours is enough for discriminating mock from Hsv1 and Coxsackie viruses; Adeno virus is an exception.
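The screening stage described (a per-variable Wilcoxon signed-rank test used to select wavenumbers before PLS) can be sketched as below; the paired synthetic spectra and the 0.05 cut-off are illustrative assumptions, and the thesis's "inner differences" control construction is not reproduced.

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired synthetic spectra (e.g. control vs. infected readings at 200 wavenumbers);
# a genuine difference is injected into the first 30 variables only
rng = np.random.default_rng(5)
n_pairs, n_vars = 30, 200
control = rng.normal(size=(n_pairs, n_vars))
infected = control + rng.normal(scale=0.5, size=(n_pairs, n_vars))
infected[:, :30] += 0.6

pvals = np.array([wilcoxon(infected[:, j], control[:, j]).pvalue for j in range(n_vars)])
selected = np.flatnonzero(pvals < 0.05)     # variables passed on to the PLS stage
print(len(selected), "variables selected")
```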
APA, Harvard, Vancouver, ISO, and other styles
43

Kaphle, Manindra R. "Analysis of acoustic emission data for accurate damage assessment for structural health monitoring applications." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/53201/1/Manindra_Kaphle_Thesis.pdf.

Full text
Abstract:
Structural health monitoring (SHM) refers to the procedure used to assess the condition of structures so that their performance can be monitored and any damage can be detected early. Early detection of damage and appropriate retrofitting will aid in preventing failure of the structure and save money spent on maintenance or replacement and ensure the structure operates safely and efficiently during its whole intended life. Though visual inspection and other techniques such as vibration based ones are available for SHM of structures such as bridges, the use of acoustic emission (AE) technique is an attractive option and is increasing in use. AE waves are high frequency stress waves generated by rapid release of energy from localised sources within a material, such as crack initiation and growth. AE technique involves recording these waves by means of sensors attached on the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, ability to locate source, passive nature (no need to supply energy from outside, but energy from damage source itself is utilised) and possibility to perform real time monitoring (detecting crack as it occurs or grows) are some of the attractive features of AE technique. In spite of these advantages, challenges still exist in using AE technique for monitoring applications, especially in the area of analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked with three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission and (c) quantifying the level of damage of AE source for severity assessment. In AE technique, the location of the emission source is usually calculated using the times of arrival and velocities of the AE signals recorded by a number of sensors. But complications arise as AE waves can travel in a structure in a number of different modes that have different velocities and frequencies. Hence, to accurately locate a source it is necessary to identify the modes recorded by the sensors. This study has proposed and tested the use of time-frequency analysis tools such as short time Fourier transform to identify the modes and the use of the velocities of these modes to achieve very accurate results. Further, this study has explored the possibility of reducing the number of sensors needed for data capture by using the velocities of modes captured by a single sensor for source localization. A major problem in practical use of AE technique is the presence of sources of AE other than crack related, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from the crack activity; hence discrimination of signals to identify the sources is very important. This work developed a model that uses different signal processing tools such as cross-correlation, magnitude squared coherence and energy distribution in different frequency bands as well as modal analysis (comparing amplitudes of identified modes) for accurately differentiating signals from different simulated AE sources. Quantification tools to assess the severity of the damage sources are highly desirable in practical applications. 
Though different damage quantification methods have been proposed in AE technique, not all have achieved universal approval or have been approved as suitable for all situations. The b-value analysis, which involves the study of distribution of amplitudes of AE signals, and its modified form (known as improved b-value analysis), was investigated for suitability for damage quantification purposes in ductile materials such as steel. This was found to give encouraging results for analysis of data from laboratory, thereby extending the possibility of its use for real life structures. By addressing these primary issues, it is believed that this thesis has helped improve the effectiveness of AE technique for structural health monitoring of civil infrastructures such as bridges.
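For the amplitude-distribution analysis mentioned above, the basic b-value can be estimated with the maximum-likelihood (Aki) formula applied to AE peak amplitudes expressed in dB; the sketch below covers the plain b-value only (the improved b-value adds amplitude-statistics-based cut-offs not shown here), and the synthetic amplitudes are illustrative.

```python
import numpy as np

def ae_b_value(amplitudes_db, threshold_db):
    """Maximum-likelihood (Aki) b-value from AE peak amplitudes given in dB,
    using the usual convention magnitude m = A_dB / 20."""
    a = np.asarray(amplitudes_db, dtype=float)
    m = a[a >= threshold_db] / 20.0
    return np.log10(np.e) / (m.mean() - threshold_db / 20.0)

# Synthetic check: magnitudes drawn with a true b-value of 1, 40 dB threshold
rng = np.random.default_rng(6)
mags = rng.exponential(scale=np.log10(np.e) / 1.0, size=2000)
amps_db = 40.0 + 20.0 * mags
print(ae_b_value(amps_db, threshold_db=40.0))     # close to 1
```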
APA, Harvard, Vancouver, ISO, and other styles
44

Fuller, Matthew. "Transformer les capacités d'innovation : l'impact et l'influence des Fab Labs d'entreprise au sein de grands groupes Resetting innovation capabilities: the emergence of corporate fab labs Making nothing or something: corporate Fab Labs seen through their objects as they cross organizational boundarie Fitting squares into round holes: Enabling innovation, creativity, and entrepreneurship through corporate Fab Labs." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED045.

Full text
Abstract:
Inspirés par un modèle établi par une initiative sociale du Massachusetts Institute of Technology (MIT) en 2001, des salariés de plusieurs grandes groupes ont établi des Fab Labs d'entreprise avec l'intention de transformer les capacités d'innovation de leur entreprise. Cette thèse examine l'univers des Fab Labs entreprise, s'appuyant sur des données empiriques récoltées dans des dizaines de labs, avec des activités de recherche principales ayant lieu entre 2014 et 2017 dans les laboratoires de quatre grands groupes mondiaux. L'objectif de cette recherche est de 1) identifier si les Fab Labs d'entreprise influencent les capacités d'innovation d'une organisation, 2) articuler et affiner la représentation managériale utilisée pour justifier la création d'un tel lieu, ainsi que 3) esquisser un mécanisme simple qui permet aux décideurs stratégiques d'évaluer si les activités dans un lab lui permet d'atteindre ses objectifs.
Based on a pattern established by an MIT academic outreach program created in 2001, individuals in dozens of large organizations established corporate Fab Labs in recent years with the intent to transform their firm’s ability to innovate. This thesis investigates the world of corporate Fab Labs, building on empirical data gathered from dozens of labs, with core research activities taking place in the labs of four large multinational firms from 2014 through 2017. The purpose of this research is to 1) identify whether corporate Fab Labs influence an organization’s innovation capabilities, 2) articulate and refine the managerial representation used to support the creation of these labs, and 3) outline a simple mechanism for managers to evaluate whether a lab attains its desired outcomes.
APA, Harvard, Vancouver, ISO, and other styles
45

Šebek, František. "Výpočtová analýza rovnání čtvercových tyčí." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-230223.

Full text
Abstract:
Current requirements in mechanical engineering demand more accurate operations and more efficient technologies. The aim of this thesis is the analysis of the leveling of square rods. The main problem is the setting of the leveling machine for the specified material and geometric data so that the initially curved material, which passes through alternately positioned offset rollers, is leveled as much as possible. The main factor in the leveling process is the plastification of the material, used for the redistribution of the residual stress. Based on existing theoretical knowledge in this field, programs are set up to simulate the passing of the rod through the leveling machine. Further, modifications leading to the improvement of the whole process are presented. Finally, there is a verification of the results, made independently of the submitted solution and processed by the finite element method.
APA, Harvard, Vancouver, ISO, and other styles
46

Štukovská, Petra. "Algoritmy detekce radarových cílů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-451229.

Full text
Abstract:
This thesis focuses on radar target detection algorithms, namely on a group of techniques for removing disturbing reflections from static objects (clutter) and for suppressing distortion products caused by the phase noise of the transmitter and receiver. Methods for distortion suppression in the received signal are designed for implementation in the developed active multistatic radar, which operates in a code-division multiplex of several transmitters on a single frequency. The aim of the doctoral thesis is to design these techniques, implement them in MATLAB, and analyze their effectiveness and computational complexity on simulated and real data.
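As a point of reference for the clutter-removal theme, the simplest textbook approach is a two-pulse MTI canceller that subtracts consecutive pulses so that zero-Doppler returns from static objects cancel; the sketch below is purely illustrative and is not one of the suppression or phase-noise compensation techniques developed in the thesis.

```python
import numpy as np

def two_pulse_canceller(data_cube):
    """Two-pulse MTI canceller: subtract consecutive pulses so that echoes from
    static objects (zero-Doppler clutter) cancel while moving targets remain.

    data_cube: complex array of shape (n_pulses, n_range_bins)."""
    return data_cube[1:, :] - data_cube[:-1, :]

# Synthetic example: strong static clutter plus a weak moving target in bin 40
n_pulses, n_bins = 64, 128
rng = np.random.default_rng(7)
echo = 10.0 * np.ones((n_pulses, n_bins), dtype=complex)              # static clutter
echo[:, 40] += 0.5 * np.exp(2j * np.pi * 0.2 * np.arange(n_pulses))   # moving target
echo += 0.05 * rng.normal(size=(n_pulses, n_bins))                    # receiver noise

residue = two_pulse_canceller(echo)
print(np.abs(residue).mean(axis=0).argmax())                          # -> 40
```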
APA, Harvard, Vancouver, ISO, and other styles
47

Klimeš, Filip. "Zpracování obrazových sekvencí sítnice z fundus kamery." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-220975.

Full text
Abstract:
The aim of my master's thesis was to design a method for analysing retinal sequences that evaluates the quality of the individual frames. The theoretical part also deals with the properties of retinal sequences and with the registration of images from a fundus camera. In the practical part, a method for assessing image quality is implemented, tested on real retinal sequences, and its success rate is evaluated. The thesis also evaluates the influence of this method on the registration of retinal images.
APA, Harvard, Vancouver, ISO, and other styles
48

Durán, Alcaide Ángel. "Development of high-performance algorithms for a new generation of versatile molecular descriptors. The Pentacle software." Doctoral thesis, Universitat Pompeu Fabra, 2010. http://hdl.handle.net/10803/7201.

Full text
Abstract:
The work of this thesis was focused on the development of high-performance algorithms for a new generation of molecular descriptors, with many advantages with respect to its predecessors, suitable for diverse applications in the field of drug design, as well as its implementation in commercial grade scientific software (Pentacle). As a first step, we developed a new algorithm (AMANDA) for discretizing molecular interaction fields which allows extracting from them the most interesting regions in an efficient way. This algorithm was incorporated into a new generation of alignmentindependent molecular descriptors, named GRIND-2. The computing speed and efficiency of the new algorithm allow the application of these descriptors in virtual screening. In addition, we developed a new alignment-independent encoding algorithm (CLACC) producing quantitative structure-activity relationship models which have better predictive ability and are easier to interpret than those obtained with other methods.
El trabajo que se presenta en esta tesis se ha centrado en el desarrollo de algoritmos de altas prestaciones para la obtención de una nueva generación de descriptores moleculares, con numerosas ventajas con respecto a sus predecesores, adecuados para diversas aplicaciones en el área del diseño de fármacos, y en su implementación en un programa científico de calidad comercial (Pentacle). Inicialmente se desarrolló un nuevo algoritmo de discretización de campos de interacción molecular (AMANDA) que permite extraer eficientemente las regiones de máximo interés. Este algoritmo fue incorporado en una nueva generación de descriptores moleculares independientes del alineamiento, denominados GRIND-2. La rapidez y eficiencia del nuevo algoritmo permitieron aplicar estos descriptores en cribados virtuales. Por último, se puso a punto un nuevo algoritmo de codificación independiente de alineamiento (CLACC) que permite obtener modelos cuantitativos de relación estructura-actividad con mejor capacidad predictiva y mucho más fáciles de interpretar que los obtenidos con otros métodos.
APA, Harvard, Vancouver, ISO, and other styles
49

Hussain, Zahir M. "Adaptive instantaneous frequency estimation: Techniques and algorithms." Thesis, Queensland University of Technology, 2002. https://eprints.qut.edu.au/36137/7/36137_Digitised%20Thesis.pdf.

Full text
Abstract:
This thesis deals with the problem of the instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of the signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered, and this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time-delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL in FSK demodulation is also considered. This idea of replacing the HT by a time-delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time-delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real life applications. Examples are the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real life and synthetic signals are of multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
APA, Harvard, Vancouver, ISO, and other styles
50

Xie, TIAN. "Essays on Least Squares Model Averaging." Thesis, 2013. http://hdl.handle.net/1974/8113.

Full text
Abstract:
This dissertation adds to the literature on least squares model averaging by studying and extending current least squares model averaging techniques. The first chapter reviews existing literature and discusses the contributions of this dissertation. The second chapter proposes a new estimator for least squares model averaging. A model average estimator is a weighted average of common estimates obtained from a set of models. I propose computing weights by minimizing a model average prediction criterion (MAPC). I prove that the MAPC estimator is asymptotically optimal in the sense of achieving the lowest possible mean squared error. For statistical inference, I derive asymptotic tests on the average coefficients for the "core" regressors. These regressors are of primary interest to researchers and are included in every approximation model. In Chapter Three, two empirical applications for the MAPC method are conducted. I revisit the economic growth models in Barro (1991) in the first application. My results provide significant evidence to support Barro's (1991) findings. In the second application, I revisit the work by Durlauf, Kourtellos and Tan (2008) (hereafter DKT). Many of my results are consistent with DKT's findings and some of my results provide an alternative explanation to those outlined by DKT. In the fourth chapter, I propose using the model averaging method to construct optimal instruments for IV estimation when there are many potential instrument sets. The empirical weights are computed by minimizing the model averaging IV (MAIV) criterion through convex optimization. I propose a new loss function to evaluate the performance of the estimator. I prove that the instrument set obtained by the MAIV estimator is asymptotically optimal in the sense of achieving the lowest possible value of the loss function. The fifth chapter develops a new forecast combination method based on MAPC. The empirical weights are obtained through a convex optimization of MAPC. I prove that with stationary observations, the MAPC estimator is asymptotically optimal for forecast combination in that it achieves the lowest possible one-step-ahead second-order mean squared forecast error (MSFE). I also show that MAPC is asymptotically equivalent to the in-sample mean squared error (MSE) and MSFE.
Thesis (Ph.D, Economics) -- Queen's University, 2013-07-17 15:46:54.442
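The weight-selection step common to these chapters (choose non-negative weights summing to one by minimizing a penalised least squares criterion) can be sketched as below with a generic Mallows-type criterion; this is in the spirit of, but not identical to, the MAPC and MAIV criteria proposed in the dissertation, and all names and data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def mallows_weights(y, fitted, k_params, sigma2):
    """Weights minimising ||y - F w||^2 + 2 * sigma2 * k'w over the simplex.

    fitted:   (n, M) matrix of fitted values from M candidate models
    k_params: length-M vector of parameter counts for the candidate models"""
    M = fitted.shape[1]
    criterion = lambda w: np.sum((y - fitted @ w) ** 2) + 2.0 * sigma2 * (k_params @ w)
    res = minimize(criterion, np.full(M, 1.0 / M), method="SLSQP",
                   bounds=[(0.0, 1.0)] * M,
                   constraints=({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},))
    return res.x

# Example: average three nested OLS models of increasing size
rng = np.random.default_rng(8)
n = 200
X = rng.normal(size=(n, 4))
y = 1.0 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n)
designs = [np.column_stack([np.ones(n), X[:, :k]]) for k in (1, 2, 4)]
fitted = np.column_stack([D @ np.linalg.lstsq(D, y, rcond=None)[0] for D in designs])
k_params = np.array([D.shape[1] for D in designs], dtype=float)
sigma2 = np.sum((y - fitted[:, -1]) ** 2) / (n - designs[-1].shape[1])
print(mallows_weights(y, fitted, k_params, sigma2))
```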
APA, Harvard, Vancouver, ISO, and other styles