Theses on the topic "Statistical equivalence"

To see other types of publications on this topic, follow the link: Statistical equivalence.

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles.

Consult the top 50 theses for your research on the topic "Statistical equivalence".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever these details are included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Park, Sung Min S. M. Massachusetts Institute of Technology. « On the equivalence of sparse statistical problems ». Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107375.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 43-47).
Sparsity is a widely used and theoretically well understood notion that has allowed inference to be statistically and computationally possible in the high-dimensional setting. Sparse Principal Component Analysis (SPCA) and Sparse Linear Regression (SLR) are two problems that have a wide range of applications and have attracted a tremendous amount of attention in the last two decades as canonical examples of statistical problems in high dimension. A variety of algorithms have been proposed for both SPCA and SLR, but their literature has been disjoint for the most part. We have a fairly good understanding of conditions and regimes under which these algorithms succeed. But is there a deeper connection between the computational structure of SPCA and SLR? In this paper we show how to efficiently transform a blackbox solver for SLR into an algorithm for SPCA. Assuming the SLR solver satisfies prediction error guarantees achieved by existing efficient algorithms such as those based on the Lasso, we show that the SPCA algorithm derived from it achieves state-of-the-art performance, matching guarantees for testing and for support recovery under the single spiked covariance model as obtained by the current best polynomial-time algorithms. Our reduction not only highlights the inherent similarity between the two problems, but also, from a practical standpoint, allows one to obtain a collection of algorithms for SPCA directly from known algorithms for SLR. Experiments on simulated data show that these algorithms perform well.
by Sung Min Park.
S.M.
2

Yang, Jun. « Statistical Implementation of Toxicity Equivalence Approach in Wet Test ». University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1187035422.

3

Wang, Hui. « Error equivalence theory for manufacturing process control ». [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0002252.

4

Luo, Yingchun. « Nonparametric statistical procedures for therapeutic clinical trials with survival endpoints ». Thesis, Kingston, Ont. : [s.n.], 2007. http://hdl.handle.net/1974/492.

5

Ntantiso, Mzamo. « Exploring the statistical equivalence of the English and Xhosa versions of the Woodcock-Munõz Language Survey ». Thesis, Nelson Mandela Metropolitan University, 2009. http://hdl.handle.net/10948/d1018620.

Abstract:
This study explored the statistical equivalence of the adapted Xhosa and English versions of the Woodcock-Muñoz Language Survey (WMLS) by investigating group differences on each subscale in terms of mean scores, index reliability, and item characteristics for the two language groups. A convenience quota sampling technique was used to select 188 Xhosa (n = 188) and 198 English (n = 198) learners from Grades 6 and 7 living in rural and urban Eastern Cape. The Xhosa and English versions of the WMLS were administered to learners in their first languages. Significant mean group differences were found, but differences were not found on the reliability indices or mean item characteristics. This pointed in the direction of statistical equivalence. However, scrutiny of the item characteristics of the individual items per subscale indicated possible problems at the item level that need to be investigated further with differential functioning analyses. Thus, stringent DIF analyses of the flagged items were suggested for future research before the versions of the WMLS can be considered equivalent.
6

Olivier, G. J. F. (Gerrit Jacobus Francois). « Statistical thermodynamics of long-range quantum spin systems ». Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/20003.

Abstract:
Thesis (MSc)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: In this thesis we discuss some of the anomalies present in systems with long-range interactions, for instance negative specific heat and negative magnetic susceptibility, and show how they can be related to the convexity properties of the thermodynamic potentials and nonequivalence of ensembles. We also discuss the possibility of engineering long-range quantum spin systems with cold atoms in optical lattices to experimentally verify the existence of nonequivalence of ensembles. We then formulate an expression for the density of states when the energy and magnetisation correspond to a pair of non-commuting operators. Finally we analytically compute the entropy s(ε, m) as a function of energy, ε, and magnetisation, m, for the anisotropic Heisenberg model with Curie-Weiss type interactions. The results show that the entropy is non-concave in terms of magnetisation under certain circumstances, which in turn indicates that the microcanonical and canonical ensembles are not equivalent and that the magnetic susceptibility is negative. After making an appropriate change of variables we show that a second-order phase transition can be present at negative temperatures in the microcanonical ensemble which cannot be represented in the canonical ensemble.
7

Shen, Emily (Emily Huei-Yi). « Pattern matching encryption, strategic equivalence of range voting and approval voting, and statistical robustness of voting rules ». Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/79224.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 119-123).
We present new results in the areas of cryptography and voting systems. 1. Pattern matching encryption: We present new, general definitions for queryable encryption schemes - encryption schemes that allow evaluation of private queries on encrypted data without performing full decryption. We construct an efficient queryable encryption scheme supporting pattern matching queries, based on suffix trees. Storage and communication complexity are comparable to those for (unencrypted) suffix trees. The construction is based only on symmetric-key primitives, so it is practical. 2. Strategic equivalence of range voting and approval voting: We study strategic voting in the context of range voting in a formal model. We show that under general conditions, as the number of voters becomes large, strategic range voting becomes equivalent to approval voting. We propose beta distributions as a new and interesting way to model voters' subjective information about other votes. 3. Statistical robustness of voting rules: We introduce a new notion called "statistical robustness" for voting rules: a voting rule is statistically robust if, for any profile of votes, the most likely winner of a sample of the profile is the winner of the complete profile. We show that plurality is the only interesting voting rule that is statistically robust; approval voting (perhaps surprisingly) and other common voting rules are not statistically robust.
by Emily Shen.
Ph.D.
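The "statistical robustness" notion defined in part 3 of the abstract above can be checked empirically for the plurality rule. A minimal sketch (not the thesis's code; the 60/40 two-candidate profile is an invented example) that compares the most likely winner of a random sample against the full-profile winner:

```python
import random
from collections import Counter

def plurality_winner(votes):
    # Candidate with the most votes; ties broken alphabetically for determinism.
    counts = Counter(votes)
    top = max(counts.values())
    return min(c for c, n in counts.items() if n == top)

def most_likely_sample_winner(votes, k, trials=2000, seed=0):
    # Empirically estimate the most likely plurality winner of a size-k random sample.
    rng = random.Random(seed)
    wins = Counter(plurality_winner(rng.sample(votes, k)) for _ in range(trials))
    return wins.most_common(1)[0][0]

# A hypothetical 60/40 profile: for plurality, the most likely winner
# of a sample should match the winner of the complete profile.
profile = ["A"] * 60 + ["B"] * 40
print(most_likely_sample_winner(profile, k=15) == plurality_winner(profile))  # prints True
```

For rules that are not statistically robust, the thesis's claim is that some profile exists where this check fails; the sketch only illustrates the definition, not the proof.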
8

Chen, Shaoqiang. « Manufacturing process design and control based on error equivalence methodology ». [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002511.

9

Ikeda, Mitsuru, Kazuhiro Shimamoto, Takeo Ishigaki, Kazunobu Yamauchi, 充. 池田 and 一信 山内. « Statistical method in a comparative study in which the standard treatment is superior to others ». Nagoya University School of Medicine, 2002. http://hdl.handle.net/2237/5385.

10

Nguyen, Diep Thi. « Statistical Models to Test Measurement Invariance with Paired and Partially Nested Data : A Monte Carlo Study ». Scholar Commons, 2019. https://scholarcommons.usf.edu/etd/7869.

Abstract:
When assessing the emotions, behaviors, or performance of preschoolers and young children, scores from adult informants such as parent, psychiatrist, and teacher ratings are used rather than scores from the children themselves. Data from parent ratings, or from parents and teachers, are often nested: students are within teachers, and a child is within their parents. This common nested feature of data in the educational, social, and behavioral sciences makes measurement invariance (MI) testing across informants of children methodologically challenging. There was a lack of studies that take into account the nested structure of data in MI testing for multiple adult informants, and especially no simulation study that examines the performance of different models used to test MI across different raters. This dissertation focused on two specific types of nesting in testing MI between adult raters of children: paired and partially nested data. For paired data, the independence assumption of regular MI testing is often violated because the two informants (e.g., father and mother) rate the same child, so their scores are expected to be related or dependent. Partially nested data refers to the research situation where teacher and parent ratings are compared. In this scenario, it is common that each parent has only one child to rate while each teacher has multiple children in their classroom. Thus, in the case of teacher and parent ratings of the same children, the data are repeated measures and also partially nested. Because of these unique features of the data, MI testing between adult informants of children requires statistical models that take into account different types of data dependency. I proposed and evaluated the performance of two statistical models that can handle repeated measures and partial nesting, in addition to one commonly used and one potentially appropriate statistical model, across several simulated research scenarios.
Results of the two simulation studies in this dissertation showed that for paired data, both the multiple-group confirmatory factor analysis (CFA) and the repeated measure CFA models were able to detect scalar invariance most of the time using the Δχ2 test and ΔCFI. Although the multiple-group CFA (Model 2) was able to detect scalar invariance better than the repeated measure CFA model (Model 1), the detection rates of Model 1 were still high (88%-91% using the Δχ2 test and 84%-100% using ΔCFI or ΔRMSEA). For the configural and metric invariance conditions with paired data, Model 1 had a higher detection rate than Model 2 in almost every research scenario examined in this dissertation. In particular, while Model 1 could detect noninvariance (either in intercepts only or in both intercepts and factor loadings) most of the time, Model 2 could rarely catch it when using the suggested cut-off of 0.01 for RMSEA differences. For paired data, although both Models 1 and 2 could be good choices for testing measurement invariance, Model 1 might be favored if researchers are more interested in detecting noninvariance, due to its overall high detection rates for all three levels (i.e., configural, metric, and scalar) of measurement invariance. For scalar invariance with partially nested data, both the multilevel repeated measure CFA and the design-based multilevel CFA could detect invariance most of the time (from 81% to 100% of examined cases), with a slightly higher detection rate for the former model than the latter. The multiple-group CFA model could hardly detect scalar invariance except when the ICC was small. The detection rates for configural invariance using the Δχ2 test or the Satorra-Bentler LRT were also highest for Model 3 (82% to 100%, except for two conditions with detection rates of 61%), followed by Model 5, and lowest for Model 4.
Models 4 and 5 could reach these rates only with the largest sample sizes (i.e., a large number of clusters, a large cluster size, or both) when the magnitude of noninvariance was small. Unlike for scalar and configural invariance, the ability to detect metric invariance was highest for Model 4, followed by Model 5, and lowest for Model 3 across many conditions using all three performance criteria. Given its high detection rates for all configural and scalar invariance conditions, and moderate detection rates for many metric invariance conditions (except cases of a small number of clusters combined with a large ICC), Model 3 could be a good candidate for testing measurement invariance with partially nested data when the number of clusters is sufficient, or when a small number of clusters is paired with a small ICC. Model 5 might also be a reasonable option for this type of data if both the number of clusters and the cluster size are large (i.e., 80 and 20, respectively), or if either of these two factors is large coupled with a small ICC. If the ICC is not small, it is recommended to have a large number of clusters, or a combination of a large number of clusters and a large cluster size, to ensure high detection rates of measurement invariance for partially nested data. As the multiple-group CFA had better and more reasonable detection rates than the design-based and multilevel repeated measure CFA models across configural, metric, and scalar invariance under conditions of small cluster size (10) and small ICC (0.13), researchers can consider using this model to test measurement invariance when they can only collect 10 participants within a cluster (e.g., students within a classroom) and there is a small degree of data dependency (e.g., small variance between clusters) in the data.
11

Werndl, Charlotte. « Philosophical aspects of chaos : definitions in mathematics, unpredictability, and the observational equivalence of deterministic and indeterministic descriptions ». Thesis, University of Cambridge, 2010. https://www.repository.cam.ac.uk/handle/1810/226754.

Abstract:
This dissertation is about some of the most important philosophical aspects of chaos research, a famous recent mathematical area of research about deterministic yet unpredictable and irregular, or even random behaviour. It consists of three parts. First, as a basis for the dissertation, I examine notions of unpredictability in ergodic theory, and I ask what they tell us about the justification and formulation of mathematical definitions. The main account of the actual practice of justifying mathematical definitions is Lakatos's account on proof-generated definitions. By investigating notions of unpredictability in ergodic theory, I present two previously unidentified but common ways of justifying definitions. Furthermore, I criticise Lakatos's account as being limited: it does not acknowledge the interrelationships between the different kinds of justification, and it ignores the fact that various kinds of justification - not only proof-generation - are important. Second, unpredictability is a central theme in chaos research, and it is widely claimed that chaotic systems exhibit a kind of unpredictability which is specific to chaos. However, I argue that the existing answers to the question "What is the unpredictability specific to chaos?" are wrong. I then go on to propose a novel answer, viz. the unpredictability specific to chaos is that for predicting any event all sufficiently past events are approximately probabilistically irrelevant. Third, given that chaotic systems are strongly unpredictable, one is led to ask: are deterministic and indeterministic descriptions observationally equivalent, i.e., do they give the same predictions? I treat this question for measure-theoretic deterministic systems and stochastic processes, both of which are ubiquitous in science. I discuss and formalise the notion of observational equivalence. 
By proving results in ergodic theory, I first show that for many measure-preserving deterministic descriptions there is an observationally equivalent indeterministic description, and that for all indeterministic descriptions there is an observationally equivalent deterministic description. I go on to show that strongly chaotic systems are even observationally equivalent to some of the most random stochastic processes encountered in science. For instance, strongly chaotic systems give the same predictions at every observation level as Markov processes or semi-Markov processes. All this illustrates that even kinds of deterministic and indeterministic descriptions which, intuitively, seem to give very different predictions are observationally equivalent. Finally, I criticise the claims in the previous philosophical literature on observational equivalence.
12

Ganti, Satyakala. « DEVELOPMENT OF HPLC METHODS FOR PHARMACEUTICALLY RELEVANT MOLECULES ; METHOD TRANSFER TO UPLC : COMPARING METHODS STATISTICALLY FOR EQUIVALENCE ». Diss., Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/118587.

Abstract:
Chemistry
Ph.D.
High Pressure Liquid Chromatography (HPLC) is a well-known and widely used analytical technique which is prevalent throughout the pharmaceutical industry as a research tool. Despite its prominence, HPLC possesses some disadvantages, most notably slow analysis times and large consumption of organic solvents. Ultra Performance Liquid Chromatography (UPLC) is a relatively new technique which offers the same separation capabilities as HPLC with the added benefits of reduced run time and lower solvent consumption. One of the key developments enabling the new UPLC technology is the sub-2-µm particles used as column packing material. These particles allow for higher operating pressures and increased flow rates while still providing strong separation. Although UPLC technology has been available since the early 2000s, few laboratories have embraced the new technology as an alternative to HPLC. Besides the resistance to investing in new capital, another major roadblock is converting existing HPLC methodology to UPLC without disruption. This research provides a framework for converting existing HPLC methods to UPLC. An existing HPLC method for the analysis of galantamine hydrobromide was converted to UPLC and validated according to ICH guidelines. A series of statistical evaluations of the validation data was performed to demonstrate the equivalence between the original HPLC method and the new UPLC method. This research presents this novel statistical strategy, which can be applied to any two methodologies to determine parity.
Temple University--Theses
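The thesis above compares two analytical methods statistically for equivalence. One common approach to such method-equivalence questions (not necessarily the one used in this thesis) is the two one-sided tests (TOST) procedure. A minimal large-sample sketch, with invented assay values and an assumed ±2.0 equivalence margin; in practice a t-distribution, not the normal approximation, would be used with samples this small:

```python
import math
from statistics import NormalDist, mean, stdev

def tost_equivalent(x, y, margin, alpha=0.05):
    # Two one-sided tests (TOST), large-sample z approximation:
    # declare equivalence if mean(x) - mean(y) is significantly inside ±margin.
    se = math.sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    diff = mean(x) - mean(y)
    nd = NormalDist()
    p_lower = 1 - nd.cdf((diff + margin) / se)  # H0: diff <= -margin
    p_upper = nd.cdf((diff - margin) / se)      # H0: diff >= +margin
    return max(p_lower, p_upper) < alpha

# Hypothetical percent-recovery values from the two methods.
hplc = [99.8, 100.1, 100.0, 99.9, 100.2, 100.0]
uplc = [99.9, 100.0, 100.1, 99.8, 100.1, 100.0]
print(tost_equivalent(hplc, uplc, margin=2.0))  # True: the methods test as equivalent
```

Note that TOST reverses the usual hypothesis-testing logic: nonequivalence is the null hypothesis, so a significant result supports equivalence rather than difference.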
13

Santos, Carlos Eduardo Fiore dos. « Sistemas fora do equilíbrio termodinâmico : Um estudo em diferentes abordagens ». Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-11042007-140207/.

Abstract:
In this PhD thesis, we have presented a study of several nonequilibrium systems with absorbing states by means of different approaches, such as mean-field analysis, usual numerical simulations, analysis in another ensemble, and perturbative series expansions. In a specific part of this thesis, we have shown that the approach proposed here for describing nonequilibrium systems in the constant-particle-number ensemble can also be used to characterize equilibrium systems described by the Gibbs probability distribution. Finally, we have presented open problems for future research.
14

Crampton, Raymond J. « A nonlinear statistical MESFET model using low order statistics of equivalent circuit model parameter sets ». Thesis, This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-03032009-040420/.

15

Clark, James Byron. « Fractional factorial designs-equivalence and augmenting / ». The Ohio State University, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487949836206266.

16

Lafont, Thibault. « Statistical vibroacoustics : study of SEA assumptions ». Thesis, Ecully, Ecole centrale de Lyon, 2015. http://www.theses.fr/2015ECDL0003/document.

Abstract:
Statistical energy analysis (SEA) is a statistical approach to vibroacoustics which describes complex systems in terms of vibrational or acoustical energies. In the high-frequency range, this method constitutes an alternative that bypasses the problems which can occur when applying deterministic methods (computation cost due to the large number of modes, the large number of degrees of freedom, and the uniqueness of the solution). But SEA rests on numerous assumptions which are sometimes forgotten or misunderstood. In this thesis, the foundations of SEA have been examined in order to discuss each assumption. Diffuse field, modal energy equipartition, weak coupling, the influence of non-resonant modes, and rain-on-the-roof excitation are the five assumptions examined. Based on simple examples (coupled oscillators, coupled plates), the possible equivalences and their influence on the quality of the results have been discussed, to help clarify the assumptions needed to apply SEA and to mark out its domain of validity.
17

Murthi, Mamta. « Food Engel curves and equivalence scales in Sri Lanka ». Thesis, University of Oxford, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336116.

18

Odei, James Beguah. « Statistical Modeling, Exploration, and Visualization of Snow Water Equivalent Data ». DigitalCommons@USU, 2014. https://digitalcommons.usu.edu/etd/3871.

Abstract:
Due to a continual increase in the demand for water as well as an ongoing regional drought, there is an imminent need to monitor and forecast water resources in the Western United States. In particular, water resources in the Intermountain West rely heavily on snow water storage. Thus, improving seasonal forecasts of snowpack and considering new techniques would allow water resources to be managed more effectively throughout the entire water-year. Many available models used in forecasting snow water equivalent (SWE) measurements require delicate calibrations. In contrast to the physical SWE models most commonly used for forecasting, we offer a statistical model. We present a data-based statistical model that characterizes seasonal snow water equivalent in terms of a nested time series, with the large scale focusing on the inter-annual periodicity of dominant signals and the small scale accommodating seasonal noise and autocorrelation. This model provides a framework for independently estimating the temporal dynamics of SWE for the various snow telemetry (SNOTEL) sites. We use SNOTEL data from ten stations in Utah over 34 water-years to implement and validate this model. This dissertation has three main goals: (i) developing a new statistical model to forecast SWE; (ii) bridging existing R packages into a new R package to visualize and explore spatial and spatio-temporal SWE data; and (iii) applying the newly developed R package to SWE data from Utah SNOTEL sites and the Upper Sheep Creek site in Idaho as case studies.
19

Yang, Peiling. « Practical equivalence inference as a model building strategy, with applications in multiple comparisons / ». The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487947908400878.

20

Ngeacharernkul, Pratak. « Particle size distribution (PSD) equivalency using novel statistical comparators and PBPK input models ». Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5973.

Abstract:
For disperse-system drug formulations, meaningful particle size distribution (PSD) comparators are essential in determining pharmaceutical equivalency and predicting biopharmaceutical equivalence in terms of the effect of particle size on the rate and extent of drug input. In formulation development and licensure, particle size characterization has been applied to establish relationships for bioequivalence of generic pharmaceutical drug products. The current approaches recommended by the US FDA, using median and span, are not adequate to predict drug product performance or to account for multi-modal PSD performance properties. The use of a PSD similarity metric, and the development and incorporation of drug release predictions based on PSD properties into PBPK models for various drug administration routes, may provide a holistic approach for evaluating the effect of PSD differences on in vitro release of disperse systems and the resulting pharmacokinetic impact on drug product performance. The objectives of this study are to provide a rational approach for PSD comparisons by 1) developing similarity computations for PSD comparisons and 2) using PBPK models to specifically account for PSD effects on drug input rates via a subcutaneous (SQ) administration route. Two comparison metrics for the PSDs of reference (reference-listed drug product) and test (generic) drug products were investigated, OVL and PROB, alongside the current standard measurements of median and span. In addition, release rate profiles of each product pair, simulated from a modified Bikhazi and Higuchi model, were used to compute release rate comparators such as the similarity factor (f2) and fractional time ratios. A subcutaneous-input PBPK model was developed and used to simulate blood concentration-time profiles of reference and test drug products. Pharmacokinetic responses such as AUC, Cmax, and Tmax were compared using standard bioequivalence criteria.
PSD comparators, release rate comparators, and bioequivalence metrics were related to determine their relationships and identify the appropriate approach for bioequivalence waiver. OVL showed better predictions for bioequivalence compared to PROB, median, and span. For release profile comparisons, the f2 method was the best for bioequivalence prediction. The use of both release rate (e.g., f2) and PSD (e.g., OVL) comparison metrics significantly improved bioequivalence prediction to about 90%.
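The similarity factor f2 used above for release-profile comparison is a standard metric from FDA dissolution guidance, f2 = 50·log10(100 / √(1 + mean squared difference)), where the mean is taken over matched time points. A minimal sketch (not the thesis's code; the profiles are invented percent-released values):

```python
import math

def f2_similarity(ref, test):
    # FDA similarity factor: f2 = 50 * log10(100 / sqrt(1 + mean squared difference)).
    if len(ref) != len(test):
        raise ValueError("profiles must have the same number of time points")
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return 50 * math.log10(100 / math.sqrt(1 + msd))

# Hypothetical percent-released profiles at matched time points.
reference = [10, 30, 60, 85]
identical = [10, 30, 60, 85]
shifted = [20, 40, 70, 95]  # every point differs by 10

print(round(f2_similarity(reference, identical), 1))  # 100.0 (maximum similarity)
print(f2_similarity(reference, shifted) < 50)         # True: just below the cut-off
```

The conventional acceptance criterion is f2 ≥ 50, which corresponds to an average difference of about 10 percentage points between the profiles; identical profiles give the maximum value of 100.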
21

Guyader, Andrew Charles Iwan W. D. « A statistical approach to equivalent linearization with application to performance-based engineering / ». Diss., Pasadena, Calif. : California Institute of Technology, 2003. http://resolver.caltech.edu/CaltechETD:etd-06012003-123539.

22

Guyader, Andrew C. « A statistical approach to equivalent linearization with application to performance-based engineering / ». Pasadena : California Institute of Technology, Earthquake Engineering Research Laboratory, 2004. http://caltecheerl.library.caltech.edu.

23

Flodin, Mikael, and Shadi Khatibi. « Betyg och kön : likvärdighet eller diskriminering ? » Thesis, KTH, Lärande, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-227802.

Abstract:
National and international assessments in mathematics show similar results for girls and boys. Despite this, statistics show that girls systematically receive higher final grades. This study examines whether grades serve as an equivalent measure of knowledge for girls and boys in upper-secondary school mathematics, partly through a quantitative approach and partly through a survey. Based on national register data (Statistics Sweden) for final grades and national test results, gender differences with respect to course, school form and county are examined using four different methods of analysis. The study shows that girls generally receive a higher final grade than boys relative to their results on the national test, confirming previous research. Furthermore, the analysis shows particularly large discrepancies at grade C and higher; in mathematics courses in vocational programs; in later courses within all programs; in Västernorrland, Västmanland, Gotland and Kalmar counties; as well as in independent schools. Correlation analysis clarifies how the national test constitutes a smaller part of the assessment basis for girls than for boys. The analysis also reveals an inverse relationship between gender-dependent relative performance on the national test and the final-grade deviation. The survey examines assessment practice among mathematics teachers, filtered by the teacher's gender, age, program and school form. The results suggest systematic differences in assessment practice between teacher categories, implying that grading may lack equivalence. Differences have been shown primarily between teachers in vocational and science programs, as well as between teachers in municipal and independent schools. The teacher's gender and age also appear to have some bearing. The study concludes with a discussion of possible solutions.
24

Parvathaneni, Keerthi Krishna. « Characterization and multiscale modeling of textile reinforced composite materials considering manufacturing defects ». Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Lille Douai, 2020. http://www.theses.fr/2020MTLD0016.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
L’influence des porosités induites par les procédés de fabrication sur les propriétés mécaniques des composites textiles a été étudiée à la fois par caractérisation expérimentale et par modélisation multi-échelle. En particulier, les porosités ont été caractérisés en termes de fraction volumique, taille, forme et distribution, et les effets de chaque caractéristique sur les propriétés mécaniques des composites textiles ont été analysés. De nombreuses plaques de composites textiles ont été fabriquées par le procédé Resin Transfer Molding (RTM). Ainsi, un renfort textile en verre interlock 3D a été imprégné par une résine époxy injectée sous une pression constante pour générer différents types de porosités. Des essais mécaniques ont été réalisés pour examiner la dépendance du module et de la résistance en traction des composites par rapport au taux de porosité total, intra-toron et inter-toron et également par rapport aux caractéristiques géométriques des porosités. Des analyses au microscope électronique ont été effectuées pour obtenir des informations locales sur les fibres (diamètre et distribution) et les porosités intra-toron (rayon, rapport d’aspect et distribution). A partir de ces résultats, un nouvel algorithme a été développé pour générer le Volume Elémentaire Représentatif (VER) qui est statistiquement équivalent au composite contenant les porosités. De plus, l’effet de la morphologie, du diamètre et de la distribution spatiale des porosités (homogène, aléatoire et concentré) sur les propriétés homogénéisées des torons a également été étudié par la méthode des éléments finis. La tomographie par rayons X a été utilisée pour extraire la géométrie méso-échelle réelle en trois dimensions et les porosités intra-toron. Ensuite, ces données ont été utilisées pour créer un modèle numérique à l’échelle mésoscopique (VER) et prédire les propriétés élastiques des composites avec porosités. 
Une étude paramétrique utilisant une méthode numérique multi-échelle a été effectuée pour étudier l’effet de chaque caractéristique des porosités, c.-à-d. le taux volumique, la taille, la forme, la distribution et la localisation sur les propriétés élastiques de composites. Ainsi, la méthode multi-échelle proposée permet d’établir une corrélation entre les porosités à différentes échelles et les propriétés mécaniques des composites textiles
The influence of void-type manufacturing defects on the mechanical properties of textile composites was investigated both by experimental characterization and by multiscale modeling. In particular, void characteristics (not only volume fraction but also size, shape, and distribution) were characterized for textile composites, and their effects on the mechanical properties were analyzed. Several textile composite plates were fabricated by the resin transfer molding (RTM) process, in which a 3D interlock glass textile reinforcement was impregnated with epoxy resin under constant injection pressure to generate different types of voids. A series of mechanical tests was performed to examine the dependency of the tensile modulus and strength of the composites on the total, intra-yarn and inter-yarn void volume fractions and on the voids' geometrical characteristics. Microscopy observations provided local information about the fibers (diameter and distribution) and the intra-yarn voids (radius, aspect ratio and distribution). Based on these results, a novel algorithm was proposed to generate a statistically equivalent representative volume element (RVE) containing voids. Moreover, the effect of void morphology, diameter and spatial distribution (homogeneous, random and clustered) on the homogenized properties of the yarns was also investigated by the finite element method. X-ray micro-computed tomography was employed to extract the real meso-scale geometry and the inter-yarn voids; this data was then used to build a numerical model of the meso-scale RVE and to predict the elastic properties of composites containing voids. A parametric study using a multiscale numerical method was carried out to investigate the effect of each void characteristic, i.e. volume fraction, size, shape, distribution, and location, on the elastic properties of the composites. The proposed multiscale method thus establishes a correlation between void defects at different scales and the mechanical properties of textile composites.
25

Cai, Weixing. « Multiple decision rules for equivalence among k populations and their applications in signal processing, clinical trials and classification ». Related electronic resource : Current Research at SU : database of SU dissertations, recent titles available full text, 2008. http://wwwlib.umi.com/cr/syr/main.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
26

Sansivieri, Valentina <1984>. « Item Response Theory Equating with the Non-Equivalent Groups with Covariates Design ». Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amsdottorato.unibo.it/7779/1/Sansivieri_Valentina_tesi.pdf.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
Test score equating allows scores from different test forms to be compared. Although a non-equivalent groups with anchor test (NEAT) design is preferable, it may be impossible to administer an anchor test for test-security or other reasons. We still know, however, that the groups are non-equivalent, which rules out an equivalent groups (EG) design. One possibility, then, is a non-equivalent groups with covariates (NEC) design. The overall aim of this work was to propose the use of Item Response Theory (IRT) with a NEC design. We propose the use of the mixed-measurement IRT with covariates model (Tay, Newman & Vermunt, 2011; 2016) within IRT observed-score equating and IRT true-score equating, so that both test scores and covariates are modeled. The proposed equating methods are examined with simulations, and the results are compared with IRT observed-score and true-score equating methods under the EG and NEAT designs. The simulations show that the IRT true-score equating method does not work, but support the IRT observed-score equating method, for which the standard errors of equating are lower when covariates are included in the IRT model than when they are excluded. One real test dataset illustrates that the IRT observed-score equating method can be used in practice.
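As background for the equating methods compared here, IRT true-score equating under a 2PL model can be sketched in a few lines: invert the test characteristic curve (TCC) of form X at a given number-correct score, then evaluate the TCC of form Y at the resulting ability. A minimal illustration with hypothetical item parameters (not the thesis' NEC-design estimates):

```python
import math

def p_2pl(theta, a, b):
    """2PL item response probability."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def tcc(theta, items):
    """Test characteristic curve: expected number-correct score."""
    return sum(p_2pl(theta, a, b) for a, b in items)

def true_score_equate(x, items_x, items_y, lo=-6.0, hi=6.0, tol=1e-8):
    """Map a number-correct score on form X to its form-Y equivalent:
    invert the form-X TCC by bisection, then evaluate the form-Y TCC."""
    # bisection assumes x lies strictly between the TCC bounds on [lo, hi]
    a, b = lo, hi
    while b - a > tol:
        m = 0.5 * (a + b)
        if tcc(m, items_x) < x:
            a = m
        else:
            b = m
    return tcc(0.5 * (a + b), items_y)

# hypothetical item parameters (discrimination, difficulty)
form_x = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.0)]
form_y = [(1.1, -0.5), (0.9, 0.2), (1.0, 0.8)]
print(true_score_equate(1.5, form_x, form_y))
```

The same TCC inversion underlies observed-score variants, which additionally model the score distribution rather than the score alone.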
27

Wang, Jie (Stamey, James D., advisor). « Sample size determination for Emax model, equivalence / non-inferiority test and drug combination in fixed dose trials ». Waco, Tex. : Baylor University, 2008. http://hdl.handle.net/2104/5182.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
28

ROSATI, ROSSANA. « Testing cross-national construct equivalence in international surveys. Applications on international civic and citizenship education survey data ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/95793.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
Data collected in international studies enable researchers, educators, and policy makers to compare educational systems on several aspects, such as student achievement but also attitudes and beliefs. Such findings are often included in international reports in the form of league tables comparing country averages on different measures, and they are the subject of important country comparisons and subsequent decisions. Nevertheless, the cross-cultural generalizability of attitudinal measures, and hence the validity of country comparisons, cannot always be taken for granted: statistical tests of measurement invariance (MI) must be carried out to ensure meaningful country comparisons and related conclusions. This dissertation addresses the issue of MI for attitudinal measures. A case is made for valid country comparisons of measures collected in cross-national surveys by documenting and illustrating with examples the required MI tests. After a comprehensive account of the theoretical grounding of MI in a multiple-group confirmatory factor analysis (MG-CFA) framework, three nested and increasingly constrained levels of invariance (configural invariance, metric invariance, and scalar invariance) are discussed and explored. More specifically, by testing a set of three increasingly constrained models measuring the latent concept, we estimate whether the model structure, factor loadings and intercepts are equivalent across groups, and consequently whether comparisons made on the latent variable are meaningful across groups (countries). In agreement with the theory, it is assumed that to ensure the highest level of cross-cultural comparability (e.g. comparing country averages), MI testing must confirm the highest level of MI, scalar invariance.
We approach the research topic taking as example the measure of students’ attitudes toward equal rights for immigrants collected in the International Civic and Citizenship Education Study – ICCS conducted by the International Association for the Evaluation of Educational Achievement – IEA in 2009. The methodology is applied both to all European countries and to sub-groups of students such as the non-immigrant/native students in these countries as well as students with an immigrant background. The estimation takes into account the specific properties of data. The results are discussed both within the sample and sub-samples setting and show that the required level of scalar invariance is not always reached. In particular, in the studied countries, higher levels of construct equivalence seem to be achieved only for the sub-sample of students with an immigrant background. Conclusions and implications for further research and also for reporting and interpreting current research findings are drawn.
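The comparison of nested invariance models (configural vs. metric vs. scalar) typically rests on a chi-square difference test between the freer and the more constrained model. A minimal sketch with hypothetical fit statistics (any MG-CFA software that reports model chi-square and degrees of freedom is assumed):

```python
from scipy.stats import chi2

def chisq_diff_test(chi2_restricted, df_restricted, chi2_free, df_free):
    """Likelihood-ratio (chi-square difference) test for nested models;
    the more constrained model must have the larger degrees of freedom."""
    d_chi2 = chi2_restricted - chi2_free
    d_df = df_restricted - df_free
    return d_chi2, d_df, chi2.sf(d_chi2, d_df)

# hypothetical fit statistics: metric (restricted) vs. configural (free)
d, ddf, p = chisq_diff_test(chi2_restricted=310.4, df_restricted=168,
                            chi2_free=295.2, df_free=160)
print(f"Δχ² = {d:.1f}, Δdf = {ddf}, p = {p:.3f}")
```

A non-significant p-value means the added equality constraints (e.g. equal loadings) do not significantly worsen fit, so the higher invariance level is tenable.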
29

Katsaounis, Parthena I. « Equivalence of symmetric factorial designs and characterization and ranking of two-level Split-lot designs ». The Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=osu1164176825.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
30

CASTELLETTI, FEDERICO. « Learning Markov Equivalence Classes of Gaussian DAGs via Observational and Interventional Data : an Objective Bayes Approach ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2018. http://hdl.handle.net/10281/199179.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
I modelli grafici basati sull'utilizzo di grafi direzionati (Directed Acyclic Graphs, DAG) hanno acquisito negli ultimi decenni un'ampia popolarità per lo studio della dipendenza tra variabili in molteplici ambiti scientifici. Tipicamente lo scopo è fare inferenza su un modello attraverso i dati, ovvero misurare relazioni di dipendenza tra variabili. La famiglia di indipendenze (marginali) e condizionali codificate dal DAG determinano la sua proprietà markoviana. DAG che racchiudono le medesime indipendenze condizionali sono detti Markov equivalenti. È tuttavia noto che l'utilizzo di dati di natura puramente osservazionale non consenta di "distinguere" tra DAG Markov equivalenti. Questi sono quindi partizionati in classi di equivalenza, ciascuna delle quali viene rappresentata da un grafo a catena detto essential graph. Quando l'obiettivo è fare inferenza sul modello generatore dei dati è quindi più conveniente esplorare lo spazio degli essential graph (rispetto allo spazio dei DAG), sebbene la dimensione di questo cresca "più che esponenzialmente" nel numero di variabili (nodi del grafo). Per lungo tempo lo studio degli essential graph è stato quindi confinato a "dimensioni" modeste dello spazio. Tuttavia, per superare tale limite, negli ultimi anni sono stati proposti diversi metodi basati sull'utilizzo di catene di Markov. In diverse applicazioni (di carattere tipicamente biologico e genomico) si dispone di dati di tipo "interventistico", ossia prodotti a seguito di perturbazioni esogene di variabili o "esperimenti randomizzati". La nozione di intervento è strettamente legata all'interpretazione causale del DAG. Intervenendo su una variabile è possibile "rimuovere" la dipendenza di altre variabili sulla stessa, ossia modificare la proprietà markoviana del DAG. Questo determina una partizione dei DAG in classi di equivalenza di dimensione "più contenuta", ciascuna delle quali viene rappresentata da un interventional essential graph. 
Pertanto, laddove si disponga di dati di natura interventistica, la selezione del modello generatore dei dati può essere rivolta all'esplorazione di tale spazio; in tal modo è possibile "migliorare" l'identificazione del DAG generatore dei dati. Nel presente lavoro si affronta il problema della selezione di modelli grafici gaussiani attraverso una metodologia di tipo bayesiano. Nello specifico, si adotta un approccio oggettivo basato sulla nozione di fractional Bayes factor. A questo scopo, ricaviamo una formula per il calcolo della verosimiglianza marginale di un interventional essential graph in presenza di dati di natura osservazionale e interventistica. In seguito, procediamo alla costruzione di una catena di Markov per l'esplorazione dello spazio degli interventional essential graph sotto condizioni di sparsità. Proponiamo quindi un algoritmo di tipo MCMC per approssimare la posterior distribution degli interventional essential graph e "quantificare" misure di incertezza come la probabilità di inclusione di un edge. Applichiamo infine la metodologia proposta, denominata Objective Bayesian Interventional Essential graph Search, a studi di simulazione e per l'analisi di protein-signaling data, laddove dati di natura interventistica corrispondono a rilevazioni effettuate sotto differenti condizioni sperimentali.
Graphical models based on Directed Acyclic Graphs (DAGs) are a very common tool in many scientific areas for the investigation of dependencies among variables. Typically, the objective is to infer models from the data, that is, to measure dependence relationships between variables. The set of (marginal and) conditional independencies encoded by a DAG determines its Markov property. However, it is well known that DAGs encoding the same set of conditional independencies (Markov equivalent DAGs) cannot be distinguished using observational data. Markov equivalent DAGs are therefore collected in equivalence classes, each represented by an Essential Graph (EG), also called a Completed Partially Directed Acyclic Graph (CPDAG). When the interest is in model selection, it is then convenient to explore the EG space rather than the whole DAG space, even though the number of EGs grows super-exponentially with the number of vertices. An exhaustive enumeration of all EGs is not feasible, and so structural learning in the EG space has long been confined to small-dimensional problems; to overcome this limit, several methods based on Markov chains have been proposed in recent years. In many applications (such as biology and genomics) we have both observational and interventional data, the latter produced after an exogenous perturbation of some variables or from randomized intervention experiments. The concept of intervention is strictly related to the causal interpretation of a DAG. Interventions destroy the original causal dependency on the intervened variables and modify the Markov property of the DAG. This results in a finer partition of DAGs into equivalence classes, each represented by an Interventional Essential Graph (I-EG). Hence, model selection of DAGs in the presence of observational and interventional data can be performed over the I-EG space, improving the identifiability of the true data-generating model.
In this work we deal with the problem of Gaussian DAG model selection from a Bayesian perspective. In particular, we adopt an objective Bayes approach based on the notion of the fractional Bayes factor, and obtain a closed formula to compute the marginal likelihood of an I-EG given a collection of observational and interventional data. Next, we construct a Markov chain to explore the I-EG space, possibly accounting for sparsity constraints. We then propose an MCMC algorithm to approximate the posterior distribution of I-EGs and provide a quantification of inferential uncertainty by measuring features of interest, such as probabilities of edge inclusion. We apply our methodology, which we name Objective Bayesian Interventional Essential graph Search (OBIES), to simulation settings and to the analysis of protein-signaling data, where the interventional data consist of observations measured under different experimental conditions.
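The Markov equivalence relation at the heart of this work has a classical operational test (Verma & Pearl, 1990): two DAGs are Markov equivalent if and only if they share the same skeleton and the same v-structures. A minimal sketch over DAGs given as parent sets:

```python
def skeleton(dag):
    """Undirected edge set of a DAG given as {node: set_of_parents}."""
    return {frozenset((u, v)) for v, pars in dag.items() for u in pars}

def v_structures(dag):
    """Colliders u -> w <- v whose tails u, v are non-adjacent."""
    skel = skeleton(dag)
    out = set()
    for w, pars in dag.items():
        for u in pars:
            for v in pars:
                if u < v and frozenset((u, v)) not in skel:
                    out.add((u, w, v))
    return out

def markov_equivalent(d1, d2):
    """Verma-Pearl criterion: same skeleton and same v-structures."""
    return skeleton(d1) == skeleton(d2) and v_structures(d1) == v_structures(d2)

# X -> Y -> Z and X <- Y -> Z are equivalent; X -> Y <- Z is not
chain    = {"X": set(), "Y": {"X"}, "Z": {"Y"}}
fork     = {"X": {"Y"}, "Y": set(), "Z": {"Y"}}
collider = {"X": set(), "Y": {"X", "Z"}, "Z": set()}
print(markov_equivalent(chain, fork))      # True
print(markov_equivalent(chain, collider))  # False
```

Interventions shrink these classes: intervening on Y, say, breaks the equivalence between the chain and the fork, which is exactly why interventional data improve identifiability.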
31

Fomicheva, Marina. « The Role of human reference translation in machine translation evaluation ». Doctoral thesis, Universitat Pompeu Fabra, 2017. http://hdl.handle.net/10803/404987.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
Both manual and automatic methods for Machine Translation (MT) evaluation heavily rely on professional human translation. In manual evaluation, human translation is often used instead of the source text in order to avoid the need for bilingual speakers, whereas the majority of automatic evaluation techniques measure string similarity between MT output and a human translation (commonly referred to as candidate and reference translations), assuming that the closer they are, the higher the MT quality. In spite of the crucial role of human reference translation in the assessment of MT quality, its fundamental characteristics have been largely disregarded. An inherent property of professional translation is the adaptation of the original text to the expectations of the target audience. As a consequence, human translation can be rather different from the original text, which, as will be shown throughout this work, has a strong impact on the results of MT evaluation. The first goal of our research was to assess the effects of using human translation as a benchmark for MT evaluation. To achieve this goal, we started with a theoretical discussion of the relation between original and translated texts. We identified the presence of optional translation shifts as one of the fundamental characteristics of human translation. We analyzed the impact of translation shifts on automatic and manual MT evaluation showing that in both cases quality assessment is strongly biased by the reference provided. The second goal of our work was to improve the accuracy of automatic evaluation in terms of the correlation with human judgments. Given the limitations of reference-based evaluation discussed in the first part of the work, instead of considering different aspects of similarity we focused on the differences between MT output and reference translation searching for criteria that would allow distinguishing between acceptable linguistic variation and deviations induced by MT errors. 
In the first place, we explored the use of local syntactic context for validating the matches between candidate and reference words. In the second place, to compensate for the lack of information regarding the MT segments for which no counterpart in the reference translation was found, we enhanced reference-based evaluation with fluency-oriented features. We implemented our approach as a family of automatic evaluation metrics that showed highly competitive performance in a series of well-known MT evaluation campaigns.
Tanto los métodos manuales como los automáticos para la evaluación de la Traducción Automática (TA) dependen en gran medida de la traducción humana profesional. En la evaluación manual, la traducción humana se utiliza a menudo en lugar del texto original para evitar la necesidad de hablantes bilingües, mientras que la mayoría de las técnicas de evaluación automática miden la similitud entre la TA y una traducción humana (comúnmente llamadas traducción candidato y traducción de referencia), asumiendo que cuanto más cerca están, mayor es la calidad de la TA. A pesar del papel fundamental que juega la traducción de referencia en la evaluación de la calidad de la TA, sus características han sido en gran parte ignoradas. Una propiedad inherente de la traducción profesional es la adaptación del texto original a las expectativas del lector. Como consecuencia, la traducción humana puede ser bastante diferente del texto original, lo cual, como se demostrará a lo largo de este trabajo, tiene un fuerte impacto en los resultados de la evaluación de la TA. El primer objetivo de nuestra investigación fue evaluar los efectos del uso de la traducción humana como punto de referencia para la evaluación de la TA. Para lograr este objetivo, comenzamos con una discusión teórica sobre la relación entre textos originales y traducidos. Se identificó la presencia de cambios de traducción opcionales como una de las características fundamentales de la traducción humana. Se analizó el impacto de estos cambios en la evaluación automática y manual de la TA demostrándose en ambos casos que la evaluación está fuertemente sesgada por la referencia proporcionada. El segundo objetivo de nuestro trabajo fue mejorar la precisión de la evaluación automática medida en términos de correlación con los juicios humanos. 
Dadas las limitaciones de la evaluación basada en la referencia discutidas en la primera parte del trabajo, en lugar de enfocarnos en la similitud, nos concentramos en el impacto de las diferencias entre la TA y la traducción de referencia buscando criterios que permitiesen distinguir entre variación lingüística aceptable y desviaciones inducidas por los errores de TA. En primer lugar, exploramos el uso del contexto sintáctico local para validar las coincidencias entre palabras candidato y de referencia. En segundo lugar, para compensar la falta de información sobre los segmentos de la TA para los cuales no se encontró ninguna relación con la traducción de referencia, introdujimos características orientadas a la fluidez de la TA en la evaluación basada en la referencia. Implementamos nuestro enfoque como una familia de métricas de evaluación automática que mostraron un rendimiento altamente competitivo en una serie de conocidas campañas de evaluación de la TA.
32

Scott, Heather Marie. « Parent Involvement in Children's Schooling : An Investigation of Measurement Equivalence across Ethnic Groups ». Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3339.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
Epstein et al.'s Theory of Overlapping Spheres of Influence focuses on the interaction and communication, or partnerships, among families, schools, and the community to bring the three closer together. The theory works in conjunction with Epstein's typology of parental involvement, which focuses on six types of involvement that are instrumental to a child's development and his/her school and educational success. These serve as the framework for the study and support the construct of parent's involvement in children's schooling. The purpose of the current study was to conduct further validation analyses of an inventory designed to measure the construct of parent involvement in their children's schooling through the investigation of measurement invariance to determine if the measurement properties of the inventory varied by race/ethnicity. The study compared the responses of 126 Hispanic parents/guardians with 116 White/non-Hispanic parents/guardians to investigate if these two groups were interpreting the items on the inventory in the same manner. The inventory was administered to a sample of parents/guardians of children in grades 3 through 5 in a local school district. Findings indicated that the measurement model was misspecified for the White/non-Hispanic group and the Hispanic group and further measurement invariance testing was not conducted. Exploratory factor analyses were conducted in order to investigate which models would best fit the data for both groups. Feedback also was obtained from parents/guardians about the clarity of the inventory, which revealed their confusion with the response scale and the wording of particular items. In addition, they supplied issues or aspects of parent involvement that they found important but missing from the inventory. Results from the psychometric analyses and qualitative feedback indicated that the inventory requires modification and further psychometric investigation. 
In addition, caution should be exercised by anyone considering using the inventory. Results of the study were interpreted in terms of contributions to the parent involvement literature, along with recommendations for improving the inventory.
33

Campbell, Kathleen. « Extension of Kendall's tau Using Rank-Adapted SVD to Identify Correlation and Factions Among Rankers and Equivalence Classes Among Ranked Elements ». Diss., Temple University Libraries, 2014. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/284578.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
Statistics
Ph.D.
The practice of ranking objects, events, and people to determine relevance, importance, or competitive edge is ancient. Recently, rankings have permeated daily usage, especially in the fields of business and education. When determining the association among those creating the ranks (herein called sources), the traditional assumption is that all sources compare a list of the same items (herein called elements). In the twenty-first century it is rare that any two sources choose identical elements to rank, and, adding to this difficulty, the number of credible sources creating and releasing rankings is increasing. In the statistical literature there is no current methodology that adequately assesses the association among multiple such sources. We introduce rank-adapted singular value decomposition (R-A SVD), a new method that uses Kendall's tau as the underlying correlation measure. We begin with P, a matrix of data ranks. The first step is to factor the covariance matrix K as

K = cov(P) = V D^2 V^T

Here V is an orthonormal basis for the rows, useful for identifying when sources agree on the rank order and specifically which sources, and D^2 is the diagonal matrix of eigenvalues of K, in decreasing order. By analogy with the singular value decomposition (SVD), we define

U^* = P V D^(-1)

The largest eigenvalue is used to assess the overall association among the sources and yields a conservative, unbiased method comparable to Kendall's W. Using Anderson's test (1963), we identify the a significantly large eigenvalues of K; when one or more eigenvalues are significant, there is evidence that the association among the sources is significant, and the a corresponding vectors of V identify specifically which sources agree. When more than one eigenvalue is significant, the a significant vectors of V provide insight into factions: several sets of sources may each be in internal agreement without agreeing with the other sets, and each such group is considered a faction. Using the a significant vectors of U^* provides different but equally important results. In many cases, the elements being ranked can be subdivided into equivalence classes, defined as subpopulations of ranked elements that are similar to one another but dissimilar from other classes. When such classes exist, U^* provides insight into how many classes there are and which elements belong to each. In summary, the R-A SVD method allows the user to assess whether there is any underlying association among multiple rank sources, identifies when sources agree, and allows a more useful and careful interpretation of rank data.
Temple University--Theses
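The decomposition described in the abstract can be sketched numerically; this rough illustration uses the plain covariance of the rank matrix rather than the thesis' Kendall's-tau-based variant, and the rank data are hypothetical:

```python
import numpy as np

def rank_adapted_svd(P, tol=1e-9):
    """Sketch of R-A SVD: P is an (elements x sources) matrix of ranks.
    Factor K = cov(P) = V D^2 V^T, then form U* = P V D^{-1},
    keeping only components with a non-negligible eigenvalue."""
    K = np.cov(P, rowvar=False)              # sources are columns
    eigvals, V = np.linalg.eigh(K)
    order = np.argsort(eigvals)[::-1]        # descending eigenvalues
    eigvals, V = eigvals[order], V[:, order]
    keep = eigvals > tol
    U_star = P @ V[:, keep] @ np.diag(1.0 / np.sqrt(eigvals[keep]))
    return eigvals, V, U_star

# hypothetical example: sources 1 and 2 rank five elements alike,
# source 3 ranks them in the opposite order
P = np.array([[1, 1, 5],
              [2, 3, 4],
              [3, 2, 3],
              [4, 5, 2],
              [5, 4, 1]], dtype=float)
eigvals, V, U_star = rank_adapted_svd(P)
print(eigvals)           # dominant first eigenvalue: strong overall association
print(np.sign(V[:, 0]))  # sources 1 and 2 load together, source 3 opposes them
```

The sign pattern of the leading column of V exposes the faction structure (sources 1 and 2 versus source 3), while the rows of U* can be clustered to look for equivalence classes among the elements.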
34

Chang, Yu-Wei. « Sample Size Determination for a Three-arm Biosimilar Trial ». Diss., Temple University Libraries, 2014. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/298932.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
Résumé :
Statistics
Ph.D.
The equivalence assessment usually consists of three tests and is often conducted through a three-arm clinical trial. The first two tests demonstrate the superiority of the test treatment and the reference treatment over placebo; they are followed by the equivalence test between the test treatment and the reference treatment. Equivalence is commonly defined in terms of the mean difference, the mean ratio, or the ratio of mean differences, i.e. the ratio of the mean difference between test and placebo to the mean difference between reference and placebo. In this dissertation, the equivalence assessment is discussed for both continuous and discrete data. For the continuous case, the test of the ratio of mean differences is applied; its advantage is that it combines a superiority test of the test treatment over placebo with an equivalence test in a single hypothesis. For the discrete case, the two-step equivalence assessment approach is studied for both Poisson and negative binomial data. While a Poisson distribution implies that the population mean and variance are equal, the advantage of a negative binomial model is that it accounts for overdispersion, a common phenomenon in count medical endpoints. The test statistics, power functions, and required sample size examples for a three-arm equivalence trial are given for both the continuous and discrete cases. In addition, discussions of power comparisons are complemented with numerical results.
Temple University--Theses
35

Karaliūtė, Asta. « Statistiniai kolokacijų nustatymo metodai ir vertimo atitikmenys lygiagrečiajame grožinės literatūros tekstyne ». Master's thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100617_111239-72584.

Abstract:
The object of this Master's thesis is collocations and collocation extraction methods. The aim of the research is to analyse collocation lists extracted by statistical methods from a parallel corpus of fiction and to determine the collocation equivalents in translation. Relevance of the thesis: collocation analysis can help linguists and other language specialists choose the right collocation extraction methods in both English and Lithuanian. Moreover, understanding the process of collocation translation is important for translation analysis and for translators. The research consists of 5 parts. Chapter 2 presents the concept of collocation and possible collocation translation problems. The theoretical part also includes the characteristics of the four selected statistical methods: Mutual Information (MI), T-score, Dice and Log-likelihood ratio (LLR). In chapter 3, collocation lists for each language, English and Lithuanian, are extracted. The analysis reveals that the T-score and LLR methods extract grammatical collocations, while MI and Dice extract lexical ones. Further in this chapter, collocation boundaries and the similarity coefficients of the methods are defined. Chapter 4 presents a list of the top 200 collocations for each language and method. The methods with the new collocation lists are compared in pairs according to similarity criteria: Dice with MI (lexical collocations) and T-score with LLR (grammatical). A further distribution of bigrams according to frequency is identified, and both... [to full text]
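The four association measures compared in the thesis can all be computed from simple bigram counts. A minimal sketch with hypothetical counts follows (the log-likelihood ratio uses Dunning's 2x2 contingency-table form, which is the standard formulation, though the thesis's exact variant is not specified here):

```python
import math

def association_scores(n, c1, c2, c12):
    """Collocation scores from corpus counts: n bigrams total,
    c1/c2 marginal counts of the two words, c12 their co-occurrence."""
    expected = c1 * c2 / n
    mi = math.log2(n * c12 / (c1 * c2))          # Mutual Information
    t_score = (c12 - expected) / math.sqrt(c12)  # T-score
    dice = 2 * c12 / (c1 + c2)                   # Dice coefficient

    # Log-likelihood ratio over the 2x2 contingency table (Dunning 1993):
    # observed cells o11..o22 vs. expected cells under independence.
    obs = [c12, c1 - c12, c2 - c12, n - c1 - c2 + c12]
    rows, cols = (c1, n - c1), (c2, n - c2)
    exp = [rows[i] * cols[j] / n for i in (0, 1) for j in (0, 1)]
    llr = 2 * sum(o * math.log(o / e) for o, e in zip(obs, exp) if o > 0)
    return mi, t_score, dice, llr

# Hypothetical counts for one candidate bigram.
mi, t_score, dice, llr = association_scores(n=1000, c1=50, c2=40, c12=20)
```

MI and Dice reward exclusive co-occurrence (hence lexical collocations), while T-score and LLR are frequency-sensitive (hence the grammatical, high-frequency pairs reported in the thesis).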
36

Holtz, Sebastian. « High-frequency statistics for Gaussian processes from a Le Cam perspective ». Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/21123.

Abstract:
This work studies inference on scaling parameters of a conditionally Gaussian process under discrete noisy observations in a high-frequency regime. Our aim is to find an asymptotic characterisation of efficient estimation for a general Gaussian framework. For a parametric basic-case model a Hájek-Le Cam convolution theorem is derived, yielding an exact asymptotic lower bound for estimators. Matching upper bounds are constructed, and the importance of the theorem is illustrated by various examples of interest such as (fractional) Brownian motion, the Ornstein-Uhlenbeck process and integrated processes. The derivation of the efficiency result is based on asymptotic equivalences and can be employed for several generalisations of the parametric basic-case model. As one such extension we consider estimation of the quadratic covariation of a continuous martingale from noisy asynchronous observations, which is a fundamental estimation problem in econometrics. For this model, a semi-parametric convolution theorem is obtained which generalises existing results in terms of multidimensionality, asynchronicity and assumptions. Based on the previous derivations, we develop statistical tests on the Hurst parameter of a fractional Brownian motion. A score test and a likelihood-ratio-type test are implemented and analysed, and first empirical impressions are given.
37

BAJNI, GRETA. « STATISTICAL METHODS TO ASSESS ROCKFALL SUSCEPTIBILITY IN AN ALPINE ENVIRONMENT : A FOCUS ON CLIMATIC FORCING AND GEOMECHANICAL VARIABLES ». Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/913511.

Abstract:
The overarching goal of the doctoral thesis was the development of a systematic procedure capable of examining and enhancing the role of geomechanical and climatic processes in rockfall susceptibility, using statistically based and Machine Learning techniques. To achieve this purpose, two case studies were analysed in the Italian Alps (Valchiavenna, Lombardy Region; Mountain Communities of Mont Cervin and Mont Emilius, Aosta Valley Region). For both case studies, Generalized Additive Models (GAM) were used for rockfall susceptibility assessment; for the Valchiavenna case study, a Random Forest (RF) model was also tested. All models were validated through k-fold cross-validation routines and their performance evaluated in terms of the area under the receiver operating characteristic curve (AUROC). The physical plausibility of the predictors' behaviour was verified through the analysis of the mathematical functions describing the modelled predictor-susceptibility relationships. The specific objectives of the two case studies differed. The Valchiavenna case study was dedicated to testing the role of outcrop-scale geomechanical properties in a rockfall susceptibility model.
Specific objectives were: (i) the optimal selection of sampling points for the execution of geomechanical surveys to be integrated within an already available dataset; (ii) the regionalization over the study area of three geomechanical properties, namely the Joint Volumetric Count (Jv), a rock-mass weathering index (Wi) and the rock-mass equivalent permeability (Keq); (iii) the implementation of the regionalized properties as predictors in a rockfall susceptibility model, along with the traditional morphometric variables; (iv) the investigation of prediction limitations related to inventory incompleteness; (v) the implementation of a methodology for interpreting predictor behaviour in the RF model, usually considered a black-box algorithm; (vi) the integration of the RF and GAM outputs to furnish a spatially distributed measure of uncertainty; (vii) the exploitation of satellite-derived ground deformation data to verify the susceptibility outputs and interpret them from an environmental management perspective. The additional geomechanical sampling points were selected by means of the Spatial Simulated Annealing technique. Once the necessary geomechanical data had been collected, regionalization of the target properties was carried out by comparing different deterministic, regressive and geostatistical techniques. The most suitable technique for each property was selected and the geomechanical predictors were implemented in the susceptibility models. To verify effects related to rockfall inventory incompleteness, the GAM was fitted both to rockfall data from the official Italian landslide inventory (IFFI) and to its update with a field-mapped rockfall dataset. Regarding the RF model, SHapley Additive exPlanations (SHAP) were employed for the interpretation of predictor behaviour.
A comparison between the GAM and RF outputs was carried out to verify their coherency, as well as a quantitative integration of the resulting susceptibility maps to reduce uncertainties. Finally, the rockfall susceptibility maps were coupled with Synthetic Aperture Radar (SAR) data from 2014 to 2021: a qualitative geomorphological verification of the outputs was performed, and composite maps were produced. The key results were: (i) geomechanical predictor maps were obtained by applying ordinary kriging for Jv and Wi (NRMSE equal to 13.7% and 14.5%, respectively) and Thin Plate Splines for Keq (NRMSE = 18.5%). (ii) Jv was the most important geomechanical predictor both in the GAM (with a deviance explained of 7.5%) and in the RF model, with rockfall susceptibility increasing over the most fractured rock masses. (iii) Wi and Keq were penalized (i.e., they had low influence on rockfall susceptibility) in the GAM model, whereas Keq showed an importance comparable to Jv in the RF model. (iv) In a complex Machine Learning model (RF), the SHAP values allowed the interpretation of predictor behaviour, which proved coherent with that shown in the GAM model. (v) The models including the geomechanical predictors achieved acceptable rockfall discrimination capabilities (AUROC > 0.7). (vi) The introduction of the geomechanical predictors led to a redistribution of the high-susceptibility areas into plausible geomorphological contexts, such as active slope deformations and structural lineaments, otherwise not revealed by the topographic predictors alone. (vii) Models built solely with the IFFI inventory resulted in physically implausible susceptibility maps and predictor behaviour, highlighting a bias in the official inventory. (viii) The discordance in predicted rockfall susceptibility between the GAM and the RF models varied from 13% to 8% of the total study area.
(ix) From the integration of InSAR data and susceptibility maps, a "SAR Integrated Susceptibility Map" and an "Intervention Priority Map" were developed as operational products potentially exploitable in environmental planning activities. The Aosta Valley case study was dedicated to challenging the concept of "susceptibility stationarity" by including the climate component in the rockfall susceptibility model. The availability of a large historical rockfall inventory and an extensive, multi-variable meteorological dataset for the period 1990-2020 were crucial inputs for the analysis. Specific objectives were: (i) the identification of climate conditions related to rockfall occurrence; (ii) the summary of the identified relationships in variables to be used in a susceptibility model; (iii) the optimization of a rockfall susceptibility model including topographic, climatic and additional snow-related predictors (from a weekly gridded SWE dataset). Starting from an hourly meteorological dataset, climate conditions were summarized in indices related to short-term rainfall (STR), effective water inputs (EWI, including rainfall and snow melting), wet-dry cycles (WD) and freeze-thaw cycles (FT). Climate indices and rockfall occurrence time series were paired. Critical thresholds relating rockfall occurrence to non-ordinary values of the climate indices (>75th percentile) were derived through a statistical analysis. As summary variables for the susceptibility analysis, the mean annual threshold exceedance frequency of each index was calculated. Model optimization consisted of stepwise modifications of the model settings to handle issues related to inventory bias, the physical significance of climatic predictors and concurvity (i.e., predictor collinearity in GAMs). The starting point was a "blind model", i.e. a susceptibility model created without awareness of the rockfall inventory characteristics or of the physical processes potentially influencing susceptibility.
To reduce the inventory bias, "visibility" masks were produced so as to limit the modelling domain according to the rockfall collection procedures adopted by the administrations. Models were then optimized according to the physical plausibility of the climatic predictors, analysed through the smooth functions relating them to susceptibility. Finally, to reduce concurvity, a Principal Component Analysis (PCA) including the climatic and snow-related predictors was carried out, and the resulting principal components were used to replace the climatic predictors in the susceptibility model. The key results were: (i) 95% of the rockfalls occurred in severe (non-ordinary) conditions for at least one of the EWI, WD and FT indices; (ii) ignoring inventory bias led to excellent model performance (0.80 ≤ AUROC ≤ 0.90) but physically implausible outputs; (iii) selecting non-rockfall points inside the "visibility" mask was a valuable approach to managing the influence of inventory bias on the outputs; (iv) the inclusion of climate predictors improved susceptibility model performance (AUROC up by as much as 3%) compared to a topographic-only model; (v) the most important physically plausible climate predictors were EWI and WD, with a deviance explained of 5% to 10% each, followed by the maximum cumulated snow melting with a deviance explained of 3% to 5%; the effect of FT was masked by elevation. (vi) When the climate and snow-related predictors were entered into the susceptibility model as principal components, concurvity was efficiently reduced. Including climate processes as non-stationary predictors (i.e., accounting for climate change) could be a valuable approach both for deriving long-term rockfall susceptibility scenarios and, in combination with short-term weather forecasts, for adapting susceptibility models to an early warning system for Civil Protection purposes.
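The summary variable used for the climatic predictors, the mean annual frequency of critical-threshold exceedances, is simple to compute once the index series and threshold are fixed. A minimal sketch (hypothetical index values; in the thesis the threshold comes from the >75th-percentile analysis):

```python
from collections import defaultdict

def mean_annual_exceedance(series, threshold):
    """series: iterable of (year, index_value) pairs.
    Returns the mean number of records per year on which the
    climate index exceeds the critical threshold."""
    per_year = defaultdict(int)
    for year, value in series:
        per_year[year] += value > threshold
    return sum(per_year.values()) / len(per_year)

# Hypothetical effective-water-input (EWI) index values over two years,
# with a threshold taken as given from the percentile analysis.
data = [(2019, 5.0), (2019, 12.0), (2019, 14.0), (2019, 11.0),
        (2020, 3.0), (2020, 9.0), (2020, 15.0), (2020, 2.0)]
freq = mean_annual_exceedance(data, threshold=10.0)
```

One such frequency per index (EWI, WD, FT, STR) then enters the susceptibility model as a spatially varying predictor.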
38

Marshall, Scott. « An Empirical Approach to Evaluating Sufficient Similarity : Utilization of Euclidean Distance As A Similarity Measure ». VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/102.

Abstract:
Individuals are exposed to chemical mixtures while carrying out everyday tasks, with unknown risk associated with exposure. Given the number of resulting mixtures, it is not economically feasible to identify or characterize all possible mixtures. When complete dose-response data are not available on a (candidate) mixture of concern, EPA guidelines define a similar mixture based on chemical composition, component proportions and expert biological judgment (EPA, 1986, 2000). Recent work in this literature is by Feder et al. (2009), who evaluate sufficient similarity in exposure to disinfection by-products of water purification using multivariate statistical techniques and traditional hypothesis testing. Stork et al. (2008) introduced the idea of sufficient similarity in dose-response (making a connection between exposure and effect); they developed methods to evaluate sufficient similarity of a fully characterized reference mixture, with dose-response data available, and a candidate mixture with only mixing proportions available. A limitation of the approach is that the two mixtures must contain the same components. It is of interest to determine whether a fully characterized reference mixture (representative of the random process) is sufficiently similar in dose-response to a candidate mixture resulting from a random process. Four similarity measures based on Euclidean distance are developed to aid in the evaluation of sufficient similarity in dose-response, allowing for mixtures to be subsets of each other. If a reference and a candidate mixture are concluded to be sufficiently similar in dose-response, inference about the candidate mixture can be based on the reference mixture. An example is presented demonstrating that the benchmark dose (BMD) of the reference mixture can be used as a surrogate measure of BMD for the candidate mixture when the two mixtures are determined to be sufficiently similar in dose-response.
Guidelines are developed that enable the researcher to evaluate the performance of the proposed similarity measures.
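The basic shape of a Euclidean-distance similarity measure can be sketched as a distance between mean dose-response values evaluated on a common dose grid, compared against a user-chosen similarity bound (responses and bound here are hypothetical; the thesis develops four specific measures, not reproduced here):

```python
import math

def euclidean_distance(ref_response, cand_response):
    """Euclidean distance between reference and candidate mean responses
    evaluated at the same doses."""
    return math.sqrt(sum((r - c) ** 2
                         for r, c in zip(ref_response, cand_response)))

def sufficiently_similar(ref_response, cand_response, bound):
    """Declare sufficient similarity in dose-response if the distance
    falls below the pre-specified bound."""
    return euclidean_distance(ref_response, cand_response) < bound

# Hypothetical mean responses at shared doses 0, 1, 2, 4 for two mixtures.
ref = [0.0, 1.2, 2.5, 4.8]
cand = [0.1, 1.1, 2.7, 4.6]
d = euclidean_distance(ref, cand)
```

When the distance is below the bound, inference such as the BMD of the reference mixture can serve as a surrogate for the candidate.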
39

GIACHIN, RICCA ELENA. « Essays in economics of happiness ». Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2009. http://hdl.handle.net/2108/207782.

Abstract:
Chapter 1 focuses on the relation between social leisure and subjective well-being. In the empirical literature it is generally found that social leisure is positively correlated with life satisfaction. We ask whether this association captures a genuine causal effect of social leisure time on subjective well-being, using panel data from the German Socio-Economic Panel (GSOEP) 1984 - 2007. The availability of multiple observations per individual allows us to use fixed-effects estimation, which takes care of time-invariant personal traits and omitted variables. This estimation strategy solves only part of the endogeneity issues that bias our coefficient for social leisure, so we then adopt an instrumental-variables estimation. Our identification strategy exploits the change in social leisure brought about by retirement. However, individual retirement directly influences subjective well-being; we therefore instrument social leisure with the ratio of retired people in the sample by year and geographic location. Our results show a gendered difference in the impact of this ratio on social life. Exploiting this gender heterogeneity leads to a successful instrumentation of social leisure. We can therefore conclude that social leisure has a positive causal effect on life satisfaction. Chapter 2 addresses the subjective well-being of migrants and diplomatic relations. In particular, the paper attempts to establish the value of good relationships between countries by considering their effects on a group of individuals who are arguably intimately affected by them: immigrants. We appeal to an index of conflict/cooperation constructed by experts in International Relations and currently used to carry out quantitative analysis of events data.
The index is an annual weighted sum of news items occurring between countries according to their content of conflict and cooperation, as established by a panel of experts in the field. This index is matched to a sample of immigrants in Germany drawn from the GSOEP data. The index of bilateral relations thus exhibits both time-series and cross-section variation and allows us to use a linear fixed-effects estimation method. We find that good relations are positively and significantly correlated with immigrant life satisfaction, especially when we downplay low-value news events. This significant effect is much stronger for immigrants who have been in Germany longer and who expect to stay there forever. This is consistent with good relations directly affecting the quality of immigrants' lives in the host country, but is not consistent with assimilation. In order to evaluate the economic significance of our finding, we finally compute the compensating surplus of the index of international relations. There is thus a significant value to diplomacy: good relationships between home and host countries generate significant well-being externalities for those who live abroad. Chapter 3 addresses the relation between children and life satisfaction, an issue insufficiently explored in the happiness literature. Empirical analyses of the determinants of life satisfaction often include the number of children living in the household in the standard set of socio-demographic explanatory variables, together with household disposable income (often not corrected for household size). In this way, the estimated children's coefficient does not fully discriminate between the monetary and non-monetary impact of children in the household. In our paper, we compare results obtained by correcting income with different equivalence scales.
Indeed, equivalence scales are intended to measure the variation in income needed to bring households of different compositions to the same welfare level; the main arguments revolve around economies of scale in household formation. Our empirical analysis is based on the West and East subsamples of the GSOEP 1984 - 2007. We find that when economies of scale are assumed to be perfect (i.e. household size and composition do not reduce the enjoyment of available income), children living in the household negatively affect the life satisfaction of adults. Adopting less than perfect economies of scale in the household shifts the children's coefficient from negative to positive and significant. We further reject slope homogeneity, as we find strong differences across genders and regions in the impact of children living in the household. We show that the positive "non-pecuniary" effect of children is stronger for men, for households at or below median income and, most of all, for East Germans. We interpret these subsample results as driven by heterogeneous opportunity costs and cultural traits.
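The equivalence-scale correction at the heart of the chapter can be sketched with the usual power scale, where an elasticity parameter interpolates between perfect economies of scale (elasticity 0) and none at all (elasticity 1); the numbers below are hypothetical and the power scale is one common choice among the scales the chapter compares:

```python
def equivalised_income(household_income, household_size, elasticity):
    """Equivalised income under a power equivalence scale:
    income divided by household_size ** elasticity.
    elasticity = 0: perfect economies of scale (size has no effect);
    elasticity = 1: no economies of scale (pure per-capita income)."""
    return household_income / household_size ** elasticity

# Hypothetical household: income 3000, four members.
income, size = 3000.0, 4
perfect = equivalised_income(income, size, elasticity=0.0)
square_root = equivalised_income(income, size, elasticity=0.5)
per_capita = equivalised_income(income, size, elasticity=1.0)
```

Raising the elasticity lowers the income attributed to larger households, which is what lets the children's coefficient in the life-satisfaction regression absorb less of the monetary cost of children.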
40

CARNEIRO, JANETE C. G. G. « Contribuicao para avaliacao critica da radioprotecao por meio da analise retrospectiva das doses associadas ao trabalho com fontes nao seladas de iodo-131 ». Repositório Institucional do IPEN, 1998. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10724.

Abstract:
Thesis (Doctorate)
IPEN/T
Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
41

Glenn, L. Lee, et Jeff R. Knisley. « Use of Eigenslope to Estimate Fourier Coefficients for Passive Cable Models of the Neuron ». Digital Commons @ East Tennessee State University, 1997. https://dc.etsu.edu/etsu-works/7540.

Abstract:
Boundary conditions for the cable equation - such as voltage-clamped or sealed cable ends, branchpoints, somatic shunts, and current clamps - result in multi-exponential series representations of the voltage or current. Each term in the series expansion is characterized by a decay rate (eigenvalue) and an initial amplitude (Fourier coefficient). The eigenvalues are determined numerically and the Fourier coefficients are subsequently given by the residues at the eigenvalues of the Laplace transform of the solution. In this paper, we introduce an alternative method for estimating the Fourier coefficients which works for all types of boundary conditions and is practical even when analytic expressions for the Fourier coefficients become intractable. It is shown that terms in the analytic expressions for the Fourier coefficients result from derivatives of the equation for the eigenvalues, and that simple numerical estimates for the amplitude coefficients are easily derived by replacing analytical derivatives by numerical eigenslope. The physical quantity represented by the slope is identified as effective neuron capacitance.
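The core numerical idea, replacing the analytic derivative in a residue formula with a finite-difference "eigenslope", can be sketched on a toy transform F(s) = N(s)/D(s) with a known simple pole; the cable-specific characteristic equations are omitted, and D here is a hypothetical stand-in for the equation whose roots are the eigenvalues:

```python
def residue_by_eigenslope(numerator, denominator, pole, h=1e-6):
    """Residue of numerator(s)/denominator(s) at a simple pole:
    N(pole) / D'(pole), with D' estimated by a central difference
    (the numerical 'eigenslope') instead of an analytic derivative."""
    slope = (denominator(pole + h) - denominator(pole - h)) / (2 * h)
    return numerator(pole) / slope

# Toy transform F(s) = 1 / ((s + 1)(s + 2)): simple pole at s = -1,
# where the exact residue is 1 / D'(-1) = 1.
D = lambda s: (s + 1) * (s + 2)
N = lambda s: 1.0
coef = residue_by_eigenslope(N, D, pole=-1.0)
```

In the cable setting the poles are the numerically determined eigenvalues and the residues are the Fourier coefficients, so the same finite-difference step sidesteps intractable analytic expressions.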
42

Harlé, Flore. « Détection de ruptures multiples dans des séries temporelles multivariées : application à l'inférence de réseaux de dépendance ». Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT043/document.

Abstract:
This thesis presents a method for the multiple change-points detection in multivariate time series, and exploits the results to estimate the relationships between the components of the system. The originality of the model, called the Bernoulli Detector, relies on the combination of a local statistics from a robust test, based on the computation of ranks, with a global Bayesian framework. This non parametric model does not require strong hypothesis on the distribution of the observations. It is applicable without modification on gaussian data as well as data corrupted by outliers. The detection of a single change-point is controlled even for small samples. In a multivariate context, a term is introduced to model the dependencies between the changes, assuming that if two components are connected, the events occurring in the first one tend to affect the second one instantaneously. Thanks to this flexible model, the segmentation is sensitive to common changes shared by several signals but also to isolated changes occurring in a single signal. The method is compared with other solutions of the literature, especially on real datasets of electrical household consumption and genomic measurements. These experiments enhance the interest of the model for the detection of change-points in independent, conditionally independent or fully connected signals. The synchronization of the change-points within the time series is finally exploited in order to estimate the relationships between the variables, with the Bayesian network formalism. By adapting the score function of a structure learning method, it is checked that the independency model that describes the system can be partly retrieved through the information given by the change-points, estimated by the Bernoulli Detector
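The rank-based local statistic at the heart of this approach can be illustrated with a much simpler, non-Bayesian sketch: scan candidate split points and keep the one maximizing the absolute Wilcoxon rank-sum statistic. This single-series illustration is only in the spirit of the thesis; the Bernoulli Detector itself embeds such rank tests in a Bayesian model over multiple series.

```python
import numpy as np
from scipy.stats import ranksums

def rank_changepoint(x, trim=5):
    """Return the split point maximizing the absolute Wilcoxon rank-sum
    z-statistic between the two segments (a simplified, single-series
    sketch of a rank-based change-point statistic)."""
    best_t, best_z = None, -np.inf
    for t in range(trim, len(x) - trim):   # keep both segments non-trivial
        z = abs(ranksums(x[:t], x[t:]).statistic)
        if z > best_z:
            best_t, best_z = t, z
    return best_t, best_z

# A mean shift at index 60 is recovered without any Gaussian assumption.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(3.0, 1.0, 40)])
t_hat, z_hat = rank_changepoint(x)
```

Because only ranks enter the statistic, the same scan is unaffected by monotone transformations of the data and is robust to heavy-tailed noise.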
43

Maggis, M. « ON QUASICONVEX CONDITIONAL MAPS. DUALITY RESULTS AND APPLICATIONS TO FINANCE ». Doctoral thesis, Università degli Studi di Milano, 2010. http://hdl.handle.net/2434/150201.

Abstract:
Motivated by many financial insights, we provide dual representation theorems for quasiconvex conditional maps defined on vector spaces or modules and taking values in sets of random variables. These results match the standard dual representation for quasiconvex real-valued maps provided by Penot and Volle. As a financial byproduct, we apply this theory to the case of dynamic certainty equivalents and conditional risk measures.
44

Kassir, Wafaa. « Approche probabiliste non gaussienne des charges statiques équivalentes des effets du vent en dynamique des structures à partir de mesures en soufflerie ». Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1116/document.

Abstract:
In order to estimate the equivalent static wind loads that produce the extreme quasi-static and dynamical responses of structures submitted to the random unsteady pressure field induced by wind effects, a new probabilistic method is proposed. This method allows for computing the equivalent static wind loads for structures with complex aerodynamic flows, such as stadium roofs, for which the pressure field is non-Gaussian and for which the dynamical response of the structure cannot simply be described using only the first elastic modes (but requires a good representation of the quasi-static responses). Usually, wind-tunnel measurements of the unsteady pressure field applied to a structure with complex geometry are not sufficient for constructing a statistically converged estimate of the extreme values of the dynamical responses. Such convergence is necessary for estimating the equivalent static loads that reproduce the extreme dynamical responses induced by wind effects, taking into account the non-Gaussianity of the random unsteady pressure field.
In this work, (1) a generator of realizations of the non-Gaussian unsteady pressure field is constructed using realizations measured in a boundary-layer wind tunnel; this generator, based on a polynomial chaos representation, allows for generating a large number of independent realizations in order to obtain convergence of the extreme-value statistics of the dynamical responses; (2) a reduced-order model with quasi-static acceleration terms is constructed, which accelerates the convergence of the structural dynamical responses using only a small number of elastic modes of the structure; (3) a novel probabilistic method is proposed for estimating the equivalent static wind loads induced by wind effects on complex structures described by finite element models, preserving the non-Gaussian property and without introducing the concept of response envelopes. The proposed approach is experimentally validated on a relatively simple application and is then applied to a stadium roof structure for which experimental measurements of unsteady pressures were performed in a boundary-layer wind tunnel.
45

Шмирьов, Володимир Федорович, et Volodymyr Fedorovych Shmyrov. « Наукові основи проектування та створення енергозалежних систем літаків транспортної категорії ». Thesis, Національний авіаційний університет, 2020. https://er.nau.edu.ua/handle/NAU/44724.

Abstract:
The dissertation is devoted to the development of a scientific basis for the design and development of energy-dependent systems and complexes of modern transport-category aircraft, optimized by equivalent mass and including air intakes. It establishes a scientific basis for the design of aircraft anti-icing systems, from determining the protection zones and the required power consumption through to the design of anti-icers and air ducts for the entire aircraft operating envelope. The current state of the art in the design of such systems is analyzed: to keep new aircraft competitive, energy-dependent systems and complexes must offer high fuel efficiency, environmental performance and reliability, provide improved passenger comfort and safety, and have low operating costs. An important step after engine selection is finding ways to preserve its power, which involves designing the nacelle to minimize energy losses due to external aerodynamics and along the gas-dynamic path; the aircraft systems most closely coupled to the engine energetically are the air preparation and distribution system, the air conditioning system, the ice protection system, the power supply system, and the hydraulic systems. The systems and processes considered are complex, and their study requires a systems approach accounting for multiple criteria and factors; mathematical models of them are built from multifactor numerical experiments using multiple regression analysis, whose standard assumptions must hold with respect to the modeled reality for the resulting models to be sound. Examples are given of applying the developed design basis to the structural analysis of aircraft modifications involving engine replacement. The aircraft energy balance, obtained at the design stage and studied during testing and operation, makes it possible to assess a modification soundly, both when replacing the power plant and when replacing the main elements of the energy-dependent aircraft systems. The assessment reduces to analyzing the aerodynamic features of the modification associated with the nacelle design, changes to the ice protection elements, and the addition of new air intakes in the air systems, since for a given aircraft the routing of the systems remains unchanged and the on-board energy demand, as a rule, does not change.
46

Hseih, Tzung-Cheng, et 謝宗成. « Statistical Assessment of Therapeutic Equivalence Based on Paired Binary Data ». Thesis, 1997. http://ndltd.ncl.edu.tw/handle/05292647589635046705.

Abstract:
Master's thesis
National Cheng Kung University
Department of Statistics
85
We consider statistical inference for assessment of therapeutic equivalence between two diagnostic procedures based on paired binary clinical endpoints for the comparison of diagnostic efficacy with respect to the gold standard in diagnosis of a certain disease. The hypothesis of therapeutic equivalence is formulated as the interval hypothesis based on the difference between correlated proportions of correct diagnosis. We propose an asymptotic two one-sided tests procedure for the interval hypothesis, which is also shown to be operationally equivalent to the asymptotic confidence interval approach. In addition, an approximate formula for sample size determination is suggested. We also propose an exact tests procedure for small samples, and provide a list of critical values for a symmetric equivalence limit of 0.2. A simulation study was conducted to empirically examine the size and power of the proposed asymptotic tests procedures, and an additional simulation study verified whether the critical values for the exact tests adequately control the size. A numerical example is provided to illustrate the proposed procedures. We make some final remarks about the critical values for the exact test and other formulations of the equivalence limits for the interval hypothesis. In addition, a procedure for summarizing the equivalence results over strata is suggested.
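The asymptotic two one-sided tests (TOST) idea for paired binary data can be sketched as follows, using the standard large-sample variance of a difference of correlated proportions; the exact variance formula and critical values in the thesis may differ.

```python
from math import sqrt
from scipy.stats import norm

def tost_paired_binary(b, c, n, margin=0.2, alpha=0.05):
    """Asymptotic two one-sided tests for equivalence of two correlated
    proportions of correct diagnosis (a sketch, not the thesis's exact
    procedure).

    b, c   : discordant-pair counts (only procedure 1 correct / only
             procedure 2 correct)
    n      : number of matched pairs
    margin : symmetric equivalence limit (0.2 as in the thesis's tables)
    """
    d = (b - c) / n                                # estimated difference
    se = sqrt(b + c - (b - c) ** 2 / n) / n        # SE of a paired difference
    z_lower = (d + margin) / se                    # tests H0: d <= -margin
    z_upper = (d - margin) / se                    # tests H0: d >= +margin
    p_value = max(1 - norm.cdf(z_lower), norm.cdf(z_upper))
    return p_value, p_value < alpha                # equivalence iff both reject
```

With, say, b=12 and c=10 discordant pairs out of n=200, the estimated difference is 0.01 and equivalence at the 0.2 margin is comfortably declared; rejecting both one-sided hypotheses at level alpha corresponds to the (1 - 2*alpha) confidence interval lying inside (-margin, margin).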
47

Fan, Hsin-Yi, et 范欣怡. « Statistical Evaluation of Equivalence and Non-inferiority for Binary Data ». Thesis, 2004. http://ndltd.ncl.edu.tw/handle/25785189410927575993.

Abstract:
Master's thesis
National Cheng Kung University
Department of Statistics (Master's and Doctoral Program)
92
In recent years, equivalence and non-inferiority studies have been applied in clinical trials. For equivalence or non-inferiority trials, the goal is to show that the new treatment maintains a treatment effect similar to that of the standard treatment, to within a pre-specified margin. New treatments are developed because they offer better safety, lower toxicity, easier administration, or lower cost. Parallel designs and matched-pair designs generate independent and paired binary endpoints, respectively. Based on binary endpoints, three criteria have been proposed to evaluate equivalence or non-inferiority: the difference in proportions, the ratio of proportions, and the odds ratio. We conducted a simulation to compare the performance of methods based on these three measures in terms of size and power. In addition, we derived a new test for the odds ratio based on the restricted maximum likelihood estimator (RMLE); its properties are also investigated by simulation. A numerical example illustrates the proposed method.
48

Wang, Hui-Hsuan, et 王慧喧. « Evaluation of Statistical Methods for Equivalence and Non-inferiority Based on the Kolmogorov-Smirnov Statistics ». Thesis, 2005. http://ndltd.ncl.edu.tw/handle/57452532513592457512.

Abstract:
Master's thesis
National Cheng Kung University
Department of Statistics (Master's and Doctoral Program)
93
In recent years, more and more work on ROC curve indices has focused on equivalence or non-inferiority testing, for example when comparing the diagnostic efficacy of a non-invasive alternative diagnostic procedure to an invasive method. If the non-invasive alternative is equivalent to the invasive method, we may prefer it for its easier administration, better safety profile, or reduced cost. In this thesis, equivalence/non-inferiority tests based on four methods are compared: a non-parametric method, a method based on the standardized difference, a method based on the Kolmogorov-Smirnov statistic, and a bootstrap method. A simulation study was conducted to empirically investigate the size and power of the four methods for various combinations of distributions.
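One crude way to turn the Kolmogorov-Smirnov distance into an equivalence check, in the spirit of the methods compared above, is to bootstrap an upper confidence bound on the two-sample KS statistic and declare equivalence when it falls below the margin. This is a sketch only, not the thesis's exact procedure, and the margin of 0.15 is an arbitrary illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_equivalence(x, y, margin=0.15, alpha=0.05, n_boot=300, seed=0):
    """Bootstrap an upper (1 - alpha) confidence bound on the two-sample
    Kolmogorov-Smirnov distance; declare equivalence when the bound is
    below the margin. A sketch, not the thesis's exact method."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)  # resample each group
        yb = rng.choice(y, size=len(y), replace=True)
        stats.append(ks_2samp(xb, yb).statistic)
    upper = float(np.quantile(stats, 1 - alpha))
    return upper, upper < margin
```

For two large samples drawn from the same distribution, the bootstrapped upper bound on the KS distance is small and equivalence is declared; a genuine location shift inflates the bound past the margin.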
49

Balasubramanian, Vijay. « Equivalence and Reduction of Hidden Markov Models ». 1993. http://hdl.handle.net/1721.1/6801.

Abstract:
This report studies when and why two Hidden Markov Models (HMMs) may represent the same stochastic process. HMMs are characterized in terms of equivalence classes whose elements represent identical stochastic processes. This characterization yields polynomial time algorithms to detect equivalent HMMs. We also find fast algorithms to reduce HMMs to essentially unique and minimal canonical representations. The reduction to a canonical form leads to the definition of 'Generalized Markov Models' which are essentially HMMs without the positivity constraint on their parameters. We discuss how this generalization can yield more parsimonious representations of stochastic processes at the cost of the probabilistic interpretation of the model parameters.
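The simplest source of the equivalence the report studies is state relabeling: two syntactically different parameterizations can generate the identical stochastic process. The sketch below checks this with the standard forward algorithm on a permuted copy of an HMM (the report's algorithms detect equivalence in the general, non-trivial case as well).

```python
import itertools
import numpy as np

def seq_prob(pi, A, B, obs):
    """Forward algorithm: probability that the HMM (pi, A, B) emits obs."""
    alpha = pi * B[:, obs[0]]          # initial state x first emission
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then emit
    return float(alpha.sum())

# An HMM and a state-permuted copy: distinct parameter matrices,
# identical stochastic process.
pi1 = np.array([0.6, 0.4])
A1 = np.array([[0.7, 0.3],
               [0.2, 0.8]])
B1 = np.array([[0.9, 0.1],
               [0.3, 0.7]])
perm = [1, 0]
pi2, A2, B2 = pi1[perm], A1[np.ix_(perm, perm)], B1[perm]

# The two models agree on the probability of every output sequence.
for obs in itertools.product([0, 1], repeat=4):
    assert np.isclose(seq_prob(pi1, A1, B1, obs), seq_prob(pi2, A2, B2, obs))
```

Checking agreement on all sequences up to a bounded length is the brute-force version of what the report's polynomial-time equivalence algorithms accomplish efficiently.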
50

Hsieh, Yu-Hung, et 謝裕弘. « Statistical Evaluation of Equivalence Test Based on the Genetic Diversity Index ». Thesis, 2010. http://ndltd.ncl.edu.tw/handle/98314974235622502544.

Abstract:
Master's thesis
National Cheng Kung University
Department of Statistics (Master's and Doctoral Program)
98
The United Nations marked 2010 as the International Year of Biodiversity, celebrating the variety of life on Earth. Biologists often define biodiversity as the "totality of genes, species, and ecosystems of a region," which makes its importance clear. This study is concerned with equivalence for multi-categorical data and focuses on gene comparison via genetic diversity indices, especially the nucleotide diversity index. When a non-negligible number of genetic sequences remain unseen, the traditional diversity estimator ignores the missing gene types and thus underestimates the gene diversity. We therefore combine the Horvitz-Thompson concept with sample coverage to obtain a better genetic diversity estimator, and use this estimator with the two one-sided tests method to test equivalence. A simulation study was conducted to empirically investigate the size and power of the proposed methods. In addition, a bootstrap-based approach is proposed for small sample sizes and compared with the two one-sided tests in terms of size and power.
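The coverage-plus-Horvitz-Thompson idea can be sketched on the Gini-Simpson index (1 - sum of squared frequencies): shrink the observed frequencies by the estimated sample coverage, then reweight by the probability that each type appears in a sample at all. This is a Chao-Shen-style illustration of the ingredients named in the abstract, not the thesis's exact nucleotide-diversity estimator.

```python
import numpy as np
from collections import Counter

def coverage_adjusted_gini_simpson(sample):
    """Gini-Simpson diversity with a coverage / Horvitz-Thompson
    correction for unseen types (an illustrative sketch; the thesis
    applies similar ideas to nucleotide diversity)."""
    counts = np.array(list(Counter(sample).values()), dtype=float)
    n = counts.sum()
    coverage = 1.0 - (counts == 1).sum() / n   # 1 - (singletons / n)
    p = coverage * counts / n                  # coverage-adjusted frequencies
    inclusion = 1.0 - (1.0 - p) ** n           # P(type observed at least once)
    return float(1.0 - np.sum(p ** 2 / inclusion))
```

With no singletons the coverage estimate is 1 and the correction vanishes; for a balanced sample over four types the estimate is close to the true value 0.75. When rare types are present, the coverage term pulls the estimate away from the naive plug-in value that would otherwise understate diversity.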
