Dissertations / Theses on the topic 'Distribution analysis method'

Consult the top 50 dissertations / theses for your research on the topic 'Distribution analysis method.'

You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Kasolis, Fotios. "The Material Distribution Method : Analysis and Acoustics applications." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-92538.

Full text
Abstract:
For the purpose of numerically simulating continuum mechanical structures, different types of material may be represented by the extreme values {ε, 1}, where 0 < ε ≪ 1, of a varying coefficient α in the governing equations. The parameter α is not allowed to vanish in order for the equations to be solvable, which means that the exact conditions are approximated. For example, for linear elasticity problems, presence of material is represented by the value α = 1, while α = ε provides an approximation of void, meaning that material-free regions are approximated with a weak material. For acoustics applications, the value α = 1 corresponds to air and α = ε to an approximation of sound-hard material using a dense fluid. Here we analyze the convergence properties of such material approximations as ε → 0, and we employ this type of approximation to perform design optimization. In Paper I, we carry out boundary shape optimization of an acoustic horn. We suggest a shape parameterization based on a local, discrete curvature combined with a fixed mesh that does not conform to the generated shapes. The values of the coefficient α, which enters in the governing equation, are obtained by projecting the generated shapes onto the underlying computational mesh. The optimized horns are smooth and exhibit good transmission properties. Due to the choice of parameterization, the smoothness of the designs is achieved without imposing severe restrictions on the design variables. In Paper II, we analyze the convergence properties of a linear elasticity problem in which void is approximated by a weak material. We show that the error introduced by the weak material approximation, after a finite element discretization, is bounded by terms that scale as ε and ε^(1/2) h^s, where h is the mesh size and s depends on the order of the finite element basis functions. In addition, we show that the condition number of the system matrix scales inversely proportionally to ε, and we also construct a left preconditioner that yields a system matrix with a condition number independent of ε. In Paper III, we observe that the standard sound-hard material approximation with α = ε gives rise to ill-conditioned system matrices at certain wavenumbers due to resonances within the approximated sound-hard material. To cure this defect, we propose a stabilization scheme that makes the condition number of the system matrix independent of the wavenumber. In addition, we demonstrate that the stabilized formulation performs well in the context of design optimization of an acoustic waveguide transmission device. In Paper IV, we analyze the convergence properties of a wave propagation problem in which sound-hard material is approximated by a dense fluid. To avoid the occurrence of internal resonances, we generalize the stabilization scheme presented in Paper III. We show that the error between the solution obtained using the stabilized sound-hard material approximation and the solution to the problem with exactly modeled sound-hard material is bounded proportionally to ε.
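The scaling statements above can be summarized compactly. This is a minimal sketch in the usual notation of the material distribution literature; the symbols (α for the material coefficient, ε for the weak-material value, h for the mesh size, u and u_{ε,h} for the exact and discretized approximate solutions) follow the abstract, while the generic constants C₁, C₂ and the specific norm are assumptions for illustration only:

```latex
\alpha \in \{\varepsilon,\,1\}, \qquad 0 < \varepsilon \ll 1,
\qquad
\|u - u_{\varepsilon,h}\| \;\lesssim\; C_1\,\varepsilon + C_2\,\varepsilon^{1/2}\,h^{s},
\qquad
\kappa(A_\varepsilon) \;\sim\; \varepsilon^{-1},
\qquad
\kappa(P^{-1}A_\varepsilon) \;=\; \mathcal{O}(1).
```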
APA, Harvard, Vancouver, ISO, and other styles
2

Karadag, Rukiye. "Temperature Distribution In Power Transformers." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12615162/index.pdf.

Full text
Abstract:
As in all other electrical equipment, it is essential to estimate the temperature distribution in transformer components at the design stage and during operation, since temperatures above the thermal limits of these components might seriously damage them. Thermal models are used to predict this vital information prior to actual operation. In this study, a three-dimensional model based on the Finite Element Method (FEM) is proposed to estimate the temperature distribution in a three-phase, SF6 gas-insulated and gas-cooled power transformer. The model can successfully predict the temperature distribution at specific discretized locations in the transformer.
APA, Harvard, Vancouver, ISO, and other styles
3

Erturk, Mehmet. "Efficiency Analysis Of Turkish Natural Gas Distribution Companies By Using Dea Method." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610796/index.pdf.

Full text
Abstract:
The history of natural gas in Turkey started in the 1970s with the use of domestic gas in cement factories. However, natural gas began penetrating the energy market in the 1980s with its use in Ankara. In the following years, the number of cities using natural gas reached six. A new era then started with the enactment of the Natural Gas Market Law in 2001, and 53 distribution tenders had been realized by the Energy Market Regulatory Authority by 2009. This thesis analyzes the performance of 38 Turkish natural gas distribution companies by using a non-parametric method, Data Envelopment Analysis. The results are used to determine the most appropriate model specification, to detect the important criteria affecting efficiency levels, and to find the common characteristics of the most inefficient firms. The results show that public firms compared to private firms, non-tender firms compared to tender firms, large firms compared to small firms, and firms operating in more developed areas compared to firms operating in underdeveloped areas utilize resources and manage costs more efficiently. However, we cannot reach a definite conclusion about the comparison of old firms and new firms. Lastly, we try to detect the common characteristics of the most inefficient firms and find that the major problem is low delivery amount.
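For readers unfamiliar with Data Envelopment Analysis, the basic computation is a linear program per firm. The sketch below is a hedged, generic input-oriented CCR (constant returns to scale) model solved with SciPy, not the thesis's model specification; the input/output matrices and their interpretation (network length, operating cost, gas delivered) are illustrative placeholders.

```python
# Hedged sketch: input-oriented CCR DEA efficiency of one decision-making unit (DMU)
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency score of DMU o; X is (m inputs x n DMUs), Y is (s outputs x n DMUs)."""
    m, n = X.shape
    s, _ = Y.shape
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta
    c = np.zeros(n + 1)
    c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(m):      # sum_j lambda_j * x_ij <= theta * x_io
        A_ub.append(np.concatenate(([-X[i, o]], X[i, :])))
        b_ub.append(0.0)
    for r in range(s):      # sum_j lambda_j * y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -Y[r, :])))
        b_ub.append(-Y[r, o])
    bounds = [(0.0, None)] * (n + 1)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds,
                  method="highs")
    return res.x[0]

# Toy data: 2 inputs (network length, operating cost), 1 output (gas delivered)
X = np.array([[100., 150., 120.], [40., 60., 55.]])
Y = np.array([[300., 360., 310.]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(X.shape[1])])
```

A score of 1 marks a firm on the efficient frontier; scores below 1 indicate the proportional input reduction needed to reach it.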
APA, Harvard, Vancouver, ISO, and other styles
4

Bahai, Hamid R. S. "A hybrid method for load, stress and fatigue analysis of drill string screw connectors." Thesis, Queen Mary, University of London, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mooney, Jayne. "The prevalence and social distribution of domestic violence : an analysis of theory and method." Thesis, Middlesex University, 1994. http://eprints.mdx.ac.uk/6383/.

Full text
Abstract:
Domestic violence is recognised as an area that requires more detailed research, particularly on the general population. Indeed, the lack of authoritative statistics on the extent of domestic violence is considered to restrict the development of preventative or remedial action to alleviate the problem. This thesis is concerned, therefore, with the development of a methodology to generate data on the incidence and prevalence of domestic violence, the relationship of these data to current theory, and the implications for policy. The main research component involved a victimisation survey adapted to deal with the specific problems of researching domestic violence. It utilized sensitive interviewing techniques, carefully worded questionnaires, a self-complete questionnaire (the 'piggy-back' method) and vignettes detailing 'conflict' situations which could lead to violence. 571 women and 429 men were interviewed, which makes it the largest survey on domestic violence to be conducted in Great Britain. A qualitative component was additionally incorporated into the methodology in order to fully explore the experience of domestic violence. The primary focus of the research was on women's experiences of violence from husbands and boyfriends, including ex-partners, although additional information was collected on other forms of domestic and non-domestic violence against both men and women. The project investigated the extent of domestic violence; its variation by subgroup; the nature, context and impact of the violence; definitions; levels and patterns of reporting to the various agencies and satisfaction with the response; the relationship of domestic to stranger violence; the location of domestic and non-domestic violence; and the gendered distribution of violence. The examination of so many areas could not have been achieved without the use of a multiplicity of methods. This thesis, however, deals not only with the development of the methodology and the subsequent findings arising from the research project. It also analyses four major criminological theories (classicism, including the new administrative criminology; positivism; feminism and left realism) in relation to domestic violence. It delineates the main principles of each theory, details how it attempts to explain, research and tackle domestic violence, and identifies both strengths and weaknesses. Furthermore, the empirical data generated by the research project enable the testing of hypotheses derived from the theoretical literature about the nature of violence, particularly with respect to its social and spatial patterning. On examination, the approaches of radical feminism and left realism are singled out as having the greatest purchase on the phenomenon, and a synthesis of these positions is demarcated: a feminist realism within criminology. Finally, both the research findings and the theoretical discussion inform the policy recommendations. Both long-term and short-term initiatives are considered, and an emphasis is placed on the development of policy that is both multi-agency and woman-centred.
APA, Harvard, Vancouver, ISO, and other styles
6

ALI, MOLLA MOHAMMAD. "A PROBABILISTIC ANALYSIS OF THE DISTRIBUTION OF COLLAPSING SOIL IN TUCSON USING KRIGING METHOD." Diss., The University of Arizona, 1987. http://hdl.handle.net/10150/184180.

Full text
Abstract:
An analytical investigation was carried out to determine the nature and extent of the variability of selected collapse criteria and collapse-related soil parameters, both areally and with depth, within the city of Tucson. Collapse-related soil parameters for about 1000 sample points from over 400 borehole locations throughout the Tucson basin were collected from several consulting geotechnical engineering offices of the city. Statistical analysis on seven data sets corresponding to six different depth increments below the surface showed high dispersion tendencies expressed by the value of the coefficient of variation (cov). The value of cov was found to increase linearly with depth for most criteria and parameters. All the collapse criteria and collapse-related soil parameters were found to follow the Gamma distribution function except in situ dry density (γd) and porosity (n₀), which were found to follow the Weibull distribution function. A polynomial regression model developed for the collapse parameter Cp showed that it varies with depth almost linearly. A stepwise regression analysis revealed that the collapse parameter Cp is strongly correlated with γd and in situ moisture content, w₀. Factor analysis validates this finding by producing two strong factors, γd and in situ degree of saturation, s₀, which described almost 80% of the variation encountered in the data. The application of geostatistical concepts was found to be feasible in analyzing the collapse criteria and collapse-related soil parameters. Almost all criteria and parameters were strongly dependent spatially. A spherical variogram model was found to be appropriate for them. The method of Ordinary Kriging provided an unbiased estimation of a parameter at an unsampled location with known estimation variance. The method of Indicator Kriging was used to develop contour plots for the various data sets that showed the probability that the value of a certain parameter is above or below a critical level. These contour plots can be used to identify the areas within the City of Tucson that contain soils having a low-, medium- or high-collapse potential. The ability to predict the occurrence of such soils with a known degree of certitude is invaluable to planners, developers and geotechnical engineers.
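The distribution-fitting and dispersion analysis described above can be illustrated in a few lines. This is a hedged sketch with synthetic stand-in data, not the dissertation's data or code; the variable name dry_density and the distribution parameters are assumptions for demonstration.

```python
# Fit Gamma and Weibull models to a collapse-related soil parameter and report
# the coefficient of variation (cov) plus a simple goodness-of-fit comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dry_density = rng.gamma(shape=20.0, scale=5.0, size=200)   # synthetic stand-in for gamma_d

cov = dry_density.std(ddof=1) / dry_density.mean()          # dispersion measure used in the study
gamma_params = stats.gamma.fit(dry_density, floc=0)         # (shape, loc, scale)
weibull_params = stats.weibull_min.fit(dry_density, floc=0)

# Kolmogorov-Smirnov statistic for each candidate model (smaller = better fit)
ks_gamma = stats.kstest(dry_density, "gamma", args=gamma_params).statistic
ks_weibull = stats.kstest(dry_density, "weibull_min", args=weibull_params).statistic
print(f"cov={cov:.3f}  KS(gamma)={ks_gamma:.3f}  KS(weibull)={ks_weibull:.3f}")
```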
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Ji. "Cox Model Analysis with the Dependently Left Truncated Data." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/math_theses/88.

Full text
Abstract:
A truncated sample consists of realizations of a pair of random variables (L, T) subject to the constraint that L ≤ T. The major study interest with a truncated sample is to find the marginal distributions of L and T. Many studies have been done under the assumption that L and T are independent. We introduce a new way to specify a Cox model for a truncated sample, assuming that the truncation time is a predictor of T, which induces dependence between L and T. We develop an algorithm to obtain the adjusted risk sets and use the Kaplan-Meier estimator to estimate the marginal distribution of L. We further extend our method to a more practical situation, in which the Cox model includes other covariates associated with T. Simulation studies have been conducted to investigate the performance of the Cox model and the new estimators.
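To make the role of the adjusted risk sets concrete, the sketch below shows a generic Kaplan-Meier-type estimator for left-truncated data, where subject i only enters the risk set after its truncation time L_i. This is a standard illustration under independent truncation, not the thesis's estimator (which further adjusts the risk sets through a Cox model to handle dependence); the data arrays are invented placeholders.

```python
import numpy as np

def km_left_truncated(L, T, event):
    """Survival estimate for T from a left-truncated sample (L <= T)."""
    times = np.sort(np.unique(T[event == 1]))
    surv, S = [], 1.0
    for t in times:
        at_risk = np.sum((L <= t) & (T >= t))   # truncation-adjusted risk set
        d = np.sum((T == t) & (event == 1))     # events at time t
        if at_risk > 0:
            S *= 1.0 - d / at_risk
        surv.append((t, S))
    return surv

L = np.array([0.5, 1.0, 0.2, 2.0, 0.8])       # truncation (entry) times
T = np.array([3.0, 2.5, 1.7, 4.2, 2.9])       # observed event/censoring times
event = np.array([1, 1, 0, 1, 1])             # 1 = event observed, 0 = censored
print(km_left_truncated(L, T, event))
```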
APA, Harvard, Vancouver, ISO, and other styles
8

Tafesse, Solomon. "Physical characterization of coarse clasts with 3D image-analysis method : development, evaluation and application." Doctoral thesis, KTH, Teknisk geologi och geofysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101388.

Full text
Abstract:
This thesis presents a novel three-dimensional (3D) image-analysis method for characterizing the physical characteristics of coarse particles in the field, and introduces a new methodology for the total analysis of glacial till samples. The novel image-analysis method, called the GID method, is capable of determining the size, shape and surface texture of each individual clast analysed. Images of particles are taken in the field and the analysis is done in the laboratory. The GID method therefore makes it feasible to analyse a statistically representative, large sample in a short period; for poorly sorted sediments, such as till, a one-tonne sample is required if the analysis includes the cobble size. The capability of the GID method was demonstrated by studying coarse clasts (20-200 mm) from till. There is excellent agreement between the size distributions obtained from the GID method and from sieve analysis. The GID method results for size and shape parameters show high and very high repeatability. The particle angularity in the GID method has not been measured to an acceptable level; the repeatability test shows some variability. The new methodology for the total analysis of till applied the GID method at four different locations in Sweden. The total analysis included the 3D size and shape distribution of coarse particles coupled to the electrical resistivity, lithological distribution and magnetic susceptibility of the clasts. The results show clear differences between the till samples from the different sites.

APA, Harvard, Vancouver, ISO, and other styles
9

Ngunkeng, Grace. "Statistical Analysis of Skew Normal Distribution and its Applications." Bowling Green State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1370958073.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jia, Xiaoyuan. "The Application of Linear Superposition Method on Water Distribution Systems Analysis of Contaminant Intrusion Events." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1337889123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Kiyosugi, Koji. "Temporal and Spatial Analysis of Monogenetic Volcanic Fields." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4101.

Full text
Abstract:
Achieving an understanding of the nature of monogenetic volcanic fields depends on identification of the spatial and temporal patterns of volcanism in these fields, and their relationships to structures mapped in the shallow crust and inferred in the deep crust and mantle through interpretation of geochemical, radiometric and geophysical data. We investigate the spatial and temporal distributions of volcanism in the Abu Monogenetic Volcano Group, Southwest Japan. E-W elongated volcano distribution, which is identified by a nonparametric kernel method, is found to be consistent with the spatial extent of P-wave velocity anomalies in the lower crust and upper mantle, supporting the idea that the spatial density map of volcanic vents reflects the geometry of a mantle diapir. Estimated basalt supply to the lower crust is constant. This observation and the spatial distribution of volcanic vents suggest stability of magma productivity and essentially constant two-dimensional size of the source mantle diapir. We mapped conduits, dike segments, and sills in the San Rafael sub-volcanic field, Utah, where the shallowest part of a Pliocene magmatic system is exceptionally well exposed. The distribution of conduits matches the major features of dike distribution, including development of clusters and distribution of outliers. The comparison of San Rafael conduit distribution and the distributions of volcanoes in several recently active volcanic fields supports the use of statistical models, such as nonparametric kernel methods, in probabilistic hazard assessment for distributed volcanism. We developed a new recurrence rate calculation method that uses a Monte Carlo procedure to better reflect and understand the impact of uncertainties of radiometric age determinations on uncertainty of recurrence rate estimates for volcanic activity in the Abu, Yucca Mountain Region, and Izu-Tobu volcanic fields. Results suggest that the recurrence rates of volcanic fields can change by more than one order of magnitude on time scales of several hundred thousand to several million years. This suggests that magma generation rate beneath volcanic fields may change over these time scales. Also, recurrence rate varies more than one order of magnitude between these volcanic fields, consistent with the idea that distributed volcanism may be influenced by both the rate of magma generation and the potential for dike interaction during ascent.
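The two statistical ingredients named in the abstract (a nonparametric kernel estimate of vent spatial density and a Monte Carlo treatment of age uncertainty in recurrence-rate estimates) can be sketched briefly. This is a hedged illustration with synthetic coordinates and ages, not the study's data; the choice of a Gaussian kernel and of normally distributed age errors is an assumption.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# (1) Spatial density of volcanic vents (easting/northing in km, synthetic)
vents = rng.multivariate_normal([0, 0], [[25, 15], [15, 9]], size=60).T  # shape (2, 60)
density = gaussian_kde(vents)                 # 2-D Gaussian kernel estimate
print("vent density at field centre:", density([[0.0], [0.0]])[0])

# (2) Recurrence rate with radiometric-age uncertainty: ages +/- 1-sigma, in ka
ages = np.array([120., 310., 455., 640., 900.])
sigma = np.array([15., 20., 30., 25., 40.])
rates = []
for _ in range(10_000):
    sampled = rng.normal(ages, sigma)          # perturb ages within their uncertainty
    span = sampled.max() - sampled.min()
    rates.append((len(ages) - 1) / span)       # events per ka over the sampled span
print("recurrence rate, 5th-95th percentile:", np.percentile(rates, [5, 95]))
```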
APA, Harvard, Vancouver, ISO, and other styles
12

Fichtner, Jason J. "Distribution Tables and Federal Tax Policy: A Scoring Index as a Method for Evaluation." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/29422.

Full text
Abstract:
Distribution tables have become ubiquitous to the tax policy debates surrounding major legislative initiatives to change tax law at the federal level. The fairness of any proposed change to federal tax policy has become one of the most highlighted components of tax policy discussions. The presentation of tax data within distribution tables can hide or omit important information that is required in order to effectively evaluate the merits of any tax legislation. Many producers of distribution tables show only the information necessary to present their policy preferences in the best possible light. The different economic assumptions and presentations of data used by the various groups that release distribution tables have the inherent consequence of providing the public with numerous tables that are often used as political ammunition to influence and shape debate. The purpose of this research is to contribute to the tax policy research literature by exploring the limitations and biases inherent in specific designs of tax distribution tables and in specific methodological approaches to tax distribution analysis. This is done by means of a systematic examination of how different designs and methodologies provide an incomplete picture of a proposed change to federal tax policy. By comparing distribution tables as used by different groups to provide alternative perspectives of various tax proposals, the research shows how the use of tax distribution tables often provides misleading results about the impact of proposed tax legislation in order to influence and shape the issues surrounding a proposed change to federal tax policy. A method for evaluating tax distribution tables is proposed which highlights the deficiencies of design and methodology which characterize the present use of tax distribution tables. An index of questions is provided as part of this research project to serve as a new tool of policy analysis, an index termed the "Tax Distribution Table Scoring Index" (TDTSI). The TDTSI will assist in balancing the different perspectives presented via tax distribution tables by identifying the biases and limitations associated with different methodologies and presentations of data.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
13

Shiran, M. B. "An investigation of the mode of action of dye/paper method of recording intensity distribution in an ultrasound field." Thesis, University of Aberdeen, 1993. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU059239.

Full text
Abstract:
Ultrasound (the frequency range above 20 kHz) at intensities of 0-3.0 W cm⁻² is used in physical medicine to treat a variety of conditions (joint contractures, fibrosis and scarring, and severe pain due to disorders such as frozen shoulder and capsulitis). It is believed to produce a reduction in pain and swelling and to increase the movement of joints affected by diseases such as arthritis. It is also used to generate hyperthermia for the treatment of neoplastic tumours. The technique involves irradiating tissues with either continuous or pulsed-mode ultrasound, using the field from either a single or a multielement transducer. Geometrically similar ultrasound transducers, even from the same batch, can have significantly different efficiencies, and the resulting field distributions can also vary. Several pieces of information are required to characterize an ultrasound source: frequency, power output, spatial-average and spatial-peak intensity, and details of the ultrasonic field distribution. This application of ultrasound requires an experimental method to define the intensity distribution generated by a transducer or array of transducers. A number of different techniques have been used to record intensity distribution. These include sheets of liquid crystal on an absorbing surface, detection of signals using hydrophone probes, thermocouple probes, thermistor probes, and the Schlieren technique. The dye/paper method and the starch iodine plate have been developed for the observation of ultrasonic fields and evaluation of the source pattern.
APA, Harvard, Vancouver, ISO, and other styles
14

Gossman, Stephanie Mizzell. "A new proposed method of contingency ranking." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34667.

Full text
Abstract:
Security analysis of a power system requires a process called contingency analysis, which analyzes results from all possible single contingencies (i.e., outages) in the system. The process of contingency analysis requires the definition of a parameter that is used to monitor a certain aspect of the system, called a performance index. The performance index definitions used traditionally have been highly nonlinear, and the results have not accurately predicted the outcome of the performance index in some cases. These incorrect results are referred to as misrankings, since the contingency results are usually placed in order of severity so that the most severe cases are evident. This thesis considers a new definition of contingency ranking using a more linearized definition of the performance index. Both the new, proposed definition and the classic definition consider the current loading of circuits in the system as compared to their rated values. Specifically, the proposed definition measures the difference between the two quantities, while the more nonlinear definition uses their ratio, which is then raised to a higher power. A small, four-bus test system is used to demonstrate the benefits of the new, more linearized definition. The average percent error for all single line contingencies of the system decreased by over 9.5% using the proposed definition as compared to the previous one. This decrease in error allows this performance index to monitor a similar parameter (comparing current loading and current rating of the lines) and achieve a higher degree of accuracy. Further linearization of this proposed definition also shows a reduction in the average percent error by an additional 22%, so that when compared to the original, highly nonlinear definition, the average error is reduced by almost 30%. By linearizing the definition of the performance index, the results are more accurate and misrankings are less likely to occur in the security analysis process.
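The contrast between the two kinds of performance index can be illustrated numerically. The forms below are generic stand-ins (a ratio-based index raised to an even power versus a difference-based index on line loading), not the exact definitions used in the thesis; the line currents and ratings are invented.

```python
import numpy as np

I_post = np.array([420., 510., 380.])     # post-contingency line currents (A), illustrative
I_rated = np.array([500., 500., 400.])    # line current ratings (A), illustrative

n = 2
pi_ratio = np.sum((I_post / I_rated) ** (2 * n))   # classic, highly nonlinear index
pi_linear = np.sum(I_post - I_rated)               # difference-based, more linear index
print(f"ratio-based PI = {pi_ratio:.3f}, difference-based PI = {pi_linear:.1f} A")
```

The nonlinear index heavily weights lines near or above their rating, which is the source of the misranking behaviour discussed in the abstract; the linear index responds proportionally to loading changes.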
APA, Harvard, Vancouver, ISO, and other styles
15

Lee, Sungwook. "Semiparametric regression with random effects /." free to MU campus, to others for purchase, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9842547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Yaqoob, Saima. "BRIDGE EDGE BEAM : NON-LINEAR ANALYSIS OF REINFORCED CONCRETE OVERHANG SLAB BY FINITE ELEMENT METHOD." Thesis, KTH, Bro- och stålbyggnad, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-222806.

Full text
Abstract:
The bridge edge beam system is an increasing concern in Sweden, because it is the most visible part of the structure and is subjected to harsh weather. The edge beam contributes to the stiffness of the overhang slab and helps to distribute concentrated loads. The design of the edge beam is affected not only by structural members but also by non-structural members. The aim of the thesis is to investigate the influence of the edge beam on the structural behavior of a reinforced concrete overhang slab. A three-dimensional (3D) non-linear finite element model is developed using the commercial software ABAQUS version 6.1.14. The load-displacement curves and failure modes were observed, and the bending moment and shear capacity of the cantilever slab are studied. The validated model from the non-linear analysis of the reinforced concrete slab gives a stiffer result and leads to a higher load capacity when compared with the experimental test. The presence of the edge beam in the overhang slab of length 2.4 m slightly increases the load capacity and shows ductile behavior due to the self-weight of the edge beam. The non-linear FE analysis of the overhang slab of length 10 m leads to a much higher load capacity and gives a stiffer response compared to the overhang slab of 2.4 m. The presence of the edge beam in the overhang slab of length 10 m gives a higher load capacity and shows a stiffer response when compared with the 10 m overhang slab without the edge beam. This might be due to the self-weight of the edge beam and the fact that the overhang slab is restrained at the right side of the slab.
APA, Harvard, Vancouver, ISO, and other styles
17

Nyberg, Karl-Johan. "Performance Analysis of Detection System Design Algorithms." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/41789.

Full text
Abstract:
Detection systems are widely used in industry. Designers, operators and users of these systems need to choose an appropriate design, based on the intended usage and the operating environment. The purpose of this research is to analyze the effect of various system design variables (controllable) and system parameters (uncontrollable) on the performance of detection systems. To optimize system performance one must manage the tradeoff between two errors that can occur. A False Alarm occurs if the detection system falsely indicates a target is present, and a False Clear occurs if the detection system falsely fails to indicate a target is present. Given a particular detection system and a pre-specified false clear (or false alarm) rate, there is a minimal false alarm (or false clear) rate that can be achieved. Earlier research has developed methods that address this false alarm, false clear tradeoff problem (FAFCT) by formulating a Neyman-Pearson hypothesis problem, which can be solved as a Knapsack problem. The objective of this research is to develop guidelines that can be of help in designing detection systems. For example, what system design variables must be implemented to achieve a certain false clear standard for a parallel 2-sensor detection system for Salmonella detection? To meet this objective, an experimental design is constructed and an analysis of variance is performed. Computational results are obtained using the FAFCT-methodology, and the results are presented and analyzed using ROC (Receiver Operating Characteristic) curves and an analysis of variance. The research shows that sample size (i.e., the size of the test data set used to estimate the distribution of sensor responses) has very little effect on the FAFCT compared to other factors. The analysis clearly shows that correlation has the most influence on the FAFCT. Negatively correlated sensor responses outperform uncorrelated and positively correlated sensor responses by large margins, especially for strict FC-standards (the FC-standard is defined as the maximum allowed False Clear rate). The FC-standard is the second most influential design variable, followed by grid size. Suggestions for future research are also included.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
18

Jamali, Shojaeddin. "Assessing load carrying capacity of existing bridges using SHM techniques." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/134484/1/Shojaeddin_Jamali_Thesis.pdf.

Full text
Abstract:
This research provides a multi-tier framework for load carrying capacity assessment of bridges using structural health monitoring techniques. In this framework, four tiers are developed ranging from simplified to detailed tiers for holistic bridge assessment. Performance of each tier has been validated using various numerical and experimental examples of bridges and beam-like structures.
APA, Harvard, Vancouver, ISO, and other styles
19

Griffiths, Katharine Rhiannon. "An improved method for simulation of vehicle vibration using a journey database and wavelet analysis for the pre-distribution testing of packaging." Thesis, University of Bath, 2013. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.589639.

Full text
Abstract:
Vehicle vibration is inherently random and non-stationary with a non-Gaussian distribution. In addition, variations in vehicle parameters, product payloads and distribution journeys mean that the characteristics of vibration are not identical for all distribution journeys. Because vehicle vibration and shock are key causes of damage during distribution, their simulation in pre-distribution testing is vital in order to ensure that adequate protection is provided for transported products. The established method set out in the current testing standards utilises a global set of averaged, accelerated power spectral density spectra to construct random vibration signals. These signals are stationary with Gaussian distributions and, therefore, do not fully represent actual vehicle vibration, only an average. The aim of the investigation reported on in this thesis was to create an improved test regime for simulating vehicle vibration for pre-distribution testing of packaging. This aim has been achieved through the construction of representative tests and the creation of realistic simulations with statistical significance. A journey database has been created, in which historic road profile data along with a quarter vehicle model have been used to approximate a known vehicle’s vibration on a specific distribution journey. Additionally, a wavelet decomposition method, in which wavelet analysis is used to decompose the approximate vehicle vibration into a series of Gaussian approximations of varying amplitude and spectral content, has been developed. Along with theoretical work, case studies have been undertaken in order to validate the test regime.
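The wavelet decomposition step can be illustrated with a generic multi-level decomposition. This is a hedged sketch using PyWavelets on a synthetic, amplitude-modulated noise signal standing in for measured vehicle vibration; the sampling rate, wavelet family (db4) and number of levels are assumptions, not the thesis's choices.

```python
import numpy as np
import pywt

fs = 1000.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
signal = rng.normal(0, 1, t.size) * (1 + 0.5 * np.sin(2 * np.pi * 0.2 * t))

# Multi-level discrete wavelet decomposition: [cA5, cD5, cD4, ..., cD1]
coeffs = pywt.wavedec(signal, wavelet="db4", level=5)
for k, c in enumerate(coeffs):
    label = "approx level 5" if k == 0 else f"detail level {len(coeffs) - k}"
    print(f"{label:>15s}: rms = {np.sqrt(np.mean(c**2)):.3f}")
```

Each coefficient band can then be summarized (here by its RMS) and treated as an approximately Gaussian component of varying amplitude and spectral content, in the spirit of the decomposition described above.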
APA, Harvard, Vancouver, ISO, and other styles
20

Costa, Bárbara Cristina Alves da. "Load measurement error influence on friction factor calibration of pipe water distribution networks through the reverse transient method and genetic algorithm." Universidade Federal do Ceará, 2014. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=13621.

Full text
Abstract:
The study of hydraulic networks for operational purposes, or for feasibility analysis of their expansion or rehabilitation, begins with calibration, understood in this context as the identification of parameters such as the friction factor, roughness and diameter. The Inverse Transient Method combined with a genetic algorithm proves efficient for this task. The method employs the Method of Characteristics to solve the equations of motion for transient flow in the network pipes, and the optimization of solutions is based on evolutionary theory and evaluated by an objective function, which in this study is the sum of the absolute differences between the heads measured and those calculated by the model for each set of solutions. Since the purpose of developing mathematical models for the calibration of hypothetical networks is their use in real networks, where the collection of head data is subject to measurement errors (whether due to defective equipment, unfavourable ambient conditions or other random effects), and given the relevance of the friction factors of the pipes, because of their relation to the head losses that must be controlled for optimal network operation, guaranteeing a continuous supply in adequate quantity and operating conditions, this work verifies the influence of transient head measurement errors on the identification of the friction factors in two hypothetical hydraulic networks. The networks are of different sizes with respect to the number of loops, nodes and pipes, and each is fed by a single reservoir. The transient conditions are generated by the manoeuvre of a valve installed at one node of each network. Head data collection is restricted to 20% of the nodes of each network, one of which is the node where the valve is located. The observation time of the hydraulic transient is restricted to the duration of the valve manoeuvre, 20 s, sampled at intervals of 0.1 s, resulting in 200 head records. The steady-state condition of the networks is initially unknown; the available information is restricted to the reservoir heads, the nodal demands and the pipe diameters, while the friction factors are initially stipulated. The steady-state and transient conditions, as well as the identification of the friction factors, are determined with a hydraulic model, generating transient heads that are conventionally taken as true; these heads then receive increments representing various systematic and random errors, producing new heads that are treated as data collected with measurement errors. From these new heads, friction factors are identified and compared with those obtained for the ideal case of heads without measurement errors. The comparison is made through the mean relative error and the optimal value of the objective function. The results show that measurement errors affect the identification of the friction factors, although it is not possible to establish a clear relationship between them.
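The calibration loop described above (objective function F = Σ|h_measured − h_calculated| minimized by a genetic-algorithm-style search over friction factors) can be sketched as follows. This is a hedged toy illustration: transient_heads() is a deliberately simple stand-in for the Method of Characteristics network solver, and the population size, mutation scale and friction-factor bounds are invented, not the thesis settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def transient_heads(f):
    """Stand-in for the Method of Characteristics transient solver (illustration only)."""
    return 50.0 - 120.0 * f          # one fake head value per pipe, in metres

true_f = np.array([0.020, 0.032, 0.026])   # "unknown" friction factors to recover
measured = transient_heads(true_f)

def objective(f):
    return np.sum(np.abs(measured - transient_heads(f)))   # F = sum |h_meas - h_calc|

pop = rng.uniform(0.01, 0.05, size=(40, 3))                 # initial population
for _ in range(200):
    scores = np.array([objective(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]                   # keep the best half
    children = parents + rng.normal(0.0, 0.002, parents.shape)  # Gaussian mutation
    pop = np.vstack([parents, np.clip(children, 0.01, 0.05)])

best = pop[np.argmin([objective(ind) for ind in pop])]
print("recovered friction factors:", np.round(best, 4))
```

In the thesis setting, the measured heads would additionally carry systematic or random errors, and the quality of the recovered friction factors is then judged by the mean relative error and the optimal objective value.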
APA, Harvard, Vancouver, ISO, and other styles
21

Kucharska, Magdalena, and Jolanta Pielaszkiewicz. "NIG distribution in modelling stock returns with assumption about stochastic volatility : Estimation of parameters and application to VaR and ETL." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-2874.

Full text
Abstract:

We model Normal Inverse Gaussian distributed log-returns with the assumption of stochastic volatility. We consider different methods of parametrization of returns and, following the paper of Lindberg [21], we assume that the volatility is a linear function of the number of trades. In addition to Lindberg's paper, we suggest daily stock volumes and amounts as alternative measures of the volatility. As an application of the models, we perform Value-at-Risk and Expected Tail Loss predictions by Lindberg's volatility model and by our own suggested model. These applications are new and not described in the literature. For a better understanding of our calculations, programmes and simulations, basic information and properties of the Normal Inverse Gaussian and Inverse Gaussian distributions are provided. Practical applications of the models are implemented on the Nasdaq-OMX, where we have calculated Value-at-Risk and Expected Tail Loss for the Ericsson B stock data during the period 1999 to 2004.

APA, Harvard, Vancouver, ISO, and other styles
22

Ergin, Emre. "Investigation Of Music Algorithm Based And Wd-pca Method Based Electromagnetic Target Classification Techniques For Their Noise Performances." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611218/index.pdf.

Full text
Abstract:
Multiple Signal Classification (MUSIC) Algorithm based and Wigner Distribution-Principal Component Analysis (WD-PCA) based classification techniques are very recently suggested resonance-region approaches for electromagnetic target classification. In this thesis, the performances of these two techniques will be compared concerning their robustness to noise and their capacity to handle a large number of candidate targets. In this context, classifier design simulations will be demonstrated for target libraries containing conducting and dielectric spheres and for dielectric-coated conducting spheres. Small-scale aircraft targets modeled by thin conducting wires will also be used in classifier design demonstrations.
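The subspace idea behind MUSIC can be shown with a generic frequency-estimation sketch. This is a hedged, simplified illustration on a synthetic noisy signal; the thesis applies MUSIC to late-time resonances of scattered fields, which is not reproduced here, and the window length, noise level and test frequencies below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 200, 20                       # samples, covariance window length
f_true = [0.12, 0.31]                # normalized frequencies to recover
n = np.arange(N)
x = sum(np.exp(2j * np.pi * f * n) for f in f_true)
x = x + 0.3 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Sample covariance from sliding snapshots, then split into signal/noise subspaces
snapshots = np.array([x[i:i + M] for i in range(N - M)])
R = snapshots.conj().T @ snapshots / snapshots.shape[0]
eigvals, eigvecs = np.linalg.eigh(R)
En = eigvecs[:, : M - len(f_true)]   # noise subspace (smallest eigenvalues come first)

freqs = np.linspace(0.0, 0.5, 501)
pseudo = np.array([
    1.0 / np.real(
        (a := np.exp(2j * np.pi * f * np.arange(M))).conj() @ En @ En.conj().T @ a)
    for f in freqs
])

# Pick the two strongest local maxima of the pseudospectrum
is_peak = (pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:])
peak_freqs, peak_vals = freqs[1:-1][is_peak], pseudo[1:-1][is_peak]
print("strongest peaks near:", np.sort(peak_freqs[np.argsort(peak_vals)[-2:]]))
```

Steering vectors orthogonal to the noise subspace produce sharp peaks in the pseudospectrum, which is what makes the method attractive for extracting closely spaced resonances from noisy data.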
APA, Harvard, Vancouver, ISO, and other styles
23

Kucharska, Magdalena, and Jolanta Maria Pielaszkiewicz. "NIG distribution in modelling stock returns with assumption about stochastic volatility : Estimation of parameters and application to VaR and ETL." Thesis, Halmstad University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-58180.

Full text
Abstract:
We model Normal Inverse Gaussian distributed log-returns with the assumption of stochastic volatility. We consider different methods of parametrization of returns and, following the paper of Lindberg [21], we assume that the volatility is a linear function of the number of trades. In addition to Lindberg's paper, we suggest daily stock volumes and amounts as alternative measures of the volatility. As an application of the models, we perform Value-at-Risk and Expected Tail Loss predictions by Lindberg's volatility model and by our own suggested model. These applications are new and not described in the literature. For a better understanding of our calculations, programmes and simulations, basic information and properties of the Normal Inverse Gaussian and Inverse Gaussian distributions are provided. Practical applications of the models are implemented on the Nasdaq-OMX, where we have calculated Value-at-Risk and Expected Tail Loss for the Ericsson B stock data during the period 1999 to 2004.
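A minimal version of the NIG-based VaR/ETL calculation can be sketched with SciPy. This is a hedged illustration, not the thesis code: the returns are simulated placeholders rather than Ericsson B data, the NIG parameters are invented, and the stochastic-volatility structure (volatility as a linear function of the number of trades) is not modelled here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Simulated stand-in for daily log-returns
returns = stats.norminvgauss(a=1.5, b=-0.3, loc=0.0005, scale=0.015).rvs(
    2000, random_state=rng)

params = stats.norminvgauss.fit(returns)          # (a, b, loc, scale) by MLE
dist = stats.norminvgauss(*params)

alpha = 0.99
q = dist.ppf(1 - alpha)                           # 1% quantile of the return distribution
var_99 = -q                                       # Value-at-Risk as a positive loss
sample = dist.rvs(200_000, random_state=rng)
etl_99 = -sample[sample <= q].mean()              # Expected Tail Loss beyond VaR
print(f"VaR(99%) = {var_99:.4f}   ETL(99%) = {etl_99:.4f}")
```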
APA, Harvard, Vancouver, ISO, and other styles
24

Rhoades, Melinda Justine. "Addressing the computing gender gap a case study using feminist pedagogy and visual culture art education /." Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1217107478.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Reboková, Lenka. "Optimalizace logistických činností při expedici v elektrotechnické výrobě." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217574.

Full text
Abstract:
Logistics, as a process of planning, realizing and controlling an effective and efficient workflow and the warehousing of commodities, services and related information with the goal of customer satisfaction, has become a highly important and inseparable part of every company. An essential part of the planning process is an analysis of the current company situation, which tries to find all the limiting factors and serves as a basis for making the company more effective. This diploma thesis analyses the current state of the expedition division of the ModusLink Czech Republic s.r.o. company: it analyses all its working operations in detail and, using the chronometric analysis method, evaluates the division's theoretical and feasible throughput capacity. The feasible throughput capacity is then confronted with the division's real throughput capacity achieved during the monitored months, and based on the results of the comparison, conclusions and recommendations for future improvements are given.
APA, Harvard, Vancouver, ISO, and other styles
26

Zhang, Peng. "Approximating true relevance model in relevance feedback." Thesis, Robert Gordon University, 2013. http://hdl.handle.net/10059/808.

Full text
Abstract:
Relevance is an essential concept in information retrieval (IR) and relevance estimation is a fundamental IR task. It involves not only document relevance estimation, but also estimation of the user's information need. Relevance-based language modelling aims to estimate a relevance model (i.e., a relevant query term distribution) from relevance feedback documents. The true relevance model should be generated from truly relevant documents. The ideal estimation of the true relevance model is expected to be not only effective in terms of mean retrieval performance (e.g., Mean Average Precision) over all the queries, but also stable in the sense that the performance is consistent across different individual queries. In practice, however, in approximating/estimating the true relevance model, the improvement of retrieval effectiveness often sacrifices retrieval stability, and vice versa. In this thesis, we propose to explore and analyze such an effectiveness-stability tradeoff from a new perspective, i.e., the bias-variance tradeoff that is a fundamental theory in statistical estimation. We first formulate the bias, variance and the trade-off between them for retrieval performance as well as for query model estimation. We then analytically and empirically study a number of factors (e.g., query model complexity, query model combination, document weight smoothness and irrelevant document removal) that can affect the bias and variance. Our study shows that the proposed bias-variance trade-off analysis can serve as an analytical framework for query model estimation. We then investigate in depth two particular key factors, document weight smoothness and removal of irrelevant documents, in query model estimation, by proposing novel methods for document weight smoothing and irrelevance distribution separation, respectively. Systematic experimental evaluation on TREC collections shows that the proposed methods can improve both the retrieval effectiveness and the retrieval stability of query model estimation. In addition to the above main contributions, we also carry out an initial exploration of two further directions: the formulation of bias-variance in personalization, and looking at query model estimation from a novel theoretical angle (i.e., quantum theory) that has partially inspired our research.
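The bias-variance decomposition invoked above takes the standard form below, where θ is the quantity being estimated (here, for instance, a retrieval performance measure or a query-model parameter) and θ̂ its estimate from feedback documents; the specific choice of θ used in the thesis is not reproduced here.

```latex
\mathbb{E}\big[(\hat{\theta}-\theta)^2\big]
  \;=\; \underbrace{\big(\mathbb{E}[\hat{\theta}]-\theta\big)^2}_{\text{bias}^2}
  \;+\; \underbrace{\mathbb{E}\big[(\hat{\theta}-\mathbb{E}[\hat{\theta}])^2\big]}_{\text{variance}}
```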
APA, Harvard, Vancouver, ISO, and other styles
27

Cao, Liang. "Numerical analysis and multi-precision computational methods applied to the extant problems of Asian option pricing and simulating stable distributions and unit root densities." Thesis, University of St Andrews, 2014. http://hdl.handle.net/10023/6539.

Full text
Abstract:
This thesis considers new methods that exploit recent developments in computer technology to address three extant problems in the area of Finance and Econometrics. The problem of Asian option pricing has endured for the last two decades in spite of many attempts to find a robust solution across all parameter values. All recently proposed methods are shown to fail when computations are conducted using standard machine precision because as more and more accuracy is forced upon the problem, round-off error begins to propagate. Using recent methods from numerical analysis based on multi-precision arithmetic, we show using the Mathematica platform that all extant methods have efficacy when computations use sufficient arithmetic precision. This creates the proper framework to compare and contrast the methods based on criteria such as computational speed for a given accuracy. Numerical methods based on a deformation of the Bromwich contour in the Geman-Yor Laplace transform are found to perform best provided the normalized strike price is above a given threshold; otherwise methods based on Euler approximation are preferred. The same methods are applied in two other contexts: the simulation of stable distributions and the computation of unit root densities in Econometrics. The stable densities are all nested in a general function called a Fox H function. The same computational difficulties as above apply when using only double-precision arithmetic but are again solved using higher arithmetic precision. We also consider simulating the densities of infinitely divisible distributions associated with hyperbolic functions. Finally, our methods are applied to unit root densities. Focusing on the two fundamental densities, we show our methods perform favorably against the extant methods of Monte Carlo simulation, the Imhof algorithm and some analytical expressions derived principally by Abadir. Using Mathematica, the main two-dimensional Laplace transform in this context is reduced to a one-dimensional problem.
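The multi-precision point can be illustrated outside Mathematica. The sketch below uses mpmath as a freely available stand-in, and a textbook Laplace-transform pair rather than the Geman-Yor transform: numerically inverting the transform at 15 and at 50 significant digits shows how raising the working precision suppresses the round-off error that otherwise limits contour-based inversion.

```python
import mpmath

F = lambda s: 1 / (s + 1) ** 2          # Laplace transform of f(t) = t * exp(-t)
exact = lambda t: t * mpmath.exp(-t)

for dps in (15, 50):
    mpmath.mp.dps = dps                  # working precision in significant digits
    approx = mpmath.invertlaplace(F, 2.0, method="talbot")   # deformed-contour inversion
    print(dps, "digits, abs. error:", mpmath.nstr(abs(approx - exact(2.0)), 5))
```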
APA, Harvard, Vancouver, ISO, and other styles
28

Duffy, Brendan Gilbert. "Development of Multichannel Analysis of Surface Waves (MASW) for Characterising the Internal Structure of Active Fault Zones as a Predictive Method of Identifying the Distribution of Ground Deformation." Thesis, University of Canterbury. Geological Sciences, 2008. http://hdl.handle.net/10092/2051.

Full text
Abstract:
Bulk rock strength is greatly dependent on fracture density, so that reductions in rock strength associated with faulting and fracturing should be reflected by reduced shear coupling and hence S-wave velocity. This study is carried out along the Canterbury rangefront and in Otago. Both lie within the broader plate boundary deformation zone in the South Island of New Zealand. Therefore, built structures are often located in areas where there are undetected or poorly defined faults with associated rock strength reduction. Where structures are sited near to, or across, such faults or fault-zones, they may sustain both shaking and ground deformation damage during an earthquake. Within this zone, management of seismic hazards needs to be based on accurate identification of the potential fault damage zone, including the likely width of off-plane deformation. Lateral S-wave velocity variability provides one method of imaging and locating damage zones and off-plane deformation. This research demonstrates the utility of Multi-Channel Analysis of Surface Waves (MASW) to aid land-use planning in such fault-prone settings. Fundamentally, MASW uses surface wave dispersive characteristics to model a near-surface profile of S-wave velocity variability as a proxy for bulk rock strength. The technique can aid fault-zone planning not only by locating and defining the extent of fault-zones, but also by defining within-zone variability that is readily correlated with measurable rock properties applicable to both foundation design and the distribution of surface deformation. The calibration sites presented here have well-defined field relationships and known fault-zone exposure close to potential MASW survey sites. They were selected to represent a range of progressively softer lithologies, from intact and fractured Torlesse Group basement hard rock (Dalethorpe) through softer Tertiary cover sediments (Boby’s Creek) and Quaternary gravels. This facilitated initial calibration of fracture intensity at a high-velocity-contrast site, followed by exploration of the limits of shear zone resolution at lower velocity contrasts. Site models were constructed in AutoCAD in order to demonstrate spatial correlations between S-wave velocity and fault zone features. Site geology was incorporated in the models, along with geomorphology, river profiles, scanline locations and crosshole velocity measurement locations. Spatial data were recorded using a total-station survey. The interpreted MASW survey results are presented as two-dimensional snapshot cross-sections of the three-dimensional calibration-site models. These show strong correlations between MASW survey velocities and site geology, geomorphology, fluvial profiles and geotechnical parameters and observations. Correlations are particularly pronounced where high velocity contrasts exist, whilst weaker correlations are demonstrated in softer lithologies. Geomorphic correlations suggest that off-plane deformation can be imaged and interpreted in the presence of suitable topographic survey data. A promising new approach to in situ and laboratory soft-rock material and mass characterisation is also presented using a Ramset nail gun. Geotechnical investigations typically involve outcrop and laboratory scale determination of rock mass and material properties such as fracture density and unconfined compressive strength (UCS).
This multi-scale approach is espoused by this study, with geotechnical and S-wave velocity data presented at multiple scales, from survey scale sonic velocity measurements, through outcrop scale scanline and crosshole sonic velocity measurements to laboratory scale property determination and sonic velocity measurements. S-wave velocities invariably increased with decreasing scale. These scaling relationships and strategies for dealing with them are investigated and presented. Finally, the MASW technique is applied to a concealed fault on the Taieri Ridge in Macraes Flat, Central Otago. Here, high velocity Otago Schist is faulted against low velocity sheared Tertiary and Quaternary sediments. This site highlights the structural sensitivity of the technique by apparently constraining the location of the principal fault, which had been ambiguous after standard processing of the seismic reflection data. Processing of the Taieri Ridge dataset has further led to the proposal of a novel surface wave imaging technique termed Swept Frequency Imaging (SFI). This inchoate technique apparently images the detailed structure of the fault-zone, and is in agreement with the conventionally-determined fault location and an existing partial trench. Overall, the results are promising and are expected to be supported by further trenching in the near future.
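The physical link invoked in the opening sentence, between fracture-related strength reduction and S-wave velocity, is the standard elastic relation below (G: shear modulus of the rock mass, ρ: bulk density); heavier fracturing lowers the effective G and hence the measured velocity, which is what the MASW sections exploit as a strength proxy.

```latex
V_s \;=\; \sqrt{\frac{G}{\rho}}
```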
APA, Harvard, Vancouver, ISO, and other styles
29

Lalgudi, Subramanian N. "Transient simulation of power-supply noise in irregular on-chip power distribution networks using latency insertion method, and causal transient simulation of interconnects characterized by band-limited data and terminated by arbitrary terminations." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Aagard, Adam D. "Rectification of 2-D to 3-D Finite Element Analysis in Buried Concrete Arches Under Discrete Loading." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1768.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Bouaniche, Alexandre. "A hybrid stochastic-sectional method for the simulation of soot particle size distributions Vitiated high karlovitz n-decane/air turbulent flames: scaling laws and micro-mixing modeling analysis A hybrid stochastic/fixed-sectional method for solving the population balance equation." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMIR23.

Full text
Abstract:
Soot particles (which are one kind of ultra-fine particle) can be produced and emitted under fuel-rich combustion conditions. Sectors such as road and air transportation, and industry, are significant contributors to soot particle emissions. Soot particles are usually considered a pollutant, as their negative impact on health has been established; in some specific cases, such as nanomaterials production, they can also be synthesized on purpose. In both cases, an accurate understanding of and prediction capability for the Particle Size Distribution (PSD) is needed for better combustor design. In this thesis, a novel numerical method is proposed to predict the evolution of the PSD. It consists of a hybrid approach featuring stochastic particles, which represent a Probability Density Function (PDF), and fixed sections. The objective is to solve accurately for the surface growth/oxidation term, mitigating the numerical diffusion encountered in some classical sectional methods, while remaining less expensive than a full Monte Carlo method. First, the context and motivation of the thesis are explained, and concepts and models for soot physical source terms are briefly reviewed. Then, the Population Balance Equation (PBE), which drives the evolution of the PSD, is presented, together with the different classes of numerical methods used for its resolution. Subsequently, the novel hybrid method is introduced, and its accuracy and efficiency are demonstrated on analytical test cases. Finally, the method is applied to a premixed sooting ethylene flame.
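The numerical-diffusion problem that the hybrid stochastic/fixed-sectional method targets can be seen in a minimal fixed-sectional solver for the growth-only population balance equation, dn/dt + d(G·n)/dv = 0, discretized with first-order upwinding. The grid, growth rate and initial PSD below are invented for illustration; this is the classical sectional baseline, not the thesis's hybrid scheme.

```python
import numpy as np

# Growth-only PBE on a fixed volume grid, first-order upwind in v.
# The exact solution is a pure translation of the initial PSD, so any
# spreading of the numerical peak is numerical diffusion -- the artefact
# that hybrid stochastic/sectional methods are designed to mitigate.
n_sec = 200
v = np.linspace(1.0, 100.0, n_sec)               # particle "volume" grid (arbitrary units)
dv = v[1] - v[0]
G = 1.0                                          # constant surface-growth rate dv/dt
dt = 0.4 * dv / G                                # CFL-limited time step
n_num = np.exp(-0.5 * ((v - 20.0) / 3.0) ** 2)   # initial PSD: a Gaussian bump

t_end = 40.0
steps = int(t_end / dt)
for _ in range(steps):
    flux = G * n_num                             # upwind flux leaving each section
    n_num[1:] -= dt / dv * (flux[1:] - flux[:-1])
    n_num[0] -= dt / dv * flux[0]                # nothing flows in from below

n_exact = np.exp(-0.5 * ((v - 20.0 - G * steps * dt) / 3.0) ** 2)
print("exact peak %.3f vs numerical peak %.3f" % (n_exact.max(), n_num.max()))
```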
APA, Harvard, Vancouver, ISO, and other styles
32

Zhang, Runtong. "Measurement of effective diffusivity : chromatographic method (pellets & monoliths)." Thesis, University of Bath, 2013. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608352.

Full text
Abstract:
This thesis aims to determine the effective diffusivity (Deff) of a porous material, γ-alumina, using an unsteady-state method with two inert gases at ambient conditions and no reactions. For porous materials, Deff is important because it determines the amount of reactant that transfers to the surface of the pores. Once Deff is known, the apparent tortuosity factor of γ-alumina is calculated using the parallel pore model. The apparent tortuosity factor is important because: (a) it can be used to back-calculate Deff under reacting conditions; (b) once Deff with reactions is known, the Thiele modulus, and hence the global reaction rate, can be calculated; (c) the apparent tortuosity factor is also needed for modelling purposes (e.g. modelling a packed-bed column or a catalytic combustion reactor packed with porous γ-alumina in various shapes, or with monoliths). Experimental measurements were performed to determine the effective diffusivity of a binary pair of non-reacting gases (He in N2, and N2 in He) in spherical γ-alumina pellets (1 mm diameter) and in γ-alumina washcoated monoliths (washcoat thickness 20 to 60 µm, on 400 cpsi (cells per square inch) cordierite support). The method is based on the chromatographic technique, in which a gas flows through a tube packed with the sample to be tested. A pulse of tracer gas is injected (e.g. using sample loops of 0.1, 0.2 or 0.5 ml) and the response at the outlet of the packed bed is monitored over time with an on-line mass spectrometer. For the spherical pellets, the tube i.d. was 13.8 mm and the packed-bed depths were 200 and 400 mm; for the monoliths, the tube i.d. was 7 mm and the packed lengths were 500 and 1000 mm. When the chromatographic technique was applied to the monoliths, the experimental errors proved significant and the data were very difficult to interpret. However, the technique worked well with the spherical pellets: the effective diffusivity of He in N2 was 0.75–1.38 × 10⁻⁷ m² s⁻¹, and that of N2 in He was 1.81–3.10 × 10⁻⁷ m² s⁻¹. Back-calculating the apparent tortuosity factor with the parallel pore model gave values between 5 and 9.5 for the pellets.
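For orientation, the parallel pore model referred to above back-calculates the apparent tortuosity factor as tau = porosity × D_pore / Deff, with the pore diffusivity combining molecular and Knudsen contributions through the Bosanquet relation. The sketch below uses illustrative numbers rather than the thesis's measured values; the pore radius, porosity and molecular diffusivity in particular are assumptions.

```python
import math

def knudsen_diffusivity(pore_radius_m, T, M_kg_per_mol):
    """Knudsen diffusivity D_K = (2/3) * r * sqrt(8RT / (pi * M))."""
    R = 8.314
    return (2.0 / 3.0) * pore_radius_m * math.sqrt(8.0 * R * T / (math.pi * M_kg_per_mol))

def apparent_tortuosity(D_eff, D_molecular, pore_radius_m, porosity, T, M):
    """Parallel pore model: D_eff = (porosity / tau) * D_pore, Bosanquet-combined."""
    D_K = knudsen_diffusivity(pore_radius_m, T, M)
    D_pore = 1.0 / (1.0 / D_molecular + 1.0 / D_K)   # Bosanquet combination
    return porosity * D_pore / D_eff

# Illustrative numbers only (He tracer at ambient temperature, ~5 nm pores assumed):
tau = apparent_tortuosity(D_eff=1.5e-7, D_molecular=7.0e-5,
                          pore_radius_m=2.5e-9, porosity=0.6,
                          T=298.0, M=4.0e-3)
print("apparent tortuosity factor = %.1f" % tau)
```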
APA, Harvard, Vancouver, ISO, and other styles
33

Dobes, Jiri. "Numerical algorithms for the computation of steady and unsteady compressible flow over moving geometries: application to fluid-structure interaction." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210640.

Full text
Abstract:

This work deals with the development of numerical methods for compressible flow simulation with application to the interaction of fluid flows and structural bodies.

First, we develop numerical methods based on multidimensional upwind residual distribution (RD) schemes. Theoretical results for the stability and accuracy of the methods are given. The RD schemes for unsteady problems are then extended to computations on moving meshes. As a second approach, cell-centred and vertex-centred finite volume (FV) schemes are considered. The RD schemes are compared to FV schemes by means of the 1D modified equation and by comparing the numerical results for scalar problems and for the system of Euler equations. We present a number of two- and three-dimensional steady and unsteady test cases illustrating the properties of the numerical methods. The results are compared with theoretical solutions and experimental data.
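As a point of reference for the finite volume side of the comparison described above, a minimal first-order upwind, cell-centred FV scheme for 1D linear advection looks like the sketch below. It is a generic textbook scheme with invented grid and data, not one of the RD or FV schemes developed in the thesis.

```python
import numpy as np

# First-order upwind finite volume scheme for u_t + a u_x = 0 on a periodic grid.
a, L, n = 1.0, 1.0, 200
dx = L / n
x = (np.arange(n) + 0.5) * dx
u = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)     # square pulse initial data
dt = 0.5 * dx / abs(a)                             # CFL number 0.5

t, t_end = 0.0, 0.5
while t < t_end:
    flux = a * u                                   # upwind flux (a > 0)
    u -= dt / dx * (flux - np.roll(flux, 1))       # periodic boundary via roll
    t += dt

# Conservation holds to machine precision; the pulse is smeared (first-order diffusion).
print("total mass conserved:", np.isclose(u.sum() * dx, 0.2))
```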

In the second part, a numerical method for fluid-structure interaction problems is developed. The problem is divided into three distinct sub-problems: Computational Fluid Dynamics, Computational Solid Mechanics and the problem of fluid mesh movement. The problem of Computational Solid Mechanics is formulated as a system of partial differential equations for an anisotropic elastic continuum and solved by the finite element method. The mesh movement is determined using the pseudo-elastic continuum approach and solved again by the finite element method. The coupling of the problems is achieved by a simple sub-iterative approach. Capabilities of the methods are demonstrated on computations of 2D supersonic panel flutter and 3D transonic flutter of the AGARD 445.6 wing. In the first case, the results are compared with the theoretical solution and the numerical computations given in the references. In the second case the comparison with experimental data is presented.


Doctorat en Sciences de l'ingénieur (Doctorate in Engineering Sciences)
APA, Harvard, Vancouver, ISO, and other styles
34

Meynard, Joane. "Influence de la taille, de la morphologie et de la distribution spatiale des pores sur la conductivité thermique de céramiques UO2." Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0607.

Full text
Abstract:
Inside a nuclear reactor core, the behaviour of the fuel is largely controlled by thermal phenomena, which is why it is important to model its thermal behaviour precisely. The objective of this work is to assess the impact of the microstructure on the thermal properties of UO2 fuel at 50°C and to develop a model representative of its thermal behaviour. UO2 ceramics were manufactured and their microstructures were studied using optical microscopy, SEM-FIB and X-ray tomography. Two families of porosity were identified: (1) occluded, near-spherical pores located within the UO2 aggregates, and (2) an interconnected "assembly" porosity located at the interfaces between aggregates. Descriptive parameters were estimated by immersion measurements and image analysis. Studies based on analytical and numerical homogenization (Fast Fourier Transform calculations) were then conducted to evaluate the impact of these characteristics on thermal conductivity. The numerical calculations were performed on 2D and 3D images obtained by imaging, and on microstructures generated with an original morphological model reproducing the specific features of the observed porous networks. These studies highlighted the significant impact of the spatial distribution and interconnection of the assembly porosity on the thermal conductivity of the manufactured UO2 ceramics. Finally, the proposed model was compared with experimental thermal diffusivity measurements obtained by the Flash method. Discrepancies between model and experiment are greatly reduced compared with standard models, showing that the proposed model reproduces the degradation of thermal conductivity measured on the studied ceramics and is thus more representative of the thermal behaviour of UO2.
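As a reminder of what an analytical homogenization estimate of this kind looks like, the sketch below evaluates the classical Maxwell (dilute spherical pore) estimate of effective conductivity and compares it with a simple linear porosity correction. The conductivity and porosity values are rough, assumed numbers; the thesis's own calculations rely on FFT-based numerical homogenization of the imaged microstructures.

```python
# Maxwell estimate for the effective conductivity of a solid containing
# spherical pores, compared with a common first-order porosity correction.
def maxwell_spherical_pores(k_solid, k_pore, porosity):
    """Maxwell (Maxwell-Garnett) estimate for dilute spherical inclusions."""
    beta = (k_pore - k_solid) / (k_pore + 2.0 * k_solid)
    return k_solid * (1.0 + 2.0 * porosity * beta) / (1.0 - porosity * beta)

k_uo2 = 7.6          # W/m/K, dense UO2 near 50 degC (order of magnitude, assumed)
k_pore = 0.026       # W/m/K, gas-filled pore (assumed)

for p in (0.02, 0.05, 0.10):
    k_maxwell = maxwell_spherical_pores(k_uo2, k_pore, p)
    k_linear = k_uo2 * (1.0 - 1.5 * p)      # simple linear porosity correction
    print(f"porosity {p:.2f}: Maxwell {k_maxwell:.2f}  linear {k_linear:.2f} W/m/K")
```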
APA, Harvard, Vancouver, ISO, and other styles
35

Collin, Marie-France. "Analyse quantitative de visualisations d'écoulements par la mise au point d'une méthode de traitement d'images : application à la détermination du champ des vitesses." Poitiers, 1989. http://www.theses.fr/1989POIT2270.

Full text
Abstract:
Development and testing of an image processing and analysis chain for exploiting flow-visualization photographs obtained with the solid-tracer technique. The digitized images are analysed by a contour-coding method, considering the configuration of the traces: juxtaposed traces are identified and segmented. The method is applied to four photographs visualizing the sudden translation of a NACA0012 profile at fixed incidence.
APA, Harvard, Vancouver, ISO, and other styles
36

Fischer, Jiří. "Analýza vlivu směrové distribuce kolagenních vláken ve stěně tepny na její mechanické vlastnosti." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-418206.

Full text
Abstract:
The aim of this thesis is to review the literature on the directional distribution of collagen fibres, knowledge that is essential for computational modelling and FEM analysis of the arterial wall. The suitability of different models of the directional distribution of collagen fibres is compared by fitting different types of probability density functions. The impact of different collagen fibre distributions, and of wall anisotropy, on the mechanical properties of the arterial wall is then assessed using the finite element method. The FEM analysis covers three loading types: uniaxial tension, equibiaxial tension, and inflation of the artery by internal pressure. The output of the thesis is an evaluation of the results for various types of collagen fibre arrangement in the arterial wall.
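A common analytical choice for the fitted orientation densities mentioned above is a pi-periodic von Mises distribution. The sketch below fits its concentration parameter by maximum likelihood to synthetic fibre angles; the data and parameter values are invented, and the thesis may compare other families of densities as well.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import i0  # modified Bessel function of the first kind, order 0

def vonmises_pi_pdf(theta, mu, b):
    """pi-periodic von Mises density often used for planar fibre orientations."""
    return np.exp(b * np.cos(2.0 * (theta - mu))) / (np.pi * i0(b))

# Synthetic fibre angles clustered around the mean direction mu = 0
rng = np.random.default_rng(0)
true_mu, true_b = 0.0, 3.0
theta = rng.vonmises(2.0 * true_mu, true_b, size=2000) / 2.0  # map to pi-periodic angles

def neg_log_lik(b):
    return -np.sum(np.log(vonmises_pi_pdf(theta, true_mu, b)))

fit = minimize_scalar(neg_log_lik, bounds=(0.01, 50.0), method="bounded")
print(f"fitted concentration b = {fit.x:.2f} (true value {true_b})")
```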
APA, Harvard, Vancouver, ISO, and other styles
37

Kimachi, Hirohisa, Hiroshi Tanaka, Toshihiro Satoh, and Keisuke Tanaka. "モードⅡ荷重を受ける長繊維強化複合材料の層間マトリックスき裂先端での塑性領域" [Plastic zone at the interlaminar matrix crack tip in long-fibre-reinforced composites under mode II loading]. 日本機械学会 (The Japan Society of Mechanical Engineers), 2000. http://hdl.handle.net/2237/9170.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Kimachi, Hirohisa, Hiroshi Tanaka, Toshihiro Satoh, and Keisuke Tanaka. "モードⅠき裂を有する長繊維強化複合材料における塑性領域の弾塑性有限要素法解析" [Elastic-plastic finite element analysis of the plastic zone in long-fibre-reinforced composites with a mode I crack]. 日本機械学会 (The Japan Society of Mechanical Engineers), 2000. http://hdl.handle.net/2237/9173.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Muševič, Sašo. "Non-stationary sinusoidal analysis." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/123809.

Full text
Abstract:
Many types of everyday signals fall into the non-stationary sinusoids category. A large family of such signals represent audio, including acoustic/electronic, pitched/transient instrument sounds, human speech/singing voice, and a mixture of all: music. Analysis of such signals has been in the focus of the research community for decades. The main reason for such intense focus is the wide applicability of the research achievements to medical, financial and optical applications, as well as radar/sonar signal processing and system analysis. Accurate estimation of sinusoidal parameters is one of the most common digital signal processing tasks and thus represents an indispensable building block of a wide variety of applications. Classic time-frequency transformations are appropriate only for signals with slowly varying amplitude and frequency content - an assumption often violated in practice. In such cases, reduced readability and the presence of artefacts represent a significant problem. Moreover, time and frequency resolution cannot be increased arbitrarily, owing to the well-known Heisenberg uncertainty principle. The main objective of this thesis is to review and improve existing methods for the analysis of non-stationary sinusoids, and to propose new strategies and approaches. The dissertation contributes substantially to existing sinusoidal analysis: a) it critically evaluates the state of the art and describes the existing analysis methods in detail, b) it substantially improves some of the most promising existing methods, c) it proposes several new approaches to the analysis of existing sinusoidal models, and d) it proposes a very general and flexible sinusoidal model together with a direct and fast analysis algorithm.
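As a baseline for the quasi-stationary analysis the thesis sets out to improve, a classical estimator of a sinusoid's frequency from a single windowed frame uses quadratic interpolation of the log-magnitude spectral peak. The sketch below is a generic implementation with invented parameters, not one of the thesis's methods.

```python
import numpy as np

def qifft_peak(frame, fs):
    """Frequency estimate from one windowed frame via quadratic peak interpolation."""
    n = len(frame)
    win = np.hanning(n)
    spec = np.abs(np.fft.rfft(frame * win)) + 1e-12
    k = int(np.argmax(spec))
    a, b, c = np.log(spec[k - 1: k + 2])          # three log-magnitude bins around the peak
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)     # parabolic vertex offset in bins
    return (k + delta) * fs / n

fs, n = 44100.0, 1024
t = np.arange(n) / fs
f_true = 440.7
frame = np.sin(2.0 * np.pi * f_true * t)          # a stationary test sinusoid
print(f"estimated {qifft_peak(frame, fs):.2f} Hz vs true {f_true} Hz")
```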
APA, Harvard, Vancouver, ISO, and other styles
40

Zhang, Ying. "Synthesis and Determination of the Local Structure and Phase Evolution of Unique Boehmite-Derived Mesoporous Doped Aluminas." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7105.

Full text
Abstract:
Mesoporous alumina (Al2O3) in the gamma (γ) phase is widely used as a support in catalytic applications because of its high surface area, large pore volume, acid-base characteristics, and thermal stability. To improve the thermal stability of gamma alumina, dopants such as lanthanum, magnesium, zirconia, and silica are often introduced. Current laboratory-based methods for synthesizing gamma alumina generally involve 10-15 steps and/or use toxic, expensive surfactants and solvents. Industrial methods, while simpler, lack control of pore properties and surface chemistry. In contrast, we have developed an innovative solvent-deficient, one-step method that is able to synthesize a wide range of pure and silica-doped aluminas with high surface areas, pore volumes from 0.3 to 1.8 cm³/g, and pore diameters from 5 to 40 nm. More significantly, our silica-doped aluminas are stable up to temperatures as high as 1300°C, which is 200°C higher than other pure and doped gamma alumina materials. The usefulness of gamma-alumina as a catalyst support is dependent on its favorable combination of textural, thermal, structural, and chemical properties, yet the relationship between structure and these other properties is still not clearly understood due to the poorly crystallized nature of the material. In particular, the mechanism by which the gamma structure is stabilized thermally by so many dopants is still not well understood. Based on our previous PDF experiments on pure and La-doped alumina, we have developed a hypothesis regarding the mechanism by which dopants increase thermal stability. To validate or refute this hypothesis, we collected PDF data on a wider range of laboratory and industrial alumina samples. Herein, we have utilized PDF analysis to study the local to intermediate-range structure of a series of our pure and silica-doped aluminas calcined at 50°C intervals between 50 and 1300°C as well as pure and silica-doped aluminas from commercial sources and other synthetic methods. This thorough study of alumina local structure will allow us to separate general trends in the local structure from idiosyncrasies based on synthetic method/conditions, and it will help us identify the structural features responsible for improved thermal stability. Having access to these PDF experiments, we have validated our current hypothesis on the nature of stabilization afforded by dopants and, more generally, developed a better understanding of the role structure plays in the properties of aluminas.
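For readers unfamiliar with the pair distribution function (PDF) central to this analysis, the toy sketch below computes a histogram-based g(r) from atomic coordinates in a periodic box. The experimental PDFs in the thesis are obtained from total-scattering data rather than from known coordinates, so this is only a conceptual illustration with invented inputs.

```python
import numpy as np

def pair_distribution(coords, box, r_max, n_bins=100):
    """Histogram-based g(r) for atoms in a cubic periodic box (toy illustration)."""
    n = len(coords)
    rho = n / box ** 3
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        d = coords[i + 1:] - coords[i]
        d -= box * np.round(d / box)              # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r, bins=edges)[0]
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = rho * shell_vol * n / 2.0             # expected pair counts for an ideal gas
    return 0.5 * (edges[:-1] + edges[1:]), counts / ideal

rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 20.0, size=(500, 3))    # random "atoms" in a 20 A box
r, g = pair_distribution(coords, box=20.0, r_max=8.0)
print("g(r) for uncorrelated points hovers around 1:", np.round(g[10:15], 2))
```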
APA, Harvard, Vancouver, ISO, and other styles
41

Vondra, Róbert. "Deformačně napěťová analýza zubové spojky." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-432792.

Full text
Abstract:
This thesis deals with a contact analysis of crown gear couplings under simulated working conditions. Thanks to the shape of its external teeth, a toothed coupling transmits high torque even when the input and output shafts are misaligned; for this reason, it is necessary to design the gear shape correctly. The typical use of toothed couplings involves a wide range of working speeds and transmitted loads, which can cause several problems if the parameters are selected incorrectly. Among the most significant of these is the generation of vibrations during gear engagement, accompanied by noise and excessive loading of components such as shafts and bearings. Even with knowledge of the latest trends in gear development, uneven running cannot be eliminated completely; it is therefore necessary, already at the design stage, to suppress the paths along which vibrations can propagate. Inefficient and costly experiments are often used to determine the correct shape of gear teeth. For this purpose, a computational approach is proposed to describe the contact pressure on the coupling teeth at different misalignments and loads. The model helps in understanding how the contact pressure develops in misaligned operation and how it behaves as the gear coupling rotates. The introductory part of the thesis presents the current state of knowledge of gear couplings and describes the load-distribution issues associated with angular misalignment, torque and friction. The following chapter describes three possible approaches to the problem: analytical, experimental and computational. The work then introduces two different computational models, differing in the tooth shape on the hub and the sleeve. Each geometry was subjected to different load torques, hub misalignments, and rotations of the gear coupling as a whole. The main monitored parameter was the contact pressure distribution at each step when the degree of misalignment or the rotation of the model was changed. Finally, three main sets of contact pressures on the teeth are presented, together with the reduced stress plotted on the toothed rings; the results are derived and the individual load cases are compared with one another. The problem was solved with a computational approach in the FEM program Ansys Workbench.
APA, Harvard, Vancouver, ISO, and other styles
42

Hibler, David A. "Development of a Two-Stage Computational Modeling Method for Drinking Water Microbial Ecology Effects on Legionella pneumophila Growth." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1595509673321504.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Choudrey, Rizwan A. "Variational methods for Bayesian independent component analysis." Thesis, University of Oxford, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275566.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Wang, Danling. "Multifractal characterisation and analysis of complex networks." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/48176/1/Danling_Wang_Thesis.pdf.

Full text
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. The real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use the protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity from the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. We then adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. Using the random sequential box-covering algorithm, we calculate the fractal dimensions for both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication is useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and of a kind of real network, namely the PPI networks of different species.
Our main finding is the existence of multifractality in scale-free networks and PPI networks, while multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks. This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent works indicate that complex network theory can be a powerful tool to analyse time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes by vectors of a certain length in the time series, and weight the edges between any two nodes by the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence larger Hurst exponent, tend to have smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those for binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., it follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts; while for HVG networks of fractional Brownian motions, the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
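The horizontal visibility graph construction used in the third part can be stated very compactly: two samples are linked if every sample strictly between them is lower than both. The sketch below is a generic implementation of that published criterion, applied to white noise (for which the mean degree is known to approach 4); it is not the thesis's code.

```python
import numpy as np

def horizontal_visibility_graph(x):
    """Edges (i, j) such that x[i], x[j] > x[k] for every k strictly between i and j."""
    n = len(x)
    edges = []
    for i in range(n - 1):
        edges.append((i, i + 1))                  # neighbours always see each other
        top = x[i + 1]                            # running max of samples between i and j
        for j in range(i + 2, n):
            if x[i] > top and x[j] > top:         # both ends rise above everything between
                edges.append((i, j))
            top = max(top, x[j])
            if top >= x[i]:                       # nothing further right can be visible from i
                break
    return edges

rng = np.random.default_rng(42)
series = rng.standard_normal(2000)                # white noise as a simple test signal
edges = horizontal_visibility_graph(series)
degree = np.zeros(len(series), dtype=int)
for i, j in edges:
    degree[i] += 1
    degree[j] += 1
print("mean degree (theory: 4 for an i.i.d. series):", degree.mean())
```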
APA, Harvard, Vancouver, ISO, and other styles
45

McGarry, Gregory John. "Model-based mammographic image analysis." Thesis, Queensland University of Technology, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
46

Gorecki, Christophe. "Classification par échantillonnage de la densité spectrale d'énergie : Application à l'étude statistique des surfaces et à l'analyse de particules." Besançon, 1989. http://www.theses.fr/1989BESA2015.

Full text
Abstract:
Study of an optical profilometer based on the defocusing of a white-light beam. Study of two opto-digital statistical analysis devices using optical Fourier techniques: a particle analyser and a device for the automatic classification of unpolished surfaces.
APA, Harvard, Vancouver, ISO, and other styles
47

Akhter, A. S. "Estimating the parameters of the truncated normal distribution." Thesis, University of Essex, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.378366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Gao, Huina. "Analysis of distribution-free methods for revenue management." College Park, Md.: University of Maryland, 2008. http://hdl.handle.net/1903/8783.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2008.
Thesis research directed by: Robert H. Smith School of Business. Dept. of Decision and Information Technologies. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
49

Undin, Torgny. "Development and improvement of methods for characterization of HPLC stationary phases." Licentiate thesis, Uppsala universitet, Analytisk kemi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-166978.

Full text
Abstract:
High Performance Liquid Chromatography (HPLC) is a widely used technique for detecting and purifying substances, both in academia and in industry. To facilitate the use of, and knowledge in, HPLC, characterization of stationary phases is of utmost importance. Tailor-made characterization methods and workflows are steadily increasing the speed and accuracy with which new separation systems and methods are developed. In fundamental separation science and in preparative chromatography there is a constant need for faster and more accurate methods of adsorption isotherm determination. Some of that demand is met by the steady increase in computational power, but the practical aspects of models and methods must also be developed further. These nonlinear characterization methods yield not only models capable of describing the adsorption isotherm but also actual values of local adsorption energies, the monolayer saturation capacity of individual interaction sites, etc. The studies presented in this thesis use modern alkali-stable stationary phases as a model phase, giving insight into hybrid materials and their separation mechanism. The thesis includes an update and expansion of the Elution by Characteristic Points (ECP) method for the determination of adsorption isotherms. The precision is further increased by the ability to use slope data, and usability is improved by a set of guidance rules to be applied when determining adsorption isotherms that have inflection points. The thesis further provides information about stationary-phase characterization and the power of combining existing techniques with one another, and shows what the extension of these methods can reveal in terms of precision and usability. A more holistic view of the benefits of combining nonlinear characterization of a stationary phase with more common linear characterization methods is presented.
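The Elution by Characteristic Points idea discussed above can be sketched with the ideal-model relation dq/dC = (V_R(C) - V_0) / V_a, integrated along the diffuse rear of an overloaded elution profile. In the sketch below, the column parameters and the Langmuir isotherm used to generate the "measured" profile are invented, so the reconstruction is only a self-consistency illustration, not the thesis's extended ECP procedure.

```python
import numpy as np

# Ideal-model ECP reconstruction of an adsorption isotherm from the rear
# (diffuse) part of an overloaded elution profile.
V0, Va = 1.0, 0.5                 # hold-up volume and adsorbent volume (mL), assumed
qs, b = 100.0, 0.05               # Langmuir saturation capacity and constant, assumed

C = np.linspace(1e-4, 10.0, 400)                  # concentrations on the rear profile
VR = V0 + Va * qs * b / (1.0 + b * C) ** 2        # ideal-model retention volume of each C

# ECP step: dq/dC = (VR - V0) / Va, integrated (trapezoid rule) from C = 0 upwards
dq_dC = (VR - V0) / Va
q_ecp = np.concatenate(([0.0], np.cumsum(0.5 * (dq_dC[1:] + dq_dC[:-1]) * np.diff(C))))

q_true = qs * b * C / (1.0 + b * C)
print("max relative error of reconstructed isotherm: %.3e"
      % np.max(np.abs(q_ecp - q_true) / q_true.max()))
```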
APA, Harvard, Vancouver, ISO, and other styles
50

Gaimard, Patricia. "Fonctions de distribution de vitesses non-maxwelliennes dans le plasma ionosphérique et application à la mesure par diffusion incohérente." Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00724945.

Full text
Abstract:
During magnetically disturbed periods, a non-Maxwellian ion velocity distribution function must be considered in order to correctly estimate the ionospheric parameters of the auroral ionosphere measured by incoherent scatter. We compared two models of non-Maxwellian distribution function: the generalized polynomial approximation, which is an analytical solution of the Boltzmann equation, and a numerical distribution based on a Monte Carlo method. For the two ion species O+ and NO+, the two approaches lead to similar results for electric fields below 100 mV/m. The analytical function was introduced into the analysis of EISCAT spectra in order to study the ionosphere under electric fields of up to 100 mV/m. This new analysis was first tested on several sets of simulations, including studies of its sensitivity to the models used. In a second step, we studied spectra exhibiting non-Maxwellian features and thereby determined the composition of the ionosphere under strong electric fields. Finally, with a last series of real data, we quantitatively estimated the error made when non-Maxwellian spectra are given a Maxwellian interpretation, in the case of an ionosphere composed of molecular ions.
APA, Harvard, Vancouver, ISO, and other styles