Academic literature on the topic 'Confidence dispersion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Confidence dispersion.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Confidence dispersion"

1. Chen, Cong, and Robert W. Tipping. "Confidence Interval of a Proportion with Over-dispersion." Biometrical Journal 44, no. 7 (October 2002): 877–86. http://dx.doi.org/10.1002/1521-4036(200210)44:7<877::aid-bimj877>3.0.co;2-7.

2. Geedipally, Srinivas Reddy, and Dominique Lord. "Effects of Varying Dispersion Parameter of Poisson–Gamma Models on Estimation of Confidence Intervals of Crash Prediction Models." Transportation Research Record: Journal of the Transportation Research Board 2061, no. 1 (January 2008): 46–54. http://dx.doi.org/10.3141/2061-06.

Abstract:
In estimating safety performance, the most common probabilistic structures of the popular statistical models used by transportation safety analysts for modeling motor vehicle crashes are the traditional Poisson and Poisson–gamma (or negative binomial) distributions. Because crash data often exhibit overdispersion, Poisson–gamma models are usually the preferred model. The dispersion parameter of Poisson–gamma models had been assumed to be fixed, but recent research in highway safety has shown that the parameter can potentially be dependent on the covariates, especially for flow-only models. Given that the dispersion parameter is a key variable for computing confidence intervals, there is reason to believe that a varying dispersion parameter could affect the computation of confidence intervals compared with confidence intervals produced from Poisson–gamma models with a fixed dispersion parameter. This study evaluates whether the varying dispersion parameter affects the computation of the confidence intervals for the gamma mean (m) and predicted response (y) on sites that have not been used for estimating the predictive model. To accomplish that objective, predictive models with fixed and varying dispersion parameters were estimated by using data collected in California at 537 three-leg rural unsignalized intersections. The study shows that models developed with a varying dispersion parameter greatly influence the confidence intervals of the gamma mean and predictive response. More specifically, models with a varying dispersion parameter usually produce smaller confidence intervals, and hence more precise estimates, than models with a fixed dispersion parameter, both for the gamma mean and for the predicted response. Therefore, it is recommended to develop models with a varying dispersion whenever possible, especially if they are used for screening purposes.
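As a quick illustration of the abstract's central point, that the dispersion parameter directly drives interval width, the sketch below builds a generic negative binomial prediction interval for a hypothetical predicted crash frequency (Python with scipy; the values and the interval construction are illustrative and are not the authors' procedure for the gamma mean or predicted response).

```python
# Sketch: how the dispersion parameter of a Poisson-gamma (negative binomial)
# model changes the width of a prediction interval for a count outcome.
# Illustrative only; not the confidence-interval construction used in the paper.
from scipy import stats

def nb_prediction_interval(mu, alpha, level=0.95):
    """Equal-tailed interval for y ~ NB with mean mu and variance mu + alpha*mu^2."""
    n = 1.0 / alpha              # negative binomial "size" parameter
    p = n / (n + mu)             # success probability in scipy's parameterization
    lo = stats.nbinom.ppf((1 - level) / 2, n, p)
    hi = stats.nbinom.ppf(1 - (1 - level) / 2, n, p)
    return lo, hi

mu = 3.2                          # hypothetical predicted crash frequency
for alpha in (0.2, 0.8, 1.5):     # larger alpha means more overdispersion
    print(alpha, nb_prediction_interval(mu, alpha))
```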

3. Gao, Jie, Xin-Kang Wang, and Hui Chen. "The Relationship Between P-Wave Dispersion, QTc Dispersion, and Gestational Hypertension." American Journal of Hypertension 35, no. 2 (February 1, 2022): 207. http://dx.doi.org/10.1093/ajh/hpab118.

Abstract:
Background To analyze the relationship between gestational hypertension and P-wave dispersion and QTc interval dispersion. Methods From January 2017 to December 2019, 213 pregnant women who met the diagnosis of hypertension were selected as gestational hypertension with 248 healthy pregnant women as controls. The basic data, P-wave dispersion, and QTc dispersion were compared between the 2 groups, and binary logistic regression analysis was performed. Results The minimum time of P wave (78.59 ± 9.32 vs. 94.61 ± 7.03 ms) and the minimum time of QTc (384.65 ± 21.69 vs. 401.91 ± 15.49 ms) in gestational hypertension were significantly lower than those in the controls. The dispersion of P wave (42.75 ± 9.94 vs. 14.91 ± 4.03 ms), QTc (57.15 ± 16.10 vs. 21.42 ± 6.07 ms), the maximum time of P wave (121.34 ± 7.22 vs. 112.53 ± 6.43 ms), and the maximum time of QTc (441.80 ± 25.42 vs. 429.74 ± 27.83 ms) in gestational hypertension were significantly higher than those in the controls (all P < 0.05). Logistic regression analysis showed that P-wave dispersion (odds ratio = 1.795, 95% confidence interval 1.266–2.546), QTc minimum time (odds ratio = 0.905, 95% confidence interval 0.833–0.983), and QTc dispersion (odds ratio = 1.216, 95% confidence interval 1.042–1.420) were correlated with gestational hypertension. Conclusions P-wave dispersion and QTc dispersion are associated with gestational hypertension.
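For readers unfamiliar with how the reported odds ratios and 95% confidence intervals arise from a logistic regression, here is a minimal sketch (Python; the coefficient and standard error are hypothetical and are not taken from the paper).

```python
# Sketch: converting a logistic-regression coefficient and its standard error
# into an odds ratio with a 95% confidence interval. Values are made up.
import math

def odds_ratio_ci(beta, se, z=1.96):
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

beta, se = 0.585, 0.178   # hypothetical coefficient for P-wave dispersion
or_, lo, hi = odds_ratio_ci(beta, se)
print(f"OR = {or_:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```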

4. Chen, Xue-Dong, Nian-Sheng Tang, and Xue-Ren Wang. "On confidence regions of semiparametric nonlinear reproductive dispersion models." Statistics 43, no. 6 (December 2009): 583–95. http://dx.doi.org/10.1080/02331880802689332.

5. Yosboonruang, Noppadon, Sa-Aat Niwitpong, and Suparat Niwitpong. "Simultaneous confidence intervals for all pairwise differences between the coefficients of variation of rainfall series in Thailand." PeerJ 9 (June 23, 2021): e11651. http://dx.doi.org/10.7717/peerj.11651.

Abstract:
The delta-lognormal distribution is a combination of binomial and lognormal distributions, and so rainfall series that include zero and positive values conform to this distribution. The coefficient of variation is a good tool for measuring the dispersion of rainfall. Statistical estimation can be used not only to illustrate the dispersion of rainfall but also to describe the differences between rainfall dispersions from several areas simultaneously. Therefore, the purpose of this study is to construct simultaneous confidence intervals for all pairwise differences between the coefficients of variation of delta-lognormal distributions using three methods: fiducial generalized confidence interval, Bayesian, and the method of variance estimates recovery. Their performances were gauged by measuring their coverage probabilities together with their expected lengths via Monte Carlo simulation. The results indicate that the Bayesian credible interval using the Jeffreys’ rule prior outperformed the others in virtually all cases. Rainfall series from five regions in Thailand were used to demonstrate the efficacies of the proposed methods.
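To make the target quantity concrete, the sketch below computes Bonferroni-adjusted percentile-bootstrap intervals for all pairwise differences between coefficients of variation (Python/NumPy; synthetic positive-valued data, and a generic alternative rather than the fiducial, Bayesian, or MOVER constructions the paper evaluates).

```python
# Sketch: simultaneous (Bonferroni-adjusted) bootstrap intervals for all
# pairwise differences between coefficients of variation of several samples.
import itertools
import numpy as np

def cv(x):
    return np.std(x, ddof=1) / np.mean(x)

def pairwise_cv_intervals(samples, level=0.95, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    pairs = list(itertools.combinations(range(len(samples)), 2))
    alpha = (1 - level) / len(pairs)          # Bonferroni adjustment
    out = {}
    for i, j in pairs:
        diffs = [cv(rng.choice(samples[i], samples[i].size, replace=True))
                 - cv(rng.choice(samples[j], samples[j].size, replace=True))
                 for _ in range(n_boot)]
        out[(i, j)] = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return out

# Hypothetical rainfall-like series (positive values only, for simplicity)
regions = [np.random.default_rng(k).gamma(2.0, 5.0, 60) for k in range(3)]
print(pairwise_cv_intervals(regions))
```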

6. Thangjai, Warisa, Sa-Aat Niwitpong, and Suparat Niwitpong. "Bayesian Confidence Intervals for Coefficients of Variation of PM10 Dispersion." Emerging Science Journal 5, no. 2 (April 1, 2021): 139–54. http://dx.doi.org/10.28991/esj-2021-01264.

Abstract:
Herein, we propose the Bayesian approach for constructing the confidence intervals for both the coefficient of variation of a log-normal distribution and the difference between the coefficients of variation of two log-normal distributions. For the first case, the Bayesian approach was compared with large-sample, Chi-squared, and approximate fiducial approaches via Monte Carlo simulation. For the second case, the Bayesian approach was compared with the method of variance estimates recovery (MOVER), modified MOVER, and approximate fiducial approaches using Monte Carlo simulation. The results show that the Bayesian approach provided the best approach for constructing the confidence intervals for both the coefficient of variation of a log-normal distribution and the difference between the coefficients of variation of two log-normal distributions. To illustrate the performances of the confidence limit construction approaches with real data, they were applied to analyze real PM10 datasets from the Nan and Chiang Mai provinces in Thailand, the results of which are in agreement with the simulation results.
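A minimal sketch of one Bayesian route to such an interval for the log-normal coefficient of variation, CV = sqrt(exp(sigma^2) - 1), using the standard noninformative-prior posterior for the log-scale variance (Python with scipy; synthetic data, and not necessarily the exact construction compared in the paper).

```python
# Sketch: Bayesian credible interval for the coefficient of variation of a
# log-normal distribution. Under the usual noninformative prior, sigma^2 given
# the log-data is inverse-gamma, which we sample and transform.
import numpy as np
from scipy import stats

def lognormal_cv_credible_interval(x, level=0.95, n_draws=20000, seed=0):
    logx = np.log(x)
    n, s2 = logx.size, np.var(logx, ddof=1)
    rng = np.random.default_rng(seed)
    # sigma^2 | data ~ Inv-Gamma((n - 1)/2, (n - 1)*s2/2)
    sigma2 = stats.invgamma.rvs((n - 1) / 2, scale=(n - 1) * s2 / 2,
                                size=n_draws, random_state=rng)
    cv = np.sqrt(np.exp(sigma2) - 1)
    return np.quantile(cv, [(1 - level) / 2, 1 - (1 - level) / 2])

pm10 = np.random.default_rng(1).lognormal(mean=3.5, sigma=0.4, size=50)  # fake data
print(lognormal_cv_credible_interval(pm10))
```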

7. Bonett, Douglas G., and Edith Seier. "Confidence Interval for a Coefficient of Dispersion in Nonnormal Distributions." Biometrical Journal 48, no. 1 (February 2006): 144–48. http://dx.doi.org/10.1002/bimj.200410148.

8. Pobedin, A. V., A. A. Dolotov, and A. I. Iskaliev. "Estimated determination of noise dispersion of towing vehicles." Izvestiya MGTU MAMI 9, no. 2-1 (January 20, 2015): 119–22. http://dx.doi.org/10.17816/2074-0530-67265.

Abstract:
The paper discusses issues related to the possibility of obtaining the value of noise dispersion of towing vehicles by calculation. The authors obtained mathematical expressions to determine the dispersion depending on the confidence intervals of individual noise sources. The article presents a general procedure for evaluating the probable dispersion of the results of the estimated noise of a vehicle.

9. Shilane, David, and Derek Bean. "Growth Estimators and Confidence Intervals for the Mean of Negative Binomial Random Variables with Unknown Dispersion." Journal of Probability and Statistics 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/602940.

Abstract:
The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.
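A toy rendering of the zero-removal idea described in the abstract (Python; this is explicitly not the authors' growth estimator, only an illustration of trimming a small, predetermined number of zeros before forming a normal-style interval).

```python
# Toy sketch: normal-style confidence interval for an overdispersed count
# sample after dropping at most k zeros. Not the growth estimator itself.
import numpy as np
from scipy import stats

def zero_trimmed_normal_ci(x, k=2, level=0.95):
    x = np.asarray(x, dtype=float)
    x = np.delete(x, np.flatnonzero(x == 0)[:k])    # drop at most k zeros
    m, se = x.mean(), x.std(ddof=1) / np.sqrt(x.size)
    z = stats.norm.ppf(1 - (1 - level) / 2)
    return m - z * se, m + z * se

rng = np.random.default_rng(7)
sample = rng.negative_binomial(n=0.3, p=0.1, size=40)   # heavily overdispersed counts
print(zero_trimmed_normal_ci(sample))
```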

10. Saha, Krishna K., Debaraj Sen, and Chun Jin. "Profile likelihood-based confidence interval for the dispersion parameter in count data." Journal of Applied Statistics 39, no. 4 (April 2012): 765–83. http://dx.doi.org/10.1080/02664763.2011.616581.

Dissertations / Theses on the topic "Confidence dispersion"

1. Erlank, Warrick. "Constructing confidence intervals for polarization mode dispersion." Thesis, Nelson Mandela Metropolitan University, 2008. http://hdl.handle.net/10948/951.

Abstract:
Polarization mode dispersion (PMD) causes significant impairment in high bit-rate optical telecommunications systems. Knowledge of a fibre’s PMD mean value, and the relevant confidence interval, is essential for determining a fibre’s maximum allowable bit-rate. Various methods of confidence interval construction for time series data were tested in this dissertation using simulation. These included the autocovariance-matrix methods as suggested by Box and Jenkins, as well as the more practical and simpler batch means methods. Some of these methods were shown to be significantly better than the standard method of calculating confidence intervals for non time series data. The best of the tested methods were used on actual PMD data. The effect of using polarization scramblers was also tested.
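The batch means approach mentioned in the abstract is simple enough to sketch directly (Python; synthetic autocorrelated data standing in for repeated PMD measurements).

```python
# Sketch: non-overlapping batch means confidence interval for the mean of a
# correlated series. Trims the series so it divides evenly into batches.
import numpy as np
from scipy import stats

def batch_means_ci(series, n_batches=10, level=0.95):
    x = np.asarray(series, dtype=float)
    x = x[: (x.size // n_batches) * n_batches]
    means = x.reshape(n_batches, -1).mean(axis=1)        # one mean per batch
    grand = means.mean()
    se = means.std(ddof=1) / np.sqrt(n_batches)
    t = stats.t.ppf(1 - (1 - level) / 2, df=n_batches - 1)
    return grand - t * se, grand + t * se

rng = np.random.default_rng(0)
noise = rng.normal(size=1000)
series = 5 + np.convolve(noise, np.ones(20) / 20, mode="same")   # autocorrelated stand-in
print(batch_means_ci(series))
```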

2. Li, Shihe. "The bright side of belief dispersion within TMTs." Thesis, 2019. http://hdl.handle.net/2440/124328.

Abstract:
Prior research shows that firm innovation policies can be affected by the perceptional lens of their top executives. However, existing literature has exclusively focused on CEOs and ignored the interactional effect among CEO and non-CEO executives. In this study, I examine the effect of having different opinions within a firm’s TMT and find that belief dispersion within the TMT is positively related to corporate innovative efficiency. This result still holds after considering potential endogeneity in the baseline regressions. Next, I use three subsample analyses to indicate that information sharing and bias reduction are potentially the channels of the positive effect of belief dispersion. The last section of this thesis provides evidence that illustrates a positive relationship between TMTs belief dispersion and firm performance, yielding further corporate implications.
Thesis (MPhil.) -- University of Adelaide, Business School, 2020

Books on the topic "Confidence dispersion"

1. Ray, Sumantra (Shumone), Sue Fitzpatrick, Rajna Golubic, Susan Fisher, and Sarah Gibbings, eds. Navigating research methods: basic concepts in biostatistics and epidemiology. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780199608478.003.0002.

Abstract:
This chapter provides an overview of the basic concepts in biostatistics and epidemiology. Section 1: Basic concepts in biostatistics The concepts in biostatistics include: 1. descriptive statistical methods (which comprise of frequency distribution, distribution shapes, and measures of central tendency and dispersion); and 2. inferential statistics which is applied to make inferences about a population from the sample data. Non-probability and probability sampling methods are outlined. This section provides simple explanation of the complex concepts of significance tests and confidence intervals and their corresponding interpretation. Correlation and regression methods used to describe the association between two quantitative variables are also explained. This section also provides an overview of when to use which statistical test given the type of data and the nature of the research question. Section 2: Basic concepts in epidemiology This section begins with the definitions of normality. Next, the interpretation of diagnostic tests and clinical prediction are explained and the definitions of sensitivity, specificity, positive predictive value and negative predictive value are provided. The relationship between these four constructs is discussed. The application of this concepts in the treatment and prevention is discussed.
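As a small companion to the diagnostic-test definitions summarized above, a sketch computing sensitivity, specificity, and the predictive values from a hypothetical 2x2 table (Python).

```python
# Sketch: diagnostic-test measures from a 2x2 table of hypothetical counts.
def diagnostic_measures(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

print(diagnostic_measures(tp=90, fp=20, fn=10, tn=180))
```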

2. Alexander, Peter D. G., and Malachy O. Columb. Presentation and handling of data, descriptive and inferential statistics. Edited by Jonathan G. Hardman. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199642045.003.0028.

Abstract:
The need for any doctor to comprehend, assimilate, analyse, and form an opinion on data cannot be overestimated. This chapter examines the presentation and handling of such data and its subsequent statistical analysis. It covers the organization and description of data, measures of central tendency such as mean, median, and mode, measures of dispersion (standard deviation), and the problems of missing data. Theoretical distributions, such as the Gaussian distribution, are examined and the possibility of data transformation discussed. Inferential statistics are used as a means of comparing groups, and the rationale and use of parametric and non-parametric tests and confidence intervals is outlined. The analysis of categorical variables using the chi-squared test and assessing the value of diagnostic tests using sensitivity, specificity, positive and negative predictive values, and a likelihood ratio are discussed. Measures of association are covered, namely linear regression, as is time-to-event analysis using the Kaplan–Meier method. Finally, the chapter discusses the statistical analysis used when comparing clinical measurements—the Bland and Altman method. Illustrative examples, relevant to the practice of anaesthesia, are used throughout and it is hoped that this will provide the reader with an outline of the methodologies employed and encourage further reading where necessary.
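A brief sketch of the Bland and Altman comparison mentioned at the end of the abstract, computing the bias and 95% limits of agreement for hypothetical paired measurements (Python/NumPy).

```python
# Sketch: Bland-Altman bias and 95% limits of agreement for two methods
# measuring the same quantity. Data below are hypothetical.
import numpy as np

def bland_altman_limits(a, b):
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

rng = np.random.default_rng(3)
method_a = rng.normal(100, 10, 30)
method_b = method_a + rng.normal(0.5, 2.0, 30)   # small systematic offset plus noise
print(bland_altman_limits(method_a, method_b))
```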

3. Ferreira, Eliel Alves, and João Vicente Zamperion. Excel: Uma ferramenta estatística. Brazil Publishing, 2021. http://dx.doi.org/10.31012/978-65-5861-400-5.

Abstract:
This study aims to present the concepts and methods of statistical analysis using the Excel software, in a simple way aiming at a greater ease of understanding of students, both undergraduate and graduate, from different areas of knowledge. In Excel, mainly Data Analysis Tools will be used. For a better understanding, there are, in this book, many practical examples applying these tools and their interpretations, which are of paramount importance. In the first chapter, it deals with introductory concepts, such as introduction to Excel, the importance of statistics, concepts and definitions. Being that in this will be addressed the subjects of population and sample, types of data and their levels of measurement. Then it brings a detailed study of Descriptive Statistics, where it will be studied percentage, construction of graphs, frequency distribution, measures of central tendency and measures of dispersion. In the third chapter, notions of probability, binomial and normal probability distribution will be studied. In the last chapter, Inferential Statistics will be approached, starting with the confidence interval, going through the hypothesis tests (F, Z and t tests), ending with the statistical study of the correlation between variables and simple linear regression. It is worth mentioning that the statistical knowledge covered in this book can be useful for, in addition to students, professionals who want to improve their knowledge in statistics using Excel.

Conference papers on the topic "Confidence dispersion"

1. Wenhe, Shi, Liu Xiangjun, and Li Mailin. "An improved attribute recognition algorithm of overhead line engineering evaluation based on confidence dispersion." In 2017 6th International Conference on Computer Science and Network Technology (ICCSNT). IEEE, 2017. http://dx.doi.org/10.1109/iccsnt.2017.8343472.

2. Wang, Mengxi, Na Xue, and Xinjian Liu. "Research on Optimization of Ingestion Emergency Planning Zone Sizing." In 2018 26th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/icone26-81285.

Abstract:
Food contamination has aroused public concern since Fukushima accident. As emergency preparedness is often viewed as an important approach to protect staff working on site and public around the site, ingestion emergency planning zone (EPZ) is applied to protect public from the exposure of contaminated food. Ingestion EPZ is one of the technical foundations for nuclear emergency preparedness, which will be influenced by design features of plant and characteristics of the site. This paper is devoted to the research on the optimization of ingestion EPZ sizing from the view of the atmospheric dispersion model and the food chain model, which are crucial points for the sizing of ingestion EPZ. Compared to the traditional straight-line Gaussian plume model with a quite conservative assumption that plume segments always transport in the downwind direction, the Lagrangian Gaussian puff model considers the swing of wind direction over time, which makes the simulation more realistic. With the results of radionuclide concentrations evaluated by the dispersion model, the transportation of the radionuclides in food is simulated by the food chain model. The traditional food chain model is essentially a static model with no consideration that food contamination level has a strong dependence on the accident date, which may overstate the risk from nuclear plant accidents and result in unfounded fear of public. The dynamic food chain model, which takes daily changes of plant biomass, or livestock feeding periods in consideration, has been developed to estimate radionuclide concentrations in different foodstuffs. On basis of the study of the dispersion models and food chain models above, we evaluate the ingestion EPZ size of Tianwan NPP by choosing the comparatively realistic ones from them. In the scenario considered in this paper, the simulation domain of Tianwan NPP within 80km-range and hourly time-step is applied, and meteorological conditions are carefully set according to observation data in recent years. Results show that there is significant margin and conservatism in the traditional ingestion EPZ sizing. Radionuclide concentrations predicted by the Lagrangian Gaussian puff model is almost an order of magnitude lower than the Gaussian plume model. Moreover, the dynamic food chain model considers the seasonal effect that simulation results of radionuclide concentrations in foodstuffs are significantly higher in summer than in winter, which helps to make a more realistic consideration of ingestion pathway. This research gives an example of the application of new models for the optimization of ingestion EPZ sizing, which may contribute to strengthen public confidence in nuclear safety and emergency preparedness.
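For orientation, the straight-line Gaussian plume model that the paper treats as its conservative baseline reduces to a closed-form ground-level concentration; the sketch below shows its structure (Python; the power-law dispersion coefficients are hypothetical placeholders, not the stability-class curves a real assessment would use).

```python
# Sketch: ground-level concentration from the classical straight-line Gaussian
# plume model (with ground reflection), the conservative baseline in the paper.
import numpy as np

def plume_ground_concentration(Q, u, H, x, y):
    """Q: release rate, u: wind speed, H: effective release height, x/y in metres."""
    sigma_y = 0.08 * x ** 0.90         # hypothetical dispersion coefficients
    sigma_z = 0.06 * x ** 0.85
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * np.exp(-H**2 / (2 * sigma_z**2)))

for x in (500.0, 2000.0, 10000.0):     # downwind distances in metres
    print(x, plume_ground_concentration(Q=1.0, u=3.0, H=60.0, x=x, y=0.0))
```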

3. Gürbüz, Gözde, İlknur Kumkale, and Adil Oğuzhan. "The Effects of Empowerment Applications on Organizational Loyality in the Banking Sector: A Study of Trakya Region." In International Conference on Eurasian Economies. Eurasian Economists Association, 2013. http://dx.doi.org/10.36880/c04.00767.

Abstract:
It is necessary that several applications should be done by the firms to adaptation to the changing market conditions and taking advantage of loyality. The first-major one of these applications is “empowerment” which is a modern management application. Efforts and labor of employees, who are so important for the firms’ developing and growing process, make valuable the firms. As businesses are aware of this, give power, responsibility, authority, and confidence to the employees; and thus they will be empowered. When the employees feel that they are empowered, their loyality will increase to the employer. This is important for the firms in today's hard conditions. In this study, it was investigated how was applied empowerment in the banking sector that there is intensive competition and how the empowerment effect the loyality level on the organization. The study is done via first data acquired from questionnary which were applied to 382 employees in 20 banks in Edirne, Tekirdağ and Kırklareli city-centers, and seconder data is from the literrature. The reliability test, demographical dispersion of employees, interpretation of employee empowerment and organizational commitment’s surveys, factor analiysis, variation tests (Kolmogrov-Smirnov Z, Mann-Whitney U, Kruskal-Wallis), regression and correlation analysis was made by SPSS. As a result of the study, it is concluded that, the empowerment applications in the banking sector, increased the loyality of the employee to the organizations.

4. Sharma, Jai, and Vidhyacharan Bhaskar. "An RNA Sequencing Analysis of Glaucoma Genesis in Mice." In 12th International Conference on Artificial Intelligence, Soft Computing and Applications. Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.122306.

Abstract:
Glaucoma is the leading cause of irreversible blindness in people over the age of 60, accounting for 6.6 to 8% of all blindness in 2010, but there is still much to be learned about the genetic origins of the eye disease. With the modern development of Next-Generation Sequencing (NGS) technologies, scientists are starting to learn more about the genetic origins of Glaucoma. This research uses differential expression (DE) and gene ontology (GO) analyses to study the genetic differences between mice with severe Glaucoma and multiple control groups. Optical nerve head (ONH) and retina data samples of genome-wide RNA expression from NCBI (NIH) are used for pairwise comparison experimentation. In addition, principal component analysis (PCA) and dispersion visualization methods are employed to perform quality control tests of the sequenced data. Genes with skewed gene counts are also identified, as they may be marker genes for a particular severity of Glaucoma. The gene ontologies found in this experiment support existing knowledge of Glaucoma genesis, providing confidence that the results were valid. Future researchers can thoroughly study the gene lists generated by the DE and GO analyses to find potential activator or protector genes for Glaucoma in mice to develop drug treatments or gene therapies to slow or stop the progression of the disease. The overall goal is that in the future, such treatments can be made for humans as well to improve the quality of life for human patients with Glaucoma and reduce Glaucoma blindness rates.
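A typical PCA quality-control step of the kind mentioned in the abstract can be sketched in a few lines (Python with scikit-learn; random placeholder counts rather than the NCBI samples analyzed in the paper).

```python
# Sketch: PCA quality control of an RNA-seq count matrix after a simple
# log transform. The counts here are random placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
counts = rng.poisson(lam=20, size=(12, 500))     # 12 samples x 500 genes
log_counts = np.log2(counts + 1)                 # crude variance-stabilising transform

pca = PCA(n_components=2)
coords = pca.fit_transform(log_counts)
print(pca.explained_variance_ratio_)             # variance explained by PC1 and PC2
print(coords[:3])                                # first three samples in PC space
```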

5. Cassinelli, Andrea, Hui Xu, Francesco Montomoli, Paolo Adami, Raul Vazquez Diaz, and Spencer J. Sherwin. "On the Effect of Inflow Disturbances on the Flow Past a Linear LPT Vane Using Spectral/hp Element Methods." In ASME Turbo Expo 2019: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/gt2019-91622.

Abstract:
The recent development and increasing integration of high performance computing, scale resolving CFD and high order unstructured methods offers a potential opportunity to deliver a simulation-based capability (i.e. virtual) for aerodynamic research, analysis and design of industrial relevant problems in the near future. In particular, the tendency towards high order spectral/hp element methods is motivated by their desirable dispersion-diffusion properties, that are combined to accuracy and flexibility for complex geometries. Previous work from the Authors focused on developing guidelines for the use of these methods as a virtual cascade for turbomachinery applications. Building on such experiments, the present contribution analyzes the performance of a representative industrial cascade at moderate Reynolds number with various levels and types of inflow disturbances, adopting the incompressible Navier-Stokes solver implemented in the Nektar++ software framework. The introduction of a steady/unsteady spanwise-nonuniform momentum forcing in the leading edge region was tested, to break the flow symmetry upstream of the blade and investigate the change in transition mechanism in the aft portion of the suction surface. To provide a systematic synthetic turbulence generation tool, a parallelised version of Davidson’s method is incorporated and applied for the first time in the software framework to a low pressure turbine vane. The clean results of the cascade are compared to various levels of momentum forcing and inflow turbulence, looking at blade wall distributions, wake profiles and boundary layer parameters. Low levels of background disturbances are found to improve the agreement with experimental data. The results support the confidence for using high order spectral methods as a standalone performance analysis tool but, at the same time, underline the sensitivity at these flow regimes to disturbances or instabilities in the real environment when comparing to rig data.

6. Gaidelys, Vaidas, and Emilija Naudžiūnaitė. "EVALUATION OF THE MATHEMATICAL MODELLING METHODS AVAILABLE IN THE MARKET." In 12th International Scientific Conference „Business and Management 2022“. Vilnius Gediminas Technical University, 2022. http://dx.doi.org/10.3846/bm.2022.725.

Abstract:
The major purpose of this research is to analyse and select the relevant mathematical modelling methods that will be employed for developing an algorithm. To fulfil the major purpose, three following objectives were raised. First, to select and substantiate the most common mathematical modelling methods. Second, to test the pre-selected methods under laboratory conditions so that the most relevant method for implementing the target project could be identified. Third, to prepare at least 3 models for application. The research results indicate that when evaluating the respiratory virus (SARS-CoV-2 causing COVID-19) concentration and survival rate dependence on a number of traits, the methods of descriptive statistics, confidence intervals, hypothesis testing, dispersion analysis, trait dependence analysis, and regression analysis are employed. All the above-listed methods were tested under laboratory conditions and thus can be applied to evaluate the effectiveness of the project product – a device designed to prevent transmission of respiratory viruses through air droplets. Selection of a particular method depends on a set of traits to be analysed, a trait type (quantitative, qualitative), a trait distribution type, and parameters. In the context of COVID-19, there is an urgent need to bring new products to market. Since most of the new products developed are directly related to research, it is very important to calculate the algorithms required to provide the service. Therefore, in order to calculate the optimal algorithm, it is necessary to analyze the algorithms already on the market. In this way, the products developed can gain a competitive advantage over competitors’ products. Given that the equipment placed on the market will be equipped with HINS radiation sources, such a product will become original and new on the market. Therefore, it is necessary to evaluate several methods of mathematical modelling. It is also necessary to take into account that the placing on the market of a product takes place in the context of global competition.

7. Adams, J. F., S. R. Biggs, M. Fairweather, D. Njobuenwu, and J. Yao. "Theoretical Modelling of Nuclear Waste Flows." In ASME 2009 12th International Conference on Environmental Remediation and Radioactive Waste Management. ASMEDC, 2009. http://dx.doi.org/10.1115/icem2009-16377.

Abstract:
A large amount of nuclear waste is stored in tailings ponds as a solid-liquid slurry, and liquid flows containing suspensions of solid particles are encountered in the treatment and disposal of this waste. In processing this waste, it is important to understand the behaviour of particles within the flow in terms of their settling characteristics, their propensity to form solid beds, and the re-suspension characteristics of particles from a bed. A clearer understanding of such behaviour would allow the refinement of current approaches to waste management, potentially leading to reduced uncertainties in radiological impact assessments, smaller waste volumes and lower costs, accelerated clean-up, reduced worker doses, enhanced public confidence and diminished grounds for objection to waste disposal. Mathematical models are of significant value in nuclear waste processing since the extent of characterisation of wastes is in general low. Additionally, waste processing involves a diverse range of flows, within vessels, ponds and pipes. To investigate experimentally all waste form characteristics and potential flows of interest would be prohibitively expensive, whereas the use of mathematical models can help to focus experimental studies through the more efficient use of existing data, the identification of data requirements, and a reduction in the need for process optimisation in full-scale experimental trials. Validated models can also be used to predict waste transport behaviour to enable cost effective process design and continued operation, to provide input to process selection, and to allow the prediction of operational boundaries that account for the different types and compositions of particulate wastes. In this paper two mathematical modelling techniques, namely Reynolds-averaged Navier-Stokes (RANS) and large eddy simulation (LES), have been used to investigate particle-laden flows in a straight square duct and a duct with a bend. The flow solutions provided by these methods have been coupled to a three-dimensional Lagrangian particle tracking routine to predict particle trajectories. Simulation results are shown to be good agreement with experimental data, where available. Based on the LES and RANS-Lagrangian methods, the mean value of the particle displacement in a straight square duct is found to generally decrease with time due to gravity effects, with the rate of deposition increasing with particle size. Using the RANS-Lagrangian method to study flows in a duct bend, there is good agreement between predicted profiles and data, with the method able to simulate particle dispersion, the phenomenon of particle roping and the increase of particle collisions with the bend-wall with particle size. With the LES-Lagrangian method, particle re-suspension from a bed is studied in a straight square duct flow and this process shown to be dominated by secondary flows within the duct, with smaller particles tending to re-suspend in preference to larger ones. Overall, the study demonstrates that modelling techniques can be used to provide insight in to processes that are of relevance to the processing of nuclear waste, and are capable of predicting their transport behaviour. In particular, they are able to provide reliable predictions of particle deposition within flows to form solid beds, the re-suspension of particles from a bed, and the influence of complex flow geometries on particle dispersion. In the latter case, they are also of value to studies of erosion due to particle impact. 
Such models are therefore of value as engineering tools for use in the prediction of waste behaviour and in cost effective process design.
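As a very rough illustration of the Lagrangian particle tracking these simulations couple to the RANS and LES flow fields, here is a single-particle toy with a Stokes-drag response time and gravity in a fixed placeholder velocity field (Python; nothing in it reproduces the paper's duct flows or turbulence).

```python
# Toy sketch: explicit-Euler tracking of one particle with Stokes drag and
# gravity in a prescribed (constant) carrier velocity. Illustration only.
import numpy as np

def track_particle(tau_p, dt=1e-3, steps=5000):
    g = np.array([0.0, 0.0, -9.81])
    u_fluid = np.array([0.5, 0.0, 0.0])          # placeholder carrier velocity
    x = np.zeros(3)
    v = np.zeros(3)
    for _ in range(steps):
        a = (u_fluid - v) / tau_p + g            # Stokes drag + gravity
        v = v + a * dt
        x = x + v * dt
    return x

print(track_particle(tau_p=0.01))   # small response time: follows flow, settles slowly
print(track_particle(tau_p=0.5))    # large response time: lags flow, settles faster
```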

8. How, Elijah Lip Heng, Adam Donald, Pierre Bettinelli, Pongsit Chongrueanglap, Woi Loon Hooi, and Anniza Ai Mei Soh. "Verticalized Sonic Measurements in Deviated Wellbore for Accurate Velocity Modelling and Seismic Well Tie in Offshore Malaysia." In Offshore Technology Conference Asia. OTC, 2022. http://dx.doi.org/10.4043/31641-ms.

Abstract:
Vertical seismic profile (VSP) or checkshot surveys are useful measurements to obtain accurate time-depth pairs for time-depth conversion in seismic exploration. However, in deviated wells, the standard geometry correction for rig-source VSPs will not provide reliable time-depth profiles because of ray bending, anisotropy, and lateral velocity variation effects. The accuracy of the time-depth profile can be improved by using model-based correction or vertical incidence VSP simulation with transversely isotropic (TI) data from an advanced sonic measurement. Elastic anisotropy parameters derived from sonic combined with VSP time-depth information are shown to accurately place a deviated wellbore within the reservoir to improve the drainage and productivity of a reservoir in offshore Malaysia. For rig-source VSP in a deviated well, the source-receiver travel path is not a vertical straight line, but an oblique, refracted path. The seismic waves from the source travel along straight paths within a layer of constant velocity. On entering another layer, they undergo refraction and the direction of travel changes. The pseudo-vertical incidence VSP is simulated with a velocity model to accurately calculate the vertical traveltime. This deviated well passes through various layers of overburden before reaching the target reservoirs. Observations from the dipole shear anisotropy, formation dip, and using dispersion analysis, indicate that these shales can be considered transversely isotropic with a vertical axis of symmetry. A single well probabilistic inversion was used to solve for the five anisotropic constants by combining the sonic measurements and prior elastic anisotropy relationships. This advanced model-based correction was the optimal solution to improve the accuracy of checkshot time-depth velocity data in combination with the anisotropic velocity model. Isotropic model-based correction showed a 6-ms time difference compared with standard VSP geometry correction. However, the sonic data in the overburden formations showed a significant amount of layering that gave rise to significant uncertainty in the existing velocity model and thus the position of the top reservoir. The anisotropic parameters were determined at sonic scale for the shale directly overlaying the reservoir. The upscaled anisotropic velocity model showed that an 18-ms time difference with standard VSP geometry correction changed the depth of the reservoir up to 45 m. The new model now placed the reservoir at the correct position and can be used with more confidence for development purposes.

9. Yu, Jyh-Cheng, and Wen-Chung Ho. "Modified Sequential Programming for Feasibility Robustness of Constrained Design Optimization." In ASME 2000 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2000. http://dx.doi.org/10.1115/detc2000/dac-14531.

Abstract:
Design parameters are subject to variations of manufactures, environments, and applications, which result in output deviations and constraint uncertainties. Quality products must perform to specifications despite these variations. Conventional constrained optimum may not be statistically feasible due to design variations. This paper addresses design variation characteristics and proposes a design procedure to ensure feasibility robustness in design optimization. Product life cycles often affect design variables with characteristic patterns. The Design Variation Hyper Sphere (DVHS) is presented using the concept of statistical joint confidence regions and decoupling techniques to characterize the coupled variations of design variables. The pattern represents the possible design dispersions at a specified probability, which is a hyper sphere for normal variables. The radius of the hyper sphere is determined by the feasibility requirement. The proposed robust optimization algorithm, SROP, introduces DVHS to the sequential quadratic programming, and modifies the feasible region to accommodate the activity uncertainty. The procedure ensures the design feasibility without over sacrificing the performance optimality. The design of a helical spring serves as an illustrative example of the proposed procedure.
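The joint-confidence-region construction behind the proposed hyper sphere can be sketched for jointly normal design variables (Python with scipy; a generic chi-square and Cholesky illustration of the idea, not the SROP algorithm itself).

```python
# Sketch: for jointly normal design variables, a Cholesky factor of the
# covariance decorrelates the variation; the region of probability p is then a
# sphere whose squared radius is the chi-square quantile with k degrees of freedom.
import numpy as np
from scipy import stats

def variation_sphere(cov, p=0.99):
    cov = np.asarray(cov, dtype=float)
    k = cov.shape[0]
    radius = np.sqrt(stats.chi2.ppf(p, df=k))   # radius in decorrelated coordinates
    L = np.linalg.cholesky(cov)                 # maps the unit sphere to the ellipsoid
    return radius, L

cov = [[0.04, 0.01], [0.01, 0.09]]              # hypothetical tolerances on two variables
r, L = variation_sphere(cov)
print(r)                                        # sphere radius at 99% probability
print(L @ (r * np.array([1.0, 0.0])))           # one boundary point mapped to design space
```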

10. Hensberry, Kevin, Narayan Kovvali, Kuang C. Liu, Aditi Chattopadhyay, and Antonia Papandreou-Suppappola. "Guided Wave Based Fatigue Crack Detection and Localization in Aluminum Aerospace Structures." In ASME 2012 Conference on Smart Materials, Adaptive Structures and Intelligent Systems. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/smasis2012-8241.

Abstract:
The work presented in this paper provides an insight into the current challenges to detect incipient damage in complex metallic structural components. The goal of this research is to improve the confidence level in diagnosis and damage localization technologies by developing a robust structural health management (SHM) framework. Improved methodologies are developed for reference-free localization of fatigue induced cracks in complex metallic structures. The methodologies for damage interrogation involve damage feature extraction using advanced signal processing tools and a probabilistic approach for damage detection and localization. Specifically, piezoelectric transducers are used in pitch-catch mode to interrogate the structure with guided Lamb waves. A novel time-frequency (TF) based signal processing technique based on the matching pursuit decomposition (MPD) algorithm is developed to extract time-of-flight damage features from dispersive guided wave sensor signals, followed by a Bayesian probabilistic approach used to optimally fuse multi-sensor information and localize the crack tip. The MPD algorithm decomposes a signal using localized TF atoms and can provide a highly concentrated TF representation. The Bayesian probabilistic framework enables the effective quantification and management of uncertainty. Experiments are conducted to validate the proposed detection and localization methods. Results presented will illustrate the usefulness of the developed approaches in detection and localization of damage in aluminum lug joints.
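A bare-bones matching pursuit loop helps clarify the decomposition step described in the abstract (Python/NumPy; a generic unit-norm dictionary rather than the Gaussian time-frequency atoms used in the MPD algorithm).

```python
# Sketch: plain matching pursuit. Repeatedly pick the dictionary atom most
# correlated with the residual and subtract its contribution.
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=5):
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual           # correlation with each atom
        k = np.argmax(np.abs(corr))
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]   # remove the selected component
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.normal(size=(128, 40))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
signal = 2.0 * D[:, 3] - 1.5 * D[:, 17]          # synthetic two-atom signal
coeffs, res = matching_pursuit(signal, D, n_iter=4)
print(np.nonzero(coeffs)[0], np.linalg.norm(res))
```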