Academic literature on the topic 'Prediction theory Mathematical models'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Prediction theory Mathematical models.'


Journal articles on the topic "Prediction theory Mathematical models"

1

Majda, Andrew, and Nan Chen. "Model Error, Information Barriers, State Estimation and Prediction in Complex Multiscale Systems." Entropy 20, no. 9 (August 28, 2018): 644. http://dx.doi.org/10.3390/e20090644.

Full text
Abstract:
Complex multiscale systems are ubiquitous in many areas. This research expository article discusses the development and applications of a recent information-theoretic framework as well as novel reduced-order nonlinear modeling strategies for understanding and predicting complex multiscale systems. The topics include the basic mathematical properties and qualitative features of complex multiscale systems, statistical prediction and uncertainty quantification, state estimation or data assimilation, and coping with the inevitable model errors in approximating such complex systems. Here, the information-theoretic framework is applied to rigorously quantify the model fidelity, model sensitivity and information barriers arising from different approximation strategies. It also succeeds in assessing the skill of filtering and predicting complex dynamical systems and overcomes the shortcomings in traditional path-wise measurements such as the failure in measuring extreme events. In addition, information theory is incorporated into a systematic data-driven nonlinear stochastic modeling framework that allows effective predictions of nonlinear intermittent time series. Finally, new efficient reduced-order nonlinear modeling strategies combined with information theory for model calibration provide skillful predictions of intermittent extreme events in spatially-extended complex dynamical systems. The contents here include the general mathematical theories, effective numerical procedures, instructive qualitative models, and concrete models from climate, atmosphere and ocean science.
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Qian, Bee Lan Oo, and Benson Teck Heng Lim. "Validating and Applying the Mathematical Models for Predicting Corporate Social Responsibility Behavior in Construction Firms: A Roadmap." Buildings 12, no. 10 (October 12, 2022): 1666. http://dx.doi.org/10.3390/buildings12101666.

Full text
Abstract:
The prevalence of the sophisticated doctrine of corporate social responsibility (CSR) is increasing, given the perennial environmental concerns and social demands in the construction industry worldwide. Firms’ CSR implementation has been influenced by a broad spectrum of external impetuses and internal motives, yet fragmented assessments of such influences make the prediction and implementation of CSR in construction problematic. This study aimed to validate and apply mathematical models for predicting CSR practices in construction firms. Mobilizing integrated institutional theory, stakeholder theory, and self-determination theory, a questionnaire survey of top-tier construction contractors was undertaken. Eight mathematical models were developed to predict the key dimensions of CSR practices, such as “government commitment” and “environmental preservation”, and validated through five subject matter expert interviews. The results demonstrated the comprehensiveness, practicality, and robustness of the CSR prediction models in the construction industry. The results also highlighted the perceived importance of CSR practices; external coercive and normative forces, together with internal organizational culture, were the most influential factors directly enhancing construction firms’ CSR implementation. Conceptually, the findings refined CSR practice prediction in a construction management context. The proposed CSR assessment checklists can help practitioners improve the often-tenuous overall CSR performance and spur competitiveness in the construction market.
APA, Harvard, Vancouver, ISO, and other styles
3

D L Prasanna, S. V. S. N., K. Sandeep Reddy, Chandrasekhar, S. Sai Shivani, and E. Divya. "Prediction and Comparison of Rainfall-Runoff Using Mathematical Model." IOP Conference Series: Earth and Environmental Science 1130, no. 1 (January 1, 2023): 012044. http://dx.doi.org/10.1088/1755-1315/1130/1/012044.

Full text
Abstract:
Runoff assessment is a crucial parameter in understanding urban flooding scenarios, and the uneven distribution of rainfall makes its estimation the deciding factor. Physics-based models for simulating runoff from catchments are composite models, but applying them to water resource problems is complex due to the great spatial variability of watershed characteristics and precipitation forms. Pattern-learning algorithms such as fuzzy-based algorithms and Artificial Neural Networks (ANNs) have therefore gained wide recognition in simulating rainfall-runoff (RR), producing comparable accuracy. In the present study, RR modeling is carried out targeting the application and estimation of runoff using mathematical modeling. The investigations were carried out for the Malkajgiri catchment, adopting 16 years of daily data from 2005 to 2021. A statistical learning theory-based pattern-learning algorithm is further utilized to evaluate the runoff for the year 2021. The results were found to be in fair accordance with the analytical outcomes.
APA, Harvard, Vancouver, ISO, and other styles
4

RATH, S., P. P. SENGUPTA, A. P. SINGH, A. K. MARIK, and P. TALUKDAR. "MATHEMATICAL-ARTIFICIAL NEURAL NETWORK HYBRID MODEL TO PREDICT ROLL FORCE DURING HOT ROLLING OF STEEL." International Journal of Computational Materials Science and Engineering 02, no. 01 (March 2013): 1350004. http://dx.doi.org/10.1142/s2047684113500048.

Full text
Abstract:
Accurate prediction of roll force during hot strip rolling is essential for model-based operation of hot strip mills. Traditionally, mathematical models based on the theory of plastic deformation have been used for prediction of roll force. In the last decade, data-driven models such as artificial neural networks have been tried for prediction of roll force. Pure mathematical models have accuracy limitations, whereas data-driven models have difficulty converging when applied to industrial conditions. Hybrid models that integrate traditional mathematical formulations with data-driven methods are being developed in different parts of the world. This paper discusses the methodology of development of an innovative hybrid mathematical-artificial neural network model. In the mathematical model, the most important factor influencing accuracy is the flow stress of steel. Coefficients of a standard flow stress equation, calculated by a parameter estimation technique, have been used in the model. The hybrid model has been trained and validated with input and output data collected from the finishing stands of the Hot Strip Mill, Bokaro Steel Plant, India. It has been found that model accuracy improves with the use of the hybrid model over the traditional mathematical model.
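The hybrid strategy this abstract describes, a physics-based baseline corrected by a data-driven component fitted to mill measurements, can be sketched in a few lines. The sketch below is a toy illustration under invented functional forms and numbers: the baseline formula, the constant multiplicative correction, and all inputs are assumptions, not the paper's actual flow stress model or neural network.

```python
def physics_roll_force(thickness_in, thickness_out, width, flow_stress):
    """Hypothetical physics-style baseline: force grows with flow stress,
    strip width, and a contact-length proxy based on the draft."""
    draft = thickness_in - thickness_out
    return flow_stress * width * draft ** 0.5

def fit_residual_correction(records):
    """Fit a constant multiplicative correction from measured forces,
    standing in for the data-driven component of a hybrid model."""
    ratios = [measured / physics_roll_force(*inputs)
              for inputs, measured in records]
    return sum(ratios) / len(ratios)

def hybrid_roll_force(inputs, correction):
    """Hybrid prediction: corrected physics baseline."""
    return correction * physics_roll_force(*inputs)

# Mill records as (inputs, measured force); here the baseline underpredicts
# by a consistent 10%, which the fitted correction absorbs.
records = [
    ((30.0, 20.0, 1000.0, 150.0),
     1.1 * physics_roll_force(30.0, 20.0, 1000.0, 150.0)),
    ((25.0, 18.0, 1200.0, 160.0),
     1.1 * physics_roll_force(25.0, 18.0, 1200.0, 160.0)),
]
k = fit_residual_correction(records)
assert abs(k - 1.1) < 1e-9
```

In the paper the correction is a trained neural network rather than a constant, but the division of labor is the same: the physics model supplies structure, the data supply the residual fit.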
APA, Harvard, Vancouver, ISO, and other styles
5

Lima, E. A. B. F., J. T. Oden, D. A. Hormuth, T. E. Yankeelov, and R. C. Almeida. "Selection, calibration, and validation of models of tumor growth." Mathematical Models and Methods in Applied Sciences 26, no. 12 (October 25, 2016): 2341–68. http://dx.doi.org/10.1142/s021820251650055x.

Full text
Abstract:
This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine if the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction–diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory animals while demonstrating successful implementations of OPAL.
APA, Harvard, Vancouver, ISO, and other styles
6

Ivanov, L. M., A. D. Kirwan, Jr., and O. V. Melnichenko. "Prediction of the stochastic behaviour of nonlinear systems by deterministic models as a classical time-passage probabilistic problem." Nonlinear Processes in Geophysics 1, no. 4 (December 31, 1994): 224–33. http://dx.doi.org/10.5194/npg-1-224-1994.

Full text
Abstract:
Assuming that the behaviour of a nonlinear stochastic system can be described by a Markovian diffusion approximation and that the evolution equations can be reduced to a system of ordinary differential equations, a method for the calculation of prediction time is developed. In this approach, the prediction time depends upon the accuracy of prediction, the intensity of turbulence, the accuracy of the initial conditions, the physics contained in the mathematical model, the measurement errors, and the number of prediction variables. A numerical application to zonal channel flow illustrates the theory. Some possible generalizations of the theory are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
7

Vlachos, D., and Y. A. Tolias. "Neuro-fuzzy modeling in bankruptcy prediction." Yugoslav Journal of Operations Research 13, no. 2 (2003): 165–74. http://dx.doi.org/10.2298/yjor0302165v.

Full text
Abstract:
For the past 30 years the problem of bankruptcy prediction has been thoroughly studied. From Altman's paper in 1968 to the recent papers in the '90s, progress in prediction accuracy was not satisfactory. This paper investigates an alternative modeling of the system (firm), combining neural networks and fuzzy controllers, i.e., using neuro-fuzzy models. Classical modeling is based on mathematical models that describe the behavior of the firm under consideration. The main idea of fuzzy control, on the other hand, is to build a model of a human control expert who is capable of controlling the process without thinking in terms of a mathematical model. This control expert specifies his control action in the form of linguistic rules. These control rules are translated into the framework of fuzzy set theory, providing a calculus which can simulate the behavior of the control expert and enhance its performance. The accuracy of the model is studied using datasets from previous research papers.
APA, Harvard, Vancouver, ISO, and other styles
8

Geng, Xinyu, Yufei Liu, Wei Zheng, Yongbin Wang, and Meng Li. "Prediction of Crushing Response for Metal Hexagonal Honeycomb under Quasi-Static Loading." Shock and Vibration 2018 (September 13, 2018): 1–10. http://dx.doi.org/10.1155/2018/8043410.

Full text
Abstract:
To provide a theoretical basis for metal honeycombs used for buffering and crashworthy structures, this study investigated the out-of-plane crushing of metal hexagonal honeycombs with various cell specifications. The mathematical models of mean crushing stress and peak crushing stress for metal hexagonal honeycombs were predicted on the basis of simplified super element theory. The experimental study was carried out to check the accuracy of mathematical models and verify the effectiveness of the proposed approach. The presented theoretical models were compared with the results obtained from experiments on nine types of honeycombs under quasi-static compression loading in the out-of-plane direction. Excellent correlation has been observed between the theoretical and experimental results.
APA, Harvard, Vancouver, ISO, and other styles
9

Guseynov, Sharif, and Eugene Kopytov. "Complex Mathematical Models for Analysis, Evaluation and Prediction of Aqueous and Atmospheric Environment of Latvia." Transport and Telecommunication Journal 13, no. 1 (January 1, 2012): 57–74. http://dx.doi.org/10.2478/v10244-012-0006-8.

Full text
Abstract:
In the present paper the authors consider the complete statements of initial-boundary problems for the modelling of various aspects of aqueous (3 models) and atmospheric (2 models) systems in Latvia. All the proposed models are described in terms of differential equations theory (using both ordinary and partial differential equations) and are evolutionary models. Two of the three aqueous system models being studied are intended to describe natural aquatic ecosystems, while the other models are aimed at studying environmental pollution processes.
APA, Harvard, Vancouver, ISO, and other styles
10

Zhou, Ai Zhao, Wei Wei Gu, and Wei Wang. "Study on Prediction Models for Time-Dependent Settlement of Soft Road Foundation." Applied Mechanics and Materials 204-208 (October 2012): 1880–85. http://dx.doi.org/10.4028/www.scientific.net/amm.204-208.1880.

Full text
Abstract:
The characteristics of soft clay roadbed settlement prediction models are studied in this paper. First, based on one-dimensional soil consolidation theory, the shape of the time-dependent settlement curve is analysed. Then, a mathematical analysis of traditional settlement models, including the Gompertz and Logistic models, is conducted, and a shared mathematical deficiency is pointed out: in both models, the settlement at the inflection point has a constant ratio to the ultimate settlement. The Weibull model is therefore proposed to describe the time-dependent settlement of the roadbed. The proposed model overcomes this deficiency, and the exponential model is one of its degenerate forms. Moreover, it can predict the full settlement process under both instantaneous and ramp load conditions. Finally, using a group of settlement observations, the predictions of the different models are compared, and the Weibull model shows good agreement.
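The inflection-point property that motivates the Weibull model can be checked numerically. The sketch below uses illustrative parameter values, not the paper's data: it locates the inflection of S(t) = S_ult(1 - exp(-(t/c)^b)) as the point of maximum settlement rate and confirms that the ratio S_infl/S_ult equals 1 - exp(-(b-1)/b), which varies with the shape parameter b, unlike the fixed ratios 1/e (Gompertz) and 1/2 (Logistic).

```python
import math

def weibull_settlement(t, s_ult, c, b):
    """Weibull-type settlement curve: S(t) = S_ult * (1 - exp(-(t/c)**b))."""
    return s_ult * (1.0 - math.exp(-((t / c) ** b)))

def inflection_ratio(b, s_ult=1.0, c=1.0, dt=1e-3):
    """Numerically locate the inflection point (maximum settlement rate)
    on a grid and return the ratio S_inflection / S_ultimate."""
    best_t, best_slope = dt, 0.0
    t = dt
    while t < 5.0 * c:
        slope = (weibull_settlement(t + dt, s_ult, c, b)
                 - weibull_settlement(t - dt, s_ult, c, b)) / (2.0 * dt)
        if slope > best_slope:
            best_slope, best_t = slope, t
        t += dt
    return weibull_settlement(best_t, s_ult, c, b) / s_ult

# Gompertz fixes this ratio at 1/e and Logistic at 1/2; for the Weibull
# model it is 1 - exp(-(b - 1)/b), so it varies with the shape parameter b.
for b in (1.5, 2.0, 3.0):
    assert abs(inflection_ratio(b) - (1.0 - math.exp(-(b - 1.0) / b))) < 5e-3
```

This flexibility is exactly the deficiency of the two traditional models that the abstract identifies: their inflection-to-ultimate ratio cannot adapt to observed settlement data, while the Weibull shape parameter can.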
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Prediction theory Mathematical models"

1

Campbell, Alyce. "An empirical study of a financial signalling model." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26969.

Full text
Abstract:
Brennan and Kraus (1982, 1986) developed a costless signalling model which can explain why managers issue hybrid securities: convertibles (CB's) or bond-warrant packages (BW's). The model predicts that when the true standard deviation (σ) of the distribution of future firm value is unknown to the market, the firm's managers will issue a hybrid with specific characteristics such that the security's full information value is at a minimum at the firm's true σ. In this fully revealing equilibrium market price is equal to this minimum value. In this study, first the mathematical properties of the hypothesized bond-valuation model were examined to see if specific functions could have a minimum not at σ = 0 or σ = ∞ as required for signalling. The Black-Scholes-Merton model was the valuation model chosen because of ease of use, supporting empirical evidence, and compatibility with the Brennan-Kraus model. Three different variations, developed from Ingersoll (1977a); Geske (1977, 1979) and Geske and Johnson (1984); and Brennan and Schwartz (1977, 1978), were examined. For all hybrids except senior CB's, pricing functions with a minimum can be found for plausible input parameters. However, functions with an interior maximum are also plausible. A function with a maximum cannot be used for signalling. Second, bond pricing functions for 105 hybrids were studied. The two main hypotheses were: (1) most hybrids have functions with an interior minimum; (2) market price equals minimum theoretical value. The results do not support the signalling model, although the evidence is ambiguous. For the σ range 0.05-0.70, for CB's (BW's) 15 (8) Brennan-Schwartz functions were everywhere positively sloping, 11 (2) had an interior minimum, 22 (0) were everywhere negatively sloping, and 35 (12) had an interior maximum. Market prices did lie closer to minima than maxima from the Brennan-Schwartz solutions, but the results suggest that the solution as implemented overpriced the CB's.
BW's were unambiguously overpriced. With consistent overpricing, market prices would naturally lie closer to minima. Average variation in theoretical values was, however, only about 5 percent for CB's and about 10 percent for BW's. This, coupled with the shape data, suggests that firms were choosing securities with theoretical values relatively insensitive to σ rather than choosing securities to signal σ unambiguously.
Sauder School of Business
Graduate
APA, Harvard, Vancouver, ISO, and other styles
2

Yang, Boye, and 扬博野. "Online auction price prediction: a Bayesian updating framework based on the feedback history." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43085830.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cross, Richard J. (Richard John). "Inference and Updating of Probabilistic Structural Life Prediction Models." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19828.

Full text
Abstract:
Aerospace design requirements mandate acceptable levels of structural failure risk. Probabilistic fatigue models enable estimation of the likelihood of fatigue failure. A key step in the development of these models is the accurate inference of the probability distributions for dominant parameters. Since data sets for these inferences are of limited size, the fatigue model parameter distributions are themselves uncertain. A hierarchical Bayesian approach is adopted to account for the uncertainties in both the parameters and their distribution. Variables specifying the distribution of the fatigue model parameters are cast as hyperparameters whose uncertainty is modeled with a hyperprior distribution. Bayes' rule is used to determine the posterior hyperparameter distribution, given available data, thus specifying the probabilistic model. The Bayesian formulation provides an additional advantage by allowing the posterior distribution to be updated as new data becomes available through inspections. By updating the probabilistic model, uncertainty in the hyperparameters can be reduced, and the appropriate level of conservatism can be achieved. In this work, techniques for Bayesian inference and updating of probabilistic fatigue models for metallic components are developed. Both safe-life and damage-tolerant methods are considered. Uncertainty in damage rates, crack growth behavior, damage, and initial flaws are quantified. Efficient computational techniques are developed to perform the inference and updating analyses. The developed capabilities are demonstrated through a series of case studies.
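The updating step this abstract describes has a simple conjugate analogue. The sketch below is not the thesis's hierarchical fatigue model; it only shows the mechanic of Bayes' rule reducing uncertainty as inspection data arrive, using a hypothetical Beta prior on a per-component flaw probability (the prior, the outcomes, and the numbers are all illustrative assumptions).

```python
def update_beta(alpha, beta, inspections):
    """Conjugate Bayesian update of a Beta(alpha, beta) prior on a
    per-component flaw probability; each inspection outcome is
    True (flaw found) or False (no flaw)."""
    for found in inspections:
        if found:
            alpha += 1.0
        else:
            beta += 1.0
    return alpha, beta

def posterior_mean(alpha, beta):
    """Posterior mean of the flaw probability under Beta(alpha, beta)."""
    return alpha / (alpha + beta)

# Start from a vague Beta(1, 1) prior, then fold in ten inspections,
# one of which found a flaw: the posterior concentrates on a low estimate.
a, b = update_beta(1.0, 1.0, [True] + [False] * 9)
assert (a, b) == (2.0, 10.0)
assert abs(posterior_mean(a, b) - 2.0 / 12.0) < 1e-12
```

In the hierarchical setting of the thesis, the quantities being updated are hyperparameters of the fatigue-parameter distributions rather than a single probability, but the same posterior-update logic lets new inspection data tighten the model and calibrate conservatism.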
APA, Harvard, Vancouver, ISO, and other styles
4

Van Koten, Chikako. "Bayesian statistical models for predicting software effort using small datasets." University of Otago. Department of Information Science, 2007. http://adt.otago.ac.nz./public/adt-NZDU20071009.120134.

Full text
Abstract:
The need of today's society for new technology has resulted in the development of a growing number of software systems. Developing a software system is a complex endeavour that requires a large amount of time. This amount of time is referred to as software development effort. Software development effort is the sum of hours spent by all individuals involved. Therefore, it is not equal to the duration of the development. Accurate prediction of the effort at an early stage of development is an important factor in the successful completion of a software system, since it enables the developing organization to allocate and manage their resources effectively. However, for many software systems, accurately predicting the effort is a challenge. Hence, a model that assists in the prediction is of active interest to software practitioners and researchers alike. Software development effort varies depending on many variables that are specific to the system, its developmental environment and the organization in which it is being developed. An accurate model for predicting software development effort can often be built specifically for the target system and its developmental environment. A local dataset of similar systems to the target system, developed in a similar environment, is then used to calibrate the model. However, such a dataset often consists of fewer than 10 software systems, causing a serious problem in the prediction, since predictive accuracy of existing models deteriorates as the size of the dataset decreases. This research addressed this problem with a new approach using Bayesian statistics. This particular approach was chosen, since the predictive accuracy of a Bayesian statistical model is not so dependent on a large dataset as other models. As the size of the dataset decreases to fewer than 10 software systems, the accuracy deterioration of the model is expected to be less than that of existing models.
The Bayesian statistical model can also provide additional information useful for predicting software development effort, because it is also capable of selecting important variables from multiple candidates. In addition, it is parametric and produces an uncertainty estimate. This research developed new Bayesian statistical models for predicting software development effort. Their predictive accuracy was then evaluated in four case studies using different datasets, and compared with other models applicable to the same small dataset. The results have confirmed that the best new models are not only accurate but also consistently more accurate than their regression counterpart, when calibrated with fewer than 10 systems. They can thus replace the regression model when using small datasets. Furthermore, one case study has shown that the best new models are more accurate than a simple model that predicts the effort by calculating the average value of the calibration data. Two case studies have also indicated that the best new models can be more accurate for some software systems than a case-based reasoning model. Since the case studies provided sufficient empirical evidence that the new models are generally more accurate than the existing models compared in the case of small datasets, this research has produced a methodology for predicting software development effort using the new models.
APA, Harvard, Vancouver, ISO, and other styles
5

Abbas, Kaja Moinudeen. "Bayesian Probabilistic Reasoning Applied to Mathematical Epidemiology for Predictive Spatiotemporal Analysis of Infectious Diseases." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5302/.

Full text
Abstract:
Probabilistic reasoning under uncertainty is well suited to the analysis of disease dynamics. The stochastic nature of disease progression is modeled by applying the principles of Bayesian learning. Bayesian learning predicts the disease progression, including prevalence and incidence, for a geographic region and demographic composition. Public health resources, prioritized by the order of risk levels of the population, will efficiently minimize the disease spread and curtail the epidemic at the earliest. A Bayesian network representing the outbreak of influenza and pneumonia in a geographic region is ported to a newer region with a different demographic composition. Upon analysis for the newer region, the corresponding prevalence of influenza and pneumonia among the different demographic subgroups is inferred. Bayesian reasoning coupled with a disease timeline is used to reverse engineer an influenza outbreak for a given geographic and demographic setting. The temporal flow of the epidemic among the different sections of the population is analyzed to identify the corresponding risk levels. In comparison to spread vaccination, prioritizing the limited vaccination resources for the higher risk groups results in relatively lower influenza prevalence. HIV incidence in Texas from 1989-2002 is analyzed using demographic-based epidemic curves. Dynamic Bayesian networks are integrated with probability distributions of HIV surveillance data coupled with census population data to estimate the proportion of HIV incidence among the different demographic subgroups. Demographic-based risk analysis leads to the observation of a varied spectrum of HIV risk among the different demographic subgroups. A methodology using hidden Markov models is introduced that enables investigation of the impact of social behavioral interactions on the incidence and prevalence of infectious diseases.
The methodology is presented in the context of simulated disease outbreak data for influenza. Probabilistic reasoning analysis enhances the understanding of disease progression in order to identify the critical points of surveillance, control and prevention.
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Ni. "Statistical Learning in Logistics and Manufacturing Systems." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11457.

Full text
Abstract:
This thesis focuses on developing statistical methodology in reliability and quality engineering to assist decision-making at the enterprise, process, and product levels. In Chapter II, we propose a multi-level statistical modeling strategy to characterize data from spatial logistics systems. The model can support business decisions at different levels. The information available from higher hierarchies is incorporated into the multi-level model as constraint functions for lower hierarchies. The key contributions include proposing the top-down multi-level spatial models which improve the estimation accuracy at lower levels, and applying spatial smoothing techniques to solve facility location problems in logistics. In Chapter III, we propose methods for modeling system service reliability in a supply chain, which may be disrupted by uncertain contingent events. This chapter applies an approximation technique for developing first-cut reliability analysis models. The approximation relies on multi-level spatial models to characterize patterns of store locations and demands. The key contributions in this chapter are to bring statistical spatial modeling techniques to approximate store location and demand data, and to build system reliability models entertaining various scenarios of DC location designs and DC capacity constraints. Chapter IV investigates the power law process, which has proved to be a useful tool in characterizing the failure process of repairable systems. This chapter presents a procedure for detecting and estimating a mixture of conforming and nonconforming systems. The key contributions in this chapter are to investigate the property of parameter estimation in mixture repair processes, and to propose an effective way to screen out nonconforming products. The key contributions in Chapter V are to propose a new method to analyze heavily censored accelerated life testing data, and to study its asymptotic properties.
This approach flexibly and rigorously incorporates distribution assumptions and regression structures into estimating equations in a nonparametric estimation framework. Derivations of asymptotic properties of the proposed method provide an opportunity to compare its estimation quality to commonly used parametric MLE methods in the situation of mis-specified regression models.
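The power law process investigated in Chapter IV has standard closed-form maximum-likelihood estimators (the Crow/AMSAA formulas). The sketch below simulates a time-truncated power law process and recovers its shape parameter; the simulation setup and parameter values are illustrative assumptions, not data or methods from the thesis.

```python
import math
import random

def plp_mle(times, T):
    """Crow/AMSAA maximum-likelihood estimates for a power law process
    observed on (0, T] (time-truncated case), with intensity
    lambda(t) = (beta/theta) * (t/theta)**(beta - 1)."""
    n = len(times)
    beta_hat = n / sum(math.log(T / t) for t in times)
    theta_hat = T / n ** (1.0 / beta_hat)
    return beta_hat, theta_hat

def simulate_plp(beta, theta, T, rng):
    """Simulate PLP failure times on (0, T]: the transformed times
    (t_i/theta)**beta form a unit-rate homogeneous Poisson process."""
    times, s = [], 0.0
    while True:
        s += rng.expovariate(1.0)       # next unit-rate arrival
        t = theta * s ** (1.0 / beta)   # map back to the PLP time scale
        if t > T:
            return times
        times.append(t)

rng = random.Random(7)
times = simulate_plp(beta=2.0, theta=10.0, T=250.0, rng=rng)
beta_hat, theta_hat = plp_mle(times, T=250.0)
assert abs(beta_hat - 2.0) < 0.4  # the MLE recovers the shape parameter
```

A shape parameter above 1 indicates a deteriorating system and below 1 an improving one; a mixture-detection procedure like the one in the thesis would look for subpopulations whose estimates differ systematically.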
APA, Harvard, Vancouver, ISO, and other styles
7

Clark, Peter G. "Multi-scale modelling describing thermal behaviour of polymeric materials. Scalable lattice-Boltzmann models based upon the theory of Grmela towards refined thermal performance prediction of polymeric materials at micro and nano scales." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5768.

Full text
Abstract:
Micrometer injection moulding is a type of moulding in which moulds have geometrical design features on a micrometer scale that must be transferred to the geometry of the produced part. The difficulties encountered due to the very high shear and rapid heat transfer of these systems have motivated this investigation into the fundamental mathematics behind polymer heat transfer and associated processes. The aim is to derive models for polymer dynamics, especially heat dynamics, that are considerably less approximate than the ones used at present, and to translate this into simulation and optimisation algorithms and strategies, thereby allowing for greater control of the various polymer processing methods at micrometer scales.
APA, Harvard, Vancouver, ISO, and other styles
8

Collins, Michael. "Trust Discounting in the Multi-Arm Trust Game." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1607086117161125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Asiri, Aisha. "Applications of Game Theory, Tableau, Analytics, and R to Fashion Design." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 2018. http://digitalcommons.auctr.edu/cauetds/146.

Full text
Abstract:
This thesis presents various models to the fashion industry to predict the profits for some products. To determine the expected performance of each product in 2016, we used tools of game theory to help us identify the expected value. We went further and performed a simple linear regression and used scatter plots to help us predict further the performance of the products of Prada. We used tools of game theory, analytics, and statistics to help us predict the performance of some of Prada's products. We also used the Tableau platform to visualize an overview of the products' performances. All of these tools were used to aid in finding better predictions of Prada's product performances.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Prediction theory Mathematical models"

1

International Conference on Systems Research, Informatics, and Cybernetics (16th 2004 Baden-Baden, Germany). Anticipative and predictive models in systems science. Windsor, Ont: International Institute for Advanced Studies in Systems Research and Cybernetics, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Guttman, Irving. Prediction in circular distributions. Toronto: University of Toronto, Dept. of Statistics, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cave, Jonathan A. K. A median choice theorem. Santa Monica, CA: Rand Corporation, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Clark, Todd E. Using out-of-sample mean squared prediction errors to test the martingale difference hypothesis. Kansas City [Mo.]: Research Division, Federal Reserve Bank of Kansas City, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hui se yu ce yu jue ce mo xing yan jiu. Beijing: Ke xue chu ban she, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shuhua, Mao, ed. Hui yu ce yu jue ce fang fa. Beijing: Ke xue chu ban she, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Vasil'eva, Natal'ya. Mathematical models in the management of copper production: ideas, methods, examples. ru: INFRA-M Academic Publishing LLC., 2020. http://dx.doi.org/10.12737/1014071.

Full text
Abstract:
Presents the current state of modelling of metallurgical processes, considering the mathematical models used to describe copper production processes and their classification. Sets out a system of methods and models for the mathematical modelling of technological processes, including balance, statistical, optimization, forecasting and predictive models. For specific technological processes, the following are developed: a balance model of the pyrometallurgical copper production cycle, a polynomial model for predicting matte composition on the basis of a passive experiment, and a predictive model for quantitative estimation of the copper content in the matte based on fuzzy logic. Of interest to students, postgraduates and teachers of technical universities, and to engineers and researchers who use mathematical methods to process data from laboratory and industrial experiments.
APA, Harvard, Vancouver, ISO, and other styles
8

Liu, Lon-Mu. Time series analysis and forecasting. River Forest, IL: Scientific Computing Associates, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Liu, Lon-Mu. Time series analysis and forecasting. 2nd ed. River Forest, Ill: Scientific Computing Associates, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Modelle zur Prädiktion von Polarkoordinaten für die Mondreflektoren im dynamischen Erde-Mond-System. Bonn: [Geodätische Institute der Rheinischen Friedrich-Wilhelms-Universität Bonn], 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Prediction theory Mathematical models"

1

Royall, Richard M. "The model based (prediction) approach to finite population sampling theory." In Institute of Mathematical Statistics Lecture Notes - Monograph Series, 225–40. Hayward, CA: Institute of Mathematical Statistics, 1992. http://dx.doi.org/10.1214/lnms/1215458849.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Marinov, C. A., and P. Neittaanmäki. "Asymptotic behaviour of mixed-type circuits. Delay time predicting." In Mathematical Models in Electrical Circuits: Theory and Applications, 98–117. Dordrecht: Springer Netherlands, 1991. http://dx.doi.org/10.1007/978-94-011-3440-8_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Olmo, Marta del, Saskia Grabe, and Hanspeter Herzel. "Mathematical Modeling in Circadian Rhythmicity." In Methods in Molecular Biology, 55–80. New York, NY: Springer US, 2021. http://dx.doi.org/10.1007/978-1-0716-2249-0_4.

Full text
Abstract:
Circadian clocks are autonomous systems able to oscillate in a self-sustained manner in the absence of external cues, although such Zeitgebers are typically present. At the cellular level, the molecular clockwork consists of a complex network of interlocked feedback loops. This chapter discusses self-sustained circadian oscillators in the context of nonlinear dynamics theory. We suggest basic steps that can help in constructing a mathematical model and introduce how self-sustained oscillations can be modeled using ordinary differential equations. Moreover, we discuss how coupled oscillators synchronize among themselves or entrain to periodic signals. The development of mathematical models over the last years has helped to understand such complex network systems and to highlight the basic building blocks upon which oscillating systems are built. We argue that, through theoretical predictions, the use of simple models can guide experimental research and is thus suitable to model biological systems qualitatively.
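The chapter's central idea, self-sustained oscillations arising from an interlocked negative-feedback loop described by ordinary differential equations, can be illustrated with a minimal Goodwin-type model integrated by the Euler method. This is a generic textbook sketch, not a model taken from the chapter; all parameter values (Hill coefficient 12, degradation rate 0.15) are illustrative assumptions.

```python
def goodwin(n=12, deg=0.15, dt=0.01, t_end=400.0):
    """Euler integration of a minimal Goodwin-type negative-feedback loop.

    x (mRNA) -> y (protein) -> z (repressor), with z inhibiting x production.
    Sustained oscillations require sufficiently steep feedback (large n).
    """
    x, y, z = 0.1, 0.1, 0.1
    traj = []
    for _ in range(round(t_end / dt)):
        dx = 1.0 / (1.0 + z ** n) - deg * x   # repressed synthesis, linear decay
        dy = x - deg * y
        dz = y - deg * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj.append(z)
    return traj

traj = goodwin()
tail = traj[len(traj) // 2:]         # discard the initial transient
print(max(tail) - min(tail) > 0.01)  # sustained amplitude: a self-sustained rhythm
```

With the feedback steepness reduced (say n=2) the rhythm damps out, mirroring the chapter's point that the structure of the feedback network determines whether oscillations are self-sustained.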
APA, Harvard, Vancouver, ISO, and other styles
4

Borrelli, Arianna. "The Great Yogurt Project: Models and Symmetry Principles in Early Particle Physics." In Model and Mathematics: From the 19th to the 21st Century, 221–54. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-97833-4_6.

Full text
Abstract:
According to the received view of the development of particle physics, mathematics, and more specifically group theory, provided the key which, between the late 1950s and the early 1960s, allowed scientists to achieve both a deeper physical understanding and an empirically successful modeling of particle phenomena. Indeed, a posteriori it has even been suggested that just by looking at diagrams of observed particle properties (see Fig. 1) one could have recognized in them the structures of specific groups (see Fig. 2). However, a closer look at theoretical practices of the 1950s and early 1960s reveals a tension between the employment of advanced mathematical tools and the "modeling" of observation, if the term "model" is understood as a construction allowing for the fitting and predicting of phenomena. As we shall see, the most empirically successful schemes, such as the "Gell-Mann and Nishijima model" or the "eightfold way", were mathematically very simple, made no use of group-theoretical notions and for quite a time resisted all attempts to transform them into more refined mathematical constructs. Indeed, the theorists who proposed them had little or no interest in abstract approaches to mathematical practice. On the other hand, there were a number of particle theorists who did care about and employ group-theoretical notions, yet not primarily as tools to fit phenomena, but rather as a guide to uncover the fundamental principles of particle interactions. Moreover, these theorists did not regard all groups as epistemically equivalent, and instead clearly preferred those transformations related to space-time invariances over all others. These authors also often made a distinction between purely descriptive "models" and the "theories" they were (unsuccessfully) trying to build and which in their opinion would provide a deeper understanding of nature.
Nonetheless, they expected their “theories”, too, to be empirically successful in describing observation, and thus to also function as “models”. In this sense, like their less mathematically-inclined colleagues, they also saw no clear-cut distinction between “modeling” and “theorizing” particle phenomena. In my paper I will discuss the development of these theoretical practices between the 1950s and the early 1960s as examples of the complex relationship between mathematics and the conceptualization of physical phenomena, arguing that, at least in this case, no general statements are possible on the relationship of mathematics and models. At that time, very different mathematical practices coexisted and the epistemic attitudes of physicists towards theoretical constructs could depend both on the assumptions and goals of the individual authors and on the specific mathematical methods and concepts linked to the constructs.
APA, Harvard, Vancouver, ISO, and other styles
5

Dawson, John, Anna Gams, Ivan Rajen, Andrew M. Soltisz, and Andrew G. Edwards. "Computational Prediction of Cardiac Electropharmacology - How Much Does the Model Matter?" In Computational Physiology, 51–62. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-05164-7_5.

Full text
Abstract:
Animal data describing drug interactions in cardiac tissue are abundant; however, nuanced inter-species differences hamper the use of these data to predict drug responses in humans. There are many computational models of cardiomyocyte electrophysiology that facilitate this translation, yet it is unclear whether fundamental differences in their mathematical formalisms significantly impact their predictive power. A common solution to this problem is to perform inter-species translations within a collection of models with internally consistent formalisms, termed a "lineage", but there has been little effort to translate outputs across lineages. Here, we translate model outputs between lineages from Simula and Washington University for models of ventricular cardiomyocyte electrophysiology of humans, canines, and guinea pigs. For each lineage-species combination, we generated a population of 1000 models by varying common parameters, namely ion conductances, according to a Gaussian log-normal distribution with a mean at the parameter's species-specific default value and a standard deviation of 30%. We used partial least squares regression to translate the influences of one model to another using perturbations to calculated descriptors of resulting electrophysiological behavior derived from these parameter variations. Finally, we evaluated translation fidelity by performing a sensitivity analysis between input parameters and output descriptors, as similar sensitivities between models of a common species indicate similar biological mechanisms underlying model behavior. Successful translation between models, especially those from different lineages, will increase confidence in their predictive power.
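The population-of-models recipe in this abstract (1000 models, each conductance drawn from a log-normal spread of about 30% around its default) can be sketched with the standard library alone. The output descriptor and the ordinary least-squares slope below are deliberate simplifications standing in for the real electrophysiology model and the PLS regression step; the names `gK`, `gCa` and the `apd_proxy` formula are invented for illustration.

```python
import math
import random

random.seed(1)

def sample_population(defaults, sigma=0.30, n=1000):
    """Vary each conductance around its default with a log-normal spread.

    A simplified reading of the chapter's recipe: the distribution is
    centred on the species-specific default with ~30% spread.
    """
    return [{k: v * math.exp(random.gauss(0.0, sigma))
             for k, v in defaults.items()}
            for _ in range(n)]

def apd_proxy(params):
    # Toy output descriptor standing in for action-potential duration:
    # repolarising current (gK) shortens it, depolarising (gCa) prolongs it.
    return 300.0 * params["gCa"] / params["gK"]

def slope(xs, ys):
    # Ordinary least-squares slope, a stand-in for the PLS regression step.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

pop = sample_population({"gK": 1.0, "gCa": 1.0})
outputs = [apd_proxy(p) for p in pop]
sens_gK = slope([p["gK"] for p in pop], outputs)
print(len(pop), sens_gK < 0)  # 1000 models; more gK shortens the APD proxy
```

Comparing such parameter-to-descriptor sensitivities between two models of the same species is, in essence, the fidelity check the abstract describes.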
APA, Harvard, Vancouver, ISO, and other styles
6

Hinrichsen, Diederich, and Anthony J. Pritchard. "Mathematical Models." In Mathematical Systems Theory I, 1–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/3-540-26410-8_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Morales, Domingo, María Dolores Esteban, Agustín Pérez, and Tomáš Hobza. "Prediction Theory." In A Course on Small Area Estimation and Mixed Models, 73–89. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63757-6_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Heij, Christiaan, André C.M. Ran, and Frederik van Schagen. "Filtering and Prediction." In Introduction to Mathematical Systems Theory, 101–20. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-59654-5_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Takeuchi, Kei. "Theory of Statistical Prediction." In Contributions on Theory of Mathematical Statistics, 3–37. Tokyo: Springer Japan, 2020. http://dx.doi.org/10.1007/978-4-431-55239-0_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mostowski, A. "Models of Set Theory." In Aspects of Mathematical Logic, 65–179. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-11080-1_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Prediction theory Mathematical models"

1

LaViolette, Marc, and Michael Strawson. "On the Prediction of Nitrogen Oxides From Gas Turbine Combustion Chambers Using Neural Networks." In ASME Turbo Expo 2008: Power for Land, Sea, and Air. ASMEDC, 2008. http://dx.doi.org/10.1115/gt2008-50566.

Full text
Abstract:
This paper describes a method of predicting the oxides of nitrogen emissions from gas turbine combustion chambers using neural networks. A short review of existing empirical models is undertaken and the reasoning behind the choice of correlation variables and mathematical formulations is presented. This review showed that the mathematical functions obtained from the underlying theory used to develop the semi-empirical models ultimately limit their general applicability. Under these conditions, obtaining a semi-empirical model with a large domain and good accuracy is difficult. An overview of the use of neural networks as a modelling tool is given. Using over 2000 data points, a neural network that can predict NOx emissions with greater accuracy than published correlations was developed. The coefficients of determination of the predictions of the previously published semi-empirical models are 0.8048 and 0.7885; however, one tends to grossly overpredict and the other to underpredict. The coefficient of determination of the neural network model is 0.8697. Because of the nature of neural networks, this more accurate model does not allow better insight into the physical and chemical phenomena. It is, however, a useful tool for the initial design of combustion chambers.
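The model comparison in this abstract rests on the coefficient of determination. As a minimal illustration (not the paper's neural network), R² can be computed directly; the NOx values below are made up:

```python
def r_squared(observed, predicted):
    """Coefficient of determination, the figure of merit quoted above."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

nox_obs = [120.0, 180.0, 260.0]  # made-up NOx measurements, ppm
print(r_squared(nox_obs, nox_obs))                           # 1.0 for a perfect model
print(r_squared(nox_obs, [o * 1.1 for o in nox_obs]) < 1.0)  # systematic over-prediction lowers R^2
```

A correlation that systematically over- or under-predicts, as the abstract notes for the two semi-empirical models, shows up as exactly this kind of R² penalty.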
APA, Harvard, Vancouver, ISO, and other styles
2

Moftakhar, Abbas A. "Discrepancies Between High Temperature Fatigue Life Prediction Models." In ASME 1998 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1998. http://dx.doi.org/10.1115/imece1998-0371.

Full text
Abstract:
One feature of creep-fatigue data is the large amount of scatter present, even for test specimens taken from the same heat of material and tested under nominally identical conditions. This has led to considerable discrepancies between methods proposed during the last 40 years for fatigue life prediction of high-temperature components. Presently, there is no general agreement on optimum approaches. The inadequacy of the Manson-Coffin law for high-temperature use is discussed. The frequency-modified fatigue life model and the strain range partitioning method, two popular extensions of the Manson-Coffin law for high-temperature use, are studied. Major differences between these two models in terms of mechanistic interpretations and mathematical formulations are investigated. Also, difficulties in the realistic application of these methods are studied and some simplified solutions are proposed. The time-dependent damage rule of the ASME Boiler and Pressure Vessel Code is studied for its application to notched components, and issues such as creep rupture under compressive stress, creep rupture time of notched components, and cyclic versus static creep rupture curves are discussed. Difficulties in the calculation of the time-fraction term of this method are investigated for notched components and a simplified solution is proposed.
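The time-fraction rule discussed here reduces, in its simplest linear form, to summing fatigue cycle fractions and creep time fractions; the Code's actual creep-fatigue envelope (a bilinear limit, often well below unity) is more involved. A minimal sketch with made-up fractions:

```python
def time_fraction_damage(cycle_fractions, time_fractions):
    """Linear creep-fatigue damage summation (time-fraction rule).

    cycle_fractions: n_i / N_i for each fatigue loading block
    time_fractions:  t_j / t_rj for each creep hold period
    Failure is predicted when the sum reaches the allowable envelope value
    (often below 1.0 to account for creep-fatigue interaction).
    """
    return sum(cycle_fractions) + sum(time_fractions)

# Made-up fractions: two fatigue blocks plus two creep hold periods
D = time_fraction_damage([0.2, 0.1], [0.25, 0.15])
print(round(D, 3))  # 0.7
```

The paper's difficulty, computing the time-fraction term `t_j / t_rj` for notched components, lies in finding the rupture time for the local multiaxial stress state, which this scalar sketch glosses over.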
APA, Harvard, Vancouver, ISO, and other styles
3

Abdulhamitbilal, Erkan, Sinan Şal, and Elbrous M. Jafarov. "A Mathematical Model for Windmilling of a Turbojet Engine." In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-58503.

Full text
Abstract:
The transient windmilling characteristic of a modern turbojet engine under different flight conditions and altitudes is obtained from numerous tests conducted at an Altitude Test Facility (ATF). A simple and practical mathematical model for predicting the transient and steady-state rotational speed of a simple turbojet engine in flight has been developed. The method is derived from Froude's momentum theory (disk actuator theory) and applied to a turbojet engine. A correction factor is introduced to match the test results of the KTJ-3200 being indigenously developed by Kale R&D Inc. The present model's predictions are compared with test data for the Microturbo TRI 60 engine and the KTJ-3200 engine. The estimates of the present windmilling model fit the test results of the two engines very well, within an error band of ±1.2%, for various atmospheric conditions depending on flight speed, altitude and temperature. The present model is compared with loss-modelling windmilling estimation methods described in the literature, which require a large number of inputs such as blade angle, blade pitch and component efficiencies. The comparison with the available windmilling model in the literature shows that both models capture the terminal speed very well. However, the model in the literature is not able to capture the transient engine speed, which is important for missile applications because the missile can be fired before the engine reaches terminal speed. The difference between the test data and the available model during transients is up to 50%. The present model matches the test data very well even during transients, and it is more practical and much simpler than the available windmilling model in the literature for estimating both the transient and terminal windmilling speeds of turbojet engines. The agreement between the present model, the KTJ-3200 test data, the windmilling method available in the literature and the test data of the Microturbo TRI 60 is very good for most of the ranges investigated.
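The paper's actual model is not reproduced here, so the sketch below only illustrates the general shape of such a prediction: a terminal windmill speed scaled from flight speed (standing in for the momentum-theory result with an empirical correction factor) and a first-order exponential transient toward it. The constants `k_speed` and `tau` are invented placeholders, not KTJ-3200 values.

```python
def windmill_speed(v_flight, k_speed=55.0, tau=2.5, dt=0.01, t_end=20.0):
    """First-order sketch of windmilling spool-up.

    n_term = k_speed * v_flight stands in for the momentum-theory terminal
    speed with an empirical correction; the transient relaxes exponentially
    with time constant tau. k_speed and tau are invented placeholders.
    """
    n_term = k_speed * v_flight          # terminal windmill speed, rev/s
    n, hist = 0.0, []
    for _ in range(round(t_end / dt)):
        n += dt * (n_term - n) / tau     # dN/dt = (N_term - N) / tau
        hist.append(n)
    return n_term, hist

n_term, hist = windmill_speed(v_flight=3.0)
print(abs(hist[-1] - n_term) / n_term < 0.01)  # True: settled near terminal speed
```

A purely terminal-speed model, like the literature methods criticised in the abstract, would return only `n_term`; the transient history `hist` is what matters when the engine may be used before spool-up completes.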
APA, Harvard, Vancouver, ISO, and other styles
4

Pereira, Filipe S., Guilherme Vaz, Luís Eça, and Sébastien Lemaire. "On the Numerical Prediction of Transitional Flows With Reynolds-Averaged Navier-Stokes and Scale-Resolving Simulation Models." In ASME 2016 35th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/omae2016-54414.

Full text
Abstract:
The present work investigates the transitional flow around a smooth circular cylinder at Reynolds number Re = 140,000. The flow is resolved using the viscous-flow solver ReFRESCO, and distinct mathematical models are applied to assess their ability to handle transitional flows. The selected mathematical models are the Reynolds-Averaged Navier-Stokes equations (RANS), Scale-Adaptive Simulation (SAS), Delayed Detached-Eddy Simulation (DDES), eXtra Large-Eddy Simulation (XLES) and the Partially-Averaged Navier-Stokes (PANS) equations. The RANS equations are supplemented with the k–ω Shear-Stress Transport (SST) model, with and without the Local Correlation Transition Model (LCTM). The numerical simulations are carried out using structured grids ranging from 9.32 × 10⁴ to 2.24 × 10⁷ cells, and a dimensionless time-step of 1.50 × 10⁻³. As expected, the outcome demonstrates that the transition from the laminar to the turbulent regime is incorrectly predicted by the k–ω SST model: transition occurs upstream of the flow separation, which is typical of the supercritical regime, and so the flow physics is incorrectly modelled. Naturally, all Scale-Resolving Simulation (SRS) models that rely on RANS to solve the boundary layer, called hybrid models, will exhibit a similar trend. On the other hand, mathematical models capable of resolving part of the turbulence field in the boundary layer (PANS) lead to a better agreement with the experimental data. Furthermore, the k–ω SST LCTM is also able to improve the modelling accuracy when compared to the k–ω SST, and so it might be a valuable engineering tool given its computational demands (in the RANS context). Overall, the results confirm that the choice of the most appropriate mathematical model for the simulation of turbulent flows is not straightforward and may depend on the details of the flow physics.
APA, Harvard, Vancouver, ISO, and other styles
5

Ambrus, Adrian, Sergey Alyaev, Nazanin Jahani, Felix James Pacis, and Tomasz Wiktorski. "Rate of Penetration Prediction Using Quantile Regression Deep Neural Networks." In ASME 2022 41st International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/omae2022-79046.

Full text
Abstract:
As the petroleum industry makes strides towards meeting the energy-efficiency and low-emissions goals required to tackle ongoing climate challenges, there is an increasing need for optimizing drilling operations. A central aspect of drilling optimization lies in the ability to select the drilling parameters that improve the rate of penetration (ROP) for the rock formations being drilled. Optimization algorithms require an efficient predictive model of the ROP as a function of key drilling parameters and formation properties. Over the past decades, various mathematical ROP models have been developed. The use of machine learning and artificial intelligence for ROP modeling has become quite common, as indicated by the quickly expanding literature on this topic in recent years. Most of these modeling efforts have focused on providing single-point forecasts of ROP without any indication of the uncertainty surrounding the predictions. However, different factors, such as bit wear or foundering, may severely limit the achievable ROP. Without including such uncertainty in the predictions, the reliability of ROP models for optimization and decision-making can be difficult to evaluate. To address this issue, we present the application of quantile regression deep neural networks (QRDNN) to the ROP prediction problem. In our work, quantile regression models perform probabilistic forecasts of ROP for a given range of drilling parameters and formation properties available from logging-while-drilling measurements or offset well logs. The model outputs consist of estimated values for different quantiles, such as the P10, P50, and P90 estimates, and associated confidence intervals. Several such models with different input features and neural network architectures, including fully connected, convolutional, and dropout layers, are trained and validated on publicly available field data spanning 12 hole sections from 7 wells drilled in the Volve field in the North Sea.
The results indicate that the QRDNN achieves good prediction accuracy within the projected confidence intervals. These results highlight the advantages of combining deep learning with quantile regression compared to using machine learning models which only generate single-point predictions for the ROP.
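Quantile regression networks such as the QRDNN described above are trained on the pinball loss, which the sketch below implements directly; q = 0.9 corresponds to a P90 estimate. The example values are made up.

```python
def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss minimised by quantile-regression networks:
    under-prediction is weighted by q, over-prediction by (1 - q)."""
    total = 0.0
    for y, yhat in zip(y_true, y_pred):
        err = y - yhat
        total += max(q * err, (q - 1.0) * err)
    return total / len(y_true)

# For a P90 estimate (q = 0.9), under-predicting by 2 costs nine times
# as much as over-predicting by 2:
print(round(pinball_loss([10.0], [8.0], 0.9), 6))   # 1.8
print(round(pinball_loss([10.0], [12.0], 0.9), 6))  # 0.2
```

Minimising this asymmetric loss is what pushes each output head of the network toward its target quantile, giving the P10/P50/P90 spread the abstract describes instead of a single-point forecast.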
APA, Harvard, Vancouver, ISO, and other styles
6

Tavakoli, Sasan, Abbas Dashtimanesh, and Prasanta K. Sahoo. "Prediction of Hydrodynamic Coefficients of Coupled Heave and Pitch Motions of Heeled Planing Boats by Asymmetric 2D+T Theory." In ASME 2018 37th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/omae2018-78327.

Full text
Abstract:
This paper attempts to present a mathematical model to predict the hydrodynamic coefficients of a heeled planing hull in the vertical plane. The model has been developed using 2D+T theory and the theoretical solution of the water entry of wedge-section bodies. The sectional hydrodynamic force acting on the vessel is determined and then integrated over the entire length of the vessel. By simplifying the final equations of the heave force and pitch moment acting on a heeled planing hull, equations for the prediction of the hydrodynamic coefficients of the vessel are developed. The accuracy of the method is evaluated by comparing its results against previous empirical methods, and the current method is seen to have reasonable accuracy in predicting the different hydrodynamic coefficients. The effects of heel angle on the added mass, damping and stiffness coefficients are also studied and discussed.
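In the 2D+T spirit of the paper, a vertical-plane coefficient is built by integrating a sectional 2-D result along the hull. The sketch below assembles a heave added-mass estimate from the flat-plate sectional value ½ρπc² (c = sectional half-beam); this is a crude stand-in for the paper's wedge water-entry solution and ignores heel, deadrise and wetted-length effects entirely.

```python
import math

RHO = 1025.0  # sea-water density, kg/m^3

def sectional_added_mass(half_beam):
    # High-frequency 2-D heave added mass of a section, approximated by
    # the flat-plate value 0.5 * rho * pi * c^2 (deadrise corrections omitted).
    return 0.5 * RHO * math.pi * half_beam ** 2

def heave_added_mass(half_beams, length):
    """2D+T-style estimate: integrate the sectional added mass over the
    wetted length with the trapezoidal rule."""
    n = len(half_beams) - 1
    dx = length / n
    total = 0.0
    for i in range(n):
        total += 0.5 * (sectional_added_mass(half_beams[i])
                        + sectional_added_mass(half_beams[i + 1])) * dx
    return total

# Prismatic hull (constant half-beam) as a sanity check against the
# closed-form value 0.5 * rho * pi * c^2 * L:
a33 = heave_added_mass([1.2] * 11, length=8.0)
print(a33)
```

Replacing the flat-plate section with a heeled-wedge water-entry solution, as the paper does, changes only `sectional_added_mass`; the along-length integration structure stays the same.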
APA, Harvard, Vancouver, ISO, and other styles
7

Casoli, Paolo, Luca Riccò, Federico Campanini, Antonio Lettini, and Cesare Dolcin. "Mathematical Model of a Hydraulic Excavator for Fuel Consumption Predictions." In ASME/BATH 2015 Symposium on Fluid Power and Motion Control. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/fpmc2015-9566.

Full text
Abstract:
This paper presents a multibody mathematical model of a hydraulic excavator, developed in the AMESim® environment, which is able to predict the machine's fuel consumption during working cycles. The mathematical modelling approach is presented, as well as the subsystem models. The experimental activity on the excavator is presented in detail: the fuel consumption was measured according to the JCMAS standard, and the working cycles were executed an appropriate number of times in order to minimize the stochastic influence of the operator on the fuel consumption. The results show the capability of the mathematical model to predict the machine's fuel consumption. The excavator model could be useful either for performing accurate analyses of energy dissipation in the system, giving the possibility to introduce new system configurations and compare their performance with the standard one, or for defining novel system control strategies to achieve the fuel-consumption reduction target.
APA, Harvard, Vancouver, ISO, and other styles
8

Masuda, Koichi, Takayuki Asanuma, Hisaaki Maeda, Tomoki Ikoma, and Chang-Kyu Rheem. "A Prediction Method for Horizontal Plane Behavior of FPSO Under the Single Point Mooring." In ASME 2002 21st International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2002. http://dx.doi.org/10.1115/omae2002-28096.

Full text
Abstract:
It is well known that in single-point mooring or anchoring, slowly varying oscillation of a ship is caused by the action of current and wind. During this slowly varying oscillation, extraordinary tension occurs in the mooring line when the ship's yaw angle is near its maximum, which can result in the breakdown of mooring lines or the unforeseen drift of anchors. Floating Production, Storage and Offloading (FPSO) systems are often moored as Single Point Mooring (SPM) systems, which can be Catenary Anchor Leg Mooring (CALM) systems or Single Anchor Leg Mooring (SALM) systems. It has become necessary to predict and evaluate the horizontal-plane behavior of an FPSO in current, wind and waves, since the workability and safety of the FPSO are important from the standpoint of life-cycle engineering. Numerical simulation is one of the practical methods for predicting FPSO performance, and it requires quite accurate values of the hydrodynamic coefficients in the mathematical model. Recently, some attempts to improve the accuracy of prediction of the hydrodynamic coefficients have been made, and approximate formulae for hydrodynamic derivatives including the interaction effect of the main hull form and appendages have also been proposed. Extensive studies have likewise been made of numerical models that describe the components of hull, propeller, rudder, thruster, wind and waves separately, together with their interactions. In this paper, first, the basic equations of maneuvering motion are explained, and an estimation method based on slender-body theory for the hydrodynamic force acting on the hull is outlined. The authors explain the numerical models used to obtain the FPSO coefficients for horizontal-plane behavior from a mathematical model of ship maneuverability, and a numerical test of an FPSO under slowly varying oscillation is carried out. Finally, a new mathematical model is proposed to describe the current forces acting on an FPSO under slowly varying oscillation.
APA, Harvard, Vancouver, ISO, and other styles
9

Gopalakrishnan, Shibu, and Gopinath Dhandapani. "Single Layered Cable Under Constrained Bending: Development of New Mathematical Model and Validation." In ASME 2016 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/imece2016-67854.

Full text
Abstract:
This paper concerns the response of a single-layered strand cable of helical wires with wires-to-core contact under free and constant curvature constrained bending. The stranded cable under static-loading conditions experiences any combination of tension, torsion and bending. A linear finite element model for helical wire strand cable for both bending cases was developed and their bending response for various load steps was analyzed. The responses thus observed were compared with the theoretical prediction reported by the present authors in the literature. The present authors have developed a theoretical model using the thin rod theory and presented a linear stiffness matrix establishing the relationship between the axial, torsional and flexural rigidities and the coupling parameters of the cable.
APA, Harvard, Vancouver, ISO, and other styles
10

Guo, Junkang, Jun Hong, Xiaopan Wu, Mengxi Wang, and Yan Feng. "The Modeling and Prediction of Gravity Deformation in Precision Machine Tool Assembly." In ASME 2013 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/imece2013-63441.

Full text
Abstract:
The variation propagation in mechanical assembly is an important topic in several research fields, such as computer-aided tolerancing (CAT) and product quality control. Mathematical models and analysis methods have been developed to solve this practical problem: tolerance analysis, which is based on the rigid-body hypothesis, can be used to simulate mass manufacturing and assembly, while the state-space model and stream-of-variation theory are mainly applied to flexible part assembly. However, in precision machine tool assembly, both tolerance design and process planning critically impact the accuracy performance, mainly because the gravity deformation, including the part deformation and the variation in the joint of two connecting parts, cannot be ignored in variation propagation analysis. In this paper, based on the new-generation GPS (Geometrical Product Specification and Verification) standards, the verification and modeling of key-characteristic variation due to the gravity deformation of a single part and of adjacent parts are discussed. The accurate evaluation of position and orientation variation, taking into account form errors and gravity deformation, can be solved from this model by FEM. A mathematical model considering rail error and the stiffness of bearings is introduced to simulate the motion error under gravity. Based on this work, to more accurately calculate the variation propagation considering the gravity impact, a state-space model describing the assembly process of machine tools is proposed. Then, at any stage of the assembly process, the final accuracy can be predicted to determine whether it falls outside the design requirement. The validity of this method is verified by a simulation of the assembly of a precision horizontal machining center.
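The state-space description of assembly used here has the generic form x_{k+1} = A_k x_k + B_k u_k, where each station contributes its own deviation source u_k (which, per the paper, must include gravity-induced deformation, not just tolerances). A scalar toy version with invented station numbers, nothing like the paper's FEM-based model:

```python
def propagate(stations, x0=0.0):
    """Scalar sketch of variation propagation: x_{k+1} = A_k * x_k + B_k * u_k.

    x is the accumulated geometric deviation; u_k is the deviation introduced
    at station k. Station values below are invented for illustration.
    """
    x = x0
    for A, B, u in stations:
        x = A * x + B * u
    return x

# Three assembly steps, each given as (A, B, u):
stations = [(1.0, 1.0, 0.02), (1.1, 1.0, 0.01), (1.0, 1.0, 0.015)]
final = propagate(stations)
print(round(final, 3))  # 0.047: predicted final deviation
```

Evaluating `propagate` after each prospective step is what allows the final accuracy to be predicted mid-process and checked against the design requirement.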
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Prediction theory Mathematical models"

1

Saptsin, Vladimir, and Volodymyr Mykolayovych Soloviov. Relativistic quantum econophysics – new paradigms in complex systems modelling. [n.p.], July 2009. http://dx.doi.org/10.31812/0564/1134.

Full text
Abstract:
This work deals with a new, relativistic direction in quantum econophysics, within which a change of the classical paradigms in the mathematical modelling of socio-economic systems is proposed. Classical physics proceeds from the hypothesis that instantaneous values of all the physical quantities characterizing a system's state exist and can, in principle, be measured accurately. Non-relativistic quantum mechanics does not reject the existence of instantaneous values of classical physical quantities, but not all of them can be measured simultaneously (the uncertainty principle). Relativistic quantum mechanics rejects the existence of instantaneous values of any physical quantity in principle, and consequently the notion of the system state, including the notion of the wave function, becomes rigorously undefinable. The task of this work consists in an econophysical analysis of the conceptual foundations and mathematical apparatus of classical physics, relativity theory, and non-relativistic and relativistic quantum mechanics, with regard to the historical, psychological and philosophical aspects and the current state of the socio-economic modelling problem. We show that new modelling paradigms were in fact adopted in quantum theory long ago: the notion of the physical-quantity operator becomes the primary fundamental conception (an operator is a mathematical image of a procedure, an action); the description of system dynamics becomes discrete and approximate in its essence; and prediction of the future, even roughly, is actually impossible if the aftereffect, i.e. the memory, is set aside. In view of the analysis conducted in this work, we suggest new paradigms of economic-mathematical modelling.
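The operator-centred paradigm the authors describe can be made concrete with a toy finite-dimensional example. The observable and state below are arbitrary textbook illustrations, not drawn from the report: an observable is represented by a Hermitian operator, and a "measurement" yields only a statistical expectation, not an immediate value.

```python
import numpy as np

# Toy two-level quantum system: the observable is a Hermitian operator
# (here the Pauli-z matrix), and only the expectation value over a state
# is defined -- not an immediate value of the quantity itself.
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])   # observable as an operator
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)       # normalized state vector

expectation = psi.conj() @ sigma_z @ psi        # <psi| A |psi>
print(expectation)  # 0.0: the two outcomes +1 and -1 are equally likely
```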
APA, Harvard, Vancouver, ISO, and other styles
2

Levin, Sheldon G., and J. T. Klopcic. Mathematical Models for Prediction of Neuropsychiatric and Other Non-Battle Casualties in High Intensity Combat. Fort Belvoir, VA: Defense Technical Information Center, July 1986. http://dx.doi.org/10.21236/ada171283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lovianova, Iryna V., Dmytro Ye Bobyliev, and Aleksandr D. Uchitel. Cloud calculations within the optional course Optimization Problems for 10th-11th graders. [n.p.], September 2019. http://dx.doi.org/10.31812/123456789/3267.

Full text
Abstract:
The article deals with the problem of introducing cloud calculations into 10th-11th graders' training to solve optimization problems within the STEM education concept. After analyzing existing programmes of optional courses on optimization problems, a programme for the optional course Optimization Problems has been developed and substantiated, implying the solution of problems in the cloud environment CoCalc. It is the routine calculating operations, not the mathematical model, that the programme delegates to the cloud, which allows considering more problems that are close to reality without adapting the material for 10th-11th graders. Besides, the mathematical apparatus of the course, partially known to students from such mathematics sections as probability theory, mathematical statistics, mathematical analysis and linear algebra, is sufficient to master the suggested course. The developed course deals with a whole class of conventional optimization problems, which vary greatly. They can be associated with designing devices and technological processes, distributing limited resources and planning business operations, as well as with everyday problems. Devices, processes and situations to which an optimization-problem model is applied are called optimization problems. Optimization methods enable optimal solutions for mathematical models. The developed course is notable for building mathematical models and choosing a method to find an efficient solution.
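A typical resource-allocation problem of the kind such a course covers can be solved with a few lines of routine computation. The coefficients below are made up for illustration; for a two-variable linear programme the optimum lies at a vertex of the feasible polygon, so a sketch can simply enumerate boundary intersections:

```python
from itertools import combinations

# Hypothetical planning problem: maximize 3x + 5y subject to
#   x + 2y <= 14,  3x - y >= 0,  x - y <= 2,  x >= 0,  y >= 0.
constraints = [  # each row (a, b, c) encodes a*x + b*y <= c
    (1, 2, 14),
    (-3, 1, 0),
    (1, -1, 2),
    (-1, 0, 0),
    (0, -1, 0),
]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel boundaries: no vertex
    # Cramer's rule for the intersection of the two boundary lines
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        value = 3 * x + 5 * y
        if best is None or value > best[0]:
            best = (value, x, y)

print(best)  # (38.0, 6.0, 4.0): optimum at x=6, y=4
```

In the course setting the same routine work would be handed to the cloud environment, leaving the modelling step to the student.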
APA, Harvard, Vancouver, ISO, and other styles
4

Modlo, Yevhenii O., Serhiy O. Semerikov, Ruslan P. Shajda, Stanislav T. Tolmachev, and Oksana M. Markova. Methods of using mobile Internet devices in the formation of the general professional component of bachelor in electromechanics competency in modeling of technical objects. [n.p.], July 2020. http://dx.doi.org/10.31812/123456789/3878.

Full text
Abstract:
The article describes the components of methods of using mobile Internet devices in forming the general professional component of a bachelor in electromechanics' competency in modeling technical objects: using various methods of representing models; solving professional problems using ICT; competence in electric machines; and critical thinking. Drawing on the content of the academic disciplines "Higher Mathematics", "Automatic Control Theory", "Modeling of Electromechanical Systems" and "Electrical Machines", the use of Scilab, SageCell, Google Sheets and Xcos on Cloud in forming this component of competency is disclosed. It is concluded that the following software is advisable for mobile Internet devices: cloud-based spreadsheets as modeling tools (including for neural networks); visual modeling systems as a means of structural modeling of technical objects; a mobile computer mathematics system used at all stages of modeling; and mobile communication tools for organizing joint modeling activities.
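The abstract's point that cloud spreadsheets can serve as neural-network modeling tools can be sketched with a single trained neuron whose update rule maps line by line onto spreadsheet formulas. The data, learning rate, and iteration count below are illustrative assumptions, not taken from the article:

```python
import math

# Single sigmoid neuron trained by gradient descent on squared error.
# Each line corresponds to a spreadsheet formula, illustrating
# "cloud-based spreadsheets as modeling tools (including neural networks)".
samples = [(0.0, 0.0), (1.0, 1.0), (2.0, 1.0)]  # (input, target) pairs
w, b, lr = 0.0, 0.0, 0.5

for _ in range(2000):
    for x, t in samples:
        y = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid activation
        grad = (y - t) * y * (1.0 - y)            # d(error)/d(pre-activation)
        w -= lr * grad * x
        b -= lr * grad

print(w, b)  # learned weight and bias separating x=0 from x>=1
```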
APA, Harvard, Vancouver, ISO, and other styles
5

Al-Qadi, Imad, Qingqing Cao, Lama Abufares, Siqi Wang, Uthman Mohamed Ali, and Greg Renshaw. Moisture Content and In-place Density of Cold-Recycling Treatments. Illinois Center for Transportation, May 2022. http://dx.doi.org/10.36501/0197-9191/22-007.

Full text
Abstract:
Cold-recycling treatments are gaining popularity in the United States because of their economic and environmental benefits. Curing is the most critical phase for these treatments: it is the process in which the emulsion breaks and water evaporates, leaving residual binder in the treated material. In this process, the cold-recycled mix gains strength. Sufficient strength is required before opening the cold-treated layer to traffic or placing an overlay; otherwise, premature failure related to insufficient strength and trapped moisture is to be expected. However, challenges arise from the lack of relevant information and specifications to monitor treatment curing. This report presents the outcomes of a research project funded by the Illinois Department of Transportation to investigate the feasibility of using nondestructive ground-penetrating radar (GPR) for density and moisture content estimation of cold-recycled treatments. Monitoring moisture content is an indicator of curing level; treated layers must meet a threshold of maximum allowable moisture content (2% in Illinois) to be considered sufficiently cured. The methodology followed in this report included GPR numerical simulations and GPR indoor and field tests as data sources. The data were used to correlate moisture content with dielectric properties calculated from GPR measurements. Two models were developed for moisture content estimation: the first is based on numerical simulations; the second is based on electromagnetic mixing theory and is called the Al-Qadi-Cao-Abufares (ACA) model. The simulation model had an average error of 0.33% for moisture prediction across five different field projects. The ACA model had an average error of 2% for density prediction and an average root-mean-square error of less than 0.5% for moisture content prediction for both indoor and field tests. The ACA model is presented as part of a user-friendly tool developed to continuously monitor the curing of cold-recycled treatments.
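The ACA model builds on electromagnetic mixing theory; a common mixing rule in that family is the CRIM (complex refractive index model), sketched below. The phase permittivities and porosity are typical textbook values, not the ACA model's actual coefficients:

```python
# CRIM-style mixing sketch: the square root of the bulk dielectric constant
# equals the volume-weighted sum of the phases' square roots. Inverting the
# rule gives a moisture estimate from a GPR-measured bulk dielectric.
EPS_WATER, EPS_SOLID, EPS_AIR = 81.0, 5.0, 1.0  # illustrative permittivities

def water_volume_fraction(eps_bulk: float, porosity: float) -> float:
    """Invert the CRIM rule for volumetric water content theta:
    sqrt(eps_bulk) = (1-n)*sqrt(eps_solid)
                   + (n-theta)*sqrt(eps_air) + theta*sqrt(eps_water)
    """
    n = porosity
    num = eps_bulk ** 0.5 - (1 - n) * EPS_SOLID ** 0.5 - n * EPS_AIR ** 0.5
    den = EPS_WATER ** 0.5 - EPS_AIR ** 0.5
    return num / den

theta = water_volume_fraction(eps_bulk=9.0, porosity=0.35)
print(round(theta, 4))  # ~0.15: estimated volumetric water content
```

Tracking such an estimate over time as the dielectric constant drops is one way a GPR-based tool can monitor curing against a moisture threshold.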
APA, Harvard, Vancouver, ISO, and other styles
