Journal articles on the topic 'Mathematical modelling methodologies'

To see the other types of publications on this topic, follow the link: Mathematical modelling methodologies.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Mathematical modelling methodologies.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Kvasha, Serhii, Nadiia Davydenko, Yurii Pasichnyk, Tetiana Viatkina, and Natalia Wasilewska. "GDP modelling: assessment of methodologies and peculiarities of its usage in Ukraine." Problems and Perspectives in Management 16, no. 4 (November 9, 2018): 186–200. http://dx.doi.org/10.21511/ppm.16(4).2018.16.

Abstract:
GDP is one of the main indicators determining the level of economic growth in countries and regions across the globe; therefore, its calculation should be based on an advanced methodology. In the present context, the existing methods of GDP calculation do not fully meet the accuracy criterion, owing to certain objective and subjective factors. Hence, the development of an improved methodology that takes into account the disadvantages of the existing techniques and is based on economic and mathematical modeling is an urgent national task for Ukraine. The purpose of the article is to assess the GDP calculation methodology used in Ukraine. To achieve this purpose, the relevant methods of GDP calculation that are valid in Ukraine have been analyzed, and their specifics, certain drawbacks, problems of use and application scenarios have also been revealed. Based on the results of the analysis, an advanced methodology built on an economic and mathematical model with the use of dynamic programming is proposed. The developed methodology for calculating GDP takes into account the peculiarities of social development in Ukraine and the tendencies of world economic processes, and contributes to obtaining more reliable GDP values. It will be useful for experts in financial institutions, including international ones, and for scholars working in the macroeconomic modeling area.
2

Ferrer, Jordi, Clara Prats, Daniel López, and Josep Vives-Rego. "Mathematical modelling methodologies in predictive food microbiology: A SWOT analysis." International Journal of Food Microbiology 134, no. 1-2 (August 31, 2009): 2–8. http://dx.doi.org/10.1016/j.ijfoodmicro.2009.01.016.

3

Bertozzi, Andrea L., Shane D. Johnson, and Michael J. Ward. "Mathematical modelling of crime and security: Special Issue of EJAM." European Journal of Applied Mathematics 27, no. 3 (April 28, 2016): 311–16. http://dx.doi.org/10.1017/s0956792516000176.

Abstract:
This special issue of the European Journal of Applied Mathematics features research articles that involve the application of mathematical methodologies to the modelling of a broad range of problems related to crime and security. Some specific topics in this issue include recent developments in mathematical models of residential burglary, a dynamical model for the spatial spread of riots initiated by some triggering event, the analysis and development of game-theoretic models of crime and conflict, the study of statistically based models of insurgent activity and terrorism using real-world data sets, models for the optimal strategy of police deployment under realistic constraints, and a model of cyber crime as related to the study of spiking behaviour in social network cyberspace communications. Overall, the mathematical and computational methodologies employed in these studies are as diverse as the specific applications themselves and the scales (spatial or otherwise) to which they are applied. These methodologies range from statistical and stochastic methods based on maximum likelihood methods, Bayesian equilibria, regression analysis, self-excited Hawkes point processes, and agent-based random walk models on networks, to more traditional applied mathematical methods such as dynamical systems and stability theory, the theory of Nash equilibria, rigorous methods in partial differential equations and travelling wave theory, and asymptotic methods that exploit disparate space and time scales.
4

Guergachi, A. Aziz, and Gilles G. Patry. "Identification, verification and validation of process models in wastewater engineering: a critical review." Journal of Hydroinformatics 5, no. 3 (July 1, 2003): 181–88. http://dx.doi.org/10.2166/hydro.2003.0014.

Abstract:
This article presents a critical review of the existing methodologies for process mathematical modelling in the area of wastewater engineering. It is argued that model identifiability is not a major issue in mathematical modelling. Model verifiability is a very demanding criterion that can be replaced by a less stringent one: model observability. The issue of ‘complex models versus reduced-order models’ is to be resolved by introducing a new concept: optimal model complexity. The traditional procedures of model validation are not adequate and a mathematical framework for model quality evaluation is needed.
5

Kulkarni, Kaustubh, et al. "Mathematical Modelling of a Circular Central Solar Receiver with Variable Diameter Header." Information Technology in Industry 9, no. 2 (March 26, 2021): 519–26. http://dx.doi.org/10.17762/itii.v9i2.379.

Abstract:
The circular central solar receiver helps in the generation of power output in a solar power plant. This research paper encompasses the mathematical development and modelling of a circular central solar receiver with a variable diameter header. The modelling is used for analyzing the power output over a year, and a desired electrical output is generated by the model. Methodologies were used to evaluate the desired parameters involved in the modelling of the entire system, such as the size of the receiver and the height of the tower. In this modelling system, a solar simulator is used in place of a heliostat, and the working fluid is water. The receiver involved in the modelling is a circular central solar receiver. The thermal solar receiver system is simulated, with a solar irradiation heat flux of 3500 W/m² applied at the input.
6

McKemmish, Laura K., and Jonathan Tennyson. "General mathematical formulation of scattering processes in atom–diatomic collisions in the RmatReact methodology." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 377, no. 2154 (August 5, 2019): 20180409. http://dx.doi.org/10.1098/rsta.2018.0409.

Abstract:
Accurately modelling cold and ultracold reactive collisions occurring over deep potential wells, such as D⁺ + H₂ → H⁺ + HD, requires the development of new theoretical and computational methodologies. One potentially useful framework is the R-matrix method, adopted widely for electron–molecule collisions, which has more recently been applied to non-reactive heavy-particle collisions such as Ar–Ar. The existing treatment of non-reactive elastic and inelastic scattering needs to be substantially extended to enable modelling of reactive collisions: this is the subject of this paper. Herein, we develop the general mathematical formulation for non-reactive elastic and inelastic scattering, photoassociation, photodissociation, charge exchange and reactive scattering using the R-matrix method. Of particular note is that the inner region, of central importance to calculable R-matrix methodologies, must be finite in all scattering coordinates rather than a single scattering coordinate as for non-reactive scattering. This article is part of a discussion meeting issue ‘Advances in hydrogen molecular ions: H₃⁺, H₅⁺ and beyond’.
7

Piskur, Pawel, Piotr Szymak, and Bartosz Larzewski. "Shipyard Crane Modeling Methods." Pedagogika-Pedagogy 93, no. 6s (August 31, 2021): 279–90. http://dx.doi.org/10.53656/ped21-6s.25shi.

Abstract:
The article discusses various crane mathematical modelling and simulation methodologies. The purpose of this study is to assess the effect of wind force on the dynamics of shipyard cranes, particularly hook movements in the horizontal plane. Appropriate simulation models are required to offer a robust control strategy that allows the crane to be remotely operated in windy conditions. To this end, mathematical models based on differential equations for varying numbers of independent variables are compared with an object-oriented physical model based on MATLAB Simscape Multibody. The underlying assumptions are explored, as well as the effect of the number of independent variables on model accuracy.
8

Nacaskul, Poomjai. "Survey of credit risk models in relation to capital adequacy framework for financial institutions." Journal of Governance and Regulation 5, no. 4 (2016): 68–84. http://dx.doi.org/10.22495/jgr_v5_i4_p6.

Abstract:
This article (i) iterates what is meant by credit risks and the mathematical-statistical modelling thereof, (ii) elaborates the conceptual and technical links between credit risk modelling and the capital adequacy framework for financial institutions, particularly as per the New Capital Accord (Basel II)’s Internal Ratings-Based (IRB) approach, (iii) proffers a simple and intuitive taxonomy of contemporary credit risk modelling methodologies, and (iv) discusses in some detail a number of key models pertinent, in various stages of development, to various application areas in the banking and financial sector.
9

Barons, Martine Jayne, and Rachel L. Wilkerson. "Proof and Uncertainty in Causal Claims." Exchanges: The Interdisciplinary Research Journal 5, no. 2 (June 7, 2018): 72–89. http://dx.doi.org/10.31273/eirj.v5i2.238.

Abstract:
Causal questions drive scientific enquiry. From Hume to Granger, and Rubin to Pearl, the history of science is full of examples of scientists testing new theories in an effort to uncover causal mechanisms. The difficulty of drawing causal conclusions from observational data has prompted developments in new methodologies, most notably in the area of graphical models. We explore the relationship between existing theories about causal mechanisms in a social science domain, new mathematical and statistical modelling methods, the role of mathematical proof and the importance of accounting for uncertainty. We show that, while the mathematical sciences rely on their modelling assumptions, dialogue with the social sciences calls for continual extension of these models. We show how changing model assumptions lead to innovative causal structures and more nuanced causal explanations. We review differing techniques for determining cause in different disciplines using causal theories from psychology, medicine, and economics.
10

Anisimov, A. L., and Yu B. Melnikov. "On the Methodologies for Evaluation of Tax Authorities’ Performance from the Perspective of Economic-Mathematical Modelling." Journal of the Ural State University of Economics 69 (2017): 78–90. http://dx.doi.org/10.29141/2073-1019-2017-13-1-6.

11

Rosa, Milton, and Daniel Clark Orey. "Philosophical concepts of creative insubordination in ethnomodelling research." Educação Matemática Pesquisa Revista do Programa de Estudos Pós-Graduados em Educação Matemática 24, no. 2 (August 31, 2022): 353–83. http://dx.doi.org/10.23925/1983-3156.2022v24i2p353-383.

Abstract:
The application of modelling methods makes sense to researchers and educators when these professionals examine the mathematical patterns developed by members of distinct cultures. Currently, an important issue in mathematics education is its tendency towards valuing a local orientation in its research paradigm. Thus, a search for innovative methodologies such as ethnomodelling is necessary to record the historical forms of mathematical ideas, procedures, and practices developed in diverse cultural contexts. Ethnomodelling is not an attempt to replace globalized school/academic mathematics; rather, it is necessary to recognize the existence of local mathematical knowledge in the school curriculum. This context enabled the emergence of the creative insubordination of ethnomodelling, as it evoked a disturbance that caused a revision of the rules and regulations in the mathematical modelling process. This process triggered a debate about the nature of mathematics in relation to culture by proposing a dialogue between local and global approaches in dialogical ways. Thus, the main objective of this article is to discuss the philosophy of ethnomodelling as a creative insubordination of mathematics education and also as a process of glocalization.
12

Britikov, Nikita. "NUMERICAL MODELLING OF SNOW DEPOSITS AND SNOW TRANSPORT ON LONG-SPAN ROOFS FOR STEADY AND UNSTEADY FLOW." International Journal for Computational Civil and Structural Engineering 18, no. 4 (December 28, 2022): 23–38. http://dx.doi.org/10.22337/2587-9618-2022-18-4-23-38.

Abstract:
This paper presents methodologies for the numerical modelling of snow deposits and snow transport on long-span roofs for steady and unsteady flow. The calculation of snow loads on long-span roofs is a complex problem whose solution often involves deviating from building code recommendations. Experiments in wind tunnels, although widely used, do not allow reproducing the full-scale effects of all snow accumulation processes. At the same time, the continuous improvement of mathematical models, numerical methods, software and computer technologies makes the development and implementation of numerical modelling technologies in real construction practice and regulatory documents inevitable. In this paper it is shown that the use of the well-known erosion-deposition model, supported by field observations and experimental data, allows reasonably accurate snow distributions on long-span roofs to be reproduced. The importance of the “synthesis” between physical and mathematical modelling and the application of the building codes is emphasized, as only the joint use of these approaches can comprehensively describe the modelling of snow accumulation and snow transport and provide better solutions to a wider range of related problems.
13

Sreenath, Sree N., Kwang-Hyun Cho, and Peter Wellstead. "Modelling the dynamics of signalling pathways." Essays in Biochemistry 45 (September 30, 2008): 1–28. http://dx.doi.org/10.1042/bse0450001.

Abstract:
In the present chapter we discuss methodologies for the modelling, calibration and validation of cellular signalling pathway dynamics. The discussion begins with the typical range of techniques for modelling that might be employed to go from the chemical kinetics to a mathematical model of biochemical pathways. In particular, we consider the decision-making processes involved in selecting the right mechanism and level of detail of representation of the biochemical interactions. These include the choice between (i) deterministic and stochastic chemical kinetics representations, (ii) discrete and continuous time models and (iii) representing continuous and discrete state processes. We then discuss the task of calibrating the models using information available in web-based databases. For situations in which the data are not available from existing sources we discuss model calibration based upon measured data and system identification methods. Such methods, together with mathematical modelling databases and computational tools, are often available in standard packages. We therefore make explicit mention of a range of popular and useful sites. As an example of the whole modelling and calibration process, we discuss a study of the cross-talk between the IL-1 (interleukin-1)-stimulated NF-κB (nuclear factor κB) pathway and the TGF-β (transforming growth factor β)-stimulated Smad2 pathway.
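The choice this chapter describes between deterministic and stochastic chemical kinetics representations can be illustrated on a toy degradation reaction A → ∅. The sketch below is purely illustrative and not taken from the chapter; the rate constant, molecule count and function names are all assumptions. It integrates the deterministic rate equation dA/dt = −kA with forward Euler and, alongside it, runs Gillespie's direct method for the same reaction, so the discrete stochastic trajectories can be compared against the continuous mean behaviour.

```python
import math
import random

def ode_decay(n0, k, t_end, dt=1e-3):
    """Deterministic kinetics: integrate dA/dt = -k*A with forward Euler."""
    a, t = float(n0), 0.0
    while t < t_end:
        a += -k * a * dt
        t += dt
    return a

def ssa_decay(n0, k, t_end, rng):
    """Stochastic kinetics: Gillespie's direct method for A -> 0."""
    n, t = n0, 0.0
    while n > 0:
        propensity = k * n                 # total event rate for n molecules
        t += rng.expovariate(propensity)   # exponential waiting time
        if t > t_end:
            break
        n -= 1                             # one degradation event fires
    return n

k, n0, t_end = 0.5, 200, 2.0
rng = random.Random(42)
mean_ssa = sum(ssa_decay(n0, k, t_end, rng) for _ in range(500)) / 500
print(ode_decay(n0, k, t_end))   # ~ n0 * exp(-k * t_end), about 73.6
print(mean_ssa)                  # SSA mean fluctuates around the ODE value
```

For large molecule counts the two agree closely; the stochastic picture matters when copy numbers are small, which is exactly the decision point the chapter discusses.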
14

Wu, Ying, Zhi Cheng, Ryley McConkey, Fue-Sang Lien, and Eugene Yee. "Modelling of Flow-Induced Vibration of Bluff Bodies: A Comprehensive Survey and Future Prospects." Energies 15, no. 22 (November 20, 2022): 8719. http://dx.doi.org/10.3390/en15228719.

Abstract:
A comprehensive review of modelling techniques for the flow-induced vibration (FIV) of bluff bodies is presented. This phenomenology involves bidirectional fluid–structure interaction (FSI) coupled with non-linear dynamics. In addition to experimental investigations of this phenomenon in wind tunnels and water channels, a number of modelling methodologies have become important in the study of various aspects of the FIV response of bluff bodies. This paper reviews three different approaches for the modelling of FIV phenomenology. Firstly, we consider the mathematical (semi-analytical) modelling of various types of FIV responses: namely, vortex-induced vibration (VIV), galloping, and combined VIV-galloping. Secondly, the conventional numerical modelling of FIV phenomenology involving various computational fluid dynamics (CFD) methodologies is described, namely: direct numerical simulation (DNS), large-eddy simulation (LES), detached-eddy simulation (DES), and Reynolds-averaged Navier–Stokes (RANS) modelling. Emergent machine learning (ML) approaches based on the data-driven methods to model FIV phenomenology are also reviewed (e.g., reduced-order modelling and application of deep neural networks). Following on from this survey of different modelling approaches to address the FIV problem, the application of these approaches to a fluid energy harvesting problem is described in order to highlight these various modelling techniques for the prediction of FIV phenomenon for this problem. Finally, the critical challenges and future directions for conventional and data-driven approaches are discussed. So, in summary, we review the key prevailing trends in the modelling and prediction of the full spectrum of FIV phenomena (e.g., VIV, galloping, VIV-galloping), provide a discussion of the current state of the field, present the current capabilities and limitations and recommend future work to address these limitations (knowledge gaps).
15

Meron, Ehud. "Self-organization in interface dynamics and urban development." Discrete Dynamics in Nature and Society 3, no. 2-3 (1999): 125–36. http://dx.doi.org/10.1155/s1026022699000163.

Abstract:
The view of the urban environment as an extended nonlinear system introduces new concepts, motivates new questions, and suggests new methodologies in the study of urban dynamics. A review of recent results on interface dynamics in nonequilibrium physical systems is presented, and possible implications on the urban environment are discussed. It is suggested that the growth modes of specific urban zones (e.g. residential, commercial, or industrial) and the factors affecting them can be studied using mathematical models that capture two generic interface instabilities.
16

Calmet, Jacques, and Marvin Oliver Schneider. "Decision Making Modeled as a Theorem Proving Process." International Journal of Decision Support System Technology 4, no. 3 (July 2012): 1–11. http://dx.doi.org/10.4018/jdsst.2012070101.

Abstract:
The authors introduce a theoretical framework that enables decision making to be processed along some of the lines and methodologies used to mechanize mathematics, and more specifically to mechanize the proofs of theorems. An underlying goal of Decision Support Systems is to trust the decision that is designed. This is also the main goal of their framework. Indeed, the proof of a theorem is always trustworthy. By analogy, this implies that a decision validated through theorem-proving methodologies brings trust. To reach such a goal the authors rely on a series of abstractions that enable all of the knowledge involved in decision making to be processed. They deal with an Agent Oriented Abstraction for Multiagent Systems, Object Mechanized Computational Systems, Abstraction Based Information Technology, Virtual Knowledge Communities, and the topological specification of knowledge bases using Logical Fibering. This approach rests on underlying hypotheses such as that knowledge is at the heart of any decision making and that trust transcends the concept of belief. This introduces methodologies from Artificial Intelligence. Another overall goal is to build tools using advanced mathematics for users without specific mathematical knowledge.
17

Huang, F., K. Fehrs, G. Hartmann, and R. Klette. "Wide-angle vision for road views." Opto-Electronics Review 21, no. 1 (January 1, 2013): 1–22. http://dx.doi.org/10.2478/s11772-013-0079-5.

Abstract:
The field-of-view of a wide-angle image is greater than (say) 90 degrees, and so contains more information than available in a standard image. A wide field-of-view is more advantageous than standard input for understanding the geometry of 3D scenes, and for estimating the poses of panoramic sensors within such scenes. Thus, wide-angle imaging sensors and methodologies are commonly used in various road-safety, street surveillance, street virtual touring, or street 3D modelling applications. The paper reviews related wide-angle vision technologies by focusing on mathematical issues rather than on hardware.
18

Giribone, Pier Giuseppe, and Duccio Martelli. "Deep Learning for seasonality modelling in Inflation-Indexed Swap pricing." Risk Management Magazine 16, no. 3 (December 2021): 54–69. http://dx.doi.org/10.47473/2020rmm0099.

Abstract:
An Inflation-Indexed Swap (IIS) is a derivative in which, at every payment date, the counterparties swap an inflation rate with a fixed rate. For the calculation of the Inflation Leg cash flows it is necessary to build a mathematical model suitable for projecting the Consumer Price Index (CPI). For this purpose, quants typically start from market quotes for Zero-Coupon swaps in order to derive the future trend of the inflation index, together with a seasonality model for capturing the typical periodic effects. In this study, we propose a forecasting model for inflation seasonality based on a Long Short-Term Memory (LSTM) network: a deep learning methodology particularly useful for forecasting purposes. The CPI predictions are conducted using a FinTech paradigm, but with respect for the traditional quantitative finance theory developed in this research field. The paper is structured as follows: the first two sections illustrate the pricing methodologies for the most popular IIS contracts, the Zero-Coupon Inflation-Indexed Swap (ZCIIS) and the Year-on-Year Inflation-Indexed Swap (YYIIS); section 3 deals with the traditional standard method for the forecast of CPI values (trend + seasonality), while section 4 describes the LSTM architecture, and section 5 focuses on CPI projections, also called inflation bootstrap. Section 6 then describes a robustness check, implementing a traditional SARIMA model in order to improve the interpretation of the LSTM outputs; finally, section 7 concludes with a real market case, where the two methodologies are used for computing the fair value of a YYIIS and the model risk is quantified.
19

Tzotzis, Anastasios, César García-Hernández, José-Luis Huertas-Talón, and Panagiotis Kyratsis. "3D FE Modelling of Machining Forces during AISI 4140 Hard Turning." Strojniški vestnik – Journal of Mechanical Engineering 66, no. 7-8 (July 15, 2020): 467–78. http://dx.doi.org/10.5545/sv-jme.2020.6784.

Abstract:
Hard turning is one of the most widely used machining processes in industrial applications. This paper investigates critical aspects that influence the machining of AISI 4140 in order to develop a prediction model, based on finite element (FE) modelling, for the resultant machining force induced during AISI 4140 hard turning. A total of 27 turning simulation runs were carried out in order to investigate the relationship between three key parameters (cutting speed, feed rate, and depth of cut) and their effect on the machining force components. The acquired numerical results were compared with experimental ones for verification purposes. Additionally, a mathematical model was established according to statistical methodologies such as response surface methodology (RSM) and analysis of variance (ANOVA). Most of the simulations yielded results in high conformity with the experimental values of the main machining force and its components. Specifically, the resultant cutting force agreement exceeded 90 % in many tests. Moreover, verification of the adequacy of the statistical model led to an accuracy of 8.8 %.
20

Forte, Marcus Bruno Soares, Marcio Antonio Mazutti, Francisco Maugeri Filho, and Maria Isabel Rodrigues. "Comparative studies on parametric sensitivity analyses using conventional and factorial design methodologies: mathematical modelling of clavulanic acid adsorption on zeolites." Journal of Chemical Technology & Biotechnology 87, no. 12 (June 6, 2012): 1715–22. http://dx.doi.org/10.1002/jctb.3844.

21

Yasemi, Mohammadreza, and Mario Jolicoeur. "Modelling Cell Metabolism: A Review on Constraint-Based Steady-State and Kinetic Approaches." Processes 9, no. 2 (February 9, 2021): 322. http://dx.doi.org/10.3390/pr9020322.

Abstract:
Studying cell metabolism serves a plethora of objectives, such as enhancing bioprocess performance and advancing the understanding of cell biology, drug target discovery, and metabolic therapy. Remarkable successes in these fields have emerged from heuristic approaches, for instance with the introduction of effective strategies for genetic modification, drug development and the optimization of bioprocess management. However, heuristic approaches have shown significant shortcomings, such as in describing the regulation of metabolic pathways and extrapolating to new experimental conditions. In the specific case of bioprocess management, such shortcomings limit their capacity to increase product quality while maintaining desirable productivity and reproducibility levels. For instance, since heuristic approaches are not capable of predicting cellular functions under varying experimental conditions, they may lead to sub-optimal processes. Also, such approaches used for bioprocess control often fail to regulate a process under unexpected variations of external conditions. Therefore, methodologies inspired by the systematic mathematical formulation of cell metabolism have been used to address such drawbacks and achieve robust, reproducible results. Mathematical modelling approaches are effective both for characterizing cell physiology and for estimating metabolic pathway utilization, thus allowing the metabolic behaviour of a cell population to be characterized. In this article, we present a review of the methodologies used and of promising mathematical modelling approaches, focusing primarily on the investigation of metabolic events and regulation. Proceeding from a topological representation of metabolic networks, we first present the metabolic modelling approaches that investigate cell metabolism at steady state, complying with the constraints imposed by the mass conservation law and the thermodynamics of reaction reversibility. Constraint-based models (CBMs) are reviewed, highlighting the set of assumed optimality functions for reaction pathways. We explore models simulating cell growth dynamics by expanding flux balance models developed at steady state. Then, discussing a change of metabolic modelling paradigm, we describe dynamic kinetic models that are based on the mathematical representation of mechanistic descriptions of nonlinear enzyme activities. In such approaches, metabolic pathway regulation is considered explicitly as a function of the activity of other components of the metabolic network, possibly far from metabolic steady state. We have also assessed the significance of metabolic model parameterization in kinetic models, summarizing a standard parameter estimation procedure frequently employed in the kinetic metabolic modelling literature. Finally, some optimization practices used for parameter estimation are reviewed.
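As a minimal illustration of the dynamic kinetic models this abstract describes, the sketch below integrates a toy two-step pathway S → I → P with nonlinear Michaelis–Menten rate laws. The pathway, the parameter values (Vmax, Km) and the function names are invented for illustration and are not taken from the review; the only property relied on is that the mass-balance updates conserve total material, echoing the mass conservation constraint mentioned above.

```python
def mm_rate(vmax, km, s):
    """Michaelis-Menten rate law: v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

def simulate_pathway(s0, t_end, dt=1e-3):
    """Toy two-step pathway S -> I -> P integrated with forward Euler.
    All (Vmax, Km) values are illustrative, not from any real pathway."""
    s, i, p = s0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        v1 = mm_rate(1.0, 0.5, s)   # enzyme 1: consumes S, produces I
        v2 = mm_rate(0.8, 0.3, i)   # enzyme 2: consumes I, produces P
        s += -v1 * dt
        i += (v1 - v2) * dt         # intermediate: inflow minus outflow
        p += v2 * dt
        t += dt
    return s, i, p

s, i, p = simulate_pathway(s0=2.0, t_end=10.0)
print(round(s + i + p, 6))  # total mass is conserved: 2.0
```

Replacing the fixed (Vmax, Km) values with parameters fitted to measured concentrations is, in miniature, the parameter estimation step the review summarizes.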
22

Musso, Mariel, Eva Kyndt, Eduardo Cascallar, and Filip Dochy. "Predicting Mathematical Performance: The Effect of Cognitive Processes and Self-Regulation Factors." Education Research International 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/250719.

Abstract:
A substantial number of research studies have investigated the separate influence of working memory, attention, motivation, and learning strategies on mathematical performance and self-regulation in general. There is still little understanding of their impact on performance when taken together, of their interactions, and of how much each of them contributes to the prediction of mathematical performance. With the emergence of new methodologies and technologies, such as modelling with predictive systems, it is now possible to study these effects with approaches which use a wide range of data, including student characteristics, to estimate future performance without the need for traditional testing (Boekaerts and Cascallar, 2006). This research examines the different cognitive patterns and complex relations between cognitive variables, motivation, and background variables associated with different levels of mathematical performance using artificial neural networks (ANNs). A sample of 800 entering university students was used to develop three ANN models to identify the expected future level of performance in a mathematics test. These ANN models achieved a high degree of precision in the correct classification of future levels of performance, showing differences in the pattern of relative predictive weights amongst those variables. The impact on educational quality, improvement, and accountability is highlighted.
23

Liu, Zengrong, and Guanrong Chen. "On the Relationship between Parametric Variation and State Feedback in Chaos Control." International Journal of Bifurcation and Chaos 12, no. 06 (June 2002): 1411–15. http://dx.doi.org/10.1142/s0218127402005194.

Abstract:
In this Letter, we study the popular parametric variation and state-feedback methodologies in chaos control, and point out for the first time that they are actually equivalent, in the sense that there exist diffeomorphisms that can convert one to the other for most smooth chaotic systems. Detailed conversions are worked out for typical discrete chaotic maps (logistic, Hénon) and continuous flows (Rössler, Lorenz) for illustration. This unifies the two seemingly different approaches of the physics and engineering communities to chaos control. This new perspective reveals some new potential applications, such as chaos synchronization and normal form analysis, from a unified mathematical point of view.
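The equivalence discussed in this abstract can be glimpsed on the logistic map itself: near the fixed point, perturbing the parameter r by δr = K·e produces exactly the same update as adding the state-feedback term x(1−x)·K·e. The sketch below is an illustrative reconstruction, not the authors' derivation; the gain formula, the activation band and all names are assumptions chosen so the linearized closed loop is stable.

```python
def logistic(x, r):
    return r * x * (1 - x)

r = 3.9                      # chaotic regime of the logistic map
x_star = 1 - 1 / r           # unstable fixed point of the map
# gain chosen so the closed-loop linearization (2 - r) + K*x*(1-x*) vanishes
K = (r - 2) / (x_star * (1 - x_star))

def run(x0, n, mode):
    """Iterate the map, applying control only near the target (OGY-style)."""
    x = x0
    for _ in range(n):
        e = x - x_star
        if abs(e) < 0.05:
            if mode == "parametric":
                x = logistic(x, r + K * e)             # perturb the parameter
            else:                                      # "state" feedback
                x = logistic(x, r) + x * (1 - x) * K * e
        else:
            x = logistic(x, r)                         # free chaotic motion
        x = min(max(x, 0.0), 1.0)                      # safety clamp
    return x

print(abs(run(0.3, 2000, "parametric") - x_star))  # ~0: orbit stabilized
print(abs(run(0.3, 2000, "state") - x_star))       # ~0: same effect
```

Since (r + K·e)·x(1−x) = r·x(1−x) + K·e·x(1−x), the two branches compute the same quantity, which is the discrete-map shadow of the diffeomorphism equivalence the Letter establishes.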
APA, Harvard, Vancouver, ISO, and other styles
24

Dimitriadis, Christos N., Evangelos G. Tsimopoulos, and Michael C. Georgiadis. "A Review on the Complementarity Modelling in Competitive Electricity Markets." Energies 14, no. 21 (November 1, 2021): 7133. http://dx.doi.org/10.3390/en14217133.

Full text
Abstract:
In recent years, the ever-increasing research interest in various aspects of the electricity pool-based markets has generated a plethora of complementarity-based approaches to determine participating agents’ optimal offering/bidding strategies and model players’ interactions. In particular, the integration of multiple and diversified market agents, such as conventional generation companies, renewable energy sources, electricity storage facilities and agents with a mixed generation portfolio has instigated significant competition, as each player attempts to establish their market dominance and realize substantial financial benefits. The employment of complementarity modelling approaches can also prove beneficial for the optimal coordination of the electricity and natural gas market coupling. Linear and nonlinear programming as well as complementarity modelling, mainly in the form of mathematical programs with equilibrium constraints (MPECs), equilibrium programs with equilibrium constraints (EPECs) and conjectural variations models (CV) have been widely employed to provide effective market clearing mechanisms, enhance agents’ decision-making process and allow them to exert market power, under perfect and imperfect competition and various market settlements. This work first introduces the theoretical concepts that regulate the majority of contemporary competitive electricity markets. It then presents a comprehensive review of recent advances related to complementarity-based modelling methodologies and their implementation in current competitive electricity pool-based markets applications.
APA, Harvard, Vancouver, ISO, and other styles
25

Hernández, Alfredo I., Virginie Le Rolle, Antoine Defontaine, and Guy Carrault. "A multiformalism and multiresolution modelling environment: application to the cardiovascular system and its regulation." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 367, no. 1908 (December 13, 2009): 4923–40. http://dx.doi.org/10.1098/rsta.2009.0163.

Full text
Abstract:
The role of modelling and simulation in the systemic analysis of living systems is now clearly established. Emerging disciplines, such as systems biology, and worldwide research actions, such as the Physiome Project or the Virtual Physiological Human, are based on an intensive use of modelling and simulation methodologies and tools. One of the key aspects in this context is to perform an efficient integration of various models representing different biological or physiological functions, at different resolutions, spanning through different scales. This paper presents a multiformalism modelling and simulation environment (M2SL) that has been conceived to ease model integration. A given model is represented as a set of coupled and atomic model components that may be based on different mathematical formalisms with heterogeneous structural and dynamical properties. A co-simulation approach is used to solve these hybrid systems. The pioneering model of the overall regulation of the cardiovascular system proposed by Guyton and co-workers in 1972 has been implemented under M2SL and a pulsatile ventricular model based on a time-varying elastance has been integrated in a multi-resolution approach. Simulations reproducing physiological conditions and using different coupling methods show the benefits of the proposed environment.
APA, Harvard, Vancouver, ISO, and other styles
26

Duran-Serrano, Javier. "Modelling approach for multi-criteria decision-making selection process for artificial lift systems in crude oil production." CT&F - Ciencia, Tecnología y Futuro 8, no. 1 (June 15, 2018): 53–60. http://dx.doi.org/10.29047/01225383.91.

Full text
Abstract:
Artificial Lift system selection is a key factor in enhancing energy efficiency, increasing profit and extending asset life in any oil-producing well. In theory, this selection has to consider an extensive number of variables, making it hard to select the optimal Artificial Lift System. In practice, however, only a limited number of variables and empirical knowledge are used in this selection process, which increases the probability of system failure due to pump–well incompatibility. Multi-criteria decision-making methods provide mathematical modelling for selection processes with finite alternatives and a high number of criteria; these methodologies make it feasible to reach a final decision considering all the variables involved. In this paper, we present a software application based on a sequential mathematical analysis of hierarchies for variables, a numerical validation of input data and, finally, an implementation of Multi-Criteria Decision Making (MCDM) methods (SAW, ELECTRE and VIKOR) to select the most adequate artificial lift system for crude oil production in Colombia. Its novel algorithm is designed to rank seven Artificial Lift Systems, considering diverse variables in order to make the decision. The results are validated with field data in a case study of a Colombian oilfield, with the aim of reducing the Artificial Lift failure rate.
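As a concrete illustration of the simplest of the three MCDM methods named in the abstract, the following sketch implements Simple Additive Weighting (SAW); the alternatives, criteria, weights and scores are hypothetical, not taken from the paper.

```python
# Simple Additive Weighting (SAW) sketch with made-up artificial-lift data.

def saw_scores(matrix, weights, benefit):
    """Score alternatives by a weighted sum of linearly normalised criteria.

    matrix  -- rows = alternatives, columns = criteria (raw scores)
    weights -- criterion weights summing to 1
    benefit -- True if higher is better for that criterion, False for cost
    """
    n_crit = len(weights)
    cols = list(zip(*matrix))
    norm = []
    for j in range(n_crit):
        col = cols[j]
        if benefit[j]:                       # benefit criterion: x / max
            m = max(col)
            norm.append([x / m for x in col])
        else:                                # cost criterion: min / x
            m = min(col)
            norm.append([m / x for x in col])
    return [sum(weights[j] * norm[j][i] for j in range(n_crit))
            for i in range(len(matrix))]

# Hypothetical criteria: (efficiency, max depth in m, relative operating cost)
matrix = [[0.60, 2500, 40],   # e.g. electric submersible pump
          [0.50, 3000, 25],   # e.g. rod pump
          [0.45, 2000, 20]]   # e.g. gas lift
weights = [0.5, 0.3, 0.2]
benefit = [True, True, False]
scores = saw_scores(matrix, weights, benefit)
best = max(range(len(scores)), key=scores.__getitem__)
```

ELECTRE and VIKOR refine this idea with outranking relations and compromise ranking, respectively, but share the same decision-matrix input.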
APA, Harvard, Vancouver, ISO, and other styles
27

Cacuci, Dan G. "TOWARDS OVERCOMING THE CURSE OF DIMENSIONALITY IN PREDICTIVE MODELLING AND UNCERTAINTY QUANTIFICATION." EPJ Web of Conferences 247 (2021): 00002. http://dx.doi.org/10.1051/epjconf/202124700002.

Full text
Abstract:
This invited presentation summarizes new methodologies developed by the author for performing high-order sensitivity analysis, uncertainty quantification and predictive modeling. The presentation commences by summarizing the newly developed 3rd-Order Adjoint Sensitivity Analysis Methodology (3rd-ASAM) for linear systems, which overcomes the “curse of dimensionality” for sensitivity analysis and uncertainty quantification of a large variety of model responses of interest in reactor physics systems. The use of the exact expressions of the 2nd- and 3rd-order sensitivities computed using the 3rd-ASAM is subsequently illustrated by presenting 3rd-order formulas for the first three cumulants of the response distribution, for quantifying response uncertainties (covariance, skewness) stemming from model parameter uncertainties. The 1st-, 2nd- and 3rd-order sensitivities, together with the formulas for the first three cumulants of the response distribution, are subsequently used in the newly developed 2nd/3rd-BERRU-PM (“Second/Third-Order Best-Estimated Results with Reduced Uncertainties Predictive Modeling”), which aims at overcoming the curse of dimensionality in predictive modeling. The 2nd/3rd-BERRU-PM uses the maximum entropy principle to eliminate the need for introducing a subjective user-defined “cost functional quantifying the discrepancies between measurements and computations.” By utilizing the 1st-, 2nd- and 3rd-order response sensitivities to combine experimental and computational information in the joint phase-space of responses and model parameters, the 2nd/3rd-BERRU-PM generalizes the current data adjustment/assimilation methodologies.
Even though all of the 2nd- and 3rd-order sensitivities are comprised in the mathematical framework of the 2nd/3rd-BERRU-PM formalism, the computations underlying the 2nd/3rd-BERRU-PM require the inversion of a single matrix of dimensions equal to the number of considered responses, thus overcoming the curse of dimensionality, which would affect the inversion of Hessian and higher-order matrices in the parameter space.
APA, Harvard, Vancouver, ISO, and other styles
28

Cacuci, Dan G. "TOWARDS OVERCOMING THE CURSE OF DIMENSIONALITY IN PREDICTIVE MODELLING AND UNCERTAINTY QUANTIFICATION." EPJ Web of Conferences 247 (2021): 20005. http://dx.doi.org/10.1051/epjconf/202124720005.

Full text
Abstract:
This invited presentation summarizes new methodologies developed by the author for performing high-order sensitivity analysis, uncertainty quantification and predictive modeling. The presentation commences by summarizing the newly developed 3rd-Order Adjoint Sensitivity Analysis Methodology (3rd-ASAM) for linear systems, which overcomes the “curse of dimensionality” for sensitivity analysis and uncertainty quantification of a large variety of model responses of interest in reactor physics systems. The use of the exact expressions of the 2nd- and 3rd-order sensitivities computed using the 3rd-ASAM is subsequently illustrated by presenting 3rd-order formulas for the first three cumulants of the response distribution, for quantifying response uncertainties (covariance, skewness) stemming from model parameter uncertainties. The 1st-, 2nd- and 3rd-order sensitivities, together with the formulas for the first three cumulants of the response distribution, are subsequently used in the newly developed 2nd/3rd-BERRU-PM (“Second/Third-Order Best-Estimated Results with Reduced Uncertainties Predictive Modeling”), which aims at overcoming the curse of dimensionality in predictive modeling. The 2nd/3rd-BERRU-PM uses the maximum entropy principle to eliminate the need for introducing a subjective user-defined “cost functional quantifying the discrepancies between measurements and computations.” By utilizing the 1st-, 2nd- and 3rd-order response sensitivities to combine experimental and computational information in the joint phase-space of responses and model parameters, the 2nd/3rd-BERRU-PM generalizes the current data adjustment/assimilation methodologies.
Even though all of the 2nd- and 3rd-order sensitivities are comprised in the mathematical framework of the 2nd/3rd-BERRU-PM formalism, the computations underlying the 2nd/3rd-BERRU-PM require the inversion of a single matrix of dimensions equal to the number of considered responses, thus overcoming the curse of dimensionality, which would affect the inversion of Hessian and higher-order matrices in the parameter space.
APA, Harvard, Vancouver, ISO, and other styles
29

Alsarayreh, Alanood A., Mudhar A. Al-Obaidi, Raj Patel, and Iqbal M. Mujtaba. "Scope and Limitations of Modelling, Simulation, and Optimisation of a Spiral Wound Reverse Osmosis Process-Based Water Desalination." Processes 8, no. 5 (May 12, 2020): 573. http://dx.doi.org/10.3390/pr8050573.

Full text
Abstract:
The reverse osmosis (RO) process is one of the best desalination methods, using membranes to reject several impurities from seawater and brackish water. To systematically perceive the transport phenomena of solvent and solutes via the membrane texture, several mathematical models have been developed. To date, a large number of simulation and optimisation studies have been achieved to gauge the influence of control variables on the performance indexes, to adjust the key variables at optimum values, and to realise the optimum production indexes. This paper delivers an intensive review of the successful models of the RO process and both simulation and optimisation studies carried out on the basis of the models developed. In general, this paper investigates the scope and limitations of the RO process, as well as proving the maturity of the associated perspective methodologies.
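Many of the RO models the review covers build on the classical solution-diffusion description, with water flux Jw = A(ΔP − Δπ) and solute flux Js = B·Δc. A minimal sketch of that lumped model follows; the permeability coefficients and operating conditions are illustrative values, not taken from any of the reviewed models.

```python
# Solution-diffusion sketch of RO membrane transport (illustrative values).

def ro_fluxes(A, B, dP, dPi, c_feed):
    """Water and solute fluxes across an RO membrane.

    A, B    -- water / solute permeability coefficients
    dP      -- applied transmembrane pressure (bar)
    dPi     -- osmotic pressure difference (bar)
    c_feed  -- feed-side solute concentration (kg/m^3)
    """
    Jw = A * (dP - dPi)          # water flux driven by net pressure
    Js = B * c_feed              # solute flux (dilute-permeate approximation)
    c_perm = Js / Jw             # permeate concentration
    rejection = 1.0 - c_perm / c_feed
    return Jw, Js, rejection

# Illustrative seawater-like operating point.
Jw, Js, R = ro_fluxes(A=3e-3, B=2e-4, dP=55.0, dPi=28.0, c_feed=35.0)
```

Spiral-wound module models extend this element-wise along the membrane leaf, adding concentration polarisation and pressure-drop terms.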
APA, Harvard, Vancouver, ISO, and other styles
30

Mavi, Anele, Tiri Chinyoka, and Andrew Gill. "Modelling and Analysis of Viscoelastic and Nanofluid Effects on the Heat Transfer Characteristics in a Double-Pipe Counter-Flow Heat Exchanger." Applied Sciences 12, no. 11 (May 28, 2022): 5475. http://dx.doi.org/10.3390/app12115475.

Full text
Abstract:
This study computationally investigates the heat transfer characteristics in a double-pipe counter-flow heat-exchanger. A heated viscoelastic fluid occupies the inner core region, and the outer annulus is filled with a colder Newtonian-Fluid-Based Nanofluid (NFBN). A mathematical model is developed to study the conjugate heat transfer characteristics and heat exchange properties from the hot viscoelastic fluid to the colder NFBN. The mathematical modelling and formulation of the given problem comprises of a system of coupled nonlinear partial differential Equations (PDEs) governing the flow, heat transfer, and stress characteristics. The viscoelastic stress behaviour of the core fluid is modelled via the Giesekus constitutive equations. The mathematical complexity arising from the coupled system of transient and nonlinear PDEs makes them analytically intractable, and hence, a recourse to numerical and computational methodologies is unavoidable. A numerical methodology based on the finite volume methods (FVM) is employed. The FVM algorithms are computationally implemented on the OpenFOAM software platform. The dependence of the field variables, namely the velocity, temperature, pressure, and polymeric stresses on the embedded flow parameters, are explored in detail. In particular, the results illustrate that an increase in the nanoparticle volume-fraction, in the NFBN, leads to enhanced heat-exchange characteristics from the hot core fluid to the colder shell NFBN. Specifically, the results illustrate that the use of NFBN as the coolant fluid leads to enhanced cooling of the hot core-fluid as compared to using an ordinary (nanoparticle free) Newtonian coolant.
APA, Harvard, Vancouver, ISO, and other styles
31

Perfetti, L., C. Polari, and F. Fassi. "FISHEYE PHOTOGRAMMETRY: TESTS AND METHODOLOGIES FOR THE SURVEY OF NARROW SPACES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W3 (February 23, 2017): 573–80. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w3-573-2017.

Full text
Abstract:
The research illustrated in this article aimed at identifying a good standard methodology to survey very narrow spaces during 3D investigation of Cultural Heritage. It is an important topic in today’s era of BIM modelling applied to Cultural Heritage. Spaces like staircases, corridors and passages are very common in the architectural or archaeological fields, and obtaining a 3D-oriented survey of those areas can be a very complex task when completeness of the model and high precision are requested. Photogrammetry appears to be the most promising solution in terms of versatility and manoeuvrability, also considering the quality of the required data. Fisheye lenses were studied and tested in depth because of their significant advantage in field of view compared with rectilinear lenses. This advantage alone can be crucial to reduce the total number of photos and, as a consequence, to obtain manageable data, to simplify the survey phase and to significantly reduce the elaboration time. In order to overcome the main issue that arises when using fisheye lenses, namely the lack of rules that can be employed to design the survey, a general mathematical formulation to precisely estimate the GSD (Ground Sampling Distance) for every optical projection is presented here. A complete survey of a real complex case study, the photogrammetric survey of the Minguzzi Staircase, was performed in order to test and stress the proposed methodology and to handle a fisheye-based survey from beginning to end. It is a complex service spiral staircase located in the Duomo di Milano, with a total height of 25 metres, characterized by a narrow walkable space about 70 centimetres wide.
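The paper's exact GSD formulation is not reproduced in the abstract, but the idea of a projection-dependent GSD can be sketched as follows, assuming the ideal rectilinear (r = f·tanθ) and equidistant fisheye (r = f·θ) projection models; all numerical values are illustrative.

```python
import math

# Transverse GSD at slant distance D for a point at off-axis angle theta:
# one pixel of pitch p spans an angle d_theta = p / (dr/d_theta), and the
# corresponding ground span is approximately D * d_theta.

def gsd(D, p, f, theta, projection):
    """D: slant distance (m), p: pixel pitch (m), f: focal length (m)."""
    if projection == "rectilinear":      # r = f * tan(theta)
        dr_dtheta = f / math.cos(theta) ** 2
    elif projection == "equidistant":    # r = f * theta (common fisheye model)
        dr_dtheta = f
    else:
        raise ValueError(projection)
    return D * p / dr_dtheta

# Illustrative numbers: 1 m range, 4 um pixels, 8 mm lens, 60 deg off axis.
g_rect = gsd(1.0, 4e-6, 8e-3, math.radians(60), "rectilinear")
g_fish = gsd(1.0, 4e-6, 8e-3, math.radians(60), "equidistant")
```

Off axis, the rectilinear projection spreads the scene over more pixels (finer GSD) while the equidistant fisheye keeps a constant angular resolution, which is why a per-projection formulation is needed to plan narrow-space surveys.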
APA, Harvard, Vancouver, ISO, and other styles
32

Olhager, Jan, Sebastian Pashaei, and Henrik Sternberg. "Design of global production and distribution networks." International Journal of Physical Distribution & Logistics Management 45, no. 1/2 (March 2, 2015): 138–58. http://dx.doi.org/10.1108/ijpdlm-05-2013-0131.

Full text
Abstract:
Purpose – The purpose of this paper is to systematically and critically review the extant literature on the design of global production and distribution networks to identify gaps in the literature and identify future research opportunities. The design aspects deal with strategic and structural decisions such as: opening or closing of manufacturing plants or distribution centres, selection of locations for manufacturing or warehousing, and making substantial capacity changes in manufacturing or distribution. Design/methodology/approach – The authors examine the peer-reviewed literature on global production and distribution networks written in English. The search strategy is based on selected keywords and databases. The authors identify 109 articles from 1974 to 2012. Findings – The authors categorize the literature according to research methodology: case studies, conceptual modelling, surveys, and mathematical modelling. The amount of literature up to 2000 is rather sparse, while there is a positive trend from 2000 onwards. The content analysis shows that different research methodologies focus on different but complementary aspects. The authors propose a research agenda for further research on design of global production and distribution networks. Research limitations/implications – The authors identify research opportunities related to complementary actor perspectives, extended supply chains that explicitly include transportation and suppliers, contingency factors, and new perspectives such as facility roles within production and distribution networks. Originality/value – This paper is, to the authors’ knowledge, the first broad review that investigates the design aspects of the interrelationships between production and distribution facilities as well as transportation in global production and distribution networks across multiple research methodologies.
APA, Harvard, Vancouver, ISO, and other styles
33

Fallahpour, A. R., and A. R. Moghassem. "Yarn Strength Modelling Using Adaptive Neuro-Fuzzy Inference System (ANFIS) and Gene Expression Programming (GEP)." Journal of Engineered Fibers and Fabrics 8, no. 4 (December 2013): 155892501300800. http://dx.doi.org/10.1177/155892501300800409.

Full text
Abstract:
This study compares the capabilities of two different modelling methodologies for predicting the breaking strength of rotor spun yarns. Forty-eight yarn samples were produced considering variations in three drawing frame parameters, namely break draft, delivery speed, and distance between back and middle rolls. Several topologies with different architectures were trained to get the best adaptive neuro-fuzzy inference system (ANFIS) and gene expression programming (GEP) models. Prediction performance of the GEP model was compared with that of ANFIS using root mean square error (RMSE) and correlation coefficient (R2-value) parameters on the test data. Results show that the GEP model has a significant priority over the ANFIS model in terms of prediction accuracy. The correlation coefficient (R2-value) and root mean square error for the GEP model were 0.87 and 0.35, respectively, while these parameters were 0.48 and 0.53 for the ANFIS model. Also, a mathematical formula was developed with a high degree of accuracy using the GEP algorithm to predict the breaking strength of the yarns. This advantage is not accessible in the ANFIS model.
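The two comparison metrics the study relies on, RMSE and the squared correlation coefficient, can be computed as in this small sketch; the observed/predicted strength values are hypothetical, not the paper's data.

```python
import math

# RMSE and squared Pearson correlation on hypothetical observed/predicted
# yarn-strength values.

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r_squared(obs, pred):
    """Squared Pearson correlation between observed and predicted values."""
    n = len(obs)
    mo = sum(obs) / n
    mp = sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    vo = sum((o - mo) ** 2 for o in obs)
    vp = sum((p - mp) ** 2 for p in pred)
    return cov * cov / (vo * vp)

obs = [10.2, 11.5, 9.8, 12.1, 10.9]    # hypothetical strength values
pred = [10.0, 11.8, 9.5, 12.4, 10.6]
```

A higher R2-value with a lower RMSE on held-out test data, as reported for the GEP model, indicates both better correlation and smaller absolute errors.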
APA, Harvard, Vancouver, ISO, and other styles
34

Lourenço, Rúben, António Andrade-Campos, and Pétia Georgieva. "The Use of Machine-Learning Techniques in Material Constitutive Modelling for Metal Forming Processes." Metals 12, no. 3 (February 28, 2022): 427. http://dx.doi.org/10.3390/met12030427.

Full text
Abstract:
Accurate numerical simulations require constitutive models capable of providing precise material data. Several calibration methodologies have been developed to improve the accuracy of constitutive models. Nevertheless, a model’s performance is always constrained by its mathematical formulation. Machine learning (ML) techniques, such as artificial neural networks (ANNs), have the potential to overcome these limitations. Nevertheless, the use of ML for material constitutive modelling is very recent and not fully explored. Difficulties related to data requirements and training are still open problems. This work explores and discusses the use of ML techniques regarding the accuracy of material constitutive models in metal plasticity, particularly contributing (i) a parameter identification inverse methodology, (ii) a constitutive model corrector, (iii) a data-driven constitutive model using empirical known concepts and (iv) a general implicit constitutive model using a data-driven learning approach. These approaches are discussed, and examples are given in the framework of non-linear elastoplasticity. To conveniently train these ML approaches, a large amount of data concerning material behaviour must be used. Therefore, non-homogeneous strain field and complex strain path tests measured with digital image correlation (DIC) techniques must be used for that purpose.
APA, Harvard, Vancouver, ISO, and other styles
35

Prendergast, L. J., D. Hester, and K. Gavin. "Development of a Vehicle-Bridge-Soil Dynamic Interaction Model for Scour Damage Modelling." Shock and Vibration 2016 (2016): 1–15. http://dx.doi.org/10.1155/2016/7871089.

Full text
Abstract:
Damage detection in bridges using vibration-based methods is an area of growing research interest. Improved assessment methodologies combined with state-of-the-art sensor technology are rapidly making these approaches applicable to real-world structures. Applying these techniques to the detection and monitoring of scour around bridge foundations has remained challenging; however, this area has gained traction in recent years. Several authors have investigated a range of methods, but significant work is still required to achieve a rounded and widely applicable methodology to detect and monitor scour. This paper presents a novel Vehicle-Bridge-Soil Dynamic Interaction (VBSDI) model which can be used to simulate the effect of scour on an integral bridge. The model outputs dynamic signals which can be analysed to determine modal parameters, and the variation of these parameters with respect to scour can be examined. The key novelty of this model is that it is the first numerical model for simulating scour that combines a realistic vehicle loading model with a robust foundation soil response model. This paper provides a description of the model development and explains the mathematical theory underlying the model. Finally, a case study application of the model using typical bridge, soil, and vehicle properties is provided.
APA, Harvard, Vancouver, ISO, and other styles
36

Villacampa, Yolanda, Francisco José Navarro-González, Gabriela Hernández, Juan Laddaga, and Adriana Confalone. "Modelling Faba Bean (Vicia faba L.) Biomass Production for Sustainability of Agricultural Systems of Pampas." Sustainability 12, no. 23 (November 24, 2020): 9829. http://dx.doi.org/10.3390/su12239829.

Full text
Abstract:
The Pampas region is characterized by a high complexity in its productive system planning and faces the challenge of satisfying future food demands, as well as reducing the environmental impact of the activity. Climate change affects crops and farmers should use species capable of adapting to the changed climate. Among these species, faba bean (Vicia faba L.) cv. ‘Alameda’ has shown good adaptation to weather variability and, as a winter legume, it can help maintain the sustainability of agricultural systems in the area. The main purpose of this research was to select the models which describe the production characteristics of the ‘Alameda’ bean by using the least number of variables. Experimental and agrometeorological data from the cultivation of the ‘Alameda’ in Azul, Buenos Aires province, Argentina were used to generate mathematical models. Several modelling methodologies have been applied to study the production characteristics of the faba bean. The prediction of the models generated was analyzed by randomly disturbing the experimental data and analyzing the magnitude of the errors produced. The models obtained will be useful for predicting the biomass production of the faba bean cv. ‘Alameda’ grown in the agroclimatic conditions of Azul, Buenos Aires province, Argentina.
APA, Harvard, Vancouver, ISO, and other styles
37

Vives i Batlle, J., G. Biermans, D. Copplestone, A. Kryshev, A. Melintescu, C. Mothersill, T. Sazykina, C. Seymour, K. Smith, and M. D. Wood. "Towards an ecological modelling approach for assessing ionizing radiation impact on wildlife populations." Journal of Radiological Protection 42, no. 2 (April 25, 2022): 020507. http://dx.doi.org/10.1088/1361-6498/ac5dd0.

Full text
Abstract:
The emphasis of the international system of radiological protection of the environment is to protect populations of flora and fauna. Throughout the MODARIA programmes, the United Nations’ International Atomic Energy Agency (IAEA) has facilitated knowledge sharing, data gathering and model development on the effect of radiation on wildlife. We present a summary of the achievements of MODARIA I and II on wildlife dose effect modelling, extending to a new sensitivity analysis and model development to incorporate other stressors. We reviewed evidence on historical doses and transgenerational effects on wildlife from radioactively contaminated areas. We also evaluated chemical population modelling approaches, discussing similarities and differences between chemical and radiological impact assessment in wildlife. We developed population modelling methodologies by sourcing life history and radiosensitivity data and evaluating the available models, leading to the formulation of an ecosystem-based mathematical approach. This resulted in an ecologically relevant conceptual population model, which we used to produce advice on the evaluation of risk criteria used in the radiological protection of the environment and a proposed modelling extension for chemicals. This work seeks to inform stakeholder dialogue on factors influencing wildlife population responses to radiation, including discussions on the ecological relevance of current environmental protection criteria. The area of assessment of radiation effects in wildlife is still developing, with underlying data and models continuing to be improved. IAEA’s ongoing support to facilitate the sharing of new knowledge, models and approaches to Member States is highlighted, and we give suggestions for future developments in this regard.
APA, Harvard, Vancouver, ISO, and other styles
39

Arena, Simone, Irene Roda, and Ferdinando Chiacchio. "Integrating Modelling of Maintenance Policies within a Stochastic Hybrid Automaton Framework of Dynamic Reliability." Applied Sciences 11, no. 5 (March 5, 2021): 2300. http://dx.doi.org/10.3390/app11052300.

Full text
Abstract:
The dependability assessment is a crucial activity for determining the availability, safety and maintainability of a system and establishing the best mitigation measures to prevent serious flaws and process interruptions. One of the most promising methodologies for the analysis of complex systems is Dynamic Reliability (also known as DPRA) with models that define explicitly the interactions between components and variables. Among the mathematical techniques of DPRA, Stochastic Hybrid Automaton (SHA) has been used to model systems characterized by continuous and discrete variables. Recently, a DPRA-oriented SHA modelling formalism, known as Stochastic Hybrid Fault Tree Automaton (SHyFTA), has been formalized together with a software library (SHyFTOO) that simplifies the resolution of complex models. At the state of the art, SHyFTOO allows analyzing the dependability of multistate repairable systems characterized by a reactive maintenance policy. Exploiting the flexibility of SHyFTA, this paper aims to extend the tools’ functionalities to other well-known maintenance policies. To achieve this goal, the main features of the preventive, risk-based and condition-based maintenance policies will be analyzed and used to design a software model to integrate into the SHyFTOO. Finally, a case study to test and compare the results of the different maintenance policies will be illustrated.
APA, Harvard, Vancouver, ISO, and other styles
40

Ellis, Jennifer L. "117 Nutrition Modelling: What Can the Pet Field Learn (or Steal) from Recent Directions in Other Species?" Journal of Animal Science 99, Supplement_3 (October 8, 2021): 62–63. http://dx.doi.org/10.1093/jas/skab235.112.

Full text
Abstract:
Abstract Nutrition modelling has been the cornerstone of feed formulation and diet optimization in animal production systems for decades. Since the 1970s and 1980s, mechanistic models of nutrient digestion, absorption, metabolism, growth and milk/egg production have been developed and implemented to (1) amass our cumulative biological knowledge and develop theories of regulation, (2) identify knowledge gaps, and (3) propose means to manipulate nutrient dynamics in the animal. At the nutrient and metabolite level, many commonalities exist and parallels can be found between species. In fact, several second-generation models originate from other species or research fields, and many existing models may be advanced by examination and consideration of models developed in other species. Many such mathematical models are implemented in practice as ‘decision support systems’ or ‘opportunity analysis tools’, in order to examine a variety of (feeding or management) scenarios for their potential outcomes, with the goal of providing targeted nutrition, improving performance, reducing cost and minimizing environmental impact. More recently, partnering artificial intelligence/machine learning modelling methodologies with newly available big data streams has ushered in a new era of possibilities for data extraction and modelling in animal systems. The niche for this type of modelling in animal production appears to be (1) pattern recognition (e.g. disease detection, activity) and (2) strong predictive/forecasting abilities (e.g. bodyweight, milk, egg production). There also appears to be strong potential for these two seemingly divergent modelling approaches to be integrated – for example, in precision feeding systems, or in utilizing the abundance of sensor data to better drive or develop causal-pathway based mechanistic models. This talk will broadly review trends and advances in agricultural animal species modelling, and suggest what may be borrowed, stolen or serve as inspiration to advance nutrition models in companion species.
APA, Harvard, Vancouver, ISO, and other styles
41

Borg, A., K. Pommerening, and M. Sariyar. "Evaluation of Record Linkage Methods for Iterative Insertions." Methods of Information in Medicine 48, no. 05 (2009): 429–37. http://dx.doi.org/10.3414/me9238.

Full text
Abstract:
Summary Objectives: There have been many developments and applications of mathematical methods in the context of record linkage as one area of interdisciplinary research efforts. However, comparative evaluations of record linkage methods are still underrepresented. In this paper improvements of the Fellegi-Sunter model are compared with other elaborated classification methods in order to direct further research endeavors to the most promising methodologies. Methods: The task of linking records can be viewed as a special form of object identification. We consider several non-stochastic methods and procedures for the record linkage task in addition to the Fellegi-Sunter model and perform an empirical evaluation on artificial and real data in the context of iterative insertions. This evaluation provides a deeper insight into empirical similarities and differences between different modelling frames of the record linkage problem. In addition, the effects of using string comparators on the performance of different matching algorithms are evaluated. Results: Our central results show that stochastic record linkage based on the principle of the EM algorithm exhibits the best classification results when calibrating data are structurally different from validation data. Bagging and boosting, together with support vector machines, are the best classification methods when calibrating and validation data have no major structural differences. Conclusions: The most promising methodologies for record linkage in environments similar to the one considered in this paper seem to be stochastic ones.
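The Fellegi-Sunter weighting at the core of the stochastic approach discussed above can be illustrated with a minimal sketch. The m- and u-probabilities below are invented for illustration; the paper's EM-estimated parameters are not reproduced here.

```python
import math

def fs_weight(agreements, m_probs, u_probs):
    """Fellegi-Sunter composite match weight: a sum of log2 likelihood
    ratios over comparison fields. Agreement on a field contributes
    log2(m/u); disagreement contributes log2((1-m)/(1-u))."""
    w = 0.0
    for agree, m, u in zip(agreements, m_probs, u_probs):
        if agree:
            w += math.log2(m / u)
        else:
            w += math.log2((1 - m) / (1 - u))
    return w

# Illustrative probabilities (not from the paper): m = P(agree | match),
# u = P(agree | non-match) for fields (surname, birth year, zip code).
m = [0.95, 0.90, 0.85]
u = [0.01, 0.05, 0.10]

# A pair agreeing on all three fields scores far above a pair
# agreeing only on the zip code.
strong = fs_weight([True, True, True], m, u)
weak = fs_weight([False, False, True], m, u)
```

In a full linker, record pairs with weights above an upper threshold are classified as matches and pairs below a lower threshold as non-matches, with the band in between left for clerical review.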
APA, Harvard, Vancouver, ISO, and other styles
42

Aparicio-Ruiz, R., N. Tena, I. Romero, R. Aparicio, D. L. García-González, and M. T. Morales. "Predicting extra virgin olive oil freshness during storage by fluorescence spectroscopy." Grasas y Aceites 68, no. 4 (January 8, 2018): 219. http://dx.doi.org/10.3989/gya.0332171.

Full text
Abstract:
Virgin olive oil quality relates to flavor and unique health benefits. Some of these properties are at their most desirable level when the oil is just extracted, since it is not a product that improves with age. On the contrary, the concentrations of many compounds change during its shelf-life. These changes reveal the aging of the oil but do not necessarily mean decay in sensory properties, so in some cases an aged oil from healthy olives may be better qualified than a fresh one from olives affected by fermentation. The aim of this work is to analyze different methodologies proposed for assessing the quality of virgin olive oil with implications for the freshness and aging of the oil, and to highlight the possibilities of rapid spectrofluorimetric techniques for assessing oil freshness by checking the evolution of pigments during storage. The observed change in the selected spectral features over time was mathematically modelled and compared with the accepted model for predicting the amount of pyropheophytin a, which is based on isokinetic studies. The best regression was obtained at the 655 nm wavelength (adjusted-R2 = 0.91), which matches the distinctive band of pigments. The two mathematical models described in this study highlight the usefulness of pigments in the prediction of the shelf-life of extra virgin olive oil.
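The kind of regression of a spectral feature against storage time described above can be sketched with a closed-form least-squares fit. The storage times and intensities below are invented for illustration and are not the paper's measurements.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x (closed form)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Illustrative storage time (months) vs. fluorescence intensity at
# 655 nm (arbitrary units) -- invented numbers, not the paper's data.
months = [0, 3, 6, 9, 12]
signal = [100.0, 91.0, 82.0, 73.0, 64.0]
a, b = linear_fit(months, signal)
# b < 0: the pigment band decays as the oil ages.
```

Inverting such a fit (solving for the time at which the signal crosses a quality threshold) is one simple way a shelf-life prediction can be read off a spectral trend.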
APA, Harvard, Vancouver, ISO, and other styles
43

Vujičić, Miroslav D., James Kennell, Alastair Morrison, Viachaslau Filimonau, Ivana Štajner Papuga, Uglješa Stankov, and Djordjije A. Vasiljević. "Fuzzy Modelling of Tourist Motivation: An Age-Related Model for Sustainable, Multi-Attraction, Urban Destinations." Sustainability 12, no. 20 (October 20, 2020): 8698. http://dx.doi.org/10.3390/su12208698.

Full text
Abstract:
Tourist motivation, as a core of travel behavior, significantly influences consumer intentions and has attracted academic attention for decades. A plethora of studies analyse sets of internal and external motivators, while methodologies that exclusively focus on a single factor, such as age, that can sometimes have a determining influence in multi-attraction destinations, are less prevalent. This study introduces a fuzzy logic approach to develop a new model for analysing the internal motivations of different-aged consumers in multi-attraction urban destinations. Fuzzy models, as a mathematical means of representing vagueness and imprecise information, have the capability of recognizing, representing, manipulating, interpreting, and utilizing data and information, which typically for urban tourist motivations, are vague and lack certainty. This research tests the model in a real-life setting, using the example of Novi Sad, a mid-sized European city, which is typical of many similar cities who are attempting to develop sustainable tourism by attracting older tourists. The new model shows how tourist motivations for multi-attraction destinations are affected by age, through a specially developed m-file for MATLAB, so that it can be applied and tested in other tourism contexts. Theoretical and practical implications for sustainable destination management and marketing are described.
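The fuzzy representation of vague, age-dependent data described above can be sketched with triangular membership functions. The age partition below is purely illustrative and does not reproduce the authors' MATLAB model or its parameters.

```python
def tri_mf(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Illustrative fuzzy partition of tourist age into overlapping sets
# (breakpoints are assumptions, not the paper's values).
def age_memberships(age):
    return {
        "young": tri_mf(age, 15, 25, 40),
        "middle": tri_mf(age, 30, 45, 60),
        "senior": tri_mf(age, 50, 65, 90),
    }

mu = age_memberships(55)
# A 55-year-old belongs partly to 'middle' and partly to 'senior',
# which is exactly the graded, non-crisp classification fuzzy models
# use before applying inference rules.
```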
APA, Harvard, Vancouver, ISO, and other styles
44

Karapetkov, Stanimir, Lubomir Dimitrov, Hristo Uzunov, and Silvia Dechkova. "Identifying Vehicle and Collision Impact by Applying the Principle of Conservation of Mechanical Energy." Transport and Telecommunication Journal 20, no. 3 (June 1, 2019): 191–204. http://dx.doi.org/10.2478/ttj-2019-0016.

Full text
Abstract:
Abstract Various methodologies and tools applied to the identification of vehicle and collision impact seek to present ever more accurate solutions to reproduce, restore, recreate and investigate the casualty. Modern computer technology and software provide the tools to solve specific problems by developing mathematical modelling of complex mechanical systems involving vehicles and other objects in a road accident. Scientists generally utilize the Standard Test Method for Impact Testing, calculating the energy of deformation of both vehicles; however, one of its limitations is the evaluation of the kinetic energy of the vehicles post-collision, taking into consideration vehicle rotation and linear displacement. To improve the analysis, dynamic traffic simulation is used, taking into account the variations in the coefficient of friction, suspension elasticity and damping. The proposed method is based on a system of two equations derived from two principles: the Principle of Conservation of Mechanical Energy and the Principle of Conservation of Momentum in the impact phase. The new approach is based on mathematical modelling and computer simulation of vehicle motion after the impact, from which the linear and angular velocities are analysed. This is achieved by the numerical solution of the differential equations of motion of the cars after the impact, and the given initial conditions that satisfy the solution are used to solve the system of equations. The main findings of the study can be grouped as follows: 1) The positions of the vehicles prior to the moment of first impact and the post-impact orientation of velocity vectors are more precise. 2) The variability of the tire-road friction coefficient is taken into consideration. 3) The value of the coefficient of restitution according to Newton’s theory of impact does not need to be determined.
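The two-equation system named above (conservation of momentum plus conservation of mechanical energy) can be sketched in a simplified one-dimensional form. The masses, speeds, and the reduction to 1-D are assumptions for illustration, not the paper's full planar model with rotation.

```python
import math

def pre_impact_speeds(m1, m2, p, E):
    """Solve the 1-D system
         m1*v1 + m2*v2 = p                     (momentum)
         0.5*m1*v1**2 + 0.5*m2*v2**2 = E       (mechanical energy)
    for the two candidate (v1, v2) pairs. In the paper's approach,
    p and E would be reconstructed from simulated post-impact motion
    (plus deformation losses). Substituting v2 = (p - m1*v1)/m2 into
    the energy equation yields a quadratic in v1:
         m1*(m1+m2)*v1**2 - 2*p*m1*v1 + (p**2 - 2*m2*E) = 0."""
    A = m1 * (m1 + m2)
    B = -2.0 * p * m1
    C = p * p - 2.0 * m2 * E
    disc = B * B - 4.0 * A * C
    if disc < 0:
        raise ValueError("no real solution: inconsistent p and E")
    roots = [(-B + s * math.sqrt(disc)) / (2.0 * A) for s in (1.0, -1.0)]
    return [(v1, (p - m1 * v1) / m2) for v1 in roots]

# Illustrative numbers (not from the paper): 1200 kg and 1500 kg
# vehicles whose total momentum and kinetic energy are known.
m1, m2 = 1200.0, 1500.0
p = m1 * 20.0 + m2 * 5.0
E = 0.5 * m1 * 20.0**2 + 0.5 * m2 * 5.0**2
solutions = pre_impact_speeds(m1, m2, p, E)
```

The quadratic naturally returns two mathematically valid pairs; in reconstruction practice the physically plausible one is selected from the scene evidence (approach directions, rest positions).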
APA, Harvard, Vancouver, ISO, and other styles
45

Pitt-Francis, Joe, Miguel O. Bernabeu, Jonathan Cooper, Alan Garny, Lee Momtahan, James Osborne, Pras Pathmanathan, Blanca Rodriguez, Jonathan P. Whiteley, and David J. Gavaghan. "Chaste: using agile programming techniques to develop computational biology software." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 366, no. 1878 (June 19, 2008): 3111–36. http://dx.doi.org/10.1098/rsta.2008.0096.

Full text
Abstract:
Cardiac modelling is the area of physiome modelling where the available simulation software is perhaps most mature, and it therefore provides an excellent starting point for considering the software requirements for the wider physiome community. In this paper, we will begin by introducing some of the most advanced existing software packages for simulating cardiac electrical activity. We consider the software development methods used in producing codes of this type, and discuss their use of numerical algorithms, relative computational efficiency, usability, robustness and extensibility. We then go on to describe a class of software development methodologies known as test-driven agile methods and argue that such methods are more suitable for scientific software development than the traditional academic approaches. A case study is a project of our own, Cancer, Heart and Soft Tissue Environment, which is a library of computational biology software that began as an experiment in the use of agile programming methods. We present our experiences with a review of our progress thus far, focusing on the advantages and disadvantages of this new approach compared with the development methods used in some existing packages. We conclude by considering whether the likely wider needs of the cardiac modelling community are currently being met and suggest that, in order to respond effectively to changing requirements, it is essential that these codes should be more malleable. Such codes will allow for reliable extensions to include both detailed mathematical models—of the heart and other organs—and more efficient numerical techniques that are currently being developed by many research groups worldwide.
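The test-driven style advocated above can be illustrated with a toy numerical example: the test encoding the expected behaviour is written first, and the implementation exists to make it pass. The solver, its name, and the tolerance are illustrative and are not taken from the Chaste codebase.

```python
import math

def forward_euler(f, y0, t0, t1, n):
    """Minimal explicit (forward) Euler integration of dy/dt = f(t, y)
    from t0 to t1 in n steps, returning y(t1)."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def test_forward_euler_decay():
    # Written before the solver, test-first: for dy/dt = -y, y(0) = 1,
    # the exact solution at t = 1 is exp(-1); Euler with a small step
    # must land within a loose tolerance of it.
    y = forward_euler(lambda t, y: -y, 1.0, 0.0, 1.0, 10000)
    assert abs(y - math.exp(-1)) < 1e-3

test_forward_euler_decay()
```

In an agile scientific codebase, a suite of such small behavioural tests is what makes later refactoring (e.g. swapping in a more efficient numerical scheme) safe.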
APA, Harvard, Vancouver, ISO, and other styles
46

Tzotzis, Anastasios, Nikolaos Tapoglou, Rajesh Kumar Verma, and Panagiotis Kyratsis. "3D-FEM Approach of AISI-52100 Hard Turning: Modelling of Cutting Forces and Cutting Condition Optimization." Machines 10, no. 2 (January 20, 2022): 74. http://dx.doi.org/10.3390/machines10020074.

Full text
Abstract:
In the present study, a 3D finite element (FE) model for machining AISI-52100 steel was proposed, with respect to three levels of cutting speed (100 m/min, 150 m/min and 200 m/min), feed (0.08 mm/rev, 0.11 mm/rev and 0.14 mm/rev), depth of cut (0.20 mm, 0.30 mm and 0.40 mm) and tool nose radius (0.80 mm, 1.20 mm and 1.60 mm). Nine simulation tests were performed according to cutting conditions that were used in experimental studies, in order to verify the accuracy of the model. Next, the FE model was utilized to carry out thirty new simulation runs, with cutting conditions derived from the implementation of the central composite design (CCD). Additionally, a mathematical model was established for prediction purposes, whereas the relationship between the applied cutting parameters and their influence on the resultant cutting force was investigated with the aid of statistical methodologies such as the response surface methodology (RSM) and the analysis of variance (ANOVA). The comparison between the numerical and the statistical model revealed an increased level of correlation, exceeding 90% in many tests. Specifically, the relative error varied between −7.9% and 11.3%. Lastly, an optimization process was performed to find the optimal cutting conditions for minimizing the resultant machining force, as per the standardized tool nose radius value.
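The relative-error comparison between the FE predictions and the statistical model reported above can be sketched as follows. The cutting-force values are invented for illustration, not the paper's data.

```python
def relative_error(predicted, reference):
    """Signed relative error (%) of a statistical-model prediction
    against the finite-element reference value."""
    return 100.0 * (predicted - reference) / reference

# Illustrative resultant cutting forces in N (assumed values): check
# that each prediction stays inside the reported -7.9% .. +11.3% band.
fem = [250.0, 310.0, 180.0]
rsm = [262.0, 295.0, 195.0]
errors = [relative_error(p, r) for p, r in zip(rsm, fem)]
within_band = all(-7.9 <= e <= 11.3 for e in errors)
```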
APA, Harvard, Vancouver, ISO, and other styles
47

Franzi, L., D. Giordan, M. Arattano, P. Allasia, and M. Arai. "Preface Results of the open session on "Documentation and monitoring of landslides and debris flows" for mathematical modelling and design of mitigation measures, held at the EGU General Assembly 2009." Natural Hazards and Earth System Sciences 11, no. 5 (May 27, 2011): 1583–88. http://dx.doi.org/10.5194/nhess-11-1583-2011.

Full text
Abstract:
Abstract. The papers presented and summarised here represent the recent scientific contributions of authors from different countries working in the fields of monitoring, modelling, mapping and the design of mitigation measures against mass movements. The authors had the opportunity to present their recent advancements, discuss each other's needs and set forth future research requirements during the 2009 EGU General Assembly, so their scientific contributions can be considered the result of the debates and exchanges among scientists and researchers, either personally or during the review phase since that date. In this summary, the scientific papers of the special issue are divided according to different thematic areas and summarised. The most innovative scientific approaches proposed in the special issue, regarding monitoring methodologies, simulation techniques and laboratory equipment, are described and summarised. The obtained results are very promising for continuing future research at a very satisfactory level.
APA, Harvard, Vancouver, ISO, and other styles
48

BEAUMONT, PETER W. R., and HIDEKI SEKINE. "SOLVING PROBLEMS OF COMPOSITE FRACTURE BY MULTISCALE MODELING." Journal of Multiscale Modelling 01, no. 01 (January 2009): 79–106. http://dx.doi.org/10.1142/s1756973709000062.

Full text
Abstract:
In critical conditions of variable stress-state, fluctuating temperature and hostile environment, where the objective is to design components and structures for longevity, durability, and reliability — structural integrity — the balance between empirical engineering design based on continuum and mathematical modeling (sometimes called "distilled empiricism") and physical modeling (sometimes called "mechanism modeling" or simply "micromechanics") is shifted in favor of physical modeling. When combined with experimental evidence, physical modeling has the economic advantage of reducing the high cost of vast experimental programs lasting many thousands of hours. Furthermore, existing empirical design methodologies at the higher (macroscopic) structural size scales can be supported and justified by fundamental understanding at lower (microscopic) size scales through the physical model. Armed with this information, together with knowledge of the mechanical behavior of the material over time, we follow the path of "physical model-informed empiricism", sometimes called "intelligent-informed design". Proof of identity of individual cracking processes based on their direct observation, and an understanding of the coupling between them, is the first step in formulating a complete physical model of fracture.
APA, Harvard, Vancouver, ISO, and other styles
49

Konstantaras, A., G. N. Fouskitakis, J. P. Makris, and F. Vallianatos. "Stochastic analysis of geo-electric field singularities as seismically correlated candidates." Natural Hazards and Earth System Sciences 8, no. 6 (December 18, 2008): 1451–62. http://dx.doi.org/10.5194/nhess-8-1451-2008.

Full text
Abstract:
Abstract. The study of the Earth's electromagnetic field prior to the occurrence of strong seismic events has repeatedly revealed cases where transient electric potential anomalies, often deemed possible earthquake precursors, were observed in electromagnetic field recordings. In an attempt to understand the nature of such signals, several models have been proposed based upon the exhibited characteristics of the observed anomalies, often supported by different mathematical models simulating possible generation mechanisms. This paper discusses a candidate Electric Earthquake Precursor (EEP) signal accompanying the Kythira Mw=6.9 earthquake in Greece (which occurred on 8 January 2006). Neuro-Fuzzy models, along with stochastic models, are incorporated for the modelling and analysis of the recorded Earth's electric field. The results of the study indicate that the Neuro-Fuzzy model treats the observed possible EEP signal as an external additive component to the recorded Earth's electric field, while the stochastic TARMA models accurately represent the recorded electric signals in both the time and the frequency domains. The complementary findings of both methodologies might potentially contribute to the future development of a more accurate and generalized framework for the efficient recognition and characterization of possible EEPs.
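The autoregressive core of the stochastic time-series modelling referred to above can be sketched with a minimal AR(1) estimator; full TARMA models add time-varying coefficients and a moving-average part. The signal below is generated deterministically so the estimate recovers the coefficient exactly; real field recordings are noisy.

```python
def ar1_fit(series):
    """Least-squares estimate of phi in x[t] = phi * x[t-1] + e[t],
    a minimal stand-in for the stochastic models used for the
    electric-field recordings."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

# Illustrative signal generated with phi = 0.8 and no noise.
x = [1.0]
for _ in range(50):
    x.append(0.8 * x[-1])
phi_hat = ar1_fit(x)
```

A fitted model of this kind can then be used as a reference: samples the model fails to explain are candidates for external additive components such as a possible EEP signal.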
APA, Harvard, Vancouver, ISO, and other styles
50

Perzyk, M., and J. Kozlowski. "Methodology of Fault Diagnosis in Ductile Iron Melting Process." Archives of Foundry Engineering 16, no. 4 (December 1, 2016): 101–8. http://dx.doi.org/10.1515/afe-2016-0092.

Full text
Abstract:
Abstract Statistical Process Control (SPC), based on Shewhart-type control charts, is widely used in contemporary manufacturing industry, including many foundries. The main steps include process monitoring, detection of out-of-control signals, and identification and removal of their causes. Finding the root causes of process faults is often a difficult task and can be supported by various tools, including data-driven mathematical models. In the present paper a novel approach to statistical control of the ductile iron melting process is proposed. It is aimed at developing methodologies suitable for effectively finding the causes of out-of-control signals in the process outputs, defined as ultimate tensile strength (Rm) and elongation (A5), based mainly on the chemical composition of the alloy. The methodologies are tested and presented using several real foundry data sets. First, correlations between standard abnormal output patterns (i.e. out-of-control signals) and corresponding input patterns are found, based on the detection of similar patterns and similar shapes in the run charts of the chemical elements' contents. It was found that in a significant number of cases there was no clear indication of correlation, which can be attributed either to the complex, simultaneous action of several chemical elements or to causes related to other process variables, including melting, inoculation, spheroidization and pouring parameters as well as human error. A concept of a methodology based on simulation of the process using advanced input-output regression modelling is presented. The preliminary tests have shown that it can be a useful tool in process control and is worth further development. The results obtained in the present study may not only be applied to the ductile iron process but can also be utilized in the statistical quality control of a wide range of different discrete processes.
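The Shewhart-type out-of-control detection underlying the methodology can be sketched with 3-sigma individual control limits. The tensile-strength readings below are invented, not the foundry data used in the paper.

```python
def shewhart_limits(samples):
    """Shewhart individuals-chart limits: centre line at the mean,
    control limits at mean +/- 3 sample standard deviations."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    sigma = var ** 0.5
    return mean - 3 * sigma, mean, mean + 3 * sigma

def out_of_control(samples, new_points):
    """Flag new observations falling outside the 3-sigma limits."""
    lcl, _, ucl = shewhart_limits(samples)
    return [x for x in new_points if x < lcl or x > ucl]

# Illustrative tensile-strength readings in MPa (assumed values):
# a stable baseline run, then one clearly abnormal melt.
baseline = [502, 498, 505, 497, 500, 503, 499, 501, 496, 504]
signals = out_of_control(baseline, [501, 478, 503])
```

Finding *why* a flagged melt such as the 478 MPa reading occurred, e.g. by correlating it with the run charts of the chemical composition, is the step the paper's methodology targets.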
APA, Harvard, Vancouver, ISO, and other styles