Journal articles on the topic 'Data modelling frameworks'


Consult the top 50 journal articles for your research on the topic 'Data modelling frameworks.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Murray, S. G., C. Power, and A. S. G. Robotham. "Modelling Galaxy Populations in the Era of Big Data." Proceedings of the International Astronomical Union 10, S306 (May 2014): 304–6. http://dx.doi.org/10.1017/s1743921314010710.

Abstract:
The coming decade will witness a deluge of data from next generation galaxy surveys such as the Square Kilometre Array and Euclid. How can we optimally and robustly analyse these data to maximise scientific returns from these surveys? Here we discuss recent work in developing both the conceptual and software frameworks for carrying out such analyses and their application to the dark matter halo mass function. We summarise what we have learned about the HMF from the last 10 years of precision CMB data using the open-source HMFcalc framework, before discussing how this framework is being extended to the full Halo Model.
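As a rough illustration of what an HMF framework like the open-source HMFcalc computes, here is a minimal Press-Schechter sketch in Python; the sigma(M) relation, delta_c and mean-density values are toy stand-ins, not HMFcalc's internals:

```python
# Illustrative Press-Schechter halo mass function, assuming a precomputed
# sigma(M) relation; not the actual HMFcalc implementation.
import numpy as np

delta_c = 1.686                      # critical overdensity for collapse
rho_mean = 8.5e10                    # mean matter density [M_sun / Mpc^3], assumed cosmology

def press_schechter_dndlnm(m, sigma):
    """dn/dlnM for a grid of masses m [M_sun] and their rms fluctuation sigma(M)."""
    nu = delta_c / sigma
    f_ps = np.sqrt(2.0 / np.pi) * nu * np.exp(-0.5 * nu**2)   # multiplicity function
    dlnsigma_dlnm = np.gradient(np.log(sigma), np.log(m))     # numerical log-derivative
    return (rho_mean / m) * f_ps * np.abs(dlnsigma_dlnm)

# Toy sigma(M): a shallow power law standing in for a real power-spectrum integral.
m = np.logspace(10, 15, 50)
sigma = 3.0 * (m / 1e13) ** -0.25
print(press_schechter_dndlnm(m, sigma)[:3])
```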
2

Urquhart, Christine, and Dina Tbaishat. "Reflections on the value and impact of library and information services." Performance Measurement and Metrics 17, no. 1 (April 11, 2016): 29–44. http://dx.doi.org/10.1108/pmm-01-2016-0004.

Abstract:
Purpose – The purpose of this paper is to examine frameworks (such as scorecards) for ongoing library assessment and how business process modelling contributes, in Part 3 of the series of viewpoint papers.
Design/methodology/approach – Reviews the statistical data collection for strategic planning, and use of data analytics. Considers how to organise further value explorations. Compares macro-frameworks (balanced scorecard, values scorecard) and micro-frameworks for library assessment. Reviews the evidence on business process modelling/re-engineering initiatives. Describes how the Riva approach can be used both to derive a process architecture and to model individual processes.
Findings – Data analytics requires collaboration among library services to develop reliable data sets and effective data visualisations for managers to use. Frameworks such as the balanced scorecard may be used to organise ongoing impact and performance evaluation. Queries that arise during ongoing library assessment may require a framework to formulate questions and assemble evidence (qualitative and quantitative). Both macro- and micro-value frameworks are useful. Work on process modelling within libraries can help to develop an assessment culture, and the Riva approach provides both a process architecture and models of individual processes.
Originality/value – Examines how to implement a library assessment culture through use of data analytics, value frameworks and business process modelling.
3

Herath, Herath Mudiyanselage Viraj Vidura, Jayashree Chadalawada, and Vladan Babovic. "Hydrologically informed machine learning for rainfall–runoff modelling: towards distributed modelling." Hydrology and Earth System Sciences 25, no. 8 (August 11, 2021): 4373–401. http://dx.doi.org/10.5194/hess-25-4373-2021.

Abstract:
Abstract. Despite showing great success of applications in many commercial fields, machine learning and data science models generally show limited success in many scientific fields, including hydrology (Karpatne et al., 2017). The approach is often criticized for its lack of interpretability and physical consistency. This has led to the emergence of new modelling paradigms, such as theory-guided data science (TGDS) and physics-informed machine learning. The motivation behind such approaches is to improve the physical meaningfulness of machine learning models by blending existing scientific knowledge with learning algorithms. Following the same principles in our prior work (Chadalawada et al., 2020), a new model induction framework was founded on genetic programming (GP), namely the Machine Learning Rainfall–Runoff Model Induction (ML-RR-MI) toolkit. ML-RR-MI is capable of developing fully fledged lumped conceptual rainfall–runoff models for a watershed of interest using the building blocks of two flexible rainfall–runoff modelling frameworks. In this study, we extend ML-RR-MI towards inducing semi-distributed rainfall–runoff models. The meaningfulness and reliability of hydrological inferences gained from lumped models may tend to deteriorate within large catchments where the spatial heterogeneity of forcing variables and watershed properties is significant. This was the motivation behind developing our machine learning approach for distributed rainfall–runoff modelling titled Machine Induction Knowledge Augmented – System Hydrologique Asiatique (MIKA-SHA). MIKA-SHA captures spatial variabilities and automatically induces rainfall–runoff models for the catchment of interest without any explicit user selections. Currently, MIKA-SHA learns models utilizing the model building components of two flexible modelling frameworks. However, the proposed framework can be coupled with any internally coherent collection of building blocks. MIKA-SHA's model induction capabilities have been tested on the Rappahannock River basin near Fredericksburg, Virginia, USA. MIKA-SHA builds and tests many model configurations using the model building components of the two flexible modelling frameworks and quantitatively identifies the optimal model for the watershed of concern. In this study, MIKA-SHA is utilized to identify two optimal models (one from each flexible modelling framework) to capture the runoff dynamics of the Rappahannock River basin. Both optimal models achieve high-efficiency values in hydrograph predictions (both at catchment and subcatchment outlets) and good visual matches with the observed runoff response of the catchment. Furthermore, the resulting model architectures are compatible with previously reported research findings and fieldwork insights of the watershed and are readily interpretable by hydrologists. MIKA-SHA-induced semi-distributed model performances were compared against existing lumped model performances for the same basin. MIKA-SHA-induced optimal models outperform the lumped models used in this study in terms of efficiency values while benefitting hydrologists with more meaningful hydrological inferences about the runoff dynamics of the Rappahannock River basin.
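The configuration search that MIKA-SHA automates can be pictured as "build many candidate models from building blocks, score each, keep the best". A minimal sketch of that selection step follows; the component names and the simulate() stub are hypothetical, and the real toolkit uses genetic programming rather than exhaustive enumeration:

```python
# Minimal sketch of the "build many configurations, keep the best" step that
# MIKA-SHA automates; component names and the simulate() stub are hypothetical.
import itertools
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def simulate(components, rain):
    """Stand-in for running a conceptual model built from the components."""
    k = 0.1 * len(components)                      # toy: more stores, slower response
    return np.convolve(rain, np.exp(-k * np.arange(10)), mode="same")

stores = ["interception", "soil_moisture", "groundwater"]
routings = ["linear_reservoir", "unit_hydrograph"]

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 2.0, size=365)
obs = simulate(["soil_moisture", "groundwater"], rain) + rng.normal(0, 0.1, 365)

best = max(
    ((list(s) + [r]) for n in (1, 2, 3)
     for s in itertools.combinations(stores, n) for r in routings),
    key=lambda c: nse(obs, simulate(c, rain)),
)
print("best configuration:", best)
```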
4

Støa, Bente, Rune Halvorsen, Sabrina Mazzoni, and Vladimir I. Gusarov. "Sampling bias in presence-only data used for species distribution modelling: theory and methods for detecting sample bias and its effects on models." Sommerfeltia 38, no. 1 (October 1, 2018): 1–53. http://dx.doi.org/10.2478/som-2018-0001.

Abstract:
This paper provides a theoretical understanding of sampling bias in presence-only data in the context of species distribution modelling. This understanding forms the basis for two integrated frameworks, one for detecting sampling bias of different kinds in presence-only data (the bias assessment framework) and one for assessing potential effects of sampling bias on species distribution models (the bias effects framework). We exemplify the use of these frameworks with museum data for nine insect species in Norway, for which the distributions along the two main bioclimatic gradients (related to oceanicity and temperatures) are modelled using the MaxEnt method. Models of different complexity (achieved by use of two different model selection procedures that represent spatial prediction or ecological response modelling purposes, respectively) were generated with different types of background data (uninformed and background-target-group [BTG]). The bias assessment framework made use of comparisons between observed and theoretical frequency-of-presence (FoP) curves, obtained separately for each combination of species and bioclimatic predictor, to identify potential sampling bias. The bias effects framework made use of comparisons between modelled response curves (predicted relative FoP curves) and the corresponding observed FoP curves for each combination of species and predictor. The extent to which the observed FoP curves deviated from the expected, smooth and unimodal theoretical FoP curve varied considerably among the nine insect species. Among-curve differences were, in most cases, interpreted as indications of sampling bias. Using BTG-type background data in many cases introduced strong sampling bias. The predicted relative FoP curves from MaxEnt were, in general, similar to the corresponding observed FoP curves. This indicates that the main structure of the data sets was adequately summarised by the MaxEnt models (with the options and settings used), in turn suggesting that shortcomings of input data such as sampling bias or omission of important predictors may overshadow the effect of modelling method on the predictive performance of distribution models. The examples indicate that the two proposed frameworks are useful for identification of sampling bias in presence-only data and for choosing settings for distribution modelling options such as the method for extraction of background data points and determining the appropriate level of model complexity.
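A minimal sketch of the observed frequency-of-presence (FoP) curve at the heart of the bias assessment framework: the fraction of background points in each gradient bin that coincide with presence records. The data below are synthetic:

```python
# Observed frequency-of-presence (FoP) curve along one bioclimatic gradient:
# presences per background point in each gradient bin. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
background = rng.uniform(0, 10, 5000)    # gradient values of background points
presence = rng.normal(4.0, 1.2, 300)     # presences clustered mid-gradient

bins = np.linspace(0, 10, 21)
bg_counts, _ = np.histogram(background, bins)
pr_counts, _ = np.histogram(presence, bins)

with np.errstate(divide="ignore", invalid="ignore"):
    fop = np.where(bg_counts > 0, pr_counts / bg_counts, np.nan)

for lo, hi, f in zip(bins[:-1], bins[1:], fop):
    print(f"[{lo:4.1f}, {hi:4.1f})  FoP = {f:.3f}")
```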
5

Oden, J. Tinsley. "Adaptive multiscale predictive modelling." Acta Numerica 27 (May 1, 2018): 353–450. http://dx.doi.org/10.1017/s096249291800003x.

Abstract:
The use of computational models and simulations to predict events that take place in our physical universe, or to predict the behaviour of engineered systems, has significantly advanced the pace of scientific discovery and the creation of new technologies for the benefit of humankind over recent decades, at least up to a point. That ‘point’ in recent history occurred around the time that the scientific community began to realize that true predictive science must deal with many formidable obstacles, including the determination of the reliability of the models in the presence of many uncertainties. To develop meaningful predictions one needs relevant data, itself possessing uncertainty due to experimental noise; in addition, one must determine model parameters, and concomitantly, there is the overriding need to select and validate models given the data and the goals of the simulation. This article provides a broad overview of predictive computational science within the framework of what is often called the science of uncertainty quantification. The exposition is divided into three major parts. In Part 1, philosophical and statistical foundations of predictive science are developed within a Bayesian framework. There the case is made that the Bayesian framework provides, perhaps, a unique setting for handling all of the uncertainties encountered in scientific prediction. In Part 2, general frameworks and procedures for the calculation and validation of mathematical models of physical realities are given, all in a Bayesian setting. But beyond Bayes, an introduction to information theory, the maximum entropy principle, model sensitivity analysis and sampling methods such as MCMC are presented. In Part 3, the central problem of predictive computational science is addressed: the selection, adaptive control and validation of mathematical and computational models of complex systems. The Occam Plausibility Algorithm, OPAL, is introduced as a framework for model selection, calibration and validation. Applications to complex models of tumour growth are discussed.
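The model-selection step that OPAL formalises rests on posterior model plausibilities computed from marginal likelihoods (evidences). A tiny sketch of that computation, with made-up evidence values and a uniform model prior:

```python
# Posterior model plausibilities from marginal likelihoods (evidences), the
# quantity OPAL-style Bayesian model selection ranks; evidences are made up.
import numpy as np

log_evidence = np.array([-120.4, -118.9, -119.6])   # one entry per candidate model
log_prior = np.log(np.array([1/3, 1/3, 1/3]))       # uniform model prior

log_post = log_evidence + log_prior
log_post -= np.max(log_post)                         # stabilise the exponentials
posterior = np.exp(log_post) / np.exp(log_post).sum()

for i, p in enumerate(posterior, 1):
    print(f"model {i}: posterior plausibility = {p:.3f}")
```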
6

Chartier, Jean-François, Davide Pulizzotto, Louis Chartrand, and Jean-Guy Meunier. "A data-driven computational semiotics: The semantic vector space of Magritte’s artworks." Semiotica 2019, no. 230 (October 25, 2019): 19–69. http://dx.doi.org/10.1515/sem-2018-0120.

Abstract:
The rise of big digital data is changing the framework within which linguists, sociologists, anthropologists, and other researchers are working. Semiotics is not spared by this paradigm shift. Data-driven computational semiotics is the study, through intensive use of computational methods, of patterns in human-created content related to semiotic phenomena. One of the most promising frameworks in this research program is that of Semantic Vector Space (SVS) models and their methods. The objective of this article is to contribute to the exploration of the SVS for a computational semiotics by showing what types of semiotic analysis can be accomplished within this framework. The study is applied to a unique body of digitized artworks. We conducted three short experiments in which we explore three types of semiotic analysis: paradigmatic analysis, componential analysis, and topic modelling analysis. The results reported show that the SVS constitutes a powerful framework within which various types of semiotic analysis can be carried out.
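Paradigmatic analysis in a semantic vector space boils down to nearest-neighbour search under cosine similarity. A toy sketch (the four-dimensional vectors are invented, not the article's artwork embeddings):

```python
# Paradigmatic analysis in a semantic vector space reduces to nearest-neighbour
# search under cosine similarity; the 4-dimensional vectors below are toys.
import numpy as np

vocab = {
    "pipe":  np.array([0.9, 0.1, 0.3, 0.0]),
    "hat":   np.array([0.8, 0.2, 0.4, 0.1]),
    "apple": np.array([0.1, 0.9, 0.2, 0.3]),
    "cloud": np.array([0.2, 0.3, 0.9, 0.4]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

query = "pipe"
neighbours = sorted(
    ((w, cosine(vocab[query], v)) for w, v in vocab.items() if w != query),
    key=lambda t: t[1], reverse=True,
)
print(neighbours)   # words most substitutable for the query, paradigmatically
```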
7

Shivakumar, Abhishek, Thomas Alfstad, and Taco Niet. "A clustering approach to improve spatial representation in water-energy-food models." Environmental Research Letters 16, no. 11 (October 29, 2021): 114027. http://dx.doi.org/10.1088/1748-9326/ac2ce9.

Abstract:
Currently available water-energy-food (WEF) modelling frameworks to analyse cross-sectoral interactions often share one or more of the following gaps: (a) lack of integration between sectors, (b) coarse spatial representation, and (c) lack of reproducible methods of nexus assessment. In this paper, we present a novel clustering tool as an expansion to the Climate-Land-Energy-Water-Systems modelling framework used to quantify inter-sectoral linkages between water, energy, and food systems. The clustering tool uses agglomerative hierarchical clustering to aggregate spatial data related to the land and water sectors. Using clusters of aggregated data reconciles the need for a spatially resolved representation of the land-use and water sectors with the computational and data requirements to efficiently solve such a model. The aggregated clusters, combined with energy system components, form an integrated resource planning structure. The modelling framework is underpinned by an open-source energy system modelling tool—OSeMOSYS—and uses publicly available data with global coverage. By doing so, the modelling framework allows for reproducible WEF nexus assessments. The approach is used to explore the inter-sectoral linkages between the energy, land-use, and water sectors of Viet Nam out to 2030. A validation of the clustering approach confirms that underlying trends in actual crop yield data are preserved in the resultant clusters. Finally, changes in the cultivated area of selected crops are observed and differences in levels of crop migration are identified.
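A sketch of the aggregation step the clustering tool performs, using scikit-learn's agglomerative hierarchical clustering on synthetic per-cell land/water attributes; the feature choice and cluster count are assumptions, not the paper's setup:

```python
# Aggregating spatial land/water attributes into clusters with agglomerative
# hierarchical clustering (scikit-learn); the features are synthetic.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# One row per grid cell: [precipitation, crop yield, irrigation demand]
cells = rng.normal(size=(1000, 3)) * [300.0, 2.0, 50.0] + [1200.0, 5.0, 200.0]

X = StandardScaler().fit_transform(cells)          # put attributes on one scale
labels = AgglomerativeClustering(n_clusters=5, linkage="ward").fit_predict(X)

for k in range(5):
    print(f"cluster {k}: {np.sum(labels == k)} cells")
```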
8

aus der Beek, T., M. Flörke, D. M. Lapola, R. Schaldach, F. Voß, and E. Teichert. "Modelling historical and current irrigation water demand on the continental scale: Europe." Advances in Geosciences 27 (September 7, 2010): 79–85. http://dx.doi.org/10.5194/adgeo-27-79-2010.

Abstract:
Water abstractions for irrigation purposes are higher than for any other pan-European water use sector and have a large influence on river runoff regimes. This modelling experiment assesses historic and current irrigation water demands for different crops at five arc-minute spatial resolution for pan-Europe. Two different modelling frameworks have been applied in this study. First, soft-coupling the dynamic vegetation model LPJmL with the land use model LandSHIFT leads to overestimations of national irrigation water demands, which are rather high in the southern Mediterranean countries. This can be explained by unlimited water supply in the model structure and by illegal or ungauged water abstractions in the reported data sets. The second modelling framework is WaterGAP3, which has an integrated conceptual crop-specific irrigation module. Irrigation water requirements as modelled with WaterGAP3 feature a more realistic representation of pan-European water withdrawals. However, in colder humid regions, irrigation water demands are often underestimated. Additionally, a national database on crop-specific irrigated area and water withdrawal for all 42 countries within pan-Europe has been set up and integrated into both model frameworks.
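Conceptual crop-specific irrigation modules of the kind described here typically follow the FAO-56 accounting: crop coefficient times reference evapotranspiration, minus effective precipitation. A generic sketch of that textbook formulation (not the WaterGAP3 module, and the numbers are illustrative):

```python
# Net irrigation requirement per time step, FAO-56 style: crop coefficient
# times reference evapotranspiration, minus effective precipitation. This is
# a generic textbook formulation, not the WaterGAP3 irrigation module.
import numpy as np

et0 = np.array([3.1, 4.2, 5.0, 5.6, 4.8, 3.5])     # reference ET [mm/day], monthly means
kc = np.array([0.4, 0.7, 1.05, 1.15, 0.8, 0.5])    # crop coefficient over the season
p_eff = np.array([1.2, 0.9, 0.4, 0.2, 0.5, 1.0])   # effective precipitation [mm/day]

net_irrigation = np.maximum(kc * et0 - p_eff, 0.0) # demand cannot be negative
print(net_irrigation)                               # [mm/day] to be met by withdrawals
```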
9

Wilbert, Niko, Tiziano Zito, Rike-Benjamin Schuppner, Zbigniew Jędrzejewski-Szmek, Laurenz Wiskott, and Pietro Berkes. "Building extensible frameworks for data processing: The case of MDP, Modular toolkit for Data Processing." Journal of Computational Science 4, no. 5 (September 2013): 345–51. http://dx.doi.org/10.1016/j.jocs.2011.10.005.

10

Ledien, Julia, Zulma M. Cucunubá, Gabriel Parra-Henao, Eliana Rodríguez-Monguí, Andrew P. Dobson, Susana B. Adamo, María-Gloria Basáñez, and Pierre Nouvellet. "Linear and machine learning modelling for spatiotemporal disease predictions: Force-of-infection of Chagas disease." PLOS Neglected Tropical Diseases 16, no. 7 (July 19, 2022): e0010594. http://dx.doi.org/10.1371/journal.pntd.0010594.

Abstract:
Background – Chagas disease is a long-lasting disease with a prolonged asymptomatic period. Cumulative indices of infection such as prevalence do not shed light on the current epidemiological situation, as they integrate infection over long periods. Instead, metrics such as the Force-of-Infection (FoI) provide information about the rate at which susceptible people become infected and permit sharper inference about temporal changes in infection rates. FoI is estimated by fitting (catalytic) models to available age-stratified serological (ground-truth) data. Predictive FoI modelling frameworks are then used to understand spatial and temporal trends indicative of heterogeneity in transmission and changes effected by control interventions. Ideally, these frameworks should be able to propagate uncertainty and handle spatiotemporal issues.
Methodology/principal findings – We compare three methods in their ability to propagate uncertainty and provide reliable estimates of FoI for Chagas disease in Colombia as a case study: two Machine Learning (ML) methods (Boosted Regression Trees (BRT) and Random Forest (RF)), and a Linear Model (LM) framework that we had developed previously. Our analyses show consistent results between the three modelling methods under scrutiny. The predictors (explanatory variables) selected, as well as the location of the most uncertain FoI values, were coherent across frameworks. RF was faster than BRT and LM, and provided estimates with fewer extreme values when extrapolating to areas where no ground-truth data were available. However, BRT and RF were less efficient at propagating uncertainty.
Conclusions/significance – The choice of FoI predictive models will depend on the objectives of the analysis. ML methods will help characterise the mean behaviour of the estimates, while LM will provide insight into the uncertainty surrounding such estimates. Our approach can be extended to the modelling of FoI patterns in other Chagas disease-endemic countries and to other infectious diseases for which serosurveys are regularly conducted for surveillance.
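A minimal sketch of the catalytic-model fitting the abstract refers to, for the simplest constant-FoI case P(a) = 1 - exp(-lambda*a), using scipy on synthetic age-stratified seroprevalence:

```python
# Fitting the simple (constant force-of-infection) catalytic model
# P(a) = 1 - exp(-lambda * a) to age-stratified seroprevalence; data synthetic.
import numpy as np
from scipy.optimize import curve_fit

def catalytic(age, foi):
    return 1.0 - np.exp(-foi * age)

ages = np.array([5, 10, 15, 20, 30, 40, 50], dtype=float)
seroprev = np.array([0.07, 0.14, 0.21, 0.26, 0.37, 0.45, 0.53])  # observed fractions

(foi_hat,), cov = curve_fit(catalytic, ages, seroprev, p0=[0.01])
print(f"estimated FoI = {foi_hat:.4f} per year "
      f"(SE ~ {np.sqrt(cov[0, 0]):.4f})")
```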
11

Sibenik, Goran, and Iva Kovacic. "Interpreted open data exchange between architectural design and structural analysis models." Journal of Information Technology in Construction 26 (February 26, 2021): 39–57. http://dx.doi.org/10.36680/j.itcon.2021.004.

Abstract:
The heterogeneity of the architecture, engineering and construction (AEC) industry reflects on digital building models, which differ across domains and planning phases. Data exchange between architectural design and structural analysis models poses a particular challenge because of dramatically different representations of building elements. Existing software tools and standards have not been able to deal with these differences. The research on inter-domain building information modelling (BIM) frameworks does not consider the geometry interpretations for data exchange. Analysis of geometry interpretations is mostly project-specific and is seldom reflected in general data exchange frameworks. By defining a data exchange framework that engages with varying requirements and representations of architectural design and structural analysis in terms of geometry, which is open to other domains, we aim to close the identified gap. Existing classification systems in software tools and standards were reviewed in order to understand architectural design and structural analysis representations and to identify the relationships between them. Following the analysis, a novel data management framework based on classification, interpretation and automation was proposed, implemented and tested. Classification is a model specification including domain-specific terms and relationships between them. Interpretations consist of inter-domain procedures necessary to generate domain-specific models from a provided model. Automation represents the connection between open domain-specific models and proprietary models in software tools. Practical implementation with a test case demonstrated a possible realization of the proposed framework. The innovative contribution of the research is a novel framework based on the system of open domain-specific classifications and procedures for the inter-domain interpretation, which can prepare domain-specific models on central storage. The main benefit is a centrally prepared domain-specific model, relieving software developers from so-far-unsuccessful implementation of complex inter-domain interpretations in each software tool, and providing end users with control over the data exchange. Although the framework is based on the exchange between architectural design and structural analysis, the proposed central data management framework can be used for other exchange processes involving different model representations.
12

Gumbricht, T., and R. Thunvik. "3D Hydrogeological Modelling with an Expert GIS Interface." Hydrology Research 28, no. 4-5 (August 1, 1997): 329–38. http://dx.doi.org/10.2166/nh.1998.27.

Abstract:
Geographical Information Systems provide a powerful tool for creating three-dimensional (3D) datasets for sophisticated hydrogeological models. The article describes a GIS with an expert system interface developed for generating 3D hydrogeological frameworks. The system integrates 2D images of elevation and geology and vertical profile data. Application of the expert GIS to a complex aquifer in South Eastern Sweden is described.
13

Rahman, Mohammad Lutfur, Antoni Moore, Melody Smith, John Lieswyn, and Sandra Mandic. "A Conceptual Framework for Modelling Safe Walking and Cycling Routes to High Schools." International Journal of Environmental Research and Public Health 17, no. 9 (May 10, 2020): 3318. http://dx.doi.org/10.3390/ijerph17093318.

Abstract:
Active transport to or from school presents an opportunity for adolescents to engage in daily physical activity. Multiple factors influence whether adolescents actively travel to/from school. Creating safe walking and cycling routes to school is a promising strategy to increase rates of active transport. This article presents a comprehensive conceptual framework for modelling safe walking and cycling routes to high schools. The framework has been developed based on several existing relevant frameworks including (a) ecological models, (b) the “Five Es” (engineering, education, enforcement, encouragement, and evaluation) framework of transport planning, and (c) a travel mode choice framework for school travel. The framework identifies built environment features (land use mix, pedestrian/cycling infrastructure, neighbourhood aesthetics, and accessibility to local facilities) and traffic safety factors (traffic volume and speed, safe road crossings, and quality of path surface) to be considered when modelling safe walking/cycling routes to high schools. Future research should test this framework using real-world data in different geographical settings and with a combination of tools for the assessment of both macro-scale and micro-scale built environment features. To be effective, the modelling and creation of safe routes to high schools should be complemented by other interventions, including education, enforcement, and encouragement in order to minimise safety concerns and promote active transport.
14

Centonze, Paolina. "Security and Privacy Frameworks for Access Control Big Data Systems." Computers, Materials & Continua 59, no. 2 (2019): 361–74. http://dx.doi.org/10.32604/cmc.2019.06223.

15

Camarero, Mariam, Juan Sapena, and Cecilio Tamarit. "Modelling Time-Varying Parameters in Panel Data State-Space Frameworks: An Application to the Feldstein–Horioka Puzzle." Computational Economics 56, no. 1 (February 7, 2019): 87–114. http://dx.doi.org/10.1007/s10614-019-09879-x.

16

Sarhan, Jamil Ghazi, Bo Xia, Sabrina Fawzia, Azharul Karim, Ayokunle Olubunmi Olanipekun, and Vaughan Coffey. "Framework for the implementation of lean construction strategies using the interpretive structural modelling (ISM) technique." Engineering, Construction and Architectural Management 27, no. 1 (September 13, 2019): 1–23. http://dx.doi.org/10.1108/ecam-03-2018-0136.

Abstract:
Purpose – The purpose of this paper is to develop a framework for implementing lean construction and consequently to improve performance levels in the construction industry in the context of Saudi Arabia. There is currently no framework for implementing lean construction specifically tailored to the Kingdom of Saudi Arabia (KSA) construction industry. Existing lean construction frameworks are focussed on other countries and are less applicable in the KSA due to differences in socio-cultural and operational contexts.
Design/methodology/approach – This study employs the interpretive structural modelling (ISM) technique for data collection and analysis. First, following a survey of 282 construction professionals, 12 critical success factors (CSFs) for implementing lean construction in the KSA construction industry were identified by Sarhan et al. (2016). Second, 16 of these professionals who have 15 years or more experience were exclusively selected to examine the contextual relationship among the 12 CSFs. A row and column questionnaire was used for a pairwise comparison of the CSFs. A matrix of cross-impact multiplications (MICMAC) was applied to analyse the questionnaire data to develop an ISM model that can serve as a framework for implementing lean construction. Third, the framework was subjected to further validation by interviewing five experts to check for conceptual inconsistencies and to confirm the applicability of the framework in the context of the KSA construction industry.
Findings – The findings reveal that the CSFs are divided into four clusters: autonomous, linkage, dependent and driving clusters. Additionally, the findings reveal seven hierarchies of inter-relationships among the CSFs. The order of practical application of the CSFs descends from the seventh hierarchy to the first hierarchy.
Originality/value – The new framework is a significant advancement over existing lean construction frameworks as it employs an ISM technique to specify the hierarchical relationships among the different factors that contribute to the successful implementation of lean construction. The primary value of this study is the development of a new framework that reflects the socio-cultural and operational contexts in the KSA construction industry and can guide the successful implementation of lean construction. Therefore, construction industry operators such as contractors, consultants, government departments and professionals can rely on the framework to implement lean construction more effectively and successfully.
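The core ISM/MICMAC mechanics can be sketched compactly: take the pairwise influence matrix, form its transitive closure (the reachability matrix), then read off driving and dependence powers. The four-factor adjacency matrix below is a made-up example, not the paper's 12 CSFs:

```python
# Core ISM mechanics: transitive closure of the pairwise relation matrix
# (Warshall's algorithm), then MICMAC driving/dependence powers.
import numpy as np

A = np.array([[1, 1, 0, 0],     # A[i, j] = 1 if factor i influences factor j
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=bool)

R = A.copy()
for k in range(len(R)):          # Warshall: i reaches j if i reaches k and k reaches j
    R |= np.outer(R[:, k], R[k, :])

driving = R.sum(axis=1)          # row sums: how many factors each one reaches
dependence = R.sum(axis=0)       # column sums: how many factors reach it
for i, (dr, de) in enumerate(zip(driving, dependence), 1):
    print(f"factor {i}: driving power = {dr}, dependence power = {de}")
```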
17

Spence, Seymour M. J., and Ahsan Kareem. "Data-Enabled Design and Optimization (DEDOpt): Tall steel building frameworks." Computers & Structures 129 (December 2013): 134–47. http://dx.doi.org/10.1016/j.compstruc.2013.04.023.

18

Anagnostis, Athanasios, Serafeim Moustakidis, Elpiniki Papageorgiou, and Dionysis Bochtis. "A Hybrid Bimodal LSTM Architecture for Cascading Thermal Energy Storage Modelling." Energies 15, no. 6 (March 8, 2022): 1959. http://dx.doi.org/10.3390/en15061959.

Abstract:
Modelling of thermal energy storage (TES) systems is a complex process that requires the development of sophisticated computational tools for numerical simulation and optimization. Until recently, most modelling approaches relied on analytical methods based on equations of the physical processes that govern TES systems' operations, producing high-accuracy and interpretable results. The present study tackles the problem of modelling the temperature dynamics of a TES plant by exploring the advantages and limitations of an alternative data-driven approach. A hybrid bimodal LSTM (H2M-LSTM) architecture is proposed to model the temperature dynamics of different TES components, by utilizing multiple temperature readings in both forward and bidirectional fashion for fine-tuning the predictions. Initially, a selection of methods was employed to model the temperature dynamics of individual components of the TES system. Subsequently, a novel cascading modelling framework was realised to provide an integrated holistic modelling solution that takes into account the results of the individual modelling components. The cascading framework was built in a hierarchical structure that considers the interrelationships between the integrated energy components, leading to seamless modelling of the whole operation as a single system. The performance of the proposed H2M-LSTM was compared against a variety of well-known machine learning algorithms through an extensive experimental analysis. The efficacy of the proposed energy framework was demonstrated in comparison to the modelling performance of the individual components, by utilizing three prediction performance indicators. The findings of the present study offer: (i) insights on the low-error performance of tailor-made LSTM architectures fitting the TES modelling problem, (ii) deeper knowledge of the behaviour of integral energy frameworks operating in fine timescales and (iii) an alternative approach that enables the real-time or semi-real-time deployment of TES modelling tools, facilitating their use in real-world settings.
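The paper's exact H2M-LSTM architecture is not reproduced here, but its core idea of combining a forward LSTM branch with a bidirectional one can be sketched in Keras as follows; layer sizes, the merge and the input shape are assumptions:

```python
# Minimal two-branch sketch in the spirit of the hybrid bimodal LSTM: one
# forward LSTM branch and one bidirectional branch, merged for a temperature
# prediction. Not the paper's exact H2M-LSTM architecture.
import tensorflow as tf
from tensorflow.keras import layers, Model

n_steps, n_sensors = 48, 4                        # 48 past readings from 4 TES sensors

inp = layers.Input(shape=(n_steps, n_sensors))
fwd = layers.LSTM(32)(inp)                        # forward-in-time branch
bid = layers.Bidirectional(layers.LSTM(32))(inp)  # bidirectional branch
merged = layers.Concatenate()([fwd, bid])
out = layers.Dense(1)(merged)                     # next temperature reading

model = Model(inp, out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```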
19

Chadalawada, Jayashree, and Vladan Babovic. "Review and comparison of performance indices for automatic model induction." Journal of Hydroinformatics 21, no. 1 (December 6, 2017): 13–31. http://dx.doi.org/10.2166/hydro.2017.078.

Abstract:
One of the more perplexing challenges for the hydrologic research community is the need for development of coupled systems involving integration of hydrologic, atmospheric and socio-economic relationships. Given the demand for integrated modelling and the availability of enormous data with varying degrees of (un)certainty, data-driven, unified-theory, catchment-scale hydrological modelling frameworks are growing in popularity. Recent research focuses on representation of distinct hydrological processes using mathematical model components that vary in a controlled manner, thereby deriving relationships between alternative conceptual model constructs and catchments' behaviour. With increasing computational power, an evolutionary approach to auto-configuration of conceptual hydrological models is gaining importance. Its successful implementation depends on the choice of evolutionary algorithm, inventory of model components, numerical implementation, rules of operation and fitness functions. In this study, genetic programming is used as an example of an evolutionary algorithm that employs modelling decisions inspired by the Superflex framework to automatically induce optimal model configurations for the given catchment dataset. The main objective of this paper is to identify the effects of entropy, hydrological and statistical measures as optimization objectives on the performance of the proposed approach, based on two synthetic case studies of varying complexity.
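Two of the standard hydrological fitness functions such a study can compare are the Nash-Sutcliffe efficiency (NSE) and the Kling-Gupta efficiency (KGE); a small sketch of both on synthetic flows:

```python
# Two standard hydrological performance indices usable as fitness functions:
# Nash-Sutcliffe efficiency (NSE) and Kling-Gupta efficiency (KGE).
import numpy as np

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    r = np.corrcoef(obs, sim)[0, 1]           # linear correlation
    alpha = sim.std() / obs.std()             # variability ratio
    beta = sim.mean() / obs.mean()            # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 3.0, 200)
sim = obs * 0.95 + rng.normal(0, 1.0, 200)    # a decent but imperfect model
print(f"NSE = {nse(obs, sim):.3f}, KGE = {kge(obs, sim):.3f}")
```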
20

Pragathi, Y. V. S. Sai, M. V. S. Phani Narasimham, and B. V. Ramana Murthy. "Analysis and implementation of realtime stock prediction using reinforcement frameworks." Journal of Physics: Conference Series 2089, no. 1 (November 1, 2021): 012045. http://dx.doi.org/10.1088/1742-6596/2089/1/012045.

Abstract:
Real-time stock prediction is an interesting research topic due to the risk involved in volatile scenarios. Modelling stocks while reducing the overestimation in the ANN model caused by rapid market fluctuations guides fund managers' risky decisions when building a stock portfolio. This paper builds a real-time framework for stock prediction using deep reinforcement learning to buy, sell or hold stocks. The paper models the transformed stock tick data and technical indicators using Transformed Deep-Q Learning. Our framework is cost-reduced and transaction-time-optimised to deliver real-time stock prediction using GPU and memory containers. The stock predictor is architected using a gRPC-based clean architecture, which has the benefits of easy updates and the addition of new services at reduced integration cost. The data-archive features of the cloud give the benefit of reduced cost for the new stock predictor framework.
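The tabular Q-learning update that deep Q-learning generalises with a network gives the flavour of the buy/sell/hold decision loop; the states, transitions and rewards below are toys, not the paper's tick-data features:

```python
# The tabular Q-learning update that deep Q-learning approximates with a
# network: Q(s,a) += lr * (reward + gamma * max_a' Q(s',a') - Q(s,a)).
import numpy as np

ACTIONS = ["buy", "hold", "sell"]
n_states = 10                                  # e.g. discretised indicator bins
Q = np.zeros((n_states, len(ACTIONS)))
lr, gamma = 0.1, 0.99

rng = np.random.default_rng(4)
state = 0
for step in range(10_000):
    # epsilon-greedy action selection
    a = rng.integers(3) if rng.random() < 0.1 else int(Q[state].argmax())
    next_state = rng.integers(n_states)        # toy market transition
    reward = rng.normal(0.01 if ACTIONS[a] == "buy" else 0.0, 1.0)
    Q[state, a] += lr * (reward + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

print(Q.round(2))
```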
21

Kurtz, W., G. He, S. Kollet, R. Maxwell, H. Vereecken, and H. J. Hendricks Franssen. "TerrSysMP-PDAF (version 1.0): a modular high-performance data assimilation framework for an integrated land surface–subsurface model." Geoscientific Model Development Discussions 8, no. 11 (November 3, 2015): 9617–68. http://dx.doi.org/10.5194/gmdd-8-9617-2015.

Abstract:
Modelling of terrestrial systems is continuously moving towards more integrated modelling approaches where different terrestrial compartment models are combined in order to realise a more sophisticated physical description of water, energy and carbon fluxes across compartment boundaries and to provide a more integrated view on terrestrial processes. While such models can effectively reduce certain parameterization errors of single compartment models, model predictions are still prone to uncertainties regarding model input variables. The resulting uncertainties of model predictions can be effectively tackled by data assimilation techniques which allow one to correct model predictions with observations, taking into account both the model and measurement uncertainties. The steadily increasing availability of computational resources makes it now increasingly possible to perform data assimilation also for computationally highly demanding integrated terrestrial system models. However, as the computational burden for integrated models as well as data assimilation techniques is quite large, there is an increasing need to provide computationally efficient data assimilation frameworks for integrated models that allow one to run on and to make efficient use of massively parallel computational resources. In this paper we present a data assimilation framework for the land surface–subsurface part of the Terrestrial System Modelling Platform TerrSysMP. TerrSysMP is connected via a memory-based coupling approach with the pre-existing parallel data assimilation library PDAF (Parallel Data Assimilation Framework). This framework provides a fully parallel modular environment for performing data assimilation for the land surface and the subsurface compartment. A simple synthetic case study for a land surface–subsurface system (0.8 million unknowns) is used to demonstrate the effects of data assimilation in the integrated model TerrSysMP and to assess the scaling behaviour of the data assimilation system. Results show that data assimilation effectively corrects model states and parameters of the integrated model towards the reference values. Scaling tests provide evidence that the data assimilation system for TerrSysMP can make efficient use of parallel computational resources for > 30 k processors. Simulations with a large problem size (20 million unknowns) for the forward model were also efficiently handled by the data assimilation system. The proposed data assimilation framework is useful in simulating and estimating uncertainties in predicted states and fluxes of the terrestrial system over large spatial scales at high resolution utilizing integrated models.
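The analysis step applied by ensemble filters such as those PDAF provides can be sketched in a few lines of numpy; this is the generic stochastic EnKF update on a toy problem, not PDAF's implementation (the paper's problems run to 0.8-20 million unknowns):

```python
# Generic stochastic ensemble Kalman filter analysis step: correct each
# forecast ensemble member with perturbed observations. Tiny dimensions.
import numpy as np

rng = np.random.default_rng(5)
n_state, n_obs, n_ens = 50, 5, 32

X = rng.normal(1.0, 0.5, (n_state, n_ens))          # forecast ensemble (columns)
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(n_obs) * 10] = 1.0    # observe every 10th state variable
R = 0.1 * np.eye(n_obs)                             # observation error covariance
y = rng.normal(1.2, 0.1, n_obs)                     # observations

Xm = X.mean(axis=1, keepdims=True)
Xp = X - Xm                                         # ensemble perturbations
P_HT = Xp @ (H @ Xp).T / (n_ens - 1)                # cross-covariance P H^T
S = (H @ Xp) @ (H @ Xp).T / (n_ens - 1) + R         # innovation covariance
K = P_HT @ np.linalg.inv(S)                         # Kalman gain

Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
Xa = X + K @ (Y - H @ X)                            # analysis ensemble
print("mean shift:", float((Xa - X).mean()))
```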
22

Waldherr, Steffen. "Estimation methods for heterogeneous cell population models in systems biology." Journal of The Royal Society Interface 15, no. 147 (October 2018): 20180530. http://dx.doi.org/10.1098/rsif.2018.0530.

Abstract:
Heterogeneity among individual cells is a characteristic and relevant feature of living systems. A range of experimental techniques to investigate this heterogeneity is available, and multiple modelling frameworks have been developed to describe and simulate the dynamics of heterogeneous populations. Measurement data are used to adjust computational models, which results in parameter and state estimation problems. Methods to solve these estimation problems need to take the specific properties of data and models into account. The aim of this review is to give an overview on the state of the art in estimation methods for heterogeneous cell population data and models. The focus is on models based on the population balance equation, but stochastic and individual-based models are also discussed. It starts with a brief discussion of common experimental approaches and types of measurement data that can be obtained in this context. The second part describes computational modelling frameworks for heterogeneous populations and the types of estimation problems occurring for these models. The third part starts with a discussion of observability and identifiability properties, after which the computational methods to solve the various estimation problems are described.
23

Mittal, Shruti, and Anubhav Chauhan. "A RNN-LSTM-Based Predictive Modelling Framework for Stock Market Prediction Using Technical Indicators." International Journal of Rough Sets and Data Analysis 7, no. 1 (January 2021): 1–13. http://dx.doi.org/10.4018/ijrsda.288521.

Abstract:
The successful prediction of stocks' future prices would produce substantial profit for the investor. In this paper, we propose a framework that uses various technical indicators of the stock market to predict the future prices of a stock using the Recurrent Neural Network based Long Short-Term Memory (LSTM) algorithm. The historical transactional data set is amalgamated with the technical indicators to create a more effective input dataset. The historical data covers 2010–2019, ten years in total. The dataset is divided into an 80% training set and a 20% test set. The experiment is carried out in two phases, first without the technical indicators and then after adding them. In the experimental setup, it has been observed that the LSTM with technical indicators significantly reduced the error value by 2.42% and improved the overall performance of the system as compared to other machine learning frameworks that do not account for the effect of technical indicators.
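Two common technical indicators of the kind amalgamated with the price history, a simple moving average and the relative strength index, computed with pandas; the paper's exact indicator list is not reproduced here:

```python
# A 20-day simple moving average and a 14-day RSI, two common technical
# indicators, computed with pandas on a simulated price series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
close = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)), name="close")

sma20 = close.rolling(20).mean()               # 20-day simple moving average

delta = close.diff()
gain = delta.clip(lower=0).rolling(14).mean()  # average gain over 14 days
loss = (-delta.clip(upper=0)).rolling(14).mean()
rsi = 100 - 100 / (1 + gain / loss)

features = pd.DataFrame({"close": close, "sma20": sma20, "rsi14": rsi}).dropna()
print(features.tail())
```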
24

Soni, Gunjan, and Rambabu Kodali. "Path analysis for proposed framework of SCM excellence in Indian manufacturing industry." Journal of Manufacturing Technology Management 27, no. 4 (May 3, 2016): 577–611. http://dx.doi.org/10.1108/jmtm-08-2015-0059.

Abstract:
Purpose – Several authors in the extant literature have shown concern about the lacuna in the availability of standard constructs in supply chain management (SCM). These standard constructs can represent pillars of SCM excellence. However, frameworks on SCM excellence, unlike those in its contemporary fields, are very few. Thus the purpose of this paper is to develop a path analysis for the framework of SCM excellence in the Indian manufacturing industry proposed by Soni and Kodali (2014), using interpretive structural modelling (ISM) and structural equation modelling (SEM).
Design/methodology/approach – The ISM is performed on two exemplary cases of supply chain in the Indian manufacturing industry. These cases were selected on the consideration of the supply chain excellence index (SCEI), based on the results of an empirical study conducted by Soni and Kodali (2014) in the Indian manufacturing industry. The focal manufacturing companies which exhibited the lowest and highest SCEI were selected as contenders for developing the ISM. The relationships among pillars and constructs of the SCM excellence framework are obtained from the ISM, and later are subjected to statistical testing of model fit by using SEM. The input to SEM was the respondents' data used in the previous study.
Findings – The major findings revealed that the ISM based on the focal company having the highest SCEI is statistically fit for the SCM excellence framework, and the structural models of the constructs for each pillar of SCM excellence are also formed by using path analysis.
Originality/value – The study offers a unique managerial approach for analysing the underlying relationships between pillars of SCM excellence. Researchers can use this study for developing frameworks in various realms of SCM excellence.
25

Kurzeja, Patrick. "The criterion of subscale sufficiency and its application to the relationship between static capillary pressure, saturation and interfacial areas." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 472, no. 2189 (May 2016): 20150869. http://dx.doi.org/10.1098/rspa.2015.0869.

Abstract:
Modern imaging techniques, increased simulation capabilities and extended theoretical frameworks naturally drive the development of multiscale modelling by the question: which new information should be considered? Given the need for concise constitutive relationships and efficient data evaluation, however, one important question is often neglected: which information is sufficient? For this reason, this work introduces the formalized criterion of subscale sufficiency. This criterion states whether a chosen constitutive relationship transfers all necessary information from micro to macroscale within a multiscale framework. It further provides a scheme to improve constitutive relationships. Direct application to static capillary pressure demonstrates usefulness and conditions for subscale sufficiency of saturation and interfacial areas.
26

Kurtz, Wolfgang, Guowei He, Stefan J. Kollet, Reed M. Maxwell, Harry Vereecken, and Harrie-Jan Hendricks Franssen. "TerrSysMP–PDAF (version 1.0): a modular high-performance data assimilation framework for an integrated land surface–subsurface model." Geoscientific Model Development 9, no. 4 (April 11, 2016): 1341–60. http://dx.doi.org/10.5194/gmd-9-1341-2016.

Abstract:
Abstract. Modelling of terrestrial systems is continuously moving towards more integrated modelling approaches, where different terrestrial compartment models are combined in order to realise a more sophisticated physical description of water, energy and carbon fluxes across compartment boundaries and to provide a more integrated view on terrestrial processes. While such models can effectively reduce certain parameterisation errors of single compartment models, model predictions are still prone to uncertainties regarding model input variables. The resulting uncertainties of model predictions can be effectively tackled by data assimilation techniques, which allow one to correct model predictions with observations taking into account both the model and measurement uncertainties. The steadily increasing availability of computational resources makes it now increasingly possible to perform data assimilation also for computationally highly demanding integrated terrestrial system models. However, as the computational burden for integrated models as well as data assimilation techniques is quite large, there is an increasing need to provide computationally efficient data assimilation frameworks for integrated models that allow one to run on and to make efficient use of massively parallel computational resources. In this paper we present a data assimilation framework for the land surface–subsurface part of the Terrestrial System Modelling Platform (TerrSysMP). TerrSysMP is connected via a memory-based coupling approach with the pre-existing parallel data assimilation library PDAF (Parallel Data Assimilation Framework). This framework provides a fully parallel modular environment for performing data assimilation for the land surface and the subsurface compartment. A simple synthetic case study for a land surface–subsurface system (0.8 million unknowns) is used to demonstrate the effects of data assimilation in the integrated model TerrSysMP and to assess the scaling behaviour of the data assimilation system. Results show that data assimilation effectively corrects model states and parameters of the integrated model towards the reference values. Scaling tests provide evidence that the data assimilation system for TerrSysMP can make efficient use of parallel computational resources for > 30 k processors. Simulations with a large problem size (20 million unknowns) for the forward model were also efficiently handled by the data assimilation system. The proposed data assimilation framework is useful in simulating and estimating uncertainties in predicted states and fluxes of the terrestrial system over large spatial scales at high resolution utilising integrated models.
27

Raymer, James, Phil Rees, and Ann Blake. "Frameworks for Guiding the Development and Improvement of Population Statistics in the United Kingdom." Journal of Official Statistics 31, no. 4 (December 1, 2015): 699–722. http://dx.doi.org/10.1515/jos-2015-0041.

Abstract:
Abstract The article presents central frameworks for guiding the development and improvement of population statistics. A shared understanding between producers and users of statistics is needed with regard to the concepts, data, processes, and outputs produced. In the United Kingdom, population estimates are produced by conducting decennial censuses and by estimating intercensus populations through the addition and subtraction of the demographic components of change derived from registers of vital events and from a combination of administrative data and surveys for internal and international migration. In addition, data cleaning, imputation, and modelling may be required to produce the desired population statistics. The frameworks presented in this paper are useful for aligning the required concepts of population statistics with the various sources of available data. Taken together, they provide a general ‘recipe’ for the continued improvement and expansion of official statistics on population and demographic change.
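The intercensal accounting the article describes reduces to the demographic balancing identity P(t+1) = P(t) + births - deaths + net migration; a worked toy example (the figures are illustrative, not UK statistics):

```python
# The demographic accounting identity behind intercensal estimates:
# P(t+1) = P(t) + births - deaths + net internal + net international migration.
population = 1_000_000                 # census base
components = [
    # (births, deaths, net_internal, net_international) per year
    (12_400, 9_100, -1_500, 3_200),
    (12_100, 9_300, -1_200, 2_900),
    (11_900, 9_400, -1_000, 3_400),
]
for year, (b, d, mi, mx) in enumerate(components, start=1):
    population += b - d + mi + mx
    print(f"year {year}: estimated population = {population:,}")
```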
28

Marcek, Dusan. "Some statistical and CI models to predict chaotic high-frequency financial data." Journal of Intelligent & Fuzzy Systems 39, no. 5 (November 19, 2020): 6419–30. http://dx.doi.org/10.3233/jifs-189107.

Abstract:
To forecast time series data, two methodological frameworks of statistical and computational intelligence modelling are considered. The statistical methodological approach is based on the theory of invertible ARIMA (Auto-Regressive Integrated Moving Average) models with the Maximum Likelihood (ML) estimating method. As a competitive tool to statistical forecasting models, we use the popular classic neural network (NN) of perceptron type. To train the NN, the Back-Propagation (BP) algorithm and heuristics like the genetic and micro-genetic algorithms (GA and MGA) are implemented on the large data set. A comparative analysis of selected learning methods is performed and evaluated. From the experiments performed, we find that the optimal population size is likely to be 20, with the lowest training time of all NNs trained by the evolutionary algorithms, while the prediction accuracy level is lower but still acceptable to managers.
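The statistical branch of such a comparison, an ARIMA model estimated by maximum likelihood, can be sketched with statsmodels on a simulated series:

```python
# Fitting an ARIMA model by maximum likelihood with statsmodels; the series
# here is simulated, not the paper's high-frequency financial data.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
e = rng.normal(0, 1, 600)
y = np.zeros(600)
for t in range(1, 600):                       # simulate an ARMA(1,1) process
    y[t] = 0.6 * y[t - 1] + e[t] + 0.3 * e[t - 1]

res = ARIMA(y, order=(1, 0, 1)).fit()         # ML estimation by default
print(res.params)                              # AR, MA and variance estimates
print(res.forecast(steps=5))                   # out-of-sample predictions
```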
29

Reklaite, Agne. "Globalisation effect measure via hierarchical dynamic factor modelling." Equilibrium 10, no. 3 (September 30, 2015): 139. http://dx.doi.org/10.12775/equil.2015.029.

Abstract:
In this paper the issue of globalisation and the deteriorating precision of domestically oriented frameworks is addressed. A hypothesis that the effect of international trends on the growth of the economy is increasing over time is formed. In order to validate this, a method of composing foreign series with local indicators in a hierarchical dynamic factor model is presented. The novelty of this approach is that the globalisation effect is measured focusing on prediction rather than similarity. This way, the measure represents the country's sensitivity to global shocks and reveals how much the focal country's economy is intertwined with the global economy. The application was performed on the basis of Lithuanian data and the hypothesis was validated. The results indicate that the globalisation effect increases over time.
30

Ismail, Mohamed, and Milica Orlandić. "Segment-Based Clustering of Hyperspectral Images Using Tree-Based Data Partitioning Structures." Algorithms 13, no. 12 (December 10, 2020): 330. http://dx.doi.org/10.3390/a13120330.

Abstract:
Hyperspectral image classification has been increasingly used in the field of remote sensing. In this study, a new clustering framework for large-scale hyperspectral image (HSI) classification is proposed. The proposed four-step classification scheme explores how to effectively use the global spectral information and local spatial structure of hyperspectral data for HSI classification. Initially, multidimensional Watershed is used for pre-segmentation. Region-based hierarchical hyperspectral image segmentation is based on the construction of binary partition trees (BPT). Each segmented region is modelled using first-order parametric modelling, followed by a region-merging stage that uses HSI regional spectral properties in order to obtain a BPT representation. The tree is then pruned to obtain a more compact representation. In addition, principal component analysis (PCA) is utilized for HSI feature extraction, so that the extracted features are further incorporated into the BPT. Finally, an efficient variant of the k-means clustering algorithm, called the filtering algorithm, is deployed on the created BPT structure, producing the final cluster map. The proposed method is tested over eight publicly available hyperspectral scenes with ground truth data and is further compared with other clustering frameworks. The extensive experimental analysis demonstrates the efficacy of the proposed method.
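Two generic stages of this pipeline, PCA feature extraction followed by k-means, sketched with scikit-learn; the BPT construction and the filtering variant of k-means are not reproduced, and the "image" is random stand-in data:

```python
# PCA feature extraction followed by k-means clustering on a stand-in
# hyperspectral cube; not the paper's BPT-based filtering algorithm.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
h, w, bands = 64, 64, 100
cube = rng.normal(size=(h, w, bands))          # stand-in hyperspectral cube

pixels = cube.reshape(-1, bands)               # one spectrum per row
feats = PCA(n_components=10).fit_transform(pixels)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(feats)

cluster_map = labels.reshape(h, w)             # final cluster map
print(np.bincount(labels))
```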
31

Leoni, Leonardo, Alessandra Cantini, Farshad BahooToroody, Saeed Khalaj, Filippo De Carlo, Mohammad Mahdi Abaei, and Ahmad BahooToroody. "Reliability Estimation under Scarcity of Data: A Comparison of Three Approaches." Mathematical Problems in Engineering 2021 (March 19, 2021): 1–15. http://dx.doi.org/10.1155/2021/5592325.

Abstract:
During the last decades, the optimization of the maintenance plan in process plants has lured the attention of many researchers due to its vital role in assuring the safety of operations. Within the process of scheduling maintenance activities, one of the most significant challenges is estimating the reliability of the involved systems, especially in case of data scarcity. Overestimating the average time between two consecutive failures of an individual component could compromise safety, while an underestimate leads to an increase of operational costs. Thus, a reliable tool able to determine the parameters of failure modelling with high accuracy when few data are available would be welcome. For this purpose, this paper aims at comparing the implementation of three practical estimation frameworks in case of sparse data to point out the most efficient approach. Hierarchical Bayesian modelling (HBM), maximum likelihood estimation (MLE), and least square estimation (LSE) are applied on data generated by a simulated stochastic process of a natural gas regulating and metering station (NGRMS), which was adopted as a case of study. The results identify the Bayesian methodology as the most accurate for predicting the failure rate of the considered devices, especially for the equipment characterized by less data available. The outcomes of this research will assist maintenance engineers and asset managers in choosing the optimal approach to conduct reliability analysis either when sufficient data or limited data are observed.
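The basic contrast between MLE and a Bayesian estimate under data scarcity shows up already for an exponential failure rate with a conjugate Gamma prior; a sketch (the paper's hierarchical Bayesian model is richer, and the prior here is assumed):

```python
# Exponential failure-rate estimation under scarce data: the MLE versus a
# conjugate Gamma-prior Bayesian posterior mean.
import numpy as np

times = np.array([410.0, 1250.0, 880.0])      # few observed times-to-failure [h]

lambda_mle = len(times) / times.sum()          # MLE for an exponential rate

a0, b0 = 2.0, 2000.0                           # Gamma(a0, b0) prior (assumed)
a_post = a0 + len(times)                       # conjugate update
b_post = b0 + times.sum()
lambda_bayes = a_post / b_post                 # posterior mean

print(f"MLE      : {lambda_mle:.5f} failures/h")
print(f"Bayesian : {lambda_bayes:.5f} failures/h (prior-stabilised)")
```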
32

Li, Shouwei. "Does Diversification Affect Banking Systemic Risk?" Discrete Dynamics in Nature and Society 2016 (2016): 1–5. http://dx.doi.org/10.1155/2016/2967830.

Abstract:
This paper contributes to the understanding of the linear and nonlinear causal linkage from diversification to banking systemic risk. Employing data from China, within both linear and nonlinear causality frameworks, we find that diversification does not embody significant predictive power with respect to banking systemic risk.
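A linear Granger-causality test of the kind used for such linkage analyses, run with statsmodels on synthetic series (not the paper's China data):

```python
# Linear Granger-causality test: does diversification help predict systemic
# risk beyond risk's own history? Synthetic series, statsmodels.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(9)
n = 400
diversification = rng.normal(size=n)
risk = np.zeros(n)
for t in range(2, n):                          # risk depends on lagged diversification
    risk[t] = 0.4 * risk[t - 1] - 0.3 * diversification[t - 2] + rng.normal()

# Column order matters: tests whether the 2nd column Granger-causes the 1st.
data = pd.DataFrame({"risk": risk, "div": diversification})
grangercausalitytests(data[["risk", "div"]], maxlag=3)
```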
33

Craninx, Michel, Koen Hilgersom, Jef Dams, Guido Vaes, Thomas Danckaert, and Jan Bronders. "Flood4castRTF: A Real-Time Urban Flood Forecasting Model." Sustainability 13, no. 10 (May 18, 2021): 5651. http://dx.doi.org/10.3390/su13105651.

Abstract:
Worldwide, climate change increases the frequency and intensity of heavy rainstorms. The increasing severity of consequent floods has major socio-economic impacts, especially in urban environments. Urban flood modelling supports the assessment of these impacts, both in current climate conditions and for forecasted climate change scenarios. Over the past decade, model frameworks that allow flood modelling in real-time have been gaining widespread popularity. Flood4castRTF is a novel urban flood model that applies a grid-based approach at a modelling scale coarser than most recent detailed physically based models. Automatic model set-up based on commonly available GIS data facilitates quick model building in contrast with detailed physically based models. The coarser grid scale applied in Flood4castRTF pursues a better agreement with the resolution of the forcing rainfall data and allows speeding up of the calculations. The modelling approach conceptualises cell-to-cell interactions while at the same time maintaining relevant and interpretable physical descriptions of flow drivers and resistances. A case study comparison of Flood4castRTF results with flood results from two detailed models shows that detailed models do not necessarily outperform the accuracy of Flood4castRTF with flooded areas in-between the two detailed models. A successful model application for a high climate change scenario is demonstrated. The reduced data need, consisting mainly of widely available data, makes the presented modelling approach applicable in data scarce regions with no terrain inventories. Moreover, the method is cost effective for applications which do not require detailed physically based modelling.
APA, Harvard, Vancouver, ISO, and other styles
34

Awuzie, Bankole Osita, and Amal Abuzeinab. "Modelling Organisational Factors Influencing Sustainable Development Implementation Performance in Higher Education Institutions: An Interpretative Structural Modelling (ISM) Approach." Sustainability 11, no. 16 (August 9, 2019): 4312. http://dx.doi.org/10.3390/su11164312.

Full text
Abstract:
Globally, higher education institutions (HEIs) have continued to record varied sustainable development (SD) implementation performances. This variance has been attributed to the presence of certain organisational factors. Whereas previous studies have successfully identified the factors influencing SD implementation performance in HEIs, few studies have attempted to explore the relationships between these factors and the influence of such relationships on the management of SD implementation in HEIs; this is the objective of the present study. Understandably, knowledge of such relationships will facilitate the development of appropriate frameworks for managing SD implementation in HEIs. Relying on a case study of a South African University of Technology (SAUoT), this study elicits data through a focus group discussion session. An interpretative structural modelling (ISM) focus group protocol indicating extant pair-wise relationships between the identified organisational factor categories was extensively discussed. The emergent data were recorded, transcribed verbatim and subsequently analysed. The findings suggest that communication was critical to the prevalence of the other factors, indicating its centrality to the effective management of SD implementation in HEIs. These findings will guide implementing agents in HEIs towards developing appropriate mechanisms for communicating SD implementation strategies.
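The core computation behind ISM is small enough to sketch: pair-wise influence judgements form a binary adjacency matrix, and repeated Boolean multiplication yields the reachability matrix from which factor levels are derived. The four-factor matrix below is hypothetical, not the study's data.

import numpy as np

A = np.array([[0, 1, 0, 0],                  # 1 = "factor i influences factor j"
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

R = (A + np.eye(4, dtype=int)) > 0           # include self-reachability
while True:
    R_next = (R.astype(int) @ R.astype(int)) > 0   # Boolean matrix product
    if (R_next == R).all():
        break                                # transitive closure reached
    R = R_next

print(R.astype(int))                         # final reachability matrix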
APA, Harvard, Vancouver, ISO, and other styles
35

Wilson, John R. U., Sven Bacher, Curtis C. Daehler, Quentin J. Groom, Sabrina Kumschick, Julie L. Lockwood, Tamara B. Robinson, Tsungai A. Zengeya, and David M. Richardson. "Frameworks used in invasion science: progress and prospects." NeoBiota 62 (October 15, 2020): 1–30. http://dx.doi.org/10.3897/neobiota.62.58738.

Full text
Abstract:
Our understanding and management of biological invasions relies on our ability to classify and conceptualise the phenomenon. This need has stimulated the development of a plethora of frameworks, ranging in nature from conceptual to applied. However, most of these frameworks have not been widely tested and their general applicability is unknown. In order to critically evaluate frameworks in invasion science, we held a workshop on ‘Frameworks used in Invasion Science’ hosted by the DSI-NRF Centre of Excellence for Invasion Biology in Stellenbosch, South Africa, in November 2019, which led to this special issue. For the purpose of the workshop we defined a framework as “a way of organising things that can be easily communicated to allow for shared understanding or that can be implemented to allow for generalisations useful for research, policy or management”. Further, we developed the Stellenbosch Challenge for Invasion Science: “Can invasion science develop and improve frameworks that are useful for research, policy or management, and that are clear as to the contexts in which the frameworks do and do not apply?”. Particular considerations identified among meeting participants included the need to identify the limitations of a framework, specify how frameworks link to each other and broader issues, and to improve how frameworks can facilitate communication. We believe that the 24 papers in this special issue do much to meet this challenge. The papers apply existing frameworks to new data and contexts, review how the frameworks have been adopted and used, develop useable protocols and guidelines for applying frameworks to different contexts, refine the frameworks in light of experience, integrate frameworks for new purposes, identify gaps, and develop new frameworks to address issues that are currently not adequately dealt with. Frameworks in invasion science must continue to be developed, tested as broadly as possible, revised, and retired as contexts and needs change. However, frameworks dealing with pathways of introduction, progress along the introduction-naturalisation-invasion continuum, and the assessment of impacts are being increasingly formalised and set as standards. This, we argue, is an important step as invasion science starts to mature as a discipline.
APA, Harvard, Vancouver, ISO, and other styles
36

Kumar, Saurabh, Jitendra Kumar, Vikas Kumar Sharma, and Varun Agiwal. "Random order autoregressive time series model with structural break." Model Assisted Statistics and Applications 15, no. 3 (October 9, 2020): 225–37. http://dx.doi.org/10.3233/mas-200490.

Full text
Abstract:
This paper deals with the problem of modelling time series data with structural breaks occurring at multiple time points, which may result in a different model order after every structural break. A flexible and generalized class of autoregressive (AR) models with multiple structural breaks is proposed for modelling such situations. Estimation of the model parameters is discussed in both classical and Bayesian frameworks. Since the joint posterior of the parameters is not analytically tractable, we employ a Markov chain Monte Carlo method, Gibbs sampling, to simulate the posterior sample. To verify the order change, a hypothesis test is constructed using posterior probability and compared with the model without breaks. The proposed methodologies are illustrated by means of a simulation study and a real data analysis.
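A stripped-down version of the setting can be simulated in a few lines: an AR process whose order changes at a break point, with each segment then fitted separately. The paper treats the break points and orders as unknown (via Gibbs sampling); here the break is fixed to keep the sketch short.

import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
n, brk = 400, 200
y = np.zeros(n)
for t in range(2, n):
    if t < brk:                              # AR(1) regime before the break
        y[t] = 0.8 * y[t - 1] + rng.normal()
    else:                                    # AR(2) regime after the break
        y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

fit1 = AutoReg(y[:brk], lags=1).fit()        # segment-wise estimation
fit2 = AutoReg(y[brk:], lags=2).fit()
print(fit1.params, fit2.params)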
APA, Harvard, Vancouver, ISO, and other styles
37

Sigauke, Caston, Murendeni Nemukula, and Daniel Maposa. "Probabilistic Hourly Load Forecasting Using Additive Quantile Regression Models." Energies 11, no. 9 (August 23, 2018): 2208. http://dx.doi.org/10.3390/en11092208.

Full text
Abstract:
Short-term hourly load forecasting in South Africa using additive quantile regression (AQR) models is discussed in this study. The modelling approach allows for easy interpretability and accounts for residual autocorrelation in the joint modelling of hourly electricity data. A comparative analysis is done using generalised additive models (GAMs). In both modelling frameworks, variable selection is done using the least absolute shrinkage and selection operator (Lasso) via hierarchical interactions. The four models considered are GAMs and AQR models, each with and without interactions. The AQR model with pairwise interactions was found to be the best fitting model. The forecasts from the four models were then combined using an algorithm based on the pinball loss (convex combination model) and also using quantile regression averaging (QRA). The AQR model with interactions was then compared with the convex combination and QRA models, and the QRA model gave the most accurate forecasts. Except for the AQR model with interactions, the other two models (the convex combination model and the QRA model) gave prediction interval coverage probabilities that were valid for the 90%, 95% and 99% prediction intervals. The QRA model had the smallest prediction interval normalised average width and prediction interval normalised average deviation. The modelling framework discussed in this paper establishes that going beyond summary performance statistics in forecasting has merit, as it gives more insight into the developed forecasting models.
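The pinball loss at the heart of the combination algorithm has a compact standard definition, sketched below in Python (this is the generic formula, not the paper's code).

import numpy as np

def pinball_loss(y, q_hat, tau):
    # Average pinball loss of quantile forecasts q_hat at level tau:
    # tau * (y - q) when the observation exceeds the quantile, else (1 - tau) * (q - y).
    diff = y - q_hat
    return np.mean(np.where(diff >= 0, tau * diff, (tau - 1) * diff))

y = np.array([100.0, 120.0, 90.0])           # observed hourly loads (illustrative)
q_hat = np.array([95.0, 125.0, 88.0])        # forecast 0.5-quantiles
print(pinball_loss(y, q_hat, tau=0.5))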
APA, Harvard, Vancouver, ISO, and other styles
38

McIntyre, Neil, Caroline Ballard, Michael Bruen, Nataliya Bulygina, Wouter Buytaert, Ian Cluckie, Sarah Dunn, et al. "Modelling the hydrological impacts of rural land use change." Hydrology Research 45, no. 6 (March 27, 2013): 737–54. http://dx.doi.org/10.2166/nh.2013.145.

Full text
Abstract:
The potential role of rural land use in mitigating flood risk and protecting water supplies continues to be of great interest to regulators and planners. The ability of hydrologists to quantify the impact of rural land use change on the water cycle is however limited and we are not able to provide consistently reliable evidence to support planning and policy decisions. This shortcoming stems mainly from lack of data, but also from lack of modelling methods and tools. Numerous research projects over the last few years have been attempting to address the underlying challenges. This paper describes these challenges, significant areas of progress and modelling innovations, and proposes priorities for further research. The paper is organised into five inter-related subtopics: (1) evidence-based modelling; (2) upscaling to maximise the use of process knowledge and physics-based models; (3) representing hydrological connectivity in models; (4) uncertainty analysis; and (5) integrated catchment modelling for ecosystem service management. It is concluded that there is room for further advances in hydrological data analysis, sensitivity and uncertainty analysis methods and modelling frameworks, but progress will also depend on continuing and strengthened commitment to long-term monitoring and inter-disciplinarity in defining and delivering land use impacts research.
APA, Harvard, Vancouver, ISO, and other styles
39

Vohra, Anupama, and Neha Bhardwaj. "Customer engagement in an e-commerce brand community." Journal of Research in Interactive Marketing 13, no. 1 (March 11, 2019): 2–25. http://dx.doi.org/10.1108/jrim-01-2018-0003.

Full text
Abstract:
Purpose – The purpose of this study is to outline a conceptual framework for customer engagement in the context of social media for emerging markets. Three competing models of customer engagement were identified and tested to arrive at the best suited model for the given contexts. The alternative conceptual frameworks involve the constructs of active participation, community trust and community commitment in relation to customer engagement. Design/methodology/approach – Data were collected using questionnaires sent via e-mail to respondents. Structural equation modelling was then used to arrive at the best suited model, while also empirically testing for the relationships among the constructs. Findings – The study, by way of an empirical comparison of alternative conceptual frameworks, presents a customer engagement framework best suiting the social media context for emerging markets. The study also outlines active participation, community trust and community commitment as antecedents to customer engagement. Further, active participation is identified as a necessary antecedent to customer engagement based on the comparative assessment of the frameworks. Research limitations/implications – While there is not much consensus on the nature of customer engagement, the study offers insights to marketers in terms of managing customer engagement with their brand communities. The study identifies the role and importance of inducing active participation in a brand community context. Further, it also identifies community trust and community commitment as antecedents to customer engagement, with commitment playing a more pronounced role in the framework. Originality/value – There is no consensus among researchers regarding the nomological network surrounding customer engagement. Further, very few of these studies have focussed on this construct in the context of emerging markets. This study thus attempts to close the above gap by testing alternative conceptual frameworks involving customer engagement, in the context of social media for emerging markets.
APA, Harvard, Vancouver, ISO, and other styles
40

Sembiring, Maximus Gorky. "Modelling the notions and dimensions of MOOCs." Asian Association of Open Universities Journal 13, no. 1 (March 5, 2018): 100–114. http://dx.doi.org/10.1108/aaouj-01-2018-0007.

Full text
Abstract:
Purpose – This report explored enriched notions and dimensions of quality massive open online courses (QMOOCs). The purpose of this paper is to visualize the quality measures adjacent to MOOCs and to understand distinctive outlooks for approaching them. It was also of interest to envisage how, and in what routines, those notions and dimensions interrelate. Design/methodology/approach – An exploratory design was employed to qualitatively establish conceptual and operational frameworks first, through review processes and focus-group discussions. QMOOCs were reflected by four dimensions: scientifically provable, technically feasible, economically beneficial and socio-culturally adaptable. Besides, QMOOCs involved six notions (6P: presage, process, product, practicability, prospective and power) and affected knowledge, skills and professionalism (KSP). Quantitatively, QMOOCs, 6P and KSP were the moderating, independent and dependent variables, respectively. Associated data were accumulated through a survey distributing 600 questionnaires randomly to 708 Universitas Terbuka faculty members; 299 were completed. Findings – Nine hypotheses were scrutinized utilizing a structural equation model and eight were validated by the analysis. It was statistically inferred that product was the prime notion of QMOOCs, followed by process, practicability, presage and power; prospective was excluded. Professionalism, knowledge and skill were influenced by QMOOCs. Importance-performance analysis (IPA) and a customer-satisfaction index were applied to quantify respondents' opinions and the relevance degree of the engaged notions and dimensions. The IPA revealed four prominent notions (corresponding, functional, well-defined and learner-focused) and one dimension (technically feasible). Originality/value – The qualitative framework was imperfectly confirmed by the quantitative results. Further inquiry is crucial to seek plausible validation of how this outcome was marginally distinctive in conjunction with authenticating QMOOCs.
APA, Harvard, Vancouver, ISO, and other styles
41

Azpiroz, Izar, Noelia Oses, Marco Quartulli, Igor G. Olaizola, Diego Guidotti, and Susanna Marchi. "Comparison of Climate Reanalysis and Remote-Sensing Data for Predicting Olive Phenology through Machine-Learning Methods." Remote Sensing 13, no. 6 (March 23, 2021): 1224. http://dx.doi.org/10.3390/rs13061224.

Full text
Abstract:
Machine-learning algorithms used for modelling olive-tree phenology rely largely on temperature data. In this study, we developed a prediction model on the basis of climate data and geophysical information. Remote measurements of weather conditions, terrain slope, and surface spectral reflectance were considered for this purpose. The accuracy of the temperature data worsened when weather-station measurements were replaced with remote-sensing records, though the addition of more complete environmental data resulted in an efficient prediction model of olive-tree phenology. Filter and embedded feature-selection techniques were employed to analyze the impact of variables on olive-tree phenology prediction, facilitating the inclusion of measurable information in decision-support frameworks for the sustainable management of olive-tree systems.
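The two feature-selection families mentioned (filter and embedded) can be sketched with scikit-learn on synthetic data; the variables here are illustrative stand-ins, not the study's remote-sensing features.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=300, n_features=10, n_informative=4,
                       noise=5.0, random_state=0)

# Filter method: rank features by a univariate F-score against the target.
filter_sel = SelectKBest(f_regression, k=4).fit(X, y)
print("filter keeps:", np.flatnonzero(filter_sel.get_support()))

# Embedded method: L1 regularisation zeroes out weak predictors during fitting.
lasso = Lasso(alpha=1.0).fit(X, y)
print("lasso keeps:", np.flatnonzero(lasso.coef_ != 0))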
APA, Harvard, Vancouver, ISO, and other styles
42

Orjatsalo, Johanna. "Facilitating Cyber Security Threat Modelling: A Social Capital Perspective." European Conference on Knowledge Management 23, no. 2 (August 25, 2022): 878–84. http://dx.doi.org/10.34190/eckm.23.2.360.

Full text
Abstract:
To identify and manage their cyber security risks, organisations need to form a thorough understanding of various factors that may expose them to these risks. While cyber security professionals and scholars have developed a plethora of practical methodologies and frameworks to support cyber security risk identification and mitigation, the theoretical foundations on what promotes effective knowledge creation when using these methodologies and frameworks are nascent. Yet, theories developed in the field of knowledge management and intellectual capital may provide valuable insight on how to enhance cyber security risk related knowledge creation in organisations. For example, social capital is considered as an important prerequisite for knowledge exchange and combination when creating new intellectual capital (Nahapiet & Ghoshal, 1998). However, more focused research is required to understand how social capital affects knowledge creation in the context of organisational cyber security risk related activities. Using qualitative data gathered from three cyber security threat modelling workshops, this paper examines how social capital enables conditions for exchanging and combining knowledge on cyber security threats. By comparing the empirical observations with Nahapiet and Ghoshal’s (1998) model, this study identifies practical approaches that are used by threat modelling workshop facilitators to create conditions for effective knowledge exchange and combination. This study provides both cyber security scholars and professionals with an example on how to use knowledge creation related academic theories to analyse and further enhance cyber security risk management approaches by creating a connection between Nahapiet and Ghoshal’s (1998) social capital model and cyber security threat modelling.
APA, Harvard, Vancouver, ISO, and other styles
43

Blangiardo, Marta, Areti Boulieri, Peter Diggle, Frédéric B. Piel, Gavin Shaddick, and Paul Elliott. "Advances in spatiotemporal models for non-communicable disease surveillance." International Journal of Epidemiology 49, Supplement_1 (April 1, 2020): i26—i37. http://dx.doi.org/10.1093/ije/dyz181.

Full text
Abstract:
Surveillance systems are commonly used to provide early warning detection or to assess an impact of an intervention/policy. Traditionally, the methodological and conceptual frameworks for surveillance have been designed for infectious diseases, but the rising burden of non-communicable diseases (NCDs) worldwide suggests a pressing need for surveillance strategies to detect unusual patterns in the data and to help unveil important risk factors in this setting. Surveillance methods need to be able to detect meaningful departures from expectation and exploit dependencies within such data to produce unbiased estimates of risk as well as future forecasts. This has led to the increasing development of a range of space-time methods specifically designed for NCD surveillance. We present an overview of recent advances in spatiotemporal disease surveillance for NCDs, using hierarchically specified models. This provides a coherent framework for modelling complex data structures, dealing with data sparsity, exploiting dependencies between data sources and propagating the inherent uncertainties present in both the data and the modelling process. We then focus on three commonly used models within the Bayesian Hierarchical Model (BHM) framework and, through a simulation study, we compare their performance. We also discuss some challenges faced by researchers when dealing with NCD surveillance, including how to account for false detection and the modifiable areal unit problem. Finally, we consider how to use and interpret the complex models, how model selection may vary depending on the intended user group and how best to communicate results to stakeholders and the general public.
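A deliberately simplified member of the BHM family can be written in a few lines of PyMC: Poisson counts with expected cases E and an unstructured area-level random effect. Real NCD surveillance models add spatially structured (e.g. CAR) and temporal terms; everything below is synthetic and illustrative.

import numpy as np
import pymc as pm

rng = np.random.default_rng(3)
n_areas = 50
E = rng.uniform(5, 50, n_areas)              # expected counts per area
obs = rng.poisson(E * rng.lognormal(0.0, 0.3, n_areas))  # simulated cases

with pm.Model():
    beta0 = pm.Normal("beta0", 0, 5)         # baseline log relative risk
    sigma = pm.HalfNormal("sigma", 1)        # between-area variability
    theta = pm.Normal("theta", 0, sigma, shape=n_areas)  # area random effects
    risk = pm.math.exp(beta0 + theta)
    pm.Poisson("y", mu=E * risk, observed=obs)
    idata = pm.sample(1000, tune=1000, chains=2)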
APA, Harvard, Vancouver, ISO, and other styles
44

Vorobevskii, Ivan, Thi Thanh Luong, Rico Kronenberg, Thomas Grünwald, and Christian Bernhofer. "Modelling evaporation with local, regional and global BROOK90 frameworks: importance of parameterization and forcing." Hydrology and Earth System Sciences 26, no. 12 (June 22, 2022): 3177–239. http://dx.doi.org/10.5194/hess-26-3177-2022.

Full text
Abstract:
Evaporation plays an important role in the water balance at different spatial scales. However, its direct and indirect measurements are globally scarce and accurate estimation is a challenging task. Thus the correct process approximation in modelling terrestrial evaporation plays a crucial part. A physically based 1D lumped soil–plant–atmosphere model (BROOK90) is applied to study the role of parameter selection and meteorological input for modelled evaporation on the point scale. Then, by integrating the model into global, regional and local frameworks, we made cross-combinations of their parameterization and forcing schemes to show and analyse their roles in the estimation of evaporation. Five sites with different land uses (grassland, cropland, deciduous broadleaf forest, two evergreen needleleaf forests) located in Saxony, Germany, were selected for the study. All tested combinations showed good agreement with FLUXNET measurements (Kling–Gupta efficiency, KGE, values of 0.35–0.80 at a daily scale). For most of the sites, the best results were found for the calibrated model with in situ meteorological input data, while the worst were observed for the global setup. The setups' performance in the vegetation period was much higher than in the winter period. Among the tested setups, the model parameterization showed a higher spread in performance than the meteorological forcings for the field and evergreen-forest sites, while the opposite was noticed in deciduous forests. Analysis of the evaporation components revealed that transpiration dominates (up to 65–75%) in the vegetation period, while interception (in forests) and soil/snow evaporation (in fields) prevail in the winter months. Finally, it was found that different parameter sets affect model performance and the redistribution of evaporation components throughout the whole year, while the influence of the meteorological forcing was evident only in the summer months.
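The Kling–Gupta efficiency used to score the setups decomposes agreement into correlation, variability and bias terms; below is the standard 2009 formulation, not code from the paper.

import numpy as np

def kge(sim, obs):
    r = np.corrcoef(sim, obs)[0, 1]          # linear correlation
    alpha = np.std(sim) / np.std(obs)        # variability ratio
    beta = np.mean(sim) / np.mean(obs)       # bias ratio
    return 1 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

obs = np.array([1.2, 0.8, 1.5, 2.0, 1.1])    # illustrative daily evaporation [mm]
sim = np.array([1.0, 0.9, 1.4, 1.8, 1.3])
print(f"KGE = {kge(sim, obs):.3f}")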
APA, Harvard, Vancouver, ISO, and other styles
45

Huppmann, Daniel, Matthew J. Gidden, Zebedee Nicholls, Jonas Hörsch, Robin Lamboll, Paul N. Kishimoto, Thorsten Burandt, et al. "pyam: Analysis and visualisation of integrated assessment and macro-energy scenarios." Open Research Europe 1 (June 28, 2021): 74. http://dx.doi.org/10.12688/openreseurope.13633.1.

Full text
Abstract:
The open-source Python package pyam provides a suite of features and methods for the analysis, validation and visualization of reference data and scenario results generated by integrated assessment models, macro-energy tools and other frameworks in the domain of energy transition, climate change mitigation and sustainable development. It bridges the gap between scenario processing and visualisation solutions that are "hard-wired" to specific modelling frameworks and generic data analysis or plotting packages. The package aims to facilitate reproducibility and reliability of scenario processing, validation and analysis by providing well-tested and documented methods for timeseries aggregation, downscaling and unit conversion. It supports various data formats, including sub-annual resolution using continuous time representation and "representative timeslices". The code base is implemented following best practices of collaborative scientific-software development. This manuscript describes the design principles of the package and the types of data which can be handled. The usefulness of pyam is illustrated by highlighting several recent applications.
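A minimal usage sketch of the workflow pyam supports is given below; the model and scenario names and the values are invented for illustration.

import pandas as pd
import pyam

df = pyam.IamDataFrame(pd.DataFrame(
    [["MyModel", "BaseScen", "World", "Emissions|CO2", "Mt CO2/yr", 2020, 40000],
     ["MyModel", "BaseScen", "World", "Emissions|CO2", "Mt CO2/yr", 2030, 35000]],
    columns=["model", "scenario", "region", "variable", "unit", "year", "value"],
))

co2 = df.filter(variable="Emissions|CO2", region="World")  # subset the data
co2_gt = co2.convert_unit("Mt CO2/yr", to="Gt CO2/yr")     # built-in unit handling
print(co2_gt.timeseries())                                  # wide-format table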
APA, Harvard, Vancouver, ISO, and other styles
46

Ballas, Dimitris, Richard Kingston, John Stillwell, and Jianhui Jin. "Building a Spatial Microsimulation-Based Planning Support System for Local Policy Making." Environment and Planning A: Economy and Space 39, no. 10 (October 2007): 2482–99. http://dx.doi.org/10.1068/a38441.

Full text
Abstract:
This paper presents a spatial microsimulation modelling and predictive policy analysis system called Micro-MaPPAS, a Planning Support System (PSS) constructed for a local strategic partnership in a large metropolitan area of the UK. The innovative feature of this system is the use of spatial microsimulation techniques for the enhancement of local policy decision making in connection with the neighbourhood renewal strategy. The paper addresses the relevant data issues and technical aspects of linking spatial microsimulation modelling frameworks to PSS, and deals with the wider implications that such a linkage may have for local policy and planning procedures. Finally, the paper presents some illustrative examples of the policy relevance and policy analysis potential of the software.
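Spatial microsimulation engines of this kind commonly rely on iterative proportional fitting (IPF) to reweight survey households to small-area census margins; the sketch below shows that generic idea on hypothetical data, not Micro-MaPPAS's actual code.

import numpy as np

# Rows = survey individuals, columns = constraint categories (hypothetical).
X = np.array([[1, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 1, 0, 1]], dtype=float)
margins = np.array([120.0, 80.0, 90.0, 110.0])   # small-area census totals

w = np.ones(X.shape[0])                          # start with uniform weights
for _ in range(100):                             # iterate until weights settle
    for j in range(X.shape[1]):
        current = (w * X[:, j]).sum()
        if current > 0:
            w[X[:, j] == 1] *= margins[j] / current

print(w, X.T @ w)                                # reweighted totals match margins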
APA, Harvard, Vancouver, ISO, and other styles
47

Monteiro, Joy Merwin, Jeremy McGibbon, and Rodrigo Caballero. "sympl (v. 0.4.0) and climt (v. 0.15.3) – towards a flexible framework for building model hierarchies in Python." Geoscientific Model Development 11, no. 9 (September 18, 2018): 3781–94. http://dx.doi.org/10.5194/gmd-11-3781-2018.

Full text
Abstract:
sympl (System for Modelling Planets) and climt (Climate Modelling and Diagnostics Toolkit) are an attempt to rethink climate modelling frameworks from the ground up. The aim is to use expressive data structures available in the scientific Python ecosystem along with best practices in software design to allow scientists to easily and reliably combine model components to represent the climate system at a desired level of complexity, and to enable users to fully understand what the model is doing. sympl is a framework which formulates the model in terms of a state that gets evolved forward in time or modified within a specific time by well-defined components. sympl's design facilitates building models that are self-documenting, are highly interoperable, and provide fine-grained control over model components and behaviour. sympl components contain all relevant information about the input they expect and the output that they provide. Components are designed to be easily interchanged, even when they rely on different units or array configurations. sympl provides basic functions and objects which could be used in any type of Earth system model. climt is an Earth system modelling toolkit that contains scientific components built using sympl base objects. These include both pure Python components and wrapped Fortran libraries. climt provides functionality requiring model-specific assumptions, such as state initialization and grid configuration. climt's programming interface is designed to be easy to use and thus appealing to a wide audience. Model building, configuration and execution are performed through a Python script (or Jupyter Notebook), enabling researchers to build an end-to-end Python-based pipeline along with popular Python data analysis and visualization tools.
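The design the abstract describes, a state dictionary evolved by components that declare their inputs and outputs, can be caricatured in plain Python. This toy is not the actual sympl API; it only illustrates the pattern.

import datetime

class Radiation:
    # Toy component: declares its inputs/outputs and returns a tendency
    # (change per day) for one named state quantity.
    inputs = ("temperature",)
    outputs = ("temperature_tendency",)

    def __call__(self, state):
        cooling = -0.01 * (state["temperature"] - 200.0)   # toy physics
        return {"temperature_tendency": cooling}

state = {"time": datetime.datetime(2000, 1, 1), "temperature": 288.0}
dt = 600.0                                                  # seconds

for _ in range(10):                          # a minimal model main loop
    tendencies = Radiation()(state)
    state["temperature"] += tendencies["temperature_tendency"] * dt / 86400.0
    state["time"] += datetime.timedelta(seconds=dt)

print(state["temperature"])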
APA, Harvard, Vancouver, ISO, and other styles
48

Dujardin, Sébastien, Damien Jacques, Jessica Steele, and Catherine Linard. "Mobile Phone Data for Urban Climate Change Adaptation: Reviewing Applications, Opportunities and Key Challenges." Sustainability 12, no. 4 (February 18, 2020): 1501. http://dx.doi.org/10.3390/su12041501.

Full text
Abstract:
Climate change places cities at increasing risk and poses a serious challenge for adaptation. As a response, novel sources of data combined with data-driven logics and advanced spatial modelling techniques have the potential for transformative change in the role of information in urban planning. However, little practical guidance exists on the opportunities offered by mobile phone data for enhancing adaptive capacities in urban areas. Building upon a review of spatial studies mobilizing mobile phone data, this paper explores the opportunities offered by such digital information for providing spatially explicit assessments of urban vulnerability, and shows the ways these can help develop more dynamic strategies and tools for urban planning and disaster risk management. Finally, building upon the limitations of mobile phone data analysis, it discusses the key urban governance challenges that need to be addressed to support the emergence of transformative change in current planning frameworks.
APA, Harvard, Vancouver, ISO, and other styles
49

Elliot, Thomas, Javier Babí Almenar, Samuel Niza, Vânia Proença, and Benedetto Rugani. "Pathways to Modelling Ecosystem Services within an Urban Metabolism Framework." Sustainability 11, no. 10 (May 14, 2019): 2766. http://dx.doi.org/10.3390/su11102766.

Full text
Abstract:
Urbanisation poses new and complex sustainability challenges. Socio-economic activities drive material and energy flows in cities that influence the health of ecosystems inside and outside the urban system. Recent studies suggest that these flows, under the urban metabolism (UM) metaphor, can be extended to encompass the assessment of urban ecosystem services (UES). Advancing UM approaches to assess UES may be a valuable response to these emerging sustainability challenges and can support urban planning decisions. This paper critically reviews UM literature related to the UES concept and identifies approaches that may allow or improve the assessment of UES within UM frameworks. We selected from the UM literature 42 studies that encompass UES aspects and analysed them on the following key investigation themes: temporal information, spatial information, system boundary aspects and cross-scale indicators. The analysis showed that UES are rarely acknowledged in the UM literature, and that existing UM approaches have limited capacity to capture the complexity of the spatio-temporal and multi-scale information underpinning UES, which has so far hampered the implementation of operational decision support systems. We use these results to identify and illustrate pathways towards a UM-UES modelling approach. Our review suggests that cause–effect dynamics should be integrated with the UM framework, based on spatially specific social, economic and ecological data. System dynamics can inform on the causal relationships underpinning UES in cities and can therefore help move towards a knowledge-base tool to support urban planners in addressing urban challenges.
APA, Harvard, Vancouver, ISO, and other styles
50

Piazzi, Murillo A., Haibo Feng, and Mohamad Kassem. "An investigation of concepts for the specification of graphical exchange information requirements in building information modelling." Journal of Information Technology in Construction 27 (July 26, 2022): 662–84. http://dx.doi.org/10.36680/j.itcon.2022.033.

Full text
Abstract:
Previous studies have investigated frameworks for the specification of Exchange Information Requirements (EIRs). So far, these efforts have concentrated on the specification of non-geometrical data, and graphical information is often specified through the application of subjective criteria. Moreover, the variables used in existing specification frameworks have acquired various meanings among practitioners and organisations. To address this gap, this study aims to identify and analyse the concepts that influence the specification of graphical data in BIM-enabled projects. The BIM literature tends to consider problems from a technological standpoint; the current dichotomy in the BIM body of knowledge demands research that accounts for the context of the industry practices and organisations in which the specification of graphical data is performed. To address its aim, this study adopts a qualitative strategy, employing a cross-sectional design and a grounded theory approach for data collection and analysis. The iterative nature of the grounded theory approach, particularly its theoretical sampling feature, was addressed by dividing data collection and analysis into two stages. In exploring the concepts that define the specification of graphical data in EIRs, six main themes were identified: model use, project stage, project actors, process and object definitions, graphical granularity, and model attributes. Moreover, the findings support the suggestion that contextual factors play a role in the implementation of these variables and associated processes. Practices at the industry and organisational level, such as the existence of mandates, may influence the way practitioners specify information. These results can be employed to extend the understanding of the considerations made in the definition of graphical information in EIR documentation. Moreover, this work could inform the activity of practitioners and the development of new technologies focused on the automation of information specification.
APA, Harvard, Vancouver, ISO, and other styles