Academic literature on the topic 'Data modelling frameworks'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of relevant articles, books, theses, book chapters, conference papers, reports, and other scholarly sources on the topic 'Data modelling frameworks.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Data modelling frameworks"

1

Murray, S. G., C. Power, and A. S. G. Robotham. "Modelling Galaxy Populations in the Era of Big Data." Proceedings of the International Astronomical Union 10, S306 (May 2014): 304–6. http://dx.doi.org/10.1017/s1743921314010710.

Full text
Abstract:
The coming decade will witness a deluge of data from next generation galaxy surveys such as the Square Kilometre Array and Euclid. How can we optimally and robustly analyse these data to maximise scientific returns from these surveys? Here we discuss recent work in developing both the conceptual and software frameworks for carrying out such analyses and their application to the dark matter halo mass function. We summarise what we have learned about the HMF from the last 10 years of precision CMB data using the open-source HMFcalc framework, before discussing how this framework is being extended to the full Halo Model.
APA, Harvard, Vancouver, ISO, and other styles
2

Urquhart, Christine, and Dina Tbaishat. "Reflections on the value and impact of library and information services." Performance Measurement and Metrics 17, no. 1 (April 11, 2016): 29–44. http://dx.doi.org/10.1108/pmm-01-2016-0004.

Full text
Abstract:
Purpose – The purpose of this paper is to examine frameworks (such as scorecards) for ongoing library assessment and how business process modelling contributes in Part 3 of the series of viewpoint papers. Design/methodology/approach – Reviews the statistical data collection for strategic planning, and use of data analytics. Considers how to organise further value explorations. Compares macro-frameworks (balanced scorecard, values scorecard) and micro-frameworks for library assessment. Reviews the evidence on business process modelling/re-engineering initiatives. Describes how the Riva approach can be used to both derive a process architecture and to model individual processes. Findings – Data analytics requires collaboration among library services to develop reliable data sets and effective data visualisations for managers to use. Frameworks such as the balanced scorecard may be used to organise ongoing impact and performance evaluation. Queries that arise during ongoing library assessment may require a framework to formulate questions, and assemble evidence (qualitative and quantitative). Both macro- and micro-value frameworks are useful. Work on process modelling within libraries can help to develop an assessment culture, and the Riva approach provides both a process architecture and models of individual processes. Originality/value – Examines how to implement a library assessment culture through use of data analytics, value frameworks and business process modelling.
APA, Harvard, Vancouver, ISO, and other styles
3

Herath, Herath Mudiyanselage Viraj Vidura, Jayashree Chadalawada, and Vladan Babovic. "Hydrologically informed machine learning for rainfall–runoff modelling: towards distributed modelling." Hydrology and Earth System Sciences 25, no. 8 (August 11, 2021): 4373–401. http://dx.doi.org/10.5194/hess-25-4373-2021.

Full text
Abstract:
Despite showing great success of applications in many commercial fields, machine learning and data science models generally show limited success in many scientific fields, including hydrology (Karpatne et al., 2017). The approach is often criticized for its lack of interpretability and physical consistency. This has led to the emergence of new modelling paradigms, such as theory-guided data science (TGDS) and physics-informed machine learning. The motivation behind such approaches is to improve the physical meaningfulness of machine learning models by blending existing scientific knowledge with learning algorithms. Following the same principles in our prior work (Chadalawada et al., 2020), a new model induction framework was founded on genetic programming (GP), namely the Machine Learning Rainfall–Runoff Model Induction (ML-RR-MI) toolkit. ML-RR-MI is capable of developing fully fledged lumped conceptual rainfall–runoff models for a watershed of interest using the building blocks of two flexible rainfall–runoff modelling frameworks. In this study, we extend ML-RR-MI towards inducing semi-distributed rainfall–runoff models. The meaningfulness and reliability of hydrological inferences gained from lumped models may tend to deteriorate within large catchments where the spatial heterogeneity of forcing variables and watershed properties is significant. This was the motivation behind developing our machine learning approach for distributed rainfall–runoff modelling titled Machine Induction Knowledge Augmented – System Hydrologique Asiatique (MIKA-SHA). MIKA-SHA captures spatial variabilities and automatically induces rainfall–runoff models for the catchment of interest without any explicit user selections. Currently, MIKA-SHA learns models utilizing the model building components of two flexible modelling frameworks. However, the proposed framework can be coupled with any internally coherent collection of building blocks. MIKA-SHA's model induction capabilities have been tested on the Rappahannock River basin near Fredericksburg, Virginia, USA. MIKA-SHA builds and tests many model configurations using the model building components of the two flexible modelling frameworks and quantitatively identifies the optimal model for the watershed of concern. In this study, MIKA-SHA is utilized to identify two optimal models (one from each flexible modelling framework) to capture the runoff dynamics of the Rappahannock River basin. Both optimal models achieve high-efficiency values in hydrograph predictions (both at catchment and subcatchment outlets) and good visual matches with the observed runoff response of the catchment. Furthermore, the resulting model architectures are compatible with previously reported research findings and fieldwork insights of the watershed and are readily interpretable by hydrologists. MIKA-SHA-induced semi-distributed model performances were compared against existing lumped model performances for the same basin. MIKA-SHA-induced optimal models outperform the lumped models used in this study in terms of efficiency values while benefitting hydrologists with more meaningful hydrological inferences about the runoff dynamics of the Rappahannock River basin.
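The "efficiency values" mentioned for the hydrograph predictions are commonly Nash–Sutcliffe efficiencies. As a point of reference only (not the authors' toolkit), a minimal computation on made-up runoff series might look like this:

```python
import numpy as np

def nash_sutcliffe_efficiency(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical daily runoff series (m^3/s), for illustration only.
obs = np.array([1.2, 1.5, 2.8, 3.9, 2.1, 1.4])
sim = np.array([1.1, 1.6, 2.5, 4.2, 2.0, 1.5])
print(nash_sutcliffe_efficiency(obs, sim))
```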
APA, Harvard, Vancouver, ISO, and other styles
4

Støa, Bente, Rune Halvorsen, Sabrina Mazzoni, and Vladimir I. Gusarov. "Sampling bias in presence-only data used for species distribution modelling: theory and methods for detecting sample bias and its effects on models." Sommerfeltia 38, no. 1 (October 1, 2018): 1–53. http://dx.doi.org/10.2478/som-2018-0001.

Full text
Abstract:
This paper provides a theoretical understanding of sampling bias in presence-only data in the context of species distribution modelling. This understanding forms the basis for two integrated frameworks, one for detecting sampling bias of different kinds in presence-only data (the bias assessment framework) and one for assessing potential effects of sampling bias on species distribution models (the bias effects framework). We exemplify the use of these frameworks with museum data for nine insect species in Norway, for which the distribution along the two main bioclimatic gradients (related to oceanicity and temperatures) is modelled using the MaxEnt method. Models of different complexity (achieved by use of two different model selection procedures that represent spatial prediction or ecological response modelling purposes, respectively) were generated with different types of background data (uninformed and background-target-group [BTG]). The bias assessment framework made use of comparisons between observed and theoretical frequency-of-presence (FoP) curves, obtained separately for each combination of species and bioclimatic predictor, to identify potential sampling bias. The bias effects framework made use of comparisons between modelled response curves (predicted relative FoP curves) and the corresponding observed FoP curves for each combination of species and predictor. The extent to which the observed FoP curves deviated from the expected, smooth and unimodal theoretical FoP curve, varied considerably among the nine insect species. Among-curve differences were, in most cases, interpreted as indications of sampling bias. Using BTG-type background data in many cases introduced strong sampling bias. The predicted relative FoP curves from MaxEnt were, in general, similar to the corresponding observed FoP curves. This indicates that the main structure of the data-sets was adequately summarised by the MaxEnt models (with the options and settings used), in turn suggesting that shortcomings of input data such as sampling bias or omission of important predictors may overshadow the effect of modelling method on the predictive performance of distribution models. The examples indicate that the two proposed frameworks are useful for identification of sampling bias in presence-only data and for choosing settings for distribution modelling options such as the method for extraction of background data points and determining the appropriate level of model complexity.
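To make the frequency-of-presence (FoP) idea concrete, here is a rough sketch, not the authors' implementation, that bins hypothetical presence and background records along one standardised gradient; the bin count and data are assumptions:

```python
import numpy as np

def observed_fop(presence_values, background_values, n_bins=10):
    """Observed FoP proxy along one gradient: presence records per background cell in each bin."""
    edges = np.linspace(background_values.min(), background_values.max(), n_bins + 1)
    pres_counts, _ = np.histogram(presence_values, bins=edges)
    back_counts, _ = np.histogram(background_values, bins=edges)
    fop = np.divide(pres_counts.astype(float), back_counts.astype(float),
                    out=np.full(n_bins, np.nan), where=back_counts > 0)
    midpoints = 0.5 * (edges[:-1] + edges[1:])
    return midpoints, fop

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, 5000)   # e.g. a standardised temperature gradient
presence = rng.normal(0.5, 0.6, 300)      # presences biased towards warmer cells
mid, fop = observed_fop(presence, background)
print(np.round(fop, 3))
```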
APA, Harvard, Vancouver, ISO, and other styles
5

Oden, J. Tinsley. "Adaptive multiscale predictive modelling." Acta Numerica 27 (May 1, 2018): 353–450. http://dx.doi.org/10.1017/s096249291800003x.

Full text
Abstract:
The use of computational models and simulations to predict events that take place in our physical universe, or to predict the behaviour of engineered systems, has significantly advanced the pace of scientific discovery and the creation of new technologies for the benefit of humankind over recent decades, at least up to a point. That ‘point' in recent history occurred around the time that the scientific community began to realize that true predictive science must deal with many formidable obstacles, including the determination of the reliability of the models in the presence of many uncertainties. To develop meaningful predictions one needs relevant data, itself possessing uncertainty due to experimental noise; in addition, one must determine model parameters, and concomitantly, there is the overriding need to select and validate models given the data and the goals of the simulation. This article provides a broad overview of predictive computational science within the framework of what is often called the science of uncertainty quantification. The exposition is divided into three major parts. In Part 1, philosophical and statistical foundations of predictive science are developed within a Bayesian framework. There the case is made that the Bayesian framework provides, perhaps, a unique setting for handling all of the uncertainties encountered in scientific prediction. In Part 2, general frameworks and procedures for the calculation and validation of mathematical models of physical realities are given, all in a Bayesian setting. But beyond Bayes, an introduction to information theory, the maximum entropy principle, model sensitivity analysis and sampling methods such as MCMC are presented. In Part 3, the central problem of predictive computational science is addressed: the selection, adaptive control and validation of mathematical and computational models of complex systems. The Occam Plausibility Algorithm, OPAL, is introduced as a framework for model selection, calibration and validation. Applications to complex models of tumour growth are discussed.
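As a generic illustration of the Bayesian calibration machinery surveyed in the article (not OPAL itself), below is a minimal random-walk Metropolis sampler for a single model parameter, with a likelihood and prior invented purely for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from a model y = theta * x + noise; theta_true is an assumption for the demo.
theta_true = 2.0
x = np.linspace(0, 1, 50)
y = theta_true * x + rng.normal(0, 0.1, x.size)

def log_posterior(theta):
    log_prior = -0.5 * (theta / 10.0) ** 2                 # broad Gaussian prior
    log_like = -0.5 * np.sum((y - theta * x) ** 2) / 0.1**2
    return log_prior + log_like

def metropolis(n_steps=5000, step=0.05, theta0=0.0):
    samples = np.empty(n_steps)
    theta, lp = theta0, log_posterior(theta0)
    for i in range(n_steps):
        prop = theta + step * rng.normal()
        lp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:           # accept/reject step
            theta, lp = prop, lp_prop
        samples[i] = theta
    return samples

draws = metropolis()
print(draws[1000:].mean(), draws[1000:].std())             # posterior mean and spread after burn-in
```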
APA, Harvard, Vancouver, ISO, and other styles
6

Chartier, Jean-François, Davide Pulizzotto, Louis Chartrand, and Jean-Guy Meunier. "A data-driven computational semiotics: The semantic vector space of Magritte’s artworks." Semiotica 2019, no. 230 (October 25, 2019): 19–69. http://dx.doi.org/10.1515/sem-2018-0120.

Full text
Abstract:
The rise of big digital data is changing the framework within which linguists, sociologists, anthropologists, and other researchers are working. Semiotics is not spared by this paradigm shift. A data-driven computational semiotics is the study with an intensive use of computational methods of patterns in human-created contents related to semiotic phenomena. One of the most promising frameworks in this research program is the Semantic Vector Space (SVS) models and their methods. The objective of this article is to contribute to the exploration of the SVS for a computational semiotics by showing what types of semiotic analysis can be accomplished within this framework. The study is applied to a unique body of digitized artworks. We conducted three short experiments in which we explore three types of semiotic analysis: paradigmatic analysis, componential analysis, and topic modelling analysis. The results reported show that the SVS constitutes a powerful framework within which various types of semiotic analysis can be carried out.
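At its simplest, paradigmatic analysis in a semantic vector space reduces to nearest-neighbour search under cosine similarity. Below is a toy sketch with invented vectors (not the authors' corpus or learned SVS):

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical low-dimensional vectors for a few visual motifs; real SVS vectors
# would be learned from the digitised corpus.
vectors = {
    "bowler_hat": np.array([0.9, 0.1, 0.3]),
    "apple":      np.array([0.8, 0.2, 0.4]),
    "cloud":      np.array([0.1, 0.9, 0.2]),
}

def nearest_neighbours(query, space, k=2):
    """Rank every other item by cosine similarity to the query item."""
    sims = {name: cosine_similarity(space[query], vec)
            for name, vec in space.items() if name != query}
    return sorted(sims.items(), key=lambda kv: kv[1], reverse=True)[:k]

print(nearest_neighbours("bowler_hat", vectors))
```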
APA, Harvard, Vancouver, ISO, and other styles
7

Shivakumar, Abhishek, Thomas Alfstad, and Taco Niet. "A clustering approach to improve spatial representation in water-energy-food models." Environmental Research Letters 16, no. 11 (October 29, 2021): 114027. http://dx.doi.org/10.1088/1748-9326/ac2ce9.

Full text
Abstract:
Currently available water-energy-food (WEF) modelling frameworks to analyse cross-sectoral interactions often share one or more of the following gaps: (a) lack of integration between sectors, (b) coarse spatial representation, and (c) lack of reproducible methods of nexus assessment. In this paper, we present a novel clustering tool as an expansion to the Climate-Land-Energy-Water-Systems modelling framework used to quantify inter-sectoral linkages between water, energy, and food systems. The clustering tool uses Agglomerative Hierarchical clustering to aggregate spatial data related to the land and water sectors. Using clusters of aggregated data reconciles the need for a spatially resolved representation of the land-use and water sectors with the computational and data requirements to efficiently solve such a model. The aggregated clusters, combined with energy system components, form an integrated resource planning structure. The modelling framework is underpinned by an open-source energy system modelling tool—OSeMOSYS—and uses publicly available data with global coverage. By doing so, the modelling framework allows for reproducible WEF nexus assessments. The approach is used to explore the inter-sectoral linkages between the energy, land-use, and water sectors of Viet Nam out to 2030. A validation of the clustering approach confirms that underlying trends in actual crop yield data are preserved in the resultant clusters. Finally, changes in cultivated area of selected crops are observed and differences in levels of crop migration are identified.
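A minimal sketch of the spatial aggregation step described above, using scikit-learn's agglomerative (Ward) clustering on hypothetical per-cell attributes; the features and cluster count are illustrative assumptions, not the tool's actual configuration:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(42)

# Hypothetical grid cells described by (normalised) crop yield, irrigation demand and runoff.
cells = rng.random((500, 3))

# Ward linkage merges cells into a small number of internally homogeneous clusters,
# which then stand in for the aggregated land/water regions fed to the energy model.
model = AgglomerativeClustering(n_clusters=8, linkage="ward")
labels = model.fit_predict(cells)

for cluster_id in range(8):
    members = cells[labels == cluster_id]
    print(cluster_id, len(members), members.mean(axis=0).round(2))
```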
APA, Harvard, Vancouver, ISO, and other styles
8

aus der Beek, T., M. Flörke, D. M. Lapola, R. Schaldach, F. Voß, and E. Teichert. "Modelling historical and current irrigation water demand on the continental scale: Europe." Advances in Geosciences 27 (September 7, 2010): 79–85. http://dx.doi.org/10.5194/adgeo-27-79-2010.

Full text
Abstract:
Water abstractions for irrigation purposes are higher than for any other pan-European water use sector and have a large influence on river runoff regimes. This modelling experiment assesses historic and current irrigation water demands for different crops at five arc-minute spatial resolution for pan-Europe. Two different modelling frameworks have been applied in this study. First, soft-coupling the dynamic vegetation model LPJmL with the land use model LandSHIFT leads to overestimations of national irrigation water demands, which are rather high in the southern Mediterranean countries. This can be explained by unlimited water supply in the model structure and illegal or ungauged water abstractions in the reported data sets. The second modelling framework is WaterGAP3, which has an integrated conceptual crop-specific irrigation module. Irrigation water requirements as modelled with WaterGAP3 feature a more realistic representation of pan-European water withdrawals. However, in colder humid regions, irrigation water demands are often underestimated. Additionally, a national database on crop-specific irrigated area and water withdrawal for all 42 countries within pan-Europe has been set up and integrated in both model frameworks.
APA, Harvard, Vancouver, ISO, and other styles
9

Wilbert, Niko, Tiziano Zito, Rike-Benjamin Schuppner, Zbigniew Jędrzejewski-Szmek, Laurenz Wiskott, and Pietro Berkes. "Building extensible frameworks for data processing: The case of MDP, Modular toolkit for Data Processing." Journal of Computational Science 4, no. 5 (September 2013): 345–51. http://dx.doi.org/10.1016/j.jocs.2011.10.005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ledien, Julia, Zulma M. Cucunubá, Gabriel Parra-Henao, Eliana Rodríguez-Monguí, Andrew P. Dobson, Susana B. Adamo, María-Gloria Basáñez, and Pierre Nouvellet. "Linear and machine learning modelling for spatiotemporal disease predictions: Force-of-infection of Chagas disease." PLOS Neglected Tropical Diseases 16, no. 7 (July 19, 2022): e0010594. http://dx.doi.org/10.1371/journal.pntd.0010594.

Full text
Abstract:
Background Chagas disease is a long-lasting disease with a prolonged asymptomatic period. Cumulative indices of infection such as prevalence do not shed light on the current epidemiological situation, as they integrate infection over long periods. Instead, metrics such as the Force-of-Infection (FoI) provide information about the rate at which susceptible people become infected and permit sharper inference about temporal changes in infection rates. FoI is estimated by fitting (catalytic) models to available age-stratified serological (ground-truth) data. Predictive FoI modelling frameworks are then used to understand spatial and temporal trends indicative of heterogeneity in transmission and changes effected by control interventions. Ideally, these frameworks should be able to propagate uncertainty and handle spatiotemporal issues. Methodology/principal findings We compare three methods in their ability to propagate uncertainty and provide reliable estimates of FoI for Chagas disease in Colombia as a case study: two Machine Learning (ML) methods (Boosted Regression Trees (BRT) and Random Forest (RF)), and a Linear Model (LM) framework that we had developed previously. Our analyses show consistent results between the three modelling methods under scrutiny. The predictors (explanatory variables) selected, as well as the location of the most uncertain FoI values, were coherent across frameworks. RF was faster than BRT and LM, and provided estimates with fewer extreme values when extrapolating to areas where no ground-truth data were available. However, BRT and RF were less efficient at propagating uncertainty. Conclusions/significance The choice of FoI predictive models will depend on the objectives of the analysis. ML methods will help characterise the mean behaviour of the estimates, while LM will provide insight into the uncertainty surrounding such estimates. Our approach can be extended to the modelling of FoI patterns in other Chagas disease-endemic countries and to other infectious diseases for which serosurveys are regularly conducted for surveillance.
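In the same spirit as the comparison described above (but with synthetic data and invented predictors, not the study's dataset), a linear model and a random forest can be contrasted on held-out error as follows:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)

# Hypothetical predictors (e.g. year, altitude, a housing index) and a noisy FoI-like response.
X = rng.random((400, 3))
foi = 0.02 + 0.05 * X[:, 0] - 0.03 * X[:, 1] ** 2 + rng.normal(0, 0.005, 400)

X_train, X_test, y_train, y_test = train_test_split(X, foi, test_size=0.25, random_state=0)

lm = LinearRegression().fit(X_train, y_train)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

print("LM MAE:", mean_absolute_error(y_test, lm.predict(X_test)))
print("RF MAE:", mean_absolute_error(y_test, rf.predict(X_test)))
```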
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Data modelling frameworks"

1

Bryan-Kinns, Nicholas Jonathan. "A framework for modelling video content." Thesis, Queen Mary, University of London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287876.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hempel, Arne-Jens, and Steffen F. Bocklisch. "Parametric Fuzzy Modelling Framework for Complex Data-Inherent Structures." Universitätsbibliothek Chemnitz, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200901487.

Full text
Abstract:
The present article dedicates itself to fuzzy modelling of data-inherent structures. In particular two main points are dealt with: the introduction of a fuzzy modelling framework and the elaboration of an automated, data-driven design strategy to model complex data-inherent structures within this framework. The innovation concerning the modelling framework lies in the fact that it is consistently built around a single, generic type of parametric and convex membership function. In the first part of the article this essential building block will be defined and its assets and shortcomings will be discussed. The novelty regarding the automated, data-driven design strategy consists in the conservation of the modelling framework when modelling complex (nonconvex) data-inherent structures. Instead of applying current clustering methods the design strategy uses the inverse of the data structure in order to create a fuzzy model solely based on convex membership functions. Throughout the article the whole model design process is illustrated, section by section, with the help of an academic example.
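The generic building block is a single parametric, convex membership function. The sketch below uses a two-sided bell-shaped parameterisation as an illustrative assumption; the authors' exact functional form may differ:

```python
import numpy as np

def convex_membership(x, centre, left_width, right_width, left_exp=2.0, right_exp=2.0):
    """Two-sided parametric membership function: equals 1 at the centre and
    decays monotonically on each side, so the resulting fuzzy set is convex."""
    x = np.asarray(x, dtype=float)
    d = np.where(x < centre,
                 (centre - x) / left_width,
                 (x - centre) / right_width)
    exponent = np.where(x < centre, left_exp, right_exp)
    return 1.0 / (1.0 + d ** exponent)

grid = np.linspace(-5, 5, 11)
print(np.round(convex_membership(grid, centre=0.0, left_width=1.0, right_width=2.0), 3))
```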
APA, Harvard, Vancouver, ISO, and other styles
3

Serpeka, Rokas. "Analyzing and modelling exchange rate data using VAR framework." Thesis, KTH, Matematik (Inst.), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-94180.

Full text
Abstract:
In this report, analyses of foreign exchange rate time series are performed. First, triangular arbitrage is detected and eliminated from the data series using linear algebra tools. Then Vector Autoregressive processes are calibrated and used to replicate the dynamics of exchange rates as well as to forecast the time series. Finally, an optimal portfolio of currencies with minimal Expected Shortfall is formed using one-time-period-ahead forecasts.
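A minimal sketch of the VAR calibration and forecasting step using statsmodels, applied to synthetic currency returns rather than the thesis data; the series and lag order are assumptions:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)

# Synthetic daily log-returns for two currency pairs (stand-ins for the cleaned FX series).
returns = rng.normal(0, 0.001, (500, 2))
data = pd.DataFrame(returns, columns=["EURUSD", "GBPUSD"])

model = VAR(data)
results = model.fit(2)          # fit a VAR(2); in practice the lag order would be chosen by AIC/BIC
print(results.k_ar)             # number of lags actually used

# One-step-ahead forecast from the last observed lags.
last_obs = data.values[-results.k_ar:]
print(results.forecast(last_obs, steps=1))
```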
APA, Harvard, Vancouver, ISO, and other styles
4

Silverwood, Richard Jonathan. "Issues in modelling growth data within a life course framework." Thesis, London School of Hygiene and Tropical Medicine (University of London), 2008. http://researchonline.lshtm.ac.uk/682377/.

Full text
Abstract:
This thesis explores, develops and implements modelling strategies for studying relationships between childhood growth and later health, focusing primarily on the relationship between the development of body mass index (BMI) in childhood and later obesity. Existing growth models are explored, though found to be inflexible and potentially inadequate. Alternative approaches using parametric and nonparametric modelling are investigated. A distinction between balanced and unbalanced data structure is made because of the ways in which missing data can be addressed. A dataset of each type is used for illustration: the Stockholm Weight Development Study (SWEDES) and the Uppsala Family Study (UFS). The focus in each application is obesity, with the first examining how the adiposity rebound (AR), and the second how the adiposity peak (AP) in infancy, relate to later adiposity. In each case a two-stage approach is used. Subject-specific cubic smoothing splines are used in SWEDES to model childhood BMI and estimate the AR for each subject. As childhood BMI data are balanced, missingness can be dealt with via multiple imputation. The relationship between the AR and late-adolescent adiposity is then explored via linear and logistic regression, with both the age and BMI at AR found to be strongly and independently associated with late-adolescent adiposity. In the UFS, where childhood BMI data are unbalanced, penalised regression splines are used within a mixed model framework to model childhood BMI and estimate the AP for each subject. The data correlations induced by the family structure of the observations are addressed by fitting multilevel models in the second stage. Both age and BMI at AP are found to be positively associated with later adiposity. The two nonparametric modelling approaches are found to be effective and flexible. Whilst the thesis concentrates on BMI development in childhood and later adiposity, the techniques employed, both in terms of the modelling of growth and the relating of the derived features to the outcomes, are far more widely applicable.
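To make the first-stage idea concrete, the sketch below (not the thesis code) fits a cubic smoothing spline to one child's invented BMI measurements and reads off the adiposity rebound as the minimum of the fitted curve:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical BMI measurements (age in years, BMI in kg/m^2) for a single child.
age = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
bmi = np.array([17.8, 16.9, 16.2, 15.8, 15.6, 15.7, 16.1, 16.6, 17.2])

spline = UnivariateSpline(age, bmi, k=3, s=0.05)   # cubic smoothing spline

grid = np.linspace(age.min(), age.max(), 801)
fitted = spline(grid)
rebound_index = int(np.argmin(fitted))             # adiposity rebound = minimum of the BMI curve
print("age at adiposity rebound:", round(grid[rebound_index], 2),
      "BMI at rebound:", round(float(fitted[rebound_index]), 2))
```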
APA, Harvard, Vancouver, ISO, and other styles
5

Guseva, Ekaterina. "The Conceptual Integration Modelling Framework: Semantics and Query Answering." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/33464.

Full text
Abstract:
In the context of business intelligence (BI), the accuracy and accessibility of information consolidation play an important role. Integrating data from different sources involves its transformation according to constraints expressed in an appropriate language. The Conceptual Integration Modelling framework (CIM) acts as such a language. The CIM aims to allow business users to specify what information is needed in a simplified and comprehensive language. Achieving this requires raising the level of abstraction to the conceptual level, so that users are able to pose queries expressed in a conceptual query language (CQL). The CIM is comprised of three facets: an Extended Entity Relationship (EER) model (a high level conceptual model that is used to design databases), a conceptual schema against which users pose their queries, a relational multidimensional model that represents data sources, and mappings between the conceptual schema and sources. Such mappings can be specified in two ways: in the first scenario, the so-called global-as-view (GAV), the global schema is mapped to views over the relational sources by specifying how to obtain tuples of the global relation from tuples in the sources. In the second scenario, sources may contain less detailed information (more aggregated data), so the local relations are defined as views over global relations, which is called local-as-view (LAV). In this thesis, we address the problem of expressibility and decidability of queries written in CQL. We first define the semantics of the CIM by translating the conceptual model into a set of first-order sentences containing a class of conceptual dependencies (CDs) - tuple-generating dependencies (TGDs) and equality generating dependencies (EGDs), in addition to certain (first order) restrictions to express multidimensionality. Here multidimensionality means that facts in a data warehouse can be described from different perspectives. The EGDs set the equality between tuples and the TGDs set the rule that two instances are in a subtype association (more precise definitions are given further in the thesis). We use a non-conflicting class of conceptual dependencies that guarantees a query's decidability. The non-conflicting dependencies avoid an interaction between TGDs and EGDs. Our semantics extend the existing semantics defined for extended entity relationship models to the notions of fact, dimension category, dimensional hierarchy and dimension attributes. In addition, a class of conceptual queries will be defined and proven to be decidable. A DL-Lite logic has been extensively used for query rewriting as it allows us to reduce the complexity of the query answering to AC0. Moreover, we present a query rewriting algorithm for the class of defined conceptual dependencies. Finally, we consider the problem in light of GAV and LAV approaches and prove the query answering complexities. The query answering problem becomes decidable if we add certain constraints to a well-known set of EGDs + TGDs dependencies to guarantee summarizability. The query answering problem under the global-as-view (GAV) approach of mapping has AC0 data complexity and EXPTIME combined complexity. This problem becomes coNP-hard under the LAV approach of mapping.
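For readers unfamiliar with the dependency classes mentioned above, generic tuple-generating and equality-generating dependencies can be written schematically as follows (these are textbook forms, not the thesis's exact constraints):

```latex
% Tuple-generating dependency (TGD): whenever the body \varphi holds, some head tuples must exist.
\forall \bar{x}\,\forall \bar{y}\; \bigl( \varphi(\bar{x},\bar{y}) \rightarrow \exists \bar{z}\; \psi(\bar{x},\bar{z}) \bigr)

% Equality-generating dependency (EGD): whenever the body \varphi holds, two variables must be equal.
\forall \bar{x}\; \bigl( \varphi(\bar{x}) \rightarrow x_i = x_j \bigr)
```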
APA, Harvard, Vancouver, ISO, and other styles
6

Mgbemena, Chidozie Simon. "A data-driven framework for investigating customer retention." Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/13175.

Full text
Abstract:
This study presents a data-driven simulation framework in order to understand customer behaviour and therefore improve customer retention. The overarching system design methodology used for this study is aligned with the design science paradigm. The Social Media Domain Analysis (SoMeDoA) approach is adopted and evaluated to build a model on the determinants of customer satisfaction in the mobile services industry. Furthermore, the most popular machine learning algorithms for analysing customer churn are applied to analyse customer retention based on the derived determinants. Finally, a data-driven approach for agent-based modelling is proposed to investigate the social effect of customer retention. The key contribution of this study is the customer agent decision trees (CADET) approach and a data-driven approach for Agent-Based Modelling (ABM). The CADET approach is applied to a dataset provided by a UK mobile services company. One of the major findings of using the CADET approach to investigate customer retention is that social influence, specifically word of mouth has an impact on customer retention. The second contribution of this study is the method used to uncover customer satisfaction determinants. The SoMeDoA framework was applied to uncover determinants of customer satisfaction in the mobile services industry. Customer service, coverage quality and price are found to be key determinants of customer satisfaction in the mobile services industry. The third contribution of this study is the approach used to build customer churn prediction models. The most popular machine learning techniques are used to build customer churn prediction models based on identified customer satisfaction determinants. Overall, for the identified determinants, decision trees have the highest accuracy scores for building customer churn prediction models.
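As a rough illustration of the churn-prediction step (a generic decision tree on invented satisfaction determinants, not the CADET approach itself):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(11)

# Hypothetical determinants: customer-service score, coverage quality, relative price.
n = 1000
X = rng.random((n, 3))
# Synthetic churn rule with noise, purely for demonstration.
churn = ((0.6 * (1 - X[:, 0]) + 0.3 * (1 - X[:, 1]) + 0.4 * X[:, 2]
          + rng.normal(0, 0.1, n)) > 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, churn, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("hold-out accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```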
APA, Harvard, Vancouver, ISO, and other styles
7

Mouline, Ludovic. "Towards a modelling framework with temporal and uncertain data for adaptive systems." Thesis, Rennes 1, 2019. https://ged.univ-rennes1.fr/nuxeo/site/esupversions/32c7a604-bdf6-491e-ba8f-1a9f2a1c0b8b.

Full text
Abstract:
Self-Adaptive Systems (SAS) optimise their behaviours or configurations at runtime in response to a modification of their environments or their behaviours. These systems therefore need a deep understanding of the ongoing situation which enables reasoning tasks for adaptation operations. Using the model-driven engineering (MDE) methodology, one can abstract this situation. However, information concerning the system is not always known with absolute confidence. Moreover, in such systems, the monitoring frequency may differ from the delay for reconfiguration actions to have measurable effects. These characteristics come with a global challenge for software engineers: how to represent uncertain knowledge that can be efficiently queried and to represent ongoing actions in order to improve adaptation processes? To tackle this challenge, this thesis defends the need for a unified modelling framework which includes, besides all traditional elements, time and uncertainty as first-class concepts. Therefore, a developer will be able to abstract information related to the adaptation process, the environment as well as the system itself. Towards this vision, we present two evaluated contributions: a temporal context model and a language for uncertain data. The temporal context model allows abstracting past, ongoing and future actions with their impacts and context. The language, named Ain'tea, integrates data uncertainty as a first-class citizen.
APA, Harvard, Vancouver, ISO, and other styles
8

Förster, Stefan. "A formal framework for modelling component extension and layers in distributed embedded systems." Dresden: TUDpress, 2007. http://www.loc.gov/catdir/toc/fy0803/2007462554.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Duong, Thi V. T. "Efficient duration modelling in the hierarchical hidden semi-Markov models and their applications." Thesis, Curtin University, 2008. http://hdl.handle.net/20.500.11937/1408.

Full text
Abstract:
Modeling patterns in temporal data has arisen as an important problem in engineering and science. This has led to the popularity of several dynamic models, in particular the renowned hidden Markov model (HMM) [Rabiner, 1989]. Despite its widespread success in many cases, the standard HMM often fails to model more complex data whose elements are correlated hierarchically or over a long period. Such problems are, however, frequently encountered in practice. Existing efforts to overcome this weakness often address either one of these two aspects separately, mainly due to computational intractability. Motivated by this modeling challenge in many real world problems, in particular, for video surveillance and segmentation, this thesis aims to develop tractable probabilistic models that can jointly model duration and hierarchical information in a unified framework. We believe that jointly exploiting statistical strength from both properties will lead to more accurate and robust models for the needed task. To tackle the modeling aspect, we base our work on an intersection between dynamic graphical models and statistics of lifetime modeling. Realizing that the key bottleneck found in the existing works lies in the choice of the distribution for a state, we have successfully integrated the discrete Coxian distribution [Cox, 1955], a special class of phase-type distributions, into the HMM to form a novel and powerful stochastic model termed as the Coxian Hidden Semi-Markov Model (CxHSMM). We show that this model can still be expressed as a dynamic Bayesian network, and inference and learning can be derived analytically. Most importantly, it has four superior features over existing semi-Markov modelling: the parameter space is compact, computation is fast (almost the same as the HMM), closed-form estimation can be derived, and the Coxian is flexible enough to approximate a large class of distributions. Next, we exploit hierarchical decomposition in the data by borrowing analogy from the hierarchical hidden Markov model in [Fine et al., 1998, Bui et al., 2004] and introduce a new type of shallow structured graphical model that combines both duration and hierarchical modelling into a unified framework, termed the Coxian Switching Hidden Semi-Markov Models (CxSHSMM). The top layer is a Markov sequence of switching variables, while the bottom layer is a sequence of concatenated CxHSMMs whose parameters are determined by the switching variable at the top. Again, we provide a thorough analysis along with inference and learning machinery. We also show that semi-Markov models with arbitrary depth structure can easily be developed. In all cases we further address two practical issues: missing observations due to unstable tracking and the use of partially labelled data to improve training accuracy. Motivated by real-world problems, our application contribution is a framework to recognize complex activities of daily living (ADLs) and detect anomalies to provide better intelligent caring services for the elderly. Coarser activities with self duration distributions are represented using the CxHSMM. Complex activities are made of a sequence of coarser activities and represented at the top level in the CxSHSMM. Intensive experiments are conducted to evaluate our solutions against existing methods. In many cases, the superiority of the joint modeling and the Coxian parameterization over traditional methods is confirmed.
The robustness of our proposed models is further demonstrated in a series of more challenging experiments, in which the tracking is often lost and activities considerably overlap. Our final contribution is an application of the switching Coxian model to segment education-oriented videos into coherent topical units. Our results again demonstrate such segmentation processes can benefit greatly from the joint modeling of duration and hierarchy.
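To give a flavour of the duration model, the sketch below samples from a simple discrete Coxian distribution: the process moves through a chain of phases and, on leaving a phase, either absorbs (returning the elapsed duration) or continues to the next phase. The parameterisation is a simplified assumption, not the thesis's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_discrete_coxian(leave_probs, absorb_probs, rng):
    """Sample one duration from a simple discrete Coxian distribution.

    leave_probs[i]  : probability of leaving phase i at each time step (geometric sojourn).
    absorb_probs[i] : probability of finishing (absorbing) rather than moving to phase i+1
                      when phase i is left; the last phase always absorbs.
    """
    duration, phase = 0, 0
    n_phases = len(leave_probs)
    while True:
        duration += 1
        if rng.uniform() < leave_probs[phase]:              # leave the current phase
            if phase == n_phases - 1 or rng.uniform() < absorb_probs[phase]:
                return duration                             # absorb: total duration reached
            phase += 1                                      # otherwise move to the next phase

durations = [sample_discrete_coxian([0.3, 0.5, 0.7], [0.2, 0.4, 1.0], rng) for _ in range(10000)]
print(np.mean(durations), np.max(durations))
```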
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Data modelling frameworks"

1

Vanrolleghem, Peter A. Modelling aspects of water framework directive implementation. London: IWA Pub., 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Stefan, Förster. A formal framework for modelling component extension and layers in distributed embedded systems. Dresden: TUDpress, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ramackers, Guustaaf Jan. Integrated object modelling: An executable specification framework for business analysis and information system design. Amsterdam: Thesis Publishers, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lawry, Jonathan, James G. Shanahan, and Anca L. Ralescu, eds. Modelling with words: Learning, fusion, and reasoning within a formal linguistic representation framework. Berlin: Springer, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Vanrolleghem, Peter A. Modelling Aspects of Water Framework Directive Implementation. IWA Publishing, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bianconi, Ginestra. Multilayer Network Models. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198753919.003.0010.

Full text
Abstract:
This chapter presents the existing modelling frameworks for multiplex and multilayer networks. Multiplex network models are divided into growing multiplex network models and null models of multiplex networks. Growing multiplex networks are here shown to explain the main dynamical rules responsible for the emergent properties of multiplex networks, including the scale-free degree distribution, interlayer degree correlations and multilayer communities. Null models of multiplex networks are described in the context of maximum-entropy multiplex network ensembles. Randomization algorithms to test the relevance of network properties against null models are here described. Moreover, multi-slice temporal network models capturing the main properties of real temporal network data are presented. Finally, null models of general multilayer networks and networks of networks are characterized.
APA, Harvard, Vancouver, ISO, and other styles
7

Abdullah, Ahmad Fikri Bin. Methodology for Processing Raw LIDAR Data to Support Urban Flood Modelling Framework: UNESCO-IHE PhD Thesis. Taylor & Francis Group, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Abdullah, Ahmad Fikri Bin. Methodology for Processing Raw LIDAR Data to Support Urban Flood Modelling Framework: UNESCO-IHE PhD Thesis. Taylor & Francis Group, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Data modelling frameworks"

1

Farsi, Maryam, Amin Hosseinian-Far, Alireza Daneshkhah, and Tabassom Sedighi. "Mathematical and Computational Modelling Frameworks for Integrated Sustainability Assessment (ISA)." In Strategic Engineering for Cloud Computing and Big Data Analytics, 3–27. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-52491-7_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Léonard, Michel, and Ian Prince. "NelleN: A framework for literate data modelling." In Notes on Numerical Fluid Mechanics and Multidisciplinary Design, 239–56. Cham: Springer International Publishing, 1992. http://dx.doi.org/10.1007/bfb0035135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Degiannakis, Stavros, and Christos Floros. "Multiple Model Comparison and Hypothesis Framework Construction." In Modelling and Forecasting High Frequency Financial Data, 110–60. London: Palgrave Macmillan UK, 2015. http://dx.doi.org/10.1057/9781137396495_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chen, Yenming J., and Albert Jing-Fuh Yang. "Crowd Density Estimation from Few Radio-Frequency Tracking Devices: I. A Modelling Framework." In Data Mining and Big Data, 390–98. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61845-6_39.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zieliński, Bartosz. "Modular Term-Rewriting Framework for Artifact-Centric Business Process Modelling." In Model and Data Engineering, 71–78. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66854-3_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Díaz Mercado, Vitali. "Methodological framework." In Spatio-Temporal Characterisation of Drought: Data Analytics, Modelling, Tracking, Impact and Prediction, 23–28. London: CRC Press, 2022. http://dx.doi.org/10.1201/9781003279655-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Qin, Zengchang, and Jonathan Lawry. "Knowledge Discovery in a Framework for Modelling with Words." In Soft Computing for Knowledge Discovery and Data Mining, 241–76. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-69935-6_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kiran, Deshpande, and Madhuri Rao. "Modelling Auto-scalable Big Data Enabled Log Analytic Framework." In Computer Networks and Inventive Communication Technologies, 857–70. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-3035-5_64.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hari Prasad, D., and M. Punithavalli. "An Integrated Framework for Mixed Data Clustering Using Growing Hierarchical Self-Organizing Map (GHSOM)." In Mathematical Modelling and Scientific Computation, 471–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-28926-2_53.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Othman, Muhaini, Siti Aisyah Mohamed, Mohd Hafizul Afifi Abdullah, Munirah Mohd Yusof, and Rozlini Mohamed. "A Framework to Cluster Temporal Data Using Personalised Modelling Approach." In Advances in Intelligent Systems and Computing, 181–90. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-72550-5_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Data modelling frameworks"

1

Bamba, Inshita, Yashika, Jahanvi Singh, Pronika Chawla, and Kritika Soni. "Big Social Data and Modelling Frameworks." In 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS). IEEE, 2021. http://dx.doi.org/10.1109/icais50930.2021.9395935.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Yaqiong, Xuhui Fan, Ling Chen, Bin Li, Zheng Yu, and Scott A. Sisson. "Recurrent Dirichlet Belief Networks for interpretable Dynamic Relational Data Modelling." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/342.

Full text
Abstract:
The Dirichlet Belief Network (DirBN) has been recently proposed as a promising approach in learning interpretable deep latent representations for objects. In this work, we leverage its interpretable modelling architecture and propose a deep dynamic probabilistic framework -- the Recurrent Dirichlet Belief Network (Recurrent-DBN) -- to study interpretable hidden structures from dynamic relational data. The proposed Recurrent-DBN has the following merits: (1) it infers interpretable and organised hierarchical latent structures for objects within and across time steps; (2) it enables recurrent long-term temporal dependence modelling, which outperforms the one-order Markov descriptions in most of the dynamic probabilistic frameworks; (3) the computational cost scales to the number of positive links only. In addition, we develop a new inference strategy, which first upward-and-backward propagates latent counts and then downward-and-forward samples variables, to enable efficient Gibbs sampling for the Recurrent-DBN. We apply the Recurrent-DBN to dynamic relational data problems. The extensive experiment results on real-world data validate the advantages of the Recurrent-DBN over the state-of-the-art models in interpretable latent structure discovery and improved link prediction performance.
APA, Harvard, Vancouver, ISO, and other styles
3

Berryman, Matthew, Rohan Wickramasuriya, Vu Lam Co, Qun Chen, and Pascal Pascal. "Modelling and Data Frameworks for Understanding Infrastructure Systems through a Systems-of-Systems Lens." In International Symposium for Next Generation Infrastructure. University of Wollongong, SMART Infrastructure Facility, 2014. http://dx.doi.org/10.14453/isngi2013.proc.5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lener, Alberto. "Foundational Study of Artificial Intelligence Reservoir Simulation by Integrating Digital Core Technology and Logging Data to Optimise Recovery." In ADIPEC. SPE, 2022. http://dx.doi.org/10.2118/211066-ms.

Full text
Abstract:
In strategising development of hydrocarbon reservoirs, substantial uncertainty in recovery potential is often attributed to subsurface heterogeneity. Challenged reservoir characterisation is proposed to be directly due to the inability of correlating spatial scales: core analyses to well logging data. This study's central goal is to propose a ‘Multiscale link' by challenging empirical correlations of multiphase displacement and ‘upscaling' processes of reservoir characterisation by exploiting Artificial Intelligence and ‘Digital Rock Technology', aiming at minimising geological risk. By exploiting 40 years of a North Sea field's appraisal and production and formulating an AI-compatible ‘multiscale' data set, petrophysical correlations have integrated a further innovative concept: borehole image processing to characterise geological features and oil potential. In binding the ‘Multiscale', fundamental multiphase dynamics at pore-scale have been critically associated to most affine reservoir modelling ‘deep learning' frameworks, leading to ideating an AI workflow linking field-scale rates, well logs and core analyses to the continuously-reconstructed pore network, whilst extracting invaluable multiphase dependencies. The preliminary results implementing selected Machine Learning algorithms, coupled with advanced digital technologies in reservoir simulation, have been showcased in proposing a solution to the ‘Multiscale link' in reservoir characterisation, providing the groundworks for its programming realisation. Importantly, it was concluded that the layers of complexity within learning algorithms, which constrained its execution within this project, undoubtedly require multidisciplinary approach. By conceiving a physically and coding-robust workflow for advanced reservoir characterisation and modelling permitting ‘multiscale' representative multiphase simulations, identification of optimal EOR becomes attainable. This leading edge represents potential to minimise geological risk, thus de-risking reservoir management (in turn FDP) of mature and live fields; but also expected to set a starting point for further developments of Artificial Intelligence in the oil and gas industry.
APA, Harvard, Vancouver, ISO, and other styles
5

Honfi, Dániel, John Leander, Ivar Björnsson, and Oskar Larsson Ivanov. "A practical approach for supporting decisions in bridge condition assessment and monitoring." In IABSE Congress, New York, New York 2019: The Evolving Metropolis. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), 2019. http://dx.doi.org/10.2749/newyork.2019.2136.

Full text
Abstract:
In this contribution a practical and rational decision-making approach is presented to be applied for common bridges typically managed by public authorities. The authors have developed a model with the intention to be applicable for practical cases for common bridges in the daily work of bridge operators responsible for a large number of assets, yet still maintaining the principles of more generic frameworks based on probabilistic decision theory. Three main attributes of the verification of sufficiency of structural performance are considered, namely: 1) the level of sophistication of modelling performance, 2) the degree of verification and acceptance criteria in terms of dealing with uncertainties and consequences, 3) the extent to which information is obtained and incorporated in the verification. The simplicity of the approach is demonstrated through an illustrative case study inspired by practical condition assessment decision problems. It is argued that in practical cases it may be desirable to utilize less advanced methods owing to constraints in resources or lack of reliable data (e.g. based on structural health monitoring or other on-site measurement techniques).
APA, Harvard, Vancouver, ISO, and other styles
6

Bohlmann, Sebastian, Matthias Becker, Helena Szczerbicka, and Volkhard Klinger. "A Data Management Framework Providing Online-Connectivity In Symbiotic Simulation." In 24th European Conference on Modelling and Simulation. ECMS, 2010. http://dx.doi.org/10.7148/2010-0302-0308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Koffi, Itoro Udofort. "A Deep Learning Approach for the Prediction of Oil Formation Volume Factor." In SPE Annual Technical Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/208627-stu.

Full text
Abstract:
Accurate knowledge of Pressure-Volume-Temperature (PVT) properties is crucial in reservoir and production engineering computational applications. One of these properties is the oil formation volume factor (Bo), which assumes a significant role in calculating some of the prominent petroleum engineering terms and parameters, such as depletion rate, oil in place, reservoir simulation, material balance equation, well testing, reservoir production calculation, etc. These properties are ideally measured experimentally in the laboratory, based on downhole or recommended surface samples. Faster and cheaper methods are important for real-time decision making and empirically developed correlations are used in the prediction of this property. This work is aimed at developing a more accurate prediction method than the more common methods. The prediction method used is based on a supervised deep neural network to estimate oil formation volume factor at bubble point pressure as a function of gas-oil ratio, gas gravity, specific oil gravity, and reservoir temperature. Deep learning is applied in this paper to address the inaccuracy of empirically derived correlations used for predicting oil formation volume factor. Neural Networks would help us find hidden patterns in the data, which cannot be found otherwise. A multi-layer neural network was used for the prediction via the Anaconda programming environment. Two frameworks for modelling data using deep learning, viz. TensorFlow and Keras, were utilized, and PVT variables selected as input neurons while employing early stopping, which uses a part of our data not fed to the model to test its performance to prevent overfitting. In the modelling process, a dataset of 2,994 records retrieved from the Niger Delta region was used. The dataset was randomly divided into three parts of which 60% was used for training, 20% for validation, and 20% for testing. The result predicted by the network outperformed existing correlations according to the statistical parameters used for the same set of field data. The network has a mean average error of 0.05, which is the lowest when compared to the error generated by other correlation models. The predictive capability of this network is found to be higher than existing models, based on the findings of this work.
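A minimal sketch in the spirit of the described workflow, using Keras with early stopping on synthetic stand-in PVT records; the architecture, units, and data are assumptions rather than the paper's tuned model:

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)

# Synthetic stand-ins for (gas-oil ratio, gas gravity, oil specific gravity, temperature) -> Bo.
n = 2000
X = rng.random((n, 4)).astype("float32")
bo = (1.0 + 0.4 * X[:, 0] + 0.1 * X[:, 1] - 0.2 * X[:, 2] + 0.15 * X[:, 3]
      + rng.normal(0, 0.01, n)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")

# Early stopping monitors the validation loss and keeps the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
history = model.fit(X, bo, validation_split=0.2, epochs=200, batch_size=64,
                    callbacks=[early_stop], verbose=0)

print("best validation MAE:", min(history.history["val_loss"]))
```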
APA, Harvard, Vancouver, ISO, and other styles
8

Schallenberg, A., W. Nebel, A. Herrholz, P. A. Hartmann, and F. Oppenheimer. "OSSS+R: A framework for application level modelling and synthesis of reconfigurable systems." In 2009 Design, Automation & Test in Europe Conference & Exhibition (DATE'09). IEEE, 2009. http://dx.doi.org/10.1109/date.2009.5090805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

"Enhancement of water storage estimates using GRACE data assimilation with particle filter framework." In 22nd International Congress on Modelling and Simulation. Modelling and Simulation Society of Australia and New Zealand (MSSANZ), Inc., 2017. http://dx.doi.org/10.36334/modsim.2017.h5.tangdamrongsub.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Rubio-Solis, Adrian, George Panoutsos, and Steve Thornton. "A Data-driven fuzzy modelling framework for the classification of imbalanced data." In 2016 IEEE 8th International Conference on Intelligent Systems (IS). IEEE, 2016. http://dx.doi.org/10.1109/is.2016.7737438.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Data modelling frameworks"

1

Russell, H. A. J., and S. K. Frey. Canada One Water: integrated groundwater-surface-water-climate modelling for climate change adaptation. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/329092.

Full text
Abstract:
Canada 1 Water is a 3-year governmental multi-department-private-sector-academic collaboration to model the groundwater-surface-water of Canada coupled with historic climate and climate scenario input. To address this challenge, continental Canada has been divided into six large watershed basins of approximately two million km2 each. The model domains are based on natural watershed boundaries and include approximately 1 million km2 of the United States. In year one (2020-2021), data assembly and validation of some 20 datasets (layers) is the focus of work, along with conceptual model development. To support analysis of the entire water balance, the modelling framework consists of three distinct components and modelling software. Land surface modelling with the Community Land Model will provide information needed both for regional climate modelling using the Weather Research & Forecasting model (WRF) and as input to HydroGeoSphere for groundwater-surface-water modelling. The inclusion of the transboundary watersheds will provide a first-time assessment of water resources in this critical international domain. Modelling is also being integrated with remote sensing datasets, notably the Gravity Recovery and Climate Experiment (GRACE). GRACE supports regional-scale watershed analysis of total water flux. GRACE, along with terrestrial time-series data, will provide validation datasets for model results to ensure that the final project outputs are representative and reliable. The project has an active engagement and collaborative effort underway to maximize the long-term benefit of the framework. Many of the supporting model datasets will be published under an open access licence to support broad usage and integration.
APA, Harvard, Vancouver, ISO, and other styles
2

Faverjon, Céline, Angus Cameron, and Marco De Nardi. Modelling framework to quantify the risk of AMR exposure via food products - example of chicken and lettuce. Food Standards Agency, April 2022. http://dx.doi.org/10.46756/sci.fsa.qum110.

Full text
Abstract:
Antimicrobial resistance (AMR) is a complex issue in which microorganisms survive antimicrobial treatments, making such infections more difficult to treat. It is a global threat to public health. To increase the evidence base for AMR in the food chain, the FSA has funded several projects to collect data to monitor the trends, prevalence, emergence, spread and decline of AMR bacteria in a range of retail foods in the UK. However, these data and information from the wider literature had yet to be used to create tools that aid the production of quantitative risk assessments determining the risk to consumers of AMR in the food chain. To assist with this, a set of modular templates of AMR risk within foods needed to be developed, allowing the efficient creation of reproducible risk assessments of AMR and keeping the FSA at the forefront of food safety.
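Quantitative exposure assessments of this kind are typically built as a chain of prevalence, concentration and transfer terms propagated by Monte Carlo simulation. The sketch below is a generic, illustrative template with invented parameter values, not the FSA's modular model:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000                                   # Monte Carlo iterations

    # Assumed inputs: prevalence of AMR bacteria on retail chicken, concentration
    # on contaminated portions (log10 CFU/g), portion size and cross-contamination
    # transfer to ready-to-eat lettuce. All values are illustrative only.
    prevalence = rng.beta(20, 180, n)             # roughly 10% of portions contaminated
    contaminated = rng.random(n) < prevalence
    log_conc = rng.normal(2.0, 0.8, n)            # log10 CFU/g on contaminated chicken
    portion_g = rng.normal(150, 30, n).clip(min=10)
    transfer_frac = rng.beta(2, 200, n)           # fraction transferred to lettuce

    # Exposure (CFU ingested via lettuce) per serving; zero if the chicken portion
    # was not contaminated in the first place.
    cfu_chicken = 10 ** log_conc * portion_g
    exposure = np.where(contaminated, cfu_chicken * transfer_frac, 0.0)

    print("P(exposure > 0)      :", np.mean(exposure > 0))
    print("Mean CFU per serving :", exposure.mean())
    print("95th percentile CFU  :", np.quantile(exposure, 0.95))

The modular idea is that each term (prevalence, concentration, transfer) can be swapped for a food- or pathogen-specific distribution without changing the overall structure.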
APA, Harvard, Vancouver, ISO, and other styles
3

Daudelin, Francois, Lina Taing, Lucy Chen, Claudia Abreu Lopes, Adeniyi Francis Fagbamigbe, and Hamid Mehmood. Mapping WASH-related disease risk: A review of risk concepts and methods. United Nations University Institute for Water, Environment and Health, December 2021. http://dx.doi.org/10.53328/uxuo4751.

Full text
Abstract:
The report provides a review of how risk is conceived of, modelled, and mapped in studies of infectious water, sanitation, and hygiene (WASH) related diseases. It focuses on spatial epidemiology of cholera, malaria and dengue to offer recommendations for the field of WASH-related disease risk mapping. The report notes a lack of consensus on the definition of disease risk in the literature, which limits the interpretability of the resulting analyses and could affect the quality of the design and direction of public health interventions. In addition, existing risk frameworks that consider disease incidence separately from community vulnerability have conceptual overlap in their components and conflate the probability and severity of disease risk into a single component. The report identifies four methods used to develop risk maps: i) observational, ii) index-based, iii) associative modelling and iv) mechanistic modelling. Observational methods are limited by a lack of historical data sets and their assumption that historical outcomes are representative of current and future risks. The more general index-based methods offer a highly flexible approach based on observed and modelled risks and can be used for partially qualitative or difficult-to-measure indicators, such as socioeconomic vulnerability. For multidimensional risk measures, indices representing different dimensions can be aggregated to form a composite index or be considered jointly without aggregation. The latter approach can distinguish between different types of disease risk such as outbreaks of high frequency/low intensity and low frequency/high intensity. Associative models, including machine learning and artificial intelligence (AI), are commonly used to measure current risk, future risk (short-term for early warning systems) or risk in areas with low data availability, but concerns about bias, privacy, trust, and accountability in algorithms can limit their application. In addition, they typically do not account for gender and demographic variables that allow risk analyses for different vulnerable groups. As an alternative, mechanistic models can be used for similar purposes as well as to create spatial measures of disease transmission efficiency or to model risk outcomes from hypothetical scenarios. Mechanistic models, however, are limited by their inability to capture locally specific transmission dynamics. The report recommends that future WASH-related disease risk mapping research:

- Conceptualise risk as a function of the probability and severity of a disease risk event. Probability and severity can be disaggregated into sub-components. For outbreak-prone diseases, probability can be represented by a likelihood component while severity can be disaggregated into transmission and sensitivity sub-components, where sensitivity represents factors affecting health and socioeconomic outcomes of infection.
- Employ jointly considered unaggregated indices to map multidimensional risk. Individual indices representing multiple dimensions of risk should be developed using a range of methods to take advantage of their relative strengths.
- Develop and apply collaborative approaches with public health officials, development organizations and relevant stakeholders to identify appropriate interventions and priority levels for different types of risk, while ensuring the needs and values of users are met in an ethical and socially responsible manner.
- Enhance identification of vulnerable populations by further disaggregating risk estimates and accounting for demographic and behavioural variables and using novel data sources such as big data and citizen science.

This review is the first to focus solely on WASH-related disease risk mapping and modelling. The recommendations can be used as a guide for developing spatial epidemiology models in tandem with public health officials and to help detect and develop tailored responses to WASH-related disease outbreaks that meet the needs of vulnerable populations. The report’s main target audience is modellers, public health authorities and partners responsible for co-designing and implementing multi-sectoral health interventions, with a particular emphasis on facilitating the integration of health and WASH services delivery contributing to Sustainable Development Goals (SDG) 3 (good health and well-being) and 6 (clean water and sanitation).
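The index-based approach recommended above, with risk conceived as a function of probability and severity, can be illustrated with a toy composite-index calculation; the indicator values, the equal weighting and the min-max normalisation below are assumptions for demonstration only:

    import numpy as np

    # Hypothetical indicator values for five districts (rows).
    # Columns: outbreak likelihood, transmission potential, sensitivity
    # (socioeconomic vulnerability of the affected population).
    indicators = np.array([
        [0.20, 0.55, 0.70],
        [0.05, 0.40, 0.30],
        [0.60, 0.80, 0.50],
        [0.35, 0.20, 0.90],
        [0.10, 0.65, 0.25],
    ])

    # Min-max normalise each indicator so they are comparable across districts.
    mins, maxs = indicators.min(axis=0), indicators.max(axis=0)
    norm = (indicators - mins) / (maxs - mins)

    likelihood = norm[:, 0]
    severity = norm[:, 1:].mean(axis=1)        # aggregate transmission + sensitivity

    # Option 1: jointly considered, unaggregated indices (kept as a pair per district).
    joint = np.column_stack([likelihood, severity])

    # Option 2: a single composite index, here the product of probability and severity.
    composite = likelihood * severity

    for d, (lik, sev, comp) in enumerate(zip(likelihood, severity, composite)):
        print(f"district {d}: likelihood={lik:.2f} severity={sev:.2f} composite={comp:.2f}")

Keeping the pair in `joint` rather than collapsing it into `composite` is what allows high-frequency/low-intensity and low-frequency/high-intensity risks to be distinguished.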
APA, Harvard, Vancouver, ISO, and other styles
4

Nechaev, V., Володимир Миколайович Соловйов, and A. Nagibas. Complex economic systems structural organization modelling. Politecnico di Torino, 2006. http://dx.doi.org/10.31812/0564/1118.

Full text
Abstract:
One of the well-known results of management theory is that multi-stage hierarchical organization of management is unstable. Hence the ideas expressed in a number of works by Don Tapscott on the advantages of network organization of businesses over vertically integrated ones. While studying the basic tendencies of business organization under globalization, computerization and internetization of society, together with the financial results of well-known companies, the authors conclude that companies such as IBM, Boeing, Mercedes-Benz and some others have not been engaged in their traditional business for a long time; their partner networks perform this function instead, while the companies themselves act as system integrators. Tapscott's idea finds confirmation within a powerful new direction of modern interdisciplinary science – the theory of complex networks (CN) [2]. CNs are multifractal objects, and the loss of multifractality is an indicator of a system's transition from a more complex to a simpler state. We tested the multifractal properties of the data using the wavelet transform modulus maxima approach in order to analyze the scaling properties of each company. Comparative analysis of the singularity spectrum f(α), namely the difference between its maximum and minimum values of α (Δ = αmax − αmin), shows that IBM is considerably more fractal than Apple Computer: for IBM the value of Δ equals 0.3, while for the vertically integrated company Apple it is only 0.06, five times less. Comparison of other companies shows that this relationship is of a general character. Taking into consideration that network organization of business has become dominant in the last 5-10 years, we carried out research on the selected companies over the earliest possible period, determined by the availability of data on the Internet or by the historically later start of stock trading for computer companies. The singularity spectrum of the first group of companies turned out to be considerably narrower, and shifted toward smaller values of α, in the pre-network period. This means the dynamic series were antipersistent, i.e. these companies' management was rigidly controlled while the impact of market mechanisms was minimized. For the second group of companies, even where the situation did change, it did not change for the better. In addition, we discuss applications to the construction of stock portfolios that have a stable ratio of risk to return.
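The spectrum-width measure Δ = αmax − αmin can be approximated in a few lines; the sketch below uses a much simplified structure-function estimate rather than the wavelet transform modulus maxima method used by the authors, and the data are synthetic:

    import numpy as np

    def spectrum_width(x, qs=np.arange(0.5, 5.5, 0.5), lags=np.arange(4, 65, 4)):
        # Fit generalized scaling exponents zeta(q) from |increment|^q moments,
        # then approximate the singularity spectrum width via a numerical
        # Legendre transform of tau(q) = zeta(q) - 1 (positive moments only,
        # to keep this toy estimate numerically stable).
        zeta = []
        for q in qs:
            sq = [np.mean(np.abs(x[lag:] - x[:-lag]) ** q) for lag in lags]
            zeta.append(np.polyfit(np.log(lags), np.log(sq), 1)[0])
        tau = np.array(zeta) - 1.0
        alpha = np.gradient(tau, qs)              # alpha(q) = d tau / d q
        return alpha.max() - alpha.min()

    # Example: a monofractal Gaussian random walk should give a narrow spectrum,
    # i.e. a small width, in contrast to a genuinely multifractal price series.
    rng = np.random.default_rng(1)
    walk = np.cumsum(rng.standard_normal(5000))
    print("spectrum width (random walk):", spectrum_width(walk))

Applied to log-price series, a larger width would indicate stronger multifractality, in the spirit of the comparison between IBM and Apple described in the abstract.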
APA, Harvard, Vancouver, ISO, and other styles
5

Murad, M. Hassan, Stephanie M. Chang, Celia Fiordalisi, Jennifer S. Lin, Timothy J. Wilt, Amy Tsou, Brian Leas, et al. Improving the Utility of Evidence Synthesis for Decision Makers in the Face of Insufficient Evidence. Agency for Healthcare Research and Quality (AHRQ), April 2021. http://dx.doi.org/10.23970/ahrqepcwhitepaperimproving.

Full text
Abstract:
Background: Healthcare decision makers strive to operate on the best available evidence. The Agency for Healthcare Research and Quality Evidence-based Practice Center (EPC) Program aims to support healthcare decision makers by producing evidence reviews that rate the strength of evidence. However, the evidence base is often sparse or heterogeneous, or otherwise results in a high degree of uncertainty and insufficient evidence ratings. Objective: To identify and suggest strategies to make insufficient ratings in systematic reviews more actionable. Methods: A workgroup comprising EPC Program members convened throughout 2020. We conducted iterative discussions considering information from three data sources: a literature review for relevant publications and frameworks, a review of a convenience sample of past systematic reviews conducted by the EPCs, and an audit of methods used in past EPC technical briefs. Results: Several themes emerged across the literature review, review of systematic reviews, and review of technical brief methods. In the purposive sample of 43 systematic reviews, the use of the term “insufficient” covered both instances of no evidence and instances of evidence being present but insufficient to estimate an effect. The results of the literature review and review of the EPC Program systematic reviews illustrated the importance of clearly stating the reasons for insufficient evidence. Results of both the literature review and review of systematic reviews highlighted the factors decision makers consider when making decisions when evidence of benefits or harms is insufficient, such as costs, values, preferences, and equity. We identified five strategies for supplementing systematic review findings when evidence on benefits or harms is expected to be, or is found to be, insufficient, including: reconsidering eligible study designs, summarizing indirect evidence, summarizing contextual and implementation evidence, modelling, and incorporating unpublished health system data. Conclusion: Throughout early scoping, protocol development, review conduct, and review presentation, authors should consider five possible strategies to supplement potentially insufficient findings of benefits or harms. When there is no evidence available for a specific outcome, reviewers should use a statement such as “no studies” instead of “insufficient.” The main reasons for an insufficient evidence rating should be explicitly described.
APA, Harvard, Vancouver, ISO, and other styles
6

Sett, Dominic, Florian Waldschmidt, Alvaro Rojas-Ferreira, Saut Sagala, Teresa Arce Mojica, Preeti Koirala, Patrick Sanady, et al. Climate and disaster risk analytics tool for adaptive social protection. United Nations University - Institute for Environment and Human Security, March 2022. http://dx.doi.org/10.53324/wnsg2302.

Full text
Abstract:
Adaptive Social Protection (ASP) as discussed in this report is an approach to enhance the well-being of communities at risk. As an integrated approach, ASP builds on the interface of Disaster Risk Management (DRM), Climate Change Adaptation (CCA) and Social Protection (SP) to address interconnected risks by building resilience, thereby overcoming the shortcomings of traditionally sectoral approaches. The design of meaningful ASP measures needs to be informed by specific information on risk, risk drivers and impacts on communities at risk. In contrast, a limited understanding of risk and its drivers can potentially lead to maladaptation practices. Therefore, multidimensional risk assessments are vital for the successful implementation of ASP. Although many sectoral tools to assess risks exist, available integrated risk assessment methods across sectors are still inadequate in the context of ASP, presenting an important research and implementation gap. ASP is now gaining international momentum, making the timely development of a comprehensive risk analytics tool even more important, including in Indonesia, where nationwide implementation of ASP is currently under way. OBJECTIVE: To address this gap, this study explores the feasibility of a climate and disaster risk analytics tool for ASP (CADRAT-ASP), combining sectoral risk assessment in the context of ASP with a more comprehensive risk analytics approach. Risk analytics improve the understanding of risks by locating and quantifying the potential impacts of disasters. For example, the Economics of Climate Adaptation (ECA) framework quantifies probable current and expected future impacts of extreme events and determines the monetary cost and benefits of specific risk management and adaptation measures. Using the ECA framework, this report examines the viability and practicality of applying a quantitative risk analytics approach for non-financial and non-tangible assets that were identified as central to ASP. This quantitative approach helps to identify cost-effective interventions to support risk-informed decision making for ASP. Therefore, we used Nusa Tenggara, Indonesia, as a case study, to identify potential entry points and examples for the further development and application of such an approach. METHODS & RESULTS: The report presents an analysis of central risks and related impacts on communities in the context of ASP. In addition, central social protection dimensions (SPD) necessary for the successful implementation of ASP and respective data needs from a theoretical perspective are identified. The application of the quantitative ECA framework is tested for tropical storms in the context of ASP, providing an operational perspective on technical feasibility. Finally, recommendations on further research for the potential application of a suitable ASP risk analytics tool in Indonesia are proposed. Results show that the ECA framework and its quantitative modelling platform CLIMADA successfully quantified the impact of tropical storms on four SPDs. These SPDs (income, access to health, access to education and mobility) were selected based on the results from the Hazard, Exposure and Vulnerability Assessment (HEVA) conducted to support the development of an ASP roadmap for the Republic of Indonesia (UNU-EHS 2022, forthcoming). The SPDs were modelled using remote sensing, gridded data and available global indices. 
The results illustrate the value of the outcome to inform decision making and a better allocation of resources to deliver ASP to the case study area. RECOMMENDATIONS: This report highlights strong potential for the application of the ECA framework in the ASP context. The impact of extreme weather events on four social protection dimensions, ranging from access to health care and income to education and mobility, were successfully quantified. In addition, further developments of CADRAT-ASP can be envisaged to improve modelling results and uptake of this tool in ASP implementation. Recommendations are provided for four central themes: mainstreaming the CADRAT approach into ASP, data and information needs for the application of CADRAT-ASP, methodological advancements of the ECA framework to support ASP and use of CADRAT-ASP for improved resilience-building. Specific recommendations are given, including the integration of additional hazards, such as flood, drought or heatwaves, for a more comprehensive outlook on potential risks. This would provide a broader overview and allow for multi-hazard risk planning. In addition, high-resolution local data and stakeholder involvement can increase both ownership and the relevance of SPDs. Further recommendations include the development of a database and the inclusion of climate and socioeconomic scenarios in analyses.
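The ECA-style impact quantification combines hazard intensity, exposed value and a vulnerability (impact) function. The following generic sketch illustrates that combination in plain NumPy with invented values; it does not use the CLIMADA platform's API:

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical grid cells covering a case-study region.
    n_cells = 1000
    wind_speed = rng.gamma(shape=2.0, scale=15.0, size=n_cells)    # storm wind (m/s)
    exposed_value = rng.lognormal(mean=10, sigma=1, size=n_cells)  # value of a social
                                                                   # protection dimension per cell

    def damage_fraction(v, v_threshold=20.0, v_half=60.0):
        # Assumed sigmoid impact function: no damage below the threshold,
        # 50% damage at v_half, saturating towards 1 for extreme winds.
        x = np.clip(v - v_threshold, 0, None) / (v_half - v_threshold)
        return x ** 3 / (1 + x ** 3)

    impact_per_cell = exposed_value * damage_fraction(wind_speed)
    print("Total expected impact  :", impact_per_cell.sum())
    print("Share of cells affected:", np.mean(impact_per_cell > 0))

In the CADRAT-ASP setting, the exposed value would be replaced by gridded estimates of income, access to health, access to education or mobility, and the impact function would be calibrated for each social protection dimension.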
APA, Harvard, Vancouver, ISO, and other styles
7

Verburg, Peter H., Žiga Malek, Sean P. Goodwin, and Cecilia Zagaria. The Integrated Economic-Environmental Modeling (IEEM) Platform: IEEM Platform Technical Guides: User Guide for the IEEM-enhanced Land Use Land Cover Change Model Dyna-CLUE. Inter-American Development Bank, September 2021. http://dx.doi.org/10.18235/0003625.

Full text
Abstract:
The Conversion of Land Use and its Effects modeling framework (CLUE) was developed to simulate land use change using empirically quantified relations between land use and its driving factors in combination with dynamic modeling of competition between land use types. Being one of the most widely used spatial land use models, CLUE has been applied all over the world at different scales. In this document, we demonstrate how the model can be used to develop a multi-regional application. This means that, instead of developing numerous individual models, the user only prepares one CLUE model application, which then allocates land use change across different regions. This facilitates integration with the Integrated Economic-Environmental Modeling (IEEM) Platform for subnational assessments and increases the efficiency of the IEEM and Ecosystem Services Modeling (IEEMESM) workflow. Multi-regional modelling is particularly useful in large and diverse countries, where we can expect different spatial distributions of land use change in different regions: regions with different levels of socio-economic development, different topographies (flat vs. mountainous), or different climates (dry vs. humid) within the same country. Accounting for such regional differences also facilitates developing ecosystem services models that consider region-specific biophysical characteristics. This manual, and the data provided with it, demonstrates multi-regional land use change modeling using the country of Colombia as an example. The user will learn how to prepare the data for the model application, and how the multi-regional run differs from a single-region simulation.
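CLUE-type models allocate an exogenous demand for each land use type to grid cells by combining empirically estimated local suitability with an iteratively adjusted competition term. The following highly simplified, single-region sketch of that allocation loop uses invented suitability values and demands and is not the Dyna-CLUE code:

    import numpy as np

    rng = np.random.default_rng(3)

    n_cells, n_lu = 500, 3                        # grid cells and land use types
    suitability = rng.random((n_cells, n_lu))     # stand-in for regression-based suitability
    demand = np.array([200, 180, 120])            # cells demanded per land use type

    # Iteratively adjust a per-type competition factor until the winning land use
    # in each cell (highest suitability + competition score) roughly matches demand.
    competition = np.zeros(n_lu)
    for _ in range(2000):
        allocation = np.argmax(suitability + competition, axis=1)
        counts = np.bincount(allocation, minlength=n_lu)
        error = demand - counts
        if np.max(np.abs(error)) <= 1:            # close enough for this toy example
            break
        competition += 0.001 * error              # raise the score of under-allocated types

    print("demanded :", demand)
    print("allocated:", np.bincount(allocation, minlength=n_lu))

A multi-regional setup repeats this allocation per region, each with its own suitability surfaces and regional demands, which is the idea the manual walks through for Colombia.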
APA, Harvard, Vancouver, ISO, and other styles
8

Downes, Jane, ed. Chalcolithic and Bronze Age Scotland: ScARF Panel Report. Society for Antiquaries of Scotland, September 2012. http://dx.doi.org/10.9750/scarf.09.2012.184.

Full text
Abstract:
The main recommendations of the panel report can be summarised under five key headings:

- Building the Scottish Bronze Age: Narratives should be developed to account for the regional and chronological trends and diversity within Scotland at this time. A chronology based upon Scottish as well as external evidence, combining absolute dating (and the statistical modelling thereof) with re-examined typologies based on a variety of sources – material cultural, funerary, settlement, and environmental evidence – is required to construct a robust and up to date framework for advancing research.
- Bronze Age people: How society was structured and demographic questions need to be imaginatively addressed, including the degree of mobility (both short and long-distance communication), hierarchy, and the nature of the ‘family’ and the ‘individual’. A range of data and methodologies need to be employed in answering these questions, including harnessing experimental archaeology systematically to inform archaeologists of the practicalities of daily life, work and craft practices.
- Environmental evidence and climate impact: The opportunity to study the effects of climatic and environmental change on past society is an important feature of this period, as both palaeoenvironmental and archaeological data can be of suitable chronological and spatial resolution to be compared. Palaeoenvironmental work should be more effectively integrated within Bronze Age research, and inter-disciplinary approaches promoted at all stages of research and project design. This should be a two-way process, with environmental science contributing to interpretation of prehistoric societies, and in turn, the value of archaeological data to broader palaeoenvironmental debates emphasised. Through effective collaboration questions such as the nature of settlement and land-use and how people coped with environmental and climate change can be addressed.
- Artefacts in Context: The Scottish Chalcolithic and Bronze Age provide good evidence for resource exploitation and the use, manufacture and development of technology, with particularly rich evidence for manufacture. Research into these topics requires the application of innovative approaches in combination. This could include biographical approaches to artefacts or places, ethnographic perspectives, and scientific analysis of artefact composition. In order to achieve this there is a need for data collation, robust and sustainable databases and a review of the categories of data.
- Wider Worlds: Research into the Scottish Bronze Age has a considerable amount to offer other European pasts, with a rich archaeological data set that includes intact settlement deposits, burials and metalwork of every stage of development that has been the subject of a long history of study. Research should operate over different scales of analysis, tracing connections and developments from the local and regional, to the international context. In this way, Scottish Bronze Age studies can contribute to broader questions relating both to the Bronze Age and to human society in general.
APA, Harvard, Vancouver, ISO, and other styles
9

Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.

Full text
Abstract:
Background Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated to be 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning. Evidence Check questions This review aimed to address the following questions: 1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals? 3. What are the main components of recent major lung cancer screening programs or trials? 4. What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Summary of methods The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which 8 were contemporary) and 64 original publications met the inclusion criteria set across the four questions. Key findings Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials (of high-risk individuals) to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Cancer Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8% – 26.7%; P=0.004) relative reduction in mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility criteria was defined as people aged 55–74 years with a smoking history of ≥30 pack-years (years in which a smoker has consumed 20-plus cigarettes each day) and, for former smokers, ≥30 pack-years and have quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI, 1.2% – 13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), have found an even greater reduction of 26% (95% CI, 9% – 41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials. 
These seven trials demonstrated a significantly greater proportion of early stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, when considering results across mortality outcomes and early stage cancers diagnosed, LDCT screening is considered to be clinically effective. Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals? The harms of LDCT lung cancer screening include false positive tests and the consequences of unnecessary invasive follow-up procedures for conditions that are eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings when compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as refinements are made to LDCT screening protocols through: i) the application of risk predication models (e.g. the PLCOm2012), which enable a more accurate selection of the high-risk population through the use of specific criteria (beyond age and smoking history); ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules); and, iii) more judicious selection of patients for invasive procedures. Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regards to smoking cessation, there is no evidence to suggest screening participation invokes a false sense of assurance in smokers, nor a reduction in motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates, compared with general population, suggest those who participate in screening trials may already be motivated to quit. Question 3: What are the main components of recent major lung cancer screening programs or trials? There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised this into key groups to form a concise set of components for potential implementation of a national lung cancer screening program in Australia: 1. Identifying the high-risk population: recruitment, eligibility, selection and referral 2. Educating the public, people at high risk and healthcare providers; this includes creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making 3. Components necessary for health services to deliver a screening program: a. Planning phase: e.g. human resources to coordinate the program, electronic data systems that integrate medical records information and link to an established national registry b. Implementation phase: e.g. human and technological resources required to conduct LDCT examinations, interpretation of reports and communication of results to participants c. Monitoring and evaluation phase: e.g. 
monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program 4. Data reporting and research, e.g. audit and feedback to multidisciplinary teams, reporting outcomes to enhance international research into LDCT screening 5. Incorporation of smoking cessation interventions, e.g. specific programs designed for LDCT screening or referral to existing community or hospital-based services that deliver cessation interventions. Most original studies are single-institution evaluations that contain descriptive data about the processes required to establish and implement a high-risk population-based screening program. Across all studies there is a consistent message as to the challenges and complexities of establishing LDCT screening programs to attract people at high risk who will receive the greatest benefits from participation. With regards to smoking cessation, evidence from one systematic review indicates the optimal strategy for incorporating smoking cessation interventions into a LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a ‘teachable moment’ for cessation advice, especially among those people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9) Question 4: What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (in the presence of LDCT screening) as opposed to later (in the absence of LDCT screening). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies—five modelling studies, one discrete choice experiment and seven articles—that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in the settings of the US and Europe. Two studies—one from Australia and one from New Zealand—reported LDCT screening would not be cost-effective using NLST-like protocols. We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will likely be updated with new data that reduce uncertainty about factors that influence modelling outcomes, including the findings of indeterminate nodules. Gaps in the evidence There is a large and accessible body of evidence as to the effectiveness (Q1) and harms (Q2) of LDCT screening for lung cancer. Nevertheless, there are significant gaps in the evidence about the program components that are required to implement an effective LDCT screening program (Q3). Questions about LDCT screening acceptability and feasibility were not explicitly included in the scope. However, as the evidence is based primarily on US programs and UK pilot studies, the relevance to the local setting requires careful consideration. The Queensland Lung Cancer Screening Study provides feasibility data about clinical aspects of LDCT screening but little about program design. 
The International Lung Screening Trial is still in the recruitment phase and findings are not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to “inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia”.(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about transferability of criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to “important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability”.(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia. Yet they are part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to include new data from the NELSON trial and incorporate data about the costs of targeted- and immuno-therapies as these treatments become more widely available in Australia.
APA, Harvard, Vancouver, ISO, and other styles
10

COLD FORMED STEEL SHEAR WALL RACKING ANALYSIS THROUGH A MECHANISTIC APPROACH: CFS-RAMA. The Hong Kong Institute of Steel Construction, September 2022. http://dx.doi.org/10.18057/ijasc.2022.18.3.2.

Full text
Abstract:
Cold-formed steel shear wall panels are an effective lateral load resisting system in cold-formed steel or light gauge construction. The behavior of these panels is governed by the interaction of the sheathing-to-frame fasteners and the sheathing itself. Analysis of these panels for an applied lateral load (monotonic or cyclic) is therefore complex due to the inherent non-linearity of the fastener-sheathing interaction. This paper presents a novel and efficient fastener-based mechanistic approach that can reliably predict the response of cold-formed steel wall panels under an applied monotonic lateral load. The approach is purely mechanistic, alleviating the modelling complexity, computational cost and convergence issues generally encountered in finite element models. The computational time savings are on the order of seven times compared with finite element counterparts. Despite its simplicity, it gives good insight into component-level forces, such as those on studs, tracks and individual fasteners, for post-processing and performance-based seismic design at large. The approach is incorporated in a computational framework, CFS-RAMA. The approach is general, making it easy to analyze a variety of wall panel configurations with brittle sheathing materials, and the results are validated using monotonic racking test data published in the literature. The design parameters estimated using the EEEP (Equivalent Energy Elastic Plastic) method are also compared against corresponding experimental values and found to be in good agreement. The method provides a good estimate of wall panel behavior for a variety of configurations, dimensions and sheathing materials, making it an effective design tool for practicing engineers.
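The EEEP bilinearisation mentioned above replaces the measured load-displacement curve with an elastic-perfectly-plastic curve enclosing the same energy. The sketch below applies an ASTM E2126-style calculation to a synthetic monotonic curve; the curve shape and the 0.4/0.8 peak-load conventions are assumptions for illustration:

    import numpy as np

    # Synthetic monotonic racking curve: displacement (mm) vs. load (kN).
    disp = np.linspace(0.0, 60.0, 300)
    load = 20.0 * (1.0 - np.exp(-disp / 12.0)) \
           * np.exp(-(np.clip(disp - 45.0, 0.0, None) / 40.0) ** 2)

    p_peak = load.max()
    i_peak = int(load.argmax())

    # Ultimate displacement: first post-peak point where the load drops to 80% of
    # the peak, or the end of the record if it never does.
    post = np.where(load[i_peak:] <= 0.8 * p_peak)[0]
    i_u = i_peak + int(post[0]) if post.size else len(disp) - 1
    d_u = disp[i_u]

    # Elastic stiffness from the secant through the point at 40% of peak load.
    i_04 = int(np.argmax(load >= 0.4 * p_peak))
    k_e = load[i_04] / disp[i_04]

    # Energy absorbed up to the ultimate displacement (trapezoidal rule).
    area = np.sum(0.5 * (load[1:i_u + 1] + load[:i_u]) * np.diff(disp[:i_u + 1]))

    # EEEP yield load: the elastic-perfectly-plastic curve with stiffness k_e that
    # encloses the same energy up to d_u.
    p_yield = (d_u - np.sqrt(d_u ** 2 - 2.0 * area / k_e)) * k_e

    print(f"peak load        : {p_peak:.2f} kN")
    print(f"elastic stiffness: {k_e:.2f} kN/mm")
    print(f"EEEP yield load  : {p_yield:.2f} kN")
    print(f"ductility ratio  : {d_u / (p_yield / k_e):.2f}")

In practice the synthetic curve would be replaced by the envelope obtained from the mechanistic analysis or from a racking test, and the resulting yield load, stiffness and ductility are the design parameters compared against experiments in the paper.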
APA, Harvard, Vancouver, ISO, and other styles