Academic literature on the topic 'Information modelling, management and ontologies'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Information modelling, management and ontologies.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Information modelling, management and ontologies"

1

Husáková, Martina, and Vladimír Bureš. "Formal Ontologies in Information Systems Development: A Systematic Review." Information 11, no. 2 (January 27, 2020): 66. http://dx.doi.org/10.3390/info11020066.

Full text
Abstract:
Computational ontologies are machine-processable structures which represent particular domains of interest. They integrate knowledge which can be used by humans or machines for decision making and problem solving. The main aim of this systematic review is to investigate the role of formal ontologies in information systems development, i.e., how these graph-based structures can be beneficial during the analysis and design of information systems. Specific online databases were used to identify studies focused on the interconnections between ontologies and systems engineering. One hundred eighty-seven studies were found during the first phase of the investigation. Twenty-seven studies were examined after the elimination of duplicate and irrelevant documents. Mind mapping was substantially helpful in organising the basic ideas and in identifying five thematic groups that show the main roles of formal ontologies in information systems development. Formal ontologies are mainly used in the interoperability of information systems, human resource management, domain knowledge representation, the involvement of semantics in unified modelling language (UML)-based modelling, and the management of programming code and documentation. We explain the main ideas in the reviewed studies and suggest possible extensions to this research.
APA, Harvard, Vancouver, ISO, and other styles
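The abstract above describes formal ontologies as machine-processable, graph-based structures. As a minimal, self-contained sketch (the class names and the subsumption routine below are invented for illustration, not taken from the paper), a toy ontology can be modelled as a directed subclass graph in which subsumption reduces to reachability:

```python
# Minimal illustration of an ontology as a graph: classes are nodes,
# "is-a" (subclass) links are edges, and subsumption is reachability.
from collections import deque

# Toy domain: each class maps to its direct superclasses (invented example).
IS_A = {
    "InformationSystem": ["Artifact"],
    "DecisionSupportSystem": ["InformationSystem"],
    "Artifact": [],
}

def subsumes(general: str, specific: str) -> bool:
    """True if `general` is an ancestor of (or equal to) `specific`."""
    queue = deque([specific])
    seen = set()
    while queue:
        cls = queue.popleft()
        if cls == general:
            return True
        if cls in seen:
            continue
        seen.add(cls)
        queue.extend(IS_A.get(cls, []))
    return False

print(subsumes("Artifact", "DecisionSupportSystem"))  # True
print(subsumes("DecisionSupportSystem", "Artifact"))  # False
```

Real ontology engineering would use OWL and a reasoner; the sketch only illustrates why such graph structures are machine-processable.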
2

Shynkarenko, V. I., and L. I. Zhuchyi. "Constructive-Synthesizing Modelling of Ontological Document Management Support for the Railway Train Speed Restrictions." Science and Transport Progress, no. 2(98) (June 20, 2022): 59–68. http://dx.doi.org/10.15802/stp2022/268001.

Full text
Abstract:
Purpose. During the development of railway ontologies, it is necessary to take into account both the data of information systems and the regulatory support, in order to check their consistency. To do this, data integration is performed. The purpose of this work is to formalize methods for integrating heterogeneous sources of information and for ontology formation. Methodology. Constructive-synthesizing modelling of ontology formation and its resources was developed. Findings. A formalization of ontology formation has been performed, which expands the possibilities for automating the integration and coordination of data using ontologies. In the future, it is planned to extend the structural system for the formation of ontologies based on textual sources of railway regulatory documentation and information systems. Originality. The authors laid the foundations for using constructive-synthesizing modelling in the railway transport ontological domain to form the structure and data of the railway train speed restriction warning tables (database and CSV formats), their transformation into a common tabular format, vocabulary, rules and ontology individuals, as well as ontology population. Ontology learning methods have been developed to integrate data from heterogeneous sources. Practical value. The developed methods make it possible to integrate heterogeneous data sources (the structure of the table of railway train management rules, and the form and application for issuing a warning), which are railway domain-specific. They allow an ontology, both schema and individuals, to be formed from its data sources (in database and CSV formats). The integration and consistency of information system data and regulatory documentation is one aspect of increasing the level of train traffic safety.
APA, Harvard, Vancouver, ISO, and other styles
3

Baro, Dana, and Laura Neundörfer. "Contrasting Ontology Modeling with Correlation Rules for Delivery Applications." SHS Web of Conferences 102 (2021): 02002. http://dx.doi.org/10.1051/shsconf/202110202002.

Full text
Abstract:
With the increasing importance of knowledge management and variant management and the ever-growing quantity of data, ontologies have emerged as a form of knowledge representation, especially in the field of technical communication, for modelling metadata and creating correlations between them. In the area of delivery applications, the deliverable information objects acquire a certain intelligence through semantic metadata. It is expected that ontologies offer a higher level of intelligence, which could lead to an improvement in the classification, connection and delivery possibilities of content. However, creating such complex ontologies requires considerable time and effort. Thus, the question arises whether their use offers a decisive added benefit, or whether alternatives, such as untyped correlations, should be preferred. In that case, the concept of Semantic Correlation Rules can offer an opportunity to derive advantages from ontologies: by defining which classifications are connected to others, it is possible to present content tailored to user-specific information requirements. By developing use cases, we aim to evaluate the required level of intelligence of the metadata, resulting from its modelling method, to achieve this goal.
APA, Harvard, Vancouver, ISO, and other styles
4

Nägele, Daniel, and Patricia Vobl. "Ontology Modelling and Standardized Information Exchange with Content Delivery Applications in Technical Communication." SHS Web of Conferences 102 (2021): 02005. http://dx.doi.org/10.1051/shsconf/202110202005.

Full text
Abstract:
Ontologies are a technology recently used in technical communication (TC) to model information as a multidimensional net. They extend the taxonomy-based modelling of metadata in TC: any kind of relation between multiple classes and instances can be established. These ontologies can appear in the form of semantic correlation rules (SCR), which represent the connections between the metadata of objects. SCR are used in connection with component content management systems (CCMS), semantic modelling systems (SMS) and content delivery portals (CDP) to deliver the appropriate amount of content to the end user in a more precise manner. In general, ontology tools, CCMS and CDP are not based on the same ecosystem and therefore do not always work together effortlessly. One solution to this problem is exchange formats like the intelligent information Request and Delivery Standard (iiRDS), which enable standardized information exchange between supported systems. Another solution is compound information systems (CIS) like ONTOLIS, which combine a CCMS, a CDP and an SMS in one. This paper aims to investigate the effect of SCR in the CDP of a CIS like ONTOLIS and to evaluate the use of exchange formats like iiRDS.
APA, Harvard, Vancouver, ISO, and other styles
5

Smirnov, Alexander, and Nikolay Shilov. "Business Network Modelling." International Journal of Information System Modeling and Design 1, no. 4 (October 2010): 77–91. http://dx.doi.org/10.4018/jismd.2010100104.

Full text
Abstract:
Business networks have appeared as a reaction to changes taking place in the world economy, and logistic networks can be considered examples of such networks. The approach proposed in the paper is based on the idea of representing the business network members by the services they provide, and of achieving interoperability through the application of SOA standards. The approach relies on technologies such as Web services, ontologies, and context management. Web services enable interoperability at the technological level. Ontologies are used to describe knowledge domains and enable interoperability at the level of semantics. The purpose of the context is to select only the relevant information from the large amount available. The application of the approach is demonstrated in a case study from the area of dynamic logistics. The problem considered involves a continuously changing environment and requires solving in near real time.
APA, Harvard, Vancouver, ISO, and other styles
6

Bischoff, Sophie. "Modelling of Child Seat Dependencies for Digital Information Services using Semantic Technologies." SHS Web of Conferences 139 (2022): 02003. http://dx.doi.org/10.1051/shsconf/202213902003.

Full text
Abstract:
With the increasing importance of knowledge management and the growing amount of data in industry, a field of application for semantic technologies has emerged in the representation of knowledge in the area of technical communication. Semantic networks and ontologies offer a high degree of intelligence and can make complex relationships interpretable for humans and machines alike. This can lead to an improvement in the classification, connection, and delivery possibilities of content. Technical documentation is one of many areas where semantic technologies and knowledge graphs can add value to industry. This paper gives an outlook on how semantic technologies can be used in different use cases by companies in the domain of technical communication in the future. Based on a use case from a German company in the child seat industry, the use of semantic technologies and their various applications in technical communication is shown. The company has started to use semantic technologies to simplify processes and to provide its digital information products in a user- and context-oriented way. This paper focuses on the possibilities that the use of semantic technologies, especially ontologies, provides in a company. It also highlights the difficulties that can arise in the transition and transformation of data.
APA, Harvard, Vancouver, ISO, and other styles
7

Celeste, Giuseppe, Mariangela Lazoi, Mattia Mangia, and Giovanna Mangialardi. "Innovating the Construction Life Cycle through BIM/GIS Integration: A Review." Sustainability 14, no. 2 (January 11, 2022): 766. http://dx.doi.org/10.3390/su14020766.

Full text
Abstract:
The construction sector is in continuous evolution due to digitalisation and the integration into daily activities of the building information modelling (BIM) approach and methods, which impact the overall life cycle. This study investigates BIM/GIS integration through the adoption of ontologies and metamodels, providing a critical analysis of the existing literature. Ontologies and metamodels share several similarities and could be combined into potential solutions for BIM/GIS integration in complex tasks, such as asset management, where heterogeneous sources of data are involved. The research adopts a systematic literature review (SLR), providing a formal approach for retrieving scientific papers from dedicated online databases. The results are then analysed in order to describe the state of the art and suggest future research paths, which is useful for both researchers and practitioners. From the SLR, it emerged that several studies regard ontologies as a promising way to overcome the semantic barriers of BIM/GIS integration. On the other hand, metamodels (and MDE and MDA approaches in general) are rarely found in relation to the integration topic. Moreover, the joint application of ontologies and metamodels for BIM/GIS applications is an unexplored field. The novelty of this work is the proposal of the joint application of ontologies and metamodels to perform BIM/GIS integration, for the development of software and systems for asset management.
APA, Harvard, Vancouver, ISO, and other styles
8

Previtali, Mattia, Raffaella Brumana, Chiara Stanga, and Fabrizio Banfi. "An Ontology-Based Representation of Vaulted System for HBIM." Applied Sciences 10, no. 4 (February 18, 2020): 1377. http://dx.doi.org/10.3390/app10041377.

Full text
Abstract:
In recent years, many efforts have been invested in cultural heritage digitization: surveying, modelling, diagnostic analysis and historical data collection. Nowadays, this effort is in many cases directed towards historical building information modelling (HBIM). However, the architecture, engineering, construction and facility management (AEC-FM) domain is very fragmented, and many experts operating with different data types and models are involved in HBIM projects. This prevents effective communication and sharing of results, not only among different professionals but also among different projects. Semantic web tools may contribute significantly to facilitating the sharing, connection and integration of data produced in different domains and projects. The paper addresses this aspect, focusing specifically on managing the information and models acquired for vaulted systems. Information is collected within a semantics-based hub platform to perform cross-correlation. This functionality allows the rich history of construction techniques and skilled workers across Europe to be reconstructed. To this end, an ontology-based vaults database has been developed, and an example of its implementation is presented. The database makes use of a set of ontologies to effectively combine data and information from multiple heterogeneous sources; the defined ontologies provide a high-level schema of a data source and a vocabulary for user queries.
APA, Harvard, Vancouver, ISO, and other styles
9

Van Ruymbeke, M., P. Hallot, and R. Billen. "Enhancing CIDOC-CRM and compatible models with the concept of multiple interpretation." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W2 (August 17, 2017): 287–94. http://dx.doi.org/10.5194/isprs-annals-iv-2-w2-287-2017.

Full text
Abstract:
Modelling cultural heritage and archaeological objects is used as much for management as for research purposes. To ensure the sustainable benefit of digital data, models benefit from taking the data specificities of historical and archaeological domains into account. Starting from a conceptual model tailored to storing these specificities, we present, in this paper, an extended mapping to CIDOC-CRM and its compatible models. Offering an ideal framework to structure and highlight the best modelling practices, these ontologies are essentially dedicated to storing semantic data which provides information about cultural heritage objects. Based on this standard, our proposal focuses on multiple interpretation and sequential reality.
APA, Harvard, Vancouver, ISO, and other styles
10

Sousa, Cristóvão Dinis, António Lucas Soares, and Carla Sofia Pereira. "Collaborative conceptualisation processes in the development of lightweight ontologies." VINE Journal of Information and Knowledge Management Systems 46, no. 2 (May 9, 2016): 175–93. http://dx.doi.org/10.1108/vjikms-03-2015-0022.

Full text
Abstract:
Purpose In collaborative settings, such as research and development projects, obtaining the maximum benefit from knowledge management systems depends on the ability of the different partners to understand the conceptualisation underlying the system's knowledge organisation. This paper aims to show how information/knowledge organisation in a multi-organisation project can be made more effective if the domain experts are involved in the specification of the system's semantic structure. A particular aspect is further studied: the role of conceptual relations in the process of collaborative development of such structures. Design/methodology/approach An action-research approach was adopted, framed by a socio-semantic stance. A collaborative conceptual modelling platform was used to support the members of a research and development project in developing a lightweight ontology aimed at reorganising all the project information in a wiki system. Data collection was carried out by means of participant observation, interviews and a questionnaire. Findings The approach to solving the content organisation problem proved effective both in its result and in its process. It resulted in a better-organised system, enabling more efficient retrieval of project information. The collaborative development of the lightweight ontology embodied, in fact, a learning process, leading to a shared conceptualisation. The research results point to the importance of eliciting conceptual relations for structuring the project's knowledge. These results are important for the design of methods and tools to support the collaborative development of conceptual models. Originality/value This paper studies the social process leading to a shared conceptualisation, a subject that has not been sufficiently researched. This case study provides evidence about the importance of the early phases of ontology construction, especially when domain experts are deeply involved, supported by appropriate tools and guided by well-structured processes.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Information modelling, management and ontologies"

1

Nyqvist, Olof. "Information Management for Cutting Tools : Information Models and Ontologies." Doctoral thesis, Stockholm : Industriell produktion, Production Engineering, Kungliga Tekniska högskolan, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4763.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Afzal, Muhammad. "Modelling temporal aspects of healthcare processes with Ontologies." Thesis, Jönköping University, JTH, Computer and Electrical Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-12781.

Full text
Abstract:

This thesis presents an ontological model of the time aspects of a healthcare organization. It provides information about activities which take place at different intervals of time at Ryhov Hospital. These activities are series of actions which may happen in a predefined sequence and at predefined times, or which may happen at any time in a general ward or in the emergency ward of Ryhov Hospital.

To achieve the above-mentioned objective, our supervisor conducted a workshop at the start of the thesis, in which the domain experts explained the main idea of ward activities. From this workshop, the author gained considerable knowledge about activities and time aspects. The author then carried out a literature review to acquire further knowledge about ward activities, time aspects, and the methodology steps that are essential for an ontological model. After the ontological model for the time aspects had been developed, our supervisor conducted a second workshop, in which the author presented the model for evaluation.

APA, Harvard, Vancouver, ISO, and other styles
3

Tchouanguem, Djuedja Justine Flore. "Information modelling for the development of sustainable construction (MINDOC)." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0133.

Full text
Abstract:
In recent decades, controlling environmental impact through life cycle analysis has become a topical issue in the building sector. However, there are problems in exchanging information between experts when conducting studies such as the environmental assessment of a building. There is also heterogeneity between construction product databases, because they do not have the same characteristics and do not use the same basis for measuring the environmental impact of each construction product. Moreover, it is still difficult to exploit the full potential of linking BIM, the Semantic Web and construction product databases, because the idea of combining them is relatively recent. The goal of this thesis is to increase the flexibility needed to assess a building's environmental impact in a timely manner. First, our research identifies gaps in interoperability in the AEC (Architecture, Engineering and Construction) domain. Then, we address some of the shortcomings encountered by formalizing building information and generating building data in Semantic Web formats. We further promote efficient use of BIM throughout the building life cycle by integrating and referencing environmental data on construction products in a BIM tool. Moreover, semantics has been improved by enhancing a well-known building ontology, namely ifcOWL, the Web Ontology Language (OWL) version of the Industry Foundation Classes (IFC). Finally, we carry out a case study of a small building to test our methodology.
APA, Harvard, Vancouver, ISO, and other styles
4

Flycht-Eriksson, (Silvervarg) Annika. "Design and use of ontologies in information-providing dialogue systems." Doctoral thesis, Linköpings universitet, NLPLAB - Laboratoriet för databehandling av naturligt språk, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5007.

Full text
Abstract:
In this thesis, the design and use of ontologies as domain knowledge sources in information-providing dialogue systems are investigated. The research is divided into two parts: theoretical investigations that have resulted in a requirements specification for the design of ontologies to be used in information-providing dialogue systems, and empirical work on the development of a framework for the use of ontologies in such systems. The framework includes three models: a model for ontology-based semantic analysis of questions; a model for ontology-based dialogue management, specifically focus management and clarifications; and a model for ontology-based domain knowledge management, specifically the transformation of user requests into system-oriented concepts used for information retrieval. In this thesis, it is shown that using ontologies to represent and reason on domain knowledge in dialogue systems has several advantages. A deeper semantic analysis is possible in several modules, and a more natural and efficient dialogue can be achieved. Another important aspect is that it facilitates portability, i.e. the ability to reuse and adapt the dialogue system to new tasks and domains, since the domain-specific knowledge is separated from generic features in the dialogue system architecture. A further advantage is that it reduces the complexity of the linguistic processing required in various domains.
APA, Harvard, Vancouver, ISO, and other styles
5

Fragos, Serafeim. "Behavioural modelling in management and accounting information systems." Thesis, Lancaster University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.483621.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ayub, Muhammad, and Muhammad Jawad. "Structuring and Modelling Competences in the Healthcare Area with the help of Ontologies." Thesis, Jönköping University, School of Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-9627.

Full text
Abstract:

Ontology development is a systematic technique for representing existing and new knowledge about a specific domain, using models in which conceptualization is involved. This thesis presents the use of ontologies to formally represent an ontology-based competence model for potential users of quality registry reports in a healthcare organization. The model describes the professional and occupational interests and needs of the users by structuring and describing their skills and qualifications. The individual competence model has two main parts: general competence and occupational competence. The model is implemented in an ontology editor. Although our competence model gives a general view of all medical areas in a hospital, from an implementation point of view we have considered only the cardiology area in detail. The potential users of the quality registry are medical staff, county council staff and pharmaceutical staff. In this report we have also used different classifications of education, occupational fields and diseases. A user can get information about a patient and a specific disease, with treatment tips, by using various organizational resources, i.e. quality registries, electronic medical reports, and online journals. Our model also provides support for information filtering, which filters information according to the needs and competencies of the users.

APA, Harvard, Vancouver, ISO, and other styles
7

Wong, Siaw Ming. "Analyse des causes d'échec des projets d'affaires à partir d'études de cas en entreprises, et proposition d'un modèle de domaine en langage UML." Phd thesis, Université de La Rochelle, 2010. http://tel.archives-ouvertes.fr/tel-00556609.

Full text
Abstract:
Despite efforts to increase the maturity of the project management profession, the failure rate of business projects (as opposed to technical projects) remains high. It was found that current project management standards do not take into account the constraints of the context in which projects are executed, and that, as a result, the management of business projects had not been studied in depth. The objective of this transdisciplinary research is therefore first to gain a better understanding of the subject by examining why the failure of a business project is considered a failure from the organisation's point of view, and then to formalise the acquired knowledge in a format that allows it to be subsequently enriched and applied. Building on the open-systems model, three case studies were conducted to investigate the moderating effect of different types of organisational structures and of project management information systems on the causal relationship between project management competence and the success of business projects. This work shows that the success of business projects should be measured in terms of the achievement of the objectives of both the project and the organisation. It also identified the essential components of business project management: (1) core project management competences; (2) integrated programme management; and (3) an integrated project management information system. In all three case studies, it also emerged decisively that organisational factors have a significant impact on project success. A theory is proposed which postulates that a business project is highly likely to fail if it is not managed as an integral part of the enterprise, treated as a routine operation within it.

This means that the way business projects are managed today should be reconsidered. The role of information technology in supporting the management of such projects should also be reconsidered, and a clearer distinction should probably be drawn between business projects and more technical, "traditional" projects. Furthermore, the knowledge acquired through these case studies was formalised by developing a domain model in the UML modelling language, and the domain modelling approach was devised by modifying the conceptualisation step of the traditional ontology engineering process. Taking as a starting point the theoretical framework covering the essential components of business project management, the model was built in four steps: (1) defining the scope of the work by developing each component from current standards; (2) integrating these developments by reusing work carried out and proposed by other researchers; (3) developing and (4) evaluating the UML specifications describing both the structural and dynamic aspects of the subject. The fact that a domain model could be developed, and that it could be shown how it could be used directly to develop a project management information system as well as ontologies covering project management knowledge, demonstrates that the approach of building a common semantic foundation for modelling both application systems and ontologies is feasible and valid. Moreover, the proposed domain model can serve as a foundation for progressively accumulating domain knowledge, since the modelling approach allowed for the integration of earlier work and proposals.

This result opens new perspectives for developing software based on a domain model that derives directly from research work.
APA, Harvard, Vancouver, ISO, and other styles
8

Leshi, Olumide. "An Approach to Extending Ontologies in the Nanomaterials Domain." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170255.

Full text
Abstract:
Over the last decade or two, data-driven science workflows have become increasingly popular, and semantic technology has been relied on to help align often parallel research efforts in different domains and to foster interoperability and data sharing. A key challenge, however, is the size of the data and the pace at which it is being generated, so much so that manual procedures lag behind, calling for the automation of most workflows. In this study, the effort is to continue investigating ways in which some tasks performed by experts in the nanotechnology domain, specifically in ontology engineering, could benefit from automation. An approach featuring phrase-based topic modelling and formal topical concept analysis, together with formal implication rules, is motivated as a way to uncover new concepts and axioms relevant to two nanotechnology-related ontologies. A corpus of 2,715 nanotechnology research articles helps showcase that the approach can scale, as seen in a number of experiments conducted. The usefulness of document text ranking as an alternative form of input to topic models is highlighted, as well as the benefit of implication rules to the task of concept discovery. In all, a total of 203 new concepts are uncovered by the approach to extend the referenced ontologies.
APA, Harvard, Vancouver, ISO, and other styles
9

Thomas, Manoj. "An Ontology Centric Architecture For Mediating Interactions In Semantic Web-Based E-Commerce Environments." VCU Scholars Compass, 2008. http://scholarscompass.vcu.edu/etd/1598.

Full text
Abstract:
Information freely generated, widely distributed and openly interpreted is a rich source of creative energy in the digital age that we live in. As we move further into this irrevocable relationship with self-growing and actively proliferating information spaces, we are also finding ourselves overwhelmed, disheartened and powerless in the presence of so much information. We are at a point where, without domain familiarity or expert guidance, sifting through the copious volumes of information to find relevance quickly turns into a mundane task often requiring enormous patience. The realization of accomplishment soon turns into a matter of extensive cognitive load, serendipity or just plain luck. This dissertation describes a theoretical framework to analyze user interactions based on mental representations in a medium where the nature of the problem-solving task emphasizes the interaction between internal task representation and the external problem domain. The framework is established by relating to work in behavioral science, sociology, cognitive science and knowledge engineering, particularly Herbert Simon’s (1957; 1989) notion of satisficing on bounded rationality and Schön’s (1983) reflective model. Mental representations mediate situated actions in our constrained digital environment and provide the opportunity for completing a task. Since assistive aids to guide situated actions reduce complexity in the task environment (Vessey 1991; Pirolli et al. 1999), the framework is used as the foundation for developing mediating structures to express the internal, external and mental representations. Interaction aids superimposed on mediating structures that model thought and action will help to guide the “perpetual novice” (Borgman 1996) through the vast digital information spaces by orchestrating better cognitive fit between the task environment and the task solution. 
This dissertation presents an ontology-centric architecture for mediating interactions in a Semantic Web-based e-commerce environment. The Design Science approach is applied for this purpose. The potential of the framework is illustrated as a functional model by using it to model the hierarchy of tasks in a consumer decision-making process as it applies in an e-commerce setting. Ontologies are used to express the perceptual operations on the external task environment, the intuitive operations on the internal task representation, and the constraint satisfaction and situated actions conforming to reasoning from the cognitive fit. It is maintained that actions themselves cannot be enforced, but when the meaning from mental imagery and the task environment are brought into coordination, it leads to situated actions that change the present situation into one closer to what is desired. To test the usability of the ontologies, we use the Web Ontology Language (OWL) to express the semantics of the three representations. We also use OWL to validate the knowledge representations and to make rule-based logical inferences on the ontological semantics. An e-commerce application was also developed to show how effective guidance can be provided by constructing semantically rich target pages from the knowledge manifested in the ontologies.
APA, Harvard, Vancouver, ISO, and other styles
10

Dennie, Keiran. "Scalable attack modelling in support of security information and event management." Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/9205.

Full text
Abstract:
Includes bibliographical references
While assessing security on single devices can be performed using vulnerability assessment tools, modelling of more intricate attacks, which incorporate multiple steps on different machines, requires more advanced techniques. Attack graphs are a promising technique, however they face a number of challenges. An attack graph is an abstract description of what attacks are possible against a specific network. Nodes in an attack graph represent the state of a network at a point in time while arcs between nodes indicate the transformation of a network from one state to another, via the exploit of a vulnerability. Using attack graphs allows system and network configuration information to be correlated and analysed to indicate imminent threats. This approach is limited by several serious issues including the state-space explosion, due to the exponential nature of the problem, and the difficulty in visualising an exhaustive graph of all potential attacks. Furthermore, the lack of availability of information regarding exploits, in a standardised format, makes it difficult to model atomic attacks in terms of exploit requirements and effects. This thesis has as its objective to address these issues and to present a proof of concept solution. It describes a proof of concept implementation of an automated attack graph based tool, to assist in evaluation of network security, assessing whether a sequence of actions could lead to an attacker gaining access to critical network resources. Key objectives are the investigation of attacks that can be modelled, discovery of attack paths, development of techniques to strengthen networks based on attack paths, and testing scalability for larger networks. The proof of concept framework, Network Vulnerability Analyser (NVA), sources vulnerability information from National Vulnerability Database (NVD), a comprehensive, publicly available vulnerability database, transforming it into atomic exploit actions. 
NVA combines these with a topological network model, using an automated planner to identify potential attacks on network devices. Automated planning is an area of Artificial Intelligence (AI) which focuses on the computational deliberation over action sequences by measuring their expected outcomes, and this technique is applied to support discovery of a best possible solution to the attack graph that is created. Through the use of heuristics developed for this study, unpromising regions of an attack graph are avoided. Effectively, this prevents the state-space explosion problem associated with modelling large-scale networks, enumerating only critical paths rather than an exhaustive graph. SGPlan5 was selected as the most suitable automated planner for this study and was integrated into the system, employing network and exploit models to construct critical attack paths. A critical attack path indicates the most likely attack vector to be used in compromising a targeted device. Critical attack paths are identified by SGPlan5 by using a heuristic to search the state-space for the attack which yields the highest aggregated severity score. CVSS severity scores were selected as a means of guiding state-space exploration since they are currently the only publicly available metric which can measure the impact of an exploited vulnerability. Two analysis techniques have been implemented to further support the user in making an informed decision as to how to prevent identified attacks. Evaluation of NVA was broken down into a demonstration of its effectiveness in two case studies, and an analysis of its scalability potential. Results demonstrate that NVA can successfully enumerate the expected critical attack paths and also use this information to establish a solution to identified attacks. Additionally, performance and scalability testing illustrate NVA's success in application to realistically sized larger networks.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Information modelling, management and ontologies"

1

Lim, Edward H. Y. Knowledge Seeker - Ontology Modelling for Information Search and Management: A Compendium. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Staab, Steffen. Handbook on Ontologies. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bytheway, A. Information modelling for management. Cranfield: Cranfield University, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nogueras-Iso, Javier, F. Javier Zarazaga-Soria, and SpringerLink (Online service), eds. Terminological Ontologies: Design, Management and Practical Applications. Boston, MA: Springer Science+Business Media, LLC, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Advancing information management through Semantic Web concepts and ontologies. Hershey, PA: Information Science Reference, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Information and data modelling. 2nd ed. London: McGraw-Hill, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Benyon, David. Information and data modelling. Oxford: Blackwell Scientific Publications, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rebstock, Michael. Ontologies-based business integration. Berlin: Springer, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Janina, Fengel, and Paulheim Heiko, eds. Ontologies-based business integration. Berlin: Springer, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Veryard, R. Information modelling: Practical guidance. New York: Prentice Hall, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Information modelling, management and ontologies"

1

Akkermans, Hans. "Ontologies and their Role in Knowledge Management and E-Business Modelling." In IFIP Advances in Information and Communication Technology, 71–82. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-0-387-35621-1_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pruski, Cédric, and Defne Sunguroğlu Hensel. "The Role of Information Modelling and Computational Ontologies to Support the Design, Planning and Management of Urban Environments: Current Status and Future Challenges." In Informed Urban Environments, 51–70. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-03803-7_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lu, Jing, Xingzhi Sun, Linhao Xu, and Haofen Wang. "Incremental Reasoning over Multiple Ontologies." In Web-Age Information Management, 131–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23535-1_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Maedche, Alexander. "Web Information Tracking Using Ontologies." In Practical Aspects of Knowledge Management, 201–12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36277-0_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jekjantuk, Nophadol, Gerd Gröner, and Jeff Z. Pan. "Modelling and Reasoning in Metamodelling Enabled Ontologies." In Knowledge Science, Engineering and Management, 51–62. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15280-1_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Maree, Mohammed, Saadat M. Alhashmi, and Mohammed Belkhatir. "QuerySem: Deriving Query Semantics Based on Multiple Ontologies." In Web-Age Information Management, 157–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23535-1_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Calì, Andrea, Georg Gottlob, and Thomas Lukasiewicz. "Datalog Extensions for Tractable Query Answering over Ontologies." In Semantic Web Information Management, 249–79. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04329-1_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Shironoshita, E. Patrick, Da Zhang, Mansur R. Kabuka, and Jia Xu. "Parallelization of Conjunctive Query Answering over Ontologies." In Information Management and Big Data, 1–14. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-90596-9_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Rao, Junyang, Aixia Jia, Yansong Feng, and Dongyan Zhao. "Personalized News Recommendation Using Ontologies Harvested from the Web." In Web-Age Information Management, 781–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38562-9_79.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, Danfeng, Antonis Bikakis, and Andreas Vlachidis. "Evaluation of Semantic Web Ontologies for Modelling Art Collections." In Communications in Computer and Information Science, 343–52. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67162-8_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Information modelling, management and ontologies"

1

"Ontologies and Information Modelling." In 2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA). IEEE, 2018. http://dx.doi.org/10.1109/etfa.2018.8502443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ayala, Alejandro Pena. "Student Modelling Based on Ontologies." In 2009 First Asian Conference on Intelligent Information and Database Systems, ACIIDS. IEEE, 2009. http://dx.doi.org/10.1109/aciids.2009.73.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

"Session 12: Conceptual modelling and ontologies." In 2016 IEEE Tenth International Conference on Research Challenges in Information Science (RCIS). IEEE, 2016. http://dx.doi.org/10.1109/rcis.2016.7549296.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Oro, Ermelinda, and Massimo Ruffolo. "Description Ontologies." In 2008 Third International Conference on Digital Information Management (ICDIM). IEEE, 2008. http://dx.doi.org/10.1109/icdim.2008.4746710.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

"Strategy Needs Structure - Structure Needs Ontologies – Dynamic Ontologies Carry Meanings." In International Conference on Knowledge Management and Information Sharing. SciTePress - Science and and Technology Publications, 2012. http://dx.doi.org/10.5220/0004118902610264.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rossello-Busquet, A., L. J. Brewka, J. Soler, and L. Dittmann. "OWL Ontologies and SWRL Rules Applied to Energy Management." In 2011 UkSim 13th International Conference on Computer Modelling and Simulation (UKSim 2011). IEEE, 2011. http://dx.doi.org/10.1109/uksim.2011.91.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Neil Urwin, Esmond, Claire Palmer, Anne-Françoise Cutting-Decelle, Francisco Sánchez Cid, José Miguel Pinazo-Sánchez, Sonja Pajkovska-Goceva, and Robert Young. "Reference Ontologies for Global Production Networks." In International Conference on Knowledge Management and Information Sharing. SCITEPRESS - Science and and Technology Publications, 2014. http://dx.doi.org/10.5220/0005026001330139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Casola, Valentina, and Rosario Catelli. "Semantic Management of Enterprise Information Systems through Ontologies." In 9th International Conference on Natural Language Processing (NLP 2020). AIRCC Publishing Corporation, 2020. http://dx.doi.org/10.5121/csit.2020.101403.

Full text
Abstract:
This article introduces a model for cloud-aware enterprise governance with a focus on its semantic aspects, considering the need for Business-IT/OT and Governance-Security alignment. The proposed model suggests using ontologies as specific tools to address the governance of each IT/OT environment in a holistic way. The concrete use of ISO and NIST standards makes it possible to structure the ontological model correctly: by using these well-known international standards, terminological and conceptual inconsistencies can be significantly reduced as early as the design phase of the ontology. This also brings a considerable advantage in the operational management of company certifications, which are congruently aligned with the knowledge structured in this manner. The semantic support within the model suggests further possible applications in different departments of the company, with the aim of evaluating and managing projects optimally by integrating the different but important points of view of stakeholders.
APA, Harvard, Vancouver, ISO, and other styles
9

Ribino, Patrizia, Antonio Oliveri, Giuseppe Lo Re, and Salvatore Gaglio. "A Knowledge Management System Based on Ontologies." In 2009 International Conference on New Trends in Information and Service Science (NISS). IEEE, 2009. http://dx.doi.org/10.1109/niss.2009.105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Aufaure, Marie-Aude, Rania Soussi, and Hajer Baazaoui. "SIRO: On-line semantic information retrieval using ontologies." In 2007 2nd International Conference on Digital Information Management. IEEE, 2007. http://dx.doi.org/10.1109/icdim.2007.4444243.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Information modelling, management and ontologies"

1

McGarrigle, M. Embedding Building Information Modelling into Construction Technology and Documentation Courses. Unitec ePress, November 2014. http://dx.doi.org/10.34074/rsrp.005.

Full text
Abstract:
The aim of this research is to generate a resource to assist construction lecturers in identifying opportunities where Building Information Modelling [BIM] could be employed to augment the delivery of subject content within individual courses on construction technology programmes. The methodology involved a detailed analysis of the learning objectives and underpinning knowledge of the course content by topic area, within the residential Construction Systems 1 course presently delivered at Unitec on the National Diplomas in Architectural Technology [NDAT], Construction Management [NDCM] and Quantity Surveying [NDQS]. The objective is to aid students' understanding of specific aspects such as planning controls or sub-floor framing by using BIM models, and to investigate how these could enhance delivery modes using image, animation and interactive student activity. A framework maps the BIM teaching opportunities against each topic area, highlighting where these could be embedded into construction course delivery. This template also records software options and could be used in similar analyses of other courses within similar programmes to assist with embedding BIM in subject delivery.
APA, Harvard, Vancouver, ISO, and other styles
3

O'Neill, H. B., S. A. Wolfe, and C. Duchesne. Preliminary modelling of ground ice abundance in the Slave Geological Province, Northwest Territories and Nunavut. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/329815.

Full text
Abstract:
New infrastructure corridors within the Slave Geological Province could provide transportation, electric, and communications links to mineral-rich areas of northern Canada, and connect southern highway systems and Arctic shipping routes. Relatively little information on permafrost and ground ice is available compared to other regions, particularly in the north of the corridor. Improved knowledge of permafrost and ground ice conditions is required to inform planning and management of infrastructure. Work within the Geological Survey of Canada's (GSC) GEM-GeoNorth program includes mapping periglacial terrain features, synthesizing existing permafrost and surficial data, and modelling ground ice conditions along the Yellowknife-Grays Bay corridor. Here we present initial modelling of ground ice abundance in the region using a methodology developed for the national-scale Ground ice map of Canada (GIMC), and higher-resolution surficial geology mapping. The results highlight the increased estimated abundance of potentially ice-rich deposits compared to the GIMC when using more detailed surficial geology as model inputs.
APA, Harvard, Vancouver, ISO, and other styles
4

Aalto, Juha, and Ari Venäläinen, eds. Climate change and forest management affect forest fire risk in Fennoscandia. Finnish Meteorological Institute, June 2021. http://dx.doi.org/10.35614/isbn.9789523361355.

Full text
Abstract:
Forest and wildland fires are a natural part of ecosystems worldwide, but large fires in particular can cause societal, economic and ecological disruption. Fires are an important source of greenhouse gases and black carbon that can further amplify and accelerate climate change. In recent years, large forest fires in Sweden demonstrate that the issue should also be considered in other parts of Fennoscandia. This final report of the project “Forest fires in Fennoscandia under changing climate and forest cover (IBA ForestFires)” funded by the Ministry for Foreign Affairs of Finland, synthesises current knowledge of the occurrence, monitoring, modelling and suppression of forest fires in Fennoscandia. The report also focuses on elaborating the role of forest fires as a source of black carbon (BC) emissions over the Arctic and discussing the importance of international collaboration in tackling forest fires. The report explains the factors regulating fire ignition, spread and intensity in Fennoscandian conditions. It highlights that the climate in Fennoscandia is characterised by large inter-annual variability, which is reflected in forest fire risk. Here, the majority of forest fires are caused by human activities such as careless handling of fire and ignitions related to forest harvesting. In addition to weather and climate, fuel characteristics in forests influence fire ignition, intensity and spread. In the report, long-term fire statistics are presented for Finland, Sweden and the Republic of Karelia. The statistics indicate that the amount of annually burnt forest has decreased in Fennoscandia. However, with the exception of recent large fires in Sweden, during the past 25 years the annually burnt area and number of fires have been fairly stable, which is mainly due to effective fire mitigation. Land surface models were used to investigate how climate change and forest management can influence forest fires in the future. 
The simulations were conducted using different regional climate models and greenhouse gas emission scenarios. Simulations, extending to 2100, indicate that forest fire risk is likely to increase over the coming decades. The report also highlights that globally, forest fires are a significant source of BC in the Arctic, having adverse health effects and further amplifying climate warming. However, simulations made using an atmospheric dispersion model indicate that the impact of forest fires in Fennoscandia on the environment and air quality is relatively minor and highly seasonal. Efficient forest fire mitigation requires the development of forest fire detection tools, including satellites and drones, and high-spatial-resolution modelling of fire risk and fire spread that accounts for detailed terrain and weather information. Moreover, increasing the general preparedness and operational efficiency of firefighting is highly important. Forest fires are a large challenge requiring multidisciplinary research, and close cooperation between the various administrative operators, e.g. rescue services, weather services, forest organisations and forest owners, is required at both the national and international level.
APA, Harvard, Vancouver, ISO, and other styles
5

Sett, Dominic, Florian Waldschmidt, Alvaro Rojas-Ferreira, Saut Sagala, Teresa Arce Mojica, Preeti Koirala, Patrick Sanady, et al. Climate and disaster risk analytics tool for adaptive social protection. United Nations University - Institute for Environment and Human Security, March 2022. http://dx.doi.org/10.53324/wnsg2302.

Full text
Abstract:
Adaptive Social Protection (ASP) as discussed in this report is an approach to enhance the well-being of communities at risk. As an integrated approach, ASP builds on the interface of Disaster Risk Management (DRM), Climate Change Adaptation (CCA) and Social Protection (SP) to address interconnected risks by building resilience, thereby overcoming the shortcomings of traditionally sectoral approaches. The design of meaningful ASP measures needs to be informed by specific information on risk, risk drivers and impacts on communities at risk. In contrast, a limited understanding of risk and its drivers can potentially lead to maladaptation practices. Therefore, multidimensional risk assessments are vital for the successful implementation of ASP. Although many sectoral tools to assess risks exist, available integrated risk assessment methods across sectors are still inadequate in the context of ASP, presenting an important research and implementation gap. ASP is now gaining international momentum, making the timely development of a comprehensive risk analytics tool even more important, including in Indonesia, where nationwide implementation of ASP is currently under way. OBJECTIVE: To address this gap, this study explores the feasibility of a climate and disaster risk analytics tool for ASP (CADRAT-ASP), combining sectoral risk assessment in the context of ASP with a more comprehensive risk analytics approach. Risk analytics improve the understanding of risks by locating and quantifying the potential impacts of disasters. For example, the Economics of Climate Adaptation (ECA) framework quantifies probable current and expected future impacts of extreme events and determines the monetary cost and benefits of specific risk management and adaptation measures. Using the ECA framework, this report examines the viability and practicality of applying a quantitative risk analytics approach for non-financial and non-tangible assets that were identified as central to ASP. 
This quantitative approach helps to identify cost-effective interventions to support risk-informed decision making for ASP. Therefore, we used Nusa Tenggara, Indonesia, as a case study, to identify potential entry points and examples for the further development and application of such an approach. METHODS & RESULTS: The report presents an analysis of central risks and related impacts on communities in the context of ASP. In addition, central social protection dimensions (SPD) necessary for the successful implementation of ASP and respective data needs from a theoretical perspective are identified. The application of the quantitative ECA framework is tested for tropical storms in the context of ASP, providing an operational perspective on technical feasibility. Finally, recommendations on further research for the potential application of a suitable ASP risk analytics tool in Indonesia are proposed. Results show that the ECA framework and its quantitative modelling platform CLIMADA successfully quantified the impact of tropical storms on four SPDs. These SPDs (income, access to health, access to education and mobility) were selected based on the results from the Hazard, Exposure and Vulnerability Assessment (HEVA) conducted to support the development of an ASP roadmap for the Republic of Indonesia (UNU-EHS 2022, forthcoming). The SPDs were modelled using remote sensing, gridded data and available global indices. The results illustrate the value of the outcome to inform decision making and a better allocation of resources to deliver ASP to the case study area. RECOMMENDATIONS: This report highlights strong potential for the application of the ECA framework in the ASP context. The impact of extreme weather events on four social protection dimensions, ranging from access to health care and income to education and mobility, were successfully quantified. 
In addition, further developments of CADRAT-ASP can be envisaged to improve modelling results and uptake of this tool in ASP implementation. Recommendations are provided for four central themes: mainstreaming the CADRAT approach into ASP, data and information needs for the application of CADRAT-ASP, methodological advancements of the ECA framework to support ASP and use of CADRAT-ASP for improved resilience-building. Specific recommendations are given, including the integration of additional hazards, such as flood, drought or heatwaves, for a more comprehensive outlook on potential risks. This would provide a broader overview and allow for multi-hazard risk planning. In addition, high-resolution local data and stakeholder involvement can increase both ownership and the relevance of SPDs. Further recommendations include the development of a database and the inclusion of climate and socioeconomic scenarios in analyses.
APA, Harvard, Vancouver, ISO, and other styles
6

Rankin, Nicole, Deborah McGregor, Candice Donnelly, Bethany Van Dort, Richard De Abreu Lourenco, Anne Cust, and Emily Stone. Lung cancer screening using low-dose computed tomography for high risk populations: Investigating effectiveness and screening program implementation considerations: An Evidence Check rapid review brokered by the Sax Institute (www.saxinstitute.org.au) for the Cancer Institute NSW. The Sax Institute, October 2019. http://dx.doi.org/10.57022/clzt5093.

Full text
Abstract:
Background Lung cancer is the number one cause of cancer death worldwide.(1) It is the fifth most commonly diagnosed cancer in Australia (12,741 cases diagnosed in 2018) and the leading cause of cancer death.(2) The number of years of potential life lost to lung cancer in Australia is estimated to be 58,450, similar to that of colorectal and breast cancer combined.(3) While tobacco control strategies are most effective for disease prevention in the general population, early detection via low dose computed tomography (LDCT) screening in high-risk populations is a viable option for detecting asymptomatic disease in current (13%) and former (24%) Australian smokers.(4) The purpose of this Evidence Check review is to identify and analyse existing and emerging evidence for LDCT lung cancer screening in high-risk individuals to guide future program and policy planning. Evidence Check questions This review aimed to address the following questions: 1. What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 2. What is the evidence of potential harms from lung cancer screening for higher-risk individuals? 3. What are the main components of recent major lung cancer screening programs or trials? 4. What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Summary of methods The authors searched the peer-reviewed literature across three databases (MEDLINE, PsycINFO and Embase) for existing systematic reviews and original studies published between 1 January 2009 and 8 August 2019. Fifteen systematic reviews (of which 8 were contemporary) and 64 original publications met the inclusion criteria set across the four questions. Key findings Question 1: What is the evidence for the effectiveness of lung cancer screening for higher-risk individuals? 
There is sufficient evidence from systematic reviews and meta-analyses of combined (pooled) data from screening trials (of high-risk individuals) to indicate that LDCT examination is clinically effective in reducing lung cancer mortality. In 2011, the landmark National Lung Screening Trial (NLST, a large-scale randomised controlled trial [RCT] conducted in the US) reported a 20% (95% CI 6.8% – 26.7%; P=0.004) relative reduction in mortality among long-term heavy smokers over three rounds of annual screening. High-risk eligibility criteria were defined as people aged 55–74 years with a smoking history of ≥30 pack-years (years in which a smoker has consumed 20-plus cigarettes each day) and, for former smokers, ≥30 pack-years and having quit within the past 15 years.(5) All-cause mortality was reduced by 6.7% (95% CI, 1.2% – 13.6%; P=0.02). Initial data from the second landmark RCT, the NEderlands-Leuvens Longkanker Screenings ONderzoek (known as the NELSON trial), have found an even greater reduction of 26% (95% CI, 9% – 41%) in lung cancer mortality, with full trial results yet to be published.(6, 7) Pooled analyses, including several smaller-scale European LDCT screening trials insufficiently powered in their own right, collectively demonstrate a statistically significant reduction in lung cancer mortality (RR 0.82, 95% CI 0.73–0.91).(8) Despite the reduction in all-cause mortality found in the NLST, pooled analyses of seven trials found no statistically significant difference in all-cause mortality (RR 0.95, 95% CI 0.90–1.00).(8) However, cancer-specific mortality is currently the most relevant outcome in cancer screening trials. These seven trials demonstrated a significantly greater proportion of early stage cancers in LDCT groups compared with controls (RR 2.08, 95% CI 1.43–3.03). Thus, when considering results across mortality outcomes and early stage cancers diagnosed, LDCT screening is considered to be clinically effective. 
Question 2: What is the evidence of potential harms from lung cancer screening for higher-risk individuals? The harms of LDCT lung cancer screening include false positive tests and the consequences of unnecessary invasive follow-up procedures for conditions that are eventually diagnosed as benign. While LDCT screening leads to an increased frequency of invasive procedures, it does not result in greater mortality soon after an invasive procedure (in trial settings when compared with the control arm).(8) Overdiagnosis, exposure to radiation, psychological distress and an impact on quality of life are other known harms. Systematic review evidence indicates the benefits of LDCT screening are likely to outweigh the harms. The potential harms are likely to be reduced as refinements are made to LDCT screening protocols through: i) the application of risk prediction models (e.g. the PLCOm2012), which enable a more accurate selection of the high-risk population through the use of specific criteria (beyond age and smoking history); ii) the use of nodule management algorithms (e.g. Lung-RADS, PanCan), which assist in the diagnostic evaluation of screen-detected nodules and cancers (e.g. more precise volumetric assessment of nodules); and, iii) more judicious selection of patients for invasive procedures. Recent evidence suggests a positive LDCT result may transiently increase psychological distress but does not have long-term adverse effects on psychological distress or health-related quality of life (HRQoL). With regards to smoking cessation, there is no evidence to suggest screening participation invokes a false sense of assurance in smokers, nor a reduction in motivation to quit. The NELSON and Danish trials found no difference in smoking cessation rates between LDCT screening and control groups. Higher net cessation rates, compared with the general population, suggest those who participate in screening trials may already be motivated to quit. 
Question 3: What are the main components of recent major lung cancer screening programs or trials? There are no systematic reviews that capture the main components of recent major lung cancer screening trials and programs. We extracted evidence from original studies and clinical guidance documents and organised this into key groups to form a concise set of components for potential implementation of a national lung cancer screening program in Australia: 1. Identifying the high-risk population: recruitment, eligibility, selection and referral 2. Educating the public, people at high risk and healthcare providers; this includes creating awareness of lung cancer, the benefits and harms of LDCT screening, and shared decision-making 3. Components necessary for health services to deliver a screening program: a. Planning phase: e.g. human resources to coordinate the program, electronic data systems that integrate medical records information and link to an established national registry b. Implementation phase: e.g. human and technological resources required to conduct LDCT examinations, interpretation of reports and communication of results to participants c. Monitoring and evaluation phase: e.g. monitoring outcomes across patients, radiological reporting, compliance with established standards and a quality assurance program 4. Data reporting and research, e.g. audit and feedback to multidisciplinary teams, reporting outcomes to enhance international research into LDCT screening 5. Incorporation of smoking cessation interventions, e.g. specific programs designed for LDCT screening or referral to existing community or hospital-based services that deliver cessation interventions. Most original studies are single-institution evaluations that contain descriptive data about the processes required to establish and implement a high-risk population-based screening program. 
Across all studies there is a consistent message as to the challenges and complexities of establishing LDCT screening programs to attract people at high risk who will receive the greatest benefits from participation. With regards to smoking cessation, evidence from one systematic review indicates the optimal strategy for incorporating smoking cessation interventions into an LDCT screening program is unclear. There is widespread agreement that LDCT screening attendance presents a ‘teachable moment’ for cessation advice, especially among those people who receive a positive scan result. Smoking cessation is an area of significant research investment; for instance, eight US-based clinical trials are now underway that aim to address how best to design and deliver cessation programs within large-scale LDCT screening programs.(9) Question 4: What is the cost-effectiveness of lung cancer screening programs (include studies of cost–utility)? Assessing the value or cost-effectiveness of LDCT screening involves a complex interplay of factors including data on effectiveness and costs, and institutional context. A key input is data about the effectiveness of potential and current screening programs with respect to case detection, and the likely outcomes of treating those cases sooner (in the presence of LDCT screening) as opposed to later (in the absence of LDCT screening). Evidence about the cost-effectiveness of LDCT screening programs has been summarised in two systematic reviews. We identified a further 13 studies—five modelling studies, one discrete choice experiment and seven articles—that used a variety of methods to assess cost-effectiveness. Three modelling studies indicated LDCT screening was cost-effective in the settings of the US and Europe. Two studies—one from Australia and one from New Zealand—reported LDCT screening would not be cost-effective using NLST-like protocols. 
We anticipate that, following the full publication of the NELSON trial, cost-effectiveness studies will likely be updated with new data that reduce uncertainty about factors that influence modelling outcomes, including the findings of indeterminate nodules. Gaps in the evidence There is a large and accessible body of evidence as to the effectiveness (Q1) and harms (Q2) of LDCT screening for lung cancer. Nevertheless, there are significant gaps in the evidence about the program components that are required to implement an effective LDCT screening program (Q3). Questions about LDCT screening acceptability and feasibility were not explicitly included in the scope. However, as the evidence is based primarily on US programs and UK pilot studies, the relevance to the local setting requires careful consideration. The Queensland Lung Cancer Screening Study provides feasibility data about clinical aspects of LDCT screening but little about program design. The International Lung Screening Trial is still in the recruitment phase and findings are not yet available for inclusion in this Evidence Check. The Australian Population Based Screening Framework was developed to “inform decision-makers on the key issues to be considered when assessing potential screening programs in Australia”.(10) As the Framework is specific to population-based, rather than high-risk, screening programs, there is a lack of clarity about transferability of criteria. However, the Framework criteria do stipulate that a screening program must be acceptable to “important subgroups such as target participants who are from culturally and linguistically diverse backgrounds, Aboriginal and Torres Strait Islander people, people from disadvantaged groups and people with a disability”.(10) An extensive search of the literature highlighted that there is very little information about the acceptability of LDCT screening to these population groups in Australia. 
Yet they are part of the high-risk population.(10) There are also considerable gaps in the evidence about the cost-effectiveness of LDCT screening in different settings, including Australia. The evidence base in this area is rapidly evolving and is likely to include new data from the NELSON trial and incorporate data about the costs of targeted therapies and immunotherapies as these treatments become more widely available in Australia.
APA, Harvard, Vancouver, ISO, and other styles
7

African Open Science Platform Part 1: Landscape Study. Academy of Science of South Africa (ASSAf), 2019. http://dx.doi.org/10.17159/assaf.2019/0047.

Full text
Abstract:
This report maps the African landscape of Open Science – with a focus on Open Data as a sub-set of Open Science. Data to inform the landscape study were collected through a variety of methods, including surveys, desk research, engagement with a community of practice, networking with stakeholders, participation in conferences, case study presentations, and workshops hosted. Although the majority of African countries (35 of 54) demonstrate commitment to science through their investment in research and development (R&D), academies of science, ministries of science and technology, policies, recognition of research, and participation in the Science Granting Councils Initiative (SGCI), the following countries demonstrate the highest commitment and political willingness to invest in science: Botswana, Ethiopia, Kenya, Senegal, South Africa, Tanzania, and Uganda. In addition to existing policies in Science, Technology and Innovation (STI), the following countries have made progress towards Open Data policies: Botswana, Kenya, Madagascar, Mauritius, South Africa and Uganda. Only two African countries (Kenya and South Africa) at this stage contribute 0.8% of their GDP (Gross Domestic Product) to R&D, which is the closest to the AU’s (African Union’s) suggested 1%. Countries such as Lesotho and Madagascar ranked as 0%, while the R&D expenditure for 24 African countries is unknown. In addition to this, science globally has become fully dependent on stable ICT (Information and Communication Technologies) infrastructure, which includes connectivity/bandwidth, high performance computing facilities and data services. This is especially applicable since countries globally are finding themselves in the midst of the 4th Industrial Revolution (4IR), which is not only “about” data, but which “is” data. 
According to an article by Alan Marcus (2015) (Senior Director, Head of Information Technology and Telecommunications Industries, World Economic Forum), “At its core, data represents a post-industrial opportunity. Its uses have unprecedented complexity, velocity and global reach. As digital communications become ubiquitous, data will rule in a world where nearly everyone and everything is connected in real time. That will require a highly reliable, secure and available infrastructure at its core, and innovation at the edge.” Every industry is affected as part of this revolution – also science. An important component of the digital transformation is “trust” – people must be able to trust that governments and all other industries (including the science sector) adequately handle and protect their data. This requires accountability on a global level, and digital industries must embrace the change and go for a higher standard of protection. “This will reassure consumers and citizens, benefitting the whole digital economy”, says Marcus. A stable and secure ICT infrastructure – currently provided by the National Research and Education Networks (NRENs) – is key to advance collaboration in science. The AfricaConnect project (AfricaConnect, 2012–2014, and AfricaConnect2, 2016–2018), which establishes connectivity between NRENs, is planning to roll out AfricaConnect3 by the end of 2019. The concern however is that selected African governments (with the exception of a few countries such as South Africa, Mozambique, Ethiopia and others) have low awareness of the impact the Internet has today on all societal levels, how much ICT (and the 4th Industrial Revolution) have affected research, and the added value an NREN can bring to higher education and research in addressing the respective needs, which is far more complex than simply providing connectivity. 
Apart from more commitment and investment in R&D, African governments – to become and remain part of the 4th Industrial Revolution – have no option other than to acknowledge and commit to the role NRENs play in advancing science towards addressing the SDGs (Sustainable Development Goals). For successful collaboration and direction, it is fundamental that policies within one country are aligned with one another. Alignment on continental level is crucial for the future Pan-African African Open Science Platform to be successful. Both the HIPSSA (Harmonization of ICT Policies in Sub-Saharan Africa) project and WATRA (the West Africa Telecommunications Regulators Assembly) have made progress towards the regulation of the telecom sector, and in particular of bottlenecks which curb the development of competition among ISPs. A study under HIPSSA identified potential bottlenecks in access at an affordable price to the international capacity of submarine cables and suggested means and tools used by regulators to remedy them. Work on the recommended measures and making them operational continues in collaboration with WATRA. In addition to sufficient bandwidth and connectivity, high-performance computing facilities and services in support of data sharing are also required. The South African National Integrated Cyberinfrastructure System (NICIS) has made great progress in planning and setting up a cyberinfrastructure ecosystem in support of collaborative science and data sharing. The regional Southern African Development Community (SADC) Cyber-infrastructure Framework provides a valuable roadmap towards high-speed Internet, developing human capacity and skills in ICT technologies, high-performance computing and more. 
The following countries have been identified as having high-performance computing facilities, some as a result of the Square Kilometre Array (SKA) partnership: Botswana, Ghana, Kenya, Madagascar, Mozambique, Mauritius, Namibia, South Africa, Tunisia, and Zambia. More and more NRENs – especially the Level 6 NRENs (Algeria, Egypt, Kenya, South Africa, and recently Zambia) – are exploring offering additional services, also in support of data sharing and transfer. The following NRENs already allow for running data-intensive applications and sharing of high-end computing assets, bio-modelling and computation on high-performance/supercomputers: KENET (Kenya), TENET (South Africa), RENU (Uganda), ZAMREN (Zambia), EUN (Egypt) and ARN (Algeria). Fifteen higher education training institutions from eight African countries (Botswana, Benin, Kenya, Nigeria, Rwanda, South Africa, Sudan, and Tanzania) have been identified as offering formal courses on data science. In addition to formal degrees, a number of international short courses have been developed and free international online courses are also available as an option to build capacity and integrate as part of curricula. The small number of higher education or research intensive institutions offering data science is however insufficient, and there is a desperate need for more training in data science. The CODATA-RDA Schools of Research Data Science aim at addressing the continental need for foundational data skills across all disciplines, along with training conducted by The Carpentries programme (specifically Data Carpentry). Thus far, CODATA-RDA schools in collaboration with AOSP, integrating content from Data Carpentry, were presented in Rwanda (in 2018), and during 17–29 June 2019, in Ethiopia. 
Awareness regarding Open Science (including Open Data) is evident through the 12 Open Science-related Open Access/Open Data/Open Science declarations and agreements endorsed or signed by African governments; 200 Open Access journals from Africa registered on the Directory of Open Access Journals (DOAJ); 174 Open Access institutional research repositories registered on openDOAR (Directory of Open Access Repositories); 33 Open Access/Open Science policies registered on ROARMAP (Registry of Open Access Repository Mandates and Policies); 24 data repositories registered with the Registry of Data Repositories (re3data.org) (although the pilot project identified 66 research data repositories); and one data repository assigned the CoreTrustSeal. Although this is a start, far more needs to be done to align African data curation and research practices with global standards. Funding to conduct research remains a challenge. African researchers mostly fund their own research, and there are few incentives for them to make their research and accompanying data sets openly accessible. Funding and peer recognition, along with an enabling research environment, are regarded as major incentives. The landscape report concludes with a number of concerns towards sharing research data openly, as well as challenges in terms of Open Data policy, ICT infrastructure supportive of data sharing, capacity building, lack of skills, and the need for incentives. Although great progress has been made in terms of Open Science and Open Data practices, more awareness needs to be created and further advocacy efforts are required for buy-in from African governments. A federated African Open Science Platform (AOSP) will not only encourage more collaboration among researchers in addressing the SDGs, but it will also benefit the many stakeholders identified as part of the pilot phase. 
The time is now, for governments in Africa, to acknowledge the important role of science in general, but specifically Open Science and Open Data, through developing and aligning the relevant policies, investing in an ICT infrastructure conducive for data sharing through committing funding to making NRENs financially sustainable, incentivising open research practices by scientists, and creating opportunities for more scientists and stakeholders across all disciplines to be trained in data management.
APA, Harvard, Vancouver, ISO, and other styles
