
Journal articles on the topic 'Semantics preserving modeling technique'



Consult the top 50 journal articles for your research on the topic 'Semantics preserving modeling technique.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Tigane, Samir, Fayçal Guerrouf, Nadia Hamani, Laid Kahloul, Mohamed Khalgui, and Masood Ashraf Ali. "Dynamic Timed Automata for Reconfigurable System Modeling and Verification." Axioms 12, no. 3 (February 22, 2023): 230. http://dx.doi.org/10.3390/axioms12030230.

Abstract:
Modern discrete-event systems (DESs) are often characterized by their dynamic structures enabling highly flexible behaviors that can respond in real time to volatile environments. On the other hand, timed automata (TA) are powerful tools used to design various DESs. However, they lack the ability to naturally describe dynamic-structure reconfigurable systems. Indeed, TA are characterized by their rigid structures, which cannot handle the complexity of dynamic structures. To overcome this limitation, we propose an extension to TA, called dynamic timed automata (DTA), enabling the modeling and verification of reconfigurable systems. Additionally, we present a new algorithm that transforms DTA into semantically equivalent TA while preserving their behavior. We demonstrate the usefulness and applicability of this new modeling and verification technique using an illustrative example.
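For readers who want to experiment with the idea, here is a minimal sketch of one plausible flattening of a dynamic-structure automaton into a single TA, obtained by tagging every location with the mode (structure) it belongs to; the data layout and the reconfiguration encoding are illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch: flatten a dynamic timed automaton (DTA) into one TA.
# Each mode is itself a TA; a reconfiguration rule becomes an ordinary edge
# between mode-tagged locations. Names and layout are assumptions.
from dataclasses import dataclass

@dataclass
class TA:
    locations: set      # location names
    edges: list         # tuples (src, guard, reset, dst)
    initial: str

def flatten_dta(modes: dict, reconfigs: list, init_mode: str) -> TA:
    """modes: mode name -> TA; reconfigs: (mode, loc, guard, mode2, loc2)."""
    locs, edges = set(), []
    for m, ta in modes.items():
        locs |= {f"{m}.{l}" for l in ta.locations}
        edges += [(f"{m}.{s}", g, r, f"{m}.{d}") for (s, g, r, d) in ta.edges]
    for (m, l, guard, m2, l2) in reconfigs:
        # a structural reconfiguration is encoded as a plain timed transition
        edges.append((f"{m}.{l}", guard, "reset_all_clocks", f"{m2}.{l2}"))
    return TA(locs, edges, f"{init_mode}.{modes[init_mode].initial}")
```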
2

Kim, Ji-Sun, and Cheong Youn. "EMPS: An Efficient Software Merging Technique for Preserving Semantics." KIPS Transactions: Part D 13D, no. 2 (April 1, 2006): 223–34. http://dx.doi.org/10.3745/kipstd.2006.13d.2.223.

3

Varde, Aparna S., Mohammed Maniruzzaman, and Richard D. Sisson. "QuenchML: A semantics-preserving markup language for knowledge representation in quenching." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 27, no. 1 (January 15, 2013): 65–82. http://dx.doi.org/10.1017/s0890060412000352.

Abstract:
Knowledge representation (KR) is an important area in artificial intelligence (AI) and is often related to specific domains. The representation of knowledge in domain-specific contexts makes it desirable to capture semantics as domain experts would. This motivates the development of semantics-preserving standards for KR within the given domain. In addition to the storage and analysis of information using such standards, the effect of globalization today necessitates the publishing of information on the Web. Thus, it is advisable to use formats that make the information easily publishable and accessible while developing KR standards. In this article, we propose such a standard called Quenching Markup Language (QuenchML). This follows the syntax of the eXtensible Markup Language and captures the semantics of the quenching domain within the heat treating of materials. We describe the development of QuenchML, a multidisciplinary effort spanning the realms of AI, database management, and materials science, considering various aspects such as ontology, data modeling, and domain-specific constraints. We also explain the usefulness of QuenchML in semantics-preserving information retrieval and in text mining guided by domain knowledge. Furthermore, we outline the significance of this work in software tools within the field of AI.
4

Marino, B. G., A. Masiero, F. Chiabrando, A. M. Lingua, F. Fissore, W. Błaszczak-Bak, and A. Vettore. "DATA OPTIMIZATION FOR 3D MODELING AND ANALYSIS OF A FORTRESS ARCHITECTURE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W11 (May 4, 2019): 809–13. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w11-809-2019.

Abstract:
Thanks to the recent worldwide spread of drones and to the development of structure-from-motion photogrammetric software, UAV photogrammetry is becoming a convenient and reliable way for the 3D documentation of built heritage. Hence, nowadays, UAV photogrammetric surveying is a common and quite standard tool for producing 3D models of relatively large areas. However, when such areas are large, a significant part of the generated point cloud is often of minor interest. Given the necessity of efficiently dealing with storing, processing and analyzing the produced point cloud, some optimization step should be considered in order to reduce the amount of redundancy, in particular in the parts of the model that are of minor interest. Although this can be done by means of a manual selection of such parts, an automatic selection is clearly a much more viable way to speed up the final model generation. Motivated by the recent development of many semantic classification techniques, the aim of this work is to investigate the use of point cloud optimization based on semantic recognition of the different components in the photogrammetric 3D model. The Girifalco Fortress (Cortona, Italy) is used as a case study for this investigation. The rationale of the proposed methodology is clearly that of preserving high point density in the areas of the model that describe the fortress, whereas point cloud density is dramatically reduced in vegetated and soil areas. Thanks to the implemented automatic procedure, in the considered case study, the size of the point cloud has been reduced by a factor of five, approximately. It is worth noticing that this result has been obtained while preserving the original point density on the fortress surfaces, hence ensuring the same capabilities of geometric analysis as the original photogrammetric model.
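The density-reduction step described above can be pictured with a short sketch: given per-point semantic labels, keep every structure point and only every k-th vegetation or soil point. The array layout and the label value "fortress" are assumptions for illustration, not the authors' implementation; with keep_every=5 the low-interest classes shrink by roughly the factor of five reported above.

```python
# Illustrative semantic decimation of a classified point cloud (not the
# authors' code): full density on the structure, 1/k density elsewhere.
import numpy as np

def decimate(points: np.ndarray, labels: np.ndarray, keep_every: int = 5):
    """points: (N, 3) coordinates; labels: (N,) class names per point."""
    structure = labels == "fortress"          # assumed class name
    minor_idx = np.flatnonzero(~structure)[::keep_every]
    keep = structure.copy()
    keep[minor_idx] = True                    # every k-th low-interest point
    return points[keep]
```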
5

Batra, Dinesh. "An Event-Oriented Data Modeling Technique Based on the Cognitive Semantics Theory." Journal of Database Management 23, no. 4 (October 2012): 52–74. http://dx.doi.org/10.4018/jdm.2012100103.

Abstract:
The Resource-Event-Agent (REA) model has been proposed as a data modeling approach for representing accounting transactions. However, most business events are not transactions; thus, the REA formulation is incomplete. Based on the Conceptual Semantics theory, this paper discusses the entity-relationship event network (EREN) model, which extends the REA model and provides a comprehensive data template for a business event. Specifically, the notions of resource, event, and agent in the REA model are extended to include more discriminating entity types. The EREN technique can be used to identify events, sketch a network of events, and develop a data model of a business application by applying the EREN template to each event. Most extant techniques facilitate only the descriptive role whereas the EREN technique facilitates both the design and descriptive role of data modeling.
6

Beck, Edgar, Carsten Bockelmann, and Armin Dekorsy. "Semantic Information Recovery in Wireless Networks." Sensors 23, no. 14 (July 12, 2023): 6347. http://dx.doi.org/10.3390/s23146347.

Abstract:
Motivated by the recent success of Machine Learning (ML) tools in wireless communications, the idea of semantic communication by Weaver from 1949 has gained attention. It breaks with Shannon’s classic design paradigm by aiming to transmit the meaning of a message, i.e., semantics, rather than its exact version and, thus, enables savings in information rate. In this work, we extend the fundamental approach from Basu et al. for modeling semantics to the complete communications Markov chain. Thus, we model semantics by means of hidden random variables and define the semantic communication task as the data-reduced and reliable transmission of messages over a communication channel such that semantics is best preserved. We consider this task as an end-to-end Information Bottleneck problem, enabling compression while preserving relevant information. As a solution approach, we propose the ML-based semantic communication system SINFONY and use it for a distributed multipoint scenario; SINFONY communicates the meaning behind multiple messages that are observed at different senders to a single receiver for semantic recovery. We analyze SINFONY by processing images as message examples. Numerical results reveal a tremendous rate-normalized SNR shift up to 20 dB compared to classically designed communication systems.
7

Vdovychenko, Ruslan, and Vadim Tulchinsky. "Parallel Implementation of Sparse Distributed Memory for Semantic Storage." Cybernetics and Computer Technologies, no. 2 (September 30, 2022): 58–66. http://dx.doi.org/10.34229/2707-451x.22.2.6.

Abstract:
Introduction. Sparse Distributed Memory (SDM) and Binary Sparse Distributed Representations (BSDR), as two phenomenological approaches to biological memory modelling, have many similarities. The idea of their integration into a hybrid semantic storage model, with SDM as a low-level cleaning memory (brain cells) for BSDR, which is used as an encoder of high-level symbolic information, is natural. A hybrid semantic store should be able to store holistic data (for example, structures of interconnected and sequential key-value pairs) in a neural network. A similar design has been proposed several times since the 1990s. However, the previously proposed models are impractical due to insufficient scalability and/or low storage density. The gap between SDM and BSDR can be bridged by the results of a third theory related to sparse signals: Compressive Sensing or Sampling (CS). In this article, we focus on a highly efficient parallel implementation of the CS-SDM hybrid memory model for graphics processing units on the NVIDIA CUDA platform, analyze the computational complexity of CS-SDM operations for the case of parallel implementation, and offer optimization techniques for conducting experiments with large sequential batches of vectors. The purpose of the paper is to propose an efficient software implementation of sparse distributed memory for preserving semantics on modern graphics processing units. Results. Parallel algorithms for CS-SDM operations are proposed, their computational complexity is estimated, and a parallel implementation of the CS-SDM hybrid semantic store is given. Optimization of vector reconstruction for experiments with sequential data batches is proposed. Conclusions. The obtained results show that the design of CS-SDM is naturally parallel and that its algorithms are by design compatible with the architecture of systems with massive parallelism. The conducted experiments showed high performance of the developed implementation of the SDM memory block. Keywords: GPU, CUDA, neural network, Sparse Distributed Memory, associative memory, Compressive Sensing.
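As background for the memory block being parallelized, the following NumPy sketch shows a classical Kanerva-style SDM (write by incrementing counters of all hard locations within a Hamming radius of the address, read by majority vote); the CS-SDM hybrid and its CUDA kernels are substantially more involved, so the sizes and radius below are illustrative only.

```python
# Minimal Kanerva-style sparse distributed memory in NumPy (illustrative).
import numpy as np

rng = np.random.default_rng(0)
N, D, R = 1000, 256, 112        # hard locations, word length, Hamming radius
addresses = rng.integers(0, 2, size=(N, D), dtype=np.int8)
counters = np.zeros((N, D), dtype=np.int32)

def activated(addr):
    """Boolean mask of hard locations within Hamming distance R of addr."""
    return np.count_nonzero(addresses != addr, axis=1) <= R

def write(addr, word):
    counters[activated(addr)] += np.where(word == 1, 1, -1).astype(np.int32)

def read(addr):
    return (counters[activated(addr)].sum(axis=0) > 0).astype(np.int8)

v = rng.integers(0, 2, size=D, dtype=np.int8)
write(v, v)                      # autoassociative store
assert np.array_equal(read(v), v)
```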
8

Albert, Elvira, Nikolaos Bezirgiannis, Frank de Boer, and Enrique Martin-Martin. "A Formal, Resource Consumption-Preserving Translation from Actors with Cooperative Scheduling to Haskell*." Fundamenta Informaticae 177, no. 3-4 (December 10, 2020): 203–34. http://dx.doi.org/10.3233/fi-2020-1988.

Abstract:
We present a formal translation of a resource-aware extension of the Abstract Behavioral Specification (ABS) language to the functional language Haskell. ABS is an actor-based language tailored to the modeling of distributed systems. It combines asynchronous method calls with a suspend-and-resume mode of execution of the method invocations. To cater for the resulting cooperative scheduling of the method invocations of an actor, the translation compiles ABS methods into Haskell functions with continuations. The main result of this article is a correctness proof of the translation by means of a simulation relation between a formal semantics of the source language and a high-level operational semantics of the target language, i.e., a subset of Haskell. We further prove that the resource consumption of an ABS program extended with a cost model is preserved over this translation, as we establish an equivalence of the cost of executing the ABS program and its corresponding Haskell translation. Concretely, the resources consumed by the original ABS program and those consumed by the Haskell program are the same under the given cost model. Consequently, the resource bounds automatically inferred for ABS programs extended with a cost model, using resource analysis tools, are sound resource bounds also for the translated Haskell programs. Our experimental evaluation confirms the resource preservation over a set of benchmarks featuring different asymptotic costs.
9

Pimentel-Niño, M. A., Paresh Saxena, and M. A. Vazquez-Castro. "Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness." Scientific World Journal 2015 (2015): 1–16. http://dx.doi.org/10.1155/2015/394956.

Abstract:
A novel cross-layer optimized video adaptation driven by perceptual semantics is presented. The design target is streamed live video to enhance situational awareness in challenging communications conditions. Conventional solutions for recreational applications are inadequate, so a novel quality of experience (QoE) framework is proposed that allows fully controlled adaptation and enables perceptual semantic feedback. The framework relies on temporal/spatial abstraction for video applications serving beyond recreational purposes. An underlying cross-layer optimization technique takes into account feedback on network congestion (time) and erasures (space) to best distribute the available (scarce) bandwidth. Systematic random linear network coding (SRNC) adds reliability while preserving perceptual semantics. Objective metrics of the perceptual features in QoE show homogeneous high performance when using the proposed scheme. Finally, the proposed scheme is in line with content-aware trends, complying with the information-centric-networking philosophy and architecture.
10

Motzek, Alexander, and Ralf Möller. "Indirect Causes in Dynamic Bayesian Networks Revisited." Journal of Artificial Intelligence Research 59 (May 27, 2017): 1–58. http://dx.doi.org/10.1613/jair.5361.

Abstract:
Modeling causal dependencies often demands cycles at a coarse-grained temporal scale. If Bayesian networks are to be used for modeling uncertainties, cycles are eliminated with dynamic Bayesian networks, spreading indirect dependencies over time and enforcing an infinitesimal resolution of time. Without a "causal design," i.e., without anticipating indirect influences appropriately in time, we argue that such networks return spurious results. By identifying activator random variables, we propose activator dynamic Bayesian networks (ADBNs) which are able to rapidly adapt to contexts under a causal use of time, anticipating indirect influences on a solid mathematical basis using familiar Bayesian network semantics. ADBNs are well-defined dynamic probabilistic graphical models allowing one to model cyclic dependencies from local and causal perspectives while preserving a classical, familiar calculus and classically known algorithms, without introducing any overhead in modeling or inference.
11

Song, Shuang, Andy Dong, and Alice Agogino. "Modeling Information Needs in Engineering Databases Using Tacit Knowledge." Journal of Computing and Information Science in Engineering 2, no. 3 (September 1, 2002): 199–207. http://dx.doi.org/10.1115/1.1528921.

Abstract:
Online resources of engineering design information are a critical resource for practicing engineers. These online resources often contain references and content associated with technical memos, journal articles and “white papers” of prior engineering projects. However, filtering this stream of information to find the right information appropriate to an engineering issue and the engineer is a time-consuming task. The focus of this research lies in ascertaining tacit knowledge to model the information needs of the users of an engineering information system. It is proposed that the combination of reading time and the semantics of documents accessed by users reflect their tacit knowledge. By combining the computational text analysis tool of Latent Semantic Analysis with analyses of on-line user transaction logs, we introduce the technique of Latent Interest Analysis (LIA) to model information needs based on tacit knowledge. Information needs are modeled by a vector equation consisting of a linear combination of the user’s queries and prior documents downloaded, scaled by the reading time of each document to measure the degree of relevance. A validation study of the LIA model revealed a higher correlation between predicted and actual information needs for our model in comparison to models lacking scaling by reading time and a representation of the semantics of prior accessed documents. The technique was incorporated into a digital library to recommend engineering education materials to users.
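The need-vector construction lends itself to a compact sketch: sum the query vectors and the reading-time-weighted document vectors, then rank candidates by cosine similarity. The vector names and shapes below are placeholders, not the authors' LSA data.

```python
# Sketch of a reading-time-weighted information-need vector (illustrative).
import numpy as np

def need_vector(query_vecs, doc_vecs, reading_times):
    """Linear combination of queries plus documents scaled by reading time."""
    weighted_docs = sum(t * d for t, d in zip(reading_times, doc_vecs))
    need = sum(query_vecs) + weighted_docs
    return need / np.linalg.norm(need)

def recommend(need, candidates):
    """Return the index of the candidate document closest to the need."""
    sims = [c @ need / np.linalg.norm(c) for c in candidates]
    return int(np.argmax(sims))
```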
12

McNeill, Fiona, Paolo Besana, Juan Pane, and Fausto Giunchiglia. "Service Integration through Structure-Preserving Semantic Matching." Journal of Cases on Information Technology 11, no. 4 (October 2009): 26–46. http://dx.doi.org/10.4018/jcit.2009072102.

Abstract:
The problem of integrating services is becoming increasingly pressing. In large, open environments such as the Semantic Web, huge numbers of services are developed by vast numbers of different users. Imposing strict semantics standards in such an environment is useless; fully predicting in advance which services one will interact with is not always possible as services may be temporarily or permanently unreachable, may be updated or may be superseded by better services. In some situations characterised by unpredictability, such as the emergency response scenario described in this case, the best solution is to enable decisions about which services to interact with to be made on-the-fly. We propose a method of doing this using a matching technique to map the anticipated call to the input that the service is actually expecting. To be practical, this must be done during run-time. In this case, we present our structure-preserving semantic matching algorithm (SPSM), which performs this matching task both for perfect and approximate matches between calls. In addition, we introduce the OpenKnowledge system for service interaction which, using the SPSM algorithm, along with many other features, facilitates on-the-fly interaction between services in an arbitrarily large network without any global agreements or pre-run-time knowledge of who to interact with or how interactions will proceed. We provide a preliminary evaluation of the SPSM algorithm within the OpenKnowledge framework.
13

LAM, VITUS S. W. "FORMAL ANALYSIS OF BPMN MODELS: A NuSMV-BASED APPROACH." International Journal of Software Engineering and Knowledge Engineering 20, no. 07 (November 2010): 987–1023. http://dx.doi.org/10.1142/s0218194010005079.

Abstract:
Business Process Modeling Notation (BPMN) plays a significant role in the specification of business processes. To ascertain the validity of BPMN models, a disciplined approach to analyze their behavior is of particular interest to the field of business process management. This paper advocates a semantics-preserving method for transforming BPMN models into New Symbolic Model Verifier (NuSMV) language as a means to verify the models. A subset of BPMN is specified rigorously in the form of a mathematical model. With this foundation in place, the translation for the subset of BPMN notational elements is then driven by a set of formally defined rules. The practicality of our approach is exemplified using an on-line flight reservation service.
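To make the flavor of such a translation concrete, the toy sketch below emits a NuSMV module for a purely sequential fragment of a process; the paper's rule set covers far more of BPMN (gateways, events, and so on), so this illustrates the target language rather than the authors' rules.

```python
# Toy BPMN-to-NuSMV emitter for a strictly sequential task list (assumed
# simplification; real BPMN translation rules are much richer).
def bpmn_sequence_to_nusmv(tasks):
    states = ["start"] + tasks + ["done"]
    lines = ["MODULE main",
             "VAR",
             f"  s : {{{', '.join(states)}}};",
             "ASSIGN",
             "  init(s) := start;",
             "  next(s) := case"]
    for cur, nxt in zip(states, states[1:]):
        lines.append(f"    s = {cur} : {nxt};")
    lines += ["    TRUE : s;",
              "  esac;",
              "-- e.g. verify that the process always terminates:",
              "SPEC AF s = done"]
    return "\n".join(lines)

print(bpmn_sequence_to_nusmv(["book_flight", "pay", "confirm"]))
```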
14

Yang, Shi Han, Jin Zhao Wu, and An Ping He. "Automatically Transforming Legacy XML Documents Into OWL Ontology." Applied Mechanics and Materials 241-244 (December 2012): 2638–44. http://dx.doi.org/10.4028/www.scientific.net/amm.241-244.2638.

Abstract:
It is a challenge to transform legacy XML-based data into ontologies for applications of the semantic web, such as semantics-based integration, intelligent web searching, and internet-based knowledge reasoning. We propose a new technique to transform XML data into ontology data automatically by modeling XML documents semantically. First, we give the XML a semantic interpretation by developing a graph-based formal language, w-graph. The result of the interpretation can then be automatically mapped into the OWL web ontology language with semantics preserved. A proof of semantics preservation has also been provided, and an automatic mapping tool has been developed.
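A minimal sketch of the element-to-class mapping idea (the w-graph interpretation itself is not reproduced here): each XML element type becomes an OWL class and nesting becomes an object property, emitted as Turtle. The prefix and the has_<tag> property naming are assumptions for illustration.

```python
# Illustrative XML-to-OWL (Turtle) mapping; not the authors' w-graph method.
import xml.etree.ElementTree as ET

def xml_to_turtle(xml_text, prefix="ex"):
    root = ET.fromstring(xml_text)
    out = [f"@prefix {prefix}: <http://example.org/> .",
           "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
           "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> ."]
    def visit(elem, parent=None):
        out.append(f"{prefix}:{elem.tag} a owl:Class .")
        if parent is not None:   # nesting becomes an object property
            out.append(f"{prefix}:has_{elem.tag} a owl:ObjectProperty ; "
                       f"rdfs:domain {prefix}:{parent.tag} ; "
                       f"rdfs:range {prefix}:{elem.tag} .")
        for child in elem:
            visit(child, elem)
    visit(root)
    return "\n".join(out)

print(xml_to_turtle("<quench><bath><temperature/></bath></quench>"))
```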
15

AMENDOLA, GIOVANNI, CARMINE DODARO, and FRANCESCO RICCA. "Better Paracoherent Answer Sets with Less Resources." Theory and Practice of Logic Programming 19, no. 5-6 (September 2019): 757–72. http://dx.doi.org/10.1017/s1471068419000176.

Abstract:
Answer Set Programming (ASP) is a well-established formalism for logic programming. Problem solving in ASP requires writing an ASP program whose answer sets correspond to solutions. Although the non-existence of answer sets for some ASP programs can be considered a modeling feature, it turns out to be a weakness in many other cases, especially for query answering. Paracoherent answer set semantics extend the classical semantics of ASP to draw meaningful conclusions also from incoherent programs, with the result of increasing the range of applications of ASP. State-of-the-art implementations of paracoherent ASP adopt the semi-equilibrium semantics, but cannot be lifted straightforwardly to compute efficiently the (better) split semi-equilibrium semantics that discards undesirable semi-equilibrium models. In this paper an efficient evaluation technique for computing a split semi-equilibrium model is presented. An experiment on hard benchmarks shows that better paracoherent answer sets can be computed consuming less computational resources than existing methods.
16

Ni, Jiajia, Jianhuang Wu, Jing Tong, Mingqiang Wei, and Zhengming Chen. "SSCA-Net: Simultaneous Self- and Channel-Attention Neural Network for Multiscale Structure-Preserving Vessel Segmentation." BioMed Research International 2021 (March 30, 2021): 1–17. http://dx.doi.org/10.1155/2021/6622253.

Abstract:
Vessel segmentation is a fundamental, yet not well-solved problem in medical image analysis, due to the complicated geometrical and topological structures of human vessels. Unlike existing rule- and conventional learning-based techniques, which hardly capture the location of tiny vessel structures and perceive their global spatial structures, we propose the Simultaneous Self- and Channel-attention Neural Network (termed SSCA-Net) to solve the multiscale structure-preserving vessel segmentation (MSVS) problem. SSCA-Net differs from conventional neural networks in modeling image global contexts, showing more power to understand the global semantic information through both the self- and channel-attention (SCA) mechanisms and offering high performance on segmenting vessels with multiscale structures (e.g., DSC: 96.21% and MIoU: 92.70% on the intracranial vessel dataset). Specifically, the SCA module is designed and embedded in the feature decoding stage to learn SCA features at different layers, in which the self-attention is used to obtain the position information of the feature itself, and the channel attention is designed to guide the shallow features to obtain global feature information. To evaluate the effectiveness of our SSCA-Net, we compare it with several state-of-the-art methods on three well-known vessel segmentation benchmark datasets. Qualitative and quantitative results demonstrate clear improvements of our method over the state-of-the-art in terms of preserving vessel details and global spatial structures.
17

Xu, Lyu, Byron Choi, Yun Peng, Jianliang Xu, and Sourav S Bhowmick. "A Framework for Privacy Preserving Localized Graph Pattern Query Processing." Proceedings of the ACM on Management of Data 1, no. 2 (June 13, 2023): 1–27. http://dx.doi.org/10.1145/3589274.

Abstract:
This paper studies privacy-preserving graph pattern query services in a cloud computing paradigm. In such a paradigm, the data owner stores a large data graph on a powerful cloud hosted by a service provider (SP) and users send their queries to the SP for query processing. However, as the SP may not always be trusted, the sensitive information of users' queries, most importantly the query structures, should be protected. In this paper, we study how to outsource localized graph pattern queries (LGPQs) to the SP side with privacy preservation. LGPQs include a rich set of semantics, such as subgraph homomorphism, subgraph isomorphism, and strong simulation, for which each matched graph pattern is located in a subgraph, called a ball, that has a restriction on its size. To provide privacy-preserving query services for LGPQs, this paper proposes the first framework, called Prilo, that enables users to privately obtain the query results. To further optimize Prilo, we propose Prilo*, which comprises the first bloom filter for trees in the trusted execution environment (TEE) on the SP, a query-oblivious twiglet-based technique for pruning non-answers, and a secure retrieval scheme for balls that enables users to obtain query results early. We conduct detailed experiments on real-world datasets to show that Prilo* is on average 4x faster than the baseline while preserving query privacy.
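The tree-keyed bloom filter can be pictured as follows: hash a canonical serialization of each subtree into a bit array and test membership before any expensive matching. Every detail here (sizes, hash choice, serialization) is an assumed illustration; Prilo*'s TEE-resident structure is not reproduced.

```python
# Sketch of a bloom filter keyed on canonically serialized trees.
import hashlib

class TreeBloom:
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _hashes(self, key: str):
        for i in range(self.k):
            h = hashlib.blake2b(f"{i}:{key}".encode(), digest_size=8)
            yield int.from_bytes(h.digest(), "big") % self.m

    def add(self, tree: str):
        for h in self._hashes(tree):
            self.bits[h // 8] |= 1 << (h % 8)

    def maybe_contains(self, tree: str) -> bool:
        return all(self.bits[h // 8] & (1 << (h % 8))
                   for h in self._hashes(tree))

bf = TreeBloom()
bf.add("(a(b)(c))")                  # canonical form of a small query tree
assert bf.maybe_contains("(a(b)(c))")
```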
18

Iluz, Shir, Yael Vinker, Amir Hertz, Daniel Berio, Daniel Cohen-Or, and Ariel Shamir. "Word-As-Image for Semantic Typography." ACM Transactions on Graphics 42, no. 4 (July 26, 2023): 1–11. http://dx.doi.org/10.1145/3592123.

Abstract:
A word-as-image is a semantic typography technique where a word illustration presents a visualization of the meaning of the word, while also preserving its readability. We present a method to create word-as-image illustrations automatically. This task is highly challenging as it requires semantic understanding of the word and a creative idea of where and how to depict these semantics in a visually pleasing and legible manner. We rely on the remarkable ability of recent large pretrained language-vision models to distill textual concepts visually. We target simple, concise, black-and-white designs that convey the semantics clearly. We deliberately do not change the color or texture of the letters and do not use embellishments. Our method optimizes the outline of each letter to convey the desired concept, guided by a pretrained Stable Diffusion model. We incorporate additional loss terms to ensure the legibility of the text and the preservation of the style of the font. We show high quality and engaging results on numerous examples and compare to alternative techniques. Code and demo will be available at our project page.
19

Nguyen, Ngoc Duc, and Bac Le. "A Fast Algorithm for Privacy-Preserving Utility Mining." Research and Development on Information and Communication Technology 2022, no. 1 (March 8, 2022): 12–22. http://dx.doi.org/10.32913/mic-ict-research.v2022.n1.1026.

Abstract:
Utility mining (UM) is an efficient data mining technique that aims to discover critical patterns from various types of databases. However, mining data can reveal sensitive information about individuals. Privacy-preserving utility mining (PPUM) has emerged as an important research topic in recent years. In the past, an integer programming approach was developed to hide sensitive knowledge in a database. That approach required a significant amount of time for preprocessing and for formulating a constraint satisfaction problem (CSP). To address this problem, we propose a new algorithm based on a hash data structure that performs itemset filtering and problem modeling more quickly. Experimental evaluations are conducted on real-world and synthetic datasets.
20

Hiebel, Gerald, Edeltraud Aspöck, and Karin Kopetzky. "Ontological Modeling for Excavation Documentation and Virtual Reconstruction of an Ancient Egyptian Site." Journal on Computing and Cultural Heritage 14, no. 3 (July 2021): 1–14. http://dx.doi.org/10.1145/3439735.

Abstract:
In this article we introduce our semantic modeling approach for data from over 50 years of excavations at Tell el-Daba in Egypt. The CIDOC CRM with some of its extensions is used as an ontological framework to provide the semantics for creating a knowledge graph containing material remains, excavated areas, and documentation resources. An objective of the project A Puzzle in 4D is to digitize the documentation and create metadata for analog and digital resources in order to provide the data to the research community and facilitate future work for this important archaeological site. Using an example of 3D reconstruction of a tomb, we show how the knowledge graph linked to digital resources can be exploited for a specific task to encounter available information that is essential for a virtual reconstruction. Moreover, we show an approach of modeling to represent the interpretations supporting reconstructions as well as relate them to the sources used, thus providing transparency for the model and provenance data. Modeling for excavation documentation as well as virtual reconstruction has been tailored to the large amount of data processed from the project. The goal is to propose a semantic modeling feasible even on a large scale while still preserving the basic underlying ontological structures.
21

Li, Weihong. "3D Virtual Modeling Realizations of Building Construction Scenes via Deep Learning Technique." Computational Intelligence and Neuroscience 2022 (March 31, 2022): 1–11. http://dx.doi.org/10.1155/2022/6286420.

Abstract:
The architectural drawings of traditional building constructions generally require some design knowledge of the architectural plan to be understood. With the continuous development of the construction industry, the use of three-dimensional (3D) virtual models of buildings has increased quickly. Three-dimensional models give people a more convenient and intuitive understanding of a building, but traditionally a draftsperson must create the 3D model manually. By analyzing the common design rules of architectural drawing, this project designed and realized a 3D building reconstruction system that can automatically generate a stereogram (3ds format) from a building plan (dxf format). The system extracts the building information in the dxf plan and generates a three-dimensional model (3ds format) after identification and analysis. Three-dimensional reconstruction of architectural drawings is an important application of computer graphics in the field of architecture. The technology is based on computer vision and pattern recognition, supported by artificial intelligence, three-dimensional reconstruction, and other areas of computer technology and engineering domain knowledge. It specializes in processing architectural engineering drawings with rich semantic information and various description forms to automatically lay out architectural drawings. High-level information with domain meaning, such as the geometry and semantics/functions of the building graphics, can be analyzed, forming a complete and independent research system. As a new field of computer technology, three-dimensional reconstruction of drawings is appropriate for demonstrating the characteristics of architectural constructions.
22

WANG, ENOCH Y., and BETTY H. C. CHENG. "FORMALIZING THE FUNCTIONAL MODEL WITHIN OBJECT-ORIENTED DESIGN." International Journal of Software Engineering and Knowledge Engineering 10, no. 01 (February 2000): 5–30. http://dx.doi.org/10.1142/s0218194000000031.

Abstract:
The data flow diagram (DFD), originally introduced for structured design purposes, depicts the functions that a system or a module should provide. The objective of a software system is to implement specific functionalities. The function-oriented decomposition strategy of DFDs in the conventional design process for structured design conflicts with the spirit of object-orientation. So far, there is no object-oriented method that has successfully integrated DFDs into the object-oriented development process. In this paper, we demonstrate how DFDs can be modified in order to be integrated into object-oriented development. The Object Modeling Technique (OMT) is used as the context for object-oriented development. In addition, a set of formalization rules are proposed to provide formal semantics for DFDs in order to integrate the functional model with the other two models of OMT, namely, the object and dynamic models, in terms of the underlying formal semantics.
23

Langenfeld, Kai, Kerstin Möhring, Frank Walther, and Jörn Mosler. "Regularizational approach for modeling ductile damage." MATEC Web of Conferences 300 (2019): 08008. http://dx.doi.org/10.1051/matecconf/201930008008.

Abstract:
It is well known that modeling of material softening behavior can lead to ill-posed boundary value problems. This, in turn, leads to mesh-dependent results as far as the finite element method is concerned [1]. Several solution strategies to regularize the aforementioned problem have been proposed in the literature, cf. [2]. However, these strategies often involve high implementational effort. An approach which is very efficient from an implementational point of view is the so-called micromorphic approach of [3, 4]. This regularization technique includes gradients of internal variables implicitly into the framework, while preserving the original structure of the underlying local constitutive model. However, it is shown that a straightforward implementation of the micromorphic approach does not work for single-surface ductile damage models. By analyzing the respective equations, a modification of the micromorphic approach is proposed – first for a scalar internal variable, i.e., isotropic damage. Subsequently, the novel regularization method is extended to tensor-valued damage, i.e., anisotropic material degradation.
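For orientation, the generic micromorphic extension (in the spirit of [3, 4]) augments a local free energy with a penalty coupling between the internal variable and its micromorphic counterpart plus a gradient term; the sketch below is this standard form, not the paper's modified formulation for single-surface ductile damage.

```latex
% Generic micromorphic regularization of an isotropic damage model:
% d is the local damage variable, \bar d its micromorphic counterpart,
% H a penalty modulus and A a gradient (internal length) parameter.
\psi = \psi_{\mathrm{loc}}(\boldsymbol{\varepsilon}, d)
     + \tfrac{H}{2}\,(d - \bar d)^2
     + \tfrac{A}{2}\,\nabla\bar d \cdot \nabla\bar d ,
\qquad
\bar d - \frac{A}{H}\,\Delta\bar d = d \quad \text{in } \Omega .
```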
24

Bui, Nghi D. Q., Yijun Yu, and Lingxiao Jiang. "TreeCaps: Tree-Based Capsule Networks for Source Code Processing." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (May 18, 2021): 30–38. http://dx.doi.org/10.1609/aaai.v35i1.16074.

Abstract:
Recently, program learning techniques have been proposed to process source code based on syntactical structures (e.g., abstract syntax trees) and/or semantic information (e.g., dependency graphs). While graphs may be better than trees at capturing code semantics, constructing the graphs from code inputs through the semantic analysis of multiple viewpoints can introduce inaccurate noise for a specific software engineering task. Compared to graphs, syntax trees are more precisely defined by the grammar and easier to parse; unfortunately, previous tree-based learning techniques have not been able to learn semantic information from trees to achieve better accuracy than graph-based techniques. We propose a new learning technique, named TreeCaps, that fuses capsule networks with tree-based convolutional neural networks to achieve a learning accuracy higher than some existing graph-based techniques while being based only on trees. TreeCaps introduces novel variable-to-static routing algorithms into the capsule networks to compensate for the loss of previous routing algorithms. Aside from accuracy, we also find that TreeCaps is the most robust at withstanding semantics-preserving program transformations that change code syntax without modifying the semantics. Evaluated on a large number of Java and C/C++ programs, TreeCaps models outperform prior deep learning models of program source code, in terms of both accuracy and robustness, for program comprehension tasks such as code functionality classification and function name prediction. Our implementation is publicly available at: https://github.com/bdqnghi/treecaps.
25

Harzallah, Salaheddine, and Mohamed Chabaat. "Eddy Current Modeling for the Nondestructive Evaluation of Stress Intensity Factor." Applied Mechanics and Materials 621 (August 2014): 83–88. http://dx.doi.org/10.4028/www.scientific.net/amm.621.83.

Abstract:
In this paper, a nondestructive technique is used as a tool to inspect cracks and microcracks in materials. A simulation by a numerical approach such as the finite element method is employed to detect cracks and, eventually, to study their propagation using a crucial parameter, the stress intensity factor. This approach has been used in the aircraft industry to control cracks. Moreover, it makes it possible to highlight the defects of parts while preserving the integrity of the inspected products. It is also shown that reliable defect inspection gives convincing results for improving the quality and safety of the material.
26

Zhao, Yanbin, Lu Chen, Zhi Chen, and Kai Yu. "Semi-Supervised Text Simplification with Back-Translation and Asymmetric Denoising Autoencoders." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9668–75. http://dx.doi.org/10.1609/aaai.v34i05.6515.

Abstract:
Text simplification (TS) rephrases long sentences into simplified variants while preserving inherent semantics. Traditional sequence-to-sequence models heavily rely on the quantity and quality of parallel sentences, which limits their applicability in different languages and domains. This work investigates how to leverage large amounts of unpaired corpora in the TS task. We adopt the back-translation architecture from unsupervised neural machine translation (NMT), including denoising autoencoders for language modeling and automatic generation of parallel data by iterative back-translation. However, it is non-trivial to generate appropriate complex-simple pairs if we directly treat the sets of simple and complex corpora as two different languages, since the two types of sentences are quite similar and it is hard for the model to capture the characteristics of the different types of sentences. To tackle this problem, we propose asymmetric denoising methods for sentences with separate complexity. When modeling simple and complex sentences with autoencoders, we introduce different types of noise into the training process. Such a method can significantly improve the simplification performance. Our model can be trained in both an unsupervised and a semi-supervised manner. Automatic and human evaluations show that our unsupervised model outperforms the previous systems, and with limited supervision, our model can perform competitively with multiple state-of-the-art simplification systems.
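The asymmetric-noise idea can be sketched in a few lines: corrupt sentences with word dropout and local shuffling, using heavier noise for one complexity class than for the other. The specific noise types and rates below are assumptions, not the paper's settings.

```python
# Illustrative asymmetric denoising-autoencoder noise for text (assumed rates).
import random

def add_noise(tokens, p_drop, p_shuffle, k=3, seed=None):
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > p_drop]   # word dropout
    if rng.random() < p_shuffle:                          # bounded local shuffle
        kept = [t for _, t in
                sorted((i + rng.uniform(0, k), t) for i, t in enumerate(kept))]
    return kept

noise_for_simple = lambda s: add_noise(s, p_drop=0.20, p_shuffle=0.9)
noise_for_complex = lambda s: add_noise(s, p_drop=0.05, p_shuffle=0.3)
print(noise_for_simple("the cat sat on the mat".split()))
```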
27

Eseonu, Chikezie I., Karim ReFaey, Eva Pamias-Portalatin, Javier Asensio, Oscar Garcia, Kofi D. Boahene, and Alfredo Quiñones-Hinojosa. "Three-Hand Endoscopic Endonasal Transsphenoidal Surgery: Experience With an Anatomy-Preserving Mononostril Approach Technique." Operative Neurosurgery 14, no. 2 (May 10, 2017): 158–65. http://dx.doi.org/10.1093/ons/opx110.

Abstract:
Abstract BACKGROUND Variations on the endoscopic transsphenoidal approach present unique surgical techniques that have unique effects on surgical outcomes, extent of resection (EOR), and anatomical complications. OBJECTIVE To analyze the learning curve and perioperative outcomes of the 3-hand endoscopic endonasal mononostril transsphenoidal technique. METHODS Prospective case series and retrospective data analysis of patients who were treated with the 3-hand transsphenoidal technique between January 2007 and May 2015 by a single neurosurgeon. Patient characteristics, preoperative presentation, tumor characteristics, operative times, learning curve, and postoperative outcomes were analyzed. Volumetric EOR was evaluated, and a logistic regression analysis was used to assess predictors of EOR. RESULTS Two hundred seventy-five patients underwent an endoscopic transsphenoidal surgery using the 3-hand technique. One hundred eighteen patients in the early group had surgery between 2007 and 2010, while 157 patients in the late group had surgery between 2011 and 2015. Operative time was significantly shorter in the late group (161.6 min) compared to the early group (211.3 min, P = .001). Both cohorts had similar EOR (early group 84.6% vs late group 85.5%, P = .846) and postoperative outcomes. The learning curve showed that it took 54 cases to achieve operative proficiency with the 3-handed technique. Multivariate modeling suggested that prior resections and preoperative tumor size are important predictors for EOR. CONCLUSION We describe a 3-hand, mononostril endoscopic transsphenoidal technique performed by a single neurosurgeon that has minimal anatomic distortion and postoperative complications. During the learning curve of this technique, operative time can significantly decrease, while EOR, postoperative outcomes, and complications are not jeopardized.
28

TRIVELLATO, DANIEL, NICOLA ZANNONE, MAURICE GLAUNDRUP, JACEK SKOWRONEK, and SANDRO ETALLE. "A SEMANTIC SECURITY FRAMEWORK FOR SYSTEMS OF SYSTEMS." International Journal of Cooperative Information Systems 22, no. 01 (March 2013): 1350004. http://dx.doi.org/10.1142/s0218843013500044.

Abstract:
Systems of systems (SoS) are dynamic coalitions of distributed, autonomous and heterogeneous systems that collaborate to achieve a common goal. While offering several advantages in terms of scalability and flexibility, the SoS paradigm has a strong impact on systems interoperability and on the security requirements of the collaborating parties. In this paper, we introduce a service-oriented security framework that protects the information exchanged among the parties in an SoS, while preserving parties' autonomy and interoperability. Confidentiality and integrity of information are protected by combining context-aware access control with trust management. Autonomy and interoperability among parties are enabled by the use of ontology-based services. More precisely, parties may refer to different ontologies to define the semantics of the terms used in their security policies and to describe domain knowledge and context information; a semantic alignment technique is then employed to map concepts from different ontologies and align the parties' vocabularies. We demonstrate the applicability of our solution by deploying a prototype implementation of the framework in an SoS in the maritime safety and security domain.
29

Zimba, Aaron. "A Bayesian Attack-Network Modeling Approach to Mitigating Malware-Based Banking Cyberattacks." International Journal of Computer Network and Information Security 14, no. 1 (February 8, 2021): 25–39. http://dx.doi.org/10.5815/ijcnis.2022.01.03.

Abstract:
According to Cybersecurity Ventures, the damage related to cybercrime is projected to reach $6 trillion annually by 2021. The majority of cyberattacks are directed at financial institutions, as this reduces the number of intermediaries that the attacker needs to attack to reach the target - monetary proceeds. Research has shown that malware is the preferred attack vector in cybercrimes targeted at banks and other financial institutions. In light of the above, this paper presents a Bayesian Attack Network modeling technique for cyberattacks in the financial sector that are perpetrated by crimeware. We use the GameOver Zeus malware for our use cases as it is the most common type of malware in this domain. The primary targets of this malware are users of financial services. Today, financial services are accessed using personal laptops, institutional computers, mobile phones, tablets, etc. All these are potential victims that can be enlisted into the malware's botnet. In our approach, phishing emails as well as Common Vulnerabilities and Exposures (CVEs) exhibited in various systems are employed to derive conditional probabilities that serve as inputs to the modeling technique. Compared to state-of-the-art approaches, our method generates probability density curves of various attack structures whose semantics are applied in the mitigation process. This is based on the level of exploitability deduced from the vertex degrees of the compromised nodes that characterize the probability density curves.
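A common building block in such attack networks is the noisy-OR combination of independent attack paths; the sketch below shows it with made-up phishing and CVE-derived probabilities (the paper's conditional probabilities and network structure are richer than this).

```python
# Noisy-OR combination of independent attack paths (illustrative numbers).
def noisy_or(parent_probs):
    """P(node compromised) given independent parent compromise paths."""
    p_safe = 1.0
    for p in parent_probs:
        p_safe *= 1.0 - p
    return 1.0 - p_safe

p_phishing = 0.31   # made up: user executes a malicious attachment
p_cve = 0.42        # made up: CVSS-derived exploit likelihood
print(f"P(host enlisted into botnet) = {noisy_or([p_phishing, p_cve]):.3f}")
```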
30

Wu, Xing, Zhaowang Liang, and Jianjia Wang. "FedMed: A Federated Learning Framework for Language Modeling." Sensors 20, no. 14 (July 21, 2020): 4048. http://dx.doi.org/10.3390/s20144048.

Abstract:
Federated learning (FL) is a privacy-preserving technique for training a vast amount of decentralized data and making inferences on mobile devices. As a typical language modeling problem, mobile keyboard prediction aims at suggesting a probable next word or phrase and facilitating the human-machine interaction in a virtual keyboard of the smartphone or laptop. Mobile keyboard prediction with FL hopes to satisfy the growing demand that high-level data privacy be preserved in artificial intelligence applications even with the distributed models training. However, there are two major problems in the federated optimization for the prediction: (1) aggregating model parameters on the server-side and (2) reducing communication costs caused by model weights collection. To address the above issues, traditional FL methods simply use averaging aggregation or ignore communication costs. We propose a novel Federated Mediation (FedMed) framework with the adaptive aggregation, mediation incentive scheme, and topK strategy to address the model aggregation and communication costs. The performance is evaluated in terms of perplexity and communication rounds. Experiments are conducted on three datasets (i.e., Penn Treebank, WikiText-2, and Yelp) and the results demonstrate that our FedMed framework achieves robust performance and outperforms baseline approaches.
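As a reference point for what the framework improves on, here is a minimal FedAvg-style weighted aggregation of per-client parameter arrays; FedMed's adaptive aggregation, mediation incentive scheme, and topK strategy replace this plain size-weighted mean and are not shown.

```python
# Baseline federated averaging of per-client weight arrays (illustrative).
import numpy as np

def aggregate(client_weights, client_sizes):
    """Size-weighted average; client_weights[i] is a list of numpy arrays."""
    total = float(sum(client_sizes))
    coeffs = [n / total for n in client_sizes]
    return [sum(c * w[j] for c, w in zip(coeffs, client_weights))
            for j in range(len(client_weights[0]))]

w_a = [np.ones((2, 2))]          # toy single-layer "model" from client A
w_b = [np.zeros((2, 2))]         # and from client B
print(aggregate([w_a, w_b], client_sizes=[300, 100])[0])  # all entries 0.75
```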
31

de Leoni, Massimiliano, Paolo Felli, and Marco Montali. "Integrating BPMN and DMN: Modeling and Analysis." Journal on Data Semantics 10, no. 1-2 (June 2021): 165–88. http://dx.doi.org/10.1007/s13740-021-00132-z.

Abstract:
The operational backbone of modern organizations is the target of business process management, where business process models are produced to describe how the organization should react to events and coordinate the execution of activities so as to satisfy its business goals. At the same time, operational decisions are made by considering internal and external contextual factors, according to decision models that are typically based on declarative, rule-based specifications that describe how input configurations correspond to output results. The increasing importance and maturity of these two intertwined dimensions, those of processes and decisions, have led to a wide range of data-aware models and associated methodologies, such as BPMN for processes and DMN for operational decisions. While it is important to analyze these two aspects independently, it has been pointed out by several authors that it is also crucial to analyze them in combination. In this paper, we provide a native, formal definition of DBPMN models, namely data-aware and decision-aware processes that build on BPMN and DMN S-FEEL, illustrating their use and giving their formal execution semantics via an encoding into Data Petri nets (DPNs). By exploiting this encoding, we then build on previous work in which we lifted the classical notion of soundness of processes to this richer, data-aware setting, and show how the abstraction and verification techniques that were devised for DPNs can be directly used for DBPMN models. This paves the way towards even richer forms of analysis, beyond that of assessing soundness, that are based on the same technique.
32

Alizadeh, Mahmoud, Peter Händel, and Daniel Rönnow. "Behavioral modeling and digital pre-distortion techniques for RF PAs in a 3 × 3 MIMO system." International Journal of Microwave and Wireless Technologies 11, no. 10 (June 20, 2019): 989–99. http://dx.doi.org/10.1017/s1759078719000862.

Abstract:
Modern telecommunications are moving towards (massive) multi-input multi-output (MIMO) systems in 5th generation (5G) technology, increasing the dimensionality of the systems dramatically. In this paper, the impairments of radio frequency (RF) power amplifiers (PAs) in a 3 × 3 MIMO system are compensated in both the time and the frequency domains. A three-dimensional (3D) time-domain memory polynomial-type model is proposed as an extension of conventional 2D models. Furthermore, a 3D frequency-domain technique is formulated based on the proposed time-domain model to reduce the dimensionality of the model, while preserving the performance in terms of model errors. In the 3D frequency-domain technique, the bandwidth of the system is split into several narrow sub-bands, and the parameters of the model are estimated for each sub-band. This approach requires less computational complexity, and the parameter estimation for each sub-band can be implemented independently. The device-under-test consists of three RF PAs including input and output cross-talk channels. The proposed techniques are evaluated from both behavioral modeling and digital pre-distortion (DPD) perspectives. The experimental results show that the proposed DPD technique can compensate for the errors of non-linearity and memory effects in both the time and frequency domains.
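A conventional single-branch memory polynomial, the baseline that the 3D model extends with cross-antenna terms, can be sketched directly; the coefficient values below are toy numbers, not identified PA parameters.

```python
# Single-branch memory polynomial PA model (illustrative coefficients).
import numpy as np

def memory_polynomial(x, coeffs):
    """y(n) = sum_{k,q} a[k,q] * x(n-q) * |x(n-q)|**(k-1), complex baseband."""
    K, Q = coeffs.shape
    y = np.zeros_like(x, dtype=complex)
    for q in range(Q):
        xq = np.roll(x, q)
        xq[:q] = 0                        # crude zero-padding at the edge
        for k in range(1, K + 1):
            y += coeffs[k - 1, q] * xq * np.abs(xq) ** (k - 1)
    return y

x = np.exp(1j * np.linspace(0, 2 * np.pi, 64))         # toy baseband input
a = np.array([[1.0, 0.05], [0.1, 0.01], [0.02, 0.0]])  # K=3, Q=2 toy values
y = memory_polynomial(x, a)
```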
33

Nasr, S., and O. V. German. "A SEARCHING ALGORITHM FOR TEXT WITH MISTAKES." Doklady BGUIR, no. 1 (March 6, 2020): 29–34. http://dx.doi.org/10.35596/1729-7648-2020-18-1-29-34.

Abstract:
The paper presents a new text searching method, a modification of the Boyer-Moore algorithm, that enables a user to find the places in a text where a given substring occurs possibly with errors; that is, the string in the text and the query may not coincide exactly yet still be treated as identical. The idea is to divide the searching process into two phases: in the first phase a fuzzy variant of the Boyer-Moore algorithm is performed; in the second phase the Dice metric is used. The advantage of the suggested technique over known methods that fix the number of allowed mistakes is that it (1) does not precompute an auxiliary table of a size comparable to the original text and (2) more flexibly captures the semantics of erroneous text substrings even for a large number of mistakes. This extends the Boyer-Moore method by admitting a larger number of possible mistakes in the text while preserving text semantics. The suggested method also provides more accurate regulation of the upper bound on text mistakes, which distinguishes it from known methods with a fixed maximum number of mistakes independent of text size. Moreover, such an upper bound is usually defined as a Levenshtein distance, which is not suitable for evaluating the relevance of the found text to a query, whereas the Dice metric does provide such a relevance measure. Indeed, if the maximum Levenshtein distance is 3, one cannot judge whether this value is large or small enough to ensure relevant search results. Consequently, the suggested method is more flexible and enables one to find relevant answers even when the text contains many mistakes. The worst-case efficiency of the suggested method is O(nc), with the constant c defining the largest allowable number of mistakes.
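The second-phase relevance check is easy to sketch with the Dice coefficient over character bigrams (the authors' exact tokenization is not specified here):

```python
# Dice similarity over character bigrams (illustrative second-phase check).
def dice(a: str, b: str) -> float:
    bigrams = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
    x, y = bigrams(a.lower()), bigrams(b.lower())
    return 2 * len(x & y) / (len(x) + len(y)) if (x or y) else 1.0

# candidate located by the fuzzy Boyer-Moore phase vs. the query:
print(dice("semantics", "semantcis"))   # 0.625, well above unrelated strings
```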
34

Liu, Qiuhong, Kun Qu, and Jinsheng Cai. "An automated multi-grid Chimera method based on the implicit hole technique." Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering 231, no. 2 (August 6, 2016): 279–93. http://dx.doi.org/10.1177/0954410016636162.

Abstract:
An automated multi-grid overset grid algorithm is presented for accurate simulation of viscous flows around complex configurations. The algorithm is based on the implicit hole cutting technique optimized with an overset grid construction strategy, a grid cutting criterion and a multi-level overset grid cutting method. The enhanced method is more general than the original method, while preserving the high degree of automation of the implicit hole cutting algorithm. Moreover, a mesh sequencing and multi-grid Chimera strategy is proposed to achieve convergence acceleration of the flow calculations. The present Chimera algorithm is demonstrated through the solution of the flows over the NASA Common Research Model configurations. The results show that the present mesh sequencing and multi-grid strategy in the overset grid framework are very effective in accelerating convergence of the solution in comparison with the standard three-level multi-grid. Wing pressure comparisons indicate that the pressure distribution varies with grid resolution at the shock, and a higher grid resolution improves shock definition. Abrupt reductions in lift and drag coefficients are correlated with the side-of-body separation bubble as the angle of attack is increased. Various modeling choices and grid resolutions have a strong impact on predicting the side-of-body separation, which decreases with grid refinement. The bubble size from Spalart-Allmaras (SA) modeling is larger than that from the shear stress transport model.
35

Camagni, F., S. Colaceci, and M. Russo. "REVERSE MODELING OF CULTURAL HERITAGE: PIPELINE AND BOTTLENECKS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W9 (January 31, 2019): 197–204. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w9-197-2019.

Abstract:
The present work is offered as a contribution to the debate on the Reverse Modeling (RM) topic in the Cultural Heritage field. It aims to test the methodology, the limits and the bottlenecks of the RM pipeline in the architectural field, with particular attention to the reading and interpretation of geometric shapes. The mathematical reconstruction of architectural models represents an overlaid result of anthropic and natural transformations framed inside a complex process of shape simplification and surface generation. This pipeline must be supported by a careful reading of the heritage by means of architectural rules, preserving both the actual shape and the original intent of the building designer. The integration of these last two aspects makes the process of RM applied to CH extremely complex. It involves a cognitive activity aimed at choosing, on the one hand, the best 3D survey technique to obtain reliable 3D data and, on the other hand, reaching suitable architectural knowledge for achieving a plausible modeling result. The research presented describes an RM process applied to an ecclesiastical architecture, highlighting some key passages: an integrated survey approach to extract geometrical information, data analysis, and the generation of a mathematical 3D model that is reliable from both a formal and a cultural point of view.
APA, Harvard, Vancouver, ISO, and other styles
36

Kumar, Kalyana Kiran, Gandi Ramarao, Polamarasetty P. Kumar, Ramakrishna S. S. Nuvvula, Ilhami Colak, Baseem Khan, and Md Alamgir Hossain. "Reduction of High Dimensional Noninteger Commensurate Systems Based on Differential Evolution." International Transactions on Electrical Energy Systems 2023 (March 31, 2023): 1–10. http://dx.doi.org/10.1155/2023/5911499.

Full text
Abstract:
This work concerns the reduction of noninteger commensurate high dimensional systems. The essential objective of this article is to develop an approximation technique that replaces the original high dimensional system with a low dimensional model, preserving the properties of the original system in its reduced form. The superiority of the proposed technique is demonstrated by comparing the reduced model with the models produced by other current methods. Simulation results confirm that the recommended technique is highly efficient, with time-domain specifications closely matching those of the original system. Finally, systematic comparisons are made with other existing methods. Performance indices are calculated for both the original system and the reduced model and are presented in the manuscript.
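As a rough illustration of the reduction principle, the sketch below uses SciPy's differential evolution to fit a second-order model to a fourth-order system by minimizing the integral squared error between step responses. It is a hedged stand-in: the paper treats noninteger (fractional) commensurate systems, which SciPy cannot simulate natively, and the systems and bounds here are invented for illustration.

```python
import numpy as np
from scipy import signal
from scipy.optimize import differential_evolution

# Hypothetical original 4th-order system (s+5) / ((s+1)(s+2)(s+3)(s+4))
num_hi, den_hi = [1.0, 5.0], [1.0, 10.0, 35.0, 50.0, 24.0]
t = np.linspace(0.0, 10.0, 400)
_, y_hi = signal.step(signal.lti(num_hi, den_hi), T=t)

def ise(p):
    """Integral squared error between step responses, the fitness for DE."""
    b0, a1, a0 = p
    _, y_lo = signal.step(signal.lti([b0], [1.0, a1, a0]), T=t)
    return np.trapz((y_hi - y_lo) ** 2, t)

bounds = [(0.01, 50.0)] * 3                     # b0, a1, a0 kept positive (stable)
res = differential_evolution(ise, bounds, seed=1)
b0, a1, a0 = res.x
print(f"reduced model: {b0:.3f} / (s^2 + {a1:.3f} s + {a0:.3f}), ISE {res.fun:.2e}")
```

Keeping the denominator coefficients positive guarantees the candidate reduced models are stable, so the time-domain comparison the paper emphasizes is always well defined.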
APA, Harvard, Vancouver, ISO, and other styles
37

Luca, I., C. Y. Kuo, K. Hutter, and Y. C. Tai. "Modeling Shallow Over-Saturated Mixtures on Arbitrary Rigid Topography." Journal of Mechanics 28, no. 3 (August 9, 2012): 523–41. http://dx.doi.org/10.1017/jmech.2012.62.

Full text
Abstract:
In this paper a system of depth-integrated equations for over-saturated debris flows on three-dimensional topography is derived. The lower layer is a saturated mixture of density-preserving solid and fluid constituents, where the pore fluid is in excess, so that an upper fluid layer develops above the mixture layer. At the layer interface fluid mass exchange may exist, and for this a parameterization is needed. The emphasis is on the description of the influence on the flow by the curvature of the basal surface, and not on proposing rheological models of the avalanching mass. To this end, a coordinate system fitted to the topography has been used to properly account for the geometry of the basal surface. Thus, the modeling equations have been written in terms of these coordinates, and then simplified by using (1) the depth-averaging technique and (2) ordering approximations in terms of an aspect ratio ϵ which accounts for the scale of the flowing mass. The ensuing equations have been complemented by closure relations, but any other such relations can be postulated. For a shallow two-layer debris with clean water in the upper layer, flowing on a slightly curved surface, the equilibrium free surface is shown to be horizontal.
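For orientation, here is a hedged rendering, in my own notation and in the Cartesian special case, of the depth-averaging step the abstract refers to; the paper's topography-fitted coordinates add curvature terms of order ϵ to these relations.

```latex
% Depth average of a field u over a layer b <= z <= b + h:
\bar{u}(x, y, t) = \frac{1}{h} \int_{b}^{b+h} u(x, y, z, t) \, dz
% Depth-integrated mass balance, with \mathcal{E} parameterizing the
% interfacial fluid mass exchange mentioned in the abstract:
\partial_t h + \nabla \cdot \left( h \, \bar{\mathbf{u}} \right) = \mathcal{E}
```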
APA, Harvard, Vancouver, ISO, and other styles
38

Holst, M., J. A. McCammon, Z. Yu, Y. C. Zhou, and Y. Zhu. "Adaptive Finite Element Modeling Techniques for the Poisson-Boltzmann Equation." Communications in Computational Physics 11, no. 1 (January 2012): 179–214. http://dx.doi.org/10.4208/cicp.081009.130611a.

Full text
Abstract:
We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein.
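The solve-estimate-mark-refine loop at the heart of any AFEM can be sketched in a few lines. The following toy, assuming a 1D Poisson problem, P1 elements, midpoint quadrature, a simplified residual indicator without jump terms, and maximum marking, illustrates only the AFEM skeleton, not the paper's regularized PBE scheme.

```python
import numpy as np

def solve_p1(x, f):
    """P1 finite elements for -u'' = f on nodes x, with u(0) = u(1) = 0."""
    n, h = len(x), np.diff(x)
    A, b = np.zeros((n, n)), np.zeros(n)
    for k in range(n - 1):
        A[k, k] += 1.0 / h[k]; A[k + 1, k + 1] += 1.0 / h[k]
        A[k, k + 1] -= 1.0 / h[k]; A[k + 1, k] -= 1.0 / h[k]
        fm = f(0.5 * (x[k] + x[k + 1]))              # midpoint quadrature
        b[k] += 0.5 * fm * h[k]; b[k + 1] += 0.5 * fm * h[k]
    A[0, :] = 0; A[-1, :] = 0; A[0, 0] = 1; A[-1, -1] = 1; b[0] = 0; b[-1] = 0
    return np.linalg.solve(A, b)

def indicator(x, f):
    """Toy residual indicator h_K * |f(mid)| (u_h'' = 0; jump terms omitted)."""
    return np.diff(x) * np.abs(f(0.5 * (x[:-1] + x[1:])))

f = lambda s: 1.0 / np.sqrt(1e-3 + (s - 0.5) ** 2)   # load peaked at x = 0.5
x = np.linspace(0.0, 1.0, 11)
for it in range(6):                                  # solve-estimate-mark-refine
    u, eta = solve_p1(x, f), indicator(x, f)
    mids = 0.5 * (x[:-1] + x[1:])
    x = np.sort(np.concatenate([x, mids[eta > 0.5 * eta.max()]]))
    print(f"iter {it}: {len(x)} nodes, u_max {u.max():.3f}, eta_max {eta.max():.2e}")
```

The refinement concentrates nodes near the peak of the load, which is the qualitative behavior a contraction result such as the paper's makes rigorous.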
APA, Harvard, Vancouver, ISO, and other styles
39

Ali, Wael H., Chris Mirabito, Patrick J. Haley, and Pierre F. Lermusiaux. "Optimal stochastic modeling in random media propagation: Dynamically orthogonal parabolic equations?" Journal of the Acoustical Society of America 152, no. 4 (October 2022): A158. http://dx.doi.org/10.1121/10.0015879.

Full text
Abstract:
Reliable underwater acoustic propagation is challenging due to complex ocean dynamics such as internal waves and to the uncertain larger-scale ocean physics, acoustics, bathymetry, and seabed fields. For accurate acoustic propagation, capturing the important environmental uncertainties and variabilities and predicting the probability distributions of the acoustic pressure field is then what matters. Prior works towards addressing this goal include (i) wave propagation in random media techniques such as perturbation methods, path integral theory, and coupled-mode transport theory, and (ii) probabilistic modeling techniques such as Monte Carlo sampling and Polynomial Chaos expansions. Recently, we developed a novel technique called the Dynamically Orthogonal Parabolic Equations (DO-ParEq), which represents the sound speed, density, bathymetry, and acoustic pressure fields using optimal dynamic Karhunen-Loeve decompositions. The DO-ParEq are range-evolving partial and stochastic differential equations preserving acoustic nonlinearities and non-Gaussian properties. In this presentation, we showcase the theoretical and computational advantages of the DO-ParEq framework compared to state-of-the-art techniques in the Pekeris waveguide and wedge benchmark problems, in addition to a realistic ocean example in the New York Bight region.
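A hedged sketch, in my own notation, of the kind of dynamic Karhunen-Loeve decomposition the abstract describes:

```latex
% Mean field plus s range-evolving stochastic modes for the acoustic pressure,
p(r, z; \omega) \approx \bar{p}(r, z) + \sum_{i=1}^{s} Y_i(r; \omega) \, \tilde{p}_i(r, z)
% where both the deterministic modes \tilde{p}_i(r, z) and the stochastic
% coefficients Y_i(r; \omega) evolve with range r, with the modes kept
% orthonormal in z by a dynamical orthogonality condition.
```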
APA, Harvard, Vancouver, ISO, and other styles
40

Jacobs, Bart, Aleks Kissinger, and Fabio Zanasi. "Causal inference via string diagram surgery." Mathematical Structures in Computer Science 31, no. 5 (May 2021): 553–74. http://dx.doi.org/10.1017/s096012952100027x.

Full text
Abstract:
Extracting causal relationships from observed correlations is a growing area in probabilistic reasoning, originating with the seminal work of Pearl and others from the early 1990s. This paper develops a new, categorically oriented view based on a clear distinction between syntax (string diagrams) and semantics (stochastic matrices), connected via interpretations as structure-preserving functors. A key notion in the identification of causal effects is that of an intervention, whereby a variable is forcefully set to a particular value independent of any prior propensities. We represent the effect of such an intervention as an endo-functor which performs ‘string diagram surgery’ within the syntactic category of string diagrams. This diagram surgery in turn yields a new, interventional distribution via the interpretation functor. While in general there is no way to compute interventional distributions purely from observed data, we show that this is possible in certain special cases using a calculational tool called comb disintegration. We demonstrate the use of this technique on two well-known toy examples. In the first, we predict the causal effect of smoking on cancer in the presence of a confounding common cause, and we show that the technique provides simple sufficient conditions for computing interventions which apply to a wide variety of situations considered in the causal inference literature. The second is an illustration of counterfactual reasoning, where the same interventional techniques are used in a ‘twinned’ set-up with two versions of the world (one factual and one counterfactual) joined together via exogenous variables that capture the uncertainties at hand.
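The smoking/cancer toy example can be rendered numerically without any categorical machinery. The sketch below, with invented probability tables, contrasts the observational conditional with the interventional distribution obtained by cutting the confounder-to-smoking edge, which is the adjustment the diagram surgery computes in this special case.

```python
import numpy as np

p_h = np.array([0.7, 0.3])                 # P(H): hidden confounder (made up)
p_s_h = np.array([[0.8, 0.2],              # P(S|H): rows H, cols S
                  [0.3, 0.7]])
p_c_sh = np.array([[[0.9, 0.1],            # P(C|S,H): indexed [h][s][c]
                    [0.6, 0.4]],
                   [[0.7, 0.3],
                    [0.2, 0.8]]])

# Observational P(C=1 | S=1): condition the full joint distribution
joint = p_h[:, None, None] * p_s_h[:, :, None] * p_c_sh
p_c1_given_s1 = joint[:, 1, 1].sum() / joint[:, 1, :].sum()

# Interventional P(C=1 | do(S=1)): cut the H -> S edge ("surgery"), i.e.
# average P(C=1 | S=1, H) over the prior P(H) (the adjustment formula)
p_c1_do_s1 = (p_h * p_c_sh[:, 1, 1]).sum()

print(f"P(C=1 | S=1)     = {p_c1_given_s1:.3f}")   # 0.640: inflated by H
print(f"P(C=1 | do(S=1)) = {p_c1_do_s1:.3f}")      # 0.520: true causal effect
```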
APA, Harvard, Vancouver, ISO, and other styles
41

Маркова, О. Н. "Religious Semantics in the Modern Monumental Landscape: All-Russian and Regional Aspects." Nasledie Vekov, no. 4(28) (December 31, 2020): 85–102. http://dx.doi.org/10.36343/sb.2021.28.4.006.

Full text
Abstract:
The religious aspects of the formation of the modern monumental landscape are analyzed at the all-Russian and regional scales. The sources for the study were legal and regulatory documents, materials from Internet sites, and the results of field surveys of the historical centers of individual cities. The experience of monumental practices of the post-Soviet period and the present day is examined, and the features of the embodiment of sacred texts in monumental architecture (religious in its essence) and in monumental sculpture (a manifestation of secular culture) are identified. The interaction of state power and the church in matters of monumental policy and the modeling of official practices of remembrance is characterized. The roles of religious monumental art in cultural landscapes and in the socio-cultural environment are outlined. It is established that monuments with religious symbols and church architecture convey the ideas of stability and continuity of tradition in the context of globalization. The article analyzes the religious aspects of the formation of a modern monumental landscape at a national and regional scale in the post-Soviet period. The author tried to characterize the representation of the religious component in the modern monumental landscape of the country. The main sources were regulatory documents, materials of Internet sites and the results of field studies of the territories of historical centers of southern Russian cities. The methodological basis of the research was historical-comparative, retrospective, typological methods, as well as the method of participatory observation, the use of which is due to the author’s many years of expert activity in the field of preserving cultural heritage. The author emphasizes that, in the context of the orientation towards traditionalism and religiosity adopted by the Russian authorities after the collapse of the USSR, the Orthodox Church acquired a leading role. The formation of the Orthodox monumental landscape is presented in its development over the course of three post-Soviet decades. The nature of the interaction between the state and church authorities in matters of modeling the official ideology and strengthening the social positions of the Russian Orthodox Church is considered. The importance of religious monumental rhetoric in the material embodiment of the ideas of national-Orthodox revival based on the “symphony of priesthood and kingdom”, and in the visual sacralization of state power, is determined. The significance of the recreated iconic pre-revolutionary churches as symbols of historical continuity and translators of religious and spiritual meanings (the Cathedral of Christ the Savior in Moscow, the Alexander Nevsky Cathedral in Krasnodar, etc.) is analyzed. Their place in the modern urban-planning space (architectural and urban-planning compositional accents and dominants) and in the sociocultural sphere (places of memory, social attraction, public spaces) is indicated. The contradictory public perception of the church-architectural monumentalization of landscapes is shown. The predominantly ideological function of sculptural monuments on religious themes, and their lack of involvement in the church tradition, are emphasized. The most popular images of Orthodox saints and biblical heroes included in the processes of monumental commemoration (Alexander Nevsky, Sergius of Radonezh, Nicholas the Wonderworker, Peter and Fevronia, Saint George the Victorious, etc.) are highlighted. The author establishes that monuments with religious symbols and church architecture convey the ideas of sustainability and continuity of tradition in the context of globalization.
APA, Harvard, Vancouver, ISO, and other styles
42

Alkhalifah, Tariq. "Efficient synthetic‐seismogram generation in transversely isotropic, inhomogeneous media." GEOPHYSICS 60, no. 4 (July 1995): 1139–50. http://dx.doi.org/10.1190/1.1443842.

Full text
Abstract:
I develop an efficient modeling technique for transversely isotropic, inhomogeneous media using a mix of analytical equations and numerical calculations. The analytic equation for the raypath in a factorized transversely isotropic (FTI) medium with linear velocity variation, derived by Shearer and Chapman, is used to trace rays between two points. In addition, I derive an analytical equation for geometrical spreading in FTI media that helps preserve program efficiency; the traveltimes, however, are calculated numerically. I then generalize the method to treat general transversely isotropic (TI) media that are not factorized anisotropic inhomogeneous by perturbing the FTI traveltimes, following the perturbation ideas of Červený and Filho. A Kirchhoff‐summation‐based program relying on Trorey’s diffraction method is used to generate synthetic seismograms for such a medium. For the type of velocity models treated, the program is much more efficient than finite‐difference or general ray‐trace modeling techniques.
APA, Harvard, Vancouver, ISO, and other styles
43

Zhou, Xin, Jingnan Guo, Liling Jiang, Bo Ning, and Yanhao Wang. "A lightweight CNN-based knowledge graph embedding model with channel attention for link prediction." Mathematical Biosciences and Engineering 20, no. 6 (2023): 9607–24. http://dx.doi.org/10.3934/mbe.2023421.

Full text
Abstract:
Knowledge graph (KG) embedding aims to embed the entities and relations of a KG into a low-dimensional continuous vector space while preserving the intrinsic semantic associations between entities and relations. One of the most important applications of knowledge graph embedding (KGE) is link prediction (LP), which aims to predict the missing fact triples in the KG. A promising approach to improving the performance of KGE for the task of LP is to increase the feature interactions between entities and relations so as to express richer semantics between them. Convolutional neural networks (CNNs) have thus become one of the most popular KGE models due to their strong expression and generalization abilities. To further enhance favorable features from increased feature interactions, we propose a lightweight CNN-based KGE model called IntSE in this paper. Specifically, IntSE not only increases the feature interactions between the components of entity and relationship embeddings with more efficient CNN components but also incorporates a channel attention mechanism that can adaptively recalibrate channel-wise feature responses by modeling the interdependencies between channels, enhancing the useful features while suppressing the useless ones to improve its performance for LP. The experimental results on public datasets confirm that IntSE is superior to state-of-the-art CNN-based KGE models for link prediction in KGs.
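Below is a hedged PyTorch sketch of the two ingredients the abstract names, a convolution over stacked entity/relation embeddings plus squeeze-and-excitation-style channel attention. All dimensions, layer shapes, and the scoring head are illustrative assumptions, not IntSE's actual architecture.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                          # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))            # squeeze then excite: (B, C)
        return x * w[:, :, None, None]             # recalibrate channel responses

class TinyConvScorer(nn.Module):
    def __init__(self, n_ent, n_rel, dim=200, channels=32):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)
        self.conv = nn.Conv2d(1, channels, 3, padding=1)
        self.att = ChannelAttention(channels)
        self.proj = nn.Linear(channels * 2 * dim, dim)

    def forward(self, head, rel):
        # Stack head and relation embeddings as a 2 x dim "image"
        x = torch.stack([self.ent(head), self.rel(rel)], dim=1).unsqueeze(1)
        x = torch.relu(self.conv(x))               # (B, C, 2, dim)
        x = self.att(x)                            # channel attention
        q = self.proj(x.flatten(1))                # query vector, (B, dim)
        return q @ self.ent.weight.t()             # scores over all tail entities

model = TinyConvScorer(n_ent=1000, n_rel=50)
scores = model(torch.tensor([3, 7]), torch.tensor([1, 2]))
print(scores.shape)                                # torch.Size([2, 1000])
```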
APA, Harvard, Vancouver, ISO, and other styles
44

Khoo-Fazari, Kerry, Zijiang Yang, and Joseph C. Paradi. "A Distribution-Free Approach to Stochastic Efficiency Measurement with Inclusion of Expert Knowledge." Journal of Applied Mathematics 2013 (2013): 1–21. http://dx.doi.org/10.1155/2013/102163.

Full text
Abstract:
This paper proposes a new efficiency benchmarking methodology that is capable of incorporating probability while still preserving the advantages of a distribution-free and nonparametric modeling technique. The new technique developed in this paper is known as the DEA-Chebyshev model. The foundation of the DEA-Chebyshev model is the model pioneered by Charnes, Cooper, and Rhodes in 1978, known as Data Envelopment Analysis (DEA). The combination of standard DEA with the DEA-Chebyshev frontier (DCF) can successfully provide a good framework for evaluation based on quantitative data and qualitative intellectual management knowledge. A simulated dataset was tested on the DEA-Chebyshev model. It has been statistically shown that this model is effective in predicting a new frontier, whereby DEA-efficient units can be further differentiated and ranked. It is an improvement over other methods, as it is practical, not computationally intensive, and easy to implement.
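The classical CCR-DEA building block that DEA-Chebyshev extends can be sketched as a linear program. The sketch below, with invented input/output data, solves the input-oriented multiplier form with SciPy; the Chebyshev-frontier step itself is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])   # inputs, one row per DMU
Y = np.array([[1.0], [1.0], [1.5]])                  # outputs, one row per DMU

def ccr_efficiency(o, X, Y):
    """CCR multiplier form: max u.y_o s.t. v.x_o = 1, u.y_j - v.x_j <= 0."""
    n, m = Y.shape
    k = X.shape[1]
    c = -np.concatenate([Y[o], np.zeros(k)])         # linprog minimizes, so negate
    A_eq = np.concatenate([np.zeros(m), X[o]])[None, :]
    A_ub = np.hstack([Y, -X])                        # one constraint per DMU
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")
```

Units scoring 1.0 lie on the DEA frontier; the paper's contribution is a probabilistic second frontier that further differentiates and ranks exactly those units.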
APA, Harvard, Vancouver, ISO, and other styles
45

Lee, Ching-Sung, Yen-Cheng Chen, Pei-Ling Tsui, Cheng-Wei Che, and Ming-Chen Chiang. "Application of Fuzzy Delphi Technique Approach in Sustainable Inheritance of Rural Cooking Techniques and Innovative Business Strategies Modeling." Agriculture 11, no. 10 (September 26, 2021): 924. http://dx.doi.org/10.3390/agriculture11100924.

Full text
Abstract:
Transformation and sustainable development must be undertaken in accordance with the trends of the times, which presents challenges to rural areas worldwide. In addition to preserving rural food specialties and presenting them in new ways to attract consumers, these areas must link farmers’ production, processing, sales, and management. It is imperative to sustainably pass on rural foods and their cooking techniques and to integrate them into innovative business strategies so that delicious rural foods can be sold on the consumer market, boosting rural economies and their development. The main objective of this research was to conduct indicator modeling and empirical analysis for the sustainable inheritance of Taiwan’s rural cooking techniques and the development of innovative marketing strategies. The Fuzzy Delphi Technique was used as the main research method, with agricultural experts and rural household economy organizations selected for indicator modeling and empirical analysis. The results of the research indicate that agricultural experts believe that market operation is the primary developmental focus of cultural inheritance and innovation, whereas household economy organizations believe that education, training, promotion, and development are the primary developmental focuses. The greatest contribution and innovation of this research are the findings that culinary education and training, organized by the farmers’ association, can sustainably pass on traditional rural cooking techniques, and that the process of incorporating local ingredients into commercial gourmet food should also consider the economic and marketing-strategy aspects of market operation, facilitating the sustainable inheritance of unique, traditional, local, and rural food culture.
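A common Fuzzy Delphi screening step can be sketched numerically: aggregate expert ratings into triangular fuzzy numbers, defuzzify, and retain indicators above a threshold. The ratings, scale, aggregation choices, and cut-off below are all hypothetical; the paper's exact scheme may differ.

```python
import numpy as np

# Hypothetical expert ratings on a 1-9 scale
# (rows: experts, cols: candidate indicators)
ratings = np.array([[7, 4, 8, 6],
                    [8, 3, 9, 7],
                    [6, 5, 7, 5],
                    [9, 4, 8, 6]], dtype=float)

l = ratings.min(axis=0)                         # pessimistic bound
m = np.exp(np.log(ratings).mean(axis=0))        # geometric mean (modal value)
u = ratings.max(axis=0)                         # optimistic bound

defuzzified = (l + m + u) / 3.0                 # centroid of the triangle
threshold = 6.0                                 # hypothetical acceptance cut-off
for i, d in enumerate(defuzzified):
    verdict = "retain" if d >= threshold else "drop"
    print(f"indicator {i}: score {d:.2f} -> {verdict}")
```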
APA, Harvard, Vancouver, ISO, and other styles
46

Micheli, Andrea, and Enrico Scala. "Temporal Planning with Temporal Metric Trajectory Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7675–82. http://dx.doi.org/10.1609/aaai.v33i01.33017675.

Full text
Abstract:
In several industrial applications of planning, complex temporal metric trajectory constraints are needed to adequately model the problem at hand. For example, in production plants, items must be processed following a “recipe” of steps subject to precise timing constraints. Modeling such domains is very challenging in existing action-based languages due to the lack of sufficiently expressive trajectory constraints. We propose a novel temporal planning formalism allowing quantified temporal constraints over the execution timing of action instances. We build on top of instantaneous actions borrowed from classical planning and add expressive temporal constructs. The paper details the semantics of our new formalism and presents a solving technique grounded in classical, heuristic forward search planning. Our experiments prove the proposed framework superior to alternative state-of-the-art planning approaches on industrial benchmarks, and competitive with similar solving methods on well-known benchmarks taken from the planning competition.
APA, Harvard, Vancouver, ISO, and other styles
47

Li, Guo, Jianmin Gao, and Fumin Chen. "A novel approach for failure modes and effects analysis based on polychromatic sets." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 23, no. 2 (November 13, 2008): 119–29. http://dx.doi.org/10.1017/s089006040900002x.

Full text
Abstract:
Traditional failure modes and effects analysis (FMEA) methods lack sufficient semantics and structure to provide full traceability between the failure modes and the effects of the complex system. To overcome this limitation, this paper proposes a formal failure knowledge representation model combined with the structural decomposition of the complex system. The model defines the failure modes as the inherent properties of the physical entities at different hierarchical levels, and employs the individual color, unified color, and Boolean matrix of the polychromatic sets to represent the failure modes in terms of their interrelationships and their relations to the physical system. This method is a structure-based modeling technique that provides a simple, yet comprehensive framework to organize the failure modes and their causes and effects more systematically and completely. Using the iterative search process operated on the reasoning matrices, the end effects on the entire system can be achieved automatically, which allows for the consideration of both the single and multiple failures. An example is embedded in the description of the methodology for better understanding. Because of the powerful mathematical modeling capability of the polychromatic sets, the approach presented in this paper makes significant progress in FMEA formalization.
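The Boolean-matrix reasoning the abstract describes can be sketched as fixed-point propagation over a cause-effect matrix. The failure modes and relations below are invented for illustration, and the polychromatic-set colors are not modeled.

```python
import numpy as np

modes = ["seal leak", "pump cavitation", "flow loss", "overheat", "unit shutdown"]
# R[i, j] = 1 if mode i directly causes mode j (hypothetical relations)
R = np.array([[0, 1, 0, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0]])

def end_effects(initial, R):
    """Propagate active failure modes to a fixed point (handles multiple failures)."""
    active = np.array(initial, dtype=bool)
    while True:
        new = active | ((active.astype(int) @ R) > 0)   # one propagation step
        if np.array_equal(new, active):
            return new
        active = new

start = [True, False, False, False, False]              # single failure: seal leak
print([m for m, hit in zip(modes, end_effects(start, R)) if hit])
```

Setting several entries of `start` to True traces the combined end effects of multiple simultaneous failures, the multi-failure case the paper highlights.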
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Jieying, Jiří Kosinka, and Alexandru Telea. "Spline-Based Dense Medial Descriptors for Lossy Image Compression." Journal of Imaging 7, no. 8 (August 19, 2021): 153. http://dx.doi.org/10.3390/jimaging7080153.

Full text
Abstract:
Medial descriptors are of significant interest for image simplification, representation, manipulation, and compression. On the other hand, B-splines are well-known tools for specifying smooth curves in computer graphics and geometric design. In this paper, we integrate the two by modeling medial descriptors with stable and accurate B-splines for image compression. Representing medial descriptors with B-splines not only greatly improves compression but also provides an effective vector representation of raster images. A comprehensive evaluation shows that our Spline-based Dense Medial Descriptors (SDMD) method achieves much higher compression ratios than the well-known JPEG technique at similar or even better quality. We illustrate our approach with applications to super-resolution image generation and salient-feature-preserving image compression.
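The fitting step at the heart of the approach can be sketched with SciPy: approximate an ordered (here, synthetic) medial-axis branch with a smoothing cubic B-spline, so that a handful of control coefficients stand in for many skeleton pixels. Radii encoding, saliency pruning, and the actual compression pipeline are omitted.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical ordered medial-axis branch: a noisy arc
rng = np.random.default_rng(0)
t = np.linspace(0.0, np.pi, 60)
x = t
y = 0.4 * np.sin(t) + 0.01 * rng.normal(size=t.size)

tck, u = splprep([x, y], s=0.005)            # smoothing cubic B-spline
xs, ys = splev(u, tck)                       # evaluate at the data parameters
err = np.max(np.hypot(xs - x, ys - y))
print(f"{t.size} skeleton points -> {len(tck[1][0])} spline coefficients, "
      f"max fit error {err:.4f}")
```

The smoothing parameter `s` trades compactness (fewer coefficients) against fidelity, the same rate-distortion lever the paper tunes against JPEG.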
APA, Harvard, Vancouver, ISO, and other styles
49

Khan, Sumeer Ahmad, Yonis Gulzar, Sherzod Turaev, and Young Suet Peng. "A Modified HSIFT Descriptor for Medical Image Classification of Anatomy Objects." Symmetry 13, no. 11 (October 20, 2021): 1987. http://dx.doi.org/10.3390/sym13111987.

Full text
Abstract:
Modeling low-level features to high-level semantics in medical imaging is an important aspect in filtering anatomy objects. Bag of Visual Words (BOVW) representations have proven effective for modeling these low-level features into mid-level representations. Convolutional neural networks are learning systems that can automatically extract high-quality representations from raw images. However, their deployment in the medical field is still somewhat challenging due to the lack of training data. In this paper, learned features obtained by training convolutional neural networks are compared with our proposed hand-crafted HSIFT features. The HSIFT feature is a symmetric fusion of a Harris corner detector and the Scale-Invariant Feature Transform (SIFT) with a BOVW representation. Both the SIFT process and the classification technique are enhanced by adopting bagging with a surrogate split method. Quantitative evaluation shows that our proposed hand-crafted HSIFT feature outperforms the learned features from convolutional neural networks in discriminating anatomy image classes.
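A hedged sketch of the HSIFT idea as the abstract states it: detect Harris corners, compute SIFT descriptors at those corners, and build a Bag-of-Visual-Words histogram. Thresholds, keypoint sizes, vocabulary size, and the toy images are assumptions, and the bagging classifier is omitted.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def hsift_descriptors(gray):
    """SIFT descriptors computed at Harris corner locations."""
    resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(resp > 0.05 * resp.max())          # corner threshold
    kps = [cv2.KeyPoint(float(x), float(y), 8) for x, y in zip(xs, ys)][:300]
    _, desc = cv2.SIFT_create().compute(gray, kps)
    return desc

# Hypothetical tiny corpus: random images stand in for anatomy scans
rng = np.random.default_rng(0)
images = [(255 * rng.random((128, 128))).astype(np.uint8) for _ in range(4)]
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(
    np.vstack([hsift_descriptors(im) for im in images]))

def bovw_histogram(gray):
    """Normalized Bag-of-Visual-Words histogram for one image."""
    words = codebook.predict(hsift_descriptors(gray))
    return np.bincount(words, minlength=16) / len(words)

print(bovw_histogram(images[0]))
```

The resulting fixed-length histograms are what a downstream classifier (bagged decision trees in the paper) consumes in place of raw pixel data.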
APA, Harvard, Vancouver, ISO, and other styles
50

Semenenko, Nataliya N., Darya A. Mashukova, Marina J. Smelkovskaja, and Olesia A. Lazutkina. "Conceptualization of Optimistic Models for the World in Paremic Picture of the World." Journal of History Culture and Art Research 6, no. 5 (November 28, 2017): 19. http://dx.doi.org/10.7596/taksad.v6i5.1287.

Full text
Abstract:
The article offers a description of the linguistic expression of an ethnocultural stereotype in the assessment of an optimistic approach to understanding the role of the trials that destiny provides in human life. The optimistic outlook model is considered as an integrative linguo-cognitive area whose value dominants are directly connected with key points of the national sphere of concepts. The algorithm of the descriptive technique is presented through the example of cognitive and pragmatic modeling of semantics in Russian proverbs of the thematic groups "Destiny - Patience - Hope" and "Patience - Hope" from V. I. Dahl's collection "Proverbs of the Russian People". The area of paremic verbalization of the cognitive category "Optimism" is considered taking into account the polyconceptuality of the content of national aphorisms and the ambivalence of the assessments of the major ethnocultural stereotypes contained in them. The linguo-cognitive potential of paremias in representing the optimistic outlook model is determined by the aphoristic value of the paremias, the pragmatic recommendation they express, and a value-laden semantic core.
APA, Harvard, Vancouver, ISO, and other styles