To see other types of publications on this topic, follow the link: Sensor data semantic annotation.

Journal articles on the topic "Sensor data semantic annotation"


Consult the top 50 journal articles for your research on the topic "Sensor data semantic annotation".

Next to each source in the list of references there is an "Add to bibliography" button. Click on this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever this information is included in the metadata.

Browse journal articles from a wide range of disciplines and organize your bibliography correctly.

1

Sejdiu, Besmir, Florije Ismaili, and Lule Ahmedi. "Integration of Semantics Into Sensor Data for the IoT." International Journal on Semantic Web and Information Systems 16, no. 4 (October 2020): 1–25. http://dx.doi.org/10.4018/ijswis.2020100101.

Abstract:
The internet of things (IoT) as an evolving technology represents an active scientific research field in recognizing research challenges associated with its application in various domains, ranging from consumer convenience, smart energy, and resource saving to IoT enterprises. Sensors are crucial components of IoT that relay the collected data in the form of the data stream for further processing. Interoperability of various connected digital resources is a key challenge in IoT environments. The enrichment of raw sensor data with semantic annotations using concept definitions from ontologies enables more expressive data representation that supports knowledge discovery. In this paper, a systematic review of integration of semantics into sensor data for the IoT is provided. The conducted review is focused on analyzing the main solutions of adding semantic annotations to the sensor data, standards that enable all types of sensor data via the web, existing models of stream data annotation, and the IoT trend domains that use semantics.
2

Elsaleh, Tarek, Shirin Enshaeifar, Roonak Rezvani, Sahr Thomas Acton, Valentinas Janeiko, and Maria Bermudez-Edo. "IoT-Stream: A Lightweight Ontology for Internet of Things Data Streams and Its Use with Data Analytics and Event Detection Services." Sensors 20, no. 4 (February 11, 2020): 953. http://dx.doi.org/10.3390/s20040953.

Abstract:
With the proliferation of sensors and IoT technologies, stream data are increasingly stored and analysed, but rarely combined, due to the heterogeneity of sources and technologies. Semantics are increasingly used to share sensory data, but not so much for annotating stream data. Semantic models for stream annotation are scarce, as generally, semantics are heavy to process and not ideal for Internet of Things (IoT) environments, where the data are frequently updated. We present a light model to semantically annotate streams, IoT-Stream. It takes advantage of common knowledge sharing of the semantics, but keeping the inferences and queries simple. Furthermore, we present a system architecture to demonstrate the adoption of the semantic model, and provide examples of instantiation of the system for different use cases. The system architecture is based on commonly used architectures in the field of IoT, such as web services, microservices and middleware. Our system approach includes the semantic annotations that take place in the pipeline of IoT services and sensory data analytics. It includes modules needed to annotate, consume, and query data annotated with IoT-Stream. In addition to this, we present tools that could be used in conjunction with the IoT-Stream model and facilitate the use of semantics in IoT.
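As a rough illustration of the kind of lightweight stream annotation described above, the sketch below builds a tiny RDF graph for one stream observation with rdflib. The namespace IRI and the class and property names are placeholders chosen for the example, not the published IoT-Stream vocabulary.

```python
# Minimal sketch (not the authors' code): annotating one sensor observation
# with an IoT-Stream-style lightweight vocabulary using rdflib.
from datetime import datetime, timezone

from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

IOTS = Namespace("http://example.org/iot-stream#")   # assumed placeholder namespace
EX = Namespace("http://example.org/deployment/")

g = Graph()
g.bind("iots", IOTS)

stream = EX["stream/temperature-livingroom"]
obs = EX["obs/42"]

# Describe the stream and attach one observation to it.
g.add((stream, RDF.type, IOTS.IotStream))
g.add((stream, IOTS.generatedBy, EX["sensor/dht22-01"]))

g.add((obs, RDF.type, IOTS.StreamObservation))
g.add((obs, IOTS.belongsTo, stream))
g.add((obs, IOTS.hasValue, Literal(21.7, datatype=XSD.double)))
g.add((obs, IOTS.observedAt,
       Literal(datetime.now(timezone.utc).isoformat(), datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```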
3

Llaves, Alejandro, Oscar Corcho, Peter Taylor, and Kerry Taylor. "Enabling RDF Stream Processing for Sensor Data Management in the Environmental Domain." International Journal on Semantic Web and Information Systems 12, no. 4 (October 2016): 1–21. http://dx.doi.org/10.4018/ijswis.2016100101.

Abstract:
This paper presents a generic approach to integrate environmental sensor data efficiently, allowing the detection of relevant situations and events in near real-time through continuous querying. Data variety is addressed with the use of the Semantic Sensor Network ontology for observation data modelling, and semantic annotations for environmental phenomena. Data velocity is handled by distributing sensor data messaging and serving observations as RDF graphs on query demand. The stream processing engine presented in the paper, morph-streams++, provides adapters for different data formats and distributed processing of streams in a cluster. An evaluation of different configurations for parallelization and semantic annotation parameters proves that the described approach reduces the average latency of message processing in some cases.
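For a concrete picture of querying observations modelled with the W3C SSN/SOSA vocabulary, as in the work above, here is a minimal static sketch using rdflib and SPARQL. A real stream engine such as the one described would register this as a continuous query over a time window; the sensor IRIs and the 3.0 threshold are invented for illustration.

```python
# Illustrative sketch only: a threshold "event" query over SOSA observations.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/")

g = Graph()
for i, value in enumerate([2.1, 2.4, 3.9]):          # toy water-level readings
    obs = EX[f"obs/{i}"]
    g.add((obs, RDF.type, SOSA.Observation))
    g.add((obs, SOSA.madeBySensor, EX["sensor/level-1"]))
    g.add((obs, SOSA.hasSimpleResult, Literal(value, datatype=XSD.double)))

query = """
PREFIX sosa: <http://www.w3.org/ns/sosa/>
SELECT ?obs ?value WHERE {
    ?obs a sosa:Observation ;
         sosa:hasSimpleResult ?value .
    FILTER (?value > 3.0)            # alert threshold, chosen arbitrarily
}
"""
for row in g.query(query):
    print(f"alert: {row.obs} reported {row.value}")
```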
4

Xu, Hongsheng, and Huijuan Sun. "Application of Rough Concept Lattice Model in Construction of Ontology and Semantic Annotation in Semantic Web of Things." Scientific Programming 2022 (April 13, 2022): 1–12. http://dx.doi.org/10.1155/2022/7207372.

Abstract:
In order to solve the problem of interoperability in Internet of Things, the Semantic Web technology is introduced into the Internet of Things to form Semantic Web of Things. Ontology construction is the core of Semantic Web of Things. Firstly, this paper analyzes the shortcomings of ontology construction methods in the Semantic Web of Things. Then, this paper proposes construction of semantic ontology based on improved rough concept lattice, which provides theoretical basis for semantic annotation of the sensing data attributes. In addition, this paper describes the semantic annotation system for the Internet of Things based on semantic similarity of ontology. The system consists of three steps: ontology mapping integration module, information extraction module, and semantic annotation of sensing data. Finally, the experimental results show that this semantic annotation method effectively improves the flexibility of sensor information and data attributes and effectively enhances the expression ability of sensor information and the use value of data.
5

Abdel Hakim, Alaa E., and Wael Deabes. "Can People Really Do Nothing? Handling Annotation Gaps in ADL Sensor Data." Algorithms 12, no. 10 (October 17, 2019): 217. http://dx.doi.org/10.3390/a12100217.

Abstract:
In supervised Activities of Daily Living (ADL) recognition systems, annotating collected sensor readings is an essential, yet exhaustive, task. Readings are collected from activity-monitoring sensors in a 24/7 manner. The size of the produced dataset is so huge that it is almost impossible for a human annotator to give a certain label to every single instance in the dataset. This results in annotation gaps in the input data to the adopting learning system. The performance of the recognition system is negatively affected by these gaps. In this work, we propose and investigate three different paradigms to handle these gaps. In the first paradigm, the gaps are taken out by dropping all unlabeled readings. A single “Unknown” or “Do-Nothing” label is given to the unlabeled readings within the operation of the second paradigm. The last paradigm handles these gaps by giving every set of them a unique label identifying the encapsulating certain labels. Also, we propose a semantic preprocessing method of annotation gaps by constructing a hybrid combination of some of these paradigms for further performance improvement. The performance of the proposed three paradigms and their hybrid combination is evaluated using an ADL benchmark dataset containing more than 2.5 × 10⁶ sensor readings that had been collected over more than nine months. The evaluation results emphasize the performance contrast under the operation of each paradigm and support a specific gap handling approach for better performance.
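The three gap-handling paradigms summarised in the abstract above can be made concrete with a few lines of Python. This is one reading of the paradigms, not the authors' code; the toy label sequence and the gap-naming scheme are assumptions.

```python
# Sketch of the three gap-handling paradigms, with None marking an annotation gap.
readings = ["sleep", None, None, "cook", None, "eat", None, None]

# Paradigm 1: drop all unlabeled readings.
dropped = [r for r in readings if r is not None]

# Paradigm 2: map every gap to a single "Unknown"/"Do-Nothing" class.
single_label = [r if r is not None else "Unknown" for r in readings]

# Paradigm 3: give each gap a label derived from the certain labels that enclose it,
# so gaps between different activity pairs become distinct classes.
unique_gaps = []
for i, r in enumerate(readings):
    if r is not None:
        unique_gaps.append(r)
        continue
    prev = next((x for x in reversed(readings[:i]) if x is not None), "start")
    nxt = next((x for x in readings[i + 1:] if x is not None), "end")
    unique_gaps.append(f"gap:{prev}->{nxt}")

print(dropped)
print(single_label)
print(unique_gaps)
```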
6

Sejdiu, Besmir, Florije Ismaili, and Lule Ahmedi. "IoTSAS: An Integrated System for Real-Time Semantic Annotation and Interpretation of IoT Sensor Stream Data." Computers 10, no. 10 (October 11, 2021): 127. http://dx.doi.org/10.3390/computers10100127.

Abstract:
Sensors and other Internet of Things (IoT) technologies are increasingly finding application in various fields, such as air quality monitoring, weather alerts monitoring, water quality monitoring, healthcare monitoring, etc. IoT sensors continuously generate large volumes of observed stream data; therefore, processing requires a special approach. Extracting the contextual information essential for situational knowledge from sensor stream data is very difficult, especially when processing and interpretation of these data are required in real time. This paper focuses on processing and interpreting sensor stream data in real time by integrating different semantic annotations. In this context, a system named IoT Semantic Annotations System (IoTSAS) is developed. Furthermore, the performance of the IoTSAS System is presented by testing air quality and weather alerts monitoring IoT domains by extending the Open Geospatial Consortium (OGC) standards and the Sensor Observations Service (SOS) standards, respectively. The developed system provides information in real time to citizens about the health implications from air pollution and weather conditions, e.g., blizzard, flurry, etc.
7

Desimoni, Federico, Sergio Ilarri, Laura Po, Federica Rollo, and Raquel Trillo-Lado. "Semantic Traffic Sensor Data: The TRAFAIR Experience." Applied Sciences 10, no. 17 (August 25, 2020): 5882. http://dx.doi.org/10.3390/app10175882.

Abstract:
Modern cities face pressing problems with transportation systems including, but not limited to, traffic congestion, safety, health, and pollution. To tackle them, public administrations have implemented roadside infrastructures such as cameras and sensors to collect data about environmental and traffic conditions. In the case of traffic sensor data not only the real-time data are essential, but also historical values need to be preserved and published. When real-time and historical data of smart cities become available, everyone can join an evidence-based debate on the city’s future evolution. The TRAFAIR (Understanding Traffic Flows to Improve Air Quality) project seeks to understand how traffic affects urban air quality. The project develops a platform to provide real-time and predicted values on air quality in several cities in Europe, encompassing tasks such as the deployment of low-cost air quality sensors, data collection and integration, modeling and prediction, the publication of open data, and the development of applications for end-users and public administrations. This paper explicitly focuses on the modeling and semantic annotation of traffic data. We present the tools and techniques used in the project and validate our strategies for data modeling and its semantic enrichment over two cities: Modena (Italy) and Zaragoza (Spain). An experimental evaluation shows that our approach to publish Linked Data is effective.
8

Pacha, Shobharani, Suresh Ramalingam Murugan, and R. Sethukarasi. "Semantic annotation of summarized sensor data stream for effective query processing." Journal of Supercomputing 76, no. 6 (November 25, 2017): 4017–39. http://dx.doi.org/10.1007/s11227-017-2183-7.
9

Vedurmudi, Anupam Prasad, Julia Neumann, Maximilian Gruber, and Sascha Eichstädt. "Semantic Description of Quality of Data in Sensor Networks." Sensors 21, no. 19 (September 28, 2021): 6462. http://dx.doi.org/10.3390/s21196462.

Abstract:
The annotation of sensor data with semantic metadata is essential to the goals of automation and interoperability in the context of Industry 4.0. In this contribution, we outline a semantic description of quality of data in sensor networks in terms of indicators, metrics and interpretations. The concepts thus defined are consolidated into an ontology that describes quality of data metainformation in heterogeneous sensor networks and methods for the determination of corresponding quality of data dimensions are outlined. By incorporating support for sensor calibration models and measurement uncertainty via a previously derived ontology, a conformity with metrological requirements for sensor data is ensured. A quality description for a calibrated sensor generated using the resulting ontology is presented in the JSON-LD format using the battery level and calibration data as quality indicators. Finally, the general applicability of the model is demonstrated using a series of competency questions.
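To make the idea of a machine-readable quality-of-data description more tangible, the sketch below emits a small JSON-LD-style document with battery level and calibration as quality indicators, as the abstract above suggests. The context IRI and all term names are illustrative assumptions, not the ontology published by the authors.

```python
# Illustrative quality-of-data description for a calibrated sensor in JSON-LD style.
import json

quality_doc = {
    "@context": {"qoi": "http://example.org/quality-of-data#"},   # placeholder context
    "@id": "http://example.org/sensor/temp-007",
    "@type": "qoi:CalibratedSensor",
    "qoi:hasQualityIndicator": [
        {
            "@type": "qoi:BatteryLevel",
            "qoi:value": 0.82,                       # fraction of full charge
            "qoi:interpretation": "sufficient for >24 h of operation",
        },
        {
            "@type": "qoi:Calibration",
            "qoi:calibrationDate": "2021-03-15",
            "qoi:expandedUncertainty": 0.05,         # kelvin, coverage factor k = 2
        },
    ],
}

print(json.dumps(quality_doc, indent=2))
```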
10

Nadim, Ismail, Yassine El Ghayam, and Abdelalim Sadiq. "Semantic Annotation of Web of Things Using Entity Linking." International Journal of Business Analytics 7, no. 4 (October 2020): 1–13. http://dx.doi.org/10.4018/ijban.2020100101.

Abstract:
The web of things (WoT) improves syntactic interoperability between internet of things (IoT) devices by leveraging web standards. However, the lack of a unified WoT data model remains a challenge for the semantic interoperability. Fortunately, semantic web technologies are taking this challenge over by offering numerous semantic vocabularies like the semantic sensor networks (SSN) ontology. Although it enables the semantic interoperability between heterogeneous devices, the manual annotation hinders the scalability of the WoT. As a result, the automation of the semantic annotation of WoT devices becomes a prior issue for researchers. This paper proposes a method to improve the semi-automatic semantic annotation of web of things (WoT) using the entity linking task and the well-known ontologies, mainly the SSN.
11

Pacha, Shobharani, Suresh Ramalingam Murugan, and R. Sethukarasi. "Correction to: Semantic annotation of summarized sensor data stream for effective query processing." Journal of Supercomputing 76, no. 6 (January 5, 2018): 4040. http://dx.doi.org/10.1007/s11227-017-2212-6.
12

Lin, Szu-Yin, Jun-Bin Li, and Ching-Tzu Yu. "Dynamic Data Driven-based Automatic Clustering and Semantic Annotation for Internet of Things Sensor Data." Sensors and Materials 31, no. 6 (June 7, 2019): 1789. http://dx.doi.org/10.18494/sam.2019.2333.
13

Tylecek, Radim, and Robert Fisher. "Consistent Semantic Annotation of Outdoor Datasets via 2D/3D Label Transfer." Sensors 18, no. 7 (July 12, 2018): 2249. http://dx.doi.org/10.3390/s18072249.

Abstract:
The advance of scene understanding methods based on machine learning relies on the availability of large ground truth datasets, which are essential for their training and evaluation. Construction of such datasets with imagery from real sensor data however typically requires much manual annotation of semantic regions in the data, delivered by substantial human labour. To speed up this process, we propose a framework for semantic annotation of scenes captured by moving camera(s), e.g., mounted on a vehicle or robot. It makes use of an available 3D model of the traversed scene to project segmented 3D objects into each camera frame to obtain an initial annotation of the associated 2D image, which is followed by manual refinement by the user. The refined annotation can be transferred to the next consecutive frame using optical flow estimation. We have evaluated the efficiency of the proposed framework during the production of a labelled outdoor dataset. The analysis of annotation times shows that up to 43% less effort is required on average, and the consistency of the labelling is also improved.
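The core projection step behind this kind of 2D/3D label transfer can be sketched in a few lines of numpy: labelled 3D points are mapped through the camera model to pixel coordinates, where their labels seed the initial 2D annotation. The intrinsics, pose and points below are toy values; the paper's pipeline additionally refines labels manually and propagates them between frames with optical flow.

```python
# Minimal numpy sketch of projecting labelled 3D points into a camera frame.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],            # assumed pinhole intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])   # camera pose (world -> camera)

points = np.array([[0.1, 0.2, 3.0], [-0.5, 0.0, 4.0]])   # labelled 3D points
labels = ["tree", "path"]

cam = (R @ points.T).T + t                    # transform into the camera frame
uv = (K @ cam.T).T
uv = uv[:, :2] / uv[:, 2:3]                   # perspective divide -> pixel coordinates

for (u, v), label in zip(uv, labels):
    print(f"{label}: pixel ({u:.1f}, {v:.1f})")
```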
14

Vidal-Filho, Jarbas Nunes, Valéria Cesário Times, Jugurta Lisboa-Filho, and Chiara Renso. "Towards the Semantic Enrichment of Trajectories Using Spatial Data Infrastructures." ISPRS International Journal of Geo-Information 10, no. 12 (December 6, 2021): 825. http://dx.doi.org/10.3390/ijgi10120825.

Abstract:
The term Semantic Trajectories of Moving Objects (STMO) corresponds to a sequence of spatial-temporal points with associated semantic information (for example, annotations about locations visited by the user or types of transportation used). However, the growth of Big Data generated by users, such as data produced by social networks or collected by an electronic equipment with embedded sensors, causes the STMO to require services and standards for enabling data documentation and ensuring the quality of STMOs. Spatial Data Infrastructures (SDI), on the other hand, provide a shared interoperable and integrated environment for data documentation. The main challenge is how to lead traditional SDIs to evolve to an STMO document due to the lack of specific metadata standards and services for semantic annotation. This paper presents a new concept of SDI for STMO, named SDI4Trajectory, which supports the documentation of different types of STMO—holistic trajectories, for example. The SDI4Trajectory allows us to propose semi-automatic and manual semantic enrichment processes, which are efficient in supporting semantic annotations and STMO documentation as well. These processes are hardly found in traditional SDIs and have been developed through Web and semantic micro-services. To validate the SDI4Trajectory, we used a dataset collected by voluntary users through the MyTracks application for the following purposes: (i) comparing the semi-automatic and manual semantic enrichment processes in the SDI4Trajectory; (ii) investigating the viability of the documentation processes carried out by the SDI4Trajectory, which was able to document all the collected trajectories.
15

Amgad, Mohamed, Habiba Elfandy, Hagar Hussein, Lamees A. Atteya, Mai A. T. Elsebaie, Lamia S. Abo Elnasr, Rokia A. Sakr, et al. "Structured crowdsourcing enables convolutional segmentation of histology images." Bioinformatics 35, no. 18 (February 6, 2019): 3461–67. http://dx.doi.org/10.1093/bioinformatics/btz083.

Abstract:
Motivation: While deep-learning algorithms have demonstrated outstanding performance in semantic image segmentation tasks, large annotation datasets are needed to create accurate models. Annotation of histology images is challenging due to the effort and experience required to carefully delineate tissue structures, and difficulties related to sharing and markup of whole-slide images. Results: We recruited 25 participants, ranging in experience from senior pathologists to medical students, to delineate tissue regions in 151 breast cancer slides using the Digital Slide Archive. Inter-participant discordance was systematically evaluated, revealing low discordance for tumor and stroma, and higher discordance for more subjectively defined or rare tissue classes. Feedback provided by senior participants enabled the generation and curation of 20 000+ annotated tissue regions. Fully convolutional networks trained using these annotations were highly accurate (mean AUC=0.945), and the scale of annotation data provided notable improvements in image classification accuracy. Availability and Implementation: Dataset is freely available at: https://goo.gl/cNM4EL. Supplementary information: Supplementary data are available at Bioinformatics online.
16

Chen, Xi, Huajun Chen, Ningyu Zhang, Jue Huang, and Wen Zhang. "Large-Scale Real-Time Semantic Processing Framework for Internet of Things." International Journal of Distributed Sensor Networks 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/365372.

Abstract:
Nowadays, the advanced sensor technology with cloud computing and big data is generating large-scale heterogeneous and real-time IOT (Internet of Things) data. To make full use of the data, development and deploy of ubiquitous IOT-based applications in various aspects of our daily life are quite urgent. However, the characteristics of IOT sensor data, including heterogeneity, variety, volume, and real time, bring many challenges to effectively process the sensor data. The Semantic Web technologies are viewed as a key for the development of IOT. While most of the existing efforts are mainly focused on the modeling, annotation, and representation of IOT data, there has been little work focusing on the background processing of large-scale streaming IOT data. In the paper, we present a large-scale real-time semantic processing framework and implement an elastic distributed streaming engine for IOT applications. The proposed engine efficiently captures and models different scenarios for all kinds of IOT applications based on popular distributed computing platform SPARK. Based on the engine, a typical use case on home environment monitoring is given to illustrate the efficiency of our engine. The results show that our system can scale for large number of sensor streams with different types of IOT applications.
17

Vanden Hautte, Sander, Pieter Moens, Joachim Van Herwegen, Dieter De Paepe, Bram Steenwinckel, Stijn Verstichel, Femke Ongenae, and Sofie Van Hoecke. "A Dynamic Dashboarding Application for Fleet Monitoring Using Semantic Web of Things Technologies." Sensors 20, no. 4 (February 20, 2020): 1152. http://dx.doi.org/10.3390/s20041152.

Abstract:
In industry, dashboards are often used to monitor fleets of assets, such as trains, machines or buildings. In such industrial fleets, the vast amount of sensors evolves continuously, new sensor data exchange protocols and data formats are introduced, new visualization types may need to be introduced and existing dashboard visualizations may need to be updated in terms of displayed sensors. These requirements motivate the development of dynamic dashboarding applications. These, as opposed to fixed-structure dashboard applications, allow users to create visualizations at will and do not have hard-coded sensor bindings. The state-of-the-art in dynamic dashboarding does not cope well with the frequent additions and removals of sensors that must be monitored—these changes must still be configured in the implementation or at runtime by a user. Also, the user is presented with an overload of sensors, aggregations and visualizations to select from, which may sometimes even lead to the creation of dashboard widgets that do not make sense. In this paper, we present a dynamic dashboard that overcomes these problems. Sensors, visualizations and aggregations can be discovered automatically, since they are provided as RESTful Web Things on a Web Thing Model compliant gateway. The gateway also provides semantic annotations of the Web Things, describing what their abilities are. A semantic reasoner can derive visualization suggestions, given the Thing annotations, logic rules and a custom dashboard ontology. The resulting dashboarding application automatically presents the available sensors, visualizations and aggregations that can be used, without requiring sensor configuration, and assists the user in building dashboards that make sense. This way, the user can concentrate on interpreting the sensor data and detecting and solving operational problems early.
18

Messaoudi, Wassim, Mohamed Farah, and Imed Riadh Farah. "Fuzzy Spatio-Spectro-Temporal Ontology for Remote Sensing Image Annotation and Interpretation: Application to Natural Risks Assessment." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 27, no. 05 (October 2019): 815–40. http://dx.doi.org/10.1142/s0218488519500363.

Abstract:
This research deals with semantic interpretation of Remote Sensing Images (RSIs) using ontologies which are considered as one of the main challenging methods for modeling high-level knowledge, and reducing the semantic gap between low-level features and high-level semantics of an image. In this paper, we propose a new ontology which allows the annotation as well as the interpretation of RSI with respect to natural risks, while taking into account uncertainty of data, object dynamics in natural scenes, and specificities of sensors. In addition, using this ontology, we propose a methodology to (i) annotate the semantic content of RSI, and (ii) deduce the susceptibility of the land cover to natural phenomena such as erosion, floods, and fires, using case-based reasoning supported by the ontology. This work is tested using LANDSAT and SPOT images of the region of Kef which is situated in the north-west of Tunisia. Results are rather promising.
19

Behley, Jens, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Jürgen Gall, and Cyrill Stachniss. "Towards 3D LiDAR-based semantic scene understanding of 3D point cloud sequences: The SemanticKITTI Dataset." International Journal of Robotics Research 40, no. 8-9 (April 20, 2021): 959–67. http://dx.doi.org/10.1177/02783649211006735.

Abstract:
A holistic semantic scene understanding exploiting all available sensor modalities is a core capability to master self-driving in complex everyday traffic. To this end, we present the SemanticKITTI dataset that provides point-wise semantic annotations of Velodyne HDL-64E point clouds of the KITTI Odometry Benchmark. Together with the data, we also published three benchmark tasks for semantic scene understanding covering different aspects of semantic scene understanding: (1) semantic segmentation for point-wise classification using single or multiple point clouds as input; (2) semantic scene completion for predictive reasoning on the semantics and occluded regions; and (3) panoptic segmentation combining point-wise classification and assigning individual instance identities to separate objects of the same class. In this article, we provide details on our dataset showing an unprecedented number of fully annotated point cloud sequences, more information on our labeling process to efficiently annotate such a vast amount of point clouds, and lessons learned in this process. The dataset and resources are available at http://www.semantic-kitti.org .
20

Wu, Yufeng, Longfei Zhang, Gangyi Ding, Dapeng Yan, and Fuquan Zhang. "TSN: Performance Creative Choreography Based on Twin Sensor Network." Wireless Communications and Mobile Computing 2021 (March 26, 2021): 1–12. http://dx.doi.org/10.1155/2021/5532754.

Abstract:
The purpose of this paper is to improve the efficiency of performance creative choreography (PCC). Our research work shows that we can realize the model integration and data optimization for PCC in complex environments based on the combined architecture of sensor network (SN) and machine-learning algorithm (MLA). In order to explain the process and content of this research better, this paper designs a specific problem description framework for PCC, which mainly includes the following content: (1) a twin sensor network (TSN) architecture based on digital twin information interaction is proposed, which defines and describes the acquisition method, classification (creative data, rehearsal data, and live data), and temporal and spatial features of performance data. (2) Proposed a mobile computing method based on director semantic annotation (DSA) as the core computing module of TSN. (3) A spatial dynamic line (SDL) model and a creative activation mechanism (CAM) based on DSA are proposed to realize fast and efficient PCC of dance with the TSN architecture. Experimental results show that the TSN architecture proposed in this article is reasonable and effective. The SDL model achieved significantly better performance with little time increase and improved the computability and aesthetics of PCC. New research ideas are proposed to solve the computational problem of PCC in complex environments.
21

Xie, Yuan, Jisheng Zhao, Baohua Qiang, Luzhong Mi, Chenghua Tang, and Longge Li. "Attention Mechanism-Based CNN-LSTM Model for Wind Turbine Fault Prediction Using SSN Ontology Annotation." Wireless Communications and Mobile Computing 2021 (March 27, 2021): 1–12. http://dx.doi.org/10.1155/2021/6627588.

Abstract:
The traditional model for wind turbine fault prediction is not sensitive to the time sequence data and cannot mine the deep connection between the time series data, resulting in poor generalization ability of the model. To solve this problem, this paper proposes an attention mechanism-based CNN-LSTM model. The semantic sensor data annotated by SSN ontology is used as input data. Firstly, CNN extracts features to get high-level feature representation from input data. Then, the latent time sequence connection of features in different time periods is learned by LSTM. Finally, the output of LSTM is input into the attention mechanism module to obtain more fault-related target information, which improves the efficiency, accuracy, and generalization ability of the model. In addition, in the data preprocessing stage, the random forest algorithm analyzes the feature correlation degree of the data to get the features of high correlation degree with the wind turbine fault, which further improves the efficiency, accuracy, and generalization ability of the model. The model is validated on the icing fault dataset of No. 21 wind turbine and the yaw dataset of No. 4 wind turbine. The experimental results show that the proposed model has better efficiency, accuracy, and generalization ability than RNN, LSTM, and XGBoost.
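A minimal PyTorch sketch of the architecture family named in the abstract above (CNN feature extraction, LSTM over time, attention over the LSTM outputs) is given below. Layer sizes, sequence length and the binary fault output are assumptions for illustration, not the paper's configuration.

```python
# Sketch of an attention-based CNN-LSTM classifier for multivariate sensor sequences.
import torch
import torch.nn as nn


class AttnCNNLSTM(nn.Module):
    def __init__(self, n_features: int = 28, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)       # scores each time step
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features) -> Conv1d expects (batch, channels, time)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(h)                              # (batch, time, hidden)
        weights = torch.softmax(self.attn(out), dim=1)     # (batch, time, 1)
        context = (weights * out).sum(dim=1)               # attention-weighted summary
        return self.head(context)


model = AttnCNNLSTM()
scores = model(torch.randn(4, 120, 28))    # 4 sequences of 120 time steps, 28 features
print(scores.shape)                        # torch.Size([4, 2])
```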
22

Song, Young Chol, and Henry Kautz. "A Testbed for Learning by Demonstration from Natural Language and RGB-Depth Video." Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 2457–58. http://dx.doi.org/10.1609/aaai.v26i1.8430.

Abstract:
We are developing a testbed for learning by demonstration combining spoken language and sensor data in a natural real-world environment. Microsoft Kinect RGB-Depth cameras allow us to infer high-level visual features, such as the relative position of objects in space, with greater precision and less training than required by traditional systems. Speech is recognized and parsed using a “deep” parsing system, so that language features are available at the word, syntactic, and semantic levels. We collected an initial data set of 10 episodes of 7 individuals demonstrating how to “make tea”, and created a “gold standard” hand annotation of the actions performed in each. Finally, we are constructing “baseline” HMM-based activity recognition models using the visual and language features, in order to be ready to evaluate the performance of our future work on deeper and more structured models.
23

Laupheimer, D., M. H. Shams Eddin, and N. Haala. "ON THE ASSOCIATION OF LIDAR POINT CLOUDS AND TEXTURED MESHES FOR MULTI-MODAL SEMANTIC SEGMENTATION." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 509–16. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-509-2020.

Abstract:
Abstract. The semantic segmentation of the huge amount of acquired 3D data has become an important task in recent years. We propose a novel association mechanism that enables information transfer between two 3D representations: point clouds and meshes. The association mechanism can be used in a two-fold manner: (i) feature transfer to stabilize semantic segmentation of one representation with features from the other representation and (ii) label transfer to achieve the semantic annotation of both representations. We claim that point clouds are an intermediate product whereas meshes are a final user product that jointly provides geometrical and textural information. For this reason, we opt for semantic mesh segmentation in the first place. We apply an off-the-shelf PointNet++ to a textured urban triangle mesh as generated from LiDAR and oblique imagery. For each face within a mesh, a feature vector is computed and optionally extended by inherent LiDAR features as provided by the sensor (e.g. intensity). The feature vector extension is accomplished with the proposed association mechanism. By these means, we leverage inherent features from both data representations for the semantic mesh segmentation (multi-modality). We achieve an overall accuracy of 86.40% on the face-level on a dedicated test mesh. Neglecting LiDAR-inherent features in the per-face feature vectors decreases mean intersection over union by ∼2%. Leveraging our association mechanism, we transfer predicted mesh labels to the LiDAR point cloud at a stroke. To this end, we semantically segment the point cloud by implicit usage of geometric and textural mesh features. The semantic point cloud segmentation achieves an overall accuracy close to 84% on the point-level for both feature vector compositions.
24

Caltagirone, Luca, Mauro Bellone, Lennart Svensson, Mattias Wahde, and Raivo Sell. "Lidar–Camera Semi-Supervised Learning for Semantic Segmentation." Sensors 21, no. 14 (July 14, 2021): 4813. http://dx.doi.org/10.3390/s21144813.

Abstract:
In this work, we investigated two issues: (1) How the fusion of lidar and camera data can improve semantic segmentation performance compared with the individual sensor modalities in a supervised learning context; and (2) How fusion can also be leveraged for semi-supervised learning in order to further improve performance and to adapt to new domains without requiring any additional labelled data. A comparative study was carried out by providing an experimental evaluation on networks trained in different setups using various scenarios from sunny days to rainy night scenes. The networks were tested for challenging, and less common, scenarios where cameras or lidars individually would not provide a reliable prediction. Our results suggest that semi-supervised learning and fusion techniques increase the overall performance of the network in challenging scenarios using less data annotations.
25

Amri, Emna, Pierre Dardouillet, Alexandre Benoit, Hermann Courteille, Philippe Bolon, Dominique Dubucq, and Anthony Credoz. "Offshore Oil Slick Detection: From Photo-Interpreter to Explainable Multi-Modal Deep Learning Models Using SAR Images and Contextual Data." Remote Sensing 14, no. 15 (July 25, 2022): 3565. http://dx.doi.org/10.3390/rs14153565.

Abstract:
Ocean surface monitoring, emphasizing oil slick detection, has become essential due to its importance for oil exploration and ecosystem risk prevention. Automation is now mandatory since the manual annotation process of oil by photo-interpreters is time-consuming and cannot process the data collected continuously by the available spaceborne sensors. Studies on automatic detection methods mainly focus on Synthetic Aperture Radar (SAR) data exclusively to detect anthropogenic (spills) or natural (seeps) oil slicks, all using limited datasets. The main goal is to maximize the detection of oil slicks of both natures while being robust to other phenomena that generate false alarms, called “lookalikes”. To this end, this paper presents the automation of offshore oil slick detection on an extensive database of real and recent oil slick monitoring scenarios, including both types of slicks. It relies on slick annotations performed by expert photo-interpreters on Sentinel-1 SAR data over four years and three areas worldwide. In addition, contextual data such as wind estimates and infrastructure positions are included in the database as they are relevant data for oil detection. The contributions of this paper are: (i) A comparative study of deep learning approaches using SAR data. A semantic and instance segmentation analysis via FC-DenseNet and Mask R-CNN, respectively. (ii) A proposal for Fuse-FC-DenseNet, an extension of FC-DenseNet that fuses heterogeneous SAR and wind speed data for enhanced oil slick segmentation. (iii) An improved set of evaluation metrics dedicated to the task that considers contextual information. (iv) A visual explanation of deep learning predictions based on the SHapley Additive exPlanation (SHAP) method adapted to semantic segmentation. The proposed approach yields a detection performance of up to 94% of good detection with a false alarm reduction ranging from 14% to 34% compared to mono-modal models. These results provide new solutions to improve the detection of natural and anthropogenic oil slicks by providing tools that allow photo-interpreters to work more efficiently on a wide range of marine surfaces to be monitored worldwide. Such a tool will accelerate the oil slick detection task to keep up with the continuous sensor acquisition. This upstream work will allow us to study its possible integration into an industrial production pipeline. In addition, a prediction explanation is proposed, which can be integrated as a step to identify the appropriate methodology for presenting the predictions to the experts and understanding the obtained predictions and their sensitivity to contextual information. Thus it helps them to optimize their way of working.
26

Chang, Chin-Chun, Naomi A. Ubina, Shyi-Chyi Cheng, Hsun-Yu Lan, Kuan-Chu Chen, and Chin-Chao Huang. "A Two-Mode Underwater Smart Sensor Object for Precision Aquaculture Based on AIoT Technology." Sensors 22, no. 19 (October 7, 2022): 7603. http://dx.doi.org/10.3390/s22197603.

Abstract:
Monitoring the status of culture fish is an essential task for precision aquaculture using a smart underwater imaging device as a non-intrusive way of sensing to monitor freely swimming fish even in turbid or low-ambient-light waters. This paper developed a two-mode underwater surveillance camera system consisting of a sonar imaging device and a stereo camera. The sonar imaging device has two cloud-based Artificial Intelligence (AI) functions that estimate the quantity and the distribution of the length and weight of fish in a crowded fish school. Because sonar images can be noisy and fish instances of an overcrowded fish school are often overlapped, machine learning technologies, such as Mask R-CNN, Gaussian mixture models, convolutional neural networks, and semantic segmentation networks were employed to address the difficulty in the analysis of fish in sonar images. Furthermore, the sonar and stereo RGB images were aligned in the 3D space, offering an additional AI function for fish annotation based on RGB images. The proposed two-mode surveillance camera was tested to collect data from aquaculture tanks and off-shore net cages using a cloud-based AIoT system. The accuracy of the proposed AI functions based on human-annotated fish metric data sets were tested to verify the feasibility and suitability of the smart camera for the estimation of remote underwater fish metrics.
27

Xue, Xingsi, and Junfeng Chen. "Optimizing Sensor Ontology Alignment through Compact co-Firefly Algorithm." Sensors 20, no. 7 (April 6, 2020): 2056. http://dx.doi.org/10.3390/s20072056.

Abstract:
Semantic Sensor Web (SSW) links the semantic web technique with the sensor network, which utilizes sensor ontology to describe sensor information. Annotating sensor data with different sensor ontologies can be of help to implement different sensor systems’ inter-operability, which requires that the sensor ontologies themselves are inter-operable. Therefore, it is necessary to match the sensor ontologies by establishing the meaningful links between semantically related sensor information. Since the Swarm Intelligent Algorithm (SIA) represents a good methodology for addressing the ontology matching problem, we investigate a popular SIA, that is, the Firefly Algorithm (FA), to optimize the ontology alignment. To save the memory consumption and better trade off the algorithm’s exploitation and exploration, in this work, we propose a general-purpose ontology matching technique based on Compact co-Firefly Algorithm (CcFA), which combines the compact encoding mechanism with the co-Evolutionary mechanism. Our proposal utilizes the Gray code to encode the solutions, two compact operators to respectively implement the exploiting strategy and exploring strategy, and two Probability Vectors (PVs) to represent the swarms that respectively focuses on the exploitation and exploration. Through the communications between two swarms in each generation, CcFA is able to efficiently improve the searching efficiency when addressing the sensor ontology matching problem. The experiment utilizes the Conference track and three pairs of real sensor ontologies to test our proposal’s performance. The statistical results show that CcFA based ontology matching technique can effectively match the sensor ontologies and other general ontologies in the domain of organizing conferences.
28

Alruqimi, Mohammed, and Noura Aknin. "Enabling social WEB for IoT inducing ontologies from social tagging." International Journal of Informatics and Communication Technology (IJ-ICT) 8, no. 1 (April 1, 2019): 19. http://dx.doi.org/10.11591/ijict.v8i1.pp19-24.

Abstract:
Semantic domain ontologies are increasingly seen as the key for enabling interoperability across heterogeneous systems and sensor-based applications. The ontologies deployed in these systems and applications are developed by restricted groups of domain experts and not by semantic web experts. Lately, folksonomies are increasingly exploited in developing ontologies. The “collective intelligence”, which emerges from collaborative tagging, can be seen as an alternative for the current effort at semantic web ontologies. However, the uncontrolled nature of social tagging systems leads to many kinds of noisy annotations, such as misspellings, imprecision and ambiguity. Thus, the construction of formal ontologies from social tagging data remains a real challenge. Most research has focused on how to discover relatedness between tags rather than producing ontologies, much less domain ontologies. This paper proposed an algorithm that utilises tags in social tagging systems to automatically generate up-to-date specific-domain ontologies. The evaluation of the algorithm, using a dataset extracted from BibSonomy, demonstrated that the algorithm could effectively learn a domain terminology and identify more meaningful semantic information for the domain terminology. Furthermore, the proposed algorithm introduced a simple and effective method for disambiguating tags.
29

Jodeiri Rad, M., and C. Armenakis. "ACTIVE REINFORCEMENT LEARNING FOR THE SEMANTIC SEGMENTATION OF IMAGES CAPTURED BY MOBILE SENSORS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 593–99. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-593-2022.

Abstract:
Abstract. In recent years, various Convolutional Neural Networks (CNN) have been used to achieve acceptable performance on semantic segmentation tasks. However, these supervised learning methods require an extensive amount of annotated training data to perform well. Additionally, the model would need to be trained on the same kind of dataset to generalize well for other tasks. Further, commonly real world datasets are usually highly imbalanced. This problem leads to poor performance in the detection of underrepresented classes, which could be the most critical for some applications. The annotation task is time-consuming human labour that creates an obstacle to utilizing supervised learning methods on vision tasks. In this work, we experiment with implementing a reinforced active learning method with a weighted performance metric to reduce human labour while achieving competitive results. A deep Q-network (DQN) is used to find the optimal policy, which would be choosing the most informative regions of the image to be labelled from the unlabelled set. Then, the neural network would be trained with newly labelled data, and its performance would be evaluated. A weighted Intersection over Union (IoU) is used to calculate the rewards for the DQN network. By using weighted IoU, we target to bring more attention to underrepresented classes.
30

Vitali, Francesco, Paola Zinno, Emily Schifano, Agnese Gori, Ana Costa, Carlotta De Filippo, Barbara Koroušić Seljak, Panče Panov, Chiara Devirgiliis, and Duccio Cavalieri. "Semantics of Dairy Fermented Foods: A Microbiologist’s Perspective." Foods 11, no. 13 (June 29, 2022): 1939. http://dx.doi.org/10.3390/foods11131939.

Abstract:
Food ontologies are acquiring a central role in human nutrition, providing a standardized terminology for a proper description of intervention and observational trials. In addition to bioactive molecules, several fermented foods, particularly dairy products, provide the host with live microorganisms, thus carrying potential “genetic/functional” nutrients. To date, a proper ontology to structure and formalize the concepts used to describe fermented foods is lacking. Here we describe a semantic representation of concepts revolving around what consuming fermented foods entails, both from a technological and health point of view, focusing actions on kefir and Parmigiano Reggiano, as representatives of fresh and ripened dairy products. We included concepts related to the connection of specific microbial taxa to the dairy fermentation process, demonstrating the potential of ontologies to formalize the various gene pathways involved in raw ingredient transformation, connect them to resulting metabolites, and finally to their consequences on the fermented product, including technological, health and sensory aspects. Our work marks an improvement in the ambition of creating a harmonized semantic model for integrating different aspects of modern nutritional science. Such a model, besides formalizing a multifaceted knowledge, will be pivotal for a rich annotation of data in public repositories, as a prerequisite to generalized meta-analysis.
31

Wittenberg, Thomas, Michaela Benz, Andreas Foltyn, Ralf Hackner, Julia Hetzel, Veit Wiesmann, and Thomas Eixelberger. "Acquisition of Semantics for AI-based Applications in Medical Technologies." Current Directions in Biomedical Engineering 7, no. 2 (October 1, 2021): 515–18. http://dx.doi.org/10.1515/cdbme-2021-2131.

Abstract:
For the development, training, and validation of AI-based procedures, such as the analysis of clinical data, prediction of critical events, or planning of healthcare procedures, a lot of data is needed. In addition to this data of any origin (image data, bio-signals, health records, machine states, …) adequate supplementary information about the meaning encoded in the data is required. With this additional information - the semantic or knowledge - a tight relation between the raw data and the human-understandable concepts from the real world can be established. Nevertheless, as the amount of data needed to develop robust AI-based methods is strongly increasing, the assessment and acquisition of the related knowledge becomes more and more challenging. Within this work, an overview of currently available concepts of knowledge acquisition are described and evaluated. Four main groups of knowledge acquisition related to AI-based technologies have been identified. For image data mainly iconic annotation methods are used, where experienced users mark or draw depicted entities in the images and label them using predefined sets of classifications. Similarly, bio-signals are manually labelled, whereby important events along the timeline are marked. If no sufficient data is available, augmentation and simulation techniques are applied yielding data and semantics at the same time. In applications, where expensive sensors are replaced by low-cost devices, the high-grade data can be used as semantics. Finally, classic rule-based approaches are used, where human factual and procedural knowledge about the data and its context is translated into machine-understandable procedures. All these methods are depending on the involvement of human experts. To reduce this, more intelligent and hybrid approaches are needed, shifting the focus from the-human-in-the-loop to the-machine-in-the-loop.
32

Yuval, Matan, Iñigo Alonso, Gal Eyal, Dan Tchernov, Yossi Loya, Ana C. Murillo, and Tali Treibitz. "Repeatable Semantic Reef-Mapping through Photogrammetry and Label-Augmentation." Remote Sensing 13, no. 4 (February 11, 2021): 659. http://dx.doi.org/10.3390/rs13040659.

Abstract:
In an endeavor to study natural systems at multiple spatial and taxonomic resolutions, there is an urgent need for automated, high-throughput frameworks that can handle plethora of information. The coalescence of remote-sensing, computer-vision, and deep-learning elicits a new era in ecological research. However, in complex systems, such as marine-benthic habitats, key ecological processes still remain enigmatic due to the lack of cross-scale automated approaches (mms to kms) for community structure analysis. We address this gap by working towards scalable and comprehensive photogrammetric surveys, tackling the profound challenges of full semantic segmentation and 3D grid definition. Full semantic segmentation (where every pixel is classified) is extremely labour-intensive and difficult to achieve using manual labeling. We propose using label-augmentation, i.e., propagation of sparse manual labels, to accelerate the task of full segmentation of photomosaics. Photomosaics are synthetic images generated from a projected point-of-view of a 3D model. In the lack of navigation sensors (e.g., a diver-held camera), it is difficult to repeatably determine the slope-angle of a 3D map. We show this is especially important in complex topographical settings, prevalent in coral-reefs. Specifically, we evaluate our approach on benthic habitats, in three different environments in the challenging underwater domain. Our approach for label-augmentation shows human-level accuracy in full segmentation of photomosaics using labeling as sparse as 0.1%, evaluated on several ecological measures. Moreover, we found that grid definition using a leveler improves the consistency in community-metrics obtained due to occlusions and topology (angle and distance between objects), and that we were able to standardise the 3D transformation with two percent error in size measurements. By significantly easing the annotation process for full segmentation and standardizing the 3D grid definition we present a semantic mapping methodology enabling change-detection, which is practical, swift, and cost-effective. Our workflow enables repeatable surveys without permanent markers and specialized mapping gear, useful for research and monitoring, and our code is available online. Additionally, we release the Benthos data-set, fully manually labeled photomosaics from three oceanic environments with over 4500 segmented objects useful for research in computer-vision and marine ecology.
33

Pan, Erting, Yong Ma, Fan Fan, Xiaoguang Mei, and Jun Huang. "Hyperspectral Image Classification across Different Datasets: A Generalization to Unseen Categories." Remote Sensing 13, no. 9 (April 26, 2021): 1672. http://dx.doi.org/10.3390/rs13091672.

Abstract:
With the rapid developments of hyperspectral imaging, the cost of collecting hyperspectral data has been lower, while the demand for reliable and detailed hyperspectral annotations has been much more substantial. However, limited by the difficulties of labelling annotations, most existing hyperspectral image (HSI) classification methods are trained and evaluated on a single hyperspectral data cube. It brings two significant challenges. On the one hand, many algorithms have reached a nearly perfect classification accuracy, but their trained models are hard to generalize to other datasets. On the other hand, since different hyperspectral datasets are usually not collected in the same scene, different datasets will contain different classes. To address these issues, in this paper, we propose a new paradigm for HSI classification, which is training and evaluating separately across different hyperspectral datasets. It is of great help to labelling hyperspectral data. However, it has rarely been studied in the hyperspectral community. In this work, we utilize a three-phase scheme, including feature embedding, feature mapping, and label reasoning. More specifically, we select a pair of datasets acquired by the same hyperspectral sensor, and the classifier learns from one dataset and then evaluated it on the other. Inspired by the latest advances in zero-shot learning, we introduce label semantic representation to establish associations between seen categories in the training set and unseen categories in the testing set. Extensive experiments on two pairs of datasets with different comparative methods have shown the effectiveness and potential of zero-shot learning in HSI classification.
34

Morris, Robert A., Lei Dou, James Hanken, Maureen Kelly, David B. Lowery, Bertram Ludäscher, James A. Macklin, and Paul J. Morris. "Semantic Annotation of Mutable Data." PLoS ONE 8, no. 11 (November 4, 2013): e76093. http://dx.doi.org/10.1371/journal.pone.0076093.
35

Zakharova, O. V. "Main Aspects of Big Data Semantic Annotation." PROBLEMS IN PROGRAMMING, no. 4 (December 2020): 022–33. http://dx.doi.org/10.15407/pp2020.04.022.

Abstract:
Semantic annotations, due to their structure, are an integral part of the effective solution of big data problems. However, the problem of defining semantic annotations is not trivial. Manual annotation is not acceptable for big data due to their size and heterogeneity, as well as the complexity and cost of the annotation process, and the automatic annotation task for big data has not yet been solved. So, resolving the problem of semantic annotation requires modern mixed approaches, which would be based on and use the existing theoretical apparatus, namely methods and models of machine learning, statistical learning, working with content of different types and formats, natural language processing, etc. It also should provide solutions for the main annotation tasks: discovering and extracting entities and relationships from content of any type and defining semantic annotations based on existing sources of knowledge (dictionaries, ontologies, etc.). The obtained annotations must be accurate and provide a further opportunity to solve application problems with the annotated data. Note that big data contents are very different, and as a result, their properties that should be annotated are very different too. This requires different metadata to describe the data, which leads to a large number of different metadata standards for data of different types or formats. However, to effectively solve the annotation problem, it is necessary to have a generalized description of the metadata types, and we have to consider metadata specificity within this description. The purpose of this work is to define a general classification of metadata and determine common aspects and approaches to big data semantic annotation.
Styles APA, Harvard, Vancouver, ISO, etc.
36

Dugas, M. « Missing Semantic Annotation in Databases ». Methods of Information in Medicine 53, no 06 (2014) : 516–17. http://dx.doi.org/10.3414/me14-04-0002.

Texte intégral
Résumé :
Data integration is a well-known grand challenge in information systems. It is highly relevant in medicine because of the multitude of patient data sources. Semantic annotation of data items regarding concept and value domain, based on comprehensive terminologies, can facilitate data integration and migration. It should therefore be implemented in databases from the very beginning.
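A minimal sketch of what such an annotation could look like for a single data item, here written as a Python dictionary; the LOINC concept code and UCUM unit are standard identifiers given purely as examples and are not taken from the article.

# Hypothetical data-dictionary entry carrying semantic annotation from the start.
item = {
    "name": "systolic_bp",
    "concept": "LOINC 8480-6",                                   # concept: systolic blood pressure
    "value_domain": {"datatype": "integer", "unit": "mm[Hg]"},   # value domain with UCUM unit
}
print(item)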
Styles APA, Harvard, Vancouver, ISO, etc.
37

Wu, Xiao Ying, Yun Juan Liang, Li Li et Li Juan Ma. « Semantic Fusion of Image Annotation ». Advanced Materials Research 268-270 (juillet 2011) : 1386–89. http://dx.doi.org/10.4028/www.scientific.net/amr.268-270.1386.

Texte intégral
Résumé :
This paper improves image annotation with semantic meaning and names the new algorithm semantic fusion of image annotation. Given an image to be labelled, the training data set, the word set, and a collection of image regions and other information are used to build a probability model that estimates the joint probability of words and the given image regions. These probability values, combined with a keyword correlation table that integrates lexical semantics, are used to extract the most representative keywords as the semantic annotation result. The algorithm can effectively exploit large-scale training data with rich annotations, achieving better recall and precision than existing automatic image annotation methods, and is validated on the Corel data set.
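The fusion step can be illustrated with a small numerical sketch: region-based word probabilities are rescaled by a keyword correlation table and the top-ranked words are kept as the annotation. The numbers below are invented, and this is not the paper's exact model.

import numpy as np

words = ["sky", "sea", "tree"]
p_word_given_regions = np.array([0.5, 0.3, 0.2])   # assumed output of the probability model
correlation = np.array([                            # assumed keyword correlation table
    [1.0, 0.6, 0.2],
    [0.6, 1.0, 0.1],
    [0.2, 0.1, 1.0],
])

fused = correlation @ p_word_given_regions          # words supported by correlated words gain weight
fused /= fused.sum()
top = [words[i] for i in np.argsort(fused)[::-1][:2]]
print(top)                                          # keywords kept as the annotation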
Styles APA, Harvard, Vancouver, ISO, etc.
38

Kim, Byung-Gon, et Sung-Kyun Oh. « Content based data search using semantic annotation ». Journal of Digital Contents Society 12, no 4 (31 décembre 2011) : 429–36. http://dx.doi.org/10.9728/dcs.2011.12.4.429.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
39

Yordanova, Kristina, et Frank Krüger. « Creating and Exploring Semantic Annotation for Behaviour Analysis ». Sensors 18, no 9 (23 août 2018) : 2778. http://dx.doi.org/10.3390/s18092778.

Texte intégral
Résumé :
Providing ground truth is essential for activity recognition and behaviour analysis, as it is needed for providing training data in supervised learning methods, for providing context information for knowledge-based methods, and for quantifying recognition performance. Semantic annotation extends simple symbolic labelling by assigning semantic meaning to the label, enabling further reasoning. In this paper, we present a novel approach to semantic annotation by means of plan operators. We provide a step-by-step description of the workflow for manually creating the ground truth annotation. To validate our approach, we create a semantic annotation of the Carnegie Mellon University (CMU) grand challenge dataset, which is often cited but, due to missing and incomplete annotation, almost never used. We show that it is possible to derive hidden properties, behavioural routines, and changes in initial and goal conditions in the annotated dataset. We evaluate the quality of the annotation by calculating the interrater reliability between two annotators who labelled the dataset. The results show very good agreement (Cohen's κ of 0.8) between the annotators. The produced annotation and the semantic models are publicly available, in order to enable further usage of the CMU grand challenge dataset.
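The interrater reliability reported above is Cohen's κ; a small self-contained Python sketch of the standard computation on made-up label sequences (this is the generic formula, not the paper's tooling):

from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length sequences of categorical labels."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                  # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

ann1 = ["take", "pour", "stir", "take", "stir"]   # hypothetical annotator 1
ann2 = ["take", "pour", "stir", "pour", "stir"]   # hypothetical annotator 2
print(round(cohens_kappa(ann1, ann2), 3))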
Styles APA, Harvard, Vancouver, ISO, etc.
40

Feng, Lin, Chang-You Xu, Bo Jin, Feng Chen et Zhi-Yuan Yin. « Underlying Semantic Annotation Method for Human Motion Capture Data ». Information Technology Journal 10, no 10 (15 septembre 2011) : 1957–63. http://dx.doi.org/10.3923/itj.2011.1957.1963.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
41

Malik, Kaleem Razzaq, Muhammad Asif Habib, Shehzad Khalid, Mudassar Ahmad, Mai Alfawair, Awais Ahmad et Gwanggil Jeon. « A generic methodology for geo-related data semantic annotation ». Concurrency and Computation : Practice and Experience 30, no 15 (4 mai 2018) : e4495. http://dx.doi.org/10.1002/cpe.4495.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
42

Novák, Václav. « Semantic Network Manual Annotation and its Evaluation ». Prague Bulletin of Mathematical Linguistics 90, no 1 (1 décembre 2008) : 69–82. http://dx.doi.org/10.2478/v10108-009-0008-4.

Texte intégral
Résumé :
The present contribution is a brief extract of (Novák, 2008). The Prague Dependency Treebank (PDT) is a valuable resource of linguistic information annotated on several layers. These layers range from the morphemic to the deep layer and should contain all the linguistic information about the text. The natural extension is to add a semantic layer suitable as a knowledge base for tasks like question answering, information extraction, etc. In this paper I set up criteria for this representation, explore possible formalisms for the task, and discuss their properties. One of them, Multilayered Extended Semantic Networks (MultiNet), is chosen for further investigation. Its properties are described and an annotation process is set up. I discuss some practical modifications of MultiNet for the purpose of manual annotation. MultiNet elements are compared to the elements of the deep linguistic layer of PDT. The tools and problems of the annotation process are presented and initial annotation data are evaluated.
Styles APA, Harvard, Vancouver, ISO, etc.
43

Zhu, Songhao, Zhiwei Liang et Xiaoyuan Jing. « Video Retrieval via Learning Collaborative Semantic Distance ». International Journal of Pattern Recognition and Artificial Intelligence 25, no 04 (juin 2011) : 475–90. http://dx.doi.org/10.1142/s0218001411008944.

Texte intégral
Résumé :
Graph-based semi-supervised learning approaches have proven effective and efficient in addressing the scarcity of labeled data in many real-world application areas, such as video annotation. However, the pairwise similarity metric, a significant factor in existing approaches, has not been fully investigated: these graph-based semi-supervised approaches estimate the pairwise similarity between samples mainly according to the spatial properties of video data, while the temporal property, an essential characteristic of video data, is not embedded into the pairwise similarity measure. Accordingly, a novel framework for video annotation, called Joint Spatio-Temporal Correlation Learning (JSTCL), is proposed in this paper. The framework is characterized by simultaneously taking into account the spatial and temporal properties of video data to achieve more accurate pairwise similarity values. We apply the proposed framework to video annotation and report superior performance compared to key existing approaches on the benchmark TRECVID data set.
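A rough sketch of the core idea, a pairwise similarity that multiplies a spatial (visual) kernel with a temporal kernel; the features, timestamps, and bandwidths below are invented, and the actual JSTCL formulation is more involved.

import numpy as np

def similarity(feat_i, feat_j, t_i, t_j, sigma_f=1.0, sigma_t=5.0):
    """Combined spatio-temporal similarity between two video samples."""
    spatial = np.exp(-np.linalg.norm(feat_i - feat_j) ** 2 / (2 * sigma_f ** 2))
    temporal = np.exp(-(t_i - t_j) ** 2 / (2 * sigma_t ** 2))
    return spatial * temporal

rng = np.random.default_rng(1)
feats = rng.normal(size=(4, 8))        # shot-level visual features (toy values)
times = np.array([0, 1, 20, 21])       # shot timestamps in seconds (toy values)
W = np.array([[similarity(feats[i], feats[j], times[i], times[j])
               for j in range(4)] for i in range(4)])
print(np.round(W, 3))                  # graph weights that would drive label propagation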
Styles APA, Harvard, Vancouver, ISO, etc.
44

Ali et Chong. « Semantic Mediation Model to Promote Improved Data Sharing Using Representation Learning in Heterogeneous Healthcare Service Environments ». Applied Sciences 9, no 19 (5 octobre 2019) : 4175. http://dx.doi.org/10.3390/app9194175.

Texte intégral
Résumé :
Interoperability has become a major challenge for the development of integrated healthcare applications, mainly because data are collected, processed, and managed using heterogeneous protocols, different data formats, and diverse technologies. Moreover, interoperability among healthcare applications has been limited by the lack of mutually agreed standards. This article proposes a semantic mediation model for interoperability provision in heterogeneous healthcare service environments. To enhance semantic mediation, the Web of Objects (WoO) framework is used to support abstraction and aggregation of healthcare concepts using virtual objects and composite virtual objects with ontologies. In addition, semantic annotation of healthcare data is achieved with a simplified annotation algorithm, and the alignment of diverse data models is supported by a deep representation learning method. Semantic annotation and alignment provide a common understanding of the data and cohesive integration, respectively. The semantic mediation model is backed by a target ontology catalog and a standard vocabulary. Healthcare data are modeled using the standard Resource Description Framework (RDF), whose triple structure describes healthcare concepts in a unified way. We demonstrate the semantic mediation process in an experimental setting and provide details on the utilization of the proposed model.
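A minimal sketch of expressing a healthcare observation as RDF triples with the rdflib library; the namespace and property names are assumptions for illustration, not the vocabulary used in the article.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/health#")   # assumed namespace
g = Graph()
obs = EX["observation1"]
g.add((obs, RDF.type, EX.HeartRateObservation))
g.add((obs, EX.hasValue, Literal(72)))
g.add((obs, EX.hasUnit, Literal("beats/min")))
print(g.serialize(format="turtle"))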
Styles APA, Harvard, Vancouver, ISO, etc.
45

Liu, Yongmei, Tanakrit Wongwitit et Linsen Yu. « Automatic Image Annotation Based on Scene Analysis ». International Journal of Image and Graphics 14, no 03 (juillet 2014) : 1450012. http://dx.doi.org/10.1142/s0219467814500120.

Texte intégral
Résumé :
Automatic image annotation is an important and challenging task for image analysis and understanding, such as content-based image retrieval (CBIR). The relationship between keywords and visual features is complicated by the semantic gap. We present an approach to automatic image annotation based on scene analysis; constrained by scene semantics, the correlation between keywords and visual features becomes simpler and clearer. Our model has two stages. The first is the training stage, which groups the training image data set into semantic scenes using extracted semantic features, and into visual scenes constructed from the pairwise distances between the visual features of training images computed with the Earth mover's distance (EMD). Each pair of semantic and visual scenes is then combined, and a Gaussian mixture model (GMM) is fitted for every scene. The second stage annotates the test image data set with keywords. Using the visual features provided by Duygulu, experimental results show that our model outperforms the probabilistic latent semantic analysis and GMM (PLSA&GMM) model on the Corel5K database.
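The scene-conditioned modelling can be sketched with scikit-learn: one Gaussian mixture is fitted per scene and a test image is scored against each scene model, after which annotation keywords would be drawn from the best-matching scene's vocabulary. The features here are random stand-ins, not the Duygulu features used in the paper.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
scene_features = {                                   # toy per-scene training features
    "beach": rng.normal(0.0, 1.0, size=(100, 16)),
    "forest": rng.normal(2.0, 1.0, size=(100, 16)),
}
models = {scene: GaussianMixture(n_components=2, random_state=0).fit(X)
          for scene, X in scene_features.items()}

test = rng.normal(1.8, 1.0, size=(1, 16))            # toy features of a test image
best_scene = max(models, key=lambda s: models[s].score(test))
print(best_scene)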
Styles APA, Harvard, Vancouver, ISO, etc.
46

Saji Chacko, Jaison, et Tulasi B. « Semantic image annotation using convolutional neural network and wordnet ontology ». International Journal of Engineering & Technology 7, no 2.27 (2 août 2018) : 56. http://dx.doi.org/10.14419/ijet.v7i2.27.9886.

Texte intégral
Résumé :
Images are a major source of content on the web. The proliferation of mobile phones and digital cameras has led to a huge amount of non-textual data, mostly images, being generated. Accurate annotation is critical for efficient image search and retrieval. Semantic image annotation refers to adding meaningful metadata to an image, which can be used to infer additional knowledge from it and enables users to perform complex queries and retrieve accurate image results. This paper proposes an image annotation technique that uses deep learning and semantic labeling. A convolutional neural network is used to classify images, and the predicted class labels are mapped to semantic concepts. The results show that combining semantic class labeling with image classification can help polish the results and find common concepts and themes.
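The semantic-labelling step can be sketched with NLTK's WordNet interface: a class label predicted by the (omitted) CNN is mapped to a synset and its hypernyms, which serve as broader annotation concepts. The predicted label is hypothetical, and the snippet assumes the NLTK WordNet corpus has been downloaded.

from nltk.corpus import wordnet as wn   # requires: pip install nltk; nltk.download('wordnet')

predicted_label = "tabby"               # hypothetical CNN output for an image
synset = wn.synsets(predicted_label)[0]
hypernyms = [h.name() for h in synset.hypernyms()]
print(synset.name(), hypernyms)         # broader WordNet concepts usable as extra annotations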
Styles APA, Harvard, Vancouver, ISO, etc.
47

Zhou, Yongxiu, Honghui Wang, Ronghao Yang, Guangle Yao, Qiang Xu et Xiaojuan Zhang. « A Novel Weakly Supervised Remote Sensing Landslide Semantic Segmentation Method : Combing CAM and cycleGAN Algorithms ». Remote Sensing 14, no 15 (29 juillet 2022) : 3650. http://dx.doi.org/10.3390/rs14153650.

Texte intégral
Résumé :
With the development of deep learning, more and more deep learning algorithms are being applied to remote sensing image classification, detection, and semantic segmentation. Deep-learning-based landslide semantic segmentation of remote sensing images mainly uses supervised learning, whose accuracy depends on a large amount of training data and high-quality data annotation. At this stage, high-quality data annotation often requires significant human effort, so the high cost of annotating remote sensing landslide images greatly restricts the development of landslide semantic segmentation algorithms. To address the high labeling cost of supervised landslide semantic segmentation, we propose a weakly supervised remote sensing landslide semantic segmentation method combining class activation maps (CAMs) and a cycle generative adversarial network (cycleGAN). In this method, image-level annotations replace pixel-level annotations as the training data. First, the CAM method is used to determine the approximate position of the landslide area. Then, the cycleGAN method is used to generate a fake image without a landslide, and the difference with the real image yields an accurate segmentation of the landslide area, realizing pixel-level segmentation of landslides in the remote sensing image. We evaluated the proposed method with mean intersection-over-union (mIoU) and compared it with a CAM-based method, whose mIoU was 0.157; our method obtains a better result of 0.237 on the same test dataset. Furthermore, a comparative experiment with a supervised U-Net network reached an mIoU of 0.408. The experimental results show that it is feasible to perform landslide semantic segmentation in remote sensing images using weakly supervised learning, which can greatly reduce the data annotation workload.
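The evaluation metric used above, mean intersection-over-union, is straightforward to compute; a minimal sketch for two-class masks on toy arrays (not the paper's evaluation code):

import numpy as np

def mean_iou(pred, gt, num_classes=2):
    """Mean IoU over classes that appear in the prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 1], [1, 1]])   # toy predicted landslide mask
gt   = np.array([[0, 1], [0, 1]])   # toy ground-truth mask
print(mean_iou(pred, gt))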
Styles APA, Harvard, Vancouver, ISO, etc.
48

Rani, P. Shobha, R. M. Suresh et R. Sethukarasi. « Multi-level semantic annotation and unified data integration using semantic web ontology in big data processing ». Cluster Computing 22, S5 (21 août 2017) : 10401–13. http://dx.doi.org/10.1007/s10586-017-1029-7.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
49

Pado, S., et M. Lapata. « Cross-lingual Annotation Projection for Semantic Roles ». Journal of Artificial Intelligence Research 36 (17 novembre 2009) : 307–40. http://dx.doi.org/10.1613/jair.2863.

Texte intégral
Résumé :
This article considers the task of automatically inducing role-semantic annotations in the FrameNet paradigm for new languages. We propose a general framework that is based on annotation projection, phrased as a graph optimization problem. It is relatively inexpensive and has the potential to reduce the human effort involved in creating role-semantic resources. Within this framework, we present projection models that exploit lexical and syntactic information. We provide an experimental evaluation on an English-German parallel corpus which demonstrates the feasibility of inducing high-precision German semantic role annotation both for manually and automatically annotated English data.
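A toy illustration of the projection idea, pushing English role labels through a word alignment onto the German side; the paper formulates projection as a graph optimization problem, whereas the alignment and sentences here are invented for a minimal sketch.

english = ["Peter", "opened", "the", "door"]
german = ["Peter", "öffnete", "die", "Tür"]
alignment = {0: 0, 1: 1, 2: 2, 3: 3}        # English index -> German index (assumed, one-to-one)
roles_en = {0: "Agent", 3: "Theme"}         # FrameNet-style roles on the English side

roles_de = {alignment[i]: role for i, role in roles_en.items() if i in alignment}
print([(german[i], role) for i, role in roles_de.items()])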
Styles APA, Harvard, Vancouver, ISO, etc.
50

Fernández, N., J. A. Fisteus, D. Fuentes, L. Sánchez et V. Luque. « A Wikipedia-Based Framework for Collaborative Semantic Annotation ». International Journal on Artificial Intelligence Tools 20, no 05 (octobre 2011) : 847–86. http://dx.doi.org/10.1142/s0218213011000413.

Texte intégral
Résumé :
The semantic web aims at automating web data processing tasks that nowadays only humans are able to do. To make this vision a reality, the information on web resources should be described in a computer-meaningful way, in a process known as semantic annotation. In this paper, a manual, collaborative semantic annotation framework is described. It is designed to take advantage of the benefits of manual annotation systems (such as the possibility of annotating formats that are difficult to annotate automatically) while addressing some of their limitations (reducing the burden on non-expert annotators). The framework is inspired by two principles: using Wikipedia as a facade for a formal ontology, and integrating the semantic annotation task with common user actions such as web search. The tools in the framework have been implemented, and empirical results obtained in experiments carried out with these tools are reported.
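The "Wikipedia as a facade" principle can be sketched in a few lines: a Wikipedia article title chosen by the user is turned into a DBpedia resource IRI that stands for the underlying formal concept. The annotated resource URL is invented, and this is only an illustration of the idea, not the framework's implementation.

def wikipedia_to_dbpedia(title):
    """Map a Wikipedia article title to the corresponding DBpedia resource IRI."""
    return "http://dbpedia.org/resource/" + title.strip().replace(" ", "_")

annotation = {
    "resource": "http://example.org/doc42",              # assumed annotated web resource
    "concept": wikipedia_to_dbpedia("Semantic Web"),      # formal concept behind the Wikipedia page
}
print(annotation)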
Styles APA, Harvard, Vancouver, ISO, etc.