
Journal articles on the topic 'IF. Information transfer: protocols, formats, techniques'



Consult the top 50 journal articles for your research on the topic 'IF. Information transfer: protocols, formats, techniques.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Manral, Nisha. "Secure Data Transfer Using Image Steganography." International Journal for Research in Applied Science and Engineering Technology 9, no. VIII (August 10, 2021): 175–80. http://dx.doi.org/10.22214/ijraset.2021.37322.

Abstract:
Steganography is the art of hiding the fact that communication is taking place, by hiding information in other information. Many different carrier file formats can be used, but digital images are the most popular because of their frequency on the Internet. A large variety of steganographic techniques exists for hiding secret information in images; some are more complex than others, and all have their respective strong and weak points. Different applications have different requirements of the steganography technique used. For example, some applications may require absolute invisibility of the secret information, while others require a larger secret message to be hidden. This paper gives an overview of image steganography, its uses, and its techniques. It also attempts to identify the requirements of a good steganographic algorithm and briefly reflects on which steganographic techniques are more suitable for which applications.
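As a minimal illustration of the LSB (least-significant-bit) family of techniques such surveys cover, the sketch below hides bytes in a flat list of 8-bit pixel values; it is a toy over raw pixel lists, not any specific image format or any particular scheme from the paper.

```python
def embed_lsb(pixels, message):
    """Hide message bytes in the least significant bit of successive pixels.

    Each used pixel changes by at most 1 grey level, which is what makes
    LSB embedding visually imperceptible in a natural image.
    """
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for this message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(pixels, n_bytes):
    """Recover n_bytes of hidden data from the pixel LSBs."""
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    )
```

Round-tripping a short message through a synthetic "image" recovers it exactly while perturbing each carrier pixel by at most one grey level, which is the invisibility/capacity trade-off the abstract describes.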
2

Afsari, Kereshmeh, Charles Eastman, and Dennis Shelden. "Building Information Modeling data interoperability for Cloud-based collaboration: Limitations and opportunities." International Journal of Architectural Computing 15, no. 3 (September 2017): 187–202. http://dx.doi.org/10.1177/1478077117731174.

Abstract:
Collaboration within the Building Information Modeling process is mainly based on the manual transfer of document files, in either vendor-specific formats or the neutral Industry Foundation Classes format. However, now that the web enables Cloud-based Building Information Modeling services, there is an opportunity to exchange data with web technologies. Alternative data sharing solutions include the federation of Building Information Modeling models and an interchange hub for real-time data exchange. These solutions face several challenges, are vendor locked, and integrate Building Information Modeling applications into a third, new system. The main objective of this article is to investigate current limitations as well as opportunities of Cloud interoperability, in order to outline a framework for loosely coupled, network-based Building Information Modeling data interoperability. This study explains that Cloud-based Building Information Modeling data exchange needs to deploy the major components of Cloud interoperability, such as Cloud application programming interfaces, data transfer protocols, data formats, and standardization, to redefine Building Information Modeling data flow in Cloud-based applications and to reshape the collaboration process.
3

Sirurmath, Srilakshmi U., and Deepashree Devaraj. "Secure Cloud Storage Techniques: A Review." Journal of University of Shanghai for Science and Technology 23, no. 07 (August 1, 2021): 1388–95. http://dx.doi.org/10.51201/jusst/21/07243.

Abstract:
Cloud technology has seen an exponential rise in adoption across applications. Cloud users with limited storage might transfer their information to remote systems. In return for monetary compensation, these servers provide access to their clients' data. Cloud storage protocols verify the integrity of this data, which is hosted on the cloud. Broadly, there are two types of data: static and dynamic. While many efficient protocols already exist for static data, much research is being undertaken to build a secure cloud storage system for dynamic data. This paper analyzes existing and proposed cloud storage protocols for both static and dynamic data. Important performance parameters are identified, and the chosen methods are compared in order to contrast the efficiency of the techniques.
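Static-data integrity protocols of the kind such reviews compare generally follow a challenge–response pattern: the client keeps small verification material locally and later audits the server without re-downloading the blob. A bare-bones hash-based sketch with client-side precomputed challenges (an illustration of the pattern, not any specific reviewed protocol):

```python
import hashlib
import os

def precompute_challenges(data, n=10):
    """Client, before upload: keep (nonce, expected-digest) pairs locally,
    so integrity can later be audited without retaining the data itself."""
    pairs = []
    for _ in range(n):
        nonce = os.urandom(16)
        pairs.append((nonce, hashlib.sha256(nonce + data).digest()))
    return pairs

def prove_possession(stored_data, nonce):
    """Server: answer an audit challenge over the blob it claims to store."""
    return hashlib.sha256(nonce + stored_data).digest()

def audit(pairs, prover):
    """Client: spend one precomputed challenge; True iff the proof matches."""
    nonce, expected = pairs.pop()
    return prover(nonce) == expected
```

The obvious limitation, which motivates the more elaborate protocols the paper surveys, is that each precomputed challenge can be used only once and none survive an update to the data, which is exactly why dynamic-data schemes are harder.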
4

Wang, Jian, and Alan E. Willner. "Review of Robust Data Exchange Using Optical Nonlinearities." International Journal of Optics 2012 (2012): 1–25. http://dx.doi.org/10.1155/2012/575429.

Abstract:
Data exchange, namely bidirectional information swapping, provides enhanced flexibility compared to unidirectional information transfer. To keep pace with the rapid development of high-speed, large-capacity optical communications with emerging multiplexing/demultiplexing techniques and advanced modulation formats, a laudable goal would be to achieve data exchange in different degrees of freedom (wavelength, time, polarization), for different modulation formats (OOK, DPSK, DQPSK, pol-muxed), and at different granularities (entire data, groups of bits, tributary channels). Optical nonlinearities are potentially suitable candidates to enable data exchange in the wavelength, time, and polarization domains. In this paper, we review our recent work towards robust data exchange exploiting miscellaneous optical nonlinearities, including the use of cSFG/DFG in a PPLN waveguide for time- (groups of bits) and channel-selective data exchange and tributary channel exchange between two WDM+OTDM signals, nondegenerate FWM in an HNLF for phase-transparent data exchange (DPSK, DQPSK), bidirectional degenerate FWM in an HNLF for multi-channel data exchange, and Kerr-induced nonlinear polarization rotation in an HNLF for tributary channel exchange of a pol-muxed DPSK OTDM signal. The demonstrated data exchanges in different degrees of freedom, for different modulation formats, and at different granularities open the door to alternative approaches for achieving superior network performance.
5

Alaçam, Sema, Ilker Karadag, and Orkan Zeynel Güzelci. "Reciprocal style and information transfer between historical Istanbul Pervititch Maps and satellite views using machine learning." Estoa 11, no. 22 (July 25, 2022): 71–81. http://dx.doi.org/10.18537/est.v011.n022.a06.

Abstract:
Historical maps contain significant data on the cultural, social, and urban character of cities. However, most historical maps utilize specific notation methods that differ from those commonly used today and converting these maps to more recent formats can be highly labor-intensive. This study is intended to demonstrate how a machine learning (ML) technique can be used to transform old maps of Istanbul into spatial data that simulates modern satellite views (SVs) through a reciprocal map conversion framework. With this aim, the Istanbul Pervititch Maps (IPMs) made by Jacques Pervititch in 1922-1945 and current SVs were used to test and evaluate the proposed framework. The study consists of a style and information transfer in two stages: (i) from IPMs to SVs, and (ii) from SVs to IPMs using CycleGAN (a type of generative adversarial network). The initial results indicate that the proposed framework can transfer attributes such as green areas, construction techniques/materials, and labels/tags.
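The cycle-consistency idea at the heart of CycleGAN can be stated compactly: mapping a sample to the other domain and back should reproduce it. A toy scalar version of that loss term follows; the real models are convolutional networks over map and satellite images, and the function names here are purely illustrative.

```python
def cycle_consistency_loss(G, F, samples):
    """Mean absolute reconstruction error ||F(G(x)) - x|| over a batch.

    G maps domain A -> B (e.g. Pervititch map -> satellite view) and
    F maps B -> A; CycleGAN penalizes this in both directions during
    training, alongside the usual adversarial losses.
    """
    return sum(abs(F(G(x)) - x) for x in samples) / len(samples)
```

With an exact inverse pair the loss vanishes; any mismatch between the two mappings shows up as a positive penalty, which is the training signal that keeps the unpaired translation grounded.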
6

Afanasiev, Oleg. "Tourism infographics: potential for educational process." Universities for Tourism and Service Association Bulletin 9, no. 4 (December 1, 2015): 55–63. http://dx.doi.org/10.12737/14584.

Abstract:
The article is devoted to the problem of using tourism infographics in the educational process of training specialists in the sphere of tourist services. The object of study is a special form of information visualization: tourism infographics. The subject of research is the classification of tourism infographics and the possibilities of using them as a special visualization format for educational information in the process of training specialists in the sphere of tourism. Tourism infographics refers to the theory and practice of visualizing information related to tourism resources, destinations, and service in tourism and hospitality. Modern science lacks both a settled concept of tourism infographics and a comprehensive synthesis of publications on the pedagogical functions of infographics. Infographics as a modern genre and reporting format is a product of the global information space, the 'InfoSphere'. Its main function is the transfer of large volumes of information through visualization in a single, specially organized, compact image, using design techniques. In training specialists in the field of 'Tourism', infographics have become one of the most important methods of informational influence on students. Infographics possess a number of important didactic qualities that allow a special visual expertise to be formed. The article describes the classification of tourism infographics by format (media, information, and narrative formats) and the key Internet resources that make it possible to bring finished visualizations of tourist subjects into the learning process. The article also lists and describes the most important tools students can use in practice for the design and construction of tourism infographics.
The article states the pedagogical effect of using infographic materials in the educational process of training specialists in the sphere of service and tourism. The conclusion notes the positive impact that visualizing tourist information by means of infographics has on students' academic learning.
7

Langnickel, Lisa, Kilian Krockauer, Mischa Uebachs, Sebastian Schaaf, Sumit Madan, Thomas Klockgether, and Juliane Fluck. "Information Extraction from German Clinical Care Documents in Context of Alzheimer’s Disease." Applied Sciences 11, no. 22 (November 13, 2021): 10717. http://dx.doi.org/10.3390/app112210717.

Abstract:
Dementia affects approximately 50 million people in the world today, the majority suffering from Alzheimer’s disease (AD). The availability of long-term patient data is one of the most important prerequisites for a better understanding of diseases. Worldwide, many prospective, longitudinal cohort studies have been initiated to understand AD. However, this approach takes years to enroll and follow up with a substantial number of patients, resulting in a current lack of data. This raises the question of whether clinical routine datasets could be utilized to extend collected registry data. It is, therefore, necessary to assess what kind of information is available in memory clinic routine databases. We did exactly this based on the example of the University Hospital Bonn. Whereas a number of data items are available in machine readable formats, additional valuable information is stored in textual documents. The extraction of information from such documents is only applicable via text mining methods. Therefore, we set up modular, rule-based text mining workflows requiring minimal sets of training data. The system achieves F1-scores over 95% for the most relevant classes, i.e., memory disturbances from medical reports and quantitative scores from semi-structured neuropsychological test protocols. Thus, we created a machine-readable core dataset for over 8000 patient visits over a ten-year period.
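The workflow's two ingredients, hand-written extraction rules and F1-based evaluation, fit in a few lines. A sketch with a hypothetical score pattern follows; the paper's actual rules target German clinical documents and are considerably more elaborate than this single regular expression.

```python
import re

# Hypothetical rule: pull quantitative scores like "MMSE: 24/30" out of
# semi-structured neuropsychological test protocols.
SCORE_RULE = re.compile(
    r"(?P<test>[A-Z]{3,6})\s*[:=]\s*(?P<score>\d+)\s*/\s*(?P<max>\d+)"
)

def extract_scores(text):
    """Apply the rule and return (test name, score, maximum) triples."""
    return [(m["test"], int(m["score"]), int(m["max"]))
            for m in SCORE_RULE.finditer(text)]

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall, the metric the paper reports."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```

The appeal of such rule-based workflows, as the abstract notes, is that they need minimal training data: the "training" is refining the pattern against a handful of annotated documents until precision and recall are acceptable.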
8

Rubí, Jesús Noel Sárez, and Paulo Roberto de Lira Gondim. "Interoperable Internet of Medical Things platform for e-Health applications." International Journal of Distributed Sensor Networks 16, no. 1 (January 2020): 155014771988959. http://dx.doi.org/10.1177/1550147719889591.

Abstract:
The development of information and telecommunication technologies has given rise to new platforms for e-Health. However, some difficulties have been detected since each manufacturer implements its communication protocols and defines their data formats. A semantic incongruence is observed between platforms since no common healthcare domain vocabulary is shared between manufacturers and stakeholders. Despite the existence of standards for semantic and platform interoperability (e.g. openEHR for healthcare, Semantic Sensor Network for Internet of Medical Things platforms, and machine-to-machine standards), no approach has combined them for granting interoperability or considered the whole integration of legacy Electronic Health Record Systems currently used worldwide. Moreover, the heterogeneity in the large volume of health data generated by Internet of Medical Things platforms must be attenuated for the proper application of big data processing techniques. This article proposes the joint use of openEHR and Semantic Sensor Network semantics for the achievement of interoperability at the semantic level and use of a machine-to-machine architecture for the definition of an interoperable Internet of Medical Things platform.
9

Zhang, Ying, Chaopeng Li, Na Chen, Shaowen Liu, Liming Du, Zhuxiao Wang, and Miaomiao Ma. "Semantic Web and Geospatial Unique Features Based Geospatial Data Integration." International Journal on Semantic Web and Information Systems 12, no. 1 (January 2016): 1–22. http://dx.doi.org/10.4018/ijswis.2016010101.

Abstract:
Since large amounts of geospatial data are produced by various sources and stored in incompatible formats, geospatial data integration is difficult because of the shortage of semantics. Although standardised data formats and data access protocols, such as the Web Feature Service (WFS), can give end-users access to heterogeneous data stored in different formats from various sources, integration is still time-consuming and ineffective due to the lack of semantics. To solve this problem, a prototype to implement geospatial data integration is proposed by addressing the following four problems: geospatial data retrieving, modeling, linking, and integrating. First, we provide a uniform integration paradigm for users to retrieve geospatial data. Then, we align the retrieved geospatial data in the modeling process to eliminate heterogeneity with the help of Karma. Our main contribution focuses on addressing the third problem. Previous work has defined sets of semantic rules for performing the linking process. However, geospatial data has some specific geospatial relationships, which are significant for linking but cannot be handled by Semantic Web techniques directly. We take advantage of such unique features of geospatial data to implement the linking process. In addition, previous work runs into a complication when the geospatial data sources are in different languages. In contrast, our proposed linking algorithms are endowed with a translation function, which can save the cost of translating among geospatial sources in different languages. Finally, the geospatial data is integrated by eliminating data redundancy and combining the complementary properties from the linked records. We mainly adopt four kinds of geospatial data sources, namely OpenStreetMap (OSM), Wikimapia, USGS, and EPA, to evaluate the performance of the proposed approach.
The experimental results illustrate that the proposed linking method achieves high performance in generating the matched candidate record pairs in terms of Reduction Ratio (RR), Pairs Completeness (PC), Pairs Quality (PQ), and F-score. The integration results show that each data source gains substantial Complementary Completeness (CC) and Increased Completeness (IC).
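The record-linkage quality measures named in the evaluation reduce to simple ratios over pair counts. A sketch of the standard definitions, with the F-score taken as the harmonic mean of PC and PQ (one common convention; the paper may combine them differently):

```python
def blocking_metrics(n_candidates, n_total_pairs, n_true_in_candidates, n_true_total):
    """Reduction Ratio, Pairs Completeness, Pairs Quality, and their F-score."""
    rr = 1.0 - n_candidates / n_total_pairs   # fraction of comparisons avoided
    pc = n_true_in_candidates / n_true_total  # recall of true matches retained
    pq = n_true_in_candidates / n_candidates  # precision of the candidate set
    f = 2 * pc * pq / (pc + pq) if pc + pq else 0.0
    return rr, pc, pq, f
```

For instance, a linker that narrows 10,000 possible pairs down to 100 candidates while keeping 45 of 50 true matches scores RR 0.99, PC 0.90, PQ 0.45, which is the trade-off these four numbers are designed to expose.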
10

Bhattacharya, Supriyo, and Xingcheng Lin. "Recent Advances in Computational Protocols Addressing Intrinsically Disordered Proteins." Biomolecules 9, no. 4 (April 11, 2019): 146. http://dx.doi.org/10.3390/biom9040146.

Abstract:
Intrinsically disordered proteins (IDP) are abundant in the human genome and have recently emerged as major therapeutic targets for various diseases. Unlike traditional proteins that adopt a definitive structure, IDPs in free solution are disordered and exist as an ensemble of conformations. This enables the IDPs to signal through multiple signaling pathways and serve as scaffolds for multi-protein complexes. The challenge in studying IDPs experimentally stems from their disordered nature. Nuclear magnetic resonance (NMR), circular dichroism, small angle X-ray scattering, and single molecule Förster resonance energy transfer (FRET) can give the local structural information and overall dimension of IDPs, but seldom provide a unified picture of the whole protein. To understand the conformational dynamics of IDPs and how their structural ensembles recognize multiple binding partners and small molecule inhibitors, knowledge-based and physics-based sampling techniques are utilized in-silico, guided by experimental structural data. However, efficient sampling of the IDP conformational ensemble requires traversing the numerous degrees of freedom in the IDP energy landscape, as well as force-fields that accurately model the protein and solvent interactions. In this review, we have provided an overview of the current state of computational methods for studying IDP structure and dynamics and discussed the major challenges faced in this field.
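The "physics-based sampling" such reviews survey is, at its core, Monte Carlo moves accepted by the Metropolis criterion against an energy function. A one-dimensional toy follows, where a harmonic "energy landscape" stands in for a real IDP force field; a production sampler would run the same loop over thousands of torsional degrees of freedom, which is exactly the cost the review highlights.

```python
import math
import random

def metropolis(energy, x0=0.0, n_steps=5000, step=0.5, kT=1.0, seed=42):
    """Sample a 1-D Boltzmann distribution exp(-E(x)/kT) by Metropolis Monte Carlo."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        trial = x + rng.uniform(-step, step)
        d_e = energy(trial) - energy(x)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if d_e <= 0 or rng.random() < math.exp(-d_e / kT):
            x = trial
        samples.append(x)
    return samples
```

Run on a harmonic well, the chain concentrates its samples around the minimum rather than returning a single structure, mirroring how an IDP is characterized by a conformational ensemble instead of one fold.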
11

Ponnuviji, N. P., and M. Vigilson Prem. "An Enhanced Way of Distributed Denial of Service Attack Detection by Applying Machine Learning Algorithms in Cloud Computing." Journal of Computational and Theoretical Nanoscience 17, no. 8 (August 1, 2020): 3765–69. http://dx.doi.org/10.1166/jctn.2020.9317.

Abstract:
Cloud Computing has revolutionized Information Technology by allowing users to access a variety of resources for different applications in a less expensive manner. The resources are allocated by providing scalable, flexible, on-demand access in a virtualized manner, with reduced maintenance and lower infrastructure cost. The majority of resources are handled and managed by organizations over the internet using different standards and formats of networking protocols. Research and statistics have shown that available and existing technologies are prone to threats and vulnerabilities in legacy protocols, in the form of bugs that pave the way for intrusion by attackers. The most common among these attacks is the Distributed Denial of Service (DDoS) attack. This attack targets the cloud's performance and causes serious damage to the entire cloud computing environment. In the DDoS attack scenario, compromised computers are targeted. The attacks are carried out by transmitting a large number of packets, injected with known and unknown bugs, to a server. A huge portion of the network bandwidth of the users' cloud infrastructure is affected, consuming an enormous amount of their servers' time. In this paper, we propose a DDoS attack detection scheme based on the Random Forest algorithm to mitigate the DDoS threat. The algorithm is used along with signature detection techniques and generates a decision tree, which helps in the detection of signature attacks in DDoS flooding. We have also applied other machine learning algorithms and analyzed the results they yielded.
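In practice a detector like the paper's would use a library Random Forest (e.g. sklearn's RandomForestClassifier) over flow features. As a dependency-free stand-in, here is a toy ensemble of single-feature decision stumps with random feature selection, a drastically simplified cousin of a Random Forest; the flow features and thresholds are illustrative, not the paper's.

```python
import random

def train_stump(rows, labels, feature):
    """Pick the threshold/direction on one feature maximizing training accuracy."""
    best_acc, best_thr, best_sign = -1.0, 0.0, 1
    for thr in sorted({r[feature] for r in rows}):
        for sign in (1, -1):
            preds = [1 if sign * (r[feature] - thr) > 0 else 0 for r in rows]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if acc > best_acc:
                best_acc, best_thr, best_sign = acc, thr, sign
    return feature, best_thr, best_sign

def train_ensemble(rows, labels, n_stumps=9, seed=7):
    """Fit n_stumps stumps, each on one randomly chosen feature."""
    rng = random.Random(seed)
    return [train_stump(rows, labels, rng.randrange(len(rows[0])))
            for _ in range(n_stumps)]

def predict(ensemble, row):
    """Majority vote: 1 = DDoS flood, 0 = benign traffic."""
    votes = sum(1 if sign * (row[feat] - thr) > 0 else 0
                for feat, thr, sign in ensemble)
    return 1 if 2 * votes >= len(ensemble) else 0
```

On toy flow records (packets per second, distinct source addresses, mean packet size), floods separate cleanly from benign traffic on every feature, so each stump, and hence the vote, flags the flood.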
12

Church, Dawson, Peta Stapleton, and Debbie Sabot. "App-Based Delivery of Clinical Emotional Freedom Techniques: Cross-Sectional Study of App User Self-Ratings." JMIR mHealth and uHealth 8, no. 10 (October 14, 2020): e18545. http://dx.doi.org/10.2196/18545.

Abstract:
Background The burgeoning area of mobile health (mHealth) has experienced rapid growth in mobile apps designed to address mental health issues. Although abundant apps offer strategies for managing symptoms of anxiety and stress, information regarding their efficacy is scarce. Objective This study aimed to assess the effect of an mHealth app on user self-ratings of psychological distress in a sample of 270,461 app users. The Tapping Solution App guides users through the therapeutic protocols of Clinical Emotional Freedom Techniques (EFT), an evidence-based psychophysiological intervention that combines acupressure with elements of cognitive and exposure therapies. Methods App users provided self-ratings of emotional intensity before and after app sessions (termed “tapping meditations”) using an 11-point Subjective Units of Distress scale. App user data for 23 tapping meditations, which addressed psychological symptoms of anxiety and stress, were gathered between October 2018 and October 2019, totaling 380,034 completed app sessions. Results Across 12 anxiety-tapping meditations, the difference in emotional intensity ratings from presession (mean 6.66, SD 0.25) to postsession (mean 3.75, SD 0.30) was statistically significant (P<.001; 95% CI −2.92 to −2.91). Across 11 stress-tapping meditations, a statistically significant difference was also found from presession (mean 6.91, SD 0.48) to postsession (mean 3.83, SD 0.54; P<.001; 95% CI −3.08 to −3.07). The results are consistent with the literature on the efficacy of Clinical EFT for anxiety and stress when offered in conventional therapeutic formats. Conclusions The findings provide preliminary support for the effectiveness of the mHealth app in the immediate reduction of self-rated psychological distress. As an adjunct to professional mental health care, the app promises accessible and convenient therapeutic benefits.
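The study's headline numbers are paired pre/post differences with confidence intervals, and that computation itself is tiny. A sketch on made-up SUD ratings using a normal-approximation 95% CI (an assumption; the paper's pooled analysis over hundreds of thousands of sessions may use a different estimator):

```python
import math
import statistics

def paired_change(pre, post):
    """Mean post-minus-pre change and a 95% normal-approximation CI."""
    diffs = [b - a for a, b in zip(pre, post)]
    mean = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(len(diffs))
    return mean, (mean - 1.96 * se, mean + 1.96 * se)
```

A negative mean whose entire interval sits below zero is the pattern the abstract reports: distress ratings fall from pre-session to post-session.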
13

Venegas, Pablo, Rubén Usamentiaga, Juan Perán, and Idurre Sáez de Ocáriz. "Quaternion Processing Techniques for Color Synthesized NDT Thermography." Applied Sciences 11, no. 2 (January 15, 2021): 790. http://dx.doi.org/10.3390/app11020790.

Abstract:
Infrared thermography is a widely used technology that has been successfully applied in many and varied fields. These applications include its use as a non-destructive testing tool to assess the integrity state of materials. The current level of development of this application is high, and its effectiveness is widely verified. There are application protocols and methodologies that have demonstrated a high capacity to extract relevant information from the captured thermal signals and to guarantee the detection of anomalies in the inspected materials. However, there is still room for improvement in certain aspects, such as increasing detection capacity and defining a detailed procedure for characterizing indications, which must be investigated further to reduce uncertainties and optimize this technology. In this work, an innovative thermographic data analysis methodology is proposed that extracts a greater amount of information from the recorded sequences by applying advanced processing techniques to the results. The extracted information is synthesized into three channels that may be represented as real color images and processed by quaternion algebra techniques to improve the detection level and facilitate the classification of defects. To validate the proposed methodology, synthetic data and actual experimental sequences have been analyzed. Seven different definitions of signal-to-noise ratio (SNR) have been used to assess the increase in detection capacity, and a generalized application procedure has been proposed to extend their use to color images. The results verify the capacity of this methodology, showing significant increases in SNR compared to conventional processing techniques in thermographic NDT.
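Of the seven SNR definitions the paper compares, a common one in thermographic NDT contrasts a defect region against the sound (defect-free) background. A sketch of that single definition, on scalar pixel lists (the paper's generalization to color images, via quaternions, is beyond this snippet):

```python
import math
import statistics

def snr(defect_pixels, sound_pixels):
    """|mean(defect) - mean(sound)| over the background standard deviation."""
    contrast = abs(statistics.mean(defect_pixels) - statistics.mean(sound_pixels))
    return contrast / statistics.pstdev(sound_pixels)

def snr_db(defect_pixels, sound_pixels):
    """The same ratio in decibels, as detection thresholds are usually quoted."""
    return 20 * math.log10(snr(defect_pixels, sound_pixels))
```

A defect standing ten background standard deviations above the sound area thus scores 20 dB, and "significant increments in the SNR" means processing that widens exactly this gap.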
14

Dharmadhikari, Suyog, Apoorva Mhatre, Juhi Gundavda, Daya D Shetye, Priyanshi Shah, and Heer Madhvi. "Digitally Assisted Implant Therapy." International Journal of Current Research and Review 13, no. 05 (2021): 166–72. http://dx.doi.org/10.31782/ijcrr.2021.sp286.

Abstract:
Introduction: Techniques and workflows have been developed to use guided surgery in most clinical patient case applications, such as immediate implant placement, tilted implant/hybrid restoration, and implant placement. Aims: Integrating the guide sleeves is a manual process, as is the removal of holding or support structures. Methods/Materials: Different techniques have been developed to transfer the ideal implant position to the surgical field using templates. Implant planning software matches CT data to wax-up information, allowing the clinician to view a 3D image of the jaw when planning implant position. Results: Guided implant surgery makes it easier for the clinician to render an optimal clinical outcome. Computer-assisted implantology has been found to overcome the errors encountered during implant site preparation more precisely. Conclusion: The protocols to be followed in this technique are based on the concept of prosthetic-driven implantology and CT-scan analysis.
15

Kumar, Ganesh, Shuib Basri, Abdullahi Abubakar Imam, Sunder Ali Khowaja, Luiz Fernando Capretz, and Abdullateef Oluwagbemiga Balogun. "Data Harmonization for Heterogeneous Datasets: A Systematic Literature Review." Applied Sciences 11, no. 17 (September 6, 2021): 8275. http://dx.doi.org/10.3390/app11178275.

Abstract:
As data size increases drastically, its variety also increases. Investigating such heterogeneous data is one of the most challenging tasks in information management and data analytics. The heterogeneity and decentralization of data sources affect data visualization and prediction, thereby influencing analytical results accordingly. Data harmonization (DH) corresponds to a field that unifies the representation of such disparate data. Over the years, multiple solutions have been developed to minimize the heterogeneity and disparity in formats of big-data types. In this study, a systematic review of the literature was conducted to assess the state-of-the-art DH techniques. This study aimed to understand the issues caused by heterogeneity, the need for DH, and the techniques that deal with substantial heterogeneous textual datasets. The process produced 1355 articles, but among them, only 70 articles were found to be relevant through inclusion and exclusion criteria. The results show that the heterogeneity of structured, semi-structured, and unstructured (SSU) data can be managed by using DH and its core techniques, such as text preprocessing, Natural Language Processing (NLP), machine learning (ML), and deep learning (DL). These techniques are applied to many real-world applications centered on the information-retrieval domain. Several assessment criteria were implemented to measure the efficiency of these techniques, such as precision, recall, F1, accuracy, and time. A detailed explanation of each research question, common techniques, and performance measures is also provided. Lastly, we present readers with a detailed discussion of the existing work, contributions, and managerial and academic implications, along with the conclusion, limitations, and future research directions.
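The text-preprocessing end of DH, normalizing heterogeneous field names and values into one schema, is the easiest piece to make concrete. A sketch with an illustrative synonym table (real systems curate or learn far larger mappings, and the field names here are invented):

```python
import re
import unicodedata

# Illustrative mapping from source-specific field names to one canonical schema.
SYNONYMS = {
    "dob": "date_of_birth",
    "birthdate": "date_of_birth",
    "tel": "phone",
    "telephone": "phone",
}

def harmonize_field(name):
    """Unicode-normalize, lowercase, squeeze punctuation, then map synonyms."""
    key = unicodedata.normalize("NFKD", name).lower()
    key = re.sub(r"[^a-z0-9]+", "_", key).strip("_")
    return SYNONYMS.get(key, key)

def harmonize_record(record):
    """One record from any source -> canonical field names, trimmed values."""
    return {harmonize_field(k): v.strip() if isinstance(v, str) else v
            for k, v in record.items()}
```

After this pass, records arriving as `{"DOB": ...}` from one source and `{"Telephone": ...}` from another land in a single schema, which is the precondition for the NLP/ML stages the review goes on to survey.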
16

Guo, Bo, Fu-Shin Lee, Chen-I. Lin, and Yun-Qing Lu. "A cloud integrated strategy for reconfigurable manufacturing systems." Concurrent Engineering 28, no. 4 (October 2, 2020): 305–18. http://dx.doi.org/10.1177/1063293x20958937.

Abstract:
Manufacturing industries nowadays need to reconfigure their production lines promptly so as to adapt to rapidly changing markets. Meanwhile, carrying out system reconfigurations also means managing innumerable types of manufacturing apparatus. Nevertheless, traditional incompatible manufacturing systems delivered by exclusive vendors usually increase manufacturing costs and prolong development time. This paper presents a novel RMS framework, which implements a Redis master/slave server mechanism to integrate various CNC manufacturing apparatus, hardware control means, and data exchange protocols through developed configuration codes. In the RMS framework, each manufacturing apparatus or accessory is an object, and information on recognized CNC control panel image features, associated apparatus tuning parameters, communication formats, operation procedures, and control APIs is stored in the Redis master cloud server database. By applying machine vision techniques to acquired CNC controller panel images, the system effectively identifies instantaneous CNC machining states and response messages once the embedded image features are recognized. When system reconfiguration of the manufacturing resources is demanded, the system issues commands from Redis local client servers to retrieve the information stored in the Redis master cloud servers, in which the resources for registered CNC machines, robots, and built-in accessories are maintained securely. The system then exploits the collected information locally to reconfigure the involved manufacturing resources and starts manufacturing immediately, and it is thus capable of promptly responding to quickly revised orders in a competitive market.
In a prototyped RMS architecture, the proposed approach takes advantage of recognized visual feedback, obtained using an invariant image feature extraction algorithm, to effectively command an industrial robot to carry out the demanded actions on a CNC control panel, as a regular operator does daily in front of the CNC machine.
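The storage pattern in this framework, apparatus configurations registered centrally and pulled by local clients at reconfiguration time, can be sketched with a dict standing in for the Redis master so the example stays runnable without a server; a production version would make the equivalent redis set/get calls, and the apparatus fields below are illustrative, not the paper's schema.

```python
import json

class ConfigRegistry:
    """Stand-in for the Redis master cloud server: apparatus id -> config blob."""

    def __init__(self):
        self._store = {}

    def register(self, apparatus_id, config):
        # Serialize exactly as one would before a Redis SET.
        self._store[apparatus_id] = json.dumps(config)

    def fetch(self, apparatus_id):
        # Deserialize exactly as one would after a Redis GET.
        return json.loads(self._store[apparatus_id])

def reconfigure(registry, apparatus_id):
    """Local client: pull the stored protocol and tuned parameters to apply."""
    cfg = registry.fetch(apparatus_id)
    return {"protocol": cfg["protocol"], **cfg["params"]}
```

The point of the pattern is that the local client needs no vendor-specific knowledge at reconfiguration time: everything it must apply to the machine travels as one registered blob.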
17

Alp, Zeynep, Alessandro Ciccola, Ilaria Serafini, Alessandro Nucara, Paolo Postorino, Alessandra Gentili, Roberta Curini, and Gabriele Favero. "Photons for Photography: A First Diagnostic Approach to Polaroid Emulsion Transfer on Paper in Paolo Gioli’s Artworks." Molecules 27, no. 20 (October 18, 2022): 7023. http://dx.doi.org/10.3390/molecules27207023.

Abstract:
The aim of this research is to study and diagnose for the first time the Polaroid emulsion transfers in the contemporary artist Paolo Gioli's artworks, to provide preliminary knowledge about the materials of his artworks and the appropriate protocols that can be applied in future studies. The spectral analysis followed a multi-technique approach, applied first to mock-up samples created following Gioli's technique and then to one original artwork by Gioli, and comprised FORS (Fiber Optics Reflectance Spectroscopy), Raman spectroscopy, and FTIR (Fourier-Transform InfraRed) spectroscopy. These techniques were chosen because they are completely non-invasive and require no sample collection. The spectra obtained from FTIR were not sufficient to assign the dyes found in the transferred Polaroid emulsion; however, they provided significant information about the cellulose-based materials. The most diagnostic results came from FORS, for the determination of the dye developers present in the mock-up sample obtained from Polacolor Type 88 and in Paolo Gioli's original artwork created with Polacolor Type 89.
APA, Harvard, Vancouver, ISO, and other styles
18

Ganesh, Kavitha, P. Latchoumy, and A. Sonya. "An efficient traffic contention and control mechanism to improve QoS in heterogeneous wireless sensor networks." Indonesian Journal of Electrical Engineering and Computer Science 20, no. 2 (November 1, 2020): 968. http://dx.doi.org/10.11591/ijeecs.v20.i2.pp968-975.

Full text
Abstract:
<span>Heterogeneous Wireless Sensor Networks (HWSN) gather information from a cooperative network. In HWSN, the sensor nodes are scattered and the major challenges are topology control, battery optimization, packet loss and link lifetime. The existing techniques do not address all of these issues together. The objective of this work is to provide congestion-free data transfer with higher throughput and an increased packet delivery ratio. In the proposed methodology, three protocols are designed and developed, namely, the Hop by Hop Rate Adjustment Protocol (HHRA), the Energy Efficient Data Transfer Protocol (EEDT) and the Alternative Routing Congestion Control Protocol (ARCC). The HHRA protocol senses the traffic in the channel and adjusts the transmission rate accordingly to avoid congestion. Secondly, the EEDT protocol is used to find specific nodes that are more efficient and transfer packets through those nodes to improve throughput. The ARCC protocol is used to redirect the path of transmission when congestion occurs. Thus, the proposed traffic contention and control mechanism ensures congestion-free transmission and increases the packet delivery ratio by 23% and average throughput by 20% compared to the Dynamic Contention Window based Congestion Control (DCWCC) algorithm.</span>
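The hop-by-hop rate adjustment idea behind HHRA can be illustrated with a simple additive-increase / multiplicative-decrease rule; the threshold and step values below are illustrative assumptions, not parameters from the paper.

```python
def adjust_rate(current_rate, channel_load, load_threshold=0.8,
                increase_step=1.0, decrease_factor=0.5, min_rate=1.0):
    """Sense the channel and adjust the sending rate (packets/s) hop by hop.

    If the sensed load exceeds the threshold, back off multiplicatively to
    avoid congestion; otherwise probe for more bandwidth additively.
    """
    if channel_load > load_threshold:
        return max(min_rate, current_rate * decrease_factor)
    return current_rate + increase_step

rate = 10.0
rate = adjust_rate(rate, channel_load=0.9)  # congested -> back off to 5.0
rate = adjust_rate(rate, channel_load=0.3)  # channel idle -> probe up to 6.0
print(rate)  # 6.0
```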
APA, Harvard, Vancouver, ISO, and other styles
19

Jarah, Baker Akram Falah, Mufleh Amin AL Jarrah, Salam Nawaf Almomani, Emran AlJarrah, and Maen Al-Rashdan. "The effect of reliable data transfer and efficient computer network features in Jordanian banks accounting information systems performance based on hardware and software, database and number of hosts." International Journal of Data and Network Science 7, no. 1 (2023): 357–62. http://dx.doi.org/10.5267/j.ijdns.2022.9.012.

Full text
Abstract:
Reliable data transfer protocols are algorithmic techniques that guarantee the safe and secure transport of data through networks that may experience data loss or corruption. The performance of accounting information systems (AIS) will be negatively impacted if real-time data is not delivered. The main factors that affect the performance of computer networks are the number of users, the hardware and software, and the bandwidth. Computer network performance in turn plays a role in bank performance, as the network is an important component of the bank's infrastructure. With the advancement of information technology, network technology, and computer technology, computers have been utilized to aid AIS operations, and AIS has become an unavoidable trend of development. Therefore, the purpose of this study was to investigate data analysis in computer networks to improve AIS performance in Jordanian banks. A questionnaire was used to obtain the information; the participants were drawn primarily from Jordanian banks, and a total of 115 people took part in the study. According to the conclusions of this study, communication technology networks have a statistically significant impact on improving AIS performance in Jordanian banks.
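The framing and corruption-detection half of the reliable-transfer mechanisms the abstract refers to can be sketched as below. This is a generic textbook construction (sequence number plus CRC-32 trailer), not the specific protocol used by the banks surveyed.

```python
import zlib

def make_frame(seq: int, payload: bytes) -> bytes:
    """Build a frame: 1-byte sequence number, payload, CRC-32 trailer."""
    header = seq.to_bytes(1, "big") + payload
    return header + zlib.crc32(header).to_bytes(4, "big")

def check_frame(frame: bytes):
    """Return (seq, payload) if the CRC matches, else None (no ACK sent)."""
    header, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(header) != crc:
        return None  # corrupted in transit: sender will retransmit on timeout
    return header[0], header[1:]

frame = make_frame(0, b"ledger-entry")
assert check_frame(frame) == (0, b"ledger-entry")

# Flip one bit of the trailer to simulate corruption on a lossy network.
corrupted = frame[:-1] + bytes([frame[-1] ^ 0xFF])
assert check_frame(corrupted) is None
```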
APA, Harvard, Vancouver, ISO, and other styles
20

Valverde, Juan Miguel, Vandad Imani, Ali Abdollahzadeh, Riccardo De Feo, Mithilesh Prakash, Robert Ciszek, and Jussi Tohka. "Transfer Learning in Magnetic Resonance Brain Imaging: A Systematic Review." Journal of Imaging 7, no. 4 (April 1, 2021): 66. http://dx.doi.org/10.3390/jimaging7040066.

Full text
Abstract:
(1) Background: Transfer learning refers to machine learning techniques that focus on acquiring knowledge from related tasks to improve generalization in the tasks of interest. In magnetic resonance imaging (MRI), transfer learning is important for developing strategies that address the variation in MR images from different imaging protocols or scanners. Additionally, transfer learning is beneficial for reutilizing machine learning models that were trained to solve different (but related) tasks to the task of interest. The aim of this review is to identify research directions, gaps in knowledge, applications, and widely used strategies among the transfer learning approaches applied in MR brain imaging; (2) Methods: We performed a systematic literature search for articles that applied transfer learning to MR brain imaging tasks. We screened 433 studies for their relevance, and we categorized and extracted relevant information, including task type, application, availability of labels, and machine learning methods. Furthermore, we closely examined brain MRI-specific transfer learning approaches and other methods that tackled issues relevant to medical imaging, including privacy, unseen target domains, and unlabeled data; (3) Results: We found 129 articles that applied transfer learning to MR brain imaging tasks. The most frequent applications were dementia-related classification tasks and brain tumor segmentation. The majority of articles utilized transfer learning techniques based on convolutional neural networks (CNNs). Only a few approaches utilized clearly brain MRI-specific methodology, and considered privacy issues, unseen target domains, or unlabeled data. We proposed a new categorization to group specific, widely-used approaches such as pretraining and fine-tuning CNNs; (4) Discussion: There is increasing interest in transfer learning for brain MRI. 
Well-known public datasets have clearly contributed to the popularity of Alzheimer’s diagnostics/prognostics and tumor segmentation as applications. Likewise, the availability of pretrained CNNs has promoted their utilization. Finally, the majority of the surveyed studies did not examine in detail the interpretation of their strategies after applying transfer learning, and did not compare their approach with other transfer learning approaches.
APA, Harvard, Vancouver, ISO, and other styles
21

De Waele, Gaetan, Jim Clauwaert, Gerben Menschaert, and Willem Waegeman. "CpG Transformer for imputation of single-cell methylomes." Bioinformatics 38, no. 3 (October 28, 2021): 597–603. http://dx.doi.org/10.1093/bioinformatics/btab746.

Full text
Abstract:
Abstract Motivation The adoption of current single-cell DNA methylation sequencing protocols is hindered by incomplete coverage, outlining the need for effective imputation techniques. The task of imputing single-cell (methylation) data requires models to build an understanding of underlying biological processes. Results We adapt the transformer neural network architecture to operate on methylation matrices through combining axial attention with sliding window self-attention. The obtained CpG Transformer displays state-of-the-art performances on a wide range of scBS-seq and scRRBS-seq datasets. Furthermore, we demonstrate the interpretability of CpG Transformer and illustrate its rapid transfer learning properties, allowing practitioners to train models on new datasets with a limited computational and time budget. Availability and implementation CpG Transformer is freely available at https://github.com/gdewael/cpg-transformer. Supplementary information Supplementary data are available at Bioinformatics online.
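The sliding-window self-attention that the authors combine with axial attention restricts each position to a local neighbourhood of the methylation matrix. A minimal mask construction (the window size here is an illustrative parameter, not the one used by CpG Transformer) looks like this:

```python
def sliding_window_mask(n, window):
    """Boolean attention mask: position i may attend to j iff |i - j| <= window."""
    return [[abs(i - j) <= window for j in range(n)] for i in range(n)]

# With window=1, each CpG site attends only to itself and its two neighbours.
mask = sliding_window_mask(5, window=1)
print(mask[2])  # [False, True, True, True, False]
```

In the full model this mask is applied along one axis of the cell-by-site matrix, while axial attention handles the other axis.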
APA, Harvard, Vancouver, ISO, and other styles
22

Philo, Ross, and Jay Hollingsworth. "Reducing rig personnel requirements with standards-based real-time data streaming." APPEA Journal 58, no. 2 (2018): 736. http://dx.doi.org/10.1071/aj17110.

Full text
Abstract:
Cost reductions have become an essential response to lower oil and gas prices. Drilling rigs operate in distant and sometimes hostile environments, so relocating rig-based experts to remote control centres saves costs and improves health, safety and environment (HSE) outcomes. Key staff can work in an improved environment and movements to and from the rig are fewer, lowering transport-related costs and risks. The offsite experts can apply their expertise to the operations of multiple drilling rigs from a single location. To make this a reality, data from thousands of sensors on the rig and from measurement devices such as logging-while-drilling tools must be fed to the control room instantaneously and continuously. Legacy systems that poll rig-based devices for new data consume significant bandwidth and deliver data in a discontinuous manner with delays of 15 s or more. This does not meet the criteria for safe and reliable remote control of a rig and has been the reason why many roles have remained rig-based. This paper describes a new set of protocols that establish a continuous stream of data from devices on the rig to the control room with sub-second lag time. The new protocol also uses an order of magnitude less bandwidth, thus allowing more data to be carried in less time. Combined with industry-standard Wellsite Information Transfer Standard Markup Language (WITSML) data transfer formats, the process operates transparently with numerous service providers and software systems. This paper includes a case study to which the new protocol is applied, resulting in fewer permanent staff on a North Sea rig and fewer visits by an intervention contractor to the rig, with clear cost savings and HSE risk mitigation.
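The shift from poll-based transfer to streaming can be sketched with a simple publish/subscribe loop: readings are pushed the moment they exist rather than waiting for the next polling cycle. The class and field names below are illustrative, not part of the WITSML standard or the protocol described in the paper.

```python
class SensorStream:
    """Push-based delivery: the rig publishes each reading as it is produced,
    instead of the control room polling devices every 15 s or more."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, reading):
        # Sub-second delivery: every subscriber is notified immediately,
        # and only new data crosses the link (no repeated full polls).
        for cb in self._subscribers:
            cb(reading)

received = []
stream = SensorStream()
stream.subscribe(received.append)  # the remote control room listens here
for depth in (1500.0, 1500.3, 1500.6):  # successive measurements while drilling
    stream.publish({"bit_depth_m": depth})
print(len(received))  # 3
```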
APA, Harvard, Vancouver, ISO, and other styles
23

Wagh, Sameer. "Pika: Secure Computation using Function Secret Sharing over Rings." Proceedings on Privacy Enhancing Technologies 2022, no. 4 (October 2022): 351–77. http://dx.doi.org/10.56553/popets-2022-0113.

Full text
Abstract:
Machine learning algorithms crucially depend on non-linear mathematical functions such as division (for normalization), exponentiation (for softmax and sigmoid), tanh (as an activation function), logarithm (for cross-entropy loss), and square root (for back-propagation of normalization layers). However, when machine learning is performed over secure computation, these protocols incur a large communication overhead and high round complexity. In this work, we propose new multi-party computation (MPC) protocols for such functions. Our protocols achieve constant round complexity (3 for semi-honest, 4 for malicious), an order of magnitude lower communication (54-121× lower than prior art), and high concrete efficiency (2-1163× faster runtime). We rely on recent advances in function secret sharing (FSS) to construct these protocols. Our contributions can be summarized as follows: (1) A constant round protocol to securely evaluate nonlinear functions such as division, exponentiation, logarithm, and tanh (in comparison to prior art which uses round complexity proportional to the rounds of iterative methods/required precision) with high accuracy. This construction largely follows prior work in look-up style secure computation. (2) Our main contribution is the extension of the above protocol to be secure in the presence of malicious adversaries in the honest majority setting. We provide a malicious sketching protocol for FSS schemes that works over rings and, in order to prove its security, we extend (and prove) a corresponding form of the Schwartz-Zippel lemma over rings. This is the first such extension of the lemma and it can be of independent interest in other domains of secure computation. (3) We implement our protocol and showcase order of magnitude improvements in runtime and communication. Given the low round complexity and substantially lower communication, our protocols achieve even better performance over network-constrained environments such as WAN.
Finally, we showcase how such functions can lead to scalability in machine learning. Note that the techniques presented are applicable beyond machine learning, as the protocols effectively provide an efficient 1-out-of-N oblivious transfer or an efficient private information retrieval protocol.
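The ring-based sharing that underlies these FSS protocols can be illustrated with plain additive secret sharing over Z_{2^32}. This is the standard textbook construction, not the paper's full FSS scheme; it shows why addition is "free" (local) in such protocols.

```python
import secrets

MOD = 2**32  # the ring Z_{2^32}

def share(x, parties=3):
    """Split x into additive shares that sum to x modulo 2^32."""
    shares = [secrets.randbelow(MOD) for _ in range(parties - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    return sum(shares) % MOD

a, b = 123456, 789
sa, sb = share(a), share(b)
# Addition of secrets is local: each party adds its own shares,
# with no communication rounds at all.
sum_shares = [(x + y) % MOD for x, y in zip(sa, sb)]
print(reconstruct(sum_shares))  # 124245
```

It is the non-linear functions (division, exp, log, tanh) that cannot be computed locally like this, which is exactly where the paper's FSS-based look-up protocols come in.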
APA, Harvard, Vancouver, ISO, and other styles
24

Bail, Rosangela de França, João Luiz Kovaleski, Regina Negri Pagani, Daiane Maria de Genaro Chiroli, and Vander Luiz Silva. "First Aid Approaches, Teaching, and Knowledge and Technology Transfer to Undergraduate Engineering Students." Ingeniería e Investigación 42, no. 2 (October 29, 2021): e84788. http://dx.doi.org/10.15446/ing.investig.v42n2.84788.

Full text
Abstract:
First aid and prehospital care practices are fundamental in helping victims, often saving lives. This study aims to present the results of a bibliometric analysis regarding first aid, as well as a case report on first aid teaching for a group of undergraduate students. The latter presents a project developed at a Brazilian public university for engineering students. The core modules addressed in the project were: first aid concepts, specialized distress calling, site safety, injury mechanisms, primary and secondary approaches, bleeding control, cardiopulmonary resuscitation, clinical emergencies, seizures, intoxications, fractures, burns, immobilizations, and victim transport. A systematic literature review was conducted, based on structured protocols, in four databases: Scopus, Web of Science, PubMed, and ScienceDirect. Significant data were analyzed, such as years of publication, main journals, and the most frequent terms in first aid teaching. The SOS-UTFPR Project aims to provide students of engineering and related fields with theoretical-practical knowledge about first aid. As of 2021, it had 3 graduated groups, thus generating relevant data for this research. Its main purpose is to train citizens capable of making assertive decisions in emergency situations, whether at the university, work, home, etc. With this, it was possible to promote Knowledge and Technology Transfer (KTT) by training individuals to multiply information, techniques, and acquired knowledge, in order to act preventively and save lives.
APA, Harvard, Vancouver, ISO, and other styles
25

Gullo, Giuseppe, Marco Scaglione, Gaspare Cucinella, Vito Chiantera, Antonino Perino, Maria Elisabetta Greco, Antonio Simone Laganà, Enrico Marinelli, Giuseppe Basile, and Simona Zaami. "Neonatal Outcomes and Long-Term Follow-Up of Children Born from Frozen Embryo, a Narrative Review of Latest Research Findings." Medicina 58, no. 9 (September 4, 2022): 1218. http://dx.doi.org/10.3390/medicina58091218.

Full text
Abstract:
In recent years, the growing use of ART (assisted reproductive techniques) has led to a progressive improvement of protocols; embryo freezing is certainly one of the most important innovations. This technique is selectively offered as a tailored approach to reduce the incidence of multiple pregnancies and, most importantly, to lower the risk of developing ovarian hyperstimulation syndrome when used in conjunction with an ovulation-triggering GnRH antagonist. The increase in transfer cycles with frozen embryos made it possible to study the effects of the technique in children thus conceived. Particularly noteworthy is the increase in macrosomal and LGA (large for gestational age) newborns, in addition to a decrease in SGA (small for gestational age) and LBW (low birth weight) newborns. The authors aimed to outline a broad-ranging narrative review by summarizing and elaborating on the most important evidence regarding the neonatal outcome of children born from frozen embryos and provide information on the medium- and long-term follow-up of these children. However, given the relatively recent large-scale implementation of such techniques, further studies are needed to provide more conclusive evidence on outcomes and implications.
APA, Harvard, Vancouver, ISO, and other styles
26

Mbah, Melanie, Bettina Brohmann, and Silvia Schütte. "Transdisciplinary disposal governance – Learning and reflexion in and between organisations and through participation of the public." Safety of Nuclear Waste Disposal 1 (November 10, 2021): 221–23. http://dx.doi.org/10.5194/sand-1-221-2021.

Full text
Abstract:
Abstract. The site selection procedure is participatory and citizens are to be involved as “co-designers of the procedure” (§ 5 (1) 2 StandAG). This is an understanding of participation that goes beyond information and consultation. Although participation is differently defined in participation research, there is agreement that participation – especially in this context – goes beyond formal public participation, as is customary in approval procedures in the context of commenting procedures, and includes forms of informal public participation (cf. Mbah, 2017). Further innovative forms of public participation are needed in which concepts – for participation, for learning, for reversibility, etc. – can be (further) developed. Paragraph 5 (3) stipulates a further development of the participation procedure with the public. On the one hand, this provides framework conditions and, on the other hand, opens up a scope for design, which must be designed together with different groups of actors. This requirement was formulated both before and within the framework of the sub-areas conference (cf. Brohmann et al., 2021; Ewer and Thienel, 2019; Kuhbier, 2020; NBG, 2019, 2021). Therefore, we would like to address the following research questions: What does “learning” mean in the German Site Selection Act (StandAG 2017, § 1 (2)) and how can it be governed and implemented? Who learns and under which conditions? What are the requirements and possibilities of participation and what limitations can be derived in this context? Knowledge and information are the basis of all decision-making processes. Learning is part of a reflexive information exchange and essential for creating, transferring, and readjusting knowledge. In this respect, learning and reflexion means at least a two-way process, often multiple ways and loops. 
Therefore, we would like to focus on reflexive learning processes, so-called double-loop learning processes (Argyris, 1977; Argyris and Schön, 1978), which hold that there should be responsive paths of knowledge transfer to generate learning through reflexion. Such reflexive learning processes may take place at different levels: individual, collective (groups, e.g. departments in an organisation), organisational, and between organisations; directly and indirectly involved or responsible (individual and collective) actors must learn. Reflexive learning processes go beyond strategies and techniques for reaching a certain goal: they scrutinise attitudes and may lead to changes in normative values and belief systems. This is not an automatic process, or only to a limited extent. Rather, spaces and formats are needed for systematic learning and reflexion, as well as different methods of knowledge and information transfer, particularly when it comes to the requirements of participative formats. These methods, formats and spaces need to be adjusted to context and time, which means, for example, that different actors need to be addressed differently and that the linking back into the organisation and its institutional routines must be considered. For this, contextual knowledge and collaboration are crucial. Participatory and transdisciplinary approaches are important key concepts which need to be filled in with actions to initiate and further develop learning processes, as understood and demanded by the StandAG and the selected literature. We give insights into findings based on literature reviews, a jurisdictional analysis of the StandAG, and several interviews with different actors in the procedure and with experts on different topics (regional planning, place attachment, psychology). In summary, we identify challenges for learning and give insights into how to overcome, or at least address, them.
APA, Harvard, Vancouver, ISO, and other styles
27

Fonseca, Jeferson F., Maria Emilia F. Oliveira, Felipe Z. Brandão, Ribrio I. T. P. Batista, Alexandre R. Garcia, Pawel M. Bartlewski, and Joanna M. G. Souza-Fabjan. "Non-surgical embryo transfer in goats and sheep: the Brazilian experience." Reproduction, Fertility and Development 31, no. 1 (2019): 17. http://dx.doi.org/10.1071/rd18324.

Full text
Abstract:
Brazil has made tremendous progress in non-surgical embryo transfer (NSET) in sheep and goats. New instruments and techniques for non-surgical embryo recovery (NSER) and NSET in small ruminants have been implemented. Recent improvements include refinement of the protocols for cervical relaxation combining oestradiol-oxytocin-cloprostenol treatment at specific times before NSER in sheep; recipient goats do not require any hormonal drugs to induce cervical dilation, and direct embryo transfer by the cervical route yields excellent results. Transrectal ovarian ultrasonography (B-mode, but especially colour Doppler) has proven to be an accurate method to localise and enumerate corpora lutea and luteinised unovulated follicles in recipient and donor does and ewes. An array of new criteria for selecting superior animals for NSER and NSET (e.g. cervical mapping) has been developed by Brazilian researchers. Extensive studies of both technologies were initially conducted in commercial breeds of goats and sheep but have been gradually extended to some native breeds of sheep (germplasm conservation) and dairy goat operations. It is speculated that, in future, NSER and NSET may become the methods of choice for caprine and ovine embryo recovery and transfer in Brazil, and then globally. Due primarily to the efficiency of NSET in goats, a novel interspecies (e.g. bovine) IVP method may soon be developed on a large scale. The Brazilian experience is an invaluable source of information and know-how promoting the replacement of conventional surgical assisted reproductive technologies with non-surgical procedures and hence supporting the rapid development of the embryo transfer industry in small ruminants.
APA, Harvard, Vancouver, ISO, and other styles
28

Tin, Phu Tran, Phan Van-Duc, Tan N. Nguyen, and Le Anh Vu. "Performance Analysis for Exact and Upper Bound Capacity in DF Energy Harvesting Full-Duplex with Hybrid TPSR Protocol." Journal of Electrical and Computer Engineering 2021 (January 27, 2021): 1–9. http://dx.doi.org/10.1155/2021/6610107.

Full text
Abstract:
In this paper, we investigate a full-duplex (FD) decode-and-forward (DF) cooperative relaying system in which the relay node can harvest energy from the radiofrequency (RF) signals of the source and then utilize the harvested energy to transfer the information to the destination. Specifically, a hybrid time-power switching-based relaying method is adopted, which leverages the benefits of the time-switching relaying (TSR) and power-splitting relaying (PSR) protocols. While energy harvesting (EH) helps to mitigate the limited energy at the relay, full-duplex operation is one of the most important techniques for enhancing spectrum efficiency through its capacity to transmit and receive signals simultaneously. Based on the proposed system model, the performance of the relaying system in terms of the ergodic capacity (EC) is analyzed. Specifically, we derive an exact closed-form expression for the upper-bound EC by applying some special mathematical functions. Then, Monte Carlo simulations are performed to validate the mathematical analysis and numerical results.
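A hybrid TSR/PSR energy budget at the relay can be sketched as follows. The formula combines the standard TSR term (harvesting during a fraction α of the block time T) with a PSR term (splitting off a fraction ρ of the received power during the source-to-relay half of the remaining block). This is one common formulation from the EH-relaying literature, not necessarily the exact model of the paper, and all numeric values are chosen only for illustration.

```python
def harvested_energy(P, h2, alpha, rho, eta=0.7, T=1.0):
    """Energy harvested under a hybrid time-switching/power-splitting protocol.

    P     : source transmit power
    h2    : source->relay channel power gain |h|^2
    alpha : fraction of block time T dedicated to harvesting (TSR part)
    rho   : fraction of received power split off for harvesting (PSR part)
    eta   : energy conversion efficiency
    """
    tsr_part = eta * P * h2 * alpha * T
    psr_part = eta * rho * P * h2 * (1 - alpha) * T / 2
    return tsr_part + psr_part

E = harvested_energy(P=1.0, h2=0.5, alpha=0.2, rho=0.4)
print(round(E, 4))  # 0.126
```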
APA, Harvard, Vancouver, ISO, and other styles
29

Hernandez, Cesar A., Valerio Beni, and Johann F. Osma. "Fully Automated Microsystem for Unmediated Electrochemical Characterization, Visualization and Monitoring of Bacteria on Solid Media; E. coli K-12: A Case Study." Biosensors 9, no. 4 (November 4, 2019): 131. http://dx.doi.org/10.3390/bios9040131.

Full text
Abstract:
In this paper, we present a non-fluidic microsystem for the simultaneous visualization and electrochemical evaluation of confined, growing bacteria on solid media. Using a completely automated platform, real-time monitoring of bacteria and image-based computer characterization of growth were performed. Electrochemical tests, using Escherichia coli K-12 as the model microorganism, revealed the development of a faradaic process at the bacteria-microelectrode interface inside the microsystem, as implied by cyclic voltammetry and electrochemical impedance spectrometry measurements. The electrochemical information was used to determine the moment at which bacteria colonized the electrode-enabled area of the microsystem. This microsystem shows potential advantages for long-term electrochemical monitoring of the extracellular environment of cell cultures and has been designed using readily available technologies that can be easily integrated into routine protocols. Complementarily, these methods can help elucidate fundamental questions about the electron transfer of bacterial cultures and can potentially be integrated into current characterization techniques.
APA, Harvard, Vancouver, ISO, and other styles
30

WALTHER, P., M. D. EISAMAN, A. ANDRÉ, F. MASSOU, M. FLEISCHHAUER, A. S. ZIBROV, and M. D. LUKIN. "GENERATION OF NARROW-BANDWIDTH SINGLE PHOTONS USING ELECTROMAGNETICALLY INDUCED TRANSPARENCY IN ATOMIC ENSEMBLES." International Journal of Quantum Information 05, no. 01n02 (February 2007): 51–62. http://dx.doi.org/10.1142/s0219749907002773.

Full text
Abstract:
We review recent experiments [M. D. Eisaman et al., Nature 438 (2005) 837] demonstrating the generation of narrow-bandwidth single photons using a room-temperature ensemble of 87Rb atoms. Our method involves creation of an atomic coherence via Raman scattering and projective measurement, followed by the coherent transfer of this atomic coherence onto a single photon using electromagnetically induced transparency (EIT). The single photons generated using this method are shown to have many properties necessary for quantum information protocols, such as narrow bandwidths, directional emission, and controllable pulse shapes. The narrow bandwidths of these single photons (~MHz), resulting from their matching to the EIT resonance (~MHz), allow them to be stored in narrow-bandwidth quantum memories. We demonstrate this by using dynamic EIT to store and retrieve the single photons in a second ensemble for storage times up to a few microseconds. We also describe recent improvements to the single-photon fidelity compared to the work reported in Nature 438 (2005) 837. These techniques may prove useful in quantum information applications such as quantum repeaters, linear-optics quantum computation, and daytime free-space quantum communication.
APA, Harvard, Vancouver, ISO, and other styles
31

Weston, Andrea D., Sasha Stasko, and Gerald M. Kidder. "An intensive hands-on course designed to teach molecular biology techniques to physiology graduate students." Advances in Physiology Education 26, no. 1 (March 2002): 42–49. http://dx.doi.org/10.1152/advan.00012.2001.

Full text
Abstract:
To address a growing need to make research trainees in physiology comfortable with the tools of molecular biology, we have developed a laboratory-intensive course designed for graduate students. This course is offered to a small group of students over a three-week period and is organized such that comprehensive background lectures are coupled with extensive hands-on experience. The course is divided into seven modules, each organized by a faculty member who has particular expertise in the area covered by that module. The modules focus on basic methods such as cDNA subcloning, sequencing, gene transfer, polymerase chain reaction, and protein and RNA expression analysis. Each module begins with a lecture that introduces the technique in detail by providing a historical perspective, describing both the uses and limitations of that technique, and comparing the method with others that yield similar information. Most of the lectures are followed by a laboratory session during which students follow protocols that were carefully designed to avoid pitfalls. Throughout these laboratory sessions, students are given an appreciation of the importance of proper technique and accuracy. Communication among the students, faculty, and the assistant coordinator is focused on when and why each procedure would be used, the importance of each step in the procedure, and approaches to troubleshooting. The course ends with an exam that is designed to test the students’ general understanding of each module and their ability to apply the various techniques to physiological questions.
APA, Harvard, Vancouver, ISO, and other styles
32

Berardi, Margherita, Luigi Santamaria Amato, Francesca Cigna, Deodato Tapete, and Mario Siciliani de Cumis. "Text Mining from Free Unstructured Text: An Experiment of Time Series Retrieval for Volcano Monitoring." Applied Sciences 12, no. 7 (March 30, 2022): 3503. http://dx.doi.org/10.3390/app12073503.

Full text
Abstract:
Volcanic activity may influence climate parameters and impact people's safety, and hence monitoring its characteristic indicators and their temporal evolution is crucial. Several databases, communications and literature sources providing data, information and updates on active volcanoes worldwide are available, and will likely increase in the future. Consequently, information extraction and text mining techniques that can efficiently analyze such databases and gather data and parameters of interest on a specific volcano can play an important role in this applied science field. This work presents a natural language processing (NLP) system that we developed to extract geochemical and geophysical data from free unstructured text included in monitoring reports and operational bulletins issued by volcanological observatories in HTML, PDF and MS Word formats. The NLP system enables the extraction of relevant gas parameters (e.g., SO2 and CO2 flux) from the text, and was tested on a series of 2839 daily and weekly bulletins published online between 2015 and 2021 for the Stromboli volcano (Italy). The experiment shows that the system proves capable of extracting the time series of a set of user-defined parameters that can later be analyzed and interpreted by specialists in relation to other monitoring and geospatial data. The text mining system can potentially be tuned to extract other target parameters from this and other databases.
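A minimal version of the extraction step can be written with a regular expression. The bulletin phrasing below is invented for illustration, and real bulletins would require the fuller NLP pipeline the authors describe (format conversion, sentence parsing, unit handling).

```python
import re

# Hypothetical pattern: capture a number followed by "t/d" after "SO2 flux".
PATTERN = re.compile(r"SO2 flux[^0-9]*([\d.]+)\s*t/d", re.IGNORECASE)

def extract_so2_flux(text):
    """Return the SO2 flux values (tonnes/day) mentioned in a bulletin's text."""
    return [float(m) for m in PATTERN.findall(text)]

bulletin = ("Weekly bulletin: the SO2 flux measured at the summit was "
            "310 t/d, compared with an SO2 flux of 280.5 t/d last week.")
print(extract_so2_flux(bulletin))  # [310.0, 280.5]
```

Applied over a dated series of bulletins, the extracted values form the per-parameter time series that the paper feeds to specialists.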
APA, Harvard, Vancouver, ISO, and other styles
33

Intisar Mohsin Saadoon. "OLSR Protocol based on Fog Computing and SDN in VANet." Global Journal of Engineering and Technology Advances 10, no. 2 (February 28, 2022): 060–70. http://dx.doi.org/10.30574/gjeta.2022.10.2.0037.

Full text
Abstract:
Wireless technology has been the subject of a great deal of study in recent years, and VANET is the fastest-growing area in wireless communications. The Vehicular Ad-Hoc Network (VANet) is a subtype of the Mobile Ad-Hoc Network (MANet) that is used to enhance road safety and the passenger experience, giving a unique perspective on intelligent transportation systems. This wireless technology is expected to improve road safety and efficiency as part of the Intelligent Transportation System. The VANET comprises Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) techniques that use IEEE 802.11p wireless access technologies to deliver applications (secure/unsecure) while exchanging information to avert accidents and provide travelers with trustworthy information. As a result of investments in autonomous cars, new network technologies including SDN and edge computing are being introduced, prompting VANET simulators to revise their support for these new capabilities. In this paper, we offer an SDN-compliant solution for managing wireless fog networks by combining OpenFlow and IP transfer protocols in a way that provides a flexible and configurable wireless data plane for fog networks, in addition to intelligent traffic engineering for network offloading. The proposal includes an efficiency rating for connections with decreased latency, flexible load balancing to select the fastest path, and lower network cost.
APA, Harvard, Vancouver, ISO, and other styles
34

Vermeir, Julie F., Melanie J. White, Daniel Johnson, Geert Crombez, and Dimitri M. L. Van Ryckeghem. "Gamified Web-Delivered Attentional Bias Modification Training for Adults With Chronic Pain: Protocol for a Randomized, Double-blind, Placebo-Controlled Trial." JMIR Research Protocols 11, no. 1 (January 27, 2022): e32359. http://dx.doi.org/10.2196/32359.

Full text
Abstract:
Background To date, research has found variable success in using attentional bias modification training (ABMT) procedures in pain samples. Several factors could contribute to these mixed findings, including boredom and low motivation. Indeed, training paradigms are repetitive, which can lead to disengagement and high dropout rates. A potential approach to overcoming some of these barriers is to attempt to increase motivation and engagement through gamification (ie, the use of game elements) of this procedure. To date, research has yet to explore the gamified format of ABMT for chronic pain and its potential for the transfer of benefits. Objective The aim of this study is to investigate the effects of a gamified web-delivered ABMT intervention in a sample of adults with chronic pain via a randomized, double-blind, placebo-controlled trial. Methods A total of 120 adults with chronic musculoskeletal pain, recruited from clinical (hospital outpatient waiting list) and nonclinical (wider community) settings, will be included in this randomized, double-blind, placebo-controlled, 3-arm trial. Participants will be randomly assigned to complete 6 web-based sessions of dot-probe nongamified sham control ABMT, nongamified standard ABMT, or gamified ABMT across a period of 3 weeks. Active ABMT conditions will aim to train attention away from pain-relevant words. Participant outcomes will be assessed at pretraining, during training, immediately after training, and at the 1-month follow-up. Primary outcomes include pain intensity, pain interference, and behavioral and self-reported engagement. Secondary outcomes include attentional bias for pain, anxiety, depression, interpretation bias for pain, and perceived improvement. Results The ethical aspects of this research project have been approved by the human research ethics committees of the Royal Brisbane and Women’s Hospital (HREC/2020/QRBW/61743) and Queensland University of Technology (2000000395). 
Study recruitment commenced in August 2021 and is ongoing. Data collection and analysis are expected to be concluded by October 2022 and January 2023, respectively. Conclusions This trial will be the first to evaluate the effects of gamification techniques in a pain ABMT intervention. The findings will provide important information on the potential therapeutic benefits of gamified pain ABMT programs, shed light on the motivational influences of certain game elements in the context of pain, and advance our understanding of chronic pain. Trial Registration Australian New Zealand Clinical Trials Registry ACTRN12620000803998; https://anzctr.org.au/ACTRN12620000803998.aspx International Registered Report Identifier (IRRID) PRR1-10.2196/32359
APA, Harvard, Vancouver, ISO, and other styles
35

Tawalbeh, Lo’ai, Fadi Muheidat, Mais Tawalbeh, and Muhannad Quwaider. "IoT Privacy and Security: Challenges and Solutions." Applied Sciences 10, no. 12 (June 15, 2020): 4102. http://dx.doi.org/10.3390/app10124102.

Full text
Abstract:
Privacy and security are among the significant challenges of the Internet of Things (IoT). Improper device updates, lack of efficient and robust security protocols, user unawareness, and famous active device monitoring are among the challenges that IoT is facing. In this work, we explore the background of IoT systems and security measures, and identify (a) different security and privacy issues, (b) approaches used to secure the components of IoT-based environments and systems, (c) existing security solutions, and (d) the best privacy models necessary and suitable for different layers of IoT-driven applications. In this work, we propose a new IoT layered model: generic and stretched, with the privacy and security components and layers identified. The proposed cloud/edge-supported IoT system is implemented and evaluated. The lower layer is represented by the IoT nodes, generated as Virtual Machines in Amazon Web Services (AWS). The middle layer (edge) is implemented as a Raspberry Pi 4 hardware kit with support for the Greengrass Edge Environment in AWS. We used the cloud-enabled IoT environment in AWS to implement the top layer (the cloud). Security protocols and key management sessions were established between each of these layers to ensure the privacy of the users' information. We implemented security certificates to allow data transfer between the layers of the proposed cloud/edge-enabled IoT model. Not only does the proposed system model eliminate possible security vulnerabilities, it can also be used along with the best security techniques to counter the cybersecurity threats facing each one of the layers: cloud, edge, and IoT.
APA, Harvard, Vancouver, ISO, and other styles
36

Vellaichamy, Jeevanantham, Shakila Basheer, Prabin Selvestar Mercy Bai, Mudassir Khan, Sandeep Kumar Mathivanan, Prabhu Jayagopal, and Jyothi Chinna Babu. "Wireless Sensor Networks Based on Multi-Criteria Clustering and Optimal Bio-Inspired Algorithm for Energy-Efficient Routing." Applied Sciences 13, no. 5 (February 22, 2023): 2801. http://dx.doi.org/10.3390/app13052801.

Full text
Abstract:
Wireless sensor networks (WSNs) are used for recording information from the physical surroundings and transmitting the gathered records to a principal location via extensively distributed sensor nodes. The proliferation of sensor devices and advances in size, deployment costs, and user-friendly interfaces have spawned numerous WSN applications. The WSN should use a routing protocol to send information to the sink over a low-cost link. One of the most vital problems is the restricted energy of the sensing elements and, therefore, the high energy consumed over time. Energy-efficient routing may increase the lifetime by consuming less energy. Taking this into consideration, this paper provides a multi-criteria clustering and optimal bio-inspired routing algorithm to reinforce network lifetime, increase the operational time of WSN-based applications and form robust clusters. Clustering is a good methodology of information aggregation that increases the lifetime through group formation. Multi-criteria clustering is used to select the optimal cluster head (CH). After proper selection of the CH, moth flame and salp swarm optimization algorithms are combined to determine the optimal route for transmitting information from the CH to the sink and improve the stability of the network. The proposed method is analyzed and contrasted with previous techniques on parameters such as energy consumption, throughput, end-to-end delay, latency, lifetime, and packet delivery rate. Energy consumption is reduced by up to 18.6% and network lifetime is extended by up to 6% compared to other routing protocols.
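The abstract does not list the clustering criteria, so the following is only a minimal sketch of weighted multi-criteria cluster-head selection; the criteria (residual energy, distance to sink, node degree), the weights, and the field names are hypothetical, and the bio-inspired route optimization step is omitted.

```python
# Hypothetical multi-criteria CH score: normalize each criterion to
# [0, 1] and combine with illustrative weights.
def select_cluster_head(nodes, w_energy=0.5, w_dist=0.3, w_degree=0.2):
    max_dist = max(n["dist_to_sink"] for n in nodes)
    max_deg = max(n["degree"] for n in nodes)

    def score(n):
        # Higher residual energy, a shorter distance to the sink, and
        # a higher node degree all make a better CH candidate.
        return (w_energy * n["energy"]
                + w_dist * (1 - n["dist_to_sink"] / max_dist)
                + w_degree * n["degree"] / max_deg)

    return max(nodes, key=score)

nodes = [
    {"id": 1, "energy": 0.9, "dist_to_sink": 40.0, "degree": 5},
    {"id": 2, "energy": 0.5, "dist_to_sink": 10.0, "degree": 8},
    {"id": 3, "energy": 0.8, "dist_to_sink": 60.0, "degree": 3},
]
print(select_cluster_head(nodes)["id"])  # 2: closest to sink, best connected
```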
APA, Harvard, Vancouver, ISO, and other styles
37

Caviglione, Luca. "Trends and Challenges in Network Covert Channels Countermeasures." Applied Sciences 11, no. 4 (February 11, 2021): 1641. http://dx.doi.org/10.3390/app11041641.

Full text
Abstract:
Network covert channels are increasingly used to endow malware with stealthy behaviors, for instance to exfiltrate data or to orchestrate nodes of a botnet in a cloaked manner. Unfortunately, the detection of such attacks is difficult, as network covert channels are often characterized by low data rates and defenders do not know in advance where the secret information has been hidden. Moreover, neutralization or mitigation are hard tasks, as they must not disrupt legitimate flows or degrade the quality perceived by users. As a consequence, countermeasures are tightly coupled to specific channel architectures, leading to poorly generalizable and often scarcely scalable approaches. In this perspective, this paper investigates trends and challenges in the development of countermeasures against the most popular network covert channels. To this aim, we reviewed the relevant literature by considering approaches that can be effectively deployed to detect general injection mechanisms or threats observed in the wild. Emphasis has been put on highlighting trajectories that should be considered when engineering mitigation techniques or planning research to face the increasing wave of information-hiding-capable malware. Results indicate that many works are extremely specialized and that an effective strategy for taming security risks caused by network covert channels may benefit from high-level and general approaches. Moreover, mechanisms to prevent the exploitation of ambiguities should already be considered in the early design phases of both protocols and services.
APA, Harvard, Vancouver, ISO, and other styles
38

Nosheen, Irum, Shoab A. Khan, and Umar Ali. "A Cross-Layer Design for a Multihop, Self-Healing, and Self-Forming Tactical Network." Wireless Communications and Mobile Computing 2019 (April 9, 2019): 1–16. http://dx.doi.org/10.1155/2019/1523906.

Full text
Abstract:
In mission- and time-critical applications, bandwidth and delay optimizations are the key goals of communication systems. This paper presents a cross-layer framework design that reduces the call setup time, provides collision-free communication, and reuses the empty slots of the Time Division Multiple Access (TDMA) protocol which otherwise cause low throughput and large delay. As the number of communicating nodes in tactical networks is small compared to commercial mobile ad hoc networks (MANETs), classical TDMA will yield a huge number of empty slots, and any Carrier Sense Multiple Access/Collision Detection (CSMA/CD) technique may cause more delay in some critical scenarios. The proposed methodology gives a cross-layer architecture for the Network (NET) Layer and Medium Access Control (MAC) Layer. Our design provides bandwidth-efficient, collision-free communication to Software-Defined Radios (SDRs) in self-forming and self-healing tactical networks with low call setup time and multihop routing. For this purpose, TDMA is used as the MAC layer protocol and Ad Hoc On-Demand Distance Vector (AODV) as the network layer routing protocol. Our slot allocation (SA) algorithm, Cross-Layer TDMA (CL-TDMA), consists of a control phase where AODV control packets are exchanged and a data transfer phase where transmission of data and voice occurs. All active radios in the vicinity gather information about communicating nodes based on the exchange of control packets by SDRs. Our algorithm then uses this information to help all active SDRs find slot(s) that will be used for collision-free transmission. A number of experiments are performed to establish the improved performance of the proposed technique compared to other established techniques and protocols.
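The empty-slot reuse idea can be sketched as follows: each radio records which TDMA slots its neighbours are heard using (from overheard control packets), and a joining node claims the first slot free in its neighbourhood. The frame size and data layout below are assumptions for illustration, not the paper's exact design.

```python
# Assumed frame size; CL-TDMA's real frame structure is not given in
# the abstract.
FRAME_SLOTS = 8

def free_slots(overheard):
    """overheard: mapping neighbour id -> set of slots it transmits in."""
    used = set().union(*overheard.values()) if overheard else set()
    return [s for s in range(FRAME_SLOTS) if s not in used]

def allocate_slot(overheard):
    """Claim the first slot no neighbour uses (collision-free locally)."""
    free = free_slots(overheard)
    if not free:
        raise RuntimeError("no collision-free slot available")
    return free[0]

neighbours = {"A": {0, 1}, "B": {3}, "C": {1, 5}}
print(allocate_slot(neighbours))  # 2: first slot unused by any neighbour
```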
APA, Harvard, Vancouver, ISO, and other styles
39

Tasnim, Nusrat, Mohammad Khairul Islam, and Joong-Hwan Baek. "Deep Learning Based Human Activity Recognition Using Spatio-Temporal Image Formation of Skeleton Joints." Applied Sciences 11, no. 6 (March 17, 2021): 2675. http://dx.doi.org/10.3390/app11062675.

Full text
Abstract:
Human activity recognition has become a significant research trend in the fields of computer vision, image processing, and human–machine or human–object interaction due to cost-effectiveness, time management, rehabilitation, and the pandemic of diseases. Over the past years, several methods have been published for human action recognition using RGB (red, green, and blue), depth, and skeleton datasets. Most of the methods introduced for action classification using skeleton datasets are constrained in some respects, including feature representation, complexity, and performance. However, providing an effective and efficient method for human action discrimination using a 3D skeleton dataset remains a challenging problem. There is a lot of room to map the 3D skeleton joint coordinates into spatio-temporal formats to reduce the complexity of the system, to provide a more accurate system to recognize human behaviors, and to improve the overall performance. In this paper, we suggest a spatio-temporal image formation (STIF) technique for 3D skeleton joints that captures spatial information and temporal changes for action discrimination. We apply transfer learning (the pretrained models MobileNetV2, DenseNet121, and ResNet18, trained on the ImageNet dataset) to extract discriminative features and evaluate the proposed method with several fusion techniques. We mainly investigate the effect of three fusion methods, element-wise average, multiplication, and maximization, on the performance variation in human action recognition. Our deep learning-based method outperforms prior works using UTD-MHAD (University of Texas at Dallas multi-modal human action dataset) and MSR-Action3D (Microsoft action 3D), publicly available benchmark 3D skeleton datasets, with STIF representation. We attain accuracies of approximately 98.93%, 99.65%, and 98.80% for UTD-MHAD and 96.00%, 98.75%, and 97.08% for MSR-Action3D skeleton datasets using MobileNetV2, DenseNet121, and ResNet18, respectively.
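The core idea of mapping a skeleton sequence to an image can be sketched in a few lines: rows index joints, columns index frames, and the three channels hold normalized x, y, z coordinates. The paper's actual STIF encoding is richer; this only illustrates the spatio-temporal layout idea.

```python
# Sketch only: the real STIF method normalizes and colors joint
# trajectories differently; this shows the joints-x-frames layout.
def skeleton_to_image(frames):
    """frames: list of frames, each a list of (x, y, z) joint tuples.
    Returns image[j][t] = (r, g, b) pixel for joint j at time t."""
    coords = [c for frame in frames for joint in frame for c in joint]
    lo, hi = min(coords), max(coords)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    n_joints = len(frames[0])
    return [[tuple(int((frames[t][j][k] - lo) * scale) for k in range(3))
             for t in range(len(frames))]
            for j in range(n_joints)]

seq = [[(0.0, 0.5, 1.0), (1.0, 1.5, 2.0)],   # frame 0, two joints
       [(0.5, 1.0, 1.5), (1.5, 2.0, 2.5)]]   # frame 1
img = skeleton_to_image(seq)
print(len(img), len(img[0]))  # joints x frames -> 2 2
```

The resulting fixed-size image is what a pretrained CNN such as MobileNetV2 can then consume for transfer learning.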
APA, Harvard, Vancouver, ISO, and other styles
40

Jabbar, Mohanad Sameer, and Samer Saeed Issa. "Developed cluster-based load-balanced protocol for wireless sensor networks based on energy-efficient clustering." Bulletin of Electrical Engineering and Informatics 12, no. 1 (February 1, 2023): 196–206. http://dx.doi.org/10.11591/eei.v12i1.4226.

Full text
Abstract:
One of the most pressing issues in wireless sensor networks (WSNs) is energy efficiency. Sensor nodes (SNs) are used by WSNs to gather and send data. Cluster-based hierarchical routing techniques are widely considered for lowering a WSN's energy consumption, because SNs are battery-powered, face significant energy constraints, and make designing an energy-efficient protocol difficult. Clustering algorithms drastically reduce each SN's energy consumption. The low-energy adaptive clustering hierarchy (LEACH) is considered a promising application-specific protocol architecture for WSNs. To extend the network's lifetime, the SNs must save as much energy as feasible. The proposed developed cluster-based load-balanced protocol (DCLP) accounts for the optimal number of cluster heads (CHs) and prevents nodes nearer to the base station (BS) from joining clusters, in order to achieve sufficient performance in reducing the sensors' consumed energy. The protocol is analyzed in MATLAB and compared to LEACH, a well-known cluster-based protocol, and its modified variant, distributed energy-efficient clustering (DEEC). The simulation results demonstrate that network performance, energy usage, and network longevity have all improved significantly. They also demonstrate that employing cluster-based routing protocols may successfully reduce sensor network energy consumption while increasing the quantity of network data transfer, hence achieving the goal of extending network lifetime.
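For context, DCLP-style protocols build on LEACH's rotating cluster-head election, in which each node elects itself CH in round r with the classic threshold probability T(n) = p / (1 - p * (r mod 1/p)), so that every node serves as CH once per 1/p rounds. A direct transcription:

```python
# Classic LEACH cluster-head rotation threshold (standard textbook
# formula, not DCLP's modified election rule).
def leach_threshold(p, r, was_ch_recently):
    """p: desired fraction of CHs; r: current round number;
    was_ch_recently: True if the node already served as CH in the
    current rotation epoch (such nodes are excluded)."""
    if was_ch_recently:
        return 0.0
    return p / (1 - p * (r % round(1 / p)))

print(round(leach_threshold(0.1, 0, False), 3))  # 0.1 at epoch start
print(round(leach_threshold(0.1, 9, False), 3))  # 1.0 at epoch end
```

The threshold grows over the epoch so that remaining eligible nodes are certain to become CH by its last round, which is what balances the load.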
APA, Harvard, Vancouver, ISO, and other styles
41

Singal, TL, and Rajvir Singh. "EVOLVING FRAMEWORK FOR IOT-BASED ENERGY EFFICIENT WIRELESS SENSOR NETWORKS FOR HEALTHCARE SERVICES." International Journal of Engineering Applied Sciences and Technology 7, no. 3 (July 1, 2022): 296–305. http://dx.doi.org/10.33564/ijeast.2022.v07i03.042.

Full text
Abstract:
The Internet-of-Things (IoT) is a system of interrelated computing devices with unique identifiers that enables data transfer over a network without requiring human-to-computer or human-to-human interaction. With the rapid growth in embedded wireless computing devices with high-speed internet connectivity, Wireless Sensor Networks (WSNs) for healthcare applications comprise several interconnected tiny, low-powered, wearable, wireless bio-sensors that provide an effective way of collecting vital health-related data. The emerging paradigm of IoT in smart healthcare systems requires a specialized secure framework in order to enable real-time health monitoring, reliable diagnostics, effective treatment processes, and many other related aspects of the healthcare system. In this paper, various aspects of energy consumption in an IoT-based framework of WSNs for healthcare services are discussed. The framework for an IoT-based healthcare network caters to upgraded microcontrollers, IoT gateway devices, various wireless and web technologies for IoT, a variety of bio-sensors and data collectors, and secure communication protocols. Techniques to optimize the energy consumption in WSNs are presented, leading to an energy-efficient framework for WSNs suitable for healthcare systems.
APA, Harvard, Vancouver, ISO, and other styles
42

Saha, Soumyadeep, Manoj Sachdev, and Sushanta K. Mitra. "Recent advances in label-free optical, electrochemical, and electronic biosensors for glioma biomarkers." Biomicrofluidics 17, no. 1 (January 2023): 011502. http://dx.doi.org/10.1063/5.0135525.

Full text
Abstract:
Gliomas are the most commonly occurring primary brain tumor, with poor prognosis and a high mortality rate. Currently, the diagnostic and monitoring options for glioma mainly revolve around imaging techniques, which often provide limited information and require supervisory expertise. Liquid biopsy is a great alternative or complementary monitoring protocol that can be implemented along with other standard diagnosis protocols. However, standard detection schemes for sampling and monitoring biomarkers in different biological fluids lack the necessary sensitivity and ability for real-time analysis. Lately, biosensor-based diagnostic and monitoring technology has attracted significant attention due to several advantageous features, including high sensitivity and specificity, high-throughput analysis, minimal invasiveness, and multiplexing ability. In this review article, we have focused our attention on glioma and presented a literature survey summarizing the diagnostic, prognostic, and predictive biomarkers associated with glioma. Further, we discuss the different biosensory approaches reported to date for the detection of specific glioma biomarkers. Current biosensors demonstrate high sensitivity and specificity, which can be used for point-of-care devices or liquid biopsies. However, for real clinical applications, these biosensors lack high-throughput and multiplexed analysis, which can be achieved via integration with microfluidic systems. We share our perspective on the current state of the art of the biosensor-based diagnostic and monitoring technologies reported and on future research scope. To the best of our knowledge, this is the first review focusing on biosensors for glioma detection, and it is anticipated that the review will offer a new pathway for the development of such biosensors and related diagnostic platforms.
APA, Harvard, Vancouver, ISO, and other styles
43

N Ahmed, Muzammil, Wemdy Zhou, Miriam Strini, and Pavithra Pathirathna. "Electrochemical Detection of Cd(II) in Environmental Samples Using Nano-Ities." ECS Meeting Abstracts MA2022-02, no. 58 (October 9, 2022): 2196. http://dx.doi.org/10.1149/ma2022-02582196mtgabs.

Full text
Abstract:
Heavy metals such as cadmium, lead, and arsenic have caused global health concerns that continue to rise yearly, leading to several harmful diseases that primarily target the kidneys and liver, among other vital organs. Because of these well-known harmful effects, regulatory authorities have prescribed “permissible” levels of these metals in food, drinking water, and other possible sources to limit human exposure. However, increasing industrialization and inefficient recycling systems for industrially produced metal wastes have led to an increase in heavy metal exposure to humans. These toxins also bioaccumulate across the food chain, leaving us vulnerable to their harmful effects. Hence it is of great interest to develop systems that can detect metals efficiently to help set up effective metal mitigation protocols. Most conventional metal detecting tools, although highly accurate, require extensive sample pre-treatment steps which alter the speciation of the metal, a critical parameter for determining its toxicity. As(III) is known to be more toxic than As(V), and as such, separate medical remedies target the particular metal species. However, a conventional technique such as ICPMS is unable to differentiate between them. Furthermore, these conventional techniques require equipment that is bulky, expensive, and not user-friendly, and they limit real-time monitoring. Consequently, the development of a low-cost, portable, and robust sensor capable of providing accurate information on metal speciation will significantly aid in establishing metal mitigation systems efficiently. Low cost and user-friendliness will ensure that the sensor is within the economic and technological reach of most of the population, and the portability of the sensor will enable testing in areas that are hard to access via a stationary lab.
Such attributes coupled with accurate information on metal speciation will make an ideal metal sensor that will significantly aid the fight against heavy metal exposure. This study uses ion transfer between two immiscible electrolyte solutions (ITIES) to develop a Cd(II) sensor. Electrochemistry at ITIES is less complicated than other electrochemical techniques, as it is based on the transfer of ions and does not involve redox reactions, making it more attractive. Our electrode is a borosilicate glass electrode that is pulled using a carbon dioxide laser puller, with an inner radius of ~300 nm. The nano-scale interface of our sensor follows a hemispherical diffusion regime which allows us to have a high mass transfer rate, which is essential for fast kinetic measurements. The nano-interface can also withstand various complex matrices consistently, making it ideal for field applications. An ionophore, 1-10 phenanthroline, was used to facilitate the Cd(II) transfer across the nano-interface. The sensor was calibrated in various matrices such as potassium chloride and artificial seawater to show its capability to withstand complex matrices without fouling. Stability and selectivity tests were done to showcase the sensor’s performance. It can also successfully detect Cd(II) when it is present in a complex form with strong ligands such as EDTA and NTA. Our sensor’s analytical performance passed the ultimate test when we were able to accurately detect the dissolved Cd(II) ion concentrations in a water sample collected from the Indian River Lagoon in Melbourne, FL. The results from this test were in close agreement with the results reported by another research group that used ICPMS to quantify the amount of Cd(II) dissolved in the same environmental sample. However, ITIES does not require any sample pretreatment steps, which are required for ICPMS. Thus, this study shows great promise for the development of an ideal electrochemical metal sensor for environmental samples.
To the best of our knowledge, this is the first time a nanometer-scale glass electrode with ITIES to detect Cd(II) ions in complex matrices is being reported. Future studies will focus on the detection of metals in urine and blood samples and develop this into a portable point of care device.
APA, Harvard, Vancouver, ISO, and other styles
44

Businge, Charles Bitamazire, Namhla Madini, Benjamin Longo-Mbenza, and A. P. Kengne. "Insufficient iodine nutrition status and the risk of pre-eclampsia: a protocol for systematic review and meta-analysis." BMJ Open 9, no. 5 (May 2019): e025573. http://dx.doi.org/10.1136/bmjopen-2018-025573.

Full text
Abstract:
Introduction Pre-eclampsia is one of the leading causes of maternal and perinatal morbidity and mortality worldwide. Although subclinical hypothyroidism (SCH) in pregnancy is one of the established risk factors for pre-eclampsia, the link between iodine deficiency, the main cause of hypothyroidism, and pre-eclampsia remains uncertain. About two billion people live in areas with iodine insufficiency. The increased renal blood flow during pregnancy leading to increased renal iodine clearance, together with the increased placental transfer of iodine to the fetus, leads to further iodine deficiency in pregnancy. Iodine is one of the most potent exogenous antioxidants, whose deficiency is associated with oxidant imbalance and endothelial dysfunction, one of the mechanisms associated with increased risk of pre-eclampsia. Methods and analysis A systematic search of published literature will be conducted for case–control studies that directly determined the iodine nutrition status of women with pre-eclampsia and appropriate normotensive controls. A similar search will be conducted for cohort studies in which the incidence of pre-eclampsia among pregnant women with adequate and inadequate iodine nutrition status was reported. Databases including MEDLINE, EMBASE, Google Scholar, SCOPUS and Africa Wide Information will be searched up to 31 December 2018. Screening of identified articles and data extraction will be conducted independently by two investigators. Risk of bias of the included studies will be assessed using a Newcastle-Ottawa Scale. Appropriate meta-analytic techniques will be used to pool prevalence and incidence rates, odds and relative risk of pre-eclampsia from studies with similar features, overall and by geographical regions. Heterogeneity of the estimates across studies will be assessed and quantified and publication bias investigated.
This protocol is reported according to Preferred Reporting Items for Systematic Reviews and Meta-Analysis protocols (PRISMA-P) 2015 guidelines. Ethics and dissemination Since the proposed study will use published data, there is no requirement for ethical approval. This review seeks to identify the risk of pre-eclampsia associated with insufficient iodine nutrition in pregnancy. This will help to ascertain whether insufficient iodine intake may be an independent risk factor for pre-eclampsia. This will advise policy makers on the possibility of maximising iodine nutrition in pregnancy and reproductive age as one of the remedies for prevention of pre-eclampsia among populations at risk of inadequate iodine intake. This review is part of the thesis that will be submitted for the award of a PhD in Medicine to the Faculty of Health Sciences of the University of Cape Town. In addition, the results will be published in a peer-reviewed journal. PROSPERO registration number CRD42018099427.
APA, Harvard, Vancouver, ISO, and other styles
45

Vázquez, J., E. Lacarra, J. Morán, M. A. Sánchez, A. González, and J. Bruzual. "EDAS (EGNOS Data Access Service) Differential GNSS Corrections: A Reliable Free-of-Charge Alternative for Precision Farming in Europe." Annual of Navigation 26, no. 1 (December 1, 2019): 46–58. http://dx.doi.org/10.1515/aon-2019-0005.

Full text
Abstract:
EDAS (EGNOS Data Access Service) is the EGNOS internet broadcast service, which provides free of charge access to the data collected and generated by the EGNOS infrastructure. EDAS disseminates over the Internet, both in real time and via an FTP archive, the raw data of the GPS, GLONASS (no commitment on GLONASS data is provided (1)) and EGNOS GEO satellites collected by the receivers located at the EGNOS reference stations, which are mainly distributed over Europe and North Africa. The EDAS services offer several types of GNSS data in various protocols and formats, such as DGNSS corrections. This paper reports on the results of some in-field tests conducted by ESSP and Topcon Agriculture to confirm the suitability of EDAS DGNSS corrections for precision farming in Europe. The European Commission (EC) is the owner of EGNOS system (including EDAS) and has delegated the exploitation of EGNOS to the European GNSS Agency (GSA). EDAS service provision is performed by ESSP, as EGNOS Services Provider, under contract with the GSA, the EGNOS program manager. In the ENC 2018 article “EDAS (EGNOS Data Access Service): Differential GPS corrections performance test with state-of-the-art precision agriculture system”, ESSP and Topcon Agriculture presented the results of the first in-field test conducted in a dynamic and real-life environment in the summer of 2017. The test results indicated that the EDAS DGNSS corrections could enable a reliable pass-to-pass accuracy performance for a wide range of precision agriculture applications and become an attractive solution for cereal farms, when the farm is located in the vicinity of an EGNOS reference station. In particular, Topcon Agriculture acknowledged that the observed performance was sufficient to support the following precision agriculture applications: spraying and spreading of any crop type, tilling and harvesting of cereal.
Then, ESSP and Topcon Agriculture engaged in additional testing activities to further characterise the EDAS DGPS performance in different scenarios (i.e. at various European locations and with a variety of distances between the designated farm and the target EGNOS reference station). In each test, multiple runs with the rover tractors have been performed over the reference patterns predefined in the Topcon guidance systems. Data recorded during the tests has been analysed in detail, looking at the key performance indicators (e.g. cross track error and pass-to-pass performance) that characterize the EDAS DGPS performance for precision agriculture applications. Different techniques for the computation of the pass-to-pass accuracy performance have been used, including a procedure to measure live in the field and a post-processing alternative. The diversity of scenarios available allows drawing conclusions on the applicability of EDAS DGPS corrections (in terms of maximum distance from the target EGNOS station) for precision agriculture and also understanding the impact of operationally relevant aspects such as the quality of the mobile internet coverage (highly variable across Europe). The EDAS system and its architecture, the main types of data disseminated through EDAS services and the online information available to the EDAS users are introduced in this paper. In particular, the EDAS Ntrip service is described in detail, since it provides the differential corrections to the GPS and GLONASS satellites at the EGNOS reference stations in RTCM format, which are the basis for the present study. The article also reports on the results of the latest tests, which have been performed using Topcon receivers, vehicles and auto-steering systems. In all cases, two different Topcon guidance systems on board tractors were running simultaneously to assess the EDAS DGPS positioning performance with respect to the reference provided by a top-performing RTK-based Topcon solution.
The objective of this paper is to draw conclusions on the use of EDAS DGPS corrections as a reliable free-of-charge alternative for precision farming in Europe (especially for cereal farms), based on the available performance results from the testing campaign and the feedback from the involved precision agriculture experts.
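Cross-track error, the key performance indicator named above, has a compact geometric definition: the signed perpendicular distance from the vehicle to the planned pass line. A minimal flat-field sketch (2D metric coordinates assumed, not the geodetic computation a real guidance system would use):

```python
import math

def cross_track_error(p, a, b):
    """Signed perpendicular distance from point p to the line through
    a and b (2D cross product divided by the segment length)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    return ((px - ax) * dy - (py - ay) * dx) / math.hypot(dx, dy)

# Tractor 0.25 m left of a straight south-to-north pass line.
print(cross_track_error((-0.25, 50.0), (0.0, 0.0), (0.0, 100.0)))  # -0.25
```

Pass-to-pass accuracy can then be characterized statistically from the cross-track errors of consecutive runs over the same reference pattern.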
APA, Harvard, Vancouver, ISO, and other styles
46

Zhang, Cecilia, Martin Schwartz, Thomas Küstner, Petros Martirosian, and Ferdinand Seith. "Multiparametric Functional MRI of the Kidney: Current State and Future Trends with Deep Learning Approaches." RöFo - Fortschritte auf dem Gebiet der Röntgenstrahlen und der bildgebenden Verfahren, March 10, 2022. http://dx.doi.org/10.1055/a-1775-8633.

Full text
Abstract:
Background Until today, assessment of renal function has remained a challenge for modern medicine. In many cases, kidney diseases accompanied by a decrease in renal function remain undetected and unsolved, since neither laboratory tests nor imaging diagnostics provide adequate information on kidney status. In recent years, developments in the field of functional magnetic resonance imaging with application to abdominal organs have opened new possibilities combining anatomic imaging with multiparametric functional information. The multiparametric approach enables the measurement of perfusion, diffusion, oxygenation, and tissue characterization in one examination, thus providing more comprehensive insight into pathophysiological processes of diseases as well as effects of therapeutic interventions. However, application of multiparametric fMRI in the kidneys is still restricted mainly to research areas, and transfer to the clinical routine is still outstanding. One of the major challenges is the lack of a standardized protocol for acquisition and postprocessing, including efficient strategies for data analysis. This article provides an overview of the most common fMRI techniques with application to the kidney, together with new approaches regarding data analysis with deep learning. Methods This article is based on a selective literature review using the literature database PubMed in May 2021, supplemented by our own experience in this field. Results and Conclusion Functional multiparametric MRI is a promising technique for assessing renal function in a more comprehensive approach by combining multiple parameters such as perfusion, diffusion, and BOLD imaging. New approaches with the application of deep learning techniques could substantially contribute to overcoming the challenge of handling the quantity of data and developing more efficient data postprocessing and analysis protocols.
Thus, it can be hoped that multiparametric fMRI protocols can be sufficiently optimized to be used for routine renal examination and to assist clinicians in the diagnostics, monitoring, and treatment of kidney diseases in the future. Key Points: Citation Format
APA, Harvard, Vancouver, ISO, and other styles
47

Qasim, Alaa Jabbar, Roshidi Din, and Farah Qasim Ahmed Alyousuf. "Review on techniques and file formats of image compression." Bulletin of Electrical Engineering and Informatics 9, no. 2 (April 1, 2020). http://dx.doi.org/10.11591/eei.v9i2.2085.

Full text
Abstract:
This paper presents a review of compression techniques in digital image processing, along with a brief description of the main technologies and traditional formats commonly used in image compression. Image compression can be defined as a set of techniques applied to images so that they can be stored or transferred efficiently. In addition, this paper presents formats used to reduce redundant information in an image, namely unnecessary pixels and non-visual redundancy. The paper concludes that image compression is a critical issue in digital image processing because it allows image data to be stored or transmitted efficiently.
APA, Harvard, Vancouver, ISO, and other styles
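The redundancy-reduction idea surveyed in this abstract can be illustrated with run-length encoding, one of the simplest lossless techniques (a minimal sketch for illustration only, not code from the paper itself):

```python
def rle_encode(pixels):
    """Collapse runs of identical pixel values into (value, count) pairs."""
    if not pixels:
        return []
    encoded = []
    current, count = pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            encoded.append((current, count))
            current, count = p, 1
    encoded.append((current, count))
    return encoded


def rle_decode(pairs):
    """Invert rle_encode: expand (value, count) pairs back to pixel values."""
    return [value for value, count in pairs for _ in range(count)]


row = [255, 255, 255, 0, 0, 17, 17, 17, 17]
packed = rle_encode(row)
assert packed == [(255, 3), (0, 2), (17, 4)]  # 9 pixels stored as 3 runs
assert rle_decode(packed) == row              # lossless round trip
```

Real formats combine many such steps (transforms, quantization, entropy coding), but the principle is the same: exploit redundancy so fewer bytes represent the same image.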
48

Anacleto, Ricardo, Lino Figueiredo, Ana Almeida, and Paulo Novais. "Transfering Data from a Server to an Android Mobile Application: A Case Study." Jurnal Teknologi 63, no. 3 (July 15, 2013). http://dx.doi.org/10.11113/jt.v63.1959.

Full text
Abstract:
Nowadays, due to the remarkable growth of the mobile device market, we must consider mobile device limitations when implementing client-server applications. In this paper we discuss what the most reliable and fastest way to exchange information between a server and an Android mobile application can be. This is an important issue because a responsive application makes the user experience more enjoyable. We present a study that tests and evaluates two data transfer protocols, socket and HTTP, and three data serialization formats (XML, JSON, and Protocol Buffers) across different environments and mobile devices to determine which is the most practical and fastest to use.
APA, Harvard, Vancouver, ISO, and other styles
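The size gap between the serialization formats compared in this study can be sketched with a hypothetical payload (the record fields below are invented for illustration; a hand-rolled `struct` encoding stands in for schema-based binary formats such as Protocol Buffers, which work on the same principle):

```python
import json
import struct

# Hypothetical payload a server might push to a mobile client.
record = {"id": 12345, "lat": 41.1579, "lon": -8.6291, "battery": 87}

# Text formats repeat field names and punctuation in every message.
as_json = json.dumps(record).encode("utf-8")
as_xml = (
    "<r><id>12345</id><lat>41.1579</lat>"
    "<lon>-8.6291</lon><battery>87</battery></r>"
).encode("utf-8")

# A binary schema encodes only the values: int32 + 2 doubles + 1 byte.
as_binary = struct.pack("<iddB", record["id"], record["lat"],
                        record["lon"], record["battery"])

assert len(as_binary) == 21
assert len(as_binary) < len(as_json) < len(as_xml)
```

Smaller payloads matter on mobile links, which is one reason binary formats tend to win the kind of comparison this paper performs; parsing cost on the device is the other axis it measures.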
49

Yadav, Vijay Kumar, Nitish Andola, Shekhar Verma, and S. Venkatesan. "A Survey of Oblivious Transfer Protocol." ACM Computing Surveys, January 5, 2022. http://dx.doi.org/10.1145/3503045.

Full text
Abstract:
Oblivious transfer (OT) protocol is an essential tool in cryptography that supports a wide range of applications such as secure multi-party computation, private information retrieval, private set intersection, contract signing, and privacy-preserving location-based services. The OT protocol has different variants such as one-out-of-2, one-out-of-n, k-out-of-n, and OT extension. In the one-out-of-2, one-out-of-n, and OT extension protocols, the sender has a set of messages, whereas the receiver has a key. The receiver sends that key to the sender in a secure way, such that the sender cannot learn anything about the received key. The sender encrypts every message using the received key and sends all the encrypted messages to the receiver, who is able to extract only the required message using his key. In the k-out-of-n OT protocol, the receiver sends a set of k keys to the sender, and in reply the sender sends all the encrypted messages. The receiver uses his keys to extract the required messages but cannot gain any information about the messages he has not requested. Generally, the OT protocol requires high communication and computation costs when millions of oblivious messages are transferred. The OT extension protocol provides a solution for this: the receiver transfers a set of keys to the sender by executing a small number of base OT protocols, and the sender then encrypts all the messages using cheap symmetric-key cryptography with the help of the received set of keys and transfers millions of oblivious messages to the receiver. In this work, we present the different variants of OT protocols (one-out-of-2, one-out-of-n, k-out-of-n, and OT extension) and cover various aspects of their theoretical security guarantees, such as semi-honest and malicious adversaries and universal composability, as well as the techniques used and their computation and communication efficiency. From the analysis, we found that semi-honest-adversary-based OT protocols require lower communication and computation costs than malicious-adversary-based OT protocols.
APA, Harvard, Vancouver, ISO, and other styles
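The one-out-of-2 flow described in this abstract can be sketched with a Diffie-Hellman-style construction (a toy illustration in the spirit of the DH-based OT protocols the survey covers, not any specific protocol from it, and not secure as written: there is no authentication and the hash-based key derivation is simplistic):

```python
import hashlib
import secrets

# RFC 3526 group 14 prime, generator 2 (standard MODP group parameters).
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF05"
    "98DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB"
    "9ED529077096966D670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF", 16)
G = 2

def key_from(element):
    """Derive a 32-byte one-time pad from a group element."""
    return hashlib.sha256(str(element).encode()).digest()

def xor(data, pad):
    return bytes(a ^ b for a, b in zip(data, pad))

# Sender holds two messages; receiver wants exactly one, without
# revealing which one, and must learn nothing about the other.
m0, m1 = b"message zero....", b"message one....."

a = secrets.randbelow(P - 2) + 1
A = pow(G, a, P)                                   # sender -> receiver

choice = 1                                         # receiver's secret bit
b = secrets.randbelow(P - 2) + 1
B = pow(G, b, P) if choice == 0 else (A * pow(G, b, P)) % P  # receiver -> sender

k0 = key_from(pow(B, a, P))                        # sender's key for m0
k1 = key_from(pow((B * pow(A, -1, P)) % P, a, P))  # sender's key for m1
c0, c1 = xor(m0, k0), xor(m1, k1)                  # sender -> receiver

k = key_from(pow(A, b, P))                         # receiver derives ONE key
recovered = xor(c1 if choice == 1 else c0, k)
assert recovered == m1
```

The sender cannot tell whether B hides choice 0 or 1, and the receiver's single key k decrypts only the chosen ciphertext; the other key would require solving a discrete-log-type problem.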
50

Kemp, Cliff, Chad Calvert, Taghi M. Khoshgoftaar, and Joffrey L. Leevy. "An approach to application-layer DoS detection." Journal of Big Data 10, no. 1 (February 13, 2023). http://dx.doi.org/10.1186/s40537-023-00699-3.

Full text
Abstract:
With the massive resources and strategies accessible to attackers, countering Denial of Service (DoS) attacks is getting increasingly difficult, and application-layer DoS is one of these techniques. Due to these challenges, network security has become harder to ensure. Hypertext Transfer Protocol (HTTP), Domain Name Service (DNS), Simple Mail Transfer Protocol (SMTP), and other application protocols have seen increased attacks over the past several years. Application-layer attacks commonly concentrate on these protocols because attackers can exploit some of their weaknesses. Flood and “low and slow” attacks are examples of application-layer attacks; they target weaknesses in HTTP, the most extensively used application-layer protocol on the Internet. Our experiment proposes a generalized detection approach that identifies features for application-layer DoS attacks without being specific to a single slow DoS attack. We combine four application-layer DoS attack datasets: Slow Read, HTTP POST, Slowloris, and Apache Range Header. We apply a feature-scaling technique, a normalization filter, to the combined dataset, followed by a feature-extraction technique, Principal Component Analysis (PCA), to reduce dimensionality. We examine ways to enhance machine learning techniques for detecting slow application-layer DoS attacks using these methodologies. Our findings show that the machine learners effectively identify multiple slow DoS attacks, and the experiment shows that the classifiers are good predictors when combined with our selected Netflow characteristics and feature selection techniques.
APA, Harvard, Vancouver, ISO, and other styles
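The preprocessing pipeline this abstract describes, normalization followed by PCA, can be sketched on synthetic data (the feature matrix below is invented; the actual study works on Netflow records from the four combined slow-DoS datasets):

```python
import numpy as np

rng = np.random.default_rng(0)
flows = rng.normal(size=(200, 6))                   # 200 flows, 6 features
flows[:, 1] = 3.0 * flows[:, 0] + 0.1 * flows[:, 1]  # redundant feature

# Normalization filter: min-max scale every feature into [0, 1].
lo, hi = flows.min(axis=0), flows.max(axis=0)
scaled = (flows - lo) / (hi - lo)

# PCA via eigendecomposition of the covariance matrix.
centered = scaled - scaled.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
order = np.argsort(eigvals)[::-1]                   # sort descending
components = eigvecs[:, order[:3]]                  # keep top 3 components
reduced = centered @ components

assert reduced.shape == (200, 3)
# The kept components absorb most of the variance, including the
# direction shared by the two correlated features.
assert eigvals[order[:3]].sum() / eigvals.sum() > 0.6
```

Scaling before PCA matters because Netflow features live on very different ranges (byte counts vs. flag counts); without it, the largest-range feature would dominate the components.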