Journal articles on the topic 'Electronic data processing – Distributed processing – Reliability'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Electronic data processing – Distributed processing – Reliability.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Ren, Zhimin. "Data Processing Platform of Cloud Computing and Its Performance Analysis Based on Photoelectric Hybrid Interconnection Architecture." Journal of Nanoelectronics and Optoelectronics 15, no. 6 (June 1, 2020): 743–52. http://dx.doi.org/10.1166/jno.2020.2805.

Abstract:
The data processing platform is the core support platform of cloud computing. An electrical interconnection architecture increases the complexity of the network topology, while an optical interconnection architecture is ideal, so cloud computing platforms based on optical interconnection have become a research hotspot. This work focuses on the distributed optical interconnection architecture of the cloud computing data processing platform. Combining a hybrid mechanism of optical circuit switching and electrical packet switching, it can meet a variety of traffic requirements, and it improves the switching mechanism, communication strategy, and router structure. Moreover, although the hybrid optoelectronic interconnection architecture can improve network delay and throughput, a problem of network consumption remains. Combined with the network characteristics of the cloud computing data processing platform (a wireless mesh structure), the network topology algorithm is studied, and the relationship between the topology and the maximum number of allocable channels is analyzed. Furthermore, an equation for topological reliability calculation is defined, and an optimization model for topology design is proposed, according to which the data processing platform of cloud computing is further optimized under the photoelectric hybrid interconnection architecture. During the experiments, before topology optimization, varying the message length shows that adding optical circuit switching helps achieve large-capacity transmission and effectively reduces delay. After the topology-optimized structure is adopted, it is compared with the photoelectric hybrid data processing platform of cloud computing without topology optimization.
It is found that under different reliability constraints, the throughput and end-to-end delay of the network are significantly improved, which proves that the data processing platform of cloud computing based on the photoelectric hybrid interconnection architecture is a feasible cloud computing platform.
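The reliability-constrained comparison described in this abstract rests on a standard calculation: if links fail independently, a configuration's reliability is the product of its link reliabilities, and only topologies meeting the constraint are candidates for optimization. A minimal Python sketch (the link reliability values and the independence assumption are illustrative, not taken from the paper):

```python
from functools import reduce

def path_reliability(link_reliabilities):
    """Independent-failure model: overall reliability is the
    product of the individual link reliabilities."""
    return reduce(lambda a, b: a * b, link_reliabilities, 1.0)

def feasible_topologies(candidates, r_min):
    """Keep only candidate topologies (lists of link reliabilities)
    that satisfy the reliability constraint r_min."""
    return [t for t in candidates if path_reliability(t) >= r_min]

candidates = [[0.99, 0.98, 0.99], [0.95, 0.90], [0.999, 0.999]]
feasible = feasible_topologies(candidates, r_min=0.95)
```

Throughput and delay would then be compared only among the feasible set, as the abstract describes.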
2

Akhatov, A. R., and F. M. Nazarov. "METHODS OF IMPLEMENTATION OF BLOCKCHAIN TECHNOLOGIES ON THE BASIS OF CRYPTOGRAPHIC PROTECTION FOR THE DATA PROCESSING SYSTEM WITH CONSTRAINT AND LAGGING INTO ELECTRONIC DOCUMENT MANAGEMEN." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 184 (October 2019): 3–12. http://dx.doi.org/10.14489/vkit.2019.10.pp.003-012.

Abstract:
The problem of application design with constraint and lagging in ED (Electronic Document) management based on blockchain technologies, to ensure a new level of security, reliability, and transparency of data processing, is considered. Increasing the reliability of information in systems with constraint and lagging in the ED management of enterprises and organizations during the collection, transmission, storage, and processing of EDs, based on new, little-studied optimization technologies for processing blockchain-type data, is a relevant and promising research topic. Important advantages of the potential use of transaction blocks built according to certain rules in such systems are: ensuring security by encrypting transactions for subsequent confirmation; the inability to make unauthorized changes, due to the dependence of the current blockchain state on previous transactions; transparency and reliability of procedures, due to public and distributed storage; and the interaction of a large number of users without the use of “trusted intermediaries”. Studies show that when using existing algorithms for adding blocks in any system, it is possible to achieve the requirements of decentralization, openness of the entered data, and the inability to change data once entered into the system. However, mathematical and cryptographic information protection must be developed for each designed system separately. The task of providing and formulating the rules of data reliability control with constraint and lagging in ED circulation, based on cryptographic methods of encrypting the transaction blocks constituting the blockchain, has been formulated. The adopted approaches form a methodology of support for systems with constraint and lagging in electronic documents based on a new database architecture.
3

Brito, Carlos, Leonardo Silva, Gustavo Callou, Tuan Anh Nguyen, Dugki Min, Jae-Woo Lee, and Francisco Airton Silva. "Offloading Data through Unmanned Aerial Vehicles: A Dependability Evaluation." Electronics 10, no. 16 (August 10, 2021): 1916. http://dx.doi.org/10.3390/electronics10161916.

Abstract:
Applications in the Internet of Things (IoT) context continuously generate large amounts of data. The data must be processed and monitored to allow rapid decision making. However, the wireless connection that links such devices to remote servers can lead to data loss. Thus, new forms of connection must be explored to ensure the system's availability and reliability as a whole. Unmanned aerial vehicles (UAVs) are becoming increasingly empowered in terms of processing power and autonomy. UAVs can be used as a bridge between IoT devices and remote servers, such as edge or cloud computing. UAVs can collect data from mobile devices and process them, if possible. If there is no processing power in the UAV, the data are sent and processed on servers at the edge or in the cloud. Data offloading through UAVs is a reality today, but one with many challenges, mainly due to unavailability constraints. This work proposes stochastic Petri net (SPN) models and reliability block diagrams (RBDs) to evaluate a distributed architecture with UAVs, focusing on the system's availability and reliability. Among the various existing methodologies, SPNs provide models that represent complex systems with different characteristics. UAVs are used to route data from IoT devices to the edge or the cloud through a base station. The base station receives data from UAVs and retransmits them to the cloud. The data are processed in the cloud, and the responses are returned to the IoT devices. A sensitivity analysis through Design of Experiments (DoE) showed key points of improvement for the base model, which was then enhanced. A numerical analysis indicated the components with the most significant impact on availability; for example, the cloud proved to be a highly relevant component for the availability of the architecture. The final results demonstrated the effectiveness of the improvements to the base model.
The present work can help system architects develop distributed architectures with more optimized UAVs and low evaluation costs.
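The RBD side of such an evaluation reduces to series/parallel availability algebra: components in series must all be up, while redundant components in parallel fail only together. A small Python sketch of this arithmetic (the component availabilities and the redundant-UAV layout are invented for illustration, not taken from the paper):

```python
def series(avails):
    """Availability of components in series: all must be up."""
    a = 1.0
    for x in avails:
        a *= x
    return a

def parallel(avails):
    """Availability of redundant components: down only if all are down."""
    down = 1.0
    for x in avails:
        down *= (1.0 - x)
    return 1.0 - down

# Hypothetical chain: IoT device -> redundant UAV pair -> base station -> cloud
system_availability = series([0.999, parallel([0.98, 0.98]), 0.995, 0.9999])
```

The sensitivity analysis in the paper then asks which factor of this product moves the result most when perturbed.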
4

Granados Hernández, Elkin Dario, Nelson Leonardo Diaz Aldana, and Adriana Carolina Luna Hernández. "Energy Management Electronic Device for Islanded Microgrids Based on Renewable Energy Sources and Battery-based Energy Storage." Ingeniería e Investigación 41, no. 1 (January 29, 2021): e83905. http://dx.doi.org/10.15446/ing.investig.v41n1.83905.

Abstract:
Energy management systems are one of the most important components in the operation of an electric microgrid. They are responsible for ensuring the supervision of the electrical system, as well as the coordination and reliability of all loads and distributed energy resources, in order for the microgrid to be operated as a unified entity. Because of that, an energy management system should be fast enough at processing data and defining control actions to guarantee the correct performance of the microgrid. This paper explores the design and implementation of an energy management system deployed on a dedicated electronic device. The proposed energy management device coordinates the distributed energy resources and loads in a residential-scale islanded microgrid, in accordance with a rule-based energy management strategy that ensures reliable and safe operation of the battery-based energy storage system. A hardware-in-the-loop test was performed with a real-time simulation platform to show the operation of the electronic device.
5

Jeong, Yoon-Su. "Blockchain Processing Technique Based on Multiple Hash Chains for Minimizing Integrity Errors of IoT Data in Cloud Environments." Sensors 21, no. 14 (July 8, 2021): 4679. http://dx.doi.org/10.3390/s21144679.

Abstract:
As IoT (Internet of Things) devices are diversified in their fields of use (manufacturing, health, medical, energy, home, automobile, transportation, etc.), it is becoming important to analyze and process data sent and received from IoT devices connected to the Internet. Data collected from IoT devices depend heavily on secure storage in databases located in cloud environments. However, storing IoT data directly in a database located in a cloud environment not only makes it difficult to control the data directly, but also fails to guarantee the data's integrity due to a number of hazards (errors and error handling, security attacks, etc.) that can arise from natural disasters and management neglect. In this paper, we propose an optimized hash processing technique that enables hierarchical distributed processing with an n-bit-size blockchain to minimize the loss of data generated from IoT devices deployed in distributed cloud environments. The proposed technique minimizes IoT data integrity errors and strengthens the role of intermediate media acting as gateways by interactively authenticating n-bit blockchains with the n + 1 and n − 1 layers, so that IoT data sent and received can be validated normally and protected from integrity errors. In particular, the proposed technique ensures the reliability of IoT information by validating hash values of IoT data while storing the index information of IoT data distributed in different locations in a blockchain, in order to maintain the integrity of the data. Furthermore, the proposed technique ensures the linkage of IoT data by allowing minimal errors in the collected IoT data while simultaneously grouping their linkage information, thus optimizing the load balance after hash processing. In the performance evaluation, the proposed technique reduced IoT data processing time by an average of 2.54 times, and blockchain generation time improved on average by 17.3% when linking IoT data.
The asymmetric storage efficiency of IoT data according to hash code length is improved by 6.9% on average over existing techniques. Asymmetric storage speed according to the hash code length of the IoT data block was shown to be 10.3% faster on average than existing techniques. Integrity accuracy of IoT data is improved by 18.3% on average over existing techniques.
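The integrity guarantee this abstract appeals to comes from hash linking: each block's hash covers its predecessor's hash, so altering any stored IoT record invalidates every later block. The paper's n-bit multi-layer scheme is not reproduced here; this Python sketch shows only the generic mechanism, with invented sensor payloads:

```python
import hashlib

def block_hash(prev_hash, payload):
    """Each block's hash covers its predecessor's hash, linking the chain."""
    return hashlib.sha256(prev_hash + payload).hexdigest().encode()

def build_chain(records):
    chain, prev = [], b"genesis"
    for payload in records:
        prev = block_hash(prev, payload)
        chain.append((payload, prev))
    return chain

def verify(chain):
    prev = b"genesis"
    for payload, h in chain:
        if block_hash(prev, payload) != h:
            return False          # tampering detected
        prev = h
    return True

chain = build_chain([b"temp=21.5", b"humidity=40"])
assert verify(chain)
chain[0] = (b"temp=99.9", chain[0][1])   # tamper with a stored record
assert not verify(chain)
```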
6

Rantala, Ville, Pasi Liljeberg, and Juha Plosila. "Status Data and Communication Aspects in Dynamically Clustered Network-on-Chip Monitoring." Journal of Electrical and Computer Engineering 2012 (2012): 1–14. http://dx.doi.org/10.1155/2012/728191.

Abstract:
Monitoring and diagnostic systems are required in modern Network-on-Chip (NoC) implementations to assure high performance and reliability. A dynamically clustered NoC monitoring structure for traffic and fault monitoring is presented. It is a distributed monitoring approach which does not require any centralized control. Essential issues concerning status data diffusion, processing, and format are simulated and analyzed. Monitor communication and placement are also discussed. The results show that the presented monitoring structure can be used to improve the performance of an NoC. Even a small adjustment of parameters, for example of the monitoring data format or monitor placement, can have a significant influence on the overall performance of the NoC. The analysis shows that the monitoring system should be carefully designed in terms of data diffusion, routing, and monitoring algorithms to obtain the potential performance improvement.
7

Prashanth, B. U. V., Mohammed Riyaz Ahmed, and Manjunath R. Kounte. "Design and implementation of DA FIR filter for bio-inspired computing architecture." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 2 (April 1, 2021): 1709. http://dx.doi.org/10.11591/ijece.v11i2.pp1709-1718.

Abstract:
This paper elucidates the system construct of a DA FIR filter optimized for the design of a distributed arithmetic (DA) finite impulse response (FIR) filter, based on an architecture with tightly coupled co-processor-based data processing units. With a series of look-up-table (LUT) accesses emulating multiply-and-accumulate operations, the constructed DA-based FIR filter is implemented on an FPGA. The very high speed integrated circuit hardware description language (VHDL) is used to implement the proposed filter, and the design is verified using simulation. This paper discusses two optimization algorithms, and the resulting optimizations are incorporated into the LUT layer and architecture extractions. The proposed method offers an optimized design with average reductions in the number of LUTs, populated slices, and gates for the DA finite impulse response filter. This research paves a direction towards the development of bio-inspired computing architectures built without logically intensive operations, obtaining the desired specifications with respect to performance, timing, and reliability.
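Distributed arithmetic replaces the FIR filter's multipliers with look-up-table reads: a table of all partial sums of the coefficients is precomputed, and the dot product is accumulated bit-serially from the inputs. A Python sketch of the idea for unsigned inputs (coefficients, samples, and word length are illustrative; real DA implementations also handle two's-complement inputs with an offset term):

```python
def build_lut(coeffs):
    """Precompute all 2^N partial sums of the filter coefficients;
    bit i of the address selects whether coeffs[i] is included."""
    n = len(coeffs)
    return [sum(c for i, c in enumerate(coeffs) if (addr >> i) & 1)
            for addr in range(1 << n)]

def da_dot(coeffs, samples, bits=8):
    """Bit-serial multiply-accumulate via LUT reads: one LUT access
    per input bit position, shifted and accumulated."""
    lut = build_lut(coeffs)
    acc = 0
    for b in range(bits):
        addr = 0
        for i, x in enumerate(samples):
            addr |= ((x >> b) & 1) << i
        acc += lut[addr] << b
    return acc

coeffs, samples = [3, 5, 2], [10, 20, 7]
assert da_dot(coeffs, samples) == sum(c * x for c, x in zip(coeffs, samples))
```

The LUT-size reduction the paper optimizes matters because the table grows as 2^N in the number of filter taps.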
8

Kim, Seong Cheol, Papia Ray, and S. Surender Reddy. "Features of Smart Grid Technologies: An Overview." ECTI Transactions on Electrical Engineering, Electronics, and Communications 17, no. 2 (August 31, 2019): 169–80. http://dx.doi.org/10.37936/ecti-eec.2019172.215478.

Abstract:
This paper presents an overview of smart grid (SG) technology features such as two-way communication, the advanced metering infrastructure (AMI) system, integration of renewable energy, advanced storage techniques, real-time operation and control, data management and processing, physical and cyber security, and self-healing. The SG technology allows two-way communication for better reliability, control, efficiency, and economics of the power system. With these new SG technologies, consumers have many energy choices, such as use of renewable energy, usage management, flexible rates, electric vehicles (EVs), etc. These technologies require real-time operation, and the SG accommodates this real-time operation and control. SG technology allows distributed generation through demand response and energy efficiency technologies to shed the load demand. However, it is very difficult to adopt these changes in conventional grids. Utility companies, governments, independent system operators (ISOs), and energy regulatory commissions need to agree on the scope and time frame of these changes.
9

Ledwaba, Lehlogonolo P. I., Gerhard P. Hancke, Sherrin J. Isaac, and Hein S. Venter. "Smart Microgrid Energy Market: Evaluating Distributed Ledger Technologies for Remote and Constrained Microgrid Deployments." Electronics 10, no. 6 (March 18, 2021): 714. http://dx.doi.org/10.3390/electronics10060714.

Abstract:
The increasing strain on ageing generation infrastructure has seen more frequent instances of scheduled and unscheduled blackouts, rising reliance on fossil-fuel-based energy alternatives, and a slowdown in efforts towards achieving universal access to electrical energy in South Africa. To try to relieve the burden on the National Grid and still progress electrification activities, the smart microgrid model and secure energy trade paradigm are considered, enabled by the Industrial IoT (IIoT) and distributed ledger technologies (DLTs). Given the high availability requirements of microgrid operations, the limited resources available on IIoT devices, and the high processing and energy requirements of DLT operations, this work aims to determine the effect of native DLT algorithms when implemented on IIoT edge devices, to assess the suitability of DLTs as a mechanism to establish a secure energy trading market for the Internet of Energy. Metrics such as node transaction time, operating temperature, power consumption, and processor and memory usage are considered towards determining possible interference with edge node operation. In addition, the cost and time required for mining operations associated with the DLT-enabled node are determined in an effort to predict the cost to end users, in terms of fees payable and mobile data costs, as well as predicting the microgrid's growth and potential blockchain network slowdown.
10

Daves, Glenn G. "Trends in Automotive Packaging." Additional Conferences (Device Packaging, HiTEC, HiTEN, and CICMT) 2014, DPC (January 1, 2014): 001818–50. http://dx.doi.org/10.4071/2014dpc-keynote_th1_daves.

Abstract:
The long-term trend in automobiles has been increasing electronics content over time. This trend is expected to continue and drives diverse functional, form factor, and reliability requirements. These requirements, in turn, are leading to changes in the package types selected and the performance specifications of the packages used for automotive electronics. Several examples will be given. This abstract covers the development of a distributed high temperature electronics demonstrator for integration with sensor elements to provide digital outputs that can be used by the FADEC (Full Authority Digital Electronic Control) system or the EHMS (Engine Health Monitoring System) on an aircraft engine. This distributed electronics demonstrator eliminates the need for the FADEC or EHMS to process the sensor signal, which will assist in making the overall system more accurate and efficient in processing only digital signals. This will offer weight savings in cables, harnesses and connector pin reduction. The design concept was to take the output from several on-engine sensors, carry out the signal conditioning, multiplexing, analogue to digital conversion and data transmission through a serial data bus. The unit has to meet the environmental requirements of DO-160 with the need to operate at 200°C, with short term operation at temperatures up to 250°C. The work undertaken has been to design an ASIC based on 1.0 μm Silicon on Insulator (SOI) device technology incorporating sensor signal conditioning electronics for sensors including resistance temperature probes, strain gauges, thermocouples, torque and frequency inputs. The ASIC contains analogue multiplexers, temperature stable voltage band-gap reference and bias circuits, ADC, BIST, core logic, DIN inputs and two parallel ARINC 429 serial databuses. The ASIC was tested and showed to be functional up to a maximum temperature of 275°C. 
The ASIC has been integrated with other high temperature components including voltage regulators, a crystal oscillator, precision resistors, silicon capacitors within a hermetic hybrid package. The hybrid circuit has been assembled within a stainless steel enclosure with high temperature connectors. The high temperature electronics demonstrator has been demonstrated operating from −40°C to +250°C. This work has been carried out under the EU Clean Sky HIGHTECS project with the Project being led by Turbomeca (Fr) and carried out by GE Aviation Systems (UK), GE Research – Munich (D) and Oxford University (UK).
11

Nishio, M., T. Mizutani, and N. Takeda. "Structural shape reconstruction with consideration of the reliability of distributed strain data from a Brillouin-scattering-based optical fiber sensor." Smart Materials and Structures 19, no. 3 (February 3, 2010): 035011. http://dx.doi.org/10.1088/0964-1726/19/3/035011.

12

Isaev, E. A., D. V. Pervukhin, V. V. Kornilov, P. A. Tarasov, A. A. Grigoriev, Y. V. Rudyak, G. O. Rytikov, and V. G. Nazarov. "Platelet Adhesion Quantification to Fluorinated Polyethylene from the Structural Caracteristics of Its Surface." Mathematical Biology and Bioinformatics 14, no. 2 (August 9, 2019): 420–29. http://dx.doi.org/10.17537/2019.14.420.

Abstract:
When creating digital health-saving systems as part of the development of modern electronic medical monitoring technologies, it is necessary not only to develop information and communication infrastructures and algorithms for distributed and cloud processing of data coming from all kinds of sensors, but also to design new materials that enable the production of safe, effective test systems accessible to the general public. An analysis of the market for consumables intended for use in rapid diagnostic devices shows that disposable test strips on a flexible polymer base with high biological resistance to the effects of blood components are most in demand. It has been shown that surface modification of polyethylene by fluorination, sulfonation, and plasma treatment provides a significant reduction in platelet adhesion to processed polymer films. It has also been suggested that the surface energy of the modified material has a determining effect on its hemocompatibility. This work is devoted to the formation of an analytical model of the surface morphology of fluorinated polyethylene, as well as a quantitative analysis of the structural and functional relationships between the parameters of the morphological model and the resistance of the material to platelet adhesion. The widespread use of the discussed approach to increasing the thromboresistance of polymeric materials will increase the reliability of glycemic analyses performed by patients on their own using portable express diagnostic systems (glucometers).
13

Szalay, Márk, Péter Mátray, and László Toka. "State Management for Cloud-Native Applications." Electronics 10, no. 4 (February 9, 2021): 423. http://dx.doi.org/10.3390/electronics10040423.

Abstract:
The stateless cloud-native design improves the elasticity and reliability of applications running in the cloud. The design decouples the life-cycle of application states from that of application instances; states are written to and read from cloud databases, and deployed close to the application code to ensure low latency bounds on state access. However, the scalability of applications brings the well-known limitations of distributed databases, in which the states are stored. In this paper, we propose a full-fledged state layer that supports the stateless cloud application design. In order to minimize the inter-host communication due to state externalization, we propose, on the one hand, a system design jointly with a data placement algorithm that places functions’ states across the hosts of a data center. On the other hand, we design a dynamic replication module that decides the proper number of copies for each state to ensure a sweet spot in short state-access time and low network traffic. We evaluate the proposed methods across realistic scenarios. We show that our solution yields state-access delays close to the optimal, and ensures fast replica placement decisions in large-scale settings.
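The replication trade-off described above, where more copies shorten state-access paths but multiply synchronization traffic, can be illustrated with a toy cost model (the cost functions and rates below are invented for illustration, not the paper's algorithm):

```python
def pick_replica_count(read_rate, write_rate, max_replicas):
    """Toy trade-off: reads get faster with more nearby copies
    (read_rate / r), but every write must sync r - 1 extra copies."""
    def cost(r):
        return read_rate / r + write_rate * (r - 1)
    return min(range(1, max_replicas + 1), key=cost)

# Read-heavy state is replicated widely; write-heavy state keeps one copy.
assert pick_replica_count(read_rate=100, write_rate=5, max_replicas=8) == 4
assert pick_replica_count(read_rate=10, write_rate=100, max_replicas=8) == 1
```

A real replication module would feed measured access rates and placement constraints into such a cost function per state object.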
14

Yudhanto, Albertus Djaka, and Purwanto Purwanto. "ANALISA PENGARUH PENERAPAN BUDAYA 5S TERHADAP PRODUKTIVITAS KARYAWAN DI PT SAMSUNG ELECTRONICS INDONESIA, BEKASI." Jurnal Muara Ilmu Ekonomi dan Bisnis 4, no. 2 (June 11, 2020): 205. http://dx.doi.org/10.24912/jmieb.v4i2.7609.

Abstract:
The strategy to win the competition has triggered the application of new concepts in manufacturing companies, especially the electronics industry. The aim of this research is to investigate the impact of implementing 5S culture as an effort to increase employee productivity at PT Samsung Electronics Indonesia, using a quantitative approach. Primary data were collected with a questionnaire distributed to respondents who had been directly involved: a sample of 155 employees in the Manufacturing Engineering Department. 
Processing and analysis began with validity and reliability tests. The data are described by minimum, maximum, mean, and standard deviation. The classical assumption tests (normality, heteroskedasticity, multicollinearity, and autocorrelation) were performed before multiple linear regression. Partial and simultaneous test results show that seiri, seiton, seiso, seiketsu, and shitsuke make a significant positive contribution to employee productivity. The proportion of all independent variables to the dependent variable, described by the coefficient of determination (adjusted R Square), is 0.460 or 46%, while the rest (54%) is determined by other factors. Shitsuke's role was the most dominant during the observation period, meaning that diligence and the habit of acting on what is desired must be pursued even when it is difficult.
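The reported adjusted R² of 0.460 follows from the standard adjustment formula for n = 155 observations and k = 5 predictors (the five 5S variables); a quick Python check, where the raw R² of about 0.478 is back-solved for illustration rather than quoted from the study:

```python
def adjusted_r2(r2, n, k):
    """Standard adjusted coefficient of determination for
    n observations and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Illustrative: a raw R^2 of ~0.478 with n=155, k=5 gives the reported 0.460
assert round(adjusted_r2(0.478, n=155, k=5), 2) == 0.46
```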
15

Kim, Jae-Hoon, Seungchul Lee, and Sengphil Hong. "Autonomous Operation Control of IoT Blockchain Networks." Electronics 10, no. 2 (January 17, 2021): 204. http://dx.doi.org/10.3390/electronics10020204.

Abstract:
Internet of Things (IoT) networks are typically composed of many sensors and actuators. The operation controls for robots in smart factories or drones produce a massive volume of data that requires high reliability. A blockchain architecture can be used to build highly reliable IoT networks. The shared ledger and open data validation among users guarantee extremely high data security. However, current blockchain technology has limitations for its overall application across IoT networks. Because general permission-less blockchain networks typically target high-performance network nodes with sufficient computing power, a blockchain node with low computing power and memory, such as an IoT sensor/actuator, cannot operate in a blockchain as a fully functional node. A lightweight blockchain provides practical blockchain availability over IoT networks. We propose essential operational advances to develop a lightweight blockchain over IoT networks. A dynamic network configuration enforced by deep clustering provides ad-hoc flexibility for IoT network environments. The proposed graph neural network technique enhances the efficiency of dApp (distributed application) spreading across IoT networks. In addition, the proposed blockchain technology is highly implementable in software because it adopts the Hyperledger development environment. Directly embedding the proposed blockchain middleware platform in small computing devices proves the practicability of the proposed methods.
16

Chen, Yingwen, Bowen Hu, Hujie Yu, Zhimin Duan, and Junxin Huang. "A Threshold Proxy Re-Encryption Scheme for Secure IoT Data Sharing Based on Blockchain." Electronics 10, no. 19 (September 27, 2021): 2359. http://dx.doi.org/10.3390/electronics10192359.

Abstract:
The IoT devices deployed in various application scenarios generate massive data of immeasurable value every day. These data often contain the user's personal privacy information, so there is an imperative need to guarantee the reliability and security of IoT data sharing. We propose a new encrypted data storing and sharing architecture by combining proxy re-encryption with blockchain technology. The consensus mechanism based on threshold proxy re-encryption eliminates dependence on third-party central service providers. Multiple consensus nodes in the blockchain network act as proxy service nodes to re-encrypt data and combine the converted ciphertext, and personal information is not disclosed at any point in the procedure. That removes the obstacles to using a decentralized network to store and distribute private encrypted data safely. We ran extensive simulation experiments to evaluate the performance of the proposed framework. The results show that the proposed architecture can meet extensive data access demands while adding only a tolerable time latency. Our scheme is one of the first attempts to utilize threshold proxy re-encryption and a blockchain consensus algorithm to support IoT data sharing.
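The "threshold" in threshold proxy re-encryption means that any t of n proxy nodes can jointly complete the conversion while fewer than t learn nothing; the classic building block behind such schemes is Shamir secret sharing. The paper's actual protocol is not reproduced here; this Python sketch shows only generic t-of-n share reconstruction, with all parameters invented (requires Python 3.8+ for the modular inverse via `pow`):

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is in the field mod P

def eval_poly(coeffs, x):
    """Horner evaluation of the sharing polynomial mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, eval_poly(coeffs, x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```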
17

Carrino, Stefano, Francesco Nicassio, and Gennaro Scarselli. "Nonlinearities Associated with Impaired Sensors in a Typical SHM Experimental Set-Up." Electronics 7, no. 11 (November 6, 2018): 303. http://dx.doi.org/10.3390/electronics7110303.

Abstract:
Structural Health Monitoring (SHM) gives a diagnosis of a structure, assessing the structural integrity and predicting the residual life through appropriate data processing and interpretation. A structure must remain in the design domain, although it can be subjected to normal aging due to usage, the action of the environment, and accidental events. SHM involves the integration of electronic devices, such as piezoelectric transducers (PZTs), into the inspected structure. These are lightweight and small and can be produced in different geometries. They are used both in guided-wave-based and electromechanical-impedance-based methods. PZT bonding requires essential steps such as preparation of the surfaces, application of the adhesive, and assembly, which make the bonding process far from trivial. Furthermore, adhesives are susceptible to environmental degradation. Transducer debonding or non-uniformly distributed glue underneath the sensor reduces performance and can affect the reliability of the SHM system. In this paper, a sensor diagnostic method for monitoring the PZT operational status is proposed in order to detect bonding defects/damage between a PZT patch and a host structure. The authors propose a method based on the nonlinear behaviour of the PZT/structure contact that allows the identification of the damaged PZT and the geometrical characterization of the debonding. The feasibility of the diagnostic procedure is demonstrated by numerical studies and experiments, where disbonds were created by inhibiting the adhesive action on a part of the interface through Teflon film. The proposed method can be used to evaluate sensor functionality after an extreme loading event or over a long period of service time.
APA, Harvard, Vancouver, ISO, and other styles
18

Nazemi, Sepideh, Kin K. Leung, and Ananthram Swami. "Distributed Optimization Framework for In-Network Data Processing." IEEE/ACM Transactions on Networking 27, no. 6 (December 2019): 2432–43. http://dx.doi.org/10.1109/tnet.2019.2953581.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Sherstnyov, Vladislav S., Anna I. Sherstnyova, Igor A. Botygin, and Denis A. Kustov. "Distributed Information System for Processing and Storage of Meteorological Data." Key Engineering Materials 685 (February 2016): 867–71. http://dx.doi.org/10.4028/www.scientific.net/kem.685.867.

Full text
Abstract:
The following article features the results of developing a distributed network storage for ground meteorological observation data. The data are represented in the national variant of the international code for rapid transmission of environment data from meteorological stations across the Russian Federation, and are available to researchers both visually and in common export formats. The design of the distributed network storage of meteorological data includes the following modules: a dispatcher module (monitors calculation nodes, distributes data to nodes, processes client requests), a client module (allows external researchers to access the meteorological data), a terminal module (used to import new meteorological data), and a data processing and storage module (a node of the distributed meteorological data storage, consisting of two sub-modules for data processing and data storage respectively). The article presents the results of practical testing of the developed software. To simulate the cluster of information and calculation servers in the pilot project, multithreading was used, which is supported by nearly every operating system for parallel data processing. The development tools chosen for the network storage made it possible to design the interaction of storage modules with optimal efficiency and to ensure the performance, stability and reliability of processing and managing large amounts of data. The obtained results allow the designs to be used for efficient management of meteorological surface observation data, rapid data gathering, and the systematization and storage of hydro-meteorological data in different alphanumeric codes and other related categories.
APA, Harvard, Vancouver, ISO, and other styles
20

Atakishchev, O. I., M. V. Belov, I. S. Zakharov, and A. V. Nikolaev. "Specific Features of Parallel Asynchronous Data Processing in Distributed GIS." Telecommunications and Radio Engineering 64, no. 3 (2005): 167–75. http://dx.doi.org/10.1615/telecomradeng.v64.i3.10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Nokleby, Matthew, Haroon Raja, and Waheed U. Bajwa. "Scaling-Up Distributed Processing of Data Streams for Machine Learning." Proceedings of the IEEE 108, no. 11 (November 2020): 1984–2012. http://dx.doi.org/10.1109/jproc.2020.3021381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Benediktsson, Jon Atli, and Zebin Wu. "Distributed Computing for Remotely Sensed Data Processing [Scanning the Section]." Proceedings of the IEEE 109, no. 8 (August 2021): 1278–81. http://dx.doi.org/10.1109/jproc.2021.3094335.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Li, F. "Distributed Processing of Reliability Index Assessment and Reliability-Based Network Reconfiguration in Power Distribution Systems." IEEE Transactions on Power Systems 20, no. 1 (February 2005): 230–38. http://dx.doi.org/10.1109/tpwrs.2004.841231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Esposito, Christian, and Massimo Ficco. "Recent Developments on Security and Reliability in Large-Scale Data Processing with MapReduce." International Journal of Data Warehousing and Mining 12, no. 1 (January 2016): 49–68. http://dx.doi.org/10.4018/ijdwm.2016010104.

Full text
Abstract:
The demand to access a large volume of data, distributed across hundreds or thousands of machines, has opened new opportunities in commerce, science, and computing applications. MapReduce is a paradigm that offers a programming model and an associated implementation for processing massive datasets in a parallel fashion, using non-dedicated distributed computing hardware. It has been successfully adopted in several academic and industrial projects for Big Data Analytics. However, since such analytics is increasingly demanded within the context of mission-critical applications, security and reliability in MapReduce frameworks are strongly required in order to manage sensitive information and to obtain the right answer at the right time. In this paper, the authors present the main implementation of the MapReduce programming paradigm, provided by Apache under the name Hadoop. They illustrate the security and reliability concerns in the context of a large-scale data processing infrastructure. They review the available solutions and their limitations in supporting security and reliability within MapReduce frameworks. The authors conclude by describing the ongoing evolution of such solutions and the open issues for improvement, which could be challenging research opportunities for academic researchers.
APA, Harvard, Vancouver, ISO, and other styles
25

Urmonov, Odilbek, and HyungWon Kim. "Highly Reliable MAC Protocol Based on Associative Acknowledgement for Vehicular Network." Electronics 10, no. 4 (February 4, 2021): 382. http://dx.doi.org/10.3390/electronics10040382.

Full text
Abstract:
The IEEE 1609/802.11p standard obligates each vehicle to broadcast a periodic basic safety message (BSM). The BSM comprises the positional and kinematic information of the transmitting vehicle. It also contains emergency information that is to be delivered to all target receivers. In broadcast communication, however, the existing carrier sense multiple access (CSMA) medium access control (MAC) protocol cannot guarantee high reliability, as it suffers from two chronic problems, namely access collision and hidden terminal interference. To resolve these problems of CSMA MAC, we propose a novel enhancement algorithm called neighbor association-based MAC (NA-MAC). NA-MAC uses time division multiple access (TDMA) to divide the channel resource into short time-intervals called slots. Each slot is further divided into three parts to conduct channel sensing, slot acquisition, and data transmission. To avoid duplicate slot allocation among multiple vehicles, NA-MAC introduces a three-way handshake during slot acquisition. Our simulation results revealed that NA-MAC improved the packet reception ratio (PRR) by 19% and successful transmissions by 30% over the reference protocols. In addition, NA-MAC reduced packet collisions by a factor of 4. Using real on-board units (OBUs), we conducted an experiment in which our protocol outperformed the reference protocols in terms of PRR and average transmission interval by 82% and 49%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
26

Evers, Christine, Emanuel A. P. Habets, Sharon Gannot, and Patrick A. Naylor. "DoA Reliability for Distributed Acoustic Tracking." IEEE Signal Processing Letters 25, no. 9 (September 2018): 1320–24. http://dx.doi.org/10.1109/lsp.2018.2849579.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Sestok, C. K., M. R. Said, and A. V. Oppenheim. "Randomized data selection in detection with applications to distributed signal processing." Proceedings of the IEEE 91, no. 8 (August 2003): 1184–98. http://dx.doi.org/10.1109/jproc.2003.814922.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Bu, Lingrui, Hui Zhang, Haiyan Xing, and Lijun Wu. "Research on parallel data processing of data mining platform in the background of cloud computing." Journal of Intelligent Systems 30, no. 1 (January 1, 2021): 479–86. http://dx.doi.org/10.1515/jisys-2020-0113.

Full text
Abstract:
The efficient processing of large-scale data has very important practical value. In this study, a data mining platform based on the Hadoop distributed file system was designed, and the K-means algorithm was improved with the max-min distance idea. On the Hadoop distributed file system platform, parallelization was realized with MapReduce. Finally, the data processing effect of the algorithm was analyzed with the Iris data set. The results showed that the parallel algorithm assigned more samples correctly than the traditional algorithm; in a single-machine environment, the parallel algorithm ran longer; in the face of large data sets, the traditional algorithm ran out of memory, but the parallel algorithm completed the calculation task; and the speedup of the parallel algorithm rose with the size of the cluster and the data set, showing a good parallel effect. The experimental results verify the reliability of the parallel algorithm in big data processing, contributing to further improvements in the efficiency of data mining.
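The max-min distance seeding described above can be sketched in a few lines. The following is an illustrative reconstruction, not the paper's code (the function name and toy points are ours): pick an arbitrary first center, then repeatedly add the point whose distance to its nearest chosen center is largest.

```python
import math

def maxmin_init(points, k):
    """Max-min distance seeding for K-means: the first center is arbitrary;
    each subsequent center is the point whose minimum distance to the
    already-chosen centers is largest."""
    centers = [points[0]]
    while len(centers) < k:
        # distance from a point to its nearest chosen center
        nearest = lambda p: min(math.dist(p, c) for c in centers)
        centers.append(max(points, key=nearest))
    return centers

points = [(0, 0), (0, 1), (10, 10), (10, 11), (5, 5)]
print(maxmin_init(points, 3))  # [(0, 0), (10, 11), (5, 5)]
```

In the parallel setting, the per-point nearest-center distances are what a map phase would compute, with the reduce phase selecting the global maximum.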
APA, Harvard, Vancouver, ISO, and other styles
29

Rojas Hernandez, Andres Felipe, and Nancy Yaneth Gelvez Garcia. "Distributed processing using cosine similarity for mapping Big Data in Hadoop." IEEE Latin America Transactions 14, no. 6 (June 2016): 2857–61. http://dx.doi.org/10.1109/tla.2016.7555265.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Yamamoto, Moriki, and Hisao Koizumi. "An Experimental Evaluation of Distributed Data Stream Processing using Lightweight RDBMS SQLite." IEEJ Transactions on Electronics, Information and Systems 133, no. 11 (2013): 2125–32. http://dx.doi.org/10.1541/ieejeiss.133.2125.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Kimura, Mitsutaka, Mitsuhiro Imaizumi, and Toshio Nakagawa. "Replication Policy of Real-Time Distributed System for Cloud Computing." International Journal of Reliability, Quality and Safety Engineering 22, no. 05 (October 2015): 1550024. http://dx.doi.org/10.1142/s0218539315500242.

Full text
Abstract:
Recently, cloud computing has been widely used for the purpose of protecting client data on the Internet [A. Weiss, Computing in the clouds, netWorker 11 (2007) 16–25; M. Armbrust et al., Above the clouds: A Berkeley view of cloud computing, Technical Report UCB/EECS-2009-28, University of California at Berkeley (2009)]. But when a client receives a network service, the response time may be slow because the data center is located in a remote place. In order to solve this problem, real-time distributed systems for cloud computing have been proposed [M. Okuno, D. Ito, H. Miyamoto, H. Aoki, Y. Tsushima and T. Yazaki, A study on distributed information and communication processing architecture for next generation cloud system, IEICE Tech. Report 109(448) (2010) 241–246; M. Okuno, S. Tsutsumi and T. Yazaki, A study of high available distributed network processing technique for next generation cloud system, IEICE Tech. Report 111(8) (2011) 25–30; S. Yamada, J. Marukawa, D. Ishii, S. Okamoto and N. Yamanaka, A study of parallel transmission technique with GMPLS in intelligent cloud network, IEICE Tech. Report 109(455) (2010) 51–56]. The cloud computing system consists of some intelligent nodes as well as a data center. The data center manages all client data, and each intelligent node provides client service near the clients, which enables client service with short response times [M. Okuno, D. Ito, H. Miyamoto, H. Aoki, Y. Tsushima and T. Yazaki, A study on distributed information and communication processing architecture for next generation cloud system, IEICE Tech. Report 109(448) (2010) 241–246]. We previously considered a reliability model of distributed information processing for cloud computing, derived its cost effectiveness, and discussed the optimal replication interval to minimize it [M. Kimura, M. Imaizumi and T. Nakagawa, Reliability modeling of distributed information processing for cloud computing, in Proc. 20th ISSAT Int. Conf. Reliability and Quality in Design (2014), pp. 183–187].
That earlier work dealt with a server system with one failure mode. In this paper, we consider the reliability model of a real-time distributed system with n intelligent nodes and formulate a stochastic model of the server system in which another normal intelligent node takes over at failure. We derive the expected numbers of replications and of client data updates. Further, we derive the expected cost and discuss an optimal replication interval to minimize it. Next, we derive the cost effectiveness and discuss an optimal number of intelligent nodes to minimize it.
APA, Harvard, Vancouver, ISO, and other styles
32

Canter, L. H., and Y. S. Sherif. "Parallel processing, distributed systems and local area networks." Microelectronics Reliability 28, no. 6 (January 1988): 919–27. http://dx.doi.org/10.1016/0026-2714(88)90293-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Shen, Godwin, and Antonio Ortega. "Transform-Based Distributed Data Gathering." IEEE Transactions on Signal Processing 58, no. 7 (July 2010): 3802–15. http://dx.doi.org/10.1109/tsp.2010.2047640.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Barrera, E., M. Ruiz, S. Lopez, D. Machon, and J. Vega. "PXI-based architecture for real-time data acquisition and distributed dynamic data processing." IEEE Transactions on Nuclear Science 53, no. 3 (June 2006): 923–26. http://dx.doi.org/10.1109/tns.2006.874372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Kannadasan, R., K. P. Rajasekaran, S. Jaganath, and N. Prabakaran. "Performance Analysis of Data Processing Using High Performance Distributed Computer Clusters." Journal of Computational and Theoretical Nanoscience 16, no. 5 (May 1, 2019): 2372–76. http://dx.doi.org/10.1166/jctn.2019.7902.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Tang, D., X. Zhou, Y. Jing, W. Cong, and C. Li. "DESIGN AND VERIFICATION OF REMOTE SENSING IMAGE DATA CENTER STORAGE ARCHITECTURE BASED ON HADOOP." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 1639–42. http://dx.doi.org/10.5194/isprs-archives-xlii-3-1639-2018.

Full text
Abstract:
The data center is a new concept of data processing and application proposed in recent years. It is a new processing method based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes the computing nodes of cluster resources and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application, and meets the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by testing the storage efficiency for different image data and multiple users, and by analyzing how the distributed storage architecture improves the application efficiency of remote sensing images on an actual Hadoop service system.
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Yang. "Hadoop-Based Model of Mass Data Storage." Applied Mechanics and Materials 513-517 (February 2014): 632–34. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.632.

Full text
Abstract:
With more and more data produced by networks, it is extremely important to manage and store these data on a mass data storage platform. This paper presents a method of rationally managing and storing mass data based on distributed computing techniques. It is based on the Hadoop distributed platform, mainly using the HDFS distributed file system, the MapReduce parallel computing model, and the HBase distributed database as mass data processing methods to achieve efficient storage. The model can overcome the deficiencies of current means of storage and solve the problems of storing mass data; it has good scalability and reliability, so storage efficiency can be further improved.
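The MapReduce flow that such Hadoop-based models rely on reduces to map, shuffle and reduce steps. A minimal single-process sketch with invented toy data (an illustration of the paradigm, not the paper's code):

```python
from collections import defaultdict

def map_phase(records, mapper):
    # apply the user mapper to each record, emitting (key, value) pairs
    for rec in records:
        yield from mapper(rec)

def shuffle(pairs):
    # group all values by key, as the framework does between map and reduce
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups, reducer):
    return {k: reducer(k, vs) for k, vs in groups.items()}

# toy job: count records per source node
logs = ["node1 ok", "node2 err", "node1 ok", "node3 ok", "node1 err"]
pairs = map_phase(logs, lambda line: [(line.split()[0], 1)])
counts = reduce_phase(shuffle(pairs), lambda k, vs: sum(vs))
print(counts)  # {'node1': 3, 'node2': 1, 'node3': 1}
```

In Hadoop, the map and reduce functions run on the nodes holding the HDFS blocks; only the shuffle moves data across the cluster.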
APA, Harvard, Vancouver, ISO, and other styles
38

Wu, Zebin, Jin Sun, Yi Zhang, Zhihui Wei, and Jocelyn Chanussot. "Recent Developments in Parallel and Distributed Computing for Remotely Sensed Big Data Processing." Proceedings of the IEEE 109, no. 8 (August 2021): 1282–305. http://dx.doi.org/10.1109/jproc.2021.3087029.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Yan, J., Wen Hui Yan, and Li Ping Wang. "Reliability Analysis of Pressure Meter Electronic System Based on Fault Tree." Applied Mechanics and Materials 347-350 (August 2013): 917–21. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.917.

Full text
Abstract:
The pressure meter is an important testing and storage instrument in oilfield hydraulic fracturing. The pressure meter electronic system is mainly composed of a microcontroller module, a battery module, a temperature signal acquisition and processing module, a pressure signal acquisition and processing module, and a memory module. This paper briefly introduces the principle of the pressure meter and establishes a fault tree based on the schematic diagram of the pressure meter electronic system and the specific causes of failure. To overcome the long cycle and poor economy of traditional reliability analysis, the paper adopts reliability simulation using MATLAB combined with Monte Carlo theory, ultimately obtaining the reliability curve and the life of the pressure meter electronic system. The reliability of the electronic system gradually decreases with increasing time and follows an exponential distribution; the simulated life of the electronic system is 2877.7 hours. In other words, the electronic system can work for 2877.7 hours once lowered into the oil well. This provides guidance for engineering practice, as well as a means of reliability analysis for the pressure meter electronic system.
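The Monte Carlo reliability estimate used in such studies can be illustrated for a series system whose modules have exponentially distributed lifetimes. The failure rates below are invented for the sketch, not taken from the paper:

```python
import random

def mc_reliability(rates, t, n=100_000, seed=1):
    """Estimate R(t) for a series system: it survives to time t only if
    every module (exponential lifetime with the given failure rate) does."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n):
        system_life = min(rng.expovariate(r) for r in rates)
        if system_life > t:
            survived += 1
    return survived / n

# illustrative per-module failure rates (per hour)
rates = [1e-4, 5e-5, 8e-5, 6e-5, 5.7e-5]
print(mc_reliability(rates, 1000.0))  # close to exp(-sum(rates)*1000) ~ 0.707
```

For a series system the exact curve is R(t) = exp(-t·Σλᵢ) and the mean life is 1/Σλᵢ, so the simulation can be checked against the closed form before being applied to fault trees for which no closed form exists.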
APA, Harvard, Vancouver, ISO, and other styles
40

Čerešňák, Roman, Karol Matiaško, and Adam Dudáš. "Various Approaches Proposed for Eliminating Duplicate Data in a System." Communications - Scientific letters of the University of Zilina 23, no. 4 (October 1, 2021): A223–A232. http://dx.doi.org/10.26552/com.c.2021.4.a223-a232.

Full text
Abstract:
The growth of the big data processing market has led to overload of computation data centers and to changes in the methods used for storing data, in the communication between computing units, and in the computational time needed to process or edit the data. Methods of distributed or parallel data processing have brought new problems related to computations on data, which need to be examined. Unlike conventional cloud services, a tight connection between the data and the computations is one of the main characteristics of big data services: computational tasks can be done only if the relevant data are available. Three factors that influence the speed and efficiency of data processing are data duplicity, data integrity and data security. We are motivated to study the problems related to the growing time needed for data processing by optimizing these three factors in geographically distributed data centers.
APA, Harvard, Vancouver, ISO, and other styles
41

Wasko, Wojciech, Alessandro Albini, Perla Maiolino, Fulvio Mastrogiovanni, and Giorgio Cannata. "Contact Modelling and Tactile Data Processing for Robot Skins." Sensors 19, no. 4 (February 16, 2019): 814. http://dx.doi.org/10.3390/s19040814.

Full text
Abstract:
Tactile sensing is a key enabling technology to develop complex behaviours for robots interacting with humans or the environment. This paper discusses computational aspects playing a significant role when extracting information about contact events. Considering a large-scale, capacitance-based robot skin technology we developed in the past few years, we analyse the classical Boussinesq–Cerruti’s solution and the Love’s approach for solving a distributed inverse contact problem, both from a qualitative and a computational perspective. Our contribution is the characterisation of the algorithms’ performance using a freely available dataset and data originating from surfaces provided with robot skin.
APA, Harvard, Vancouver, ISO, and other styles
42

Dewri, Rinku, Toan Ong, and Ramakrishna Thurimella. "Linking Health Records for Federated Query Processing." Proceedings on Privacy Enhancing Technologies 2016, no. 3 (July 1, 2016): 4–23. http://dx.doi.org/10.1515/popets-2016-0013.

Full text
Abstract:
A federated query portal in an electronic health record infrastructure enables large epidemiology studies by combining data from geographically dispersed medical institutions. However, an individual's health record has been found to be distributed across multiple carrier databases in local settings. Privacy regulations may prohibit a data source from revealing clear text identifiers, thereby making it non-trivial for a query aggregator to determine which records correspond to the same underlying individual. In this paper, we explore this problem of privately detecting and tracking the health records of an individual in a distributed infrastructure. We begin with a secure set intersection protocol based on commutative encryption, and show how to make it practical on comparison spaces as large as 10^10 pairs. Using bigram matching, precomputed tables, and data parallelism, we successfully reduced the execution time to a matter of minutes, while retaining a high degree of accuracy even in records with data entry errors. We also propose techniques to prevent the inference of identifier information when knowledge of underlying data distributions is known to an adversary. Finally, we discuss how records can be tracked utilizing the detection results during query processing.
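The commutative-encryption matching at the heart of such a protocol can be illustrated with modular exponentiation (Pohlig–Hellman style). The prime, keys, and record names below are toy values chosen for the sketch and are in no way production-grade:

```python
import hashlib

P = 2**127 - 1  # a Mersenne prime used as a toy modulus

def h(identifier):
    # hash a clear-text identifier into a residue mod P
    return int.from_bytes(hashlib.sha256(identifier.encode()).digest(), "big") % P

def enc(value, key):
    # exponentiation is commutative: enc(enc(v, a), b) == enc(enc(v, b), a)
    return pow(value, key, P)

a_key, b_key = 0x1234567, 0x89ABCDF  # each party's secret exponent (toy values)
a_records = {"alice", "bob", "carol"}
b_records = {"bob", "dave", "carol"}

# each party encrypts its hashed identifiers, the other party re-encrypts
# them; equal doubly-encrypted values reveal the intersection without any
# exchange of clear-text identifiers
a_double = {enc(enc(h(r), a_key), b_key) for r in a_records}
b_double = {enc(enc(h(r), b_key), a_key) for r in b_records}
print(len(a_double & b_double))  # 2  ("bob" and "carol")
```

The bigram matching and precomputed tables the authors describe address the cost of running this comparison over 10^10 candidate pairs.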
APA, Harvard, Vancouver, ISO, and other styles
43

Du, Jun Wei, Wei Qiang Chen, Zhong Zhu, Xin Liu, Si Jun Wan, and Chang Bin Liao. "A Redundancy Design Schema of Distributed Real-Time Database Applied in ISCS." Applied Mechanics and Materials 174-177 (May 2012): 2142–46. http://dx.doi.org/10.4028/www.scientific.net/amm.174-177.2142.

Full text
Abstract:
Reliability is one of the most important properties of an integrated supervisory and control system (ISCS) in a metro. Redundancy technology, a fault-tolerance mechanism, can significantly improve ISCS reliability. This paper introduces a redundancy design schema and its implementation in a distributed real-time database, which is the kernel part of the ISCS, including upstream and downstream data redundancy processing technology, fault detection, and redundancy switching technology. The result shows that this schema is feasible and reasonable.
APA, Harvard, Vancouver, ISO, and other styles
44

Akanbi, Adeyinka, and Muthoni Masinde. "A Distributed Stream Processing Middleware Framework for Real-Time Analysis of Heterogeneous Data on Big Data Platform: Case of Environmental Monitoring." Sensors 20, no. 11 (June 3, 2020): 3166. http://dx.doi.org/10.3390/s20113166.

Full text
Abstract:
In recent years, the application and wide adoption of Internet of Things (IoT)-based technologies have increased the proliferation of monitoring systems, which has consequently exponentially increased the amounts of heterogeneous data generated. Processing and analysing the massive amount of data produced is cumbersome and gradually moving from classical ‘batch’ processing—extract, transform, load (ETL) technique to real-time processing. For instance, in environmental monitoring and management domain, time-series data and historical dataset are crucial for prediction models. However, the environmental monitoring domain still utilises legacy systems, which complicates the real-time analysis of the essential data, integration with big data platforms and reliance on batch processing. Herein, as a solution, a distributed stream processing middleware framework for real-time analysis of heterogeneous environmental monitoring and management data is presented and tested on a cluster using open source technologies in a big data environment. The system ingests datasets from legacy systems and sensor data from heterogeneous automated weather systems irrespective of the data types to Apache Kafka topics using Kafka Connect APIs for processing by the Kafka streaming processing engine. The stream processing engine executes the predictive numerical models and algorithms represented in event processing (EP) languages for real-time analysis of the data streams. To prove the feasibility of the proposed framework, we implemented the system using a case study scenario of drought prediction and forecasting based on the Effective Drought Index (EDI) model. Firstly, we transform the predictive model into a form that could be executed by the streaming engine for real-time computing. Secondly, the model is applied to the ingested data streams and datasets to predict drought through persistent querying of the infinite streams to detect anomalies. 
Finally, a performance evaluation of the distributed stream processing middleware infrastructure is conducted to determine the real-time effectiveness of the framework.
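The "persistent querying of the infinite streams to detect anomalies" can be reduced to a rolling-window detector. This generator-based sketch is our own illustration of the pattern (toy readings, not the EDI model):

```python
from collections import deque

def anomalies(stream, window=5, k=2.0):
    """Yield values that deviate from the rolling mean of the previous
    `window` readings by more than k rolling standard deviations."""
    buf = deque(maxlen=window)
    for x in stream:
        if len(buf) == buf.maxlen:
            mean = sum(buf) / len(buf)
            std = (sum((v - mean) ** 2 for v in buf) / len(buf)) ** 0.5
            if abs(x - mean) > k * max(std, 1e-9):
                yield x
        buf.append(x)

readings = [10, 11, 10, 9, 10, 10, 30, 10, 11, 10]
print(list(anomalies(readings)))  # [30]
```

A Kafka Streams topology would apply the same per-key logic over windowed state stores instead of an in-process deque.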
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Kun, Linchao Zhuo, Yun Shao, Dong Yue, and Kim Fung Tsang. "Toward Distributed Data Processing on Intelligent Leak-Points Prediction in Petrochemical Industries." IEEE Transactions on Industrial Informatics 12, no. 6 (December 2016): 2091–102. http://dx.doi.org/10.1109/tii.2016.2537788.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Sun, Qi, and Hui Yan Zhao. "Design of Distribute Monitoring Platform Base on Cloud Computing." Applied Mechanics and Materials 687-691 (November 2014): 1076–79. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1076.

Full text
Abstract:
Compared with traditional measurement infrastructure, a distributed network measurement system based on cloud computing stores measurement data in a massive virtual resource pool, ensuring the reliability and scalability of data storage, and reuses the parallel processing mechanism of the cloud computing platform for fast, concurrent analytical processing and data mining of the mass measurement data. The measuring probe supports the deployment of a variety of measurement algorithms and a variety of data acquisition formats, and the measurement method provides congestion response and load balancing strategies.
APA, Harvard, Vancouver, ISO, and other styles
47

Przystupa, Krzysztof, Mykola Beshley, Olena Hordiichuk-Bublivska, Marian Kyryk, Halyna Beshley, Julia Pyrih, and Jarosław Selech. "Distributed Singular Value Decomposition Method for Fast Data Processing in Recommendation Systems." Energies 14, no. 8 (April 19, 2021): 2284. http://dx.doi.org/10.3390/en14082284.

Full text
Abstract:
The problem of analyzing a large amount of user data to determine their preferences and, based on these data, to provide recommendations on new products is important. Depending on the correctness and timeliness of the recommendations, significant profits or losses can result. The task of analyzing data on the users of companies' services is carried out by special recommendation systems. However, with a large number of users, the data to be processed become very large, which complicates the work of recommendation systems. For efficient data analysis in commercial systems, the Singular Value Decomposition (SVD) method can perform intelligent analysis of the information. For large amounts of processed information, we propose the use of distributed systems. This approach reduces the time of data processing and of delivering recommendations to users. For the experimental study, we implemented the distributed SVD method using Message Passing Interface, Hadoop and Spark technologies and obtained results showing reduced data processing time when using distributed systems compared to non-distributed ones.
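One common way to distribute an SVD over row-partitioned data: each worker computes the Gram contribution XᵢᵀXᵢ of its row block, a reduce step sums them into AᵀA, and the leading singular pair is recovered centrally. The pure-Python sketch below (toy ratings, function names ours) follows that pattern; it illustrates the general technique, not the authors' MPI/Hadoop/Spark implementation:

```python
def gram_block(block):
    """Local contribution X_i^T X_i of one row block (a list of rows)."""
    n = len(block[0])
    out = [[0.0] * n for _ in range(n)]
    for row in block:
        for i in range(n):
            for j in range(n):
                out[i][j] += row[i] * row[j]
    return out

def gram_sum(a, b):
    # the reduce step: element-wise sum of two local Gram matrices
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def top_singular(ata, iters=200):
    """Power iteration on A^T A: returns (sigma_1, v_1), the leading
    singular value and right singular vector of A."""
    n = len(ata)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(ata[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    sigma2 = sum(ata[i][j] * v[i] * v[j] for i in range(n) for j in range(n))
    return sigma2 ** 0.5, v

# a tiny user-item rating matrix split across two "workers" as row blocks
block1 = [[5.0, 4.0, 0.0], [4.0, 5.0, 1.0]]
block2 = [[0.0, 1.0, 5.0]]
ata = gram_sum(gram_block(block1), gram_block(block2))
sigma, v = top_singular(ata)
print(round(sigma, 3))  # ~9.123 for this toy matrix
```

Only the small n×n Gram matrices cross the network, which is what makes the method attractive when the number of users is far larger than the number of items.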
APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Yuan, Soummya Kar, and Jose M. F. Moura. "Resilient Distributed Parameter Estimation With Heterogeneous Data." IEEE Transactions on Signal Processing 67, no. 19 (October 1, 2019): 4918–33. http://dx.doi.org/10.1109/tsp.2019.2931171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Kay, S., and Quan Ding. "On the Performance of Independent Processing of Independent Data Sets for Distributed Detection." IEEE Signal Processing Letters 20, no. 6 (June 2013): 619–22. http://dx.doi.org/10.1109/lsp.2013.2260694.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Xiao, Fuyuan, and Masayoshi Aritsugi. "An Adaptive Parallel Processing Strategy for Complex Event Processing Systems over Data Streams in Wireless Sensor Networks." Sensors 18, no. 11 (November 2, 2018): 3732. http://dx.doi.org/10.3390/s18113732.

Full text
Abstract:
Efficient matching of incoming events of data streams to persistent queries is fundamental to event stream processing systems in wireless sensor networks. These applications require dealing with high volume and continuous data streams with fast processing time on distributed complex event processing (CEP) systems. Therefore, a well-managed parallel processing technique is needed for improving the performance of the system. However, the specific properties of pattern operators in the CEP systems increase the difficulties of the parallel processing problem. To address these issues, a parallelization model and an adaptive parallel processing strategy are proposed for the complex event processing by introducing a histogram and utilizing the probability and queue theory. The proposed strategy can estimate the optimal event splitting policy, which can suit the most recent workload conditions such that the selected policy has the least expected waiting time for further processing of the arriving events. The proposed strategy can keep the CEP system running fast under the variation of the time window sizes of operators and the input rates of streams. Finally, the utility of our work is demonstrated through the experiments on the StreamBase system.
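The "least expected waiting time" criterion can be made concrete with an M/M/1 approximation per operator instance. This reduction is our own illustration of the idea, not the authors' queueing model:

```python
def mm1_wait(arrival_rate, service_rate):
    """Expected queueing delay W_q of an M/M/1 queue; infinite if unstable."""
    rho = arrival_rate / service_rate
    if rho >= 1.0:
        return float("inf")
    return rho / (service_rate - arrival_rate)

def best_split(arrival_rate, service_rates):
    """Index of the operator instance giving the least expected wait."""
    waits = [mm1_wait(arrival_rate, mu) for mu in service_rates]
    return min(range(len(waits)), key=waits.__getitem__)

# 4 events/s arriving; three instances with different service rates
print(best_split(4.0, [5.0, 8.0, 6.0]))  # 1 (the 8 events/s instance)
```

An adaptive strategy would re-estimate the arrival and service rates from the recent workload histogram and re-evaluate the policy as window sizes and input rates drift.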
APA, Harvard, Vancouver, ISO, and other styles