Journal articles on the topic 'Operating system redundancy techniques'

Consult the top 50 journal articles for your research on the topic 'Operating system redundancy techniques.'

1

Michlowicz, Edward, and Jerzy Wojciechowski. "A method for evaluating and upgrading systems with parallel structures with forced redundancy." Eksploatacja i Niezawodnosc - Maintenance and Reliability 23, no. 4 (October 17, 2021): 770–76. http://dx.doi.org/10.17531/ein.2021.4.19.

Abstract:
The objects of the study are parallel-structure machine systems with redundancy associated with the safety assurance of continuous material flow. The problem concerns systems in which materials are supplied continuously (24 hours a day) and the operated machines must ensure the receipt and movement of the material at a strictly defined time and in the desired quantity; a failure in such a system threatens human life and risks environmental degradation. This paper presents a method for assessing and upgrading the condition of such a system so that it maintains proper operation under continuous duty. Condition assessment requires a database of the current parameters of the system components (measurements, monitoring). The method also uses lean techniques, including TPM. Criteria for evaluating the system and selecting a suitable structure for further operation are proposed. The method is exemplified on an underground mine drainage system: selected parameters of the system components were measured, and their characteristics (motors, pumps, pipelines) were developed. The analysis results and the values of the adopted criteria were compared to the indicators for new pump sets. A two-option system upgrade was proposed, along with machine operating schedules, maintenance periods, and overhaul cycles.
2

Shurtleff, Mark S., Joseph A. Jenkins, and Michelle R. Sams. "Deriving Menu Structures through Modal Block Clustering: A Promising Alternative to Hierarchical Techniques." Proceedings of the Human Factors Society Annual Meeting 32, no. 5 (October 1988): 347–51. http://dx.doi.org/10.1177/154193128803200525.

Abstract:
Modal block clustering (MBC) is proposed as an approach better suited to deriving menu structures than hierarchical clustering techniques. Problems with the application of hierarchical techniques, and with the pairwise similarity ratings (PWSR) from which the clusters are derived, are discussed. MBC defines clusters based on the pattern of common command attributes and provides an objective way to determine the composition and number of menu panels to include in a menu structure. The method also objectively defines command redundancy for the menu panels. MBC was applied to the 97 commands that comprise the CMS operating system, resulting in 17 menu categories, which were used to design a help menu system. The MBC procedure provides a viable methodology for complex systems, such as CMS, which derive increased functionality from numerous command options. System designers can fruitfully and efficiently apply this methodology both to current systems and to proposed systems for which there are no expert users.
3

Wang, Hui, and Yun Wang. "Designing Fault Tolerance Strategy by Iterative Redundancy for Component-Based Distributed Computing Systems." Mathematical Problems in Engineering 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/197423.

Abstract:
Reliability is a critical issue for component-based distributed computing systems; some distributed software allows large numbers of potentially faulty components to exist on an open network. Faults are inevitable in this large-scale, complex, distributed setting, which may include many untrustworthy parts, so providing highly reliable component-based distributed systems is a challenging and important research problem. Generally, redundancy and replication are used to achieve fault tolerance. In this paper, we propose a CFI (critical fault iterative) redundancy technique that guarantees efficient use of resources (e.g., computation and storage) when building fault-tolerant applications. When operating in an environment where component reliability is unknown, CFI redundancy is more efficient and adaptive than other techniques (e.g., K-modular redundancy and N-version programming). The CFI redundancy strategy uses function invocation relationships and invocation frequencies to rank the functions' importance and to identify the most vulnerable functions implemented via functionally equivalent components. A trade-off has to be made between efficiency and reliability. This paper presents a formal theoretical analysis and an experimental analysis. Compared with existing methods, the reliability of a component-based distributed system can be greatly improved by making only a small set of significant components fault-tolerant.
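A rough Python sketch of the ranking-then-replicating idea described here (the names, scoring rule, and replica budget are invented for illustration, not taken from the paper):

# Hypothetical sketch: rank functions by invocation frequency and spend a
# limited replica budget on the most critical ones first.

def rank_functions(invocations):
    """invocations: {function: invocation count} from call-graph profiling."""
    total = sum(invocations.values())
    return sorted(invocations, key=lambda f: invocations[f] / total, reverse=True)

def allocate_replicas(invocations, budget, replicas_per_function=3):
    """Spend a limited replica budget on the highest-ranked functions."""
    plan = {f: 1 for f in invocations}            # one copy of everything
    for f in rank_functions(invocations):
        extra = replicas_per_function - 1
        if budget < extra:
            break
        plan[f] = replicas_per_function           # replicate a critical function
        budget -= extra
    return plan

print(allocate_replicas({"auth": 900, "billing": 350, "report": 20}, budget=4))
# -> {'auth': 3, 'billing': 3, 'report': 1}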
4

HEIDERGOTT, W. F. "SYSTEM LEVEL SINGLE EVENT UPSET MITIGATION STRATEGIES." International Journal of High Speed Electronics and Systems 14, no. 02 (June 2004): 341–52. http://dx.doi.org/10.1142/s0129156404002399.

Abstract:
A systems engineering process and the techniques and methods of fault-tolerant systems are applicable to developing a mitigation strategy for Single Event Upsets (SEU). Specific methods of fault avoidance, fault masking, detection, containment, and recovery are important elements in the mitigation of single event upsets. Fault avoidance through the use of SEU-hardened technology, fault masking using coding and redundancy provisions, and solutions applied at the subsystem and system level are available to the system developer. Validation and verification of SEU mitigation and of the performance of fault-tolerance provisions are essential elements of designing systems for operation in energetic particle environments.
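As a generic illustration of fault masking by majority voting, a classic ingredient of such mitigation strategies (not the paper's specific architecture), a minimal bitwise voter:

# Per-bit majority of three redundant copies of a word masks any single
# upset bit. This is the voting element of TMR-style fault masking.

def majority_vote(a: int, b: int, c: int) -> int:
    return (a & b) | (a & c) | (b & c)

# One copy suffers a single-event upset in bit 3; the voter masks it.
golden = 0b1010_1100
upset = golden ^ (1 << 3)
assert majority_vote(golden, golden, upset) == golden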
5

Pankaj, Jasdev Bhatti, and Mohit Kumar Kakkar. "Reliability Analysis of Industrial Model Using Redundancy Technique and Geometric Distribution." ECS Transactions 107, no. 1 (April 24, 2022): 7273–80. http://dx.doi.org/10.1149/10701.7273ecst.

Abstract:
The present paper analyzes the maintenance process of an industrial model using regenerative techniques. With growing demand for reliability analysis of industrial models following continuous or discrete distributions, the major concerns are inspecting the possibility and level of failure and routing the failed unit to the repair process. The system consists of two parallel units in passive standby redundancy with different failure mechanisms. The proposed analytical methodology makes it possible to assess the reliability of the whole system in the event of component failures. The stochastic behavior of the system's reliability characteristics with respect to the repair-time distribution was also studied using the geometric distribution. Numerical equations and results were derived for reliability parameters such as mean time to system failure, system availability in the operative state, and system downtime under the repair mechanism, using regenerative techniques and the geometric distribution. Graphical and analytical analyses are presented.
6

Ferrentino, Enrico, Federico Salvioli, and Pasquale Chiacchio. "Globally Optimal Redundancy Resolution with Dynamic Programming for Robot Planning: A ROS Implementation." Robotics 10, no. 1 (March 4, 2021): 42. http://dx.doi.org/10.3390/robotics10010042.

Abstract:
Dynamic programming techniques have proven much more flexible than calculus of variations and other techniques in performing redundancy resolution through global optimization of performance indices. When the state and input spaces are discrete and the time horizon is finite, they can easily accommodate generic constraints and objective functions and find Pareto-optimal sets. Several implementations have been proposed in previous works, but either they do not ensure the achievement of the globally optimal solution, or they have not been demonstrated on robots of practical relevance. In this communication, recent advances in dynamic programming redundancy resolution, so far only demonstrated on simple planar robots, are extended to generic kinematic structures. This is done by expanding the Robot Operating System (ROS) and proposing a novel architecture meeting the requirements of maintainability, re-usability, modularity and flexibility that are usually required of robotic software libraries. The proposed ROS extension integrates seamlessly with the other software components of the ROS ecosystem, so as to encourage the reuse of the available visualization and analysis tools. The new architecture is demonstrated on a 7-DOF robot with a six-dimensional task, and topological analyses are carried out on both its state space and the resulting joint-space solution.
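The core machinery, finite-horizon dynamic programming over a discretized state space, can be sketched generically; the robot kinematics and task constraints are abstracted into stage_cost and reachable callbacks, both invented here rather than taken from the paper's ROS architecture:

def dp_optimize(states, horizon, stage_cost, reachable):
    """Finite-horizon DP: minimum accumulated cost over discretized states."""
    cost = {s: 0.0 for s in states}        # stage 0: every start state is free
    pred = [dict()]                        # pred[t][s] = best predecessor of s
    for t in range(1, horizon):
        new_cost, new_pred = {}, {}
        for s in states:
            feasible = [(cost[p] + stage_cost(p, s, t), p)
                        for p in states if p in cost and reachable(p, s)]
            if feasible:
                new_cost[s], new_pred[s] = min(feasible)
        cost = new_cost
        pred.append(new_pred)
    s = min(cost, key=cost.get)            # cheapest final state
    path = [s]
    for t in range(horizon - 1, 0, -1):    # backtrack to stage 0
        s = pred[t][s]
        path.append(s)
    return list(reversed(path))

# Toy run: states are discretized redundancy parameters 0..4; large jumps
# per step are disallowed; the cost prefers staying near 2.
path = dp_optimize(range(5), horizon=6,
                   stage_cost=lambda p, s, t: (s - 2) ** 2 + abs(s - p),
                   reachable=lambda p, s: abs(s - p) <= 1)
print(path)    # converges to and stays at state 2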
7

WANG, CHUN-CHING, YIH-CHUAN LIN, and CHI-YIN LIN. "ROM REDUCTION FOR OFDM SYSTEM USING TIME-STEALING STRATEGY." Journal of Circuits, Systems and Computers 15, no. 06 (December 2006): 907–21. http://dx.doi.org/10.1142/s0218126606003398.

Abstract:
Modern communication systems frequently exploit the OFDM (Orthogonal Frequency Division Multiplexing) technique to obtain highly robust transmission of multimedia information, such as digital audio/video broadcasts. OFDM and most other multimedia compression techniques usually require expensive computations, e.g., FFT (Fast Fourier Transform) and IMDCT (Inverse Modified Discrete Cosine Transform) processing. Traditionally, designing the FFT and IMDCT separately involves a significant amount of hardware redundancy. To reduce the required hardware, this investigation proposes a new ROM-sharing design for storing both the FFT twiddle factors and the IMDCT coefficients in a DAB (Digital Audio Broadcasting) receiver. We first analyze the correlation between FFT operations and IMDCT operations, and then modify the combinational logic circuit in the FFT processor such that both IMDCT coefficients and FFT twiddle factors can be obtained simultaneously from a shared ROM. This design can also be applied to computing the IFFT (Inverse Fast Fourier Transform) and MDCT for a DAB transmitter. Compared with the traditional design using a separate-module scheme, our design needs no extra ROM for the IMDCT/MDCT modules; the new scheme therefore offers a superior solution for combining high-performance FFT (IFFT) and IMDCT (MDCT) operations.
8

Banteywalu, Solomon, Baseem Khan, Valentijn De Smedt, and Paul Leroux. "A Novel Modular Radiation Hardening Approach Applied to a Synchronous Buck Converter." Electronics 8, no. 5 (May 8, 2019): 513. http://dx.doi.org/10.3390/electronics8050513.

Abstract:
Radiation and extreme temperature are the main inhibitors to the use of electronic devices in space applications. Radiation challenges the normal and stable operation of DC-DC converters used as power supplies for onboard systems in satellites and spacecraft. In this situation, special design techniques known as radiation hardening or radiation-tolerant design have to be employed. In this work, a module-level design approach for radiation hardening is addressed, where a module is a constituent of a digital controller: an analog-to-digital converter (ADC), a digital proportional-integral-derivative (PID) controller, or a digital pulse width modulator (DPWM). As a new Radiation Hardening by Design (RHBD) technique, a four-module redundancy technique is proposed and applied to the digital voltage-mode controller driving a synchronous buck converter, implemented as a hardware-in-the-loop (HIL) simulation block in MATLAB/Simulink using the Xilinx System Generator on the Zynq-7000 development board (ZYBO). The technique is compared, in terms of reliability and hardware resource requirements, with triple modular redundancy (TMR), five-module redundancy (FMR), and the modified triplex-duplex architecture. Furthermore, radiation-induced failures are emulated by switching the inputs of duplicated modules to different signals, or to ground, during simulation. The simulation results show that the proposed technique has 25% and 30% longer expected life compared to the TMR and FMR techniques, respectively, and has the lowest hardware resource requirement compared to FMR and the modified triplex-duplex techniques.
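For context, the reliability of generic k-out-of-n modular-redundancy schemes such as TMR (2-of-3) can be computed directly; the module reliability r and the vote thresholds below are illustrative, and the paper's four-module scheme is not reproduced:

# Probability that at least k of n independent modules survive, assuming
# a perfect voter and an identical module reliability r.

from math import comb

def k_of_n_reliability(k: int, n: int, r: float) -> float:
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

r = 0.95
print(f"simplex : {r:.4f}")
print(f"TMR 2/3 : {k_of_n_reliability(2, 3, r):.4f}")   # 3r^2 - 2r^3 = 0.9928
print(f"FMR 3/5 : {k_of_n_reliability(3, 5, r):.4f}")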
9

Grehan, Julianne, Dmitry Ignatyev, and Argyrios Zolotas. "Fault Detection in Aircraft Flight Control Actuators Using Support Vector Machines." Machines 11, no. 2 (February 2, 2023): 211. http://dx.doi.org/10.3390/machines11020211.

Abstract:
Future generations of flight control systems, such as those for unmanned autonomous vehicles (UAVs), are likely to be more adaptive and intelligent to cope with the extra safety and reliability requirements of pilotless operations. An efficient fault detection and isolation (FDI) system is paramount and should be capable of monitoring the health status of an aircraft. Historically, hardware redundancy techniques have been used to detect faults, but duplicating the actuators in a UAV is not ideal due to the high cost and large mass of the additional components. Fortunately, aircraft actuator faults can also be detected using analytical redundancy techniques. In this study, a data-driven algorithm using a Support Vector Machine (SVM) is designed. The aircraft actuator fault investigated is the loss-of-effectiveness (LOE) fault. The aim of the fault detection algorithm is to classify the feature vector data into a nominal or a faulty class based on the health of the actuator. The results show that the SVM algorithm detects the LOE fault almost instantly, with an average accuracy of 99%.
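A hedged sketch of the same classification pattern with scikit-learn, using synthetic actuator features in place of the paper's flight data:

# Feature vectors from actuator telemetry labeled nominal (0) or LOE
# faulty (1); the Gaussian clusters below merely stand in for real data.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
nominal = rng.normal(0.0, 1.0, size=(500, 4))     # healthy actuator features
faulty = rng.normal(1.5, 1.2, size=(500, 4))      # loss-of-effectiveness shift
X = np.vstack([nominal, faulty])
y = np.r_[np.zeros(500), np.ones(500)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")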
10

Ahmed, Yahye Abukar, Shamsul Huda, Bander Ali Saleh Al-rimy, Nouf Alharbi, Faisal Saeed, Fuad A. Ghaleb, and Ismail Mohamed Ali. "A Weighted Minimum Redundancy Maximum Relevance Technique for Ransomware Early Detection in Industrial IoT." Sustainability 14, no. 3 (January 21, 2022): 1231. http://dx.doi.org/10.3390/su14031231.

Abstract:
Ransomware attacks against the Industrial Internet of Things (IIoT) have catastrophic consequences not only for the targeted infrastructure but also for the services provided to the public. By encrypting operational data, ransomware attacks can disrupt normal operations, which represents a serious problem for industrial systems. Ransomware employs several avoidance techniques, such as packing, obfuscation, noise insertion, and irrelevant and redundant system call injection, to deceive security measures and make both static and dynamic analysis more difficult. In this paper, a Weighted minimum Redundancy maximum Relevance (WmRmR) technique is proposed for better estimation of feature significance in the data captured during the early stages of ransomware attacks. The technique combines an enhanced mRMR (EmRmR) with Term Frequency-Inverse Document Frequency (TF-IDF) so that it can filter out noisy runtime behavior based on the weights calculated by the TF-IDF. The proposed technique can assess whether a feature in the relevant set is important or not, has low dimensional complexity, and requires fewer evaluations than the original mRMR method. The TF-IDF was used to evaluate the weights of the features generated by the EmRmR algorithm, and an inclusive entropy-based refinement method was then used to decrease the size of the extracted data by identifying the system calls with strong behavioral indications. After extensive experimentation, the proposed technique was shown to be effective for early ransomware detection with low complexity and a low false-positive rate, and it was compared against existing behavioral detection methods.
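A plain greedy mRMR selector (maximize relevance to the label, minimize redundancy with already-chosen features) conveys the baseline the paper builds on; the TF-IDF weighting of WmRmR is omitted, and the data below is synthetic:

# Greedy mRMR: at each step pick the feature with the best
# relevance-minus-redundancy score (mutual-information based).

import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def mrmr(X, y, k):
    relevance = mutual_info_classif(X, y, discrete_features=True)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        def score(j):
            red = (np.mean([mutual_info_score(X[:, j], X[:, s]) for s in selected])
                   if selected else 0.0)
            return relevance[j] - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

X = np.random.default_rng(0).integers(0, 3, size=(200, 8))  # toy discrete features
y = (X[:, 0] + X[:, 5] > 2).astype(int)
print(mrmr(X, y, k=3))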
11

Safaei, Ali Asghar, and Saeede Habibi-Asl. "Multidimensional indexing technique for medical images retrieval." Intelligent Data Analysis 25, no. 6 (October 29, 2021): 1629–66. http://dx.doi.org/10.3233/ida-205495.

Abstract:
Retrieving required medical images from a huge collection is one of the most widely used features in medical information systems, including medical imaging search engines. For example, diagnostic decision making has traditionally been supported by patient data (image or non-image) and previous medical experience from similar cases. Indexing, as part of a search engine (or retrieval system), increases the speed of a search. The goal of this study is to provide an effective and efficient indexing technique for medical image search engines. To achieve this goal, a multidimensional indexing technique for medical images is designed using the normalization technique that is used to reduce redundancy in relational database design. The data structure of the proposed multidimensional index and the operations required to create and handle it are designed. The time complexity of each operation is analyzed, and the average memory space required to store a medical image (along with its metadata) is calculated as the space-complexity analysis of the proposed indexing technique. The results show that the proposed indexing technique performs well in terms of memory usage as well as execution time for the usual operations. Moreover, and perhaps more importantly, the proposed indexing technique improves the precision and recall of the information retrieval system (i.e., search engine) that uses it to index medical images. A user of such a search engine can also retrieve medical images by specifying their attributes along different aspects (dimensions), e.g., tissue, image modality and format, sickness and trauma, etc. The proposed multidimensional indexing technique can thus improve the effectiveness of a medical image retrieval system (in terms of precision and recall) while maintaining proper efficiency (in terms of execution time and memory usage), improving the information retrieval process for healthcare search engines.
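An illustrative miniature of the idea: one inverted index per attribute dimension, with retrieval as the intersection of per-dimension posting sets. The dimensions and records are invented; the paper's normalization-based design is more elaborate:

# Multidimensional lookup via per-dimension inverted indexes.

from collections import defaultdict

class MultiDimIndex:
    def __init__(self, dimensions):
        self.index = {d: defaultdict(set) for d in dimensions}

    def add(self, image_id, attrs):
        for dim, value in attrs.items():
            self.index[dim][value].add(image_id)   # O(#dims) per insert

    def query(self, **criteria):
        sets = [self.index[d][v] for d, v in criteria.items()]
        return set.intersection(*sets) if sets else set()

idx = MultiDimIndex(["modality", "tissue"])
idx.add("img1", {"modality": "MRI", "tissue": "brain"})
idx.add("img2", {"modality": "CT", "tissue": "brain"})
print(idx.query(modality="MRI", tissue="brain"))    # {'img1'}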
12

Michoud, C., S. Bazin, L. H. Blikra, M. H. Derron, and M. Jaboyedoff. "Experiences from site-specific landslide early warning systems." Natural Hazards and Earth System Sciences 13, no. 10 (October 22, 2013): 2659–73. http://dx.doi.org/10.5194/nhess-13-2659-2013.

Abstract:
Landslide early warning systems (EWSs) have to be implemented in areas posing a large risk to populations or infrastructure when classical structural remediation measures cannot be set up. This paper gathers experiences of existing landslide EWSs, with a special focus on practical requirements (e.g., alarm threshold values have to take into account the smallest detectable signal levels of deployed sensors before being established) and on specific issues arising during system implementation. Within the framework of the SafeLand European project, a questionnaire was sent to about one hundred institutions in charge of landslide management, and answers from experts belonging to 14 operational units covering 23 monitored landslides were interpreted. Although no standard requirements exist for designing and operating EWSs, this review highlights some key elements, such as the importance of pre-investigation work, the redundancy and robustness of monitoring systems, the establishment of different scenarios adapted to gradually increasing alert levels, and the necessity of confidence and trust between local populations and scientists. Moreover, it also confirms the need to improve our capabilities for failure forecasting, monitoring techniques, and the integration of water processes into landslide conceptual models.
13

RAVI, VADLAMANI. "OPTIMIZATION OF COMPLEX SYSTEM RELIABILITY BY A MODIFIED GREAT DELUGE ALGORITHM." Asia-Pacific Journal of Operational Research 21, no. 04 (December 2004): 487–97. http://dx.doi.org/10.1142/s0217595904000357.

Abstract:
In this paper, a global optimization meta-heuristic, the great deluge algorithm, is extended and applied to optimize the reliability of complex systems. Two different kinds of optimization problems are solved to demonstrate the effectiveness of the algorithm: (i) reliability optimization of a complex system with constraints on cost and weight, and (ii) optimal redundancy allocation in a multi-stage mixed system with constraints on cost and weight. Software developed in ANSI C implements the algorithm. In terms of both accuracy and speed, the present algorithm, the modified great deluge algorithm (MGDA), yielded far superior results compared to those obtained by simulated annealing, improved non-equilibrium simulated annealing, and other optimization algorithms. Further, when accuracy and speed are considered simultaneously, MGDA and another meta-heuristic, ant colony optimization (ACO), yielded comparable results. In conclusion, MGDA can be used as an efficient alternative to ACO and other existing optimization techniques.
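A generic great deluge loop for maximization accepts any candidate whose quality stays above a steadily rising "water level"; the MGDA's specific modifications and the reliability models are not reproduced here:

# Great deluge search: the water level rises monotonically, so worsening
# moves are tolerated early and squeezed out over time.

import random

def great_deluge(initial, neighbor, quality, rain=0.001, iters=20000):
    best = current = initial
    level = quality(initial)               # water level starts at initial quality
    for _ in range(iters):
        cand = neighbor(current)
        if quality(cand) >= level:         # accept while above the water level
            current = cand
            if quality(cand) > quality(best):
                best = cand
        level += rain                      # the flood rises
    return best

# Toy run: maximize a 1-D concave function; optimum is x = 3.
f = lambda x: -(x - 3.0) ** 2 + 9.0
print(great_deluge(0.0, lambda x: x + random.uniform(-0.1, 0.1), f))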
14

Kundu, Shubhrajyoti, Mehebub Alam, Biman K. Saha Roy, and Siddhartha Sankar Thakur. "Allocation of Optimal PMUs for Power System Observability Using PROMETHEE Approach." International Transactions on Electrical Energy Systems 2022 (March 26, 2022): 1–16. http://dx.doi.org/10.1155/2022/1625853.

Abstract:
Phasor measurement units (PMUs) are becoming vital measurement devices in wide-area monitoring, operation, and control. Allocating a PMU to a bus makes that bus directly observable, but considering the high installation costs, it is not feasible to place a PMU at every bus; placing PMUs at optimal locations is therefore extremely important. In this study, the Preference Ranking Organization Method for Enrichment of Evaluation (PROMETHEE)-based multi-criteria decision-making (MCDM) technique is applied to the optimal allocation of PMUs with the aim of achieving full system observability. Along with observability of the entire network, the proposed approach also provides maximum measurement redundancy (MR). Unlike some previous popular MCDM techniques, the proposed approach obtains the optimal PMU placement (OPP) solution without performing pruning operations. Different criteria have been formulated to construct a decision matrix (DM), which is used to calculate the net outranking flow (NOF) of all the buses in the PROMETHEE approach; PMUs are placed at the buses with the maximum NOF values. The proposed approach also considers the inclusion of zero injection buses (ZIBs). Further, cases such as a single PMU outage and the existence of conventional measurements are considered while determining the optimal PMU locations. The proposed algorithm is demonstrated on the IEEE 14-bus, 30-bus, 57-bus, and 118-bus systems, one Indian practical utility system, i.e., the Northern Regional Power Grid (NRPG) 246-bus system, and the larger Polish 2383-bus system, and it is compared with some popular existing techniques to prove its effectiveness.
15

Michalopoulos, Panayiotis, George J. Tsekouras, Fotios D. Kanellos, and John M. Prousalidis. "Optimal Selection of the Diesel Generators Supplying a Ship Electric Power System." Applied Sciences 12, no. 20 (October 17, 2022): 10463. http://dx.doi.org/10.3390/app122010463.

Abstract:
It is very common for ships to have electric power systems comprised of generators of the same type; this uniformity allows for easier and lower-cost maintenance. The classic way to select these generators is primarily by power and secondarily by dimensions and acquisition cost. In this paper, a more comprehensive selection method using improved cost indicators is proposed, which takes into account many factors that have a significant impact on the life-cycle cost of the equipment. A realistic and detailed profile of the ship's electric load, spanning a full year of her operation, is also developed to allow for a solution tailor-made to a specific case. The method is highly iterative: all combinations of genset quantities and capacities are individually considered to populate a power plant, taking into account the existing redundancy requirements. For each combination and for every time interval in the load profile, the engine consumption is Lagrange-optimized to determine the most efficient way to run the generators and the resulting cost, from which the operating cost throughout the year is derived. In this way, the method can reach optimal results, as large data sets regarding the ship's operation and her power system's technical characteristics can be utilized. This intense calculation process is greatly accelerated using memoization techniques. The reliability cost of the current power plant is also considered, along with other cost factors such as flat annual cost, maintenance, and personnel. The acquisition and installation costs are also included, after being distributed in annuities for various durations and interest rates. The results provide valuable insight into the total cost from every aspect and present the optimum generator selection for minimal expenditure and maximum return on investment. This methodology may be used to enhance current power-plant design processes and provide investors with more feasible alternatives, as it takes into consideration a multitude of technical and operational characteristics of the examined ship power system.
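The acceleration idea can be sketched as caching the cost of each (running-set, load) dispatch subproblem, which recurs across the year's time intervals; the quadratic fuel curve and the equal load split below are invented stand-ins for the paper's Lagrange-optimized dispatch:

# Memoized genset dispatch: identical subproblems hit the cache.

from functools import lru_cache
from itertools import combinations

GENSETS = {"G1": 500.0, "G2": 500.0, "G3": 800.0}          # rated kW (illustrative)

@lru_cache(maxsize=None)
def dispatch_cost(running: frozenset, load_kw: float) -> float:
    """Fuel cost of sharing the load equally among the running gensets."""
    cap = sum(GENSETS[g] for g in running)
    if cap < load_kw:
        return float("inf")                                 # infeasible combination
    per_unit = load_kw / len(running)
    return sum(0.02 * per_unit**2 + 5.0 * per_unit for _ in running)

def best_combination(load_kw: float):
    combos = (frozenset(c) for n in range(1, len(GENSETS) + 1)
              for c in combinations(GENSETS, n))
    return min(combos, key=lambda c: dispatch_cost(c, load_kw))

print(sorted(best_combination(600.0)))    # ['G1', 'G2', 'G3'] under this toy curve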
16

Gong, Yuhuan, and Yuchang Mo. "Qualitative Analysis of Commercial Services in MEC as Phased-Mission Systems." Security and Communication Networks 2020 (October 10, 2020): 1–11. http://dx.doi.org/10.1155/2020/8823952.

Abstract:
Currently, mobile edge computing (MEC) is one of the most popular techniques used to serve real-time requests from a wide range of mobile terminals. Compared with single-phase systems, commercial services in MEC can be modeled as phased-mission systems (PMS) and are much more complex because of the dependencies across phases. Over the past decade, researchers have proposed a set of new BDD-based algorithms for the fault tree analysis of a wide range of PMS with various mission requirements and failure behaviors. The analysis performed on a fault tree can be either qualitative or quantitative. Much work has been conducted on the quantitative fault tree analysis of PMS by means of BDD, but little related work exists on the qualitative analysis. In this paper, we present efficient methods to calculate the minimal cut sets (MCS) encoded by a PMS BDD. First, three kinds of redundancy relations within the cut sets (the inclusive relation, the internal-implication relation, and the external-implication relation) are identified; these relations prevent a cut set from being a minimal cut set. Then, three BDD operations, IncRed, InImpRed, and ExImpRed, are developed for the elimination of these redundancy relations, respectively. Using proper combinations of these operations, the MCS can be calculated correctly. As an illustration, experimental results on a benchmark MEC system are given.
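One of the three reductions, eliminating cut sets that include a smaller cut set (the inclusive relation), is easy to illustrate on an explicit list; the paper performs the equivalent IncRed operation directly on the BDD:

# Keep only minimal cut sets: drop any cut set that is a superset of
# another. Processing in order of size makes one pass sufficient.

def minimize_cut_sets(cut_sets):
    cut_sets = sorted({frozenset(c) for c in cut_sets}, key=len)
    minimal = []
    for cs in cut_sets:
        if not any(m <= cs for m in minimal):   # cs includes a smaller cut set?
            minimal.append(cs)
    return minimal

print(minimize_cut_sets([{"A", "B"}, {"A"}, {"B", "C"}, {"A", "B", "C"}]))
# -> [frozenset({'A'}), frozenset({'B', 'C'})]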
17

Terwilliger, Brent A., and David Ison. "Implementing low cost two-person supervisory control for small unmanned aerial systems." Journal of Unmanned Vehicle Systems 02, no. 01 (March 1, 2014): 36–51. http://dx.doi.org/10.1139/juvs-2013-0020.

Abstract:
Commercial off-the-shelf remote control (RC) model aircraft have been used as a base platform for the development of small unmanned aerial systems (sUAS). Such designs have included the use of first person view (FPV), inertial measurement units, and autopilot systems. Recommended guidelines established for the operation of FPV recreational RC aircraft are applicable to the operation of sUAS when consistent components and platforms are considered. The purpose of this research was to examine existing literature, guidance, regulations, and other influencing factors to assess the necessity of redundancy management practices; to identify recommended control stratagems, processes and procedures, and operational criteria; and to design a proof-of-concept system to operate sUAS with optimal safety and operational benefits within recommended and legislated boundaries. Qualitative content analysis techniques were used to perform a literature review, while a survey of applicable technology (e.g., equipment, components, and software) informed the development of a proof-of-concept system. The results were the identification of a recommended supervisory control framework, a simulated supervisory control system, a physical proof-of-concept system, and a series of recommendations on considerations and potential follow-up research to better understand the limitations, constraints, and applicable benefits in the actual operating environment.
18

Park, Daejin. "Low-Power Code Memory Integrity Verification Using Background Cyclic Redundancy Check Calculator Based on Binary Code Inversion Method." Journal of Circuits, Systems and Computers 25, no. 07 (April 22, 2016): 1650068. http://dx.doi.org/10.1142/s0218126616500687.

Abstract:
The integrity verification of on-chip flash memory data used as code memory is becoming important in microcontroller-based applications such as automotive systems. On-the-fly memory fail detection requires a fast detection method that runs seamlessly in the background, without interrupting CPU operation, together with low-power flash-access hardware, to provide safety-conscious execution of the user-programmed firmware during system operation. In this paper, a newly designed read-path architecture based on binary inversion techniques is proposed for microcontrollers with embedded on-chip flash. The proposed binary inversion method also enables fail-safe, low-power memory access with zero hardware overhead by embedding the scramble flags in the cyclic redundancy check (CRC) protection code. Time-multiplexed CRC calculation for the bit-inverted binary code is executed automatically in a silent background mode during CPU idle time, without any CPU wait cost. The implementation results show that the de-inversion procedure could be achieved with just 1,024 additional bits of CRC data in the case of 64 sectors for a 4 KB flash memory, reducing the area of the previous work by 75%. The code memory integrity verification time in the seamless background mode is about 30% of that of the conventional foreground method, and the total average current during code execution of the Dhrystone benchmark is just 15% of the baseline.
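The underlying integrity check can be illustrated with a plain software CRC (CRC-16/CCITT, bitwise) over a stand-in flash sector; the paper's contributions, the background calculation and binary-inversion scramble flags, sit on top of such a checksum and are not reproduced here:

# Bitwise CRC-16/CCITT (polynomial 0x1021, init 0xFFFF) over a sector,
# with a reference value stored alongside for runtime verification.

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

sector = bytes(range(64))                   # stand-in for a 64-byte code sector
reference = crc16_ccitt(sector)             # stored alongside the sector
assert crc16_ccitt(sector) == reference     # runtime verification passes
corrupted = bytes([sector[0] ^ 0x01]) + sector[1:]
assert crc16_ccitt(corrupted) != reference  # a single bit flip is detected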
19

Abdallah, Asmaa, M. Ali, Jelena Mišić, and Vojislav Mišić. "Efficient Security Scheme for Disaster Surveillance UAV Communication Networks." Information 10, no. 2 (January 29, 2019): 43. http://dx.doi.org/10.3390/info10020043.

Abstract:
Unmanned Aerial Vehicles (UAVs) play a significant role in alleviating the negative impacts of disasters by providing essential assistance to rescue and evacuation operations in the affected areas. The reliability of UAV connections and the accuracy of the exchanged information are therefore critical parameters. In this paper, we propose a networking and security architecture for a disaster surveillance UAV system. The networking scheme involves a two-tier cluster network based on IEEE 802.11ah, which can provide traffic isolation between the tiers. The security scheme guarantees the accuracy and availability of the information collected from the disaster area by applying fingerprint features and data redundancy techniques; the proposed scheme also utilizes the lightweight Ring-Learning with Errors (Ring-LWE) cryptosystem to assure the confidentiality of the transmitted data with low overhead.
20

Dong, Wei, Qiang Yang, and Xinli Fang. "Multi-Step Ahead Wind Power Generation Prediction Based on Hybrid Machine Learning Techniques." Energies 11, no. 8 (July 30, 2018): 1975. http://dx.doi.org/10.3390/en11081975.

Abstract:
Accurate generation prediction at multiple time steps is of paramount importance for the reliable and economical operation of wind farms. This study proposes a novel algorithmic solution that combines several machine learning techniques in a hybrid manner, including phase space reconstruction (PSR), input variable selection (IVS), K-means clustering, and the adaptive neuro-fuzzy inference system (ANFIS). The PSR technique transforms the historical time series into a set of phase-space variables, which are combined with numerical weather prediction (NWP) data to prepare candidate inputs. A minimal redundancy maximal relevance (mRMR) criterion-based filtering approach is used to automatically select the optimal input variables for multi-step-ahead prediction. The input instances are then divided into subsets using K-means clustering to train the ANFIS, whose parameters are further optimized with the particle swarm optimization (PSO) algorithm to improve prediction performance. The proposed solution is extensively evaluated through case studies of two realistic wind farms, and the numerical results clearly confirm its effectiveness and improved prediction accuracy compared to benchmark solutions.
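The PSR step is essentially time-delay embedding; a minimal version follows, with the embedding dimension and delay chosen arbitrarily for illustration (in practice they are tuned, and the mRMR, K-means, ANFIS, and PSO stages are not shown):

# Time-delay embedding: each row is a delay vector
# (x_t, x_{t+tau}, ..., x_{t+(m-1)tau}) built from the scalar series.

import numpy as np

def embed(series: np.ndarray, m: int = 3, tau: int = 2) -> np.ndarray:
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])

wind = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.default_rng(1).standard_normal(200)
X = embed(wind)            # candidate inputs for the downstream predictor
print(X.shape)             # (196, 3)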
21

Durand, J. M. "Satellite Navigation: GPS Inadequacies: Comparative Study into Solutions for Civil Aviation." Journal of Navigation 43, no. 1 (January 1990): 8–17. http://dx.doi.org/10.1017/s037346330001376x.

Abstract:
The Global Positioning System (GPS) will be an extremely high-performance satellite-based navigation system which is expected to provide a sole means air navigation service for most aeronautical flight phases. It will be particularly suitable for ‘en route’, ‘terminal’ and ‘non-precision approach’ phases, thus providing substantial savings on aircraft operating costs. However, GPS has three major disadvantages for civil aviation: (1) Insufficient system integrity, since satellites can transmit erroneous information for two hours before being repaired or neutralized. In such an event, the many simultaneous users of the satellite that has lost its integrity can derive false positions and remain unaware of the problem. (2) Availability constrained by the limited number of satellites. Users are then unable to obtain a position fix or else obtain a result with significantly degraded performance. (3) Deliberate spatio-temporal degradation (selective availability) of system performance, the characteristics of which are not fully known or defined. Many solutions to these problems have been put forward. One concept uses the redundancy of the GPS system itself (receiver autonomous integrity monitoring). Another set of solutions is based on complementary information from autonomous navigation equipment (altimeter, clock, inertial system) or external navigation systems already available or being developed (Omega, Loran-C, GLONASS). A third type of solution is to implement a system by which to monitor the status of the GPS satellites and broadcast the information to users. This paper reports on the different techniques put forward and uses different qualitative criteria (technical feasibility, cost, political independence, etc.) to assess their suitability for civil aviation applications. The comparison leads to the recommendation of a system to monitor the status of the GPS satellites and broadcast the information to users. The characteristics of such messages would be as similar as possible to those of GPS messages.
22

Niknamfar, Amir Hossein, Seyed Armin Akhavan Niaki, and Marziyeh Karimi. "A series-parallel inventory-redundancy green allocation system using a max-min approach via the interior point method." Assembly Automation 38, no. 3 (August 6, 2018): 323–35. http://dx.doi.org/10.1108/aa-07-2017-085.

Abstract:
Purpose: The purpose of this study is to develop a novel and practical series-parallel inventory-redundancy allocation system in a green supply chain comprising a single manufacturer and multiple retailers operating in several positions without any conflict of interests. The manufacturer first produces multiple products and then dispatches them to the retailers at different wholesale prices based on a common replenishment cycle policy; the retailers, in turn, sell the purchased products to customers at different retail prices. The manufacturer thereby faces a redundancy allocation problem (RAP), whose solution enhances the reliability of the production system. Furthermore, to address global warming and human health concerns, this paper considers both the tax cost of the industrial greenhouse gas (GHG) emissions of all produced products and a limit on total GHG emissions.
Design/methodology/approach: The manufacturer intends not only to maximize the total net profit but also to minimize the mean time to failure of the production system using a RAP. To achieve these objectives, a max-min approach coupled with the interior point method is used to maximize the minimum (worst) value of the objective functions. Numerical experiments demonstrate the applicability of the proposed methodology, and a sensitivity analysis of the green supply chain approach provides further insight.
Findings: The computational results showed that increasing the number of products and retailers can substantially increase the total net profit, indicating that the manufacturer has a strong incentive to add a new retailer to the green supply chain. Moreover, an increase in the number of machines significantly improves the reliability of the production system. The sensitivity analysis on the green approach indicated that increasing the number of machines has a substantial impact on both the total net profit and the total tax cost. In addition, the proposed green supply chain was not only more efficient than the supply chain without the green approach but also far more sensitive to the tax cost of GHG emissions than to the number of machines.
Originality/value: In summary, the contributions are the development of a bi-objective series-parallel inventory-RAP in a green supply chain, a hybrid inventory-RAP, and the use of the interior point solution method. The method combines theoretical and experimental techniques and has industrial applications. The advantage of the proposed approach is that it generates additional opportunities and cost-effectiveness for businesses and companies operating a green supply chain under an inventory model.
23

C, Subramani, S. S. Dash, Vimala C, and Uma Mageshwari. "Impact of Distributed Power Flow Controller to Improve Line Flow Based on PWM Control with PI Technique." Indonesian Journal of Electrical Engineering and Computer Science 4, no. 1 (November 4, 2016): 57. http://dx.doi.org/10.11591/ijeecs.v4.i1.pp57-64.

Abstract:
This paper presents a new component within the flexible AC transmission system (FACTS) family, called the Distributed Power-Flow Controller (DPFC). The DPFC is derived from the unified power-flow controller (UPFC) and can be considered a UPFC with an eliminated common DC link. The active power exchange between the shunt and series converters, which takes place through the common DC link in the UPFC, instead occurs through the transmission lines at the third-harmonic frequency. The DPFC employs the distributed FACTS (D-FACTS) concept, which uses multiple small-size single-phase converters instead of the one large-size three-phase series converter of the UPFC. The large number of series converters provides redundancy, thereby increasing system reliability. As the D-FACTS converters are single-phase and floating with respect to ground, no high-voltage isolation is required between the phases; accordingly, the cost of the DPFC system is lower than that of the UPFC. The DPFC has the same control capability as the UPFC, comprising the adjustment of the line impedance, the transmission angle, and the bus voltage. The controller is designed to achieve the most appropriate operating point based on real-power priority.
24

Liberos, Marian, Raúl González-Medina, Iván Patrao, Gabriel Garcerá, and Emilio Figueres. "A Control Scheme to Suppress Circulating Currents in Parallel-Connected Three-Phase Inverters." Electronics 11, no. 22 (November 13, 2022): 3720. http://dx.doi.org/10.3390/electronics11223720.

Abstract:
The parallel operation of inverters has many benefits, such as modularity and redundancy. However, the parallel connection of inverters produces circulating currents that may cause malfunctions of the system. In this work, a control technique for eliminating the low-frequency components of the circulating currents in grid-connected inverters is presented. The proposed control structure contains n-1 zero-sequence control loops, with n being the number of inverters connected in parallel. Simulation and experimental results have been obtained on a prototype composed of two 5 kW inverters connected in parallel, considering the following mismatches between the inverters: the inductance values of the grid filters, unbalance in the delivered power, and the use of different modulation techniques.
25

Sharma, Meera, Abhishek Tandon, Madhu Kumari, and V. B. Singh. "Reduction of Redundant Rules in Association Rule Mining-Based Bug Assignment." International Journal of Reliability, Quality and Safety Engineering 24, no. 06 (November 17, 2017): 1740005. http://dx.doi.org/10.1142/s0218539317400058.

Abstract:
Bug triaging is the process of deciding what to do with newly arriving bug reports. In this paper, we have mined association rules for predicting the assignee of a newly reported bug using different bug attributes, namely severity, priority, component, and operating system. To deal with the problem of large data sets, we have taken subsets of the data set by dividing it with the k-means clustering algorithm. We have used the Apriori algorithm in MATLAB to generate association rules and have extracted the rules for the top 5 assignees in each cluster. The proposed method has been empirically validated on 14,696 bug reports of the Mozilla open source software projects Seamonkey, Firefox, and Bugzilla. In our approach, taking these attributes (severity, priority, component, and operating system) as antecedents, essential rules outnumber redundant rules, whereas in [M. Sharma and V. B. Singh, Clustering-based association rule mining for bug assignee prediction, Int. J. Business Intell. Data Mining 11(2) (2017) 130–150] essential rules are fewer than redundant rules in every cluster. The proposed method thus improves over the existing techniques for the bug assignment problem.
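A toy rule miner in the paper's spirit, with invented bug-report attributes as antecedents and the assignee as the consequent; the real study runs Apriori per k-means cluster over the Mozilla data:

# Count antecedent -> assignee co-occurrences and keep rules that meet
# minimum support (absolute count) and confidence thresholds.

from collections import Counter
from itertools import combinations

reports = [
    ({"sev=major", "os=linux", "comp=ui"}, "alice"),
    ({"sev=major", "os=linux", "comp=net"}, "alice"),
    ({"sev=minor", "os=win", "comp=ui"}, "bob"),
    ({"sev=major", "os=linux", "comp=ui"}, "alice"),
]

def rules(reports, min_support=2, min_conf=0.8):
    out = []
    for r in range(1, 3):  # antecedents of size 1 and 2
        antecedent_counts, rule_counts = Counter(), Counter()
        for attrs, assignee in reports:
            for ante in combinations(sorted(attrs), r):
                antecedent_counts[ante] += 1
                rule_counts[(ante, assignee)] += 1
        for (ante, who), n in rule_counts.items():
            conf = n / antecedent_counts[ante]
            if n >= min_support and conf >= min_conf:
                out.append((set(ante), who, conf))
    return out

for ante, who, conf in rules(reports):
    print(f"{ante} -> {who}  (conf={conf:.2f})")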
26

Zhou, Qing, Yuelei Xu, Jarhinbek Rasol, Tian Hui, Chaofeng Yuan, and Fan Li. "Reliable Design and Control Implementation of Parallel DC/DC Converter for High Power Charging System." Machines 10, no. 12 (December 4, 2022): 1162. http://dx.doi.org/10.3390/machines10121162.

Abstract:
With the current popularity of Electric Vehicles (EVs), especially in critical EV applications such as hospital EV fleets, the demand for a continuous and reliable power supply is increasing. However, most charging stations are powered in a centralized way, which subjects transistors and other components to high voltage and current stresses that reduce reliability, and a single point of failure can cause the entire system to fail. Therefore, a significant effort is made in this paper to improve the reliability of the charging system. In terms of structure, a distributed charging architecture with fault tolerance is designed and a mathematical model to evaluate its reliability is proposed. In terms of control, a current-sharing control algorithm that can achieve fault tolerance is designed. To further improve the reliability of the system, a thermal-sharing control method based on the current-sharing technology is also designed; this method distributes the load more rationally according to differences in component performance and operating environment. FPGA-based control techniques are provided, and innovative ideas for pipeline control and details of the mathematical reasoning for key IP cores are presented. Experiments show that N+1 redundancy fault tolerance can be achieved in both current-sharing and thermal-sharing modes. In the current-sharing experiment, when module 3 failed, the total current fluctuated by only 800 mA within 500 ms, which is satisfactory. In the thermal-sharing experiment, after module 3 failed, modules 1, 2, and 4 adjusted their currents under the correction of the thermal-sharing loop, and the total current remained stable throughout the process. The experimental results prove that the proposed charging system structure and control method are feasible and effective.
27

Yang, Chunbo, Shengkui Zeng, and Jianbin Guo. "Reliability Analysis of Load-Sharing K-out-of-N System Considering Component Degradation." Mathematical Problems in Engineering 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/726853.

Abstract:
The K-out-of-N configuration is a typical form of redundancy technique used to improve system reliability, where at least K out of N components must work for successful operation of the system. When the components are degraded, more components are needed to meet the system requirement, which means that the value of K has to increase. Current reliability analysis methods overestimate reliability because using a constant K ignores the degradation effect. In a load-sharing system with degrading components, the workload shared by each surviving component increases after a random component failure, resulting in a higher failure rate and an increased performance degradation rate. This paper proposes a method combining a tampered failure rate model with a performance degradation model to analyze the reliability of a load-sharing K-out-of-N system with degrading components. The proposed method treats the value of K as a variable derived from the performance degradation model, while the load-sharing effect is evaluated by the tampered failure rate model. A Monte-Carlo simulation procedure is used to estimate the discrete probability distribution of K. The case of a solar panel is studied, and the result shows that the reliability considering component degradation is less than that ignoring component degradation.
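A Monte-Carlo sketch of a load-sharing k-out-of-n mission: after each failure the survivors carry more load, so their per-unit failure rate rises (a tampered-failure-rate effect). Exponential lifetimes and the rate model are illustrative; the paper additionally couples this with a degradation model that makes K itself variable:

# Simulate failures until fewer than k units remain or the mission ends.

import random

def mission_success(n=6, k=4, base_rate=0.01, mission_time=100.0):
    t, working = 0.0, n
    while working >= k:
        rate_each = base_rate * (n / working)         # shared load raises per-unit rate
        t += random.expovariate(rate_each * working)  # time to the next failure
        if t > mission_time:
            return True
        working -= 1
    return False

trials = 20000
print(sum(mission_success() for _ in range(trials)) / trials)  # estimated reliability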
28

Chepkyi, V., V. Skachkov, O. Yefymchykov, V. Nabok, and O. Yelchaninov. "TECHNOLOGICAL APPROACH TO SOLVING THE PROBLEM OF MINIMIZING THE INFLUENCE OF DESTABILIZING FACTORS ON THE WORK OF DISTRIBUTED STRUCTURES OF THE GROUND-ROCKOTECHNICAL COMPLEX IN AN INTEGRAL PROJECT «OBJECT-SYSTEM»." Collection of scientific works of Odesa Military Academy, no. 11 (December 27, 2019): 5–19. http://dx.doi.org/10.37129/2313-7509.2019.11.5-19.

Abstract:
Mobile structures of a ground-based robotic complex (RTC) are investigated as an active component of an integrated "object-system" project operated in a destabilizing environment. The relevant problem of minimizing the influence of external destabilizing factors on the operation of mobile, spatially distributed structures of the ground-based RTC is stated using the conceptual apparatus of complex, poorly formalized, multicomponent technical systems. Following this approach, the basic principles of distributed control are determined and applied in the mobile structures of the ground-based RTC with elements of subsidiarity. The quintessence of the latter is the technology of multi-antenna MIMO systems, which makes it possible to identify trade-offs between classical transmission methods and strategies for receiving and processing MIMO signals in the multi-sensor channels of the information-control system (ICS) and in radio communication with the data and command transmission system. Given the complexity of the stated tasks, a set of technological functions for reducing the influence of destabilizing factors, together with practical variations of the algorithms for obtaining the target result, is proposed. A situational model of reducing (minimizing) information losses at the output of the information-control system of the ground-based robotic complex under destabilization has been built. Options for achieving the target result are proposed: integration of structural and parametric adaptation methods, MIMO technologies, and code-division multiplexing (CDMA) channel techniques, taking into account the heterogeneity of the information exchange channels and the artificial redundancy of the system with respect to the number of external interference sources.
29

Baba, Maveeya, Nursyarizal B. M. Nor, M. Aman Sheikh, Muhammad Irfan, and Mohammad Tahir. "A Strategic and Significant Method for the Optimal Placement of Phasor Measurement Unit for Power System Network." Symmetry 12, no. 7 (July 16, 2020): 1174. http://dx.doi.org/10.3390/sym12071174.

Abstract:
Currently, the modern power system relies on precise monitoring of electrical quantities such as voltage and current phasors. Occasionally, its operation is disturbed by fluctuations in load and generation, which may interrupt the power supply or cause catastrophic failure. The advanced technology of the phasor measurement unit (PMU) was introduced in the late 1990s to measure the behavior of the power system more symmetrically, accurately, and precisely. However, implementing this device at every busbar in a grid station is not an easy task because of its expensive installation and manufacturing cost; an optimum placement of PMUs is therefore much needed. This paper proposes a new symmetry approach with multiple objectives for the optimum PMU placement problem (OPPP), minimizing the number of installed PMUs while maximizing the measurement redundancy of the network. To overcome the drawbacks of traditional techniques, the proposed work applies a reduction and exclusion of pure transit nodes in the placement set, in which only the strategic, significant, and most desirable buses are selected, without considering zero injection buses (ZIBs). The fundamental novelty of the proposed work is this reduction of ZIBs from the optimum PMU locations, whereas almost every prior algorithm has taken ZIBs into its optimal placement sets. Furthermore, PMU channel limits and alternative symmetry locations for PMU placement are considered for when an outage or PMU failure occurs. The performance of the proposed method is verified on different IEEE standard systems (IEEE-9, IEEE-14, IEEE-24, IEEE-30, IEEE-57, and IEEE-118) and a New England 39-bus system, and its results are compared with the outcomes of existing techniques from the literature.
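For contrast with the paper's approach, the observability constraint itself can be met with a simple greedy set-cover heuristic (a PMU observes its own bus and its neighbours); the 7-bus graph is invented, and the ZIB reduction and outage cases are not reproduced:

# Greedy PMU placement: repeatedly place a PMU on the bus that newly
# observes the most unobserved buses.

ADJACENCY = {
    1: {2, 3}, 2: {1, 4, 5}, 3: {1, 6},
    4: {2}, 5: {2, 7}, 6: {3}, 7: {5},
}

def greedy_pmu_placement(adj):
    unobserved, placed = set(adj), []
    while unobserved:
        best = max(adj, key=lambda b: len(({b} | adj[b]) & unobserved))
        placed.append(best)
        unobserved -= {best} | adj[best]
    return placed

print(greedy_pmu_placement(ADJACENCY))   # buses [2, 3, 5] observe all 7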
30

Marinho, Murilo M., Mariana C. Bernardes, and Antonio P. L. Bo. "Using General-Purpose Serial-Link Manipulators for Laparoscopic Surgery with Moving Remote Center of Motion." Journal of Medical Robotics Research 01, no. 04 (November 30, 2016): 1650007. http://dx.doi.org/10.1142/s2424905x16500070.

Abstract:
Minimally invasive surgical systems are widely used to aid operating rooms across the globe. Although arguably successful in laparoscopic surgery, the da Vinci robotic system has limitations, mostly regarding cost and the lack of compensation for the patient's physiological motion. To obtain a more cost-effective alternative, earlier works used general-purpose, fully actuated serial-link robots to control instruments in laparoscopic research using constrained Jacobian techniques. In contrast with those works, we present a new technique to solve the laparoscopic constraints for the serial-link manipulator by using a constrained trajectory. This novel technique allows complex 3D remote center-of-motion trajectories to be taken into account. Moreover, it does not suffer from drifting and is less prone to singularity-related issues, as it can be used with redundant manipulators. Proof-of-concept experiments are performed on artificial trajectories with static and moving trocar points using a physical robot manipulator. Furthermore, the system is tested with input from 13 medically untrained users in an endoscope navigation task. The experiments show that the system can be operated reliably under arbitrary and unpredictable user inputs.
31

He, Jun, Shixi Yang, Evangelos Papatheou, Xin Xiong, Haibo Wan, and Xiwen Gu. "Investigation of a multi-sensor data fusion technique for the fault diagnosis of gearboxes." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 233, no. 13 (March 4, 2019): 4764–75. http://dx.doi.org/10.1177/0954406219834048.

Abstract:
The gearbox is the key functional unit in a mechanical transmission system. Because its operating conditions are complex and interference is transmitted along diverse paths, the vibration signals collected from an individual sensor may not provide a fully accurate description of the health condition of a gearbox. For this reason, a new method for the fault diagnosis of gearboxes based on multi-sensor data fusion is presented in this paper. There are three main steps in this method. First, prior to feature extraction, two signal processing methods, the energy operator and time synchronous averaging, are applied to the multi-sensor vibration signals to remove interference and highlight fault characteristic information; statistical features are then extracted from both the raw and preprocessed signals to form an original feature set. Second, a coupled feature selection scheme combining the distance evaluation technique with max-relevance and min-redundancy is carried out to obtain an optimal feature set. Finally, the deep belief network, an intelligent diagnosis method with a deep architecture, is applied to identify the different gearbox health conditions. As the multi-sensor data fusion technique provides sufficient and complementary information for fault diagnosis, this method holds the potential to overcome the shortcomings of an individual sensor that may not accurately describe the health condition of a gearbox. Ten different gearbox health conditions are simulated to validate the performance of the proposed method, and the results confirm its superiority in gearbox fault diagnosis.
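One of the two preprocessing steps, time synchronous averaging, is simple to sketch: averaging the signal over rotations keeps shaft-synchronous components while asynchronous noise cancels. The samples-per-revolution figure is illustrative:

# Time synchronous averaging: reshape the signal into whole revolutions
# and average them into a single representative revolution.

import numpy as np

def tsa(signal: np.ndarray, samples_per_rev: int) -> np.ndarray:
    n_revs = len(signal) // samples_per_rev
    revs = signal[: n_revs * samples_per_rev].reshape(n_revs, samples_per_rev)
    return revs.mean(axis=0)

rng = np.random.default_rng(0)
t = np.arange(4096)
raw = np.sin(2 * np.pi * t / 128) + 0.5 * rng.standard_normal(t.size)
print(tsa(raw, samples_per_rev=128).shape)   # (128,), noise reduced ~1/sqrt(32)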
APA, Harvard, Vancouver, ISO, and other styles
32

Anwer, Jahanzeb, Sebastian Meisner, and Marco Platzner. "Dynamic Reliability Management for FPGA-Based Systems." International Journal of Reconfigurable Computing 2020 (June 13, 2020): 1–19. http://dx.doi.org/10.1155/2020/2808710.

Full text
Abstract:
Radiation tolerance in FPGAs is an important field of research, particularly for reliable computation in electronics used in aerospace and satellite missions. The motivation behind this research is the degradation of reliability in FPGA hardware due to single-event effects caused by radiation particles. Redundancy is a commonly used technique to enhance the fault tolerance of radiation-sensitive applications. However, redundancy comes with an overhead in terms of excessive area consumption, latency, and power dissipation. Moreover, redundant circuit implementations vary in structure and resource usage with the redundancy insertion algorithm as well as the number of redundant stages used. The radiation environment varies over the operational time span of a mission, depending on the orbit and space weather conditions. Therefore, the overheads due to redundancy should also be optimized at run-time with respect to the current radiation level. In this paper, we propose a technique called Dynamic Reliability Management (DRM) that acquires and interprets radiation data, selects a suitable redundancy level, and performs run-time reconfiguration, thus varying the reliability level of the target computation modules. DRM is composed of two parts. The design-time tool flow generates a library of redundant implementations of the circuit with different magnitudes of performance factors. The run-time tool flow, using the radiation/error-rate data, selects the required redundancy level and reconfigures the computation module with the corresponding redundant implementation. Both parts of DRM have been verified by experimentation on various benchmarks. The most significant finding from this experimentation is that performance can be scaled several-fold by using the partial reconfiguration feature of DRM: our data sorter and matrix multiplier case studies achieved 7.7 and 3.7 times better performance, respectively, than static reliability management techniques. DRM thus maintains a suitable trade-off between computation reliability and performance overhead during the run-time of an application.
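The run-time part of DRM, as described, maps a measured radiation/error rate to one of several precompiled redundant implementations. Below is a minimal Python sketch of such a selector; the thresholds are invented, and load_partial_bitstream stands in for a platform-specific partial-reconfiguration call that is assumed here, not taken from the paper.

```python
import bisect

# Invented upset-rate thresholds (events per second) separating the
# redundancy levels; the paper derives its own mapping from radiation
# data, which is not reproduced here.
THRESHOLDS = [1e-7, 1e-5]
IMPLEMENTATIONS = ["plain", "tmr", "tmr_cascaded"]  # precompiled variants

def select_implementation(upset_rate):
    """Pick the cheapest precompiled variant rated for the current
    radiation-induced upset rate."""
    return IMPLEMENTATIONS[bisect.bisect_right(THRESHOLDS, upset_rate)]

def manage(module, upset_rate, current_impl):
    """Reconfigure only when the required redundancy level changes."""
    target = select_implementation(upset_rate)
    if target != current_impl:
        # Hypothetical platform call that performs the partial
        # reconfiguration with the matching bitstream.
        load_partial_bitstream(module, target)
    return target
```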
APA, Harvard, Vancouver, ISO, and other styles
33

Cho, Hwan-ho, Seung-hak Lee, Jonghoon Kim, and Hyunjin Park. "Classification of the glioma grading using radiomics analysis." PeerJ 6 (November 22, 2018): e5982. http://dx.doi.org/10.7717/peerj.5982.

Full text
Abstract:
Background: Grading of gliomas is critical information related to prognosis and survival. We aimed to apply a radiomics approach using various machine learning classifiers to determine glioma grade. Methods: We considered 285 cases (high grade n = 210, low grade n = 75) obtained from the Brain Tumor Segmentation 2017 Challenge. Manual annotations of enhancing tumors, non-enhancing tumors, necrosis, and edema were provided by the database. Each case was multi-modal, with T1-weighted, T1-contrast-enhanced, T2-weighted, and FLAIR images. Five-fold cross-validation was adopted to separate the training and test data. A total of 468 radiomics features were calculated for three types of regions of interest. The minimum redundancy maximum relevance algorithm was used to select features useful for classifying glioma grade in the training cohort. The selected features were used to build three classifier models: logistic regression, support vector machine, and random forest. Classification performance was measured in the training cohort using accuracy, sensitivity, specificity, and the area under the curve (AUC) of the receiver operating characteristic curve. The trained classifier models were then applied to the test cohort. Results: Five significant features were selected for the machine learning classifiers, and the three classifiers showed an average AUC of 0.9400 for the training cohorts and 0.9030 for the test cohorts (logistic regression 0.9010, support vector machine 0.8866, and random forest 0.9213). Discussion: Glioma grade can be accurately determined using machine learning and feature selection techniques in conjunction with a radiomics approach. The results of our study might contribute to a high-throughput computer-aided diagnosis system for gliomas.
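The evaluation protocol in this abstract (three models, five-fold cross-validation, AUC) is easy to reproduce in outline with scikit-learn. The sketch below uses synthetic stand-in data matching the study's dimensions; the mRMR feature-selection step is omitted and would, in practice, need to run inside each fold to avoid leakage.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in with the study's dimensions: 285 cases
# (roughly 26% low grade), 468 radiomics features.
X, y = make_classification(n_samples=285, n_features=468,
                           weights=[0.26], random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```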
APA, Harvard, Vancouver, ISO, and other styles
34

van de Walle, A. "Merging Fractal Image Compression and Wavelet Transform Methods." Fractals 05, supp01 (April 1997): 3–15. http://dx.doi.org/10.1142/s0218348x97000590.

Full text
Abstract:
Fractal image compression and wavelet transform methods can be combined into a single compression scheme by using an iterated function system to generate the wavelet coefficients. The main advantage of this approach is to significantly reduce the tiling artifacts: operating in wavelet space allows range blocks to overlap without introducing redundant coding. Our scheme also permits reconstruction in a finite number of iterations and lets us relax convergence criteria. Moreover, wavelet coefficients provide a natural and efficient way to classify domain blocks in order to shorten compression times. Conventional fractal compression can be seen as a particular case of our general algorithm if we choose the Haar wavelet decomposition. On the other hand, our algorithm gradually reduces to conventional wavelet compression techniques as more and more range blocks fail to be properly approximated by rescaled domain blocks.
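For readers unfamiliar with the wavelet space in which the scheme's range and domain blocks live, the following sketch performs a plain 2-D Haar decomposition with the PyWavelets library; it illustrates only the transform, not the iterated-function-system coding built on top of it, and the toy image is invented.

```python
import numpy as np
import pywt  # PyWavelets

# Toy 8x8 "image"; in the combined scheme, range/domain block matching
# operates on coefficients like these rather than on raw pixels.
image = np.arange(64, dtype=float).reshape(8, 8)
cA, *details = pywt.wavedec2(image, "haar", level=3)

print("approximation band:", cA.shape)
# Detail tuples run from the coarsest to the finest level.
for depth, (cH, cV, cD) in enumerate(details, start=1):
    print(f"detail level {depth}:", cH.shape, cV.shape, cD.shape)
```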
APA, Harvard, Vancouver, ISO, and other styles
35

Ochoa, Joan, Emilio García, Eduardo Quiles, and Antonio Correcher. "Redundant Fault Diagnosis for Photovoltaic Systems Based on an IRT Low-Cost Sensor." Sensors 23, no. 3 (January 24, 2023): 1314. http://dx.doi.org/10.3390/s23031314.

Full text
Abstract:
In large solar farms, supervision is an exhaustive task, often carried out manually by field technicians. Automated or semi-automated fault detection and prevention methods are becoming increasingly common in large photovoltaic plants. The same does not apply to small and medium-sized installations, where supervision at that level would be economically infeasible. Although suppliers provide prevention protocols, periodic inspections of the facilities by technicians do not ensure that faults such as the appearance of hot-spots are detected in time. As a result, the only continuous supervision of a small or medium installation is today often carried out by unqualified personnel, in a purely visual way. In this work, the development of a low-cost prototype system is proposed for the supervision of a small or medium photovoltaic installation based on the acquisition and processing of thermographic images, with the aim of investigating the feasibility of an actual implementation. The work focuses on the system's ability to detect hot-spots in supervised panels and to report detected faults successfully. To achieve this goal, a low-cost thermal imaging camera is used for development, applying common image processing techniques with the OpenCV and MATLAB R2021b libraries. The results demonstrate that the hottest points of a photovoltaic (PV) installation can be successfully detected with a much cheaper camera than those used in today's thermographic inspections, opening up the possibility of a fully developed low-cost thermographic surveillance system.
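A hot-spot detector of the kind described can be prototyped in a few lines of OpenCV. The sketch below thresholds a grayscale thermogram relative to its mean intensity and returns bounding boxes of hot regions; the threshold offset and minimum area are invented tuning parameters, and radiometric calibration is out of scope.

```python
import cv2
import numpy as np

def detect_hot_spots(thermal_gray, delta=25, min_area=20):
    """Return bounding boxes of regions hotter than the panel average.

    `thermal_gray` is an 8-bit grayscale thermogram whose intensity
    grows with temperature; `delta` (intensity units) and `min_area`
    (pixels) are illustrative tuning parameters.
    """
    blurred = cv2.GaussianBlur(thermal_gray, (5, 5), 0)
    thresh = min(int(blurred.mean()) + delta, 255)
    _, mask = cv2.threshold(blurred, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > min_area]

# Synthetic frame with one artificial hot-spot for a quick check.
frame = np.full((120, 160), 90, dtype=np.uint8)
frame[40:50, 60:75] = 200
print(detect_hot_spots(frame))  # one box around the hot region
```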
APA, Harvard, Vancouver, ISO, and other styles
36

He, Shaomei, Forrest I. Bishop, and Katherine D. McMahon. "Bacterial Community and “Candidatus Accumulibacter” Population Dynamics in Laboratory-Scale Enhanced Biological Phosphorus Removal Reactors." Applied and Environmental Microbiology 76, no. 16 (July 2, 2010): 5479–87. http://dx.doi.org/10.1128/aem.00370-10.

Full text
Abstract:
ABSTRACT “Candidatus Accumulibacter” and total bacterial community dynamics were studied in two lab-scale enhanced biological phosphorus removal (EBPR) reactors by using a community fingerprint technique, automated ribosomal intergenic spacer analysis (ARISA). We first evaluated the quantitative capability of ARISA compared to quantitative real-time PCR (qPCR). ARISA and qPCR provided comparable relative quantification of the two dominant “Ca. Accumulibacter” clades (IA and IIA) detected in our reactors. The quantification of total “Ca. Accumulibacter” 16S rRNA genes relative to that from the total bacterial community was highly correlated, with ARISA systematically underestimating “Ca. Accumulibacter” abundance, probably due to the different normalization techniques applied. During 6 months of normal (undisturbed) operation, the distribution of the two clades within the total “Ca. Accumulibacter” population was quite stable in one reactor while comparatively dynamic in the other reactor. However, the variance in the clade distribution did not appear to affect reactor performance. Instead, good EBPR activity was positively associated with the abundance of total “Ca. Accumulibacter.” Therefore, we concluded that the different clades in the system provided functional redundancy. We disturbed the reactor operation by adding nitrate together with acetate feeding in the anaerobic phase to reach initial reactor concentrations of 10 mg/liter NO3-N for 35 days. The reactor performance deteriorated with a concomitant decrease in the total “Ca. Accumulibacter” population, suggesting that a population shift was the cause of performance upset after a long exposure to nitrate in the anaerobic phase.
APA, Harvard, Vancouver, ISO, and other styles
37

Baba, Maveeya, Nursyarizal B. M. Nor, Muhammad Aman Sheikh, Abdul Momin Baba, Muhammad Irfan, Adam Glowacz, Jaroslaw Kozik, and Anil Kumar. "Optimization of Phasor Measurement Unit Placement Using Several Proposed Case Factors for Power Network Monitoring." Energies 14, no. 18 (September 7, 2021): 5596. http://dx.doi.org/10.3390/en14185596.

Full text
Abstract:
Recent developments in electrical power systems are concerned not only with static power flow control but also with control during dynamic processes. Smart Grids came into being when it was noticed that the traditional electrical power system structure was lacking in reliability, power flow control, and consistency in the monitoring of phasor quantities. The Phasor Measurement Unit (PMU) is one of the most critical components for Smart Grid (SG) operation. It can provide real-time synchronized measurement of phasor quantities with the help of a Global Positioning System (GPS). However, given its installation cost, a PMU is far too expensive to equip on every busbar in every grid station. Therefore, this paper proposes a new approach to the Optimal PMU Placement (OPP) problem that minimizes the number of installed PMUs and maximizes the measurement redundancy of the network. The proposed approach uses an exclusion-of-unwanted-nodes technique, in which only the most desirable buses, namely generator and load buses, are selected, and Pure Transit Nodes (PTNs) are not considered in the optimal PMU placement sets. The focal point of the proposed work is this exclusion of PTNs from the optimal PMU locations, since almost all prior algorithms have admitted PTNs into their optimal placement sets. Furthermore, other case factors of the proposed approach, namely PMU channel limits, radial buses, and single-PMU outage, are also considered for the OPP problem. The proposed work is tested on standard Institute of Electrical and Electronics Engineers (IEEE) test cases from MATPOWER in MATLAB. To show the success of the proposed work, the outputs are compared with existing techniques.
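The OPP problem sketched in the abstract is naturally expressed as a small integer program: minimize the number of PMUs subject to every bus being observed by itself or a neighbour, with pure transit nodes excluded as candidate locations. The toy model below uses the PuLP library on an invented 7-bus topology; it mirrors the constraint structure, not the paper's exact formulation.

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

# Invented 7-bus topology: bus -> neighbouring buses.
adj = {1: [2], 2: [1, 3, 6], 3: [2, 4, 6], 4: [3, 5, 7],
       5: [4], 6: [2, 3, 7], 7: [4, 6]}
pure_transit = {6}  # buses excluded as candidate PMU locations

x = {i: LpVariable(f"pmu_{i}", cat=LpBinary) for i in adj}
prob = LpProblem("pmu_placement", LpMinimize)
prob += lpSum(x.values())                  # minimize installed PMUs
for i, nbrs in adj.items():
    # Bus i is observable if it or any neighbour hosts a PMU.
    prob += lpSum(x[j] for j in [i, *nbrs]) >= 1
for i in pure_transit:
    prob += x[i] == 0                      # exclude pure transit nodes
prob.solve()
print({i: int(x[i].value()) for i in adj})
```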
APA, Harvard, Vancouver, ISO, and other styles
38

Kamel, Khaled, and Eman Kamel. "PLC Batch Process Control Design and Implementation Fundamentals." September 2020 2, no. 3 (June 9, 2020): 155–61. http://dx.doi.org/10.36548/jei.2020.3.001.

Full text
Abstract:
Batch process control is typically used for repeated chemical reaction tasks. It starts with measured liquid-filling operations, followed by a controlled reaction and the discharge or transfer of the processed material. The input material is contained in a reactor vessel and subjected to a sequence of processing activities over a recipe-defined duration. Batch systems are designed to measure, process, and discharge varying volumes of liquid from drums, tanks, reactors, or other large storage vessels using a programmable logic controller (PLC). Such systems are common in the pharmaceutical, chemical packaging, beverage processing, personal care product, biotech manufacturing, dairy processing, soap manufacturing, and food processing industries. This paper briefly discusses the fundamental techniques used in specifying, designing, and implementing PLC batch process control [1, 2]. A simplified batch process is used to illustrate key issues in designing and implementing such systems. In addition to the structured PLC ladder design, particular attention is given to safety requirements, redundancy, interlocking, input data validation, and safe operation. The Allen Bradley (AB) SLC 500 PLC, along with the LogixPro simulator, is used to illustrate the concepts discussed in this paper. Two pumps bring in material during tank filling, and a third pump drains the processed product. The three pumps are equipped with flow meters providing pulses proportional to the actual flow rate through the individual pipes. The tank material is heated to a predefined temperature for a set duration and then mixed for a set time before discharge. Batch control systems provide automated process control, typically using PLCs networked to HMIs and other computers for data storage, analysis, and assessment. The overall system performs several tasks, including recipe development and download, production scheduling, batch management and execution, equipment performance monitoring, inventory, and production history and tracking. Flexible batch control systems are designed to accommodate smaller batches of products with greater recipe variation, efficiently and quickly. In addition to providing process consistency, continuous quality improvement in batch process control is attained through the automatic collection and analysis of reliable, accurate real-time event and performance data [3, 4].
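The fill-heat-mix-drain sequence described above is, at heart, a small state machine with interlocks. The following Python sketch mimics one PLC scan of such a machine; it is a conceptual stand-in for the ladder logic on the SLC 500, with invented recipe values, and alarm latching and sensor redundancy omitted.

```python
from enum import Enum, auto

class Phase(Enum):
    IDLE = auto()
    FILL = auto()
    HEAT = auto()
    MIX = auto()
    DRAIN = auto()

RECIPE = {"volume_pulses": 500, "setpoint_c": 72.0, "mix_s": 120}

def scan(phase, io, timers):
    """One PLC-style scan: read validated inputs, resolve interlocks,
    set outputs. Alarm latching and sensor redundancy are omitted."""
    out = {"pump1": 0, "pump2": 0, "drain": 0, "heater": 0, "mixer": 0}
    if io["estop"]:                    # safety interlock overrides all
        return Phase.IDLE, out
    if phase is Phase.IDLE and io["start"]:
        phase = Phase.FILL
    if phase is Phase.FILL:
        # Flow-meter pulses meter the filled volume; the high-level
        # switch is a hard interlock against overfilling.
        if io["pulses"] < RECIPE["volume_pulses"] and not io["high_level"]:
            out["pump1"] = out["pump2"] = 1
        else:
            phase = Phase.HEAT
    elif phase is Phase.HEAT:
        if io["temp_c"] < RECIPE["setpoint_c"]:
            out["heater"] = 1
        else:
            phase, timers["mix"] = Phase.MIX, 0
    elif phase is Phase.MIX:
        out["mixer"], timers["mix"] = 1, timers["mix"] + 1
        if timers["mix"] >= RECIPE["mix_s"]:
            phase = Phase.DRAIN
    elif phase is Phase.DRAIN:
        if not io["empty"]:
            out["drain"] = 1
        else:
            phase = Phase.IDLE
    return phase, out
```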
APA, Harvard, Vancouver, ISO, and other styles
39

Hopping, Steven B., Sasa Janjanin, Neil Tanna, and Arjun S. Joshi. "The S-Plus lift: a short-scar, long-flap rhytidectomy." Annals of The Royal College of Surgeons of England 92, no. 7 (October 2010): 577–82. http://dx.doi.org/10.1308/003588410x12699663904439.

Full text
Abstract:
INTRODUCTION: As rhytidectomy is one of the most common elective surgical procedures, there is a strong trend toward less aggressive operative techniques. The authors introduce the S-Plus lift, a ‘long flap’ superficial musculo-aponeurotic system (SMAS) imbrication technique that diminishes risks, decreases recovery time, and yields long-lasting results. PATIENTS AND METHODS: This paper describes a novel approach to mid-facial rejuvenation that combines the limited incision of an S-lift with two SMASectomies, purse-string suture imbrication of the extended supraplatysmal plane (ESP) and SMAS, and malar soft tissue suspension. SMAS excisions are performed pre-auricularly and in the region overlying the anterior edge of the parotid gland. The purse-string imbrication sutures are designed to close the SMAS defects, pull the soft tissues of the neck upward, pull the jowl and lower face posteriorly and superiorly, and tighten the platysma. An ancillary purse-string suture lifts the malar fat pad and cheek soft tissues vertically, which achieves mid-face fullness and lifting. Compared to the S-lift, the technique extends efficacy to patients who have moderate-to-severe mid-facial laxity, prominent nasolabial folds, and platysma redundancy. RESULTS: A review of 144 consecutive S-Plus lifts performed by a single surgeon (SBH), with at least 6 months of follow-up, was performed. Over a 3-year period, 130 (90.3%) females and 14 (9.7%) males underwent the S-Plus lift. The S-Plus lift was performed as primary rhytidectomy in 132 (91.7%) cases and as secondary in 12 (8.3%) cases. The complication rate was low and comparable with other rhytidectomy techniques. CONCLUSIONS: The S-Plus lift is a novel, hybrid technique with pleasing results, short downtime, and a high patient satisfaction rate. The technique combines two SMASectomies with purse-string suture imbrication of the ESP and SMAS, and malar fat suture suspension.
APA, Harvard, Vancouver, ISO, and other styles
40

Muralidhar Bairy, G., U. C. Niranjan, Shu Lih Oh, Joel E. W. Koh, Vidya K. Sudarshan, Jen Hong Tan, Yuki Hagiwara, and Eddie Y. K. Ng. "Alcoholic Index Using Non-Linear Features Extracted from Different Frequency Bands." Journal of Mechanics in Medicine and Biology 17, no. 07 (November 2017): 1740009. http://dx.doi.org/10.1142/s0219519417400097.

Full text
Abstract:
Alcoholism is a complex condition that mainly disturbs the neuronal networks of the Central Nervous System (CNS). This disorder disturbs not only the brain but also behavior, emotions, and cognitive judgement. Electroencephalography (EEG) is a valuable tool for examining neuropsychiatric disorders such as alcoholism. The EEG is a well-established modality for recording the electrical activity produced by populations of neurons in the cerebral cortex. However, EEG signals are non-linear in nature, so it is very challenging to extract the valuable information they carry using linear methods. Thus, non-linear methods of EEG analysis can be beneficial for characterizing the condition of the brain signals. This paper presents a computer-aided diagnostic method for distinguishing alcoholic from normal EEG signals using non-linear techniques. First, the EEG signals are subjected to six levels of Wavelet Packet Decomposition (WPD) to obtain seven wavebands: delta, theta, lower alpha, upper alpha, lower beta, upper beta, and lower gamma. From each waveband (activity band), 19 non-linear features are extracted: Recurrence Quantification Analysis (RQA), Approximate Entropy, Energy, Fractal Dimension (FD), Permutation Entropy, Detrended Fluctuation Analysis, Hurst Exponent, Largest Lyapunov Exponent, Sample Entropy, Shannon's Entropy, Renyi's Entropy, Tsallis' Entropy, Fuzzy Entropy, Wavelet Entropy, Kolmogorov–Sinai Entropy, Modified Multiscale Entropy, and Hjorth's parameters (activity, mobility, and complexity). The extracted features are then ranked using Bhattacharyya distance, entropy, fuzzy-entropy-based Max-Relevancy and Min-Redundancy (mRMR), Receiver Operating Characteristic (ROC), t-test, and Wilcoxon methods. The ranked features are used to train a Support Vector Machine (SVM) classifier. The SVM classifier with a radial basis function (RBF) kernel achieved 95.41% accuracy, 93.33% sensitivity, and 97.50% specificity using four non-linear features ranked by the Wilcoxon method. In addition, an integrated index called the Alcoholic Index (ALCOHOLI) is developed from the two highest-ranked features to identify normal and alcoholic EEG signals with a single number. This system is rapid, efficient, and inexpensive and can be employed by clinicians as an EEG analysis aid in the detection of alcoholism. The proposed system can also be used in rehabilitation centers to evaluate people with alcoholism over time and observe the outcome of treatment provided to reduce or reverse the condition's impact on the brain.
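As an illustration of the front end of such a pipeline, the sketch below computes energy and Shannon entropy from two wavelet-packet bands with PyWavelets and trains an RBF-SVM on synthetic signals; the band paths, feature subset, and data are placeholders, not the paper's full 19-feature protocol.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def band_features(signal, level=6, bands=("aaaaaa", "aaaaad")):
    """Energy and Shannon entropy for selected wavelet-packet bands.

    Node paths name WPD sub-bands ('a' approximation, 'd' detail);
    the two defaults are placeholder low-frequency bands.
    """
    wp = pywt.WaveletPacket(signal, wavelet="db4", maxlevel=level)
    feats = []
    for path in bands:
        c = np.asarray(wp[path].data)
        energy = float(np.sum(c ** 2))
        p = c ** 2 / energy
        feats += [energy, float(-np.sum(p * np.log2(p + 1e-12)))]
    return feats

# Synthetic "EEG" epochs stand in for the real recordings.
rng = np.random.default_rng(0)
X = np.array([band_features(rng.standard_normal(1024)) for _ in range(40)])
y = np.array([0, 1] * 20)              # normal vs. alcoholic labels
clf = SVC(kernel="rbf").fit(X, y)      # RBF-SVM as in the paper
```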
APA, Harvard, Vancouver, ISO, and other styles
41

Twum, Stephen Boakye, and Elaine Aspinwall. "Multicriteria reliability modeling and optimisation of a complex system with dual failure modes and high initial reliability." International Journal of Quality & Reliability Management 35, no. 7 (August 6, 2018): 1477–88. http://dx.doi.org/10.1108/ijqrm-02-2017-0032.

Full text
Abstract:
Purpose: System reliability optimisation in today's world is critical to ensuring customer satisfaction, business competitiveness, secure and uninterrupted delivery of services, and safety of operations. Among the many system configurations, complex systems are the most difficult to model for reliability optimisation. The purpose of this paper is to assess the performance of a novel optimisation methodology of the authors, developed to address these difficulties, in the context of a gas carrying system (GCS) exhibiting dual failure modes and high initial reliability. Design/methodology/approach: The minimum cut sets involving components of the system were obtained using the fault tree approach, and their reliabilities constituted the criteria, which were maximised while the associated cost of improving them was minimised. Pareto-optimal generic component and system reliabilities were subsequently obtained. Findings: The results indicate that the optimisation methodology could improve the system's reliability even from an initially high one, granted that the feasibility factor for improving a component's reliability was very high. The results obtained, in spite of the size (41 objective functions and 18 decision variables), the complexity (dual failure modes), and the high initial reliability values, provide confidence in the optimisation model and methodology and demonstrate their applicability to systems exhibiting multiple failure modes. Research limitations/implications: The GCS was assumed to be either failed or operational, its parameters precisely determined, and non-repairable. The component failure rates were exponentially distributed and the failure modes independent. A single weight vector expressing preference, in which component reliabilities were weighted higher than cost, was used owing to the stability of the optimisation model under weight variations. Practical implications: The high initial reliability values imply that reliability improvement interventions may not be a critical requirement for the GCS. The high levels could be sustained through planned and systematic inspection and maintenance activities. Even so, purely from an analytical standpoint, the results nevertheless show that there was some room for reliability improvement, however marginal. The improvement may be secured by: use of components with comparable levels of reliability to those achieved; use of redundancy techniques to achieve the desired levels of improvement in reliability; or redesign of the components. Originality/value: The novelty of this work is in the use of a reliability optimisation model and methodology that treats a system's minimum cut sets as the criteria to be optimised in order to optimise the system's reliability, and in the specific application to a complex system exhibiting dual failure modes and high component reliabilities.
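Reliability evaluation over minimum cut sets, the building block of the authors' criteria, can be computed exactly for small systems by inclusion-exclusion, as the following sketch shows; the cut sets and failure probabilities are invented, and independent component failures are assumed.

```python
from itertools import combinations

def system_reliability(cut_sets, p_fail):
    """System reliability from minimal cut sets by inclusion-exclusion,
    assuming independent component failures. A cut set fails when all
    of its components fail; the system fails when any cut set fails."""
    def all_fail(components):
        prob = 1.0
        for c in components:
            prob *= p_fail[c]
        return prob

    q = 0.0  # probability that at least one cut set fails
    for k in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, k):
            q += (-1) ** (k + 1) * all_fail(set().union(*combo))
    return 1.0 - q

# Invented example: two cut sets over three components.
cuts = [{"A", "B"}, {"B", "C"}]
p = {"A": 1e-3, "B": 2e-3, "C": 1e-3}
print(f"system reliability = {system_reliability(cuts, p):.8f}")
```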
APA, Harvard, Vancouver, ISO, and other styles
42

Jackson, Charlie. "Tutorial: A Century of Sidewall Coring Evolution and Challenges, From Shallow Land to Deep Water." Petrophysics – The SPWLA Journal of Formation Evaluation and Reservoir Description 62, no. 3 (June 1, 2021): 230–43. http://dx.doi.org/10.30632/pjv62n3-2021t1.

Full text
Abstract:
Due to the high cost of conventional coring operations, rotary sidewall coring has become increasingly important, particularly for deepwater operations. The rig costs, operational challenges, and amount of time involved to core wells below 30,000 ft are considerable, even for wireline operations. As wells get deeper, formation pressures will exceed 30,000 psi, and differential pressures can exceed 10,000 psi, which will eclipse the capabilities of traditional rotary coring tools. New technology has been introduced to enhance the recovery of rotary sidewall cores to improve operations and capabilities on these challenging wells, and it is the primary subject of this paper. This new technology can also enhance coring operations and reliability for land and other offshore operations, in addition to deep water. New improvements and challenges include:
* Reliable 1.5-in.-diameter core samples, with a 35,000-psi-rated tool
* New high-powered coring tools with enhanced energy to address cutting Lower Tertiary well-cemented formations (Wilcox, Lower Miocene, etc.)
* Higher torque and horsepower at the bit to enhance cutting and prevent stalling when coring
* High-powered surface systems along with high-strength and high-power wireline cables
* Upgrades to address high temperatures, high differential pressures, high mud viscosity, large (24-in.) boreholes, and improved reliability
* New drill bits and catcher rings to use a high-power system and operate in harsh coring environments
* New cutting, retrieval, and core handling advancements for reliability in hard, consolidated formations
* Combinability upgrades to reduce wireline trips and reduce rig costs for coring
* Dual-coring tools with the ability to have different catcher rings and bits downhole simultaneously on a single run, along with tool redundancy downhole for improved reliability
* Combination of rotary coring and formation sampling operations to obtain formation pressures, fluid samples, and rotary sidewall cores on a single run
* Downhole monitoring of the coring operation, which includes drilling functions like torque, bit force, penetration rate, core bit penetration, and recovered core length, along with tool orientation
* Core recovery information to enable 100% core verification downhole, so extra cores are not cut unnecessarily during the job, with individual core plugs measured and verified downhole
* A unique method to seal the cores in a pressure-compensated coring tube downhole to capture all the formation fluids in the rock in downhole conditions
* Complete rotary coring downhole operations that can be monitored remotely for offsite interaction during the coring operation
Besides reviewing historical coring tools and techniques, new technology is also discussed in more detail. The new technology starts with the introduction of the 1.5-in.-diameter rotary sidewall coring tools for deep water over a decade ago. Many applications and technologies are presented to show their effectiveness for deepwater operations. The successful examples include acquiring 1.5-in. cores in large boreholes, hard formations, deep wells, high differential pressures, and extreme hydrostatic pressure. There are also examples of new technology available for future operations, including dual coring, combination coring, and sealed pressurized coring.
APA, Harvard, Vancouver, ISO, and other styles
43

Ravindra, Padmashree, and Kemafor Anyanwu. "Nesting Strategies for Enabling Nimble MapReduce Dataflows for Large RDF Data." International Journal on Semantic Web and Information Systems 10, no. 1 (January 2014): 1–26. http://dx.doi.org/10.4018/ijswis.2014010101.

Full text
Abstract:
Graph and semi-structured data are usually modeled in relational processing frameworks as "thin" relations (node, edge, node), and processing such data involves many join operations. Intermediate results of joins with multi-valued attributes or relationships contain redundant subtuples due to the repetition of single-valued attributes. The amount of redundant content is high for real-world multi-valued relationships in social network (millions of Twitter followers of popular celebrities) or biological (multiple references to related proteins) datasets. In MapReduce-based platforms such as Apache Hive and Pig, redundancy in intermediate results adds avoidable costs to the overall I/O, sorting, and network transfer overhead of join-intensive workloads due to longer workflows. Consequently, techniques for dealing with such redundancy will enable more nimble execution of such workflows. This paper argues for the use of a nested data model that represents intermediate data concisely, together with nesting-aware dataflow operators that allow lazy and partial unnesting strategies. This approach reduces the overall I/O and network footprint of a workflow by representing intermediate results concisely during most of a workflow's execution, until complete unnesting is absolutely necessary. The proposed strategies are integrated into Apache Pig, and experimental evaluation over real-world and synthetic benchmark datasets confirms their superiority over relational-style MapReduce systems such as Apache Pig and Hive.
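The redundancy the paper targets is easy to see in miniature. The sketch below contrasts a flat join result with its nested counterpart and expresses lazy unnesting as a generator; it is a conceptual illustration in Python, not Pig's actual operators, and the data are invented.

```python
# Flat (relational) join output: the single-valued attribute "name"
# repeats once per follower, inflating the intermediate result.
flat = [
    ("user1", "Alice", "follower1"),
    ("user1", "Alice", "follower2"),
    ("user1", "Alice", "follower3"),
]

# Nested form: single-valued attributes stored once; the multi-valued
# relationship stays a collection until unnesting is truly required.
nested = {("user1", "Alice"): ["follower1", "follower2", "follower3"]}

def unnest(nested_rel):
    """Lazy unnesting: expand only when a downstream operator
    requires the flat representation."""
    for key, values in nested_rel.items():
        for v in values:
            yield (*key, v)

assert list(unnest(nested)) == flat
```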
APA, Harvard, Vancouver, ISO, and other styles
44

Kondratov, Vladislav. "Predicting and Determining the Time Between Metrological Failures of Smart Systems for Precision Farming." Cybernetics and Computer Technologies, no. 1 (June 30, 2022): 72–95. http://dx.doi.org/10.34229/2707-451x.22.1.8.

Full text
Abstract:
Introduction. Forecasting and determining the operating time to metrological failure, and scheduling the first calibration, of smart systems for precision farming becomes possible once the problem of self-calibration of the smart sensors that make up these smart systems (SS) is solved; that problem is solved and described in [1]. The purpose of the paper is a methodology for dynamically predicting and determining the time between metrological failures (MF) and the first verification of SS designed for precision farming. Results. The article describes a method, patented in Ukraine, for measuring the SS operating time to an MF (a dynamic prediction method) based on a synthesized probabilistic-physical model (PP-model) of SS MF, described by a multi-parameter Kondratov–Weibull distribution function (DF) with controlled (flexible) parameters. The proposed model describes the relationship between the normalized error and the parameters of the metrological reliability (MR) of the SS. It is shown that dynamic regression PP-models of MF combine the capabilities of regression models built on flexible multi-parameter DFs with the possibility of modelling dynamic (spatio-temporal) processes covering different trends in normalized-error values and their uncertainty bands, confidence levels, time frames, acceptable boundary conditions, etc. Dynamic regression models of SS MF make it possible to understand the relationships between DF variables and to study metrological "what if..." scenarios. The dynamic regression method is a set of techniques for stepwise approximation of the shift-parameter values of the dynamic PP-model of MF toward the predicted shift-parameter value of the static PP-model of SS MF, together with methods for assessing the reliability and accuracy of prediction and determination. The article describes the essence of a new method for determining the SS operating time to MF using the PP-model of MF based on the Kondratov–Weibull DF. For the first time, a graphical portrait of the PP-model of SS metrological failures has been developed and presented in a combined system of scales (coordinates): the "probability of metrological failure Pξ" and "normalized error ξx" scales together with separate or combined "interval time tx" and "calendar time" scales. The procedure for determining the time of the first verification is described, and the advantage of non-periodic verifications in saving verification costs is noted. The possibility of "conditional misses" in determining the error and the operating time to MF during a given verification is shown; their existence is established only after the subsequent verification, analysis of the obtained data, and plotting of the DF curve on the graphical portrait. It is recommended to choose the time between verifications as a multiple of one year and to carry out verifications on the same day and month of the year. Conclusions. The dynamic regression method is effective and versatile owing to its high accuracy in predicting and determining the operating time to MF. It can also be implemented using MF PP-models based on the Kondratov–Cauchy, Kondratov–Laplace, and other DFs. Keywords: smart sensor, self-calibration, wireless sensor systems, methods of redundant measurements, problems of metrological support.
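Since the multi-parameter Kondratov–Weibull DF is not reproduced in the abstract, the sketch below uses the standard two-parameter Weibull CDF as a stand-in to show how an operating time to metrological failure, and hence a first-verification date, can be computed from a target failure probability; all parameter values are invented.

```python
import math

def time_to_metrological_failure(beta, eta, p=0.05):
    """Operating time at which the failure probability reaches `p`
    under the two-parameter Weibull CDF F(t) = 1 - exp(-(t/eta)**beta),
    obtained by inverting the CDF."""
    return eta * (-math.log(1.0 - p)) ** (1.0 / beta)

# Invented parameters: shape 1.8, characteristic life 10 years; the
# first verification is scheduled before 5% of units drift out of
# tolerance.
t = time_to_metrological_failure(beta=1.8, eta=10.0, p=0.05)
print(f"schedule the first verification within {t:.1f} years")
```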
APA, Harvard, Vancouver, ISO, and other styles
45

Amin, Muhammad, Dian Kurnia, and Kevin Mikli. "Design of Three Class Internet Protocol Routing Model Based on Linux Command Line Interface." Journal of Applied Engineering and Technological Science (JAETS) 3, no. 2 (June 24, 2022): 133–38. http://dx.doi.org/10.37385/jaets.v3i2.764.

Full text
Abstract:
The world of information technology is currently developing very rapidly, especially internet technology. The Internet itself is an application of computer networking: a computer network consists of a group of computer systems and other computing hardware linked together through communication channels to facilitate communication and resource sharing among users. TCP/IP is the standard protocol suite applied on the internet. The presence of a router in a TCP/IP network is very important, owing to the large number of hosts and the variety of devices used on such a network. A router is a computer network device used to forward data packets from one network to another, within the scope of both LAN and WAN networks (Yani A., 2008). Consequently, a routing mechanism is needed that can integrate many users with a high degree of flexibility. Routing is generally divided into two categories: static routing and dynamic routing. Static routing relies on a manually configured routing table, while in dynamic routing the routing tables are exchanged between routers on the network dynamically. In information technology (IT), routing is part of improving network performance: it is the process of choosing the path traversed by a packet, and it plays a major role in building any network, whether LAN or WAN. The use of routing in a network is now something a company must take into account. Companies whose business processes lie in the IT sector depend heavily on network availability, and network reliability is the key point in the operation of their systems. An adequate and reliable network infrastructure is therefore essential, especially as it concerns the company's standing. Companies with large-scale networks need several techniques so that the network works optimally and reliably in the face of the various problems that arise, including still-unstable network connectivity and the absence of link redundancy as a backup path to overcome network failures when interference occurs. Alternative network paths are used to increase network availability, so that if a link in the network breaks, data can still be forwarded without affecting the connectivity of devices on the network.
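The forwarding decision that static routing configures by hand is a longest-prefix match. The sketch below implements it with Python's ipaddress module over an invented routing table; the equivalent Linux CLI configuration would use iproute2, e.g. ip route add.

```python
import ipaddress

# Invented static routing table: prefix -> (next hop, interface).
# Equivalent CLI: ip route add 192.168.10.0/24 via 192.168.1.2 dev eth1
ROUTES = {
    ipaddress.ip_network("192.168.10.0/24"): ("192.168.1.2", "eth1"),
    ipaddress.ip_network("192.168.0.0/16"): ("192.168.1.1", "eth0"),
    ipaddress.ip_network("0.0.0.0/0"): ("10.0.0.1", "eth2"),  # default
}

def lookup(dst):
    """Forwarding decision: longest-prefix match over the table."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(lookup("192.168.10.7"))  # ('192.168.1.2', 'eth1') via the /24
print(lookup("8.8.8.8"))       # falls through to the default route
```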
APA, Harvard, Vancouver, ISO, and other styles
46

Venkata Ramana, D., and S. Baskar. "Incipient Fault Detection of the Inverter Fed Induction Motor Drive." International Journal of Power Electronics and Drive Systems (IJPEDS) 8, no. 2 (June 1, 2017): 722. http://dx.doi.org/10.11591/ijpeds.v8.i2.pp722-729.

Full text
Abstract:
Inverter-fed induction motor drives are deployed across a variety of industrial and commercial applications. Although the drives in question are well known for reliable operation in any type of environment, keeping them in continuous operation, as applications require, is a daunting and critical task. Identifying faulty behavior of power electronic circuits that could lead to catastrophic failures is therefore an attractive proposition. The cost of building systems devoted to monitoring and diagnosis is high; however, such cost can be justified for safety-critical systems. Commonly practiced methods for improving the reliability of power electronic systems are designing the power circuit conservatively or operating components or circuits in parallel redundancy; clearly, both methods are expensive. An alternative to redundancy is fault-tolerant control, which involves a drive control algorithm that, in the event of a fault, allows the drive to run in a degraded mode. Such algorithms involve on-line processing of the signals, which requires digital signal processing. This paper presents FFT and wavelet transform techniques for on-line monitoring and analysis of signals such as stator currents.
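The FFT-based monitoring described above amounts to inspecting the stator-current spectrum for fault-related components near the supply fundamental. The NumPy sketch below does this on a synthetic current signal; the 52 Hz sideband, amplitudes, and detection threshold are invented for illustration.

```python
import numpy as np

fs = 10_000                          # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# Fundamental stator current plus a small fault-related sideband;
# the 52 Hz component and all amplitudes are invented.
current = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 52 * t)

spectrum = np.abs(np.fft.rfft(current)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
band = (freqs > 40) & (freqs < 60)   # inspect around the fundamental
for f, a in zip(freqs[band], spectrum[band]):
    if a > 0.01:                     # crude detection threshold
        print(f"{f:.0f} Hz: amplitude {2 * a:.3f}")
```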
APA, Harvard, Vancouver, ISO, and other styles
47

Vintr, Zdenek, and Tomas Vintr. "Reliability Allocation for a System with Complex Redundancy." Applied Mechanics and Materials 436 (October 2013): 505–10. http://dx.doi.org/10.4028/www.scientific.net/amm.436.505.

Full text
Abstract:
The paper deals with the possibilities of allocating reliability requirements for a system with complex redundancy. That is, the system consists of several identical subsystems, and for normal operation it is sufficient that only a certain subset of these subsystems operates. The subsystems not operating at a given moment serve as redundancy in case the operating subsystems fail. The system as a whole, however, is not a trivial parallel structure: for the system to work properly, more than one subsystem must always operate, and the subsystems can function only in configurations set in advance. Practical application of the suggested method of reliability allocation is demonstrated on the pantograph system of a high-speed train. To ensure the proper function of the system, a minimum number of operating pantographs, in pre-set configurations providing safe current collection, must always be available. Some pantograph configurations (e.g., two pantographs operating very close one after another) cannot be used for safety reasons. The article presents the procedure of reliability allocation for this specific system. The suggested method is based on a truth table and the application of Boolean algebra.
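The truth-table approach mentioned in the abstract enumerates all operating states and sums the probabilities of those satisfying the configuration rules. The sketch below does so for an invented four-pantograph example, with a toy adjacency rule standing in for the paper's pre-set configurations.

```python
from itertools import product

R = 0.95   # reliability of a single pantograph (invented)
N = 4      # pantographs on the train

def system_works(state):
    """Toy configuration rule standing in for the paper's pre-set
    configurations: at least two operating pantographs that are not
    immediately adjacent to each other."""
    up = [i for i, s in enumerate(state) if s]
    return any(b - a >= 2 for a in up for b in up if b > a)

reliability = 0.0
for state in product([0, 1], repeat=N):   # the full truth table
    p = 1.0
    for s in state:
        p *= R if s else (1 - R)
    if system_works(state):
        reliability += p
print(f"system reliability: {reliability:.6f}")
```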
APA, Harvard, Vancouver, ISO, and other styles
48

Han, Chang-Hee, Klaus-Robert Müller, and Han-Jeong Hwang. "Brain-Switches for Asynchronous Brain–Computer Interfaces: A Systematic Review." Electronics 9, no. 3 (March 2, 2020): 422. http://dx.doi.org/10.3390/electronics9030422.

Full text
Abstract:
A brain–computer interface (BCI) has been extensively studied with the goal of developing a novel communication system for disabled people that uses their brain activity. An asynchronous BCI system is more realistic and practical than a synchronous one, in that BCI commands can be generated whenever the user wants. However, the relatively low performance of asynchronous BCI systems is problematic because redundant BCI commands are required to correct false-positive operations. To significantly reduce the number of false-positive operations of an asynchronous BCI system, a two-step approach has been proposed: a brain-switch first determines whether the user intends to use the asynchronous BCI system before that system operates. This study presents a systematic review of state-of-the-art brain-switch techniques and future research directions. To this end, we reviewed brain-switch research articles published from 2000 to 2019 in terms of their (a) neuroimaging modality, (b) paradigm, (c) operation algorithm, and (d) performance.
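The two-step scheme reduces to a simple gate: a brain-switch score must clear a threshold before the main decoder is consulted at all. The following sketch shows the control flow only; the predict_proba-style model interfaces and the threshold value are assumptions, not taken from any reviewed system.

```python
def asynchronous_bci_step(eeg_window, brain_switch, decoder, threshold=0.9):
    """Two-step asynchronous BCI: the brain-switch gates the decoder.

    `brain_switch` and `decoder` are assumed to be pretrained models
    with scikit-learn-style predict_proba/predict methods
    (hypothetical interfaces, not from any reviewed system).
    """
    p_intent = brain_switch.predict_proba(eeg_window)[0][1]
    if p_intent < threshold:
        return None                        # idle: suppress false positives
    return decoder.predict(eeg_window)[0]  # emit an actual BCI command
```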
APA, Harvard, Vancouver, ISO, and other styles
49

Kajal, Sanjay, and P. C. Tewari. "Performance optimisation of a multistate operating system with hot redundancy." International Journal of Intelligent Enterprise 2, no. 4 (2014): 294. http://dx.doi.org/10.1504/ijie.2014.069069.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Hamilton, R., and I. Bazovsky Sr. "Simplified Markov techniques for some stand-by redundant systems." Quality and Reliability Engineering International 2, no. 4 (October 1986): 233–40. http://dx.doi.org/10.1002/qre.4680020405.

Full text
APA, Harvard, Vancouver, ISO, and other styles