Journal articles on the topic "Fluid power technology Data processing"

To see the other types of publications on this topic, follow the link: Fluid power technology Data processing.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Fluid power technology Data processing".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will generate the bibliographic reference to the chosen work automatically, in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its online abstract, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Slijkerman, W. F. J., W. J. Looyestijn, P. Hofstra, and J. P. Hofman. "Processing of Multi-Acquisition NMR Data." SPE Reservoir Evaluation & Engineering 3, no. 06 (December 1, 2000): 492–97. http://dx.doi.org/10.2118/68408-pa.

Full text of the source
Abstract:
Summary Crucial issues in formation evaluation are the determination of porosity, permeability, hydrocarbon volumes, and net-to-gross ratio. Nuclear magnetic resonance (NMR) logging provides measurements that are directly related to these parameters. The NMR response of fluids contained in pores is governed by their T2- and T1-relaxation times, diffusion coefficient, and whether or not they wet the rock. In the case where fluids possess a sufficiently large contrast in these properties and NMR data have been acquired with suitably chosen acquisition parameters (i.e., wait times and/or inter-echo times) a separation of water, oil, and gas NMR responses can be made. From these separate NMR responses the hydrocarbon volumes, porosity, and permeability estimates are subsequently calculated. Key in these applications is the ability to include all the acquired log NMR data into the processing towards the desired end result. Methods exist to derive hydrocarbon volumes from T2 distributions or from echo decay data. However, these are all methods in which the difference between just two acquisitions that only differ in either wait time or inter-echo time is considered. Over the past years we have developed, tested, and employed an alternative processing technique named multi-acquisition NMR (MacNMR). MacNMR takes any number of log acquisitions (wait time and/or inter-echo time variations) and simultaneously inverts them using a rigorous forward model to derive the desired water and hydrocarbon T2 distributions. In this paper, we discuss the concepts of MacNMR and demonstrate its versatility in NMR log processing. An example will illustrate its benefits. Introduction This paper discusses the method used by Shell to process multi-acquisition nuclear magnetic resonance (NMR) data. The objective of the processing is to extract fluid volumes and properties from multi-acquisition NMR data. The potential of multi-acquisition NMR logging for water, oil, and gas discrimination and volume quantification was already recognized in 1993. At that time no commercial processing of such data was available. It was decided to develop an in-house multi-acquisition processing capability. From 1993 to 1996 the development effort was focused on the evaluation of potential processing concepts and the development of the necessary mathematical algorithms. In 1996 the actual software implementation was developed, and in October 1996 first results were available and published internally. In March 1997 a company-wide beta test of the software was organized. In August 1997 the software was released company wide and has been in use since then. Multi-Acquisition Data Processing Methods As an introduction, we briefly review methods for quantitative processing of multi-acquisition NMR data that are described in the open literature. We make the distinction between methods that operate in the relaxation time domain vs. methods that operate in the acquisition time domain. Analysis in the Relaxation Time (or T2) Domain. Here, methods are discussed that operate in the T2 domain. Differential Spectrum Method. The differential spectrum method, first published by Akkurt and Vinegar,1 works on dual-wait-time data. The concept is to independently T2 invert the long- and short-wait-time echo-decay vectors into a T2 spectrum. The two resulting T2 spectra are subtracted and, provided the wait times have been selected suitably,2 the difference between the two T2 spectra only arises from fluids with long T1 components (usually hydrocarbons). 
Volumes are quantified by integrating the difference T2 spectrum and correcting for the polarization difference between long and short wait time. Enhanced Diffusion Method. The enhanced diffusion method, recently published by Akkurt et al.,3 exploits the diffusion contrast between the diffusive brine and the less diffusive (medium-to-heavy) oil (i.e., water diffusion is faster than oil diffusion). The idea is that the inter-echo time is chosen sufficiently long such that the water and oil signals are fully separated in the T2 domain (i.e., water is at lower T2 than oil). Determining oil volumes is then just a matter of integrating over the appropriate T2 range in the T2 spectrum. Analysis in the Acquisition Time Domain. Here, methods are discussed that operate in the acquisition time domain. Time-Domain Analysis. The time-domain analysis method (TDA) operates on dual-wait-time data. This method was first published by Prammer et al.4 The concept is to subtract the measured long- and short-wait-time decay vectors into an echo difference. In case the wait times have been chosen suitably2 the difference of the two decay vectors should be arising from a long T1 component (usually a hydrocarbon). This difference echo vector is subsequently T2 inverted (using "matched filters," which basically means that a uni- or bi-exponential is fitted to the data). In that way, only the T2 component arising from the hydrocarbon is found. The hydrocarbon volume is deduced by correcting the resulting signal strength from the difference in polarization between long and short wait time. Echo Ratio Method. This method, published by Flaum et al.,5 works on dual-inter-echo-time data. The long- and short-inter-echo-time echo decays are divided and an apparent diffusion coefficient is calculated. The apparent diffusion coefficient can be used as a qualitative indicator for the presence of gas. MacNMR Method MacNMR uses a method that is radically different from the other processing schemes and is a comprehensive implementation of earlier concepts.1,6 MacNMR employs a forward model to model the measured echo-decay vectors. The starting points in the forward model are the T2 spectra for each of the fluids present (water, oil, and/or gas) that would be measured at infinite wait time and zero gradient. From these T2 spectra, echo-decay vectors are constructed by accounting for the effects of hydrogen index, polarization, and diffusion. The best-fit T2 spectra are found by inverting the forward model to the measured echo-decay vectors. All measured echo-decay vectors included in the inversion are treated on an equal statistical footing. They are weighted with their respective rms-noise values. Hence, decays with the lowest noise contribute most. In principle, any number of echo-decay vectors can be included in the inversion. The current software implementation of MacNMR accepts up to a maximum of six echo-decay vectors, totaling a maximum of 7,000 echoes. The inversion typically takes less than 1 second per depth increment. In a sense, MacNMR employs a very classical concept in that it defines unknown variables (T2 spectra for the fluids present) that are determined from the available data (i.e., all the acquired decay vectors) by error minimization. Between the unknown variables and the data is a forward model. The forward model contains the effects of inter-echo-time variation and wait-time variation.
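A minimal numeric sketch may help fix the simultaneous-inversion idea. This is not Shell's MacNMR code: it assumes a shared logarithmic T2 grid, a fixed T1/T2 ratio, one synthetic two-component spectrum, and two wait-time acquisitions inverted jointly with non-negative least squares after weighting each decay by its rms noise, as the abstract describes.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
T2 = np.logspace(-3, 1, 30)            # candidate T2 values, s
T1 = 1.5 * T2                          # assumed T1/T2 ratio (illustration only)

true_f = np.zeros_like(T2)
true_f[10], true_f[24] = 0.15, 0.10    # synthetic "water" and "hydrocarbon" peaks

def kernel(echo_t, wait_t):
    # polarization factor times multi-exponential T2 decay
    pol = 1.0 - np.exp(-wait_t / T1)
    return np.exp(-echo_t[:, None] / T2) * pol

rows, rhs = [], []
for wait_t, sigma in [(8.0, 0.005), (0.5, 0.005)]:   # long and short wait times
    t = np.arange(1, 501) * 1.2e-3                   # 500 echoes, 1.2 ms spacing
    A = kernel(t, wait_t)
    y = A @ true_f + sigma * rng.standard_normal(t.size)
    rows.append(A / sigma)             # equal statistical footing: 1/rms-noise weights
    rhs.append(y / sigma)

f_hat, _ = nnls(np.vstack(rows), np.concatenate(rhs))  # best-fit T2 spectrum
```

Peaks recovered in f_hat near the two seeded T2 bins indicate that the joint inversion separates the fast- and slow-relaxing components.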
APA, Harvard, Vancouver, ISO, and other styles
2

Ji, Jinjie, Qing Chen, Lei Jin, Xiaotong Zhou, and Wei Ding. "Fault Diagnosis System of Power Grid Based on Multi-Data Sources." Applied Sciences 11, no. 16 (August 20, 2021): 7649. http://dx.doi.org/10.3390/app11167649.

Full text of the source
Abstract:
To perform power grid fault diagnosis accurately, rapidly, and comprehensively, a power grid fault diagnosis system based on multiple data sources is proposed. The integrated system uses accident-level information, warning-level information, and fault recording documents, and outputs a complete diagnosis and tracking report. According to the timeliness of the three types of information transmission, the system is divided into three subsystems: a real-time processing system, a quasi-real-time processing system, and a batch processing system; the complete workflow is realized through their cooperation. While the real-time processing system completes fault diagnosis of grid elements, it also screens out incorrectly operating protections and circuit breakers and detects the loss of accident-level information. The quasi-real-time system outputs the reasons for incorrect actions of protections and circuit breakers, allowing for partially missing warning-level information. The batch processing system corrects the diagnosis results of the real-time processing system and outputs fault details, including the fault phases, types, times, and locations of faulty elements. Simulation results and tests show that the system meets actual engineering requirements in terms of execution efficiency and fault diagnosis and tracking performance. It can serve as a reference for the self-healing and maintenance of power grids and has considerable application value.
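As a toy illustration of the three-lane architecture (the class names and message fields below are hypothetical, not from the paper), incoming information can be routed by its timeliness class:

```python
from collections import defaultdict

# hypothetical timeliness classes -> subsystem lanes
ROUTES = {"accident": "real_time", "warning": "quasi_real_time",
          "recording": "batch"}
queues = defaultdict(list)

def dispatch(message):
    """Route a message to the subsystem matching its information class."""
    queues[ROUTES[message["kind"]]].append(message)

dispatch({"kind": "accident", "payload": "breaker CB-12 tripped"})
dispatch({"kind": "warning", "payload": "protection relay alarm"})
dispatch({"kind": "recording", "payload": "fault recording document"})
print({lane: len(msgs) for lane, msgs in queues.items()})
```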
APA, Harvard, Vancouver, ISO, and other styles
3

Dindorf, Ryszard, and Piotr Wos. "Universal Programmable Portable Measurement Device for Diagnostics and Monitoring of Industrial Fluid Power Systems." Sensors 21, no. 10 (May 15, 2021): 3440. http://dx.doi.org/10.3390/s21103440.

Full text of the source
Abstract:
This paper presents a new universal programmable portable measuring device (PMD) as a complete, accurate, and efficient solution for monitoring and technical diagnostics of industrial fluid power systems. The PMD has programmable functions designed for recording, processing, and graphical visualization of measurement results at the test stand or at the place of operation of fluid power systems. The PMD has a built-in WiFi communication module for transferring measurement data via Industrial Internet of Things (IIoT) technology for online remote monitoring of fluid power systems. The PMD can be programmed for a variety of measuring tasks in servicing, repairing, diagnosing, and monitoring fluid power systems. For this purpose, fluid dynamic, mechanical, and electrical quantities can be measured. The adaptation of the PMD to the indirect measurement of leakage flow rate in a compressed air system (CAS) is presented in detail. The measuring instruments and the PMD were connected to a branch of the pipeline, and the measurement system was used to estimate the leakage flow rate through small air nozzles, as well as other CAS indicators.
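For context, leakage through a small nozzle in a CAS is often estimated with the textbook choked-orifice model when the downstream/upstream pressure ratio is below critical. The sketch below uses that generic model with illustrative numbers; it is not the paper's calibrated indirect-measurement procedure.

```python
import math

def choked_leak_mass_flow(p_up_pa, temp_k, d_m, cd=0.72):
    """Mass flow (kg/s) through a choked nozzle for air (gamma = 1.4)."""
    gamma, R = 1.4, 287.05                  # air: ratio of specific heats, gas constant
    area = math.pi * (d_m / 2.0) ** 2
    psi = math.sqrt(gamma / (R * temp_k) *
                    (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (gamma - 1.0)))
    return cd * area * p_up_pa * psi        # cd is an assumed discharge coefficient

# e.g. a 1 mm nozzle at 7 bar(a) and 293 K leaks roughly 0.9 g/s
print(choked_leak_mass_flow(7e5, 293.0, 1e-3))
```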
APA, Harvard, Vancouver, ISO, and other styles
4

Thorsen, Arve K., Tor Eiane, Holger F. Thern, Paal Fristad, and Stephen Williams. "Magnetic Resonance in Chalk Horizontal Well Logged With LWD." SPE Reservoir Evaluation & Engineering 13, no. 04 (August 5, 2010): 654–66. http://dx.doi.org/10.2118/115699-pa.

Full text of the source
Abstract:
Summary This paper describes geological and petrophysical evaluation of a new structure of a mature field to evaluate the reservoir potential in unproduced reservoir zones. The well was drilled in a carbonate with variations in rock quality and with minor subfaulting occurring. Gamma ray (GR), resistivity, density, neutron, and image services were used in the horizontal part of the well in addition to magnetic resonance (MR). To achieve the best possible real-time wellbore placement, reservoir navigation and continuous follow-up on the horizontal log interpretation were performed during drilling. For the first time, a low-gradient-MR-while-drilling technology was deployed in a virgin carbonate horizontal well on the Norwegian Continental Shelf. The MR service was run to obtain porosities (including partitioning of movable and bound fluids), hydrocarbon (HC) saturations, and permeability estimates. Fluid saturations based on traditional methods and the MR were evaluated and compared by core data, enhancing the understanding of the measurement and the reservoir. For post-processing, the MR data were integrated and interpreted together with the other measurements performed in the well, delivering an accurate and consistent reservoir description. The first part of the horizontal part of the well was drilled with conductive drilling fluid and the latter part with nonconductive drilling fluid. Laboratory measurements for the two mud filtrates were performed to understand the influence of the two different drilling-fluid types on the MR measurements. In the absence of water-based mudfiltrate invasion, the MR data show good agreement with saturations from core, confirming the quality and reliability of the MR data. Comparison of the MR T2 distributions and volumetrics with image data indicates that even fine variations in rock quality and lithology are reliably resolved by the MR data. Before logging, old core data were used to refine the constants used in the Timur-Coates MR permeability equation, which quantitatively tracks changes in reservoir quality. The values were calibrated when Timur-Coates constants were derived from the well's core plugs.
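The Timur-Coates equation mentioned here has the widely used form k = (phi/C)^a * (FFI/BVI)^b, where FFI and BVI are the movable and bound fluid volumes from the MR T2 distribution. The constants below are common textbook starting values; the paper refines them against the well's core plugs.

```python
def timur_coates_perm(phi, ffi, bvi, c=10.0, a=4.0, b=2.0):
    """Timur-Coates MR permeability estimate in mD: k = (phi/c)**a * (ffi/bvi)**b.
    phi is porosity in porosity units; c, a, b are calibration constants
    (textbook defaults here; the paper derives them from core data)."""
    return (phi / c) ** a * (ffi / bvi) ** b

print(timur_coates_perm(phi=20.0, ffi=12.0, bvi=8.0))  # illustrative values -> 36 mD
```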
APA, Harvard, Vancouver, ISO, and other styles
5

Stamatis, A., N. Aretakis, and K. Mathioudakis. "Blade Fault Recognition Based on Signal Processing and Adaptive Fluid Dynamic Modeling." Journal of Engineering for Gas Turbines and Power 120, no. 3 (July 1, 1998): 543–49. http://dx.doi.org/10.1115/1.2818181.

Full text of the source
Abstract:
An approach for identification of faults in blades of a gas turbine, based on physical modelling, is presented. A measured quantity is used as an input, and the deformed blading configuration is produced as an output. This is achieved without using any kind of “signature,” as is customary in diagnostic procedures for this kind of fault. A fluid dynamic model is used in a manner similar to what is known as “inverse design methods”: the solid boundaries that produce a certain flow field are calculated by prescribing this flow field. In the present case, a signal, corresponding to the pressure variation on the blade-to-blade plane, is measured. The blade cascade geometry that has produced this signal is then reconstructed by the method. In the paper, the method is described, and applications to test cases are presented. The test cases include theoretically produced faults as well as experimental cases where actual measurement data are shown to produce the geometrical deformations that existed in the test engine.
APA, Harvard, Vancouver, ISO, and other styles
6

Mousavi, Seyed Mahdi, Saeid Sadeghnejad, and Mehdi Ostadhassan. "Evaluation of 3D printed microfluidic networks to study fluid flow in rocks." Oil & Gas Science and Technology – Revue d’IFP Energies nouvelles 76 (2021): 50. http://dx.doi.org/10.2516/ogst/2021029.

Full text of the source
Abstract:
Visualizing fluid flow in porous media can provide a better understanding of transport phenomena at the pore scale. In this regard, transparent micromodels are suitable tools to investigate fluid flow in porous media. However, using glass as the primary material makes them inappropriate for predicting the natural behavior of rocks. Moreover, constructing these micromodels is time-consuming via conventional methods. Thus, an alternative approach can be to employ 3D printing technology to fabricate representative porous media. This study investigates fluid flow processes through a transparent microfluidic device based on a complex porous geometry (natural rock) using digital-light processing printing technology. Unlike previous studies, this one has focused on manufacturing repeatability. This micromodel, like a custom-built transparent cell, is capable of modeling single and multiphase transport phenomena. First, the tomographic data of a carbonate rock sample is segmented and 3D printed by a digital-light processing printer. Two miscible and immiscible tracer injection experiments are performed on the printed microfluidic media, while the experiments are verified with the same boundary conditions using a CFD simulator. The comparison of the results is based on Structural Similarity Index Measure (SSIM), where in both miscible and immiscible experiments, more than 80% SSIM is achieved. This confirms the reliability of printing methodology for manufacturing reusable microfluidic models as a promising and reliable tool for visual investigation of fluid flow in porous media. Ultimately, this study presents a novel comprehensive framework for manufacturing 2.5D realistic microfluidic devices (micromodels) from pore-scale rock images that are validated through CFD simulations.
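The SSIM comparison step can be reproduced with standard tooling. The sketch below uses scikit-image's structural_similarity, a standard implementation of the metric the abstract cites; the arrays are synthetic stand-ins for co-registered grayscale saturation maps from the experiment and the CFD run.

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(6)
cfd_map = rng.random((256, 256))                             # stand-in CFD saturation map
exp_map = cfd_map + 0.05 * rng.standard_normal((256, 256))   # "experimental" map

score = structural_similarity(exp_map, cfd_map,
                              data_range=exp_map.max() - exp_map.min())
print(f"SSIM = {score:.2f}")   # the paper reports > 0.80 for both experiments
```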
APA, Harvard, Vancouver, ISO, and other styles
7

Fu, Bai Xue, and Sheng Hai Hu. "Study for the Method of Automobile Oil Consumption Measuring Based on Ultrasonic Wave." Applied Mechanics and Materials 26-28 (June 2010): 962–66. http://dx.doi.org/10.4028/www.scientific.net/amm.26-28.962.

Full text of the source
Abstract:
In response to the problems of direct and indirect testing of automobile oil consumption, ultrasonic technology and microcontroller control technology can be used to develop an automobile oil consumption testing technique. The testing method is based on the fluid measurement principle of ultrasonic wave technology, from which a mathematical model of automobile oil consumption testing is built. According to the functional requirements of oil consumption measurement, the system hardware, control circuit, and control automation software are designed to achieve intelligent, non-disassembly measurement of oil consumption. Tests, data processing and analysis, and refinement of the theoretical model show that the theory used by the system is feasible and that the resulting oil consumption measuring system offers low power consumption and high accuracy.
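The classic ultrasonic transit-time principle behind such meters is easy to state: flow speed follows from the difference between upstream and downstream travel times along an inclined acoustic path. The sketch below is a generic textbook formulation with made-up numbers, not the paper's calibrated model.

```python
import math

def transit_time_velocity(t_down, t_up, path_len, angle_deg):
    """Mean axial flow velocity (m/s) from downstream/upstream transit times (s)."""
    theta = math.radians(angle_deg)
    return path_len * (t_up - t_down) / (2.0 * math.cos(theta) * t_up * t_down)

# illustrative numbers for a small fuel line: 0.1 m path at 45 degrees
v = transit_time_velocity(t_down=67.20e-6, t_up=67.26e-6,
                          path_len=0.10, angle_deg=45.0)
area = math.pi * (0.01 / 2.0) ** 2        # 10 mm pipe bore
print(v, v * area)                        # velocity and volumetric flow rate
```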
APA, Harvard, Vancouver, ISO, and other styles
8

Yue, Wenzheng, Guo Tao, and Zhengwu Liu. "Identifying Reservoir Fluids by Wavelet Transform of Well Logs." SPE Reservoir Evaluation & Engineering 9, no. 05 (October 1, 2006): 574–81. http://dx.doi.org/10.2118/88559-pa.

Full text of the source
Abstract:
Summary The wavelet-transform (WT) method has been applied to logs to extract reservoir-fluid information. In addition to the time (depth)/frequency analysis generally performed by the wavelet method, we also have performed energy spectral analysis for time/frequency-domain signals by the WT method. We have further developed a new method to identify reservoir fluid by setting up a correlation between the energy spectra and reservoir fluid. We have processed 42 models from an oil field in China using this method and have subsequently applied these rules to interpret reservoir layers. It is found that identifications by use of this method are in very good agreement with the results of well tests. Introduction An important log-analysis application is determining reservoir-fluid properties. It is common practice to calculate the water and oil saturations of reservoir formations by use of electrical logs. With the development of well-logging technology, a number of methods have been developed for reservoir-fluid typing with well logs (Hou 2002; Geng et al. 1983; Dahlberg and Ference 1984). A recent report has also described reservoir-fluid typing by the T2 differential spectrum from nuclear-magnetic-resonance (NMR) logs (Coates et al. 2001). However, because of the interference from vugs, fractures, clay content, and mud-filtrate invasion, the reservoir-fluid information contained in well logs is often concealed. The reliability of these log interpretations is thus limited in many cases. Therefore, it is desirable to find a more reliable and consistent way of reservoir-fluid typing with well logs. In this paper, we present a new method using the WT for fluid typing with well logs. The WT technique was developed with the localization idea from Gabor's short-time Fourier analysis and has been expanded further. Wavelets provide the ability to perform local analysis (i.e., analyze a small portion of a larger signal) (Daubechies 1992). This localized analysis represents the next logical step: a windowing technique with variable-sized regions. Wavelet analysis allows the use of long time intervals, where more-precise low-frequency information is wanted, and shorter intervals, where high-frequency information is needed. Wavelet analysis is capable of revealing aspects of data that other signal-analysis techniques miss: aspects such as trends, breakdown points, discontinuities in higher derivatives, and self-similarity. In well-logging-data processing, wavelet analysis has been used to identify formation boundaries, estimate reservoir parameters, and increase vertical resolution (Lu and Horne 2000; Panda et al. 1996; Jiao et al. 1999; Barchiesi and Gharbi 1999). For data interpretation, however, the identification of hydrocarbon-bearing zones by wavelet analysis is still under investigation. In this study, we have developed a technique of wavelet-energy-spectrum analysis (WESA) to identify reservoir-fluid types. We have applied this technique to field-data interpretation and have achieved very good results.
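As an illustration of the energy-spectrum idea (the exact WESA workflow is not spelled out in the abstract, so the wavelet choice, decomposition depth, and synthetic curve below are assumptions), a log curve over an interval can be decomposed with a discrete wavelet transform and the relative energy per scale compared across zones:

```python
import numpy as np
import pywt

# synthetic stand-in for a log curve over a candidate reservoir interval
depth = np.linspace(0, 1, 512)
log_curve = np.sin(25 * depth) + 0.3 * np.random.default_rng(1).standard_normal(512)

coeffs = pywt.wavedec(log_curve, "db4", level=4)   # [a4, d4, d3, d2, d1]
energy = np.array([np.sum(c ** 2) for c in coeffs])
rel_energy = energy / energy.sum()                 # energy spectrum across scales

for band, e in zip(["a4", "d4", "d3", "d2", "d1"], rel_energy):
    print(f"{band}: {e:.3f}")   # zones are then typed by comparing such spectra
```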
APA, Harvard, Vancouver, ISO, and other styles
9

Shi, Jun-Feng, Feng Deng, Li-Zhi Xiao, Hua-Bing Liu, Feng-Qin Ma, Meng-Ying Wang, Rui-Dong Zhao, Shi-Wen Chen, Jian-Jun Zhang, and Chun-Ming Xiong. "A proposed NMR solution for multi-phase flow fluid detection." Petroleum Science 16, no. 5 (September 25, 2019): 1148–58. http://dx.doi.org/10.1007/s12182-019-00367-3.

Full text of the source
Abstract:
Abstract In the petroleum industry, detection of multi-phase fluid flow is very important in both surface and down-hole measurements. Accurate measurement of high-rate water or gas multi-phase flow has always been an academic and industrial focus. NMR is an efficient and accurate technique for the detection of fluids; it is widely used in the determination of fluid compositions and properties. This paper aims to quantitatively detect multi-phase flow in oil and gas wells and pipelines and proposes an innovative method for online nuclear magnetic resonance (NMR) detection. Online NMR data acquisition, processing, and interpretation methods are proposed to fill the gap left by traditional methods. A full-bore straight-tube design without pressure drop, a Halbach magnet structure with zero magnetic leakage outside the probe, a separate antenna structure that keeps flow effects out of the NMR measurement, and automatic control technology enable unattended operation. Through the innovations of this work, NMR can be applied to the real-time, quantitative detection of multi-phase flow in oil and gas wells and pipelines.
APA, Harvard, Vancouver, ISO, and other styles
10

Wulandari, I. Gusti Agung Ayu Desy. "Pengaruh Nano Fluida terhadap Temperatur Kondensor Cascade Straight Heat Pipe." Jurnal METTEK 5, no. 2 (January 8, 2020): 79. http://dx.doi.org/10.24843/mettek.2019.v05.i02.p03.

Full text of the source
Abstract:
The development of Central Processing Unit (CPU) technology in computers has led to smart technologies that deliver better performance in smaller dimensions. This reduction in dimensions can cause a very significant increase in power and a high heat flux in the CPU. In this research, a cascade straight heat pipe is designed for a better CPU cooling system that requires no additional power to operate. From the data obtained, the best thermal performance comes from the cascade straight heat pipe with the Al2O3 - TiO2 - water working fluid, with a simulator plate temperature decrease of 41.872% at maximum load and the highest condenser output temperature. The second-best thermal performance is obtained with the Al2O3 - water working fluid, with a simulator plate temperature decrease of 35.243% at maximum load. The poorest thermal performance occurs with water as the working fluid, with a simulator plate temperature decrease of 28.648% and the lowest condenser output temperature.
APA, Harvard, Vancouver, ISO, and other styles
11

Umran, Samir M., Songfeng Lu, Zaid Ameen Abduljabbar, Jianxin Zhu, and Junjun Wu. "Secure Data of Industrial Internet of Things in a Cement Factory Based on a Blockchain Technology." Applied Sciences 11, no. 14 (July 9, 2021): 6376. http://dx.doi.org/10.3390/app11146376.

Full text of the source
Abstract:
The Industrial Internet of Things (IIoT) has become a pivotal field of development that can increase the efficiency of real-time collection, recording, analysis, and control of the entire activities of various machines, and can actively enhance quality and reduce costs. The traditional IIoT depends on centralized architectures that are vulnerable to several kinds of cyber-attacks, such as bottlenecks and single points of failure. Blockchain technology has emerged to change these architectures to a decentralized form. In modern industrial settings, blockchain technology is utilized for its ability to provide high levels of security, low computational complexity, P2P communication, transparent logs, and decentralization. The present work proposes the use of a private blockchain mechanism for an industrial application in a cement factory, which offers low power consumption, scalability, and a lightweight security scheme; and which can play an efficient role in controlling access to valuable data generated by sensors and actuators. A low-power ARM Cortex-M processor is utilized due to its efficiency in terms of processing cryptographic algorithms, and this plays an important part in improving the computational execution of the proposed architecture. In addition, instead of proof of work (PoW), our blockchain network uses proof of authentication (PoAh) as a consensus mechanism to ensure secure authentication, scalability, speed, and energy efficiency. Our experimental results show that the proposed framework achieves high levels of security, scalability and ideal performance for smart industrial environments. Moreover, we successfully realized the integration of blockchain technology with the industrial internet of things devices, which provides the blockchain technology features and efficient resistance to common cyber-security attacks.
APA, Harvard, Vancouver, ISO, and other styles
12

Lumley, D. E., and R. A. Behrens. "Practical Issues of 4D Seismic Reservoir Monitoring: What an Engineer Needs to Know." SPE Reservoir Evaluation & Engineering 1, no. 06 (December 1, 1998): 528–38. http://dx.doi.org/10.2118/53004-pa.

Full text of the source
Abstract:
Summary Time-lapse three-dimensional (3D) seismic, which geophysicists often abbreviate to four-dimensional (4D) seismic, has the ability to image fluid flow in the interwell volume by repeating a series of 3D seismic surveys over time. Four-dimensional seismic shows great potential in reservoir monitoring and management for mapping bypassed oil, monitoring fluid contacts and injection fronts, identifying pressure compartmentalization, and characterizing the fluid-flow properties of faults. However, many practical issues can complicate the simple underlying concept of a 4D project. We address these practical issues from the perspective of a reservoir engineer on an asset team by asking a series of practical questions and discussing them with examples from several of Chevron's ongoing 4D projects. We discuss feasibility tests, technical risks, and the cost of doing 4D seismic. A 4D project must pass three critical tests to be successful in a particular reservoir: Is the reservoir rock highly compressible and porous? Is there a large compressibility contrast and sufficient saturation changes over time between the monitored fluids? and Is it possible to obtain high-quality 3D seismic data in the area with clear reservoir images and highly repeatable seismic acquisition? The risks associated with a 4D seismic project include false anomalies caused by artifacts of time-lapse seismic acquisition and processing and the ambiguity of seismic interpretation in trying to relate time-lapse changes in seismic data to changes in saturation, pressure, temperature, or rock properties. The cost of 4D seismic can be viewed as a surcharge on anticipated well work and expressed as a cost ratio (seismic/wells), which our analysis shows ranges from 5 to 35% on land, 10 to 50% on marine shelf properties, and 5 to 10% in deepwater fields. Four-dimensional seismic is an emerging technology that holds great promise for reservoir management applications, but the significant practical issues involved can make or break any 4D project and need to be carefully considered. Introduction Four-dimensional seismic reservoir monitoring is the process of repeating a series of 3D seismic surveys over a producing reservoir in time-lapse mode. It has a potentially huge impact in reservoir management because it is the first technique that may allow engineers to image dynamic reservoir processes1 such as fluid movement,2 pressure build-up,3 and heat flow4,5 in a reservoir in a true volumetric sense. However, we demonstrate that practical operational issues easily can complicate the simple underlying concept. These issues include requiring the right mix of business drivers, a favorable technical risk assessment and feasibility study, a highly repeatable seismic acquisition survey design, careful high-resolution amplitude-preserved seismic data processing, and an ultimate reconciliation of 4D seismic images with independent reservoir borehole data and history-matched flow simulations. The practical issues associated with 4D seismic suggest that it is not a panacea. Four-dimensional seismic is an exciting new emerging technology that requires careful analysis and integration with traditional engineering data and workflows to be successful. Our objective in this paper is to provide an overview of the 4D seismic method and illuminate the practical issues important to an asset team reservoir engineer. 
For this reason, we do not present a comprehensive case study of a single 4D project here, but instead draw examples from several Chevron 4D projects to illustrate each of our points. We have structured this paper as a series of questions an engineer should ask before undertaking any 4D seismic project: What is 4D seismic? What can 4D seismic do for me? Will 4D seismic work in my reservoir? What are the risks with 4D seismic? What does 4D seismic cost? We answer these questions, highlight important issues, and offer lessons learned, rules of thumb, and general words of advice. What Is 4D Seismic? To describe the basic concepts underlying 4D seismic, we briefly review the seismic method in general6 and then consider the advantages of the time-lapse aspect of 4D seismic. In a single 3D seismic survey, seismic sources (dynamite, airguns, vibrators, etc.) generate seismic waves at or near the earth's surface. These source waves reflect off subsurface seismic impedance contrasts that are a function of rock and fluid compressibility, shear modulus, and bulk density. Arrays of receivers (geophones or hydrophones) record the reflected seismic waves as they arrive back at the earth's surface. Applying a wave-equation-imaging algorithm7 to the recorded wavefield creates a 3D seismic image of the reservoir rock and fluid property contrasts that are responsible for the reflections. Four-dimensional seismic analysis involves simply repeating the 3D seismic surveys, such that the fourth dimension is calendar time,8 to construct and compare seismic images in time-lapse mode to monitor time-varying processes in the subsurface during reservoir production. The term 4D seismic is usually reserved for time-lapse 3D seismic, as opposed to other time-lapse seismic techniques that do not have 3D volumetric coverage [e.g., two dimensional (2D) surface seismic, and the borehole seismic methods of vertical seismic profiling and crosswell seismic9,10]. Four-dimensional seismic has all the traditional reservoir characterization benefits of 3D seismic,11 plus the major additional benefit that fluid-flow features may be imaged directly. To first order, seismic images are sensitive to spatial contrasts in two distinct types of reservoir properties: time-invariant static geology properties such as lithology, porosity, and shale content; and time-varying dynamic fluid-flow properties such as fluid saturation, pore pressure, and temperature. Fig. 1 shows how the seismic impedance of rock samples with varying porosity changes as the pore saturation changes from oil-full to water-swept conditions. Given a single 3D seismic survey, representing a single snapshot in time of the reservoir, the static geology and dynamic fluid-flow contributions to the seismic image couple nonuniquely and are, therefore, difficult to separate unambiguously. For example, it may be impossible to distinguish a fluid contact from a lithologic boundary in a single seismic image, as shown in Frames 1 and 2 of Fig. 2. Examining the difference between time-lapse 3D seismic images (i.e., 4D seismic) allows the time-invariant geologic contributions to cancel, resulting in a direct image of the time-varying changes caused by reservoir fluid flow (Frame 3 of Fig. 2). In this way, the 4D seismic technique has the potential to image reservoir scale changes in fluid saturation, pore pressure, and temperature during production.
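The core differencing idea behind Frame 3 of Fig. 2 reduces to a subtraction of co-processed amplitude cubes. The sketch below uses synthetic arrays standing in for two aligned 3D surveys (all sizes and noise levels are assumptions) to show the static geology term cancelling while the fluid-flow change remains.

```python
import numpy as np

rng = np.random.default_rng(4)
geology = rng.standard_normal((50, 50, 200))       # static reflectivity, time-invariant
fluid_change = np.zeros_like(geology)
fluid_change[20:30, 20:30, 80:90] = 0.5            # swept zone between surveys

base = geology + 0.05 * rng.standard_normal(geology.shape)            # survey 1
monitor = geology + fluid_change + 0.05 * rng.standard_normal(geology.shape)

diff = monitor - base                              # geology cancels, changes remain
mask = np.abs(diff) > 4.0 * 0.05 * np.sqrt(2)      # threshold vs combined noise level
print(f"{mask[20:30, 20:30, 80:90].mean():.0%} of the swept zone flagged")
```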
APA, Harvard, Vancouver, ISO, and other styles
13

Halliwell, N. A., and G. K. Hargrave. "Optical engineering: Diagnostics for industrial applications." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 217, no. 6 (June 1, 2003): 597–617. http://dx.doi.org/10.1243/095440603321919545.

Full text of the source
Abstract:
Optical engineering draws on research and development in laser technology, modern photonic detection/imaging systems, and optical metrology for engineering applications. It has produced a wide range of processes and techniques from high-power laser material processing to high-sensitivity metrology and has applications in every industrial sector. Modern optical diagnostic techniques are providing new experimental and in situ data, which hitherto were considered to be unobtainable. Engineers are analysing these data in order to provide immediate design improvements in the performance of components. In addition, they use the data to refine theoretical/computer models of engineering processes, which in turn provide more accurate performance prediction. This paper introduces technology now available to the optical engineer and describes how it is being used to provide optical diagnostic techniques for both solid and fluid mechanics applications in industry. The gas industry has to deliver gas safely and efficiently from ‘drill bit to burner tip’ and has benefited significantly from optical engineering. Examples of optical diagnostic techniques and applications, which are used to improve this process, are described.
APA, Harvard, Vancouver, ISO, and other styles
14

Hughes, J. K. "Examination of Seismic Repeatability as a Key Element of Time-Lapse Seismic Monitoring." SPE Reservoir Evaluation & Engineering 3, no. 06 (December 1, 2000): 517–24. http://dx.doi.org/10.2118/68246-pa.

Full text of the source
Abstract:
Summary The propagation of elastic waves in rocks is determined by the bulk modulus, shear modulus, and bulk density of the rock. In porous rocks all these properties are affected by the distribution of pore space, the geometry and interconnectivity of the pores, and the nature of the fluid occupying the pore space. In addition, the bulk and shear moduli are also affected by the effective pressure, which is equivalent to the difference between the confining (or lithostatic) pressure and pore pressure. During production of hydrocarbons from a reservoir, the movement of fluids and changes in pore pressure may contribute to a significant change in the elastic moduli and bulk density of the reservoir rocks. This phenomenon is the basis for reservoir monitoring by repeated seismic (or time-lapse) surveys whereby the difference in seismic response during the lifetime of the field can be directly related to changes in the pore fluids and/or pore pressure. Under suitable conditions, these changes in the reservoir during production can be quantitatively estimated by appropriate repeat three-dimensional (3D) seismic surveys which can contribute to understanding of the reservoir model away from the wells. The benefit to reservoir management is a better flow model which incorporates the information derived from the seismic data. What are suitable conditions? There are two primary factors which determine whether the reservoir changes we wish to observe will be detectable in the seismic data:the magnitude of the change in the elastic moduli (and bulk density) of the reservoir rocks as a result of fluid displacement, pressure changes, etc.;the magnitude of the repeatability errors between time-lapse seismic surveys. This includes errors associated with seismic data collection, ambient noise and data processing. The first is the signal component and the second the noise component. Previous reviews of seismic monitoring suggest that for 3D seismic surveys a signal-to-noise (S/N) ratio of 1.0 is sufficient for qualitative estimation of reservoir changes. Higher S/N ratios may allow quantitative estimates. After a brief examination of the rock physics affecting the seismic signal, we examine the second factor, repeatability errors, and use a synthetic seismic model to illustrate some of the factors which contribute to repeatability error. We also use two land 3D surveys over a Middle East carbonate reservoir to illustrate seismic repeatability. The study finds that repeatability errors, while always larger than desired, are generally within limits which will allow production-induced changes in seismic reflectivity to be confidently detected. Introduction Seismic data have been used successfully for many decades in the petroleum industry and have contributed significantly to the discovery of new fields throughout the world. Initially, seismic surveys were primarily an exploration tool, assisting in the identification of potential hydrocarbon structural and stratigraphic traps for drilling targets. With the introduction of 3D seismic surveys in the 1970's, accurate geological structural mapping became possible while the use of new seismic attributes as hydrocarbon indicators improved the success rate of discovery wells. 
More recently seismic data have also contributed to a better reservoir description away from the wells by making use of the correlation between suitable seismic attributes and petrophysical quantities such as porosity and net to gross, and by incorporating robust geostatistical methods for estimating the static reservoir model. Better seismic acquisition technology, improved seismic processing methods and an overall improvement in signal to noise have led to further 3D seismic surveys over producing fields primarily for better imaging of the reservoir and improved reservoir characterization. The concept of using repeated seismic surveys (time-lapse seismic) for monitoring changes in the reservoir due to production was suggested in the 1980's,1-3 and early tests were done by Arco in the Holt Sand fireflood4 from 1981-83. Over the last few years, the number of publications relating to time-lapse seismic [often referred to as four-dimensional (4D) seismic] has increased dramatically. Prior to time-lapse seismic monitoring, seismic data have been the domain of geologists and geophysicists, but the possibility of monitoring fluid displacements and pressure changes in a producing reservoir, away from the wells, has direct relevance to reservoir engineers and reservoir management. More exciting possibilities have been introduced by the use of time-lapse seismic data in combination with production history matching5 for greater refinement in optimization of the reservoir model. It is important, however, that reliable criteria are used to assess the feasibility of seismic monitoring.6
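A common industry measure of the repeatability error discussed here is NRMS, the RMS of the trace difference normalized by the mean trace RMS; its use below is an assumption of this note, not necessarily the statistic the paper applies to its two land surveys.

```python
import numpy as np

def nrms(a, b):
    """NRMS repeatability in percent: 0 for identical traces, ~141 for uncorrelated noise."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 200.0 * rms(a - b) / (rms(a) + rms(b))

t = np.linspace(0, 20, 500)
base = np.sin(t)                                           # synthetic baseline trace
repeat = base + 0.1 * np.random.default_rng(5).standard_normal(t.size)
print(f"NRMS = {nrms(base, repeat):.1f} %")
```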
APA, Harvard, Vancouver, ISO, and other styles
15

Zhou, Qingguo, Qingquan Lv, and Gaofeng Zhang. "A Combined Forecasting System Based on Modified Multi-Objective Optimization for Short-Term Wind Speed and Wind Power Forecasting." Applied Sciences 11, no. 20 (October 9, 2021): 9383. http://dx.doi.org/10.3390/app11209383.

Full text of the source
Abstract:
Wind speed and wind power are two important indices for wind farms. Accurate wind speed and power forecasting can help to improve wind farm management and increase the contribution of wind power to the grid. However, nonlinear and non-stationary wind speed and wind power can influence the forecasting performance of different models. To improve forecasting accuracy and overcome the influence of the original time series on the model, a forecasting system that can effectively forecast wind speed and wind power based on a data pre-processing strategy, a modified multi-objective optimization algorithm, multiple individual forecasting models, and a combined model is developed in this study. A data pre-processing strategy was implemented to determine the wind speed and wind power time series trends and to reduce interference from noise. Multiple artificial neural network forecasting models were used to forecast wind speed and wind power and construct a combined model. To obtain accurate and stable forecasting results, the multi-objective optimization algorithm was employed to optimize the weights of the combined model. As a case study, the developed forecasting system was used to forecast the wind speed and wind power over 10 min from four different sites. The point forecasting and interval forecasting results revealed that the developed forecasting system outperforms all other models with respect to forecasting precision and stability. Thus, the developed system is extremely useful for enhancing forecasting precision and is a reasonable and valid tool for use in intelligent grid programming.
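The combination step can be sketched with a scalarized stand-in for the paper's modified multi-objective optimizer: member forecasts (synthetic here, with hypothetical model roles) are merged with simplex-constrained weights chosen to minimize squared error.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, 6, 200))             # stand-in wind speed series
preds = np.stack([truth + s * rng.standard_normal(200) for s in (0.1, 0.2, 0.3)])

def loss(w):                                       # scalarized accuracy objective
    return np.mean((w @ preds - truth) ** 2)

n = preds.shape[0]
res = minimize(loss, np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print(res.x)                                       # combination weights per member model
```

A genuine multi-objective variant would trade forecasting precision against stability and return a Pareto set rather than a single weight vector.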
APA, Harvard, Vancouver, ISO, and other styles
16

Agnew, Dennis, Nader Aljohani, Reynold Mathieu, Sharon Boamah, Keerthiraj Nagaraj, Janise McNair, and Arturo Bretas. "Implementation Aspects of Smart Grids Cyber-Security Cross-Layered Framework for Critical Infrastructure Operation." Applied Sciences 12, no. 14 (July 7, 2022): 6868. http://dx.doi.org/10.3390/app12146868.

Full text of the source
Abstract:
Communication networks in power systems are a major part of the smart grid paradigm. They enable and facilitate the automation of power grid operation as well as self-healing in contingencies. Such dependence on communication networks, though, creates room for cyber-threats. An adversary can launch an attack on the communication network, which in turn reflects on power grid operation. Attacks could be in the form of false data injection into system measurements, flooding the communication channels with unnecessary data, or intercepting messages. Using machine learning-based processing on data gathered from communication networks and the power grid is a promising solution for detecting cyber threats. In this paper, a co-simulation of a cross-layer cyber-security strategy is presented. The advantage of such a framework is the augmentation of valuable data that enhances the detection as well as identification of anomalies in the operation of the power grid. The framework is implemented on the IEEE 118-bus system. The system is constructed in Mininet to simulate a communication network and obtain data for analysis. A distributed three-controller software-defined networking (SDN) framework is proposed that utilizes the Open Network Operating System (ONOS) cluster. According to our findings, the suggested architecture outperforms a single-SDN-controller framework by more than ten times in throughput. This provides a higher flow of data throughout the network while decreasing congestion caused by a single controller’s processing restrictions. Furthermore, our CECD-AS approach outperforms state-of-the-art physics-based and machine learning-based techniques in terms of attack classification. The performance of the framework is investigated under various types of communication attacks.
APA, Harvard, Vancouver, ISO, and other styles
17

Aljohani, Naif Radi, Muhammad Ahtisham Aslam, Alaa O. Khadidos, and Saeed-Ul Hassan. "A Methodological Framework to Predict Future Market Needs for Sustainable Skills Management Using AI and Big Data Technologies." Applied Sciences 12, no. 14 (July 7, 2022): 6898. http://dx.doi.org/10.3390/app12146898.

Full text of the source
Abstract:
Analysing big data job posts in Saudi cyberspace to describe the future market need for sustainable skills, this study used the power of artificial intelligence, deep learning, and big data technologies. The study targeted three main stakeholders: students, universities, and job providers. It provides analytical insights to improve student satisfaction, retention, and employability, investigating recent trends in the essential skills pinpointed as enhancing the social effect of learning, and identifying and developing the competencies and talents required for the Kingdom of Saudi Arabia’s (KSA’s) digital transformation into a regional and global leader in technology-driven innovation. The methodological framework comprises smart data processing, word embedding, and case-based reasoning to identify the skills required for job positions. The study’s outcomes may promote the alignment of KSA’s business and industry to academia, highlighting where to build competencies and skills. They may facilitate the parameterisation of the learning process, boost universities’ ability to promote learning efficiency, and foster the labour market’s sustainable evolution towards technology-driven innovation. We believe that this study is crucial to Vision 2030’s realisation through a long-term, inclusive approach to KSA’s transformation of knowledge and research into new employment, innovation, and capacity.
APA, Harvard, Vancouver, ISO, and other styles
18

Ali, Nehal M., Mohamed Shaheen, Mai S. Mabrouk, and Mohamed Aborizka. "Machine Learning-Based Models for Detection of Biomarkers of Autoimmune Diseases by Fragmentation and Analysis of miRNA Sequences." Applied Sciences 12, no. 11 (May 31, 2022): 5583. http://dx.doi.org/10.3390/app12115583.

Full text of the source
Abstract:
Thanks to high-throughput data technology, microRNA analysis studies have evolved in early disease detection. This work introduces two complete models to detect the biomarkers of two autoimmune diseases, multiple sclerosis and rheumatoid arthritis, via miRNA analysis. Based on work the authors published previously, both introduced models involve complete pipelines of text mining methods, integrated with traditional machine learning methods, and LSTM deep learning. This work also studies the fragmentation of miRNA sequences to reduce the needed processing time and computational power. Moreover, this work studies the impact of using two different library preparation kits (NEBNEXT and NEXTFLEX) on the detection accuracy for rheumatoid arthritis. Additional experiments are applied to the proposed models based on three different transcriptomic datasets. The results indicate that the transcriptomic fragmentation model reported a biomarker detection accuracy of 96.45% on a sequence fragment size of 0.2, indicating a significant reduction in execution power while retaining biomarker detection accuracy. On the other hand, the LSTM model obtained a promising detection accuracy of 72%, implying savings in feature engineering processing. Additionally, the fragmentation model and the LSTM model reported 22.4% and 87.5% less execution time than work in the literature, respectively, denoting a considerable reduction in computational cost.
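The fragmentation idea reduces to cutting each sequence into overlapping windows whose length is a fraction of the whole; the 0.2 ratio below echoes the fragment size the abstract reports. The helper is a hypothetical illustration, not the authors' pipeline.

```python
def fragment(seq, ratio=0.2, stride=1):
    """Cut a sequence into overlapping fragments sized as a fraction of its length."""
    size = max(1, int(len(seq) * ratio))
    return [seq[i:i + size] for i in range(0, len(seq) - size + 1, stride)]

# a let-7-like miRNA sequence yields 4-nucleotide fragments at ratio 0.2
print(fragment("UGAGGUAGUAGGUUGUAUAGUU", ratio=0.2))
```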
APA, Harvard, Vancouver, ISO, and other styles
19

Singh, Divya, Aasheesh Shukla, Kueh Lee Hui, and Mangal Sain. "Hybrid Precoder Using Stiefel Manifold Optimization for Mm-Wave Massive MIMO System." Applied Sciences 12, no. 23 (November 30, 2022): 12282. http://dx.doi.org/10.3390/app122312282.

Full text of the source
Abstract:
Due to the increasing demand for fast data rates and large spectra, millimeter-wave technology plays a vital role in the advancement of 5G communication. The idea behind Mm-Wave communications is to take advantage of the huge and unexploited bandwidth to cope with future multigigabit-per-second mobile data rates, imaging, and multimedia applications. In Mm-Wave systems, digital precoding provides optimal performance at the cost of complexity and power consumption. Therefore, hybrid precoding, i.e., analog–digital precoding, has received significant consideration as a favorable alternative to digital precoding. Conventional hybrid precoding methods suffer from low spectral efficiency and long processing times due to nested loops and large iteration counts. A manifold optimization-based algorithm using the gradient method is proposed to bring the spectral efficiency close to optimal and to reduce processing time. Performance is compared using the simulation results of the proposed work and those of existing techniques.
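The generic ingredients of gradient-based optimization on the Stiefel manifold (the constraint set for semi-unitary precoders) are a tangent-space projection and a retraction. The sketch below uses a stand-in eigenvalue objective, not the paper's spectral-efficiency cost, with a QR retraction; all dimensions and step sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((16, 16))
A = A + A.T                                        # symmetric test matrix
n, p, step = 16, 4, 0.02

X = np.linalg.qr(rng.standard_normal((n, p)))[0]   # start on the Stiefel manifold
for _ in range(300):
    G = -2.0 * A @ X                               # Euclidean gradient of -tr(X^T A X)
    rgrad = G - X @ (X.T @ G + G.T @ X) / 2.0      # project onto the tangent space
    X, _ = np.linalg.qr(X - step * rgrad)          # QR retraction back to the manifold

print(np.trace(X.T @ A @ X))   # approaches the sum of the 4 largest eigenvalues of A
```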
APA, Harvard, Vancouver, ISO, and other styles
20

Muszynska, A., D. E. Bently, W. D. Franklin, J. W. Grant, and P. Goldman. "Applications of Sweep Frequency Rotating Force Perturbation Methodology in Rotating Machinery for Dynamic Stiffness Identification." Journal of Engineering for Gas Turbines and Power 115, no. 2 (April 1, 1993): 266–71. http://dx.doi.org/10.1115/1.2906704.

Full text of the source
Abstract:
This paper outlines the sweep frequency rotating force perturbation method for identifying the dynamic stiffness characteristics of rotor/bearing/seal systems. Emphasis is placed on nonsynchronous perturbation of rotating shafts in a sequence of constant rotative speeds. In particular, results of the identification of flexible rotor multimode parameters and identification of fluid forces in seals and bearings are given. These results, presented in the direct and quadrature dynamic stiffness formats, permit the separation of components for easy identification. Another example of the perturbation method application is the identification of the lateral–torsional coupling due to shaft anisotropy. Results of laboratory rig experiments, the identification algorithm, and data processing techniques are discussed.
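The direct/quadrature dynamic stiffness format mentioned here can be illustrated with a synthetic one-degree-of-freedom example (the modal values below are placeholders, not identified rig parameters): at each perturbation frequency, the complex ratio of input force to measured response is split into in-phase and quadrature parts.

```python
import numpy as np

freqs = np.linspace(5, 100, 40) * 2 * np.pi        # perturbation frequencies, rad/s
m, k, d = 2.0, 5.0e5, 400.0                        # assumed modal mass, stiffness, damping
response = 1.0 / (k - m * freqs**2 + 1j * d * freqs)   # displacement per unit force

K_dyn = 1.0 / response                             # complex dynamic stiffness
direct, quadrature = K_dyn.real, K_dyn.imag
# The direct part crosses zero at the natural frequency (here sqrt(k/m) = 500 rad/s);
# the quadrature part grows linearly with frequency, its slope giving the damping d.
print(freqs[np.argmin(np.abs(direct))])            # locate the zero crossing
```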
APA, Harvard, Vancouver, ISO, and other styles
21

Chambers, K. T., W. S. Hallager, C. S. Kabir, and R. A. Garber. "Characterization of a Carbonate Reservoir With Pressure-Transient Tests and Production Logs: Tengiz Field, Kazakhstan." SPE Reservoir Evaluation & Engineering 4, no. 04 (August 1, 2001): 250–59. http://dx.doi.org/10.2118/72598-pa.

Full text of the source
Abstract:
Summary The combination of pressure-transient and production-log (PL) analyses has proved valuable in characterizing reservoir flow behavior in the giant Tengiz field. Among the important findings is the absence of clear dual-porosity flow. This observation contradicts an earlier interpretation that the reservoir contains a well-connected, natural fracture network. Fracturing and other secondary porosity mechanisms play a role in enhancing matrix permeability, but their impact is insufficient to cause dual-porosity flow behavior to develop. Flow profiles measured with production logs consistently show several thin (10 to 30 ft) zones dominating well deliverability over the thick (up to 1,040 ft) perforation intervals at Tengiz. A comparison of PL results and core descriptions reveals a good correlation between high deliverability zones and probable exposure surfaces in the carbonate reservoir. Contrary to earlier postulations, results obtained from pressure-transient and PL data at Tengiz do not support rate-sensitive productivity indices (PI's). Inclusion of rate variations in reconciling buildup and drawdown test results addressed this issue. We developed wellbore hydraulic models and calibrated them with PL data for extending PI results to wells that do not have measured values. A simplified equation-of-state (EOS) fluid description was an important component of the models because the available black-oil fluid correlations do not provide reliable results for the 47°API volatile Tengiz oil. Clear trends in reservoir quality emerge from the PI results. Introduction A plethora of publications exists on transient testing. However, only a few papers address the issue of combining multidisciplinary data to understand reservoir flow behavior (Refs. 1 through 4 are worthy of note). We used a synergistic approach by combining geology, petrophysics, transient tests, PL's, and wellbore-flow modeling to characterize the reservoir flow behavior in the Tengiz field. Understanding this flow behavior is crucial to formulating guidelines for reservoir management. Permeability estimation from pressure-transient data is sensitive to the effective reservoir thickness contributing to flow. Unfortunately, difficulties associated with the calibration of old openhole logs, sparse core coverage, and a major diagenetic overprint of solid bitumen combine to limit the identification of an effective reservoir at Tengiz based on openhole log data alone. Consequently, PL's have been used to identify an effective reservoir in terms of its flow potential. A limitation of production logs is that they only measure fluid entering the wellbore and are not necessarily indicative of flow in the reservoir away from the well. Pressure data from buildup and drawdown tests, on the other hand, provide insights into flow behavior both near the well and farther into the reservoir. The combination of pressure-transient analysis using simultaneous downhole pressure and flow-rate data along with measured production profiles provides an opportunity to reconcile near-wellbore and in-situ flow behavior. Expansion of reservoir fluids along with formation compaction provides the current drive mechanism at Tengiz because the reservoir is undersaturated by over 8,000 psia. As the field is produced, reservoir stresses will increase in response to pressure decreases.5 Increased stresses can significantly reduce permeability if natural fractures provide the primary flow capacity in the reservoir. 
Wells producing at high drawdowns provide an opportunity to investigate the pressure sensitivity of fractures within the near-wellbore region. Early interpretations of pressure-transient tests at Tengiz uncovered a significant discrepancy between buildup and drawdown permeability, despite efforts to carefully control flow rates during the tests. Drawdown permeabilities typically exceeded the buildup results by 20 to 50%. Although this finding appears counterintuitive to the expectation that drawdowns (that is, higher stresses) would lead to lower permeability, it indicated a possible stress dependence on well deliverability. The method proposed by Kabir6 to reconcile differences between drawdown and buildup results proved useful in addressing this issue. The opportunities to collect PL and downhole pressure data at Tengiz are limited by mechanical conditions in some wells and by the requirement to meet the processing capacity of the oil and gas plant. On the other hand, accurate wellhead-pressure and flow-rate data are routinely available. Wellbore hydraulic calculations provide a basis for calculating flowing bottomhole pressures (FBHP's) with the available surface data. Calculated FBHP's can be combined with available reservoir pressure data to determine PI's for wells lacking bottomhole measurements. The ability to compute accurate fluid properties is critical in applying this approach. Unfortunately, the black-oil correlations routinely used in wellbore hydraulic calculations7–9 do not provide reliable results for the volatile Tengiz oil. We obtained good agreement between laboratory measurements of fluid properties and calculated values using a simplified EOS.10 Surface and bottomhole data collected during PL operations provide a basis for validating wellbore hydraulic calculations. Networks of natural fractures can dominate the producing behavior of carbonate reservoirs such as Tengiz. Early identification of fractured reservoir behavior is critical to the successful development of these types of reservoirs.11 We present an approach for resolving reservoir flow behavior by combining production profiles, pressure-transient tests, and wellbore hydraulic calculations. Furthermore, we discuss the PL procedures developed to allow acquisition of the data required for all three types of analyses in a single logging run. Field examples from Tengiz highlight the usefulness of this approach.
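The PI calculation the authors extend to wells without downhole gauges reduces to a ratio of rate to drawdown once a flowing bottomhole pressure (FBHP) is available. The sketch below is a minimal illustration of that step, not the paper's code; the rates, pressures, and the hydraulic correction standing in for a calibrated wellbore model are all hypothetical.

```python
# Minimal sketch (not the paper's implementation): productivity index from a
# calculated flowing bottomhole pressure (FBHP). All numbers are hypothetical.

def productivity_index(rate_stb_d, p_res_psia, p_wf_psia):
    """PI = q / (p_res - p_wf), in STB/D per psi of drawdown."""
    drawdown = p_res_psia - p_wf_psia
    if drawdown <= 0:
        raise ValueError("FBHP must be below reservoir pressure")
    return rate_stb_d / drawdown

# FBHP approximated from wellhead pressure plus a hydrostatic/friction term
# supplied by a wellbore hydraulics model calibrated against PL data.
p_whp = 2500.0                  # psia, measured wellhead pressure (hypothetical)
hydraulic_correction = 8800.0   # psia, from the calibrated hydraulics model
p_wf = p_whp + hydraulic_correction

print(productivity_index(rate_stb_d=10000.0, p_res_psia=12300.0, p_wf_psia=p_wf))
```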
APA, Harvard, Vancouver, ISO, and other styles
22

Doi, Hirokazu. "Multivariate ERP Analysis of Neural Activations Underlying Processing of Aesthetically Manipulated Self-Face." Applied Sciences 12, no. 24 (December 18, 2022): 13007. http://dx.doi.org/10.3390/app122413007.

Full text of the source
Abstract:
Representation of the self-face is vulnerable to cognitive bias, and consequently, people often possess a distorted image of their own face. The present study investigated the neural mechanism underlying distortion of self-face representation by measuring event-related potentials (ERPs) elicited by actual, aesthetically enhanced, and aesthetically degraded images of the self-face. In addition to conventional analysis of ERP amplitude and global field power, multivariate analysis based on machine learning of single-trial data was integrated into the ERP analysis. The multivariate analysis revealed differential patterns of scalp ERPs in a long-latency range to self and other familiar faces when they were original or aesthetically degraded. The analyses of ERP amplitude and global field power failed to find any effect of the experimental manipulation in the long-latency range. The present results indicate the susceptibility of the neural correlates of self-face representation to aesthetic manipulation and the usefulness of the machine-learning approach in clarifying the neural mechanism underlying self-face processing.
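For readers unfamiliar with the multivariate, single-trial approach the abstract contrasts with conventional amplitude analysis, here is a minimal decoding sketch in Python with scikit-learn. The feature matrix and condition labels are simulated stand-ins, not the study's data, and the pipeline is only one plausible choice of classifier.

```python
# Minimal sketch of single-trial multivariate ERP decoding; simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 32
X = rng.normal(size=(n_trials, n_channels))  # mean amplitude per channel in a latency window
y = rng.integers(0, 2, size=n_trials)        # 0 = original self-face, 1 = degraded self-face

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
# With real ERP features, above-chance accuracy indicates condition information
# that univariate amplitude measures may miss.
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```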
APA, Harvard, Vancouver, ISO, and other styles
23

Babusiak, Branko, Marian Hostovecky, Maros Smondrk, and Ladislav Huraj. "Spectral Analysis of Electroencephalographic Data in Serious Games." Applied Sciences 11, no. 6 (March 10, 2021): 2480. http://dx.doi.org/10.3390/app11062480.

Full text of the source
Abstract:
In this paper, we describe an investigation of brain activity while playing a serious game (SG). The SG is focused on improving logical thinking, specifically on the cognitive training of students in the field of basic logic gates, and we summarize the SG's description, design, and development. A method based on various signal-processing techniques for evaluating electroencephalographic (EEG) data was implemented in MATLAB. This assessment was based on the analysis of the spectrogram of particular brain activity. Changes in brain-activity power in a characteristic frequency band during gameplay were calculated from the spectrogram. The EEG of 21 respondents was measured. Based on the results, the respondents can be divided into three groups according to specific EEG activity changes during gameplay compared to a relaxed state. The beta/alpha ratio, an indicator of the brain's engagement in a mental task, increased during gameplay in 18 of the 21 subjects. Our results also reflect the sex of the respondents, the duration of the game, the indicator value, and whether the game was successfully completed.
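The beta/alpha indicator can be computed directly from a spectrogram. A minimal sketch follows, using SciPy rather than the authors' MATLAB implementation; the sampling rate, window length, and the signal itself are assumptions for illustration.

```python
# Minimal sketch (assumed, not the authors' code): beta/alpha band-power
# ratio from a spectrogram of one EEG channel.
import numpy as np
from scipy.signal import spectrogram

fs = 256.0                            # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
eeg = rng.normal(size=int(60 * fs))   # one minute of simulated EEG

f, t, Sxx = spectrogram(eeg, fs=fs, nperseg=int(2 * fs))
alpha = Sxx[(f >= 8) & (f < 13)].mean(axis=0)   # mean alpha power per time bin
beta = Sxx[(f >= 13) & (f < 30)].mean(axis=0)   # mean beta power per time bin
ratio = beta / alpha   # higher than the relaxed baseline suggests mental engagement
print(f"mean beta/alpha ratio: {ratio.mean():.2f}")
```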
APA, Harvard, Vancouver, ISO, and other styles
24

Li, Weihan, Yang Li, Ling Yu, Jian Ma, Lei Zhu, Lingfeng Li, Huayue Chen, and Wu Deng. "A Novel Fault Feature Extraction Method for Bearing Rolling Elements Using Optimized Signal Processing Method." Applied Sciences 11, no. 19 (September 29, 2021): 9095. http://dx.doi.org/10.3390/app11199095.

Full text of the source
Abstract:
A rolling element signal travels a long transmission path during acquisition, so its fault features are difficult to extract. Therefore, a novel weak-fault feature extraction method, namely KMVMD-PGMCKD, is proposed, combining optimized variational mode decomposition with kurtosis mean (KMVMD) and maximum correlated kurtosis deconvolution based on power spectrum entropy and grid search (PGMCKD). First, a VMD with kurtosis mean (KMVMD) is proposed. Then an adaptive parameter-selection method for MCKD based on power spectrum entropy and grid search, namely PGMCKD, is proposed to determine the deconvolution period T and filter order L. The complementary advantages of KMVMD and PGMCKD are integrated to construct a novel weak-fault feature extraction model (KMVMD-PGMCKD). Finally, the power spectrum is applied to the signal obtained by KMVMD-PGMCKD to effectively implement feature extraction. Bearing rolling element signals from Case Western Reserve University and actual rolling element data are selected to prove the validity of KMVMD-PGMCKD. The experimental results show that KMVMD-PGMCKD can effectively extract the fault features of bearing rolling elements and accurately diagnose weak faults under variable working conditions.
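The power spectrum entropy that PGMCKD is said to use as its grid-search criterion is a standard spectral measure; a small sketch of that score alone follows (the VMD and MCKD steps themselves are omitted, and the test signals are synthetic).

```python
# Minimal sketch of a power-spectrum-entropy score of the kind used to
# grid-search MCKD parameters (T, L); test signals are simulated.
import numpy as np

def power_spectrum_entropy(x):
    """Shannon entropy of the normalized power spectrum.
    Lower values indicate a more concentrated (periodic/impulsive) spectrum."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]                     # avoid log(0)
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(2)
noise = rng.normal(size=4096)
tone = np.sin(2 * np.pi * 50 * np.arange(4096) / 4096)
# A grid search would pick the (T, L) pair whose deconvolved output minimizes
# this entropy; the pure tone scores far lower than broadband noise.
print(power_spectrum_entropy(noise), power_spectrum_entropy(tone))
```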
APA, Harvard, Vancouver, ISO, and other styles
25

Izadi, Mehdi. "Technology Focus: Heavy Oil (April 2022)." Journal of Petroleum Technology 74, no. 04 (April 1, 2022): 69–70. http://dx.doi.org/10.2118/0422-0069-jpt.

Full text of the source
Abstract:
Heavy oils are characterized by high density, high viscosity, and a high content of heavy-fraction components. Because of their high viscosity and lower API gravity than conventional crude oil, primary recovery of some of these crude oil types requires thermal stimulation of the reservoirs. Most of the technologies that deal with heavy oil need to address the mobility ratio or viscous forces before any flooding. In general, poor recovery is caused by physical reasons or geological reasons. Physical reasons can be categorized as capillary forces (existence of interfacial tension between oil and water, wettability) or viscous forces (high mobility ratio between water and oil). Geological reasons are heterogeneities in reservoir rock and exist in all petroleum oil systems. In heavy oil reservoirs, enhanced oil recovery (EOR) aims to reduce the capillary forces and interfacial tension to improve microscopic displacement efficiency, or to improve the (macroscopic) sweep efficiency by reducing the mobility ratio between the injected fluid and the displaced fluid. Improving the mobility ratio is achieved by increasing the viscosity of water using polymers or by reducing the oil viscosity using heat. In general, all technologies need to address the capillary and viscous forces to improve oil recovery. Paper SPE 207361 discusses improving flood efficiency through near-wellbore conformance control and improved vertical sweep efficiency. The use of fiber-optic sensors, as addressed in paper SPE 199023, is intended to gather better data and avoid misinterpretation during falloff tests and injectivity tests. Traditionally, for heavy oil EOR simulation, the addition of chemical species or heat to the flow equations, as well as the need for finer grid resolution, limited the use of full-field models in most cases; the industry has instead applied sector models and local grid refinement to obtain reasonable accuracy. Sector-modeling conditions must be satisfied to establish reliability, and there is a trade-off between accuracy (sector models) and computational expediency (full-field models). Recent development of hardware and software [graphics-processing-unit (GPU)-based simulators] has provided the industry with the tools to achieve full-field simulation in most fields by taking advantage of GPU solvers and using a fine-grid model to predict full-field performance. Recommended additional reading at OnePetro: www.onepetro.org. SPE 200279 - Field Application of the Autonomous Inflow Control Device for Optimized Heavy Oil Production in South Sultanate of Oman by Ali Al-Jumah, Petroleum Development Oman, et al. SPE 203012 - More Oil and Less Water: Autonomous Inflow Control Devices in New and Old Producers in Heavy Oil Fields From South of Oman by Ameera Al Harrasi, Petroleum Development Oman, et al. SPE 207684 - Game Changer in Dealing With Hard Scale Using a Slickline Torque Action Debris Breaker by Mahmoud Mohamed Koriesh, Dragon Oil, et al.
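Because the discussion turns on the mobility ratio, a short worked illustration may help. The end-point form M = (krw/μw)/(kro/μo) and all property values below are illustrative assumptions, not field data.

```python
# Hedged illustration of the mobility-ratio idea: end-point mobility ratio
# M = (krw / mu_w) / (kro / mu_o). All values are hypothetical.

def mobility_ratio(krw, mu_w_cp, kro, mu_o_cp):
    return (krw / mu_w_cp) / (kro / mu_o_cp)

mu_heavy_oil = 500.0   # cp, hypothetical heavy oil viscosity
print(mobility_ratio(0.3, 1.0, 0.8, mu_heavy_oil))        # waterflood: M >> 1, poor sweep
print(mobility_ratio(0.3, 30.0, 0.8, mu_heavy_oil))       # polymer-thickened water: M near 1
print(mobility_ratio(0.3, 1.0, 0.8, mu_heavy_oil / 50))   # heated oil: viscosity cut, M drops
```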
APA, Harvard, Vancouver, ISO, and other styles
26

Vurim, Alexandr, Nuriya Mukhamedova, Yuliya Baklanova, Andrey Syssaletin, and Assan Akayev. "Information and Analytical System for Processing of Research Results to Justify the Safety of Atomic Energy." Applied Sciences 12, no. 19 (September 27, 2022): 9705. http://dx.doi.org/10.3390/app12199705.

Full text of the source
Abstract:
This paper is devoted to the creation of an information and analytical system (IAS), under development to manage the data obtained in the experiments and investigations justifying the safety of atomic energy that the National Nuclear Center of the Republic of Kazakhstan (RSE NNC RK) has been conducting for over 30 years. The main component of the IAS, which determines its consumer capabilities, is an analytical unit that will allow programs for planned experiments to be created in view of their technical requirements and on the basis of the results of previous experiments, generalized and consolidated by the processing and comparison tools the IAS provides. Another important component of the IAS is a set of tools for the predictive calculation of the temperature of test-section materials given a specified change in the power of energy release in them; the predictive calculation of the required power of energy release in materials given a specified change in their temperature; the formation of arrays of experimental information in digital and graphical form; the comparison of experiments and their data with one another; and the formation of experiment protocols with the possibility of choosing specific data and processing methods. It should be noted that the created IAS greatly simplifies preparation for experiments.
APA, Harvard, Vancouver, ISO, and other styles
27

Al-Eid, Mohammad I., Sunil L. Kokal, William J. Carrigan, Jaffar M. Al-Dubaisi, Henry I. Halpern, and Jamal I. Al-Juraid. "Investigation of H2S Migration in the Marjan Complex." SPE Reservoir Evaluation & Engineering 4, no. 06 (December 1, 2001): 509–15. http://dx.doi.org/10.2118/74713-pa.

Full text of the source
Abstract:
Summary The Marjan complex is a large offshore oil field located in the Arabian Gulf and composed of four fields: Marjan 1, Marjan 2, Marjan 3, and Marjan 4. Currently, production in the complex is limited to the Khafji reservoir. For several years, it was known that the H2S concentration in the Khafji reservoir varied across the complex. A comprehensive study was initiated to map the concentration profile of H2S across the complex and to address the migration of H2S within it. This study included area-wide wellstream H2S measurement and geochemical fingerprinting of Marjan oil and gas. For the first time, the concentrations of H2S were mapped across the complex. The southern and southwestern wells show relatively high concentrations of H2S, and the northern wells show no (or negligible) amounts of H2S (less than 10 ppm). The migration of H2S into the northern part of the field has serious implications because the crude-handling facilities, or the gas/oil separation plant (GOSP), were designed for sweet crude processing. If migration is proven, the facilities in the northern part of the field must be upgraded to handle sour crude. The results of this study indicate that there is a hot spot of high H2S located in the southwestern part of the complex. There is a significant H2S gradient across the Marjan complex, with H2S decreasing from the southwest to the northeast. H2S concentration profiles also indicate that there is an increase in H2S concentration with time in the hot spot. The data negate the possibility of H2S generation in the Khafji reservoir from either sulfate-reducing bacteria (SRB) or thermochemical sulfate reduction (TSR). Therefore, it is suggested that the H2S is migrating into the Khafji reservoir from somewhere else, probably from the Ratawi or other, deeper reservoirs. The geochemical analyses show that the hydrocarbon composition is uniform across the complex, and there is no evidence for barriers to fluid flow within the Khafji reservoir. It is proposed that the lateral migration of H2S within the reservoir is arrested because of the presence of H2S-scavenging iron minerals. Two hypotheses are proposed for the migration of H2S into the Khafji reservoir from the Ratawi reservoir: (1) through inter-reservoir faults, or (2) through channeling leaks behind well casings. These two hypotheses are discussed in the paper. Introduction The Marjan complex consists of the main Marjan 1 field and the adjoining fields of Marjan 2, Marjan 3, and Marjan 4. Currently, production in the area is limited to the Khafji reservoir, a thick middle Cretaceous sandstone reservoir. There are three GOSPs in the complex processing Arab Medium crude. The northern part of the Marjan complex is sweet, and the southern part is sour. Marjan GOSP-A, located in the northern part of the Marjan complex, was designed for sweet crude. The central and southern GOSPs process sour crude. The concentration of H2S varies in the Marjan complex, with the southern wells showing relatively high concentrations of H2S and the northern wells (GOSP-A) showing no (or negligible) amounts of H2S. There was a concern that H2S may be migrating from south to north. A preliminary reservoir simulation study was conducted to address the issue of H2S migration in Marjan. The study concluded that sour crude was migrating into the northern part of the complex before 1992, when GOSP-A wells were on production. In 1993, GOSPs B and C came on stream, and the migration rate was reduced considerably.
The present study was initiated to investigate the migration of H2S in the Marjan complex. The main objective of the study is to determine the source and migration pathways of H2S into the Khafji reservoir. The research study was split into two parts: (1) a pressure/volume/temperature (PVT) and phase-behavior study to address the variation of H2S across the complex, and (2) a geochemical study to determine the origin of the H2S and the extent of fluid communication within the reservoir. Specific objectives and tasks include: obtain new fluid-composition data and compare them with previous H2S data to ascertain if H2S concentrations have changed with time (and if so, to what extent) across the complex; map the variation in H2S concentrations across the complex; determine whether there is H2S migration in the complex and determine the most probable mechanism causing this migration; determine whether any reservoir compartments exist (such compartments may either protect sweet crude from H2S migration or provide a trapping mechanism for sour crude pockets); determine the origin of H2S through sulfur-isotope analysis; compare oils obtained in the early 1980s with oils from the same wells obtained currently to identify possible production-related migration; and provide input data for a reservoir simulation model with H2S-tracking capability. H2S Measurement and Mapping Fluid Sampling. The objective of fluid sampling is to collect representative oil and gas samples. This is a critical operation because all the laboratory measurements depend on the success of the sampling program. Pressurized oil and gas samples were used for compositional analysis and for sulfur-isotope analysis of the H2S. Depressurized oil samples were used for geochemical characterization. A fieldwide wellhead gas-sampling program was also undertaken to determine the concentration of H2S in the wellhead samples. In this program, gas samples were collected from the wellhead, and H2S concentrations were measured at the wellsite. All wellhead gas samples were collected from sampling points located downstream of the wellhead chokes (Fig. 1). More than 120 wells were sampled and analyzed. H2S Measuring Methodology. Analyzing H2S in the reservoir fluid or wellstream is an elaborate task. All measurements reported in this study were made on the gas stream and then converted to reservoir-fluid concentration with a PVT simulator (described later). For a given reservoir fluid composition, the concentration of H2S in the gas stream is a function of its pressure and temperature. Wellhead Gas Samples. Different methods are used to measure the concentration of H2S in gas mixtures. The method chosen depends on the H2S concentration in the sampled fluid. The Tutwiler method1 is used at high concentrations (>1,000 ppm), the length-of-stain detector tube method2 (Drager and Gastec) is used over a wide range from very low (1 ppm range) to high concentrations (up to 40%), and the Gas Monitor3 is used at low concentrations (<1,000 ppm). The Tutwiler method is generally considered more accurate at high concentration levels (>5,000 ppm).
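The gas-stream-to-reservoir-fluid conversion described above is, at its core, a mole balance across the separator, with the phase split supplied by the PVT simulator. A toy sketch under that simplification, with hypothetical numbers:

```python
# Toy mole-balance sketch (the study uses a full PVT simulator for this step):
# convert an H2S concentration measured in separator gas to an approximate
# concentration in the total wellstream. All numbers are hypothetical.
y_h2s_gas = 1500e-6        # mole fraction of H2S in the separator gas (1,500 ppm)
x_h2s_oil = 200e-6         # mole fraction left in the separator liquid (from PVT)
gas_mole_fraction = 0.55   # moles of separator gas per mole of wellstream (from PVT)

z_h2s = y_h2s_gas * gas_mole_fraction + x_h2s_oil * (1 - gas_mole_fraction)
print(f"H2S in total wellstream: {z_h2s * 1e6:.0f} ppm (mol)")
```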
APA, Harvard, Vancouver, ISO, and other styles
28

Meisingset, K. K. "Uncertainties in Reservoir Fluid Description for Reservoir Modeling." SPE Reservoir Evaluation & Engineering 2, no. 05 (October 1, 1999): 431–35. http://dx.doi.org/10.2118/57886-pa.

Full text of the source
Abstract:
Summary The objective of the present paper is to communicate the basic knowledge needed for estimating the uncertainty in reservoir fluid parameters for prospects, discoveries, and producing oil and gas/condensate fields. Uncertainties associated with laboratory analysis, fluid sampling, process description, and variations over the reservoirs are discussed, based on experience from the North Sea. Introduction Reliable prediction of the oil and gas production is essential for the optimization of development plans for offshore oil and gas reservoirs. Because large investments have to be made early in the life of the fields, the uncertainty in the in-place volumes and production profiles may have a direct impact on important economical decisions. The uncertainties in the description of reservoir fluid composition and properties contribute to the total uncertainty in the reservoir description, and are of special importance for the optimization of the processing capacities of oil and gas, as well as for planning the transport and marketing of the products from the field. Rules of thumb for estimating the uncertainties in the reservoir fluid description, based on field experience, may therefore be of significant value for the petroleum industry. The discussion in the present paper is based on experience from the fields and discoveries where Statoil is an operator or partner, including almost all fields on the Norwegian Continental Shelf,1,2 and all types of reservoir oils and gas condensates except heavy oils with stock-tank oil densities above 940 kg/m3 (below 20° API). Fluid Parameters in the Reservoir Model The following parameters are used to describe the reservoir fluid in a "black oil" reservoir simulation model: densities at standard conditions of stabilized oil, condensate, gas, and water; viscosity (μO), oil formation volume factor (BO), and gas-oil ratio (RS) of reservoir oil; viscosity (μG), gas formation volume factor (BG), and condensate/gas ratio (RSG) of reservoir gas; viscosity (μW), formation volume factor (BW), and compressibility of formation water; and saturation pressures: bubblepoint for reservoir oil, dew point for reservoir gas. The actual input is usually slightly more complex, with saturation pressure given as a function of depth, with RS and RSG defined as a function of saturation pressure, and with oil and gas viscosities and formation volume factors given as a function of reservoir pressure for a range of saturation pressure values. However, minor changes in saturation pressure versus depth are usually neglected, and the oil dissolved in the reservoir gas can also be neglected (RSG=0) when the solubility is small. Uncertainties in the modeling of other fluid parameters (interfacial tension may for instance be of importance, because of its effect on the capillary pressure), or compositional effects like revaporization of oil into injection gas, are not discussed here. Uncertainties in viscosity, formation volume factor and compressibility of formation water, and density of gas at standard conditions, are judged to be of minor importance for the total uncertainties in the reservoir model. The uncertainty in the salinity of the formation water is discussed here instead, because it is used for calculations of water resistivity for log interpretation, and therefore, affects the estimates of initial water saturation in the reservoir.
In a compositional reservoir simulation model, the composition of reservoir oil and gas (with, typically, 4 to 10 pseudocomponents) is given as a function of depth, while phase equilibria and fluid properties are calculated by use of an equation of state. However, the uncertainties in the fluid description can be described in approximately the same way as for a "black oil" model. Quantified uncertainty ranges in the present paper are coarse estimates, aiming at covering 80% of the probability range for each parameter (estimated value plus/minus an uncertainty estimate defining the range between the 10% and 90% probability values3). Prospect Evaluation Assessments of the uncertainties in the reservoir description, as a basis for economic evaluation, are made in all phases of exploration and production. Of course, the complexity in the fluid description increases strongly from prospect evaluation through the exploration phase and further into the production phase, but the main fluid parameters in the reservoir model are the same. The prediction of fluid parameters in the prospect evaluation phase, before the first well has been drilled, is based on reservoir fluid data from nearby discoveries, information about source rocks and migration, and empirical correlations. The uncertainties vary strongly from prospect to prospect. The probability as a function of volume for the presence of reservoir oil and gas is usually the most important fluid parameter. The probability for predicting the correct hydrocarbon phase varies from 50% (equal probability for reservoir oil and gas) to 90% (in regions where either oil or gas reservoirs are strongly dominating, or when the reservoir fluid can be expected to be the same as in another nearby discovery). For formation volume factors, gas/liquid ratios, viscosities, and densities, an estimate for the most probable value as well as for a high and low possible value is commonly given. The range between the high and low value is often designed to include 80% of the probability range for the parameter, but accurate uncertainty estimates can seldom be made. The ratio of the high and low value is, typically, 1.5 to 50 for RSG, 1.1 to 1.5 for BG, 1.1 to 2.5 for μG, 1.2 to 3 for RS, 1.1 to 2 for BO, 1.5 to 5 for μO, and 1.03 to 1.1 for densities of stabilized oil and condensate. From Discovery to Production After a discovery has been made, the fluid description is based on laboratory analyses of reservoir fluid samples from drill-stem tests, production tests, and wireline sampling (RFT, FMT, MDT) in exploration and production wells. Pressure gradients in the reservoirs from measurements during wireline and drill-stem tests, analysis of residual hydrocarbons in core material from various depths, measurements of gas/oil ratio during drill-stem and production tests, and measurements of product streams from the field, give important supplementary information.
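The 80% probability-range convention (low and high estimates bracketing the 10% and 90% percentiles) can be made concrete with a few lines of code; the lognormal spread assumed below is purely illustrative.

```python
# Small sketch of the P10/P90 range convention described above; the lognormal
# distribution and its parameters are assumptions for illustration only.
from scipy import stats

best_estimate = 1.40   # e.g., an oil formation volume factor BO (hypothetical)
sigma_ln = 0.15        # lognormal shape parameter (assumed)

dist = stats.lognorm(s=sigma_ln, scale=best_estimate)
p10, p90 = dist.ppf(0.10), dist.ppf(0.90)
print(f"P10 = {p10:.3f}, P90 = {p90:.3f}, high/low ratio = {p90 / p10:.2f}")
```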
APA, Harvard, Vancouver, ISO, and other styles
29

Li, Tianling, Bin He, and Yangyang Zheng. "Research and Implementation of High Computational Power for Training and Inference of Convolutional Neural Networks." Applied Sciences 13, no. 2 (January 11, 2023): 1003. http://dx.doi.org/10.3390/app13021003.

Full text of the source
Abstract:
Algorithms and computing power have consistently been the two driving forces behind the development of artificial intelligence. The computational power of a platform has a significant impact on the implementation cost, performance, power consumption, and flexibility of an algorithm. Currently, AI models are mainly trained on high-performance GPU platforms, and their inference can be implemented using GPUs, CPUs, or FPGAs. On the one hand, due to its high power consumption and extreme cost, the GPU is not suitable for power- and cost-sensitive application scenarios. On the other hand, because neural network training and inference use different computing platforms, the neural network model's data must be transferred between platforms with varying computing power, which degrades the data-processing capability, real-time performance, and flexibility of the network. This paper focuses on a high-computing-power method for integrating convolutional neural network (CNN) training and inference, and proposes implementing the CNN training and inference process on high-performance heterogeneous architecture (HA) devices with a field-programmable gate array (FPGA) as the core. The numerous repeated multiply-accumulate operations in CNN training and inference are implemented in programmable logic (PL), which significantly improves the speed of CNN training and inference and reduces overall power consumption, thus providing a modern implementation method for neural networks in application fields that are sensitive to power, cost, and footprint. First, based on the data stream containing the CNN training and inference process, this study investigates methods to merge the training and inference data streams. Second, a high-level language was used to describe the merged data-stream structure, the high-level description was converted to a hardware register transfer level (RTL) description by a high-level synthesis (HLS) tool, and an intellectual property (IP) core was generated. The processing system (PS) was used for overall control, data preprocessing, and result analysis, and was connected to the IP core via an on-chip AXI bus interface in the HA device. Finally, the integrated implementation method was tested and validated on a Xilinx HA device using the MNIST handwritten digit validation set. According to the test results, compared with a GPU, the model trained in the HA device's PL achieves the same convergence rate in only 78.04% of the training time. With processing times of only 3.31 ms and 0.65 ms for a single frame image, an average recognition accuracy of 95.697%, and an overall power consumption of only 3.22 W at 100 MHz, the two convolutional neural networks described in this paper are suitable for deployment in lightweight domains with limited power budgets.
APA, Harvard, Vancouver, ISO, and other styles
30

Zhang, Zhi-Gang, Yan-Bao Liu, Hai-Tao Sun, Wei Xiong, Kai Shen, and Quan-Bin Ba. "An alternative approach to match field production data from unconventional gas-bearing systems." Petroleum Science 17, no. 5 (May 18, 2020): 1370–88. http://dx.doi.org/10.1007/s12182-020-00454-w.

Full text of the source
Abstract:
Abstract Nowadays, the unconventional gas-bearing system plays an increasingly important role in the energy market. The performance of current history-matching techniques is not satisfactory when applied to such systems. To overcome this shortfall, an alternative approach was developed and applied to investigate production data from an unconventional gas-bearing system. In this approach, the fluid flow curve obtained from the field is represented as the superposition of a series of Gaussian functions. An automatic computing program was developed in MATLAB, and both gas and water field data collected from a vertical well in the Linxing Block, Ordos Basin, were used to demonstrate the data-processing technique. In the reservoir study, the automatic computing program was applied to match production data from a single coal seam, multiple coal seams, and multiple vertically stacked reservoirs, with favourable fitting results. Compared with previous approaches, the proposed approach yields better results for both gas and water production data and can calculate the contributions from different reservoirs. The start time of extraction for each gas-containing unit can also be determined. The new approach can be applied to field-data prediction and to the design of well locations and patterns at the reservoir scale.
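The central idea, representing a field rate curve as a superposition of Gaussian functions and fitting their parameters, can be sketched with SciPy's curve_fit; the two-unit setup, noise level, and initial guesses below are synthetic assumptions, not the Linxing Block data.

```python
# Minimal sketch of fitting a production curve as a sum of Gaussians;
# all data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sum(t, *params):
    """Sum of Gaussians: params = (a1, mu1, s1, a2, mu2, s2, ...)."""
    y = np.zeros_like(t)
    for a, mu, s in zip(params[0::3], params[1::3], params[2::3]):
        y += a * np.exp(-((t - mu) ** 2) / (2 * s ** 2))
    return y

t = np.linspace(0, 1000, 500)                       # production time, days
true = gaussian_sum(t, 80, 200, 60, 40, 600, 150)   # two gas-bearing units
rng = np.random.default_rng(3)
q_obs = true + rng.normal(scale=2.0, size=t.size)   # noisy "field" rate data

p0 = [60, 150, 50, 30, 500, 100]                    # one (a, mu, s) triple per unit
popt, _ = curve_fit(gaussian_sum, t, q_obs, p0=p0)
# Each fitted row gives one unit's contribution, timing, and duration.
print(popt.reshape(-1, 3))
```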
APA, Harvard, Vancouver, ISO, and other styles
31

Lin, Whei-Min, and Chien-Hsien Wu. "Fast Support Vector Machine for Power Quality Disturbance Classification." Applied Sciences 12, no. 22 (November 16, 2022): 11649. http://dx.doi.org/10.3390/app122211649.

Full text of the source
Abstract:
The power quality disturbance (PQD) problem involves voltage swells, voltage sags, power interruptions, harmonics, and complex events combining multiple PQD problems. The PQD problem has attracted considerable attention from utilities, especially as renewable energy reaches higher penetration levels. PQD problems can degrade service quality, causing malfunctions and instabilities. This paper proposes a simplified SVM technique to identify PQD problems, including the classification of multiple simultaneous PQDs. With the simple structure proposed, the methodology requires far less training data and memory space and saves computing time. An IEEE 14-bus power system was used to demonstrate the performance. Many tests were conducted, and the method was compared with an artificial neural network (ANN). Simulation results showed the shortened processing time and the effectiveness of the proposed approach.
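A minimal sketch of SVM-based disturbance classification follows; the eight-dimensional feature vectors and four class labels are simulated placeholders, since the paper's actual PQD features are not reproduced here.

```python
# Hedged sketch of SVM-based PQD classification; features are simulated
# stand-ins for whatever disturbance features the paper extracts.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
n = 600
X = rng.normal(size=(n, 8))        # 8 disturbance features per event (assumed)
y = rng.integers(0, 4, size=n)     # 0=sag, 1=swell, 2=interruption, 3=harmonics
X[np.arange(n), y] += 2.0          # make the synthetic classes separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```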
APA, Harvard, Vancouver, ISO, and other styles
32

Lingala, Manjula, Phani Chandrasekhar Nelapatla, and Kusumita Arora. "Evaluating the Effect of Noise from Traffic on HYB Magnetic Observatory Data during COVID-19 Lockdown." Applied Sciences 12, no. 5 (March 7, 2022): 2730. http://dx.doi.org/10.3390/app12052730.

Full text of the source
Abstract:
Continuous time series data from geomagnetic observatories are increasingly contaminated by anthropogenic noise related to developing socio-economic activities. More and more sophisticated techniques of data processing are used to eliminate this noise; nonetheless, some of it cannot be removed. The main sources of noise in the Hyderabad (HYB) data are vehicular traffic, power lines and a power station, 500 m to 1 km away. During the nationwide COVID-19 pandemic lockdown from 24 March to 17 May 2020, both road and metro rail traffic came to a complete halt. The data from this time interval give us an opportunity to evaluate the effects of the absence of traffic-generated noise sources. We found noticeable differences in the noise levels present in vector and scalar variation data, due to the vehicular noise observed before and during the lockdown periods. Noise spectrum estimates quantify the reduction in the noise levels during this period. We also noticed decreased scatter in absolute values of the H (horizontal), D (declination), Z (vertical) and I (inclination) components of the geomagnetic field during lockdown. The details of increased data quality in the absence of traffic-generated noise sources are discussed.
APA, Harvard, Vancouver, ISO, and other styles
33

Kuo, Wen-Chi, Chiun-Hsun Chen, Shih-Hong Hua, and Chi-Chuan Wang. "Assessment of Different Deep Learning Methods of Power Generation Forecasting for Solar PV System." Applied Sciences 12, no. 15 (July 27, 2022): 7529. http://dx.doi.org/10.3390/app12157529.

Full text of the source
Abstract:
An increase in renewable energy injected into the power system directly causes fluctuations in the overall voltage and frequency of the power system. Thus, renewable energy prediction accuracy becomes vital to maintaining good power-dispatch efficiency and power-grid operation security. This article compares the one-day-ahead PV power forecasting results of three models paired with three groups of weather data. Since the quantity, loss, and matching problems of weather data all influence the training results of a model, a data pre-processing framework is proposed in this study to solve these problems. The models used are a deep-learning-based artificial neural network (ANN), long short-term memory (LSTM), and gated recurrent unit (GRU). The weather data groups are Central Weather Bureau (CWB) data, local weather station (LWS) data, and hybrid data (the combination of CWB and LWS data). Compared to the other two groups, hybrid data showed a 5–8% improvement in forecasting accuracy. In addition, under varying weather conditions, the advantages of the LSTM model were highlighted. After further analysis, the LSTM model combined with hybrid data produced the most accurate forecasts, as demonstrated by forecasting results over one month. Finally, the results indicate that when the amount of data is limited, using hybrid data and the five weather features is helpful for training the model. Accordingly, the proposed model delivers better one-day-ahead PV forecasting.
APA, Harvard, Vancouver, ISO, and other styles
34

Nes, O. M., R. M. Holt, and E. Fjær. "The Reliability of Core Data as Input to Seismic Reservoir Monitoring Studies." SPE Reservoir Evaluation & Engineering 5, no. 01 (February 1, 2002): 79–86. http://dx.doi.org/10.2118/76641-pa.

Full text of the source
Abstract:
Summary There is a potential for improving the reliability of standard core tests for seismic monitoring studies. A primary concern is the ability to quantify and correct for core-damage effects, which significantly enhance the stress dependency of wave velocities. This problem is most relevant for relatively low-strength rocks cored in high-stress environments. We have used synthetic sandstones formed under stress to perform a systematic study of stress-release-induced core-damage effects. The results show that careful laboratory procedures and modeling efforts may reduce core-damage effects. However, no simple procedure is currently available to eliminate the problem. The use of simplified laboratory test procedures, particularly the application of an inappropriate effective stress principle, may lead to erroneous interpretations. Introduction Time-lapse (4D) seismic provides a potentially powerful tool to identify changes in a reservoir induced during production. This is accomplished by running repeated seismic surveys throughout the production period and looking for changes in the seismic response. Such changes can, in principle, be ascribed to several parameters, the most obvious being fluid saturation, pore pressure, and temperature.1,2 Thus, by monitoring the reservoir at various timesteps during an enhanced oil recovery operation such as a water injection, one may identify nonflooded compartments within the reservoir. This information permits subsequent positioning of new production and injection wells or modification of the existing depletion strategy in a way that significantly improves the total recovery of the reservoir. During the past 5 years or so, the number of commercial 4D seismic surveys has increased from fewer than 5 to approximately 25 per year. The cost of a reservoir monitoring project is in many places comparable to that of drilling a new well, and benefits have, in many cases, proven so large that most companies now consider it a natural part of reservoir management. There are, however, a number of factors that influence the success of such surveys, be they related to the reservoir itself in terms of depth, stress, temperature, and structural and compositional complexity, or to such intrinsic reservoir properties as the rock and fluid properties at the given reservoir conditions. The success is also affected by the quality of the seismic acquisition parameters during the surveys, such as the degree of repeatability between subsequent surveys,3 as well as the final processing of the seismic data (see Lumley et al.4 for a technical risk summary). Because of this substantial variability, one should always perform a seismic monitoring feasibility study in advance to quantify the extent to which expected production-induced changes may be detectable from a planned seismic monitoring study. Such a study needs integrated input from a number of disciplines; after a proper reservoir model is built, reservoir simulations must be undertaken to produce relevant scenarios to be expected throughout production. Thereafter, these must be translated into corresponding seismic parameters from rock physical principles before, finally, seismic modeling can be undertaken for various acquisition geometries and subsequent processing alternatives can be tested.
Traditionally, seismic monitoring parameters have been deduced from post-stack data through changes in the vertical P-wave reflection coefficient, expressed by the corresponding acoustic impedance ZP = ρ·VP, where VP is the acoustic P-wave velocity and ρ is the density. This, essentially, has allowed for inversion for only one effective reservoir parameter. Knowing that there may be concurrent changes in several parameters has made the interpretation of the seismics difficult. More recently, however, a practical use of amplitude-vs.-offset (AVO) data has been introduced5 that enables the determination of the corresponding shear-wave impedance ZS. This simultaneous determination of P- and S-wave impedances has allowed for distinction between changes in multiple reservoir properties such as saturation and pore pressure, assuming that other parameters remain constant. A crucial point in the initial feasibility study, as well as in the final interpretation of deduced changes in seismic parameters during monitoring, is the quantitative rock physical interpretation of the seismic parameters in terms of changes in reservoir parameters. A number of factors affect the acoustic velocities in a complicated manner, and no theory exists that can be applied generally. Therefore, laboratory testing on core material at representative test conditions is required as a natural part of a feasibility study to quantify the effects of pore pressure, saturation, and temperature that can be encountered during monitoring. The objective of this paper is to elucidate some fundamental questions related to these key issues. In particular, we focus on the neglected effect of core damage upon the laboratory-measured stress sensitivity of velocities6 and the importance of using proper stress conditions during such experiments. We handle this by performing systematic laboratory measurements on synthetic reservoir sandstones formed under stress, and we try to tune the properties of the synthetics to match specific reservoir sandstones. Even if this procedure is not fully representative of all reservoir sandstones, our experience is that it may at least be applicable for weakly cemented, clean sandstone reservoirs. Furthermore, we also illustrate pitfalls in the common use of the so-called effective stress principle. Fundamental Questions As in all core testing, one has to deal with two fundamental questions when running experiments to quantify effects of pore pressure, saturation, and temperature on acoustic velocities: Are the cores representative of the reservoir rock? Are the tests performed under the appropriate conditions for prediction of in-situ behavior? Core Representativity. The first question has two different aspects. First, the small core may not be representative of a large heterogeneous reservoir. The most obvious way to deal with this is to test many cores and then perform some kind of statistical analysis on the acquired data. Still, the reservoir may contain fractures and faults at subseismic length scales, which are not present in the core samples but contribute to seismic velocities. The second aspect to consider is core damage: a rock is "born" and "lives all its life" in a stressed earth. When drilled and brought to the surface, it meets the hostile world of atmospheric conditions. The stress release may be sufficient to induce microcracks or broken grain bonds in the rock core, leading to altered rock properties.
The damage is permanent and has been shown to have strong effects on rock mechanical and acoustic parameters.7 In the present paper, we discuss in more detail how core damage affects the predicted stress sensitivity of the seismic velocity.
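The impedance quantities at the center of the monitoring discussion are simple products of density and velocity; a short worked example with hypothetical rock values:

```python
# Worked illustration of the impedance definitions named above: P- and S-wave
# impedances from density and velocities. All values are hypothetical.
rho = 2300.0              # bulk density, kg/m^3
vp, vs = 3200.0, 1800.0   # P- and S-wave velocities, m/s

z_p = rho * vp            # P-wave acoustic impedance, ZP
z_s = rho * vs            # S-wave impedance, ZS, accessible via AVO inversion
print(f"ZP = {z_p:.3e} kg/(m^2*s), ZS = {z_s:.3e} kg/(m^2*s)")
```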
APA, Harvard, Vancouver, ISO, and other styles
35

Režek Jambrak, Anet, Marinela Nutrizio, Ilija Djekić, Sanda Pleslić, and Farid Chemat. "Internet of Nonthermal Food Processing Technologies (IoNTP): Food Industry 4.0 and Sustainability." Applied Sciences 11, no. 2 (January 12, 2021): 686. http://dx.doi.org/10.3390/app11020686.

Full text of the source
Abstract:
With the introduction of Industry 4.0, and accordingly of smart factories, there are new opportunities to implement elements of Industry 4.0 in nonthermal processing. Moreover, with the application of the Internet of Things (IoT), smart process control, big-data optimization, and sustainable production and monitoring, a new era of the Internet of nonthermal food processing technologies (IoNTP) is emerging. Nonthermal technologies include high-power ultrasound, pulsed electric fields, high-voltage electrical discharge, high-pressure processing, UV-LED, pulsed light, and e-beam, while advanced thermal food processing techniques include microwave processing, ohmic heating, and high-pressure homogenization. The aim of this review was to highlight the need to evaluate the possibilities of implementing smart sensors, artificial intelligence (AI), big data, and additive technologies together with nonthermal technologies, with the possibility of creating smart factories and a strong emphasis on sustainability. This paper provides an overview of digitalization, IoT, additive technologies (3D printing), cloud data storage, and smart sensors, including two SWOT analyses associated with IoNTPs and sustainability. It is of high importance to perform life cycle assessment (LCA) to quantify the environmental (En), social (So), and economic (Ec) dimensions. The SWOT analyses showed: potential for energy saving during food processing; optimized overall environmental performance; lower manufacturing cost; development of eco-friendly products; a higher level of health and safety during food processing; and better working conditions for workers. Nonthermal and advanced thermal technologies can also be applied as sustainable techniques working in line with the sustainable development goals (SDGs) and Agenda 2030 issued by the United Nations (UN).
APA, Harvard, Vancouver, ISO, and other styles
36

Chan, Yee Kit, Yung Chong Lee, and Voon Chet Koo. "Design and Implementation of Synthetic Aperture Radar (SAR) Field-Programmable Gate Array (FPGA)-Based Processor." Applied Sciences 12, no. 4 (February 10, 2022): 1808. http://dx.doi.org/10.3390/app12041808.

Full text of the source
Abstract:
Synthetic aperture radar (SAR) is a unique imaging radar system that is capable of obtaining high-resolution images by using signal-processing techniques while operating in all weather and in the absence of a light source. The potential of SAR in a wide range of applications has led to new challenges in digital SAR processor design. On-board storage of SAR raw data is often not practical for real-time applications. The design of digital SAR processors is always restricted by the available space of the carrier system, data transfer rate, payload capacity and on-board power supplies. As reported in the literature, although customized hardware solutions could offer the desired performance, they are not feasible for low-volume production. This research aims to design and develop an efficient digital SAR processor by using field-programmable gate array (FPGA) with the consideration of hardware resources, processing speed and precision. In this paper, a hardware implementation of an FPGA-based SAR processor is presented. The implementation and architecture of the proposed SAR processor are highlighted in this paper. A MATLAB-based SAR processing range-Doppler algorithm (RDA) was developed as the benchmark for the development of an SAR processor. The target device, Altera Stratix IV GX FPGA EP4SGX230KF40C2, was selected for the design and implementation of an FPGA-based SAR processor. Comprehensive evaluations of the performance of the proposed SAR processor in terms of precision, timing performance and hardware resource utilizations are also presented. The proposed FPGA-based digital SAR processor achieves optimum performance in processing SAR signals for image formation. Evaluation shows that the designed SAR processor is capable of processing SAR images with ±1% difference error as compared to SAR images processed by MATLAB. The results also show a reduction in hardware usage via the implementation of an FPGA-based FFT/IFFT coprocessor. These promising results prove that the performance of the proposed processor is satisfactory and the achieved processing time, as well as the power consumption of the processor, outperformed existing implementations.
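The range-compression step at the heart of the range-Doppler algorithm is a matched filter applied via FFT/IFFT, which is why an FFT/IFFT coprocessor dominates the hardware budget. A small NumPy sketch of that one step, with illustrative chirp parameters:

```python
# Sketch of range compression (matched filtering of a linear FM chirp via
# FFT/IFFT), one step of the range-Doppler algorithm. Parameters are
# illustrative, not those of the paper's processor.
import numpy as np

fs, T, B = 60e6, 10e-6, 30e6                 # sample rate, pulse width, bandwidth
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)          # reference pulse

echo = np.zeros(4096, dtype=complex)
echo[1000:1000 + chirp.size] = 0.7 * chirp             # one point target, delayed
n = echo.size
compressed = np.fft.ifft(np.fft.fft(echo, n) * np.conj(np.fft.fft(chirp, n)))
print(int(np.argmax(np.abs(compressed))))              # peak at range bin 1000
```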
APA, Harvard, Vancouver, ISO, and other styles
37

Hrovatin, Niki, Aleksandar Tošić, Michael Mrissa, and Branko Kavšek. "Privacy-Preserving Data Mining on Blockchain-Based WSNs." Applied Sciences 12, no. 11 (June 2, 2022): 5646. http://dx.doi.org/10.3390/app12115646.

Full text of the source
Abstract:
Currently, the computational power present in the sensors forming a wireless sensor network (WSN) allows for implementing most of the data processing and analysis directly on the sensors in a decentralized way. This shift in paradigm introduces a shift in the privacy and security problems that need to be addressed. While a decentralized implementation avoids the single point of failure problem that typically applies to centralized approaches, it is subject to other threats, such as external monitoring, and new challenges, such as the complexity of providing decentralized implementations for data mining algorithms. In this paper, we present a solution for privacy-aware distributed data mining on wireless sensor networks. Our solution uses a permissioned blockchain to avoid a single point of failure in the system. Contracts are used to construct an onion-like structure encompassing the Hoeffding trees and a route. The onion-routed query conceals the network identity of the sensors from external adversaries, and obfuscates the actual computation to hide it from internally compromised nodes. We validate our solution on a use case related to an air quality-monitoring sensor network. We compare the quality of our model against traditional models to support the feasibility and viability of the solution.
APA, Harvard, Vancouver, ISO, and other styles
38

Danese, Alberto, Bendik Nybakk Torsæter, Andreas Sumper, and Michele Garau. "Planning of High-Power Charging Stations for Electric Vehicles: A Review." Applied Sciences 12, no. 7 (March 22, 2022): 3214. http://dx.doi.org/10.3390/app12073214.

Full text of the source
Abstract:
Electrification of mobility is paving the way toward decreasing emissions from the transport sector; nevertheless, to achieve a more sustainable and inclusive transport system, effective and long-term planning of electric vehicle charging infrastructure will be crucial. Developing an infrastructure that supports the substitution of the internal combustion engine and societal needs is no easy feat; different modes of transport and networks require specific analyses to match the requirements of the users and the capabilities of the power grid. In order to outline best practices and guidelines for a cost-effective and holistic charging infrastructure planning process, the authors have evaluated all the aspects and factors along the charging infrastructure planning cycle, analysing different methodological approaches from the scientific literature over the last few years. The review first addresses target identification (including transport networks, modes of transport, charging technologies implemented, and candidate sites); second, the data acquisition process (detailing data types, sources, and data processing); and finally, modelling, allocation, and sizing methodologies. The investigation results in a decision support tool to plan high-power charging infrastructure for electric vehicles, taking into account the interests of all the stakeholders involved in the infrastructure investment and the mobility value chain (distributed system operators, final users, and service providers).
APA, Harvard, Vancouver, ISO, and other styles
39

Sousa, João C., and Hermano Bernardo. "Benchmarking of Load Forecasting Methods Using Residential Smart Meter Data." Applied Sciences 12, no. 19 (September 30, 2022): 9844. http://dx.doi.org/10.3390/app12199844.

Full text of the source
Abstract:
As access to the consumption data available in household smart meters is now very common in several developed countries, this kind of information is assuming a providential role for different players in the energy sector. The proposed study was applied to data from the Smart Meter Energy Consumption Data in London Households dataset, provided by UK Power Networks, containing half-hourly readings from an original sample of 5567 households (71 households were carefully selected after a justified filtering process). The main aim is to forecast the day-ahead load profile based only on previous load values and some auxiliary variables. During this research, different forecasting models are applied, tested, and compared to allow comprehensive analyses integrating forecasting accuracy, processing times, and the interpretation of the most influential features in each case. The selected models are based on Multivariate Adaptive Regression Splines, Random Forests, and Artificial Neural Networks, and the accuracies resulting from each model are compared against a baseline (Naïve model). The different forecasting approaches evaluated proved effective, ensuring a mean reduction of 15% in Mean Absolute Error when compared to the baseline. Artificial Neural Networks proved to be the most accurate model for the majority of the residential consumers.
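The headline comparison, a model's Mean Absolute Error against a naive previous-day baseline, can be sketched in a few lines; the half-hourly series and the "model" output below are simulated stand-ins for the smart-meter data and trained forecasters.

```python
# Minimal sketch of the MAE-vs-naive-baseline comparison; data are simulated.
import numpy as np

rng = np.random.default_rng(5)
n = 48 * 7   # one week of half-hourly readings
actual = 1.0 + 0.5 * np.sin(np.linspace(0, 14 * np.pi, n)) + rng.normal(scale=0.05, size=n)
naive = np.roll(actual, 48)   # previous day's profile as the Naive forecast
model = actual + rng.normal(scale=0.03, size=n)   # stand-in for MARS/RF/ANN output

def mae(y_hat):
    return np.mean(np.abs(actual - y_hat))

print(f"naive MAE = {mae(naive):.3f}, model MAE = {mae(model):.3f}, "
      f"reduction = {100 * (1 - mae(model) / mae(naive)):.1f}%")
```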
APA, Harvard, Vancouver, ISO, and other styles
40

Farooqi, Abdul Majid, M. Afshar Alam, Syed Imtiyaz Hassan, and Sheikh Mohammad Idrees. "A Fog Computing Model for VANET to Reduce Latency and Delay Using 5G Network in Smart City Transportation." Applied Sciences 12, no. 4 (February 17, 2022): 2083. http://dx.doi.org/10.3390/app12042083.

Full text of the source
Abstract:
Connected vehicles, a vital part of smart cities, connect over wireless links and bring mobile computation and communication abilities. As a mediator, fog computing resides between vehicles and the cloud and provides vehicles with processing, storage, and networking power through vehicular ad hoc networks (VANETs). VANET is a time-sensitive technology in which requests received from vehicles must be processed in as little time as possible. Delay and latency are the notorious issues of VANET and fog computing. To deal with such problems, in this work we developed a priority-based fog computing model for smart urban vehicle transportation that reduces the delay and latency of fog computing. To upgrade the fog computing infrastructure to meet latency and Quality of Service (QoS) requirements, 5G localized Multi-Access Edge Computing (MEC) servers were also used, which reduced the delay and latency substantially. We decreased the data latency by 20% compared to the experiment carried out using only a cloud computing architecture, and we reduced the processing delay by 35% compared with the use of a cloud computing architecture.
APA, Harvard, Vancouver, ISO, and other styles
41

Carroll, Sabrina, Joud Satme, Shadhan Alkharusi, Nikolaos Vitzilaios, Austin Downey, and Dimitris Rizos. "Drone-Based Vibration Monitoring and Assessment of Structures." Applied Sciences 11, no. 18 (September 15, 2021): 8560. http://dx.doi.org/10.3390/app11188560.

Full text of the source
Abstract:
This paper presents a novel method of procuring and processing data for the assessment of civil structures via vibration monitoring. This includes the development of a custom sensor package designed to minimize the size/weight while being fully self-sufficient (i.e., not relying on external power). The developed package is delivered to the structure utilizing a customized Unmanned Aircraft System (UAS), otherwise known as a drone. The sensor package features an electropermanent magnet for securing it to the civil structure while a second magnet is used to secure the package to the drone during flight. The novel B-Spline Impulse Response Function (BIRF) technique was utilized to extract the Dynamic Signature Response (DSR) from the data collected by the sensor package. Experimental results are presented to validate this method and show the feasibility of deploying the sensor package on structures and collecting data valuable for Structural Health Monitoring (SHM) data processing. The advantages and limitations of the proposed techniques are discussed, and recommendations for further developments are made.
APA, Harvard, Vancouver, ISO, and other styles
42

Li, Cheng’en, Yunchao Tang, Xiangjun Zou, Po Zhang, Junqiang Lin, Guoping Lian, and Yaoqiang Pan. "A Novel Agricultural Machinery Intelligent Design System Based on Integrating Image Processing and Knowledge Reasoning." Applied Sciences 12, no. 15 (August 6, 2022): 7900. http://dx.doi.org/10.3390/app12157900.

Full text of the source
Abstract:
Agricultural machinery intelligence is the inevitable direction of agricultural machinery design, and intelligent design systems are important tools in that transition. In this paper, to address the low processing power of traditional agricultural machinery design systems in analyzing data such as fit, tolerance, interchangeability, and the assembly process, and to overcome the disadvantages of the high cost of intelligent design modules, lack of data compatibility, and inconsistency between modules, a novel agricultural machinery intelligent design system integrating image processing and knowledge reasoning is constructed. An image-processing algorithm and trigger are used to detect the feature parameters of key parts of agricultural machinery and to build a virtual prototype. At the same time, a special knowledge base of agricultural machinery is constructed to analyze the test data of the virtual prototype. The results of practical application and of software evaluation by third-party institutions show that the system improves the efficiency of intelligent design for key parts of agricultural machinery by approximately 20%, reduces the operation error rate of personnel by approximately 40% and the consumption of computer resources by approximately 30%, and greatly reduces the purchase cost of intelligent design systems, providing a reference for intelligent design to guide actual production.
43

Choi, Jae Seung, Choong Mo Ryu, Jung Hyun Choi, and Seung Jae Moon. "Improving the Analysis of Sulfur Content and Calorific Values of Blended Coals with Data Processing Methods in Laser-Induced Breakdown Spectroscopy." Applied Sciences 12, no. 23 (December 4, 2022): 12410. http://dx.doi.org/10.3390/app122312410.

Abstract:
In situ monitoring of the calorific value of coal makes it possible to inject the appropriate amount of combustion air immediately, inducing complete combustion and reducing the amount of unburned carbon. High sulfur concentrations cause severe environmental problems such as acid rain. A new, powerful technique for estimating the calorific value and measuring the sulfur concentration of blended coals was therefore studied. Laser-induced breakdown spectroscopy (LIBS) requires no sample preparation. Several blended coals were used in the experiment to replicate actual coal-fired power plant conditions. Two data processing methods well known from near-infrared spectroscopy were adopted to enhance the weak sulfur emission lines. The performance of the partial least squares regression model was assessed with parameters such as the coefficient of determination (R2), relative error, and root mean square error (RMSE). The average RMSE was compared with the results of previous near-infrared spectroscopy studies: the values from this study were smaller by 6.02% for the calibration line and by 4.5% for the validation line. The average RMSE for calorific values was calculated to be less than 1%.
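Partial least squares regression is named explicitly as the calibration model; a minimal scikit-learn sketch of fitting and scoring such a model follows (the spectra and target values are synthetic placeholders, and the component count is an assumption):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 500))   # 80 synthetic LIBS spectra, 500 wavelength channels
# Stand-in calorific value depending on two emission lines plus noise.
y = 2.0 * X[:, 120] + X[:, 340] + rng.normal(scale=0.1, size=80)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5)       # component count is an assumption
pls.fit(X_tr, y_tr)

y_hat = pls.predict(X_te).ravel()
print(f"R2   = {r2_score(y_te, y_hat):.3f}")
print(f"RMSE = {np.sqrt(mean_squared_error(y_te, y_hat)):.3f}")
```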
44

Liang, Lin, Ting Lei, Adam Donald, and Matthew Blyth. "Physics-Driven Machine-Learning-Based Borehole Sonic Interpretation in the Presence of Casing and Drillpipe." SPE Reservoir Evaluation & Engineering 24, no. 02 (February 12, 2021): 310–24. http://dx.doi.org/10.2118/201542-pa.

Abstract:
Summary Interpretation of sonic data acquired by a logging-while-drilling (LWD) tool, or by a wireline tool in cased holes, is complicated by the presence of drillpipe or casing, because these steel pipes can act as a strong waveguide. Traditional solutions, which rely on a frequency bandpass filter or waveform arrival-time separation to filter out the unwanted pipe mode, often fail when formation and pipe signals coexist in the same frequency band or arrival-time range. We hence developed a physics-driven, machine-learning-based method to overcome this challenge. In this method, two synthetic databases are generated from a general root-finding mode-search routine on the basis of two assumed models: one defined as a cemented cased hole for the wireline scenario, and the other as a steel pipe immersed in a fluid-filled borehole for the logging-while-drilling scenario. The synthetic databases are used to train neural network models, which are first used to perform a global sensitivity analysis on all relevant model parameters so that the influence of each parameter on the dipole dispersion data can be well understood. A least-squares inversion scheme using the trained model was developed and tested on synthetic cases; the scheme showed good results, and a reasonable uncertainty estimate was made for each parameter. We then extended the application of the trained model to develop a method for automated labeling and extraction of the dipole flexural dispersion mode from other disturbances. The method combines a clustering technique with the neural-network-model-based inversion and an adaptive filter. Testing on field data demonstrates that the new method is superior to traditional methods because it introduces a mechanism by which the unwanted pipe mode can be physically filtered out. This novel physics-driven machine-learning-based method improves the interpretation of sonic dipole dispersion data, coping with the challenge posed by the presence of steel pipes. Unlike data-driven machine-learning methods, it can provide global service with just one-time offline training. Compared with traditional methods, the new method is more accurate and reliable because the processing is constrained by physical laws. The method is also less dependent on input parameters; hence, a fully automated solution can be achieved.
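The abstract pairs a trained neural-network surrogate of the mode-search routine with least-squares inversion; the sketch below shows this general inversion pattern with scipy, using a deliberately simple toy function in place of the trained network (the parameterization, units, and data are invented for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

freqs = np.linspace(1.0, 8.0, 40)   # kHz, assumed band of the dipole flexural mode

def surrogate_dispersion(params, f):
    """Stand-in for a trained NN mapping model parameters -> phase slowness."""
    s_formation, s_pipe = params
    # Smooth blend from pipe-dominated to formation-dominated slowness (toy physics).
    w = 1.0 / (1.0 + np.exp(-(f - 4.0)))
    return (1 - w) * s_pipe + w * s_formation

true_params = (300.0, 180.0)        # us/ft, synthetic "truth"
noise = np.random.default_rng(1).normal(0, 2, freqs.size)
data = surrogate_dispersion(true_params, freqs) + noise

# Least-squares fit of the surrogate to the measured dispersion curve.
result = least_squares(lambda p: surrogate_dispersion(p, freqs) - data,
                       x0=[250.0, 200.0])
print("inverted slownesses:", np.round(result.x, 1))
```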
45

Cao, Jie, Da Wang, Qi-Ming Wang, Xing-Liang Yuan, Kai Wang, and Chin-Ling Chen. "Network Attack Detection Method of the Cyber-Physical Power System Based on Ensemble Learning." Applied Sciences 12, no. 13 (June 27, 2022): 6498. http://dx.doi.org/10.3390/app12136498.

Abstract:
With the rapid development of power grid informatization, the power system has evolved into a multi-dimensional, heterogeneous, complex system with tight cyber-physical integration, denoted the Cyber-Physical Power System (CPPS). Network attacks, in addition to faults, have become an important factor restricting stable operation of the power system. To improve the operational stability of CPPSs under network attacks, this paper proposes a CPPS network attack detection method based on ensemble learning. First, to overcome the low detection precision caused by insufficient network attack samples, a power data balancing processing method is proposed. Then, a LightGBM ensemble is constructed to detect network attack events and locate the fault points caused by the attack. During gradient boosting, a focal loss is introduced to increase the classifier's attention to misclassified samples, thus improving the network attack detection precision. Finally, an evaluation method for the network attack detection model that jointly considers cyber and physical aspects is proposed, and the stability of the cyber-physical power system under the action of the detection model is quantitatively analyzed. Experimental results show that the F1 score of network attack detection increases by 16.73% and the precision by 15.67%.
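The paper's own balancing method is not described in the abstract; as a generic stand-in, the sketch below combines SMOTE oversampling with a LightGBM classifier on synthetic imbalanced data to show the balance-then-boost workflow (SMOTE is a swapped-in technique, not necessarily the authors' method):

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from lightgbm import LGBMClassifier
from sklearn.metrics import f1_score, precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic, heavily imbalanced stand-in for CPPS measurement data.
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 3] > 2.2).astype(int)   # rare "attack" class (~2%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance classes

clf = LGBMClassifier(n_estimators=200).fit(X_bal, y_bal)
y_hat = clf.predict(X_te)
print(f"F1 = {f1_score(y_te, y_hat):.3f}, "
      f"precision = {precision_score(y_te, y_hat):.3f}")
```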
46

Cespedes, Adolfo Javier Jara, Bramandika Holy Bagas Pangestu, Akitoshi Hanazawa, and Mengu Cho. "Performance Evaluation of Machine Learning Methods for Anomaly Detection in CubeSat Solar Panels." Applied Sciences 12, no. 17 (August 29, 2022): 8634. http://dx.doi.org/10.3390/app12178634.

Abstract:
CubeSat requirements in terms of size, weight, and power restrict the possibility of having redundant systems; consequently, telemetry data are the primary way to verify the status of satellites in operation. The monitoring and interpretation of telemetry parameters rely on the operator's experience, which makes telemetry data analysis less reliable given the data's complexity. This paper presents a Machine Learning (ML) approach to detecting anomalies in solar panel systems. The main challenge, inherited from the CubeSat platform, is the capability to perform onboard inference of the ML model. Several simple yet powerful ML algorithms for anomaly detection are available today; this study investigates five candidate algorithms, considering classification score, execution time, model size, and power consumption in a constrained computational environment. The pre-processing stage introduces a windowed-averaging technique in addition to standardization and principal component analysis. Furthermore, the paper presents the background, bus system, and initial operational data of BIRDS-4, a constellation of three 1U CubeSats released from the International Space Station in March 2021, together with an ML model proposal for future satellite missions.
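The named pre-processing chain, windowed averaging, standardization, and PCA, maps naturally onto a scikit-learn pipeline; the sketch below wires those steps to a placeholder classifier on synthetic telemetry (the window length, component count, and random-forest back end are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def windowed_average(x, window=8):
    """Moving average along the time axis to suppress telemetry noise."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, x)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 128))        # synthetic solar-panel telemetry traces
y = rng.integers(0, 2, size=300)       # placeholder normal/anomaly labels

X_avg = windowed_average(X)                       # 1) windowed averaging
model = make_pipeline(StandardScaler(),           # 2) standardization
                      PCA(n_components=10),       # 3) dimensionality reduction
                      RandomForestClassifier(random_state=0))
model.fit(X_avg, y)
print("train accuracy:", model.score(X_avg, y))
```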
47

Sun, Siqi, Jie Yang, Yun-Hsuan Chen, Jiaqi Miao, and Mohamad Sawan. "EEG Signals Based Internet Addiction Diagnosis Using Convolutional Neural Networks." Applied Sciences 12, no. 13 (June 21, 2022): 6297. http://dx.doi.org/10.3390/app12136297.

Abstract:
Internet addiction (IA), a new and often unrecognized psychosocial disorder, endangers people's health and lives. However, common biometric analysis based on combining EEG signals with questionnaire results is not quantitative, making it difficult to establish a specific biomarker. This work develops a deep learning algorithm, requiring no identified biomarker, for diagnosing IA and evaluating therapy efficacy. A five-layer CNN model combined with a fast Fourier transform is proposed to diagnose IA quantitatively. The algorithm is validated on the Lemon dataset by processing raw data, full spectral power, and alpha-beta-gamma spectral power (related to IA). Compared with alpha-beta-gamma spectral power, results based on full spectral power show better performance (87.59% accuracy, 88.80% sensitivity, and 86.41% specificity), confirming that the proposed algorithm can diagnose IA without biomarkers. The proposed CNN model also shows clear advantages in processing raw data, achieving 81.1% accuracy. These results suggest the method can reduce diagnosis time and could be used in real-time health-monitoring systems. This work provides a quantitative approach to diagnosing IA and evaluating therapy efficacy and, as a general strategy, can be widely applied to other disorders that affect EEG signals, such as psychiatric disorders, substance dependence, and depression.
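The abstract specifies a five-layer CNN fed with FFT-derived spectral power but gives no architectural details; the Keras sketch below is therefore only a plausible arrangement (every layer size, the channel count, and the input shape are assumptions):

```python
import numpy as np
import tensorflow as tf

n_channels, n_freq_bins = 62, 128   # assumed EEG montage and FFT resolution

# Hypothetical five-layer CNN: three conv layers + two dense layers.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_channels, n_freq_bins, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # IA vs. control
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# FFT pre-processing: raw epochs -> per-channel spectral power.
raw = np.random.randn(8, n_channels, 256)             # placeholder EEG epochs
power = np.abs(np.fft.rfft(raw, axis=-1))[..., :n_freq_bins]
model.predict(power[..., np.newaxis])
```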
48

Tomutsa, Liviu, Dmitriy Borisovich Silin, and Velimir Radmilovic. "Analysis of Chalk Petrophysical Properties by Means of Submicron-Scale Pore Imaging and Modeling." SPE Reservoir Evaluation & Engineering 10, no. 03 (June 1, 2007): 285–93. http://dx.doi.org/10.2118/99558-pa.

Abstract:
Summary For many rocks of high economic interest, such as chalk, diatomite, shale, tight gas sands, or coal, submicron-scale resolution is needed to resolve the 3D pore structure that controls the flow and trapping of fluids in the rocks. Such resolution cannot be achieved with existing tomographic technologies. A new 3D imaging method based on serial sectioning, which uses focused-ion-beam (FIB) technology, has been developed. FIB technology allows the milling of layers as thin as 10 nm by using accelerated gallium (Ga+) ions to sputter atoms from the sample surface. After each milling step, as a new surface is exposed, a 2D image of this surface is generated, and the 2D images are stacked to reconstruct the 3D pore structure. Next, the maximum-inscribed-spheres (MIS) image-processing method computes the petrophysical properties by direct morphological analysis of the pore space. The computed capillary pressure curves agree well with laboratory data. Applied to the FIB data, this method generates the fluid distribution in the chalk pore space at various saturations. Introduction Field-scale oil-recovery processes are the result of countless events happening in individual pores. To model multiphase flow in porous media at the pore scale, the resolution of the 3D images must be adequate for the rock of interest. Chalk formations in the oil fields of Texas, the Middle East, the North Sea, and other areas hold significant oil reserves. The extremely small typical pore sizes in chalk impose very high requirements on imaging resolution. In the last decade, X-ray microtomography has been used extensively for direct visualization of the pore system and the fluids within sandstone (Jasti et al. 1993; Coles et al. 1998; Wildenschild et al. 2003; Seright et al. 2003). While this approach is fast and nondestructive, its applicability is limited mostly to micron resolutions, although recent developments are bringing the resolution into the submicron range (Stampanoni et al. 2002). For chalk pore systems, which are characterized by submicron- to nanometer-length scales, 3D stochastic methods based on 2D scanning-electron-microscope (SEM) images of thin sections have been used to reconstruct the pore system (Talukdar et al. 2001). The advent of FIB technology has made it possible to reconstruct submicron 3D pore systems for diatomite and chalk (Tomutsa and Radmilovic 2003) (Fig. 1). FIB technology is used in microelectronics to access individual components with nanoscale accuracy for design verification, failure analysis, and circuit modification (Orloff et al. 2002). FIB has been used in materials science for sectional sample preparation for SEM and for 3D imaging of alloy components (Kubis et al. 2004). In earth sciences, the FIB has also been used for sample preparation for SEM and to access inner regions for performing microanalysis (Heaney et al. 2001). To access the pore structure at the submicron scale, the FIB mills successive layers of the rock material as thin as 10 nm. As successive 2D surfaces are exposed, they are imaged with either the electron or the ion beam. After processing, the images are stacked to reconstruct the 3D pore structure. The geometry of the pore space of the obtained structure can be analyzed further to estimate petrophysical rock properties through computer simulations. To analyze the 3D chalk images obtained by the FIB method, we applied the MIS technique (Hazlett 1995; Silin et al. 2003, 2004; Silin and Patzek 2006).
The MIS method analyzes the 3D pore-space image directly, without constructing pore networks. It bypasses the nontrivial task of extracting a simple but representative network of pore throats linking pore bodies from the 3D data (Lindquist 2002). Moreover, pore-network extraction methods, which are based on the relatively simple grain and pore shapes found in sandstones (Øren and Bakke 2002), may not always be feasible for the complex pore structures of carbonates. Although the pore-network-based flow-modeling approach has enjoyed significant interest from researchers and produced theoretically and practically sound conclusions (Øren et al. 1998; Xu et al. 1999; Patzek 2001; Blunt 2001), we believe that direct pore-space analysis deserves more attention. In addition, direct analysis of the pore space provides an opportunity to study alteration of the rock flow properties (e.g., those resulting from mechanical transformations or mineralization) (Jin et al. 2003).
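The core primitive behind maximum-inscribed-spheres analysis, the radius of the largest sphere that fits at each pore voxel, can be approximated with a Euclidean distance transform; the sketch below applies it to a synthetic binary volume standing in for a FIB-reconstructed pore space (a simplified illustration, not the full MIS algorithm):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
# Synthetic binary pore-space image (True = pore), standing in for FIB data.
pores = ndimage.gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=2) > 0.02

# Radius of the largest sphere inscribed at each pore voxel.
radii = ndimage.distance_transform_edt(pores)

porosity = pores.mean()
print(f"porosity ~ {porosity:.2f}")
# Pore-size distribution proxy: histogram of local inscribed-sphere radii.
hist, edges = np.histogram(radii[pores], bins=10)
print("radius histogram:", hist)
```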
49

Ye, Qing, Huafeng Sun, Zhiqiang Jin, and Bing Wang. "Study on Shear Velocity Profile Inversion Using an Improved High Frequency Constrained Algorithm." Energies 16, no. 1 (December 21, 2022): 59. http://dx.doi.org/10.3390/en16010059.

Abstract:
The shear-wave (S-wave) velocity of the formation around a borehole is of great importance for evaluating borehole stability, detecting fluid invasion, and selecting perforation positions. Dipole acoustic logging is an effective method for determining the radial S-wave velocity profile around the borehole. Current inversion methods for the radial S-wave profile are based mainly on the impact of radial velocity changes in the formation outside the borehole on the dispersion characteristics of dipole waveforms, without accounting for the impact of the acoustic tool on the dispersion curves; this strongly degrades inversion accuracy in practical data-processing applications. This paper proposes a novel inversion algorithm that introduces equivalent-tool theory into the constrained inversion of the shear-velocity radial profile to obtain the radial S-wave slowness profile. In equivalent-tool theory, the acoustic tool is modeled by two parameters, radius and elastic modulus. The tool's impact on the dispersion of the dipole waveform is first eliminated using equivalent-tool theory; the corrected dispersion curve is then used in the constrained inversion. Results on simulated data and real logging data demonstrate the validity of the proposed algorithm.
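The constrained-inversion step can be illustrated generically with Tikhonov-regularized least squares, which recovers a smooth radial slowness profile from averaged measurements; the forward operator and profile below are toy stand-ins, not the paper's dispersion model:

```python
import numpy as np

n = 30                                   # radial grid points
# Toy profile: altered (slower) zone near the borehole wall, us/ft.
true_slowness = np.where(np.arange(n) < 10, 320.0, 260.0)

# Toy forward operator: each "measurement" averages the profile out to depth i.
G = np.tril(np.ones((n, n))) / np.arange(1, n + 1)[:, None]
rng = np.random.default_rng(0)
d = G @ true_slowness + rng.normal(0, 1.0, n)

# Tikhonov-regularized least squares: minimize ||G m - d||^2 + mu ||L m||^2,
# with L a first-difference operator enforcing a smooth radial profile.
L = np.diff(np.eye(n), axis=0)
mu = 0.5
m = np.linalg.solve(G.T @ G + mu * L.T @ L, G.T @ d)
print("recovered near/far slowness:",
      round(m[:10].mean(), 1), round(m[20:].mean(), 1))
```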
50

De Fazio, Roberto, Abdel-Razzak Al-Hinnawi, Massimo De Vittorio, and Paolo Visconti. "An Energy-Autonomous Smart Shirt Employing Wearable Sensors for Users’ Safety and Protection in Hazardous Workplaces." Applied Sciences 12, no. 6 (March 13, 2022): 2926. http://dx.doi.org/10.3390/app12062926.

Abstract:
Wearable devices represent a versatile technology in the IoT paradigm, enabling non-invasive and accurate data collection directly from the human body. This paper describes the development of a smart shirt to monitor working conditions in particularly dangerous workplaces. The wearable device integrates a wide set of sensors to locally acquire the user's vital signs (e.g., heart rate, blood oxygenation, and temperature) and environmental parameters (e.g., the concentration of dangerous gas species and the oxygen level). Electrochemical gas-monitoring modules were designed and integrated into the garment to acquire the concentrations of CO, O2, CH2O, and H2S. The acquired data are sent wirelessly to a cloud platform (IBM Cloud), where they are displayed, processed, and stored. A mobile application was deployed to gather data from the wearable devices and forward them to the cloud application, enabling the system to operate in areas where a WiFi hotspot is not available. Additionally, the smart shirt comprises a multisource harvesting section to scavenge energy from light, body heat, and limb movements: it integrates several harvesters (thin-film solar panels, thermoelectric generators (TEGs), and piezoelectric transducers), a low-power conditioning section, and a 380 mAh LiPo battery to accumulate the recovered charge. Field tests indicated that the harvesting section can provide up to 216 mW of mean power, fully covering the power requirements of the sensing, processing, and communication sections (a mean of 1.86 mW; 3.54 mW in the worst-case scenario) in all considered conditions. Even in the complete absence of energy contributions from the harvesting section, the 380 mAh LiPo battery guarantees a lifetime of about 16 days.
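The quoted 16-day battery lifetime can be sanity-checked from the numbers in the abstract (a nominal LiPo cell voltage of 3.7 V is an assumption, as the abstract does not state it):

```python
capacity_mah = 380.0       # battery capacity quoted in the abstract
v_nominal = 3.7            # V, assumed nominal LiPo cell voltage
p_worst_mw = 3.54          # mW, worst-case draw quoted in the abstract

energy_mwh = capacity_mah * v_nominal        # ~1406 mWh stored
lifetime_h = energy_mwh / p_worst_mw         # hours at constant worst-case draw
print(f"{lifetime_h / 24:.1f} days")         # ~16.5 days, matching the ~16-day claim
```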