Academic literature on the topic 'Fluid power technology Data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Fluid power technology Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Fluid power technology Data processing"

1

Slijkerman, W. F. J., W. J. Looyestijn, P. Hofstra, and J. P. Hofman. "Processing of Multi-Acquisition NMR Data." SPE Reservoir Evaluation & Engineering 3, no. 06 (December 1, 2000): 492–97. http://dx.doi.org/10.2118/68408-pa.

Full text
Abstract:
Summary Crucial issues in formation evaluation are the determination of porosity, permeability, hydrocarbon volumes, and net-to-gross ratio. Nuclear magnetic resonance (NMR) logging provides measurements that are directly related to these parameters. The NMR response of fluids contained in pores is governed by their T2- and T1-relaxation times, diffusion coefficient, and whether or not they wet the rock. In the case where fluids possess a sufficiently large contrast in these properties and NMR data have been acquired with suitably chosen acquisition parameters (i.e., wait times and/or inter-echo times) a separation of water, oil, and gas NMR responses can be made. From these separate NMR responses the hydrocarbon volumes, porosity, and permeability estimates are subsequently calculated. Key in these applications is the ability to include all the acquired log NMR data into the processing towards the desired end result. Methods exist to derive hydrocarbon volumes from T2 distributions or from echo decay data. However, these are all methods in which the difference between just two acquisitions that only differ in either wait time or inter-echo time are considered. Over the past years we have developed, tested, and employed an alternative processing technique named multi-acquisition NMR (MacNMR). MacNMR takes any number of log acquisitions (wait time and/or inter-echo time variations) and simultaneously inverts them using a rigorous forward model to derive the desired water and hydrocarbon T2 distributions. In this paper, we discuss the concepts of MacNMR and demonstrate its versatility in NMR log processing. An example will illustrate its benefits. Introduction This paper discusses the method used by Shell to process multi-acquisition nuclear magnetic resonance (NMR) data. The objective of the processing is to extract fluid volumes and properties from multi-acquisition NMR data. The potential of multi-acquisition NMR logging for water, oil, and gas discrimination and volume quantification was recognized already in 1993. At that time no commercial processing of such data was available. It was decided to develop an in-house multi-acquisition processing capability. From 1993 to 1996 the development effort was focused on the evaluation of potential processing concepts and the development of the necessary mathematical algorithms. In 1996 the actual software implementation was developed, and in October 1996 first results were available and published internally. In March 1997 a company-wide beta test of the software was organized. In August 1997 the software was released company wide and has been in use since then. Multi-Acquisition Data Processing Methods As an introduction, we briefly review methods for quantitative processing of multi-acquisition NMR data that are described in the open literature. We make the distinction between methods that operate in the relaxation time domain vs. methods that operate in the acquisition time domain. Analysis in the Relaxation Time (or T2) Domain. Here, methods are discussed that operate in the T2 domain. Differential Spectrum Method. The differential spectrum method, first published by Akkurt and Vinegar1 works on dual-wait-time data. The concept is to independently T2 invert the long- and short-wait-time echo-decay vectors into a T2 spectrum. The two resulting T2 spectra are subtracted and, provided the wait times have been selected suitably,2 the difference between the two T2 spectra only arises from fluids with long T1 components (usually hydrocarbons). 
Volumes are quantified by integrating the difference T2 spectrum and correcting for the polarization difference between long and short wait time. Enhanced Diffusion Method. The enhanced diffusion method, recently published by Akkurt et al.,3 exploits the diffusion contrast between the diffusive brine and the less diffusive (medium-to-heavy) oil (i.e., water diffusion is faster than oil diffusion). The idea is that the inter-echo time is chosen sufficiently long such that the water and oil signals are fully separated in the T2 domain (i.e., water is at lower T2 than oil). Determining oil volumes is then just a matter of integrating over the appropriate T2 range in the T2 spectrum. Analysis in the Acquisition Time Domain. Here, methods are discussed that operate in the acquisition time domain. Time-Domain Analysis. The time-domain analysis method (TDA) operates on dual-wait-time data. This method was first published by Prammer et al.4 The concept is to subtract the measured long- and short-wait-time decay vectors into an echo difference. In case the wait times have been chosen suitably,2 the difference of the two decay vectors should be arising from a long T1 component (usually a hydrocarbon). This difference echo vector is subsequently T2 inverted (using "matched filters," which basically means that a uni- or bi-exponential is fitted to the data). In that way, only the T2 component arising from the hydrocarbon is found. The hydrocarbon volume is deduced by correcting the resulting signal strength from the difference in polarization between long and short wait time. Echo Ratio Method. This method, published by Flaum et al.,5 works on dual-inter-echo-time data. The long- and short-inter-echo-time echo decays are divided and an apparent diffusion coefficient is calculated. The apparent diffusion coefficient can be used as a qualitative indicator for the presence of gas. MacNMR Method MacNMR uses a method that is radically different from the other processing schemes and is a comprehensive implementation of earlier concepts.1,6 MacNMR employs a forward model to model the measured echo-decay vectors. The starting points in the forward model are the T2 spectra for each of the fluids present (water, oil, and/or gas) that would be measured at infinite wait time and zero gradient. From these T2 spectra, echo-decay vectors are constructed by accounting for the effects of hydrogen index, polarization, and diffusion. The best-fit T2 spectra are found by inverting the forward model to the measured echo-decay vectors. All measured echo-decay vectors included in the inversion are treated on an equal statistical footing. They are weighted with their respective rms-noise values. Hence, decays with the lowest noise contribute most. In principle, any number of echo-decay vectors can be included in the inversion. The current software implementation of MacNMR accepts up to a maximum of six echo-decay vectors, totaling a maximum of 7,000 echoes. The inversion typically takes less than 1 second per depth increment. In a sense, MacNMR employs a very classical concept in that it defines unknown variables (T2 spectra for the fluids present) that are determined from the available data (i.e., all the acquired decay vectors) by error minimization. Between the unknown variables and the data is a forward model. The forward model contains the effects of inter-echo-time variation and wait-time variation.
APA, Harvard, Vancouver, ISO, and other styles
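The MacNMR abstract above is, at heart, a joint linear inversion of several echo trains. Below is a minimal sketch of that idea, assuming an illustrative T2 grid, a crude T1 = 1.5·T2 relation, and synthetic noise weights; none of these values come from the paper, and scipy's non-negative least squares stands in for the paper's rigorous forward-model inversion.

```python
# Sketch of multi-acquisition T2 inversion (after the MacNMR idea): stack
# echo trains with different wait times, weight each by its rms noise, and
# solve for one non-negative T2 amplitude spectrum.
import numpy as np
from scipy.optimize import nnls

t2_grid = np.logspace(-3, 1, 40)   # s, assumed T2 bins
t1_grid = 1.5 * t2_grid            # crude T1/T2 ratio assumption

def kernel(echo_times, wait_time):
    # A[i, j] = polarization(Tw, T1_j) * exp(-t_i / T2_j)
    polar = 1.0 - np.exp(-wait_time / t1_grid)
    return np.exp(-np.outer(echo_times, 1.0 / t2_grid)) * polar

def invert(acquisitions):
    # acquisitions: list of (echo_times, echoes, wait_time, rms_noise)
    rows, data = [], []
    for t, y, tw, sigma in acquisitions:
        rows.append(kernel(t, tw) / sigma)  # equal statistical footing
        data.append(y / sigma)
    spectrum, _ = nnls(np.vstack(rows), np.concatenate(data))
    return spectrum                         # non-negative T2 spectrum

t = np.arange(1, 501) * 0.5e-3              # 0.5 ms echo spacing (assumed)
y = 0.2 * np.exp(-t / 0.1)                  # synthetic single-T2 echo train
print(invert([(t, y, 8.0, 0.01), (t, 0.9 * y, 1.0, 0.01)]).shape)
```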
2

Ji, Jinjie, Qing Chen, Lei Jin, Xiaotong Zhou, and Wei Ding. "Fault Diagnosis System of Power Grid Based on Multi-Data Sources." Applied Sciences 11, no. 16 (August 20, 2021): 7649. http://dx.doi.org/10.3390/app11167649.

Full text
Abstract:
In order to perform power grid fault diagnosis accurately, rapidly, and comprehensively, a power grid fault diagnosis system based on multiple data sources is proposed. The integrated system uses accident-level information, warning-level information, and fault recording documents, and outputs a complete diagnosis and tracking report. According to the timeliness of the three types of information transmission, the system is divided into three subsystems: a real-time processing system, a quasi-real-time processing system, and a batch processing system. The complete work is realized through cooperation between them. While the real-time processing system completes fault diagnosis of elements, it also screens out incorrectly operating protections and circuit breakers and judges the loss of accident-level information. The quasi-real-time system outputs the reasons for incorrect actions of protections and circuit breakers under the premise that partial warning-level information may be missing. The batch processing system corrects the diagnosis results of the real-time processing system and outputs fault details, including fault phases, types, times, and locations of faulty elements. The simulation results and tests show that the system can meet actual engineering requirements in terms of execution efficiency and fault diagnosis and tracking effect. It can serve as a reference for the self-healing and maintenance of power grids and has considerable application value.
APA, Harvard, Vancouver, ISO, and other styles
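The three-subsystem split described in the abstract is essentially a routing architecture keyed on data timeliness. The toy dispatcher below illustrates that split under assumed record shapes and queue names; it is not the authors' implementation.

```python
# Toy dispatcher mirroring the three-subsystem split: accident-level info ->
# real-time, warning-level -> quasi-real-time, fault recordings -> batch.
# Field names and payloads are illustrative only.
from collections import defaultdict

queues = defaultdict(list)

ROUTING = {
    "accident": "real_time",        # element fault diagnosis
    "warning": "quasi_real_time",   # explain protection/breaker misoperation
    "recording": "batch",           # correct results, output fault details
}

def ingest(record):
    queues[ROUTING[record["level"]]].append(record)

ingest({"level": "accident", "payload": "breaker CB-12 tripped"})
ingest({"level": "recording", "payload": "disturbance record #881"})
print({k: len(v) for k, v in queues.items()})
```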
3

Dindorf, Ryszard, and Piotr Wos. "Universal Programmable Portable Measurement Device for Diagnostics and Monitoring of Industrial Fluid Power Systems." Sensors 21, no. 10 (May 15, 2021): 3440. http://dx.doi.org/10.3390/s21103440.

Full text
Abstract:
This paper presents a new universal programmable portable measuring device (PMD) as a complete, accurate, and efficient solution for monitoring and technical diagnostics of industrial fluid power systems. The PMD has programmable functions designed for recording, processing, and graphical visualization of measurement results at the test stand or at the place of operation of fluid power systems. The PMD has a built-in WiFi communication module for transferring measurement data via Industrial Internet of Things (IIoT) technology for online remote monitoring of fluid power systems. The PMD can be programmed for a variety of measuring tasks in servicing, repairing, diagnosing, and monitoring fluid power systems. For this purpose, fluid dynamic quantities, mechanical quantities, and electrical quantities can be measured. The adaptation of the PMD to the indirect measurement of leakage flow rate in a compressed air system (CAS) is presented in detail. Measuring instruments and PMDs were connected to a branch of the pipeline. The tests used the measurement system to estimate the leakage flow rate through small air nozzles, as well as other CAS indicators.
APA, Harvard, Vancouver, ISO, and other styles
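For context on the leakage estimation mentioned above, a textbook choked-flow relation for a small nozzle is sketched below; the discharge coefficient, nozzle diameter, and supply conditions are assumptions, and the paper's actual indirect method may differ.

```python
# One common way to estimate leakage mass flow through a small nozzle in a
# compressed-air system: assume choked (sonic) flow at the throat.
# m_dot = Cd * A * p0 * sqrt(gamma/(R*T0)) * (2/(gamma+1))^((gamma+1)/(2(gamma-1)))
import math

def choked_air_leak(p0_pa, t0_k, d_m, cd=0.9, gamma=1.4, r=287.0):
    area = math.pi * d_m ** 2 / 4.0
    crit = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return cd * area * p0_pa * math.sqrt(gamma / (r * t0_k)) * crit  # kg/s

# e.g. a 1 mm nozzle at 7 bar(a) and 293 K (illustrative values):
print(choked_air_leak(7e5, 293.0, 1e-3))
```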
4

Thorsen, Arve K., Tor Eiane, Holger F. Thern, Paal Fristad, and Stephen Williams. "Magnetic Resonance in Chalk Horizontal Well Logged With LWD." SPE Reservoir Evaluation & Engineering 13, no. 04 (August 5, 2010): 654–66. http://dx.doi.org/10.2118/115699-pa.

Full text
Abstract:
Summary This paper describes the geological and petrophysical evaluation of a new structure of a mature field to evaluate the reservoir potential in unproduced reservoir zones. The well was drilled in a carbonate with variations in rock quality and with minor subfaulting occurring. Gamma ray (GR), resistivity, density, neutron, and image services were used in the horizontal part of the well in addition to magnetic resonance (MR). To achieve the best possible real-time wellbore placement, reservoir navigation and continuous follow-up on the horizontal log interpretation were performed during drilling. For the first time, a low-gradient-MR-while-drilling technology was deployed in a virgin carbonate horizontal well on the Norwegian Continental Shelf. The MR service was run to obtain porosities (including partitioning of movable and bound fluids), hydrocarbon (HC) saturations, and permeability estimates. Fluid saturations based on traditional methods and the MR were evaluated and compared with core data, enhancing the understanding of the measurement and the reservoir. For post-processing, the MR data were integrated and interpreted together with the other measurements performed in the well, delivering an accurate and consistent reservoir description. The first part of the horizontal section of the well was drilled with conductive drilling fluid and the latter part with nonconductive drilling fluid. Laboratory measurements of the two mud filtrates were performed to understand the influence of the two different drilling-fluid types on the MR measurements. In the absence of water-based mud-filtrate invasion, the MR data show good agreement with saturations from core, confirming the quality and reliability of the MR data. Comparison of the MR T2 distributions and volumetrics with image data indicates that even fine variations in rock quality and lithology are reliably resolved by the MR data. Before logging, old core data were used to refine the constants used in the Timur-Coates MR permeability equation, which quantitatively tracks changes in reservoir quality. The values were calibrated when Timur-Coates constants were derived from the well's core plugs.
APA, Harvard, Vancouver, ISO, and other styles
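The Timur-Coates permeability equation that the abstract refers to is commonly cited in the form below; the constant C and the unit conventions are field-calibrated, which is exactly what the authors refine from old core data.

```python
# Timur-Coates MR permeability in its commonly cited form:
# k [mD] = (phi / C)^4 * (FFI / BVI)^2
# phi: porosity in porosity units; FFI/BVI: free-fluid to bound-fluid ratio.
# C defaults to 10 here, but in practice it is calibrated against core.
def timur_coates_k(phi_pu, ffi, bvi, c=10.0):
    return (phi_pu / c) ** 4 * (ffi / bvi) ** 2

print(timur_coates_k(25.0, 0.15, 0.05))  # illustrative numbers only
```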
5

Stamatis, A., N. Aretakis, and K. Mathioudakis. "Blade Fault Recognition Based on Signal Processing and Adaptive Fluid Dynamic Modeling." Journal of Engineering for Gas Turbines and Power 120, no. 3 (July 1, 1998): 543–49. http://dx.doi.org/10.1115/1.2818181.

Full text
Abstract:
An approach for identifying faults in the blades of a gas turbine, based on physical modelling, is presented. A measured quantity is used as an input, and the deformed blading configuration is produced as an output. This is achieved without using any kind of “signature,” as is customary in diagnostic procedures for this kind of fault. A fluid dynamic model is used in a manner similar to what is known as “inverse design methods”: the solid boundaries that produce a certain flow field are calculated by prescribing this flow field. In the present case, a signal corresponding to the pressure variation on the blade-to-blade plane is measured. The blade cascade geometry that produced this signal is then derived by the method. In the paper, the method is described, and applications to test cases are presented. The test cases include theoretically produced faults as well as experimental cases where actual measurement data are shown to reproduce the geometrical deformations that existed in the test engine.
APA, Harvard, Vancouver, ISO, and other styles
6

Mousavi, Seyed Mahdi, Saeid Sadeghnejad, and Mehdi Ostadhassan. "Evaluation of 3D printed microfluidic networks to study fluid flow in rocks." Oil & Gas Science and Technology – Revue d’IFP Energies nouvelles 76 (2021): 50. http://dx.doi.org/10.2516/ogst/2021029.

Full text
Abstract:
Visualizing fluid flow in porous media can provide a better understanding of transport phenomena at the pore scale. In this regard, transparent micromodels are suitable tools to investigate fluid flow in porous media. However, using glass as the primary material makes them inappropriate for predicting the natural behavior of rocks. Moreover, constructing these micromodels is time-consuming via conventional methods. Thus, an alternative approach can be to employ 3D printing technology to fabricate representative porous media. This study investigates fluid flow processes through a transparent microfluidic device based on a complex porous geometry (natural rock) using digital-light processing printing technology. Unlike previous studies, this one has focused on manufacturing repeatability. This micromodel, like a custom-built transparent cell, is capable of modeling single and multiphase transport phenomena. First, the tomographic data of a carbonate rock sample is segmented and 3D printed by a digital-light processing printer. Two miscible and immiscible tracer injection experiments are performed on the printed microfluidic media, while the experiments are verified with the same boundary conditions using a CFD simulator. The comparison of the results is based on Structural Similarity Index Measure (SSIM), where in both miscible and immiscible experiments, more than 80% SSIM is achieved. This confirms the reliability of printing methodology for manufacturing reusable microfluidic models as a promising and reliable tool for visual investigation of fluid flow in porous media. Ultimately, this study presents a novel comprehensive framework for manufacturing 2.5D realistic microfluidic devices (micromodels) from pore-scale rock images that are validated through CFD simulations.
APA, Harvard, Vancouver, ISO, and other styles
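A minimal sketch of the SSIM comparison step described above, using scikit-image on synthetic stand-in arrays; in the study, the inputs would be registered experimental and CFD saturation maps rather than the random data used here.

```python
# Compare two same-size grayscale fields with the Structural Similarity
# Index Measure (SSIM), as the abstract describes for experiment vs. CFD.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
exp_img = rng.random((128, 128))                        # stand-in: experiment
cfd_img = exp_img + 0.05 * rng.standard_normal((128, 128))  # stand-in: CFD

score, ssim_map = structural_similarity(
    exp_img, cfd_img, data_range=exp_img.max() - exp_img.min(), full=True
)
print(f"SSIM = {score:.2f}")  # the study reports > 0.80 agreement
```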
7

Fu, Bai Xue, and Sheng Hai Hu. "Study for the Method of Automobile Oil Consumption Measuring Based on Ultrasonic Wave." Applied Mechanics and Materials 26-28 (June 2010): 962–66. http://dx.doi.org/10.4028/www.scientific.net/amm.26-28.962.

Full text
Abstract:
In response to the problems of direct and indirect testing of automobile oil consumption, ultrasonic technology and microcontroller control technology can be used to develop an automobile oil-consumption testing technique. The testing method is based on the fluid-testing principle of ultrasonic-wave technology, and a mathematical model of automobile oil-consumption testing is built. According to the functional requirements of oil-consumption measurement, the system hardware, control circuit, and control-automation program are designed to achieve intelligent, non-disassembly measurement of oil consumption. By means of tests, data-processing analysis, and modification of the theoretical model, the theory used by the system is shown to be feasible, and the resulting oil-consumption measuring system achieves low power consumption and high accuracy.
APA, Harvard, Vancouver, ISO, and other styles
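The paper builds its own measurement model; for orientation, the standard transit-time relation that underlies ultrasonic flow measurement is sketched below with illustrative numbers (path length, angle, and pipe bore are assumptions).

```python
# Textbook transit-time ultrasonic flow relation:
# v = L / (2 cos(theta)) * (t_up - t_down) / (t_up * t_down)
# L: acoustic path length; theta: path angle to the pipe axis.
import math

def transit_time_velocity(t_up, t_down, path_len, theta_rad):
    return path_len * (t_up - t_down) / (
        2.0 * math.cos(theta_rad) * t_up * t_down
    )

v = transit_time_velocity(66.8e-6, 66.2e-6, 0.10, math.radians(45))
q = v * math.pi * 0.025 ** 2 / 4.0   # volumetric flow in a 25 mm pipe, m^3/s
print(v, q)
```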
8

Yue, Wenzheng, Guo Tao, and Zhengwu Liu. "Identifying Reservoir Fluids by Wavelet Transform of Well Logs." SPE Reservoir Evaluation & Engineering 9, no. 05 (October 1, 2006): 574–81. http://dx.doi.org/10.2118/88559-pa.

Full text
Abstract:
Summary The wavelet-transform (WT) method has been applied to logs to extract reservoir-fluid information. In addition to the time (depth)/frequency analysis generally performed by the wavelet method, we also have performed energy spectral analysis for time/frequency-domain signals by the WT method. We have further developed a new method to identify reservoir fluid by setting up a correlation between the energy spectra and reservoir fluid. We have processed 42 models from an oil field in China using this method and have subsequently applied these rules to interpret reservoir layers. It is found that identifications by use of this method are in very good agreement with the results of well tests. Introduction An important log-analysis application is determining reservoir-fluid properties. It is common practice to calculate the water and oil saturations of reservoir formations by use of electrical logs. With the development of well-logging technology, a number of methods have been developed for reservoir-fluid typing with well logs (Hou 2002; Geng et al. 1983; Dahlberg and Ference 1984). A recent report has also described reservoir-fluid typing by the T2 differential spectrum from nuclear-magnetic-resonance (NMR) logs (Coates et al. 2001). However, because of the interference from vugs, fractures, clay content, and mud-filtrate invasion, the reservoir-fluid information contained in well logs is often concealed. The reliability of these log interpretations is thus limited in many cases. Therefore, it is desirable to find a more reliable and consistent way of reservoir-fluid typing with well logs. In this paper, we present a new method using the WT for fluid typing with well logs. The WT technique was developed with the localization idea from Gabor's short-time Fourier analysis and has been expanded further. Wavelets provide the ability to perform local analysis (i.e., analyze a small portion of a larger signal) (Daubechies 1992). This localized analysis represents the next logical step: a windowing technique with variable-sized regions. Wavelet analysis allows the use of long time intervals, where more-precise low-frequency information is wanted, and shorter intervals, where high-frequency information is needed. Wavelet analysis is capable of revealing aspects of data that other signal-analysis techniques miss: aspects such as trends, breakdown points, discontinuities in higher derivatives, and self-similarity. In well-logging-data processing, wavelet analysis has been used to identify formation boundaries, estimate reservoir parameters, and increase vertical resolution (Lu and Horne 2000; Panda et al. 1996; Jiao et al. 1999; Barchiesi and Gharbi 1999). For data interpretation, however, the identification of hydrocarbon-bearing zones by wavelet analysis is still under investigation. In this study, we have developed a technique of wavelet-energy-spectrum analysis (WESA) to identify reservoir-fluid types. We have applied this technique to field-data interpretation and have achieved very good results.
APA, Harvard, Vancouver, ISO, and other styles
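In the spirit of the WESA technique described above, the sketch below decomposes a log curve with a discrete wavelet transform and computes the relative energy per band; the wavelet family and decomposition depth are assumptions, and the paper's correlation rules linking band energies to fluid type are not reproduced.

```python
# Per-band wavelet energy spectrum of a (synthetic) log curve.
import numpy as np
import pywt

def wavelet_energy_spectrum(log_curve, wavelet="db4", level=5):
    coeffs = pywt.wavedec(log_curve, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()   # relative energy per band

curve = np.sin(np.linspace(0, 20, 512)) + 0.1 * np.random.randn(512)
print(wavelet_energy_spectrum(curve))
```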
9

Shi, Jun-Feng, Feng Deng, Li-Zhi Xiao, Hua-Bing Liu, Feng-Qin Ma, Meng-Ying Wang, Rui-Dong Zhao, Shi-Wen Chen, Jian-Jun Zhang, and Chun-Ming Xiong. "A proposed NMR solution for multi-phase flow fluid detection." Petroleum Science 16, no. 5 (September 25, 2019): 1148–58. http://dx.doi.org/10.1007/s12182-019-00367-3.

Full text
Abstract:
Abstract In the petroleum industry, detection of multi-phase fluid flow is very important in both surface and down-hole measurements. Accurate measurement of high-rate water or gas multi-phase flow has always been an academic and industrial focus. NMR is an efficient and accurate technique for the detection of fluids; it is widely used in the determination of fluid compositions and properties. This paper aims to quantitatively detect multi-phase flow in oil and gas wells and pipelines and to propose an innovative method for online nuclear magnetic resonance (NMR) detection. Online NMR data acquisition, processing, and interpretation methods are proposed to fill the gap left by traditional methods. A full-bore straight tube design without pressure drop, a Halbach magnet structure design with zero magnetic leakage outside the probe, a separate antenna structure design without flow effects on the NMR measurement, and automatic control technology achieve unattended operation. Through the innovations of this work, the application of NMR for the real-time and quantitative detection of multi-phase flow in oil and gas wells and pipelines can be implemented.
APA, Harvard, Vancouver, ISO, and other styles
10

Wulandari, I. Gusti Agung Ayu Desy. "Pengaruh Nano Fluida terhadap Temperatur Kondensor Cascade Straight Heat Pipe." Jurnal METTEK 5, no. 2 (January 8, 2020): 79. http://dx.doi.org/10.24843/mettek.2019.v05.i02.p03.

Full text
Abstract:
The technology development of the Central Processing Unit (CPU) in computers has led to smart technologies, which deliver better performance in smaller dimensions. This reduction in dimensions can cause a very significant increase in power and a high increase in heat flux in the CPU. In this research, a cascade straight heat pipe is designed for a better CPU cooling system without requiring additional power for its operation. From the data obtained, the best thermal performance is achieved by the cascade straight heat pipe with the working fluid Al2O3 - TiO2 - water, with a simulator plate temperature decrease of 41.872% at maximum load and the highest condenser output temperature. The second best thermal performance is with the Al2O3 - water working fluid, with a simulator plate temperature decrease of 35.243% at maximum load. The poorest thermal performance is with water as the working fluid, with a simulator plate temperature decrease of 28.648% and the lowest condenser output temperature.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Fluid power technology Data processing"

1

Lundqvist, Viktor. "A smoothed particle hydrodynamic simulation utilizing the parallel processing capabilites of the GPUs." Thesis, Linköping University, Department of Science and Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-21761.

Full text
Abstract:

Simulating fluid behavior has proven to be a demanding challenge which requires complex computational models and highly efficient data structures. Smoothed Particle Hydrodynamics (SPH) is a particle based computational model used to simulate fluid behavior that has been found capable of producing convincing results. However, the SPH algorithm is computational heavy which makes it cumbersome to work with.

This master thesis describes how the SPH algorithm can be accelerated by utilizing the GPU’s computational resources. It describes a model for how to distribute the work load on the GPU and presents a suitable data structure. In addition, it proposes a method to represent and handle moving objects in the fluids surroundings. Finally, the performance gain due to the GPU is evaluated by comparing processing times with an identical implementation running solely on the CPU.

APA, Harvard, Vancouver, ISO, and other styles
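As a reference point for the thesis's GPU mapping, here is a plain CPU version of the SPH density summation with the standard poly6 kernel; the per-particle loop is exactly the work that maps naturally onto one GPU thread per particle. Constants and particle counts are illustrative.

```python
# Reference (CPU) SPH density summation with the standard poly6 kernel:
# W(r, h) = 315 / (64 pi h^9) * (h^2 - r^2)^3 for r < h, else 0.
import numpy as np

def sph_density(positions, h, mass=1.0):
    n = positions.shape[0]
    coeff = 315.0 / (64.0 * np.pi * h ** 9)
    rho = np.zeros(n)
    for i in range(n):                       # -> one GPU thread per particle
        d2 = np.sum((positions - positions[i]) ** 2, axis=1)
        inside = d2 < h * h
        rho[i] = mass * coeff * np.sum((h * h - d2[inside]) ** 3)
    return rho

pts = np.random.rand(200, 3) * 0.1
print(sph_density(pts, h=0.03)[:5])
```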
2

Horko, Michael. "CFD optimisation of an oscillating water column wave energy converter." University of Western Australia. School of Mechanical Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0089.

Full text
Abstract:
Although oscillating water column type wave energy devices are nearing the stage of commercial exploitation, there is still much to be learnt about many facets of their hydrodynamic performance. This research uses the commercially available FLUENT computational fluid dynamics flow solver to model a complete OWC system in a two dimensional numerical wave tank. A key feature of the numerical modelling is the focus on the influence of the front wall geometry and in particular the effect of the front wall aperture shape on the hydrodynamic conversion efficiency. In order to validate the numerical modelling, a 1:12.5 scale experimental model has been tested in a wave tank under regular wave conditions. The effects of the front lip shape on the hydrodynamic efficiency are investigated both numerically and experimentally and the results compared. The results obtained show that with careful consideration of key modelling parameters as well as ensuring sufficient data resolution, there is good agreement between the two methods. The results of the testing have also illustrated that simple changes to the front wall aperture shape can provide marked improvements in the efficiency of energy capture for OWC type devices.
APA, Harvard, Vancouver, ISO, and other styles
3

Nduku, Nyaniso Prudent. "Development of methods for distribution network power quality variation monitoring." Thesis, Cape Peninsula University of Technology, 2009. http://hdl.handle.net/20.500.11838/1144.

Full text
Abstract:
Thesis (MTech (Electrical Engineering))--Cape Peninsula University of Technology, 2009
The purpose of this project is to develop methods for distribution network power quality variation monitoring. Power quality (PQ) has become a significant issue for both power suppliers and customers, and there have been important changes in power systems regarding power quality requirements. Power quality is the combination of voltage quality and current quality. The main research problem of the project is to investigate the power quality of a distribution network by selecting proper measurements and by applying and developing existing classic and modern signal-conditioning methods for extracting and monitoring power disturbance parameters. The research objectives are: to study the requirements of the standard IEC 61000-4-30; to investigate the common coupling points in the distribution network; to identify the points for measurement; to develop a MySQL database for the measurement data and MATLAB software for simulation of the network; to develop methods based on Fourier transforms for estimation of the parameters of the disturbances; and to develop software implementing these methods. The influence of different loads on power quality disturbances is considered in the distribution network. Points on the network and meters according to the IEC power quality standards are investigated and applied to the CPUT Bellville campus distribution network. The implementation of power quality monitoring for the CPUT Bellville campus helps improve the quality of the power supply and reduce the power used. MATLAB programs are developed to communicate with the database and calculate the disturbance and power quality parameters.
APA, Harvard, Vancouver, ISO, and other styles
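One of the thesis objectives is Fourier-based estimation of disturbance parameters. A minimal sketch of that kind of calculation (harmonic magnitudes and total harmonic distortion from a sampled voltage) is shown below; the sample rate and waveform are assumed, and the thesis itself works in MATLAB rather than Python.

```python
# Estimate harmonic magnitudes and THD of a sampled voltage with the FFT.
import numpy as np

fs, f0 = 6400, 50                       # Hz; one second of a 50 Hz waveform
t = np.arange(fs) / fs
v = 230 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t) \
    + 15 * np.sin(2 * np.pi * 5 * f0 * t)          # synthetic 5th harmonic

spectrum = np.abs(np.fft.rfft(v)) / (len(v) / 2)   # amplitude spectrum
fund = spectrum[f0]                     # 1 Hz resolution -> bin index = freq
harmonics = spectrum[2 * f0 : 50 * f0 : f0]        # 2nd..49th harmonics
thd = np.sqrt(np.sum(harmonics ** 2)) / fund
print(f"THD = {100 * thd:.2f} %")
```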
4

REIS, JUNIOR JOSE S. B. "Métodos e softwares para análise da produção científica e detecção de frentes emergentes de pesquisa." reponame:Repositório Institucional do IPEN, 2015. http://repositorio.ipen.br:8080/xmlui/handle/123456789/26929.

Full text
Abstract:
Progress in previous projects highlighted the need to address the problem of software for detecting emerging research and development trends from databases of scientific publications. A lack of efficient computational applications dedicated to this purpose became evident; such tools are highly useful for better planning of research and development programs in institutions. A review of the currently available software was therefore carried out in order to clearly delineate the opportunity to develop new tools. As a result, an application called Citesnake was implemented, designed specifically to support the detection and study of emerging trends through the analysis of several types of networks extracted from scientific databases. Using this robust and effective computational tool, analyses of emerging research and development fronts were conducted in the area of Generation IV nuclear power systems, so as to identify, among the reactor types selected as most promising by the GIF - Generation IV International Forum, those that have developed most over the last ten years and that currently appear most capable of fulfilling the promises made about their innovative concepts.
Dissertation (Master's in Nuclear Technology)
IPEN/D
Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO, and other styles
5

Ramafalo, Mogale Emmanuel. "A framework for the evaluation of the information system at Eskom." Thesis, 2014. http://hdl.handle.net/10352/305.

Full text
Abstract:
M. Tech. (Information Technology, Faculty of Applied and Computer Sciences) Vaal University of Technology
A reliable and efficient information system (IS) is critical for Eskom so that it is able to manage and meet its energy demands. A reliable power supply provides stakeholders with the confidence that the supply of power is managed sustainably, effectively, and efficiently. Thus, an information system is integral to the effective and efficient generation, distribution, and transmission of electricity. The purpose of the study was to investigate IS evaluation criteria and to develop a comprehensive framework that will serve as the basis for IS evaluation across Eskom. The research study additionally investigated IS evaluation methods and instruments that are currently used in Eskom. This study produced an information systems success evaluation framework. The proposed model was built by reviewing well-established information systems success models and information systems theories found in the literature. This research study followed the interpretive research paradigm, combined with a qualitative case study. The research findings linked information systems success to top management support, the change management process, and information quality. The findings of the study also revealed that the quality of the IS department's service as perceived by users can greatly influence IS success. The results of this study provide an enlightening reference for Eskom, in line with Eskom's goal of improving business processes and efficiencies and eliminating waste.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Fluid power technology Data processing"

1

Bath International Fluid Power Workshop (3rd 1990 University of Bath). Computers in fluid power. Taunton, Somerset, England: Research Studies Press, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Watton, J. Fluid power systems: Modelling, simulation, analog and microcomputer control. New York: Prentice Hall, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fluid power systems: Modeling, simulation, analog and microcomputer control. New York: Prentice-Hall, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Halme, Jarkko. Utilization of genetic algorithm in on-line tuning of fluid power servos. Lappeenranta: Lappeenranta University of Technology, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

McKenzie, Jamieson A. Power learning in the classroom. Newbury Park, Calif: Corwin Press, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Organisations and information technology: Systems, power and job design. Oxford [England]: Blackwell, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

John, Webster, ed. Inescapable data: Harnessing the power of convergence. Upper Saddle River, NJ: IBM Press, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

The power of now: How winning companies sense and respond to change using real-time technology. New York: McGraw-Hill, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hank, Bromley, and Apple Michael W, eds. Education, technology, power: Educational computing as a social practice. Albany: State University of New York Press, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jacques, Roy. The power of now: Real-time analytics and IBM InfoSphere Streams. New York: McGraw-Hill Education, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Fluid power technology Data processing"

1

Dong, Weijie, Kui Luo, Guanghui Shi, Guoqing He, and Wenwen Sun. "Data Processing Technology of Power Grid Measurement Based on Error Correction." In Application of Intelligent Systems in Multi-modal Information Analytics, 107–14. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74814-2_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yang, Xian. "Spectrum Allocation Technology of Elastic Optical Networks Based on Power Business Perception." In 2020 International Conference on Data Processing Techniques and Applications for Cyber-Physical Systems, 555–62. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1726-3_68.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Tongwen, Hong Zhang, Jinhui Ma, and Xincun Shen. "Distributed Multi-source Service Data Stream Processing Technology and Application in Power Grid Dispatching System." In Big Data Management and Analysis for Cyber Physical Systems, 85–94. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-17548-0_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rathee, Geetanjali, and Hemraj Saini. "Electronic Voting Application Powered by Blockchain Technology." In Large-Scale Data Streaming, Processing, and Blockchain Security, 230–46. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3444-1.ch011.

Full text
Abstract:
India is the largest democracy in the world, and in spite of that, it faces various challenges on a daily basis that hinder its growth, like corruption and human rights violations. One of the ugliest phases of corruption and political mayhem is visible during the election process, where no stone is left unturned in order to gain power. However, it is the common citizen who suffers most in terms of clarity as well as security when it comes to his/her vote. Blockchain can play a very important role in ensuring that the voters registering their votes are legitimate and that the counting of votes is not manipulated in any way. In today's times, when the world is available to people on their smartphones, it is also necessary to give them the opportunity to register their votes hassle-free via their smartphones without having to worry about the system getting hacked. Therefore, in this chapter, the authors propose a layout based on a smart contract, using Ethereum software to create an e-voting app, and present a secure e-voting framework built on a blockchain mechanism.
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Haoliang, Wei Liu, and Tolga Soyata. "Accessing Big Data in the Cloud Using Mobile Devices." In Cloud Technology, 222–48. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-6539-2.ch010.

Full text
Abstract:
The amount of data acquired, stored, and processed annually over the Internet has exceeded the processing capabilities of modern computer systems, including supercomputers with multiple-Petaflop processing power, giving rise to the term Big Data. Continuous research efforts to implement systems to cope with this insurmountable amount of data are underway. The authors introduce the ongoing research in three different facets: 1) in the Acquisition front, they introduce a concept that has come to the forefront in the past few years: Internet-of-Things (IoT), which will be one of the major sources for Big Data generation in the following decades. The authors provide a brief survey of IoT to understand the concept and the ongoing research in this field. 2) In the Cloud Storage and Processing front, they provide a survey of techniques to efficiently store the acquired Big Data in the cloud, index it, and get it ready for processing. While IoT relates primarily to sensor nodes and thin devices, the authors study this storage and processing aspect of Big Data within the framework of Cloud Computing. 3) In the Mobile Access front, they perform a survey of existing infrastructures to access the Big Data efficiently via mobile devices. This survey also includes intermediate devices, such as a Cloudlet, to accelerate the Big Data collection from IoT and access to Big Data for applications that require response times that are close to real-time.
APA, Harvard, Vancouver, ISO, and other styles
6

Hirji, Karim K. "Process-Based Data Mining." In Encyclopedia of Information Science and Technology, First Edition, 2321–25. IGI Global, 2005. http://dx.doi.org/10.4018/978-1-59140-553-5.ch409.

Full text
Abstract:
In contrast to the Industrial Revolution, the Digital Revolution is happening much more quickly. For example, in 1946, the world’s first programmable computer, the Electronic Numerical Integrator and Computer (ENIAC), stood 10 feet tall, stretched 150 feet wide, cost millions of dollars, and could execute up to 5,000 operations per second. Twenty-five years later, Intel packed 12 times ENIAC’s processing power into a 12–square-millimeter chip. Today’s personal computers with Pentium processors perform in excess of 400 million instructions per second. Database systems, a subfield of computer science, has also met with notable accelerated advances. A major strength of database systems is their ability to store volumes of complex, hierarchical, heterogeneous, and time-variant data and to provide rapid access to information while correctly capturing and reflecting database updates.
APA, Harvard, Vancouver, ISO, and other styles
7

Chhaya, Lipi, Paawan Sharma, Adesh Kumar, and Govind Bhagwatikar. "Application of Data Mining in Smart Grid Technology." In Research Anthology on Smart Grid and Microgrid Development, 869–82. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-3666-0.ch038.

Full text
Abstract:
Smart grid technology is a radical approach for improvisation in existing power grid. Some of the significant features of smart grid technology are bidirectional communication, AMI, SCADA, renewable integration, active consumer participation, distribution automation, and complete management of entire grid through wireless communication standards and technologies. Management of complex, hierarchical, and heterogeneous smart grid infrastructure requires data collection, storage, processing, analysis, retrieval, and communication for self-healing and complete automation. Data mining techniques can be an effective solution for smart grid operation and management. Data mining is a computational process for data analysis. Data scrutiny is unavoidable for unambiguous knowledge discovery as well as decision making practices. Data mining is inevitable for analysis of various statistics associated with power generation, distribution automation, data communications, billing, consumer participation, and fault diagnosis in smart power grid.
APA, Harvard, Vancouver, ISO, and other styles
8

Chhaya, Lipi, Paawan Sharma, Adesh Kumar, and Govind Bhagwatikar. "Application of Data Mining in Smart Grid Technology." In Encyclopedia of Information Science and Technology, Fifth Edition, 815–27. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3479-3.ch056.

Full text
Abstract:
Smart grid technology is a radical approach for improvisation in existing power grid. Some of the significant features of smart grid technology are bidirectional communication, AMI, SCADA, renewable integration, active consumer participation, distribution automation, and complete management of entire grid through wireless communication standards and technologies. Management of complex, hierarchical, and heterogeneous smart grid infrastructure requires data collection, storage, processing, analysis, retrieval, and communication for self-healing and complete automation. Data mining techniques can be an effective solution for smart grid operation and management. Data mining is a computational process for data analysis. Data scrutiny is unavoidable for unambiguous knowledge discovery as well as decision making practices. Data mining is inevitable for analysis of various statistics associated with power generation, distribution automation, data communications, billing, consumer participation, and fault diagnosis in smart power grid.
APA, Harvard, Vancouver, ISO, and other styles
9

Mansmann, Svetlana, and Marc H. Scholl. "Empowering the OLAP Technology to Support Complex Dimension Hierarchies." In Data Warehousing and Mining, 2164–84. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-951-9.ch128.

Full text
Abstract:
Comprehensive data analysis has become indispensable in a variety of domains. OLAP (On-Line Analytical Processing) systems tend to perform poorly or even fail when applied to complex data scenarios. The restriction of the underlying multidimensional data model to admit only homogeneous and balanced dimension hierarchies is too rigid for many real-world applications and, therefore, has to be overcome in order to provide adequate OLAP support. We present a framework for classifying and modeling complex multidimensional data, with the major effort at the conceptual level as to transform irregular hierarchies to make them navigable in a uniform manner. The properties of various hierarchy types are formalized and a two-phase normalization approach is proposed: heterogeneous dimensions are reshaped into a set of well-behaved homogeneous subdimensions, followed by the enforcement of summarizability in each dimension's data hierarchy. Mapping the data to a visual data browser relies solely on metadata, which captures the properties of facts, dimensions, and relationships within the dimensions. The navigation is schema-based, that is, users interact with dimensional levels with on-demand data display. The power of our approach is exemplified using a real-world study from the domain of academic administration.
APA, Harvard, Vancouver, ISO, and other styles
10

Talmon, Arno, and Ebi Meshkati. "Rheology, Rheometry and Wall Slip." In Slurry Technology - New Advances [Working Title]. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.108048.

Full text
Abstract:
In diverse resource, processing, and dredging applications, wall slip occurs. In hydraulic transport of highly concentrated particulate mixtures, wall slip can be beneficial, as it may substantially reduce hydraulic gradients. On other occasions, for instance in rheometry, wall slip may obscure rheology. Rheometric wall slip is not specific to industrial slurries and appears in natural (fluid) mud as well, mostly found in harbours and estuaries. In natural (fluid) muds, in contrast to industrial muds, coarse solids are absent; similarly, however, (clay) colloids govern their non-Newtonian flow characteristics. It is exciting to see that wall slip occurs not only in the case of dispersed coarse materials but also in their absence. In this chapter, we elaborate on wall slip in some existing resource-industry rheometry data and compare them with typical recent results of fluid mud rheology. Moreover, measurement of a (stationary) fluid mud's longitudinal profile in a harbour basin is used to examine the consequences of utilising slippage data. We finally evaluate measuring-element usage and the implementation of rheology in calculation methods.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Fluid power technology Data processing"

1

Fujiwara, K., Y. Nakamura, A. Kaneko, and Y. Abe. "Data Processing Technology of Interference Fringe for Particle Decontamination Measurement." In ASME-JSME-KSME 2019 8th Joint Fluids Engineering Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/ajkfluids2019-4983.

Full text
Abstract:
Abstract In severe accidents (SAs) of nuclear power plants, the release of gas containing fission products (FPs) from the reactor vessel is a major issue. To reduce the leakage of FPs into the environment, gas containing FPs is generally discharged through the wet well and decontaminated by the transfer of FPs from the gas phase to the liquid phase. This effect is called pool scrubbing. In SA analysis codes such as MELCOR, FP particle motion inside a single bubble, created by the bubbly flow inside the wet well, is modeled as a major factor in decontamination. However, there are almost no experimental data with which to investigate the decontamination behavior. Therefore, in our experiment, we used an advanced M/Z interferometer to visualize the particle decontamination behavior, adopting a Maki prism and installing a high-speed camera. However, since interferometer experiments are not specialized for non-stationary phenomena, several problems had to be solved. The first problem was the phase-extraction method in the FFT measurement. Since the FFT information of the interference is complicated, the existing approach extracts the phase information by hand from the overall amplitude. However, since high-speed camera visualization provides a large amount of information, this is not a realistic solution in our experiment. Therefore, in order to obtain the threshold between the phase information and the overall amplitude quantitatively, we applied a Gaussian mixture model (GMM) to cluster the data. From the measurement results, we succeeded in obtaining a threshold from the fitting results of the GMM. The next problem was to obtain a fine image of the bubble interface in order to capture the decontamination behavior at the interface. However, the interference image contains stripes in the background, which make it difficult to obtain the interface information. Therefore, in our experiment, we added an LED backlight coaxial with the laser of the interferometer in order to obtain a backlight image of the bubble. The interference and backlight images are separated by wavelength with a dichroic mirror, and we performed synchronous visualization of the interference and backlight images. A fine mask to extract the bubble interface is obtained from the calibration and comparison of the two images. Using the continuous visualization of particle decontamination behavior from a single bubble, the decontamination behavior was clearly observed. Although the existing model predicts the decontamination behavior as steady, non-stationary decontamination of particles from the bubble has been measured.
APA, Harvard, Vancouver, ISO, and other styles
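The GMM thresholding step described above can be sketched as follows, with synthetic amplitudes standing in for the per-pixel FFT amplitudes of the fringe images; the component count and the threshold rule (first class change between the component means) are assumptions consistent with the abstract.

```python
# Fit a two-component Gaussian mixture to fringe amplitudes and take a
# decision threshold between the two component means.
import numpy as np
from sklearn.mixture import GaussianMixture

amps = np.concatenate([np.random.normal(0.2, 0.05, 4000),   # background
                       np.random.normal(0.8, 0.10, 1000)])  # fringe signal
gmm = GaussianMixture(n_components=2, random_state=0).fit(amps.reshape(-1, 1))

lo, hi = np.sort(gmm.means_.ravel())
grid = np.linspace(lo, hi, 1000).reshape(-1, 1)
labels = gmm.predict(grid)
threshold = grid[np.argmax(labels != labels[0])][0]  # first class change
print(f"amplitude threshold = {threshold:.3f}")
```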
2

AlZoukani, Ahmad, Gallyam Aidagulov, Farhan Ali, Mohammed Al-Hamad, and Wael Abdallah. "Robust Centrifuge Data Processing for Tight and Permeable Rock Samples." In International Petroleum Technology Conference. IPTC, 2022. http://dx.doi.org/10.2523/iptc-22488-ms.

Full text
Abstract:
Abstract Capillary pressure measurements are key to reservoir characterization. The centrifuge technique is the most used industrial laboratory method to obtain capillary pressure curves for rock samples. The generated experimental data, however, require conversion of average saturation into local saturation to get correct capillary pressure curves, which is often complicated by the need to fit complex and noisy data. Therefore, the objective of this study is to construct a smooth, stable, and physically consistent data-fitting model for complex centrifuge data, in order to deliver accurate local saturations for different capillary pressure curves. Drainage capillary pressure curves were generated by centrifugation. Isoparaffinic oil was used to displace brine from core samples at elevated capillary pressure steps. Average water saturation was determined at each capillary pressure step after attaining production stability. Hassler-Brunner and Forbes's second approximate solutions were used to convert the acquired average water saturations into local saturations. For these two solutions, three analytical fitting techniques were compared on different sets of experimental data: power-law, global polynomial, and cubic-spline fitting methods. Two carbonate samples, of 96 md and 0.7 md permeability, were evaluated to represent two distinct cases of capillary pressure curves. Initially, the power law was used to fit the centrifuge data. For both the permeable and the tight sample, the resulting capillary pressure curves were found to be strongly biased by the choice of a non-zero initial pressure point, which makes this technique unsuitable for data interpretation. The second approach was to use the polynomial fitting method, which was found unable to properly fit the tight sample data. It was, however, capable of fitting the raw data of the permeable sample; the generated corrected capillary pressure curve, though, was unphysical at low water saturation ranges. Therefore, the raw data of both samples required a more complex fitting approach, i.e., the spline method. From the results, the spline function showed a high degree of fit and could account for irregularities in the experimental data. However, non-physical oscillations may occur during the data processing. Therefore, additional constraints of monotonicity of the fit and of the derived Forbes solutions were imposed on the optimal fitting spline. This approach was implemented using cubic splines and verified by the equally good results obtained in processing the experimental data sets for the tight and permeable samples. A robust interpretation workflow to reconstruct capillary pressure curves from centrifuge experiments was built and verified on two limiting cases of tight and permeable samples. The approach is based on fitting noisy experimental data with a cubic spline constructed using a constrained optimization procedure to ensure monotonicity of the derived solutions. The resulting physical consistency of the constructed spline fit returns correct capillary pressure curves, as required for accurate prediction of oil recovery and reservoir fluid distribution.
APA, Harvard, Vancouver, ISO, and other styles
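For reference, the Hassler-Brunner first approximation mentioned above reduces to differentiating Pc·S̄ with respect to Pc (since S̄(Pc) = (1/Pc)∫S dp). The sketch below applies that numerically to illustrative data (not the paper's) and flags non-monotonicity, the noise problem that motivates the authors' constrained-spline fit.

```python
# Hassler-Brunner first approximation for centrifuge data:
# local (inlet-face) saturation S_loc = d(Pc * S_avg) / dPc,
# taken here with a simple numerical derivative on assumed data.
import numpy as np

pc = np.array([5., 10., 20., 40., 80., 160.])          # kPa, assumed steps
s_avg = np.array([1.00, 0.92, 0.78, 0.62, 0.50, 0.43])  # assumed averages

s_loc = np.gradient(pc * s_avg, pc)                     # d(Pc*S_avg)/dPc
monotonic = bool(np.all(np.diff(s_loc) <= 0))           # should not increase
print(s_loc, "monotonic:", monotonic)
```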
3

Bergtun, Kjetil, Carsten Mahler, and Trond Melheim. "Introduction to New Technology Applications, Data Acquisition Capabilities, and Features of the All-Electric Boosting Control System." In Offshore Technology Conference. OTC, 2022. http://dx.doi.org/10.4043/31943-ms.

Full text
Abstract:
Abstract This paper explains what an All-Electric System (AES) is, how it fits within a subsea application, and how it compares to the traditional electro-hydraulic system. It explains how implementing this system simplifies operating processes, reduces both topside and subsea weight, reduces cost, and lowers the risk of hazardous fluids escaping into the ocean. The Vigdis Booster Station (VBS) project started back in 2017, and its technical requirements demanded an AES. The concept of using an AES in subsea applications is not new: development and operation have been underway since the early 2000s, primarily for subsea production systems, which pioneered this technological development. The VBS is the first AES for a subsea processing system. A multi-phase pump boosts production from the Vigdis field to the Snorre A platform. The pump is controlled from the Snorre B platform through an 18 km combined power and control umbilical. The VBS started production in early 2021. The main difference between an AES and a traditional electro-hydraulic system is that it replaces hydraulic fluid with electric power as the energy source for the subsea control system. This leads to reduced complexity and carbon footprint:
- Eliminates a costly and heavy control fluid (CF) hydraulic power unit (HPU)
- Saves topside tubing down to the umbilical hang-off
- Eliminates the need for control fluid tubes in the umbilical or CF jumpers subsea
- Eliminates the need for a large and heavy subsea hydraulic actuator
- Eliminates the need for subsea accumulators for control fluid
- Eliminates the need for any sea-chests
- Makes it possible to have all actuators individually retrievable and interchangeable
It also avoids pressurized control fluids and the associated hazards, and prevents sea pollution from exhausted control fluid. Besides eliminating these downsides of a traditional electro-hydraulic system, implementing an electric control system brings various inherent technological features, such as:
- Simplified system testing
- More precise, system-conserving valve operations
- Improved diagnostic capabilities and system health monitoring
- Enhanced data evaluation, e.g. predictive maintenance
The VBS includes eight actuated process valves and small-bore valves, of which six range from 10" to 12" with a retrievable torque intensifier on the valve's class 7 interface. The electric actuators (eActuators) provide valve operation through standard class 4 interfaces. The electric subsea control module (eSCM) is mounted on a retrievable pump module with a dedicated installation funnel. This leads to a flexible system where the eSCM, actuators, and jumpers are all individually retrievable. This paper focuses on the All-Electric Boosting Station design, with special attention to the electrical actuators introduced in this project and the challenges mastered during project execution. It also highlights new data acquisition capabilities and gives an outlook on features that provide additional value to the boosting station operator.
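As a rough illustration of the health-monitoring features listed above, the sketch below computes the mechanical work of each valve stroke from sampled actuator torque and speed and flags drift against a running baseline. The function names, the 15% threshold, and the per-stroke energy metric are all illustrative assumptions, not details from the paper.

```python
import numpy as np

def stroke_energy(torque_nm: np.ndarray, speed_rad_s: np.ndarray, dt_s: float) -> float:
    """Mechanical work [J] of one valve stroke, integrated from sampled
    actuator torque and shaft speed."""
    return float(np.sum(torque_nm * speed_rad_s) * dt_s)

def drift_flag(history_j: list, latest_j: float, tol: float = 0.15) -> bool:
    """Flag a stroke whose energy deviates from the running median baseline
    by more than `tol` (fractional); the threshold is a placeholder."""
    baseline = float(np.median(history_j))
    return abs(latest_j - baseline) / baseline > tol

# Example with synthetic 100 Hz samples of one stroke:
t = np.linspace(0.0, 2.0, 200)
e = stroke_energy(200.0 + 5.0 * np.sin(t), np.full_like(t, 1.5), dt_s=0.01)
alarm = drift_flag([590.0, 600.0, 610.0], e)
```

A rising stroke-energy trend on one valve would be the kind of signal the paper's predictive-maintenance features are meant to surface, without the hydraulic system's indirect pressure signatures.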
APA, Harvard, Vancouver, ISO, and other styles
4

Xia, Hua, Nelson Settles, and David DeWire. "Hydrophobic Dielectric Sealing Material Enabled Highly Reliable Electrical Connectors for Downhole Data and Power Transmission Application." In Offshore Technology Conference. OTC, 2021. http://dx.doi.org/10.4043/31288-ms.

Full text
Abstract:
Abstract A high-strength dielectric sealing material has been developed for sealing electrical connectors, feedthroughs, bulkheads, and interconnectors. X-ray diffraction analyses have identified that the microstructure of the sealing material can be an amorphous and α-phase mixed morphology, an α+β mixed phase, or a β-phase-dominated tetrahedral microstructure, depending primarily on the material processing temperature. The electrical insulation resistance of the β-phase-dominated sealing material is nearly two times higher than that of the α+β mixed-phase material. Both the β-phase-dominated and α+β mixed-phase sealing materials have shown water-repelling properties, while the amorphous glass phase is hydrophilic. If a 5,000 MΩ insulation resistance is taken as the baseline for a downhole electrical connector, the maximum operating temperature of the α+β mixed-phase sealing material is around 240°C, while that of the β-phase-dominated material can be up to 300°C. Furthermore, a thermo-mechanical model has been developed to quantify whether a designed electrical connector has sufficient reliability in hostile wellbore or downhole environments. The temperature- and pressure-dependent seal compression results suggest that the temperature-related safety factor should be chosen in the range of 2.0 to 5.0 and the pressure-related safety factor in the range of 1.5 to 2.0 to ensure 10-20 years of downhole operating reliability. Qualification tests on prototyped electrical connectors, under simulated water-based fluid conditions of 260°C/32,000 psi, have demonstrated that connectors sealed with this high-strength material can be integrated with logging-while-drilling (LWD) and/or measurement-while-drilling (MWD) tools to provide long-term reliable signal, data, and electrical power transmission, regardless of a water-based or moisture-rich wellbore and/or downhole environment.
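A minimal sketch of how the quoted safety-factor ranges might be applied in a design check. The stress inputs would come from a thermo-mechanical model like the one described in the abstract; every numeric value below is a hypothetical placeholder.

```python
def seal_safety_factors(allowable_stress_mpa: float,
                        thermal_stress_mpa: float,
                        pressure_stress_mpa: float) -> tuple:
    """Safety factors of a seal design against its temperature-driven and
    pressure-driven compression stresses; inputs would come from a
    thermo-mechanical model like the one described in the abstract."""
    return (allowable_stress_mpa / thermal_stress_mpa,
            allowable_stress_mpa / pressure_stress_mpa)

# Design rule quoted in the abstract: 2.0 <= SF_temp <= 5.0 and
# 1.5 <= SF_pressure <= 2.0 for 10-20 years of downhole service.
# All stress values below are hypothetical placeholders.
sf_t, sf_p = seal_safety_factors(400.0, 120.0, 210.0)
design_ok = (2.0 <= sf_t <= 5.0) and (1.5 <= sf_p <= 2.0)
```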
APA, Harvard, Vancouver, ISO, and other styles
5

Åman, Rafael, Heikki Handroos, Hannu Kärkkäinen, Jari Jussila, and Pasi Korkealaakso. "Novel ICT-Enabled Collaborative Design Processes and Tools for Developing Non-Road Mobile Machinery." In ASME/BATH 2015 Symposium on Fluid Power and Motion Control. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/fpmc2015-9571.

Full text
Abstract:
The improvement of energy efficiency is an important topic for non-road mobile machinery developers and manufacturers. These machines normally use fluid power transmission in drivelines and working actuators. New energy-efficient technologies, e.g. a hybrid power transmission with an energy recovery feature, have been introduced, yet most manufacturers still use conventional technologies in their product development process. Human operators also affect the overall efficiency of the machines, and taking human effects into account is difficult and expensive with conventional design processes and tools. The objective of this study is to provide international machine manufacturers with instrumental, yet novel, community- and simulation-based (ICT-enabled) tools and methods for the strategic and cost-effective development of their product practices and design processes. The development of models and methods will allow rapid real-time virtual prototyping of complex machines and machine fleets that operate within a number of worksites or geographical conditions. The introduction of this state-of-the-art (and beyond) advancement in real-time virtual technology, simulation, internet-based design technologies and software, and cyber-physical and big-data processing systems presents a holistic approach to improving the entire product life. Targeted user groups are manufacturers of non-road mobile machinery (i.e. excavators, wheel loaders, etc.). These machines and production systems share the following key features: 1) they are complex mechatronic systems with several interconnections between hydraulic drives, mechanics, electronics, and software, and 2) they include autonomous, semi-autonomous, and human-operated systems. The methods developed will give machine manufacturers access to technologies that lead to a more cost-effective, consumer-orientated life-cycle optimization process. This paper introduces a method of developing customized products in a fast, agile, and networked way that leads to significantly reduced life-cycle costs.
APA, Harvard, Vancouver, ISO, and other styles
6

Kalola, M. G., Mahesh Dasar, K. P. Shete, and R. S. Patil. "Effect of Novel Swirling Perforated Distributor on Fluid Dynamic Characteristics of Circulating Fluidized Bed Riser." In ASME 2016 Power Conference collocated with the ASME 2016 10th International Conference on Energy Sustainability and the ASME 2016 14th International Conference on Fuel Cell Science, Engineering and Technology. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/power2016-59165.

Full text
Abstract:
The present work is associated with Circulating Fluidized Bed (CFB) technology in the energy sector. Applications of CFB technology span a wide range of equipment, e.g. boilers, gasifiers, combustors, dryers, etc. In the present paper, CFD simulations using ANSYS Fluent 14.5 were performed to study the effect of a novel swirling perforated distributor on fluid dynamic characteristics such as the pressure drop along the riser and distributor and the suspension density variation along the riser of the CFB. The simulation results were also used to compare, qualitatively and quantitatively, the dead-zone formation in the four corners of the riser just above the distributor plate for the swirl and normal distributor plates. The riser, along with the distributor, was modeled in Pro/E 5.0 and meshed in ICEM CFD 14.5; the simulations and post-processing were performed in Fluent 14.5. 3D CFD simulations were performed on a CFB riser of cross section 0.15 m × 0.15 m and height 2.85 m. The RNG k-ε model, which includes an analytical treatment of effective viscosity, was used for turbulence modeling of the flow inside the riser. An Eulerian model with the Syamlal-O'Brien phase interaction scheme was used to simulate the two-phase flow (air + sand mixture). Modeling and simulations were first performed for the normal perforated distributor plate, and the results were compared with available experimental data. After this validation of the computational results, further CFD simulations were performed for the novel swirl distributor plate geometry. It is observed that the suspension density (particle concentration) was higher in the middle and upper regions of the riser in the case of the swirl distributor plate. However, the pressure drop across the distributor plate increased for the novel swirl distributor plate. The objective of a significant reduction in dead-zone formation just above the normal distributor plate was achieved with the novel swirl distributor, which in turn is expected to increase particle participation in combustion, which takes place in the oxygen-rich middle portion of the CFB riser, and subsequently to increase the heat transfer rate in the riser.
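The axial suspension density profiles discussed here are conventionally inferred from differential pressure measurements using the hydrostatic approximation dP/dz ≈ ρ_susp·g, in both experiments and CFD post-processing. A minimal sketch, with hypothetical tap readings:

```python
G = 9.81  # gravitational acceleration [m/s^2]

def suspension_density(dp_pa: float, dz_m: float) -> float:
    """Apparent suspension density [kg/m^3] between two riser pressure
    taps, from the hydrostatic approximation dP/dz = rho_susp * g
    (acceleration and wall-friction contributions neglected)."""
    return dp_pa / (G * dz_m)

# Hypothetical tap readings 0.5 m apart on a riser like the 2.85 m one
# simulated in the paper: a 350 Pa drop implies ~71 kg/m^3.
rho_susp = suspension_density(dp_pa=350.0, dz_m=0.5)
```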
APA, Harvard, Vancouver, ISO, and other styles
7

Muhammad, Moin, Saja Al Balushi, and Carrie Murtland. "Harvesting Geothermal Energy from Produced Reservoir Fluids Eliminates CO2 Emission from Production Facility Operations." In International Petroleum Technology Conference. IPTC, 2022. http://dx.doi.org/10.2523/iptc-22313-ea.

Full text
Abstract:
Abstract Objective: ICE Thermal Harvesting has developed a patented technology to convert neglected thermal energy in producing oil and gas wells into 100% emissions-free electrical power to fulfil in-field power needs and improve operators' emissions profiles. By leveraging advanced process design and automation, heat is harvested and converted to electricity, which is then safely delivered to local equipment, the grid, or energy storage fields. During production of oil and gas from high-temperature, high-pressure formations, reservoir fluids are sent through a surface choke, reducing the pressure prior to flowing to surface production equipment and pipelines. Flowing pressures before a choke can be as high as 10,000 psi and are most commonly reduced to below 1,400 psi. This pressure regulation is critical both to limit unmitigated flow from the well, optimizing ultimate recovery from the reservoir, and to protect surface assets from potentially damaging flowing pressures. However, as the flowing pressure is reduced, the temperature also drops significantly and the thermal energy is lost. Additionally, because of the depth of many of these producer wells, the fluid produced from the subterranean reservoirs contains large amounts of thermal energy. Currently, this thermal energy goes unutilized because no existing methodology or technology effectively captures it or converts it to electrical power. Based on EIA estimates, there are roughly 900,000 producing wells across US lands and waters. From conservative initial ICE estimates, at least 7,500 of these well sites have the potential to be utilized for this application. With electric power ratings of ICE packages varying from 125 kW to 210 kW, this equates to roughly 940 MW to 1,575 MW of emissions-free power production capacity for consumption within the United States. Contrary to previous projects exploring similar technologies that aimed to utilize oil and gas wells as geothermal reservoirs, the requirement of continuously pumping large volumes of fresh water downhole is eliminated by utilizing producing wells instead of reconditioning decommissioned wells. Because the wells are already producing, the ICE system relies on reservoir pressure or other production lift mechanisms to push the oil and gas stream back to the surface, rather than pumping large volumes of fluid downhole to recover the geothermal energy. The benefit of this is a reduction in the parasitic loads imposed by pumping fluid downhole, ultimately improving net power output by over 50%. ICE's innovations to date have been primarily centered around harvesting one or more heat sources, aggregating those heat sources in an optimal manner through a patented process loop, and modulating heat transfer through automated control methods. This controlled thermal product is then transferred to the Organic Rankine Cycle generator portion of the system for conversion to electricity. Building upon decades of experience in the electrification of oilfield services, ICE engineers designed the system to be highly mobile, modular, and scalable to comply with the demands of remote oilfield operations. Contrary to other heat-to-power systems, the ICE system does not require civil infrastructure work or the employment of EPC firms for installation.
ICE systems are planned for installation in processes spanning several industrial spaces, including cement manufacturing, power production, and industrial manufacturing; anywhere large industrial cooling is required, there exists an opportunity to implement ICE technology. Strong initial interest from oil and gas operators has focused the first deployments on the energy sector, with applications across the oil and gas value chain, from upstream through midstream to downstream processes. For this overview, two ICE system applications are described. In the first application, thermal energy will be harvested from the aggregated oil production of 11 conventional wells. As liquid production is aggregated in-field and routed toward initial processing, the production stream will flow through ICE Thermal Harvesting's system, where heat will be extracted from the stream. The second application will harvest thermal energy from natural gas wells. In this application, hot, high-pressure gas from two wells will flow through the ICE system in the vicinity of the wellhead, where flowing pressures are still high. Wellhead temperatures of these wells are greater than 230 degrees Fahrenheit. The ICE system is expected to have a cooling impact of over 40 degrees Fahrenheit on the gas stream during the power production process, which will greatly reduce the cooling duty required on location. Both projects will be executed in three phases:
- Phase 1: Assess the feasibility of power production from the subject assets by evaluating production data.
- Phase 2: Using the measured heat within the subject assets, finalize the engineering design of the heat exchange equipment best suited to harvest the maximum amount of thermal energy from the production streams.
- Phase 3: Continuously monitor critical parameters remotely, and perform optimization engineering to maximize power production from the system, achieving as close to the 125 kW nameplate output as possible.
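For a back-of-envelope sense of scale, the sketch below estimates electric output from the quoted ~40°F (~22 K) cooling of a gas stream. The mass flow, specific heat, and the 10% Organic Rankine Cycle conversion efficiency are assumptions typical of low-grade heat recovery, not figures from the paper.

```python
def orc_power_kw(mass_flow_kg_s: float, cp_kj_per_kg_k: float,
                 delta_t_k: float, eta_orc: float = 0.10) -> float:
    """Electric output [kW] from cooling a produced stream: heat duty
    Q = m_dot * cp * dT, times an assumed ORC conversion efficiency."""
    return eta_orc * mass_flow_kg_s * cp_kj_per_kg_k * delta_t_k

# A ~40 F (~22 K) cooling of a hypothetical 25 kg/s gas stream with
# cp ~ 2.2 kJ/(kg K) yields on the order of 120 kW electric -- the
# scale of the 125 kW package rating quoted in the abstract.
p_el_kw = orc_power_kw(25.0, 2.2, 22.2)
```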
APA, Harvard, Vancouver, ISO, and other styles
8

Shinde, Pravin A., Pratik V. Bansode, Satyam Saini, Rajesh Kasukurthy, Tushar Chauhan, Jimil M. Shah, and Dereje Agonafer. "Experimental Analysis for Optimization of Thermal Performance of a Server in Single Phase Immersion Cooling." In ASME 2019 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/ipack2019-6590.

Full text
Abstract:
Abstract Liquid immersion cooling of servers in synthetic dielectric fluids is an emerging technology which offers significant cooling energy savings and increased power densities for data centers. A noteworthy advantage of immersion cooling is a high heat dissipation capacity, roughly 1,200 times greater than that of air. Other advantages of dielectric fluid immersion cooling include high rack density, better server performance, an even temperature profile, and reduced noise. The enhanced thermal properties of oil lead to considerable savings in both upfront and operating costs over traditional methods. In this study, a server is completely submerged in a synthetic dielectric fluid. Experiments are conducted to observe the effects of varying the volumetric flow rate and oil inlet temperature on the thermal performance and power consumption of the server. Various parameters, such as the total server power consumption and the temperatures of all heat-generating components, including the Central Processing Unit (CPU), Dual In-line Memory Modules (DIMMs), the input/output hub (IOH) chip, the Platform Controller Hub (PCH), and the Network Interface Controller (NIC), are measured at steady state. Since this is an air-cooled server, the results obtained from the experiments will help in proposing better heat removal strategies such as heat sink optimization, better ducting, and improved server architecture. An assessment has also been made of the effect of thermal shadowing caused by the two CPUs on nearby components such as the DIMMs and PCH.
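As a small worked example of the coolant-side energy balance behind such experiments, the sketch below estimates the oil flow needed to absorb a given server load at a chosen coolant temperature rise. The 300 W load, 5 K rise, and fluid properties are assumptions typical of synthetic dielectric oils, not measurements from the paper.

```python
def required_flow_lpm(server_power_w: float, delta_t_k: float,
                      rho_kg_m3: float = 850.0,
                      cp_j_per_kg_k: float = 2100.0) -> float:
    """Oil flow [L/min] needed to absorb a server's heat load at a chosen
    coolant temperature rise; density and specific heat are typical
    synthetic-dielectric values, not the fluid tested in the paper."""
    m_dot_kg_s = server_power_w / (cp_j_per_kg_k * delta_t_k)
    return m_dot_kg_s / rho_kg_m3 * 60_000.0  # m^3/s -> L/min

# A 300 W server held to a 5 K oil rise needs roughly 2 L/min.
flow_lpm = required_flow_lpm(server_power_w=300.0, delta_t_k=5.0)
```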
APA, Harvard, Vancouver, ISO, and other styles
9

Shah, Jimil M., Ravya Dandamudi, Chinmay Bhatt, Pranavi Rachamreddy, Pratik Bansode, and Dereje Agonafer. "CFD Analysis of Thermal Shadowing and Optimization of Heatsinks in 3rd Generation Open Compute Server for Single-Phase Immersion Cooling." In ASME 2019 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/ipack2019-6600.

Full text
Abstract:
Abstract In today's networking world, the utilization of servers and data centers has been increasing significantly. The increasing demand for data processing and storage causes a corresponding increase in the power density of servers. Data center energy efficiency largely depends on the thermal management of servers. Currently, air cooling is the most widely used thermal management technology in data centers, but it has started to reach its limits with high-powered processors. To overcome these limitations, liquid immersion cooling methods using different dielectric fluids can be a viable option. Thermal shadowing is an effect in which the temperature of a cooling medium rises as it carries heat from one source, reducing its heat-carrying capacity downstream because the temperature difference between the maximum junction temperature of the successive heat sink and the incoming fluid shrinks. Thermal shadowing is a challenge for both air cooling and low-velocity oil flow cooling. In this study, the impact of thermal shadowing in a third-generation Open Compute server is compared across different dielectric fluids. The heat sink is a critical part of cooling effectiveness at the server level, and this work also identifies an efficient range of heat sinks through computational modelling of the third-generation Open Compute server. Heat sink optimization can enable effective cooling of high-power-density servers in single-phase immersion cooling applications. A parametric study is conducted, and significant savings in heat sink volume are reported.
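The thermal shadowing effect described above reduces to a simple energy balance: fluid that has already absorbed the upstream CPU's heat arrives warmer at the downstream heat sink. A minimal sketch, with hypothetical power, flow, and property values:

```python
def downstream_inlet_temp_c(t_in_c: float, upstream_power_w: float,
                            m_dot_kg_s: float,
                            cp_j_per_kg_k: float = 2100.0) -> float:
    """Inlet temperature seen by a downstream heat sink after the fluid
    has absorbed the upstream component's heat (steady-state balance)."""
    return t_in_c + upstream_power_w / (m_dot_kg_s * cp_j_per_kg_k)

# Hypothetical values: a 120 W upstream CPU warms 0.05 kg/s of oil by
# ~1.1 C; at a low 0.01 kg/s flow the penalty grows to ~5.7 C, eroding
# the junction-to-inlet temperature difference of the second heat sink.
t2_high_flow = downstream_inlet_temp_c(30.0, 120.0, 0.05)
t2_low_flow = downstream_inlet_temp_c(30.0, 120.0, 0.01)
```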
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, Jie, Chunyu Zhang, Zhihao Zheng, and Jiesheng Min. "A Digital Proof of Concept (POC) for Simulating the Coupled Phenomena Between Neutronics, Structural and Fluid Dynamics in a Reactor Core." In 2022 29th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/icone29-93008.

Full text
Abstract:
Abstract Numerical simulation is a key technology for improving nuclear reactor design and maintenance, given its low cost and safety compared to traditional experiments, especially when irradiation is involved. Various software packages exist today in the fields of structural mechanics, fluid dynamics, and even neutronics to model and understand these phenomena. Nevertheless, coupled simulation of the reactor core spans different mechanisms with intricate links between neutronics, materials, and the surrounding coolant. With increasing computing power, advanced coupling of the phenomena occurring in the reactor core is now being developed. In the present study, a multi-physics coupling framework is developed by coupling three codes: a neutronics transport code, a computational fluid dynamics code, and a finite element structural analysis code. The three solvers are hosted within the open-source platform SALOME. Besides this framework, the platform also contains additional modules to realize the full process of a multi-physics simulation. The coupling framework provides strong scalability; users are able to add user-defined modules through an open API. This paper provides a route forward for the development of the framework, along with a description of its various anticipated modules and functionalities. Ultimately, it is expected that the multi-physics coupling framework will cover the full process of a multi-physics simulation, including preprocessing, neutronics calculation, structural analysis, CFD (Computational Fluid Dynamics) simulation, multi-physics coupling simulation, post-processing, and data analysis.
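Frameworks of this kind typically iterate the coupled physics to a fixed point at each step. The toy sketch below shows a Picard (fixed-point) loop between a single-zone neutronics model with Doppler feedback and a coolant energy balance; the structural pass is omitted for brevity, all numbers are invented for illustration, and the real framework exchanges full 3D fields through SALOME.

```python
def solve_neutronics(fuel_temp_k: float) -> float:
    """Single-zone power [W] with linearized Doppler feedback (toy model)."""
    return 1000.0 * (1.0 - 2.0e-4 * (fuel_temp_k - 600.0))

def solve_cfd(power_w: float) -> float:
    """Single-zone coolant/fuel temperature [K] from an energy balance
    (toy model; the structural pass is omitted for brevity)."""
    return 550.0 + 0.06 * power_w

def picard_coupling(temp0_k: float = 600.0, tol: float = 1e-6,
                    max_iter: int = 100) -> float:
    """Fixed-point (Picard) iteration until the exchanged temperature
    stops changing between physics passes."""
    temp = temp0_k
    for _ in range(max_iter):
        new_temp = solve_cfd(solve_neutronics(temp))
        if abs(new_temp - temp) < tol:
            return new_temp
        temp = new_temp
    raise RuntimeError("coupling did not converge")

converged_temp_k = picard_coupling()  # ~609.9 K for this toy system
```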
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Fluid power technology Data processing"

1

Avis, William. Drivers, Barriers and Opportunities of E-waste Management in Africa. Institute of Development Studies (IDS), December 2021. http://dx.doi.org/10.19088/k4d.2022.016.

Full text
Abstract:
Population growth, increasing prosperity, and changing consumer habits globally are increasing demand for consumer electronics. Further to this, rapid changes in technology, falling prices, and consumer appetite for better products have exacerbated e-waste management challenges and seen millions of tons of electronic devices become obsolete. This rapid literature review collates evidence from academic, policy-focussed, and grey literature on e-waste management in Africa. The report provides an overview of what constitutes e-waste, the environmental and health impacts of e-waste, the barriers to effective e-waste management, the opportunities associated with effective e-waste management, and the limited literature available that estimates future volumes of e-waste. Africa generated a total of 2.9 million metric tonnes of e-waste, or 2.5 kg per capita, the lowest regional rate in the world. Africa's e-waste is the product of local and imported sources of Used Electrical and Electronic Equipment (UEEE). Challenges in e-waste management in Africa are exacerbated by a lack of awareness, weak environmental legislation, and limited financial resources. Proper disposal of e-waste requires training and investment in recycling and management technology, as improper processing can have severe environmental and health effects. In Africa, thirteen countries have been identified as having national e-waste legislation or policy. The main barriers to effective e-waste management include insufficient legislative frameworks and government agencies' lack of capacity to enforce regulations, infrastructure, operating standards and transparency, illegal imports, security, data gaps, trust, informality, and costs. Aspirations associated with the energy transition and net zero are laudable, but products associated with these goals can become major contributors to the e-waste challenge. The necessary wind turbines, solar panels, electric car batteries, and other "green" technologies require vast amounts of resources, and at the end of their lifetime they can pose environmental hazards. An example of e-waste associated with energy transitions can be gleaned from the solar power sector: different types of solar power cells need to undergo different treatments (mechanical, thermal, or chemical) to recover the valuable metals they contain. Similar issues apply to waste associated with other energy transition technologies. Although e-waste contains toxic and hazardous metals such as barium and mercury, it also contains non-ferrous metals such as copper and aluminium, and precious metals such as gold, which if recycled could have a value exceeding 55 billion euros. There thus exists an opportunity to convert existing e-waste challenges into an economic opportunity.
APA, Harvard, Vancouver, ISO, and other styles