Academic literature on the topic 'Multi-Version Execution'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multi-Version Execution.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Multi-Version Execution"

1. Qiang, Weizhong, Feng Chen, Laurence T. Yang, and Hai Jin. "MUC: Updating cloud applications dynamically via multi-version execution." Future Generation Computer Systems 74 (September 2017): 254–64. http://dx.doi.org/10.1016/j.future.2015.12.003.

2. Jaglan, Vivek, Swati, and Shalini Bhaskar Bajaj. "A Novel Multi-Granularity Locking Scheme Based on Concurrent Multi-Version Hierarchical Structure." Information Technology in Industry 9, no. 1 (March 15, 2021): 932–47. http://dx.doi.org/10.17762/itii.v9i1.221.

Abstract:
We present an efficient locking scheme for hierarchical data structures. The existing multi-granularity locking (MGL) mechanism works at two extremes: fine-grained locking, which maximizes concurrency, and coarse-grained locking, which minimizes locking cost. Between these extremes lie several Pareto-optimal options that trade off concurrency against locking overhead. In this work, we present a locking technique, Collaborative Granular Version Locking (CGVL), which selects an optimal locking combination to serve locking requests in a hierarchical structure. In CGVL, a series of versions is maintained at each granular level, which allows read and write operations to execute simultaneously on a data item. Our study reveals that, to achieve optimal performance, the lock manager explores various locking options by converting certain non-supporting locking modes into supporting ones, thereby improving the existing compatibility matrix of the multiple-granularity locking protocol. We validate this claim quantitatively in a Java Sun JDK environment, showing that CGVL performs better than state-of-the-art MGL methods. In particular, CGVL attains a 20% reduction in execution time for locking operations, measured over the following parameters: (i) the number of threads, (ii) the number of locked objects, and (iii) the duration of the critical section (CPU cycles), which significantly enhances concurrency in terms of the number of concurrent read accesses.
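
For readers outside databases, here is a hypothetical Python sketch (not the authors' CGVL implementation) of the two ingredients the abstract combines: the classical multiple-granularity lock compatibility matrix, and a per-granule version list that lets reads proceed against the latest committed version even while a writer holds an exclusive lock.

```python
# Hypothetical sketch (not the authors' CGVL code): classical
# multiple-granularity lock compatibility plus per-granule versions.

# COMPAT[held][requested] -> may the requested mode be granted?
COMPAT = {
    "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
    "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
    "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
    "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
    "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
}

class Granule:
    """One node of the hierarchy (database -> table -> page -> record)."""
    def __init__(self):
        self.held = []        # lock modes currently granted on this node
        self.versions = [0]   # committed versions, newest last

    def may_grant(self, mode):
        return all(COMPAT[h][mode] for h in self.held)

    def read_latest(self):
        # Versioning is what relaxes the matrix: a reader needs no S lock
        # because it simply sees the newest committed version.
        return self.versions[-1]

    def commit_write(self, value):
        self.versions.append(value)

g = Granule()
g.held.append("X")        # a writer holds an exclusive lock
print(g.may_grant("S"))   # False under plain MGL ...
print(g.read_latest())    # ... yet a versioned read still succeeds
```
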
3. Saramud, Mikhail V., Igor V. Kovalev, Vasiliy V. Losev, Mariam O. Petrosyan, and Dmitriy I. Kovalev. "Multi-version approach to improve the reliability of processing data of the earth remote sensing in the real-time." E3S Web of Conferences 75 (2019): 01005. http://dx.doi.org/10.1051/e3sconf/20197501005.

Abstract:
The article describes the use of a multi-version approach to improve the accuracy of image classification when solving image-analysis problems for Earth remote sensing. Implementing this approach makes it possible to reduce the classification error and, consequently, to increase the reliability of processing remote sensing data. A practical study was carried out in a multi-version real-time execution environment, which makes it possible to organize image processing on board an unmanned vehicle. The results confirm the effectiveness of the proposed approach.
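
As a rough illustration of the multi-version principle at work here (hypothetical code, not the authors' execution environment), running several classifier versions on the same image and taking a majority vote masks the error of any single version:

```python
# Minimal sketch: majority voting across independently built classifier
# versions, the core idea of a multi-version approach to classification.
from collections import Counter

def multi_version_classify(image, versions):
    """Run every classifier version and return the majority label."""
    votes = Counter(v(image) for v in versions)
    label, _ = votes.most_common(1)[0]
    return label

# Usage with three hypothetical versions, one of which misclassifies:
v1 = lambda img: "forest"
v2 = lambda img: "forest"
v3 = lambda img: "water"    # faulty version, outvoted by the other two
print(multi_version_classify(None, [v1, v2, v3]))  # -> "forest"
```
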
4. Ramos, Alfredo S., Pablo A. Miranda-Gonzalez, Samuel Nucamendi-Guillén, and Elias Olivares-Benitez. "A Formulation for the Stochastic Multi-Mode Resource-Constrained Project Scheduling Problem Solved with a Multi-Start Iterated Local Search Metaheuristic." Mathematics 11, no. 2 (January 9, 2023): 337. http://dx.doi.org/10.3390/math11020337.

Abstract:
This research introduces a stochastic version of the multi-mode resource-constrained project scheduling problem (MRCPSP) and its mathematical model. In addition, an efficient multi-start iterated local search (MS-ILS) algorithm, capable of solving the deterministic MRCPSP, is adapted to deal with the proposed stochastic version of the problem. In its deterministic version, the MRCPSP is a widely studied NP-hard optimization problem. The problem deals with a trade-off between the amount of resources that each project activity requires and its duration. In the proposed stochastic formulation, the execution times of the activities are uncertain. Benchmark instances of projects with 10, 20, 30, and 50 activities from well-known public libraries were adapted to create test instances. The adapted algorithm proved to be capable and efficient at solving the proposed stochastic problem.
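
A minimal sketch of the stochastic setting (illustrative Python; the full MRCPSP also involves execution modes and resource constraints, both omitted here): sample the uncertain activity durations and estimate the expected makespan of a precedence-feasible schedule by Monte Carlo.

```python
# Illustrative sketch of uncertain activity durations (precedence only;
# modes and resource constraints of the real MRCPSP are omitted).
import random

# precedence[a] = activities that must finish before a starts
precedence = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
# each activity's duration is sampled uniformly from an interval
duration_range = {"A": (2, 4), "B": (1, 3), "C": (3, 6), "D": (2, 2)}

def sample_makespan():
    dur = {a: random.uniform(*duration_range[a]) for a in precedence}
    finish = {}
    for a in ("A", "B", "C", "D"):   # topological order
        start = max((finish[p] for p in precedence[a]), default=0.0)
        finish[a] = start + dur[a]
    return max(finish.values())

runs = [sample_makespan() for _ in range(10_000)]
print(f"estimated expected makespan: {sum(runs) / len(runs):.2f}")
```
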
5. Švancara, Jiří, Marek Vlk, Roni Stern, Dor Atzmon, and Roman Barták. "Online Multi-Agent Pathfinding." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7732–39. http://dx.doi.org/10.1609/aaai.v33i01.33017732.

Abstract:
Multi-agent pathfinding (MAPF) is the problem of moving a group of agents to a set of target destinations while avoiding collisions. In this work, we study the online version of MAPF where new agents appear over time. Several variants of online MAPF are defined and analyzed theoretically, showing that it is not possible to create an optimal online MAPF solver. Nevertheless, we propose effective online MAPF algorithms that balance solution quality, runtime, and the number of plan changes an agent makes during execution.
6. Jena, Swagat Kumar, Satyabrata Das, and Satya Prakash Sahoo. "Design and Development of a Parallel Lexical Analyzer for C Language." International Journal of Knowledge-Based Organizations 8, no. 1 (January 2018): 68–82. http://dx.doi.org/10.4018/ijkbo.2018010105.

Abstract:
The future of computing is rapidly moving towards massively multi-core architectures because of their power and cost advantages. Multi-core processors are now used almost everywhere, and the number of cores per chip keeps increasing. To exploit the full potential offered by multi-core architectures, system software such as compilers should be designed for parallelized execution. In the past, significant work has gone into changing the design of traditional compilers to take advantage of future multi-core platforms. This paper focuses on introducing parallelism into the lexical analysis phase of the compilation process. The main objective of our proposal is to perform lexical analysis, i.e., finding the tokens in an input stream, in parallel. We use the parallel constructs available in OpenMP to parallelize lexical analysis for multi-core machines. The experimental results of our proposal show a significant performance improvement of the parallel lexical analysis phase over the sequential version in terms of execution time.
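
A rough Python stand-in for the paper's OpenMP/C approach (the tokenizer and chunking scheme are illustrative): cut the source at whitespace so no token straddles a boundary, tokenize the chunks in parallel, and concatenate the per-chunk token streams in order.

```python
# Sketch of chunk-parallel lexical analysis (a Python stand-in for the
# paper's OpenMP/C design; the token grammar here is deliberately tiny).
import re
from concurrent.futures import ProcessPoolExecutor

TOKEN = re.compile(r"\d+|\w+|[{}();=+*/-]")

def tokenize(chunk: str):
    return TOKEN.findall(chunk)

def split_at_whitespace(src: str, parts: int):
    """Cut near even offsets, but only on whitespace, so that no token
    is split across two chunks."""
    chunks, start, step = [], 0, max(1, len(src) // parts)
    for _ in range(parts - 1):
        cut = start + step
        while cut < len(src) and not src[cut].isspace():
            cut += 1
        chunks.append(src[start:cut])
        start = cut
    chunks.append(src[start:])
    return chunks

if __name__ == "__main__":
    source = "int main ( ) { int x = 42 ; x = x + 1 ; return x ; }"
    with ProcessPoolExecutor() as pool:
        token_lists = pool.map(tokenize, split_at_whitespace(source, 4))
    print([t for sub in token_lists for t in sub])  # tokens in source order
```
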
7. Serttaş, Sevil, and Veysel Harun Şahin. "PBench: A Parallel, Real-Time Benchmark Suite." Academic Perspective Procedia 1, no. 1 (November 9, 2018): 178–86. http://dx.doi.org/10.33793/acperpro.01.01.37.

Abstract:
Real-time systems are widely used from the automotive industry to the aerospace industry. The scientists, researchers, and engineers who develop real-time platforms, worst-case execution time analysis methods, and tools need to compare their solutions to alternatives. For this purpose, they use benchmark applications. Today many of our computing systems are multicore and/or multiprocessor systems. Therefore, to compare the effectiveness of real-time platforms and worst-case execution time analysis methods and tools, the research community needs multi-threaded benchmark applications that scale on multicore and/or multiprocessor systems. In this paper, we present the first version of PBench, a parallel, real-time benchmark suite. PBench includes different types of multi-threaded applications implementing various algorithms, from searching to sorting and from matrix multiplication to probability distribution calculation. In addition, PBench provides single-threaded versions of all programs to allow side-by-side comparisons.
8. Okumura, Keisuke, and Xavier Défago. "Solving Simultaneous Target Assignment and Path Planning Efficiently with Time-Independent Execution." Proceedings of the International Conference on Automated Planning and Scheduling 32 (June 13, 2022): 270–78. http://dx.doi.org/10.1609/icaps.v32i1.19810.

Abstract:
Real-time planning for a combined problem of target assignment and path planning for multiple agents, also known as the unlabeled version of Multi-Agent Path Finding (MAPF), is crucial for high-level coordination in multi-agent systems, e.g., pattern formation by robot swarms. This paper studies two aspects of unlabeled-MAPF: (1) the offline scenario: solving large instances by centralized approaches with small computation time, and (2) the online scenario: executing unlabeled-MAPF despite the timing uncertainties of real robots. For this purpose, we propose TSWAP, a novel sub-optimal complete algorithm, which takes an arbitrary initial target assignment and then repeats one-timestep path planning with target swapping. TSWAP can adapt to both offline and online scenarios. We empirically demonstrate that Offline TSWAP is highly scalable, providing near-optimal solutions while reducing runtime by orders of magnitude compared to existing approaches. In addition, we present the benefits of Online TSWAP, such as delay tolerance, through real-robot demos.
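
The following toy is a drastic 1-D simplification of the target-swapping idea (nothing like the paper's full algorithm or its completeness argument): because targets are interchangeable in unlabeled MAPF, two agents that would collide or cross simply trade targets and wait a step.

```python
# Toy 1-D illustration of target swapping (not the paper's TSWAP).
def step(positions, targets):
    moves = [p + (1 if t > p else -1 if t < p else 0)
             for p, t in zip(positions, targets)]
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            same_cell = moves[i] == moves[j]
            crossing = moves[i] == positions[j] and moves[j] == positions[i]
            if same_cell or crossing:
                # Targets are interchangeable: trade them instead of
                # colliding, and wait this timestep.
                targets[i], targets[j] = targets[j], targets[i]
                moves[i], moves[j] = positions[i], positions[j]
    return moves

positions, targets = [0, 5], [5, 0]   # two agents head-on in a corridor
while positions != targets:
    positions = step(positions, targets)
    print(positions, targets)        # targets get swapped mid-corridor
```
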
9. Vianna, Dalessandro Soares, José Elias Claudio Arroyo, Pedro Sampaio Vieira, and Thiago Ribeiro de Azeredo. "Parallel strategies for a multi-criteria GRASP algorithm." Production 17, no. 1 (April 2007): 84–93. http://dx.doi.org/10.1590/s0103-65132007000100006.

Abstract:
This paper proposes different strategies for parallelizing a multi-criteria GRASP (Greedy Randomized Adaptive Search Procedure) algorithm. The parallel GRASP algorithm is applied to the multi-criteria minimum spanning tree problem, which is NP-hard. In this problem, a vector of costs is defined for each edge of the graph, and the goal is to find all the efficient or Pareto-optimal spanning trees (Pareto-optimal solutions). Each process finds a subset of efficient solutions, and these subsets are joined using different strategies to obtain the final set of efficient solutions. The multi-criteria GRASP algorithm with the different parallel strategies is tested on complete graphs with n = 20, 30 and 50 nodes and r = 2 and 3 criteria. The computational results show that the proposed parallel algorithms reduce the execution time and improve on the results obtained by the sequential version.
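
A small sketch of the merge step that all the parallel strategies share (illustrative code, not the paper's implementation): each worker contributes candidate cost vectors, and the final set keeps only the non-dominated ones.

```python
# Illustrative Pareto merge of per-worker solution subsets.
def dominates(a, b):
    """a dominates b if it is no worse in every criterion, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_merge(*subsets):
    candidates = [s for subset in subsets for s in subset]
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# Hypothetical 2-criteria cost vectors found by two GRASP workers:
worker1 = [(3, 9), (5, 5), (8, 2)]
worker2 = [(4, 4), (9, 1), (6, 6)]
print(pareto_merge(worker1, worker2))  # -> [(3, 9), (8, 2), (4, 4), (9, 1)]
```
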
10. Cavus, Mustafa, Mohammed Shatnawi, Resit Sendag, and Augustus K. Uht. "Fast Key-Value Lookups with Node Tracker." ACM Transactions on Architecture and Code Optimization 18, no. 3 (June 2021): 1–26. http://dx.doi.org/10.1145/3452099.

Abstract:
Lookup operations for in-memory databases are heavily memory bound, because they often rely on pointer-chasing linked data structure traversals. They also have many branches that are hard-to-predict due to random key lookups. In this study, we show that although cache misses are the primary bottleneck for these applications, without a method for eliminating the branch mispredictions only a small fraction of the performance benefit is achieved through prefetching alone. We propose the Node Tracker (NT), a novel programmable prefetcher/pre-execution unit that is highly effective in exploiting inter key-lookup parallelism to improve single-thread performance. We extend NT with branch outcome streaming (BOS) to reduce branch mispredictions and show that this achieves an extra 3× speedup. Finally, we evaluate the NT as a pre-execution unit and demonstrate that we can further improve the performance in both single- and multi-threaded execution modes. Our results show that, on average, NT improves single-thread performance by 4.1× when used as a prefetcher; 11.9× as a prefetcher with BOS; 14.9× as a pre-execution unit and 18.8× as a pre-execution unit with BOS. Finally, with 24 cores of the latter version, we achieve a speedup of 203× and 11× over the single-core and 24-core baselines, respectively.

Dissertations / Theses on the topic "Multi-Version Execution"

1. Hosek, Petr. "Multi-version execution for increasing the reliability and availability of updated software." Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/27046.

Abstract:
Software updates are an integral part of the software development and maintenance process, but unfortunately they present a high risk, as new releases often introduce new bugs and security vulnerabilities; as a consequence, many users refuse to upgrade their software, relying instead on outdated versions, which often leave them exposed to known software bugs and security vulnerabilities. In this thesis we propose a novel multi-version execution approach, a variant of N-version execution, for improving the software update process. Whenever a new program update becomes available, instead of upgrading the software to the newest version, we run the new version in parallel with the old one, and carefully synchronise their execution to create a more reliable multi-version application. We propose two different schemes for implementing the multi-version execution technique—via failure recovery and via transparent failover—and we describe two possible designs for implementing these schemes: Mx, focused on recovering from crashes caused by the faulty software updates; and Varan, focused on running a large number of versions in parallel with a minimal performance overhead. Mx uses static binary analysis, system call interposition, lightweight checkpointing and runtime state manipulation to implement a novel fault recovery mechanism, which enables the recovery of the crashing version using the code of the other, non-crashing version. We have shown how Mx can be applied successfully to recover from real crashes in several real applications. Varan combines selective binary rewriting with high-performance event streaming to significantly reduce performance overhead, without sacrificing the size of the trusted computing base, nor flexibility or ease of debugging. Our experimental evaluation has demonstrated that Varan can run C10k network servers with low performance overhead, and can be used in various scenarios such as transparent failover and live sanitisation.
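
The failure-recovery flavour of multi-version execution can be caricatured at process level in a few lines (a deliberately simplified sketch: Mx and Varan actually synchronise versions at the system-call level using binary analysis and rewriting, none of which appears here): run both versions on the same input and answer from the old version when the new one crashes.

```python
# Simplified process-level sketch of failure-recovery multi-version
# execution; assumes the old version still handles the input correctly.
import multiprocessing as mp

def run_version(fn, request, queue):
    try:
        queue.put(("ok", fn(request)))
    except Exception as exc:
        queue.put(("crash", repr(exc)))

def mvx_handle(request, new_fn, old_fn):
    """Run both versions on the same input; prefer the new version's
    answer, but survive its crash by answering from the old version."""
    results = {}
    for name, fn in (("new", new_fn), ("old", old_fn)):
        q = mp.Queue()
        p = mp.Process(target=run_version, args=(fn, request, q))
        p.start()
        results[name] = q.get()   # one small message; read before join
        p.join()
    status, value = results["new"]
    return value if status == "ok" else results["old"][1]

def old_version(s):               # stable release
    return s.upper()

def new_version(s):               # faulty update: crashes on empty input
    if not s:
        raise ValueError("regression introduced by the update")
    return s.upper()

if __name__ == "__main__":
    print(mvx_handle("hello", new_version, old_version))  # HELLO (new)
    print(mvx_handle("", new_version, old_version))       # "" (recovered)
```
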
2. Mallangi, Siva Sai Reddy. "Low-Power Policies Based on DVFS for the MUSEIC v2 System-on-Chip." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229443.

Abstract:
Multi-functional health-monitoring wearable devices are quite prominent these days. Usually these devices are battery-operated and consequently limited by their battery life (from a few hours to a few weeks depending on the application). Of late, it was realized that these devices, which currently operate at a fixed voltage and frequency, are capable of operating at multiple voltages and frequencies. By switching these voltages and frequencies to lower values based on power requirements, these devices can achieve tremendous energy savings. Dynamic Voltage and Frequency Scaling (DVFS) techniques have proven handy in this situation for an efficient trade-off between energy and timely behavior. Within imec, wearable devices make use of the indigenously developed MUSEIC v2 (Multi Sensor Integrated circuit version 2.0). This system is optimized for efficient and accurate collection, processing, and transfer of data from multiple (health) sensors. MUSEIC v2 has limited means of controlling the voltage and frequency dynamically. In this thesis we explore how traditional DVFS techniques can be applied to the MUSEIC v2. Experiments were conducted to find the optimum power modes for efficient operation and to scale the supply voltage and frequency up and down. Considering the overhead incurred when switching voltage and frequency, a transition analysis was also done. Real-time and non-real-time benchmarks were implemented based on these techniques, and their performance results were obtained and analyzed. In this process, several state-of-the-art scheduling algorithms and scaling techniques were reviewed to identify a suitable technique. Using our proposed scaling technique implementation, we achieved an 86.95% average power reduction compared to the conventional approach of operating the MUSEIC v2 chip's processor at a fixed voltage and frequency. Techniques that include light sleep and deep sleep modes were also studied and implemented, which tested the system's capability to accommodate Dynamic Power Management (DPM) techniques that can achieve greater benefits. A novel approach for implementing the deep sleep mechanism was also proposed; it can obtain up to 71.54% power savings compared to the traditional way of executing deep sleep mode.
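
The core DVFS decision can be sketched as follows (the operating-point table is hypothetical, not the MUSEIC v2's actual one): among the available frequency/voltage pairs, pick the lowest-power one whose execution time still meets the task deadline.

```python
# Toy DVFS policy with a hypothetical operating-point table.
OPERATING_POINTS = [   # (frequency_MHz, voltage_V), ascending frequency
    (20, 0.7), (50, 0.9), (100, 1.1), (200, 1.2),
]

def dynamic_power(freq_mhz, volt):
    # Dynamic CMOS power scales with f * V^2 (capacitance folded in).
    return freq_mhz * volt ** 2

def pick_mode(cycles, deadline_ms):
    """Lowest-power point that finishes `cycles` within the deadline."""
    for freq, volt in OPERATING_POINTS:
        exec_ms = cycles / (freq * 1e3)   # 1 MHz = 1000 cycles per ms
        if exec_ms <= deadline_ms:
            return freq, volt, dynamic_power(freq, volt)
    raise RuntimeError("no operating point meets the deadline")

# 400k cycles in 10 ms: 20 MHz is too slow, so the 50 MHz point wins.
print(pick_mode(cycles=400_000, deadline_ms=10))
```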

Book chapters on the topic "Multi-Version Execution"

1. He, Xiang, Zhiying Tu, Lei Liu, Xiaofei Xu, and Zhongjie Wang. "Optimal Evolution Planning and Execution for Multi-version Coexisting Microservice Systems." In Service-Oriented Computing, 3–18. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-65310-1_1.

2. Smits, Djura, Bart Van Beusekom, Frank Martin, Lourens Veen, Gijs Geleijnse, and Arturo Moncada-Torres. "An Improved Infrastructure for Privacy-Preserving Analysis of Patient Data." In Studies in Health Technology and Informatics. IOS Press, 2022. http://dx.doi.org/10.3233/shti220682.

Abstract:
Incorporating healthcare data from different sources is crucial for a better understanding of patient (sub)populations. However, data centralization raises concerns about data privacy and governance. In this work, we present an improved infrastructure that allows privacy-preserving analysis of patient data: vantage6 v3. For this new version, we describe its architecture and upgraded functionality, which allows algorithms running at each party to communicate with one another through a virtual private network (while still being isolated from the public internet to reduce the risk of data leakage). This allows the execution of different types of algorithms (e.g., multi-party computation) that were practically infeasible before, as showcased by the included examples. The (continuous) development of this type of infrastructure is fundamental to meet the current and future demands of healthcare research with a strong emphasis on preserving the privacy of sensitive patient data.
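
To make the multi-party flavour concrete, here is an illustrative additive secret-sharing example (not vantage6's actual API): the parties learn an aggregate statistic while no single raw value is ever revealed.

```python
# Illustrative additive secret sharing, the kind of multi-party
# computation such infrastructures enable (not vantage6's API).
import random

P = 2**61 - 1   # arithmetic modulo a large prime

def share(secret, n_parties):
    """Split a value into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

# Three hospitals each share their local patient count:
counts = [120, 75, 230]
all_shares = [share(c, 3) for c in counts]

# Party i sums the i-th share of every input; only these sums are pooled.
partials = [sum(s[i] for s in all_shares) % P for i in range(3)]
print(sum(partials) % P)   # -> 425, with no raw count ever revealed
```
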
3. Pears, Russel. "Accelerating Multi Dimensional Queries in Data Warehouses." In Advances in Database Research, 178–203. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-172-8.ch011.

Abstract:
Data warehouses are widely used for supporting decision making. On-Line Analytical Processing (OLAP) is the main vehicle for querying data warehouses, and OLAP operations commonly involve the computation of multidimensional aggregates. The major bottleneck in computing these aggregates is the large volume of data that needs to be processed, which in turn leads to prohibitively expensive query execution times. On the other hand, data analysts are primarily concerned with discerning trends in the data, so a system that provides approximate answers in a timely fashion would suit their requirements better. In this chapter we present the Prime Factor scheme, a novel method for compressing data in a warehouse. Our data compression method is based on aggregating data on each dimension of the data warehouse. Extensive experimentation on both real-world and synthetic data has shown that it outperforms the Haar Wavelet scheme with respect to both decoding time and error rate, while maintaining comparable compression ratios (Pears and Houliston, 2007). One encouraging feature is the stability of the error rate when compared to the Haar Wavelet. Although Wavelets have been shown to be effective at compressing data, the approximate answers they provide vary widely, even for identical types of queries on nearly identical values in distinct parts of the data. This problem has been attributed to the thresholding technique used to reduce the size of the encoded data, which is an integral part of the Wavelet compression scheme. In contrast, the Prime Factor scheme does not rely on thresholding but keeps a smaller version of every data element from the original data, and it is thus able to achieve a much higher degree of error stability, which is important from a data analyst's point of view.
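
For context, a sketch of the Haar-wavelet-plus-thresholding baseline the chapter argues against (illustrative code; the Prime Factor scheme itself is defined in the chapter): zeroing small detail coefficients is what makes the reconstruction error fluctuate between otherwise similar queries.

```python
# 1-D Haar transform with thresholding, the baseline compression scheme.
def haar_forward(data):                    # length must be a power of 2
    coeffs, avgs = [], list(data)
    while len(avgs) > 1:
        pairs = list(zip(avgs[0::2], avgs[1::2]))
        coeffs.append([(a - b) / 2 for a, b in pairs])   # details
        avgs = [(a + b) / 2 for a, b in pairs]           # running averages
    return avgs[0], coeffs

def haar_inverse(avg, coeffs):
    values = [avg]
    for details in reversed(coeffs):
        values = [v + s * d for v, d in zip(values, details) for s in (1, -1)]
    return values

avg, coeffs = haar_forward([8, 6, 7, 3])
# Thresholding drops small detail coefficients to shrink the encoding:
coeffs = [[d if abs(d) > 1.0 else 0 for d in lvl] for lvl in coeffs]
print(haar_inverse(avg, coeffs))   # -> [6.0, 6.0, 8.0, 4.0], vs [8, 6, 7, 3]
```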

Conference papers on the topic "Multi-Version Execution"

1. Hosek, Petr, and Cristian Cadar. "Safe software updates via multi-version execution." In 2013 35th International Conference on Software Engineering (ICSE). IEEE, 2013. http://dx.doi.org/10.1109/icse.2013.6606607.

2. Malhotra. "Optimizing the execution of independent multi-version programs." In 22nd Digital Avionics Systems Conference Proceedings (Cat. No. 03CH37449). IEEE, 2003. http://dx.doi.org/10.1109/dasc.2003.1245930.

3. Chen, Feng, Weizhong Qiang, Hai Jin, Deqing Zou, and Duoqiang Wang. "Multi-version Execution for the Dynamic Updating of Cloud Applications." In 2015 IEEE 39th Annual Computer Software and Applications Conference (COMPSAC). IEEE, 2015. http://dx.doi.org/10.1109/compsac.2015.130.

4. Lam, Kam-Yiu, Guo Hui Li, and Tei-Wei Kuo. "A multi-version data model for executing real-time transactions in a mobile environment." In Proceedings of the 2nd ACM International Workshop. New York: ACM Press, 2001. http://dx.doi.org/10.1145/376868.376911.

5. Zulkipli, Siti Najmi Farhan, Dzulfadly Johare, David Nanlung Dangfa, M. Farid M Amin, Shahril Yang, M. Zahin A Razak, and Intan Salwa Omar. "Redefining Well Abandonment Strategy: Tipping the Scale Towards Greater Cost and Operational Efficiency Through a Novel Multi-Layer Steel Barriers Cement Bond Logging." In SPE Asia Pacific Oil & Gas Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/210699-ms.

Abstract:
Abandoning a producer or injector well after production decline and pressure depletion, once all intervention efforts to revive the well's productivity have been exhausted, has been gaining momentum in recent years among most major operators. A huge number of candidates for abandonment fall under the legacy-well category, with incomplete drilling history and cement integrity records, which warrants a new logging run for cement bond evaluation at the expense of additional cost and rig time. Conventional cement bond logging demands additional cost because tubulars must be cut and pulled to obtain a conclusive log evaluation of cement integrity behind multiple casing layers. This paper highlights a novel, first deployment by the company of a new cement evaluation logging technology in a multi-layer casing environment to obtain a cement quality assessment behind each casing within a single logging run. Rigorous technology scouting was conducted to fill the industry gap for a reliable and cost-effective solution for well abandonment with unknown cement barrier status behind multiple casing layers. A total of four emerging technologies were evaluated, including a slim logging tool version, against various technical criteria covering job execution, operating principles, log data quality assurance, technical support, and ways to improve, leading to the pilot deployment in two wells during the well abandonment campaign. The new cement bond logging technology was run inside a 7-inch tubing with the objective of evaluating cement quality behind the 9 5/8-inch and 13 3/8-inch casings. After the logging run, the tubing was cut and pulled to expose a single casing layer, where conventional ultrasonic cement bond logging was run for comparison with the new tool's outcomes. Results indicate that the new logging tool's deliverables conform to the conventional logging outcomes, highlighting the intervals of good, moderate, and poor cement bond condition with the likelihood of channeling and flow behind casing. This approach achieves a lower operational cost by eliminating the cut-and-pull technique for existing tubulars, which is a mandatory requirement for any conventional cement bond logging. The first job trial successfully recorded USD 0.95 million in cost savings, eliminated 3 days of rig time, and optimized 6% of the total operational cost. Best practices and a quick operational guideline imperative for optimizing well abandonment campaigns have been developed to set the tone for cost and operational excellence. In conclusion, the newly introduced cement logging technology, capable of evaluating cement quality behind multiple steel layers, has shown encouraging results in support of cost-effective well integrity assurance. Technology and measurement will keep evolving with time to resolve current business challenges as well as to serve more advanced technical requirements in the future.
6. Ijomanta, Henry, Lukman Lawal, Onyekachi Ike, Raymond Olugbade, Fanen Gbuku, and Charles Akenobo. "Digital Oil Field; The NPDC Experience." In SPE Nigeria Annual International Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/207169-ms.

Abstract:
This paper presents an overview of the implementation of a Digital Oilfield (DOF) system for the real-time management of the Oredo field in OML 111. The Oredo field is predominantly a retrograde condensate field with a few relatively small oil reservoirs. The field operating philosophy involves the dual objective of maximizing condensate production and meeting the daily contractual gas quantities, which requires wells to be controlled and routed such that both objectives are met. An Integrated Asset Model (IAM) (or an Integrated Production System Model) was built with the objective of providing a mathematical basis for meeting the field's objectives. The IAM, combined with a model management and version control tool, a workflow orchestration and automation engine, a robust data-management module, an advanced visualization and collaboration environment, and an analytics library and engine, created the Oredo Digital Oil Field (DOF). The Digital Oilfield is a real-time digital representation of a field on a computer which replicates the behavior of the field. This virtual field gives the engineer all the information required to make quick, sound, and rational field management decisions with models, workflows, and intelligently filtered data, within a multi-disciplinary organization of diverse capabilities and engineering skill sets. The creation of the DOF involved four major steps. DATA GATHERING, considered the most critical step in such engineering projects, as it helps to set the limits of what the model can achieve and to manage expectations. ENGINEERING MODEL REVIEW, UPDATE AND BENCHMARKING, which mainly involved reviewing and updating the engineering models, deploying a real-time data historian, etc. SYSTEM PRECONFIGURATION AND DEPLOYMENT, which developed the DOF system architecture and the engineering workflow setup. POST-DEPLOYMENT REVIEW AND UPDATE, ongoing to date, which involves after-action reviews, updates, and resolution of DOF challenges, capability development by the operator, and optimizing the system for improved performance. The DOF system in the Oredo field has made it possible to integrate, automate, and streamline the execution of field management tasks and has significantly reduced decision-making turnaround time. Operational and field management decisions can now be made within minutes rather than weeks or months. The gains and benefits cut across the entire production value chain, from improved operational safety to operational efficiency and cost savings, real-time production surveillance, optimized production, early problem detection, organizational/cross-discipline collaboration, data centralization, and efficiency. The DOF system did not come without its peculiar challenges, observed at the planning, execution, and post-evaluation stages, which include selection of an appropriate data gathering and acquisition system; parts interchangeability and device integration with existing field devices; high data latency due to bandwidth, signal strength, etc.; damage to sensors and transmitters on wellheads during operations such as slickline and WHM activities; and short battery life and the resulting maintenance and replacement frequency. The challenges impacted the project schedule and cost but yielded great lessons learned and improved the DOF learning curve for the company.
The Oredo Digital Oil Field represents the future of the oil and gas industry, in tandem with the Industry 4.0 attributes of using digital technology to drive efficiency, reduce operating expenses, and apply surveillance best practices, which is required for the survival of the oil and gas industry. The advent of 5G technology, with its attendant influence on data transmission, latency, and bandwidth, has the potential to drive down the cost of automated data transmission and improve the performance of data gathering, further increasing the efficiency of the DOF system. Improvements in digital integration technologies, computing power, cloud computing, and sensing technologies will further strengthen the future of the DOF. There is a need for synergy between the engineering, IT, and instrumentation teams to fully manage the system and avoid failures that may arise from interface management issues. Battery life status should always be monitored to ensure continuous streaming of real field data. A new set of competencies, which revolves around a marriage of traditional petro-technical skills with data analytics skills, is required to further maximize benefit from the DOF system. NPDC needs to groom and encourage staff to venture into these data analytics skill pools to develop the knowledge and intelligence required to maximize the benefit of the Oredo Digital Oil Field and to transfer this knowledge to other NPDC assets.
7. Weir, David A., Stephen Murray, Pankaj Bhawnani, and Douglas Rosenberg. "Experiences in Establishing Trustworthy Digital Repositories Within a Large Multi-National Pipeline Company." In 2012 9th International Pipeline Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/ipc2012-90177.

Abstract:
Traditionally, business areas within an organization individually manage the data essential for their operation. This data may be incorporated into specialized software applications, MS Excel or MS Access files, e-mail filing, and hardcopy documents. These applications and data stores support the local business area's decision-making and add to its knowledge. There have been problems with this approach. Data, knowledge, and decisions are captured only locally within the business area, and in many cases this information is not easily identifiable or available for enterprise-wide sharing. Furthermore, individuals within the business areas often keep "shadow files" of data and information, and the accuracy, completeness, and timeliness of the data contained within these files is often questionable. Information created and managed at a local business level can be lost when a staff member leaves his or her role, which is especially significant given ongoing changes in today's workforce. Data must be properly managed and maintained to retain its value within the organization. The development and execution of a "single version of the truth" or master data management requires a partnership between the business areas, records management, legal, and information technology groups of an organization. Master data management is expected to yield significant gains in staff effectiveness, efficiency, and productivity. In 2011, Enbridge Pipelines applied the principles of master data management and trusted digital data repositories to a widely used, geographically dispersed small database (fewer than 10,000 records) that had noted data shortcomings such as incomplete or incorrect data, multiple shadow files, and inconsistent usage throughout the organization of the application that stewards the data. This paper provides an overview of best practices in developing an authoritative single source of data and Enbridge's experience in applying these practices to a real-world example. Challenges of the approach used by Enbridge and lessons learned are examined and discussed.