A ready-made bibliography on the topic "Operator Execution Algorithms"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

See lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Operator Execution Algorithms".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, when the relevant parameters are provided in the work's metadata.

Journal articles on the topic "Operator Execution Algorithms"

1

Marion-Poty, Virginie, and Serge Miguet. "Data Allocation Strategies for Parallel Image Processing Algorithms". International Journal of Pattern Recognition and Artificial Intelligence 9, no. 4 (August 1995): 615–34. http://dx.doi.org/10.1142/s0218001495000237.

Abstract:
This paper discusses several data allocation strategies used for the parallel implementation of basic imaging operators. It shows that depending on the operator (sequential or parallel, with regular or irregular execution time), the image data must be partitioned in very different manners: square sub-domains are best adapted for minimizing the communication volume, but rectangles can perform better when we take into account the time for constructing messages. Block allocations are well adapted for inherently parallel operators since they minimize interprocessor interactions, but in the case of recursive operators, they lead to nearly sequential executions. In this framework, we show the usefulness of block-cyclic allocations. Finally, we illustrate the fact that allocating the same amount of image data to each processor can lead to severe load imbalance in the case of some operators with data-dependent execution times.
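
To make the allocation trade-off concrete, here is a minimal Python sketch (an illustration of the idea, not the paper's code) contrasting block and block-cyclic partitioning of image rows across processors:

    def block_allocation(n_rows, n_procs):
        # Contiguous blocks: good for inherently parallel operators, but a
        # recursive operator sweeping rows top-to-bottom keeps only one
        # processor busy at a time.
        size = -(-n_rows // n_procs)  # ceiling division
        return [min(r // size, n_procs - 1) for r in range(n_rows)]

    def block_cyclic_allocation(n_rows, n_procs, block=4):
        # Blocks of `block` rows dealt out cyclically: every processor owns
        # work in every region of the image, so recursive sweeps stay parallel.
        return [(r // block) % n_procs for r in range(n_rows)]

    print(block_allocation(16, 4))            # [0,0,0,0, 1,1,1,1, 2,2,2,2, 3,3,3,3]
    print(block_cyclic_allocation(16, 4, 2))  # [0,0, 1,1, 2,2, 3,3, 0,0, 1,1, 2,2, 3,3]
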
2

Guo, Wenyan, Mingfei Wu, Fang Dai, and Yufan Qiang. "Improved Environmental Stimulus and Biological Competition Tactics Interactive Artificial Ecological Optimization Algorithm for Clustering". Biomimetics 8, no. 2 (June 7, 2023): 242. http://dx.doi.org/10.3390/biomimetics8020242.

Abstract:
An interactive artificial ecological optimization algorithm (SIAEO) based on environmental stimulus and a competition mechanism was devised for complex computational problems, where the sequential execution of the consumption and decomposition stages in the artificial ecological optimization algorithm often causes it to become bogged down in a local optimum. Firstly, the environmental stimulus, defined by population diversity, makes the population interactively execute the consumption operator and decomposition operator to abate the inhomogeneity of the algorithm. Secondly, the three different types of predation modes in the consumption stage were regarded as three different tasks, and the task execution mode was determined by the maximum cumulative success rate of each individual task execution. Furthermore, the biological competition operator is recommended to modify the regeneration strategy so that the SIAEO algorithm can give consideration to exploitation within the exploration stage, break the equal-probability execution mode of the AEO, and promote competition among operators. Finally, the stochastic mean suppression alternation exploitation problem is introduced in the later exploitation process of the algorithm, which greatly strengthens the ability of SIAEO to escape local optima. A comparison between SIAEO and other improved algorithms is performed on the CEC2017 and CEC2019 test sets.
3

Nobre, Tiago, Silvia Regina Vergilio, and Aurora Pozo. "Reducing Interface Mutation Costs with Multiobjective Optimization Algorithms". International Journal of Natural Computing Research 3, no. 3 (July 2012): 21–40. http://dx.doi.org/10.4018/jncr.2012070102.

Abstract:
To reduce mutation testing costs, different strategies have been proposed to find a set of essential operators that generates a reduced number of mutants without decreasing the mutation score. However, operator selection is influenced by other factors, such as the number of test data, execution time, number of revealed faults, etc. In fact, this is a multiobjective problem, for which several distinct good solutions exist. To properly deal with this problem, a selection strategy based on multiobjective algorithms was proposed and investigated for unit testing. This work explores the use of such a strategy in the integration testing phase. Three multiobjective algorithms are used and evaluated with real programs: one algorithm based on tabu search (MTabu), one based on a genetic algorithm (NSGA-II), and a third based on ant colony optimization (PACO). The results are compared with traditional strategies and contrasted with the essential operators obtained at the unit testing level.
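
The primitive all three algorithms share is Pareto dominance over objective vectors. A minimal sketch (the objective triple below is illustrative, not taken from the paper):

    def dominates(a, b):
        # a dominates b if it is no worse in every objective and strictly
        # better in at least one (all objectives to be minimized).
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(solutions):
        # Keep the non-dominated solutions, e.g. vectors of
        # (number of mutants, test data size, execution time).
        return [s for s in solutions
                if not any(dominates(t, s) for t in solutions if t is not s)]

    print(pareto_front([(10, 5, 2.0), (8, 5, 1.5), (12, 4, 1.0)]))
    # -> [(8, 5, 1.5), (12, 4, 1.0)]
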
4

Lougmiri, Zekri. "A New Progressive Method for Computing Skyline Queries". Journal of Information Technology Research 10, no. 3 (July 2017): 1–21. http://dx.doi.org/10.4018/jitr.2017070101.

Abstract:
Skyline queries are important in many fields, especially for decision making. In this context, objects or tuples in databases are described by numerical and non-numerical attributes; the skyline operator acts on the numerical ones. The algorithms that implement the skyline operator are generally either progressive or non-progressive: progressive algorithms return skyline points during execution, while non-progressive algorithms return the result only at the end of execution. This paper presents a new progressive algorithm for computing the skyline points. The algorithm is based on sorting as a preprocessing of the input. The authors present new theorems for promptly deducing the first skyline points and reducing the candidate space. A new version of the divide-and-conquer algorithm is used for computing the final skyline. Intensive experimentation on both real and synthetic datasets shows that the algorithm outperforms other methods.
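
The presorting idea can be sketched in a few lines (a generic sort-based progressive skyline for minimization, not the author's exact algorithm): once points are sorted by coordinate sum, a point can only be dominated by an earlier one, so every survivor is emitted immediately:

    def dominates(q, p):
        # q dominates p (minimization): q <= p in every dimension, < in at least one.
        return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))

    def progressive_skyline(points):
        skyline = []
        for p in sorted(points, key=sum):
            # Any dominator of p has a strictly smaller coordinate sum, so it
            # was already seen; checking accepted points suffices.
            if not any(dominates(q, p) for q in skyline):
                skyline.append(p)
                yield p  # emitted progressively, before the input is exhausted

    pts = [(3, 4), (1, 9), (2, 2), (5, 1), (4, 4)]
    print(list(progressive_skyline(pts)))  # [(2, 2), (5, 1), (1, 9)]
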
5

Lenin, K. "Reduction of Active Power Loss by Volition Particle Swarm Optimization". International Journal of Research -GRANTHAALAYAH 6, no. 6 (June 30, 2018): 346–56. http://dx.doi.org/10.29121/granthaalayah.v6.i6.2018.1379.

Abstract:
This paper proposes the Volition Particle Swarm Optimization (VP) algorithm for solving the optimal reactive power problem. The Particle Swarm Optimization (PSO) algorithm has been hybridized with the Fish School Search (FSS) algorithm to improve its capability. FSS provides an operator, called the collective volition operator, which is able to auto-regulate the exploration-exploitation trade-off during algorithm execution. Since the PSO algorithm converges faster than FSS but cannot auto-adapt the granularity of the search, the FSS volition operator can be applied to the PSO to mitigate this weakness and improve PSO performance on dynamic optimization problems. To evaluate the efficiency of the proposed VP algorithm, it has been tested on the standard IEEE 30-bus test system and compared to other reported standard algorithms. Simulation results show that the VP algorithm is more efficient than the other algorithms in reducing real power losses while keeping the control variables within their limits.
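
The following is a rough Python sketch of the hybrid's structure, assuming a standard PSO update plus a volition-style move that contracts the swarm toward its barycenter after an improving iteration and expands it otherwise; the paper's exact operator and parameter values may differ:

    import random

    def pso_with_volition(f, dim, n=30, iters=200, step=0.1):
        xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        vs = [[0.0] * dim for _ in range(n)]
        pbest = [list(x) for x in xs]
        gbest = list(min(xs, key=f))
        prev = sum(f(x) for x in xs)
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):  # standard inertia + cognitive + social terms
                    vs[i][d] = (0.7 * vs[i][d]
                                + 1.5 * random.random() * (pbest[i][d] - xs[i][d])
                                + 1.5 * random.random() * (gbest[d] - xs[i][d]))
                    xs[i][d] += vs[i][d]
                if f(xs[i]) < f(pbest[i]):
                    pbest[i] = list(xs[i])
                if f(xs[i]) < f(gbest):
                    gbest = list(xs[i])
            total = sum(f(x) for x in xs)
            bary = [sum(x[d] for x in xs) / n for d in range(dim)]
            for x in xs:  # collective volition: contract on improvement, else expand
                for d in range(dim):
                    x[d] += (step if total < prev else -step) * (bary[d] - x[d])
            prev = total
        return gbest

    print(pso_with_volition(lambda x: sum(t * t for t in x), dim=2))
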
6

Khamprapai, Wanida, Cheng-Fa Tsai, and Paohsi Wang. "Analyzing the Performance of the Multiple-Searching Genetic Algorithm to Generate Test Cases". Applied Sciences 10, no. 20 (October 17, 2020): 7264. http://dx.doi.org/10.3390/app10207264.

Abstract:
Software testing using traditional genetic algorithms (GAs) minimizes the required number of test cases and reduces the execution time. Currently, GAs are adapted to enhance performance when finding optimal solutions. The multiple-searching genetic algorithm (MSGA) has improved upon current GAs and is used to find the optimal multicast routing in network systems. This paper presents an analysis of the optimization of test case generation using the MSGA by defining suitable values of MSGA parameters, including population size, crossover operator, and mutation operator. Moreover, in this study, we compare the performance of the MSGA with a traditional GA and a hybrid GA (HGA). The experimental results demonstrate that the MSGA covers the maximum number of executed branch statements in the lowest execution time and with the smallest number of test cases compared to the GA and HGA.
7

Ebrahimi, Maryam, Parviz Dinari, Mohamad Samaei, Rouhollah Sohrabi, and Soheil Sherafatianfini. "A New Approach Based on the Learning Effect for Sequence-Dependent Parallel Machine Scheduling Problem under Uncertainty". Discrete Dynamics in Nature and Society 2022 (September 10, 2022): 1–9. http://dx.doi.org/10.1155/2022/2648936.

Abstract:
Production system design involves many restrictions and complex assumptions that make decision making difficult. One of the most important is the complexity of the relationship between man and machine. In this regard, operator learning is recognized as an effective element in completing tasks in the production system. In this research, a mathematical model for scheduling parallel machines in terms of job degradation and operator learning is presented. As one of the most important assumptions, the sequence-dependent setup time is of concern; in other words, jobs are processed sequentially, and there is a sequence-dependent setup time. Moreover, the processing time and delivery due date are considered uncertain, and a fuzzy conversion method is used to deal with this uncertainty. The proposed mathematical model is multiobjective and tries to minimize speed and completion time. To optimize this mathematical model, the genetic algorithm (GA) and variable neighborhood search (VNS) algorithms have been used. A new hybrid algorithm has also been developed for this problem. The results show that the hybrid algorithm can provide more substantial results than the classical algorithms. Moreover, it is revealed that a large percentage of Pareto solutions in the proposed algorithm have a generation time of more than 80% of the algorithm's execution time.
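
For illustration, a common position-based learning-effect model from the scheduling literature (not necessarily the paper's exact formulation) can be written as:

    def processing_time(base, position, alpha=-0.2):
        # Log-linear learning effect: a job placed r-th on a machine takes
        # base * r**alpha time, with alpha < 0, so later positions run faster
        # as the operator learns.
        return base * position ** alpha

    # A job with a 10-hour base time takes ~8.7 h in position 2 and ~7.2 h in position 5.
    print(processing_time(10.0, 2), processing_time(10.0, 5))
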
8

García, José, Gino Astorga, and Víctor Yepes. "An Analysis of a KNN Perturbation Operator: An Application to the Binarization of Continuous Metaheuristics". Mathematics 9, no. 3 (January 24, 2021): 225. http://dx.doi.org/10.3390/math9030225.

Abstract:
The optimization methods and, in particular, metaheuristics must be constantly improved to reduce execution times, improve the results, and thus be able to address broader instances. In particular, addressing combinatorial optimization problems is critical in the areas of operational research and engineering. In this work, a perturbation operator is proposed which uses the k-nearest neighbors technique, and this is studied with the aim of improving the diversification and intensification properties of metaheuristic algorithms in their binary version. Random operators are designed to study the contribution of the perturbation operator. To verify the proposal, large instances of the well-known set covering problem are studied. Box plots, convergence charts, and the Wilcoxon statistical test are used to determine the operator contribution. Furthermore, a comparison is made using metaheuristic techniques that use general binarization mechanisms such as transfer functions or db-scan as binarization methods. The results obtained indicate that the KNN perturbation operator significantly improves the results.
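
A minimal sketch of what a KNN-guided perturbation of a binary solution might look like (an illustration of the concept; the paper's operator may differ in its details):

    import random

    def knn_perturb(sol, population, k=5, rate=0.1):
        # Find the k population members nearest to `sol` in Hamming distance,
        # then re-sample a fraction of its bits toward the neighbours'
        # majority value: diversity that stays biased toward promising regions.
        neighbours = sorted(population,
                            key=lambda q: sum(a != b for a, b in zip(q, sol)))[:k]
        out = list(sol)
        for i in range(len(sol)):
            if random.random() < rate:
                ones = sum(q[i] for q in neighbours)
                out[i] = 1 if ones > k / 2 else 0
        return out

    pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
    print(knn_perturb(pop[0], pop))
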
9

Fan, Yi, Minghui Cai, Naiqi Li, and Yongmei Liu. "A First-Order Interpreter for Knowledge-Based Golog with Sensing based on Exact Progression and Limited Reasoning". Proceedings of the AAAI Conference on Artificial Intelligence 26, no. 1 (September 20, 2021): 734–42. http://dx.doi.org/10.1609/aaai.v26i1.8230.

Abstract:
While founded on the situation calculus, current implementations of Golog are mainly based on the closed-world assumption or its dynamic versions or the domain closure assumption. Also, they are almost exclusively based on regression. In this paper, we propose a first-order interpreter for knowledge-based Golog with sensing based on exact progression and limited reasoning. We assume infinitely many unique names and handle first-order disjunctive information in the form of the so-called proper+ KBs. Our implementation is based on the progression and limited reasoning algorithms for proper+ KBs proposed by Liu, Lakemeyer and Levesque. To improve efficiency, we implement the two algorithms by grounding via a trick based on the unique name assumption. The interpreter is online but the programmer can use two operators to specify offline execution for parts of programs. The search operator returns a conditional plan, while the planning operator is used when local closed-world information is available and calls a modern planner to generate a sequence of actions.
10

Gurman, Ivan, Pavlo Motornyi, Viktor Cheshun, Andrii Dzhuliy, and Vitalii Chornenkyi. "Algorithm of Synthesis of Discrete Tests Based on Evolutionary Theories". Herald of Khmelnytskyi National University. Technical Sciences 309, no. 3 (May 26, 2022): 40–43. http://dx.doi.org/10.31891/2307-5732-2022-309-3-40-43.

Abstract:
The complexity of modern digital components and devices as objects of diagnosis, in most cases, makes it impossible for experts to synthesize tests manually within the time allowed for organizing diagnostic tests. This makes the development and implementation of intelligent methods and tools for the automated synthesis of test sequences highly relevant. One promising option for the synthesis and optimization of discrete tests is to solve this problem on the basis of evolutionary theory, i.e., genetic algorithms. The paper describes an algorithm for the synthesis of discrete tests using genetic algorithms. The diagnostic experiment proceeds in stages: forming the initial population, crossover, processing the obtained results, and checking the stopping condition of the genetic algorithm. The main characteristics of the algorithm are as follows: the type of evolutionary algorithm used is hybrid; binary chromosome codes are generated by the crossover operator from the values of the diagnostic vectors of elementary tests; applicants for crossover are selected proportionally, based on a fitness probability function and a compatibility-assessment control operator; the crossover operator is uniform; a new generation is formed from the descendants of crossings of proportionally selected individuals, together with elitist-selected individuals that admit no crossings under the compatibility-assessment control operator; the main stopping criterion of the evolutionary algorithm is the loss of the ability to perform crossings of individuals in the resulting generation; additional (possible) stopping criteria are limits on execution time or on the number of iterations (generations).
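
The two operators named above, proportional (roulette-wheel) selection and uniform crossover, are standard GA building blocks; generic sketches follow (illustrative code, not the authors'):

    import random

    def roulette_select(population, fitness):
        # Proportional selection: the probability of picking an individual
        # is its fitness share of the population total.
        total = sum(fitness(ind) for ind in population)
        r = random.uniform(0, total)
        acc = 0.0
        for ind in population:
            acc += fitness(ind)
            if acc >= r:
                return ind
        return population[-1]

    def uniform_crossover(a, b):
        # Uniform crossover: each gene of the child comes from either parent
        # with equal probability.
        return [x if random.random() < 0.5 else y for x, y in zip(a, b)]
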

Doctoral dissertations on the topic "Operator Execution Algorithms"

1

Mallangi, Siva Sai Reddy. "Low-Power Policies Based on DVFS for the MUSEIC v2 System-on-Chip". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229443.

Abstract:
Multifunctional health-monitoring wearable devices are quite prominent these days. Usually these devices are battery-operated and consequently limited by their battery life (from a few hours to a few weeks, depending on the application). Of late, it was realized that these devices, which are currently operated at a fixed voltage and frequency, are capable of operating at multiple voltages and frequencies. By switching these voltages and frequencies to lower values based upon power requirements, these devices can achieve tremendous benefits in the form of energy savings. Dynamic Voltage and Frequency Scaling (DVFS) techniques have proven to be handy in this situation for an efficient trade-off between energy and timely behavior. Within imec, wearable devices make use of the indigenously developed MUSEIC v2 (Multi Sensor Integrated circuit version 2.0). This system is optimized for efficient and accurate collection, processing, and transfer of data from multiple (health) sensors. MUSEIC v2 has limited means of controlling the voltage and frequency dynamically. In this thesis we explore how traditional DVFS techniques can be applied to the MUSEIC v2. Experiments were conducted to find the optimum power modes for efficient operation and to scale the supply voltage and frequency up and down. Considering the overhead caused when switching voltage and frequency, a transition analysis was also done. Real-time and non-real-time benchmarks were implemented based on these techniques, and their performance results were obtained and analyzed. In this process, several state-of-the-art scheduling algorithms and scaling techniques were reviewed to identify a suitable technique. Using our proposed scaling technique implementation, we achieved an 86.95% power reduction on average, in contrast to the conventional way of operating the MUSEIC v2 chip's processor at a fixed voltage and frequency. Techniques that include light sleep and deep sleep modes were also studied and implemented, which tested the system's capability to accommodate Dynamic Power Management (DPM) techniques that can achieve greater benefits. A novel approach for implementing the deep sleep mechanism was also proposed and found to obtain up to 71.54% power savings when compared to a traditional way of executing deep sleep mode.
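
The core DVFS decision described above, picking the lowest operating point that still meets a deadline, can be sketched as follows (the voltage/frequency pairs are hypothetical, since the MUSEIC v2's actual power modes are not given here):

    # Hypothetical operating points (voltage in volts, frequency in MHz),
    # sorted from slowest to fastest.
    POINTS = [(0.8, 20), (0.9, 50), (1.0, 100), (1.1, 200)]

    def pick_operating_point(cycles_needed, deadline_s):
        # Classic DVFS policy: choose the lowest frequency (and hence voltage)
        # that still meets the deadline; since dynamic power scales roughly
        # with f*V^2, running slower-but-just-in-time saves energy.
        for volt, mhz in POINTS:
            if cycles_needed / (mhz * 1e6) <= deadline_s:
                return volt, mhz
        return POINTS[-1]  # deadline infeasible even at top speed: run flat out

    print(pick_operating_point(4_000_000, deadline_s=0.1))  # -> (0.9, 50)
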
2

Garg, Vishesh. "Towards Designing PCM-Conscious Database Systems". Thesis, 2016. https://etd.iisc.ac.in/handle/2005/4889.

Abstract:
Phase Change Memory (PCM) is a recently developed non-volatile memory technology that is expected to provide an attractive combination of the best features of conventional disks (persistence, capacity) and of DRAM (access speed). For instance, it is about 2 to 4 times denser than DRAM, while providing a DRAM-comparable read latency. On the other hand, it consumes much less energy than magnetic hard disks while providing substantively smaller write latency. Due to this suite of desirable features, PCM technology is expected to play a prominent role in the next generation of computing systems, either augmenting or replacing current components in the memory hierarchy. A limitation of PCM, however, is that there is a significant difference between the read and write behaviors in terms of energy, latency and bandwidth. A PCM write, for example, consumes 6 times more energy than a read. Further, PCM has limited write endurance since a memory cell becomes unusable after the number of writes to the cell exceeds a threshold determined by the underlying glass material. Database systems, by virtue of dealing with enormous amounts of data, are expected to be a prime beneficiary of this new technology. Accordingly, recent research has investigated how database engines may be redesigned to suit DBMS deployments on PCM, covering areas such as indexing techniques, logging mechanisms and query processing algorithms. Prior database research has primarily focused on computing architectures wherein either a) PCM completely replaces the DRAM memory; or b) PCM and DRAM co-exist side-by-side and are independently controlled by the software. However, a third option that is gaining favor in the architecture community is where the PCM is augmented with a small hardware-managed DRAM buffer. In this model, which we refer to as DRAM HARD, the address space of the application maps to PCM, and the DRAM buffer can simply be visualized as yet another level of the existing cache hierarchy. With most of the query processing research being preoccupied with the first two models, this third model has remained largely ignored. Moreover, even in this limited literature, the emphasis has been restricted to exploring execution-time strategies; the compile-time plan selection process itself being left unaltered. In this thesis, we propose minimalist reworkings of current implementations of database operators, that are tuned to the DRAM HARD model, to make them PCM-conscious. We also propose novel algorithms for compile-time query plan selection, thereby taking a holistic approach to introducing PCM-compliance in present-day database systems. Specifically, our contributions are two-fold, as outlined below. First, we address the pragmatic goal of minimally altering current implementations of database operators to make them PCM-conscious, the objective being to facilitate an easy transition to the new technology. Specifically, we target the implementations of the "workhorse" database operators: sort, hash join and group-by. Our customized algorithms and techniques for each of these operators are designed to significantly reduce the number of writes while simultaneously saving on execution times. For instance, in the case of sort operator, we perform an in-place partitioning of input data into DRAM-sized chunks so that the subsequent sorting of these chunks can finish inside the DRAM, consequently avoiding both intermediate writes and their associated latency overheads.
Second, we redesign the query optimizer to suit the new environment of PCM. Each of the new operator implementations is accompanied by simple but effective write estimators that make these implementations suitable for incorporation in the optimizer. Current optimizers typically choose plans using a latency-based costing mechanism assigning equal costs to both read and write memory operations. The asymmetric read-write nature of PCM implies that these models are no longer accurate. We therefore revise the cost models to make them cognizant of this asymmetry by accounting for the additional latency during writes. Moreover, since the number of writes is critical to the lifespan of a PCM device, a new metric of write cost is introduced in the optimizer plan selection process, with its value being determined using the above estimators. Consequently, the query optimizer needs to select plans that simultaneously minimize query writes and response times. We propose two solutions for handling this dual-objective optimization problem. The first approach is a heuristic propagation algorithm that extends the widely used dynamic programming plan propagation procedure to drastically reduce the exponential search space of candidate plans. The algorithm uses the write costs of sub-plans at each of the operator nodes to decide which of them can be selectively pruned from further consideration. The second approach maps this optimization problem to the linear multiple-choice knapsack problem, and uses its greedy solution to return the final plan for execution. This plan is known to be optimal within the set of non interesting-order plans in a single join order search space. Moreover, it may contain a weighted execution of two algorithms for one of the operator nodes in the plan tree. Therefore overall, while the greedy algorithm comes with optimality guarantees, the heuristic approach is advantageous in terms of easier implementation. The experimentation for our proposed techniques is conducted on Multi2sim, a state-of-the-art cycle-accurate simulator. Since it does not have native support for PCM, we made a major extension to its existing memory module to model the PCM device. Specifically, we added separate data tracking functionality for the DRAM and PCM resident data, to implement the commonly used read-before-write technique for PCM writes reduction. Similarly, modifications were made to Multi2sim's timing subsystem to account for the asymmetric read-write latencies of PCM. A new DRAM replacement policy called N-Chance, that has been shown to work well for PCM-based hardware, was also introduced. Our new techniques are evaluated on end-to-end TPC-H benchmark queries with regard to the following metrics: number of writes, response times and wear distribution. The experimental results indicate that, in comparison to their PCM-oblivious counterparts, the PCM-conscious operators collectively reduce the number of writes by a factor of 2 to 3, while concurrently improving the query response times by about 20% to 30%. When combined with the appropriate plan choices, the improvements are even higher. In the case of Query 19, for instance, we obtained a 64% savings in writes, while the response time came down to two-thirds of the original. In essence, our algorithms provide both short-term and long-term benefits. These outcomes augur well for database engines that wish to leverage the impending transition to PCM-based computing.
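
As a rough illustration of the chunked-sort idea for the sort operator (a sketch of the stated approach, not the thesis implementation):

    import heapq

    def pcm_conscious_sort(data, dram_capacity):
        # Partition the input into DRAM-sized chunks, sort each chunk while it
        # is DRAM-resident, then merge, so intermediate writes never spill to PCM.
        chunks = [sorted(data[i:i + dram_capacity])
                  for i in range(0, len(data), dram_capacity)]
        return list(heapq.merge(*chunks))  # one sequential output pass

    print(pcm_conscious_sort([5, 3, 8, 1, 9, 2, 7], dram_capacity=3))
    # -> [1, 2, 3, 5, 7, 8, 9]
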

Books on the topic "Operator Execution Algorithms"

1

United States. National Aeronautics and Space Administration., ed. Conversion-integration of MSFC nonlinear signal diagnostic analysis algorithms for realtime execution on MSFC's MPP Prototype System: Final report, NASA contract no. NAS8-40341. Huntsville, AL: AI Signal Research, Inc., 1996.

2

Conversion-integration of MSFC nonlinear signal diagnostic analysis algorithms for realtime execution on MSFC's MPP Prototype System: Final report, NASA contract no. NAS8-40341. Huntsville, AL: AI Signal Research, Inc., 1996.


Book chapters on the topic "Operator Execution Algorithms"

1

Hsu, Tzu-Han, Borzoo Bonakdarpour, Bernd Finkbeiner, and César Sánchez. "Bounded Model Checking for Asynchronous Hyperproperties". In Tools and Algorithms for the Construction and Analysis of Systems, 29–46. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30823-9_2.

Abstract:
Many types of attacks on confidentiality stem from the nondeterministic nature of the environment that computer programs operate in. We focus on verification of confidentiality in nondeterministic environments by reasoning about asynchronous hyperproperties. We generalize the underlying temporal logic to allow nested trajectory quantification, where a trajectory determines how different execution traces may advance and stutter. We propose a bounded model checking algorithm, based on QBF solving, for a fragment of this logic and evaluate it in various case studies on concurrent programs, scheduling attacks, compiler optimization, speculative execution, and cache timing attacks. We also rigorously analyze the complexity of the model checking problem.
2

Majumdar, Rupak, Kaushik Mallik, Mateusz Rychlicki, Anne-Kathrin Schmuck, and Sadegh Soudjani. "A Flexible Toolchain for Symbolic Rabin Games under Fair and Stochastic Uncertainties". In Computer Aided Verification, 3–15. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37709-9_1.

Abstract:
We present a flexible and efficient toolchain to symbolically solve (standard) Rabin games, fair-adversarial Rabin games, and 2½-player Rabin games. To the best of our knowledge, our tools are the first ones able to solve these problems. Furthermore, using these flexible game solvers as a back-end, we implemented a tool for computing correct-by-construction controllers for stochastic dynamical systems under LTL specifications. Our implementations use the recent theoretical result that all of these games can be solved using the same symbolic fixpoint algorithm but utilizing different, domain-specific calculations of the involved predecessor operators. The main feature of our toolchain is the utilization of two programming abstractions: one to separate the symbolic fixpoint computations from the predecessor calculations, and another to allow the integration of different BDD libraries as back-ends. In particular, we employ a multi-threaded execution of the fixpoint algorithm by using the multi-threaded BDD library Sylvan, which leads to enormous computational savings.
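
The first programming abstraction, separating the fixpoint loop from the predecessor operator, can be illustrated with a tiny explicit-state sketch (the toolchain itself works symbolically with BDDs):

    def lfp(step):
        # Least fixpoint by Kleene iteration from the empty set.
        x = frozenset()
        while True:
            y = step(x)
            if y == x:
                return x
            x = y

    def attractor(target, pre):
        # States from which `target` can be reached. Plugging in a different
        # `pre` (controllable, fair-adversarial, ...) changes the game being
        # solved without touching the fixpoint code.
        return lfp(lambda x: target | pre(x))

    # Toy 4-state graph with edges 0->1->2->3.
    edges = {(0, 1), (1, 2), (2, 3)}
    pre = lambda X: frozenset(u for (u, v) in edges if v in X)
    print(sorted(attractor(frozenset({3}), pre)))  # [0, 1, 2, 3]
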
3

Chang, Wanli, Swaminathan Narayanaswamy, Alma Pröbstl, and Samarjit Chakraborty. "Reliable CPS Design for Unreliable Hardware Platforms". In Dependable Embedded Systems, 545–63. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52017-5_23.

Abstract:
Today, many battery-operated cyber-physical systems (CPS), ranging from domestic robots to drones and electric vehicles, are highly software-intensive. The software in such systems involves multiple feedback control loops that implement different functionality. How these control loops are designed is closely related to both the semiconductor aging of the processors on which the software is run and the aging of the batteries in these systems. For example, sudden acceleration in an electric vehicle can negatively impact the health of the vehicle's battery. On the other hand, processors age over time and stress, impacting the execution of control algorithms and thus the control performance. With increasing semiconductor scaling, and our increasing reliance on battery-operated devices, these aging effects are of concern for the lifetime of these devices. Traditionally, the design of the control loops focused only on control-theoretic metrics related to stability and performance (such as peak overshoot or settling time). In this chapter we show that such controller design techniques, oblivious of the characteristics of the hardware implementation platform, dramatically worsen the battery behaviour and violate the safety requirement as the processor ages. However, with proper controller design these effects can be mitigated, thereby improving the lifetime of the devices.
4

Monazzah, Amir Mahdi Hosseini, Amir M. Rahmani, Antonio Miele, and Nikil Dutt. "Exploiting Memory Resilience for Emerging Technologies: An Energy-Aware Resilience Exemplar for STT-RAM Memories". In Dependable Embedded Systems, 505–26. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52017-5_21.

Abstract:
Due to the persistent quest for larger on-chip memories and caches in multicore and manycore architectures, Spin Transfer Torque Magnetic RAM (STT-MRAM or STT-RAM) has been proposed as a promising technology to replace classical SRAMs in near-future devices. The main advantages of STT-RAMs are a considerably higher transistor density and a negligible leakage power compared with SRAM technology. However, the drawback of this technology is the high probability of errors, especially in write operations. Such errors are asymmetric and transition-dependent, with 0 → 1 being the most critical transition, highly sensitive to the amount of current (voltage) supplied to the memory during the write operation. As a consequence, STT-RAMs present an intrinsic trade-off between energy consumption and reliability that needs to be properly tuned w.r.t. the currently running application and its reliability requirement. This chapter proposes FlexRel, an energy-aware reliability-improvement architectural scheme for STT-RAM cache memories. FlexRel considers a memory architecture provided with Error Correction Codes (ECCs) and a custom current regulator for the various cache ways, and conducts a trade-off between reliability and energy consumption. The FlexRel cache controller dynamically profiles the number of 0 → 1 transitions of each individual bit-write operation in a cache block and, based on that, selects the most suitable cache way and current level to guarantee the necessary error-rate threshold (in terms of occurred write errors) while minimizing the energy consumption. We experimentally evaluated the efficiency of FlexRel against the most efficient uniform protection scheme from the reliability, energy, area, and performance perspectives. Experimental simulations performed using gem5 have demonstrated that, while FlexRel satisfies the given error-rate threshold, it delivers up to 13.2% energy savings. From the area footprint perspective, FlexRel delivers up to 7.9% cache-way area savings. Furthermore, the performance overhead of the FlexRel algorithm, which changes the traffic patterns of the cache ways during execution, is 1.7% on average.
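
The per-write profiling step reduces to counting 0 → 1 bit transitions, e.g. (a minimal sketch, not FlexRel's hardware logic):

    def zero_to_one_transitions(old_word, new_word):
        # Bits that flip 0 -> 1 when new_word overwrites old_word: the write
        # transition the scheme above treats as most error-prone and profiles
        # per cache-block write.
        return bin(~old_word & new_word).count("1")

    print(zero_to_one_transitions(0b0011, 0b0110))  # 1 (only bit 2 is newly set)
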
5

Cruz-Chávez, Marco Antonio, Abelardo Rodríguez-León, Rafael Rivera-López, Fredy Juárez-Pérez, Carmen Peralta-Abarca, and Alina Martínez-Oropeza. "Grid Platform Applied to the Vehicle Routing Problem with Time Windows for the Distribution of Products". In Logistics Management and Optimization through Hybrid Artificial Intelligence Systems, 52–81. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0297-7.ch003.

Abstract:
In recent years, new and more powerful computing platforms have been created around the world that can be used to work on computer science problems. Some of the problems addressed are real industrial problems, most of which are classified by complexity theory as hard problems. One such problem is the vehicle routing problem with time windows (VRPTW). The computational Grid is a platform that has recently ventured into the treatment of hard problems to find the best solutions for them. This chapter presents a genetic algorithm for the vehicle routing problem with time windows. The algorithm iteratively applies a mutation operator, first of the intelligent type and then of the restricting type. The algorithm takes advantage of Grid computing to increase the exploration and exploitation of the solution space of the problem. The Grid performance of the genetic algorithm is analyzed, and the latencies that affect the algorithm are measured. The convenience of applying this new computing platform to the execution of algorithms specially designed for Grid computing is presented.
6

Salhi, Dhai Eddine, Abelkamel Tari, and Mohand Tahar Kechadi. "Using E-Reputation for Sentiment Analysis". In Research Anthology on Implementing Sentiment Analysis Across Multiple Disciplines, 1384–400. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-6303-1.ch071.

Abstract:
In a competitive world, companies are looking to gain a positive reputation with their clients. Electronic reputation is part of this reputation, mainly in social networks, where everyone is free to express their opinion. Sentiment analysis of the data collected in these networks is necessary to identify and assess the reputation of a company. This paper focuses on one type of data, tweets on Twitter, which the authors analyzed for the company Djezzy (a mobile operator in Algeria) to gauge customer satisfaction. The study is divided into two parts: the first part is the pre-processing phase, where the tweets are filtered (eliminating useless words, using tokenization) to keep the information necessary for better accuracy. The second part is the application of machine learning algorithms (SVM and logistic regression) for supervised classification, since the results are binary. The strong point of this study is the possibility of running the chosen algorithms on a cloud in order to save execution time; the solution also supports three languages: Arabic, English, and French.
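
A minimal sketch of this kind of supervised pipeline using scikit-learn (the toy tweets and labels below are invented placeholders for the Djezzy dataset):

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    tweets = ["great network coverage", "no signal again today",
              "love the new offer", "worst customer service"]
    labels = [1, 0, 1, 0]  # 1 = satisfied, 0 = dissatisfied

    # Tokenization and filtering are folded into the vectorizer here;
    # swapping LogisticRegression for an SVM (sklearn.svm.LinearSVC) is a
    # one-line change.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(tweets, labels)
    print(model.predict(["the service is great"]))  # -> [1]
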
7

Saraf, Kundankumar Rameshwar, and Malathi P. Jesudason. "Encryption Principles and Techniques for the Internet of Things". In Cryptographic Security Solutions for the Internet of Things, 42–66. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-5742-5.ch002.

Abstract:
This chapter explores the encryption techniques used for the internet of things (IoT). A security algorithm used for IoT should respect the many constraints of an embedded system. Hence, lightweight cryptography is an optimum security solution for IoT devices. This chapter mainly describes the need for security in IoT, the concept of lightweight cryptography, and various cryptographic algorithms along with their shortcomings in the IoT context. The chapter also describes the principle of operation of all the above algorithms along with their security analysis. Moreover, based on algorithm size (i.e., the required number of gate equivalents, block size, key size, throughput, and execution speed), the chapter reports a comparative analysis of their performance. The chapter discusses the merits and demerits of these algorithms along with their use in IoT systems.
8

Pencheva, Tania, Maria Angelova, and Krassimir Atanassov. "Genetic Algorithms Quality Assessment Implementing Intuitionistic Fuzzy Logic". In Handbook of Research on Novel Soft Computing Intelligent Algorithms, 327–54. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-4450-2.ch011.

Abstract:
Intuitionistic fuzzy logic has been implemented in this investigation, aiming to derive intuitionistic fuzzy estimations of the model parameters of a yeast fed-batch cultivation. Considered here are standard simple and multi-population genetic algorithms, as well as modifications of them that differ from each other in the execution order of the main genetic operators (selection, crossover, and mutation). All are applied for the purpose of parameter identification of an S. cerevisiae fed-batch cultivation. The performance of the examined algorithms has been assessed before and after the application of a procedure for narrowing the range of model parameter variation. The behavior of the standard simple genetic algorithm has also been examined for different values of the generation gap (GGAP), the genetic algorithm parameter to which convergence time is most sensitive. The results obtained after applying intuitionistic fuzzy logic to assess the performance of the genetic algorithms have been compared. As a result, the most reliable algorithm/GGAP value, ensuring the fastest and most valuable solution, is distinguished.
9

"Issues Related to Acceleration of Algorithms". W Advances in Systems Analysis, Software Engineering, and High Performance Computing, 173–94. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-8350-0.ch010.

Abstract:
This chapter provides examples of the acceleration of various algorithms using the dataflow paradigm. Implementing these algorithms demands input data changes, operation substitutions, and data tiling to achieve significant performance. The implementation becomes even more challenging if data arrive via a serial stream. The chapter presents acceleration mechanisms in three different use cases related to the presented algorithms, and consists of three parts: Acceleration of Algorithms Using Innovations in Suboptimal Calculus and Approximate Computing, Acceleration of Algorithms Using Innovations in Computer Architecture and Implementational Technologies, and Speeding up the Execution of Data Mining Algorithms.
10

"Implementation Details of Neural Networks Using Dataflow". W Advances in Systems Analysis, Software Engineering, and High Performance Computing, 142–51. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-8350-0.ch006.

Abstract:
This chapter presents the dataflow paradigm in general, with loop unrolling and data pipelines as the key points for acceleration, and discusses implementation details of multilayer perceptron neural networks. The iterative nature of the algorithm makes it suitable for dataflow implementation using matrix multiplication as the basic operation. The chapter also presents the major differences in code execution between the conventional control-flow paradigm and the dataflow paradigm. It is shown how part of the algorithm (the feed-forward phase) can be migrated to the accelerator while the rest remains unchanged.
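
The feed-forward phase reduces to repeated matrix multiplication, one pipeline stage per layer; a minimal NumPy sketch (illustrative shapes, not the chapter's dataflow code):

    import numpy as np

    def feed_forward(x, layers):
        # `layers` is a list of (weight_matrix, bias_vector) pairs; each
        # W @ a + b is the basic operation a dataflow pipeline would stream.
        a = x
        for W, b in layers:
            a = np.tanh(W @ a + b)  # one pipeline stage per layer
        return a

    rng = np.random.default_rng(0)
    layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
              (rng.standard_normal((2, 4)), np.zeros(2))]
    print(feed_forward(np.ones(3), layers))
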

Conference papers on the topic "Operator Execution Algorithms"

1

Butler, Erin, Dryonis Pertuso, Gerald Hua, and Price Stark. "Automated Hydraulic Fracturing Integrated with Predictive Machine Learning". In SPE Hydraulic Fracturing Technology Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/209165-ms.

Abstract:
In unconventional wells, returns are driven in part by the reduction of variability in efficiency and performance. In 2021, the stimulation of two wells in the Bakken proceeded under the architecture of an automated frac control system communicating directly with Machine Learning predictive models at the headquarters of the well operator. This represents the first time an algorithmic frac was conducted via automation, adjusting stage designs and pushing those stage designs to the field without human intervention. This paper discusses a fully integrated and automated completion performed on the operator's four-well pad in the Bakken. It reviews the impact on completion performance, completion design, components of the system, and execution. Throughout the completion, automated software interfaced with the frac control system executing the job. Additionally, data was uploaded live and fed to the Machine Learning predictive model. This allowed the model to learn from actual well data and suggest improvements. Improvements were captured, iterated on, and design updates were sent back to the control system for the next stage in the completion sequence. Human oversight was maintained during the entire process, but only as a check. Both the automated frac control system and the algorithmic design system were functionally separate but communicated live, allowing the operator to take advantage of their complete basin knowledge database without compromising data integrity and model confidentiality. Additionally, sensors provided real-time data such as treating pressure, rate, and proppant concentration, as well as downhole data such as cluster uniformity, fracture geometry, and offset well interactions. The project was launched with several primary goals in mind: first, to functionally test the automation of the frac fleet for the operator, proving its ability to consistently place their designs; second, to incorporate the prediction model algorithms into completion design and test how quickly and how much the Machine Learning models could learn from actual well stages. Both of these primary goals were achieved, validating the ability to automatically execute completions and to tie design changes live to a control system elsewhere. This represents the first time a hydraulic fracture was conducted via automation with algorithmic integrated design improvement, either independently or together. These capabilities can improve execution and performance where it is becoming increasingly difficult to deliver step changes in well performance with current manual crews and technology. Integrated automation provides an upgrade to completion performance by reducing variability in execution and well performance while also enabling tailored designs on scales previously unattainable.
2

Marsden, Gary C., Ted Shaneyfelt, Sadik Esener, and Sing H. Lee. "Optoelectronic relational algebraic processor". In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.fbb4.

Abstract:
The dual scale topology optoelectronic processor (DSTOP) was developed for generalized matrix algebra. Vector outer products are a particularly useful feature of the DSTOP architecture. Since the computation is performed electronically, a generalized function may be substituted for conventional numerical multiplication; this is especially useful in symbolic computations such as those required in relational databases. We have developed algorithms appropriate for DSTOP that use generalized outer products in the execution of relational algebraic operations. The DSTOP processor serves as a decision subsystem in a relational database machine. The DSTOP relational algebraic processor is well suited to interface with 3D optical storage devices such as optical disks, photorefractive crystals, or two-photon memories. The operations performed include joins, set difference, union, intersection, and set division. Duplicate removal may also be performed in support of the projection relational operator. The remaining relational operators, selection and Cartesian product, are not included: Cartesian products require no decision mechanism, while selection, which is a unary operator, is best performed on a smaller front-end processor.
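
The generalized outer product is easy to sketch in software (an illustration of the idea; DSTOP realizes it optoelectronically):

    def generalized_outer(r, s, op):
        # Outer product with numeric multiplication replaced by an arbitrary
        # pairwise function `op`, enabling symbolic/relational decisions.
        return [[op(a, b) for b in s] for a in r]

    R = [("ann", 1), ("bob", 2)]
    S = [("bob", 2), ("cal", 3)]
    match = generalized_outer(R, S, lambda a, b: a == b)
    # A row with any True survives the intersection; rows with none feed set difference.
    intersection = [t for t, row in zip(R, match) if any(row)]
    difference = [t for t, row in zip(R, match) if not any(row)]
    print(intersection, difference)  # [('bob', 2)] [('ann', 1)]
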
3

Cenci, Stefano, Giulio Rosati, Damiano Zanotto, Fabio Oscari, and Aldo Rossi. "First Test Results of a Haptic Tele-Operation System to Enhance Stability of Telescopic Handlers". In ASME 2010 10th Biennial Conference on Engineering Systems Design and Analysis. ASMEDC, 2010. http://dx.doi.org/10.1115/esda2010-25305.

Abstract:
According to a recent report by the ILO (International Labour Organization), more than two million people die or lose their working capability every year because of accidents or work-related diseases. A large portion of these accidents are related to the execution of motion and transportation tasks involving heavy-duty machines. The insufficient degree of interaction between the human operator and the machine may be regarded as one of the major causes of this phenomenon. The main goal of the tele-operation system presented in this paper is both to preserve slave (machine) stability, by reducing the inputs of the slave actuators when certain unsafe working conditions occur, and to improve the level of interaction at the master (operator) side. Different control schemes are proposed in the paper, including several combinations of master and slave control strategies. The effectiveness of the algorithms is analyzed by presenting some experimental results, based on the use of a two-degrees-of-freedom force-feedback input device (with one active actuator and one passive stiff joint) coupled with a simulator of a telescopic handler.
4

Bohlin, Markus, Mathias Wärja, Anders Holst, Pontus Slottner, and Kivanc Doganay. "Optimization of Condition-Based Maintenance for Industrial Gas Turbines: Requirements and Results". In ASME Turbo Expo 2009: Power for Land, Sea, and Air. ASMEDC, 2009. http://dx.doi.org/10.1115/gt2009-59935.

Abstract:
In oil and gas applications, the careful planning and execution of preventive maintenance is important due to the high costs associated with shutdown of critical equipment. Optimization and lifetime management for equipment such as gas turbines is therefore crucial in order to achieve high availability and reliability. In this paper, a novel condition-based gas turbine maintenance strategy is described and evaluated. Using custom-made gas turbine maintenance planning software, maintenance is repeatedly reoptimized to fit into the time intervals where production losses are least costly and result in the lowest possible impact. The strategy focuses on accurate online lifetime estimates for gas turbine components, where algorithms predicting future maintenance requirements are used to produce maintenance deadlines. This ensures that the gas turbines are maintained in accordance with the conditions on site. To show the feasibility and economic effects of a customer-adapted maintenance planning process, the maintenance plan for a gas turbine used in a real-world scenario is optimized using a combinatorial optimization algorithm and input from gas turbine operation data, maintenance schedules and operator requirements. The approach was validated through the inspection of a reference gas turbine after a predetermined time interval. It is shown that savings may be substantial compared to a traditional preventive maintenance plan. In the evaluation, typical cost reductions range from 25 to 65%. The calculated availability increase in practice is estimated to range from 0.5 to 1%. In addition, downtime reductions of approximately 12% are expected, due solely to improved planning. This indicates significant improvements.
5

Dubinsky, Alexey, and Yana Zhykharieva. "The Project of Text Adventure Game for Studying the Paramedical Algorithms". In eLSE 2020. University Publishing House, 2020. http://dx.doi.org/10.12753/2066-026x-20-004.

Abstract:
First aid skills are an important part of a medic's competency. The sets of instructions for first aid operations are officially approved by the state. These instruction texts are algorithms, and medical students study them in a special course. First of all, we convert the instructions from text to graphical flowcharts (according to the ISO 5807-85 standard) to check for ambiguity and possible misunderstanding. The execution of such algorithms is a typical "complex open-ended assignment". We have a classification of typical user errors, and on the basis of these errors we construct a set of alternative choices for every step of the algorithm. Each such set is converted into the answers of a multiple-choice question (MCQ). The student (executor) faces a repeated cyclic question: "What will you do?" or "What is your next operation?" We plan to build a special environment for gamification of the learning process. The short version will have only one right way, a single sequence of answers. Every wrong answer will lead to an error message ("your patient is dead") and an explanation of why it happened. In the more complex model, we evaluate the patient's state, and the student can read the comments and the list of errors only after the algorithm execution ends. This year (2020) we plan to complete the first iteration: a text-based online adventure game with one content set, based on the first aid instructions approved in Ukraine. The next iteration will use first aid instructions accepted in other countries, starting with the countries of the EU. We expect the future development of this game to resemble the well-known evolution of computer games. This project will be part of the second co-author's PhD thesis.
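
A minimal sketch of the planned flowchart-as-game loop (the states and medical content below are invented placeholders, not the approved instructions):

    # Each flowchart node becomes an MCQ; each answer points to the next node.
    steps = {
        "start": {"q": "What is your next operation?",
                  "choices": {"check breathing": "breathing",
                              "start chest compressions": "dead"}},
        "breathing": {"q": "The patient is not breathing. What will you do?",
                      "choices": {"call for help and start CPR": "win",
                                  "wait and observe": "dead"}},
    }

    def play():
        state = "start"
        while state not in ("win", "dead"):
            node = steps[state]
            print(node["q"])
            options = list(node["choices"])
            for i, c in enumerate(options, 1):
                print(f"  {i}. {c}")
            state = node["choices"][options[int(input("> ")) - 1]]
        print("Well done!" if state == "win" else "Your patient is dead.")
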
6

Shekhawat, Dushyant Singh, Vishal Devgun, Bhartendu Bhatt, Rajeev Ranjan Sinha, Praprut Songchitruksa, Karthick Vadivel, Adarsh Dhiman, Apurv Jain, Parimal Unmesh Shaha, and Ralf Holy. "Well Intervention Opportunity Management Using Artificial Intelligence and Machine Learning". In ADIPEC. SPE, 2022. http://dx.doi.org/10.2118/211824-ms.

Abstract:
Operators face many challenges when selecting well-intervention candidates and evaluating a field's potential because the process is highly time consuming, labor intensive, and susceptible to cognitive biases. An operator can lose up to USD 10 million/year because of ineffective well-intervention strategies in a single field. The objective of this study is to reduce such losses and standardize the well-intervention process by intelligently using the domain knowledge with artificial-intelligence (AI) and machine-learning (ML) techniques. The workflow developed in this study can automatically and autonomously analyze the surface-subsurface data to expeditiously recommend the top intervention candidates. The workflow leverages proven petroleum-engineering methods and customizable business logic to identify underperforming wells and then recommend workover techniques, post-workover production, success probability, and profitability. It uses production, petrophysics, reservoir, and economics data to run a series of AI/ML techniques. The data-analytics engine runs k-nearest neighbors to predict post-workover rates, followed by a decision tree to identify the remedies. Artificial neural network, random forest, and Monte-Carlo simulation are adapted to identify new perforation opportunities in existing wells. Analytic hierarchy process ranks the top intervention candidates based on post-workover rate, permeability, remaining reserves, and reservoir-production trends. Finally, Bayesian belief network calculates the probability of success. With this implementation, the manual benchmarking process of opportunity identification, which usually takes weeks to months, can now be completed within minutes. Once the opportunity is identified and reviewed, it gets registered in the opportunity tracker list for the final evaluation by the asset team. The results are displayed on web-based applications with customizable dashboards and can be integrated with any existing online/offline systems. Because the whole process is now automated and takes very little execution time, petroleum engineers can review the field's performance on a daily basis. With more than 80% predictive accuracy and 90% time saving compared to the manual process, this workflow presents a step-change in the operator's well-intervention management capacity. In this paper, the authors discuss the adaptations to the industry-standard AI/ML algorithms and the best practices to provide a faster, more accurate, and efficient well-intervention advisory system.
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Faz-Hernández, Armando, and Julio López. "High-Performance Elliptic Curve Cryptography: A SIMD Approach to Modern Curves". In Concurso de Teses e Dissertações. Sociedade Brasileira de Computação - SBC, 2023. http://dx.doi.org/10.5753/ctd.2023.230156.

Full source text
Abstract:
Cryptography based on elliptic curves is endowed with efficient methods for public-key cryptography. Recent research has shown the superiority of the Montgomery and Edwards curves over Weierstrass curves, as they require fewer arithmetic operations. Using these modern curves has, however, introduced several challenges to the design of cryptographic algorithms, opening up new opportunities for optimization. Our main objective is to propose algorithmic optimizations and implementation techniques for cryptographic algorithms based on elliptic curves. In order to speed up the execution of these algorithms, our approach relies on the use of extensions to the instruction set architecture. In addition to extensions specific to cryptography, we use extensions that follow the Single Instruction, Multiple Data (SIMD) parallel computing paradigm, in which the processor executes the same operation over a set of data in parallel. We investigated how to apply SIMD to the implementation of elliptic curve algorithms. As part of our contributions, we design parallel algorithms for prime-field and elliptic curve arithmetic. We also design a new three-point ladder algorithm for the scalar multiplication P + kQ, and a faster formula for calculating 3P on Montgomery curves. These algorithms have found applicability in isogeny-based cryptography. Using SIMD extensions such as SSE, AVX, and AVX2, we develop optimized implementations of the following cryptographic algorithms: X25519, X448, SIDH, ECDH, ECDSA, EdDSA, and qDSA. Performance benchmarks show that these implementations are faster than existing state-of-the-art implementations. Our study confirms that using extensions to the instruction set architecture is an effective tool for optimizing implementations of cryptographic algorithms based on elliptic curves. May this be an incentive not only for those seeking to speed up programs in general but also for computer manufacturers to include more advanced extensions that support the increasing demand for cryptography.
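To make the scalar-multiplication setting concrete, here is a minimal Python sketch of the classic Montgomery-ladder structure (one add and one double per scalar bit) on a toy short-Weierstrass curve modulo 97. It illustrates only the uniform ladder pattern; it is not the thesis's x-only three-point ladder, its SIMD field arithmetic, or constant-time code, and the curve and base point are textbook-scale examples.

```python
# Toy Montgomery-ladder scalar multiplication on y^2 = x^3 + 2x + 3 mod 97.
# Real implementations use x-only Montgomery-curve formulas and constant-time
# (often SIMD-accelerated) field arithmetic, which this sketch does not attempt.
P97, A = 97, 2                # small curve parameters for demonstration
G = (3, 6)                    # base point: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

def ec_add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P97 == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P97) % P97
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P97) % P97
    x3 = (lam * lam - x1 - x2) % P97
    return (x3, (lam * (x1 - x3) - y1) % P97)

def ladder(k, P):
    """Montgomery ladder: a uniform add and double per scalar bit."""
    R0, R1 = None, P
    for bit in bin(k)[2:]:
        if bit == '1':
            R0, R1 = ec_add(R0, R1), ec_add(R1, R1)
        else:
            R0, R1 = ec_add(R0, R0), ec_add(R0, R1)
    return R0

# Sanity check: the ladder matches naive repeated addition.
naive = None
for _ in range(13):
    naive = ec_add(naive, G)
assert ladder(13, G) == naive
print("13*G =", ladder(13, G))
```

The uniform per-bit structure is what makes the ladder attractive for side-channel resistance and for the SIMD parallelization the thesis pursues, since both branches perform the same mix of operations.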
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Wang, Haifeng, Michael Thiel, Jean-Michel Denichou, Diogo Salim, Soazig Leveque, Vera Krissetiawati Wibowo, Chris Woods and Darren Baker. "Toward Three-Dimensional Reservoir Mapping – A New Approach for Mapping Reservoirs with Advanced Ultra-Deep Azimuthal Resistivity Measurements". In 2021 SPWLA 62nd Annual Logging Symposium Online. Society of Petrophysicists and Well Log Analysts, 2021. http://dx.doi.org/10.30632/spwla-2021-0040.

Full source text
Abstract:
Recently the drilling industry has seen many advances in the application of deep directional electromagnetic (EM) measurements for mapping deeper into the reservoir, the latest of which can see over 250 ft above and below the wellbore, providing an unprecedented understanding of the reservoir. This measurement technology is now also being used to look ahead of the bit while drilling exploration wells, to reduce the drilling risks associated with unexpectedly penetrating certain formations. With the increasing complexity of the reservoirs the industry is targeting, there is a growing demand to expand reservoir mapping capability: not just a 1D approach that can only map resistive boundaries on or near the vertical axis while assuming infinite extent in all other directions, but one that enables geoscientists to better steer the well and to understand the reservoir structure and fluid contacts in a full three-dimensional context around the wellbore. In this communication, the authors introduce a new solution for full three-dimensional real-time reservoir mapping. The solution is composed of three parts: a set of new measurements acquired downhole and transmitted to the surface in real time; a new inversion algorithm that is model independent and therefore fits any reservoir complexity; and a new computing paradigm that makes it possible to provide answers in real time while drilling. The new set of measurements almost doubles the number of well logs acquired before and greatly enriches formation evaluation around the wellbore. The new algorithm, unlike previous algorithms, is not confined to any specific form of model, making it suitable for exploring and finding solutions in complex reservoir settings. Finally, by taking advantage of the latest advances in cloud computing, the turnaround time of the new inversion is improved by over a hundred times, thanks to the scalability of the algorithm design and the cloud computing infrastructure. Combining all of these makes it possible to achieve a three-dimensional reservoir map without trading off between high resolution and depth of investigation. The 3D reservoir map, generated in real time from multiple transverse 2D inversion slices, enables timely updates of the reservoir model as drilling progresses, so the operator can make informed decisions. This new technology is currently deployed in several locations around the world and in different environments. In this paper, the authors review deployment results to illustrate the technology, from preparation to real-time execution and finally to post-job model update. With the ability to map in all directions while drilling, this technology opens the door to many applications and will enable operators to target more complex reservoirs and achieve better geosteering results where 3D mapping and steering are required. In addition to its benefits for real-time operations, the technology also enables geoscientists to update and calibrate their reservoir models with fine, accurate detail, which can further benefit multiple disciplines including drilling, completion, production, and reservoir management.
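As a rough illustration of the final assembly step, where the 3D map is built from multiple transverse 2D inversion slices, the sketch below stacks synthetic resistivity slices along the well trajectory with NumPy. The array shapes, grid sizes, and values are invented and bear no relation to the actual inversion.

```python
# Hypothetical sketch: assembling a 3D volume from transverse 2D inversion
# slices, as in the final step described in the abstract. Shapes and data
# are synthetic; the real inversion and well geometry handling are far richer.
import numpy as np

n_slices, depth_cells, lateral_cells = 50, 64, 64   # invented grid sizes

# One 2D resistivity slice (ohm-m) per measured-depth station along the well.
slices = [np.full((depth_cells, lateral_cells), 10.0 + 0.5 * i)
          for i in range(n_slices)]

# Stack transverse slices along the trajectory axis to form the 3D map.
volume = np.stack(slices, axis=0)        # shape: (n_slices, depth, lateral)
print(volume.shape)

# Each new real-time slice extends the volume as drilling progresses.
new_slice = np.full((depth_cells, lateral_cells), 35.0)
volume = np.concatenate([volume, new_slice[None, ...]], axis=0)
print(volume.shape)
```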
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Silvestre, Iago, and Leandro Becker. "Performance Analysis of Embedded Control Algorithms used in UAVs". In Simpósio Brasileiro de Engenharia de Sistemas Computacionais. Sociedade Brasileira de Computação, 2020. http://dx.doi.org/10.5753/sbesc_estendido.2020.13110.

Full source text
Abstract:
Performance analysis of embedded systems is critical when dealing with Cyber-Physical Systems that require stability guarantees, since they typically have to respect deadlines imposed during the design of the related control system. In the recent past, performance analysis was typically done only by executing the code, and taking measurements, on the target embedded platform. Nowadays, code execution and measurement can also be done in simulation software, which gives designers a greater degree of freedom to configure the system for the desired tests. This paper presents results from analyzing the performance of two control algorithms developed for controlling an Unmanned Aerial Vehicle (UAV), running on both simulated and real embedded platforms. Such analysis is important for two reasons: to better understand the timing behavior of the algorithms, and to evaluate architectural issues related to the target embedded platform. A Raspberry Pi 3 Model B+ (with a Cortex-A53 processor) is used as the reference platform and serves as the basis for creating different simulated versions for the analysis. Initial results highlighted the important role played by cache memory in the performance of the control algorithms and detected a major bottleneck in one of the control algorithms that could compromise system stability.
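A minimal sketch of the measurement idea described above: timing individual control-loop steps and checking the observed worst case against the design deadline. The controller, gains, and 2 ms deadline below are invented placeholders, not the paper's UAV algorithms or actual timing budget.

```python
# Hypothetical sketch of deadline-oriented timing measurement for a control
# step, in the spirit of the analysis described in the abstract. The PID
# gains and the 2 ms deadline are invented placeholders.
import time

DEADLINE_NS = 2_000_000   # assumed 2 ms control-period deadline

def control_step(error, state, kp=1.2, ki=0.4, kd=0.05, dt=0.002):
    """Toy PID step standing in for the real UAV control algorithm."""
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

state = {"integral": 0.0, "prev_error": 0.0}
samples = []
for i in range(10_000):
    t0 = time.perf_counter_ns()
    control_step(0.1 * (i % 7), state)
    samples.append(time.perf_counter_ns() - t0)

worst = max(samples)
print(f"worst-case observed: {worst} ns, deadline: {DEADLINE_NS} ns")
print("deadline misses:", sum(s > DEADLINE_NS for s in samples))
```

On a cached platform like the Cortex-A53 discussed in the paper, the first iterations of such a loop typically run slower than the steady state, which is one reason the distribution, not just the mean, matters for stability arguments.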
Styles: APA, Harvard, Vancouver, ISO, etc.
10

E Silva, Jorge, and Eduardo Marques. "Executing Algorithms for Dynamic Dataflow Reconfigurable Hardware - The Operators Protocol". In 2006 IEEE International Conference on Reconfigurable Computing and FPGA's (ReConFig 2006). IEEE, 2006. http://dx.doi.org/10.1109/reconf.2006.307754.

Full source text
Styles: APA, Harvard, Vancouver, ISO, etc.