Journal articles on the topic "Probabilistic execution time"

Consult the top 50 journal articles for your research on the topic "Probabilistic execution time".

Explore journal articles on a wide variety of disciplines and organize your bibliography correctly.

1. Tongsima, S., E. H. M. Sha, C. Chantrapornchai, D. R. Surma, and N. L. Passos. "Probabilistic loop scheduling for applications with uncertain execution time". IEEE Transactions on Computers 49, no. 1 (2000): 65–80. http://dx.doi.org/10.1109/12.822565.

2. Draskovic, Stefan, Rehan Ahmed, Pengcheng Huang, and Lothar Thiele. "Schedulability of probabilistic mixed-criticality systems". Real-Time Systems 57, no. 4 (February 21, 2021): 397–442. http://dx.doi.org/10.1007/s11241-021-09365-4.

Abstract: Mixed-criticality systems often need to fulfill safety standards that dictate different requirements for each criticality level, for example given in the ‘probability of failure per hour’ format. A recent trend suggests designing this kind of system by jointly scheduling tasks of different criticality levels on a shared platform. When this is done, the usual assumption is that tasks of lower criticality are degraded when a higher criticality task needs more resources, for example when it overruns a bound on its execution time. However, a way to quantify the impact this degradation has on the overall system is not well understood. Meanwhile, to improve schedulability and to avoid over-provisioning of resources due to overly pessimistic worst-case execution time estimates of higher criticality tasks, a new paradigm emerged where task’s execution times are modeled with random variables. In this paper, we analyze a system with probabilistic execution times, and propose metrics that are inspired by safety standards. Among these metrics are the probability of deadline miss per hour, the expected time before degradation happens, and the duration of the degradation. We argue that these quantities provide a holistic view of the system’s operation and schedulability.

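The deadline-miss-per-hour metric discussed in this abstract lends itself to a compact illustration. Below is a minimal sketch, assuming a discretized execution-time pmf and independently released jobs; the pmf values, deadline, and release rate are invented for illustration and do not come from the paper.

```python
# Hypothetical per-hour deadline-miss probability from a discrete
# execution-time distribution; all numbers below are illustrative.
pmf = {8: 0.70, 10: 0.25, 12: 0.04, 15: 0.01}  # execution time (ms) -> probability
deadline_ms = 12
jobs_per_hour = 3600 * 10                      # e.g., a 10 Hz task

# Probability that a single job overruns its deadline.
p_miss_job = sum(p for c, p in pmf.items() if c > deadline_ms)

# Probability that at least one job in an hour misses, assuming independence.
p_miss_hour = 1.0 - (1.0 - p_miss_job) ** jobs_per_hour
print(f"P(miss per job) = {p_miss_job:.4f}, P(miss per hour) = {p_miss_hour:.4f}")
```
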
3. Santos, R., J. Santos, and J. Orozco. "Hard Real-Time Systems with Stochastic Execution Times: Deterministic and Probabilistic Guarantees". International Journal of Computers and Applications 27, no. 2 (January 2005): 57–62. http://dx.doi.org/10.1080/1206212x.2005.11441758.

4. Jimenez Gil, Samuel, Iain Bate, George Lima, Luca Santinelli, Adriana Gogonel, and Liliana Cucu-Grosjean. "Open Challenges for Probabilistic Measurement-Based Worst-Case Execution Time". IEEE Embedded Systems Letters 9, no. 3 (September 2017): 69–72. http://dx.doi.org/10.1109/les.2017.2712858.

5. Xiao, Peng, Dongbo Liu, and Kaijian Liang. "Improving scheduling efficiency by probabilistic execution time model in cloud environments". International Journal of Networking and Virtual Organisations 18, no. 4 (2018): 307. http://dx.doi.org/10.1504/ijnvo.2018.093651.

6. Xiao, Peng, Dongbo Liu, and Kaijian Liang. "Improving scheduling efficiency by probabilistic execution time model in cloud environments". International Journal of Networking and Virtual Organisations 18, no. 4 (2018): 307. http://dx.doi.org/10.1504/ijnvo.2018.10014681.

7. Ren, Jiankang, Zichuan Xu, Chao Yu, Chi Lin, Guowei Wu, and Guozhen Tan. "Execution allowance based fixed priority scheduling for probabilistic real-time systems". Journal of Systems and Software 152 (June 2019): 120–33. http://dx.doi.org/10.1016/j.jss.2019.03.001.

8. Lacerda, Bruno, David Parker, and Nick Hawes. "Multi-Objective Policy Generation for Mobile Robots under Probabilistic Time-Bounded Guarantees". Proceedings of the International Conference on Automated Planning and Scheduling 27 (June 5, 2017): 504–12. http://dx.doi.org/10.1609/icaps.v27i1.13865.

Abstract: We present a methodology for the generation of mobile robot controllers which offer probabilistic time-bounded guarantees on successful task completion, whilst also trying to satisfy soft goals. The approach is based on a stochastic model of the robot’s environment and action execution times, a set of soft goals, and a formal task specification in co-safe linear temporal logic, which are analysed using multi-objective model checking techniques for Markov decision processes. For efficiency, we propose a novel two-step approach. First, we explore policies on the Pareto front for minimising expected task execution time whilst optimising the achievement of soft goals. Then, we use this to prune a model with more detailed timing information, yielding a time-dependent policy for which more fine-grained probabilistic guarantees can be provided. We illustrate and evaluate the generation of policies on a delivery task in a care home scenario, where the robot also tries to engage in entertainment activities with the patients.

9. Chanel, Caroline, Charles Lesire, and Florent Teichteil-Königsbuch. "A Robotic Execution Framework for Online Probabilistic (Re)Planning". Proceedings of the International Conference on Automated Planning and Scheduling 24 (May 11, 2014): 454–62. http://dx.doi.org/10.1609/icaps.v24i1.13669.

Abstract: Due to the high complexity of probabilistic planning algorithms, roboticists often opt for deterministic replanning paradigms, which can quickly adapt the current plan to the environment's changes. However, probabilistic planning suffers in practice from the common misconception that it is needed to generate complete or closed policies, which would not require to be adapted on-line. In this work, we propose an intermediate approach, which generates incomplete partial policies taking into account mid-term probabilistic uncertainties, continually improving them on a gliding horizon or regenerating them when they fail. Our algorithm is a configurable anytime meta-planner that drives any sub-(PO)MDP standard planner, dealing with all pending and time-bounded planning requests sent by the execution framework from many reachable possible future execution states, in anticipation of the probabilistic evolution of the system. We assess our approach on generic robotic problems and on combinatorial UAVs (PO)MDP missions, which we tested during real flights: emergency landing with discrete and continuous state variables, and target detection and recognition in unknown environments.

10. Fusi, Matteo, Fabio Mazzocchetti, Albert Farres, Leonidas Kosmidis, Ramon Canal, Francisco J. Cazorla, and Jaume Abella. "On the Use of Probabilistic Worst-Case Execution Time Estimation for Parallel Applications in High Performance Systems". Mathematics 8, no. 3 (March 1, 2020): 314. http://dx.doi.org/10.3390/math8030314.

Abstract: Some high performance computing (HPC) applications exhibit increasing real-time requirements, which call for effective means to predict their high execution times distribution. This is a new challenge for HPC applications but a well-known problem for real-time embedded applications where solutions already exist, although they target low-performance systems running single-threaded applications. In this paper, we show how some performance validation and measurement-based practices for real-time execution time prediction can be leveraged in the context of HPC applications on high-performance platforms, thus enabling reliable means to obtain real-time guarantees for those applications. In particular, the proposed methodology coordinately uses techniques that randomly explore potential timing behavior of the application together with Extreme Value Theory (EVT) to predict rare (and high) execution times to, eventually, derive probabilistic Worst-Case Execution Time (pWCET) curves. We demonstrate the effectiveness of this approach for an acoustic wave inversion application used for geophysical exploration.

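The measurement-based EVT workflow this abstract describes can be illustrated compactly. The following is a minimal sketch, assuming synthetic timing data, an arbitrary block size, and an arbitrary exceedance probability; none of these values come from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Stand-in for measured execution times (seconds); in practice these would
# come from randomized runs of the instrumented application.
samples = rng.gamma(shape=9.0, scale=1e-4, size=100_000)

# EVT models the distribution of per-block worst-case observations.
block = 1_000
maxima = samples.reshape(-1, block).max(axis=1)

# Fit a generalized extreme value distribution to the block maxima and read
# off the execution time exceeded with probability 1e-9 per block (a pWCET).
c, loc, scale = stats.genextreme.fit(maxima)
pwcet = stats.genextreme.isf(1e-9, c, loc, scale)
print(f"pWCET at exceedance probability 1e-9: {pwcet * 1e3:.3f} ms")
```
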
11. Hardy, Damien, and Isabelle Puaut. "Static probabilistic worst case execution time estimation for architectures with faulty instruction caches". Real-Time Systems 51, no. 2 (November 5, 2014): 128–52. http://dx.doi.org/10.1007/s11241-014-9212-x.

12. Al-Refaie, Abbas, Ahmad Al-Hawadi, Natalija Lepkova, and Ghaleb Abbasi. "Blockchain of Optimal Multiple Construction Projects Planning under Probabilistic Arrival and Stochastic Durations". Journal of Civil Engineering and Management 29, no. 1 (January 3, 2023): 15–34. http://dx.doi.org/10.3846/jcem.2023.17927.

Abstract: With the rapid development of projects, firms are facing challenges in planning and controlling complex multiple construction projects. This research, therefore, aims at developing blockchain of optimal scheduling and sequencing of multiple construction projects under probabilistic arrival and stochastic durations. Each project task was considered as a block. Then, a framework for electronic project recording (EPR) system was developed. The EPRs are records for project tasks that make information available directly and securely to authorized users. In this framework, two optimization models were developed for scheduling and sequencing project blocks. The scheduling model aims to assign project tasks to available resources at minimal total cost while maximizing the number of assigned project tasks. On the other hand, the sequencing model seeks to determine the start time of block execution while minimizing delay costs and minimizing the sum of task’s start times. The project arrival date and the task’s execution duration were assumed probabilistic and stochastic (normally distributed), respectively. The developed EPR system was implemented on a real case study of five projects with a total of 121 tasks. Further, the system was developed when the task’s execution duration follows the Program Evaluation and Review Technique (PERT) model with four replications. The project costs (idle time and overtime costs) at the optimal plan were then compared between the task’s execution duration normally distributed and PERT modelled. The results revealed negligible differences between project costs and slight changes in the sequence of project activities. Consequently, both distributions can be used interchangeably to model the task’s execution duration. Furthermore, the project costs were also compared between four solution replications and were found very close, which indicates the robustness of model solutions to random generation of task’s execution duration at both models. In conclusion, the developed EPR framework including the optimization models provided an effective planning and monitoring of construction projects that can be used to make decisions through project progress and efficient sharing of project resources at minimal idle and overtime costs. Future research considers developing a blockchain of optimal maintenance planning.

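Since this abstract contrasts normally distributed task durations with PERT-modelled ones, a sketch of the classic PERT (scaled Beta) sampling scheme may be useful; the optimistic, most-likely, and pessimistic values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def pert_sample(a, m, b, size, lam=4.0):
    """Draw task durations from the classic PERT model: a scaled Beta
    distribution parameterized by optimistic (a), most likely (m), and
    pessimistic (b) estimates."""
    alpha = 1.0 + lam * (m - a) / (b - a)
    beta = 1.0 + lam * (b - m) / (b - a)
    return a + (b - a) * rng.beta(alpha, beta, size)

# Example: five sampled durations (days) for a task estimated as 2/5/10.
print(pert_sample(a=2.0, m=5.0, b=10.0, size=5))
```
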
13. Ghazi, Alaa, and Yasir Hashim. "Probabilistic Modeling for Conditional Statements". AIUB Journal of Science and Engineering (AJSE) 22, no. 3 (December 22, 2023): 271–78. http://dx.doi.org/10.53799/ajse.v22i3.841.

Abstract: A new mathematical framework is proposed in this study to comprehend the impact of program architecture on input random variables; the IF statement was the main topic. The primary idea that is theoretically and experimentally supported in this study is that the part of the joint pmf of a collection of random variables that represents the condition will be shifted to the part that represents the action. After sorting two random variables, the framework is used with four random variables, and the theoretically produced results were realistically validated. The study’s equations can be applied to assessing probabilistic models of various sorting algorithms or other intricate program structures. This may also result in future investigations formalizing more precise execution time expectations.

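The mass-shifting idea is easy to see on a toy case. The sketch below pushes a joint pmf through the comparator `if x > y: swap(x, y)` (one step of a sort): probability mass in the condition region is moved onto the action's outcome. The uniform input distribution is an assumption made here for illustration.

```python
from collections import defaultdict
from itertools import product

# Toy joint pmf of two independent variables, each uniform on {0, 1, 2}.
pmf = {(x, y): 1.0 / 9.0 for x, y in product(range(3), repeat=2)}

# Push the pmf through `if x > y: x, y = y, x`: mass where the condition
# holds is shifted to the swapped (action) outcome.
out = defaultdict(float)
for (x, y), p in pmf.items():
    out[(y, x) if x > y else (x, y)] += p

print(dict(out))  # all probability mass now satisfies x <= y
```
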
14. Stemmer, Ralf, Hai-Dang Vu, Sébastien Le Nours, Kim Grüttner, Sébastien Pillement, and Wolfgang Nebel. "A Measurement-Based Message-Level Timing Prediction Approach for Data-Dependent SDFGs on Tile-Based Heterogeneous MPSoCs". Applied Sciences 11, no. 14 (July 20, 2021): 6649. http://dx.doi.org/10.3390/app11146649.

Abstract: Fast yet accurate performance and timing prediction of complex parallel data flow applications on multi-processor systems remains a very difficult discipline. The reason for it comes from the complexity of the data flow applications w.r.t. data dependent execution paths and the hardware platform with shared resources, like buses and memories. This combination may lead to complex timing interferences that are difficult to express in pure analytical or classical simulation-based approaches. In this work, we propose the combination of timing measurement and statistical simulation models for probabilistic timing and performance prediction of Synchronous Data Flow (SDF) applications on MPSoCs with shared memories. We exploit the separation of computation and communication in our SDF model of computation to set-up simulation-based performance prediction models following different abstraction approaches. We especially propose a message-level communication model driven by a data-dependent probabilistic execution phase timing model. We compare our work against measurement on two case-studies from the computer vision domain: a Sobel filter and a JPEG decoder. We show that the accuracy and execution time of our modeling and evaluation framework outperforms existing approaches and is suitable for a fast yet accurate design space exploration.

15. Caserman, Polona, Clemens Krug, and Stefan Göbel. "Recognizing Full-Body Exercise Execution Errors Using the Teslasuit". Sensors 21, no. 24 (December 15, 2021): 8389. http://dx.doi.org/10.3390/s21248389.

Abstract: Regular physical exercise is essential for overall health; however, it is also crucial to mitigate the probability of injuries due to incorrect exercise executions. Existing health or fitness applications often neglect accurate full-body motion recognition and focus on a single body part. Furthermore, they often detect only specific errors or provide feedback first after the execution. This lack raises the necessity for the automated detection of full-body execution errors in real-time to assist users in correcting motor skills. To address this challenge, we propose a method for movement assessment using a full-body haptic motion capture suit. We train probabilistic movement models using the data of 10 inertial sensors to detect exercise execution errors. Additionally, we provide haptic feedback, employing transcutaneous electrical nerve stimulation immediately, as soon as an error occurs, to correct the movements. The results based on a dataset collected from 15 subjects show that our approach can detect severe movement execution errors directly during the workout and provide haptic feedback at respective body locations. These results suggest that a haptic full-body motion capture suit, such as the Teslasuit, is promising for movement assessment and can give appropriate haptic feedback to the users so that they can improve their movements.

16. Rodriguez Ferrandez, Ivan, Alvaro Jover Alvarez, Matina Maria Trompouki, Leonidas Kosmidis, and Francisco J. Cazorla. "Worst Case Execution Time and Power Estimation of Multicore and GPU Software: A Pedestrian Detection Use Case". ACM SIGAda Ada Letters 43, no. 1 (October 30, 2023): 111–17. http://dx.doi.org/10.1145/3631483.3631502.

Abstract: Worst Case Execution Time estimation of software running on parallel platforms is a challenging task, due to resource interference of other tasks and the complexity of the underlying CPU and GPU hardware architectures. Similarly, the increased complexity of the hardware challenges the estimation of worst case power consumption. In this paper, we employ Measurement Based Probabilistic Timing Analysis (MBPTA), which is capable of managing complex architectures such as multicores. We enable its use by software randomisation, which we show for the first time is also possible on GPUs. We demonstrate our method on a pedestrian detection use case on an embedded multicore and GPU platform for the automotive domain, the NVIDIA Xavier. Moreover, we extend our measurement based probabilistic method in order to predict the worst case power consumption of the software on the same platform.

17. Getir Yaman, Sinem, Esteban Pavese, and Lars Grunske. "Quantitative Verification of Stochastic Regular Expressions". Fundamenta Informaticae 179, no. 2 (March 10, 2021): 135–63. http://dx.doi.org/10.3233/fi-2021-2018.

Abstract: In this article, we introduce a probabilistic verification algorithm for stochastic regular expressions over a probabilistic extension of the Action based Computation Tree Logic (ACTL*). The main results include a novel model checking algorithm and a semantics on the probabilistic action logic for stochastic regular expressions (SREs). Specific to our model checking algorithm is that SREs are defined via local probabilistic functions. Such functions are beneficial since they make it possible to verify properties locally for sub-components. This ability provides the flexibility to reuse the local results for the global verification of the system; hence, the framework can be used for iterative verification. We demonstrate how to model a system with an SRE and how to verify it with the probabilistic action based logic and present a preliminary performance evaluation with respect to the execution time of the reachability algorithm.

18. Jayakumar, K., and S. Thangavel. "Vibration Analysis of Industrial Drive for Broken Bearing Detection Using Probabilistic Wavelet Neural Network". International Journal of Power Electronics and Drive Systems (IJPEDS) 5, no. 4 (April 1, 2015): 541. http://dx.doi.org/10.11591/ijpeds.v5.i4.pp541-551.

Abstract: Reliable monitoring of industrial drives plays a vital role in preventing performance degradation of machinery. Today’s fault detection mechanisms use the wavelet transform for proper detection of faults; however, more attention is required to detect higher fault rates with lower execution time. The existence of faults in industrial drives leads to a higher current flow rate, and the broken-bearing detection system determines the number of unhealthy bearings, but a faster system with a constant frequency domain still needs to be developed. Vibration data acquisition was used in our proposed work to detect broken bearing faults in an induction machine. To generate an effective fault detection of industrial drives, a Biorthogonal Posterior Vibration Signal-Data Probabilistic Wavelet Neural Network (BPPVS-WNN) system was proposed in this paper. This system focuses on reducing the current flow and identifying faults with lower execution time, using harmonic values obtained through the fifth derivative. Initially, the construction of the biorthogonal vibration signal-data based wavelet transform in the BPPVS-WNN system localizes the time and frequency domains. The biorthogonal wavelet approximates the broken bearing using double scaling and a factor, and identifies the transient disturbance due to a fault on the induction motor through approximate and detailed coefficients. The posterior probabilistic neural network detects the final level of faults using the detailed coefficients up to the fifth derivative, and the results are obtained at a faster rate at a constant frequency signal on the industrial drive. Experiments through the Simulink tool detect healthy and unhealthy motors by measuring parametric factors such as time-based fault detection rate, current flow rate, and execution time.

19. Lakehal, Abderrahim, Adel Alti, and Philippe Roose. "Novel Semantic-Based Probabilistic Context Aware Approach for Situations Enrichment and Adaptation". Applied Sciences 12, no. 2 (January 12, 2022): 732. http://dx.doi.org/10.3390/app12020732.

Abstract: This paper aims at ensuring an efficient recommendation. It proposes a new context-aware semantic-based probabilistic situations injection and adaptation approach using an ontology and a Bayesian classifier. The idea is to predict the relevant situations for recommending the right services. Indeed, situations are correlated with the user’s context. They can, therefore, be considered in designing a recommendation approach to enhance relevancy by reducing the execution time. In the proposed solution, four probability-based context-rule situation items (user’s location and time, user’s role, their preferences and experiences) are chosen as inputs to predict the user’s situations. Subsequently, a weighted linear combination is applied to calculate the similarity of rule items. The higher scores between the selected items are used to identify the relevant user’s situations. Three context parameters (CPU speed, sensor availability and RAM size) of the current devices are used to ensure adaptive service recommendation. Experimental results show that the proposed approach enhances the accuracy rate with a high number of situation rules. A comparison with existing recommendation approaches shows that the proposed approach is more efficient and decreases the execution time.

20. Zhuang, Xueqiu, Huihua Jiao, and Kai Lu. "Augmented Reality Interactive Guide System and Method for Tourist Attractions Based on Geographic Location". Journal of Electrical and Computer Engineering 2022 (June 23, 2022): 1–13. http://dx.doi.org/10.1155/2022/7606289.

Abstract: With the increasing improvement of people’s living standards, more and more tourists choose to travel independently, which puts forward higher requirements for the existing tourist guide systems of scenic spots. Augmented reality (AR) is a technology that integrates computer-generated virtual information into real scenes. It provides more interactive modes, allowing visitors to have a stronger sense of immersion in the real world. The interactive navigation system is an analysis and processing system that recommends algorithms for tourists through the back-end server by collecting the user’s registration information data. It provides tourists with reasonable travel itineraries. This can not only allow users to avoid the peak flow of people and save tourists’ time, but also reduce the hidden dangers of scenic spots and improve the turnover rate of passenger flow. This paper aims to study an interactive guide system combined with augmented reality technology to provide tourism services for tourists in tourist attractions based on geographic locations. This paper proposes an improved probabilistic algorithm model to accurately locate tourists. (Probabilistic algorithms are also called randomized algorithms. Probabilistic algorithms allow the random choice of the next computational step during execution. In many cases, when the algorithm is faced with a choice during the execution process, the random choice is time-saving compared to the optimal one. Therefore, the probabilistic algorithm can greatly reduce the complexity of the algorithm.) At the same time, the paper designs an AR scenic spot tour guide system and tests its performance. The results show that the initial response speed of the improved probability algorithm is 0.69 s, the average response time is 4 s when the number of concurrent users is finally increased to 600, and the growth rate is about 0.006 s. The improved algorithm can obviously enhance the response speed.

21. Sant'Anna, Annibal Parracho. "Probabilistic composition of criteria for schedule monitoring". Pesquisa Operacional 30, no. 3 (December 2010): 751–67. http://dx.doi.org/10.1590/s0101-74382010000300013.

Abstract: Time is a key factor in management. Along a project execution, keeping the best completion rates, not too slow and not too fast, is a central objective, but such a best rate cannot be suitably anticipated in a precise schedule. It must be determined on the job, on a comparative basis. This paper develops an evaluation system involving the measurement of schedule fitting indicators designed to deal with such conditions. This evaluation system is based on a transformation of the data into probabilities of reaching the frontier of best performances that permits precisely composing measurements on correlated attributes. This feature of the system allows for combining criteria evaluated on elementary and on aggregate levels.

22. Bhandari, Guru Prasad, Ratneshwer Gupta, and Satyanshu K. Upadhyay. "Colored Petri Nets Based Fault Diagnosis in Service Oriented Architecture". International Journal of Web Services Research 15, no. 4 (October 2018): 1–28. http://dx.doi.org/10.4018/ijwsr.2018100101.

Abstract: Diagnosing faults in a service-oriented architecture (SOA) is a difficult task due to the limited accessibility of software services. Probabilistic approaches to diagnosing faults may be insufficient due to the black-box nature of services. In SOA, software services may be obtained from different service providers and get composed at run-time. This is the reason why faults must be diagnosed at execution time, which is a costly affair. The authors have demonstrated a Colored Petri Nets (CPN)-based approach to model different faults that may occur at execution time. Some heuristics are proposed to diagnose faults from the CPN modeling. CPN behavioral properties have also been used for fault diagnosis. The model may be helpful for dependability enhancement of SOA-based systems.

23. Saxena, Avinash, and Shrisha Rao. "Degradation Analysis of Probabilistic Parallel Choice Systems". International Journal of Reliability, Quality and Safety Engineering 21, no. 03 (June 2014): 1450012. http://dx.doi.org/10.1142/s0218539314500120.

Abstract: Degradation analysis is used to analyze the useful lifetimes of systems, their failure rates, and various other system parameters like mean time to failure (MTTF), mean time between failures (MTBF), and the system failure rate (SFR). In many systems, certain possible parallel paths of execution that have greater chances of success are preferred over others. Thus we introduce here the concept of probabilistic parallel choice. We use binary and n-ary probabilistic choice operators in describing the selections of parallel paths. These binary and n-ary probabilistic choice operators are considered so as to represent the complete system (described as a series-parallel system) in terms of the probabilities of selection of parallel paths and their relevant parameters. Our approach allows us to derive new and generalized formulae for system parameters like MTTF, MTBF, and SFR. We use a generalized exponential distribution, allowing distinct installation times for individual components, and use this model to derive expressions for such system parameters.

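The generalized formulae themselves are not reproduced in this listing. As a hedged illustration of the idea only (not the paper's derivation), if path i is selected with probability p_i and has mean time to failure MTTF_i, the law of total expectation suggests a system-level expectation of the form:

```latex
% Illustrative expectation over an n-ary probabilistic choice of paths;
% path i is selected with probability p_i.
\mathrm{MTTF}_{\text{system}} \;=\; \sum_{i=1}^{n} p_i \, \mathrm{MTTF}_i ,
\qquad \sum_{i=1}^{n} p_i = 1 .
```
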
24. Shu, Chang, Yinhui Luo, and Fang Liu. "Probabilistic Task Offloading with Uncertain Processing Times in Device-to-Device Edge Networks". Electronics 13, no. 10 (May 11, 2024): 1889. http://dx.doi.org/10.3390/electronics13101889.

Abstract: D2D edge computing is a promising solution to address the conflict between limited network capacity and increasing application demands, where mobile devices can offload their tasks to other peer devices/servers for better performance. Task offloading is critical to the performance of D2D edge computing. Most existing works on task offloading assume the task processing time is known or can be accurately estimated. However, the processing time is often uncertain until it is finished. Moreover, the same task can have largely different execution times under different scenarios, which leads to inaccurate offloading decisions and degraded performance. To address this problem, we propose a game-based probabilistic task offloading scheme with an uncertain processing time in D2D edge networks. First, we characterize the uncertainty of the task processing time using a probabilistic model. Second, we incorporate the proposed probabilistic model into an offloading decision game. We also analyze the structural properties of the game and prove that it can reach a Nash equilibrium. We evaluate the proposed work using real-world applications and datasets. The experimental results show that the proposed probabilistic model can accurately characterize the uncertainty of completion time, and the offloading algorithm can effectively improve the overall task completion rate in D2D networks.

25. Bagaev, Dmitry, and Bert de Vries. "Reactive Message Passing for Scalable Bayesian Inference". Scientific Programming 2023 (May 27, 2023): 1–26. http://dx.doi.org/10.1155/2023/6601690.

Abstract: We introduce reactive message passing (RMP) as a framework for executing schedule-free, scalable, and, potentially, more robust message passing-based inference in a factor graph representation of a probabilistic model. RMP is based on the reactive programming style, which only describes how nodes in a factor graph react to changes in connected nodes. We recognize reactive programming as the suitable programming abstraction for message passing-based methods that improve robustness, scalability, and execution time of the inference procedure and are useful for all future implementations of message passing methods. We also present our own implementation ReactiveMP.jl, which is a Julia package for realizing RMP through minimization of a constrained Bethe free energy. By user-defined specification of local form and factorization constraints on the variational posterior distribution, ReactiveMP.jl executes hybrid message passing algorithms including belief propagation, variational message passing, expectation propagation, and expectation maximization update rules. Experimental results demonstrate the great performance of our RMP implementation compared to other Julia packages for Bayesian inference across a range of probabilistic models. In particular, we show that the RMP framework is capable of performing Bayesian inference for large-scale probabilistic state-space models with hundreds of thousands of random variables on a standard laptop computer.

26. Kolomvatsos, Kostas, and Christos Anagnostopoulos. "A probabilistic model for assigning queries at the edge". Computing 102, no. 4 (November 18, 2019): 865–92. http://dx.doi.org/10.1007/s00607-019-00767-8.

Abstract: Data management at the edge of the network can increase the performance of applications as the processing is realized close to end users limiting the observed latency in the provision of responses. A typical data processing involves the execution of queries/tasks defined by users or applications asking for responses in the form of analytics. Query/task execution can be realized at the edge nodes that can undertake the responsibility of delivering the desired analytics to the interested users or applications. In this paper, we deal with the problem of allocating queries to a number of edge nodes. The aim is to support the goal of eliminating further the latency by allocating queries to nodes that exhibit a low load and high processing speed, thus, they can respond in the minimum time. Before any allocation, we propose a method for estimating the computational burden that a query/task will add to a node and, afterwards, we proceed with the final assignment. The allocation is concluded by the assistance of an ensemble similarity scheme responsible to deliver the complexity class for each query/task and a probabilistic decision making model. The proposed scheme matches the characteristics of the incoming queries and edge nodes trying to conclude the optimal allocation. We discuss our mechanism and through a large set of simulations and the adoption of benchmarking queries, we reveal the potentials of the proposed model supported by numerical results.

27. Saint-Guillain, Michael, Tiago Vaquero, Steve Chien, Jagriti Agrawal, and Jordan Abrahams. "Probabilistic Temporal Networks with Ordinary Distributions: Theory, Robustness and Expected Utility". Journal of Artificial Intelligence Research 71 (August 27, 2021): 1091–136. http://dx.doi.org/10.1613/jair.1.13019.

Abstract: Most existing works in Probabilistic Simple Temporal Networks (PSTNs) base their frameworks on well-defined, parametric probability distributions. Under the operational contexts of both strong and dynamic control, this paper addresses the robustness measure of PSTNs, i.e. the execution success probability, where the probability distributions of the contingent durations are ordinary, not necessarily parametric, nor symmetric (e.g. histograms, PERT), as long as these can be discretized. In practice, one would obtain ordinary distributions by considering empirical observations (compiled as histograms), or even hand-drawn by field experts. In this new realm of PSTNs, we study and formally define concepts such as degree of weak/strong/dynamic controllability, robustness under a predefined dispatching protocol, and introduce the concept of PSTN expected execution utility. We also discuss the limitation of existing controllability levels, and propose new levels within dynamic controllability, to better characterize dynamically controllable PSTNs based on practical complexity considerations. We propose a novel fixed-parameter pseudo-polynomial time computation method to obtain both the success probability and expected utility measures. We apply our computation method to various PSTN datasets, including realistic planetary exploration scenarios in the context of the Mars 2020 rover. Moreover, we propose additional original applications of the method.

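Working with ordinary, discretized distributions as described above often reduces to convolving histograms. A minimal sketch, with invented one-minute-bin pmfs for two sequential contingent durations and an invented deadline:

```python
import numpy as np

# Hypothetical discretizations (1-minute bins) of two contingent durations,
# e.g. histograms compiled from past executions.
d1 = np.array([0.2, 0.5, 0.3])        # durations 1, 2, 3 minutes
d2 = np.array([0.1, 0.6, 0.2, 0.1])   # durations 1, 2, 3, 4 minutes

# pmf of the total duration d1 + d2; its support starts at 2 minutes.
total = np.convolve(d1, d2)

deadline = 5  # minutes
p_success = total[: deadline - 2 + 1].sum()
print(f"P(both finish by {deadline} min) = {p_success:.3f}")
```
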
28. Allam, Tahani M. "MOTIFSM: Cloudera Motif DNA Finding Algorithm". International Journal of Information Technology and Computer Science 15, no. 4 (August 8, 2023): 10–18. http://dx.doi.org/10.5815/ijitcs.2023.04.02.

Abstract: Many systems for studying gene function depend on DNA motifs. DNA motif finding generates a large number of trials, which makes it complex. Regulation of gene expression is identified according to Transcription Factor Binding Sites (TFBSs). Different algorithms have been described over the past decades to obtain an accurate motif tool. The major problems for these algorithms are the execution time and the memory size, which depend on the probabilistic approaches. Our previous algorithm, called EIMF, was recently proposed to overcome these problems by rearranging data. Because cloud computing involves many resources, the challenge of mapping jobs to infinite computing resources is an NP-hard optimization problem. In this paper, we propose an Impala framework for solving motif finding algorithms in single- and multi-user settings based on cloud computing. Also, a comparison between the cloud motif and previous EIMF algorithms is performed in three different motif groups. The results show that the Cloudera motif considerably decreased the execution time and the memory size in the experimental groups when compared with the previous EIMF algorithms. The proposed MOTIFSM algorithm based on cloud computing decreases the execution time by approximately 70% compared to the EIMF framework. Memory size is also decreased in MOTIFSM by about 75% compared to EIMF.

29. Michael, A. Andrew, and Thiagarajan M. "Improving Expected Time Matrix based Queuing Theory for Cache Arrived Probabilistic Execution Model to Reduce Server Utilization for Customer Idle Time". Journal of Advanced Research in Dynamical and Control Systems 12, no. 5 (May 30, 2020): 54–64. http://dx.doi.org/10.5373/jardcs/v12i5/20201689.

30. Zhou, Wen-Hao, Jun Gao, Zhi-Qiang Jiao, Xiao-Wei Wang, Ruo-Jing Ren, Xiao-Ling Pang, Lu-Feng Qiao, Chao-Ni Zhang, Tian-Huai Yang, and Xian-Min Jin. "Timestamp boson sampling". Applied Physics Reviews 9, no. 3 (September 2022): 031408. http://dx.doi.org/10.1063/5.0066103.

Abstract: Quantum advantage, benchmarking the computational power of quantum machines outperforming all classical computers in a specific task, represents a crucial milestone in developing quantum computers and has been driving different physical implementations since the concept was proposed. A boson sampling machine, an analog quantum computer that only requires multiphoton interference and single-photon detection, is considered to be a promising candidate to reach this goal. However, the probabilistic nature of photon sources and the inevitable loss in the evolution network make the execution time increase exponentially with the problem size. Here, we propose and experimentally demonstrate a timestamp boson sampling scheme that can effectively reduce the execution time for any problem size. By developing a time-of-flight storage technique with a precision up to the picosecond level, we are able to detect and record the complete time information of 30 individual modes out of a large-scale 3D photonic chip. We perform the three-photon injection and one external trigger experiment to demonstrate that the timestamp protocol works properly and effectively reduces the execution time. We further verify that the timestamp boson sampler is distinguished from other samplers in the case of limited datasets through the three heralded single photons injection experiment. The timestamp protocol can speed up the sampling process, which can be widely applied in multiphoton experiments at low sampling rates. The approach, associated with the newly exploited resource of time information, can boost all count-rate-limited experiments, suggesting an emerging field of timestamp quantum optics.

31. Lacerda, Bruno, Fatma Faruq, David Parker, and Nick Hawes. "Probabilistic planning with formal performance guarantees for mobile service robots". International Journal of Robotics Research 38, no. 9 (June 16, 2019): 1098–123. http://dx.doi.org/10.1177/0278364919856695.

Abstract: We present a framework for mobile service robot task planning and execution, based on the use of probabilistic verification techniques for the generation of optimal policies with attached formal performance guarantees. Our approach is based on a Markov decision process model of the robot in its environment, encompassing a topological map where nodes represent relevant locations in the environment, and a range of tasks that can be executed in different locations. The navigation in the topological map is modeled stochastically for a specific time of day. This is done by using spatio-temporal models that provide, for a given time of day, the probability of successfully navigating between two topological nodes, and the expected time to do so. We then present a methodology to generate cost optimal policies for tasks specified in co-safe linear temporal logic. Our key contribution is to address scenarios in which the task may not be achievable with probability one. We introduce a task progression function and present an approach to generate policies that are formally guaranteed to, in decreasing order of priority: maximize the probability of finishing the task; maximize progress towards completion, if this is not possible; and minimize the expected time or cost required. We illustrate and evaluate our approach with a scalability evaluation in a simulated scenario, and report on its implementation in a robot performing service tasks in an office environment for long periods of time.

32. Kim, Youngjoon, and Jiwon Yoon. "MaxAFL: Maximizing Code Coverage with a Gradient-Based Optimization Technique". Electronics 10, no. 1 (December 24, 2020): 11. http://dx.doi.org/10.3390/electronics10010011.

Abstract: Evolutionary fuzzers generally work well with typical software programs because of their simple algorithm. However, there is a limitation that some paths with complex constraints cannot be tested even after long execution. Fuzzers based on concolic execution have emerged to address this issue. The concolic execution fuzzers also have limitations in scalability. Recently, gradient-based fuzzers that use a gradient to mutate inputs have been introduced. Gradient-based fuzzers can be applied to real-world programs and achieve high code coverage. However, there is a problem that the existing gradient-based fuzzers require heavyweight analysis or sufficient learning time. In this paper, we propose a new type of gradient-based fuzzer, MaxAFL, to overcome the limitations of existing gradient-based fuzzers. Our approach constructs an objective function through fine-grained static analysis. After constructing a well-made objective function, we can apply the gradient-based optimization algorithm. We use a modified gradient-descent algorithm to minimize our objective function and propose some probabilistic techniques to escape local optima. We introduce an adaptive objective function which aims to explore various paths in the program. We implemented MaxAFL based on the original AFL. MaxAFL achieved an increase in code coverage per unit time compared with three other fuzzers in six open-source Linux binaries. We also measured cumulative code coverage per total execution, and MaxAFL outperformed the other fuzzers in this metric. Finally, MaxAFL can also find more bugs than the other fuzzers.

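Escaping local optima probabilistically, as this abstract mentions, can be sketched in a few lines. This is a generic illustration of the idea, not MaxAFL's actual update rule; the objective, step size, and perturbation scale are all invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def noisy_descent(grad, x, lr=0.05, steps=300, p_escape=0.1, noise=0.5):
    """Gradient descent with occasional random perturbations as a simple
    probabilistic escape from local optima."""
    for _ in range(steps):
        x = x - lr * grad(x)
        if rng.random() < p_escape:      # probabilistic escape move
            x = x + rng.normal(0.0, noise, x.shape)
    return x

# Example: a bumpy 1-D objective f(x) = x^2 + sin(5x) with several minima.
grad_f = lambda x: 2.0 * x + 5.0 * np.cos(5.0 * x)
print(noisy_descent(grad_f, np.array([2.0])))
```
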
33. Poernomo, Iman, Heinz Schmidt, and Jane Jayaputera. "Verification and Prediction of Timed Probabilistic Properties over the DMTF CIM". International Journal of Cooperative Information Systems 15, no. 04 (December 2006): 633–58. http://dx.doi.org/10.1142/s0218843006001517.

Abstract: Understanding nonfunctional aspects of system behavior is an essential component of practical software development and maintenance. Many nonfunctional system properties, such as reliability and availability, involve time and probabilities. In this paper, we present a framework for runtime verification and prediction of timed and probabilistic nonfunctional properties of component-based architectures, built using the Meta-Object Facility and the Distributed Management Task Force's Common Information Model (CIM) standard. We describe a Microsoft .NET-based implementation of our framework. We define a language for describing timed probabilistic behavior based on Probabilistic Computational Tree Logic (PCTL). We provide a formal semantics for this language in terms of observed application execution traces. The semantics is interesting in that it permits checking of required timing behavior both over the overall average of traces and also over local "trends" in traces. The latter aspect of the semantics is achieved by incorporating exponential smoothing prediction techniques into the truth function for statements of our language. The semantics is generic over the aspects of an application that are represented by states and state transitions. This enables the language to be used to describe a wide range of nonfunctional properties for runtime verification and prediction purposes. We explain how statements of our language are used to define precise contracts for system monitoring, through relating the semantics to an extended CIM monitoring infrastructure.

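The exponential smoothing ingredient of that semantics is simple to state in isolation. A minimal sketch, with an invented latency bound and smoothing factor:

```python
def smoothed_trend(observations, alpha=0.3):
    """Exponentially smoothed estimate of the next observation."""
    s = observations[0]
    for x in observations[1:]:
        s = alpha * x + (1.0 - alpha) * s
    return s

# Truth of a hypothetical timed property "predicted response time stays
# below 50 ms", evaluated on the smoothed local trend rather than the raw
# last observation.
latencies_ms = [38.0, 41.0, 47.0, 44.0, 52.0, 49.0]
print(smoothed_trend(latencies_ms) < 50.0)
```
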
34. Potli, Manohar, and Chandrasekhar Reddy Atla. "Stochastic Diffusion Process-based Multi-Level Monte Carlo for Predictive Reliability Assessment of Distribution System". U.Porto Journal of Engineering 7, no. 4 (November 26, 2021): 87–102. http://dx.doi.org/10.24840/2183-6493_007.004_0007.

Abstract: Reliability assessment of electrical distribution systems is an important criterion to determine system performance in terms of interruptions. Probabilistic assessment methods are usually used in reliability analysis to deal with uncertainties. These techniques require a longer execution time in order to account for uncertainty. Multi-Level Monte Carlo (MLMC) is an advanced Monte Carlo Simulation (MCS) approach to improve accuracy and reduce the execution time. This paper provides a systematic approach to model the static and dynamic uncertainties of Time to Failure (TTF) and Time to Repair (TTR) of power distribution components using a stochastic diffusion process. Further, the stochastic diffusion process is integrated into MLMC to estimate the impacts of uncertainties on reliability indices. The Euler–Maruyama path discretization is applied to evaluate the solution of the stochastic diffusion process. The proposed stochastic diffusion process-based MLMC method is integrated into a systematic failure identification technique to evaluate the distribution system reliability. The proposed method is validated with analytical and sequential MCS methods for the IEEE Roy Billinton Test Systems. Finally, the numerical results show the accuracy and fast convergence rates in handling uncertainties compared to the sequential MCS method.

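The combination of Euler–Maruyama discretization and MLMC can be illustrated on a toy diffusion. The sketch below estimates E[X_T] for geometric Brownian motion with a two-level telescoping sum; the drift, volatility, level sizes, and sample counts are invented, and a production estimator would choose levels and sample counts adaptively.

```python
import numpy as np

rng = np.random.default_rng(42)

def euler_endpoints(n_steps, n_paths, x0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Euler-Maruyama endpoints of the diffusion dX = mu*X dt + sigma*X dW."""
    dt = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        x += mu * x * dt + sigma * x * rng.normal(0.0, np.sqrt(dt), n_paths)
    return x

def coupled_correction(n_coarse, n_paths, x0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Samples of P_fine - P_coarse driven by the SAME Brownian increments;
    this coupling is what gives MLMC its variance reduction."""
    dt = T / (2 * n_coarse)
    xf = np.full(n_paths, x0)
    xc = np.full(n_paths, x0)
    for _ in range(n_coarse):
        dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
        dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
        xf += mu * xf * dt + sigma * xf * dw1              # two fine steps
        xf += mu * xf * dt + sigma * xf * dw2
        xc += mu * xc * 2 * dt + sigma * xc * (dw1 + dw2)  # one coarse step
    return xf - xc

# Two-level telescoping estimate E[P_1] = E[P_0] + E[P_1 - P_0]:
# many cheap coarse samples, few expensive coupled corrections.
estimate = euler_endpoints(4, 200_000).mean() + coupled_correction(4, 20_000).mean()
print(f"MLMC estimate of E[X_T]: {estimate:.4f}")  # exact: exp(0.05) ~ 1.0513
```
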
35. Srivastava, Rohini, Basant Kumar, Fayadh Alenezi, Adi Alhudhaif, Sara A. Althubiti, and Kemal Polat. "Automatic Arrhythmia Detection Based on the Probabilistic Neural Network with FPGA Implementation". Mathematical Problems in Engineering 2022 (March 22, 2022): 1–11. http://dx.doi.org/10.1155/2022/7564036.

Abstract: This paper presents a prototype implementation of arrhythmia classification using a probabilistic neural network (PNN). Arrhythmia is an irregular heartbeat, resulting in severe heart problems if not diagnosed early. Therefore, accurate and robust arrhythmia classification is a vital task for cardiac patients. The classification of ECG signals into eight classes has been performed using PNN with a unique combination of six ECG features: heart rate, spectral entropy, and 4th-order autoregressive coefficients. In addition, an FPGA implementation has been proposed to prototype the complete arrhythmia classification system. An Artix-7 board has been used for the FPGA implementation for easy and fast execution of the proposed arrhythmia classification. As a result, the average accuracy for ECG classification is found to be 98.27%, and the time consumed in the classification is found to be 17 seconds.

36. Shalini, Sheel, and Kanhaiya Lal. "Mining Changes in Temporal Patterns in Latest Time Window for Knowledge Discovery". Journal of Information & Knowledge Management 18, no. 03 (September 2019): 1950028. http://dx.doi.org/10.1142/s021964921950028x.

Abstract: Temporal Association Rule mining uncovers time integrated associations in a transactional database. However, in an environment where the database is regularly updated, maintenance of rules is a challenging process. Earlier algorithms suggested for maintaining frequent patterns either suffered from the problem of repeated scanning or the problem of larger storage space. Therefore, this paper proposes an algorithm "Probabilistic Incremental Temporal Association Rule Mining (PITARM)" that uncovers the changed behaviour in an updated database to maintain the rules efficiently. The proposed algorithm defines two support measures to identify itemsets expected to be frequent in the successive segment in advance. It reduces unnecessary scanning of itemsets in the entire database through three-fold verification and avoids generating redundant supersets and power sets from infrequent itemsets. Implementation of the pruning technique in incremental mining is a novel approach that makes it better than earlier incremental mining algorithms and consequently reduces the search space to a great extent. It scans the entire database only once, thus reducing execution time. Experimental results confirm that it is an enhancement over earlier algorithms.

37. Kuźmiński, Łukasz, Zdzisław Kes, Veselin Draskovic, Andrzej Gawlik, Marcin Rabe, Katarzyna Widera, Agnieszka Łopatka, and Maciej Śniegowski. "Modelling of the Risk of Budget Variances of Cost Energy Consumption Using Probabilistic Quantification". Energies 16, no. 5 (March 5, 2023): 2477. http://dx.doi.org/10.3390/en16052477.

Abstract: Budgets in organisational units are considered to be traditional management support tools. On the other hand, budgetary control is the essence of control measures, allowing for the increase in the efficiency of an enterprise through appropriate allocation of resources. The methodology used in the analysis of budget variances (obtained as a result of applying budgetary control) undoubtedly influences the management efficiency of almost every organizational unit. The authors indicate a research gap of methodological and application nature in the area of risk measurement in the analysis of budget variances. Therefore, the aim of the article is to create universal and flexible models enabling probabilistic quantification of the risk of budget variance regardless of the nature of the cost, the person budgeting and the budgeting unit. Extreme value theory was used to develop the model. The results of the work are models allowing for the estimation of the limit level of deviation for assumed probabilities and models determining the level of deviation for a given probability level. The application of these models in budgetary control will allow for a synthetic assessment of the degree of budget execution in the company, comparing the quality of budget execution over time as well as between units, defining the limits of materiality of budget variances. For the purpose of model verification, the authors have used budget variances of cost energy consumption, which have been determined on the basis of empirical distributions obtained from data coming from the system of budgetary control implemented at a university located in a larger European city.

38. Lukács, Dániel, Gergely Pongrácz, and Máté Tejfel. "Model Checking-Based Performance Prediction for P4". Electronics 11, no. 14 (July 6, 2022): 2117. http://dx.doi.org/10.3390/electronics11142117.

Abstract: Next-generation networks focus on scale and scope at the price of increasing complexity, leading to difficulties in network design and planning. As a result, anticipating all hardware- and software-related factors of network performance requires time-consuming and expensive benchmarking. This work presents a framework and software tool for automatically inferring the performance of P4 programmable network switches based on the P4 source code and probabilistic models of the execution environment, with the hope of eliminating the requirement of the costly set-up of networked hardware and conducting benchmarks. We designed the framework using a top-down approach. First, we transform high-level P4 programs into a representation that can be refined incrementally by adding probabilistic environment models of increasing levels of complexity in order to improve the estimation precision. Then, we use the PRISM probabilistic model checker to perform the heavyweight calculations involved in static performance prediction. We present a formalization of the performance estimation problem, detail our solution, and illustrate its usage and validation through a case study conducted using a small P4 program and the P4C-BM reference switch. We show that the framework is already capable of performing estimation, and it can be extended with more concrete information to yield better estimates.

39. Rohiem, Nasyith Hananur, Adi Soeprijanto, Dimas Fajar Uman Putra, Mat Syai'in, Irrine Budi Sulistiawati, Muhammad Zahoor, and Luqman Ali Shah. "Resolving Economic Dispatch with Uncertainty Effect in Microgrids Using Hybrid Incremental Particle Swarm Optimization and Deep Learning Method". Proceedings of the Pakistan Academy of Sciences: A. Physical and Computational Sciences 58, S (December 7, 2021): 119–29. http://dx.doi.org/10.53560/ppasa(58-sp1)762.

Abstract: Microgrids are one example of a low voltage distributed generation pattern that can cover a variety of energy sources, such as conventional generators and renewable energy. Economic dispatch (ED) is an important function and a key part of power system operation in microgrids. There are several procedures to find the optimum generation. The first step is to find every feasible state (FS) for thermal generator ED. The second step is to find the optimum generation based on the FS using incremental particle swarm optimization (IPSO), where the FS assumes that all units are activated. The third step is to train the inputs and outputs of the IPSO into deep learning (DL). And the last step is to compare the DL output with IPSO. The microgrid system in this paper considered 10 thermal units and a wind plant with power generation based on probabilistic data. IPSO shows good results by being capable of generating a total generation matching the load requirement every hour for 24 h. However, IPSO has a weakness in execution time: over 10 experiments, the average IPSO run takes 30 min. DL based on IPSO can make the execution of the ED function faster with an 11-input, 10-output architecture. In the same experiments, DL can produce the same output as IPSO but with a faster execution time. On the total cost side, wind energy reduces the total cost to USD 22.86 million with IPSO and USD 22.89 million with DL.

40

Kadnova, A. M. "COMPUTER SCIENCE, COMPUTER ENGINEERING AND MANAGEMENT METHODICAL APPROACH TO EVALUATING THE PROBABILISTIC TIME PERFORMANCE INDICATOR OF AUTOMATED ADMINISTRATOR OPERATIONS IN INFORMATION PROTECTION SYSTEMS". Herald of Dagestan State Technical University. Technical Sciences 46, n.º 3 (24 de noviembre de 2019): 87–96. http://dx.doi.org/10.21822/2073-6185-2019-46-3-87-96.

Texto completo
Resumen
Objectives At present, in accordance with the requirements of the guiding documents of the Federal Service for Technical and Export Control (FSTEC) of Russia, as well as international standards in the development and operation of protected automated systems, it is necessary to evaluate the effectiveness (general utility) of information protection systems. The article is devoted to the development of a method for assessing the ergotechnical characteristics of software information security systems for use the assessment of the general utility of such systems. The aim of the work is to develop a methodology for assessing the probabilistic indicator of the timeliness of typical operations for the administration of information security systems.Method To achieve this goal, user groups were created in order to perform typical administrative operations within the information protection system. The operation time for each group, recorded using the IOGraphV1.0.1 tool, was utilised to calculate the probabilities of timely execution of typical operations by the administrator according to a truncated normal distribution formula.Results An assessment of a probabilistic indicator was carried out in order to evaluate the timeliness of operations performed by the administrator of the information protection system.Conclusion The results can be used in a comprehensive assessment of the effectiveness (reliability) of the automated functioning of information security software systems when modelling and analysing the security of special-purpose informatisation facilities.
APA, Harvard, Vancouver, ISO, and other styles
41

Rempel, Sergej, Marcus Ricker, and Josef Hegger. "Safety Concept for Textile-Reinforced Concrete Structures with Bending Load". Applied Sciences 10, no. 20 (October 20, 2020): 7328. http://dx.doi.org/10.3390/app10207328.

Full text
Abstract
In most countries, the production and execution of concrete structures with textile reinforcement requires building owners to hold either a general approval (e.g., "abZ" in Germany) or an individual license (e.g., "ZiE" in Germany). Therefore, it is quite common for building authorities to request experimental tests that evaluate the ultimate limit state (ULS) and the serviceability limit state (SLS). However, these experimental tests are elaborate, time-consuming, and expensive. A practical and simple design model would help to reduce the number of tests needed and would offer structural planners a useful tool. An important aspect is that such a design model must fulfil a set of reliability requirements in order to guarantee an adequate safety standard. To this end, probabilistic calculations are required. For the setup of such a model, different parameters must be considered, namely the effective depth d, the tensile failure stress of the textile ft, and the concrete compressive strength fc. This article presents the probabilistic calculations needed to attain a general safety factor γT that satisfies all the safety requirements for the textile reinforcement of concrete structures under bending load.
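A minimal sketch of the kind of probabilistic calculation involved, assuming toy resistance and load-effect models rather than the paper's bending model: Monte Carlo sampling of the scattered parameters d and ft yields a failure probability, which is the quantity a safety factor such as γT must keep below the target level.

# Toy Monte Carlo reliability estimate; all distributions and the
# resistance/load proxies are invented for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
f_t = rng.normal(1400.0, 140.0, n)     # textile tensile strength, MPa (assumed scatter)
d   = rng.normal(45.0, 2.0, n)         # effective depth, mm (assumed scatter)
R = f_t * d                            # resistance proxy; the paper's model differs
E = rng.gumbel(40_000.0, 4_000.0, n)   # load-effect proxy (extreme-value type)
p_f = np.mean(R < E)                   # estimated probability of failure
print(f"Monte Carlo P_f ~ {p_f:.2e}")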
APA, Harvard, Vancouver, ISO, and other styles
42

Blackmore, Lars, Masahiro Ono, and Brian C. Williams. "Chance-Constrained Optimal Path Planning With Obstacles". IEEE Transactions on Robotics 27, no. 6 (December 2011): 1080–94. http://dx.doi.org/10.1109/tro.2011.2161160.

Full text
Abstract
Autonomous vehicles need to plan trajectories to a specified goal that avoid obstacles. For robust execution, we must take into account uncertainty, which arises due to uncertain localization, modeling errors, and disturbances. Prior work handled the case of set-bounded uncertainty. We present here a chance-constrained approach, which uses instead a probabilistic representation of uncertainty. The new approach plans the future probabilistic distribution of the vehicle state so that the probability of failure is below a specified threshold. Failure occurs when the vehicle collides with an obstacle or leaves an operator-specified region. The key idea behind the approach is to use bounds on the probability of collision to show that, for linear-Gaussian systems, we can approximate the nonconvex chance-constrained optimization problem as a disjunctive convex program. This can be solved to global optimality using branch-and-bound techniques. In order to improve computation time, we introduce a customized solution method that returns almost-optimal solutions along with a hard bound on the level of suboptimality. We present an empirical validation with an aircraft obstacle avoidance example.
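For linear-Gaussian systems, the probability of violating a single linear constraint (one obstacle face) reduces to a Gaussian tail evaluation, which is the building block behind the collision-probability bounds used here; the numbers below are invented for illustration (Python/SciPy).

import numpy as np
from scipy.stats import norm

mu = np.array([2.0, 1.0])            # predicted position mean (invented)
Sigma = np.array([[0.30, 0.05],
                  [0.05, 0.20]])     # predicted position covariance (invented)
a, b = np.array([1.0, 0.0]), 3.0     # safe region: a @ x <= b (one obstacle face)
std = np.sqrt(a @ Sigma @ a)
p_violate = 1.0 - norm.cdf((b - a @ mu) / std)   # P(a @ x > b)
print(f"P(violating this constraint) = {p_violate:.4f}")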
APA, Harvard, Vancouver, ISO, and other styles
43

Li, Keqin. "Average-Case Scalability Analysis of Parallel Computations on k-Ary d-Cubes". Journal of Interconnection Networks 05, no. 01 (March 2004): 27–45. http://dx.doi.org/10.1142/s0219265904001015.

Full text
Abstract
We investigate the average-case scalability of parallel algorithms executing on multicomputer systems whose static networks are k-ary d-cubes. Our performance metrics are the isoefficiency function and isospeed scalability. For the purpose of average-case performance analysis, we formally define the concepts of average-case isoefficiency function and average-case isospeed scalability. By modeling parallel algorithms on multicomputers using task interaction graphs, we are mainly interested in the effects of communication overhead and load imbalance on the performance of parallel computations. We focus on the topology of static networks whose limited connectivities are constraints to high performance. In our probabilistic model, task computation and communication times are treated as random variables, so that we can analyze the average-case performance of parallel computations. We derive the expected parallel execution time on symmetric static networks and apply the result to k-ary d-cubes. We characterize the maximum tolerable communication overhead such that constant average-case efficiency and average-case average-speed can be maintained while the number of tasks grows at the rate Θ(P log P), where P is the number of processors. It is found that the scalability of a parallel computation is essentially determined by the topology of the static network, i.e., the architecture of the parallel computer system. We also argue that, under our probabilistic model, the number of tasks should grow at least at the rate of Θ(P log P) so that constant average-case efficiency and average-case average-speed can be maintained.
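A quick numeric check of the Θ(P log P) claim, under a toy cost model (invented constants) in which per-processor communication overhead grows logarithmically with P, shows the efficiency holding roughly constant when the task count grows as P log P:

import math

def efficiency(n_tasks, P, t_comp=1.0, t_comm=0.1):
    # toy model: perfect load balance plus a log-P communication term
    T_par = (n_tasks / P) * t_comp + t_comm * math.log2(P)
    return (n_tasks * t_comp) / (P * T_par)

for P in (16, 64, 256, 1024):
    n = int(P * math.log2(P))      # grow the task count as P log P
    print(P, round(efficiency(n, P), 3))   # efficiency stays ~constant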
APA, Harvard, Vancouver, ISO, and other styles
44

Li, Siqiao, Tadashi Dohi, and Hiroyuki Okamura. "A Comprehensive Analysis of Proportional Intensity-Based Software Reliability Models with Covariates". Electronics 11, no. 15 (July 28, 2022): 2353. http://dx.doi.org/10.3390/electronics11152353.

Full text
Abstract
This paper focuses on the so-called proportional intensity-based software reliability models (PI-SRMs), which are extensions of the common non-homogeneous Poisson process (NHPP)-based SRMs and describe the probabilistic behavior of the software fault-detection process by incorporating the time-dependent software metrics data observed in the development process. Specifically, we generalize the seminal PI-SRM in Rinsaka et al. (2006) by introducing eleven well-known fault-detection time distributions, and investigate their goodness-of-fit and predictive performances. In numerical illustrations with four data sets collected in real software development projects, we utilize maximum likelihood estimation to estimate model parameters with three time-dependent covariates (test execution time, failure identification work, and computer time for failure identification), and examine the performance of our PI-SRMs in comparison with the existing NHPP-based SRMs without covariates. It is shown that our PI-SRMs can give better goodness-of-fit and predictive performances in many cases.
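The proportional-intensity structure multiplies a baseline NHPP intensity by an exponential link over the time-dependent covariates. The sketch below assumes a Goel-Okumoto-style exponential baseline and made-up coefficients, purely to show the shape of the model:

import numpy as np

def pi_intensity(t, x_t, beta, a=100.0, b=0.05):
    lam0 = a * b * np.exp(-b * t)      # assumed exponential (Goel-Okumoto) baseline
    return lam0 * np.exp(beta @ x_t)   # proportional scaling by covariates x(t)

beta = np.array([0.8, 0.3, 0.1])       # illustrative regression coefficients
x_t  = np.array([1.2, 0.5, 0.9])       # e.g., test time, failure-id work, computer time
print(pi_intensity(10.0, x_t, beta))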
APA, Harvard, Vancouver, ISO, and other styles
45

Kang, Bong Gu, Kyung-Min Seo, and Tag Gon Kim. "Machine learning-based discrete event dynamic surrogate model of communication systems for simulating the command, control, and communication system of systems". SIMULATION 95, no. 8 (November 28, 2018): 673–91. http://dx.doi.org/10.1177/0037549718809890.

Full text
Abstract
Command and control (C2) and communication are at the heart of successful military operations in network-centric warfare. Interoperable simulation of a C2 system model and a communication (C) system model may be employed to interactively analyze their detailed behaviors. However, such simulation would be inefficient in simulation time for analysis of combat effectiveness of the C2 model against possible input combinations while considering the communication effect in combat operations. This study proposes a discrete event dynamic surrogate model (DEDSM) for the C model, which would be integrated with the C2 model and simulated. The proposed integrated simulation reduces execution time markedly in analysis of combat effectiveness without sacrificing the accuracy reflecting the communication effect. We hypothesize the DEDSM as a probabilistic priority queuing model whose semantics is expressed in a discrete event systems specification model with some characteristic functions unknown. The unknown functions are identified by machine learning with a data set generated by interoperable simulation of the C2 and C models. The case study with the command, control, and communication system of systems first validates the proposed approach through an equivalence test between the interoperable simulation and the proposed one. It then compares the simulation execution times and the number of events exchanged between the two simulations.
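The surrogate concept, a probabilistic priority queue whose characteristic functions are learned from co-simulation data, can be caricatured in a few lines; here the learned service-delay function is replaced by an invented random draw, which is precisely the component the paper identifies by machine learning:

import heapq
import random

random.seed(0)

def service_delay(priority):
    # stand-in for the machine-learned characteristic function
    return random.expovariate(2.0 if priority == 0 else 1.0)

clock, queue = 0.0, []
for i in range(5):
    heapq.heappush(queue, (random.randint(0, 1), i))   # (priority, message id)
while queue:
    prio, msg = heapq.heappop(queue)                   # higher priority (0) first
    clock += service_delay(prio)
    print(f"msg {msg} (prio {prio}) delivered at t={clock:.2f}")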
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Xing, Umit Ogras, and Chaitali Chakrabarti. "Probabilistic Risk-Aware Scheduling with Deadline Constraint for Heterogeneous SoCs". ACM Transactions on Embedded Computing Systems 21, no. 2 (March 31, 2022): 1–27. http://dx.doi.org/10.1145/3489409.

Full text
Abstract
Hardware Trojans can compromise System-on-Chip (SoC) performance. Protection schemes implemented to combat these threats cannot guarantee a 100% detection rate and may also introduce performance overhead. This paper defines the risk of running a job on an SoC as a function of the misdetection rate of the hardware Trojan detection methods implemented on the cores in the SoC. Given the user-defined deadlines of each job, our goal is to minimize the job-level risk as well as the deadline violation rate for both static and dynamic scheduling scenarios. We assume that there is no relationship between the execution time and risk of a task executed on a core. Our risk-aware scheduling algorithm first calculates the probability of possible task allocations and then uses it to derive the task-level deadlines. Each task is then allocated to the core with minimum risk that satisfies the task-level deadline. In addition, in dynamic scheduling, where multiple jobs are injected randomly, we propose to explicitly operate with a reduced virtual deadline to avoid possible future deadline violations. Simulations on randomly generated graphs show that our static scheduler has no deadline violations and achieves 5.1%–17.2% lower job-level risk than the popular Earliest Time First (ETF) algorithm when the deadline constraint is 1.2×–3.0× the makespan of ETF. In the dynamic case, the proposed algorithm achieves a violation rate comparable to that of Earliest Deadline First (EDF), an algorithm optimized for dynamic scenarios. Even when the injection rate is high, it outperforms EDF with 8.4%–10% lower risk when the deadline is 1.5×–3.0× the makespan of ETF.
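A minimal sketch of the core allocation rule described above (Python; the core names, risks, and times are invented): among the cores whose estimated finish time meets the task-level deadline, pick the one with the lowest risk.

def allocate(exec_time, deadline, cores):
    # cores: name -> (risk, next_free_time); risk = Trojan misdetection rate
    feasible = [(risk, free, c) for c, (risk, free) in cores.items()
                if free + exec_time[c] <= deadline]
    if not feasible:
        return None                      # no core can meet the task-level deadline
    return min(feasible)[2]              # lowest-risk feasible core

exec_time = {"A": 4.0, "B": 6.0}         # per-core execution times (illustrative)
cores = {"A": (0.05, 1.0), "B": (0.01, 0.0)}
print(allocate(exec_time, 8.0, cores))   # -> 'B': both meet the deadline, B is safer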
APA, Harvard, Vancouver, ISO, and other styles
47

Canesche, Michael, Westerley Carvalho, Lucas Reis, Matheus Oliveira, Salles Magalhães, Peter Jamieson, Jaugusto M. Nacif, and Ricardo Ferreira. "You Only Traverse Twice: A YOTT Placement, Routing, and Timing Approach for CGRAs". ACM Transactions on Embedded Computing Systems 20, no. 5s (October 31, 2021): 1–25. http://dx.doi.org/10.1145/3477038.

Full text
Abstract
Coarse-grained reconfigurable architecture (CGRA) mapping involves three main steps: placement, routing, and timing. The mapping is an NP-complete problem, and a common strategy is to decouple the process into its independent steps. This work focuses on the placement step, and its aim is to propose a technique that is both reasonably fast and leads to high-performance solutions. Furthermore, a near-optimal placement simplifies the subsequent routing and timing steps. Exact solutions cannot find placements in a reasonable execution time as input designs increase in size. Heuristic solutions include meta-heuristics, such as Simulated Annealing (SA), and fast, straightforward greedy heuristics based on graph traversal. However, as these approaches are probabilistic and have a large design space, it is not easy to provide both run-time efficiency and good solution quality. We propose a graph traversal heuristic that provides the best of both: high-quality placements similar to SA and the execution time of graph traversal approaches. Our placement introduces novel ideas based on the "you only traverse twice" (YOTT) approach, which performs a two-step graph traversal. The first traversal generates annotated data to guide the second step, which greedily performs the placement, node by node, aided by the annotated data and target architecture constraints. We introduce three new concepts to implement this technique: I/O and reconvergence annotation, degree matching, and look-ahead placement. Our analysis of this approach explores the placement execution time/quality trade-offs. We point out insights on how to analyze graph properties during dataflow mapping. Our results show that YOTT is 60.6×, 9.7×, and 2.3× faster than a high-quality SA, the bounding-box SA of VPR, and multi-single traversal placements, respectively. Furthermore, YOTT reduces the average wire length and the maximal FIFO size (an additional timing requirement on CGRAs) to avoid delay mismatches in fully pipelined architectures.
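The two-traversal idea can be sketched as a first pass that annotates nodes (here plain BFS levels, a much cruder stand-in for the paper's I/O and reconvergence annotations) and a second pass that greedily places each node using those annotations:

from collections import deque

def two_pass_place(graph, root, slots):
    # pass 1: annotate nodes with BFS levels
    level, frontier, order = {root: 0}, deque([root]), []
    while frontier:
        u = frontier.popleft()
        order.append(u)
        for v in graph.get(u, []):
            if v not in level:
                level[v] = level[u] + 1
                frontier.append(v)
    # pass 2: greedy placement guided by the annotations
    placement, used = {}, set()
    for u in order:
        s = min((s for s in slots if s not in used), key=lambda s: abs(s - level[u]))
        placement[u] = s
        used.add(s)
    return placement

g = {"in": ["add"], "add": ["mul"], "mul": ["out"]}
print(two_pass_place(g, "in", [0, 1, 2, 3]))   # {'in': 0, 'add': 1, 'mul': 2, 'out': 3}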
APA, Harvard, Vancouver, ISO, and other styles
48

Girtler, Jerzy. "Application of theory of semi-Markov processes to determining distribution of probabilistic process of marine accidents resulting from collision of ships". Polish Maritime Research 21, no. 1 (January 1, 2013): 9–13. http://dx.doi.org/10.2478/pomr-2014-0002.

Full text
Abstract
This paper presents a possible application of the theory of semi-Markov processes to elaborating an eight-state model of the process of occurrence of the serviceability state and unserviceability states of sea-going ships making critical manoeuvres while entering and leaving ports. The analysis takes into account that sea-going ships are in service for a very long time t (t → ∞). The model was elaborated to determine the probability (P0) of correct execution of critical manoeuvres during a ship's entering and leaving of the port, as well as the probabilities Pj (j = 1, 2, 3, …, 7) of incorrect execution of critical manoeuvres by a ship, which leads to marine accidents. It was assumed that such accidents result from: a ship's grounding on the port approach fairway, collision with a ship on the port approach fairway, collision with a pierhead while passing through the port entrance, collision with a hydrotechnical structure while passing through port channels, collision with a port quay while coming alongside it, and collision with a ship already moored to the quay. The probability P0 was taken as a measure of safe execution of a critical manoeuvre; it characterizes the possibility of avoiding any collision during a ship's entering and leaving of the port. The probability Pa = 1 - P0 was taken as a measure of the occurrence of a collision and, consequently, a marine accident; it was interpreted as the sum of the probabilities Pj (j = 1, 2, 3, …, 7) of occurrence of all the selected events. In summary, attention is drawn to the merits of the approach which, in the opinion of the author, are crucial for research on the real process of accidents during a sea-going ship's entering and leaving of a port in difficult navigation conditions.
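For a semi-Markov process, the limiting state probabilities combine the stationary distribution p of the embedded Markov chain with the mean sojourn times m, as pi_j = p_j m_j / sum_i p_i m_i. The sketch below uses an invented 3-state toy (the paper's model has 8 states):

import numpy as np

P = np.array([[0.0, 0.7, 0.3],
              [0.9, 0.0, 0.1],
              [0.8, 0.2, 0.0]])     # embedded transition matrix (invented)
m = np.array([5.0, 1.0, 2.0])       # mean sojourn times per state (invented)

w, v = np.linalg.eig(P.T)           # stationary distribution of the embedded chain
p = np.real(v[:, np.argmax(np.real(w))])
p /= p.sum()
pi = p * m / (p * m).sum()          # limiting semi-Markov probabilities
print(pi)                           # pi[0] plays the role of P0 here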
APA, Harvard, Vancouver, ISO, and other styles
49

Khosravi, Faramarz, Alexander Rass, and Jürgen Teich. "Efficient Computation of Probabilistic Dominance in Multi-objective Optimization". ACM Transactions on Evolutionary Learning and Optimization 1, no. 4 (December 31, 2021): 1–26. http://dx.doi.org/10.1145/3469801.

Full text
Abstract
Real-world problems typically require the simultaneous optimization of multiple, often conflicting objectives. Many of these multi-objective optimization problems are characterized by wide ranges of uncertainties in their decision variables or objective functions. To cope with such uncertainties, stochastic and robust optimization techniques are widely studied, aiming to distinguish candidate solutions with uncertain objectives specified by confidence intervals, probability distributions, sampled data, or uncertainty sets. In this scope, this article first introduces a novel empirical approach for the comparison of candidate solutions with uncertain objectives that can follow arbitrary distributions. The comparison is performed through accurate and efficient calculations of the probability that one solution dominates the other in terms of each uncertain objective. Second, such an operator can be flexibly used and combined with many existing multi-objective optimization frameworks and techniques by just substituting their standard comparison operator, thus easily enabling the Pareto front optimization of problems with multiple uncertain objectives. Third, a new benchmark for evaluating uncertainty-aware optimization techniques is introduced by incorporating different types of uncertainties into a well-known benchmark for multi-objective optimization problems. Fourth, the new comparison operator and benchmark suite are integrated into an existing multi-objective optimization framework that features a selection of multi-objective optimization problems and algorithms. Fifth, the efficiency in terms of performance and execution time of the proposed comparison operator is evaluated on the introduced uncertainty benchmark. Finally, statistical tests are applied, giving evidence of the superiority of the new comparison operator in terms of ε-dominance and attainment surfaces in comparison to previously proposed approaches.
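With sampled objective values, the dominance probability per objective can be estimated empirically; a minimal sketch (Python, synthetic Gaussian samples standing in for the uncertain objectives) computes an all-pairs estimate of P(A < B) for a minimized objective:

import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(10.0, 2.0, n)   # sampled values of one objective for solution A
y = rng.normal(11.0, 3.0, n)   # sampled values of the same objective for solution B
# all-pairs empirical estimate of P(A < B) on a minimized objective
p_dom = np.mean(x[:, None] < y[None, :])
print(f"P(A beats B on this objective) ~ {p_dom:.3f}")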
APA, Harvard, Vancouver, ISO, and other styles
50

Qu, Chengzhi, Yan Zhang, Xin Zhang, and Yang Yang. "Reinforcement Learning-Based Data Association for Multiple Target Tracking in Clutter". Sensors 20, no. 22 (November 18, 2020): 6595. http://dx.doi.org/10.3390/s20226595.

Full text
Abstract
Data association is a crucial component of multiple target tracking, in which it must be determined whether each measurement obtained by the sensor belongs to a target. However, many methods reported in the literature cannot ensure both accuracy and low computational complexity during the association process, especially in the presence of dense clutter. In this paper, a novel data association method based on reinforcement learning (RL), the so-called RL-JPDA method, is proposed to solve the aforementioned problem. In the presented method, RL is leveraged to acquire the available information of measurements. In addition, the motion characteristics of the targets are utilized to ensure the accuracy of the association results. Experiments are performed to compare the proposed method with the global nearest neighbor data association method, the joint probabilistic data association method, the fuzzy optimal membership data association method, and the intuitionistic fuzzy joint probabilistic data association method. The results show that the proposed method yields a shorter execution time than the other methods. Furthermore, it can obtain an effective and feasible estimation in environments with dense clutter.
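As a loose illustration of how an RL agent can drive association decisions (this is a generic one-step Q-learning update on invented track/measurement names, not the paper's RL-JPDA formulation):

import random

random.seed(0)
Q, alpha, gamma = {}, 0.1, 0.9       # Q-table and learning parameters (invented)

def update(track, meas, reward, best_next):
    key = (track, meas)
    old = Q.get(key, 0.0)
    # standard one-step Q-learning update applied to an association decision
    Q[key] = old + alpha * (reward + gamma * best_next - old)

update("track1", "meas3", 1.0, 0.5)  # reward when the association proves consistent
print(Q)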
APA, Harvard, Vancouver, ISO, and other styles