A selection of scientific literature on the topic "Probabilistic execution time"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, dissertations, reports, and other scientific sources on the topic "Probabilistic execution time".

Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Probabilistic execution time"

1. Tongsima, S., E. H. M. Sha, C. Chantrapornchai, D. R. Surma, and N. L. Passos. "Probabilistic loop scheduling for applications with uncertain execution time". IEEE Transactions on Computers 49, no. 1 (2000): 65–80. http://dx.doi.org/10.1109/12.822565.

2. Draskovic, Stefan, Rehan Ahmed, Pengcheng Huang, and Lothar Thiele. "Schedulability of probabilistic mixed-criticality systems". Real-Time Systems 57, no. 4 (February 21, 2021): 397–442. http://dx.doi.org/10.1007/s11241-021-09365-4.

Abstract:
Mixed-criticality systems often need to fulfill safety standards that dictate different requirements for each criticality level, for example given in the ‘probability of failure per hour’ format. A recent trend suggests designing this kind of systems by jointly scheduling tasks of different criticality levels on a shared platform. When this is done, the usual assumption is that tasks of lower criticality are degraded when a higher criticality task needs more resources, for example when it overruns a bound on its execution time. However, a way to quantify the impact this degradation has on the overall system is not well understood. Meanwhile, to improve schedulability and to avoid over-provisioning of resources due to overly pessimistic worst-case execution time estimates of higher criticality tasks, a new paradigm emerged where task's execution times are modeled with random variables. In this paper, we analyze a system with probabilistic execution times, and propose metrics that are inspired by safety standards. Among these metrics are the probability of deadline miss per hour, the expected time before degradation happens, and the duration of the degradation. We argue that these quantities provide a holistic view of the system's operation and schedulability.
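
The basic object in such analyses is a discrete execution-time distribution, and a deadline-miss probability falls out as tail mass after convolution. As a rough, self-contained illustration of that idea only (not the analysis of the paper above), the following Python sketch convolves two hypothetical per-job execution-time distributions and converts a per-activation miss probability into a per-hour figure under an independence assumption:

    import numpy as np

    # Hypothetical per-job execution-time PMFs; index = execution time in ticks.
    pmf_a = np.array([0.0, 0.6, 0.3, 0.1])   # P(C_a = 1) = 0.6, etc.
    pmf_b = np.array([0.0, 0.5, 0.4, 0.1])

    # Distribution of the combined demand of both jobs (independence assumed).
    pmf_sum = np.convolve(pmf_a, pmf_b)

    deadline = 5                              # ticks available before the deadline
    p_miss = pmf_sum[deadline + 1:].sum()     # P(total demand > deadline)

    # Per-hour metric under independence across activations (illustrative only).
    activations_per_hour = 3600
    p_miss_per_hour = 1.0 - (1.0 - p_miss) ** activations_per_hour
    print(f"P(miss per activation) = {p_miss:.4f}, per hour = {p_miss_per_hour:.4f}")
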
3. Santos, R., J. Santos, and J. Orozco. "Hard Real-Time Systems with Stochastic Execution Times: Deterministic and Probabilistic Guarantees". International Journal of Computers and Applications 27, no. 2 (January 2005): 57–62. http://dx.doi.org/10.1080/1206212x.2005.11441758.

4. Jimenez Gil, Samuel, Iain Bate, George Lima, Luca Santinelli, Adriana Gogonel, and Liliana Cucu-Grosjean. "Open Challenges for Probabilistic Measurement-Based Worst-Case Execution Time". IEEE Embedded Systems Letters 9, no. 3 (September 2017): 69–72. http://dx.doi.org/10.1109/les.2017.2712858.

5. Xiao, Peng, Dongbo Liu, and Kaijian Liang. "Improving scheduling efficiency by probabilistic execution time model in cloud environments". International Journal of Networking and Virtual Organisations 18, no. 4 (2018): 307. http://dx.doi.org/10.1504/ijnvo.2018.093651.

6. Ren, Jiankang, Zichuan Xu, Chao Yu, Chi Lin, Guowei Wu, and Guozhen Tan. "Execution allowance based fixed priority scheduling for probabilistic real-time systems". Journal of Systems and Software 152 (June 2019): 120–33. http://dx.doi.org/10.1016/j.jss.2019.03.001.

7. Lacerda, Bruno, David Parker, and Nick Hawes. "Multi-Objective Policy Generation for Mobile Robots under Probabilistic Time-Bounded Guarantees". Proceedings of the International Conference on Automated Planning and Scheduling 27 (June 5, 2017): 504–12. http://dx.doi.org/10.1609/icaps.v27i1.13865.

Abstract:
We present a methodology for the generation of mobile robot controllers which offer probabilistic time-bounded guarantees on successful task completion, whilst also trying to satisfy soft goals. The approach is based on a stochastic model of the robot’s environment and action execution times, a set of soft goals, and a formal task specification in co-safe linear temporal logic, which are analysed using multi-objective model checking techniques for Markov decision processes. For efficiency, we propose a novel two-step approach. First, we explore policies on the Pareto front for minimising expected task execution time whilst optimising the achievement of soft goals. Then, we use this to prune a model with more detailed timing information, yielding a time-dependent policy for which more fine-grained probabilistic guarantees can be provided. We illustrate and evaluate the generation of policies on a delivery task in a care home scenario, where the robot also tries to engage in entertainment activities with the patients.
8. Chanel, Caroline, Charles Lesire, and Florent Teichteil-Königsbuch. "A Robotic Execution Framework for Online Probabilistic (Re)Planning". Proceedings of the International Conference on Automated Planning and Scheduling 24 (May 11, 2014): 454–62. http://dx.doi.org/10.1609/icaps.v24i1.13669.

Abstract:
Due to the high complexity of probabilistic planning algorithms, roboticists often opt for deterministic replanning paradigms, which can quickly adapt the current plan to the environment's changes. However, probabilistic planning suffers in practice from the common misconception that it is needed to generate complete or closed policies, which would not require to be adapted on-line. In this work, we propose an intermediate approach, which generates incomplete partial policies taking into account mid-term probabilistic uncertainties, continually improving them on a gliding horizon or regenerating them when they fail. Our algorithm is a configurable anytime meta-planner that drives any sub-(PO)MDP standard planner, dealing with all pending and time-bounded planning requests sent by the execution framework from many reachable possible future execution states, in anticipation of the probabilistic evolution of the system. We assess our approach on generic robotic problems and on combinatorial UAVs (PO)MDP missions, which we tested during real flights: emergency landing with discrete and continuous state variables, and target detection and recognition in unknown environments.
9. Fusi, Matteo, Fabio Mazzocchetti, Albert Farres, Leonidas Kosmidis, Ramon Canal, Francisco J. Cazorla, and Jaume Abella. "On the Use of Probabilistic Worst-Case Execution Time Estimation for Parallel Applications in High Performance Systems". Mathematics 8, no. 3 (March 1, 2020): 314. http://dx.doi.org/10.3390/math8030314.

Abstract:
Some high performance computing (HPC) applications exhibit increasing real-time requirements, which call for effective means to predict their high execution times distribution. This is a new challenge for HPC applications but a well-known problem for real-time embedded applications where solutions already exist, although they target low-performance systems running single-threaded applications. In this paper, we show how some performance validation and measurement-based practices for real-time execution time prediction can be leveraged in the context of HPC applications on high-performance platforms, thus enabling reliable means to obtain real-time guarantees for those applications. In particular, the proposed methodology uses coordinately techniques that randomly explore potential timing behavior of the application together with Extreme Value Theory (EVT) to predict rare (and high) execution times to, eventually, derive probabilistic Worst-Case Execution Time (pWCET) curves. We demonstrate the effectiveness of this approach for an acoustic wave inversion application used for geophysical exploration.
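
As general background for the measurement-based approach this abstract mentions, the peaks-over-threshold EVT recipe can be sketched in a few lines: fit a Generalized Pareto Distribution to the exceedances of measured execution times and read off a pWCET estimate at a target exceedance probability. This is a minimal sketch of the generic technique on synthetic data, not the methodology of the paper:

    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(0)
    # Stand-in for measured execution times (microseconds).
    samples = rng.gamma(shape=9.0, scale=10.0, size=20_000)

    # Peaks-over-threshold: fit a GPD to exceedances above a high quantile.
    threshold = np.quantile(samples, 0.99)
    excesses = samples[samples > threshold] - threshold
    c, _, scale = genpareto.fit(excesses, floc=0.0)

    # pWCET at a target exceedance probability per run, e.g. 1e-9.
    p_over_threshold = (samples > threshold).mean()
    target = 1e-9
    q = 1.0 - target / p_over_threshold
    pwcet = threshold + genpareto.ppf(q, c, loc=0.0, scale=scale)
    print(f"pWCET estimate at {target:g}/run: {pwcet:.1f} us")

In practice the hard part, as the abstract notes, is making the measurements representative (e.g. by randomly exploring the potential timing behavior), not the curve fitting itself.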

Dissertations on the topic "Probabilistic execution time"

1. Küttler, Martin, Michael Roitzsch, Claude-Joachim Hamann, and Marcus Völp. "Probabilistic Analysis of Low-Criticality Execution". Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-233117.

Abstract:
The mixed-criticality toolbox promises system architects a powerful framework for consolidating real-time tasks with different safety properties on a single computing platform. Thanks to the research efforts in the mixed-criticality field, guarantees provided to the highest criticality level are well understood. However, lower-criticality job execution depends on the condition that all high-criticality jobs complete within their more optimistic low-criticality execution time bounds. Otherwise, no guarantees are made. In this paper, we add to the mixed-criticality toolbox by providing a probabilistic analysis method for low-criticality tasks. While deterministic models reduce task behavior to constant numbers, probabilistic analysis captures varying runtime behavior. We introduce a novel algorithmic approach for probabilistic timing analysis, which we call symbolic scheduling. For restricted task sets, we also present an analytical solution. We use this method to calculate per-job success probabilities for low-criticality tasks, in order to quantify how low-criticality tasks behave in case of high-criticality jobs overrunning their optimistic low-criticality reservation.
2. Küttler, Martin, Michael Roitzsch, Claude-Joachim Hamann, and Marcus Völp. "Probabilistic Analysis of Low-Criticality Execution". Technische Universität Dresden, 2017. https://tud.qucosa.de/id/qucosa%3A30798.

3. Kumar, Tushar. "Characterizing and controlling program behavior using execution-time variance". Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55000.

Abstract:
Immersive applications, such as computer gaming, computer vision and video codecs, are an important emerging class of applications with QoS requirements that are difficult to characterize and control using traditional methods. This thesis proposes new techniques reliant on execution-time variance to both characterize and control program behavior. The proposed techniques are intended to be broadly applicable to a wide variety of immersive applications and are intended to be easy for programmers to apply without needing to gain specialized expertise. First, we create new QoS controllers that programmers can easily apply to their applications to achieve desired application-specific QoS objectives on any platform or application data-set, provided the programmers verify that their applications satisfy some simple domain requirements specific to immersive applications. The controllers adjust programmer-identified knobs every application frame to effect desired values for programmer-identified QoS metrics. The control techniques are novel in that they do not require the user to provide any kind of application behavior models, and are effective for immersive applications that defy the traditional requirements for feedback controller construction. Second, we create new profiling techniques that provide visibility into the behavior of a large complex application, inferring behavior relationships across application components based on the execution-time variance observed at all levels of granularity of the application functionality. Additionally for immersive applications, some of the most important QoS requirements relate to managing the execution-time variance of key application components, for example, the frame-rate. The profiling techniques not only identify and summarize behavior directly relevant to the QoS aspects related to timing, but also indirectly reveal non-timing related properties of behavior, such as the identification of components that are sensitive to data, or those whose behavior changes based on the call-context.
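
To make the knob/metric interface described here concrete, the sketch below shows a deliberately naive proportional controller that nudges a quality knob each frame toward a target frame time. All names and the control law are illustrative assumptions; the thesis' controllers explicitly go beyond this kind of simple model-free feedback:

    # Hypothetical interface: the application reports a measured frame time
    # each frame and applies the returned quality level.
    class FrameTimeController:
        def __init__(self, target_ms: float, gain: float = 0.1):
            self.target_ms = target_ms
            self.gain = gain
            self.quality = 1.0                       # knob kept in [0.1, 1.0]

        def update(self, measured_ms: float) -> float:
            # Positive error = headroom, so quality may rise; negative = overrun.
            error = (self.target_ms - measured_ms) / self.target_ms
            self.quality = min(1.0, max(0.1, self.quality + self.gain * error))
            return self.quality

    ctrl = FrameTimeController(target_ms=33.3)       # ~30 FPS target
    for measured in (40.0, 38.0, 30.0, 25.0):        # fake frame-time samples
        print(round(ctrl.update(measured), 3))
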
4. Guet, Fabrice. "Étude de l'application de la théorie des valeurs extrêmes pour l'estimation fiable et robuste du pire temps d'exécution probabiliste". Thesis, Toulouse, ISAE, 2017. http://www.theses.fr/2017ESAE0041/document.

Abstract:
Software tasks are time constrained in real-time computing systems. To ensure the safety of the critical system controlled by the real-time system, it is of paramount importance to safely estimate the worst-case execution time of each task. The optimisation components of modern commercial processors reduce the average task execution time, but make the worst-case execution time hard to determine. Many approaches for estimating a task's worst-case execution time exist, but they are usually segregated and hardly generalisable, or come at the price of very complex models. Measurement-based probabilistic timing analysis approaches are said to be easy and fast, but they suffer from a lack of systematism and confidence in their estimates. This thesis studies the conditions under which extreme value theory can be applied to a sequence of execution time measurements for the estimation of the probabilistic worst-case execution time, leading to the development of the diagxtrm tool. The capabilities and limits of the tool are studied using various sequences of measurements from different real-time systems. Finally, methods are provided for determining measurement conditions that foster the application of the theory and give more confidence in the estimates.

Book chapters on the topic "Probabilistic execution time"

1. Guan, Ji, and Nengkun Yu. "A Probabilistic Logic for Verifying Continuous-time Markov Chains". In Tools and Algorithms for the Construction and Analysis of Systems, 3–21. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99527-0_1.

Abstract:
A continuous-time Markov chain (CTMC) execution is a continuous class of probability distributions over states. This paper proposes a probabilistic linear-time temporal logic, namely continuous-time linear logic (CLL), to reason about the probability distribution execution of CTMCs. We define the syntax of CLL on the space of probability distributions. The syntax of CLL includes multiphase timed until formulas, and the semantics of CLL allows time reset to study relatively temporal properties. We derive a corresponding model-checking algorithm for CLL formulas. The correctness of the model-checking algorithm depends on Schanuel's conjecture, a central open problem in transcendental number theory. Furthermore, we provide a running example of CTMCs to illustrate our method.
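
The object this logic reasons about, the time-indexed curve of state distributions of a CTMC, is easy to compute pointwise as p(t) = p(0)e^{Qt} for a generator matrix Q. The sketch below evaluates that curve for a small hypothetical chain; it illustrates the semantic object only, not the CLL model-checking algorithm of the chapter:

    import numpy as np
    from scipy.linalg import expm

    # Hypothetical 3-state generator matrix (rows sum to zero).
    Q = np.array([[-2.0,  2.0,  0.0],
                  [ 1.0, -3.0,  2.0],
                  [ 0.0,  1.0, -1.0]])
    p0 = np.array([1.0, 0.0, 0.0])        # start deterministically in state 0

    for t in (0.0, 0.5, 1.0, 2.0):
        p_t = p0 @ expm(Q * t)            # state distribution at time t
        print(t, np.round(p_t, 4))
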
2. Meyer, Philipp J., Javier Esparza, and Philip Offtermatt. "Computing the Expected Execution Time of Probabilistic Workflow Nets". In Tools and Algorithms for the Construction and Analysis of Systems, 154–71. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-17465-1_9.

3. Santinelli, Luca, and Zhishan Guo. "On the Criticality of Probabilistic Worst-Case Execution Time Models". In Dependable Software Engineering. Theories, Tools, and Applications, 59–74. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69483-2_4.

4. Reghenzani, Federico. "Beyond the Traditional Analyses and Resource Management in Real-Time Systems". In Special Topics in Information Technology, 67–77. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-85918-3_6.

Abstract:
The difficulties in estimating the Worst-Case Execution Time (WCET) of applications make the use of modern computing architectures limited in real-time systems. Critical embedded systems require the tasks of hard real-time applications to meet their deadlines, and formal proofs on the validity of this condition are usually required by certification authorities. In the last decade, researchers proposed the use of probabilistic measurement-based methods to estimate the WCET instead of traditional static methods. In this chapter, we summarize recent theoretical and quantitative results on the use of probabilistic approaches to estimate the WCET presented in the PhD thesis of the author, including possible exploitation scenarios, open challenges, and future directions.
5. Lundén, Daniel, Gizem Çaylak, Fredrik Ronquist, and David Broman. "Automatic Alignment in Higher-Order Probabilistic Programming Languages". In Programming Languages and Systems, 535–63. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30044-8_20.

Abstract:
Probabilistic Programming Languages (PPLs) allow users to encode statistical inference problems and automatically apply an inference algorithm to solve them. Popular inference algorithms for PPLs, such as sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC), are built around checkpoints—relevant events for the inference algorithm during the execution of a probabilistic program. Deciding the location of checkpoints is, in current PPLs, not done optimally. To solve this problem, we present a static analysis technique that automatically determines checkpoints in programs, relieving PPL users of this task. The analysis identifies a set of checkpoints that execute in the same order in every program run—they are aligned. We formalize alignment, prove the correctness of the analysis, and implement the analysis as part of the higher-order functional PPL Miking CorePPL. By utilizing the alignment analysis, we design two novel inference algorithm variants: aligned SMC and aligned lightweight MCMC. We show, through real-world experiments, that they significantly improve inference execution time and accuracy compared to standard PPL versions of SMC and MCMC.
6. Höfig, Kai. "Failure-Dependent Timing Analysis - A New Methodology for Probabilistic Worst-Case Execution Time Analysis". In Lecture Notes in Computer Science, 61–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-28540-0_5.

7. Stemmer, Ralf, Hai-Dang Vu, Kim Grüttner, Sebastien Le Nours, Wolfgang Nebel, and Sebastien Pillement. "Experimental Evaluation of Probabilistic Execution-Time Modeling and Analysis Methods for SDF Applications on MPSoCs". In Lecture Notes in Computer Science, 241–54. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27562-4_17.

8. Falcone, Yliès, Gwen Salaün, and Ahang Zuo. "Probabilistic Runtime Enforcement of Executable BPMN Processes". In Fundamental Approaches to Software Engineering, 56–76. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57259-3_3.

Abstract:
A business process is a collection of structured tasks corresponding to a service or a product. Business processes do not execute once and for all, but are executed multiple times resulting in multiple instances. In this context, it is particularly difficult to ensure correctness and efficiency of the multiple executions of a process. In this paper, we propose to rely on Probabilistic Model Checking (PMC) to automatically verify that multiple executions of a process respect some specific probabilistic property. This approach applies at runtime, thus the evaluation of the property is periodically verified and the corresponding results updated. However, we go beyond runtime PMC for BPMN, since we propose runtime enforcement techniques to keep executing the process while avoiding the violation of the property. To do so, our approach combines monitoring techniques, computation of probabilistic models, PMC, and runtime enforcement techniques. The approach has been implemented as a toolchain and has been validated on several realistic BPMN processes.
9. Gentili, Elisabetta, Alice Bizzarri, Damiano Azzolini, Riccardo Zese, and Fabrizio Riguzzi. "Regularization in Probabilistic Inductive Logic Programming". In Inductive Logic Programming, 16–29. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-49299-0_2.

Abstract:
Probabilistic Logic Programming combines uncertainty and logic-based languages. Liftable Probabilistic Logic Programs have been recently proposed to perform inference in a lifted way. LIFTCOVER is an algorithm used to perform parameter and structure learning of liftable probabilistic logic programs. In particular, it performs parameter learning via Expectation Maximization and LBFGS. In this paper, we present an updated version of LIFTCOVER, called LIFTCOVER+, in which regularization was added to improve the quality of the solutions and LBFGS was replaced by gradient descent. We tested LIFTCOVER+ on the same 12 datasets on which LIFTCOVER was tested and compared the performances in terms of AUC-ROC, AUC-PR, and execution times. Results show that in most cases Expectation Maximization with regularization improves the quality of the solutions.
10. Huang, Wei-Chih, and William J. Knottenbelt. "Low-Overhead Development of Scalable Resource-Efficient Software Systems". In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 81–105. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-6026-7.ch005.

Abstract:
As the variety of execution environments and application contexts increases exponentially, modern software is often repeatedly refactored to meet ever-changing non-functional requirements. Although programmer effort can be reduced through the use of standardised libraries, software adjustment for scalability, reliability, and performance remains a time-consuming and manual job that requires high levels of expertise. Previous research has proposed three broad classes of techniques to overcome these difficulties in specific application domains: probabilistic techniques, out of core storage, and parallelism. However, due to limited cross-pollination of knowledge between domains, the same or very similar techniques have been reinvented all over again, and the application of techniques still requires manual effort. This chapter introduces the vision of self-adaptive scalable resource-efficient software that is able to reconfigure itself with little other than programmer-specified Service-Level Objectives and a description of the resource constraints of the current execution environment. The approach is designed to be low-overhead from the programmer's perspective – indeed a naïve implementation should suffice. To illustrate the vision, the authors have implemented in C++ a prototype library of self-adaptive containers, which dynamically adjust themselves to meet non-functional requirements at run time and which automatically deploy mitigating techniques when resource limits are reached. The authors describe the architecture of the library and the functionality of each component, as well as the process of self-adaptation. They explore the potential of the library in the context of a case study, which shows that the library can allow a naïve program to accept large-scale input and become resource-aware with very little programmer overhead.

Conference papers on the topic "Probabilistic execution time"

1. Liang, Yun, and Tulika Mitra. "Cache modeling in probabilistic execution time analysis". In Proceedings of the 45th Annual Design Automation Conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1391469.1391551.

2. Arcaro, Luis Fernando, Karila Palma Silva, Romulo Silva de Oliveira, and Luis Almeida. "Reliability Test based on a Binomial Experiment for Probabilistic Worst-Case Execution Times". In 2020 IEEE Real-Time Systems Symposium (RTSS). IEEE, 2020. http://dx.doi.org/10.1109/rtss49844.2020.00016.

3. Zhu, Dakai, Hakan Aydin, and Jian-Jia Chen. "Optimistic Reliability Aware Energy Management for Real-Time Tasks with Probabilistic Execution Times". In 2008 IEEE 29th Real-Time Systems Symposium (RTSS). IEEE, 2008. http://dx.doi.org/10.1109/rtss.2008.37.

4. Hardy, Damien, and Isabelle Puaut. "Static probabilistic worst case execution time estimation for architectures with faulty instruction caches". In Proceedings of the 21st International Conference on Real-Time Networks and Systems. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2516821.2516842.

5. Marchiori, Dúnia, Ricardo Custódio, Daniel Panario, and Lucia Moura. "Towards constant-time probabilistic root finding for code-based cryptography". In Simpósio Brasileiro de Segurança da Informação e de Sistemas Computacionais. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/sbseg.2021.17313.

Abstract:
In code-based cryptography, deterministic algorithms are used in the root-finding step of the decryption process. However, probabilistic algorithms are more time efficient than deterministic ones for large fields. These algorithms can be useful for long-term security where larger parameters are relevant. Still, current probabilistic root-finding algorithms suffer from time variations making them susceptible to timing side-channel attacks. To prevent these attacks, we propose a countermeasure to a probabilistic root-finding algorithm so that its execution time does not depend on the degree of the input polynomial but on the cryptosystem parameters. We compare the performance of our proposed algorithm to other root-finding algorithms already used in code-based cryptography. In general, our method is faster than the straightforward algorithm in Classic McEliece. The results also show the range of degrees in larger finite fields where our proposed algorithm is faster than the Additive Fast Fourier Transform algorithm.
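
To illustrate the flavor of such a countermeasure (and only the flavor: the paper works over large binary fields inside Classic McEliece, and real constant-time code must also avoid secret-dependent branches), here is a toy Python sketch that evaluates a polynomial at every element of a small prime field while always doing the same amount of work, regardless of the polynomial's actual degree:

    P = 251                                  # toy prime field GF(251)
    MAX_DEG = 16                             # fixed bound set by system parameters

    def roots_fixed_work(coeffs):
        """Roots by exhaustive evaluation; work depends only on P and MAX_DEG."""
        # Pad with zero coefficients so every input walks MAX_DEG + 1 slots.
        padded = (list(coeffs) + [0] * (MAX_DEG + 1))[:MAX_DEG + 1]
        roots = []
        for x in range(P):                   # same trip count for every input
            acc = 0
            for c in reversed(padded):       # Horner over a fixed slot count
                acc = (acc * x + c) % P
            if acc == 0:                     # NB: a real implementation would
                roots.append(x)              # select branch-free here as well
        return roots

    print(roots_fixed_work([6, 246, 1]))     # x^2 - 5x + 6 mod 251 -> [2, 3]
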
6. Psarros, George Ad. "Comparing the Navigator's Response Time in Collision and Grounding Accidents". In ASME 2015 34th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/omae2015-41001.

Abstract:
When the ship navigator (deck officer of the watch) is stressed, attention is narrowed and the normal flow of activities as well as action alternatives to be performed can be missed, ignored or discounted. Thus, the amount of time required to integrate accessible information (i.e. displays, communication equipment, presence of hazards, etc.) and cope with the situation (course keeping or track changing) can be overestimated, leading to poor or unsuccessful performance that may contribute to an accident. In order to understand how the navigator's situational assessment can be improved, a probabilistic model is proposed consisting of three cognitive processes: information pre-processing, decision making and action implementation. This model can be evaluated by analyzing actual data derived from publicly available accident investigation reports concerning collisions and groundings. With this approach, it is possible to determine the minimum required time for navigation task execution so that erroneous behavior can be prevented from developing and materializing into an accident.
7. Saint-Guillain, Michael, Tiago Stegun Vaquero, Jagriti Agrawal, and Steve Chien. "Robustness Computation of Dynamic Controllability in Probabilistic Temporal Networks with Ordinary Distributions". In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/576.

Abstract:
Most existing works in Probabilistic Simple Temporal Networks (PSTNs) base their frameworks on well-defined probability distributions. This paper addresses the PSTN Dynamic Controllability (DC) robustness measure, i.e. the execution success probability of a network under dynamic control. We consider PSTNs where the probability distributions of the contingent edges are ordinary distributed (e.g. non-parametric, non-symmetric). We introduce the concepts of dispatching protocol (DP) as well as DP-robustness, the probability of success under a predefined dynamic policy. We propose a fixed-parameter pseudo-polynomial time algorithm to compute the exact DP-robustness of any PSTN under the NextFirst protocol, apply it to various PSTN datasets, including the real case of planetary exploration in the context of the Mars 2020 rover, and propose an original structural analysis.
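
The DP-robustness quantity is easy to approximate by simulation even when contingent durations follow arbitrary empirical distributions; the paper computes it exactly with a pseudo-polynomial algorithm, but a Monte Carlo sketch of a toy one-contingent-link network conveys what is being computed (all numbers below are invented):

    import random

    def execute_once() -> bool:
        # Contingent duration drawn from a hypothetical non-parametric
        # distribution (e.g. observed drive times, in minutes).
        duration = random.choice([3, 4, 4, 5, 7, 12])
        start = 0                      # dispatch the contingent activity at once
        end = start + duration
        # Requirement constraints of the toy network: finish by 10, and a
        # follow-up activity of 2 minutes must complete by 13.
        return end <= 10 and end + 2 <= 13

    n = 100_000
    robustness = sum(execute_once() for _ in range(n)) / n
    print(f"estimated DP-robustness: {robustness:.3f}")   # exact value: 5/6
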
8. Orji, Mirian Kosi, Toyin Arowosafe, and John Agiaye. "Improving Well Construction/Intervention Time and Cost Estimation Accuracy Via Historical Performance Data Analysis". In SPE Nigeria Annual International Conference and Exhibition. SPE, 2023. http://dx.doi.org/10.2118/217196-ms.

Abstract:
Time and cost estimation, and its accuracy, are central to engineering design and form the basis for the economic analysis of projects. There are several factors that could result in cost or schedule overruns, ranging from unplanned non-productive time and inefficiencies to changes in macro indices; however, one often overlooked factor is deficient time and cost estimation. Hence, in addition to factoring in prevailing market and contract rates for materials and services, it is important to critically analyze and benchmark plans against known performance for higher estimation accuracy. Well project time and cost models generally consist of estimating in modules or sub-phases and aggregating these modules to make up the total. This can either result in a single discrete estimate or in ranges based on probability and statistical performance, inherently implying that some form of historic performance is crucial to estimation accuracy. This paper describes a structured approach to developing a probabilistic estimation tool by analyzing past performance data at a phase or subphase level. This tool can be domiciled on a range of computation platforms using a similar methodology, which comprises data collection from execution reports, data cleanup and organization to harmonize terminologies and group operation types, and finally statistical and mathematical data analysis. Statistical analysis develops probabilistic relationships in the dataset and correlations between performance variables such as depth and time, while mathematical analysis incorporates numerical correlations and multiple variables to generate estimates in modules and finally aggregates the discrete phase estimates. The estimation has two major components, time and cost. The analysis of the time component considers the productive and non-productive time by phase, determines depth-dependent operations and their correlation to time, and assigns a mathematical function to each phase. The cost component is broken down into two sub-components: recurrent cost, which is highly time-dependent, and non-recurrent (material and services) cost, which is usually based on pre-defined contractual rates. An additional end function is benchmarking, for comparison between estimates and historic performance. The aggregate of the probabilistic estimate of each module gives the total estimate of time and cost for a given well construction or intervention scope, with the overall objective of improving and maintaining estimation accuracy to avoid overruns and over-estimation of drilling, completions, workover and intervention projects.
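
The aggregation step the abstract describes, sampling each sub-phase from a fitted distribution and summing into probabilistic totals, reduces to a few lines of Monte Carlo. The sketch below uses invented lognormal phase durations and cost rates purely to show the mechanics:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    # Hypothetical lognormal parameters (mean, sigma of the underlying normal)
    # for each sub-phase duration, in days.
    phases = {"drilling": (3.0, 0.25), "completion": (2.2, 0.30), "cleanup": (1.0, 0.40)}
    day_rate = 120_000        # recurrent (time-dependent) cost per day, USD
    fixed_cost = 1_500_000    # non-recurrent materials and services, USD

    total_days = sum(rng.lognormal(mean=m, sigma=s, size=n) for m, s in phases.values())
    total_cost = fixed_cost + day_rate * total_days

    for label, q in (("P10", 10), ("P50", 50), ("P90", 90)):
        print(f"{label}: {np.percentile(total_days, q):6.1f} days, "
              f"{np.percentile(total_cost, q) / 1e6:5.1f} MUSD")
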
9. Mohamat Nor, Noor Azman, and Andrew Findlay. "Unit Health Assessment - Oil & Gas Equipment Probabilistic Case Study". In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-59318.

Abstract:
The focus of this case study is the analysis of recorded downtime data from offshore Oil & Gas facilities, classified into gas turbine downtime categories and causes. Each event is then correlated with the maintenance repair records to determine the respective root cause. The key objective of this study is to establish the Critical Success Factors (CSF) for unit health after a gas turbine has been in operation for more than 10 years. The outcome is used to enhance unit performance, efficiency, maintainability, and operability. As a first step, a Content Analysis technique was employed to systematically decipher and organize the downtime causes from the collected data. Over 500 data samples collected over a period of 3 years were sorted into relevant categories and causes, comprising a total downtime of 11,410 hours. The downtime data, which is interval scale in nature (in hours), is meticulously tabulated against the respective downtime categories and causes, location by location, for the 11 gas turbine sites, and correlated with the repair work. In scope is downtime related to: Forced Outage Automatic Trip; Failure to Start; Forced Outage Manual Shutdown; and Maintenance Unscheduled. Out of scope are Non-Curtailing and Reserve Shutdown, as these are external to gas turbine operational influence. In the second step, descriptive statistics analysis was carried out to understand the key downtime drivers by category. Pattern recognition is used to identify whether a cause is a "One Time Event", "Random Event" or "Recurring Event", to confirm data integrity and establish the problem statement. This approach assists in the discovery of erroneous data that could mislead the outcome of the statistical analysis. Pattern recognition through data stratification and clustering classifies the impact of an issue as affecting reliability or availability. Simplistic analyses can miss major customer-impact issues such as frequent small shutdowns that do not accumulate many hours per event but cause operational disruption, or infrequent time-consuming events resulting from a lack of trained personnel, spares shortages, and difficulty in troubleshooting. In the third step, statistical correlation analysis was applied to establish the relationship between gas turbine downtime and repair works to determine the root causes. Benchmarking the outcome of these analyses against the actual equipment landscape yields high-probability root causes, thus facilitating solutions for improved site reliability and availability. The study identified CSF in the following areas: personnel training and competency; correct maintenance philosophy and its execution in practice; and life cycle management, including obsolescence and spares management. Near-term recommendations on changes to site operations or equipment, based on OEM guidelines and current available best practices, are summarized for each site analyzed.