Journal articles on the topic "Execution trace analysis"

Follow this link to see other types of publications on the topic: Execution trace analysis.

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the top 50 journal articles for research on the topic "Execution trace analysis".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.

Browse journal articles from many scientific fields and compile an accurate bibliography.

1

Langevine, Ludovic, and Mireille Ducassé. "Design and implementation of a tracer driver: Easy and efficient dynamic analyses of constraint logic programs". Theory and Practice of Logic Programming 8, no. 5-6 (November 2008): 581–609. http://dx.doi.org/10.1017/s147106840800344x.

Abstract:
Tracers provide users with useful information about program executions. In this article, we propose a “tracer driver”. From a single tracer, it provides a powerful front-end enabling multiple dynamic analysis tools to be easily implemented, while limiting the overhead of the trace generation. The relevant execution events are specified by flexible event patterns and a large variety of trace data can be given either systematically or “on demand”. The proposed tracer driver has been designed in the context of constraint logic programming (CLP); experiments have been made within GNU-Prolog. Execution views provided by existing tools have been easily emulated with a negligible overhead. Experimental measures show that the flexibility and power of the described architecture lead to good performance. The tracer driver overhead is inversely proportional to the average time between two traced events. Whereas the principles of the tracer driver are independent of the traced programming language, it is best suited for high-level languages, such as CLP, where each traced execution event encompasses numerous low-level execution steps. Furthermore, CLP is especially hard to debug. The current environments do not provide all the useful dynamic analysis tools. They can significantly benefit from our tracer driver which enables dynamic analyses to be integrated at a very low cost.
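The architecture the abstract describes lends itself to a compact sketch. The following Python fragment is purely illustrative (the actual tracer driver is built into GNU Prolog, and all names here are invented): one tracer emits events, and each registered analysis only receives events whose pattern matches, which is how the driver limits trace-generation overhead.

```python
# Hypothetical sketch of a tracer driver: one event source, several
# dynamic analyses, each subscribed through a flexible event pattern.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Pattern:
    port: Optional[str] = None        # e.g. "call", "exit", "wake"
    predicate: Optional[str] = None   # restrict to a single predicate

    def matches(self, event: dict) -> bool:
        return ((self.port is None or event["port"] == self.port) and
                (self.predicate is None or event["pred"] == self.predicate))

class TracerDriver:
    def __init__(self):
        self.subscribers = []         # (pattern, analysis callback) pairs

    def register(self, pattern: Pattern, analysis: Callable[[dict], None]):
        self.subscribers.append((pattern, analysis))

    def emit(self, event: dict):
        # Only analyses whose pattern matches ever see the event.
        for pattern, analysis in self.subscribers:
            if pattern.matches(event):
                analysis(event)

driver = TracerDriver()
calls = {}
driver.register(Pattern(port="call"),
                lambda e: calls.update({e["pred"]: calls.get(e["pred"], 0) + 1}))
driver.emit({"port": "call", "pred": "labeling/1"})
print(calls)  # {'labeling/1': 1}
```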
2

Jahier, Erwan, and Mireille Ducassé. "Generic program monitoring by trace analysis". Theory and Practice of Logic Programming 2, no. 4-5 (July 2002): 611–43. http://dx.doi.org/10.1017/s1471068402001461.

Abstract:
Program execution monitoring consists of checking whole executions for given properties, and collecting global run-time information. Monitoring gives valuable insights and helps programmers maintain their programs. However, application developers face the following dilemma: either they use existing monitoring tools which never exactly fit their needs, or they invest a lot of effort to implement relevant monitoring code. In this paper, we argue that when an event-oriented tracer exists, the compiler developers can enable the application developers to easily code their own monitors. We propose a high-level primitive called foldt which operates on execution traces. One of the key advantages of our approach is that it allows a clean separation of concerns; the definition of monitors is totally distinct from both the user source code and the language compiler. We give a number of applications of the use of foldt to define monitors for Mercury program executions: execution profiles, graphical abstract views, and two test coverage measurements. Each example is implemented by a few simple lines of Mercury.
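The foldt idea is essentially a fold over the stream of trace events. A toy version, assuming a flat list of (port, procedure) events rather than Mercury's execution trees, might look like this:

```python
# Toy fold-style monitor: one pass over the trace accumulates a global
# property, here a call-count profile. Event shapes are invented.
from functools import reduce

trace = [("call", "main"), ("call", "solve"), ("exit", "solve"), ("exit", "main")]

def profile(acc, event):
    port, proc = event
    if port == "call":
        acc[proc] = acc.get(proc, 0) + 1
    return acc

print(reduce(profile, trace, {}))  # {'main': 1, 'solve': 1}
```

A coverage measurement or a graphical abstract view would simply swap in a different accumulator function, which is the separation of concerns the paper argues for.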
3

Simmons, Sharon, Dennis Edwards, and Phil Kearns. "Communication Analysis of Distributed Programs". Scientific Programming 14, no. 2 (2006): 151–70. http://dx.doi.org/10.1155/2006/763568.

Abstract:
Capturing and examining the causal and concurrent relationships of a distributed system is essential to a wide range of distributed systems applications. Many approaches to gathering this information rely on trace files of executions. The information obtained through tracing is limited to those executions observed. We present a methodology that analyzes the source code of the distributed system. Our analysis considers each process's source code and produces a single comprehensive graph of the system's possible behaviors. The graph, termed the partial order graph (POG), uniquely represents each possible partial order of the system. Causal and concurrent relationships can be extracted relative either to a particular partial order, which is synonymous with a single execution, or to a collection of partial orders. The graph provides a means of reasoning about the system in terms of relationships that will definitely occur, may possibly occur, and will never occur. Distributed assert statements provide a means to monitor distributed system executions. By constructing the POG prior to system execution, the causality information provided by the POG enables run-time evaluation of the assert statement without relying on traces or additional messages.
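To make the three-valued reasoning concrete, here is a minimal sketch, with invented event names, of querying happens-before relationships in a small partial-order graph; the paper's POG additionally encodes every possible partial order of the system rather than a single one.

```python
# Happens-before queries over a toy partial-order graph. Edges come from
# per-process program order and message send/receive pairs (all invented).
edges = {("p1.a", "p1.send"), ("p1.send", "p2.recv"), ("p2.recv", "p2.b")}

def happens_before(x, y):
    # Depth-first search through the happens-before edges.
    stack, seen = [x], set()
    while stack:
        node = stack.pop()
        for (u, v) in edges:
            if u == node and v not in seen:
                if v == y:
                    return True
                seen.add(v)
                stack.append(v)
    return False

def relation(x, y):
    if happens_before(x, y): return "will definitely occur before"
    if happens_before(y, x): return "will definitely occur after"
    return "is concurrent with (may occur in either order relative to)"

print("p1.a", relation("p1.a", "p2.b"), "p2.b")  # definitely before
```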
4

Côté, Mathieu, and Michel R. Dagenais. "Problem Detection in Real-Time Systems by Trace Analysis". Advances in Computer Engineering 2016 (6 January 2016): 1–12. http://dx.doi.org/10.1155/2016/9467181.

Abstract:
This paper focuses on the analysis of execution traces for real-time systems. Kernel tracing can provide useful information, without having to instrument the applications studied. However, the generated traces are often very large. The challenge is to retrieve only the relevant data in order to quickly find complex or erratic real-time problems. We propose a new approach to help find those problems. First, we provide a way to define the execution model of real-time tasks with the optional suggestions of a pattern discovery algorithm. Then, we show the resulting real-time jobs in a Comparison View, to highlight those that are problematic. Once some jobs that present irregularities are selected, different analyses are executed on the corresponding trace segments instead of the whole trace. This saves a huge amount of time and allows more complex analyses to be executed. Our main contribution is to combine the critical path analysis with the scheduling information to detect scheduling problems. The efficiency of the proposed method is demonstrated with two test cases, where problems that were difficult to identify were found in a few minutes.
5

Al-Rousan, Thamer, and Hasan Abualese. "A new technique for understanding large-scale software systems". Telfor Journal 12, no. 1 (2020): 34–39. http://dx.doi.org/10.5937/telfor2001034a.

Abstract:
Comprehending a huge execution trace is not a straightforward task due to the size of the data to be processed. Detecting and removing utilities is useful for facilitating the understanding of software and decreasing the complexity and size of the execution trace. The goal of this study is to develop a novel technique to minimize the complexity and the size of traces by detecting and removing utilities from the execution trace of object-oriented software. Two novel utility detection class metrics are suggested to decide the degree to which a specific class can be counted as a utility class. Dynamic coupling analysis forms the basis for the proposed technique to address object-oriented features. The technique presented in this study has been tested on two case studies to evaluate its effectiveness. The results from the case studies show the usefulness and effectiveness of our technique.
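The paper's two metrics are not reproduced in the abstract, but a fan-in heuristic in the same spirit, over invented trace data, shows the general shape of such a utility detector: classes invoked by many other classes across the trace become utility candidates.

```python
# Plausible (not the paper's) utility-class heuristic over dynamic coupling
# data: rank classes by the fraction of other classes that call them.
from collections import defaultdict

# (caller class, callee class) pairs extracted from an execution trace
calls = [("Order", "StringUtil"), ("Invoice", "StringUtil"),
         ("Cart", "StringUtil"), ("Order", "Invoice")]

fan_in = defaultdict(set)
for caller, callee in calls:
    fan_in[callee].add(caller)

n_classes = len({cls for pair in calls for cls in pair})
for cls, callers in fan_in.items():
    ratio = len(callers) / (n_classes - 1)
    if ratio >= 0.5:   # threshold is arbitrary for the example
        print(f"{cls}: utility candidate (fan-in ratio {ratio:.2f})")
```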
6

Ryan, Gabriel, Burcu Cetin, Yongwhan Lim, and Suman Jana. "Accurate Data Race Prediction in the Linux Kernel through Sparse Fourier Learning". Proceedings of the ACM on Programming Languages 8, OOPSLA1 (29 April 2024): 810–32. http://dx.doi.org/10.1145/3649840.

Abstract:
Testing for data races in the Linux OS kernel is challenging because there is an exponentially large space of system calls and thread interleavings that can potentially lead to concurrent executions with races. In this work, we introduce a new approach for modeling execution trace feasibility and apply it to Linux OS Kernel race prediction. To address the fundamental scalability challenge posed by the exponentially large domain of possible execution traces, we decompose the task of predicting trace feasibility into independent prediction subtasks encoded as learning Boolean indicator functions for specific memory accesses, and apply a sparse Fourier learning approach to learning each feasibility subtask. Boolean functions that are sparse in their Fourier domain can be efficiently learned by estimating the coefficients of their Fourier expansion. Since the feasibility of each memory access depends on only a few other relevant memory accesses or system calls (e.g., relevant inter-thread communications), we observe that trace feasibility functions often have this sparsity property and can be learned efficiently. We use learned trace feasibility functions in conjunction with conservative alias analysis to implement a kernel race-testing system, HBFourier, that uses sparse Fourier learning to efficiently model feasibility when making predictions. We evaluate our approach on a recent Linux development kernel and show it finds 44 more races with 15.7% more accurate race predictions than the next best performing system in our evaluation, in addition to identifying 5 new race bugs confirmed by kernel developers.
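The core mathematical idea can be shown in miniature. In this hedged sketch (unrelated to the kernel tooling itself), a Boolean function that depends on only two of four inputs has a single large Fourier coefficient, recoverable from random samples:

```python
# Estimate Fourier coefficients of a {-1, +1}-valued Boolean function by
# sampling. f equals the parity of inputs 0 and 2, so only the coefficient
# for the subset (0, 2) is large; everything else is near zero.
import itertools, random

def feasible(x):                       # +1 if x[0] == x[2], else -1
    return 1 if (x[0] ^ x[2]) == 0 else -1

n, samples = 4, 2000
random.seed(0)
xs = [[random.randint(0, 1) for _ in range(n)] for _ in range(samples)]

def chi(subset, x):                    # parity character for the subset
    return 1 if sum(x[i] for i in subset) % 2 == 0 else -1

for subset in itertools.chain.from_iterable(
        itertools.combinations(range(n), k) for k in range(3)):
    coef = sum(feasible(x) * chi(subset, x) for x in xs) / samples
    if abs(coef) > 0.2:                # only the sparse support survives
        print(subset, round(coef, 2))  # prints (0, 2) with coefficient ~1.0
```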
7

Ma, Ming Yang, Yi Qiang Wang, Wei Luo, Er Hu Zhang, Chao Fu, and Li Xue Wang. "Fault Localization of CNC Software Based on Searching in Divided Execution Trace". Applied Mechanics and Materials 101-102 (September 2011): 876–79. http://dx.doi.org/10.4028/www.scientific.net/amm.101-102.876.

Abstract:
In the operation of a CNC system, how to locate faults based on the system's functions is a valuable research subject. To solve this problem, the method of searching in a divided execution trace was introduced into the fault localization of the CNC system. This method divides the execution trace into two segments and searches the statements repeatedly. At the same time, fuzzy arithmetic is used to calculate the suspiciousness of the statements executed by these traces. After the integrated analysis, the faulty statements of the program can be located. The experimental results indicate that this method is effective in locating faults in CNC system software.
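The paper's fuzzy-arithmetic scoring is not given in the abstract; the sketch below substitutes a classic Tarantula-style suspiciousness formula, with invented spectra, to show how statements covered mostly by failing traces rise to the top of the ranking.

```python
# Spectrum-based suspiciousness ranking (Tarantula-style formula as a
# stand-in for the paper's fuzzy arithmetic; data are invented).
def suspiciousness(exec_fail, exec_pass, total_fail, total_pass):
    f = exec_fail / total_fail if total_fail else 0.0
    p = exec_pass / total_pass if total_pass else 0.0
    return f / (f + p) if f + p else 0.0

# statement -> (executions in failing runs, executions in passing runs)
spectrum = {"s1": (4, 4), "s2": (4, 0), "s3": (1, 3)}
ranked = sorted(spectrum,
                key=lambda s: suspiciousness(*spectrum[s], 4, 4),
                reverse=True)
print(ranked)  # ['s2', 's1', 's3']: s2 only runs in failing executions
```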
8

Cornelissen, Bas, Andy Zaidman, Danny Holten, Leon Moonen, Arie van Deursen, and Jarke J. van Wijk. "Execution trace analysis through massive sequence and circular bundle views". Journal of Systems and Software 81, no. 12 (December 2008): 2252–68. http://dx.doi.org/10.1016/j.jss.2008.02.068.

9

Gamino del Río, Iván, Agustín Martínez Hellín, Óscar R. Polo, Miguel Jiménez Arribas, Pablo Parra, Antonio da Silva, Jonatan Sánchez, and Sebastián Sánchez. "A RISC-V Processor Design for Transparent Tracing". Electronics 9, no. 11 (7 November 2020): 1873. http://dx.doi.org/10.3390/electronics9111873.

Abstract:
Code instrumentation enables the observability of an embedded software system during its execution. A usage example of code instrumentation is the estimation of "worst-case execution time" using hybrid analysis. This analysis combines static code analysis with measurements of the execution time on the deployment platform. Static analysis of the source code determines where to insert the tracing instructions, so that later, the execution time can be captured using a logic analyser. The main drawback of this technique is the overhead introduced by the execution of trace instructions. This paper proposes a modification of the architecture of a RISC pipelined processor that eliminates the execution time overhead introduced by the code instrumentation. In this way, it allows the tracing to be non-intrusive, since the sequence and execution times of the program under analysis are not modified by the introduction of traces. As a use case of the proposed solution, a processor based on the RISC-V architecture was implemented using the VHDL language. The processor, synthesized on an FPGA, was used to execute and evaluate a set of examples of instrumented code generated by a "worst-case execution time" estimation tool. The results validate that the proposed architecture executes the instrumented code without overhead.
10

Kabamba, Herve M., Matthew Khouzam, and Michel R. Dagenais. "Vnode: Low-Overhead Transparent Tracing of Node.js-Based Microservice Architectures". Future Internet 16, no. 1 (29 December 2023): 13. http://dx.doi.org/10.3390/fi16010013.

Abstract:
Tracing serves as a key method for evaluating the performance of microservices-based architectures, which are renowned for their scalability, resource efficiency, and high availability. Despite their advantages, these architectures often pose unique debugging challenges that necessitate trade-offs, including the burden of instrumentation overhead. With Node.js emerging as a leading development environment recognized for its rapidly growing ecosystem, there is a pressing need for innovative performance debugging approaches that reduce the telemetry data collection efforts and the overhead incurred by the environment’s instrumentation. In response, we introduce a new approach designed for transparent tracing and performance debugging of microservices in cloud settings. This approach is centered around our newly developed Internal Transparent Tracing and Context Reconstruction (ITTCR) technique. ITTCR is adept at correlating internal metrics from various distributed trace files to reconstruct the intricate execution contexts of microservices operating in a Node.js environment. Our method achieves transparency by directly instrumenting the Node.js virtual machine, enabling the collection and analysis of trace events in a transparent manner. This process facilitates the creation of visualization tools, enhancing the understanding and analysis of microservice performance in cloud environments. Compared to other methods, our approach incurs an overhead of approximately 5% on the system for the trace collection infrastructure while exhibiting minimal utilization of system resources during analysis execution. Experiments demonstrate that our technique scales well with very large trace files containing huge numbers of events and performs analyses in very acceptable timeframes.
11

Abbasi, Hossein, Naser Ezzati-Jivan, Martine Bellaiche, Chamseddine Talhi, and Michel R. Dagenais. "Machine Learning-Based EDoS Attack Detection Technique Using Execution Trace Analysis". Journal of Hardware and Systems Security 3, no. 2 (26 January 2019): 164–76. http://dx.doi.org/10.1007/s41635-018-0061-2.

12

Kohyarnejadfard, Iman, Daniel Aloise, Michel R. Dagenais, and Mahsa Shakeri. "A Framework for Detecting System Performance Anomalies Using Tracing Data Analysis". Entropy 23, no. 8 (3 August 2021): 1011. http://dx.doi.org/10.3390/e23081011.

Abstract:
Advances in technology and computing power have led to the emergence of complex and large-scale software architectures in recent years. However, they are prone to performance anomalies due to various reasons, including software bugs, hardware failures, and resource contentions. Performance metrics represent the average load on the system and do not help discover the cause of the problem if abnormal behavior occurs during software execution. Consequently, system experts have to examine a massive amount of low-level tracing data to determine the cause of a performance issue. In this work, we propose an anomaly detection framework that reduces troubleshooting time, besides guiding developers to discover performance problems by highlighting anomalous parts in trace data. Our framework works by collecting streams of system calls during the execution of a process using the Linux Trace Toolkit Next Generation (LTTng), then sending them to a machine learning module that reveals anomalous subsequences of system calls based on their execution times and frequency. Extensive experiments on real datasets from two different applications (MySQL and Chrome), for varying scenarios in terms of available labeled data, demonstrate the effectiveness of our approach in distinguishing normal sequences from abnormal ones.
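As a rough illustration of the detection step only (the LTTng collection and the actual learning module are out of scope here, and all numbers are invented), a sliding window over a system-call stream can be scored against a baseline duration profile:

```python
# Flag windows of system calls whose durations deviate from a baseline.
import statistics

baseline = {"read": 2.0, "write": 3.0}   # mean duration per syscall (invented)
stream = [("read", 2.1), ("write", 2.9), ("read", 9.8), ("read", 10.2)]

def window_score(window):
    deviations = [abs(dur - baseline.get(name, 0.0)) for name, dur in window]
    return statistics.mean(deviations)

W = 2                                     # window length
for i in range(len(stream) - W + 1):
    window = stream[i:i + W]
    if window_score(window) > 3.0:        # threshold is illustrative
        print("anomalous window:", window)
```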
13

Bai, Jin Rong, Guo Zhong Zou, and Shi Guang Mu. "Malware Analysis Platform Based on Secondary Development of Xen". Applied Mechanics and Materials 530-531 (February 2014): 865–68. http://dx.doi.org/10.4028/www.scientific.net/amm.530-531.865.

Abstract:
API calls reflect the functional levels of a program, and analysis of API calls leads to an understanding of the behavior of malware. Malware analysis environments have been widely used, but some malware already has anti-virtualization, anti-debugging and anti-tracking abilities acquired through the evolution of malware. These analysis environments use a combination of API hooking and/or API virtualization, which are detectable by malware running at the same privilege level. In this work, we develop a fully automated platform to trace native API calls, based on secondary development of Xen, and have obtained a system as transparent and as similar to a Windows OS as possible, in order to obtain an execution trace of a program as if it were run in an environment with no tracer present. In contrast to other approaches, the hardware-assisted nature of our approach implicitly avoids many shortcomings that arise from incomplete or inaccurate system emulation.
14

Ezzati-Jivan, Naser, Houssem Daoud, and Michel R. Dagenais. "Debugging of Performance Degradation in Distributed Requests Handling Using Multilevel Trace Analysis". Wireless Communications and Mobile Computing 2021 (16 November 2021): 1–17. http://dx.doi.org/10.1155/2021/8478076.

Abstract:
Root cause identification of performance degradation within distributed systems is often a difficult and time-consuming task, yet it is crucial for maintaining high performance. In this paper, we present an execution trace-driven solution that reduces the efforts required to investigate, debug, and solve performance problems found in multinode distributed systems. The proposed approach employs a unified analysis method to represent trace data collected from the user-space level to the hardware level of involved nodes, allowing for efficient and effective root cause analysis. This solution works by extracting performance metrics and state information from trace data collected at user-space, kernel, and network levels. The multisource trace data is then synchronized and structured in a multidimensional data store, which is designed specifically for this kind of data. A posteriori analysis using a top-down approach is then used to investigate performance problems and detect their root causes. In this paper, we apply this generic framework to analyze trace data collected from the execution of the web server, database server, and application servers in a distributed LAMP (Linux, Apache, MySQL, and PHP) Stack. Using industrial level use cases, we show that the proposed approach is capable of investigating the root cause of performance issues, addressing unusual latency, and improving base latency by 70%. This is achieved with minimal tracing overhead that does not significantly impact performance, as well as O(log n) query response times for efficient analysis.
15

Tariq, Zeeshan, Darryl Charles, Sally McClean, Ian McChesney, and Paul Taylor. "Anomaly Detection for Service-Oriented Business Processes Using Conformance Analysis". Algorithms 15, no. 8 (25 July 2022): 257. http://dx.doi.org/10.3390/a15080257.

Abstract:
A significant challenge for organisations is the timely identification of abnormalities or deviations in their process executions. Abnormalities are generally due to missing vital aspects of a process or the presence of unwanted behaviour in the process execution. Conformance analysis techniques examine the synchronisation between the recorded logs and the learned process models, but the exploitation of event logs for abnormality detection is a relatively under-explored area in process mining. In this paper, we propose a novel technique for the identification of abnormalities in business process execution through the extension of available conformance analysis techniques. Non-traditional conformance analysis techniques are used to find correlations and discrepancies between simulated and observed behaviour in process logs. Initially, the raw event log is filtered into two variants, successful and failed, based upon the outcome of the instances. Successfully executed instances refer to an ideal conduct of the process and are utilised to discover an optimal process model. Later, the process model is used as a behavioural benchmark to classify the abnormality in the failed instances. Abnormal behaviour is compiled based on three dimensions of conformance: control-flow-based alignment, trace-level alignment and event-level alignment. For early predictions, we introduce the notion of a conformance lifeline presenting the impact of varying fitness scores during process execution. We applied the proposed methodology to a real-world event log and present several process-specific improvement measures in the discussion section.
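A drastically simplified version of the classification step, assuming invented activity names and using sequence similarity in place of real alignment costs, conveys how a model mined from successful instances scores a failed one:

```python
# Score a failed trace against behaviour allowed by a model mined from
# successful traces. Real conformance checking computes optimal alignments;
# difflib's ratio is a cheap stand-in for this sketch.
from difflib import SequenceMatcher

model_traces = [["receive", "validate", "approve", "archive"],
                ["receive", "validate", "reject", "archive"]]

def fitness(trace):
    return max(SequenceMatcher(None, trace, m).ratio() for m in model_traces)

failed = ["receive", "approve", "approve", "archive"]
print(round(fitness(failed), 2))  # 0.75: below 1.0 signals a deviation
```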
16

Finkbeiner, Bernd, Christopher Hahn, Marvin Stenger, and Leander Tentrup. "Efficient monitoring of hyperproperties using prefix trees". International Journal on Software Tools for Technology Transfer 22, no. 6 (20 February 2020): 729–40. http://dx.doi.org/10.1007/s10009-020-00552-5.

Abstract:
Hyperproperties, such as non-interference and observational determinism, relate multiple computation traces with each other and are thus not monitorable by tools that consider computations in isolation. We present the monitoring approach implemented in the latest version of RVHyper, a runtime verification tool for hyperproperties. The input to the tool consists of specifications given in the temporal logic HyperLTL, which extends linear-time temporal logic (LTL) with trace quantifiers and trace variables. RVHyper processes execution traces sequentially until a violation of the specification is detected. In this case, a counterexample, in the form of a set of traces, is returned. RVHyper employs a range of optimizations: a preprocessing analysis of the specification and a procedure that minimizes the traces that need to be stored during the monitoring process. In this article, we introduce a novel trace storage technique that arranges the traces in a tree-like structure to exploit partially equal traces. We evaluate RVHyper on existing benchmarks on secure information flow control, error correcting codes, and symmetry in hardware designs. As an example application outside of security, we show how RVHyper can be used to detect spurious dependencies in hardware designs.
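The trace-storage idea is easy to picture: a prefix tree shares nodes among traces with equal prefixes. A minimal sketch follows (the real RVHyper data structure also supports the monitoring algorithm on top of it):

```python
# Store traces in a trie so that partially equal traces share storage.
class TrieNode:
    def __init__(self):
        self.children = {}     # event -> TrieNode
        self.count = 0         # number of traces passing through this node

    def insert(self, trace):
        node = self
        for event in trace:
            node = node.children.setdefault(event, TrieNode())
            node.count += 1

root = TrieNode()
root.insert(["a", "b", "c"])
root.insert(["a", "b", "d"])          # shares the "a b" prefix
print(root.children["a"].count)       # 2: both traces share this node
```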
17

Manap, Norpadzlihatun, Kavitha Sandirasegaran, Noor Shahifah Syahrom, and Amnorzahira Amir. "Analysis of Trace Metal Contamination in Pahang River and Kelantan River, Malaysia". MATEC Web of Conferences 266 (2019): 04003. http://dx.doi.org/10.1051/matecconf/201926604003.

Abstract:
The primary objective of this study is to determine trace metal contamination in environmental samples obtained from Pahang River and Kelantan River, Malaysia, which may help to identify the risk of sustainable dredging in these areas. This research also compares the trace metal concentrations with the National Water Quality Standards of Malaysia, the Interim Canadian Sediment Quality Guidelines and the Malaysian Food Act 1983 to determine limits and risks. Samples of water, sediment, snails and fish were collected and analyzed for As, Cu, Cd, Cr, Fe, Pb, Ni, Mn, and Hg using an atomic absorption spectrophotometer. It was found that the concentrations of the trace metals As, Cu, Cd, Cr, Pb, Ni, and Hg in river water, sediment, snail and fish samples from Pahang River were lower than the maximum allowable limits, except for Fe and Mn. In Kelantan River, all measured trace metal concentrations exceeded the maximum allowable limits, indicating contamination with Fe, Mn, Pb, Cr, Cu, Hg, and As. Negative impacts may arise, and the river may become more contaminated in the future if there is no proper management to tackle this issue during the execution of dredging activities.
18

Prylli, L., and B. Tourancheau. "Execution-Driven Simulation of Parallel Applications". Parallel Processing Letters 08, no. 01 (March 1998): 95–109. http://dx.doi.org/10.1142/s0129626498000122.

Abstract:
This paper presents our work on the simulation of distributed-memory parallel computers. We designed a distributed simulator that takes as input an application written for a MIMD computer and runs it on a workstation cluster with just a recompilation of the code. The hardware of the target machine is simulated so that the behavior of the application is identical to a native run on the simulated computer, with virtual timings and a trace file. Moreover, our analysis sets up the conditions required to achieve a good speedup as a function of the number of simulation hosts, the network latency and the granularity of the application.
19

Singh, Amit Kumar, Muhammad Shafique, Akash Kumar, and Jorg Henkel. "Resource and Throughput Aware Execution Trace Analysis for Efficient Run-Time Mapping on MPSoCs". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 35, no. 1 (January 2016): 72–85. http://dx.doi.org/10.1109/tcad.2015.2446938.

20

Wylie, Brian J. N., Markus Geimer, Bernd Mohr, David Böhme, Zoltán Szebenyi, and Felix Wolf. "Large-Scale Performance Analysis of Sweep3D with the Scalasca Toolset". Parallel Processing Letters 20, no. 04 (December 2010): 397–414. http://dx.doi.org/10.1142/s0129626410000314.

Abstract:
Cray XT and IBM Blue Gene systems present current alternative approaches to constructing leadership computer systems relying on applications being able to exploit very large configurations of processor cores, and associated analysis tools must also scale commensurately to isolate and quantify performance issues that manifest at the largest scales. In studying the scalability of the Scalasca performance analysis toolset to several hundred thousand MPI processes on XT5 and BG/P systems, we investigated a progressive execution performance deterioration of the well-known ASCI Sweep3D compact application. Scalasca runtime summarization analysis quantified MPI communication time that correlated with computational imbalance, and automated trace analysis confirmed growing amounts of MPI waiting times. Further instrumentation, measurement and analyses pinpointed a conditional section of highly imbalanced computation which amplified waiting times inherent in the associated wavefront communication that seriously degraded overall execution efficiency at very large scales. By employing effective data collation, management and graphical presentation, in a portable and straightforward to use toolset, Scalasca was thereby able to demonstrate performance measurements and analyses with 294,912 processes.
21

de la Fuente, Rene, Ricardo Fuentes, Jorge Munoz-Gama, Arnoldo Riquelme, Fernando R. Altermatt, Juan Pedemonte, Marcia Corvetto, and Marcos Sepúlveda. "Control-flow analysis of procedural skills competencies in medical training through process mining". Postgraduate Medical Journal 96, no. 1135 (27 November 2019): 250–56. http://dx.doi.org/10.1136/postgradmedj-2019-136802.

Abstract:
Background: Procedural skills are key to good clinical results, and training in them involves a significant amount of resources. Control-flow analysis (i.e., the order in which a process is performed) can provide new information for those who train and plan procedural training. This study outlines the steps required for control-flow analysis using process mining techniques in training for ultrasound-guided internal jugular central venous catheter placement using a simulation. Methods: A reference process model was defined through a Delphi study, and execution data (event logs) were collected from video recordings of pre-training (PRE), post-training (POST) and expert (EXP) procedure executions. The analysis was performed to outline differences between the model and the executions. We analysed rework (activity repetition), alignment-based fitness (conformance with the ideal model) and trace alignment (visual ordering pattern similarities). Results: Expert executions do not present repetition of activities (rework). The POST rework is lower than the PRE rework, concentrated in the steps of venous puncture and guidewire placement. Conformance with the ideal model, measured as alignment-based fitness and expressed as a median (25th–75th percentile), is lower for PRE, 0.74 (0.68–0.78), than for POST, 0.82 (0.76–0.86), and EXP, 0.87 (0.82–0.87). There are no significant differences between POST and EXP. The graphic analysis of alignments and executions shows a progressive increase in order from PRE to EXP executions. Conclusion: Process mining analysis is able to pinpoint the more difficult steps, assess the concordance between the reference model and executions, and identify control-flow patterns in procedural training courses.
22

Iqbal, Muhammad Munwar, Muhammad Ali, Mai Alfawair, Ahsan Lateef, Abid Ali Minhas, Abdulaziz Al Mazyad, and Kashif Naseer. "Augmenting High-Performance Mobile Cloud Computations for Big Data in AMBER". Wireless Communications and Mobile Computing 2018 (2018): 1–12. http://dx.doi.org/10.1155/2018/4796535.

Abstract:
Big data is an inspirational area of research that involves best practices used in industry and academia. Challenging and complex systems are the core requirements for the collation and analysis of big data. Data analysis approaches and algorithm development are necessary and essential components of big data analytics. The emergent nature of big data and high-performance computing helps to solve complex and challenging problems. High-Performance Mobile Cloud Computing (HPMCC) technology contributes to the execution of computationally intensive applications at any location, independently, on laptops using virtual machines. The HPMCC technique enables executing computationally extreme scientific tasks on a cloud comprising laptops. Assisted Model Building with Energy Refinement (AMBER) with force field calculations for molecular dynamics is a computationally hungry task that requires substantial hardware resources for execution. The core objective of the study is to provide researchers with a mobile cloud of laptops capable of doing the heavy processing. An innovative execution of AMBER with the force field empirical formula using Message Passing Interface (MPI) infrastructure on HPMCC is proposed. It is a homogeneous mobile cloud platform comprising a laptop and virtual machines as processor nodes, along with dynamic parallelism. Processes can be distributed and run among the various computational nodes. This task-based and data-based parallelism is achieved in the proposed solution by using the Message Passing Interface. Trace-based results and graphs demonstrate the significance of the proposed method.
23

Cuzzocrea, Alfredo, Francesco Folino, Massimo Guarascio, and Luigi Pontieri. "Deviance-Aware Discovery of High-Quality Process Models". International Journal on Artificial Intelligence Tools 27, no. 07 (November 2018): 1860009. http://dx.doi.org/10.1142/s0218213018600096.

Abstract:
Process Discovery techniques, allowing to extract graph-like models from large process logs, are a valuable means for grasping a summarized view of real business processes' behaviors. If augmented with statistics on process performances (e.g., processing times), such models help study the evolution of process performances across different processing steps, and possibly detect bottlenecks and worst practices. However, when the process analyzed exhibits complex and heterogeneous behaviors, these techniques fail to yield good quality models, in terms of readability, accuracy and generality. In particular, the presence of deviant traces may lead to cumbersome models and misleading performance statistics. Current noise/outlier filtering solutions can alleviate this problem and help discover a better model for "normal" process executions, but they do not provide insight on the deviant ones. Then, difficult and expensive analyses are usually performed to extract interpretable and general enough patterns for deviant behaviors. The performance-oriented discovery approach proposed here is addressed to recognize and describe both a normal execution scenario and deviant ones for the process analyzed, by inducing different sub-models: (i) a collection of readable clustering rules (conjunctive patterns over trace attributes) defining the deviance scenarios; (ii) a performance model for the "normal" traces that do not fall in any deviant scenario; and (iii) a performance model (and a "difference" model emphasizing the differences in behaviors from the "normal" execution scenario) for each discovered deviance scenario. Technically, these models are discovered by exploiting a conceptual clustering method, embedded in an iterative optimization scheme where the current version of the normal-behavior model is replaced with the model extracted from the newly found normality cluster, in case the latter is more accurate; on the other hand, the clustering procedure is devised to greedily find groups of traces that maximally deviate from the current normal-behavior model. Tests on real-life logs confirmed the validity of this approach, and its capability to find good performance models, and to support the analysis of deviant process instances.
24

Sánchez, César, Gerardo Schneider, Wolfgang Ahrendt, Ezio Bartocci, Domenico Bianculli, Christian Colombo, Yliès Falcone, et al. "A survey of challenges for runtime verification from advanced application domains (beyond software)". Formal Methods in System Design 54, no. 3 (November 2019): 279–335. http://dx.doi.org/10.1007/s10703-019-00337-w.

Abstract:
Runtime verification is an area of formal methods that studies the dynamic analysis of execution traces against formal specifications. Typically, the two main activities in runtime verification efforts are the process of creating monitors from specifications, and the algorithms for the evaluation of traces against the generated monitors. Other activities involve the instrumentation of the system to generate the trace and the communication between the system under analysis and the monitor. Most of the applications in runtime verification have been focused on the dynamic analysis of software, even though there are many more potential applications to other computational devices and target systems. In this paper we present a collection of challenges for runtime verification extracted from concrete application domains, focusing on the difficulties that must be overcome to tackle these specific challenges. The computational models that characterize these domains require devising new techniques beyond the current state of the art in runtime verification.
25

Chen Kuang Piao, Yonni, Naser Ezzati-Jivan, and Michel R. Dagenais. "Distributed Architecture for an Integrated Development Environment, Large Trace Analysis, and Visualization". Sensors 21, no. 16 (18 August 2021): 5560. http://dx.doi.org/10.3390/s21165560.

Abstract:
Integrated development environments (IDEs) provide many useful tools such as a code editor, a compiler, and a debugger for creating software. These tools are highly sophisticated, and their development requires a significant effort. Traditionally, an IDE supports different programming languages via plugins that are not usually reusable in other IDEs. Given the high complexity and constant evolution of popular programming languages, such as C++ and even Java, the effort to update those plugins has become unbearable. Thus, recent work aims to modularize IDEs and reuse the existing parser implementation directly in compilers. However, when IDE debugging tools are insufficient at detecting performance defects in large and multithreaded systems, developers must use tracing and trace visualization tools in their software development process. Those tools are often standalone applications and do not interoperate with the new modular IDEs, thus losing the power and the benefits of many features provided by the IDE. The structure and use cases of tracing tools, with the potentially massive execution traces, significantly differ from the other tools in IDEs. Thus, it is a considerable challenge, one which has not been addressed previously, to integrate them into the new modular IDEs. In this paper, we propose an efficient modular client–server architecture for trace analysis and visualization that solves those problems. The proposed architecture is well suited for performance analysis on Internet of Things (IoT) devices, where resource limitations often prohibit data collection, processing, and visualization all on the same device. The experimental evaluation demonstrated that our proposed flexible and reusable solution is scalable and has a small acceptable performance overhead compared to the standalone approach.
26

Alpuente, M., F. Frechina, J. Sapiña, and D. Ballis. "Assertion-based analysis via slicing with ABETS (system description)". Theory and Practice of Logic Programming 16, no. 5-6 (September 2016): 515–32. http://dx.doi.org/10.1017/s1471068416000375.

Abstract:
We present ABETS, an assertion-based, dynamic analyzer that helps diagnose errors in Maude programs. ABETS uses slicing to automatically create reduced versions of both a run's execution trace and the executed program, reduced versions in which any information that is not relevant to the bug currently being diagnosed is removed. In addition, ABETS employs runtime assertion checking to automate the identification of bugs so that whenever an assertion is violated, the system automatically infers accurate slicing criteria from the failure. We summarize the main services provided by ABETS, which also include a novel assertion-based facility for program repair that generates suitable program fixes when a state invariant is violated. Finally, we provide an experimental evaluation that shows the performance and effectiveness of the system.
27

Schmitt, Felix, Robert Dietrich, and Guido Juckeland. "Scalable critical-path analysis and optimization guidance for hybrid MPI-CUDA applications". International Journal of High Performance Computing Applications 31, no. 6 (1 August 2016): 485–98. http://dx.doi.org/10.1177/1094342016661865.

Abstract:
The use of accelerators in heterogeneous systems is an established approach in designing petascale applications. Today, Compute Unified Device Architecture (CUDA) offers a rich programming interface for GPU accelerators but requires developers to incorporate several layers of parallelism on both the CPU and the GPU. From this increasing program complexity emerges the need for sophisticated performance tools. This work contributes by analyzing hybrid MPI-CUDA programs for properties based on wait states, such as the critical path, a metric proven to identify application bottlenecks effectively. We developed a tool to construct a dependency graph based on an execution trace and the inherent dependencies of the programming models CUDA and Message Passing Interface (MPI). Thereafter, it detects wait states and attributes blame to responsible activities. Together with the property of being on the critical path, we can identify activities that are most viable for optimization. To evaluate the global impact of optimizations to critical activities, we predict the program execution using a graph-based performance projection. The developed approach has been demonstrated with suitable examples to be both scalable and correct. Furthermore, we establish a new categorization of CUDA inefficiency patterns ensuing from the dependencies between CUDA activities.
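At its core, the analysis reduces to a longest weighted path over a dependency DAG recovered from the trace. Here is a hedged miniature with invented activities and durations (the actual tool additionally attributes blame and projects the impact of optimizations):

```python
# Critical path = longest-duration chain through the dependency graph.
import functools

durations = {"h2d": 2, "kernel": 8, "mpi_wait": 5, "d2h": 2}
deps = {"kernel": ["h2d"], "mpi_wait": ["kernel"], "d2h": ["mpi_wait"]}

@functools.cache
def longest(node):
    # (total time, path) of the heaviest dependency chain ending at node
    best = max((longest(p) for p in deps.get(node, [])), default=(0, ()))
    return best[0] + durations[node], best[1] + (node,)

end = max(durations, key=lambda n: longest(n)[0])
print(longest(end))  # (17, ('h2d', 'kernel', 'mpi_wait', 'd2h'))
```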
28

Mujumdar, Purva, and J. Uma Maheswari. "Alternate beeline diagramming method network analysis for interdependent design entities". Engineering, Construction and Architectural Management 26, no. 1 (18 February 2019): 66–84. http://dx.doi.org/10.1108/ecam-07-2017-0112.

Abstract:
Purpose: The design phase is generally characterized by two-way multiple information exchanges/overlaps between interdependent entities. In this paper, entity is a generic term for teams, components, activities or parameters. Existing approaches can either capture only a single overlap or lack practical application in representing multiple overlaps. The beeline diagramming method (BDM) network is efficient in representing multiple overlaps for construction projects. However, it considers any entity as indivisible and cannot distinguish partial criticality of entities. In reality, the design phase in any construction project is driven on a need basis and often has numerous interruptions. Hence, there is a need to develop an alternate network analysis for BDM for interruptible execution. The paper aims to discuss these issues. Design/methodology/approach: A pilot study was conducted to formulate hypothetical examples. Subsequently, these hypothetical BDM examples were analyzed to trace a pattern of criticality. This pattern study, along with the existing precedence diagramming method network analysis, enabled new equations for the forward pass, backward pass and float to be derived. Finally, the proposed concepts were applied to two design cases and reviewed with the design experts. Findings: The proposed network analysis for BDM is efficient for interruptible entity execution. Practical implications: The proposed BDM network is an information-intensive network that enables the design participants to view the project holistically. Application to two distinct cases emphasizes that the concept is generic and can be applied to any project characterized by beelines. Originality/value: An alternate network analysis for BDM is investigated for interruptible entity execution. This study also clarifies the related concepts: interdependency, iteration, overlaps and multiple information exchanges/linkages.
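For readers unfamiliar with the baseline being extended, the classical forward pass over a precedence network looks like the sketch below (activities and durations invented); the paper's contribution is new pass and float equations for BDM networks with multiple overlaps and interruptible entities.

```python
# Classical forward pass: earliest start/finish from predecessor finishes.
durations = {"concept": 4, "design": 6, "review": 2}
preds = {"design": ["concept"], "review": ["design"]}

early_start, early_finish = {}, {}
for activity in ["concept", "design", "review"]:   # topological order
    early_start[activity] = max(
        (early_finish[p] for p in preds.get(activity, [])), default=0)
    early_finish[activity] = early_start[activity] + durations[activity]

print(early_finish["review"])  # 12: project duration without overlaps
```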
29

Lapkina, Anna Vadimovna, and Andrew Alexandrovitch Petukhov. "HTTP-Request Classification in Automatic Web Application Crawling". Proceedings of the Institute for System Programming of the RAS 33, no. 3 (2021): 77–86. http://dx.doi.org/10.15514/ispras-2021-33(3)-6.

Abstract:
The problem of automatic request classification, as well as the problem of determining the routing rules for requests on the server side, is directly connected with analysis of the user interface of dynamic web pages. This problem can be solved at the browser level, since the browser contains complete information about possible requests arising from interaction between the user and the web application. In this paper, we suggest using data from the request execution context in the web client to extract the classification features. A request context, or request trace, is a collection of additional identification data that can be obtained by observing the web page's JavaScript code execution or the changes of user interface elements as a result of the interface elements' activation. Such data include, for example, the position and style of the element that caused the client request, the JavaScript function call stack, and the changes in the page's DOM tree after the request was initialized. In this study, an implementation based on the Chrome Developer Tools Protocol is used to solve the problem at the browser level and to automate request trace selection.
30

Radenković, Uroš, Marko Mićović, and Zaharije Radivojević. "Evaluation and Benefit of Imprecise Value Prediction for Certain Types of Instructions". Electronics 12, no. 17 (24 August 2023): 3568. http://dx.doi.org/10.3390/electronics12173568.

Abstract:
Based on branch prediction, value prediction has emerged as a solution to the problems caused by true data dependencies in pipelined processors. While branch predictors have binary outcomes (taken/not taken), value predictors face a more challenging task, as their outcomes can take any value. Because of that, coverage is reduced to maintain high accuracy and minimise costly recovery from misprediction. This paper evaluates value prediction, focusing on the execution of instructions with imprecisely predicted operands whose results can still be correct. Two analytical models are introduced to represent instruction execution with value prediction. One model focuses on correctly predicted operands, while the other allows for imprecisely predicted operands as long as the instruction results remain correct. A trace-driven simulator was developed for simulation purposes, implementing well-known predictors and some of the predictors presented at the latest Championship Value Prediction. The gem5 simulator was upgraded to generate program traces of the SPEC and EEMBC benchmarks that were used in the simulations. Based on the simulation results, the proposed analytical models were compared to reveal the conditions under which the model with imprecisely predicted operands, but still correct results, achieves better execution time than the model with correctly predicted operands. The analysis revealed that the accuracy of a correct instruction result based on a predicted operand, even when the predicted operand is imprecise, is higher than the accuracy of a correctly predicted operand. The accuracy improvement ranges from 0.8% to 44%, depending on the specific predictor used.
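As a point of reference for the predictor families mentioned, a last-value predictor driven by a (pc, value) trace is about the simplest possible baseline; this sketch, with invented trace data, just counts how often the previous value would have been predicted again:

```python
# Toy last-value predictor over a (pc, produced value) program trace.
trace = [(0x40, 1), (0x40, 1), (0x40, 2), (0x40, 2)]

table, correct = {}, 0
for pc, value in trace:
    if table.get(pc) == value:
        correct += 1          # the prediction would have been correct
    table[pc] = value         # train with the actual outcome

print(f"accuracy {correct / len(trace):.0%}")  # 50%
```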
31

Cosimi, Francesco, Antonio Arena, Paolo Gai, and Sergio Saponara. "From SW Timing Analysis and Safety Logging to HW Implementation: A Possible Solution with an Integrated and Low-Power Logger Approach". Journal of Low Power Electronics and Applications 13, no. 4 (2 November 2023): 59. http://dx.doi.org/10.3390/jlpea13040059.

Abstract:
In this manuscript, we propose a configurable hardware device in order to build a coherent data log unit. We address the need for analyzing mixed-criticality systems, thus guaranteeing the best performances without introducing additional sources of interference. Log data are essential to inspect the behavior of running applications when safety analyses or worst-case execution time measurements are performed. Furthermore, performance and timing investigations are useful for solving scheduling issues to balance resource budgets and investigate misbehavior and failure causes. We additionally present a performance evaluation and log capabilities by means of simulations on a RISC-V use case. The simulations highlight that such a data log unit can trace the execution from a single- to an octa-core microcontroller. Such an analysis allows a silicon developer to obtain the right sizings and timings of devices during the development phase. Finally, we present an analysis of a real RISC-V implementation for a Xilinx UltraScale+ FPGA, which was obtained with Vivado 2018. The results show that our data log unit implementation does not introduce a significant area overhead if compared to the RISC-V core targeted for tests, and that the timing constraints are not violated.
32

Souprayen, Balamurugan, Ayyasamy Ayyanar, and Suresh Joseph K. "Optimization of C5.0 Classifier With Bayesian Theory for Food Traceability Management Using Internet of Things". International Journal of Smart Sensor Technologies and Applications 1, no. 1 (January 2020): 1–21. http://dx.doi.org/10.4018/ijssta.2020010101.

Abstract:
In order to survive in the existing financial circumstances and the development of the global food supply chain, the authors propose efficient food traceability techniques using the Internet of Things and obtain a solution for data prediction. The purpose of food traceability is to retain the good quality of the raw material supply, diminish losses, and reduce system complexity. The primary issue is to tackle current limitations to prevent food defects from exceeding hazardous levels and to inform customers of the safety measures. The proposed hybrid algorithm for food traceability makes accurate predictions and enhances period data. The operation of the Internet of Things is addressed to track and trace the food quality, checking the data acquired from manufacturers and consumers. The experimental analysis shows that the proposed algorithm has a high accuracy rate, a shorter execution time and a lower error rate.
33

Sun, Tao, and Xinming Ye. "A Model Reduction Method for Parallel Software Testing". Journal of Applied Mathematics 2013 (2013): 1–13. http://dx.doi.org/10.1155/2013/595897.

Abstract:
Modeling and testing for parallel software systems are very difficult, because the number of states and execution sequences expands significantly due to parallel behaviors. In this paper, a model reduction method based on Coloured Petri Nets (CPN) is shown, which can generate a functionality-equivalent and trace-equivalent model of smaller scale. Model-based testing for parallel software systems becomes much easier after the model is reduced by the reduction method. Specifically, a formal model for the software system specification is constructed based on CPN. Then the places in the model are divided into input places, output places, and internal places; the transitions in the model are divided into input transitions, output transitions, and internal transitions. Internal places and internal transitions can be reduced if their preconditions match, and some other operations must be performed to preserve functionality equivalence and trace equivalence. If the place and the transition are in a parallel structure, then many execution sequences are removed from the state space. We have proved the equivalence and analyzed the reduction effort, so that we can get the same testing result with a much lower testing workload. Finally, some practices and a performance analysis show that the method is effective.
34

Rashidi, Amirreza, Jolanta Tamošaitienė, Mehdi Ravanshadnia, and Hadi Sarvari. "A Scientometric Analysis of Construction Bidding Research Activities". Buildings 13, no. 1 (12 January 2023): 220. http://dx.doi.org/10.3390/buildings13010220.

Abstract:
Bidding is the process in which a contractor submits a tender to the owner of a construction project to undertake its execution. This enables companies to properly employ required contractors. This paper investigates the trends of research conducted on construction bidding from 1975 to 2022 through a scientometric analysis from different viewpoints. A total of 299 relevant articles published in 191 journals were collected from the Web of Science database and analyzed by HistCite and CiteSpace software. The top journals, articles, institutes, and authors that contributed to bidding studies were ranked. The trends of published articles and contributions from different countries on the subject were examined. Moreover, the co-occurrence network, strongest burst detection, trends of the top keywords, and cluster analysis were determined. This review creates an in-depth insight into the content, enabling researchers to understand the existing body of knowledge and to trace a practical guideline for future studies.
35

Chung, Jinsuk, Ikhwan Lee, Michael Sullivan, Jee Ho Ryoo, Dong Wan Kim, Doe Hyun Yoon, Larry Kaplan, and Mattan Erez. "Containment Domains: A Scalable, Efficient and Flexible Resilience Scheme for Exascale Systems". Scientific Programming 21, no. 3-4 (2013): 197–212. http://dx.doi.org/10.1155/2013/473915.

Abstract:
This paper describes and evaluates a scalable and efficient resilience scheme based on the concept of containment domains. Containment domains are a programming construct that enable applications to express resilience needs and to interact with the system to tune and specialize error detection, state preservation and restoration, and recovery schemes. Containment domains have weak transactional semantics and are nested to take advantage of the machine and application hierarchies and to enable hierarchical state preservation, restoration and recovery. We evaluate the scalability and efficiency of containment domains using generalized trace-driven simulation and analytical analysis and show that containment domains are superior to both checkpoint restart and redundant execution approaches.
36

Younan, Simon, and David R. Novog. "Development and Testing of TRACE/PARCS ECI Capability for Modelling CANDU Reactors with Reactor Regulating System Response". Science and Technology of Nuclear Installations 2022 (27 March 2022): 1–31. http://dx.doi.org/10.1155/2022/7500629.

Abstract:
The use of the USNRC codes TRACE and PARCS has been considered for the coupled safety analysis of CANDU reactors. A key element of CANDU simulations is the interaction between thermal-hydraulic and physics phenomena and the CANDU reactor regulating system (RRS). To date, no or limited development has taken place in TRACE-PARCS in this area. In this work, the system thermal-hydraulic code TRACE_Mac1.0 is natively coupled with the core physics code PARCS_Mac1.0, and RRS control is implemented via the exterior communications interface (ECI) in TRACE. The ECI is used for coupling external codes to TRACE, including additional physical models and control system models. In this work, a Python interface to the TRACE ECI library is developed, along with an RRS model written in Python. This coupling was tested using a CANDU-6 IAEA code coupling benchmark and a 900 MW CANDU model for various transients. For the CANDU-6 benchmark, the transients did not include the RRS response; however, the TRACE_Mac1.0/PARCS_Mac1.0 coupling and ECI script functionality were compared to the previous benchmark simulations, which utilized external coupling. For the 900 MW CANDU simulations, all aspects of the ECI module and RRS were included. The results from the CANDU-6 benchmark when using the built-in coupling are comparable to those previously achieved using external coupling between the two codes, with coupled simulations taking 2x to 3x less execution time. The 900 MW CANDU simulations successfully demonstrate the RRS functionality for loss-of-flow events, and the coupled solutions demonstrate adequate performance for figure-of-eight flow instability modeling.
37

Grossmann, Georg, Shamila Mafazi, Wolfgang Mayer, Michael Schrefl e Markus Stumptner. "Change Propagation and Conflict Resolution for the Co-Evolution of Business Processes". International Journal of Cooperative Information Systems 24, n. 01 (marzo 2015): 1540002. http://dx.doi.org/10.1142/s021884301540002x.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
In large organizations, multiple stakeholders may modify the same business process. This paper addresses the problem that arises when stakeholders change process views that then become inconsistent with the underlying business process and with other views. Related work on this problem is based on execution trace analysis, which is performed in a post-analysis phase and can be complex for large business process models. In this paper, we propose a design-based approach that can efficiently check consistency criteria and propagate changes on the fly from a process view to its reference process and to related process views. The technique is based on consistent specialization of business processes and supports the control-flow aspect of processes. Consistency can be checked at design time through simple rules, which supports efficient change propagation between views and the reference process.
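As a rough illustration of design-time propagation, the toy sketch below treats a view as an ordered subsequence of its reference process, checks that rule after each change, and re-projects an inserted activity onto affected views; the paper's consistency criteria for control flow are considerably richer.

```python
# Toy sketch of design-time change propagation: a process view is an
# ordered subsequence of the reference process; inserting an activity in
# one view is propagated to the reference process and re-projected onto
# the other affected views. Illustrative only.
reference = ["receive", "check", "approve", "archive"]
views = {
    "clerk":   ["receive", "check", "archive"],
    "manager": ["check", "approve"],
}

def consistent(view):
    """Rule: a view must preserve the reference ordering (subsequence check)."""
    it = iter(reference)
    return all(act in it for act in view)

def insert_in_view(view_name, new_act, after):
    """Propagate an insertion from one view into the reference and all views."""
    reference.insert(reference.index(after) + 1, new_act)
    for name, view in views.items():
        if name == view_name or after in view:      # re-project onto affected views
            view.insert(view.index(after) + 1, new_act)
    assert all(consistent(v) for v in views.values())

insert_in_view("manager", "sign-off", after="approve")
print(reference)          # ['receive', 'check', 'approve', 'sign-off', 'archive']
print(views["manager"])   # ['check', 'approve', 'sign-off']
```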
38

Pegoraro, Marco, Merih Seran Uysal e Wil M. P. van der Aalst. "Efficient Time and Space Representation of Uncertain Event Data". Algorithms 13, n. 11 (9 novembre 2020): 285. http://dx.doi.org/10.3390/a13110285.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Process mining is a discipline concerned with the analysis of execution data of operational processes, the extraction of models from event data, the measurement of conformance between event data and normative models, and the enhancement of all aspects of processes. Most approaches assume that event data accurately capture the behavior of the process. However, this is not realistic in many applications: data can contain uncertainty, generated by recording errors, imprecise measurements, and other factors. Recently, new methods have been developed to analyze event data containing uncertainty; these techniques prominently rely on representing uncertain event data by means of graph-based models that explicitly capture uncertainty. In this paper, we introduce a new approach to efficiently calculate a graph representation of the behavior contained in an uncertain process trace. We present our novel algorithm, prove its asymptotic time complexity, and show experimental results that highlight order-of-magnitude performance improvements for the behavior graph construction.
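A minimal sketch of the underlying idea, under the simplifying assumption that uncertainty is confined to interval timestamps: an edge is drawn when one event certainly precedes another, and a transitive reduction keeps the behavior graph small.

```python
# Minimal sketch of a behavior graph for one uncertain trace: events carry
# timestamp intervals, an edge means "certainly precedes", and a transitive
# reduction keeps the graph small. Simplified from the paper's construction.
from itertools import combinations

events = {"a": (1, 1), "b": (2, 5), "c": (3, 4), "d": (6, 6)}  # event -> (min, max) time

edges = set()
for (e, (e_min, e_max)), (f, (f_min, f_max)) in combinations(events.items(), 2):
    if e_max < f_min:            # e certainly happens before f
        edges.add((e, f))
    elif f_max < e_min:          # f certainly happens before e
        edges.add((f, e))

# Transitive reduction: drop (e, g) when some f already gives e -> f -> g.
reduced = {(e, g) for (e, g) in edges
           if not any((e, f) in edges and (f, g) in edges for f in events)}

print(sorted(reduced))           # [('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd')]
```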
39

Heimann, Peter, Carl-Arndt Krapp, Bernhard Westfechtel e Gregor Joeris. "Graph-Based Software Process Management". International Journal of Software Engineering and Knowledge Engineering 07, n. 04 (dicembre 1997): 431–55. http://dx.doi.org/10.1142/s0218194097000254.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Software process dynamics challenge the capabilities of process-centered software engineering environments. Dynamic task nets represent evolving software processes by hierarchically organized nets of tasks which are connected by control, data, and feedback flows. Project managers operate on dynamic task nets in order to assess the current status of a project, trace its history, perform impact analysis, handle feedback, adapt the project plan to changed product structures, etc. Developers are supported through task agendas and provision of tools and documents. Chained tasks may be executed in parallel (simultaneous engineering), and cooperation is controlled through releases of document versions. Dynamic task nets are formally specified by a programmed graph rewriting system. Operations on task nets are specified declaratively by graph rewrite rules at a high level of abstraction. Furthermore, editing, analysis, and execution steps on a dynamic task net, which may be interleaved seamlessly, are described in a uniform formalism.
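A bare-bones illustration of the data structure, not the paper's graph-rewriting formalism: tasks nest hierarchically and are connected by typed control, data, and feedback flows, and a task's readiness is derived from its incoming control flows.

```python
# Bare-bones data structure for a dynamic task net: hierarchically nested
# tasks connected by typed flows. Purely illustrative of the concepts; the
# paper specifies these nets with a programmed graph rewriting system.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    state: str = "waiting"               # waiting | active | done
    subtasks: list = field(default_factory=list)

@dataclass
class Flow:
    source: Task
    target: Task
    kind: str                            # "control" | "data" | "feedback"

design, code, test = Task("design"), Task("code"), Task("test")
project = Task("project", subtasks=[design, code, test])

flows = [
    Flow(design, code, "control"),
    Flow(code, test, "control"),
    Flow(test, code, "feedback"),        # rework loop back to coding
]

def ready(task):
    """A task may start when all incoming control flows come from done tasks."""
    return all(f.source.state == "done"
               for f in flows if f.target is task and f.kind == "control")

design.state = "done"
print([t.name for t in project.subtasks if ready(t) and t.state == "waiting"])
# ['code']  (test still waits on code; simultaneous engineering would relax this)
```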
40

Askanius, Tina. "On Frogs, Monkeys, and Execution Memes: Exploring the Humor-Hate Nexus at the Intersection of Neo-Nazi and Alt-Right Movements in Sweden". Television & New Media 22, n. 2 (22 gennaio 2021): 147–65. http://dx.doi.org/10.1177/1527476420982234.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This article is based on a case study of the online media practices of the militant neo-Nazi organization the Nordic Resistance Movement, currently the biggest and most active extreme-right actor in Scandinavia. I trace a recent turn to humor, irony, and ambiguity in their online communication and the increasing adaptation of stylistic strategies and visual aesthetics of the Alt-Right inspired by online communities such as 4chan, 8chan, Reddit, and Imgur. Drawing on a visual content analysis of memes (N = 634) created and circulated by the organization, the analysis explores the place of humor, irony, and ambiguity across these cultural expressions of neo-Nazism and how ideas, symbols, and layers of meaning travel back and forth between neo-Nazi and Alt-Right groups within Sweden today.
41

Solovev, Mikhail Aleksandrovich, Maksim Gennadevich Bakulin, Sergei Sergeevich Makarov, Dmitrii Valerevich Manushin e Vartan Andronikovich Padaryan. "Practical Abstract Interpretation of Binary Code". Proceedings of the Institute for System Programming of the RAS 32, n. 6 (2020): 101–10. http://dx.doi.org/10.15514/ispras-2020-32(6)-8.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The mathematical foundations of abstract interpretation provide a unified method for formalizing and studying program analysis algorithms for a broad spectrum of practical problems. However, its practical use for binary code analysis faces several challenges, both scientific and engineering. In this paper we address some of those challenges. We describe an intermediate representation (IR) that is tailored to binary code analysis and, unlike some other IRs, remains usable for system code analysis. To achieve this, we take into account the low-level specifics of how CPUs work; at the IR level this mostly concerns modeling main memory, in that accesses can fail and addresses can alias. Further, we propose an infrastructure for carrying out abstract interpretation on top of the IR. The user implements the abstract state and the transfer functions, and the infrastructure handles the rest: two executors are currently implemented, one for the analysis of a single path and one for fixed-point analysis. Both executors handle interprocedural analysis internally, via inlining or via summaries, so an interpretation considers only one procedure at a time, which greatly simplifies implementation. The IR and the abstract interpretation framework are used together to define a model pipeline for a target instruction set architecture, consisting of a fetch stage, a decode stage, and an execute stage. A distinct fetch stage makes it possible to model delay slots, hardware loops, and similar features. We currently have limited implementations for RISC-V and x86. The x86 implementation is evaluated in two experiments in which concolic execution is used to automatically analyze a "crackme" program, in both a dynamic (execution trace) and a static (executable image) setting. In conclusion, we outline future directions of our project.
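The division of labor the abstract describes (user-supplied abstract state and transfer functions, generic executors) can be sketched as follows on a toy sign domain; all names are illustrative, not the authors' infrastructure.

```python
# Minimal shape of the described split: the user supplies an abstract
# domain (state + transfer functions), a generic executor supplies the
# fixed-point iteration. Shown here on a toy sign domain over a tiny CFG.
BOT, NEG, ZERO, POS, TOP = "bot", "-", "0", "+", "top"

def join(a, b):
    if a == BOT: return b
    if b == BOT: return a
    return a if a == b else TOP

def transfer(instr, state):
    """User-supplied transfer function for one toy IR instruction."""
    op, dst, src = instr
    if op == "const":
        return {**state, dst: POS if src > 0 else ZERO if src == 0 else NEG}
    if op == "neg":
        flip = {POS: NEG, NEG: POS}
        sign = state.get(src, TOP)
        return {**state, dst: flip.get(sign, sign)}
    return state                                   # e.g. "nop"

def fixpoint(cfg, entry_state):
    """Generic executor: iterate transfer over the CFG until states stabilize."""
    states = {node: {} for node in cfg}
    states[0] = entry_state
    changed = True
    while changed:
        changed = False
        for node, (instr, succs) in cfg.items():
            out = transfer(instr, states[node])
            for s in succs:
                merged = {v: join(states[s].get(v, BOT), out.get(v, BOT))
                          for v in set(states[s]) | set(out)}
                if merged != states[s]:
                    states[s], changed = merged, True
    return states

# node -> (instruction, successor nodes)
cfg = {0: (("const", "x", 5), [1]), 1: (("neg", "y", "x"), [2]), 2: (("nop", None, None), [])}
print(fixpoint(cfg, {})[2])   # {'x': '+', 'y': '-'} (key order may vary)
```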
42

Yang, Xiaodong, Omar Ali Beg, Matthew Kenigsberg e Taylor T. Johnson. "A Framework for Identification and Validation of Affine Hybrid Automata from Input-Output Traces". ACM Transactions on Cyber-Physical Systems 6, n. 2 (30 aprile 2022): 1–24. http://dx.doi.org/10.1145/3470455.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Automata-based modeling of hybrid and cyber-physical systems (CPS) is an important formal abstraction amenable to algorithmic analysis of their dynamic behaviors, such as in verification, fault identification, and anomaly detection. However, for realistic systems, especially industrial ones, identifying hybrid automata is challenging, due in part to the need to infer hybrid interactions, which involves inferring both continuous behaviors, as in classical system identification, and discrete behaviors, as in automata learning (e.g., L*). In this paper, we propose and evaluate a framework for inferring and validating models of deterministic hybrid systems with linear ordinary differential equations (ODEs) from input/output execution traces. The framework contains algorithms for approximating the continuous dynamics in discrete modes, estimating transition conditions, and merging automaton modes. The algorithms cluster trace segments and estimate their dynamic parameters while deriving guard conditions represented by multiple linear inequalities. Finally, the inferred model is automatically converted to the format of the original system for validation. We demonstrate the utility of this framework by evaluating its performance in several case studies, as implemented through a publicly available prototype software framework called HAutLearn, and compare it with a membership-based algorithm.
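One ingredient of such a framework, estimating the parameters of a linear ODE for a trace segment, can be sketched with a least-squares fit on finite differences; HAutLearn's full pipeline (segmentation, clustering, guard inference, mode merging) is far more involved.

```python
# Sketch of one ingredient: fitting the parameters of a linear ODE
# x' = a*x + b to a trace segment via least squares on finite differences.
import numpy as np

def fit_linear_ode(ts, xs):
    """Estimate (a, b) with x' ~= a*x + b from sampled trajectory points."""
    dx = np.diff(xs) / np.diff(ts)            # finite-difference derivative
    A = np.column_stack([xs[:-1], np.ones(len(dx))])
    (a, b), *_ = np.linalg.lstsq(A, dx, rcond=None)
    return a, b

# Synthetic segment generated by x' = -2x + 4 with x(0) = 5 (so x -> 2).
ts = np.linspace(0.0, 1.0, 200)
xs = 2.0 + 3.0 * np.exp(-2.0 * ts)            # exact solution of that ODE
a, b = fit_linear_ode(ts, xs)
print(f"a ~ {a:.2f}, b ~ {b:.2f}")            # close to a = -2, b = 4

# A guard could then be inferred as a linear inequality on x, e.g.
# "switch modes when x <= 2.1", from where segments change dynamics.
```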
43

Nebesnaya, A. "IMPROVING THE DEVELOPMENT OF TOURISM INFRASTRUCTURE IN THE REGION". Actual directions of scientific researches of the XXI century: theory and practice 11, n. 4 (29 dicembre 2023): 128–41. http://dx.doi.org/10.34220/2308-8877-2023-11-4-128-141.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The article is devoted to improving the development of tourism infrastructure in the region. The purpose of the study is to analyze the development of tourism infrastructure in the regions, in particular collective accommodation facilities, to identify the advantages and disadvantages of its development, and to trace its overall dynamics. An analysis of the recent literature on the research questions was carried out. The methodological basis comprised comparative and computational-analytical methods, including the collection and analysis of the main indicators of tourism infrastructure development. A SWOT analysis of the factors influencing the effectiveness of tourism infrastructure development in the regions of the Russian Federation was carried out; based on its results, potential opportunities for and threats to development were identified, and strengths and opportunities were found to outweigh weaknesses and threats. The mechanism for financing the development of tourism infrastructure is described and its effectiveness analyzed. Recommendations are given for improving the mechanism for financing infrastructure projects: optimizing the number of projects in favor of their high-quality execution, establishing strict control over budget investments, developing tax breaks for participants in infrastructure projects, and using PPP mechanisms that allow businesses to share costs and risks with the state.
44

Sayadi, Hossein, Yifeng Gao, Hosein Mohammadi Makrani, Jessica Lin, Paulo Cesar Costa, Setareh Rafatirad e Houman Homayoun. "Towards Accurate Run-Time Hardware-Assisted Stealthy Malware Detection: A Lightweight, yet Effective Time Series CNN-Based Approach". Cryptography 5, n. 4 (17 ottobre 2021): 28. http://dx.doi.org/10.3390/cryptography5040028.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
According to recent security analysis reports, malicious software (a.k.a. malware) is rising at an alarming rate in numbers, complexity, and harmful purposes to compromise the security of modern computer systems. Recently, malware detection based on low-level hardware features (e.g., Hardware Performance Counter (HPC) information) has emerged as an effective alternative solution to address the complexity and performance overheads of traditional software-based detection methods. Hardware-assisted Malware Detection (HMD) techniques depend on standard Machine Learning (ML) classifiers to detect signatures of malicious applications by monitoring built-in HPC registers during execution at run-time. Prior HMD methods, though effective, have limited their study to detecting malicious applications spawned as separate threads during application execution; detecting stealthy malware patterns at run-time therefore remains a critical challenge. Stealthy malware refers to harmful cyber attacks in which malicious code is hidden within benign applications and remains undetected by traditional malware detection approaches. In this paper, we first present a comprehensive review of recent advances in hardware-assisted malware detection studies that have used standard ML techniques to detect malware signatures. Next, to address the challenge of stealthy malware detection at the processor's hardware level, we propose StealthMiner, a novel specialized time-series machine learning approach to accurately detect stealthy malware traces at run-time using branch instructions, the most prominent HPC feature. StealthMiner is based on a lightweight time-series Fully Convolutional Neural network (FCN) model that automatically identifies potentially contaminated samples in HPC-based time-series data and uses them to accurately recognize the trace of stealthy malware. Our analysis demonstrates that state-of-the-art ML-based malware detection methods are not effective in detecting stealthy malware samples, since the captured HPC data represent not only the malware but also the microarchitectural behavior of benign applications. The experimental results demonstrate that, with the aid of our novel intelligent approach, stealthy malware can be detected at run-time with 94% detection performance on average using only one HPC feature, outperforming the detection performance of state-of-the-art HMD and general time-series classification methods by up to 42% and 36%, respectively.
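For orientation, this is the general shape of a fully convolutional network for univariate time-series classification, the model family StealthMiner builds on; the layer sizes below are illustrative assumptions, not the paper's exact configuration.

```python
# A generic fully convolutional network (FCN) for univariate time-series
# classification, of the family StealthMiner builds on; layer sizes here
# are illustrative, not the paper's exact configuration.
import torch
import torch.nn as nn

class FCN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=8, padding=4), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 64, kernel_size=3, padding=1), nn.BatchNorm1d(64), nn.ReLU(),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, 1, time_steps)
        z = self.features(x)
        z = z.mean(dim=2)              # global average pooling over time
        return self.head(z)

# One HPC feature (e.g. branch-instruction counts) sampled over 100 intervals.
model = FCN()
batch = torch.randn(8, 1, 100)         # 8 windows of a synthetic HPC trace
logits = model(batch)
print(logits.shape)                    # torch.Size([8, 2]): benign vs. stealthy malware
```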
45

Pani, Santosh Kumar, e G. B. Mund. "Property Based Dynamic Slicing of Object Oriented Programs". International Journal of Software Engineering and Technologies (IJSET) 1, n. 2 (1 agosto 2016): 69. http://dx.doi.org/10.11591/ijset.v1i2.4570.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Slicing is used for program analysis. It is the process of extracting the statements of a program that are relevant to a given computation. Static slicing generates slices for all possible executions of a program, helping in program understanding, verification, maintenance, and testing. Dynamic slices are smaller, as they are extracted for one given execution of a program, and they help in interactive applications such as debugging and testing. With the widespread use of object-oriented software, many papers have addressed dynamic slicing of object-oriented programs, but only a few address in detail the most basic features of object-oriented programming, that is, class definition, object creation, access to objects through references, method invocation, polymorphism, inheritance, and so on. Over the last three decades, many algorithms have been designed to slice a program with respect to its syntax. Real-world object-oriented programs consist of thousands of lines of code, and traditional syntax-based slices for program variables used in many places in a program are generally large, even for dynamic slices. Recently, some work has been done to compute slices based on abstract or concrete properties of program variables; for smooth debugging and testing, the slice is smaller if a particular (semantic) property is considered. Most semantics-based slicing algorithms have focused on finding static slices on abstract properties, using SSA as the intermediate representation, and extract slices by storing an execution trace of the program. To the best of our knowledge, generating dynamic slices based on abstract or concrete properties of program variables is scarcely reported in the literature. In this paper we present an algorithm for generating dynamic abstract slices of object-oriented programs that addresses all key object-oriented features.
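A minimal backward dynamic slicer over a recorded trace illustrates the baseline the paper refines: each executed statement instance records the variable it defines and the variables it uses, and the slice is collected by walking the trace backwards. Control dependences and the abstract-property refinement are omitted.

```python
# Minimal dynamic slicer over a recorded execution trace: each trace entry
# is (statement id, variable defined, variables used); the slice for a
# criterion variable is the set of statements reached backwards through
# dynamic data dependences.
trace = [                      # executed instances, in order
    (1, "a", []),              # a = input()
    (2, "b", []),              # b = input()
    (3, "c", ["a"]),           # c = a * 2
    (4, "d", ["b"]),           # d = b + 1
    (5, "c", ["c", "d"]),      # c = c + d
]

def dynamic_slice(trace, criterion_var):
    slice_stmts, needed = set(), {criterion_var}
    for stmt, defined, used in reversed(trace):     # walk the trace backwards
        if defined in needed:
            slice_stmts.add(stmt)
            needed.discard(defined)                 # this instance defines it...
            needed.update(used)                     # ...and depends on its uses
    return sorted(slice_stmts)

print(dynamic_slice(trace, "c"))   # [1, 2, 3, 4, 5]: everything feeds the final c
print(dynamic_slice(trace, "d"))   # [2, 4]: only b's definition and d's
```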
46

Yang, Zhixin, Wei Xu, Pak-Kin Wong e Xianbo Wang. "Modeling of RFID-Enabled Real-Time Manufacturing Execution System in Mixed-Model Assembly Lines". Mathematical Problems in Engineering 2015 (2015): 1–15. http://dx.doi.org/10.1155/2015/575402.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
To respond quickly to diverse product demands, mixed-model assembly lines are widely adopted in discrete manufacturing industries. Besides the complexity of material distribution, mixed-model assembly involves a variety of components, different process plans, and fast production changes, which greatly increase the difficulty of agile production management. Aiming to break through the bottlenecks of existing production management, a novel RFID-enabled manufacturing execution system (MES), featuring real-time and wireless information interaction, is proposed to identify various manufacturing objects, including WIPs, tools, and operators, and to trace their movements throughout the production processes. However, subject to constraints on safety stock, machine assignment, setup, and scheduling requirements, optimizing the RFID-enabled MES model for production planning and scheduling is an NP-hard problem. A new heuristic generalized Lagrangian decomposition approach is proposed for model optimization; it decomposes the model into three subproblems: computing the optimal configuration of the RFID sensor network, optimizing production planning subject to machine setup costs and safety stock constraints, and optimizing scheduling for minimal overtime. RFID signal processing methods that can handle unreliable, redundant, and missing tag events are also described in detail. The validity of the model is discussed through algorithm analysis and verified through numerical simulation. The proposed design scheme has important reference value for applications of RFID in many manufacturing fields and lays a research foundation for leveraging digital and networked manufacturing systems towards intelligence.
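The decomposition strategy can be illustrated generically: relax the coupling constraint with a multiplier, solve the resulting subproblems independently, and update the multiplier by subgradient ascent. The toy below is a two-subproblem stand-in for the paper's three-subproblem scheme.

```python
# Generic shape of a Lagrangian decomposition: relax the coupling
# constraint x + y >= D with a multiplier, solve the two resulting
# single-variable subproblems independently, and update the multiplier
# by subgradient ascent. A toy stand-in for the paper's scheme.
def solve_subproblem(cost, lam, domain=range(6)):
    """min (cost - lam) * v over a small discrete domain."""
    return min(domain, key=lambda v: (cost - lam) * v)

def lagrangian_decomposition(c_x=3.0, c_y=4.0, demand=7, steps=50):
    lam = 0.0
    for k in range(1, steps + 1):
        x = solve_subproblem(c_x, lam)           # subproblem 1 (e.g. planning)
        y = solve_subproblem(c_y, lam)           # subproblem 2 (e.g. scheduling)
        subgrad = demand - x - y                 # violation of relaxed constraint
        lam = max(0.0, lam + (1.0 / k) * subgrad)   # diminishing step size
    # Note: the final (x, y) need not be primal feasible; in practice a
    # heuristic repair step follows the dual iterations.
    return lam, x, y

lam, x, y = lagrangian_decomposition()
print(f"multiplier={lam:.2f}, x={x}, y={y}, x+y={x + y} (demand 7)")
```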
47

Щетнева, Вера Александровна. "POLITICAL, LEGAL AND ORGANIZATIONAL TENDENSIES IN THE EXECUTION OF PUNISHMENT IN THE FORM OF IMPRISONMENT IN RELATION TO CONVICTED WOMEN IN POST-SOVIET CORRECTIONAL COLONIES PERIOD". Vestnik Samarskogo iuridicheskogo instituta, n. 5(51) (20 dicembre 2022): 71–78. http://dx.doi.org/10.37523/sui.2022.51.5.012.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The post-Soviet period (since 1991) in the penal system of Russia has been characterized by both positive and negative developments. Under these conditions, the institution of the execution of punishment in the form of imprisonment for convicted women in correctional colonies was formed and developed. The period under study (more than 30 years) makes it possible to trace certain trends that have developed in the political, legal, and organizational aspects of the legal status of convicted women and of the procedure for the execution of sentences of imprisonment in correctional institutions. In preparing the article, the author used theoretical research methods (analysis, synthesis, analogy, induction, deduction, generalization) and practical research methods (observation, conversation and interviewing, and the comparative-legal and historical-legal methods). The article concludes that the specifics of the execution of punishment in correctional institutions for convicted women are determined by a psychophysiological factor.
48

Dushku, Edlira, Jeppe Hagelskjær Østergaard e Nicola Dragoni. "Memory Offloading for Remote Attestation of Multi-Service IoT Devices". Sensors 22, n. 12 (8 giugno 2022): 4340. http://dx.doi.org/10.3390/s22124340.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Remote attestation (RA) is an effective malware detection mechanism that allows a trusted entity (the Verifier) to detect a potentially compromised remote device (the Prover). Recent research has proposed advanced Control-Flow Attestation (CFA) protocols that can trace the Prover's execution flow to detect runtime attacks. Nevertheless, several memory regions remain unattested, leaving the Prover vulnerable to data-memory and mobile adversaries. Multi-service devices, whose integrity also depends on the integrity of any attached external peripheral devices, are particularly vulnerable to such attacks. This paper extends state-of-the-art RA schemes by presenting ERAMO, a protocol that attests larger memory regions by adopting a memory offloading approach. We validate and evaluate ERAMO with a hardware proof-of-concept implementation using a TrustZone-capable LPC55S69 running two sensor nodes. We enhance the protocol by providing extensive memory analysis insights for multi-service devices, demonstrating that it is possible to analyze and attest the memory of attached peripherals. Experiments confirm the feasibility and effectiveness of ERAMO in attesting dynamic memory regions.
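For context, a toy version of nonce-based attestation of memory regions is sketched below; ERAMO's memory offloading, TrustZone isolation, and peripheral handling are all abstracted away, and the key handling shown is purely illustrative.

```python
# Toy shape of nonce-based remote attestation of memory regions: the
# Verifier challenges with a fresh nonce, the Prover returns a keyed
# digest over the selected regions, and the Verifier compares against
# reference measurements. ERAMO's offloading and isolation are omitted.
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)              # provisioned out of band (illustrative)

def prover_attest(regions, nonce):
    """Digest the requested memory regions, bound to the nonce."""
    mac = hmac.new(SHARED_KEY, nonce, hashlib.sha256)
    for region in regions:
        mac.update(region)
    return mac.digest()

def verifier_check(reference_regions, nonce, response):
    expected = prover_attest(reference_regions, nonce)
    return hmac.compare_digest(expected, response)

firmware = b"\x90" * 1024                # Prover's code region
peripheral_cfg = b"\x42" * 64            # attached peripheral state, also attested
nonce = os.urandom(16)                   # freshness: prevents replayed responses

response = prover_attest([firmware, peripheral_cfg], nonce)
print(verifier_check([firmware, peripheral_cfg], nonce, response))   # True

tampered = b"\x90" * 1023 + b"\xcc"      # one byte of injected code
response = prover_attest([tampered, peripheral_cfg], nonce)
print(verifier_check([firmware, peripheral_cfg], nonce, response))   # False
```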
49

Shaduntc, Elena. "The Middle Ages in the Landscape of the Present-Day Pereslavl-Zalessky". ISTORIYA 12, n. 9 (107) (2021): 0. http://dx.doi.org/10.18254/s207987840017120-1.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Among ancient Russian towns, Pereslavl-Zalessky stands out for the rare preservation of its historical town-planning structure. The basis for this assessment is provided by the evidence of sources on the town's history during the 12th to 17th centuries and by a comparative analysis of cartographical documents of the early modern period. Views on the formation of Old Russian towns and on the factors affecting town planning have recently undergone considerable changes. Comparing historical, archival, and archaeological evidence with the present-day topography of Pereslavl allows us to trace the modification of a planning structure that has retained not only individual architectural objects of the 12th to 17th centuries but also parcels of the medieval town layout. The article presents examples of 'exceptions to the rule' made during the execution of a regular plan at the end of the 18th century which, together with historical evidence on the composition and occupations of the trading quarter's population, make it possible to determine more precisely the historical peculiarity of Pereslavl.
50

Ang, Zhendong, e Umang Mathur. "Predictive Monitoring against Pattern Regular Languages". Proceedings of the ACM on Programming Languages 8, POPL (5 gennaio 2024): 2191–225. http://dx.doi.org/10.1145/3632915.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
While current bug detection techniques for concurrent software focus on unearthing low-level issues such as data races or deadlocks, they often fall short of discovering more intricate temporal behaviours that can arise even in the absence of such low-level issues. In this paper, we focus on the problem of dynamically analysing concurrent software against high-level temporal specifications such as LTL. Existing techniques for runtime monitoring against such specifications are primarily designed for sequential software and remain inadequate in the presence of concurrency: violations may be observed only in intricate thread interleavings, requiring many re-runs of the underlying software in conjunction with the analysis. Towards this, we study the problem of predictive runtime monitoring, inspired by the analogous problem of predictive data race detection studied extensively in recent years. The predictive runtime monitoring question asks, given an execution σ, whether it can be soundly reordered to expose violations of a specification. In general, this problem can easily become intractable when either the specifications or the notion of reordering used is complex. In this paper, we focus on specifications given as regular languages. Our notion of reordering is trace equivalence, where an execution is considered a reordering of another if it can be obtained from the latter by successively commuting adjacent independent actions. We first show that, even in this simplistic setting, the problem of predictive monitoring admits a super-linear lower bound of O(n^α), where n is the number of events in the execution and α is a parameter describing the degree of commutativity, which typically corresponds to the number of threads in the execution. As a result, predictive runtime monitoring even in this setting is unlikely to be efficiently solvable, unlike in the non-predictive setting, where the problem can be checked using a deterministic finite automaton (and thus a constant-space, streaming, linear-time algorithm). We therefore identify a sub-class of regular languages, called pattern languages (and their extension, generalized pattern languages). Pattern languages can naturally express the specific ordering of a number of (labelled) events and are inspired by popular empirical hypotheses underlying many concurrency bug detection approaches, such as the "small bug depth" hypothesis. More importantly, we show that for pattern (and generalized pattern) languages, the predictive monitoring problem can be solved by a constant-space, streaming, linear-time algorithm. We implement and evaluate our algorithm, PatternTrack, on benchmarks from the literature and show that it is effective in monitoring large-scale applications.
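The constant-space streaming monitor mentioned for the non-predictive setting can be sketched directly: for a pattern of k labelled events, the monitor keeps only the index of the next awaited event. The predictive variant must additionally search over trace-equivalent reorderings.

```python
# Constant-space streaming check of a pattern language: does the observed
# execution contain events e1, ..., ek in this order (as a subsequence)?
# This is the non-predictive DFA-style monitor the abstract contrasts
# with; PatternTrack additionally searches sound reorderings of the trace.
def monitor(pattern):
    """Return a streaming monitor; feed() is O(1) time and space per event."""
    state = 0                                  # index of the next awaited event

    def feed(event):
        nonlocal state
        if state < len(pattern) and event == pattern[state]:
            state += 1
        return state == len(pattern)           # True once the pattern has matched

    return feed

# Pattern: free(x) followed (possibly much later) by a read of x.
feed = monitor([("free", "x"), ("read", "x")])
execution = [("alloc", "x"), ("read", "x"), ("free", "x"),
             ("write", "y"), ("read", "x")]   # use after free, eventually
print(any(feed(e) for e in execution))        # True: violation observed

# In the predictive setting the same match must be sought across all
# trace-equivalent reorderings, which is where the O(n^alpha) bound bites.
```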
