Academic literature on the topic 'Execution trace analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Execution trace analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Execution trace analysis":

1. Langevine, Ludovic, and Mireille Ducassé. "Design and implementation of a tracer driver: Easy and efficient dynamic analyses of constraint logic programs." Theory and Practice of Logic Programming 8, no. 5-6 (November 2008): 581–609. http://dx.doi.org/10.1017/s147106840800344x.

Abstract:
Tracers provide users with useful information about program executions. In this article, we propose a “tracer driver”. From a single tracer, it provides a powerful front-end enabling multiple dynamic analysis tools to be easily implemented, while limiting the overhead of the trace generation. The relevant execution events are specified by flexible event patterns and a large variety of trace data can be given either systematically or “on demand”. The proposed tracer driver has been designed in the context of constraint logic programming (CLP); experiments have been made within GNU-Prolog. Execution views provided by existing tools have been easily emulated with a negligible overhead. Experimental measures show that the flexibility and power of the described architecture lead to good performance. The tracer driver overhead is inversely proportional to the average time between two traced events. Whereas the principles of the tracer driver are independent of the traced programming language, it is best suited for high-level languages, such as CLP, where each traced execution event encompasses numerous low-level execution steps. Furthermore, CLP is especially hard to debug. The current environments do not provide all the useful dynamic analysis tools. They can significantly benefit from our tracer driver which enables dynamic analyses to be integrated at a very low cost.
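
To make the "tracer driver" architecture concrete, here is a minimal Python sketch of the idea: several analyses register event patterns with a single driver, which filters events at the source and forwards only the matches. All names and the event format are illustrative assumptions, not the GNU-Prolog API described in the article.

```python
from collections import defaultdict

class TracerDriver:
    """Hypothetical driver: one tracer, many pattern-filtered analyses."""

    def __init__(self):
        self.handlers = defaultdict(list)  # event kind -> [(pattern, callback)]

    def subscribe(self, kind, pattern, callback):
        # Register a callback fired for events of `kind` matching `pattern`.
        self.handlers[kind].append((pattern, callback))

    def emit(self, event):
        # Called by the tracer for every execution event; only matching
        # subscribers are notified, limiting trace-generation overhead.
        for pattern, callback in self.handlers.get(event["kind"], ()):
            if all(event.get(k) == v for k, v in pattern.items()):
                callback(event)

# Two analyses sharing one tracer: a failure counter and a domain logger.
driver = TracerDriver()
failures = []
driver.subscribe("backtrack", {}, failures.append)
driver.subscribe("reduce", {"var": "X"}, lambda e: print("X narrowed:", e["domain"]))

driver.emit({"kind": "reduce", "var": "X", "domain": [1, 2, 3]})
driver.emit({"kind": "backtrack", "depth": 4})
print(len(failures))  # -> 1
```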

2. Jahier, Erwan, and Mireille Ducassé. "Generic program monitoring by trace analysis." Theory and Practice of Logic Programming 2, no. 4-5 (July 2002): 611–43. http://dx.doi.org/10.1017/s1471068402001461.

Abstract:
Program execution monitoring consists of checking whole executions for given properties, and collecting global run-time information. Monitoring gives valuable insights and helps programmers maintain their programs. However, application developers face the following dilemma: either they use existing monitoring tools which never exactly fit their needs, or they invest a lot of effort to implement relevant monitoring code. In this paper, we argue that when an event-oriented tracer exists, the compiler developers can enable the application developers to easily code their own monitors. We propose a high-level primitive called foldt which operates on execution traces. One of the key advantages of our approach is that it allows a clean separation of concerns; the definition of monitors is totally distinct from both the user source code and the language compiler. We give a number of applications of the use of foldt to define monitors for Mercury program executions: execution profiles, graphical abstract views, and two test coverage measurements. Each example is implemented by a few simple lines of Mercury.
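
The foldt primitive lends itself to a compact illustration. Below is a Python analogue (the paper's primitive is a Mercury construct; the event fields here are assumptions for illustration): a monitor is just a folding function over trace events plus an initial accumulator, fully separate from the traced program and the compiler.

```python
from functools import reduce

def foldt(step, acc, trace):
    # Fold a monitor's `step` function over an execution trace.
    return reduce(step, trace, acc)

# Monitor 1: an execution profile (how often each predicate is called).
def count_calls(profile, event):
    if event["port"] == "call":
        profile[event["pred"]] = profile.get(event["pred"], 0) + 1
    return profile

# Monitor 2: test coverage as the set of predicates ever reached.
coverage = lambda seen, event: seen | {event["pred"]}

trace = [
    {"port": "call", "pred": "append/3"},
    {"port": "exit", "pred": "append/3"},
    {"port": "call", "pred": "append/3"},
]
print(foldt(count_calls, {}, trace))  # {'append/3': 2}
print(foldt(coverage, set(), trace))  # {'append/3'}
```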

3. Simmons, Sharon, Dennis Edwards, and Phil Kearns. "Communication Analysis of Distributed Programs." Scientific Programming 14, no. 2 (2006): 151–70. http://dx.doi.org/10.1155/2006/763568.

Abstract:
Capturing and examining the causal and concurrent relationships of a distributed system is essential to a wide range of distributed systems applications. Many approaches to gathering this information rely on trace files of executions. The information obtained through tracing is limited to those executions observed. We present a methodology that analyzes the source code of the distributed system. Our analysis considers each process's source code and produces a single comprehensive graph of the system's possible behaviors. The graph, termed the partial order graph (POG), uniquely represents each possible partial order of the system. Causal and concurrent relationships can be extracted relative either to a particular partial order, which is synonymous with a single execution, or to a collection of partial orders. The graph provides a means of reasoning about the system in terms of relationships that will definitely occur, may possibly occur, and will never occur. Distributed assert statements provide a means to monitor distributed system executions. By constructing the POG prior to system execution, the causality information provided by the POG enables run-time evaluation of the assert statement without relying on traces or additional messages.

4. Côté, Mathieu, and Michel R. Dagenais. "Problem Detection in Real-Time Systems by Trace Analysis." Advances in Computer Engineering 2016 (January 6, 2016): 1–12. http://dx.doi.org/10.1155/2016/9467181.

Abstract:
This paper focuses on the analysis of execution traces for real-time systems. Kernel tracing can provide useful information without having to instrument the applications studied. However, the generated traces are often very large. The challenge is to retrieve only the relevant data in order to quickly find complex or erratic real-time problems. We propose a new approach to help find those problems. First, we provide a way to define the execution model of real-time tasks with the optional suggestions of a pattern discovery algorithm. Then, we show the resulting real-time jobs in a Comparison View to highlight those that are problematic. Once some jobs that present irregularities are selected, different analyses are executed on the corresponding trace segments instead of the whole trace. This saves a huge amount of time and makes it possible to execute more complex analyses. Our main contribution is to combine the critical path analysis with the scheduling information to detect scheduling problems. The efficiency of the proposed method is demonstrated with two test cases, where problems that were difficult to identify were found in a few minutes.

5. Al-Rousan, Thamer, and Hasan Abualese. "A new technique for understanding large-scale software systems." Telfor Journal 12, no. 1 (2020): 34–39. http://dx.doi.org/10.5937/telfor2001034a.

Abstract:
Comprehending a huge execution trace is not a straightforward task due to the size of the data to be processed. Detecting and removing utilities is useful to facilitate the understanding of software and to decrease the complexity and size of the execution trace. The goal of this study is to develop a novel technique to minimize the complexity and the size of traces by detecting and removing utilities from the execution trace of object-oriented software. Two novel utility detection class metrics are suggested to decide the degree to which a specific class can be counted as a utility class. Dynamic coupling analysis forms the basis for the proposed technique to address object-oriented features. The technique presented in this study has been tested on two case studies to evaluate its effectiveness. The results from the case studies show the usefulness and effectiveness of our technique.
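
The abstract does not spell out its two utility-detection metrics, so the sketch below uses a common proxy under stated assumptions: a class that many distinct classes call into, but that calls few classes back (high dynamic fan-in, low fan-out), is a likely utility candidate. The scoring formula and the ranking are illustrative, not the paper's metrics.

```python
from collections import defaultdict

def utility_scores(call_pairs):
    # call_pairs: (caller_class, callee_class) pairs observed in a trace.
    fan_in, fan_out = defaultdict(set), defaultdict(set)
    for caller, callee in call_pairs:
        if caller != callee:
            fan_in[callee].add(caller)
            fan_out[caller].add(callee)
    classes = set(fan_in) | set(fan_out)
    # Many distinct callers and few callees -> high utility score.
    return {c: len(fan_in[c]) / (1 + len(fan_out[c])) for c in classes}

trace = [("A", "Log"), ("B", "Log"), ("C", "Log"), ("A", "B")]
scores = utility_scores(trace)
print(max(scores, key=scores.get))  # 'Log': candidate utility to filter out
```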

6. Ryan, Gabriel, Burcu Cetin, Yongwhan Lim, and Suman Jana. "Accurate Data Race Prediction in the Linux Kernel through Sparse Fourier Learning." Proceedings of the ACM on Programming Languages 8, OOPSLA1 (April 29, 2024): 810–32. http://dx.doi.org/10.1145/3649840.

Abstract:
Testing for data races in the Linux OS kernel is challenging because there is an exponentially large space of system calls and thread interleavings that can potentially lead to concurrent executions with races. In this work, we introduce a new approach for modeling execution trace feasibility and apply it to Linux OS kernel race prediction. To address the fundamental scalability challenge posed by the exponentially large domain of possible execution traces, we decompose the task of predicting trace feasibility into independent prediction subtasks encoded as learning Boolean indicator functions for specific memory accesses, and apply a sparse Fourier learning approach to learning each feasibility subtask. Boolean functions that are sparse in their Fourier domain can be efficiently learned by estimating the coefficients of their Fourier expansion. Since the feasibility of each memory access depends on only a few other relevant memory accesses or system calls (e.g., relevant inter-thread communications), we observe that trace feasibility functions often have this sparsity property and can be learned efficiently. We use learned trace feasibility functions in conjunction with conservative alias analysis to implement a kernel race-testing system, HBFourier, that uses sparse Fourier learning to efficiently model feasibility when making predictions. We evaluate our approach on a recent Linux development kernel and show it finds 44 more races with 15.7% more accurate race predictions than the next best performing system in our evaluation, in addition to identifying 5 new race bugs confirmed by kernel developers.
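
The Fourier machinery behind this abstract is standard Boolean function analysis: a function f: {0,1}^n → {−1,+1} expands as f(x) = Σ_S f̂(S)·χ_S(x), where χ_S(x) = (−1)^(Σ_{i∈S} x_i) and f̂(S) = E_x[f(x)·χ_S(x)], so each coefficient can be estimated by sampling. The sketch below shows only this textbook estimator, not the HBFourier system itself.

```python
import random

def chi(S, x):
    # Character function of subset S on the bit-vector x.
    return -1 if sum(x[i] for i in S) % 2 else 1

def estimate_coefficient(f, S, n, samples=20000):
    # Monte-Carlo estimate of fhat(S) = E_x[f(x) * chi_S(x)].
    total = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        total += f(x) * chi(S, x)
    return total / samples

# Example: f depends only on bits 0 and 2, so its Fourier mass
# concentrates on the single subset S = {0, 2}.
f = lambda x: chi({0, 2}, x)
print(round(estimate_coefficient(f, {0, 2}, n=5), 2))  # ~1.0
print(round(estimate_coefficient(f, {1}, n=5), 2))     # ~0.0
```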

7. Ma, Ming Yang, Yi Qiang Wang, Wei Luo, Er Hu Zhang, Chao Fu, and Li Xue Wang. "Fault Localization of CNC Software Based on Searching in Divided Execution Trace." Applied Mechanics and Materials 101-102 (September 2011): 876–79. http://dx.doi.org/10.4028/www.scientific.net/amm.101-102.876.

Abstract:
How to locate faults based on the functions of a CNC system is a valuable research subject. To solve this problem, the method of searching in a divided execution trace was introduced for the fault localization of CNC systems. This method divides the execution trace into two segments and searches the statements iteratively. At the same time, fuzzy arithmetic is used to calculate the suspiciousness of the statements executed in these traces. After an integrated analysis, the faulty statements of the program can be located. The experimental results indicate that this method is effective in locating faults in CNC system software.
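
The core divide-and-search idea can be sketched as a bisection over the failing trace: repeatedly split the remaining segment in two and keep the half that a failure oracle still blames. The oracle and the replay mechanism below are illustrative placeholders; the paper additionally weights statements with fuzzy-arithmetic suspiciousness scores, which are not reproduced here.

```python
def localize(trace, still_fails):
    # Narrow a failing trace down to a single suspicious statement
    # by repeatedly bisecting the remaining segment.
    lo, hi = 0, len(trace)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if still_fails(trace[lo:mid]):  # fault is in the first half
            hi = mid
        else:                           # otherwise it is in the second half
            lo = mid
    return trace[lo:hi]

# Toy example: the fault is statement "s7"; the hypothetical oracle
# flags any segment containing it.
trace = [f"s{i}" for i in range(16)]
print(localize(trace, lambda segment: "s7" in segment))  # ['s7']
```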

8. Cornelissen, Bas, Andy Zaidman, Danny Holten, Leon Moonen, Arie van Deursen, and Jarke J. van Wijk. "Execution trace analysis through massive sequence and circular bundle views." Journal of Systems and Software 81, no. 12 (December 2008): 2252–68. http://dx.doi.org/10.1016/j.jss.2008.02.068.

9. Gamino del Río, Iván, Agustín Martínez Hellín, Óscar R. Polo, Miguel Jiménez Arribas, Pablo Parra, Antonio da Silva, Jonatan Sánchez, and Sebastián Sánchez. "A RISC-V Processor Design for Transparent Tracing." Electronics 9, no. 11 (November 7, 2020): 1873. http://dx.doi.org/10.3390/electronics9111873.

Abstract:
Code instrumentation enables the observability of an embedded software system during its execution. One usage example of code instrumentation is the estimation of “worst-case execution time” using hybrid analysis. This analysis combines static code analysis with measurements of the execution time on the deployment platform. Static analysis of the source code determines where to insert the tracing instructions, so that, later, the execution time can be captured using a logic analyser. The main drawback of this technique is the overhead introduced by the execution of the trace instructions. This paper proposes a modification of the architecture of a RISC pipelined processor that eliminates the execution time overhead introduced by the code instrumentation. In this way, it allows the tracing to be non-intrusive, since the sequence and execution times of the program under analysis are not modified by the introduction of traces. As a use case of the proposed solution, a processor based on the RISC-V architecture was implemented using the VHDL language. The processor, synthesized on an FPGA, was used to execute and evaluate a set of examples of instrumented code generated by a “worst-case execution time” estimation tool. The results validate that the proposed architecture executes the instrumented code without overhead.

10. Kabamba, Herve M., Matthew Khouzam, and Michel R. Dagenais. "Vnode: Low-Overhead Transparent Tracing of Node.js-Based Microservice Architectures." Future Internet 16, no. 1 (December 29, 2023): 13. http://dx.doi.org/10.3390/fi16010013.

Abstract:
Tracing serves as a key method for evaluating the performance of microservices-based architectures, which are renowned for their scalability, resource efficiency, and high availability. Despite their advantages, these architectures often pose unique debugging challenges that necessitate trade-offs, including the burden of instrumentation overhead. With Node.js emerging as a leading development environment recognized for its rapidly growing ecosystem, there is a pressing need for innovative performance debugging approaches that reduce the telemetry data collection efforts and the overhead incurred by the environment’s instrumentation. In response, we introduce a new approach designed for transparent tracing and performance debugging of microservices in cloud settings. This approach is centered around our newly developed Internal Transparent Tracing and Context Reconstruction (ITTCR) technique. ITTCR is adept at correlating internal metrics from various distributed trace files to reconstruct the intricate execution contexts of microservices operating in a Node.js environment. Our method achieves transparency by directly instrumenting the Node.js virtual machine, enabling the collection and analysis of trace events in a transparent manner. This process facilitates the creation of visualization tools, enhancing the understanding and analysis of microservice performance in cloud environments. Compared to other methods, our approach incurs an overhead of approximately 5% on the system for the trace collection infrastructure while exhibiting minimal utilization of system resources during analysis execution. Experiments demonstrate that our technique scales well with very large trace files containing huge numbers of events and performs analyses in very acceptable timeframes.

Dissertations / Theses on the topic "Execution trace analysis":

1. Zhou, Yang. "Execution Trace Visualization for Java Pathfinder using Trace Compass." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286313.

Abstract:
Multi-threading is commonly applied in modern computer programs, bringing many conveniences but also causing concurrency issues. Among the various error-debugging tools, Java Pathfinder (JPF) can detect latent errors in multithreaded Java programs through model checking. However, the text-based format of the output trace is hard to read, and previous attempts at visualizing JPF traces have shown limitations. For long-term development, a popular trace analysis platform, Trace Compass (TC), is extended to adapt to JPF traces. In this thesis, the development of JPF and TC makes it possible to analyze JPF traces in TC with a user interface that includes visual diagrams. The development resolves the conceptual differences between the tools and successfully visualizes important trace data. The implementation can help provide a generic approach for analyzing JPF traces with visualization.

2. Reger, Giles Matthew. "Automata based monitoring and mining of execution traces." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/automata-based-monitoring-and-mining-of-execution-traces(08eb0a62-53a3-4171-b4d2-36bfe450b9a7).html.

Abstract:
This thesis contributes work to the fields of runtime monitoring and specification mining. It develops a formalism for specifying patterns of behaviour in execution traces and defines techniques for checking these patterns in, and extracting patterns from, traces. These techniques represent an extension in the expressiveness of properties that can be efficiently and effectively monitored and mined. The behaviour of a computer system is considered in terms of the actions it performs, captured in execution traces. Patterns of behaviour, formally defined in trace specifications, denote the traces that the system should (or should not) exhibit. The main task this work considers is that of checking that the system conforms to the specification i.e. is correct. Additionally, trace specifications can be used to document behaviour to aid maintenance and development. However, formal specifications are often missing or incomplete, hence the mining activity. Previous work in the field of runtime monitoring (checking execution traces) has tended to either focus on efficiency or expressiveness, with different approaches making different trade-offs. This work considers both, achieving the expressiveness of the most expressive existing tools whilst remaining competitive with the most efficient. These elements of expressiveness and efficiency depend on the specification formalism used. Therefore, we introduce quantified event automata for describing patterns of behaviour in execution traces and then develop a range of efficient monitoring algorithms. To monitor execution traces we need a formal description of expected behaviour. However, these are often difficult to write - especially as there is often a lack of understanding of actual behaviour. The field of specification mining aims to explain the behaviour present in execution traces by extracting specifications that conform to those traces. Previous work in this area has primarily been limited to simple specifications that do not consider data. By leveraging the quantified event automata formalism, and its efficient trace checking procedures, we introduce a generate-and-check style mining framework capable of accurately extracting complex specifications. This thesis, therefore, makes separate significant contributions to the fields of runtime monitoring and specification mining. This work generalises and extends existing techniques in runtime monitoring, enabling future research to better understand the interaction between expressiveness and efficiency. This work combines and extends previous approaches to specification mining, increasing the expressiveness of specifications that can be mined.

3. Zoor, Maysam. "Latency verification in execution traces of HW/SW partitioning model." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT037.

Abstract:
While many research works aim at defining new (formal) verification techniques to check for requirements in a model, understanding the root cause of a requirement violation is still an open issue for complex platforms built around software and hardware components. For instance, is the violation of a latency requirement due to unfavorable real-time scheduling, to contentions on buses, or to the characteristics of functional algorithms or hardware components? This thesis introduces a Precise Latency ANalysis approach called PLAN. PLAN takes as input an instance of a HW/SW partitioning model, an execution trace, and a time constraint expressed in the following format: the latency between operator A and operator B should be less than a maximum latency value. First, PLAN checks whether the latency requirement is satisfied. If not, the main interest of PLAN is to provide the root cause of the non-satisfaction by classifying execution transactions according to their impact on latency: obligatory transaction, transaction inducing a contention, transaction having no impact, etc. A first version of PLAN assumes an execution for which there is a unique execution of operator A and a unique execution of operator B. A second version of PLAN can compute, for each executed operator A, the corresponding operator B. For this, our approach relies on tainting techniques. The thesis formalizes the two versions of PLAN and illustrates them with toy examples. Then, we show how PLAN was integrated into a Model-Driven Framework (TTool). The two versions of PLAN are illustrated with two case studies taken from the H2020 AQUAS project. In particular, we show how tainting can efficiently handle multiple and concurrent occurrences of the same operator.

4. Emteu Tchagou, Serge Vladimir. "Réduction à la volée du volume des traces d'exécution pour l'analyse d'applications multimédia de systèmes embarqués." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM051/document.

Abstract:
The consumer electronics market is dominated by embedded systems due to their ever-increasing processing power and the large number of functionalities they offer. To provide such features, the architectures of embedded systems have increased in complexity: they rely on several heterogeneous processing units and allow concurrent task execution. This complexity degrades the programmability of embedded system architectures and makes application execution difficult to understand on such systems. The most widely used approach for analyzing application execution on embedded systems consists in capturing execution traces (sequences of events, such as system call invocations or context switches, generated during application execution). This approach is used in application testing, debugging, or profiling. However, in some use cases, the generated execution traces can be very large, up to several hundreds of gigabytes. Examples include endurance tests, which consist in tracing the execution of an application on an embedded system over long periods, from several hours to several days. Current tools and methods for analyzing execution traces are not designed to handle such amounts of data. We propose an approach for monitoring an application execution by analyzing traces on the fly in order to reduce the volume of the recorded trace. Our approach is based on the characteristics of multimedia applications, which contribute the most to the success of popular devices such as set-top boxes or smartphones. It consists in automatically identifying the suspicious periods of an application execution in order to record only the parts of the trace that correspond to these periods. The proposed approach consists of two steps: a learning step, which discovers the regular behaviors of an application from its execution trace, and an anomaly detection step, which identifies behaviors deviating from the regular ones. The many experiments, performed on synthetic and real-life datasets, show that our approach reduces the trace size by an order of magnitude while maintaining good performance in detecting suspicious behaviors.

5. Hamou-Lhadj, Abdelwahab. "Techniques to simplify the analysis of execution traces for program comprehension." Thesis, University of Ottawa (Canada), 2006. http://hdl.handle.net/10393/29296.

Abstract:
Understanding a large execution trace is not an easy task due to the size and complexity of typical traces. In this thesis, we present various techniques that tackle this problem. Firstly, we present a set of metrics for measuring various properties of an execution trace in order to assess the work required for understanding its content. We show the results of applying these metrics to thirty traces generated from three different software systems. We discuss how these metrics can be supported by tools to facilitate the exploration of traces based on their complexity. Secondly, we present a novel technique for manipulating traces called trace summarization, which consists of taking a trace as input and returning a summary of its main content as output. Trace summaries can be used to enable top-down analysis of traces as well as the recovery of the system's behavioural models. In this thesis, we present a trace summarization algorithm that is based on the successive filtering of implementation details from traces. An analysis of the concept of implementation details, such as utilities, is also presented. Thirdly, we have developed a scalable exchange format called the Compact Trace Format (CTF) in order to enable the sharing and reuse of traces. The design of CTF satisfies well-known requirements for a standard exchange format. Finally, this thesis includes a survey of eight trace analysis tools, with a study of the advantages and limitations of the techniques supported by these tools. The approaches presented in this thesis have been applied to real software systems. The obtained results demonstrate the effectiveness and usefulness of our techniques.

6. Rose, Annica Elizabeth. "An Analysis of Investor Trading Behaviour and Its Impact on Trade Execution, Market Quality and Stock Returns." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9897.

Abstract:
This dissertation examines the impact of investor trading behaviour upon intra-day prices, market quality and cross-sectional stock returns across different investor categories over a period of more than 10 years that includes the global financial crisis. Chapter 2 examines investors' behaviour in choosing an order execution platform in a segmented market and its impact upon the market quality of the ASX. It finds that most uninformed liquidity traders choose the upstairs market. Fleeting orders and related trades are mostly used by uninformed liquidity traders and have a generally positive impact on the market quality of the ASX. Chapter 3 investigates the association between investor behaviour and cross-sectional stock returns for different investor categories on the OMXH. It finds that in the short term high signed small trade turnover (SSTT) stocks outperform low SSTT stocks, in the medium term the difference in performance is insignificant, and in the long term high SSTT stocks underperform low SSTT stocks. Chapter 4 discusses the outperformance of high SSTT stocks in the short term and determines whether the outperformance is best understood in the context of investor behaviour or informed trading. It concludes that the trading of high SSTT stocks is associated with lower PPE and contains less price-sensitive information; its outperformance is therefore better explained by trading behaviour than by informed trading. Chapter 5 examines the association between SSTT and the stock returns of the SSTT portfolios formed during the financial crisis period, and compares the results with the pre- and post-crisis periods. It confirms that this association differs across the three periods due to changes in investor behaviour. Differences in the association exist between investor categories during the three different periods.

7. Vigouroux, Xavier. "Analyse distribuée de traces d'exécution de programmes parallèles." Lyon, École normale supérieure (sciences), 1996. http://www.theses.fr/1996ENSL0016.

Abstract:
Monitoring consists in generating trace information during the execution of a parallel program in order to detect performance problems. The quantity of information generated by very large parallel machines makes classical analysis tools unusable. This thesis solves this problem by distributing the trace information over several files stored on several sites, the files being readable in parallel. The manipulation of these files to obtain consistent information is the basis of a client-server software system through which clients request already-filtered information about an execution. This client-server architecture is extensible (users can create their own clients) and modular. We have, moreover, already created several novel clients: a hierarchical client, a sound-based client, automatic problem detection, a filtering interface to classical tools, and the integration of a 3D tool.

8. Taqi, Alawi. "A qualitative analysis of the current and future leadership development needs of third-line leaders in the oil and gas sector in Kuwait." Thesis, University of Stirling, 2016. http://hdl.handle.net/1893/24788.

Abstract:
Whilst the topic of leadership has been widely studied it remains little understood, particularly at the first-level line of leadership, especially as it relates to developing countries such as Kuwait. This study critically analyses and presents the needs, skills and capabilities of frontline leaders working in the Kuwait’s Oil and Gas Sector companies. It also examines how such needs and competencies can be developed so as to make these leaders more effective in leading functional units (teams) and to improve organisational performance overall. The study produces a frontline leadership needs and skills development framework that contributes to a better understanding of leadership in a Middle Eastern country (Kuwait), taking into account important contextual factors that influence leadership. Influenced by a social constructivist philosophy and based on qualitative evidence gathered from 42 Team Leaders, the essential leadership needs neglected by previous literature (and possibly lacking in Kuwait) were: business knowledge, technical skills, leadership and managerial skills, communication skills, decision-making skills and change management skills. These leadership needs reflected what the third line leaders understood and personally believed to be essential leadership dimensions for them to be effective and to competently undertake their work. These leadership needs constituted the foundation for their present and future leadership development in order to enhance their leadership capabilities. However, no single methodology was identified as a ‘one size fits all’ solution to meeting the development needs of the Team Leaders. Nevertheless, on the job-training was considered to be the most effective approach to develop these skills and capabilities. It is recommended that top management, and in particular human resources departments within the Oil and Gas Sector companies should continuously identify the needs of third-line leaders and focus on developing skills and competencies considered to be lacking and the most important by these frontline leaders, rather than offering a raft of seemingly unconnected development activities.

9. Mallangi, Siva Sai Reddy. "Low-Power Policies Based on DVFS for the MUSEIC v2 System-on-Chip." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229443.

Abstract:
Multifunctional health-monitoring wearable devices are quite prominent these days. Usually these devices are battery-operated and are consequently limited by their battery life (from a few hours to a few weeks, depending on the application). Of late, it was realized that these devices, which are currently operated at a fixed voltage and frequency, are capable of operating at multiple voltages and frequencies. By switching these voltages and frequencies to lower values based upon power requirements, these devices can achieve tremendous benefits in the form of energy savings. Dynamic Voltage and Frequency Scaling (DVFS) techniques have proven to be handy in this situation for an efficient trade-off between energy and timely behavior. Within imec, wearable devices make use of the indigenously developed MUSEIC v2 (Multi Sensor Integrated circuit version 2.0). This system is optimized for the efficient and accurate collection, processing, and transfer of data from multiple (health) sensors. MUSEIC v2 has limited means of controlling the voltage and frequency dynamically. In this thesis we explore how traditional DVFS techniques can be applied to the MUSEIC v2. Experiments were conducted to find the optimum power modes for efficient operation and to scale the supply voltage and frequency up and down. Considering the overhead caused when switching voltage and frequency, a transition analysis was also done. Real-time and non-real-time benchmarks were implemented based on these techniques and their performance results were obtained and analyzed. In this process, several state-of-the-art scheduling algorithms and scaling techniques were reviewed to identify a suitable technique. Using our proposed scaling technique implementation, we achieved an 86.95% power reduction on average, in contrast to the conventional way of operating the MUSEIC v2 chip's processor at a fixed voltage and frequency. Techniques that include light sleep and deep sleep modes were also studied and implemented, which tested the system's capability of accommodating Dynamic Power Management (DPM) techniques that can achieve greater benefits. A novel approach for implementing the deep sleep mechanism was also proposed; it can obtain up to 71.54% power savings compared to a traditional way of executing deep sleep mode.

10. "Supporting Source Code Feature Analysis Using Execution Trace Mining." Thesis, 2013. http://hdl.handle.net/10388/ETD-2013-10-1266.

Abstract:
Software maintenance is a significant phase of the software life-cycle. Once a system is developed, the main focus shifts to maintenance to keep the system up to date. A system may be changed for various reasons, such as fulfilling customer requirements, fixing bugs, or optimizing existing code. Code needs to be studied and understood before any modification is made to it. Understanding code is a time-intensive and often complicated part of software maintenance that is supported by documentation and various tools such as profilers, debuggers and source code analysis techniques. However, most of these tools fail to assist in locating the portions of the code that implement the functionality the software developer is focusing on. Mining execution traces can help developers identify the parts of the source code specific to the functionality of interest and, at the same time, help them understand the behaviour of the code. We propose a use-driven hybrid framework of static and dynamic analyses to mine and manage execution traces to support software developers in understanding how the system's functionality is implemented, through feature analysis. We express a system's use as a set of tests. In our approach, we develop a set of uses that represents how a system is used or how a user uses some specific functionality. Each use set describes a user's interaction with the system. To manage large and complex traces, we organize them by system use and segment them by user-interface events. The segmented traces are also clustered based on internal and external method types. The clusters are further categorized into groups based on application programming interfaces and active clones. To further support comprehension, we propose a taxonomy of metrics that are used to quantify the trace. To validate the framework, we built a tool called TrAM that implements trace mining and provides visualization features. It can quantify trace method information, mine similar code fragments called active clones, cluster methods based on types, categorize them into groups, and quantify their behavioural aspects using a set of metrics. The tool also lets users visualize the design and implementation of a system using images, filtering, grouping, events and system use, and presents them with values calculated using trace, group, clone and method metrics. We also conducted a case study on five different subject systems using the tool to determine the dynamic properties of source code clones at runtime and answered three research questions using our findings. We compared our tool with trace mining tools and profilers in terms of features and scenarios. Finally, we evaluated TrAM by conducting a user study on its effectiveness, usability and information management.

Books on the topic "Execution trace analysis":

1. John T. Boyd Company. Executive summary, independent analysis, 21 closure review collieries British Coal Corporation United Kingdom. London: HMSO, 1993.

2. Aborisade, Femi. Nigeria: Freedom of association and the Trade Unions Act: a critical analysis: includes appraisal of the Executive Bill to Amend Trade Unions Act, 2004. Ibadan: Centre for Labour Studies (CLS), 2004.

3. Ontario. Dispute settlement mechanisms: An analysis of the dispute settlement provisions of the Canada-U.S. Free Trade Agreement, preliminary transcript: executive summary, November 4, 1987. [Toronto, Ont.]: Govt. of Ontario, 1987.

4. Spectrum Strategy Consultants, and Great Britain Department of Trade and Industry, eds. Development of the information society: An international analysis: executive summary: based on a report by Spectrum Strategy Consultants for the Department of Trade and Industry. London: Department of Trade and Industry, 1996.

5. Goodman, Seymour. Executive briefing: An examination of high-performance computing export control policy in the 1990s. Los Alamitos, Calif.: IEEE Computer Society Press, 1996.

6. United States General Accounting Office, ed. Financial management: Analysis of DOD's first Biennial Financial Management Improvement Plan: report to Congressional committees. Washington, D.C.: The Office, 1999.

7. General Accounting Office. Financial management: Analysis of operating cash balance of the Defense Logistics Agency's stock fund: report to the chairman, Subcommittee on Defense, House Committee on Appropriations, House of Representatives. Washington, D.C.: The Office, 1990.

8. Shadlen, Kenneth C. Coalitional Clash, Export Mobilization, and Executive Agency. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780199593903.003.0005.

Abstract:
This chapter analyzes over-compliance in Brazil's introduction of pharmaceutical patents in the 1990s. Extensive legislative deliberation and societal mobilization delayed and diluted this outcome, but could not prevent it. Brazil's national pharmaceutical sector was able to tap into a network of social movements around the environmental and ethical dimensions of patenting to resist over-compliance. Yet, ultimately, the Executive secured over-compliance by using the country's vulnerability to trade sanctions to mobilize exporters in support of this campaign. Comparative perspective reveals the conditional importance of external pressures and Executive preferences. Like Argentina, Brazil was subject to threats of trade sanctions and considerable intervention by the United States, and by the mid-1990s both countries had Presidents who were committed to satisfying these external demands. What sets Brazil apart, however, was a different social structure that allowed the Executive and its societal allies to use these external pressures to build a broad coalition for over-compliance.

9. Moseley, Mason W. Uneven Democracy and Contentious Politics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780190694005.003.0007.

Abstract:
Building on the previous chapter, this chapter analyzes variation in protest activity across Argentine provinces using statistical analysis. Drawing on two sources of protest events data, survey data, and an inventive method for measuring subnational democracy introduced by Gervasoni (2010), I trace how characteristics of subnational democratic institutions related to electoral competition and executive dominance produce different protest outcomes over the past twenty years. Departing from prior studies of protest in Latin America, I focus on the differential effects of subnational democracy on distinct protest repertoires. That is, might certain institutional characteristics of provinces spur aggressive modes of contention but diminish the incidence of peaceful protests, and vice versa? In conclusion, this chapter reveals that even in a protest state like Argentina, significant subnational variation in terms of democratic quality can produce stark variation in both the prevalence and type of contentious politics.

10. Eizenstat, Stuart E., and Marney L. Cheek. Executive Reports: Legal Analysis of the Bipartisan Trade Promotion Authority Act of 2002 - The Over-Arching Issues You Need to Know (Executive Reports). Aspatore Books, 2006.


Book chapters on the topic "Execution trace analysis":

1. Khoury, Raphaël, Sylvain Hallé, and Omar Waldmann. "Execution Trace Analysis Using LTL-FO+." In Leveraging Applications of Formal Methods, Verification and Validation: Discussion, Dissemination, Applications, 356–62. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-47169-3_26.

2. Huang, Zunchen, and Chao Wang. "Symbolic Predictive Cache Analysis for Out-of-Order Execution." In Fundamental Approaches to Software Engineering, 163–83. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-99429-7_10.

Abstract:
We propose a trace-based symbolic method for analyzing cache side channels of a program under a CPU-level optimization called out-of-order execution (OOE). The method is predictive in that it takes the in-order execution trace as input and then analyzes all possible out-of-order executions of the same set of instructions to check if any of them leaks sensitive information of the program. The method has two important properties. The first one is accurately analyzing cache behaviors of the program execution under OOE, which is largely overlooked by existing methods for side-channel verification. The second one is efficiently analyzing the cache behaviors using an SMT solver based symbolic technique, to avoid explicitly enumerating a large number of out-of-order executions. Our experimental evaluation on C programs that implement cryptographic algorithms shows that the symbolic method is effective in detecting OOE-related leaks and, at the same time, is significantly more scalable than explicit enumeration.

3. Al Haider, Newres, Benoit Gaudin, and John Murphy. "Execution Trace Exploration and Analysis Using Ontologies." In Runtime Verification, 412–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29860-8_33.

4. Lima, Leonardo, Andrei Herasimau, Martin Raszyk, Dmitriy Traytel, and Simon Yuan. "Explainable Online Monitoring of Metric Temporal Logic." In Tools and Algorithms for the Construction and Analysis of Systems, 473–91. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30820-8_28.

Abstract:
Runtime monitors analyze system execution traces for policy compliance. Monitors for propositional specification languages, such as metric temporal logic (MTL), produce Boolean verdicts denoting whether the policy is satisfied or violated at a given point in the trace. Given a sufficiently complex policy, it can be difficult for the monitor's user to understand how the monitor arrived at its verdict. We develop an MTL monitor that outputs verdicts capturing why the policy was satisfied or violated. Our verdicts are proof trees in a sound and complete proof system that we design. We demonstrate that such verdicts can serve as explanations for end users by augmenting our monitor with a graphical interface for the interactive exploration of proof trees. As a second application, our verdicts serve as certificates in a formally verified checker we develop using the Isabelle proof assistant.
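
For orientation, the sketch below shows what a plain MTL monitor computes: a Boolean verdict over a finite timed trace (the chapter's contribution is precisely the richer output, a proof tree explaining that verdict, which is not reproduced here). Formulas are nested tuples, and the finite-trace semantics chosen is one common variant among several.

```python
def holds(phi, trace, i):
    # trace: list of (timestamp, set-of-atoms); i: current position.
    op = phi[0]
    if op == "ap":
        return phi[1] in trace[i][1]
    if op == "neg":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "until":  # ("until", f, g, lo, hi): g within [lo, hi], f holds before
        _, f, g, lo, hi = phi
        for j in range(i, len(trace)):
            d = trace[j][0] - trace[i][0]
            if d > hi:
                break
            if d >= lo and holds(g, trace, j) and \
                    all(holds(f, trace, k) for k in range(i, j)):
                return True
        return False
    raise ValueError(f"unknown operator: {op}")

# "req holds until an ack arrives within 3 time units", checked at position 0:
phi = ("until", ("ap", "req"), ("ap", "ack"), 0, 3)
trace = [(0, {"req"}), (1, {"req"}), (2, {"ack"})]
print(holds(phi, trace, 0))  # True
```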

5. Beutner, Raven, and Bernd Finkbeiner. "AutoHyper: Explicit-State Model Checking for HyperLTL." In Tools and Algorithms for the Construction and Analysis of Systems, 145–63. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30823-9_8.

Abstract:
HyperLTL is a temporal logic that can express hyperproperties, i.e., properties that relate multiple execution traces of a system. Such properties are becoming increasingly important and naturally occur, e.g., in information-flow control, robustness, mutation testing, path planning, and causality checking. Thus far, complete model checking tools for HyperLTL have been limited to alternation-free formulas, i.e., formulas that use only universal or only existential trace quantification. Properties involving quantifier alternations could only be handled in an incomplete way, i.e., the verification might fail even though the property holds. In this paper, we present AutoHyper, an explicit-state automata-based model checker that supports full HyperLTL and is complete for properties with arbitrary quantifier alternations. We show that language inclusion checks can be integrated into HyperLTL verification, which allows AutoHyper to benefit from a range of existing inclusion-checking tools. We evaluate AutoHyper on a broad set of benchmarks drawn from different areas in the literature and compare it with existing (incomplete) methods for HyperLTL verification.

6. Wang, Wubing, Guoxing Chen, Yueqiang Cheng, Yinqian Zhang, and Zhiqiang Lin. "Specularizer: Detecting Speculative Execution Attacks via Performance Tracing." In Detection of Intrusions and Malware, and Vulnerability Assessment, 151–72. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-80825-9_8.

Abstract:
This paper presents Specularizer, a framework for uncovering speculative execution attacks using performance tracing features available in commodity processors. It is motivated by the practical difficulty of eradicating such vulnerabilities in the design of CPU hardware and operating systems, and by the principle of defense-in-depth. The key idea of Specularizer is the use of Hardware Performance Counters and Processor Trace to perform lightweight monitoring of production applications, and the use of machine learning techniques to identify the occurrence of attacks during offline forensic analysis. Different from prior works that use performance counters to detect side-channel attacks, Specularizer monitors the triggers of the critical paths of speculative execution attacks, thus making the detection mechanisms robust to different choices of side channels used in the attacks. To evaluate Specularizer, we model all known types of exception-based and misprediction-based speculative execution attacks and automatically generate thousands of attack variants. Experimental results show that Specularizer yields superior detection accuracy and that its online tracing incurs reasonable overhead.

7. Beutner, Raven, and Bernd Finkbeiner. "Software Verification of Hyperproperties Beyond k-Safety." In Computer Aided Verification, 341–62. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13185-1_17.

Abstract:
Temporal hyperproperties are system properties that relate multiple execution traces. For (finite-state) hardware, temporal hyperproperties are supported by model checking algorithms, and tools for general temporal logics like HyperLTL exist. For (infinite-state) software, the analysis of temporal hyperproperties has, so far, been limited to k-safety properties, i.e., properties that stipulate the absence of a bad interaction between any k traces. In this paper, we present an automated method for the verification of ∀^k∃^l-safety properties in infinite-state systems. A ∀^k∃^l-safety property stipulates that for any k traces, there exist l traces such that the resulting k + l traces do not interact badly. This combination of universal and existential quantification enables us to express many properties beyond k-safety, including, for example, generalized non-interference or program refinement. Our method is based on a strategy-based instantiation of existential trace quantification combined with a program reduction, both in the context of a fixed predicate abstraction. Notably, our framework allows for mutual dependence of strategy and reduction.
8

Beutner, Raven. "Automated Software Verification of Hyperliveness." In Tools and Algorithms for the Construction and Analysis of Systems, 196–216. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57249-4_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Hyperproperties relate multiple executions of a program and are commonly used to specify security and information-flow policies. Most existing work has focused on the verification of k-safety properties, i.e., properties stating that all k-tuples of execution traces satisfy a given property. In this paper, we study the automated verification of richer properties that combine universal and existential quantification over executions. Concretely, we consider $\forall^k\exists^l$ properties, which state that for all k executions, there exist l executions that, together, satisfy a property. This captures important non-k-safety requirements, including hyperliveness properties such as generalized non-interference, opacity, refinement, and robustness. We design an automated constraint-based algorithm for the verification of $\forall^k\exists^l$ properties. Our algorithm leverages a sound-and-complete program logic and a (parameterized) strongest-postcondition computation. We implement our algorithm in a tool and report on encouraging experimental results.
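As a toy illustration of the strongest-postcondition building block mentioned in the abstract (our sketch using the z3-solver package; the paper's computation is parameterized and lives inside its program logic), the classic rule sp(P, x := e) = ∃x₀. P[x₀/x] ∧ x = e[x₀/x] can be coded directly:

```python
# Toy strongest-postcondition computation for a single assignment over
# integer variables (our illustration). Requires the z3-solver package.
from z3 import Int, Ints, And, Exists, substitute, simplify

def sp_assign(pre, var, expr):
    """sp(P, x := e) = exists x0. P[x0/x] and x == e[x0/x]."""
    fresh = Int(var.decl().name() + "_0")   # fresh name for the old value of x
    return Exists([fresh],
                  And(substitute(pre, (var, fresh)),
                      var == substitute(expr, (var, fresh))))

x, y = Ints("x y")
post = sp_assign(x > 0, x, x + y)   # sp(x > 0, x := x + y)
print(simplify(post))
```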
9

Loose, Nils, Felix Mächtle, Florian Sieck, and Thomas Eisenbarth. "SWAT: Modular Dynamic Symbolic Execution for Java Applications using Dynamic Instrumentation (Competition Contribution)." In Tools and Algorithms for the Construction and Analysis of Systems, 399–405. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57256-2_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
SWAT is a novel dynamic symbolic execution engine for Java applications utilizing dynamic instrumentation. SWAT's unique modular design facilitates flexible communication between its symbolic explorer and executor using HTTP endpoints, thus enhancing adaptability to diverse application scenarios. The symbolic executor's ability to attach to Java applications enables efficient constraint generation and path exploration. SWAT employs JavaSMT for constraint generation and ASM for bytecode instrumentation, ensuring robust performance. SWAT's efficacy was evaluated in the Java Track of SV-COMP 2024, where it achieved fourth place.
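To see what a dynamic symbolic execution engine automates, here is a hand-rolled single step of the technique (our illustration in Python with the z3-solver package; SWAT itself instruments Java bytecode with ASM and solves constraints through JavaSMT): record the branch condition observed on a concrete run, negate it, and solve for an input that drives execution down the other path.

```python
from z3 import Int, Solver, Not, sat

def program(a):
    # Concrete program under test with one input-dependent branch.
    if a * 2 == 10:
        return "bug path"
    return "ok path"

a = Int("a")
taken = Not(a * 2 == 10)   # path condition recorded on the seed run (a = 0)

s = Solver()
s.add(Not(taken))          # flip the branch to explore the other path
if s.check() == sat:
    new_input = s.model()[a].as_long()
    print(new_input, "->", program(new_input))   # 5 -> bug path
```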
10

Schuster, Daniel, Lukas Schade, Sebastiaan J. van Zelst, and Wil M. P. van der Aalst. "Visualizing Trace Variants from Partially Ordered Event Data." In Lecture Notes in Business Information Processing, 34–46. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Executing operational processes generates event data, which contain information on the executed process activities. Process mining techniques make it possible to systematically analyze event data to gain insights that are then used to optimize processes. Visual analytics for event data are essential for the application of process mining. Visualizing unique process executions, also called trace variants (i.e., unique sequences of executed process activities), is a common technique implemented in many scientific and industrial process mining applications. Most existing visualizations assume a total order on the executed process activities, i.e., these techniques assume that process activities are atomic and were executed at a specific point in time. In reality, however, the executions of activities are not atomic: multiple timestamps are recorded for an executed process activity, e.g., a start timestamp and a complete timestamp. The execution of process activities may therefore overlap and thus cannot be represented as a total order if more than one timestamp is to be considered. In this paper, we present a visualization approach for trace variants that incorporates start and complete timestamps of activities.
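A minimal sketch (ours, not the paper's visualization code) of why two timestamps per activity force a partial order: activity instances whose time intervals overlap are incomparable, so a variant can no longer be drawn as a simple sequence.

```python
# Activities with start/complete timestamps; overlapping intervals are
# unordered, so the variant forms a partial order rather than a sequence.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    start: float
    complete: float

def precedes(a, b):
    # a strictly precedes b only if a completes before b starts.
    return a.complete < b.start

trace = [Activity("register", 0, 2),
         Activity("check credit", 1, 4),   # overlaps "register"
         Activity("ship", 5, 6)]

for a in trace:
    for b in trace:
        if a is not b and precedes(a, b):
            print(f"{a.name} -> {b.name}")
# "register" and "check credit" are incomparable; both precede "ship".
```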

Conference papers on the topic "Execution trace analysis":

1

Bohnet, Johannes, Martin Koeleman, and Juergen Doellner. "Visualizing massively pruned execution traces to facilitate trace exploration." In 2009 5th IEEE International Workshop on Visualizing Software for Understanding and Analysis (VISSOFT). IEEE, 2009. http://dx.doi.org/10.1109/vissof.2009.5336416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pekarek, Daniel, and Hanspeter Mössenböck. "trcview: Interactive Architecture Agnostic Execution Trace Analysis." In MPLR '20: 17th International Conference on Managed Programming Languages and Runtimes. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3426182.3426190.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Alouneh, Sahel, Sa'ed Abed, Bassam Jamil Mohd, and Ahmad Al-Khasawneh. "Relational database approach for execution trace analysis." In 2012 International Conference on Computer, Information and Telecommunication Systems (CITS). IEEE, 2012. http://dx.doi.org/10.1109/cits.2012.6220394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Abualese, Hasan, Putra Sumari, Thamer Al-Rousan, and Mohammad Rasmi Al-Mousa. "Utility classes detection metrics for execution trace analysis." In 2017 8th International Conference on Information Technology (ICIT). IEEE, 2017. http://dx.doi.org/10.1109/icitech.2017.8080044.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rezazadeh, Majid, Naser Ezzati-Jivan, Evan Galea, and Michel R. Dagenais. "Multi-Level Execution Trace Based Lock Contention Analysis." In 2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW). IEEE, 2020. http://dx.doi.org/10.1109/issrew51248.2020.00068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mendes, Celso L. "Performance Prediction by Trace Transformation." In Simpósio Brasileiro de Arquitetura de Computadores e Processamento de Alto Desempenho. Sociedade Brasileira de Computação, 1993. http://dx.doi.org/10.5753/sbac-pad.1993.23023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Performance stability is an essential feature for the widespread adoption of multicomputers. In this paper, we report the preliminary steps of our research in performance prediction and extrapolation. Performance tuning, guided by extrapolation, may help achieve a substantial fraction of peak performance rates across a broader range of applications while providing guidance for code porting. We introduce a methodology for assessing stability of parallel programs, based on stability of the program execution graph, using time perturbation analysis. For programs with stable behavior, we present a model for performance prediction under architecture variations, by transformation of the execution traces with parameters that reflect the differences in architecture between two systems. We illustrate the use of this transformation with an example of a parallel PDE solver executing on a multicomputer.
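The transformation the authors describe can be pictured with a back-of-the-envelope sketch (our illustration; the event kinds and ratios are hypothetical): rescale the compute and communication portions of a measured trace with parameters reflecting the architectural differences between the source and target machines.

```python
# Predict execution time on a target machine by rescaling a measured trace.
# All numbers are hypothetical placeholders.
events = [("compute", 10.0), ("send", 2.0), ("compute", 8.0), ("recv", 1.5)]

cpu_ratio = 0.5    # target CPU assumed 2x faster (hypothetical)
net_ratio = 1.25   # target network assumed 25% slower (hypothetical)

predicted = sum(d * (cpu_ratio if kind == "compute" else net_ratio)
                for kind, d in events)
print(f"predicted time on target: {predicted:.3f}s")   # 13.375s
```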
7

Li, Hongzhe, Taebeom Kim, Munkhbayar Bat-Erdene, and Heejo Lee. "Software Vulnerability Detection Using Backward Trace Analysis and Symbolic Execution." In 2013 Eighth International Conference on Availability, Reliability and Security (ARES). IEEE, 2013. http://dx.doi.org/10.1109/ares.2013.59.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zoor, Maysam, Ludovic Apvrille, and Renaud Pacalet. "Execution Trace Analysis for a Precise Understanding of Latency Violations." In 2021 ACM/IEEE 24th International Conference on Model Driven Engineering Languages and Systems (MODELS). IEEE, 2021. http://dx.doi.org/10.1109/models50736.2021.00021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Toda, Tatsuya, Takashi Kobayashi, Noritoshi Atsumi, and Kiyoshi Agusa. "Grouping Objects for Execution Trace Analysis Based on Design Patterns." In 2013 20th Asia-Pacific Software Engineering Conference (APSEC). IEEE, 2013. http://dx.doi.org/10.1109/apsec.2013.107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lu, Yue, Thomas Nolte, Iain Bate, and Liliana Cucu-Grosjean. "A trace-based statistical worst-case execution time analysis of component-based real-time embedded systems." In Factory Automation (ETFA 2011). IEEE, 2011. http://dx.doi.org/10.1109/etfa.2011.6059190.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Execution trace analysis":

1

Harkema, Marcel, Dick Quartel, Rob van der Mei, and Bart Gijsen. JPMT: A Java Performance Monitoring Tool. Centre for Telematics and Information Technology (CTIT), 2003. http://dx.doi.org/10.3990/1.5152400.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper describes our Java Performance Monitoring Toolkit (JPMT), which is developed for detailed analysis of the behavior and performance of Java applications. JPMT represents the internal execution behavior of Java applications by event traces, where each event represents the occurrence of some activity, such as thread creation, method invocation, and locking contention. JPMT supports event filtering during and after application execution. Each event is annotated with high-resolution performance attributes, e.g., the duration of locking contention and CPU time usage by method invocations. JPMT is an open toolkit; its event-trace API can be used to develop custom performance analysis applications. JPMT comes with an event-trace visualizer and a command-line event-trace query tool for scripting purposes. The instrumentation required for monitoring is added to the application transparently at run-time. Overhead is minimized by instrumenting only for events the user is interested in and by careful implementation of the instrumentation itself.
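The kind of post-hoc analysis such annotated event traces enable can be sketched generically (our Python illustration; JPMT's actual event-trace API is a Java interface): aggregate a per-thread metric, here lock-contention time, over filtered events.

```python
# Sum lock-contention durations per thread from a trace of annotated events.
# The trace records below are hypothetical.
from collections import defaultdict

trace = [("T1", "method_invocation", 12.0),   # (thread, event type, ms)
         ("T2", "lock_contention", 3.5),
         ("T1", "lock_contention", 1.2),
         ("T2", "lock_contention", 0.8)]

contention = defaultdict(float)
for thread, kind, duration in trace:
    if kind == "lock_contention":
        contention[thread] += duration
print(dict(contention))   # {'T2': 4.3, 'T1': 1.2}
```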
2

Amela, R., R. Badia, S. Böhm, R. Tosi, C. Soriano, and R. Rossi. D4.2 Profiling report of the partner’s tools, complete with performance suggestions. Scipedia, 2021. http://dx.doi.org/10.23967/exaqute.2021.2.023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This deliverable focuses on the profiling activities developed in the project with the partners' applications. To perform these profiling activities, a couple of benchmarks were defined in collaboration with WP5. The first benchmark is an embarrassingly parallel benchmark that performs a read and then multiple writes of the same object, with the objective of stressing the memory and storage systems and evaluating the overhead when these reads and writes are performed in parallel. A second benchmark is defined based on the Continuation Multi-Level Monte Carlo (C-MLMC) algorithm. While this algorithm is normally executed using multiple levels, for the profiling and performance-analysis objectives the execution of a single level was enough, since the forthcoming levels have similar performance characteristics. Additionally, while the simulation tasks can be executed as parallel (multi-threaded) tasks, in the benchmark single-threaded tasks were executed to increase the number of simulations to be scheduled and stress the scheduling engines. A set of experiments based on these two benchmarks was executed on the MareNostrum 4 supercomputer, using PyCOMPSs as the underlying programming model and dynamic scheduler of the tasks involved in the executions. While the first benchmark was executed several times in a single iteration, the second benchmark was executed in an iterative manner, with cycles of 1) execution and trace generation; 2) performance analysis; 3) improvements. This enabled several improvements in the benchmark and in the scheduler of PyCOMPSs. The initial iterations focused on the C-MLMC structure itself, refactoring the code to remove fine-grain and sequential tasks and merging them into larger-granularity tasks. The next iterations focused on improving the PyCOMPSs scheduler, removing existing bottlenecks and increasing its performance by making the scheduler a multithreaded engine. While the results can still be improved, we are satisfied with them, since the granularity of the simulations run in this evaluation step is much finer than the one that will be used for the real scenarios. The deliverable finishes with some recommendations that should be followed along the project in order to obtain good performance in the execution of the project codes.
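For readers unfamiliar with PyCOMPSs, the fine-grained, embarrassingly parallel benchmark style described here looks roughly as follows (our minimal sketch; the task name and granularity are hypothetical, and the script must be launched under the COMPSs runtime, e.g., with runcompss):

```python
# Minimal PyCOMPSs workload sketch: many small single-threaded tasks are
# submitted asynchronously to stress the dynamic scheduler.
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

@task(returns=1)
def simulate(seed):
    # Single-threaded Monte Carlo sample, kept deliberately fine-grained.
    import random
    random.seed(seed)
    return random.random()

results = [simulate(i) for i in range(1000)]   # submitted, not yet computed
results = compss_wait_on(results)              # barrier: gather all samples
print(sum(results) / len(results))
```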
3

Álvarez, Carola, Leonardo Corral, José Martínez, and César Montiel. Project Completion Report Analysis: Implications for the Portfolio. Inter-American Development Bank, March 2021. http://dx.doi.org/10.18235/0003145.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This investigation builds on the Alvarez et al. (2021) Project Completion Report (PCR) analysis, and its aim is to assess the implications of that study for the current portfolio of projects under execution at the Inter-American Development Bank (IDB). We use the sample of PCRs that reached operational closure (CO) in 2017 and 2018 to estimate the impact that the design and execution performance characteristics of projects had on the likelihood of ending as successful and/or effective. Based on the estimated coefficients, we construct risk curves to isolate the effect specific characteristics have on the likelihood of a project being classified as unsuccessful/ineffective. We then use the estimated coefficients and, using the actual values for the current portfolio of projects in execution, identify the fraction of the portfolio that is at risk of ending as unsuccessful/ineffective. According to our analysis, of the 249 projects assessed, 39 have a 50% or less chance of being successful. Thirteen (13) projects have less than a 10% chance. For about 70% of the projects analyzed, given the characteristics they exhibit, the likelihood that they end up successful has already been curtailed. The type of analysis presented here can help IDB Management identify key performance indicators to track during execution, so that it can periodically assess the level of risk it is willing to accept in terms of projects ending unsuccessful/ineffective as rated by the current PCR methodology.
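The estimate-coefficients-then-trace-risk-curves procedure can be sketched with synthetic data (our illustration only; neither the features nor the coefficients correspond to the IDB dataset):

```python
# Fit a logistic model of project success on design/execution features, then
# read off a "risk curve" by varying one feature while holding others fixed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))   # hypothetical: design score, execution delays
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

delays = np.linspace(-2, 2, 5)
grid = np.column_stack([np.zeros(5), delays])   # hold design score at its mean
for d, p in zip(delays, model.predict_proba(grid)[:, 1]):
    print(f"delay={d:+.1f}  P(success)={p:.2f}")
```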
4

WANG, Peng, Zhidong CAI, Qingying ZHAO, Wanting JIANG, Cong LIU, and Xing WANG. A Bayesian Network Meta-analysis of the Effect of Acute Exercise on Executive Function in Middle-aged and Senior People. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, December 2021. http://dx.doi.org/10.37766/inplasy2021.12.0086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Review question / Objective: To compare the intervention effects of multiple acute exercise protocols on executive function in middle-aged and senior people, and to provide a reference for precise exercise prescription. P: middle-aged and senior people; I: acute exercise; C: reading or sitting; O: executive function; S: RCT/crossover. Information sources: Searches were carried out in Chinese databases such as CNKI, Wanfang Database, VTTMS, and SinoMed, and in foreign databases such as PubMed, EMBASE, the Cochrane Library, and Web of Science. The retrieval period runs from the inception of each database to August 2021, supplemented with manual searches for gray literature and references traced back from previous systematic reviews.
5

Mueller, Bernardo, Carlos Pereira, Lee J. Alston, and Marcus André Melo. Political Institutions, Policymaking Processes and Policy Outcomes in Brazil. Inter-American Development Bank, March 2006. http://dx.doi.org/10.18235/0011295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper analyses the dynamics of policy-making among the various political institutions in Brazil. The authors find that the driving force behind policies in Brazil is the strong set of powers given to the President, though several institutions constrain and check this power, in particular the legislature, the judiciary, the public prosecutors, the auditing office, state governors and the Constitution itself. The electorate of Brazil holds the President accountable for economic growth, inflation and unemployment. At least for the past ten years, and particularly during the Lula administration, executive power has been aimed at pushing policy towards macro orthodoxy. Achieving stable macro policies required constitutional amendments as well as considerable legislation. To attain their goals, the past administrations used their property rights over pork to trade for policy changes. The rationale for members of Congress to exchange votes on policy for pork is that the electorates reward or punish members of Congress based on the degree to which pork lands in their district.
6

Tarko, Andrew P., Mario A. Romero, Vamsi Krishna Bandaru, and Cristhian Lizarazo. TScan–Stationary LiDAR for Traffic and Safety Applications: Vehicle Interpretation and Tracking. Purdue University, 2022. http://dx.doi.org/10.5703/1288284317402.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
To improve traffic performance and safety, the ability to measure traffic accurately and effectively, including motorists and other vulnerable road users, at road intersections is needed. A past study conducted by the Center for Road Safety demonstrated that it is feasible to detect and track various types of road users using a LiDAR-based system called TScan. This project aimed to progress towards a real-world implementation of TScan by building two trailer-based prototypes with full end-user documentation. The previously developed detection and tracking algorithms were modified and converted from research code to an implementational version written in the C++ programming language. Two trailer-based TScan units were built. The design of the prototype was iterated multiple times to account for component placement, ease of maintenance, etc. The expansion of the TScan system from a single-sensor unit to multiple units with multiple LiDAR sensors necessitated transforming all the measurements into a common spatial and temporal reference frame. Engineering applications for performing traffic counts, analyzing speeds at intersections, and visualizing pedestrian presence data were developed. The limitations of the existing SSAM for traffic-conflict analysis with computer simulation prompted the research team to develop and implement their own traffic-conflict detection and analysis technique that is applicable to real-world data. Efficient use of the developed system requires proper training of its end users. An INDOT-CRS collaborative process was developed and its execution planned to gradually transfer the two TScan prototypes to INDOT's full control. This period will also be an opportunity for collecting feedback from the end user and making limited modifications to the system and documentation as needed.
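The "common spatial and temporal reference frame" step can be illustrated with a small sketch (ours; the calibration values are hypothetical and TScan's implementation is in C++): each sensor's measurements are mapped into a shared site frame by a per-sensor rigid transform.

```python
# Map sensor-frame XY points into a shared site frame with a rigid transform.
import numpy as np

def to_site_frame(points, yaw_rad, translation):
    """Rotate sensor-frame points by yaw and translate into the site frame."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + translation

sensor_a = np.array([[1.0, 0.0], [2.0, 1.0]])   # hits seen by one unit
site = to_site_frame(sensor_a, np.deg2rad(90), np.array([10.0, 5.0]))
print(site)   # the same targets expressed in shared site coordinates
```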
7

Lu, Tianjun, Jian-yu Ke, Azure Fisher, Mahmoud Salari, Patricia Valladolid, and Fynnwin Prager. Should State Land in Southern California Be Allocated to Warehousing Goods or Housing People? Analyzing Transportation, Climate, and Unintended Consequences of Supply Chain Solutions. Mineta Transportation Institute, December 2023. http://dx.doi.org/10.31979/mti.2023.2231.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In response to COVID-19 pandemic supply chain issues, the State of California issued Executive Order (EO) N-19-21 to use state land to increase warehousing capacity. This highlights a land-use paradox between economic and environmental goals: adding warehouse capacity increases climate pollution and traffic congestion around the ports and warehouses, while there is a deficit of affordable housing and high homelessness rates in port-adjacent underserved communities. This study aims to inform regional policymakers and community stakeholders about these trade-offs by identifying the current and future supply of and demand for warehousing and housing in Southern California through 2040. The study uses statistical analysis and forecasting, and evaluates across numerous scenarios the environmental impact of meeting demand for both with the Community LINE Source Model. Warehousing and housing are currently projected to be in high demand across Southern California in future decades, despite short-run adjustments in the post-pandemic period of inflation and net declines in population. Using state land for warehousing creates environmental justice concerns, as the number of air pollution hotspots increases even when trucking fleets are electrified, especially when compared against low-impact affordable housing developments. However, low-income housing demand appears to be positively correlated with unemployment, suggesting that the jobs provided by warehousing development might help to ameliorate that concern.
8

Dudoit, Alain. The urgency of the first link: Canada’s supply chain at breaking point, a national security issue. CIRANO, July 2023. http://dx.doi.org/10.54932/cxwf7311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The creation of an intelligent supply chain is now an urgent national security priority that cannot be achieved without the joint mobilization of various stakeholders in Canada. It is not, however, an end in itself: the achievement of a single, competitive, sustainable, and consumer-focused domestic market should be the ultimate outcome of the national taskforce needed to collaboratively implement the recommendations of three complementary public policy reports published in 2022 on the state of the supply chain in Canada. The supply chain challenge is vast, and it will only become more complex over time. Governments in Canada must act together now, in conjunction with collaborative efforts with our allies and partners, notably the United States and the European Union, to ensure supply chain resilience in the face of accelerating current and anticipated upheavals, geopolitical conflicts and natural disasters. Québec's geostrategic position is a major asset and gives it a critical role and responsibility in implementing not only the Final Report of the National Supply Chain Task Force ("ACT"), but also the recommendations contained in the report published by the Council of Ministers Responsible for Transportation and Highway Safety (COMT) and those contained in the report of the House of Commons Standing Committee on Transport, Infrastructure and Communities published in Ottawa in November 2022, "Improving the Efficiency and Resilience of Canada's Supply Chains". The mobilizing approach towards a common data space for Canada's supply chain is inspired by Advantage St. Lawrence's forward-looking Smart Economic Corridor vision and builds on and integrates experience gained from various initiatives and programs implemented in Canada, the U.S. and Europe, as appropriate. Its initial implementation in the St. Lawrence - Great Lakes trade corridor will facilitate the subsequent access and sharing of data from across the Canadian supply chain in a reliable and secure manner. The accelerated joint development of a common data space is a game-changer not only in terms of solving critical supply chain challenges, but also in terms of the impetus it will generate in the pursuit of fundamental Canadian priorities, including the energy transition. This Bourgogne report offers a four-part synthesis:
- An overview of a background characterized by numerous consultations, strategy announcements, measures, and mixed results.
- A cross-analysis of the recommendations of three important and complementary public policy reports at the federal level, as well as the Quebec strategy, "l'Avantage Saint-Laurent".
- An analysis of the fundamental issues of mobilization capacity, execution, and under-utilization of data.
- Some operational solutions for moving into « Action, Collaboration and Transformation » (ACT) mode.
9

Dudoit, Alain. European common data spaces: a structuring initiative that is both necessary and adaptable to Canada. CIRANO, November 2023. http://dx.doi.org/10.54932/skhp9567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Faced with the acceleration of the digital economy, the governance and effective sharing of data have become fundamental issues for public policy at all levels of jurisdictions and in all areas of human activity. This paper reviews the initiatives and challenges associated with data governance, with a particular focus on the European Common Data Spaces (ECDS) and their direct relevance to the Canadian context. It explores the inherent complexity of data governance, which must reconcile sector-specificities with more horizontal governance principles. In doing so, it highlights the importance of strategic and coordinated action to maximize the social and economic benefits of data. The Burgundy Report, published by CIRANO in July 2023, calls for the creation of a common data space in the Great Lakes-St. Lawrence Strategic Trade Corridor by 2030. This proposal builds in particular on three separate policy reports published in 2022 by the National Supply Chain Task Force, the Council of Ministers Responsible for Transportation and Highway Safety (COMT) and the House of Commons Standing Committee on Transportation, Infrastructure and Communities. The findings and recommendations of these reports raise fundamental questions that are central to the critical issues of governance, organizational culture, execution capacity, public and private stakeholder engagement, and data underutilization within the Canadian government machinery strained by years of delay and exacerbated by recent disruptions related to anticipated climate disasters. The creation of a common data space is envisaged as a structuring investment in Canada's essential infrastructure for intermodal transport and the supply chain. This working paper on European Common Data Spaces (ECDS) extends the synthesis and recommendations published last July 2023 by providing an operational analysis of the transformative initiative currently underway within the European Union (EU). This major policy development stems from the 2020 European Data Strategy and seeks to establish twelve common data spaces in strategic sectors, including mobility and transport. The document is divided into three main parts. The first part provides an overview of data-related public policies in Canada and the EU between 2018 and 2023. The second part focuses on the implications and lessons learned from the impact assessment supporting the adoption of data governance legislation by the European institutions. This directive establishes a regulatory framework for the creation of common data spaces in the EU. The third section discusses the current deployment of ECDSs, highlighting key milestones and ongoing processes. The paper highlights notable similarities between the EU and Canada in the identification of data issues and the formulation of public policy objectives. It also highlights differences in optimizing data sharing between jurisdictions and stakeholders. A fundamental difference between these two strategic partners is the absence of an effective and sustained pooling of resources within the Canadian intergovernmental machinery in pursuit of common objectives in the face of major shared challenges such as data accessibility and sharing. This situation is in stark contrast to the EU's groundbreaking deployment of the ECDS in pursuit of identical objectives of positioning itself as a world leader in the data economy. 
This lack of consideration, let alone joint action, by Canada's intergovernmental machinery to implement a common data strategy in Canada is damaging. To be effective, the Canadian response must be agile, results-oriented, and interoperable across jurisdictions. The rigorous management, responsible use, and organized sharing of data within and between jurisdictions are crucial to addressing the complex challenges and major risks facing Canada. Neither the federal nor provincial governments are currently well positioned to treat data as a shared strategic asset. The resolution of regulatory, legal, and technical obstacles to data exchange between jurisdictions and organizations cannot be achieved without the creation of a common data space. This can only be achieved by combining the necessary tools and infrastructures, and by addressing issues of trust, for example by means of common rules drawn up for this purpose. “The barriers that prevent the establishment of robust health data sharing systems are not technical, but rather fundamentally political and cultural.”
10

Financial Stability Report - September 2015. Banco de la República, August 2021. http://dx.doi.org/10.32468/rept-estab-fin.sem2.eng-2015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
From this edition, the Financial Stability Report will have fewer pages and some changes in its structure. The purpose of this change is to present the most relevant facts of the financial system and their implications for financial stability. This allows the analysis to be presented more concisely and clearly, as it will focus on describing the evolution of the variables that have the greatest impact on the performance of the financial system, and then on estimating the effect of a possible materialization of these risks on the financial health of the institutions. The changing dynamics of the risks faced by the financial system imply that the content of the Report adopts this new structure; therefore, some analyses and series that were regularly included will not necessarily be in each issue. However, the statistical annex that accompanies the publication of the Report will continue to present the series that were traditionally included, regardless of whether or not they are part of the content of the Report. In this way we expect to contribute more comprehensively to the study and analysis of the stability of the Colombian financial system.

Executive Summary: During the first half of 2015, the main advanced economies showed a slow recovery in their growth, while emerging economies continued their slowdown trend. Domestic demand in the United States allowed for stabilization of its average growth for the first half of the year, while other developed economies such as the United Kingdom, the euro zone, and Japan showed a more gradual recovery. On the other hand, the Chinese economy exhibited the lowest growth rate in five years, which has resulted in lower global dynamism. This has led to a fall in the prices of the main export goods of some Latin American economies, especially oil, whose price has also responded to a larger global supply. The decrease in the terms of trade of the Latin American economies has had an impact on national income, domestic demand, and growth. This scenario has been reflected in increases in sovereign risk spreads, devaluations of stock indices, and depreciation of the exchange rates of most countries in the region. For Colombia, the fall in oil prices has also led to a decline in the terms of trade, resulting in pressure on the dynamics of national income. Additionally, the lower demand for exports helped to widen the current account deficit. This affected the prospects and economic growth of the country during the first half of 2015. This economic context could have an impact on the payment capacity of debtors and on the valuation of investments, affecting the soundness of the financial system. However, the results of the analysis featured in this edition of the Report show that, facing an adverse scenario, the vulnerability of the financial system in terms of solvency and liquidity is low. The analysis of the current situation of credit institutions (CI) shows that growth of the gross loan portfolio remained relatively stable, as did the loan portfolio quality indicators, except for microcredit, which showed a deterioration. Regarding liabilities, traditional sources of funding have lost market share versus non-traditional ones (bonds, money market operations and the interbank market), but still represent more than 70%. Moreover, the solvency indicator remained relatively stable.
As for non-banking financial institutions (NBFI), the slowdown observed during the first six months of 2015 in the real annual growth of total assets, in both the proprietary and third-party positions, stands out. The analysis of the main debtors of the financial system shows that the indebtedness of the private corporate sector has increased in the last year, mostly driven by an increase in the debt balance with domestic and foreign financial institutions. However, the increase in this latter source of funding has been influenced by the depreciation of the Colombian peso vis-à-vis the US dollar since mid-2014. The financial indicators reflected a favorable behavior with respect to the historical average, except for the profitability indicators; although they were below the average, they have shown improvement in the last year. By economic sector, the firms focused on farming, mining and transportation activities recorded the highest levels of risk perception by credit institutions, and the largest increases in default levels with respect to those observed in December 2014. Meanwhile, households have shown an increase in their financial burden, mainly due to growth in the consumer loan portfolio, in which the credit card, payroll-deductible loan, revolving and vehicle loan modalities have reported the greatest increases in risk indicators. Among the investments that could be affected by devaluation in the portfolios of credit institutions and non-banking financial institutions (NBFI), public debt securities, variable-yield securities and domestic private debt securities hold the largest share. The value of these portfolios fell between February and August 2015, driven by the devaluation in the market for these investments throughout the year. Furthermore, the analysis of the liquidity risk indicator (LRI) shows that all intermediaries maintained adequate levels and exhibited stable behavior. Likewise, the fragility analysis of the financial system associated with the increased use of non-traditional funding sources does not indicate greater exposure to liquidity risk. Stress tests assess the impact of the possible joint materialization of credit and market risks, and reveal that neither the aggregate solvency indicator nor the liquidity risk indicator (LRI) of the system would fall below the established legal limits. The entities that would be most affected individually have a low share in the total assets of the credit institutions; therefore, a risk to the financial system as a whole is not observed. José Darío Uribe, Governor
