Dissertations / Theses on the topic 'Criticality'

Consult the top 50 dissertations / theses for your research on the topic 'Criticality.'

1

Di, Laudo Umberto. "Deconfined quantum criticality." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25125/.

Abstract:
This work studies a type of quantum phase transition beyond the Landau-Ginzburg-Wilson (LGW) paradigm. In particular, it describes a second-order transition between the Néel and the Valence Bond Solid (VBS) states for a two-dimensional quantum square lattice with antiferromagnetic interactions. The natural description of this critical theory is given not in terms of the order parameter, but in terms of an emergent gauge field which mediates interactions between "fractional" particles. These particles are confined on either side of the transition, while they emerge at the critical point, which is therefore called "deconfined". This critical theory corresponds to that of a 3D classical O(3) model with monopoles suppressed. In the second part of this work, the model is simulated numerically using Monte Carlo methods, and its critical exponents are obtained.
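
A minimal sketch of the kind of simulation mentioned in the second part (Python, with invented lattice size and temperature): Metropolis Monte Carlo for a plain 3D classical O(3) model. The thesis's actual study additionally suppresses monopoles, which this toy does not attempt.

import numpy as np

rng = np.random.default_rng(0)
L, beta, sweeps = 6, 0.7, 200              # illustrative parameters

def random_unit_vectors(*shape):
    v = rng.normal(size=(*shape, 3))
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

spins = random_unit_vectors(L, L, L)       # one O(3) spin per site

def local_field(s, x, y, z):
    # Sum of the six nearest-neighbour spins, periodic boundaries.
    return sum(s[(x + dx) % L, (y + dy) % L, (z + dz) % L]
               for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                  (0, -1, 0), (0, 0, 1), (0, 0, -1)])

for sweep in range(sweeps):
    for _ in range(L ** 3):
        x, y, z = rng.integers(0, L, size=3)
        new = random_unit_vectors()
        # E = -sum over bonds of s_i . s_j, so replacing one spin costs:
        dE = -np.dot(new - spins[x, y, z], local_field(spins, x, y, z))
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[x, y, z] = new

m = np.linalg.norm(spins.mean(axis=(0, 1, 2)))
print(f"magnetisation per spin at beta={beta}: {m:.3f}")

Scanning beta through the transition and studying how such observables scale with L is the standard route to the critical exponents reported in the thesis.
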
2

Vanni, Fabio. "Criticality in Cooperative Systems." Thesis, University of North Texas, 2012. https://digital.library.unt.edu/ark:/67531/metadc271910/.

Abstract:
Cooperative behavior arises from the interactions of single units that globally produce a complex dynamics in which the system acts as a whole. As an archetype I refer to a flock of birds. As a result of cooperation, the whole flock acquires abilities that the single individuals would not have alone. This research led to the discovery that the function of a flock, and more generally of cooperative systems, surprisingly rests on the occurrence of organizational collapses. In this study, I used cooperative systems based on self-propelled particle models (the flock models), which have been proved to be virtually equivalent to sociological network models mimicking decision-making processes (the decision-making model). The critical region is an intermediate condition between a highly disordered state and a strongly ordered one. At criticality, the distribution density of waiting times between two consecutive collapses follows an inverse power law with anomalous statistical behavior. The evidence is based on measures from information theory, correlation in time and space, and statistical analysis of fluctuations. To prove the benefit for a system of living at criticality, I made a flock system interact with another similar system and observed the information transmission for different disturbance values. I proved that at criticality the transfer of information reaches maximal efficiency. Finally, the flock model is shown to be, despite its simplicity, a sufficiently realistic model, as demonstrated through 3D simulations and computer animations.
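
As an illustration of the self-propelled particle models referred to above, here is a minimal Vicsek-type sketch (Python); all parameter names and values are illustrative assumptions, not taken from the dissertation.

import numpy as np

rng = np.random.default_rng(1)
N, L, r, v0, eta, steps = 300, 10.0, 1.0, 0.05, 0.3, 500

pos = rng.uniform(0, L, size=(N, 2))
theta = rng.uniform(-np.pi, np.pi, size=N)

for t in range(steps):
    # Align each particle with the mean heading of neighbours within
    # radius r, plus angular noise of amplitude eta (the control parameter).
    dx = pos[:, None, :] - pos[None, :, :]
    dx -= L * np.round(dx / L)                    # periodic distances
    neigh = (dx ** 2).sum(-1) < r ** 2
    mean_sin = (neigh * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neigh * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, N)
    pos = (pos + v0 * np.stack([np.cos(theta), np.sin(theta)], 1)) % L

# Polar order parameter: ~1 for a coherent flock, ~0 for disorder.
phi = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"order parameter at eta={eta}: {phi:.2f}")

Sweeping eta moves the system between the strongly ordered and highly disordered regimes, with the critical region described above lying in between.
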
3

Stiansen, Einar B. "Criticality in Quantum Dissipative Systems." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for fysikk, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-17475.

Abstract:
This thesis consists of five scientific papers in the field of condensed matter physics. In all papers we employ large scale Monte Carlo simulations to investigate quantum critical behavior in systems coupled to an environment. Special attention is paid to possible anisotropies between spatial fluctuations and fluctuations in imaginary time. Implications of the results to the loop current theory of cuprates are discussed.
4

Pruessner, Gunnar. "Studies in self-organised criticality." Thesis, Imperial College London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407087.

5

Iberti, Massimo. "Ising-Kac models near criticality." Thesis, University of Warwick, 2018. http://wrap.warwick.ac.uk/109480/.

Abstract:
The present thesis consists of an investigation around the result shown by H. Weber and J.C. Mourrat in [MW17a], where the authors proved that the fluctuations of an Ising model with Kac interaction under a Glauber-type dynamic on a periodic two-dimensional discrete torus near criticality converge to the solution of the Stochastic Quantization Equation Φ⁴₂. In Chapter 2, starting from a conjecture in [SW16], we show the robustness of the method by proving the convergence in law of the fluctuation field for a general class of ferromagnetic spin models with Kac interaction undergoing a Glauber dynamic near the critical temperature. We show that the limiting law solves an SPDE that depends heavily on the state space of the spin system and, as a consequence of our method, we construct a spin system whose dynamical fluctuation field converges to Φ²ⁿ₂. In Chapter 3 we apply an idea of H. Weber and P. Tsatsoulis employed in [TW16] to show tightness for the sequence of magnetization fluctuation fields of the Ising-Kac model on a periodic two-dimensional discrete torus near criticality, and characterise the law of the limit as the Φ⁴₂ measure on the torus. This result is not an immediate consequence of [MW17a]. In Chapter 4 we study the fluctuations of the magnetization field of the Ising-Kac model under the Kawasaki dynamic at criticality on a one-dimensional discrete torus, and we provide some evidence towards convergence in law to the solution of the stochastic Cahn-Hilliard equation.
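
For intuition only, here is a toy Glauber (heat-bath) dynamic for a 2D Ising model with a Kac-type interaction averaged over a box of radius R (Python, invented parameters); the rigorous near-critical scaling regime studied in the thesis is not reproduced.

import numpy as np

rng = np.random.default_rng(2)
N, R, beta, sweeps = 64, 4, 1.0, 50
sigma = rng.choice([-1, 1], size=(N, N))

def kac_field(s, x, y):
    # Average spin over the (2R+1)^2 box around (x, y), periodic boundaries.
    rows = [(x + i) % N for i in range(-R, R + 1)]
    cols = [(y + j) % N for j in range(-R, R + 1)]
    return s[np.ix_(rows, cols)].mean()

for sweep in range(sweeps):
    for _ in range(N * N):
        x, y = rng.integers(0, N, size=2)
        h = kac_field(sigma, x, y)
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))   # heat-bath rule
        sigma[x, y] = 1 if rng.random() < p_up else -1

print("magnetisation:", sigma.mean())
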
6

Boonzaaier, Leandro. "Self-organised criticality and seismicity." Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/53047.

Abstract:
Thesis (MSc)--Stellenbosch University, 2002.
In this thesis we give an overview of self-organised criticality and its application to studying seismicity. We recall some of the basic models and techniques for studying self-organised critical systems. We discuss one of these, the sandpile model, in detail and show how various properties of the model can be calculated using a matrix formulation thereof. A correspondence between self-organised critical systems and seismicity is then proposed. Finally, we consider the time evolution of the sandpile model using a time-to-failure analysis originally developed in the study of seismicity, and obtain results for the sandpile model that show similarities with those of analyses of seismic data.
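
As a concrete reference point for the sandpile model discussed above, here is a minimal Python sketch of the Abelian (Bak-Tang-Wiesenfeld) sandpile with slow driving and open boundaries; it records avalanche sizes, whose distribution approaches a power law, but does not reproduce the thesis's matrix formulation or time-to-failure analysis.

import numpy as np

rng = np.random.default_rng(3)
N, grains = 30, 20000
z = np.zeros((N, N), dtype=int)        # heights; a site topples at 4
sizes = []

for _ in range(grains):
    x, y = rng.integers(0, N, size=2)
    z[x, y] += 1                       # slow driving: one grain at a time
    size, unstable = 0, [(x, y)]
    while unstable:
        i, j = unstable.pop()
        if z[i, j] < 4:
            continue
        z[i, j] -= 4                   # topple: one grain to each neighbour
        size += 1
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < N and 0 <= nj < N:    # grains leave at open edges
                z[ni, nj] += 1
                if z[ni, nj] >= 4:
                    unstable.append((ni, nj))
        if z[i, j] >= 4:
            unstable.append((i, j))
    sizes.append(size)

print("largest avalanche:", max(sizes), "topplings")
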
7

Kahil, Rany. "Schedulability in Mixed-criticality Systems." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM023/document.

Abstract:
Les systèmes temps-réel critiques doivent exécuter leurs tâches dans les délais impartis. En cas de défaillance, des événements peuvent avoir des catastrophes économiques. Des classifications des défaillances par rapport aux niveaux des risques encourus ont été établies, en particulier dans les domaines des transports aéronautique et automobile. Des niveaux de criticité sont attribués aux différentes fonctions des systèmes suivant les risques encourus lors d'une défaillance et des probabilités d'apparition de celles-ci. Ces différents niveaux de criticité influencent les choix d'architecture logicielle et matérielle ainsi que le type de composants utilisés pour sa réalisation. Les systèmes temps-réels modernes ont tendance à intégrer sur une même plateforme de calcul plusieurs applications avec différents niveaux de criticité. Cette intégration est nécessaire pour des systèmes modernes comme par exemple les drones (UAV) afin de réduire le coût, le poids et la consommation d'énergie. Malheureusement, elle conduit à des difficultés importantes lors de leurs conceptions. En plus, ces systèmes doivent être certifiés en prenant en compte ces différents niveaux de criticités.Il est bien connu que le problème d'ordonnancement des systèmes avec différents niveaux de criticités représente un des plus grand défi dans le domaine de systèmes temps-réel. Les techniques traditionnelles proposent comme solution l’isolation complète entre les niveaux de criticité ou bien une certification globale au plus haut niveau. Malheureusement, une telle solution conduit à une mauvaise des ressources et à la perte de l’avantage de cette intégration. En 2007, Vestal a proposé un modèle pour représenter les systèmes avec différents niveaux de criticité dont les tâches ont plusieurs temps d’exécution, un pour chaque niveau de criticité. En outre, les conditions de validité des stratégies d’ordonnancement ont été définies de manière formelle, permettant ainsi aux tâches les moins critiques d’échapper aux délais, voire d’être abandonnées en cas de défaillance ou de situation d’urgence.Les politiques de planification conventionnelles et les tests d’ordonnoncement se sont révélés inadéquats.Dans cette thèse, nous contribuons à l’étude de l’ordonnancement dans les systèmes avec différents niveaux de criticité. La surcharge d'un système est représentée sous la forme d'un ensemble de tâches pouvant décrire l'exécution sur l'hyper-période de tâches ou sur une durée donnée. Ce modèle nous permet d’étudier la viabilité des tests de correction basés sur la simulation pour les systèmes avec différents niveaux de criticité. Nous montrons que les tests de simulation peuvent toujours être utilisés pour ces systèmes, et la possibilité de l’ordonnancement du pire des scénarios ne suffit plus, même pour le cas de l’ordonnancement avec priorité fixe. Nous montrons que les politiques d'ordonnancement ne sont généralement pas prévisibles. Nous définissons le concept de faible prévisibilité pour les systèmes avec différents niveaux de criticité et nous montrons ensuite qu'une classe spécifique de stratégies à priorité fixe sont faiblement prévisibles. 
Nous proposons deux tests de correction basés sur la simulation qui fonctionnent pour des stratégies faiblement prévisibles.Nous montrons également que, contrairement à ce que l’on croyait, le contrôle de l’exactitude ne peut se faire que par l’intermédiaire d’un nombre linéaire de préemptions.La majorité des travaux reliés à notre domaine portent sur des systèmes à deux niveaux de criticité en raison de la difficulté du problème. Mais pour les systèmes automobiles et aériens, les normes industrielles définissent quatre ou cinq niveaux de criticité, ce qui nous a motivés à proposer un algorithme de planification qui planifie les systèmes à criticité mixte avec théoriquement un nombre quelconque de niveaux de criticité. Nous montrons expérimentalement que le taux de réussite est supérieur à celui de l’état de la technique
Real-time safety-critical systems must complete their tasks within a given time limit. Failure to successfully perform their operations, or missing a deadline, can have severe consequences such as destruction of property and/or loss of life. Examples of such systems include automotive systems, drones and avionics, among others. Safety guarantees must be provided before these systems can be deemed usable. This is usually done through certification performed by a certification authority. Safety evaluation and certification are complicated and costly even for smaller systems. One answer to these difficulties is the isolation of the critical functionality. Executing tasks of different criticalities on separate platforms prevents non-critical tasks from interfering with critical ones, provides a higher guarantee of safety and simplifies the certification process by limiting it to only the critical functions. But this separation, in turn, introduces undesirable results: inefficient resource utilization and increases in cost, weight, size and energy consumption, which can put a system at a competitive disadvantage. To overcome the drawbacks of isolation, Mixed Criticality (MC) systems can be used. These systems allow functionalities with different criticalities to execute on the same platform. In 2007, Vestal proposed a model to represent MC-systems where tasks have multiple Worst Case Execution Times (WCETs), one for each criticality level. In addition, correctness conditions for scheduling policies were formally defined, allowing lower-criticality jobs to miss deadlines or even be dropped in cases of failure or emergency situations. The introduction of multiple WCETs and different conditions for correctness increased the difficulty of the scheduling problem for MC-systems. Conventional scheduling policies and schedulability tests proved inadequate and the need for new algorithms arose. Since then, a lot of work has been done in this field. In this thesis, we contribute to the study of schedulability in MC-systems. The workload of a system is represented as a set of jobs that can describe the execution over the hyper-period of tasks or over a duration in time. This model allows us to study the viability of simulation-based correctness tests in MC-systems. We show that simulation tests can still be used in mixed-criticality systems, but in this case the schedulability of the worst-case scenario is no longer sufficient to guarantee the schedulability of the system, even in the fixed-priority scheduling case. We show that scheduling policies are not predictable in general, and define the concept of weak-predictability for MC-systems. We prove that a specific class of fixed-priority policies are weakly predictable and propose two simulation-based correctness tests that work for weakly-predictable policies. We also demonstrate that, contrary to what was believed, testing for correctness cannot be done through only a linear number of preemptions. The majority of the related work focuses on systems of two criticality levels due to the difficulty of the problem. But for automotive and airborne systems, industrial standards define four or five criticality levels, which motivated us to propose a scheduling algorithm that schedules mixed-criticality systems with theoretically any number of criticality levels. We show experimentally that it has higher success rates compared to the state of the art. We illustrate how our scheduling algorithm, or any algorithm that generates a single time-triggered table for each criticality mode, can be used as a recovery strategy to ensure the safety of the system in case of certain failures. Finally, we propose a high-level concurrency language and a model for designing an MC-system with coarse-grained multi-core interference.
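
The core of the Vestal model mentioned above is that each job carries one WCET per criticality level. A minimal sketch of that structure with a toy fixed-priority simulation follows (Python; the two-level setup, names and numbers are illustrative assumptions, not the thesis's algorithms).

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    release: int
    deadline: int
    criticality: int        # 0 = LO, 1 = HI
    wcet: tuple             # (C_LO, C_HI) with C_LO <= C_HI

def simulate_fixed_priority(jobs, scenario, horizon):
    """Priority = list order; `scenario` maps each job to its actual
    execution time in this run (e.g. its C_LO or C_HI budget)."""
    remaining = {j.name: scenario[j.name] for j in jobs}
    finish = {}
    for t in range(horizon):
        ready = [j for j in jobs if j.release <= t and remaining[j.name] > 0]
        if not ready:
            continue
        j = ready[0]                  # highest-priority ready job runs
        remaining[j.name] -= 1
        if remaining[j.name] == 0:
            finish[j.name] = t + 1
    return finish

jobs = [Job("hi", 0, 10, 1, (3, 6)), Job("lo", 0, 8, 0, (4, 4))]
lo_mode = {"hi": 3, "lo": 4}          # every job stays within its C_LO
print(simulate_fixed_priority(jobs, lo_mode, 20))   # {'hi': 3, 'lo': 7}

Checking such finish times against deadlines across the different scenarios (all jobs within C_LO, some high-criticality job overrunning to C_HI, and so on), rather than in a single worst case, is the shape of the correctness question studied in the thesis.
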
8

Pueyo, Puntí Salvador. "Irreversibility and Criticality in the Biosphere." Doctoral thesis, Universitat de Barcelona, 2003. http://hdl.handle.net/10803/1421.

Abstract:
This work is the result of a search for general (or nearly general) regularities at ecosystem level, and an exploration of their practical relevance in our relations with the environment.
I began by adding some new contributions to the thermodynamic approach to systemic ecology, but concluded that there is little scope for further progress of strictly ecological interest with this orientation. Instead, the key to a systemic ecology seems to lie in the "large number" effects that arise in the limit of many organisms and/or species, just as the whole scientific body of statistical physics stands on the general features that emerge in the limit of many particles. The concept of criticality seems to have a special importance within this context (criticality is the quality of lying at the critical point at which there is a second-order phase transition).

Some specific issues that I analyze in depth, taking advantage of the concept of criticality and other concepts related to statistical physics, are:

· Wildland fire dynamics. Practical tools to predict and manage fire in boreal forests and in the Mediterranean. Limits to anthropogenic impacts on tropical rainforests before a major fire catastrophe unfolds. The possible generalization of the findings on wildland fires to other kinds of catastrophes, with emphasis on agricultural pests and epidemics. (A toy model sketch follows after this list.)

· Diversity patterns. The origin of species abundance distributions and species-area relations. Their interpretation (and misinterpretation). The case of marine phytoplankton. The quantification of diversity for conservation purposes.

· The effects of diversity on stability. The sources of the apparent inconsistencies between theoretical models, both historical and current, and between theoretical expectations and some experimental results.

I conclude with a discussion of the interest of these and other related findings from the point of view of ecological economics.
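
The wildland-fire item above has a standard self-organised-criticality toy: the Drossel-Schwabl forest-fire model. A minimal Python sketch follows (parameters invented for illustration; this is not the thesis's model of boreal or Mediterranean fire regimes).

import numpy as np

rng = np.random.default_rng(4)
N, p_grow, f_light, steps = 100, 0.05, 0.0001, 2000
EMPTY, TREE = 0, 1
grid = np.zeros((N, N), dtype=int)
fire_sizes = []

for t in range(steps):
    grid[(grid == EMPTY) & (rng.random((N, N)) < p_grow)] = TREE  # growth
    strikes = np.argwhere((grid == TREE) & (rng.random((N, N)) < f_light))
    for x, y in strikes:
        stack, burned = [(x, y)], 0
        while stack:                   # burn the whole connected cluster
            i, j = stack.pop()
            if grid[i, j] != TREE:
                continue
            grid[i, j] = EMPTY
            burned += 1
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < N and 0 <= nj < N and grid[ni, nj] == TREE:
                    stack.append((ni, nj))
        if burned:
            fire_sizes.append(burned)

print(len(fire_sizes), "fires; largest burned", max(fire_sizes), "trees")

In the slow-driving limit the fire-size distribution develops the power-law tail that underlies the catastrophe statistics discussed above.
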
9

Küttler, Martin, Michael Roitzsch, Claude-Joachim Hamann, and Marcus Völp. "Probabilistic Analysis of Low-Criticality Execution." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-233117.

Abstract:
The mixed-criticality toolbox promises system architects a powerful framework for consolidating real-time tasks with different safety properties on a single computing platform. Thanks to the research efforts in the mixed-criticality field, guarantees provided to the highest criticality level are well understood. However, lower-criticality job execution depends on the condition that all high-criticality jobs complete within their more optimistic low-criticality execution time bounds. Otherwise, no guarantees are made. In this paper, we add to the mixed-criticality toolbox by providing a probabilistic analysis method for low-criticality tasks. While deterministic models reduce task behavior to constant numbers, probabilistic analysis captures varying runtime behavior. We introduce a novel algorithmic approach for probabilistic timing analysis, which we call symbolic scheduling. For restricted task sets, we also present an analytical solution. We use this method to calculate per-job success probabilities for low-criticality tasks, in order to quantify how low-criticality tasks behave when high-criticality jobs overrun their optimistic low-criticality reservations.
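
A heavily simplified sketch of the probabilistic idea (Python): treat a high-criticality job's execution time as a distribution and compute the probability that enough slack remains for a low-criticality job. The distributions, window length and two-job setup are invented, and the paper's symbolic scheduling algorithm is not reproduced here.

import numpy as np

# P(high-criticality job runs exactly t time units), for t = 0..5
hi_exec = np.array([0.0, 0.1, 0.3, 0.3, 0.2, 0.1])

# Distribution of the total demand of two such independent jobs.
total_hi = np.convolve(hi_exec, hi_exec)

budget, lo_need = 10, 3          # window length and low-criticality demand

# The low-criticality job succeeds if high-criticality work leaves at
# least lo_need free units, i.e. total demand <= budget - lo_need.
success = total_hi[: budget - lo_need + 1].sum()
print(f"P(low-criticality job completes) = {success:.3f}")
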
10

Hawtin, Benjamin Charles. "Defect criticality of carbon fibre composites." Thesis, University of Bath, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425875.

11

Jones, Thomas Berry. "Criticality Assessments for Improving Algorithmic Robustness." Thesis, The University of New Mexico, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10980232.

Abstract:

Though computational models typically assume all program steps execute flawlessly, that does not imply all steps are equally important if a failure should occur. In the "Constrained Reliability Allocation" problem, sufficient resources are guaranteed for operations that prompt eventual program termination on failure, but those operations that only cause output errors are given a limited budget of some vital resource, insufficient to ensure correct operation for each of them.

In this dissertation, I present a novel representation of failures based on their timing and location, combined with criticality assessments—a method used to predict the behavior of systems operating outside their design criteria. I observe that strictly correct error measures hide interesting failure relationships, that failure importance is often determined by failure timing, and that recursion plays an important role in structuring output error. I employ these observations to improve the output error of two matrix multiplication methods through an economization procedure that moves failures from worse to better locations, thus providing a possible solution to the constrained reliability allocation problem. I show a 38% to 63% decrease in absolute value error on matrix multiplication algorithms, despite nearly identical failure counts between control and experimental studies. Finally, I show that efficient sorting algorithms are less robust at large scale than less efficient sorting algorithms.
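
A toy illustration of why failure location matters (Python): inject a single silently failing multiply into a naive matrix multiplication and measure how the output error depends on where the failure lands. The fault model here is an invented stand-in for the dissertation's failure representation, and the 38% to 63% figures above are the dissertation's results, not this toy's.

import numpy as np

rng = np.random.default_rng(5)
n = 8
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))

def matmul_with_fault(A, B, fault_step):
    n = A.shape[0]
    C, step = np.zeros((n, n)), 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                prod = A[i, k] * B[k, j]
                if step == fault_step:
                    prod = 0.0         # this one multiply silently fails
                C[i, j] += prod
                step += 1
    return C

exact = A @ B
errors = [np.abs(matmul_with_fault(A, B, s) - exact).sum()
          for s in range(0, n ** 3, 37)]     # sample failure locations
print(f"error range over locations: {min(errors):.3f} .. {max(errors):.3f}")
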

12

Siapas, Athanassios G. "Criticality and parallelism in combinatorial optimization." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/11009.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (p. 60-63). By Athanassios G. Siapas. Ph.D.
13

Merchant, P. L. H. "Excitations and criticality in quantum magnets." Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1388128/.

Abstract:
This thesis describes neutron scattering studies of three model magnetic systems: the coupled spin dimer compound TlCuCl3, the frustrated spin ladder material BiCu2PO6 and the impurity-doped spin ladder material BiCu2(1-x)Zn2xPO6. TlCuCl3 is a realisation of a continuously tunable model magnet, where applied hydrostatic pressure can drive the system from a state of disorder into long-range magnetic order, with the emergence of an excitation at the quantum critical point that corresponds to longitudinal fluctuations of the ordered moment. The study of the excitations in TlCuCl3 is now extended to finite temperatures. The results are summarised in Chapter 4, where similarities are reported between the quantum phase and thermal phase transitions. Spin ladder systems provide an exciting opportunity to study aspects of low-dimensional physics. With model magnets previously constrained to the limits of 'strong' exchange (~100 meV) in the cuprates and 'weak' exchange (~1 meV) in the metal-organics, the new spin ladder BiCu2PO6 offers the opportunity to study spin ladder physics in the 'intermediate' exchange regime (~10 meV). Inelastic neutron scattering studies of this system are presented in Chapter 5, where the magnon dispersion, exchange geometry and anisotropy are deduced from analysis of the excitation energies. Substitution of the Cu2+ sites in BiCu2PO6 with non-magnetic Zn2+ impurities results in the creation of BiCu2(1-x)Zn2xPO6, where a single S = 1/2 moment is liberated for each impurity. These moments are shown to demonstrate long-range correlations and magnetic ordering below a characteristic temperature, TN. Single crystal samples with x = 0.01, 0.03 and 0.05 have been investigated and structural studies of each are reported in Chapter 6. The field and temperature dependence of the observed long-range order is reported, as well as a magnetic structure determination and studies of the impurity dependence of the coherence of the magnetic order.
14

De, Villiers Anton Pierre. "Edge criticality in secure graph domination." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/95841.

Abstract:
Thesis (PhD)--Stellenbosch University, 2014.
The domination number of a graph is the cardinality of a smallest subset of its vertex set with the property that each vertex of the graph is in the subset or adjacent to a vertex in the subset. This graph parameter has been studied extensively since its introduction during the early 1960s and finds application in the generic setting where the vertices of the graph denote physical entities that are typically geographically dispersed and have to be monitored efficiently, while the graph edges model links between these entities which enable guards, stationed at the vertices, to monitor adjacent entities. In the above application, the guards remain stationary at the entities. In 2005, this constraint was, however, relaxed by the introduction of a new domination-related parameter, called the secure domination number. In this relaxed, dynamic setting, each unoccupied entity is defended by a guard stationed at an adjacent entity who can travel along an edge to the unoccupied entity in order to resolve a security threat that may occur there, after which the resulting configuration of guards at the entities is again required to be a dominating set of the graph. The secure domination number of a graph is the smallest number of guards that can be placed on its vertices so as to satisfy these requirements. In this generalised setting, the notion of edge removal is important, because one might seek the cost, in terms of the additional number of guards required, of protecting the complex of entities modelled by the graph if a number of edges in the graph were to fail (i.e. a number of links were to be eliminated from the complex, thereby disqualifying guards from moving along such disabled links). A comprehensive survey of the literature on secure graph domination is conducted in this dissertation. Descriptions of related, generalised graph protection parameters are also given. The classes of graphs with secure domination number 1, 2 or 3 are characterised and a result on the number of defenders in any minimum secure dominating set of a graph without end-vertices is presented, after which it is shown that the decision problem associated with computing the secure domination number of an arbitrary graph is NP-complete. Two exponential-time algorithms and a binary programming problem formulation are presented for computing the secure domination number of an arbitrary graph, while a linear algorithm is put forward for computing the secure domination number of an arbitrary tree. The practical efficiencies of these algorithms are compared in the context of small graphs. The smallest and largest increases in the secure domination number of a graph are also considered when a fixed number of edges are removed from the graph. Two novel cost functions are introduced for this purpose. General bounds on these two cost functions are established, and exact values of or tighter bounds on the cost functions are determined for various infinite classes of special graphs. Threshold information is finally established in respect of the number of possible edge removals from a graph before increasing its secure domination number. The notions of criticality and stability are introduced and studied in this respect, focussing on the smallest number of arbitrary edges whose deletion necessarily increases the secure domination number of the resulting graph, and the largest number of arbitrary edges whose deletion necessarily does not increase the secure domination number of the resulting graph.
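
A brute-force sketch of the secure domination number, directly following the definition given above (Python; exponential time, so only suitable for small graphs, unlike the dissertation's algorithms).

from itertools import combinations

def dominates(S, adj, vertices):
    return all(v in S or S & adj[v] for v in vertices)

def secure_dominating(S, adj, vertices):
    if not dominates(S, adj, vertices):
        return False
    for v in vertices - S:
        # Some guard u adjacent to v must be able to move to v while
        # the resulting configuration remains dominating.
        if not any(dominates((S - {u}) | {v}, adj, vertices)
                   for u in S & adj[v]):
            return False
    return True

def secure_domination_number(adj):
    vertices = set(adj)
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            if secure_dominating(set(S), adj, vertices):
                return k

# The 5-cycle: every 2-set fails, so the answer is 3.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(secure_domination_number(adj))    # -> 3

Edge criticality can then be probed by deleting an edge from adj and recomputing, which is the brute-force analogue of the cost functions studied above.
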
15

Socci, Dario. "Scheduling of certifiable mixed-criticality systems." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM025/document.

Abstract:
Modern real-time systems tend to be mixed-critical, in the sense that they integrate on the same computational platform applications at different levels of criticality. Integration gives the advantages of reduced cost, weight and power consumption, which can be crucial for modern applications like Unmanned Aerial Vehicles (UAVs). On the other hand, this leads to major complications in system design. Moreover, such systems are subject to certification, and different criticality levels need to be certified at different levels of assurance. Among other aspects, the real-time scheduling of certifiable mixed-critical systems has been recognized to be a challenging problem. Traditional techniques require complete isolation between criticality levels or global certification to the highest level of assurance, which leads to resource waste, thus losing the advantage of integration. This led to a novel wave of research in the real-time community, and many solutions were proposed. Among those, one of the most popular methods used to schedule such systems is Audsley's approach. However, this method has some limitations, which we discuss in this thesis. These limitations are more pronounced in the case of multiprocessor scheduling, where priority-based scheduling loses some important properties. For this reason scheduling algorithms for multiprocessor mixed-critical systems are not as numerous in the literature as the single-processor ones, and are usually built on restrictive assumptions. This is particularly problematic since industrial real-time systems strive to migrate from single-core to multi-core and many-core platforms. Therefore we motivate and study a different approach that can overcome these problems. A restriction on the practical usability of many mixed-critical and multiprocessor scheduling algorithms is the assumption that jobs are independent. In reality they often have precedence constraints. In the thesis we present the mixed-critical variant of the problem formulation and extend the system load metrics to the case of precedence-constrained task graphs. We also show that our proposed methodology and scheduling algorithm MCPI can be extended to the case of dependent jobs without major modification, showing similar performance with respect to the independent-jobs case. Another topic we treat in this thesis is time-triggered scheduling. This class of schedulers is important because they considerably reduce the uncertainty of job execution intervals, thus simplifying the certification of safety-critical systems. They also simplify any auxiliary timing-based analyses that may be required to validate important extra-functional properties in embedded systems, such as interference on shared buses and caches, peak power dissipation, and electromagnetic interference. The trivial method of obtaining a time-triggered schedule is simulation of the worst-case scenario in an event-triggered algorithm. However, when applied directly, this method is not efficient for mixed-critical systems, as instead of one worst-case scenario they have multiple corner-case scenarios. For this reason, it was proposed in the literature to encode all scenarios into just a few tables, one per criticality mode. We call this scheduling approach Single Time Table per Mode (STTM) and propose a contribution in this context. In fact we introduce a method that transforms practically any scheduling algorithm into an STTM one. It works optimally on single core and shows good experimental results for multi-cores. Finally, we studied the problem of the practical realization of mixed-critical systems. Our effort in this direction is a design flow that we propose for multicore mixed-critical systems. In this design flow, as the model of computation we propose a network of deterministic multi-periodic synchronous processes. Our approach is demonstrated using a publicly available toolset, an industrial application use case and a multi-core platform.
16

Küttler, Martin, Michael Roitzsch, Claude-Joachim Hamann, and Marcus Völp. "Probabilistic Analysis of Low-Criticality Execution." Technische Universität Dresden, 2017. https://tud.qucosa.de/id/qucosa%3A30798.

Abstract:
The mixed-criticality toolbox promises system architects a powerful framework for consolidating real-time tasks with different safety properties on a single computing platform. Thanks to the research efforts in the mixed-criticality field, guarantees provided to the highest criticality level are well understood. However, lower-criticality job execution depends on the condition that all high-criticality jobs complete within their more optimistic low-criticality execution time bounds. Otherwise, no guarantees are made. In this paper, we add to the mixed-criticality toolbox by providing a probabilistic analysis method for low-criticality tasks. While deterministic models reduce task behavior to constant numbers, probabilistic analysis captures varying runtime behavior. We introduce a novel algorithmic approach for probabilistic timing analysis, which we call symbolic scheduling. For restricted task sets, we also present an analytical solution. We use this method to calculate per-job success probabilities for low-criticality tasks, in order to quantify how low-criticality tasks behave when high-criticality jobs overrun their optimistic low-criticality reservations.
17

Zare, Marzieh. "Cooperation-induced Criticality in Neural Networks." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc283813/.

Abstract:
The human brain is considered to be the most complex and powerful information-processing device in the known universe. The fundamental concepts behind the physics of complex systems motivate scientists to investigate the human brain as a collective property emerging from the interaction of thousands of agents. In this dissertation, I investigate the emergence of cooperation-induced properties in a system of interacting units. I demonstrate that the neural network of my research generates properties, such as the avalanche distributions in size and duration, that coincide with experimental results on neural networks both in vivo and in vitro. Focusing attention on the temporal complexity and fractal index of the system, I discuss how to define an order parameter and phase transition. Criticality is assumed to correspond to the emergence of temporal complexity, interpreted as a manifestation of non-Poisson renewal dynamics. In addition, I study the transmission of information between two networks to confirm the criticality, and discuss how the network topology changes over time in the light of Hebbian learning.
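
A minimal sketch of the criticality-avalanche connection (Python): model each firing neuron as triggering on average sigma further firings, i.e. a branching process, with sigma = 1 the critical point. This is an assumption-laden stand-in, not the dissertation's network model.

import numpy as np

rng = np.random.default_rng(6)

def avalanche_size(sigma, cap=10_000):
    active, size = 1, 0
    while active and size < cap:
        size += active
        active = rng.poisson(sigma * active)   # offspring of this generation
    return size

for sigma in (0.7, 1.0):
    sizes = np.array([avalanche_size(sigma) for _ in range(5000)])
    print(f"sigma={sigma}: mean size {sizes.mean():.1f}, max {sizes.max()}")

At sigma = 1 the size distribution develops the power-law tail that matches the in vivo and in vitro avalanche statistics referred to above.
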
18

Grilli, Jacopo. "Randomness and Criticality in Biological Interactions." Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3424011.

Abstract:
In this thesis we study, from a physics perspective, two problems related to biological interactions. In the first part we consider ecological interactions, which shape ecosystems and determine their fate, and their relation to the stability of ecosystems. Using random matrix theory we are able to identify the key aspects, the order parameters, that determine the stability of large ecosystems. We then consider the problem of determining the persistence of a population living in a randomly fragmented landscape. Using techniques borrowed from random matrix theory applied to disordered systems, we are able to identify the key drivers of persistence. The second part of the thesis is devoted to the observation that many living systems seem to tune their interactions close to a critical point. We introduce a stochastic model, based on information theory, that predicts the critical point as a natural outcome of a process of evolution or adaptation, without fine-tuning of parameters.
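
A minimal sketch of the random-matrix stability question described above, in the spirit of May's classic criterion (Python; species number, connectance and interaction strength are invented).

import numpy as np

rng = np.random.default_rng(7)
S, C, sigma, d = 250, 0.2, 0.1, 2.0   # species, connectance, strength, self-regulation

# Random interactions present with probability C, self-regulation -d.
A = np.where(rng.random((S, S)) < C, rng.normal(0, sigma, (S, S)), 0.0)
np.fill_diagonal(A, -d)

leading = np.max(np.linalg.eigvals(A).real)
print(f"leading eigenvalue real part: {leading:.3f}")
print(f"May-type estimate sigma*sqrt(S*C) - d: {sigma * np.sqrt(S * C) - d:.3f}")

The community is stable when the leading eigenvalue stays negative; which matrix statistics control that edge is exactly the order-parameter question above.
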
19

Namazi, Alireza. "Emergent behavior and criticality in online auctions." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=976716739.

20

Wang, Jingtao. "The nature of asymmetry in fluid criticality." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3815.

Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Chemical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
21

London, Mark Daniel. "Complexity and criticality in financial time series." Thesis, De Montfort University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434034.

22

Ford, Gary Nicholas. "Data criticality in through life engineering support." Thesis, University of Bristol, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.761228.

23

Yeo, Dominic. "Self-organised criticality in random graph processes." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:23af1abc-2128-4315-9b25-55ed8f290875.

Abstract:
In the first half of this thesis, we study the random forest obtained by conditioning the Erdős-Rényi random graph G(N,p) to include no cycles. We focus on the critical window, in which p(N) = (1 + λN^(-1/3))/N, as studied by Aldous for G(N,p). We describe a scaling limit for the sizes of the largest trees in this critical random forest, in terms of the excursions above zero of a particular reflected diffusion. We proceed by showing convergence of the reflected exploration process associated to the critical random forests, using careful enumeration of classes of forests, and the asymptotic properties of uniform trees. In the second half of this thesis, we study a random graph process where vertices have one of k types. An inhomogeneous random graph represents the initial connections between vertices, and over time new edges are added homogeneously, as in the classical random graph process. Each vertex is frozen at some rate, resulting in the removal of its entire component. This is a version of the frozen percolation model introduced by Ráth, which (under mild conditions) exhibits self-organised criticality: the dynamics first drive the system to a critical state, and from then on maintain it in criticality. We prove a convergence result for the proportion of vertices of each type which survive until time t, and describe the local limit in terms of a multitype branching process whose parameters are critical and given by the solution to an unusual differential equation driven by Perron-Frobenius eigenvectors. The argument relies on a novel multitype exploration process, leading to a concentration result for the proportion of types in all large components of a near-critical inhomogeneous random graph; and on a stronger convergence result for mean-field frozen percolation, when the initial graphs may be random.
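
A quick empirical look at the critical window above (Python, pure BFS on a sampled G(N,p), with N kept small): at criticality the largest component scales like N^(2/3).

import random
from collections import deque

random.seed(8)

def largest_component(N, lam):
    p = (1 + lam * N ** (-1 / 3)) / N
    adj = [[] for _ in range(N)]
    for i in range(N):
        for j in range(i + 1, N):
            if random.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen, best = [False] * N, 0
    for s in range(N):
        if seen[s]:
            continue
        seen[s], size, queue = True, 0, deque([s])
        while queue:
            v = queue.popleft()
            size += 1
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True
                    queue.append(w)
        best = max(best, size)
    return best

N = 2000
print([largest_component(N, lam) for lam in (-2, 0, 2)],
      "vs N^(2/3) =", round(N ** (2 / 3)))
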
24

Zhou, Luyuan. "Security Risk Analysis based on Data Criticality." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-93055.

Abstract:
Nowadays, security risk assessment has become an integral part of network security, as everyday life has become interconnected with and dependent on computer networks. There are various types of data in a network, often with different criticality in terms of availability, confidentiality or integrity of information. Critical data carries a higher risk when exploited, so data criticality has an impact on network security risks. The challenge of diminishing security risks in a specific network is how to conduct network security risk analysis based on data criticality. An interesting aspect of the challenge is how to integrate the security metric and the threat modeling, and how to consider and combine the various elements that affect network security during security risk analysis. To the best of our knowledge, there exist no security risk analysis techniques based on threat modeling that consider the criticality of data. By extending security risk analysis with data criticality, we consider its impact on the network in security risk assessment. To acquire the corresponding security risk value, a method for integrating data criticality into graphical attack models using relevant metrics is needed. In this thesis, an approach for calculating the security risk value considering data criticality is proposed. Our solution integrates the impact of data criticality in the network by extending the attack graph with data criticality. There are vulnerabilities in the network that pose potential threats to the network. First, the combination of these vulnerabilities and data criticality is identified and precisely described. Thereafter the interaction between the vulnerabilities through the attack graph is taken into account and the final security metric is calculated and analyzed. The new security metric can be used by network security analysts to rank the security levels of objects in the network. By doing this, they can find objects that need additional attention in their daily network protection work. The security metric can also be used to help them prioritize vulnerabilities that need to be fixed when the network is under attack. In general, network security analysts can find effective ways to resolve exploits in the network based on the value of the security metric.
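
A minimal sketch of the general idea of weighting an attack graph by data criticality (Python): each node carries an exploit probability, a path's likelihood is the product along it, and risk combines likelihood with the criticality of the data reached. The graph, probabilities and criticality values are invented, and the thesis's actual metric construction is not reproduced.

# attack graph: node -> (exploit probability, successor nodes)
graph = {
    "web":   (0.6, ["app"]),
    "app":   (0.4, ["db", "files"]),
    "db":    (0.5, []),
    "files": (0.7, []),
}
criticality = {"db": 0.9, "files": 0.3}   # criticality of data at targets

def risks(node, likelihood=1.0):
    p, succ = graph[node]
    likelihood *= p
    if not succ:                          # a data-holding target
        yield node, likelihood * criticality.get(node, 0.0)
    for s in succ:
        yield from risks(s, likelihood)

for target, risk in sorted(risks("web"), key=lambda x: -x[1]):
    print(f"risk of reaching {target}: {risk:.3f}")

Sorting targets by this value is the toy analogue of the ranking of objects described above.
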
25

Navas, Portella Víctor. "Statistical modelling of avalanche observables: criticality and universality." Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/670764.

Abstract:
Complex systems can be understood as entities composed of a large number of interacting elements whose emergent global behaviour cannot be derived from the local laws characterizing their constituents. The observables characterizing these systems can be observed at different scales and they often exhibit interesting properties such as lack of characteristic scales and self-similarity. In this context, power-law type functions take an important role in the description of these observables. The presence of power-law functions resembles the situation of thermodynamic quantities close to a critical point in equilibrium critical phenomena. Different complex systems can be grouped into the same universality class when the power-law functions characterizing their observables have the same exponents. The response of some complex systems proceeds by the so-called avalanche process: a collective response of the system characterized by an intermittent dynamics, with sudden bursts of activity separated by periods of silence. This kind of out-of-equilibrium system can be found in different disciplines such as seismology, astrophysics, ecology, finance and epidemiology, just to mention a few. Avalanches are characterized by a set of observables such as the size, the duration or the energy. When avalanche observables exhibit lack of characteristic scales, their probability distributions can be statistically modelled by power-law-type distributions. Avalanche criticality occurs when avalanche observables can be characterized by this kind of distribution. In this sense, the concepts of criticality and universality, which are well defined in equilibrium phenomena, can also be extended to the probability distributions describing avalanche observables in out-of-equilibrium systems. The main goal of this PhD thesis is to provide robust statistical methods to characterize avalanche criticality and universality in empirical datasets. Due to limitations in data acquisition, empirical datasets often cover only a narrow range of observation, making it difficult to establish power-law behaviour unambiguously. With the aim of discussing the concepts of avalanche criticality and universality, two different systems are considered: earthquakes and acoustic emission events generated during compression experiments of porous materials in the laboratory (labquakes). The techniques developed in this PhD thesis are mainly focused on the distribution of earthquake and labquake sizes, which is known as the Gutenberg-Richter law. However, the methods are much more general and can be applied to any other avalanche observable. The statistical techniques provided in this work can also be helpful for earthquake forecasting. Coulomb-stress theory has been used for years in seismology to understand how earthquakes trigger each other. Earthquake models that relate earthquake rates and Coulomb stress after a main event, such as the rate-and-state model, assume that the magnitude distribution of earthquakes is not affected by the change in the Coulomb stress. Several statistical analyses are performed to test whether the distribution of magnitudes is sensitive to the sign of the Coulomb-stress increase. The use of advanced statistical techniques for the analysis of complex systems has been found to be necessary and very helpful in providing rigour to empirical results, particularly for problems regarding hazard analysis.
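
As a pointer to the kind of statistics involved, here is a sketch of the standard maximum-likelihood (Aki) estimator for the Gutenberg-Richter b-value on a synthetic catalogue (Python); the thesis's methods for establishing power-law behaviour over limited observation ranges are considerably more elaborate.

import numpy as np

rng = np.random.default_rng(9)
b_true, Mc = 1.0, 2.0

# Under the Gutenberg-Richter law, magnitudes above the completeness
# magnitude Mc are exponential with rate b * ln(10).
mags = Mc + rng.exponential(1 / (b_true * np.log(10)), size=5000)

b_hat = np.log10(np.e) / (mags.mean() - Mc)
std = b_hat / np.sqrt(len(mags))        # first-order uncertainty
print(f"b = {b_hat:.3f} +/- {std:.3f}")
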
APA, Harvard, Vancouver, ISO, and other styles
26

Naz, Sabiha. "Benchmark criticality calculations for one speed neutron transport." Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/5927.

Full text
Abstract:
Thesis (Ph. D.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on October 17, 2007). Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
27

Coetzer, Audrey. "Criticality of the lower domination parameters of graphs." Thesis, Link to the online version, 2007. http://hdl.handle.net/10019/1051.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Hasty, Jeff. "A renormalization group study of self-organized criticality." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/29887.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Peters, Ole Bjoern. "Approaches to criticality : rainfall and other relaxation processes." Thesis, Imperial College London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.415268.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Stapleton, Matthew Alexander. "Self-organised criticality and non-equilibrium statistical mechanics." Thesis, Imperial College London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.443792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Hollinghurst, Joe. "Enabling software defined networking in high criticality networks." Thesis, University of Bristol, 2018. http://hdl.handle.net/1983/8ac68df0-62ba-4cf8-beee-b69ee807f43e.

Full text
Abstract:
High-criticality networking solutions are often dedicated, highly specialised, even bespoke in the case of hard real-time guarantees. This is required to ensure (quasi) deterministic behaviour of the network services as seen by critical applications. However, dedicated networks incur significant expense, along with the inability to update the system efficiently and effectively. Software-Defined Networking (SDN) uses controllers to allow dynamic, user-controlled, on-demand configuration of the network. This provokes interesting questions about the applicability of SDN concepts and architectures in high-criticality networks. Although SDN offers flexibility and programmability to the network infrastructure through the introduction of a controller, the controller introduces extra delay into the system, because new flows must query the controller for instructions on how to route traffic. This becomes an increasing problem for large-scale and delay-sensitive networks such as those found in high-criticality infrastructure. The delay introduced can be minimised by optimal placement of the controller, or decreased further by introducing additional controllers. Although the problem of optimal placement for multiple controllers is known to be NP-hard, approximations can be used. Three different methods are analysed, investigating their scalability and how their accuracy varies with complexity. In the latter stage of the thesis, the use of redundancy and coding is analysed with the aim of reducing latency and increasing reliability within the network. The objective is to provide an analysis of the gains achievable through the use of redundant messages and coding. Both redundancy and coding increase the network load, and hence the delay of each packet, but can reduce overall delay by exploiting independent randomness across multiple paths. Both average delay minimisation and probabilistic guarantees on delay exceeding some tolerance threshold are considered.
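The multi-controller placement problem mentioned above is usually cast as a k-center or k-median problem on the network graph. As a hedged sketch (on a toy topology, not one from the thesis), the classic greedy 2-approximation for k-center can be written as:

```python
import itertools

def floyd_warshall(n, edges):
    """All-pairs shortest-path delays on an undirected weighted graph."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k, i, j in itertools.product(range(n), repeat=3):
        if d[i][k] + d[k][j] < d[i][j]:
            d[i][j] = d[i][k] + d[k][j]
    return d

def greedy_k_center(dist, k):
    """Greedy 2-approximation: repeatedly place the next controller at the
    switch currently farthest from its nearest controller."""
    centers = [0]                              # arbitrary first placement
    while len(centers) < k:
        far = max(range(len(dist)),
                  key=lambda v: min(dist[v][c] for c in centers))
        centers.append(far)
    worst = max(min(dist[v][c] for c in centers) for v in range(len(dist)))
    return centers, worst

# Hypothetical 6-switch topology with link delays in ms
edges = [(0, 1, 2), (1, 2, 2), (2, 3, 4), (3, 4, 1), (4, 5, 3), (0, 5, 7)]
dist = floyd_warshall(6, edges)
print(greedy_k_center(dist, k=2))   # controller sites, worst-case delay
```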
APA, Harvard, Vancouver, ISO, and other styles
32

Fleming, Thomas David. "Allocation and optimisation of mixed criticality cyclic executives." Thesis, University of York, 2017. http://etheses.whiterose.ac.uk/19031/.

Full text
Abstract:
Incorporating applications of differing levels of criticality onto the same platform in an efficient manner is a challenging problem. Highly critical applications require stringent verification and certification, while lower-criticality work may seek to make greater use of modern processing power with little to no requirement for verification. Much study of mixed criticality systems has considered this issue by taking scheduling paradigms designed to provide good platform utilisation at the expense of predictability and attempting to provide mechanisms that allow for the verification of higher-criticality work. In this thesis we take the alternative approach: we utilise a cyclic executive scheduler. Such schedulers are used extensively in industrial practice and provide very high levels of determinism, making them a strong choice for applications with strict certification requirements. This work provides a platform which supports highly critical work alongside work of lower criticalities in a cyclic executive context, the aim being to provide a near-future platform able to support existing legacy highly critical software alongside newer, less critical software which seeks to utilise multi-core architectures. One of the fundamental challenges of designing a system for a static scheduler is the allocation of applications/tasks to the cores and, in the case of cyclic executives, to the minor cycles of the system. Throughout this work we explore task allocation, making extensive use of Linear Programming to model and allocate work. We suggest a limited task-splitting technique to aid in system design and allocation. Finally, we propose two ways in which an allocation of work might be optimised to meet some design goal. This thesis thus proposes a scheduling policy for mixed criticality multi-core systems using a cyclic executive scheduler and explores the design, allocation and optimisation of such a system.
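To make the allocation problem concrete: the sketch below assigns periodic tasks to the minor cycles of a single-core cyclic executive with a simple first-fit heuristic. It is a stand-in for intuition only; the thesis itself formulates allocation with Linear Programming, and the task set here is hypothetical.

```python
def allocate_cyclic_executive(tasks, minor_cycle, n_minor):
    """First-fit allocation of periodic tasks to the minor cycles of one
    core. Each task is (name, wcet, period); periods must be multiples of
    the minor cycle, and a task occupies every minor cycle it recurs in.
    A heuristic stand-in for the LP formulation used in the thesis."""
    load = [0.0] * n_minor
    table = [[] for _ in range(n_minor)]
    for name, wcet, period in sorted(tasks, key=lambda t: t[2]):
        stride = period // minor_cycle
        for phase in range(stride):          # try each release offset
            slots = range(phase, n_minor, stride)
            if all(load[s] + wcet <= minor_cycle for s in slots):
                for s in slots:
                    load[s] += wcet
                    table[s].append(name)
                break
        else:
            raise ValueError(f"no feasible slots for task {name!r}")
    return table

# Hypothetical task set: minor cycle 25 ms, major cycle 100 ms (4 slots)
tasks = [("sensor", 5, 25), ("control", 10, 50), ("log", 12, 100)]
print(allocate_cyclic_executive(tasks, minor_cycle=25, n_minor=4))
```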
APA, Harvard, Vancouver, ISO, and other styles
33

Roux, Adriana. "Vertex-criticality of the domination parameters of graphs." Thesis, Stellenbosch : University of Stellenbosch, 2011. http://hdl.handle.net/10019.1/6874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Zamani, Farzaneh. "Local quantum criticality in and out of equilibrium." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-213688.

Full text
Abstract:
In this thesis I investigate several aspects of local quantum criticality, a concept of key importance in a number of physical contexts ranging from critical heavy fermion compounds to quantum dot systems. Quantum critical points are associated with second order phase transitions at zero temperature. In contrast to the situation at finite-temperature transitions, the zero-point motion cannot be neglected near a quantum critical point. As a result, the incorporation of quantum dynamics leads to an effective dimension larger than the spatial dimension of the system for the order parameter fluctuations within the Ginzburg-Landau-Wilson treatment of criticality. This so-called quantum-to-classical mapping works well for the critical properties in insulating systems but apparently fails in systems containing gapless fermions. This has been demonstrated experimentally most clearly within a particular class of intermetallic compounds called heavy fermions. A particular way in which the Ginzburg-Landau-Wilson paradigm fails is critical Kondo destruction, which seems to underlie the unconventional quantum criticality seen in the heavy fermions. I focus on studying the properties of critical Kondo destruction and the emergence of energy-over-temperature scaling in systems without spatial degrees of freedom, i.e., so-called quantum impurity systems. In particular, I employ large-N techniques to address critical properties of this class of quantum phase transitions in and out of equilibrium. As quantum critical systems are characterized by a scale-invariant spectrum with many low-lying excitations, it may appear that any perturbation can lead to a response beyond the linear response regime. Understanding what governs the non-linear response regime near quantum criticality is therefore an interesting question. Here, I first present a path integral version of the Schrieffer-Wolff transformation which relates the functional integral form of the partition function of the Anderson model to that of its effective low-energy model. The equivalence between the low-energy sector of the Anderson model in the Kondo regime and the spin-isotropic Kondo model is usually established via a canonical transformation performed on the Hamiltonian, followed by a projection. The resulting functional integral assumes the form of a spin path integral and includes a geometric phase factor, i.e. a Berry phase. The approach stresses the underlying symmetries of the model and allows for a straightforward generalization of the transformation to more involved models. As an example of the efficiency of the approach I apply it to a single electron transistor attached to ferromagnetic leads and derive the effective low-energy model of such a magnetic transistor. As Kondo screening is a local phenomenon, it and its criticality can be studied using an appropriate impurity model. A general impurity model for studying critical Kondo destruction is the pseudogap Bose-Fermi Kondo model. Here, I concentrate on the multi-channel version of the model using a dynamical large-N study. This model allows one to study the non-trivial interplay between two processes that can each, by itself, lead to critical Kondo destruction. The zero-temperature residual entropy at various fixed points of the model is also discussed. The two-channel Anderson model exhibits several continuous quantum phase transitions between weak- and strong-coupling phases. The non-crossing approximation (NCA) is believed to give reliable results for the standard two-channel Anderson model of a magnetic impurity in a metal. I revisit the reliability of the NCA for the standard two-channel Anderson model (constant conduction electron density of states) and investigate its reliability for the two-channel pseudogap Anderson model. This is done by comparing finite-temperature, finite-frequency solutions of the NCA equations and asymptotically exact zero-temperature NCA solutions with numerical renormalization-group calculations. The phase diagram of this model is well established; the focus here is on the dynamical scaling properties obtained within the NCA. Finally, I study the thermal and non-thermal steady-state scaling functions and the steady-state dynamics of the pseudogap Kondo model. This model allows us to study the concept of effective temperatures near fully interacting as well as weak-coupling fixed points, and to compare the out-of-equilibrium scaling properties of critical Kondo destruction to those of the traditional spin-density wave (SDW) scenario. The differences I identify can be probed experimentally, which may help in identifying the nature of the quantum critical points observed in certain heavy fermion compounds.
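The "energy-over-temperature scaling" referred to above can be made concrete with the standard scaling ansatz for the local dynamical spin susceptibility near critical Kondo destruction; the following is the generic textbook form, not an equation taken from the thesis.

```latex
% Generic \omega/T scaling ansatz near critical Kondo destruction:
% the local spin susceptibility collapses onto a universal function \Phi
% of the ratio \omega/T, with a fractional exponent \alpha.
\chi_{\mathrm{loc}}(\omega, T) \;=\; T^{-\alpha}\,
\Phi\!\left(\frac{\omega}{T}\right),
\qquad 0 < \alpha < 1 .
```

Observing such a collapse over several decades of ω/T is the usual diagnostic separating critical Kondo destruction from the spin-density-wave scenario, in which ω/T scaling is violated.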
APA, Harvard, Vancouver, ISO, and other styles
35

Reis, Elohim Fonseca dos 1984. "Criticality in neural networks = Criticalidade em redes neurais." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276917.

Full text
Abstract:
Advisors: José Antônio Brum, Marcus Aloizio Martinez de Aguiar
Master's dissertation - Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin
This work is divided into two parts. In the first part, a correlation network is built based on an Ising model at different temperatures (critical, subcritical and supercritical), using a Metropolis Monte-Carlo algorithm with single-spin-flip dynamics. This theoretical model is compared with a brain network built from the correlations of BOLD fMRI time series of brain region activity. Network measures, such as the clustering coefficient, average shortest path length and degree distribution, are analysed. The same network measures are calculated for the network obtained from the time-series correlations of the spins in the Ising model. The results from the brain network are better explained by the theoretical model at the critical temperature, suggesting critical aspects in brain dynamics. In the second part, the temporal dynamics of the activity of a neuron population, namely the activity of retinal ganglion cells recorded in a multi-electrode array, is studied. Many studies have focused on describing the activity of neural networks using disordered Ising models, with no regard to the dynamical structure. Treating time as an extra dimension of the system, the temporal dynamics of the activity of the neuron population is modeled. The maximum entropy principle is used to build an Ising model with pairwise interactions between the activities of different neurons at different times. Model fitting is performed by a combination of Metropolis Monte Carlo sampling and gradient descent. The system is characterized by the learned parameters, questions such as detailed balance and time reversibility are analysed, and thermodynamic variables, such as the specific heat, can be calculated to study critical aspects.
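A minimal sketch of the single-spin-flip Metropolis dynamics described above, for a 2D Ising model near its critical temperature; the lattice size, temperature and step count are illustrative values, not the parameters used in the dissertation.

```python
import numpy as np

def metropolis_ising(L=32, T=2.269, steps=100_000, seed=0):
    """Single-spin-flip Metropolis dynamics for the 2D Ising model
    (J = 1, periodic boundaries). T = 2.269 is near the critical point."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(steps):
        i, j = rng.integers(0, L, size=2)
        # sum of the four nearest neighbours with periodic boundaries
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb        # energy cost of flipping (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = -spins[i, j]
    return spins

m = abs(metropolis_ising().mean())         # magnetisation per spin
print(f"|m| near criticality: {m:.3f}")
```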
Master's degree
Physics
Master in Physics
2013/25361-6
FAPESP
APA, Harvard, Vancouver, ISO, and other styles
36

Gates, John Fitzgerald. "Leadership of changing universities : a case for criticality." Thesis, University College London (University of London), 2009. http://discovery.ucl.ac.uk/10019906/.

Full text
Abstract:
Contemporary universities are changing universities that function under conditions of uncertainty, even unknowability, to meet often-unclear demands from within and outside their walls. The complexities of changing universities render unfeasible a singular perspective by which to guide them. At issue is how leaders might understand, correlate, and utilize the awarenesses within their universities to develop a sense of institutional knowing. It is hypothesized in this thesis that, given the above conditions, the effective leadership of changing universities necessitates a critical method. Case studies of the CEOs of three universities, one in Great Britain and two in the United States, form the empirical basis of the study. From the case studies three interlocking themes (knowledge frameworks, institutional identity, and social exchange) emerged. The study revealed that in a time of change and uncertainty, the effective leadership of universities requires a means by which to transform information into knowledge, knowledge into knowing, and knowing into being. The study further revealed that 1) knowledge frameworks (cognitive structures for understanding) are adaptable; 2) the leadership of changing universities is largely transactional; and 3) leaders and staff make their way amidst change and uncertainty through their collective efforts to address institutional issues. Based on Barnett's (1997) idea of criticality, which encompasses critical knowing, critical action, and critical being, criticality for university leadership is here developed as a set of theoretical propositions for the practice of university leadership under conditions of change and uncertainty. The study will contribute to the body of knowledge on, and aid in the examination of, the leadership of contemporary universities as well as the sociology of organisations.
APA, Harvard, Vancouver, ISO, and other styles
37

Peliz, Pinto Teixeira Filipe. "Criticality and its effect on other cortical phenomena." Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/31608.

Full text
Abstract:
Neuronal avalanches are a cortical phenomenon defined by bursts of neuronal firing encapsulated by periods of quiescence. It has been found both in vivo and in vitro that neuronal avalanches follow a power law distribution, which is indicative of the system being within or near a critical state. A system is critical if it is poised between order and disorder, with the possibility of a minor event leading to a large chain reaction. This is also reflected in the system exhibiting a diverging correlation length between its components as it approaches the critical point. It has been shown that neuronal criticality is a scale-free phenomenon observed throughout the entire system as well as within each module of the system. At a small scale, neuronal networks produce avalanches which conform to power-law-like distributions. At a larger scale, we observe that these systems consist of modules exhibiting long-range temporal correlations identifiable via Detrended Fluctuation Analysis (DFA). This phenomenon is hypothesised to affect network behaviour with regard to information processing, information storage, computational power, and stability (the Criticality Hypothesis). This thesis attempts to better understand critical neuronal networks and how criticality may link with other neuronal phenomena. The work begins by investigating the interplay of network connectivity, synaptic plasticity, and criticality. Using different network construction algorithms, the thesis demonstrates that Hebbian learning and Spike Timing Dependent Plasticity (STDP) robustly drive small networks towards a critical state. Moreover, the thesis shows that, while the initial distribution of synaptic weights plays a significant role in attaining criticality, the network's topology at the modular level has little or no impact. Using an expanded eight-module oscillatory spiking neural network, the thesis then shows the link between the different critical markers used when attempting to observe critical behaviour at different scales. The findings demonstrate that modules exhibiting power-law-like behaviour also demonstrate long-range temporal correlations throughout the system. Furthermore, when modules no longer exhibit power-law-like behaviour, they become uncorrelated or noisy. This shows a correlation between the power-law-like behaviour observed within each module and the long-range temporal correlations between the modules. The thesis concludes by demonstrating how criticality may be linked with other related phenomena, namely metastability and dynamical complexity. Metastability is a global property of neuronal populations that migrate between attractor-like states. Metastability can be quantified by the variance of synchrony, a measure that has been hypothesised to capture the varying influence neuronal populations have over one another and over the system as a whole. The thesis shows a correlation between critical behaviour and metastability, where the latter is most reliably maximised only when the former is near the critical state. This conclusion is expected, as metastability, similarly to criticality, reflects the interplay between the integrating and segregating tendencies of the system components. Agreeing with previous findings, this suggests that metastable dynamics may be another marker of critical behaviour. A neural system is said to exhibit dynamical complexity if a balance of integrated and segregated activity occurs within the system. A common attribute of critical systems is a balance between excitation and inhibition. The final part of the thesis attempts to understand how criticality may be linked with dynamical complexity, showing a possible connection between these phenomena and providing a foundation for further analysis. The thesis concludes with a discussion of the significant role criticality plays in determining the behaviour of neuronal networks.
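The avalanche definition used throughout this literature (bursts of activity bounded by quiescent time bins) is easy to state in code. The following sketch extracts avalanche sizes from binned population activity; the data are hypothetical and the binning convention is the standard one, not necessarily the thesis's exact pipeline.

```python
import numpy as np

def avalanche_sizes(activity):
    """Extract avalanche sizes from binned population activity: an
    avalanche is a maximal run of non-empty time bins, bounded by
    empty (quiescent) bins; its size is the total number of events."""
    sizes, current = [], 0
    for count in activity:
        if count > 0:
            current += count
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

# Hypothetical spike counts per time bin
activity = np.array([0, 3, 1, 0, 0, 7, 2, 4, 0, 1, 0])
print(avalanche_sizes(activity))   # -> [4, 13, 1]
```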
APA, Harvard, Vancouver, ISO, and other styles
38

Pribyl, David James 1963. "Nuclear excursions in criticality accidents with fissile solutions." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/276965.

Full text
Abstract:
An accidental criticality may occur in a solution of fissile material. Since the processing of nuclear materials in solution is prevalent throughout the fuel cycle, it would be judicious to have the capability to predict a possible hazard. In view of this concern, a computer simulation was performed of the Los Alamos accident of December 30, 1958, in which the actuation of an electric stirrer produced a sudden criticality. A complete equation of state for a liquid containing gas bubbles was coupled with the equations of energy, momentum, and space-independent point kinetics. Multiplication calculations, implemented with the Monte Carlo Code for Neutron and Photon Transport (MCNP), were performed on thermally expanding solution geometries, to generate a reactivity feedback representation. With the knowledge of the total energy produced in the accident, the maximum reciprocal period on which the power rose was computed.
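For orientation, the space-independent point-kinetics equations mentioned in the abstract take the standard textbook form below (with six delayed-neutron precursor groups); this is the generic form, not notation copied from the thesis.

```latex
% Space-independent point kinetics with delayed-neutron precursors c_i:
% n = neutron population, \rho = reactivity, \beta = \sum_i \beta_i,
% \Lambda = neutron generation time, \lambda_i = precursor decay constants.
\frac{dn}{dt} = \frac{\rho(t)-\beta}{\Lambda}\,n(t)
              + \sum_{i=1}^{6}\lambda_i c_i(t),
\qquad
\frac{dc_i}{dt} = \frac{\beta_i}{\Lambda}\,n(t) - \lambda_i c_i(t).
```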
APA, Harvard, Vancouver, ISO, and other styles
39

Romero, de Mills L. Patricia. "The development of criticality amongst undergraduate students of Spanish." Thesis, University of Southampton, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.494974.

Full text
Abstract:
The skills-based versus knowledge-based learning debate in British higher education has given rise to a double agenda, clearly observable in Modern Languages degrees, where the so-called "content units" (knowledge-based learning) form one dimension and the "language units" (skills-based learning) form the other. This structure rests on the understanding that knowledge-based courses provide students with the necessary intellectual tools to make informed judgements about the different contexts and situations in which their (linguistic or other) skills can be applied. From this perspective, a Modern Languages degree offers invaluable opportunities for undergraduates to develop as well-rounded critical beings (Barnett, 1997). However, a programme whose main components are conducted in (at least) two different linguistic codes poses additional, but often overlooked, challenges for the transferability of information and skills between the units described above. This could, in turn, affect the development of Modern Languages students' criticality. It is hoped that this study will contribute to our understanding of this higher-level learning process involving two languages. This research explores the different learning experiences organised for students of languages which curriculum designers believe help students to build up their capacity to behave critically in different areas of their professional and personal lives. Through an ethnographic-type investigation, this study aims to provide an in-depth analysis of the elements involved in the development of Spanish students' criticality and, in particular, of the ways these elements are interlinked with one another. The data collected for this investigation include samples of students' oral and written work, interviews with students and tutors, classroom observations and a collection of course documents and university documentation, all of which were qualitatively recorded and analysed.
APA, Harvard, Vancouver, ISO, and other styles
40

Miller, Gael. "Measurements of criticality in self-organizing cellular automata models." Thesis, Heriot-Watt University, 2003. http://hdl.handle.net/10399/301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Curtin, Oliver James. "Quantum criticality and emergent symmetry in coupled quantum dots." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/42499.

Full text
Abstract:
We consider strongly correlated regimes which emerge at low temperature in coupled quantum dot (or magnetic impurity) systems. In strongly correlated systems a single-particle description fails to explain the observed behaviour, so we resort to many-body methods. We describe our system using a 2-impurity Anderson model and develop a numerical renormalisation group procedure which provides non-perturbative insight into the low-energy behaviour through the calculation of dynamic quantities. We combine this approach with renormalised perturbation theory, thus acquiring a picture of how the Hamiltonian and interactions change at low energies. These approaches are first used to study the emergence of a Kondo effect with an SU(4) symmetry in capacitively-coupled double quantum dot systems. We classify the 'types' of SU(4) symmetry which can emerge and show how an experimentalist might achieve such emergence by tuning their system. We provide a way of distinguishing between the SU(2) and SU(4) Kondo regimes by considering the conductance. We also study a quantum critical point which occurs in the Heisenberg-coupled quantum dot/impurity model. There is an anomalous entropy contributed by the impurities in this regime which is indicative of an uncoupled Majorana fermion. We calculate dynamic quantities in regimes with different symmetries and establish a correspondence with the 2-channel Kondo model. We formulate possible pictures of the underlying mechanisms of the critical point and construct a Majorana fermion model for the case with particle-hole symmetry, which explains the non-Fermi-liquid energy levels and degeneracies obtained. We conjecture that a Majorana zero mode is present, and that this is responsible for the anomalous entropy.
APA, Harvard, Vancouver, ISO, and other styles
42

Scheben, Fynn. "Iterative methods for criticality computations in neutron transport theory." Thesis, University of Bath, 2011. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.545319.

Full text
Abstract:
This thesis studies the so-called "criticality problem", an important generalised eigenvalue problem arising in neutron transport theory. The smallest positive real eigenvalue of the problem contains valuable information about the status of the fission chain reaction in the nuclear reactor (i.e. the criticality of the reactor), and thus plays an important role in the design and safety of nuclear power stations. Because of this practical importance, efficient numerical methods to solve the criticality problem are needed, and these are the focus of this thesis. In the theory we consider the time-independent neutron transport equation in the monoenergetic homogeneous case with isotropic scattering and vacuum boundary conditions. This is an unsymmetric integro-differential equation in five independent variables, modelling transport, scattering, and fission, where the dependent variable is the neutron angular flux. We show that, before discretisation, the nonsymmetric eigenproblem for the angular flux is equivalent to a related eigenproblem for the scalar flux, involving a symmetric positive definite weakly singular integral operator (in space only). Furthermore, we prove the existence of a simple smallest positive real eigenvalue with a corresponding eigenfunction that is strictly positive in the interior of the reactor. We discuss approaches to discretise the problem and present discretisations that preserve the underlying symmetry in the finite-dimensional form. The thesis then describes methods for computing the criticality in nuclear reactors, i.e. the smallest positive real eigenvalue, which are applicable for quite general geometries and physics. In engineering practice the criticality problem is often solved iteratively, using some variant of the inverse power method. Because of the high dimension, matrix representations for the operators are often not available, and the inner solves needed for the eigenvalue iteration are implemented by matrix-free inner iterations. This leads to inexact iterative methods for criticality computations, for which there appears to be no rigorous convergence theory. The fact that, under appropriate assumptions, the integro-differential eigenvalue problem possesses an underlying symmetry (in a space of reduced dimension) allows us to perform a systematic convergence analysis for inexact inverse iteration and related methods. In particular, this theory provides rather precise criteria on how accurate the inner solves need to be in order for the whole iterative method to converge. The theory is illustrated with numerical examples on several test problems of physical relevance, using GMRES as the inner solver. We also illustrate the use of Monte Carlo methods for the solution of neutron transport source problems as well as for the criticality problem. Links between the steps in the Monte Carlo process and the underlying mathematics are emphasised and numerical examples are given. Finally, we introduce an iterative scheme (the so-called "method of perturbation") that is based on computing the difference between the solution of the problem of interest and the known solution of a base problem. This situation is very common in the design stages for nuclear reactors, when different materials are tested or the material properties change due to the burn-up of fissile material. We explore the relation of the method of perturbation to some variants of inverse iteration, which allows us to give convergence results for the method of perturbation. The theory shows that the method is guaranteed to converge if the perturbations are not too large and the inner problems are solved with sufficiently small tolerances; this helps to explain the divergence of the method of perturbation in some situations, of which we give numerical examples. We also identify situations, and present examples, in which the method of perturbation achieves the same convergence rate as standard shifted inverse iteration. Throughout the thesis further numerical results are provided to support the theory.
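The structure of inexact inverse iteration is easy to exhibit on a finite-dimensional surrogate. In the sketch below, the inner solve is a conjugate-gradient loop stopped at a loose relative tolerance, standing in for the matrix-free GMRES inner solver used in the thesis; the test matrix is hypothetical.

```python
import numpy as np

def cg(A, b, tol, x0=None, max_iter=200):
    """Basic conjugate-gradient inner solver (A symmetric positive
    definite), stopped at relative residual `tol` -- the inexact solve."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) <= tol * np.linalg.norm(b):
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def inexact_inverse_iteration(A, tol_inner=1e-2, iters=30):
    """Inexact inverse power iteration for the smallest eigenvalue of A:
    each outer step solves A y = x only approximately."""
    x = np.ones(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = cg(A, x, tol_inner, x0=x)
        x = y / np.linalg.norm(y)
    return x @ (A @ x)                      # Rayleigh quotient estimate

# Hypothetical symmetric positive definite test matrix
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
print(inexact_inverse_iteration(A), np.linalg.eigvalsh(A).min())
```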
APA, Harvard, Vancouver, ISO, and other styles
43

Lawley, Martyn Laurence. "Aspects of quantum criticality in itinerant electron ferromagnetic systems." Thesis, University of Birmingham, 2010. http://etheses.bham.ac.uk//id/eprint/536/.

Full text
Abstract:
Fermi-liquid theory is one of the standard models of condensed matter physics, supplying a valid explanation of the low temperature properties of many metals. However, non-Fermi-liquid behaviours arise in many itinerant systems that exhibit a zero temperature magnetic phase transition. This thesis is mainly concerned with such quantum critical points and is an investigation into the various phenomena seen in the phase diagram of itinerant ferromagnetic systems. We apply a standard theory of itinerant quantum criticality to a quantum-critical end-point in a three-dimensional ferromagnet, before speculating on ZrZn₂ being a test-bed of our results. Then we consider two explanations for the appearance of a first-order phase transition at low temperatures and attempt to reconcile them with ZrZn₂. Finally we concentrate on the wide range of novel states that appear instead of a pure quantum critical point. Such exotic phases are superconducting or magnetic in nature and we investigate whether the onset of ferromagnetic quantum critical fluctuations can give rise to a certain class of such states.
APA, Harvard, Vancouver, ISO, and other styles
44

Maurer, Simon. "Analysis and coordination of mixed-criticality cyber-physical systems." Thesis, University of Hertfordshire, 2018. http://hdl.handle.net/2299/21094.

Full text
Abstract:
A Cyber-physical System (CPS) can be described as a network of interlinked, concurrent computational components that interact with the physical world. Such a system is usually reactive in nature and must satisfy strict timing requirements to guarantee correct behaviour. The components can be of mixed criticality, which implies different progress models and communication models, depending on whether the focus of a component lies on predictability or on resource efficiency. In this dissertation I present a novel approach that bridges the gap between stream processing models and Labelled Transition Systems (LTSs). The former offer powerful tools to describe concurrent systems of, usually simple, components, while the latter allow complex, reactive components and their mutual interaction to be described. In order to bridge the two domains I introduce a novel LTS, the Synchronous Interface Automaton (SIA), which makes it possible to model the interaction protocol of a process via its interface and to incrementally compose simple processes into more complex ones while preserving the system properties. Exploiting these properties I introduce an analysis to identify permanent blocking situations in a network of composed processes. SIAs are wrapped by the novel component-based coordination model Process Network with Synchronous Communication (PNSC), which describes a network of concurrent processes in which multiple communication models and the co-existence and interaction of heterogeneous processes are supported through well-defined interfaces. The work presented in this dissertation follows a holistic approach which spans from the theory of the underlying model to an instantiation of the model as a novel coordination language, called Streamix. The language uses network operators to compose networks of concurrent processes in a structured and hierarchical way. The work is validated by a prototype implementation of a compiler and a Run-time System (RTS) that allow a Streamix program to be compiled and executed on a platform with support for ISO C, POSIX threads, and a Linux operating system.
APA, Harvard, Vancouver, ISO, and other styles
45

Tadjfar, Nagisa. "Assessing the criticality of germanium as a by-product." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111354.

Full text
Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 37-38).
Although germanium production is currently nowhere near its supply potential, many sources cite germanium, a by-product material produced primarily from zinc and coal, as a critical metal. Current methods for assessing criticality include frameworks that rely on geopolitical risk metrics, geological reserves, substitutability, and processing limitations during extraction, among others, but there is a gap in understanding the complex supply and demand dynamics involved in the market for by-products. This thesis addressed this gap by assessing the supply risk of germanium using an econometric framework to generate estimates of price elasticities. Annual world production and price data for germanium for the years 1967-2014 were used to construct supply and demand models in order to obtain estimates of the price elasticities of supply and demand. Ordinary least squares (OLS) regression was used on an autoregressive distributed lag (ARDL) model for both supply and demand. The supply model was constructed with price, zinc production, and the 5-year interest rate as shifters, along with lag terms for germanium production, germanium price, the 5-year interest rate, and zinc production. The adjusted R² was 0.761, and the long-term supply price elasticity was found to be 0.05 with an upper bound of 0.7 and a lower bound of -0.6, indicating that germanium supply is price inelastic. In a similar fashion, a demand model was constructed with two structural breaks accounting for fundamental changes in the market structure in 1991 and 2003, along with lag terms for germanium production, germanium price and antimony price. The adjusted R² value for the demand model was 0.683, and the price elasticity was 0.05 with an upper bound of 1 and a lower bound of -1, indicating that demand, too, is price inelastic. This creates an added risk of supply shortages, adding to the criticality of germanium. However, the stabilizing behavior of its carriers, coal and zinc, reduces the likelihood of an actual shortage. This type of analysis improves upon existing methods and can lead to more accurate quantified estimates of long-term criticality.
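The estimation strategy (OLS on an ARDL specification, with the long-run elasticity read off the lag coefficients) can be sketched as follows. The variable names, the ARDL(1,1) order and the synthetic data are placeholders, not the thesis's dataset or exact specification.

```python
import numpy as np

def ols(y, X):
    """Ordinary least squares with an intercept; returns coefficients."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Placeholder series in logs: production q and price p (ARDL(1,1) supply eq.)
rng = np.random.default_rng(2)
T = 48
p = np.cumsum(0.1 * rng.standard_normal(T))
q = np.empty(T)
q[0] = 0.0
for t in range(1, T):
    q[t] = (0.6 * q[t - 1] + 0.02 * p[t] + 0.01 * p[t - 1]
            + 0.05 * rng.standard_normal())

y = q[1:]
X = np.column_stack([q[:-1], p[1:], p[:-1]])   # lagged q, p, lagged p
b0, b_qlag, b_p, b_plag = ols(y, X)
# Long-run elasticity of an ARDL(1,1): (b_p + b_plag) / (1 - b_qlag)
print("long-run price elasticity:", (b_p + b_plag) / (1 - b_qlag))
```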
by Nagisa Tadjfar.
S.B.
APA, Harvard, Vancouver, ISO, and other styles
46

Pinheiro, Neto Joao [Verfasser]. "Criticality and sampling in neural networks / Joao Pinheiro Neto." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2021. http://d-nb.info/1228364605/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Ruoyu. "An Evaluation of Mixed Criticality Metric for Mechatronics Systems." Thesis, KTH, Maskinkonstruktion (Inst.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-201092.

Full text
Abstract:
In this thesis we study mechatronic systems which integrate tasks (applications) with different levels of criticality on a common embedded system. The integration aims to reduce hardware cost (fewer processors and other components) and the weight and volume of the system; the power consumption of the system is also reduced. The integration offers many advantages but also creates new challenges. The main challenge of such system development is how to separate (isolate/protect) tasks with different criticality levels. Separation can be classified into two types: temporal separation and spatial separation. Temporal separation ensures that high-criticality tasks can access resources with higher priority than low-criticality tasks. Spatial separation is about preventing fault propagation from low-criticality tasks to high-criticality tasks. Many techniques that can achieve separation of tasks are studied in the thesis, and they can be grouped into four categories: scheduling, power management, memory protection and communication protection. To select the techniques that can improve the system the most, fault tree analysis and a mixed criticality metric are employed. Fault tree analysis helps us to find causes of hazards that the system has to deal with; we then identify techniques that can solve the problem of task separation. The mixed criticality metric is employed to evaluate these techniques, and the evaluation results help developers to select among them. A self-balancing robot, simulated with SimScape (SimMechanics and SimElectronics) and the TrueTime toolbox, was developed for the experiments. Techniques such as scheduling, power management and communication protection were examined on the platform, and the pros and cons of these techniques were evaluated. Finally, a number of recommendations for engineers regarding techniques for mixed criticality systems, based on our research, are provided. (Source code is shared with the Matlab community.)
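As a toy illustration of temporal separation between criticality levels, the following sketch applies a Vestal-style dual-mode utilisation check: every task receives its LO budget in normal operation, and after a mode switch only HI tasks remain with their HI budgets. The bound and the task parameters are illustrative, and this is not the metric developed in the thesis.

```python
def dual_criticality_feasible(tasks, bound=1.0):
    """Toy feasibility check for a dual-criticality task set.
    Each task is (criticality, c_lo, c_hi, period): in LO mode every task
    runs with its LO budget; after a mode switch only HI tasks remain,
    each with its HI budget (Vestal-style utilisation test)."""
    u_lo = sum(c_lo / period for _, c_lo, _, period in tasks)
    u_hi = sum(c_hi / period
               for crit, _, c_hi, period in tasks if crit == "HI")
    return u_lo <= bound and u_hi <= bound

# Illustrative task set: (criticality, C(LO), C(HI), period) in ms
tasks = [("HI", 2, 5, 20), ("HI", 3, 6, 40), ("LO", 4, 4, 25)]
print(dual_criticality_feasible(tasks))   # -> True
```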
APA, Harvard, Vancouver, ISO, and other styles
48

Hinton, Michael Glenn. "Inter-Core Interference Mitigation in a Mixed Criticality System." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8648.

Full text
Abstract:
In this thesis, we evaluate how well isolation can be achieved between two virtual machines within a mixed criticality system on a multi-core processor. We achieve this isolation with Jailhouse, an open-source, minimalist hypervisor. We then enhance Jailhouse with core throttling, a technique we use to minimize inter-core interference between VMs. We then run workloads with and without core throttling to determine the effect throttling has on interference between a non-real-time VM and a real-time VM. We find that Jailhouse provides excellent isolation between VMs even without throttling, and that core throttling suppresses the remaining inter-core interference to a large extent.
APA, Harvard, Vancouver, ISO, and other styles
49

Klamser, Pascal. "Collective Information Processing and Criticality, Evolution and Limited Attention." Doctoral thesis, Humboldt-Universität zu Berlin, 2021. http://dx.doi.org/10.18452/23099.

Full text
Abstract:
In the first part, I focus on self-organization to criticality (here an order-disorder phase transition) and investigate whether evolution is a possible self-tuning mechanism. Does a simulated cohesive swarm that tries to avoid a pursuing predator tune itself by evolution to the critical point in order to optimize avoidance? It turns out that (i) the best group avoidance occurs at criticality, but (ii) not due to an enhanced response; rather, it arises from structural changes (fundamentally linked to criticality); (iii) the group optimum is not an evolutionarily stable state; in fact, (iv) it is an evolutionary accelerator due to a maximal spatial self-sorting of individuals causing spatial selection. In the second part, I model experimentally observed differences in the collective behavior of fish groups subject to multiple generations of different types of size-dependent selection. The real-world analog of this experimental evolution is recreational fishery (small fish are released, large ones are consumed) and commercial fishing with large net widths (small/young individuals can escape). The results suggest that harvesting the large individuals reduces cohesion and the risk-taking of individuals. I show that both findings can be mechanistically explained by an attention trade-off between social and environmental information. Furthermore, I numerically analyze how differently size-harvested groups perform in natural predation and fishing scenarios. In the last part of the thesis, I quantify collective information processing in the field. The study system is a fish species adapted to sulfidic water conditions, with a collective escape behavior from aerial predators which manifests in repeated collective escape dives. These fish measure about 2 centimeters, but the collective wave spreads across meters in dense shoals at the surface. I find that wave speed increases weakly with polarization, is fastest at an optimal density, and depends on its direction relative to the shoal orientation.
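The order-disorder transition at the heart of this swarm criticality can be illustrated with the standard 2D Vicsek model (alignment within a radius plus angular noise), a generic stand-in rather than the predator-prey simulation used in the thesis; all parameters below are illustrative.

```python
import numpy as np

def vicsek_step(pos, theta, L, r=1.0, eta=0.4, v=0.05, rng=None):
    """One update of the standard 2D Vicsek model: each particle adopts
    the mean heading of neighbours within radius r, plus uniform angular
    noise of width eta, then moves at constant speed v (periodic box L)."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(pos)
    new_theta = np.empty(n)
    for i in range(n):
        d = pos - pos[i]
        d -= L * np.round(d / L)               # minimum-image convention
        nb = (d ** 2).sum(axis=1) < r ** 2     # neighbours (includes self)
        new_theta[i] = (np.arctan2(np.sin(theta[nb]).mean(),
                                   np.cos(theta[nb]).mean())
                        + eta * (rng.random() - 0.5))
    pos = (pos + v * np.column_stack([np.cos(new_theta),
                                      np.sin(new_theta)])) % L
    return pos, new_theta

rng = np.random.default_rng(3)
L, n = 10.0, 300
pos = rng.random((n, 2)) * L
theta = rng.random(n) * 2 * np.pi
for _ in range(200):
    pos, theta = vicsek_step(pos, theta, L, rng=rng)
# Polarisation (order parameter): 1 = fully aligned, ~0 = disordered
print(np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))
```

Sweeping the noise width eta and plotting the polarisation traces out the order-disorder transition; near the transition the model exhibits the large directional fluctuations relevant to the criticality discussed above.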
APA, Harvard, Vancouver, ISO, and other styles
50

Joyce, Peter James. "Experimental investigation of defect criticality in FRP laminate composites." Digital version accessible at http://wwwlib.umi.com/cr/utexas/main, 1999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
