Theses on the topic "Flow hierarchy"

To see other types of publications on this topic, follow this link: Flow hierarchy.

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles.


Consult the 16 best dissertations for your research on the topic "Flow hierarchy".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Lindqvist, Karl, et Karl Gladh. « Risk and cost assessment in supply chain decision making : Developing a tool with analytical hierarchy methodology ». Thesis, Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74516.

Full text
Abstract:
This study aims to describe how a tool can be developed by assessing risk and cost within the supply chain of a company. By interviewing stakeholders of the chosen case company, and later analysing their answers with the help of a thematic analysis, we were able to isolate the risk criteria seen as significant. Quality, people, delivery, cost variation, flexibility and information risk were then used in an AHP model, together with the addition of a cost criterion. By using the AHP methodology, we were able to establish the relation between risk and cost, the different risk criteria and the different product flows considered. The AHP resulted in a matrix which presents the internal relations and which can be used as a tool when choosing between different product flows. The purpose of this tool is to help mitigate some of the uncertainties that can emerge when making decisions within the supply chain. The data used in this study are based on the input of the case company; the general applicability of the matrix has therefore not been tested.
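The pairwise-comparison step of the AHP mentioned in this abstract can be sketched in a few lines of Python. This is a minimal illustration with a hypothetical three-criterion matrix (the criteria names and judgment values are invented, not taken from the thesis); priority weights are approximated by normalised row geometric means, a common stand-in for the principal eigenvector:

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights from a pairwise comparison
    matrix using normalised row geometric means."""
    n = len(matrix)
    geo_means = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical Saaty-scale judgments: judgments[i][j] states how much
# more important criterion i is than criterion j.
criteria = ["quality", "delivery", "cost"]
judgments = [
    [1.0, 3.0, 5.0],   # quality vs (quality, delivery, cost)
    [1/3, 1.0, 2.0],   # delivery
    [1/5, 1/2, 1.0],   # cost
]

weights = ahp_weights(judgments)
for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```

The resulting weights sum to one and rank quality above delivery above cost for these invented judgments; a full AHP would additionally check the consistency ratio of the matrix before trusting the weights.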
2

Vanderklei, Mark Wynyard. « Risk Control in ERP Implementations : The flow-on effect of prior decision making in the control of risks for Project Managers ». Thesis, University of Canterbury. Accounting and Information Systems, 2013. http://hdl.handle.net/10092/9050.

Full text
Abstract:
Enterprise Resource Planning (ERP) systems have been in existence for over two decades, yet businesses are still losing billions of dollars annually in the implementation of software designed to reduce costs and increase profitability. The inability to manage risks is an area that contributes to these losses, specifically due to uncertain outcomes when dealing with an interconnected construct such as risk, and a research gap at the tactical and operational levels between risks and controls. A comparative case study approach encompassing 12 different organisations was adopted to explore emerging patterns at the project implementation level, and from this three contributions emerged. After observing risks behaving in a hierarchical fashion with predictable results, Hierarchy of Risk models representing different implementation stages were constructed. Although these models are still in their formative stages, they may prove useful in furthering our understanding of the close inter-relationship between different risks, where they occur in ERP implementations, and the implications of managerial choice when determining risk prioritisation. A second finding is that no direct linear relationship appears to exist between risks and controls. Rather, this counter-intuitive finding suggests that the relationship is mediated by additional factors, including risk categories, implementation stages, prior control decision making and the hierarchical flow-on effect of impacts arising from identified risks. Finally, by combining the Hierarchy of Risk models and the risk-to-impact-to-control relationship, a method of reverse engineering portfolios of control was discovered. This potentially offers an explanation as to how portfolios of control can be constructed, and why they are essential in ERP implementations.
3

Ahsan, Kazi Badrul. « Lean integrated optimisation model of emergency department for improved patient flow ». Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/212928/1/Kazi_Ahsan_Thesis.pdf.

Full text
Abstract:
This research attempts to improve patient flow in the Emergency Department by focusing on the identification of the key factors that influence patient flow, evaluating those factors, and developing an optimisation model that integrates a lean concept. A mixed-method strategy is utilised to analyse qualitative and quantitative data to identify the key factors that contribute to overcrowding. A comprehensive review of the literature is conducted, and retrospective and observational data have been utilised which were collected from the Emergency Departments of two major hospitals in Brisbane, Australia.
4

Rath, Michael [Verfasser], et Bruno [Akademischer Betreuer] Eckhardt. « Low-dimensional Models for Subcritical Turbulence in Channel Flow - A Model Hierarchy Built on Production, Transfer and Dissipation of Turbulent Kinetic Energy / Michael Rath ; Betreuer : Bruno Eckhardt ». Marburg : Philipps-Universität Marburg, 2018. http://d-nb.info/116415625X/34.

Full text
5

Jenkins, Rhodri. « Renewable liquid transport fuels from microbes and waste resources ». Thesis, University of Bath, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.655722.

Full text
Abstract:
In order to satisfy the global requirement for transport fuel sustainably, renewable liquid biofuels must be developed. Currently, two biofuels dominate the market: bioethanol for spark ignition and biodiesel for compression ignition engines. However, both fuels exhibit technical issues such as low energy density, poor low temperature performance and poor stability. In addition, bioethanol and biodiesel sourced from first generation feedstocks use arable land in competition with food production, and can only meet a fraction of the current demand. To address these issues it is vital that biofuels be developed from truly sustainable sources, such as lignocellulosic waste resources, and possess improved physical properties. To improve and control the physical properties of a fuel for a specific application, one must be able to tailor the products formed in its production process. All studies within this thesis, therefore, have the aim of assessing the fuels produced for their variability in physical properties, or of directing the process considered towards specific fuel molecules. In Chapter 2, spent coffee grounds from a range of geographical locations, bean types and brewing processes were assessed as a potential feedstock for biodiesel production. While the lipid yield was comparable to that of conventional biodiesel sources, the fatty acid profile remained constant irrespective of the coffee source. Despite this lack of variation, the fuel properties varied widely, presumably due to a range of alternative biomolecules present in the lipid. Though coffee biodiesel was produced from a waste product, the fuel properties were found to be akin to palm oil biodiesel, with a high viscosity and pour point. The blend level would therefore be restricted. In Chapter 3 the coffee lipid, as well as a range of microbial oils potentially derived from renewable sources, were transformed into a novel aviation and road transport fuel through cross-metathesis with ethene.
Hoveyda-Grubbs 2nd generation catalyst was found to be the most suitable, achieving 41% terminal bond selectivity under optimum conditions. Metathesis yielded three fractions: an alkene hydrocarbon fraction suitable for aviation, a shorter chain triglyceride fraction that upon transesterification produced a short chain biodiesel fuel, and a multifunctional volatile alkene fraction that could potentially have application in the polymer industry. Though there was variation for the road transport fuel fraction due to the presence of long chain saturates, the compounds fell within the US standard for biodiesel. The aviation fraction lowered the viscosity, increased the energy density, and remained soluble with Jet A-1 down to the required freezing point. Oleaginous organisms generally only produce a maximum of 40% lipid, leaving a large portion of fermentable biomass. In Chapter 4, a variety of ethyl and butyl esters of organic acids, potentially obtainable from fermentation, were assessed for their suitability as fuels in comparison to bioethanol. One product, butyl butyrate, was deemed suitable as a Jet A-1 replacement while four products, diethyl succinate, dibutyl succinate, dibutyl fumarate and dibutyl malonate, were considered as potential blending agents for diesel. Diethyl succinate, being the most economically viable of the four, was chosen for an on-engine test using a 20 vol% blend of DES (DES20) on a chassis dynamometer under pseudo-steady state conditions. DES20 was found to cause an increase in fuel demand and NOx emissions, and a decrease in exhaust temperature, wheel force, and CO emissions. While fermentation is generally directed to one product, producing unimolecular fuels, it does not convert the entirety of the biomass available. An alternative chemical transformation is pyrolysis.
In Chapter 5, zeolite-catalysed fast pyrolysis of a model compound representative of the ketonic portion of biomass pyrolysis vapour – mesityl oxide – was carried out. The aim of this study was to understand the mechanistic changes that occur, which could lead to improved bio-oil yields and more directed fuel properties of the pyrolysis oil. While HZSM-5 and Cu ZSM-5 showed no activity for hydrogenation and little activity for oligomerisation, Pd ZSM-5 led to near-complete selective hydrogenation of mesityl oxide to methyl isobutyl ketone, though this reduced at higher temperatures. At lower temperature (150-250 °C), a small amount of useful oligomerisation was observed, which could potentially lead to a selective pyrolysis oligomerisation reaction pathway.
6

Darmofal, David L. (David Louis). « Hierarchal visualization of three-dimensional vortical flow calculations ». Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/44269.

Full text
7

Lundkvist, Linn, et Zakrisson Lovisa Dahlman. « Återbruk av byggmaterial - En undersökning av framgångsfaktorer och utmaningar ». Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-92511.

Full text
Abstract:
The construction industry generates 12 million tons of waste annually in Sweden alone. According to Swedish law, the waste has to be sorted in order to increase the possibility of recycling and reuse. The construction industry produces 20 percent of Swedish greenhouse gas emissions, and by reusing building materials this number can be reduced. The intention of this bachelor's thesis is to illustrate possibilities for reducing the industry's climate footprint. The thesis focuses on identifying the success factors of prominent projects regarding the reuse of building materials and products at Skanska. The purpose of the thesis is to explore and illuminate the challenges that come with reuse and to identify approaches for increasing work with reused materials. The thesis is based on interviews and literature on the subject of reuse. Five participants from different projects at Skanska where reuse was a part of the process were interviewed. The interviews were limited to five participants because this was considered sufficient to answer the purpose of the study. The study is limited to the reuse of materials and products from buildings; waste from other parts of the industry is not included. This study was written in collaboration with Skanska. The success factors found in this study are that it is important to assign the material inventory phase a good amount of time, to systematically document the materials at hand, and to plan for reuse early in the process. Another success factor is to involve external vendors that can help assign the material to new owners. The challenges that come with reusing building materials and products are the requirements set by authorities and laws. The level of knowledge in the industry about reuse is also inadequate, which complicates the process. Circular construction and a long-term perspective will be important if the building industry is to embrace reuse in the future. One way to initiate this is by involving designers early in the process by providing them with reused materials to plan into their projects. In the design of new buildings, the future reuse of the materials should also be kept in mind. When reuse becomes a part of more building projects, it will result in less waste, advance companies' sustainability work, and most likely reduce the climate footprint of the industry.
8

Cassé, Clement. « Prévision des performances des services Web en environnement Cloud ». Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30268.

Full text
Abstract:
Cloud Computing has changed how software is developed and deployed. Nowadays, Cloud applications are designed as rapidly evolving distributed systems that are hosted in third-party data centres and potentially scattered around the globe. This shift of paradigms has also had a considerable impact on how software is monitored: Cloud applications have grown to the scale of hundreds of services, and state-of-the-art monitoring quickly faced scaling issues. In addition, monitoring tools now also have to address distributed-system failures, such as partial failures, configuration inconsistencies, networking bottlenecks or even noisy neighbours. In this thesis we present an approach based on a new source of telemetry that has been growing in the realm of Cloud application monitoring. By leveraging the recent OpenTelemetry standard, we present a system that converts "distributed tracing" data into a hierarchical property graph. With such a model, it becomes possible to highlight the actual topology of Cloud applications, such as the physical distribution of their workloads across multiple data centres. The goal of this model is to expose the behaviour of Cloud providers to the developers maintaining and optimizing their application. Then, we present how this model can be used to solve some prominent distributed-systems challenges: the detection of inefficient communications and the anticipation of hot points in a network of services. We tackle both of these problems with a graph-theory approach. Inefficient composition of services is detected through computation of the Flow Hierarchy index. A proof of concept is presented, based on a real OpenTelemetry instrumentation of a zonal Kubernetes cluster. In a last part, we address the concern of hot-point detection in a network of services through the perspective of graph centrality analysis. This work is supported by a simulation program that has been instrumented with OpenTelemetry in order to emit tracing data. These traces have been converted into a hierarchical property graph, and a study of centrality algorithms allowed us to identify choke points. Both of the approaches presented in this thesis build on the state of the art in Cloud application monitoring. They propose a new usage of distributed tracing, not only for investigation and debugging but for automatic detection and reaction on a full system.
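The Flow Hierarchy index used in this abstract, i.e. the fraction of edges in a directed graph that do not take part in any cycle, can be sketched in pure Python. The sketch below uses Tarjan's strongly connected components: an edge is cyclic exactly when both of its endpoints fall in the same component. The service names are hypothetical, not from the thesis (libraries such as NetworkX expose the same measure as `nx.flow_hierarchy`):

```python
from collections import defaultdict

def flow_hierarchy(edges):
    """Fraction of edges not participating in any cycle (1.0 = pure DAG)."""
    graph = defaultdict(list)
    nodes = set()
    for u, v in edges:
        graph[u].append(v)
        nodes.update((u, v))

    index, low, comp = {}, {}, {}
    stack, on_stack = [], set()
    counter, comp_id = [0], [0]

    def strongconnect(v):  # Tarjan's SCC algorithm
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:  # v is the root of a component
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp[w] = comp_id[0]
                if w == v:
                    break
            comp_id[0] += 1

    for v in nodes:
        if v not in index:
            strongconnect(v)

    # An edge is cyclic iff its endpoints share a strongly connected
    # component (this also covers self-loops).
    cyclic = sum(1 for u, v in edges if comp[u] == comp[v])
    return 1.0 - cyclic / len(edges)

# Hypothetical service-call edges; orders and billing call each other.
calls = [
    ("gateway", "auth"),
    ("gateway", "orders"),
    ("orders", "billing"),
    ("billing", "orders"),
]
print(flow_hierarchy(calls))  # 2 of the 4 edges are cyclic -> 0.5
```

A value close to 1 indicates a mostly acyclic, well-layered call graph; cyclic back-and-forth calls between services pull the index down, which is how the thesis flags inefficient service compositions.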
9

Li, Hongmei. « Hierarchic modeling and history matching of multi-scale flow barriers in channelized reservoirs / ». May be available electronically:, 2008. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
10

Storey, Richard Goodwin. « Spatial and temporal variability in a hyporheic zone, a hierarchy of controls from water flows to meiofauna ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/NQ63814.pdf.

Full text
11

Thorpe, David Stuart. « A process for the management of physical infrastructure ». Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/36067/7/36067_Digitsed_Thesis.pdf.

Full text
Abstract:
Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore present a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment, are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. 
This process divides the infrastructure management process over time into self-contained modules that are based on a particular set of activities, the information flows between which are defined by the interfaces and relationships between them. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, through using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high-level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two-stage approach that rationalises then weights objectives, using a paired comparison process, ensures that the objectives required to be met are both kept to the minimum number required and are fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, utility functions being proposed where there is risk, or a trade-off situation applies. Variability is considered important in the infrastructure life cycle, the approach used being based on analytical principles but incorporating randomness in variables where required. The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided boundary conditions and requirements for linkages to other modules are met.
Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects, and consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, prioritisation of a number of projects, and identification of the principal issues in the infrastructure life cycle.
12

Perdikis, Dionysios. « Functionnal organization of complex behavioral processes ». Thesis, Aix-Marseille 2, 2011. http://www.theses.fr/2011AIX22050/document.

Full text
Abstract:
Behavioural studies suggest that complex behaviours are multiscale processes, which may be composed of elementary ones (units or primitives). Traditional approaches to cognitive modelling generally employ reductionistic (mostly static) representations and computations of simplistic dynamics. The thesis proposes functional architectures to capture the dynamical structure of both functional units and the composite multiscale behaviours. First, a mathematical formalism of functional units as low-dimensional, structured flows in phase space is introduced (functional modes). Second, additional dynamics (operational signals), which act upon functional modes for complex behaviours to emerge, are classified according to the separation between their characteristic time scale and that of the modes. Then, complexity measures are applied to distinct architectures for a simple composite movement and reveal a trade-off between the complexities of functional modes and operational signals, depending on their time scale separation (in support of the control effectiveness of architectures employing non-trivial modes). Subsequently, an architecture for serial behaviour (along the example of handwriting) is demonstrated, comprising functional modes implementing characters, and operational signals that are much slower (establishing a mode competition and 'binding' modes into sequences) or much faster (acting as meaningful perturbations). All components being coupled, the importance of time scale interactions for behavioural organization is illustrated. Finally, the contributions of modes and signals to the output are recovered, which appears to be possible only through analysis of the output phase flow (i.e., not from trajectories in phase space or time series).
13

Ubomba-Jaswa, Florence Otae. « Coverage of African countries in Pan-African business magazines : evidence of hierarchy in regional news flows ». Diss., 2009. http://hdl.handle.net/10500/4258.

Full text
Abstract:
This dissertation examines the flow of economic news in Africa in order to investigate the potential existence of regional hierarchies in international news flow. The research was based on a framework of theories of international news flow. A quantitative and qualitative content analysis was carried out on a sample of news articles published in Africa Investor, African Business and Business in Africa during 2007 and 2008. The quantitative results showed that South Africa received the highest level of coverage, more than any other African country. The qualitative results indicated clear evidence of regional hierarchy in the coverage of African countries: South Africa received extensive coverage, probably because it is the largest, most advanced and most influential economy on the continent. The study showed that inequality in news coverage is not only a global issue, but also a regional one.
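A hedged sketch of the kind of quantitative content analysis described above, counting how often each country is covered in a sample of articles, might look like this (the article data are invented for illustration and are not taken from the study):

```python
from collections import Counter

# Invented example data: countries mentioned in each sampled article.
article_countries = [
    ["South Africa", "Nigeria"],
    ["South Africa"],
    ["Kenya", "South Africa"],
    ["Ghana"],
]

# Tally coverage per country across the sample and rank by frequency,
# mirroring the study's finding that one economy dominates the coverage.
coverage = Counter(c for article in article_countries for c in article)
ranking = coverage.most_common()
```

A real analysis would of course code many more variables (tone, article length, topic), but the frequency ranking is the core of the quantitative part.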
Communication Science
M.A. (International Communication)
14

Chan, Chih-Hao, et 詹智皓. « The Study of Position in Hierarchy on East Asian Port Cities-The Comparison and analysis from "Flow Space" and "Local Conditions" Aspects ». Thesis, 2003. http://ndltd.ncl.edu.tw/handle/22927513521617009581.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Urban Planning (Master's and Doctoral Program)
91 (Republic of China calendar, i.e. 2002)
The trend of globalization has made the global network of goods flows more competitive, and it has become increasingly important for a port city to identify its exact position and roles in that network. This study explored the regional position and hierarchy of port cities in East Asia from three aspects: "TEU", "space of flow" and "local conditions". We also compared the resulting positions in the hierarchy across these three aspects and analysed how each should be interpreted. Nine port cities in East Asia were selected. The results of the data analysis and the empirical study show that Hong Kong and Singapore remain the most important nodes in the hierarchy; they also possess many competitive advantages in both the "space of flow" and "local conditions" aspects. As for Kaohsiung, its position is lower than Shanghai's but similar to Shenzhen's; Shanghai and Pusan are Kaohsiung's most competitive rivals. Finally, the "space of flow" aspect is the more appropriate concept for measuring the position and importance of a port city, because it better reflects network relations.
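A rough illustration of ranking port cities within a "space of flow" view: treat inter-port flows as a weighted network and order cities by total flow strength. The flow weights below are invented placeholders, not data from the study:

```python
# Invented placeholder weights for cargo flows between port-city pairs.
flows = {
    ("Hong Kong", "Singapore"): 9,
    ("Hong Kong", "Kaohsiung"): 6,
    ("Singapore", "Shanghai"): 7,
    ("Shanghai", "Pusan"): 4,
    ("Kaohsiung", "Shenzhen"): 3,
}

# Node strength (sum of incident flow weights) as a crude measure of a
# port's position in the hierarchy of the flow network.
strength = {}
for (a, b), w in flows.items():
    strength[a] = strength.get(a, 0) + w
    strength[b] = strength.get(b, 0) + w

hierarchy = sorted(strength, key=strength.get, reverse=True)
```

More refined network measures (betweenness, eigenvector centrality) would capture "network relations" more fully, but node strength already separates hub ports from peripheral ones.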
15

Sznajder, Paweł. « Stan stacjonarny sedymentującej zawiesiny przy małej liczbie Reynoldsa i dużej liczbie Pecleta ». Doctoral thesis, 2019. https://depotuw.ceon.pl/handle/item/3554.

Full text
Abstract:
The thesis presents the theoretical foundations needed to construct a full statistical description of the homogeneous stationary state of a sedimenting suspension in the limit of vanishing Reynolds number and infinite Peclet number. Exact expressions for the velocity fluctuations are derived, and all potentially divergent terms are then classified with the aid of diagrams. For each class, conditions necessary for the convergence of the resulting expressions are formulated. It is shown that these conditions do not guarantee short range for the correlation functions determined from the BBGKY hierarchy derived by Cichocki and Sadlej. The solutions proposed by Batchelor and by Cichocki and Sadlej are then analysed: an internal contradiction in Batchelor's procedure and the limited applicability of Cichocki and Sadlej's reasoning are pointed out. These analyses made it possible to formulate a new hierarchy that contains no long-range terms. It is based on the mechanism of cancelling the linear flow field proposed by Cichocki and Sadlej. Solutions of this hierarchy must be integrable and isotropic. The new scheme solves the problem of the sedimentation coefficient for polydisperse suspensions (Batchelor's problem). In the proposed scheme the fluctuations remain divergent in the thermodynamic limit, which suggests the existence of an instability at large distances. These conclusions agree with the results of Ladd and of Abade, who simulated sedimentation in a system with periodic boundary conditions.
The thesis presents the theoretical basis needed to construct a full statistical description of the stationary state of a sedimenting suspension in the limit of vanishing Reynolds number and infinite Peclet number. Rigorous expressions for the velocity fluctuations are derived and all potentially divergent terms are classified. For each class, conditions required for the convergence of those expressions are formulated. It is shown that these conditions are not sufficient to ensure short range of the correlation functions governed by the BBGKY hierarchy derived by Cichocki and Sadlej. The solutions proposed by Batchelor and by Cichocki and Sadlej are then analysed: a contradiction is shown in Batchelor's scheme, and the limitations of Cichocki and Sadlej's solution are discussed. These results allowed the formulation of a new hierarchy that does not contain long-range terms. It uses the mechanism of response to a linear flow described by Cichocki and Sadlej. Solutions of the new hierarchy must be isotropic and integrable. The new scheme solves the problem of calculating the sedimentation coefficient for polydisperse suspensions (Batchelor's problem). In the proposed approach the fluctuations remain divergent in the thermodynamic limit. The derived results are in agreement with simulations of a sedimenting suspension under periodic boundary conditions performed independently by Ladd and Abade.
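For context on "Batchelor's problem" mentioned above: Batchelor's classical dilute-limit result for the mean settling velocity of a monodisperse suspension of hard spheres, the one-species case that the polydisperse sedimentation-coefficient problem generalizes, reads

```latex
\frac{\langle U \rangle}{U_0} = 1 - 6.55\,\phi + O(\phi^2),
```

where $U_0$ is the Stokes velocity of an isolated sphere and $\phi$ the particle volume fraction. The divergences discussed in the abstract arise because naive extensions of this calculation involve conditionally convergent long-range hydrodynamic integrals.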
16

Κρητικάκου, Αγγελική. « Development of methodologies for memory management and design space exploration of SW/HW computer architectures for designing embedded systems ». Thesis, 2013. http://hdl.handle.net/10889/7503.

Full text
Abstract:
This PhD dissertation proposes innovative methodologies to support the design and mapping process of embedded systems. Owing to increasing requirements, embedded systems have become quite complex, as they consist of several partially dependent heterogeneous components. Systematic Design Space Exploration (DSE) methodologies are required to support near-optimal design of embedded systems within the available short time-to-market. In this target domain, existing DSE approaches either require too much exploration time to find near-optimal designs, due to the high number of parameters and the correlations between them, or end up with a less efficient trade-off result in order to find a design within acceptable time. This dissertation presents an alternative DSE methodology based on the systematic creation of scalable and near-optimal DSE frameworks. The frameworks describe all the available options of the exploration space in a finite set of classes. A set of principles is presented and used in the reusable DSE methodology to create a scalable and near-optimal framework, and to use it efficiently to derive scalable and near-optimal design solutions within a Pareto trade-off space. The reusable DSE methodology is applied to several stages of the embedded-system design flow to derive scalable and near-optimal methodologies. The first part of the dissertation is dedicated to the development of mapping methodologies for storing large embedded-system data arrays in the lower layers of the on-chip background data-memory hierarchy, and the second part to DSE methodologies for the processing part of SW/HW architectures in embedded systems, including the foreground memory systems. Existing mapping approaches for the background memory part are enumerative, symbolic/polyhedral, or worst-case (heuristic) approximations.
The enumerative approaches require too much exploration time, the worst-case approximations lead to overestimation of the storage requirements, whereas the symbolic/polytope approaches are scalable and near-optimal only for solid and regular iteration spaces. By applying the new reusable DSE methodology, we have developed an intra-signal in-place optimization methodology that is scalable and near-optimal for highly irregular access schemes. Scalable and near-optimal solutions have been developed for the different cases of the proposed methodology, covering both non-overlapping and overlapping store and load access schemes. To support the proposed methodology, a new representation of the array access schemes is presented, which expresses irregular shapes in a scalable and near-optimal way. A general pattern formulation is proposed that describes the access scheme in a compact and repetitive way, and pattern operations were developed to combine patterns in a scalable and near-optimal way under all the potential pattern-combination cases that may exist in the application under study. In the processing-oriented part of the dissertation, a DSE methodology is developed for mapping an instance of a predefined target application domain onto a partially fixed architecture platform template, which consists of one processor core and several custom hardware accelerators. The DSE methodology consists of unidirectional steps, which are implemented through parametric templates and applied without costly design iterations. The proposed DSE methodology explores the space by instantiating the steps and propagating design constraints that prune design options following the step ordering. The result is a final Pareto trade-off curve with the most relevant near-optimal designs.
As scheduling and assignment are the major tasks of both the foreground memory and the datapath, near-optimal and scalable techniques are required to support the parametric templates of the proposed DSE methodology. A framework is developed that describes the scheduling and assignment of scalars into the registers and of operations onto the functional units of the datapath. Based on this framework, a systematic methodology is developed to arrive at parametric templates for scheduling and assignment techniques that satisfy the target-domain constraints. In this way, a scalable parametric template for scheduling and assignment tasks is created, which guarantees near-optimality for the domain under study. The developed template can be used in the foreground memory management and datapath mapping steps of the overall design flow. For the DSE of the domain under study, near-optimal results are hence achieved through a truly scalable technique.
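The "final Pareto trade-off curve" retained by such a methodology can be sketched generically as non-dominated filtering over candidate design points. The (cost, delay) pairs below are hypothetical, purely for illustration:

```python
def pareto_front(designs):
    """Return the non-dominated designs; each design is a (cost, delay)
    tuple and lower is better on both objectives."""
    return [d for d in designs
            if not any(o[0] <= d[0] and o[1] <= d[1] and o != d
                       for o in designs)]

# Hypothetical (cost, delay) points for candidate designs.
designs = [(2, 9), (3, 7), (4, 8), (5, 4), (7, 3), (8, 5)]
front = pareto_front(designs)
```

The dissertation's contribution lies in reaching such a front without enumerating the full space; the filter above only shows what "Pareto trade-off curve" means operationally.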
This doctoral dissertation proposes innovative methodologies for the design and mapping process of embedded systems. Owing to growing requirements, embedded systems have become quite complex, as they consist of many partially dependent heterogeneous components. Systematic Design Space Exploration (DSE) methodologies are required to achieve near-optimal designs of embedded systems within the available time. Existing DSE methodologies either require very long exploration times to find near-optimal designs, owing to the large number of parameters and the correlations between them, or end up with a less optimal design in order to find one within the available time. This dissertation presents an alternative DSE methodology based on the systematic creation of scalable and near-optimal DSE frameworks. The frameworks describe all the available options of the exploration space with a finite set of classes. A set of principles is used in the reusable DSE methodology to create a scalable and near-optimal DSE framework and to use it effectively to derive scalable and near-optimal design solutions within a Pareto trade-off solution space. The DSE methodology is applied at several stages of the embedded-system design flow to derive scalable and near-optimal methodologies. The first part of the dissertation is devoted to the development of mapping methods for storing the large arrays used in embedded systems in the lower layers of the on-chip background memory hierarchy. The second part is devoted to DSE methodologies for the processing part of software/hardware architectures in embedded systems, including foreground memory systems. Existing mapping methodologies for the background memory are enumerative, symbolic/polyhedral, or worst-case approximations. The enumerative ones require very long exploration times, the worst-case approximations lead to overestimation of the storage requirements, while the symbolic ones are scalable and near-optimal only for regular iteration spaces. By applying the proposed DSE methodology, a scalable and near-optimal methodology was developed for determining the storage size of an array's data for both irregular and regular iteration spaces. A new representation of memory accesses was proposed, which expresses irregular shapes in the iteration space in a scalable and near-optimal way. In the second part of the dissertation, a DSE methodology was developed for mapping a predefined application domain onto a partially fixed architecture platform, which consists of a processor core and several co-processors. The DSE methodology consists of unidirectional steps, which are implemented through parametric frameworks and applied while avoiding costly design iterations. The proposed DSE methodology explores the space by instantiating each step and propagating the decisions between steps, thereby pruning the design options in subsequent steps. The result is a Pareto curve. A DSE framework was proposed that describes the scheduling and resource-assignment techniques for the registers and execution units of the system. A methodology was proposed for creating near-optimal and scalable parametric templates for scheduling and resource assignment that satisfy the constraints of an application domain.
