Dissertations / Theses on the topic 'Core'

Consult the top 50 dissertations / theses for your research on the topic 'Core.'

1

Serpa, Matheus da Silva. "Source code optimizations to reduce multi core and many core performance bottlenecks." Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/183139.

Abstract:
Nowadays, there are several different architectures available not only to industry but also to final consumers. Traditional multi-core processors, GPUs, accelerators such as the Xeon Phi, or even energy-efficiency-driven processors such as the ARM family present very different architectural characteristics. This wide range of characteristics presents a challenge for application developers, who must deal with different instruction sets, memory hierarchies, or even different programming paradigms when programming for these architectures. To optimize an application, it is important to have a deep understanding of how it behaves on different architectures. Related work offers a wide variety of solutions. Most of them focused on improving memory performance only. Others focus on load balancing, vectorization, and thread and data mapping, but perform them separately, losing optimization opportunities. In this master's thesis, we propose several optimization techniques to improve the performance of a real-world seismic exploration application provided by Petrobras, a multinational corporation in the petroleum industry. In our experiments, we show that loop interchange is a useful technique to improve the performance of different cache memory levels, improving performance by up to 5.3x and 3.9x on the Intel Broadwell and Intel Knights Landing architectures, respectively. By changing the code to enable vectorization, performance was increased by up to 1.4x and 6.5x. Load balancing improved performance by up to 1.1x on Knights Landing. Thread and data mapping techniques were also evaluated, with a performance improvement of up to 1.6x and 4.4x. We also compared the best version on each architecture and showed that we were able to improve the performance of Broadwell by 22.7x and Knights Landing by 56.7x compared to a naive version; in the end, however, Broadwell was 1.2x faster than Knights Landing.
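As a minimal illustration of the loop-interchange technique this abstract highlights (a generic C sketch, not code from the thesis): in a row-major language, moving the unit-stride index to the innermost loop turns strided cache accesses into sequential ones.

```c
#define N 1024
double a[N][N];

/* Before interchange: the inner loop walks a column, so consecutive
   iterations touch addresses N doubles apart and miss the cache often. */
void scale_strided(double s) {
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            a[i][j] *= s;
}

/* After interchange: the inner loop walks a row with stride 1, so each
   cache line fetched from memory is fully used before the next one. */
void scale_interchanged(double s) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] *= s;
}
```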
2

Zainuddin, Nurjuanis Zara. "In-core optimisation of thorium-plutonium-fuelled PWR cores." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709465.

3

Sakaida, Akira. "Effects of core material on losses in transformer cores." Thesis, Cardiff University, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375128.

4

Kwok, Tai-on Tyrone (郭泰安). "Multi-core design and resource allocation: from big core to ultra-tiny core." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B40987814.

5

Kwok, Tai-on Tyrone. "Multi-core design and resource allocation: from big core to ultra-tiny core." Click to view the E-thesis via HKUTO, 2008. http://sunzi.lib.hku.hk/hkuto/record/B40987814.

6

Chen, Stephen Yi-Chih. "Core capabilities and core rigidities in the multimedia industry." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264906.

7

Bendiuga, Volodymyr. "Multi-Core Pattern." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-16484.

8

Abdel-Khalik, Hany Samy. "Adaptive Core Simulation." NCSU, 2004. http://www.lib.ncsu.edu/theses/available/etd-10252004-094938/.

Abstract:
The work presented in this thesis is a continuation of a master's thesis research project conducted by the author to gain insight into the applicability of inverse methods to developing adaptive simulation capabilities for core physics problems. Use of adaptive simulation is intended to improve the fidelity and robustness of predictions of important core attributes such as core power distribution, thermal margins and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables to adapt the simulation in a meaningful way that is reflected in higher fidelity and robustness of the adapted core simulator models. We propose an inverse theory approach in which the multitudes of input data to core simulators, i.e. reactor physics and thermal-hydraulic data, are to be adjusted to improve agreement with measured observables while keeping the core simulator models themselves unadapted. At first glance, devising such adaption for typical core simulator models would render the approach impractical, since core simulators are based on very demanding computational models, i.e. complex physics models with millions of input data and output observables. This would spawn not only several prohibitive challenges but also numerous disparaging concerns. The challenges include the computational burdens of the sensitivity-type calculations required to construct Jacobian operators for the core simulator models. Also, the computational burdens of the uncertainty-type calculations required to estimate the uncertainty information of core simulator input data present a demanding challenge. The concerns, however, are mainly related to the reliability of the adjusted input data. We demonstrate that the power of our proposed approach is mainly driven by taking advantage of this unfavorable situation, and show that significant reductions in both computational and storage burdens can be attained for a typical BWR core simulator adaption problem without compromising the quality of the adaption.
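As a generic illustration of the adaption described here (a standard regularised inverse-problem formulation, not necessarily the exact objective used in the thesis), the adjusted input data x are chosen to better reproduce the measured observables y while staying close to the nominal data x0:

```latex
\min_{\mathbf{x}}\;
\left(\mathbf{y}-\mathbf{J}\mathbf{x}\right)^{\mathsf T}\mathbf{C}_y^{-1}\left(\mathbf{y}-\mathbf{J}\mathbf{x}\right)
+\left(\mathbf{x}-\mathbf{x}_0\right)^{\mathsf T}\mathbf{C}_x^{-1}\left(\mathbf{x}-\mathbf{x}_0\right)
```

where J is the Jacobian of the simulator's observables with respect to its input data, and C_y, C_x are the measurement and input-data uncertainty (covariance) matrices whose construction the abstract identifies as a major computational burden.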
9

Khan, Ahmad Salman, and Mira Kajko-Mattsson. "Core Handover Problems." KTH, Programvaru- och datorsystem, SCS, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-90212.

Abstract:
Even if a handover process is a critical stage in the software lifecycle, little is known about the problems encountered when transferring a software system from development to maintenance. In this paper, we have elicited five core handover problems as faced by five IT organizations today. These are (1) insufficient system knowledge, (2) lack of domain knowledge, (3) insufficient communication, (4) inadequate documentation, and (5) difficulties in tracking changes.
10

Smith, Lindsey C. "Formalising CORE requirements." Thesis, Cranfield University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.331990.

11

Lizana, Ricardo, and Verónica Toro. "Geriatry Home Core." Tesis, Universidad de Chile, 2015. http://repositorio.uchile.cl/handle/2250/136518.

Abstract:
Thesis for the degree of Magíster en Administración (Master in Management).
The authors do not authorize full-text access to their document.
The increase in life expectancy has made the care and supervision of older adults one of the main tasks a family must manage, since older adults sometimes require special care and tend to become progressively more demanding with age. Children and close relatives are often unprepared, or lack the capacity and knowledge, to care for them properly, so additional support is needed to ensure the proper care and well-being of older adults and their families. For this reason we decided to create Geriatry Home Care, a company specialising in in-home elderly care, offering families security, trust and professionalism in the care of the patient, who is attended by a team of professionals specialised in elderly care. Our value proposition is to offer the community a comprehensive home-care service for the elderly, with the highest standards of quality and warmth, providing permanent assistance to our patients and their families in their own homes. Our customers are relatives of older adults in the ABC1 socioeconomic group who require assistance in caring for an elderly member of their household at home; our users are the older adults whose relatives belong to this target group. Older adults represent 16.7% of the population of Chile and their number is growing at 3.5% per year; it is expected that by 2015, 20% of the population will be over 60 years old. The 10% of older adults in Chile who belong to the highest socioeconomic groups have the greatest purchasing power and the greatest willingness to pay for high-quality products and services. Although some companies currently offer home care, they are essentially general home-hospitalisation companies that have only recently been developing the elderly-care business. There is also an informal, unorganised supply of care by auxiliary staff, spread by word-of-mouth recommendation among users; these services are in high demand, even scarce, but lack professional supervision of the service delivered. As a result, a significant portion of the market is unserved and another portion could receive a better service offering. Our offering is a comprehensive in-home elderly-care service, including stimulation, recreation, feeding, medication, cleaning and patient comfort, providing quality of life to the patient and their environment. The services offered will include: care of non-critical patients (self-sufficient patients who do not require home hospitalisation) and care of critical patients (dependent patients who do require home hospitalisation). We want to "be the leading providers of in-home elderly care in the Santiago metropolitan region, distinguished by excellence in the service we provide, a vocation for service, and operational solidity". The initial investment for the project is $170,000,000. The financial evaluation shows this to be a profitable and attractive business, given the industry conditions, the low market penetration, the low level of risk and the estimated future cash flows.
It is also important to consider the company's cost structure: being a service company makes the business especially attractive, and adjusting its size is not complex because the cost structure is flexible. Operating leverage is low; an increase in sales produces a proportionally larger variation in profits and vice versa, so the company carries a lower level of operational risk, since covering its fixed costs is not complex.
12

Wilson, Jacqueline Anne. "Core design aspects." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/core-design-aspects(2b99527b-6153-45c0-895b-3ebb43207557).html.

Abstract:
This statement gives an overall summary of the aims and achievements of the research work and scholarship carried out by the author during her time at The University of Manchester (and UMIST, now part of The University of Manchester), for which the publications presented give evidence. The research has been about exploring the design process, the activities, issues and elements involved, from both an industry and a student point of view. The publications explore design pedagogy, the skills required by designers and how these might fit into a curriculum for design today. In three parts it summarises the publications presented, reviews the main aspects of design and the current state of knowledge and research in design, and summarises the core aspects as distilled from over 36 years of practice, research and scholarship. The driver for much of the research undertaken has been to gain a better understanding of the core aspects of design: what key knowledge and skills are required by designers to allow the consistent design of better products and services which enhance the experiences of users. The work presented investigates design and design methods: the activities and processes and the elements involved. It considers responses to designs, the emotional aspect of design: why some designs are preferred over others, why some colour combinations are more desirable, and why repetition is so important to the human psyche. Underpinning the work presented are three research questions.
• Are design rules and processes generic for whatever is being designed?
• Can a better understanding of design theory and the emotional response to designs ensure a more effective process and thus lead to stronger designs?
• Can students be educated to be better design thinkers and ultimately better designers?
It concludes that:
• 'design' is a process;
• design is a problem-solving process and problem-solving is a design process;
• for the most effective outcomes a creative and structured approach is required;
• this process is based on generic rules and principles which are applicable across all discipline areas;
• collaborative/cross-disciplinary elements reinforce the concept that there are processes involved that are not unique to individuals or discipline-specific;
• a greater understanding of the process is of benefit to all individuals and organisations;
• any design/problem-solving activity will normally result in more than one solution option.
The results of the research have informed the author's teaching practice and have been disseminated through publications to benefit the wider education arena. The work presented aims to inform students and design education practitioners.
13

Chadima, Antonín. "Core banking systémy." Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-358802.

Abstract:
This diploma thesis deals with core banking systems. The main objective is to analyze the implementation of SEPA payments into the payment module. The theoretical part defines the concept of core banking systems and their history, and compares conventional approaches to core banking systems with Islamic ones. It also includes chapters on implementation approaches, the most common implementation challenges, and the architecture of core banking systems. The next part of the thesis covers the basic modules of core banking systems. The practical part analyzes the requirements placed on core banking systems, especially those that are mandatory from a legislative perspective: SEPA payments, PSD2, and instant payments. Gap analysis is used as the main method, with SEPA payment implementation chosen as the requirement to analyze. Two solutions are possible: customization of the current payment module, or implementation of a payment hub. The conclusion of the thesis identifies the best solution for each of two types of banking institutions. The main contribution of the thesis is the recommended solution for two different types of banks; in addition, the conclusions reached in this thesis can be applied to further requirements such as PSD2 and the introduction of instant payments.
14

Kanellou, Eleni. "Data structures for current multi-core and future many-core architectures." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S171/document.

Abstract:
Though a majority of current processor architectures relies on shared, cache-coherent memory, current prototypes that integrate large amounts of cores, connected through a message-passing substrate, indicate that architectures of the near future may have these characteristics. Either of those tendencies requires that processes execute in parallel, making concurrent programming a necessary tool. The inherent difficulty of reasoning about concurrency, however, may make the new processor architectures hard to program. In order to deal with issues such as this, we explore approaches for providing ease of programmability. We propose WFR-TM, an approach based on transactional memory (TM), which is a concurrent programming paradigm that employs transactions in order to synchronize the access to shared data. A transaction may either commit, making its updates visible, or abort, discarding its updates. WFR-TM combines desirable characteristics of pessimistic and optimistic TM. In a pessimistic TM, no transaction ever aborts; however, in order to achieve that, existing TM algorithms employ locks in order to execute update transactions sequentially, decreasing the degree of achieved parallelism. Optimistic TMs execute all transactions concurrently but commit them only if they have encountered no conflict during their execution. WFR-TM provides read-only transactions that are wait-free, without ever executing expensive synchronization operations (like CAS, LL/SC, etc), or sacrificing the parallelism between update transactions. We further present Dense, a concurrent graph implementation. Graphs are versatile data structures that allow the implementation of a variety of applications. However, multi-process applications that rely on graphs still largely use a sequential implementation. We introduce an innovative concurrent graph model that provides addition and removal of any edge of the graph, as well as atomic traversals of a part (or the entirety) of the graph. Dense achieves wait-freedom by relying on light-weight helping and provides the inbuilt capability of performing a partial snapshot on a dynamically determined subset of the graph. We finally aim at predicted future architectures. In the interest of code reuse and of a common paradigm, there is recent momentum towards porting software runtime environments, originally intended for shared-memory settings, onto non-cache-coherent machines. JVM, the runtime environment of the high-productivity language Java, is a notable example. Concurrent data structure implementations are important components of the libraries that environments like these incorporate. With the goal of contributing to this effort, we study general techniques for implementing distributed data structures assuming they have to run on many-core architectures that offer either partially cache-coherent memory or no cache coherence at all, and we present implementations of stacks, queues, and lists.
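To make the optimistic read-validate idea concrete, here is a minimal single-writer sketch in C11 (illustrative only, not WFR-TM itself; the abstract's point is precisely that WFR-TM's read-only transactions are wait-free and never need the retry loop shown here):

```c
#include <stdatomic.h>

typedef struct {
    atomic_uint version;   /* even: stable; odd: update in progress */
    int x, y;              /* shared data, read atomically as a pair */
} versioned_pair;

/* Single writer assumed: bump to odd, update, bump back to even. */
void write_pair(versioned_pair *p, int x, int y) {
    unsigned v = atomic_load(&p->version);
    atomic_store(&p->version, v + 1);
    p->x = x;
    p->y = y;
    atomic_store(&p->version, v + 2);
}

/* Optimistic reader: retries whenever a write overlapped the read,
   so it is obstruction-free but not wait-free. */
void read_pair(versioned_pair *p, int *x, int *y) {
    unsigned before, after;
    do {
        before = atomic_load(&p->version);
        *x = p->x;
        *y = p->y;
        after = atomic_load(&p->version);
    } while ((before & 1u) || before != after);
}
```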
15

Lyons, Reneé C., and Deborah Parrott. "To the Core: Multicultural Literature, Differentiated Instruction, and the Common Core." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/etsu-works/2386.

16

Grosic, Hasan, and Emir Hasanovic. "Optimizing Inter-core Data-propagation Delays in Multi-core Embedded Systems." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44770.

Abstract:
The demand for computing power and performance in real-time embedded systems is continuously increasing, since new customer requirements and more advanced features are appearing every day. To support these functionalities and handle them more efficiently, multi-core computing platforms are introduced. These platforms allow for parallel execution of tasks on multiple cores, which, in addition to benefiting the system's performance, introduces a major problem regarding the timing predictability of the system. That problem is reflected in unpredictable inter-core interferences, which occur due to resources shared among the cores, such as the system bus. This thesis investigates the application of different optimization techniques to the offline scheduling of tasks on the individual cores, together with a global scheduling policy for access to the shared bus. The main effort of this thesis focuses on optimizing the inter-core data-propagation delays, which can provide a new way of creating optimized schedules. For that purpose, Constraint Programming optimization techniques are employed and a Phased Execution Model of the tasks is assumed. Also, in order to enforce the end-to-end timing constraints that are imposed on the system, job-level dependencies are generated beforehand and subsequently applied during the scheduling procedure. Finally, an experiment with a large number of test cases is conducted to evaluate the performance of the implemented scheduling approach. The obtained results show that the method is applicable to a wide spectrum of abstract systems with variable requirements, but also open to further improvement in several respects.
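As a toy illustration of the quantity being optimized (hypothetical numbers, not data from the thesis): once each core's offline schedule fixes job start times, the inter-core data-propagation delay of a producer/consumer pair is simply the gap between the producer's completion and the consumer's start.

```c
#include <stdio.h>

typedef struct { int start, wcet; } job;   /* times in abstract units */

/* Delay between the producer's write (end of execution) and the
   consumer's read (start of execution); negative means the job-level
   dependency is violated by the schedule. */
int propagation_delay(job producer, job consumer) {
    return consumer.start - (producer.start + producer.wcet);
}

int main(void) {
    job prod = { .start = 0,  .wcet = 4 };  /* scheduled on core 0 */
    job cons = { .start = 10, .wcet = 3 };  /* scheduled on core 1 */
    int d = propagation_delay(prod, cons);
    if (d < 0)
        printf("schedule violates the job-level dependency\n");
    else
        printf("inter-core data-propagation delay: %d time units\n", d);
    return 0;
}
```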
17

Nobile, Marco. "Piattaforme per Internet of Things: Windows IoT Core come caso di studio." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9216/.

Abstract:
This thesis aims to explore some aspects of one of the fastest-growing areas of computing in recent years (and the years to come), the Internet of Things, with a particular eye on the development platforms available in this field. With these premises, the opportunity is taken to delve into the platform built and released a few months ago by one of the giants of the IT market: Microsoft. The first chapter treats the Internet of Things in general terms, through an initial overview followed by an in-depth analysis of the main protocols developed for this technology. The second chapter lists a series of open source platforms available today for the development of IoT systems. From the third chapter onwards, attention is focused on Microsoft technologies: first Windows 10 in general, including UWP Applications; then, in the same chapter, the focus shifts to Windows IoT Core, exploring it in detail (Windows Remote Arduino, headed/headless modes, etc.). The following chapter concerns the project part of the thesis, covering the development of the Smart Parking project through all its phases (from requirements to implementation and testing). The fifth (and final) chapter presents the conclusions regarding Windows IoT Core and its advantages and disadvantages.
18

Li, Pei. "Unified system of code transformation and execution for heterogeneous multi-core architectures." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0441/document.

Abstract:
Heterogeneous architectures have been widely used in the domain of high performance computing. However, developing applications on heterogeneous architectures is time consuming and error-prone, because going from a single accelerator to multiple ones requires dealing with potentially non-uniform domain decomposition, inter-accelerator data movements, and dynamic load balancing. The aim of this thesis is to propose a parallel programming solution for novice developers that eases the complex coding process and guarantees the quality of code. We highlighted and analysed the shortcomings of existing solutions and proposed a new programming tool called STEPOCL, along with a new domain specific language designed to simplify the development of applications for heterogeneous architectures. We evaluated both the performance and the usefulness of STEPOCL. The results show that: (i) the performance of an application written with STEPOCL scales linearly with the number of accelerators, (ii) the performance of an application written using STEPOCL competes with a handwritten version, (iii) workloads that are too large for the memory of a single device can be run on multiple devices, (iv) thanks to STEPOCL, the number of lines of code required to write an application for multiple accelerators is roughly divided by ten.
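A minimal sketch of the kind of 1-D domain decomposition such a tool automates (illustrative C with hypothetical helper names; STEPOCL itself generates OpenCL code and may decompose non-uniformly to balance load):

```c
#include <stddef.h>

/* Split n work items across `devices` accelerators as evenly as possible:
   device d processes the index range [chunk_offset(d), chunk_offset(d+1)).
   Chunk sizes differ by at most one item. */
size_t chunk_offset(size_t n, int devices, int d) {
    return (n * (size_t)d) / (size_t)devices;
}

size_t chunk_size(size_t n, int devices, int d) {
    return chunk_offset(n, devices, d + 1) - chunk_offset(n, devices, d);
}
```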
19

Bush, Isabelle. "NMR studies of enhanced oil recovery core floods and core analysis protocols." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/290145.

Abstract:
With conventional oil reserves in decline, energy companies are increasingly turning to enhanced oil recovery (EOR) processes to extend the productive life of oilfield wells. Laboratory-scale core floods, in which one fluid displaces another from the pore space of a rock core, are widely used in petroleum research for oilfield evaluation and screening EOR processes. Achieving both macro- and pore-scale understandings of such fluid displacement processes is central to being able to optimise EOR strategies. Many of the mechanisms at play, however, are still poorly understood. In this thesis nuclear magnetic resonance (NMR) has been used for quantitatively, non-invasively and dynamically studying laboratory core floods at reservoir-representative conditions. Spatially-resolved relaxation time measurements (L-T1-T2) have been applied to studying a special core analysis laboratory (SCAL) protocol, used for simulating reservoir oil saturations following initial oil migration (primary drainage) and characterising core samples (capillary pressure curves). Axial heterogeneities in pore filling processes were revealed. It was demonstrated that upon approaching irreducible water saturation, brine saturation was reduced to a continuous water-wetting film throughout the pore space; further hydrocarbon injection resulted in pore pressure rise and wetting film thinning. L-T1-T2 techniques were also applied to a xanthan gum polymer-EOR flood in a sandstone core, providing a continuous measurement of core saturation and pore filling behaviours. A total recovery of 56.1% of the original oil in place (OOIP) was achieved, of which 4.9% was from xanthan. It was demonstrated that deposition of xanthan debris in small pores resulted in small-pore blocking, diverting brine to larger pores, enabling greater oil displacement therein. L-T1-T2, spectral and pulsed field gradient (PFG) approaches were applied to a hydrolysed polyacrylamide (HPAM)-EOR flood in a sandstone core. A total recovery of 62.4% of OOIP was achieved, of which 4.3% was from HPAM. Continued brine injection following conventional recovery (waterflooding) and EOR procedures demonstrated that most of the moveable fluid saturation pertained to brine, with a small fraction to hydrocarbon. Increases in residual oil ganglia size were demonstrated following HPAM-EOR, suggesting HPAM encourages ganglia coalescence, supporting the "oil thread/column stabilisation" mechanism proposed in the literature. NMR relaxometry techniques used for assessing surface interaction strengths (T1/T2) were benchmarked against an industry-standard SCAL wettability measurement (Amott-Harvey) on a water-wet sandstone at magnetic field strengths comparable to reservoir well-logging tools (WLTs). At 2 MHz, T1/T2 was demonstrated to be weakly sensitive to the core wettability, although it yielded wettability information at premature stages of the Amott-Harvey cycle. This suggests the potential for NMR to deliver faster wettability measurements, in SCAL applications or downhole WLT in situ reservoir characterisation.
20

Vidović, Tin, and Lamija Hasanagić. "TIGHTER INTER-CORE DELAYS IN MULTI-CORE EMBEDDED SYSTEMS UNDER PARTITIONED SCHEDULING." Thesis, Mälardalens högskola, Inbyggda system, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48575.

Abstract:
There exists an increasing demand for computing power and performance in real-time embedded systems, as new, more complex customer requirements and functionalities are appearing every day. In order to support these requirements and functionalities without breaking the power consumption wall, many embedded systems are switching from traditional single-core hardware architectures to multi-core architectures. Multi-core architectures allow for parallel execution of tasks on the multiple cores. This introduces many benefits from the perspective of achievable performance, but in turn introduces major issues when it comes to the timing predictability of the real-time embedded system applications deployed on them. The problem arises from unpredictable and potentially unbounded inter-core interferences, which occur as a result of contention for the shared resources, such as the shared system bus or shared system memory. This thesis studies the possible application of constraint programming as a resource optimization technique for the purpose of creating offline schedules for tasks in real-time embedded system applications executing on a dual-core architecture. The main focus is placed on tightening inter-core data-propagation interferences, which can result in lower overall data-propagation delays. A prototype of an optimization engine, employing constraint programming techniques on applications comprised of tasks structured according to the Phased Execution Model, is developed. The prototype is evaluated through several experiments on a large number of industry-inspired, intellectual-property-free benchmarks. Alongside the experiments a case study is conducted on an example engine-control application, and the resulting schedule is compared to a schedule generated by the Rubus-ICE industrial tool suite. The obtained results show that the proposed method is applicable to a potentially wide range of abstract systems with different requirements. The limitations of the method are also discussed and potential future work is debated based on these results.

Presentation was held over Zoom due to the COVID-19 situation.

21

Yan, Jun. "Design and analysis of time-predictable single-core and multi-core processors /." Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1879689241&sid=3&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Abstract:
Thesis (Ph. D.)--Southern Illinois University Carbondale, 2009.
"Department of Electrical and Computer Engineering." Keywords: Cache, Multicore processors, Very Long Instruction Word, Worst Case Execution Time, Time predictability. Includes bibliographical references (p. 108-116). Also available online.
22

Grau, David. "Relating interfacial fracture toughness to core thickness in honeycomb-core sandwich composites." [Gainesville, Fla.] : University of Florida, 2003. http://purl.fcla.edu/fcla/etd/UFE0002701.

23

Yan, Jun. "Design and Analysis of Time-Predictable Single-Core and Multi-Core Processors." OpenSIUC, 2009. https://opensiuc.lib.siu.edu/dissertations/20.

Abstract:
Time predictability is one of the most important design considerations for real-time systems. In this dissertation, the time predictability of the instruction cache is studied on both single-core and multi-core processors. It is observed that many features in modern microprocessor architecture, such as cache memories and branch prediction, favor average-case performance, which can significantly compromise time predictability and make accurate worst-case performance analysis extremely difficult if not impossible. Therefore, the time predictability of VLIW (Very Long Instruction Word) processors and its compiler support is studied. The impediments to time predictability for VLIW processors are analyzed, and compiler-based techniques to address these problems with minimal modifications to the VLIW hardware design are proposed. Specifically, the VLIW compiler is enhanced to support full if-conversion, hyperblock scheduling, and intra-block nop insertion to enable efficient WCET (Worst Case Execution Time) analysis for VLIW processors. Our time-predictable processor incorporates instruction caches, which can mitigate the latency of fetching instructions that hit in the cache. For instructions missing from the cache, instruction prefetching is a useful technique to boost average-case performance. However, it is unclear whether or not instruction prefetching can benefit the worst-case performance as well. Thus, the impact of instruction prefetching on the worst-case performance of instruction caches is studied. An extension of the static cache simulation technique is applied to model and compute the worst-case instruction cache performance with prefetching. It is shown that instruction prefetching can be reasonably bounded; however, the time variation of computing is increased by instruction prefetching. As technology advances, it is projected that multi-core chips will be increasingly adopted by the microprocessor industry. For real-time systems to safely harness the potential of multi-core computing, designers must be able to accurately obtain the worst-case execution time (WCET) of applications running on multi-core platforms, which is very challenging due to possible runtime inter-core interferences in using shared resources such as the shared L2 caches. As a first step toward time-predictable multi-core computing, this dissertation presents two novel approaches to bounding the worst-case performance of threads running on multi-core processors with shared L2 instruction caches. The CF (Control Flow) based approach computes the worst-case instruction access interferences between different threads based on the program control flow information of each thread, which can be statically analyzed. The extended ILP (Integer Linear Programming) based approach uses constraint programming to model the worst-case instruction access interferences between different threads. In the context of timing analysis for many-core architectures, static approaches may also face scalability issues, so it is important and challenging to design time-predictable caches for multi-core architectures. We propose an approach that leverages prioritized shared L2 caches to improve time predictability for real-time threads running on multi-core processors: the prioritized shared L2 cache gives higher priority to real-time threads while allowing low-priority threads to use whatever shared L2 cache space is available.
A detailed implementation and a discussion of experimental results are presented in this dissertation.
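As a rough illustration of what such interference-aware bounds look like (a deliberate simplification with hypothetical names, not the dissertation's actual formulation): each potentially interfering access from a co-running thread may turn one of this thread's L2 hits into a miss, so the interference term enters the bound additively.

```c
/* Pessimistic WCET bound for a thread sharing the L2 instruction cache.
   Illustrative only: real analyses bound the interfering misses per
   program point from the co-runners' control flow, not with one number. */
long wcet_bound(long base_cycles,        /* WCET assuming a private L2 */
                long interfering_misses, /* worst-case extra misses caused
                                            by co-running threads */
                long l2_miss_penalty) {  /* cycles per additional miss */
    return base_cycles + interfering_misses * l2_miss_penalty;
}
```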
24

Eriksen, Stein Ove. "Low-power microcontroller core." Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9048.

Abstract:

Energy efficiency in embedded processors is of major importance in order to achieve longer operating time for battery operated devices. In this thesis the energy efficiency of a microcontroller based on the open source ZPU microprocessor is evaluated and improved. The ZPU microprocessor is a zero-operand stack machine originally designed for small size FPGA implementation, but in this thesis the core is synthesized for implementation with a 180nm technology library. Power estimation of the design is done both before and after synthesis in the design flow, and it is shown that power estimates based on RTL simulations (before synthesis) are 35x faster to obtain than power estimates based on gate-level simulations (after synthesis). The RTL estimates deviate from the gate-level estimates by only 15% and can provide faster design cycle iterations without sacrificing too much accuracy. The energy consumption of the ZPU microcontroller is reduced by implementing clock gating in the ZPU core and also implementing a tiny stack cache to reduce the energy consumed by stack activity. The result of these improvements is a 46% reduction in average power consumption. The ZPU architecture is also compared to the more common MIPS architecture: the Plasma CPU, a MIPS-architecture core, is synthesized and simulated to serve as a comparison to the ZPU microcontroller. The results of the comparison with the MIPS architecture show that the ZPU needs on average 15x as many cycles and 3x as many memory accesses to complete the benchmark programs as the MIPS does.

25

Shenoy, Pranab. "Universal FFT core generator." Philadelphia, Pa.: Drexel University, 2007. http://hdl.handle.net/1860/2535.

26

Zhong, Ming. "Partial core power transformer." Thesis, University of Canterbury. Electrical and Computer Engineering, 2012. http://hdl.handle.net/10092/7537.

Abstract:
This thesis describes the design, construction, and testing of a 15kVA, 11kV/230V partial core power transformer (PCPT) for continuous operation. While applications for the partial core transformer have been developed for many years, the concept of constructing a partial core transformer from conventional copper windings as a power transformer, specifically for continuous operation, had not previously been investigated. In this thesis, this concept has been investigated and tested. The first part of the research involved creating a computer program to model the physical dimensions and the electrical performance of a partial core transformer, based on the existing partial core transformer models. Since the hot-spot temperature is the key factor limiting the power rating of the PCPT, the second part of the research investigates a thermal model to simulate the change of the hot-spot temperature for the designed PCPT. The cooling fluid applied in this project was BIOTEMP®. The thermal model was originally taken from the IEEE Guide for Loading Mineral-Oil-Immersed Transformers; however, some changes had to be made, since the original model does not include BIOTEMP® as a type of cooling fluid. The constructed partial core transformer was tested to determine its hot-spot temperature when immersed in BIOTEMP®, and the results were compared with the thermal model. The third part of the research involved using both the electrical model and the thermal model to design a PCPT. The PCPT was tested to obtain its actual electrical and thermal performance. The overall performance of the PCPT was very close to the model estimation; however, cooling of the PCPT was not sufficient to allow it to operate at the design rated load continuously. The PCPT was therefore down-rated from 15kVA to maintain the hot-spot temperature at 100°C for continuous operation. The actual rating of the PCPT is 80% of the original power rating, which is 12kVA.
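For context, the IEEE loading-guide style thermal model referred to above is commonly written in the simplified form below (the thesis modifies the model because the guide does not cover BIOTEMP® as a cooling fluid): the hot-spot temperature is the ambient temperature plus exponentially evolving top-oil and hot-spot rises.

```latex
\Theta_H(t) = \Theta_A + \Delta\Theta_{TO}(t) + \Delta\Theta_{HS}(t),
\qquad
\Delta\Theta_{TO}(t) = \Delta\Theta_{TO,U}
  + \left(\Delta\Theta_{TO,i} - \Delta\Theta_{TO,U}\right) e^{-t/\tau_{TO}}
```

Here the subscripts i and U denote the initial and ultimate rises and τ_TO is the oil time constant; the hot-spot rise obeys an analogous first-order response with its own, shorter time constant.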
27

Ames, Kelly. "Novel bent-core metallomesogens." Thesis, University of Nottingham, 2005. http://eprints.nottingham.ac.uk/11933/.

Abstract:
Novel polycatenar bent-core Schiff-base metallomesogens derived from 1,10-phenanthroline ([MCl2(LPhen-n)]), 2,2'-bipyridine ([MCl2(LBipy-n)]) and 5,5'-dimethyldipyrromethane (tc-[M(LDipy-n)] and ex-[M(LDipy-n)]) have been investigated in this body of work. The mesomorphic properties of these first- and second-row transition metal complexes have been studied. Further to the examination of the compounds in the liquid crystalline state, single crystal X-ray studies of short chain analogues were performed to determine the coordination geometry and the degree of self-assembly of the molecules in the solid state. Chapter 1 introduces the field of liquid crystals and metallomesogens, with a focus on thermotropic liquid crystals and their nomenclature, physical properties and applications. The historical background of the field is briefly explored and previous research on bent-core metallomesogens from the Schröder group in Nottingham is reviewed. The characterisation of liquid crystalline mesophases, namely by polarised optical microscopy, differential scanning calorimetry and X-ray diffraction, is described. Further discussion is dedicated to the X-ray diffraction patterns generated by columnar mesophases. The chapter finishes with a description of the aims of the project. Chapter 2 commences with an introduction to liquid crystals derived from 1,10-phenanthroline. Following this is a description of the synthesis and characterisation of mesomorphic metal-free ligands, LPhen-n (n = 10, 12, 14, 16), four novel series of metallomesogens and two non-mesomorphic series of complexes derived from 1,10-phenanthroline, [MCl2(LPhen-n)] (M = Mn2+, Fe2+, Co2+, Ni2+, Cu2+, Zn2+; LPhen = 2,9-bis-[3',4',5'-tri(alkoxy)phenyliminomethyl]-1,10-phenanthroline; n = 8, 10, 12, 14, 16). Structural determination by single crystal X-ray diffraction of the analogous methoxy complexes [MCl2(LPhen-1)] (M = Mn2+, Co2+, Ni2+, Zn2+), and of the complex without any lateral aliphatic groups, [CuCl2(LPhen-0)], revealed the metal(II) complexes to have either distorted trigonal bipyramidal, square pyramidal or octahedral coordination geometry. The mesomorphic behaviour of the complexes [MCl2(LPhen-n)] (M = Mn2+, Co2+, Ni2+, Zn2+; n = 8, 10, 12, 14, 16) and of the metal-free ligands LPhen-n (n = 10, 12, 14, 16) is columnar (with the exception of the non-mesomorphic [CoCl2(LPhen-8)]), and the 2D symmetries of these mesophases vary between hexagonal, rectangular and oblique. Chapter 3 opens with a discussion of liquid crystalline compounds derived from 2,2'-bipyridine. Subsequently, the synthesis and characterisation of four new series of metallomesogens and two non-mesomorphic compounds derived from 2,2'-bipyridine, [MCl2(LBipy-n)] (M = Mn2+, Fe2+, Co2+, Cu2+ and n = 16; M = Ni2+, Zn2+ and n = 10, 12, 14, 16; LBipy = 6,6'-bis-[3',4',5'-tri(alkoxy)phenyliminomethyl]-2,2'-bipyridine), are detailed. Single crystal X-ray diffractometry revealed the coordination geometry of [MnCl2(LBipy-1)], [CoCl2(LBipy-1)] and [NiCl2(LBipy-1)] to be octahedral, whereas [ZnCl2(LBipy-1)] is distorted trigonal bipyramidal. The complexes [MCl2(LBipy-n)] (M = Mn2+, Co2+ and n = 16; M = Ni2+, Zn2+ and n = 10, 12, 14, 16) exhibit mesomorphic character and again generate columnar mesophases. Finally, Chapter 4 begins with a discussion of pyrrole-derived liquid crystals. The synthesis and characterisation of hexacatenar compounds [M(LDipy-n)]x (M = 2H, Zn2+, Pd2+; n = 10, 12, 14, 16; x = 1, 2), tetracatenar compounds tc-[M(LDipy-16)]x (M = 2H, Zn2+, Pd2+; x = 1, 2) and extended dicatenar compounds ex-[M(LDipy-n)]x (M = 2H, Zn2+, Pd2+; x = 1, 2) are described. Characterisation by X-ray diffraction of single crystals of [Zn(LDipy-1)]2 and ex-[Zn(LDipy-1)]2 shows that they exhibit a distorted tetrahedral geometry, forming double-stranded helical structures, while ex-[Pd(LDipy-1)] has a distorted square planar geometry. The metal-free ligands H2LDipy-n (n = 10, 12, 14, 16) and the complexes [Zn(LDipy-16)]2 and [Pd(LDipy-n)] (n = 12, 14, 16) all exhibit narrow mesomorphic temperature ranges and unidentified mesophases. The tetracatenar compound tc-[Zn(LDipy-16)]2 generates a columnar hexagonal mesophase and the complex tc-[Pd(LDipy-16)] generates an unidentified liquid crystalline phase, whereas the metal-free ligand tc-H2LDipy-16 has no mesomorphic character. Finally, two of the extended dicatenar compounds, ex-H2LDipy-16 and ex-[Zn(LDipy-16)]2, are non-mesomorphic, while ex-[Pd(LDipy-16)] was found to have a smectic A phase.
28

Restivo, Andrea. "Core-mantle boundary heterogeneity." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.271844.

29

Monteiro, Andrei Alhadeff. "Many-Core Fragmentation Simulation." Pontifícia Universidade Católica do Rio de Janeiro, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28800@1.

Abstract:
A GPU-based computational framework is presented to deal with dynamic failure events simulated by means of cohesive zone elements. The work is divided into two parts. In the first part, we deal with pre-processing of the information and verify the effectiveness of dynamic insertion of cohesive elements in large meshes. To this effect, we employ a simplified topological data structure specialized for triangles. In the second part, we present an explicit dynamics code that implements an extrinsic cohesive zone formulation where the elements are inserted on-the-fly, when needed and where needed. The main challenge in implementing a GPU-based computational framework using an extrinsic cohesive zone formulation resides in being able to dynamically adapt the mesh in a consistent way, inserting cohesive elements on fractured facets. In order to handle that, we extend the conventional data structure used in finite element codes (based on element incidence) and store, for each element, references to the adjacent elements. To avoid concurrency when accessing shared entities, we employ the conventional strategy of graph coloring. In a pre-processing phase, each node of the dual graph (bulk element of the mesh) is assigned a color different from the colors assigned to adjacent nodes. In that way, elements of the same color can be processed in parallel without concurrency. All the procedures needed for the insertion of cohesive elements along fracture facets and for computing node properties are performed by threads assigned to triangles, invoking one kernel per color. Computations on existing cohesive elements are also performed based on the adjacent bulk elements.
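A minimal sketch of the coloring pre-processing step described above (generic greedy coloring in C, not the PUC-Rio implementation): since a triangle has at most three neighbors in the dual graph, greedy assignment needs at most four colors, and all elements of one color can then be processed by one race-free kernel launch.

```c
/* Greedily color a triangle-mesh dual graph. Call with every entry of
   color[] set to -1; adj[i] holds the indices of element i's neighbors
   and deg[i] how many there are (at most 3 for triangles). */
void greedy_color(int n, const int deg[], int adj[][3], int color[]) {
    for (int i = 0; i < n; i++) {
        int used[4] = {0, 0, 0, 0};
        for (int k = 0; k < deg[i]; k++) {
            int c = color[adj[i][k]];
            if (c >= 0) used[c] = 1;   /* neighbor already colored */
        }
        int c = 0;
        while (used[c]) c++;           /* smallest color unused nearby */
        color[i] = c;
    }
}
```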
30

Damsgaard, Falck Hanna, Johanna Ring, and Erik Svensson. "Creating Bushing Core Geometries." Thesis, Uppsala universitet, Institutionen för materialvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-444328.

Abstract:
Bushings are a necessary component of the transformers in the power grid. A bushing is used to control the strength and shape of the electric field, and it also acts as an insulator for high-voltage conductors, enabling a conductor to be safely brought through a grounded barrier. In this report, several methods for creating a 2D axi-symmetrical bushing core geometry in COMSOL Multiphysics were developed. The geometry includes the conductor, the hollow area inside the conductor, the RIP, the mold and the aluminum foils. First, the base geometry was constructed, which includes all geometry parts except the foils. Afterward, two different approaches were used to construct the foils: the first was to automatically build a requested number of foils, and the second was to create the foils based on data from Excel sheets. The developed method should be able to create both full foils and partial foils. A total of four foil methods were developed. The first method used COMSOL's Model Builder to create a requested number of foils uniformly distributed within the base geometry. The second method used COMSOL's Application Builder to create a requested number of foils based on mathematical expressions. The third method reads data from an Excel sheet to create the foils in COMSOL. Method four is an improved version of method three that can create partial foils as well as the base geometry. Foil methods II, III and IV created every foil as a separate geometrical object; as a result, an associated method that deletes the foils was also developed for each of these methods. The conclusion that the fourth method was the most realistic way of creating a bushing core could be drawn because, among other factors, it is the only method that can build partial foils.
APA, Harvard, Vancouver, ISO, and other styles
31

Shayesteh, Anahita. "Factored multi-core architectures." Diss., Restricted to subscribing institutions, 2006. http://proquest.umi.com/pqdweb?did=1273137861&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Bruno, William M. Bridges William B. "Powder core dielectric waveguides /." Diss., Pasadena, Calif. : California Institute of Technology, 1986. http://resolver.caltech.edu/CaltechETD:etd-03192008-084301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Agnoletto, Irene. "Overluminous Core-Collapse Supernovae." Doctoral thesis, Università degli studi di Padova, 2010. http://hdl.handle.net/11577/3427000.

Full text
Abstract:
This Thesis is focused on a photometric and spectroscopic study of four Type IIn supernovae (SNe 2006gy, 2007bt, 2007bw and 2008fz), which are among the brightest supernovae (SNe) ever detected. They belong to the sample of overluminous or Very Luminous SuperNovae (VLSNe), which currently includes another 3-4 well-studied events. Their absolute luminosity at maximum, MV < -20, is much higher than that of any other previous supernova, either core-collapse or thermonuclear. The huge radiated energy (> 10^(51) erg emitted in the first ~200 days) links these events to massive or supermassive progenitors, which experienced extreme mass loss during their last stages of evolution. However, other explosion mechanisms or sources of energy are being investigated; the debate on their nature is still open. The first object discussed in this Thesis is SN 2006gy, one of the most debated supernovae ever. Contrary to typical IIn SNe, this event did not show any strong X-ray or radio emission near the epoch of maximum. This has led to the consideration of other feasible non-standard sources of energy beyond interaction. In this thesis, the evolution of the multiband light curves, the pseudo-bolometric (BVRI) light curve and an extended spectral sequence are presented and used to derive constraints on the origin and nature of the SN. Its light curve is characterized by a broad, bright (MR = -21.7 at about 70 days) peak, followed by a rapid luminosity fading which turns into a slower decline by day 180. At late phases (> 237 days), because of the large luminosity drop (> 3 mag), only upper visibility limits are obtained in the B, R and I bands. In the near-infrared, two K-band detections on days 411 and 510 possibly indicate dust formation or IR echo scenarios. At all epochs the spectra are characterized by a multicomponent Halpha profile, without any P-Cygni feature. By means of a semi-analytical code, the light curve in the first 170 days is found to be consistent with the explosion of a compact progenitor (R = 6-8 x 10^(12) cm, Mej = 5-14 Msol), whose ejecta collided with massive (6-10 Msol), opaque clumps of previously ejected material. These clumps do not completely obscure the SN photosphere, so that at its peak the luminosity is due both to the decay of 56Ni and to interaction with the circumstellar medium (CSM). After 170 days, spectroscopic and photometric similarities are found between SN 2006gy and bright, interaction-dominated SNe (e.g. SN 1997cy, SN 1999E and SN 2002ic). This suggests that ejecta-CSM interaction plays a key role in SN 2006gy about 6 to 8 months after maximum, sustaining the late-time light curve. Alternatively, the late luminosity may be related to the radioactive decay of 3 Msol of 56Ni. In this scenario, a supermassive star is not required to explain the observational data, nor is an extraordinarily large explosion energy. For SNe 2007bt, 2007bw and 2008fz, UBVRI light curves and an extended spectral sequence are also presented. Analogies and differences are highlighted, both among the three events and with respect to the sample of VLSNe from the literature. Photometrically, it is shown that the light curves of SNe 2007bt and 2007bw are substantially different from that of SN 2008fz, evolving more slowly, being redder at the earlier phases and decaying at a rate consistent with that predicted by the radioactive decay of 56Co.
On the contrary, the photometric evolution of SN 2008fz is reminiscent of the light curves of IIL SNe, showing a short peak followed by a steep decline. Spectroscopically, the three events are characterized by high-velocity (up to 12000 km/s), slowly decelerating emission lines. The spectra of SNe 2007bt and 2007bw are dominated by Balmer lines, superimposed on a relatively flat continuum (TBB = 6000-7000 K); an asymmetry in the early profile of Halpha is observed, slowly disappearing with time. Measurements of the narrow component of Halpha in SN 2007bt indicate a CSM speed of 320 km/s, which is only consistent with the winds surrounding luminous blue variable (LBV) stars. The early spectra of SN 2008fz are found to be similar to those of SN 2006gy; however, they show higher temperatures (TBB = 14000 K) and a more rapid evolution. For the three events, the energetics, luminosity, initial radius (> 10^(15) cm) and kinematics derived from the analysis of the light curves and spectra could be reproduced by the conversion of kinetic energy into radiation by a clumpy CSM hit by the energetic SN ejecta, similarly to what was proposed for SN 2006gy. For SNe 2007bt and 2007bw, the asymmetry in the Halpha line can be explained if a massive (> 10 Msol) clumpy CSM lies face-on in the direction of the observer. The asymmetry in the CSM distribution around the star could be due to a binarity effect in the progenitor system, or to asymmetric mass ejection from a single star. For SN 2008fz, the rapid expansion of the black-body radius favors a less massive CSM (~1 Msol), which is efficiently warmed up and accelerated by the high-velocity SN ejecta. Because of the relatively small mass in the CSM/shell, the photon diffusion time is smaller than that calculated for SN 2006gy, and the radiated energy drops rapidly, as does the light curve. As in the case of SN 2006gy, these scenarios have the advantage that they do not invoke any exotic explosion mechanism for these VLSNe. However, other scenarios could be consistent with their photometric evolution; among these, the possibility of a pair-instability explosion cannot be excluded. This and other plausible hypotheses proposed by other authors are discussed.
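For reference, the 56Co benchmark against which such late-time decline rates are checked is the standard decay chain (a textbook fact, not a result of the thesis):

```latex
% ^{56}Ni -> ^{56}Co -> ^{56}Fe chain, assuming full gamma-ray trapping:
{}^{56}\mathrm{Ni}
  \xrightarrow{\;\tau \simeq 8.8\,\mathrm{d}\;} {}^{56}\mathrm{Co}
  \xrightarrow{\;\tau \simeq 111.3\,\mathrm{d}\;} {}^{56}\mathrm{Fe},
\qquad
L(t) \propto e^{-t/111.3\,\mathrm{d}}
\;\Rightarrow\;
\frac{\Delta m}{\Delta t}
  = \frac{2.5 \log_{10} e}{111.3\,\mathrm{d}}
  \approx 0.98\ \mathrm{mag}\ \mathrm{per}\ 100\,\mathrm{d}.
```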
APA, Harvard, Vancouver, ISO, and other styles
34

Rooney, Kevin F. "The effects of an aquatic core training program and a pilates core training program on core strengthening in the college athlete /." Link to PDF version, 2005. http://libweb.cup.edu/thesis/umi-cup-1010.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Garcia, Ruben. "A Parametric Study on Core Performance of Sodium Fast Reactors Using SERPENT Code." Thesis, KTH, Fysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-101995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Graillat, Amaury. "Génération de code pour un many-core avec des contraintes temps réel fortes." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM063/document.

Full text
Abstract:
Most critical systems are subject to hard real-time requirements. These systems are more and more complex, and the computational power of predictable single-core processors is no longer sufficient. Multi- or many-core architectures are good alternatives, but interferences on shared resources must be taken into account to avoid unpredictable timing effects. For many-core architectures, the Network-on-Chip (NoC) must be configured such that deadlocks are avoided and a tight Worst-Case Traversal Time (WCTT) of the communications can be computed. The Kalray MPPA2 is a many-core architecture with good timing properties. Dataflow synchronous languages such as Lustre or Scade are widely used for avionics critical software. In these languages, programs are described by networks of computational nodes. We introduce a method to extract parallel tasks from synchronous programs. Then, we generate parallel code to deploy tasks on the chip and implement NoC and shared-memory communications. The generated code enables traceability. It is based on a time-triggered execution model which relies on a static schedule and minimizes memory interferences thanks to the use of memory banks. The code enables the computation of a worst-case execution time (WCET) bound accounting for the memory interferences and the WCTT of NoC transmissions. We generate a configuration of the platform that enables fair bandwidth allocation on the NoC, bounded transmissions through the NoC, and clock synchronization. Finally, we apply this toolchain to avionics case studies and synthetic benchmarks running on 64 cores.
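The generated code itself is not shown in the abstract; as a loose illustration of the time-triggered execution model (task names and release dates invented, not taken from the thesis), a static schedule table and its executive can be as simple as:

```python
# Minimal sketch of a time-triggered executive: every task instance has a
# statically computed release date within the period, and the core simply
# waits for that date before running the task, so pre-planned
# communications need no locks.
import time

schedule = [  # (release offset in seconds, task)
    (0.000, lambda: print("read inputs")),
    (0.002, lambda: print("node A step")),
    (0.005, lambda: print("node B step")),
    (0.009, lambda: print("write outputs")),
]

def run_period(t0):
    for offset, task in schedule:
        while time.monotonic() - t0 < offset:  # busy-wait until release date
            pass
        task()

run_period(time.monotonic())
```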
APA, Harvard, Vancouver, ISO, and other styles
37

Evaldsson, Mattias. "NoC for Versatile Micro-Code Programmable Multi-Core Processor Targeting Convolutional Neural Networks." Thesis, Linköpings universitet, Datorteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-179763.

Full text
Abstract:
This thesis investigates building a network-on-chip for a multi-core chip that computes convolutional neural networks (CNNs) using Imsys processors in a tree architecture. The division of work on a multi-core chip is investigated. Key communication patterns are identified, and three designs allowing increasingly more advanced communication patterns are implemented in VHDL. Each design is evaluated on throughput, latency and design size by running tests on the communication patterns in simulation. A relation between design size and throughput is shown, though the throughput of some communication patterns decreases when resorting to networks with a smaller design size. Depending on what layers are present in a CNN of interest, a network can be chosen with as small a design size as possible while still achieving the desired results. Aspects such as implementation and usage difficulties and energy consumption are also discussed in the thesis, though only at a theoretical level.
APA, Harvard, Vancouver, ISO, and other styles
38

Abdallah, Laure. "Worst-case delay analysis of core-to-IO flows over many-cores architectures." PhD thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/17836/1/abdallah_2.pdf.

Full text
Abstract:
Many-core architectures are more promising hardware for designing real-time systems than multi-core systems, as they should enable an easier, better-mastered integration of a higher number of applications, potentially of different criticality levels. In embedded real-time systems, these architectures will be integrated within backbone Ethernet networks, as they mostly provide Ethernet controllers as Input/Output (I/O) interfaces. Thus, a number of applications of different criticality levels could be allocated on the Network-on-Chip (NoC) and be required to communicate with sensors and actuators. However, the worst-case behavior of the NoC for both inter-core and core-to-I/O communications must be established. Several NoCs targeting hard real-time systems, built from specific hardware extensions, have been designed. However, none of these extensions is currently available in commercially available NoC-based many-core architectures, which instead rely on wormhole switching with round-robin arbitration. Using this switching strategy, interference patterns can occur between direct and indirect flows on many-cores. Besides, the mapping over the NoC of both critical and non-critical applications has an impact on the network contention that these core-to-I/O communications exhibit. These core-to-I/O flows (coming from the Ethernet interface of the NoC) cross two networks of different speeds: the NoC and Ethernet. On the NoC, the size of allowed packets is much smaller than the size of Ethernet frames; thus, once an Ethernet frame is transmitted over the NoC, it is divided into many packets. Only when all the data corresponding to the frame have been received by the DDR-SDRAM memory on the NoC is the frame removed from the buffer of the Ethernet interface. In addition, congestion on the NoC, due to wormhole switching, can delay these flows, and the buffer in the Ethernet interface has a limited capacity. This behavior may therefore lead to dropped Ethernet frames. The idea is thus to analyze the worst-case transmission delays on the NoC and reduce the delays of the core-to-I/O flows. In this thesis, we show that the pessimism of the existing Worst-Case Traversal Time (WCTT) computation methods and the existing mapping strategies lead to dropped Ethernet frames due to internal congestion in the NoC. We demonstrate properties of such NoC-based wormhole networks to reduce the pessimism when modeling flows in contention. Then, we propose a mapping strategy that minimizes the contention of core-to-I/O flows in order to solve this problem. We show that the WCTT values can be reduced by up to 50% compared to current state-of-the-art real-time packet schedulability analysis. These results are due to the modeling of the real impact of the flows in contention in our proposed computation method. Besides, experimental results on real avionics applications show significant improvements of core-to-I/O flow transmission delays, up to 94%, without significantly impacting the transmission delays of core-to-core flows. These improvements are due to our mapping strategy, which allocates the applications in such a way as to reduce the impact of non-critical flows on critical flows. These reductions in the WCTT of the core-to-I/O flows avoid the dropping of Ethernet frames.
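As a rough, purely illustrative calculation (all sizes are hypothetical, not taken from the thesis), the frame-segmentation effect described above can be sketched as follows: a frame occupies the bounded Ethernet buffer until every one of its NoC packets has drained, so NoC congestion stretches residency and can overflow the buffer.

```python
# Back-of-the-envelope sketch: how many NoC packets one Ethernet frame
# becomes, and when steady-state backlog exceeds the buffer capacity
# (Little's law: occupancy ~ residency time / inter-arrival time).
import math

FRAME_BYTES = 1518          # max standard Ethernet frame
NOC_PACKET_PAYLOAD = 64     # assumed NoC packet payload
BUFFER_FRAMES = 8           # assumed Ethernet interface buffer capacity

packets_per_frame = math.ceil(FRAME_BYTES / NOC_PACKET_PAYLOAD)

def excess_backlog(arrival_period_us, worst_case_residency_us):
    occupancy = worst_case_residency_us / arrival_period_us
    return max(0.0, occupancy - BUFFER_FRAMES)  # frames at risk of being dropped

print(packets_per_frame, "NoC packets per frame")
print("excess backlog:", excess_backlog(12.0, 150.0), "frames")
```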
APA, Harvard, Vancouver, ISO, and other styles
39

Gandy, Nicole J. Greenwood Mike Shim Jaeho Stanford Matthew S. "An evaluation of the relationships between core stability, core strength, and running economy." Waco, Tex. : Baylor University, 2006. http://hdl.handle.net/2104/4896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Rogerson, Eleanor. "The determination of the core structure and core surfactant interface in overbased detergents." Thesis, University of Hull, 2002. http://hydra.hull.ac.uk/resources/hull:10451.

Full text
Abstract:
Overbased detergents are oil additives included in oil to neutralise the acids that are generated as by-products of the combustion process within an engine. These overbased detergents have been investigated on an atomic scale through the preparation and characterisation of pure model complexes with relevant metal ions and ligands. Group 2 metal-ion complexes have been prepared with sulfurised alkylphenol ligands under a range of conditions. The complexes prepared from methanol have shown that calcium cations and strontium cations give isostructural complexes and that the alkyl chain has a minimal effect on the structures of the complexes. In the solid state, the complexes all have the formula M₂L₂.6MeOH. Calix[8]arene complexes have been prepared, including a mixed metal-ion complex with an ion-channel structure. Three calcium cation complexes have been prepared with calix[8]arene ligands, where two of the complexes are mimics of the precursors of overbased detergents as they contain calcium hydroxide cores. One of the complexes has a tetranuclear Ca₄(OH)₄ core and has shown that the conversion of the calcium hydroxide core to the calcium carbonate core in an overbased detergent is a facile reaction. The second precursor mimic contains a decanuclear calcium cation core and has the formula (Ca²⁺)₁₀(BC8⁵¯)₂(OH)₈(OMe¯)₂(DMF)₁₀.5DMF. An unusual monodentate carboxylic acid complex with calcium has been prepared, in which ligand design was utilised to achieve the desired monodentate coordination. Finally, a novel complex containing calcium cations and carbonate anions has been prepared, with the formula Ca₁₀(M2²)₈(CO₃²¯)₂(DMPD)₄(MeOH)₄.8acetone. This complex contains µ₆-CO₃²¯ anions and can be considered a model for overbased detergents.
APA, Harvard, Vancouver, ISO, and other styles
41

Dzhagan, V., A. G. Milekhin, M. Ya Valakh, S. Pedetti, M. Tessier, B. Dubertret, and D. R. T. Zahn. "Morphology-induced phonon spectra of CdSe/CdS nanoplatelets: core/shell vs. core–crown." Universitätsbibliothek Chemnitz, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-219936.

Full text
Abstract:
Recently developed two-dimensional colloidal semiconductor nanocrystals, or nanoplatelets (NPLs), extend the palette of high-performance solution-processable free-standing 2D nanomaterials. Growing CdSe and CdS parts sequentially, in either a side-by-side or a stacked manner, results in core–crown or core/shell structures, respectively. Both kinds of heterogeneous NPLs find efficient applications and represent interesting materials in which to study electronic and lattice excitations, and the interaction between them, under strong one-directional confinement. Here, we investigated by Raman and infrared spectroscopy the phonon spectra and electron–phonon coupling in CdSe/CdS core/shell and core–crown NPLs. A number of distinct spectral features of the two NPL morphologies are observed, which are further modified by tuning the laser excitation energy Eexc between in- and off-resonant conditions. The general difference is the larger number of phonon modes in core/shell NPLs and their spectral shifts with increasing shell thickness, as well as with Eexc. This behaviour is explained by the strong mutual influence of the core and shell and the formation of combined phonon modes. In the core–crown structure, the CdSe and CdS modes preserve a more independent behaviour, with only interface modes forming phonon overtones with phonons of the core.
This contribution is freely accessible thanks to a (DFG-funded) Alliance or National Licence.
APA, Harvard, Vancouver, ISO, and other styles
42

González-Tarrío, Carlos. "Expanding the core business : Understanding how to grow in the non-core business." Thesis, KTH, Maskinkonstruktion (Inst.), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277038.

Full text
Abstract:
Throughout history, large companies have had to adapt to new times. For this, they have had to be able to evolve and change the direction of their offer. In most cases, the offer they have now has little to do with what they originally offered; in other terms, they had to expand their core business. That means changing the main source of income, which can be done by moving to another market, creating new products, changing from a product company to a service company, and so on. On the contrary, many companies that once led the market have disappeared due to their lack of future vision: after using all their resources on the current core product, they forgot to invest in the future, and once that product became obsolete, the company failed. Looking to the future and deciding which direction to take is not easy. There is no standard formula for success in choosing a direction, but there are some common factors that can help reduce the risk. The results from the study show lack of focus and strategy as the main challenges for companies expanding the core. The conclusion is that there is not only one way to expand the core, and each company should adapt its strategies according to its status. In the case of the case company, the strategy is aligned with its status, but the way it is applied should be improved to increase the speed and success rate. Three main recommendations are given to improve the actual expanding-the-core process: isolate the non-core to increase focus, make better use of the brand position and the competitive advantages it can bring, and improve the company's overall knowledge of the non-core products.
APA, Harvard, Vancouver, ISO, and other styles
43

Miller, Michael A. "21st century roles and missions : identifying Air Force core competencies and core capabilities /." Maxwell AFB, Ala. : School of Advanced Air and Space Studies, 2008. https://www.afresearch.org/skins/rims/display.aspx?moduleid=be0e99f3-fc56-4ccb-8dfe-670c0822a153&mode=user&action=downloadpaper&objectid=4424120d-705b-40e2-a107-0aead299c5d9&rs=PublishedSearch.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Ju, Zilong. "Fast Viterbi Decoder Algorithms for Multi-Core System." Thesis, KTH, Signalbehandling, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-98779.

Full text
Abstract:
In this thesis, fast Viterbi decoder algorithms for a multi-core system are studied. New parallel Viterbi algorithms for decoding convolutional codes are proposed based on tail-biting trellises. The performance of the new algorithms is first evaluated in MATLAB and then in Eagle (E-UTRA algorithms for LTE) link-level simulations, where the optimal parameter settings are obtained from various simulations. One of the algorithms is proposed for implementation in the product due to its good BLER performance and low implementation complexity. The new parallel algorithm is then implemented on target DSPs of an Ericsson internal multi-core system to decode the PUSCH (Physical Uplink Shared Channel) CQI (Channel Quality Indicator) in LTE (Long Term Evolution), and its performance in the real multi-core system is compared against the current implementation with respect to both cycle and memory consumption. As a fast decoder, the proposed parallel Viterbi decoder is computationally efficient: it significantly reduces the decoding latency and solves memory-limitation problems on the DSP.
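The parallel tail-biting algorithms themselves are not given in the abstract; for orientation, the add-compare-select recursion they build on is sketched below for the classic rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal (a standard textbook example, not the thesis' code):

```python
# Minimal hard-decision Viterbi decoder for the (7, 5) rate-1/2 code.
G = (0b111, 0b101)  # generator polynomials (octal 7 and 5)
N_STATES = 4        # 2^(K-1) states with constraint length K = 3

def outputs(state, bit):
    reg = (bit << 2) | state  # shift the new bit into the register
    return [bin(reg & g).count("1") & 1 for g in G]  # parity per generator

def viterbi_decode(received):
    """received: flat list of hard bits, two per input bit."""
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)  # encoder starts in state 0
    paths = [[] for _ in range(N_STATES)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for state in range(N_STATES):
            if metric[state] == INF:
                continue
            for bit in (0, 1):
                nxt = ((bit << 2) | state) >> 1  # next register state
                dist = sum(a != b for a, b in zip(outputs(state, bit), r))
                if metric[state] + dist < new_metric[nxt]:  # add-compare-select
                    new_metric[nxt] = metric[state] + dist
                    new_paths[nxt] = paths[state] + [bit]
        metric, paths = new_metric, new_paths
    return paths[min(range(N_STATES), key=lambda s: metric[s])]

print(viterbi_decode([1, 1, 1, 0, 0, 0, 0, 1]))  # -> [1, 0, 1, 1]
```

A tail-biting variant additionally constrains the start and end states to be equal, which is what makes the block-parallel decompositions studied in the thesis possible.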
APA, Harvard, Vancouver, ISO, and other styles
45

Okafor, Kenneth Chukwuemeka. "Construction and utilization of linear empirical core models for PWR in-core fuel management /." The Ohio State University, 1988. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487588939091215.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Fang, Yechang. "Realization of Differentiated Quality of Service for Wideband Code Division Multiple Access Core Network." FIU Digital Commons, 2010. http://digitalcommons.fiu.edu/etd/244.

Full text
Abstract:
The development of 3G (third-generation telecommunication) value-added services brings higher Quality of Service (QoS) requirements. Wideband Code Division Multiple Access (WCDMA) is one of the three 3G standards, and enhancing QoS for the WCDMA Core Network (CN) is becoming more and more important for users and carriers. This dissertation focuses on enhancing QoS for the WCDMA CN; the purpose is to realize the DiffServ (Differentiated Services) model of QoS for the WCDMA CN. Based on the parallelism characteristic of Network Processors (NPs), NP programming models are classified as Pool of Threads (POTs) and Hyper Task Chaining (HTC). In this study, an integrated programming model that combines the two was designed. This model is highly efficient and flexible, and also solves the problems of sharing conflicts and packet ordering. We used it as the programming model to realize DiffServ QoS for the WCDMA CN. The realization of the DiffServ model mainly consists of buffer-management, packet-scheduling and packet-classification algorithms based on NPs. First, we proposed an adaptive buffer-management algorithm called Packet Adaptive Fair Dropping (PAFD), which takes both fairness and throughput into consideration and has smooth service curves. Then, an improved packet-scheduling algorithm called Priority-based Weighted Fair Queuing (PWFQ) was introduced to ensure the fairness of packet scheduling and reduce the queuing time of data packets, while keeping delay and jitter within a small range. Thirdly, a multi-dimensional packet-classification algorithm called Classification Based on Network Processors (CBNPs) was designed; it effectively reduces memory accesses and storage space, and has lower time and space complexity. Lastly, an integrated hardware-and-software system implementing the DiffServ model of QoS for the WCDMA CN was proposed and implemented on the IXP2400 NP. According to the corresponding experimental results, the proposed system significantly enhanced QoS for the WCDMA CN: it substantially improves response-time consistency, display distortion and sound-image synchronization, and thus increases network efficiency and saves network resources.
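The dissertation's exact PWFQ algorithm is not reproduced in the abstract; the following sketch (class names, weights and priorities invented) only illustrates the general idea it combines — weighted-fair-queuing virtual finish times with a strict-priority check applied first:

```python
# Sketch of priority-weighted fair queuing: packets are ordered first by
# class priority, then by a per-class virtual finish time that advances
# by length/weight, so heavier-weighted classes drain faster.
import heapq
import itertools

class PWFQScheduler:
    def __init__(self, classes):
        # classes: {name: (priority, weight)}; lower priority value = more urgent
        self.classes = classes
        self.virtual_finish = {name: 0.0 for name in classes}
        self.heap = []
        self.seq = itertools.count()  # tie-breaker keeps FIFO order stable

    def enqueue(self, cls, packet, length):
        prio, weight = self.classes[cls]
        self.virtual_finish[cls] += length / weight
        heapq.heappush(self.heap,
                       (prio, self.virtual_finish[cls], next(self.seq), packet))

    def dequeue(self):
        return heapq.heappop(self.heap)[3] if self.heap else None

sched = PWFQScheduler({"voice": (0, 4.0), "video": (1, 2.0), "data": (2, 1.0)})
sched.enqueue("data", "d1", 1500)
sched.enqueue("voice", "v1", 200)
print(sched.dequeue())  # -> "v1": the higher-priority class is served first
```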
APA, Harvard, Vancouver, ISO, and other styles
47

Emmert-Aronson, Ben. "Risk assessment and core affect." Connect to resource, 2006. http://hdl.handle.net/1811/6616.

Full text
Abstract:
Thesis (Honors)--Ohio State University, 2006.
Title from first page of PDF file. Document formatted into pages: contains 26 p.; also includes graphics. Includes bibliographical references (p. 13-15). Available online via Ohio State University's Knowledge Bank.
APA, Harvard, Vancouver, ISO, and other styles
48

Stevens, Nicholas Stamer. "Identifying core consciousness in animals /." view abstract or download text of file, 2006. http://hdl.handle.net/1794/2847.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Gustafsson, Johan, and Mikael Lingbrand. "Resurshantering i Dual-core kluster." Thesis, University West, Department of Economics and Informatics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-540.

Full text
Abstract:

With the new generation of processors, where several CPU cores are placed on one chip, performance is increased through parallel execution. In this report we present a survey of general multiprocessor theory, covering different techniques for both hardware and software. We have also carried out empirical tests on a compute cluster, where we tested the two programs Fluent and CFX, which perform CFD computations. For each program, three models were used for simulations with a varying number of compute nodes. We investigated what is most profitable: using one or both CPU cores in the different simulations. To test this, we ran simulations using one and two CPU cores, respectively, on the compute nodes. During the simulations we collected measurements such as network, memory and CPU load for all nodes, as well as execution times. These values were then compiled, showing that the larger a model is, the more profitable it is to run with one CPU core; in only one of our tests did it turn out to be profitable to use both CPU cores. A formula was then developed to show the differences between different numbers of processes with one and two CPU cores per node, respectively. This formula can be applied to calculate the total cost per simulation using the annual cost of the nodes and licenses used.
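The report's exact formula is not reproduced in the abstract; a cost model of the kind described would take a form such as the following (all symbols ours, purely illustrative):

```latex
% Hypothetical per-simulation cost model: wall-clock time as a fraction
% of the year, multiplied by the annual node and license costs in use.
C_{\mathrm{sim}} \;=\; \frac{t_{\mathrm{sim}}}{T_{\mathrm{year}}}
\,\bigl(n\,C_{\mathrm{node}} + p\,C_{\mathrm{lic}}\bigr),
```

where $t_{\mathrm{sim}}$ is the wall-clock time of the simulation, $n$ the number of nodes, $p$ the number of licensed processes, and $C_{\mathrm{node}}$, $C_{\mathrm{lic}}$ the annual node and license costs.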

APA, Harvard, Vancouver, ISO, and other styles
50

Sistany, Bahman. "A Certified Core Policy Language." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34865.

Full text
Abstract:
We present the design and implementation of a Certified Core Policy Language (ACCPL) that can be used to express access-control rules and policies. Although full-blown access-control policy languages such as the eXtensible Access Control Markup Language (XACML) [OAS13] already exist, because access rules in such languages are often expressed in a declarative manner using fragments of a natural language like English, it isn't always clear what the intended behaviour of the system encoded in these access rules should be. To remedy this ambiguity, a formal specification of how an access-control mechanism should behave is typically given in some sort of logic, often a subset of first-order logic. To show that an access-control system actually behaves correctly with respect to its specification, proofs are needed; however, the proofs that are often presented in the literature are hard or impossible to formally verify. The verification difficulty is partly due to the fact that the language used to do the proofs, while mathematical in nature, utilizes intuitive justifications to derive them. Intuitive language in proofs means that the proofs could be incomplete and/or contain subtle errors. ACCPL is small by design. By small we refer to the size of the language: the syntax, auxiliary definitions and semantics of ACCPL take only a few pages to describe. This compactness allows us to concentrate on the main goal of this thesis, which is the ability to reason about policies written in ACCPL with respect to specific questions. By making the language compact, we have stayed away from completeness and expressive power in several directions. For example, ACCPL uses only a single policy combinator, the conjunction policy combinator. The design of ACCPL is therefore a trade-off between ease of formal proof of correctness and expressive power. We also consider ACCPL a core access-control policy language, since we have retained the core features of many access-control policy languages. For instance, ACCPL employs a single condition type called a "prerequisite", where other languages may have very expressive and rich sets of conditions.
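ACCPL itself is formalized in a proof assistant; purely as an illustration of the shape of such a core language — a single conjunction combinator over rules guarded by a single "prerequisite" condition type — consider the following sketch, where all names and the evaluation semantics are hypothetical, not ACCPL's:

```python
# Toy core policy language: rules with one condition type ("prerequisite"),
# one combinator (conjunction), and a recursive permission check.
from dataclasses import dataclass
from typing import Callable, Dict

Env = Dict[str, object]          # request attributes (subject, asset, ...)
Prereq = Callable[[Env], bool]   # the single condition type

@dataclass
class Rule:
    prereq: Prereq
    action: str

@dataclass
class AndPolicy:                 # the single policy combinator
    left: "Policy"
    right: "Policy"

Policy = Rule | AndPolicy

def permitted(policy: Policy, env: Env, action: str) -> bool:
    if isinstance(policy, Rule):
        return policy.action == action and policy.prereq(env)
    return (permitted(policy.left, env, action)
            and permitted(policy.right, env, action))

p = AndPolicy(Rule(lambda e: e["role"] == "student", "read"),
              Rule(lambda e: e["count"] < 5, "read"))
print(permitted(p, {"role": "student", "count": 2}, "read"))  # True
```

Keeping the language this small is what makes machine-checked proofs about every policy tractable, which is the trade-off the thesis describes.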
APA, Harvard, Vancouver, ISO, and other styles