Doctoral dissertations on the topic "Flux parallèle"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the top 50 doctoral dissertations on the topic "Flux parallèle".
Next to each work in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever the corresponding details are available in the work's metadata.
Browse doctoral dissertations from a wide variety of disciplines and assemble your bibliography correctly.
Chouchene, Wissem. "Vers une reconfiguration dynamique partielle parallèle par prise en compte de la régularité des architectures FPGA-Xilinx". Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10135/document.
This work proposes two complementary design flows allowing the broadcast of a partial bitstream to a set of identical Partially Reconfigurable Regions (PRRs). Both flows target Xilinx FPGAs. The first, called ADForMe (Automatic DPPR Flow For Multi-PRR Architectures), automates the traditional Xilinx DPR flow by automating the floorplanning phase. This floorplanning is carried out by AFLORA (Automatic Floorplanning For Multi-PRR Architectures), an algorithm we designed that gives all PRRs the same geometric footprint, taking into account the technological parameters of the FPGA and the architectural parameters of the design so as to allow bitstream relocation. The second flow promotes the 1D and 2D relocation technique in order to broadcast a partial bitstream (a functionality) to a set of PRRs during system configuration, thereby optimizing the size of the bitstream memory. We also propose a suitable hardware architecture capable of performing this broadcast. Experiments on recent Xilinx FPGAs demonstrate the execution speed of the AFLORA algorithm as well as the efficiency of the automated bitstream-relocation flow. Together, these two flows bring flexibility and reusability to IP components embedded in multi-PRR architectures, reducing design time and improving design productivity.
Preud'Homme, Thomas. "Communication inter-cœurs optimisée pour le parallélisme de flux". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00931833.
Perrinet, Laurent. "Comment déchiffrer le code impulsionnel de la Vision? Étude du flux parallèle, asynchrone et épars dans le traitement visuel ultra-rapide". Phd thesis, Université Paul Sabatier - Toulouse III, 2003. http://tel.archives-ouvertes.fr/tel-00002693.
Perrinet, Laurent. "Comment déchiffrer le code impulsionnel de la vision ? Etude du flux parallèle, asynchrone et épars dans le traitement visuel ultra-rapide". Toulouse 3, 2003. http://www.theses.fr/2003TOU30033.
Aymard, Benjamin. "Simulation numérique d'un modèle multi-échelle de cinétique cellulaire formulé à partir d'équations de transport non conservatives". Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066254/document.
The thesis focuses on the numerical simulation of a multiscale biomathematical model, grounded on a cellular basis, explaining the phenomenon of selection within the population of ovarian follicles. The PDE model consists of a high-dimensional hyperbolic quasilinear system governing the evolution of cell density functions for a cohort of follicles (around twenty in practice). The equations are coupled in a nonlocal way by control terms involving moments of the solution, defined on either the mesoscopic or the macroscopic scale. Three chapters of the thesis, presented in the form of articles, develop the method used to simulate the model numerically. The numerical code is implemented on a parallel architecture. The PDEs are discretized with a Finite Volume scheme on an adaptive mesh driven by a multiresolution analysis. Flux discontinuities, at the interfaces between different cellular states, require a specific treatment to remain compatible with the high-order numerical scheme and mesh refinement. A chapter of the thesis is devoted to the calibration method, which translates biological knowledge into constraints on the parameters and model outputs. The multiscale character is crucial, since parameters act at the microscopic level in the equations governing the evolution of the cell density within each follicle, whereas quantitative biological data are rather available at the mesoscopic and macroscopic levels. The last chapter focuses on the analysis of the computational performance of the parallel code, based on statistical methods inspired by the field of uncertainty quantification.
Belmajdoub, Fouad. "Développement d'une méthode de reconstruction 3D du tronc scoliotique par imagerie numérique stéréoscopique et modélisation des calculs par réseaux de Pétri à flux de données en vue d'une implémentation sur une architecture parallèle". Aix-Marseille 3, 1993. http://www.theses.fr/1993AIX30087.
Bouaziz, Mohamed. "Réseaux de neurones récurrents pour la classification de séquences dans des flux audiovisuels parallèles". Thesis, Avignon, 2017. http://www.theses.fr/2017AVIG0224/document.
In the same way as TV channels, data streams can be represented as sequences of successive events that exhibit chronological relations (e.g. a series of programs, scenes, etc.). For a given channel, broadcast programming follows rules defined by the channel itself, but can also be affected by the programming of competing ones. Under such conditions, the event sequences of parallel streams can provide additional knowledge about the events of a particular stream. In machine learning, various methods suited to processing sequential data have been proposed, and Long Short-Term Memory (LSTM) recurrent neural networks have proven their worth in many applications dealing with this type of data. Nevertheless, these approaches are designed to handle only a single input sequence at a time. The main contribution of this thesis is the development of approaches that jointly process sequential data derived from multiple parallel streams. The application task of our work, carried out in collaboration with the computer science laboratory of Avignon (LIA) and the EDD company, is to predict the genre of a telecast. This prediction can be based on the history of previous telecast genres in the same channel, but also on the histories of parallel channels. We propose a telecast genre taxonomy adapted to such automatic processing, as well as a dataset containing the parallel history sequences of 4 French TV channels. Two original methods are proposed to take parallel stream sequences into account. The first, the Parallel LSTM (PLSTM) architecture, is an extension of the LSTM model: PLSTM processes each sequence in a separate recurrent layer and sums the outputs of these layers to produce the final output. The second, called MSE-SVM, takes advantage of both LSTM and Support Vector Machine (SVM) methods.
Firstly, latent feature vectors are generated independently for each input stream, using the output event of the main one. These new representations are then merged and fed to an SVM. The PLSTM and MSE-SVM approaches proved their ability to integrate parallel sequences by outperforming, respectively, the LSTM and SVM models that only take the main stream's sequences into account. Both approaches profit from the information contained in long sequences, but have difficulty dealing with short ones. Though MSE-SVM generally outperforms PLSTM, this problem with short sequences is more pronounced for MSE-SVM. Finally, we extend the latter approach by feeding in additional information related to each event in the input sequences (e.g. the weekday of a telecast). This extension, named AMSE-SVM, behaves remarkably better on short sequences without affecting performance on long ones.
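The combination idea behind the PLSTM architecture described in this abstract, one recurrent layer per parallel stream, with the per-stream outputs summed, can be sketched as follows. This is only an illustrative toy: the "recurrent layer" here is a one-unit Elman-style cell with made-up fixed weights, not a real LSTM, and the input values are invented.

```python
# Minimal sketch of the PLSTM combination scheme: each parallel stream is
# processed by its own recurrent layer; the final output is the sum of the
# per-stream outputs. Weights and sizes are illustrative assumptions.
import math

def recurrent_layer(sequence, w_in=0.5, w_rec=0.8):
    """Toy recurrent cell: h_t = tanh(w_in * x_t + w_rec * h_{t-1})."""
    h = 0.0
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h)
    return h

def plstm_output(parallel_sequences):
    """One separate recurrent layer per stream; outputs are summed."""
    return sum(recurrent_layer(seq) for seq in parallel_sequences)

streams = [[0.1, 0.4, 0.2],   # history of the main channel
           [0.3, 0.0, 0.5],   # histories of two parallel channels
           [0.2, 0.2, 0.1]]
print(plstm_output(streams))
```

In the real PLSTM each layer is an LSTM over genre-embedding sequences, but the structural point is the same: the streams are kept separate until the final summation.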
Garrigues, Matthieu. "Accélération Algorithmique et Logicielle del’Analyse Vidéo du Mouvement". Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLY018.
Motion analysis in a video consists in estimating, from a sequence of images, the displacement of the objects projected on the focal plane of a camera, whether static or mobile. Many fields, such as robotics, video surveillance, cinema, and military applications, rely on this analysis to interpret the content of a video. This problem was one of the first to be approached by researchers in image processing. Numerous solutions have been proposed and allow a sufficiently accurate and robust estimate for a large number of applications. However, the algorithmic complexity of these solutions and/or the lack of optimization of their software implementations makes their use difficult or impossible in applications with tight computational constraints. In the work presented in this thesis, we optimized three types of motion analysis, taking into account not only algorithmic complexity but also all the factors affecting computation time on current processors, such as parallelization, memory consumption, the regularity of memory accesses, and the type of arithmetic operations. This led us to develop our thesis at the intersection of software engineering and image processing. Our contributions have enabled the development of real-time applications such as action recognition, video stabilization and segmentation of mobile objects.
Gorin, Jérôme. "Machine virtuelle universelle pour codage vidéo reconfigurable". Phd thesis, Institut National des Télécommunications, 2011. http://tel.archives-ouvertes.fr/tel-00997683.
Grosjean, Alex. "Impact of geometry and shaping of the plasma facing components on hot spot generation in tokamak devices". Electronic Thesis or Diss., Aix-Marseille, 2020. http://www.theses.fr/2020AIXM0556.
This PhD falls within the scope of ITER project support, aiming to study the thermal behavior of ITER-like PFC prototypes in two superconducting tokamaks: EAST (Hefei) and WEST (Cadarache). These prototypes correspond to castellated tungsten monoblocks placed along a cooling tube, with small gaps (0.5 mm) between them, forming plasma-facing units that extract the heat from the components. The introduction of gaps between monoblocks (toroidal) and between plasma-facing units (poloidal), to relieve the thermomechanical stresses in the divertor, implies that poloidal leading edges may be exposed at a near-normal incidence angle. Local overheating is therefore expected in a thin lateral band at the top of each monoblock, and can be enhanced when neighboring components are misaligned. In this work, we study the impact of two component geometries (sharp and chamfered leading edges), as well as of their misalignment, on local hot spot generation, by means of embedded diagnostics (TC/FBG) and a sub-millimeter infrared system (~0.1 mm/pixel). The measured emissivity varies with wavelength and temperature but, above all, with the surface state of the component, which evolves under plasma exposure during the experimental campaigns. The divertor Langmuir probes measure the plasma temperature, allowing an estimate of the ion Larmor radius, which may play a role in the local heat flux distribution around poloidal and toroidal edges. The results presented in this thesis, which confirm modelling predictions with experimental measurements, support ITER's final decision to include a 0.5 mm toroidal beveling of the monoblocks on the vertical divertor targets, protecting poloidal leading edges from excessive heat flux.
Song, Ge. "Méthodes parallèles pour le traitement des flux de données continus". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC059/document.
We live in a world where a vast amount of data is continuously generated, and it arrives in many forms: every search on Google, every purchase on Amazon, every 'like' on Facebook, every image uploaded to Instagram, every sensor activation produces new data. Isolated data is of little value; it is when this huge amount of data is connected that it becomes valuable for finding new insights. At the same time, data is time-sensitive: the most accurate and effective way of describing it is as a data stream, and if the latest data is not promptly processed, the opportunity to obtain the most useful results is missed. A parallel and distributed system for processing large amounts of streaming data in real time therefore has important research value and good application prospects. This thesis focuses on the study of parallel and continuous data stream joins. We divide this problem into two categories: Data-Driven Parallel and Continuous Join, and Query-Driven Parallel and Continuous Join.
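The continuous stream join studied in this thesis can be illustrated with the classic symmetric hash join, in which each arriving tuple is both stored and immediately probed against the opposite stream, so results are emitted incrementally as data flows in. This sketch is a generic textbook construction, not the thesis's own algorithm, and the keys and values are invented.

```python
# Minimal sketch of a continuous symmetric hash join between two streams:
# every incoming tuple is inserted into its stream's hash table and probed
# against the other stream's table, producing join results incrementally.
from collections import defaultdict

class StreamJoin:
    def __init__(self):
        # one hash table per stream, keyed on the join attribute
        self.tables = (defaultdict(list), defaultdict(list))

    def insert(self, stream_id, key, value):
        """Insert a tuple from one stream; return new joined pairs."""
        own, other = self.tables[stream_id], self.tables[1 - stream_id]
        own[key].append(value)
        return [(value, v) if stream_id == 0 else (v, value)
                for v in other[key]]

join = StreamJoin()
join.insert(0, "user42", "click")            # no partner yet -> []
print(join.insert(1, "user42", "purchase"))  # -> [('click', 'purchase')]
```

Parallelizing this operator, e.g. partitioning the key space across nodes while the streams keep arriving, is exactly where the data-driven vs. query-driven distinction made in the abstract comes into play.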
Nguyen, Phuong Thanh. "Study of the aquatic dissolved organic matter from the Seine River catchment (France) by optical spectroscopy combined to asymmetrical flow field-flow fractionation". Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0154/document.
The main goal of this thesis was to investigate the characteristics of dissolved organic matter (DOM) within the Seine River catchment in the northern part of France. This PhD thesis was performed within the framework of the PIREN-Seine research program. The application of UV/visible absorbance and EEM fluorescence spectroscopy, combined with PARAFAC and PCA analyses, allowed us to identify different sources of DOM and highlighted spatial and temporal variations of DOM properties. The Seine River was characterized by the strongest biological activity. DOM from the Oise basin seemed to have more "humic" characteristics, while the Marne basin was characterized by a third specific type of DOM. For samples collected during low-water periods, the distributions of the 7 components determined by PARAFAC treatment varied between the studied sub-basins, highlighting different organic materials in each zone. A homogeneous distribution of the components was obtained for the samples collected in periods of flood. Then, a semi-quantitative asymmetrical flow field-flow fractionation (AF4) methodology was developed to fractionate DOM. The following optimized parameters were determined: a cross-flow rate of 2 ml min-1 during the focus step, with a focusing time of 2 min, and an exponential gradient of cross-flow from 3.5 to 0.2 ml min-1 during the elution step. The fluorescence properties of various size-based fractions of DOM were evaluated by applying the optimized AF4 methodology to fractionate 13 samples selected from the three sub-basins. The fluorescence properties of these fractions were analysed, allowing us to discriminate between the terrestrial and autochthonous origins of DOM.
Lalevée, Philippe. "Algorithmes paralleles par flux dans les graphes : des fondements aux applications". Paris 6, 1995. http://www.theses.fr/1995PA066640.
Denoulet, Julien. "Architectures massivement parallèles de systèmes sur circuits (SoC) pour le traitement de flux vidéos". Paris 11, 2004. http://www.theses.fr/2004PA112223.
This thesis describes the evolution of the associative mesh, a massively parallel SIMD architecture dedicated to image processing. The design is drawn from a theoretical model called associative nets, which implements a large number of image processing algorithms in an efficient way. With a view to a system-on-chip (SoC) implementation of the associative mesh, this study presents the various possibilities of evolution for this architecture and evaluates their consequences in terms of hardware cost and algorithmic performance. We show that a reorganisation of the structure, based on the virtualisation of its elementary processors, reduces the design's area in substantial proportions and opens new prospects for computation and memory management. Using an evaluation environment based on a programming library of associative nets and a parameterized description of the architecture in SystemC, we show that a virtualised associative mesh achieves real-time processing for a great number of algorithms: low-level operations such as convolution filters, statistical algorithms, and mathematical morphology, as well as more complex treatments such as split-and-merge segmentation, watershed segmentation, and motion detection using Markovian relaxation.
Abellard, Patrick. "Contribution a l'etude d'extensions des reseaux de petri a flux de donnees pour la telesymbiotique assistee par calculateur". Toulon, 1988. http://www.theses.fr/1988TOUL0003.
Togbe, Maurras Ulbricht. "Détection distribuée d'anomalies dans les flux de données". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS400.
Anomaly detection is an important issue in many application areas, such as healthcare, transportation, and industry. It is a very active topic that tries to meet an ever-increasing demand in fields such as intrusion detection and fraud detection. In this thesis, after a comprehensive state of the art, the unsupervised method Isolation Forest (IForest) is studied in depth, with a presentation of its limitations that had not been addressed in the literature. Our new version of IForest, called Majority Voting IForest, improves its execution time. Our ADWIN-based IForest ASD and NDKSWIN-based IForest ASD methods allow the detection of anomalies in data streams with better handling of concept drift. Finally, distributed anomaly detection using IForest is studied and evaluated. All our proposals have been validated with experiments on different datasets.
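The Isolation Forest idea this abstract builds on can be shown in a few lines: anomalous points are separated from the rest of the data by fewer random splits than normal points. The sketch below is a simplified one-dimensional illustration of that principle, not the thesis's Majority Voting or ASD variants, and the sample values are invented.

```python
# Illustrative sketch of the isolation principle behind IForest: the average
# number of random splits needed to isolate a point is lower for anomalies.
import random

def isolation_depth(point, data, depth=0, max_depth=10):
    """Depth at which `point` becomes isolated under random splitting."""
    if len(data) <= 1 or depth >= max_depth:
        return depth
    lo, hi = min(data), max(data)
    if lo == hi:
        return depth
    split = random.uniform(lo, hi)
    # keep only the side of the split that still contains the point
    side = [x for x in data if (x < split) == (point < split)]
    return isolation_depth(point, side, depth + 1, max_depth)

def anomaly_score(point, data, n_trees=200):
    """Average isolation depth over many random trees; lower = more anomalous."""
    return sum(isolation_depth(point, data) for _ in range(n_trees)) / n_trees

random.seed(0)
data = [1.0, 1.1, 0.9, 1.2, 0.8, 1.05, 0.95, 10.0]  # 10.0 is the outlier
print(anomaly_score(10.0, data), anomaly_score(1.0, data))
```

The outlier's average depth comes out much smaller than the inliers', which is the signal IForest aggregates over an ensemble of such trees.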
Gorin, Jérôme. "Machine virtuelle universelle pour codage vidéo reconfigurable". Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2011. http://www.theses.fr/2011TELE0025.
This thesis proposes a new paradigm that abstracts the architecture of computer systems for representing virtual machines' applications. Current applications are based on an abstraction of the machine's instructions and on an execution model that reflects the operation of these instructions on the target machine. While these two models make applications portable across a wide range of systems, they do not express concurrency between instructions. Expressing concurrency is essential, however, to optimize application processing as the number of processing units in computer systems increases. We first develop a "universal" representation of applications for virtual machines based on dataflow graph modeling. An application is modeled by a directed graph in which vertices are computation units (the actors) and edges represent the flow of data between them. Each computation unit can then be treated independently on separate resources, making the concurrency between instructions explicit. Exploiting this new description formalism requires a change in programming rules. To that end, we introduce and define a "Minimal and Canonical Representation" of actors, based both on actor-oriented programming and on the instruction abstraction used in existing virtual machines. Our major contribution, which incorporates the two new representations, is the development of a "Universal Virtual Machine" (UVM) managing specific mechanisms of adaptation, optimization, and scheduling on top of the Low-Level Virtual Machine (LLVM) infrastructure. The relevance of the UVM is demonstrated on the MPEG Reconfigurable Video Coding (RVC) standard; MPEG RVC provides a reference decoder application, compliant with the MPEG-4 Part 2 Simple Profile, in the form of a dataflow graph.
One application of this thesis is a new dataflow description of a decoder compliant with the MPEG-4 Part 10 Constrained Baseline Profile, which is twice as complex as the reference MPEG RVC application. Experimental results show a near-twofold performance gain on two cores compared to single-core execution. The optimizations developed yield a 25% performance gain while halving compile times. This work demonstrates the operational nature of the standard and offers a universal framework that extends beyond the video domain (3D, sound, picture...).
Skordos, Panayotis Augoustos. "Modeling flue pipes--subsonic flow, lattice Boltzmann, and parallel distributed computers". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36534.
Includes bibliographical references (p. 251-256).
by Panayotis A. Skordos.
Ph.D.
Kang, Yong Tae. "Experimental investigation of critical heat flux in transient boiling systems with vertical thin rectangular parallel plate channels /". The Ohio State University, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=osu1244826053.
Cureton, Christopher Wayne. "The implementation of four additional inviscid flux methods in the U²NCLE parallel unstructured Navier-Stokes solver". Master's thesis, Mississippi State : Mississippi State University, 2007. http://library.msstate.edu/etd/show.asp?etd=etd-04032007-221145.
Enomoto, Cristina. "Uma linguagem para especificação de fluxo de execução em aplicações paralelas". [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261813.
Pełny tekst źródłaDissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação
Abstract: Many distributed and parallel systems allow only a basic task flow, in which the parallel tasks are distributed and their results collected. In some systems the application execution flow supports a dependence relationship among tasks, represented by a directed acyclic graph. Even with this model, it is not possible to execute in parallel some important applications, for example genetic algorithms. Therefore, there is a need for a new specification model with more sophisticated flow controls that allow some kind of iterative processing at the level of task management. The purpose of this work is to propose a specification language for parallel application execution workflows which provides new types of control structures and allows the implementation of a broader range of applications. The language is based on XML (eXtensible Markup Language) notation, which gives it characteristics such as simplicity and flexibility. To evaluate these and other characteristics, the language was implemented on the JoiN parallel processing system. Besides allowing the creation and execution of new parallel applications whose task flows contain loops and conditional branches, the proposed language was easy to use and did not cause any significant overhead to the parallel system.
Master's degree
Computer Engineering
Master in Electrical Engineering
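The XML-based workflow language proposed in the Enomoto dissertation above adds loops with controlled iterations and conditional branches to task flows. As a purely hypothetical illustration (the element and attribute names below are invented for this sketch, not the thesis's actual schema), such a specification might look like this, parsed here with Python's standard library:

```python
# Hypothetical XML workflow with a bounded loop and a conditional branch,
# in the spirit of the language described above. Element names are
# illustrative assumptions, not the dissertation's real schema.
import xml.etree.ElementTree as ET

spec = """
<workflow name="genetic-algorithm">
  <task id="init"/>
  <loop maxIterations="100">
    <parallel>
      <task id="evaluate"/>
      <task id="mutate"/>
    </parallel>
    <if condition="fitness &gt;= 0.99">
      <task id="report"/>
      <break/>
    </if>
  </loop>
</workflow>
"""

root = ET.fromstring(spec)
loop = root.find("loop")
print(root.get("name"), loop.get("maxIterations"),
      [t.get("id") for t in root.iter("task")])
```

A plain DAG model cannot express the `loop`/`break` structure here, which is precisely the gap the proposed language fills for iterative applications such as genetic algorithms.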
Magna, Patrícia. "Redução dos bits de emparelhamento da máquina de fluxo de dados de Manchester". Universidade de São Paulo, 1992. http://www.teses.usp.br/teses/disponiveis/54/54132/tde-17042009-115457/.
The dataflow model is especially relevant to research on high-performance architectures. In this model, execution control is driven solely by data availability, allowing maximum exploitation of the parallelism implicit in programs. The present work is based on the Manchester dataflow machine, which, in order to handle reentrant code, requires each data token to carry a label in addition to the destination-instruction field. This additional information, which accounts for about 70% of the data token, complicates the machine implementation, substantially bounds the execution speed and prevents full utilization of the model. This work presents approaches for reducing the amount of information needed for proper machine operation, in order to achieve a simpler and more effective implementation.
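The tagged-token matching that this abstract refers to can be sketched in a few lines: a token carries a destination instruction plus a label (tag) identifying its activation, and a two-operand instruction fires only when both tokens with the same (destination, tag) key have arrived. This is a schematic illustration of the matching-store idea, not the Manchester hardware; the instruction names and values are invented.

```python
# Sketch of tagged-token matching: tokens are paired on (destination, label).
# The label distinguishes loop iterations of reentrant code; it is the token
# overhead the dissertation seeks to reduce.
matching_store = {}

def receive(dest, tag, value):
    """Return the operand pair when a match completes, else buffer the token."""
    key = (dest, tag)
    if key in matching_store:
        return (matching_store.pop(key), value)  # partner found: fire
    matching_store[key] = value                  # wait for the partner token
    return None

print(receive("add@17", 0, 2))   # -> None (waiting for its partner)
print(receive("add@17", 1, 5))   # -> None (different iteration, no match)
print(receive("add@17", 0, 3))   # -> (2, 3): instruction can fire
```

Without the label, the tokens from iterations 0 and 1 above would be wrongly paired, which is why reentrant code forces the extra field onto every token.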
Magna, Patrícia. "Proposta e simulação de uma arquitetura a fluxo de dados de segunda geração". Universidade de São Paulo, 1997. http://www.teses.usp.br/teses/disponiveis/76/76132/tde-06042009-113436/.
This work presents the SEED architecture, proposed in the light of experience with existing dataflow-based architectures. SEED uses the dataflow model to schedule and execute sets of instructions, called code blocks, seeking to retain the main quality of the dataflow model: exposing the maximum parallelism of programs. However, the architecture explores a coarser granularity than is usual in dataflow architectures in order to reduce data token traffic, addressing problems such as excessive memory occupation and high hardware complexity. Besides specifying all the units that compose the SEED architecture, this work also proposes a way of partitioning programs into code blocks that SEED can execute. Benchmarks generated with this partitioning approach were run on the SEED architecture simulator in order to analyze the behavior of the proposed architecture under different configurations.
Toss, Julio. "Algorithmes et structures de données parallèles pour applications interactives". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM056/document.
The quest for performance has been a constant throughout the history of computing systems. It has been more than a decade now since the sequential processing model showed its first signs of exhaustion in sustaining performance improvements. Walls to sequential computation pushed a paradigm shift and established parallel processing as the standard in modern computing systems. With the widespread adoption of parallel computers, many algorithms and applications have been ported to fit these new architectures. However, in unconventional applications with interactivity and real-time requirements, achieving efficient parallelizations is still a major challenge. The real-time performance requirement shows up, for instance, in user-interactive simulations, where the system must be able to react to the user's input within a computation time-step of the simulation loop. The same kind of constraint appears in streaming-data monitoring applications: when an external source of data, such as traffic sensors or social media posts, provides a continuous flow of information to be consumed by an on-line analysis system, the consumer system has to keep a controlled memory budget and deliver fast processed information about the stream. Common optimizations relying on pre-computed models or static indexes of data are not possible in these highly dynamic scenarios. The dynamic nature of the data brings up several performance issues, originating from the problem decomposition for parallel processing and from the data locality maintenance for efficient cache utilization. In this thesis we address data-dependent problems in two different applications: one in physics-based simulation and the other in streaming data analysis. For the simulation problem, we present a parallel GPU algorithm for computing multiple shortest paths and Voronoi diagrams on a grid-like graph.
For the streaming data analysis problem, we present a parallelizable data structure, based on packed memory arrays, for indexing dynamic geo-located data while keeping good memory locality.
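The packed memory array (PMA) mentioned in this abstract keeps sorted elements in a contiguous array with deliberate gaps, so an insertion usually shifts only a few neighboring cells and preserves memory locality. The sketch below is a heavily simplified, sequential illustration of that idea, not the thesis's parallel geo-located index; the density threshold and rebuild policy are illustrative choices.

```python
# Simplified packed memory array: sorted elements with gaps (None). Inserts
# slide elements into the nearest gap on the right; when the array gets too
# dense, it is rebuilt larger with gaps redistributed evenly.
class SimplePMA:
    def __init__(self, capacity=8):
        self.slots = [None] * capacity

    def _rebuild(self, items):
        self.slots = [None] * (2 * len(self.slots))
        step = len(self.slots) // max(1, len(items))
        for i, x in enumerate(items):
            self.slots[i * step] = x          # spread items, leaving gaps

    def insert(self, x):
        items = [v for v in self.slots if v is not None]
        if len(items) + 1 > len(self.slots) // 2:    # density threshold
            self._rebuild(items)
        pos = 0
        for i, v in enumerate(self.slots):           # position after last v < x
            if v is not None and v < x:
                pos = i + 1
        while self.slots[pos] is not None:           # shift right into a gap
            self.slots[pos], x = x, self.slots[pos]
            pos += 1
            if pos == len(self.slots):               # no gap left: rebuild
                self._rebuild([v for v in self.slots if v is not None] + [x])
                return
        self.slots[pos] = x

    def items(self):
        return [v for v in self.slots if v is not None]

pma = SimplePMA()
for x in [5, 3, 8, 1, 9, 2, 7]:
    pma.insert(x)
print(pma.items())  # -> [1, 2, 3, 5, 7, 8, 9]
```

Because elements stay sorted in one contiguous array, range scans are cache-friendly, which is the property that makes PMAs attractive for indexing a continuously arriving geo-located stream.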
McLaughlin, Jared D. "Parallel Processing of Reactive Transport Models Using OpenMP". Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2328.pdf.
Sisman, Cagri Tahsin. "Parallel Processing Of Three-dimensional Navier-stokes Equations For Compressible Flows". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606544/index.pdf.
Beucher, Jérôme. "Recherche et développement d'un détecteur gazeux PIM (Parallel Ionization Multiplier) pour la trajectographie de particules sous un haut flux de hadrons". Phd thesis, Université de Nantes, 2007. http://tel.archives-ouvertes.fr/tel-00191999.
Within the framework of the European hadron physics programme (EU-I3HP-JRA4), we investigated the multi-stage PIM detector for operation under a high hadron flux.
During this research and development work, we characterized numerous geometric configurations of a PIM structure with two amplification stages separated by a transfer gap, operating with a Ne + 10% CO2 gas mixture. Tests performed with high-energy hadron beams at CERN showed that the discharge probability can be strongly reduced with an appropriate PIM detector structure. A discharge rate below 10^-9 per incident hadron and a spatial resolution of 51 µm were also measured at the operating point corresponding to the beginning of the efficiency plateau (>96%).
Moraes, Jorge Marcos de. "Etude de la convection naturelle laminaire permanente entre deux plans paralleles avec des conditions pariétales imposées sur la densité du flux de chaleur". Perpignan, 1992. http://www.theses.fr/1992PERP0118.
Toss, Julio. "Parallel algorithms and data structures for interactive applications". Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/172043.
The quest for performance has been a constant throughout the history of computing systems. More than a decade ago, the sequential processing model showed its first signs of exhaustion in sustaining performance improvements. Walls to sequential computation pushed a paradigm shift and established parallel processing as the standard in modern computing systems. With the widespread adoption of parallel computers, many algorithms and applications have been ported to fit these new architectures. However, in unconventional applications with interactivity and real-time requirements, achieving efficient parallelizations is still a major challenge. The real-time performance requirement shows up, for instance, in user-interactive simulations, where the system must be able to react to the user's input within a computation time-step of the simulation loop. The same kind of constraint appears in streaming-data monitoring applications, for instance when an external source of data, such as traffic sensors or social media posts, provides a continuous flow of information to be consumed by an online analysis system. The consumer system has to keep a controlled memory budget and deliver fast processed information about the stream. Common optimizations relying on pre-computed models or static indexes of the data are not possible in these highly dynamic scenarios. The dynamic nature of the data brings up several performance issues, originating from the problem decomposition for parallel processing and from the data-locality maintenance needed for efficient cache utilization. In this thesis we address data-dependent problems in two different applications: one on physically based simulations and another on streaming data analysis. To deal with the simulation problem, we present a parallel GPU algorithm for computing multiple shortest paths and Voronoi diagrams on a grid-like graph.
Our contribution to the streaming data analysis problem is a parallelizable data structure, based on packed memory arrays, for indexing dynamic geo-located data while keeping good memory locality.
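The packed memory array the abstract refers to keeps sorted elements in an array with deliberate gaps, so insertions stay cheap on average while the data remains contiguous in memory. The following is an illustrative, much-simplified single-threaded sketch (class and parameter names are hypothetical, not the thesis' parallel implementation):

```python
class PackedMemoryArray:
    """Sorted elements in an array with gaps; when too dense, the
    array is grown and elements are re-spaced evenly, which keeps
    inserts cheap on average and preserves memory locality."""

    def __init__(self, capacity=8, max_density=0.7):
        self.slots = [None] * capacity
        self.count = 0
        self.max_density = max_density

    def _spread(self, capacity):
        # redistribute the occupied slots evenly over a (possibly larger) array
        items = [x for x in self.slots if x is not None]
        self.slots = [None] * capacity
        step = capacity / max(len(items), 1)
        for i, x in enumerate(items):
            self.slots[int(i * step)] = x

    def insert(self, key):
        if (self.count + 1) / len(self.slots) > self.max_density:
            self._spread(2 * len(self.slots))      # grow and re-space
        # position after the last occupied slot holding a smaller key
        pos = 0
        for i, x in enumerate(self.slots):
            if x is not None and x < key:
                pos = i + 1
        # shift right within the gaps to free the target slot
        free = pos
        while free < len(self.slots) and self.slots[free] is not None:
            free += 1
        if free == len(self.slots):                # no gap to the right
            self._spread(2 * len(self.slots))
            return self.insert(key)
        for j in range(free, pos, -1):
            self.slots[j] = self.slots[j - 1]
        self.slots[pos] = key
        self.count += 1

    def keys(self):
        return [x for x in self.slots if x is not None]
```

A real PMA rebalances only a logarithmic-sized window around the insert point; the global re-spacing above is the simplification that keeps the sketch short.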
Chen, Jiuqiang. "Designing scientific workflow following a structure and provenance-aware strategy". Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112221/document.
Bioinformatics experiments are usually performed using scientific workflows, in which tasks are chained together forming very intricate and nested graph structures. Scientific workflow systems have been developed to guide users in the design and execution of workflows. An advantage of these systems over traditional approaches is their ability to automatically record the provenance (or lineage) of the intermediate and final data products generated during workflow execution. The provenance of a data product contains information about how the product was derived, and it is crucial for enabling scientists to easily understand, reproduce, and verify scientific results. For several reasons, the complexity of workflow and workflow-execution structures is increasing over time, which has a clear impact on scientific workflow reuse. The global aim of this thesis is to enhance workflow reuse by providing strategies to reduce the complexity of workflow structures while preserving provenance. Two strategies are introduced. First, we propose an approach to rewrite the graph structure of any scientific workflow (classically represented as a directed acyclic graph (DAG)) into a simpler structure, namely a series-parallel (SP) structure, while preserving provenance. SP-graphs are simple and layered, making the main phases of a workflow easier to distinguish. Additionally, from a more formal point of view, polynomial-time algorithms for performing complex graph-based operations (e.g., comparing workflows, which is directly related to the problem of subgraph homomorphism) can be designed when workflows have SP-structures, while such operations are NP-hard for DAGs without any restriction on their structure.
The SPFlow rewriting and provenance-preserving algorithm and its associated tool are thus introduced. Second, we provide a methodology together with a technique able to reduce the redundancy present in workflows (by removing unnecessary occurrences of tasks). More precisely, we detect "anti-patterns", a term broadly used in program design to indicate the use of idiomatic forms that lead to over-complicated design and which should therefore be avoided. We thus provide the DistillFlow algorithm, able to transform a workflow into a distilled, semantically equivalent workflow that is free or partly free of anti-patterns and has a more concise and simpler structure. The two main approaches of this thesis (namely, SPFlow and DistillFlow) are based on a provenance model that we have introduced to represent the provenance structure of workflow executions. The notion of provenance-equivalence, which determines whether two workflows have the same meaning, is also at the center of our work. Our solutions have been systematically tested on large collections of real workflows, especially from the Taverna system. Our approaches are available for use at https://www.lri.fr/~chenj/
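The series-parallel structure this abstract targets has a classical operational test: a two-terminal DAG is SP exactly when repeated series and parallel reductions collapse it to a single source-to-sink edge. A minimal sketch of that standard test (not the SPFlow rewriting algorithm itself, which additionally preserves provenance):

```python
from collections import Counter

def is_series_parallel(edges, s, t):
    """Reduce a two-terminal DAG by series/parallel reductions; the graph
    is series-parallel iff it collapses to a single s->t edge.
    Simplified sketch: assumes the input edge list is acyclic."""
    multi = Counter(edges)
    changed = True
    while changed:
        changed = False
        # parallel reduction: merge duplicate u->v edges into one
        for e in list(multi):
            if multi[e] > 1:
                multi[e] = 1
                changed = True
        # series reduction: replace u->w->v by u->v when w has exactly
        # one incoming and one outgoing edge
        nodes = {u for u, _ in multi} | {v for _, v in multi}
        for w in nodes:
            if w in (s, t):
                continue
            ins = [e for e in multi if e[1] == w]
            outs = [e for e in multi if e[0] == w]
            if len(ins) == 1 and multi[ins[0]] == 1 and \
               len(outs) == 1 and multi[outs[0]] == 1:
                u, v = ins[0][0], outs[0][1]
                del multi[ins[0]], multi[outs[0]]
                multi[(u, v)] += 1
                changed = True
                break
    return set(multi) == {(s, t)} and multi[(s, t)] == 1
```

For example, a diamond (two parallel two-step branches) passes the test, while adding a cross edge between the branches breaks the SP property.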
Hu, Chih-Chieh. "Mechanistic modeling of evaporating thin liquid film instability on a BWR fuel rod with parallel and cross vapor flow". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28148.
Committee Chair: Abdel-Khalik, Said; Committee Member: Ammar, Mostafa H.; Committee Member: Ghiaasiaan, S. Mostafa; Committee Member: Hertel, Nolan E.; Committee Member: Liu, Yingjie.
Veloso, Lays Helena Lopes. "ALGORITMO K-MEANS PARALELO BASEADO EM HADOOP-MAPREDUCE PARA MINERAÇÃO DE DADOS AGRÍCOLAS". UNIVERSIDADE ESTADUAL DE PONTA GROSSA, 2015. http://tede2.uepg.br/jspui/handle/prefix/127.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This study investigated the use of a parallel K-Means clustering algorithm, based on the parallel MapReduce model, to improve the response time of data mining. The parallel K-Means was implemented in three phases, performed in each iteration: assignment of samples to the group with the nearest centroid by the Mappers, in parallel; local grouping of the samples assigned to the same group by the Mappers, using a Combiner; and update of the centroids by the Reducer. The performance of the algorithm was evaluated with respect to SpeedUp and ScaleUp. To achieve this, experiments were run in single-node mode and on a Hadoop cluster consisting of six off-the-shelf computers. The clustered data comprise flux-tower measurements from agricultural regions and belong to AmeriFlux. The results showed performance gains with an increasing number of machines, and the best time was obtained using six machines, reaching a SpeedUp of 3.25. To support our results, an ANOVA analysis was applied to repetitions using 3, 4 and 6 machines in the cluster, respectively. The ANOVA shows low variance between the execution times obtained for the same number of machines and a significant difference between the means for each number of machines. The ScaleUp analysis shows that the application scales well with an equivalent increase in data size and number of machines, achieving similar performance. With the results as expected, this work presents a parallel and scalable implementation of K-Means to run on a Hadoop cluster and improve the response time of clustering for large databases.
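The three phases the abstract describes (Mapper assigns each sample to its nearest centroid, a Combiner emits partial sums per cluster, the Reducer recomputes centroids) can be sketched in a single process as follows; function and parameter names are illustrative, not the thesis' Hadoop job:

```python
from collections import defaultdict

def kmeans_mapreduce(points, centroids, iterations=10):
    """One K-Means pass per iteration, organized as in the thesis:
    map (nearest-centroid assignment), combine (partial sum and count
    per cluster), reduce (new centroid = sum / count)."""
    def nearest(p):
        return min(range(len(centroids)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(p, centroids[i])))

    for _ in range(iterations):
        # map + combine: accumulate (coordinate sum, count) per cluster
        partial = defaultdict(lambda: (None, 0))
        for p in points:
            k = nearest(p)
            s, n = partial[k]
            s = [a + b for a, b in zip(s, p)] if s else list(p)
            partial[k] = (s, n + 1)
        # reduce: recompute each non-empty cluster's centroid
        for k, (s, n) in partial.items():
            centroids[k] = [x / n for x in s]
    return centroids
```

On a real cluster each Mapper would process one input split and the Combiner would run on the Mapper's local output, so only one (sum, count) pair per cluster crosses the network per Mapper.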
Ferlin, Edson Pedro. "Arquitetura paralela reconfigurável baseada em fluxo de dados implementada em FPGA". Universidade Tecnológica Federal do Paraná, 2008. http://repositorio.utfpr.edu.br/jspui/handle/1/128.
Many real-world engineering problems require high computational power, especially concerning processing speed. Modern parallel processing techniques play an important role in reducing processing time, as a consequence of the parallel execution of machine-level operations for a given application, taking advantage of possible independence between data and operations during processing. Recently, reconfigurable computing has gained wide attention thanks to its ability to combine hardware performance and software flexibility, allowing the development of very complex, compact and powerful systems for custom applications. This work proposes a new architecture for parallel reconfigurable computation that associates the power of parallel processing with the flexibility of reconfigurable devices. This architecture allows quick customization of the system for many problems and, particularly, for numerical computation. For instance, it can exploit the inherent parallelism of the numerical solution of differential equations, where several operations can be executed at the same time using a dataflow-graph model of the problem. The proposed architecture is composed of a Control Unit, responsible for the control of all Processing Elements (PEs) and the data flow between them, and many application-customized PEs, responsible for the execution of operations. Differently from sequential computation, the parallel computation takes advantage of the available PEs and their specificity for the application. Therefore, the proposed architecture can offer high performance, scalability and customized solutions for engineering problems.
Silva, Bruno de Abreu. "Gerenciamento de tags na arquitetura ChipCflow - uma máquina a fluxo de dados dinâmica". Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-17052011-085128/.
Research on alternative architectures and software has been growing in recent years. This research is driven by the advance of hardware technology, and such advances must be complemented by improvements in design methodologies and in test and verification techniques in order to use the technology effectively. Many of the alternative architectures and software, in general, explore the parallelism of applications, in contrast to the von Neumann model. Among high-performance alternative architectures there is the dataflow architecture. In this kind of architecture, the execution of programs is determined by data availability, thus parallelism is intrinsic in these systems. Dataflow architectures have again become a highlighted research area due to hardware advances, in particular the advances of reconfigurable computing and FPGAs (Field-Programmable Gate Arrays). The ChipCflow project is a tool for the execution of algorithms using dynamic dataflow graphs in FPGA. The main goal of this module of the ChipCflow project is to define the tagged-token format and the iterative operators that will manipulate the tags of tokens, and to implement them.
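The tagged-token mechanism central to this abstract can be illustrated in a few lines: an operator instance fires only when tokens carrying the same tag are present on all of its inputs, so tokens from independent loop iterations may arrive in any order. A toy software sketch (class and names are hypothetical; the thesis implements this in FPGA hardware):

```python
from collections import defaultdict

class DataflowOperator:
    """Toy dynamic-dataflow node: fires as soon as tokens with the same
    tag are present on both input ports, so distinct iterations
    (distinct tags) proceed independently and out of order."""

    def __init__(self, fn):
        self.fn = fn
        self.waiting = defaultdict(dict)  # tag -> {port: value}
        self.fired = {}                   # tag -> result token

    def receive(self, tag, port, value):
        self.waiting[tag][port] = value
        if len(self.waiting[tag]) == 2:   # both operands matched by tag
            operands = self.waiting.pop(tag)
            self.fired[tag] = self.fn(operands[0], operands[1])

add = DataflowOperator(lambda a, b: a + b)
# tokens arrive out of order, from two different iterations (tags 0 and 1)
add.receive(tag=1, port=0, value=10)
add.receive(tag=0, port=0, value=1)
add.receive(tag=0, port=1, value=2)   # tag 0 complete: fires with result 3
add.receive(tag=1, port=1, value=20)  # tag 1 complete: fires with result 30
```

The tag plays the role the abstract assigns to it: it identifies which operator instance a datum belongs to, which is what allows a dynamic (as opposed to static) dataflow machine to keep multiple iterations in flight.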
Spelta, Michele. "Commissioning of the third n_TOF spallation target: characterization of the neutron flux and beam profile using PPACs". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25126/.
Schulz, Leonhard Ferdinand [Verfasser], and Klaus [Akademischer Betreuer] Klitzing. "Parallel arrangements of quantum dots and quantum point contacts in high magnetic fields : periodic conductance modulations with magnetic flux change / Leonhard Ferdinand Schulz. Betreuer: Klaus Klitzing". Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2015. http://d-nb.info/1065235798/34.
Pełny tekst źródłaMarin, Manuel. "GPU-enhanced power flow analysis". Thesis, Perpignan, 2015. http://www.theses.fr/2015PERP0041.
This thesis addresses the utilization of Graphics Processing Units (GPUs) for improving the Power Flow (PF) analysis of modern power systems. Currently, GPUs are challenged by applications exhibiting an irregular computational pattern, as is the case for most known methods for PF analysis. At the same time, PF analysis needs to be improved in order to cope with new requirements of efficiency and accuracy coming from the Smart Grid concept. The relevance of GPU-enhanced PF analysis is twofold. On one hand, it expands the application domain of GPUs to a new class of problems. On the other hand, it consistently increases the computational capacity available for power system operation and design. The present work attempts to achieve that in two complementary ways: (i) by developing novel GPU programming strategies for available PF algorithms, and (ii) by proposing novel PF analysis methods that can exploit the numerous features present in GPU architectures. Specific contributions on GPU computing include: (i) a comparison of two programming paradigms, namely regularity and load-balancing, for implementing the so-called treefix operations; (ii) a study of the impact of the representation format on performance and accuracy for fuzzy interval algebraic operations; and (iii) the utilization of architecture-specific design as a novel strategy to improve the performance scalability of applications. Contributions on PF analysis include: (i) the design and evaluation of a novel method for uncertainty assessment, based on the fuzzy interval approach; and (ii) the development of an intrinsically parallel method for PF analysis, which is not affected by Amdahl's law.
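The fuzzy interval approach mentioned above builds on ordinary interval arithmetic: uncertain quantities are carried through the computation as closed intervals (a fuzzy number being, at each alpha-cut, one such interval). A minimal sketch of the two basic operations, purely illustrative of the arithmetic rather than of the thesis' GPU representation:

```python
class Interval:
    """Closed interval [lo, hi]. A fuzzy number can be represented as a
    stack of such intervals, one per alpha-cut."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # addition shifts both endpoints
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # multiplication must consider all endpoint products,
        # since signs can flip the ordering
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(min(prods), max(prods))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"
```

For example, an uncertain injection in [2, 3] p.u. scaled by a sensitivity in [-1, 4] yields [-3, 12]: the four endpoint products must all be checked, which is exactly the kind of branchy, irregular pattern the thesis studies on GPUs.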
Amine, Ramdani Ahmed, and Sebastian Rudnik. "Design and Construction of High Current Winding for a Transverse Flux Linear Generator Intended for Wave Power Generation". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-240366.
The demand for electricity from renewable sources is high, and nothing suggests this will change in the near future. One renewable source that remains relatively untapped is the energy of ocean waves. This is the source Anders Hagnestål had in mind when building a uniquely efficient generator intended to later extract electricity using floating point-absorbing wave power systems. The generator is of the linear type and converts the motion of the point-absorbing system into electricity. To test this generator design, construction of two full-scale prototypes began in 2017. This thesis specifically covers the work on the generator winding for the prototypes, from design through construction. The winding consists of many small, insulated winding conductors whose purpose is, among other things, to reduce skin effect and eddy-current losses. This approach, however, introduces a new problem: the winding conductors are connected together at each end, forming n closed current loops. The consequence can be large losses from circulating currents, caused by the magnetic stray flux around the iron core that the winding encloses. The starting point for minimizing these circulating currents is to transpose all winding conductors so that the resulting electromotive force in each current loop becomes as small as possible. Using simplified models and FEM simulations, a suitable way to transpose the winding wires was determined against various criteria. The chosen solution was to transpose the winding wires only once, with a so-called 180-degree transposition.
This gives a sufficiently good minimization of the circulating currents, and the great advantage of this solution is that it made it possible to wind the machine with the limited resources available to the project; a major drawback, however, was that a great deal of time was spent devising methods to carry out the construction of the winding, sometimes in unconventional ways.
Lopes, Joelmir José. "ChipCflow - uma ferramenta para execução de algoritmos utilizando o modelo a fluxo de dados dinâmico em hardware reconfigurável". Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-05122012-154304/.
Due to the complexity of applications, the growing demand for systems using millions of transistors, and the consequent hardware complexity, tools that convert C into a Hardware Description Language (HDL), such as VHDL or Verilog, have been developed. In this context, this thesis presents the ChipCflow project, which uses a dataflow architecture to implement high-performance logic in Field-Programmable Gate Arrays (FPGAs). Dataflow machines are programmable computers whose hardware is optimized for fine-grain dataflow parallel computation; in other words, the execution of programs is determined by data availability, thus parallelism is intrinsic in these systems. On the other hand, with the advance of microelectronics technology, FPGAs have been widely used, mainly because of their flexibility, the facilities they offer for implementing complex systems, and their intrinsic parallelism. One of the challenges is to create tools for programmers who use a HLL (High-Level Language), such as the C language, producing hardware directly. These tools should make the most of the programmers' experience, the parallelism of the dynamic dataflow architecture, and the flexibility and parallelism of FPGAs to produce efficient hardware optimized for high performance and lower power consumption. The ChipCflow project is a tool that converts application programs written in C into VHDL, based on the dynamic dataflow architecture. The main goal of this thesis is to define and implement the operators of ChipCflow using a dynamic dataflow architecture in FPGA. These operators use tagged tokens to identify data based on instances of operators, and their implementation and instances use an asynchronous implementation model in FPGA to achieve higher speed and lower power consumption.
Rojas, Balderrama Javier. "Gestion du cycle de vie de services déployés sur une infrastructure de calcul distribuée en neuroinformatique". Phd thesis, Université de Nice Sophia-Antipolis, 2012. http://tel.archives-ouvertes.fr/tel-00804893.
Pełny tekst źródłaAlanazi, Mohammed Awwad. "Non-invasive Method to Measure Energy Flow Rate in a Pipe". Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/103179.
MS
Zou, Mengchuan. "Aspects of efficiency in selected problems of computation on large graphs". Thesis, Université de Paris (2019-....), 2019. http://www.theses.fr/2019UNIP7132.
This thesis presents three works on different aspects of efficiency in algorithm design for large-scale graph computations. In the first work, we consider a setting of classical centralized computing and address the question of generalizing modular decomposition and designing time-efficient algorithms for this problem. Modular decomposition, and more broadly module detection, are ways to reveal and analyze modular properties in structured data. As the classical modular decomposition is well studied and has an optimal linear-time algorithm, we first study generalizations of these concepts to hypergraphs and present positive results obtained for three definitions of modular decomposition in hypergraphs from the literature. We also consider the generalization of allowing errors in classical graph modules and present negative results for two such definitions. The second work focuses on graph data query scenarios. Here the model differs from classical computing scenarios in that we are not designing algorithms to solve an original problem directly; instead, we assume that there is an oracle which provides partial information about the solution to the original problem. Oracle queries have time or resource consumption, which we model as costs, and we need an algorithm deciding how to efficiently query the oracle to obtain the exact solution to the original problem; thus here efficiency refers to the query costs. We study the generalized binary search problem, in which we compute an efficient query strategy to find a hidden target in graphs, and present the results of our work on approximating the optimal strategy of generalized binary search on weighted trees. Our third work draws attention to the question of memory efficiency. The setup in which we perform our computations is distributed and memory-restricted.
Specifically, every node stores its local data, exchanges data by message passing, and is able to perform local computations. This is similar to the LOCAL/CONGEST models in distributed computing, but our model additionally requires that every node can only store a constant number of variables with respect to its degree. This model can also describe natural algorithms. We implement an existing procedure of multiplicative reweighting for approximating the maximum s-t flow problem in this model; this type of methodology may potentially provide new opportunities for the field of local or natural algorithms. From a methodological point of view, the three types of efficiency concerns correspond to the following types of scenarios: in the first, the most classical one, given the problem we try to design by hand the most efficient algorithm; in the second, efficiency is regarded as an objective function, where we model query costs as an objective and use approximation-algorithm techniques to obtain a good design of an efficient strategy; in the third, efficiency is posed as a memory constraint and we design algorithms under this constraint.
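The generalized binary search the second work studies can be illustrated on an unweighted tree: query a centroid of the remaining candidate set, and the oracle either confirms the target or names the neighbor lying toward it, discarding at least half of the candidates. A minimal sketch under these assumptions (the thesis itself treats the harder weighted-tree case, and the function names here are illustrative):

```python
def component(tree, candidates, start, removed):
    """Candidates reachable from `start` without passing through `removed`."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in tree[u]:
            if w != removed and w in candidates and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def centroid(tree, candidates):
    """Vertex minimizing the largest component left after its removal
    (computed naively for the sketch)."""
    best, best_size = None, float("inf")
    for v in candidates:
        worst = max((len(component(tree, candidates, w, v))
                     for w in tree[v] if w in candidates), default=0)
        if worst < best_size:
            best, best_size = v, worst
    return best

def find_target(tree, oracle, candidates):
    """tree: node -> list of neighbors; oracle(v) returns v if v is the
    target, else the neighbor of v on the path toward the target."""
    while True:
        v = centroid(tree, candidates)
        ans = oracle(v)
        if ans == v:
            return v
        # keep only candidates on the target's side of the queried vertex
        candidates = component(tree, candidates, start=ans, removed=v)
```

On a path this degenerates to ordinary binary search; the number of queries is logarithmic in the number of candidates.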
MONTEIRO, Milson Silva. "INTERFACE DE ANÁLISE DA INTERCONEXÃO EM UMA LAN USANDO CORBA". Universidade Federal do Maranhão, 2002. http://tedebc.ufma.br:8080/jspui/handle/tede/311.
Conselho Nacional de Desenvolvimento Científico e Tecnológico
This work concerns the development of software (a graphical user interface) that makes it possible to analyze the interconnection in a LAN (Local Area Network) using CORBA (Common Object Request Broker Architecture) in a distributed, heterogeneous environment among several peripheral machines. It presents paradigms from graph theory: shortest-path problems (Dijkstra, Ford-Moore-Bellman), maximum-flow problems (Edmonds-Karp) and minimum-cost-flow problems (Busacker-Gowen), to formalize the development of the interface. We discuss graph theory and network flows, which are essential to guarantee the theoretical grounding.
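Of the three paradigms the abstract names, the shortest-path one is the simplest to sketch. A standard heap-based Dijkstra over an adjacency-dictionary graph (illustrative; not the thesis' CORBA interface code):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source`.
    graph: node -> {neighbor: non-negative edge weight}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already relaxed via a shorter path
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Edmonds-Karp and Busacker-Gowen follow the same graph representation but augment flow along BFS (respectively, cheapest) paths in a residual network.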
Fdhila, Walid. "Décentralisation optimisée et synchronisation des procédés métiers inter-organisationnels". Electronic Thesis or Diss., Nancy 1, 2011. http://www.theses.fr/2011NAN10058.
In mainstream service orchestration platforms, the orchestration model is executed by a centralized orchestrator through which all interactions are channeled. This architecture is not optimal in terms of communication overhead and has the usual problems of a single point of failure. Moreover, globalization and the increase of competitive pressures have created the need for agility in business processes, including the ability to outsource, offshore, or otherwise distribute once-centralized business processes or parts thereof. An organization that aims for such fragmentation of its business processes needs to be able to separate the process into different parts. There is therefore a growing need for the ability to fragment one's business processes in an agile manner, and to distribute and wire these fragments together so that their combined execution recreates the function of the original process. This thesis is focused on solving some of the core challenges resulting from the need to restructure enterprise interactions. Restructuring such interactions corresponds to the fragmentation of intra- and inter-enterprise business process models. This thesis describes how to identify, create, and execute process fragments without losing the operational semantics of the original process models. It also proposes methods to optimize the fragmentation process in terms of QoS properties and communication overhead. Further, it presents a framework to model web service choreographies in the Event Calculus formal language.
Cousin, Bernard. "Méthodologie de validation des systèmes structurés en couches par réseaux de Petri : application au protocole Transport". Phd thesis, Université Pierre et Marie Curie - Paris VI, 1987. http://tel.archives-ouvertes.fr/tel-00864063.
Pełny tekst źródłaFdhila, Walid. "Décentralisation Optimisée et Synchronisation des Procédés Métiers Inter-Organisationnels". Phd thesis, Université Henri Poincaré - Nancy I, 2011. http://tel.archives-ouvertes.fr/tel-00643827.
Hamidi, Hamid-Reza. "Couplage à hautes performances de codes parallèles et distribués". Phd thesis, 2005. http://tel.archives-ouvertes.fr/tel-00010971.
A new type of application, simulating several physical models at the same time, has appeared. This type of application is called "code coupling": several (physics) codes are coupled, or interconnected, so that they communicate to carry out the simulation.
This thesis addresses the issues involved in the high-performance coupling of parallel and distributed codes. Performance is obtained by designing distributed applications in which some components are parallelized and whose communications are efficient. The underlying idea of this thesis is to use a data-flow-oriented parallel programming language (here Athapascan) within two models for designing distributed applications: the remote procedure call (RPC) model and the stream-oriented model. The contributions of this research work are the following:
- Use of a dataflow language in an RPC computing grid;
Within the HOMA project, the extensions to the RPC model concerned, on the one hand, the control and communication semantics and, on the other hand, the runtime supports for better exploiting parallelism. The theoretical results of these extensions, for an implementation on the CORBA software bus using the KAAPI execution engine of Athapascan and for a homogeneous architecture such as a PC cluster, are presented in the form of an execution cost model. Experiments (elementary ones and on a real application) validated this cost model.
- Extension of a shared-memory model for code coupling;
In order to extend the shared-data access semantics of the Athapascan language, we proposed the notion of "temporal collection". This concept makes it possible to describe stream-type access semantics. The "spatial collection" allows parallel data to be better exploited. To specify the semantics associated with these new notions, we gave a new definition of shared data. Then, within the framework of this definition, we defined three types of shared data: "sequential", "temporal collection" and "spatial collection".
Eckerle, Kate. "Capriccio For Strings: Collision-Mediated Parallel Transport in Curved Landscapes and Conifold-Enhanced Hierarchies Among Mirror Quintic Flux Vacua". Thesis, 2017. https://doi.org/10.7916/D85H7TH2.
Wang, Bin 1984. "Parallel simulation of coupled flow and geomechanics in porous media". Thesis, 2014. http://hdl.handle.net/2152/28061.
Huang, Yu-Chi, i 黃佑騏. "The Forced Convection Numerical Simulation using Finite Volume Method in the Entrance Region of a Parallel Plate Channel with Constant Heat Flux". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/40240006240377022501.
National Taiwan Ocean University
Department of Mechanical and Mechatronic Engineering
103
In engineering numerical analysis, three numerical schemes are commonly used, namely the finite-volume, finite-difference, and finite-element methods. The Finite Volume Method (FVM) is the most common one in the thermal-fluid field. In the FVM, the flow area of interest is divided into many non-overlapping control volumes, and each grid node is surrounded by a control volume. An integration is performed over each control volume so that the conservation laws (such as mass, momentum, and energy) are satisfied within each specified control volume. Due to its conservative nature, the FVM approach is applied in the discretization and solution of the governing equations in this thesis. The FVM with the SIMPLE algorithm by Patankar is used. Several MATLAB programs are developed to study a steady, two-dimensional, laminar forced-convection flow with constant wall heat flux in a parallel-plate channel. A staggered grid configuration is used in the numerical solutions. Velocity, pressure, temperature, local Nusselt number, and friction coefficient are solved numerically. Both fully developed and developing flows are studied. Results are compared with available analytic solutions or empirical correlations. The numerical results of MATLAB are also compared with those of the commercial code Fluent. Firstly, the applicability of the staggered grid is examined, and the grid size is optimized for different Reynolds numbers up to 1000. The MATLAB program developed is then run for both fully developed and uniform inlet velocity profiles. Hydrodynamic and thermal entry lengths are obtained and compared with empirical correlations in the literature. The local friction coefficient and Nusselt number are numerically calculated and compared with available analytic solutions or Fluent results.
The applicability of the MATLAB program developed in this thesis, using a staggered grid, is well justified through the above comparisons for solving this type of forced-convection problem. Secondly, this thesis also compares the results of the FVM program and the Fluent numerical simulation; except for computation time, both achieve satisfactory results. It is proposed that different thermal boundary conditions (such as variable wall temperature or variable wall heat flux), rectangular or circular pipes, or different algorithms, such as SIMPLEC, could be studied in the future.
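The finite-volume discretization described above is easy to illustrate in one dimension: for fully developed laminar flow between parallel plates, integrating the momentum equation over each control volume yields a tridiagonal system whose solution reproduces the classical result f·Re = 96. A minimal Python sketch (the thesis uses MATLAB with the full 2-D SIMPLE algorithm; function and parameter names here are illustrative):

```python
def fvm_poiseuille(n=100, height=1.0, mu=1.0, dpdx=-1.0):
    """Fully developed flow between parallel plates on a 1-D finite-volume
    mesh (n cells across the gap, no-slip walls), solved with the Thomas
    algorithm. Returns cell velocities and f*Re, which should approach
    the analytic value 96 as the mesh is refined."""
    dy = height / n
    g = dpdx / mu
    # tridiagonal coefficients: interior faces link neighbor cells;
    # wall cells see the wall at half a cell spacing (face coefficient 2)
    lower = [1.0] * n
    upper = [1.0] * n
    diag = [-2.0] * n
    diag[0] = diag[-1] = -3.0           # -(1 + 2) at the wall cells
    rhs = [g * dy * dy] * n
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, n):
        m = lower[i] / diag[i - 1]
        diag[i] -= m * upper[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - upper[i] * u[i + 1]) / diag[i]
    u_mean = sum(u) / n
    d_h = 2 * height                    # hydraulic diameter, parallel plates
    f_re = 2 * (-dpdx) * d_h ** 2 / (mu * u_mean)
    return u, f_re
```

With 100 cells the computed f·Re is within a fraction of a percent of 96, and the peak velocity matches the analytic centerline value (-dp/dx)·H²/(8μ).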