
Doctoral dissertations on the topic "Dynamic Scheduling Technique"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles


Browse the 18 best doctoral dissertations on the topic "Dynamic Scheduling Technique".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read the abstract of the work online, if the relevant parameters are available in its metadata.

Browse doctoral dissertations from many different fields and build an appropriate bibliography.

1

Arafa, Hicham Abdel-Hamid. "An adaptive dynamic scheduling technique for parallel loops on shared memory multiprocessor systems". Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=851.

Full text of the source
Abstract:
Thesis (M.S.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains viii, 82 p. : ill. (some col.) Vita. Includes abstract. Includes bibliographical references (p. 77-79).
APA, Harvard, Vancouver, ISO, and other citation styles
2

Tuffaha, Mutaz. "An evaluation of a new Pricing technique to integrate Wind energy using two Time scales scheduling". Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-17370.

Full text of the source
Abstract:
The topic of smart grids has become one of the most important research arenas recently. Spurred by the urge to reduce our dependence on fossil fuels for several environmental and economic reasons, researchers have written many treatises on this topic. M. He, S. Murugesan, and J. Zhang suggested in their article, "Multiple Timescale Dispatch and Scheduling for Stochastic Reliability in Smart Grids with Wind Generation Integration", a new pricing and scheduling model to exploit wind (or any other stochastic) energy to the fullest extent. I studied this model and, through my experiments, found a defect. In this thesis, I evaluate this model. Firstly, I present it with detailed proofs of the main results. Secondly, I explain the experiments and simulations I performed. Then, I analyze the results to show the defect I discovered. Finally, I suggest a solution for that defect and point out the advantages of the model.
APA, Harvard, Vancouver, ISO, and other citation styles
3

March Cabrelles, José Luis. "Dynamic Power-Aware Techniques for Real-Time Multicore Embedded Systems". Doctoral thesis, Editorial Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/48464.

Full text of the source
Abstract:
The continuous shrink of transistor sizes has allowed more complex and powerful devices to be implemented in the same area, providing new capabilities and functionalities. However, this increase in complexity comes with a considerable rise in power consumption. This situation is critical in portable devices, where the energy budget is limited and, hence, battery lifetime defines the usefulness of the system. Therefore, power consumption has become a major concern in the design of real-time multicore embedded systems. This dissertation proposes several techniques aimed at saving energy without sacrificing real-time schedulability in this type of system. The proposed techniques deal with different main components of the system: in particular, they affect the task partitioner and the scheduler, as well as the memory controller. Some of the techniques are especially tailored for multicores with shared Dynamic Voltage and Frequency Scaling (DVFS) domains. Workload balancing among cores in a given domain has a strong impact on power consumption, since all the cores sharing a DVFS domain must run at the speed required by the most loaded core. In this thesis, a novel workload partitioning algorithm is proposed, namely Load-bounded Resource Balancing (LRB). The proposal allocates tasks to cores so as to balance the consumption of a given resource (processor or memory) among cores, improving real-time schedulability by increasing the overlap between processor and memory activity. However, distributing tasks in this way regardless of the individual core utilizations could lead to unfair load distributions; that is, one of the cores could become much more loaded than the others. To avoid this scenario, when a given utilization threshold is exceeded, tasks are assigned to the least loaded core. Unfortunately, workload partitioning alone is sometimes not able to achieve a good workload balance among cores. Therefore, this work also explores novel task migration approaches.
Two task migration heuristics are proposed. The first heuristic, referred to as Single Option Migration (SOM), attempts to perform only one migration when the workload changes, to improve utilization balance. Three variants of the SOM algorithm have been devised, depending on the point in time at which the migration attempt is performed: when a task arrives in the system (SOMin), when a task leaves the system (SOMout), and in both cases (SOMin-out). The second heuristic, referred to as Multiple Option Migration (MOM), explores an additional alternative workload partitioning before performing the migration attempt. Regarding the memory controller, new memory controller scheduling policies are devised. Conventional policies used in Non-Real-Time (NRT) systems are not appropriate for systems supporting both Hard Real-Time (HRT) and Soft Real-Time (SRT) tasks: such policies can introduce variability in the latencies of memory requests and, hence, cause an HRT deadline miss that could lead to a critical failure of the real-time system. To deal with this drawback, a simple policy, referred to as HR-first, which prioritizes requests of HRT tasks, is proposed. In addition, a more advanced approach, namely ATR-first, is presented. ATR-first prioritizes only those requests of HRT tasks that are necessary to ensure real-time schedulability, improving the Quality of Service (QoS) of SRT tasks. Finally, this thesis also tackles dynamic execution time estimation. The accuracy of this estimation is important to avoid deadline misses of HRT tasks, but also to increase QoS in SRT systems. Besides, it can also help to improve the schedulability of the system and reduce power consumption. The Processor-Memory (Proc-Mem) model, which dynamically predicts the execution time of real-time applications at each frequency level, is proposed.
During the first hyperperiod, this model measures at run-time, using Performance Monitoring Counters (PMCs), the portion of time that each core spends performing computation (CPU), waiting for memory (MEM), or both (OVERLAP). This information is then used to estimate the execution time at any other working frequency.
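The LRB idea summarized above can be sketched in a few lines. This is an illustrative reconstruction from the abstract, not the thesis's actual algorithm: each task carries a processor and a memory utilization, tasks go to the core that keeps the bottleneck resource most balanced, and a utilization threshold triggers a fallback to the least-loaded core. The threshold value and task figures are invented for the example.

```python
# Hypothetical sketch of a load-bounded resource-balancing partitioner in the
# spirit of LRB (not the exact algorithm from the thesis).

def lrb_partition(tasks, n_cores, threshold=0.8):
    """tasks: list of (cpu_util, mem_util) pairs. Returns a core index per task."""
    cpu = [0.0] * n_cores
    mem = [0.0] * n_cores
    assignment = []
    for c_util, m_util in tasks:
        # Prefer the core where adding this task minimizes the larger of the
        # two per-core resource utilizations (balances CPU and memory use).
        best = min(range(n_cores),
                   key=lambda i: max(cpu[i] + c_util, mem[i] + m_util))
        if cpu[best] + c_util > threshold:
            # Fall back to the least CPU-loaded core to avoid unfair loading.
            best = min(range(n_cores), key=lambda i: cpu[i])
        cpu[best] += c_util
        mem[best] += m_util
        assignment.append(best)
    return assignment

# Three tasks on two cores: the memory-heavy second task lands on its own core.
mapping = lrb_partition([(0.4, 0.1), (0.3, 0.5), (0.2, 0.2)], 2)
```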
March Cabrelles, JL. (2014). Dynamic Power-Aware Techniques for Real-Time Multicore Embedded Systems [Tesis doctoral]. Editorial Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/48464
APA, Harvard, Vancouver, ISO, and other citation styles
4

Gangammanavar, Harshavardhana J. "Optimal Coding and Scheduling Techniques for Broadcasting Deadline Constraint Traffic over Unreliable Wireless Channels". The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1262111942.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Balasubramaniam, Mahadevan. "Performance analysis and evaluation of dynamic loop scheduling techniques in a competitive runtime environment for distributed memory architectures". Master's thesis, Mississippi State : Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-04022003-154254.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Azzamouri, Ahlam. "Construction de méthodes et d'outils de planification pour l'industrie minière du phosphate en contexte de Lean Management". Thesis, Paris 10, 2018. http://www.theses.fr/2018PA100125/document.

Full text of the source
Abstract:
Global demand for mineral resources in general, and phosphate in particular, has been growing strongly for several years. In this increasingly competitive environment, every industry seeks to be pioneering and to reduce its costs to ensure its sustainability while complying with new responsible-development regulations. Faced with such challenges, a number of manufacturing industries have turned to the Japanese Lean Management approach. With this in mind, we designed our research project to develop a Lean Sustainable Mining methodology aimed at improving the mining industry's efficiency and effectiveness at the OCP-SA center axis phosphate mine. We first conducted a detailed review of the Lean Mining (LM) literature to assess how well such an approach had already been implemented, which points are directly relevant to the mine and which are not. We also reviewed the implementation methodologies and assessed how effectively they were used. This analysis highlighted multiple shortcomings in the methodological approaches, in the software bricks for decision-support systems, in the industrial organization methods, and in the adequate factoring in of all energy-related aspects. We first recommend applying the ASCI (Analysis, Specification, Conception, Implementation) methodology to LM, in order to identify all the steps upstream of the development of the relevant knowledge model and then the associated action model. This phase was developed based on a thorough analysis of mine soil characteristics in order to build a robust knowledge base. The methodology was then applied to the Ben Guerir mine. We believe that this methodological approach will be found useful by other industries in their effort to switch to LM. Our next step was to construct a model based on discrete-event simulation for short-term decision support in mine extraction planning. This model closely matches current extractive process operations (drilling, blasting, etc.) and takes into account all the constraints, whether related to the field (geology, blocks, state of the initial system, distances, ...) or to the equipment (capacity differences, technical downtime, ...). Other considerations that we factored in include the decisions taken upstream of the chain (priority source layers, maintenance program, orders, among others). The model yields the following output: the deposit blocks to be extracted in order to meet the demand defined over the planning horizon, an equipment Gantt chart defining the route to be taken by each piece of equipment, and the cumulative feed curves for the extracted source layers. The purpose is to extract the material required by the downstream blending process, while avoiding any non-value-added activities, and to improve overall chain performance. The phosphate industry needs to define the blends used to produce the ore qualities shipped to domestic and international customers. We have proposed a new method for optimal definition of these blends, designed to replace fixed bills of materials by dynamic ones that change over time. Our "dynamic blending" model serves to define, based on available source-layer stocks, i) the feedings to be conveyed from the deposit to the stock and ii) the optimal quantities to be extracted from each layer while meeting the customer's quality specifications. The purpose of this approach is to produce the right quality, preserve the phosphate-rich layers for the future, streamline stocks, and ensure a connection between the pushed upstream flow (deposit) and the pulled downstream flow (definition of blends).
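The core of the blending computation described above can be illustrated with a toy two-layer case. This is only a sketch of the underlying arithmetic, not the thesis's model (which also handles stocks, many layers, and full quality charters); the grades and target below are invented example values.

```python
# Toy illustration of ore blending: given two source layers of known P2O5
# grade, find the proportions whose mix hits a target grade exactly.

def blend_proportions(grade_a, grade_b, target):
    """Return the fractions of layers A and B whose mix meets `target`."""
    if not min(grade_a, grade_b) <= target <= max(grade_a, grade_b):
        raise ValueError("target grade is not reachable from these two layers")
    if grade_a == grade_b:
        return 1.0, 0.0  # both grades already equal the target: any split works
    frac_a = (target - grade_b) / (grade_a - grade_b)
    return frac_a, 1.0 - frac_a

# Mixing a 32% layer with a 24% layer to reach a 28% target:
fa, fb = blend_proportions(32.0, 24.0, 28.0)
```

A dynamic bill of materials, in this picture, simply means recomputing these proportions whenever the grades of the extracted layers change.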
APA, Harvard, Vancouver, ISO, and other citation styles
7

Poudel, Pavan. "Tools and Techniques for Efficient Transactions". Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1630591700589561.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Giordano, Christophe. "Prédiction et optimisation des techniques pour l’observation à haute résolution angulaire et pour la future génération de très grands télescopes". Thesis, Nice, 2014. http://www.theses.fr/2014NICE4136/document.

Full text of the source
Abstract:
With the next generation of extremely large telescopes having mirrors larger than 30 m in diameter, it becomes essential to reduce the cost of observations and to improve their scientific efficiency. Moreover, it is fundamental to build these huge infrastructures in locations having the best possible optical quality. The purpose of my thesis is to provide a solution that is easier and more economical than before. I used the Weather Research and Forecasting (WRF) model and the Trinquet-Vernin parametrization, which computes the values of the optical turbulence, to forecast a couple of hours in advance the evolution of the sky's optical quality over the coming night. This information would improve the management of the observation program, called "flexible scheduling", and thereby reduce losses due to atmospheric variations. Our results and improvements allow the WRF-TV model to achieve good agreement between forecasts and in-situ measurements at different sites, which is promising for real use in an observatory. Beyond flexible scheduling, we wanted to create a tool to improve the search for new sites, or site testing for already existing sites. We therefore defined a quality parameter which takes into account meteorological conditions (wind, humidity, precipitable water vapor) and optical conditions (seeing, coherence time, isoplanatic angle). This parameter has been tested above La Palma in the Canary Islands, showing that the Observatorio del Roque de los Muchachos is located close to the best possible location on the island. Finally, we created an automated program to run the WRF-TV model in order to have an operational tool working routinely.
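A site-quality parameter of the kind described above might combine the listed meteorological and optical quantities into one score. The abstract does not give the actual formula, so the weights and normalizations below are purely invented assumptions, shown only to make the idea concrete.

```python
# Illustrative only: a toy site-quality score combining meteorological and
# optical conditions.  The weights are invented, not the thesis's parameter.

def site_quality(seeing_arcsec, coherence_time_ms, wind_ms, humidity_pct):
    """Higher is better: good (small) seeing and long coherence time help;
    strong wind and high humidity lower the score."""
    optical = (1.0 / seeing_arcsec) + 0.1 * coherence_time_ms
    meteo_penalty = 0.05 * wind_ms + 0.01 * humidity_pct
    return optical - meteo_penalty

# A calm night with 0.8" seeing should outrank a windy, humid 1.5" night:
good = site_quality(0.8, 5.0, 5.0, 20.0)
poor = site_quality(1.5, 2.0, 15.0, 80.0)
```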
APA, Harvard, Vancouver, ISO, and other citation styles
10

Vacher, Blandine. "Techniques d'optimisation appliquées au pilotage de la solution GTP X-PTS pour la préparation de commandes intégrant un ASRS". Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2566.

Full text of the source
Abstract:
The work presented in this PhD thesis deals with optimization problems in the context of internal warehouse logistics. The field is subject to strong competition and extensive growth, driven by the growing needs of the market and favored by automation. SAVOYE builds warehouse storage handling equipment and offers its own GTP (Goods-To-Person) solution for order picking. The solution uses an Automated Storage and Retrieval System (ASRS) called X-Picking Tray System (X-PTS) and automatically routes loads to workstations via carousels to perform sequenced operations. It is a highly complex system of systems with many applications for operations research techniques. All this defines the applicative and theoretical scope of the work carried out in this thesis. We first dealt with a specific Job Shop scheduling problem with precedence constraints. The particular context of this problem allowed us to solve it in polynomial time with exact algorithms. These algorithms made it possible to calculate the injection schedule of the loads coming from the different storage output streams so that they aggregate on a carousel in a given order. Thus, the inter-aisle management of the X-PTS storage was improved and the throughput of the load flow, from the storage to a station, was maximized. In the second part of this work, the radix sort LSD (Least Significant Digit) algorithm was studied and a dedicated online sorting algorithm was developed. The latter is used to drive autonomous sorting systems called Buffer Sequencers (BS), which are placed upstream of each workstation in the GTP solution. Finally, a sequencing problem was considered, consisting of finding a linear extension of a partial order minimizing a distance to a given order. An integer linear programming approach, different variants of dynamic programming, and greedy algorithms were proposed to solve it. An efficient heuristic was developed based on iterative calls of dynamic programming routines, allowing it to reach a solution close or equal to the optimum in a very short time. The application of this problem to the unordered output streams of X-PTS storage allows pre-sorting at the carousel level. The various solutions developed have been validated by simulation, and some have been patented and/or already implemented in warehouses.
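The greedy flavor of the sequencing problem above can be sketched directly: build a linear extension of the partial order by always emitting, among the currently unblocked items, the one that appears earliest in the target order. This is a generic greedy heuristic consistent with the problem statement, not the thesis's exact algorithm.

```python
from collections import defaultdict

# Greedy sketch: a linear extension of a partial order that tries to stay
# close to a given target order (e.g. a desired carousel sequence).

def greedy_linear_extension(items, precedences, target_order):
    """precedences: list of (a, b) pairs meaning a must precede b."""
    rank = {x: i for i, x in enumerate(target_order)}
    indeg = {x: 0 for x in items}
    succ = defaultdict(list)
    for a, b in precedences:
        succ[a].append(b)
        indeg[b] += 1
    result = []
    ready = [x for x in items if indeg[x] == 0]
    while ready:
        ready.sort(key=lambda x: rank[x])   # pick the earliest-ranked item
        x = ready.pop(0)
        result.append(x)
        for y in succ[x]:                   # release newly unblocked items
            indeg[y] -= 1
            if indeg[y] == 0:
                ready.append(y)
    return result
```

With items a..d, the constraint "a before c", and target order c, b, a, d, the heuristic emits b, a, c, d: a valid linear extension that keeps c as early as the constraint allows.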
APA, Harvard, Vancouver, ISO, and other citation styles
11

Mallangi, Siva Sai Reddy. "Low-Power Policies Based on DVFS for the MUSEIC v2 System-on-Chip". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229443.

Full text of the source
Abstract:
Multi-functional health-monitoring wearable devices are quite prominent these days. Usually these devices are battery-operated and consequently limited by their battery life (from a few hours to a few weeks depending on the application). Of late, it was realized that these devices, currently operated at a fixed voltage and frequency, are capable of operating at multiple voltages and frequencies. By switching to lower voltages and frequencies based upon power requirements, these devices can achieve tremendous energy savings. Dynamic Voltage and Frequency Scaling (DVFS) techniques have proven handy in this situation for an efficient trade-off between energy and timely behavior. Within imec, wearable devices make use of the indigenously developed MUSEIC v2 (Multi Sensor Integrated Circuit version 2.0). This system is optimized for efficient and accurate collection, processing, and transfer of data from multiple (health) sensors. MUSEIC v2 has limited means of controlling the voltage and frequency dynamically. In this thesis we explore how traditional DVFS techniques can be applied to the MUSEIC v2. Experiments were conducted to find the optimum power modes for efficient operation and to scale the supply voltage and frequency up and down. Considering the overhead incurred when switching voltage and frequency, a transition analysis was also done. Real-time and non-real-time benchmarks were implemented based on these techniques, and their performance results were obtained and analyzed. In this process, several state-of-the-art scheduling algorithms and scaling techniques were reviewed to identify a suitable technique. Using our proposed scaling technique, we achieved an 86.95% power reduction on average, in contrast to the conventional operation of the MUSEIC v2 chip's processor at a fixed voltage and frequency. Techniques that include light sleep and deep sleep modes were also studied and implemented, testing the system's capability to accommodate Dynamic Power Management (DPM) techniques that can achieve greater benefits. A novel approach for implementing the deep sleep mechanism was also proposed; it can obtain up to 71.54% power savings compared to a traditional implementation of deep sleep mode.
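The basic DVFS trade-off explored above can be sketched as picking the slowest voltage/frequency operating point at which a job still meets its deadline; since dynamic power scales roughly with f·V², the slowest feasible point saves energy. The operating-point table and workload figures below are invented example values, not MUSEIC v2 parameters.

```python
# Minimal DVFS sketch (assumed example values, not real chip data):
# (frequency in MHz, supply voltage in V), sorted slowest first.
OPERATING_POINTS = [(50, 0.8), (100, 0.9), (200, 1.1), (400, 1.3)]

def pick_operating_point(cycles, deadline_s):
    """Return the slowest (freq, volt) pair that finishes `cycles` of work
    within `deadline_s`, or None if even the fastest point misses it."""
    for freq_mhz, volt in OPERATING_POINTS:
        exec_time = cycles / (freq_mhz * 1e6)
        if exec_time <= deadline_s:
            return freq_mhz, volt
    return None

# A 2e6-cycle job with a 50 ms deadline fits at 50 MHz (0.04 s), so there is
# no need to burn power at 400 MHz.
point = pick_operating_point(2e6, 0.05)
```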
Nuförtiden så har multifunktionella bärbara hälsoenheter fått en betydande roll. Dessa enheter drivs vanligtvis av batterier och är därför begränsade av batteritiden (från ett par timmar till ett par veckor beroende på tillämpningen). På senaste tiden har det framkommit att dessa enheter som används vid en fast spänning och frekvens kan användas vid flera spänningar och frekvenser. Genom att byta till lägre spänning och frekvens på grund av effektbehov så kan enheterna få enorma fördelar när det kommer till energibesparing. Dynamisk skalning av spänning och frekvens-tekniker (såkallad Dynamic Voltage and Frequency Scaling, DVFS) har visat sig vara användbara i detta sammanhang för en effektiv avvägning mellan energi och beteende. Hos Imec så använder sig bärbara enheter av den internt utvecklade MUSEIC v2 (Multi Sensor Integrated circuit version 2.0). Systemet är optimerat för effektiv och korrekt insamling, bearbetning och överföring av data från flera (hälso) sensorer. MUSEIC v2 har begränsad möjlighet att styra spänningen och frekvensen dynamiskt. I detta examensarbete undersöker vi hur traditionella DVFS-tekniker kan appliceras på MUSEIC v2. Experiment utfördes för att ta reda på de optimala effektlägena och för att effektivt kunna styra och även skala upp matningsspänningen och frekvensen. Eftersom att ”overhead” skapades vid växling av spänning och frekvens gjordes också en övergångsanalys. Realtidsoch icke-realtidskalkyler genomfördes baserat på dessa tekniker och resultaten sammanställdes och analyserades. I denna process granskades flera toppmoderna schemaläggningsalgoritmer och skalningstekniker för att hitta en lämplig teknik. Genom att använda vår föreslagna skalningsteknikimplementering har vi uppnått 86,95% effektreduktion i jämförelse med det konventionella sättet att MUSEIC v2-chipets processor arbetar med en fast spänning och frekvens. 
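The DVFS idea this entry builds on can be made concrete with a small sketch: first-order CMOS dynamic power scales as C·V²·f, so running a task at the lowest voltage/frequency pair that still meets its deadline reduces energy. The operating points, capacitance value and helper names below are illustrative assumptions, not figures from the thesis or the MUSEIC v2 chip.

```python
# Toy DVFS sketch: pick the lowest (V, f) operating point that still
# meets the task deadline. P = C * V^2 * f is the standard first-order
# CMOS dynamic-power approximation; the numbers below are made up.

C = 1e-9  # effective switched capacitance (F), illustrative

# (voltage in V, frequency in Hz) operating points, low to high
OPERATING_POINTS = [(0.9, 50e6), (1.1, 100e6), (1.3, 200e6)]

def pick_operating_point(cycles, deadline_s):
    """Return the lowest-power point that finishes `cycles` by the deadline."""
    for volt, freq in OPERATING_POINTS:
        if cycles / freq <= deadline_s:
            return volt, freq
    raise ValueError("deadline not feasible even at the highest frequency")

def energy(cycles, volt, freq):
    """Energy = P * t = (C * V^2 * f) * (cycles / f) = C * V^2 * cycles."""
    return C * volt ** 2 * cycles

# A 5e6-cycle task with a 100 ms deadline can run at the slowest point,
# and then uses less energy than always running at the fastest point.
v, f = pick_operating_point(5e6, 0.1)
e_slow = energy(5e6, v, f)
e_fast = energy(5e6, *OPERATING_POINTS[-1])
```

Note that in this simple model energy depends only on V² once the work (cycle count) is fixed, which is why lowering the voltage together with the frequency pays off.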
Styles: APA, Harvard, Vancouver, ISO, etc.
12

Lin, Cheng-Hsian, and 林正祥. "An Adaptive Scheduling Technique Towards Efficient Transactions in Dynamic RFID Environments". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/83604628181784153019.

Full text source
Abstract:
Master's thesis
Chung Hua University (中華大學)
Department of Computer Science and Information Engineering (資訊工程學系(所))
95 (ROC academic year 2006/07)
With the emergence of wireless RFID technology, the problem of collisions between readers and tags has become important in RFID systems, and in recent years it has prompted research into different approaches. In this thesis, we propose the Two-Phase Dynamic Modulation (TPDM) protocol to reduce collisions between readers and tags. TPDM's first benefit is that it avoids the hidden-terminal problem; its second is that it schedules using a distributed protocol, with no extra hardware acting as a coordinator. Finally, TPDM can adapt its waiting time slot and improve system throughput. We designed TPDM in a distributed control mode and chose Pulse and Colorwave for comparison in our simulations. The performance evaluation indicates that TPDM has the highest throughput and is more stable than the other two methods. Keywords: RFID, TPDM, Hidden Terminal, Throughput, Pulse, Colorwave
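The abstract gives no protocol details, but the core idea of distributed, collision-driven adaptation of waiting slots (as in Colorwave-style reader scheduling) can be sketched as follows. Everything here, including the class and method names and the backoff rule, is a hypothetical illustration and not the actual TPDM protocol.

```python
import random

# Hypothetical sketch of distributed reader scheduling: each reader
# picks a random transmit slot from its current window; on a collision
# it grows the window (fewer collisions, longer waits), and on success
# it shrinks it (shorter waits, more contention risk). This mirrors the
# adaptive-waiting-slot idea only, not the real TPDM.

class Reader:
    def __init__(self, name, window=2):
        self.name = name
        self.window = window
        self.slot = random.randrange(self.window)

    def on_collision(self):
        self.window = min(self.window * 2, 64)   # back off
        self.slot = random.randrange(self.window)

    def on_success(self):
        self.window = max(self.window // 2, 2)   # tighten
        self.slot = random.randrange(self.window)

def run_round(readers):
    """One scheduling round: readers that picked the same slot collide."""
    slots = {}
    for r in readers:
        slots.setdefault(r.slot, []).append(r)
    successes = 0
    for group in slots.values():
        if len(group) == 1:
            group[0].on_success()
            successes += 1
        else:
            for r in group:
                r.on_collision()
    return successes

random.seed(0)
readers = [Reader(f"R{i}") for i in range(8)]
total = sum(run_round(readers) for _ in range(50))
```

The window never leaves the range [2, 64], so readers that keep colliding spread over more slots while idle ones gradually reclaim short waits.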
Styles: APA, Harvard, Vancouver, ISO, etc.
13

Saranya, N. "Efficient Schemes for Partitioning Based Scheduling of Real-Time Tasks in Multicore Architecture". Thesis, 2015. https://etd.iisc.ac.in/handle/2005/4495.

Full text source
Abstract:
The correctness of hard real-time systems depends not only on their logical correctness but also on their ability to meet all deadlines. Existing real-time systems use either a pure real-time scheduler or a real-time scheduler embedded as a scheduling class in the scheduler of an operating system. Existing schedulers in multicore systems that support both real-time and non-real-time tasks permit the execution of non-real-time tasks on all cores with priorities lower than those of real-time tasks, but interrupts and softirqs associated with these non-real-time tasks can execute on any core with priorities higher than those of real-time tasks. In such systems, there is a need for a scheduler that minimizes the execution overhead of real-time tasks and ensures that their runtime is not affected. To this end, we develop an integrated scheduler architecture on the Linux kernel, called SchedISA, which executes hard real-time tasks with minimal interference from Linux tasks while ensuring a fair share of CPU resources for the Linux tasks. We compared the execution overhead of real-time tasks in SchedISA, implementing the partitioned earliest deadline first (P-EDF) scheduling algorithm, with SCHED_DEADLINE's P-EDF implementation. The experimental results show that the execution overhead of real-time tasks in SchedISA is considerably less than that in SCHED_DEADLINE. Having developed a multicore scheduling architecture for hard real-time tasks, we explore existing multicore scheduling techniques and propose a new scheduling technique that is better in terms of efficiency and suitability. Existing real-time multicore schedulers use either a global or a partitioned scheduling technique. Partitioned scheduling is a static approach in which a task is mapped to a per-processor ready queue prior to scheduling and cannot migrate.
Partitioned scheduling makes ineffective use of the available processing power and incurs high overhead when real-time tasks are dynamic in nature. Global scheduling is a dynamic approach in which the processors share a single ready queue and execute the highest-priority tasks. Global scheduling allows task migration, which results in high scheduling overhead. In our work, we present a dynamic partitioning-based scheduling of real-time tasks, called DP scheduling. In DP scheduling, jobs of tasks are assigned to cores when they are released and remain on the same core until they finish execution. The partitioning is done based on the slack time and priority of the existing jobs. If a job cannot be allocated to any core, it is split and executed on more than one core. The DP scheduling technique attempts to retain the good features of both global and partitioned scheduling without compromising resource utilization, while also trying to minimize scheduling overhead. We have tested DP scheduling with an EDF policy at each core, called the DP-EDF scheduling algorithm, and implemented it using the concepts of SchedISA. We compared the performance of DP-EDF with the P-EDF and global EDF scheduling algorithms. Both simulation and experimental results show that DP-EDF has better performance in terms of resource utilization, and comparable or better performance in terms of scheduling overhead, compared with these algorithms.
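As a rough illustration of the dynamic-partitioning idea described above (assign a released job whole to a core that can hold it, splitting it across cores only when no single core can), here is a simplified utilization-based sketch. It ignores EDF priorities and slack-time details, and all names are invented; it is not the thesis's DP-EDF algorithm.

```python
# Simplified dynamic-partitioning assignment: each core has a remaining
# capacity (a stand-in for slack); a released job goes whole to the
# first core that fits, and is split across cores otherwise.

def assign_job(capacities, demand):
    """Return [(core, share), ...] covering `demand`, mutating capacities."""
    # Try to place the job whole on a single core (first fit).
    for core, cap in enumerate(capacities):
        if cap >= demand:
            capacities[core] -= demand
            return [(core, demand)]
    # Otherwise split the job across cores with spare capacity.
    placement, remaining = [], demand
    for core, cap in enumerate(capacities):
        if remaining <= 0:
            break
        share = min(cap, remaining)
        if share > 0:
            capacities[core] -= share
            placement.append((core, share))
            remaining -= share
    if remaining > 0:
        raise ValueError("job does not fit even when split")
    return placement

caps = [0.5, 0.5, 0.5]         # spare capacity per core
whole = assign_job(caps, 0.4)  # fits whole on core 0
split = assign_job(caps, 0.7)  # no single core has 0.7 left: split
```

Keeping a job on one core when possible avoids migration overhead (the partitioned strength), while splitting as a last resort avoids the utilization loss of strict partitioning (the global strength).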
Styles: APA, Harvard, Vancouver, ISO, etc.
14

"Techniques for Decentralized and Dynamic Resource Allocation". Doctoral diss., 2017. http://hdl.handle.net/2286/R.I.46267.

Full text source
Abstract:
This thesis investigates three different resource allocation problems, aiming to achieve two common goals: i) adaptivity to a fast-changing environment, and ii) distribution of the computation tasks to achieve a favorable solution. The motivation for this work lies in the modern-era proliferation of sensors and devices in the Data Acquisition Systems (DAS) layer of the Internet of Things (IoT) architecture. To avoid congestion and enable low-latency services, limits have to be imposed on the number of decisions that can be centralized (i.e. solved in the "cloud") and/or the amount of control information that devices can exchange. This has been the motivation to develop i) a lightweight PHY-layer protocol for time synchronization and scheduling in Wireless Sensor Networks (WSNs), ii) an adaptive receiver that enables sub-Nyquist sampling for efficient spectrum sensing at high frequencies, and iii) an SDN scheme for resource sharing across different technologies and operators, to respond harmoniously and holistically to fluctuations in demand at the eNodeB layer. The proposed solution for time synchronization and scheduling is a new protocol, called PulseSS, which is completely event-driven and inspired by biological networks. The results on convergence and accuracy for locally connected networks presented in this thesis constitute the theoretical foundation of the protocol in terms of performance guarantees. The derived limits provided guidelines for ad-hoc solutions in the actual implementation of the protocol. The proposed receiver for Compressive Spectrum Sensing (CSS) aims at tackling the noise-folding phenomenon, i.e., the accumulation of noise from different sub-bands that are folded, prior to sampling and baseband processing, when an analog front-end aliasing mixer is utilized.
The sensing-phase design has been conducted via a utility-maximization approach; the derived scheme has therefore been called Cognitive Utility Maximization Multiple Access (CUMMA). The framework described in the last part of the thesis is inspired by stochastic network optimization tools and dynamics. While convergence of the proposed approach remains an open problem, the numerical results presented here suggest the capability of the algorithm to handle traffic fluctuations across operators while respecting different time and economic constraints. The scheme has been named Decomposition of Infrastructure-based Dynamic Resource Allocation (DIDRA).
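PulseSS is described above as completely event-driven and inspired by biological networks, which points at pulse-coupled oscillator dynamics. Below is a minimal sketch of such synchronization in the style of Mirollo-Strogatz firefly models; it is a generic illustration with invented parameters, not the PulseSS protocol itself.

```python
# Minimal pulse-coupled oscillator sketch: each node's phase advances
# uniformly; when a phase reaches 1 the node "fires" and every other
# node jumps its phase up by a coupling term, which pulls nodes toward
# firing together (full absorption depends on the coupling model).

def simulate(phases, coupling=0.3, steps=2000, dt=0.001):
    phases = list(phases)
    for _ in range(steps):
        phases = [p + dt for p in phases]          # free-running advance
        if any(p >= 1.0 for p in phases):          # a firing event
            phases = [0.0 if p >= 1.0 else min(1.0, p + coupling)
                      for p in phases]
            # nodes pushed to the threshold fire too (absorbed into sync)
            phases = [0.0 if p >= 1.0 else p for p in phases]
    return phases

final = simulate([0.0, 0.05, 0.9])
```

With these toy parameters, the two nodes that start close together are absorbed into the same firing event early on and stay phase-locked thereafter; PulseSS-style analysis is about proving when and how fast such locking happens in a real network.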
Dissertation/Thesis
Doctoral Dissertation Electrical Engineering 2017
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Mochocki, Bren Christopher. "Voltage scheduling techniques for dynamic voltage scaling processors with practical limitations". 2004. http://etd.nd.edu/ETD-db/theses/available/etd-03042004-140126/.

Full text source
Abstract:
Thesis (M.S.)--University of Notre Dame, 2004.
Thesis directed by Xiaobo (Sharon) Hu for the Department of Computer Science and Engineering. "March 2004." Includes bibliographical references (leaves 52-55).
Styles: APA, Harvard, Vancouver, ISO, etc.
16

"Dynamic scheduling algorithm based on queue parameter balancing and generalized large deviation techniques". 2000. http://library.cuhk.edu.hk/record=b6073257.

Full text source
Abstract:
by Ma Yiguang.
"April 2000."
Thesis (Ph.D.)--Chinese University of Hong Kong, 2000.
Includes bibliographical references (p. 117-[124]).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Mode of access: World Wide Web.
Abstracts in English and Chinese.
Styles: APA, Harvard, Vancouver, ISO, etc.
17

Barika, MSM. "Scheduling techniques for efficient execution of stream workflows in cloud environments". Thesis, 2020. https://eprints.utas.edu.au/34812/1/Barika_whole_thesis.pdf.

Full text source
Abstract:
Advancements in Internet of Things (IoT) technology have led to the development of advanced applications and services that rely on data generated by an enormous number of connected devices such as sensors, mobile devices and smart cars. These applications process and analyse such data as it arrives, to unleash the potential of live analytics. Considering that our future world will be fully automated, current IoT applications and services are categorised as data-driven workflows, which integrate multiple analytical components; examples include smart farming, smart retail and smart transportation. Such a workflow application, also known as a stream workflow, is one type of big data workflow application and is gradually becoming viable for solving more complex real-time data computation problems. Cloud computing, which can provide on-demand and elastic resources to execute stream workflow applications, is ideal for this purpose, but additional challenges arise from the location of data sources and from end users' requirements in terms of data processing and decision-making deadlines. Existing research in this domain focuses on the streaming operator graph generated by streaming data platforms. This graph differs from a stream workflow: an operator graph has a single data source and one end operator, while a stream workflow has multiple input data sources and multiple output streams. Moreover, the majority of those works investigated only one type of runtime change of the streaming operator graph, namely data fluctuation, so the structural changes that may happen at runtime have not been studied. Considering the heterogeneity and dynamic behaviour of stream workflows, these workflow applications have unique features that give the scheduling problem different assumptions and optimisation goals compared with the placement problem of streaming operator graphs.
As a consequence, the execution of stream workflow applications in a cloud environment requires advanced scheduling techniques to address the aforementioned challenges, as well as to handle the various runtime changes that may occur during execution. To this end, the Multicloud environment approach opens the door to enhancing the execution of workflow applications by leveraging various clouds to exploit data locality and deployment flexibility. Thus, the problem of scheduling a stream workflow in a Multicloud environment while meeting users' real-time data analysis requirements needs to be investigated. In this thesis, we leverage the Multicloud environment approach to design novel scheduling techniques that efficiently schedule outsourced stream workflow applications over various cloud infrastructures while minimising execution cost. We also design dynamic scheduling techniques that continuously manage resources to handle structural and non-structural changes at runtime, in order to maintain user-defined performance requirements at minimal execution cost. In summary, this thesis makes the following concrete contributions:
• A comprehensive state-of-the-art survey that analyses various big data workflow orchestration issues spanning three different levels (workflow, data and cloud), providing a research taxonomy of core requirements, challenges, and current tools, techniques and research prototypes.
• A simulation toolkit named IoTSim-Stream to model and simulate stream workflow applications in cloud computing environments.
• Two scheduling algorithms that generate scheduling plans at deployment time to execute stream workflows efficiently on cloud infrastructures at minimal monetary cost.
• A two-phase adaptive scheduling technique that addresses the problem of scheduling stream workflows under runtime data fluctuations while guaranteeing real-time performance requirements and minimising monetary cost.
• A pluggable dynamic scheduling technique that manages cloud resources over time to handle structural changes of a stream workflow at runtime in a cost-effective manner, along with three plugin scheduling methods.
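The deployment-time, cost-minimising scheduling described above can be caricatured as a greedy choice of the cheapest feasible cloud instance per workflow task. All instance names, capacities and prices below are invented for illustration, and the real algorithms in the thesis also account for data locality, deadlines and runtime changes.

```python
# Toy deployment-time scheduler for a stream workflow: for each task,
# choose the cheapest instance type (across clouds) whose processing
# capacity meets the task's demand.

INSTANCES = [  # (name, capacity in MB/s it can process, $ per hour)
    ("cloudA.small",  50, 0.05),
    ("cloudA.large", 200, 0.20),
    ("cloudB.medium", 120, 0.11),
]

def schedule(workflow):
    """workflow: {task: demand MB/s} -> ({task: instance}, hourly cost)."""
    plan, total_cost = {}, 0.0
    for task, demand in workflow.items():
        feasible = [(price, name) for name, cap, price in INSTANCES
                    if cap >= demand]
        if not feasible:
            raise ValueError(f"no instance can host {task}")
        price, name = min(feasible)   # cheapest feasible instance
        plan[task] = name
        total_cost += price
    return plan, total_cost

plan, cost = schedule({"ingest": 40, "filter": 100, "aggregate": 180})
```

Spreading tasks across two providers here is a side effect of pure price minimisation; a Multicloud scheduler would additionally weigh inter-cloud data transfer against that saving.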
Styles: APA, Harvard, Vancouver, ISO, etc.
18

Chen, Yi-Cheng, and 陳一誠. "An Efficient Multicast Scheme in WiMAX Mesh Networks Using Dynamic Scheduling Techniques for Concurrent Transmissions". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/78389709643569609608.

Full text source
Abstract:
Master's thesis
Feng Chia University (逢甲大學)
Graduate Institute of Information Engineering (資訊工程所)
98 (ROC academic year 2009/10)
In IEEE 802.16 mesh mode, the procedures for unicast and broadcast are defined, but no procedure for multicast is available. To multicast identical data packets to multiple receivers, the traditional approach replaces multicast with multiple unicasts, which both wastes bandwidth across the entire network and delays data forwarding. In this thesis we propose a centralized multicast routing and scheduling scheme that uses dynamic scheduling techniques for concurrent transmissions to improve on our previous work and boost network utilization. Concurrent transmissions exploit the characteristics of wireless signal transmission and use centralized scheduling to allow multiple transmissions at the same time without interference; however, as far as we know, no prior work has applied them to multicast. The structure of a multicast tree is ever-shifting, which in turn affects the transmission length when the same mesh scheduling scheme is used. The proposed multicast scheme analyzes the structure of the multicast tree and then selects a suitable scheduling strategy based on that structure, enabling more concurrent transmissions and decreasing overall transmission time. Moreover, we also consider multicasting multiple packets to the same receivers in order to complete a multicast data delivery with concurrent transmissions. The simulation results show that, across different multicast trees, the proposed scheme not only has stable transmission time but also boosts network utilization and reduces the delay time at each receiver.
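The concurrent-transmission scheduling idea above amounts to packing non-interfering multicast-tree links into the same time slot, which is equivalent to greedy colouring of a link conflict graph. The sketch below is a generic illustration with an invented topology, not the thesis's scheme.

```python
# Greedy slot assignment for multicast-tree links: two links listed in
# `conflicts` cannot share a time slot; each link takes the smallest
# slot not used by any conflicting neighbour already scheduled.

def assign_slots(links, conflicts):
    """Greedy colouring: map links to slot numbers; conflicting links differ."""
    neighbours = {l: set() for l in links}
    for a, b in conflicts:
        neighbours[a].add(b)
        neighbours[b].add(a)
    slots = {}
    for link in links:  # scheduling order matters for greedy colouring
        taken = {slots[n] for n in neighbours[link] if n in slots}
        slot = 0
        while slot in taken:
            slot += 1
        slots[link] = slot
    return slots

links = ["BS->A", "A->B", "A->C", "B->D"]
conflicts = [("BS->A", "A->B"), ("BS->A", "A->C"),
             ("A->B", "A->C"), ("A->C", "B->D")]
slots = assign_slots(links, conflicts)
# non-conflicting links may share a slot (here "BS->A" and "B->D"),
# which is exactly the concurrent-transmission saving
```

Choosing a good scheduling order based on the tree's structure, as the thesis proposes, is what determines how many links end up sharing slots and hence the overall transmission time.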
Styles: APA, Harvard, Vancouver, ISO, etc.