To see the other types of publications on this topic, follow the link: Delay optimization.

Dissertations on the topic "Delay optimization"

Consult the top 50 dissertations for research on the topic "Delay optimization".

Next to every entry in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Browse dissertations from a wide range of subject areas and compile a correctly formatted bibliography.

1

Ma, Min. „RC delay metrics for interconnect optimization“. Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=81554.

Annotation:
The main challenge for developing accurate and efficient delay metrics has been the prediction of delay to points on the interconnect which are relatively close to the source. Those metrics which are relatively successful in meeting this challenge require two-dimensional look-up tables and algorithm tuning, and are quite challenging to implement. The simpler explicit metrics only work well on so-called far nodes, which are characterized by all-pole frequency responses.
In this thesis, we first review an existing delay metric for wires and then try to extend it to arbitrary tree networks. Thorough tests demonstrate it to be accurate and efficient for wires only. We then present an explicit delay metric for dealing with near nodes in RC interconnect, which is based on the first three moments of the impulse response. An accurate model for the delay to the internal node of a two-pole one-zero RC circuit serves as the core of the new metric. Since no simplifying assumption is made in the model, it returns excellent accuracy at the internal node in any two-node RC circuit, no matter how close the internal node is to the source. The delay at near nodes in arbitrary RC trees is then computed by order reduction to a two-pole system using the first three moments of the impulse response. A significant further improvement in accuracy is achieved by correcting for the skewness of the impulse response. In parallel, a simple explicit metric is introduced for predicting the delay to far nodes, where order reduction is not needed. This is based on the first moment of the node of interest and the second moment of the slowest node. Furthermore a simple criterion is derived for distinguishing near nodes from far nodes. Tests on RC models of wires and trees demonstrate that the combination of these two metrics is accurate within 2% for far nodes and within 5% for near nodes with delays which are as much as an order of magnitude smaller than that of the slowest node.
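For readers who want a concrete anchor for the moment-based metrics this abstract describes, the sketch below computes the classical Elmore delay, i.e. the first moment of the impulse response, for a small RC tree; the topology and element values are invented for illustration, and the thesis's own three-moment, two-pole metric is not reproduced here.

# Minimal sketch: Elmore delay (the first moment of the impulse response) for an RC tree.
# Only the classical first-moment metric is shown; the thesis's metric additionally uses
# the second and third moments and a two-pole reduction, which are not reproduced here.

def elmore_delay(parent, R, C, node):
    """Delay to `node` = sum over all nodes k of C[k] * R_shared(source->node, source->k)."""
    def path_to_root(n):
        path = set()
        while n is not None:
            path.add(n)
            n = parent[n]
        return path

    target_path = path_to_root(node)
    delay = 0.0
    for k in C:
        shared = path_to_root(k) & target_path      # segments common to both source paths
        delay += C[k] * sum(R[m] for m in shared)
    return delay

# Toy tree: source -> n1 -> n2, and n1 -> n3 (R in ohms, C in farads, all values invented).
parent = {"n1": None, "n2": "n1", "n3": "n1"}
R = {"n1": 100.0, "n2": 200.0, "n3": 150.0}   # resistance of the segment driving each node
C = {"n1": 1e-12, "n2": 2e-12, "n3": 1e-12}   # capacitance lumped at each node

print(elmore_delay(parent, R, C, "n2"))        # first-moment delay estimate to node n2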
2

Anemogiannis, Emmanuel. „Integrated optical delay-lines : architectures, performance optimization, and applications“. Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/15398.

3

Yuan, Duojia, and S3024047@student rmit edu au. „Flight Delay-Cost Simulation Analysis and Airline Schedule Optimization“. RMIT University. Aerospace, Mechanical, Manufacturing Engineering, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080618.092923.

Annotation:
In order to meet fast-growing demand, airlines have adopted much more compact air-fleet operation schedules, which directly lead to airport congestion. One result is flight delay, which occurs more frequently and more severely; flight delays can also significantly damage an airline's profitability and reputation. The aim of this project is to enhance the dispatch reliability of Australian X Airline's fleet through a newly developed approach to reliability modeling, which employs computer-aided numerical simulation of the departure delay distribution and related cost to achieve flight schedule optimization. The reliability modeling approach developed in this project is based on probability distributions and Monte Carlo Simulation (MCS) techniques. Initial (type I) delay and propagated (type II) delay are adopted as the criteria for data classification and analysis. The randomness of type I delay occurrence and the internal relationship between type II delay and the changed flight schedule are treated as the core factors in this new approach to reliability modeling, which, compared to conventional assessment methodologies, proves to be more accurate for departure delay and cost evaluation modeling. The Flight Delay and Cost Simulation Program (FDCSP) has been developed (Visual Basic 6.0) to perform the complicated numerical calculations over a significant number of pseudo-samples. FDCSP is also designed to be convenient for varied applications in dispatch reliability modeling; the end users can be airlines, airports, aviation authorities, etc. As a result, through this project, a 16.87% reduction in departure delay is estimated to be achievable for Australian X Airline. The air-fleet dispatch reliability is enhanced to a higher level, 78.94%, compared to the initial 65.25%, and 13.35% of system cost can thus be saved. Finally, this project also sets a more practical guideline for the air-fleet database and its management with respect to overall dispatch reliability optimization.
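To make the type I / type II delay distinction above concrete, here is a minimal Monte Carlo sketch in which each flight of an aircraft rotation draws a random initial (type I) delay and inherits the propagated (type II) delay left over from the previous leg after the schedule slack is absorbed; the distribution and all numbers are purely illustrative and are not taken from the thesis.

# Illustrative Monte Carlo sketch of initial (type I) and propagated (type II) departure delay
# over one aircraft rotation. The exponential distribution, mean delay, and slack are assumed
# values for demonstration only.
import random

def mean_departure_delay(slack_min, n_flights, n_runs=10_000):
    total = 0.0
    for _ in range(n_runs):
        propagated = 0.0                                  # type II delay carried into the leg
        for _ in range(n_flights):
            type1 = random.expovariate(1.0 / 8.0)         # assumed: mean 8 min initial delay
            delay = type1 + propagated                    # departure delay of this flight
            total += delay
            propagated = max(0.0, delay - slack_min)      # slack absorbs part of the delay
    return total / (n_runs * n_flights)

print(round(mean_departure_delay(slack_min=10.0, n_flights=6), 2))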
4

Ullah, Shafqat. „Algorithm for Non-Linear Feedback Shift Registers Delay Optimization“. Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-62815.

Annotation:
Information technology has revolutionized our way of life, and its rise has given birth to information security. The design and implementation of information security techniques, especially for wireless systems such as mobile phones and RFIDs, is receiving a lot of attention. Stream ciphers are very good candidates for providing information security to wireless systems, especially RFIDs, because they are fast compared to block ciphers, easy to implement, have a small footprint, and consume less power. LFSRs can be used to implement stream ciphers, but they are exposed to various cryptanalytic attacks; NLFSR-based stream ciphers, on the other hand, are resistant to the cryptanalytic attacks to which pure LFSR-based stream ciphers are exposed. Just like LFSRs, NLFSRs can be implemented in two types of configurations, i.e., Fibonacci and Galois. The critical path of Galois NLFSRs is shorter than that of Fibonacci NLFSRs, which makes Galois NLFSRs the favorite for applications that need to run at higher speed. Fibonacci NLFSRs can be converted to Galois NLFSRs, but the conversion from Fibonacci to Galois is a one-to-many relation, i.e., for a single Fibonacci NLFSR there can be many equivalent Galois NLFSRs. The dilemma is that not all the equivalent Galois NLFSRs are optimal, so for an efficient implementation one has to search for the best possible Galois NLFSR. The complexity of the search space is O(n^k), where n is the size of the n-bit NLFSR and k is the number of products in the ANF of the feedback function of the Fibonacci NLFSR; the NLFSRs used in existing stream ciphers usually have k less than or equal to 32 (for hardware efficiency reasons) and n on the order of 128 (for cryptographic security reasons). The complexity of the search space shows that a plain brute-force method would take a considerable amount of time to produce results. To address this problem, a heuristic algorithm was proposed in [6] which uses a Primary Cost Function to estimate the critical path of the NLFSRs and produce the results; however, the algorithm in [6] did not address a number of issues: for example, it was unable to divide the products among the functions equally, and it was unable to divide the products in a way that would lead to optimization by the synthesis tool. The Primary Cost Function proposed in [6] also had flaws: it was unable to distinguish between functions that can be optimized and those that cannot. This thesis proposes another heuristic algorithm which addresses the problems present in [6]. The Primary Cost Function used in [6] is also used in the proposed algorithm, but with some modifications and improvements. Besides the Primary Cost Function, the proposed algorithm also uses other cost functions, such as the Secondary Cost, XOR-reduced Cost, and Number of Literals Cost functions, to find the best possible Galois NLFSR. The algorithm proposed in this thesis was tested on the VEST, Achterbahn, and Grain-128/80 ciphers and on Cipher [8]. VEST improved by 5.28% in delay and 17.39% in area compared to the results of [6]; similarly, Achterbahn, Grain, and Cipher [8] improved by 1.79%, 16.63%, and 1.43% in delay, and the improvements in area were 2.09%, 1.001%, and −0.101%, respectively.
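As a toy illustration of the Fibonacci/Galois distinction discussed in this abstract (and only that: the cost functions and search algorithm of the thesis are not reproduced), the sketch below clocks an n-stage register in which every stage may have its own feedback function; a Fibonacci NLFSR is the special case where only the last stage has a non-zero feedback, while a Galois NLFSR spreads the feedback products over several stages, which is what shortens the critical path.

# Toy shift-register step where stage i is updated as s_i <- s_{i+1} XOR f_i(state)
# (the last stage receives only its feedback). With all f_i zero except the last one this
# is a Fibonacci NLFSR; distributing the feedback products over several f_i gives a Galois
# configuration, and choosing that distribution well is what the delay optimization targets.

def step(state, fb_per_stage):
    n = len(state)
    nxt = []
    for i in range(n):
        shifted_in = state[i + 1] if i + 1 < n else 0
        nxt.append(shifted_in ^ fb_per_stage[i](state))
    return nxt

# 4-stage Fibonacci example with feedback f(s) = s0 XOR (s1 AND s2) at the last stage.
fibonacci = [lambda s: 0, lambda s: 0, lambda s: 0, lambda s: s[0] ^ (s[1] & s[2])]
print(step([1, 0, 1, 1], fibonacci))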
5

Mostafa, Ahmad A. „Packet Delivery Delay and Throughput Optimization for Vehicular Networks“. University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1367924037.

6

Chen, Chung-ping. „Performance-driven interconnect optimization /“. Digital version accessible at:, 1998. http://wwwlib.umi.com/cr/utexas/main.

7

Prakash, Piyush Martin Alain J. „Throughput optimization of quasi delay insensitive circuits via slack matching /“. Diss., Pasadena, Calif. : California Institute of Technology, 2008. http://resolver.caltech.edu/CaltechETD:etd-05262008-234258.

8

Gunawardana, Upul, and Kurt Kosbar. „OPTIMIZATION OF REFERENCE WAVEFORM FILTERS IN COHERENT DELAY LOCKED LOOPS“. International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/606804.

Annotation:
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada
In this paper, a new coherent correlation-loop architecture for tracking direct-sequence spread-spectrum signals is proposed. In the proposed correlation-loop model, the mean-square tracking error is minimized by varying the cross-correlation function between the received signal and the locally generated signal. The locally generated signal is produced by passing a replica of the transmitted signal through a linear time-invariant filter, which is termed the VCC filter. The issue of the bandwidth of a correlation loop is addressed, and a bandwidth definition for comparative purposes is introduced. The filter characteristics that minimize the tracking errors are determined using numerical optimization algorithms. This work demonstrates that the amplitude response of the VCC filter is a function of the input signal-to-noise ratio (SNR); in particular, the optimum filter does not replicate a differentiator at finite signal-to-noise ratio, as is sometimes assumed. The optimal filter characteristics and knowledge of the input SNR can be combined to produce a device that has a very low probability of losing lock.
9

Calle, Laguna Alvaro Jesus. „Isolated Traffic Signal Optimization Considering Delay, Energy, and Environmental Impacts“. Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/74238.

Annotation:
Traffic signal cycle lengths are traditionally optimized to minimize vehicle delay at intersections using the Webster formulation. This thesis includes two studies that develop new formulations to compute the optimum cycle length of isolated intersections, considering measures of effectiveness such as vehicle delay, fuel consumption, and tailpipe emissions. Additionally, both studies validate the Webster model against simulated data. The microscopic simulation software INTEGRATION was used to simulate two-phase and four-phase isolated intersections over a range of cycle lengths, traffic demand levels, and signal timing lost times. Intersection delay, fuel consumption levels, and emissions of hydrocarbons (HC), carbon monoxide (CO), oxides of nitrogen (NOx), and carbon dioxide (CO2) were derived from the simulation software. The cycle lengths that minimized the various measures of effectiveness were then used to develop the proposed formulations. The first research effort entailed recalibrating the Webster model to the simulated data to develop new delay, fuel consumption, and emissions formulations; an additional intercept was incorporated into the new formulations to enhance the Webster model. The second research effort entailed updating the proposed model against four study intersections. To account for the stochastic and random nature of traffic, the simulations were run with twenty random seeds per scenario. Both efforts found that cycle lengths estimated to minimize fuel consumption and emissions were longer than cycle lengths optimized for vehicle delay only. Furthermore, the simulation results showed that the Webster model overestimates optimum cycle lengths at high vehicle demands.
Master of Science
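The Webster formulation that both studies start from gives the delay-minimizing cycle length in closed form; the sketch below implements that classical formula (with the usual definitions: L is the total lost time per cycle in seconds and Y is the sum of the critical flow ratios), without the additional intercept the thesis introduces.

# Webster's classical optimum cycle length, the starting point that the thesis recalibrates:
#   C0 = (1.5 * L + 5) / (1 - Y)
# where L is the total lost time per cycle (s) and Y is the sum of the critical flow ratios
# (demand flow divided by saturation flow) over the signal phases.

def webster_optimal_cycle(lost_time_s, critical_flow_ratios):
    Y = sum(critical_flow_ratios)
    if Y >= 1.0:
        raise ValueError("intersection is oversaturated (Y >= 1); the formula does not apply")
    return (1.5 * lost_time_s + 5.0) / (1.0 - Y)

# Two-phase example with 12 s of lost time and critical flow ratios of 0.35 and 0.30.
print(round(webster_optimal_cycle(12.0, [0.35, 0.30]), 1))   # about 65.7 s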
10

李澤康 and Chak-hong Lee. „Nonlinear time-delay optimal control problem: optimality conditions and duality“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31212475.

11

Lee, Chak-hong. „Nonlinear time-delay optimal control problem : optimality conditions and duality /“. [Hong Kong] : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B16391640.

12

Wong, Man-kwun, and 黃文冠. „Some sensitivity results for time-delay optimal control problems“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31223655.

13

Xiong, Haozhi. „Delay-Aware Cross-Layer Design in Multi-hop Networks“. The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1290107298.

14

Marla, Lavanya. „Airline schedule planning and operations : optimization-based approaches for delay mitigation“. Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62123.

Annotation:
Thesis (Ph. D. in Transportation Studies)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 157-162).
We study strategic and operational measures of improving airline system performance and reducing delays for aircraft, crew and passengers. As a strategic approach, we study robust optimization models, which capture possible future operational uncertainties at the planning stage, in order to generate solutions that when implemented, are less likely to be disrupted, or incur lower costs of recovery when disrupted. We complement strategic measures with operational measures of managing delays and disruptions by integrating two areas of airline operations thus far separate - disruption management and flight planning. We study different classes of models to generate robust airline scheduling solutions. In particular, we study, two general classes of robust models: (i) extreme-value robust-optimization based and (ii) chance-constrained probability-based; and one tailored model, which uses domain knowledge to guide the solution process. We focus on the aircraft routing problem, a step of the airline scheduling process. We first show how the general models can be applied to the aircraft routing problem by incorporating domain knowledge. To overcome limitations of solution tractability and solution performance, we present budget-based extensions to the general model classes, called the Delta model and the Extended Chance-Constrained programming model. Our models enhance tractability by reducing the need to iterate and re-solve the models, and generate solutions that are consistently robust (compared to the basic models) according to our performance metrics. In addition, tailored approaches to robustness can be expressed as special cases of these generalizable models. The extended models, and insights gleaned, apply not only to the aircraft routing model but also to the broad class of large-scale, network-based, resource allocation. We show how our results generalize to resource allocation problems in other domains, by applying these models to pharmaceutical supply chain and corporate portfolio applications in collaboration with IBM's Zurich Research Laboratory. Through empirical studies, we show that the effectiveness of a robust approach for an application is dependent on the interaction between (i) the robust approach, (ii) the data instance and (iii) the decision-maker's and stakeholders' metrics. We characterize the effectiveness of the extreme-value models and probabilistic models based on the underlying data distributions and performance metrics. We also show how knowledge of the underlying data distributions can indicate ways of tailoring model parameters to generate more robust solutions according to the specified performance metrics. As an operational approach towards managing airline delays, we integrate flight planning with disruption management. We focus on two aspects of flight planning: (i) flight speed changes; and (ii) intentional flight departure holds, or delays, with the goal of optimizing the trade-off between fuel costs and passenger delay costs. We provide an overview of the state of the practice via dialogue with multiple airlines and show how greater flexibility in disruption management is possible through integration. We present models for aircraft and passenger recovery combined with flight planning, and models for approximate aircraft and passenger recovery combined with flight planning. Our computational experiments on data provided by a European airline show that decrease in passenger disruptions on the order of 47.2%-53.3% can be obtained using our approaches. 
We also discuss the relative benefits of the two mechanisms studied - that of flight speed changes, and that of intentionally holding flight departures, and show significant synergies in applying these mechanisms. We also show that as more information about delays and disruptions in the system is captured in our models, further cost savings and reductions in passenger delays are obtained.
by Lavanya Marla.
Ph. D. in Transportation Studies
15

Kozynski, Waserman Fabián Ariel. „Distributed optimization of traffic delay on a periodic switched grid network“. Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/84867.

Annotation:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 39-40).
Given a switched network, e.g. a city grid with traffic signals at its intersections or a packet network, each unit (car or packet) accumulates some delay while traversing the network. This delay is undesirable but unavoidable, which makes minimizing the total average delay of the network, given certain constraints, a desirable objective. In this work, we consider the case of periodic networks, meaning that in every traffic cycle the inputs to the system are the same, and we try to arrive at an allocation of phases at every intersection that minimizes the total delay per cycle. We propose a model for such networks in which the delay is given as a function of external parameters (arrivals to the system) as well as internal parameters (switching decisions). Additionally, we present a distributed algorithm which uses messages passed between adjacent nodes to arrive at a solution with low delay compared with what is obtained when nodes make decisions independently. Furthermore, for large networks it proves difficult to obtain theoretical results; the distributed algorithm gives insight into how one can analyze large networks by taking a local approach and determining bounds on smaller networks that are part of the bigger picture.
by Fabián Ariel Kozynski Waserman.
S.M.
16

Fu, Weihuang. „Analytical Model for Capacity and Delay Optimization in Wireless Mesh Networks“. University of Cincinnati / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1289937944.

17

Feyzmahdavian, Hamid Reza. „Performance Analysis of Positive Systems and Optimization Algorithms with Time-delays“. Doctoral thesis, KTH, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177651.

Annotation:
Time-delay dynamical systems are used to model many real-world engineering systems, where the future evolution of a system depends not only on current states but also on the history of states. For this reason, the study of stability and control of time-delay systems is of theoretical and practical importance. In this thesis, we develop several stability analysis frameworks for dynamical systems in the presence of communication and computation time-delays, and apply our results to different challenging engineering problems. The thesis first considers delay-independent stability of positive monotone systems. We show that the asymptotic stability of positive monotone systems whose vector fields are homogeneous is independent of the magnitude and variation of time-varying delays. We present explicit expressions that allow us to give explicit estimates of the decay rate for various classes of time-varying delays. For positive linear systems, we demonstrate that the best decay rate that our results guarantee can be found via convex optimization. We also derive a set of necessary and sufficient conditions for asymptotic stability of general positive monotone (not necessarily homogeneous) systems with time-delays. As an application of our theoretical results, we discuss delay-independent stability of continuous-time power control algorithms in wireless networks. The thesis continues by studying the convergence of asynchronous fixed-point iterations involving maximum norm pseudo-contractions. We present a powerful approach for characterizing the rate of convergence of totally asynchronous iterations, where both the update intervals and communication delays may grow unbounded. When specialized to partially asynchronous iterations (where the update intervals and communication delays have a fixed upper bound), or to particular classes of unbounded delays and update intervals, our approach allows to quantify how the degree of asynchronism affects the convergence rate. In addition, we use our results to analyze the impact of asynchrony on the convergence rate of discrete-time power control algorithms in wireless networks. The thesis finally proposes an asynchronous parallel algorithm that exploits multiple processors to solve regularized stochastic optimization problems with smooth loss functions. The algorithm allows the processors to work at different rates, perform computations independently of each other, and update global decision variables using out-of-date gradients. We characterize the iteration complexity and the convergence rate of the proposed algorithm, and show that these compare favourably with the state of the art. Furthermore, we demonstrate that the impact of asynchrony on the convergence rate of the algorithm is asymptotically negligible, and a near-linear speedup in the number of processors can be expected.
Time delays often arise in engineering systems: it takes time for two substances to mix, it takes time for a liquid to flow from one vessel to another, and it takes time to transfer information between subsystems. These time delays often lead to degraded system performance and sometimes even to instability. It is therefore important to develop theory and engineering methodology that make it possible to assess how time delays affect dynamical systems. This thesis presents several contributions to this field of research. The focus is on characterizing how time delays affect the convergence rate of nonlinear dynamical systems. In Chapters 3 and 4 we treat nonlinear systems whose states are always positive. We show that the stability of these positive systems is independent of time delays and characterize how the convergence rate of nonlinear positive systems depends on the size of the delays. In Chapter 5 we consider iterations that are contraction mappings and analyze how their convergence is affected by bounded and unbounded time delays. In the final chapter of the thesis, we propose an asynchronous algorithm for stochastic optimization whose asymptotic convergence rate is independent of time delays in computation and in communication between computing elements.
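The asynchronous fixed-point iterations analyzed in this thesis can be illustrated with a small sketch: each component of the iterate is updated using possibly outdated values of the other components, yet the iteration still converges when the map is a max-norm contraction. The linear map, delay bound, and update schedule below are illustrative assumptions, not a model taken from the thesis.

# Illustrative partially asynchronous iteration for the max-norm contraction x -> A x + b
# (row sums of A below 1). Each step updates a single component using a value of the iterate
# that may be up to `max_delay` steps old; the iterate still converges to the fixed point.
import random

A = [[0.3, 0.2], [0.1, 0.4]]      # assumed contraction matrix (illustrative)
b = [1.0, 2.0]
max_delay = 3

history = [[0.0, 0.0]]            # history[t] is the iterate at step t
for _ in range(200):
    current = history[-1][:]
    i = random.randrange(2)                                       # one component updates
    stale = history[-1 - random.randint(0, min(max_delay, len(history) - 1))]
    current[i] = sum(A[i][j] * stale[j] for j in range(2)) + b[i]
    history.append(current)

print([round(v, 3) for v in history[-1]])   # approaches the fixed point (2.5, 3.75)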

18

Zeck, Christiane Regina [author]. „Efficient Algorithms for Online Delay Management and Railway Optimization / Christiane Regina Zeck“. München : Verlag Dr. Hut, 2012. http://d-nb.info/1022535080/34.

19

Sun, Jingyuan. „Optimization of high-speed CMOS circuits with analytical models for signal delay“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0002/MQ43548.pdf.

20

Marsden, Christopher J. „Nonlinear dynamics of pattern recognition and optimization“. Thesis, Loughborough University, 2012. https://dspace.lboro.ac.uk/2134/10694.

Annotation:
We associate learning in living systems with the shaping of the velocity vector field of a dynamical system in response to external, generally random, stimuli. We consider various approaches to implement a system that is able to adapt the whole vector field, rather than just parts of it - a drawback of the most common current learning systems: artificial neural networks. This leads us to propose the mathematical concept of self-shaping dynamical systems. To begin, there is an empty phase space with no attractors, and thus a zero velocity vector field. Upon receiving the random stimulus, the vector field deforms and eventually becomes smooth and deterministic, despite the random nature of the applied force, while the phase space develops various geometrical objects. We consider the simplest of these - gradient self-shaping systems, whose vector field is the gradient of some energy function, which under certain conditions develops into the multi-dimensional probability density distribution of the input. We explain how self-shaping systems are relevant to artificial neural networks. Firstly, we show that they can potentially perform pattern recognition tasks typically implemented by Hopfield neural networks, but without any supervision and on-line, and without developing spurious minima in the phase space. Secondly, they can reconstruct the probability density distribution of input signals, like probabilistic neural networks, but without the need for new training patterns to enter the network as new hardware units. We therefore regard self-shaping systems as a generalisation of the neural network concept, achieved by abandoning the "rigid units - flexible couplings" paradigm and making the vector field fully flexible and amenable to external force. It is not clear how such systems could be implemented in hardware, and so this new concept presents an engineering challenge. It could also become an alternative paradigm for the modelling of both living and learning systems. Mathematically it is interesting to find how a self-shaping system could develop non-trivial objects in the phase space such as periodic orbits or chaotic attractors. We investigate how a delayed vector field could form such objects. We show that this method produces chaos in a class of systems which have very simple dynamics in the non-delayed case. We also demonstrate the coexistence of bounded and unbounded solutions, dependent on the initial conditions and the value of the delay. Finally, we speculate about how such a method could be used in global optimization.
21

Li, Jinjian. „Traffic Modeling and Control at Intelligent Intersections : Time Delay and Fuel Consumption Optimization“. Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCA001/document.

Annotation:
Traffic congestion is one of the most serious problems limiting quality of life, and intersections are the places where jams occur most frequently. It is therefore more effective and more economical to relieve heavy traffic delays by improving traffic control strategies rather than by extending the infrastructure. The proposed method is a cooperative modeling approach that simultaneously reduces traffic delays and fuel consumption in a network of intersections without traffic lights, where the cooperation is carried out over Vehicle-to-Infrastructure (V2I) communication. The resolution involves two main steps. The first step concerns the itinerary: an itinerary is a list of intersections chosen by vehicles to reach their destinations from their origins. The second step consists of the following proposed cooperative procedures that allow vehicles to pass through each intersection rapidly and economically. On the one hand, based on the real-time information sent by vehicles at the edge of the communication zone via V2I, each intersection applies Dynamic Programming (DP) or an Artificial Bee Colony (ABC) algorithm to cooperatively optimize the vehicle passing sequence through the intersection with the minimal time delay under the relevant safety constraints. On the other hand, after receiving this sequence, each vehicle finds the optimal speed profile with the minimal fuel consumption by an exhaustive search. A series of simulations is carried out under different traffic volumes to demonstrate the performance of the proposed method. The results are compared with other control methods and research papers to validate the effectiveness of our new traffic control strategy.
22

Kuroiwa, Yohei. „Sensitivity Shaping under Degree Constraint : Nevanlinna-Pick Interpolation for Multivariable and Time-Delay Systems“. Licentiate thesis, KTH, Mathematics (Dept.), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4821.

23

Brown, Jeremiah. „DESIGN AND OPTIMIZATION OF NANOSTRUCTURED OPTICAL FILTERS“. Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2825.

Annotation:
Optical filters encompass a vast array of devices and structures for a wide variety of applications. Generally speaking, an optical filter is some structure that applies a designed amplitude and phase transform to an incident signal. Different classes of filters have vastly divergent characteristics, and one of the challenges in the optical design process is identifying the ideal filter for a given application and optimizing it to obtain a specific response. In particular, it is highly advantageous to obtain a filter that can be seamlessly integrated into an overall device package without requiring exotic fabrication steps, extremely sensitive alignments, or complicated conversions between optical and electrical signals. This dissertation explores three classes of nano-scale optical filters in an effort to obtain different types of dispersive response functions. First, dispersive waveguides are designed using a sub-wavelength periodic structure to transmit a single TE propagating mode with very high second order dispersion. Next, an innovative approach for decoupling waveguide trajectories from Bragg gratings is outlined and used to obtain a uniform second-order dispersion response while minimizing fabrication limitations. Finally, high Q-factor microcavities are coupled into axisymmetric pillar structures that offer extremely high group delay over very narrow transmission bandwidths. While these three novel filters are quite diverse in their operation and target applications, they offer extremely compact structures given the magnitude of the dispersion or group delay they introduce to an incident signal. They are also designed and structured as to be formed on an optical wafer scale using standard integrated circuit fabrication techniques. A number of frequency-domain numerical simulation methods are developed to fully characterize and model each of the different filters. The complete filter response, which includes the dispersion and delay characteristics and optical coupling, is used to evaluate each filter design concept. However, due to the complex nature of the structure geometries and electromagnetic interactions, an iterative optimization approach is required to improve the structure designs and obtain a suitable response. To this end, a Particle Swarm Optimization algorithm is developed and applied to the simulated filter responses to generate optimal filter designs.
Ph.D.
Optics and Photonics
Optics and Photonics
Optics PhD
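Since the abstract above leans on a Particle Swarm Optimization algorithm for the filter designs, a generic PSO loop is sketched below with the standard inertia, cognitive, and social terms; the stand-in objective (a simple sphere function) replaces the simulated filter responses actually optimized in the dissertation.

# Generic particle swarm optimization with the standard inertia, cognitive, and social terms.
# The sphere objective is a stand-in; the dissertation applies PSO to simulated filter responses.
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest, pbest_val = [p[:] for p in x], [objective(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])      # pull toward personal best
                           + c2 * r2 * (gbest[d] - x[i][d]))        # pull toward global best
                x[i][d] += v[i][d]
            val = objective(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda p: sum(t * t for t in p), dim=3)   # stand-in objective
print(round(best_val, 6))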
24

Kittipiyakul, Somsak. „Cross-layer optimization for transmission of delay-sensitive and bursty traffic in wireless systems“. Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p3320077.

Annotation:
Thesis (Ph. D.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed September 12, 2008). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 214-220).
25

Tan, Chin Hwee. „Optimization of power and delay in VLSI circuits using transistor sizing and input ordering“. Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/35979.

26

Pei, Guanhong. „Distributed Scheduling and Delay-Throughput Optimization in Wireless Networks under the Physical Interference Model“. Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/19219.

Annotation:
We investigate diverse aspects of the performance of wireless networks, including throughput, delay and distributed complexity. One of the main challenges for optimizing them arises from radio interference, an inherent factor in wireless networks. Graph-based interference models represent a large class of interference models widely used for the study of wireless networks, and suffer from the weakness of over-simplifying the interference caused by wireless signals in a local and binary way. A more sophisticated interference model, the physical interference model, based on SINR constraints, is considered more realistic but is more challenging to study (because of its non-linear form and non-local property). In this dissertation, we study the connections between the two types of interference models -- graph-based and physical interference models -- and tackle a set of fundamental problems under the physical interference model; previously, some of the problems were still open even under the graph-based interference model, and to those we have provided solutions under both types of interference models.

The underlying interference models affect scheduling and power control -- essential building blocks in the operation of wireless networks -- that directly deal with the wireless medium; the physical interference model (compared to the graph-based interference model) compounds the problem of efficient scheduling and power control by making it non-local and non-linear. The system performance optimization and tradeoffs with respect to throughput and delay require a "global" view across the transport, network, media access control (MAC), and physical layers (referred to as cross-layer optimization) to take advantage of the control planes in different levels of the wireless network protocol stack. This can be achieved by regulating traffic rates, finding traffic flow paths for end-to-end sessions, controlling the access to the wireless medium (or channels), assigning the transmission power, and handling signal reception under interference.

The theme of the dissertation is distributed algorithms and optimization of QoS objectives under the physical interference model. We start by developing the first low-complexity distributed scheduling and power control algorithms for maximizing the efficiency ratio for different interference models; we derive end-to-end per-flow delay upper bounds for our scheduling algorithms, and our delay upper bounds are the first network-size-independent result known for multihop traffic. Based on that, we design the first cross-layer multi-commodity optimization frameworks for delay-constrained throughput maximization by incorporating routing and traffic control into the problem scope. Scheduling and power control are also inherent to distributed computing of "global problems", e.g., the maximum independent set problems in terms of transmitting links and local broadcasts, respectively, and the minimum spanning tree problems. Under the physical interference model, we provide the first sub-linear time distributed solutions to the maximum independent set problems, and also solve the minimum spanning tree problems efficiently. We develop new techniques and algorithms and exploit the availability of technologies (full-/half-duplex radios, fixed/software-defined power control) to further improve our algorithms.


We highlight our main technical contributions, which might be of independent interest to the design and analysis of optimization algorithms. Our techniques involve the use of linear and mixed integer programs in delay-constrained throughput maximization. This demonstrates the combined use of different kinds of combinatorial optimization approaches for multi-criteria optimization. We have developed techniques for queueing analysis under general stochastic traffic to analyze network throughput and delay properties. We use randomized algorithms with rigorously analyzed performance guarantees to overcome the distributed nature of wireless data/control communications. We factor in the availability of emerging radio technologies for performance improvements of our algorithms. Some of our algorithmic techniques that would be of broader use in algorithms for the physical interference model include: formal development of the distributed computing model in the SINR model, and reductions between models of different technological capabilities, the redefinition of interference sets in the setting of SINR constraints, and our techniques for distributed computation of rulings (informally, nodes or links which are well-separated covers).

Ph. D.
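The physical interference model referred to throughout this abstract schedules a set of links simultaneously only if every receiver meets an SINR threshold. The sketch below checks that condition for a toy two-link example; gains, powers, noise, and the threshold beta are illustrative values, and none of the dissertation's scheduling algorithms are reproduced.

# Feasibility check for the SINR (physical interference) constraint: a set of links may be
# scheduled together only if, for every receiver i,
#   P_i * g[i][i] / (noise + sum_{j != i} P_j * g[j][i]) >= beta.
# All gains, powers, and thresholds below are invented for illustration.

def sinr_feasible(links, power, gain, noise, beta):
    for i in links:
        signal = power[i] * gain[i][i]
        interference = sum(power[j] * gain[j][i] for j in links if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True

gain = {0: {0: 1.0, 1: 0.05}, 1: {0: 0.08, 1: 0.9}}   # gain[j][i]: transmitter j to receiver i
power = {0: 1.0, 1: 1.0}
print(sinr_feasible([0, 1], power, gain, noise=0.01, beta=5.0))   # True for this toy example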
27

Ding, Zhen. „A Static Traffic Assignment Model Combined with an Artificial Neural Network Delay Model“. FIU Digital Commons, 2007. http://digitalcommons.fiu.edu/etd/51.

Annotation:
As traffic congestion continues to worsen in large urban areas, solutions are urgently sought. However, transportation planning models, which estimate traffic volumes on transportation network links, are often unable to realistically consider travel time delays at intersections. Introducing signal controls into models often results in significant and unstable changes in network attributes, which, in turn, leads to instability of the models. Ignoring the effect of delays at intersections makes the model output inaccurate and unable to predict travel time. To represent traffic conditions in a network more accurately, planning models should be capable of arriving at a network solution based on travel costs that are consistent with the intersection delays due to signal controls. This research attempts to achieve this goal by optimizing signal controls and estimating intersection delays accordingly, which are then used in traffic assignment. Simultaneous optimization of traffic routing and signal controls has not been accomplished in real-world applications of traffic assignment. To this end, a delay model dealing with five major types of intersections has been developed using artificial neural networks (ANNs). An ANN architecture consists of interconnected artificial neurons. The architecture may either be used to gain an understanding of biological neural networks, or for solving artificial intelligence problems without necessarily creating a model of a real biological system. The ANN delay model has been trained using extensive simulations based on TRANSYT-7F signal optimizations. The delay estimates by the ANN delay model have percentage root-mean-squared errors (%RMSE) that are less than 25.6%, which is satisfactory for planning purposes. Larger prediction errors are typically associated with severely oversaturated conditions. A combined system has also been developed that includes the artificial neural network (ANN) delay estimating model and a user-equilibrium (UE) traffic assignment model. The combined system employs the Frank-Wolfe method to achieve a convergent solution. Because the ANN delay model provides no derivatives of the delay function, a Mesh Adaptive Direct Search (MADS) method is applied to assist in and expedite the iterative process of the Frank-Wolfe method. The performance of the combined system confirms that convergence of the solution is achieved, although the global optimum may not be guaranteed.
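For orientation, the Frank-Wolfe loop that the combined system relies on alternates an all-or-nothing assignment at current link costs with a convex combination of flows. The sketch below runs that loop on a toy two-link network, with a BPR-style cost standing in for the ANN delay model and the method of successive averages standing in for the MADS-assisted step-size search described in the abstract.

# Toy Frank-Wolfe user-equilibrium loop on two parallel links. A BPR-style cost stands in for
# the ANN delay model, and the method of successive averages replaces the MADS-assisted step
# search; all link parameters are illustrative.

def bpr_cost(flow, free_flow_time, capacity, alpha=0.15, beta=4.0):
    return free_flow_time * (1.0 + alpha * (flow / capacity) ** beta)

def frank_wolfe_two_links(demand, t0, cap, iters=200):
    flows = [demand, 0.0]                                 # start with all demand on link 0
    for k in range(1, iters + 1):
        costs = [bpr_cost(flows[i], t0[i], cap[i]) for i in range(2)]
        aon = [0.0, 0.0]
        aon[costs.index(min(costs))] = demand             # all-or-nothing at current costs
        step = 1.0 / (k + 1)                              # method of successive averages
        flows = [(1 - step) * flows[i] + step * aon[i] for i in range(2)]
    return flows

print([round(f, 1) for f in frank_wolfe_two_links(1000.0, t0=[10.0, 12.0], cap=[600.0, 800.0])])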
28

Hanchate, Narender. „A game theoretic framework for interconnect optimization in deep submicron and nanometer design“. [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001523.

29

YANG, DONGMEI. „A DYNAMIC PROGRAMMING APPROACH TO OPTIMAL CENTER DELAY ALLOCATION“. University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1116120758.

30

Lazzari, Cristiano. „Automatic layout generation of static CMOS circuits targeting delay and power“. Biblioteca Digital de Teses e Dissertações da UFRGS, 2003. http://hdl.handle.net/10183/5690.

Annotation:
The evolution of integrated circuit technologies demands the development of new CAD tools. The traditional development of digital circuits at the physical level is based on libraries of cells. These cell libraries offer a certain predictability of the electrical behavior of the design due to the previous characterization of the cells. Moreover, different versions of each cell are required so that delay and power consumption characteristics are taken into account, increasing the number of cells in a library. Automatic full-custom layout generation is an increasingly important alternative to cell-based generation approaches. This strategy implements transistors and connections according to patterns defined by algorithms, so it is possible to implement any logic function while avoiding the limitations of a library of cells. Analysis and estimation tools must provide predictability for automatic full-custom layouts; these tools must be able to work with layout estimates and to generate information related to delay, power consumption, and area occupation. This work includes research into new methods of physical synthesis and the implementation of an automatic layout generator in which the cells are generated at the moment of layout synthesis. The research investigates different strategies for the placement of elements (transistors, contacts, and connections) in a layout and their effects on area occupation and circuit delay. The presented layout strategy applies delay optimization through integration with a gate sizing technique; this is performed in such a way that the folding method allows individual discrete sizing of transistors. The main characteristics of the proposed strategy are: power supply lines between rows, over-the-layout routing (channel routing is not used), circuit routing performed before layout generation, and layout generation targeting delay reduction through the application of the sizing technique. The possibility of implementing any logic function, without the restrictions imposed by a library of cells, allows circuit synthesis with an optimized number of transistors. This reduction in the number of transistors decreases delay and power consumption, especially static power consumption in submicrometer circuits. Comparisons between the proposed strategy and other well-known methods are presented to validate the proposed method.
31

Balakrishnan, Anant. „Analysis and optimization of global interconnects for many-core architectures“. Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/39632.

Annotation:
The objective of this thesis is to develop circuit-aware interconnect technology optimization for network-on-chip based many-core architectures. The dimensions of global interconnects in many-core chips are optimized for maximum bandwidth density and minimum delay taking into account network-on-chip router latency and size effects of copper. The optimal dimensions thus obtained are used to characterize different network-on-chip topologies based on wiring area utilization, maximum core-to-core channel width, aggregate chip bandwidth and worse case latency. Finally, the advantages of many-core many-tier chips are evaluated for different network-on-chip topologies. Area occupied by a router within a core is shown to be the bottleneck to achieve higher performance in network-on-chip based architectures.
32

Barceló, Adrover Salvador. „An advanced Framework for efficient IC optimization based on analytical models engine“. Doctoral thesis, Universitat de les Illes Balears, 2013. http://hdl.handle.net/10803/128968.

Annotation:
Based on the challenges arising as a result of technology scaling, this thesis develops and evaluates a complete framework for SET propagation sensitivity analysis. The framework comprises a number of processing tools capable of handling circuits of high complexity in an efficient way. Various SET propagation metrics have been proposed, considering the impact of logic, electric, and combined logic-electric masking. Such metrics provide a valuable vehicle to grade either the in-circuit regions that are more susceptible to propagating SETs toward the circuit outputs or the circuit outputs that are more susceptible to producing SETs. A highly efficient and customizable true-path-finding algorithm with a specific logic system has been constructed, and its efficacy demonstrated on large benchmark circuits. It has been shown that the delay of a path depends on the sensitization vectors applied to the gates within the path; in some cases, this variation is comparable to the one caused by variations in process parameters.
33

Sankara, Krishnan Shivaranjani. „Delay sensitive delivery of rich images over WLAN in telemedicine applications“. Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29673.

Annotation:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Jayant, Nikil; Committee Member: Altunbasak, Yucel; Committee Member: Sivakumar, Raghupathy. Part of the SMARTech Electronic Thesis and Dissertation Collection.
34

Gatto, Michael Joseph. „On the impact of uncertainty on some optimization problems : combinatorial aspects of delay management and robust online scheduling /“. Zürich : ETH, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17452.

35

Huang, Fei. „On Reducing Delays in P2P Live Streaming Systems“. Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/29006.

Annotation:
In the recent decade, peer-to-peer (P2P) technology has greatly enhanced the scalability of multimedia streaming on the Internet by enabling efficient cooperation among end-users. However, existing streaming applications are plagued by the problems of long playback latency and long churn-induced delays. First of all, many streaming applications, such as IPTV and video conferencing, have rigorous constraints on end-to-end delays. Moreover, churn-induced delays, including delays from channel switching and streaming recovery, in current P2P streaming applications are typically on the scale of 10-60 seconds, which is far below the favorable user experience of cable TV systems. These two issues, in terms of playback latency and churn-induced delays, have hindered the extensive commercial deployment of P2P systems. Motivated by this, in this dissertation, we focus on reducing delays in P2P live streaming systems. Specifically, we propose solutions for reducing delays in P2P live streaming systems in four problem spaces: (1) minimizing the maximum end-to-end delay in P2P streaming; (2) minimizing the average end-to-end delay in P2P streaming; (3) minimizing the average delay in multi-channel P2P streaming; and (4) reducing churn-induced delays. We devise a streaming scheme to minimize the maximum end-to-end streaming delay under a mesh-based overlay network paradigm. We call this the MDPS problem. We formulate the MDPS problem and prove its NP-completeness. We then present a polynomial-time approximation algorithm, called Fastream-I, for this problem, and show that the performance of Fastream-I is bounded by a ratio of O(√(log n)), where n is the number of peers in the system. We also develop a distributed version of Fastream-I that can adapt to network dynamics. Our simulation study reveals the effectiveness of Fastream-I and shows a reasonable message overhead. While Fastream-I yields the minimum maximum end-to-end streaming delay (within a factor of O(√(log n))), in many P2P settings users may desire the minimum average end-to-end P2P streaming delay. Towards this, we devise a streaming scheme which optimizes the bandwidth allocation to achieve the minimum average end-to-end P2P streaming delay. We call this the MADPS problem. We first develop a generic analytical framework for the MADPS problem. We then present Fastream-II as a solution to the MADPS problem. The core part of Fastream-II is a fast approximation algorithm, called APX-Fastream-II, based on the primal-dual schema. We prove that the performance of APX-Fastream-II is bounded by a ratio of 1+w, where w is an adjustable input parameter. Furthermore, we show that the flexibility of w provides a trade-off between the approximation factor and the running time of Fastream-II. The third problem space of the dissertation is minimizing the average delay in multi-channel P2P streaming systems. Toward this, we present an algorithm called Fastream-III. To reduce the influence of frequent channel-switching behavior, we build Fastream-III on the view-upload decoupling (VUD) model, where the uploaded content from a serving node is independent of the channel it views. We devise an approximation algorithm based on the primal-dual schema for the critical component of Fastream-III, called APX-Fastream-III. In contrast to APX-Fastream-II, APX-Fastream-III addresses the extra complexity of the multichannel scenario and maintains the approximation bound of 1+w.
Besides playback lag, delays in P2P streaming may arise from two other factors: node churn and channel switching. Since both stem from re-connection requests caused by churn, we call them churn-induced delays. Optimizing churn-induced delays is the dissertation's fourth problem space. Toward this, we propose NAP, a novel agent-based P2P scheme that provides preventive connections to all channels. Each channel in NAP selects powerful peers as agents to represent the peers in the channel, in order to minimize control and message overheads. Agents distill the bootstrapping peers with superior bandwidth and lifetime expectation to quickly serve the viewer in the initial period of streaming. We build a queueing theory model to analyze NAP. Based on this model, we numerically compare NAP's performance with past efforts. The results of the numerical analysis reveal the effectiveness of NAP.
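To make the delay objective above concrete, the following is a minimal Python sketch (not Fastream-I itself): for a hypothetical mesh overlay with assumed per-hop forwarding delays, it computes the shortest-path delay from the streaming source to every peer and reports the maximum, i.e. the quantity the MDPS problem seeks to minimize.

# Illustrative sketch (not Fastream-I): maximum end-to-end delay from the
# source to all peers over a given overlay, using shortest-path delays as a
# proxy for the best achievable delivery delay. Overlay and values are invented.
import heapq

def max_end_to_end_delay(overlay, source):
    # overlay: dict peer -> list of (neighbor, per_hop_delay)
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in overlay.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return max(dist.values())

overlay = {"s": [("a", 0.2), ("b", 0.3)], "a": [("c", 0.4)], "b": [("c", 0.1)], "c": []}
print(max_end_to_end_delay(overlay, "s"))   # worst-case playback lag over this overlay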
Ph. D.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Jayaraman, Dheepakkumaran. „Optimization Techniques for Performance and Power Dissipation in Test and Validation“. OpenSIUC, 2012. https://opensiuc.lib.siu.edu/dissertations/473.

Der volle Inhalt der Quelle
Annotation:
The high cost of chip testing makes testability an important aspect of any chip design. Two important testability considerations are addressed, namely power consumption and test quality. The power consumption during scan shift is reduced by efficiently adding control logic to the design. Test quality is studied by determining the sensitization characteristics of the paths to be tested; path delay fault models are used for this purpose. Another important aspect in chip design is performance validation, which is increasingly perceived as the major bottleneck in integrated circuit design. Given synthesizable HDL code, the proposed technique efficiently identifies infeasible paths and subsequently determines the worst-case execution time (WCET) of the HDL code.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Moety, Farah. „Joint minimization of power and delay in wireless access networks“. Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S108/document.

Der volle Inhalt der Quelle
Annotation:
Dans les réseaux d'accès sans fil, l'un des défis les plus récents est la réduction de la consommation d'énergie du réseau, tout en préservant la qualité de service perçue par les utilisateurs finaux. Cette thèse propose des solutions à ce problème difficile considérant deux objectifs, l'économie d'énergie et la minimisation du délai de transmission. Comme ces objectifs sont contradictoires, un compromis devient inévitable. Par conséquent, nous formulons un problème d’optimisation multi-objectif dont le but est la minimisation conjointe de la puissance consommée et du délai de transmission dans les réseaux sans-fil. La minimisation de la puissance est réalisée en ajustant le mode de fonctionnement des stations de base (BS) du réseau d’un niveau élevé de puissance d’émission vers un niveau d'émission plus faible ou même en mode veille. La minimisation du délai de transmission est réalisée par le meilleur rattachement des utilisateurs avec les BS du réseau. Nous couvrons deux réseaux sans-fil différents en raison de leur pertinence : les réseaux locaux sans-fil (IEEE 802.11 WLAN) et les réseaux cellulaires dotés de la technologie LTE
In wireless access networks, one of the most recent challenges is reducing the power consumption of the network while preserving the quality of service perceived by the end users. The present thesis provides solutions to this challenging problem considering two objectives, namely saving power and minimizing the transmission delay. Since these objectives are conflicting, a tradeoff becomes inevitable. Therefore, we formulate a multi-objective optimization problem with the aims of minimizing the network power consumption and the transmission delay. Power saving is achieved by adjusting the operation mode of the network Base Stations (BSs) from high transmit power levels to lower transmit power levels or even sleep mode. Minimizing the transmission delay is achieved by selecting the best user association with the network BSs. We cover two different wireless networks, namely IEEE 802.11 wireless local area networks and LTE cellular networks.
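As a rough illustration of the joint power/delay objective described above (not the thesis's actual formulation), the sketch below enumerates operating modes for two hypothetical base stations together with user associations and keeps the configuration with the best weighted sum; all power values, rates and the weight alpha are assumptions.

# Illustrative weighted power/delay evaluation; all figures are hypothetical.
from itertools import product

MODES = {"high": 130.0, "low": 90.0, "sleep": 10.0}   # Watts per BS (assumed)

def delay(user_rate_bps, file_bits=8e6):
    return file_bits / user_rate_bps if user_rate_bps > 0 else float("inf")

def evaluate(bs_modes, association, rates, alpha=0.5):
    power = sum(MODES[m] for m in bs_modes.values())
    total_delay = 0.0
    for user, bs in association.items():
        if bs_modes[bs] == "sleep":
            return float("inf")          # cannot serve a user from a sleeping BS
        total_delay += delay(rates[(user, bs)])
    return alpha * power + (1 - alpha) * total_delay

rates = {("u1", "bs1"): 5e6, ("u1", "bs2"): 2e6, ("u2", "bs1"): 1e6, ("u2", "bs2"): 6e6}
best = None
for m1, m2, a1, a2 in product(MODES, MODES, ["bs1", "bs2"], ["bs1", "bs2"]):
    cost = evaluate({"bs1": m1, "bs2": m2}, {"u1": a1, "u2": a2}, rates)
    if best is None or cost < best[0]:
        best = (cost, (m1, m2, a1, a2))
print(best)   # best weighted tradeoff between BS power and user delay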
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Tran, Nam. „THE EFFECT OF FIBER DEPTH ON THE ESTIMATION OF PERIPHERAL NERVE FIBER DIAMETER USING GROUP DELAY AND SIMULATED ANNEALING OPTIMIZATION“. DigitalCommons@CalPoly, 2014. https://digitalcommons.calpoly.edu/theses/1225.

Der volle Inhalt der Quelle
Annotation:
Peripheral neuropathy refers to diseases of or injuries to the peripheral nerves in the human body. The damage can interfere with the vital connection between the central nervous system and other parts of the body, and can significantly reduce the quality of life of those affected. In the US, approximately 15 to 20 million people over the age of 40 have some form of peripheral neuropathy. The diagnosis of peripheral neuropathy often requires an invasive procedure such as a biopsy, because different forms of peripheral neuropathy can affect different types of nerve fibers. There are non-invasive methods available to diagnose peripheral neuropathy, such as the nerve conduction velocity (NCV) test. Although the NCV test is useful for testing the viability of an entire nerve trunk, it does not provide adequate information about the individual functioning nerve fibers in the nerve trunk to differentiate between the different forms of peripheral neuropathy. A novel technique was proposed to estimate the individual nerve fiber diameters using group delay and simulated annealing optimization. However, this technique assumed that the fiber depth is always constant at 1 mm and that the fiber activation due to a stimulus is depth independent. This study aims to incorporate the effect of fiber depth into the fiber diameter estimation technique, to make the simulation more realistic, and to move a step closer to making this technique a viable diagnostic tool. From the simulation data, this study found that changing the assumption of the fiber depth significantly impacts the accuracy of the fiber diameter estimation. The results suggest that the accuracy of the fiber diameter estimation depends on whether the type of activation function is depth dependent and on whether the template fiber diameter distribution contains mostly large fibers or both small and large fibers, but not on whether the fiber depth is constant or variable.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Alshaer, Mohammad. „An Efficient Framework for Processing and Analyzing Unstructured Text to Discover Delivery Delay and Optimization of Route Planning in Realtime“. Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1105/document.

Der volle Inhalt der Quelle
Annotation:
L'Internet des objets, ou IdO (en anglais Internet of Things, ou IoT) conduit à un changement de paradigme du secteur de la logistique. L'avènement de l'IoT a modifié l'écosystème de la gestion des services logistiques. Les fournisseurs de services logistiques utilisent aujourd'hui des technologies de capteurs telles que le GPS ou la télémétrie pour collecter des données en temps réel pendant la livraison. La collecte en temps réel des données permet aux fournisseurs de services de suivre et de gérer efficacement leur processus d'expédition. Le principal avantage de la collecte de données en temps réel est qu’il permet aux fournisseurs de services logistiques d’agir de manière proactive pour éviter des conséquences telles que des retards de livraison dus à des événements imprévus ou inconnus. De plus, les fournisseurs ont aujourd'hui tendance à utiliser des données provenant de sources externes telles que Twitter, Facebook et Waze, parce que ces sources fournissent des informations critiques sur des événements tels que le trafic, les accidents et les catastrophes naturelles. Les données provenant de ces sources externes enrichissent l'ensemble de données et apportent une valeur ajoutée à l'analyse. De plus, leur collecte en temps réel permet d’utiliser les données pour une analyse en temps réel et de prévenir des résultats inattendus (tels que le délai de livraison, par exemple) au moment de l’exécution. Cependant, les données collectées sont brutes et doivent être traitées pour une analyse efficace. La collecte et le traitement des données en temps réel constituent un énorme défi. La raison principale est que les données proviennent de sources hétérogènes avec une vitesse énorme. La grande vitesse et la variété des données entraînent des défis pour effectuer des opérations de traitement complexes telles que le nettoyage, le filtrage, le traitement de données incorrectes, etc. La diversité des données - structurées, semi-structurées et non structurées - favorise les défis dans le traitement des données à la fois en mode batch et en temps réel. Parce que, différentes techniques peuvent nécessiter des opérations sur différents types de données. Une structure technique permettant de traiter des données hétérogènes est très difficile et n'est pas disponible actuellement. En outre, l'exécution d'opérations de traitement de données en temps réel est très difficile ; des techniques efficaces sont nécessaires pour effectuer les opérations avec des données à haut débit, ce qui ne peut être fait en utilisant des systèmes d'information logistiques conventionnels. Par conséquent, pour exploiter le Big Data dans les processus de services logistiques, une solution efficace pour la collecte et le traitement des données en temps réel et en mode batch est essentielle. Dans cette thèse, nous avons développé et expérimenté deux méthodes pour le traitement des données: SANA et IBRIDIA. SANA est basée sur un classificateur multinomial Naïve Bayes, tandis qu'IBRIDIA s'appuie sur l'algorithme de classification hiérarchique (CLH) de Johnson, qui est une technologie hybride permettant la collecte et le traitement de données par lots et en temps réel. SANA est une solution de service qui traite les données non structurées. Cette méthode sert de système polyvalent pour extraire les événements pertinents, y compris le contexte (tel que le lieu, l'emplacement, l'heure, etc.). En outre, il peut être utilisé pour effectuer une analyse de texte sur les événements ciblés. 
IBRIDIA a été conçu pour traiter des données inconnues provenant de sources externes et les regrouper en temps réel afin d'acquérir une connaissance / compréhension des données permettant d'extraire des événements pouvant entraîner un retard de livraison. Selon nos expériences, ces deux approches montrent une capacité unique à traiter des données logistiques
Internet of Things (IoT) is leading to a paradigm shift within the logistics industry. The advent of IoT has been changing the logistics service management ecosystem. Logistics service providers today use sensor technologies such as GPS or telemetry to collect data in realtime while the delivery is in progress. The realtime collection of data enables the service providers to track and manage their shipment process efficiently. The key advantage of realtime data collection is that it enables logistics service providers to act proactively to prevent outcomes such as delivery delay caused by unexpected/unknown events. Furthermore, providers today tend to use data stemming from external sources such as Twitter, Facebook, and Waze, because these sources provide critical information about events such as traffic, accidents, and natural disasters. Data from such external sources enrich the dataset and add value to the analysis. Besides, collecting them in realtime provides an opportunity to use the data for on-the-fly analysis and to prevent unexpected outcomes (e.g., delivery delay) at run-time. However, the data are collected raw and need to be processed for effective analysis. Collecting and processing data in realtime is an enormous challenge. The main reason is that the data stem from heterogeneous sources at very high speed. The high speed and variety of the data make it challenging to perform complex processing operations such as cleansing, filtering, handling incorrect data, etc. The variety of data – structured, semi-structured, and unstructured – further complicates processing both in batch style and in realtime, since different types of data may require different processing techniques. A technical framework that enables the processing of such heterogeneous data is very challenging to build and is not currently available. In addition, performing data processing operations in realtime is demanding; efficient techniques are required to carry out the operations on high-speed data, which cannot be done using conventional logistics information systems. Therefore, in order to exploit Big Data in logistics service processes, an efficient solution for collecting and processing data in both realtime and batch style is critically important. In this thesis, we developed and experimented with two data processing solutions: SANA and IBRIDIA. SANA is built on a Multinomial Naïve Bayes classifier, whereas IBRIDIA relies on Johnson's hierarchical clustering (HCL) algorithm, a hybrid technology that enables data collection and processing in both batch style and realtime. SANA is a service-based solution which deals with unstructured data. It serves as a multi-purpose system to extract the relevant events, including the context of the event (such as place, location, time, etc.). In addition, it can be used to perform text analysis over the targeted events. IBRIDIA was designed to process unknown data stemming from external sources and cluster them on the fly in order to gain knowledge/understanding of the data, which assists in extracting events that may lead to delivery delay. According to our experiments, both of these approaches show a unique ability to process logistics data. However, SANA is found to be more promising, since its underlying technology (the Naïve Bayes classifier) outperformed IBRIDIA from a performance perspective.
SANA was designed to generate a knowledge graph from the collected events immediately, in realtime, without any need to wait, thus extracting maximum benefit from these events. IBRIDIA, on the other hand, is valuable within the logistics domain for identifying the most influential category of events affecting the delivery. Unfortunately, IBRIDIA must wait for a minimum number of events to arrive and always starts cold. Since we are interested in re-optimizing the route on the fly, we adopted SANA as our data processing framework.
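In the spirit of SANA's Multinomial Naïve Bayes step, the following hedged sketch uses scikit-learn (rather than the thesis's own implementation) to classify short, invented event texts as delay-relevant or not; the sample texts and labels are purely illustrative.

# Minimal Multinomial Naive Bayes text classification sketch (scikit-learn);
# the training data and labels are invented, not from the thesis.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "heavy traffic jam on highway A6 near the warehouse",
    "truck accident blocking two lanes on the ring road",
    "sunny weather, roads clear this morning",
    "concert tonight downtown, streets open as usual",
]
train_labels = ["delay_risk", "delay_risk", "no_risk", "no_risk"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["flooding reported on the delivery route"]))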
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Högdahl, Johan. „A Simulation-Optimization Approach for Improved Robustness of Railway Timetables“. Licentiate thesis, KTH, Transportplanering, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263761.

Der volle Inhalt der Quelle
Annotation:
The timetable is an essential part of the operation of railway traffic, and its quality is considered to have a large impact on the capacity utilization and reliability of the transport mode. The process of generating a timetable is most often a manual task with limited computer aid, and is known to be a complex planning problem due to inter-train dependencies. These inter-train dependencies make it hard to manually generate feasible timetables, and also make it hard to improve a given timetable, as new conflicts and surprising effects can easily occur. As the demand for railway traffic is expected to continue to grow, higher frequencies and more saturated timetables are required. However, in many European countries there is also an ongoing public debate on the punctuality of the railway, which may worsen with increased capacity utilization. There is therefore also a need to increase the robustness of the services. This calls for increased precision in both planning and operation, which can be achieved with a higher degree of automation. The research in this thesis is aimed at improving the robustness of railway timetables by combining micro-simulation with mathematical optimization, two methods that today are used frequently by practitioners and researchers but rarely in combination. In this research, a sequential approach is proposed, based on simulating a given timetable and re-optimizing it to reduce the weighted sum of scheduled travel time and predicted average delay. The approach has generated promising results in simulation studies, in which it has been possible to substantially improve punctuality and reduce average delays by only increasing the advertised travel times slightly. Further, the results have also indicated a positive socio-economic benefit. This demonstrates the method's potential usefulness and motivates further research.
För järnvägen har tidtabellen en central roll, och dess kvalité har stor betydelse för kapacitet och tillförlitlighet. Processen att konstruera en tidtabell är ofta en uppgift som utförs manuellt med begränsat datorstöd och på grund av beroenden mellan enskilda tåg är det ofta ett tidskrävande och svårt arbete. Dessa tågberoenden gör det svårt att manuellt konstruera konfliktfria tidtabeller samtidigt som det också är svårt att manuellt förbättra en given tidtabell, vilket beror på att de är svårt att förutsäga vad effekten av en given ändring blir. Eftersom efterfrågan på järnväg fortsatt förväntas öka, finns det ett behov av att kunna köra fler tåg. Samtidigt pågår det redan i många europeiska länder en offentlig debatt om järnvägen punktlighet, vilken riskeras att försämras vid högre kapacitetsanvändning. Därför finns det även ett behov av att förbättra tidtabellernas robusthet, där robusthet syftar till en tidtabells möjlighet att stå emot och återhämta mindre förseningar. För att hantera denna målkonflikt kommer det behövas ökad precision vid både planering och drift, vilket kan uppnås med en högre grad av automation. Forskningen i denna avhandling syftar till att förbättra robustheten för tågtidtabeller genom att kombinera mikro-simulering med matematisk optimering, två metoder som redan används i hög grad av både yrkesverksamma trafikplanerare och forskare men som sällan kombineras. I den här avhandlingen förslås en sekventiell metod baserad på att simulera en given tidtabell och optimera den för att minska den viktade summan av planerad restid och predikterad medelförsening. Metoden har visat på lovande resultat i simuleringsstudier, där det har varit möjligt att uppnå en väsentligt bättre punktlighet och minskad medelförsening, genom att endast förlänga de planerade restiderna marginellt. Även förbättrad samhällsekonomisk nytta har observerats av att tillämpa den föreslagna metoden. Sammantaget visar detta metodens potentiella nytta och motiverar även fortsatt forskning.
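The weighted objective described above can be illustrated with a small sketch: a hypothetical stand-in for the micro-simulation predicts the average delay as a function of an added time supplement, and the supplement minimizing the weighted sum of scheduled travel time and predicted delay is selected. All functions and numbers below are assumptions, not the thesis's simulation model.

# Toy simulate-and-reoptimize loop for the weighted travel-time/delay objective.
import math, random

def simulate_avg_delay(supplement_min):
    # Placeholder for the micro-simulation: more supplement -> lower expected delay.
    random.seed(0)
    return 6.0 * math.exp(-0.8 * supplement_min) + random.uniform(-0.1, 0.1)

def objective(scheduled_time_min, supplement_min, w_time=1.0, w_delay=2.0):
    predicted_delay = simulate_avg_delay(supplement_min)
    return w_time * (scheduled_time_min + supplement_min) + w_delay * predicted_delay

best = min(((objective(60.0, s), s) for s in [0, 1, 2, 3, 4, 5]), key=lambda t: t[0])
print("best supplement (min):", best[1], "objective:", round(best[0], 2))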


APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

ODHIAMBO, EVANS OTIENO. „Evaluation of Signal Optimization Software : Comparison of Optimal Signal Plans from TRANSYT and LinSig – A Case Study“. Thesis, KTH, Transportplanering, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259541.

Der volle Inhalt der Quelle
Annotation:
The design of the traffic signal control plan is directly related to the level of traffic congestion experienced both at the junction level and in the network, particularly in urban areas. Ensuring signals are well designed is one of the most cost-effective ways of tackling urban congestion problems. Signal time plans are designed with the help of signal optimization models. Optimization can be done either for multiple or for single objectives and is formulated as a problem of finding the appropriate cycle lengths, green splits, and offsets. Objective functions include better mobility, efficient energy use, and environmental sustainability. LinSig and TRANSYT are two of the most widely used traffic signal optimization tools in Sweden. Each of them has an inbuilt optimization function which differs from the other. LinSig optimizes based on delay or maximum reserve capacity, while TRANSYT optimization is based on a performance index (PI) involving delay, progression, stops, and fuel consumption. This thesis compared these optimization models through a theoretical review and application to a case study in Norrköping. The theoretical review showed that both TRANSYT and LinSig have objective functions based on delay and its derivatives. The review also showed that these models suffer from an inability to accurately model block back, as they are based on the assumption of vertical queuing of traffic at the stop line. Apart from these similarities, the two models also show significant variations with respect to modeling short congested sections of the network as well as modeling mixed traffic including different vehicle classes, pedestrians, and cyclists. From the case study, TRANSYT showed longer cycle times compared to LinSig in both scenarios, as its optimization objectives include both delay and stops while LinSig accounts for delay only. The allocation of phase green splits and individual junction delays were comparable for undersaturated junctions, while congested network sections showed significant differences. Total network delay was, however, less in LinSig than in TRANSYT. This could be attributed to different modeling criteria for mixed traffic and congested networks, in addition to the fact that cyclists were not modeled in TRANSYT. VISSIM simulation of the two signal time plans showed that network delay and queue lengths from the TRANSYT signal timings are much less than those from the LinSig time plans, a strong indication of better signal coordination.
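The difference between the two objective functions can be illustrated with a toy comparison (not either tool's internal model): a delay-only score in the style of LinSig versus a performance index that also weights stops, in the style of TRANSYT, evaluated for two invented candidate plans.

# Toy comparison of a delay-only objective vs. a stop-weighted performance index.
# Per-link delay/stop figures for the two candidate plans are invented.
def delay_only(links):
    return sum(l["delay_veh_h"] for l in links)

def performance_index(links, w_delay=1.0, w_stops=0.02):
    return sum(w_delay * l["delay_veh_h"] + w_stops * l["stops_per_h"] for l in links)

plan_short_cycle = [{"delay_veh_h": 10.0, "stops_per_h": 900}, {"delay_veh_h": 8.0, "stops_per_h": 700}]
plan_long_cycle  = [{"delay_veh_h": 11.0, "stops_per_h": 500}, {"delay_veh_h": 9.0, "stops_per_h": 400}]

for name, plan in [("short cycle", plan_short_cycle), ("long cycle", plan_long_cycle)]:
    print(name, "delay-only:", delay_only(plan), "PI:", round(performance_index(plan), 1))
# The delay-only score prefers the short cycle, while the stop-weighted PI
# prefers the long cycle - consistent with TRANSYT tending toward longer cycles.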
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Bocquillon, Ronan. „Data distribution optimization in a system of collaborative systems“. Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2232/document.

Der volle Inhalt der Quelle
Annotation:
Un système de systèmes est un système dont les composants sont eux-mêmes des systèmes indépendants, tous communiquant pour atteindre un objectif commun. Lorsque ces systèmes sont mobiles, il peut être difficile d'établir des connexions de bout-en-bout. L'architecture mise en place dans de telles situations est appelée réseau tolérant aux délais. Les données sont transmises d'un système à l'autre – selon les opportunités de communication, appelées contacts, qui apparaissent lorsque deux systèmes sont proches – et disséminées dans l'ensemble du réseau avec l'espoir que chaque message atteigne sa destination. Si une donnée est trop volumineuse, elle est découpée. Chaque fragment est alors transmis séparément.Nous supposons ici que la séquence des contacts est connue. On s'intéresse donc à des applications où la mobilité des systèmes est prédictible (les réseaux de satellites par exemple). Nous cherchons à exploiter cette connaissance pour acheminer efficacement des informations depuis leurs sources jusqu'à leurs destinataires. Nous devons répondre à la question : « Quels éléments de données doivent être transférés lors de chaque contact pour minimiser le temps de dissémination » ?Nous formalisons tout d'abord ce problème, appelé problème de dissémination, et montrons qu'il est NP-difficile au sens fort. Nous proposons ensuite des algorithmes pour le résoudre. Ces derniers reposent sur des règles de dominance, des procédures de prétraitement, la programmation linéaire en nombres entiers, et la programmation par contraintes. Une partie est dédiée à la recherche de solutions robustes. Enfin, nous rapportons des résultats numériques montrant l'efficacité de nos algorithmes
Systems of systems are supersystems comprising elements which are themselves independent operational systems, all interacting to achieve a common goal. When the subsystems are mobile, they may suffer from a lack of continuous end-to-end connectivity. To address the technical issues in such networks, the common approach is termed delay-tolerant networking. Routing relies on a store-and-forward mechanism. Data are sent from one system to another – depending on the communication opportunities, termed contacts, that arise when two systems are close – and stored throughout the network in the hope that all messages will reach their destination. If data are too large, they must be split. Each fragment is then transmitted separately. In this work, we assume that the sequence of contacts is known. Thus, we focus on applications where it is possible to make realistic predictions about system mobility (e.g. satellite networks). We study the problem of making the best use of knowledge about possibilities for communication when data need to be routed from a set of systems to another within a given time horizon. The fundamental question is: "Which elements of the information should be transferred during each contact so that the dissemination length is minimized?" We first formalize the so-called dissemination problem and prove that it is strongly NP-hard. We then propose algorithms to solve it. These rely on dominance rules, preprocessing procedures, integer linear programming, and constraint programming. A chapter is dedicated to the search for robust solutions. Finally, experimental results are reported to show the efficiency of our algorithms in practice.
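As an illustration of the dissemination problem (not the thesis's exact algorithms, which rely on dominance rules, ILP and constraint programming), the following greedy sketch walks a known, invented contact sequence, transfers one missing fragment per contact, and reports when every recipient holds all fragments.

# Greedy illustration of dissemination over a known contact sequence.
FRAGMENTS = {0, 1}                              # data split into two fragments
holds = {"A": {0, 1}, "B": set(), "C": set()}   # A is the source
recipients = {"B", "C"}
contacts = [("A", "B"), ("A", "C"), ("A", "B"), ("A", "C"), ("B", "C")]  # time-ordered

dissemination_length = None
for t, (u, v) in enumerate(contacts, start=1):
    missing = holds[u] - holds[v]
    if missing:
        holds[v].add(min(missing))              # transfer one fragment per contact
    if all(holds[r] == FRAGMENTS for r in recipients):
        dissemination_length = t
        break
print("all recipients served after contact", dissemination_length)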
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Schlake, Farimehr. „Optimal Consumer-Centric Delay-Efficient Security Management in Multi-Agent Networks: A Game and Mechanism Design Theoretic Approach“. Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/77362.

Der volle Inhalt der Quelle
Annotation:
The main aspiration behind the contributions of this research work is the achievement of simultaneous delay-efficiency, autonomy, and security through innovative protocol design to address complex real-life problems. To achieve this, we take a holistic approach. We apply theoretical mathematical modeling, incorporating implications of socio-economic behavioral characteristics, to propose a cross-layer network security protocol. We further complement this approach with a layer-specific focus and implementations at two lower OSI layers. For the cross-layer design, we suggest the use of game and mechanism design theories. We design a network-wide consumer-centric and delay-efficient security protocol, DSIC-S. It induces a Dominant Strategy Incentive Compatible equilibrium among all rational and selfish nodes. We prove it is network-wide socially desirable and Pareto optimal. We address resource management and delay-efficiency through the synergy of several design aspects. We propose a scenario-based security model with different levels. Furthermore, we design a valuation system to integrate the caused delay into the selection of security algorithms at each node without the consumer's knowledge of the actual delays. We achieve this by incorporating the consumer's valuation system in the calculation of the credit transfers through Vickrey-Clarke-Groves (VCG) payments with Clarke's pivotal rule. As the most significant contribution of this work, we solve the revelation theorem's problem of misrepresentation of agents' private information in mechanism design theory through the proposed design. We design an incentive model and incorporate the valuations in the incentives. The simulations validate the theoretical results. They prove the significance of this model and, among other results, show the correlation of the credit transfers with actual delays and security valuations. In the layer-specific approach for the network layer, we implement the DSIC-S protocol to extend the current IPsec and IKEv2 protocols. IPsec-O and IKEv2-O inherit the strong properties of DSIC-S through the proposed extensions. Furthermore, we propose yet another layer-specific protocol, SME_Q, for the data link layer based on ATM. We develop extensive simulation software, SMEQSIM, to simulate ATM security negotiations. We simulate the proposed protocol in a comprehensive real-life ATM network and prove the significance of this research work.
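The VCG payment mechanism with Clarke's pivotal rule that the protocol builds on can be shown on a toy single-item allocation; this is not the DSIC-S protocol itself, and the valuations are invented.

# Generic VCG payments with Clarke's pivotal rule on a toy single-item allocation.
valuations = {"n1": 7.0, "n2": 5.0, "n3": 3.0}   # each node's value for winning

def best_welfare(agents):
    # single-item setting: welfare of the best allocation over these agents
    return max((valuations[a] for a in agents), default=0.0)

winner = max(valuations, key=valuations.get)
payments = {}
for agent in valuations:
    others = [a for a in valuations if a != agent]
    # welfare the others would get if this agent were absent
    without_agent = best_welfare(others)
    # welfare the others actually get in the chosen outcome (agent present)
    with_agent = 0.0 if agent == winner else valuations[winner]
    payments[agent] = without_agent - with_agent   # Clarke's pivotal payment

print("winner:", winner, "payments:", payments)    # only the winner pays a positive amount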
Ph. D.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Tran, Martina. „Energy Consumption Optimizations for 5G networks“. Thesis, Uppsala universitet, Signaler och System, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-395146.

Der volle Inhalt der Quelle
Annotation:
The importance of energy efficiency has grown alongside awareness of climate change due to the rapid increase of greenhouse gases. With the increasing trend regarding mobile subscribers, it is necessary to prevent an expansion of energy consumption via mobile networks. In this thesis, the energy optimization of the new radio access technology called 5G NR utilizing different sleep states to put base stations to sleep when they are not transmitting data is discussed. Energy savings and file latency with heterogeneous and super dense urban scenarios was evaluated through simulations with different network deployments. An updated power model has been proposed and the sensitivity of the new power model was analyzed by adjusting wake-up time and sleep factors. This showed that careful implementation is necessary when adjusting these parameter settings, although in most cases it did not change the end results by much. Since 5G NR has more potential in energy optimization compared to the previous generation mobile network 4G LTE, up to 4 sleep states was implemented on the NR base stations and one idle mode on LTE base stations. To mitigate unnecessary sleep, deactivation timers are used which decides when to put base stations to sleep. Without deactivation timers, the delay could increase significantly, while with deactivation timers the delay increase would only be a few percent. Up to 42.5% energy could be saved with LTE-NR non-standalone deployment and 72.7% energy with NR standalone deployment compared to LTE standalone deployment, while minimally impacting the delay on file by 1%.
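A toy event simulation can illustrate the role of the deactivation timer described above: a single base station with one assumed sleep state drops to sleep only after being idle longer than the timer, trading energy against the wake-up latency added to later arrivals. All power levels, timings and the wake-up penalty are hypothetical.

# Toy deactivation-timer tradeoff: energy vs. added wake-up delay.
P_ACTIVE, P_IDLE, P_SLEEP = 200.0, 80.0, 10.0     # Watts (assumed)
WAKE_UP_S = 0.01                                   # wake-up latency in seconds (assumed)

def simulate(arrivals, service_s, timer):
    energy, extra_delay, prev_end = 0.0, 0.0, 0.0
    for t in arrivals:
        gap = max(0.0, t - prev_end)
        idle_part = min(gap, timer)                # BS stays idle until the timer fires
        energy += P_IDLE * idle_part + P_SLEEP * (gap - idle_part)
        if gap > timer:                            # BS had gone to sleep: pay wake-up latency
            extra_delay += WAKE_UP_S
        energy += P_ACTIVE * service_s             # serve the file
        prev_end = t + service_s
    return energy, extra_delay

arrivals = [0.0, 0.5, 3.0, 3.2, 10.0]              # file arrival times in seconds (assumed)
for timer in (0.0, 1.0, 5.0):
    e, d = simulate(arrivals, service_s=0.1, timer=timer)
    print(f"timer={timer}s  energy={e:.0f} J  added delay={d * 1000:.0f} ms")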
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Vigeh, Arya. „Investigation of a Simulated Annealing Cooling Schedule used to Optimize the Estimation of the Fiber Diameter Distribution in a Peripheral Nerve Trunk“. DigitalCommons@CalPoly, 2011. https://digitalcommons.calpoly.edu/theses/497.

Der volle Inhalt der Quelle
Annotation:
In previous studies it was determined that the fiber diameter distribution in a peripheral nerve could be estimated by a simulation technique known as group delay. These results could be further improved using a combinatorial optimization algorithm called simulated annealing. This thesis explores the structure and behavior of simulated annealing for the application of optimizing the group-delay-estimated fiber diameter distribution. Specifically, a set of parameters known as the cooling schedule is investigated to determine its effectiveness in the optimization process. Simulated annealing is a technique for finding the global minimum (or maximum) of a cost function which may have many local minima. The set of parameters which comprises the cooling schedule dictates the rate at which simulated annealing reaches its final solution. Converging too quickly can result in sub-optimal solutions, while taking too long to determine a solution can result in an unnecessarily large computational effort that would be impractical in a real-world setting. The goal of this study is to minimize the computational effort of simulated annealing without sacrificing its effectiveness at minimizing the cost function. The cost function for this application is an error value computed as the difference in the maximum compound evoked potentials between an empirically determined template distribution of fiber diameters and an optimized set of fiber diameters. The resulting information will be useful when developing the group delay estimation and the subsequent simulated annealing optimization in an experimental laboratory setting.
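A generic simulated-annealing skeleton with a geometric cooling schedule makes the cooling-schedule parameters concrete (initial temperature, cooling factor, iterations per temperature). The quadratic toy cost below merely stands in for the evoked-potential error function used in the thesis.

# Generic simulated annealing with a geometric cooling schedule (toy cost).
import math, random

def cost(x):
    return (x - 3.0) ** 2          # toy cost with its minimum at x = 3

def simulated_annealing(t0=10.0, alpha=0.95, steps_per_t=20, t_min=1e-3):
    random.seed(1)
    x = random.uniform(-10, 10)
    best = x
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            cand = x + random.gauss(0.0, 1.0)
            delta = cost(cand) - cost(x)
            # accept improvements always, worsenings with Boltzmann probability
            if delta < 0 or random.random() < math.exp(-delta / t):
                x = cand
                if cost(x) < cost(best):
                    best = x
        t *= alpha                 # geometric cooling: slower decay = more work
    return best

print(round(simulated_annealing(), 3))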
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Lang, Stanislav. „Optimalizace řídicího algoritmu pomocí evolučního algoritmu“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-228998.

Der volle Inhalt der Quelle
Annotation:
This thesis deals with the possibilities of using evolutionary computation in the field of automation. The theoretical part of the thesis describes the techniques used in automation and optimization. The practical part connects these two disciplines; the output of this work is a program for the automatic design of controller parameters using a genetic algorithm.
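In the spirit of the program described above (though not its actual implementation), the sketch below shows a compact genetic algorithm tuning PI controller gains against a simple discretized first-order plant; the plant model, cost function and GA settings are assumptions.

# Compact genetic-algorithm sketch for tuning PI gains (Kp, Ki); all settings assumed.
import random
random.seed(0)

def cost(gains, dt=0.01, horizon=5.0):
    kp, ki = gains
    y, integ, err_sum = 0.0, 0.0, 0.0
    for _ in range(int(horizon / dt)):
        e = 1.0 - y                      # unit step reference
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (-y + u)               # first-order plant dy/dt = -y + u
        err_sum += abs(e) * dt           # integral of absolute error (IAE)
    return err_sum

def evolve(pop_size=20, generations=30):
    pop = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]               # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = tuple(random.choice(p) + random.gauss(0, 0.3) for p in zip(a, b))
            children.append((max(0.0, child[0]), max(0.0, child[1])))
        pop = parents + children
    return min(pop, key=cost)

print(evolve())   # evolved (Kp, Ki) pair with low step-response error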
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Zohdy, Ismail Hisham. „Development and Testing Of The iCACC Intersection Controller For Automated Vehicles“. Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/51743.

Der volle Inhalt der Quelle
Annotation:
Assuming that vehicle connectivity technology matures and connected vehicles hit the market, many of the vehicles on the road will be equipped with highly sophisticated sensors and communication hardware. Along with the goals of eliminating distracted driving and increasing vehicle automation, it is necessary to develop novel intersection control strategies. Accordingly, the research presented in this dissertation develops an innovative system that controls the movement of vehicles using cooperative adaptive cruise control (CACC) capabilities, entitled iCACC (intersection management using CACC). In the iCACC system, the main assumption is that the intersection controller receives requests from vehicles and advises each vehicle on the optimum course of action, ensuring that no crashes occur while at the same time minimizing the intersection delay. In addition, an innovative framework (the APP framework) has been developed using the iCACC platform to prioritize the movements of vehicles based on the number of passengers in the vehicle. Using CACC and vehicle-to-infrastructure connectivity, the system was also applied to a single-lane roundabout; in general terms, this application is quite similar to the concept of metering single-lane entrance ramps. The proposed iCACC system was tested and compared to three other intersection control strategies, namely traffic signal control, all-way stop control (AWSC), and a roundabout, considering different traffic demand levels ranging from low to high levels of congestion (volume-to-capacity ratio from 0.2 to 0.9). The simulation results showed savings in delay and fuel consumption on the order of 90 to 45%, respectively, compared to AWSC and traffic signal control. Delays for the roundabout and the iCACC controller were comparable. The simulation results showed that fuel consumption for the iCACC controller was, on average, 33%, 45% and 11% lower than the fuel consumption for the traffic signal, AWSC and roundabout control strategies, respectively. In summary, the developed iCACC system is an innovative system because of its ability to optimize/model different levels of vehicle automation market penetration, weather conditions, vehicle classes/models, shared movements, roundabouts, and passenger priority. In addition, iCACC is capable of capturing the heterogeneity of roadway users (cyclists, pedestrians, etc.) using a video detection technique developed in this dissertation effort. It is anticipated that the research findings will contribute to the application of automated systems, connected vehicle technology, and the future of driverless vehicle management. Finally, the public acceptability of new advanced in-vehicle technologies is a challenging issue, and this research will provide valuable feedback for researchers, automobile manufacturers, and decision makers in making the case to introduce such systems.
Ph. D.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Santos, Cristiano Lopes dos. „Verificação e otimização de atraso durante a síntese física de circuitos integrados CMOS“. reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2005. http://hdl.handle.net/10183/17785.

Der volle Inhalt der Quelle
Annotation:
Este trabalho propõe um método de otimização de atraso, através de dimensionamento de transistores, o qual faz parte de um fluxo automático de síntese física de circuitos combinacionais em tecnologia CMOS estática. Este fluxo de síntese física é independente de biblioteca de células, sendo capaz de realizar, sob demanda, a geração do leiaute a partir de um netlist de transistores. O método de otimização proposto faz com que este fluxo de síntese física seja capaz de realizar a geração do leiaute orientado pelas restrições de atraso, garantindo a operação do circuito na freqüência especificada pelo projetista. Este trabalho inclui também uma pesquisa sobre os principais métodos de verificação e otimização de atraso, principalmente aqueles que podem ser aplicados quando a etapa de síntese física chega ao nível de transistores. Um método de análise de timing funcional é utilizado para identificar o atraso e o caminho críticos e, com isso, guiar o método de otimização proposto. Desta forma, não existe desperdício de esforço e desempenho para reduzir o atraso de caminhos que não contribuem efetivamente para determinar a freqüência do circuito. O método proposto neste trabalho explora as possibilidades oferecidas por ser independente de biblioteca de células, mas impõe restrições aos circuitos otimizados para reduzir o impacto do dimensionamento nas etapas de geração de leiaute. O desenvolvimento de um método incremental de seleção de caminhos críticos reduziu consideravelmente o tempo de processamento sem comprometer a qualidade dos resultados. Ainda, a realização de um método seletivo de dimensionamento de transistores, possibilitado pela adaptação de um modelo de atraso pino-a-pino, permitiu reduzir significativamente o acréscimo de área decorrente da otimização e aumentou a precisão das estimativas de atraso.
This work proposes a transistor sizing-based delay optimization method especially tailored for an automatic physical synthesis flow of static CMOS combinational circuits. This physical synthesis flow is a library-free approach which is able to perform layout generation from a transistor-level netlist description of the circuit. The integration of the proposed optimization method into the automatic physical synthesis makes a timing-driven layout generation flow possible. This work also includes a survey of the major delay verification and optimization methods, mainly those that can be applied during the physical synthesis step at the transistor level. A functional timing analysis method is used to identify the critical delay and the critical paths and thus drive the proposed optimization method. Hence, there is no waste of effort in optimizing paths which are not responsible for the delay of the circuit. The optimization method proposed in this work explores the advantages provided by a library-free synthesis flow and imposes restrictions on the optimized circuits in order to minimize the impact of the transistor sizing on the layout generation steps. The development of a method for incremental critical path selection reduces the CPU time consumed by the delay optimization step. A pin-to-pin gate delay model was adapted to perform selective transistor sizing, resulting in a significant reduction of the area overhead.
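A crude sizing loop illustrates the idea of timing-driven sizing (this is a stand-in, not the thesis's pin-to-pin delay model): the slowest gate on the critical path is repeatedly upsized until a target delay is met or an area limit is reached; the path data are invented.

# Illustrative gate-sizing loop with a crude logical-effort-style delay model
# (delay ~ p + g * load / size). Path data and limits are hypothetical.
path = [  # gate: parasitic delay p, logical effort g, load cap, current size
    {"name": "inv1", "p": 1.0, "g": 1.0, "load": 8.0, "size": 1.0},
    {"name": "nand2", "p": 2.0, "g": 1.3, "load": 6.0, "size": 1.0},
    {"name": "nor2", "p": 2.5, "g": 1.7, "load": 4.0, "size": 1.0},
]

def gate_delay(gate):
    return gate["p"] + gate["g"] * gate["load"] / gate["size"]

def path_delay(path):
    return sum(gate_delay(g) for g in path)

target, max_size = 12.0, 8.0
while path_delay(path) > target:
    worst = max(path, key=gate_delay)
    if worst["size"] >= max_size:
        break                              # area budget exhausted
    worst["size"] *= 1.2                   # upsize the slowest gate by 20%
print(round(path_delay(path), 2), [(g["name"], round(g["size"], 2)) for g in path])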
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Madamori, Oluwashina. „Optimal Gateway Placement in Low-cost Smart Cities“. UKnowledge, 2019. https://uknowledge.uky.edu/cs_etds/92.

Der volle Inhalt der Quelle
Annotation:
Rapid urbanization burdens city infrastructure and creates the need for local governments to maximize the usage of resources to serve their citizens. Smart city projects aim to alleviate the urbanization problem by deploying a vast number of Internet-of-Things (IoT) devices to monitor and manage environmental conditions and infrastructure. However, smart city projects can be extremely expensive to deploy and manage, partly due to the cost of providing Internet connectivity to IoT devices via 5G or WiFi. This thesis proposes the use of delay-tolerant networks (DTNs) as a backbone for smart city communication, enabling developing communities to become smart cities at a fraction of the cost. A model is introduced to aid policy makers in designing and evaluating the expected performance of such networks, and results are presented based on public transit network data-sets from Chapel Hill, North Carolina, and Louisville, Kentucky. We also demonstrate that the performance of our network can be optimized using algorithms associated with the set-cover and influence maximization problems. Several optimization algorithms are then developed to facilitate the effective placement of gateways within the network model, and these algorithms are shown to outperform traditional centrality-based algorithms in terms of cost-efficiency and network performance. Finally, other innovative ways of improving network performance in a low-cost smart city are discussed.
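The greedy set-cover idea mentioned above can be sketched as follows; the coverage sets are invented, and the thesis evaluates richer cost and delay objectives on real transit data.

# Greedy set-cover sketch for gateway placement: at each step pick the candidate
# whose coverage reaches the most still-uncovered stops.
coverage = {                    # candidate gateway -> stops it can reach (invented)
    "g1": {"s1", "s2", "s3"},
    "g2": {"s3", "s4"},
    "g3": {"s4", "s5", "s6"},
    "g4": {"s1", "s6"},
}
universe = set().union(*coverage.values())

uncovered, chosen = set(universe), []
while uncovered:
    best = max(coverage, key=lambda g: len(coverage[g] & uncovered))
    if not coverage[best] & uncovered:
        break                   # remaining stops are unreachable
    chosen.append(best)
    uncovered -= coverage[best]
print(chosen)                   # e.g. ['g1', 'g3'] covers all six stops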
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Kumar, Akshay. „Efficient Resource Allocation Schemes for Wireless Networks with with Diverse Quality-of-Service Requirements“. Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/87529.

Der volle Inhalt der Quelle
Annotation:
Quality-of-Service (QoS) for users is a critical requirement of resource allocation in wireless networks and has drawn significant research attention over a long time. However, QoS requirements differ vastly based on the wireless network paradigm. At one extreme, we have a millimeter wave small-cell network for streaming data that requires very high throughput and low latency. At the other end, we have Machine-to-Machine (M2M) uplink traffic with low throughput and low latency requirements. In this dissertation, we investigate and solve QoS-aware resource allocation problems for diverse wireless paradigms. We first study cross-layer dynamic spectrum allocation in an LTE macro-cellular network with fractional frequency reuse to improve the spectral efficiency for cell-edge users. We show that the resultant optimization problem is NP-hard and propose a low-complexity layered spectrum allocation heuristic that strikes a balance between rate maximization and fairness of allocation. Next, we develop an energy-efficient downlink power control scheme for an energy harvesting small-cell base station equipped with a local cache and wireless backhaul. We also study the tradeoff between the cache size and the energy harvesting capabilities. We next analyze the file read latency in Distributed Storage Systems (DSS). We propose a heterogeneous DSS model wherein the stored data are categorized into multiple classes based on the arrival rate of read requests, fault tolerance for storage, etc. Using a queuing-theoretic approach, we establish bounds on the average read latency for different scheduling policies. We also show that erasure coding in DSS serves the dual purpose of reducing read latency and increasing energy efficiency. Lastly, we investigate the problem of delay-efficient packet scheduling in the M2M uplink with heterogeneous traffic characteristics. We classify the uplink traffic into multiple classes and propose a proportionally fair, delay-efficient heuristic packet scheduler. Using a queuing-theoretic approach, we then develop a delay-optimal multiclass packet scheduler and later extend it to joint medium access control and packet scheduling for the M2M uplink. Using extensive simulations, we show that the proposed schedulers perform better than state-of-the-art schedulers in terms of average delay and packet delay jitter.
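As a rough illustration of delay-efficient multiclass scheduling (not the optimal policy derived in the dissertation), the sketch below serves, in every slot, the head-of-line packet with the largest weighted waiting time, so delay-sensitive M2M classes are favoured without starving the rest; classes, weights and arrivals are invented.

# Toy multiclass M2M uplink scheduler: weighted head-of-line waiting time decides
# which class transmits in each slot. All classes, weights and arrivals are invented.
from collections import deque

weights = {"alarm": 5.0, "control": 2.0, "metering": 1.0}   # delay sensitivity (assumed)
queues = {c: deque() for c in weights}

# (arrival_slot, class) of a few M2M packets
for arrival, cls in [(0, "metering"), (0, "alarm"), (1, "metering"), (2, "control")]:
    queues[cls].append(arrival)

for slot in range(3, 8):
    backlogged = [c for c in queues if queues[c]]
    if not backlogged:
        break
    cls = max(backlogged, key=lambda c: weights[c] * (slot - queues[c][0]))
    arrival = queues[cls].popleft()
    print(f"slot {slot}: serve {cls} packet that waited {slot - arrival} slots")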
Ph. D.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
