
Dissertations / Theses on the topic 'Low latency'



Consult the top 50 dissertations / theses for your research on the topic 'Low latency.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Yonghao. "Low latency audio processing." Thesis, Queen Mary, University of London, 2018. http://qmro.qmul.ac.uk/xmlui/handle/123456789/44697.

Full text
Abstract:
Latency in the live audio processing chain has become a concern for audio engineers and system designers because significant delays can be perceived and may affect synchronisation of signals, limit interactivity, degrade sound quality and cause acoustic feedback. In recent years, latency problems have become more severe since audio processing has become digitised, high-resolution ADCs and DACs are used, complex processing is performed, and data communication networks are used for audio signal transmission in conjunction with other traffic types. In many live audio applications, latency thresholds are bounded by human perception. Applications such as music ensembles and live monitoring require low and predictable latency. Current digital audio systems either struggle to achieve this or have to trade latency off against other important audio processing functionalities. This thesis investigates the fundamental causes of latency in a modern digital audio processing system: group delay, buffering delay, and physical propagation delay, together with their associated system components. By studying the time-critical path of a general audio system, we focus on three main functional blocks that have a significant impact on overall latency: the high-resolution digital filters in sigma-delta based ADCs/DACs, the operating system that processes low latency audio streams, and the audio networking that transmits audio with flexibility and convergence. In this work, we develop new theory and methods to reduce latency and to accurately predict the latency due to group delay. We propose new operating-system scheduling algorithms suitable for low latency audio processing. We design a new system architecture and new protocols to produce deterministic networking components that contribute to the overall timing assurance and predictability of live audio processing. The results are validated by simulations and experimental tests. This bottom-up approach is also aligned with a methodology that could solve the timing problem of general cyber-physical systems requiring the integration of communication, software and human interactions.
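As a rough illustration of the buffering and group-delay contributions named in this abstract, the sketch below adds up a linear-phase filter's group delay and a double-buffering delay. The sample rate, buffer size and filter length are assumed values for illustration only, not figures from the thesis.

```python
# Rough latency budget for a digital audio chain (illustrative values only).
SAMPLE_RATE_HZ = 48_000        # assumed sample rate
BUFFER_SAMPLES = 128           # assumed audio callback buffer size
FIR_TAPS = 255                 # assumed linear-phase decimation filter length

# A linear-phase FIR filter delays the signal by (N - 1) / 2 samples.
group_delay_ms = (FIR_TAPS - 1) / 2 / SAMPLE_RATE_HZ * 1e3

# Double buffering typically costs two buffer periods (capture + playback).
buffer_delay_ms = 2 * BUFFER_SAMPLES / SAMPLE_RATE_HZ * 1e3

print(f"filter group delay       : {group_delay_ms:.2f} ms")
print(f"buffering delay          : {buffer_delay_ms:.2f} ms")
print(f"total (excl. propagation): {group_delay_ms + buffer_delay_ms:.2f} ms")
```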
APA, Harvard, Vancouver, ISO, and other styles
2

Riddoch, David James. "Low latency distributed computing." Thesis, University of Cambridge, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.619850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Friston, S. "Low latency rendering with dataflow architectures." Thesis, University College London (University of London), 2017. http://discovery.ucl.ac.uk/1544925/.

Full text
Abstract:
The research presented in this thesis concerns latency in VR and synthetic environments. Latency is the end-to-end delay experienced by the user of an interactive computer system, between their physical actions and the perceived response to these actions. Latency is a product of the various processing, transport and buffering delays present in any current computer system. For many computer mediated applications, latency can be distracting, but it is not critical to the utility of the application. Synthetic environments, on the other hand, attempt to facilitate direct interaction with a digitised world. Direct interaction here implies the formation of a sensorimotor loop between the user and the digitised world - that is, the user makes predictions about how their actions affect the world, and sees these predictions realised. By facilitating the formation of this loop, the synthetic environment allows users to directly sense the digitised world, rather than the interface, and induces perceptions, such as that of the digital world existing as a distinct physical place. This has many applications for knowledge transfer and efficient interaction through the use of enhanced communication cues. The complication is that the formation of the sensorimotor loop that underpins this is highly dependent on the fidelity of the virtual stimuli, including latency. The main research questions we ask are how the characteristics of dataflow computing can be leveraged to improve the temporal fidelity of the visual stimuli, and what implications this has for other aspects of the fidelity. Secondarily, we ask what effects latency itself has on user interaction. We test the effects of latency on physical interaction at levels previously hypothesized but unexplored. We also test for a previously unconsidered effect of latency on higher level cognitive functions. To do this, we create prototype image generators for interactive systems and virtual reality, using dataflow computing platforms. We integrate these into real interactive systems to gain practical experience of the real, perceptible benefits of alternative rendering approaches, but also of the implications when they are subject to the constraints of real systems. We quantify the differences between our systems and traditional systems using latency and objective image fidelity measures. We use our novel systems to perform user studies into the effects of latency. Our high performance apparatuses allow experimentation at latencies lower than previously tested in comparable studies. The low latency apparatuses are designed to minimise what is currently the largest delay in traditional rendering pipelines, and we find that the approach is successful in this respect. Our 3D low latency apparatus achieves lower latencies and higher fidelities than traditional systems; the conditions under which it can do this are highly constrained, however. We do not foresee dataflow computing shouldering the bulk of the rendering workload in the future, but rather facilitating the augmentation of the traditional pipeline with a very high speed local loop. This may be an image distortion stage or otherwise. Our latency experiments revealed that many predictions about the effects of low latency should be re-evaluated, and that experimenting in this range requires great care.
APA, Harvard, Vancouver, ISO, and other styles
4

Lancaster, Robert. "Low Latency Networking in Virtualized Environments." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1352993532.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tridgell, Stephen. "Low Latency Machine Learning on FPGAs." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23030.

Full text
Abstract:
In recent years, there has been an exponential rise in the quantity of data being acquired and generated. Machine learning provides a way to use and analyze this data to provide a range of insights and services. In this thesis, two popular machine learning algorithms are explored in detail and implemented in hardware to achieve high throughput and low latency. The first algorithm discussed is the Naïve Online regularised Risk Minimization Algorithm. This is a Kernel Adaptive Filter capable of high throughput online learning. In this work, a hardware architecture known as braiding is proposed and implemented on a Field-Programmable Gate Array. The application of this braiding technique to the Naïve Online regularised Risk Minimization Algorithm results in a very high throughput and low latency design. Neural networks have seen explosive growth in research in the recent decade. A portion of this research has been dedicated to lowering the computational cost of neural networks by using lower precision representations. The second method explored and implemented in this work is the unrolling of ternary neural networks. Ternary neural networks can have the same structure as any floating point neural network with the key difference being the weights of the network are restricted to -1, 0 and 1. Under certain assumptions, this work demonstrates that these networks can be implemented very efficiently for inference by exploiting sparsity and common subexpressions. To demonstrate the effectiveness of this technique, it is applied to two different systems and two different datasets. The first is on the common benchmarking dataset CIFAR10 and the Amazon Web Services F1 platform, and the second is for real-time automatic modulation classification of radio frequency signals using the radio frequency system on chip ZCU111 development board. These implementations both demonstrate very high throughput and low latency compared with other published literature while maintaining very high accuracy. Together this work provides techniques for real-time inference and training on parallel hardware which can be used to implement a wide range of new applications.
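A minimal sketch of the ternary-weight idea mentioned in this abstract: with weights restricted to {-1, 0, +1}, a dot product reduces to additions, subtractions and skipped zeros. The sizes and random values below are illustrative assumptions, not the thesis' hardware design.

```python
import numpy as np

# Ternary weight matrix: entries restricted to {-1, 0, +1} (illustrative, random).
rng = np.random.default_rng(0)
W = rng.choice([-1, 0, 1], size=(4, 8), p=[0.25, 0.5, 0.25])
x = rng.standard_normal(8)

# With ternary weights a dot product needs no multiplications:
# add the inputs where the weight is +1, subtract where it is -1,
# and skip the zeros entirely (sparsity).
y = np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])

assert np.allclose(y, W @ x)  # same result as an ordinary matrix-vector product
```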
APA, Harvard, Vancouver, ISO, and other styles
6

Marxer, Piñón Ricard. "Audio source separation for music in low-latency and high-latency scenarios." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/123808.

Full text
Abstract:
This thesis proposes specific methods to address the limitations of current music source separation methods in low-latency and high-latency scenarios. First, we focus on methods with low computational cost and low latency. We propose the use of Tikhonov regularization as a method for spectrum decomposition in the low-latency context. We compare it to existing techniques in pitch estimation and tracking tasks, crucial steps in many separation methods. We then use the proposed spectrum decomposition method in low-latency separation tasks targeting singing voice, bass and drums. Second, we propose several high-latency methods that improve the separation of singing voice by modeling components that are often not accounted for, such as breathiness and consonants. Finally, we explore using temporal correlations and human annotations to enhance the separation of drums and complex polyphonic music signals.
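To illustrate the Tikhonov-regularised spectrum decomposition mentioned in this abstract, here is a minimal sketch in which a spectrum frame is explained as a combination of basis spectra with an L2 penalty; the closed-form solution is what makes the method cheap enough for frame-by-frame, low-latency use. The basis, spectrum and regularisation weight below are assumed toy data, not the thesis' actual templates.

```python
import numpy as np

# Toy spectrum decomposition by Tikhonov (ridge) regularisation:
# find activations g that explain an observed magnitude spectrum s
# as a combination of basis spectra B, with an L2 penalty on g.
rng = np.random.default_rng(1)
n_bins, n_bases = 64, 10
B = np.abs(rng.standard_normal((n_bins, n_bases)))   # assumed basis (e.g. pitch templates)
s = np.abs(rng.standard_normal(n_bins))              # observed spectrum frame
lam = 0.1                                            # regularisation weight (assumed)

# Closed-form solution: g = (B^T B + lam * I)^(-1) B^T s
g = np.linalg.solve(B.T @ B + lam * np.eye(n_bases), B.T @ s)
print(g)
```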
APA, Harvard, Vancouver, ISO, and other styles
7

Gazi, Orhan. "Parallelized Architectures For Low Latency Turbo Structures." PhD thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608110/index.pdf.

Full text
Abstract:
In this thesis, we present low latency general concatenated code structures suitable for parallel processing. We propose parallel decodable serially concatenated codes (PDSCCs), a general structure for constructing many variants of serially concatenated codes. Using this most general structure we derive parallel decodable serially concatenated convolutional codes (PDSCCCs). Convolutional product codes, which are instances of PDSCCCs, are studied in detail. PDSCCCs have much lower decoding latency and show almost the same performance compared to classical serially concatenated convolutional codes. Using the same idea, we propose parallel decodable turbo codes (PDTCs), a general structure for constructing parallel concatenated codes. PDTCs have much lower latency than classical turbo codes while achieving similar performance. We extend the approach proposed for the construction of parallel decodable concatenated codes to trellis coded modulation, turbo channel equalization, and space time trellis codes, and show that low latency systems can be constructed using the same idea. Parallel decoding introduces new implementation problems. One such problem is memory collision, which occurs when multiple decoder units attempt to access the same memory device. We propose novel interleaver structures that prevent the memory collision problem while achieving performance close to that of other interleavers.
APA, Harvard, Vancouver, ISO, and other styles
8

Goel, Ashvin. "Operating system support for low-latency streaming." 2003. Full text open access at: http://content.ohsu.edu/u?/etd,194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Guan, Xi. "MeteorShower: geo-replicated strongly consistent NoSQL data store with low latency : Achieving sequentially consistent keyvalue store with low latency." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180687.

Full text
Abstract:
According to the CAP theorem, strong consistency is usually compromised in the design of NoSQL databases. Poor performance is often observed when a strong data consistency level is required, especially when a system is deployed in a geographical environment. In such an environment, servers need to communicate through cross-datacenter messages, whose latency is much higher than that of messages within a data center. However, maintaining strong consistency usually involves extensive use of cross-datacenter messages. Thus, the large cross-datacenter communication delay is one of the dominant reasons for the poor performance of most algorithms that achieve strong consistency in a geographical environment. This thesis work proposes a novel data consistency algorithm, I-Write-One-Read-One, based on Write-One-Read-All. The novel approach allows a read request to be answered by performing a local read. Besides, it reduces the cross-datacenter consistency-synchronization message delay from a round trip to a single trip. Moreover, the consistency model achieved by I-Write-One-Read-One is stronger than sequential consistency but weaker than linearizability. In order to verify the correctness and effectiveness of I-Write-One-Read-One, a prototype, MeteorShower, is implemented on Cassandra. Furthermore, in order to reduce time skew among nodes, NTP servers are deployed. Compared to Cassandra with a Write-One-Read-All consistency setup, MeteorShower has almost the same write performance but much lower read latency in a real geographical deployment. The higher the cross-datacenter network delay, the more evident the read performance improvement. Like Cassandra, MeteorShower also has excellent horizontal scalability, with performance growing linearly with the number of nodes per data center.
APA, Harvard, Vancouver, ISO, and other styles
10

Ge, Wu. "A Continuous Dataflow Pipeline For Low Latency Recommendations." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180695.

Full text
Abstract:
The goal of building a recommender system is to generate personalized recommendations for users. Recommender systems have great value in multiple business verticals such as video on demand, news, advertising and retailing. In order to recommend to each individual, a large amount of personal preference data needs to be collected and processed. Processing big data usually takes a long time, and the long delay between data entering the system and results being generated means that recommender systems can only benefit returning users. This project is an attempt to build a recommender system as a service with low latency, making it applicable to more scenarios. In this work, different recommendation algorithms and distributed computing frameworks are studied and compared to identify the most suitable design. Experimental results revealed a logarithmic relationship between recommendation quality and training data size in collaborative filtering. By applying this finding, a low latency recommendation workflow is achieved by reducing the training data size and creating parallel computing partitions at minimal cost to prediction quality. In this project the calculation time is successfully limited to 3 seconds (instead of 25 for the baseline) while maintaining 90% of the prediction quality.
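A back-of-the-envelope sketch of the trade-off this abstract reports: if prediction quality grows roughly logarithmically with training-set size, most of the quality can be kept with a fraction of the data. The coefficients and sizes below are assumed purely for illustration, not figures from the thesis.

```python
import math

# Assumed fit: quality(n) = a + b * ln(n); coefficients are illustrative only.
a, b = 0.2, 0.08
n_full = 10_000_000                   # assumed full training-set size

quality_full = a + b * math.log(n_full)
target = 0.9 * quality_full           # keep 90 % of the full-data quality

# Smallest training-set size that still reaches the target quality.
n_needed = math.exp((target - a) / b)
print(f"{n_needed / n_full:.1%} of the data suffices for 90% of the quality")
```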
APA, Harvard, Vancouver, ISO, and other styles
11

Sladic, Daniel. "Exploiting low-latency communication in single-chip multiprocessors." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0001/MQ40951.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Tay, Kah Keng. "Low-latency network coding for streaming video multicast." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/46523.

Full text
Abstract:
Network coding has been successfully employed to increase throughput for data transfers. However, coding inherently introduces packet inter-dependencies and adds decoding delays which increase latency. This makes it difficult to apply network coding to real-time video streaming where packets have tight arrival deadlines. This thesis presents FLOSS, a wireless protocol for streaming video multicast. At the core of FLOSS is a novel network code. This code maximizes the decoding opportunities at every receiver, and at the same time minimizes redundancy and decoding latency. Instead of sending packets plainly to a single receiver, a sender mixes in packets that are immediately beneficial to other receivers. This simple technique not only allows us to achieve the coding benefits of increased throughput, it also decreases delivery latency, unlike other network coding approaches. FLOSS performs coding over a rolling window of packets from a video flow, and determines with feedback the optimal set of packet transmissions needed to get video across in a timely and reliable manner. A second important characteristic of FLOSS is its ability to perform both inter- and intra-flow network coding at the same time. Our technique extends easily to support multiple video streams, enabling us to effectively and transparently apply network coding and opportunistic routing to video multicast in a wireless mesh. We devise VSSIM*, an improved video quality metric based on [46]. Our metric addresses a significant limitation of prior art and allows us to evaluate video with streaming errors like skipped and repeated frames. We have implemented FLOSS using Click [22]. Through experiments on a 12-node testbed, we demonstrate that our protocol outperforms both a protocol that does not use network coding and one that does so naively. We show that the improvement in video quality comes from increased throughput, decreased latency and opportunistic receptions from our scheme.
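As a minimal illustration of the inter-flow coding idea described above (mixing in packets that are immediately useful to other receivers), the classic XOR example below lets two receivers each recover a different missing packet from a single coded transmission. This is a textbook sketch, not the FLOSS code itself.

```python
# Classic inter-session XOR coding example: receiver A already has p1 and is
# missing p2; receiver B has p2 and is missing p1. Broadcasting p1 XOR p2 lets
# both recover their missing packet from a single transmission.
p1 = bytes([0x12, 0x34, 0x56, 0x78])
p2 = bytes([0xAB, 0xCD, 0xEF, 0x01])

coded = bytes(a ^ b for a, b in zip(p1, p2))

recovered_at_A = bytes(c ^ b for c, b in zip(coded, p1))  # A XORs out p1 -> p2
recovered_at_B = bytes(c ^ a for c, a in zip(coded, p2))  # B XORs out p2 -> p1
assert recovered_at_A == p2 and recovered_at_B == p1
```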
APA, Harvard, Vancouver, ISO, and other styles
13

Rajiullah, Mohammad. "Towards a Low Latency Internet: Understanding and Solutions." Doctoral thesis, Karlstads universitet, Institutionen för matematik och datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-37487.

Full text
Abstract:
Networking research and development have historically focused on increasing network throughput and path resource utilization, which particularly helped bulk applications such as file transfer and video streaming. Recent over-provisioning in the core of the Internet has facilitated the use of interactive applications like interactive web browsing, audio/video conferencing, multi-player online gaming and financial trading applications. Although bulk applications rely on transferring data as fast as the network permits, interactive applications consume rather little bandwidth, depending instead on low latency. Recently, there has been increasing interest in reducing latency in networking research, as the responsiveness of interactive applications directly influences the quality of experience. To appreciate the significance of latency-sensitive applications for today's Internet, we need to understand their traffic pattern and quantify their prevalence. In this thesis, we quantify the proportion of potentially latency-sensitive traffic and its development over time. Next, we show that the flow start-up mechanism in the Internet is a major source of latency for a growing proportion of traffic, as network links get faster. The loss recovery mechanism in the transport protocol is another major source of latency. To improve the performance of latency-sensitive applications, we propose and evaluate several modifications in TCP. We also investigate the possibility of prioritization at the transport layer to improve loss recovery. The idea is to trade reliability for timeliness. We particularly examine the applicability of PR-SCTP with a focus on event logging. In our evaluation, the performance of PR-SCTP is largely influenced by small messages. We analyze the inefficiency in detail and propose several solutions. We particularly implement and evaluate one solution that utilizes the Non-Renegable Selective Acknowledgments (NR-SACKs) mechanism, which has been proposed for standardization in the IETF. According to the results, PR-SCTP with NR-SACKs significantly improves the application performance in terms of low latency as compared to SCTP and TCP.
APA, Harvard, Vancouver, ISO, and other styles
14

Carver, Eric R. "Reducing Network Latency for Low-cost Beowulf Clusters." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1406880971.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Kharel, B. (Binod). "Ultra reliable low latency communication in MTC network." Master's thesis, University of Oulu, 2018. http://jultika.oulu.fi/Record/nbnfioulu-201809212822.

Full text
Abstract:
The Internet of Things is on its way to building the smart society, and wireless networks are critical enablers for many of its use cases. In this thesis, we present some of the vital concepts of diversity and multi-connectivity for achieving ultra-reliability and low latency in machine-type wireless communication networks. Diversity is one of the critical factors in dealing with fading channel impairments, which in turn is crucial for achieving targeted outage probabilities and approaching requirements such as the five nines of reliability defined by some standardization bodies. We evaluate an interference-limited network composed of multiple remote radio heads connected to the user equipment. Some of those links are allowed to cooperate, thus reducing interference, or to perform more elaborate strategies such as selection combining or maximal ratio combining. We derive closed-form analytical expressions for the respective outage probabilities. We provide extensive numerical analysis and discuss the gains of cooperation and multi-connectivity enabled by a centralized radio access network.
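To illustrate how the combining strategies named in this abstract reduce outage probability, the Monte Carlo sketch below compares a single link with selection combining and maximal ratio combining over an assumed Rayleigh-fading model; the link count, mean SNR and threshold are illustrative assumptions, not the thesis' analytical setting.

```python
import numpy as np

# Monte Carlo sketch of how diversity combining lowers outage probability.
# Rayleigh fading -> per-link SNR is exponentially distributed (assumed model).
rng = np.random.default_rng(7)
n_links, n_trials = 4, 1_000_000
mean_snr, snr_threshold = 1.0, 0.5

snr = rng.exponential(mean_snr, size=(n_trials, n_links))

outage_single = np.mean(snr[:, 0] < snr_threshold)      # no diversity
outage_sc = np.mean(snr.max(axis=1) < snr_threshold)    # selection combining
outage_mrc = np.mean(snr.sum(axis=1) < snr_threshold)   # maximal ratio combining

print(f"single link : {outage_single:.2e}")
print(f"SC, 4 links : {outage_sc:.2e}")
print(f"MRC, 4 links: {outage_mrc:.2e}")
```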
APA, Harvard, Vancouver, ISO, and other styles
16

Patino, Villar José María. "Efficient speaker diarization and low-latency speaker spotting." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS003/document.

Full text
Abstract:
Speaker diarization (SD) involves the detection of speakers within an audio stream and the intervals during which each speaker is active, i.e. the determination of 'who spoke when'. The first part of the work presented in this thesis exploits an approach to speaker modelling involving binary keys (BKs) as a solution to SD. BK modelling is efficient and operates without external training data, as it operates using test data alone. The presented contributions include the extraction of BKs based on multi-resolution spectral analysis, the explicit detection of speaker changes using BKs, as well as SD fusion techniques that combine the benefits of both BK and deep learning based solutions. The SD task is closely linked to that of speaker recognition or detection, which involves the comparison of two speech segments and the determination of whether or not they were uttered by the same speaker. Even if many practical applications require their combination, the two tasks are traditionally tackled independently from each other. The second part of this thesis considers an application where SD and speaker recognition solutions are brought together. The new task, coined low latency speaker spotting (LLSS), involves the rapid detection of known speakers within multi-speaker audio streams. It involves the re-thinking of online diarization and the manner by which diarization and detection sub-systems should best be combined.
APA, Harvard, Vancouver, ISO, and other styles
17

Patino, Villar José María. "Efficient speaker diarization and low-latency speaker spotting." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS003.

Full text
Abstract:
Speaker diarization (SD) involves the detection of speakers within an audio stream and the intervals during which each speaker is active, i.e. the determination of 'who spoke when'. The first part of the work presented in this thesis exploits an approach to speaker modelling involving binary keys (BKs) as a solution to SD. BK modelling is efficient and operates without external training data, as it operates using test data alone. The presented contributions include the extraction of BKs based on multi-resolution spectral analysis, the explicit detection of speaker changes using BKs, as well as SD fusion techniques that combine the benefits of both BK and deep learning based solutions. The SD task is closely linked to that of speaker recognition or detection, which involves the comparison of two speech segments and the determination of whether or not they were uttered by the same speaker. Even if many practical applications require their combination, the two tasks are traditionally tackled independently from each other. The second part of this thesis considers an application where SD and speaker recognition solutions are brought together. The new task, coined low latency speaker spotting (LLSS), involves the rapid detection of known speakers within multi-speaker audio streams. It involves the re-thinking of online diarization and the manner by which diarization and detection sub-systems should best be combined.
APA, Harvard, Vancouver, ISO, and other styles
18

Chen, Chia-Hsin Ph D. Massachusetts Institute of Technology. "Design and implementation of low-latency, low-power reconfigurable on-chip networks." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/109002.

Full text
Abstract:
In this dissertation, I tackle large, low-latency, low-power on-chip networks. I focus on two key challenges in the realization of such NoCs in practice: (1) the development of NoC design toolchains that can ease and automate the design of large-scale NoCs, paving the way for advanced ultra-low-power NoC techniques to be embedded within many-core chips, and (2) the design and implementation of chip prototypes that demonstrate ultra-low-latency, low-power NoCs, enabling a rigorous understanding of the design tradeoffs of such NoCs. I start off by presenting DSENT (joint work), a timing, area and power evaluation toolchain that supports flexibility in modeling while ensuring accuracy, through a technology-portable library of standard cells [108]. DSENT enables rigorous design space exploration for advanced technologies, and has been shown to provide fast and accurate evaluation of emerging opto-electronics. Next, low-swing signaling has been shown to substantially reduce NoC power, but has previously required custom circuit design. I propose a toolchain that automates the embedding of low-swing cells into the NoC datapath, paving the way for low-swing signaling to be part of future many-core chips [17]. Third, clockless repeated links have been shown to be embeddable within a NoC datapath, allowing packets to go from source to destination cores without being latched at intermediate routers. I propose SMARTapp, a design that leverages these clockless repeaters to configure a NoC into customized topologies tailored to each application, and present a synthesis toolchain that takes each SoC application as input and synthesizes a NoC configured for that application, generating RTL to layout [18]. The thesis next presents two chip prototypes that I designed to obtain an in-depth understanding of the practical implementation costs and tradeoffs of high-level architectural ideas. The SMART NoC chip is a 3 x 3 mm2 chip in 32 nm SOI realizing traversal of 7 hops within a cycle at 548 MHz, dissipating 1.57 to 2.53 W. It enables a rigorous understanding of the tradeoffs between router clock frequency, network latency and throughput, and is a demonstration of the proposed synthesis toolchain. The SCORPIO 36-core chip (joint work) is an 11 x 13 mm2 chip in 45 nm SOI demonstrating snoopy coherence on a scalable ordered mesh NoC, with the NoC taking just 19% of tile power and 10% of tile area [19, 28].
APA, Harvard, Vancouver, ISO, and other styles
19

Ben, Yahia Mariem. "Low latency video streaming solutions based on HTTP/2." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2019. http://www.theses.fr/2019IMTA0136/document.

Full text
Abstract:
Adaptive video streaming techniques enable the delivery of content that is encoded at various levels of quality and split into temporal segments. Before downloading a segment, the client runs an adaptation algorithm to determine the level of quality that best matches the network resources. For immersive video streaming, this adaptation mechanism should also consider the head movement of a user watching the 360° video to maximize the quality of the viewed portion. However, this adaptation may suffer from errors, which impact the end user's quality of experience. In this case, an HTTP/1 client must wait for the download of the next segment to choose a suitable quality. In this thesis, we propose to use the HTTP/2 protocol instead to address this problem. First, we focus on live video streaming. We design a strategy to discard video frames when the bandwidth is highly variable, so as to avoid rebuffering events and the accumulation of delays. The client requests each video frame in a dedicated HTTP/2 stream, which makes it possible to control the delivery of frames by leveraging HTTP/2 features at the level of that stream. Besides, we use the priority and reset-stream features of HTTP/2 to optimize the delivery of immersive videos, and propose a strategy that benefits from the improvement of the prediction of the user's head movements over time. The results show that HTTP/2 makes it possible to optimize the use of network resources and to adapt to the latencies required by each service.
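A conceptual sketch of the frame-dropping decision described above: when each frame travels on its own HTTP/2 stream, a stream can be cancelled (RST_STREAM) once the frame can no longer arrive before its playout deadline. The function, its parameters and the numbers below are illustrative assumptions, not the thesis' actual algorithm.

```python
# Decide whether a frame request should be cancelled because it would arrive
# after its playout deadline (all values are assumed, for illustration only).
def should_cancel(frame_bytes: int, est_bandwidth_bps: float,
                  now_s: float, playout_deadline_s: float) -> bool:
    est_finish = now_s + frame_bytes * 8 / est_bandwidth_bps
    return est_finish > playout_deadline_s   # too late: drop the frame, keep latency low

# 60 kB frame over an estimated 2 Mbit/s link needs 0.24 s -> misses a 0.2 s deadline.
print(should_cancel(60_000, est_bandwidth_bps=2_000_000,
                    now_s=10.0, playout_deadline_s=10.2))   # True
```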
APA, Harvard, Vancouver, ISO, and other styles
20

Löser, Jork. "Low-Latency Hard Real-Time Communication over Switched Ethernet." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2006. http://nbn-resolving.de/urn:nbn:de:swb:14-1138799484082-54477.

Full text
Abstract:
With the upsurge in the demand for high-bandwidth networked real-time applications in cost-sensitive environments, a key issue is to take advantage of developments of commodity components that offer a multiple of the throughput of classical real-time solutions. It was the starting hypothesis of this dissertation that with fine grained traffic shaping as the only means of node cooperation, it should be possible to achieve lower guaranteed delays and higher bandwidth utilization than with traditional approaches, even though Switched Ethernet does not support policing in the switches as other network architectures do. This thesis presents the application of traffic shaping to Switched Ethernet and validates the hypothesis. It shows, both theoretically and practically, how commodity Switched Ethernet technology can be used for low-latency hard real-time communication, and what operating-system support is needed for an efficient implementation.
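Traffic shaping of the kind this abstract relies on is commonly realised with a token bucket at each sending node; the following is a minimal sketch under assumed parameters, not the dissertation's actual shaper.

```python
from dataclasses import dataclass

# Minimal token-bucket shaper (illustrative parameters, assumed API).
@dataclass
class TokenBucket:
    rate_bps: float      # sustained rate the sender is allowed
    burst_bytes: float   # bucket depth, i.e. the largest permitted burst
    tokens: float = 0.0
    last_t: float = 0.0

    def allow(self, frame_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst_bytes,
                          self.tokens + (now - self.last_t) * self.rate_bps / 8)
        self.last_t = now
        if self.tokens >= frame_bytes:
            self.tokens -= frame_bytes
            return True          # frame may be sent now
        return False             # frame must wait, keeping switch queues short

shaper = TokenBucket(rate_bps=10_000_000, burst_bytes=1_500, tokens=1_500)
print(shaper.allow(1_500, now=0.0), shaper.allow(1_500, now=0.0005))  # True False
```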
APA, Harvard, Vancouver, ISO, and other styles
21

Tafleen, Sana. "Fault Tolerance Strategies for Low-Latency Live Video Streaming." Thesis, University of Louisiana at Lafayette, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13420002.

Full text
Abstract:

This thesis describes the effect of failures on various video QoS metrics such as delay, packet loss, and recovery time. An SDN network is used to guarantee reliable and efficient data transmission. Many failures can occur within the SDN mesh network or between the non-SDN and the SDN network. There is a need for both reliable and low-latency transmission of live video streams, especially in situations such as public safety or public gathering events, where everyone tries to use the limited network at the same time. That leads to oversubscription and network outages, and computing devices may fail. Existing mechanisms built into TCP/IP and video streaming protocols, and fault tolerance strategies such as buffering, are inadequate given the low latency and reliability requirements of live streaming, especially in the presence of the limited bandwidth and computational power of mobile or edge devices. The objective of this work is to develop an efficient source-side fault tolerance strategy that produces high-quality video with low latency and low data loss. To recover data lost during failures, a buffering approach is used to store chunks in a buffer and retransmit the lost frames requested by the receiver.
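A minimal sketch of the source-side buffering idea described above: recent chunks are retained so that frames reported lost by the receiver can be resent. The class, its capacity and API shape are assumptions for illustration, not the thesis' implementation.

```python
from collections import OrderedDict
from typing import Optional

# Source-side chunk buffer for retransmission (illustrative sketch).
class ChunkBuffer:
    def __init__(self, capacity: int = 64):
        self.capacity = capacity
        self.chunks: "OrderedDict[int, bytes]" = OrderedDict()

    def push(self, seq: int, chunk: bytes) -> None:
        self.chunks[seq] = chunk
        if len(self.chunks) > self.capacity:      # evict the oldest chunk
            self.chunks.popitem(last=False)

    def retransmit(self, seq: int) -> Optional[bytes]:
        return self.chunks.get(seq)               # None if it already expired

buf = ChunkBuffer(capacity=2)
buf.push(1, b"frame-1"); buf.push(2, b"frame-2"); buf.push(3, b"frame-3")
print(buf.retransmit(1), buf.retransmit(3))       # None b'frame-3'
```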

APA, Harvard, Vancouver, ISO, and other styles
22

Özenir, Onur. "Redundancy techniques for 5G Ultra Reliable Low Latency Communications." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25082/.

Full text
Abstract:
The 5G Core Network architecture is designed to include instruments that can establish networks built on the same physical infrastructure but serving different service categories, for communication types with varying characteristics. Relying on virtualization and cloud technologies, these instruments make the 5G system different from previous mobile communication systems, change the user profile, and allow new business models to be included in the system. The subject of this thesis includes the study of ultra-reliable low latency communication, one of the fundamental service categories defined for the 5G system, and the analysis of the techniques presented in 3GPP's Release 16 that enhance its service parameters by modifying the core network. In the theoretical part, the 5G system and URLLC are introduced with a particular focus on the user plane of the core network. In the implementation part, redundant transmission support on the N3 interface, one of the techniques presented in the technical specification, is modeled using open source software tools (Open5GS and UERANSIM) and network virtualization instruments. The tests and measurements performed on the model show that the implemented technique enhances the system's reliability.
APA, Harvard, Vancouver, ISO, and other styles
23

Owens, Alisdair. "Using low latency storage to improve RDF store performance." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/185969/.

Full text
Abstract:
Resource Description Framework (RDF) is a flexible, increasingly popular data model that allows for simple representation of arbitrarily structured information. This flexibility allows it to act as an effective underlying data model for the growing Semantic Web. Unfortunately, it remains a challenge to store and query RDF data in a performant manner, with existing stores struggling to meet the needs of demanding applications: particularly low latency, human-interactive systems. This is a result of fundamental properties of RDF data: RDF's small statement size tends to engender large joins with a lot of random I/O, and its limited structure impedes the generation of compact, relevant statistics for query optimisation. This thesis posits that the problem of performant RDF storage can be effectively mitigated using in-memory storage, thanks to RAM's extremely high throughput and rapid random I/O relative to disk. RAM is rapidly reducing in cost, and is finally reaching the stage where it is becoming a practical medium for the storage of substantial databases, particularly given the relatively small size at which RDF datasets become challenging for disk-backed systems. In-memory storage brings with it its own challenges. The relatively high cost of RAM necessitates a very compact representation, and the changing relationship between memory and CPU (particularly increasing RAM access latency) benefits designs that are aware of that relationship. This thesis presents an investigation into creating CPU-friendly data structures, along with a deep study of the common characteristics of popular RDF datasets. Together, these are used to inform the creation of a new data structure called the Adaptive Hierarchical RDF Index (AHRI), an in-memory, RDF-specific structure that outperforms traditional storage mechanisms in nearly every respect. AHRI is validated with a comprehensive evaluation against other commonly used in-memory data structures, along with a real world test against a memory-backed store, and a fast disk-based store allowed to cache its data in RAM. The results show that AHRI outperforms these systems with regards to both space consumption and read/write behaviour. The document subsequently describes future work that should provide substantial further improvements, making the use of RAM for RDF storage even more compelling.
APA, Harvard, Vancouver, ISO, and other styles
24

Löser, Jork. "Low-Latency Hard Real-Time Communication over Switched Ethernet." Doctoral thesis, Technische Universität Dresden, 2005. https://tud.qucosa.de/id/qucosa%3A24637.

Full text
Abstract:
With the upsurge in the demand for high-bandwidth networked real-time applications in cost-sensitive environments, a key issue is to take advantage of developments of commodity components that offer a multiple of the throughput of classical real-time solutions. It was the starting hypothesis of this dissertation that with fine grained traffic shaping as the only means of node cooperation, it should be possible to achieve lower guaranteed delays and higher bandwidth utilization than with traditional approaches, even though Switched Ethernet does not support policing in the switches as other network architectures do. This thesis presents the application of traffic shaping to Switched Ethernet and validates the hypothesis. It shows, both theoretically and practically, how commodity Switched Ethernet technology can be used for low-latency hard real-time communication, and what operating-system support is needed for an efficient implementation.
APA, Harvard, Vancouver, ISO, and other styles
25

Arun, Balaji. "A Low-latency Consensus Algorithm for Geographically Distributed Systems." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/79945.

Full text
Abstract:
This thesis presents Caesar, a novel multi-leader Generalized Consensus protocol for geographically replicated systems. Caesar is able to achieve near-perfect availability, provide high performance - low latency and high throughput compared to the existing state-of-the-art - and tolerate replica failures. Recently, a number of state-of-the-art consensus protocols that implement the Generalized Consensus definition have been proposed. However, the major limitation of these existing approaches is the significant performance degradation when the application workload produces conflicting requests. Caesar's main goal is to overcome this limitation by changing the way a fast decision is taken: its ordering protocol does not reject a fast decision for a client request if a quorum of nodes reply with different dependency sets for that request. It only switches to a slow decision if there is no chance to agree on the proposed order for that request. Caesar achieves this using a combination of a wait condition and logical timestamping. The effectiveness of Caesar is demonstrated through an evaluation study performed on Amazon's EC2 infrastructure using 5 geo-replicated sites. Caesar outperforms other multi-leader competitors (e.g., EPaxos) by as much as 1.7x in the presence of 30% conflicting requests, and single-leader ones (e.g., Multi-Paxos) by as much as 3.5x. The protocol is also resistant to heavy client loads, unlike existing protocols.
APA, Harvard, Vancouver, ISO, and other styles
26

ElAzzouni, Sherif. "Algorithm Design for Low Latency Communication in Wireless Networks." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587049831134061.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Yin, Bo. "Airtime Management for Low-Latency Densely Deployed Wireless Networks." Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263788.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Batewela, Vidanelage S. (Sadeep). "Towards reliable and low-latency vehicular edge computing networks." Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201908132759.

Full text
Abstract:
To enable autonomous driving in intelligent transportation systems, vehicular communication is one of the promising approaches to ensure safe, efficient, and comfortable travel. However, to this end, a huge amount of application data needs to be exchanged and processed, which makes satisfying the critical requirements of vehicular communication, i.e., low latency and ultra-reliability, challenging. In particular, the processing is executed locally at the vehicle user equipment (VUE). To alleviate the VUE's computation capability limitations, mobile edge computing (MEC), which pushes computational and storage resources from the network core towards the edge, has recently been incorporated into vehicular communication. To ensure low latency and high reliability, jointly allocating resources for communication and computation is a challenging problem in highly dynamic and dense environments such as urban areas. Motivated by these critical issues, we aim to minimize the higher-order statistics of the end-to-end (E2E) delay while jointly allocating the communication and computation resources in a vehicular edge computing scenario. A novel risk-sensitive distributed learning algorithm is proposed, requiring minimal knowledge and no information exchange among VUEs, where each VUE learns the best decision policy to achieve low latency and high reliability. Compared with the average-based approach, simulation results show that our proposed approach achieves a better network-wide standard deviation of E2E delay with comparable average E2E delay performance.
APA, Harvard, Vancouver, ISO, and other styles
29

Azeez, Babatunde. "Reliable low latency I/O in torus-based interconnection networks." Texas A&M University, 2005. http://hdl.handle.net/1969.1/4842.

Full text
Abstract:
In today's high performance computing environment, I/O remains the main bottleneck in achieving the optimal performance expected of the ever improving processor and memory technologies. Interconnection networks therefore combine processing units, system I/O and a high speed switch fabric into a new paradigm of I/O-based network. It decouples the system into computational and I/O interconnections, each allowing "any-to-any" communications among processors and I/O devices, unlike the shared model in bus architectures. The computational interconnection, a network of processing units (compute-nodes), is used for inter-processor communication in carrying out computation tasks, while the I/O interconnection manages the transfer of I/O requests between the compute-nodes and the I/O or storage media through dedicated I/O processing units (I/O-nodes). Considering the special functions performed by the I/O-nodes, their placement and reliability become important issues in improving the overall performance of the interconnection system. This thesis focuses on the design and topological placement of I/O-nodes in torus-based interconnection networks, with the aim of reducing I/O communication latency between compute-nodes and I/O-nodes even in the presence of faulty I/O-nodes. We propose an efficient and scalable relaxed quasi-perfect placement scheme using a Lee distance error correction code such that compute-nodes are at distance t, or at most distance t+1, from an I/O-node for a given t. This scheme provides a better and optimal alternative placement than quasi-perfect placement when a perfect placement cannot be found for a particular torus. Furthermore, in the presence of faulty I/O-nodes, the placement scheme is also used to determine alternative I/O-nodes for rerouting I/O traffic from affected compute-nodes with minimal slowdown. In order to guarantee the quality of service required of inter-processor communication, a scheduling algorithm was developed at the router level to prioritize message forwarding between inter-process and I/O messages, with the former given higher priority. Our simulation results show that relaxed quasi-perfect placement outperforms quasi-perfect placement and the conventional I/O placement (where I/O-nodes are concentrated at the base of the torus interconnection) with little degradation in inter-process communication performance. The fault-tolerant redirection scheme also incurs a minimal slowdown, especially when the number of faulty I/O-nodes is less than half of the initially available I/O-nodes.
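To make the distance-t coverage criterion mentioned in this abstract concrete, here is a small sketch that computes the Lee (wrap-around) distance on a toy 2-D torus and checks that every compute-node lies within distance t of some I/O-node. The torus size, t and the I/O-node placement are assumed values chosen for illustration, not the thesis' actual scheme.

```python
from itertools import product

# Toy check of Lee-distance coverage on a small k-ary 2-D torus.
k, t = 5, 1
io_nodes = [(0, 0), (1, 2), (2, 4), (3, 1), (4, 3)]   # assumed I/O-node placement

def lee_distance(a, b, k):
    # Lee (wrap-around) distance between two torus coordinates.
    return sum(min(abs(x - y), k - abs(x - y)) for x, y in zip(a, b))

# Worst-case distance from any compute-node to its nearest I/O-node.
worst = max(min(lee_distance(node, io, k) for io in io_nodes)
            for node in product(range(k), repeat=2))
print(f"every compute-node is within Lee distance {worst} of an I/O-node")  # 1 here
```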
APA, Harvard, Vancouver, ISO, and other styles
30

Tideström, Jakob. "Investigation into low latency live video streaming performance of WebRTC." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249446.

Full text
Abstract:
As WebRTC is intended for peer-to-peer real time communications, it contains the capability for streaming video at low latencies. This thesis leverages this ability to stream live video footage in a client-server scenario. Using a local broadcaster, server, and client setup, a static video file is streamed as live footage. The performance is compared with contemporary live streaming techniques, HTTP Live Streaming and Dynamic Adaptive Streaming over HTTP, streaming the same content. It is determined that WebRTC achieves lower latencies than both techniques.
Since WebRTC is intended for peer-to-peer real-time communication, it has the ability to stream video with low latency. This thesis leverages this ability to stream live video in a client-server scenario. With a setup comprising a local broadcaster, server, and client, a static video file is streamed as live video. The performance is compared with how the contemporary live streaming techniques HTTP Live Streaming and Dynamic Adaptive Streaming over HTTP stream the same content. The conclusion is that WebRTC achieves lower latency than both of the other techniques, but that without a fair amount of fine-tuning the quality of the stream degrades.
APA, Harvard, Vancouver, ISO, and other styles
31

McCaffery, Duncan James. "Supporting Low Latency Interactive Distributed Collaborative Applications in Mobile Environments." Thesis, Lancaster University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.524740.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Martin, Steven. "Scalable Data Transformations for Low-Latency Large-Scale Data Analysis." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1366108187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Faxén, Linnea. "A Study on Segmentation for Ultra-Reliable Low-Latency Communications." Thesis, Linköpings universitet, Kommunikationssystem, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138568.

Full text
Abstract:
To enable wireless control of factories, such that sensor measurements can be sent wirelessly to an actuator, the probability of receiving data correctly must be very high and the time it takes to deliver the data from the sensor to the actuator must be very low. Previously, these requirements have only been met by cables, but in the fifth-generation mobile network this is one of the envisioned use cases, and work is ongoing to create a system capable of wireless control of factories. One of the problems in this scenario arises when all data in a packet cannot be sent in one transmission while still ensuring the very high probability of reception. This thesis studies the problem in detail by proposing methods to cope with it and evaluating these methods in a simulator. The thesis shows that splitting the data into multiple segments and transmitting each at an even higher probability of reception is a good candidate, especially when there is time for a retransmission. When there is only time for one transmission, a better candidate is to send the same packet twice: even if the first packet cannot achieve the very high probability of reception on its own, the combination of the first and second packets might.
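The trade-off discussed above can be illustrated with a rough, hypothetical calculation (not the thesis's evaluation): compare the success probability of splitting a packet into k segments, each delivered with error probability p_seg, against two blind repetitions of the whole packet, each with error probability p_rep, assuming independent errors and ignoring any soft-combining gain. The numbers below are invented for the example.

```python
def segmented_success(k, p_seg):
    """All k segments must arrive; each is assumed to fail independently with probability p_seg."""
    return (1.0 - p_seg) ** k

def repeated_success(p_rep):
    """Two independent copies of the same packet; success if at least one copy arrives."""
    return 1.0 - p_rep ** 2

# Hypothetical numbers: 2 segments at a 1e-6 block error rate each,
# versus one packet repeated twice at a 1e-3 block error rate per copy.
print(f"segmentation : {segmented_success(2, 1e-6):.9f}")
print(f"repetition   : {repeated_success(1e-3):.9f}")
```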
APA, Harvard, Vancouver, ISO, and other styles
34

Cho, Daewoong. "Network Function Virtualization (NFV) Resource Management For Low Network Latency." Thesis, The University of Sydney, 2017. http://hdl.handle.net/2123/17256.

Full text
Abstract:
NFV is an emerging network architecture to increase flexibility and agility within operators' networks by placing virtualized services on demand in Cloud data centers (CDCs). One of the main challenges for the NFV environment is how to efficiently allocate Virtual Network Functions (VNFs) to Virtual Machines (VMs) and how to minimize network latency in rapidly changing network environments. Although a significant amount of research has already been conducted on the generic VNF placement problem and on VM migration for efficient resource management in CDCs, network latency among the various network components and the VNF migration problem have not, to the best of our knowledge, been comprehensively considered. Firstly, to address the VNF placement problem, we design a more comprehensive model, based on real measurements, that captures network latency among VNFs at finer granularity in order to optimize the placement of VNFs in CDCs. We consider the resource demand of VNFs, the resource capacity of VMs, and the network latency among various network components. Our objectives are to minimize both network latency and lead time (the time to find a VM to host a VNF). Experimental results are promising and indicate that our approach, namely VNF Low-Latency Placement (VNF-LLP), can reduce network latency by up to 64.24% compared with two generic algorithms. Furthermore, it has a lower lead time compared with the VNF Best-Fit Placement algorithm. Secondly, to address the VNF migration problem, we i) formulate the VNF migration problem and ii) develop a novel VNF migration algorithm called VNF Real-time Migration (VNF-RM) for lower network latency under dynamically changing resource availability. Experiments demonstrate the effectiveness of our algorithm, reducing network latency by up to 59.45% after latency-aware VNF migrations.
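The flavour of a latency-aware placement heuristic can be sketched as follows; this is an illustrative greedy baseline under assumed inputs, not the VNF-LLP algorithm from the thesis. Each VNF in a chain is assigned to the feasible VM (enough spare capacity) with the lowest measured latency from the previously placed VNF; the function and data-structure names are hypothetical.

```python
# Illustrative greedy, latency-aware placement of a VNF chain onto VMs (toy model).
def place_chain(chain_demands, vm_capacity, latency, entry_point):
    """chain_demands: CPU demand per VNF, in chain order.
    vm_capacity: dict vm -> spare CPU capacity.
    latency: dict (node_a, node_b) -> measured latency in ms.
    entry_point: node where traffic enters the chain."""
    placement, prev = [], entry_point
    for demand in chain_demands:
        candidates = [vm for vm, cap in vm_capacity.items() if cap >= demand]
        if not candidates:
            raise RuntimeError("no VM can host this VNF")
        best = min(candidates, key=lambda vm: latency[(prev, vm)])  # closest feasible VM
        vm_capacity[best] -= demand
        placement.append(best)
        prev = best
    return placement

vms = {"vm1": 4, "vm2": 8, "vm3": 8}
lat = {("gw", "vm1"): 0.2, ("gw", "vm2"): 0.9, ("gw", "vm3"): 1.5,
       ("vm1", "vm1"): 0.0, ("vm1", "vm2"): 0.4, ("vm1", "vm3"): 1.1,
       ("vm2", "vm1"): 0.4, ("vm2", "vm2"): 0.0, ("vm2", "vm3"): 0.3,
       ("vm3", "vm1"): 1.1, ("vm3", "vm2"): 0.3, ("vm3", "vm3"): 0.0}
print(place_chain([2, 4, 4], vms, lat, "gw"))  # -> ['vm1', 'vm2', 'vm2']
```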
APA, Harvard, Vancouver, ISO, and other styles
35

Bhat, Amit. "Low-latency Estimates for Window-Aggregate Queries over Data Streams." PDXScholar, 2011. https://pdxscholar.library.pdx.edu/open_access_etds/161.

Full text
Abstract:
Obtaining low-latency results from window-aggregate queries can be critical to certain data-stream processing applications. Due to a DSMS's lack of control over incoming data (typically, because of delays and bursts in data arrival), timely results for a window-aggregate query over a data stream cannot be obtained with guarantees about the results' accuracy. In this thesis, I propose a technique, which I term prodding, to obtain early result estimates for window-aggregate queries over data streams. The early estimates are obtained in addition to the regular query results. The proposed technique aims to maximize the contribution to a result-estimate computation from all the stateful operators across a multi-level query plan. I evaluate the benefits of prodding using real-world and generated data streams having different patterns in data arrival and data values. I conclude that, in various DSMS applications, prodding can generate low-latency estimates to window-aggregate query results. The main factors affecting the degree of inaccuracy in such estimates are: the aggregate function used in a query, the patterns in arrivals and values of stream data, and the aggressiveness of demanding the estimates. The utility of the estimates obtained using prodding should be optimized by tuning the aggressiveness in result-estimate demands to the specific latency and accuracy needs of a business, considering any available knowledge about patterns in the incoming data.
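A toy illustration of the general idea of early estimates (hedged: this is not the thesis's prodding mechanism, which gathers contributions from stateful operators across a multi-level query plan) is to scale the partial aggregate of a window that is only partly filled; for a MAX the estimate would simply be the running maximum, while for SUM or COUNT a naive scaling can be used, as below. The values and function name are invented for the example.

```python
def estimate_window_sum(partial_sum, seen, window_size):
    """Naive early estimate for a SUM over a tumbling window:
    scale the partial sum up to the full window size, assuming the
    tuples not yet seen resemble the ones already seen."""
    if seen == 0:
        return 0.0
    return partial_sum * (window_size / seen)

# Hypothetical stream: the window should hold 10 tuples, but only 6 have arrived so far.
values_so_far = [3, 5, 2, 4, 6, 4]
print(estimate_window_sum(sum(values_so_far), len(values_so_far), 10))  # -> 40.0
```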
APA, Harvard, Vancouver, ISO, and other styles
36

Masoumiyan, Farzaneh. "Low-latency communications for wide area control of energy systems." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/135660/1/Farzaneh_Masoumiyan_Thesis.pdf.

Full text
Abstract:
This project provides reliable and low-latency communications for wide-area control in the smart grid. For this purpose, a priority-differentiation approach is presented, combined with an application-layer acknowledgment mechanism for reliable transmission of high-priority, time-critical data.
APA, Harvard, Vancouver, ISO, and other styles
37

Boukhalfa, Mohamed Fouzi. "Low latency radio and visible light communications for autonomous driving." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS164.

Full text
Abstract:
The subject of this thesis is vehicular wireless networks and, more specifically, the use of radio transmission and Visible Light Communication (VLC) to improve vehicle safety. The thesis is motivated by the reliability and scalability issues of the IEEE 802.11p standard. The idea is to move towards new techniques, notably in the future IEEE 802.11bd standard, and to combine radio transmission with VLC to enable hybrid communication. The first part of the thesis concerns the development of low-latency radio access techniques for vehicular networks. The idea of the solution is to combine classical TDMA techniques with advanced contention-based mechanisms using active signaling. This solution has been specified, evaluated, and compared to other solutions from the literature. This part also introduces a special access scheme for high-priority emergency packets, while ensuring reliable and low-latency access. The second part of the thesis concerns visible light communication for platoon control. For this, we propose and develop an algorithm that selects between the radio communication proposed in the first part and visible light communication, based on the radio channel conditions and platoon alignment.
APA, Harvard, Vancouver, ISO, and other styles
38

Gong, Yixi. "La quête de latence faible sur les deux bords du réseau : conception, analyse, simulation et expériences." Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0018/document.

Full text
Abstract:
In recent years, Internet services have been growing and innovating at a considerable pace, which raises many new challenges in a variety of scenarios. The overall service performance depends in turn on the performance of multiple network segments. We investigate two representative design challenges in different segments: the two most important sit at the opposite edges of the end-to-end Internet path, namely the end-user access network and the service provider's data center network.
APA, Harvard, Vancouver, ISO, and other styles
39

Pugh, Keith. "A home network infrastructure based on a simple low cost, low latency, scaleable, serial interconnect." Thesis, Keele University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Leber, Christian [Verfasser], and Ulrich [Akademischer Betreuer] Brüning. "Efficient hardware for low latency applications / Christian Leber. Betreuer: Ulrich Brüning." Mannheim : Universitätsbibliothek Mannheim, 2012. http://d-nb.info/1034315552/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Gessert, Felix [Verfasser], and Norbert [Akademischer Betreuer] Ritter. "Low Latency for Cloud Data Management / Felix Gessert ; Betreuer: Norbert Ritter." Hamburg : Staats- und Universitätsbibliothek Hamburg, 2019. http://d-nb.info/1181947030/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Tan, Jerome Nicholas [Verfasser], and M. [Akademischer Betreuer] Weber. "Low-latency big data visualisation / Nicholas Tan Jerome ; Betreuer: M. Weber." Karlsruhe : KIT Scientific Publishing, 2019. http://d-nb.info/1199538108/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lopez, Brett Thomas. "Low-latency trajectory planning for high-speed navigation in unknown environments." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107052.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 105-109).
The ability for quadrotors to navigate autonomously through unknown, cluttered environments at high speed is still an open problem in the robotics community. Advancements in lightweight, small-form-factor computing have allowed the application of state-of-the-art perception and planning algorithms to the high-speed navigation problem. However, many of the existing algorithms are computationally intensive and rely on building a dense map of the environment. Computational complexity and map building are the main sources of latency in autonomous systems and ultimately limit the top speed of the vehicle. This thesis presents an integrated perception, planning, and control system that addresses the aforementioned limitations by using instantaneous perception data instead of building a map. From the instantaneous data, a clustering algorithm identifies and ranks regions of space the vehicle can potentially traverse. A minimum-time, state- and input-constrained trajectory is generated for each cluster until a collision-free trajectory is found (if one exists). Relaxing position constraints reduces the planning problem to finding the switching times for the minimum-time optimal solution, something that can be done in microseconds. Our approach generates collision-free trajectories within a millisecond of receiving perception data. This is two orders of magnitude faster than current state-of-the-art systems. We demonstrate our approach in environments with varying degrees of clutter and at different speeds.
by Brett Thomas Lopez.
S.M.
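The claim that relaxing position constraints reduces planning to finding switching times can be illustrated with the textbook one-dimensional double-integrator case, a deliberate simplification of the thesis's quadrotor model offered only as a hedged sketch: a rest-to-rest, minimum-time maneuver over distance d with acceleration bounded by a_max applies +a_max until the switching time and -a_max afterwards, so the switching time is sqrt(d / a_max) and the total time is twice that.

```python
import math

def min_time_rest_to_rest(d, a_max):
    """Minimum-time bang-bang profile for a 1-D double integrator, rest to rest.
    Accelerate at +a_max until the switching time, then decelerate at -a_max.
    Returns (switch_time, total_time) in seconds."""
    t_switch = math.sqrt(d / a_max)
    return t_switch, 2.0 * t_switch

# Hypothetical numbers: travel 8 m with a 4 m/s^2 acceleration limit.
t_s, t_f = min_time_rest_to_rest(8.0, 4.0)
print(f"switch at {t_s:.3f} s, arrive at {t_f:.3f} s")  # switch at 1.414 s, arrive at 2.828 s
```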
APA, Harvard, Vancouver, ISO, and other styles
44

Cziva, Richard. "Towards lightweight, low-latency network function virtualisation at the network edge." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/30758/.

Full text
Abstract:
Communication networks are witnessing a dramatic growth in the number of connected mobile devices, sensors and the Internet of Everything (IoE) equipment, which have been estimated to exceed 50 billion by 2020, generating zettabytes of traffic each year. In addition, networks are stressed to serve the increased capabilities of the mobile devices (e.g., HD cameras) and to fulfil the users' desire for always-on, multimedia-oriented, and low-latency connectivity. To cope with these challenges, service providers are exploiting softwarised, cost-effective, and flexible service provisioning, known as Network Function Virtualisation (NFV). At the same time, future networks are aiming to push services to the edge of the network, into close physical proximity to the users, which has the potential to reduce end-to-end latency, while increasing the flexibility and agility of allocating resources. However, the heavy footprint of today's NFV platforms and their lack of dynamic, latency-optimal orchestration prevent them from being used at the edge of the network. In this thesis, the opportunities of bringing NFV to the network edge are identified. As a concrete solution, the thesis presents Glasgow Network Functions (GNF), a container-based NFV framework that allocates and dynamically orchestrates lightweight virtual network functions (vNFs) at the edge of the network, providing low-latency network services (e.g., security functions or content caches) to users. The thesis presents a powerful formalisation for the latency-optimal placement of edge vNFs and provides an exact solution using Integer Linear Programming, along with a placement scheduler that relies on Optimal Stopping Theory to efficiently re-calculate the placement following roaming users and temporal changes in latency characteristics. The results of this work demonstrate that GNF's real-world vNF examples can be created and hosted on a variety of hosting devices, including VMs from public clouds and low-cost edge devices typically found at the customer's premises. The results also show that GNF can carefully manage the placement of vNFs to provide low-latency guarantees, while minimising the number of vNF migrations required by the operators to keep the placement latency-optimal.
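To give a feel for what an exact, latency-optimal placement formulation looks like, here is a deliberately tiny Integer Linear Programming sketch using the PuLP library; it is a hypothetical toy model in the spirit described above, not GNF's actual formulation, and the users, hosts, capacities and latencies are all invented.

```python
# Toy ILP: binary variable x[u][h] = 1 if user u's vNF is placed on host h.
import pulp

users = ["u1", "u2"]
hosts = ["edge1", "edge2", "cloud"]
capacity = {"edge1": 1, "edge2": 1, "cloud": 10}          # vNF slots per host (assumed)
latency = {("u1", "edge1"): 2, ("u1", "edge2"): 6, ("u1", "cloud"): 40,
           ("u2", "edge1"): 7, ("u2", "edge2"): 3, ("u2", "cloud"): 40}  # ms (assumed)

prob = pulp.LpProblem("latency_optimal_vnf_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (users, hosts), cat="Binary")

# Objective: total user-to-vNF latency.
prob += pulp.lpSum(latency[(u, h)] * x[u][h] for u in users for h in hosts)
# Each user's vNF is placed exactly once; each host respects its slot capacity.
for u in users:
    prob += pulp.lpSum(x[u][h] for h in hosts) == 1
for h in hosts:
    prob += pulp.lpSum(x[u][h] for u in users) <= capacity[h]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({u: next(h for h in hosts if x[u][h].value() == 1) for u in users})
# -> {'u1': 'edge1', 'u2': 'edge2'}: both users get their nearest, non-saturated edge host.
```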
APA, Harvard, Vancouver, ISO, and other styles
45

Gonzalez, Damian Mark. "Performance modeling and evaluation of topologies for low-latency SCI systems." [Florida] : State University System of Florida, 2000. http://etd.fcla.edu/etd/uf/2000/ane5950/thesis%5F001115.pdf.

Full text
Abstract:
Thesis (M.S.)--University of Florida, 2000.
Title from first page of PDF file. Document formatted into pages; contains ix, 81 p.; also contains graphics. Abstract copied from student-submitted information. Vita. Includes bibliographical references (p. 79-80).
APA, Harvard, Vancouver, ISO, and other styles
46

Le, Trung Kien. "Physical layer design for ultra-reliable low-latency communications in 5G." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS198.

Full text
Abstract:
The advent of new use cases and applications such as augmented/virtual reality, industrial automation and autonomous vehicles in 5G has led the Third Generation Partnership Project (3GPP) to specify Ultra-reliable low-latency communications (URLLC) as one of the three service categories. To support URLLC with its strict reliability and latency requirements, 3GPP Release 15 and Release 16 have specified URLLC features in licensed spectrum. The ongoing Release 17 extends the URLLC features to unlicensed spectrum to target new use cases in industrial scenarios. In the first part of the thesis, from Chapter 2 to Chapter 4, we focus on URLLC in licensed spectrum. The first study deals with the problem of ensuring the configured number of repetitions of a transport block on uplink (UL) configured-grant (CG) resources. Secondly, we study the collision between an eMBB UL transmission of one user equipment (UE) and an URLLC UL transmission of another UE on the CG resources. Thirdly, we study the downlink (DL) transmission in which the feedback for a DL semi-persistent scheduling transmission is dropped due to a conflict between DL and UL symbols. In the second part, from Chapter 5 to Chapter 8, we focus on URLLC operation in unlicensed spectrum, where a 5G device is required to access a channel using either load based equipment (LBE) or frame based equipment (FBE). The uncertainty of obtaining channel access through LBE or FBE can impede meeting the URLLC latency requirements. Therefore, a study of the impact of LBE and FBE on URLLC transmission, as well as enhancements to LBE and FBE, is needed.
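As a rough, hypothetical illustration of why frame-based channel access adds latency (a simplified model, not taken from the thesis): with FBE, a device may start transmitting only at fixed-frame-period boundaries after a successful clear-channel assessment, so a packet that arrives just after a boundary waits for almost one full period, and each failed assessment costs roughly another period. The numbers below are invented.

```python
def fbe_worst_case_access_delay(ffp_ms, cca_failures):
    """Simplified worst case: the packet arrives just after a fixed frame period (FFP) boundary,
    so it waits almost one full FFP, plus one extra FFP per failed clear-channel assessment."""
    return ffp_ms * (1 + cca_failures)

# Hypothetical: 1 ms fixed frame period, two failed clear-channel assessments.
print(fbe_worst_case_access_delay(ffp_ms=1.0, cca_failures=2))  # -> 3.0 ms
```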
APA, Harvard, Vancouver, ISO, and other styles
47

Farshin, Alireza. "Realizing Low-Latency Internet Services via Low-Level Optimization of NFV Service Chains : Every nanosecond counts!" Licentiate thesis, KTH, Network Systems Laboratory (NS Lab), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249664.

Full text
Abstract:
By virtue of recent technological developments in cloud computing, more applications are deployed in the cloud. Among these modern cloud-based applications, some require bounded and predictable low-latency responses. However, the current cloud infrastructure is unsuitable as it cannot satisfy these requirements, due to many limitations in both hardware and software. This licentiate thesis describes attempts to reduce the latency of Internet services by carefully studying the currently available infrastructure, optimizing it, and improving its performance. The focus is to optimize the performance of network functions deployed on commodity hardware, known as network function virtualization (NFV). The performance of NFV is one of the major sources of latency for Internet services. The first contribution is related to optimizing the software. This project began by investigating the possibility of superoptimizing virtualized network functions (VNFs). It started with a literature review of available superoptimization techniques, after which one of the state-of-the-art superoptimization tools was selected to analyze the crucial metrics affecting application performance. The result of our analysis demonstrated that having better cache metrics could potentially improve the performance of all applications. The second contribution of this thesis employs the results of the first part by taking a step toward optimizing the cache performance of time-critical NFV service chains. By doing so, we reduced the tail latencies of such systems running at 100 Gbps. This is an important achievement as it increases the probability of realizing bounded and predictable latency for Internet services.
APA, Harvard, Vancouver, ISO, and other styles
48

Bono, John, and Preston Hauck. "IMPROVING REAL-TIME LATENCY PERFORMANCE ON COTS ARCHITECTURES." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/606746.

Full text
Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Telemetry systems designed to support the current needs of mission-critical applications often have stringent real-time requirements. These systems must guarantee a maximum worst-case processing and response time when incoming data is received. These real-time tolerances continue to tighten as data rates increase. At the same time, end-user requirements for COTS pricing efficiencies have forced many telemetry systems to now run on desktop operating systems like Windows or Unix. While these desktop operating systems offer advanced user interface capabilities, they cannot meet the real-time requirements of many mission-critical telemetry applications. Furthermore, attempts to enhance desktop operating systems to support real-time constraints have met with only limited success. This paper presents a telemetry system architecture that offers real-time guarantees while at the same time extensively leveraging inexpensive COTS hardware and software components. This is accomplished by partitioning the telemetry system onto two processors. The first processor is a NetAcquire subsystem running a real-time operating system (RTOS). The second processor runs a desktop operating system that hosts the user interface. The two processors are connected by a high-speed Ethernet IP internetwork. This architecture affords an improvement of two orders of magnitude over the real-time performance of a standalone desktop operating system.
APA, Harvard, Vancouver, ISO, and other styles
49

Tayarani, Najaran Mahdi. "Transport-level transactions : simple consistency for complex scalable low-latency cloud applications." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54520.

Full text
Abstract:
The classical move from single-server applications to scalable cloud services is to split the application state along certain dimensions into smaller partitions, each small enough in size and load to be absorbed by a separate server. Maintaining data consistency in the face of operations that cross partition boundaries imposes unwanted complexity on the application. While for most applications many ideal partitioning schemes readily exist, First-Person Shooter (FPS) games and Relational Database Management Systems (RDBMS) are instances of applications whose state cannot be trivially partitioned. For any partitioning scheme there exists an FPS/RDBMS workload that results in frequent cross-partition operations. In this thesis we propose that it is possible and effective to provide unpartitionable applications with a generic communication infrastructure that enforces strong consistency of the application's data to simplify cross-partition communications. Using this framework the application can use a sub-optimal partitioning mechanism without having to worry about crossing boundaries. We apply our thesis to take a head-on approach to scaling our target applications. We build three scalable systems with competitive performance: a key/value datastore, a system that scales fast-paced FPS games to epic-sized battles of hundreds of players, and a scalable, full-SQL-compliant database used to store tens of millions of items.
Science, Faculty of
Computer Science, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
50

"Certainty, Severity, and Low Latency Deception." Master's thesis, 2019. http://hdl.handle.net/2286/R.I.53939.

Full text
Abstract:
There has been an ongoing debate between the relative deterrent power of certainty and severity on deceptive and criminal activity, certainty being the likelihood of capture and severity being the magnitude of the potential punishment. This paper is a review of the current body of research regarding risk assessment and deception in games, specifically regarding certainty and severity. The topics of game theoretical foundations, balance, and design were covered, as were heuristics and individual differences in deceptive behavior. Using this background knowledge, this study implemented a methodology through which the risk assessments of certainty and severity can be compared behaviorally in a repeated conflict context. It was found that certainty had a significant effect on a person’s likelihood to lie, while severity did not. Exploratory data was collected using the dark triad personality quiz, though it did not ultimately show a pattern.
Dissertation/Thesis
Masters Thesis Human Systems Engineering 2019
APA, Harvard, Vancouver, ISO, and other styles