Dissertations / Theses on the topic 'Latency'

To see the other types of publications on this topic, follow the link: Latency.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Latency.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever one is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Marxer Piñón, Ricard. "Audio source separation for music in low-latency and high-latency scenarios." Doctoral thesis, Universitat Pompeu Fabra, 2013. http://hdl.handle.net/10803/123808.

Full text
Abstract:
This thesis proposes specific methods to address the limitations of current music source separation methods in low-latency and high-latency scenarios. First, we focus on methods with low computational cost and low latency. We propose the use of Tikhonov regularization as a method for spectrum decomposition in the low-latency context. We compare it to existing techniques in pitch estimation and tracking tasks, crucial steps in many separation methods. We then use the proposed spectrum decomposition method in low-latency separation tasks targeting singing voice, bass and drums. Second, we propose several high-latency methods that improve the separation of singing voice by modeling components that are often not accounted for, such as breathiness and consonants. Finally, we explore using temporal correlations and human annotations to enhance the separation of drums and complex polyphonic music signals.
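The low-latency decomposition step the abstract describes can be illustrated as a ridge regression of a magnitude spectrum onto a fixed basis. This is a minimal editorial sketch, not the thesis's actual model: the basis `W`, the regularization weight `lam`, and the toy data are all illustrative. The appeal for low latency is that, unlike iterative NMF-style updates, Tikhonov regularization has a closed-form solution requiring a single linear solve per frame.

```python
import numpy as np

def tikhonov_decompose(W, x, lam=0.1):
    """Decompose spectrum x onto basis W with Tikhonov (ridge) regularization.

    Solves min_h ||W h - x||^2 + lam ||h||^2 in closed form:
    h = (W^T W + lam I)^{-1} W^T x.
    One linear solve per frame, hence suitable for low-latency use.
    """
    k = W.shape[1]
    return np.linalg.solve(W.T @ W + lam * np.eye(k), W.T @ x)

# Toy example: two spectral templates, spectrum generated by known activations
W = np.array([[1.0, 0.0],
              [0.5, 0.2],
              [0.0, 1.0],
              [0.1, 0.5]])
x = W @ np.array([2.0, 1.0])
h = tikhonov_decompose(W, x, lam=1e-6)   # recovers activations close to [2, 1]
```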
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Yonghao. "Low latency audio processing." Thesis, Queen Mary, University of London, 2018. http://qmro.qmul.ac.uk/xmlui/handle/123456789/44697.

Full text
Abstract:
Latency in the live audio processing chain has become a concern for audio engineers and system designers because significant delays can be perceived and may affect synchronisation of signals, limit interactivity, degrade sound quality and cause acoustic feedback. In recent years, latency problems have become more severe as audio processing has become digitised: high-resolution ADCs and DACs are used, complex processing is performed, and data communication networks carry audio signals alongside other traffic types. In many live audio applications, latency thresholds are bounded by human perception. Applications such as music ensembles and live monitoring require low and predictable latency. Current digital audio systems either struggle to achieve this or must trade latency off against other important audio processing functionality. This thesis investigated the fundamental causes of latency in a modern digital audio processing system, namely group delay, buffering delay and physical propagation delay, together with their associated system components. By studying the time-critical path of a general audio system, we focus on three main functional blocks that have a significant impact on overall latency: the high-resolution digital filters in sigma-delta based ADCs/DACs, the operating system that processes low-latency audio streams, and the audio networking that transmits audio with flexibility and convergence. In this work, we developed new theory and methods to reduce latency and to accurately predict the latency contributed by group delay. We proposed new operating system scheduling algorithms suitable for low-latency audio processing. We designed a new system architecture and new protocols to produce deterministic networking components that contribute to the overall timing assurance and predictability of live audio processing. The results are validated by simulations and experimental tests.
Also, this bottom-up approach is aligned with the methodology that could solve the timing problem of general cyber-physical systems that require the integration of communication, software and human interactions.
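Two of the latency sources this abstract enumerates are easy to quantify as a back-of-the-envelope check. Buffering delay is buffer length divided by sample rate, and a linear-phase FIR filter (such as the decimation filters in a sigma-delta ADC) delays all frequencies by (N-1)/2 samples. The specific figures below (64-frame buffer, 127-tap filter, 48 kHz) are illustrative values, not taken from the thesis:

```python
def buffering_delay_ms(buffer_frames: int, sample_rate_hz: int) -> float:
    """One buffer's worth of delay: frames / sample rate, in milliseconds."""
    return 1000.0 * buffer_frames / sample_rate_hz

def fir_group_delay_ms(num_taps: int, sample_rate_hz: int) -> float:
    """A linear-phase FIR delays all frequencies by (N - 1) / 2 samples."""
    return 1000.0 * (num_taps - 1) / 2 / sample_rate_hz

# Illustrative figures at 48 kHz:
buf = buffering_delay_ms(64, 48000)    # one 64-frame buffer: ~1.33 ms
grp = fir_group_delay_ms(127, 48000)   # 127-tap linear-phase FIR: ~1.31 ms
```

Even this crude sum shows why a chain of several buffers and filters quickly reaches perceptible delay.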
APA, Harvard, Vancouver, ISO, and other styles
3

Riddoch, David James. "Low latency distributed computing." Thesis, University of Cambridge, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.619850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hardwick, David R. "Factors Associated with Saccade Latency." Griffith University, School of Psychology, 2008. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20100705.111516.

Full text
Abstract:
Part of the aim of this thesis was to explore a model for producing very fast saccade latencies in the 80 to 120ms range. Its primary motivation was to explore a possible interaction by uniquely combining three independent saccade factors: the gap effect, target-feature-discrimination, and saccadic inhibition of return (IOR). Its secondary motivation was to replicate (in a more conservative and tightly controlled design) the surprising findings of Trottier and Pratt (2005), who found that requiring a high resolution task at the saccade target location speeded saccades, apparently by disinhibition. Trottier and Pratt’s finding was so surprising it raised the question: Could the oculomotor braking effect of saccadic IOR to previously viewed locations be reduced or removed by requiring a high resolution task at the target location? Twenty naïve untrained undergraduate students participated in exchange for course credit. Multiple randomised temporal and spatial target parameters were introduced in order to increase probability of exogenous responses. The primary measured variable was saccade latency in milliseconds, with the expectation of higher probability of very fast saccades (i.e. 80-120ms). Previous research suggested that these very fast saccades could be elicited in special testing circumstances with naïve participants, such as during the gap task, or in highly trained observers in non-gap tasks (Fischer & Weber, 1993). Trottier and Pratt (2005) found that adding a task demand that required naïve untrained participants to obtain a feature of the target stimulus (and to then make a discriminatory decision) also produced a higher probability of very fast saccade latencies. They stated that these saccades were not the same as saccade latencies previously referred to as express saccades produced in the gap paradigm, and proposed that such very fast saccades were normal. 
Carpenter (2001) found that in trained participants the probability of finding very fast saccades during the gap task increased when the horizontal direction of the current saccade continued in the same direction as the previous saccade (as opposed to reversing direction), giving a distinct bimodality in the distribution of latencies in five out of seven participants, and likened his findings to the well-known IOR effect. The IOR effect has previously been found in both manual key-press RT and saccadic latency paradigms. Hunt and Kingstone (2003) stated that there were both cortical top-down and oculomotor hard-wired aspects to IOR. An experiment was designed that included obtain-target-feature and oculomotor-prior-direction, crossed with two gap offsets (0ms and 200ms). Target-feature discrimination accuracy was high (97%). Under-additive main effects were found for each factor, with a three-way interaction effect for gap by obtain-feature by oculomotor-prior-direction. Another new three-way interaction was also found for anticipatory saccade type. Anticipatory saccades became significantly more likely under obtain-target-feature for the continuing oculomotor direction. This appears to be a similar effect to the increased anticipatory direction-error rate in the antisaccade task. These findings add to the saccadic latency knowledge base and show, in agreement with both Carpenter and Trottier and Pratt, that laboratory testing paradigms can affect saccadic latency distributions. That is, salient (meaningful) targets that follow more natural oculomotor trajectories produce a higher probability of very fast latencies in the 80-120ms range. In agreement with Hunt and Kingstone, there appears to be an oculomotor component to IOR.
Specifically, saccadic target-prior-location interacts differently with obtain-target-feature under the 200ms gap than under the 0ms gap, and this is most likely due to a predictive disinhibitory oculomotor momentum effect rather than to the attentional inhibitory effect proposed for key-press IOR. A new interpretation of the paradigm previously referred to as IOR is offered that includes a link to the smooth pursuit system. Additional studies are planned to explore saccadic interactions in more detail.
APA, Harvard, Vancouver, ISO, and other styles
5

Zieliński, Piotr. "Minimizing latency of agreement protocols." Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.613987.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tran, Tony V. H. "IPv6 geolocation using latency constraints." Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/41452.

Full text
Abstract:
Approved for public release; distribution is unlimited.
IPv4 addresses are now exhausted, and as a result, the growth of IPv6 addresses has increased significantly since 2010. The rate of increase of IPv6 usage is expected to continue; thus the need to determine the geographic location of IPv6 hosts will grow to support location-aware applications. Examples of services that require or benefit from IPv6 geolocation include overlay networks, location-based security mechanisms, client language and policy determination, and location targeted advertising. Internet protocol (IP) geolocation is the process of obtaining the geographical location of a device or host using only the host’s IP address. This study looked at using constraint-based geolocation (CBG), a latency-based measurement technique, on IPv6 infrastructure and analyzed location accuracy against ground truth. Results show that overall IPv6 CBG had up to 30% larger average error distance estimates as compared to IPv4 CBG. However, CBG performance varied depending on the location of the target host. Hosts located in the Asia-Pacific region performed the worst, while hosts located in Europe had the best performance in median error distance. AS-level path differences between IPv4 and IPv6 and the number of landmarks had the most significant impact on CBG performance.
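The core constraint step of CBG that this study applies to IPv6 can be sketched as follows. Each landmark's measured RTT is converted into an upper bound on the great-circle distance to the target (one-way time is at most RTT/2, and signals travel at most a fraction of the speed of light in fiber), and the target must lie in the intersection of the resulting disks. This is an illustrative sketch: real CBG calibrates a per-landmark "bestline" rather than using the fixed 2/3 c factor assumed here.

```python
SPEED_OF_LIGHT_KM_PER_MS = 299.792458
FIBER_FACTOR = 2.0 / 3.0   # common approximation; real CBG calibrates per landmark

def max_distance_km(rtt_ms: float) -> float:
    """Upper bound on landmark-to-target distance implied by a measured RTT."""
    return (rtt_ms / 2.0) * SPEED_OF_LIGHT_KM_PER_MS * FIBER_FACTOR

def feasible(distances_km, rtts_ms):
    """A candidate location is feasible only if every landmark's bound holds."""
    return all(d <= max_distance_km(r) for d, r in zip(distances_km, rtts_ms))

bound = max_distance_km(20.0)   # a 20 ms RTT puts the target within ~2000 km
```

The reported 30% larger error for IPv6 corresponds to these disks overlapping in a larger, less constrained region.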
APA, Harvard, Vancouver, ISO, and other styles
7

Lua, Eng Keong. "The structure of Internet latency." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.613027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yoo, Sirah. "Ineffable: Latency in Symbolic Languages." VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/4814.

Full text
Abstract:
The design process demands comprehensive knowledge of visual signs and symbols with a focus on visual literacy; it is related to visual syntax, semantics, and the pragmatics of contexts. My work is an interdisciplinary investigation into how designers integrate polysemantic signs into their design process for particular and highly individualized audiences. By analyzing the role of signs in specific contexts across the spectrum of arts, society, literature, and semiotics, a designer's understanding of the cyclical nature of interpretation and reinterpretation in complex environments creates an avenue for cultivating a new schema that provides further levels of interpretations and different access points. By removing elements from their original context, and fusing these elements into new narratives, we implement new meanings and emphasize the value of interpretation.
APA, Harvard, Vancouver, ISO, and other styles
9

Hardwick, David R. "Factors Associated with Saccade Latency." Thesis, Griffith University, 2008. http://hdl.handle.net/10072/365963.

Full text
Thesis (Masters)
Master of Philosophy (MPhil)
School of Psychology
Griffith Health
Full Text
APA, Harvard, Vancouver, ISO, and other styles
10

Poccardi, Nolwenn. "Etude du contrôle de l’etablissement de l’infection latente de HSV1 et de sa capacité de réactivation." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS148.

Full text
Abstract:
The Herpes Simplex virus 1 (HSV1), whose only natural hosts are humans, can persist for the host's whole lifetime in a quiescent state (latent infection) in the nervous system, especially in both trigeminal ganglia (TGs, right and left), which innervate the cornea. The virus can reactivate in the TG, leading to recurrent corneal infections (keratitis) that are typically unilateral and can lead to major vision loss. To date, the only available therapies against HSV1 are curative, i.e. they control the reactivation process only after its onset. Until now, no efficient preventive treatment against HSV1 has been established, and more specifically no vaccine has been shown to be clinically effective. Our team has developed an oro-ocular murine model (based on viral inoculation in the lip) that mimics most aspects of the natural history of HSV1 infection in humans. In particular, lateralization is also found in this model, as only the eye ipsilateral to the inoculated lip develops keratitis (initial keratitis and recurrences), while latent virus is found in both TGs with similar levels of viral genome copies. However, the bilateral latency is not perfectly symmetrical at the molecular level, since the production of Latency-Associated Transcripts (LATs) and the number of LAT+ neurons are higher in the ipsilateral TG (Cavallero et al., 2014; Maillet et al., 2006). As LAT expression is associated with the capacity of the virus to reactivate, the asymmetry in LAT expression could explain the unilaterality of keratitis events. The aim of this project was to constrain a wild-type HSV1 strain to enter a non-reactivable state of latent infection in both TGs. As this peculiar type of latent infection is observed only in the contralateral TG following a unilateral primary infection, we hypothesized that this phenomenon is linked to the respective kinetics of HSV1 infection in the two TGs.
To test this, we studied the impact of a primary HSV1 infection on the behavior (acute phase, latency, LAT expression, capacity of reactivation) of a superinfecting HSV1 strain inoculated at another anatomical site some days later. We have shown that primary infection with an HSV1 strain can inhibit the pathogenicity (morbidity and mortality) of a superinfecting virulent HSV1 strain inoculated a few days afterwards. Moreover, the superinfecting strain was found to be very rapidly driven into a latent state, with very poor LAT expression. This inhibitory effect also occurred when a non-neurovirulent strain of HSV1 was used for the primary infection, with no further ability of the wild-type superinfecting strain to reactivate. These results clearly show that the onset of productive infection in the TGs and, later on, latent infection with putative reactivation are related to the kinetics of infection. These observations may have implications for the potential development of innovative preventive strategies.
APA, Harvard, Vancouver, ISO, and other styles
11

Bego, Mariana. "Study of human cytomegalovirus latency: initial characterization of UL81-82ast gene and in vitro latency models." Abstract and full text PDF (UNR users only), 2005. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3209137.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Sheng, Cheng. "Synchronous Latency Insensitive Design in FPGA." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2767.

Full text
Abstract:
A design methodology to mitigate timing problems due to long wire delays is proposed. The timing problems are taken care of at the architecture level instead of the layout level, so that no change is needed when the whole design goes to backend design; design iterations are thereby avoided. The proposed design method is based on the STARI architecture, and a novel initialization mechanism is proposed. A low-frequency global clock is used to synchronize the communication, and PLLs are used to provide high-frequency working clocks. The feasibility of the new design methodology is demonstrated on an FPGA test board, and the implementation details are also described. Only standard library cells are used in this design method, and no change is made to the traditional design flow. The new design methodology is expected to reduce the timing closure effort in high-frequency, complex digital designs in deep submicron technologies.
APA, Harvard, Vancouver, ISO, and other styles
13

Gale, Andrew. "Tolerating memory latency through lightweight multithreading." Thesis, University of Surrey, 2002. http://epubs.surrey.ac.uk/842940/.

Full text
Abstract:
As processor clock frequencies continue to improve at a rate that exceeds the rate of improvement in the performance of semiconductor memories, so the effect of memory latency on processor efficiency increases. Unless steps are taken to mitigate the effect of memory latency, the increased processor frequency is of little benefit. This work demonstrates how multithreading can reduce the effect of memory latency on processor performance and how just a few threads are required to achieve close to optimal performance. A lightweight multithreaded architecture is discussed and simulated to show how threads derived from an application's instruction-level parallelism may be used to tolerate memory latency.
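The abstract's claim that "just a few threads" suffice can be illustrated with the classic saturation model of multithreaded latency tolerance: if each thread computes for R cycles between memory stalls of L cycles, processor utilization with n threads is min(1, nR / (R + L)). This is a standard textbook model, not necessarily the thesis's exact analysis, and the cycle counts below are illustrative:

```python
def utilization(n_threads: int, run_cycles: int, latency_cycles: int) -> float:
    """Saturation model of multithreaded latency tolerance.

    Each thread computes for run_cycles, then stalls for latency_cycles
    on a memory access. While one thread stalls, others run, so
    utilization grows linearly with thread count until it saturates at 1.0.
    """
    return min(1.0, n_threads * run_cycles / (run_cycles + latency_cycles))

# With 20-cycle run lengths and 100-cycle memory latency:
u1 = utilization(1, 20, 100)   # single thread: only ~17% utilized
u6 = utilization(6, 20, 100)   # six threads fully hide the latency
```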
APA, Harvard, Vancouver, ISO, and other styles
14

Lackorzynski, Adam. "Secure Virtualization of Latency-Constrained Systems." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-164170.

Full text
Abstract:
Virtualization is a mature technology in server and desktop environments, where multiple systems are consolidated onto a single physical hardware platform, increasing the utilization of today's multi-core systems and saving resources such as energy, space and cost compared to multiple single systems. Embedded environments, by contrast, often contain multiple separate computing systems, with requirements for real-time behaviour and isolation. For example, modern high-comfort cars use up to a hundred embedded computing systems. Consolidating such diverse configurations promises to save resources such as energy and weight. In my work I propose a secure software architecture that allows consolidating multiple embedded software systems with timing constraints. The basis of the architecture is a microkernel-based operating system that supports a variety of different virtualization approaches through a generic interface, covering hardware-assisted virtualization and paravirtualization on multiple architectures. Studying guest systems with latency constraints showed that standard virtualization techniques such as high-frequency time-slicing are not a viable approach. Generally, guest systems are a combination of best-effort and real-time work and thus form a mixed-criticality system. Further analysis showed that such systems need to export relevant internal scheduling information to the hypervisor to support multiple guests with latency constraints. I propose a mechanism for exporting those relevant events that is secure, flexible, performs well and is easy to use. The thesis concludes with an evaluation covering the virtualization approach on the ARM and x86 architectures and two guest operating systems, Linux and FreeRTOS, as well as an evaluation of the export mechanism.
APA, Harvard, Vancouver, ISO, and other styles
15

Friston, S. "Low latency rendering with dataflow architectures." Thesis, University College London (University of London), 2017. http://discovery.ucl.ac.uk/1544925/.

Full text
Abstract:
The research presented in this thesis concerns latency in VR and synthetic environments. Latency is the end-to-end delay experienced by the user of an interactive computer system, between their physical actions and the perceived response to these actions. Latency is a product of the various processing, transport and buffering delays present in any current computer system. For many computer mediated applications, latency can be distracting, but it is not critical to the utility of the application. Synthetic environments on the other hand attempt to facilitate direct interaction with a digitised world. Direct interaction here implies the formation of a sensorimotor loop between the user and the digitised world - that is, the user makes predictions about how their actions affect the world, and see these predictions realised. By facilitating the formation of the this loop, the synthetic environment allows users to directly sense the digitised world, rather than the interface, and induce perceptions, such as that of the digital world existing as a distinct physical place. This has many applications for knowledge transfer and efficient interaction through the use of enhanced communication cues. The complication is, the formation of the sensorimotor loop that underpins this is highly dependent on the fidelity of the virtual stimuli, including latency. The main research questions we ask are how can the characteristics of dataflow computing be leveraged to improve the temporal fidelity of the visual stimuli, and what implications does this have on other aspects of the fidelity. Secondarily, we ask what effects latency itself has on user interaction. We test the effects of latency on physical interaction at levels previously hypothesized but unexplored. We also test for a previously unconsidered effect of latency on higher level cognitive functions. 
To do this, we create prototype image generators for interactive systems and virtual reality, using dataflow computing platforms. We integrate these into real interactive systems to gain practical experience not only of the real, perceptible benefits of alternative rendering approaches, but also of the implications when they are subject to the constraints of real systems. We quantify the differences between our systems and traditional systems using latency and objective image fidelity measures. We use our novel systems to perform user studies into the effects of latency. Our high-performance apparatuses allow experimentation at latencies lower than previously tested in comparable studies. The low-latency apparatuses are designed to minimise what is currently the largest delay in traditional rendering pipelines, and we find that the approach is successful in this respect. Our 3D low-latency apparatus achieves lower latencies and higher fidelities than traditional systems; the conditions under which it can do this are highly constrained, however. We do not foresee dataflow computing shouldering the bulk of the rendering workload in the future, but rather facilitating the augmentation of the traditional pipeline with a very high speed local loop, such as an image distortion stage. Our latency experiments revealed that many predictions about the effects of low latency should be re-evaluated, and that experimenting in this range requires great care.
APA, Harvard, Vancouver, ISO, and other styles
16

Yedugundla, Kiran. "Evaluating and Reducing Multipath Transport Latency." Licentiate thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-71223.

Full text
Abstract:
Access to the Internet is a very significant part of everyday life, with a growing range of online services such as news delivery, banking, gaming, audio and high-quality movies. Applications require different transport guarantees, some demanding higher bandwidth and others low latency. Upgrading access link capacity does not guarantee faster access to the Internet: it offers higher bandwidth but may not offer low latency. With an increasing number of mobile devices supporting more than one access technology (e.g., WLAN, 3G, 4G), there is a need to analyse the impact of using multiple such technologies at the same time. Legacy transport protocols such as TCP or SCTP are only able to connect to one access network at a time to create an end-to-end connection. When more than one access technology is used, there may be a large difference in the data rate offered by each technology, and this asymmetry might impact latency-sensitive applications by creating out-of-order delivery. In this thesis, we focus on the latency aspect of multipath transport protocol performance. We consider CMT-SCTP and Multipath TCP (MPTCP) as available multipath protocols designed to exploit multiple paths for better throughput and reliability. We perform simulations, emulations and experiments using various real-world traffic scenarios such as Video, Gaming and Web traffic to measure end-to-end latency, in heterogeneous network settings involving access networks with different bandwidth, delay and loss characteristics. MPTCP performs better in terms of latency than CMT-SCTP and TCP in certain scenarios where the available paths are symmetric. However, MPTCP does not perform well in asymmetric scenarios with latency-sensitive traffic. This analysis provides insights into various areas of improvement in MPTCP, such as scheduling and loss recovery, to achieve low latency. We further focus on packet loss recovery in MPTCP for the specific case of tail losses to reduce latency.
Tail losses are the losses that occur at the end of a packet stream; recovering such losses is of particular significance to latency-sensitive applications. We propose a modification to the use of the Tail Loss Probe (TLP), a mechanism in TCP for tail loss recovery. We evaluate the performance of the proposed TLP modification, first using emulations and then with real-world network experiments. Our results show significant improvements in latency for specific loss scenarios in emulations, and up to 50% improvement in experiments.
APA, Harvard, Vancouver, ISO, and other styles
17

Vijayaraghavan, Muralidaran. "Theory of composable latency-insensitive refinements." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53319.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 36-37).
Simulation of a synchronous system on a hardware platform, for example an FPGA, can be performed using a hardware prototype of the system. But the prototype may not meet the resource and timing constraints of that platform. One way to meet the constraints is to partition the prototype hierarchically into modules, and to refine the individual modules while preserving the overall behavior of the system. In this thesis we formalize the notion of a refinement that preserves the behavior of the original modules - we call such refinements latency-insensitive refinements. We show that if these latency-insensitive refinements of the modules obey certain conditions, then they can be composed together hierarchically in order to obtain the latency-insensitive refinement of the original system. We call the latency-insensitive refinements that obey these conditions composable latency-insensitive refinements. We also give a procedure to automatically transform a module into a latency-insensitive refinement that obeys the conditions enabling it to be composed hierarchically. The transformation serves as a starting point for making further refinements and optimizations, and thus gives a methodology for designing hardware simulators for synchronous systems.
by Muralidaran Vijayaraghavan.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
18

Lancaster, Robert. "Low Latency Networking in Virtualized Environments." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1352993532.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Lunetta, Jennine Marie. "Molecular studies of human cytomegalovirus latency /." For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2002. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Tridgell, Stephen. "Low Latency Machine Learning on FPGAs." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/23030.

Full text
Abstract:
In recent years, there has been an exponential rise in the quantity of data being acquired and generated. Machine learning provides a way to use and analyze this data to provide a range of insights and services. In this thesis, two popular machine learning algorithms are explored in detail and implemented in hardware to achieve high throughput and low latency. The first algorithm discussed is the Naïve Online regularised Risk Minimization Algorithm, a Kernel Adaptive Filter capable of high-throughput online learning. In this work, a hardware architecture known as braiding is proposed and implemented on a Field-Programmable Gate Array. The application of this braiding technique to the Naïve Online regularised Risk Minimization Algorithm results in a very high throughput and low latency design. Neural networks have seen explosive growth in research in the recent decade. A portion of this research has been dedicated to lowering the computational cost of neural networks by using lower-precision representations. The second method explored and implemented in this work is the unrolling of ternary neural networks. Ternary neural networks can have the same structure as any floating-point neural network, the key difference being that the weights of the network are restricted to -1, 0 and 1. Under certain assumptions, this work demonstrates that these networks can be implemented very efficiently for inference by exploiting sparsity and common subexpressions. To demonstrate the effectiveness of this technique, it is applied to two different systems and two different datasets. The first is on the common benchmarking dataset CIFAR10 and the Amazon Web Services F1 platform, and the second is for real-time automatic modulation classification of radio frequency signals using the radio frequency system-on-chip ZCU111 development board.
These implementations both demonstrate very high throughput and low latency compared with other published literature while maintaining very high accuracy. Together this work provides techniques for real-time inference and training on parallel hardware which can be used to implement a wide range of new applications.
APA, Harvard, Vancouver, ISO, and other styles
21

Jenkins, Peter John. "Transcriptional regulation of the Epstein-Barr virus immediate early genes." Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Cattan, Elie. "Analyse de la latence et de sa compensation pour l'interaction au toucher direct : aspects techniques et humains." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM039/document.

Full text
Abstract:
Latency, the delay between a user's input to a system and the corresponding response from the system, is a major issue for the usability of interactive systems. In direct-touch interaction, latency is particularly perceivable and degrades user performance even at levels in the order of ten milliseconds. Yet current touch devices such as smartphones and tablets generally exhibit latencies above 70 ms. Our goal is to improve the knowledge on latency (its causes, its effects) and to find strategies to compensate for it or to decrease its negative effects. We present a review of the HCI literature on the topic, then link it with the motor control research field, which has studied human behaviour when facing visuomotor perturbations, and in particular the adaptation of movements to delayed visual feedback.
We then present our four contributions, which address the problem of latency in direct-touch interaction in both a practical and a theoretical manner. Two contributions complete the diagnosis of latency: the first is a new latency measurement technique; the second is a study of the impact of latency on bimanual interaction, which is important when interacting on large touch surfaces. We show that bimanual interaction is as much affected by latency as single-handed interaction, suggesting that more complex tasks, supposed to increase the cognitive load, do not necessarily reduce the effect of latency. Our two other contributions address the reduction of the effects of latency. On one hand, we introduce a low-latency system (25 ms) associated with predictive software compensation, and we show that this system enables users to perform as if they were using a system with 9 ms of latency. On the other hand, we study users' ability to adapt to latency in order to improve their performance on a target-tracking task, and we show that the negative impact of latency is reduced with long-term training thanks to human adaptability.
APA, Harvard, Vancouver, ISO, and other styles
23

Gong, Yixi. "La quête de latence faible sur les deux bords du réseau : conception, analyse, simulation et expériences." Thesis, Paris, ENST, 2016. http://www.theses.fr/2016ENST0018/document.

Full text
Abstract:
In recent years, new services over the Internet have been growing considerably and at a fast pace, bringing forward many new challenges under varied scenarios. The overall service performance depends in turn on the performance of multiple network segments. We investigate two representative design challenges in different segments: the two most important sit at the opposite edges of the end-to-end Internet path, namely, the end-user access network and the service provider data center network.
APA, Harvard, Vancouver, ISO, and other styles
24

Lo, Edward Chi Lup. "Performance evaluation and comparison of a token ring network with full latency stations and dual latency stations." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28498.

Full text
Abstract:
A method of performance improvement of token ring networks is presented, based on the use of stations with two latency states. Station latency is defined as the time delay introduced in passing data through a station. Most token ring protocol standards (e.g. IEEE 802.5 or ANSI X3T9.5) require incoming data to be decoded and encoded in the station before transmission onto the ring. These encoding and decoding operations add significantly to the station latency. The bypassing of the encoding and decoding steps is proposed, which improves the mean message waiting time. A detailed evaluation and comparison of the networks is based on both analytical and simulation results. The performance of identical stations and symmetric traffic is obtained analytically. A discrete event simulation model for a token ring network is written in GPSS for general traffic. Results show a significant reduction in mean waiting time for the dual latency ring, with performance approaching or exceeding that of gated and exhaustive service, for certain ranges of network utilization.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
25

Pal, Asmita. "Split Latency Allocator: Process Variation-Aware Register Access Latency Boost in a Near-Threshold Graphics Processing Unit." DigitalCommons@USU, 2018. https://digitalcommons.usu.edu/etd/7155.

Full text
Abstract:
Over the last decade, Graphics Processing Units (GPUs) have been used extensively in gaming consoles, mobile phones, workstations and data centers, as they have exhibited immense performance improvement over CPUs in graphics-intensive applications. Due to their highly parallel architecture, general-purpose GPUs (GPGPUs) have gained the foreground in applications where large data blocks can be processed in parallel. However, the performance improvement is constrained by a large power consumption. Meanwhile, Near Threshold Computing (NTC) has emerged as an energy-efficient design paradigm. Hence, operating GPUs at NTC seems a plausible solution to counteract the high energy consumption. This work investigates the challenges associated with NTC operation of GPUs and proposes a low-power GPU design, Split Latency Allocator, to sustain the performance of GPGPU applications.
APA, Harvard, Vancouver, ISO, and other styles
26

Tong, Phuoc Bao Viet. "Développement d’une nouvelle classe d'agents de sortie de latence du VIH-1." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTT009.

Full text
Abstract:
Although antiretroviral therapy (ART) efficiently suppresses HIV-1 multiplication in infected patients, ART does not cure the infection: if ART is stopped, a viral rebound is observed. This rebound is mainly due to the stochastic activation of latent cells, which contain the integrated viral genome but do not produce virus and are therefore targeted neither by ART nor by the immune system. These latent cells are rare (1-10 per million quiescent CD4+ T cells), but they appear quickly after primary infection (within a few days) and are long-lived (a half-life of almost 4 years). These reservoir cells thus constitute a major obstacle to viral eradication. The most promising strategy to eliminate them, known as "shock and kill" (or "kick and kill"), is to activate them so that they are then destroyed by viral production, targeted by ART and/or lysed by cytotoxic T cells. A number of latency reversing agents (LRAs) have been developed to activate these cells. They target cellular proteins such as histone deacetylases (HDACs) or protein kinase C (PKC); most of them therefore show non-specific effects, such as the inhibition of cytotoxic lymphocytes, and sometimes toxicity, and they have been unable to reduce the reservoir size in HIV+ patients.
We developed a new family of HIV latency reversing agents that target a viral protein. Based on available structures, we identified potential ligands of this protein by in silico screening and selected ten molecules, none of which is toxic for CD4+ T cells. One molecule, called D10, binds specifically to the target with an affinity of about 30-50 nM and affects the biological activity of this protein. Moreover, D10 shows LRA activity on the latent cell lines JLat-9.2 and OM-10.1, reaching 50-70% of the activity of SAHA (vorinostat), an HDAC inhibitor and LRA candidate currently in Phase 2 clinical trials. In ex vivo assays on latent cells from treated HIV patients, D10 at 50 nM is a very efficient LRA, about 80% more efficient than bryostatin-1, which acts on PKC and is considered the most promising LRA available to date. Using a chemoinformatic approach, we selected 11 analogs of D10, termed N1 to N11. Some of these analogs (N5, N8) show a stronger effect than D10 on latent cell lines, and the study of this family enabled us to sketch a relationship between chemical structure and LRA activity. We have thus identified new HIV-1 latency reversing agents that target a viral protein and should therefore be more specific than LRAs targeting cellular proteins.
APA, Harvard, Vancouver, ISO, and other styles
27

Lundberg, Fredrik. "How does asymmetric latency in a closed network affect audio signals and strategies for dealing with asymmetric latency." Thesis, Luleå tekniska universitet, Medier ljudteknik och upplevelseproduktion och teater, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-69065.

Full text
Abstract:
This study investigates Audio over IP (AoIP). A stress test was used to see what impact asymmetric latency had on the audio signal in a closed network. The study was structured in two parts. The first part is the stress test, in which two AoIP solutions were tested. The two solutions were exposed to two forms of asymmetric latency: first a fixed value was used, then a custom script was used to simulate changing values of asymmetric latency. The second part of the study involved interviews conducted with representatives from the audio industry who work with audio over IP on a day-to-day basis. The goal of these interviews was to find out what knowledge the audio industry had about asymmetric latency, whether the industry had experienced problems related to latency, and what general knowledge the industry has about networks. The interviews showed that the limitation in AoIP is not the technology itself but rather missing knowledge among the people who use the systems.
APA, Harvard, Vancouver, ISO, and other styles
28

Haugen, Daniel. "Seismic Data Compression and GPU Memory Latency." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9973.

Full text
Abstract:

The gap between processing performance and memory bandwidth is still increasing. To compensate for this gap, various techniques have been used, such as a memory hierarchy with faster memory closer to the processing unit. Other techniques that have been tested include the compression of data prior to a memory transfer. Bandwidth limitations exist not only at low levels within the memory hierarchy, but also between the central processing unit (CPU) and the graphics processing unit (GPU), suggesting the use of compression to mask the gap. Seismic datasets are often very large, e.g. several terabytes. This thesis explores compression of seismic data to hide the bandwidth limitation between the CPU and the GPU for seismic applications. The compression method considered is subband coding, with both run-length encoding (RLE) and Huffman encoding as compressors of the quantized data. These methods have been shown in CPU implementations to give very good compression ratios for seismic data. A proof-of-concept implementation for decompression of seismic data on GPUs is developed. It consists of three main components: first, the subband synthesis filter, reconstructing the input data processed by the subband analysis filter; second, the inverse quantizer, generating an output close to the input given to the quantizer; and finally, the decoders, decompressing the compressed data using Huffman and RLE. The results of our implementation show that the seismic data compression algorithm investigated is probably not suited to hide the bandwidth limitation between CPU and GPU, because the steps taken to do the decompression are likely slower than a simple memory copy of the uncompressed seismic data. It is primarily the decompressors that are the limiting factor, but in our implementation the subband synthesis is also limiting.
The sequential nature of the decompression algorithms used makes them difficult to parallelize in a way that uses the processing units on the GPUs efficiently. Several suggestions for future work are then given, as well as results showing how our GPU implementation can be very useful for compression of data to be sent over a network. Our compression results give a compression factor between 27 and 32, and an SNR of 24.67 dB for a cube of dimension 64³. A speedup of 2.5 for the synthesis filter compared to the CPU implementation is achieved (2029.00/813.76 ≈ 2.5). Although not currently suited for the GPU-CPU compression, our implementations indicate

APA, Harvard, Vancouver, ISO, and other styles
29

Gazi, Orhan. "Parallelized Architectures For Low Latency Turbo Structures." Phd thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608110/index.pdf.

Full text
Abstract:
In this thesis, we present low-latency general concatenated code structures suitable for parallel processing. We propose parallel decodable serially concatenated codes (PDSCCs), a general structure from which many variants of serially concatenated codes can be constructed. Using this most general structure we derive parallel decodable serially concatenated convolutional codes (PDSCCCs). Convolutional product codes, which are instances of PDSCCCs, are studied in detail. PDSCCCs have much less decoding latency and show almost the same performance compared to classical serially concatenated convolutional codes. Using the same idea, we propose parallel decodable turbo codes (PDTCs), a general structure to construct parallel concatenated codes. PDTCs have much less latency than classical turbo codes while achieving similar performance. We extend the approach proposed for the construction of parallel decodable concatenated codes to trellis coded modulation, turbo channel equalization, and space-time trellis codes, and show that low-latency systems can be constructed using the same idea. Parallel decoding introduces new problems in implementation. One such problem is memory collision, which occurs when multiple decoder units attempt to access the same memory device. We propose novel interleaver structures which prevent the memory collision problem while achieving performance close to that of other interleavers.
APA, Harvard, Vancouver, ISO, and other styles
30

Grannæs, Marius. "Reducing Memory Latency by Improving Resource Utilization." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-11065.

Full text
Abstract:
Integrated circuits have been in constant progression since the first prototype in 1958, with the semiconductor industry maintaining a constant rate of miniaturisation of transistors and wires. Up until about the year 2002, processor performance increased by about 55% per year. Since then, limitations on power, ILP and memory latency have slowed the increase in uniprocessor performance to about 20% per year. Although the capacity of DRAM increases by about 40% per year, the latency only decreases by about 6-7% per year. This performance gap between the processor and DRAM leads to a problem known as the memory wall. This thesis aims to improve system memory latency by leveraging available resources with excess capacity. This has been achieved through multiple techniques, but mainly by using excess bandwidth and improving scheduling policies. The first approach presented, destructive read DRAM, changes the underlying assumption that the contents of a DRAM cell are unchanged after a read. The latency of a read is reduced, but the rest of the memory system requires changes to conserve data. Prefetching predicts what data is needed in the future and fetches that data into the cache before it is referenced. This dissertation presents a technique for generating highly accurate prefetches with good timeliness called Delta Correlating Prediction Tables (DCPT). DCPT uses a table indexed by the load's address to store the delta history of individual loads. Delta correlation is then used to predict future misses. Delta Correlating Prediction Tables with Partial Matching (DCPT-P) extends DCPT by introducing L1 hoisting, which moves data from the L2 to the L1 to further increase performance. In addition, DCPT-P leverages partial matching, which reduces the spatial resolution of deltas to expose more patterns. The interaction between the memory controller and the prefetcher is especially important because of the complex 3D structure of modern DRAM.
Utilizing open pages can increase the performance of the system significantly. Memory controllers can increase bandwidth utilization and reduce latency at the same time by scheduling prefetches such that the number of page hits is maximized. The interaction between the program, prefetcher and the memory controller is explored. This thesis examines the impact of having a shared memory system in a CMP. When resources are shared, one core might interfere with another core's execution by delaying memory requests or displacing useful data in the cache. This effect is quantified, and the components most prone to inter-core interference are identified. Finally, we present a framework for measuring interference at runtime.
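The delta-correlation mechanism behind DCPT can be sketched as follows. This is an illustrative simplification, not the thesis's implementation: the table is keyed by a load identifier, the history length and prefetch degree are placeholder choices, and the matching policy is reduced to its essentials.

```python
from collections import deque

class DCPT:
    """Simplified sketch of Delta Correlating Prediction Tables.

    One table entry per load holds the last miss address and a short
    history of address deltas. On a new access, the most recent earlier
    occurrence of the last two deltas is located in the history, and the
    deltas that followed it are replayed as prefetch candidates.
    """

    def __init__(self, history_len=8, degree=4):
        self.table = {}              # load id -> (last_addr, delta history)
        self.history_len = history_len
        self.degree = degree

    def access(self, pc, addr):
        """Record one access; return a list of prefetch addresses."""
        if pc not in self.table:
            self.table[pc] = (addr, deque(maxlen=self.history_len))
            return []
        last_addr, deltas = self.table[pc]
        deltas.append(addr - last_addr)
        self.table[pc] = (addr, deltas)
        return self._predict(addr, list(deltas))

    def _predict(self, addr, d):
        if len(d) < 3:
            return []
        pair = (d[-2], d[-1])        # correlate on the last two deltas
        for i in range(len(d) - 3, 0, -1):
            if (d[i - 1], d[i]) == pair:
                prefetches, a = [], addr
                for delta in d[i + 1:i + 1 + self.degree]:
                    a += delta
                    prefetches.append(a)
                return prefetches
        return []
```

For a constant-stride load, the last two deltas match every earlier pair, so the sketch degenerates into stride prefetching; its value over a plain stride prefetcher shows up on repeating non-constant delta patterns.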
APA, Harvard, Vancouver, ISO, and other styles
31

Piccinini, Federico. "Dynamic load balancing based on latency prediction." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-143333.

Full text
Abstract:
Spotify is a music streaming service that offers access to a vast music catalogue; it counts more than 24 million active users in 28 different countries. Spotify's backend is made up of a constellation of independent, loosely-coupled services; each service consists of a set of replicas, running on a set of servers in multiple data centers: each request to a service needs to be routed to an appropriate replica. Balancing the load across replicas is crucial to exploit available resources in the best possible way and to provide optimal performance to clients. The main aim of this project is exploring the possibility of developing a load balancing algorithm that exploits request-reply latencies as its only load index. There are two reasons why latency is an appealing load index: in the first place, it has a significant impact on the experience of Spotify users; in the second place, identifying a good load index in a distributed system presents significant challenges due to phenomena that might arise from the interaction of the different system components, such as multi-bottlenecks. The use of latency as a load index is even more attractive in this light, because it allows for a simple black-box model where it is not necessary to model the resource usage patterns and bottlenecks of every single service individually: modeling each system would be an impractical task, due both to the number of services and to the speed at which these services evolve. In this work, we justify the choice of request-reply latency as a load indicator by presenting empirical evidence that it correlates well with a known, reliable load index obtained through a white-box approach. In order to assess this correlation, we present measurements from the production environment and from an ad-hoc test environment.
We present the design of a novel load balancing algorithm based on a modified φ accrual failure detector that exploits request-reply latency as an indirect measure of the load on individual backends; we analyze the algorithm in detail, providing an overview of potential pitfalls and caveats; we also provide an empirical evaluation of our algorithm, compare its performance to a pure round-robin scheduling discipline, and discuss which parameters can be tuned and how they affect the overall behavior of the load balancer.
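The black-box idea, using request-reply latency as the only load signal, can be illustrated with a minimal latency-weighted selector. This is a sketch under assumptions: the thesis's algorithm builds on a modified accrual failure detector, not this exact scheme, and the smoothing factor and initial latency are placeholder values.

```python
import random

class LatencyAwareBalancer:
    """Minimal sketch: each replica's request-reply latency is smoothed
    with an exponentially weighted moving average (EWMA), and replicas
    are picked with probability inversely proportional to that latency,
    so slow backends receive proportionally less traffic."""

    def __init__(self, backends, alpha=0.2):
        self.alpha = alpha
        self.latency = {b: 1.0 for b in backends}  # EWMA in seconds

    def record(self, backend, observed_latency):
        """Fold one observed request-reply latency into the EWMA."""
        ewma = self.latency[backend]
        self.latency[backend] = (1 - self.alpha) * ewma + self.alpha * observed_latency

    def pick(self):
        """Weighted random choice: weight = 1 / smoothed latency."""
        weights = [(b, 1.0 / l) for b, l in self.latency.items()]
        total = sum(w for _, w in weights)
        r = random.uniform(0, total)
        acc = 0.0
        for b, w in weights:
            acc += w
            if r <= acc:
                return b
        return weights[-1][0]
```

No per-service resource model is needed: the balancer only observes latencies it already measures when issuing requests, which is what makes the black-box approach attractive across many heterogeneous services.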
APA, Harvard, Vancouver, ISO, and other styles
32

Baxi, Mohit K. "Molecular studies of equine herpesvirus 1 latency." Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.390272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Gao, Jiyang. "Modelling latency removal in mechanical pulping processes." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/47091.

Full text
Abstract:
Latency removal is an essential step in the mechanical pulping process. It occurs in a continuous stirred-tank reactor (CSTR), and non-ideal mixing lowers the performance. In order to optimize the latency removal process and reduce the energy consumption of the operation, a kinetic study was carried out. In this work, latency removal was studied at both the individual-fibre and pulp-suspension frames of reference. In the first study, the removal of latency from individual TMP fibres was studied using optical microscopy. The fibre deflection under the influence of heat and water absorption was measured as a function of time. At the pulp suspension level, latency removal was characterized by the change in different pulp properties, and the dependency of each property on treatment conditions was determined. Kinetic models of latency removal for secondary refiner TMP and BCTMP pulps were developed, based on the rate of latency elimination characterized by freeness. The kinetic study reveals that a potential energy reduction in the industrial operation of latency removal can be achieved by properly increasing the power intensity to obtain better mixing. These results were then complemented in a third study by a more direct measure of latency, i.e. curl index. The change in curl index of TMP pulp was examined and its dependence on temperature and other treatment conditions was determined. The development of the tensile and tear strengths of TMP pulp was explored in terms of different treatment conditions, and the results were analyzed in terms of fibre straightening and fibre deflocculation. Linear correlations between strength properties, curl index, and freeness were found. In the final portion of the work, an industrial case study was performed, in which the latency removal of primary BCTMP pulp was examined for the purpose of optimizing an industrial latency removal process.
The results of the laboratory tests and the on-site measurements in the mill show that latency removal of primary BCTMP pulp is a much faster process than for secondary BCTMP pulp, and that the latency removal process in the pulp mill can be optimized using an existing smaller-sized mixing chest.
APA, Harvard, Vancouver, ISO, and other styles
34

Selvidge, Charles William. "Compilation-based prefetching for memory latency tolerance." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/13236.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1992.
Includes bibliographical references (leaves 160-164).
by Charles William Selvidge.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
35

Fleming, Kermin Elliott Jr. "Scalable reconfigurable computing leveraging latency-insensitive channels." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/79212.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 190-197).
Traditionally, FPGAs have been confined to the limited role of small, low-volume ASIC replacements and as circuit emulators. However, continued Moore's law scaling has given FPGAs new life as accelerators for applications that map well to fine-grained parallel substrates. Examples of such applications include processor modelling, compression, and digital signal processing. Although FPGAs continue to increase in size, some interesting designs still fail to fit into a single FPGA. Many tools exist that partition RTL descriptions across FPGAs. Unfortunately, existing tools have low performance due to the inefficiency of maintaining the cycle-by-cycle behavior of RTL among discrete FPGAs. These tools are unsuitable for use in FPGA program acceleration, as the purpose of an accelerator is to make applications run faster. This thesis presents latency-insensitive channels, a language-level mechanism by which programmers express points in their design at which the cycle-by-cycle behavior of the design may be modified by the compiler. By decoupling the timing of portions of the RTL from the high-level function of the program, designs may be mapped to multiple FPGAs without suffering the performance degradation observed in existing tools. This thesis demonstrates, using a diverse set of large designs, that FPGA programs described in terms of latency-insensitive channels obtain significant gains in design feasibility, compilation time, and run-time when mapped to multiple FPGAs.
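The key property of a latency-insensitive channel is that producer and consumer synchronise only on data availability and buffer space, never on a shared clock edge, so arbitrary delay can be inserted between them without changing the program's function. A toy software model of that contract (illustrative only; the thesis targets RTL, and the capacity here is a placeholder):

```python
from collections import deque

class LIChannel:
    """Toy model of a latency-insensitive channel: a bounded FIFO with
    explicit backpressure. The sender must retry when the buffer is
    full and the receiver must retry when it is empty, so correctness
    does not depend on how many cycles a transfer takes."""

    def __init__(self, capacity=4):
        self.buf = deque()
        self.capacity = capacity

    def can_send(self):
        return len(self.buf) < self.capacity

    def send(self, value):
        if not self.can_send():
            return False          # backpressure: caller retries later
        self.buf.append(value)
        return True

    def recv(self):
        """Return the oldest value, or None if nothing has arrived yet."""
        return self.buf.popleft() if self.buf else None
```

Because neither side observes exact cycle timing, a compiler is free to place the two endpoints on different FPGAs and absorb the inter-chip latency inside the channel.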
by Kermin Elliott Fleming, Jr.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
36

Kadi, Sabry. "Measuring Maintainability and latency of Node.js frameworks." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-22160.

Full text
Abstract:
Context: Node.js is an established web framework built using JavaScript. As a result, a wide variety of frameworks have emerged that specialize in different quality attributes and functionalities. Some are heavily geared towards performance and benchmarking, while others focus on security, availability, robustness, etc. Objectives: The project aims to explore different Node.js server-side frameworks and determine their maintainability using metrics such as Halstead metrics, the maintainability index, source lines of code, and logical source lines of code. This thesis also explores whether there is a correlation between the quality attributes maintainability and performance. Realization: In order to explore the different quality attributes, the thesis relied upon experiments and a literature review. The hierarchical method was to first examine the frameworks' performance and later their overall maintainability. Also examined is the impact of comments and how they can affect the results of the maintainability index. Results: The results indicate all the selected frameworks have a low to borderline-medium cyclomatic complexity, as well as a high degree of maintainability according to two different three-metric maintainability index formulas. The latency tests indicate the different frameworks produce similar performance results. Conclusion: This thesis concludes that there seems to be no relationship between lines of code, logical lines of code, and cyclomatic complexity. There also seems to be no correlation between Halstead volume and the overall maintainability index for either of the two three-metric formulas used. There is a slight indication of a relationship between Halstead effort and cyclomatic complexity using one of the three-metric formulas, i.e., as the cyclomatic complexity decreases, the overall maintainability (using Halstead's effort instead of Halstead's volume) increases.
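One plausible instance of the three-metric maintainability index formulas the abstract refers to is the classic Oman and Hagemeister variant, which combines Halstead volume, cyclomatic complexity, and lines of code. The thesis may use different coefficients; the ones below are the commonly cited ones.

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, sloc):
    """Classic three-metric maintainability index (Oman & Hagemeister
    coefficients). Higher values indicate more maintainable code;
    unscaled, the result is typically interpreted against a 0-171
    range. Shown for illustration of the metric's shape only."""
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(sloc))
```

The logarithms mean that doubling volume or code size costs a fixed number of index points, while each extra unit of cyclomatic complexity subtracts linearly, which is why low-complexity frameworks tend to score high regardless of their raw size.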
APA, Harvard, Vancouver, ISO, and other styles
37

Kadwadkar, Shivanand. "Latency Aware SmartNIC based Load Balancer (LASLB)." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-86039.

Full text
Abstract:
In the 21st century, we see a trend in which CPU processing power is not evolving at the same pace as it did in the century before. At the same time, data requirements and the need for higher speeds are increasing every day. This increasing demand requires multiple middlebox instances in order to scale. With recent progress in virtualization, middleboxes are being virtualized and deployed as software (Network Functions (NFs)) behind commodity CPUs. Various systems perform Load Balancing (LB) functionality in software, which consumes extra CPU on the NF side. Past research work has tried to move the LB functionality from software to hardware. The majority of hardware-based load balancers only provide basic LB functionality and depend on the NF to provide current performance statistics. Providing statistics feedback to the LB consumes processing power at the NF and creates an interdependency. In this thesis work, we explore the possibility of moving the load balancing functionality to a Smart Network Interface Card (smartNIC). Our load balancer distributes traffic among the set of CPUs where NF instances run. We use the P4 and C programming languages in our design, which gives us the combination of high-speed parallel packet processing and the ability to implement relatively complex load balancing features. Our LB approach uses the latency experienced by a packet as an estimate of the current CPU load: in our design, higher latency is a sign of a busier CPU. The Latency Aware smartNIC based Load Balancer (LASLB) also aims to reduce tail latency by moving traffic from CPUs where traffic experiences high latency to CPUs that process traffic under low latency. The approach followed in the design does not require any statistics feedback from the NF, which avoids a tight binding of the LB to the NF. Our experiments on different traffic profiles have shown that LASLB can save ~30% CPU for the NF.
In terms of fairness of CPU loading, our evaluation indicates that under imbalanced traffic, LASLB can balance load more evenly than other evaluated methods in the smartNIC-based LB category. Our evaluation also shows that LASLB can reduce 95th-percentile tail latency by ~22% compared to software load balancing.
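The latency-driven rebalancing idea, treating higher per-CPU packet latency as a proxy for a busier CPU, can be sketched in a few lines. All names and the move-one-flow policy are illustrative assumptions, not LASLB's actual P4/C logic.

```python
def rebalance(cpu_latency_ms, flow_map):
    """Sketch: move one flow from the CPU where packets currently see
    the highest latency to the CPU where they see the lowest, using
    latency alone as the load signal (no NF statistics feedback).

    cpu_latency_ms: {cpu_id: recent packet latency in milliseconds}
    flow_map:       {cpu_id: [flow ids currently steered to that CPU]}
    Returns (flow, src_cpu, dst_cpu) if a flow was moved, else None.
    """
    busiest = max(cpu_latency_ms, key=cpu_latency_ms.get)
    idlest = min(cpu_latency_ms, key=cpu_latency_ms.get)
    if busiest != idlest and flow_map.get(busiest):
        flow = flow_map[busiest].pop()
        flow_map.setdefault(idlest, []).append(flow)
        return flow, busiest, idlest
    return None
```

A real implementation would rate-limit such moves and hash flows consistently to avoid packet reordering; the sketch only captures the core signal-to-action loop.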
APA, Harvard, Vancouver, ISO, and other styles
38

Toczé, Klervie. "Latency-aware Resource Management at the Edge." Licentiate thesis, Linköpings universitet, Programvara och system, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-163388.

Full text
Abstract:
The increasing diversity of connected devices leads to new application domains being envisioned. Some of these need ultra low latency or have privacy requirements that cannot be satisfied by the current cloud. By bringing resources closer to the end user, the recent edge computing paradigm aims to enable such applications. One critical aspect to ensure the successful deployment of the edge computing paradigm is efficient resource management. Indeed, obtaining the needed resources is crucial for the applications using the edge, but the resource picture of this paradigm is complex. First, as opposed to the nearly infinite resources provided by the cloud, the edge devices have finite resources. Moreover, different resource types are required depending on the applications, and the devices supplying those resources are very heterogeneous. This thesis studies several challenges towards enabling efficient resource management for edge computing. The thesis begins with a review of the state-of-the-art research focusing on resource management in the edge computing context. A taxonomy is proposed for providing an overview of the current research and identifying areas in need of further work. One of the identified challenges is studying the resource supply organization in the case where a mix of mobile and stationary devices is used to provide the edge resources. The ORCH framework is proposed as a means to orchestrate this edge device mix. The evaluation performed in a simulator shows that this combination of devices enables higher quality of service for latency-critical tasks. Another area is understanding the resource demand side. The thesis presents a study of the workload of a killer application for edge computing: mixed reality. The MR-Leo prototype is designed and used as a vehicle to understand the end-to-end latency, the throughput, and the characteristics of the workload for this type of application.
A method for modeling the workload of an application is devised and applied to MR-Leo in order to obtain a synthetic workload exhibiting the same characteristics, which can be used in further studies.
APA, Harvard, Vancouver, ISO, and other styles
39

Omer, Mahgoub Saied Khalid. "Network Latency Estimation Leveraging Network Path Classification." Thesis, KTH, Network Systems Laboratory (NS Lab), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229955.

Full text
Abstract:
With the development of the Internet, new network services with strict network latency requirements have been made possible. These services are implemented as distributed systems deployed across multiple geographical locations. To provide low response time, these services require knowledge about the current network latency. Unfortunately, network latency among geo-distributed sites often changes, thus distributed services rely on continuous network latency measurements. One goal of such measurements is to differentiate momentary latency spikes from relatively long-term latency changes. The differentiation is achieved through statistical processing of the collected samples. This approach of high-frequency network latency measurement has high overhead, is slow to identify network latency changes, and lacks accuracy. We propose a novel approach for network latency estimation by correlating network paths to network latency. We demonstrate that network latency can be accurately estimated by first measuring and identifying the network path used and then fetching the expected latency for that network path based on a previous set of measurements. Based on these principles, we introduce Sudan traceroute, a network latency estimation tool. Sudan traceroute can be used both to reduce the latency estimation time and to reduce the overhead of network path measurements. Sudan traceroute uses an improved path detection mechanism that sends only a few carefully selected probes in order to identify the current network path. We have developed and evaluated Sudan traceroute in a test environment and evaluated the feasibility of Sudan traceroute on real-world networks using Amazon EC2. Using Sudan traceroute we have shortened the time it takes for hosts to identify network latency level changes compared to existing approaches.
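The core idea, classify the path first, then look up the expected latency for that path, can be sketched with a simple signature-to-statistics map. The interface below is hypothetical, not Sudan traceroute itself; path signatures are represented as hop tuples for illustration.

```python
class PathLatencyEstimator:
    """Sketch of path-classified latency estimation: map a network-path
    signature (e.g. the sequence of router hops seen by a traceroute-
    style probe) to latency samples collected earlier, so one cheap
    path probe can replace continuous high-frequency RTT sampling."""

    def __init__(self):
        self.stats = {}  # path signature -> list of RTT samples (ms)

    def record(self, path, rtt_ms):
        """Store an RTT sample under the path that carried it."""
        self.stats.setdefault(tuple(path), []).append(rtt_ms)

    def estimate(self, path):
        """Expected latency for a path, or None for an unseen path
        (the caller would then fall back to direct measurement)."""
        samples = self.stats.get(tuple(path))
        if not samples:
            return None
        return sum(samples) / len(samples)
```

A latency level change is then detected the moment the probe reveals a different path signature, rather than after enough RTT samples accumulate to shift a statistical estimate.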
APA, Harvard, Vancouver, ISO, and other styles
40

Avranas, Apostolos. "Resource allocation for latency sensitive wireless systems." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT021.

Full text
Abstract:
The new generation of wireless systems, 5G, aims not only to convincingly exceed its predecessor's (LTE) data rate but also to improve the system along other dimensions. For instance, more user classes were introduced, associated with different operating points on the trade-off between data rate, latency, and reliability. New applications, including augmented reality, autonomous driving, industry automation and tele-surgery, push the need for reliable communications to be carried out under extremely stringent latency constraints. How to manage the physical layer in order to successfully meet those service guarantees without wasting valuable and expensive resources is a hard question. Moreover, as the permissible communication latencies shrink, allowing a retransmission protocol within this limited time interval is questionable. In this thesis, we first pursue answers to those two questions. Concentrating on the physical layer, and specifically on a point-to-point communication system, we ask whether there is any resource allocation of power and blocklength that renders a Hybrid Automatic Repeat reQuest (HARQ) protocol with any number of retransmissions beneficial. Unfortunately, the short latency requirements allow only a limited number of symbols to be transmitted, which in turn renders the use of traditional Shannon theory inaccurate. Hence, the more involved expressions of finite blocklength theory must be employed, making the problem substantially more complicated. We first solve the problem for the additive white Gaussian noise (AWGN) case after appropriate mathematical manipulations and the introduction of an algorithm based on dynamic programming. We then move on to the more general case where the signal is distorted by Rician channel fading. We investigate how the scheduling decisions are affected under the two opposite cases of Channel State Information (CSI): one where only the statistical properties of the channel are known, i.e.
statistical CSI, and one where the exact value of the channel is provided to the transmitter, i.e. full CSI. Finally, we ask the same question one layer above, i.e. at the Medium Access Control (MAC) layer. The resource allocation must now be performed across multiple users. The setup for each user remains the same, meaning that a specific amount of information must be delivered successfully under strict latency constraints within which retransmissions are allowed. As 5G categorizes users into different classes according to their needs, we model the traffic under the same concept, so each user belongs to a different class defining its latency and data needs. We develop a deep reinforcement learning algorithm that trains a neural network model competing with conventional approaches based on optimization or combinatorial algorithms. In our simulations, the neural network model actually manages to outperform them in both the statistical and full CSI cases.
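The finite-blocklength expression this abstract alludes to is, in its commonly used normal-approximation form, the following relation between achievable rate $R$, blocklength $n$, and error probability $\epsilon$ (stated here for the AWGN case only; the thesis's exact formulation may differ):

```latex
R(n,\epsilon) \;\approx\; C - \sqrt{\frac{V}{n}}\, Q^{-1}(\epsilon),
\qquad
C = \log_2\!\left(1 + \mathrm{SNR}\right),
\qquad
V = \frac{\mathrm{SNR}\,(\mathrm{SNR}+2)}{2\,(\mathrm{SNR}+1)^2}\,\log_2^2 e,
```

where $C$ is the Shannon capacity, $V$ the channel dispersion, and $Q^{-1}$ the inverse Gaussian tail function. The $\sqrt{V/n}$ penalty is what makes short-latency (small $n$) transmission strictly costlier than Shannon theory predicts, and why the optimization over power and blocklength becomes non-trivial.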
APA, Harvard, Vancouver, ISO, and other styles
41

Kaaresoja, Topi Johannes. "Latency guidelines for touchscreen virtual button feedback." Thesis, University of Glasgow, 2016. http://theses.gla.ac.uk/7075/.

Full text
Abstract:
Touchscreens are very widely used, especially in mobile phones. They feature many interaction methods, pressing a virtual button being one of the most popular ones. In addition to inherent visual feedback, a virtual button can provide audio and tactile feedback. Since mobile phones are essentially computers, the processing causes latencies in interaction. However, it has not been known whether latency is an issue in mobile touchscreen virtual button interaction, or what the latency recommendations for visual, audio and tactile feedback are. The research in this thesis has investigated multimodal latency in mobile touchscreen virtual button interaction. For the first time, an affordable but accurate tool was built to measure all three feedback latencies in touchscreens. For the first time, simultaneity perception of touch and feedback, as well as the effect of latency on virtual button perceived quality, has been studied and thresholds found for both unimodal and bimodal feedback. The results from these studies were combined as latency guidelines for the first time. These guidelines enable interaction designers to establish requirements for mobile phone engineers to optimise the latencies to the right level. The latency measurement tool consisted of a high-speed camera, a microphone and an accelerometer for visual, audio and tactile feedback measurements. It was built with off-the-shelf components and, in addition, it was portable. Therefore, it could be copied at low cost or moved wherever needed. The tool enables touchscreen interaction designers to validate latencies in their experiments, making their results more valuable and accurate. The tool could benefit touchscreen phone manufacturers, since it enables engineers to validate latencies during development of mobile phones. The tool has been used in mobile phone R&D within Nokia Corporation and for validation of a research device within the University of Glasgow.
The guidelines established for unimodal feedback were as follows: visual feedback latency should be between 30 and 85 ms, audio between 20 and 70 ms, and tactile between 5 and 50 ms. The guidelines were found to be different for bimodal feedback: visual feedback latency should be 95 and audio 70 ms when the feedback was visual-audio, visual 100 and tactile 55 ms when the feedback was visual-tactile, and tactile 25 and audio 100 ms when the feedback was tactile-audio. These guidelines will help engineers and interaction designers to select and optimise latencies to be low enough, but not too low. Designers using these guidelines will make sure that most users will both perceive the feedback as simultaneous with their touch and experience high-quality virtual buttons. The results from this thesis show that latency has a remarkable effect on touchscreen virtual buttons, and it is a key part of virtual button feedback design. The novel results enable researchers, designers and engineers to master the effect of latencies in research and development. This will lead to more accurate and reliable research results and help mobile phone manufacturers make better products.
APA, Harvard, Vancouver, ISO, and other styles
42

Goel, Ashvin. "Operating system support for low-latency streaming /." Full text open access at:, 2003. http://content.ohsu.edu/u?/etd,194.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Oljira, Dejene Boru. "Telecom Networks Virtualization : Overcoming the Latency Challenge." Licentiate thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-67243.

Full text
Abstract:
Telecom service providers are adopting a Network Functions Virtualization (NFV) based service delivery model, in response to unprecedented traffic growth and increasing customer demand for new high-quality network services. In NFV, telecom network functions are virtualized and run on top of commodity servers. Ensuring network performance equivalent to the legacy non-virtualized system is a determining factor for the success of telecom networks virtualization. In virtualized systems, however, achieving carrier-grade network performance such as low latency, high throughput, and high availability to guarantee customers' quality of experience (QoE) is challenging. In this thesis, we focus on addressing the latency challenge. We investigate the delay overhead of virtualization by comprehensive network performance measurements and analysis in a controlled virtualized environment. With this, a break-down of the latency incurred by virtualization and the impact of co-locating virtual machines (VMs) of different workloads on the end-to-end latency are provided. We exploit this result to develop an optimization model for placement and provisioning of the virtualized telecom network functions to ensure both the latency and cost-efficiency requirements. To further alleviate the latency challenge, we propose a multipath transport protocol, MDTCP, that leverages Explicit Congestion Notification (ECN) to quickly detect and react to incipient congestion to minimize queuing delays, and achieves high network utilization in telecom datacenters.
APA, Harvard, Vancouver, ISO, and other styles
44

Norton, Nicholas James. "Cellular and viral factors affecting HIV-1 silencing and reactivation." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/290018.

Full text
Abstract:
Despite advances in the treatment of HIV-1, a cure remains elusive. A significant barrier to the eradication of the virus from an infected individual is a pool of cells infected with transcriptionally silent proviruses. A key pillar of the strategy to eradicate latent viruses has been called 'kick and kill', whereby the latent virus is stimulated to transcribe, rendering the host cell vulnerable to eradication by cytotoxic T cells. Optimising the reactivation signal is therefore critical to this approach. Here the established model system of latency 'J-lat' is used to probe optimum reactivation signals. Single clones are observed to respond to maximal stimulation with a single agent with only a fixed proportion of cells reactivating. Here it is shown that this proportion can be overcome by dosing with two agents in combination and, critically, that maximum synergies between agents occur at concentrations close to those achieved in vivo. The role of SETDB1 recruitment by the recently described HUSH complex is examined using shRNA knockdowns of these proteins. Knockdown does not increase expression from the majority of J-lat clones tested. Viral factors which influence silencing and reactivation from latency have not been explored to the same extent. Here mutations affecting the binding of splicing factors to HIV-1 mRNA were cloned into laboratory viruses. A reduction in splice factor binding is seen to change the use of splice junctions required for the production of Tat mRNA; in turn this alters the rate at which proviruses are silenced. In addition, the threshold for transcription in response to stimulation is increased in mutants with reduced splice factor binding.
45

Guan, Xi. "MeteorShower: geo-replicated strongly consistent NoSQL data store with low latency : Achieving sequentially consistent keyvalue store with low latency." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180687.

Full text
Abstract:
According to the CAP theorem, strong consistency is usually compromised in the design of NoSQL databases. Poor performance is often observed when a strong data consistency level is required, especially when a system is deployed in a geographical environment. In such an environment, servers need to communicate through cross-datacenter messages, whose latency is much higher than that of messages within a data center. However, maintaining strong consistency usually involves extensive use of cross-datacenter messages. Thus, the large cross-datacenter communication delay is one of the most dominant reasons for the poor performance of most algorithms achieving strong consistency in a geographical environment. This thesis proposes a novel data consistency algorithm, I-Write-One-Read-One, based on Write-One-Read-All. The novel approach allows a read request to be served by a local read. Besides, it reduces the cross-datacenter consistency-synchronization message delay from a round trip to a single trip. Moreover, the consistency model achieved by I-Write-One-Read-One is stronger than sequential consistency but looser than linearizability. In order to verify the correctness and effectiveness of I-Write-One-Read-One, a prototype, MeteorShower, is implemented on Cassandra. Furthermore, NTP servers are deployed to reduce time skew among nodes. Compared to Cassandra with a Write-One-Read-All consistency setup, MeteorShower has almost the same write performance but much lower read latency in a real geographical deployment. The higher the cross-datacenter network delay, the more evident the read performance improvement. Like Cassandra, MeteorShower also has excellent horizontal scalability, with performance growing linearly with the number of nodes per data center.
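The core idea the abstract describes, that a read is answered locally while a write is shipped to peers in a single one-way message rather than a request/reply round trip, can be sketched as follows. Class and method names are ours, not MeteorShower's, and last-writer-wins timestamps stand in for the thesis's NTP-synchronized clocks; the sketch also ignores network delivery entirely.

```python
import time

class Replica:
    """Toy full replica per datacenter: local reads, one-way write propagation."""

    def __init__(self, name):
        self.name = name
        self.store = {}          # key -> (timestamp, value)
        self.peers = []          # other Replica objects

    def read(self, key):
        # Local read: no cross-datacenter message needed.
        entry = self.store.get(key)
        return entry[1] if entry else None

    def write(self, key, value, ts=None):
        ts = ts if ts is not None else time.time()
        self._apply(key, ts, value)
        for peer in self.peers:  # single-trip propagation, no ack awaited
            peer._apply(key, ts, value)

    def _apply(self, key, ts, value):
        current = self.store.get(key)
        if current is None or ts > current[0]:   # last-writer-wins ordering
            self.store[key] = (ts, value)
```

The timestamp ordering is why clock skew matters in such a design, and why the thesis deploys NTP servers to keep node clocks close.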
46

Nicoll, Michael Peter. "The role of the Herpes simplex virus type 1 latency-associated transcripts during the establishment and maintenance of latency." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.607713.

Full text
47

García, Vidal Edurne. "Identification and characterization of novel latency-reversing agents to clear HIV-1 viral reservoir." Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/669732.

Full text
Abstract:
Current antiretroviral therapy has changed the perspective of HIV-1 infection from a lethal illness to a chronic disease. However, the HIV-1 latent reservoir is a major hurdle to achieving a cure for HIV-1. The "shock and kill" strategy is based on inducing viral transcription of the latent HIV-1 provirus followed by the selective killing of reactivated cells. Although several latency-reversing agents (LRAs) have been identified and tested, none of them has been able to efficiently eradicate the HIV-1 latent reservoir. Based on the need for novel agents and strategies to efficiently clear the latent reservoir, we evaluated compounds developed as modulators of the innate immune response, or designed to modulate cell cycle progression, as novel agents able to purge the viral reservoir. The study of innate immune modulators as agents able to clear the HIV-1 reservoir might represent an alternative due to their intrinsic functions, i.e., protection against and clearance of infections. The innate immune regulator acitretin, an FDA-approved compound for psoriasis, has been proposed to induce HIV-1 reactivation and selective killing of the infected cells. However, the effect of acitretin on HIV-1 reactivation was negligible in the vast majority of models tested, although activation of the RIG-I pathway was detected and a mild induction of viral reactivation was observed in a non-clonal T cell model. Moreover, acitretin treatment did not induce the selective killing of the infected cells. Anti-cancer compounds have also been proposed as candidate therapies targeting the latent reservoir, mainly due to the ability of certain agents to modify gene transcription or to promote cell apoptosis. The assessment of the HIV-1 reactivation potential of an anti-cancer compound library revealed several molecular targets whose inhibition promoted HIV-1 latency reversal, including the histone deacetylases (HDACs), Janus kinases (JAKs), IκB kinases (IKKs) and heat shock proteins (HSPs).
Among the newly identified LRAs, Aurora kinase inhibitors (AURKi) represented the largest family of compounds not previously described as LRAs that significantly and consistently showed HIV-1 reactivation capacity. AURKi were able to enhance HDACi-mediated reactivation, suggesting that AURKi target a distinct set of integrated proviruses from that reactivated by the well-described HDAC inhibitors. Interestingly, AURKi restricted acute HIV-1 infection, suggesting a dual role for these compounds in HIV-1 infection. Midostaurin, a multi-kinase inhibitor approved for leukemia treatment, was also identified as an LRA. Midostaurin induced HIV-1 latency reversal, either alone or in combination with other LRAs, consistent with previous reports that associated this activity with activation of the innate immune NF-κB pathway. Moreover, we also observed a not-yet-reported, SAMHD1-dependent inhibitory effect on HIV-1 replication in primary cells. The enhanced capacity of AURKi and midostaurin to promote HIV-1 reactivation in combination with other LRAs supports the idea that different agents are needed to reactivate all latent proviruses, with different specificities towards HIV-1 provirus reactivation depending on the integration site in the host genome. Furthermore, these observations also raise concerns about the models used to study HIV-1 latency, as clonal models might not be suitable due to their lack of the heterogeneity in proviral insertion sites that is characteristic of non-clonal models. Altogether, our results suggest that modulation of innate immunity and the cell cycle may be taken into account in the design of future LRAs for the "shock and kill" strategy; however, further research is still necessary before it can lead to an HIV-1 cure.
48

Vu, Thanh Long X. "360 Gunner - A 2D platformer to evaluate network latency compensation." Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-theses/1335.

Full text
Abstract:
Online gaming is rapidly growing as an entertainment choice, as it provides players with a wide variety of genres, affordability, ubiquity, and real-time online interaction. However, slow or congested networks can cause perceivable network latency and leave players with a degraded gameplay experience. Latency compensation techniques have been developed to combat the negative effects of network latency, but more understanding of latency's effects and latency compensation's benefits is still needed. Our project studied how different game actions degrade with latency and how player prediction, a classic latency compensation technique, affects gameplay in a 2D platformer. We designed and implemented an original 2D platformer with player prediction applied to player movement actions, then invited players to play our game under different network and latency compensation conditions. Based on the subjective and objective data collected, we found that 2D platformers are sensitive to even modest amounts of network latency. Player prediction helped players have fewer deaths below 200 ms of latency, but at 400 ms and above its benefits were outweighed by its disadvantages for visual consistency.
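The player-prediction technique the study evaluates is usually implemented as client-side prediction with server reconciliation: the client applies its own movement inputs immediately rather than waiting a round trip, then, when an authoritative server state arrives, resets to it and replays any inputs the server has not yet acknowledged. The names and the one-dimensional movement below are invented for illustration; the thesis game is a 2D platformer.

```python
class PredictingClient:
    """Minimal client-side prediction with server reconciliation (1-D movement)."""

    def __init__(self):
        self.x = 0.0
        self.pending = []        # inputs not yet acknowledged: (seq, dx)
        self.next_seq = 0

    def apply_input(self, dx):
        """Predict locally instead of waiting a round trip for the server."""
        self.x += dx
        self.pending.append((self.next_seq, dx))
        self.next_seq += 1
        return self.next_seq - 1   # sequence number sent to the server

    def on_server_state(self, server_x, last_acked_seq):
        """Reconcile: snap to the authoritative state, replay unacked inputs."""
        self.pending = [(s, dx) for s, dx in self.pending if s > last_acked_seq]
        self.x = server_x
        for _, dx in self.pending:
            self.x += dx
```

The reconciliation step is also the source of the visual-consistency cost the abstract reports at high latency: when the server disagrees with the prediction, the replay snaps the player to a corrected position.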
49

Ogawa, Atsushi. "Night-to-night variability of sleep latency significantly predicts the magnitude of subsequent change in sleep latency during placebo administration." 京都大学 (Kyoto University), 2013. http://hdl.handle.net/2433/180379.

Full text
50

Smutná, Katarína 1991. "Schlafen 12, a novel HIV restriction factor involved in latency." Doctoral thesis, Universitat Pompeu Fabra, 2019. http://hdl.handle.net/10803/666297.

Full text
Abstract:
The process of HIV latency establishment and maintenance is not clearly understood. Homeostatic proliferation (HSP) is a major mechanism by which long-lived naive and memory CD4 T cells are maintained in vivo. HSP also contributes to the persistence of the HIV latent reservoir. Furthermore, HIV-infected naive CD4 T cells cultured under HSP conditions are refractory to reactivation, in contrast to TCR-activated memory CD4 T cells. This might be due to a suggested post-transcriptional block in naive HSP-cultured cells. Here we compared the transcriptomic signatures of naive and memory CD4 T cells. Among the differentially expressed genes that may influence HIV latency, we identified Schlafen 12 (SLFN12) as an interesting candidate for a potential HIV restriction factor. Our results showed that SLFN12 establishes a post-transcriptional block in HIV-infected cells and thus inhibits both HIV production and its reactivation from latently infected cells. These findings may help to better understand the mechanisms underlying HIV latency and its reversal in HSP-maintained naive CD4 T cells. Altogether, this might contribute to the design of novel HIV eradication strategies.
