To view the other types of publications on this topic, follow the link: Effacement bit à bit.

Dissertations on the topic „Effacement bit à bit“

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a type of source:

Consult the top 50 dissertations for research on the topic „Effacement bit à bit“.

Next to every entry in the bibliography, the "Add to bibliography" option is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work if the relevant parameters are available in the metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Knight, Michael K. „Bit by bit“. Claremont Graduate University, 2010. http://ccdl.libraries.claremont.edu/u?/stc,82.

2

Melul, Franck. „Développement d'une nouvelle génération de point mémoire de type EEPROM pour les applications à forte densité d'intégration“. Electronic Thesis or Diss., Aix-Marseille, 2022. http://www.theses.fr/2022AIXM0266.

Annotation:
L’objectif de ces travaux de thèse a été de développer une nouvelle génération de point mémoire de type EEPROM pour les applications à haute fiabilité et à haute densité d’intégration. Dans un premier temps, une cellule mémoire très innovante développée par STMicroelectronics – eSTM (mémoire à stockage de charges de type Splitgate avec transistor de sélection vertical enterré) – a été étudiée comme cellule de référence. Dans une deuxième partie, dans un souci d’améliorer la fiabilité de la cellule eSTM et de permettre une miniaturisation plus agressive de la cellule EEPROM, une nouvelle architecture mémoire a été proposée : la cellule BitErasable. Elle a montré une excellente fiabilité et a permis d’apporter des éléments de compréhension sur les mécanismes de dégradation présents dans ces dispositifs mémoires à transistor de sélection enterré. Cette nouvelle architecture offre de plus la possibilité d’effacer les cellules d’un plan mémoire de façon individuelle : bit à bit. Conscient du grand intérêt que présente l’effacement bit à bit, un nouveau mécanisme d’effacement pour injection de trous chauds a été proposé pour la cellule eSTM. Il a montré des performances et un niveau de fiabilité parfaitement compatible avec les exigences industrielles des applications Flash-NOR
The objective of this thesis was to develop a new generation of EEPROM memory for high-reliability and high-density applications. First, an innovative memory cell developed by STMicroelectronics - the eSTM (split-gate charge-storage memory with a buried vertical selection transistor) - was studied as a reference cell. In the second part, to improve the reliability of the eSTM cell and to allow a more aggressive miniaturization of the EEPROM cell, a new memory architecture was proposed: the BitErasable cell. It showed excellent reliability and provided insight into the degradation mechanisms present in these memory devices with a buried selection transistor. This new architecture also offers the possibility of erasing the cells of a memory array individually: bit by bit. Given the great interest of bit-by-bit erasure, a new erase mechanism based on hot-hole injection was proposed for the eSTM cell. It showed performance and a level of reliability fully compatible with the industrial requirements of Flash-NOR applications.
3

Malm, Martin. „Beckholmen bit för bit“. Thesis, KTH, Arkitektur, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-150351.

Annotation:
På Beckholmen ligger stockholmsregionens enda aktiva reparationsvarv med kapacitet att ta emot såväl större fartyg samt skärgårdstrafikens fartyg. För att kunna bedriva verksamheten på ett säkert och miljövänligt sätt krävs infrastrukturella investeringar i form av byggnader och anläggningar. Stockholms stad har påbörjat ett programarbete och  presenterat ett förslag till detaljplan som möjliggör dessa investeringar.  Då det handlar om stora investeringar och förändringar i stadsbilden är den nya detaljplanen ett känsligt projekt, särskilt om varvsverksamheten skulle hamna i ekonomiska svårigheter. I det förslag till detaljplan som presenterats hösten 2012 har varsverksamhetens funktioner lagts inom en och samma byggnadsvolym, som till en följd blir väldigt stor.   Frågeställningen i ”Beckholmen bit för bit” är om en detaljplan som låter verksamheten växa mer inkrementellt fördelat på fler byggnader kan göra projektet mer flexibelt.   Efter studier av varvsverksamhetens program samt volym och planstudier, blir slutsatsen att det finns fördelar med att fördela varvsfunktionerna på fler byggnadsvolymer. Särskilt om byggnaderna gestaltas på ett sätt så att kompositionen är öppen för förändring. Ett förslag på en gestaltning som uppfyller dessa egenskaper presenteras även.
The only active shipyard in the Stockholm region with capacity for large ships and the archipelago fleet is situated at Beckholmen. Large investments in infrastructure and buildings are needed in order to protect the environment and ensure worker safety. The city has presented a planning proposal enabling these investments. As the investments are large and the site is sensitive, the project carries many potential risks, especially if the shipyard should encounter economic difficulties in the future. In the city's planning proposal presented in 2012, the whole program for the shipyard is contained within one single building volume. The question at issue in ”Beckholmen bit by bit” is therefore whether planning that splits the program into separate buildings and permits incremental growth is preferable. After program, volume and plan studies, the conclusion is that there are many advantages to planning that encourages incremental growth of the shipyard at Beckholmen, especially if the building design is open to change from the beginning. A plan and design fulfilling these qualities is also presented.
4

Hoesel, Stan van. „De constructie van de informatiesnelweg "bit by bit" /“. [Maastricht : Maastricht : Universiteit Maastricht] ; University Library, Maastricht University [Host], 2001. http://arno.unimaas.nl/show.cgi?fid=13064.

5

Karaer, Arzu. „Optimum bit-by-bit power allocation for minimum distortion transmission“. Texas A&M University, 2005. http://hdl.handle.net/1969.1/4760.

Annotation:
In this thesis, bit-by-bit power allocation in order to minimize mean-squared error (MSE) distortion of a basic communication system is studied. This communication system consists of a quantizer. There may or may not be a channel encoder and a Binary Phase Shift Keying (BPSK) modulator. In the quantizer, natural binary mapping is used. First, the case where there is no channel coding is considered. In the uncoded case, hard decision decoding is done at the receiver. It is seen that errors that occur in the more significant information bits contribute more to the distortion than less significant bits. For the uncoded case, the optimum power profile for each bit is determined analytically and through computer-based optimization methods like differential evolution. For low signal-to-noise ratio (SNR), the less significant bits are allocated negligible power compared to the more significant bits. For high SNRs, it is seen that the optimum bit-by-bit power allocation gives a constant MSE gain in dB over the uniform power allocation. Second, the coded case is considered. Linear block codes like (3,2), (4,3) and (5,4) single parity check codes and (7,4) Hamming codes are used and soft-decision decoding is done at the receiver. Approximate expressions for the MSE are considered in order to find a near-optimum power profile for the coded case. The optimization is done through a computer-based optimization method (differential evolution). For a simple code like the (7,4) Hamming code, simulations show that up to 3 dB MSE gain can be obtained by changing the power allocation on the information and parity bits. A systematic method to find the power profile for linear block codes is also introduced, given the knowledge of the input-output weight enumerating function of the code. The information bits have the same power, and the parity bits have the same power, and the two power levels can be different.
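
As a concrete illustration of the uncoded case described above, here is a minimal sketch (not the author's code): with natural binary mapping, an error in bit i of the quantizer output weighs the mean-squared error by 4^i, and the per-bit energies are chosen to minimize that weighted error under a total-energy constraint. The bit count, total energy and noise level below are illustrative assumptions.

# Hedged sketch: bit-by-bit power allocation minimizing MSE for an uncoded
# BPSK link with natural binary mapping; all parameter values are assumptions.
import numpy as np
from scipy.special import erfc
from scipy.optimize import minimize

def q_func(x):
    return 0.5 * erfc(x / np.sqrt(2.0))          # Gaussian tail probability Q(x)

def weighted_mse(energies, n0):
    # An error in bit i (i = 0 is the LSB) shifts the reconstructed value by 2**i,
    # so its squared-error weight is 4**i.
    weights = 4.0 ** np.arange(len(energies))
    p_err = q_func(np.sqrt(2.0 * energies / n0)) # BPSK bit error probability
    return float(np.sum(weights * p_err))

def allocate(n_bits=4, total_energy=4.0, n0=1.0):
    x0 = np.full(n_bits, total_energy / n_bits)  # start from a uniform allocation
    cons = {"type": "eq", "fun": lambda e: np.sum(e) - total_energy}
    bnds = [(0.0, total_energy)] * n_bits
    res = minimize(weighted_mse, x0, args=(n0,), bounds=bnds, constraints=cons)
    return res.x

print(np.round(allocate(), 3))                   # per-bit energies, LSB first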
6

Sandvik, Jørgen Moe. „En variabel bit lengde 9-bit 50MS/S SAR ADC“. Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-20654.

Annotation:
A 9-bit 50 MS/s SAR ADC with a simulated power consumption of 24.5 µW was designed for this thesis. Specifications were drawn up for use with in-probe electronics as part of an ultrasound system. A novel switching scheme - employing variable bit length (VBL) encoding - was introduced in order to simplify the successive approximation. Pre-layout results reported a FoM of just 1.37 fJ/conversion step, which compares favourably with all published designs to date. Recent technology advancements have seen the ultrasound field expanding into handheld markets [33]. More power-efficient solutions, in addition to existing enhanced-resolution 3-D technology, place strict requirements on analog/mixed-signal design. Composite electronics within the probe casing - allowing close-to-source signal processing - is believed to be the future of ultrasound devices. ADC designs suitable for in-probe technology require ultra-low power and noise characteristics in order to support multiple channels on a single SoC. The excellent performance of recent SAR ADCs makes them a viable alternative for in-probe technology [2,7,12,4]. Work in this thesis shows the flexibility of the SAR algorithm. The relatively simple implementation/decoding of the VBL approach, complemented by the accuracy dependency of the level-detection range, makes the ADC reconfigurable by digital signal processing. Recently published designs have reported relatively low power consumption for the comparator [15,7]. A motivation for the thesis was to see whether multiple comparators operated in parallel could reduce power in the remaining circuitry. Implementation of a level detector - supporting the VBL switching scheme - has led to improvements in power efficiency, speed and metastability-induced errors. The device consists of two comparators operated in parallel, with a relative DC offset generated by a difference in the capacitive load. The decision points of the comparators shift with the DC offset and are tuned to the range required by the modified SAR algorithm. An extensive literature search of recent methodologies and results was conducted, and a summary presenting state-of-the-art designs is included with the work. An approach using no external references was chosen as the basis for the DAC design. Emphasis was placed on a constant common-mode voltage suitable for comparator design, eliminating pre-amplifiers or buffers. Digital logic consisting of serially connected bit slices using a novel differential approach is proposed. Level-detector outputs are connected to the digital logic, switching only a portion of the transistors in the bit slice during conversion. The trade-off between switching activity and circuit area proves effective, with only 12.5% of overall power consumed in the digital part. Power simulations identified the level detector as the dominant source of consumption, making it the subject of further optimization with regard to power. Nonetheless, a proof-of-concept 8-bit ADC implementation - operated with the novel switching scheme - produced 8.96 ENOB while dissipating less power.
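
For readers unfamiliar with the underlying conversion algorithm, the sketch below shows the plain successive-approximation search that the variable-bit-length scheme above builds on; the reference voltage, resolution and input value are illustrative assumptions, and the actual design operates on switched capacitors rather than floating-point numbers.

# Hedged sketch of the basic SAR binary search (not the thesis's VBL variant).
def sar_convert(vin, vref=1.0, n_bits=9):
    code = 0
    for bit in range(n_bits - 1, -1, -1):        # test bits from MSB to LSB
        trial = code | (1 << bit)
        # the DAC voltage for a trial code is trial * vref / 2**n_bits
        if vin >= trial * vref / (1 << n_bits):
            code = trial                         # keep the bit if the comparator says "above"
    return code

print(sar_convert(0.637))                        # -> 326 for 9 bits and vref = 1.0 V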
7

James, Calvin L. „ENHANCE BIT SYNCHRONIZER BIT ERROR PERFORMANCE WITH A SINGLE ROM“. International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613417.

Annotation:
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Although prefiltering prevents the aliasing phenomenon with discrete signal processing, degradation in bit error performance results even when the prefilter implementation is ideal. Degradation occurs when decisions are based on statistics derived from correlated samples, processed by a sample mean estimator, i.e., a discrete linear filter. However, an orthonormal transformation can be employed to eliminate prefiltered sample statistical dependencies, thus permitting the sample mean estimator to provide near-optimum performance. This paper will present mathematical justification for elements which adversely affect the bit synchronizer’s decision process and suggest an orthonormal transform alternative. The suggested transform can be implemented in most digital bit synchronizer designs with the addition of a Read Only Memory (ROM).
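
A small sketch of the decorrelation idea (not the paper's ROM-based implementation): samples correlated by a prefilter are rotated with an orthonormal transform built from the eigenvectors of their covariance, after which a sample-mean decision can operate on nearly independent components. The moving-average prefilter and block length are illustrative assumptions.

# Hedged sketch: removing prefilter-induced correlation with an orthonormal transform.
import numpy as np

rng = np.random.default_rng(0)
n, taps = 10000, 4
noise = rng.standard_normal(n + taps)
# a simple moving-average prefilter produces correlated samples
samples = np.convolve(noise, np.ones(taps) / taps, mode="valid")[:n].reshape(-1, taps)

cov = np.cov(samples, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)           # columns form an orthonormal basis
decorrelated = samples @ eigvecs                 # rotated into uncorrelated coordinates

print(np.round(np.cov(decorrelated, rowvar=False), 3))   # approximately diagonal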
8

Fontana, Giulia <1993>. „BIT or no BIT? La tutela degli investimenti italiani all’estero“. Master's Degree Thesis, Università Ca' Foscari Venezia, 2018. http://hdl.handle.net/10579/12317.

Annotation:
Bilateral agreements for the promotion and protection of foreign investments have, since the beginning of the twentieth century, attracted a great deal of attention from all those wishing to expand their commercial horizons beyond their own national borders. This thesis first gives an overview of the historical process that led states to conclude these agreements, of the refinement of international commercial law on investment, and of the changes brought about by the entry into force of the Treaty of Lisbon and the start of the European Union's exclusive competence in this area. A bilateral agreement is then analysed in all its parts, and finally two disputes are examined, whose comparison prompts a reflection on the relevance of bilateral agreements for the protection of Italian investments abroad.
9

Segars, Tara. „8-Bit Hunger“. Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1619176909244462.

10

Näslund, Mats. „Bit extraction, hard-core predicates and the bit security of RSA“. Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 1998. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-2687.

11

Xiao, Xin, Bane Vasic and Shu Lin. „MULTI-BIT BIT-FLIPPING ALGORITHM FOR COLUMN WEIGHT-4 LDPC CODES“. International Foundation for Telemetering, 2017. http://hdl.handle.net/10150/627015.

Annotation:
Low-density parity-check (LDPC) codes with column weight 4 are widely used in many communication and storage systems. However, traditional hard-decision decoding algorithms such as the bit-flipping (BF) algorithm suffer from an error floor due to trapping sets in LDPC codes. In this paper, to lower the error floor of the BF algorithm over the Binary Symmetric Channel (BSC), we design a set of decoding rules incorporated within the BF algorithm for column weight-4 LDPC codes. Given a column weight-4 LDPC code, the dominant error patterns of the BF algorithm are first specified, and according to the designed rules, additional bits at both variable nodes (VN) and check nodes (CN) provide more information for the BF algorithm to identify the dominant error patterns, so that the BF algorithm can deliberately flip some bits to break them. Simulation results show that the modified BF algorithm eliminates all 4-error patterns and lowers the Bit Error Rate (BER) by at least two orders of magnitude with a trivial increase in complexity.
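
As background for the modification described above, here is a minimal sketch of the classic hard-decision bit-flipping decoder it starts from, which flips the bits involved in the largest number of unsatisfied checks; the (7,4) Hamming parity-check matrix and the received word are illustrative assumptions, not the column weight-4 codes of the paper.

# Hedged sketch of Gallager-style bit flipping over the BSC.
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H.dot(x) % 2                  # unsatisfied parity checks
        if not syndrome.any():
            return x, True                       # a valid codeword was reached
        fails = H.T.dot(syndrome)                # failed checks each bit participates in
        x[fails == fails.max()] ^= 1             # flip the most suspicious bits
    return x, False

H = np.array([[1, 1, 0, 1, 1, 0, 0],             # toy (7,4) Hamming parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.array([0, 0, 1, 0, 1, 0, 1])              # codeword 1010101 with its first bit flipped
print(bit_flip_decode(H, y))                     # recovers (array([1, 0, 1, 0, 1, 0, 1]), True)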
12

吳景濤 and King-to Ng. „A novel bit allocation buffer control algorithm for low bit-rate videocompression“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31221518.

13

Zimic, Sheila. „Internetgenerationen bit för bit : Representationer av IT och ungdom i ett informationssamhälle“. Doctoral thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21926.

Annotation:
The aim of this thesis is to gain a deeper understanding in relation to the construction of a ‘Net Generation’. With regards to the idea of an information society, technologies and young people are given certain positions, which are not in any sense natural but are socially constructed. This thesis explores these socially given meanings and shows what types of meanings are prioritized and legitimized. The exploration is conducted by examining, both externally and internally, given meanings of a generation identity. The external (nominal identification) in this study is understood as the construction of an abstract user and is studied by means of academic texts concerning the ‘Net Generation’. The internal (virtual identification) involves young people’s construction of their generation identity and is studied by means of collage. The collages are used to understand how the young participants position themselves in contemporary society and how they, as concrete users, articulate their relationship with information technologies.   The findings show that the ‘type of behavior’ which is articulated in the signifying practice of the construction of the abstract user, ‘Net Generation’, reduces users and technology to a marketing / economical discourse. In addition the idea of the abstract user implies that all users have the same possibilities to achieve ‘success’ in the information society, by being active ‘prosumers’. The concrete users articulate that they feel stressed and pressured in relation to all the choices that they are expected to make. In this sense, the participants do not articulate the (economical) interests as assumed for the ‘Net Generation’, but, rather articulate interests to play, to have a hobby and be social when using information technologies.   What this thesis thus proposes, is to critically explore the ‘taken for granted’ notions of a technological order in society as pertaining to young people. Only if we understand how socially given meaning is constructed can we break loose from the temporarily prioritized values to which the position of technology and users are fixed.
14

Simón, Gallego Carlos. „Bit-Torrent in Erlang“. Thesis, Uppsala University, Department of Information Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-98284.

Annotation:

The goal of my project has been to program a BitTorrent application using the Erlang language. A BitTorrent application lets the user download files from the system and share files at the same time. Erlang was chosen because this programming language has features well suited to concurrency and distributed systems.

The most important aspect I have considered in my project has been to manage the proper behaviour of the system, rather than simply transferring data. This way, the program is able to respond to changes immediately. The changes could be: a user uploads a new file to share with other peers, a file is removed, new chunks of a file appear… and so on.

My BitTorrent system contains five modules: Bittorrent, Tracker, Statistic, Server and User, each of which is explained in detail in this document.

15

Burgess, George, and Lloyd Bridges. „Single Board Bit Synchronizer“. International Foundation for Telemetering, 1989. http://hdl.handle.net/10150/614628.

Annotation:
International Telemetering Conference Proceedings / October 30-November 02, 1989 / Town & Country Hotel & Convention Center, San Diego, California
ASIC developments have made it possible to include the essential signal processing functions for data detection, clock recovery, and NCO in a single custom-designed chip. Using this chip and PLDs enabled the implementation of a fully-featured bit synchronizer on a single VME board in a rack-mountable 1.75" high, 19" wide chassis. This represents a space savings of 2/3 over existing units. The data rates supported are 250 bps to 5Mbps (2.5 Mbps biphase).
16

Grayson, Neil R. „The Bit - Collected Stories“. The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555428898931184.

17

Montalvo, Ramirez Luis Anibal. „VLSI implementation of control section of overlapped 3-bit scanning 64-bit multiplier“. Ohio : Ohio University, 1986. http://www.ohiolink.edu/etd/view.cgi?ohiou1183141735.

18

Blue, Ryan. „The bit probe model for membership queries [electronic resource] : non-adaptive bit queries /“. College Park, Md.: University of Maryland, 2009. http://hdl.handle.net/1903/9649.

Annotation:
Thesis (M.S.) -- University of Maryland, College Park, 2009.
Thesis research directed by: Dept. of Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
19

Ng, King-to. „A novel bit allocation buffer control algorithm for low bit-rate video compression /“. Hong Kong : University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20192733.

20

Wyman, Richard Hayden. „Bit-plane differential EZW for the compression of video for available bit-rate channels“. Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313533.

21

Wilson-Scorgie, Dorothea Jane. „Every last bit of you“. Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44885.

Annotation:
Every Last Bit of You is a realistic fiction novel employing an alternative narrative format. Profiles, passwords, wall posts, chats, and private messages tell the story of fifteen-year-old Hailey Milner, and sixteen-year-old Noah Huntington. This young adolescent novel explores the themes of grief and obsession through interconnecting communications of several characters, illustrating problems faced by modern teens such as Internet stalking and family difficulties, and ultimately leading the reader through a journey of reconstruction. Hailey Milner, born and raised in Victoria, BC, is having a rough year at school, ignoring invitations from her best friend, Sophia, and struggling to make sense of her family’s changing circumstances. And what more could she possibly take on? A recommendation from the school’s counselor suggests joining the school’s badminton team. Moving from Calgary, AB, Noah and his twin brother, Jack, arrive at school in the middle of the spring term. The last thing on this sporty jock’s mind is playing badminton. But when Noah discovers it’s the only school team left this year, he tries out. Hailey’s ‘safe space’ in these hard months is writing to her older sister, Zoe, that is, until Hailey and Noah are partnered in the mixed doubles on the badminton team. Hailey’s year finally starts to look up when a tentative friendship develops between her and Noah. Hailey senses that Noah has some family secrets of his own and she sees the potential for a fresh start for herself. Could their friendship possibly mean more? Hailey’s daydreams burst when she discovers that Noah is not likely to reciprocate her feelings. This event, compounding her original problems, drives Hailey into a grief-induced obsession over Noah, which soon gives way to full-out Facebook stalking. On the anniversary of a significant event, the root of Hailey’s grief becomes fodder for the News Feed on Facebook. But Hailey soon learns that not all public news is bad news. At the same time, Noah’s home life is also in flux, and through his muddled friendship with Hailey, he gains an appreciation for what, or rather who, matters in life.
22

Ambrose, D. „Diamond core bit performance analysis“. Thesis, University of Nottingham, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.378761.

23

Ahmad, Nurul Nadia. „Minimum bit error ratio beamforming“. Thesis, University of Southampton, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418966.

24

FREITAS, LUCAS DE. „MANGUE: BIT, SCENE AND AUTORSHIP“. PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2013. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28349@1.

Annotation:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
FUNDAÇÃO DE APOIO À PESQUISA DO ESTADO DO RIO DE JANEIRO
O Manguebit, cena Mangue ou, como é mais comumente conhecido, Manguebeat, movimentação cultural que eclodiu em Recife no início dos anos de 1990, foi abordado por duas dimensões dos impactos que os usos dos aparatos tecnológicos tiveram em relação à autoria. Primeiro, refletiu-se sobre os diferentes horizontes de expectativas mais ou menos rascunhados pelas próprias mídias (LP, K7, CD etc.), nos diversos momentos em que os articuladores do Manguebit as utilizaram em experimentos musicais. Dos K7, o amadorismo e descentramento da figura unitária do autor, às práticas da indústria fonográfica quando contrata alguns mangueboys, o profissional e o filtro imposto pelas estratégias de marketing – negociações do choque de distintos modus operandi. Depois, a construção da cena Mangue foi abordada a partir das estratégias coletivas e o uso de equipamentos e espaços precários enquanto condições de existência, as saídas encontradas pelos mangueboys para a formação de uma cena cultural num momento de extrema hostilidade ao contemporâneo e de forte tensionamento socioeconômico – criação de circuitos alternativos a partir de festas, bares, festivais e coletâneas, o investimento em amplas parcerias e autopromoção.
Manguebit, the Mangue scene or, as it is more commonly known, Manguebeat - a cultural movement that popped up in Recife in the early 1990s - was addressed through two dimensions of the impact that the use of technological devices had on authorship. First, this study reflects on the different horizons of expectation more or less framed by the media themselves (LP, K7, CD etc.) at the distinct moments in which the organizers of Manguebit used them in their musical experiments: from the K7 - amateurism and the decentring of the unitary figure of the author - to the practices of the music industry when it hires some mangueboys, and the professional posture and filters imposed by marketing strategies, including the negotiations that follow the clash of different modus operandi. Then the development of the Mangue scene was discussed in terms of collective strategies and the use of precarious equipment and spaces as conditions of existence, the ways out found by the mangueboys for the formation of a cultural scene at a time of extreme hostility to the contemporary and strong socioeconomic tension - the creation of alternative circuits around parties, bars, festivals and compilations, and extensive investment in partnerships and self-promotion.
25

Esterina, Ria. „Commercialization of bit-patterned media“. Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54199.

Annotation:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2009.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 75-79).
The realm of data storage density has expanded from the gigabyte to the terabyte domain. At such high areal densities, bit-patterned media is a promising candidate to overcome the superparamagnetic limit faced by conventional continuous media. However, the patterned-media concept has not been realized in mass production for several reasons. Besides the stringent requirement for high-resolution lithography, high production cost is inevitably the major challenge. If a low-cost mass-fabrication scheme becomes available, bit-patterned media will be an innovative way for hard disk technology to achieve a storage density beyond 1 Tb/in². The objective of this thesis is to review patterned-media technology and discuss its challenges and commercialization viability. A possible mass-production scheme is discussed. Electron beam lithography and the self-assembly of block copolymers are used to fabricate the master template. To ensure high throughput, template replication as well as disk fabrication are carried out by UV nanoimprint lithography (UV-NIL). Considering the large opportunity for patterned media to enter the market, a business plan was constructed. Enormous profit was shown to be possible when the barriers of technology, intellectual property, and funding can be surpassed. Patterned media therefore proves to be superior in terms of performance and cost compared to conventional media.
by Ria Esterina.
M.Eng.
26

Kritzinger, Carl. „Low bit rate speech coding“. Thesis, Stellenbosch : University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/2078.

Annotation:
Thesis (MScIng (Electrical and Electronic Engineering))--University of Stellenbosch, 2006.
Despite enormous advances in digital communication, the voice is still the primary tool with which people exchange ideas. However, uncompressed digital speech tends to require prohibitively high data rates (upward of 64kbps), making it impractical for many applications. Speech coding is the process of reducing the data rate of digital voice to manageable levels. Parametric speech coders or vocoders utilise a-priori information about the mechanism by which speech is produced in order to achieve extremely efficient compression of speech signals (as low as 1 kbps). The greater part of this thesis comprises an investigation into parametric speech coding. This consisted of a review of the mathematical and heuristic tools used in parametric speech coding, as well as the implementation of an accepted standard algorithm for parametric voice coding. In order to examine avenues of improvement for the existing vocoders, we examined some of the mathematical structure underlying parametric speech coding. Following on from this, we developed a novel approach to parametric speech coding which obtained promising results under both objective and subjective evaluation. An additional contribution by this thesis was the comparative subjective evaluation of the effect of parametric speech coding on English and Xhosa speech. We investigated the performance of two different encoding algorithms on the two languages.
27

Guadiana, Juan M., and Fil Macias. „ENCRYPTED BIT ERROR RATE TESTING“. International Foundation for Telemetering, 2002. http://hdl.handle.net/10150/607507.

Annotation:
International Telemetering Conference Proceedings / October 21, 2002 / Town & Country Hotel and Conference Center, San Diego, California
End-to-End testing is a tool for verifying that Range Telemetry (TM) System Equipment will deliver satisfactory performance throughout a planned flight test. A thorough test verifies system thresholds while gauging projected mission loading all in the presence of expected interference. At the White Sands Missile Range (WSMR) in New Mexico, system tests are routinely conducted by Range telemetry Engineers and technicians in the interest of ensuring highly reliable telemetry acquisition. Even so, flight or integration tests are occasionally halted, unable to complete these telemetry checks. The Navy Standard Missile Program Office and the White Sands Missile Range, have proactively conducted investigations to identify and eliminate problems. A background discussion is provided on the serious problems with the launcher acquisition, which were resolved along the way laying the ground work for effective system testing. Since there were no provisions to test with the decryption equipment an assumption must be made. Encryption is operationally transparent and reliable. Encryption has wide application, and for that reason the above assumption must be made with confidence. A comprehensive mission day encrypted systems test is proposed. Those involved with encrypted telemetry systems, and those experiencing seemingly unexplainable data degradations and other problems with or without encryption should review this information.
28

Loebner, Christopher E. „Bit Error Problems with DES“. International Foundation for Telemetering, 1993. http://hdl.handle.net/10150/611877.

Annotation:
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The Data Encryption Standard (DES) was developed in 1977 by IBM for the National Bureau of Standards (NBS) as a standard way to encrypt unclassified data for security protection. When the DES decrypts the encrypted data blocks, it assumes that there are no bit errors in the data blocks. It is the object of this project to determine the Hamming distance between the original data block and the data block after decryption if a single bit error occurs anywhere in the encrypted bit block of 64 bits. This project shows that if a single bit error occurs anywhere in the 64-bit encrypted data block, a mean Hamming distance of 32 with a standard deviation of 4 is produced between the original bit block and the decrypted bit block. Furthermore, it is highly recommended by this project to use a forward error correction scheme like BCH (127, 64) or Reed-Solomon (127, 64) so that the probability of this bit error occurring is decreased.
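
A minimal sketch of the experiment described above, using the PyCryptodome DES implementation (an assumption on my part; the project does not prescribe a library): encrypt one 64-bit block, flip a single ciphertext bit, decrypt, and measure the Hamming distance, which averages about 32 bits over many trials.

# Hedged sketch: avalanche effect of a single ciphertext bit error in DES.
import os
from Crypto.Cipher import DES

def hamming(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

key = os.urandom(8)
block = os.urandom(8)                            # one 64-bit plaintext block
ciphertext = bytearray(DES.new(key, DES.MODE_ECB).encrypt(block))

ciphertext[3] ^= 0x10                            # flip a single ciphertext bit
decrypted = DES.new(key, DES.MODE_ECB).decrypt(bytes(ciphertext))

# over many random keys and blocks this averages roughly 32 of 64 bits
print("Hamming distance:", hamming(block, decrypted))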
29

Al-Rababa'a, Ahmad. „Arithmetic bit recycling data compression“. Doctoral thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/26759.

Annotation:
Tableau d’honneur de la Faculté des études supérieures et postdoctorales, 2015-2016
La compression des données est la technique informatique qui vise à réduire la taille de l'information pour minimiser l'espace de stockage nécessaire et accélérer la transmission des données dans les réseaux à bande passante limitée. Plusieurs techniques de compression telles que LZ77 et ses variantes souffrent d'un problème que nous appelons la redondance causée par la multiplicité d'encodages. La multiplicité d'encodages (ME) signifie que les données sources peuvent être encodées de différentes manières. Dans son cas le plus simple, ME se produit lorsqu'une technique de compression a la possibilité, au cours du processus d'encodage, de coder un symbole de différentes manières. La technique de compression par recyclage de bits a été introduite par D. Dubé et V. Beaudoin pour minimiser la redondance causée par ME. Des variantes de recyclage de bits ont été appliquées à LZ77 et les résultats expérimentaux obtenus conduisent à une meilleure compression (une réduction d'environ 9% de la taille des fichiers qui ont été compressés par Gzip en exploitant ME). Dubé et Beaudoin ont souligné que leur technique pourrait ne pas minimiser parfaitement la redondance causée par ME, car elle est construite sur la base du codage de Huffman qui n'a pas la capacité de traiter des mots de code (codewords) de longueurs fractionnaires, c'est-à-dire qu'elle permet de générer des mots de code de longueurs intégrales. En outre, le recyclage de bits s'appuie sur le codage de Huffman (HuBR) qui impose des contraintes supplémentaires pour éviter certaines situations qui diminuent sa performance. Contrairement aux codes de Huffman, le codage arithmétique (AC) peut manipuler des mots de code de longueurs fractionnaires. De plus, durant ces dernières décennies, les codes arithmétiques ont attiré plusieurs chercheurs vu qu'ils sont plus puissants et plus souples que les codes de Huffman. Par conséquent, ce travail vise à adapter le recyclage des bits pour les codes arithmétiques afin d'améliorer l'efficacité du codage et sa flexibilité. Nous avons abordé ce problème à travers nos quatre contributions (publiées). Ces contributions sont présentées dans cette thèse et peuvent être résumées comme suit. Premièrement, nous proposons une nouvelle technique utilisée pour adapter le recyclage de bits qui s'appuie sur les codes de Huffman (HuBR) au codage arithmétique. Cette technique est nommée recyclage de bits basé sur les codes arithmétiques (ACBR). Elle décrit le cadriciel et les principes de l'adaptation du HuBR à l'ACBR. Nous présentons aussi l'analyse théorique nécessaire pour estimer la redondance qui peut être réduite à l'aide de HuBR et ACBR pour les applications qui souffrent de ME. Cette analyse démontre que ACBR réalise un recyclage parfait dans tous les cas, tandis que HuBR ne réalise de telles performances que dans des cas très spécifiques. Deuxièmement, le problème de la technique ACBR précitée, c'est qu'elle requiert des calculs à précision arbitraire. Cela nécessite des ressources illimitées (ou infinies). Afin de bénéficier de cette dernière, nous proposons une nouvelle version à précision finie. Ladite technique devienne ainsi efficace et applicable sur les ordinateurs avec les registres classiques de taille fixe et peut être facilement interfacée avec les applications qui souffrent de ME. Troisièmement, nous proposons l'utilisation de HuBR et ACBR comme un moyen pour réduire la redondance afin d'obtenir un code binaire variable à fixe. 
Nous avons prouvé théoriquement et expérimentalement que les deux techniques permettent d'obtenir une amélioration significative (moins de redondance). À cet égard, ACBR surpasse HuBR et fournit une classe plus étendue des sources binaires qui pouvant bénéficier d'un dictionnaire pluriellement analysable. En outre, nous montrons qu'ACBR est plus souple que HuBR dans la pratique. Quatrièmement, nous utilisons HuBR pour réduire la redondance des codes équilibrés générés par l'algorithme de Knuth. Afin de comparer les performances de HuBR et ACBR, les résultats théoriques correspondants de HuBR et d'ACBR sont présentés. Les résultats montrent que les deux techniques réalisent presque la même réduction de redondance sur les codes équilibrés générés par l'algorithme de Knuth.
Data compression aims to reduce the size of data so that it requires less storage space and less communication channel bandwidth. Many compression techniques (such as LZ77 and its variants) suffer from a problem that we call the redundancy caused by the multiplicity of encodings. The Multiplicity of Encodings (ME) means that the source data may be encoded in more than one way. In its simplest case, it occurs when a compression technique with ME has the opportunity at certain steps, during the encoding process, to encode the same symbol in different ways. The Bit Recycling compression technique has been introduced by D. Dubé and V. Beaudoin to minimize the redundancy caused by ME. Variants of bit recycling have been applied to LZ77 and the experimental results showed that bit recycling achieved better compression (a reduction of about 9% in the size of files that have been compressed by Gzip) by exploiting ME. Dubé and Beaudoin have pointed out that their technique could not minimize the redundancy caused by ME perfectly since it is built on Huffman coding, which does not have the ability to deal with codewords of fractional lengths; i.e. it is constrained to generating codewords of integral lengths. Moreover, Huffman-based Bit Recycling (HuBR) has imposed an additional burden to avoid some situations that affect its performance negatively. Unlike Huffman coding, Arithmetic Coding (AC) can manipulate codewords of fractional lengths. Furthermore, it has attracted researchers in the last few decades since it is more powerful and flexible than Huffman coding. Accordingly, this work aims to address the problem of adapting bit recycling to arithmetic coding in order to improve the code efficiency and the flexibility of HuBR. We addressed this problem through our four (published) contributions. These contributions are presented in this thesis and can be summarized as follows. Firstly, we propose a new scheme for adapting HuBR to AC. The proposed scheme, named Arithmetic-Coding-based Bit Recycling (ACBR), describes the framework and the principle of adapting HuBR to AC. We also present the necessary theoretical analysis that is required to estimate the average amount of redundancy that can be removed by HuBR and ACBR in the applications that suffer from ME, which shows that ACBR achieves perfect recycling in all cases whereas HuBR achieves perfect recycling only in very specific cases. Secondly, the problem of the aforementioned ACBR scheme is that it uses arbitrary-precision calculations, which requires unbounded (or infinite) resources. Hence, in order to benefit from ACBR in practice, we propose a new finite-precision version of the ACBR scheme, which makes it efficiently applicable on computers with conventional fixed-sized registers and can be easily interfaced with the applications that suffer from ME. Thirdly, we propose the use of both techniques (HuBR and ACBR) as the means to reduce the redundancy in plurally parsable dictionaries that are used to obtain a binary variable-to-fixed length code. We theoretically and experimentally show that both techniques achieve a significant improvement (less redundancy) in this respect, but ACBR outperforms HuBR and provides a wider class of binary sources that may benefit from a plurally parsable dictionary. Moreover, we show that ACBR is more flexible than HuBR in practice. Fourthly, we use HuBR to reduce the redundancy of the balanced codes generated by Knuth's algorithm.
In order to compare the performance of HuBR and ACBR, the corresponding theoretical results and analysis of HuBR and ACBR are presented. The results show that both techniques achieved almost the same significant reduction in the redundancy of the balanced codes generated by Knuth's algorithm.
30

Merrill, Elise. „Blossoming Bit by Bit: Exploring the Role of Theatre Initiatives in the Lives of Criminalized Women“. Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32176.

Annotation:
This thesis explores the role of theatre in the lives of criminalized women. It seeks to better understand the ways in which theatre initiatives can be used as a tool for participants through various means, such as potentially being a form of self-expression, or a way to gain voice. This exploration was facilitated by conducting a case study of the Clean Break Theatre Company, a theatre company for criminalized women in London, England. Data was collected through performance and course observations and interviews with twelve women. The final themes shape the exploration as participants identify the importance of self-expression through theatre, and its ability to aid in personal transformation or growth. Theatre initiatives are important because they create a unique lens into the experiences of these women, as well as being used as a tool for change in their lives.
31

Wei, Lan. „Implementation of Pipelined Bit-parallel Adders“. Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1943.

Annotation:

Bit-parallel addition can be performed using a number of adder structures with different area and latency. However, the power consumption of different adder structures is not well studied. Further, the effect of pipelining adders to increase the throughput is not well studied. In this thesis four different adders are described, implemented in VHDL and compared after synthesis. The results give a general idea of the time-delay-power tradeoffs between the adder structures. Pipelining is shown to be a good technique for increasing the circuit speed.
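
As a simple point of reference for the structures compared above, here is a behavioural sketch of a ripple-carry adder, the baseline against which faster or pipelined adders are usually judged; the operand width is an illustrative assumption, and the thesis itself works in VHDL rather than software.

# Hedged sketch: bit-parallel ripple-carry addition, one full adder per bit position.
def ripple_carry_add(a, b, width=8):
    carry, total = 0, 0
    for i in range(width):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        s = ai ^ bi ^ carry                      # sum bit of the full adder
        carry = (ai & bi) | (carry & (ai ^ bi))  # carry ripples to the next stage
        total |= s << i
    return total, carry

print(ripple_carry_add(0b1011, 0b0110))          # 11 + 6 -> (17, 0)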

32

Oskarsson, Jakob. „Tribological testing of drill bit inserts“. Thesis, Uppsala universitet, Tillämpad materialvetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-160584.

Annotation:
This thesis work sought to find a tribological testing method suitable for cemented-carbide drill bit inserts used when drilling rock. A review of the literature published on the matter showed that there are quite a few test methods developed for wear studies with cemented carbides, but most of them were not designed for the rock drilling industry. Published studies performed with the methods found, and articles with analyzed field tests, have been studied. It is generally agreed that wear proceeds in steps: the binder disappears first, followed by removal of carbide grains. The mechanisms of binder-phase and carbide-grain removal are somewhat debated, but almost every study observes fracture of the carbide grains. The wear test created in this thesis was shown to give wear that is linear with time, but not with load. The new method was shown to be capable of distinguishing between different cemented carbides worn in three-body abrasion against different rocks. Analysis of the worn samples shows that there are similarities with bit inserts worn in field testing. Many of the observations made during the analysis are also similar to observations in the literature.
33

Bose, Gourav. „The 128-bit block cipher MARS“. FIU Digital Commons, 2003. http://digitalcommons.fiu.edu/etd/1770.

Annotation:
The purpose of the research is to investigate emerging data security methodologies that will work with the most suitable applications in academic, industrial and commercial environments. Of the several methodologies considered for the Advanced Encryption Standard (AES), MARS, a block cipher developed by IBM, has been selected. Its design takes advantage of the powerful capabilities of modern computers to allow a much higher level of performance than can be obtained from less optimized algorithms such as the Data Encryption Standard (DES). MARS is unique in combining virtually every design technique known to cryptographers in one algorithm. The thesis presents the performance of a flexible 128-bit cipher, which is a scaled-down version of the MARS algorithm. The cryptosystem used showed performance comparable to that of the original algorithm in speed, flexibility and security. The algorithm is considered to be very secure and robust and is expected to be implemented for most of the applications.
34

Plain, Simon E. M. „Bit rate scalability in audio coding“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0034/MQ64243.pdf.

35

Anhari, Alireza Kenarsari. „Rehashing the bit-interleaved coded modulation“. Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/22473.

Annotation:
Bit-interleaved coded modulation (BICM) is a pragmatic yet powerful approach for spectrally efficient coded transmission. BICM was originally designed as a superior alternative to conventional trellis coded modulation in fading channels. However, its flexibility and ease of implementation also make BICM an attractive scheme for transmission over unfaded channels. In fact, a noticeable advantage of BICM is its simplicity and flexibility. Notably, most of today’s communication systems that achieve high spectral efficiency, such as ADSL, Wireless LANs, and WiMax, feature BICM. Clearly, the design of efficient BICM-based transmission strategies relies on the existence of a general analytical framework for evaluating its performance. Therefore, alongside its vast popularity and deployment, performance evaluation of BICM has attracted considerable attention. Developing such a performance evaluation framework is one of the main contributions of this thesis. In addition to the conventional additive white Gaussian noise model, the practically important case of transmission over fading channels impaired by Gaussian mixture noise has also been studied. Different from previously proposed methods, our scheme results in closed-form expressions and is valid for arbitrary mapping rules and fading distributions. Furthermore, making use of the newly developed framework, we propose two novel transmission strategies. First, we consider the problem of optimal power allocation for a BICM system employing orthogonal frequency division multiplexing. In particular, we show that this problem translates into a linear program in the high signal-to-noise ratio regime. This reformulation extends the applicability and delivers considerable complexity reduction in comparison to existing algorithms. Finally, we propose novel detector architectures for a BICM system employing iterative decoding using hard-decision feedback at the receiver. We show that taking the feedback error into account results in considerable performance improvement while retaining the decoding complexity.
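
A minimal sketch of the BICM transmitter idea discussed above (not the thesis's analytical framework): encoder output bits pass through a bit interleaver and are Gray-mapped onto a 16-QAM constellation. The random interleaver, block length and normalization are illustrative assumptions.

# Hedged sketch: bit interleaving followed by Gray-mapped 16-QAM.
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 1024)                  # stand-in for channel-encoder output

perm = rng.permutation(bits.size)                # random bit interleaver
interleaved = bits[perm]

gray_pam = np.array([-3, -1, 3, 1])              # Gray-coded 4-PAM levels for two bits
groups = interleaved.reshape(-1, 4)
symbols = (gray_pam[groups[:, 0] * 2 + groups[:, 1]]          # in-phase component
           + 1j * gray_pam[groups[:, 2] * 2 + groups[:, 3]])  # quadrature component
symbols = symbols / np.sqrt(10)                  # unit average symbol energy

print(symbols[:4])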
36

Abdul, Gaffar Altaf Munawar. „Bit-width optimisation for arithmetic hardware“. Thesis, Imperial College London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.420977.

37

Ng, Chiu-wa, und 吳潮華. „Bit-stream signal processing on FPGA“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B41633842.

38

Bicket, John C. (John Charles). „Bit-rate selection in wireless networks“. Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34116.

Annotation:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 49-50).
This thesis evaluates bit-rate selection techniques to maximize throughput over wireless links that are capable of multiple bit-rates. The key challenges in bit-rate selection are determining which bit-rate provides the most throughput and knowing when to switch to another bit-rate that would provide more throughput. This thesis presents the SampleRate bit-rate selection algorithm. SampleRate sends most data packets at the bit-rate it believes will provide the highest throughput. SampleRate periodically sends a data packet at some other bit-rate in order to update a record of that bit-rate's loss rate. SampleRate switches to a different bit-rate if the throughput estimate based on the other bit-rate's recorded loss rate is higher than the current bit-rate's throughput. Measuring the loss rate of each supported bit-rate would be inefficient because sending packets at lower bit-rates could waste transmission time, and because successive unicast losses are time-consuming for bit-rates that do not work. SampleRate addresses this problem by only sampling at bit-rates whose lossless throughput is better than the current bit-rate's throughput. SampleRate also stops probing at a bit-rate if it experiences several successive losses. This thesis presents measurements from indoor and outdoor wireless networks that demonstrate that SampleRate performs as well or better than other bit-rate selection algorithms.
SampleRate performs better than other algorithms on links where all bit-rates suffer from significant loss.
by John C. Bicket.
S.M.
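
A compact sketch of the SampleRate decision rule summarized above; the loss smoothing, the four-failure cut-off and the 10% sampling probability are illustrative assumptions rather than the thesis's exact parameters.

# Hedged sketch of SampleRate-style bit-rate selection.
import random

class SampleRate:
    def __init__(self, rates):
        self.rates = sorted(rates)               # supported bit-rates, e.g. in Mbit/s
        self.loss = {r: 0.0 for r in rates}      # smoothed loss estimate per rate
        self.fails = {r: 0 for r in rates}       # successive failures per rate
        self.current = max(rates)

    def throughput(self, rate):
        return rate * (1.0 - self.loss[rate])

    def pick_rate(self, sample_prob=0.1):
        # occasionally probe another rate, but only if its lossless throughput
        # could beat the current rate and it has not failed repeatedly
        if random.random() < sample_prob:
            candidates = [r for r in self.rates
                          if r > self.throughput(self.current) and self.fails[r] < 4]
            if candidates:
                return random.choice(candidates)
        return self.current

    def report(self, rate, success):
        self.loss[rate] = 0.9 * self.loss[rate] + 0.1 * (0.0 if success else 1.0)
        self.fails[rate] = 0 if success else self.fails[rate] + 1
        self.current = max(self.rates, key=self.throughput)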
39

Lam, S. C. K. „Gallium arsenide bit-serial integrated circuits“. Thesis, University of Edinburgh, 1990. http://hdl.handle.net/1842/11027.

Annotation:
Bit-serial architecture and Gallium Arsenide have essentially been mutually exclusive fields in the past. Digital Gallium Arsenide integrated circuits have increasingly adopted the conventional approach of bit-parallel structures that do not always suit the properties and problems of the technology. This thesis proposes an alternative by using a least-significant-bit-first bit-serial architecture, and presents a group of 'cells' designed for signal processing applications. The main features of the cells include the extensive use of pseudo-dynamic latches for pipelining, modularity, and programmability. The logic circuits are mainly based on direct-coupled FET logic. They are also compatible with silicon ECL circuits. The target clock rate for these cells is 500 MHz, at least ten times faster than previous silicon bit-serial circuits. The differences between GaAs and silicon technologies meant that the cells were designed from circuit level upwards. Further to these cells, a multi-level signaling scheme has been developed to substantially alleviate off-chip signaling. Synchronisation between signals is simplified, improving even further on the conventional bit-serial system, especially at the high bit-rates encountered in GaAs circuits. For on-chip signals, a single-phase clock scheme has been developed for the GaAs cells, which maintains the low clock loading and high-speed characteristics of the pseudo-dynamic cells, while substantially simplifying clock distribution and generation. Two novel latch designs are proposed for this scheme. Test results available have already proved the concepts behind the clocking scheme, the latches, and the multi-level scheme. Further tests are taking place to establish their speed performance.
40

Moen, Selmer, and Charles Jones. „BIT RATE AGILITY FOR EFFICIENT TELEMETRY“. International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/606754.

Annotation:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The Bit Rate Agile Onboard Telemetry Formatting (BRAOTF) system was developed by Killdeer Mountain Manufacturing to address increasing demands on the efficiency of telemetry systems. The BRAOTF thins and reorders data streams, adjusting the bit rate of a pulse code modulation (PCM) stream using a bit-locked loop to match the desired information rate exactly. The BRAOTF accomplishes the adjustment in hardware, synthesizing a clock whose operating frequency is derived from the actual timing of the input format. Its firmware manages initialization and error management. Testing has confirmed that the BRAOTF implementation meets its design goals.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Cubiss, Christopher. „Low bit-rate image sequence coding“. Thesis, University of Edinburgh, 1994. http://hdl.handle.net/1842/13506.

Der volle Inhalt der Quelle
Annotation:
Digital video, by its very nature, contains vast amounts of data. Indeed, the storage and transmission requirements of digital video frequently far exceed practical storage and transmission capacity, and much research has therefore been dedicated to developing compression algorithms for digital video. This research has recently culminated in the introduction of several standards for image compression. The CCITT H.261 and Moving Picture Experts Group (MPEG) standards both target full-motion video and are based upon a hybrid architecture which combines motion-compensated prediction with transform coding. Although motion-compensated transform coding has been shown to produce reconstructed images of reasonable quality, it has also been shown that as the compression ratio is progressively increased the quality of the reconstructed image rapidly degrades. The reasons for this degradation are twofold: firstly, the transform coder is optimised for encoding real-world images, not prediction errors; and secondly, the motion-estimation and transform-coding algorithms both decompose the image into a regular array of blocks, which, as the coding distortion is progressively increased, results in the well-known 'blocking' effect. The regular structure of this coding artifact makes the error particularly disturbing. This research investigates motion estimation and motion-compensated prediction with the aim of characterising the prediction error so that more appropriate spatial coding algorithms can be chosen. Motion-compensated prediction was considered in detail. Simple theoretical models of the prediction error were developed, and it was shown that, for sufficiently accurate motion estimates, motion-compensated prediction can be considered as a non-ideal spatial band-pass filtering operation. Rate-distortion theory was employed to show that the inverse spectral flatness measure of the prediction error provides a direct indication of the expected coding gain of an optimal hybrid motion-compensated prediction algorithm.
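For reference, the quantity the abstract invokes is conventionally defined as follows (a standard rate-distortion-theory form; the thesis may use a different normalisation). For a prediction error $e$ with power spectral density $S_e(e^{j\omega})$, the spectral flatness measure is

$$
\gamma_e^2 \;=\; \frac{\exp\!\left(\dfrac{1}{2\pi}\displaystyle\int_{-\pi}^{\pi}\ln S_e\!\left(e^{j\omega}\right)d\omega\right)}{\dfrac{1}{2\pi}\displaystyle\int_{-\pi}^{\pi} S_e\!\left(e^{j\omega}\right)d\omega}, \qquad 0 < \gamma_e^2 \le 1,
$$

and the maximum high-rate coding gain of an optimal predictive or transform coder over direct PCM is approximately the inverse spectral flatness measure, $G_{\max} \approx 1/\gamma_e^2$. A prediction error that is already white ($\gamma_e^2 = 1$) therefore leaves no further gain for the spatial coder, which is why characterising the spectrum of the motion-compensated prediction error matters.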
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Carlson, John R. „Specifying and Evaluating PCM Bit Synchronizers“. International Foundation for Telemetering, 1989. http://hdl.handle.net/10150/614626.

Der volle Inhalt der Quelle
Annotation:
International Telemetering Conference Proceedings / October 30-November 02, 1989 / Town & Country Hotel & Convention Center, San Diego, California
As we enter the 1990s, PCM bit synchronizers continue to be of major importance to data recovery systems. This paper explains the specification of PCM bit synchronizers and provides insight into real-world performance requirements and verification methods. Topics include theoretical bit error ratio for wideband versus prefiltered data, probability of cycle slip, jitter, transition density, and transition gaps. The merits of multiple and/or adaptive loop bandwidths, input signal dynamic range, and embedded Viterbi decoders are also discussed. The emphasis is on new high-data-rate applications, but the concepts apply to the specification of bit synchronizers in general.
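As a hedged illustration of the kind of "theoretical bit error ratio" such a specification is measured against, the snippet below evaluates the textbook matched-filter bound for ideal NRZ-L PCM in additive white Gaussian noise, BER = Q(sqrt(2·Eb/N0)). Synchronizer specifications typically quote implementation loss relative to a curve of this kind; none of the numbers here come from the paper itself.

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def theoretical_ber(ebno_db):
    """Matched-filter BER bound for NRZ-L at a given Eb/N0 in dB."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return q_function(math.sqrt(2.0 * ebno))

for ebno_db in (4, 6, 8, 10):
    print(f"Eb/N0 = {ebno_db:2d} dB -> ideal BER = {theoretical_ber(ebno_db):.2e}")
```

A real bit synchronizer is then specified, for example, as achieving a BER within some fraction of a dB of this curve over a stated input dynamic range and transition density.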
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

CHAKRABORTY, S. K., und R. K. RAJANGAM. „PROGRAMMABLE HIGH BIT RATE FRAME SYNCHRONISER“. International Foundation for Telemetering, 1989. http://hdl.handle.net/10150/614490.

Der volle Inhalt der Quelle
Annotation:
International Telemetering Conference Proceedings / October 30-November 02, 1989 / Town & Country Hotel & Convention Center, San Diego, California
The first Indian Remote Sensing Satellite was launched on 17 March 1988 from a Soviet cosmodrome into a 904 km polar sun-synchronous orbit. Data transmission from the satellite is at 5.2 Megabits/sec in S-band and 10.4 Megabits/sec in X-band. The payload data is formatted into a custom 8328-word format. A unique, programmable and versatile frame synchronisation and decommutation unit has been developed to test the data from the data handling system during its various phases of development. The system works at up to 50 Megabits/sec and can handle a frame sync code length of up to 128 bits and a frame length of 2^20 bits. Provision has been made for programming the allowable bit errors as well as bit slippages using a front-panel setting. This paper describes the design and implementation of this high-bit-rate frame synchroniser, developed specially for the IRS spacecraft application, and highlights the performance of the system.
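The programmable error tolerance mentioned above amounts to a correlation search with a Hamming-distance threshold. The following is a minimal, hypothetical Python sketch of that idea; the sync pattern, lengths and threshold are invented for illustration and are not the IRS values, and bit-slippage windows and the hardware implementation are omitted.

```python
def find_sync(bits, sync_word, max_errors):
    """Return indices where sync_word matches bits with <= max_errors mismatches."""
    hits = []
    n = len(sync_word)
    for i in range(len(bits) - n + 1):
        # Hamming distance between the sync word and the current window.
        errors = sum(b != s for b, s in zip(bits[i:i + n], sync_word))
        if errors <= max_errors:
            hits.append(i)
    return hits

# Example: a 16-bit sync word with one allowable bit error.
sync = [1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
stream = [0, 1, 0] + sync + [1, 0, 1, 1]
stream[5] ^= 1                                  # inject a single bit error inside the sync word
print(find_sync(stream, sync, max_errors=1))    # -> [3]
```

In hardware the same comparison is done on the fly against the serial stream at the full bit rate, with the allowable error count and slippage window loaded from front-panel settings.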
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Butler, Madeline J., und Calvin L. James. „Single Chip Fixed Frequency Bit Synchronizer“. International Foundation for Telemetering, 1989. http://hdl.handle.net/10150/614687.

Der volle Inhalt der Quelle
Annotation:
International Telemetering Conference Proceedings / October 30-November 02, 1989 / Town & Country Hotel & Convention Center, San Diego, California
Over the past several years, programmable logic devices have become a very attractive alternative to the application-specific Very Large Scale Integration (VLSI) design approach. This trend is mainly due to their low cost and short design-to-production cycle time. This paper describes a single-chip, fixed-frequency suboptimum bit synchronizer design implemented using a programmable logic device. The bit synchronizer presented here is modeled after a Digital Transition Tracking Loop (DTTL) for symbol estimation and employs a first-order Incremental Phase Modulator (IPM) for closed-loop symbol synchronization. Although the material presented focuses on square-wave subcarriers, with the appropriate modifications the synchronizer will also process NRZ symbols. The bit error rate (BER) and tracking performance are modeled and compared to optimum designs. The bit synchronizer was developed for the Space Transportation System program under contract NAS5-27600 for meteorological data evaluation from the European Space Agency's (ESA) METEOSAT spacecraft.
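For readers unfamiliar with the structure named in the abstract, the sketch below is a much-simplified, hypothetical model of a DTTL with a first-order IPM-style correction: an in-phase integrate-and-dump arm makes the symbol decision, a mid-phase arm straddling the estimated symbol boundary measures the timing error on transitions, and the loop nudges the sampling phase. Gains, sample counts and the demo waveform are choices made here for the example, not the paper's design.

```python
import random

def dttl_track(samples, sps, gain=0.05, n_symbols=200):
    """Track symbol timing on an oversampled, antipodal NRZ waveform.

    samples - real-valued waveform with `sps` samples per symbol
    returns - (symbol decisions, final timing-offset estimate in samples)
    """
    phase = 0.0                  # estimated timing offset, in samples
    decisions = []
    prev_decision = 0
    half = sps // 2
    for k in range(1, n_symbols):
        start = int(round(k * sps + phase))        # estimated start of symbol k
        symbol = samples[start:start + sps]
        if start < half or len(symbol) < sps:
            break
        # In-phase arm: integrate over the whole symbol -> hard data decision.
        decision = 1 if sum(symbol) >= 0 else -1
        # Mid-phase arm: integrate a half-symbol window centred on the boundary
        # between symbol k-1 and symbol k.
        boundary = samples[start - half // 2:start + half // 2]
        # Only transitions carry timing information; the boundary integral is
        # then proportional to the timing error.
        transition = (prev_decision - decision) / 2.0
        error = transition * sum(boundary)
        phase += gain * error                      # first-order IPM-style update
        prev_decision = decision
        decisions.append(decision)
    return decisions, phase

# Demo: NRZ waveform, 8 samples per symbol, true timing offset of 2 samples.
# The sample-quantised loop settles within roughly half a sample of the truth.
random.seed(1)
sps, offset = 8, 2
bits = [random.choice((-1, 1)) for _ in range(220)]
wave = [0.0] * offset + [b for b in bits for _ in range(sps)]
_, estimate = dttl_track(wave, sps)
print(f"estimated offset ~ {estimate:.1f} samples (true offset {offset})")
```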
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Begley, Mary. „Faces on a Bit of Ivory“. UNF Digital Commons, 1990. http://digitalcommons.unf.edu/etd/94.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Gimenes, Roseli. „Literatura brasileira: do átomo ao bit“. Pontifícia Universidade Católica de São Paulo, 2016. https://tede2.pucsp.br/handle/handle/19509.

Der volle Inhalt der Quelle
Annotation:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The main aim of this work is to trace the development of Brazilian literature, considering principally, from the nineteenth century onwards, the so-called literature of invention, such as Memórias Póstumas de Brás Cubas by Machado de Assis. It analyses diachronically and synchronically the works of Gregório de Matos during the Baroque period, Tomás Antônio Gonzaga during Arcadismo, and Oswald de Andrade during Modernism, and, in the contemporary scene, the book A festa by Ivan Angelo in terms of its interactivity within the printed book, to finally focus on the cyberliterature of Clarah Averbuck and the concrete and digital poetry of Augusto de Campos. Looking for the signs of printed and digital narratives in Brazilian literature leads to conclusions that reveal a poetry of invention and interaction, the influence of concrete poetry on cyberliterature, and the marks of concretism in digital hyper-short stories. As regards the teaching of literature, including in distance education, the aim is to observe the cognitive profile of students of Portuguese at university level in private institutions and their relationship with printed and digital narratives when studying Brazilian literature. One of the hypotheses of this work concerns the relevance of poetic works created by computers on the basis of artificial intelligence. Such work already foreshadows robots that tell stories, participate in scientific activities and win chess matches, while creating meaningful narratives that may eventually substitute for poetic works, among other human actions and emotions. This part of the research drew on theoreticians such as John Searle, who with his metaphorical Chinese Room experiment argued against strong artificial intelligence, and Roger Schank, who, through the observation of children and in opposition to Searle's theories, believes in the learning process carried out by machines. The theoretical readings that underpinned the present research and its conclusions on literature and the new technologies were, in particular, the works of Lucia Santaella and her concepts of "extended literature" and literature on the web, the work of contemporary literary critics such as Haroldo de Campos and Augusto de Campos, and, for the analysis of printed works prior to cyberliterature, the already canonical works of literary theory by Alfredo Bosi, Antonio Cândido and Marisa Lajolo (who moves between the printed and the digital), among others. It is to be hoped that the finding that creation, invention and interaction are inherent to poetic literary works will inspire teachers of Brazilian literature in their own inventive, creative and interactive interpretations in class, in turn leading their students to explore social media, blogs, magazines and literary sites not only in search of entertainment but also of knowledge.
O objetivo deste trabalho é colocar em destaque o percurso da literatura brasileira considerando escrituras, principalmente, a partir do século XIX, de caráter de invenção, como as Memórias Póstumas de Brás Cubas, de Machado de Assis, analisando diacrônica e sincronicamente obras e autores como Gregório de Matos, no Barroco, Tomás Antônio Gonzaga, no Arcadismo, e Oswald de Andrade, no Modernismo, observando qualidades de literatura de invenção; passando ao contemporâneo pela obra A festa, de Ivan Ângelo, apostando em sua interatividade ainda em livro impresso, à ciberliteratura de Clarah Averbuck, nas redes sociais, e à poesia concreta e digital de Augusto de Campos, aliada às novas tecnologias digitais. A buscar indagações sobre estilos de literatura impressa e digital, chegam-se a resultados que apontam criação poética de invenção e interatividade na literatura brasileira, da influência da poesia concreta aos fazeres da poesia ciberliterária, assim como de marcas do concretismo nos hipercontos digitais. Ao lado de questões acerca do ensinoaprendizagem, inclusive no ensino a distância de literatura brasileira, procuramos observar o perfil cognitivo dos alunos de cursos de Letras de instituições privadas e suas relações com o mundo impresso e digital quando trabalham a literatura brasileira. O trabalho também tenta alcançar a hipótese da criação poética feita por computadores a partir da inteligência artificial que já se prenuncia em instigantes trabalhos de robôs que contam histórias, participam de ações científicas e que ganham partidas de xadrez, mas que também constroem o sentido de que poderão substituir as criações poéticas, entre outras ações e emoções humanas, partindo de teorias como as de John Searle, que com sua metafórica experiência O quarto chinês argumenta desfavoravelmente à inteligência artificial forte, e Roger Schank que também com experiências na observação de crianças, contrário a Searle, aposta na aprendizagem pelas máquinas. As leituras teóricas que propiciaram as indagações e os resultados sobre literatura e novas tecnologias partiram, notadamente, das obras de Lucia Santaella a respeito de “literatura expandida”, de literatura nas redes sociais, assim como no apoio de contemporâneos da teoria literária como Haroldo de Campos e Augusto de Campos, sem deixar de percorrer os cânones dessa teoria literária para a análise de obras impressas e anteriores à ciberliteratura, como Alfredo Bosi, Antônio Candido e Marisa Lajolo- que navega entre o impresso e o digital-, entre outros. A descoberta de que a criação, a invenção e a interatividade são motes das obras poéticas literárias, esperamos, possa incentivar o trabalho de professores em suas análises também inventivas, criativas e interativas em suas aulas de literatura brasileira, incentivando seus alunos a perscrutarem os caminhos das redes sociais não apenas em busca de entretenimento, mas também de estudo em blogues, revistas e sites literários
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Adler, W. Alexander III. „Testing and Understanding Screwdriver Bit Wear“. Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36701.

Der volle Inhalt der Quelle
Annotation:
This thesis is focused on gaining a better understanding of how to design and test Phillips screwdriver bits. Wear is the primary concern in applications where the bit is used in a power driver; such applications include drywalling, decking, and other construction and home projects. To pursue an optimal design, designers must understand how the bit geometry changes with wear. To make use of the geometrical data, the designer must also understand the fundamentals of the bit/screw surface contact and its effect on force distribution. This thesis focuses on three areas. First, understanding how the tool and bit are used and what factors contribute to bit wear; with this understanding, a test rig has been designed to emulate typical users and, in doing so, reproduce the factors that cause wear. Second, there must be a means to analyze geometric changes in the bit as it wears; a method for doing this was developed and demonstrated for a Phillips bit, but the process can be applied to other bits. Finally, the fundamentals of surface contact must be understood in order to apply the geometrical information obtained to improved bit design.
Master of Science
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Liu, Chun Hung. „Bit-depth expansion and tone mapping /“. View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?ECED%202008%20LIU.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Ng, Chiu-wa. „Bit-stream signal processing on FPGA“. Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B41633842.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Boppana, Naga Venkata Vijaya Krishna. „16-bit Digital Adder Design in 250nm and 64-bit Digital Comparator Design in 90nm CMOS Technologies“. Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1420674477.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen

Zur Bibliographie