Dissertations / Theses on the topic 'Generic decoding'


Consult the top 24 dissertations / theses for your research on the topic 'Generic decoding.'


1

Florjanczyk, Jan. "The locking-decoding frontier for generic dynamics." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=106400.

Abstract:
The intuition that the amount of classical correlations between two systems should be bounded by their size does not hold true in general for quantum states. In the setting of information locking, measurements on a pair of quantum systems that appear to be completely uncorrelated can become maximally correlated with a small increment in the size of one of the systems. A new information locking scheme based on generic unitary channels is presented, and a strengthened definition of locking based on a measure of indistinguishability is used. The new definition demonstrates that classical information can be kept arbitrarily low until it can be completely decoded. Unlike previous locking results, non-uniform input messages are allowed and shared entanglement between the pair of quantum systems is considered. Whereas past locking results relied on schemes with an explicit "key" register, this requirement is eliminated in favor of an arbitrary quantum subsystem. Furthermore, past results considered only projective measurements at the receiver; here locking effects can be shown even in the case where the receiver is armed with the most general type of measurement. The locking effect is found to be generic and finds applications in entropic security and in models for black hole evaporation.
2

Mahmudi, Ali. "The investigation into generic VHDL implementation of generalised minimum distance decoding for Reed Solomon codes." Thesis, University of Huddersfield, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.417302.

Abstract:
This thesis is concerned with the hardware implementation in VHDL (VHSIC Hardware Description Language) of a Generalised Minimum Distance (GMD) decoder for Reed-Solomon (RS) codes. The generic GMD decoder has been implemented for Reed-Solomon codes over GF(2^8). It works for a number of RS codes: RS(255, 239), RS(255, 241), RS(255, 243), RS(255, 245), RS(255, 247), RS(255, 249), and RS(255, 251). As a comparison, a Hard Decision Decoder (HDD) using the Welch-Berlekamp algorithm for the same RS codes was also implemented. The designs were first implemented in MATLAB, then written in VHDL with the Altera Stratix EP1S25-B672C6 Field Programmable Gate Array (FPGA) as the target device. The GMD decoder achieved an internal clock speed of 66.29 MHz with RS(255, 251), down to 57.24 MHz with RS(255, 239). In the case of the HDD, internal clock speeds ranged from 112.01 MHz with RS(255, 251) down to 86.23 MHz with RS(255, 239). It is concluded that the GMD decoder needs considerably more hardware than the HDD: as little as 35% extra in the case of the RS(255, 251) decoder, but 100% extra for the RS(255, 241) decoder. Where there is a choice of which RS code to use, the HDD is preferable to the GMD decoder. In practice, however, the RS code is usually fixed by a standard, so using the GMD decoder is one way to enhance decoding performance.
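The trade-off between the two decoders can be made concrete with standard Reed-Solomon facts (textbook relations, not code from the thesis): an (n, k) RS code is MDS with minimum distance d = n - k + 1, a hard-decision decoder corrects up to (d - 1) // 2 symbol errors, and GMD decoding succeeds whenever 2e + s < d for e errors and s erasures. A minimal Python sketch:

```python
def rs_params(n, k):
    """Minimum distance and hard-decision error-correcting radius of an
    (n, k) Reed-Solomon code (MDS, so d = n - k + 1)."""
    d = n - k + 1
    t = (d - 1) // 2          # HDD corrects up to t symbol errors
    return d, t

def gmd_correctable(e, s, d):
    """GMD decoding succeeds when 2 * errors + erasures < d."""
    return 2 * e + s < d

# The seven codes supported by the generic decoder described above:
capability = {k: rs_params(255, k) for k in (239, 241, 243, 245, 247, 249, 251)}
```

For instance RS(255, 239) has d = 17, so the HDD stops at 8 symbol errors, while GMD can also handle mixes such as 6 errors plus 4 erasures.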
3

Leuschner, Jeff. "A new generic maximum-likelihood metric expression for space-time block codes with applications to decoding." Thesis, Kingston, Ont. : [s.n.], 2007. http://hdl.handle.net/1974/633.

4

Shi, Aishan. "Decoding the Genetic Code: Unraveling the Language of Scientific Paradigms." Thesis, The University of Arizona, 2013. http://hdl.handle.net/10150/297762.

Abstract:
Scientific revolutions have not only significantly broadened our knowledge of underlying physical laws and natural patterns, but also shifted the cultural paradigm through which science is understood and practiced. These paradigm shifts, as Thomas Kuhn called them, are facilitated through changes in language, because language is the only method of articulating, and thereby establishing, truth, according to Friedrich Nietzsche and Michel Foucault. Steven Shapin analyzed the progression of these linguistic changes in global scientific revolutions, and Bruno Latour categorized them in the local laboratory setting. One of the most recent revolutions in science, the discovery of the double helix by James Watson and Francis Crick, altered the understanding and application of genetics. James Watson's personal account of this discovery, The Double Helix, sets out to change the general perception of science from intellectual labor to innovative play. However, he does not portray his discovery with the scientific elegance it deserves as the culmination of nearly a century of research and the marriage of quantum physics and biology. This thesis explores the paradigm shifts that developed with each scientific revolution, how they led to the double helix, and finally, the paradigm shift of the "Structure-Function Relationship" that accompanied this discovery.
5

Halsteinli, Erlend. "Real-Time JPEG2000 Video Decoding on General-Purpose Computer Hardware." Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8996.

Abstract:

There is widespread use of compression in multimedia content delivery, e.g. within video-on-demand services and on transport links between live events and production sites. The content must undergo compression prior to transmission in order to deliver high-quality video and audio over most networks; this is especially true for high-definition video content. JPEG2000 is a recent image compression standard and a suitable compression algorithm for high-definition, high-rate video. With its highly flexible embedded lossless and lossy compression scheme, JPEG2000 has a number of advantages over existing video codecs. The only evident drawbacks with respect to real-time applications are that its computational complexity is quite high and that JPEG2000, being an image codec rather than a video codec, typically has higher bandwidth requirements. Special-purpose hardware can deliver high performance, but is expensive and not easily updated. A JPEG2000 decoder application running on general-purpose computer hardware can complement solutions that depend on special-purpose hardware, and its performance scales with the available processing power. In addition, production costs are nonexistent once the software is developed. The application implemented in this project is a streaming media player. It receives a compressed video stream through an IP interface, decodes it frame by frame, and presents the decoded frames in a window. The decoder is designed to take better advantage of the processing power available in today's desktop computers. Specifically, decoding is performed on both CPU and GPU in order to decode at least 50 frames per second of a 720p JPEG2000 video stream. The CPU-executed part of the decoder application is written in C++, based on the Kakadu SDK, and involves all decoding steps up to and including the reverse wavelet transform.
The GPU-executed part of the decoder is implemented in the CUDA programming language and includes luma upsampling and the irreversible color transform. Results indicate that general-purpose computer hardware today can easily decode JPEG2000 video at bit rates up to 45 Mbit/s. However, when the video stream is received at 50 fps through the IP interface, packet loss at the socket level limits the attained frame rate to about 45 fps at rates of 40 Mbit/s or lower. If this packet loss could be eliminated, real-time decoding would be obtained up to 40 Mbit/s. At rates above 40 Mbit/s, the attained frame rate is limited by decoder performance rather than packet loss. Higher codestream rates should be attainable if the reverse wavelet transform could be moved from the CPU to the GPU, since the current pipeline is highly unbalanced.
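The rate figures in the abstract translate into a simple per-frame bit budget; a quick sanity check (the raw 720p 4:2:0 comparison is our illustration, not a figure from the thesis):

```python
def bits_per_frame(rate_mbit_s, fps):
    """Bit budget available to each frame at a given stream and frame rate."""
    return rate_mbit_s * 1_000_000 / fps

# At the 45 Mbit/s ceiling reported for 50 fps decoding, each frame
# gets 900 kbit (about 112.5 kB of JPEG2000 codestream).
budget = bits_per_frame(45, 50)

# For scale (hypothetical comparison): raw 720p 4:2:0 at 8 bits per sample.
raw_frame_bits = 1280 * 720 * 1.5 * 8
compression_ratio = raw_frame_bits / budget
```

Under these assumptions the codec must deliver roughly 12:1 compression to stay inside the real-time budget.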

6

Kessy, Regina. "Decoding the donor gaze : documentary, aid and AIDS in Africa." Thesis, University of Huddersfield, 2014. http://eprints.hud.ac.uk/id/eprint/23747/.

Abstract:
The discourse of 'the white man's burden' that originated in the nineteenth century with missionaries and colonialism still underpins much of the development ideology towards Africa today. The overwhelming assumption that rich Western countries can and should address 'underdevelopment' through aid only stigmatizes African reality, framing it to mirror the worldview of the international donors who fund most non-profit interventionist documentaries. In the 'parachute filmmaking' style that results, facilitated by financial resources and reflecting the self-serving intentions of the donors, the non-profit filmmaker functions simply as an agent of meaning rather than the authentic author of the text. Challenged by limited production schedules and lacking in cultural understanding, most donor-sponsored films fall back on an ethnocentric one-size-fits-all template of an 'inferior other' who needs to be 'helped'. This study sets out to challenge the 'donor gaze' in documentary films which 'speak about' Africa, arguing instead for a more inclusive style of filmmaking that gives voice to its subjects by 'speaking with' them. The special focus is on black African women, whose images are used to signify helplessness, vulnerability and ignorance, particularly in donor-funded documentaries addressing HIV/AIDS. Through case studies of four films, this study asks: 1. How do documentary films reinforce the donor gaze? (how is the film speaking and why?) 2. Can the donor gaze be challenged? (should intentionality always override the subjectivity of the filmed subjects?) Film studies approach the gaze psychoanalytically (e.g. Mulvey 1975), but this study focuses on the conscious gaze of filmmakers, because they reinforce or challenge 'the pictures in our heads.' Sight is an architect of meaning. Gaze orders reality, but the documentary gaze can re-order it.
The study argues that in Africa, the ‘donor gaze’ constructs meaning by ‘speaking about’ reality and calls instead for a new approach for documentary to ‘speak with’ reality.
7

Al-Wasity, Salim Mohammed Hussein. "Application of fMRI for action representation : decoding, aligning and modulating." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/30761/.

Abstract:
Functional magnetic resonance imaging (fMRI) is an important tool for understanding the neural mechanisms underlying human brain function. Understanding how the human brain responds to stimuli, how different cortical regions represent information, and whether these representational spaces are shared across brains is critical for our understanding of how the brain works. Recently, multivariate pattern analysis (MVPA) has grown in importance for predicting mental states from fMRI data and for detecting coarse- and fine-scale neural responses. However, a major limitation of MVPA is the difficulty of aligning features across brains due to the high variability in subjects' responses, and hence MVPA has generally been used as a subject-specific analysis. Hyperalignment solves this problem of feature alignment across brains by mapping neural responses into a common model to facilitate between-subject classification. Another technique of growing importance in understanding brain function is real-time fMRI neurofeedback, which can be used to enable individuals to alter their own brain activity. It allows people to learn control of cognitive processes, such as motor control and pain, by learning to modulate their brain activation in targeted regions. The aim of this PhD research is to decode and align the motor representations of multi-joint arm actions based on different modalities of motor simulation, for instance Motor Imagery (MI) and Action Observation (AO), using functional Magnetic Resonance Imaging (fMRI), and to explore the feasibility of using real-time fMRI neurofeedback to alter these action representations. The first experimental study of this thesis was performed on able-bodied participants to align the neural representations of multi-joint arm actions (lift, knock and throw) during MI tasks in the motor cortex using hyperalignment.
Results showed that hyperalignment affords statistically higher between-subject classification (BSC) performance than anatomical alignment. Hyperalignment is, however, sensitive to the order in which subjects enter the algorithm that creates the common model space. These results demonstrate the effectiveness of hyperalignment in aligning neural responses in motor cortex across subjects to enable BSC of motor imagery. The second study extended the use of hyperalignment to align fronto-parietal motor regions, addressing the problems of localization and cortical parcellation using cortex-based alignment. Representational similarity analysis (RSA) was also applied to investigate the shared neural code between AO+MI and MI for different actions. The MVPA results revealed that these actions, as well as their modalities, can be decoded using either the subject's native or the hyperaligned neural responses. Furthermore, the RSA showed that AO+MI and MI representations formed separate clusters, but that the representational organization of action types within these clusters was identical. These findings suggest that the neural representations of AO+MI and MI are neither the same nor totally distinct, but exhibit a similar structural geometry with respect to different types of action. The results also showed that MI dominates in the AO+MI condition. The third study was performed on phantom limb pain (PLP) patients to explore the feasibility of using real-time fMRI neurofeedback to down-regulate the activity of the premotor (PM) and anterior cingulate (ACC) cortices, and whether successful modulation would reduce pain intensity. Results demonstrated that PLP patients were able to gain control and decrease ACC and PM activation. These patients reported a decrease in the ongoing level of pain after training, but it was not statistically significant.
The fourth study was conducted on healthy participants to study the effectiveness of fMRI neurofeedback in improving motor function by targeting the Supplementary Motor Area (SMA). Results showed that participants learnt to up-regulate their SMA activation using MI of complex body actions as a mental strategy. In addition, behavioural changes, i.e. a shortening of motor reaction time, were found in these participants. These results suggest that fMRI neurofeedback can assist participants in developing greater control over motor regions involved in motor-skill learning, and that this can translate into an improvement in motor function. In summary, this PhD thesis extends and validates the usefulness of hyperalignment for aligning the fronto-parietal motor regions and explores its ability to generalise across different levels of motor representation. Furthermore, it sheds light on the dominant role of MI in the AO+MI condition by examining the neural representational similarity of AO+MI and MI tasks. In addition, the fMRI neurofeedback studies in this thesis provide proof of principle for using this technology to reduce pain in clinical applications and to enhance motor function in a healthy population, with the potential for translation into the clinical environment.
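The core step of hyperalignment maps one subject's voxel-space responses onto another's via an orthogonal (Procrustes) transform, solved with an SVD; the full algorithm iterates this step over many subjects to build the common model space. A simplified two-subject numpy sketch on synthetic data (our illustration, not the thesis pipeline):

```python
import numpy as np

def procrustes_map(source, target):
    """Orthogonal matrix R minimizing ||source @ R - target||_F."""
    u, _, vt = np.linalg.svd(source.T @ target)
    return u @ vt

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))                  # subject A: 20 samples x 5 voxels
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # hidden orthogonal "voxel shuffle"
Y = X @ Q                                         # subject B sees rotated responses
R = procrustes_map(X, Y)                          # recovered mapping, so X @ R matches Y
```

Because R is constrained to be orthogonal, the mapping re-expresses the same response geometry in the other subject's voxel basis without distorting it.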
8

Carrier, Kevin. "Recherche de presque-collisions pour le décodage et la reconnaissance de codes correcteurs." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS281.

Abstract:
Error-correcting codes are tools whose original function is to correct errors caused by imperfect communication channels. In a non-cooperative context, the problem arises of identifying unknown codes based solely on knowledge of noisy codewords. This problem can be difficult for certain code families, in particular LDPC codes, which are very common in modern telecommunication systems. In this thesis, we propose new techniques to recognize these codes more easily. At the end of the 1970s, McEliece had the idea of repurposing the original function of codes to use them in ciphers, thus initiating a family of cryptographic solutions that is an alternative to those based on number-theoretic problems. One of the advantages of code-based cryptography is that it seems to withstand the quantum computing paradigm, notably thanks to the robustness of the generic decoding problem. The latter has been thoroughly studied for more than 60 years. The latest improvements all rely on algorithms for finding pairs of points that are close to each other in a list: the so-called near-collision search problem. In this thesis, we improve generic decoding, in particular by proposing a new way to find close pairs. To do this, we use list decoding of Arikan's polar codes to build new fuzzy hashing functions. In this manuscript, we also deal with the search for pairs of far points. Our solution can be used to improve decoding at large distances, which has recently found applications in certain signature schemes.
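The near-collision search that underpins modern generic decoders can be illustrated with a deliberately simplified bucketing scheme: instead of comparing all pairs, words are hashed on part of their bits so that only candidates sharing a bucket are compared. This is a generic stand-in, far weaker than the polar-code fuzzy hashing proposed in the thesis:

```python
from itertools import combinations

def hamming(x, y):
    """Hamming distance between two equal-length binary words given as ints."""
    return bin(x ^ y).count("1")

def near_collisions(words, w, threshold, key_bits=8):
    """Find pairs at Hamming distance <= threshold among w-bit words by
    bucketing on the top key_bits bits and comparing only within buckets.
    A single pass misses pairs that differ inside the key; real algorithms
    repeat the bucketing over several (randomized) key positions."""
    buckets = {}
    for word in words:
        buckets.setdefault(word >> (w - key_bits), []).append(word)
    pairs = set()
    for bucket in buckets.values():
        for x, y in combinations(bucket, 2):
            if hamming(x, y) <= threshold:
                pairs.add(frozenset((x, y)))
    return pairs

a = 0x12345678
b = a ^ 0b11            # distance 2 from a, same top byte, so same bucket
c = 0xFEDCBA98
close = near_collisions([a, b, c], w=32, threshold=2)
```

The point of such filters is that the quadratic all-pairs comparison is replaced by cheap hashing plus a few in-bucket checks, which is exactly where the fuzzy hash functions of the thesis improve on naive keys.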
9

Ramis, Zaldívar Juan Enrique. "Decoding the genetic landscape of pediatric and young adult germinal center-derived B-cell non-Hodgkin lymphoma." Doctoral thesis, Universitat de Barcelona, 2021. http://hdl.handle.net/10803/672372.

Abstract:
B-cell non-Hodgkin lymphoma (B-NHL) of the pediatric and young adult population is a diverse group of neoplasms predominantly composed of aggressive B-cell lymphomas arising from the germinal center (GC). Molecular characterization of pediatric series has allowed the identification of several subtypes that predominantly occur in this age group. Despite this, the genomic features of these pediatric entities and their relationship to other B-NHL in this group of patients have not been extensively investigated. This thesis aims to address this gap in knowledge by performing a genetic and molecular characterization of large series of pediatric and young adult variants of GC-derived B-NHL, including Burkitt-like lymphoma with 11q aberration (BLL-11q), pediatric-type follicular lymphoma (PTFL), and large B-cell lymphomas (LBCL) such as diffuse large B-cell lymphoma (DLBCL), high-grade B-cell lymphoma, not otherwise specified (HGBCL, NOS), and large B-cell lymphoma with IRF4 rearrangement (LBCL-IRF4). In Study 1 we molecularly characterized a series of 11 BLL-11q cases, observing that BLL-11q differed clinically, morphologically and immunophenotypically from conventional BL and instead showed features more consistent with HGBCL or DLBCL. The genomic profile was also different from that of BL and DLBCL, with a mutational landscape characterized by the lack of typical BL mutations in the ID3, TCF3, or CCND3 genes and by recurrent specific BTG2 and ETS1 mutations, absent in BL but present in the germinal center B-cell (GCB) DLBCL subtype. All these observations suggest that BLL-11q is a neoplasm closer to other GC-derived lymphomas than to BL. In Study 2, we expanded our knowledge of the genetic alterations associated with PTFL by verifying the presence of MAP2K1 and IRF8 mutations in a previously well-characterized series of 43 PTFL.
We demonstrated the activating effect of the MAP2K1 mutations by immunohistochemical analysis, observing phosphorylation of the MAP2K1 downstream target, extracellular signal-regulated kinase, in the mutated cases. In addition, we demonstrated the specificity of the MAP2K1 and IRF8-K66R mutations, since they are absent in conventional FL and in t(14;18)-negative FL. Finally, in Study 3 we characterized a large series of LBCL, including DLBCL, HGBCL, NOS and LBCL-IRF4, through an integrative analysis of targeted next-generation sequencing, copy number, and transcriptome data. The results showed that each subgroup displays a different molecular profile. LBCL-IRF4 had frequent mutations in IRF4 and NF-κB pathway genes (CARD11, CD79B), whereas DLBCL, NOS was predominantly of the GCB-DLBCL subtype and carried gene mutations similar to its adult counterpart (e.g., SOCS1 and KMT2D). A subset of HGBCL, NOS displayed recurrent alterations of BL-related genes such as MYC, ID3, CCND3 and SMARCA4, whereas other cases were genetically closer to GCB-DLBCL. Interestingly, we could identify age-related differences in pediatric DLBCL, since pediatric and young adult cases were mainly of the GCB subtype, displayed low genetic complexity, and virtually lacked primary aberrations (BCL2, MYC and BCL6 rearrangements). Finally, we identified clinical and molecular features related to unfavorable outcome, such as age >18 years, high LDH levels, an activated B-cell (ABC) DLBCL profile, high genetic complexity, homozygous deletions of 19p13.3/TNFSF7/TNFSF9, gains of 1q21-q44/MDM4/MCL1, and TP53 mutations. Altogether, we conclude that GC-derived B-NHL of the pediatric and young adult population is a heterogeneous group of tumors comprising different entities with specific molecular profiles and clinical behavior.
This thesis has contributed to increasing our knowledge of these lymphoma entities, identifying biomarkers that may help improve their diagnosis and support the design of management strategies better adapted to their particular biological behavior.
10

Kamel, Ehab. "Decoding cultural landscapes : guiding principles for the management of interpretation in cultural world heritage sites." Thesis, University of Nottingham, 2011. http://eprints.nottingham.ac.uk/11845/.

Abstract:
Conserving the cultural significance of heritage sites, as the guardians of social unity, place identity, and national pride, plays an essential role in maintaining sustainable social development, as well as in preserving the variations that identify cultural groups and enriching the interaction between them. Consequently, and considering the importance of the built environment in communicating, as well as documenting, cultural messages, this research project, started in 2007, develops a set of guiding principles for interpretation management as a process for conserving cultural World Heritage Sites: maintaining and communicating their cultural significance by managing newly added architectural, urban, and landscape designs in such heritage sites. The research was mainly conducted to investigate and explain a widening gap between people and the cultural heritage contexts in which they reside, particularly in Egypt, and to suggest a strategy for professionals to understand such sites from a perspective that reflects public cognition. Adopting Grounded Theory methodology, the research develops a series of principles intended to guide the process of cultural heritage conservation, through a critical analysis of current heritage conservation practices in World Heritage Sites. The research shows how these guiding principles correspond to the contemporary perception of cultural heritage in the literature, for which a thorough discussion of the literature, as well as a critical analysis of UNESCO's heritage conventions and ICOMOS charters, is carried out.
The research raises, discusses, and answers several key questions concerning heritage conservation, such as: whether UNESCO’s conventions target the right heritage or not; the conflicts appearing between heritage conservation documents (conventions and charters); whether intangible heritage can be communicated through design; and the effect of Western heritage ideology on heritage conservation practices. This is carried out through the use of interpretive discourse analysis of literature and heritage documents, and personal site observations and questionnaire surveys carried out in two main World Heritage Sites: Historic Cairo in Egypt and Liverpool city in the UK. The two case studies contributed to the understanding of the general public’s perception of cultural Heritage Sites, and how such perception is reflected in current heritage conservation practices. The thesis decodes cultural World Heritage Sites into three intersecting levels: the ‘cultural significances’ (or ‘open codes’), which represent different categories under which people perceive historic urban landscapes; the ‘cultural concepts’ (or ‘axial codes’), which are considered as the objectives of heritage conservation practice, and represent the general concepts under which cultural significances influence the heritage interpretation process; and finally, the ‘interpretation strategy tactics’, the UNCAP Strategy (or the ‘selective coding’), which are the five overarching principles guiding the interpretation management process in cultural heritage sites. 
This strategy, the UNCAP (Understanding people; Narrating the story; Conserving the spirit of place; Architectural engagement; and Preserving the built heritage), developed throughout this research, is intended to help heritage site managers, curators, architects, urban designers, landscape architects, developers, and decision makers to build up a thorough understanding of heritage sites, which should facilitate the establishment of more interpretive management plans for such sites, and enhance the communication of meanings and values of their physical remains, as well as emphasizing the ‘spirit of place’; for achieving socio-cultural sustainability in the development of World Heritage Sites.
11

Wachter-Zeh, Antonia. "Decoding of block and convolutional codes in rank metric." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-01056746.

Abstract:
Rank-metric codes have recently attracted a lot of attention due to their possible applications in network coding, cryptography, space-time coding and distributed storage. An optimal-cardinality algebraic code construction in rank metric was introduced some decades ago by Delsarte, Gabidulin and Roth. This Reed-Solomon-like code class is based on the evaluation of linearized polynomials and is nowadays called Gabidulin codes. This dissertation considers block and convolutional codes in rank metric with the objective of designing and investigating efficient decoding algorithms for both code classes. After giving a brief introduction to codes in rank metric and their properties, we first derive sub-quadratic-time algorithms for operations with linearized polynomials and state a new bounded minimum distance decoding algorithm for Gabidulin codes. This algorithm directly outputs the linearized evaluation polynomial of the estimated codeword by means of the (fast) linearized Euclidean algorithm. Second, we present a new interpolation-based algorithm for unique and (not necessarily polynomial-time) list decoding of interleaved Gabidulin codes. This algorithm decodes most error patterns of rank greater than half the minimum rank distance by efficiently solving two linear systems of equations. As a third topic, we investigate the possibilities of polynomial-time list decoding of rank-metric codes in general and Gabidulin codes in particular. For this purpose, we derive three bounds on the list size. These bounds show that the behavior of the list size for both Gabidulin codes and rank-metric block codes in general differs significantly from that of Reed-Solomon codes and of block codes in Hamming metric, respectively. The bounds imply, amongst other things, that there exists no polynomial upper bound on the list size in rank metric analogous to the Johnson bound in Hamming metric, which depends only on the length and the minimum rank distance of the code.
Finally, we introduce a special class of convolutional codes in rank metric and propose an efficient decoding algorithm for these codes. These convolutional codes are (partial) unit memory codes, built upon rank-metric block codes. This structure is crucial in the decoding process since we exploit the efficient decoders of the underlying block codes in order to decode the convolutional code.
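In rank metric, codewords are matrices (or vectors over an extension field viewed as matrices) and the distance between two codewords is the rank of their difference. A minimal sketch over GF(2), with each matrix row encoded as an integer (illustrative only; Gabidulin codes actually live over extension fields GF(q^m)):

```python
def gf2_rank(rows):
    """Rank over GF(2) of a binary matrix whose rows are encoded as ints."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        lsb = pivot & -pivot          # lowest set bit of the pivot row
        rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def rank_distance(a, b):
    """d_R(A, B) = rank(A - B); over GF(2), subtraction is XOR."""
    return gf2_rank([x ^ y for x, y in zip(a, b)])

I3 = [0b100, 0b010, 0b001]            # 3x3 identity, rank 3
B  = [0b100, 0b010, 0b011]            # differs from I3 in a single row
```

Note how a single changed row alters the matrix in three Hamming positions at most but by only one unit of rank distance; this is the sense in which rank metric measures errors differently from Hamming metric.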
APA, Harvard, Vancouver, ISO, and other styles
12

Freire, Kelia Margarida Barros. "POLIMORFISMOS DO GENE DGAT1 (REGIÃO 5’UTR) EM BOVINOS NELORES (PO) E MESTIÇOS (SRD), E SUA RELAÇÃO COM A CIRCUNFERÊNCIA ESCROTAL E ABERTURA BI ISQUIÁTICA." Pontifícia Universidade Católica de Goiás, 2017. http://tede2.pucgoias.edu.br:8080/handle/tede/3714.

Full text
Abstract:
With the development of the various branches of genetic science, and the decoding of the human and animal (bovine) genomes, our production animals are being evaluated with molecular markers, which read the genome and translate it into four letters: A-T-C-G. The variations are analysed as monomorphic and polymorphic regions. The aim of this research was to evaluate the polymorphisms of the DGAT1 gene and their relation to the external bi-ischiatic opening (ABI) and the scrotal circumference (CE). A total of 109 animals were analysed, n = 73 purebred Nelore (PO) cattle and n = 36 of no defined breed (NDB), with age adjusted to 550 days, belonging to different breeders in the state of Goiás. The analysis of the morphometric data with Student's t-test, comparing the Nelore PO and NDB groups, does not indicate that the identified differences are substantively important for the variables ABI and CE (p ≤ 0.0001). In the genomic analysis, the Chi-square test was not significant (p ≥ 0.05), revealing that the Nelore PO and NDB groups do not differ in the observed genotypic and allelic frequencies. In this situation, no variations in the allelic and genotypic frequencies have been observed so far for the sampled animals and the three SNPs used in the study: rs471462296, rs456245081 and rs438495570. The genotype of the sampled cattle did not affect the evaluated characteristics, indicating that other genetic and non-genetic effects, which deserve to be investigated, affect the characteristics studied in Nelore.
APA, Harvard, Vancouver, ISO, and other styles
13

Trontin, Charlotte. "Decoding the complexity of natural variation for shoot growth and response to the environment in Arabidopsis thaliana." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00998373.

Full text
Abstract:
Genotypes adapted to contrasting environments are expected to behave differently when placed in common controlled conditions, if their sensitivity to environmental cues or intrinsic growth behaviour are set to different thresholds, or are limited at distinct levels. This allows natural variation to be exploited as an unlimited source of new alleles or genes for the study of the genetic basis of quantitative trait variation. My doctoral work focuses on analysing natural variation for shoot growth and response to the environment in A. thaliana. Natural variation analyses aim at understanding how molecular genetic or epigenetic diversity controls phenotypic variation at different scales and times of plant development and under different environmental conditions, and how selection or demographic processes influence the frequency of those molecular variants in populations as they adapt to their local environment. As such, the analysis of A. thaliana natural variation can be addressed using a variety of approaches, from genetic and molecular methods to ecological and evolutionary questions. During my PhD, I had the chance to tackle several of these aspects through my contributions to three independent projects, which have in common that they exploit A. thaliana natural variation. The first is the analysis of the pattern of polymorphism, across a set of 102 A. thaliana accessions, at the MOT1 gene, which codes for a transporter of molybdate (an essential micronutrient) and is responsible for contrasting growth and fitness among accessions in response to Mo availability in the soil. I showed at different geographical scales that the pattern of polymorphism at MOT1 is not consistent with neutral evolution and shows signs of diversifying selection. This work helped reinforce the hypothesis that in some populations, mutations in MOT1 have been selected to cope with soils rich in Mo, despite their negative effect on Mo-limited soils.
The second project consists of the characterisation and functional analysis of two putative receptor-like kinases (RLKs) identified from their effect on shoot growth specifically under mannitol-supplemented media and not in response to other osmotic constraints. The function of such RLKs in A. thaliana, which is not known to synthesize mannitol, was intriguing at first, but through different experiments we built the hypothesis that these RLKs could be activated by the mannitol produced by some pathogens, such as fungi, and participate in the plant defence response. The third project, in collaboration with Michel Vincentz's team at CBMEG (Brazil) and Vincent Colot (IBENS, Paris), consists of the analysis of the occurrence of natural epigenetic variants of the QQS gene in different populations from Central Asia and their possible phenotypic and adaptive consequences. Overall, these analyses of the genetic and epigenetic molecular variation underlying the biomass phenotype(s), in interaction with the environment, provide clues as to how, and where in the pathways, adaptation is shaping natural variation.
APA, Harvard, Vancouver, ISO, and other styles
14

Jägerskog, Mattias. "Naturligt farligt : Hur visualiseringar av klimatförändringar är laddade med tecken och känslor." Thesis, Örebro University, School of Humanities, Education and Social Sciences, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-9187.

Full text
Abstract:

The purpose of this thesis was to examine the relationship between feelings and visualizations of climate change. A case study was done on visualizations of climate change from a web page concerning climate change published by the Swedish newspaper Expressen and from the American photographer Gary Braasch’s web page “World view of global warming”. The thesis is based on the article ”Emotional anchoring and objectification in the media reporting on climate change” by Birgitta Höijer. I have been aiming to understand the feelings of fear, hope, guilt, compassion and nostalgia through semiotic theories of icon, index and symbol.

Previous research has demonstrated the difficulty of bringing the issue of climate change onto the public agenda, which is connected to the difficulty of visualizing climate change. The slow nature of climate change, hard to spot at an individual level, has been highlighted as a cause of both difficulties. Pictures and photos are in this thesis seen as the "interface" between science and the public, and hence as decoders of the science of climate change. Höijer's article about feelings has been used to understand this process of decoding.

The results show that the analyzed material can be linked to and described by the semiotic theories of icon, index and symbol. The emotional anchoring found in the material and the semiotic application have been shown to complement each other, leading to a broader understanding of the material's relationship to social cognitions. The results further demonstrate that context is essential in some of the analyzed visualizations of climate change. Generic pictures found in the material could have been regarded as icons, indices or symbols of other messages, but are, through their contexts, anchored with feelings and become visualizations of climate change. The analysis also suggests that if icons of nature can be connected with feelings, so can nature itself. The consequences are speculated to lead to objectification of nature and ecophobia. By objectifying nature and using generic pictures, the material's relationship to the concepts of "truth" and "myth" is called into question.

In conclusion, understanding of the analyzed material is best achieved through complementary use of Höijer's emotional categories and the semiotic theories of icon, index and symbol.

APA, Harvard, Vancouver, ISO, and other styles
15

Önnebro, BrittMarie. "Intensiv avkodningsträning för en vuxen andraspråkselev : En interventionsstudie på sfi." Thesis, Umeå universitet, Institutionen för språkstudier, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-131300.

Full text
Abstract:
This study analyses the effects of an intervention using Wendickmodellens Intensivläsning to practice word decoding. The focus was on whether this intervention could help adult L2 learners progress in decoding, and whether the students' phonological awareness increased during the intervention. Two adult students with word-decoding difficulties in Swedish were identified and selected to participate in this study. One student took part in the intervention (the intervention group), and the other participated in ordinary classroom activities (the control group). The intervention group received decoding practice five days a week, 20-30 minutes per session, for a total of 20 sessions over a five-week period. Both students were tested before and after the intervention. The results showed that the student participating in the intervention made progress in decoding ability on every test compared with the control student. Both students increased their phonological awareness. The conclusion of this study is that decoding practice with Wendickmodellens Intensivläsning increases decoding ability.
APA, Harvard, Vancouver, ISO, and other styles
16

Gulmez, Baskoy Ulku. "A Turbo Detection Scheme For Egprs." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1259415/index.pdf.

Full text
Abstract:
Enhanced Data Rates for Global Evolution (EDGE) is one of the 3G wireless communication standards, providing higher data rates by adopting 8-PSK modulation within the TDMA infrastructure of GSM. In this thesis, a turbo detection receiver for the Enhanced General Packet Radio Services (EGPRS) system, the packet-switched mode of EDGE, is studied. In turbo detection, equalization and channel decoding are performed iteratively. Due to the 8-ary alphabet of EGPRS modulation, full-state trellis-based equalization, as usually performed in GSM, is too complex not only for turbo detection but even for conventional equalization, so suboptimum schemes have to be considered. Delayed Decision Feedback Sequence Estimation (DDFSE) is chosen as a suboptimal, less complex trellis-based scheme, and is first examined as a conventional equalization technique. It is shown that DDFSE offers a fine tradeoff between performance and complexity and is a promising candidate for EGPRS. It is then employed to reduce the number of trellis states in turbo detection. The Max-log-MAP algorithm is used for the soft-output calculations of both the SISO equalizer and the SISO decoder. Simulation results illustrate that the proposed turbo detection structure improves the bit error rate and block error rate performance of the receiver with respect to the conventional equalization-and-decoding scheme. The iteration gain varies depending on the modulation type and coding rate of the Modulation and Coding Scheme (MCS) employed in EGPRS.
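The Max-log-MAP simplification used here replaces the exact Jacobian logarithm, max*(a, b) = max(a, b) + log(1 + e^{-|a-b|}), with a plain maximum, trading a bounded accuracy loss (at most log 2 per pairwise combination) for much cheaper soft-output computation. A small numerical sketch of this well-known approximation (my illustration, not code from the thesis):

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm: log(exp(a) + exp(b)), computed stably."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-log-MAP approximation: the correction term is dropped."""
    return max(a, b)
```

The correction term log1p(exp(-|a-b|)) is largest (log 2) when the two metrics are equal and vanishes as they separate, which is why Max-log-MAP loses little at moderate-to-high SNR.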
APA, Harvard, Vancouver, ISO, and other styles
17

Wu, Meng-Lin, and 吳孟霖. "Theory and Performance of ML Decoding for LDPC Codes Using Genetic Algorithm." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/64317738328812502720.

Full text
Abstract:
Master's degree
National Taiwan University
Graduate Institute of Communication Engineering
97
Low-density parity-check (LDPC) codes have drawn considerable attention lately due to their exceptional performance. Typical decoders operate on the belief-propagation principle. Although these decoding algorithms work remarkably well, it is generally suspected that they do not achieve the performance of ML decoding. The ML performance of LDPC codes remains unknown because efficient ML decoders have not been discovered. Although it has been proved that, for various appropriately chosen ensembles of LDPC codes, low error probability and reliable communication are possible up to channel capacity, we still want to know the actual limit for one specific code. Thus, in this thesis, our goal is to establish the ML performance. At a word error probability (WEP) of 10^{-5} or lower, we find that perturbed decoding (PD) can effectively achieve the ML performance at reasonable complexity. In the higher error probability regime, the complexity of PD becomes prohibitive. In light of this, we propose the use of gifts. Proper gifts can induce high-likelihood decoded codewords. We investigate the feasibility of using gifts in detail and discover that the complexity is dominated by the effort to identify small gifts that can pass the trigger criterion. A greedy concept is proposed to maximize the probability that a receiver produces such a gift. We also apply the concept of gifts within a genetic algorithm to find the ML bounds of LDPC codes. In the genetic decoding algorithm (GDA), chromosomes are gift sequences with some known gift bits. A conventional SPA decoder is used to assign fitness values to the chromosomes in the population. After evolution over many generations, chromosomes that correspond to decoded codewords of very high likelihood emerge. We also propose a parallel genetic decoding algorithm (P-GDA) based on the greedy concept and the feasibility study of gifts.
The most important aspect of GDA, in our opinion, is that one can utilize the ML bounding technique and GDA to empirically determine an effective lower bound on the error probability with ML decoding. Our results show that GDA and P-GDA outperform the conventional decoder by 0.1 to 0.13 dB, and the two bounds converge at a WEP of 10^{-5}. Our results also indicate that, for a practical block size of thousands of bits, the SNR-versus-error-probability relationship of LDPC codes trends smoothly in the same fashion as the sphere-packing bound. The abrupt cliff-like error probability curve is actually an artifact of the ineffectiveness of iterative decoding. If additional complexity is allowed, our methods can be applied to improve on typical decoders.
APA, Harvard, Vancouver, ISO, and other styles
18

Hsueh, Tsun-Chih. "Theory and Performance of ML Decoding for Turbo Codes using Genetic Algorithm." 2007. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-3107200702055600.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Hsueh, Tsun-Chih, and 薛存志. "Theory and Performance of ML Decoding for Turbo Codes using Genetic Algorithm." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/51236587751523493601.

Full text
Abstract:
Master's degree
National Taiwan University
Graduate Institute of Communication Engineering
95
Although yielding the lowest error probability, ML decoding of turbo codes has so far been considered unrealistic because efficient ML decoders have not been discovered. In this thesis, we propose an experimental bounding technique for ML decoding and the Genetic Decoding Algorithm (GDA) for turbo codes. The ML bounding technique establishes both lower and upper bounds for ML decoding. GDA combines the principles of perturbed decoding and the genetic algorithm. In GDA, chromosomes are random additive perturbation noises. A conventional turbo decoder is used to assign fitness values to the chromosomes in the population. After generations of evolution, good chromosomes that correspond to decoded codewords of very high likelihood emerge. GDA can be used as a practical decoder for turbo codes in certain contexts. It is also a natural multiple-output decoder. The most important aspect of GDA, in our opinion, is that one can utilize the ML bounding technique and GDA to empirically determine an effective lower bound on the error probability with ML decoding. Our results show that, at a word error probability of 10^{-4}, GDA achieves the performance of ML decoding. Using the ML bounding technique and GDA, we establish that an ML decoder only slightly outperforms a MAP-based iterative decoder at this word error probability, for the block size we used and the turbo code defined for WCDMA.
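The perturb-decode-select loop described here can be sketched in miniature. The following toy is my own illustration under stated assumptions: a trivial repetition-code hard decoder stands in for the turbo decoder, chromosomes are additive Gaussian perturbation vectors applied to the received values, fitness is the correlation (log-likelihood up to constants) of the decoded codeword with the received vector, and "evolution" is simple truncation selection plus mutation.

```python
import math
import random

def toy_decode(llrs):
    """Stand-in decoder: two (3,1) repetition blocks, hard majority per block."""
    out = []
    for i in range(0, 6, 3):
        bit = 1 if sum(llrs[i:i + 3]) >= 0 else -1
        out += [bit] * 3
    return out

def log_likelihood(codeword, received):
    # AWGN log-likelihood up to constants: correlation with the received vector.
    return sum(c * r for c, r in zip(codeword, received))

def genetic_decode(received, pop=20, gens=30, sigma=0.5, seed=0):
    """GDA-style loop: perturb the input, decode, keep the best codeword seen."""
    rng = random.Random(seed)
    population = [[rng.gauss(0, sigma) for _ in received] for _ in range(pop)]
    best_cw, best_fit = None, -math.inf
    for _ in range(gens):
        scored = []
        for chrom in population:
            cw = toy_decode([r + n for r, n in zip(received, chrom)])
            fit = log_likelihood(cw, received)
            scored.append((fit, chrom))
            if fit > best_fit:
                best_fit, best_cw = fit, cw
        scored.sort(key=lambda t: t[0], reverse=True)
        parents = [c for _, c in scored[:pop // 2]]
        # Mutation-only "evolution": survivors plus noisy copies of survivors.
        population = parents + [[g + rng.gauss(0, sigma) for g in p] for p in parents]
    return best_cw
```

The structural point is that the decoder itself is untouched; the genetic search only explores different perturbations of its input, and the highest-likelihood decoded codeword across all generations is reported.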
APA, Harvard, Vancouver, ISO, and other styles
20

Guo, Hong-Zhi, and 郭泓志. "A Two-phase Decoding Genetic Algorithm Approach for Dynamic Scheduling in TFT-LCD Array Manufacturing." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/755z3p.

Full text
Abstract:
Master's degree
National Tsing Hua University
Department of Industrial Engineering and Engineering Management
105
Due to the rapid change of the market and the decision flexibilities of intelligent manufacturing, TFT-LCD manufacturers face the challenge of serving a large number of customers with many different kinds of products. It is therefore important to enhance productivity while maintaining product quality. Because the photolithography stage is the bottleneck, this study focuses on photolithography scheduling with job arrivals. To deal with the uncertainty of arrival times, this study develops a Two-phase Decoding Genetic Algorithm (TDGA), combined with a rolling strategy, for dynamic scheduling of the photolithography stage under complex restrictions. Through the design of its chromosome, TDGA also avoids rework problems and load imbalance. For validation, TDGA is compared with a GA that has a left-shift mechanism, using empirical data from a leading TFT-LCD manufacturer in Taiwan. The experimental results show that TDGA shortens the idle time between jobs and obtains high-quality solutions with 99% machine utilization. Thus, TDGA performs better than the GA in all scenarios.
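In a scheduling GA, the "decoding" step maps a chromosome to a feasible schedule. As a hedged illustration only (the actual TDGA encoding is two-phase and handles photolithography-specific restrictions; here I assume a plain job-permutation chromosome), a minimal list-scheduling decoder for identical parallel machines:

```python
import heapq

def decode_schedule(order, proc_times, machines):
    """Decode a job-permutation chromosome into a parallel-machine schedule.

    Each job, in chromosome order, goes to the earliest-available machine.
    Returns {job: (machine, start, finish)} and the makespan.
    """
    heap = [(0.0, m) for m in range(machines)]  # (available time, machine id)
    heapq.heapify(heap)
    completion = {}
    for job in order:
        avail, m = heapq.heappop(heap)
        finish = avail + proc_times[job]
        completion[job] = (m, avail, finish)
        heapq.heappush(heap, (finish, m))
    makespan = max(f for _, _, f in completion.values())
    return completion, makespan
```

The GA then searches over permutations while the decoder guarantees every chromosome yields a feasible schedule, which is the general appeal of decoding-based encodings in shop scheduling.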
APA, Harvard, Vancouver, ISO, and other styles
21

Lu, Yi-Te, and 呂怡德. "Design of High-Performance LDPC Decoders Based on a New Genetic-Aided Message Passing Decoding Algorithm." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/88789802269066714375.

Full text
Abstract:
Master's degree
National Chiao Tung University
Institute of Electronics
100
Low-density parity-check (LDPC) codes were first introduced by Gallager and can achieve performance close to the Shannon bound. In recent years, LDPC codes have been adopted by many communication systems. Among LDPC decoding algorithms, the message-passing (MP) family already offers good decoding performance, but it does not perform as well as maximum-likelihood (ML) decoding. This thesis proposes a GA-MP algorithm based on the concepts of the genetic algorithm and the MP algorithm. The decoding performance of GA-MP decoding is close to that of ML decoding for different parity-check matrices. Moreover, because of the high computational complexity of ML decoding, the ML solution is hard to obtain in simulation. To obtain it rapidly, this thesis also proposes a fast, easily obtainable "super ML" decoding that approximates ML decoding; its performance is better than or equal to that of ML decoding, but at low computational complexity. We find that the performance of GA-MP decoding is almost identical to that of super ML decoding for the EG-LDPC (63, 37, 8, 8) code. In addition, with the number of generations set to 1000, the GA-MP algorithm gains about 0.5 dB over the original MP algorithm for the 802.16e (576, 288) code. The performance of GA-MP decoding improves as the maximum number of generations (iterations) grows, which overcomes the problem that, beyond a certain point, increasing the iteration count of an MP algorithm no longer improves decoding performance. Furthermore, no algorithm in the existing literature that approaches maximum-likelihood decoding can be implemented within an acceptable hardware area; the proposed GA-MP decoder design has the advantages of high decoding performance and low complexity. In this thesis, we implemented the GA-MP decoder for the EG-LDPC (63, 37, 8, 8) code.
Using UMC 90 nm technology with a maximum clock frequency of 200 MHz, the total gate count of the proposed design is about 379K, the power consumption is about 67.551 mW, and the decoding performance is very close to that of the floating-point version.
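For context on the MP family that GA-MP builds upon, the simplest message-passing-style scheme is Gallager's hard-decision bit flipping: compute the parity checks, count how many unsatisfied checks touch each bit, and flip the worst offenders. A minimal sketch (an illustration of the MP baseline, not the GA-MP decoder itself):

```python
def bit_flip_decode(H, word, max_iter=10):
    """Gallager-style hard-decision bit flipping for parity-check matrix H."""
    word = list(word)
    n = len(word)
    for _ in range(max_iter):
        syndromes = [sum(H[r][c] & word[c] for c in range(n)) % 2
                     for r in range(len(H))]
        if not any(syndromes):
            return word  # all parity checks satisfied
        # Count unsatisfied checks touching each bit; flip the worst offenders.
        counts = [sum(s for r, s in enumerate(syndromes) if H[r][c])
                  for c in range(n)]
        worst = max(counts)
        word = [b ^ 1 if counts[c] == worst else b for c, b in enumerate(word)]
    return word
```

Soft-decision MP decoders (sum-product, min-sum) iterate the same check/variable structure on log-likelihood ratios instead of hard bits, which is where the iteration-count saturation mentioned above arises.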
APA, Harvard, Vancouver, ISO, and other styles
22

Chen, Jian-Hong, and 陳建宏. "Algebraic Decoding Algorithm for Single-Syndrome Correctable Binary Cyclic Codes Based on General Error-Location-Checking Polynomials." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/natx2v.

Full text
Abstract:
Doctorate
I-Shou University
Ph.D. Program, Department of Information Engineering
99
Many good, and even optimal, codes belong to the class of single-syndrome correctable codes. In this dissertation, a new general decoding algorithm is proposed for these codes. We define three pre-calculated polynomials, called "error-location-checking polynomials", which are used to determine the error positions. The roots of such a polynomial are all the syndromes of the correctable error patterns with an error at a fixed position. In the decoding scheme, the error locations can be determined from the output of plugging the syndrome into the error-location-checking polynomials. However, applying this method to single-syndrome correctable codes incurs a huge computational cost for codes with large error-correcting capability. To solve this problem, a recursive relation is proposed that generates an error-location-checking polynomial from two pre-calculated error-location-checking polynomials, providing a faster computation for codes with large error-correcting capability. Moreover, for codes with error-correcting capability 2, we prove that the error-location-checking polynomials are determined by the code length alone. This provides a precise method for estimating the hardware complexity of circuit implementations for codes with large error-correcting capability. There are three existing general decoding algorithms for single-syndrome correctable codes, and their common decoding procedure is to decode with a pre-calculated polynomial. Compared with those algorithms, ours has the lowest complexity both in computing the needed pre-calculated polynomial and in circuit implementation. The decoding algorithm can therefore be used in practical applications.
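The idea of mapping a syndrome directly to error locations can be illustrated, in the simplest single-error case, by the (7,4) Hamming code, where the role of the error-location-checking polynomial is played by a precomputed syndrome table (a toy analogy of my own, not the dissertation's construction, which handles larger error-correcting capabilities):

```python
# Columns of a (7,4) Hamming parity-check matrix; the syndrome of a single
# error at position c equals column c, so a lookup table locates the error.
H_COLS = [(1, 1, 1), (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
SYNDROME_TO_POS = {col: pos for pos, col in enumerate(H_COLS)}

def correct_single_error(word):
    """Locate and flip a single bit error by table lookup on the syndrome."""
    syndrome = tuple(sum(H_COLS[c][r] & word[c] for c in range(7)) % 2
                     for r in range(3))
    word = list(word)
    if any(syndrome) and syndrome in SYNDROME_TO_POS:
        word[SYNDROME_TO_POS[syndrome]] ^= 1
    return word
```

For larger error-correcting capabilities the set of syndromes pointing at a fixed position grows combinatorially, which is exactly the cost the dissertation's recursive construction of error-location-checking polynomials is designed to tame.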
APA, Harvard, Vancouver, ISO, and other styles
23

Radovich, Milan. "DECODING THE TRANSCRIPTIONAL LANDSCAPE OF TRIPLE-NEGATIVE BREAST CANCER USING NEXT GENERATION WHOLE TRANSCRIPTOME SEQUENCING." Thesis, 2012. http://hdl.handle.net/1805/2745.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Triple-negative breast cancers (TNBCs) are negative for the expression of estrogen (ER), progesterone (PR), and HER-2 receptors. TNBC accounts for 15% of all breast cancers and results in disproportionately higher mortality compared to ER- and HER2-positive tumours. Moreover, there is a paucity of therapies for this subtype of breast cancer, resulting primarily from an inadequate understanding of the transcriptional differences that distinguish TNBC from normal breast. To this end, we embarked on a comprehensive examination of the transcriptomes of TNBCs and normal breast tissues using next-generation whole transcriptome sequencing (RNA-Seq). By comparing RNA-Seq data from these tissues, we report the presence of differentially expressed coding and non-coding genes, novel transcribed regions, and mutations not previously reported in breast cancer. From these data we have identified two major themes. First, BRCA1 mutations are well known to be associated with the development of TNBC. We have identified many dysregulated genes that work in concert with BRCA1, suggesting a role for BRCA1-associated genes in sporadic TNBC. In addition, we observe a mutational profile in genes associated with BRCA1 and DNA repair that lends further evidence to this role. Second, we demonstrate that microdissected normal epithelium may be the optimal comparator when searching for novel therapeutic targets for TNBC. Previous studies have used other controls, such as reduction mammoplasties, adjacent normal tissue, or other breast cancer subtypes, which may be sub-optimal and may have led to the identification of ineffective therapeutic targets. Our data suggest that comparing microdissected ductal epithelium to TNBC can identify potential therapeutic targets with better clinical efficacy.
In summation, with these data, we provide a detailed transcriptional landscape of TNBC and normal breast that we believe will lead to a better understanding of this complex disease.
APA, Harvard, Vancouver, ISO, and other styles
24

"Decoding Brood Pheromone: The Releaser and Primer Effects of Young and Old Larvae on Honey Bee (Apis mellifera) Workers." Doctoral diss., 2014. http://hdl.handle.net/2286/R.I.24844.

Full text
Abstract:
How a colony regulates the division of labor to forage for nutritional resources while accommodating for changes in colony demography is a fundamental question in the sociobiology of social insects. In the honey bee, Apis mellifera, brood composition impacts the division of labor, but it is unknown whether colonies adjust the allocation of foragers to carbohydrate and protein resources based on changes in the age demography of larvae and the pheromones they produce. Young and old larvae produce pheromones that differ in composition and volatility. In turn, nurses differentially provision larvae, feeding developing young worker larvae a surplus diet that is more queen-like in protein composition and food availability, while old larvae receive a diet that mimics the sugar composition of the queen larval diet but is restrictively fed instead of provided ad lib. This research investigated how larval age and the larval pheromone e-β ocimene (eβ) impact foraging activity and foraging load. Additional cage studies were conducted to determine if eβ interacts synergistically with queen mandibular pheromone (QMP) to suppress ovary activation and prime worker physiology for nursing behavior. Lastly, the priming effects of larval age and eβ on worker physiology and the transition from in-hive nursing tasks to outside foraging were examined. Results indicate that workers differentially respond to larvae of different ages, likely by detecting changes in the composition of the pheromones they emit. This resulted in adjustments to the foraging division of labor (pollen vs. nectar) to ensure that the nutritional needs of the colony's brood were met. For younger larvae and eβ, this resulted in a bias favoring pollen collection. The cage studies reveal that both eβ and QMP suppressed ovary activation, but the larval pheromone was more effective.
Maturing in an environment of young or old larvae primed bees for nursing and impacted important endocrine titers involved in the transition to foraging, so bees maturing in the presence of larvae foraged earlier than control bees reared with no brood.
Dissertation/Thesis
Ph.D. Biology 2014
APA, Harvard, Vancouver, ISO, and other styles
