Dissertations / Theses on the topic 'DEE9'


Consult the top 50 dissertations / theses for your research on the topic 'DEE9.'


1

MAZZOLENI, SARA. "SOLVING THE PUZZLE OF PROTOCADHERIN-19 MOSAICISM TO UNDERSTAND THE PATHOPHYSIOLOGY OF DEVELOPMENTAL AND EPILEPTIC ENCEPHALOPATHY 9 (DEE9)." Doctoral thesis, Università degli Studi di Milano, 2022. http://hdl.handle.net/2434/918930.

Full text
Abstract:
Developmental and Epileptic Encephalopathy 9 (DEE9) is a severe neurological disorder characterized by clustered epilepsy, intellectual disability (ID) and autism spectrum disorder (ASD) (Dibbens et al., 2008). DEE9 is caused by mutations affecting the X-linked gene PCDH19, which encodes a calcium-dependent cell-cell adhesion molecule called protocadherin-19 (PCDH19) (Dibbens et al., 2008). PCDH19 is mainly expressed in the Central Nervous System (CNS), where it is involved in cell adhesion, neuronal migration, and circuit formation (Cooper et al., 2015). Even though DEE9 is an X-linked disorder, 90% of the patients are females (Shibata et al., 2021). This peculiarity has been attributed to a cellular interference mechanism: owing to random X-chromosome inactivation, female patients have a mosaic expression of PCDH19 in the brain. This mosaicism is thought to scramble neuronal communication, promoting the onset of DEE9 features (Dibbens et al., 2018). The cellular interference hypothesis was supported by the identification of a few DEE9 male patients with PCDH19 somatic mutations (Niazi et al., 2019). However, the pathophysiological mechanisms behind DEE9 are still unclear, and the generation of animal models could help to elucidate them. In our laboratory, we generated a new conditional knock-out (cKO) mouse model for PCDH19 using Cre-LoxP technology (the Pcdh19 floxed mouse). Two different approaches were used to deliver Cre recombinase: 1) crossbreeding of Pcdh19 floxed mice with mice expressing Cre under the rat Synapsin-1 promoter, to specifically target neurons; 2) intracerebroventricular (ICV) injection into Pcdh19 floxed mice of an adeno-associated virus (AAV) expressing Cre fused to GFP. The latter approach allowed us to discriminate PCDH19-positive from PCDH19-negative neurons. After verifying in vitro the specific Cre-mediated excision and the absence of protein production due to activation of the Nonsense-Mediated Decay (NMD) system, we molecularly, functionally, and behaviorally characterized the new Pcdh19 cKO mouse model. In cortical and hippocampal tissues, Pcdh19 cKO female mice showed a 40% reduction in both PCDH19 mRNA and protein compared to control female mice. Interestingly, Pcdh19 cKO male mice were also mosaic for PCDH19 expression, most likely due to low Cre expression under the relatively weak Synapsin-1 promoter. Indeed, they displayed a 60% reduction in mRNA and protein compared to their sex-matched controls. Thus, both Pcdh19 cKO female and male mice recapitulated the PCDH19 brain mosaicism considered to be the triggering feature of DEE9. This allowed us to perform some of the analyses on both sexes, to identify a possible gender effect associated with DEE9. Pcdh19 cKO female mice were characterized by synaptic defects in the hippocampal CA1 region: they showed a reduced number of excitatory synapses, with fewer neurotransmitter vesicles and reduced post-synaptic density (PSD) thickness, compared to control female mice. Alongside these structural synaptic defects, Pcdh19 cKO female mice also presented impaired synaptic functionality, with reduced Long-Term Potentiation (LTP) and a reduced Paired-Pulse Ratio (PPR) compared to their sex-matched controls. These synaptic defects prompted us to investigate the behavioral features of Pcdh19 cKO mice. Since DEE9 is characterized by ID and ASD, we investigated these two aspects.
Pcdh19 cKO female and male mice displayed an increased number and duration of self-grooming events, suggesting an ASD-like phenotype. Moreover, Pcdh19 cKO mice of both sexes showed impairments in learning and memory, evaluated through the Morris Water Maze (MWM) test. Interestingly, the Fear Conditioning test confirmed hippocampus-related memory defects exclusively in female cKO mice, suggesting that females could be more susceptible to Pcdh19 loss. Concerning epilepsy, our Pcdh19 cKO mouse model did not show spontaneous seizures, in line with observations in the constitutive Pcdh19 KO mouse models (Pederick et al., 2016; Hoshina et al., 2021). However, Pcdh19 cKO mice displayed some hyperexcitability features at a subclinical level: PCDH19-negative neurons in the mosaic brain of Pcdh19 floxed mice were characterized by a reduced rheobase and a higher firing frequency compared to neighboring cells retaining PCDH19 expression. Moreover, Pcdh19 cKO mice showed aberrant surface expression of the GABAAR 1 subunit, pointing to possible GABAergic defects. To conclude, we generated a new Pcdh19 cKO mouse model that recapitulates Pcdh19 brain mosaicism and features of ID and ASD, as in DEE9 pathology. Besides behavioral alterations, functional and morphological synaptic defects in the hippocampus were also observed. Finally, our mouse model provided clues of a GABAergic impairment and of a possible gender effect at the basis of DEE9 pathophysiology.
2

Westphal, Robert. "Effect of unilateral neurodegeneration on brain morphology, connectivity and pharmacology in Parkinson's disease rat models : a multimodal MRI study with corroborative techniques." Thesis, King's College London (University of London), 2016. https://kclpure.kcl.ac.uk/portal/en/theses/effect-of-unilateral-neurodegeneration-on-brain-morphology-connectivity-and-pharmacology-in-parkinsons-disease-rat-models(41d9dc8b-dee9-4a64-a173-bf5bfd207941).html.

Full text
Abstract:
The unilaterally-lesioned 6-OHDA rat is one of the most commonly used experimental models of Parkinson’s disease (PD), a progressive neurodegenerative movement disorder characterized by nigral dopaminergic cell loss and striatal dopamine deficiency, which underlie many of the typical motor symptoms seen in patients. Here I investigated whether magnetic resonance imaging (MRI), a neuroimaging technology widely used in human PD, has the potential to non-invasively detect and characterize parkinsonism and monitor the effect of pharmacotherapy in the 6-OHDA rat. To this end, I used resting-state functional MRI (rsfMRI), structural MRI and pharmacological MRI, alongside a battery of corroborative methods, to morphologically, functionally, behaviourally and histologically phenotype the 6-OHDA rat. Using high-resolution three-dimensional MRI and automated voxel-based morphometry (VBM), I found grey matter volume loss in various brain areas, including the substantia nigra and the sensorimotor cortex, three weeks after 6-OHDA lesioning. The VBM results were consistent with findings reported in patients. These structural changes were associated with marked dopaminergic cell loss and cortical denervation, confirmed by post-mortem histological examination. An attempt to reverse the dopaminergic neurodegeneration using the anti-diabetic drug exendin-4 was not successful. Together with the structural brain changes in the 6-OHDA rat, I also found a functional reorganization of the resting-state network, whereby the lesioned hemisphere showed decreased overall connectivity whereas the contralateral hemisphere showed compensatory changes, as evidenced by increased functional connectivity. After administration of the non-selective dopamine agonist apomorphine in 6-OHDA rats, I found electrophysiological, behavioural and metabolic evidence of an imbalance in basal ganglia (BG) activation, which is consistent with striatal dopamine receptor supersensitivity in the denervated hemisphere and in agreement with the classic model of BG circuitry changes in PD. Focussing on the thalamus, I further demonstrated that the beneficial effect of apomorphine lies in attenuating the increased glucose utilization and in increasing neuronal synchronization. Finally, I attempted to establish another, more progressive PD rat model in our laboratory, which, unlike the 6-OHDA rat, features adeno-viral vector-induced overexpression of alpha-synuclein, a protein that accumulates in PD. I evaluated its utility for longitudinal MRI experiments to test the aforementioned biomarkers identified in 6-OHDA rats, but the alpha-synuclein model failed to show the expected time course of behavioural and atrophic brain changes. My findings support the utility of preclinical MRI to detect subtle anatomical and functional brain changes. In particular, rodent-specific whole-brain VBM and rsfMRI will be valuable techniques for in vivo measurement of developing pathology in more relevant (i.e. progressive) models of PD, and may be particularly useful for correlating early, histologically undetectable but MRI-sensitive changes with behavioural deficits. In this way, we might be able to provide valuable insights into the complex mechanisms underlying PD, thereby providing a direct link between human and rat imaging studies.
3

Allen, James Robert. "The structure, function and specificity of the Rhodobacter sphaeroides membrane-associated chemotaxis array." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:ce7de07a-dee6-471b-9f70-22714e617693.

Full text
Abstract:
Bacterial chemotaxis is the movement of bacteria towards or away from chemical stimuli in the surrounding media. Bacteria respond to chemotactic signals through chemoreceptors, which bind specific ligands and transduce signals through a modified two-component system. Typical chemoreceptors bind a ligand in the periplasm and signal across the inner membrane to the cytoplasmic chemosensory array. Bacterial chemoreceptors must integrate multiple signals, within an array of different receptor homologues, into a single output. Chemoreceptors act cooperatively, allowing rapid signal spread across the array and large signal gain. Chemoreceptors adapt to a signal by chemical modification of their cytoplasmic domains in order to respond across a wide range of effector concentrations. How bacterial chemoreceptors transduce signals through the inner membrane, integrate multiple effector responses, signal cooperatively and adapt to produce a single output signal is not yet fully understood. In Rhodobacter sphaeroides, additional complexity arises from the presence of multiple homologues of various chemotactic components, notably the array scaffold protein CheW. Decoding the signalling mechanism and the heterogeneity involved in this system is important for understanding the action of a biological system, with implications for biotechnology and synthetic biology. This study used the two model systems Escherichia coli and R. sphaeroides to analyse the mechanism of signalling through bacterial chemoreceptors. Rational design of activity-shifting chemoreceptor mutations was undertaken and these variants were analysed in phenotypic and fluorescence localisation studies. Molecular-dynamics simulations showed that an increase in chemoreceptor flexibility corresponds to a decrease in kinase output activity, as determined by computational tracking of bacteria free-swimming in media. Fluorescence recovery after photobleaching was used to show that this increase in flexibility results in a decrease in binding of receptors to their array scaffold proteins. A two-hybrid screen suggested that inter-receptor affinity is also likely to decrease. These results indicate that signalling through chemoreceptors likely operates through a mechanism involving the selective flexibility of chemoreceptor cytoplasmic domains. Analysis of R. sphaeroides chemoreceptors and CheW scaffold proteins in E. coli showed that it should be possible to design, from the bottom up, a functional bacterial chemotaxis system in order to analyse individual protein specificity. Expression of R. sphaeroides MCPs in this E. coli system showed the reconstitution of a chemotactic array, but not one capable of signalling specifically in response to proposed attractants. Results gained from this system suggest that the R. sphaeroides CheW proteins are not homologous and that their differential binding affinities may allow 'fine-tuning' of array activity.
4

Peralta, Yaddyra. "Deep Waters." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/622.

Full text
Abstract:
The purpose of this creative thesis was to explore the state of exile via the use of the contemporary lyric poem. Written primarily in free verse, with some poems written in the traditional forms of the sonnet, haiku and senryu, the thesis explored exile and its variant themes of colonization, assimilation, familial history, cultural and personal myth. The result was the discovery that the lyric poem is an ideal, productive and fluid medium through which a poet can consider and encounter the liminality of exile identity.
5

Straube, Nicolas. "Deep divergence." Diss., Ludwig-Maximilians-Universität München, 2011. http://nbn-resolving.de/urn:nbn:de:bvb:19-138186.

Full text
6

Joseph, Caberbe. "DEEP WITHIN." Master's thesis, University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2794.

Full text
Abstract:
As a contemporary photographer, I focus most on light and color to bring out the uniqueness of my images. Photography is about lighting and I manipulate lights to raise questions in my viewers. Manipulating light is my way of being curious about how it may change mood physically and emotionally. Inspired by classical paintings, I have developed a body of photographs that can be admired by anyone. Although the main focus of my work is light and color, this body of work is also intended to empower those with little confidence in themselves and those who have been rejected, abused, or mistrusted.
M.F.A.
Department of Art
Arts and Humanities
Studio Art and the Computer MFA
7

Krotevych, K. "Deep web." Thesis, Sumy State University, 2015. http://essuir.sumdu.edu.ua/handle/123456789/40487.

Full text
Abstract:
We have grown accustomed to the fact that all the information on the Internet can be found instantly by search engines. They know everything about everyone. But is it really so? It turns out that there are areas of the WWW to which neither Google nor Yandex has access. Moreover, according to most experts, their size is hundreds of times greater than the size of the rest of the Internet. This hidden part of the web is called the deep web.
8

Wood, Rebecca. "Deep Surface." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1427899904.

Full text
9

Peterson, Grant. "Deep time /." abstract, 2008. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1455664.

Full text
Abstract:
Thesis (M.A.)--University of Nevada, Reno, 2008.
"May, 2008." Library also has microfilm. Ann Arbor, Mich. : ProQuest Information and Learning Company, [2009]. 1 microfilm reel ; 35 mm. Online version available on the World Wide Web.
10

Traxl, Dominik. "Deep graphs." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17785.

Full text
Abstract:
Network theory has proven to be a powerful instrument in the representation of complex systems. Yet, even in its latest and most general form (i.e., multilayer networks), it is still lacking essential qualities to serve as a general data analysis framework. These include, most importantly, an explicit association of information with the nodes and edges of a network, and a conclusive representation of groups of nodes and their respective interrelations on different scales. The implementation of these qualities into a generalized framework is the primary contribution of this dissertation. By doing so, I show how my framework - deep graphs - is capable of acting as a go-between, joining a unified and generalized network representation of systems with the tools and methods developed in statistics and machine learning. A software package accompanies this dissertation, see https://github.com/deepgraph/deepgraph. A number of applications of my framework are demonstrated. I construct a rainfall deep graph and conduct an analysis of spatio-temporal extreme rainfall clusters. Based on the constructed deep graph, I provide statistical evidence that the size distribution of these clusters is best approximated by an exponentially truncated powerlaw. By means of a generative storm-track model, I argue that the exponential truncation of the observed distribution could be caused by the presence of land masses. Then, I combine two high-resolution satellite products to identify spatio-temporal clusters of fire-affected areas in the Brazilian Amazon and characterize their land use specific burning conditions. Finally, I investigate the effects of white noise and global coupling strength on the maximum degree of synchronization for a variety of oscillator models coupled according to a broad spectrum of network topologies. I find a general sigmoidal scaling and validate it with a suitable regression model.
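The node-table/edge-table idea at the core of this framework can be illustrated with plain pandas. The sketch below is illustrative only: it mimics the concept described above rather than the deepgraph package's own API, and all column names and data are invented.

```python
import pandas as pd

# Illustrative node table: every node carries arbitrary information as columns
# (node ids are the DataFrame index; all values below are invented).
v = pd.DataFrame({
    "x":     [0.0, 1.0, 5.0],
    "y":     [0.0, 0.5, 2.0],
    "value": [3.2, 4.1, 9.7],
})

# Edge table: pairwise relations, again carrying information as columns
# (here a spatial distance and a value difference, computed by "connector" logic).
pairs = [(i, j) for i in v.index for j in v.index if i < j]
e = pd.DataFrame(pairs, columns=["s", "t"])
e["distance"] = ((v.loc[e["s"], "x"].values - v.loc[e["t"], "x"].values) ** 2 +
                 (v.loc[e["s"], "y"].values - v.loc[e["t"], "y"].values) ** 2) ** 0.5
e["dvalue"] = v.loc[e["t"], "value"].values - v.loc[e["s"], "value"].values

# Grouping nodes into "supernodes" on a coarser scale is then an ordinary aggregation.
v["cluster"] = (v["value"] > 5).astype(int)
supernodes = v.groupby("cluster").agg({"x": "mean", "y": "mean", "value": "mean"})

print(e)
print(supernodes)
```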
11

Jönsson, Jennifer Annie Patricia. "Deep Impression." Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-22025.

Full text
Abstract:
The scope of this thesis is to reveal the hidden dimensions of fashion, with the aim of stressing the worth of participation and the individual experience of fashion. This work questions what we see, and then what is actually there. Through a thorough investigation of the knit technique, the relationship of loop and thread (pause and activity) is the focus of this paper. Enhancing the significant qualities of the knitted technique, where material and shape are born simultaneously, the work presented holds a variety of results. With the aim of discussing multiple dimensions, this knit investigation is presented in a fashion context. Styled with technical sportswear, this work challenges knitwear as well as sportswear. By clashing sports-connoted materials with the knitted wool, both fields are expanded and new options and expressions are presented. The motive of this investigation is to further state the worth of fashion: to create a space for the experience of fashion, presenting results that do not depend on presentation on the body. This work questions the pre-set truths and conventions of what fashion could be, and our ability to judge what is presented to us.
12

Seraku, Tohru. "Clefts, relatives, and language dynamics : the case of Japanese." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:0448acc3-dee6-4b1b-9020-95fd84895f24.

Full text
Abstract:
The goal of this thesis is to develop a grammar model of Japanese within the framework of Dynamic Syntax (Cann et al. 2005, Kempson et al. 2001), with special reference to constructions that involve the nominaliser no: clefts and certain kinds of relatives. The more general theoretical position which it aims to defend is that an account of these constructions in terms of ‘language dynamics’ is preferable to other ‘static’ approaches currently available. What is here meant by ‘language dynamics,’ in a nutshell, is the time-linear processing of a string and attendant growth of an interpretation. First, I shall motivate, and articulate, an integrated account of the two types of no- nominalisation. These two classes are uniformly modelled as an outcome of incremental semantic-tree growth. The analysis is corroborated by naturally-occurring data extracted from the Corpus of Spontaneous Japanese (CSJ). Moreover, novel data with regard to coordination are accounted for without losing uniformity. Second, the composite entry of no and the topic marker wa handles the two types of clefts uniformly. This account fits well with the CSJ findings. New data concerning case-marking of foci are explained in terms of whether an unfixed relation in a semantic tree is resolvable in incremental processing. The account also solves the island-puzzle without abandoning uniformity. As a further confirmation, the analysis is extendable to stripping/sluicing, making some novel predictions on case-marking patterns. Third, the entry of no characterises free relatives and change relatives in a unitary manner. Furthermore, the composite entry of no and a case particle predicts a vast range of properties of head-internal relatives, including new data (e.g., negation in the relative clause, locality restriction on the Relevancy Condition). In sum, the thesis presents a realistic, integrated, and empirically preferable model of Japanese. Some consequences stand out. The various new data reported are beneficial theory-neutrally. Formal aspects of Dynamic Syntax are advanced. The insights brought by a language dynamics account challenge the standard, static conception of grammar.
13

Hope, Kofi N. "In search of solidarity : international solidarity work between Canada and South Africa 1975-2010." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:94fc88ca-de19-4e97-b66f-97cd9f5d4595.

Full text
Abstract:
This thesis provides an account of the work of Canadian organizations that took part in the global anti-apartheid movement and then continued political advocacy work in South Africa post-1994. My central research question is: What explains the rise and fall of international solidarity movements? I answer this question by exploring the factors that allowed the Canadian anti-apartheid network to grow into an international solidarity movement and explaining how a change in these factors sent the network into a period of decline post-1994. I use two organizations, the United Church of Canada and CUSO, as case studies for my analysis. I argue that four factors were behind the growth of the Canadian solidarity network: the presence of large CSOs in Canada willing to become involved in solidarity work, the presence of radical spaces in these organizations from which activists could advocate for and carry out solidarity work, the frame resonance of the apartheid issue in Canada and the political incentives the apartheid state provided for South African activists to encourage Northern support. Post-1994 all of these factors shifted in ways that restricted the continuation of international solidarity work with South Africa. Accordingly I argue that the decline of the Canadian network was driven in part by specific South African factors, but was also connected to a more general stifling of the activist work of progressive Canadian CSOs over the 1990s. This reduction of capacity was driven by the ascent of neo-liberal policy in Canada and worldwide. Using examples from a wide swath of cases I outline this process and explain how all four factors drove the growth and decline of Canadian solidarity work towards South Africa.
14

Chitengi, Howard S. "Deriving lessons for urban planning and housing delivery from the resilience of informal housing systems in Zambia." Thesis, University of Dundee, 2015. https://discovery.dundee.ac.uk/en/studentTheses/a0f283ed-de59-4b8f-89d8-5380a9b919ae.

Full text
Abstract:
The study explores the factors that sustain urban informal housing resilience, to draw lessons for the enhancement of housing provision. This is in response to the challenge in housing provision evidenced by the burgeoning informal housing delivery systems that characterise most developing countries. Using a case study approach involving two informal settlements in Lusaka City, Zambia, the study examines the push and pull factors that influence this resilience. This is premised on the argument that identifying the factors sustaining the resilience holds the key to making the planning system reflective of the context in which housing needs, demands and access abilities are embedded. To this end, grounded in both literature and empirical interrogation, the study shows that informal housing resilience is sustained by several factors, of which the following are pertinent. The study demonstrates that regulatory frameworks, land property rights, contractual practices and fiscal policies, which shape the general context of housing development, influence informal housing resilience. In this regard, the study suggests that provision of housing that meets the needs of different groups, and the attainment of sustainable neighbourhoods, can mainly be reached through flexibility in standards and an adaptive governance approach that blends in socio-cultural financing and contractual practices, building methods, innovations and land delivery systems. The study also shows informal housing resilience to be sustained by urban planning frameworks that are not amenable to contemporary approaches such as partnerships, participation, collaboration and decentralisation for housing finance provision. In this view, the study suggests new changes and approaches to housing governance anchored on these planning principles. The study further shows that informal housing resilience is influenced by the location and internal structuring of residential areas that are incompatible with local dwelling contexts. Accordingly, the study presents the common strategies of eviction, demolition or relocation employed by planners and policy makers as a display of obliviousness to the realities that make people reside in particular localities considered 'unauthorised'. In view of this, the study suggests new changes and approaches to the planning of human settlements, to include adaptation to local and socio-cultural dwelling contexts and proximity concerns in layout plans and patterns.
15

Huang, Ruobing. "Delving deep into fetal neurosonography : an image analysis approach." Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:63aec035-dee2-40d4-9e00-ee1674a52494.

Full text
Abstract:
Ultrasound screening has been used for decades as the main modality to examine fetal brain development and to diagnose possible anomalies. However, basic clinical ultrasound examination of the fetal head is limited to axial planes of the brain and linear measurements, which may have restrained its potential and efficacy. The recent introduction of three-dimensional (3D) ultrasound provides the opportunity to navigate to different anatomical planes and to evaluate structures in 3D within the developing brain. Regardless of acquisition method, interpreting 2D/3D ultrasound fetal brain images requires considerable skill and time. In this thesis, a series of automatic image analysis algorithms are proposed that exploit the rich sonographic patterns captured by the scans and help to simplify clinical examination. The original contributions include: 1. An original skull detection method for 3D ultrasound images, which achieves a mean accuracy of 2.2 ± 1.6 mm compared to the ground truth (GT). In addition, the algorithm is utilised for accurate automated measurement of essential biometry in standard examinations: biparietal diameter (mean accuracy: 2.1 ± 1.4 mm) and head circumference (mean accuracy: 4.5 ± 3.7 mm). 2. A plane detection algorithm. It automatically extracts the mid-sagittal plane, which provides visualization of the midline structures that are crucial for assessing central nervous system malformations. The automated planes are in accordance with manual ones (within 3.0 ± 3.5°). 3. A general segmentation framework for delineating fetal brain structures in 2D images. The automatically generated predictions are found to agree with the manual delineations (mean Dice similarity coefficient: 0.79 ± 0.07). As a by-product, the algorithm generates automated biometry. The results might be further utilized for morphological evaluation in future research. 4. An efficient localization model that is able to pinpoint the 3D locations of five key brain structures that are examined in a routine clinical examination. The predictions correlate with the ground truth: the average centre deviation is 1.8 ± 1.4 mm, and the size difference between them is 1.9 ± 1.5 mm. The application of this model may greatly reduce the time required for routine examination in clinical practice. 5. A 3D affine registration pipeline. Leveraging the power of convolutional neural networks, the model takes raw 3D brain images as input and geometrically transforms fetal brains into a unified coordinate system (proposed as a Fetal Brain Talairach system). The integration of these algorithms into computer-assisted analysis tools may greatly reduce the time and effort clinicians need to evaluate 3D fetal neurosonography. Furthermore, they will assist understanding of fetal brain maturation by distilling 2D/3D information directly from the uterus.
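For reference, the Dice similarity coefficient used to report the segmentation results above measures the overlap of two binary masks. A minimal numpy sketch (not the thesis's own code) is:

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two overlapping square masks
a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
b = np.zeros((10, 10), dtype=bool); b[4:9, 4:9] = True
print(dice_coefficient(a, b))  # 2*9 / (25+25) = 0.36 for this toy case
```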
16

Cusack, Martin. "The role of DNA methylation on transcription factor occupancy and transcriptional activity." Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:7d0b7fe7-dee1-433f-8656-c9ee2a216d48.

Full text
Abstract:
DNA methylation is an epigenetic mark that is deposited throughout the genome of mammals and plays an important role in the maintenance of transcriptionally repressive states across cell divisions. There are two major mechanisms by which DNA methylation has been proposed to act: one involves the recognition of the mark by protein complexes containing histone deacetylases (HDACs) that can remodel the local chromatin. Alternatively, methylation has been suggested to directly affect the interaction between transcription factors and their cognate binding sequence. The aim of this research was to determine the contributions of these two mechanisms in cells. The importance of HDAC activity in mediating DNA methylation-dependent transcriptional repression was assessed by comparing the genes and retrotransposons that are upregulated in response to DNA methylation loss or the disruption of HDAC activity. To this purpose, we performed whole-genome transcriptional analysis in wild type and DNA methylation-deficient mouse embryonic stem cells (DNMT.TKO mESCs) in the presence and absence of the HDAC inhibitor trichostatin A. Our data suggests that there are few genes whose repression is solely dependent on the recruitment of HDACs by DNA methylation in mESCs. Rather it appears that DNA methylation and HDAC-mediated silencing represent two independent layers of repression that converge at certain transcriptional elements. To investigate the contribution of DNA methylation on the genome-wide occupancy of transcription factors, we compared the global chromatin accessibility landscape and the binding profile of candidate transcription factors in the absence or presence of DNA methylation. We found that loss of DNA methylation associates with localised gains in accessibility, some of which can be linked to the novel binding of transcription factors such as GABPA, MAX, NRF1 and YY1. Altogether, our results present new insights into the interplay between DNA methylation and histone deacetylation and their impact on the localisation of transcription factors from different families.
17

Lynch, Cassie A. "Korangan: Deep Time and Deep Transformation in Noongar Country." Thesis, Curtin University, 2020. http://hdl.handle.net/20.500.11937/81989.

Full text
Abstract:
Recent research suggests that Indigenous stories that feature 'cold times' and rising seas are in fact eyewitness accounts of the last ice age and the rise in sea-level that followed it. Building on this notion, this research explores whether writing fiction in the scale of deep time can be employed to explore colonial pasts, the contested present and radical futures.
18

Backstad, Sebastian. "Federated Averaging Deep Q-Network: A Distributed Deep Reinforcement Learning Algorithm." Thesis, Umeå universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-149637.

Full text
Abstract:
In the telecom sector, a huge amount of rich data is generated every day. This trend will increase with the launch of 5G networks. Telco companies are interested in analyzing their data to shape and improve their core businesses. However, there can be a number of limiting factors that prevent them from logging data to central data centers for analysis. Some examples include data privacy, data transfer, network latency, etc. In this work, we present a distributed Deep Reinforcement Learning (DRL) method called Federated Averaging Deep Q-Network (FADQN), which employs a distributed hierarchical reinforcement learning architecture. It utilizes gradient averaging to decrease communication cost. Privacy concerns are also satisfied by training the agent locally and only sending aggregated information to the centralized server. We introduce two versions of FADQN: synchronous and asynchronous. Results on the cart-pole environment show an 80-fold reduction in communication without any significant loss in performance. Additionally, in the case of the asynchronous approach, we see a great improvement in convergence.
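A minimal sketch of the synchronous variant of this idea (each worker updates a Q-network locally and only averaged parameters are exchanged once per communication round) follows. It is illustrative only: the local update is a placeholder, and the real FADQN agent, environment and gradient-averaging details from the thesis are not reproduced.

```python
import numpy as np

def local_update(params, env_seed, lr=0.01):
    """Stand-in for local DQN training: returns locally improved parameters.
    In a real agent this would be several gradient steps on the local replay buffer."""
    rng = np.random.default_rng(env_seed)
    grads = rng.normal(scale=0.1, size=params.shape)  # placeholder for true DQN gradients
    return params - lr * grads

def federated_round(global_params, worker_seeds):
    """Each worker trains locally; only the averaged parameters are sent back centrally."""
    local_params = [local_update(global_params.copy(), s) for s in worker_seeds]
    return np.mean(local_params, axis=0)

params = np.zeros(8)           # flattened Q-network weights (toy size)
for round_id in range(5):      # communication happens once per round, not per step
    params = federated_round(params, worker_seeds=[round_id * 10 + w for w in range(4)])
print(params)
```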
19

Dunlop, J. S., R. J. McLure, A. D. Biggs, J. E. Geach, M. J. Michałowski, R. J. Ivison, W. Rujopakarn, et al. "A deep ALMA image of the Hubble Ultra Deep Field." OXFORD UNIV PRESS, 2017. http://hdl.handle.net/10150/623849.

Full text
Abstract:
We present the results of the first, deep Atacama Large Millimeter Array (ALMA) imaging covering the full ≃ 4.5 arcmin² of the Hubble Ultra Deep Field (HUDF) imaged with Wide Field Camera 3/IR on HST. Using a 45-pointing mosaic, we have obtained a homogeneous 1.3-mm image reaching σ_1.3 ≃ 35 μJy, at a resolution of ≃ 0.7 arcsec. From an initial list of ≃ 50 > 3.5σ peaks, a rigorous analysis confirms 16 sources with S_1.3 > 120 μJy. All of these have secure galaxy counterparts with robust redshifts (⟨z⟩ = 2.15). Due to the unparalleled supporting data, the physical properties of the ALMA sources are well constrained, including their stellar masses (M*) and UV+FIR star formation rates (SFR). Our results show that stellar mass is the best predictor of SFR in the high-redshift Universe; indeed at z = 2 our ALMA sample contains seven of the nine galaxies in the HUDF with M* ≥ 2 × 10¹⁰ M⊙, and we detect only one galaxy at z > 3.5, reflecting the rapid drop-off of high-mass galaxies with increasing redshift. The detections, coupled with stacking, allow us to probe the redshift/mass distribution of the 1.3-mm background down to S_1.3 ≃ 10 μJy. We find strong evidence for a steep star-forming 'main sequence' at z ≃ 2, with SFR ∝ M* and a mean specific SFR ≃ 2.2 Gyr⁻¹. Moreover, we find that ≃ 85 per cent of total star formation at z ≃ 2 is enshrouded in dust, with ≃ 65 per cent of all star formation at this epoch occurring in high-mass galaxies (M* > 2 × 10¹⁰ M⊙), for which the average obscured:unobscured SF ratio is ≃ 200. Finally, we revisit the cosmic evolution of SFR density; we find this peaks at z ≃ 2.5, and that the star-forming Universe transits from primarily unobscured to primarily obscured at z ≃ 4.
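For reference, the specific SFR quoted above is simply the star formation rate normalised by stellar mass, so a main sequence with SFR ∝ M* corresponds to a roughly constant sSFR:

```latex
\mathrm{sSFR} \equiv \frac{\mathrm{SFR}}{M_*}, \qquad
\mathrm{SFR} \propto M_* \;\Rightarrow\; \mathrm{sSFR} \approx \mathrm{const} \simeq 2.2\ \mathrm{Gyr}^{-1} \quad (z \simeq 2).
```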
20

Carvalho, Micael. "Deep representation spaces." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS292.

Full text
Abstract:
In recent years, Deep Learning techniques have swept the state-of-the-art of many applications of Machine Learning, becoming the new standard approach for them. The architectures arising from these techniques have been used for transfer learning, which extended the power of deep models to tasks that did not have enough data to fully train them from scratch. This thesis' subject of study is the representation spaces created by deep architectures. First, we study properties inherent to them, with particular interest in dimensionality redundancy and precision of their features. Our findings reveal a strong degree of robustness, pointing the path to simple and powerful compression schemes. Then, we focus on refining these representations. We choose to adopt a cross-modal multi-task problem, and design a loss function capable of taking advantage of data coming from multiple modalities, while also taking into account different tasks associated with the same dataset. In order to correctly balance these losses, we also develop a new sampling scheme that only takes into account examples contributing to the learning phase, i.e. those having a positive loss. Finally, we test our approach on a large-scale dataset of cooking recipes and associated pictures. Our method achieves a 5-fold improvement over the state-of-the-art, and we show that the multi-task aspect of our approach promotes a semantically meaningful organization of the representation space, allowing it to perform subtasks never seen during training, like ingredient exclusion and selection. The results we present in this thesis open many possibilities, including feature compression for remote applications, robust multi-modal and multi-task learning, and feature space refinement. For the cooking application in particular, many of our findings are directly applicable in a real-world context, especially for the detection of allergens, finding alternative recipes due to dietary restrictions, and menu planning.
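The "positive loss" sampling idea described above can be sketched as follows. This is a hedged illustration in PyTorch, not the thesis's implementation; the task names, weights and toy losses are assumptions.

```python
import torch

def positive_only_mean(per_example_loss: torch.Tensor) -> torch.Tensor:
    """Average only over examples whose loss is strictly positive."""
    mask = per_example_loss > 0
    if mask.any():
        return per_example_loss[mask].mean()
    return per_example_loss.sum() * 0.0  # keeps the graph, contributes nothing

def multitask_loss(retrieval_loss, classif_loss, w_retrieval=1.0, w_classif=1.0):
    """Weighted sum of per-task positive-only averages (weights are illustrative)."""
    return (w_retrieval * positive_only_mean(retrieval_loss) +
            w_classif * positive_only_mean(classif_loss))

# Toy example: margin-style retrieval losses (many are already zero) and
# per-example cross-entropy classification losses.
retrieval = torch.relu(0.2 + torch.randn(16))       # hinge-style: zero when margin satisfied
logits = torch.randn(16, 5, requires_grad=True)
targets = torch.randint(0, 5, (16,))
classif = torch.nn.functional.cross_entropy(logits, targets, reduction="none")
loss = multitask_loss(retrieval, classif)
loss.backward()
```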
21

Carter, Justin Ryan. "Assume Deer Dead." Bowling Green State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1395065120.

Full text
22

Dufourq, Emmanuel. "Evolutionary deep learning." Doctoral thesis, Faculty of Science, 2019. http://hdl.handle.net/11427/30357.

Full text
Abstract:
The primary objective of this thesis is to investigate whether evolutionary concepts can improve the performance, speed and convenience of algorithms in various active areas of machine learning research. Deep neural networks are exhibiting an explosion in the number of parameters that need to be trained, as well as in the number of permutations of possible network architectures and hyper-parameters. There is little guidance on how to choose these, and brute-force experimentation is prohibitively time-consuming. We show that evolutionary algorithms can help tame this explosion of freedom, by developing an algorithm that robustly evolves near-optimal deep neural network architectures and hyper-parameters across a wide range of image and sentiment classification problems. We further develop an algorithm that automatically determines whether a given data science problem is of classification or regression type, successfully choosing the correct problem type with more than 95% accuracy. Together these algorithms show that a great deal of the current "art" in the design of deep learning networks - and in the job of the data scientist - can be automated. Having discussed the general problem of optimising deep learning networks, the thesis moves on to a specific application: the automated extraction of human sentiment from text and images of human faces. Our results reveal that our approach is able to outperform several public and/or commercial text sentiment analysis algorithms, using an evolutionary algorithm that learned to encode and extend sentiment lexicons. A second analysis looked at using evolutionary algorithms to estimate text sentiment while simultaneously compressing text data. An extensive analysis of twelve sentiment datasets reveals that accurate compression is possible, with only 3.3% loss in classification accuracy even at 75% compression of text size, which is useful in environments where data volumes are a problem. Finally, the thesis presents improvements to automated sentiment analysis of human faces to identify emotion, an area where there has been a tremendous amount of progress using convolutional neural networks. We provide a comprehensive critique of past work, highlight recommendations and list some open, unanswered questions in facial expression recognition using convolutional neural networks. One serious challenge when implementing such networks for facial expression recognition is the large number of trainable parameters, which results in long training times. We propose a novel method, based on evolutionary algorithms, to reduce the number of trainable parameters whilst simultaneously retaining classification performance, and in some cases achieving superior performance. We are robustly able to reduce the number of parameters on average by 95% with no loss in classification accuracy. Overall, our analyses show that evolutionary algorithms are a valuable addition to machine learning in the deep learning era: automating, compressing and/or improving results significantly, depending on the desired goal.
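As a rough illustration of the kind of evolutionary search over network hyper-parameters described above, the following sketch evolves a small population of hyper-parameter settings. The search space, operators and the placeholder fitness function are invented for illustration; in practice the fitness would be the validation accuracy of the trained network.

```python
import random

SEARCH_SPACE = {
    "layers":        [1, 2, 3, 4],
    "units":         [32, 64, 128, 256],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "dropout":       [0.0, 0.25, 0.5],
}

def random_genome():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(genome, rate=0.3):
    return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in genome.items()}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def fitness(genome):
    # Placeholder: in practice, train the network defined by `genome`
    # and return its validation accuracy.
    return -abs(genome["layers"] - 3) - abs(genome["units"] - 128) / 128

population = [random_genome() for _ in range(12)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                      # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children
print(max(population, key=fitness))
```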
23

He, Fengxiang. "Theoretical Deep Learning." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25674.

Full text
Abstract:
Deep learning has long been criticised as a black-box model for lacking sound theoretical explanation. During the PhD course, I explore and establish theoretical foundations for deep learning. In this thesis, I present my contributions positioned upon existing literature: (1) analysing the generalizability of the neural networks with residual connections via complexity and capacity-based hypothesis complexity measures; (2) modeling stochastic gradient descent (SGD) by stochastic differential equations (SDEs) and their dynamics, and further characterizing the generalizability of deep learning; (3) understanding the geometrical structures of the loss landscape that drives the trajectories of the dynamic systems, which sheds light in reconciling the over-representation and excellent generalizability of deep learning; and (4) discovering the interplay between generalization, privacy preservation, and adversarial robustness, which have seen rising concerns in deep learning deployment.
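Contribution (2) treats SGD as a continuous-time stochastic process; one commonly used formulation of this approximation (stated here as general background, not necessarily the thesis's exact model) is

```latex
d\theta_t = -\nabla L(\theta_t)\, dt + \sqrt{\tfrac{\eta}{B}}\, \Sigma(\theta_t)^{1/2}\, dW_t,
```

where η is the learning rate, B the mini-batch size, Σ(θ) the covariance of the per-sample gradients, and W_t a standard Wiener process.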
24

Manna, Amin(Amin A. ). "Deep linguistic lensing." Thesis, Massachusetts Institute of Technology, 2018. https://hdl.handle.net/1721.1/121630.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 81-84).
Language models and semantic word embeddings have become ubiquitous as sources for machine learning features in a wide range of predictive tasks and real-world applications. We argue that language models trained on a corpus of text can learn the linguistic biases implicit in that corpus. We discuss linguistic biases, or differences in identity and perspective that account for the variation in language use from one speaker to another. We then describe methods to intentionally capture "linguistic lenses": computational representations of these perspectives. We show how the captured lenses can be used to guide machine learning models during training. We define a number of lenses for author-to-author similarity and word-to-word interchangeability. We demonstrate how lenses can be used during training time to imbue language models with perspectives about writing style, or to create lensed language models that learn less linguistic gender bias than their un-lensed counterparts.
by Amin Manna.
M. Eng.
M.Eng. Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
25

FRACCAROLI, MICHELE. "Explainable Deep Learning." Doctoral thesis, Università degli studi di Ferrara, 2023. https://hdl.handle.net/11392/2503729.

Full text
Abstract:
The great success that Machine and Deep Learning have achieved in areas that are strategic for our society, such as industry, defence, medicine, etc., has led more and more organizations to invest in and explore the use of this technology. Machine Learning and Deep Learning algorithms and learned models can now be found in almost every area of our lives, from phones to smart home appliances to the cars we drive. It can therefore be said that this pervasive technology is now in touch with our lives, and we have to deal with it. This is why eXplainable Artificial Intelligence, or XAI, was born: one of the research trends currently in vogue in the fields of Deep Learning and Artificial Intelligence. The idea behind this line of research is to make and/or design new Deep Learning algorithms so that they are interpretable and comprehensible to humans. This necessity is due precisely to the fact that neural networks, the mathematical model underlying Deep Learning, act like a black box, making the internal reasoning they carry out to reach a decision incomprehensible and untrustworthy to humans. As we are delegating more and more important decisions to these mathematical models, it is very important to be able to understand the motivations that lead these models to make certain decisions. This is because we have integrated them into the most delicate processes of our society, such as medical diagnosis, autonomous driving or legal processes. The work presented in this thesis consists in studying and testing Deep Learning algorithms integrated with symbolic Artificial Intelligence techniques. This integration has a twofold purpose: to make the models more powerful, enabling them to carry out reasoning or constraining their behaviour in complex situations, and to make them interpretable. The thesis focuses on two macro topics: the explanations obtained through neuro-symbolic integration and the exploitation of explanations to make Deep Learning algorithms more capable or intelligent. Neuro-symbolic integration was addressed in two ways, by experimenting with the integration of symbolic algorithms with neural networks. A first approach was to create a system to guide the training of the networks themselves, in order to find the best combination of hyper-parameters and so automate the design of these networks. This is done by integrating neural networks with Probabilistic Logic Programming (PLP). This integration makes it possible to exploit probabilistic rules tuned by the behaviour of the networks during the training phase or inherited from the experience of experts in the field. These rules are triggered when a problem occurs during network training. This generates an explanation of what was done to improve the training once a particular issue was identified. A second approach was to make probabilistic logic systems cooperate with neural networks for medical diagnosis on heterogeneous data sources. The second topic addressed in this thesis concerns the exploitation of explanations. In particular, the explanations one can obtain from neural networks are used to create attention modules that help constrain and improve the performance of neural networks. All works developed during the PhD and described in this thesis have led to the publications listed in Chapter 14.2.
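The rule-triggering mechanism described above (a monitor detects a training problem, a rule proposes a corrective action, and the fired rule doubles as an explanation of what was changed) can be sketched generically as follows. This is purely illustrative: it does not use the thesis's PLP machinery, and all rule names, thresholds and actions are invented.

```python
# Illustrative sketch: symbolic-style rules that watch training metrics and
# suggest (explained) corrective actions. Thresholds and actions are invented.

RULES = [
    {
        "name": "overfitting",
        "condition": lambda h: h["val_loss"] > h["train_loss"] * 1.5,
        "action": {"dropout": "+0.1"},
        "explanation": "validation loss far above training loss: increase regularization",
    },
    {
        "name": "underfitting",
        "condition": lambda h: h["train_loss"] > 1.0 and h["epoch"] > 10,
        "action": {"learning_rate": "*0.5", "units": "*2"},
        "explanation": "training loss still high after 10 epochs: lower LR, enlarge model",
    },
]

def diagnose(history):
    """Return the triggered rules, each with a human-readable explanation."""
    fired = [r for r in RULES if r["condition"](history)]
    return [(r["name"], r["action"], r["explanation"]) for r in fired]

# Example: metrics observed at some point during training (toy values).
print(diagnose({"epoch": 12, "train_loss": 1.3, "val_loss": 1.4}))
```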
APA, Harvard, Vancouver, ISO, and other styles
26

Hervert, John Joseph. "Mule deer use of water developments in Arizona." Thesis, The University of Arizona, 1985. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu_e9791_1985_270_sip1_w.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Dan-Jumbo, F. G. "Material and structural properties of a novel Aer-Tech material." Thesis, Coventry University, 2015. http://curve.coventry.ac.uk/open/items/699ca3a1-deec-4549-b907-0e06bcdad83f/1.

Full text
Abstract:
This study critically investigates the material and structural behaviour of Aer-Tech material. Aer-Tech material is composed of 10% by volume of foam mechanically entrapped in a plastic mortar. The study showed that the density of the mix controls all other properties, such as fresh-state, mechanical, functional and acoustic properties. Notably, the research confirmed that Aer-Tech material, despite being classified as a lightweight material, achieves a high compressive strength of about 33.91 N/mm². The compressive strength characteristics of Aer-Tech make it a potentially cost-effective construction material, comparable to conventional concrete. The study also showed that it is a structurally effective material, with a singly reinforced beam giving an ultimate moment of about 38.7 kN·m. In addition, Aer-Tech is a very ductile material, since the singly reinforced beam in tension showed visible diagonal vertical cracks long before impending rupture. The SEM tests showed how billions of tightly packed air cells are evenly distributed within the Aer-Tech void system, and the neural network (NN) model predictions of compressive strength and density closely matched the experimental results, showing that the Aer-Tech NN model can simulate input data and predict the corresponding outputs.
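As a purely illustrative companion to the NN model mentioned in the abstract, the sketch below fits a small neural-network regressor that maps mix parameters to compressive strength and density. The feature set and all numbers are invented for the example and do not come from the thesis.

```python
# Illustrative sketch only: a small neural-network regressor of the kind the
# abstract describes, mapping mix parameters to compressive strength and
# density. Feature names and toy data are assumptions, not thesis values.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical mix inputs: [cement (kg/m3), sand (kg/m3), water/cement, foam (% vol)]
X = np.array([
    [450, 1350, 0.45, 10],
    [500, 1300, 0.40, 10],
    [400, 1400, 0.50, 12],
    [475, 1325, 0.42,  8],
    [425, 1375, 0.48, 11],
])
# Targets: [compressive strength (N/mm2), density (kg/m3)] -- toy numbers
y = np.array([
    [28.5, 1850],
    [33.9, 1920],
    [22.1, 1760],
    [31.2, 1900],
    [25.4, 1800],
])

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
model.fit(X, y)

new_mix = np.array([[460, 1340, 0.44, 10]])
strength, density = model.predict(new_mix)[0]
print(f"predicted strength ~ {strength:.1f} N/mm2, density ~ {density:.0f} kg/m3")
```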
APA, Harvard, Vancouver, ISO, and other styles
28

Parker, Israel David. "Effects of translocation and deer-vehicle collision mitigation on Florida Key deer." [College Station, Tex.]: Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1878.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Marchesini, Gregorio. "Caratterizzazione della Sardinia Deep Space Antenna in supporto di missioni deep space." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20809/.

Full text
Abstract:
This work analyses the main characteristics of the Sardinia Deep Space Antenna (SDSA), the Italian radio telescope co-funded by INAF and ASI to support both astronomical research and current and future planetary missions. Specifically, the capabilities of the SDSA for deep space missions are analysed, starting from a comparison with the 35-m and 34-m Deep Space Antennas currently operated by ESA and NASA, respectively. Particular attention is given to the design solutions that the three DSAs share and those that set them apart, in order to assess the innovative contribution the SDSA could make to deep space missions.
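For readers comparing dishes of different diameters, the standard aperture-gain relation below is the usual starting point; it is textbook material, not taken from the thesis.

```latex
% Standard aperture-antenna gain relation (not taken from the thesis): gain
% grows with the square of the dish diameter D and inversely with the square
% of the wavelength \lambda, scaled by the aperture efficiency \eta_a.
\[
  G \;=\; \eta_a \left( \frac{\pi D}{\lambda} \right)^{2},
  \qquad
  G_{\mathrm{dBi}} \;=\; 10 \log_{10} G .
\]
```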
APA, Harvard, Vancouver, ISO, and other styles
30

Storm, Daniel J. "White-tailed deer ecology and deer-human conflict in an exurban landscape /." Available to subscribers only, 2005. http://proquest.umi.com/pqdweb?did=1095427551&sid=6&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Benge, Sarah Elizabeth. "Nutrient selection by fallow deer (Dama dama) and roe deer (Capreolus capreolus)." Thesis, University of Southampton, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.249639.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Perjeru, Florentine. "Deep Defects in Wide Bandgap Materials Investigated Using Deep Level Transient Spectroscopy." Ohio University / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou997365452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Mansour, Tarek M. Eng Massachusetts Institute of Technology. "Deep neural networks are lazy : on the inductive bias of deep learning." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121680.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 75-78).
Deep learning models exhibit superior generalization performance despite being heavily overparametrized. Although widely observed in practice, there is currently very little theoretical backing for such a phenomenon. In this thesis, we propose a step towards understanding generalization in deep learning. We present evidence that deep neural networks have an inherent inductive bias that makes them inclined to learn generalizable hypotheses and avoid memorization. In this respect, we present results suggesting that the inductive bias stems from neural networks being lazy: they tend to learn simpler rules first. We also propose a definition of simplicity in deep learning based on the implicit priors ingrained in deep neural networks.
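A common way to probe the "simpler rules first" claim, sketched below purely as an illustration (the data, network and noise level are assumptions, not the thesis's experiments), is to corrupt a fraction of training labels and track accuracy on the clean and corrupted subsets separately: the structured part is typically fitted well before the random labels are memorised.

```python
# Illustrative experiment (an assumption, not code from the thesis): train a
# small network on data where 20% of labels are randomised, and track accuracy
# separately on the clean and the noisy subsets over training.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic binary task: label = sign of a linear rule, then corrupt 20% of labels.
n, d = 2000, 20
X = torch.randn(n, d)
w_true = torch.randn(d)
y = (X @ w_true > 0).long()
noisy = torch.rand(n) < 0.2
y[noisy] = torch.randint(0, 2, (int(noisy.sum()),))

model = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    if epoch % 20 == 0:
        with torch.no_grad():
            pred = model(X).argmax(dim=1)
            acc_clean = (pred[~noisy] == y[~noisy]).float().mean().item()
            acc_noisy = (pred[noisy] == y[noisy]).float().mean().item()
        print(f"epoch {epoch:3d}  clean acc {acc_clean:.2f}  noisy acc {acc_noisy:.2f}")
```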
by Tarek Mansour.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
34

Merkt, Juan R. "Social structure of Andean deer (Hippocamelus antisensis) in southern Peru." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/24864.

Full text
Abstract:
The taruca (Hippocamelus antisensis) is the only deer species found permanently in rugged mountainous habitat above the tree line. I studied the social organization of this deer in relation to its reproductive cycle and habitat use in the high Andes of southern Peru. Tarucas bred seasonally. Most fawns were observed towards the end of the rainy season between February and April. Mating was most common in June, during the dry season, and antler-shedding in males occurred in September/October, at the onset of the rainy season. The deer lived in social groups and, unlike most seasonally breeding cervids, formed large mixed-sex groups nearly all year. During the birth season, however, all pregnant females segregated to form female associations. At this time, adult males were found equally in mixed-sex groups or in small all-male groups. These groups differed in their habitat use. Female groups used areas of higher elevation, steeper slopes, and greater rock-cover than either male or mixed-sex groups. I suggest that selection of more rugged and concealed habitats by lactating females is primarily an antipredator strategy to reduce risk of predation on fawns. Tarucas are compared with other social Cervidae and with their ecological counterpart: the mountain Caprinae. The social structure of Hippocamelus resembles that of wild goats (Capra spp) and other Caprinae of similar ecology but it differs from that of wild sheep (Ovis spp).
APA, Harvard, Vancouver, ISO, and other styles
35

Ebersole, Regina L. "Efficacy of a controlled hunt for managing white-tailed deer on Fair Hill Natural Resource Management Area, Cecil County, Maryland." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 66 p, 2007. http://proquest.umi.com/pqdweb?did=1253509781&sid=1&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Daniels, Kelly L. "Deep water, open water." Master's thesis, Mississippi State : Mississippi State University, 2009. http://library.msstate.edu/etd/show.asp?etd=etd-04022009-163550.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Burchfield, Monica R. "Fish from Deep Water." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/english_theses/100.

Full text
Abstract:
These poems are lyrical narratives dealing primarily with the joys and sufferings of familial relationships in present and past generations, and how one is influenced and haunted by these interactions. There is a particular emphasis placed on the relationship between parent and child. Other poems deal with passion, both in the tangible and spiritual realms. The poems aim to use vivid figurative language to explore complex and sometimes distressing situations and emotions.
APA, Harvard, Vancouver, ISO, and other styles
38

Stone, Rebecca E. "Deep mixed layer entrainment." Monterey, California. Naval Postgraduate School, 1997. http://hdl.handle.net/10945/8198.

Full text
Abstract:
Approved for public release; distribution is unlimited.
A bulk turbulence-closure mixed layer model is generalized to allow prediction of very deep polar sea mixing. The model includes unsteady three-component turbulent kinetic energy budgets. In addition to terms for shear production, pressure redistribution, and dissipation, special attention is devoted to realistic treatment of thermobaric enhancement of buoyancy flux and to the Coriolis effect on turbulence. The model is initialized and verified with CTD data taken by R/V Valdivia in the Greenland Sea during winter 1993-1994. Model simulations show (1) mixed layer deepening is significantly enhanced when the thermal expansion coefficient's increase with pressure is included; (2) entrainment rate is sensitive to the direction of wind stress because of the Coriolis effect; and (3) the predicted mixed layer depth evolution agrees qualitatively with the observations. Results demonstrate the importance of water column initial conditions, accurate representation of strong surface cooling events, and inclusion of the thermobaric effect on buoyancy, in determining the depth of mixing and ultimately the heat and salt flux into the deep ocean. Since coupling of the ocean to the atmosphere through deep mixed layers in polar regions is fundamental to our climate system, it is important that regional and global models be developed that incorporate realistic representation of this coupling.
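For orientation, the thermobaric effect mentioned in the abstract enters through the pressure dependence of the thermal expansion coefficient in a linearised equation of state; the relations below are a textbook sketch, not the model's actual formulation.

```latex
% Standard linearised equation of state (an illustration, not the thesis's model):
% the thermal expansion coefficient \alpha grows with pressure in cold water, so
% a cold parcel mixed downward loses buoyancy faster than a constant-\alpha model
% predicts -- the "thermobaric" enhancement of the buoyancy flux.
\[
  \rho(T, S, p) \;\approx\; \rho_0 \Bigl[\, 1 - \alpha(p)\,(T - T_0) + \beta\,(S - S_0) \,\Bigr],
  \qquad \frac{\partial \alpha}{\partial p} > 0 ,
\]
\[
  b \;=\; -\,g\,\frac{\rho - \rho_0}{\rho_0}
    \;\approx\; g\,\bigl[\, \alpha(p)\,(T - T_0) - \beta\,(S - S_0) \,\bigr].
\]
```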
APA, Harvard, Vancouver, ISO, and other styles
39

Beyer, Franziska C. "Deep levels in SiC." Doctoral thesis, Linköpings universitet, Halvledarmaterial, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70356.

Full text
Abstract:
Silicon carbide (SiC) has been discussed as a promising material for high power bipolar devices for almost twenty years. Advances in SiC crystal growth especially the development of chemical vapor deposition (CVD) have enabled the fabrication of high quality material. Much progress has further been achieved in identifying minority charge carrier lifetime limiting defects, which may be attributed to structural defects, surface recombination or point defects located in the band gap of SiC. Deep levels can act as recombination centers by interacting with both the valence and conduction band. As such, the defect levels reduce the minority charge carrier lifetime, which is of great importance in bipolar devices. Impurities in semiconductors play an important role to adjust their semiconducting properties. Intentional doping can introduce shallow defect levels to increase the conductivity or deep levels for achieving semi-insulating (SI) SiC. Impurities, especially transition metals generate defect levels deep in the band gap of SiC, which trap charge carriers and thus reduce the charge carrier lifetime. Transition metals, such as vanadium, are used in SiC to compensate the residual nitrogen doping. It has previously been reported that valence band edges of the different SiC polytypes are pinned to the same level and that deep levels related to transition metals can serve as a common reference level; this is known as the LANGER-HEINRICH (LH) rule. Electron irradiation introduces or enhances the concentration of existing point defects, such as the carbon vacancy (VC) and the carbon interstitial (Ci). Limiting the irradiation energy, Eirr, below the displacement energy of silicon in the SiC lattice (Eirr < 220 keV), the generated defects can be attributed to carbon related defects, which are already created at lower Eirr. Ci are mobile at low temperatures and using low temperature heat treatments, the annealing behavior of the introduced Ci and their complexes can be studied. Deep levels, which appear and disappear depending on the electrical, thermal and optical conditions prior to the measurements are associated with metastable defects. These defects can exist in more than one configuration, which itself can have different charge states. Capacitance transient investigations, where the defect’s occupation is studied by varying the depletion region in a diode, can be used to observe such occupational changes. Such unstable behavior may influence device performance, since defects may be electrically active in one configuration and inactive after transformation to another configuration. This thesis is focused on electrical characterization of deep levels in SiC using deep level transient spectroscopy (DLTS). The first part, papers 1-4, is dedicated to defect studies of both impurities and intrinsic defects in as-grown material. The second part, consisting of papers 5-7, is dealing with the defect content after electron irradiation and the annealing behavior of the introduced deep levels. In the first part, transition metal incorporation of iron (Fe) and tungsten (W) is discussed in papers 1 and 2, respectively. Fe and W are possible candidates to compensate the residual nitrogen doping in SiC. The doping with Fe resulted in one level in n-type material and two levels in p-type 4H-SiC. The capture process is strongly coupled to the lattice. Secondary ion mass spectrometry measurements detected the presence of B and Fe. The defects are suggested to be related to Fe and/or Fe-B-pairs. 
Previous reports on tungsten doping showed that W gives rise to two levels (one shallow and one deep) in 4H- and only one deep level in 6H-SiC. In 3C-SiC, we detected two levels, one likely related to W and one intrinsic defect, labeled E1. The W related energy level aligns well with the deeper levels observed in 4H- and 6H-SiC, in agreement with the LH rule. The LH rule is observed experimentally to be valid also for intrinsic levels. The level related to the DLTS peak EH6/7 in 4H-SiC aligns with the level related to E7 in 6H-SiC as well as with the level related to E1 in 3C-SiC. The alignment suggests that these levels may originate from the same defect, probably the VC, which has been proposed previously for 4H- and 6H-SiC. In paper 3, electrical characterization of 3C-layers grown heteroepitaxially on different SiC substrates is discussed. The material was of high quality with a low background doping concentration, and SCHOTTKY diodes were fabricated. It was observed that nickel as rectifying contact material exhibits a barrier height similar to that of the previously suggested gold. A leakage current in the low nA range at a reverse bias of -2 V was achieved, which allowed capacitance transient measurements. One defect related to DLTS peak E1, previously presented in paper 2, was detected and suggested to be related to an intrinsic defect. Paper 4 gives evidence that chloride-based CVD grown material yields the same kind of defects as reported for standard CVD growth processes. However, for very high growth rates, exceeding 100 µm/h, an additional defect is observed as well as an increase of the Ti-concentration. Based on the knowledge from paper 2, the origin of the additional peak and the assumed increase of Ti-concentration can instead both be attributed to the deeper and the shallower level of tungsten in 4H-SiC, respectively. In the second part of the thesis, studies of low-energy (200 keV) electron irradiated as-grown 4H-SiC were performed. In paper 5, bistable defects, labeled EB-centers, evolved in the DLTS spectrum after the annihilation of the irradiation induced defect levels related to DLTS peaks EH1, EH3 and the bistable M-center. In a detailed annealing study presented in paper 6, the partial transformation of M-centers into the EB-centers is discussed. The transition between the two defects (M-centers → EB-centers) takes place at rather low temperatures (T ≈ 400 °C), which suggests a mobile defect as origin. The M-center and the EB-centers are suggested to be related to Ci and/or Ci complex defects. The EB-centers anneal out at about 700 °C. In paper 7, the DLTS peak EH5, which is observed after both low- and high-energy electron irradiation, is presented. The peak is associated with a bistable defect, labeled F-center. Configuration A exists unoccupied and occupied by an electron, whereas configuration B is only stable when filled by an electron. Reconfiguration temperatures for both configurations were determined and the reconfiguration energies were calculated from the transition kinetics. The reconfiguration B→A can also be achieved by minority charge carrier injection. The F-center is likely a carbon related defect, since it is already present after low-energy irradiation.
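For context, DLTS activation energies such as those discussed above are normally extracted from the standard thermal emission-rate expression below (textbook form; the thesis's notation may differ).

```latex
% Standard DLTS emission-rate analysis (textbook form, not copied from the thesis):
% the thermal emission rate of electrons from a trap at depth E_C - E_T is
\[
  e_n(T) \;=\; \sigma_n \,\langle v_{\mathrm{th}} \rangle\, N_C\,
               \exp\!\left( -\,\frac{E_C - E_T}{k_B T} \right),
\]
% and since \langle v_{\mathrm{th}} \rangle \propto T^{1/2} and N_C \propto T^{3/2},
% plotting \ln(e_n / T^2) against 1/T yields the activation energy from the slope.
```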
APA, Harvard, Vancouver, ISO, and other styles
40

Liu, Qian. "Deep spiking neural networks." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/deep-spiking-neural-networks(336e6a37-2a0b-41ff-9ffb-cca897220d6c).html.

Full text
Abstract:
Neuromorphic Engineering (NE) has led to the development of biologically-inspired computer architectures whose long-term goal is to approach the performance of the human brain in terms of energy efficiency and cognitive capabilities. Although there are a number of neuromorphic platforms available for large-scale Spiking Neural Network (SNN) simulations, the problem of programming these brain-like machines to be competent in cognitive applications still remains unsolved. On the other hand, Deep Learning has emerged in Artificial Neural Network (ANN) research to dominate state-of-the-art solutions for cognitive tasks. Thus the main research problem emerges of understanding how to operate and train biologically-plausible SNNs to close the gap in cognitive capabilities between SNNs and ANNs. SNNs can be trained by first training an equivalent ANN and then transferring the tuned weights to the SNN. This method is called ‘off-line’ training, since it does not take place on an SNN directly, but rather on an ANN instead. However, previous work on such off-line training methods has struggled in terms of poor modelling accuracy of the spiking neurons and high computational complexity. In this thesis we propose a simple and novel activation function, Noisy Softplus (NSP), to closely model the response firing activity of biologically-plausible spiking neurons, and introduce a generalised off-line training method using the Parametric Activation Function (PAF) to map the abstract numerical values of the ANN to concrete physical units, such as current and firing rate in the SNN. Based on this generalised training method and its fine tuning, we achieve the state-of-the-art accuracy on the MNIST classification task using spiking neurons, 99.07%, on a deep spiking convolutional neural network (ConvNet). We then take a step forward to ‘on-line’ training methods, where Deep Learning modules are trained purely on SNNs in an event-driven manner. Existing work has failed to provide SNNs with recognition accuracy equivalent to ANNs due to the lack of mathematical analysis. Thus we propose a formalised Spike-based Rate Multiplication (SRM) method which transforms the product of firing rates to the number of coincident spikes of a pair of rate-coded spike trains. Moreover, these coincident spikes can be captured by the Spike-Time-Dependent Plasticity (STDP) rule to update the weights between the neurons in an on-line, event-based, and biologically-plausible manner. Furthermore, we put forward solutions to reduce correlations between spike trains; thereby addressing the result of performance drop in on-line SNN training. The promising results of spiking Autoencoders (AEs) and Restricted Boltzmann Machines (SRBMs) exhibit equivalent, sometimes even superior, classification and reconstruction capabilities compared to their non-spiking counterparts. To provide meaningful comparisons between these proposed SNN models and other existing methods within this rapidly advancing field of NE, we propose a large dataset of spike-based visual stimuli and a corresponding evaluation methodology to estimate the overall performance of SNN models and their hardware implementations.
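The off-line ANN-to-SNN route described above can be illustrated with the toy sketch below: weights assumed to come from a trained ReLU unit drive an integrate-and-fire neuron with Poisson-coded inputs, and its firing rate is compared with the ReLU activation. Calibrating the mapping between the two scales is precisely what the thesis's Noisy Softplus / parametric activation function method addresses; the code is only a generic illustration.

```python
# Minimal rate-coding sketch (a generic illustration of ANN -> SNN weight
# transfer, not the thesis's Noisy Softplus / PAF method): an integrate-and-fire
# neuron reuses the weights of one "trained" ReLU unit and is driven by
# Poisson input spikes whose rates encode the input intensities.

import numpy as np

rng = np.random.default_rng(0)

# Pretend these came from a trained ANN (assumption for the sketch).
w = rng.normal(0.0, 0.5, size=100)           # weights of one unit
x = rng.uniform(0.0, 1.0, size=100)          # input intensities in [0, 1]
relu_activation = max(0.0, float(w @ x))

# Simulate an integrate-and-fire neuron for T seconds at resolution dt.
T, dt = 1.0, 1e-3
max_rate = 200.0                             # Hz per unit input intensity
v, v_thresh, out_spikes = 0.0, 1.0, 0
for _ in range(int(T / dt)):
    in_spikes = rng.random(100) < x * max_rate * dt   # Poisson input spikes
    v += float(w @ in_spikes)                # integrate weighted input spikes
    if v >= v_thresh:                        # fire and reset
        out_spikes += 1
        v = 0.0
    v = max(v, 0.0)                          # keep membrane non-negative (simplification)

print(f"ReLU activation: {relu_activation:.2f}  SNN output rate: {out_spikes / T:.1f} Hz")
```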
APA, Harvard, Vancouver, ISO, and other styles
41

Sheiretov, Yanko Konstantinov. "Deep penetration magnetoquasistatic sensors." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/16772.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 193-198).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
This research effort extends the capabilities of existing model-based spatially periodic quasistatic-field sensors. The research developed three significant improvements in the field of nondestructive evaluation; the impact of each is detailed below.
1. The design of a distributed-current-drive magnetoresistive magnetometer that matches the model response sufficiently to perform air calibration and absolute property measurement. Replacing the secondary winding with a magnetoresistive sensor allows the magnetometer to be operated at frequencies much lower than ordinarily possible, including static (DC) operation, which enables deep penetration defect imaging. Low frequencies are needed for deep probing of metals, where the depth of penetration is otherwise limited by the skin depth due to the shielding effect of induced eddy currents. The capability to perform such imaging without dependence on calibration standards has substantial cost, ease-of-use, and technological benefits. The absolute property measurement capability is important because it provides a robust comparison for manufacturing quality control and monitoring of aging processes. Air calibration also alleviates the dependence on calibration standards, which can be difficult to maintain.
2. The development and validation of cylindrical geometry models for inductive and capacitive sensors. The cylindrical geometry models enable the design of families of circularly symmetric magnetometers and dielectrometers with the "model-based" methodology, which requires close agreement between actual sensor response and simulated response. These kinds of sensors are needed in applications where the components being tested have circular symmetry, e.g. cracks near fasteners, or where it is important to measure the spatial average of an anisotropic property.
3. The development of accurate and efficient two-dimensional inverse interpolation and grid look-up techniques to determine electromagnetic and geometric properties. The ability to perform accurate and efficient grid interpolation is important for all sensors that follow the model-based principle, but it is particularly important for the complex-shaped grids used with the magnetometers and dielectrometers in this thesis.
A prototype sensor that incorporates all new features, i.e. a circularly symmetric magnetometer with a distributed current drive that uses a magnetoresistive secondary element, was designed, built, and tested. The primary winding is designed to have no net dipole moment, which improves repeatability by reducing the influence of distant objects. It can also support operation at two distinct effective spatial wavelengths. A circuit is designed that places the magnetoresistive sensor in a feedback configuration with a secondary winding to provide the necessary biasing and to ensure a linear transfer characteristic. Efficient FFT-based methods are developed to model magnetometers with a distributed current drive for both Cartesian and cylindrical geometry sensors. Results from measurements with a prototype circular dielectrometer that agree with the model-based analysis are also presented. In addition to the main contributions described so far, this work also includes other related enhancements to the time and space periodic-field sensor models, such as incorporating motion in the models to account for moving media effects. This development is important in low frequency scanning applications.
Some improvements of the existing semi-analytical collocation point models for the standard Cartesian magnetometers and dielectrometers are also presented.
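For context, the skin-depth relation below (a standard result, not specific to this thesis) is why deep defect imaging in metals pushes the drive frequency toward DC, which is what the magnetoresistive secondary element makes practical.

```latex
% Standard skin-depth relation (textbook form): induced eddy currents confine
% a time-harmonic field of angular frequency \omega = 2\pi f to a depth
\[
  \delta \;=\; \sqrt{\frac{2}{\omega \mu \sigma}}
         \;=\; \frac{1}{\sqrt{\pi f \mu \sigma}} ,
\]
% so lowering the frequency toward DC increases the depth of penetration
% in a conductor of permeability \mu and conductivity \sigma.
```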
by Yanko Sheiretov.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
42

Börger, Luca. "Roe deer mating tactics." Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.614310.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Waber, Kristin. "Landscape scale deer management." Thesis, University of East Anglia, 2010. https://ueaeprints.uea.ac.uk/33047/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Patil, Raj. "Deep UV Raman Spectroscopy." Thesis, The University of Arizona, 2016. http://hdl.handle.net/10150/613378.

Full text
Abstract:
This thesis examines the performance of a custom-built deep UV laser (257.5 nm) for Raman spectroscopy and the advantages of Raman spectroscopy with a laser in the deep UV over a laser in the visible range (532 nm). It describes the theory of resonance Raman scattering, the experimental setup for Raman spectroscopy, and a set of Raman spectroscopy measurements. The measurements were performed on biological samples: an oak tree leaf, and Lactobacillus acidophilus and bifidobacteria from probiotic medicinal capsules. Fluorescence-free Raman spectra were acquired for the two samples with the 257.5 nm laser, whereas the Raman spectra acquired with the 532 nm laser were masked by fluorescence. Raman measurements on an inorganic salt, sodium nitrate, showed a resonance Raman effect with the 257.5 nm laser, which led to an enhancement in Raman intensity compared to the 532 nm laser. We were therefore able to demonstrate two advantages of deep UV Raman spectroscopy: first, the possibility of acquiring fluorescence-free spectra for biological samples; second, the possibility of gaining an enhancement in Raman intensity due to the resonance Raman effect. It was observed that the 257.5 nm laser requires optimization to reduce its bandwidth for better resolution and to obtain higher power for a better signal-to-noise ratio. The experimental setup can also be further improved to obtain better resolution. If these improvements are implemented, the deep UV Raman setup will become an important tool for spectroscopy.
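For reference, Raman shifts are reported on a wavenumber scale via the standard conversion below (a textbook relation, not taken from the thesis); the same shift excited at 257.5 nm falls at much shorter absolute wavelengths than when excited at 532 nm, which is what moves the Raman bands away from the region where fluorescence typically appears.

```latex
% Standard Raman-shift conversion: with the excitation and scattered
% wavelengths expressed in nm, the shift in cm^{-1} is
\[
  \Delta\tilde{\nu}\,[\mathrm{cm^{-1}}]
  \;=\; 10^{7} \left( \frac{1}{\lambda_{\mathrm{exc}}\,[\mathrm{nm}]}
                    - \frac{1}{\lambda_{\mathrm{scat}}\,[\mathrm{nm}]} \right).
\]
```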
APA, Harvard, Vancouver, ISO, and other styles
45

Fahr, Mignon. "As Runs the Deer." ScholarWorks@UNO, 2003. http://scholarworks.uno.edu/td/11.

Full text
Abstract:
These eleven chapters comprise Part One of a novel of thirty-seven chapters, entitled As Runs the Deer. It is a dialectic play on the processes of Time, as well as a play with evolving dialects. Nominally set in the 19th c., in an Appalachian-like terrain, it shows the difficulties James Ian Pierson meets when emerging out of his wilderness to re-enter his former life. Opening his own story by means of his sycamore cane, the 19-yr.-old amnesiac must soon reconcile his past with the invading "Now!" He evades the intrusion of a drunken hunter, is overcome by the wintry elements, brought from his icebed by Welsh woodsman Eustace, and befriended by Mercury, ancient herbalist, keeper of the Myths. Frivolous Emily Marie Marchault must also reconcile herself with Ian's uneasy re-entry. Shackled by gilded chains of manners, she sees herself as overprotected by her guardian, Breton, and chips away at his ivory tower.
APA, Harvard, Vancouver, ISO, and other styles
46

Debain, Yann. "Deep Convolutional Nonnegative Autoencoders." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-287352.

Full text
Abstract:
In this thesis, nonnegative matrix factorization (NMF) is viewed as a feed-backward neural network and generalized to a deep convolutional architecture with forward propagation under the β-divergence. NMF and feedforward neural networks are put in relation and a new class of autoencoders is proposed, namely the nonnegative autoencoders. It is shown that NMF is essentially the decoder part of an autoencoder with nonnegative weights and input. The shallow autoencoder with fully connected neurons is extended to a deep convolutional autoencoder with the same properties. Multiplicative factor updates are used to ensure nonnegativity of the weights in the network. As a result, a shallow nonnegative autoencoder (NAE), a shallow convolutional nonnegative autoencoder (CNAE) and a deep convolutional nonnegative autoencoder (DCNAE) are developed. Finally, all three variants of the nonnegative autoencoder are tested on different tasks, such as signal reconstruction and signal enhancement.
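The multiplicative updates mentioned in the abstract can be illustrated with the classic NMF iteration below (Euclidean case, i.e. β = 2 of the β-divergence). In the autoencoder view, H plays the role of the nonnegative code and W the decoder weights; the convolutional and deep variants are not shown, and this is a generic sketch rather than the thesis's implementation.

```python
# Minimal NMF sketch with the classic multiplicative updates (Lee & Seung,
# Euclidean / beta = 2 case). Nonnegativity of W and H is preserved because
# every factor in the update is nonnegative -- the same mechanism used in the
# thesis to keep autoencoder weights nonnegative. Illustration only.

import numpy as np

rng = np.random.default_rng(0)
V = np.abs(rng.normal(size=(64, 200)))      # nonnegative data (e.g. a spectrogram)
r, eps = 8, 1e-9                            # factorization rank, stability constant

W = np.abs(rng.normal(size=(64, r)))        # "decoder" weights
H = np.abs(rng.normal(size=(r, 200)))       # nonnegative code

for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)    # update the code
    W *= (V @ H.T) / (W @ H @ H.T + eps)    # update the decoder weights

print("relative reconstruction error:",
      np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```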
APA, Harvard, Vancouver, ISO, and other styles
47

Halle, Alex, and Alexander Hasse. "Topologieoptimierung mittels Deep Learning." Technische Universität Chemnitz, 2019. https://monarch.qucosa.de/id/qucosa%3A34343.

Full text
Abstract:
Topology optimization is the search for an optimal component geometry for a given use case. For complex problems, topology optimization can require considerable time and computing capacity due to the high level of detail. These drawbacks of topology optimization are to be reduced by means of Deep Learning, so that topology optimization serves the design engineer as an aid that responds within seconds. Deep Learning is the extension of artificial neural networks, with which patterns or behavioural rules can be learned. The aim is thus to solve the hitherto numerically computed topology optimization with a Deep Learning approach. Approaches, computation schemes and first conclusions are presented and discussed.
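A minimal sketch of the general idea, under assumptions of ours rather than the paper's architecture: a small fully convolutional network maps a description of loads and supports to a density field, and in practice such a model would be trained on solutions produced by a conventional topology optimizer, which is what makes the prediction near-instant.

```python
# Generic illustration (not the paper's architecture): a small fully
# convolutional network maps a 2-channel grid describing loads and supports
# to a material-density field in [0, 1]. Training data would come from a
# conventional topology optimizer such as SIMP (assumption).

import torch
import torch.nn as nn

class DensityPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),   # channels: loads, supports
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),           # density in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

model = DensityPredictor()
conditions = torch.zeros(1, 2, 64, 64)   # hypothetical 64x64 design domain
conditions[0, 0, 32, 63] = 1.0           # a point load on the right edge
conditions[0, 1, :, 0] = 1.0             # fixed support along the left edge
density = model(conditions)              # untrained here; output shape (1, 1, 64, 64)
print(density.shape)
```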
APA, Harvard, Vancouver, ISO, and other styles
48

Goh, Hanlin. "Learning deep visual representations." Paris 6, 2013. http://www.theses.fr/2013PA066356.

Full text
Abstract:
Recent advancements in the areas of deep learning and visual information processing have presented an opportunity to unite both fields. These complementary fields combine to tackle the problem of classifying images into their semantic categories. Deep learning brings learning and representational capabilities to a visual processing model that is adapted for image classification. This thesis addresses problems that lead to the proposal of learning deep visual representations for image classification. The problem of deep learning is tackled on two fronts. The first aspect is the problem of unsupervised learning of latent representations from input data. The main focus is the integration of prior knowledge into the learning of restricted Boltzmann machines (RBM) through regularization. Regularizers are proposed to induce sparsity, selectivity and topographic organization in the coding to improve discrimination and invariance. The second direction introduces the notion of gradually transiting from unsupervised layer-wise learning to supervised deep learning. This is done through the integration of bottom-up information with top-down signals. Two novel implementations supporting this notion are explored. The first method uses top-down regularization to train a deep network of RBMs. The second method combines predictive and reconstructive loss functions to optimize a stack of encoder-decoder networks. The proposed deep learning techniques are applied to tackle the image classification problem. The bag-of-words model is adopted due to its strengths in image modeling through the use of local image descriptors and spatial pooling schemes. Deep learning with spatial aggregation is used to learn a hierarchical visual dictionary for encoding the image descriptors into mid-level representations. This method achieves leading image classification performance for object and scene images. The learned dictionaries are diverse and non-redundant. The speed of inference is also high. Building on this, a further optimization is performed for the subsequent pooling step. This is done by introducing a differentiable pooling parameterization and applying the error backpropagation algorithm. This thesis represents one of the first attempts to synthesize deep learning and the bag-of-words model. This union results in many challenging research problems, leaving much room for further study in this area.
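As one concrete example of a differentiable pooling parameterization of the kind the abstract refers to, the sketch below implements generalized-mean pooling, whose exponent can be learned by backpropagation; it is an illustration, not the parameterization proposed in the thesis.

```python
# One possible differentiable pooling parameterization (generalized-mean
# pooling); an illustration of learning the pooling step by backpropagation,
# not the exact scheme used in the thesis.

import torch
import torch.nn as nn

class GeneralizedMeanPooling(nn.Module):
    """Pools a (batch, channels, H, W) map to (batch, channels).

    p = 1 gives average pooling and large p approaches max pooling; since the
    operation is differentiable in p, the pooling "shape" can be learned
    jointly with the rest of the network.
    """

    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.clamp(min=self.eps).pow(self.p)
        return x.mean(dim=(-2, -1)).pow(1.0 / self.p)

pool = GeneralizedMeanPooling()
features = torch.rand(4, 256, 14, 14)    # hypothetical mid-level feature maps
pooled = pool(features)                  # shape (4, 256); gradients flow into p
print(pooled.shape, pool.p)
```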
APA, Harvard, Vancouver, ISO, and other styles
49

Geirsson, Gunnlaugur. "Deep learning exotic derivatives." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-430410.

Full text
Abstract:
Monte Carlo methods in derivative pricing are computationally expensive, in particular for evaluating a model's partial derivatives with respect to its inputs. This research proposes the use of deep learning to approximate such valuation models for highly exotic derivatives, using automatic differentiation to evaluate input sensitivities. Deep learning models are trained to approximate Phoenix Autocall valuation using a proprietary model used by Svenska Handelsbanken AB. Models are trained on large datasets of low-accuracy (10^4 simulations) Monte Carlo data, successfully learning the true model with an average error of 0.1% on validation data generated by 10^8 simulations. A specific model parametrisation is proposed for 2-day valuation only, to be recalibrated interday using transfer learning. Automatic differentiation approximates sensitivity to (normalised) underlying asset prices with a mean relative error generally below 1.6%. The overall error when predicting sensitivity to implied volatility is found to lie within 10%-40%. Near-identical results are found with finite differences and automatic differentiation in both cases. Automatic differentiation is not successful at capturing sensitivity to interday changes in contract value, though errors of 8%-25% are achieved by finite differences. Model recalibration by transfer learning proves to converge over 15 times faster and with up to 14% lower relative error than training from random initialisation. The results show that deep learning models can efficiently learn Monte Carlo valuation, and that these models can be quickly recalibrated by transfer learning. The deep learning model gradient computed by automatic differentiation proves a good approximation of the true model sensitivities. Future research proposals include studying optimised recalibration schedules, using training data generated by single Monte Carlo price paths, and studying additional parameters and contracts.
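The surrogate-plus-autodiff workflow can be sketched as below; the stand-in pricing function, contract inputs and network size are assumptions for illustration, not the proprietary model or contracts studied in the thesis.

```python
# Sketch of the surrogate-plus-autodiff idea (assumptions throughout): a
# network is fitted to prices from a slow pricer, and torch.autograd then
# gives sensitivities of the surrogate price to its inputs essentially for free.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "slow" pricer: any function of (spot, volatility) producing a price.
def mc_price(spot, vol):
    return torch.relu(spot - 1.0) + 0.4 * vol * spot   # placeholder, not a real payoff

# Training data: normalised spot in [0, 2], implied volatility in [0, 0.5].
X = torch.rand(5000, 2) * torch.tensor([2.0, 0.5])
y = mc_price(X[:, 0], X[:, 1]).unsqueeze(1)

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), y)
    loss.backward()
    opt.step()

# Sensitivities of the surrogate price to its inputs via automatic differentiation.
x = torch.tensor([[1.1, 0.25]], requires_grad=True)
price = net(x)
grads, = torch.autograd.grad(price, x)
print(f"price {price.item():.4f}  dP/dspot {grads[0, 0].item():.4f}  "
      f"dP/dvol {grads[0, 1].item():.4f}")
```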
APA, Harvard, Vancouver, ISO, and other styles
50

Wolfe, Traci. "Digging deep for meaning." Online version, 2008. http://www.uwstout.edu/lib/thesis/2008/2008wolfet.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles