
Dissertations on the topic "Data synthesis"



Consult the top 50 dissertations for research on the topic "Data synthesis".

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of a publication as a PDF and read its online abstract, where these are available in the metadata.

Browse dissertations from a wide range of disciplines and compile a correctly formatted bibliography.

1

Viklund, Joel. „Synthesis of sequential data“. Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-443542.

Abstract:
Good generative models for short time series data exist and have been applied for both data augmentation and privacy protection purposes in the past. A common theme among existing generative models is that they all use a recurrent neural network (RNN) architecture, which limits the length of the sequences the models can handle. Real-world problems may involve data with longer sequences, and it is such data that this thesis attempts to synthesize. By combining the recently successful TimeGAN framework with a temporal convolutional network component architecture, we generate synthetic sequential data for two toy data sets: sequential MNIST and multivariate sine waves. The results, although relying solely on visual inspection, strongly indicate that the model manages to capture long-range temporal dynamics as well as relations between different features in the multivariate sine waves data set. To make our model applicable to real-world data sets, we suggest two improvements. First, validation of the generated data should not rely only on visual inspection but should also ensure that the synthetic data follows the same statistical distribution as the real data. Second, depending on the task, the model should be refined so that the synthetic samples look even more realistic.
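The architectural move in this abstract, replacing the RNN with a temporal convolutional network (TCN), rests on causal dilated convolutions, whose stacked receptive field grows exponentially with depth. A minimal numpy sketch of that general building block (an illustration only, not the thesis's actual model):

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation):
    """1-D causal convolution: output at t depends only on x[t], x[t-d], x[t-2d], ..."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad so no future samples leak in
    return np.array([
        sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of causal dilated layers."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)
```

Stacking layers with dilations 1, 2, 4, ... gives `receptive_field(2, [1, 2, 4]) == 8` from only three layers, which is why TCNs reach much further back in time than an RNN of comparable cost.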
2

Zhang, Xi. „Data Synthesis for Object Recognition“. Thesis, Illinois Institute of Technology, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10642437.

Abstract:

Large and balanced datasets are normally crucial for many machine learning models, especially when the problem is defined in a high-dimensional space due to high complexity. In real-world applications, it is usually very hard and/or expensive to obtain adequate amounts of labeled data, even with the help of crowd-sourcing. A possible approach to these problems is to create synthetic data and use it for training. This approach has been applied in many areas of computer vision, including document recognition, object retrieval, and object classification. While boosted performance has been demonstrated using synthetic data, that boost is limited by two main factors in existing approaches. First, most existing approaches for creating and using synthetic data are application-specific and often heuristic in nature, and thus cannot benefit other application areas. Second, existing approaches do not recognize an inherent difference between synthetic data and actual data, termed the synthetic gap in this proposal: not all possible patterns and structures of actual data are present in the synthetic data. To address these problems and make synthetic data better improve the performance of learning algorithms, this proposal considers general ways of creating and using synthetic data. The problem caused by the synthetic gap is studied, and approaches to overcome the gap are proposed. Experimental results demonstrate that the proposed approach is efficient and can boost the performance of many computer vision applications, including building roof classification, character classification, and point cloud object classification.

3

Hardwick, Jonathan Robert. „Synthesis of Noise from Flyover Data“. Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/50531.

Abstract:
Flyover noise primarily affects citizens who live near places with high air traffic, such as airports or military bases, and can be a significant annoyance. The focus of this thesis is a method for creating a high-fidelity sound source simulation of rotorcraft noise, for the purpose of producing a complete flyover scenario to be used in psychoacoustic testing. The sound source simulation concentrates on rotorcraft noise fluctuations during level flight, to aid psychoacoustic testing of human perception of such noise. Current methods model only the stationary or time-averaged components when synthesizing the sound source. The synthesis process described in this thesis determines the steady-state waveform of the noise as well as the time-varying fluctuations for each rotor individually, using an empirical approach that synthesizes flyover noise directly from physical flyover recordings. Four different methods of synthesis were created to determine the combination of components that produces a high-fidelity sound source simulation:
a) unmodulated main rotor;
b) modulated main rotor;
c) unmodulated main rotor combined with the unmodulated tail rotor;
d) modulated main rotor combined with the modulated tail rotor.
Since the time-varying components of the source sound are important to a high-fidelity simulation, five types of time-varying fluctuations, or modulations, were implemented to determine the importance of the fluctuating components:
a) no modulation;
b) randomly applied generic modulation;
c) coherently applied generic modulation;
d) randomly applied specific modulation;
e) coherently applied specific modulation.
Generic modulation is derived from a different section of the source recording than the one to which it is applied.
For the purposes of this study, generic modulation is not clearly dominated by either thickness or loading noise characteristics, but still displays long-term modulation. Random application of the modulation means that absolute modulation phase and amplitude information is lost across the frequency spectrum. Coherent application means that an attempt is made to line up the absolute phase and amplitude of the modulation signal with the signal it replaces (i.e. the modulation stripped from the original recording, expanded or contracted to fit the signal to which it is applied). Specific modulation is the modulation from the source recording that is being reconstructed. A psychoacoustic test was performed to rank the fidelity of each synthesis method and each type of modulation; performing this comparison for two different emission angles shows whether the ranking differs between emission angles. The modulated main rotor combined with the modulated tail rotor showed much higher fidelity than any of the other synthesis methods. The psychoacoustic test showed that modulation is necessary to produce a high-fidelity sound source simulation; however, generic modulation and randomly applied specific modulation proved to be inadequate substitutes for coherently applied specific modulation. The results show that more research is necessary to properly simulate a full flyover scenario; specifically, more data is needed to properly model the modulation for level flight.
Master of Science
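The coherent-versus-random distinction above can be made concrete with a toy model: extract a modulation envelope from a recording, then reapply it either time-aligned (coherent) or circularly shifted (random, discarding absolute phase). The moving-RMS envelope here is my stand-in for the thesis's actual modulation analysis, not its method:

```python
import numpy as np

def extract_envelope(signal, win):
    """Crude modulation envelope: moving RMS over a short window."""
    padded = np.pad(signal ** 2, (win // 2, win - win // 2 - 1), mode='edge')
    return np.sqrt(np.convolve(padded, np.ones(win) / win, mode='valid'))

def apply_modulation(carrier, envelope, coherent=True, rng=None):
    """Coherent: envelope stays time-aligned with the carrier it came from.
    Random: envelope is circularly shifted, losing absolute phase/amplitude
    alignment, as in the 'randomly applied' conditions."""
    if not coherent:
        rng = rng or np.random.default_rng(0)
        envelope = np.roll(envelope, rng.integers(1, len(envelope)))
    return carrier * envelope
```

The same envelope values are applied in both cases; only their alignment with the carrier differs, which is exactly the property the listening test ranked.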
4

Tanco, L. Molina. „Human motion synthesis from captured data“. Thesis, University of Surrey, 2002. http://epubs.surrey.ac.uk/844411/.

Abstract:
Animation of human motion is one of the most challenging topics in computer graphics, due to the large number of degrees of freedom of the body and to our ability to detect unnatural motion. Keyframing and interpolation remain the form of animation preferred by most animators because of the control and flexibility they provide, but this is a labour-intensive process that requires skills that take years to acquire. Human motion capture techniques provide accurate measurements of a performer's motion that can be mapped onto an animated character to produce strikingly natural animation. This raises the problem of how to allow an animator to modify captured movement to produce a desired animation whilst preserving its natural quality. This thesis introduces a new approach to the animation of human motion that combines the flexibility of keyframing with the visual quality of motion capture data. In particular, it addresses the problem of synthesising natural inbetween motion for sparse keyframes, and proposes to obtain this motion by sampling high-quality human motion capture data. The problem of keyframe interpolation is formulated as a search problem in a graph, which presents two difficulties: the complexity of the search makes it impractical for the large motion capture databases required to model human motion, and the global temporal structure of the data may not be preserved in the search. To address these difficulties, this thesis introduces a layered framework that both reduces the complexity of the search and preserves the global temporal structure of the data. The first layer is a simplification of the graph obtained by clustering methods; it enables efficient planning of the search for a path between start and end keyframes. The second layer directly samples segments of the original motion data to synthesise realistic inbetween motion for the keyframes.
A number of additional contributions are made, including novel representations for human motion, pose-similarity cost functions, dynamic programming algorithms for efficient search, and quantitative evaluation methods. Results of realistic inbetween motion are presented for databases of up to 120 sequences (35,000 frames). Keywords: Human Motion Synthesis, Motion Capture, Character Animation, Graph Search, Clustering, Unsupervised Learning, Markov Models, Dynamic Programming.
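The two-layer idea, plan coarsely on a cluster graph and then splice real motion segments that realise the plan, can be sketched in a few lines. This is a toy illustration with assumed data structures (adjacency dict, per-cluster frame lists), not the thesis's algorithm:

```python
from collections import deque

def plan_cluster_path(adjacency, start, goal):
    """Layer 1: breadth-first search on the coarse cluster graph."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal cluster unreachable

def synthesize_inbetween(cluster_path, segments):
    """Layer 2: splice real motion-capture segments along the planned path."""
    return [frame for c in cluster_path for frame in segments[c]]
```

Planning on the small cluster graph keeps the search cheap, while sampling real segments in layer 2 preserves the captured data's natural quality, mirroring the division of labour the abstract describes.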
5

Futang, Zhang. „Data Synthesis in PCM Telemetry System“. International Foundation for Telemetering, 1986. http://hdl.handle.net/10150/615425.

Abstract:
International Telemetering Conference Proceedings / October 13-16, 1986 / Riviera Hotel, Las Vegas, Nevada
In the field of re-entry telemetry, data synthesis is an important research task for multi-beam, multi-receiver systems. This paper presents a microcomputer-based method to synthesize PCM data in real time. The performance of various criteria used in data synthesis systems is also analyzed.
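The paper's actual combining criteria are not described in this abstract. One common criterion in multi-receiver telemetry, offered here only as an assumed example, is per-bit majority voting across the receiver streams:

```python
def majority_vote(frames):
    """Combine the same PCM frame received by several receivers/beams.
    frames: list of equal-length bit lists, one per receiver.
    Each output bit is the value reported by the majority of receivers."""
    n = len(frames)
    return [1 if sum(bits) * 2 > n else 0 for bits in zip(*frames)]
```

With three receivers, a single receiver's bit error is outvoted, which is the basic way a multi-receiver system can outperform any one of its links.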
6

Koecher, Matthew R. „Hardware Synthesis of Synchronous Data Flow Models“. BYU ScholarsArchive, 2004. https://scholarsarchive.byu.edu/etd/20.

Abstract:
Synchronous Dataflow (SDF) graphs are a convenient way to represent many signal processing and dataflow operations. Nodes within SDF graphs represent computation while arcs represent dependencies between nodes. Using a graph representation, SDF graphs formally specify a dataflow algorithm without any assumptions on the final implementation. This allows an SDF model to be synthesized into a variety of implementation techniques including both software and hardware. This thesis presents a technique for generating an abstract hardware representation from SDF models. The techniques presented here operate on SDF models defined structurally within the Ptolemy modeling environment. The behavior of the nodes within Ptolemy SDF models is specified in software and can be simple, such as a single arithmetic operation, or arbitrarily complex. This thesis presents a technique for extracting the behavior of a limited class of SDF nodes defined in software and generating a structural description of the SDF model based on primitive arithmetic and logical operations. This synthesized graph can be used for subsequent hardware synthesis transformations.
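A core computation behind any SDF synthesis flow is the repetition vector: how many times each node must fire per iteration so that token production and consumption balance on every arc. The sketch below is my own illustration of that balance equation for a connected graph, not code from the thesis or from Ptolemy:

```python
from fractions import Fraction
from math import gcd

def repetition_vector(arcs, nodes):
    """Solve r[src] * produced == r[dst] * consumed for every arc
    (src, dst, produced, consumed), returning the smallest positive
    integer firing counts -- the SDF repetition vector."""
    rates = {nodes[0]: Fraction(1)}
    changed = True
    while changed:                       # propagate rates across the arcs
        changed = False
        for src, dst, prod, cons in arcs:
            if src in rates and dst not in rates:
                rates[dst] = rates[src] * prod / cons
                changed = True
            elif dst in rates and src not in rates:
                rates[src] = rates[dst] * cons / prod
                changed = True
    # scale the fractional rates up to the smallest integer solution
    denom_lcm = 1
    for f in rates.values():
        denom_lcm = denom_lcm * f.denominator // gcd(denom_lcm, f.denominator)
    return {n: int(f * denom_lcm) for n, f in rates.items()}
```

For an arc where A produces 2 tokens and B consumes 3, the vector {A: 3, B: 2} makes 6 tokens flow per iteration with no net accumulation, which is the consistency property a synthesizer checks before scheduling.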
7

Gordon, Michael. „SGLS COMMAND DATA ENCODING USING DIRECT DIGITAL SYNTHESIS“. International Foundation for Telemetering, 1992. http://hdl.handle.net/10150/608937.

Abstract:
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California
The Space Ground Link Subsystem (SGLS) provides full duplex communications for commanding, tracking, telemetry and ranging between spacecraft and ground stations. The up-link command signal is an S-Band carrier phase modulated with the frequency shift keyed (FSK) command data. The command data format is a ternary (S, 1, 0) signal. Command data rates of 1, 2, and 10 kbps are used. The method presented uses direct digital synthesis (DDS) to generate the SGLS command data and clock signals. The ternary command data and clock signals are input to the encoder, and an FSK subcarrier with an amplitude modulated clock is digitally generated. The command data rate determines the frequencies of the S, 1, 0 tones. DDS ensures that phase continuity will be maintained, and frequency stability will be determined by the microprocessor crystal accuracy. Frequency resolution can be maintained to within a few Hz from DC to over 2 MHz. This allows for the generation of the 1 and 2 kbps command data formats as well as the newer 10 kbps format. Additional formats could be accommodated through software modifications. The use of digital technology provides for encoder self-testing and more comprehensive error reporting.
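The phase continuity DDS guarantees comes from keeping a single phase accumulator and only changing its per-sample increment when the symbol (and hence the tone) changes. A simplified software model of that mechanism follows; the tone frequencies in the test are illustrative placeholders, since the actual SGLS tone plan is not given in this abstract:

```python
import math

def dds_fsk(symbols, tone_freqs, fs, samples_per_symbol):
    """Direct digital synthesis of a ternary FSK waveform.
    One phase accumulator is advanced by a per-symbol tuning word, so the
    output is phase-continuous across every tone switch."""
    phase, out = 0.0, []
    for sym in symbols:
        step = 2 * math.pi * tone_freqs[sym] / fs   # tuning word for this tone
        for _ in range(samples_per_symbol):
            out.append(math.sin(phase))
            phase = (phase + step) % (2 * math.pi)
    return out
```

Because the accumulator never resets, the largest sample-to-sample jump is bounded by the biggest phase step, even at symbol boundaries, which is exactly the phase-continuity property the paper relies on.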
8

Yu, Qingzhao. „Bayesian synthesis“. Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1155324080.

9

Scott, Simon David. „A data-driven approach to visual speech synthesis“. Thesis, University of Bath, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307116.

10

Inanoglu, Zeynep. „Data driven parameter generation for emotional speech synthesis“. Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612250.

11

Jun, Yao, und Liu Shi-yan. „Real Time Telemetry Data Synthesis with the TMS320C25“. International Foundation for Telemetering, 1992. http://hdl.handle.net/10150/611931.

Abstract:
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California
This paper presents a method for real-time telemetry data synthesis for multi-beam, multi-receiver systems. For the practical implementation, we introduce a TMS320C25-based data synthesis board. A large number of simulation experiments yielded satisfactory results, clearly improving the performance of the telemetry system; these techniques and results therefore have practical application value.
12

Dall, Rasmus. „Statistical parametric speech synthesis using conversational data and phenomena“. Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/29016.

Abstract:
Statistical parametric text-to-speech synthesis currently relies on predefined and highly controlled prompts read in a "neutral" voice. This thesis presents work on utilising recordings of free conversation for the purpose of filled pause synthesis and as an inspiration for improved general modelling of speech for text-to-speech purposes. A corpus of both standard prompts and free conversation is presented, and the potential usefulness of conversational speech as the basis for text-to-speech voices is validated. Additionally, psycholinguistic experimentation shows that filled pauses can have subconscious benefits for the listener, but that current text-to-speech voices cannot replicate these effects. A method for pronunciation-variant forced alignment is presented in order to obtain more accurate automatic speech segmentation, which is particularly poor for spontaneously produced speech. This pronunciation-variant alignment is used not only to create a more accurate underlying acoustic model, but also to drive more natural pronunciation prediction at synthesis time. While this improves both the standard and spontaneous voices, the naturalness of voices based on spontaneous speech still lags behind that of voices based on standard read prompts. Thus, the synthesis of filled pauses is investigated through specific phonetic modelling of filled pauses and through techniques for mixing standard prompts with spontaneous utterances, in order to retain the higher quality of standard-speech-based voices while still utilising the spontaneous speech for filled pause modelling. A method for predicting where to insert filled pauses in the speech stream is also developed and presented, relying on an analysis of human filled pause usage and a mix of language modelling methods; it achieves an insertion accuracy in close agreement with human usage.
The various approaches are evaluated and their improvements documented throughout the thesis, however, at the end the resulting filled pause quality is assessed through a repetition of the psycholinguistic experiments and an evaluation of the compilation of all developed methods.
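The thesis's insertion model mixes several language-modelling signals. As a deliberately crude illustration of one underlying intuition, that hesitations tend to precede words the model finds surprising, consider this assumed bigram-based sketch (the threshold, pause token, and probability table are all hypothetical):

```python
def insert_filled_pauses(words, bigram_prob, threshold=0.1, pause="um"):
    """Insert a filled pause before any word whose bigram probability,
    given the previous word, falls below a threshold -- a rough proxy
    for points where a speaker might hesitate."""
    out = [words[0]]
    for prev, word in zip(words, words[1:]):
        if bigram_prob.get((prev, word), 0.0) < threshold:
            out.append(pause)
        out.append(word)
    return out
```

A real system would combine such probabilities with observed human pause statistics, as the abstract indicates, rather than rely on a single hard threshold.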
13

Kline, David. „Systematically Missing Subject-Level Data in Longitudinal Research Synthesis“. The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1440067809.

14

Yiu, Lai Kuen Candy. „Chinese character synthesis : towards universal Chinese information exchange“. HKBU Institutional Repository, 2003. http://repository.hkbu.edu.hk/etd_ra/477.

15

Burroughes, Janet Eirlys. „The synthesis of estuarine bathymetry from sparse sounding data“. Thesis, University of Plymouth, 2001. http://hdl.handle.net/10026.1/1887.

Abstract:
The two aims of the project were: 1. devising a system for predicting areas of bathymetric change within the Fal estuary; 2. formulating and evaluating a method for interpolating single-beam acoustic bathymetry that avoids interpolation artefacts. To address these aims, sources of bathymetric data for the Fal estuary were identified as Truro Harbour Office, Cornwall County Council and the Environment Agency. The data collected from these sources included red-wavelength Lidar, aerial photography and single-beam acoustic bathymetry from a number of different years. These data were input into a Geographic Information System (GIS) and assessed for suitability for data comparison and hence for assessing temporal trends in bathymetry within the estuary. Problems encountered during interpolation of the acoustic bathymetry motivated the latter aim of the project: to formulate an interpolation system suitable for interpolating the single-beam bathymetric data in a realistic way, avoiding serious interpolation artefacts. This aim was met through the following processes:
1. An interpolation system was developed, using polygonal zones bounded by channels and coastlines, to prevent interpolation across these boundaries. This system, based on Inverse Distance Weighting (IDW) interpolation, was referred to as Zoned Inverse Distance Weighting (ZIDW).
2. ZIDW was found, by visual inspection, to eliminate the interpolation artefacts described above.
3. The processes of identifying sounding lines and channels, and of allocating soundings and output grid cells to polygons, were successfully automated to allow ZIDW to be applied to large and multiple data sets. Manual intervention was retained for the processes performed most successfully by the human brain, to optimise the results of ZIDW.
4. To formalise the theory of ZIDW it was applied to a range of idealised, mathematically defined channels. For simple straight and regularly curved mathematical channels, interpolation by the standard TIN method was found to perform as well as ZIDW.
5. Investigation of sinusoidal channels within a rectangular estuary, however, revealed that the TIN method begins to produce serious interpolation artefacts where sounding lines are not parallel to the centre lines of channels and ridges. Hence, overall, ZIDW was determined mathematically to be the optimum method of interpolation for single-beam bathymetric data.
6. Finally, ZIDW was refined, using data from the Humber and Gironde estuaries, to achieve universal applicability for interpolating single-beam echo sounding data from any estuary.
7. The refinements involved allowance for non-continuous, flood- and ebb-type channels; consideration of the effects of the scale of the estuary; smoothing of the channels using cubic splines; interpolation using a 'smart' ellipse; and the option to reconstruct sounding lines from data that had previously been re-ordered.
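The core of ZIDW is ordinary inverse distance weighting plus one rule: a sounding contributes only if it lies in the same channel-bounded polygon as the point being estimated. A minimal sketch of that rule (zone assignment is assumed to have been done already; the thesis's full system also handles line reconstruction, splines, and the 'smart' ellipse):

```python
import math

def zidw(samples, query, power=2):
    """Zoned Inverse Distance Weighting.
    samples: (x, y, depth, zone) soundings; query: (x, y, zone).
    Soundings outside the query's zone are ignored, so depths never
    interpolate across a channel or coastline boundary."""
    qx, qy, qzone = query
    num = den = 0.0
    for x, y, depth, zone in samples:
        if zone != qzone:
            continue                      # the 'zoned' part of ZIDW
        d = math.hypot(x - qx, y - qy)
        if d == 0:
            return depth                  # exact hit on a sounding
        w = 1.0 / d ** power
        num += w * depth
        den += w
    return num / den if den else float('nan')
```

Plain IDW would let the 99 m sounding across the channel in the test below drag the estimate towards an unphysical value; zoning excludes it, which is precisely the artefact the thesis set out to remove.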
16

Spaniol, Jutta. „Synthesis of fractal-like surfaces from sparse data bases“. Thesis, University of Exeter, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335017.

17

Nevitt, S. J. „Data sharing and transparency : the impact on evidence synthesis“. Thesis, University of Liverpool, 2017. http://livrepository.liverpool.ac.uk/3017585/.

18

Williams, Alan C. „A behavioural VHDL synthesis system using data path optimisation“. Thesis, University of Southampton, 1997. https://eprints.soton.ac.uk/251233/.

Abstract:
MOODS (Multiple Objective Optimisation in Data and control path synthesis) is a synthesis system which provides the ability to automatically optimise a design from a behavioural to a structural VHDL description. This thesis details two sets of enhancements made to the original system to improve the overall quality of the final hardware implementations obtained, and expand the range of the accepted VHDL subset. Whereas the original MOODS considered each functional unit in the target module library to be a purely combinational logic block, the 'expanded modules' developed for this project provide a means of implementing sequential multi-cycle modules. These modules are defined as technology-independent templates, which are inline expanded into the internal design structure during synthesis. This enables inter-module optimisation to occur at the sub-module level, thus affording greater opportunities for unit sharing and module binding. The templates also facilitate the development of specialised interface modules. These enable the use of fixed timing I/O protocols for external interfacing, while maintaining maximum scheduling flexibility within the body of the behaviour. The second set of enhancements includes an improved implementation of behavioural VHDL as input to the system. This expands the previously limited subset to include such elements as signals, wait statements, concurrent processes, and functions and procedures. These are implemented according to the IEEE standard thereby preserving the computational effects of the VHDL simulation model. The final section of work involves the development and construction of an FPGA-based real-time audio-band spectrum analyser, synthesised within the MOODS environment. This design process provides valuable insights into the strengths and weaknesses of both MOODS and behavioural synthesis in general, serving as a firm foundation to guide future development of the system.
19

Hu, Tianrui. „Detecting Bots using Stream-based System with Data Synthesis“. Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98595.

Abstract:
Machine learning has shown great success in building security applications, including bot detection. However, many machine learning models are difficult to deploy since model training requires a continuous supply of representative labeled data, which is expensive and time-consuming to obtain in practice. In this thesis, we address this problem by building a bot detection system with a data synthesis method, exploring bot detection with limited data. We collected the network traffic of 3 online services in three different months within a year (23 million network requests). We develop a novel stream-based feature encoding scheme that lets our model perform real-time bot detection on anonymized network data. We propose a data synthesis method to synthesize unseen (or future) bot behavior distributions, enabling our system to detect bots with extremely limited labeled data. The synthesis method is distribution-aware, using two different generators in a Generative Adversarial Network to synthesize data for the clustered regions and the outlier regions of the feature space. We evaluate this idea and show that our method can train a model that outperforms existing methods with only 1% of the labeled data. We show that data synthesis also improves the model's sustainability over time and speeds up retraining. Finally, we compare data synthesis and adversarial retraining and show that they work complementarily to improve model generalizability.
Master of Science
An internet bot is software that performs simple, automated tasks over the internet under computer control. Although some bots are legitimate, many are operated to perform malicious behaviors, causing severe security and privacy issues. To address this problem, machine learning (ML) models, which have shown great success in building security applications, are widely used for detecting bots, since they can identify hidden patterns by learning from data. However, many ML-based approaches are difficult to deploy since model training requires labeled data, which are expensive and time-consuming to obtain in practice, especially for security tasks. Meanwhile, the dynamically changing nature of malicious bots means bot detection models need a continuous supply of representative labeled data to stay up to date, which makes bot detection more challenging. In this thesis, we build an ML-based bot detection system to detect advanced malicious bots in real time by processing network traffic data. We explore using a data synthesis method to detect bots with limited training data, addressing the problem of limited and unrepresentative labeled data. Our proposed data synthesis method synthesizes unseen (or future) bot behavior distributions, enabling our system to detect bots with extremely limited labeled data. We evaluate our approach using real-world datasets we collected and show that our model outperforms existing methods using only 1% of the labeled data. We show that data synthesis also improves the model's sustainability over time and makes it easier to keep the model up to date. Finally, we show that our method can work complementarily with adversarial retraining to improve model generalizability.
20

Robb, Esther Anne. „Data-Efficient Learning in Image Synthesis and Instance Segmentation“. Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/104676.

Abstract:
Modern deep learning methods have achieved remarkable performance on a variety of computer vision tasks, but frequently require large, well-balanced training datasets to achieve high-quality results. Data-efficient performance is critical for downstream tasks such as automated driving or facial recognition. We propose two methods of data-efficient learning for the tasks of image synthesis and instance segmentation. We first propose a method for high-quality and diverse image generation by finetuning on only 5-100 images. Our method factors a pretrained model into a small but highly expressive weight space for finetuning, which discourages overfitting on a small training set. We validate our method in a challenging few-shot setting of 5-100 images in the target domain and show significant visual-quality gains compared with existing GAN adaptation methods. Next, we introduce a simple adaptive instance segmentation loss which achieves state-of-the-art results on the LVIS dataset. We demonstrate that rare categories are heavily suppressed by correct background predictions, which reduce the probability of all foreground categories with equal weight. Due to the relative infrequency of rare categories, this leads to an imbalance that biases the model towards predicting more frequent categories. Based on this insight, we develop DropLoss -- a novel adaptive loss that compensates for this imbalance without a trade-off between rare and frequent categories.
Master of Science
Many of the impressive results seen in modern computer vision rely on learning patterns from huge datasets of images, but these datasets may be expensive or difficult to collect. Many applications of computer vision need to learn from a very small number of examples, such as learning to recognize an unusual traffic event and behave safely in a self-driving car. In this thesis we propose two methods of learning from only a few examples. Our first method generates novel, high-quality and diverse images using a model finetuned on only 5-100 images. We start with an image generation model that was trained on a much larger image set (70K images) and adapt it to a smaller image set (5-100 images), selectively training only part of the network to encourage diversity and prevent memorization. Our second method focuses on the instance segmentation setting, where the model predicts (1) which objects occur in an image and (2) their exact outlines. This setting commonly suffers from long-tail distributions, where some known objects occur frequently (e.g. "human" may occur 1000+ times) but most occur only a few times (e.g. "cake" or "parrot" may occur only 10 times). We observed that the "background" label has a disproportionate effect of suppressing the rare object labels, and we use this insight to develop a method that balances suppression from background classes during training.
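The mechanism described above, stopping correct background predictions from suppressing rare categories, can be sketched as a masked cross-entropy: for background samples, rare-class logits are simply dropped from the softmax normalisation, so a confident "background" answer no longer pushes rare-class probabilities down. This is a simplified numpy reading of the DropLoss idea, not the paper's exact formulation:

```python
import numpy as np

def droploss(logits, label, rare_mask):
    """Cross-entropy over (foreground classes + background, last index).
    For a background sample, rare-class logits are excluded from the
    softmax, removing their suppression by correct background predictions."""
    keep = np.ones(len(logits), dtype=bool)
    if label == len(logits) - 1:        # this sample's true label is background
        keep = ~rare_mask               # drop rare classes from normalisation
        keep[-1] = True                 # always keep the background logit
    z = logits[keep]
    z = z - z.max()                     # stabilised log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    new_index = keep[:label].sum()      # label's position after masking
    return -log_probs[new_index]
```

For foreground samples the loss reduces to ordinary cross-entropy, so frequent categories are trained exactly as before; only the background-to-rare suppression path is cut.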
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Karlsson, Anton, und Torbjörn Sjöberg. „Synthesis of Tabular Financial Data using Generative Adversarial Networks“. Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273633.

Der volle Inhalt der Quelle
Annotation:
Digitalization has led to tons of available customer data and possibilities for data-driven innovation. However, the data needs to be handled carefully to protect the privacy of the customers. Generative Adversarial Networks (GANs) are a promising recent development in generative modeling. They can be used to create synthetic data which facilitate analysis while ensuring that customer privacy is maintained. Prior research on GANs has shown impressive results on image data. In this thesis, we investigate the viability of using GANs within the financial industry. We investigate two state-of-the-art GAN models for synthesizing tabular data, TGAN and CTGAN, along with a simpler GAN model that we call WGAN. A comprehensive evaluation framework is developed to facilitate comparison of the synthetic datasets. The results indicate that GANs are able to generate quality synthetic datasets that preserve the statistical properties of the underlying data and enable a viable and reproducible subsequent analysis. It was however found that all of the investigated models had problems with reproducing numerical data.
Digitaliseringen har fört med sig stora mängder tillgänglig kunddata och skapat möjligheter för datadriven innovation. För att skydda kundernas integritet måste dock uppgifterna hanteras varsamt. Generativa Motstidande Nätverk (GANs) är en ny lovande utveckling inom generativ modellering. De kan användas till att syntetisera data som underlättar dataanalys samt bevarar kundernas integritet. Tidigare forskning på GANs har visat lovande resultat på bilddata. I det här examensarbetet undersöker vi gångbarheten av GANs inom finansbranchen. Vi undersöker två framstående GANs designade för att syntetisera tabelldata, TGAN och CTGAN, samt en enklare GAN modell som vi kallar för WGAN. Ett omfattande ramverk för att utvärdera syntetiska dataset utvecklas för att möjliggöra jämförelse mellan olika GANs. Resultaten indikerar att GANs klarar av att syntetisera högkvalitativa dataset som bevarar de statistiska egenskaperna hos det underliggande datat, vilket möjliggör en gångbar och reproducerbar efterföljande analys. Alla modellerna som testades uppvisade dock problem med att återskapa numerisk data.
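One standard check from the kind of evaluation framework described above can be sketched as follows. This is a minimal illustration, not the thesis's framework: compare the real and synthetic marginal distributions of a numerical column with a two-sample Kolmogorov-Smirnov statistic (maximum distance between empirical CDFs).

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample KS statistic: max distance between the empirical CDFs of a and b."""
    values = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), values, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), values, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(42)
real = rng.normal(0.0, 1.0, size=2000)           # stand-in for one real numerical column
good_synth = rng.normal(0.0, 1.0, size=2000)     # synthetic data from the same distribution
bad_synth = rng.normal(1.5, 1.0, size=2000)      # synthetic data with a shifted distribution

ks_good = ks_statistic(real, good_synth)
ks_bad = ks_statistic(real, bad_synth)
```

A low statistic per column is a necessary (though not sufficient) condition for the synthetic data to preserve the statistical properties of the underlying data.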
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Saker, Vanessa. „Automated feature synthesis on big data using cloud computing resources“. Master's thesis, University of Cape Town, 2020. http://hdl.handle.net/11427/32452.

Der volle Inhalt der Quelle
Annotation:
The data analytics process has many time-consuming steps. Combining data that sits in a relational database warehouse into a single relation while aggregating important information in a meaningful way and preserving relationships across relations, is complex and time-consuming. This step is exceptionally important as many machine learning algorithms require a single file format as an input (e.g. supervised and unsupervised learning, feature representation and feature learning, etc.). An analyst is required to manually combine relations while generating new, more impactful information points from data during the feature synthesis phase of the feature engineering process that precedes machine learning. Furthermore, the entire process is complicated by Big Data factors such as processing power and distributed data storage. There is an open-source package, Featuretools, that uses an innovative algorithm called Deep Feature Synthesis to accelerate the feature engineering step. However, when working with Big Data, there are two major limitations. The first is the curse of modularity - Featuretools stores data in-memory to process it and thus, if data is large, it requires a processing unit with a large memory. Secondly, the package is dependent on data stored in a Pandas DataFrame. This makes the use of Featuretools with Big Data tools such as Apache Spark, a challenge. This dissertation aims to examine the viability and effectiveness of using Featuretools for feature synthesis with Big Data on the cloud computing platform, AWS. Exploring the impact of generated features is a critical first step in solving any data analytics problem. If this can be automated in a distributed Big Data environment with a reasonable investment of time and funds, data analytics exercises will benefit considerably. In this dissertation, a framework for automated feature synthesis with Big Data is proposed and an experiment conducted to examine its viability. 
Using this framework, an infrastructure was built to support the process of feature synthesis on AWS that made use of S3 storage buckets, Elastic Cloud Computing services, and an Elastic MapReduce cluster. A dataset of 95 million customers, 34 thousand fraud cases and 5.5 million transactions across three different relations was then loaded into the distributed relational database on the platform. The infrastructure was used to show how the dataset could be prepared to represent a business problem, and Featuretools used to generate a single feature matrix suitable for inclusion in a machine learning pipeline. The results show that the approach was viable. The feature matrix produced 75 features from 12 input variables and was time efficient with a total end-to-end run time of 3.5 hours and a cost of approximately R 814 (approximately $52). The framework can be applied to a different set of data and allows the analysts to experiment on a small section of the data until a final feature set is decided. They are able to easily scale the feature matrix to the full dataset. This ability to automate feature synthesis, iterate and scale up, will save time in the analytics process while providing a richer feature set for better machine learning results.
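The core of what Deep Feature Synthesis automates can be illustrated with a toy example. The tables and feature names below are invented, not the thesis's fraud dataset: aggregate a child relation (transactions) onto a parent relation (customers) to produce a single feature matrix suitable for a machine learning pipeline.

```python
import pandas as pd

# Hypothetical parent and child relations linked by customer_id.
customers = pd.DataFrame({"customer_id": [1, 2, 3]})
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3],
    "amount": [10.0, 25.0, 5.0, 7.5, 2.5, 100.0],
})

# Aggregate the child relation with several primitives, as DFS would.
agg = transactions.groupby("customer_id")["amount"].agg(["count", "sum", "mean"])
agg.columns = [f"transactions_amount_{c}" for c in agg.columns]
agg = agg.reset_index()

# Join the aggregates back onto the parent to form the feature matrix.
feature_matrix = customers.merge(agg, on="customer_id", how="left")
```

Featuretools applies such aggregation and transformation primitives recursively across all relations; the Big Data challenge addressed in the dissertation is doing this when the tables no longer fit in a single machine's memory.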
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Khailtash, Amal. „Handling large data storage in synthesis of multiple FPGA systems“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0007/MQ43653.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Argudo, Jaime Fernando. „Evaluation and synthesis of experimental data for autoclaved aerated concrete /“. Full-text Adobe Acrobat (PDF) file, 2003. http://www.engr.utexas.edu/research/fsel/FSEL_reports/Thesis/Argudo,%20Jaime.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Clarke, Stuart J. „The analysis and synthesis of texture in sidescan sonar data“. Thesis, Heriot-Watt University, 1992. http://hdl.handle.net/10399/791.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Cropanese, Frank C. „Synthesis of low k1 projection lithography utilizing interferometry /“. Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/1235.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Popescu, Vlad M. „Airspace analysis and design by data aggregation and lean model synthesis“. Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49126.

Der volle Inhalt der Quelle
Annotation:
Air traffic demand is growing. New methods of airspace design are required that can enable new designs, do not depend on current operations, and can also support quantifiable performance goals. The main goal of this thesis is to develop methods to model inherent safety and control cost so that these can be included as principal objectives of airspace design, in support of prior work which examines capacity. The first contribution of the thesis is to demonstrate two applications of airspace analysis and design: assessing the inherent safety and control cost of the airspace. Two results are shown, a model which estimates control cost depending on autonomy allocation and traffic volume, and the characterization of inherent safety conditions which prevent unsafe trajectories. The effects of autonomy ratio and traffic volume on control cost emerge from a Monte Carlo simulation of air traffic in an airspace sector. A maximum likelihood estimation identifies the Poisson process to be the best stochastic model for control cost. Recommendations are made to support control-cost-centered airspace design. A novel method to reliably generate collision avoidance advisories, in piloted simulations, by the widely-used Traffic Alert and Collision Avoidance System (TCAS) is used to construct unsafe trajectory clusters. Results show that the inherent safety of routes can be characterized, determined, and predicted by relatively simple convex polyhedra (albeit multi-dimensional and involving spatial and kinematic information). Results also provide direct trade-off relations between spatial and kinematic constraints on route geometries that preserve safety. Accounting for these clusters thus supports safety-centered airspace design. The second contribution of the thesis is a general methodology that generalizes unifying principles from these two demonstrations. The proposed methodology has three steps: aggregate data, synthesize lean model, and guide design. 
The use of lean models is a result of a natural flowdown from the airspace view to the requirements. The scope of the lean model is situated at a level of granularity that identifies the macroscopic effects of operational changes on the strategic level. The lean model technique maps low-level changes to high-level properties and provides predictive results. The use of lean models allows the mapping of design variables (route geometry, autonomy allocation) to design evaluation metrics (inherent safety, control cost).
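The Poisson-model identification step mentioned above can be sketched with simulated data (the counts here are synthetic, not from the thesis's air-traffic experiments). For i.i.d. Poisson counts, the score equation of the log-likelihood, sum(k_i)/lambda - n = 0, gives the maximum likelihood estimate as the sample mean.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated event counts, e.g. control actions per observation interval.
true_rate = 4.2
counts = rng.poisson(lam=true_rate, size=10_000)

# MLE for the Poisson rate parameter is simply the sample mean.
lam_hat = counts.mean()
```

With 10,000 intervals the estimate lands very close to the true rate, which is why simple count statistics suffice once the Poisson model has been identified as the best fit.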
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Torbey, Elie. „Control/data flow graph synthesis using evolutionary computation and behavioral estimation“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ37080.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Logan, Clinton Andrew. „The computer-aided evaluation and synthesis of a data communications network“. Thesis, University of Canterbury. Computer Science, 1991. http://hdl.handle.net/10092/9404.

Der volle Inhalt der Quelle
Annotation:
With most terrestrial telecommunication networks experiencing growth, the need for powerful computer design tools is becoming mandatory. Such tools facilitate the quick and accurate quantification of many complex technical and economic interactions, enabling planners to control the evolution of their networks. This thesis focuses on several issues surrounding the computer-aided design of a wide area data communications network. Three main topics are addressed: the application of interactive computer graphics to network design tools; the inherent shortcomings of several contemporary design methods; and the application of the tools developed during this study for the evaluation of an existing wide area network. Network Designers Workshop (NDW), the computer planning tool presented in this thesis, has been developed to address some of the main inadequacies found in current-day design tools. NDW utilizes high resolution graphics to provide the designer with a highly interactive framework for the rapid prototyping of communication networks. In addition, NDW's network synthesis methodology emphasises the importance of adopting an integrated approach to network design by enabling the planner to find a minimum cost solution through a series of iterative designs. The architecture and facilities of a modern packet switching network are also examined with a special focus on the mechanisms available for the collection of the essential performance data needed for the evaluation and design stage. The final section of this thesis concentrates on the application of the design tools presented in this study for the evaluation and cost driven optimization of a multimillion dollar packet switching network. Finally, the impact of nodal cost and access network tariff structures on the optimum cost topology is illustrated.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Matthews, Brett Alexander. „Probabilistic modeling of neural data for analysis and synthesis of speech“. Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/50116.

Der volle Inhalt der Quelle
Annotation:
This research consists of probabilistic modeling of speech audio signals and deep-brain neurological signals in brain-computer interfaces. A significant portion of this research consists of a collaborative effort with Neural Signals Inc., Duluth, GA, and Boston University to develop an intracortical neural prosthetic system for speech restoration in a human subject living with Locked-In Syndrome, i.e., he is paralyzed and unable to speak. The work is carried out in three major phases. We first use kernel-based classifiers to detect evidence of articulation gestures and phonological attributes in speech audio signals. We demonstrate that articulatory information can be used to decode speech content in speech audio signals. In the second phase of the research, we use neurological signals collected from a human subject with Locked-In Syndrome to predict intended speech content. The neural data were collected with a microwire electrode surgically implanted in speech motor cortex of the subject's brain, with the implant location chosen to capture extracellular electric potentials related to speech motor activity. The data include extracellular traces, and firing occurrence times for neural clusters in the vicinity of the electrode identified by an expert. We compute continuous firing rate estimates for the ensemble of neural clusters using several rate estimation methods and apply statistical classifiers to the rate estimates to predict intended speech content. We use Gaussian mixture models to classify short frames of data into 5 vowel classes and to discriminate intended speech activity in the data from non-speech. We then perform a series of data collection experiments with the subject designed to test explicitly for several speech articulation gestures, and decode the data offline.
Finally, in the third phase of the research we develop an original probabilistic method for the task of spike-sorting in intracortical brain-computer interfaces, i.e., identifying and distinguishing action potential waveforms in extracellular traces. Our method uses both action potential waveforms and their occurrence times to cluster the data. We apply the method to semi-artificial data and partially labeled real data. We then classify neural spike waveforms, modeled with single multivariate Gaussians, using the method of minimum classification error for parameter estimation. Finally, we apply our joint waveforms and occurrence times spike-sorting method to neurological data in the context of a neural prosthesis for speech.
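One common way to turn spike occurrence times into the continuous firing-rate estimates mentioned above is Gaussian kernel smoothing. This sketch is illustrative (the kernel width and spike times are invented, and the thesis compares several rate-estimation methods): each spike contributes a unit-area Gaussian bump, and the sum gives an instantaneous rate in spikes per second.

```python
import numpy as np

def firing_rate(spike_times, eval_times, sigma=0.05):
    """Continuous rate estimate: sum of unit-area Gaussian kernels, one per spike."""
    d = eval_times[:, None] - np.asarray(spike_times)[None, :]
    k = np.exp(-0.5 * (d / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return k.sum(axis=1)                      # spikes per second at each eval time

spikes = np.array([0.10, 0.12, 0.15, 0.60])   # hypothetical spike times in seconds
t = np.linspace(0.0, 1.0, 201)
rate = firing_rate(spikes, t)
```

The integral of the rate over the window approximately recovers the spike count, and the burst around 0.1-0.15 s shows up as a clear peak that a downstream classifier can exploit.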
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Zhou, Yu. „Automatic synthesis and optimisation of asynchronous data paths using partial acknowledgement“. Thesis, University of Newcastle Upon Tyne, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445589.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Waide, Paul. „The characterisation and synthesis of weather data for solar thermal applications“. Thesis, Cranfield University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359587.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Torbey, Elie Carleton University Dissertation Engineering Electronics. „Control/data flow graph synthesis using evolutionary computation and behavioral estimation“. Ottawa, 1999.

Den vollen Inhalt der Quelle finden
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Antone, Matthew E. „Synthesis of navigable 3-D environments from human-augmented image data“. Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/10265.

Der volle Inhalt der Quelle
Annotation:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 95-96).
by Matthew E. Antone.
M.Eng.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Daubmann, Karl (Karl Matthew) 1973. „Data synthesis : a comprehensive visual approach to material selection in architecture“. Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/31082.

Der volle Inhalt der Quelle
Annotation:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Architecture, 1999.
Includes bibliographical references (leaves 74-75).
The condition of information overload has become a common problem facing contemporary designers. This has resulted in a critical need for the development of "intelligent decision support" systems. One such system would allow a designer to visualize not only material qualities and behavior characteristics, but also provide the means to distill, organize, and select relevant data. Material databases and the enabling power of the internet can only be exploited as "information providers" for designers when such a system is adopted. Material selection has been identified as a means of exploring this type of analysis/assessment strategy because it is a specific activity (within the broader realm of design) that is concerned with complex decision making processes. The decision matrix is defined by multiple disciplines, incomplete and varied information, and an explosion in the production of materials that have dynamic properties. Emphasis must be placed on distilling and presenting critical information in a comprehensive and accessible format.
by Karl Daubmann.
S.M.
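The decision-matrix idea in the annotation above can be sketched as a weighted scoring exercise. The materials, criteria, scores, and weights below are invented for illustration, not drawn from the thesis: normalize each criterion, weight it by importance, and rank the candidates.

```python
import numpy as np

# Hypothetical candidates and criteria (all scored so that higher is better).
materials = ["steel", "aluminum", "timber"]
criteria = ["strength", "cost", "sustainability"]
scores = np.array([
    [9.0, 4.0, 3.0],    # steel
    [7.0, 6.0, 5.0],    # aluminum
    [5.0, 8.0, 9.0],    # timber
])
weights = np.array([0.5, 0.2, 0.3])       # designer's priorities, summing to 1

normalized = scores / scores.max(axis=0)  # scale each criterion to [0, 1]
totals = normalized @ weights             # weighted score per material
best = materials[int(np.argmax(totals))]
```

A visual decision-support tool of the kind the thesis envisions would let the designer adjust the weights interactively and watch the ranking respond.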
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Lim, Dongwon. „Synthesis of PID controller from empirical data and guaranteeing performance specifications“. [College Station, Tex. : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2773.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Cavallo, Gianni. „Orthogonal synthesis of poly(alkoxyamine phosphodiester)s for data storage applications“. Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAF016.

Der volle Inhalt der Quelle
Annotation:
Une stratégie itérative de type AB+CD a été développée afin de synthétiser des polymères à séquences définies. Cette approche, basée sur des réactions orthogonales, a permis la synthèse de poly(alcoxyamine phosphodiester)s sans utiliser de groupements protecteurs. Ces copolymères sont composés de deux sous-unités, définies arbitrairement comme bit 0 et bit 1, ce qui permet le stockage d’information à l’échelle moléculaire. Les polymères synthétisés se sont révélés faciles à séquencer, la fragmentation par MS/MS des liaisons labiles de type alcoxyamine générant des spectres prédictibles. D’autres techniques de séquençage, comme l’approche de séquençage sans fragmentation ont aussi été utilisées. En outre, la chaine principale des polymères a été modifiée afin de pouvoir utiliser un alphabet étendu, optimisation permettant d’augmenter la densité d’information tout en maintenant la simplicité du séquençage. Enfin, deux liaisons alcoxyamine de stabilités différentes ont été insérées dans la chaine principale. Ceci permet l’obtention de polymères pouvant être fragmentés à des positions définies via l’utilisation de différents stimuli
Uniform sequence-defined polymers were synthesized using a new iterative (AB+CD) strategy involving two orthogonal reactions. This approach allowed the protecting-group-free synthesis of uniform poly(alkoxyamine phosphodiester)s. These molecules, having a defined sequence of comonomers defined as bits 0 and 1, enable the storage of binary information at the molecular level. Interestingly, poly(alkoxyamine phosphodiester)s were found to be extremely easy to sequence. Indeed, the cleavage of the labile alkoxyamine bond in MS/MS generates "easy-to-read" fragmentation patterns. The sequencing was also tested using non-conventional techniques such as fragmentation-free sequencing. Furthermore, the poly(alkoxyamine phosphodiester) backbone was modified using an extended alphabet. This optimization increases the storage capacity while keeping the MS/MS read-out simple. Finally, two alkoxyamine bonds, having different stabilities, were inserted in the poly(alkoxyamine phosphodiester) backbone to obtain sequence-defined polymers which can be fragmented at defined positions of the chain using different stimuli.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Lundberg, Anton. „Data-Driven Procedural Audio : Procedural Engine Sounds Using Neural Audio Synthesis“. Thesis, KTH, Datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280132.

Der volle Inhalt der Quelle
Annotation:
The currently dominating approach for rendering audio content in interactive media, such as video games and virtual reality, involves playback of static audio files. This approach is inflexible and requires management of large quantities of audio data. An alternative approach is procedural audio, where sound models are used to generate audio in real time from live inputs. While providing many advantages, procedural audio has yet to find widespread use in commercial productions, partly due to the audio produced by many of the proposed models not meeting industry standards. This thesis investigates how procedural audio can be performed using data-driven methods. We do this by specifically investigating how to generate the sound of car engines using neural audio synthesis. Building on a recently published method that integrates digital signal processing with deep learning, called Differentiable Digital Signal Processing (DDSP), our method obtains sound models by training deep neural networks to reconstruct recorded audio examples from interpretable latent features. We propose a method for incorporating engine cycle phase information, as well as a differentiable transient synthesizer. Our results illustrate that DDSP can be used for procedural engine sounds; however, further work is needed before our models can generate engine sounds without undesired artifacts and before they can be used in live real-time applications. We argue that our approach can be useful for procedural audio in more general contexts, and discuss how our method can be applied to other sound sources.
Det i dagsläget dominerande tillvägagångssättet för rendering av ljud i interaktiva media, såsom datorspel och virtual reality, innefattar uppspelning av statiska ljudfiler. Detta tillvägagångssätt saknar flexibilitet och kräver hantering av stora mängder ljuddata. Ett alternativt tillvägagångssätt är procedurellt ljud, vari ljudmodeller styrs för att generera ljud i realtid. Trots sina många fördelar används procedurellt ljud ännu inte i någon vid utsträckning inom kommersiella produktioner, delvis på grund av att det genererade ljudet från många föreslagna modeller inte når upp till industrins standarder. Detta examensarbete undersöker hur procedurellt ljud kan utföras med datadrivna metoder. Vi gör detta genom att specifikt undersöka metoder för syntes av bilmotorljud baserade på neural ljudsyntes. Genom att bygga på en nyligen publicerad metod som integrerar digital signalbehandling med djupinlärning, kallad Differentiable Digital Signal Processing (DDSP), kan vår metod skapa ljudmodeller genom att träna djupa neurala nätverk att rekonstruera inspelade ljudexempel från tolkningsbara latenta prediktorer. Vi föreslår en metod för att använda fasinformation från motorers förbränningscykler, samt en differentierbar metod för syntes av transienter. Våra resultat visar att DDSP kan användas till procedurella motorljud, men mer arbete krävs innan våra modeller kan generera motorljud utan oönskade artefakter samt innan de kan användas i realtidsapplikationer. Vi diskuterar hur vårt tillvägagångssätt kan vara användbart inom procedurellt ljud i mer generella sammanhang, samt hur vår metod kan tillämpas på andra ljudkällor.
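The harmonic synthesizer at the heart of the DDSP approach described above can be sketched in a few lines. This is a minimal illustration with invented parameters; the engine-specific phase handling and transient synthesizer proposed in the thesis are omitted: a bank of sinusoidal harmonics driven by a fundamental frequency and per-harmonic amplitudes.

```python
import numpy as np

def harmonic_synth(f0, amps, sr=16_000, duration=0.1):
    """Additive synthesis: sum of harmonics k*f0 with per-harmonic amplitudes."""
    t = np.arange(int(sr * duration)) / sr
    harmonics = np.arange(1, len(amps) + 1)[:, None]        # 1, 2, 3, ...
    waves = np.sin(2 * np.pi * f0 * harmonics * t[None, :])  # one row per harmonic
    return (np.asarray(amps)[:, None] * waves).sum(axis=0)

# Hypothetical low "engine hum": 55 Hz fundamental with decaying harmonics.
audio = harmonic_synth(f0=55.0, amps=[1.0, 0.5, 0.25])
```

In DDSP proper, a neural network predicts f0 and the amplitude envelopes frame by frame from latent features, and the synthesizer is differentiable so the whole pipeline can be trained end to end.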
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Kim, Jung Hoon. „Performance Analysis and Sampled-Data Controller Synthesis for Bounded Persistent Disturbances“. 京都大学 (Kyoto University), 2015. http://hdl.handle.net/2433/199317.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Chuan-hang, Fan. „A Useful Method of Error-Correction and Data Synthesis for Telemetry“. International Foundation for Telemetering, 1991. http://hdl.handle.net/10150/612910.

Der volle Inhalt der Quelle
Annotation:
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada
In the field of telemetry, data synthesis is an interesting problem for multi-beam and multi-receiver systems. This paper introduces a useful method of coding and decoding for linear block codes, and describes a decoding method for M-repetition codes, a special product code; the data synthesis is based on this method.
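The repetition-code idea behind this kind of data synthesis can be sketched simply (the paper's exact coding scheme may differ; receivers and bit patterns below are invented): each bit arrives M times, once per beam or receiver, and is decoded by majority vote, which corrects up to floor((M-1)/2) errors per bit position.

```python
def majority_decode(copies):
    """Majority-vote decoding of M received copies of the same bit stream.

    copies: list of M equal-length bit lists, one per receiver.
    """
    m = len(copies)
    return [1 if sum(bits) * 2 > m else 0 for bits in zip(*copies)]

sent = [1, 0, 1, 1, 0]
received = [
    [1, 0, 1, 1, 0],   # receiver 1: clean copy
    [1, 1, 1, 1, 0],   # receiver 2: one bit flipped by noise
    [0, 0, 1, 1, 0],   # receiver 3: one bit flipped by noise
]
decoded = majority_decode(received)
```

With M = 3, any single flipped bit per position is outvoted by the two correct receivers, so the synthesized stream matches what was sent.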
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Namvar, Gharehshiran Amir. „High Level Synthesis Evaluation of Tools and Methodology“. Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177362.

Der volle Inhalt der Quelle
Annotation:
The advances in silicon technology, as well as competitive time to market, have in the recent decade forced design tools and methodologies to progress towards higher levels of abstraction. Raising the level of abstraction shortens the design cycle via elimination of details in design specification. One such new methodology is High Level Synthesis (HLS). HLS tools accept a behavioral design at the abstract level as input and generate detailed Register Transfer Level (RTL) code. In this thesis project, the HLS methodology is introduced in the design flow and its advantages are outlined. We then evaluate and compare three HLS tools developed by market leading vendors, namely, C-to-Silicon, CatapultC and Synphonycc. To compare the HLS tools, an HLS input is developed for one of Ericsson’s designs and the generated RTL is compared with the hand-written RTL based on several performance criteria. Based on this comparison, we discuss the choice of the best tool so as to facilitate adoption of HLS in Ericsson’s design flow. Finally, the capability of the HLS tools in the synthesis of designs with pure control flow is investigated.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

BOURDEAUDUCQ, SÉBASTIEN. „A performance-driven SoC architecture for video synthesis“. Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-26151.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Jacob, Philipp-Maximilian. „Towards algorithmic use of chemical data“. Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/275643.

Der volle Inhalt der Quelle
Annotation:
The growth of chemical knowledge available via online databases opens opportunities for new types of chemical research. In particular, by converting the data into a network, graph theoretical approaches can be used to study chemical reactions. In this thesis several research questions from the field of data science and graph theory are re-formulated for the chemistry-specific data. Firstly, the structure of chemical reactions data was studied using graph theory. It was found that the network of reactions obtained from the Reaxys data was scale-free, that on average any two species were separated by six reactions, and that evidence for a hierarchy of nodes existed, most clearly in that the hubs that combine a large share of connections onto them also facilitate a large proportion of routes across the network. The hierarchy was also evidenced in the clustering and degree correlations of nodes. Next, it was investigated whether Reaxys could be mined to construct a network of reactions and use it to plan and evaluate synthesis routes in two case studies. A number of heuristics were developed to find synthesis routes using the network taking chemical structures into account. These routes were fed into a multi-criteria decision making framework scoring the routes along environmental sustainability considerations. The approach was successful in discovering and scoring synthesis route candidates. It was found that Reaxys lacked process data in many instances. To address this a proposal for extension of the RInChI reaction data format was developed. The final question addressed was whether the network could be used to predict future reactions by using Stochastic Block Models. Block model-based link prediction performed impressively, being able to achieve a classification accuracy of close to 95% during time-split validation on historic data, differentiating future reaction discoveries from random data. 
Next, a set of transformation suggestions was thus evaluated and a framework for analysing these results was presented. Overall, the thesis was able to further the understanding of the network’s topology and to present a framework allowing the mining of Reaxys to plan synthesis routes and target R&D efforts in a specific area to discover new reactions.
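The shortest-path measurement behind the "separated by six reactions" result above can be illustrated on a toy graph (the species names are hypothetical, not Reaxys data): species are nodes, each reaction links a reactant to a product, and breadth-first search gives the minimum number of reactions separating two species.

```python
from collections import deque

def shortest_path_length(graph, start, goal):
    """BFS over a directed adjacency dict; returns edge count or None if unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# Hypothetical reaction network: an edge X -> Y means a reaction converts X into Y.
reactions = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
}
steps = shortest_path_length(reactions, "A", "F")
```

Averaging this quantity over many node pairs on the full Reaxys-derived network is what yields the small-world "six reactions" figure.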
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Arnesen, Adam T. „Increasing Design Productivity for FPGAs Through IP Reuse and Meta-Data Encapsulation“. BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2614.

Der volle Inhalt der Quelle
Annotation:
As Moore's law continues to progress, it is becoming increasingly difficult for hardware designers to fully utilize the increasing number of transistors available in semiconductor devices, including FPGAs. This design productivity gap must be addressed to allow designs to take full advantage of the increased logic density that results from rising transistor density. The reuse of previously developed and verified intellectual property (IP) is one approach that is claimed to narrow the design productivity gap. Reuse, however, has proved difficult to realize in practice because of the complexity of IP and the reluctance of designers to reuse IP that they do not understand. This thesis proposes to narrow the design productivity gap for FPGAs by simplifying the reuse problem by encapsulating IP with extra machine-readable information or meta-data. This meta-data simplifies reuse by providing a language independent format for composing complex systems, providing a parameter representation system, defining high-level data types for FPGA IP, and allowing arbitrary IP to be described as actors in the homogeneous synchronous dataflow model of computation. This work implements meta-data in XML and presents two XML schemas that enable reuse. A new XML schema known as CHREC XML is presented as well as extensions that enable IP-XACT to be used to describe FPGA dataflow IP. Two tools developed in this work are also presented that leverage meta-data to simplify reuse of arbitrary IP. These tools simplify structural composition of IP, allow designers to manipulate parameters, check and validate high-level data types, and automatically synthesize control circuitry for dataflow designs. Productivity improvements are also demonstrated by reusing IP to quickly compose software radio receivers.
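The meta-data encapsulation idea can be sketched as follows. The XML shape here is invented for illustration; CHREC XML and IP-XACT define their own, richer schemas: an IP core is described with machine-readable ports and parameters, so a composition tool can check type compatibility before wiring cores together.

```python
import xml.etree.ElementTree as ET

# Hypothetical machine-readable description of one FPGA IP core.
ip_xml = """
<ipcore name="fir_filter">
  <parameter name="TAPS" value="16"/>
  <port name="din" direction="in" width="32" type="sfix32_16"/>
  <port name="dout" direction="out" width="32" type="sfix32_16"/>
</ipcore>
"""

root = ET.fromstring(ip_xml)
ports = {p.get("name"): p.attrib for p in root.findall("port")}
params = {p.get("name"): int(p.get("value")) for p in root.findall("parameter")}

def compatible(out_port, in_port):
    """A composition tool's check: widths and high-level data types must match."""
    return out_port["width"] == in_port["width"] and out_port["type"] == in_port["type"]
```

Because the description is machine-readable, a designer never has to read the IP's HDL to know how to connect or parameterize it, which is exactly the reuse barrier the thesis targets.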
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Deaguero, Andria Lynn. „Improving the enzymatic synthesis of semi-synthetic beta-lactam antibiotics via reaction engineering and data-driven protein engineering“. Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42727.

Full text of the source
Annotation:
Semi-synthetic β-lactam antibiotics are the most prescribed class of antibiotics in the world. Chemical coupling of a β-lactam moiety with an acyl side chain has dominated the industrial production of semi-synthetic β-lactam antibiotics since their discovery in the early 1960s. Enzymatic coupling of a β-lactam moiety with an acyl side chain can be accomplished in a process that is much more environmentally benign but also results in a much lower yield. The goal of the research presented in this dissertation is to improve the enzymatic synthesis of β-lactam antibiotics via reaction engineering, medium engineering and data-driven protein engineering.

Reaction engineering was employed to demonstrate that the hydrolysis of penicillin G to produce the β-lactam nucleus 6-aminopenicillanic acid (6-APA), and the synthesis of ampicillin from 6-APA and (R)-phenylglycine methyl ester ((R)-PGME), can be combined in a cascade conversion. In this work, penicillin G acylase (PGA) was utilized to catalyze the hydrolysis step, and PGA and α-amino ester hydrolase (AEH) were both studied to catalyze the synthesis step. Two different reaction configurations and various relative enzyme loadings were studied. Both configurations present a promising alternative to the current two-pot set-up, which requires intermittent isolation of the intermediate, 6-APA.

Medium engineering is primarily of interest in β-lactam antibiotic synthesis as a means to suppress the undesired primary and secondary hydrolysis reactions. The synthesis of ampicillin from 6-APA and (R)-PGME in the presence of ethylene glycol was chosen for study after a review of the literature. It was discovered that the transesterification product of (R)-PGME and ethylene glycol, (R)-phenylglycine hydroxyethyl ester, is transiently formed during the synthesis reactions. This previously unreported side reaction has the ability to positively affect yield by re-directing a portion of the consumption of (R)-PGME to an intermediate that can be used to synthesize ampicillin, rather than to an unusable hydrolysis product.

Protein engineering was utilized to alter the selectivity of wild-type PGA with respect to the substituent on the alpha carbon of its substrates. Four residues were identified that had altered selectivity toward the desired product, (R)-ampicillin. Furthermore, the (R)-selective variants improved the yield from pure (R)-PGME up to 2-fold and significantly decreased the amount of secondary hydrolysis present in the reactions.

Overall, we have expanded the applicability of PGA and AEH for the synthesis of semi-synthetic β-lactam antibiotics. We have shown the two enzymes can be combined in a novel one-pot cascade, which has the potential to eliminate an isolation step in the current manufacturing process. Furthermore, we have shown that the previously reported ex-situ mixed-donor synthesis of ampicillin with PGA can also occur in-situ in the presence of a suitable side-chain acyl donor and co-solvent. Finally, we have made significant progress towards obtaining a selective PGA that is capable of synthesizing diastereomerically pure semi-synthetic β-lactam antibiotics from racemic substrates.
APA, Harvard, Vancouver, ISO, and other citation styles
46

Hung, Rong-I. „Computational studies of protein sequence and structure“. Thesis, University of Oxford, 1999. http://ora.ox.ac.uk/objects/uuid:9905c946-86dd-4bb3-8824-7c50df136913.

Full text of the source
Annotation:
This thesis explores aspects of protein function, structure and sequence by computational approaches. A comparative study of definitions of protein secondary structure was performed. Disagreements in assignment resulting from three different algorithms were observed. The causes of inaccuracies in structure assignments were discussed, and possibilities of projecting protein secondary structures with different structural descriptors were tested. The investigation of inconsistent assignments of protein secondary structure led to a study of a more specific issue concerning protein structure/function relationships, namely cis/trans isomerisation of a peptide bond. Surveys were carried out at the level of protein molecules to detect the occurrences of the cis peptide bond, and at the level of protein domains to explore the possible biological implications of the occurrences of this structural motif. Research was then focussed on α-helical integral membrane proteins. A detailed analysis of sequences and putative transmembrane helical structures was conducted on the ABC transporters from different organisms. Interesting relationships between protein sequences, putative α-helical structures and transporter functions were identified. Applications of molecular dynamics simulations to the transmembrane helices of a specific human ABC transporter, the cystic fibrosis transmembrane conductance regulator (CFTR), explored some of these relationships at atomic resolution. Functional and structural implications of individual residues within membrane-spanning helices were revealed by these simulation studies.
APA, Harvard, Vancouver, ISO, and other citation styles
47

Jalilian, Romaneh. „Novel synthesis techniques for nanostructures“. 2004. http://etd.louisville.edu/data/UofL0086t2004.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
48

CHEN, GIU-TAN, und 陳秋潭. „Automatic synthesis of data path“. Thesis, 1987. http://ndltd.ncl.edu.tw/handle/20298332954478598628.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
49

Lin, Jou-Chun, und 林柔君. „Synthesis of Bundled Data Asynchronous Circuits“. Thesis, 2010. http://ndltd.ncl.edu.tw/handle/05846171636999972725.

Full text of the source
Annotation:
Master's thesis
Tatung University
Department of Computer Science and Engineering
Academic year 98 (ROC calendar)
Bundled data asynchronous circuits have the following advantages over other types of circuits: low power consumption, low cost and low EMI (electromagnetic interference). To realize bundled data circuits, however, we must solve the following issues: first, how to add matched delays to the datapath latches, or how to fix timing violations (setup and hold time constraints) if edge-triggered flip-flops are used; second, how to deal with handshake components removed by synchronous circuit optimization tools (such as Quartus); third, how to design parameterized handshake components to facilitate system design, debugging and delay adjustment. In this thesis, we develop a CAD tool written in Java to synthesize Balsa Breeze circuits into debuggable bundled data asynchronous circuits based on a set of parameterized handshake components. The circuits are tested and verified in Altera Quartus II 9.0 and on a DE2 FPGA board. The experimental results show that our bundled data asynchronous circuits outperform dual-rail asynchronous and Altera C2H synchronous circuits in terms of circuit cost, power consumption and speed.
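The first issue named in the abstract, sizing matched delays, can be sketched numerically: in a bundled-data stage, the request signal must not arrive before the slowest combinational datapath signal. A toy illustration under assumed numbers (the delays, margin, and function names here are invented for illustration, not taken from the thesis):

```python
# Toy sketch of matched-delay sizing for a bundled-data stage: the delay
# inserted on the request line must cover the worst-case combinational
# datapath delay plus a safety margin. All values are illustrative.
def matched_delay(path_delays_ns, margin=0.2):
    """Return the delay (ns) to insert on the request line:
    worst-case datapath delay plus a fractional safety margin."""
    worst = max(path_delays_ns)
    return worst * (1.0 + margin)

# delays of the combinational paths feeding one stage's latches
stage_paths = [1.8, 2.4, 2.1]
print(round(matched_delay(stage_paths), 2))  # 2.88 ns
```

Real flows derive the worst-case path delay from static timing analysis of the placed-and-routed design rather than from a fixed list, but the constraint being enforced is the same.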
APA, Harvard, Vancouver, ISO, and other citation styles
50

Chen, Jhih-Rong, und 陳之容. „Texture Synthesis Using Data Mining Technique“. Thesis, 2004. http://ndltd.ncl.edu.tw/handle/48354363991458229077.

Full text of the source
Annotation:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
Academic year 92 (ROC calendar)
We present a new texture synthesis algorithm which combines texture synthesis with a data mining technique. Our approach works well for many types of textures without any knowledge of their physical formation process. It first analyzes the input texture to construct patch candidate data, then uses this data to find frequent pattern sequences for the synthesis result using the data mining technique Sequential Pattern Mining.
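The pipeline described above (analyze patches, mine frequent sequences, synthesize from them) can be illustrated with a deliberately simplified 1D version: the thesis mines sequential patterns over 2D texture patches, while this sketch only mines first-order patch-to-patch transition frequencies over a row of patch ids, so it captures the frequent-pattern flavour rather than the actual algorithm.

```python
from collections import Counter

def mine_transitions(sequence, order=1):
    """Count how often each patch id follows a given context of patch ids."""
    counts = {}
    for i in range(len(sequence) - order):
        ctx = tuple(sequence[i:i + order])
        counts.setdefault(ctx, Counter())[sequence[i + order]] += 1
    return counts

def synthesize(sequence, length, order=1):
    """Grow a new sequence by repeatedly taking the most frequent successor."""
    counts = mine_transitions(sequence, order)
    out = list(sequence[:order])
    while len(out) < length:
        ctx = tuple(out[-order:])
        if ctx not in counts:          # unseen context: restart from the input
            out.extend(sequence[:order])
            continue
        out.append(counts[ctx].most_common(1)[0][0])
    return out[:length]

texture_row = [0, 1, 2, 0, 1, 2, 0, 1, 2]
print(synthesize(texture_row, 6))  # [0, 1, 2, 0, 1, 2]
```

Replacing the greedy most-frequent choice with sampling proportional to the mined counts would give varied rather than strictly periodic output, which is closer to what texture synthesis needs.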
APA, Harvard, Vancouver, ISO, and other citation styles