Dissertations / Theses on the topic 'CNN MODELS'

Consult the top 50 dissertations / theses for your research on the topic 'CNN MODELS.'


1

Lind, Johan. "Evaluating CNN-based models for unsupervised image denoising." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176092.

Full text
Abstract:
Images are often corrupted by noise, which reduces their visual quality and interferes with analysis. Convolutional Neural Networks (CNNs) have become a popular method for denoising images, but their training typically relies on access to thousands of pairs of noisy and clean versions of the same underlying picture. Unsupervised methods lack this requirement and can instead be trained purely on noisy images. This thesis evaluated two different unsupervised denoising algorithms: Noise2Self (N2S) and Parametric Probabilistic Noise2Void (PPN2V), both of which train an internal CNN to denoise images. Four different CNNs were tested in order to investigate how the performance of these algorithms would be affected by different network architectures. The testing used two different datasets: one containing clean images corrupted by synthetic noise, and one containing images damaged by real noise originating from the camera used to capture them. Two of the networks, UNet and a CBAM-augmented UNet, achieved high performance, competitive with the strong classical denoisers BM3D and NLM. The other two networks, GRDN and MultiResUNet, generally performed poorly.
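As a loose illustration of the self-supervised masking idea behind Noise2Self-style training, here is a minimal PyTorch sketch; the tiny network, mask ratio and in-painting scheme are placeholder choices, not the configurations evaluated in the thesis:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def masked_denoising_step(net, noisy, mask_ratio=0.05):
    """One self-supervised training step: hide random pixels and train the
    network to predict them from their unmasked surroundings."""
    mask = (torch.rand_like(noisy) < mask_ratio).float()
    # Replace masked pixels with a local average so the network cannot
    # simply copy the input value at those locations (J-invariance).
    blurred = F.avg_pool2d(noisy, kernel_size=3, stride=1, padding=1)
    pred = net(noisy * (1 - mask) + blurred * mask)
    # The loss is evaluated only on the masked pixels.
    return ((pred - noisy) ** 2 * mask).sum() / mask.sum()

# Tiny stand-in CNN; the thesis compares UNet, a CBAM-augmented UNet,
# GRDN and MultiResUNet in this role.
net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
noisy_batch = torch.rand(8, 1, 64, 64)   # placeholder noisy images
loss = masked_denoising_step(net, noisy_batch)
loss.backward()
opt.step()
print(float(loss))
```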
2

Söderström, Douglas. "Comparing pre-trained CNN models on agricultural machines." Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185333.

Full text
3

Norlund, Tobias. "The Use of Distributional Semantics in Text Classification Models : Comparative performance analysis of popular word embeddings." Thesis, Linköpings universitet, Datorseende, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-127991.

Full text
Abstract:
In the field of Natural Language Processing, supervised machine learning is commonly used to solve classification tasks such as sentiment analysis and text categorization. The classical way of representing the text has been the well-known Bag-Of-Words representation. Lately, however, low-dimensional dense word vectors have come to dominate the input to state-of-the-art models. Since few studies have made a fair comparison of the models' sensitivity to the text representation, this thesis tries to fill that gap. We especially seek insight into the impact various unsupervised pre-trained vectors have on performance. In addition, we take a closer look at the Random Indexing representation and try to optimize it jointly with the classification task. The results show that while low-dimensional pre-trained representations often have computational benefits and have also reported state-of-the-art performance, they do not necessarily outperform the classical representations in all cases.
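To make the two representations concrete, here is a minimal scikit-learn sketch contrasting a Bag-Of-Words input with averaged dense word vectors; the random "pretrained" vectors are stand-ins only, where a real experiment would load word2vec, GloVe or Random Indexing vectors:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great movie", "terrible plot", "loved it", "boring film"]
labels = [1, 0, 1, 0]  # toy sentiment labels

# Classical Bag-Of-Words representation.
bow = CountVectorizer().fit_transform(texts)
print(LogisticRegression().fit(bow, labels).score(bow, labels))

# Dense representation: average a pre-trained vector per word over the
# document. Random vectors keep the sketch self-contained.
dim = 50
vocab = {w for t in texts for w in t.split()}
pretrained = {w: np.random.randn(dim) for w in vocab}
dense = np.array([np.mean([pretrained[w] for w in t.split()], axis=0)
                  for t in texts])
print(LogisticRegression().fit(dense, labels).score(dense, labels))
```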
4

Suresh, Sreerag. "An Analysis of Short-Term Load Forecasting on Residential Buildings Using Deep Learning Models." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/99287.

Full text
Abstract:
Building energy load forecasting is becoming an increasingly important task with the rapid deployment of smart homes, the integration of renewables into the grid and the advent of decentralized energy systems. Residential load forecasting has been a challenging task, since residential load is highly stochastic. Deep learning models have shown tremendous promise in the fields of time-series and sequential data and have been successfully used for short-term load forecasting at the building level. Although other studies have looked at using deep learning models for building energy forecasting, most have considered only a limited number of homes or an aggregate load of a collection of homes. This study aims to address this gap and serve as an investigation into selecting the better deep learning model architecture for short-term load forecasting on three communities of residential buildings. The deep learning models CNN and LSTM were used in the study. For 15-minute-ahead forecasting of a collection of homes, it was found that homes with higher variance were better predicted by CNN models, while LSTM showed better performance for homes with lower variance. The effect of adding weather variables on 24-hour-ahead forecasting was studied, and it was observed that adding weather parameters did not improve forecasting performance. In all the homes, the deep learning models were shown to outperform a simple ANN model.
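For illustration, minimal PyTorch skeletons of the two model families compared in the thesis might look as follows; layer sizes, lookback window and horizon are placeholders, not the study's actual configurations:

```python
import torch
import torch.nn as nn

LOOKBACK, HORIZON = 96, 1   # e.g. 24 h of 15-min readings -> 15-min ahead

class CNNForecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * LOOKBACK, HORIZON))
    def forward(self, x):           # x: (batch, 1, LOOKBACK)
        return self.net(x)

class LSTMForecaster(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(1, 32, batch_first=True)
        self.head = nn.Linear(32, HORIZON)
    def forward(self, x):           # x: (batch, LOOKBACK, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict from the last hidden state

x = torch.randn(4, 1, LOOKBACK)       # placeholder load history
print(CNNForecaster()(x).shape)       # -> (4, 1)
print(LSTMForecaster()(x.transpose(1, 2)).shape)
```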
Master of Science
Building energy load forecasting is becoming an increasingly important task with the rapid deployment of smart homes, the integration of renewables into the grid and the advent of decentralized energy systems. Residential load forecasting has been a challenging task, since residential load is highly stochastic. Deep learning models have shown tremendous promise in the fields of time-series and sequential data and have been successfully used for short-term load forecasting. Although other studies have looked at using deep learning models for building energy forecasting, most have considered only a single home or an aggregate load of a collection of homes. This study aims to address this gap and serve as an analysis of short-term load forecasting on three communities of residential buildings. Model performance across all homes was analyzed in detail. Deep learning models were used in this study, and their efficacy was measured against a simple ANN model.
5

Wang, Zhihao. "Land Cover Classification on Satellite Image Time Series Using Deep Learning Models." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu159559249009195.

Full text
6

Nilsson, Kristian, and Hans-Eric Jönsson. "A comparison of image and object level annotation performance of image recognition cloud services and custom Convolutional Neural Network models." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18074.

Full text
Abstract:
Recent advancements in machine learning have contributed to an explosive growth of the image recognition field. Simultaneously, multiple Information Technology (IT) service providers such as Google and Amazon have embraced cloud solutions and software as a service. These factors have helped mature many computer vision tasks from scientific curiosity to practical applications. As image recognition is now accessible to the general developer community, a need arises for a comparison of its capabilities and of what can be gained from choosing a cloud service over a custom implementation. This thesis empirically studies the performance of five general image recognition services (Google Cloud Vision, Microsoft Computer Vision, IBM Watson, Clarifai and Amazon Rekognition) and of image recognition models of the Convolutional Neural Network (CNN) architecture that we ourselves configured and trained. Image- and object-level annotations of images extracted from different datasets were tested, both in their original state and after being subjected to one of the following six types of distortion: brightness, color, compression, contrast, blurriness and rotation. The output labels and confidence scores were compared to the ground truth at multiple levels of concepts, such as food, soup and clam chowder. The results show that, of the services tested, there is currently no clear top performer across all categories; they all show variations and similarities in their output, but on average Google Cloud Vision performs best by a small margin. The services are all adept at identifying high-level concepts such as food and most mid-level ones such as soup. However, in terms of further specifics, such as clam chowder, they start to vary, some performing better than others in different categories. Amazon was found to be the most capable at identifying multiple unique objects within the same image on the chosen dataset. Additionally, it was found that using synonyms of the ground truth labels increased performance, as the semantic gap between our expectations and the actual output from the services was narrowed. The services all showed vulnerability to image distortions, especially compression, blurriness and rotation. The custom models all performed noticeably worse, around half as well as the cloud services, possibly due to the difference in training data standards. The best model, configured with three convolutional layers, 128 nodes and a layer density of two, reached an average performance of almost 0.2, or 20%. In conclusion, if one is limited by a lack of experience with machine learning, computational resources or time, it is recommended to use one of the cloud services to reach a more acceptable performance level. Which to choose depends on the intended application, as the services perform differently in certain categories. The services are all vulnerable to multiple image distortions, potentially allowing adversarial attacks. Finally, there is definitely room for improvement with regard to the performance of these services and the computer vision field as a whole.
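As a sketch of how the six distortion types could be produced, here is a short Pillow example; the distortion strengths are arbitrary placeholders, not the levels used in the thesis:

```python
import io
from PIL import Image, ImageEnhance, ImageFilter

def distort(img):
    """Produce the six distortion types tested: brightness, color,
    compression, contrast, blurriness and rotation (levels arbitrary)."""
    out = {
        "brightness": ImageEnhance.Brightness(img).enhance(1.8),
        "color":      ImageEnhance.Color(img).enhance(0.2),
        "contrast":   ImageEnhance.Contrast(img).enhance(0.4),
        "blurriness": img.filter(ImageFilter.GaussianBlur(radius=4)),
        "rotation":   img.rotate(45, expand=True),
    }
    buf = io.BytesIO()                        # heavy JPEG compression
    img.convert("RGB").save(buf, "JPEG", quality=5)
    buf.seek(0)
    out["compression"] = Image.open(buf)
    return out

img = Image.new("RGB", (224, 224), "gray")    # placeholder test image
for name, d in distort(img).items():
    print(name, d.size)
```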
7

You, Yantian. "Sparsity Analysis of Deep Learning Models and Corresponding Accelerator Design on FPGA." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-204409.

Full text
Abstract:
Machine learning has achieved great success in recent years, especially deep learning algorithms based on Artificial Neural Networks (ANNs). However, these models require high performance and large memory, which makes them unsuitable for IoT devices, since IoT devices have limited performance and should be low-cost and energy-efficient. It is therefore necessary to optimize deep learning models to accommodate resource-constrained IoT devices. This thesis seeks a possible way of optimizing ANN models to fit IoT devices and provides a hardware implementation of an ANN accelerator on FPGA. The contribution of this thesis mainly lies in two aspects: 1) an analysis of the sparsity in two mainstream deep learning models, DBN and CNN. The DBN model consists of two hidden layers with Restricted Boltzmann Machines, while the CNN model consists of two convolutional layers and two sub-sampling layers. Experiments were done on the MNIST data set, with a sparsity of 75%. The ratio of multiplications resulting in near-zero values was tested. 2) An FPGA implementation of an ANN accelerator. This thesis designed a hardware accelerator for the inference process of ANN models on FPGA (Stratix IV: EP4SGX530KH40C2). The main part of the hardware design is a processing array of 256 multiply-accumulators, which can perform multiply-accumulate operations for 256 synaptic connections simultaneously. 16-bit fixed-point computation is used to reduce hardware complexity, thus saving power and area. Based on the evaluation results, it was found that the ratio of multiplications under the threshold of 2^-5 is 75% for the CNN with ReLU activation function and 83% for the DBN with sigmoid activation function, respectively. Therefore, there is still large room for optimizing complex ANN models if the sparsity of the data is fully exploited. Meanwhile, the implemented hardware accelerator was verified to provide correct results through 16-bit fixed-point computation, and it can be used as a hardware testing platform for evaluating ANN models.
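A small NumPy sketch of the sparsity measurement described above, counting the fraction of products below the 2^-5 threshold under 16-bit fixed-point quantization, might look like this; the Q8.8 format split and the synthetic layer are assumptions for illustration only:

```python
import numpy as np

def to_fixed16(x, frac_bits=8):
    """Quantize to 16-bit fixed point; the integer/fraction split is an
    assumption, the thesis only states 16-bit fixed point."""
    scale = 2.0 ** frac_bits
    return np.clip(np.round(x * scale), -2**15, 2**15 - 1) / scale

def near_zero_ratio(acts, weights, threshold=2.0 ** -5):
    """Fraction of a layer's products whose magnitude falls below the
    threshold -- candidates for skipping in a hardware accelerator."""
    products = np.abs(acts[:, None] * weights)  # all input x output products
    return float((products < threshold).mean())

rng = np.random.default_rng(0)
acts = np.maximum(rng.normal(size=256), 0.0)    # ReLU activations (sparse)
W = to_fixed16(rng.normal(scale=0.1, size=(256, 256)))
print(f"near-zero products: {near_zero_ratio(to_fixed16(acts), W):.1%}")
```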
8

Huss, Anders. "Hybrid Model Approach to Appliance Load Disaggregation : Expressive appliance modelling by combining convolutional neural networks and hidden semi Markov models." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-179200.

Full text
Abstract:
The increasing energy consumption is one of the greatest environmental challenges of our time. Residential buildings account for a considerable part of the total electricity consumption and are, furthermore, a sector shown to have large savings potential. Non Intrusive Load Monitoring (NILM), i.e. the deduction of the electricity consumption of individual home appliances from the total electricity consumption of a household, is a compelling approach to deliver appliance-specific consumption feedback to consumers. This enables informed choices and can promote sustainable and cost-saving actions. To achieve this, accurate and reliable appliance load disaggregation algorithms must be developed. This Master's thesis proposes a novel approach to the disaggregation problem, inspired by state-of-the-art algorithms in the field of speech recognition. Previous approaches, for sampling frequencies around 1 Hz, have primarily focused on different types of hidden Markov models (HMMs) and occasionally the use of artificial neural networks (ANNs). HMMs are a natural representation of electric appliances; however, with a purely generative approach to disaggregation, basically all appliances have to be modelled simultaneously. Due to the large number of possible appliances and the variations between households, this is a major challenge. It imposes strong restrictions on the complexity, and thus the expressiveness, of each appliance model to keep inference algorithms feasible. In this thesis, disaggregation is treated as a factorisation problem where the respective appliance signal has to be extracted from its background. A hybrid model is proposed, where a convolutional neural network (CNN) extracts features that correlate with the state of a single appliance and the features are used as observations for a hidden semi-Markov model (HSMM) of the appliance. Since this allows for modelling of a single appliance, it becomes computationally feasible to use a more expressive Markov model. As proof of concept, the hybrid model is evaluated on 238 days of 1 Hz power data, collected from six households, to predict the power usage of the households' washing machines. The hybrid model is shown to perform considerably better than a CNN alone, and it is further demonstrated how a significant increase in performance is achieved by including transitional features in the HSMM.
9

Jonsson, Tim, and Isabella Tapper. "Evaluation of two CNN models, VGGNet-16 & VGGNet-19, for classification of Alzheimer’s disease in brain MRI scans." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280141.

Full text
Abstract:
Computer-aided diagnosis (CAD) emerged in the early 1950s, and since then CAD has facilitated the diagnosing of many medical conditions and diseases. In particular, CAD for Alzheimer's disease (AD) has been intensely researched in the last decade thanks to advanced neuroimaging techniques such as magnetic resonance imaging (MRI) and positron emission tomography (PET). Today around 44 million people worldwide have AD, and researchers hope to discover accurate ways to detect AD before the symptoms begin. There are currently no validated so-called biological markers (biomarkers) for AD, meaning that there are no reliable indicators that can accurately diagnose AD. However, according to experts, machine learning and neuroimaging are among the most promising areas of research focused on biomarkers and early diagnosis of AD. The state-of-the-art machine learning method for image classification is the convolutional neural network (CNN). In a recent study at Bharati Vidyapeeth's College of Engineering and Karunya University, the convolutional neural network VGGNet-16 was used to classify AD from MRI scans. Experimentation was performed on data collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database; the accuracy of the described method was 95.73% on the validation set. The purpose of this bachelor thesis was to compare two convolutional neural network models, VGGNet-16 and VGGNet-19, on their results and performance in classifying AD using MRI scans from the ADNI database. Sets of images were selected, some including and some excluding the hippocampus, since AD starts spreading in the hippocampus. Using transfer learning, the CNN models were trained with (a) a random validation split, (b) cross-validation, and (c) a slice range excluding the hippocampus. The results show that the models were good at classifying true negatives, i.e. diagnosing a healthy patient as healthy. The hippocampus seems to be a promising biomarker for AD, because experiment (c) achieved a lower accuracy than (a) and (b). In conclusion, there is no statistically proven difference between VGGNet-16 and VGGNet-19. Even so, this thesis showed that simpler CNN architectures can be utilized to classify AD with an equally mild success rate on a very limited dataset. The two CNN models' accuracies were between 66.6% and 74.8% for classifying AD, depending on the training approach.
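A minimal torchvision sketch of the transfer-learning setup described above, with ImageNet-pretrained VGGNet-16 and VGGNet-19 and a replaced output layer; the freezing strategy and the binary output head are illustrative assumptions, not the thesis's exact setup:

```python
import torch.nn as nn
from torchvision import models

def make_classifier(arch="vgg16", n_classes=2):
    """Load an ImageNet-pretrained VGG and swap the final layer so it
    classifies AD vs. healthy; only the new head is trained here."""
    net = (models.vgg16(weights="IMAGENET1K_V1") if arch == "vgg16"
           else models.vgg19(weights="IMAGENET1K_V1"))
    for p in net.features.parameters():    # freeze convolutional backbone
        p.requires_grad = False
    net.classifier[6] = nn.Linear(net.classifier[6].in_features, n_classes)
    return net

vgg16 = make_classifier("vgg16")
vgg19 = make_classifier("vgg19")
print(vgg16.classifier[6], vgg19.classifier[6])
```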
10

Mukhedkar, Dhananjay. "Polyphonic Music Instrument Detection on Weakly Labelled Data using Sequence Learning Models." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279060.

Full text
Abstract:
Polyphonic or multiple music instrument detection is a difficult problem compared to detecting single or solo instruments in an audio recording. As music is time-series data, it can be modelled using sequence learning methods within deep learning. Recently, temporal convolutional networks (TCNs) have been shown to outperform conventional recurrent neural networks (RNNs) on various sequence modelling tasks. Though there have been significant improvements in deep learning methods, data scarcity becomes a problem when training large-scale models. Weakly labelled data is an alternative, where a clip is annotated for the presence or absence of instruments without specifying the times at which an instrument is sounding. This study investigates how a TCN model compares to a Long Short-Term Memory (LSTM) model when trained on a weakly labelled dataset. The results showed successful training of both models along with generalisation on a separate dataset. The comparison showed that the TCN performed better than the LSTM, but only marginally. Therefore, from the experiments carried out, it could not be conclusively established that the TCN is a better choice than the LSTM in the context of instrument detection, but it is definitely a strong alternative.
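To illustrate the weakly labelled setup, here is a minimal PyTorch sketch in which per-frame predictions are pooled to clip level, so only clip-level multi-hot labels are needed; the LSTM tagger, the pooling choice and the class count are placeholders, and a TCN would be swapped in for the comparison:

```python
import torch
import torch.nn as nn

N_INSTRUMENTS = 11            # number of instrument classes (placeholder)

class ClipTagger(nn.Module):
    """Weak labels: one multi-hot vector per clip, no onset/offset times.
    Frame-wise predictions are max-pooled over time before the loss."""
    def __init__(self, n_mels=128, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, batch_first=True,
                            bidirectional=True)  # swap in a TCN to compare
        self.head = nn.Linear(2 * hidden, N_INSTRUMENTS)

    def forward(self, spec):              # spec: (batch, frames, n_mels)
        frames, _ = self.lstm(spec)
        logits = self.head(frames)        # per-frame instrument logits
        return logits.max(dim=1).values   # pool over time -> clip level

model = ClipTagger()
spec = torch.randn(4, 300, 128)           # placeholder log-mel spectrograms
labels = torch.randint(0, 2, (4, N_INSTRUMENTS)).float()
loss = nn.BCEWithLogitsLoss()(model(spec), labels)
print(float(loss))
```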
11

Albert, Florea George, and Filip Weilid. "Deep Learning Models for Human Activity Recognition." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20201.

Full text
Abstract:
The Augmented Multi-party Interaction (AMI) Meeting Corpus database is used to investigate group activity recognition in an office environment. The AMI Meeting Corpus provides researchers with remote-controlled meetings and natural meetings in an office environment; the meeting scenario is a four-person office room. To achieve group activity recognition, video frames and two-dimensional audio spectrograms were extracted from the AMI database. The video frames were RGB colored images, and the audio spectrograms had one color channel. The video frames were produced in batches so that temporal features could be evaluated together with the audio spectrograms. It has been shown that including temporal features, both during model training and when predicting the behavior of an activity, increases validation accuracy compared to models that only use spatial features [1]. Deep learning architectures were implemented to recognize different human activities in the AMI office environment using the extracted data from the AMI database. The neural network models were built using the Keras API together with the TensorFlow library. There are different types of neural network architectures; the architecture types investigated in this project were Residual Neural Network, Visual Geometry Group 16, Inception V3 and RCNN (Recurrent Neural Network). ImageNet weights were used to initialize the weights of the neural network base models. The ImageNet weights are provided by the Keras API and are optimized for each base model [2]. The base models use ImageNet weights when extracting features from the input data. Feature extraction using ImageNet weights or random weights together with the base models showed promising results. Both the deep learning classification using dense layers and the LSTM spatio-temporal sequence prediction were implemented successfully.
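A minimal sketch of the Keras-based feature extraction described above, with ImageNet-initialised base models and the top layers removed; the specific base models and input size here are illustrative choices:

```python
import numpy as np
import tensorflow as tf

# ImageNet-initialised base models acting as frozen feature extractors.
bases = {
    "ResNet50":    tf.keras.applications.ResNet50,
    "VGG16":       tf.keras.applications.VGG16,
    "InceptionV3": tf.keras.applications.InceptionV3,
}
frames = np.random.rand(2, 224, 224, 3).astype("float32")  # placeholder frames
for name, ctor in bases.items():
    base = ctor(weights="imagenet", include_top=False, pooling="avg",
                input_shape=(224, 224, 3))
    base.trainable = False          # use the pretrained weights as-is
    print(name, base.predict(frames, verbose=0).shape)
```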
12

Koskela, von Sydow Anita. "Regulation of fibroblast activity by keratinocytes, TGF-β and IL-1α : studies in two- and three dimensional in vitro models." Doctoral thesis, Örebro universitet, Institutionen för medicinska vetenskaper, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-48225.

Full text
Abstract:
Dysregulated wound healing is commonly associated with excessive fibrosis. Connective tissue growth factor (CTGF/CCN2) is characteristically overexpressed in fibrotic diseases and stimulated by transforming growth factor-β (TGF-β) in dermal fibroblasts. Re-epithelialisation and epidermal wound coverage counteract excessive scar formation. We have previously shown that interleukin-1α (IL-1α) derived from keratinocytes counteracts TGF-β-stimulated CTGF expression. The aim of this thesis was to further explore the effects of keratinocytes and IL-1α on gene and protein expression, as well as pathways, in TGF-β-stimulated fibroblasts. Fibroblasts were studied in vitro in conventional two-dimensional cell culture models and in a three-dimensional keratinocyte-fibroblast organotypic skin culture model. The results showed that IL-1 suppresses basal and TGF-β-induced CTGF mRNA and protein, possibly involving a TAK1 mechanism. Keratinocytes regulate the expression of fibroblast genes important for the turnover of the extracellular matrix. Most of the genes analysed (11/13) were regulated by TGF-β and counter-regulated by keratinocytes. The overall results support the view that keratinocytes regulate fibroblasts to act catabolically (anti-fibrotically) on the extracellular matrix. Transcriptional microarray and gene set enrichment analysis showed that the antagonizing effects of IL-1α on TGF-β were much more prominent than the synergistic effects. The most confident of these pathways was interferon signaling, which was inhibited by TGF-β and activated by IL-1α. A proteomics study confirmed that IL-1α preferentially counteracts TGF-β effects. Six new fibroblast proteins involved in synthesis/regulation were identified, each regulated by TGF-β and antagonized by IL-1α. Pathway analysis confirmed the counter-regulation of interferon signaling by the two cytokines. These findings have implications for understanding the role of fibroblasts in inflammatory responses and the development of fibrosis in the skin.
13

Ornstein, Charlotte, and Karin Sandahl. "Coopetition and business models : How can they be integrated, and what effect does it have on value creation, delivery and capture?" Thesis, Umeå universitet, Företagsekonomi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-105963.

Full text
Abstract:
Technological innovations and development have caused rapid changes in the business environment. These changes have forced firms to change the way they do business and operate. Two industries affected by these changes are the telecommunication industry and the information technology (IT) industry. Here, it is no longer possible for firms to operate completely individually, and many firms are pushed to engage in so-called coopetition, which is cooperation with both vertical and horizontal competitors. As a consequence of the environmental changes, firms' business models also need to change. Firms need to find new ways to create and deliver value that meet customer demand, and to capture a fair portion of that value from customers. We have found a connection between coopetition and business models, since value creation and value capture are central to both concepts. Previous research has, however, only touched on the connection between coopetition and business models, and the literature still lacks research on this new subject. The research gap has led us to formulate the following problem definition: How can coopetition and business models be integrated, and what effect does it have on firms' value creation, delivery, and capturing? With this problem definition, the study has three purposes. Firstly, the study aims to find how coopetition and business models can be seen and understood through the lenses of each other. Secondly, it examines how such integration can allow the complex nature of coopetition to be managed more appropriately. Thirdly, it aims to create an understanding of what effects coopetition and business models can have on value creation, delivery, and capturing when integrated. As the aim of this degree project is to develop a deeper understanding of this connection, we have chosen to do a qualitative study. We conducted interviews with participants from seven different firms. To complement the theoretical framework, we held an expert interview with Professor Devi Gnyawali. The analysis has led us to the conclusion that coopetition and business models are connected in more ways than is acknowledged in the literature today. We have found that coopetition and business models are not only connected in value creation and value capture, but also in value delivery. We can also conclude that it is important to develop principles in the business model for when, why, and how to engage in different forms of coopetition, to better manage it. This can have a positive influence on value creation, value delivery and value capture.
14

Plotegher, Silvio Luiz. "Proposta de método de referência aplicado a retrofitting de máquinas-ferramentas." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/18/18156/tde-12092013-094431/.

Full text
Abstract:
The obsolescence of machine tools (MTs) is a natural aging process in this type of equipment. This process comprises a series of technological degradations that occur across the entire physical structure of an MT, regardless of the technology used. The degradations may, however, be due to several factors such as usage, application, machining regime, and so on. The technological aspects are more evident when analyzing Computerized Numerical Control (CNC) machines. On the other hand, CNC equipment evolves constantly, but new CNCs often cannot be easily incorporated into an existing machine without a previously defined retrofitting procedure. A retrofitting process can be considered a process that applies technological upgrades to an MT. However, such a process is usually carried out without any models or tools to support decision-making. More recently, several studies have shown trends connecting MTs and sustainability without, however, defining templates for such a process. Studying the behavior of an existing MT, even with its limitations, suggests that the resources of new technologies can help improve its productivity in general and therefore contribute to more sustainable production. However, this must be done with a methodology that guides the decision-making process of whether a retrofit yields a technological gain for the MT or not. Thus, this work proposes a reference method for machine retrofitting, defining a series of coefficients and numerical indices whose results are intended to serve as a tool to support the decision-making process.
15

Venne, Simon. "Can Species Distribution Models Predict Colonizations and Extinctions?" Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38465.

Full text
Abstract:
Aim: MaxEnt, a very popular species distribution modelling technique, has been used extensively to relate species' geographic distributions to environmental variables and to predict changes in species' distributions in response to environmental change. Here, we test its predictive ability through time (rather than through space, as is commonly done) by modeling colonizations and extinctions.
Location: Continental U.S. and southern Canada.
Time period: 1979-2009.
Major taxa studied: Twenty-one species of passerine birds.
Methods: We used MaxEnt to relate species' geographic distributions to the variation in environmental conditions across North America. We then modelled site-specific colonizations and extinctions between 1979 and 2009 as functions of MaxEnt-estimated previous habitat suitability, inter-annual change in habitat suitability, and neighborhood occupancy. We evaluated whether the effects were in the expected direction, we partitioned each model's explained deviance, and we compared the colonization and extinction models' accuracy to MaxEnt's AUC.
Results: Colonization and extinction probabilities both varied as functions of previous habitat suitability, change in habitat suitability, and neighborhood occupancy, in the expected directions. Change in habitat suitability explained very little deviance compared to the other predictors. Neighborhood occupancy accounted for more explained deviance in colonization models than in extinction models. MaxEnt AUC correlates with the extinction models' predictive ability, but not with that of the colonization models.
Main conclusions: MaxEnt appears to sometimes capture a real effect of the environment on species' distributions, since a statistical effect of habitat suitability is detected through both time and space. However, change in habitat suitability (which is much smaller through time than through space) is a poor predictor of change in occupancy. Over short time scales, proximity of sites occupied by conspecifics predicts changes in occupancy just as well as MaxEnt. The ability of MaxEnt models to predict spatial variation in occupancy (as measured by AUC) gives little indication of transferability through time. Thus, the predictive value of species distribution models may be overestimated when evaluated through space only. Future prediction of species' responses to climate change should distinguish between colonization and extinction, recognizing that the two processes are not equally well predicted by SDMs.
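The colonization model described above is essentially a logistic regression of site-level transitions on three predictors. A minimal statsmodels sketch on synthetic stand-in data follows; all column names and coefficients are placeholders, and an analogous model would be fit for extinction:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000  # placeholder site-level records
df = pd.DataFrame({
    "suit_prev":  rng.uniform(0, 1, n),   # MaxEnt habitat suitability, t-1
    "d_suit":     rng.normal(0, 0.1, n),  # inter-annual change in suitability
    "neigh_occ":  rng.uniform(0, 1, n),   # neighborhood occupancy
})
eta = -2 + 3 * df.suit_prev + 1 * df.d_suit + 2 * df.neigh_occ
df["colonized"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# Site-specific colonization probability as a function of the three
# predictors used in the thesis.
m = smf.logit("colonized ~ suit_prev + d_suit + neigh_occ", df).fit(disp=0)
print(m.params)
```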
16

Kovář, Pavel. "Model CNC frézky." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219700.

Full text
Abstract:
The master's thesis deals with the basic parts and principles of CNC machines, with a focus on a CNC milling machine, and compares several commercially sold CNC machines. Furthermore, the project describes the manipulator situated in laboratory E-132 at Kolejní 4, Brno, and the devices included in the manipulator model. It also describes the program that generates G-code from an image created in the editor. The next section describes possible modifications of the manipulator for its reconstruction into a CNC milling machine. The last chapter describes the program developed for controlling the manipulator as a CNC milling machine.
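As a small illustration of the kind of G-code such a program emits (the actual program works from an image; this sketch only traces a given polyline, with feed rate and depths as placeholder values):

```python
def polyline_to_gcode(points, feed=300, safe_z=5.0, cut_z=-1.0):
    """Emit simple G-code tracing a polyline: rapid to the start, plunge,
    feed through the points, then retract. Coordinates in millimetres."""
    x0, y0 = points[0]
    lines = ["G21", "G90",                       # mm units, absolute coords
             f"G0 Z{safe_z}", f"G0 X{x0:.3f} Y{y0:.3f}",
             f"G1 Z{cut_z} F{feed}"]             # plunge to cutting depth
    lines += [f"G1 X{x:.3f} Y{y:.3f} F{feed}" for x, y in points[1:]]
    lines += [f"G0 Z{safe_z}", "M2"]             # retract and end program
    return "\n".join(lines)

print(polyline_to_gcode([(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]))
```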
17

Kurbanoglu, Ozgur. "Electric Energy Policy Models In The European Union: Can There Be A Model For Turkey?" Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/3/12605585/index.pdf.

Full text
Abstract:
The thesis discusses the Turkish energy sector using examples, projections made by the European Union, and the positions of experts and scholars. It discusses the process of reforming the energy sector and what the obstacles and difficulties are. Turkey needs progress in this reform process, which can be achieved by using a functioning model from the field. As an applicant country, Turkey has to apply the legislation of the European Union, and it needs a strategy for implementing the energy legislation. Different countries in the European Union have been examined in this work to find a strategy for the Turkish energy sector. The countries were selected for their peculiarities (Greece) and for their strategic approaches to shaping their markets (France, Italy, Germany, and the United Kingdom – the G8 countries in the European Union). The result of the study shows that the energy pool applied in England and Wales in the United Kingdom is a successful example, and it can be used for electricity policy along with some other developments in the field. The work proposes a model for the reform to be carried out, for the benefit of society.
18

Du, Chenguang. "How Well Can Two-Wave Models Recover the Three-Wave Second Order Latent Model Parameters?" Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103856.

Full text
Abstract:
Although previous studies on structural equation modeling (SEM) have indicated that the second-order latent growth model (SOLGM) is a more appropriate approach to longitudinal intervention effects, its application still requires researchers to collect at least three-wave data (e.g. a randomized pretest, posttest, and follow-up design). However, in some circumstances researchers can only collect two-wave data due to resource limitations. With only two-wave data, the SOLGM cannot be identified, and researchers often choose alternative SEM models to fit two-wave data. Recent studies show that the two-wave longitudinal common factor model (2W-LCFM) and the two-wave latent change score model (2W-LCSM) can perform well for comparing latent change between groups. However, empirical evidence is still lacking on how accurately these two-wave models can estimate the group effects of latent change obtained by the three-wave SOLGM (3W-SOLGM). The main purpose of this dissertation, therefore, is to examine to what extent the fixed effects of the three-wave SOLGM can be recovered from the parameter estimates of the two-wave LCFM and LCSM under different simulation conditions. A supplementary study (study 2) using the three-wave LCFM was established to help justify the logic of the different model comparisons in the main study (study 1). The data-generating model in both studies is the 3W-SOLGM, and there are in total five simulation factors (sample size, group differences in intercept and slope, the covariance between the slope and intercept, the size of the time-specific residual, and the changing pattern of the time-specific residual). Three main types of evaluation indices were used to assess the quality of estimation (bias/relative bias, standard error, and power/type I error rate). The results of the supplementary study show that the performance of the 3W-LCFM and 3W-LCSM is equivalent, which further justifies the model comparisons in the main study. The point estimates for the fixed-effect parameters obtained from the two-wave models are unbiased or identical to the ones from the three-wave model. However, using two-wave models can reduce estimation precision and statistical power when the time-specific residual variance is large and its changing pattern is heteroscedastic (non-constant). Finally, two real datasets were used to illustrate the simulation results.
Doctor of Philosophy
Collecting and analyzing longitudinal data is a very important approach to understanding developmental phenomena in the real world. Ideally, researchers who are interested in using a longitudinal framework would prefer collecting data at more than two points in time, because it can provide a deeper understanding of the developmental processes. However, in real scenarios, data may only be collected at two time points. With only two-wave data, the second-order latent growth model (SOLGM) cannot be used. The current dissertation compared the performance of two-wave models (the longitudinal common factor model and the latent change score model) with the three-wave SOLGM, in order to better understand how the estimation quality of two-wave models compares to the three-wave model. The results show that, on average, the estimation from two-wave models is identical to that from the three-wave model. So, in real data analysis with only one sample, the point estimate from two-wave models should be very close to that of the three-wave model. But this estimation may not be as accurate as that obtained by the three-wave model when the latent variable has large variability at the first or last time point. Such a latent variable is more likely to exist as a state-like construct in the real world. Therefore, the current study provides a reference framework for substantive researchers who only have access to two-wave data but are still interested in estimating the growth effect that would be obtained by the three-wave SOLGM.
19

Thomas, Kerry J. "Teaching Mathematical Modelling to Tomorrow's Mathematicians or, You too can make a million dollars predicting football results." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-83131.

Full text
20

Nixdorf, Timothy Allen. "A Mathematical Model for Carbon Nanoscrolls." University of Akron / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=akron1406060123.

Full text
21

Thomas, Kerry J. "Teaching Mathematical Modelling to Tomorrow's Mathematicians or, You too can make a million dollars predicting football results." Turning dreams into reality: transformations and paradigm shifts in mathematics education. - Grahamstown: Rhodes University, 2011. - S. 334 - 339, 2012. https://slub.qucosa.de/id/qucosa%3A1949.

Full text
22

Ferreira, Bruno Miguel Afonso. "Can Google data measure market sentiment." Master's thesis, Instituto Superior de Economia e Gestão, 2016. http://hdl.handle.net/10400.5/13300.

Full text
Abstract:
Master's in Monetary and Financial Economics
The purpose of this paper, on the subject of behavioral finance, is to use data from Google's online search queries, via the largest search engine in the world and its product Google Trends, to create a variable that serves as a proxy for market sentiment. The paper focuses on studying the correlation of Google-measured market sentiment with the returns of the Portuguese stock index, the PSI-20. To test this, both linear OLS and VAR regressions are implemented, using Google data as an explanatory variable for PSI-20 returns, while data from other control variables is used to filter out the fundamental financial factors. Additionally, the created sentiment proxy is compared with other known sentiment proxies in terms of accuracy and promptness in explaining market behavior. The paper concludes that Google data is indeed capable of appropriately measuring sentiment's influence on the Portuguese market, and it shows more complete results than other proxies used in previous research.
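A minimal statsmodels sketch of the OLS step on synthetic stand-in data; all variable names and coefficients are placeholders, and the VAR step would use statsmodels.tsa.api.VAR on the same columns:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 260  # placeholder weekly observations
df = pd.DataFrame({
    # Weekly change in Google Trends search volume (the sentiment proxy).
    "d_svi":   rng.normal(0, 1, n),
    # A control variable standing in for fundamental factors.
    "mkt_ret": rng.normal(0, 0.02, n),
})
df["psi20_ret"] = (-0.004 * df.d_svi + 0.8 * df.mkt_ret
                   + rng.normal(0, 0.01, n))

# OLS of index returns on the sentiment proxy plus controls.
print(smf.ols("psi20_ret ~ d_svi + mkt_ret", df).fit().params)
```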
23

Nuthmann, Antje, Wolfgang Einhäuser, and Immo Schütz. "How Well Can Saliency Models Predict Fixation Selection in Scenes Beyond Central Bias? A New Approach to Model Evaluation Using Generalized Linear Mixed Models." Universitätsbibliothek Chemnitz, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-232614.

Full text
Abstract:
Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead (“central bias”). This problem is further exacerbated in the context of model comparisons, because some—but not all—models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a-priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox “GridFix” available.
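To show the shape of the analysis, here is a deliberately simplified fixed-effects-only sketch on synthetic data; the article's actual GLMM additionally includes by-subject and by-item random effects, which this plain logit omits, and all variable names are placeholders:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000  # placeholder grid cells (scene x parcel) with fixation outcomes
df = pd.DataFrame({
    "saliency":    rng.uniform(0, 1, n),  # mean model saliency per cell
    "dist_center": rng.uniform(0, 1, n),  # distance from the image center
})
eta = -1 + 2 * df.saliency - 2 * df.dist_center
df["fixated"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# The saliency predictor is evaluated over and above the central-bias
# predictor; a full GLMM would add random effects (e.g. lme4 in R).
print(smf.logit("fixated ~ saliency + dist_center", df).fit(disp=0).params)
```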
24

Meng, Zhaoxin. "A deep learning model for scene recognition." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-36491.

Full text
Abstract:
Scene recognition is a hot research topic in the field of image recognition. Research on scene recognition is important because it supports scene understanding and can provide important contextual information for object recognition. Traditional approaches to scene recognition still have many shortcomings. In recent years, deep learning methods using convolutional neural networks (CNNs) have achieved state-of-the-art results in this area. This thesis constructs a model based on multi-layer CNN feature extraction and transfer learning for scene recognition tasks. Because scene images often contain multiple objects, there may be useful local semantic information in the convolutional layers of the network that is lost in the fully connected layers. Therefore, this thesis improves on the traditional CNN architecture by adopting an existing enhancement of the convolutional-layer information and encoding it using Fisher Vectors. It then introduces the idea of transfer learning, drawing on knowledge from two different domains, scenes and objects, and combines the outputs of the two networks to achieve better results. Finally, the method was implemented using Python and PyTorch and applied to two well-known scene datasets: UIUC-Sports and Scene-15. Compared with the traditional AlexNet CNN architecture, the method improves the result from 81% to 93% on UIUC-Sports and from 79% to 91% on Scene-15, showing that it performs well on scene recognition tasks.
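A minimal PyTorch sketch of the two-network feature combination; both backbones here are ImageNet-pretrained stand-ins (the thesis's scene network would be trained on scene data, e.g. Places), and the Fisher Vector encoding step is omitted:

```python
import torch
import torch.nn as nn
from torchvision import models

# Two pretrained backbones stand in for the object network and the scene
# network; their penultimate features are concatenated for classification.
object_net = models.resnet18(weights="IMAGENET1K_V1")
scene_net = models.resnet18(weights="IMAGENET1K_V1")
for net in (object_net, scene_net):
    net.fc = nn.Identity()                # keep 512-d penultimate features
    net.eval()

classifier = nn.Linear(512 * 2, 15)       # e.g. the 15 classes of Scene-15
x = torch.randn(4, 3, 224, 224)           # placeholder image batch
with torch.no_grad():
    feats = torch.cat([object_net(x), scene_net(x)], dim=1)
print(classifier(feats).shape)            # -> (4, 15)
```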
25

Tang, Hao. "Bidirectional LSTM-CNNs-CRF Models for POS Tagging." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-362823.

Full text
Abstract:
In order to achieve state-of-the-art performance for part-of-speech (POS) tagging, traditional systems require a significant amount of hand-crafted features and data pre-processing. In this thesis, we present a discriminative word embedding, character embedding and byte pair encoding (BPE) hybrid neural network architecture that implements a true end-to-end system without feature engineering or data pre-processing. The neural network architecture is a combination of bidirectional LSTM, CNNs, and a CRF, which can achieve state-of-the-art performance for a wide range of sequence labeling tasks. We evaluate our model on the Universal Dependencies (UD) dataset for English, Spanish, and German POS tagging. It outperforms other models with 95.1%, 98.15%, and 93.43% accuracy on the respective test sets. Moreover, the largest improvements of our model appear on out-of-vocabulary corpora for Spanish and German. According to statistical significance testing, the improvements for English on the test and out-of-vocabulary corpora are not statistically significant. However, the improvements for the other, more morphologically rich languages are statistically significant on their corresponding corpora.
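A minimal PyTorch skeleton of the word + character hybrid architecture; dimensions and vocabulary sizes are placeholders, and the CRF output layer is replaced by per-token scores for brevity:

```python
import torch
import torch.nn as nn

class BiLSTMCNNTagger(nn.Module):
    """Word embeddings concatenated with char-CNN features, fed to a
    bidirectional LSTM; a CRF layer would normally score the outputs."""
    def __init__(self, n_words=10000, n_chars=100, n_tags=17,
                 w_dim=100, c_dim=30, hidden=200):
        super().__init__()
        self.w_emb = nn.Embedding(n_words, w_dim)
        self.c_emb = nn.Embedding(n_chars, c_dim)
        self.char_cnn = nn.Conv1d(c_dim, c_dim, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(w_dim + c_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, words, chars):  # words: (B,T)  chars: (B,T,L)
        B, T, L = chars.shape
        c = self.c_emb(chars.view(B * T, L)).transpose(1, 2)
        c = self.char_cnn(c).max(dim=2).values.view(B, T, -1)  # char feats
        h, _ = self.lstm(torch.cat([self.w_emb(words), c], dim=-1))
        return self.out(h)            # per-token tag scores (B,T,n_tags)

model = BiLSTMCNNTagger()
scores = model(torch.randint(0, 10000, (2, 8)),
               torch.randint(0, 100, (2, 8, 12)))
print(scores.shape)                   # -> (2, 8, 17)
```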
APA, Harvard, Vancouver, ISO, and other styles
26

Mutarelli, Rita de Cássia. "Estudo da responsabilidade social do Instituto de Pesquisas Energéticas e Nucleares de São Paulo (IPEN/CNEN - SP)." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/85/85133/tde-16072014-141824/.

Full text
Abstract:
Over the years, the socio-environmental concept has grown through programs, conferences, and several activities held in Brazil and worldwide. Sustainability and social responsibility are now an integral part of the everyday life of organizations. The Instituto de Pesquisas Energéticas e Nucleares (IPEN), which is the focus of this research, is committed to the improvement of the Brazilian quality of life. Based on IPEN's mission, and due to the lack of tools for assessing socio-environmental actions, this research proposes an assessment tool for social responsibility, which may also serve as a methodological resource committed to the improvement of the Institute. Through indicators and dimensions, a methodology to assess social responsibility and identify both strengths and weaknesses was designed. The methodology was applied to IPEN, and the results demonstrated positive aspects of its actions towards its internal public and aspects requiring improvement towards its external public. The results obtained were satisfactory; nevertheless, as the subject of this study is a broad theme, further studies are suggested. IPEN's board may use the results of this research as a tool to help identify feasible socio-environmental actions to be implemented at the Institute.
APA, Harvard, Vancouver, ISO, and other styles
27

Sivaraman, Gokul. "Development of PMSM and drivetrain models in MATLAB/Simulink for Model Based Design." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301027.

Full text
Abstract:
When developing three-phase drives for Electric Vehicles (EVs), it is essential to verify the controller design. This helps in understanding how fast and accurately the torque of the motor can be controlled. For this, it is preferable to test the controller against a software model of the motor or vehicle drivetrain rather than actual hardware, since replicating extreme physical behavior could damage components. In this thesis, plant modelling of a Permanent Magnet Synchronous Machine (PMSM) and a vehicle drivetrain in MATLAB/Simulink for Model Based Design (MBD) is presented. MBD is an effective method for controller design that, if adopted, can lead to cost savings of 25%-30% and time savings of 35%-40% (according to a global study by Altran Technologies with the chair of software and systems engineering and the chair of Information Management of the Technical University of Munich, TUM) [1]. The PMSM plant models take effects such as magnetic saturation, cross-coupling, spatial harmonics, and temperature into account. Two PMSM models in the d-q frame, based on flux and inductance principles, were implemented. Flux and torque maps from Finite Element Analysis (FEA) and apparent inductance from datasheets were used as inputs to the flux- and inductance-based models, respectively. The FEA of the PMSM was done using COMSOL Multiphysics. The PMSM model results were compared with the corresponding FEA simulation results for verification. These PMSM models were also compared with conventional low-fidelity models to highlight the impact of including temperature and spatial harmonics. The motor models can be combined with an inverter plant model, and a controller can be developed for the complete model. Low-frequency drivetrain oscillations in EVs lead to vibrations which can cause discomfort and torsional stresses. To control these oscillations, an active oscillation damping controller can be implemented. For the implementation of this control, a three-mass mechanical plant model of the drivetrain with an ABS (Anti-lock Braking System) wheel speed sensor has been developed in this thesis. The model transfer function was analysed to obtain pole-zero maps, which were used to observe and verify the presence of low-frequency oscillations in the drivetrain. To include the effects of the ABS wheel speed sensor and CAN communication, a model was also developed for the sensor.
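A minimal sketch of a constant-parameter, flux-based d-q PMSM model, written in Python rather than Simulink purely for illustration. The thesis replaces the constant L_d, L_q and psi_m below with FEA-derived flux maps to capture saturation, cross-coupling, spatial harmonics, and temperature; all numbers here are illustrative.

    import numpy as np

    # Standard d-q stator equations with forward-Euler integration.
    R, Ld, Lq, psi_m, p = 0.05, 1e-3, 1.4e-3, 0.1, 4   # ohm, H, H, Wb, pole pairs

    def step(i_d, i_q, v_d, v_q, w_e, dt=1e-6):
        """One Euler step of the current dynamics at electrical speed w_e."""
        psi_d = Ld * i_d + psi_m
        psi_q = Lq * i_q
        di_d = (v_d - R * i_d + w_e * psi_q) / Ld
        di_q = (v_q - R * i_q - w_e * psi_d) / Lq
        return i_d + dt * di_d, i_q + dt * di_q

    def torque(i_d, i_q):
        # T = 1.5 * p * (psi_d * i_q - psi_q * i_d)
        return 1.5 * p * ((Ld - Lq) * i_d * i_q + psi_m * i_q)

    i_d = i_q = 0.0
    for _ in range(1000):                  # apply a constant voltage for 1 ms
        i_d, i_q = step(i_d, i_q, v_d=0.0, v_q=10.0, w_e=2 * np.pi * 100)
    print(f"i_d={i_d:.2f} A, i_q={i_q:.2f} A, T={torque(i_d, i_q):.2f} Nm")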
APA, Harvard, Vancouver, ISO, and other styles
28

Keating, Daniel. "Model Checking Time Triggered CAN Protocols." Thesis, University of Canterbury. Electrical and Computer Engineering, 2011. http://hdl.handle.net/10092/5754.

Full text
Abstract:
Model checking is used to aid in the design and verification of complex concurrent systems. An abstracted finite-state model of a system and a set of mathematically based correctness properties derived from the design specifications are defined. The model checker then performs an exhaustive state-space search of the model, checking that the correctness properties hold at each step. This thesis describes how the SPIN model checker has been used to find and correct problems in the software design of a distributed marine vessel control system under development at a control systems specialist in New Zealand. The system is a mission-critical control system used on large marine vessels, hence the requirement to study its architecture and verify its implementation. The model checking work reported here focused on analysing the implementation of the Time-Triggered Controller-Area-Network (TTCAN) protocol, as this is the backbone for communications between devices and thus a crucial part of the control system. A model of the ISO TTCAN protocol has been created using the SPIN model checker, based on work previously done by Leen and Heffernan modelling the protocol with the UPPAAL model checker [Leen and Heffernan 2002a]. In the process of building the ISO TTCAN model, a set of general techniques was developed for model checking TTCAN-like protocols, covering the efficient modelling of the progression of time in SPIN, TTCAN message transmission, TTCAN error handling, and CAN bus arbitration. These techniques then form the basis of a set of models developed to check the sponsoring organisation's implementation of TTCAN as well as the fault tolerance schemes added to the system. Descriptions of the models and properties developed to check the correctness of the TTCAN implementation are given, and verification results are presented and discussed. This application of model checking to an industrial design problem has been successful in identifying a number of potential issues early in the design phase. In cases where problems were identified, the sequences of events leading to them are described, and potential solutions are suggested and modelled to check their effect on the system.
APA, Harvard, Vancouver, ISO, and other styles
29

Ondroušek, Jakub. "Ekonometrický model cen bytů v Brně." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2019. http://www.nusl.cz/ntk/nusl-399645.

Full text
Abstract:
The goal of the thesis "Econometric model of flat prices in Brno" is to create an econometric model based on data from the housing market. The theoretical part of the thesis defines the variables and uses descriptive statistics. The practical part deals with the creation of the econometric model and an interactive calculator.
APA, Harvard, Vancouver, ISO, and other styles
30

Abalos, Choque Melisa. "Modelo Arima con intervenciones." Universidad Mayor de San Andrés. Programa Cybertesis BOLIVIA, 2009. http://www.cybertesis.umsa.bo:8080/umsa/2009/abalos_cme/html/index-frames.html.

Full text
Abstract:
The development of a large part of statistical models and methods, specifically those related to time series, has been tied to the desire to study specific applications in various scientific fields. The present work likewise arose with the aim of solving various problems posed in the econometric field, although it can also be used in other areas, all of them tied to a set of historical data, with a very concrete application to the study of foreign currency outflows in Bolivia. Models for time series that depend only on the past of the series itself have been studied in depth; the present work begins the analysis of a time series taking into account some type of external information. Chapter 1 strongly motivates the investigation of factors external to the time series that in some way alter its normal behavior. Chapter 2 develops in detail the univariate models known as ARIMA, presenting their theory. This univariate perspective is then complemented by adding a deterministic component corresponding to intervention analysis, thus building the ARIMA model with interventions. The use of the two kinds of models is compared in Chapter 3, distinguishing which of the two is more effective when the data are affected by circumstantial events. The ARIMA-with-interventions methodology is a useful tool for modelling the behavior of time series whose course is altered by external events that cannot be controlled.
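A minimal Python sketch of the intervention idea, assuming statsmodels: the external event enters an ARIMA model as an exogenous step regressor, so its effect can be estimated alongside the ARMA dynamics. The data, orders, and effect size below are simulated, not taken from the thesis.

    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Toy monthly series with a level shift at t = 60 standing in for an
    # external event that alters the series' normal behavior.
    rng = np.random.default_rng(1)
    n, t0 = 120, 60
    y = np.cumsum(rng.normal(0, 1, n)) + 8 * (np.arange(n) >= t0)   # shift of +8
    step = (np.arange(n) >= t0).astype(float)                       # intervention dummy

    with_iv = SARIMAX(y, exog=step, order=(1, 1, 1)).fit(disp=False)
    without = SARIMAX(y, order=(1, 1, 1)).fit(disp=False)
    print(with_iv.summary())    # the exog coefficient estimates the intervention effect
    print("AIC with vs. without intervention:", with_iv.aic, without.aic)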
APA, Harvard, Vancouver, ISO, and other styles
31

Simoes, Jose Filipe Castanheira Pereira Antunes. "Advanced machining technologies in the ceramics industry." Thesis, Staffordshire University, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343387.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Tůmová, Petra. "Konstrukce předpovědních modelů cen zlata a stříbra." Master's thesis, Česká zemědělská univerzita v Praze, 2016. http://www.nusl.cz/ntk/nusl-260507.

Full text
Abstract:
The thesis focuses on the analysis of the time series of nominal gold and silver prices from 1968 to 2014 and the subsequent use of an ARIMA model to construct price forecasts for both commodities. The forecasts were prepared using the Statistica software from StatSoft ČR s. r. o. (license obtained from ČZU) and the Microsoft Excel spreadsheet.
APA, Harvard, Vancouver, ISO, and other styles
33

Hubková, Helena. "Named-entity recognition in Czech historical texts : Using a CNN-BiLSTM neural network model." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385682.

Full text
Abstract:
The thesis presents named-entity recognition in Czech historical newspapers from the Modern Access to Historical Sources Project. Our goal was to create a specific corpus and annotation manual for the project and to evaluate neural network methods for named-entity recognition on this task. We created the corpus from scanned Czech historical newspapers, whose pages were converted to digitized text by optical character recognition (OCR). The data were preprocessed by removing some OCR errors. We also defined specific named-entity types for our task and created an annotation manual with examples for the project, on the basis of which we annotated the final corpus. To find the most suitable neural network model for our task, we experimented with different architectures, namely long short-term memory (LSTM), bidirectional LSTM, and CNN-BiLSTM models. Moreover, we experimented with randomly initialized word embeddings trained during the training process and with pretrained word embeddings for contemporary Czech published as open source by fastText. We achieved the best result, an F1 score of 0.444, using the CNN-BiLSTM model with the pretrained fastText word embeddings. We found that we do not need to normalize the spelling of our historical texts to bring them closer to the contemporary language if we use the neural network model. We also provide a qualitative analysis of the observed linguistic phenomena. We found that some word forms and pairs of words that were infrequent in our training data were mis-tagged or not tagged at all; based on this, we can say that larger data sets could improve the results.
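A minimal sketch of using pretrained fastText vectors in such a pipeline, assuming gensim and that the vector file has been downloaded beforehand. The file name follows fastText's published naming for Czech common-crawl vectors, and the toy vocabulary is illustrative.

    import numpy as np
    from gensim.models import KeyedVectors

    # fastText distributes .vec files in word2vec text format.
    vecs = KeyedVectors.load_word2vec_format("cc.cs.300.vec", binary=False)

    vocab = ["Praha", "noviny", "<unk>"]          # toy task vocabulary
    dim = vecs.vector_size
    emb = np.zeros((len(vocab), dim), dtype=np.float32)
    for i, w in enumerate(vocab):
        if w in vecs:                             # out-of-vocabulary rows stay zero
            emb[i] = vecs[w]
    print(emb.shape)      # embedding matrix ready to initialize a network's lookup table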
APA, Harvard, Vancouver, ISO, and other styles
34

Dantas, Gustavo Ferreira. "How business models can affect startup failure : Monkey´n Apps Business Study." Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/19331.

Full text
Abstract:
Master's in Management/MBA
Startups are young technology-based companies focused on developing state-of-the-art products or services under conditions of uncertainty. In this scenario, an inappropriate business model can lead to business failure, since the business model describes the architecture of the elements that allow an organization to create, configure, and appropriate value. This dissertation aims to identify how business models are associated with the failure of startups. For this purpose, we use a single case study based on one Brazilian startup, Monkey'n Apps. The data were collected through interviews with the founder and one employee. Our analysis evaluates the constructs presented in Wirtz's (2016) integrated business model, and we then relate those partial models to the processes of value creation, value configuration, and value appropriation. Our results suggest that the startup failed because of the resource model. Besides being the most critical partial model, the resource model was characterized by a misalignment between the founders, which led to poor leadership. The lack of management skills contributed to deteriorating the environment in the company, which later led the founder to ignore its primary asset, its employees.
APA, Harvard, Vancouver, ISO, and other styles
35

Soto, Barra Claudia Naiomi. "Reconocimiento rápido de objetos usando objects proposals y deep learning." Tesis, Universidad de Chile, 2017. http://repositorio.uchile.cl/handle/2250/150337.

Full text
Abstract:
Electrical Civil Engineering degree
Object recognition (or detection) is an active and continuously improving area of computer vision. Recently, different strategies have been introduced to improve performance and reduce detection cost and time. Among these are the generation of object proposals (regions of the image with a high probability of containing an object) to speed up the localization stage, in response to the sliding-window paradigm; the increasingly popular use of deep learning networks; and, in particular for image classification and detection, convolutional neural networks (CNNs). Although several works use both techniques, they all focus on performing well on well-known databases and competitions rather than studying their behavior in real problems, the effect of modifying conventional network architectures, and the appropriate choice of a proposal generation system. The main objective of this work is therefore to characterize proposal generation methods for use in object recognition with CNNs, comparing the performance of both the generated proposals and the complete system on manually built databases. To study the complete system, two well-known architectures, R-CNN and Fast R-CNN, which use the two techniques (proposal generation and detection) in different ways, are compared; Fast R-CNN is considered the better of the two in the state of the art. This work proposes that this hypothesis is not entirely true when working with a sufficiently low number of proposals (the databases built here are designed precisely to ensure a small number of objects of similar sizes in each image: objects on surfaces and objects in a living room) and when the classification process is accelerated by reducing the input size of the convolutional network used. Three proposal generation methods from the literature were chosen on the basis of their reported performance, and their processing times and the quality of the generated proposals (through visual and numerical analysis) were compared in different scenarios as a function of the number of proposals generated. The method called BING presents a substantial advantage in processing time and competitive performance as measured by recall (the fraction of ground-truth objects correctly detected) for the chosen applications. To implement R-CNN, two SqueezeNet-type networks with reduced inputs were trained; selecting the 50 best proposals generated by BING, a network with a 64x64 input achieves almost the same recall (~40%) as the original Fast R-CNN, with better precision, although it is 5 times slower (0.75 s versus 0.14 s). The R-CNN system implemented in this work thus not only speeds up the proposal generation stage by a factor of 10 to 20 compared with the original implementation, but, by reducing the input size of the network, also brings the detection time down to only 5 times slower than Fast R-CNN, where before it was up to 100 times slower, with equivalent performance.
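A minimal PyTorch sketch of the R-CNN-style pipeline with a reduced 64x64 input: crop each proposal from the image, resize it, and classify all crops in one batch. The tiny CNN and the example boxes are illustrative; in the thesis the proposals come from BING and the classifier is a SqueezeNet variant.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallCNN(nn.Module):
        def __init__(self, n_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.fc = nn.Linear(32 * 16 * 16, n_classes)
        def forward(self, x):
            return self.fc(self.features(x).flatten(1))

    image = torch.rand(3, 480, 640)                           # H x W
    proposals = [(40, 60, 200, 220), (300, 100, 460, 260)]    # (x1, y1, x2, y2) boxes

    crops = []
    for x1, y1, x2, y2 in proposals:
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)            # cut out one proposal
        crops.append(F.interpolate(crop, size=(64, 64), mode="bilinear",
                                   align_corners=False))      # warp to the reduced input
    batch = torch.cat(crops)                                  # (n_proposals, 3, 64, 64)
    scores = SmallCNN()(batch)
    print(scores.argmax(dim=1))                               # predicted class per proposal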
APA, Harvard, Vancouver, ISO, and other styles
36

Truong, Quan, and trunongluongquan@yahoo com au. "Continuous-time Model Predictive Control." RMIT University. Electrical and Computer Engineering, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090813.163701.

Full text
Abstract:
Model Predictive Control (MPC) refers to a class of algorithms that optimize the future behavior of the plant subject to operational constraints [46]. The merits of this class of algorithms include the ability to handle hard constraints imposed on the system and to perform on-line optimization. This thesis investigates the design and implementation of continuous-time model predictive control using Laguerre polynomials and extends the design approaches proposed in [43] to include intermittent predictive control as well as nonlinear predictive control. In intermittent predictive control, the Laguerre functions are used to describe the control trajectories between two sample points, saving computational time and making the implementation feasible when a dynamic system is sampled at high rates. In nonlinear predictive control, the Laguerre polynomials are used to describe the trajectories of the nonlinear control signals so that the receding horizon control principle is applied in the design with respect to the nonlinear system constraints. In addition, the thesis reviews several Quadratic Programming methods and compares their performance in the implementation of predictive control. The thesis also presents simulation results of predictive control of an autonomous underwater vehicle and a water tank.
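A minimal Python sketch of the Laguerre parameterisation, assuming SciPy: the control trajectory over the horizon is written as u(t) = sum_i eta_i * l_i(t), where l_i(t) = sqrt(2p) * exp(-p t) * L_i(2 p t) are the continuous-time Laguerre functions with pole p. The pole and coefficients below are illustrative; in MPC the coefficients would come from the quadratic program.

    import numpy as np
    from scipy.special import eval_laguerre

    def laguerre_basis(t, n_terms=4, p=1.0):
        # Orthonormal continuous-time Laguerre functions on [0, inf).
        return np.stack([np.sqrt(2 * p) * np.exp(-p * t) * eval_laguerre(i, 2 * p * t)
                         for i in range(n_terms)])

    t = np.linspace(0, 8, 200)
    L = laguerre_basis(t)                     # (n_terms, len(t)) basis matrix
    eta = np.array([1.0, -0.4, 0.2, 0.05])    # coefficients, normally chosen by the QP
    u = eta @ L                               # reconstructed control trajectory u(t)
    print(u[:5])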
APA, Harvard, Vancouver, ISO, and other styles
37

Kongiranda, Ganapathi Changappa, and Erappa Vivek Mandanna Balapanda. "Design Automation For CNC Machining : A case study for generating CNC codes from geometric CAD models." Thesis, Linköpings universitet, Maskinkonstruktion, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178695.

Full text
Abstract:
The intent of the thesis is to automate the generation of computer numerical control (CNC) codes from geometric computer-aided design (CAD) models. The work is carried out using two machines, the MDX 40A milling machine and the HAAS VM3 machine. The case study determines the potential for automating the code generation process. The empirical findings reveal that manually programming the codes for the models is time-consuming, as it relies on computer-aided manufacturing procedures and manual operations performed by a programmer. One crucial factor in meeting these requirements is the productivity of the machining process, and design automation for machining is a potent tool for increasing it. The main methods used to fulfill the objective of the thesis are addressed. The programming of the code is automated for the two machines and the outcome is compared with the manual approach. Automating the codes provides better accuracy and efficiency; it also increases the capability to accommodate new changes in the design of the model. The conclusion drawn from the results of the study is that automating code programming speeds up the machining process by reducing time consumption. Although the approach was carried out for two machines, its potential lies in the fact that the code generation process does not depend on the post-processor of any specific machine.
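A minimal sketch of the kind of code generation being automated: turning a polyline from a geometric model into standard G-code moves. The feed rate, heights, and toolpath are illustrative, and a real post-processor is machine-specific.

    def polyline_to_gcode(points, feed=600, safe_z=5.0, cut_z=-1.0):
        """Emit G-code that plunges at the first point and mills along the rest."""
        lines = ["G21 ; millimetres", "G90 ; absolute coordinates",
                 f"G00 Z{safe_z:.3f}"]
        x0, y0 = points[0]
        lines += [f"G00 X{x0:.3f} Y{y0:.3f}",      # rapid to the start point
                  f"G01 Z{cut_z:.3f} F{feed}"]     # plunge to cutting depth
        for x, y in points[1:]:
            lines.append(f"G01 X{x:.3f} Y{y:.3f} F{feed}")
        lines.append(f"G00 Z{safe_z:.3f}")         # retract when done
        return "\n".join(lines)

    square = [(0, 0), (40, 0), (40, 40), (0, 40), (0, 0)]
    print(polyline_to_gcode(square))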

The Master Thesis (Master of Science degree in Mechanical Engineering)

APA, Harvard, Vancouver, ISO, and other styles
38

Vasconcelos, Jivago B. Ximenes de. "Can a habit formation model really explain the Forward Premium Anomaly?" reponame:Repositório Institucional do FGV, 2009. http://hdl.handle.net/10438/2714.

Full text
Abstract:
Verdelhan (2009) shows that to explain the foreign exchange forward premium behavior using Campbell and Cochrane's (1999) habit formation model, one must specify it in such a way as to generate pro-cyclical short-term risk-free rates. In the calibration procedure, we show that this is only possible in Campbell and Cochrane's framework under implausible parameter specifications, given that the price-consumption ratio diverges for almost all parameter sets. We then adopt Verdelhan's shortcut of fixing the sensitivity function λ(s_t) at its steady-state level to attain a finite value for the price-consumption ratio, and release it in the simulation stage to ensure pro-cyclical risk-free rates. Beyond the potential inconsistencies that such a procedure may generate, as suggested by Wachter (2006), with pro-cyclical risk-free rates the model generates a downward-sloping real yield curve, which is at odds with the data.
APA, Harvard, Vancouver, ISO, and other styles
39

Al-Kadhimi, Staffan, and Paul Löwenström. "Identification of machine-generated reviews : 1D CNN applied on the GPT-2 neural language model." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280335.

Full text
Abstract:
With recent advances in machine learning, computers can produce increasingly convincing text, raising concerns about a rise in fake information on the internet. At the same time, researchers are creating tools for detecting computer-generated text, exploiting flaws in neural language models and using them against themselves; for example, GLTR provides human users with a visual representation of a text that assists in classifying it as human-written or machine-generated. By training a convolutional neural network (CNN) on GLTR output data from the analysis of machine-generated and human-written movie reviews, we are able to take GLTR a step further and use it to perform this classification automatically. However, using a CNN with GLTR as the main source of data for classification does not appear to be enough to be on par with the best existing approaches.
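A minimal PyTorch sketch of the classification idea: GLTR-style analysis buckets each token by how highly the language model ranked it (GLTR uses four rank buckets), and a 1D CNN over the resulting per-token sequence separates human-written from machine-generated text. The architecture and sizes below are illustrative, not the thesis's exact network.

    import torch
    import torch.nn as nn

    class GLTRConvClassifier(nn.Module):
        def __init__(self, n_buckets=4):
            super().__init__()
            self.emb = nn.Embedding(n_buckets, 8)    # embed each token's rank bucket
            self.conv = nn.Sequential(
                nn.Conv1d(8, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveMaxPool1d(1))             # pool over the whole sequence
            self.fc = nn.Linear(32, 2)               # human vs. machine

        def forward(self, buckets):                  # (batch, seq_len) of bucket ids
            x = self.emb(buckets).transpose(1, 2)    # (batch, 8, seq_len)
            return self.fc(self.conv(x).squeeze(2))

    model = GLTRConvClassifier()
    logits = model(torch.randint(0, 4, (3, 256)))    # three dummy reviews, 256 tokens each
    print(logits.shape)                              # torch.Size([3, 2])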
APA, Harvard, Vancouver, ISO, and other styles
40

Agulhas, Jaclyn Margaret. "International labour standards and international trade :can the two be linked?" Thesis, University of the Western Cape, 2005. http://etd.uwc.ac.za/index.php?module=etd&amp.

Full text
Abstract:
In this paper I delve into the connection between trade policy and labour rights, probably one of the most controversial issues facing the international trading system today. Labour laws differ from country to country, and it is a cause for concern that some countries have higher standards than others: it becomes difficult for countries with high standards to compete with countries with lower standards. Even though there is a definite link between trade and labour, my argument is that incorporating labour standards into the international trading system is not the best way to deal with the problem of the abuse of labour standards.

I further investigate the two organizations at the forefront of this debate, the WTO and the ILO. In an attempt to ascertain which of the two is the better forum to deal with the issue, I also look at the relationship between these two organizations. Compliance with international labour standards is a growing concern, as standards worldwide are deteriorating and nothing is being done to alleviate the problem. Accordingly, I explore the causes of the abuse of labour standards and seek the better alternative by looking at the respective positions of the parties for and against linking trade with labour standards. Here the views and concerns of the developed world are weighed against those of the developing world, and the paper concludes by looking at possible alternatives.
APA, Harvard, Vancouver, ISO, and other styles
41

Kim, Jongmyeong. "Can a model for welfare states be found in East Asia? : a comparative analysis of welfare models in Japan, Taiwan and Korea." Thesis, University of Birmingham, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Karlsson, Linda. "Advertising Theories and Models - how well can these be transferred from text into reality?" Thesis, Halmstad University, School of Business and Engineering (SET), 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-4194.

Full text
Abstract:

This study has examined an international organisation to gain a deeper understanding of how it feels about using advertising theories and models. By interviewing a highly positioned employee in the organisation, a Nordic Brand Manager, the researcher has tried to find out what attitudes the organisation has towards the theories and models used in this study. The study has also interviewed customers of the organisation who have been exposed to one of its advertising campaigns and who have bought its products in the past. This was done to see how the use of these models has been perceived by the customers.

The study has focused on finding out whether there are any traces of the theories and models in the organisation's advertisements.

APA, Harvard, Vancouver, ISO, and other styles
43

Vichare, Parag. "A novel methodology for modelling CNC machining system resources." Thesis, University of Bath, 2009. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.518102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Šmerda, Ondřej. "Návrh koncepce leteckého motoru na CNG." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-401574.

Full text
Abstract:
The Master's thesis deals with the comparison and rating of compressed natural gas (CNG) as an aircraft piston engine fuel. An information search on conventional fuels and the differences between the fuel systems for AVGAS and CNG is included. The next part describes the aircraft and its engine on which the mathematical model is based. Performance and consumption data are then calculated for both fuels and the results are compared. At the end of the thesis, a design of the CNG fuel system, including component selection, is described.
APA, Harvard, Vancouver, ISO, and other styles
45

Pettersson, Hampus, and Markus Holmgren. "Can hidden Markov models be used for inference about operational risk?" Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-148807.

Full text
Abstract:
This thesis investigates whether hidden Markov models (HMMs) can be used for inference about operational risk, given financial time series data on audit changes and audit prices. The models tested vary in the number of possible states the underlying latent process can take. All models have been implemented using the R statistical software along with the depmixS4 package. The evaluation showed a clear difference between the states of the final model, according to the types of observations they emitted. The thesis shows that the biggest factors affecting operational risk were the number of changes to the trades and the time between those changes. It also showed that it was, in large part, the same trader who carried out all the trades as well as the changes, and only within the internal department. The final conclusion is therefore that HMMs are possible and appropriate to use for inference about operational risk, but that more labeled data are required to assess the models' predictive performance.
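A minimal sketch of the modelling idea in Python (the thesis itself uses R with depmixS4), assuming the hmmlearn package: a Gaussian HMM is fitted to per-event features such as the time between changes and the size of a change, and the decoded hidden states separate routine from unusual behaviour. All data below are simulated.

    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(2)
    normal = rng.normal([1.0, 0.1], 0.2, size=(300, 2))   # routine: slow, small changes
    suspect = rng.normal([0.2, 1.5], 0.3, size=(60, 2))   # unusual: rapid, large changes
    X = np.vstack([normal, suspect])                      # columns: [time gap, change size]

    model = hmm.GaussianHMM(n_components=2, covariance_type="full",
                            n_iter=100, random_state=0)
    model.fit(X)
    states = model.predict(X)        # most likely hidden state per event
    print(model.means_)              # per-state emission means
    print(states[-10:])              # the simulated unusual events share one state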
APA, Harvard, Vancouver, ISO, and other styles
46

Svensson, William. "CAN STATISTICAL MODELS BEAT BENCHMARK PREDICTIONS BASED ON RANKINGS IN TENNIS?" Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447384.

Full text
Abstract:
The aim of this thesis is to beat a benchmark prediction accuracy of 64.58 percent based on player rankings on the ATP tour in tennis, where the better-ranked player in a match is predicted as the winner. Three statistical models are used: logistic regression, random forest, and XGBoost. The data cover the period 2000-2010 and comprise over 60,000 observations with 49 variables each. After the data were prepared, new variables were created from the differences between the two players in a match; with these, all three statistical models outperformed the benchmark prediction. All three models had an accuracy around 66 percent, with logistic regression performing best at 66.45 percent. The most important variables overall for the models are the win rate on different surfaces, the total win rate, and rank.
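A minimal scikit-learn sketch of the setup: one row per match, features formed as differences between the two players, and a logistic regression scored on held-out matches. The data here are simulated; the thesis uses real ATP matches.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n = 5000
    # Columns: [rank_diff, total_winrate_diff, surface_winrate_diff]
    X = rng.normal(size=(n, 3))
    logit = -0.8 * X[:, 0] + 1.2 * X[:, 1] + 1.0 * X[:, 2]
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = player one won

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))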
APA, Harvard, Vancouver, ISO, and other styles
47

Cotroneo, Orazio. "Mining declarative process models with quantitative temporal constraints." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24636/.

Full text
Abstract:
Time has always been a subject of study in science, philosophy, and religion. The ancient Greeks referred to time with two separate words: Chronos, for its quantitative aspect, and Kairos, for its qualitative one. In this work, time, as a measurement system for a given business context, is explored in both of its forms. In the last few years in particular, embedding the notion of quantitative time in the discovery of declarative mining models has been a focus of research. The aim of this work is to enrich declarative process mining models with the notion of quantitative time, and then to adapt the discovery algorithm, inspired by Mooney (1995) and modified by Palmieri (2020), to discover the enriched models.
APA, Harvard, Vancouver, ISO, and other styles
48

Gasparini, Roberto. "Developing models of aerosol representation to investigate composition, evolution, optical properties, and CCN spectra using measurements of size-resolved hygroscopicity." Diss., Texas A&M University, 2003. http://hdl.handle.net/1969.1/3878.

Full text
Abstract:
A Differential Mobility Analyzer/Tandem Differential Mobility Analyzer (DMA/TDMA) was used to measure size distributions, hygroscopicity, and volatility during the May 2003 Aerosol Intensive Operational Period at the Central Facility of the Atmospheric Radiation Measurement Southern Great Plains site. Hygroscopic growth factor distributions for particles at eight dry diameters ranging from 0.012 µm to 0.600 µm were measured. These measurements, along with backtrajectory clustering, were used to infer aerosol composition and evolution. The hygroscopic growth of the smallest and largest particles analyzed was typically less than that of particles with dry diameters of about 0.100 µm. Condensation of secondary organic aerosol on nucleation mode particles may be responsible for the minimal growth observed at the smallest sizes. Growth factor distributions of the largest particles typically contained a non-hygroscopic mode believed to be composed of dust. A model was developed to characterize the hygroscopic properties of particles within a size distribution mode through analysis of the fixed-size hygroscopic growth measurements. This model was used to examine three cases in which the sampled aerosol evolved over a period of hours or days. Additionally, size and hygroscopicity information were combined to model the aerosol as a population of multi-component particles. With this model, the aerosol hygroscopic growth factor f(RH), relating the submicron scattering at high RH to that at low RH, is predicted. The f(RH) values predicted when the hygroscopic fraction of the aerosol is assumed to be metastable agree better with measurements than do those predicted under the assumption of crystalline aerosol. Agreement decreases at RH greater than 65%. This multi-component aerosol model is used to derive cloud condensation nuclei (CCN) spectra for comparison with spectra measured directly with two Desert Research Institute (DRI) CCN spectrometers. Among the 1490 pairs of DMA/TDMA-predicted and DRI-measured CCN concentrations at various critical supersaturations from 0.02-1.05%, the sample number-weighted mean R2 value is 0.74. CCN concentrations are slightly overpredicted at both the lowest (0.02-0.04%) and highest (0.80-1.05%) supersaturations measured. Overall, this multi-component aerosol model based on size distributions and size-resolved hygroscopicity yields reasonable predictions of the humidity-dependent optical properties and CCN spectra of the aerosol.
APA, Harvard, Vancouver, ISO, and other styles
49

HANNES, EGON M. "Gestão de projetos de P&D no IPEN: diagnóstico e sugestões ao Escritório de Projetos (PMO)." reponame:Repositório Institucional do IPEN, 2015. http://repositorio.ipen.br:8080/xmlui/handle/123456789/23826.

Full text
Abstract:
Dissertation (Master's in Nuclear Technology)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
APA, Harvard, Vancouver, ISO, and other styles
50

Costantini, Mauro, Cuaresma Jesus Crespo, and Jaroslava Hlouskova. "Can Macroeconomists Get Rich Forecasting Exchange Rates?" WU Vienna University of Economics and Business, 2014. http://epub.wu.ac.at/4181/1/wp176.pdf.

Full text
Abstract:
We provide a systematic comparison of out-of-sample forecasts based on multivariate macroeconomic models and forecast combinations for the euro against the US dollar, the British pound, the Swiss franc, and the Japanese yen. We use profit maximization measures based on directional accuracy and trading strategies in addition to standard loss minimization measures. When comparing predictive accuracy and profit measures, tests free of data-snooping bias are used. The results indicate that forecast combinations help to improve over benchmark trading strategies for the exchange rate against the US dollar and the British pound, although the excess return per unit of deviation is limited. For the euro against the Swiss franc or the Japanese yen, no evidence of a generalized improvement in profit measures over the benchmark is found. (authors' abstract)
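A minimal sketch of the combination-and-evaluation idea: average several forecast series with equal weights and score each by directional accuracy, the share of periods where the sign of the predicted change matches the realised one. The forecast series below are simulated.

    import numpy as np

    rng = np.random.default_rng(4)
    actual = np.cumsum(rng.normal(0, 0.01, 200))             # log exchange rate
    forecasts = [actual + rng.normal(0, s, 200) for s in (0.02, 0.03, 0.05)]
    combo = np.mean(forecasts, axis=0)                       # equal-weight combination

    def directional_accuracy(pred, y):
        # Fraction of periods where the predicted change has the right sign.
        return np.mean(np.sign(np.diff(pred)) == np.sign(np.diff(y)))

    for i, f in enumerate(forecasts, 1):
        print(f"model {i}: {directional_accuracy(f, actual):.3f}")
    print(f"combination: {directional_accuracy(combo, actual):.3f}")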
Series: Department of Economics Working Paper Series
APA, Harvard, Vancouver, ISO, and other styles