Dissertations / Theses on the topic 'CNN MODEL'

To see the other types of publications on this topic, follow the link: CNN MODEL.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'CNN MODEL.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Meng, Zhaoxin. "A deep learning model for scene recognition." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-36491.

Full text
Abstract:
Scene recognition is a hot research topic in the field of image recognition. Research on scene recognition is worthwhile because it supports scene understanding and provides important contextual information for object recognition. Traditional approaches to scene recognition still have many shortcomings. In recent years, deep learning methods based on convolutional neural networks have achieved state-of-the-art results in this area. This thesis constructs a model based on multi-layer feature extraction from a CNN and transfer learning for scene recognition tasks. Because scene images often contain multiple objects, the convolutional layers of the network may hold useful local semantic information that is lost in the fully connected layers. Therefore, this thesis modifies the traditional CNN architecture, adopting an existing improvement that enhances the convolutional-layer information and encoding it using Fisher Vectors. The thesis then introduces the idea of transfer learning, bringing in knowledge from two different domains, scenes and objects, and combines the outputs of the two networks to achieve better results. Finally, the method is implemented in Python with PyTorch and applied to two well-known scene datasets, UIUC-Sports and Scene-15. Compared with the traditional AlexNet CNN architecture, the result improves from 81% to 93% on UIUC-Sports and from 79% to 91% on Scene-15, showing that the method performs well on scene recognition tasks.
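As an illustration of the two-network idea summarized above (combining an object-centric and a scene-centric CNN and classifying on the fused features), a minimal PyTorch sketch might look like the following. The backbones, dimensions and class count are placeholders, assuming a recent torchvision; the thesis's actual networks and Fisher Vector encoding are not reproduced here.

    import torch
    import torch.nn as nn
    from torchvision import models

    class TwoStreamSceneClassifier(nn.Module):
        """Illustrative fusion of an object-centric and a scene-centric CNN."""
        def __init__(self, num_classes=15):
            super().__init__()
            # Stand-ins: in practice one backbone would carry ImageNet (object)
            # weights and the other scene-centric weights such as Places.
            self.object_stream = models.resnet18(weights="DEFAULT")
            self.scene_stream = models.resnet18(weights="DEFAULT")
            self.object_stream.fc = nn.Identity()   # keep the 512-d features
            self.scene_stream.fc = nn.Identity()
            self.classifier = nn.Linear(512 + 512, num_classes)

        def forward(self, x):
            fused = torch.cat([self.object_stream(x), self.scene_stream(x)], dim=1)
            return self.classifier(fused)

    model = TwoStreamSceneClassifier(num_classes=15)
    logits = model(torch.randn(2, 3, 224, 224))   # dummy batch
    print(logits.shape)                           # torch.Size([2, 15])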
2

Hubková, Helena. "Named-entity recognition in Czech historical texts : Using a CNN-BiLSTM neural network model." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385682.

Full text
Abstract:
The thesis presents named-entity recognition in Czech historical newspapers from the Modern Access to Historical Sources Project. Our goal was to create a specific corpus and annotation manual for the project and to evaluate neural network methods for named-entity recognition within the task. We created the corpus using scanned Czech historical newspapers. The scanned pages were converted to digitized text by an optical character recognition (OCR) method, and the data were preprocessed by deleting some OCR errors. We also defined specific named-entity types for our task and created an annotation manual with examples for the project, based on which we annotated the final corpus. To find the most suitable neural network model for our task, we experimented with different architectures, namely long short-term memory (LSTM), bidirectional LSTM and CNN-BiLSTM models. Moreover, we experimented with randomly initialized word embeddings that were trained during the training process and with pretrained word embeddings for contemporary Czech published as open source by fastText. We achieved the best result, an F1 score of 0.444, using the CNN-BiLSTM model with the pretrained fastText word embeddings. We found that we do not need to normalize the spelling of our historical texts to bring them closer to contemporary language when we use the neural network model. We also provided a qualitative analysis of observed linguistic phenomena and found that some word forms and word pairs that were infrequent in our training data set were mis-tagged or not tagged at all. Based on that, we can say that larger data sets could improve the results.
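For readers unfamiliar with the architecture named in the abstract, a minimal CNN-BiLSTM tagger can be sketched as below (PyTorch, with hypothetical vocabulary sizes, dimensions and tag count; the thesis's exact configuration and the fastText embeddings are not reproduced).

    import torch
    import torch.nn as nn

    class CNNBiLSTMTagger(nn.Module):
        """Character-level CNN + word embeddings + BiLSTM, one tag per token."""
        def __init__(self, word_vocab=20000, char_vocab=100, num_tags=9,
                     word_dim=300, char_dim=30, char_filters=30, hidden=100):
            super().__init__()
            self.word_emb = nn.Embedding(word_vocab, word_dim)  # could hold fastText vectors
            self.char_emb = nn.Embedding(char_vocab, char_dim)
            self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
            self.lstm = nn.LSTM(word_dim + char_filters, hidden,
                                batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, num_tags)

        def forward(self, words, chars):
            # words: (batch, seq)   chars: (batch, seq, max_word_len)
            b, s, w = chars.shape
            c = self.char_emb(chars.view(b * s, w)).transpose(1, 2)  # (b*s, char_dim, w)
            c = torch.relu(self.char_cnn(c)).max(dim=2).values       # (b*s, char_filters)
            x = torch.cat([self.word_emb(words), c.view(b, s, -1)], dim=-1)
            h, _ = self.lstm(x)
            return self.out(h)                                       # (batch, seq, num_tags)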
3

Al-Kadhimi, Staffan, and Paul Löwenström. "Identification of machine-generated reviews : 1D CNN applied on the GPT-2 neural language model." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280335.

Full text
Abstract:
With recent advances in machine learning, computers are able to create more convincing text, creating a concern for an increase in fake information on the internet. At the same time, researchers are creating tools for detecting computer-generated text. Researchers have been able to exploit flaws in neural language models and use them against themselves; for example, GLTR provides human users with a visual representation of texts that assists in classification as human-written or machine-generated. By training a convolutional neural network (CNN) on GLTR output data from analysis of machine-generated and human-written movie reviews, we are able to take GLTR a step further and use it to automatically perform this classification. However, using a CNN with GLTR as the main source of data for classification does not appear to be enough to be on par with the best existing approaches.
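The classifier described above can be pictured as a small 1D CNN over a sequence of per-token features (such as GLTR's rank statistics). The sketch below is only an assumed, simplified stand-in for the authors' network, with made-up feature and class counts.

    import torch
    import torch.nn as nn

    class GLTRStyle1DCNN(nn.Module):
        """1D CNN over per-token features, e.g. GLTR rank-bin indicators."""
        def __init__(self, in_features=4, n_classes=2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(in_features, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveMaxPool1d(1),   # global pooling over the token sequence
            )
            self.fc = nn.Linear(64, n_classes)

        def forward(self, x):              # x: (batch, in_features, seq_len)
            return self.fc(self.net(x).squeeze(-1))

    model = GLTRStyle1DCNN()
    print(model(torch.randn(8, 4, 256)).shape)   # torch.Size([8, 2])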
4

Huss, Anders. "Hybrid Model Approach to Appliance Load Disaggregation : Expressive appliance modelling by combining convolutional neural networks and hidden semi Markov models." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-179200.

Full text
Abstract:
The increasing energy consumption is one of the greatest environmental challenges of our time. Residential buildings account for a considerable part of the total electricity consumption and are, furthermore, a sector shown to have large savings potential. Non Intrusive Load Monitoring (NILM), i.e. the deduction of the electricity consumption of individual home appliances from the total electricity consumption of a household, is a compelling approach to delivering appliance-specific consumption feedback to consumers. This enables informed choices and can promote sustainable and cost-saving actions. To achieve this, accurate and reliable appliance load disaggregation algorithms must be developed. This Master's thesis proposes a novel approach to the disaggregation problem, inspired by state-of-the-art algorithms in the field of speech recognition. Previous approaches, for sampling frequencies around 1 Hz, have primarily focused on different types of hidden Markov models (HMMs) and occasionally on the use of artificial neural networks (ANNs). HMMs are a natural representation of electric appliances, but with a purely generative approach to disaggregation essentially all appliances have to be modelled simultaneously. Due to the large number of possible appliances and the variations between households, this is a major challenge: it imposes strong restrictions on the complexity, and thus the expressiveness, of each appliance model in order to keep the inference algorithms feasible. In this thesis, disaggregation is instead treated as a factorisation problem, where the signal of a given appliance has to be extracted from its background. A hybrid model is proposed, in which a convolutional neural network (CNN) extracts features that correlate with the state of a single appliance, and these features are used as observations for a hidden semi-Markov model (HSMM) of the appliance. Since this allows a single appliance to be modelled in isolation, it becomes computationally feasible to use a more expressive Markov model. As a proof of concept, the hybrid model is evaluated on 238 days of 1 Hz power data, collected from six households, to predict the power usage of the households' washing machines. The hybrid model is shown to perform considerably better than a CNN alone, and it is further demonstrated that a significant increase in performance is achieved by including transitional features in the HSMM.
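The CNN half of the hybrid described above can be sketched as a small 1D network that maps a window of aggregate power readings to per-timestep appliance-state probabilities; these would then serve as the observation sequence for the appliance's hidden semi-Markov model (the HSMM inference itself is omitted). State count and layer sizes are assumptions, not the thesis's configuration.

    import torch
    import torch.nn as nn

    class ApplianceStateCNN(nn.Module):
        """Maps a window of 1 Hz aggregate power to per-timestep state scores."""
        def __init__(self, n_states=3):    # e.g. off / heating / spinning
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            )
            self.head = nn.Conv1d(32, n_states, kernel_size=1)

        def forward(self, power):          # power: (batch, 1, window_len)
            scores = self.head(self.features(power))
            # Per-timestep state probabilities, usable as HSMM observations.
            return scores.softmax(dim=1)   # (batch, n_states, window_len)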
5

Laine, Emmi. "Desirability, Values and Ideology in CNN Travel -- Discourse Analysis on Travel Stories." Thesis, Stockholms universitet, Institutionen för mediestudier, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-102742.

Full text
Abstract:
Title: Values, Desirability and Ideology in CNN Travel -- a Discourse Analysis on Travel Stories Author: Emmi Laine Course: Journalistikvetenskap, Kandidatkurs, H13 J Kand (Bachelor of Journalism, Fall 2013), JMK, Stockholm University, Sweden Aim: The aim is to examine which values and ideologies CNN Travel conveys in its stories. Method: Qualitative discourse analysis. Summary: This Bachelor's thesis asks what is presented as desirable, and which values are conveyed by CNN Travel, the online travel site of the major U.S. news corporation CNN. The question has been answered through a qualitative discourse analysis of 20 travel stories, chosen for their relevance, diversity, and expressive tone. Due to the limited space and the specific textual method, the analysis was restricted to the editorial texts of these stories. The chosen method was discourse analyst Norman Fairclough's model of evaluation, which revealed the explicit and implicit ways in which the media texts suggest desired characteristics. These linguistic devices took the readers' agreement for granted, as they imposed a shared cultural ground with common values, which is a basis for mutual understanding. After identifying the explicit and implicit evaluations, they were organized according to major discursive themes found in the texts, and finally analyzed in order to expose their underlying values. The results showed how these values brought forth certain ideologies, to some extent in keeping with recent research on tourism and travel journalism. As the study is placed in a larger context of related research, the following pages first explain some larger concepts of discourse analysis, such as representation, cultural stereotypes, ideology and power. A cross-section from older to more contemporary theories in cultural studies has been utilized, moving from Edward Said's postcolonial classic Orientalism, an example of cultural stereotyping, to the more recent topics of 'promotion culture' and consumerism, and tourism researcher John Urry's ideas about the consumption of places and the 'tourist gaze.' In the end, the study considers what kind of power travel journalism possesses over the represented tourism destinations. Finally, when questioning the travel journalists' legitimacy and power to represent the travel destinations, poststructuralist Michel Foucault's theory of the 'regime of truth,' as well as Antonio Gramsci's ideas of 'hegemony,' a theory of dominance through consent, are discussed and confirmed.
6

Appelstål, Michael. "Multimodal Model for Construction Site Aversion Classification." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-421011.

Full text
Abstract:
Aversions on construction sites can be anything from missing material and fire hazards to insufficient cleaning. These aversions appear very often on construction sites, and the construction company needs to report and take care of them in order for the site to run correctly. The reports consist of an image of the aversion and a text describing it. Report categorization is currently done manually, which is both time- and cost-ineffective. The task for this thesis was to implement and evaluate an automatic multimodal machine learning classifier for the reported aversions that utilizes both the image and text data from the reports. The model presented is a late-fusion model consisting of a Swedish BERT text classifier and a VGG16 for image classification. The results showed that an automated classifier is feasible for this task and could be used in real life to make the classification task more time- and cost-efficient. The model scored 66.2% accuracy and 89.7% top-5 accuracy on the task, and the experiments revealed some areas of improvement in the data and model that could be further explored to potentially improve performance.
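A late-fusion classifier of the kind described above can be as simple as averaging the class probabilities of a separately trained text model and image model. The sketch below uses dummy stand-ins for the Swedish BERT and VGG16 classifiers and an assumed number of categories; it only shows the fusion step.

    import torch
    import torch.nn as nn

    class LateFusionClassifier(nn.Module):
        """Combines independently trained text and image classifiers at the output."""
        def __init__(self, text_model, image_model, text_weight=0.5):
            super().__init__()
            self.text_model = text_model      # e.g. a fine-tuned BERT classifier
            self.image_model = image_model    # e.g. a fine-tuned VGG16 classifier
            self.text_weight = text_weight

        def forward(self, text_inputs, image_inputs):
            p_text = self.text_model(text_inputs).softmax(dim=-1)
            p_image = self.image_model(image_inputs).softmax(dim=-1)
            return self.text_weight * p_text + (1 - self.text_weight) * p_image

    # Dummy stand-ins just to show the call pattern (30 categories assumed)
    text_net = nn.Sequential(nn.Linear(768, 30))
    image_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 30))
    fused = LateFusionClassifier(text_net, image_net)
    probs = fused(torch.randn(4, 768), torch.randn(4, 3, 32, 32))
    print(probs.shape)   # torch.Size([4, 30])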
7

Anam, Md Tahseen. "Evaluate Machine Learning Model to Better Understand Cutting in Wood." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-448713.

Full text
Abstract:
Wood cutting properties for chainsaw chains are measured in the lab by analyzing the force, torque, consumed power and other aspects of the chain as it cuts through a wood log. One of the essential properties of a chain is its cutting efficiency, the cut surface produced per unit of power used for cutting per unit of time. These data are not available beforehand, and therefore cutting efficiency cannot be measured before performing the cut. Cutting efficiency is related to the relative hardness of the wood, which means that it is affected by the existence of knots (hard-structure areas) and cracks (areas with no material). In the current situation, all cuts with knots and cracks are eliminated and only the clean cuts are used; estimating the relative wood hardness by identifying the knots and cracks beforehand can therefore significantly help to automate the process of testing chain properties, save time and material, and give a better understanding of cutting wood logs to improve chain quality. Many studies have been done to develop methods to analyze and measure different features of an end face. This thesis work was carried out to evaluate a machine learning model that detects knots and cracks on end faces and to understand their impact on the average cutting efficiency. Mask R-CNN is widely used for instance segmentation, and in this thesis work Mask R-CNN is evaluated for detecting and segmenting knots and cracks on an end face. Methods are also developed to estimate the pith's vertical position from the wood image and to generate an average cutting efficiency graph based on the knot and crack percentage at each vertical position of the wood image.
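For orientation, Mask R-CNN inference with torchvision looks roughly like the sketch below (assuming a recent torchvision and COCO-pretrained weights as a starting point); the thesis's model trained on knot and crack annotations is not reproduced here.

    import torch
    from torchvision.models.detection import maskrcnn_resnet50_fpn

    # Pretrained COCO weights as a starting point; in practice the heads would be
    # re-trained on end-face images labelled with "knot" and "crack" instances.
    model = maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = torch.rand(3, 512, 512)            # placeholder for an end-face photo
    with torch.no_grad():
        out = model([image])[0]                # dict with boxes, labels, scores, masks

    keep = out["scores"] > 0.5
    masks = out["masks"][keep]                 # (N, 1, H, W) soft instance masks
    print(f"{keep.sum().item()} instances above threshold")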
8

Ghibellini, Alessandro. "Trend prediction in financial time series: a model and a software framework." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24708/.

Full text
Abstract:
The research aims to build an autonomous support tool for traders which in the future could be turned into an Active ETF. My thesis work is characterized by a strong focus on problem formulation and an accurate analysis of the impact of the inputs and the length of the future horizon on the results. I demonstrate that, using financial indicators already employed by professional traders every day and considering a correct length of the future horizon, it is possible to reach interesting scores in forecasting future market states, considering both accuracy, which is around 90% in all the experiments, and confusion matrices, which confirm the good accuracy scores, without an expensive Deep Learning approach. In particular, I used a 1D CNN. I also emphasize that classification appears to be the best way to address this type of prediction, in combination with proper management of unbalanced class weights. In fact, a problem of unbalanced class weights is standard in this setting; otherwise the model would react to inconsistent trend movements. Finally, I propose a framework, which can also be used in other fields, that allows exploiting the knowledge of domain experts and combining this information with ML/DL approaches.
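One common way to handle the unbalanced class weights mentioned above, when training a 1D CNN classifier over trend classes, is to weight the cross-entropy loss inversely to class frequency. A small illustrative sketch (placeholder labels, not the author's exact setup):

    import numpy as np
    import torch
    import torch.nn as nn

    # One label per window, e.g. 0 = down, 1 = sideways, 2 = up (placeholder data)
    y_train = np.random.randint(0, 3, size=1000)
    counts = np.bincount(y_train, minlength=3)
    weights = counts.sum() / (len(counts) * counts)    # inverse-frequency weights

    criterion = nn.CrossEntropyLoss(weight=torch.tensor(weights, dtype=torch.float32))
    # criterion(logits, targets) is then used when training the 1D CNN, so rare
    # trend classes contribute proportionally more to the loss.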
9

Rydén, Anna, and Amanda Martinsson. "Evaluation of 3D motion capture data from a deep neural network combined with a biomechanical model." Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176543.

Full text
Abstract:
Motion capture has in recent years attracted growing interest in many fields, from the game industry to sports analysis. The need for reflective markers and expensive multi-camera systems limits its use, since these are costly and time-consuming. One solution could be a deep neural network trained to extract 3D joint estimations from a 2D video captured with a smartphone. This master thesis project has investigated the accuracy of a trained convolutional neural network, MargiPose, that estimates 25 joint positions in 3D from a 2D video, against a gold-standard multi-camera Vicon system. The project has also investigated whether the data from the deep neural network can be connected to a biomechanical modelling software, AnyBody, for further analysis. The final intention of this project was to analyze how accurate such a combination could be in golf swing analysis. The accuracy of the deep neural network has been evaluated with three parameters: marker position, angular velocity and kinetic energy for different segments of the human body. MargiPose delivers results with high accuracy (Mean Per Joint Position Error (MPJPE) = 1.52 cm) for a simpler movement, but for a more advanced motion such as a golf swing, MargiPose achieves lower accuracy in marker distance (MPJPE = 3.47 cm). The mean difference in angular velocity shows that MargiPose has difficulties following segments that are occluded or move quickly, such as the wrists in a golf swing, where they both move fast and are occluded by other body segments. The conclusion of this research is that it is possible to connect data from a trained CNN with a biomechanical modelling software. The accuracy of the network is highly dependent on the intended use of the data. For the purpose of golf swing analysis, this could be a great and cost-effective solution which could enable motion analysis for professionals but also for interested beginners. MargiPose shows a high accuracy when evaluating simple movements. However, when using it with the intention of analyzing a golf swing in a biomechanical modelling software, the outcome might be beyond the bounds of reliable results.
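The MPJPE values quoted above are simply the mean Euclidean distance between predicted and reference joint positions, which can be computed as in this short NumPy sketch (random arrays stand in for MargiPose and Vicon data):

    import numpy as np

    def mpjpe(pred, ref):
        """Mean Per Joint Position Error; inputs of shape (frames, joints, 3), same units."""
        return np.linalg.norm(pred - ref, axis=-1).mean()

    pred = np.random.rand(100, 25, 3)                     # e.g. 25 estimated joints
    ref = pred + 0.015 * np.random.randn(*pred.shape)     # reference positions
    print(f"MPJPE: {mpjpe(pred, ref):.3f}")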
10

Gerima, Kassaye. "Night Setback Identification of District Heating Substations." Thesis, Högskolan Dalarna, Mikrodataanalys, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:du-36071.

Full text
Abstract:
Energy efficiency of district heating systems is of great interest to energy stakeholders. However, it is not uncommon that district heating systems fail to achieve the expected performance due to inappropriate operation. Night setback is one control strategy which has been shown to be unsuitable for well-insulated modern buildings in terms of both economic and energy efficiency. Therefore, identification of a night setback control is vital for district heating companies to smoothly manage the distribution of heat energy to their customers. This study is motivated by the goal of automating this identification process. The method used in this thesis is a Convolutional Neural Network (CNN) approach using the concept of transfer learning. 133 substations in Oslo are used in this case study to design a machine learning model that can classify a substation's series as night setback or non-night setback. The results show that the proposed method can classify the substations with approximately 97% accuracy and a 91% F1-score. This shows that the proposed method has a high potential to be deployed and used in practice to identify night setback control in district heating substations.
11

Sievert, Rolf. "Instance Segmentation of Multiclass Litter and Imbalanced Dataset Handling : A Deep Learning Model Comparison." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-175173.

Full text
Abstract:
Instance segmentation has great potential for improving the current state of littering by autonomously detecting and segmenting different categories of litter. With this information, litter could, for example, be geotagged to aid litter pickers or to give precise locational information to unmanned vehicles for autonomous litter collection. Land-based litter instance segmentation is a relatively unexplored field, and this study aims to compare the instance segmentation models Mask R-CNN and DetectoRS using the multiclass litter dataset Trash Annotations in Context (TACO) together with the Common Objects in Context precision and recall scores. TACO is an imbalanced dataset, and therefore imbalanced-data handling is addressed, using a second-order relation iterative stratified split and, additionally, oversampling when training Mask R-CNN. Mask R-CNN without oversampling resulted in a segmentation of 0.127 mAP, and with oversampling 0.163 mAP. DetectoRS achieved 0.167 segmentation mAP and improves the segmentation mAP of small objects most noticeably, by a factor of at least 2, which is important within the litter domain since small objects such as cigarettes are overrepresented. In contrast, oversampling with Mask R-CNN does not seem to improve the general precision of small and medium objects, but only improves the detection of large objects. It is concluded that DetectoRS improves results compared to Mask R-CNN, as does oversampling. However, using a dataset that cannot have an all-class representation for the train, validation, and test splits, together with an iterative stratification that does not guarantee all-class representation, makes it hard for future works to make exact comparisons with this study. Results are therefore approximate when considering all categories, since 12 categories are missing from the test set, 4 of which were impossible to split into train, validation, and test sets. Further image collection and annotation to mitigate the imbalance would improve results most noticeably, since the results depend on class-averaged values. Oversampling with DetectoRS would also help improve results. There is also the option of combining the two datasets TACO and MJU-Waste to support training of more categories.
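One simple form of the oversampling mentioned above is to repeat training images that contain under-represented categories until each category reaches a target count. The sketch below is framework-independent and illustrative only; it is not the exact scheme used in the thesis.

    import random
    from collections import Counter

    def oversample(image_ids, labels_per_image, target_per_class):
        """Repeat images containing rare classes; labels_per_image maps id -> set of classes."""
        counts = Counter(c for labs in labels_per_image.values() for c in labs)
        out = list(image_ids)
        for cls, n in counts.items():
            if n >= target_per_class:
                continue
            candidates = [i for i in image_ids if cls in labels_per_image[i]]
            out += random.choices(candidates, k=target_per_class - n)
        random.shuffle(out)
        return out

    # Example: class 7 (say, cigarettes) appears in only one of three images
    ids = [0, 1, 2]
    labels = {0: {1, 7}, 1: {1}, 2: {2}}
    print(oversample(ids, labels, target_per_class=3))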
12

Zhong, Shifa. "Permanganate Reaction Kinetics and Mechanisms and Machine Learning Application in Oxidative Water Treatment." Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1618686803768471.

Full text
13

Kovář, Pavel. "Model CNC frézky." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219700.

Full text
Abstract:
The master's thesis deals with the basic parts and principles of CNC machines, with a focus on a CNC milling machine. It also compares several commercially available CNC machines. Furthermore, the project describes the manipulator situated in laboratory E-132 at Kolejní 4, Brno. The work also includes a description of the devices located in the manipulator model, and of the program that generates G-code from an image created in the editor. The next section describes possible modifications of the manipulator for its conversion into a CNC milling machine. The last chapter deals with the description of the program developed for controlling the manipulator as a CNC milling machine.
14

Viebke, André. "Accelerated Deep Learning using Intel Xeon Phi." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-45491.

Full text
Abstract:
Deep learning, a sub-field of machine learning inspired by biology, has recently achieved wide attention in industry and the research community. State-of-the-art applications in the areas of computer vision and speech recognition (among others) are built using deep learning algorithms. In contrast to traditional algorithms, where the developer fully instructs the application what to do, deep learning algorithms instead learn from experience when performing a task. However, for the algorithm to learn requires training, which is a high computational challenge. High Performance Computing can help ease the burden through parallelization, thereby reducing the training time; this is essential to fully utilize the algorithms in practice. Numerous works targeting GPUs have investigated ways to speed up the training; less attention has been paid to the Intel Xeon Phi coprocessor. In this thesis we present a parallelized implementation of a Convolutional Neural Network (CNN), a deep learning architecture, and our proposed parallelization scheme, CHAOS. Additionally, a theoretical analysis and a performance model discuss the algorithm in detail and allow predictions to be made for the case where even more threads become available in the future. The algorithm is evaluated on an Intel Xeon Phi 7120p, a Xeon E5-2695v2 2.4 GHz and a Core i5 661 3.33 GHz using various architectures and thread counts on the MNIST dataset. Findings show a 103.5x, 99.9x and 100.4x speed-up for the large, medium, and small architecture respectively for 244 threads compared to 1 thread on the coprocessor, and moreover a 10.9x - 14.1x (large to small) speed-up compared to the sequential version running on the Xeon E5. We managed to decrease training time from 7 days on the Core i5 and 31 hours on the Xeon E5, to 3 hours on the Intel Xeon Phi when training our large network for 15 epochs.
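The reported speed-ups can be translated into parallel efficiency (speed-up divided by thread count) to judge how well the 244 threads are utilized; a quick check using the figures from the abstract:

    # Parallel efficiency = speedup / thread count, using the reported figures.
    speedups = {"large": 103.5, "medium": 99.9, "small": 100.4}
    threads = 244
    for arch, s in speedups.items():
        print(f"{arch}: speedup {s:.1f}x, efficiency {s / threads:.1%}")
    # e.g. large: speedup 103.5x, efficiency 42.4%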
15

Nixdorf, Timothy Allen. "A Mathematical Model for Carbon Nanoscrolls." University of Akron / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=akron1406060123.

Full text
16

Ornstein, Charlotte, and Karin Sandahl. "Coopetition and business models : How can they be integrated, and what effect does it have on value creation, delivery and capture?" Thesis, Umeå universitet, Företagsekonomi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-105963.

Full text
Abstract:
Technological innovations and development have caused rapid changes in the business environment. These changes have forced firms to change the way they do business and operate. Two industries that are affected by these changes are the telecommunication industry and the information technology (IT) industry. Here, it is no longer possible for firms to operate completely individually, and many firms are pushed to engage in so-called coopetition, which is cooperation with both vertical and horizontal competitors. As a consequence of the environmental changes, firms' business models also need to change. They need to find new ways to create and deliver value that meet customer demand, and to capture a fair portion of that value from customers. We have found a connection between coopetition and business models, since value creation and value capture are central in both concepts. Previous research has, however, only touched upon the connection between coopetition and business models, and the literature still lacks research on this new subject. The research gap has led us to formulate the following problem definition: How can coopetition and business models be integrated, and what effect does it have on firms' value creation, delivery, and capturing? With this problem definition the study has three purposes: firstly, to find how coopetition and business models can be seen and understood through the lenses of each other; secondly, to show how such integration can allow the complex nature of coopetition to be managed more appropriately; and thirdly, to create an understanding of the effects coopetition and business models can have on value creation, delivery, and capturing when integrated. As the aim of this degree project is to develop a deeper understanding of this connection, we have chosen to do a qualitative study. We have conducted interviews with participants from seven different firms. To complement the theoretical framework we have held an expert interview with Professor Devi Gnyawali. The analysis has led us to the conclusion that coopetition and business models are connected in more ways than is acknowledged in the literature today. We have found that coopetition and business models are connected not only in value creation and value capture, but also in value delivery. We can also conclude that it is important to develop principles in the business model for when, why, and how to engage in different forms of coopetition in order to manage it better. This can have a positive influence on value creation, value delivery and value capture.
17

Truzzi, Stefano. "Event classification in MAGIC through Convolutional Neural Networks." Doctoral thesis, Università di Siena, 2022. http://hdl.handle.net/11365/1216295.

Full text
Abstract:
The Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescopes are able to detect, from the ground, gamma rays with energies beyond several tens of GeV emitted by the most energetic known objects, including Pulsar Wind Nebulae, Active Galactic Nuclei, and Gamma-Ray Bursts. Gamma rays and cosmic rays are detected by imaging the Cherenkov light produced by the charged superluminal leptons in the extended air shower originated when the primary particle interacts with the atmosphere. These Cherenkov flashes brighten the night sky for short times on the nanosecond scale. From the image topology and other observables, gamma rays can be separated from the unwanted cosmic rays, and thereafter the incoming direction and energy of the primary gamma rays can be reconstructed. The standard algorithm in MAGIC data analysis for gamma/hadron separation is the so-called Random Forest, which works on a parametrization of the stereo events based on the shower image parameters. Until a few years ago these algorithms were limited by the available computational resources, but modern devices, such as GPUs, make it possible to work efficiently on the pixel-map information. Most neural network applications in the field perform the training on Monte Carlo simulated data for the gamma-ray sample. This choice is prone to systematics arising from discrepancies between observational data and simulations. Instead, in this thesis I trained a known neural network scheme with observational data from a giant flare of the bright TeV blazar Mrk421 observed by MAGIC in 2013. With this method for gamma/hadron separation, the preliminary results compete with the standard MAGIC analysis based on Random Forest classification, which also shows the potential of this approach for further improvement. In this thesis, first an introduction to High-Energy Astrophysics and Astroparticle Physics is given. The cosmic messengers are briefly reviewed, with a focus on photons; then astronomical sources of γ rays are described, followed by a description of the detection techniques. In the second chapter the MAGIC analysis pipeline, from low-level data acquisition to high-level data, is described, and the MAGIC Instrument Response Functions are detailed. Finally, the most important astronomical sources used in the standard MAGIC analysis are listed. The third chapter is devoted to Deep Neural Network techniques, starting with a historical excursus on Artificial Intelligence followed by a description of Machine Learning. The basic principles behind an Artificial Neural Network and the Convolutional Neural Network used for this work are explained. The last chapter describes my original work, showing in detail the data selection and manipulation for training the Inception ResNet V2 Convolutional Neural Network and the preliminary results obtained on four test sources.
18

Lind, Johan. "Evaluating CNN-based models for unsupervised image denoising." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176092.

Full text
Abstract:
Images are often corrupted by noise, which reduces their visual quality and interferes with analysis. Convolutional Neural Networks (CNNs) have become a popular method for denoising images, but their training typically relies on access to thousands of pairs of noisy and clean versions of the same underlying picture. Unsupervised methods lack this requirement and can instead be trained purely on noisy images. This thesis evaluated two different unsupervised denoising algorithms: Noise2Self (N2S) and Parametric Probabilistic Noise2Void (PPN2V), both of which train an internal CNN to denoise images. Four different CNNs were tested in order to investigate how the performance of these algorithms is affected by different network architectures. The testing used two different datasets: one containing clean images corrupted by synthetic noise, and one containing images damaged by real noise originating from the camera used to capture them. Two of the networks, UNet and a CBAM-augmented UNet, achieved high performance competitive with the strong classical denoisers BM3D and NLM. The other two networks, GRDN and MultiResUNet, on the other hand generally performed poorly.
19

Söderström, Douglas. "Comparing pre-trained CNN models on agricultural machines." Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185333.

Full text
20

Venne, Simon. "Can Species Distribution Models Predict Colonizations and Extinctions?" Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38465.

Full text
Abstract:
Aim: MaxEnt, a very popular species distribution modelling technique, has been used extensively to relate species' geographic distributions to environmental variables and to predict changes in species' distributions in response to environmental change. Here, we test its predictive ability through time (rather than through space, as is commonly done) by modeling colonizations and extinctions. Location: Continental U.S. and southern Canada. Time period: 1979-2009. Major taxa studied: Twenty-one species of passerine birds. Methods: We used MaxEnt to relate species' geographic distributions to the variation in environmental conditions across North America. We then modelled site-specific colonizations and extinctions between 1979 and 2009 as functions of MaxEnt-estimated previous habitat suitability, inter-annual change in habitat suitability, and neighborhood occupancy. We evaluated whether the effects were in the expected direction, we partitioned each model's explained deviance, and we compared the colonization and extinction models' accuracy to MaxEnt's AUC. Results: Colonization and extinction probabilities both varied as functions of previous habitat suitability, change in habitat suitability, and neighborhood occupancy, in the expected direction. Change in habitat suitability explained very little deviance compared to the other predictors. Neighborhood occupancy accounted for more explained deviance in colonization models than in extinction models. MaxEnt AUC correlates with the extinction models' predictive ability, but not with that of the colonization models. Main conclusions: MaxEnt appears to sometimes capture a real effect of the environment on species' distributions, since a statistical effect of habitat suitability is detected through both time and space. However, change in habitat suitability (which is much smaller through time than through space) is a poor predictor of change in occupancy. Over short time scales, proximity of sites occupied by conspecifics predicts changes in occupancy just as well as MaxEnt. The ability of MaxEnt models to predict spatial variation in occupancy (as measured by AUC) gives little indication of transferability through time. Thus, the predictive value of species distribution models may be overestimated when evaluated through space only. Future predictions of species' responses to climate change should make a distinction between colonization and extinction, recognizing that the two processes are not equally well predicted by SDMs.
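The colonization and extinction models described above are essentially logistic regressions of site-level transitions on previous habitat suitability, its change, and neighborhood occupancy. A statsmodels sketch with simulated data (not the authors' data, coefficients or exact specification):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000
    suitability = rng.uniform(0, 1, n)     # MaxEnt-estimated previous habitat suitability
    delta_suit = rng.normal(0, 0.05, n)    # change in suitability between surveys
    neighbors = rng.uniform(0, 1, n)       # occupancy of neighboring sites

    # Simulated colonization events at sites unoccupied in the first survey
    logit = -3 + 2.5 * suitability + 1.0 * delta_suit + 2.0 * neighbors
    colonized = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([suitability, delta_suit, neighbors]))
    model = sm.Logit(colonized, X).fit(disp=False)
    print(model.params)   # effects of suitability, its change, and neighborhood occupancy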
21

Kurbanoglu, Ozgur. "Electric Energy Policy Models In The European Union: Can There Be A Model For Turkey?" Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/3/12605585/index.pdf.

Full text
Abstract:
The thesis discusses the Turkish energy sector using examples, projections made by the European Union, and the positions of experts and scholars. The work discusses the process of reforming the energy sector and what the obstacles and difficulties are. Turkey needs progress in this reform process, which can be supported by adopting a functioning model in the field. As an applicant country, Turkey has to apply the legislation of the European Union, and it needs a strategy for implementing the energy legislation. Different countries in the European Union have been examined in this work in search of a strategy for the Turkish energy sector. The countries have been selected for their peculiarities (Greece) and for their strategic approaches to shaping their markets (France, Italy, Germany, United Kingdom - the G8 countries in the European Union). The result of the study shows that the energy pool applied in England and Wales in the United Kingdom is a successful example, and it can be used for electricity policy along with some other developments in the field. The work proposes a model for the reform to be carried out, for the benefit of society.
22

Du, Chenguang. "How Well Can Two-Wave Models Recover the Three-Wave Second Order Latent Model Parameters?" Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103856.

Full text
Abstract:
Although previous studies on structural equation modeling (SEM) have indicated that the second-order latent growth model (SOLGM) is a more appropriate approach to longitudinal intervention effects, its application still requires researchers to collect at least three-wave data (e.g. a randomized pretest, posttest, and follow-up design). However, in some circumstances researchers can only collect two-wave data due to resource limitations. With only two-wave data the SOLGM cannot be identified, and researchers often choose alternative SEM models to fit two-wave data. Recent studies show that the two-wave longitudinal common factor model (2W-LCFM) and latent change score model (2W-LCSM) can perform well for comparing latent change between groups. However, there is still a lack of empirical evidence about how accurately these two-wave models can estimate the group effects of latent change obtained by the three-wave SOLGM (3W-SOLGM). The main purpose of this dissertation, therefore, is to examine to what extent the fixed effects of the three-wave SOLGM can be recovered from the parameter estimates of the two-wave LCFM and LCSM under different simulation conditions. Fundamentally, the supplementary study (study 2), using the three-wave LCFM, was established to help justify the logic of the different model comparisons in our main study (study 1). The data-generating model in both studies is the 3W-SOLGM, and there are in total 5 simulation factors (sample size, group differences in intercept and slope, the covariance between slope and intercept, the size of the time-specific residual, and the changing pattern of the time-specific residual). Three main types of evaluation indices were used to assess the quality of estimation (bias/relative bias, standard error, and power/type I error rate). The results of the supplementary study show that the performance of the 3W-LCFM and 3W-LCSM is equivalent, which further justifies the model comparisons in the main study. The point estimates for the fixed effect parameters obtained from the two-wave models are unbiased or identical to the ones from the three-wave model. However, using two-wave models can reduce the estimation precision and statistical power when the time-specific residual variance is large and the changing pattern is heteroscedastic (non-constant). Finally, two real datasets were used to illustrate the simulation results.
Doctor of Philosophy
Collecting and analyzing longitudinal data is a very important approach to understanding the phenomenon of development in the real world. Ideally, researchers interested in a longitudinal framework would prefer collecting data at more than two points in time because it can provide a deeper understanding of the developmental processes. However, in real scenarios, data may only be collected at two time points. With only two-wave data, the second-order latent growth model (SOLGM) cannot be used. The current dissertation compared the performance of two-wave models (the longitudinal common factor model and the latent change score model) with the three-wave SOLGM in order to better understand how the estimation quality of the two-wave models compares to that of the three-wave model. The results show that, on average, the estimation from the two-wave models is identical to the one from the three-wave model. So in real data analysis with only one sample, the point estimate from the two-wave models should be very close to that of the three-wave model. But this estimation may not be as accurate as the one obtained by the three-wave model when the latent variable has large variability at the first or last time point. Such a latent variable is more likely to exist as a state-like construct in the real world. Therefore, the current study can provide a reference framework for substantive researchers who only have access to two-wave data but are still interested in estimating the growth effect that would be obtained by the three-wave SOLGM.
23

Norlund, Tobias. "The Use of Distributional Semantics in Text Classification Models : Comparative performance analysis of popular word embeddings." Thesis, Linköpings universitet, Datorseende, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-127991.

Full text
Abstract:
In the field of Natural Language Processing, supervised machine learning is commonly used to solve classification tasks such as sentiment analysis and text categorization. The classical way of representing the text has been to use the well-known Bag-Of-Words representation. However, lately, low-dimensional dense word vectors have come to dominate the input to state-of-the-art models. While few studies have made a fair comparison of the models' sensitivity to the text representation, this thesis tries to fill that gap. We especially seek insight into the impact various unsupervised pre-trained vectors have on performance. In addition, we take a closer look at the Random Indexing representation and try to optimize it jointly with the classification task. The results show that while low-dimensional pre-trained representations often have computational benefits and have also reported state-of-the-art performance, they do not necessarily outperform the classical representations in all cases.
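The kind of comparison the abstract describes can be set up by swapping the text representation while keeping the classifier fixed. A small scikit-learn sketch of the bag-of-words baseline (the dense-embedding variants are only indicated in the comment; the dataset choice here is arbitrary):

    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])

    # Bag-of-words baseline; an embedding-based variant would replace the
    # vectorizer with, for example, averaged pretrained word vectors per document.
    bow_clf = make_pipeline(CountVectorizer(max_features=20000),
                            LogisticRegression(max_iter=1000))
    print(cross_val_score(bow_clf, data.data, data.target, cv=3).mean())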
24

Keating, Daniel. "Model Checking Time Triggered CAN Protocols." Thesis, University of Canterbury. Electrical and Computer Engineering, 2011. http://hdl.handle.net/10092/5754.

Full text
Abstract:
Model checking is used to aid in the design and verification of complex concurrent systems. An abstracted finite-state model of a system and a set of mathematically based correctness properties derived from the design specifications are defined. The model checker then performs an exhaustive state-space search of the model, checking that the correctness properties hold at each step. This thesis describes how the SPIN model checker has been used to find and correct problems in the software design of a distributed marine vessel control system currently under development at a control systems specialist in New Zealand. The system under development is a mission-critical control system used on large marine vessels, hence the requirement to study its architecture and verify the implementation of the system. The model checking work reported here focused on analysing the implementation of the Time-Triggered Controller-Area-Network (TTCAN) protocol, as this is used as the backbone for communications between devices and thus is a crucial part of the control system. A model of the ISO TTCAN protocol has been created using the SPIN model checker, based on work previously done by Leen and Heffernan modelling the protocol with the UPPAAL model checker [Leen and Heffernan 2002a]. In the process of building the ISO TTCAN model, a set of general techniques was developed for model checking TTCAN-like protocols. The techniques developed include modelling the progression of time efficiently in SPIN, TTCAN message transmission, TTCAN error handling, and CAN bus arbitration. These techniques then form the basis of a set of models developed to check the sponsoring organisation's implementation of TTCAN as well as the fault tolerance schemes added to the system. Descriptions of the models and properties developed to check the correctness of the TTCAN implementation are given, and verification results are presented and discussed. This application of model checking to an industrial design problem has been successful in identifying a number of potential issues early in the design phase. In cases where problems were identified, the sequences of events leading to the problems are described, and potential solutions are suggested and modelled to check their effect on the system.
25

Ondroušek, Jakub. "Ekonometrický model cen bytů v Brně." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2019. http://www.nusl.cz/ntk/nusl-399645.

Full text
Abstract:
The goal of the thesis „Econometric model of flat prices in Brno“ is to create an econometric model based on data from the housing market. The theoretical part of the thesis defines the variables and uses descriptive statistics. The practical part of the thesis deals with the creation of the econometric model and an interactive calculator.
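A hedonic regression of the kind such a model typically relies on can be fitted in a few lines with statsmodels; the variables and figures below are made up for illustration and are not from the thesis.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical listings: price in CZK, floor area, rooms, distance to the centre
    flats = pd.DataFrame({
        "price": [3.1e6, 4.2e6, 5.0e6, 2.8e6, 6.3e6, 3.9e6],
        "area_m2": [45, 62, 74, 40, 95, 58],
        "rooms": [2, 2, 3, 1, 4, 2],
        "dist_centre_km": [4.0, 2.5, 3.1, 6.2, 1.8, 3.6],
    })
    model = smf.ols("price ~ area_m2 + rooms + dist_centre_km", data=flats).fit()
    print(model.summary())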
26

Mutarelli, Rita de Cássia. "Estudo da responsabilidade social do Instituto de Pesquisas Energéticas e Nucleares de São Paulo (IPEN/CNEN - SP)." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/85/85133/tde-16072014-141824/.

Full text
Abstract:
Over the years, the socio-environmental concept has grown through programs, conferences and several activities that have been held in Brazil and worldwide. Sustainability and social responsibility are now an integral part of the everyday life of organizations. The Instituto de Pesquisas Energéticas e Nucleares (IPEN), which is the focus of this research, is committed to the improvement of the Brazilian quality of life. Based on IPEN's mission, and due to the lack of tools for assessing socio-environmental actions, this research aims to propose an assessment tool for social responsibility, which may also serve as a methodological resource committed to the improvement of the Institute. Through indicators and dimensions, a methodology to assess social responsibility and identify both strengths and weaknesses was designed. The methodology was administered to IPEN, and the results demonstrated positive aspects regarding actions towards the internal publics and negative aspects towards the external publics that require improvement. The results obtained were satisfactory. Nevertheless, as the subject of this study is a broad theme, further studies are suggested. IPEN's board may use the results of this research as a tool to help them identify feasible socio-environmental actions to be implemented in the Institute.
27

Tenruh, Mahmut. "Extending Controller Area Networks : CAN/CAN cut-through bridging, CAN over ATM, and CAN based ATM FieldBus." Thesis, University of Sussex, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340796.

Full text
28

Truong, Quan, and trunongluongquan@yahoo com au. "Continuous-time Model Predictive Control." RMIT University. Electrical and Computer Engineering, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20090813.163701.

Full text
Abstract:
Model Predictive Control (MPC) refers to a class of algorithms that optimize the future behavior of the plant subject to operational constraints [46]. The merits of this class of algorithms include its ability to handle hard constraints imposed on the system and to perform on-line optimization. This thesis investigates the design and implementation of continuous-time model predictive control using Laguerre polynomials and extends the design approaches proposed in [43] to include intermittent predictive control, as well as the case of nonlinear predictive control. In intermittent predictive control, the Laguerre functions are used to describe the control trajectories between two sample points in order to save computational time and make the implementation feasible in situations of fast sampling of a dynamic system. In nonlinear predictive control, the Laguerre polynomials are used to describe the trajectories of the nonlinear control signals so that the receding horizon control principle is applied in the design with respect to the nonlinear system constraints. In addition, the thesis reviews several Quadratic Programming methods and compares their performance in the implementation of predictive control. The thesis also presents simulation results of predictive control of an autonomous underwater vehicle and a water tank.
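The continuous-time Laguerre functions used to parameterize the control trajectory form an orthonormal basis on [0, inf) and can be generated from the classical Laguerre polynomials, as sketched below (illustrative only; the receding-horizon optimization over the basis coefficients is not shown).

    import numpy as np
    from scipy.special import eval_laguerre

    def laguerre_basis(t, n_terms=4, p=1.0):
        """Orthonormal Laguerre functions l_i(t) = sqrt(2p) * exp(-p t) * L_i(2 p t)."""
        t = np.asarray(t, dtype=float)
        return np.array([np.sqrt(2 * p) * np.exp(-p * t) * eval_laguerre(i, 2 * p * t)
                         for i in range(n_terms)])

    t = np.linspace(0, 8, 200)
    L = laguerre_basis(t, n_terms=4, p=1.0)   # shape (4, 200)
    # A candidate control signal is then u(t) = sum_i eta_i * l_i(t), and the
    # receding-horizon optimization is carried out over the coefficients eta_i.
    print(L.shape)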
29

Suresh, Sreerag. "An Analysis of Short-Term Load Forecasting on Residential Buildings Using Deep Learning Models." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/99287.

Full text
Abstract:
Building energy load forecasting is becoming an increasingly important task with the rapid deployment of smart homes, the integration of renewables into the grid and the advent of decentralized energy systems. Residential load forecasting has been a challenging task since the residential load is highly stochastic. Deep learning models have shown tremendous promise in the fields of time series and sequential data and have been successfully used for short-term load forecasting at the building level. Although other studies have looked at using deep learning models for building energy forecasting, most of those studies have considered a limited number of homes or an aggregate load of a collection of homes. This study aims to address this gap and serve as an investigation into selecting the better deep learning model architecture for short-term load forecasting on 3 communities of residential buildings. The deep learning models CNN and LSTM have been used in the study. For 15-min-ahead forecasting for a collection of homes, it was found that homes with a higher variance were better predicted by CNN models, while LSTM showed better performance for homes with lower variances. The effect of adding weather variables on 24-hour-ahead forecasting was studied, and it was observed that adding weather parameters did not improve forecasting performance. In all the homes, the deep learning models are shown to outperform the simple ANN model.
Master of Science
Building energy load forecasting is becoming an increasingly important task with the rapid deployment of smart homes, the integration of renewables into the grid and the advent of decentralized energy systems. Residential load forecasting has been a challenging task since residential load is highly stochastic. Deep learning models have shown tremendous promise for time-series and sequential data and have been successfully used for short-term load forecasting. Although other studies have looked at using deep learning models for building energy forecasting, most have considered only a single home or the aggregate load of a collection of homes. This study aims to address this gap and serves as an analysis of short-term load forecasting on three communities of residential buildings. Model performance across all homes is analysed in detail. Deep learning models are used in this study and their efficacy is measured against a simple ANN model.
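A minimal sketch of the two model families compared in this study, written in PyTorch with illustrative window length, channel and hidden sizes (the thesis's exact architectures and hyperparameters are not reproduced here):

```python
import torch
import torch.nn as nn

# Minimal 1-D CNN and LSTM forecasters for one-step-ahead (e.g. 15-min) load
# prediction from a sliding window of past readings.

class CNNForecaster(nn.Module):
    def __init__(self, window=96, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(channels * window, 1),
        )

    def forward(self, x):            # x: (batch, window)
        return self.net(x.unsqueeze(1)).squeeze(-1)

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, window)
        out, _ = self.lstm(x.unsqueeze(-1))
        return self.head(out[:, -1]).squeeze(-1)

x = torch.randn(8, 96)               # 8 homes, 96 past 15-min readings (one day)
print(CNNForecaster()(x).shape, LSTMForecaster()(x).shape)  # torch.Size([8]) each
```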
APA, Harvard, Vancouver, ISO, and other styles
30

Vichare, Parag. "A novel methodology for modelling CNC machining system resources." Thesis, University of Bath, 2009. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.518102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Nuthmann, Antje, Wolfgang Einhäuser, and Immo Schütz. "How Well Can Saliency Models Predict Fixation Selection in Scenes Beyond Central Bias? A New Approach to Model Evaluation Using Generalized Linear Mixed Models." Universitätsbibliothek Chemnitz, 2018. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-232614.

Full text
Abstract:
Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead (“central bias”). This problem is further exacerbated in the context of model comparisons, because some—but not all—models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a-priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox “GridFix” available.
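The parcellation-plus-GLMM idea described in this abstract can be illustrated with a simplified stand-in: each scene is divided into grid cells, every cell carries a central-bias predictor (e.g. distance from the image centre) and a saliency predictor, and the model estimates whether a cell was fixated. The sketch below (Python, statsmodels, simulated data) fits a linear mixed model with by-subject random intercepts as a simplified proxy; the published approach uses a logistic GLMM with crossed by-subject and by-item random effects and the GridFix toolbox, whose API is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_cells = 20, 64                      # illustrative sizes
rows = []
for subj in range(n_subj):
    dist = rng.uniform(0, 1, n_cells)         # distance from image centre (central bias)
    sal = rng.uniform(0, 1, n_cells)          # mean saliency of each grid cell
    p = 0.15 - 0.1 * dist + 0.2 * sal         # simulated fixation probability
    fix = rng.binomial(1, np.clip(p, 0, 1))
    rows.append(pd.DataFrame({"subject": subj, "dist_center": dist,
                              "saliency": sal, "fixated": fix}))
data = pd.concat(rows, ignore_index=True)

# Does saliency predict fixation above and beyond the central bias?
m = smf.mixedlm("fixated ~ dist_center + saliency", data,
                groups=data["subject"]).fit()
print(m.summary())
```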
APA, Harvard, Vancouver, ISO, and other styles
32

Šmerda, Ondřej. "Návrh koncepce leteckého motoru na CNG." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-401574.

Full text
Abstract:
The Master's thesis compares and evaluates compressed natural gas as an aircraft piston engine fuel. A survey of conventional fuels and of the differences between the fuel systems for AVGAS and CNG is included. The next part describes the aircraft and its engine on which the mathematical model is based. Performance and consumption data are then calculated for both fuels and the results are compared. At the end of the thesis, a design of the CNG fuel system, including component selection, is described.
APA, Harvard, Vancouver, ISO, and other styles
33

Sivaraman, Gokul. "Development of PMSM and drivetrain models in MATLAB/Simulink for Model Based Design." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301027.

Full text
Abstract:
When developing three-phase drives for Electric Vehicles (EVs), it is essential to verify the controller design; this helps in understanding how fast and accurately the torque of the motor can be controlled. To do this, it is better to test the controller against a software model of the motor or vehicle drivetrain than against actual hardware, which could suffer component damage when replicating extreme physical behaviour. In this thesis, plant modelling of a Permanent Magnet Synchronous Machine (PMSM) and a vehicle drivetrain in MATLAB/Simulink for Model Based Design (MBD) is presented. MBD is an effective method for controller design that, if adopted, can lead to cost savings of 25%-30% and time savings of 35%-40% (according to a global study by Altran Technologies with the chair of Software and Systems Engineering and the chair of Information Management of the Technical University of Munich) [1]. The PMSM plant models take effects such as magnetic saturation, cross-coupling, spatial harmonics and temperature into account. Two PMSM models in the d-q frame, based on flux and inductance principles, were implemented. Flux and torque maps from Finite Element Analysis (FEA) and apparent inductance from datasheets were used as inputs to the flux-based and inductance-based models, respectively. The FEA of the PMSM was done using COMSOL Multiphysics, and the PMSM model results were compared with the corresponding FEA results for verification. A comparison of these PMSM models with conventional low-fidelity models has also been made to highlight the impact of including temperature and spatial harmonics. These motor models can be combined with an inverter plant model, and a controller can be developed for the complete model. Low-frequency oscillations of the drivetrain in EVs lead to vibrations which can cause discomfort and torsional stresses. To control these oscillations, an active oscillation damping controller can be implemented. For the implementation of this control, a three-mass mechanical plant model of the drivetrain with an ABS (Anti-lock Braking System) wheel speed sensor has been developed in this thesis. The model transfer function was analysed to obtain the pole-zero maps, which were used to observe and verify the presence of low-frequency oscillations in the drivetrain. To include the effects of the ABS wheel speed sensor and CAN communication, a model was also developed for the sensor.
Testing controller settings with regard to the speed and accuracy of torque control is crucial in three-phase drive systems for electric vehicles. It is usually better to simulate instead of performing experimental tests in which components can be damaged by physical stress. This is called Model Based Design (MBD). MBD is an effective method for control design that can lead to cost savings of 25%-30% and time savings of 35%-40%, according to a study by Altran Technologies in cooperation with the Technical University of Munich (TUM). This thesis treats a model of a permanent magnet synchronous machine (PMSM) and a drivetrain model developed in MATLAB/Simulink for MBD. The PMSM model includes magnetic saturation and cross-coupling, MMF harmonics and temperature. Two PMSM models have been developed: the first is based on magnetic flux obtained from finite element calculations in COMSOL Multiphysics, while the second builds on inductances given in datasheets. A comparison of these PMSM models with conventional low-fidelity models has also been made to illustrate the influence of temperature dependence and MMF harmonics. The models can be combined with an inverter model to develop a complete controller. Low-frequency oscillations in the drivetrain lead to vibrations that can cause torsional stresses and reduce comfort in the electric vehicle. An active damping controller can be implemented to control these stresses, but a three-mass mechanical drivetrain model with an ABS (anti-lock braking system) wheel speed sensor is needed. The mechanical model has been implemented and analysed, also taking into account a model of a CAN communication channel. Low-frequency oscillations could be observed in the model.
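For reference, the flux-linkage-based d-q machine model described in this abstract is conventionally expressed through the maps psi_d(i_d, i_q) and psi_q(i_d, i_q); the standard voltage and torque equations (textbook PMSM theory, not taken from the thesis) read:

```latex
\begin{aligned}
v_d &= R_s\, i_d + \frac{d\psi_d(i_d,i_q)}{dt} - \omega_e\, \psi_q(i_d,i_q) \\
v_q &= R_s\, i_q + \frac{d\psi_q(i_d,i_q)}{dt} + \omega_e\, \psi_d(i_d,i_q) \\
T_e &= \tfrac{3}{2}\, p\, \bigl( \psi_d\, i_q - \psi_q\, i_d \bigr)
\end{aligned}
```

Here R_s is the stator resistance, omega_e the electrical speed and p the number of pole pairs; saturation, cross-coupling and temperature enter through the dependence of the flux maps on the currents and operating conditions.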
APA, Harvard, Vancouver, ISO, and other styles
34

TAORMINA, Vincenzo. "DEVELOPMENT AND IMPLEMENTATION OF MACHINE LEARNING METHODS FOR THE IIF IMAGES ANALYSIS." Doctoral thesis, Università degli Studi di Palermo, 2021. http://hdl.handle.net/10447/479046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Thomas, Kerry J. "Teaching Mathematical Modelling to Tomorrow's Mathematicians or, You too can make a million dollars predicting football results." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-83131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Brien, Jeffrey. "Mixed Emotions: Can People Feel Happy and Sad at the Same Time?" Thesis, Boston College, 2003. http://hdl.handle.net/2345/426.

Full text
Abstract:
Thesis advisor: Timothy A. Duket
I studied whether people can feel happy and sad at the same moment in time. Participants used a computerized procedure to continuously rate their feelings as they viewed backwardly masked faces designed to elicit pleasant, unpleasant, or mixed feelings. The backward masking procedure and grid were poorly calibrated, as participants found all conditions to be unpleasant. Evidence is presented that participants did not perceive the mask faces as neutral. Directions for future studies are discussed.
Thesis (BA) — Boston College, 2003
Submitted to: Boston College. College of Arts and Sciences
Discipline: Psychology
Discipline: College Honors Program
APA, Harvard, Vancouver, ISO, and other styles
37

Abalos, Choque Melisa. "Modelo Arima con intervenciones." Universidad Mayor de San Andrés. Programa Cybertesis BOLIVIA, 2009. http://www.cybertesis.umsa.bo:8080/umsa/2009/abalos_cme/html/index-frames.html.

Full text
Abstract:
The development of a large part of statistical models and methods, specifically those related to time series, has been tied to the desire to study specific applications within various scientific fields. The present work also arose with the objective of solving various problems posed within the econometric field, although it can also be used in other areas, all of them linked to a set of historical data, with a very concrete application to the study of the outflow of foreign currency in Bolivia. Models for time series that depend only on the past of the series itself have been studied in depth. The present work begins the analysis of a time series taking into account some type of external information. Chapter 1 strongly supports the need to investigate aspects external to the time series that in some way alter its normal behaviour. Chapter 2 develops in detail the univariate models known as ARIMA, presenting their theory. This univariate perspective is then complemented by adding a deterministic part corresponding to intervention analysis, thus building the ARIMA model with interventions; the use of these models is compared in Chapter 3, showing which of the two is more effective when the data are affected by circumstantial events. The methodology of the ARIMA model with interventions is a useful tool for modelling the behaviour of time series whose course is modified by external events that cannot be controlled.
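An ARIMA model with an intervention of the kind discussed here can be sketched in Python by adding a step-type regressor to a SARIMAX fit; the model order, the intervention date and the simulated data below are illustrative only:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Simulated monthly series with a level shift at t = 80 standing in for an
# external event; the ARIMA(1,1,1) order and step intervention are illustrative.
rng = np.random.default_rng(1)
n = 120
y = np.cumsum(rng.normal(0.1, 1.0, n))
y[80:] += 5.0                                  # intervention effect

step = (np.arange(n) >= 80).astype(float)      # step intervention regressor
model = SARIMAX(y, exog=step, order=(1, 1, 1))
res = model.fit(disp=False)
print(res.params)                              # the exog coefficient estimates the intervention
```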
APA, Harvard, Vancouver, ISO, and other styles
38

Thomas, Kerry J. "Teaching Mathematical Modelling to Tomorrow's Mathematicians or, You too can make a million dollars predicting football results." Turning dreams into reality: transformations and paradigm shifts in mathematics education. - Grahamstown: Rhodes University, 2011. - S. 334 - 339, 2012. https://slub.qucosa.de/id/qucosa%3A1949.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Tůmová, Petra. "Konstrukce předpovědních modelů cen zlata a stříbra." Master's thesis, Česká zemědělská univerzita v Praze, 2016. http://www.nusl.cz/ntk/nusl-260507.

Full text
Abstract:
The diploma thesis focuses on the analysis of the development of the time series of nominal gold and silver prices from 1968 to 2014 and the subsequent use of an ARIMA model to construct price forecasts for both commodities. The forecasts were built using the Statistica statistical software from StatSoft ČR s. r. o. (licence obtained through ČZU) and the Microsoft Excel spreadsheet.
APA, Harvard, Vancouver, ISO, and other styles
40

Kim, Taejung 1969. "Time-optimal CNC tool paths : a mathematical model of machining." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/8861.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2001.
Includes bibliographical references (p. 181-188).
Free-form surface machining is a fundamental but time-consuming process in modern manufacturing. The central question we ask in this thesis is how to reduce the time that it takes for a 5-axis CNC (Computer Numerical Control) milling machine to sweep an entire free-form surface in its finishing stage. We formulate a non-classical variational time-optimization problem defined on a 2-dimensional manifold subject to both equality and inequality constraints. The machining time is the cost functional in this optimization problem. We seek a preferable vector field on a surface to obtain skeletal information on the toolpaths. This framework is more amenable to the techniques of continuum mechanics and differential geometry than to path generation and conventional CAD/CAM (Computer Aided Design and Manufacturing) theory. After the formulation, this thesis derives the necessary conditions for optimality. We decompose the problem into a series of optimization problems defined on 1-dimensional streamlines of the vector field and, as a result, simplify the problem significantly. The anisotropy in kinematic performance has practical importance in high-speed machining. The greedy scheme, which this thesis implements for a parallel hexapod machine tool, uses the anisotropy to find a preferable vector field. Numerical integration places tool paths along its integral curves. The gaps between two neighboring toolpaths are controlled so that the surface can be machined within a specified tolerance. A conservation law together with the characteristic theory for partial differential equations comes into play in finding appropriately spaced toolpaths, avoiding unnecessarily overlapping areas. Since the greedy scheme is based on a local approximation and does not search for the global optimum, it is necessary to judge how well the greedy paths perform. We develop an approximation theory and use it to economically evaluate the performance advantage of the greedy paths over other standard schemes. In this thesis, we achieved the following two objectives: laying down the theoretical basis for surface machining and finding a practical solution for the machining problem. Future work will address solving the optimization problem in a stricter sense.
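A heavily simplified sketch of a machining-time cost functional of the kind formulated here (a generic form under stated assumptions, not the author's exact functional): along each tool path the time is arc length divided by feed speed, and in continuum form the total time can be approximated by integrating over the surface with the local path spacing allowed by the tolerance:

```latex
T \;=\; \sum_k \int_{\gamma_k} \frac{ds}{v(s)}
  \;\approx\; \int_{\Omega} \frac{dA}{v(\mathbf{x})\, w(\mathbf{x})}
```

Here v is the feed speed along the path and w the local spacing between neighbouring paths; both depend on the chosen vector field, which is what the greedy scheme optimizes locally.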
by Taejung Kim.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
41

Sztendel, Sebastian. "Model referenced condition monitoring of high performance CNC machine tools." Thesis, University of Huddersfield, 2016. http://eprints.hud.ac.uk/id/eprint/34112/.

Full text
Abstract:
Generally, machine tool monitoring is the prediction of the system's health based on signal acquisition, processing and classification in order to identify the causes of a problem. The producers of machine tools need to pay more attention to their products' life cycle because their customers increasingly focus on machine tool reliability and costs. The present study is concerned with the development of a condition monitoring system for high-speed Computer Numerical Control (CNC) milling machine tools. A model is a simplification of a real machine used to visualize the dynamics of a mechatronic system. This thesis applies recent modelling techniques to represent all parameters which affect the accuracy of a component produced automatically. The control can achieve an accuracy approaching the tolerance restrictions imposed by the machine tool axis repeatability and its operating environment. The motion control system of the CNC machine tool is described, and the elements that compose the axis drives, both electrical and mechanical, are analysed and modelled. SIMULINK models have been developed to represent the majority of the dynamic behaviour of the feed drives of the actual CNC machine tool. Various values for the position controller and the load torque have been applied to the motor to show their effect on its behaviour. The development of a mechatronic hybrid model for a five-axis CNC machine tool using the Multi-Body-System (MBS) simulation approach is described, and an analysis of CNC machine tool performance under non-cutting conditions is presented. ServoTrace data have been used to validate the multi-body simulation of the tool-to-workpiece position. This thesis addresses the application of state-of-the-art sensing methods in the field of condition monitoring of electromechanical systems. The ballscrew-with-nut is perhaps the most prevalent CNC machine subsystem, and the condition of each element is crucial to the success of a machining operation; it is essential to know the health status of the ballscrew, bearings and nut. Acoustic emission analysis of machines has been carried out to determine the deterioration of the ballscrew. Standard practices such as the use of a laser interferometer have been used to determine the position of the machine tool. A novel machine feed drive condition monitoring system using acoustic emission (AE) signals has been proposed. The AE monitoring techniques investigated can be categorised into the traditional AE parameters of energy, event duration and peak amplitude. These events are selected and normalised to estimate the remaining life of the machine. This method is shown to be successfully applied to the ballscrew subsystem of an industrial high-speed milling machine. Finally, the successful outcome of the project will contribute to the machine tool industry, making it possible to manufacture more accurate products at lower cost in shorter time.
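The traditional AE parameters named above (energy, event duration, peak amplitude) can be computed from a recorded burst in a few lines; the threshold, sampling rate and synthetic burst below are illustrative, not values from the thesis:

```python
import numpy as np

def ae_features(signal, fs, threshold):
    """Energy, event duration and peak amplitude of one AE burst."""
    above = np.abs(signal) > threshold
    if not above.any():
        return {"energy": 0.0, "duration_s": 0.0, "peak": 0.0}
    first = np.argmax(above)
    last = len(above) - np.argmax(above[::-1]) - 1
    burst = signal[first:last + 1]
    return {
        "energy": float(np.sum(burst ** 2) / fs),   # signal energy over the burst
        "duration_s": (last - first + 1) / fs,      # time above threshold
        "peak": float(np.max(np.abs(burst))),       # peak amplitude
    }

fs = 1_000_000                                      # 1 MHz sampling (illustrative)
t = np.arange(0, 0.002, 1 / fs)
burst = np.exp(-3000 * t) * np.sin(2 * np.pi * 150e3 * t)   # synthetic AE burst
print(ae_features(burst, fs, threshold=0.05))
```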
APA, Harvard, Vancouver, ISO, and other styles
42

Hamouzová, Michaela. "Analýza vývoje cen nemovitostí v České republice." Master's thesis, Vysoká škola ekonomická v Praze, 2016. http://www.nusl.cz/ntk/nusl-264620.

Full text
Abstract:
The aim of this diploma thesis is to analyze the development of real estate prices in the Czech Republic. The thesis is divided into three main parts. The first deals with a theoretical introduction to the valuation of real estate. The second presents the current development of real estate prices on the Czech market. The last part focuses on co-integration analysis, within which an ADL model is created. This model serves as a base for an error correction model, which describes both short-term and long-term relations within the time series. The explanatory variables are gross domestic product, the consumer price index, the number of completed apartments, the interest rate on mortgage loans, the general rate of unemployment, and average gross monthly income. It is a single-equation model that describes the relation between these explanatory variables and the HPI index.
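A single-equation error correction model of this kind can be sketched with the two-step Engle-Granger procedure; the variables, lag structure and simulated data below are illustrative placeholders rather than the thesis's specification:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 160
gdp = np.cumsum(rng.normal(0.5, 1.0, n))          # illustrative I(1) regressor
hpi = 2.0 + 0.8 * gdp + rng.normal(0, 1.0, n)     # cointegrated house price index

# Step 1: long-run (cointegrating) regression and its residual
long_run = sm.OLS(hpi, sm.add_constant(gdp)).fit()
ect = long_run.resid                               # error correction term

# Step 2: short-run dynamics with the lagged error correction term
d_hpi, d_gdp = np.diff(hpi), np.diff(gdp)
X = sm.add_constant(np.column_stack([d_gdp, ect[:-1]]))
ecm = sm.OLS(d_hpi, X).fit()
print(ecm.params)   # coefficient on ect[:-1] is the speed of adjustment
```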
APA, Harvard, Vancouver, ISO, and other styles
43

Statham, Craig G. "An open CNC interface for intelligent control of grinding." Thesis, Liverpool John Moores University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Srinivasan, Srikant. "A Compact Model for the Coaxially Gated Schottky Barrier Carbon Nanotube Field Effect Transistor." University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1161897189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Vopatřilová, Lenka. "Podnik v regulovaném vodohospodářském odvětví." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-113498.

Full text
Abstract:
The thesis introduces business in the government-regulated water industry, specifically the area of water supply and sewerage. Its aim is to describe the business market in this field and to find out whether companies owned by private multinational corporations burden their customers with excessively high water prices, and whether there is any relationship between the price of water and the profitability of these enterprises. To achieve this aim, the companies are divided into five groups and compared. The groups are created on the basis of the shares held by private foreign corporations; a separate group is made up of companies owned by municipalities, that is, without the influence of private foreign corporations. First the one-component prices are compared, then the two-component ones. In conclusion, these prices are compared to the profitability of the enterprises.
APA, Harvard, Vancouver, ISO, and other styles
46

GUPTA, RASHI. "IMAGE FORGERY DETECTION USING CNN MODEL." Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19175.

Full text
Abstract:
Image forgery detection has become more relevant in recent years since it is so easy to alter an image and share it across social media, which can quickly spread fake news and rumours around the world. Editing software has posed a significant challenge to image forensics, prompting various methods and strategies for detecting image counterfeiting. There have been a variety of traditional approaches to forgery detection, but they rely on simple feature extraction and are specialized to the type of forgery. As research advances, multiple deep learning approaches are being applied to identify forgeries in images, and they have demonstrated exceptional results compared with traditional methods. The various types of image forgery are discussed in this work. The work presents and compares different applied and proven image forgery detection approaches, together with a comprehensive literature analysis of deep learning algorithms for detecting various types of image counterfeiting. A CNN network is also built based on a prior study, and its performance is compared on two different datasets. Furthermore, the impact of a data augmentation approach and of several hyperparameters on classification accuracy is assessed. Our findings imply that the dataset's difficulty has a significant influence on the outcomes. In this study, we also aim to detect image forgery using a deep learning approach: a CNN model is used together with ELA feature extraction to detect forged images. Two further CNN models, VGG16 and VGG19, are then used for better comparison and understanding.
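Error Level Analysis (ELA), the usual reading of the ELA extraction step named above, re-saves an image at a known JPEG quality and amplifies the difference from the original, so that regions with a different compression history stand out; a minimal Pillow sketch (the quality factor and file names are illustrative):

```python
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90):
    """Return an ELA image: amplified difference between the original and a re-saved JPEG."""
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    extrema = diff.getextrema()                    # per-channel (min, max)
    max_diff = max(ch[1] for ch in extrema) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

# ela = error_level_analysis("suspect.jpg")       # feed the result to the CNN
# ela.save("suspect_ela.png")
```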
APA, Harvard, Vancouver, ISO, and other styles
47

LIAO, PEN-MIN, and 廖本閔. "Streamflow Forecasting by CNN-GRU Model." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/8rs76r.

Full text
Abstract:
Master's thesis
Feng Chia University
Department of Water Resources Engineering and Conservation
107
During the last two decades, the application of artificial intelligence in the field of flood forecasting has increased noticeably. Since flood forecasts are a central input to disaster management and emergency response, and since Recurrent Neural Networks (RNN) are designed to model time-series behaviour, this study adopts the Gated Recurrent Unit (GRU), a type of RNN, to develop a rainfall-runoff model for this purpose. The applicability of the GRU is still being researched in many fields; this thesis therefore examines its application to flood forecasting. In order to improve the prediction accuracy of the GRU, the data are first processed by a Convolutional Neural Network (CNN) and then fed into the GRU for prediction; the combined model is called CNN-GRU. Most previous studies extracted individual rainfall events from the data before training artificial neural networks for flood flow prediction. This study uses a different approach, because the GRU cell can remember the state of past time steps. In addition, optimal hyperparameter settings for the networks are found by a genetic algorithm (GA) in modelling the hourly rainfall-runoff of the Dali River. The evaluation indicators show that CNN-GRU performs better than the GRU alone, because the CNN extracts features from the input data before the GRU is used for prediction.
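A minimal PyTorch sketch of the CNN-GRU idea, in which a 1-D convolution extracts features from the input sequence before a GRU produces the one-step-ahead runoff estimate; the layer sizes, window length and number of input stations are illustrative, not the thesis's configuration:

```python
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    """CNN feature extractor followed by a GRU.
    Input: (batch, time_steps, n_stations) rainfall/flow observations."""
    def __init__(self, n_stations=4, channels=16, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_stations, channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.gru = nn.GRU(input_size=channels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # one-step-ahead runoff

    def forward(self, x):                         # x: (batch, time, stations)
        feats = self.cnn(x.transpose(1, 2))       # (batch, channels, time)
        out, _ = self.gru(feats.transpose(1, 2))  # (batch, time, hidden)
        return self.head(out[:, -1]).squeeze(-1)

x = torch.randn(8, 24, 4)                         # 8 samples, 24 hourly steps, 4 stations
print(CNNGRU()(x).shape)                          # torch.Size([8])
```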
APA, Harvard, Vancouver, ISO, and other styles
48

Lin, Chun-Man, and 林君蔓. "Apparent Age Estimation Based on CNN Model." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/g9wspb.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
107
Age estimation has been one of the hot topics in computer vision. Identifying personal characteristics such as age, identity, gender, and ethnicity from images is an interesting but challenging problem. In recent years, age estimation has become an attractive research topic because it can be widely applied in everyday life. For example: (1) devices with age recognition can automatically filter age-restricted products, such as cigarettes and alcohol; (2) since the shopping habits and preferences of different age groups are very different, the automatic collection of age data can provide relevant information for market analysis, such as electronic customer relationship management (ECRM); (3) age is a biological feature that can assist primary biometrics in improving the accuracy of human recognition, verification or authentication applications. As deep learning has become widely used in computer vision, the accuracy of age estimation has also increased. Early CNN-based works used four to five layers; current works adopt deeper structures, which yield more accurate results. This thesis proposes an age estimation system based on a CNN deep learning architecture. The proposed system is modified from the DEX system, with VGG16 adopted as the learning core. We incorporate a multi-loss function that takes into account the softmax, mean and variance losses. In the experiments, we show the performance on the apparent-age database ChaLearn LAP (2015). Moreover, the proposed system is also tested on the real-age databases AFAD and MORPH II. Experimental results show that the system performs well on both the apparent-age and real-age databases.
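The multi-loss described above, combining softmax, mean and variance terms over a discrete age distribution, can be sketched as follows; the loss weights are illustrative and the exact formulation in the thesis may differ:

```python
import torch
import torch.nn.functional as F

def mean_variance_softmax_loss(logits, true_age, lam_m=0.2, lam_v=0.05):
    """Combined softmax + mean + variance loss for age estimation.
    logits: (batch, n_ages) scores over discrete age bins 0..n_ages-1."""
    n_ages = logits.size(1)
    probs = F.softmax(logits, dim=1)
    ages = torch.arange(n_ages, dtype=probs.dtype, device=probs.device)
    mean_age = (probs * ages).sum(dim=1)                       # expected age
    var_age = (probs * (ages - mean_age.unsqueeze(1)) ** 2).sum(dim=1)
    loss_softmax = F.cross_entropy(logits, true_age)
    loss_mean = ((mean_age - true_age.float()) ** 2).mean() / 2
    loss_var = var_age.mean()
    return loss_softmax + lam_m * loss_mean + lam_v * loss_var

logits = torch.randn(8, 101, requires_grad=True)   # ages 0..100
true_age = torch.randint(0, 101, (8,))
print(mean_variance_softmax_loss(logits, true_age))
```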
APA, Harvard, Vancouver, ISO, and other styles
49

SONI, ANKIT. "DETECTING DEEPFAKES USING HYBRID CNN-RNN MODEL." Thesis, 2022. http://dspace.dtu.ac.in:8080/jspui/handle/repository/19168.

Full text
Abstract:
We are living in a world of digital media and are connected to many kinds of digital content in the form of images and videos. Our lives are surrounded by digital content, so the authenticity of that content is very important. In recent times there has been a huge emergence of deep learning-based tools used to create believable manipulated media known as deepfakes. These are realistic fake media that can threaten reputation and privacy and can even prove a serious threat to public security. They can be used to create political distress, spread fake terrorism or blackmail anyone. With advancing technology, the tampered media being generated are so realistic that they can fool the human eye, so better detection algorithms are needed to identify deepfakes efficiently. The proposed system is based on a CNN followed by an RNN. The CNN model deployed here is SE-ResNeXt-101; it is used to extract feature vectors from the videos, and these feature vectors are then used to train the RNN, an LSTM model, to classify videos as real or deepfake. We evaluate our method on a dataset made by collecting a large number of videos from various distributed sources and demonstrate how a simple architecture can attain competitive results.
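A minimal sketch of the per-frame CNN followed by an LSTM classifier; a lightweight ResNet-18 backbone from torchvision stands in for the SE-ResNeXt-101 used in the thesis, and the hidden size and frame count are illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models

class FrameSequenceClassifier(nn.Module):
    """Per-frame CNN features fed to an LSTM for real/deepfake classification."""
    def __init__(self, hidden=128, num_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=None)                    # no pretrained weights
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc layer
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                     # clips: (batch, frames, 3, H, W)
        b, f = clips.shape[:2]
        feats = self.features(clips.flatten(0, 1))        # (b*f, 512, 1, 1)
        feats = feats.flatten(1).view(b, f, 512)          # (b, f, 512)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                      # logits per clip

clips = torch.randn(2, 8, 3, 224, 224)            # 2 clips of 8 frames each
print(FrameSequenceClassifier()(clips).shape)     # torch.Size([2, 2])
```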
APA, Harvard, Vancouver, ISO, and other styles
50

Huang, Ya-Bo, and 黃雅博. "Perceptual-Based CNN Model for Watercolor Mixing Prediction." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/nf4394.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
107
In this paper, we propose a model to predict the mixture of watercolor pigments using convolutional neural networks (CNN). With a watercolor dataset, we train our model to minimize a loss function based on sRGB differences. Measured by the color difference ∆ELab, our model achieves ∆ELab < 5 for 88.7% of the test set, which means the difference cannot easily be detected by the human eye. In addition, an interesting phenomenon is found: even if the reflectance curve of the predicted color is not as smooth as the ground-truth curve, the RGB color is still close to the ground truth.
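The ∆ELab figure quoted above, assuming the common CIE76 definition, is the Euclidean distance between two colors in CIELAB space; differences below roughly 2 to 5 are generally hard for the human eye to detect:

```latex
\Delta E_{Lab} \;=\; \sqrt{(L_2 - L_1)^2 + (a_2 - a_1)^2 + (b_2 - b_1)^2}
```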
APA, Harvard, Vancouver, ISO, and other styles
