
Dissertations / Theses on the topic 'Data Network Effects'


Consult the top 50 dissertations / theses for your research on the topic 'Data Network Effects.'


1

Örblom, Markus. "Effects of Network Performance on Smartphone User Behavior." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177547.

Abstract:
While the relation between smartphone user behavior and contextual factors has been explored in previous research, the mobile networks' influence on smartphone user behavior is largely unknown. Through statistical analysis of a data set collected globally from ~1000 users by an Android app called Ericsson Apps, this study investigates how the users' app choices and app usage depend on the network performance. The results show, for instance, that the choice of app depends strongly on the network performance, suggesting that it is a factor in the users' app choices. For example, Swedish users are ~3 times more likely to use Facebook on LTE than when disconnected, i.e., with no access to the mobile networks, while ~4.6 times more likely to make a phone call when disconnected than on LTE. Additionally, the data analysis finds a demand for better performance in the mobile networks, as the downlink data consumption grows linearly without decline, with respect to network performance, for media and video types of apps.
2

Sathyanarayana, Supreeth. "Characterizing the effects of device components on network traffic." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47640.

Abstract:
When a network packet is formed by a computer's protocol stack, many components of the computer (e.g., memory, CPU) are involved in the process. The objective of this research is to identify, characterize and analyze the effects of the various components of a device on the device's network traffic by measuring the changes in its network traffic as its components change. We also show how this characterization can be used to effectively detect counterfeit devices, i.e., devices with counterfeit components. To obtain this characterization, we measure and apply statistical analyses, such as probability distribution functions (PDFs), to the interarrival times (IATs) of the device's network packets (e.g., ICMP, UDP, TCP). The device is then modified by changing just one component at a time while holding the rest constant, and the IATs are acquired again. Over many such iterations, this provides an understanding of the effect of each component on the overall device IAT statistics. Such statistics are captured for devices of different types (e.g., field-programmable gate arrays (FPGAs) and personal computers (PCs)). Some of these statistics remain stable across different IAT captures for the same device and differ for different devices (completely different devices, or the same device with its components changed). Hence, these statistical variations can be used to detect changes in a device's composition, which lends itself well to counterfeit detection. Counterfeit devices are abundant in today's world and cause billions of dollars of loss in revenue: device components are substituted with inferior-quality components or replaced by lower-capacity components.
Armed with our understanding of the effects of various device components on the device's network traffic, we show how such substitutions or alterations of legitimate device components can be detected and hence perform effective counterfeit detection by statistically analyzing the deviation of the device's IATs from that of the original legitimate device. We perform such counterfeit detection experiments on various types of device configurations (e.g., PC with changed CPU, RAM, etc.) to prove the technique's efficacy. Since this technique is a fully network-based solution, it is also a non-destructive technique which can quickly, inexpensively and easily verify the device's legitimacy. This research also discusses the limitations of network-based counterfeit detection.
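The IAT-based fingerprinting idea in this abstract can be sketched in a few lines. This is an illustrative toy, not the thesis's pipeline: `iat_signature` reduces the IAT distribution to two moments (the thesis fits full PDFs), and the function names and the 50% relative tolerance are assumptions made here for the example.

```python
from statistics import mean, stdev

def interarrival_times(timestamps):
    """Packet interarrival times (IATs) from sorted arrival timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def iat_signature(timestamps):
    """Summarize the IAT distribution with simple moments; a real pipeline
    would characterize the full probability distribution function (PDF)."""
    iats = interarrival_times(timestamps)
    return {"mean": mean(iats), "stdev": stdev(iats)}

def deviates(signature, baseline, tolerance=0.5):
    """Flag a device as suspect if its IAT statistics drift from the
    legitimate-device baseline by more than a relative tolerance."""
    return any(
        abs(signature[k] - baseline[k]) > tolerance * abs(baseline[k])
        for k in baseline
    )
```

A device whose components were swapped would, per the thesis's claim, shift these statistics enough for `deviates` to fire against the legitimate baseline.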
3

Merchán, Dueñas Daniel Esteban. "Effects of road-network circuity on strategic decisions in urban logistics." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119911.

Abstract:
Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 114-120).
This thesis proposes a research framework that leverages high-resolution traffic and urban infrastructure data to improve analytical approximation methods used to inform strategic decisions in designing last-mile distribution systems. In particular, this thesis explores the effects of the road-network on the circuity of local trips, and introduces data-driven extensions to improve predictive performance of route distance approximation methods by increasing the resolution of the underlying urban road-network. Overall, these circuity-based extensions significantly increase the real-world validity of routing approximations compared to classical methods, and entail relevant implications in the configuration of logistics networks within urban markets. The framework presented in this thesis entails three inter-dependent levels of analysis: individual trip, consolidated route and last-mile network levels. In Chapter 2, we introduce a method to quantify and analyze the network circuity of local trips leveraging contemporary traffic datasets. Using the city of São Paulo as the primary illustrative example and a combination of supervised and unsupervised machine learning methods, significant heterogeneities in local network circuity are observed, explained by dimensional and topological properties of the road-network. Results from São Paulo are compared to seven additional large and medium-sized urban areas in Latin America and the United States. At a coarse-grained level of analysis, we observe similar correlations between road-network properties and local circuity across these cities. In Chapter 3, this thesis proposes a data-driven extension to continuum approximation-based methods used to predict urban route distances. This extension efficiently incorporates the circuity of the underlying road-network into the approximation method to improve distance predictions in more realistic settings.
The proposed extension significantly outperforms classic methods, which build on the assumption of travel according to the rectilinear distance metric within urban areas. By only marginally increasing the data collection effort, the proposed extension yields error reductions of 20-30% in mean absolute percentage error compared to classical approximation methods and comes within 10-20% of near-optimal solutions obtained with a local search heuristic. Further, by providing a real-world validation of classic continuum approximation-based methods, we explore how contemporary mapping technologies and novel sources of geo-spatial and traffic data can be efficiently leveraged to improve the predictive performance of these methods. Finally, building on the augmented route distance approximation, in Chapter 4 we explore the effect of road-network circuity on the design and planning of urban last-mile distribution systems. These improved routing approximations are used within an integer linear programming model to solve large-scale, real-world instances of the two-echelon capacitated location routing problem. Using the parcel delivery operation of Brazil's largest e-commerce platform in the city of São Paulo as the primary example to illustrate the impact and relevance of this work, we demonstrate how explicitly accounting for local variations in road-network circuity can yield relevant implications for fleet capacity planning, the location of urban distribution facilities, and the definition of facility-specific service areas. Results indicate that failing to account for local circuity would underestimate the necessary fleet size by 20% and would increase the total last-mile network cost by approximately 8%.
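The circuity notion at the heart of this thesis has a compact definition: the ratio of the distance actually travelled on the road network to the straight-line distance between trip origin and destination. A minimal sketch, where planar (x, y) coordinates and the function names are assumptions for illustration, not the author's code:

```python
import math

def network_distance(path):
    """Trip length along consecutive road-network nodes given as (x, y) points."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def circuity(path):
    """Circuity factor: network distance over straight-line
    origin-destination distance; >= 1 for any path on a planar network."""
    return network_distance(path) / math.dist(path[0], path[-1])
```

A perfectly straight trip has circuity 1.0; a rectilinear detour around a block pushes it toward sqrt(2), which is the kind of local variation the thesis feeds into its distance approximations.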
by Daniel Esteban Merchán Dueñas.
Ph. D. in Engineering Systems
4

Vuyyuru, Sisir. "Data Collection Network and Data Analysis for the Prototype Local Area Augmentation System Ground Facility." Ohio University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1195158113.

5

Raoufi-Danner, Torrin. "Effects of Missing Values on Neural Network Survival Time Prediction." Thesis, Linköpings universitet, Statistik och maskininlärning, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150339.

Abstract:
Data sets with missing values are a pervasive problem within medical research. Building lifetime prediction models based solely upon complete-case data can bias the results, so imputation is preferred over listwise deletion. In this thesis, artificial neural networks (ANNs) are used as a prediction model on simulated data with which to compare various imputation approaches. The construction and optimization of ANNs is discussed in detail, and some guidelines are presented for activation functions, number of hidden layers and other tunable parameters. For the simulated data, binary lifetime prediction at five years was examined. The ANNs here performed best with tanh activation, binary cross-entropy loss with softmax output and three hidden layers of between 15 and 25 nodes. The imputation methods examined are random, mean, missing forest, multivariate imputation by chained equations (MICE), pooled MICE with imputed target and pooled MICE with non-imputed target. Random and mean imputation performed poorly compared to the others and were used as a baseline comparison case. The other algorithms all performed well up to 50% missingness. There were no statistical differences between these methods below 30% missingness; however, missing forest had the best performance above this amount. It is therefore the recommendation of this thesis that the missing forest algorithm be used to impute missing data when constructing ANNs to predict breast cancer patient survival at the five-year mark.
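Of the imputation methods compared above, mean imputation is simple enough to sketch in full; it is shown here as the weak baseline the thesis compares against, not as its recommended method (missing forest). The function name and the use of `None` as the missing-value marker are assumptions for this example:

```python
from statistics import mean

def mean_impute(rows):
    """Column-wise mean imputation: replace each None with the mean of the
    observed values in that column (a baseline, not a recommended method)."""
    cols = list(zip(*rows))
    col_means = [mean(v for v in col if v is not None) for col in cols]
    return [
        [col_means[j] if v is None else v for j, v in enumerate(row)]
        for row in rows
    ]
```

Mean imputation ignores relationships between variables, which is exactly why methods like missing forest and MICE outperform it as missingness grows.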
6

Fadul, Waad. "Data-Driven Health Services: an Empirical Investigation on the Role of Artificial Intelligence and Data Network Effects in Value Creation." Thesis, Uppsala universitet, Informationssystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447507.

Abstract:
The purpose of this study is to produce new knowledge concerning the user's perceived value generated by machine learning technologies that activate data network effects, which create value through various business model themes. The data network effects theory describes a set of factors that increase the user's perceived value of a platform that uses artificial intelligence capabilities. The study followed an abductive research approach, in which initial findings were matched against the data network effects theory to be put in context and understood. The study's data were gathered through semi-structured interviews with experts active within the research area, chosen on the basis of their practical experience and their role in the digitization of the healthcare sector. The results show that three out of six factors were fully realized, contributing to value creation, while two factors were only partially realized; this is explained by the exclusion of users' perspectives from the scope of the research. Lastly, one factor made only a limited contribution to value creation, due to the heavy regulations limiting its realization in the health sector. It is concluded that the data network effects moderators contributed differently to the activation of various business model themes for value creation; further studies should apply the theory to the assessment of one specific AI health offering to take full advantage of its potential. Theoretically, the study shows that the data network factors are not necessarily equally activated in contributing to value creation, which was not initially highlighted by the theory. Practically, the results may help managers decide which factors to activate for which business model theme.
7

Dayton, Jonathan Bryan. "Adversarial Deep Neural Networks Effectively Remove Nonlinear Batch Effects from Gene-Expression Data." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7521.

Abstract:
Gene-expression profiling enables researchers to quantify transcription levels in cells, thus providing insight into functional mechanisms of diseases and other biological processes. However, because of the high dimensionality of these data and the sensitivity of measuring equipment, expression data often contains unwanted confounding effects that can skew analysis. For example, collecting data in multiple runs causes nontrivial differences in the data (known as batch effects), known covariates that are not of interest to the study may have strong effects, and there may be large systemic effects when integrating multiple expression datasets. Additionally, many of these confounding effects represent higher-order interactions that may not be removable using existing techniques that identify linear patterns. We created Confounded to remove these effects from expression data. Confounded is an adversarial variational autoencoder that removes confounding effects while minimizing the amount of change to the input data. We tested the model on artificially constructed data and commonly used gene expression datasets and compared against other common batch adjustment algorithms. We also applied the model to remove cancer-type-specific signal from a pan-cancer expression dataset. Our software is publicly available at https://github.com/jdayton3/Confounded.
8

Larsson, Marcus, and Christoffer Möckelind. "The effects of Deep Belief Network pre-training of a Multilayered perceptron under varied labeled data conditions." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187374.

Abstract:
Sometimes finding labeled data for machine learning tasks is difficult. This is a problem for purely supervised models like the Multilayered Perceptron (MLP). A Discriminative Deep Belief Network (DDBN) is a semi-supervised model that is able to use both labeled and unlabeled data. This research aimed to move towards a rule of thumb for when it is beneficial to use a DDBN instead of an MLP, given the proportions of labeled and unlabeled data. Several trials with different amounts of labels, from the MNIST and Rectangles-Images datasets, were conducted to compare the two models. It was found that for these datasets, the DDBNs had better accuracy when few labels were available. With 50% or more labels available, the DDBNs and MLPs had comparable accuracies. It is concluded that a rule of thumb of using a DDBN when less than 50% of labels are available for training would be in line with the results. However, more research is needed to draw any general conclusions.
9

Diaz, Boada Juan Sebastian. "Polypharmacy Side Effect Prediction with Graph Convolutional Neural Network based on Heterogeneous Structural and Biological Data." Thesis, KTH, Numerisk analys, NA, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288537.

Abstract:
The prediction of polypharmacy side effects is crucial to reduce the mortality and morbidity of patients suffering from complex diseases. However, its experimental prediction is unfeasible due to the many possible drug combinations, leaving in silico tools as the most promising way of addressing this problem. This thesis improves the performance and robustness of a state-of-the-art graph convolutional network designed to predict polypharmacy side effects, by feeding it with complexity properties of the drug-protein network. The modifications also involve the creation of a direct pipeline to reproduce the results and test it with different datasets.
10

McMorries, David W. "Investigation into the effects of voice and data convergence on a Marine Expeditionary Brigade TRI-TAC digital transmission network." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA379684.

Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, June 2000.
Thesis advisors, Osmundson, John S. ; Brady, Terrence C. "June 2000." Includes bibliographical references (p. 69). Also available in print.
11

Karaj, Enxhi. "An exploratory study on the mechanisms that allow value capture when a multi-sided platform activates data network effects." Thesis, Uppsala universitet, Informationssystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-448021.

12

Gruenhage, Gina. "Low dimensional visualization and modelling of data using distance-based models. Part I: Visualization of the effects of a changing distance on data using continuous MDS (cMDS); Part II: Inference of the latent space model for network data using expectation propagation." Doctoral thesis, supervised by Manfred Opper and Simon Barthelme, reviewed by Manfred Opper, Simon Barthelme, and Barbara Hammer. Berlin : Technische Universität Berlin, 2018. http://d-nb.info/1173786228/34.

13

Sridhar, Adarsh. "Minimum-energy transmission and effect of network architecture on downlink performance of wireless data networks." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2728.

Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
14

Rock, Daniel Ian. "Estimating peer effects in networked panel data." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105074.

Abstract:
Thesis: S.M. in Management Research, Massachusetts Institute of Technology, Sloan School of Management, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 20-22).
After product adoption, consumers make decisions about continued use. These choices can be influenced by peer decisions in networks, but identifying causal peer influence effects is challenging. Correlations in peer behavior may be driven by correlated effects, exogenous consumer and peer characteristics, or endogenous peer effects of behavior (Manski 1993). Extending the work of Bramoullé et al. (2009), we apply proofs of peer effect identification in networks under a set of exogeneity assumptions to the panel data case. With engagement data for Yahoo Go, a mobile application, we use the network topology of application users in an instrumental variables setup to estimate usage peer effects, comparing the performance of a variety of regression models. We find analyses of this type may be especially useful for ruling out endogenous peer effects as a driver of behavior. Omitted variables (especially ones related to network homophily) and violation of the exogeneity assumptions can bias regression coefficients toward finding statistically significant peer effects.
by Daniel Ian Rock.
S.M. in Management Research
15

Berglöf, Olle, and Adam Jacobs. "Effects of Transfer Learning on Data Augmentation with Generative Adversarial Networks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259485.

Abstract:
Data augmentation is a technique that acquires more training data by augmenting available samples, where the training data is used to fit model parameters. Data augmentation is utilized due to a shortage of training data in certain domains and to reduce overfitting. Augmenting a training dataset for image classification with a Generative Adversarial Network (GAN) has been shown to increase classification accuracy. This report investigates whether transfer learning within a GAN can further increase classification accuracy when the augmented training dataset is utilized. The method section describes the specific GAN architecture used for the experiments, which includes a label condition. When transfer learning is used within this GAN architecture, a statistical analysis shows a statistically significant increase in classification accuracy for a classification problem on the EMNIST dataset, which consists of images of handwritten alphanumeric characters. In the discussion section, the authors analyze the results and motivate other use cases for the proposed GAN architecture.
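Training a GAN is beyond a short sketch, but the underlying augmentation idea the report builds on (grow the training set with label-preserving variants of each sample) can be illustrated with simple random shifts. All names and the shift-based transform are assumptions for illustration; the report's augmenter is a label-conditioned GAN, not this:

```python
import random

def augment(image, n_copies=3, max_shift=1, seed=0):
    """Return the original image plus n_copies label-preserving variants,
    each a small random translation with zero padding at the borders."""
    rng = random.Random(seed)  # fixed seed keeps the sketch deterministic
    h, w = len(image), len(image[0])
    out = [image]
    for _ in range(n_copies):
        dy = rng.randint(-max_shift, max_shift)
        dx = rng.randint(-max_shift, max_shift)
        out.append([
            [image[y - dy][x - dx] if 0 <= y - dy < h and 0 <= x - dx < w else 0
             for x in range(w)]
            for y in range(h)
        ])
    return out
```

A GAN-based augmenter replaces the hand-written transform with samples drawn from a generator conditioned on the class label, which is where transfer learning of the generator can help.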
16

Inoue, Isao. "On the Effect of Training Data on Artificial Neural Network Models for Prediction." 名古屋大学大学院国際言語文化研究科, 2010. http://hdl.handle.net/2237/14090.

17

Gest, Johann. "Discrete fiber Raman amplifiers for agile all-photonic networks." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103199.

Abstract:
This thesis is dedicated to the study of gain transients of discrete fiber Raman amplifiers and to the all-optical gain-clamping technique which is used to mitigate those transients.
First, we study the standing-wave and the traveling-wave gain-clamping techniques when applied to a single discrete fiber Raman amplifier in the context of WDM channel add and drop. We take into account the operational regime of the amplifier and the location of the surviving channel in the amplification band. We demonstrate that the gain-clamped amplifier has to be operated in a regime below the critical regime to ensure that gain-clamping will be in effect. The efficiency of gain-clamping also depends on the feedback level of the lasing signal and on the implementation.
Next, we investigate the dynamic behaviour of a single discrete fiber Raman amplifier fed by multi-channel packet traffic. Our study shows that the efficiency of the gain-clamping technique to reduce the gain transients is dependent upon the operational regime of the amplifier and the packet duration. However, we also demonstrate that gain-clamping is not required to control the gain transients as the gain variations of the unclamped amplifier are small enough to be neglected.
We then theoretically analyse the dynamic response of cascades of discrete fiber Raman amplifiers subject to WDM channel add and drop. We consider cascades of mixed unclamped and gain-clamped amplifiers, varying the number and the position of the gain-clamped amplifiers in the cascade and taking into account the location of the surviving channel and the operational regime of the amplifiers. Our results show that the location of the gain-clamped amplifiers in a mixed cascade affects the transient characteristics and that it is possible to control the transients within tolerable limits.
Finally, we investigate the gain transients that occur in hybrid amplifiers in the presence of channel add and drop. We demonstrate that the gain-clamping technique can be used to mitigate the gain transients in hybrid amplifiers and that the surviving channel location does not influence the transient characteristics, contrary to the case of single and cascaded fiber Raman amplifiers.
18

Chikhi, Yacine. "Reducing the Hot Spot Effect in Wireless Sensor Networks with the Use of Mobile Data Sink." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/365.

Abstract:
The Hot Spot effect is an issue that reduces the network lifetime considerably. The network on the field forms a tree structure in which the sink represents the root and the furthest nodes in the perimeter represent the leaves. Each node collects information from the environment and transmits data packets to a "reachable" node towards the sink in a multi-hop fashion. The closest nodes to the sink not only transmit their own packets but also the packets that they receive from "lower" nodes and therefore exhaust their energy reserves and die faster than the rest of the network sensors. We propose a technique to allow the data sink to identify nodes severely suffering from the Hot Spot effect and to move beyond these nodes. We will explore the best trajectory that the data sink should follow. Performance results are presented to support our claim of superiority for our scheme.
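The hot-spot argument in this abstract (nodes near the sink relay the traffic of their entire subtree) is easy to quantify on the routing tree. A toy sketch assuming uniform one-packet-per-round traffic; the function and node names are illustrative, not from the thesis:

```python
def relay_load(parent):
    """Packets each sensor transmits per round in a data-gathering tree:
    one of its own plus one for every node in its subtree.  `parent` maps
    node -> parent; the sink has parent None and transmits nothing."""
    load = {n: 1 for n in parent if parent[n] is not None}  # own packet
    for n in list(load):
        hop = parent[n]
        while parent[hop] is not None:  # every non-sink ancestor relays n's packet
            load[hop] += 1
            hop = parent[hop]
    return load

def hottest(parent):
    """The node suffering most from the hot-spot effect."""
    load = relay_load(parent)
    return max(load, key=load.get)
```

On a chain sink <- a <- b <- c, node a transmits three packets per round while c transmits one, which is why the nodes adjacent to the sink die first and why moving the sink (as the thesis proposes) rebalances the load.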
19

Wu, Jindong. "Pooling strategies for graph convolution neural networks and their effect on classification." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288953.

Abstract:
With the development of graph neural networks, this novel neural network has been applied in a broader and broader range of fields. One of the thorny problems researchers face in this field is selecting suitable pooling methods for a specific research task from various existing pooling methods. In this work, based on the existing mainstream graph pooling methods, we develop a benchmark neural network framework that can be used to compare these different graph pooling methods. By using the framework, we compare four mainstream graph pooling methods and explore their characteristics. Furthermore, we expand two methods for explaining neural network decisions for convolution neural networks to graph neural networks and compare them with the existing GNNExplainer. We run experiments on standard graph classification tasks using the developed framework and discuss the different pooling methods’ distinctive characteristics. Furthermore, we verify the proposed extensions of the explanation methods’ correctness and measure the agreements among the produced explanations. Finally, we explore the characteristics of different methods for explaining neural network decisions and the insights of different pooling methods by applying these explanation methods.
APA, Harvard, Vancouver, ISO, and other styles
20

Chen, Ye. "Effect of packetized data on gain dynamics in erbium-doped fiber amplifiers fed by live local area network traffic." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0016/MQ55744.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Methawut, Elena. "The effect of computer mediated communication to communication patterns." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2644.

Full text
Abstract:
Computer-mediated communication (CMC) fundamentally influences the function of communication. It influences an organization's management and administration, but it most affects the dynamics of middle- and lower-level employees. The most simplistic model is that of an electronic office in which employees need to know and understand the role of CMC. The purpose of this study is to investigate the performance and satisfaction of co-workers who use CMC to communicate within their organization, and to examine employees' performance when using CMC.
APA, Harvard, Vancouver, ISO, and other styles
22

Dimitrova, Elena Stanimirova. "Polynomial Models for Systems Biology: Data Discretization and Term Order Effect on Dynamics." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/28490.

Full text
Abstract:
Systems biology aims at system-level understanding of biological systems, in particular cellular networks. The milestones of this understanding are knowledge of the structure of the system, understanding of its dynamics, effective control methods, and powerful prediction capability. The complexity of biological systems makes it inevitable to consider mathematical modeling in order to achieve these goals. The enormous accumulation of experimental data representing the activities of the living cell has triggered an increasing interest in the reverse engineering of biological networks from data. In particular, construction of discrete models for reverse engineering of biological networks is receiving attention, with the goal of providing a coarse-grained description of such networks. In this dissertation we consider the modeling framework of polynomial dynamical systems over finite fields constructed from experimental data. We present and propose solutions to two problems inherent in this modeling method: the necessity of appropriate discretization of the data and the selection of a particular polynomial model from the set of all models that fit the data. Data discretization, also known as binning, is a crucial issue for the construction of discrete models of biological networks. Experimental data, however, are usually continuous, or, at least, represented by computer floating point numbers. A major challenge in discretizing biological data, such as those collected through microarray experiments, is the typically small sample size. Many methods for discretization are not applicable due to the insufficient amount of data. The method proposed in this work is a first attempt to develop a discretization tool that takes into consideration the issues and limitations that are inherent in short data time courses.
Our focus is on the two characteristics that any discretization method should possess in order to be used for dynamic modeling: preservation of dynamics and information content and inhibition of noise. Given a set of data points, of particular importance in the construction of polynomial models for the reverse engineering of biological networks is the collection of all polynomials that vanish on this set of points, the so-called ideal of points. Polynomial ideals can be represented through a special finite generating set, known as Gröbner basis, that possesses some desirable properties. For a given ideal, however, the Gröbner basis may not be unique since its computation depends on the choice of leading terms for the multivariate polynomials in the ideal. The correspondence between data points and uniqueness of Gröbner bases is studied in this dissertation. More specifically, an algorithm is developed for finding all minimal sets of points that, added to the given set, have a corresponding ideal of points with a unique Gröbner basis. This question is of interest in itself but the main motivation for studying it was its relevance to the construction of polynomial dynamical systems. This research has been partially supported by NIH Grant Nr. RO1GM068947-01.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
23

Paulson, Jörgen. "The Effect of 5-anonymity on a classifier based on neural network that is applied to the adult dataset." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17918.

Full text
Abstract:
Privacy issues relating to data being made public are relevant with the introduction of the GDPR. To limit problems related to data becoming public, intentionally or via an event such as a security breach, anonymization of datasets can be employed. In this report, the impact of applying 5-anonymity to the adult dataset on a neural-network-based classifier predicting whether people had an income exceeding $50,000 was investigated using precision, recall and accuracy. The classifier was trained using the non-anonymized data, the anonymized data, and the non-anonymized data with those attributes which were suppressed in the anonymized data removed. The result was that average accuracy dropped from 0.82 to 0.76, precision dropped from 0.58 to 0.50, and recall increased from 0.82 to 0.87. The average values and distributions seem to support the estimation that the majority of the performance impact of anonymization in this case comes from the suppression of attributes.
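The k-anonymity property at the heart of this study can be checked with a short sketch: a dataset is k-anonymous when every combination of quasi-identifier values occurs at least k times. The records and quasi-identifier choice below are hypothetical illustrations, not the actual generalization of the adult dataset used in the thesis.

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears at least k times in the dataset."""
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(counts.values()) >= k

# Toy records loosely modeled on the adult dataset (hypothetical values).
rows = [
    {"age": "30-39", "sex": "M", "income": ">50K"},
    {"age": "30-39", "sex": "M", "income": "<=50K"},
    {"age": "30-39", "sex": "M", "income": "<=50K"},
    {"age": "40-49", "sex": "F", "income": ">50K"},
    {"age": "40-49", "sex": "F", "income": "<=50K"},
]

print(k_anonymity(rows, ["age", "sex"], 2))  # True: every group has >= 2 rows
print(k_anonymity(rows, ["age", "sex"], 3))  # False: the (40-49, F) group has only 2
```

Achieving k-anonymity in practice requires generalizing values (as with the binned ages above) or suppressing attributes entirely, which is the mechanism the thesis identifies as the main source of the classifier's performance loss.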
APA, Harvard, Vancouver, ISO, and other styles
24

McCullen, Jeffrey Reynolds. "Predicting the Effects of Sedative Infusion on Acute Traumatic Brain Injury Patients." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/105140.

Full text
Abstract:
Healthcare analytics has traditionally relied upon linear and logistic regression models to address clinical research questions mostly because they produce highly interpretable results [1, 2]. These results contain valuable statistics such as p-values, coefficients, and odds ratios that provide healthcare professionals with knowledge about the significance of each covariate and exposure for predicting the outcome of interest [1]. Thus, they are often favored over new deep learning models that are generally more accurate but less interpretable and scalable. However, the statistical power of linear and logistic regression is contingent upon satisfying modeling assumptions, which usually requires altering or transforming the data, thereby hindering interpretability. Thus, generalized additive models are useful for overcoming this limitation while still preserving interpretability and accuracy. The major research question in this work involves investigating whether particular sedative agents (fentanyl, propofol, versed, ativan, and precedex) are associated with different discharge dispositions for patients with acute traumatic brain injury (TBI). To address this, we compare the effectiveness of various models (traditional linear regression (LR), generalized additive models (GAMs), and deep learning) in providing guidance for sedative choice. We evaluated the performance of each model using metrics for accuracy, interpretability, scalability, and generalizability. Our results show that the new deep learning models were the most accurate while the traditional LR and GAM models maintained better interpretability and scalability. The GAMs provided enhanced interpretability through pairwise interaction heat maps and generalized well to other domains and class distributions since they do not require satisfying the modeling assumptions used in LR.
By evaluating the model results, we found that versed was associated with better discharge dispositions while ativan was associated with worse discharge dispositions. We also identified other significant covariates including age, the Northeast region, the Acute Physiology and Chronic Health Evaluation (APACHE) score, Glasgow Coma Scale (GCS), and ethanol level. The versatility of versed may account for its association with better discharge dispositions while ativan may have negative effects when used to facilitate intubation. Additionally, most of the significant covariates pertain to the clinical state of the patient (APACHE, GCS, etc.) whereas most non-significant covariates were demographic (gender, ethnicity, etc.). Though we found that deep learning slightly improved over LR and generalized additive models after fine-tuning the hyperparameters, the deep learning results were less interpretable and therefore not ideal for making the aforementioned clinical insights. However, deep learning may be preferable in cases with greater complexity and more data, particularly in situations where interpretability is not as critical. Further research is necessary to validate our findings, investigate alternative modeling approaches, and examine other outcomes and exposures of interest.
Master of Science
Patients with Traumatic Brain Injury (TBI) often require sedative agents to facilitate intubation and prevent further brain injury by reducing anxiety and decreasing level of consciousness. It is important for clinicians to choose the sedative that is most conducive to optimizing patient outcomes. Hence, the purpose of our research is to provide guidance to aid this decision. Additionally, we compare different modeling approaches to provide insights into their relative strengths and weaknesses. To achieve this goal, we investigated whether the exposure of particular sedatives (fentanyl, propofol, versed, ativan, and precedex) was associated with different hospital discharge locations for patients with TBI. From best to worst, these discharge locations are home, rehabilitation, nursing home, remains hospitalized, and death. Our results show that versed was associated with better discharge locations and ativan was associated with worse discharge locations. The fact that versed is often used for alternative purposes may account for its association with better discharge locations. Further research is necessary to further investigate this and the possible negative effects of using ativan to facilitate intubation. We also found that other variables that influence discharge disposition are age, the Northeast region, and other variables pertaining to the clinical state of the patient (severity of illness metrics, etc.). By comparing the different modeling approaches, we found that the new deep learning methods were difficult to interpret but provided a slight improvement in performance after optimization. Traditional methods such as linear regression allowed us to interpret the model output and make the aforementioned clinical insights. However, generalized additive models (GAMs) are often more practical because they can better accommodate other class distributions and domains.
APA, Harvard, Vancouver, ISO, and other styles
25

Wise, Barbara. "THE EFFECT OF CLOSURE ON THE RELATIONSHIP BETWEEN ADHD SYMPTOMS AND SMOKING INITIATION: A MODERATION MODEL USING ADD HEALTH DATA." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1448998688.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Caeyers, Bet Helena. "Social networks, community-based development and empirical methodologies." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:61dbdd9e-9341-4959-a6ca-15547720df3c.

Full text
Abstract:
This thesis consists of two parts: Part I (Chapters 2 and 3) critically assesses a set of methodological tools that are widely used in the literature and that are applied to the empirical analysis in Part II (Chapters 4 and 5). Using a randomised experiment, the first chapter compares pen-and-paper interviewing (PAPI) with computer-assisted personal interviewing (CAPI). We observe a large error count in PAPI, which is likely to introduce sample bias. We examine the effect of PAPI consumption measurement error on poverty analysis and compare both applications in terms of interview length, costs and respondents’ perceptions. Next, we formalise an unproven source of ordinary least squares estimation bias in standard linear-in-means peer effects models. Deriving a formula for the magnitude of the bias, we discuss its underlying parameters. We show when the bias is aggravated in models adding cluster fixed effects and how it affects inference and interpretation of estimation results. We reveal that two-stage least squares (2SLS) estimation strategies eliminate the bias and provide illustrative simulations. The results may explain some counter-intuitive findings in the social interaction literature. We then use the linear-in-means model to estimate endogenous peer effects on the awareness of a community-based development programme of vulnerable groups in rural Tanzania. We denote the geographically nearest neighbours set as the relevant peer group in this context and employ a popular 2SLS estimation strategy on a unique spatial household dataset, collected using CAPI, to identify significant average and heterogeneous endogenous peer effects. The final chapter investigates social network effects in decentralised food aid (free food and food for work) allocation processes in Ethiopia, in the aftermath of a serious drought. We find that food aid is responsive to need, as well as being targeted at households with less access to informal support. 
However, we also find strong correlations with political connections, especially in the immediate aftermath of the drought.
APA, Harvard, Vancouver, ISO, and other styles
27

Aran, Meltem A. "Measuring treatment effects in poverty alleviation programs : three essays using data from Turkish household surveys." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:98fada59-d38d-4179-b151-c17196c86acf.

Full text
Abstract:
The dissertation is a compilation of three essays on Turkey's poverty alleviation programs. The first paper focuses on the welfare impact of the global financial crisis on Turkish households. The second paper considers the protective impact of the Green Card non-contributory health insurance program in Turkey during the crisis in 2008-2009. The third paper uses experimental data from the field in eastern Turkey to look at patterns of agricultural technology diffusion in a rural development program implemented in a post-conflict setting.
APA, Harvard, Vancouver, ISO, and other styles
28

Bresso, Emmanuel. "Organisation et exploitation des connaissances sur les réseaux d'intéractions biomoléculaires pour l'étude de l'étiologie des maladies génétiques et la caractérisation des effets secondaires de principes actifs." Thesis, Université de Lorraine, 2013. http://www.theses.fr/2013LORR0122/document.

Full text
Abstract:
The understanding of human diseases and drug mechanisms today requires taking molecular interaction networks into account. Recent studies on biological systems are producing increasing amounts of data. However, the complexity and heterogeneity of these datasets make it difficult to exploit them for understanding atypical phenotypes or drug side-effects. This thesis presents two knowledge-based integrative approaches that combine data management, graph visualization and data mining techniques in order to improve our understanding of phenotypes associated with genetic diseases or drug side-effects. Data management relies on a generic data warehouse, NetworkDB, that integrates data on proteins and their properties. Customization of the NetworkDB model and regular updates are semi-automatic. Graph visualization techniques have been coupled with NetworkDB. This approach has facilitated access to biological network data in order to study genetic disease etiology, including X-linked intellectual disability (XLID). Meaningful sub-networks of genes have thus been identified and characterized. Drug side-effect profiles have been extracted from NetworkDB and subsequently characterized by a relational learning procedure coupled with NetworkDB. The resulting rules indicate which properties of drugs and their targets (including networks) preferentially associate with a particular side-effect profile.
APA, Harvard, Vancouver, ISO, and other styles
29

Muthukumar, Subrahmanyam. "The application of advanced inventory techniques in urban inventory data development to earthquake risk modeling and mitigation in mid-America." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26662.

Full text
Abstract:
Thesis (Ph.D)--City Planning, Georgia Institute of Technology, 2009.
Committee Chair: French, Steven P.; Committee Member: Drummond, William; Committee Member: Goodno, Barry; Committee Member: McCarthy, Patrick; Committee Member: Yang, Jiawen. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
30

Zhen, Zuguang. "The effect of mobile cellular network performance and contextual factors on smartphone users’ satisfaction : A study on QoE evaluation for YouTube video streaming via CrowdSourcing." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177566.

Full text
Abstract:
Mobile data traffic will continue to grow rapidly in the coming years; however, data revenue is not rising fast enough to ensure operators’ profitability. Mobile operators must therefore seek new approaches to find out what services customers need and what quality keeps them satisfied, in order to retain their increasingly sophisticated customers while minimizing the revenue gap. This paper investigates the effect of mobile cellular network performance and contextual factors on smartphone users’ satisfaction. This was done via crowdsourcing through an experiment comprising an Android application and a user survey, which together evaluate and analyze the perceived quality of experience (QoE) of the YouTube service for Android smartphone users. To achieve this goal, the app NPT performs measurements of objective quality of service (QoS) parameters, whereas the survey collects subjective user opinions. The results show that network performance parameters do impact the MOS (Mean Opinion Score) exponentially, either positively or negatively; however, multiple parameters need to be considered together in order to draw a more accurate correlation with QoE. In addition, QoE is heavily affected by many other contextual factors, such as age, gender and user location, as well as by subjective factors such as user expectation. The highest throughput does not always lead to the best QoE, and the best technology (LTE) does not always receive the best MOS. Even when users received very high downlink throughput, their MOS values could still be low because they found the video not fun to watch and felt its quality did not meet their expectations.
APA, Harvard, Vancouver, ISO, and other styles
31

Kolanowski, Mikael, and David Stevens. "A Comparative Study of the Effect of Features on Neural Networks within Computer-Aided Diagnosis of Alzheimer's Disease." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-255260.

Full text
Abstract:
Alzheimer’s disease is a neurodegenerative disease that affects approximately 6% of the global population aged over 65 and is forecast to become even more prevalent in the future. Accurately diagnosing the disease at an early stage can play a large role in improving the patient’s quality of life. One key development for performing this diagnosis is applying machine learning for computer-aided diagnosis. Current research in the field has focused on removing assumptions about the data sets used, but in doing so has often discarded objective metadata such as the patient’s age, sex or prior medical history. This study aimed to investigate the effect of including such metadata as additional input features to neural networks used for diagnosing Alzheimer’s disease through binary classification of magnetic resonance imaging scans. Two similar neural networks were developed and compared, one with these additional features and the other without them. Including the metadata led to significant improvements in the network’s classification accuracy, and should therefore be considered in future computer-aided diagnostic systems for Alzheimer’s disease.
APA, Harvard, Vancouver, ISO, and other styles
32

Fisk, Nathan W. "Social learning theory as a model for illegitimate peer-to-peer use and the effects of implementing a legal music downloading service on peer-to-peer music piracy /." Online version of thesis, 2006. https://ritdml.rit.edu/dspace/handle/1850/2737.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Forslind, Patrik, and Lucia Edwards. "Trading with Artificial Neural Networks on Large-, Mid- and Small-Cap Stocks : Exploring if Market Cap has an effect on portfolio performance when trading with Artificial Neural Networks trained on historical stock data." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209782.

Full text
Abstract:
In this report, one-day-ahead stock prediction using artificial neural networks (ANNs) is studied on stocks belonging to different market caps. Hennes & Mauritz, EnQuest PLC and Rottneros have been selected, representing large-, mid- and small-cap companies. This report aims to investigate whether a company's market cap affects the ability to predict stock prices when ANNs are trained on historical stock data. The study was carried out using feedforward ANNs trained with the Levenberg-Marquardt backpropagation algorithm. The results show that the large-cap company H&M was easier to predict than the mid- and small-cap companies. Although the results from this study indicate that a company's market cap affects the ability to predict stock prices using ANNs, a deeper, more extensive investigation has to be carried out in order to draw any real conclusions.
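The one-day-ahead setup described above can be sketched as a data-preparation step: windows of past prices become the input features and the next day's price becomes the target. The toy price series and window size below are illustrative, not the study's actual configuration.

```python
import numpy as np

def make_lagged(prices, window):
    """Build (features, target) pairs: `window` past prices -> next-day price."""
    X, y = [], []
    for t in range(window, len(prices)):
        X.append(prices[t - window:t])  # the `window` most recent prices
        y.append(prices[t])             # the price one day ahead
    return np.array(X), np.array(y)

# Hypothetical daily closing prices.
prices = [10.0, 11.0, 12.0, 11.5, 12.5]
X, y = make_lagged(prices, window=2)

print(X)  # [[10.  11. ] [11.  12. ] [12.  11.5]]
print(y)  # [12.  11.5 12.5]
```

A feedforward network (trained with, e.g., Levenberg-Marquardt backpropagation as in the study) would then be fit on `X` and `y`, one model per stock.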
APA, Harvard, Vancouver, ISO, and other styles
34

Hassani, Mujtaba. "CONSTRUCTION EQUIPMENT FUEL CONSUMPTION DURING IDLING : Characterization using multivariate data analysis at Volvo CE." Thesis, Mälardalens högskola, Akademin för ekonomi, samhälle och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-49007.

Full text
Abstract:
Human activities have increased the concentration of CO2 in the atmosphere, causing global warming. Construction equipment are semi-stationary machines that spend at least 30% of their lifetime idling. The majority of construction equipment is diesel powered and emits toxic emissions into the environment. In this work, idling is investigated by adopting several statistical regression models to quantify the fuel consumption of construction equipment during idling. The regression models studied in this work are: Multivariate Linear Regression (ML-R), Support Vector Machine Regression (SVM-R), Gaussian Process Regression (GP-R), Artificial Neural Network (ANN), Partial Least Squares Regression (PLS-R) and Principal Components Regression (PC-R). Findings show that pre-processing has a significant impact on the goodness of prediction in exploratory data analysis in this field. Through mean centering and application of the max-min scaling feature, the accuracy of the models increased remarkably. ANN and GP-R had the highest accuracy (99%), PLS-R was the third most accurate model (98% accuracy), ML-R was the fourth-best model (97% accuracy), SVM-R was the fifth-best (73% accuracy) and the lowest accuracy was recorded for PC-R (83% accuracy). The second part of this project estimated CO2 emissions based on the fuel used, adopting the NONROAD2008 model.
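The two pre-processing steps the abstract credits with improving accuracy, mean centering and max-min scaling, can be sketched with toy values (hypothetical numbers; the study applied these transformations to Volvo CE equipment data):

```python
import numpy as np

def min_max_scale(X):
    """Scale each feature column to [0, 1] (max-min scaling)."""
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

def mean_center(X):
    """Subtract each feature column's mean so it is centered at zero."""
    return X - X.mean(axis=0)

# Two hypothetical features on very different scales,
# e.g. engine speed and ambient temperature readings.
X = np.array([[2.0, 10.0],
              [4.0, 30.0],
              [6.0, 50.0]])

print(min_max_scale(X))
# [[0.  0. ]
#  [0.5 0.5]
#  [1.  1. ]]
print(mean_center(X))
# [[ -2. -20.]
#  [  0.   0.]
#  [  2.  20.]]
```

Putting features on comparable scales in this way is what lets scale-sensitive models such as SVM-R, ANN and PLS-R weight them fairly, which is consistent with the accuracy gains the study reports.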
APA, Harvard, Vancouver, ISO, and other styles
35

Xiong, Xiaolu. "Theory and Practice: Improving Retention Performance through Student Modeling and System Building." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/139.

Full text
Abstract:
The goal of Intelligent Tutoring Systems (ITSs) is to engage students in sustained reasoning activity and to interact with them based on a deep understanding of student behavior. To understand student behavior, ITSs rely on student modeling methods that observe student actions in the tutor and create a quantitative representation of student knowledge, interests, and affective states. Good student models effectively help ITSs customize instruction, engage students' interest and thereby promote learning. Thus, the work of building ITSs and advancing student modeling should be considered as two interconnected components of one system rather than two separate topics. In this work, we utilized the theoretical support of a well-known learning science theory, the spacing effect, to guide the development of an ITS called the Automatic Reassessment and Relearning System (ARRS). ARRS not only validated the effectiveness of the spacing effect, but also served as a testing field that allowed us to find new approaches to improve student learning by conducting large-scale randomized controlled trials (RCTs). The rich data set we gathered from ARRS has advanced our understanding of robust learning and helped us build student models with advanced data mining methods. Finally, we designed a set of APIs that supports the development of ARRS in the next-generation ASSISTments platform and adopted deep learning algorithms to further improve retention performance prediction. We believe our work is a successful example of combining theory and practice to advance science and address real-world problems.
APA, Harvard, Vancouver, ISO, and other styles
36

Skepetzis, Vasilios, and Pontus Hedman. "The Effect of Beautification Filters on Image Recognition : "Are filtered social media images viable Open Source Intelligence?"." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-44799.

Full text
Abstract:
In light of the emergence of social media and its abundance of facial imagery, facial recognition is useful from an Open Source Intelligence standpoint. Images uploaded on social media are likely to be filtered, which can destroy or modify biometric features. This study looks at the effort of identifying individuals from their facial images after filters have been applied. The social media image filters studied occlude parts of the nose and eyes, with a particular interest in filters occluding the eye region. Our proposed method uses a residual neural network model to extract features from images, with recognition of individuals based on distance measures over the extracted features. Individuals are further classified using a linear Support Vector Machine and an XGBoost classifier. To increase recognition performance for images with the eye region completely occluded, we present a method to reconstruct this information using a variation of a U-Net; from the classification perspective, we also train the classifier on filtered images. Our experimental results showed good recognition of individuals when filters did not occlude important landmarks, especially around the eye region. Our proposed solution can mitigate the occlusion introduced by filters through either reconstruction or training on manipulated images, in some cases increasing the classifier's accuracy by approximately 17 percentage points with reconstruction alone, 16 percentage points when the classifier was trained on filtered data, and 24 percentage points when both were used together. When training on filtered images, we observe an average performance increase of 9.7 percentage points across all datasets.
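The distance-based recognition step described above can be sketched as nearest-neighbor matching over embedding vectors. The tiny 3-D embeddings and names below are hypothetical placeholders; the study extracts much higher-dimensional features with a residual network.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two embedding vectors (0 = identical direction)."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def identify(probe, gallery):
    """Return the gallery identity whose embedding is closest to the probe."""
    return min(gallery, key=lambda name: cosine_distance(probe, gallery[name]))

# Hypothetical enrollment gallery: one embedding per known identity.
gallery = {
    "alice": np.array([1.0, 0.0, 0.0]),
    "bob":   np.array([0.0, 1.0, 0.0]),
}

# A probe embedding, e.g. extracted from a filtered image of the first identity.
probe = np.array([0.9, 0.1, 0.0])

print(identify(probe, gallery))  # alice
```

Filters that occlude landmarks perturb the probe embedding and push it away from the correct gallery entry; reconstruction and training on filtered images, as in the study, both aim to pull it back.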
APA, Harvard, Vancouver, ISO, and other styles
37

Newlon, Christine Mae. "The effect of shared dynamic understanding on willingness to contribute information| Design and analysis of a mega-collaborative interface." Thesis, Indiana University - Purdue University Indianapolis, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10159859.

Full text
Abstract:

Collaborative helping via social networking conversation threads can pose serious challenges in emergency situations. Interfaces that support complex group interaction and sense-making can help. This research applies human-computer interaction (HCI), computer-supported cooperative work (CSCW), and collaboration engineering in developing an interactive design, the Mega-Collaboration Tool (MCT). The goal is to reduce the cognitive load of a group’s growing mental model, thus increasing the general public’s ability to organize spontaneous collaborative helping.

The specific aims of this research include understanding the dynamics of mental model negotiation and determining whether MCT can assist the group’s sense-making ability without increasing net cognitive load.

The proposed HCI theory is that interfaces supporting collaborative cognition motivate contribution and reduce information bias, thus increasing the information shared. These research questions are addressed: 1. Does MCT support better collaborative cognition? 2. Does increasing the size of the shared data repository increase the amount of information shared? 3. Does this happen because group members experience 1) a greater sense of strategic commitment to the knowledge structure, 2) increased intrinsic motivation to contribute, and 3) reduced resistance to sharing information?

These questions were affirmed to varying degrees, giving insight into the collaborative process. Greater content did not motivate group members directly; instead, half of their motivation came from awareness of their contribution’s relevance. Greater content and organization improved this awareness, and also encouraged sharing through increased enthusiasm and reduced bias. Increased commitment was a result of this process, rather than a cause. Also, MCT increased collaborative cognition but was significantly hampered by Internet performance. This challenge indicates MCT’s system components should be redesigned to allow asynchronous interaction. These results should contribute to the development of MCT, other collaboration engineering applications, and HCI and information science theory.

APA, Harvard, Vancouver, ISO, and other styles
38

Marcinkowska, Anna. "Exploratory study of market entry strategies for digital payment platforms." Thesis, Linköpings universitet, Institutionen för ekonomisk och industriell utveckling, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-147994.

Full text
Abstract:
The digital payment industry has become one of the fastest evolving markets in the world, but in the wake of its rapid advancement, an ever-increasing gap between academic theory and the actual reality of this market widens - especially when it comes to entry theory. It is widely acknowledged that the world is moving towards an ever more homogeneous economy, but despite the fact that payment preferences differ greatly from country to country, research on this subject continues to revolve mainly around localized efforts. As historical inequalities between poor and rich societies continue to dissipate, learning from nations at the forefront of technological advancement increases the likelihood that the developed strategy becomes applicable to a larger number of countries. By selecting the nation most conducive to technological growth, the purpose of this report is to map the present dynamics of its digital payment industry using both recent and traditional market entry theory. However, studies geared towards globalized strategy formulation cannot be assumed to have guaranteed access to internal company data at all times. To facilitate such studies, the level of dependency on primary data required for such research needs to be understood first, which is why the work in this report is constrained strictly to data of a secondary nature - not only to further map the characteristics of this market, but also to see how open the market is to public inspection. Ultimately, the academic contribution is a road-map towards adapting currently available market entry theory to the rapidly evolving conditions of the digital payment industry from a global perspective and, where that falls short, to explore avenues for further research towards this end goal.
APA, Harvard, Vancouver, ISO, and other styles
39

Bělohlávek, Jiří. "Agent pro kurzové sázení." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235980.

Full text
Abstract:
This master thesis deals with the design and implementation of a betting agent. It covers the theoretical background of online betting, probability, and statistics. The first part focuses on data mining and explains the principle of mining knowledge from data warehouses, along with methods suitable for different types of tasks. The second part is concerned with neural networks and the back-propagation algorithm. All findings are demonstrated and supported by graphs and histograms of data analysis produced with the SAS Enterprise Miner program. In conclusion, the thesis summarizes the results and offers specific ways to extend the agent.
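The back-propagation algorithm the abstract refers to can be illustrated with a minimal network trained on XOR — a generic sketch, not the thesis's agent; the layer sizes, learning rate, and task are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# XOR: a tiny task commonly used to demonstrate back-propagation.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 sigmoid units, with biases.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

for _ in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out).ravel())  # approaches the XOR targets 0, 1, 1, 0
```

The same error-propagation loop, scaled up and fed with features mined from historical odds data, is the mechanism behind the neural network component described in the thesis.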
APA, Harvard, Vancouver, ISO, and other styles
40

Van, Wyk Byron Jay. "E-trust: a building block for developing valuable online platforms in Higher Education." Thesis, Cape Peninsula University of Technology, 2013. http://hdl.handle.net/20.500.11838/1852.

Full text
Abstract:
Thesis submitted in fulfilment of the requirements for the degree Master of Technology Design in the Faculty of Informatics and Design at the Cape Peninsula University of Technology Supervisor: Prof J Messeter Cape Town, 2013
The aim of this research project was to provide an answer to the question: “How can an understanding of online trust be used to build valuable online applications in Higher Education?” In order to present an answer to this question, a literature survey was conducted to establish:
• An understanding of the phenomenon of online trust
• The factors that influence a loss of trust in the online environment
The literature survey highlighted several factors that influence a loss of trust in the online environment, called trust cues. These factors, however, were often tested within the E-commerce environment, and not in organization-specific contexts, such as online platforms in use in Higher Education. In order to determine whether or not these factors would influence the development of trust in context-specific environments, the author of this research grouped the identified trust factors into three focus areas, i.e. content, ease of use, and navigation. These factors were then incorporated into a series of nine different prototypes. These prototypes were different versions of a particular online platform currently in use at the Cape Peninsula University of Technology (CPUT). The prototypes were tested over a three-week period, with certain staff members at the institution recruited as test participants. During each week of user observations, a different focus area was targeted, in order to establish the impact it would have on the perceived trustworthiness of the platform. User observations were conducted while test participants completed a standard process using the various prototypes. Semi-structured interviews were also conducted while participants completed the specific process. Participants were asked to evaluate each screen in the process according to its perceived trustworthiness, by assigning a trust level score.
At the completion of the three rounds of user observations, in-depth interviews were conducted with test participants. The participants’ trust level scores for each prototype were captured and graphed, and a detailed description of the score given for a particular screen was presented on each graph. These scores were combined to provide an analysis of the focus area tested during the specific round. After the three rounds of user observations were completed, an analysis of all the trust factors tested was done. Data captured during interviews were transcribed, combined with feedback received from questionnaires, and analysed. An interpretation of the results showed that not all trust factors had a similar influence on the development of trust in the online platform under investigation. Trust cues such as content organization, clear instructions and useful content were by far the most significant trust factors, while others such as good visual design elements, professional images of products, and freedom from grammatical and typographical errors had little or no impact on the overall trustworthiness of the platform under investigation. From the analysis it was clear that the development of trust in organization-specific contexts differs significantly from developing trust in an E-commerce environment, and that factors that influence the development of trust in one context might not always be significant in another. In conclusion, it is recommended that when software applications are developed in organization-specific contexts, such as Higher Education, trust factors such as good content organization, clear instructions and useful content be considered the most salient. Organization-specific contexts differ quite significantly in that the users of these systems often convey a certain degree of trust toward the online platforms that they work with on a daily basis.
Trust factors that are geared toward developing an initial or basic trust in a particular platform, which is often the case with first time users engaging in an E-commerce platform, would therefore not be as significant in the development of a more developed level of trust, which is what is needed within the development of organization-specific online platforms.
APA, Harvard, Vancouver, ISO, and other styles
41

Tröger, Ralph. "Supply Chain Event Management – Bedarf, Systemarchitektur und Nutzen aus Perspektive fokaler Unternehmen der Modeindustrie." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-155014.

Full text
Abstract:
Supply Chain Event Management (SCEM) denotes a sub-discipline of supply chain management and offers companies a starting point for optimizing logistics performance and costs by reacting early to critical exceptional events in the value chain. Owing to conditions such as global logistics structures, high article variety, and volatile business relationships, the fashion industry is among the sectors particularly vulnerable to critical disruptive events. After outlining the essential foundations, this dissertation therefore first examines to what extent there actually is a demand for SCEM systems in the fashion industry. Building on this, and after presenting existing SCEM architecture concepts, it identifies design options for a system architecture based on the design principles of service orientation; within this framework, SCEM-relevant business services are also identified. The advantages of a service-oriented design are illustrated in detail using the EPCIS (EPC Information Services) specification. The work is rounded off by a consideration of the benefit potential of SCEM systems. After presenting approaches suitable for determining such benefits, the benefits are demonstrated using a practical example and, together with the results of a literature review, consolidated into an overview of SCEM benefit effects; this also examines which additional advantages a service-oriented architecture design offers companies. The conclusion summarizes the key findings and provides an outlook on both the relevance of the results for mastering future challenges and the starting points for subsequent research.
APA, Harvard, Vancouver, ISO, and other styles
42

Hallqvist, Karl. "Högtempererat borrhålslager för fjärrvärme." Thesis, Uppsala universitet, Naturresurser och hållbar utveckling, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-231586.

Full text
Abstract:
The district heating load is seasonally dependent, with a low load during periods of high ambient temperature. Thermal energy storage (TES) has the potential to shift heating loads from winter to summer, thus reducing cost and environmental impact of District Heat production. In this study, a concept of high temperature borehole thermal energy storage (HT-BTES) together with a pellet heating plant for temperature boost, is presented and evaluated by its technical limitations, its ability to supply heat, its function within the district heating system, as well as its environmental impact and economic viability in Gothenburg, Sweden, a city with access to high quantities of waste heat. The concept has proven potentially environmentally friendly and potentially profitable if its design is balanced to achieve a good enough supply temperature from the HT-BTES. The size of the heat storage, the distance between boreholes and low borehole thermal resistance are key parameters to achieve high temperature. Profitability increases if a location with lower temperature demand, as well as risk of future shortage of supply, can be met. Feasibility also increases if existing pellet heating plant and district heating connection can be used and if lower rate of return on investment can be accepted. Access to HT-BTES in the district heating network enables greater flexibility and availability of production of District Heating, thereby facilitating readjustments to different strategies and policies. However, concerns for the durability of feasible borehole heat exchangers (BHE) exist in high temperature application.
The heating demand is strongly seasonal, with low load during periods of higher ambient temperature and high load during colder periods. In Gothenburg, large quantities of waste heat are available for district heat production in summer, when the heating demand is low. Access to seasonal thermal storage makes it possible to shift district heat production from the winter half-year to the summer half-year, which can yield both profitability and environmental benefits. Borehole thermal energy storage is a comparatively inexpensive way to store heat: the bedrock is heated during the summer by circulating hot water through boreholes, and the heat is recovered during the winter by letting cold water flow through the boreholes and warm up. Traditional borehole storage often uses a heat pump to raise the discharge temperature, but because of the high temperature requirements of district heating, the cost of a heat pump can be high. This report proposes a system that reaches high temperatures at lower cost: a borehole storage adapted for higher temperatures (HT-BTES) combined with pellet boilers that boost the store's outgoing fluid to the required temperature. The purpose of the report is to investigate the potential of this HT-BTES system with respect to its technical limitations, its ability to deliver district heat, its consequences for the district heating system, and its profitability and environmental impact. To guarantee that charging the store does not substantially raise the price of stored heat, charging is based on the amount of heat currently cooled away in the district heating network during summer; in reality, considerably more heat is available at low cost. When the HT-BTES system produces district heat, it displaces production from other units, provided its variable costs are lower. In Gothenburg, mainly natural gas from combined heat and power is displaced, but also some wood chips. The cost saving is the difference in total district heat production cost with and without the HT-BTES system.
The investigation shows that the saving is larger if the HT-BTES system is placed in an area where district heat can be fed out at a lower temperature. The saving also increases if the store can be discharged at a high temperature, which occurs when the storage volume increases, the borehole spacing decreases, or the heat transfer between the circulating water and the bedrock improves. These storage properties also reduce carbon dioxide emissions. The size of the saving, however, depends strongly on future fuel price developments. Strategic advantages of the HT-BTES system include reduced environmental impact, a robust system with a long technical lifetime (for parts of the system), and the ability to charge the store from many different production units; positive side effects can also be identified. The investigation shows that the HT-BTES system has good potential for profitability and reduced environmental impact, and that construction and operation of the store can proceed without extensive local environmental effects. The geological conditions for HT-BTES are also favorable in many parts of Gothenburg, although local conditions may differ. Achieving profitability requires balancing the design of the store so that it reaches a high discharge temperature without excessive investment cost. The investigation shows that if the system can be connected to an existing connection point or an existing heating plant, the investment cost falls and profitability improves. Placing the HT-BTES system in areas at risk of transmission constraints can also reduce the need to reinforce the district heating network, and thus the costs such reinforcement entails. Other significant parameters for profitability include the cost of stored heat and the rate of return that can be accepted.
Access to HT-BTES enables a higher utilization rate and greater flexibility for district heat production units, and thereby better adaptability to changing conditions on the heat market. It remains to be shown, however, that components meeting the high temperature requirements can be manufactured at an acceptable cost.
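The storage principle behind HT-BTES can be illustrated with a back-of-the-envelope sensible-heat balance for a heated rock volume. All numbers below are hypothetical and the rock properties are rough textbook values for granite; they are not figures from the thesis.

```python
# Sensible heat stored in a heated rock volume: Q = V * rho * c * dT.
volume_m3 = 500_000          # hypothetical storage volume of heated bedrock
rho = 2700                   # kg/m^3, approximate density of granite
c = 790                      # J/(kg*K), approximate specific heat of granite
dT = 25                      # K, assumed seasonal temperature swing of the store
recovery = 0.7               # assumed fraction recovered over a discharge season

q_joule = volume_m3 * rho * c * dT
q_gwh = q_joule / 3.6e12     # 1 GWh = 3.6e12 J
print(round(q_gwh, 1), "GWh stored,", round(recovery * q_gwh, 1), "GWh recoverable")
```

Even this crude estimate shows why storage volume and discharge temperature (which sets the usable dT and the recovery fraction) dominate the economics discussed above.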
APA, Harvard, Vancouver, ISO, and other styles
43

Behrouzvaziri, Abolhassan. "Thermoregulatory effects of psychostimulants and exercise: data-driven modeling and analysis." Thesis, 2018. https://doi.org/10.7912/C2R94W.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
The thermoregulation system in mammals keeps body temperature within a vital yet narrow range by adjusting two main activities: heat generation and heat loss. These activities can also be triggered by other causes, such as exercise or certain drugs, in which case the thermoregulation system responds and tries to bring the body temperature back to the normal range. Although these responses are well explored experimentally, they can be unpredictable and clinically deadly. This thesis therefore aims to analytically characterize the neural circuitry components of the system that control heat generation and heat loss. This modeling approach lets us analyze the relationships between components of the thermoregulation system without measuring them directly and explain its complex responses in mathematical form. The first chapter introduces a mathematical model of the circuitry components of the thermoregulation system in response to methamphetamine, first published in [1]. The later chapters expand this mathematical framework to study the other components of the system under different conditions, such as different circadian phases, various pharmacological interventions, and exercise. This thesis is composed of materials from the following papers: Chapter 1 uses the main idea, model, and figures from Reference [1]; Chapter 2 is based on [2], which I coauthored; Chapter 3 interpolates materials from Reference [3], which I coauthored; Chapter 4 is drawn from Reference [4]; and Chapter 5 is based on Reference [5]. Each of these chapters is reformatted according to Purdue University thesis guidelines.
Some materials from each of these references have been used in the introduction chapter.
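The heat-balance view sketched above can be illustrated with a toy model in which body temperature integrates heat generation minus temperature-dependent heat loss. This is a generic illustration with made-up parameters, not the thesis's neural circuitry model.

```python
def simulate_temperature(t_end=120.0, dt=0.01, t_body=37.0, t_ambient=22.0,
                         heat_gen=0.30, loss_coeff=0.02):
    """Euler integration of dT/dt = heat_gen - loss_coeff * (T - T_ambient).
    The steady state is T_ambient + heat_gen / loss_coeff."""
    steps = int(t_end / dt)
    for _ in range(steps):
        t_body += dt * (heat_gen - loss_coeff * (t_body - t_ambient))
    return t_body

baseline = simulate_temperature()                  # parameters chosen so 37 C is steady
stimulated = simulate_temperature(heat_gen=0.36)   # e.g. drug-increased heat generation
print(round(baseline, 2), round(stimulated, 2))
```

Raising the heat-generation term while heat loss lags behind drives the temperature toward a higher steady state — a simple analogue of the hyperthermic drug responses the thesis models.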
APA, Harvard, Vancouver, ISO, and other styles
44

Tang, Meini. "BICNet: A Bayesian Approach for Estimating Task Effects on Intrinsic Connectivity Networks in fMRI Data." Thesis, 2020. http://hdl.handle.net/10754/666140.

Full text
Abstract:
Intrinsic connectivity networks (ICNs) refer to brain functional networks that are consistently found under various conditions, during tasks or at rest. Some studies demonstrated that while some stimuli do not impact intrinsic connectivity, other stimuli actually alter intrinsic connectivity through suppression, excitation, moderation or modification. Most analyses of functional magnetic resonance imaging (fMRI) data use ad-hoc methods to estimate the latent structure of ICNs, and modeling the effects on ICNs has also not been fully investigated. We propose the Bayesian Intrinsic Connectivity Network (BICNet) model, an extended Bayesian dynamic sparse latent factor model, to identify the ICNs and quantify task-related effects on the ICNs. BICNet has the following advantages: (1) it simultaneously identifies the individual and group-level ICNs; (2) it robustly identifies ICNs by jointly modeling resting-state fMRI (rfMRI) and task-related fMRI (tfMRI); (3) compared to independent component analysis (ICA)-based methods, it can quantify the difference of ICN amplitudes across different states; (4) the sparsity of ICNs automatically performs feature selection, instead of ad-hoc thresholding. We apply BICNet to the rfMRI and language tfMRI data from the Human Connectome Project (HCP) and identify several ICNs related to distinct language processing functions.
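The latent factor idea — observed regional fMRI time series expressed as a sparse mixture of a few network time courses — can be illustrated with a plain SVD decomposition on synthetic data. This non-Bayesian sketch merely stands in for the Bayesian dynamic sparse factor model; all sizes and the sparsity level are made up.

```python
import numpy as np

rng = np.random.default_rng(11)
n_regions, n_time, n_networks = 30, 200, 3

# Synthetic data: each region's signal is a sparse mix of a few latent
# network time courses plus noise (Y = loadings @ factors + E).
mask = rng.random((n_regions, n_networks)) < 0.3        # sparse loadings
loadings = rng.normal(size=(n_regions, n_networks)) * mask
factors = rng.normal(size=(n_networks, n_time))
Y = loadings @ factors + 0.1 * rng.normal(size=(n_regions, n_time))

# An SVD recovers a low-dimensional subspace capturing most of the variance;
# the true number of networks shows up as a sharp drop in singular values.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
explained = (s[:n_networks] ** 2).sum() / (s ** 2).sum()
print(round(explained, 3))  # the 3 leading components dominate
```

The Bayesian model in the thesis goes further than this sketch: it infers the sparsity pattern itself, shares structure across subjects and across rest/task runs, and quantifies amplitude differences between states.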
APA, Harvard, Vancouver, ISO, and other styles
45

Mpako, Vuyolwethu Maxabiso Wessels. "Capture effects in spread-aloha packet protocols." Thesis, 2005. http://hdl.handle.net/10413/2824.

Full text
Abstract:
Research in the field of random access protocols for narrow-band systems started as early as the 1970s with the introduction of the ALOHA protocol. From the research done in slotted narrow-band systems, it is well known that contention results in all the packets involved being unsuccessful. However, it has been shown that in the presence of unequal power levels, one of the contending packets may be successful. This is a phenomenon called capture. Packet capture has been shown to improve the performance of slotted narrow-band systems. Recently, much work has been done in the analysis of spread-spectrum ALOHA type code-division multiple access (CDMA) protocols. The design of power control techniques to improve the performance of CDMA systems by reducing multiple access interference (MAI) has been a subject of much research. It has been shown that power control schemes improve the performance of spread-ALOHA CDMA systems. However, it is also widely documented that power control schemes capable of ideally compensating for radio propagation effects cannot be designed, for various reasons; hence power control is imperfect. None of the research known to the author has looked at capture in spread-ALOHA systems or, to a greater extent, at expressions for the performance of spread-ALOHA systems in the presence of capture. In this thesis we introduce spread-ALOHA systems with capture as a manifestation of the imperfections in power control, and we propose novel expressions for computing the performance of spread-ALOHA systems with capture.
Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, 2005.
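The capture phenomenon can be illustrated with a toy slotted-ALOHA simulation in which a collision is still resolved whenever the strongest packet's received power exceeds the combined interference by a capture ratio. This is a simplified power-capture sketch with illustrative parameters, not the thesis's analytical expressions.

```python
import random

def slotted_aloha_throughput(n_users, p_tx, capture_ratio,
                             n_slots=100_000, seed=1):
    """Fraction of slots carrying a successful packet. A slot succeeds if
    exactly one user transmits, or if the strongest transmitter's power
    exceeds capture_ratio times the summed power of the others (capture)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        # Each transmitting user gets an exponentially distributed received
        # power: a crude stand-in for fading and imperfect power control.
        powers = [rng.expovariate(1.0)
                  for _ in range(n_users) if rng.random() < p_tx]
        if len(powers) == 1:
            successes += 1
        elif len(powers) > 1:
            strongest = max(powers)
            if strongest > capture_ratio * (sum(powers) - strongest):
                successes += 1
    return successes / n_slots

no_capture = slotted_aloha_throughput(20, 0.05, capture_ratio=float("inf"))
with_capture = slotted_aloha_throughput(20, 0.05, capture_ratio=4.0)
print(no_capture, with_capture)  # capture lifts throughput above the classical case
```

With an infinite capture ratio the model reduces to classical slotted ALOHA (throughput near G·e^-G for offered load G = 1); a finite ratio resolves some collisions and raises throughput, which is the effect the thesis quantifies for spread-ALOHA CDMA.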
APA, Harvard, Vancouver, ISO, and other styles
46

Čížek, Ondřej. "Projevy zneužití dominance v oblasti internetových platforem." Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-357612.

Full text
Abstract:
Forms of abuse of dominance in the area of Internet platforms. The thesis is dedicated to the topic of abuse of a dominant position in the area of Internet platforms. Its aim is, firstly, to outline the challenges arising from the specific nature of the area, which might, from the competition authorities' point of view, complicate the enforcement of competition law in cases of abuse of dominance. Secondly, the thesis tries to answer the question of the extent to which these problems have been reflected in existing decision-making practice. The structure of the thesis is divided into four main parts. The first part is an introduction. The second part provides an essential introduction to the area in question: it defines the term "Internet platform", provides an overview of the most important types of Internet platforms, and describes the specifics of the area, whose description is essential for the following parts. The third part analyses the problems that competition law may face in the context of possible abuse of dominance within the meaning of Art. 102 TFEU in the area of Internet platforms. This section is divided according to the three basic steps of a competition analysis of abuse of dominance, i.e. definition of the relevant market, the determination of market...
APA, Harvard, Vancouver, ISO, and other styles
47

(9224231), Dongdong Ma. "Ameliorating Environmental Effects on Hyperspectral Images for Improved Phenotyping in Greenhouse and Field Conditions." Thesis, 2020.

Find full text
Abstract:
Hyperspectral imaging has become one of the most popular technologies in plant phenotyping because it can efficiently and accurately predict numerous plant physiological features such as plant biomass, leaf moisture content, and chlorophyll content. Various hyperspectral imaging systems have been deployed in both greenhouse and field phenotyping activities. However, the hyperspectral imaging quality is severely affected by the continuously changing environmental conditions such as cloud cover, temperature and wind speed that induce noise in plant spectral data. Eliminating these environmental effects to improve imaging quality is critically important. In this thesis, two approaches were taken to address the imaging noise issue in greenhouse and field separately. First, a computational simulation model was built to simulate the greenhouse microclimate changes (such as the temperature and radiation distributions) through a 24-hour cycle in a research greenhouse. The simulated results were used to optimize the movement of an automated conveyor in the greenhouse: the plants were shuffled with the conveyor system with optimized frequency and distance to provide uniform growing conditions such as temperature and lighting intensity for each individual plant. The results showed the variance of the plants’ phenotyping feature measurements decreased significantly (i.e., by up to 83% in plant canopy size) in this conveyor greenhouse. Secondly, the environmental effects (i.e., sun radiation) on aerial hyperspectral images in field plant phenotyping were investigated and modeled. An artificial neural network (ANN) method was proposed to model the relationship between the image variation and environmental changes. 
Before the 2019 field test, a gantry system was designed and constructed to repeatedly collect time-series hyperspectral images of the corn plants at 2.5-minute intervals under varying environmental conditions, which included sun radiation, solar zenith angle, diurnal time, humidity, temperature and wind speed. Over 8,000 hyperspectral images of corn (Zea mays L.) were collected with synchronized environmental data throughout the 2019 growing season. The models trained with the proposed ANN method were able to accurately predict the variations in imaging results (e.g., 82.3% for NDVI) caused by the changing environments. Thus, the ANN method can be used by remote sensing professionals to adjust or correct raw imaging data for changing environments to improve plant characterization.
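The core idea — regressing the imaging variation on the synchronized environmental measurements and subtracting the predicted component — can be sketched on synthetic data. For simplicity this sketch uses an ordinary least-squares fit as a stand-in for the thesis's neural network; the covariates, weights, and noise level are all invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins: environmental covariates recorded with each image
# (e.g. sun radiation, solar zenith angle, temperature, wind speed) and
# the resulting deviation of a vegetation index such as NDVI.
n = 500
env = rng.uniform(size=(n, 4))
true_w = np.array([0.15, -0.10, 0.05, -0.02])   # invented sensitivities
ndvi_dev = env @ true_w + 0.005 * rng.normal(size=n)

# Fit the deviation on the environment (a linear stand-in for the ANN),
# then correct the raw measurements by removing the predicted effect.
X = np.column_stack([np.ones(n), env])
w = np.linalg.lstsq(X, ndvi_dev, rcond=None)[0]
corrected = ndvi_dev - X @ w

print(np.std(ndvi_dev), np.std(corrected))  # spread shrinks after correction
```

The drop in residual spread after correction is the synthetic analogue of removing environment-induced noise from the field imagery; a neural network replaces the linear fit when the environment-to-image relationship is nonlinear, as in the thesis.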
APA, Harvard, Vancouver, ISO, and other styles
48

Shin-Yeh Tsai and 蔡欣燁. "Effect of Data Aggregation in M2M Networks." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/12958689464869113360.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Computer and Communication Engineering
Academic year 100 (ROC calendar)
Machine-to-Machine (M2M) networks are increasingly proposed for applications based on many-to-many communication, where a sensor device is responsible for sensing data in its area. Applying data aggregation is an efficient way to prolong the lifetime of an M2M network. However, most previous M2M works focus only on routing algorithms and finding aggregation points. The objective of this thesis is to study the effects of buffering time and maximum buffered packets in data aggregation. We devise an analytical model to compute the delivery delay and energy efficiency of data aggregation. We then develop an extensive simulation to accompany the analytical model and investigate the effects of buffering time and maximum buffered packets. Numerical results show that buffering time significantly affects energy consumption. Moreover, we observe that limiting the maximum number of buffered packets in the aggregation mechanism can significantly decrease delivery delay. Our study provides guidelines for setting the buffering time and maximum buffered packets in data aggregation.
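The trade-off studied above — buffering time and maximum buffered packets versus delivery delay and transmission (energy) cost — can be sketched with a toy aggregation simulation. The arrival process and all parameters are illustrative; this is not the thesis's analytical model.

```python
import random

def simulate_aggregation(arrival_rate, buffer_time, max_packets,
                         n_packets=20_000, seed=3):
    """Packets arrive with exponential inter-arrival times and are buffered;
    the buffer is flushed (one aggregated transmission) when either the
    buffering timer of the oldest packet expires or max_packets accumulate.
    Returns (mean delivery delay per packet, transmissions per packet)."""
    rng = random.Random(seed)
    t, buf = 0.0, []                 # buf holds arrival times of waiting packets
    total_delay, flushes = 0.0, 0
    for _ in range(n_packets):
        t += rng.expovariate(arrival_rate)
        if buf and t > buf[0] + buffer_time:
            # Timer expired before this arrival: flush at the deadline.
            deadline = buf[0] + buffer_time
            total_delay += sum(deadline - a for a in buf)
            flushes += 1
            buf = []
        buf.append(t)
        if len(buf) >= max_packets:  # packet-count limit reached: flush now
            total_delay += sum(t - a for a in buf)
            flushes += 1
            buf = []
    if buf:                          # flush any leftovers at their deadline
        deadline = buf[0] + buffer_time
        total_delay += sum(deadline - a for a in buf)
        flushes += 1
    return total_delay / n_packets, flushes / n_packets

delay_small, tx_small = simulate_aggregation(1.0, 5.0, max_packets=2)
delay_large, tx_large = simulate_aggregation(1.0, 5.0, max_packets=10)
print(delay_small, tx_small, delay_large, tx_large)
```

With the same buffering time, a smaller packet limit cuts delivery delay but triggers more transmissions per packet, while a larger limit does the opposite — the qualitative behaviour the thesis quantifies.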
APA, Harvard, Vancouver, ISO, and other styles
49

Dias, João de Azevedo. "A problemática dos efeitos de rede e de aprisionamento no contexto do abuso de posição dominante europeia." Master's thesis, 2018. http://hdl.handle.net/10400.14/27801.

Full text
Abstract:
This dissertation is grounded in the problems posed by network and lock-in effects in the context of abuse of a dominant position under European law, as provided for in Article 102 TFEU. Our study focuses primarily on markets belonging to the New Economy (in particular, high-technology markets), whose pace of innovation and high degree of mutability create a greater propensity for the phenomenon of "The Winner Takes All Game". We also study the massive collection of personal data from the users of certain companies' services as an indicator of a position of market dominance. This work further seeks to scrutinize the concepts of market power and market definition within the New Economy, analyzing the shortcomings of the SSNIP Test in this setting and presenting alternatives to its use, with special attention to the economic or effects-based approach proposed by several authors, such as KATZ, SHAPIRO and ARTHUR, among others. Through this work, and by means of a doctrinal and case-law analysis of the matter, it was possible to conclude that the economic approach presents itself, in several respects, as a good solution to the problems posed by markets whose epicenter is innovation, investment in research, and the massive collection of personal data.
The following dissertation contemplates the study of the network and lock-in effects problematic in the context of the abuse of a dominant position, crystallized in Article 102 TFEU. This work is mainly focused on the New Economy markets (namely the high-technology industries), since their pace of innovation and high mutability levels are the cause of "The Winner Takes All Effect". We also analyse the massive collection of personal data by certain companies' services. Our study further aims to dissect the concepts of market power and market definition in the New Economy context. We examine the SSNIP Test's flaws, exploring the available alternatives to its use, namely the economic or effects-based approach, as proposed by diverse authors, such as KATZ, SHAPIRO, ARTHUR and SALOP, amongst others. Through this work, and appealing to the doctrine and the jurisprudence on the issue, we were able to conclude that the economic approach presents itself, from various angles, as a fair solution to the problems created by a market whose core is innovation, R&D development and massive collection of personal data.
APA, Harvard, Vancouver, ISO, and other styles
50

TSAO, CHING-WEN, and 曹景雯. "A Study of the Social Network Advertising Effect by Using Data Mining." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/req6hz.

Full text
APA, Harvard, Vancouver, ISO, and other styles