Dissertations / Theses on the topic 'Feature management'


Consult the top 50 dissertations / theses for your research on the topic 'Feature management.'


1

Schroeter, Julia. "Feature-based configuration management of reconfigurable cloud applications." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-141415.

Abstract:
A recent trend in the software industry is to provide enterprise applications in the cloud that are accessible everywhere and on any device. As the market is highly competitive, customer orientation plays an important role. Companies have therefore started providing applications as a service, directly configurable by customers in an online self-service portal. However, customer configurations are usually deployed in separate application instances, so each instance must be provisioned manually and maintained separately. Due to the induced redundancy in software and hardware components, resources are not optimally utilized. A multi-tenant aware application architecture eliminates this redundancy, as a single application instance serves the multiple customers renting the application. Combining a configuration self-service portal with a multi-tenant aware application architecture allows customers to be served just-in-time by automating the deployment process. Furthermore, self-service portals improve application scalability in terms of functionality, as customers can adapt application configurations themselves according to their changing demands. However, the configurability of current multi-tenant aware applications is rather limited. Solutions implementing variability are mainly developed for a single business case and cannot be directly transferred to other application scenarios. The goal of this thesis is to provide a generic framework for handling application variability and automating the configuration and reconfiguration processes essential for self-service portals, while exploiting the advantages of multi-tenancy. A promising way to achieve this goal is the application of software product line methods.
In software product line research, feature models are widely used to express the variability of software-intensive systems on an abstract level, as features are a common notion in software engineering and prominent in matching customer requirements against product functionality. This thesis introduces a framework for feature-based configuration management of reconfigurable cloud applications. The contribution is threefold. First, a development strategy for flexible multi-tenant aware applications is proposed, capable of integrating customer configurations at application runtime. Second, a generic method for defining concern-specific configuration perspectives is presented. Perspectives can be tailored for certain application scopes and facilitate the handling of numerous configuration options. Third, a novel method is proposed to model and automate structured configuration processes that adapt to varying stakeholders and reduce configuration redundancies. To this end, configuration processes are modeled as workflows and adapted by applying rewrite rules triggered by stakeholder events. The applicability of the proposed concepts is evaluated in case studies in industrial and academic contexts. In summary, the introduced framework for feature-based configuration management is a foundation for automating the configuration and reconfiguration processes of multi-tenant aware cloud applications, while enabling application scalability in terms of functionality.
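The feature-model notion this abstract relies on can be illustrated with a minimal sketch (all feature names are hypothetical, not taken from the thesis): a model maps parent features to mandatory or optional children, and a tenant's configuration is checked for validity against it.

```python
# Hypothetical feature model: parent -> [(child, "mandatory" | "optional")].
FEATURES = {
    "App":     [("Reporting", "mandatory"), ("Billing", "optional")],
    "Billing": [("CreditCard", "optional"), ("Invoice", "optional")],
}

def valid(selection):
    """A configuration is valid if the root is selected, every selected
    feature's parent is selected, and mandatory children are included."""
    if "App" not in selection:
        return False
    for parent, children in FEATURES.items():
        for child, kind in children:
            if child in selection and parent not in selection:
                return False
            if kind == "mandatory" and parent in selection and child not in selection:
                return False
    return True

print(valid({"App", "Reporting"}))              # True
print(valid({"App", "Billing", "CreditCard"}))  # missing mandatory Reporting -> False
```

Real feature models add cross-tree constraints (requires/excludes); this sketch only covers the parent-child rules.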
2

Lord, Dale. "Relational Database for Visual Data Management." International Foundation for Telemetering, 2005. http://hdl.handle.net/10150/604893.

Abstract:
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Often it is necessary to retrieve segments of video with certain characteristics, or features, from a large archive of footage. This paper discusses how image processing algorithms can be used to automatically create a relational database that indexes the video archive. This feature extraction can be performed either upon acquisition or in post-processing. The database can then be queried to quickly locate and recover video segments with certain specified key features.
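The idea of a relational index over extracted video features can be sketched as follows (the table layout and feature labels are illustrative assumptions, not the paper's schema):

```python
import sqlite3

# In-memory relational index of video segments; in practice the rows
# would come from image-processing feature extractors, not literals.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE segment (
    clip TEXT, start_s REAL, end_s REAL, feature TEXT)""")
rows = [
    ("tape01", 0.0, 12.5, "face"),
    ("tape01", 30.0, 41.0, "vehicle"),
    ("tape02", 5.0, 9.0, "face"),
]
con.executemany("INSERT INTO segment VALUES (?,?,?,?)", rows)

# Query: locate all segments containing a given feature.
hits = con.execute(
    "SELECT clip, start_s, end_s FROM segment "
    "WHERE feature = ? ORDER BY clip", ("face",)).fetchall()
print(hits)  # [('tape01', 0.0, 12.5), ('tape02', 5.0, 9.0)]
```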
3

Luo, Xi. "Feature-based Configuration Management of Applications in the Cloud." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-116674.

Abstract:
Complex business applications are increasingly offered as services over the Internet, as so-called Software-as-a-Service (SaaS) applications. SAP NetWeaver Cloud offers an OSGi-based open platform, which enables multi-tenant SaaS applications to run in the cloud. A multi-tenant SaaS application is designed so that a single application instance is used by several customers and their users. As different customers have different requirements for the functionality and quality of the application, the application instance must be configurable, and it must be possible to add new configurations to a multi-tenant SaaS application at run-time. In this thesis, we propose concepts for configuration management that are used for managing and creating client configurations of cloud applications. The concepts are implemented in a tool based on Eclipse and extended feature models. In addition, we evaluate our concepts and the applicability of the developed solution on SAP NetWeaver Cloud, using a cloud application as a concrete case example.
4

Duong, Hien D. "A Feature-Oriented Software Engineering Approach to Integrate ASSISTments with Learning Management Systems." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-theses/862.

Abstract:
"Object-Oriented Programming (OOP) has, in the past two decades, become the most influential and dominant programming paradigm for developing large and complex software systems. With OOP, developers can rely on design patterns that are widely accepted as solutions for recurring problems and used to develop flexible, reusable and modular software. However, recent studies have shown that object-oriented abstractions are not able to modularize these pattern concerns and tend to lead to programs with poor modularity. Feature-Oriented Programming (FOP) is an extension of OOP that aims to improve modularity and to support software variability in OOP by refining classes and methods. In this thesis, based upon the work of integrating an online tutoring system, ASSISTments, with other online learning management systems, we evaluate FOP with respect to modularity. This proof-of-concept effort demonstrates how to reduce the effort of designing integration code."
5

Otepka, Johannes, Sajid Ghuffar, Christoph Waldhauser, Ronald Hochreiter, and Norbert Pfeifer. "Georeferenced Point Clouds: A Survey of Features and Point Cloud Management." MDPI AG, 2013. http://dx.doi.org/10.3390/ijgi2041038.

Abstract:
This paper presents a survey of georeferenced point clouds. It concentrates, on the one hand, on features that originate in the measurement process itself and on features derived by processing the point cloud. On the other hand, approaches for the processing of georeferenced point clouds are reviewed, including data structures as well as spatial processing concepts. We suggest a categorization of features into levels that reflect the amount of processing. Point clouds are found across many disciplines, which is reflected in the versatility of the literature suggesting specific features.
6

Mal-Sarkar, Sanchita. "Uncertainty Management of Intelligent Feature Selection in Wireless Sensor Networks." Cleveland State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=csu1268419387.

7

Asghar, Muhammad Nabeel. "Feature based dynamic intra-video indexing." Thesis, University of Bedfordshire, 2014. http://hdl.handle.net/10547/338913.

Abstract:
With the advent of digital imagery and its widespread application in all vistas of life, video has become an important component of the world of communication. Video content ranging from broadcast news, sports, personal videos, surveillance, movies and entertainment is increasing exponentially in quantity, and it is becoming a challenge to retrieve content of interest from the corpora. This has led to increased interest amongst researchers in concepts of video structure analysis, feature extraction, content annotation, tagging, video indexing, querying and retrieval. However, most previous work is confined to specific domains and constrained by quality, processing and storage capabilities. This thesis presents a novel framework agglomerating established approaches, from feature extraction to browsing, in one content-based video retrieval system. The proposed framework significantly fills the identified gap while satisfying the imposed constraints on processing, storage, quality and retrieval times. The output entails a framework, methodology and prototype application that allow the user to efficiently and effectively retrieve content of interest, such as age, gender and activity, by specifying the relevant query. Experiments have shown plausible results, with an average precision and recall of 0.91 and 0.92 respectively for face detection using a Haar-wavelet-based approach. Precision for age ranges from 0.82 to 0.91 and recall from 0.78 to 0.84. Gender recognition gives better precision for males (0.89) than for females, while recall is higher for females (0.92). The activity of the subject has been detected using the Hough transform and classified using a Hidden Markov Model. A comprehensive dataset to support similar studies has also been developed as part of the research process.
A Graphical User Interface (GUI) providing a friendly and intuitive interface has been integrated into the developed system to facilitate the retrieval process. Comparison using the intraclass correlation coefficient (ICC) shows that the performance of the system closely resembles that of a human annotator. The performance has been optimised for time and error rate.
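For reference, the precision and recall figures quoted above follow the standard definitions computed from detection counts; the counts below are made up for illustration and are not the thesis data:

```python
# precision = TP / (TP + FP): how many reported detections were correct.
# recall    = TP / (TP + FN): how many true instances were found.

def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Illustrative counts chosen to land near the reported face-detection scores.
p, r = precision_recall(tp=91, fp=9, fn=8)
print(round(p, 2), round(r, 2))  # 0.91 0.92
```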
8

Beard, Ashley J. Sleath Betsy. "Cost as a feature of medication management communication in medical visits." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2008. http://dc.lib.unc.edu/u?/etd,2047.

Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2008.
Title from electronic title page (viewed Feb. 17, 2009). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Division of Pharmaceutical Outcomes and Policy." Discipline: Pharmaceutical Outcomes and Policy; Department/School: Pharmacy.
9

Oliinyk, Olesia. "Applying Hierarchical Feature Modeling in Automotive Industry." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3820.

Abstract:
Context. Variability management (VM) in the automotive domain is a promising approach to reducing complexity. Feature modeling, the starting point of VM, deals with the analysis and representation of available features in terms of commonalities and variabilities. The work is done in an automotive industry context, at Adam Opel AG. Objectives. This work studies automotive-specific problems with respect to feature modeling, investigates which decomposition and structuring approaches exist in the literature, and which of them satisfy the industrial requirements. An approach to feature modeling is synthesized, evaluated and documented. Methods. In this work a case study with a survey and a literature review is performed. The survey uses semi-structured interviews and workshops as data collection methods. The systematic review includes articles from Compendex, Inspec, IEEE Xplore, ACM Digital Library, Science Direct and Engineering Village. Approach selection is based on mapping requirements against the discovered approaches and on discussions with industry practitioners in regular meetings. An evaluation is proposed according to the Goal Question Metric paradigm. Results. The approach that can be followed in the case organization is described and evaluated. The reasoning behind the construction and selection of the feature modeling approach can be generalized to other organizations as well. Conclusions. We conclude that there is no perfect approach that solves all the problems connected to automotive software. However, structuring approaches can be complementary and, when combined, give good results. Tool support that integrates into the whole development cycle is important, as the amount of information cannot be processed using simple feature modeling tools. There is a need for further investigation in both directions: tool support and structuring approaches. The tactics proposed here should be introduced in organizations and formally evaluated.
10

SAIBENE, AURORA. "A Flexible Pipeline for Electroencephalographic Signal Processing and Management." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2022. http://hdl.handle.net/10281/360550.

Abstract:
The electroencephalogram (EEG) provides non-invasive recordings of brain activities and functions as time series, characterized by a temporal and spatial (sensor-dependent) resolution and by brain-condition-bounded frequency bands, and it offers some cost-effective device solutions. However, the resulting EEG signals are non-stationary, time-varying and heterogeneous, being recorded from different subjects and being influenced by specific experimental paradigms, environmental conditions and devices. Moreover, they are easily affected by noise and can be recorded only for a limited time, so they provide a restricted number of brain conditions to work with. Therefore, in this thesis a flexible pipeline for signal processing and management is proposed, to better understand EEG signals and exploit them for a variety of applications. The proposed flexible pipeline is divided into four modules concerning signal pre-processing, normalization, feature computation and management, and EEG data classification. The EEG signal pre-processing exploits the multivariate empirical mode decomposition (MEMD) to decompose the signal into oscillatory modes, called intrinsic mode functions (IMFs), and uses an entropy criterion to select the most relevant IMFs, which should maintain the natural brain dynamics while discarding uninformative components. The resulting relevant IMFs are then exploited for signal substitution and data augmentation. Even though MEMD is suited to the non-stationarity of the EEG signal, further processing steps should be undertaken to mitigate the heterogeneity of these data. Therefore, a normalization step is introduced to obtain comparable data inter- and intra-subject and between different experimental conditions, allowing the extraction of general features in the time, frequency, and time-frequency domains for EEG signal characterization.

Even though the use of a variety of feature types may reveal new data patterns, it may also introduce redundancies and increase the risk of incurring classification problems such as the curse of dimensionality and overfitting. Therefore, a feature selection based on evolutionary algorithms is proposed as a completely data-driven approach, exploiting both supervised and unsupervised learning models and suggesting new stopping criteria for a modified genetic algorithm implementation. Moreover, the choice of learning model may affect the discrimination of different brain conditions. The introduction of deep learning models may provide a strategy to learn directly from the available data. With a proper input formulation, the time, frequency, and spatial information of the EEG data can be maintained while avoiding overly complex architectures. In summary, different processing steps and approaches may provide general or experiment-specific strategies to manage the EEG signal while maintaining its natural characteristics.
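The evolutionary feature selection described above can be sketched, very roughly, as a genetic algorithm over feature bitmasks. The fitness function here is a toy stand-in with a hypothetical set of "informative" features; a real pipeline would instead score a cross-validated classifier on EEG features:

```python
import random

random.seed(0)

N_FEATURES = 12
INFORMATIVE = {0, 3, 7}  # hypothetical "truly useful" features

def fitness(bits):
    # Accuracy proxy: reward informative features, penalize subset size.
    hits = sum(bits[i] for i in INFORMATIVE)
    return hits - 0.1 * sum(bits)

def evolve(pop_size=20, generations=30, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g
                     for g in child]                # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([i for i, g in enumerate(best) if g])  # indices of selected features
```

The thesis's modified stopping criteria and learning-model-based fitness are not reproduced here; this only shows the chromosome/selection/crossover/mutation skeleton.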
11

Lakemond, Ruan. "Multiple camera management using wide baseline matching." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/37668/1/Ruan_Lakemond_Thesis.pdf.

Abstract:
Camera calibration information is required in order for multiple camera networks to deliver more than the sum of many single-camera systems. Methods exist for manually calibrating cameras with high accuracy. Manually calibrating networks with many cameras is, however, time consuming, expensive and impractical for networks that undergo frequent change. For this reason, automatic calibration techniques have been vigorously researched in recent years. Fully automatic calibration methods depend on the ability to automatically find point correspondences between overlapping views. In typical camera networks, cameras are placed far apart to maximise coverage; this is referred to as a wide baseline scenario. Finding sufficient correspondences for camera calibration in wide baseline scenarios presents a significant challenge. This thesis focuses on developing more effective and efficient techniques for finding correspondences in uncalibrated, wide baseline, multiple-camera scenarios. The project consists of two major areas of work. The first is the development of more effective and efficient view-covariant local feature extractors. The second involves methods to extract scene information from the information contained in a limited set of matched affine features. Several novel affine adaptation techniques for salient features have been developed. A method is presented for efficiently computing the discrete scale space primal sketch of local image features, and a scale selection method was implemented that makes use of the primal sketch. The primal-sketch-based scale selection method has several advantages over existing methods: it allows greater freedom in how the scale space is sampled, enables more accurate scale selection, is more effective at combining different functions for spatial position and scale selection, and leads to greater computational efficiency.
Existing affine adaptation methods make use of the second moment matrix to estimate the local affine shape of local image features. In this thesis, it is shown that the Hessian matrix can be used in a similar way to estimate local feature shape. The Hessian matrix is effective for estimating the shape of blob-like structures, but less effective for corner structures. It is simpler to compute than the second moment matrix, leading to a significant reduction in computational cost. A wide baseline dense correspondence extraction system, called WiDense, is presented in this thesis. Given only a few initial putative correspondences, it allows the extraction of large numbers of additional accurate correspondences. It consists of the following algorithms: an affine region alignment algorithm that ensures accurate alignment between matched features; a method for extracting more matches in the vicinity of a matched pair of affine features, using the alignment information contained in the match; and an algorithm for extracting large numbers of highly accurate point correspondences from an aligned pair of feature regions. Experiments show that the correspondences generated by the WiDense system improve the success rate of computing the epipolar geometry of very widely separated views. The new method succeeds in many cases where the features produced by the best wide baseline matching algorithms are insufficient for computing the scene geometry.
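The claim that the Hessian captures blob shape can be illustrated numerically: for an anisotropic Gaussian blob, the second derivatives at the centre recover its elongation. This is a simplified sketch on an analytic function, not the thesis implementation, which operates on image patches:

```python
import math

def blob(x, y, sx=2.0, sy=1.0):
    # Anisotropic Gaussian blob, twice as wide in x as in y.
    return math.exp(-(x * x) / (2 * sx * sx) - (y * y) / (2 * sy * sy))

def hessian(f, x, y, h=1e-3):
    # 2x2 Hessian via central finite differences.
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return fxx, fxy, fyy

fxx, fxy, fyy = hessian(blob, 0.0, 0.0)
# At the centre, fxx = -1/sx^2 and fyy = -1/sy^2, so the ratio fyy/fxx
# recovers the squared elongation (sx/sy)^2 = 4.
print(round(fyy / fxx, 2))  # ~4.0
```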
12

McArdle, Meghan P. (Meghan Patricia) 1972. "Internet-based rapid customer feedback for design feature tradeoff analysis." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/8990.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2000.
Includes bibliographical references (p. 86-88).
In an increasingly competitive consumer products market, companies are striving to create organizations, processes, and tools to reduce product development cycle time. As product development teams strive to develop products faster and more effectively, incorporating quantitative market research or customer feedback into the design process in a time- and cost-effective manner becomes increasingly important. Over the last decade, the Internet has emerged as a new and exciting market research medium, which can provide product development teams with an opportunity to obtain rapid quantitative feedback from their customers before making key design decisions. This thesis outlines a new methodology to incorporate customer feedback into the feature selection process of product development. The methodology was successfully employed in a new product development effort at Polaroid, and aided in the selection of two key product features. The research employed web-based conjoint analysis techniques and an innovative drag-and-drop technique that allows customers to create their ideal product by selecting their optimal set of features at a given price. Leveraging the capabilities of the Internet to incorporate styled web design, animation, interactive activities and usability considerations into an Internet-based market research effort can reduce respondent fatigue and provide the respondent with a more enjoyable experience while collecting meaningful quantitative data on customer feature preferences.
by Meghan P. McArdle.
S.M.
13

Yamashita, Marcelo Correa. "Service versioning and compatibility at feature level." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/78867.

Abstract:
Service evolution requires sound strategies to appropriately manage the versions resulting from changes during the service lifecycle. Typically, a service version is exposed as a description document that describes the service functionality, guiding client developers on the details of accessing the service. However, there is no standard for handling the versioning of service descriptions, which leads to difficulties in identifying and tracing changes, as well as in measuring their impact, particularly from a finer-grained perspective. Compatibility addresses the graceful evolution of services by considering the effects of changes on client applications. It defines a set of permissible change cases that do not disrupt the service's external integration. However, providers cannot always guarantee that the necessary changes yield compatible service descriptions. Moreover, the concept of compatibility is often applied to the entire service description, which may not be representative of the actual use of the service by a particular client application. It is thus the client developers' responsibility to assess the extent of the changes and their impact in their particular usage scenario, which can be hard and error-prone without proper change identification mechanisms. This work addresses service evolution at a finer granularity, which we refer to as the feature level. We propose a versioning model and a compatibility algorithm at the feature level, which allow the identification and qualification of the impact points of changes and their ripple effects, as well as the assessment of the compatibility of changes at this finer grain of features. This work also reports an experiment based on a real service, which explores the versioning model to assess the scope of implicit and explicit changes and their compatibility.
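A minimal sketch of the feature-level compatibility idea (the feature names and signatures below are hypothetical, and adding a parameter is treated as incompatible for simplicity): compatibility is judged only against the features a given client actually uses, rather than against the whole service description:

```python
# Two service versions as feature -> parameter-signature maps.
V1 = {"search": ("query",), "export": ("format",)}
V2 = {"search": ("query", "limit"), "export": ("format",)}  # search changed

def compatible(old, new, client_features):
    """A change is compatible for a client if every feature the client
    uses is still present with an unchanged signature."""
    return all(f in new and new[f] == old[f] for f in client_features)

print(compatible(V1, V2, {"export"}))  # True: client unaffected by the change
print(compatible(V1, V2, {"search"}))  # False: a used feature changed
```

A fuller model would distinguish compatible changes (e.g. adding an optional parameter) from breaking ones; the point here is only the per-client, per-feature scoping.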
14

Loscalzo, Steven. "Group based techniques for stable feature selection." Diss., Online access via UMI, 2009.

15

Cedernaes, Christopher, and Kristoffer Eriksson. "Open Innovation Software : A study of feature-related problems in idea management systems." Thesis, Uppsala universitet, Informationssystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-184977.

Abstract:
With the transition from closed to open innovation in recent years, the next trend for companies has been to bring in new ideas from external stakeholders using innovation tools known as Open Innovation Software (OIS). The most common type of OIS, called idea management systems, allows participants to submit, evaluate, and engage in discussions around ideas. However, implementing software to support innovation is not a sure success, and potential problems may arise. The purpose of this thesis is to research feature-related problems in current idea management systems, and to provide guidelines that suggest ways to eliminate or reduce the impact of these problems. Interviews were conducted with representatives of five different idea management systems. The respondents demonstrated their systems, which made it possible to catalogue features and to learn about problems that exist in these systems. Five feature-related problems were found, relating to engagement, duplicates, idea evaluation, complexity, and bias. Numerous recommendations on how the impact of these problems may be reduced have been identified. The findings of this thesis show that problems with engagement are best dealt with using features that deliver better feedback, in order to give participants more motivation. As for managing duplicates, it is recommended to implement a feature that suggests similar ideas during the idea submission phase. It was found that allowing users an unlimited number of votes should be avoided. To prevent bias, managers should be careful about features that display idea ratings before users have cast their vote, features that allow users to edit a cast vote unless the idea itself has been edited, and features that show ideas in order of popularity.
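The duplicate-handling guideline (suggesting similar ideas at submission time) can be sketched with a simple word-overlap similarity; a real system would likely use something stronger, such as TF-IDF or embeddings, and all idea texts below are invented:

```python
def jaccard(a, b):
    # Word-level Jaccard similarity between two idea texts.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

ideas = ["dark mode for the mobile app",
         "add export to csv",
         "mobile app dark theme"]

def suggest(new_idea, threshold=0.3):
    # Shown to the submitter before their idea is posted.
    return [i for i in ideas if jaccard(new_idea, i) >= threshold]

print(suggest("dark mode in mobile app"))
# ['dark mode for the mobile app', 'mobile app dark theme']
```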
APA, Harvard, Vancouver, ISO, and other styles
16

Khoshneshin, Mohammad. "Latent feature networks for statistical relational learning." Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/3323.

Full text
Abstract:
In this dissertation, I explored relational learning via latent variable models. Traditional machine learning algorithms cannot handle many learning problems where there is a need for modeling both relations and noise. Statistical relational learning approaches emerged to handle these applications by incorporating both relations and uncertainties. Latent variable models are one of the successful approaches to statistical relational learning. These models assume a latent variable for each entity, and the probability distribution over relationships between entities is then modeled via a function over the latent variables. One important example of relational learning via latent variables is text data modeling, where we are interested in modeling the relationship between words and documents. Latent variable models learn this data by assuming a latent variable for each word and document; the co-occurrence value is defined as a function of these random variables. For modeling co-occurrence data in general (and text data in particular), we proposed latent logistic allocation (LLA). LLA outperforms the state-of-the-art model, latent Dirichlet allocation, in text data modeling, document categorization, and information retrieval. We also proposed query-based visualization, which embeds documents relevant to a query in a 2-dimensional space. Additionally, I used latent variable models for other single-relational problems such as collaborative filtering and educational data mining. To move towards multi-relational learning via latent variable models, we propose latent feature networks (LFN). Multi-relational learning approaches model multiple relationships simultaneously. LFN assumes a component for each relationship; each component is a latent variable model in which a latent variable is defined for each entity and the relationship is a function of the latent variables. However, if an entity participates in more than one relationship, it will have a separate random variable for each relationship. We used LFN to model two different problems: microarray classification and social network analysis with a side network. In the first application, LFN outperforms support vector machines, the best propositional model for that application. In the second application, using the side information via LFN can drastically improve link prediction in a social network.
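The core idea described above, a logistic link probability over per-entity latent vectors, can be sketched roughly as follows. The entities, dimensionality, learning rate, and training pairs are all invented for illustration; this is not the thesis's actual LLA/LFN implementation.

```python
import math
import random

# Toy sketch of a latent-variable link model: each entity gets a latent
# vector, and the probability of a relationship is a logistic function of
# the dot product of the two entities' vectors.
random.seed(0)
DIM = 2
entities = ["doc1", "doc2", "word_a", "word_b"]
latent = {e: [random.gauss(0.0, 0.1) for _ in range(DIM)] for e in entities}

def link_prob(u, v):
    score = sum(a * b for a, b in zip(latent[u], latent[v]))
    return 1.0 / (1.0 + math.exp(-score))

# Observed co-occurrences (label 1) and non-co-occurrences (label 0).
pairs = [("doc1", "word_a", 1.0), ("doc2", "word_b", 1.0),
         ("doc1", "word_b", 0.0), ("doc2", "word_a", 0.0)]

# Plain gradient ascent on the log-likelihood of the observed labels.
for _ in range(300):
    for u, v, y in pairs:
        g = y - link_prob(u, v)
        for d in range(DIM):
            lu, lv = latent[u][d], latent[v][d]
            latent[u][d] += 0.5 * g * lv
            latent[v][d] += 0.5 * g * lu
```

After training, observed pairs receive higher link probabilities than unobserved ones, which is the basis for link-prediction tasks like those in the abstract.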
APA, Harvard, Vancouver, ISO, and other styles
17

Latner, Avi. "Feature performance metrics in a service as a software offering." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/67562.

Full text
Abstract:
Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 46-47).
The Software as a Service (SaaS) delivery model has become widespread. This deployment model changes the economics of software delivery but also has an impact on development. Releasing updates to customers is immediate, and the development, product, and marketing teams have access to customer usage information. These dynamics create a fast feedback loop between development and customers. To fully leverage this feedback loop, the right metrics need to be set. Typically, SaaS applications are a collection of features: the product is divided between development teams according to features, and customers access the service through features. Thus a framework that measures feature performance is valuable. This thesis provides a framework for measuring the performance of software as a service (SaaS) product features in order to prioritize development efforts. The case is based on empirical data from HubSpot and is generalized to provide a framework applicable to other companies with large-scale software offerings and distributed development. Firstly, relative value is measured by the impact that each feature has on customer acquisition and retention. Secondly, feature value is compared to feature cost, and specifically development investment, to determine feature profitability. Thirdly, feature sensitivity is measured, defined as the effect a fixed amount of development investment has on value in a given time. Fourthly, features are segmented according to their location relative to the value-to-cost trend line into most valuable features, outperforming, under-performing, and fledglings. Finally, results are analyzed to determine future action. Maintenance and bug fixes are prioritized according to feature value. Product enhancements are prioritized according to sensitivity, with special attention to fledglings. Under-performing features are either put on "life-support", terminated, or overhauled.
by Avi Latner.
S.M. in Engineering and Management
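The value/cost segmentation described in the abstract can be sketched roughly as follows. The feature names, value and cost figures, and the fledgling threshold are hypothetical, not data from the HubSpot case, and the trend-line fit is a simplified reading of the framework.

```python
# Hypothetical feature data: name -> (value from acquisition/retention
# impact, development cost). Figures are invented for illustration.
features = {
    "email_campaigns": (120.0, 40.0),
    "landing_pages":   (90.0, 60.0),
    "social_inbox":    (30.0, 55.0),
    "new_analytics":   (10.0, 5.0),
}

def segment(features):
    """Segment features relative to a through-origin value-to-cost trend line."""
    # Least-squares slope for value ~ k * cost (fit through the origin).
    num = sum(v * c for v, c in features.values())
    den = sum(c * c for _, c in features.values())
    k = num / den
    out = {}
    for name, (value, cost) in features.items():
        if cost < 10:                # little investment so far: a fledgling
            out[name] = "fledgling"
        elif value >= k * cost:      # above the trend line
            out[name] = "outperforming"
        else:                        # below the trend line
            out[name] = "under-performing"
    return out

print(segment(features))
```

Under this scheme, product enhancements would then be prioritized per the abstract: maintenance by value, enhancements by sensitivity, with under-performing features reviewed for life-support, termination, or overhaul.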
APA, Harvard, Vancouver, ISO, and other styles
18

Kerschke, Pascal [Verfasser]. "Automated and Feature-Based Problem Characterization and Algorithm Selection Through Machine Learning / Pascal Kerschke." Münster : Universitäts- und Landesbibliothek Münster, 2018. http://d-nb.info/1151936758/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Schroeter, Julia [Verfasser], Uwe [Akademischer Betreuer] Aßmann, and Vander [Akademischer Betreuer] Alves. "Feature-based configuration management of reconfigurable cloud applications / Julia Schroeter. Gutachter: Uwe Aßmann ; Vander Alves." Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://d-nb.info/1068446412/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Villacampa, Osiris. "Feature Selection and Classification Methods for Decision Making: A Comparative Analysis." NSUWorks, 2015. http://nsuworks.nova.edu/gscis_etd/63.

Full text
Abstract:
The use of data mining methods in corporate decision making has been increasing in the past decades. Their popularity can be attributed to better data mining algorithms, increased performance in computers, and results that can be measured and applied for decision making. The effective use of data mining methods to analyze various types of data has shown great advantages in various application domains. Some data sets need little preparation to be mined, whereas others, in particular high-dimensional data sets, must be preprocessed before mining due to the complexity and inefficiency of processing high-dimensional data. Feature selection, or attribute selection, is one of the techniques used for dimensionality reduction. Previous research has shown that data mining results can be improved in terms of accuracy and efficacy by selecting the attributes with most significance. This study analyzes vehicle service and sales data from multiple car dealerships. Its purpose is to find a model that better classifies existing customers as new car buyers based on their vehicle service histories. Feature selection methods such as Information Gain, Correlation-Based Feature Selection, Relief-F, Wrapper, and Hybrid methods were used to reduce the number of attributes in the data sets. The data sets with the selected attributes were then run through three popular classification algorithms, Decision Trees, k-Nearest Neighbor, and Support Vector Machines, and the results compared and analyzed. This study concludes with a comparative analysis of feature selection methods and their effects on different classification algorithms within the domain. As a base of comparison, the same procedures were run on a standard data set from the financial institution domain.
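As a rough illustration of the first listed technique, information gain ranks an attribute by how much knowing it reduces uncertainty (entropy) about the class label. The toy vehicle-service attributes below are invented, not the study's dealership data.

```python
import math

# Toy stand-in for the vehicle-service data: each row is
# (service_visits_bucket, warranty_active, bought_new_car).
rows = [
    ("high", True,  True),
    ("high", True,  True),
    ("high", False, True),
    ("low",  True,  False),
    ("low",  False, False),
    ("low",  False, False),
    ("high", False, True),
    ("low",  True,  True),
]

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(rows, attr_index):
    """Entropy of the label minus its expected entropy after splitting on one attribute."""
    labels = [r[-1] for r in rows]
    base = entropy(labels)
    splits = {}
    for r in rows:
        splits.setdefault(r[attr_index], []).append(r[-1])
    remainder = sum(len(part) / len(rows) * entropy(part)
                    for part in splits.values())
    return base - remainder

gains = {name: information_gain(rows, i)
         for i, name in enumerate(["service_visits", "warranty_active"])}
print(gains)  # attributes ranked by how much they reduce label uncertainty
```

In a pipeline like the one in the abstract, only the top-ranked attributes would be retained before running the classifiers.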
APA, Harvard, Vancouver, ISO, and other styles
21

Novakovich, Stephen M. (Stephen Michael). "Controlling the feature angularity of extruded aluminum products : an efficient methodology for manufacturing process improvement." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/36482.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Sloan School of Management, 1994, and Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1994.
Includes bibliographical references (p. 118).
by Stephen M. Novakovich.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
22

Pratt, Dennis G. "When a product feature becomes an industry : an examination of the hardware and software fault tolerance industry." Thesis, Massachusetts Institute of Technology, 1986. http://hdl.handle.net/1721.1/15009.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Sloan School of Management, 1986.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND DEWEY.
Bibliography: leaves 127-133.
by Dennis G. Pratt.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
23

Corradini, Ryan Arthur. "A Hybrid System for Glossary Generation of Feature Film Content for Language Learning." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2238.

Full text
Abstract:
This report introduces a suite of command-line tools created to assist content developers with the creation of rich supplementary material to use in conjunction with feature films and other video assets in language teaching. The tools are intended to leverage open-source corpora and software (the OPUS OpenSubs corpus and the Moses statistical machine translation system, respectively), but are written in a modular fashion so that other resources could be leveraged in their place. The completed tool suite facilitates three main tasks, which together constitute this project. First, several scripts created for use in preparing linguistic data for the system are discussed. Next, a set of scripts are described that together leverage the strengths of both terminology management and statistical machine translation to provide candidate translation entries for terms of interest. Finally, a tool chain and methodology are given for enriching the terminological data store based on the output of the machine translation process, thereby enabling greater accuracy and efficiency with each subsequent application.
APA, Harvard, Vancouver, ISO, and other styles
24

Kugathasan, Praba. "A feature-based approach to design information management - multiple viewpoint dependent models for the product introduction process." Thesis, University of Bristol, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262727.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Zhao, Wei. "Feature-Based Hierarchical Knowledge Engineering for Aircraft Life Cycle Design Decision Support." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14639.

Full text
Abstract:
The design process of aerospace systems is becoming more and more complex. As the process progressively becomes enterprise-wide, it involves multiple vendors and encompasses the entire life-cycle of the system, as well as a system-of-systems perspective. The amount of data and information generated under this paradigm has increased exponentially, creating a difficult situation as it pertains to data storage, management, and retrieval. Furthermore, the data themselves are not suitable or adequate for use in most cases and must be translated into knowledge with a proper level of abstraction. Adding to the problem is the fact that the knowledge discovery process needed to support the growth of data in aerospace systems design has not been developed to the appropriate level. In fact, important design decisions are often made without sufficient understanding of their overall impact on the aircraft's life, because the data have not been efficiently converted and interpreted in time to support design. In order to make the design process adapt to the life-cycle centric requirement, this thesis proposes a methodology to provide the necessary supporting knowledge for better design decision making. The primary contribution is the establishment of a knowledge engineering framework for design decision support to effectively discover knowledge from existing data, and to efficiently manage and present the knowledge throughout all phases of the aircraft life-cycle. The second contribution is the proposed methodology for feature generation and exploration, which significantly improves the knowledge discovery process. In addition, the proposed work demonstrates several multimedia-based approaches to knowledge presentation.
APA, Harvard, Vancouver, ISO, and other styles
26

Gollasch, David. "Conceptual Variability Management in Software Families with Multiple Contributors." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-202775.

Full text
Abstract:
Two main concepts currently exist for offering customisable software: software product lines, which allow product customisation based on a fixed set of variability, and software ecosystems, which allow open product customisation based on a common platform. Offering a software family that enables external developers to supply software artefacts means offering a common platform as part of an ecosystem and sacrificing variability control. Keeping full variability control means offering a customisable product as a product line, but without support for external contributors. This thesis proposes a third concept of variable software: partly open software families. They combine a customisable platform similar to product lines with controlled openness similar to ecosystems. As a major contribution of this thesis, a variability modelling concept is proposed as part of a variability management approach for these partly open software families. The modelling concept is based on feature models and extends them to support open variability modelling by means of interfaces, structural interface specifications, and the inclusion of semantic information. Additionally, the introduction of a rights management allows multiple contributors to work with the model. This is required to enable external developers to use the model for concrete extension development. The feasibility of the proposed model is evaluated using a prototypically developed modelling tool and by means of a case study based on a car infotainment system.
APA, Harvard, Vancouver, ISO, and other styles
27

Seidl, Christoph. "Integrated Management of Variability in Space and Time in Software Families." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-218036.

Full text
Abstract:
Software Product Lines (SPLs) and Software Ecosystems (SECOs) are approaches to capturing families of closely related software systems in terms of common and variable functionality (variability in space). SPLs and especially SECOs are subject to software evolution to adapt to new or changed requirements resulting in different versions of the software family and its variable assets (variability in time). Both dimensions may be interconnected (e.g., through version incompatibilities) and, thus, have to be handled simultaneously as not all customers upgrade their respective products immediately or completely. However, there currently is no integrated approach allowing variant derivation of features in different version combinations. In this thesis, remedy is provided in the form of an integrated approach making contributions in three areas: (1) As variability model, Hyper-Feature Models (HFMs) and a version-aware constraint language are introduced to conceptually capture variability in time as features and feature versions. (2) As variability realization mechanism, delta modeling is extended for variability in time, and a language creation infrastructure is provided to devise suitable delta languages. (3) For the variant derivation procedure, an automatic version selection mechanism is presented as well as a procedure to derive large parts of the application order for delta modules from the structure of the HFM. The presented integrated approach enables derivation of concrete software systems from an SPL or a SECO where both features and feature versions may be configured.
APA, Harvard, Vancouver, ISO, and other styles
28

Tessier, Sean Michael. "Ontology-based approach to enable feature interoperability between CAD systems." Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41118.

Full text
Abstract:
Data interoperability between computer-aided design (CAD) systems remains a major obstacle in the information integration and exchange in a collaborative engineering environment. The standards for CAD data exchange have remained largely restricted to geometric representations, causing the design intent portrayed through construction history, features, parameters, and constraints to be discarded in the exchange process. In this thesis, an ontology-based framework is proposed to allow for the full exchange of semantic feature data. A hybrid ontology approach is proposed, where a shared base ontology is used to convey the concepts that are common amongst different CAD systems, while local ontologies are used to represent the feature libraries of individual CAD systems as combinations of these shared concepts. A three-branch CAD feature model is constructed to reduce ambiguity in the construction of local ontology feature data. Boundary representation (B-Rep) data corresponding to the output of the feature operation is incorporated into the feature data to enhance data exchange. The Ontology Web Language (OWL) is used to construct a shared base ontology and a small feature library, which allows the use of existing ontology reasoning tools to infer new relationships and information between heterogeneous data. A combination of OWL and SWRL (Semantic Web Rule Language) rules are developed to allow a feature from an arbitrary source system expressed via the shared base ontology to be automatically classified and translated into the target system. These rules relate input parameters and reference types to expected B-Rep objects, allowing classification even when feature definitions vary or when little is known about the source system. In cases when the source system is well known, this approach also permits direct translation rules to be implemented. With such a flexible framework, a neutral feature exchange format could be developed.
APA, Harvard, Vancouver, ISO, and other styles
29

Collins, Donovan (Donovan Scott). "Feature-based investment cost estimation based on modular design of a continuous pharmaceutical manufacturing system." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66063.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering; in conjunction with the Leaders for Global Operations Program at MIT, June 2011.
"June 2011." Cataloged from PDF version of thesis.
Includes bibliographical references (p. 72-73).
Previous studies of continuous manufacturing processes have used equipment-factored cost estimation methods to predict savings in initial plant investment costs. In order to challenge and validate the existing methods of cost estimation, feature-based cost estimates were constructed based on a modular process design model. Synthesis of an existing chemical intermediate was selected as the model continuous process. A continuous process was designed that was a literal, step by step, translation of the batch process. Supporting design work included process flow diagrams and basic piping and instrumentation diagrams. Design parameters from the process model were combined with feature-based costs to develop a series of segmented cost estimates for the model continuous plant at several production scales. Based on this analysis, the continuous facility seems to be intrinsically less expensive only at a relatively high production scale. Additionally, the distribution of cost areas for the continuous facility differs significantly from the distribution previous assumed for batch plants. This finding suggests that current models may not be appropriate for generating cost estimates for continuous plants. These results should not have a significant negative impact on the value proposition for the continuous manufacturing platform. The continuous process designed for this project was not optimized. Therefore, this work reiterates that the switch to continuous must be accompanied with optimization and innovation in the underlying continuous chemistry.
by Donovan Collins.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
30

Dayibas, Orcun. "Feature Oriented Domain Specific Language For Dependency Injection In Dynamic Software Product Lines." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12611071/index.pdf.

Full text
Abstract:
Although Software Product Line Engineering (SPLE) defines many different processes at various abstraction levels, the common basis of these processes is analyzing the commonality and variability of the product family. In this thesis, a new approach to configuring components, as building blocks of the architecture, according to requirements is proposed. The main objective of this approach is to support the domain design and application design processes in the SPL context. Configuring products becomes a semi-automatic operation through a Domain Specific Language (DSL) built on top of the notions of domain and feature-component binding models. To accomplish this goal, the dependencies of the components are extracted from the software using the dependency injection method, and these dependencies are made definable in CASE tools developed in this work.
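The dependency-injection mechanism the abstract relies on can be sketched minimally as follows. The component names are invented, and this is a generic constructor-injection container, not the thesis's actual DSL or CASE tooling.

```python
# Minimal sketch of constructor-based dependency injection: component
# dependencies are declared explicitly, so a configurator can rewire a
# product variant without touching component code. Names are invented.
class Container:
    def __init__(self):
        self._providers = {}

    def register(self, name, factory, deps=()):
        """Register a component factory together with the names it depends on."""
        self._providers[name] = (factory, deps)

    def resolve(self, name):
        """Build a component, recursively building its dependencies first."""
        factory, deps = self._providers[name]
        return factory(*[self.resolve(d) for d in deps])

# One possible product configuration; registering a different storage
# component under the same dependency name yields a different variant.
c = Container()
c.register("sqlite_storage", lambda: "sqlite")
c.register("report_feature", lambda storage: f"reports-on-{storage}",
           deps=("sqlite_storage",))
print(c.resolve("report_feature"))
```

A feature-configuration DSL like the one the abstract describes would essentially emit such `register` calls from a selected feature set.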
APA, Harvard, Vancouver, ISO, and other styles
31

Cai, Jing [Verfasser]. "Development of a Reference Feature-based Machining Process Planning Data Model for Web-enabled Exchange in Extended Enterprise / Jing Cai." Aachen : Shaker, 2007. http://d-nb.info/116650929X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Sandberg, Hanna. "Projektledning : En rapport om projektledning inom postproduktion av långfilm." Thesis, Södertörns högskola, Institutionen för kommunikation, medier och it, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-17202.

Full text
Abstract:
This report describes the role of project manager/picture coordinator at the company The Chimney Pot throughout the post-production process of a feature film. The report was written for the course Practical Degree Project within the IT, Media and Design programme at Södertörns högskola in the spring term of 2012. It describes the different phases of project management and how models and tools are used for planning, information management, and communication, and how suitable models are applied to different projects. The report presents theories of project management and describes how these were used in the practical work of making a feature film, from planning and follow-up with the team to delivery of the finished film to the client. One lesson from the project is how important communication and clarity are in project work, and how much collaboration is required within the project team. This report describes communication in theory and how communication was used in this project.
This is a Bachelor thesis report from the programme IT, Media and Design at Södertörns högskola, spring semester 2012. The report describes the perspective of a project manager/picture coordinator working at a post-production company, more specifically The Chimney Pot. The post-production process is fully described, as are the different phases of project management. Furthermore, this report includes theories of project management and describes the implementation of these theories in practical work related to the post-production of a feature film. Consequently, all steps from planning to follow-up are covered, and the importance of working as a team during big productions is addressed, as this is essential for a successful delivery of the final product to the client. One thing demonstrated during this project is the importance of communication and of working as a team within the project. This report describes communication in theory and how communication was used in this project.
APA, Harvard, Vancouver, ISO, and other styles
33

Püschel, Georg, Christoph Seidl, Thomas Schlegel, and Uwe Aßmann. "Using Variability Management in Mobile Application Test Modeling." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-143917.

Full text
Abstract:
Mobile applications are developed to run on fast-evolving platforms, such as Android or iOS. The respective mobile devices are heterogeneous concerning hardware (e.g., sensors, displays, communication interfaces) and software, especially operating system functions. Software vendors cope with platform evolution and varied hardware configurations by abstracting from these variable assets. However, they cannot be sure about their assumptions on the inner conformance of all device parts, or that the application runs reliably on each of them; in consequence, comprehensive testing is required. In testing, however, variability becomes tedious due to the large number of test cases required to validate behavior on all possible device configurations. In this paper, we provide a remedy to this problem by combining model-based testing with variability concepts from Software Product Line engineering. For this purpose, we use feature-based test modeling to generate test cases from variable operational models for individual application configurations and versions. Furthermore, we illustrate our concepts using the commercial mobile application "runtastic" as an example application.
APA, Harvard, Vancouver, ISO, and other styles
34

Hackley, Christopher E. "The social construction of advertising : a discourse analytic approach to creative advertising development as a feature of marketing communications management." Thesis, University of Strathclyde, 1999. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=20822.

Full text
Abstract:
This thesis explores the creative development of advertising through a discourse analytic method. The 'creative development of advertising' refers here to the intra-agency process of developing advertising from client brief through planning, research, creative brief, design and execution. The thesis draws on a wide ranging literature review of research papers and popular texts to locate the study within marketing management as the superordinate field, and within marketing communications and advertising as the immediate domains. The main data gathering method is the dyadic depth interview, supplemented by observation in the field, informal primary data and agency archive material. The empirical focus is placed on a top five UK advertising agency, BMP DDB Needham, London. Transcribed interview data as text is subject to coding and categorised according to the 'interpretative repertoires' agency account team professionals draw upon to articulate and substantiate their positions and arguments, following well established discourse analytic procedure in discursive psychology. The empirical section argues that eight distinctive interpretative repertoires may be discerned from the data. These repertoires interact dynamically in agency discourse to circumscribe the social construction of advertising. The repertoires also act as resources from which account team members construct their professional identities and reproduce discursively proscribed power relations within the agency. The discussion explores the implications of the study for marketing management, marketing communications and related fields of research, theory and practice.
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Zongchang. "A Systematic Framework for Unsupervised Feature Mining and Fault Detection for Wind Turbine Drivetrain Systems." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1471348052.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Yang, Yimin. "Exploring Hidden Coherent Feature Groups and Temporal Semantics for Multimedia Big Data Analysis." FIU Digital Commons, 2015. http://digitalcommons.fiu.edu/etd/2254.

Full text
Abstract:
Thanks to the advanced technologies and social networks that allow data to be widely shared across the Internet, there is an explosion of pervasive multimedia data, generating high demand for multimedia services and applications that let people easily access and manage multimedia data in various areas. To address such demands, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework is able to explore hidden semantic feature groups in multimedia data and incorporate temporal semantics, especially for video event detection. First, a hierarchical semantic data representation is presented to alleviate the semantic gap issue, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (IF-TMCA) approach is presented for effective event detection. Specifically, the HCFG algorithm is integrated with the Hierarchical Information Gain Analysis (HIGA) method to generate the Importance Factor (IF) for producing the initial detection results. Then, the TMCA algorithm is proposed to efficiently incorporate temporal semantics for re-ranking and improving the final performance. Finally, a sampling-based ensemble learning mechanism is applied to further accommodate the imbalanced datasets. In addition to the multimedia semantic representation and class imbalance problems, lack of organization is another critical issue for multimedia big data analysis.
In this framework, an affinity propagation-based summarization method is also proposed to transform the unorganized data into a better structure with clean and well-organized information. The whole framework has been thoroughly evaluated across multiple domains, such as soccer goal event detection and disaster information management.
APA, Harvard, Vancouver, ISO, and other styles
37

Kakar, Adarsh Kumar. "Feature selection for evolutionary commercial-off-the-shelf software| Studies focusing on time-to-market, innovation and hedonic-utilitarian trade-offs." Thesis, The University of Alabama, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3596169.

Full text
Abstract:

Feature selection is one of the most important decisions made by product managers. This three-article study investigates the concepts, tools, and techniques for making trade-off decisions when introducing new features in evolving Commercial-Off-The-Shelf (COTS) software products. The first article investigates the efficacy of various feature selection techniques when the trade-off is between comprehensiveness and time-to-market. The second article investigates the impact of the current level of product performance when the trade-off is between providing different types of innovative features to users. The third article investigates the impact on the ability of the COTS product to attract new users and retain existing users when the trade-off is between providing utilitarian and hedonic value through new product features.

To meet these research goals, an extensive multidisciplinary study of the Information Systems (IS) and Product Development literatures was conducted, followed by experimental research. The experiments were conducted among young adults aged 19 to 24 who were users of Gmail, and produced some key findings.

In the first study the Kano survey method was found to be effective in identifying those features which added value to the product and those that did not. This finding will facilitate product managers in using appropriate techniques for identifying the critical product features to be built into the COTS product thereby reducing time-to-market without sacrificing product quality. In the second study, current COTS product performance was found to significantly impact the type of innovation to be introduced into the COTS product. Basic or Core product innovations were found to have value for the users when performance is low but not when the performance is high. On the other hand, Expected or product Performance innovations and Augmented or user Excitement innovations were found to have value when the performance is high but not when the performance is low. In the third study, Hedonic value and Utilitarian value of product features were found to have distinctive impact on users. While Hedonic value impacted Word-of-Mouth, a measure of the products' capacity to attract new customers, Utilitarian value impacted User Loyalty, a measure of the products' capacity to retain existing customers.
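The Kano survey method found effective in the first study classifies each feature from paired answers to a "functional" question (how would you feel if the feature were present?) and a "dysfunctional" one (if it were absent?). The sketch below encodes the conventional Kano evaluation table; it is a generic illustration of the technique, not material from the dissertation.

```python
# Standard Kano evaluation table. Rows: functional answer; columns:
# dysfunctional answer. Categories: A=Attractive, O=One-dimensional,
# M=Must-be, I=Indifferent, R=Reverse, Q=Questionable.
TABLE = {
    "like":      {"like": "Q", "must-be": "A", "neutral": "A", "live-with": "A", "dislike": "O"},
    "must-be":   {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "neutral":   {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "live-with": {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "dislike":   {"like": "R", "must-be": "R", "neutral": "R", "live-with": "R", "dislike": "Q"},
}

def kano_category(functional, dysfunctional):
    """Category of one respondent's answer pair."""
    return TABLE[functional][dysfunctional]

def classify_feature(responses):
    """Majority category over a list of (functional, dysfunctional) answers."""
    from collections import Counter
    counts = Counter(kano_category(f, d) for f, d in responses)
    return counts.most_common(1)[0][0]
```

A product manager can then prioritize Must-be and One-dimensional features for the next release and treat Indifferent ones as candidates to drop, which is how the method helps reduce time-to-market without sacrificing perceived quality.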

APA, Harvard, Vancouver, ISO, and other styles
38

Wilson, Michael E. J. "An online environmental approach to service interaction management in home automation." Thesis, University of Stirling, 2005. http://hdl.handle.net/1893/104.

Full text
Abstract:
Home automation is maturing with the increased deployment of networks and intelligent devices in the home. Along with new protocols and devices, new software services will emerge and work together releasing the full potential of networked consumer devices. Services may include home security, climate control or entertainment. With such extensive interworking the phenomenon known as service interaction, or feature interaction, appears. The problem occurs when services interfere with one another causing unexpected or undesirable outcomes. The main goal of this work is to detect undesired interactions between devices and services while allowing positive interactions between services and devices. If the interaction is negative, the approach should be able to handle it in an appropriate way. Being able to carry out interaction detection in the home poses certain challenges. Firstly, the devices and services are provided by a number of vendors and will be using a variety of protocols. Secondly, the configuration will not be fixed, the network will change as devices join and leave. Services may also change and adapt to user needs and to devices available at runtime. The developed approach is able to work with such challenges. Since the goal of the automated home is to make life simpler for the occupant, the approach should require minimal user intervention. With the above goals, an approach was developed which tackles the problem. Whereas previous approaches solving service interaction have focused on the service, the technique presented here concentrates on the devices and their surrounds, as some interactions occur through conflicting effects on the environment. The approach introduces the concept of environmental variables. A variable may be room temperature, movement or perhaps light. Drawing inspiration from the Operating Systems domain, locks are used to control access to the devices and environmental variables. 
Using this technique, undesirable interactions are avoided. The inclusion of the environment is a key element of this approach, as many interactions happen indirectly, through the environment. Since the configuration of a home’s devices and services is continually changing, developing an off-line solution is not practical. Therefore, an on-line approach in the form of an interaction manager has been developed. It is the manager’s role to detect interactions. The approach was shown to work successfully: the manager detected interactions at both device and service level, and prevented negative interactions from occurring. The approach is flexible: it is protocol independent, services are unaware of the manager, and the manager can cope with new devices and services joining the network. Further, little user intervention is required for the approach to operate.
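The lock-based idea at the heart of the thesis can be sketched as follows. The service names, the single shared variable and the purely exclusive-lock policy are simplifying assumptions for illustration; the actual manager also supports richer lock types and runtime device discovery.

```python
# Sketch: services must acquire a lock on a shared environmental variable
# (temperature, light, movement, ...) before acting on it, so conflicting
# effects are detected before they happen.

class InteractionManager:
    def __init__(self):
        self._locks = {}  # environmental variable -> service holding it

    def request(self, service, variable):
        """Grant access if the variable is free or already held by service."""
        holder = self._locks.get(variable)
        if holder is None or holder == service:
            self._locks[variable] = service
            return True
        return False  # negative interaction detected: access denied

    def release(self, service, variable):
        if self._locks.get(variable) == service:
            del self._locks[variable]

manager = InteractionManager()
# Heating wants to raise the room temperature...
assert manager.request("heating", "temperature")
# ...so air conditioning is blocked from acting on the same variable.
assert not manager.request("air_conditioning", "temperature")
manager.release("heating", "temperature")
assert manager.request("air_conditioning", "temperature")
```

Because the manager mediates only on environmental variables, services from different vendors need no knowledge of each other, which mirrors the protocol independence claimed above.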
APA, Harvard, Vancouver, ISO, and other styles
39

Begovich, Raymond S. "Planning and implementing writing coach programs at small newspapers." Virtual Press, 1993. http://liblink.bsu.edu/uhtbin/catkey/861394.

Full text
Abstract:
The purpose of this study was to identify and describe elements that may influence the effective planning and implementation of writing coach programs at small newspapers. Writing coaching at newspapers is becoming increasingly popular as a way to improve the writing abilities of reporters, to improve newsroom morale, to improve the relationship between reporters and editors, and to better serve newspaper readers. This study examined newspaper writing coach programs from an adult and continuing education program planning perspective. This study was qualitative, and was not intended to be generalized to any population. It was intended to provide information that may help the management and staff at small newspapers plan and implement writing coach programs effectively. Two techniques were used to obtain information: 1) telephone interviews with writing coaches, and 2) mini case study site visits to top editors at small newspapers. Ten writing coaches, located throughout the United States, were interviewed by telephone. The coaches selected for interviews were recommended by their peers as being among the most effective coaches in the country. Domain and taxonomic analyses were conducted of the interview transcripts. The study resulted in information relevant to eight areas related to planning and implementing newspaper writing coach programs: benefits, reasons, barriers, budgets, organizational climate, strengths and weaknesses, structure, and evaluation. Site visits were made to seven small newspapers. Before the visits, the top editors at the seven papers were sent a summary of the information gathered in the writing coach interviews. The editors were asked to react to the interview summary and to share their thoughts on planning and implementing writing coach programs at their newspapers.
The site visits resulted in seven mini case studies, each containing a narrative section and a conclusions section. Following the interviews and site visits, a general model was recommended for planning and implementing effective writing coach programs at small newspapers. The Coaching Way of Life Model describes assumptions upon which a coaching program should be based, and describes the role of a coaching facilitator at a small newspaper.
Department of Educational Leadership
APA, Harvard, Vancouver, ISO, and other styles
40

Al-Yafi, Karim. "A feature-based comparison of the centralised versus market-based decision making under lens of environment uncertainty : case of the mobile task allocation problem." Thesis, Brunel University, 2012. http://bura.brunel.ac.uk/handle/2438/6535.

Full text
Abstract:
Decision making problems are amongst the most common challenges facing managers at different management levels in the organisation: strategic, tactical, and operational. However, prior to reaching decisions at the operational level of the management hierarchy, operations management departments frequently have to deal with the optimisation process to evaluate the available decision alternatives. Industries with complex supply chain structures and service organisations that have to optimise the utilisation of their resources are examples. Conventionally, operational decisions used to be taken centrally by a decision making authority located at the top of a hierarchically-structured organisation. In order to take decisions, information related to the managed system and the affecting externalities (e.g. demand) should be globally available to the decision maker. The obtained information is then processed to reach the optimal decision. This approach usually makes extensive use of information systems (IS) containing a myriad of optimisation algorithms and meta-heuristics to process the large amount and complex nature of the data. The decisions reached are then broadcast to the passive actuators of the system to put them into execution. On the other hand, recent advancements in information and communication technologies (ICT) have made it possible to distribute decision making rights, and their applicability has been proven in several sectors. The market-based approach is such a distributed decision making mechanism, in which passive actuators are delegated the right to take individual decisions matching their self-interests. The communication among the market agents is done through market transactions regulated by auctions. The system’s global optimisation therefore arises from the aggregated self-oriented market agents. As opposed to the centralised approach, the main characteristics of the market-based approach are the market mechanism and the local knowledge of the agents.
The existence of both approaches has attracted several studies comparing them in different contexts. Recently, some studies compared the centralised and market-based approaches in the context of transportation applications from an algorithmic perspective. Transportation applications and routing problems are assumed to be good candidates for this comparison given the distributed nature of the system and the presence of several sources of uncertainty. Uncertainty exceptions make decisions highly vulnerable and necessitate frequent corrective interventions to keep an efficient level of service. Motivated by the previous comparison studies, this research aims at further investigating the features of both approaches and contrasting them in the context of a distributed task allocation problem in light of environmental uncertainty. Similar applications are often faced by service industries with a mobile workforce. Contrary to the previous comparison studies, which sought to compare these approaches at the mechanism level, this research attempts to identify the effect of the most significant characteristics of each approach in facing environmental uncertainty, which is reflected in this research by the arrival of dynamic tasks and the occurrence of stochastic delays. To achieve the aim of this research, a target optimisation problem from the VRP family is proposed and solved with both approaches. Given that this research does not target proposing new algorithms, two basic solution mechanisms are adopted to compare the centralised and the market-based approach. The produced solutions are executed on a dedicated multi-agent simulation system. During execution, dynamism and stochasticity are introduced. The research findings suggest that a market-based approach is attractive to implement in highly uncertain environments, when the degree of local knowledge and workers’ experience is high and when the system tends to be complex with large dimensions.
It is also suggested that a centralised approach fits better in situations where uncertainty is lower and the decision maker is able to make timely decision updates, which is in turn governed by the size of the system at hand.
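The market-based allocation of a dynamic task can be sketched in a contract-net style: the task is announced, each mobile worker bids its marginal travel cost using only local knowledge of its own route, and the lowest bid wins. The depot location, worker names and the first-price sealed-bid rule are illustrative assumptions, not the mechanism evaluated in the thesis.

```python
# Contract-net-style sketch of market-based task allocation for a
# VRP-like problem with dynamically arriving tasks.
import math

def travel_cost(route, task):
    """Marginal cost of appending a task to a worker's current route."""
    last = route[-1] if route else (0.0, 0.0)  # assume a depot at the origin
    return math.dist(last, task)

def auction(workers, task):
    """First-price sealed-bid auction: the lowest marginal-cost bid wins."""
    bids = {name: travel_cost(route, task) for name, route in workers.items()}
    winner = min(bids, key=bids.get)
    workers[winner].append(task)  # winner commits the task to its route
    return winner, bids[winner]

workers = {"w1": [(0.0, 1.0)], "w2": [(9.0, 9.0)]}
winner, cost = auction(workers, (1.0, 1.0))  # task appears near w1's last stop
```

Note that no agent ever sees another agent's route: global coordination emerges solely from the bids, which is the "local knowledge" property the comparison above turns on.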
APA, Harvard, Vancouver, ISO, and other styles
41

Westerdahl, Matilda. "Challenges in video game development - What does Agile management have to do with it?" Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20026.

Full text
Abstract:
The video game industry has gone through a dramatic change over the last few decades, yet several reports show that there are currently many challenges that developers face in their daily work. A major challenge includes difficulties of getting projects to close within set time and resource restraints. This is something that indicates a connection to the management methods being used, among which Agile management is a popular framework that many turn to. This thesis searches for connections between challenges in video game development and the usage of agile methods like Scrum and Kanban. For this, a qualitative research strategy was used in order to look into the experiences of video game developers. Five semi-structured interviews with a total of eleven respondents were conducted. As a complement, a quantitative web-based survey was made where 23 people participated. The results of this study show that challenges previously defined within the video game industry, including feature creep, crunch periods and a stressful work pace, can also be identified in the industry in southern Sweden to some extent. Underlying patterns indicate the industrial culture as an explanation for an incorrect implementation of agile methods, which could eventually lead to issues surrounding risk management in projects.
APA, Harvard, Vancouver, ISO, and other styles
42

Shumway, Devin James. "Hybrid State-Transactional Database for Product Lifecycle Management Features in Multi-Engineer Synchronous Heterogeneous Computer-Aided Design." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6341.

Full text
Abstract:
There are many different programs that can perform Computer Aided Design (CAD). In order for these programs to share data, file translations need to occur. These translations have typically been done by IGES and STEP files. With the work done at the BYU CAD Lab to create a multi-engineer synchronous heterogeneous CAD environment, these translation processes have become synchronous by using a server and a database to manage the data. However, this system stores part data in a database. The data in the database cannot be used in traditional Product Lifecycle Management systems. In order to remedy this, a new database was developed that enables every edit made in a CAD part across multiple CAD systems to be stored as well as worked on simultaneously. This allows users to access every action performed in a part. Branching was introduced to the database which allows users to work on multiple configurations of a part simultaneously and reduces file save sizes for different configurations by 98.6% compared to those created by traditional CAD systems.
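The storage idea behind the abstract above, every edit kept as an append-only record, with a branch being little more than a pointer to the edit it forked from, can be sketched as follows. The data model is an illustrative simplification, not the BYU CAD Lab schema; edit "operations" are stand-in strings.

```python
# Sketch: an append-only edit log shared by all branches. Forking a new
# configuration copies nothing, which is the intuition behind the large
# save-size reduction versus duplicating full part files.

class EditStore:
    def __init__(self):
        self.edits = []                  # global log: (parent_id, operation)
        self.branches = {"main": None}   # branch -> id of its latest edit

    def commit(self, branch, op):
        parent = self.branches[branch]
        self.edits.append((parent, op))
        self.branches[branch] = len(self.edits) - 1

    def fork(self, source, new_branch):
        """A new configuration starts as a pointer; no data is copied."""
        self.branches[new_branch] = self.branches[source]

    def history(self, branch):
        """Walk parent links from the branch tip to rebuild the part state."""
        ops, cursor = [], self.branches[branch]
        while cursor is not None:
            parent, op = self.edits[cursor]
            ops.append(op)
            cursor = parent
        return list(reversed(ops))

store = EditStore()
store.commit("main", "extrude base")
store.commit("main", "fillet edge")
store.fork("main", "variant-a")
store.commit("variant-a", "drill hole")
```

Here both configurations share the first two edits in storage; only the one extra record distinguishes "variant-a" from "main", and replaying a branch's history reconstructs its full part state in any connected CAD client.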
APA, Harvard, Vancouver, ISO, and other styles
43

Serrano, Odean. "The assemblage of water quality parameters and urban feature parameters, utilizing a geographic information system model for the use of watershed management in the Dardenne Creek Watershed, St. Charles County, Missouri." Fairfax, VA : George Mason University, 2008. http://hdl.handle.net/1920/3172.

Full text
Abstract:
Thesis (Ph. D.)--George Mason University, 2008.
Vita: p. 179. Thesis director: Lee M. Talbot. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Environmental Science and Public Policy. Title from PDF t.p. (viewed July 18, 2008). Includes bibliographical references (p. 148-149). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
44

Garg, Anushka. "Comparing Machine Learning Algorithms and Feature Selection Techniques to Predict Undesired Behavior in Business Processesand Study of Auto ML Frameworks." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-285559.

Full text
Abstract:
In recent years, Machine Learning algorithms and techniques have been gaining ground in every industry (for example, recommendation systems, user behavior analytics, financial applications and many more). In practice, they play an important role in harnessing the power of the vast data we currently generate on a daily basis in our digital world. In this study, we present a comprehensive comparison of different supervised Machine Learning algorithms and feature selection techniques to build the best predictive model as output. This predictive model helps companies predict unwanted behavior in their business processes. In addition, we have researched the automation of all the steps involved (from understanding data to implementing models) in the complete Machine Learning pipeline, also known as AutoML, and provide a comprehensive survey of the various frameworks introduced in this domain. These frameworks were introduced to solve the problem of CASH (Combined Algorithm Selection and Hyper-parameter optimization), which is essentially the automation of the various pipeline stages involved in building a Machine Learning predictive model.
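The CASH problem the surveyed frameworks address can be rendered in miniature: search jointly over candidate algorithms and their hyper-parameter grids, keeping the configuration with the best cross-validated accuracy. The two toy algorithms (a 1-D nearest-neighbour voter and a decision stump) and the dataset below are invented for illustration and bear no relation to the thesis's experiments.

```python
# Toy CASH search: joint algorithm selection + hyper-parameter optimization
# via exhaustive search with leave-one-out accuracy.

def knn_predict(train, x, k):
    """Majority vote of the k nearest 1-D training points."""
    votes = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return round(sum(label for _, label in votes) / k)

def stump_predict(train, x, threshold):
    """Decision stump; 'train' is unused but kept for a uniform interface."""
    return 1 if x >= threshold else 0

ALGORITHMS = {"knn": (knn_predict, [1, 3]), "stump": (stump_predict, [0.5, 5.0])}

def loo_accuracy(data, predict, hyper):
    hits = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        hits += predict(train, x, hyper) == y
    return hits / len(data)

def cash_search(data):
    """Return (accuracy, algorithm, hyper-parameter) of the best config."""
    best = None
    for name, (predict, grid) in ALGORITHMS.items():
        for hyper in grid:
            acc = loo_accuracy(data, predict, hyper)
            if best is None or acc > best[0]:
                best = (acc, name, hyper)
    return best

data = [(0.0, 0), (1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1), (10.0, 1)]
best_acc, best_algo, best_hyper = cash_search(data)
```

Real AutoML frameworks replace the exhaustive loop with Bayesian optimization or bandit strategies and search over full pipelines (preprocessing, feature selection, model), but the objective being maximized is the same.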
APA, Harvard, Vancouver, ISO, and other styles
45

Eriksson, Magnus. "Engineering Families of Software-Intensive Systems using Features, Goals and Scenarios." Doctoral thesis, Umeå : Department of Computing Science, Umeå Univ, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Lewis, Alice. "CASE STUDY FOR A LIGHTWEIGHT IMPACT ANALYSIS TOOL." Kent State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=kent1240179121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Testoni, Alberto. "Progettazione ed implementazione di un sistema generale di Human Activity Recognition attraverso l'utilizzo di sensori embedded ed algoritmi di feature selection." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13267/.

Full text
Abstract:
The exceptional development of microelectronics and computer systems over the last decade has enabled sensors and mobile devices with unprecedented characteristics; high computing power, low cost and small size have made these tools part of our daily lives. Today's smartphones integrate communication capabilities with considerable computational and sensing capacity through a large number of embedded sensors. The data coming from a smartphone's sensors can provide useful information about the environment of the user operating the device; how to collect and analyze these large quantities of data while respecting user privacy is a major challenge for the future, as well as a very active research area. Sophisticated recognition techniques automatically identify the activity the user is performing, or the vehicle in which the user is travelling, from the smartphone's sensor data; this statistical inference process is made possible by the models produced by classification algorithms. Starting from the state of the art in the field of so-called Human Activity Recognition, some important open questions are highlighted: is it possible to build a reliable classification system by tracking as many sensors as possible? Can the classification and model-building process be moved to a remote server able to perform expensive operations quickly? This thesis seeks to answer these questions by implementing a system for automatic identification of the user's activity, consisting of an Android application and a remote server dedicated to the classification process.
APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Yan. "Data Quality Assessment Methodology for Improved Prognostics Modeling." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1330024393.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Munir, Qaiser, and Muhammad Shahid. "Software Product Line:Survey of Tools." Thesis, Linköping University, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57888.

Full text
Abstract:

A software product line is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission. The main attraction of SPL is the development of a set of common assets, which includes requirements, designs, test plans, test cases, reusable software components and other artifacts. Tools for the development of software product lines are very few in number. The purpose of these tools is to support the creation, maintenance and use of different versions of product line artifacts. This requires a development environment that supports the management of assets and product development processes, and the sharing of assets among different products.

The objective of this master thesis is to investigate the available tools which support the Software Product Line process and its development phases. The work is carried out in two steps. In the first step, available Software Product Line tools are explored, a list of tools is prepared and managed, and a brief introduction of each tool is presented. The tools are classified into different categories according to their usage, and relations between the tools are established for better organization and understanding. In the second step, two tools, pure::variants and MetaEdit+, are selected and quality factors such as Usability, Performance, Reliability, Memory Consumption and Capacity are evaluated.

APA, Harvard, Vancouver, ISO, and other styles
50

D'Ambrosio, Luca. "AffectiveDrive: sistema di Driver Assistance basato sull’analisi di sensori inerziali e tecniche di computer vision." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16209/.

Full text
Abstract:
This thesis presents AffectiveDrive, a safe-driving application for iPhone that detects dangerous driving manoeuvres and warns the driver of unsafe behaviour. The application uses computer vision and machine learning algorithms to monitor behaviour and to detect whether the driver is drowsy or distracted, through the analysis of sensor data and the camera. In particular, the application extracts the sensor data produced by the smartphone. Feature-extraction techniques are applied to summarize the extracted data. Next, a prediction model is built from a dataset of driving measurements, and finally the Random Forest algorithm is applied to the extracted features to recognize the driver's behaviour, classifying it as "safe" or "unsafe". Drowsiness and distraction detection is performed with the AffDex Software Development Kit which, once a face is detected, produces numeric values between 0 (absent) and 100 (present) indicating the presence or absence of the conditions described above. If AffDex produces values exceeding preset thresholds, the driver is warned through an audible alarm.
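The sensing pipeline described in this abstract, raw accelerometer windows summarized by feature extraction and then classified as "safe" or "unsafe", can be sketched as follows. The thesis trains a Random Forest; to keep the sketch dependency-free, a nearest-centroid classifier stands in for that step, and all sample values, window sizes and labels are invented for illustration.

```python
# Sketch of a driver-behaviour pipeline: window features -> classifier.
import math

def extract_features(window):
    """Mean, standard deviation and peak of |acceleration| over a window."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return (mean, math.sqrt(var), max(mags))

def train_centroids(labelled_windows):
    """Per-label mean feature vector (stand-in for Random Forest training)."""
    sums = {}
    for window, label in labelled_windows:
        f = extract_features(window)
        bucket = sums.setdefault(label, [0.0, 0.0, 0.0, 0])
        for i in range(3):
            bucket[i] += f[i]
        bucket[3] += 1
    return {label: tuple(v / b[3] for v in b[:3]) for label, b in sums.items()}

def classify(centroids, window):
    """Label of the centroid closest to the window's feature vector."""
    f = extract_features(window)
    return min(centroids, key=lambda lab: math.dist(centroids[lab], f))

smooth = [(0.0, 0.0, 1.0)] * 10                   # gentle, steady driving
harsh = [(2.0, 0.0, 1.0), (-2.0, 0.0, 1.0)] * 5   # abrupt accelerations
centroids = train_centroids([(smooth, "safe"), (harsh, "unsafe")])
```

Swapping the centroid step for a Random Forest (as the thesis does) changes only the training and prediction calls; the window-based feature extraction stays the same.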
APA, Harvard, Vancouver, ISO, and other styles
