Dissertations / Theses on the topic 'Feature location'

To see the other types of publications on this topic, follow the link: Feature location.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Feature location.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Schulte, Lukas. "Investigating topic modeling techniques for historical feature location." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-85379.

Full text
Abstract:
Software maintenance and the understanding of where in the source code features are implemented are two strongly coupled tasks that make up a large portion of the effort spent on developing applications. The concept of feature location investigated in this thesis can serve as a supporting factor in those tasks, as it facilitates the automation of otherwise manual searches for source code artifacts. Challenges in this subject area include the aggregation and composition of a training corpus from historical codebase data for models, as well as the integration and optimization of qualified topic modeling techniques. Building on previous research, this thesis provides a comparison of two different techniques and introduces a toolkit that can be used to reproduce and extend the results discussed. Specifically, in this thesis a changeset-based approach to feature location is pursued and applied to a large open-source Java project. The project is used to optimize and evaluate the performance of Latent Dirichlet Allocation models and Pachinko Allocation models, as well as to compare the accuracy of the two models with each other. As discussed at the end of the thesis, the results do not indicate a clear favorite between the models. Instead, the outcome of the comparison depends on the metric and viewpoint from which it is assessed.
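One of the challenges this abstract names, composing a training corpus from historical codebase data, amounts to turning each changeset into a textual document that a topic model can consume. The sketch below shows one plausible preprocessing step under that reading; the sample diff and the tokenization rules (camelCase and underscore splitting) are illustrative assumptions, not the thesis's actual pipeline.

```python
import re
from collections import Counter

def changeset_to_document(diff_text):
    """Turn a unified-diff changeset into a bag-of-words document.

    Only added/removed lines are tokenized; identifiers are split on
    camelCase boundaries and underscores, so e.g. `updatePlayerScore`
    yields 'update', 'player', 'score'.
    """
    tokens = []
    for line in diff_text.splitlines():
        # Changed lines carry the feature-relevant vocabulary; skip the
        # '---'/'+++' file headers of the diff.
        if line.startswith(('+', '-')) and not line.startswith(('+++', '---')):
            for ident in re.findall(r'[A-Za-z_][A-Za-z0-9_]*', line[1:]):
                parts = re.split(r'_|(?<=[a-z0-9])(?=[A-Z])', ident)
                tokens.extend(p.lower() for p in parts if p)
    return Counter(tokens)

diff = """\
--- a/Player.java
+++ b/Player.java
+    private int playerScore;
+    void updatePlayerScore(int delta) { playerScore += delta; }
-    // old scoring removed
"""
doc = changeset_to_document(diff)
```

Feeding such bags of words, one per changeset, into an LDA or Pachinko Allocation implementation would then yield the topic distributions the thesis compares.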
APA, Harvard, Vancouver, ISO, and other styles
2

Farag, Emad William. "Online multi-person tracking using feature-less location measurements." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/112867.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 65-67).
This thesis presents a scalable real-time multi-object tracking system based on feature-less location measurements. The thesis introduces a two-stage object tracking algorithm along with a server infrastructure that allows users to view the tracking results live, replay old frames, or compute long-term analytics based on the tracking results. In the first tracking stage, consecutive measurements are connected to form short tracklets using an algorithm based on MHT. In the second stage, the tracklets are connected to form longer tracks in an algorithm that reduces the tracking problem to a minimum-cost flow problem. The system infrastructure allows for a large number of connected devices or sensors while reducing the possible points of failure. The tracking algorithms are evaluated in a controlled environment and in a daylong experiment in a real setting. In the latter, the number of people detected by the tracking algorithms was correct 83% of the time when tracking was done using noisy motion-based measurements.
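The second stage described above reduces tracklet linking to a min-cost flow problem. As a rough illustration of what that linking optimizes, the sketch below joins tracklets greedily by the cheapest compatible end-to-start pair; the greedy loop, the (t, x, y) tracklet format, and the gating threshold are assumptions standing in for the full flow formulation.

```python
import math

def link_tracklets(tracklets, max_gap=5.0):
    """Greedy stand-in for the min-cost-flow linking stage: repeatedly
    join the cheapest compatible (tracklet end, tracklet start) pair.

    Each tracklet is a list of (t, x, y) points; tracklet b may follow
    tracklet a only if b starts after a ends and the spatial jump is
    within max_gap.
    """
    def cost(a, b):
        ta, xa, ya = a[-1]
        tb, xb, yb = b[0]
        if tb <= ta:                       # b must start after a ends
            return None
        d = math.hypot(xb - xa, yb - ya)   # spatial gap between endpoints
        return d if d <= max_gap else None

    tracks = [list(t) for t in tracklets]
    while True:
        best = None
        for i, a in enumerate(tracks):
            for j, b in enumerate(tracks):
                if i == j:
                    continue
                c = cost(a, b)
                if c is not None and (best is None or c < best[0]):
                    best = (c, i, j)
        if best is None:                   # no feasible link remains
            return tracks
        _, i, j = best
        tracks[i] = tracks[i] + tracks[j]  # merge b onto the end of a
        del tracks[j]

# Two people, each observed as two short tracklets.
tracklets = [
    [(0, 0, 0), (1, 1, 0)], [(2, 2, 0), (3, 3, 0)],
    [(0, 10, 10), (1, 10, 11)], [(2, 10, 12), (3, 10, 13)],
]
tracks = link_tracklets(tracklets)
```

A min-cost-flow solver replaces this greedy loop with a globally optimal set of links, which matters when several candidate continuations compete.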
by Emad William Farag.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
3

Kravchenko, Oleg (Кравченко Олег Вадимович). "Features of location under airports technogenic landscapes." Thesis, НАУ, 2016. http://er.nau.edu.ua/handle/NAU/24677.

Full text
Abstract:
Modern changes in human activity significantly alter the appearance of the urban environment and occur in three areas: the diversification and transformation of functionality, through the creation of additional conditions for recreation and entertainment and through changes to the system of transport services and the creation of new transport infrastructure (e.g. airports); the use of new methods of spatial organization of urban areas; and a desire for striking architectural forms and unusual urban design elements that combine diverse functions.
APA, Harvard, Vancouver, ISO, and other styles
4

Otterson, Scott. "Use of speaker location features in meeting diarization /." Thesis, Connect to this title online; UW restricted, 2008. http://hdl.handle.net/1773/15463.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jedari, Fathi Elnaz. "NOISE IMPACT REDUCTION IN CLASSIFICATION APPROACH PREDICTING SOCIAL NETWORKS CHECK-IN LOCATIONS." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/theses/2110.

Full text
Abstract:
Since August 2010, Facebook has entered the self-reported positioning world by providing the check-in service to its users. This service allows users to share their physical location using the GPS receiver in their mobile devices, such as a smartphone, tablet, or smartwatch. Over the years, big datasets of recorded check-ins have been collected with the increasing popularity of social networks. Analyzing the check-in datasets reveals valuable information and patterns in users' check-in behavior as well as places' check-in history. The analysis results can be used in several areas, including business planning and financial decisions, for instance providing location-based deals. In this thesis, we leverage novel data mining methodology to learn from big check-in data and predict the next check-in place based only on places' history and with no reference to individual users. To this end, we study a large Facebook check-in dataset. This dataset has a high level of noise in location coordinates due to multiple collection sources, which are users' mobile devices. The research question is how we can leverage a noise impact reduction technique to enhance the performance of the prediction model. We design our own noise handling mechanism to deal with feature noise. The predictive model is generated by the Random Forest classification algorithm in a shared-memory parallel environment. We show how the performance of predictors is enhanced by minimizing noise impacts. The solution is a preprocessing feature-noise cleansing approach, implemented in R, that works fast for big check-in datasets.
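The feature-noise cleansing step can be pictured as snapping outlying coordinates back toward each place's consensus position. The sketch below is a minimal stand-in for the thesis's R preprocessing, with an assumed record format and a made-up deviation threshold.

```python
from statistics import median

def clean_checkin_coords(checkins, max_dev=0.01):
    """For each place, snap coordinates that deviate from the place's
    median position by more than max_dev degrees back to the median.

    checkins is a list of dicts with 'place', 'lat', 'lon' keys
    (an assumed schema for this illustration).
    """
    by_place = {}
    for c in checkins:
        by_place.setdefault(c['place'], []).append(c)
    cleaned = []
    for place, group in by_place.items():
        mlat = median(c['lat'] for c in group)
        mlon = median(c['lon'] for c in group)
        for c in group:
            noisy = (abs(c['lat'] - mlat) > max_dev or
                     abs(c['lon'] - mlon) > max_dev)
            cleaned.append({**c,
                            'lat': mlat if noisy else c['lat'],
                            'lon': mlon if noisy else c['lon']})
    return cleaned

checkins = [
    {'place': 'cafe', 'lat': 41.0,   'lon': -87.0},
    {'place': 'cafe', 'lat': 41.001, 'lon': -87.001},
    {'place': 'cafe', 'lat': 42.5,   'lon': -88.0},   # GPS outlier
]
cleaned = clean_checkin_coords(checkins)
```

After cleansing, the coordinates feed the Random Forest classifier as much less noisy features.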
APA, Harvard, Vancouver, ISO, and other styles
6

Debenham, Robert Michael. "Studies in learning and feature location with an artificial neural network." Thesis, University of Cambridge, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

DeLozier, Gregory Steven. "Feature Location using Unit Test Coverage in an Agile Development Environment." Kent State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=kent1406157529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Riggins, Jamie N. "Location Estimation of Obstacles for an Autonomous Surface Vehicle." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/33227.

Full text
Abstract:
As the mission field for autonomous vehicles expands into a larger variety of territories, the development of autonomous surface vehicles (ASVs) becomes increasingly important. ASVs have the potential to travel for long periods of time in areas that cannot be reached by aerial, ground, or underwater autonomous vehicles. ASVs are useful for a variety of missions, including bathymetric mapping, communication with other autonomous vehicles, military reconnaissance and surveillance, and environmental data collecting. Critical to an ASV's ability to maneuver without human intervention is its ability to detect obstacles, including the shoreline. Prior topological knowledge of the environment is not always available or, in dynamic environments, reliable. While many existing obstacle detection systems can only detect 3D obstacles at close range via a laser or radar signal, vision systems have the potential to detect obstacles both near and far, including "flat" obstacles such as the shoreline. The challenge lies in processing the images acquired by the vision system and extracting useful information. While this thesis does not address the issue of processing the images to locate the pixel positions of the obstacles, we assume that we have these processed images available. We present an algorithm that takes these processed images and, by incorporating the kinematic model of the ASV, maps the pixel locations of the obstacles into a global coordinate system. An Extended Kalman Filter is used to localize the ASV and the surrounding obstacles.
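The mapping from pixel locations to a global coordinate system can be illustrated with a pinhole-camera model and a flat-water assumption: a pixel's offset from the image centre gives a bearing, and the depression angle below horizontal gives range by similar triangles. The function below is a hedged sketch of that geometry, not the thesis's actual algorithm, which additionally folds the measurements into an Extended Kalman Filter.

```python
import math

def pixel_to_global(px, py, img_w, img_h, hfov, cam_height, cam_tilt,
                    asv_x, asv_y, asv_heading):
    """Map an obstacle's pixel position to global (x, y) coordinates,
    assuming a pinhole camera with square pixels and obstacles lying
    on the (flat) water plane. All angles are in radians.
    """
    vfov = hfov * img_h / img_w                      # vertical field of view
    bearing_off = (px - img_w / 2) / img_w * hfov    # angle off the optical axis
    elev_off = (py - img_h / 2) / img_h * vfov       # lower pixels look closer
    depression = cam_tilt + elev_off                 # angle below horizontal
    rng = cam_height / math.tan(depression)          # similar triangles
    bearing = asv_heading + bearing_off
    return (asv_x + rng * math.cos(bearing),
            asv_y + rng * math.sin(bearing))

# Obstacle at the image centre, camera 1 m above water, tilted 10° down.
ox, oy = pixel_to_global(320, 240, 640, 480, math.radians(60),
                         1.0, math.radians(10), 0.0, 0.0, 0.0)
```

The EKF then fuses a sequence of such noisy position estimates with the ASV's kinematic model to localize both vehicle and obstacles.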
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
9

Buthker, Gregory S. "Automated Vehicle Electronic Control Unit (ECU) Sensor Location Using Feature-Vector Based Comparisons." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1558613387729083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Qi. "Interactions between Visual Attention and Visual Working Memory." Kyoto University, 2015. http://hdl.handle.net/2433/199403.

Full text
Abstract:
Kyoto University (京都大学). Doctoral dissertation, Doctor of Human and Environmental Studies, degree no. 19079, Graduate School of Human and Environmental Studies. Examination committee: Prof. Jun Saiki (chair), Prof. Shintaro Funahashi, and Assoc. Prof. Takashi Tsukiura.
APA, Harvard, Vancouver, ISO, and other styles
11

Milborrow, Stephen. "Locating facial features with active shape models." Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/5161.

Full text
Abstract:
Includes bibliographical references (p. [94]-98).
This dissertation focuses on the problem of locating features in frontal views of upright human faces. The dissertation starts with the Active Shape Model of Cootes et al. [19] and extends it with the following techniques: 1. Selectively using two- instead of one-dimensional landmark profiles. 2. Stacking two Active Shape Models in series. 3. Extending the set of landmarks. 4. Trimming covariance matrices by setting most entries to zero. 5. Using other modifications, such as adding noise to the training set. The resulting feature locator is shown to compare favorably with previously published methods.
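Technique 4, trimming covariance matrices by setting most entries to zero, can be sketched as keeping only the largest-magnitude off-diagonal entries. The keep-fraction rule below is an illustrative simplification of the trimming described, using plain nested lists rather than a matrix library.

```python
def trim_covariance(cov, keep_frac=0.2):
    """Zero all off-diagonal entries of a square covariance matrix
    except (roughly) the largest-magnitude keep_frac of them.
    The diagonal (the variances) is always preserved.
    """
    n = len(cov)
    # Magnitudes of every off-diagonal entry, largest first.
    off = sorted((abs(cov[i][j]) for i in range(n) for j in range(n) if i != j),
                 reverse=True)
    k = int(len(off) * keep_frac)
    thresh = off[k - 1] if k > 0 else float('inf')
    return [[cov[i][j] if i == j or abs(cov[i][j]) >= thresh else 0.0
             for j in range(n)]
            for i in range(n)]

cov = [[2.0, 0.9, 0.1],
       [0.9, 2.0, 0.05],
       [0.1, 0.05, 2.0]]
trimmed = trim_covariance(cov)
```

Sparsifying the covariance this way makes profile matching cheaper and can regularize the model, at the cost of discarding weak landmark correlations.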
APA, Harvard, Vancouver, ISO, and other styles
12

Nolêto, Carleandro de Oliveira. "An authoring tool for location-based mobile games with augmented reality features." Universidade Federal do Ceará, 2015. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=17053.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Location-based mobile games are a special type of game in which location technologies are employed to track players and to alter the rules of the game. Such games are a subclass of pervasive games whose playability advances according to the location of players. This work proposes an authoring tool for developing location-based mobile games enhanced with augmented reality features. This tool was conceived from research studies on authoring tools and pervasive games. By analysing the related works, we have compiled the most common features of location-based mobile games and the main aspects presented in authoring tools for developing these games. Moreover, we have used focus groups both to enhance the creation of new scenarios for location-based mobile games and to improve the usage of augmented reality in such games. Ultimately, we have compiled a set of requirements to design a software architecture for the development of an authoring tool. The designed architecture is composed of a server to manage running games, a web-based system for developing these games, and a mobile application in which the games are executed. The main goal of this work is to provide a software solution that allows non-programmers to design, build, and execute location-based mobile games. To assess the proposed work, we designed and implemented a game called "Battle for Fortaleza" using the authoring tool. In addition, we conducted interviews with users to validate the tool's utility and ease of use for developing location-based mobile games.
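A core mechanic of such games, advancing play when a player enters a geofenced zone, can be sketched with a haversine distance test. The event records below are an illustrative assumption, not the tool's actual data model.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    R = 6371000.0                      # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def triggered_events(player_lat, player_lon, events):
    """Return the names of game events whose geofence the player is inside."""
    return [e['name'] for e in events
            if haversine_m(player_lat, player_lon,
                           e['lat'], e['lon']) <= e['radius_m']]

# Two hypothetical event zones; the player stands at the first one.
events = [
    {'name': 'statue', 'lat': -3.7319, 'lon': -38.5267, 'radius_m': 50},
    {'name': 'fort',   'lat': -3.7410, 'lon': -38.5267, 'radius_m': 50},
]
hits = triggered_events(-3.7319, -38.5267, events)
```

On a device, the same check would run inside a location-update callback rather than on hard-coded coordinates.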
APA, Harvard, Vancouver, ISO, and other styles
13

Nelson, Jonas. "Methods for Locating Distinct Features in Fingerprint Images." Thesis, Linköping University, Department of Science and Technology, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1147.

Full text
Abstract:

With the advance of the modern information society, the importance of reliable identity authentication has increased dramatically. Using biometrics as a means for verifying the identity of a person increases both the security and the convenience of the systems. By using yourself to verify your identity such risks as lost keys and misplaced passwords are removed and by virtue of this, convenience is also increased. The most mature and well-developed biometric technique is fingerprint recognition. Fingerprints are unique for each individual and they do not change over time, which is very desirable in this application. There are multitudes of approaches to fingerprint recognition, most of which work by identifying so called minutiae and match fingerprints based on these.

In this diploma work, two alternative methods for locating distinct features in fingerprint images have been evaluated. The Template Correlation Method is based on the correlation between the image and templates created to approximate the homogeneous ridge/valley areas in the fingerprint. The high dimension of the feature vectors from correlation is reduced through principal component analysis. By visualising the dimension-reduced data through ordinary plotting, classification is performed by locating anomalies in feature space, where distinct features lie away from the non-distinct ones.

The Circular Sampling Method works by sampling in concentric circles around selected points in the image and evaluating the frequency content of the resulting functions. Each image used here contains 30,400 pixels, which leads to sampling at many points that are of no interest. By selecting the sampling points, this number can be reduced. Two approaches to sampling-point selection have been evaluated. The first restricts sampling to occur only along the valley bottoms of the image, whereas the second uses orientation histograms to select, as sampling positions, regions where there is no single dominant direction. For each sampling position, an intensity function is obtained by circular sampling, and a frequency spectrum of this function is obtained through the Fast Fourier Transform. Applying criteria to the relationships between the frequency components classifies each sampling location as either distinct or non-distinct.

Using a cyclic approach to evaluate the methods and their potential makes selection possible at various stages. Only the Circular Sampling Method survived the first cycle, and therefore all tests from that point on were performed on this method alone. Two main errors arise from the tests, the most prominent being the number of spurious points located by the method. The second, which is equally serious but not as common, is that the method misclassifies visually distinct features as non-distinct. Regardless of these problems, the tests indicate that the method holds potential but needs further testing and optimisation, focusing on its three main properties: noise sensitivity, radial dependency, and translation sensitivity.
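The circular-sampling idea can be sketched directly: sample intensities at evenly spaced points on a circle around a candidate location, then inspect the frequency content of the resulting function. The nearest-neighbour pixel lookup and the plain discrete Fourier transform below are simplifications of the method described.

```python
import cmath
import math

def circular_sample(image, cx, cy, radius, n_samples=32):
    """Sample pixel intensities at n_samples points on a circle of the
    given radius around (cx, cy), using nearest-neighbour lookup.
    image is a row-major list of lists of intensities.
    """
    vals = []
    for k in range(n_samples):
        ang = 2 * math.pi * k / n_samples
        x = int(round(cx + radius * math.cos(ang)))
        y = int(round(cy + radius * math.sin(ang)))
        vals.append(image[y][x])
    return vals

def spectrum(vals):
    """Normalized magnitudes of the discrete Fourier transform of the
    circular samples (an FFT would compute the same values faster)."""
    n = len(vals)
    return [abs(sum(v * cmath.exp(-2j * math.pi * f * k / n)
                    for k, v in enumerate(vals))) / n
            for f in range(n)]

# On a featureless (constant) patch, all energy sits in the DC component.
flat = [[1.0] * 21 for _ in range(21)]
ring_vals = circular_sample(flat, 10, 10, 5)
ring_spec = spectrum(ring_vals)
```

A ridge crossing the circle would instead show up as energy at low non-zero frequencies, which is the kind of relationship the classification criteria inspect.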

APA, Harvard, Vancouver, ISO, and other styles
14

Andrade, Mariana de Carvalho Cardoso Gonçalves. "The influence of landscape and seascape features in the location of underwater recreation sites." Master's thesis, ISA/UL, 2015. http://hdl.handle.net/10400.5/11216.

Full text
Abstract:
Master's in Landscape Architecture - Instituto Superior de Agronomia
This study aims to use SCUBA diving as a means of analysing the different components of the coastal and underwater landscape, since this recreational activity is intimately related to the processes that occur in these environments. With this approach, it is expected to enlarge the scope of landscape architecture to the underwater scenery. It analyses the characteristics of each dive site individually, in relation to its neighbourhood and to the whole study area, looking to detect associations between these characteristics and the sites' presence. Several dive centres were contacted in order to obtain data. The data gathered correspond to the dive sites' locations as well as the frequency of visits in 2011, 2012, and 2013. The method used was based on the application of metrics used in other research contexts (e.g. landscape ecology; urban planning). In a second phase, using an adequate regression equation applied through the stepwise technique, a few variables were statistically classified as relevant for the occurrence of dive sites. The result is a model that allows us to interpret the reasons that could justify the observed dive sites and to define a probability of their future existence in the remaining seascape.
APA, Harvard, Vancouver, ISO, and other styles
15

Vitas, Marko. "Designing mobile ambient applications." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-14191.

Full text
Abstract:
Android is a fast-growing platform with a lot of users and applications on the market. In order to challenge the competition, a new software product should be designed carefully, conforming to the platform constraints and living up to user expectations. This research focuses on defining a suitable architecture design for the specific use case of interest: an Android application focused on location-based data. The research process is backed up by the construction of a prototype application with features such as location-based reminders and mobile communication with web services. Moreover, an analysis has been conducted on existing products of proven quality, to extract information on current best-practice implementations of several interesting features. Furthermore, the demand for targeting multiple platforms with the same application motivated research on portability and reuse of code among different platforms. The constructed system is divided into a client-server pair. Opposite to the client (mobile) side, the server side analyzes the process of extending an existing architecture by integrating it with a web service project used for exchanging data with the mobile devices. Finally, the thesis is not strictly constrained to the given use case, because it presents several general concepts of application design through architectural and design patterns.
APA, Harvard, Vancouver, ISO, and other styles
16

Milón, Pohl (Universidad Peruana de Ciencias Aplicadas, UPC). "SmartPhones feat SmartMolecules: eBioPhy brings rapid diagnosis to remote locations." Universidad Peruana de Ciencias Aplicadas (UPC), 2015. http://hdl.handle.net/10757/344615.

Full text
Abstract:
eBioPhy is a diagnostic platform that uses biochemical and biophysical principles together with informatics and communication tools to probe for the presence of pathogens in biological samples. The platform aims to bring real-time diagnostics to remote locations where health services are rare, and it is based on two main principles: 1) the recognition of pathogens using fluorescently and chemically modified molecules ('smart molecules'); 2) data collection and analysis using smartphone capabilities.
Project funded by FINCyT and the Government of Canada (Grand Challenges Canada, Bold Ideas with Big Impact). It began in October 2014 and ends in April 2016. The participating institutions are the Universidad Peruana de Ciencias Aplicadas (UPC) and the Instituto de Investigación Nutricional (IIN). Lima, Peru.
APA, Harvard, Vancouver, ISO, and other styles
17

Braun, Edna. "Reverse Engineering Behavioural Models by Filtering out Utilities from Execution Traces." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/26093.

Full text
Abstract:
An important issue in software evolution is the time and effort needed to understand existing applications. Reverse engineering software to recover behavioural models is difficult and is complicated due to the lack of a standardized way of extracting and visualizing knowledge. In this thesis, we study a technique for automatically extracting static and dynamic data from software, filtering and analysing the data, and visualizing the behavioural model of a selected feature of a software application. We also investigate the usefulness of the generated diagrams as documentation for the software. We present a literature review of studies that have used static and dynamic data analysis for software comprehension. A set of criteria is created, and each approach, including this thesis’ technique, is compared using those criteria. We propose an approach to simplify lengthy traces by filtering out software components that are too low level to give a high-level picture of the selected feature. We use static information to identify and remove small and simple (or uncomplicated) software components from the trace. We define a utility method as any element of a program designed for the convenience of the designer and implementer and intended to be accessed from multiple places within a certain scope of the program. Utilityhood is defined as the extent to which a particular method can be considered a utility. Utilityhood is calculated using different combinations of selected dynamic and static variables. Methods with high utilityhood values are detected and removed iteratively. By eliminating utilities, we are left with a much smaller trace which is then visualized using the Use Case Map (UCM) notation. UCM is a scenario language used to specify and explain behaviour of complex systems. By doing so, we can identify the algorithm that generates a UCM closest to the mental model of the designers. 
Although, when analysing our results, we did not identify an algorithm that was best in all cases, there is a trend: three of the best four algorithms (out of a total of eight investigated) used method complexity and method lines of code as parameters. We also validated the algorithm results by comparing them with a list of methods given to us by the creators of the software and computing precision and recall. Seven of the eight participants agreed or strongly agreed that using UCM diagrams to visualize reduced traces is a valid approach, and none disagreed.
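The utilityhood idea, scoring each method by how much it looks like a utility and filtering trace events accordingly, can be sketched as follows. The particular combination of fan-in, size, and cyclomatic complexity is one illustrative formula among the several variable combinations the thesis evaluates, and the record format is assumed.

```python
def utilityhood(method, max_fan_in, max_loc):
    """Score in [0, 1]: small, simple methods called from many places
    look like utilities. One plausible combination of the dynamic and
    static variables; the thesis compares eight such algorithms.
    """
    fan_in = method['fan_in'] / max_fan_in          # called from many places
    smallness = 1.0 - method['loc'] / max_loc       # few lines of code
    simplicity = 1.0 / method['complexity']         # low cyclomatic complexity
    return (fan_in + smallness + simplicity) / 3

def filter_trace(trace, methods, threshold=0.7):
    """Drop trace events whose method scores above the threshold."""
    max_fan_in = max(m['fan_in'] for m in methods.values())
    max_loc = max(m['loc'] for m in methods.values())
    utils = {name for name, m in methods.items()
             if utilityhood(m, max_fan_in, max_loc) > threshold}
    return [ev for ev in trace if ev not in utils]

methods = {
    'log':          {'fan_in': 50, 'loc': 3,  'complexity': 1},
    'processOrder': {'fan_in': 2,  'loc': 60, 'complexity': 8},
}
trace = ['processOrder', 'log', 'log', 'processOrder']
kept = filter_trace(trace, methods)
```

The much shorter filtered trace is what gets rendered as a Use Case Map for the designers.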
APA, Harvard, Vancouver, ISO, and other styles
18

Ieva, Carlo. "Révéler le contenu latent du code source : à la découverte des topoi de programme." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS024/document.

Full text
Abstract:
The development of large-scale open-source projects involves many distinct developers who contribute to the creation of large code repositories. As an example, the July 2017 release of the Linux kernel (version 4.12), which represents nearly 20 MLOC (million lines of code), required the effort of 329 developers, marking a growth of 1 MLOC over the previous release. These figures show that, when a new developer wishes to become a contributor, he or she faces the problem of understanding an enormous quantity of code, organized as an unclassified set of files and functions. Organizing code in a more abstract, more human-oriented way is an endeavour that has attracted the interest of the software engineering community. Unfortunately, there is no silver bullet or known tool that can provide concrete help in managing large code bases. We propose an effective approach to this problem by automatically extracting program topoi, that is, ordered lists of function names associated with an index of relevant words. How is the ranking done? Our approach, named FEAT, does not consider all functions as equal: some of them are regarded as gateways to understanding the high-level observable capabilities of a program. We call these special functions entry points, and the ranking criterion is based on the distance between the program's functions and the entry points. Our approach can be summarized in three main steps: 1) Preprocessing. The source code, with its comments, is analyzed to generate, for each code unit (a procedure in a procedural language or a method in an object-oriented one), a corresponding textual document. In addition, a graph representation of the caller-callee relation (the call graph) is also created at this step. 2) Clustering.
Code units are grouped by means of hierarchical agglomerative clustering (HAC). 3) Entry-point selection. Within the context of each cluster, code units are ranked, and those placed at higher positions constitute a program topos. The contribution of this thesis is threefold: 1) FEAT is a new, fully automated approach for extracting program topoi, based on clustering units directly from source code. To exploit HAC, we propose an original hybrid distance combining structural and semantic elements of the source code. HAC requires selecting one partition among all those produced throughout the clustering process. Our approach uses a hybrid criterion based on graph modularity and textual coherence to select the appropriate parameter automatically. 2) Groups of code units must be analyzed to extract program topoi. We define a set of structural elements obtained from the source code and use them to create an alternative representation of clusters of code units. Principal component analysis, which can handle multidimensional data, allows us to measure the distance between code units and the ideal entry point. This distance is the basis of the ranking of code units presented to end users. 3) We implemented FEAT as a general-purpose software-analysis platform and carried out an experimental study on an open collection of 600 software projects. During the evaluation, we analyzed FEAT from several angles: the clustering step, the effectiveness of topoi discovery, and the scalability of the approach.
During the development of long-lifespan software systems, specification documents can become outdated or can even disappear due to the turnover of software developers. Implementing new software releases or checking whether some user requirements are still valid thus becomes challenging. The only reliable development artifact in this context is source code, but understanding the source code of large projects is a time- and effort-consuming activity. This challenging problem can be addressed by extracting the high-level (observable) capabilities of software systems. By automatically mining the source code and the available source-level documentation, it becomes possible to provide significant help to software developers in their program-understanding task. This thesis proposes a new method and a tool, called FEAT (FEature As Topoi), to address this problem. Our approach automatically extracts program topoi from source code analysis by using a three-step process: First, FEAT creates a model of a software system capturing both structural and semantic elements of the source code, augmented with code-level comments; second, it creates groups of closely related functions through hierarchical agglomerative clustering; third, within the context of every cluster, functions are ranked and selected, according to some structural properties, in order to form program topoi. The contributions of the thesis are threefold: 1) The notion of program topoi is introduced and discussed from a theoretical standpoint with respect to other notions used in program understanding; 2) At the core of the clustering method used in FEAT, we propose a new hybrid distance combining both semantic and structural elements automatically extracted from source code and comments.
This distance is parametrized, and the impact of the parameter is thoroughly assessed through an in-depth experimental evaluation; 3) Our tool FEAT has been assessed in collaboration with Software Heritage (SH), a large-scale, ambitious initiative whose aim is to collect, preserve, and share all publicly available source code on earth. We performed a large experimental evaluation of FEAT on 600 open-source projects of SH, coming from various domains and amounting to more than 25 MLOC (million lines of code). Our results show that FEAT can handle projects of up to 4,000 functions and several hundred files, which opens the door to its large-scale adoption for program understanding.
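FEAT's clustering step, hierarchical agglomerative clustering over a hybrid semantic-plus-structural distance, can be sketched in miniature. The Jaccard-on-tokens mix with a call-graph term and the fixed target cluster count below are assumptions for illustration; FEAT's actual distance is parametrized differently and its partition is selected automatically via graph modularity and textual coherence.

```python
def hybrid_dist(u, v, calls, alpha=0.5):
    """Hybrid distance between two code units: a weighted mix of
    textual dissimilarity (Jaccard on their token sets) and structural
    dissimilarity (0 if one calls the other, else 1).
    """
    a, b = u['tokens'], v['tokens']
    jaccard = 1.0 - len(a & b) / len(a | b) if (a | b) else 0.0
    linked = ((u['name'], v['name']) in calls or
              (v['name'], u['name']) in calls)
    structural = 0.0 if linked else 1.0
    return alpha * jaccard + (1 - alpha) * structural

def hac(units, calls, n_clusters):
    """Average-linkage agglomerative clustering down to n_clusters."""
    clusters = [[u] for u in units]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Average pairwise distance between the two clusters.
                d = (sum(hybrid_dist(u, v, calls)
                         for u in clusters[i] for v in clusters[j])
                     / (len(clusters[i]) * len(clusters[j])))
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]   # merge the closest pair
        del clusters[j]
    return clusters

units = [
    {'name': 'parse_config', 'tokens': {'config', 'parse', 'file'}},
    {'name': 'load_config',  'tokens': {'config', 'load', 'file'}},
    {'name': 'draw_menu',    'tokens': {'menu', 'draw', 'screen'}},
    {'name': 'render_menu',  'tokens': {'menu', 'render', 'screen'}},
]
calls = {('parse_config', 'load_config'), ('draw_menu', 'render_menu')}
clusters = hac(units, calls, 2)
```

Within each resulting cluster, FEAT would then rank units by their distance to an ideal entry point to form the topos.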
APA, Harvard, Vancouver, ISO, and other styles
19

Lewis, Alice. "CASE STUDY FOR A LIGHTWEIGHT IMPACT ANALYSIS TOOL." Kent State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=kent1240179121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Alhindawi, Nouh Talal. "Supporting Source Code Comprehension During Software Evolution and Maintenance." Kent State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=kent1374790792.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Dvořák, Pavel. "Popis objektů v obraze." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218957.

Full text
Abstract:
This thesis deals with the description of segments identified in an image. First, the main segmentation methods are described, since segmentation is the process that precedes object description. The next chapter is devoted to methods for describing the identified regions, surveying algorithms used to characterize different features, with parts devoted to color, location, size, orientation, shape, and topology; the end of the chapter covers moments. The following chapters focus on designing suitable algorithms for segment description, on creating XML files according to the MPEG-7 standard, and on their implementation in RapidMiner. The last chapter describes the results of the implementation.
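The moment-based descriptors mentioned at the end of that chapter can be illustrated with raw image moments; the centroid below is the standard first-order example (the thesis may use further normalized or central moments not shown here).

```python
def raw_moment(img, p, q):
    """Raw image moment m_pq = sum over pixels of x^p * y^q * I(x, y)."""
    return sum(x ** p * y ** q * val
               for y, row in enumerate(img)
               for x, val in enumerate(row))

def centroid(img):
    """Region centroid from the zeroth- and first-order moments."""
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
```

For a binary segment mask, `m00` is the region area and the centroid gives its location descriptor.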
APA, Harvard, Vancouver, ISO, and other styles
22

Al-Msie'deen, Ra'Fat. "Construction de lignes de produits logiciels par rétro-ingénierie de modèles de caractéristiques à partir de variantes de logiciels : l'approche REVPLINE." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20024/document.

Full text
Abstract:
The idea of the Software Product Line (SPL) approach is to manage a family of similar software products in a reuse-based way. Reuse avoids repetition, which helps reduce development and maintenance effort, shorten time-to-market, and improve the overall quality of software. To migrate from existing software product variants to an SPL, one has to understand how the variants are similar and how they differ from one another. Companies often develop a set of software variants that share some features and differ in others to meet specific requirements. To exploit existing software variants and build a software product line, a feature model must be built as a first step: it is necessary to mine mandatory and optional features, associate each feature with a name, and then organize the mined and documented features into a feature model. In this context, the thesis makes three contributions. The first is a new approach to mine features from the object-oriented source code of a set of software variants, based on Formal Concept Analysis, code dependencies, and Latent Semantic Indexing; its novelty is that it exploits commonality and variability across software variants at the source-code level to run Information Retrieval methods efficiently. The second contribution documents the mined feature implementations by giving them names and descriptions, based on Formal Concept Analysis, Latent Semantic Indexing, and Relational Concept Analysis; it exploits commonality and variability at the level of feature implementations and use-case diagrams.
In the third contribution, we propose an automatic approach to organize the mined, documented features into a feature model. Features are organized in a tree that highlights mandatory features, optional features, and feature groups (AND, OR, XOR groups); the feature model is completed with requirement and mutual-exclusion constraints. We rely on Formal Concept Analysis and software configurations to mine a unique and consistent feature model. To validate the approach, we applied it to three case studies: ArgoUML-SPL, Health complaint-SPL, and Mobile media software product variants. The results of this evaluation validate the relevance and performance of our proposal, as most of the features and their constraints were correctly identified.
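The Formal Concept Analysis step underlying this kind of feature-model mining can be illustrated with a toy product-by-feature context. The `formal_concepts` function and the example products below are hypothetical sketches, not REVPLINE's implementation; note how the top concept's intent collects the features shared by all variants, i.e. the mandatory ones.

```python
from itertools import combinations

def formal_concepts(context):
    """Brute-force enumeration of the formal concepts (extent, intent) of a
    small product-by-feature context: for every object subset, compute the
    shared features and close back to the full extent."""
    objects = sorted(context)
    all_feats = set().union(*context.values())
    def intent(objs):
        feats = set(all_feats)
        for o in objs:
            feats &= context[o]
        return feats
    def extent(feats):
        return {o for o in objects if feats <= context[o]}
    concepts = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            feats = intent(objs)
            concepts.add((frozenset(extent(feats)), frozenset(feats)))
    return concepts
```

For three variants sharing a `base` feature, the lattice contains the concept ({P1, P2, P3}, {base}), which is exactly where a mandatory feature shows up.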
APA, Harvard, Vancouver, ISO, and other styles
23

Zhang, Zai Yong. "Simultaneous fault diagnosis of automotive engine ignition systems using pairwise coupled relevance vector machine, extracted pattern features and decision threshold optimization." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2493967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Catling, Aaron. "The Ending Needs Work AKA the Good, the Bad and the Ugly of being an independent filmmaker in Australia." Thesis, Queensland University of Technology, 2005. https://eprints.qut.edu.au/16091/1/Aaron_Catling_Thesis.pdf.

Full text
Abstract:
Over the period of candidature, the aim was to write and direct a feature film to completion, and to undertake a thorough reflective phase involving the analysis of each aspect of the two key components, writing and directing. Through this form of creative practice, utilising state-of-the-art digital filmmaking techniques, it is hoped that an addition to knowledge will be achieved.
APA, Harvard, Vancouver, ISO, and other styles
25

Catling, Aaron. "The Ending Needs Work AKA the Good, the Bad and the Ugly of being an independent filmmaker in Australia." Queensland University of Technology, 2005. http://eprints.qut.edu.au/16091/.

Full text
Abstract:
Over the period of candidature, the aim was to write and direct a feature film to completion, and to undertake a thorough reflective phase involving the analysis of each aspect of the two key components, writing and directing. Through this form of creative practice, utilising state-of-the-art digital filmmaking techniques, it is hoped that an addition to knowledge will be achieved.
APA, Harvard, Vancouver, ISO, and other styles
26

Schunnesson, Håkan. "Drill process monitoring in percussive drilling for location of structural features, lithological boundaries and rock properties, and for drill productivity evaluation." Doctoral thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 1997. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-26697.

Full text
Abstract:
This thesis deals with the application of percussive drill monitoring in the mining and underground construction industries. The technique has been used to provide information on different ground properties and conditions and for drill productivity evaluation. Five different test sites have been used: the OSCAR area in the Kiirunavaara magnetite mine in Kiruna, the Viscaria copper mine in Kiruna, the Zinkgruvan mine in south-central Sweden, the Glödberget tunneling site in Västerbotten county and the Hallandsåsen tunneling site in southern Sweden. A methodology has been suggested and tested for treatment of raw data in order to extract rock dependent parameter variations from variations generated by the drill system itself and other external influences. Prediction of rock hardness and fracturing can be done without initial calibration, providing a good foundation for interpretation by site personnel. The mining applications show that drill monitoring has a very high potential for ore boundary delineation and also for classification of existing rock types. In tunneling applications drill monitoring demonstrates a good capability of foreseeing rock conditions ahead of the tunnel face. Other benefits are the speed of the method, its practicality and the fact that it requires no additional equipment, time or access to the production front. The potential for detailed drill productivity evaluation by drill monitoring has been demonstrated. Detailed information of the time consumption for each activity in the drilling cycle can be presented as well as the distribution of the total production. With this information in hand an indication can be given as to how the overall drilling capacity can be increased. The impact on production of automation, new developments and organization can also be predicted with high accuracy.
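The abstract does not specify how rock-dependent variation is separated from variation generated by the drill system itself. As a hedged illustration only, a minimal least-squares detrending of one monitored parameter against hole depth might look like this; the linear-trend assumption is ours, not the thesis's methodology.

```python
def detrend(depths, values):
    """Ordinary least-squares removal of a linear hole-depth trend, leaving
    residuals intended to reflect rock-dependent variation rather than
    systematic drill-system effects."""
    n = len(depths)
    mx = sum(depths) / n
    my = sum(values) / n
    sxx = sum((x - mx) ** 2 for x in depths)
    slope = sum((x - mx) * (y - my) for x, y in zip(depths, values)) / sxx
    intercept = my - slope * mx
    return [y - (slope * x + intercept) for x, y in zip(depths, values)]
```

A purely linear penetration-rate drift detrends to zero, while local deviations (e.g. a fracture zone) survive as residuals for interpretation.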
APA, Harvard, Vancouver, ISO, and other styles
27

Lomax, Jamie R., John P. Wisniewski, Carol A. Grady, Michael W. McElwain, Jun Hashimoto, Tomoyuki Kudo, Nobuhiko Kusakabe, et al. "CONSTRAINING THE MOVEMENT OF THE SPIRAL FEATURES AND THE LOCATIONS OF PLANETARY BODIES WITHIN THE AB AUR SYSTEM." IOP PUBLISHING LTD, 2016. http://hdl.handle.net/10150/622048.

Full text
Abstract:
We present a new analysis of multi-epoch, H-band, scattered light images of the AB Aur system. We use a Monte Carlo radiative transfer code to simultaneously model the system's spectral energy distribution (SED) and H-band polarized intensity (PI) imagery. We find that a disk-dominated model, as opposed to one that is envelope-dominated, can plausibly reproduce AB Aur's SED and near-IR imagery. This is consistent with previous modeling attempts presented in the literature and supports the idea that at least a subset of AB Aur's spirals originate within the disk. In light of this, we also analyzed the movement of spiral structures in multi-epoch H-band total light and PI imagery of the disk. We detect no significant rotation or change in spatial location of the spiral structures in these data, which span a 5.8-year baseline. If such structures are caused by disk-planet interactions, the lack of observed rotation constrains the location of the orbit of planetary perturbers to be >47 au.
APA, Harvard, Vancouver, ISO, and other styles
28

Stoppel, Christian Michael [Verfasser], and Mircea Ariel [Akademischer Betreuer] Schönfeld. "Neural mechanisms of attentional selection in vision : locations, features, and objects / Christian Michael Stoppel. Betreuer: Mircea Ariel Schönfeld." Magdeburg : Universitätsbibliothek, 2011. http://d-nb.info/105144554X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Ballarin, Naya Manuel. "Definition of Descriptive and Diagnostic Measurements for Model Fragment Retrieval." Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/171604.

Full text
Abstract:
Thesis by compendium
[EN] Nowadays, software exists in almost everything. Companies often develop and maintain a collection of custom-tailored software systems that share some common features but also support customer-specific ones. As the number of features and the number of product variants grows, software maintenance becomes more and more complex. To keep pace with this situation, the Model-Based Software Engineering community is addressing a key activity: Model Fragment Location (MFL), which aims at identifying model elements that are relevant to a requirement, feature, or bug. Many MFL approaches have been introduced in recent years to identify the model elements that correspond to a specific functionality. However, there is a lack of detail when measurements about the search space (the models) and about the solution to be found (the model fragment) are reported. The goal of this thesis is to provide the MFL research community with insights on how to improve the reporting of location problems. We propose five measurements (size, volume, density, multiplicity, and dispersion) to report location problems during MFL. These novel measurements support researchers in creating new MFL approaches and in improving existing ones. Using two real, industrial case studies, we emphasize the importance of these measurements for comparing results of different approaches precisely. The results of the research have been published in forums, conferences, and journals specialized in the topics and context of the research. This thesis is presented as a compendium of articles according to the regulations of the Universitat Politècnica de València. This document introduces the topics, context, and objectives of the research, presents the academic publications that resulted from the work, and discusses the outcomes of the investigation.
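The five measurements are only named in this abstract. The sketch below shows one plausible, assumed reading of three of them for a model fragment, with the model given as an element set plus an adjacency relation; the actual definitions in the thesis may differ.

```python
def fragment_measurements(model_elements, fragment, adjacency):
    """Illustrative (assumed) measurements: fragment size, density relative
    to the whole model, and dispersion counted as the number of connected
    components the fragment forms in the model graph."""
    size = len(fragment)
    density = size / len(model_elements)
    seen, components = set(), 0
    for start in fragment:
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:
            e = stack.pop()
            if e in seen:
                continue
            seen.add(e)
            stack.extend(n for n in adjacency.get(e, ()) if n in fragment)
    return {"size": size, "density": density, "dispersion": components}
```

A fragment scattered across unconnected regions of the model would report a high dispersion, which is the kind of property these measurements are meant to expose when comparing location problems.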
Ballarin Naya, M. (2021). Definition of Descriptive and Diagnostic Measurements for Model Fragment Retrieval [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/171604
APA, Harvard, Vancouver, ISO, and other styles
30

McLoughlin, Aaron. "Gravy." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/50947/1/Aaron_McLoughlin_Thesis.pdf.

Full text
Abstract:
To many aspiring writer/directors of feature film, breaking into the industry may seem an insurmountable obstacle. In contemplating my own attempt to venture into the world of feature filmmaking, I reasoned that a formulated strategy could be of benefit. As the film industry is largely concerned with economics, I decided that writing a relatively low-cost feature film might improve my chances of being allowed directorship by a credible producer. As a result, I decided to write a modest feature film set in a single interior shooting location in an attempt to minimise production costs, thereby also reducing the perceived risk of hiring the writer as debut director. As a practice-led researcher, my primary focus in this research is to create a screenplay in response to my greater directorial aspirations and to explore how the strategic decision to write a single-location film impacts not only the craft of cinematic writing but also the creative process itself, as it pertains to the project at hand. The result is a comedy script titled Gravy, which is set in a single apartment and strives to maintain a fast comedic pace whilst employing a range of character and plot devices, in conjunction with creative decisions that help to sustain cinematic interest within the confines of the apartment. In addition to the screenplay artifact, the exegesis includes a section that reflects on the writing process in the form of personal accounts, decisions, problems, and solutions, as well as an examination of other authors' works.
APA, Harvard, Vancouver, ISO, and other styles
31

Ghabach, Eddy. "Prise en charge du « copie et appropriation » dans les lignes de produits logiciels." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4056/document.

Full text
Abstract:
A Software Product Line (SPL) manages the commonalities and variability of a family of related software products. This approach is characterized by systematic reuse, which reduces development cost and time-to-market and increases software quality. However, building an SPL requires an expensive initial investment. Organizations that cannot afford such an up-front investment therefore tend to develop a family of software products using simple and intuitive practices. Clone-and-own (C&O) is an approach widely adopted by software developers to construct new product variants from existing ones. However, the efficiency of this practice degrades in proportion to the growth of the product family concerned, which becomes difficult to manage. In this dissertation, we propose a hybrid approach that utilizes both SPL and C&O to develop and evolve a family of software products. An automatic mechanism for identifying the correspondences between the features of the products and the software artifacts allows the migration of product variants developed with C&O into an SPL. The originality of this work is then to help the derivation of new products by proposing different scenarios of C&O operations to be performed to derive a new product from the required features. The developer can then narrow these possibilities by expressing her preferences (e.g. products, artifacts) and using the proposed cost estimations on the operations. We realized our approach by developing SUCCEED, a framework for SUpporting Clone-and-own with Cost-EstimatEd Derivation. We validate our work on a case study of families of web portals.
APA, Harvard, Vancouver, ISO, and other styles
32

Drochytka, Jan. "Vliv specifické lokality na cenu rezidenčního objektu na Brněnsku." Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-413826.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Hung, Kun-Ting, and 洪昆鼎. "A Real-time Face and Feature Location System." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/83697093429475909812.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Computer Science
94
Face detection and feature location are preprocessing steps for much facial-analysis research, which often constrains the face size and the environment in which faces are detected. Our proposed method is able to detect faces in complex environments and lighting conditions, even when the image resolution is low. For face detection, we use rectangle features and the integral image instead of traditional pixel-based methods. We use the AdaBoost algorithm to select rectangle features for different faces; the algorithm automatically updates weights to classify facial features. To retain face-like regions efficiently, we apply a cascade in which background regions are discarded quickly and separated from face-like regions. For feature detection, we take the face region identified in the previous steps, locate the facial organs via their relative positions, and then obtain exact feature points based on color and edge information. To reduce the identification error rate, we exploit a property of moving images: consecutive frames generally change only slightly within 5 fps, so a divergent frame can be discarded and replaced using the previous five frames.
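The integral image that replaces pixel-based methods can be sketched as follows: once the summed-area table is built, any rectangle sum, and hence any Haar-like rectangle feature, costs only four lookups.

```python
def integral_image(img):
    """Summed-area table with a zero border: ii[y][x] holds the sum of
    img over all rows < y and columns < x."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum over any rectangle in O(1): the corner trick behind fast
    evaluation of Haar-like rectangle features."""
    b, r = top + height, left + width
    return ii[b][r] - ii[top][r] - ii[b][left] + ii[top][left]
```

A two-rectangle Haar feature is then simply `rect_sum` of one half minus `rect_sum` of the adjacent half, which is what AdaBoost selects over at training time.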
APA, Harvard, Vancouver, ISO, and other styles
34

Huang, Yi-Yun, and 黃怡昀. "A study of point location of Fingerprint feature." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/d3zb73.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Applied Mathematics
107
The unique nature of fingerprints can be used to establish identity; access control and criminal investigation are two major fields that take advantage of it. In fingerprint analysis, fingerprints are compared through the positions of singular points to confirm whether two prints are the same, so the positioning accuracy of singular points is very important. The definitions of singular-point position given by Henry and the FBI are very hard to implement in computer programs: they must rely on human experts to locate the points, and such positioning is more subjective than a computational definition. Liu used a deep neural network to propose a computational definition of singular points; the proposed method defines singular points at the pixel level and locates them efficiently within a very small circle. In this thesis, experiments verify the correctness of the algorithm for defining singular-point positions according to the method of "Fingerprint Analysis and Singular Point Definition by Deep Neural Network" published by Liu in 2018, and determine whether it can locate the actual positions of singular points. The results suggest that the algorithm is prone to positioning offsets on fingerprints of the loop type. In addition, an idea for correcting the positioning offset is also proposed in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
35

Rohatgi, Abhishek. "An approach towards feature location based on impact analysis." Thesis, 2008. http://spectrum.library.concordia.ca/975920/1/MR45500.pdf.

Full text
Abstract:
System evolution depends greatly on the ability of a maintainer to locate the parts of the source code that implement specific features. Quite a number of feature location techniques have been proposed, but they suffer from limitations: they either require exercising several features of the system, or rely heavily on domain experts to guide the feature location process. In this thesis, we present a novel approach for feature location that combines static and dynamic analysis techniques. An execution trace is generated by exercising the feature under study (dynamic analysis), and a component dependency graph (static analysis) is used to rank the components invoked in the trace according to their relevance to the feature. Our ranking technique is based on the impact of a component modification on the rest of the system: we hypothesize that the smaller the impact of modifying a component, the more likely it is that the component is specific to the feature. The proposed approach is automatic to a large extent, relieving the user from decisions that would otherwise require extensive knowledge of the system. We present a case study involving features from two software systems to evaluate the applicability and effectiveness of our approach.
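The impact-based ranking idea can be sketched with a toy dependency graph. The graph encoding and component names below are illustrative assumptions, not the thesis's tooling: impact is taken as the transitive set of components that depend on the modified one, and traced components are sorted smallest-impact first.

```python
def impact_set(component, reverse_deps):
    """Transitive closure over the who-depends-on-me relation:
    everything that could break if `component` were modified."""
    seen, stack = set(), [component]
    while stack:
        c = stack.pop()
        for d in reverse_deps.get(c, ()):
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

def rank_by_impact(trace_components, reverse_deps):
    """Smaller impact first: components whose modification affects little
    of the system are hypothesized to be more feature-specific."""
    return sorted(trace_components,
                  key=lambda c: len(impact_set(c, reverse_deps)))
```

A widely-depended-on utility sinks to the bottom of the ranking, while a leaf component exercised by the trace rises to the top as a feature candidate.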
APA, Harvard, Vancouver, ISO, and other styles
36

Hoseini, Salman. "Software Feature Location in Practice: Debugging Aircraft Simulation Systems." Thesis, 2013. http://spectrum.library.concordia.ca/978135/1/Hoseini_MASc_S2014.pdf.

Full text
Abstract:
In this thesis, we report on a study conducted at CAE, one of the largest civil aircraft simulation companies in the world, in which we developed a feature location approach to help software engineers debug simulation scenarios. A simulation scenario consists of a set of software components configured in a certain way. A simulation fails when it does not behave as intended, which is typically a sign of a configuration problem. To detect configuration errors, we propose FELODE (Feature Location for Debugging), an approach that uses a single trace combined with user queries. When applied to CAE systems, FELODE achieves on average a precision of 50% and a recall of up to 100%.
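The abstract does not detail how the single trace and the user query are combined. A minimal sketch, under our own assumptions about the matching (substring overlap between query terms and traced component identifiers), might look like this; FELODE's actual scoring may differ.

```python
def score_components(trace, query_terms):
    """Rank the distinct components of one execution trace by how many
    query terms their identifiers contain (assumed stand-in for the
    query-matching step; order of ties preserved from the trace)."""
    terms = {t.lower() for t in query_terms}
    def score(name):
        words = name.lower().replace("_", " ").split()
        return sum(1 for t in terms if any(t in w for w in words))
    return sorted(dict.fromkeys(trace), key=score, reverse=True)
```

Components matching more of the engineer's query float to the top, narrowing where in the scenario's configuration to look first.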
APA, Harvard, Vancouver, ISO, and other styles
37

Lee, Jhe-An, and 李哲安. "License Plate Location based on Multi-Resolution Fractal Feature Vector." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/85869734328053350490.

Full text
Abstract:
Master's thesis
Ming Chuan University
Master's Program, Department of Computer Science and Information Engineering
103
Due to the rapid growth of developing countries, the demand for vehicles has increased remarkably. To deal with the complexity of transportation management, license plate recognition has become an important topic involving much research, particularly in the field of computer vision. In the license plate recognition process, the most critical and difficult operation is locating the position of the license plate: besides interference from external environmental factors, differences in license plate design across countries and regions add to the challenge. Many proposed methods impose preset restrictions, and when the external environment is relatively poor, some recognition systems cannot cope effectively. In this study, an efficient preprocessing algorithm is introduced to reduce external interference with license plate image quality, while candidate license plate positions are generated. Thereafter, multi-resolution analysis is applied and fractal feature vectors are extracted to train a classifier, which allows the system to adapt to varied conditions. The present study shows that, under different angles, formats, and lighting conditions, the proposed method is able to identify license plate locations almost without error and to reject non-license-plate images nearly perfectly.
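A box-counting sketch of a multi-resolution fractal feature vector, assuming a candidate region is given as a set of foreground pixel coordinates; the thesis's actual fractal feature definition may differ, and the slope of log-count against log-scale estimates the fractal dimension the classifier can exploit.

```python
import math

def box_count(points, box_size):
    """Number of boxes of side `box_size` needed to cover the point set."""
    return len({(x // box_size, y // box_size) for x, y in points})

def fractal_features(points, sizes=(1, 2, 4, 8)):
    """Multi-resolution feature vector: log box counts over several scales,
    one component per resolution."""
    return [math.log(box_count(points, s)) for s in sizes]
```

Text-rich plate regions and smooth background regions scale differently across resolutions, which is what makes such a vector discriminative for a trained classifier.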
APA, Harvard, Vancouver, ISO, and other styles
38

WANG, PAO HSIANG, and 王保翔. "Location-based Feature Extraction and Associative Classification for Quality Prediction." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/x5kutd.

Full text
Abstract:
Master's
Chung Yuan Christian University
Graduate Institute of Information and Computer Engineering
106
This thesis aims at finding spatial relationships between target objects and surrounding objects of various types. We consider hotels as target objects. The types of objects surrounding the hotels and their quantities may have an impact on hotel ratings. We propose an approach to extract environmental features based on different distances: within a certain distance, the number of objects surrounding each hotel is computed to form its environmental features. After that, for each type of surrounding object, only a portion of the features are chosen to discover association rules with respect to hotel rating. A classifier can be built after these rules are sorted and pruned. The experimental data are the ratings of legal hotels announced by the Tourism Bureau in Taiwan and objects of 50 types on Google Maps together with their distance information. By varying the distance, or the degree of correlation with hotel ratings, we extract features and then generate association rules and the corresponding classifier; in this way, we can observe the influence of distance or correlation degree on classification accuracy. The best set of features achieves an average precision of 93%. In order to verify whether the discovered association rules are in line with people's general perception of the factors affecting hotel quality, we label the association rules in the classifier by manual inspection. The results produced by the best set of features show that 85% of the association rules in the classifier are considered reasonable.
APA, Harvard, Vancouver, ISO, and other styles
39

Chou, Yi-chin, and 周逸秦. "License plate location based on Haar-like features." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/07582850215571991891.

Full text
Abstract:
Master's
National Pingtung University of Education
Department of Computer Science
98
Traditionally, license plate location and normalization are very important stages in license plate recognition for automated transport systems. In the literature, the processing of an automatic license plate recognition system includes two parts: (1) extracting the license plate region from an image, and (2) recognizing the characters in the license plate region. In this process, the shooting conditions and environment always affect the result of license plate recognition. We therefore devote our effort to improving the detection rate of license plate location in order to improve the recognition rate of the overall system. In this research, we introduce digital image processing techniques and rapid object detection based on Haar-like features. First, we train on character samples to generate feature files, and then detect the license plate region using these features in an image. In the experiments, we use two approaches to test license plate location on images at 0°, 10° and 15°. Finally, the experimental results show that the successful detection rate of license plate location is very high.
APA, Harvard, Vancouver, ISO, and other styles
40

Al-Dahoud, A., and Hassan Ugail. "A method for location based search for enhancing facial feature design." 2016. http://hdl.handle.net/10454/9482.

Full text
Abstract:
No
In this paper we present a new method for accurate real-time facial feature detection. Our method is based on local feature detection and enhancement. Previous work in this area, such as that of Viola and Jones, requires looking at the face as a whole. Consequently, such approaches have increased chances of reporting negative hits. Furthermore, such algorithms require greater processing power and hence are not especially attractive for real-time applications. Through our recent work, we have devised a method to identify the face in real-time images and divide it into regions of interest (ROI). Firstly, based on a face detection algorithm, we identify the face and divide it into four main regions. Then, we undertake a local search within those ROI, looking for specific facial features. This enables us to locate the desired facial features more efficiently and accurately. We have tested our approach using the Extended Cohn-Kanade facial expression (CK+) database. The results show that applying the ROI yields a relatively low false positive rate as well as a marked gain in overall computational efficiency. In particular, we show that our method has a 4-fold increase in accuracy when compared to existing algorithms for facial feature detection.
APA, Harvard, Vancouver, ISO, and other styles
41

ZHENG, KAI-YUAN, and 鄭開元. "Feature Selection Method Based On Term Frequency, Location And Category Relations." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/n7n4j2.

Full text
Abstract:
Master's
Ming Chuan University
Master's Program in Information Management
106
With the advent of the age of big data, how to analyze and mine data has become an important topic. Text mining is an important part of data analysis that focuses on text data; it can help people obtain the information in text more quickly and effectively. Text classification, an important branch of text mining, is the process of assigning texts to specific categories using an algorithm within a given classification system. It is widely used in the rapid classification of press and publication products, web page classification, personalized news recommendation, spam filtering, user analysis, and so on. Chinese text classification is generally divided into several steps: text preprocessing, feature selection and construction of a word vector matrix, classifier construction and testing, and classifier performance evaluation. Faced with the feature word set produced by text preprocessing, we often need feature selection to reduce the dimensionality of the feature word set and avoid problems such as inefficiency and the 'curse of dimensionality'. A good feature selection method directly affects subsequent classification performance, so the improvement of existing feature selection methods deserves further study and discussion. This paper therefore addresses the shortcomings of feature selection by introducing the importance of term location, inter-class and intra-class term frequency relations, the degree of inter-class concentration, and the degree of intra-class dispersion. The Chi-square and cross-entropy methods are improved, and a Chinese text classification algorithm based on multiple factors is proposed. The improved method outperforms other methods when comparing per-category F1 values, and is more stable than other methods when classifying unbalanced documents.
Whether on a balanced or an unbalanced document dataset, the feature selection method proposed in this study shows a significant improvement over traditional and other methods.
APA, Harvard, Vancouver, ISO, and other styles
42

Bo-Heng Chen and 陳泊亨. "Temporal Recommendation with Event Features in Location-aware Databases." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/ey8d8s.

Full text
Abstract:
Doctorate
National Cheng Kung University
Ph.D. Program in Multimedia System and Intelligent Computing
107
With advances in mobile devices and various sensors, many events can be recorded as spatio-temporal data, such as crime data, traffic data, and check-in data. Spatio-temporal mining has become an emerging research field that attracts a lot of attention. How to extract and analyze location-aware databases for mining patterns and recommending friends or points-of-interest has become an attractive and challenging issue over the past few years. In this dissertation, we develop a series of novel and effective data mining frameworks for mining spatio-temporal chaining patterns, actively inferring acquaintance, and recommending shielding check-ins. Mining Spatio-Temporal Chaining Patterns with Non-identity Events in Location-aware Databases: Spatio-temporal pattern mining attempts to discover unknown, potentially interesting and useful event sequences in which events occur within a specific time interval and spatial region. In the literature, mining of spatio-temporal sequential patterns generally relies on the existence of identity information for the accumulation of pattern appearances. With the recent trend toward open data, which are mostly released without specific identity information due to privacy concerns, previous work encounters the challenging difficulty of properly incorporating such non-identity data into the mining process. In this work, we propose a practical approach, called Top-K Spatio-Temporal Chaining Patterns Discovery (abbreviated as TKSTP), to discover frequent spatio-temporal chaining patterns. The TKSTP framework is applied to two real criminal datasets which are released without identity information. Our experimental studies show that the proposed framework effectively discovers high-quality spatio-temporal chaining patterns. In addition, case studies of crime pattern analysis also demonstrate its applicability and reveal several interesting hidden phenomena.
Active Learning-based Approach for Acquaintance Inference with Check-in Event Features in Location-aware Databases: With the popularity of mobile devices and various sensors, the local geographical activities of human beings can be accessed more easily than ever. Yet due to privacy concerns, it is difficult to acquire the social connections among people possessed by service providers, which can benefit applications such as identifying terrorists and recommender systems. In this work, we propose the Location-aware Acquaintance Inference (LAI) problem, which aims at finding the acquaintances of any given query individual based solely on people's local geographical activities, such as geo-tagged posts on Instagram and meeting events on Meetup, within a targeted geo-spatial area. We propose to leverage the concept of active learning to tackle the LAI problem. We develop a novel semi-supervised model, Active Learning-enhanced Random Walk (ARW), which imposes the idea of active learning onto the technique of Random Walk with Restart (RWR) in an activity graph. Specifically, we devise a series of Candidate Selection strategies to select unlabeled individuals for labeling, and perform different Graph Refinement mechanisms that reflect the labeling feedback to guide the RWR random surfer. Experiments conducted on Instagram and Meetup datasets exhibit promising performance compared with a set of state-of-the-art methods. With a series of empirical settings, ARW is demonstrated to derive satisfying acquaintance inference results in different real scenarios. Shielding Check-in Recommendation against Acquaintance Inference with Check-in Event Features in Location-aware Databases: Location-based social services such as Foursquare and Facebook Places allow users to perform check-ins at places and interact with each other geographically (e.g., check in together).
While existing studies have shown that an adversary can accurately infer social ties from check-in data, the traditional check-in mechanism cannot protect the acquaintance privacy of users. Therefore, we propose a novel shielding check-in system, whose goal is to guide users to check in at secure places. We accordingly propose a novel research problem, Check-in Shielding against Acquaintance Inference (CSAI), which aims at recommending a list of secure places when users intend to check in, so that the potential for an adversary to correctly identify the friends of users can be significantly reduced. We develop the Check-in Shielding Scheme (CSS) framework to solve the CSAI problem. CSS consists of two steps, namely estimating the social strength between users and generating a list of secure places. Experiments conducted on Foursquare and Gowalla check-in datasets show that CSS not only outperforms several competing methods under various scenario settings, but also preserves check-in distances and ensures the usability of the new check-in data in Point-of-Interest (POI) recommendation.
APA, Harvard, Vancouver, ISO, and other styles
43

Lin, Szu-Hung, and 林思宏. "The Role of Non-Spatial Features in Location Negative Priming." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/02060328088874431277.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Psychology
96
Using static displays, we investigated the role of non-spatial features in the location negative priming effect. Participants responded to the location of a pre-defined color while ignoring a distractor on screen. Identity was irrelevant to the task. A small set of stimuli was presented repetitively in Experiment 1 so that the activation of each stimulus identity was high and competition from the distractor was strong. Location positive and negative priming effects were observed. A large set of non-repeated stimuli was used in Experiment 2 to reduce the competition imposed by the distractor. Location negative priming was not found, although positive priming was still observed. Experiment 3 used a small set of repeated stimuli, but the target and the distractor colors were randomly defined in each trial while the stimulus colors in the probe trials were unrelated to those of the prime stimuli. Negative location effects were observed, but identity priming effects were not observed. The results showed that identity and location are processed independently. Strong competition based on the whole object including the task-irrelevant identity feature is important to activate the inhibitory mechanisms. Once instigated, inhibition operated selectively on the goal-relevant feature of location. Color, the task-defining feature for selection, influences how location is processed but is inessential in triggering the retrieval of location information.
APA, Harvard, Vancouver, ISO, and other styles
44

Gao, Di. "Identification and location derivation of grapevine features through point clouds." Thesis, 2014. http://hdl.handle.net/2440/99572.

Full text
Abstract:
An automatic pruning machine is desirable due to the limitations and drawbacks of current labor intensive grapevine pruning methods. Automation mitigates the issue of skilled worker shortages and reduces overall labor cost. To achieve autonomous grapevine pruning accurately and effectively, it is crucial to identify and locate some key features including post, trunk, cordon and cane in order to open/close the cutter and adjust the height of the cutter appropriately. In this thesis, a new method is proposed to automatically identify these features and derive their locations using point clouds. This method combines the advantages of cylinder extraction, density clustering and skeleton extraction for identification purposes. More importantly, it fills the gap of non-uniformed feature extraction in vineyards using point clouds. The results of applying this method to different data sets obtained from vineyards are presented and its effectiveness is demonstrated.
Thesis (M.Eng.Sc.) -- University of Adelaide, School of Mechanical Engineering, 2014.
APA, Harvard, Vancouver, ISO, and other styles
45

Yusuff, Adedayo Ademola. "Ultra fast fault feature extraction and diagnosis in power transmission lines." 2012. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1000498.

Full text
Abstract:
D. Tech. Electrical Engineering.
Discusses how to mitigate unnecessary and expensive damage to a power transmission grid. The purposes of this work are therefore: to identify the unique signatures of various types of short circuit faults on electric power transmission lines; to formulate mathematical techniques for the characterisation of faults on the electric power transmission grid; to evaluate algorithms that can classify various types of short circuit faults on electric power transmission lines; and to develop an ultra fast fault diagnosis system. In addition, this work makes contributions in the following areas: filtering of decaying DC offset in post-fault measurements, formulation of a feature extraction algorithm for all short circuit faults on electric transmission lines, evaluation of efficient classifiers and regression algorithms, and formulation of a fault diagnostic scheme for electric power transmission lines.
APA, Harvard, Vancouver, ISO, and other styles
46

Lin, Jau-You, and 林昭佑. "A Study of Face Replacement Based on Facial Feature Location and Skin Color Consistence." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/39351018123058923077.

Full text
Abstract:
Master's
National Chiao Tung University
Department of Electrical and Control Engineering
93
Face replacement systems play an important role in the entertainment industries. At present, however, most face replacement is assisted by hand and specialized tools, and there are not many papers in this field. In this thesis, we describe a new system for automatically replacing a face using image processing. In the system, we segment the facial region by skin color analysis and morphology. Feature detection and connectivity analysis are applied to find candidate facial features. We then combine these with mouth color analysis to obtain features of the mouth. The eyes are located by their geometric relation to the mouth and by a projection function. We obtain features of the chin from gradient and entropy information, and use the least squares method to construct the chin line. We can warp and locate the target face by feature matching. We extract skin color pixels in the face region and make the skin color of the target face similar to that of the original face by histogram matching. Image blending and smoothing are used to eliminate the seam. The experimental results show that this face replacement system performs well when both the original and target faces are front-facing and there are no large variations in illumination or skin tone.
APA, Harvard, Vancouver, ISO, and other styles
47

Heffernan, Kevin Michael. "Locating the height features : evidence from Japanese." Thesis, 2002. http://hdl.handle.net/2429/13744.

Full text
Abstract:
Both the alternations of the Japanese verb paradigm and the formation of the go'on and kan'on substrata of Sino-Japanese contain rules that refer to high vowels and velar consonants. In the verb paradigm, there is a rule that deletes velar stops before the high front vowel in certain environments. Thus we have kaku 'write' (non-past), but kaita 'wrote'. The Middle Chinese velar nasal coda was borrowed as a nasalized high vowel in the go'on and kan'on substrata of Sino-Japanese. The kan'on reading of a Middle Chinese word such as təwŋ 'winter' was toŭ at the time of borrowing. The objective of this thesis is to argue that this relationship between high vowels and velar consonants is a result of their featural makeup, namely that high vowels contain a dorsal node. The argument for the hypothesis is presented in the form of two constraint-based analyses, the first of which concerns the verb paradigm. Previous analyses of modern Japanese have all assumed that the i in forms such as kaita 'wrote' is not present in the underlying form. I argue that, based on the historic and modern-day morphological data, there is insufficient evidence to support such an assumption about the underlying representation. As such, I develop two analyses, one with i in the underlying representation, and another where it is absent. It will be shown that in both cases the results are the same: a constraint-based account is able to derive the correct results only if we assume that the high front vowel contains a dorsal node. The second half of the argument is a constraint-based analysis of the way Late Middle Chinese codas were borrowed during the formation of the kan'on substratum of Sino-Japanese. Again it will be shown that a constraint-based analysis is able to derive the correct results if we assume that the high front vowel contains a dorsal node.
APA, Harvard, Vancouver, ISO, and other styles
48

Lam, Yin. "Comparing Naïve Bayes Classifiers with Support Vector Machines for Predicting Protein Subcellular Location Using Text Features." Thesis, 2010. http://hdl.handle.net/1974/5920.

Full text
Abstract:
Proteins play many roles in the body, and the task of understanding how proteins function is very challenging. Determining a protein’s location within the cell (also referred to as the subcellular location) helps shed light on the function of that protein. Protein subcellular location can be inferred through experimental methods or predicted using computational systems. In particular, we focus on two existing computational systems, namely EpiLoc and HomoLoc, that use features derived from text (abstracts of technical papers), and apply a support vector machine (SVM) classifier to classify proteins into their respective locations. Both EpiLoc and HomoLoc’s prediction accuracy is comparable to that of state-of-the-art protein location prediction systems. However, in addition to accuracy, other factors such as training efficiency must be considered in evaluating the quality of a location prediction system. In this thesis, we replace the SVM classifier in EpiLoc and HomoLoc, by a naïve Bayes classifier and by a novel classifier which we call the Mean Weight Text classifier. The Mean Weight Text classifier and the naïve Bayes classifier are simple to implement and execute efficiently. In addition, naïve Bayes classifiers have been shown effective in the context of protein location prediction and are considered preferable to SVM due to clarity in explaining the process used to derive the results. Evaluating the performance of these classifiers on existing data sets, we find that SVM classifiers have a slightly higher accuracy than naïve Bayes and Mean Weight Text classifiers. This slight advantage is offset by the simplicity and efficiency offered by naïve Bayes and Mean Weight Text classifiers. Moreover, we find that the Mean Weight Text classifier has a slightly higher accuracy than the naïve Bayes classifier.
Thesis (Master, Computing) -- Queen's University, 2010-07-06 11:06:47.613
APA, Harvard, Vancouver, ISO, and other styles
49

Alharbi, Basma Mohammed. "Latent Feature Models for Uncovering Human Mobility Patterns from Anonymized User Location Traces with Metadata." Diss., 2017. http://hdl.handle.net/10754/623122.

Full text
Abstract:
In the mobile era, data capturing individuals' locations have become unprecedentedly available. Data from Location-Based Social Networks is one example of large-scale user-location data. Such data provide a valuable source for understanding patterns governing human mobility, and thus enable a wide range of research. However, mining and utilizing raw user-location data is a challenging task. This is mainly due to the sparsity of data (at the user level), the imbalance of data, with power-law distributions of user and location check-in degrees (at the global level), and more importantly the lack of a uniform low-dimensional feature space describing users. Three latent feature models are proposed in this dissertation. Each proposed model takes as an input a collection of user-location check-ins, and outputs a new representation space for users and locations respectively. To avoid invading users' privacy, the proposed models are designed to learn from anonymized location data where only the IDs of locations, not their geophysical positioning or category, are utilized. To enrich the inferred mobility patterns, the proposed models incorporate metadata, often associated with user-location data, into the inference process. In this dissertation, two types of metadata are utilized to enrich the inferred patterns: timestamps and social ties. Time adds context to the inferred patterns, while social ties amplify incomplete user-location check-ins. The first proposed model incorporates timestamps by learning from collections of users' locations sharing the same discretized time. The second proposed model also incorporates time into the learning model, yet takes a further step by considering time at different scales (hour of a day, day of a week, month, and so on). This change in modeling time allows for capturing meaningful patterns over different time scales.
The last proposed model incorporates social ties into the learning process to compensate for inactive users who contribute a large volume of incomplete user-location check-ins. To assess the quality of the new representation spaces for each model, evaluation is done using an external application, social link prediction, in addition to case studies and analysis of inferred patterns. Each proposed model is compared to baseline models, where results show significant improvements.
APA, Harvard, Vancouver, ISO, and other styles
50

Evans, Hsiao-Chueh. "The flexibility of attentional control in selecting features and locations." 2010. https://scholarworks.umass.edu/dissertations/AAI3397698.

Full text
Abstract:
The visual processing of a stimulus is facilitated by attention when it is at an attended location compared to an unattended location. However, whether attentional selection operates on the basis of visual features (e.g., color) independently of spatial locations is less clear. Six experiments were designed to examine how color information as well as location information affected attentional selection. In Experiment 1, the color of the targets and the spatial distance between them were both manipulated. Stimuli were found to be grouped based on color similarity. Additionally, the evidence suggested direct selection on the basis of color groups, rather than selection that was mediated by location. By varying the probabilities of target location and color, Experiments 2, 3 and 4 demonstrated that the use of color in perceptual grouping and in biasing the priority of selection is not automatic, but is modulated by task demands. Experiments 5 and 6 further investigated the relationship between using color and using location as the selection basis under exogenous and endogenous orienting. The results suggest that the precise nature of the interaction between color and location varies according to the mode of attentional control. Collectively, these experiments contribute to an understanding of how different types of information are used in selection and suggest a greater degree of flexibility of attentional control than previously expected. The flexibility is likely to be determined by a number of factors, including task demands and the nature of attentional control.
APA, Harvard, Vancouver, ISO, and other styles