Dissertations / Theses on the topic 'Attribute estimation'

Consult the top 37 dissertations / theses for your research on the topic 'Attribute estimation.'


You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Azzeh, Mohammad Y. A. "Analogy-based software project effort estimation : contributions to projects similarity measurement, attribute selection and attribute weighting algorithms for analogy-based effort estimation." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4442.

Full text
Abstract:
Software effort estimation by analogy is a viable alternative to other estimation techniques, and in many cases researchers have found that it outperforms other methods in terms of accuracy and practitioners' acceptance. However, the overall performance of analogy-based estimation depends on two major factors: the similarity measure, and attribute selection and weighting. Current similarity measures such as nearest-neighbour techniques have been criticized for inadequacies related to attribute relevancy, noise and uncertainty, in addition to the problem of handling categorical attributes. This research focuses on improving the efficiency and flexibility of analogy-based estimation to overcome these inadequacies. In particular, this thesis proposes two new approaches to model and handle uncertainty in the similarity measurement method and, most importantly, to reflect the structure of the dataset in similarity measurement using fuzzy modeling based on the Fuzzy C-means algorithm. The first proposed approach, the Fuzzy Grey Relational Analysis method, combines fuzzy set theory and Grey Relational Analysis to improve local and global similarity measures and to tolerate the imprecision associated with using different data types (continuous and categorical). The second proposed approach uses fuzzy numbers and related concepts to develop a practical yet efficient approach to support analogy-based systems, especially at the early phases of software development. Specifically, we propose a new similarity measure and adaptation technique based on fuzzy numbers.
We also propose a new attribute subset selection algorithm and attribute weighting technique based on the core hypothesis of analogy-based estimation: that projects which are similar in terms of attribute values are also similar in terms of effort values. Both use row-wise Kendall rank correlation between the similarity matrix based on project effort values and the similarity matrix based on project attribute values. A literature review of related software engineering studies revealed that existing attribute selection techniques (such as brute-force and heuristic algorithms) are restricted by their choice of performance indicators (such as Mean Magnitude of Relative Error and the prediction performance indicator) and are computationally far more intensive. The proposed algorithms provide a sound statistical basis and justification for their procedures. The performance of the proposed approaches has been evaluated using real industrial datasets, and results and conclusions from a series of comparative studies against conventional estimation by analogy on the available datasets are presented. The studies also statistically investigated the significance of differences between predictions generated by our approaches and those generated by the most popular techniques: conventional analogy estimation, neural networks and stepwise regression. The results indicate that the two proposed approaches can deliver comparable, if not better, accuracy than the compared techniques, and that Grey Relational Analysis tolerates the uncertainty associated with using different data types. As well as the original contributions within the thesis, a number of directions for further research are presented. Most chapters of this thesis have been disseminated in international journals and highly refereed conference proceedings.
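As a rough illustration of the row-wise Kendall correlation idea described above, the sketch below scores a single attribute against effort. The similarity function (1 / (1 + |difference|)) and the averaging scheme are illustrative assumptions for the sketch, not the thesis's actual algorithm.

```python
import numpy as np
from scipy.stats import kendalltau

def similarity_matrix(values):
    """Pairwise similarity between projects: 1 / (1 + |v_i - v_j|)."""
    v = np.asarray(values, dtype=float)
    return 1.0 / (1.0 + np.abs(v[:, None] - v[None, :]))

def attribute_weight(attr_values, effort_values):
    """Mean row-wise Kendall correlation between the attribute-based and
    effort-based similarity matrices (closer to 1 = more relevant attribute)."""
    sim_a = similarity_matrix(attr_values)
    sim_e = similarity_matrix(effort_values)
    n = len(attr_values)
    taus = []
    for i in range(n):
        # Compare row i of both matrices, excluding the diagonal entry.
        mask = np.arange(n) != i
        tau, _ = kendalltau(sim_a[i, mask], sim_e[i, mask])
        taus.append(tau)
    return float(np.nanmean(taus))
```

Under this scheme an attribute that is monotonically related to effort scores close to 1, while a noisy, unrelated attribute scores lower and would be down-weighted or dropped.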
APA, Harvard, Vancouver, ISO, and other styles
2

Thiyagarajah, Murali. "Attribute cardinality maps, new query result size estimation techniques for database systems." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0007/NQ42810.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Thiyagarajah, Murali (Muralitharam). "Attribute cardinality maps: new query result size estimation techniques for database systems." Carleton University dissertation, Computer Science. Ottawa, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Divelbiss, David L. "Evaluation of the Impact of Product Detail on the Accuracy of Cost Estimates." Ohio University / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1127165055.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Moser, Paolo. "Statistical and computational methods of forest attribute estimation and classification based on remotely sensed data." Doctoral thesis, Universidade Regional de Blumenau, Programa de Pós-Graduação em Engenharia Ambiental, 2018. http://www.bc.furb.br/docs/TE/2018/364705_1_1.pdf.

Full text
Abstract:
Advisor: Alexander Christian Vibrans.
Co-advisor: Ronald McRoberts.
Doctoral thesis (Environmental Engineering) - Programa de Pós-Graduação em Engenharia Ambiental, Centro de Ciências Tecnológicas, Universidade Regional de Blumenau, Blumenau.
APA, Harvard, Vancouver, ISO, and other styles
6

FABBRI, MATTEO. "Sfruttare i Dati Sintetici per Migliorare la Comprensione del Comportamento Umano." Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2021. http://hdl.handle.net/11380/1239978.

Full text
Abstract:
Most recent deep learning techniques require large volumes of training data to achieve human-like performance. Especially in computer vision, datasets are expensive to create because they usually require considerable manual effort that cannot be automated. Indeed, manual annotation is error-prone, inconsistent for subjective tasks (e.g. age classification), and not applicable to particular kinds of data (e.g. high frame-rate videos). For some tasks, like pose estimation and tracking, an alternative to manual annotation is the use of wearable sensors. However, this approach is not feasible in some circumstances (e.g. crowded scenarios), since the need to wear sensors limits its application to controlled environments. To overcome these limitations, we collected a set of synthetic datasets exploiting a photorealistic videogame. By relying on a virtual simulator, the annotations are error-free and always consistent, as no manual annotation is involved. Moreover, our data is suitable for in-the-wild applications, as it contains multiple scenarios and a high variety of people's appearances in uncontrolled environments. In addition, our datasets are privacy-compliant, as no real human was involved in the data acquisition.
Leveraging this newly collected data, extensive studies have been conducted on a variety of tasks. In particular, for 2D pose estimation and tracking, we propose a deep network architecture that jointly extracts people's body parts and associates them across short temporal spans. Our model explicitly deals with occluded body parts by hallucinating plausible solutions for joints that are not visible. For 3D pose estimation, we propose the use of high-resolution volumetric heatmaps to model joint locations, devising a simple and effective compression method to drastically reduce the size of this representation. For attribute classification, we overcome a common problem in surveillance, namely people occlusion, by designing a network capable of hallucinating occluded parts of people with a plausible appearance. From a more practical point of view, we design an edge-AI system capable of evaluating in real time the COVID-19 contagion risk of a monitored area by analyzing video streams. As synthetic data might suffer from domain-shift problems, we further investigate image-translation techniques for the tasks of head pose estimation, attribute recognition and face landmark localization.
APA, Harvard, Vancouver, ISO, and other styles
7

Park, Joonam. "Development and Application of Probabilistic Decision Support Framework for Seismic Rehabilitation of Structural Systems." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4898.

Full text
Abstract:
Seismic rehabilitation of structural systems is an effective approach for reducing potential seismic losses, both social and economic. However, little or no effort has been made to develop a framework for making decisions on seismic rehabilitation of structural systems that systematically incorporates conflicting multiple criteria and the uncertainties inherent in the seismic hazard and in the systems themselves. This study develops a decision support framework for seismic rehabilitation of structural systems incorporating uncertainties inherent in both the system and the seismic hazard, and demonstrates its application with detailed examples. The framework utilizes the HAZUS method for quick and extensive estimation of seismic losses associated with structural systems. It allows consideration of multiple decision attributes associated with seismic losses, and of multiple alternative seismic rehabilitation schemes represented by the objective performance level. Three multi-criteria decision-making (MCDM) models known to be effective for decision problems under uncertainty are employed, and their applicability to decision analysis in seismic rehabilitation is investigated: Equivalent Cost Analysis (ECA), Multi-Attribute Utility Theory (MAUT), and Joint Probability Decision Making (JPDM). Guidelines for selecting the MCDM model appropriate for a given decision problem are provided to establish a flexible decision support system. The resulting framework is applied to a test-bed system consisting of six hospitals in the Memphis, Tennessee, area to demonstrate its capabilities.
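Of the three decision models named above, MAUT's additive form is the easiest to sketch. The attributes, weights and utility functions below are hypothetical, chosen only to show the aggregation; they are not the dissertation's hospital test-bed data.

```python
def additive_utility(attribute_values, weights, utility_fns):
    """Additive multi-attribute utility: U = sum_i k_i * u_i(x_i).

    Assumes additive independence between attributes; weights should sum to 1.
    """
    return sum(k * u(x) for x, k, u in zip(attribute_values, weights, utility_fns))

# Hypothetical example: rank two rehabilitation schemes by expected seismic
# loss and upfront cost, each mapped to a [0, 1] utility (illustrative only).
u_loss = lambda x: 1.0 - x / 100.0   # lower expected loss (in $M) is better
u_cost = lambda x: 1.0 - x / 10.0    # lower upfront cost (in $M) is better

schemes = {"retrofit_A": (40.0, 2.0), "retrofit_B": (20.0, 6.0)}
scores = {name: additive_utility(vals, (0.7, 0.3), (u_loss, u_cost))
          for name, vals in schemes.items()}
```

With these made-up numbers, the heavier weight on loss reduction makes the costlier but more protective scheme score higher, which is the kind of trade-off the framework is built to expose.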
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Gengxiang. "Rehaussement et détection des attributs sismiques 3D par techniques avancées d'analyse d'images." Phd thesis, Université Michel de Montaigne - Bordeaux III, 2012. http://tel.archives-ouvertes.fr/tel-00731886.

Full text
Abstract:
Moments have been widely used in pattern recognition and image processing. In this thesis, we focus on 3D orthogonal Gauss-Hermite moments, 2D and 3D Gauss-Hermite moment invariants, a fast algorithm for the coherence attribute, and applications of moment-based methods to seismic interpretation. We study automatic seismic horizon tracking methods based on Gauss-Hermite moments in the 1D and 3D cases, and introduce an approach based on a multi-scale study of moment invariants. Experimental results show that the 3D Gauss-Hermite moment method outperforms other popular algorithms. We also address seismic facies analysis based on feature vectors derived from 3D Gauss-Hermite moments, and the Self-Organizing Map method combined with data visualization techniques. The excellent facies analysis results show that the integrated environment yields better performance in interpreting cluster structure. Finally, we introduce parallel processing and volume visualization. Taking advantage of multi-threading and multi-core technologies in seismic data processing and interpretation, we compute seismic attributes and track horizons efficiently. We also discuss a volume rendering algorithm based on the OpenSceneGraph engine that provides better insight into the structure of seismic data.
APA, Harvard, Vancouver, ISO, and other styles
9

Tachfouti, Nabil. "Estimation de la mortalité attribuée au tabac au Maroc." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0382/document.

Full text
Abstract:
Introduction: Tobacco is the leading cause of preventable death in the world. Morocco is a good model for studying smoking-related mortality in a country undergoing epidemiological transition. Studies of smoking in Morocco have shown that adult prevalence rose from 17.2% in 2000 to 18.5% in 2006, yet few data exist on the consequences of tobacco for the health of the Moroccan population, notably premature deaths. The objective of this work is to estimate overall smoking-related mortality in Morocco. Methods: We used the SAMMEC model (Smoking-Attributable Mortality, Morbidity, and Economic Cost), an application designed by the Centers for Disease Control and Prevention (CDC). The method is based on calculating the smoking-attributable fraction (SAF), the proportion of deaths from a disease that is attributable to tobacco. The data required for this model are: the relative risks (RR) of death for smokers and ex-smokers compared with never smokers, taken from the American Cancer Society's Cancer Prevention Study II (CPS-II); the frequency of tobacco consumption (proportions of smokers, former smokers and non-smokers), drawn from the MARTA study; and the causes of mortality, covering 19 smoking-related diseases grouped into three categories (cardiovascular diseases, respiratory diseases and cancers), collected from death declarations at the communal hygiene offices of the Casablanca region during 2012 and extrapolated to the Moroccan population.
With these data, the model was used to estimate smoking-related mortality among people aged 35 years and over, by sex and age group. Results: Smoking-attributable mortality (SAM) in 2012 among the Moroccan population aged 35 years and over is estimated at 4,359 deaths: 3,835 in men and 524 in women. SAM represents 11.9% of overall mortality in the age group studied (18.2% in men, 3.4% in women), 66.4% of deaths from respiratory causes, 53.9% of cancer mortality and 13.7% of deaths from cardiovascular diseases. SAM is dominated by cancer deaths (48.4%), followed by cardiovascular diseases (31.8%) and respiratory diseases (18.7%). In men, cancers account for 49.8% of SAM, cardiovascular diseases for 31.7% and respiratory diseases for 18.5%; in women, cancers account for 38.5%, followed by cardiovascular diseases (31.8%) and respiratory diseases (29.7%). Discussion: These alarming figures on the mortality cost of smoking underline the urgency of further sensitizing policy makers, who will need to put in place a control strategy based on a prevention policy better suited to this epidemiological situation and capable of sparing the country an enormous burden.
Background: Establishing the impact of tobacco smoking on mortality is essential to define and monitor public health interventions in developing countries. In Morocco, smoking prevalence increased from 17.2% to 18.5% between 2000 and 2006. Moreover, no updated estimates are available on smoking-attributable mortality (SAM). The aim of this study is to estimate the number of smoking-attributable deaths in Morocco. Methods: The Smoking-Attributable Mortality, Morbidity, and Economic Costs (SAMMEC) software was used to estimate smoking-attributable mortality for the year 2012. Smoking and ex-smoking prevalences among Moroccans aged 35 years or older were obtained from the national tobacco survey MARTA. Mortality data were drawn from the mortality declaration registries of the Casablanca region and extrapolated to the Moroccan population. Results: Of the 36,548 deaths recorded in Morocco in 2012 among persons aged 35 years and older, 4,359 were attributed to smoking in the three groups of selected causes: 3,835 men and 524 women. Smoking accounted for 11.9% of all deaths: 18.3% in men and 3.4% in women. Cancer was the most frequent cause, responsible for 50.7% (2,112) of all smoking-attributable deaths, followed by cardiovascular diseases (30.7%; 1,338 deaths) and respiratory diseases (19.6%; 864 deaths). Conclusion: Tobacco use caused one out of five male deaths. Four leading causes (lung cancer, ischemic heart disease, cerebrovascular disease and chronic airways obstruction) accounted for 64.2% of all SAM: 65.0% among men and 61.6% among women. Overall, there is still a high burden of tobacco-related deaths in Morocco, which leads to considerable costs for the country's health system and economy. Effective and comprehensive actions must be taken to slow this epidemic in Morocco.
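The SAMMEC approach rests on the standard smoking-attributable fraction, SAF = (p_never + p_current·RR_current + p_former·RR_former − 1) / (p_never + p_current·RR_current + p_former·RR_former). A minimal sketch follows; the prevalence and relative-risk figures in the example are illustrative, not the study's CPS-II or MARTA values.

```python
def smoking_attributable_fraction(p_current, p_former, rr_current, rr_former):
    """Population attributable fraction for smoking (SAMMEC-style).

    p_current, p_former: prevalence of current and former smokers (the
    never-smoker prevalence is the remainder); rr_current, rr_former:
    relative risks of death versus never smokers for a given disease.
    """
    p_never = 1.0 - p_current - p_former
    denom = p_never + p_current * rr_current + p_former * rr_former
    return (denom - 1.0) / denom

def attributable_deaths(total_deaths, saf):
    """Deaths from a cause attributable to smoking."""
    return total_deaths * saf
```

For example, with an illustrative current-smoker prevalence of 18.5%, former-smoker prevalence of 10%, and relative risks of 10 and 3, the fraction comes out at about 0.65; multiplying it by the total deaths from that disease gives the smoking-attributable deaths.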
APA, Harvard, Vancouver, ISO, and other styles
10

Hammoud, Wissam. "Attributes effecting software testing estimation; is organizational trust an issue?" Thesis, University of Phoenix, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3583478.

Full text
Abstract:

This quantitative correlational research explored the potential association between levels of organizational trust and software testing estimation. It did so by examining the relationships between organizational trust, testers' expertise, the organizational technology used, and the number of hours, number of testers, and time-coding estimated by the software testers. The research, conducted in the software testing department of a health insurance organization, employed the Organizational Trust Inventory - Short Form (OTI-SF) developed by Philip Bromiley and Larry Cummings, and revealed a strong relationship between organizational trust and software testing estimation. The dissertation reviews historical theories of organizational trust and includes a detailed discussion of software testing practices and software testing estimation. Given the significant impact of organizational trust on project estimating and time-coding found in this research, software testing leaders can use its findings to improve project planning and management by improving the levels of trust within their organizations.

APA, Harvard, Vancouver, ISO, and other styles
11

Qiu, Xuchong. "2D and 3D Geometric Attributes Estimation in Images via deep learning." Thesis, Marne-la-vallée, ENPC, 2021. http://www.theses.fr/2021ENPC0005.

Full text
Abstract:
The visual perception of 2D and 3D geometric attributes (e.g. translation, rotation, spatial size) is important in robotic applications. It helps a robotic system build knowledge about its surrounding environment and can serve as input for downstream tasks such as motion planning and physical interaction with objects. The main goal of this thesis is to automatically detect the positions and poses of objects of interest for robotic manipulation tasks. In particular, we are interested in the low-level task of estimating occlusion relationships, in order to discriminate different objects, and in the higher-level tasks of object visual tracking and object pose estimation. The first focus is tracking the object of interest with correct locations and sizes in a given video. We first systematically study the tracking framework based on discriminative correlation filters (DCF) and propose to leverage semantic information in two tracking stages: the visual feature encoding stage and the target localization stage. Our experiments demonstrate that the involvement of semantics improves the performance of both localization and size estimation in our DCF-based tracking framework. We also analyze failure cases. The second focus is using object shape information to improve object 6D pose estimation and to perform pose refinement. We propose to estimate the 2D projections of object 3D surface points with deep models in order to recover object 6D poses. Our results show that the proposed method benefits from the large number of 3D-to-2D point correspondences and achieves better accuracy.
As a second part, we study the constraints of existing object pose refinement methods and develop a pose refinement method for objects in the wild. Our experiments demonstrate that our models, trained on either real data or generated synthetic data, can refine pose estimates for objects in the wild, even though these objects are not seen during training. The third focus is studying geometric occlusion in single images to better discriminate objects in the scene. We first formalize a definition of geometric occlusion and propose a method to automatically generate high-quality occlusion annotations. We then propose a new occlusion relationship formulation (abbnom) and a corresponding inference method. Experiments on occlusion reasoning benchmarks demonstrate the superiority of the proposed formulation and method. To recover accurate depth discontinuities, we also propose a depth map refinement method and a single-stage monocular depth estimation method; guided by occlusion relationship estimation, both achieve state-of-the-art performance. All the methods we propose leverage the versatility and power of deep learning, which should facilitate their integration into the visual perception module of modern robotic systems. Besides these methodological advances, we have also made software (for occlusion and pose estimation) and datasets (with high-quality occlusion information) publicly available as a contribution to the scientific community.
APA, Harvard, Vancouver, ISO, and other styles
12

Tiemeni, Ghislaine Livie Ngangom. "Performance estimation of wireless networks using traffic generation and monitoring on a mobile device." University of the Western Cape, 2015. http://hdl.handle.net/11394/4777.

Full text
Abstract:
In this study, a traffic generator software package named MTGawn was developed to run packet generation and evaluation on a mobile device. The call-generating software system is able to simulate voice-over-IP calls, as well as user datagram protocol (UDP) and transmission control protocol (TCP) traffic, between mobile phones over a wireless network, and to analyse network data similarly to computer-based network monitoring tools such as Iperf and D-ITG, while being self-contained on a mobile device. This entailed porting a 'stripped down' version of a packet generation and monitoring system, with functionality as found in open-source tools, to a mobile platform. The mobile system is able to generate and monitor traffic over any network interface on a mobile device and to calculate the standard quality-of-service metrics. The tool was compared to a computer-based tool, the distributed Internet traffic generator (D-ITG), in the same environment, and in most cases MTGawn reported results comparable to D-ITG's. The main motivation for this software was to ease feasibility testing and monitoring in the field by using an affordable and rechargeable technology such as a mobile device. The system was tested in a testbed and can be used in rural areas where a mobile device is more suitable than a PC or laptop. The main challenge was to port and adapt an open-source packet generator to the Android platform and to provide a suitable touchscreen interface for the tool.
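The standard quality-of-service metrics such a tool reports (packet loss, delay, jitter, throughput) can be sketched from per-packet send/receive records. The record format below is a made-up assumption, not MTGawn's actual data model, and the jitter is a simple mean absolute delay variation rather than the RFC 3550 smoothed estimator.

```python
from statistics import mean

def qos_metrics(sent, received):
    """Compute basic QoS metrics from per-packet records.

    sent: {packet_id: send_time}; received: {packet_id: (recv_time, size_bytes)}.
    Times are in seconds, sizes in bytes (illustrative record format).
    """
    delays = [recv - sent[pid] for pid, (recv, _) in received.items() if pid in sent]
    loss = 1.0 - len(received) / len(sent) if sent else 0.0
    # Simple jitter approximation: mean absolute variation between consecutive delays.
    jitter = (mean(abs(a - b) for a, b in zip(delays, delays[1:]))
              if len(delays) > 1 else 0.0)
    duration = (max(r for r, _ in received.values()) - min(sent.values())
                if received else 0.0)
    throughput = (sum(s for _, s in received.values()) * 8 / duration
                  if duration > 0 else 0.0)  # bits per second
    return {"loss": loss, "avg_delay": mean(delays) if delays else 0.0,
            "jitter": jitter, "throughput_bps": throughput}
```

Feeding in three sent packets of which one is lost yields a loss ratio of one third, with delay, jitter and throughput computed from the two packets that arrived.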
Magister Scientiae - MSc
APA, Harvard, Vancouver, ISO, and other styles
13

Lima, Luiz Alberto Barbosa de. "Porosity estimation from seismic attributes with simultaneous classification of spatially structured latent facies." Pontifícia Universidade Católica do Rio de Janeiro, 2017. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=33718@1.

Full text
Abstract:
PETRÓLEO BRASILEIRO S. A.
Estimating porosity in oil and gas reservoirs is a crucial and challenging task in the oil industry. A novel nonlinear model for porosity estimation is proposed, which handles sedimentary facies as latent variables. It successfully combines the concepts of conditional random fields (CRFs), transductive learning and ridge regression. The proposed Transductive Conditional Random Field Regression (TCRFR) uses seismic impedance volumes as input information, conditioned on the porosity values from the available wells in the reservoir, and simultaneously and automatically provides as output the porosity estimation and facies classification in the whole volume. The method is able to infer the latent facies states by combining the local, labeled and accurate porosity information available at well locations with the plentiful but imprecise impedance information available everywhere in the reservoir volume. That accurate information is propagated in the reservoir based on conditional random field probabilistic graphical models, greatly reducing uncertainty. In addition, two new techniques are introduced as preprocessing steps for the application of TCRFR in the extreme but realistic cases where only a scarce number of porosity-labeled samples is available in a few exploratory wells, a typical situation for geologists during the evaluation of a reservoir in the exploration phase. Both synthetic and real-world data experiments are presented to prove the usefulness of the proposed methodology, which show that it outperforms previous automatic estimation methods on synthetic data and provides results comparable to the traditional, labor-intensive manual geostatistics approach on real-world data.
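As a rough illustration of the idea in this abstract (latent facies conditioning a porosity regression), the sketch below replaces the CRF inference with a plain one-dimensional k-means over impedance and fits one ridge regression per pseudo-facies. Every name and parameter is illustrative, not the author's implementation.

```python
import numpy as np

def toy_facies_regression(impedance, well_idx, well_porosity, k=2, lam=1e-3, iters=50):
    """Cluster impedance into k pseudo-facies (k-means stand-in for the CRF),
    fit a per-facies ridge regression porosity ~ impedance using only the
    labeled well samples, then predict porosity everywhere in the volume."""
    x = np.asarray(impedance, float)
    # 1-D k-means on impedance: a crude surrogate for latent facies inference
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    # per-facies ridge regression trained on the well samples only
    porosity = np.empty_like(x)
    for j in range(k):
        wj = [i for i in well_idx if labels[i] == j]
        if wj:
            X = np.c_[np.ones(len(wj)), x[wj]]
            y = np.array([well_porosity[well_idx.index(i)] for i in wj])
        else:
            # fall back to all wells if a pseudo-facies has no labeled sample
            X = np.c_[np.ones(len(well_idx)), x[list(well_idx)]]
            y = np.asarray(well_porosity, float)
        beta = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
        porosity[labels == j] = beta[0] + beta[1] * x[labels == j]
    return labels, porosity
```

On well-separated synthetic facies this recovers near-exact porosity; the thesis's CRF additionally enforces spatial smoothness of the facies labels, which k-means ignores.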
APA, Harvard, Vancouver, ISO, and other styles
14

Parianos, John Michael. "Geology of the Clarion-Clipperton Zone: fundamental attributes in polymetallic nodule resource development." Doctoral thesis, Universidade de Évora, 2021. http://hdl.handle.net/10174/31069.

Full text
Abstract:
The Pacific oceanic plate segment known as the Clarion-Clipperton Zone (CCZ) contains polymetallic nodules of superior consistency, tonnage and quality to those known from other deep seabed areas. Regional-scale mapping reveals structural and geomorphological features that result from variance in plate segment motion rates, and other trans-plate factors, which help to contextualise and better define the environment and exploration potential of this deposit. Exploration survey datasets from the central and eastern parts of the CCZ area allow for local geological mapping that contributes to regional models of nodule formation and distribution. Stratigraphically, basement abyssal hills include flatter areas with chains of volcanic knolls. Mid-Eocene and younger deep-sea chalks of the Marquesas Formation include fault escarpment exposure, potholes and "carbonate strata breccias". The Early Miocene to present siliceous clay-ooze of the Clipperton Formation shows surficial development of ripples, slumping and sediment drifts. The clay-ooze hosts the deposit of polymetallic nodules. Unconformable volcanic rock units include single and composite knolls, seamounts, dykes and sills. Nodule forms and abundance relate to the facies-scale conditions of their formation. The thickness and stability of the geochemically active layer is shown to play a crucial role in their growth. Multi-element chemistry indicates differing metal contributions from silicic versus calcic primary productivity. This study confirms nodule densities, host clay-ooze bulk densities and packing densities, as well as moisture content. Moreover, it is shown that nodule handling forms attrition fines that may affect safe transport at sea. Mineral resource estimation is important to resource owners and developers.
Further conversion of mineral resources to reserves requires multidisciplinary modifying factors, which include: logging of fauna; concepts behind a nodule collection system; and pyrometallurgical experiments. This study aims to improve the resource classification of the CCZ deposit in specific contract areas of the International Seabed Authority.
Resumo: Geology of the Clarion-Clipperton Fracture Zone: Fundamental Attributes in the Development of Polymetallic Nodules as a Geological Resource - The Pacific seabed area known as the Clarion-Clipperton fracture zone (CCZ) hosts a polymetallic nodule deposit of superior tonnage and quality compared with other deep-ocean mineral deposits. Regional-scale mapping reveals structures and morphological features resulting from variation in seafloor spreading rates, as well as other factors affecting the evolution of the CCZ, which help define in greater detail the exploration potential of this mineral deposit. Data collected during campaigns in the central and eastern parts of the CCZ support geological mapping that contributes to regional models for the formation and distribution of polymetallic nodules. The nodules occur in association with the Clipperton Formation, of Early Miocene to recent age. This work highlights, in particular, the role of the thickness and stability of the geochemically active layer in the growth, shape and distribution of the nodules. Multi-element geochemistry indicates different contributions of primary productivity (siliceous versus carbonate) to the distribution of the metals composing the nodules. The study also establishes values for physical parameters relevant to exploration activities, such as the densities of the nodules and of the clay sediments in which they occur. The classification and estimation of mineral resources is relevant both to the entities holding jurisdiction over those resources and to those interested in their development. Converting mineral resources to reserves requires applying the so-called modifying factors, which include, among others, the description of the fauna, the concept for the development of collectors, and pyrometallurgical tests. This work aims to significantly improve the resource classification of the CCZ in areas under contracts administered by the International Seabed Authority.
APA, Harvard, Vancouver, ISO, and other styles
15

Pike, Quinton David. "Empirical estimation of attributes influencing warehouse/distribution center operations an in-depth analysis of the Washington warehouse industry /." Online access for everyone, 2005. http://www.dissertations.wsu.edu/Thesis/Spring2005/q%5Fpike%5F050605.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Jimenez, Laura. "Estimating the Reliability of Concept Map Ratings Using a Scoring Rubric Based on Three Attributes." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2284.

Full text
Abstract:
Concept maps provide a way to assess how well students have developed an organized understanding of how the concepts taught in a unit are interrelated and fit together. However, concept maps are challenging to score because of the idiosyncratic ways in which students organize their knowledge (McClure, Sonak, & Suen, 1999). The "construct-a-map" or "C-mapping" task has been shown to capture students' organized understanding. This task involves giving students a list of concepts and asking them to produce a map showing how these concepts are interrelated. The purpose of this study was twofold: (a) to determine to what extent the use of the restricted C-mapping technique coupled with the threefold scoring rubric produced reliable ratings of students' conceptual understanding on two examinations, and (b) to project how the reliability of the mean ratings for individual students would likely vary as a function of the average number of raters and rating occasions for the two examinations. Nearly three-fourths (73%) of the variability in the ratings for one exam, and 43% for the other, was due to dependable differences in the students' understanding detected by the raters. Rater inconsistencies were higher for one exam and somewhat lower for the other. The person-by-rater interaction was relatively small for one exam and somewhat higher for the other. The rater-by-occasion variance components were zero for both exams. The unexplained variance accounted for 19% on one exam and 14% on the other. The size of the reliability coefficient of student concept map scores varied across the two examinations. Reliabilities of .95 and .93 for relative and absolute decisions were obtained for one exam, and of .88 and .78 for absolute and relative decisions for the other.
Increasing the number of raters from one to two on one rating occasion would yield a greater increase in the reliability of the ratings, at a lower cost, than increasing the number of rating occasions. The same pattern holds for both exams.
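The reliability coefficients reported above follow generalizability theory for a fully crossed persons × raters × occasions design; given estimated variance components, the relative (G) and absolute (Φ) coefficients can be computed as below. The demo numbers are invented for illustration, not the study's estimates.

```python
def g_coefficients(var_p, var_r, var_o, var_pr, var_po, var_ro, var_pro_e,
                   n_r=1, n_o=1):
    """Relative (G) and absolute (Phi) coefficients for a fully crossed
    p x r x o design, averaging over n_r raters and n_o occasions."""
    # relative error: only interactions involving persons contribute
    rel_err = var_pr / n_r + var_po / n_o + var_pro_e / (n_r * n_o)
    # absolute error: main effects of raters/occasions also contribute
    abs_err = rel_err + var_r / n_r + var_o / n_o + var_ro / (n_r * n_o)
    g = var_p / (var_p + rel_err)
    phi = var_p / (var_p + abs_err)
    return g, phi

# Illustrative components: person variance dominates, rater-by-occasion is zero
g1, phi1 = g_coefficients(var_p=0.73, var_r=0.04, var_o=0.0,
                          var_pr=0.04, var_po=0.0, var_ro=0.0,
                          var_pro_e=0.19, n_r=2, n_o=1)
```

Raising n_r from 1 to 2 at n_o = 1 increases both coefficients, mirroring the study's conclusion that adding a rater is the cheaper route to higher reliability.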
APA, Harvard, Vancouver, ISO, and other styles
17

Golinkoff, Jordan Seth. "Estimation and modeling of forest attributes across large spatial scales using BiomeBGC, high-resolution imagery, LiDAR data, and inventory data." Thesis, University of Montana, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3568103.

Full text
Abstract:

The accurate estimation of forest attributes at many different spatial scales is a critical problem. Forest landowners may be interested in estimating timber volume, forest biomass, and forest structure to determine their forest's condition and value. Counties and states may be interested in learning about their forests to develop sustainable management plans and policies related to forests, wildlife, and climate change. Countries and consortiums of countries need information about their forests to set global and national targets on climate change and deforestation, and to understand the state of their forests at a given point in time.

This dissertation approaches these questions from two perspectives. The first perspective uses the process model Biome-BGC paired with inventory and remote sensing data to make inferences about a current forest state given known climate and site variables. Using a model of this type, future climate data can be used to make predictions about future forest states as well. An example of this work applied to a forest in northern California is presented. The second perspective of estimating forest attributes uses high resolution aerial imagery paired with light detection and ranging (LiDAR) remote sensing data to develop statistical estimates of forest structure. Two approaches within this perspective are presented: a pixel based approach and an object based approach. Both approaches can serve as the platform on which models (either empirical growth and yield models or process models) can be run to generate inferences about future forest state and current forest biogeochemical cycling.

APA, Harvard, Vancouver, ISO, and other styles
18

Benelli, Alessandro <1978>. "Hyperspectral imaging and other optical techniques for (in-field/in-lab) physico-chemical attributes estimation of agri-food vegetal products." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10429/1/Benelli_Alessandro_PhD_Thesis.pdf.

Full text
Abstract:
In the agri-food sector, measurement and monitoring activities contribute to high quality end products. In particular, considering food of plant origin, several product quality attributes can be monitored. Among the non-destructive measurement techniques, a large variety of optical techniques are available, including hyperspectral imaging (HSI) in the visible/near-infrared (Vis/NIR) range, which, due to the capacity to integrate image analysis and spectroscopy, proved particularly useful in agronomy and food science. Many published studies regarding HSI systems were carried out under controlled laboratory conditions. In contrast, few studies describe the application of HSI technology directly in the field, in particular for high-resolution proximal measurements carried out on the ground. Based on this background, the activities of the present PhD project were aimed at exploring and deepening knowledge in the application of optical techniques for the estimation of quality attributes of agri-food plant products. First, research activities on laboratory trials carried out on apricots and kiwis for the estimation of soluble solids content (SSC) and flesh firmness (FF) through HSI were reported; subsequently, FF was estimated on kiwis using a NIR-sensitive device; finally, the procyanidin content of red wine was estimated through a device based on the pulsed spectral sensitive photometry technique. In the second part, trials were carried out directly in the field to assess the degree of ripeness of red wine grapes by estimating SSC through HSI, and finally a method for the automatic selection of regions of interest in hyperspectral images of the vineyard was developed. 
The activities described above have revealed the potential of the optical techniques for sorting-line application; moreover, the application of the HSI technique directly in the field has proved particularly interesting, suggesting further investigations to solve a variety of problems arising from the many environmental variables that may affect the results of the analyses.
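Attribute estimation from hyperspectral data of the kind described here is typically a regularized linear model on per-pixel band spectra. PLS regression is the common chemometric choice; the self-contained sketch below uses a closed-form ridge fit on synthetic spectra instead, and none of the names or data come from the thesis.

```python
import numpy as np

def fit_ridge_spectra(spectra, ssc, lam=1.0):
    """Fit SSC ~ band spectra with ridge regression (closed form).
    Returns (intercept, coefficient vector)."""
    X = np.asarray(spectra, float)
    y = np.asarray(ssc, float)
    Xc = X - X.mean(axis=0)          # center so the intercept separates out
    yc = y - y.mean()
    A = Xc.T @ Xc + lam * np.eye(X.shape[1])
    w = np.linalg.solve(A, Xc.T @ yc)
    b = y.mean() - X.mean(axis=0) @ w
    return b, w

def predict_ssc(b, w, spectra):
    return np.asarray(spectra, float) @ w + b
```

In practice PLS is preferred over plain ridge when bands are highly collinear and far outnumber the calibration samples, but the calibration/prediction workflow is the same.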
APA, Harvard, Vancouver, ISO, and other styles
19

Peduzzi, Alicia. "Estimating forest attributes using laser scanning data and dual-band, single-pass interferometric aperture radar to improve forest management." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/39456.

Full text
Abstract:
The overall objectives of this dissertation were to (1) determine whether leaf area index (LAI) (Chapter 2), as well as stem density and height to live crown (Chapter 3) can be estimated accurately in intensively managed pine plantations using small-footprint, multiple-return airborne laser scanner (lidar) data, and (2) ascertain whether leaf area index in temperate mixed forests is best estimated using multiple-return airborne laser scanning (lidar) data or dual-band, single-pass interferometric synthetic aperture radar data (from GeoSAR) alone or both in combination (Chapter 4). In situ measurements of LAI, mean height, height to live crown, and stem density were made on 109 (LAI) or 110 plots (all other variables) under a variety of stand conditions. Lidar distributional metrics were calculated for each plot as a whole as well as for crown density slices (newly introduced in this dissertation). These metrics were used as independent variables in best subsets regressions with LAI, number of trees, mean height to live crown, and mean height (measured in situ) as the dependent variables. The best resulting model for LAI in pine plantations had an R2 of 0.83 and a cross-validation (CV) RMSE of 0.5. The CV-RMSE for estimating number of trees on all 110 plots was 11.8 with an R2 of 0.92. Mean height to live crown was also well-predicted (R2 = 0.96, CV-RMSE = 0.8 m) with a one-variable model. In situ measurements of temperate mixed forest LAI were made on 61 plots (21 hardwood, 36 pine, 4 mixed pine hardwood). GeoSAR metrics were calculated from the X-band backscatter coefficients (four looks) as well as both X- and P-band interferometric heights and magnitudes. Both lidar and GeoSAR metrics were used as independent variables in best subsets regressions with LAI (measured in situ) as the dependent variable. Lidar metrics alone explained 69% of the variability in temperate mixed forest LAI, while GeoSAR metrics alone explained 52%. 
However, combining the lidar and GeoSAR metrics increased the R2 to 0.77 with a CV-RMSE of 0.42. Analysis of data from active sensors shows strong potential for eventual operational estimation of biophysical parameters essential to silviculture.
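The lidar "distributional metrics" used as regressors above are plot-level summaries of the return-height distribution. A hedged sketch of computing such metrics and fitting a linear LAI model follows (best-subsets selection omitted, all data synthetic, metric choices illustrative):

```python
import numpy as np

def lidar_metrics(heights):
    """Plot-level distributional metrics from lidar return heights:
    mean, standard deviation, and selected height percentiles."""
    h = np.asarray(heights, float)
    return np.array([h.mean(), h.std(),
                     *np.percentile(h, [25, 50, 75, 95])])

def fit_lai(plots, lai):
    """Ordinary least squares LAI ~ metrics over a set of plots."""
    X = np.c_[np.ones(len(plots)),
              np.array([lidar_metrics(p) for p in plots])]
    beta, *_ = np.linalg.lstsq(X, np.asarray(lai, float), rcond=None)
    return beta
```

The dissertation additionally slices the canopy into crown density layers and computes metrics per slice; that only changes how many columns the design matrix has.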
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
20

Meurer, Ismael. "Estudo de diferentes métodos na estimativa da curva de retenção da água no solo." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/11/11140/tde-05052014-102750/.

Full text
Abstract:
Soil provides support and acts as a water reservoir for plants, promoting essential conditions for root growth and for water and nutrient dynamics. An understanding of its hydraulic properties, such as the water retention curve, is of great importance for the description and prediction of water and solute transport processes. The objective of this study was to determine the soil water retention curve through the traditional method using porous-plate funnels and pressure chambers, through the field method using tensiometers, and through the evaporation of water from a soil sample equipped with a tensiometer in the laboratory. The studied soil was classified as a clayey Rhodic Hapludox, which had been cultivated with coffee for more than 10 years. The comparison of the curves obtained by the three methods indicated that the evaporation method differed statistically from the funnel-and-chamber method and was statistically identical to the field tensiometer method. For its ease of execution, low cost and speed in determining the retention curve up to a tension of about 100 kPa, the evaporation method presented here is a feasible option. The field tensiometer method, although more realistic, is very laborious.
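Whichever method supplies the tension–water-content pairs, the measured points are usually summarized with a closed-form retention model. The abstract does not say which model was fitted, so the widely used van Genuchten form below, with invented parameters, is only a generic illustration:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric water content (m3/m3) at tension h (kPa), van Genuchten
    (1980) form with the usual Mualem restriction m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Example curve over the ~0-100 kPa range covered by the evaporation method;
# residual/saturated contents and shape parameters are made-up values
h = np.linspace(0.1, 100.0, 200)
theta = van_genuchten(h, theta_r=0.20, theta_s=0.55, alpha=0.5, n=1.6)
```

Fitting theta_r, theta_s, alpha and n to the measured pairs (e.g., by nonlinear least squares) yields a curve that can be compared across the three measurement methods.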
APA, Harvard, Vancouver, ISO, and other styles
21

Fisher, Geoffrey W. "Value Estimation and Comparison in Multi-Attribute Choice." Thesis, 2015. https://thesis.library.caltech.edu/8862/1/Fisher_Geoffrey_2015_thesis.pdf.

Full text
Abstract:

The following work explores the processes individuals utilize when making multi-attribute choices. With the exception of extremely simple or familiar choices, most decisions we face can be classified as multi-attribute choices. In order to evaluate and make choices in such an environment, we must be able to estimate and weight the particular attributes of an option. Hence, better understanding the mechanisms involved in this process is an important step for economists and psychologists. For example, when choosing between two meals that differ in taste and nutrition, what are the mechanisms that allow us to estimate and then weight attributes when constructing value? Furthermore, how can these mechanisms be influenced by variables such as attention or common physiological states, like hunger?

In order to investigate these and similar questions, we use a combination of choice and attentional data, where the attentional data was collected by recording eye movements as individuals made decisions. Chapter 1 designs and tests a neuroeconomic model of multi-attribute choice that makes predictions about choices, response time, and how these variables are correlated with attention. Chapter 2 applies the ideas in this model to intertemporal decision-making, and finds that attention causally affects discount rates. Chapter 3 explores how hunger, a common physiological state, alters the mechanisms we utilize as we make simple decisions about foods.
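The Chapter 1 model belongs to the family of attentional drift-diffusion models, in which evidence drifts toward the attended option's value while the unattended option's value is discounted. A minimal two-option simulation with illustrative parameter values (not the fitted ones) looks like this:

```python
import numpy as np

def simulate_addm(v_left, v_right, d=0.002, theta=0.3, sigma=0.02,
                  gaze_len=300, max_t=20000, rng=None):
    """One attentional drift-diffusion trial: while fixating an item, drift
    equals d * (attended value - theta * unattended value); the trial ends
    when accumulated evidence x crosses +1 (left) or -1 (right).
    Returns (choice, reaction_time in steps)."""
    if rng is None:
        rng = np.random.default_rng()
    x = 0.0
    look_left = rng.random() < 0.5       # random first fixation
    for t in range(max_t):
        if t % gaze_len == 0 and t > 0:
            look_left = not look_left    # alternate fixations, fixed duration
        if look_left:
            mu = d * (v_left - theta * v_right)
        else:
            mu = -d * (v_right - theta * v_left)
        x += mu + sigma * rng.normal()
        if abs(x) >= 1.0:
            return ('left' if x > 0 else 'right'), t + 1
    return ('left' if x > 0 else 'right'), max_t
```

Because drift is discounted for the unattended option, gaze allocation shifts choice probabilities even at fixed values, which is the attention effect the dissertation measures with eye tracking.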

APA, Harvard, Vancouver, ISO, and other styles
22

Hsieh, Hui-Lan, and 謝蕙蘭. "Multi-task Learning for Face Recognition and Attribute Estimation." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/35301995068739966019.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Networking and Multimedia
104
Convolutional neural networks (CNNs) have been shown to be the state-of-the-art approach for learning face representations in recent years. However, previous works utilized only identity information, rather than leveraging human attributes (e.g., gender and age), which carry high-level semantic meaning, to learn more robust features. In this work, we aim to learn discriminative features to improve face recognition through multi-task learning with human attributes. Specifically, we focus on simultaneously optimizing face recognition and human attribute estimation. In our experiments, we learn face representations by training on the largest publicly available face dataset, CASIA-WebFace, with gender and age labels, and then evaluate the learned features on the widely used LFW benchmark for face verification and identification. We also compare the effectiveness of different attributes for identification. The results show that the proposed model outperforms hand-crafted features such as high-dimensional LBP, and that human attributes indeed provide useful semantic cues. We also run experiments on gender and age estimation on the Adience benchmark to show that human attribute prediction can likewise benefit from rich identity information.
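The joint optimization described in this abstract amounts to minimizing an identity loss plus down-weighted attribute losses computed on a shared representation. A numpy sketch of such a combined objective follows; the weighting λ and all shapes are illustrative, not the thesis's values:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    """Mean negative log-likelihood of the true class per sample."""
    p = softmax(logits)
    return float(-np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12)))

def multitask_loss(id_logits, id_y, gender_logits, gender_y,
                   age_logits, age_y, lam_attr=0.5):
    """Identity loss plus down-weighted attribute losses; all logit heads
    are assumed to sit on top of one shared feature extractor."""
    return (cross_entropy(id_logits, id_y)
            + lam_attr * (cross_entropy(gender_logits, gender_y)
                          + cross_entropy(age_logits, age_y)))
```

During training, gradients of this scalar flow back through all heads into the shared trunk, which is how the attribute tasks regularize the identity features.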
APA, Harvard, Vancouver, ISO, and other styles
23

Di Fina, Dario. "Multi-Target Tracking and Facial Attribute Estimation in Smart Environments." Doctoral thesis, 2016. http://hdl.handle.net/2158/1029030.

Full text
Abstract:
This dissertation presents a study on three different computer vision topics that have applications to smart environments. We first propose a solution to improve multi-target data association based on l1-regularized sparse basis expansions. The method aims to improve the data association process by addressing problems like occlusion and change of appearance. Experimental results show that, for the pure data association problem, our proposed approach achieves state-of-the-art results on standard benchmark datasets. Next, we extend our new data association approach with a novel technique based on a weighted version of sparse reconstruction that enforces long-term consistency in multi-target tracking. We introduce a two-phase approach that first performs local data association, and then periodically uses accumulated usage statistics to merge tracklets and enforce long-term, global consistency in tracks. The result is a complete, end-to-end tracking system that is able to reduce tracklet fragmentation and ID switches, and to improve the overall quality of tracking. Finally, we propose a method to jointly estimate facial characteristics such as gender, age, ethnicity and head pose. We develop a random-forest-based method built around a new splitting criterion for multi-objective estimation. Our system achieves results comparable to the state of the art, with the additional advantage of simultaneously estimating multiple facial characteristics from a single pool of image features rather than characteristic-specific ones.
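The l1-regularized association idea can be sketched as solving a lasso for each detection over a dictionary of track appearance templates and assigning the detection to the track receiving the most coefficient mass. The ISTA solver and every size below are illustrative, not the dissertation's implementation:

```python
import numpy as np

def lasso_ista(D, y, lam=0.05, iters=500):
    """Minimize 0.5 * ||D w - y||^2 + lam * ||w||_1 by iterative
    soft-thresholding (ISTA) with a fixed step 1/L."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ w - y)              # gradient of the quadratic term
        w = w - g / L
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # shrinkage
    return w

def associate(detection, track_templates):
    """Assign a detection feature vector to the track whose templates
    receive the largest total absolute coefficient mass."""
    D = np.column_stack([t for ts in track_templates for t in ts])
    w = lasso_ista(D, detection)
    sizes = [len(ts) for ts in track_templates]
    bounds = np.cumsum([0] + sizes)
    mass = [np.abs(w[bounds[i]:bounds[i + 1]]).sum() for i in range(len(sizes))]
    return int(np.argmax(mass))
```

The sparsity of w is what gives robustness to occlusion and appearance change: only a few templates need to explain the detection well for the assignment to be confident.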
APA, Harvard, Vancouver, ISO, and other styles
24

(7484339), Fu-Chen Chen. "Deep Learning Studies for Vision-based Condition Assessment and Attribute Estimation of Civil Infrastructure Systems." Thesis, 2021.

Find full text
Abstract:
Structural health monitoring and building assessment are crucial for acquiring structures' states and maintaining their condition. Compared with human-labor surveys, which are subjective, time-consuming, and expensive, autonomous image and video analysis is a faster, more efficient, and non-destructive alternative. This thesis focuses on crack detection from videos, crack segmentation from images, and building assessment from street view images. For crack detection from videos, three approaches are proposed, based on local binary patterns (LBP) and support vector machines (SVM), a deep convolutional neural network (DCNN), and a fully-connected network (FCN). A parametric Naïve Bayes data fusion scheme is introduced that registers video frames in a spatiotemporal coordinate system and fuses information based on Bayesian probability to increase detection precision. For crack segmentation from images, the rotation-invariant property of cracks is utilized to enhance segmentation accuracy. The architectures of several approximately rotation-invariant DCNNs are discussed and compared using several crack datasets. For building assessment from street view images, a framework of multiple DCNNs is proposed to detect buildings and predict attributes that are crucial for flood risk estimation, including founding heights, foundation types (pier, slab, mobile home, or others), building types (commercial, residential, or mobile home), and building stories. A feature fusion scheme is proposed that combines image features with meta information to improve the predictions, and a task relation encoding network (TREncNet) is introduced that encodes task relations as network connections to enhance multi-task learning.
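The Naïve Bayes fusion step can be illustrated for a single registered location: per-frame crack probabilities are combined by multiplying likelihood ratios under a conditional-independence assumption. The prior and the probabilities below are invented, and this is a generic sketch of the principle rather than the thesis's parametric scheme:

```python
import numpy as np

def naive_bayes_fuse(frame_probs, prior=0.5):
    """Fuse per-frame P(crack | frame) at one spatiotemporally registered
    location, assuming frames are conditionally independent given the
    true state (Naive Bayes). Returns the fused posterior probability."""
    p = np.clip(np.asarray(frame_probs, float), 1e-6, 1.0 - 1e-6)
    prior_lo = np.log(prior / (1.0 - prior))
    # posterior log-odds = prior log-odds + sum of per-frame evidence terms
    log_odds = prior_lo + np.sum(np.log(p / (1.0 - p)) - prior_lo)
    return 1.0 / (1.0 + np.exp(-log_odds))
```

Several moderately confident detections of the same registered pixel therefore fuse into a high-confidence one, which is how multi-frame evidence raises detection precision.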
APA, Harvard, Vancouver, ISO, and other styles
25

Yaghoubi, Ehsan. "Soft Biometric Analysis: MultiPerson and RealTime Pedestrian Attribute Recognition in Crowded Urban Environments." Doctoral thesis, 2021. http://hdl.handle.net/10400.6/12081.

Full text
Abstract:
Traditionally, recognition systems were based only on human hard biometrics. However, ubiquitous CCTV cameras have raised the desire to analyze human biometrics from far distances, without people's cooperation in the acquisition process. High-resolution face close-shots are rarely available at far distances, such that face-based systems cannot provide reliable results in surveillance applications. Human soft biometrics such as body and clothing attributes are believed to be more effective in analyzing human data collected by security cameras. This thesis contributes to human soft biometric analysis in uncontrolled environments and mainly focuses on two tasks: Pedestrian Attribute Recognition (PAR) and person re-identification (re-id). We first review the literature of both tasks and highlight the history of advancements, recent developments, and the existing benchmarks. PAR and person re-id difficulties are due to significant distances between intra-class samples, which originate from variations in several factors such as body pose, illumination, background, occlusion, and data resolution. Recent state-of-the-art approaches present end-to-end models that can extract discriminative and comprehensive feature representations from people. The correlation between different regions of the body and dealing with limited learning data is also the objective of many recent works. Moreover, class imbalance and correlation between human attributes are specific challenges associated with the PAR problem. We collect a large surveillance dataset to train a novel gender recognition model suitable for uncontrolled environments. We propose a deep residual network that extracts several pose-wise patches from samples and obtains a comprehensive feature representation. In the next step, we develop a model for multiple attribute recognition at once.
Considering the correlation between human semantic attributes and class imbalance, we respectively use a multi-task model and a weighted loss function. We also propose a multiplication layer on top of the backbone feature extraction layers to exclude the background features from the final representation of samples and draw the attention of the model to the foreground area. We address the problem of person re-id by implicitly defining the receptive fields of deep learning classification frameworks. The receptive fields of deep learning models determine the most significant regions of the input data for providing correct decisions. Therefore, we synthesize a set of learning data in which the destructive regions (e.g., background) in each pair of instances are interchanged. A segmentation module determines destructive and useful regions in each sample, and the labels of synthesized instances are inherited from the sample that shared the useful regions in the synthesized image. The synthesized learning data are then used in the learning phase and help the model rapidly learn that the identity and background regions are not correlated. Meanwhile, the proposed solution could be seen as a data augmentation approach that fully preserves the label information and is compatible with other data augmentation techniques. When re-id methods are learned in scenarios where the target person appears with identical garments in the gallery, the visual appearance of clothes is given the most importance in the final feature representation. Cloth-based representations are not reliable in long-term re-id settings, as people may change their clothes. Therefore, developing solutions that ignore clothing cues and focus on identity-relevant features is in demand. We transform the original data such that the identity-relevant information of people (e.g., face and body shape) is removed, while the identity-unrelated cues (i.e., color and texture of clothes) remain unchanged.
A model learned on the synthesized dataset predicts the identity-unrelated cues (short-term features). Therefore, we train a second model, coupled with the first, that learns the embeddings of the original data such that the similarity between the embeddings of the original and synthesized data is minimized. This way, the second model predicts based on the identity-related (long-term) representation of people. To evaluate the performance of the proposed models, we use PAR and person re-id datasets, namely BIODI, PETA, RAP, Market-1501, MSMT-V2, PRCC, LTCC, and MIT, and compare our experimental results with state-of-the-art methods in the field. In conclusion, the data collected from surveillance cameras have low resolution, such that the extraction of hard biometric features is not possible, and face-based approaches produce poor results. In contrast, soft biometrics are robust to variations in data quality. So, we propose approaches both for PAR and person re-id to learn discriminative features from each instance, and evaluate our proposed solutions on several publicly available benchmarks.
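The weighted loss function mentioned for class-imbalanced attributes is commonly an inverse-frequency-weighted binary cross-entropy over the attribute vector; the sketch below is a generic version of that idea, with invented weights and data, not the thesis's exact formulation:

```python
import numpy as np

def inverse_frequency_weights(y_true):
    """Up-weight rare positive attributes: w_k = (1 - rate_k) / rate_k,
    where rate_k is the fraction of positives for attribute k."""
    rate = np.clip(y_true.mean(axis=0), 1e-3, 1 - 1e-3)
    return (1 - rate) / rate

def weighted_bce(y_true, y_pred, pos_weight):
    """Multi-attribute binary cross-entropy with per-attribute positive
    weights. y_true, y_pred: (n_samples, n_attrs); pos_weight: (n_attrs,)."""
    p = np.clip(y_pred, 1e-7, 1 - 1e-7)
    loss = -(pos_weight * y_true * np.log(p)
             + (1 - y_true) * np.log(1 - p))
    return float(loss.mean())
```

With such weights, a missed positive of a rare attribute costs far more than a missed positive of a balanced one, counteracting the imbalance the abstract describes.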
This thesis was prepared at the University of Beira Interior, IT - Instituto de Telecomunicações, Soft Computing and Image Analysis Laboratory (SOCIA Lab), Covilhã Delegation, and was submitted to the University of Beira Interior for defense in a public examination session.
APA, Harvard, Vancouver, ISO, and other styles
26

Gould, Gregory M. "A spatial analysis of passenger vehicle attributes, environmental impact and policy /." 2006. http://www.library.umaine.edu/theses/pdf/GouldGM2006.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Zhou, Jiawen. "Estimating attribute-based reliability in cognitive diagnostic assessment." Phd thesis, 2010. http://hdl.handle.net/10048/1052.

Full text
Abstract:
Thesis (Ph.D.) -- University of Alberta, 2010.
"A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Measurement, Evaluation and Cognition, Department of Educational Psychology, University of Alberta." Title from PDF file main screen (viewed on May 19, 2010). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
28

Čabaravdić, Azra [Verfasser]. "Efficient estimation of forest attributes with k-NN / vorgelegt von Azra Čabaravdić." 2007. http://d-nb.info/985124164/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Wu, Wen-sheng, and 吳文盛. "Applying Attribute Values Partitioning and GA Clustering Technique for Estimating Missing Values." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/68591033503377203601.

Full text
Abstract:
Master's thesis
Nanhua University
Master's Program, Department of Information Management
Academic year 97
Data mining is a vitally important technique for uncovering hidden information in a set of raw data. Managers can exploit the mining results to make effective decisions. However, missing data significantly distort data mining results, so preprocessing of missing values is critical to successful data mining. Data clustering partitions a dataset into subsets so that the data in each subset share a common pattern, and this shared pattern can be utilized to estimate missing values. In this study, we propose an attribute values partitioning technique that preserves the relationships between attributes for estimating missing values. In addition, the genetic algorithm is a powerful population-based stochastic search process for finding robust clustering results, so we also propose a genetic clustering-based approach to estimate the missing data. Furthermore, we integrate the attribute values partitioning with the genetic clustering techniques to improve estimation performance. The effectiveness of the proposed approaches is demonstrated on four datasets at four different rates of missing data. The empirical evaluation shows that the integrated missing data processing approach provides competitive results or performs well compared with existing methods.
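The cluster-based imputation step can be sketched as follows. This is a simplified illustration in which the cluster centres are assumed to be already found (in the thesis, by GA-based clustering); missing entries are filled from the nearest centre, with distance computed over the observed attributes only.

```python
def impute_with_clusters(record, centroids):
    """Impute missing values (None) in `record` from the nearest centroid.

    Distance is computed over observed attributes only; the missing
    entries are then filled from the closest cluster centre. Finding
    good centroids is the job of the clustering step (GA-based in the
    thesis); here they are assumed given.
    """
    def dist(c):
        # Squared distance over the attributes that are present.
        return sum((r - ci) ** 2 for r, ci in zip(record, c) if r is not None)

    best = min(centroids, key=dist)
    return [ci if r is None else r for r, ci in zip(record, best)]
```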
APA, Harvard, Vancouver, ISO, and other styles
30

Peng, Shu-Ya, and 彭書亞. "The Effect of Firm-Level Attributes Using Market-Share Estimation Based on Aggregate Data." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/zb4tvf.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Business Administration
Academic year 107
This study explores the impact of different firm-level variables on consumer utility and market share. We introduce the BLP model to analyze consumers' utility and convert it into market shares. We then select firms and variables from the dataset to conduct the empirical study. Because we encountered a problem of parameter divergence, we modify the BLP model. After modification, we obtain precise predictions and find that product margin is the key factor influencing a firm's market share.
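The share equation underlying this kind of model can be illustrated with the plain logit special case; the BLP model adds random coefficients and instruments on top of this basic structure, so the sketch below is illustrative only.

```python
import math

def logit_shares(mean_utilities):
    """Market shares implied by a logit demand system.

    s_j = exp(d_j) / (1 + sum_k exp(d_k)), where the 1 in the
    denominator is the outside option with utility normalised to zero.
    """
    expd = [math.exp(d) for d in mean_utilities]
    denom = 1.0 + sum(expd)
    return [e / denom for e in expd]
```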
APA, Harvard, Vancouver, ISO, and other styles
31

Dutton, Jennifer Michelle. "Estimating the value of brand and attributes for retail fresh beef products." 2007. http://digital.library.okstate.edu/etd/umi-okstate-2342.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Wen-Chun, and 李文鈞. "Estimating the House Economic Values of Cultural Heritage Attributes in Tainan City." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/94203200110328128681.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Sociology
Academic year 103
A historic site reflects, in one space, the social values, economic conditions, and cultural background of different historical periods. As tourism grows, the protection and management of cultural heritage have become a focal point of society, and, given limits on manpower, time, and funding, the economic valuation of cultural goods and services has become an increasingly important topic worldwide. Because cultural goods and services have public-good properties and contribute greatly to social welfare, governments often support national cultural heritage organisations through tax remission or financial subsidies.
In this study, taking Tainan City as the research scope, the hedonic price method is combined with Geographic Information Systems (GIS) to construct a housing price evaluation model that incorporates cultural heritage attributes. The model includes structural variables of the sampled houses (such as building age, total transferred floor area, and building type), heritage proximity variables (such as the actual distance from each sampled house to national and municipal historic sites of various categories), and heritage feature variables (such as the number of national and municipal monuments of various categories around each house). Regression analysis is then used to explore how these heritage variables affect housing prices in Tainan City and to assess the marginal value of each heritage attribute.
Finally, GIS is applied at spatial scales, together with the characteristics of historic sites, to integrate and analyse data on the preservation of cultural assets, which can then be linked to land administration, urban planning, and cultural heritage preservation planning. By applying the estimated values of cultural heritage attributes to cultural budgeting, the preservation of cultural assets and heritage management can be implemented progressively, and the sustainable preservation and use of cultural heritage can be decided after overall consideration.
APA, Harvard, Vancouver, ISO, and other styles
33

"The estimation of Eucalyptus plantation forest structural attributes using medium and high spatial resolution satellite imagery." Thesis, 2008. http://hdl.handle.net/10413/354.

Full text
Abstract:
Sustaining the socioeconomic and ecological benefits of South African plantation forests is challenging. A more systematic and rapid forest inventory system is required by forest managers. This study investigates the utility of medium (ASTER 15 m) and high (IKONOS 1-4 m) spatial resolution satellite imagery in an effort to improve the remote capture of structural attributes of even-aged Eucalyptus plantations grown in the warm temperate climatic zone of southern KwaZulu-Natal, South Africa. The conversion of image data to surface reflectance is a prerequisite for the establishment of relationships between satellite remote sensing data and ground-collected forest structural data. In this study, image-based atmospheric correction methods applied to ASTER and IKONOS imagery were evaluated for the purpose of retrieving surface reflectance of plantation forests. Multiple linear regression and canonical correlation analyses were used to develop models for the prediction of plantation forest structural attributes from ASTER data. Artificial neural networks and multiple linear regression were also used to develop models for the assessment of plantation forest structural attributes from IKONOS data. The plantation forest structural attributes considered in this study included: stems per hectare, diameter at breast height, mean tree height, basal area, and volume. In addition, location-based stems per hectare were determined using high spatial resolution panchromatic IKONOS data, where variable and fixed window sizes of local maxima were employed. The image-based dark object subtraction (DOS) model was better suited for atmospheric correction of ASTER and IKONOS imagery of the study area. The medium spatial resolution data were not amenable to estimating even-aged Eucalyptus forest structural attributes. It is still encouraging that up to 64% of variation could be explained by using medium spatial resolution data.
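Dark object subtraction itself is a simple first-order correction; a minimal sketch, assuming the darkest pixel in a band represents pure atmospheric path radiance:

```python
def dark_object_subtraction(band):
    """First-order atmospheric correction: subtract the darkest pixel
    value (assumed to be pure path radiance) from every pixel in the band.

    Operational DOS variants refine the dark-object estimate; this is
    the basic idea only.
    """
    dark = min(min(row) for row in band)
    return [[v - dark for v in row] for row in band]
```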
The results from high spatial resolution data showed a promising result where the ARMSE% values obtained for stems per hectare, diameter at breast height, tree height, basal area and volume are 7.9, 5.1, 5.8, 8.7 and 8.7, respectively. Results such as these bode well for the application of high spatial resolution imagery to forest structural assessment. The results from the location based estimation of stems per hectare illustrated that a variable window size approach developed in this study is highly accurate. The overall accuracy using a variable window size was 85% (RMSE of 189 trees per hectare). The overall findings presented in this study are encouraging and show that high spatial resolution imagery was successful in predicting even-aged Eucalyptus forest structural attributes in the warm temperate climates of South Africa, with acceptable accuracy.
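The location-based stem counting rests on local-maxima filtering of the panchromatic image: bright peaks are treated as candidate tree tops. The sketch below uses a fixed window radius for brevity, whereas the study varies the window size with crown dimensions.

```python
def local_maxima(grid, radius):
    """Return (row, col) positions that are strict maxima within a
    (2*radius+1)-sized square window -- candidate tree tops.

    `grid` is a 2-D list of pixel brightness values.
    """
    rows, cols = len(grid), len(grid[0])
    tops = []
    for i in range(rows):
        for j in range(cols):
            v = grid[i][j]
            neighbours = [
                grid[a][b]
                for a in range(max(0, i - radius), min(rows, i + radius + 1))
                for b in range(max(0, j - radius), min(cols, j + radius + 1))
                if (a, b) != (i, j)
            ]
            if all(v > n for n in neighbours):
                tops.append((i, j))
    return tops
```

Stems per hectare then follows from the count of detected tops divided by the area covered by the image window.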
Thesis (Ph.D.) - University of KwaZulu-Natal, Pietermaritzburg, 2008.
APA, Harvard, Vancouver, ISO, and other styles
34

Pierson, Margaret Parker. "Price competition and the impact of service attributes: Structural estimation and analytical characterizations of equilibrium behavior." Thesis, 2012. https://doi.org/10.7916/D89029WZ.

Full text
Abstract:
This dissertation addresses a number of outstanding, fundamental questions in the operations management and industrial organization literature. Operations management literature has a long history of studying the competitive impact of operational, firm-level strategic decisions within oligopoly markets. The first essay reports on an empirical study of an important industry, the drive-thru fast-food industry. We estimate a competition model, derived from an underlying Mixed Multinomial Logit (MMNL) consumer choice model, using detailed empirical data. The main goal is to measure to what extent waiting time performance, along with price levels, brand attributes, geographical and demographic factors, impacts competing firms' market shares. The primary goal of our second essay is to characterize the equilibrium behavior of price competition models with Mixed Multinomial Logit (MMNL) demand functions under affine cost structures. In spite of the huge popularity of MMNL models in both the theoretical and empirical literature, it is not known, in general, whether a Nash equilibrium (in pure strategies) of prices exists, and whether the equilibria can be uniquely characterized as the solutions to the system of First Order Condition (FOC) equations. In the third essay, which is the most general in its context, we establish that in the absence of cost efficiencies resulting from a merger, aggregate profits of the merging firms increase, as do equilibrium prices, for general price competition models with general nonlinear demand and cost functions, as long as the models are supermodular, with two additional structural conditions: (i) each firm's profit function is strictly quasi-concave in its own price(s), and (ii) markets are competitive, i.e., in the pre-merger industry, each firm's profits increase when any of its competitors unilaterally increases its price.
Even the equilibrium profits of the remaining firms in the industry increase, while the consumer ends up holding the bag, i.e., consumer welfare declines. As demonstrated by this essay, the answers to these sorts of strategy questions have implications not only for the firms and customers but also for the policy makers policing these markets.
APA, Harvard, Vancouver, ISO, and other styles
35

Cheng, Wen-chieh, and 鄭文傑. "Estimating Cyclist’s Preference on Service Attributes of Special Green Trains: Heterogeneity by Recreation Specialization." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/48909192763403472122.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Transportation and Communication Management Science (Master's and Doctoral Program)
Academic year 97
Bicycle tourism is an increasingly important travel mode during vacations. However, the mobility of bicycle tourism is constrained by the distance between the trip origin and the destination, so the combined use of bicycle and train has become a new type of travel. This kind of special green train is a new service for cyclists in Taiwan. Therefore, understanding cyclists' preferences for green train service attributes can provide insightful information for managerial policy-making and planning. The objective of this study is to evaluate cyclists' preferences and their willingness to pay for hypothetical managerial developments of the special green train's service attributes, and to evaluate how segmenting cyclists by recreation specialization level reveals different preferences for service attributes. The results indicate that a baggage area, a fixed frame in a dedicated area, space for bicycles and people in the same carriage, and frequency exhibit statistically significant effects on choice probability. The negative coefficient of price indicates that increasing fare levels reduce utility and hence the choice probability. The results also indicate that high and low recreation specialization cyclists demonstrate different preferences for service attributes across the two segments.
APA, Harvard, Vancouver, ISO, and other styles
36

Tzi-Li, Chang, and 張自立. "A Study of Estimating Comprehensive National Power Using Data Envelope Analysis and Fuzzy Multiple Attribute Decision Making." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/35829483962529977690.

Full text
Abstract:
Master's thesis
National Defense Management College
Graduate Institute of Resource Management
Academic year 89
There are many methods for evaluating a nation's Comprehensive National Power (CNP), including the Index Method, the Analytic Hierarchy Process, and the Delphi Method. Some are qualitative and descriptive, others quantitative and mathematical, and each has its own viewpoint, interpretation, advantages, and disadvantages. Evaluating CNP must account for its multi-input, multi-output, multiple-attribute nature and for incomplete information, and the purpose of this study is to address these difficulties. The factors for evaluating CNP are selected from the related literature and expert opinion on the basis of grounded theory, and a two-stage evaluation procedure for the CNP system is proposed: (1) apply Data Envelopment Analysis (DEA) to obtain the relative efficiency of different nations' CNP and screen out the relatively efficient ones; (2) apply Fuzzy Multiple Attribute Decision Making (FMADM) to handle attributes that are hard to quantify or involve incomplete information and vague concepts, such as politics, economy, science and technology, and military affairs. The main contributions of this research are: (1) measurement criteria for evaluating CNP, derived from the literature and expert interviews on the basis of grounded theory; (2) an integrated framework in which the evaluation factors of a national CNP system cover human resources, natural resources, science and technology, domestic economy, business management, internationalization, government, infrastructure, and defense capability; (3) a comparison of the CNP rankings of six major nations, creating a new and simple model for measuring national CNP using the DEA and FMADM methods.
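After defuzzification, FMADM-style scoring reduces to weighted aggregation of attribute ratings. The sketch below shows simple additive weighting as a minimal illustration of that aggregation step, not the thesis's exact procedure.

```python
def weighted_score(ratings, weights):
    """Simple additive weighting: normalise the weights and aggregate
    one alternative's attribute ratings into a single score."""
    total = sum(weights)
    return sum(r * w for r, w in zip(ratings, weights)) / total

def rank_alternatives(alternatives, weights):
    """Return alternative names ordered by descending weighted score.

    `alternatives` maps a name to its list of attribute ratings.
    """
    return sorted(alternatives, key=lambda a: -weighted_score(alternatives[a], weights))
```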
APA, Harvard, Vancouver, ISO, and other styles
37

(6949067), Aaron J. Staples. "Consumer Willingness-to-Pay for Sustainability Attributes in Beer: A Choice Experiment Using Eco-Labels." Thesis, 2019.

Find full text
Abstract:

Commercial and regional brewers are increasingly investing in sustainability equipment that reduces input use, operating costs, and environmental impact. These technologies often require significant upfront costs that can limit market access for microbreweries. One potential solution for these brewers is to market their product as sustainable and charge a premium to offset some of the costs. A stated preference choice experiment on a nationally representative sample is undertaken to elicit consumer willingness-to-pay (WTP) for sustainability attributes in beer, thus determining whether a market for sustainably made beer exists. The facets of sustainability, including water reduction, energy reduction, and landfill diversion, are portrayed through eco-labels affixed to the front of the primary packaging (aluminum can or glass bottle). Multiple specifications are employed to handle model shortcomings and incorporate discrete heterogeneity. Across all model specifications, consumers show a positive and statistically significant marginal WTP for landfill diversion practices and carbon reduction practices, ranging from $0.40 to $1.37 per six-pack and $0.67 to $1.21 per six-pack, respectively. These results indicate that consumers do in fact place value on beer produced using sustainable practices, and that the demographics of consumers with the greatest WTP are similar to those of craft beer consumers.
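In logit-based choice models of this kind, marginal WTP for an attribute is the ratio of its coefficient to the (negative) price coefficient; a minimal illustration:

```python
def marginal_wtp(attr_coef, price_coef):
    """Marginal willingness-to-pay implied by a linear-in-price choice
    model: WTP = -beta_attribute / beta_price, with beta_price < 0."""
    if price_coef >= 0:
        raise ValueError("price coefficient should be negative")
    return -attr_coef / price_coef
```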

APA, Harvard, Vancouver, ISO, and other styles