To view the other types of publications on this topic, follow this link: Objective data.

Dissertations on the topic "Objective data"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Get to know the top 50 dissertations for research on the topic "Objective data".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Browse dissertations from a variety of disciplines and compile a correctly formatted bibliography.

1

Kwoh, Chee Keong. „Probabilistic reasoning from correlated objective data“. Thesis, Imperial College London, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307686.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Kirkland, Oliver. „Multi-objective evolutionary algorithms for data clustering“. Thesis, University of East Anglia, 2014. https://ueaeprints.uea.ac.uk/51331/.

Full text of the source
Abstract:
In this work we investigate the use of Multi-Objective metaheuristics for the data-mining task of clustering. We first investigate methods of evaluating the quality of clustering solutions, we then propose a new Multi-Objective clustering algorithm driven by multiple measures of cluster quality and then perform investigations into the performance of different Multi-Objective clustering algorithms. In the context of clustering, a robust measure for evaluating clustering solutions is an important component of an algorithm. These Cluster Quality Measures (CQMs) should rely solely on the structure of the clustering solution. A robust CQM should have three properties: it should be able to reward a "good" clustering solution; it should decrease in value monotonically as the solution quality deteriorates and, it should be able to evaluate clustering solutions with varying numbers of clusters. We review existing CQMs and present an experimental evaluation of their robustness. We find that measures based on connectivity are more robust than other measures for cluster evaluation. We then introduce a new Multi-Objective Clustering algorithm (MOCA). The use of Multi-Objective optimisation in clustering is desirable because it permits the incorporation of multiple measures of cluster quality. Since the definition of what constitutes a good clustering is far from clear, it is beneficial to develop algorithms that allow for multiple CQMs to be accommodated. The selection of the clustering quality measures to use as objectives for MOCA is informed by our previous work with internal evaluation measures. We explain the implementation details and perform experimental work to establish its worth. We compare MOCA with k-means and find some promising results. We find that MOCA can generate a pool of clustering solutions that is more likely to contain the optimal clustering solution than the pool of solutions generated by k-means. We also perform an investigation into the performance of different implementations of MOEA algorithms for clustering. We find that representations of clustering based around centroids and medoids produce more desirable clustering solutions and Pareto fronts. We also find that mutation operators that greatly disrupt the clustering solutions lead to better exploration of the Pareto front whereas mutation operators that modify the clustering solutions in a more moderate way lead to higher quality clustering solutions. We then perform more specific investigations into the performance of mutation operators focussing on operators that promote clustering solution quality, exploration of the Pareto front and a hybrid combination. We use a number of techniques to assess the performance of the mutation operators as the algorithms execute. We confirm that a disruptive mutation operator leads to better exploration of the Pareto front and mutation operators that modify the clustering solutions lead to the discovery of higher quality clustering solutions. We find that our implementation of a hybrid mutation operator does not lead to a good improvement with respect to the other mutation operators but does show promise for future work.
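The Pareto dominance relation that drives algorithms such as MOCA is compact enough to state in code. A minimal sketch in Python, assuming all cluster quality measures are to be maximised (the measure names are illustrative, not those used in the thesis):

def dominates(a, b):
    """True if solution a Pareto-dominates b: a is no worse on every
    objective and strictly better on at least one (all maximised)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Each clustering solution is scored by several quality measures,
# e.g. (connectivity, compactness) -- placeholder names.
scores_a = (0.9, 0.7)
scores_b = (0.8, 0.7)
print(dominates(scores_a, scores_b))  # True: better on one measure, equal on the other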
APA, Harvard, Vancouver, ISO, and other citation styles
3

Fieldsend, Jonathan E. „Novel algorithms for multi-objective search and their application in multi-objective evolutionary neural network training“. Thesis, University of Exeter, 2003. http://hdl.handle.net/10871/11706.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Brown, Nathan C. (Nathan Collin). „Early building design using multi-objective data approaches“. Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/123573.

Full text of the source
Abstract:
Thesis: Ph. D. in Architecture: Building Technology, Massachusetts Institute of Technology, Department of Architecture, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 201-219).
During the design process in architecture, building performance and human experience are increasingly understood through computation. Within this context, this dissertation considers how data science and interactive optimization techniques can be combined to make simulation a more effective component of a natural early design process. It focuses on conceptual design, since technical principles should be considered when global decisions are made concerning the massing, structural system, and other design aspects that affect performance. In this early stage, designers might simulate structure, energy, daylighting, thermal comfort, acoustics, cost, and other quantifiable objectives. While parametric simulations offer the possibility of using a design space exploration framework to make decisions, their resulting feedback must be synthesized together, along with non-quantifiable design goals.
Previous research has developed optimization strategies to handle such multi-objective scenarios, but opportunities remain to further adapt optimization for the creative task of early building design, including increasing its interactivity, flexibility, accessibility, and ability to both support divergent brainstorming and enable focused performance improvement. In response, this dissertation proposes new approaches to parametric design space formulation, interactive optimization, and diversity-based design. These methods span in utility from early ideation, through global design exploration, to local exploration and optimization. The first presented technique uses data science methods to interrogate, transform, and, for specific cases, generate design variables for exploration. The second strategy involves interactive stepping through a design space using estimated gradient information, which offers designers more freedom compared to automated solvers during local exploration.
The third method addresses computational measurement of diversity within parametric design and demonstrates how such measurements can be integrated into creative design processes. These contributions are demonstrated on an integrated early design example and preliminarily validated using a design study that provides feedback on the habits and preferences of architects and engineers while engaging with data-driven tools. This study reveals that performance-enabled environments tend to improve simulated design objectives, while designers prefer more flexibility than traditional automated optimization approaches when given the choice. Together, these findings can stimulate further development in the integration of interactive approaches to multi-objective early building design. Key words: design space exploration, conceptual design, design tradeoffs, interactive design tools, structural design, sustainable design, multi-objective optimization, data science, surrogate modeling
by Nathan C. Brown.
Ph.D. in Architecture: Building Technology, Massachusetts Institute of Technology, Department of Architecture
APA, Harvard, Vancouver, ISO, and other citation styles
5

Mostaghim, Sanaz. „Multi-objective evolutionary algorithms : data structures, convergence, and diversity“. [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=974405604.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Furst, Séverine. „Multi-objective optimization for joint inversion of geodetic data“. Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS017/document.

Full text of the source
Abstract:
The Earth's surface is affected by numerous local processes like volcanic events, landslides or earthquakes. Along with these natural processes, anthropogenic activities including extraction and storage of deep resources (e.g. minerals, hydrocarbons) shape the Earth at different space and time scales. These mechanisms produce ground deformation that can be detected by various geodetic instruments such as GNSS, InSAR and tiltmeters. The purpose of the thesis is to develop a numerical tool to provide the joint inversion of multiple geodetic data associated with plate deformation or volume strain change at depth. Four kinds of applications are targeted: interseismic plate deformation, volcano deformation, deep mining, and oil & gas extraction. Different inverse model complexities were considered: the I-level considers a single type of geodetic data with a time-independent process. An application is made by inverting GPS data across southern California to determine the lateral variations of lithospheric rigidity (Furst et al., 2017). The II-level also accounts for a single type of geodetic data but with a time-dependent process. The joint determination of strain change history and the drift parameters of a tiltmeter network is studied through a synthetic example (Furst et al., submitted). The III-level considers different types of geodetic data and a time-dependent process. A fictitious network made up of GNSS, InSAR, tiltmeter and levelling surveys is defined to compute the time-dependent volume change of a deep source of strain. We develop a methodology to implement these different levels of complexity in a single software package. Because the inverse problem is possibly ill-posed, the functional to minimize may display several minima. Therefore, a global optimization algorithm is used (Mohammadi and Saïac, 2003). The forward part of the problem is treated by using a collection of numerical and analytical elastic models allowing the deformation processes at depth to be modelled. Thanks to these numerical developments, new advances for inverse geodetic problems should be possible, such as the joint inversion of various types of geodetic data acquired for volcano monitoring. In this perspective, the possibility to determine the tiltmeter drift parameters by inversion should allow for a precise determination of deep strain sources. Also, the developed methodology can be used for an accurate monitoring of oil & gas reservoir deformation.
APA, Harvard, Vancouver, ISO, and other citation styles
7

Ray, Subhasis. „Multi-objective optimization of an interior permanent magnet motor“. Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116021.

Full text of the source
Abstract:
In recent years, due to growing environmental awareness regarding global warming, green cars, such as hybrid electric vehicles, have gained a lot of importance. With the decreasing cost of rare earth magnets, brushless permanent magnet motors, such as the Interior Permanent Magnet Motor, have found usage as part of the traction drive system in these types of vehicles. As a design issue, building a motor with a performance curve that suits both city and highway driving has been treated in this thesis as a multi-objective problem; matching specific points of the torque-speed curve to the desired performance output. Conventionally, this has been treated as separate problems or as a combination of several individual problems, but doing so gives little information about the trade-offs involved. As a means of identifying the compromising solutions, we have developed a stochastic optimizer for tackling electromagnetic device optimization and have also demonstrated a new innovative way of studying how different design parameters affect performance.
APA, Harvard, Vancouver, ISO, and other citation styles
8

Habib, Irfan. „Multi-objective optimisation of compute and data intensive e-science workflows“. Thesis, University of the West of England, Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.573383.

Full text of the source
Abstract:
Raw e-Science data, which may be, for example, MRI brain scans, data from a high energy physics detector or metric data from earth observation projects, needs to undergo a series of computations before meaningful knowledge can be derived. One way to describe these series of computations on raw e-Science data is workflows. Workflows have emerged as the principal mechanism for describing and enacting complex e-Science analyses on distributed infrastructures such as Grids. Workflows provide domain scientists with a systematic, repeatable and reproducible means of conducting scientific analyses. Due to the demands of state-of-the-art e-Science applications, scientific workflows are increasing in complexity. This complexity is multi-dimensional; scientific workflows are scaling in terms of the number of computations and tasks they carry out. They are also scaling in terms of the data they manage and generate. Scientific workflows are also scaling in terms of the resources they consume. Due to all of these factors the optimisation of these workflows is a prime concern. State-of-the-art approaches to workflow optimisation primarily focus on compute optimisation. However, as e-Science is becoming increasingly data-centric, data optimisation is gaining increasing importance. This thesis explores the development of a multi-objective approach to the optimisation of scientific workflows. Differing and conflicting considerations are required to optimise a workflow for compute or data efficiency. The approach proposed formulates the optimisation of a scientific workflow as a multi-objective optimisation problem and demonstrates its solution through the use of a multi-objective evolutionary meta-heuristic. The results demonstrate that significant optimisation can be achieved through this approach.
APA, Harvard, Vancouver, ISO, and other citation styles
9

Mostaghim, Sanaz [Verfasser]. „Multi-Objective Evolutionary Algorithms : Data Structures, Convergence, and Diversity / Sanaz Mostaghim“. Aachen : Shaker, 2005. http://d-nb.info/1181620465/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Ludick, Chantel Judith. „Disaggregating employment data to building level : a multi-objective optimisation approach“. Diss., University of Pretoria, 2020. http://hdl.handle.net/2263/75596.

Full text of the source
Abstract:
The land use policies and development plans that are implemented in a city contribute to whether the city will be sustainable in the future. Therefore, when these policies are being established they should consider the potential impact on development. An analytical tool, such as a land use change model, allows decision-makers to see the possible impact that these policies could have on development. Land use change models like UrbanSim make use of the relationship between households, buildings, and employment opportunities to model the decisions that people make on where to live and work. To be able to do this the model needs accurate data. When there is a more accurate location for the employment opportunities in an area, the decisions made by individuals can be better modelled and therefore the projected results are expected to be better. Previous research indicated that the methods that are traditionally used to disaggregate employment data to a lower level in UrbanSim projects are not applicable in the South African context. This is because the traditional methods require a detailed employment dataset for the disaggregation, and this detailed employment dataset is not available in South Africa. The aim of this project was to develop a methodology for a metropolitan municipality in South Africa that could be used to disaggregate the employment data that is available at a higher level to a more detailed building level. To achieve this, the methodology consisted of two parts. The first part of the methodology was establishing a method that could be used to prepare a base dataset for disaggregating the employment data. The second part of the methodology was using a multi-objective optimisation approach to allocate the number of employment opportunities within a municipality to building level. The algorithm was developed using the Distributed Evolutionary Algorithms in Python (DEAP) computational framework. DEAP is an open-source evolutionary algorithm framework that is developed in Python and enables users to rapidly create prototypes by allowing them to customise the algorithm to suit their needs. The evaluation showed that it is possible to make use of multi-objective optimisation to disaggregate employment data to building level. The results indicate that the employment allocation algorithm was successful in disaggregating employment data from municipal level to building level. All evolutionary algorithms come with some degree of uncertainty, as one of their main features is that they search for the most optimal of many candidate solutions, and so other solutions are available as well. Thus, the results of the algorithm also come with that same level of uncertainty. By enhancing the data used by land use change models, the performance of the overall model is improved. With this improved performance of the model, an improved view of the impact that land use policies could have on development can also be seen. This will allow decision-makers to draw the best possible conclusions and allow them the best possible opportunity to develop policies that will contribute to creating sustainable and lasting urban areas.
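DEAP, the framework named above, assembles evolutionary algorithms from small registered components. A minimal sketch, assuming a toy two-objective allocation problem (the objectives, parameters and data are invented for illustration and are not the dissertation's actual model):

import random
from deap import base, creator, tools

# Two objectives, both minimised (e.g. allocation error and spread).
creator.create("FitnessMin2", base.Fitness, weights=(-1.0, -1.0))
creator.create("Individual", list, fitness=creator.FitnessMin2)

N_BUILDINGS, TOTAL_JOBS = 20, 1000  # placeholder problem size

def evaluate(ind):
    # Placeholder objectives: deviation from the municipal total and a
    # smoothness penalty; the real objectives would encode the study's
    # allocation criteria.
    total_error = abs(sum(ind) - TOTAL_JOBS)
    spread = max(ind) - min(ind)
    return total_error, spread

toolbox = base.Toolbox()
toolbox.register("attr", random.randint, 0, 100)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr, N_BUILDINGS)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", evaluate)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutUniformInt, low=0, up=100, indpb=0.1)
toolbox.register("select", tools.selNSGA2)

pop = toolbox.population(n=50)
for ind in pop:
    ind.fitness.values = toolbox.evaluate(ind)
pop = toolbox.select(pop, k=50)  # NSGA-II non-dominated sorting + crowding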
Dissertation (MSc (Geoinformatics))--University of Pretoria, 2020.
Geography, Geoinformatics and Meteorology
MSc (Geoinformatics)
Unrestricted
APA, Harvard, Vancouver, ISO, and other citation styles
11

Ashoor, Khalil Layla Ali. „Performance analysis integrating data envelopment analysis and multiple objective linear programming“. Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/performance-analysis-integrating-data-envelopment-analysis-and-multiple-objective-linear-programming(65485f28-f6c5-4eff-b422-6dd05f1b46fe).html.

Full text of the source
Abstract:
Firms and organisations implement performance assessment to improve productivity, but evaluating the performance of firms or organisations may be complex due to the existence of conflicting objectives. Data Envelopment Analysis (DEA) is a non-parametric approach utilized to evaluate the relative efficiencies of decision making units (DMUs) within firms or organizations that perform similar tasks. Although DEA measures the relative efficiency of a set of DMUs, the efficiency scores generated do not consider the decision maker's (DM's) or expert preferences. DEA is used to measure efficiency and can be extended to include DM's and expert preferences by incorporating value judgements. Value judgements can be implemented by two techniques: weight restrictions or constructing an equivalent Multiple Objective Linear Programming (MOLP) model. Weight restrictions require prior knowledge to be provided by the DM, and moreover the DM cannot interfere during the assessment analysis. On the other hand, the second approach enables the DM to interfere during performance assessment without prior knowledge, whilst providing alternative objectives that allow the DM to reach the most preferred decision subject to available resources. The main focus of this research was to establish interactive frameworks that allow the DM to set targets, according to his preferences, and to test alternatives that can realistically be measured through an interactive procedure. These frameworks are based on building an equivalence model between extended DEA and the MOLP minimax formulation, incorporating an interactive procedure. In this study two frameworks were established. The first is based on an equivalence model between the DEA trade-off approach and the MOLP minimax formulation, which allows for incorporating DM's and expert preferences. The second is based on an equivalence model between the DEA bounded model and the MOLP minimax formulation. This allows for integrating the DM's preferences through interactive steps to measure the whole efficiency score (i.e. best and worst efficiency) of an individual DMU. In both approaches a gradient projection interactive approach is implemented to estimate, regionally, the most preferred solution along the efficient frontier. The second framework was further extended by including ranking based on the geometric average. All the frameworks developed and presented were tested through implementation on two real case studies.
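For reference, the DEA efficiency scores discussed above are computed, in the standard input-oriented CCR envelopment form, from the following linear program (the textbook formulation; the thesis's extended models build on it), where x_j and y_j are the input and output vectors of DMU j and θ is the efficiency of the evaluated DMU o:

\begin{aligned}
\min_{\theta,\,\lambda} \quad & \theta \\
\text{s.t.} \quad & \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{io}, && i = 1,\dots,m,\\
& \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro}, && r = 1,\dots,s,\\
& \lambda_j \ge 0, && j = 1,\dots,n.
\end{aligned}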
APA, Harvard, Vancouver, ISO, and other citation styles
12

Namavari, Hamed. „Essays on Objective Procedures for Bayesian Hypothesis Testing“. University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563872718411158.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
13

Burvall, Benjamin. „Improvement of Container Placement Using Multi-Objective Ant Colony Optimization“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-249709.

Full text of the source
Abstract:
High resource requirements on software containers lead to the need for cloud users to find an optimal placement for each container to maximize the resource utilization in the cloud environment. Previous methods have scheduled containers in a cloud environment, optimizing a single objective, both in theory and practice. This thesis presents a Multi-Objective Container Placement Ant Colony Optimization (MOCP-ACO) algorithm, which has not been previously researched. MOCP-ACO is a modified implementation of Ant Colony Optimization, known for solving similar optimization problems, and is compared to the Spread scheduling strategy in Docker Swarm mode. The aim of this thesis is to optimize network usage and virtual machine (VM) cost, when using replicated redundant containers, in simulation and deployed as a network heavy application in Docker Swarm mode in a cloud environment. The results show that the implemented MOCP-ACO simulations with random network traffic have a significant reduction of VM cost and total network traffic. The application-specific simulation performed better in reducing VM cost but is only slightly better in reducing network traffic. Finally, the results from the cloud environment deployment showed reduced VM cost and network usage, and that the reduced network usage resulted in improved application performance.
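The transition rule at the heart of ACO variants such as the MOCP-ACO described above picks placements with probability proportional to pheromone and heuristic desirability. A rough Python sketch, assuming a simplified bi-objective heuristic (the fields and weights are placeholders, not the thesis's implementation):

import random

def choose_vm(container, vms, pheromone, alpha=1.0, beta=2.0):
    """Pick a VM for a container with probability proportional to
    pheromone^alpha * heuristic^beta (standard ACO transition rule)."""
    def heuristic(vm):
        # Placeholder bi-objective desirability: cheap VM, low added traffic.
        return 1.0 / (1.0 + vm["cost"] + vm["traffic"](container))
    weights = [pheromone[(container, v["id"])] ** alpha * heuristic(v) ** beta
               for v in vms]
    return random.choices(vms, weights=weights, k=1)[0]

def deposit(pheromone, solution, quality, rho=0.1):
    """Evaporate all trails, then reinforce the placements used by a
    good solution in proportion to its quality."""
    for key in pheromone:
        pheromone[key] *= (1.0 - rho)
    for container, vm_id in solution:
        pheromone[(container, vm_id)] += rho * quality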
APA, Harvard, Vancouver, ISO, and other citation styles
14

Chan, Ruby Wai-Shan. „A psychovisually-based objective image quality evaluator for DCT-based lossy data compression“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ65150.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
15

Baker, Keith Richard. „Multiple objective optimisation of data and control paths in a behavioural silicon compiler“. Thesis, University of Southampton, 1992. https://eprints.soton.ac.uk/361608/.

Full text of the source
Abstract:
The objective of this research was to implement an 'intelligent' silicon compiler that provides the ability to automatically explore the design space and optimise a design, given as a behavioural description, with respect to multiple objectives. The objective has been met by the implementation of the MOODS Silicon Compiler. The user submits goals or objectives to the system, which automatically finds near-optimal solutions. As objectives may be conflicting, trade-offs between synthesis tasks are essential and consequently their simultaneous execution must occur. Tasks are decomposed into behaviour-preserving transformations which, due to their completeness, can be applied in any sequence to a multi-level representation of the design. An accurate evaluation of the design is ensured by feeding up technology-dependent information to a cost function. The cost function guides the simulated annealing algorithm in applying transformations to iteratively optimise the design. The simulated annealing algorithm provides an abstraction from the transformations and the designer's objectives. This abstraction avoids the construction of tailored heuristics which pre-program trade-offs into a system. Pre-programmed trade-offs are used in most systems by assuming a particular shape to the trade-off curve and are inappropriate as trade-offs are technology dependent. The lack of pre-programmed trade-offs in the MOODS system allows it to adapt to changes in technology or library cells. The choice of cells and their subsequent sharing are based on the user's criteria expressed in the cost function, rather than being pre-programmed into the system. The results show that implementations created by MOODS are better than or equal to those achieved by other systems. Comparisons with other systems highlighted the importance of specifying all of a design's data, as the lack of data misrepresents the design, leading to misleading comparisons. The MOODS synthesis system includes an efficient method for automated design space exploration where a varied set of near-optimal implementations can be produced from a single behavioural specification. Design space exploration is an important aspect of designing by high-level synthesis and in the development of synthesis systems. It allows the designer to obtain a perspicuous characterization of a design's design space, allowing them to investigate alternative designs.
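The optimisation loop the abstract describes, simulated annealing steered by a user-weighted cost function, follows the classic Metropolis acceptance rule. A generic sketch under that reading (Python; the transformation and cost terms are placeholders, not MOODS itself):

import math, random

def anneal(design, transform, cost, t0=1.0, cooling=0.995, steps=10000):
    """Generic simulated annealing: always accept improvements, accept
    worsening moves with probability exp(-delta/T) so early exploration
    can escape local minima."""
    t = t0
    best = current = design
    for _ in range(steps):
        candidate = transform(current)           # behaviour-preserving move
        delta = cost(candidate) - cost(current)  # user-weighted objectives
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling
    return best

# The cost function is where the designer's objectives enter, e.g.:
# cost = lambda d: w_area * area(d) + w_delay * delay(d)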
APA, Harvard, Vancouver, ISO, and other citation styles
16

Yaman, Sibel. „A multi-objective programming perspective to statistical learning problems“. Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26470.

Full text of the source
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Chin-Hui Lee; Committee Member: Anthony Yezzi; Committee Member: Evans Harrell; Committee Member: Fred Juang; Committee Member: James H. McClellan. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other citation styles
17

Zabaleta, de Larrañaga Iñaki. „Using objective data from movies to predict other movies’ approval rating through Machine Learning“. Thesis, Högskolan Kristianstad, Fakulteten för naturvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-22111.

Full text of the source
Abstract:
Machine learning is getting better at analyzing data and finding patterns in it, but does machine learning have the capability to predict something subjective, like a movie's rating, using exclusively objective data such as actors, directors, genres, and runtime? Previous research has shown that the profit and performance of actors in certain genres are somewhat predictable. Other studies have had reasonable results using subjective data, such as how many likes the actors and directors have on Facebook or what people say about the movie on Twitter and YouTube. This study presents several machine learning algorithms that use data provided by IMDb to predict the ratings also provided by IMDb, and examines which features of a movie have the biggest impact on its performance. This study found that almost all of the algorithms tested are on average 0.7 stars away from the real rating, which might seem quite accurate, but at the same time 85% of movies have ratings between 5 and 8, which makes the importance of the data used seem less significant.
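As an illustration of the kind of model the study describes, the following sketch fits a regressor to objective movie features and reports the mean absolute error, the "0.7 stars away" figure quoted above (the file and column names are placeholders, not the study's dataset):

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Placeholder file and columns; a real run would use the IMDb datasets.
movies = pd.read_csv("movies.csv")
X = pd.get_dummies(movies[["director", "lead_actor", "genre", "runtime"]])
y = movies["rating"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))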
APA, Harvard, Vancouver, ISO, and other citation styles
18

Xu, Cong. „Multi-objective optimization approaches to efficiency assessment and target setting for bank branches“. Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/multiobjective-optimization-approaches-to-efficiency-assessment-and-target-setting-for-bank-branches(eef70a4a-359d-40ed-9b6c-3eeb98fe477a).html.

Full text of the source
Abstract:
This thesis focuses on combining data envelopment analysis (DEA) and multiple objective linear programming (MOLP) methods to set targets by referencing peers' performances and decision-makers' (DMs') preferences. A large number of past papers have proven the importance of a company having a target; however, obtaining a feasible but challenging target has always been a difficult topic for companies. Since DEA was proposed in 1978, it has become one of the most popular performance assessment tools. The performance possibility set and efficient frontier established by DEA provide solid and scientific reference information for managers to evaluate an individual's efficiency. Based on the successful experience of DEA in performance assessment, many scholars have noted that DEA can be used to set appropriate targets as well; however, traditional DEA models do not include DMs' preference information, which is crucial to a target-setting process. Therefore, several MOLP methods have been introduced to include DMs' preferences in the target-setting process based on the DEA efficient frontier and performance possibility set. The trade-off-based method is one of the most popular interactive methods that have been combined with DEA. However, there are several gaps in the current research: (1) the trade-off-based method could take so many interactions that no DM could finish the interactive process; (2) DMs might find it very difficult to provide the preference information required by MOLP models; and (3) DMs cannot get an intuitive view of the efficient frontier. To address these gaps, this thesis proposes three new trade-off-based interactive target-setting models based on the DEA performance possibility set and efficient frontier to improve DMs' experience when setting targets. The three models can work independently or can be combined during the decision-making process. The piecewise linear model uses a piecewise linear assumption to simulate the DM's real utility function. It gradually narrows down the region that could contain the DM's most-preferred solution (MPS) until it reaches an acceptable range. This model could help DMs who have limited time for interaction but want a global view of the entire efficient frontier. It has also proven very helpful when DMs are not sensitive to close efficient solutions. The prioritized trade-off model provides a new way for a DM to learn about the efficient frontier, allowing the DM to explore it along a preferred direction with a series of trade-off tables and trade-off figures as visual aids. The stepwise trade-off model focuses on situations where the number of objectives (outputs/inputs for the DEA model) is quite large and DMs cannot provide all indifference trade-offs between all the objectives simultaneously. To ease the DM's burden, the stepwise model starts from two objectives and gradually includes new objectives in the decision-making process, with the assumption that the indifference trade-offs between previous objectives are fixed, until all objectives are included. All three models have been validated through numerical examples and case studies of a Chinese state-owned bank to help DMs explore their MPS in the DEA production possibility set.
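The minimax formulation referenced throughout can be stated, in its generic weighted form, as follows, where the f_i are the objectives to be maximised over the feasible set X, g_i is an aspiration (target) point supplied by the DM, and the w_i are the DM's weights (the thesis's specific equivalence models extend this basic form):

\min_{x \in X} \; \max_{i = 1,\dots,k} \; w_i \left( g_i - f_i(x) \right)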
APA, Harvard, Vancouver, ISO, and other citation styles
19

Calonder, Michael. „Multi-objective clustering of gene expression data with evolutionary algorithms a query gene approach /“. Zürich : ETH, Eidgenössische Technische Hochschule Zürich, Institut für Technische Informatik und Kommunikationsnetze, 2006. http://e-collection.ethbib.ethz.ch/show?type=dipl&nr=229.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
20

Cornu, Marek. „Local Search, data structures and Monte Carlo Search for Multi-Objective Combinatorial Optimization Problems“. Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLED043/document.

Full text of the source
Abstract:
Many combinatorial optimization problems consider several, often conflicting, objectives. This thesis deals with local search, data structures and Monte Carlo search methods for finding the set of efficient solutions of such problems, which is the set of all best possible trade-offs given all the objectives. We propose a new approximation method called 2-Phase Iterated Pareto Local Search based on Decomposition (2PIPLS/D), combining the notions of Pareto Local Search (PLS) and decomposition. PLS is a local search descent adapted to multi-objective spaces, and decomposition consists in the subdivision of the multi-objective problem into a number of single-objective problems. Two single-objective methods are considered: Iterated Local Search and Nested Monte Carlo Search. Two main components are embedded within the 2PIPLS/D framework. The first one generalizes and improves an existing method for generating an initial set of solutions. The second one efficiently reduces the search space and accelerates PLS without notable impact on the quality of the generated approximation. We also introduce two new data structures for dynamically managing a set of incomparable solutions. The first one is specialized for the bi-objective case, while the second one is general. 2PIPLS/D is applied to the bi-objective and tri-objective Traveling Salesman Problem and outperforms its competitors on tested instances. Then, 2PIPLS/D is instantiated on a new five-objective problem related to the recent territorial reform of French regions, which resulted in the reassignment of departments to new, larger regions.
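A textbook realisation of the bi-objective data structure mentioned above keeps the archive sorted on the first objective, so that non-domination makes the second objective strictly decreasing along the list. A minimal Python sketch, assuming both objectives are minimised (an illustration of the principle, not the thesis's exact structure):

import bisect

class BiObjectiveArchive:
    """Set of mutually non-dominated points, both objectives minimised.
    Kept sorted by the first objective, so the second objective is
    strictly decreasing along the list."""
    def __init__(self):
        self.points = []  # sorted list of (f1, f2)

    def add(self, p):
        i = bisect.bisect_left(self.points, p)
        # Dominated by the predecessor (f1 no larger, f2 no larger)?
        if i > 0 and self.points[i - 1][1] <= p[1]:
            return False
        # Remove the run of successors that p dominates.
        j = i
        while j < len(self.points) and self.points[j][1] >= p[1]:
            j += 1
        self.points[i:j] = [p]
        return True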
APA, Harvard, Vancouver, ISO, and other citation styles
21

Chumbinho, Rogerio Paulo Antunes. „Objective analysis of a coastal ocean eddy using satellite AVHRR and in situ hydrographic data /“. Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA275715.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
22

Chumbinho, Rogerio Paulo Antunes. „Objective analysis of a coastal ocean eddy using satellite AVHRR and in situ hydrographic data“. Thesis, Monterey, California. Naval Postgraduate School, 1993. http://hdl.handle.net/10945/26116.

Full text of the source
Abstract:
A common characteristic of the interaction between coastal topography and eastern boundary currents (EBC) is the appearance of cold filaments and mesoscale eddies. Hydrographic and satellite temperature data obtained during a cruise on board R/V Point Sur off Point Arena, California, in May 1993 were analyzed to study a particular eddy field in this area. The hydrographic data were first used to verify the remotely sensed surface temperature field, using three-dimensional data visualization. Selected vertical levels from each hydrographic station were then interpolated onto a broader, finer-resolution grid domain in preparation for an eventual model initialization, using multiquadric interpolation. The results verify the existence of the eddy and show its signature in the vertical to about 300 meters depth. A sensitivity study of interpolation parameters was performed to evaluate the approximately optimal set of parameters, showing that the multiquadric interpolation resolves the temperature field very well in the upper levels and introduces small-amplitude, small-scale noise in the deeper levels. This noise can be eliminated by a more thorough parameter sensitivity study.
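Hardy's multiquadric interpolation used above represents the gridded field as a sum of radial basis functions centred on the N observation points x_j, with the coefficients λ_j obtained by forcing the interpolant through the data values d_i; the shape parameter c is among the interpolation parameters the sensitivity study varies:

s(\mathbf{x}) = \sum_{j=1}^{N} \lambda_j \sqrt{\lVert \mathbf{x}-\mathbf{x}_j \rVert^{2} + c^{2}},
\qquad s(\mathbf{x}_i) = d_i, \quad i = 1,\dots,N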
APA, Harvard, Vancouver, ISO, and other citation styles
23

Tang, Fan. „HVAC system modeling and optimization: a data-mining approach“. Thesis, University of Iowa, 2010. https://ir.uiowa.edu/etd/895.

Full text of the source
Abstract:
A heating, ventilating and air-conditioning (HVAC) system is a complex non-linear system with multiple variables contributing simultaneously to the system process. This poses challenges for both system modeling and performance optimization. Traditional modeling methods based on statistical or mathematical functions are limited in capturing the characteristics of system operation and management. Data-driven models have shown great strength in non-linear system modeling and complex pattern recognition. Numerous successful applications of data mining have proved its capability to extract models that accurately describe the relations within a system. Heuristic techniques such as neural networks, support vector machines, and boosted trees have been widely applied to the modeling of HVAC systems. Evolutionary computation has rapidly moved to the center stage of solving multi-objective optimization problems. Inspired by biological behavior, it has shown tremendous power in finding optimal solutions to complex problems. Different applications of evolutionary computation can be found in the business, marketing, medical and manufacturing domains. The focus of this thesis is to apply the evolutionary computation approach to optimizing the performance of HVAC systems. Energy savings can be achieved by implementing the optimal control setpoints while maintaining indoor air quality (IAQ) at an acceptable level. A trade-off between energy saving and indoor air quality maintenance is also investigated by assigning different weights to the corresponding objective function. The major contribution of this research is to provide optimal settings for the existing system to improve its efficiency, along with different preference-based operation methods to optimally utilize the resources.
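The weighted trade-off described above is commonly encoded as a single scalar objective of the form below, where E(u) is energy use under control settings u, Q(u) a penalty for indoor air quality violations, and w the preference weight (an illustrative form, not the thesis's exact function):

\min_{u} \; J(u) = w\,E(u) + (1-w)\,Q(u), \qquad 0 \le w \le 1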
APA, Harvard, Vancouver, ISO, and other citation styles
24

Gerber, Ronald Evan. „The irradiance distribution at the exit pupil of the objective lens in optical disk data storage“. Diss., The University of Arizona, 1995. http://hdl.handle.net/10150/187523.

Full text of the source
Abstract:
This dissertation examines various aspects of optical disk data storage systems from the point of view of the irradiance and phase distributions at the exit pupil of the objective lens. The research topics were chosen in order to address some of the problems facing future generations of optical disk systems. Future optical disks will have a much greater areal data density, and will undoubtedly use a shorter wavelength in order to decrease the size of the optical stylus. The research in this dissertation examines some of the problems inherent in the move to shorter wavelengths. For example, at short wavelengths, the tolerance on acceptable disk tilt becomes tighter; disks must either be manufactured with a tighter flatness tolerance, or a system must be devised such that the problems caused by disk tilt are corrected inside the optical disk drive. (Such a system is described in Chapter 6.) The other topics in this dissertation address similar problems, all of which are essential for a more complete understanding of optical disk technology. After a brief introduction to the irradiance distribution at the exit pupil of the objective lens (typically called the baseball pattern), we describe a novel focusing/tracking technique. Many optical disk drives use the astigmatic technique, in which the baseball pattern is projected onto an astigmatic lens that focuses the beam onto a quadrant detector placed between the two line foci of the lens. We experimentally demonstrate that by projecting the baseball pattern onto a ring lens (i.e. a lens that focuses light to a ring rather than a single spot), we are able to produce steeper focus-error signals that are more resistant to feed-through (induced on the focus-error signal by track crossings during the seek operation) than the astigmatic technique. We then examine the effects of substrate birefringence and tilt on the irradiance and phase distributions at the exit pupil of the objective lens. The irradiance and phase patterns are calculated and experimentally verified for the cases of no substrate birefringence, birefringence aligned with the incident polarization, and birefringence aligned at 45° to the incident polarization. The irradiance at the exit pupil is also calculated and experimentally verified for a grooved substrate for various amounts of substrate tilt. We then examine two distinct effects that are dependent on the incident polarization direction. The first of these is the excitation of surface plasmons at the interface between the dielectric substrate (or air, if the optical disk's storage layer is air-incident) and the metallic thin films in the disk. These plasmons are responsible for dips in the zeroth order diffraction efficiency curves of a metal grating at certain angles of incidence. The dips appear as dark bands in the baseball pattern and are seen only when there is a component of incident polarization that lies perpendicular to the tracks. The location of these bands is derived from theoretical considerations and is shown to depend on the track pitch and the materials involved, but not on the groove depth or width. The band locations are confirmed by zeroth order diffraction efficiency measurements as a function of incident angle. A possible negative effect of these bands is the introduction of additional fluctuations and noise into the focusing and push-pull tracking signals. 
The second of the polarization-dependent effects concerns the differences in tracking performance with respect to the direction of the incident-light polarization. In optical disk storage systems, the signal that provides tracking information is dependent on the groove shape, the optical constants of the materials involved, and the polarization state of the incident light. We show that the tracking signals can be described by two measurable quantities, both of which are largely independent of aberrations in the optical system. Using these two quantities, we match the tracking performance of a given optical disk to an equivalent disk having rectangular grooves - the adjustable parameters being the rectangular groove depth and the duty cycle. By assumption, the rectangular grooves modulate only the phase of the incident beam and disregard its state of polarization. The effective groove depth and the duty cycle thus become dependent on the polarization state of the incident beam. We examine the dependencies for various disks having different groove geometries and different combinations of materials. Next, we use the baseball pattern as a diagnostic tool to develop and demonstrate the concepts of a servo system for the correction of disk tilt. Since disk tilt produces primarily coma in the beam focused onto the disk, the system uses a "variable coma generator" to produce an equal and opposite amount of coma to that caused by the tilted disk. The magnitude and direction of disk tilt are detected using the light reflected from the front facet of the disk substrate. Finally, we address a major obstacle in the construction of future-generation optical disk testers - the use of shorter wavelengths and thinner substrates. A typical aspheric singlet used as the objective lens in optical disk data storage systems will not work at different wavelengths or with different substrate thicknesses, due to spherochromatism. Using two microscope objectives with adjustable collars and a pair of relay lenses, we have constructed a system in which a diffraction-limited spot of any wavelength in the range of 0.4 μm - 0.7 μm can be moved by as much as ±100 μm in both the focusing and tracking directions. This is accomplished by simply moving an aspheric singlet mounted in an off-the-shelf optical head. The system uses the adjustable collars of the microscope objectives to correct for the spherochromatism of the singlet, and to accommodate the various thicknesses of the substrates.
APA, Harvard, Vancouver, ISO, and other citation styles
25

Zhang, Zijun. „Wind turbine vibration study: a data driven methodology“. Thesis, University of Iowa, 2009. https://ir.uiowa.edu/etd/454.

Full text of the source
Abstract:
Vibrations of a wind turbine have a negative impact on its performance, and therefore approaches to effectively control turbine vibrations are sought by the wind industry. The body of previous research on wind turbine vibrations has focused on physics-based models. Such models come with limitations, as some of their ideal assumptions do not reflect reality. In this thesis a data-driven approach to analyzing wind turbine vibrations is introduced. Improvements in information systems allow the collection of large volumes of industrial process data. Although sufficient information is contained in the collected data, it cannot be fully utilized to solve challenging industrial modeling issues. Data mining is a novel science that offers a platform for identifying models or recognizing patterns in large data sets. Various successful applications of data mining have proved its capability to extract models accurately describing the processes of interest. The vibrations of a wind turbine originate from various sources. This thesis focuses on mitigating vibrations with wind turbine control. Data mining algorithms are utilized to construct vibration models of a wind turbine that are represented by two parameters, drive train acceleration and tower acceleration. An evolutionary strategy algorithm is employed to optimize the wind turbine performance expressed with three objectives: power generation, vibration of the wind turbine drive train, and vibration of the wind turbine tower. The methodology presented in this thesis is applicable to industrial processes beyond the wind industry.
APA, Harvard, Vancouver, ISO, and other citation styles
26

Teng, Yan. „Objective speech intelligibility assessment using speech recognition and bigram statistics with application to low bit-rate codec evaluation“. Laramie, Wyo. : University of Wyoming, 2007. http://proquest.umi.com/pqdweb?did=1456283581&sid=5&Fmt=2&clientId=18949&RQT=309&VName=PQD.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
27

Armstrong, Colin Andrew. „The stages of change in exercise adoption and adherence : evaluation of measures with self-report and objective data /“. Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1998. http://wwwlib.umi.com/cr/ucsd/fullcit?p9904722.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
28

Korbmacher, Julie M. [Verfasser], und Frauke [Akademischer Betreuer] Kreuter. „New challenges for interviewers when innovating social surveys : linking survey and objective data / Julie M. Korbmacher. Betreuer: Frauke Kreuter“. München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2014. http://d-nb.info/1067752471/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
29

Leube, Alexander [Verfasser], und Frank [Akademischer Betreuer] Schaeffel. „Depth of focus of the human eye - The transfer from objective data to subjective perception / Alexander Leube ; Betreuer: Frank Schaeffel“. Tübingen : Universitätsbibliothek Tübingen, 2018. http://d-nb.info/1199357634/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
30

Bui, Lam Thu, Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW. „The role of communication messages and explicit niching in distributed evolutionary multi-objective optimization“. Awarded by: University of New South Wales - Australian Defence Force Academy. School of Information Technology and Electrical Engineering, 2007. http://handle.unsw.edu.au/1959.4/38739.

Full text of the source
Abstract:
Dealing with optimization problems with more than one objective has been an important research area in evolutionary computation. The class of multi-objective problems (MOPs) is an important one because multi-objectivity exists in almost all aspects of human life, where there usually exist several compromises in each problem. Multi-objective evolutionary algorithms (MOEAs) have been applied widely in many real-world problems. This is because (1) they work with a population during the course of action, which hence offers more flexible control to find a set of efficient solutions, and (2) real-world problems are usually black-box, where an explicit mathematical representation is unknown. However, MOEAs usually require a large amount of computational effort. This is a substantial challenge in bringing MOEAs to practice. This thesis primarily aims to address this challenge through an investigation into issues of scalability and the balance between exploration and exploitation. These have been outstanding research challenges, not only for MOEAs, but also for evolutionary algorithms in general. A distributed framework of local models using explicit niching is introduced as an overarching umbrella to solve multi-objective optimization problems. This framework is used to address the two-part question about, first, the role of communication messages and, second, the role of explicit niching in distributed evolutionary multi-objective optimization. The concept behind the framework of local models is for the search to be conducted locally in different areas of the decision search space, which allows the local models to be distributed on different processing nodes. During the optimization process, local models interact (exchange messages) with each other using rules inspired from Particle Swarm Optimization (PSO). Hence, the hypothesis of this work is that running several search engines simultaneously in different local areas is better for exploiting local information, while exchanging messages among those diverse engines can provide a better exploration strategy. Within this framework, as the models work locally, they also gain access to some global knowledge of each other. In order to validate the proposed framework, a series of experiments on a wide range of test problems was conducted. These experiments were motivated by the following studies, which in their totality contribute to the verification of our hypothesis: (1) studying the performance of the framework under different aspects such as initialization, convergence, diversity, scalability, and sensitivity to the framework's parameters, (2) investigating interleaving guidance in both the decision and objective spaces, (3) applying local models using estimation of distributions, (4) evaluating local models in noisy environments and (5) the role of communication messages and explicit niching in distributed computing.
The experimental results showed that: (1) the use of local models increases the chance of MOEAs to improve their performance in finding the Pareto optimal front, (2) interaction strategies using PSO rules are suitable for controlling local models, and that they also can be coupled with specialization in order to refine the obtained non-dominated set, (3) estimation of distribution improves when coupled with local models, (4) local models work well in noisy environments, and (5) the communication cost in distributed systems with local models can be reduced significantly by using summary information (such as the direction information naturally determined by local models) as the communication messages, in comparison with conventional approaches using descriptive information of individuals. In summary, the proposed framework is a successful step towards efficient distributed MOEAs.
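As an illustration of the summary-message idea described in this abstract, the following Python sketch runs a few local hill-climbing models on a toy bi-objective problem and lets them exchange only direction vectors, PSO-style. All names (LocalModel, step) and the update constants are hypothetical; this is not the thesis's algorithm, only a minimal rendering of direction information as the communication message.

import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    # Toy bi-objective problem (Schaffer): both f1 and f2 are minimized.
    return np.array([x ** 2, (x - 2.0) ** 2])

def scalar(fs, weights):
    # Weighted-sum scalarization; each local model uses its own weights.
    return float(np.dot(weights, fs))

class LocalModel:
    """One search engine exploring its own region of the decision space.
    Its outgoing 'message' is only the direction of its last improvement,
    not a description of whole individuals."""
    def __init__(self, x0, weights):
        self.x, self.weights = x0, weights
        self.direction = 0.0

    def step(self, peer_directions, step_size=0.1, social=0.5):
        # PSO-inspired move: random local exploration plus a pull along
        # the average direction reported by the other local models.
        pull = float(np.mean(peer_directions)) if peer_directions else 0.0
        cand = self.x + rng.normal(0.0, step_size) + social * pull
        if scalar(objectives(cand), self.weights) < scalar(objectives(self.x), self.weights):
            self.direction = cand - self.x
            self.x = cand

# Three local models aiming at different parts of the Pareto front.
models = [LocalModel(rng.uniform(-4.0, 6.0), w)
          for w in ([0.9, 0.1], [0.5, 0.5], [0.1, 0.9])]
for _ in range(300):
    msgs = [m.direction for m in models]
    for i, m in enumerate(models):
        m.step([d for j, d in enumerate(msgs) if j != i])

for m in models:
    print(round(m.x, 3), objectives(m.x).round(3))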
31

Hafner, Florian. „IMPROVING AIRLINE SCHEDULE RELIABILITY USING A STRATEGIC MULTI-OBJECTIVE RUNWAY SLOT ASSIGNMENT SEARCH HEURISTIC“. Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3259.

Annotation:
Improving the predictability of airline schedules in the National Airspace System (NAS) has been a constant endeavor, particularly as system delays grow with ever-increasing demand. Airline schedules need to be resistant to perturbations in the system, including Ground Delay Programs (GDPs) and inclement weather. The strategic search heuristic proposed in this dissertation significantly improves airline schedule reliability by assigning airport departure and arrival slots to each flight in the schedule across a network of airports. This is performed using a multi-objective optimization approach that is primarily based on historical flight and taxi times but also includes certain airline, airport, and FAA priorities. The intent of this algorithm is to produce a more reliable, robust schedule that operates in today's environment as well as tomorrow's 4-Dimensional Trajectory Controlled system as described in the FAA's Next Generation ATM system (NextGen). This novel airline schedule optimization approach is implemented using a multi-objective evolutionary algorithm which is capable of incorporating limited airport capacities. The core of the fitness function is an extensive database of historical operating times for flight and ground operations collected over a two-year period from ASDI and BTS data. Empirical distributions based on this data reflect the probability that flights encounter various flight and taxi times. The fitness function also adds the ability to define priorities for certain flights based on aircraft size, flight time, and airline usage. The algorithm is applied to airline schedules for two primary US airports: Chicago O'Hare and Atlanta Hartsfield-Jackson. The effects of this multi-objective schedule optimization are evaluated in a variety of scenarios including periods of high, medium, and low demand. The schedules generated by the optimization algorithm were evaluated using a simple queuing simulation model implemented in AnyLogic. The scenarios were simulated in AnyLogic using two basic setups: (1) using modes of flight and taxi times that reflect highly predictable 4-Dimensional Trajectory Control operations and (2) using full distributions of flight and taxi times reflecting current-day operations. The simulation analysis showed significant improvements in reliability as measured by the mean square difference (MSD) of filed versus simulated flight arrival and departure times. Arrivals showed the most consistent improvements of up to 80% in on-time performance (OTP). Departures showed smaller overall improvements, particularly when the optimization was performed without consideration of airport capacity. The 4-Dimensional Trajectory Control environment more than doubled the on-time performance of departures relative to the current-day, more chaotic scenarios. This research shows that airline schedule reliability can be significantly improved over a network of airports using historical flight and taxi time data. It also provides a mechanism to prioritize flights based on various airline, airport, and ATC goals. The algorithm is shown to work in today's environment as well as tomorrow's NextGen 4-Dimensional Trajectory Control setup.
Ph.D.
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering PhD
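A minimal sketch of the empirical-distribution idea at the core of the fitness function described above, assuming made-up historical operating times in place of the ASDI/BTS data; it only shows how Monte Carlo draws from such distributions can turn a candidate slot assignment into reliability objectives.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical historical operating times (minutes) per flight phase,
# standing in for the ASDI/BTS-derived empirical distributions.
history = {
    "taxi_out": rng.gamma(shape=4.0, scale=4.0, size=5000),
    "airborne": rng.normal(loc=95.0, scale=8.0, size=5000),
    "taxi_in":  rng.gamma(shape=3.0, scale=3.0, size=5000),
}

def simulate_block_time(n=10000):
    # Monte Carlo draw from the empirical distributions: each observed
    # time is sampled with its empirical frequency.
    total = np.zeros(n)
    for times in history.values():
        total += rng.choice(times, size=n, replace=True)
    return total

def reliability_objectives(scheduled_block, arrival_slot_margin=5.0):
    sim = simulate_block_time()
    late = sim > scheduled_block + arrival_slot_margin
    # Two of several possible objectives: expected lateness and the
    # probability of missing the assigned arrival slot.
    return float(np.mean(np.maximum(sim - scheduled_block, 0.0))), float(late.mean())

exp_late, p_miss = reliability_objectives(scheduled_block=135.0)
print(f"expected lateness {exp_late:.1f} min, slot-miss probability {p_miss:.2%}")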
32

Engen, Vegard. „Machine learning for network based intrusion detection : an investigation into discrepancies in findings with the KDD cup '99 data set and multi-objective evolution of neural network classifier ensembles from imbalanced data“. Thesis, Bournemouth University, 2010. http://eprints.bournemouth.ac.uk/15899/.

Annotation:
For the last decade it has become commonplace to evaluate machine learning techniques for network based intrusion detection on the KDD Cup '99 data set. This data set has served well to demonstrate that machine learning can be useful in intrusion detection. However, it has attracted criticism in the literature, and it is out of date. Therefore, some researchers question the validity of the findings reported based on this data set. Furthermore, as identified in this thesis, there are also discrepancies in the findings reported in the literature. In some cases the results are contradictory. Consequently, it is difficult to analyse the current body of research to determine the value of the findings. This thesis reports on an empirical investigation to determine the underlying causes of the discrepancies. Several methodological factors, such as choice of data subset, validation method and data preprocessing, are identified and are found to affect the results significantly. These findings have also enabled a better interpretation of the current body of research. Furthermore, the criticisms in the literature are addressed and future use of the data set is discussed, which is important since researchers continue to use it due to a lack of better publicly available alternatives. Due to the nature of the intrusion detection domain, there is an extreme imbalance among the classes in the KDD Cup '99 data set, which poses a significant challenge to machine learning. In other domains, researchers have demonstrated that well known techniques such as Artificial Neural Networks (ANNs) and Decision Trees (DTs) often fail to learn the minor class(es) due to class imbalance. However, this has not previously been recognized as an issue in intrusion detection. This thesis reports on an empirical investigation that demonstrates that it is the class imbalance that causes the poor detection of some classes of intrusion reported in the literature. An alternative approach to training ANNs is proposed in this thesis, using Genetic Algorithms (GAs) to evolve the weights of the ANNs, referred to as an Evolutionary Neural Network (ENN). When employing evaluation functions that calculate the fitness proportionally to the instances of each class, thereby avoiding a bias towards the major class(es) in the data set, significantly improved true positive rates are obtained whilst maintaining a low false positive rate. These findings demonstrate that the issues of learning from imbalanced data are due not to limitations of the ANNs, but rather of the training algorithm. Moreover, the ENN is capable of detecting a class of intrusion that has been reported in the literature to be undetectable by ANNs. One limitation of the ENN is a lack of control over the classification trade-off the ANNs obtain. This is identified as a general issue with current approaches to creating classifiers. Striving to create a single best classifier that obtains the highest accuracy may give an unfruitful classification trade-off, which is demonstrated clearly in this thesis. Therefore, an extension of the ENN is proposed, using a Multi-Objective GA (MOGA), which treats the classification rate on each class as a separate objective. This approach produces a Pareto front of non-dominated solutions that exhibit different classification trade-offs, from which the user can select one with the desired properties. The multi-objective approach is also utilised to evolve classifier ensembles, which yields an improved Pareto front of solutions.
Furthermore, the selection of classifier members for the ensembles is investigated, demonstrating how this affects the performance of the resultant ensembles. This is key to explaining why some classifier combinations fail to give fruitful solutions.
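The class-proportional fitness idea described above can be sketched in a few lines of Python; the function below is a hedged illustration (not the thesis's exact evaluation function) of why averaging per-class detection rates removes the majority-class bias that plain accuracy rewards.

import numpy as np

def balanced_fitness(y_true, y_pred, classes):
    # Score each class by its own detection rate, then average, so that
    # minority classes weigh as much as the majority class and the GA
    # is not rewarded for ignoring rare intrusions.
    rates = []
    for c in classes:
        mask = (y_true == c)
        if mask.any():
            rates.append(np.mean(y_pred[mask] == c))
    return float(np.mean(rates))

# Toy check: a classifier that always predicts the majority class 0
# gets 99% plain accuracy but only 50% balanced fitness.
y_true = np.array([0] * 99 + [1])
y_pred = np.zeros(100, dtype=int)
print(np.mean(y_true == y_pred))                  # 0.99
print(balanced_fitness(y_true, y_pred, [0, 1]))   # 0.5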
33

Svensson, Patrik. „VoiceSec by Visuera Utveckling av iOS-applikation“. Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177023.

Annotation:
In the counseling industry, there is a growing need for good tools to facilitate advisors' work with documentation and archiving of the information arising from counseling situations. The aim of this thesis was to present a finished iOS application that offers a solution to the problem by using sound recording. Based on a modeled process image of the system, a requirement specification was developed. The methodology of the project was a Scrum-inspired development process. First a prototype was developed using a Storyboard. Thereafter, each view constituted a sprint in the development process. The development was done using the Xcode development environment. The built-in tools for unit testing and automated interface testing were used to ensure that the functionality complies with the requirements. The development also used techniques for object-oriented design and design patterns such as Model-View-Controller (MVC). The result was a fully functional iOS application that communicates with an underlying web service to provide the sought-after functionality. The application is presented in a stylish and easy-to-navigate graphical user interface. It provides functionality for advisers to log in, search for customers, see consultation history, choose different forms and record a consultation that takes place with a client. Finally, the application secures the consultation by uploading the consultation files to the web service, which archives and distributes the audio files to the parties concerned. The application is available for both iPhone and iPad and supports iOS versions from iOS 5 onwards, which makes it available to approximately 98.5% of the market for these devices.
34

Vanden, Berghen Frank. „Constrained, non-linear, derivative-free, parallel optimization of continuous, high computing load, noisy objective functions“. Doctoral thesis, Universite Libre de Bruxelles, 2004. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211177.

Annotation:
The main result is a new original algorithm: CONDOR ("COnstrained, Non-linear, Direct, parallel Optimization using trust Region method for high-computing load, noisy functions"). The aim of this algorithm is to find the minimum x* of an objective function F(x) (x is a vector whose dimension is between 1 and 150) using the least number of function evaluations of F(x). It is assumed that the dominant computing cost of the optimization process is the time needed to evaluate the objective function F(x) (one evaluation can range from 2 minutes to 2 days). The algorithm will try to minimize the number of evaluations of F(x), at the cost of a huge amount of routine work. CONDOR is a derivative-free optimization tool, i.e. the derivatives of F(x) are not required. The only information needed about the objective function is a simple method (written in Fortran, C++, etc.) or a program (a Unix, Windows, or Solaris executable) which can evaluate the objective function F(x) at a given point x. The algorithm has been specially developed to be very robust against noise inside the evaluation of the objective function F(x). These hypotheses are very general, so the algorithm can be applied to a vast number of situations. CONDOR is able to use several CPUs in a cluster of computers. Different computer architectures can be mixed together and used simultaneously to deliver a huge computing power. The optimizer will make simultaneous evaluations of the objective function F(x) on the available CPUs to speed up the optimization process. The experimental results are very encouraging and validate the quality of the approach: CONDOR outperforms many commercial, high-end optimizers and it might be the fastest optimizer in its category (fastest in terms of number of function evaluations). When several CPUs are used, the performance of CONDOR is currently unmatched (May 2004). CONDOR has been used during the METHOD project to optimize the shape of the blades inside a centrifugal compressor (METHOD stands for Achievement Of Maximum Efficiency For Process Centrifugal Compressors THrough New Techniques Of Design). In this project, the objective function is based on a 3D CFD (computational fluid dynamics) code which simulates the flow of the gas inside the compressor.
Doctorate in applied sciences
info:eu-repo/semantics/nonPublished
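The parallel-evaluation idea can be illustrated with a short Python sketch: distribute evaluations of an expensive black-box F(x) over several worker processes. The stand-in objective and function names are hypothetical; CONDOR itself is a trust-region optimizer, and this only mirrors its strategy of evaluating several trial points simultaneously.

import time
from multiprocessing import Pool

def expensive_objective(x):
    # Stand-in for a costly, possibly noisy black-box F(x)
    # (a real case would call an external simulation binary).
    time.sleep(0.1)
    return sum(xi ** 2 for xi in x)

def evaluate_batch(points, workers=4):
    # Evaluate several trial points of the trust-region model at once;
    # wall-clock time scales with len(points) / workers.
    with Pool(workers) as pool:
        return pool.map(expensive_objective, points)

if __name__ == "__main__":
    trial_points = [(1.0, 2.0), (0.5, -1.0), (3.0, 0.0), (-2.0, 2.0)]
    print(evaluate_batch(trial_points))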
35

Pluskal, Jaroslav. „Pokročilé optimalizační modely v oblasti oběhového hospodářství“. Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-400459.

Annotation:
This diploma thesis deals with the application of optimization methods in the circular economy. The introduction explains the main features of the field and its benefits for the economy and the environment. Some obstacles preventing the transition away from current waste management practice are then discussed. The mathematical apparatus used in the practical section is described in the thesis. The core of the thesis is a mathematical optimization model, implemented in the GAMS software, with a generator of input data written in VBA. The model includes all significant waste management options with respect to economic and environmental aspects, including transport. Its functionality is demonstrated on a small task. A key result of the thesis is the application of the model to real data concerning the Czech Republic. In conclusion, an analysis of the computational difficulty with respect to the scale of the task is carried out.
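A toy version of such a model, assuming invented costs and capacities, can be written as a weighted-sum linear program; the sketch below (Python/SciPy rather than GAMS) routes waste from two producers to three facility types while trading off cost against an environmental burden.

import numpy as np
from scipy.optimize import linprog

# Toy allocation: route waste from 2 producers to 3 facilities
# (recycling, incineration, landfill). All numbers are invented.
cost = np.array([[30.0, 45.0, 20.0],      # producer 0 -> facilities
                 [35.0, 40.0, 25.0]])     # producer 1 -> facilities
co2  = np.array([[10.0, 60.0, 90.0],
                 [12.0, 55.0, 95.0]])
supply = np.array([100.0, 80.0])          # tonnes produced
capacity = np.array([90.0, 70.0, 200.0])  # facility capacities

w = 0.5                                   # weight between the two criteria
c = (w * cost + (1 - w) * co2).ravel()    # decision vars: x[i, j] flattened

# Each producer ships out exactly its supply.
A_eq = np.kron(np.eye(2), np.ones(3))
# Each facility receives at most its capacity.
A_ub = np.tile(np.eye(3), 2)

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply,
              bounds=(0, None), method="highs")
print(res.x.reshape(2, 3).round(1), res.fun)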
36

Nyman, Jacob. „Machinery Health Indicator Construction using Multi-objective Genetic Algorithm Optimization of a Feed-forward Neural Network based on Distance“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298084.

Annotation:
Assessment of machine health and prediction of future failures are critical for maintenance decisions. Many of the existing methods use unsupervised techniques to construct health indicators by measuring the disparity between the current state and either the healthy or the faulty states of the system. This approach can work well, but if the resulting health indicators are insufficient there is no easy way to steer the algorithm towards better ones. In this thesis a new method for health indicator construction is investigated that aims to solve this issue. It is based on measuring distance after transforming the sensor data into a new space using a feed-forward neural network. The feed-forward neural network is trained using a multi-objective optimization algorithm, NSGA-II, to optimize criteria that are desired in a health indicator. Thereafter the constructed health indicator is passed into a gated recurrent unit for remaining useful life prediction. The approach is compared to benchmarks on the NASA Turbofan Engine Degradation Simulation dataset and, relative to the size of the neural networks, the model performs well, but does not outperform the results reported by a few of the more recent methods. The method is also investigated on a simulated dataset based on elevator weights with two independent failure modes. The method is able to construct a single health indicator with a desirable shape for both failures, although the estimates of time until failure are overestimated for the rarer failure type. On both datasets the health indicator construction method is compared with a baseline without a transformation function, and in both cases it outperforms the baseline in terms of the resulting remaining useful life prediction error using the gated recurrent unit. Overall, the method is shown to be flexible in generating health indicators with different characteristics, and because of its properties it is adaptive to different remaining useful life prediction methods.
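A minimal sketch of the distance-based health indicator described above, with randomly initialized network weights standing in for the NSGA-II-optimized ones; all array shapes and the degradation data are invented for illustration.

import numpy as np

rng = np.random.default_rng(2)

def forward(x, params):
    # Tiny feed-forward transform; in the thesis the weights would be
    # chosen by NSGA-II to optimize health-indicator criteria
    # (e.g., monotonicity and trendability), not set at random.
    w1, b1, w2, b2 = params
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

def health_indicator(x, params, healthy_center):
    # HI = distance from the healthy-state centroid in the learned space.
    z = forward(x, params)
    return np.linalg.norm(z - healthy_center, axis=-1)

n_in, n_hid, n_out = 4, 8, 2
params = (rng.normal(size=(n_in, n_hid)), np.zeros(n_hid),
          rng.normal(size=(n_hid, n_out)), np.zeros(n_out))

healthy = rng.normal(0.0, 0.1, size=(100, n_in))       # early-life sensor data
degraded = healthy + np.linspace(0, 2, 100)[:, None]   # drifting toward failure
center = forward(healthy, params).mean(axis=0)

hi = health_indicator(degraded, params, center)
print(hi[:5].round(2), "...", hi[-5:].round(2))        # roughly increasing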
37

Lust-Hed, Freddie, und Viktor Hedin. „Android vs iPhone : En jämförande studie i applikationsutveckling“. Thesis, Uppsala University, Computer Systems Sciences, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-126461.

Annotation:

Smartphones have become a popular phenomenon among mobile users. Several major players have appeared on the market, and as more smartphones have been developed, interest in application development has also grown. One of these players is Apple, which today holds a significant market share following the launch of the iPhone. However, Google, together with the Open Handset Alliance, has become a significant competitor with its mobile platform Android.

The purpose of this thesis is to make a comparative study of application development for these platforms. This includes examining the platforms' programming languages and related aspects, development environments, demands on the developer, and the economic aspects of development and publishing. We carried out this study by examining available and current literature and sales statistics. We also drew on our own experience of application development on the Android platform.

Our study shows that development for the iPhone is only possible through the company's own products. This is not the case with Android, where the choice of development platform is more open. Both programming languages are object-oriented but have some notable differences. Both platforms offer a pedagogical and easy-to-use development environment in which a developer can quickly see results. The necessary software can be obtained free of charge, and developers keep the larger share of the revenue if they choose to publish their application at a price in one of the application stores.

Our conclusion is that the platforms have more similarities than differences when it comes to application development. One of the differences is that developing for the iPhone means learning a programming language used almost exclusively on Apple's products and possibly paying an annual fee. In Android's case, a widely used programming language is employed and the only cost is a relatively small one-off fee for publishing. On the whole, the difference can be interpreted as iPhone development being a closed environment, but free from malicious code, while Android development is more open, with the consequence that it is less secure.

38

Handouzi, Wahida. „Traitement d'information mono-source pour la validation objective d'un modèle d'anxiété : application au signal de pression sanguine volumique“. Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0237/document.

Annotation:
Detection and evaluation of emotions are areas of great interest to many communities, in the human sciences as well as the exact sciences. In this thesis we focus on the recognition of social anxiety, an irrational fear felt by a person during any form of social interaction. Anxiety can be revealed by a set of physical and physiological traits such as tone of voice, facial expressions, increased heart rate, flushing, etc. The interest in physiological measures is motivated by their robustness: individuals cannot manipulate them to apply social masking, they are a continuous source of data, and each emotion is characterized by a particular physiological variation. In this work, we propose a measurement system based on the use of a single physiological signal, the blood volume pulse (BVP). The use of a single sensor limits the subjects' discomfort. From the BVP signal we selected three relevant features which best represent the close relationship between this signal and anxiety status. This feature set is classified using support vector machines (SVM). Work undertaken in the field of emotion recognition frequently relies on unreliable data that do not always correspond to the situations envisaged. This lack of reliability may be due to several factors, among them the subjectivity of the evaluation method used (questionnaires, self-assessment by the subjects, etc.). We have developed an approach for the objective assessment of the data based on the dynamics of the selected features. The database used was recorded in our laboratory under real conditions, acquired from subjects who exhibit a level of anxiety in social situations and who are not under psychological treatment. The stimulus used is exposure to virtual environments representing some feared social situations. After the evaluation stage, we obtained a reliable data model for the recognition of two levels of anxiety. This model was tested on phobic subjects in a clinic specializing in cognitive behavioural therapy (CBT). The results highlight the reliability of the built model, notably for the recognition of anxiety levels in healthy as well as phobic subjects, which constitutes a solution to the lack of data affecting the various areas of recognition research.
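A hedged sketch of the classification stage: three hypothetical BVP-derived features (not necessarily the thesis's exact parameters) generated for two anxiety levels and classified with an RBF-kernel SVM via scikit-learn.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Hypothetical per-window BVP features: pulse amplitude, mean
# inter-beat interval, low/high-frequency power ratio.
n = 200
calm    = np.column_stack([rng.normal(1.0, 0.15, n),
                           rng.normal(0.85, 0.05, n),
                           rng.normal(1.2, 0.2, n)])
anxious = np.column_stack([rng.normal(0.7, 0.15, n),   # vasoconstriction
                           rng.normal(0.70, 0.05, n),  # faster heart rate
                           rng.normal(2.0, 0.3, n)])
X = np.vstack([calm, anxious])
y = np.array([0] * n + [1] * n)   # 0 = low anxiety, 1 = high anxiety

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())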
39

Axisa, Fabrice. „Etudes des Indices Objectifs du Confort Thermique Ressenti chez l’Homme : Microcentrale d’acquisition des paramètres physiologiques pour l’étude des réactions émotionnelles, sensorielles et cognitives = Objective indicators for felt comfort : Physiological data micro acquisition system for emotional, sensorial and cognitive reaction of human“. Lyon, INSA, 2005. http://www.theses.fr/2005ISAL0025.

Annotation:
The optimization of thermal comfort in buildings is a compromise between aesthetic, functional and thermal constraints. The thermal constraints for the improvement of thermal comfort are defined in a series of standards for calculation and measurement. The PMV model, thermo-physiological models and synthetic indexes allow the forecasting of the average thermal comfort and of the dissatisfaction of a population. However, these models can prove false for a given person or a given situation. Together with the physiological mechanisms of thermoregulation, the autonomic nervous system is an essential actor in felt thermal comfort. Understanding the various physiological actors makes it possible to establish a thermo-neuro-physiological model of felt thermal comfort. Measurements of the physiological reactions due to thermoregulation and to the reaction of the autonomic nervous system are therefore essential. These measurements are the skin temperature, the skin heat flux density, the skin electrical conductance, the skin electrical potential and the instantaneous heart rate. A multiparametric, multichannel analysis allows the establishment of an objective index of felt thermal comfort. An ambulatory measurement system, MARSIAN, worn on the wrist, with six hours of autonomy, was developed to record and wirelessly transmit the physiological signals coming from non-invasive skin sensors. MARSIAN moreover allows real-time, ambulatory monitoring of vital parameters for preventive medicine and telemedicine. An intelligent glove integrating the non-invasive skin sensors simplifies the placement of physiological sensors on the hand and makes it reliable. The design of MARSIAN has enabled a new generation of ambulatory biomedical instrumentation for the analysis of the reactivity of the autonomic nervous system.
40

Rekik, Siwar. „Sécurisation de la communication parlée par une technique stéganographique“. Thesis, Brest, 2012. http://www.theses.fr/2012BRES0061.

Annotation:
One of the concerns in the field of secure communication is the concept of information security. Today's reality still shows that communication between two parties over long distances has always been subject to interception. Providing secure communication has driven researchers to develop several cryptography schemes. Cryptography methods achieve security by making the information unintelligible, guaranteeing exclusive access for authenticated recipients. Cryptography consists of making the signal look garbled to unauthorized people. Thus, cryptography indicates the existence of a cryptographic communication in progress, which makes eavesdroppers suspect the existence of valuable data. They are thus incited to intercept the transmitted message and to attempt to decipher the secret information. This may be seen as a weakness of cryptography schemes. In contrast to cryptography, steganography allows secret communication by camouflaging the secret signal in another signal (named the cover signal) to avoid suspicion. This quality has motivated researchers to work in this active field and to develop schemes ensuring better resistance to hostile attackers. The word steganography is derived from two Greek words, stego (cover) and graphy (writing); combined they mean covert writing, the art of hiding written communications. Several steganography techniques were used to send messages secretly during wars through enemy territory. The major contributions of this thesis are the following. We propose a new method to secure speech communication using the Discrete Wavelet Transform (DWT) and the Fast Fourier Transform (FFT). Our method first exploits the high frequencies using a DWT, then exploits the low-pass spectral properties of the speech magnitude spectrum to hide another speech signal in the low-amplitude high-frequency region of the cover speech signal. The proposed method allows hiding a large amount of secret information while rendering steganalysis more complex. A comparative evaluation based on objective and subjective criteria is presented for the original speech signal, the stego-signal and the secret speech signal reconstructed after the hiding process. Experimental simulations on both female and male speakers revealed that our approach is capable of producing a stego speech that is indistinguishable from the cover speech. The receiver is still able to recover an intelligible copy of the secret speech message. We used an LPC10 coder to test the effect of coding techniques on the stego-speech signals. Experimental results prove the efficiency of the coding technique used, since the intelligibility of the stego-speech is maintained after the encoding and decoding processes. We also propose a new steganalysis technique to assess the robustness of our steganography method. The proposed classifier is an autoregressive time delay neural network (ARTDNN). The purpose of this steganalysis system is to identify the presence or absence of embedded information; it does not actually attempt to extract or decode the hidden data. The low detection rate proves the robustness of our hiding technique.
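A toy wavelet-domain hiding scheme in Python (using PyWavelets) gives the flavour of the embedding step: bits are forced into the parity of quantized high-frequency coefficients, where the human ear is least sensitive. The quantization step, wavelet choice and sine-wave "speech" are assumptions; the thesis's actual DWT/FFT method is more elaborate.

import numpy as np
import pywt

def embed(cover, bits, delta=0.02, wavelet="db4"):
    # Hide one bit in the parity of each quantized finest-level detail
    # coefficient (highest frequencies, least audible).
    coeffs = pywt.wavedec(cover, wavelet, level=3, mode="periodization")
    detail = coeffs[-1].copy()
    for i, b in enumerate(bits):
        q = np.round(detail[i] / delta)
        if int(q) % 2 != b:
            q += 1                      # force quantizer parity = bit
        detail[i] = q * delta
    coeffs[-1] = detail
    return pywt.waverec(coeffs, wavelet, mode="periodization")

def extract(stego, n_bits, delta=0.02, wavelet="db4"):
    detail = pywt.wavedec(stego, wavelet, level=3, mode="periodization")[-1]
    return [int(np.round(detail[i] / delta)) % 2 for i in range(n_bits)]

cover = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 8000))  # stand-in "speech"
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, secret)
print(extract(stego, len(secret)) == secret)               # True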
41

Thompson-Arjona, William G. „Curricular Optimization: Solving for the Optimal Student Success Pathway“. UKnowledge, 2019. https://uknowledge.uky.edu/ece_etds/139.

Annotation:
Considering the significant investment in higher education made by students and their families, graduating in a timely manner is of the utmost importance. Delay attributed to dropping out or retaking a course adds cost and negatively affects a student's academic progression. Considering this, it becomes paramount for institutions to focus on student success in relation to term scheduling. Often overlooked, the complexity of a course schedule may be one of the most important factors in whether or not a student successfully completes his or her degree. More often than not, students entering an institution as first-time, full-time (FTFT) freshmen follow the advised and published schedule given by administrators. Providing the optimal schedule that gives the student the highest probability of success is critical. In an effort to create this optimal schedule, this thesis introduces a novel optimization algorithm with the objective of separating courses which, when taken together, hurt students' pass rates. Conversely, we combine synergistic relationships that improve a student's probability of success when the courses are taken in the same semester. Using actual student data at the University of Kentucky, we categorically find these positive and negative combinations by analyzing recorded pass rates. Using the Julia language on top of the Gurobi solver, we solve for the optimal degree plan of a student in the electrical engineering program using linear and non-linear multi-objective optimization. A user interface is created for administrators to optimize their curricula at main.optimizeplans.com.
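The separate/combine objective described above can be sketched as a score over course pairs sharing a term; the pair effects and course names below are invented for illustration and do not come from the University of Kentucky data.

from itertools import combinations

# Hypothetical pass-rate interaction effects: negative means the pair
# hurts pass rates when taken together, positive means synergy.
pair_effect = {
    ("CALC2", "PHYS1"): -0.12,
    ("CIRCUITS", "LAB1"): +0.08,
    ("CALC2", "PROG1"): +0.03,
}

def schedule_score(terms):
    # Sum the interaction effects of all course pairs sharing a term:
    # the optimizer wants to separate negative pairs and combine
    # positive ones, mirroring the objective described above.
    score = 0.0
    for term in terms:
        for a, b in combinations(sorted(term), 2):
            score += pair_effect.get((a, b), 0.0)
    return score

plan_a = [{"CALC2", "PHYS1"}, {"CIRCUITS", "LAB1", "PROG1"}]
plan_b = [{"CALC2", "PROG1"}, {"CIRCUITS", "LAB1", "PHYS1"}]
print(schedule_score(plan_a), schedule_score(plan_b))  # plan_b scores higher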
42

Bergström, Anton. „Novelty Search och krav inom evolutionära algoritmer : En jämförelse av FINS och PMOEA för att generera dungeon nivåer med krav“. Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17603.

Annotation:
Evolutionary algorithms have proven effective for developing game levels. However, there is still a need for levels that both satisfy the requirements of the game and differ from one another as much as possible, to encourage repeated playthroughs. Novelty Search can be used to achieve this, but it lacks mechanisms for driving the population towards the requirements the levels must fulfil. This work therefore focuses on comparing two Novelty Search based algorithms that both encourage requirement satisfaction: Feasible-Infeasible Novelty Search (FINS) and a Pareto-based multi-objective evolutionary algorithm (PMOEA) with two objectives, requirements and novelty. The study compares the algorithms on three measures: the proportion of the population that satisfies the stated requirements, how good these individuals are at solving a level-related problem, and the diversity among these individuals. In addition to PMOEA and FINS, a plain Novelty Search algorithm and a traditional evolutionary algorithm are also implemented. Three experiments are conducted in which the level size and the number of requirements vary. The results show that PMOEA was better at producing individuals that satisfied all requirements, and that these individuals were generally better at optimizing solutions than plain Novelty Search and FINS. However, FINS had higher diversity among its individuals than all the other algorithms tested. The study's weakness is that the result is specific to the algorithms' configuration in the artefact; as such, future work should focus on exploring new configurations to generalize the result.
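For reference, the novelty component shared by both algorithms is typically computed as the mean distance to the k nearest neighbours in the population plus an archive; a minimal numpy sketch, with invented two-dimensional behaviour descriptors, follows.

import numpy as np

def novelty_scores(behaviors, archive, k=5):
    # Novelty of an individual = mean distance to its k nearest
    # neighbours among the current population plus the archive.
    pool = np.vstack([behaviors, archive])
    scores = []
    for b in behaviors:
        d = np.linalg.norm(pool - b, axis=1)
        d.sort()
        scores.append(d[1:k + 1].mean())   # skip d[0] = 0 (itself)
    return np.array(scores)

rng = np.random.default_rng(4)
pop = rng.random((20, 2))        # behaviour descriptors of 20 levels
arch = rng.random((50, 2))       # previously seen behaviours
print(novelty_scores(pop, arch).round(3))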
43

Memedi, Mevludin. „Mobile systems for monitoring Parkinson's disease“. Doctoral thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:du-13797.

Annotation:
A challenge for the clinical management of Parkinson's disease (PD) is the large within- and between-patient variability in symptom profiles as well as the emergence of motor complications which represent a significant source of disability in patients. This thesis deals with the development and evaluation of methods and systems for supporting the management of PD by using repeated measures, consisting of subjective assessments of symptoms and objective assessments of motor function through fine motor tests (spirography and tapping), collected by means of a telemetry touch screen device. One aim of the thesis was to develop methods for objective quantification and analysis of the severity of motor impairments being represented in spiral drawings and tapping results. This was accomplished by first quantifying the digitized movement data with time series analysis and then using them in data-driven modelling for automating the process of assessment of symptom severity. The objective measures were then analysed with respect to subjective assessments of motor conditions. Another aim was to develop a method for providing comparable information content as clinical rating scales by combining subjective and objective measures into composite scores, using time series analysis and data-driven methods. The scores represent six symptom dimensions and an overall test score for reflecting the global health condition of the patient. In addition, the thesis presents the development of a web-based system for providing a visual representation of symptoms over time allowing clinicians to remotely monitor the symptom profiles of their patients. The quality of the methods was assessed by reporting different metrics of validity, reliability and sensitivity to treatment interventions and natural PD progression over time. Results from two studies demonstrated that the methods developed for the fine motor tests had good metrics indicating that they are appropriate to quantitatively and objectively assess the severity of motor impairments of PD patients. The fine motor tests captured different symptoms; spiral drawing impairment and tapping accuracy related to dyskinesias (involuntary movements) whereas tapping speed related to bradykinesia (slowness of movements). A longitudinal data analysis indicated that the six symptom dimensions and the overall test score contained important elements of information of the clinical scales and can be used to measure effects of PD treatment interventions and disease progression. A usability evaluation of the web-based system showed that the information presented in the system was comparable to qualitative clinical observations and the system was recognized as a tool that will assist in the management of patients.
44

Fink, Wolfgang, Alexander J. W. Brooks, Mark A. Tarbell und James M. Dohm. „Tier-scalable reconnaissance: the future in autonomous C4ISR systems has arrived: progress towards an outdoor testbed“. SPIE-INT SOC OPTICAL ENGINEERING, 2017. http://hdl.handle.net/10150/626010.

Annotation:
Autonomous reconnaissance missions are called for in extreme environments, as well as in potentially hazardous (e.g., the theatre, disaster-stricken areas, etc.) or inaccessible operational areas (e.g., planetary surfaces, space). Such future missions will require increasing degrees of operational autonomy, especially when following up on transient events. Operational autonomy encompasses: (1) automatic characterization of operational areas from different vantages (i.e., spaceborne, airborne, surface, subsurface); (2) automatic sensor deployment and data gathering; (3) automatic feature extraction including anomaly detection and region-of-interest identification; (4) automatic target prediction and prioritization; and (5) subsequent automatic (re-)deployment and navigation of robotic agents. This paper reports on progress towards several aspects of autonomous C4ISR systems, including: the Caltech-patented and NASA award-winning multi-tiered mission paradigm, robotic platform development (air, ground, water-based), robotic behavior motifs as the building blocks for autonomous telecommanding, and autonomous decision making based on a Caltech-patented framework comprising sensor data fusion (feature vectors), anomaly detection (clustering and principal component analysis), and target prioritization (hypothetical probing).
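The anomaly-detection ingredient named above (clustering and principal component analysis) can be sketched with scikit-learn: fit a principal subspace to nominal feature vectors and flag samples with a large reconstruction error. The feature semantics and threshold here are assumptions, not the Caltech-patented framework itself.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)

# Fused feature vectors from the sensor suite (hypothetical: e.g.,
# mean colour, texture energy, elevation); mostly nominal terrain
# plus a few outliers worth a follow-up visit.
nominal = rng.normal(0.0, 1.0, size=(500, 6))
outliers = rng.normal(4.0, 1.0, size=(5, 6))
X = np.vstack([nominal, outliers])

# Fit the principal subspace on nominal data; a large reconstruction
# error marks an anomaly, which can then be prioritized as a target.
pca = PCA(n_components=3).fit(nominal)
recon = pca.inverse_transform(pca.transform(X))
error = np.linalg.norm(X - recon, axis=1)
threshold = np.percentile(error, 99)
print("anomalous sample indices:", np.where(error > threshold)[0])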
45

Rahat, Alma As-Aad Mohammad. „Hybrid evolutionary routing optimisation for wireless sensor mesh networks“. Thesis, University of Exeter, 2015. http://hdl.handle.net/10871/21330.

Annotation:
Battery powered wireless sensors are widely used in industrial and regulatory monitoring applications. This is primarily due to the ease of installation and the ability to monitor areas that are difficult to access. Additionally, they can be left unattended for long periods of time. However, there are many challenges to successful deployments of wireless sensor networks (WSNs). In this thesis we draw attention to two major challenges. Firstly, with a view to extending network range, modern WSNs use mesh network topologies, where data is sent either directly or by relaying data from node-to-node en route to the central base station. The additional load of relaying other nodes’ data is expensive in terms of energy consumption, and depending on the routes taken some nodes may be heavily loaded. Hence, it is crucial to locate routes that achieve energy efficiency in the network and extend the time before the first node exhausts its battery, thus improving the network lifetime. Secondly, WSNs operate in a dynamic radio environment. With changing conditions, such as modified buildings or the passage of people, links may fail and data will be lost as a consequence. Therefore in addition to finding energy efficient routes, it is important to locate combinations of routes that are robust to the failure of radio links. Dealing with these challenges presents a routing optimisation problem with multiple objectives: find good routes to ensure energy efficiency, extend network lifetime and improve robustness. This is however an NP-hard problem, and thus polynomial time algorithms to solve this problem are unavailable. Therefore we propose hybrid evolutionary approaches to approximate the optimal trade-offs between these objectives. In our approach, we use novel search space pruning methods for network graphs, based on k-shortest paths, partially and edge disjoint paths, and graph reduction to combat the combinatorial explosion in search space size and consequently conduct rapid optimisation. The proposed methods can successfully approximate optimal Pareto fronts. The estimated fronts contain a wide range of robust and energy efficient routes. The fronts typically also include solutions with a network lifetime close to the optimal lifetime if the number of routes per nodes were unconstrained. These methods are demonstrated in a real network deployed at the Victoria & Albert Museum, London, UK.
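The k-shortest-paths pruning mentioned above is directly available in networkx; a toy sketch, with an invented four-node mesh, keeps only the k lowest-energy routes to the base station as candidates for the evolutionary search.

import itertools
import networkx as nx

# Toy WSN connectivity graph: edge weight = transmission energy cost.
G = nx.Graph()
G.add_weighted_edges_from([
    ("n1", "n2", 1.0), ("n2", "base", 1.2), ("n1", "n3", 1.5),
    ("n3", "base", 1.0), ("n1", "base", 3.5), ("n2", "n3", 0.7),
])

def k_shortest(G, source, target, k=3):
    # Search-space pruning: instead of all simple paths, keep only the
    # k lowest-energy routes per node as candidate genes for the EA.
    gen = nx.shortest_simple_paths(G, source, target, weight="weight")
    return list(itertools.islice(gen, k))

for path in k_shortest(G, "n1", "base"):
    cost = sum(G[u][v]["weight"] for u, v in zip(path, path[1:]))
    print(path, round(cost, 2))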
46

Fujimoto, Magaly Lika. „Uma metodologia para exploração de regras de associação generalizadas integrando técnicas de visualização de informação com medidas de avaliação do conhecimento“. Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-19052009-142534/.

Annotation:
The data mining process aims to find the knowledge implicit in a data set in order to support decision making. From the user's point of view, several problems can be encountered during the post-processing and provision of the extracted knowledge, such as the huge number of patterns generated by some extraction algorithms and the difficulty of understanding the models extracted from the data. Besides the problem of the number of rules, traditional association rule algorithms may lead to the discovery of very specific knowledge. Thus, association rules can be generalized with the intention of obtaining more general knowledge. In this project an interactive methodology is proposed to aid in the evaluation of generalized association rules, aiming to improve their comprehensibility and to facilitate the identification of interesting knowledge. This aid is accomplished through the use of visualization techniques together with the application of objective and subjective evaluation measures, which are implemented in the generalized association rule visualization module called RulEE-GARVis, integrated into the rule exploration environment RulEE (Rule Exploration Environment). The RulEE environment is being developed at LABIC-ICMC-USP and supports the post-processing and provision of knowledge. In this context, it was also an objective of this research project to develop the Management Module of the RulEE rule exploration environment. Through a directed study, it was verified that the proposed methodology really does facilitate the understanding and identification of interesting generalized association rules.
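The objective evaluation measures referred to above include the standard support/confidence/lift trio; a small self-contained sketch, with an invented basket dataset in which "milk" plays the role of a generalized item, shows how they are computed.

def rule_measures(transactions, antecedent, consequent):
    # Objective measures used to filter generalized association rules:
    # support, confidence and lift, computed from the raw transactions.
    n = len(transactions)
    a = sum(antecedent <= t for t in transactions)
    ab = sum((antecedent | consequent) <= t for t in transactions)
    c = sum(consequent <= t for t in transactions)
    support = ab / n
    confidence = ab / a if a else 0.0
    lift = confidence / (c / n) if c else 0.0
    return support, confidence, lift

# "milk" here acts as the generalized item covering its specializations.
baskets = [{"milk", "bread"}, {"milk", "butter"}, {"bread", "butter"},
           {"milk", "bread", "butter"}, {"bread"}]
print(rule_measures(baskets, {"milk"}, {"bread"}))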
47

Sexson, Tejtel Sara Kristen. „Is Ohio approaching Healthy People 2010 objectives? A birth certificate data analysis“. Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1149023375.

48

Lynskey, Orla. „Identifying the objectives of EU data protection regulation and justifying its costs“. Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608116.

49

Weisenburger, Kenneth William. „Reflection seismic data acquisition and processing for enhanced interpretation of high resolution objectives“. Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/74518.

Annotation:
Reflection seismic data were acquired (by CONOCO, Inc.) which targeted known channel interruption of an upper Pennsylvanian coal seam (Herrin #6) in the Illinois basin. The data were reprocessed and interpreted by the Regional Geophysics Laboratory, Virginia Tech. Conventional geophysical techniques involving field acquisition and data processing were modified to enhance and maintain high frequency content in the signal bandwidth. Single-sweep processing was employed to increase spatial sampling density and reduce the low-pass filtering associated with the array response. Whitening of the signal bandwidth was accomplished using Vibroseis whitening (VSW) and stretched automatic gain control (SAGC). A zero-phase wavelet-shaping filter was used to optimize the waveform length, allowing a thinner depositional sequence to be resolved. The high resolution data acquisition and processing led to an interpreted section which shows cyclic deposition in a deltaic environment. Complex channel development interrupted underlying sediments including the Herrin coal seam complex. Contrary to previous interpretations of channel development in the study area by Chapman and others (1981) and Nelson (1983), the channel has been interpreted as having a bimodal structure leaving an "island" of undisturbed deposits. Channel activity affects the younger Pennsylvanian sediments and also the unconsolidated Pleistocene till. A limit to the eastern migration of channel development affecting the Pennsylvanian sediments considered in this study can be identified by the abrupt change in event characteristics.
Master of Science
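Of the processing steps listed, the stretched AGC is easy to sketch: a sliding-RMS gain whose window lengthens with travel time. The window and stretch parameters below are invented, and the implementation is only a rough stand-in for the SAGC used in the thesis.

import numpy as np

def stretched_agc(trace, dt, window_s=0.2, stretch=1.5):
    # Sliding-RMS automatic gain control whose gate lengthens with
    # travel time, so deeper (later) arrivals get longer windows.
    n = len(trace)
    out = np.zeros(n)
    for i in range(n):
        half = int((window_s * (1 + stretch * i / n)) / (2 * dt)) or 1
        lo, hi = max(0, i - half), min(n, i + half + 1)
        rms = np.sqrt(np.mean(trace[lo:hi] ** 2))
        out[i] = trace[i] / rms if rms > 0 else 0.0
    return out

# Synthetic trace: decaying reflections; AGC equalizes their amplitudes.
dt = 0.002
t = np.arange(0, 1.0, dt)
trace = np.exp(-3 * t) * np.sin(2 * np.pi * 40 * t)
balanced = stretched_agc(trace, dt)
print(float(abs(trace[50:]).max()), float(abs(balanced[50:]).max()))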
50

Chidley, Matthew D. „High Numerical Aperture Injection-Molded Miniature Objective For Fiber-Optic Confocal Reflectance Microscopy“. Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1300%5F1%5Fm.pdf&type=application/pdf.
