Dissertations / Theses on the topic 'Performance metrics'

Consult the top 50 dissertations / theses for your research on the topic 'Performance metrics.'


You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Sundfors, David. "Performance Metrics for Sustainability Value." Licentiate thesis, KTH, Bygg- och fastighetsekonomi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-200315.

Full text
Abstract:
The trend that started with Green Building has moved on into Sustainable Building. But how do we know that something is really sustainable? This project started out with the intention to find a small set of performance indicators for commercial buildings which could be continuously measured and monitored over time, would give a good indication of the building's level of sustainability, and could be presented as an additional part of a valuation. Since it has by now been shown several times over that properties that can prove they are sustainable generate a higher market price, these performance indicators would be interesting from the perspective of a valuation professional. In order to find these parameters, the project began with three of the international environmental certification systems and one Swedish system, to study which parameters are considered important in these systems. Following that study, surveys and interviews within the real estate business in Sweden provided an insight into how performance is measured today. Lastly, by combining those studies with a review of the sustainability information considered important by the Royal Institution of Chartered Surveyors (RICS) from a valuation professional's point of view and an updated literature review, a simple set of indicators could indeed be identified. There is, however, still a problem with defining their actual impact on market price. Other authors have come to the conclusion that although sustainability can be measured to some extent, incorporating that information into the valuation of a property in a statistically secure way is not yet possible. We need to increase our knowledge about the performance of our built environment, and the key performance indicators presented in this thesis would help us do just that. We can also see that real estate owners in many cases already gather much information about their buildings, but they lack the incentives to share that data with others.


APA, Harvard, Vancouver, ISO, and other styles
2

Akcay, Koray. "Performance Metrics For Fundamental Estimation Filters." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606510/index.pdf.

Full text
Abstract:
This thesis analyzes fundamental estimation filters – the Alpha-Beta Filter, Alpha-Beta-Gamma Filter, Constant Velocity (CV) Kalman Filter, Constant Acceleration (CA) Kalman Filter, Extended Kalman Filter, 2-model Interacting Multiple Model (IMM) Filter and 3-model IMM – with respect to their resource requirements and performance. In the resource requirement part, the fundamental estimation filters are compared according to their CPU usage, memory needs and complexity. The fundamental estimation filter with the lowest resource requirements is the Alpha-Beta Filter. In the performance evaluation part of this thesis, the performance metrics used are Root-Mean-Square Error (RMSE), Average Euclidean Error (AEE), Geometric Average Error (GAE) and normalized forms of these. The normalized forms of the performance metrics make the measure of error independent of range and the length of the trajectory. The fundamental estimation filters and performance metrics are implemented in MATLAB. The Monte Carlo simulation method and 6 different air trajectories are used for testing. Test results show that the performance of fundamental estimation filters varies according to the trajectory and the target dynamics used in constructing the filter. Consequently, filter performance is application-dependent. Therefore, before choosing an estimation filter, the most probable target dynamics, hardware resources and acceptable error level should be investigated. An estimation filter which matches these requirements will be 'the best estimation filter'.
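The three error metrics named in this abstract have standard textbook definitions; a minimal plain-Python sketch, using hypothetical position errors from a tracking run:

```python
import math

def rmse(errors):
    # Root-Mean-Square Error: square root of the mean squared error
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def aee(errors):
    # Average Euclidean Error: mean of the absolute error magnitudes
    return sum(abs(e) for e in errors) / len(errors)

def gae(errors):
    # Geometric Average Error: exponential of the mean log error
    return math.exp(sum(math.log(abs(e)) for e in errors) / len(errors))

# Position errors (metres) from a hypothetical tracking run
errs = [2.0, 4.0, 8.0]
print(rmse(errs))  # 5.2915... (sqrt of 28)
print(aee(errs))   # 4.6666...
print(gae(errs))   # 4.0 (cube root of 2*4*8)
```

Note that GAE weights large outliers less heavily than RMSE, which is one reason the thesis evaluates all three side by side.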
APA, Harvard, Vancouver, ISO, and other styles
3

Tucker, Christopher John. "Performance metrics for network intrusion systems." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1547.

Full text
Abstract:
Intrusion systems have been the subject of considerable research during the past 33 years, since the original work of Anderson. Much has been published attempting to improve their performance using advanced data processing techniques including neural nets, statistical pattern recognition and genetic algorithms. Whilst some significant improvements have been achieved they are often the result of assumptions that are difficult to justify and comparing performance between different research groups is difficult. The thesis develops a new approach to defining performance focussed on comparing intrusion systems and technologies. A new taxonomy is proposed in which the type of output and the data scale over which an intrusion system operates is used for classification. The inconsistencies and inadequacies of existing definitions of detection are examined and five new intrusion levels are proposed from analogy with other detection-based technologies. These levels are known as detection, recognition, identification, confirmation and prosecution, each representing an increase in the information output from, and functionality of, the intrusion system. These levels are contrasted over four physical data scales, from application/host through to enterprise networks, introducing and developing the concept of a footprint as a pictorial representation of the scope of an intrusion system. An intrusion is now defined as “an activity that leads to the violation of the security policy of a computer system”. Five different intrusion technologies are illustrated using the footprint with current challenges also shown to stimulate further research. Integrity in the presence of mixed trust data streams at the highest intrusion level is identified as particularly challenging. Two metrics new to intrusion systems are defined to quantify performance and further aid comparison. 
Sensitivity is introduced to define basic detectability of an attack in terms of a single parameter, rather than the usual four currently in use. Selectivity is used to describe the ability of an intrusion system to discriminate between attack types. These metrics are quantified experimentally for network intrusion using the DARPA 1999 dataset and SNORT. Only nine of the 58 attack types present were detected with sensitivities in excess of 12 dB, indicating that detection performance on the attack types present in this dataset remains a challenge. The measured selectivity was also poor, indicating that only three of the attack types could be confidently distinguished. The highest value of selectivity was 3.52, significantly lower than the theoretical limit of 5.83 for the evaluated system. Options for improving selectivity and sensitivity through additional measurements are examined.
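The thesis's exact single-parameter definition is developed in its text and is not reproduced here. Purely as an illustration of collapsing the usual four confusion-matrix rates into one decibel figure, one might take the ratio of true-positive rate to false-positive rate (an assumption for this sketch, not the thesis's formula):

```python
import math

# Hypothetical confusion-matrix counts for one attack type
tp, fn, fp, tn = 90, 10, 20, 880

tpr = tp / (tp + fn)   # true-positive (detection) rate
fpr = fp / (fp + tn)   # false-positive rate

# One illustrative single-parameter summary in decibels
# (NOT the thesis's sensitivity definition)
sensitivity_db = 10 * math.log10(tpr / fpr)
print(round(sensitivity_db, 2))  # 16.07
```

Expressing detection quality on a dB scale makes order-of-magnitude differences between attack types easy to compare, which is the spirit of the 12 dB threshold quoted above.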
APA, Harvard, Vancouver, ISO, and other styles
4

Cirtita, Horatiu. "Performance Metrics in Downstream Supply Chain." Doctoral thesis, Università degli studi di Padova, 2008. http://hdl.handle.net/11577/3425091.

Full text
Abstract:
In a downstream supply chain (DSC), consisting of manufacturers, transportation, distribution, and retail members, end customers expect timely, reliable and quality delivery of the right amount of products at low cost. The manufacturer, in its attempt to deliver the product through the DSC, however, must balance customer expectation with profitability. This balance can be achieved through a mix of strategies. One DSC strategy tool, the SCOR model, integrates and continuously improves the performance of the various DSC activities. The SCOR model metric system is considered a breakthrough given its standardized approach to assessment across organizations and industry types. The top tier of the SCOR metric system evaluates the overall strategic organizational activities in a supply chain context. These top tier metric system elements consist of: delivery reliability, flexibility and responsiveness, cost, and assets. These metrics follow the standard as recommended by Schneiderman (1996), who stated that a metric system should contain no more than five top tier metrics, given that a large number diffuses the focus of the strategic activities. Schneiderman further suggests that this first tier metric system should consist of 1) internal process and 2) external results performance. The SCOR model activities fit these internal and external schemas well. For the SCOR model to work among the DSC members, though, these measurements must be standardized among DSC members (Lambert and Pohlen, 2001). The use of a DSC metrics system leads to synergies of performance among supply chain members that facilitate the measure of total supply chain performance as opposed to isolated functional "silo" measures (Hausman, 2003). We have purposely chosen performance metrics as the standard of evaluation as opposed to the terms "performance measurement" and "performance measure."
The term "performance measure" carries a connotational definition that is vague, historical, and diffused (Neely, 1999). Schneiderman (1996) stated that measures and metrics differ in the following way: measures consist of the broad set of infinite forms of evaluating a firm's process whereas metrics are a subset of the few measures actually useful to improve a company's efforts. Metrics should be monotonic, that is, improvements in metrics must lead to improvement in shareholder wealth. Gunasekaran, Patel and Tirtiroglu (2001) discussed the problem of these terms in their study of the literature and stated, "Quite often, companies have a large number of performance measures to which they keep adding based on suggestions from employees and consultants, and fail to realize that performance measurement can be better addressed using a few good metrics (p. 72)."
5

Dietrich, Nathan S. "Performance metrics for correlation and tracking algorithms." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA391959.

Full text
6

Wanderman-Milne, Skye A. "Virtualized application performance prediction using system metrics." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77450.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 79-80).
Virtualized datacenter administrators would like to consolidate virtual machines (VMs) onto as few physical hosts as possible in order to decrease costs, but must leave enough physical resources for each VM to meet application service level objectives (SLOs). The threshold between good and bad performance in terms of resource settings, however, is hard to determine and rarely static due to changing workloads and resource usage. Thus, in order to avoid SLO violations, system administrators must err on the side of caution by running fewer VMs per host than necessary or setting reservations, which prevents resources from being shared. To ameliorate this situation, we are working to design and implement a system that automatically monitors VM-level metrics to predict impending application SLO violations, and takes appropriate action to prevent the SLO violation from occurring. So far we have implemented the performance prediction, which is detailed in this document, while the preventative actions are left as future work. We created a three-stage pipeline in order to achieve scalable performance prediction. The three stages are prediction, which predicts future VM ESX performance counter values based on current time-series data; aggregation, which aggregates the predicted VM metrics into a single set of global metrics; and finally classification, which for each VM classifies its performance as good or bad based on the predicted VM counters and the predicted global state. Prediction of each counter is performed by a least-squares linear fit, aggregation is performed simply by summing each counter across all VMs, and classification is performed using a support vector machine (SVM) for each application. In addition, we created an experimental system running a MongoDB instance in order to test and evaluate our pipeline implementation. 
Our results on this experimental system are promising, but further research will be necessary before applying these techniques to a production system.
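The three-stage pipeline described above can be sketched in plain Python. The thesis trains an SVM per application for the final classification stage; a simple threshold stands in for it here to keep the example self-contained, and the counter names, values, and threshold are invented:

```python
# Stage 1: predict each VM counter's next value with a least-squares
# linear fit over time, extrapolated one step ahead.
def predict_next(series):
    n = len(series)
    xs = range(n)
    mx, my = (n - 1) / 2, sum(series) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, series)) / \
            sum((x - mx) ** 2 for x in xs)
    return my + slope * (n - mx)  # regression line evaluated at t = n

# Per-VM time series for one counter (hypothetical, e.g. CPU ready time)
vm_counters = {"vm1": [10, 12, 14, 16], "vm2": [30, 29, 31, 30]}
predicted = {vm: predict_next(s) for vm, s in vm_counters.items()}

# Stage 2: aggregate the predicted VM metrics into one global metric
# by summing across all VMs.
global_metric = sum(predicted.values())

# Stage 3: classify each VM's upcoming performance as good or bad from
# its own prediction plus the global state (threshold in place of the
# thesis's per-application SVM).
THRESHOLD = 70
labels = {vm: ("bad" if v + global_metric > THRESHOLD else "good")
          for vm, v in predicted.items()}
print(predicted, global_metric, labels)
```

The point of the staged design is scalability: prediction and aggregation are cheap per-counter operations, so only the final classifier needs per-application training.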
by Skye A. Wanderman-Milne.
M.Eng.
7

Burdett, Yan Liu. "Correlation of Software Quality Metrics and Performance." NSUWorks, 2012. http://nsuworks.nova.edu/gscis_etd/109.

Full text
Abstract:
Performance is an aspect of software quality that is often not considered at early stages of software design. Common approaches to performance analysis include utilizing profiling tools after the software has been developed to find bottlenecks and executing simulation models that are either manually constructed or derived from UML design diagrams. Many projects attempt to correct performance issues by adding more powerful hardware instead of attacking the root cause. Software metrics have been used to predict many aspects of software quality such as maintainability and fault-tolerance by correlation and regression analysis. Metrics proposed by Chidamber and Kemerer, also known as the CK metric suite, have been studied extensively in software quality model analyses. These studies examined maintainability, fault tolerance, error proneness, and scalability of the software as it evolved through time. Correlations were made between metrics and the likely quality models they represented. Other metrics such as Cyclomatic Complexity by McCabe and class couplings by Martin have also been used in quality predictions. No research has been conducted to analyze the relationship between performance and metrics. The goal of this research was to construct a decision tree that used software metrics to forecast performance characteristics. The relationship between metrics and performance was derived by correlation between static code metrics and three runtime variables: number of objects, method call frequency, and average method call lengths on selected software benchmarks. The decision tree was constructed using the C4.5 algorithm implemented in the WEKA software. Pearson correlation coefficients were obtained for the combined datasets from all benchmarks. The decision trees and Pearson statistics showed that weighted methods per class (WMC), total lines of code (TLOC), and coupling between objects (CBO) have significant correlation with software performance.
WMC showed positive correlation with number of objects and calls. CBO correlated positively with average method call lengths and negatively with number of objects. TLOC correlated positively with number of calls.
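Pearson's r, the statistic used above to relate static metrics to runtime variables, is straightforward to compute; a minimal sketch with invented WMC and object-count values:

```python
import math

def pearson(xs, ys):
    # Pearson product-moment correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical WMC values vs. observed object counts per benchmark
wmc     = [5, 9, 14, 20, 28]
objects = [110, 160, 270, 390, 520]
print(round(pearson(wmc, objects), 3))  # 0.998 (strong positive)
```

A value near +1 here would correspond to the positive WMC correlation the study reports; a value near -1 would correspond to the negative CBO correlation with object counts.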
8

Jackson, Kenneth J. "Forecast error metrics for Navy inventory management performance." Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5756.

Full text
Abstract:
Approved for public release; distribution is unlimited.
This research establishes metrics for determining overall Navy secondary inventory forecasting accuracy when compared to actual demands at the Naval Inventory Control Point (NAVICP). Specifically, two performance metrics are introduced: the average performance index (API) and the median absolute deviation performance index (MPI). API measures forecasting accuracy of secondary inventory when compared against demand or forecast performance over a four-quarter period. MPI measures the quarterly variability of forecast errors over the same period. The API and MPI metrics allow for the identification of poorly forecasted NAVICP secondary inventory items. The metrics can be applied to entire inventories or subsets of items based on type, demand, or cost. In addition, the API metric can be used to show overall inventory performance, providing NAVICP with a graphical means to assess forecasting performance improvements (or degradations) over time. The new forecasting accuracy methods developed in this research will allow the Navy to continually gauge the overall health of their inventory management practices and provide a method for improving forecasting accuracy. Additionally, they will assist NAVICP in complying with DoD directives that require NAVICP to monitor and continually develop improvements to inventory management practices.
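The exact API and MPI formulas are defined in the thesis itself. As a loosely analogous sketch only, an average accuracy index over a four-quarter period and a median-absolute-deviation variability index might look like this (demand numbers and both formulas are illustrative assumptions, not NAVICP's definitions):

```python
import statistics

# Four quarters of forecast vs. actual demand for one item (hypothetical)
forecast = [100, 120, 90, 110]
actual   = [ 95, 130, 85, 120]

errors = [f - a for f, a in zip(forecast, actual)]

# An average index over the period (illustrative, not the thesis's API)
api_like = sum(abs(e) for e in errors) / sum(actual)

# Quarterly variability of the forecast errors via the
# median absolute deviation (illustrative, not the thesis's MPI)
med = statistics.median(errors)
mpi_like = statistics.median(abs(e - med) for e in errors)

print(api_like, mpi_like)
```

Computing one accuracy index and one variability index per item, as here, is what lets such metrics flag poorly forecasted items across an entire inventory.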
9

Forman, Rachel Emily. "Objective performance metrics for improved space telerobotics training." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68408.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 47-50).
NASA astronauts undergo many hours of formal training and self-study to gain proficiency in space teleoperation tasks. After each lesson, instructors score an astronaut's performance in several broad skill categories, including 'General Situational Awareness', 'Maneuvers/Task Performance', and 'Hand-Controller Techniques'. A plus, check, or minus indicates that the student is ahead of, at, or behind the expected skill level. The scoring of the final evaluation for a robotics training course is also largely subjective, with the instructor designating an integer score for the student between 1 (Unsatisfactory) and 5 (Strong) in the same skill categories. This thesis research project was designed to: (1) consider the variety of quantitative metrics that could be embedded into a space robotics training simulation, and (2) investigate at what point and by what means it is most constructive for performance assessment to be revealed to an operator-in-training. We reviewed the current, largely qualitative space robotics performance metrics, as well as new quantitative kinematic metrics of manual control skills (including those explored thus far only in laboratory experiments) and additional measures of executive function and supervisory control performance. Kinematic metrics include quantitative measures such as rate of change of linear and rotational acceleration. Potential measures of executive function and supervisory control include camera selection and clearance monitoring. To instantiate our ideas, we chose a specific "fly-to" space telerobotics task taught in the early phases of NASA Generic Robotics Training (GRT) and developed a pilot training experiment (n=16) using our virtual robotics training workstation. Our goal was to evaluate potential performance metrics designed to encourage use of multi-axis control, and to compare real-time ("live") performance feedback alternatives (live visual vs. live aural vs. none).
Movement time decreased and multi-axis and bimanual control use gradually increased across trials. All subjects had the opportunity to view post-trial performance feedback including these metrics. Although our subjects overwhelmingly preferred the live, visual feedback condition, no reliable additional effects of live feedback condition were found, except perhaps among the more experienced subjects. However, the experiment demonstrated that embedded performance metrics potentially could quantify and improve some important aspects of GRT evaluations.
Supported by the National Space Biomedical Research Institute through NASA NCC9-58
by Rachel Emily Forman.
S.M.
10

Wolbert, Daniel (Daniel Joseph). "Utilization of visual metrics to drive intended performance." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39689.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2007.
Includes bibliographical references.
In recent years the American industrial landscape has undergone tremendous change as companies have worked to adopt Lean practices. This transformation has been difficult, but necessary, as American companies work to remain competitive with foreign competitors. A key enabler of these Lean transformations has been the use of visual metrics to communicate how a process has performed as well as to set goals for future performance. The challenge is to first identify what data is available and then create metrics that encourage and reward Lean behaviors. This thesis explores the introduction of visual metrics for a part inspection process at the Raytheon Company. Prior to the introduction of these metrics, there was limited ability to determine how the process was performing. As a result, downstream customers were able to track when a part entered the inspection process but were unable to predict when the inspection would be completed. This introduced a risk to the process and created a sense of frustration throughout the facility. The visual metrics for the inspection area were created on a series of visual dashboards that display common Lean metrics, such as cycle time and backlog (or work-in-process). Through these dashboards the area will be able to understand how it is performing and initiate continuous improvement projects to improve performance.
by Daniel Wolbert.
S.M.
M.B.A.
11

Lazarevic, N. "Background modelling and performance metrics for visual surveillance." Thesis, Kingston University, 2011. http://eprints.kingston.ac.uk/21831/.

Full text
Abstract:
This work deals with the problems of performance evaluation and background modelling for the detection of moving objects in outdoor video surveillance datasets. Such datasets are typically affected by considerable background variations caused by global and partial illumination variations, gradual and sudden lighting condition changes, and non-stationary backgrounds. The large variation of backgrounds in typical outdoor video sequences requires highly adaptable and robust models able to represent the background at any time instance with sufficient accuracy. Furthermore, in real life applications it is often required to detect possible contaminations of the scene in real time or when new observations become available. A novel adaptive multi-modal algorithm for on-line background modelling is proposed. The proposed algorithm applies the principles of the Gaussian Mixture Model, previously used to model the grey-level (or colour) variations of individual pixels, to the modelling of illumination variations in image regions. The image observations are represented in the eigen-space, where the dimensionality of the data is significantly reduced using the method of principal components analysis. The projections of image regions in the reduced eigen-space are clustered using K-means into clusters (or modes) of similar backgrounds and are modelled as multivariate Gaussian distributions. Such an approach allows the model to adapt to changes in the dataset in a timely manner. This work proposes modifications to a previously published method for incremental update of uni-modal eigen-models. The modifications are twofold. First, the incremental update is performed on the individual modes of the multi-modal model, and second, the mechanism for adding new dimensions is adapted to handle problems typical of outdoor video surveillance scenes with a wide range of illumination changes. 
Finally, a novel, objective, comparative, object-based methodology for performance evaluation of object detection is also developed. The proposed methodology is concerned with the evaluation of object detection in the context of end-user defined quality of performance in complex video surveillance applications.
12

Hassanien, Mohamed A. M. "Error rate performance metrics for digital communications systems." Thesis, Swansea University, 2011. https://cronfa.swan.ac.uk/Record/cronfa42497.

Full text
Abstract:
In this thesis, novel error rate performance metrics and transmission solutions are investigated for delay limited communication systems and for co-channel interference scenarios. The following four research problems in particular were considered. The first research problem is devoted to analysis of the higher order ergodic moments of error rates for digital communication systems with time-unlimited ergodic transmissions, and the statistics of the conditional error rates of digital modulations over fading channels are considered. The probability density function and the higher order moments of the conditional error rates are obtained. Non-monotonic behavior of the moments of the conditional bit error rates versus some channel model parameters is observed for a Ricean distributed channel fading amplitude at the detector input. Properties and possible applications of the second central moments are proposed. The second research problem is the non-ergodic error rate analysis and signaling design for communication systems processing a single finite length received sequence. A framework to analyze the error rate properties of non-ergodic transmissions is established. The Bayesian credible intervals are used to estimate the instantaneous bit error rate. A novel degree of ergodicity measure is introduced using the credible interval estimates to quantify the level of ergodicity of the received sequence with respect to the instantaneous bit error rate and to describe the transition of the data detector from the non-ergodic to ergodic zone of operation. The developed non-ergodic analysis is used to define adaptive forward error correction control and adaptive power control policies that can guarantee, with a given probability, the worst case instantaneous bit error rate performance of the detector in its transition from the non-ergodic to ergodic zone of operation. In the third research problem, novel retransmission schemes are developed for delay-limited retransmissions. 
The proposed scheme relies on a reliable reverse link for error-free feedback message delivery. Unlike conventional automatic repeat request schemes, the proposed scheme does not require the use of cyclic redundancy check bits for error detection. In the proposed scheme, random permutations are exploited to locate the bits for retransmission in a predefined window within the packet. The retransmitted bits are combined using maximal-ratio combining. The complexity-performance trade-offs of the proposed scheme are investigated by mathematical analysis as well as computer simulations. The bit error rate of the proposed scheme is independent of the packet length, while the throughput is dependent on the packet length. Three practical techniques suitable for implementation are proposed. The performance of the proposed retransmission scheme was compared to the block repetition code corresponding to a conventional ARQ retransmission strategy. It was shown that, for the same number of retransmissions and the same packet length, the proposed scheme always outperforms such repetition coding, and, in some scenarios, the performance improvement is found to be significant. Most of our analysis has been done for the case of the AWGN channel; however, the case of a slow Rayleigh block fading channel was also investigated. The proposed scheme appears to provide the throughput and BER reduction gains only for medium to large SNR values. Finally, the last research problem investigates the link error rate performance with a single co-channel interference. A novel metric to assess whether the standard Gaussian approximation of a single interferer underestimates or overestimates the link bit error rate is derived. This metric is a function of the interference channel fading statistics. However, it is otherwise independent of the statistics of the desired signal.
The key step in derivation of the proposed metric is to construct the standard Gaussian approximation of the interference by a non-linear transformation. A closed form expression of the metric is obtained for a Nakagami distributed interference fading amplitude. Numerical results for the case of Nakagami and lognormal distributed interference fading amplitude confirm the validity of the proposed metric. The higher moments, interval estimators and non-linear transformations were investigated to evaluate the error rate performance for different wireless communication scenarios. The synchronization channel is also used jointly with the communication link to form a transmission diversity and subsequently, to improve the error rate performance.
13

Hugo, Thoursie, and Lucas Fageräng. "Financial Metrics Effect on Companies Performance During COVID-19." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298075.

Full text
Abstract:
Financial metrics are important tools for managers and investors to assess the performance of companies. At the start of 2020, the global COVID-19 pandemic broke out, resulting in bearish markets when mandatory lockdowns were announced. On the 16th of March the Dow Jones Industrial Average dropped 13% and the S&P 500 more than 12%. The lockdowns all over the world have had a massive impact on businesses, with layoffs and factory closures required to stay afloat. This study aims to establish which economic metrics, if any, have given companies an edge during the pandemic. Through the use of Multiple Linear Regression, models were created for three sectors in the Swedish market: Healthcare, Industry and Consumer Discretionary. From the models, four different variables were found to correlate with how well companies performed during the pandemic, with different combinations for each sector. These were the current ratio for Healthcare and Industry, as well as D/E and ROE for Consumer Discretionary. The study also contains a qualitative study of the models and an evaluation. The evaluation and results indicated that certain financial metrics had some type of correlation with the net income change of a company. More important, however, was that other factors probably influenced the results of a company more than the aforementioned financial metrics.
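Multiple linear regression of the kind used in this study can be sketched with ordinary least squares via the normal equations; the firms, metric values, and net-income changes below are invented for illustration:

```python
def ols(X, y):
    # Ordinary least squares: solve (X'X) b = X'y by Gaussian
    # elimination. Each row of X starts with 1 for the intercept.
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    for i in range(k):                       # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * ai for a, ai in zip(A[j], A[i])]
            b[j] -= f * b[i]
    coef = [0.0] * k                         # back substitution
    for i in reversed(range(k)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, k))) / A[i][i]
    return coef

# Hypothetical firms: [1, current ratio, D/E] -> net-income change (%)
X = [[1, 1.2, 0.8], [1, 2.0, 0.5], [1, 0.9, 1.4], [1, 1.6, 0.7], [1, 2.4, 0.3]]
y = [-5.0, 6.0, -12.0, 2.0, 9.0]
print([round(c, 2) for c in ols(X, y)])  # [intercept, b_current, b_D/E]
```

The sign and size of each fitted coefficient is what lets such a model suggest which metric (current ratio, D/E, ROE) gave companies an edge in a given sector.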
14

Miehling, Mathew J. "Correlation of affiliate performance against web evaluation metrics." Thesis, Edinburgh Napier University, 2014. http://researchrepository.napier.ac.uk/Output/7250.

Full text
Abstract:
Affiliate advertising is changing the way that people do business online. Retailers are now offering incentives to third-party publishers for advertising goods and services on their behalf in order to capture more of the market. Online advertising spending has already overtaken that of traditional advertising in all other channels in the UK and is slated to do so worldwide as well [1]. In this highly competitive industry, the livelihood of a publisher is intrinsically linked to their web site performance. Understanding the strengths and weaknesses of a web site is fundamental to improving its quality and performance. However, the definition of performance may vary between different business sectors or even different sites in the same sector. In the affiliate advertising industry, the measure of performance is generally linked to the fulfilment of advertising campaign goals, which often equates to the ability to generate revenue or brand awareness for the retailer. This thesis aims to explore the correlation of web site evaluation metrics to the business performance of a company within an affiliate advertising programme. In order to explore this correlation, an automated evaluation framework was built to examine a set of web sites from an active online advertising campaign. A purpose-built web crawler examined over 4,000 sites from the advertising campaign in approximately 260 hours, gathering data to be used in the examination of URL similarity, URL relevance, search engine visibility, broken links, broken images and presence on a blacklist. The gathered data was used to calculate a score for each of the features, which were then combined to create an overall HealthScore for each publisher. The evaluated metrics focus on the categories of domain and content analysis. From the performance data available, it was possible to calculate the business performance for the 234 active publishers using the number of sales and click-throughs they achieved. 
When the HealthScores and performance data were compared, the HealthScore was able to predict the publisher's performance with 59% accuracy.
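The per-feature scoring described in this abstract can be sketched as a weighted combination of scores into one HealthScore. The feature names and weights below are illustrative assumptions, not the thesis's actual weighting:

```python
# Hypothetical sketch of the HealthScore idea: per-feature scores in [0, 1]
# combined into one number by a weighted mean. Feature names and weights
# are illustrative assumptions, not values from the thesis.

def health_score(scores: dict, weights: dict) -> float:
    """Weighted mean of per-feature scores, normalised by total weight."""
    total = sum(weights[f] for f in scores)
    return sum(scores[f] * weights[f] for f in scores) / total

scores = {"url_relevance": 0.8, "broken_links": 1.0, "blacklist": 1.0}
weights = {"url_relevance": 2.0, "broken_links": 1.0, "blacklist": 3.0}
print(round(health_score(scores, weights), 3))  # -> 0.933
```

A weighted mean keeps the combined score on the same 0-1 scale as the inputs, so the overall HealthScore remains easy to interpret.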
APA, Harvard, Vancouver, ISO, and other styles
15

Bäck, Jesper. "Domain similarity metrics for predicting transfer learning performance." Thesis, Linköpings universitet, Interaktiva och kognitiva system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153747.

Full text
Abstract:
The lack of training data is a common problem in machine learning. One solution to this problem is to use transfer learning to remove or reduce the requirement of training data. Selecting datasets for transfer learning can be difficult, however. As a possible solution, this study proposes the domain similarity metrics document vector distance (DVD) and term frequency-inverse document frequency (TF-IDF) distance. DVD and TF-IDF could aid in selecting datasets for good transfer learning when there is no data from the target domain. The simple metric, shared vocabulary, is used as a baseline to check whether DVD or TF-IDF can indicate a better choice for a fine-tuning dataset. SQuAD is a popular question answering dataset which has been proven useful for pre-training models for transfer learning. The results were therefore measured by pre-training a model on the SQuAD dataset and fine-tuning on a selection of different datasets. The proposed metrics were used to measure the similarity between the datasets to see whether there was a correlation between transfer learning effect and similarity. The results found a clear relation between a small distance according to the DVD metric and good transfer learning. This could prove useful for a target domain without training data: a model could be trained on a big dataset and fine-tuned on a small dataset that is very similar to the target domain. It was also found that even a small amount of training data from the target domain can be used to fine-tune a model pre-trained on another domain of data, achieving better performance compared to only training on data from the target domain.
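An illustrative sketch (not the thesis's implementation) of a TF-IDF distance between two domains: weight term frequencies by inverse document frequency and take one minus the cosine similarity. The toy corpora and the bag-of-words treatment are assumptions:

```python
import math
from collections import Counter

# Illustrative sketch of a TF-IDF domain distance. Each domain is treated
# as one bag of words; the corpora below are made up for demonstration.

def tfidf_vec(tokens, idf):
    """Sparse TF-IDF vector for one tokenised document."""
    tf = Counter(tokens)
    n = len(tokens)
    return {t: (c / n) * idf[t] for t, c in tf.items()}

def tfidf_distance(a, b):
    """One minus cosine similarity of two sparse TF-IDF vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return 1.0 - dot / (na * nb)

src = "what is the capital of france".split()
tgt = "what is the largest city of france".split()
docs = [set(src), set(tgt)]
# Smoothed IDF over the two "documents"
idf = {t: math.log(2 / sum(t in d for d in docs)) + 1 for d in docs for t in d}
dist = tfidf_distance(tfidf_vec(src, idf), tfidf_vec(tgt, idf))
print(round(dist, 2))  # small distance suggests similar domains
```

Identical corpora yield a distance of zero, and fully disjoint vocabularies yield one, so the metric is directly comparable across candidate fine-tuning datasets.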
APA, Harvard, Vancouver, ISO, and other styles
16

Anderson, John C., and Clifford G. Scott. "Benchmarking and performance metrics for a defense distribution depot /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA359952.

Full text
Abstract:
Thesis (M.S. in Management)--Naval Postgraduate School, December 1998.
"December 1998." Thesis advisor(s): Kevin R. Gue, Jane Feitler. Includes bibliographical references (p. 69-70). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
17

Gatsoulis, Ioannis. "Performance metrics and human-robot interaction for teleoperated systems." Thesis, University of Leeds, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.491752.

Full text
Abstract:
This thesis investigates human factors issues in the design and development of effective human-robot interfaces for emerging applications of teleoperated, cooperative mobile robot systems in situations such as urban search and rescue. Traditional methods of designing human-robot interaction interfaces have failed to produce effective results, as witnessed in the post-September 11 search operations. The thesis adopts a user-centric approach based on the human factors of situation awareness, telepresence and workload to support the human operator, because this is widely accepted as the best way of realising increased levels of collaboration between humans and robotic systems, working as a partnership to perform a complex task. The measurement of these human factors has not been explored within the robotic community in any significant way. The measurement of these subjective and complex issues is addressed in this thesis by looking to the flight traffic control domain, where researchers have developed many methods of determining how to quantify the quality of situation awareness, the level of workload and the level of telepresence of the people in the aircraft and on the ground. Based on these methods, the research proposes five new methods (ASAGAT, QASAGAT, CARS, PASA, SPASA) for measuring situation awareness, three methods (WSPQ, SUSPQ, SPATP) for measuring telepresence and three methods (NASA-TLX, MCRS, FSWAT) for measuring workload. A comprehensive comparison between them has shown that QASAGAT and SPASA are the most reliable and accurate for measuring situation awareness, SPATP for measuring telepresence and FSWAT for measuring workload. For the measurement of performance a new method has been developed, which is felt to be more objective for the urban search and rescue scenario than the metrics used in the RoboCup Rescue competition.
Simulation studies involved extensive investigations to determine the various software tools and platforms that are available for realising robot urban search and rescue scenarios. The software of Player-Gazebo was selected as the best solution. A graphical user interface comprising vision, laser data, map, robot locations, etc. was developed and assessed with the proposed measurement methods under the simulated robot system and search scenarios. The test subjects comprised specialist end users as well as general non-end users. An in-between-groups analysis showed that the individual characteristics of each group may have some effect on the experimental variables; however, this effect is very minimal and the main influence factors are the interaction interfaces and the human factors investigated here. Moreover, the results indicated that there is no significant benefit when using professional urban search and rescue end users. Correlation analysis on the data has shown that situation awareness and telepresence have a positive effect on performance, while workload negatively affects performance. It was also found that there is a positive correlation between situation awareness and telepresence, while workload has a negative effect on both. These results validate the assumptions made. A multiple linear regression model has been developed to further understand the individual contributions of each human factor in the overall performance achieved. The limited prediction capabilities of the linear model suggested a non-linear relationship. For this reason, a non-linear model using an artificial neural network trained with the backpropagation algorithm has also been developed. The resulting neural network is able to predict the response variables more precisely and is shown to be able to generalise well to unseen cases. A physical mobile teleoperated urban search and rescue robot system has also been developed for realising future real world trials.
APA, Harvard, Vancouver, ISO, and other styles
18

Yusof, Zulkefli Muhammed. "Performance metrics and routing in vehicular ad hoc networks." Thesis, Loughborough University, 2011. https://dspace.lboro.ac.uk/2134/9136.

Full text
Abstract:
The aim of this thesis is to propose a method for enhancing the performance of Vehicular Ad hoc Networks (VANETs). The focus is on a routing protocol where performance metrics are used to inform the routing decisions made. The thesis begins by analysing routing protocols in a random mobility scenario with a wide range of node densities. A Cellular Automata algorithm is subsequently applied in order to create a mobility model of a highway, and a wide range of densities and transmission ranges is tested. Performance metrics are introduced to assist the prediction of likely route failure. The Good Link Availability (GLA) and Good Route Availability (GRA) metrics are proposed, which can be used for a pre-emptive action that has the potential to give better performance. The implementation framework for this method using the AODV routing protocol is also discussed. The main outcomes of this research can be summarised as identifying and formulating methods for pre-emptive actions using Cellular Automata with NS-2 to simulate VANETs, and the implementation method within the AODV routing protocol.
APA, Harvard, Vancouver, ISO, and other styles
19

Anderson, John C., and Clifford G. Scott. "Benchmarking and performance metrics for a defense distribution depot." Thesis, Monterey, California. Naval Postgraduate School, 1998. http://hdl.handle.net/10945/32605.

Full text
Abstract:
Department of Defense logistics activities are under increasing pressure to reduce their cost of operations. Defense Logistics Agency's response to this challenge is to reduce costs through competition--16 of 22 Defense Distribution Depots will be competed in the near future. Defense Distribution Depot San Diego (DDDC), facing this competition, must assess its relative competitiveness with respect to commercial industry. However, DDDC lacks performance metrics and measurement methods necessary to effectively measure its performance for comparison. The purpose of our thesis is threefold: to identify performance measures, measurement methods, and uses of performance measures by leaders in the physical distribution industry; to determine the depot's competitive position by quantifying the gap in performance using the performance metrics identified; and to identify the qualitative factors contributing to the gap in performance between the depot and commercial firms. We employ benchmarking methodology to argue that there is a significant gap in performance between DDDC and commercial distribution firms. We quantify the gap and discuss the qualitative factors contributing to it. We conclude with recommended productivity performance indicators for implementation at DDDC.
APA, Harvard, Vancouver, ISO, and other styles
20

Arcuri, Frank John. "A Development of Performance Metrics for Forecasting Schedule Slippage." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/32108.

Full text
Abstract:
Project schedules should mirror the project, as the project takes place. Accurate project schedules, when updated and revised, reflect the actual progress of construction as performed in the field. Various methods for monitoring progress of construction are successful in their representation of actual construction as it takes place. Progress monitoring techniques clearly identify when we are behind schedule, yet it is less obvious to recognize when we are going to slip behind schedule. This research explores how schedule performance measurement mechanisms are used to recognize construction projects that may potentially slip behind schedule, as well as what type of early warning they provide in order to take corrective action. Such early warning systems help prevent situations where the contractor and/or owner spend months in denial, insisting that a failing project is still going to finish on time. This research develops the intellectual framework for schedule control systems, based on a review of control systems in the construction industry. The framework forms the foundation for the development of a schedule control technique for forecasting schedule slippage: the Required Performance Method (RPM). The RPM forecasts the required performance needed for timely project completion, and is based on the contractor's ability to expand future work. The RPM is a paradigm shift from control based on scheduled completion date to control based on required performance. This shift enables forecasts to express concern in terms that are more tangible. Furthermore, the shift represents a focus on what needs to be done to achieve a target completion date, as opposed to the traditional focus on what has been done. The RPM is demonstrated through a case study, revealing its ability to forecast impending schedule slippage.
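The core comparison behind a required-performance forecast can be sketched as the rate of work needed to finish on time relative to the rate demonstrated so far. This is an illustrative simplification, not the RPM as developed in the thesis, and the names and units are assumptions:

```python
# Illustrative sketch only: required future production rate versus the
# contractor's demonstrated rate. The thesis's formulation is richer.

def required_performance(remaining_work, remaining_weeks, demonstrated_rate):
    """Ratio > 1 means the project must outperform its own demonstrated
    pace -- an early warning of likely schedule slippage."""
    required_rate = remaining_work / remaining_weeks
    return required_rate / demonstrated_rate

# 60 units of work left, 10 weeks to the target date, 5 units/week so far:
print(required_performance(60, 10, 5))  # -> 1.2
```

Framing the forecast as a required rate, rather than a projected completion date, matches the shift the abstract describes: it says what must be done, not merely what has been done.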
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
21

MUTHIAH, KANTHI MATHI NATHAN. "SYSTEM LEVEL EFFECTIVENESS METRICS FOR PERFORMANCE MONITORING AND DIAGNOSTICS." University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1154631919.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

White, Matthew J. "An analysis of NCO availability performance metrics and their relation to availability performance." Monterey, California: Naval Postgraduate School, 2013. http://hdl.handle.net/10945/34762.

Full text
Abstract:
Approved for public release; distribution is unlimited
The Navy's approach to planning and executing Chief of Naval Operations maintenance availabilities has undergone significant changes since 2006. The adoption of unique lean initiatives and defined project management fundamentals has guided shipyard and project leadership as they manage scheduled industrial maintenance for ships and submarines. These business practices have resulted in performance measurement and control data being gathered for shipyard management to use as they analyze availability performance. This thesis reports on the results of exploratory analyses of these data to evaluate associations and trends pertaining to cost and schedule performance since the inception of the lean initiative. The study's analyses suggest that numerous performance metrics display trends indicating that availability performance is improving over the defined lean initiative time frame; that several metrics are functions of the length of the individual availability and require appropriate weighting considerations; and that average weekly interim production bow wave metrics evaluated early in an availability may have predictive abilities concerning availability completion success.
APA, Harvard, Vancouver, ISO, and other styles
23

Atterlönn, Anton, and Benjamin Hedberg. "GUI Performance Metrics Framework : Monitoring performance of web clients to improve user experience." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-247940.

Full text
Abstract:
When using graphical user interfaces (GUIs), the main problems that frustrate users are long response times and delays. These problems create a bad impression of the GUI, as well as of the company that created it. When providing a GUI to users it is important to provide intuition, ease of use and simplicity while still delivering good performance. However, some factors that play a major role regarding the performance aspect are outside the developers' hands, namely the client's internet connection and hardware. Since every client has a different combination of internet connection and hardware, it can be a hassle to satisfy everyone while still providing an intuitive and responsive GUI. The aim of this study is to find a way to monitor performance of a web GUI, where performance comprises response times and render times, and in doing so, enable the improvement of response times and render times by collecting data that can be analyzed. A framework that monitors the performance of a web GUI was developed as a proof of concept. The framework collects relevant data regarding performance of the web GUI and stores the data in a database. The stored data can then be manually analyzed by developers to find weak points in the system regarding performance. This is achieved without interfering with the GUI or impacting the user experience negatively.
When using graphical user interfaces, long response times and delays are experienced as the main problems. These problems are frustrating and give users a negative view of both the graphical interface and the company that created it. It is important that graphical interfaces are intuitive, easy to use and easy to grasp while still delivering high performance. There are factors affecting these properties that are outside the developers' hands, e.g. the user's internet connection and hardware. Since every user has a different combination of internet connection and hardware, it is difficult to satisfy everyone while still providing an intuitive and responsive interface. The goal of this study is to find a way to monitor the performance of a graphical interface, where performance comprises the responsiveness and the speed of graphical rendering, and in doing so enable the improvement of response times and rendering times. A framework that monitors the performance of a graphical interface was developed. The framework collects relevant performance data about the graphical interface and stores the data in a database. The stored data can then be manually analyzed by developers to find weaknesses in the system's performance. This is achieved without disturbing the graphical interface and without any negative impact on the user experience.
APA, Harvard, Vancouver, ISO, and other styles
24

Turner, Jeffrey L. "A correlation between quality management metrics and technical performance measurement." Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion.exe/07Mar%5FTurner.pdf.

Full text
Abstract:
Thesis (M.S. in Software Engineering)--Naval Postgraduate School, March 2007.
Thesis Advisor(s): John Osmundson, J. Bret Michael. "March 2007." Includes bibliographical references (p.129-130). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
25

Moody, Seth S. "Development of Dynamic Thermal Performance Metrics For Eco-roof Systems." Thesis, Portland State University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=1535587.

Full text
Abstract:

In order to obtain credit for an eco-roof in building energy load calculations the steady state and time-varying thermal properties (thermal mass with evapotranspiration) must be fully understood. The following study presents results of experimentation and modeling in an effort to develop dynamic thermal mass performance metrics for eco-roof systems. The work is focused on understanding the thermal parameters (foliage & soil) of an eco-roof, further validation of the EnergyPlus Green Roof Module and development of a standardized metric for assessing the time-varying thermal benefits of eco-roof systems that can be applied across building types and climate zones.

Eco-roof foliage, soil and weather parameters were continuously collected at the Green Roof Integrated PhotoVoltaic (GRIPV) project from 01/20/2011 to 08/28/2011. The parameters were used to develop an EnergyPlus eco-roof validation model. The validated eco-roof model was then used to estimate the Dynamic Benefit for Massive System (DBMS) in 4 climate-locations: Portland Oregon, Chicago Illinois, Atlanta Georgia and Houston Texas.

GRIPV30 (GRIPV soil with 30% soil organic matter) was compared to 12 previously tested eco-roof soils. GRIPV30 reduced dry soil conductivity by 50%, increased field capacity by 21% and reduced dry soil mass per unit volume by 60%. GRIPV30 soil had low conductivity at all moisture contents and high heat capacity at moderate and high moisture content. The characteristics of the GRIPV30 soil make it a good choice for moisture retention and reduction of heat flux, improved thermal mass (heat storage) when integrating an eco-roof with a building.

Eco-roof model validation was performed with constant seasonal moisture driven soil properties and resulted in acceptable measured - modeled eco-roof temperature validation. LAI has a large impact on how the Green Roof Module calculates the eco-roof energy balance with a higher impact on daytime (measured - modeled) soil temperature differential and most significant during summer.

DBMS modeling found the mild climates of Atlanta Georgia and Houston Texas with eco-roof annual DBMS of 1.03, 3% performance improvement above the standard building, based on cooling, heating and fan energy consumption. The Chicago Illinois climate with severe winter and mild spring/summer/fall has an annual DBMS of 1.01. The moderate Portland Oregon climate has a below standard DBMS of 0.97.

APA, Harvard, Vancouver, ISO, and other styles
26

Latner, Avi. "Feature performance metrics in a service as a software offering." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/67562.

Full text
Abstract:
Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 46-47).
Software as a Service (SaaS) delivery model has become widespread. This deployment model changes the economics of software delivery but also has an impact on development. Releasing updates to customers is immediate and the development, product and marketing teams have access to customer usage information. These dynamics create a fast feedback loop between developments to customers. To fully leverage this feedback loop the right metrics need to be set. Typically SaaS applications are a collection of features. The product is divided between development teams according to features and customers access the service through features. Thus a framework that measure feature performance is valuable. This thesis provides a framework for measuring the performance of software as a service (SaaS) product features in order to prioritize development efforts. The case is based on empirical data from HubSpot and it is generalized to provide a framework applicable to other companies with large scale software offerings and distributed development. Firstly, relative value is measured by the impact that each feature has on customer acquisition and retention. Secondly, feature value is compared to feature cost and specifically development investment to determine feature profitability. Thirdly, feature sensitivity is measured. Feature sensitivity is defined as the effect a fixed amount of development investment has on value in a given time. Fourthly, features are segmented according to their location relative to the value to cost trend line into: most valuable features, outperforming, under-performing and fledglings. Finally, results are analyzed to determine future action. Maintenance and bug fixes are prioritized according to feature value. Product enhancements are prioritized according to sensitivity with special attention to fledglings. Under-performing features are either put on "life-support", terminated or overhauled.
by Avi Latner.
S.M.in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
27

Moody, Seth Sinclair. "Development of Dynamic Thermal Performance Metrics for Eco-roof Systems." PDXScholar, 2013. http://pdxscholar.library.pdx.edu/open_access_etds/666.

Full text
Abstract:
In order to obtain credit for an eco-roof in building energy load calculations the steady state and time-varying thermal properties (thermal mass with evapotranspiration) must be fully understood. The following study presents results of experimentation and modeling in an effort to develop dynamic thermal mass performance metrics for eco-roof systems. The work is focused on understanding the thermal parameters (foliage & soil) of an eco-roof, further validation of the EnergyPlus Green Roof Module and development of a standardized metric for assessing the time-varying thermal benefits of eco-roof systems that can be applied across building types and climate zones. Eco-roof foliage, soil and weather parameters were continuously collected at the Green Roof Integrated PhotoVoltaic (GRIPV) project from 01/20/2011 to 08/28/2011. The parameters were used to develop an EnergyPlus eco-roof validation model. The validated eco-roof model was then used to estimate the Dynamic Benefit for Massive System (DBMS) in 4 climate-locations: Portland Oregon, Chicago Illinois, Atlanta Georgia and Houston Texas. GRIPV30 (GRIPV soil with 30% soil organic matter) was compared to 12 previously tested eco-roof soils. GRIPV30 reduced dry soil conductivity by 50%, increased field capacity by 21% and reduced dry soil mass per unit volume by 60%. GRIPV30 soil had low conductivity at all moisture contents and high heat capacity at moderate and high moisture content. The characteristics of the GRIPV30 soil make it a good choice for moisture retention and reduction of heat flux, improved thermal mass (heat storage) when integrating an eco-roof with a building. Eco-roof model validation was performed with constant seasonal moisture driven soil properties and resulted in acceptable measured - modeled eco-roof temperature validation. 
LAI has a large impact on how the Green Roof Module calculates the eco-roof energy balance with a higher impact on daytime (measured - modeled) soil temperature differential and most significant during summer. DBMS modeling found the mild climates of Atlanta Georgia and Houston Texas with eco-roof annual DBMS of 1.03, 3% performance improvement above the standard building, based on cooling, heating and fan energy consumption. The Chicago Illinois climate with severe winter and mild spring/summer/fall has an annual DBMS of 1.01. The moderate Portland Oregon climate has a below standard DBMS of 0.97.
APA, Harvard, Vancouver, ISO, and other styles
28

Bäck, Eneroth Moa. "A Feature Selection Approach for Evaluating and Selecting Performance Metrics." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280817.

Full text
Abstract:
To accurately define and measure performance is a complex process for most businesses, yet crucial for optimal distribution of company resources and to accomplish alignment across business units. Despite the large amount of data available to most modern companies today, performance metrics are commonly selected based on expertise, tradition, or even gut-feeling. In this thesis, a data-driven approach is proposed in the form of a statistical framework for evaluating and selecting performance metrics. The outline of the framework is influenced by the method of time series feature selection and wraps the search for relevant features around a time series forecasting model. The framework is tuned by experiments exploring state-of-the-art forecasting models, in combination with two different feature selection methods. The results demonstrate that for metrics similar to the real-world data used in this thesis, the best framework incorporates the filter feature selection method in combination with an univariate time series forecasting model.
Defining and measuring performance precisely is a complex process for most companies, yet crucial for the correct distribution of resources and for establishing an understanding of common goals across business units. Despite the large amount of data available to most modern companies today, performance metrics are often chosen based on expertise, tradition or even gut feeling. This thesis proposes a data-driven strategy in the form of a statistical framework for evaluating and selecting performance metrics. The structure of the framework is based on a dimensionality-reduction method for time series, known as feature selection, which in the search for relevant performance metrics makes use of a time series forecasting algorithm. To design a complete framework, experiments are carried out exploring modern time series forecasting algorithms in combination with two different dimensionality-reduction methods. The results show that for performance metrics based on the real-world data used in this thesis, the best framework consists of the filter-based dimensionality-reduction method in combination with a forecasting algorithm for univariate time series.
APA, Harvard, Vancouver, ISO, and other styles
29

Powell, Robert A. "Relationships between lane change performance and open-loop handling metrics." Connect to this title online, 2009. http://etd.lib.clemson.edu/documents/1263410104/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Sreenibha, Reddy Byreddy. "Performance Metrics Analysis of GamingAnywhere with GPU accelerated Nvidia CUDA." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16846.

Full text
Abstract:
The modern world has opened the gates to a lot of advancements in cloud computing, particularly in the field of Cloud Gaming. The most recent development made in this area is the open-source cloud gaming system called GamingAnywhere. The relationship between the CPU and GPU is the main focus of this thesis. The Graphical Processing Unit (GPU) performance plays a vital role in analyzing the playing experience and enhancement of GamingAnywhere. This paper concentrates on the virtualization of the GPU and suggests that accelerating this unit using NVIDIA CUDA is the key to better performance while using GamingAnywhere. After extensive research, gVirtuS was chosen as the technique employed for NVIDIA CUDA. An experimental study was conducted to evaluate the feasibility and performance of GPU solutions by VMware in cloud gaming scenarios given by GamingAnywhere. Performance is measured in terms of bitrate, packet loss, jitter and frame rate. Different resolutions of the game are considered in our empirical research, and our results show that the frame rate and bitrate have increased with different resolutions and the usage of a NVIDIA CUDA enhanced GPU.
APA, Harvard, Vancouver, ISO, and other styles
31

Yekhshatyan, Lora. "Detecting distraction and degraded driver performance with visual behavior metrics." Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/910.

Full text
Abstract:
Driver distraction contributes to approximately 43% of motor-vehicle crashes and 27% of near-crashes. Rapidly developing in-vehicle technology and electronic devices place additional demands on drivers, which might lead to distraction and diminished capacity to perform driving tasks. This situation threatens safe driving. Technology that can detect and mitigate distraction by alerting drivers could play a central role in maintaining safety. Correctly identifying driver distraction in real time is a critical challenge in developing distraction mitigation systems, and this function has not been well developed. Moreover, the greatest benefit may be from real-time distraction detection in advance of dangerous breakdowns in driver performance. Based on driver performance, two types of distraction - visual and cognitive - are identified. These types of distraction have very different effects on visual behavior and driving performance; therefore, they require different algorithms for detection. Distraction detection algorithms typically rely on either eye measures or driver performance measures because the effect of distraction on the coordination of measures has not been established. Combining both eye glance and vehicle data could enhance the ability of algorithms to detect and differentiate visual and cognitive distraction. The goal of this research is to examine whether poor coordination between visual behavior and vehicle control can identify diminished attention to driving in advance of breakdowns in lane keeping. The primary hypothesis of this dissertation is that detection of changes in eye-steering relationship caused by distraction could provide a prospective indication of vehicle state changes. Three specific aims are pursued to test this hypothesis. The first aim examines the effect of distracting activity on eye and steering movements to assess the degree to which the correlation parameters are indicative of distraction. 
The second aim applies a control-theoretic system identification approach to the eye movement and steering data to distinguish between distracted and non-distracted conditions. The third aim examines whether changes of eye-steering coordination associated with distraction provide a prospective indication of breakdowns in driver performance, i.e., lane departures. Together, the three aims show that a combination of visual and steering behavior, i.e., an eye-steering model, can differentiate between non-distracted and distracted states. This model revealed sensitivity to distraction associated with off-road glances. The models derived for different drivers have similar structure and fit data from other drivers reasonably well. In addition, the differences in model order and model coefficients indicate the variability in driving behavior: some people generate more complex behavior than others. As was expected, eye-steering correlation on straight roads is not as strong as observed on curvy roads. However, eye-steering correlation, measured through the correlation coefficient and the time delay between the two movements, is sensitive to different types of distraction. Time delay mediates changes in lane position and the eye-steering system predicts breakdowns in lane keeping. This dissertation contributes to developing a distraction detection system that integrates visual and steering behavior. More broadly, these results suggest that integrating eye and steering data can be helpful in detecting and mitigating impairments beyond distraction, such as those associated with alcohol, fatigue, and aging.
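The kind of eye-steering coordination measure described, a correlation coefficient at a candidate time delay, can be sketched as below. The signals and the 2-sample delay are fabricated toy data, not values from the study:

```python
import math

# Toy sketch: Pearson correlation between an eye signal and a steering
# signal at candidate time delays; the best-scoring lag estimates the
# eye-to-steering time delay. Signals here are made up for illustration.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lagged_corr(eye, steer, lag):
    """Correlation between the eye signal and steering delayed by `lag`."""
    return pearson(eye[:len(eye) - lag] if lag else eye, steer[lag:])

eye = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3]
steer = [9, 9] + eye[:-2]            # steering trails the eyes by 2 samples
best = max(range(4), key=lambda k: lagged_corr(eye, steer, k))
print(best)  # -> 2, the recovered eye-to-steering time delay
```

A weakening peak correlation, or a drifting best lag, is the sort of coordination change the dissertation associates with distraction.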
APA, Harvard, Vancouver, ISO, and other styles
32

Labuschagne, Louwrens. "Investigating the relationship between mobile network performance metrics and customer satisfaction." Master's thesis, Faculty of Science, 2019. https://hdl.handle.net/11427/31605.

Full text
Abstract:
Fixed and mobile communication service providers (CSPs) are facing fierce competition among each other. In a globally saturated market, the primary differentiator between CSPs has become customer satisfaction, typically measured by the Net Promoter Score (NPS) for a subscriber. The NPS is the answer to the question: "How likely is it that you will recommend this product/company to a friend or colleague?" The responses range from 0, representing not at all likely, to 10, representing extremely likely. In this thesis, we aim to identify which, if any, network performance metrics contribute to subscriber satisfaction. In particular, we investigate the relationship between the NPS survey results and 11 network performance metrics of the respondents of a major mobile operator in South Africa. We identify the most influential performance metrics by fitting both linear and non-linear statistical models to the February 2018 survey dataset and test the models on the June 2018 dataset. We find that metrics such as Call Drop Rate, Call Setup Failure Rate, Call Duration and Server Setup Latency are consistently selected as significant features in models of NPS prediction. Nevertheless, we find that all the tested statistical and machine learning models, whether linear or non-linear, are poor predictors of NPS scores in a month when only the network performance metrics in the same month are provided. This suggests that NPS is either driven primarily by other factors (such as customer service interactions at branches and contact centres) or determined by historical network performance over multiple months.
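As a concrete illustration of the score this abstract relies on: NPS is conventionally computed from the 0-10 responses by subtracting the percentage of detractors (scores 0-6) from the percentage of promoters (scores 9-10). The sketch below implements that standard definition only; it is not the thesis's modelling pipeline.

```python
def net_promoter_score(responses):
    """Compute NPS from a list of 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) only
    affect the denominator. Returns a value in [-100, 100].
    """
    n = len(responses)
    if n == 0:
        raise ValueError("no responses")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / n
```

For example, `[10, 9, 8, 7, 6, 0, 10, 9]` has 4 promoters and 2 detractors over 8 responses, giving an NPS of 25.0.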
APA, Harvard, Vancouver, ISO, and other styles
33

Taylor, Rick T. "Non-monetary performance metrics for use in a technology exchange organization." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA392803.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

May, Christopher. "The use and application of performance metrics with regional climate models." Thesis, University of East Anglia, 2016. https://ueaeprints.uea.ac.uk/59406/.

Full text
Abstract:
This thesis aims to assess and develop objective and robust approaches to evaluate regional climate model (RCM) historical skill using performance metrics, and to provide guidance to relevant groups on how best to utilise these metrics. Performance metrics are quantitative, scalar measures of the numerical distance, or 'error', between historical model simulations and observations. Model evaluation practice tends to involve ad hoc approaches with little consideration of the underlying sensitivity of the method to small changes in approach. The main question that arises is: to what degree are the outputs, and subsequent applications, of these performance metrics robust? ENSEMBLES and CORDEX RCMs covering Europe are used with E-OBS observational data to assess historical and future simulation characteristics using a range of performance metrics. Metric sensitivity is found in some cases to be low, such as for differences between variable types, with extreme indices often producing redundant information. In other cases sensitivity is large, particularly for temporal statistics, but not for spatial pattern statistics. Assessments made over a single decade are found to be robust with respect to the full 40-year time period. Two applications of metrics are considered: metric combinations and exploration of the stationarity of historical RCM bias characteristics. The sensitivity of the metric combination procedure is found to be low with respect to the combination method and potentially high for the type of metric included, but remains uncertain for the number of metrics included. Stationarity of biases appears to be highly dependent on the potential for underlying causes of model bias to change substantially in the future, such as in the case of surface albedo in the Alps. It is concluded that performance metrics and their applications can and should be considered more systematically using a range of redundancy and stationarity tests as indicators of historical and future robustness.
APA, Harvard, Vancouver, ISO, and other styles
35

Pellen, Michael Gilbert Charles. "Validity of objective metrics of psychomotor performance in laparoscopic surgical simulation." Thesis, University of Newcastle upon Tyne, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.440562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Willcox, Jeffrey Scott 1970. "Oceanographic surveys with autonomous underwater vehicles : performance metrics and survey design." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/49992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Dave, Shreya H. "Comprehensive performance metrics for Complex Fenestration Systems using a relative approach." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/70416.

Full text
Abstract:
Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 143-148).
Buildings account for over 40% of the energy consumption in the United States, nearly 40% of which is attributed to lighting. The selection of a fenestration system for a building is a critical decision as it offsets electric lighting use as well as impacts energy performance through heating and cooling systems. Further, the fenestration system contributes to both occupant comfort and ambiance of the space. Complex Fenestration Systems (CFS) address these factors with a variety of innovative technologies but the language to describe, discuss, and compare them does not exist. Existing traditional metrics for fenestration systems are unable to reveal the benefits that characterize complex fenestration systems because they are rigid, do not reflect annual performance, and were developed for a different purpose. The framework presented in this research offers a solution to this problem by using an annual climate-based methodology to provide a comprehensive evaluation of a system by incorporating three of the most relevant performance aspects: energy efficiency, occupant visual comfort, and ability to view through. Three metrics, the Relative Energy Impact (REI), the Extent of Comfortable Daylight (ECD), and the View Through Potential (VTP), were derived from these three criteria to express, in relative terms, a façade's contribution to building energy use, comfortable daylight conditions, and the degree of transparency, respectively. Several practical matters were considered when developing a policy-relevant set of metrics, including both ease of calculation for manufacturers and usability for consumers. As such, the calculation methodology evolved from its initial proposal into a simplified approach, analytical where possible, and into a label-like concept for visual representation. 
These metrics are intended to exist as a mechanism by which manufacturers can evaluate and compare facade systems, provide high-level intuition of relative performance for designers and contractors, and enable the balance of performance objectives based on user preference. Ultimately, the creation of this comprehensive language is intended to stimulate innovation in fenestration systems and encourage their use in both new and retrofit building applications.
by Shreya H. Dave.
S.M.
S.M.in Technology and Policy
APA, Harvard, Vancouver, ISO, and other styles
38

Gunawardena, Warnaka R. "Relationship of Hand Size and Keyboard Size to Typing Performance Metrics." Ohio University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1385075311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Dias, Rodrigo Gouveia de Carvalho e. Castro. "Real time and long term performance management metrics in mobile networks." Master's thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/14926.

Full text
Abstract:
Master's degree in Electronics and Telecommunications Engineering
The telecommunications industry is a "world" where services like data, voice and value-added services must be available anytime, anywhere. Because of this requirement, it has become a really aggressive market, where the smallest detail can make the difference. One of the details that can be really differentiating is related to network management. With all the changes and rapid evolution of telecommunications, this can be considered a critically important point. Efficient and optimized network management can save time and money, and that is why it is a mandatory aspect of this market. There are two different paths to be considered: long term management, which consists of saving less detailed data for long periods of time, and real time management, which allows much more detailed information for a narrower time frame. Because of database limitations and cost-related issues, only one of them can be chosen, and some important information may become "invisible", leading to unsolved problems that can be highly expensive. Given this situation, it would be extremely beneficial for telecom operators if they could visualize both types of data, having the long term information along with the most important details of real time information in one view.
APA, Harvard, Vancouver, ISO, and other styles
40

Amoedo, Maria Mercedes. "A structured methodology for identifying performance metrics and monitoring maintenance effectiveness." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/3250.

Full text
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Reliability Engineering Program. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
41

Petersen, Soren Ingomar. "Design quantification design concept argumentation as related to product performance metrics /." May be available electronically:, 2009. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Karimi, Zeinab. "Construction project performance metrics: conceptualization, measurement and application in metro rail." Thesis, IIT Delhi, 2016. http://localhost:8080/iit/handle/2074/6977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Raley, John B. "Factors Affecting the Programming Performance of Computer Science Students." Thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/36716.

Full text
Abstract:
Two studies of factors affecting the programming performance of first- and second year Computer Science students were conducted. In one study students used GIL, a simple application framework, for their programming assignments in a second-semester programming course. Improvements in student performance were realized. In the other study, students submitted detailed logs of how time was spent on projects, along with their programs. Software metrics were computed on the students' source code. Correlations between student performance and the log data and software metric data were sought. No significant indicators of performance were found, even with factors that are commonly expected to indicate performance. However, results from previous research concerning variations in individual programmer performance and relationships between software metrics were obtained.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
44

Callahan, Jeremy. "Metrics of METOC forecast performance and operational impacts on carrier strike operations." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FCallahan.pdf.

Full text
Abstract:
Thesis (M.S. in Meteorology and Physical Oceanography)--Naval Postgraduate School, September 2006.
Thesis Advisor(s): Tom Murphree, Rebecca Stone. "September 2006." Includes bibliographical references (p. 61-62). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
45

Ivo, Akum Nji. "Comparative Analysis of Performance Routing Metrics for Multi-radio Wireless Mesh Networks." Thesis, Blekinge Tekniska Högskola, Avdelningen för telekommunikationssystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5056.

Full text
Abstract:
Traditional ad hoc wireless communication has, over the past years, contributed tremendously to the dawn of wireless mesh networks (WMNs), which have so far provided a significant improvement in capacity and scalability. Routing metrics, which form the basic element of the routing protocol in this innovative communication technology, are a cause for concern, as they must take the characteristics of the wireless medium into consideration in order to provide optimum, appreciable QoS performance. In the past, many single-radio routing metrics were proposed for ad hoc networks that are not compatible with the multi-radio routing scenario demanded by WMNs. In our work, we provide a comparative analysis of the most recently proposed multi-radio routing metrics for WMNs. We begin by providing an overview of the features of a wireless mesh network, thereby presenting a better understanding of some of the research challenges of WMNs. Also, since single-radio routing forms the basis of multi-radio routing, we provide a review of some single-radio routing metrics. In our comparative analysis, an overview of routing protocols for WMNs is provided, enabling an understanding of the demands to be included in a routing metric to ensure efficient routing in WMNs, since different routing protocols may impose different demands; we then identify the requirements of multi-radio routing metrics, on which we base our comparative analysis.
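One well-known multi-radio routing metric of the kind surveyed in this thesis is WCETT (Weighted Cumulative Expected Transmission Time), which trades off a path's total expected transmission time against the time spent on its busiest channel. The sketch below assumes per-hop ETT values and channel assignments are already known; it illustrates the metric's definition, not any particular protocol implementation.

```python
from collections import defaultdict

def wcett(hops, beta=0.5):
    """WCETT for a path given as (ett, channel) pairs.

    WCETT = (1 - beta) * sum(ETT_i) + beta * max_j X_j,
    where X_j is the summed ETT of all hops using channel j.
    Larger beta penalizes paths that reuse one channel heavily.
    """
    total = sum(ett for ett, _ in hops)
    per_channel = defaultdict(float)
    for ett, channel in hops:
        per_channel[channel] += ett
    return (1 - beta) * total + beta * max(per_channel.values())
```

For a path `[(1.0, 1), (2.0, 2), (1.5, 1)]` with `beta=0.5`, channel 1 carries 2.5 time units, so WCETT = 0.5 * 4.5 + 0.5 * 2.5 = 3.5.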
APA, Harvard, Vancouver, ISO, and other styles
46

Nallendran, Vignesh Raja. "Predicting Performance Run-time Metrics in Fog Manufacturing using Multi-task Learning." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/102501.

Full text
Abstract:
The integration of Fog-Cloud computing in manufacturing has given rise to a new paradigm called Fog manufacturing. Fog manufacturing is a form of distributed computing platform that integrates a Fog-Cloud collaborative computing strategy to facilitate responsive, scalable, and reliable data analysis in manufacturing networks. The computation services provided by Fog-Cloud computing can effectively support quality prediction, process monitoring, and diagnosis efforts in a timely manner for manufacturing processes. However, the communication and computation resources for Fog-Cloud computing are limited in Fog manufacturing. Therefore, it is important to effectively utilize the computation services based on optimal computation task offloading, scheduling, and hardware autoscaling strategies to finish the computation tasks on time without compromising the quality of the computation service. A prerequisite for adopting such optimal strategies is to accurately predict the run-time metrics (e.g., time-latency) of the Fog nodes by capturing their inherent stochastic nature in real time, because these run-time metrics are directly related to the performance of the computation service in Fog manufacturing. Specifically, since the computation flow and the data querying activities vary between the Fog nodes in practice, the run-time metrics that reflect the performance of the Fog nodes are heterogeneous in nature, and the performance cannot be effectively modeled through traditional predictive analysis. In this thesis, a multi-task learning methodology is adopted to predict the run-time metrics that reflect performance in Fog manufacturing by addressing the heterogeneities among the Fog nodes. A Fog manufacturing testbed is employed to evaluate the prediction accuracies of the proposed model and benchmark models. 
The proposed model can be further extended in computation tasks offloading and architecture optimization in Fog manufacturing to minimize the time-latency and improve the robustness of the system.
Master of Science
Smart manufacturing aims at utilizing the Internet of Things (IoT), data analytics, cloud computing, etc. to handle varying market demand without compromising productivity or quality in a manufacturing plant. To support these efforts, Fog manufacturing has been identified as a suitable computing architecture to handle the surge of data generated from the IoT devices. In Fog manufacturing, computational tasks are completed locally through the means of interconnected computing devices called Fog nodes. However, the communication and computation resources in Fog manufacturing are limited. Therefore, their effective utilization requires optimal strategies to schedule the computational tasks and assign the computational tasks to the Fog nodes. A prerequisite for adopting such strategies is to accurately predict the performance of the Fog nodes. In this thesis, a multi-task learning methodology is adopted to predict the performance in Fog manufacturing. Specifically, since the computation flow and the data querying activities vary between the Fog nodes in practice, the metrics that reflect the performance of the Fog nodes are heterogeneous in nature and cannot be effectively modeled through conventional predictive analysis. A Fog manufacturing testbed is employed to evaluate the prediction accuracies of the proposed model and benchmark models. The results show that the multi-task learning model has better prediction accuracy than the benchmarks and that it can model the heterogeneities among the Fog nodes. The proposed model can further be incorporated in scheduling and assignment strategies to effectively utilize Fog manufacturing's computational services.
APA, Harvard, Vancouver, ISO, and other styles
47

Alrished, Mohamad Ayad A. "A quantitative analysis and assessment of the performance of image quality metrics." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/128987.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2020
Cataloged from student-submitted PDF of thesis.
Includes bibliographical references (pages 79-82).
Image quality assessment addresses the distortion levels and the perceptual quality of a restored or corrupted image. A plethora of metrics has been developed to that end. The usual measure of success for an image quality metric is its ability to agree with the opinions of human subjects, often represented by the mean opinion score. Despite the promising performance of some image quality metrics in predicting the mean opinion score, several problems are still unaddressed. This thesis focuses on analyzing and assessing the performance of image quality metrics. To that end, this work proposes an objective assessment criterion and considers three indicators related to the metrics: (i) robustness to local distortions; (ii) consistency in their values; and (iii) sensitivity to distortion parameters. In addition, the implementation procedures of the proposed indicators are presented. The thesis then analyzes and assesses several image quality metrics using the developed indicators for images corrupted with Gaussian noise. This work uses both widely used public image datasets and self-designed controlled cases to measure the performance of IQMs. The results indicate that some image quality metrics are prone to poor performance depending on the number of features. In addition, the work shows that the consistency of IQMs' values depends on the distortion level. Finally, the results highlight the sensitivity of different metrics to the Gaussian noise parameter. The objective methodology in this thesis unlocks additional insights regarding the performance of IQMs. In addition to subjective assessment, studying the properties of IQMs outlined in the framework helps in finding a metric suitable for specific applications.
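As an illustration of the kind of full-reference metric under study here, the sketch below computes PSNR, one of the simplest image quality metrics; the image values and error model in the example are assumptions for demonstration, not the thesis's test protocol.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference image
    and a distorted image of the same shape."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A uniform error of 16 grey levels on an 8-bit image gives MSE = 256 and PSNR = 10*log10(255^2/256), about 24.05 dB; heavier Gaussian noise drives the value lower, which is the kind of sensitivity-to-noise-parameter behaviour the thesis quantifies.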
by Mohamad Ayad A. Alrished.
S.M.
S.M. Massachusetts Institute of Technology, Department of Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles
48

Melin, Oscar. "Matching Performance Metrics with Potential Candidates : A computer automated solution to recruiting." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208311.

Full text
Abstract:
Selecting the right candidate for a job can be a challenge. Moreover, there are significant costs associated with recruiting new talent. Thus there is a requirement for precision, accuracy, and neutrality from an organization when hiring a new employee. This thesis project focuses on the restaurant and hotel industry, an industrial sector that has traditionally used a haphazard set of recruiting methods. Unlike large corporations, restaurants cannot afford to hire dedicated recruiters. In addition, the primary media used to find jobs and job seekers in this industry often obscure comparisons between relevant positions. The complex infrastructure of this industry requires a place where both recruiter and job seeker can access a standardized overview of the entire labor market. Introducing automation in hiring aims to better address these complex demands and is becoming a common practice throughout other industries, especially with the help of internet-based recruitment and pre-selection of candidates. These solutions also have the potential to minimize risks of human bias when screening candidates. This thesis aims to minimize inefficiencies and errors associated with the existing manual recruitment screening process by addressing two main issues: the rate at which applicants can be screened and the quality of the resulting matches. This thesis first discusses and analyzes related work in automated recruitment in order to propose a refined solution suitable for the target area. This solution - semantic matching of jobs and candidates - is subsequently evaluated and tested in partnership with Cheffle, a service industry networking company. The thesis concludes with suggestions for potential improvements to Cheffle's current system and details the viability of recruiting with the assistance of an automated semantic matching application.
APA, Harvard, Vancouver, ISO, and other styles
49

Zaahid, Mohammed. "Performance Metrics Analysis of GamingAnywhere with GPU accelerated NVIDIA CUDA using gVirtuS." Thesis, Blekinge Tekniska Högskola, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16852.

Full text
Abstract:
The modern world has opened the gates to many advancements in cloud computing, particularly in the field of cloud gaming. The most recent development in this area is the open-source cloud gaming system called GamingAnywhere. The relationship between the CPU and GPU is the main focus of this thesis. Graphics Processing Unit (GPU) performance plays a vital role in analyzing the playing experience and enhancement of GamingAnywhere. In this paper, we concentrate on virtualization of the GPU and suggest that accelerating this unit using NVIDIA CUDA is the key to better performance while using GamingAnywhere. After extensive research, gVirtuS was chosen as the technique for virtualizing NVIDIA CUDA. An experimental study is conducted to evaluate the feasibility and performance of GPU solutions by VMware in the cloud gaming scenarios provided by GamingAnywhere. Performance is measured in terms of bitrate, packet loss, jitter and frame rate. Different resolutions of the game are considered in our empirical research, and our results show that the frame rate and bitrate increased with different resolutions and with the usage of an NVIDIA CUDA enhanced GPU.
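Of the four metrics named in this abstract, jitter is the least obvious to compute; one common definition is the RFC 3550 interarrival jitter, a running smoothed estimate of transit-time variation. The sketch below assumes send and receive timestamps in the same clock units, and is not necessarily the estimator used in the thesis.

```python
def rtp_jitter(send_ts, recv_ts):
    """RFC 3550 interarrival jitter: J += (|D| - J) / 16, where D is
    the difference in relative transit time between consecutive packets."""
    j = 0.0
    for i in range(1, len(send_ts)):
        d = abs((recv_ts[i] - recv_ts[i - 1]) - (send_ts[i] - send_ts[i - 1]))
        j += (d - j) / 16.0
    return j
```

A single packet delayed by 16 time units among otherwise evenly spaced packets pushes the estimate up to 1.0 and then lets it decay by 1/16 per subsequent packet.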
APA, Harvard, Vancouver, ISO, and other styles
50

Boyle, John K. "Performance Metrics for Depth-based Signal Separation Using Deep Vertical Line Arrays." PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2198.

Full text
Abstract:
Vertical line arrays (VLAs) deployed below the critical depth in the deep ocean can exploit reliable acoustic path (RAP) propagation, which provides low transmission loss (TL) for targets at moderate ranges and increased TL for distant interferers. However, sound from nearby surface interferers also undergoes RAP propagation, and without horizontal aperture, a VLA cannot separate these interferers from submerged targets. A recent publication by McCargar and Zurk (2013) addressed this issue, presenting a transform-based method for passive, depth-based separation of signals received on deep VLAs based on the depth-dependent modulation caused by the interference between the direct and surface-reflected acoustic arrivals. This thesis expands on that work by quantifying the performance of the transform-based depth estimation method in terms of the resolution and ambiguity of the depth estimate. Then, the depth discrimination performance is quantified in terms of the number of VLA elements.
APA, Harvard, Vancouver, ISO, and other styles