Academic literature on the topic 'Performance metrics'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Performance metrics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Performance metrics"

1

Mary, A. Viji Amutha, and T. Jebarajan. "Performance Metrics of Clustering Algorithm." Indian Journal of Applied Research 4, no. 8 (October 1, 2011): 165–67. http://dx.doi.org/10.15373/2249555x/august2014/47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pijpers, Frank P. "Performance metrics." Astronomy & Geophysics 47, no. 6 (December 2006): 6.17–6.18. http://dx.doi.org/10.1111/j.1468-4004.2006.47617.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Taaffe, Kevin M., Robert William Allen, and Lindsey Grigg. "Performance metrics analysis for aircraft maintenance process control." Journal of Quality in Maintenance Engineering 20, no. 2 (May 6, 2014): 122–34. http://dx.doi.org/10.1108/jqme-07-2012-0022.

Full text
Abstract:
Purpose – Performance measurements, or metrics, measure a company's performance and behavior and are used to help an organization achieve and maintain success. Without the use of performance metrics, it is difficult to know whether or not the firm is meeting requirements or making desired improvements. During the course of this study with Lockheed Martin, the research team was tasked with determining the effectiveness of the site's existing performance metrics. The paper aims to discuss these issues. Design/methodology/approach – Research indicates that there are five key elements that influence the success of a performance metric. A standardized method of determining whether or not a metric has the right mix of these elements was created in the form of a metrics scorecard. Findings – The scorecard survey was successful in revealing good metric use, as well as problematic metrics. In the quality department, the Document Rejects metric has been reworked and is no longer within the executive's metric deck. It was also recommended to add root cause analysis, and to quantify and track the cost of non-conformance and the overall cost of quality. In total, the number of site-wide metrics has decreased from 75 to 50 metrics. The 50 remaining metrics are undergoing a continuous improvement process in conjunction with the use of the metric scorecard tool developed in this research. Research limitations/implications – The metrics scorecard should be used site-wide for an assessment of all metrics. The focus of this paper is on the metrics within the quality department. Practical implications – Putting a quick and efficient metrics assessment technique in place was critical. With the leadership and participation of Lockheed Martin, this goal was accomplished. Originality/value – This paper presents the process of metrics evaluation and the issues that were encountered during the process, including insights that would not have been easily documented without this mechanism. Lockheed Martin Company has used results from this research. Other industries could also apply the methods proposed here.
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Yangguang, Yangming Zhou, Shiting Wen, and Chaogang Tang. "A Strategy on Selecting Performance Metrics for Classifier Evaluation." International Journal of Mobile Computing and Multimedia Communications 6, no. 4 (October 2014): 20–35. http://dx.doi.org/10.4018/ijmcmc.2014100102.

Full text
Abstract:
The evaluation of classifiers' performance plays a critical role in the construction and selection of classification models. Although many performance metrics have been proposed in the machine learning community, no general guidelines are available among practitioners regarding which metric to select for evaluating a classifier's performance. In this paper, we attempt to provide practitioners with a strategy for selecting performance metrics for classifier evaluation. Firstly, the authors investigate seven widely used performance metrics, namely classification accuracy, F-measure, kappa statistic, root mean square error, mean absolute error, the area under the receiver operating characteristic curve, and the area under the precision-recall curve. Secondly, the authors use Pearson linear correlation and Spearman rank correlation to analyze the potential relationships among these seven metrics. Experimental results show that these commonly used metrics can be divided into three groups, and all metrics within a given group are highly correlated but less correlated with metrics from different groups.
APA, Harvard, Vancouver, ISO, and other styles
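The correlation strategy described in the abstract above can be illustrated with a short sketch. This is only one reading of the approach, assuming scikit-learn's metric helpers and SciPy's correlation functions; the paper's exact experimental setup may differ, and average_precision_score is used here as a stand-in for the area under the precision-recall curve.

```python
# Sketch: compute the seven metrics named in the abstract for several classifiers,
# then correlate the metric columns with Pearson and Spearman coefficients.
# Assumes binary classification; metric helper choices are our own assumptions.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import (accuracy_score, f1_score, cohen_kappa_score,
                             mean_squared_error, mean_absolute_error,
                             roc_auc_score, average_precision_score)

def seven_metrics(y_true, y_pred, y_score):
    """Return the seven metrics for one classifier on one test set."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f_measure": f1_score(y_true, y_pred),
        "kappa": cohen_kappa_score(y_true, y_pred),
        "rmse": np.sqrt(mean_squared_error(y_true, y_score)),
        "mae": mean_absolute_error(y_true, y_score),
        "auc_roc": roc_auc_score(y_true, y_score),
        "auc_pr": average_precision_score(y_true, y_score),
    }

def metric_correlations(results):
    """results: list of dicts from seven_metrics(), one per classifier/dataset.
    Returns Pearson and Spearman correlations between every pair of metrics."""
    names = list(results[0])
    cols = {n: np.array([r[n] for r in results]) for n in names}
    pearson = {(a, b): pearsonr(cols[a], cols[b])[0] for a in names for b in names}
    spearman = {(a, b): spearmanr(cols[a], cols[b])[0] for a in names for b in names}
    return pearson, spearman
```

Grouping metrics whose pairwise correlations stay high would then reproduce, in spirit, the three-group finding reported in the abstract.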
5

Baker, Noel C., and Patrick C. Taylor. "A Framework for Evaluating Climate Model Performance Metrics." Journal of Climate 29, no. 5 (February 26, 2016): 1773–82. http://dx.doi.org/10.1175/jcli-d-15-0114.1.

Full text
Abstract:
Given the large amount of climate model output generated from the series of simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5), a standard set of performance metrics would facilitate model intercomparison and tracking performance improvements. However, no framework exists for the evaluation of performance metrics. The proposed framework systematically integrates observations into metric assessment to quantitatively evaluate metrics. An optimal metric is defined in this framework as one that measures a behavior that is strongly linked to model quality in representing mean-state present-day climate. The goal of the framework is to objectively and quantitatively evaluate the ability of a performance metric to represent overall model quality. The framework is demonstrated, and the design principles are discussed using a novel set of performance metrics, which assess the simulation of top-of-atmosphere (TOA) and surface radiative flux variance and probability distributions within 34 CMIP5 models against Clouds and the Earth’s Radiant Energy System (CERES) observations and GISS Surface Temperature Analysis (GISTEMP). Of the 44 tested metrics, the optimal metrics are found to be those that evaluate global-mean TOA radiation flux variance.
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Shanshan, and Weiyang Sun. "Image Encryption Performance Evaluation Based on Poker Test." Advances in Multimedia 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/6714164.

Full text
Abstract:
The fast development of image encryption requires performance evaluation metrics. Traditional metrics like entropy do not consider the correlation between a local pixel and its neighborhood. These metrics cannot evaluate encryption based on image pixel coordinate permutation. A novel effectiveness evaluation metric is proposed in this paper to address the issue. The cipher text image is transformed to a bit stream, and then the Poker Test is applied. The proposed metric considers the neighbor correlations of the image through neighborhood selection and clip scan. The randomness of the cipher text image is tested by calculating the chi-square test value. Experimental results verify the efficiency of the proposed metric.
APA, Harvard, Vancouver, ISO, and other styles
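The bit-stream and chi-square idea from the abstract above can be sketched with the classical m-bit poker test. This is a stand-in for the paper's metric only: the neighborhood selection and clip-scan steps specific to the proposed method are not reproduced here.

```python
# Sketch: a classical m-bit poker test with a chi-square statistic on a bit stream.
# Illustrative only; the paper's exact neighborhood/clip-scan variant may differ.
import numpy as np
from scipy.stats import chi2

def poker_test(bits, m=4):
    """bits: 1-D array of 0/1 values. Splits the stream into non-overlapping m-bit
    blocks, counts each of the 2**m patterns, and returns the poker statistic and
    a p-value against the uniform (random) hypothesis."""
    k = len(bits) // m                                    # number of complete blocks
    blocks = np.asarray(bits[:k * m]).reshape(k, m)
    values = blocks.dot(1 << np.arange(m - 1, -1, -1))    # each block as an integer
    counts = np.bincount(values, minlength=2 ** m)
    stat = (2 ** m / k) * np.sum(counts ** 2) - k         # FIPS-style poker statistic
    p_value = chi2.sf(stat, df=2 ** m - 1)
    return stat, p_value

# Usage sketch: turn a cipher-text image into a bit stream, then test randomness.
# img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)   # placeholder image
# bits = np.unpackbits(img.flatten())
# print(poker_test(bits))
```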
7

Czachura, Agnieszka, Jouri Kanters, Niko Gentile, and Maria Wall. "Solar Performance Metrics in Urban Planning: A Review and Taxonomy." Buildings 12, no. 4 (March 23, 2022): 393. http://dx.doi.org/10.3390/buildings12040393.

Full text
Abstract:
Metrics are instrumental in design assessments. Solar performance metrics help designers to evaluate solar access in cities. Metrics should be used early in the urban planning stages in order to enable sustainable urban development with greater access to solar energy. Currently, solar assessments at this design stage are limited in practice; established methods or routines are lacking, and so are suitable metrics. This paper reviews the relevant literature to provide a critical overview of solar metrics commonly used in building performance assessments. The review defines key metric formulation principles—valuation, time constraint, and normalisation—which should be considered when designing a performance indicator. A new taxonomy of solar performance metrics is provided. Metric definitions, suitability, and limitations are discussed. The findings highlight the need for reliable, low-complexity metrics and adequate methods for early solar assessments for urban planning.
APA, Harvard, Vancouver, ISO, and other styles
8

Nochur, A., H. Vedam, and J. Koene. "Alarm Performance Metrics." IFAC Proceedings Volumes 34, no. 27 (June 2001): 203–8. http://dx.doi.org/10.1016/s1474-6670(17)33592-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Loo, Jessica, Traci E. Clemons, Emily Y. Chew, Martin Friedlander, Glenn J. Jaffe, and Sina Farsiu. "Beyond Performance Metrics." Ophthalmology 127, no. 6 (June 2020): 793–801. http://dx.doi.org/10.1016/j.ophtha.2019.12.015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

DeLozier, Randall, and Neil Snyder. "ENGINEERING PERFORMANCE METRICS." INCOSE International Symposium 3, no. 1 (July 1993): 599–605. http://dx.doi.org/10.1002/j.2334-5837.1993.tb01632.x.

Full text
Abstract:
Implementation of a Total Quality Management (TQM) approach to engineering work required the development of a system of metrics which would serve as a meaningful management tool for evaluating effectiveness in accomplishing project objectives and in achieving improved customer satisfaction. A team effort was chartered with the goal of developing a system of engineering performance metrics which would measure customer satisfaction, quality, cost effectiveness, and timeliness. The approach to developing this system involved normal systems design phases including conceptual design, detailed design, implementation, and integration. The lessons learned from this effort will be explored in this paper. These lessons learned may provide a starting point for other large engineering organizations seeking to institute a performance measurement system.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Performance metrics"

1

Sundfors, David. "Performance Metrics for Sustainability Value." Licentiate thesis, KTH, Bygg- och fastighetsekonomi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-200315.

Full text
Abstract:
The trend that started with Green Building has moved on into Sustainable Building. But how do we know that something is really sustainable? This project started out with the intention to find a small set of performance indicators for commercial buildings which could be continuously measured and monitored over time, would give a good indication of the building's level of sustainability, and could, as such, be presented as an additional part of a valuation. Since it has been shown repeatedly that properties that can prove they are sustainable generate a higher market price, these performance indicators would be interesting from the perspective of a valuation professional. In order to find these parameters, the project began with three of the international environmental certification systems and one Swedish system, to study which parameters are considered important in these systems. Following that study, surveys and interviews within the real estate business in Sweden provided an insight into how performance is measured today. Lastly, by combining those studies with a review of the sustainability information considered important by the Royal Institution of Chartered Surveyors (RICS) from a valuation professional's point of view and an updated literature review, a simple set of indicators could indeed be identified. There is, however, still a problem with defining their actual impact on market price. Other authors have come to the conclusion that although sustainability can be measured to some extent, incorporating that information into valuation of the property in a statistically secure way is not yet possible. We need to increase our knowledge about the performance of our built environment, and the key performance indicators presented in this thesis would help us do just that. We can also see that real estate owners in many cases already gather much information about their buildings, but they lack the incentives to share that data with others.


APA, Harvard, Vancouver, ISO, and other styles
2

Akcay, Koray. "Performance Metrics For Fundamental Estimation Filters." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606510/index.pdf.

Full text
Abstract:
This thesis analyzes fundamental estimation filters – the Alpha-Beta Filter, Alpha-Beta-Gamma Filter, Constant Velocity (CV) Kalman Filter, Constant Acceleration (CA) Kalman Filter, Extended Kalman Filter, 2-model Interacting Multiple Model (IMM) Filter and 3-model IMM – with respect to their resource requirements and performance. In the resource requirement part, the fundamental estimation filters are compared according to their CPU usage, memory needs and complexity. The best fundamental estimation filter, which needs very low resources, is the Alpha-Beta Filter. In the performance evaluation part of this thesis, the performance metrics used are Root-Mean-Square Error (RMSE), Average Euclidean Error (AEE), Geometric Average Error (GAE) and normalized forms of these. The normalized form of the performance metrics makes the measure of error independent of range and the length of the trajectory. The fundamental estimation filters and performance metrics are implemented in MATLAB. The Monte Carlo simulation method and 6 different air trajectories are used for testing. Test results show that the performance of fundamental estimation filters varies according to the trajectory and the target dynamics used in constructing the filter. Consequently, filter performance is application-dependent. Therefore, before choosing an estimation filter, the most probable target dynamics, hardware resources and acceptable error level should be investigated. An estimation filter that matches these requirements will be 'the best estimation filter'.
APA, Harvard, Vancouver, ISO, and other styles
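The three error metrics named in the abstract above can be sketched as follows, assuming common definitions (RMSE over Monte Carlo runs, AEE as the mean Euclidean error, GAE as the geometric mean of the error); the thesis' normalized variants and exact conventions may differ.

```python
# Sketch of RMSE, Average Euclidean Error (AEE) and Geometric Average Error (GAE)
# for a tracking filter evaluated over Monte Carlo runs. Definitions are the
# commonly used ones, not necessarily those of the thesis.
import numpy as np

def tracking_error_metrics(estimates, truths):
    """estimates, truths: arrays of shape (runs, steps, dims).
    Returns RMSE, AEE and GAE of the position error over all runs and steps."""
    err = np.linalg.norm(estimates - truths, axis=-1)    # Euclidean error per run/step
    rmse = np.sqrt(np.mean(err ** 2))
    aee = np.mean(err)
    gae = np.exp(np.mean(np.log(err + 1e-12)))           # geometric mean, guarded against zeros
    return rmse, aee, gae
```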
3

Tucker, Christopher John. "Performance metrics for network intrusion systems." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1547.

Full text
Abstract:
Intrusion systems have been the subject of considerable research during the past 33 years, since the original work of Anderson. Much has been published attempting to improve their performance using advanced data processing techniques including neural nets, statistical pattern recognition and genetic algorithms. Whilst some significant improvements have been achieved, they are often the result of assumptions that are difficult to justify, and comparing performance between different research groups is difficult. The thesis develops a new approach to defining performance focussed on comparing intrusion systems and technologies. A new taxonomy is proposed in which the type of output and the data scale over which an intrusion system operates is used for classification. The inconsistencies and inadequacies of existing definitions of detection are examined and five new intrusion levels are proposed from analogy with other detection-based technologies. These levels are known as detection, recognition, identification, confirmation and prosecution, each representing an increase in the information output from, and functionality of, the intrusion system. These levels are contrasted over four physical data scales, from application/host through to enterprise networks, introducing and developing the concept of a footprint as a pictorial representation of the scope of an intrusion system. An intrusion is now defined as “an activity that leads to the violation of the security policy of a computer system”. Five different intrusion technologies are illustrated using the footprint with current challenges also shown to stimulate further research. Integrity in the presence of mixed trust data streams at the highest intrusion level is identified as particularly challenging. Two metrics new to intrusion systems are defined to quantify performance and further aid comparison. Sensitivity is introduced to define basic detectability of an attack in terms of a single parameter, rather than the usual four currently in use. Selectivity is used to describe the ability of an intrusion system to discriminate between attack types. These metrics are quantified experimentally for network intrusion using the DARPA 1999 dataset and SNORT. Only nine of the 58 attack types present were detected with sensitivities in excess of 12 dB, indicating that detection performance of the attack types present in this dataset remains a challenge. The measured selectivity was also poor, indicating that only three of the attack types could be confidently distinguished. The highest value of selectivity was 3.52, significantly lower than the theoretical limit of 5.83 for the evaluated system. Options for improving selectivity and sensitivity through additional measurements are examined.
APA, Harvard, Vancouver, ISO, and other styles
4

Cirtita, Horatiu. "Performance Metrics in Downstream Supply Chain." Doctoral thesis, Università degli studi di Padova, 2008. http://hdl.handle.net/11577/3425091.

Full text
Abstract:
In a downstream supply chain (DSC), consisting of manufacturers, transportation, distribution, and retail members, end customers expect timely, reliable and quality delivery of the right amount of products at low cost. The manufacturer, in its attempt to deliver the product through the DSC, however, must balance customer expectation with profitability. This balance can be achieved through a mix of strategies. One DSC strategy tool, the SCOR model, integrates and continuously improves the performance of the various DSC activities. The SCOR model metric system is considered a breakthrough given its standardized approach to assessment across organizations and industry types. The top tier of the SCOR metric system evaluates the overall strategic organizational activities in a supply chain context. These top tier metric system elements consist of: delivery reliability, flexibility and responsiveness, cost, and assets. These metrics follow the standard as recommended by Schneiderman (1996), who stated that a metric system should contain no more than five top tier metrics given that a large number diffuses the focus of the strategic activities. Schneiderman further suggests that this first tier metric system should consist of 1) internal process and 2) external results performance. The SCOR model activities fit these internal and external schemas well. For the SCOR model to work among the DSC members, these measurements, though, must be standardized among DSC members (Lambert and Pohlen, 2001). The use of a DSC metrics system leads to synergies of performance among supply chain members that facilitate the measure of total supply chain performance as opposed to isolated functional "silo" measures (Hausman, 2003). We have purposely chosen performance metrics as the standard of evaluation as opposed to the terms "performance measurement" and "performance measure." The term "performance measure" carries a connotational definition that is vague, historical, and diffused (Neely, 1999). Schneiderman (1996) stated that measures and metrics differ in the following way: measures consist of the broad set of infinite forms of evaluating a firm's process whereas metrics are a subset of the few measures actually useful to improve a company's efforts. Metrics should be monotonic, that is, improvements in metrics must lead to improvement in shareholder wealth. Gunasekaran, Patel and Tirtiroglu (2001) discussed the problem of these terms in their study of the literature and stated, "Quite often, companies have a large number of performance measures to which they keep adding based on suggestions from employees and consultants, and fail to realize that performance measurement can be better addressed using a few good metrics (p. 72)."
APA, Harvard, Vancouver, ISO, and other styles
5

Dietrich, Nathan S. "Performance metrics for correlation and tracking algorithms." Thesis, Monterey, Calif.: Naval Postgraduate School; Springfield, Va.: Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA391959.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wanderman-Milne, Skye A. "Virtualized application performance prediction using system metrics." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77450.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 79-80).
Virtualized datacenter administrators would like to consolidate virtual machines (VMs) onto as few physical hosts as possible in order to decrease costs, but must leave enough physical resources for each VM to meet application service level objectives (SLOs). The threshold between good and bad performance in terms of resource settings, however, is hard to determine and rarely static due to changing workloads and resource usage. Thus, in order to avoid SLO violations, system administrators must err on the side of caution by running fewer VMs per host than necessary or setting reservations, which prevents resources from being shared. To ameliorate this situation, we are working to design and implement a system that automatically monitors VM-level metrics to predict impending application SLO violations, and takes appropriate action to prevent the SLO violation from occurring. So far we have implemented the performance prediction, which is detailed in this document, while the preventative actions are left as future work. We created a three-stage pipeline in order to achieve scalable performance prediction. The three stages are prediction, which predicts future VM ESX performance counter values based on current time-series data; aggregation, which aggregates the predicted VM metrics into a single set of global metrics; and finally classification, which for each VM classifies its performance as good or bad based on the predicted VM counters and the predicted global state. Prediction of each counter is performed by a least-squares linear fit, aggregation is performed simply by summing each counter across all VMs, and classification is performed using a support vector machine (SVM) for each application. In addition, we created an experimental system running a MongoDB instance in order to test and evaluate our pipeline implementation. Our results on this experimental system are promising, but further research will be necessary before applying these techniques to a production system.
by Skye A. Wanderman-Milne.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
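The three-stage pipeline described in the abstract above (prediction, aggregation, classification) can be sketched briefly. Counter names, window handling and the SVM setup below are illustrative assumptions, not details taken from the thesis.

```python
# Sketch of the prediction -> aggregation -> classification pipeline:
# least-squares linear extrapolation of each VM counter, summation across VMs,
# and a per-application SVM that labels predicted performance good/bad.
import numpy as np
from sklearn.svm import SVC

def predict_counter(history, horizon=1):
    """Least-squares linear fit over the recent time series of one VM counter,
    extrapolated 'horizon' steps ahead."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    return slope * (len(history) - 1 + horizon) + intercept

def aggregate(vm_predictions):
    """Sum each predicted counter across all VMs to form the predicted global state."""
    return {name: sum(p[name] for p in vm_predictions) for name in vm_predictions[0]}

def classify(svm: SVC, vm_pred, global_pred):
    """Per-VM good/bad label from predicted VM counters plus predicted global state."""
    features = np.concatenate([list(vm_pred.values()), list(global_pred.values())])
    return svm.predict(features.reshape(1, -1))[0]
```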
7

Burdett, Yan Liu. "Correlation of Software Quality Metrics and Performance." NSUWorks, 2012. http://nsuworks.nova.edu/gscis_etd/109.

Full text
Abstract:
Performance is an aspect of software quality that is often not considered at early stages of software design. Common approaches to performance analysis include utilizing profiling tools after the software has been developed to find bottlenecks and executing simulation models that are either manually constructed or derived from UML design diagrams. Many projects attempt to correct performance issues by adding more powerful hardware instead of attacking the root cause. Software metrics have been used to predict many aspects of software quality such as maintainability and fault-tolerance by correlation and regression analysis. Metrics proposed by Chidamber and Kemerer, also known as the CK metric suite, have been studied extensively in software quality model analyses. These studies examined maintainability, fault tolerance, error proneness, and scalability of the software as it evolved through time. Correlations were made between metrics and the likely quality models they represented. Other metrics such as Cyclomatic Complexity by McCabe and class couplings by Martin have also been used in quality predictions. No research has been conducted to analyze the relationship between performance and metrics. The goal of this research was to construct a decision tree that used software metrics to forecast performance characteristics. The relationship between metrics and performance was derived by correlation between static code metrics and three runtime variables: number of objects, method call frequency, and average method call lengths on selected software benchmarks. The decision tree was constructed using the C4.5 algorithm implemented in the WEKA software. Pearson correlation coefficients were obtained for the combined datasets from all benchmarks. The decision trees and Pearson statistics showed that weighted methods per class (WMC), total lines of code (TLOC), and coupling between objects (CBO) have significant correlation with software performance. WMC showed positive correlation with number of objects and calls. CBO correlated positively with average method call lengths and negatively with number of objects. TLOC correlated positively with number of calls.
APA, Harvard, Vancouver, ISO, and other styles
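The analysis pattern described in the abstract above can be sketched as follows: correlate static code metrics with runtime variables, then fit a decision tree. scikit-learn's DecisionTreeClassifier (CART) stands in for the C4.5/WEKA tree used in the study, and the column names (wmc, tloc, cbo, ...) are assumed for illustration only.

```python
# Sketch: Pearson correlation between static code metrics and runtime variables,
# plus a decision tree forecasting a discretized performance label from the metrics.
import pandas as pd
from scipy.stats import pearsonr
from sklearn.tree import DecisionTreeClassifier

def correlate_metrics(df: pd.DataFrame, static_cols, runtime_cols):
    """Pearson correlation between each static metric and each runtime variable."""
    return {(s, r): pearsonr(df[s], df[r])[0]
            for s in static_cols for r in runtime_cols}

def fit_performance_tree(df: pd.DataFrame, static_cols, label_col):
    """Decision tree (CART, as a stand-in for C4.5) predicting a performance class."""
    tree = DecisionTreeClassifier(max_depth=4, random_state=0)
    tree.fit(df[static_cols], df[label_col])
    return tree

# Usage sketch with assumed columns:
# corr = correlate_metrics(data, ["wmc", "tloc", "cbo"],
#                          ["num_objects", "num_calls", "avg_call_length"])
# tree = fit_performance_tree(data, ["wmc", "tloc", "cbo"], "perf_class")
```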
8

Jackson, Kenneth J. "Forecast error metrics for Navy inventory management performance." Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5756.

Full text
Abstract:
Approved for public release; distribution is unlimited.
This research establishes metrics for determining overall Navy secondary inventory forecasting accuracy when compared to actual demands at the Naval Inventory Control Point (NAVICP). Specifically, two performance metrics are introduced: the average performance index (API) and the median absolute deviation performance index (MPI). API measures forecasting accuracy of secondary inventory when compared against demand or forecast performance over a four-quarter period. MPI measures the quarterly variability of forecast errors over the same period. The API and MPI metrics allow for the identification of poorly forecasted NAVICP secondary inventory items. The metrics can be applied to entire inventories or subsets of items based on type, demand, or cost. In addition, the API metric can be used to show overall inventory performance, providing NAVICP with a graphical means to assess forecasting performance improvements (or degradations) over time. The new forecasting accuracy methods developed in this research will allow the Navy to continually gauge the overall health of their inventory management practices and provide a method for improving forecasting accuracy. Additionally, they will assist NAVICP in complying with DoD directives that require NAVICP to monitor and continually develop improvements to inventory management practices.
APA, Harvard, Vancouver, ISO, and other styles
9

Forman, Rachel Emily. "Objective performance metrics for improved space telerobotics training." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68408.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 47-50).
NASA astronauts undergo many hours of formal training and self-study to gain proficiency in space teleoperation tasks. After each lesson, instructors score an astronaut's performance in several broad skill categories, including 'General Situational Awareness', 'Maneuvers/Task Performance', and 'Hand-Controller Techniques'. A plus, check, or minus indicates that the student is ahead of, at, or behind the expected skill level. The scoring of the final evaluation for a robotics training course is also largely subjective, with the instructor designating an integer score for the student between 1 (Unsatisfactory) and 5 (Strong) in the same skill categories. This thesis research project was designed to: (1) consider the variety of quantitative metrics that could be embedded into a space robotics training simulation, and (2) investigate at what point and by what means it is most constructive for performance assessment to be revealed to an operator-in-training. We reviewed the current largely qualitative space robotics performance metrics, as well as new quantitative kinematic metrics of manual control skills-including those explored thus far only in laboratory experiments-and additional measures of executive function and supervisory control performance. Kinematic metrics include quantitative measures such as rate of change of linear and rotational acceleration. Potential measures of executive function and supervisory control include camera selection and clearance monitoring. To instantiate our ideas, we chose a specific "fly-to" space telerobotics task taught in the early phases of NASA Generic Robotics Training (GRT) and developed a pilot training experiment (n=16) using our virtual robotics training workstation. Our goal was to evaluate potential performance metrics designed to encourage use of multi-axis control, and to compare real-time ("live") performance feedback alternatives (live visual vs. live aural vs. none). Movement time decreased and multi-axis and bimanual control use gradually increased across trials. All subjects had the opportunity to view post-trial performance feedback including these metrics. Although our subjects overwhelmingly preferred the live, visual feedback condition, no reliable additional effects of live feedback condition were found, except perhaps among the more experienced subjects. However, the experiment demonstrated that embedded performance metrics potentially could quantify and improve some important aspects of GRT evaluations.
Supported by the National Space Biomedical Research Institute through NASA NCC9-58
by Rachel Emily Forman.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
10

Wolbert, Daniel (Daniel Joseph). "Utilization of visual metrics to drive intended performance." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39689.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2007.
Includes bibliographical references.
In recent years the American industrial landscape has undergone tremendous change as companies have worked to adopt Lean practices. This transformation has been difficult, but necessary, as American companies work to remain competitive with foreign competitors. A key enabler of these Lean transformations has been the use of visual metrics to communicate how a process has performed as well as to set goals for future performance. The challenge is to first identify what data is available and then create metrics that encourage and reward Lean behaviors. This thesis explores the introduction of visual metrics for a part inspection process at the Raytheon Company. Prior to the introduction of these metrics, there was limited ability to determine how the process was performing. As a result, downstream customers were able to track when a part entered the inspection process but were unable to predict when the inspection would be completed. This introduced a risk to the process and created a sense of frustration through the facility. The visual metrics for the inspection area were created on a series of visual dashboards that display common Lean metrics, such as cycle time and backlog (or work-in-process). Through these dashboards the area will be able to understand how it is performing and initiate continuous improvement projects to improve performance.
by Daniel Wolbert.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Performance metrics"

1

Boyd, William J., Ann Brockhaus, Matthew R. Chini, Paul A. Esposito, Kul B. Garg, Jonathan M. Haas, Judy L. Jarrell, et al., eds. Industrial Hygiene Performance Metrics. Fairfax, VA: American Industrial Hygiene Association, 2001. http://dx.doi.org/10.3320/978-1-931504-12-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

American Industrial Hygiene Association, Metrics Subcommittee, ed. Industrial hygiene performance metrics. Fairfax, VA: American Industrial Hygiene Association, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lehmann, Donald R. Marketing metrics and financial performance. Cambridge, Mass.: Marketing Science Institute, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Albert, Sylvie, and Manish Pandey. Performance Metrics for Sustainable Cities. London: Routledge, 2021. http://dx.doi.org/10.4324/9781003096566.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Samur, Evren. Performance Metrics for Haptic Interfaces. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4225-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

SpringerLink (Online service), ed. Performance Metrics for Haptic Interfaces. London: Springer London, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Touran, Ali. Performance Metrics for Public–Private Partnerships. Washington, D.C.: Transportation Research Board, 2021. http://dx.doi.org/10.17226/26171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kounev, Samuel, Ian Gorton, and Kai Sachs, eds. Performance Evaluation: Metrics, Models and Benchmarks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-69814-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Performance metrics"

1

Goldshtein, Sasha, Dima Zurbalev, and Ido Flatow. "Performance Metrics." In Pro .NET Performance, 1–6. Berkeley, CA: Apress, 2012. http://dx.doi.org/10.1007/978-1-4302-4459-2_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Tokhi, M. O., M. A. Hossain, and M. H. Shaheed. "Performance Metrics." In Parallel Computing for Real-time Signal Processing and Control, 81–110. London: Springer London, 2003. http://dx.doi.org/10.1007/978-1-4471-0087-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Carstens, Deborah Sater, and Gary L. Richardson. "Performance Metrics." In Project Management Tools and Techniques, 2nd ed., 233–44. Boca Raton: CRC Press, 2019. http://dx.doi.org/10.1201/9780429263163-17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sproull, Bob. "Performance Metrics." In The Focus and Leverage Improvement Book, 191–206. New York, NY: Productivity Press, 2018. http://dx.doi.org/10.4324/9780429444456-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Leasure, Bruce, David J. Kuck, Sergei Gorlatch, Murray Cole, Gregory R. Watson, Alain Darte, David Padua, et al. "Performance Metrics." In Encyclopedia of Parallel Computing, 1522–25. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-09766-4_2182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kurtz, David. "Performance Metrics." In PeopleSoft for the Oracle DBA, 213–67. Berkeley, CA: Apress, 2012. http://dx.doi.org/10.1007/978-1-4302-3708-2_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Eeckhout, Lieven. "Performance Metrics." In Computer Architecture Performance Evaluation Methods, 5–14. Cham: Springer International Publishing, 2010. http://dx.doi.org/10.1007/978-3-031-01727-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sproull, Bob. "Performance Metrics." In Learning from the Past, Present, and Future to Drive Profits to New Levels, 265–86. New York: Productivity Press, 2023. http://dx.doi.org/10.4324/9781003462385-19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Link, Albert N., and John T. Scott. "Performance Evaluation Metrics." In Public Accountability, 17–21. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5639-8_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kasilingam, Raja G. "Logistics performance metrics." In Logistics and Transportation, 214–34. Boston, MA: Springer US, 1998. http://dx.doi.org/10.1007/978-1-4615-5277-2_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Performance metrics"

1

Mackey, Jamie. "UDOT Signal Performance Metrics: New and Upcoming Metrics." In Automated Traffic Signal Performance Measure Workshop. Purdue University, 2016. http://dx.doi.org/10.5703/1288284316023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Talbot, R. G., Chris R. Benn, and Rene G. M. Rutten. "Telescope performance metrics." In Astronomical Telescopes and Instrumentation, edited by Peter J. Quinn. SPIE, 2002. http://dx.doi.org/10.1117/12.460734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Stevenson, Scott. "Maintaining Signal Performance Metrics." In Automated Traffic Signal Performance Measure Workshop. Purdue University, 2016. http://dx.doi.org/10.5703/1288284316040.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Huang, Hui-Min, and Seungbin Moon. "Session details: Measures & metrics." In PerMIS '10: Performance Metrics for Intelligent Systems. New York, NY, USA: ACM, 2010. http://dx.doi.org/10.1145/3260126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Farnsworth, Grant. "UDOT Automated Freeway Performance Metrics." In Automated Traffic Signal Performance Measure Workshop. Purdue University, 2016. http://dx.doi.org/10.5703/1288284316026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mackey, Jamie. "UDOT Signal Performance Metrics: Configuration." In Automated Traffic Signal Performance Measure Workshop. Purdue University, 2016. http://dx.doi.org/10.5703/1288284316039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Konell, Jeremiah P., Jack Van Schenck, Joseph P. Bratton, and Steven J. Polasik. "Practical IMP Performance Metrics." In 2016 11th International Pipeline Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/ipc2016-64528.

Full text
Abstract:
Annually or as events occur, operators submit data to various regulatory agencies about the operation, maintenance and extent of their assets. Many of these figures are used by the public, non-profit organizations and private companies to independently conduct assessments about operators, ranging from safety to quality assurance to scope and nature of product deliveries. The Pipeline and Hazardous Materials Safety Administration (PHMSA), the National Energy Board (NEB), and other industry organizations have recently put an emphasis on more meaningful metrics by releasing guidelines and leading discussions at industry conferences and workshops. In order to derive more strategic accuracy and pertinence, Explorer Pipeline Company (Explorer) and Det Norske Veritas (U.S.A.), Inc. (DNV GL) have developed a procedural effort to develop meaningful metrics. Several derivative benefits come from this effort, such as support for calculating cost-benefit / ROI figures for maintenance projects, justification for compliance-plus activities and, most importantly, a more informed perspective of operational risk. A renewed approach to this effort is to organize the more meaningful factors into three categories: (1) metrics of job roles and tasks within Explorer’s Asset Integrity staff, (2) other existing influential metrics, and (3) regulatory metrics. Using this approach, Explorer defined well-targeted, unitized metrics, each with a meaningful basis. Explorer anticipates the development of these more meaningful metrics to support the transparency sought by regulators and other stakeholders, benchmark and continually evaluate our Asset Integrity program and possibly support the development of practical metrics for the pipeline industry.
APA, Harvard, Vancouver, ISO, and other styles
8

Brunnert, Andreas. "Green Software Metrics." In ICPE '24: 15th ACM/SPEC International Conference on Performance Engineering. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3629527.3652883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Balakirsky, Stephen, Thomas Kramer, and Frederick Proctor. "Metrics for mixed pallet stacking." In the 10th Performance Metrics for Intelligent Systems Workshop. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/2377576.2377587.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Balakirsky, Stephen, and Henrik Christensen. "Session details: Performance metrics for mixed palletizing operations." In PerMIS '10: Performance Metrics for Intelligent Systems. New York, NY, USA: ACM, 2010. http://dx.doi.org/10.1145/3260127.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Performance metrics"

1

Stephan, E. IP Performance Metrics (IPPM) Metrics Registry. RFC Editor, August 2005. http://dx.doi.org/10.17487/rfc4148.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Duffield, N., A. Morton, and J. Sommers. Loss Episode Metrics for IP Performance Metrics (IPPM). RFC Editor, May 2012. http://dx.doi.org/10.17487/rfc6534.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dietz, R., and R. Cole. Transport Performance Metrics MIB. RFC Editor, August 2005. http://dx.doi.org/10.17487/rfc4150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bagnulo, M., B. Claise, P. Eardley, A. Morton, and A. Akhter. Registry for Performance Metrics. RFC Editor, November 2021. http://dx.doi.org/10.17487/rfc8911.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Demichelis, C., and P. Chimento. IP Packet Delay Variation Metric for IP Performance Metrics (IPPM). RFC Editor, November 2002. http://dx.doi.org/10.17487/rfc3393.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Almes, G., S. Kalidindi, and M. Zekauskas. A One-Way Delay Metric for IP Performance Metrics (IPPM). Edited by A. Morton. RFC Editor, January 2016. http://dx.doi.org/10.17487/rfc7679.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Almes, G., S. Kalidindi, and M. Zekauskas. A One-Way Loss Metric for IP Performance Metrics (IPPM). Edited by A. Morton. RFC Editor, January 2016. http://dx.doi.org/10.17487/rfc7680.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Koles, G., R. Hitchcock, and M. Sherman. Metrics for building performance assurance. Office of Scientific and Technical Information (OSTI), July 1996. http://dx.doi.org/10.2172/374167.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Paxson, V., G. Almes, J. Mahdavi, and M. Mathis. Framework for IP Performance Metrics. RFC Editor, May 1998. http://dx.doi.org/10.17487/rfc2330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Pressel, D. M. Performance Metrics for Parallel Systems. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada373437.

Full text
APA, Harvard, Vancouver, ISO, and other styles
