Dissertations / Theses on the topic 'Metrics Program'

Consult the top 50 dissertations / theses for your research on the topic 'Metrics Program.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Hitchcock, T. L. "Metrics for object-oriented program control." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0005/MQ46256.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Robison, Dawn M. 1967. "Transformational metrics for product development." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/34724.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design & Management Program, 2001.
Includes bibliographical references (p. 113-116).
The research provides a case study of performance metrics within the framework of the product development process and team effectiveness. A comparative analysis of eight product development teams was done to evaluate the teams' effectiveness in achieving three outcomes - customer satisfaction, shareholder value and time to market. A survey was conducted to evaluate areas where no formal documentation existed and to supplement the existing historical data that were collected from databases and documents. The analysis was done on two levels - by program team and individual respondent - and looked at the level of performance and effort that influenced the specific outcomes. It was concluded that performance metrics are used within an organization to drive actions, to assess progress and to make decisions. Conclusions were consistent with the premise that people perform to how they are measured and that the team effectiveness can be driven by a set of performance metrics that are aligned with the strategic goal of the organization. Transformational metrics were developed within the framework of understanding the interdependence of the social and technical systems. Choosing the right metrics is critical to an organization's success because the metrics directly influence behavior and establish the culture within the firm. It was determined that if the right combinations of metrics are selected, teams will act in such a way as to maximize their effectiveness and behave in a manner that achieves the corporate goals.
by Dawn M. Robison.
S.M.
3

Johnson, John H. (John Howard) 1965. "Metrics for a platform team." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/88321.

Full text
4

Dufour, Bruno. "Objective quantification of program behaviour using dynamic metrics." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=81328.

Full text
Abstract:
In order to perform meaningful experiments in optimizing compilation and runtime system design, researchers usually rely on a suite of benchmark programs of interest to the optimization technique under consideration. Programs are described as numeric, memory-intensive, concurrent, or object-oriented, based on a qualitative appraisal, in some cases with little justification.
In order to make these intuitive notions of program behaviour more concrete and subject to experimental validation, this thesis introduces a methodology to objectively quantify key aspects of program behaviour using dynamic metrics. A set of unambiguous, dynamic, robust and architecture-independent dynamic metrics is defined, and can be used to categorize programs according to their dynamic behaviour in five areas: size, data structures, memory use, polymorphism and concurrency. Each metric is also empirically validated.
A general-purpose, easily extensible dynamic analysis framework has been designed and implemented to gather empirical metric results. This framework consists of three major components. The profiling agent collects execution data from a Java virtual machine. The trace analyzer performs computations on this data, and the web interface presents the result of the analysis in a convenient and user-friendly way.
The utility of the approach as well as selected specific metrics is validated by examining metric data for a number of commonly used benchmarks. Case studies of program transformations and the consequent effects on metric data are also considered. Results show that the information that can be obtained from the metrics not only corresponds well with the intuitive notions of program behaviour, but can also reveal interesting behaviour that would have otherwise required lengthy investigations using more traditional techniques.
5

Israel, Mark Abraham. "Heuristic program reorganization guided by object-oriented metrics." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq20976.pdf.

Full text
6

Raiyani, Sangeeta. "Incorporating design metrics into a company-wide program." Virtual Press, 1990. http://liblink.bsu.edu/uhtbin/catkey/722468.

Full text
Abstract:
Metrics calculated during the design phase of a software life-cycle can be used to predict the errors in the software project at an early stage, improve the overall software quality, and increase the efficiency of the software life-cycle. In this thesis, a design metric D(G) for the structure design G is presented. The need and importance of the design metric are shown, the metric is explained in detail, results are given, and solutions are presented to improve the design quality based on the results. A strategy is explained to implement the design metric into a company-wide program. The limitations of the metrics model are also given. A complete model of the software development life-cycle, incorporating the metrics, is also presented.
Department of Computer Science
7

Blackburn, Craig D. (Craig David) S. M. Massachusetts Institute of Technology. "Metrics for enterprise transformation." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54657.

Full text
Abstract:
Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 148-161).
The objective of this thesis is to depict the role of metrics in the evolving journey of enterprise transformation. To this end, three propositions are explored: (i) metrics and measurement systems drive transformation, (ii) employee engagement is a proxy to gauge transformation progress; and (iii) metric considerations enable enterprise transformation when systematically executed as part of a transformation roadmap. To explore this problem, the aerospace measurement community was consulted to help grasp a better understanding of the context in which transformation is currently defined and measured. Once the problem space was defined, the environment of doing research with the enterprise as the unit of analysis was described with the intent of exploring the role of metrics and transformation. In particular, the performance measurement literature helped identify tools and methods used to select metrics to enable decision making at the enterprise level. After this review, two case studies were performed, considering: (1) the implementation of a bottom-up measurement system to drive transformation and (2) the effect of a top-down corporate measurement system on the enterprise. The first case study revealed insights regarding the benefits and challenges of implementing measurement systems and highlighted the use of employee engagement as a proxy to measure enterprise transformation. In the second case study, contemporary measurement issues were discussed and mapped to an Eight Views of the Enterprise analysis to identify critical enterprise interactions.
(cont.) Ultimately, the Lean Advancement Initiative's Enterprise Transformation Roadmap was used as a method for depicting how performance measurement can help enable enterprise transformation. The implications of research in metrics for enterprise transformation span three areas: (1) the extensive literature reviews provide an academic contribution for performing enterprise and measurement research; (2) a common language and framework for exploring measurement problems is depicted for practitioners through the case study analysis; and (3) a connection between enterprise measurement and enterprise transformation is established to drive future transformation success.
by Craig D. Blackburn.
S.M.
S.M.in Technology and Policy
8

Russell, Keith A. (Keith Anthony) 1966. "Reengineering metrics systems for aircraft sustainment teams : a metrics thermostat for use in strategic priority management." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/29212.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics; and, (S.M.)--Massachusetts Institute of Technology, Technology and Policy Program, February, 2001.
Includes bibliographical references (p. 133-134).
We explore the selection of metrics for the United States Air Force weapon system sustainment team empirically with emphasis placed on the incentive, structural and predictive implications of metrics. We define the term "metric" to include measures that employees impact through their efforts. We believe that even in a not-for-profit organization such as the Air Force, by putting emphasis (or weight) on a performance metric, the organization establishes inherent incentive structures within which employees will act to maximize their best interests. However, we believe that not-for-profit organizations differ from for-profit ones in their inherent structure since profit becomes cost and several mission-oriented outcome variables share a fundamental importance in achieving the organization's goals. We seek an understanding of the structural composition of Air Force sustainment's metrics systems that, when coupled with a method for practical selection of a high-quality set of metrics (and weights), will align the incentives of employees with the interests of the organization. The empirical study is grounded in emerging theoretical work, which uses our above definition of a metric to propose a theoretical metrics feedback construct called the Metrics Thermostat. System structure is explored through common correlation and regression analysis as well as more sophisticated structural equation modeling and systems dynamics techniques used to explore potential feedback loops. The F-16 is used as a case study for this problem, and the metrics systems are considered from the front-line base-level point of view of Air Force active duty, Air National Guard and Air Force Reserve bases worldwide. 96 low-level metrics, covariates and outcomes were examined for 45 F-16 bases for a period of five years. Outcome importance was determined through personal interviews and internal archival documentation. -- The metrics, covariates and outcomes in the study are very interrelated. 
-- The primary indicator of overall performance is Command (ACC, USAFE, etc.) -- Increased Fix Rate increases Utilization, but increased Utilization decreases Fix Rate. -- Cannibalization Rate is associated with higher Fix Rates but lower Mission Capability, Flying Scheduling Effectiveness, and Aircraft Utilization. -- Active duty Mission Capability is predicted well from the dataset such that: * Active duty commands have higher mission capability. * Mission Capability is slightly higher in cool moist climates. * Increased Aircraft Utilization, Repeat Discrepancies and Flying Scheduling Effectiveness are all associated with higher Mission Capability. * Increased Break Rates and Unscheduled (engine) Maintenance are associated with lower Mission Capability. The model appears to be valid for peacetime actions only.
by Keith A. Russell.
S.M.
9

Sides, Steve P. (Steve Paul) 1963. "Driving robust jet engine design through metrics." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/88331.

Full text
10

Day, Henry Jesse II. "An Investigation of Software Metrics Affect on Cobol Program Reliability." Diss., Virginia Tech, 1996. http://hdl.handle.net/10919/30479.

Full text
Abstract:
The purpose of this research was to predict a COBOL program's reliability from software characteristics that are found in the program's source code. The first step was to select factors based on the human information processing model that are associated with changes in computer program reliability. Then these factors (software metrics) were quantitatively studied to determine which factors affect COBOL program reliability. Then a statistical model was developed that predicts COBOL program reliability. Reliability was selected because the reliability of computer programs can be used by systems professionals and auditors to make decisions. Using the Human Information Processing Model to study the act of creating a computer program, several hypotheses were derived about program characteristics and reliability. These hypotheses were categorized as size, structure, and temporal hypotheses. These characteristics were then used to test several prediction models for the reliability of COBOL programs. Program characteristics were measured by a program called METRICS. METRICS was written by the author using the Pascal programming language. It accepts COBOL programs as input and produces as output seventeen measures of complexity. Actual programs and related data were then gathered from a large insurance company over the course of one year. The data were used to test the hypotheses and to find a model for predicting the reliability of COBOL programs. The operational definition for reliability was the probability of a program executing without abending. The size of a program, its cyclomatic complexity, and the number of times a program has been executed were used to predict reliability. A regression model was developed that predicted the reliability of a COBOL program from a program's characteristics. The model had a prediction error of 9.3%, an R2 of 15%, and an adjusted R2 of 13%. 
The most important thing learned from the research is that increasing the size of a program's modules, not the total size of a program, is associated with decreased reliability.
Ph. D.
11

Majumder, Arpita (Arpita P. ). 1970. "Strategic metrics for product development at Ford Motor Company." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/29210.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design & Management Program, 2000.
Includes bibliographical references (leaves 78-80).
This thesis aims at developing a practical method to adjust product development metrics, which will enable effective management of the product development (PD) process. A set of good metrics is crucial to the success of a product, as metrics direct the development process by driving the actions and decisions of the PD team members which in turn define the product. Emphasizing or "weighting" certain metrics more than others can make the difference between success and failure. Through empirical exploration of metrics we seek to determine the weights, and the impact of different metrics on product success. Unlike its use in the engineering literature, the management use of the term "metric" includes both quantitative and qualitative measures which the PD team members can influence through their efforts. The theory used to determine the correct weight of a metric has its roots in the principles of Agency Theory and has been developed by "engineering" the theory to obtain two key parameters which define the weight of a metric. These two parameters are "leverage" and "risk discount factor" (RDF). Leverage is the amount by which a metric can impact the profitability of a product and RDF takes into account the inherent risk averse nature of the PD team members that influence their decisions. In order to evaluate the PD metrics and their weights within a firm, data was collected for a set of metrics across 17 programs at Ford Motor Company. The values for each metric were assigned based on information obtained through program documentation and interviews with multiple team members across various functions within the organization. Different success measures were collected and the impact and leverage of each metric was determined through empirical exploration of the various relationships. The key findings to date include: * Cronbach's Alpha for metrics regrouped using factor analysis averages 0.7, demonstrating internal reliability. 
* Customer satisfaction correlates significantly with the rigor of the PD process, and internal coordination and communication between the core team and the other members of the value chain. * Time to market shows consistent correlation with profit and profit residuals. * The calculated weights suggest higher emphasis on capturing manufacturing need and using robust design practices, technology, and differentiation will increase profitability. * The measured RDF does not change the relative weightings of the metrics as obtained through the leverage calculation.
by Arpita Majumder.
S.M.
12

Lawler, Maureen E. (Maureen Elizabeth). "Improving shop floor visualization and metrics." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/59163.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Global Operations Program at MIT, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 55-57).
Within the Technical Operations division of Novartis Pharmaceuticals, there is an aggressive vision to be the "Toyota" of the Pharma Industry by 2010. To accomplish this, PharmOps Switzerland has embraced operational excellence, IQP (Innovation, Quality, and Productivity). Still, there is more that the site, and more specifically manufacturing, can do to fully realize the benefits of adopting all aspects of IQP. Currently, there is a lack of adequate visualization on the shop floor. The current status and schedule of production cannot be quickly seen at the tools where the work is being performed. This thesis focuses on improving visualization and creating a set of KPIs (Key Performance Indicators) and visual displays that will improve performance. Change, especially cultural change, is difficult and takes considerable time and effort. Even when changes are implemented slowly with small iterations, they might not be well received. Without a strong culture of continuous improvement, teams may not perceive that there are things that can be improved. Historical metrics are comfortable and useful to the shop floor. Visual metrics have improved communication.
by Maureen E. Lawler.
S.M.
M.B.A.
13

Frank, Carl Bernard. "Metrics thermostat for strategic priorities in military system acquisition projects." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9272.

Full text
Abstract:
Thesis (S.M.M.O.T.)--Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, 2000.
Includes bibliographical references (leaves 63-65).
Innovation and rapid fielding ("commercialization") of superior technology have been a key element in the United States military's strategy throughout its history. Maintaining this edge in the current environment of increased rate of technological change but dramatically reduced military procurement budgets will require strategically developing the most cost effective systems and optimizing the productivity of new product development teams. An emerging framework for a "metrics thermostat" based on an agency theory model for selecting and prioritizing metrics for product development teams has shown promising results in two commercial applications. This study focused on applying this framework to one of the government's largest procurement organizations, the Naval Sea Systems Command (NAVSEA), the Navy Department's central activity for designing, engineering, integrating, building and procuring U.S. naval ships and shipboard weapons and combat systems. A working metrics hierarchy and construct was developed. Desired outcomes or profit analogies were defined to value and differentiate strategic priorities, enabling metrics and covariates. Five strategic priorities aligned with NAVSEA's stated goals were selected for the study and enabling metrics and covariates directly impacting these strategic priorities and desired outcomes were defined. Approximately 50 product/systems were identified and investigated to varying degrees. Significant progress was made toward populating the defined data fields for the selected data points/systems. Preliminary analyses offer hope that the combination of a large data set and broad, robust metrics will reveal meaningful correlations and leverages. The data sources have largely been identified but substantial data collection remains to be done. As this is completed, comprehensive regression analyses will be performed to determine the relative effectiveness of the strategic priorities and enabling metrics. 
These results, and corresponding directions to program managers on which strategies and metrics to emphasize and which to de-emphasize, will be validated by NAVSEA experts.
by Carl B. Frank.
S.M.M.O.T.
14

Atlee, Jennifer Robinson. "Operational sustainability metrics : a case of electronics recycling." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32276.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2005.
Includes bibliographical references (p. 119-127).
In the past 15 years corporations and governments have developed a growing appreciation of the need for "sustainability" and have worked the term into their goals, strategy and mission statements. Despite extensive efforts to define the term, there is still little clarity on how to move toward sustainability or measure improvements. Further advances toward sustainability will require system specific metrics to assess both current performance and the impact of operational, technological or regulatory changes on that performance. Not only are there currently few operational metrics by which to practically assess progress toward sustainability, there is also very little understanding of how to judge the effectiveness of such metrics. Electronics recycling is used in this thesis as a case problem in developing and evaluating system specific performance metrics for sustainability. Electronics recycling is a growing national and international concern due to the increasing volume of waste, the potential toxicity of the scrap, and reports of improper handling and disposal. Despite this concern, there is limited understanding about the electronics recycling system. There is a need for systematic ways to describe system functioning and quantitative methods to assess system performance. Existing evaluations of eco-efficiency or sustainability are either too aggregated to guide operational decisions or too complex and data intensive to be performed in the context of a low-margin system. A range of performance metrics were developed and assessed for several electronics recycling operators. These included measures of resource recovery and environmental performance.
(cont.) These metrics were assessed for their ability to provide insights on resource efficiency comparable to more complex indicators, with minimal data required beyond that collected for normal business operations. The informative value of these metrics, their ability to capture system behavior, and the similarity between evaluations using different metrics were compared. Recovery effectiveness results for three US electronics recycling operators are presented based on several quantitative indicators. Results show that current simple measures such as "mass percent to landfill" are not sufficient to fully assess system performance. Composite indicators of systems performance can provide valuable insights even using currently available data collected by operators for business purposes.
by Jennifer Robinson Atlee.
S.M.
15

Kellam, Benjamin A. (Benjamin Alexander) 1972. "The use of process metrics to evaluate product development projects." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/30052.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design & Management Program, 2004.
Includes bibliographical references (p. 72).
Product development success is an important strategic factor in today's business environment. The ability to accurately predict the outcome of product development projects would be a useful strategic tool. This research uses a product development process assessment survey called "Perform" to evaluate project success and also evaluates the effectiveness of the "Perform" survey. Two abilities of the survey are evaluated. The first is the consistency of the responses from different members of the development team. The second is the ability of the survey to predict the outcome of the project. The survey is evaluated by applying it to two projects that have been completed. The results of each respondent are compared for consistency. The results of the project are also compared to the results of the survey to gauge the predictive ability of the survey. Perform was found to provide fairly consistent responses from members of the development team. The survey did a good job of predicting project outcome.
by Benjamin A. Kellam.
S.M.
16

Nicol, Robert A. (Robert Arthur) 1969. "Design and analysis of an enterprise metrics system." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/82686.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2001.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaf 82).
by Robert A. Nicol.
S.M.
17

Wolbert, Daniel (Daniel Joseph). "Utilization of visual metrics to drive intended performance." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39689.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2007.
Includes bibliographical references.
In recent years the American industrial landscape has undergone tremendous change as companies have worked to adopt Lean practices. This transformation has been difficult, but necessary, as American companies work to remain competitive with foreign competitors. A key enabler of these Lean transformations has been the use of visual metrics to communicate how a process has performed as well as to set goals for future performance. The challenge is to first identify what data is available and then create metrics that encourage and reward Lean behaviors. This thesis explores the introduction of visual metrics for a part inspection process at the Raytheon Company. Prior to the introduction of these metrics, there was limited ability to determine how the process was performing. As a result, downstream customers were able to track when a part entered the inspection process but were unable to predict when the inspection would be completed. This introduced a risk to the process and created a sense of frustration throughout the facility. The visual metrics for the inspection area were created on a series of visual dashboards that display common Lean metrics, such as cycle time and backlog (or work-in-process). Through these dashboards the area will be able to understand how it is performing and initiate continuous improvement projects to improve performance.
by Daniel Wolbert.
S.M.
M.B.A.
18

Wu, Yalu. "A Framework for Analyzing Forecast Accuracy Metrics." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99572.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 2015. In conjunction with the Leaders for Global Operations Program at MIT.
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2015. In conjunction with the Leaders for Global Operations Program at MIT.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 69).
Demand Planning forecasts at Nike, Inc. are used by many groups: Supply Planning/Materials Planning, Sourcing, Categories/Merchandising, Finance, S&OP, and Sales. These groups take forecasts as an input to make key decisions. Forecasts, by nature, will be inaccurate. There are two big unknowns to answer as Nike considers how to improve forecast accuracy: 1) how accurate can or should forecasts become (target setting) and 2) what are the causes and impacts of inaccuracy. However, the first step to addressing these questions is to understand and measure forecast accuracy metrics in a consistent way across Nike's various Demand Planning groups. This project investigates the following through the design of a Tableau dashboard: * which metrics should be reviewed (accuracy, bias, volatility, etc.) * how they should be computed (what to compare) * at what level of aggregation for which groups * at what level of detail for which groups (category, classification, etc.) * over how many seasons * with which filters. In addition to aligning on forecast accuracy metrics, the project also focuses on the dashboard design (determining the most appropriate structure/views, how information is laid out or presented, and the use of labels and color) and on setting the long-term vision for viewing and using forecast accuracy metrics through researching and outlining the process for root cause analysis and target setting.
by Yalu Wu.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
19

Chappell, Bryan L. "Definition and validation of software complexity metrics for Ada." Thesis, Virginia Tech, 1989. http://hdl.handle.net/10919/44625.

Full text
Abstract:

One of the major goals of software engineering is to control the development and maintenance of software products. With the growing use and importance of the Ada programming language, control over the software life cycle of Ada systems is becoming even more important. Software complexity metrics have been developed to aid software engineers in the design and development of software systems. This research defines metrics for Ada and uses an automated analysis tool to calculate them. This tool can be used by the software engineer to help maintain control over Ada software products. The validation of this tool was performed by analyzing a medium-sized commercial Ada product. The flow of control and flow of information through the use of Ada packages can be measured. The results show that software complexity metrics can be applied to Ada and produce meaningful results.


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
20

Raina, Ravi. "A systems perspective on cybersecurity in the cloud : frameworks, metrics and migration strategy." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107602.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, School of Engineering, System Design and Management Program, Engineering and Management Program, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 119-124).
Cloud computing represents the next generation of disruptive technologies in computing. However, there are several barriers to massive adoption of the cloud, and among them security remains one of the principal concerns. Traditional failure analysis and prevention frameworks fall exceedingly short in addressing cybersecurity, as is evident from ever-increasing cybersecurity breaches. New frameworks for cybersecurity are required which take a holistic view of the problem and a systems perspective. Migrating to the cloud also represents a key decision point for CEOs and CTOs today, especially from a security perspective. The objective of this thesis is to illustrate the effectiveness of taking a Systems Approach to cybersecurity and to provide a framework for migration to the cloud, with specific emphasis on critical cybersecurity issues pertaining to various cloud deployment models and delivery services. The thesis is divided into three phases. First, it explores the major security threats and critical areas of focus for security in the cloud, along with the major security frameworks, metrics and controls, especially those from NIST, CIS and CSA. SLAs for different cloud service models are then presented. A high-level cloud migration strategy and framework, with special emphasis on cybersecurity, is also discussed. In the second phase, System-Theoretic Accident Model and Processes (STAMP), which is based on Systems Theory, is applied to the Target security breach, and key recommendations as well as new insights are presented. The analysis highlights the need for a holistic approach and Systems Thinking in cybersecurity, and presents new insights that are not produced by traditional methods. Finally, in the third phase, the cloud migration framework discussed in phase one is applied to Target.
A case is made that in certain scenarios, moving the less critical applications to the cloud and utilizing the security benefits of the cloud can actually reduce the threat vectors and security exposures, bringing IT systems from a higher-risk state to a lower-risk state. The thesis integrates cybersecurity methods and frameworks, as well as security metrics, with the cloud migration strategy. Additionally, it presents a STAMP/CAST failure model for cybersecurity breaches and highlights the need for an integrated view of safety and security and for Systems Thinking in cybersecurity, both in traditional systems and in the cloud.
by Ravi Raina.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
21

Latner, Avi. "Feature performance metrics in a service as a software offering." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/67562.

Full text
Abstract:
Thesis (S.M. in Engineering and Management)--Massachusetts Institute of Technology, Engineering Systems Division, System Design and Management Program, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 46-47).
The Software as a Service (SaaS) delivery model has become widespread. This deployment model changes the economics of software delivery but also has an impact on development. Releasing updates to customers is immediate, and the development, product and marketing teams have access to customer usage information. These dynamics create a fast feedback loop between development and customers. To fully leverage this feedback loop, the right metrics need to be set. Typically, SaaS applications are a collection of features: the product is divided between development teams according to features, and customers access the service through features. Thus a framework that measures feature performance is valuable. This thesis provides a framework for measuring the performance of software as a service (SaaS) product features in order to prioritize development efforts. The case is based on empirical data from HubSpot, and it is generalized to provide a framework applicable to other companies with large-scale software offerings and distributed development. Firstly, relative value is measured by the impact that each feature has on customer acquisition and retention. Secondly, feature value is compared to feature cost, and specifically development investment, to determine feature profitability. Thirdly, feature sensitivity is measured. Feature sensitivity is defined as the effect a fixed amount of development investment has on value in a given time. Fourthly, features are segmented according to their location relative to the value-to-cost trend line into: most valuable features, outperforming, under-performing and fledglings. Finally, results are analyzed to determine future action. Maintenance and bug fixes are prioritized according to feature value. Product enhancements are prioritized according to sensitivity, with special attention to fledglings. Under-performing features are either put on "life-support", terminated or overhauled.
by Avi Latner.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
22

Arroyo, Acosta Ernesto 1978. "Mediating disruption in human-computer interaction from implicit metrics of attention." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41709.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2007.
Includes bibliographical references (p. 143-150).
Multitasking environments cause people to be interrupted constantly, often disrupting their ongoing activities and impeding reaching their goals. This thesis presents a disruption reducing approach designed to support the user's goals and optimize productivity that is based on a model of the user's receptivity to an interruption. The model uses knowledge of the interruption content, context and priority of the task(s) in progress, user actions and goal-related concepts to mediate interruptions. The disruption management model is distinct from previous work by the addition of implicit sensors that deduce the interruption content and user context to help determine when an interruption will disrupt an ongoing activity. Domain-independent implicit sensors include mouse and keyboard behaviors, and goal-related concepts extracted from the user documents. The model also identifies the contextual relationship between interruptions and user goals as an important factor in how interruptions are controlled. The degree to which interruptions are related to the user goal determines how those interruptions will be received. We tested and evolved the model in various cases and showed significant improvement in both productivity and satisfaction. A disruption manager application controls interruptions on common desktop computing activities, such as web browsing and instant messaging. The disruption manager demonstrates that mediating interruptions by supporting the user goals can improve performance and overall productivity. Our evaluation shows an improvement in success of over 25% across prioritization conditions for real life computing environments.
Goal priority and interruption relevance play an important role in the interruption decision process, and several experiments examined the effects of these factors on people's reactions and availability to interruptions, and on overall performance. These experiments demonstrate that people recognize the potential benefits of being interrupted and adjust their susceptibility to interruptions during highly prioritized tasks. The outcome of this research includes a usable model that can be extended to tasks as diverse as driving an automobile and performing computer tasks. This thesis supports mediating technologies that will recognize the value of communication and control interruptions so that people are able to maintain concentration amidst their increasingly busy lifestyles.
by Ernesto Arroyo Acosta.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
23

Singh, Maninder S. M. Massachusetts Institute of Technology. "Application of product family research and development metrics in a power systems manufacturing environment." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106266.

Full text
Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, School of Engineering, System Design and Management Program, Engineering and Management Program, 2016.
Cataloged from PDF version of thesis. Page 74 blank.
Includes bibliographical references (pages 69-73).
Without objectively measuring the process of innovation, one cannot ensure that research and development expenses deliver the right benefit. In order to be successful in generating revenue, it is imperative for research and development functions to take a broader view in developing technology for new products. Planning for the right product platform and family should not be limited to market applications for derivative products; it should also cover successive generations of product platforms and derivative products. This thesis explores a method documented in the literature from a variety of industries, ranging from power tools to medical devices, that uses the right metrics to support business and product architecture decisions at the right level. The method includes defining metrics for Effectiveness, a measure of product success in the marketplace, and Efficiency, a measure of successful utilization of the corporation's product development resources. The proposed method was applied to four product families, comprising twelve different products over two release cycles in the Diesel Power Systems industry. The metrics were analyzed in combination, and further guidance on their usage was developed. When applied appropriately, the metrics can help product planners and product line architects manage and assess the right level of technology integration from past and present product platforms. The use of Efficiency and Effectiveness metrics allows business leaders to better assess their product planning strategies on a continuous basis. It also allows for a clearer understanding of how historical decisions affected the outcomes of past product architectures.
However, having measures in place isn't sufficient; this work also explores the need for better communication and alignment of business processes between Research and Engineering and sub-business units, to better develop the right technology integration and maturation of future products.
by Maninder Singh.
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles
24

Dave, Shreya H. "Comprehensive performance metrics for Complex Fenestration Systems using a relative approach." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/70416.

Full text
Abstract:
Thesis (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 143-148).
Buildings account for over 40% of the energy consumption in the United States, nearly 40% of which is attributed to lighting. The selection of a fenestration system for a building is a critical decision as it offsets electric lighting use as well as impacts energy performance through heating and cooling systems. Further, the fenestration system contributes to both occupant comfort and ambiance of the space. Complex Fenestration Systems (CFS) address these factors with a variety of innovative technologies but the language to describe, discuss, and compare them does not exist. Existing traditional metrics for fenestration systems are unable to reveal the benefits that characterize complex fenestration systems because they are rigid, do not reflect annual performance, and were developed for a different purpose. The framework presented in this research offers a solution to this problem by using an annual climate-based methodology to provide a comprehensive evaluation of a system by incorporating three of the most relevant performance aspects: energy efficiency, occupant visual comfort, and ability to view through. Three metrics, the Relative Energy Impact (REI), the Extent of Comfortable Daylight (ECD), and the View Through Potential (VTP), were derived from these three criteria to express, in relative terms, a facade's contribution to building energy use, comfortable daylight conditions, and the degree of transparency, respectively. Several practical matters were considered when developing a policy-relevant set of metrics, including both ease of calculation for manufacturers and usability for consumers. As such, the calculation methodology evolved from its initial proposal into a simplified approach, analytical where possible, and into a label-like concept for visual representation.
These metrics are intended to exist as a mechanism by which manufacturers can evaluate and compare facade systems, provide high-level intuition of relative performance for designers and contractors, and enable the balance of performance objectives based on user preference. Ultimately, the creation of this comprehensive language is intended to stimulate innovation in fenestration systems and encourage their use in both new and retrofit building applications.
by Shreya H. Dave.
S.M.
S.M. in Technology and Policy
APA, Harvard, Vancouver, ISO, and other styles
25

Tedesco, Catherine Anne Coles 1974. "Developing metrics for concurrent engineering at Raytheon Company's Surface Radar Group." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/91334.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Ocean Engineering; and, (S.M.)--Massachusetts Institute of Technology, Sloan School of Management; in conjunction with the Leaders for Manufacturing Program at MIT, 2001.
Includes bibliographical references (leaves 165-169).
by Catherine Anne Coles Tedesco.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
26

Arnold, Ronald J. "Performance metrics for the Program Executive Office for Integrated Warfare Systems 1.0 and 2.0." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Jun%5FArnold.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Romero, James S. "Software metrics : a case analysis of the U.S. Army Bradley Fighting Vehicle A3 Program." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA350027.

Full text
Abstract:
Thesis (M.S. in Management)--Naval Postgraduate School, June 1998.
Thesis advisor(s): David F. Matthews, Mark E. Nissen. "June 1998." Includes bibliographical references (p. 63-66). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
28

Fleming, Nathan Richard. "Metal price volatility : a study of informative metrics and the volatility mitigating effects of recycling." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66481.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and, (S.M. in Technology and Policy)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 97-101).
Metal price volatility is undesirable for firms that use metals as raw materials, because price volatility can translate into volatility of material costs. Volatile material costs can erode the profitability of the firm and limit material selection decisions. The undesirability of volatility gives firms an incentive to try to gather advance information on fluctuations in price, and to manage, or at least control their exposure to, price volatility. It was hypothesized that, since price can be a measure of the scarcity of a metal, other metrics of scarcity risk might correlate with price. A system dynamics simulation of the aluminum supply chain was run to determine how well some commonly used metrics of scarcity correlated with future changes in price, and to explore some conditions that strengthened or weakened those correlations. Additionally, prior work has suggested that increased recycling rates can lower price volatility. The study of the correlation of scarcity risk metrics with price is accompanied by a study of how the technical substitutability of secondary metal for primary, termed secondary substitutability, affects price volatility. The results show that some of the scarcity risk metrics modeled (alumina price, primary marginal cost, recycling efficiency, and the static depletion index) weakly correlate with future primary metal price, and hence volatility. Other metrics examined (recycling rate, mining industry Herfindahl Index, the acceleration of the mining rate, and the alumina producer's marginal cost) did not correlate with the future primary price. Correlations were stronger when the demand elasticity was high, the secondary substitutability was high, or the delays in adding primary capacity were low. Regarding managing price volatility, greater secondary substitutability lowers price volatility, likely because it increases the elasticity of substitution of secondary for primary metal; this result is explored mathematically.
The model results show that some scarcity risk metrics do weakly correlate with future primary price, but the strength of the correlation depends on certain market conditions. Moreover, firms may have some ability to manage price volatility by increasing the limit on how much secondary metal they can use in their product.
by Nathan Richard Fleming.
S.M. in Technology and Policy
S.M.
APA, Harvard, Vancouver, ISO, and other styles
29

Alcaraz, Ochoa Maria de Lourdes. "Development of metrics for streamlined life cycle assessments : a case study on tablets." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107098.

Full text
Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 59-63).
Growing concern about climate change and human impact on the environment has resulted in increased interest in evaluating the environmental impact of the products and services we consume. Life cycle assessment (LCA) has become the most prominent method for environmental evaluation. Life cycle assessment is the quantification of the environmental impacts of a product or service through its whole life cycle, from the extraction of materials to manufacturing and end of life. A carbon footprint is a subset of an LCA. LCAs are required as part of government regulations, used by companies to identify high resource use in their supply chains or to choose between product designs, and used by consumers to choose between alternative products. LCAs provide valuable information; however, they are resource intensive, time consuming and uncertain. Therefore, a methodology that addresses all these issues is needed. This study addresses the following question: can LCAs be streamlined while still providing useful information? To answer this, an under-specification, probabilistic screening methodology is employed. The screening methodology uses a high-level assessment of the footprint, incorporates uncertainty in the inputs, and refines data around the primary drivers of impact. The streamlined LCA procedure is extended to include a Sobol-based sensitivity analysis methodology for identifying high-impact activities. The effects of partial perfect information in subsequent data acquisition activities on the streamlining methodology are examined. Metrics are developed to determine sufficiency in the data gathering procedure and to determine whether decision makers can sufficiently distinguish between two products or design alternatives. A procedure to quantify the cost of additional information is developed. Finally, the scenario space of the impacts is explored.
The extended streamlined methodology is applied to a case study on tablets, with a focus on integrated circuits. This thesis finds that the streamlined, probabilistic methodology can be used to cost-effectively evaluate the environmental impact of products while still taking uncertainty into account. Metrics to determine sufficiency can be effectively used, and the presence of partial information does not limit the usefulness of the metrics. Furthermore, quantifying the cost of additional information can help determine sufficiency in data collection efforts and can help understand the challenges that companies face when performing an LCA.
by Maria de Lourdes Alcaraz Ochoa.
S.M. in Technology and Policy
APA, Harvard, Vancouver, ISO, and other styles
30

Sen, Avijit. "Identifying system-wide contact center cost reduction opportunities through lean, customer-focused IT metrics." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/49791.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2009.
Includes bibliographical references (p. 71-72).
Dell's long-term success depends on its customers' future buying patterns. These patterns are largely determined by customers' satisfaction with the after-sales service they receive. Previously, Dell has been able to deliver high customer satisfaction but has done so at high expense, further reducing the low margins on its consumer product line. Dell's Global Consumer Services and Support organization (GCSS) is constantly innovating to lower its operating costs while maintaining customer satisfaction. This task is difficult to achieve in part because of the broad scope of problems that Dell's customer service agents (CSAs) tackle and the grey areas of support boundaries. In order to identify and correct the root causes of these contact-center costs, Dell needs the ability to measure the specific cost of supporting individual customers. Yet no such customer-centric data framework exists at Dell, or indeed in the contact center industry. However, it is possible to create just such a customer-focused data framework by applying an automated value stream mapping (VSM) analysis to a large sample of contact-center activity data from Dell's data warehouse. The resulting data set is a collection of digital value stream maps representing the end-to-end customer service experience of each contact-center customer. After performing the proposed data transformations, these customer-focused metrics (CustFM) are shown to yield significant insights into previously unidentifiable cost reduction opportunities available across Dell's global contact-center network.
by Avijit Sen.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
31

Allen, Stacey A. "Evaluating readiness for technology in schools : developing planning tools and critical metrics to prepare for 1: 1 programs." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98549.

Full text
Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, Engineering Systems Division, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 99-105).
Technology use in education is rapidly expanding with varying results. The success of education technologies in schools depends on both the quality of the material presented through technology in terms of content and pedagogy and also the quality of the implementation of the program. With the acknowledgement that high quality materials are essential to the success of any technology, this thesis is concerned with the implementation of technology programs in schools, as it is impossible to utilize the technology for learning gains when students or teachers cannot access the materials. Prior research in education technology has not addressed readiness or planning practices for such large-scale programs as they exist today, specifically for 1:1 initiatives ("1:1" describes a system in which all students have personal learning devices, such as tablets or laptops). The main objective of this thesis is to determine the best practices in preparedness and planning for large-scale technology initiatives in US high schools. The research is designed to aid school system administrators and policy makers in their technology decision-making processes through the creation of a rubric of metrics and a model for sustainable implementation. The rubric and model were informed by data gathered through a case study approach, focusing on schools that are currently implementing 1:1 initiatives. The rubric outlines a spectrum of potential readiness levels across a number of critical metrics and allows school leaders to self-assess their readiness for a 1:1 program. In addition to the rubric and sustainable implementation model, this thesis aims to determine best practices in planning for a 1:1 program. Through a second round of case studies and interviews with school leaders, past planning practices and gaps in knowledge and planning were examined. From the school leaders' reflections on best practices, conclusions for improvement of current planning tools were drawn. 
These improvements include the creation of mentor relationships for schools and the use of a thorough, yet simple, needs assessment that includes a detailed timeline for implementation. Both the readiness rubric and the study of planning practices led to a number of policy recommendations, not only for schools but for all levels of government, in support of effective technology use in education.
by Stacey A. Allen.
S.M. in Technology and Policy
APA, Harvard, Vancouver, ISO, and other styles
32

Naughton, Alyson B. (Alyson Bourne). "Aligning tool set metrics for operation in a Multi Technology High Mix Low Volume manufacturing environment." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34852.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2005.
Includes bibliographical references (p. 81).
Ireland Fab Operations (IFO) is transitioning, and leading the way within Intel, to Multi-Technology High Mix Low Volume (MT-HMLV) manufacturing. To avoid errors in estimating metrics, specific capacity tool set metrics for this manufacturing environment now need to be considered. Approximations for high volume manufacturing may be far enough from MT-HMLV realities that company revenue is affected by making delivery commitments that cannot be met. The Intel Model of Record (MOR), which is used to determine the number of tools needed in each tool set to produce a given volume of product, does not consider MT-HMLV realities. Factors such as product change-overs, cross-qualified tools, and smaller-than-normal lot sizes can create chaos on the manufacturing floor that has not traditionally been accounted for.
by Alyson B. Naughton.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
33

Berry, Michael CSE UNSW. "Assessment of software measurement." Awarded by:University of New South Wales. CSE, 2006. http://handle.unsw.edu.au/1959.4/25134.

Full text
Abstract:
Background and purpose. This thesis documents a program of five studies concerned with the assessment of software measurement. The goal of this program is to assist the software industry to improve the information support for managers, analysts and software engineers by providing evidence of where opportunities for improving measurement and analysis exist. Methods. The first study examined the assessment of software measurement frameworks using models of best practice based on performance/success factors. The software measurement frameworks of thirteen organisations were surveyed. The association between each factor and the outcome experienced with the organisations' frameworks was then evaluated. The subsequent studies were more info-centric and investigated the use of models of information quality to assess the support provided for software processes. For these studies, information quality models targeting specific software processes were developed using practitioner focus groups. The models were instantiated in survey instruments and the responses were analysed to identify opportunities to improve the information support provided. The final study compared the use of two different information quality models for assessing and improving information support. Assessments of the same quantum of information were made using a targeted model and a generic model. The assessments were then evaluated by an expert panel in order to identify which information quality model was more effective for improvement purposes. Results. The study of performance factors for software measurement frameworks confirmed the association of some factors with success and quantified that association. In particular, it demonstrated the importance of evaluating contextual factors. The conclusion is that factor-based models may be appropriately used for risk analysis and for identifying constraints on measurement performance.
Note, however, that a follow-up study showed that some initially successful frameworks subsequently failed. This implied an instability in the dependent variable, success, that could reduce the value of factor-based models for predicting success. The studies of targeted information quality models demonstrated the effectiveness of targeted assessments for identifying improvement opportunities and suggest that they are likely to be more effective for improvement purposes than generic information quality models. The studies also showed the effectiveness of importance-performance analysis for prioritising improvement opportunities.
APA, Harvard, Vancouver, ISO, and other styles
34

Hahs, David Allen. "Predicting adequacy of supplier responses for multi-year government contracts based on supplier performance metrics." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98981.

Full text
Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2015. In conjunction with the Leaders for Global Operations Program at MIT.
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015. In conjunction with the Leaders for Global Operations Program at MIT.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 51-52).
Aerospace Company X (ACX) is a designer and manufacturer of advanced aerospace systems whose primary customer is the United States Government (USG). In order to reduce cost and minimize risk, both parties have embraced a multi-year contracting model in which production agreements are signed for periods of up to five years. This allows significant cost savings over single-year contracts while providing predictable production levels for ACX and its suppliers. At the time of this research, the company was soliciting bids from suppliers for the next five-year multi-year contract. Since this is a sole-source situation, ACX must substantiate all costs to justify that the pricing is fair and reasonable. Costs of purchased hardware are substantiated through three primary means: competition, commerciality, and cost-price analysis. Competition is preferred because the pricing can be justified by free-market forces. However, due to intellectual property rights or unique capabilities, suppliers are often contracted as sole-source. The supplier can then claim commerciality (i.e., the part is sold commercially) or submit to a complete cost review of material, labor, and overhead rates. In some cases, the supplier will not release this data to ACX and a government agency performs the review. The success of the cost substantiation phase hinges on getting complete and accurate data from suppliers in a timely manner. This thesis explores the challenges of obtaining cost data from suppliers and proposes recommendations that can be applied to general supplier management situations. First, a metric of proposal adequacy is developed and used to score the adequacy of each received bid. These scores are then analyzed to determine whether there is any correlation with the existing enterprise ACX supplier rating system. Finally, recommendations for process improvements are made, focused on communication, IT systems, and standard work.
by David Allen Hahs.
M.B.A.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
35

Raimundo, Sara Cristina de Jesus. "Análise e concepção de um sistema de indicadores para o social media program do SAS Portugal." Master's thesis, Instituto Superior de Economia e Gestão, 2013. http://hdl.handle.net/10400.5/11095.

Full text
Abstract:
Master's in Information Systems Management
The emergence of the internet brought several changes to the routine and lifestyle of millions of people worldwide. In addition, the development of new platforms has revolutionized the way internet users communicate with each other and the ease of sharing knowledge. These developments have given rise to several new concepts, such as social networking and social media. The straightforward communication between users and the wide dissemination of information achievable through social media have captured the attention of many organizations, which now participate actively in this area. Despite knowing some of the general benefits and limitations associated with social media, organizations need to quantify them and prove their importance in an organizational context, which gives rise to the monitoring process. This study presents the activities developed during an internship at SAS Portugal. The internship's goals were to analyze and design a system of metrics for the company's Social Media Program and to begin the program's monitoring process using SAS tools. The study also draws conclusions from the comparison between the existing literature and the practical case developed in an organizational context, and offers suggestions for future work.
APA, Harvard, Vancouver, ISO, and other styles
36

Wheeler, Benjamin (Benjamin Ray). "Reducing enterprise IT fragmentation through standard metrics and decision tools : a case study in the aerospace and defense industry." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66041.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division; in conjunction with the Leaders for Global Operations Program at MIT, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 60-61).
Over the last several decades, manufacturing companies around the world have embraced new and powerful business tools made possible by Information Technology. Major investments are frequently made in enterprise-wide systems, such as Enterprise Resource Planning (ERP) solutions, to take advantage of cost-saving opportunities. While promising in concept, system implementations can grow expensive and complicated during execution, commonly resulting in project de-scoping and sacrifices in functionality and integration. If not carefully managed, this can ultimately lead to an environment of costly custom workaround solutions for years to follow, subverting the central goal of the original investment. This thesis presents a case study examining Raytheon's initiative to launch an enterprise ERP system (SAP PRISM) in an effort to standardize and modernize supply chain operations. Within the SAP implementation, the repair and retrofit, or depot, business had major integration components de-scoped due to cost constraints. As a result, numerous systems have been developed to manage the business, leading to difficulties in process alignment across manufacturing programs. This work introduces a pilot project with the objective of re-aligning business processes by delivering a portal of common metrics and decision tools to the manufacturing and operations community. With the common portal, the user community gains access to existing centralized data, reducing the need for isolated application development and enabling richer capability.
by Benjamin Wheeler.
S.M.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
37

West, James F. "An examination of the application of design metrics to the development of testing strategies in large-scale SDL models." Virtual Press, 2000. http://liblink.bsu.edu/uhtbin/catkey/1191725.

Full text
Abstract:
There exist a number of well-known and validated design metrics, and the fault prediction available through these metrics has been well documented for systems developed in languages such as C and Ada. However, the mapping and application of these metrics to SDL systems has not been thoroughly explored. The aim of this project is to test the applicability of these metrics in classifying components for testing purposes in a large-scale SDL system. A new model has been developed for this purpose. This research was conducted using a number of SDL systems, most notably actual production models provided by Motorola Corporation.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
38

Alomari, Hakam W. "Supporting Software Engineering Via Lightweight Forward Static Slicing." Kent State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=kent1341996135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Schneider, Christian S. M. Massachusetts Institute of Technology. "Modeling end-to-end order cycle-time variability to improve on-time delivery commitments and drive future state metrics." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104390.

Full text
Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2016. In conjunction with the Leaders for Global Operations Program at MIT.
Thesis: S.M. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2016. In conjunction with the Leaders for Global Operations Program at MIT.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 57).
Dell is accelerating investments to simplify and improve one of the core competencies it was founded on: customer experience. One goal within this initiative is to increase the percentage of orders that are on-time to a committed Estimated Delivery Date (EDD). EDDs for products vary greatly with the complexity of customer purchase orders. In order to remain competitive, Dell has set an aggressive goal to provide better on-time delivery performance. Dell needs to quote more accurate lead-time commitments to customers and increase the stability of high-variability steps in the end-to-end order supply chain. The EDD lead time, from customer order to proof of delivery, consists of a payment (processing) phase, a manufacturing (build, inbound logistics, warehouse) phase, and a logistics (delivery) phase. Each of these segments is managed by a different organization within Dell. Understanding what the end-to-end future state looks like will allow functional teams to set improvement targets to achieve Dell's on-time goal. This study has three main objectives: (1) determine the key drivers of variability in the current-state process, (2) identify opportunities for more detailed EDD range generation, and (3) quantify targets for individual process steps to drive towards the target future state. Three high-volume Build to Order (BTO) regional product lines were chosen as cases to analyze. BTO product lines, compared to Build-to-Stock (BTS), inherently have a more variable supply chain for the processes examined. To meet the main objectives, this thesis examines the hypothesis that a simulation model based on historic order data can be used to quantify existing cycle-time performance in the supply chain and deliver targets to achieve Dell's on-time performance target. Key drivers of cycle-time variation were identified through process mapping and design-of-experiment statistical analysis.
Results from the modeling and sensitivity analysis produced actionable recommendations for each of the three objectives and led to a pilot project to improve EDD commitments for an existing desktop product line. Direct-to-customer shipping, inbound logistics method, and day of week were identified as attributes that were significant drivers of variability yet underutilized in the EDD commitment process, providing an opportunity for smarter lead-time setting. A pilot project for a desktop line adjusted lead times to incorporate direct-to-customer shipping and day of week, resulting in a 30-40% on-time performance improvement. Finally, modeling results quantified cycle-time distribution targets for each process step to achieve Dell's future-state goal for on-time delivery. Dell is building on this project by analyzing more regional product lines and exploring opportunities to incorporate machine learning.
by Christian Schneider.
M.B.A.
S.M. in Engineering Systems
APA, Harvard, Vancouver, ISO, and other styles
40

Curtis, Ronald Sanger. "Data structure complexity metrics." Buffalo, N.Y. : Dept. of Computer Science, State University of New York at Buffalo, 1994. http://www.cse.buffalo.edu/tech%2Dreports/94%2D39.ps.Z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Anfinsen, Jarle. "Making substitution matrices metric." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9237.

Full text
Abstract:

With the emergence and growth of large databases of information, efficient methods for storage and processing are becoming increasingly important. The existence of a metric distance measure between data entities enables efficient index structures to be applied when storing the data. Unfortunately, this is often not the case. Amino acid substitution matrices, which are used to estimate similarities between proteins, do not yield metric distance measures. Finding efficient methods for converting a non-metric matrix into a metric one is therefore highly desirable. In this work, the problem of finding such conversions is approached by embedding the data contained in the non-metric matrix into a metric space. The embedding is optimized according to a quality measure which takes the original data into account, and a distance matrix is then derived using the metric distance function of the space. More specifically, an evolutionary scheme is proposed for constructing such an embedding. The work shows how a coevolutionary algorithm can be used to find a spatial embedding and a metric distance function which try to preserve as much of the proximity structure of the non-metric matrix as possible. The evolutionary scheme is compared to three existing embedding algorithms. Some modifications to the existing algorithms are proposed, with the purpose of handling the data in the non-metric matrix more efficiently. At a higher level, the strategy of deriving a metric distance function from a spatial embedding is compared to an existing algorithm which enforces metricity by manipulating the data in the non-metric matrix directly (the triangle-fixing algorithm). The methods presented and compared are general in the sense that they can be applied whenever a non-metric matrix must be converted into a metric one, regardless of how the data in the non-metric matrix was originally derived.
The proposed methods are tested empirically on amino acid substitution matrices, and the derived metric matrices are used to search for similarity in a database of proteins. The results show that the embedding approach outperforms the triangle fixing approach when applied to matrices from the PAM family. Moreover, the evolutionary embedding algorithms perform best among the embedding algorithms. In the case of the PAM250 scoring matrix, a metric distance matrix is found which is more sensitive than the mPAM250 matrix presented in a recent paper. Possible advantages of choosing one method over another are shown to be unclear in the case of matrices from the BLOSUM family.
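The metric axioms that a converted matrix must satisfy, and that triangle fixing enforces directly, can be stated as a direct check. The following Python helper is an illustrative sketch for this listing, not code from the thesis:

```python
def is_metric(D):
    """Check whether square distance matrix D satisfies the metric axioms:
    non-negativity, identity of indiscernibles, symmetry, and the
    triangle inequality D[i][k] <= D[i][j] + D[j][k]."""
    n = len(D)
    for i in range(n):
        if D[i][i] != 0:
            return False
        for j in range(n):
            if D[i][j] < 0 or (i != j and D[i][j] == 0):
                return False
            if D[i][j] != D[j][i]:
                return False
            for k in range(n):
                if D[i][k] > D[i][j] + D[j][k]:
                    return False
    return True
```

A triangle-fixing approach repairs violations of these axioms in D directly, while the embedding approach places points in a metric space and reads a (necessarily metric) distance matrix off the embedding.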

APA, Harvard, Vancouver, ISO, and other styles
42

Scott, Mark W. (Mark Winfield) 1961. "System architecture evaluation by single metric." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9755.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design and Management Program, February 1999.
Includes bibliographical references (leaf 62).
System architecture is driven by numerous upstream influences. Regulations, market forces, cultural biases, and a variety of other influences can significantly affect whether an architecture is successful. To succeed, the architect must include upstream influences in the design, yet few, if any, architectural methods are available to account for them systematically. A new method, Evaluation by a Single Metric (ESM), is presented. It is based on fundamental design principles and enhances the system architectural process by organizing the upstream influences that drive architecture. The ESM method is concept-independent and is used before concept-focused system architectural methods. Specifically, system boundaries, salient upstream elements, and the functional connections among them are systematically determined. The ESM process provides a concept-neutral framework for evaluating candidate architectural concepts. The ESM method is very general and can be used for the design of nearly any kind of system or process. The thesis makes extensive use of a diverse set of examples that highlight ESM's advantages and flexibility.
by Mark W. Scott.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
43

DUTTA, BINAMRA. "Enterprise Software Metrics: How To Add Business Value." Kent State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=kent1239239432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Canning, James Thomas. "The application of structure and code metrics to large scale systems." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54209.

Full text
Abstract:
This work extends the area of research termed software metrics by applying measures of system structure and measures of system code to three realistic software products. Previous research in this area has typically been limited to the application of code metrics such as lines of code, McCabe's cyclomatic number, and Halstead's software science variables. This research, however, also investigates the relationship of four structure metrics (Henry's Information Flow measure, Woodfield's Syntactic Interconnection Model, Yau and Collofello's Stability measure, and McClure's Invocation complexity) to various observed measures of complexity such as ERRORS, CHANGES and CODING TIME. These metrics are referred to as structure measures since they measure control-flow and data-flow interfaces between system components. Spearman correlations between the metrics revealed that the code metrics were similar measures of system complexity, while the structure metrics typically measured different dimensions of the software. Furthermore, correlating the metrics to observed measures of complexity indicated that the Information Flow metric and the Invocation measure typically performed as well as the three code metrics when project factors and subsystem factors were taken into consideration. However, it was generally true that no single metric was able to satisfactorily identify the variations in the data for a single observed measure of complexity. Trends between many of the metrics and the observed data were identified when individual components were grouped together. Code metrics typically formed groups of increasing complexity which corresponded to increases in the mean values of the observed data. The strength of the Information Flow metric and the Invocation measure is their ability to form a group of highly complex components that was found to be populated by outliers in the observed data.
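Spearman correlation of the kind used above compares metrics on their ranks rather than their raw values, so it is robust to the very different scales of, say, lines of code and information flow. A minimal stdlib Python sketch (illustrative only, not the study's actual tooling):

```python
def ranks(xs):
    """Average ranks (1-based) of xs, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Two metrics that rank the same components as most complex will correlate near +1 even if their raw scales differ wildly, which is exactly the comparison the study needed.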
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
45

Carl, Stephen J. "United States Foreign Assistance Programs: the Requirement of Metrics for Security Assistance and Security Cooperation Programs." Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/7316.

Full text
Abstract:
Foreign aid has been a signal component of United States foreign policy since the creation of the Marshall Plan. Since that time, as new requirements emerged, numerous foreign aid programs and initiatives were created and subsequently pieced together under various U.S. agencies. The confluence of programs, initiatives, and agencies has created a confusing and overly bureaucratized environment for expending funds in an effort to support the democratization and modernization of other countries. This study examines U.S. aid provided to Ukraine and Georgia to determine whether, within the realm of logistics, they have progressed toward Westernized defense and military structures in accordance with their stated national goals. The question is whether U.S. security aid in these states has helped to achieve these goals. Addressing this question, this thesis proposes a hierarchical construct with differing assessment criteria based on how and where U.S. aid is applied. In the end, this analysis shows that U.S. aid and assistance programs and funds have assisted both Ukraine and Georgia with their modernization efforts. However, U.S. policy makers and policy implementers need to consider alternative and new methods to accurately assess how well those funds are spent in line with U.S. foreign policy goals.
APA, Harvard, Vancouver, ISO, and other styles
46

Olšák, Ondřej. "Verifikace za běhu systémů s vlastnostmi v MTL logice." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-449176.

Full text
Abstract:
This work focuses on the design of an algorithm for run-time verification of requirements given as formulas in metric temporal logic (MTL). A tree structure, similar to the run of an alternating timed automaton from which the final algorithm is derived, is used to verify these requirements. The designed algorithm can verify given MTL formulas over the runs of a program without needing to remember the program's entire trace, which makes it possible to monitor a given program on potentially infinite runs.
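The "no full trace" idea can be illustrated with a toy online monitor (a sketch, not the tree-based algorithm of the thesis): for the bounded-response formula G(req -> F_[0,d] ack), the monitor only needs to track the deadlines of still-open requests, so memory stays bounded even on an unbounded run.

```python
from collections import deque

def monitor(trace, d):
    """Online check of G(req -> F_[0,d] ack) over a timestamped event trace.

    Keeps only pending obligations, never the whole trace. Returns False as
    soon as some request's deadline expires unanswered, True otherwise for
    the observed prefix.
    """
    pending = deque()  # deadlines of un-acknowledged requests, oldest first
    for t, event in trace:
        # If the oldest deadline passed before this event, the property failed.
        if pending and t > pending[0]:
            return False
        if event == "ack":
            pending.clear()  # one ack discharges every open obligation
        elif event == "req":
            pending.append(t + d)
    return True
```

Richer MTL formulas need correspondingly richer state (the thesis's tree of obligations), but the principle is the same: store outstanding proof obligations, not the trace.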
APA, Harvard, Vancouver, ISO, and other styles
47

Almeida, Alberto Teixeira Bigotte de. "An empirical study of the fault-predictive ability of software control-structure metrics." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA231860.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, June 1990.
Thesis Advisor(s): Shimeall, Timothy J. Second Reader: Bradbury, Leigh W. "June 1990." Description based on signature page, October 16, 2009. DTIC Descriptor(s): computer programs, costs, faults, measurement, test methods. DTIC Indicator(s): computer program verification, metric system, theses. Author's subject terms: software metrics, text-based metrics, faults, testing, empirical studies. Includes bibliographical references (p. 69-72). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
48

Letko, Zdeněk. "Analýza a testování vícevláknových programů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-261265.

Full text
Abstract:
The dissertation first presents a taxonomy of errors in concurrent data processing and an overview of techniques for their dynamic detection. It then proposes new metrics for measuring the synchronization and concurrent behaviour of programs, together with a methodology for deriving them. These techniques are particularly applicable in search-based testing and saturation testing. The thesis further introduces a new noise-injection heuristic whose goal is to maximize the interleaving of instructions observed during testing. This heuristic is compared with existing heuristics on several benchmarks; the results show that the new heuristic outperforms the existing ones in certain cases. Finally, the thesis presents an innovative application of stochastic optimization algorithms to the process of testing multi-threaded applications. The principle of the method is to search for suitable combinations of test parameters and noise-injection methods. The method was implemented as a prototype and evaluated on a set of test cases; the results show that it has the potential to significantly improve the testing of multi-threaded programs.
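Noise injection in this sense means perturbing the scheduler by inserting small random delays before operations, so that rare interleavings of a multi-threaded program appear more often under test. A minimal illustrative wrapper (a sketch of the general idea, not the heuristics proposed in the dissertation):

```python
import random
import time

def with_noise(fn, p=0.5, max_delay=0.001, rng=None):
    """Return a wrapper that sleeps for a short random time before some calls
    to fn. Run under many threads, this perturbs the interleaving of shared
    operations so that latent concurrency errors surface more often."""
    rng = rng or random.Random()
    def noisy(*args, **kwargs):
        if rng.random() < p:  # inject noise with probability p
            time.sleep(rng.uniform(0.0, max_delay))
        return fn(*args, **kwargs)
    return noisy
```

The parameters `p` and `max_delay` are exactly the kind of knobs that a search over "combinations of test parameters and noise-injection methods" would tune.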
APA, Harvard, Vancouver, ISO, and other styles
49

Oswald, Irina [Verfasser], and Armin [Akademischer Betreuer] Reller. "Environmental Metrics for WEEE Collection and Recycling Programs / Irina Oswald. Betreuer: Armin Reller." Augsburg : Universität Augsburg, 2013. http://d-nb.info/1077702760/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Diamante, Luciana Puntillo. "Development of a tool to calculate object oriented software metrics on Java programs." Thesis, California State University, Long Beach, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1603551.

Full text
Abstract:

Software metrics have been developed and studied for several years. The idea behind these metrics is to measure software quality and complexity and, based on the results, to improve software development. This project implements a tool that parses computer programs written in the Java language and evaluates several software metrics from them.

This tool, called the Super Counter, calculates both object-oriented and non-object-oriented specific metrics. The input for the tool can be one or more Java programs belonging to the same or different packages.

The Super Counter output consists of some general information about the program (number of comments, number of keywords, etc.) as well as more complex metrics: Halstead, McCabe, Chidamber and Kemerer.
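A counter in the spirit of the McCabe metric above can be approximated by counting decision points in the source, since cyclomatic complexity is the number of decisions plus one. A rough Python sketch (the tool's actual Java parser is far more complete than this keyword scan):

```python
import re

# Decision points that contribute to McCabe's V(G) = decisions + 1.
DECISIONS = r'\b(?:if|for|while|case|catch)\b|\?|&&|\|\|'

def cyclomatic_complexity(java_source: str) -> int:
    # Strip comments and string literals first so keywords inside them
    # are not miscounted as decision points.
    src = re.sub(r'//[^\n]*|/\*.*?\*/|"(?:\\.|[^"\\])*"',
                 '', java_source, flags=re.S)
    return 1 + len(re.findall(DECISIONS, src))
```

Halstead's measures would instead count distinct and total operators and operands from the same token stream; both families need the comment/string stripping step shown here.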

APA, Harvard, Vancouver, ISO, and other styles
