Journal articles on the topic 'Software evaluation'


Consult the top 50 journal articles for your research on the topic 'Software evaluation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1. Flood, Stephen. "Software evaluation." Aslib Proceedings 38, no. 2 (February 1986): 65–69. http://dx.doi.org/10.1108/eb050999.

2. Sobhy, Dalia, Rami Bahsoon, Leandro Minku, and Rick Kazman. "Evaluation of Software Architectures under Uncertainty." ACM Transactions on Software Engineering and Methodology 30, no. 4 (July 2021): 1–50. http://dx.doi.org/10.1145/3464305.

Abstract: Context: Evaluating software architectures in uncertain environments raises new challenges, which require continuous approaches. We define continuous evaluation as multiple evaluations of the software architecture that begin at the early stages of development and are periodically and repeatedly performed throughout the lifetime of the software system. Although approaches for continuous evaluation have been developed over the past years to handle dynamics and uncertainties at run-time, these approaches are still very few, limited, and lack maturity. Objective: This review surveys efforts on architecture evaluation and provides a unified terminology and perspective on the subject. Method: We conducted a systematic literature review to identify and analyse architecture evaluation approaches for uncertainty, both continuous and non-continuous, covering work published between 1990 and 2020. We examined each approach and provide a classification framework for this field. We present an analysis of the results and provide insights regarding open challenges. Major results and conclusions: The survey reveals that most existing architecture evaluation approaches lack an explicit linkage between design-time and run-time. Additionally, there is a general lack of systematic approaches for how continuous architecture evaluation can be realised or conducted. To remedy this, we present a set of necessary requirements for continuous evaluation and describe some examples.

3. Wellek, S., J. L. Willems, and J. Michaelis. "Reference Standards for Software Evaluation." Methods of Information in Medicine 29, no. 4 (1990): 289–97. http://dx.doi.org/10.1055/s-0038-1634806.

Abstract: The field of automated ECG analysis was one of the earliest topics in Medical Informatics and may be regarded as a model both for computer-assisted medical diagnosis and for evaluating medical diagnostic programs. The CSE project has set reference standards of two kinds: in a broad sense, a standard for how to perform a comprehensive evaluation study; in a narrow sense, standards as specific references for evaluating computer ECG programs. The evaluation methodology used within the CSE project is described as a basis for the presentation of results which are published elsewhere in this issue.

4. Masood Butt, Saad, Shahid Masood Butt, Azura Onn, Nadra Tabassam, and Mazlina Abdul Majid. "Usability Evaluation Techniques for Agile Software Model." Journal of Software 10, no. 1 (January 2015): 32–41. http://dx.doi.org/10.17706/jsw.10.1.32-41.

5. Masood, Zafar, Xuequn Shang, and Jamal Yousaf. "Usability Evaluation Framework for Software Engineering Methodologies." Lecture Notes on Software Engineering 2, no. 3 (2014): 225–32. http://dx.doi.org/10.7763/lnse.2014.v2.127.

6. Zhang, Li. "Software Architecture Evaluation." Journal of Software 19, no. 6 (October 21, 2008): 1328–39. http://dx.doi.org/10.3724/sp.j.1001.2008.01328.

7. Saputri, Theresia Ratih Dewi, and Seok-Won Lee. "Software Analysis Method for Assessing Software Sustainability." International Journal of Software Engineering and Knowledge Engineering 30, no. 1 (January 2020): 67–95. http://dx.doi.org/10.1142/s0218194020500047.

Abstract: Software sustainability evaluation has become an essential component of software engineering (SE) owing to sustainability considerations that must be incorporated into software development. Several studies have addressed the issues associated with sustainability concerns in the SE process. However, current practices rely extensively on participant experience to evaluate sustainability achievement, and few quantifiable methods exist for supporting software sustainability evaluation. Our primary objective is to present a methodology that can assist software engineers in evaluating a software system against well-defined sustainability metrics and measurements. We propose a novel approach that combines machine learning (ML) and software analysis methods. To simplify the application of the proposed approach, we present a semi-automated tool that supports engineers in assessing the sustainability achievement of a software system. The results of our study demonstrate that the proposed approach determines sustainability criteria and defines sustainability achievement in terms of a traceable matrix. Our theoretical evaluation and empirical study demonstrate that the proposed support tool can help engineers identify sustainability limitations in a particular feature of a software system and identify features that must be revised to enhance sustainability achievement.

8. Chopra, Kunal, and Monika Sachdeva. "Evaluation of Software Metrics for Software Projects." International Journal of Computers & Technology 14, no. 6 (April 30, 2015): 5845–53. http://dx.doi.org/10.24297/ijct.v14i6.1915.

Abstract: Software metrics are developed and used by many software organizations for the evaluation and confirmation of good code, and for the operation and maintenance of the software product. Software metrics measure and identify various types of software complexity, such as size metrics, control flow metrics, and data flow metrics. One significant property of software metrics is that they are applicable to both process and product. NDepend is the most advanced and flexible tool available on the market, and we have assured the quality of the project by using NDepend metrics. We conclude that software metrics are easy to understand and applicable to software, and are therefore favoured among software professionals; they are among the most prevalent and important testing metrics used in organizations. Metrics are used to improve software productivity and quality. This paper introduces the most commonly used software metrics and reviews their use in constructing models of the software development process.

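As an illustration of the size and control-flow metrics this abstract surveys, here is a minimal Python sketch (ours, not from the paper) that approximates two classic measures, lines of code and McCabe's cyclomatic complexity, by counting decision points in a module's syntax tree:

    import ast

    # Node types counted as decision points (a common McCabe approximation).
    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

    def loc(source: str) -> int:
        # Size metric: non-blank, non-comment lines.
        return sum(1 for line in source.splitlines()
                   if line.strip() and not line.strip().startswith("#"))

    def cyclomatic_complexity(source: str) -> int:
        # Control-flow metric: 1 + number of decision points.
        return 1 + sum(isinstance(node, BRANCH_NODES)
                       for node in ast.walk(ast.parse(source)))

    sample = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
    print(loc(sample), cyclomatic_complexity(sample))  # prints: 4 2
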
9. Hassani, Mohammad, and Mehran Mirshams. "Remote sensing satellites evaluation software (RSSE software)." Aircraft Engineering and Aerospace Technology 81, no. 4 (July 3, 2009): 323–33. http://dx.doi.org/10.1108/00022660910967318.

10. Huber, J. T., and N. B. Giuse. "Educational Software Evaluation Process." Journal of the American Medical Informatics Association 2, no. 5 (September 1, 1995): 295–96. http://dx.doi.org/10.1136/jamia.1995.96073831.

11. Globus, Al, and Sam Uselton. "Evaluation of visualization software." ACM SIGGRAPH Computer Graphics 29, no. 2 (May 1995): 41–44. http://dx.doi.org/10.1145/204362.204372.

12. Stamelos, Ioannis, and Alexis Tsoukiàs. "Software evaluation problem situations." European Journal of Operational Research 145, no. 2 (March 2003): 273–86. http://dx.doi.org/10.1016/s0377-2217(02)00534-9.

13. Chaudouet-Miranda, A. "Software testing and evaluation." Advances in Engineering Software (1978) 12, no. 3 (July 1990): 151. http://dx.doi.org/10.1016/0141-1195(90)90032-2.

14. Barrett, B. W. "Software testing and evaluation." Information and Software Technology 32, no. 10 (December 1990): 699. http://dx.doi.org/10.1016/0950-5849(90)90106-2.

15. Choi, Hwan-sik, and Nicholas M. Kiefer. "Software evaluation: EasyReg International." International Journal of Forecasting 21, no. 3 (July 2005): 609–16. http://dx.doi.org/10.1016/j.ijforecast.2005.02.003.

16. Schleyer, Titus K. L., and Lynn A. Johnson. "Evaluation of Educational Software." Journal of Dental Education 67, no. 11 (November 2003): 1221–28. http://dx.doi.org/10.1002/j.0022-0337.2003.67.11.tb03713.x.

17. Wang, Bei-Yang. "Software Reliability Evaluation Approaches Using Weighted Software Network." Information Technology Journal 12, no. 21 (October 15, 2013): 6288–94. http://dx.doi.org/10.3923/itj.2013.6288.6294.

18. Minaev, Yu. N., and Yu. V. Reshetnyak. "Constructing an evaluation scale in software evaluation." Measurement Techniques 30, no. 8 (August 1987): 728–30. http://dx.doi.org/10.1007/bf00865651.

19. Vaníček, J. "Software quality requirements." Agricultural Economics (Zemědělská ekonomika) 52, no. 4 (February 17, 2012): 177–85. http://dx.doi.org/10.17221/5014-agricecon.

Abstract: At the present time, the international standards and technical reports for system and software product quality are dispersed across several series of normative documents (ISO/IEC 9126, ISO/IEC 14598, ISO/IEC 12119, etc.). These documents are not fully consistent and do not contain tools for the exact formulation of requirements. As quality is defined as the degree to which a set of inherent characteristics fulfils requirements, exact requirement formulation is the key to quality measurement and evaluation. This paper presents a framework for software quality requirements that is recommended for use in the new international standard series ISO/IEC 250xx, developed under the SQuaRE (Software Quality Requirements and Evaluation) standardisation project. The main part of this contribution was presented at the conference Agrarian Perspectives XIV, organised by the Czech University of Agriculture in Prague, September 20–21, 2005.

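The paper's central point, that quality can only be measured against exactly formulated requirements, is easy to make concrete in code. Below is a minimal sketch of one measurable quality requirement; the characteristic name follows the SQuaRE vocabulary, but the structure and threshold are illustrative assumptions, not taken from the standard:

    from dataclasses import dataclass

    @dataclass
    class QualityRequirement:
        characteristic: str            # e.g. "reliability" (SQuaRE vocabulary)
        measure: str                   # the metric the requirement is stated in
        threshold: float               # required value of the measure
        higher_is_better: bool = True

        def satisfied_by(self, measured: float) -> bool:
            # An exact formulation makes the degree of fulfilment decidable.
            if self.higher_is_better:
                return measured >= self.threshold
            return measured <= self.threshold

    req = QualityRequirement("reliability", "mean time between failures (h)", 500.0)
    print(req.satisfied_by(620.0))  # True: the measured MTBF meets the requirement
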
20. Onita, Colin G., Jasbir S. Dhaliwal, and Xihui Zhang. "Emotional and Rational Components in Software Testing Service Evaluation." Journal of Database Management 33, no. 1 (January 1, 2022): 1–39. http://dx.doi.org/10.4018/jdm.313969.

Abstract: This research investigates how individual emotional and rational components of software testing service evaluations impact behavioral intentions associated with the software testing service, and how specific, theory-driven service characteristics (complexity, proximity, and output specificity) impact the emotional and rational components of the evaluation. A controlled experiment is used, and the results indicate that (1) both emotional and rational components of software testing service evaluation have significant impacts on behavioral intentions associated with the service, (2) the specificity of testing service output impacts both the emotional and rational evaluations, (3) the complexity of the testing service task influences only the emotional component, and (4) the proximity between the testing service provider and recipient has no significant impact on the emotional evaluation of the service.

21. Kumar, Pradeep, Raghav Yadav, and Dileram Bansal. "Usability and Evaluation of Software Quality Using Software Metrics." International Journal of Computer Applications 71, no. 3 (June 26, 2013): 9–14. http://dx.doi.org/10.5120/12336-8606.

22. Ajith Jubilson, E., and Ravi Sankar Sangam. "Software Metrics for Computing Quality of Software Agents." Journal of Computational and Theoretical Nanoscience 17, no. 5 (May 1, 2020): 2035–38. http://dx.doi.org/10.1166/jctn.2020.8845.

Abstract: Metrics are the essential building blocks of any evaluation process; they establish specific goals for improvement. A multi-agent system (MAS) is complex in nature, and owing to the increasing complexity of developing one, existing metrics are insufficient for evaluating MAS quality, because agents react in unpredictable ways. Existing metrics for measuring MAS quality fail to address potential communication, initiative behaviour, and learnability. In this work, we propose additional metrics for measuring software agents. A software agent for an online shopping system is developed, metric values are obtained from it, and the quality of the multi-agent system is analysed.

23. Jingu, Hideo. "Sensory evaluation of computer software." Japanese Journal of Ergonomics 30, Supplement (1994): 32–33. http://dx.doi.org/10.5100/jje.30.supplement_32.

24. Sołtysik-Piorunkiewicz, Anna, Artur Strzelecki, and Edyta Abramek. "Evaluation of Adblock Software Usage." Complex Systems Informatics and Modeling Quarterly, no. 21 (December 31, 2019): 51–63. http://dx.doi.org/10.7250/csimq.2019-21.04.

25. Rowley, J. E. "Selection and evaluation of software." Aslib Proceedings 45, no. 3 (March 1993): 77–81. http://dx.doi.org/10.1108/eb051309.

26. Cote, Joseph A. "An Evaluation of Graphics Software." Journal of Marketing Research 28, no. 2 (May 1991): 253. http://dx.doi.org/10.2307/3172818.

27. Форкун, Юрій Вікторович. "Method of evaluation software developers." Technology Audit and Production Reserves 6, no. 4(8) (December 13, 2012): 45–46. http://dx.doi.org/10.15587/2312-8372.2012.5658.

28. Kizzier, Donna L., and Jan Barton. "Evaluation of Selected Keyboarding Software." Journal of Education for Business 66, no. 5 (June 1991): 299–303. http://dx.doi.org/10.1080/08832323.1991.10117489.

29. Anderson, T., P. A. Barrett, D. N. Halliwell, and M. R. Moulding. "Software Fault Tolerance: An Evaluation." IEEE Transactions on Software Engineering SE-11, no. 12 (December 1985): 1502–10. http://dx.doi.org/10.1109/tse.1985.231894.

30. Gediga, Günther, and Kai-Christoph Hamborg. "Evaluation in der Software-Ergonomie." Zeitschrift für Psychologie / Journal of Psychology 210, no. 1 (January 2002): 40–57. http://dx.doi.org/10.1026//0044-3409.210.1.40.

Abstract: The ergonomic evaluation of software has increasingly taken on a design-supporting role in the software development process. In this contribution, methods of software-ergonomic evaluation are presented and discussed within a classification scheme that distinguishes descriptive, predictive, and other methods. The significance of evaluation models is also addressed: depending on their scope, evaluation models specify how individual evaluation methods are used effectively, on their own or in combination with other methods, and how the evaluation is integrated into the software development process. A distinction is drawn between method-related and criteria-related evaluation models as well as process-related models of usability engineering, and it is noted that for very few methods have evaluation models been explicitly formulated, or has it been clarified how they can best be fitted into the development cycle. Finally, the relationship between evaluation models and models of software development is discussed; at present, there are at best loose connections between the two, and much remains to be done to link the technical and ergonomic design of software.

31. Doughty, Ken. "The Evaluation of Software Packages." EDPACS 15, no. 7 (January 1988): 5–9. http://dx.doi.org/10.1080/07366988809450460.

32. Squires, D., and A. McDougall. "Software evaluation: a situated approach." Journal of Computer Assisted Learning 12, no. 3 (September 1996): 146–61. http://dx.doi.org/10.1111/j.1365-2729.1996.tb00047.x.

33. Banker, R. D., R. J. Kauffman, and D. Zweig. "Repository evaluation of software reuse." IEEE Transactions on Software Engineering 19, no. 4 (April 1993): 379–89. http://dx.doi.org/10.1109/32.223805.

34. Krishnamurthi, Shriram. "Artifact evaluation for software conferences." ACM SIGSOFT Software Engineering Notes 38, no. 3 (May 23, 2013): 7–10. http://dx.doi.org/10.1145/2464526.2464530.

35. Parnas, David L., A. John van Schouwen, and Shu Po Kwan. "Evaluation of safety-critical software." Communications of the ACM 33, no. 6 (June 1990): 636–48. http://dx.doi.org/10.1145/78973.78974.

36. Misnevs, Boriss. "Software Engineering Competence Evaluation Portal." Procedia Computer Science 43 (2015): 11–17. http://dx.doi.org/10.1016/j.procs.2014.12.003.

37. Park, Jung-Yong, and Jeanette M. Daly. "Evaluation of Diabetes Management Software." Diabetes Educator 29, no. 2 (March 2003): 255–67. http://dx.doi.org/10.1177/014572170302900216.

38. Fuggetta, Alfonso. "Open source software––an evaluation." Journal of Systems and Software 66, no. 1 (April 2003): 77–90. http://dx.doi.org/10.1016/s0164-1212(02)00065-1.

39. Thompson, William E. "Software Quality Assurance and Evaluation." Journal of Quality Technology 23, no. 3 (July 1991): 264. http://dx.doi.org/10.1080/00224065.1991.11979334.

40. Härtig, Hendrik. "Software-basierte Evaluation freier Antwortformate." Zeitschrift für Didaktik der Naturwissenschaften 20, no. 1 (August 21, 2014): 115–28. http://dx.doi.org/10.1007/s40573-014-0012-6.

41. Consel, C., L. Hornof, R. Marlet, G. Muller, S. Thibault, E. N. Volanschi, J. Lawall, and J. Noyé. "Partial evaluation for software engineering." ACM Computing Surveys 30, no. 3es (September 1998): 20. http://dx.doi.org/10.1145/289121.289141.

42. Boloix, G., and P. N. Robillard. "A software system evaluation framework." Computer 28, no. 12 (1995): 17–26. http://dx.doi.org/10.1109/2.476196.

43. Smith, Eric, and Antonio Siochi. "Software Usability: Requirements by Evaluation." Proceedings of the Human Factors Society Annual Meeting 32, no. 5 (October 1988): 264–66. http://dx.doi.org/10.1177/154193128803200503.

Abstract: Recent research has established the importance of defining usability requirements as part of the total requirements for a system. Instead of deciding in an ad hoc manner whether or not a human-computer interface is usable, measurable usability requirements are established at the outset. It is common to state such requirements in an operational manner: U% of a sample of the intended user population should accomplish T% of the benchmark tasks within M minutes and with no more than E errors. The formal experiments needed to test compliance with the requirements make this method costly. This paper presents an alternative method of specifying usability requirements, currently being developed and tested on a large software project at Virginia Tech. Briefly, usability requirements are specified by having every member of the software design team and the user interface design team specify the ease of use desired for each proposed functional requirement of the system under development. The individual ratings are then compared in order to arrive at a consensus. It is this consensus that leads to the formal usability requirements which the interface must meet or exceed. As the interface is built, it is rated in the same manner as that used originally to specify the requirements. This method thus provides a structured means of specifying measurable usability requirements and a means of determining whether or not the interface satisfies those requirements. Several other benefits of this method are presented as well.

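The operational form quoted above translates directly into a compliance check. The following sketch is a hypothetical illustration (the Session fields and the sample numbers are invented) of testing whether U% of users accomplished T% of the benchmark tasks within M minutes and with no more than E errors:

    from dataclasses import dataclass

    @dataclass
    class Session:
        tasks_completed: int   # benchmark tasks finished successfully
        tasks_total: int       # benchmark tasks attempted
        minutes: float         # total time spent
        errors: int            # errors committed

    def meets_requirement(sessions, u_pct, t_pct, m_minutes, e_errors):
        # True if at least u_pct% of users accomplished t_pct% of the
        # tasks within m_minutes and with at most e_errors each.
        passing = sum(1 for s in sessions
                      if 100.0 * s.tasks_completed / s.tasks_total >= t_pct
                      and s.minutes <= m_minutes
                      and s.errors <= e_errors)
        return 100.0 * passing / len(sessions) >= u_pct

    users = [Session(9, 10, 25.0, 1), Session(7, 10, 30.0, 3), Session(10, 10, 20.0, 0)]
    print(meets_requirement(users, u_pct=66, t_pct=80, m_minutes=30, e_errors=2))  # True
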
44. Krishnamurthi, Shriram. "Artifact evaluation for software conferences." ACM SIGPLAN Notices 48, no. 4S (July 9, 2013): 17–21. http://dx.doi.org/10.1145/2502508.2502518.

45. Card, David N. "A software technology evaluation program." Information and Software Technology 29, no. 6 (July 1987): 291–300. http://dx.doi.org/10.1016/0950-5849(87)90028-0.

46. Hlupic, V., Z. Irani, and R. J. Paul. "Evaluation Framework for Simulation Software." International Journal of Advanced Manufacturing Technology 15, no. 5 (May 18, 1999): 366–82. http://dx.doi.org/10.1007/s001700050079.

47. Iglesias, Omar A., Carmen N. Paniagua, and Raúl A. Pessacq. "Evaluation of university educational software." Computer Applications in Engineering Education 5, no. 3 (1997): 181–88. http://dx.doi.org/10.1002/(sici)1099-0542(1997)5:3<181::aid-cae5>3.0.co;2-9.

48. Ahmad, Ruzita, Fauziah Baharom, and Azham Hussain. "GOSSEC: Goal Oriented Software Sustainability Evaluation Criteria." Journal of University of Babylon for Pure and Applied Sciences 27, no. 1 (April 1, 2019): 387–407. http://dx.doi.org/10.29196/jubpas.v27i1.2196.

Abstract: The concept of sustainability is now well recognized among software engineering researchers. It has direct and indirect impacts on three dimensions (environmental, economic, and social) that result from the development and implementation of software. Although there are studies on software sustainability evaluation that define software sustainability criteria, most of them focus on a single criterion rather than produce a holistic set of criteria. Additionally, the studies focus on what needs to be measured rather than on how to perform the evaluation systematically, a limitation that arises from the lack of defined measurement goals for each criterion of the sustainability dimensions. Therefore, this study develops Goal Oriented Software Sustainability Evaluation Criteria (GOSSEC) and organizes the sustainability criteria using Quality Function Deployment. GOSSEC is constructed using a goal-oriented measurement approach, adapting the Goal Question Metric method to define goals that make explicit the purposes, perspectives, and points of view of software sustainability measurement. GOSSEC provides nine goals and thirty-four sub-goals for measuring the software sustainability criteria and sub-criteria. The findings present a set of criteria and measurement goals that can be used for evaluating software sustainability, organized into the environmental, economic, and social dimensions.

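The Goal Question Metric method that GOSSEC adapts organizes measurement as a small hierarchy: each goal is refined into questions, and each question is answered by metrics. A minimal sketch follows; the example goal, question, and metric are invented for illustration and are not among the paper's nine goals:

    from dataclasses import dataclass, field

    @dataclass
    class Metric:
        name: str

    @dataclass
    class Question:
        text: str
        metrics: list = field(default_factory=list)

    @dataclass
    class Goal:
        # GQM goal template: analyse <object> for <purpose>
        # with respect to <focus> from the <viewpoint>.
        object: str
        purpose: str
        focus: str
        viewpoint: str
        questions: list = field(default_factory=list)

    energy_goal = Goal(
        object="order-processing service",
        purpose="evaluate",
        focus="energy efficiency (environmental dimension)",
        viewpoint="maintainer",
        questions=[Question("How much energy does one transaction consume?",
                            [Metric("joules per transaction")])],
    )
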
49. Liu, Xiao Ying, and Jian Zhang. "Software Quality Evaluation for Vehicle Based on Set Pair Theory." Applied Mechanics and Materials 738-739 (March 2015): 1332–37. http://dx.doi.org/10.4028/www.scientific.net/amm.738-739.1332.

Abstract: A software quality evaluation method for vehicles based on set pair theory is proposed. An evaluation index system for vehicle software quality is constructed, and software quality evaluation models are built using the four-element connection number of set pair theory. A computer-implemented method is then given and verified with an example. Practical application showed that the evaluation method is simple, effective, and highly practical.

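For orientation: in set pair analysis, a four-element connection number has the general form mu = a + b*i1 + c*i2 + d*j, where a, b, c, and d are the proportions of indicators falling into four quality grades (a + b + c + d = 1), i1 and i2 are discrepancy coefficients in [-1, 1], and j = -1 marks the contrary term. The sketch below is a rough illustration only; the weights, grades, and coefficient values are assumptions, not figures from the paper:

    def connection_number(weights, grades, i1=0.5, i2=-0.5, j=-1.0):
        # Four-element connection number mu = a + b*i1 + c*i2 + d*j.
        # grades[k] in {0, 1, 2, 3} assigns indicator k to a quality grade
        # (0 = identity/best ... 3 = contrary/worst); weights sum to 1.
        parts = [0.0, 0.0, 0.0, 0.0]
        for w, g in zip(weights, grades):
            parts[g] += w
        a, b, c, d = parts
        return a + b * i1 + c * i2 + d * j

    # Three indicators weighted 0.5/0.3/0.2, graded best/second/third:
    mu = connection_number([0.5, 0.3, 0.2], [0, 1, 2])
    print(round(mu, 4))  # 0.5 + 0.3*0.5 + 0.2*(-0.5) = 0.55, leaning toward "good"
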
50. Agarwal, Jyoti, Sanjay Kumar Dubey, and Rajdev Tiwari. "Usability evaluation of component based software system using software metrics." Intelligent Decision Technologies 14, no. 3 (September 29, 2020): 281–89. http://dx.doi.org/10.3233/idt-190021.

Abstract: Component Based Software Engineering (CBSE) provides a way to create a new Component Based Software System (CBSS) by utilizing existing components, primarily to minimize software development time, cost, and effort; CBSS also increases component reusability. Owing to these advantages, software companies work with CBSS and continuously try to deliver quality products. Usability is one of the major quality factors for a CBSS; it should be measured before the software product is delivered to the customer, so that any usability flaws can be removed by the development team. In this paper, the usability of a CBSS is evaluated on the basis of its major usability sub-factors (learnability, operability, understandability, and configurability). First, software metrics are identified for each usability sub-factor and the value of each sub-factor is evaluated for a component based software project. Second, the overall usability of the project is evaluated using the calculated value of each sub-factor. Usability for the same project was also evaluated using a fuzzy approach in MATLAB to validate the experimental work; the usability values obtained from the software metrics and the fuzzy model were very similar. This research will be useful for software developers evaluating the usability of any CBSS and will also help them compare different versions of a CBSS in terms of usability.

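The two-step aggregation described here (score each sub-factor from its metrics, then combine the sub-factors into a single usability value) is commonly realised as a weighted sum. A minimal sketch with invented scores and equal weights follows; the paper's actual metrics and weights are not reproduced:

    # Hypothetical normalised scores in [0, 1] for the four sub-factors
    # named in the abstract; equal weights are used for illustration.
    subfactors = {
        "learnability": 0.80,
        "operability": 0.70,
        "understandability": 0.85,
        "configurability": 0.60,
    }
    weights = {name: 0.25 for name in subfactors}

    usability = sum(weights[n] * score for n, score in subfactors.items())
    print(f"overall usability: {usability:.4f}")  # 0.25*(0.80+0.70+0.85+0.60) = 0.7375
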