
Journal articles on the topic 'Evaluation and open source'



Consult the top 50 journal articles for your research on the topic 'Evaluation and open source.'


You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Fuggetta, Alfonso. "Open source software—an evaluation." Journal of Systems and Software 66, no. 1 (April 2003): 77–90. http://dx.doi.org/10.1016/s0164-1212(02)00065-1.

2

Confino, Joel P., and Phillip A. Laplante. "An Open Source Software Evaluation Model." International Journal of Strategic Information Technology and Applications 1, no. 1 (January 2010): 60–77. http://dx.doi.org/10.4018/jsita.2010101505.

Abstract:
The allure of free, industrial-strength software has many enterprises rethinking their open source strategies. However, selecting appropriate open source software for a given problem or set of requirements is very challenging. The challenges include a lack of generally accepted evaluation criteria and a multitude of eligible open source software projects. The contribution of this work is a set of criteria and a methodology for assessing candidate open source software for fitness of purpose. To test this evaluation model, several important open source projects were examined. The results of this model were compared against the published results of an evaluation performed by the Defence Research and Development Canada agency. The proposed evaluation model relies on publicly accessible data, is easy to perform, and can be incorporated into any open source strategy.
3

Castro, Hélio, Goran Putnik, Alrenice Castro, and Rodrigo Dal Bosco Fontana. "Open Design initiatives: an evaluation of CAD Open Source Software." Procedia CIRP 84 (2019): 1116–19. http://dx.doi.org/10.1016/j.procir.2019.08.001.

4

Park, Ju-Byung, and Hae-Sool Yang. "Quality Evaluation Method of Open Source Software." Journal of the Korea Academia-Industrial cooperation Society 13, no. 5 (May 31, 2012): 2353–59. http://dx.doi.org/10.5762/kais.2012.13.5.2353.

5

Gautam, Aparna. "Evaluation of Open Source Markup Validation Tools." International Journal for Research in Applied Science and Engineering Technology V, no. X (October 30, 2017): 1707–12. http://dx.doi.org/10.22214/ijraset.2017.10249.

6

Johnson, Wayne M., Matthew Rowell, Bill Deason, and Malik Eubanks. "Comparative evaluation of an open-source FDM system." Rapid Prototyping Journal 20, no. 3 (April 14, 2014): 205–14. http://dx.doi.org/10.1108/rpj-06-2012-0058.

Abstract:
Purpose – The purpose of this paper is to present a qualitative and quantitative comparison and evaluation of an open-source fused deposition modeling (FDM) additive manufacturing (AM) system with a proprietary FDM AM system, based on the fabrication of a custom benchmarking model.
Design/methodology/approach – A custom benchmarking model was fabricated using the two AM systems and evaluated qualitatively and quantitatively. The fabricated models were visually inspected and scanned using a 3D laser scanning system to examine their dimensional accuracy and geometric dimensioning and tolerancing (GD&T) performance with respect to the computer-aided design (CAD) model geometry.
Findings – The open-source FDM AM system (CupCake CNC) successfully fabricated most of the features on the benchmark, but the model did suffer from greater thermal warping and surface roughness, and limitations in the fabrication of overhang structures, compared to the model fabricated by the proprietary AM system. Overall, the CupCake CNC provides a relatively accurate, low-cost alternative to more expensive proprietary FDM AM systems.
Research limitations/implications – This work is limited in the sample size used for the evaluation.
Practical implications – This work will provide the public and research AM communities with an improved understanding of the performance and capabilities of an open-source AM system. It may also lead to increased use of open-source systems as research testbeds for the continued improvement of current AM processes, and the development of new AM system designs and processes.
Originality/value – This study is one of the first comparative evaluations of an open-source AM system with a proprietary AM system.
7

Silva, Elise, Jessica Green, and Cole Walker. "Source evaluation behaviours of first-year university students." Journal of Information Literacy 12, no. 2 (December 4, 2018): 24. http://dx.doi.org/10.11645/12.2.2512.

Abstract:
Researchers at Brigham Young University studied first-year students’ information evaluation behaviours of open-access, popular news-based, non-academic source material on a variety of subjects. Using think-aloud protocols and screen-recording, researchers coded most and least used evaluation behaviours. Students most used an article’s sources, previous experience with the source or subject matter, or a bias judgement to decide whether the source was reliable. Researchers also compared what students said was important when evaluating information vs. what behaviours students actually exhibited and found significant differences between the two. Namely, students did not think their previous experience or bias judgement affected the way they assessed sources; however, both behaviours played prominently in their observed source evaluation techniques across the study.
8

Olla, Phillip. "Open Source E-Learning Systems." International Journal of Open Source Software and Processes 4, no. 4 (October 2012): 33–43. http://dx.doi.org/10.4018/ijossp.2012100103.

Abstract:
E-learning applications are becoming commonplace in most higher education institutions, and some institutions have implemented open source applications such as course management systems and electronic portfolios. These e-learning application initiatives are the first step in moving away from proprietary software such as Blackboard and WebCT toward open source. With open source, higher education institutions can easily and freely audit their systems. This article presents the evaluation criteria that were used by a higher education institution to evaluate an open source e-learning system.
9

Abdullah, Himli S. "Evaluation of Open Source Web Application Vulnerability Scanners." Academic Journal of Nawroz University 9, no. 1 (February 17, 2020): 47. http://dx.doi.org/10.25007/ajnu.v9n1a532.

Abstract:
Nowadays, web applications are an essential part of our lives. People use web applications for information gathering, communication, e-commerce, and a variety of other activities. Since they contain valuable and sensitive information, attacks against them have increased in attempts to find vulnerabilities and steal information. For this reason, it is essential to check web applications for vulnerabilities to ensure that they are secure. However, checking for vulnerabilities manually is a tedious and time-consuming job, so there is an exigent need for web application vulnerability scanners. In this study, we evaluate two open source web application vulnerability scanners, Paros and OWASP Zed Attack Proxy (OWASP ZAP), by testing them against two vulnerable web applications: buggy web application (bWAPP) and Damn Vulnerable Web Application (DVWA).
10

Good, Kenneth, and Kenneth Roy. "Open plan privacy index measurement source speaker evaluation." Journal of the Acoustical Society of America 121, no. 5 (May 2007): 3035. http://dx.doi.org/10.1121/1.4781683.

11

Balnaves, Edmund. "Open Source Library Management Systems: A Multidimensional Evaluation." Australian Academic & Research Libraries 39, no. 1 (March 2008): 1–13. http://dx.doi.org/10.1080/00048623.2008.10721320.

12

Preatoni, Damiano G., Clara Tattoni, Francesco Bisi, Elisa Masseroni, Davide D'Acunto, Stefano Lunardi, Ivana Grimod, Adriano Martinoli, and Guido Tosi. "Open source evaluation of kilometric indexes of abundance." Ecological Informatics 7, no. 1 (January 2012): 35–40. http://dx.doi.org/10.1016/j.ecoinf.2011.07.002.

13

Goh, Dion, Brendan Luyt, Alton Chua, See-Yong Yee, Kia-Ngoh Poh, and How-Yeu Ng. "Evaluating open source portals." Journal of Librarianship and Information Science 40, no. 2 (June 2008): 81–92. http://dx.doi.org/10.1177/0961000608089344.

14

Alturkistani, Abrar, Ching Lam, Kimberley Foley, Terese Stenfors, Elizabeth R. Blum, Michelle Helena Van Velthoven, and Edward Meinert. "Massive Open Online Course Evaluation Methods: Systematic Review." Journal of Medical Internet Research 22, no. 4 (April 27, 2020): e13851. http://dx.doi.org/10.2196/13851.

Abstract:
Background: Massive open online courses (MOOCs) have the potential to make a broader educational impact because many learners undertake these courses. Despite their reach, there is a lack of knowledge about which methods are used for evaluating these courses.
Objective: The aim of this review was to identify current MOOC evaluation methods to inform future study designs.
Methods: We systematically searched the following databases for studies published from January 2008 to October 2018: (1) Scopus, (2) Education Resources Information Center, (3) IEEE (Institute of Electrical and Electronic Engineers) Xplore, (4) PubMed, (5) Web of Science, (6) British Education Index, and (7) the Google Scholar search engine. Two reviewers independently screened the abstracts and titles of the studies. Published studies in the English language that evaluated MOOCs were included. The study design of the evaluations, the underlying motivation for the evaluation studies, data collection, and data analysis methods were quantitatively and qualitatively analyzed. The quality of the included studies was appraised using the Cochrane Collaboration Risk of Bias Tool for randomized controlled trials (RCTs) and the National Institutes of Health–National Heart, Lung, and Blood Institute quality assessment tool for cohort observational studies and for before-after (pre-post) studies with no control group.
Results: The initial search resulted in 3275 studies, and 33 eligible studies were included in this review. In total, 16 studies used a quantitative study design, 11 used a qualitative design, and 6 used a mixed methods study design. In all, 16 studies evaluated learner characteristics and behavior, and 20 studies evaluated learning outcomes and experiences. A total of 12 studies used 1 data source, 11 used 2 data sources, and 7 used 3 data sources. Overall, 3 studies used more than 3 data sources in their evaluation. In terms of the data analysis methods, quantitative methods were most prominent, with descriptive and inferential statistics the two preferred methods. In all, 26 studies with a cross-sectional design had a low-quality assessment, whereas RCTs and quasi-experimental studies received a high-quality assessment.
Conclusions: The MOOC evaluation data collection and data analysis methods should be determined carefully on the basis of the aim of the evaluation. MOOC evaluations are subject to bias, which could be reduced using pre-MOOC measures for comparison or by controlling for confounding variables. Future MOOC evaluations should consider using more diverse data sources and data analysis methods. International Registered Report Identifier (IRRID): RR2-10.2196/12087
15

Alnaser, Aseel, Bo Gong, and Knut Moeller. "Evaluation of open-source software for the lung segmentation." Current Directions in Biomedical Engineering 2, no. 1 (September 1, 2016): 515–18. http://dx.doi.org/10.1515/cdbme-2016-0114.

Abstract:
In this study we evaluated open-source software for lung segmentation. Several parameters emphasizing functionality, usability, image quality, and 3D export are considered for this evaluation. Based on these parameters, a scoring system is generated. Our preliminary evaluation results indicated that the Pulmonary Toolkit obtains the best overall performance according to the scoring system. However, the ranking of the software shows a certain variation among different criteria, so the selection of software should reflect the focus and specific interests of the user.
16

Tschannen, Philipp, and Ali Ahmed. "Bitcoin’s APIs in Open-Source Projects: Security Usability Evaluation." Electronics 9, no. 7 (June 30, 2020): 1077. http://dx.doi.org/10.3390/electronics9071077.

Abstract:
Given the current state of software development, it seems that we are nowhere near vulnerability-free software applications, for many reasons, and software developers are one of them. Insecure coding practices, the complexity of the task at hand, and usability issues, amongst other reasons, make it hard for software developers to maintain secure code. When it comes to cryptographic currencies, the need for assuring security is inevitable. For example, Bitcoin is a peer-to-peer software system that is primarily used as digital money. Many software libraries supporting various programming languages allow access to the Bitcoin system via an Application Programming Interface (API). APIs that are inappropriately used can lead to security vulnerabilities that are hard to discover, resulting in many zero-day exploits. Making APIs usable is, therefore, an essential aspect of the quality and robustness of software. This paper surveys the general academic literature concerning API usability and usable security. Furthermore, it evaluates the API usability of Libbitcoin, a well-known C++ implementation of the Bitcoin system, and assesses how the findings of this evaluation could affect the applications that use Libbitcoin. For that purpose, the paper proposes two static analysis tools to further investigate the use of Libbitcoin APIs in open-source projects from a security usability perspective. The findings of this research have improved Libbitcoin in many places, as will be shown in this paper.
17

Ait Houaich, Youssef, and Mustapha Belaissaoui. "New Maturity Evaluation Model for Open Source Software Selection." International Review on Computers and Software (IRECOS) 10, no. 9 (September 30, 2015): 930. http://dx.doi.org/10.15866/irecos.v10i9.7285.

18

Awang, Norkhushaini Bt, and Mohamad Yusof B. Darus. "Evaluation of an Open Source Learning Management System: Claroline." Procedia - Social and Behavioral Sciences 67 (December 2012): 416–26. http://dx.doi.org/10.1016/j.sbspro.2012.11.346.

19

Alexander, Joshua M., Odile Clavier, and William Audette. "Audiologic evaluation of the tympan open source hearing aid." Journal of the Acoustical Society of America 143, no. 3 (March 2018): 1736. http://dx.doi.org/10.1121/1.5035665.

20

Adewumi, Adewole, Sanjay Misra, Nicholas Omoregbe, and Luis Fernandez Sanz. "FOSSES: Framework for open‐source software evaluation and selection." Software: Practice and Experience 49, no. 5 (February 20, 2019): 780–812. http://dx.doi.org/10.1002/spe.2682.

21

Bhom, Jihyun, and Marcin Chrzaszcz. "HEPLike: An open source framework for experimental likelihood evaluation." Computer Physics Communications 254 (September 2020): 107235. http://dx.doi.org/10.1016/j.cpc.2020.107235.

22

Zhao, Rongying, and Mingkun Wei. "Impact evaluation of open source software: an Altmetrics perspective." Scientometrics 110, no. 2 (December 28, 2016): 1017–33. http://dx.doi.org/10.1007/s11192-016-2204-y.

23

García-Lucas, David, Gabriel Cebrián-Márquez, and Pedro Cuenca. "Parallelization and performance evaluation of open-source HEVC codecs." Journal of Supercomputing 73, no. 1 (October 15, 2016): 495–513. http://dx.doi.org/10.1007/s11227-016-1895-4.

24

Pangestu, Harijanto. "Model Evaluasi Perangkat Lunak: Pemodelan Visual Berbasis Open Source" [Software Evaluation Model: Open Source-Based Visual Modeling]. ComTech: Computer, Mathematics and Engineering Applications 2, no. 2 (December 1, 2011): 923. http://dx.doi.org/10.21512/comtech.v2i2.2843.

Abstract:
Visual modeling software is widely available, both open source and proprietary. However, such software is not always easy to use, and the available tools can be confusing for users, so an evaluation model is needed to choose the proper software. The purpose of this study is to create a model for evaluating visual modeling software in terms of user interface and usability. The studied software packages are open source because they can be obtained easily and for free: Umleditor, Umlpad, Violet UML editor, ArgoUML, HE, StarUML, UMLet, and Winbrello. The evaluation model can also be applied to other software. The evaluation uses the DECIDE framework as a guiding framework, together with the GOMS approach as its evaluation technique. The final result of this study is an evaluation model that recommends UMLet 9.1 as software with good usability (effectiveness, safety, utility, learnability, and memorability), although in efficiency it ranks second to DIA.
25

Sohal, Amitpal Singh, Sunil Kumar Gupta, and Hardeep Singh. "Trust in Open Source Software Development Communities." International Journal of Open Source Software and Processes 9, no. 4 (October 2018): 1–19. http://dx.doi.org/10.4018/ijossp.2018100101.

Abstract:
This study presents the significance of trust for the formation of an Open Source Software Development (OSSD) community. OSSD faces various challenges that must be overcome for successful operation. The first is the development of a community, which requires a healthy environment for community formation; among the various factors in community formation, a strong sense of trust among members stands out. Trust development is a slow process, with various methods for building and maintaining it. OSSD is teamwork, but the team consists of unknown volunteers. Trust forms a pillar for effective cooperation, which leads to a reduction in the conflicts and risks associated with quality software development. This study offers an overview of various existing trust models, which aids in the development of a trust evaluation framework for OSSD communities. Toward the end of the study, the components of trust evaluation, along with an empirical framework for it, are proposed.
26

Symeonidis, Panagiotis, Ludovik Coba, and Markus Zanker. "Improving Time-Aware Recommendations in Open Source Packages." International Journal on Artificial Intelligence Tools 28, no. 06 (September 2019): 1960007. http://dx.doi.org/10.1142/s0218213019600078.

Abstract:
Collaborative filtering techniques have been studied extensively during the last decade. Many open source packages (Apache Mahout, LensKit, MyMediaLite, rrecsys, etc.) have implemented them, but typically the top-N recommendation lists are based only on a highest-predicted-ratings approach. However, exploiting frequencies in the user/item neighborhood for the formation of the top-N recommendation lists has been shown to provide superior accuracy results in offline simulations. In addition, most open source packages use a time-independent evaluation protocol to test the quality of recommendations, which may lead to misleading conclusions, since it cannot faithfully simulate real-life systems, which are strongly tied to the time dimension. In this paper, we have therefore implemented the time-aware evaluation protocol in the open source recommendation package for the R language, denoted rrecsys, and compare its performance across open source packages for reasons of replicability. Our experimental results clearly demonstrate that using the most-frequent-items-in-neighborhood approach significantly outperforms the highest-predicted-rating approach on three public datasets. Moreover, the time-aware evaluation protocol has been shown to be more adequate for capturing the lifetime effectiveness of recommender systems.
27

Stathopoulou, E. K., M. Welponer, and F. Remondino. "OPEN-SOURCE IMAGE-BASED 3D RECONSTRUCTION PIPELINES: REVIEW, COMPARISON AND EVALUATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W17 (November 29, 2019): 331–38. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w17-331-2019.

Abstract:
State-of-the-art automated image orientation (Structure from Motion) and dense image matching (Multiple View Stereo) methods commonly used to produce 3D information from 2D images can generate 3D results – such as point clouds or meshes – of varying geometric and visual quality. Pipelines are generally robust and reliable enough, mostly capable of processing even large sets of unordered images, yet the final results often lack completeness and accuracy, especially when dealing with real-world cases where objects are typically characterized by complex geometries and textureless surfaces and where obstacles or occluded areas may also occur. In this study we investigate three commonly used open-source solutions, namely COLMAP, OpenMVG+OpenMVS and AliceVision, evaluating their results under diverse large-scale scenarios. Comparisons and critical evaluations of the image orientation and dense point cloud generation algorithms are performed with respect to the corresponding ground truth data. The presented FBK-3DOM datasets are available for research purposes.
28

Dagienė, Valentina, and Gintautas Grigas. "Quantitative evaluation of the process of open source software localization." Informatica 17, no. 1 (January 1, 2006): 3–12. http://dx.doi.org/10.15388/informatica.2006.119.

29

Good, Kenneth, and Kenneth Roy. "Measuring speech privacy–open plan source speaker evaluation: Part 2." Journal of the Acoustical Society of America 118, no. 3 (September 2005): 1854. http://dx.doi.org/10.1121/1.4778763.

30

López-López, Edgar, J. Jesús Naveja, and José L. Medina-Franco. "DataWarrior: an evaluation of the open-source drug discovery tool." Expert Opinion on Drug Discovery 14, no. 4 (February 26, 2019): 335–41. http://dx.doi.org/10.1080/17460441.2019.1581170.

31

Gronle, Marc, Wolfram Lyda, Marc Wilke, Christian Kohler, and Wolfgang Osten. "itom: an open source metrology, automation, and data evaluation software." Applied Optics 53, no. 14 (May 2, 2014): 2974. http://dx.doi.org/10.1364/ao.53.002974.

32

Marinheiro, Antonio, and Jorge Bernardino. "Experimental Evaluation of Open Source Business Intelligence Suites using OpenBRR." IEEE Latin America Transactions 13, no. 3 (March 2015): 810–17. http://dx.doi.org/10.1109/tla.2015.7069109.

33

Good, Kenneth W., and Kenneth P. Roy. "An evaluation of source speakers for open‐plan privacy measurements." Journal of the Acoustical Society of America 116, no. 4 (October 2004): 2611. http://dx.doi.org/10.1121/1.4785414.

34

Ntantogian, Christoforos, Stefanos Malliaros, and Christos Xenakis. "Evaluation of password hashing schemes in open source web platforms." Computers & Security 84 (July 2019): 206–24. http://dx.doi.org/10.1016/j.cose.2019.03.011.

35

Nolè, Gabriele, Beniamino Murgante, Giuseppe Calamita, Antonio Lanorte, and Rosa Lasaponara. "Evaluation of urban sprawl from space using open source technologies." Ecological Informatics 26 (March 2015): 151–61. http://dx.doi.org/10.1016/j.ecoinf.2014.05.005.

36

Hanandeh, Feras, Ahmad A. Saifan, Mohammed Akour, Noor Khamis Al-Hussein, and Khadijah Zayed Shatnawi. "Evaluating Maintainability of Open Source Software." International Journal of Open Source Software and Processes 8, no. 1 (January 2017): 1–20. http://dx.doi.org/10.4018/ijossp.2017010101.

Abstract:
Maintainability is one of the most important quality attributes affecting software quality. Four factors affect the maintainability of software: analyzability, changeability, stability, and testability. Open source software (OSS) is developed collaboratively by volunteers around the world under different management styles, and open source code is updated and modified continually from the first release. Therefore, there is a need to measure the quality, and specifically the maintainability, of such code. This paper discusses maintainability across three domains of open source software: education, business, and games. It also identifies the metrics that most directly affect software maintainability. Analysis of the results demonstrates that OSS in the education domain is the most maintainable, and that the cl_stat metric (number of executable statements) has the highest degree of influence on the maintainability calculation in all three domains.
37

Donnelly, Francis P. "Evaluating open source GIS for libraries." Library Hi Tech 28, no. 1 (March 9, 2010): 131–51. http://dx.doi.org/10.1108/07378831011026742.

38

Burtsev, Mikhail, and Varvara Logacheva. "Conversational Intelligence Challenge: Accelerating Research with Crowd Science and Open Source." AI Magazine 41, no. 3 (September 14, 2020): 18–27. http://dx.doi.org/10.1609/aimag.v41i3.5324.

Abstract:
Development of conversational systems is one of the most challenging tasks in natural language processing, and it is especially hard in the case of open-domain dialogue. The main factors that hinder progress in this area are lack of training data and difficulty of automatic evaluation. Thus, to reliably evaluate the quality of such models, one needs to resort to time-consuming and expensive human evaluation. We tackle these problems by organizing the Conversational Intelligence Challenge (ConvAI) — open competition of dialogue systems. Our goals are threefold: to work out a good design for human evaluation of open-domain dialogue, to grow open-source code base for conversational systems, and to harvest and publish new datasets. Over the course of ConvAI1 and ConvAI2 competitions, we developed a framework for evaluation of chatbots in messaging platforms and used it to evaluate over 30 dialogue systems in two conversational tasks — discussion of short text snippets from Wikipedia and personalized small talk. These large-scale evaluation experiments were performed by recruiting volunteers as well as paid workers. As a result, we succeeded in collecting a dataset of around 5,000 long meaningful human-to-bot dialogues and got many insights into the organization of human evaluation. This dataset can be used to train an automatic evaluation model or to improve the quality of dialogue systems. Our analysis of ConvAI1 and ConvAI2 competitions shows that the future work in this area should be centered around the more active participation of volunteers in the assessment of dialogue systems. To achieve that, we plan to make the evaluation setup more engaging.
39

Herrera-Melo, Camila Andrea, and Juan Sebastián González Sanabria. "Proposal for the Evaluation of Open Data Portals." Revista Facultad de Ingeniería 29 (December 31, 2019): e10194. http://dx.doi.org/10.19053/01211129.v29.n0.2020.10194.

Abstract:
The provision of portals that serve as a source of access to and availability of public domain data is part of the adoption of public policies that some government entities have implemented in pursuit of an open, transparent, multidirectional, collaborative government focused on citizen participation, both in monitoring and in making public decisions. However, the published data must meet certain characteristics to be considered open and of quality. For this reason, studies have emerged that focus on methodologies and indicators for measuring the quality of the portals and their data. For this paper, referential sources from the last six years on the evaluation of data quality and open data portals in Spain, Brazil, Costa Rica, Taiwan, and the European Union were searched to gather the inputs needed for the methodology presented in the document.
40

Ahmad, Norita, and Phillip A. Laplante. "A Systematic Approach to Evaluating Open Source Software." International Journal of Strategic Information Technology and Applications 2, no. 1 (January 2011): 48–67. http://dx.doi.org/10.4018/jsita.2011010104.

Abstract:
Selecting appropriate Open Source Software (OSS) for a given problem or a set of requirements can be very challenging. Some of the difficulties are due to the fact that there is not a generally accepted set of criteria to use in evaluation and that there are usually many OSS projects available to solve a particular problem. In this study, the authors propose a set of criteria and a methodology for assessing candidate OSS for fitness of purpose using both functional and non-functional factors. The authors then use these criteria in an improved solution to the decision problem using the well-developed Analytical Hierarchy Process. In order to validate the proposed model, it is applied at a technology management company in the United Arab Emirates, which integrates many OSS solutions into its Information Technology infrastructure. The contribution of this work is to help decision makers to better identify an appropriate OSS solution using a systematic approach without the need for intensive performance testing.
41

Ueberham, Maximilian, Florian Schmidt, and Uwe Schlink. "Advanced Smartphone-Based Sensing with Open-Source Task Automation." Sensors 18, no. 8 (July 29, 2018): 2456. http://dx.doi.org/10.3390/s18082456.

Full text
Abstract:
Smartphone-based sensing is becoming a convenient way to collect data in science, especially in environmental research. Recent studies that use smartphone sensing methods focus predominantly on single sensors that provide quantitative measurements. However, interdisciplinary projects call for study designs that connect both quantitative and qualitative data gathered by smartphone sensors. Therefore, we present a novel open-source task automation solution and its evaluation in a personal exposure study with cyclists. We designed an automation script that advances the sensing process with regard to the collection, management, and storage of acoustic noise, geolocation, light level, timestamp, and qualitative user perception data. The benefits of this approach are highlighted based on data visualization and an evaluation of user handling. Even though the automation script is limited by the technical features of the smartphone and the quality of the sensor data, we conclude that task automation is a reliable and smart solution for integrating passive and active smartphone sensing methods that involve data processing and transfer. Such an application is a smart tool for gathering data in population studies.
APA, Harvard, Vancouver, ISO, and other styles
42

Seungchang Lee, 서응교, and 박훈성. "Behavior-Structure-Evolution Evaluation Model(BSEM) for Open Source Software Service." Journal of Distribution Science 13, no. 1 (January 2015): 57–70. http://dx.doi.org/10.15722/jds.13.1.201501.57.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

LI, Chunyan, and Xuejie ZHANG. "Performance evaluation on open source cloud platform for high performance computing." Journal of Computer Applications 33, no. 12 (December 17, 2013): 3580–85. http://dx.doi.org/10.3724/sp.j.1087.2013.03580.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Akatsu, Shinji, Yoshikatu Fujita, Takumi Kato, and Kazuhiko Tsuda. "Structured analysis of the evaluation process for adopting open-source software." Procedia Computer Science 126 (2018): 1578–86. http://dx.doi.org/10.1016/j.procs.2018.08.131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Rasetshwane, Daniel M., Judy G. Kopun, Ryan W. McCreery, Stephen T. Neely, Marc A. Brennan, William Audette, and Odile Clavier. "Electroacoustic and behavioral evaluation of an open source audio processing platform." Journal of the Acoustical Society of America 143, no. 3 (March 2018): 1738. http://dx.doi.org/10.1121/1.5035671.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Ghapanchi, Amir Hossein, and Aybuke Aurum. "An evaluation criterion for open source software projects: enhancement process effectiveness." International Journal of Information Systems and Change Management 5, no. 3 (2011): 193. http://dx.doi.org/10.1504/ijiscm.2011.044508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Ratra, Ritu, and Preeti Gulia. "Experimental Evaluation of Open Source Data Mining Tools (WEKA and Orange)." International Journal of Engineering Trends and Technology 68, no. 8 (August 25, 2020): 30–35. http://dx.doi.org/10.14445/22315381/ijett-v68i8p206s.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Mukherjee, Sandip, P. K. Joshi, Samadrita Mukherjee, Aniruddha Ghosh, R. D. Garg, and Anirban Mukhopadhyay. "Evaluation of vertical accuracy of open source Digital Elevation Model (DEM)." International Journal of Applied Earth Observation and Geoinformation 21 (April 2013): 205–17. http://dx.doi.org/10.1016/j.jag.2012.09.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Pangestu, Harijanto. "Penerapan Metode GOMS untuk Evaluasi Perangkat Lunak Pemodelan Visual Berbasis Open Source." ComTech: Computer, Mathematics and Engineering Applications 3, no. 1 (June 1, 2012): 325. http://dx.doi.org/10.21512/comtech.v3i1.2418.

Full text
Abstract:
Software evaluation is needed to determine the usability level of the software. Good usability has the following characteristics: effective to use (effectiveness), efficient to use (efficiency), safe to use (safety), having good utility (utility), easy to learn (learnability) and easy to remember how to use (memorability). Analytical evaluation is one evaluation approach that does not involve users; instead, the evaluation is conducted by experts associated with the product. One method used in analytical evaluation is GOMS, a method for analyzing tasks related to human-computer interaction. GOMS does not provide an accurate calculation of how users interact with the system, yet it estimates the time needed to perform a task with the system. With this evaluation, users are expected to be able to choose open source UML tools appropriate to their particular needs.
APA, Harvard, Vancouver, ISO, and other styles
50

Haynes, M., S. Coetzee, and V. Rautenbach. "SUPPORTING URBAN DESIGN WITH OPEN SOURCE GEOSPATIAL TECHNOLOGIES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W14 (August 23, 2019): 93–97. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w14-93-2019.

Full text
Abstract:
Urban designers collect information about a city or neighborhood, design improvements so that the city is functional and pleasant to live in, and communicate these improvements to relevant stakeholders. The use of space and the spatial relationships between physical features play a significant role in urban design, therefore much of the information that is collected and manipulated is georeferenced. We followed a scenario-based approach for collecting requirements for urban design projects. Functional and non-functional requirements were categorized into data collection, data storage and management, and data visualization. Subsequently, we reviewed and evaluated open source geospatial tools that can be used for the collection, storage, manipulation and visualization of geospatial data in urban design projects. Based on the evaluation, we propose an open geospatial toolbox for urban design projects. The results are equally applicable for researchers and professionals in other disciplines who collect data at the neighbourhood level.
APA, Harvard, Vancouver, ISO, and other styles