To view the other types of publications on this topic, follow this link: Automated integration testing.

Journal articles on the topic "Automated integration testing"

Familiarize yourself with the top 50 journal articles for research on the topic "Automated integration testing".

Next to every source in the list of references there is an "Add to bibliography" option. Use it, and your bibliographic reference to the chosen work will be formatted automatically in the citation style you need (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse journal articles covering a wide range of subject areas and compile your bibliography correctly.

1

Xu, Dianxiang, Weifeng Xu, Manghui Tu, Ning Shen, William Chu, and Chih-Hung Chang. "Automated Integration Testing Using Logical Contracts". IEEE Transactions on Reliability 65, no. 3 (September 2016): 1205–22. http://dx.doi.org/10.1109/tr.2015.2494685.

2

Prabhu, Shridhar, Manoj Naik, Firdosh A D, Sohan S A, and Neeta B. Malvi. "Automation in Testing with Jenkins for Software Development". Journal of University of Shanghai for Science and Technology 23, no. 06 (June 17, 2021): 746–55. http://dx.doi.org/10.51201/jusst/21/05340.

Annotation:
Continuous Integration (CI) is a practice in the software development process where developers merge code into a shared repository frequently, often several times a day. Jenkins is a continuous integration tool that assists developers and testers by automating the entire test process and tracking progress at each and every stage of software development; each integration push is then tested by means of automated test cases. An easy way to make CI quicker is to automate the testing of every recent build. In this paper, a real scenario is considered: how software testing is performed in corporate sectors and how Jenkins can save developers and testers valuable hours by automating the whole software development process.
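For orientation only, the sketch below is not taken from the cited paper: it shows, under assumed module names, the kind of automated integration test that a CI server such as Jenkins could run on every integration push (for example as a pytest build step). The two components it wires together are invented stand-ins for real application modules.

    # Minimal illustration (hypothetical): an integration test a CI server could
    # run automatically on every push, e.g. via a "pytest" build step.

    class InMemoryRepository:
        """Stores orders; stands in for a real persistence layer."""
        def __init__(self):
            self._orders = {}

        def save(self, order_id, amount):
            self._orders[order_id] = amount

        def get(self, order_id):
            return self._orders[order_id]


    class OrderService:
        """Business logic that depends on the repository."""
        def __init__(self, repository):
            self._repository = repository

        def place_order(self, order_id, amount):
            if amount <= 0:
                raise ValueError("amount must be positive")
            self._repository.save(order_id, amount)
            return self._repository.get(order_id)


    def test_order_service_and_repository_work_together():
        # Exercises both components through their real interfaces rather than
        # mocks, which is what makes this an integration test rather than a unit test.
        service = OrderService(InMemoryRepository())
        assert service.place_order("A-1", 42) == 42


    if __name__ == "__main__":
        test_order_service_and_repository_work_together()
        print("integration test passed")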
3

Beniukh, Lada, Andrii Hlybovets, and Andrii Afonin. "Behaviour Driven Testing as an Effective Approach for Automated Testing in Continuous Integration". NaUKMA Research Papers. Computer Science 3 (December 28, 2020): 62–68. http://dx.doi.org/10.18523/2617-3808.2020.3.62-68.

4

Joseph, Roshan Abraham, and G. Sakthivel. "Automated line integration from through hole technology to integrated circuit testing". IOP Conference Series: Materials Science and Engineering 390 (July 30, 2018): 012104. http://dx.doi.org/10.1088/1757-899x/390/1/012104.

5

Salvagno, Gian Luca, Elisa Danese, and Giuseppe Lippi. "Mass spectrometry and total laboratory automation: opportunities and drawbacks". Clinical Chemistry and Laboratory Medicine (CCLM) 58, no. 6 (June 25, 2020): 994–1001. http://dx.doi.org/10.1515/cclm-2019-0723.

Annotation:
The diffusion of laboratory automation, initiated nearly 50 years ago with consolidation of preanalytical, clinical chemistry and immunochemistry workstations, is now also gradually embracing mass spectrometry (MS). As for other diagnostic disciplines, the automation of MS carries many advantages, such as efficient personnel management (i.e. improving working atmosphere by decreasing manual activities, lowering health risks, simplifying staff training), better organization (i.e. reducing workloads, improving inventory handling, increasing analytical process standardization) and the possibility to reduce the number of platforms. The development and integration of different technologies into automated MS analyzers will also generate technical and practical advantages, such as prepackaged and ready-to-use reagents, automated dispensing, incubation and measurement, automated sample processing (e.g. system fit for many models of laboratory automation, bar code readers), multiplex testing, automatic data processing, also including quality control assessment, and automated validation/interpretation (e.g. autoverification). A new generation of preanalytical workstations, which can be directly connected to MS systems, will allow the automation of manual extraction and elimination of time-consuming activities, such as tube labeling and capping/decapping. The use of automated liquid-handling platforms for pipetting samples, along with addition of internal standards, may then enable the optimization of some steps of extraction and protein precipitation, thus decreasing turnaround time and increasing throughput in MS testing. Therefore, this focused review is aimed at providing a brief update on the importance of consolidation and integration of MS platforms in laboratory automation.
6

Potluri, P., J. Atkinson, and I. Porat. "Integration of mechanical testing with automated parts handling in a production environment". Journal of Materials Processing Technology 70, no. 1-3 (October 1997): 239–43. http://dx.doi.org/10.1016/s0924-0136(97)02925-7.

7

Kim, Chorwon, Seungryong Kim, and JongWon Kim. "Continuous Integration with Automated Testing for Container-based IoT-Cloud Service Composition". KIISE Transactions on Computing Practices 25, no. 2 (February 28, 2019): 87–98. http://dx.doi.org/10.5626/ktcp.2019.25.2.87.

8

Kharytonova, Oleksandra, Kateryna Osadcha, and Viacheslav Osadchyi. "Recruitment using automated systems". Ukrainian Journal of Educational Studies and Information Technology 9, no. 2 (June 30, 2021): 1–19. http://dx.doi.org/10.32919/uesit.2021.02.01.

Annotation:
To identify the needs of modern recruiting and determine the features of staff selection and evaluation using psychological testing, was conducted an analysis of existing developments and systems that are used in the modern recruitment process. The problems that HR managers have when testing a candidate and the company's staff are identified, and the disadvantages of modern recruiting systems are highlighted. According to the detailed structure of the personality and the structure of the psychological portrait, the main features of the candidate's personality were identified and selected methods that can be used for testing. A web-based system "KnitTe" for HR managers was developed, which allows clients to expand the functionality of their CRM systems (initial stage integration with Odoo CRM modules) and helps in making a decision about the suitability of a candidate for a specific position in terms of their psychological personality traits, as well as their psychological compatibility with other team members. The overall structure of the web system allows the client to get data from the CRM system, use built-in psychological tests, and get results. The "KnitTe" web-system uses reliable, valid and verified psychological methods, builds a psychological portrait of the candidate, checks the candidate's compliance with the chosen position, and analyzes the candidate's compatibility with the team.
9

Petryk, V. F., A. G. Protasov, R. M. Galagan, A. V. Muraviov, and I. I. Lysenko. "Smartphone-Based Automated Non-Destructive Testing Devices". Devices and Methods of Measurements 11, no. 4 (December 17, 2020): 272–78. http://dx.doi.org/10.21122/2220-9506-2020-11-4-272-278.

Annotation:
Currently, non-destructive testing is an interdisciplinary field of science and technology that serves to ensure the safe functioning of complex technical systems in the face of multifactorial risks. In this regard, there is a need to consider new information technologies based on intellectual perception, recognition technology, and general network integration. The purpose of this work was to develop an ultrasonic flaw detector that uses a smartphone to process the test results and transfer them directly to a powerful information processing center or to cloud storage, so that operational information can be shared with specialists from anywhere in the world. The proposed flaw detector consists of a sensor unit and a smartphone. The exchange of information between the sensor and the smartphone takes place over wireless networks using Bluetooth technology. To operate in ultrasonic flaw detector mode, the smartphone has software installed that runs on the Android operating system, implements the proposed algorithm of the device, and can serve as a repeater for processing data over a considerable distance (up to hundreds or thousands of kilometers) if necessary. A comparative analysis of experimental data from the developed device, the Einstein-II flaw detector from Modsonic (India) and the TS-2028H+ flaw detector from Tru-Test (New Zealand) showed that the proposed device is not inferior to them in terms of such characteristics as the range of measured thicknesses and the relative error in determining the defect depth and the object thickness. When measuring small thicknesses from 5 to 10 mm, the proposed device even surpasses them, providing a relative measurement error of the order of 1 %, while its analogues give this error within 2–3 %.
10

Ozturk, Nurcan, Alex Undrus, Marcelo Vogel, and Alessandra Forti. "Containerization in ATLAS Software Development and Data Production". EPJ Web of Conferences 251 (2021): 02017. http://dx.doi.org/10.1051/epjconf/202125102017.

Annotation:
The ATLAS experiment’s software production and distribution on the grid benefits from a semi-automated infrastructure that provides up-to-date information about software usability and availability through the CVMFS distribution service for all relevant systems. The software development process uses a Continuous Integration pipeline involving testing, validation, packaging and installation steps. For opportunistic sites that can not access CVMFS, containerized releases are needed. These standalone containers are currently created manually to support Monte-Carlo data production at such sites. In this paper we will describe an automated procedure for the containerization of ATLAS software releases in the existing software development infrastructure, its motivation, integration and testing in the distributed computing system.
11

Dam, Sanjoy, and Fahid-Ul Alam. "Design, Construction and Testing of a Microcontroller based Robotic Arm for Spray Painting". International Journal of Scientific & Engineering Research 12, no. 3 (March 25, 2021): 1069–75. http://dx.doi.org/10.14299/ijser.2021.03.03.

Annotation:
Robots can be used in painting operations, like many other automated jobs. They can also perform welding, drilling, grinding and carrying operations, as well as various other industrial processes, with the integration of proper tools on the arm. Robots are used widely in industrially developed countries to perform many operations of their industrial processes. Though the application of robots is limited in our country, implementation has been initiated with the recent trend of moving towards automated industry.
12

Kanabar, Deven, Swati Roy, Chiragkumar Dodiya, and Subrata Pradhan. "Development, Integration and Testing of Automated Triggering Circuit for Hybrid DC Circuit Breaker". Journal of Physics: Conference Series 823 (April 19, 2017): 012015. http://dx.doi.org/10.1088/1742-6596/823/1/012015.

13

Parent, C., and M. C. Pinch. "NAD83 SECONDARY INTEGRATION". CISM journal 42, no. 4 (January 1988): 331–40. http://dx.doi.org/10.1139/geomat-1988-0028.

Annotation:
The Canadian geodetic network that was adjusted with networks of other North American countries, in the July 1986 Continental Adjustment, included only the 8000-station national primary framework. There still remains many thousands of stations contained in regional and local secondary networks to integrate into the North American Datum of 1983 (NAD83). Secondary Integration is a cooperative project organized by member agencies of the Canadian Control Survey Committee (CCSC) which first met in 1982. Since then, members have automated and evaluated secondary network data for approximately 100 000 stations established by conventional, inertial and satellite surveying methods. The task of compiling and testing Helmert blocks for input to the simultaneous adjustment of primary and secondary networks is now underway. This paper describes the plans and progress, and some of the problems that challenge us in the NAD83 Secondary Integration Project.
14

T M, Suhas, and Sowmya Nag K. "Continuous Integration and Continuous Deployment with Jenkins in C++ Software Development". Journal of University of Shanghai for Science and Technology 23, no. 07 (July 16, 2021): 805–10. http://dx.doi.org/10.51201/jusst/21/05307.

Annotation:
Continuous Integration is a practice in the software development process where developers merge code into a shared repository frequently, often several times a day. Jenkins is a continuous integration tool that assists developers and testers by automating the entire test process and tracking progress at each and every stage of software development; each integration push is then tested by means of automated builds and test cases. An easy way to make CI quicker is to automate the testing of every recent build. In this paper, a real scenario is considered: how software testing is performed in corporate sectors and how Jenkins can save developers and testers valuable hours by automating the whole software development process.
15

Vatresia, Arie, Asahar Johar, Ferzha Putra Utama, and Sinta Iryani. "Automated Data Integration of Biodiversity with OLAP and OLTP". SISFORMA 7, no. 2 (November 23, 2020): 80. http://dx.doi.org/10.24167/sisforma.v7i2.2817.

Annotation:
Biodiversity has been an emerging issue for decades; much research has been performed to map and document biodiversity data around the world. The issue is important because extinction events have been accelerating as a result of human activity. Bengkulu, a province lying within one of the world's 19 biodiversity hotspots, Sundaland, has experienced the degradation of its flora and fauna through forest degradation and habitat loss. Although many applications and software packages have been developed to address the case, the lack of data standardization remains an issue. In this research, a data integration study was developed to make the process of biodiversity data acquisition more effective and efficient. The proposed system integrates OLAP and OLTP and connects to the IUCN, one of the largest centres for monitoring biodiversity loss around the world. The application was built as a web-based system using UML design and followed the SDLC to provide the best fit to the needs. The research also succeeded in building an automated integration that shows dynamic records of biodiversity occurrences in Bengkulu. The application was tested using black-box testing and showed perfect performance (100%), which can help the monitoring process for biodiversity data.
16

Korniienko, I., S. Korniienko, S. Moskalets, S. Kaznachey, and O. Zhyrna. "GEOINFORMATION SUPPORT FOR AUTOMATED TEST PLANNING SUBSYSTEM". Наукові праці Державного науково-дослідного інституту випробувань і сертифікації озброєння та військової техніки, no. 3 (May 28, 2020): 49–55. http://dx.doi.org/10.37701/dndivsovt.3.2020.07.

Annotation:
The process of testing weapons and military equipment involves numerous manual labor-intensive operations. Such operations can be simplified by fully or partially automating the test planning stages, conducting them directly, and processing the test results. Feature of testing weapons and military equipment is the large amount of data that somehow has a spatial location. One of the modern tools of cartographic representation, processing and analysis of statistical data arrays that have spatial localization, geospatial modeling and situation forecasting is the technology of geoinformation systems. The article substantiates the feasibility of using geoinformation systems as part of the weapons testing system and military equipment. The functional scheme of integration of the geoinformation component into the structure of the test automation subsystem is presented for geoinformation support of the processes of testing planning and processing of measurement results. An approach to the creation of geoinformation models of test sites is proposed, based on the use of methods of remote sensing of land and open Web-GIS resources. The list of functional modules of spatial data processing and analysis, which can be applied to the tasks of testing, is distributed in the geoinformation toolkit. Examples of typical spatial tasks that can be performed during test planning, direct testing, processing, and analysis of measurement results, if such data are spatially linked. The use of geoinformation technology in the test system will provide an arsenal of qualitatively new methods of digital cartography, such as the technology of automated preparation of cartographic information in the accepted cartographic projections and symbols, mass processing of arrays of measured data, a wide toolkit of mathematical and cartographic methods and functions, features and functions own methods, algorithms and methods of statistical information processing, create and use object-oriented geoinformation data models, operate with a set of visualization tools for the best presentation of research and simulation results.
17

Weiss, Kevin, Michel Rottleuthner, Thomas C. Schmidt, and Matthias Wählisch. "PHiLIP on the HiL: Automated Multi-Platform OS Testing With External Reference Devices". ACM Transactions on Embedded Computing Systems 20, no. 5s (October 31, 2021): 1–26. http://dx.doi.org/10.1145/3477040.

Annotation:
Developing an operating systems (OSs) for low-end embedded devices requires continuous adaptation to new hardware architectures and components, while serviceability of features needs to be assured for each individual platform under tight resource constraints. It is challenging to design a versatile and accurate heterogeneous test environment that is agile enough to cover a continuous evolution of the code base and platforms. This mission is even more challenging when organized in an agile open-source community process with many contributors such as for the RIOT OS. Hardware in the Loop (HiL) testing and Continuous Integration (CI) are automatable approaches to verify functionality, prevent regressions, and improve the overall quality at development speed in large community projects. In this paper, we present PHiLIP (Primitive Hardware in the Loop Integration Product), an open-source external reference device together with tools that validate the system software while it controls hardware and interprets physical signals. Instead of focusing on a specific test setting, PHiLIP takes the approach of a tool-assisted agile HiL test process, designed for continuous evolution and deployment cycles. We explain its design, describe how it supports HiL tests, evaluate performance metrics, and report on practical experiences of employing PHiLIP in an automated CI test infrastructure. Our initial deployment comprises 22 unique platforms, each of which executes 98 peripheral tests every night. PHiLIP allows for easy extension of low-cost, adaptive testing infrastructures but serves testing techniques and tools to a much wider range of applications.
18

Topalov, Angel A., Ioannis Katsounaros, Josef C. Meier, Sebastian O. Klemm, and Karl J. J. Mayrhofer. "Development and integration of a LabVIEW-based modular architecture for automated execution of electrochemical catalyst testing". Review of Scientific Instruments 82, no. 11 (November 2011): 114103. http://dx.doi.org/10.1063/1.3660814.

19

Schönwald, Julian Ralf, Christian Forsteneichner, David Vahrenhorst, and Kristin Paetzold. "Improvement of Collaboration between Testing and Simulation Departments on the Example of a Motorcycle Manufacturer". Proceedings of the Design Society: International Conference on Engineering Design 1, no. 1 (July 2019): 149–58. http://dx.doi.org/10.1017/dsi.2019.18.

Annotation:
In testing and simulation departments in product development (PD), data types, data structures and data storage are often very different. Exchange of data and information is normally not automated and often not supported by management systems. This can lead to loss of time and information. A literature study in combination with 20 expert interviews and the analysis of documents as well as data storage structures and IT systems in a PD department of a motorcycle manufacturer was performed. Test and simulation processes were classified and standardized, documentation formats analyzed, standards in Test Data Management (TDM) and Simulation Data Management (SDM) as well as verification and validation processes compared. IT support in SDM is better than in TDM. An integration of TDM and SDM could lead to improved collaboration between testing and simulation departments. Options for this integration could be specific ontologies, object-oriented interfaces, a higher-level intermediate application, use of a common standard or integration of one standard into another one.
20

Arcuri, Natale, Manuela De Ruggiero, Francesca Salvo, and Raffaele Zinno. "Automated Valuation Methods through the Cost Approach in a BIM and GIS Integration Framework for Smart City Appraisals". Sustainability 12, no. 18 (September 13, 2020): 7546. http://dx.doi.org/10.3390/su12187546.

Annotation:
The principle behind sustainable city movements is represented by the idea of “good living”, which is the possibility of having solutions and services that allow citizens to live in an easy, simple, and enjoyable way. Policies for urban quality play a central role in the slow cities manifesto, often suggesting the use of Information and Communication Technologies (ITC) in the development of interactive services for citizens. Among these, an interesting possibility is to offer citizens digital real estate consultancy services through the implementation of automated evaluation methods. An automated appraisal action—which is already complex in itself owing to the need to collect data in a consistent, standardized, but also differentiated way so as to require the adoption of real estate due diligence—collides on the operational level with the concrete difficulty of acquiring necessary data, much more so since the reference market is dark, atypical, and viscous. These operational difficulties are deepened by the epistemological nature of the appraisal discipline itself, which bases its methodology on the forecast postulate, recalling the need to objectify as much as possible the evaluation from the perspective of an intersubjective sharing argument. These circumstances have led, on the one hand, to the definition of internationally accepted uniform evaluation rules (IVS, 2017) and, on the other, to the testing of automated valuation methods aimed at returning computer-based appraisals (AVM). Starting from the awareness that real estate valuation refers essentially to information and georeferences, this paper aims to demonstrate how real estate appraisal analysis can be further improved through information technology (IT), directing real estate valuation towards objectivity in compliance with international valuation standards. Particularly, the paper intends to show the potential of combining geographic information systems (GISs) and building information models (BIMs) in automated valuation methods through the depreciated reproduction cost. The paper also proposes a BIM-GIS semi-automatic prototype based on the depreciated reconstruction cost through an experimentation in Rende (Italy).
21

Fogagnolo, Paolo, Maurizio Digiuni, Giovanni Montesano, Chiara Rui, Marco Morales, and Luca Rossetti. "Compass fundus automated perimetry". European Journal of Ophthalmology 28, no. 5 (March 22, 2018): 481–90. http://dx.doi.org/10.1177/1120672118757667.

Annotation:
Background: Compass (CenterVue, Padova, Italy) is a fundus automated perimeter which has been introduced in the clinical practice for glaucoma management in 2014. The aim of the article is to review Compass literature, comparing its performances against Humphrey Field Analyzer (Zeiss Humphrey Systems, Dublin, CA, USA). Results: Analyses on both normal and glaucoma subjects agree on the fact that Humphrey Field Analyzer and Compass are interchangeable, as the difference of their global indices is largely inferior than test -retest variability for Humphrey Field Analyzer. Compass also enables interesting opportunities for the assessment of morphology, and the integration between morphology and function on the same device. Conclusion: Visual field testing by standard automated perimetry is limited by a series of intrinsic factors related to the psychophysical nature of the examination; recent papers suggest that gaze tracking is closely related to visual field reliability. Compass, thanks to a retinal tracker and to the active dislocation of stimuli to compensate for eye movements, is able to provide visual fields unaffected by fixation instability. Also, the instrument is a true colour, confocal retinoscope and obtains high-quality 60° × 60° photos of the central retina and stereo-photos details of the optic nerve. Overlapping the image of the retina to field sensitivity may be useful in ascertaining the impact of comorbidities. In addition, the recent introduction of stereoscopic photography may be very useful for better clinical examination.
22

KROLL, Lothar, Adam CZECH, and Rainer WALLASCH. "MANUFACTURING AND QUALITY ASSURANCE OF LIGHTWEIGHT PARTS IN MASS PRODUCTION". Journal of Machine Engineering 18, no. 3 (September 6, 2018): 42–56. http://dx.doi.org/10.5604/01.3001.0012.4606.

Annotation:
Production-related preliminary damage and residual stresses have significant effects on the functions and the damage development in fiber composite components. For this reason, it is important, especially for the safety-relevant components, to check each item. This task becomes a challenge in the context of serial production, with its growing importance in the field of lightweight components. The demand for continuous-reinforced thermoplastic composites increases in various industrial areas. According to this, an innovative Continuous Orbital Winding (COW) process was carried out within the framework of the Federal Cluster of Excellence EXC 1075 “MERGE Technologies for Multifunctional Lightweight Structures”. COW is aiming for mass-production-suited processing of special semi-finished fiber reinforced thermoplastic materials. This resource-efficient and function-integrated manufacturing process contains a combination of thermoplastic tape-winding with automated thermoplastic tape-laying technology. The process has a modular concept, which allows implementing other special applications and technologies, e.g. integration of different sensor types and high-speed automated quality inspection. The results show how to control quality and improve the stability of the COW process for large-scale production. This was realized by developing concepts of a fully integrated quality-testing unit for automatic damage assessment of composite structures. For this purpose, the components produced in the COW method have been examined for imperfections. This was performed based on obtained results of non-destructive or destructive materials testing.
23

Werner, Michael, and Nick Gehrke. "Identifying the Absence of Effective Internal Controls: An Alternative Approach for Internal Control Audits". Journal of Information Systems 33, no. 2 (April 1, 2018): 205–22. http://dx.doi.org/10.2308/isys-52112.

Annotation:
ABSTRACT Auditors face new challenges when auditing internal controls due to the increasing integration of information systems for transaction processing and the growing amount of data. Traditional manual control testing procedures become inefficient or require highly specialized and scarce technical knowledge. This study presents audit procedures that follow a new approach. Instead of manually testing internal controls, automated procedures search for the absence of those controls. Process mining techniques are combined with advanced statistical analysis where process mining serves as a data analysis technique to create process models from the recorded transaction data. These are searched for critical data constellations in combination with an exploratory factor analysis to identify systematic deficiencies in the internal control system. The manual and time-intensive inspection of individual controls is replaced by automated audit procedures that cover the totality of recorded transactions. The study follows a design science approach and uses case study data for illustration.
24

Rakhmatullaev, Marat, and Uktam Karimov. "MODELS OF INTEGRATION OF INFORMATION SYSTEMS IN HIGHER EDUCATION INSTITUTIONS". SOCIETY. INTEGRATION. EDUCATION. Proceedings of the International Scientific Conference 5 (May 25, 2018): 420–29. http://dx.doi.org/10.17770/sie2018vol1.3308.

Annotation:
At present a lot of automated systems are developing and implementing to support the educational and research processes in the universities. Often these systems duplicate some functions, databases, and also there are problems of compatibility of these systems. The most common educational systems are systems for creating electronic libraries, access to scientific and educational information, a program for detecting plagiarism, testing knowledge, etc. In this article, models and solutions for the integration of such educational automated systems as the information library system (ILS) and the anti-plagiarism system are examined. Integration of systems is based on the compatibility of databases, if more precisely in the metadata of different information models. At the same time, Cloud technologies are used - data processing technology, in which computer resources are provided to the user of the integrated system as an online service. ILS creates e-library of graduation papers and dissertations on the main server. During the creation of the electronic catalog, the communication format MARC21 is used. The database development is distributed for each department. The subsystem of anti-plagiarism analyzes the full-text database for the similarity of texts (dissertations, diploma works and others). Also it identifies the percentage of coincidence, creates the table of statistical information on the coincidence of tests for each author and division, indicating similar fields. The integrated system was developed and tested at the Tashkent University of Information Technologies to work in the corporate mode of various departments (faculties, departments, TUIT branches).
25

Waterhouse, David Michael, Andrew Guinigundo, Aimee Brown, Dan Davies, Lauren Jones, and Molly Mendenhall. "First year outcomes of an initiative to increase BRCA testing among NCCN guideline-eligible breast cancer patients within a large community OCM practice." Journal of Clinical Oncology 38, no. 15_suppl (May 20, 2020): 1540. http://dx.doi.org/10.1200/jco.2020.38.15_suppl.1540.

Annotation:
1540 Background: Pathogenic variants in BRCA1/BRCA2 can affect a breast CA pts care: preventative interventions, surgical decisions, medical treatments, screening, and family counseling. National data suggests significant non-adherence to NCCN testing guidelines, with only 1/3 of eligible pts referred for genetic services. In 2018, OHC (Cincinnati) launched an APP-centric genetics program. Specially trained APPs carry out genetic counseling and order NCCN-compliant testing. Early data suggested a significant deficit in physician-driven referrals. From 1/01/18 - 07/31/18, 138 new breast pts were estimated to be NCCN guideline-eligible. Only 28 (20%) pts received genetic services. Methods: In 2019, the OHC genetics team implemented a standardized screening process for every new breast CA pt. An EMR template (iKnowMed G2) that included NCCN guidelines was created for initial breast CA consultation and Oncology Care Model (OCM) treatment planning. All pts, not just OCM pts, are subject to OCM treatment planning. This automated screening method ensured all breast CA pts were screened, drastically increasing compliance. Through integration of genetics screening into the templates, pts meeting NCCN criteria for testing are reflexively referred for genetic counseling. With USON/McKesson, integrated data fields were developed in the EMR to automate data collection. Results: From 01/01/19 – 12/31/19, 717 new breast CA pts were seen at OHC. 676/717 (94%) were screened. Of those screened, 279 new breast CA pts met NCCN criteria for BRCA testing. 140 (50%) eligible new pts had appts with the genetics team. Another 50 (18%) had confirmed testing outside of OHC. 57 (20%) refused appts and/or testing. 32 (11%) did not have appts, representing screen fails. Referrals in non-breast CA pts also increased by 127%; 604 (2019) vs 264 (2018) suggesting a halo effect. Analyses suggest the program to be economically viable, with a financial growth rate of 127%. Conclusions: EMR templates embedded with the NCCN guidelines for reflex genetics referral can appropriately increase the utilization of genetic services. Breast genetics screening and resultant appt/testing rates increased significantly 2019 vs 2018. Success in BRCA testing in breast CA will lead to expansion to other cancers and genes. Implementation of structured EMR genetics data fields can automate data collection and measure compliance. Integration of genetics screening into universal OCM treatment planning is feasible, economically viable and scalable.
26

Li, Sam FY, and Larry J. Kricka. "Clinical Analysis by Microchip Capillary Electrophoresis". Clinical Chemistry 52, no. 1 (January 1, 2006): 37–45. http://dx.doi.org/10.1373/clinchem.2005.059600.

Annotation:
Abstract Clinical analysis often requires rapid, automated, and high-throughput analytical systems. Microchip capillary electrophoresis (CE) has the potential to achieve very rapid analysis (typically seconds), easy integration of multiple analytical steps, and parallel operation. Although it is currently still in an early stage of development, there are already many reports in the literature describing the applications of microchip CE in clinical analysis. At the same time, more fully automated and higher throughput commercial instruments for microchip CE are becoming available and are expected to further enhance the development of applications of microchip CE in routine clinical testing. To put into perspective its potential, we briefly compare microchip CE with conventional CE and review developments in this technique that may be useful in diagnosis of major diseases.
27

Pfrang, Steffen, Anne Borcherding, David Meier, and Jürgen Beyerer. "Automated security testing for web applications on industrial automation and control systems". at - Automatisierungstechnik 67, no. 5 (May 27, 2019): 383–401. http://dx.doi.org/10.1515/auto-2019-0021.

Annotation:
Abstract Industrial automation and control systems (IACS) play a key role in modern production facilities. On the one hand, they provide real-time functionality to the connected field devices. On the other hand, they get more and more connected to local networks and the internet in order to facilitate use cases promoted by “Industrie 4.0”. A lot of IACS are equipped with web servers that provide web applications for configuration and management purposes. If an attacker gains access to such a web application operated on an IACS, he can exploit vulnerabilities and possibly interrupt the critical automation process. Cyber security research for web applications is well-known in the office IT. There exist a lot of best practices and tools for testing web applications for different kinds of vulnerabilities. Security testing targets at discovering those vulnerabilities before they can get exploited. In order to enable IACS manufacturers and integrators to perform security tests for their devices, ISuTest was developed, a modular security testing framework for IACS. This paper provides a classification of known types of web application vulnerabilities. Therefore, it makes use of the worst direct impact of a vulnerability. Based on this analysis, a subset of open-source vulnerability scanners to detect such vulnerabilities is selected to be integrated into ISuTest. Subsequently, the integration is evaluated. This evaluation is twofold: At first, willful vulnerable web applications are used. In a second step, seven real IACS, like a programmable logic controller, industrial switches and cloud gateways, are used. Both evaluation steps start with the manual examination of the web applications for vulnerabilities. They conclude with an automated test of the web applications using the vulnerability scanners automated by ISuTest. The results show that the vulnerability scanners detected 53 % of the existing vulnerabilities. In a former study using commercial vulnerability scanners, 54 % of the security flaws could be found. While performing the analysis, 45 new vulnerabilities were detected. Some of them did not only break the web server but crashed the whole IACS, stopping the critical automation process. This shows that security testing is crucial in the industrial domain and needs to cover all services provided by the devices.
28

Xu, Xue Qi, Chao Huang, and Hao Lu. "Application of Lean Six Sigma Methodology in Software Continuous Integration". Key Engineering Materials 693 (May 2016): 1893–98. http://dx.doi.org/10.4028/www.scientific.net/kem.693.1893.

Annotation:
Lean Six Sigma (LSS) is an effective methodology that aims to maximize shareholder value by improving quality, efficiency, customer satisfaction and costs. Continuous integration is the software engineering practice of rapid and automated development and testing. A case study presented in this paper demonstrates how LSS tools help software R&D teams to improve product quality and reduce development cost. The define, measure, analyze, improve and control (DMAIC) methodology is applied to develop an action plan to achieve continuous integration at an anonymous software R&D organization's LSS Green Belt project. The LSS implementation has had a significant impact on the financial performance of the organization. It is showed that the package continuous integration (PCI) success ratio (3 months average) increased from 27% to 74%, meanwhile an operational saving of approximately 56.87K Euro was reported from this project. Finally, some key success factors that are critical to the implementation of an effective Green Belt program are examined, and managerial implications are provided.
29

Namli, T., and A. Dogac. "Testing Conformance and Interoperability of eHealth Applications". Methods of Information in Medicine 49, no. 03 (2010): 281–89. http://dx.doi.org/10.3414/me09-02-0022.

Annotation:
Summary Objective: To explain the common conformance and interoperability testing requirements of eHealth applications through two case studies; one using a prominent eHealth messaging standard, namely HL7 v3 [1], and the other using Integrating the Healthcare Enterprise (IHE) [2] Profiles and to describe how these testing requirements can be addressed through an automated, modular and scenario-based testing framework, namely Test BATN. Methods: Summarizing the conformance testing requirements of HL7 v3 messages. Illustrating the interoperability testing requirements of IHE Profiles through a scenario based on the IHE XDS, IHE XDS-MS and IHE PIX profiles. Explaining how these requirements can be handled through a dynamic and configurable test framework addressing all the layers in the interoperability stack within a single test scenario. Results: Conformance and interoperability testing are necessary to maintain correct information exchange as the correctness of the exchanged data is essential in the healthcare domain. There are many standards used in eHealth that the applications need to conform. Additionally, there are profiling initiatives such as IHE and Continua Health Alliance which publish integration profiles addressing a specific clinical need or a use case and describe how to combine or use the existing standards to provide interoperability. However, as the results of our case studies demonstrate, there are many commonalities in the conformance and interoperability testing requirements of these standards and profiles and therefore an integrated testing environment is needed. Conclusion: Our main conclusion is that rather than having individual testing tools for each standard or initiative, a generic and modular test framework exploiting the commonalities in the testing processes and fostering reusability of modular, pluggable testing components will improve the efficiency of testing. Through the TestBATN framework, we describe how this modularity can be achieved by providing common interfaces facilitating the development of adaptors which allows different testing components to be plugged into the system.
30

Balter, M. L., J. M. Leipheimer, A. I. Chen, A. Shrirao, T. J. Maguire, and M. L. Yarmush. "Automated end-to-end blood testing at the point-of-care: Integration of robotic phlebotomy with downstream sample processing". TECHNOLOGY 06, no. 02 (June 2018): 59–66. http://dx.doi.org/10.1142/s2339547818500048.

Annotation:
Diagnostic blood testing is the most commonly performed clinical procedure in the world, and influences the majority of medical decisions made in hospital and laboratory settings. However, manual blood draw success rates are dependent on clinician skill and patient physiology, and results are generated almost exclusively in centralized labs from large-volume samples using labor-intensive analytical techniques. This paper presents a medical device that enables end-to-end blood testing by performing blood draws and providing diagnostic results in a fully automated fashion at the point-of-care. The system couples an image-guided venipuncture robot, developed to address the challenges of routine venous access, with a centrifuge-based blood analyzer to obtain quantitative measurements of hematology. We first demonstrate a white blood cell assay on the analyzer, using a blood mimicking fluid spiked with fluorescent microbeads, where the area of the packed bead layer is correlated with the bead concentration. Next we perform experiments to evaluate the pumping efficiency of the sample handling module. Finally, studies are conducted on the integrated device — from blood draw to analysis — using blood vessel phantoms to assess the accuracy and repeatability of the resulting white blood cell assay.
31

Qin, Hui Bin, Shu Fang Wu, Z. L. Hou, and Zong Yan Wang. "Research of Automated Process Planning Based on 3D Feature Component Model". Key Engineering Materials 392-394 (October 2008): 234–39. http://dx.doi.org/10.4028/www.scientific.net/kem.392-394.234.

Annotation:
This paper analyzes current process planning based on 3D feature models and points out the existing disadvantages. The framework and key technologies that play a vital role in process planning are presented. A uniform manufacturing model was built on the component model by offsetting surface features and volumetric features. It implements recognition and extraction of typical machining features, including visible entities and concealed technological attributes, and sets up a hierarchical planning model. With the support of these theories, a feature-oriented process planning generation system based on the SolidWorks CAD platform was developed. Testing on related examples validated the feasibility and practicability of the method, which enriches the way stock models are created in existing CAM software systems and has significant value for promoting the integration of CAD/CAPP.
32

Mechtcherine, Viktor, Albert Michel, Marco Liebscher, and Tobias Schmeier. "Extrusion-Based Additive Manufacturing with Carbon Reinforced Concrete: Concept and Feasibility Study". Materials 13, no. 11 (June 4, 2020): 2568. http://dx.doi.org/10.3390/ma13112568.

Annotation:
Additive manufacturing with cement-based materials needs sound approaches for the direct, seamless integration of reinforcement into structural and non-structural elements during their fabrication. Mineral-impregnated Carbon-Fibre (MCF) composites represent a new type of non-corrosive reinforcement that offers great potential in this regard. MCF not only exhibits high performance with respect to its mechanical characteristics and durability, but it also can be processed and shaped easily in the fresh state and, what is more, automated. This article describes different concepts for the continuous, fully automated integration of MCF reinforcement into 3D concrete printing based on layered extrusion. Moreover, for one of the approaches presented and discussed, namely 3D concrete printing with MCF supply from a continuous, stationary impregnation line and deposition of MCF between concrete filaments, a feasibility study was performed using a gantry 3D printer. Small-scale walls were printed and eventually used for the production of specimens for mechanical testing. Three-point bend tests performed on two different beam geometries showed a significant enhancement of both flexural strength and, more especially, deformability of the specimens reinforced with MCF in comparison to the specimens made of plain concrete.
33

Mineo, C., M. Vasilev, B. Cowan, C. N. MacLeod, S. G. Pierce, C. Wong, E. Yang, R. Fuentes, and E. J. Cross. "Enabling robotic adaptive behaviour capabilities for new Industry 4.0 automated quality inspection paradigms". Insight - Non-Destructive Testing and Condition Monitoring 62, no. 6 (June 1, 2020): 338–44. http://dx.doi.org/10.1784/insi.2020.62.6.338.

Annotation:
The seamless integration of industrial robotic arms with server computers, sensors and actuators can revolutionise the way in which automated non-destructive testing (NDT) is performed and conceived. Achieving effective integration and realising the full potential of robotic systems presents significant challenges, since robots, sensors and end-effector tools are often not necessarily designed to be put together and form a holistic system. This paper presents recent breakthroughs, opening up new scenarios for the inspection of product quality in advanced manufacturing. Many years of research have brought to software platforms the ability to integrate external data acquisition instrumentation with industrial robots to improve the inspection speed, accuracy and repeatability of NDT. Robotic manipulators have typically been operated by predefined tool-paths generated through offline path-planning software applications. Recent developments pave the way to data-driven autonomous robotic inspections, enabling real-time path planning and adaptive control. This paper presents a toolbox with highly efficient algorithms and software functions, developed to be used through high-level programming language platforms (for example MATLAB, LabVIEW and Python) and/ or integrated within low-level language (for example C# and C++) applications. The use of the toolbox can speed up the development and the robust integration of new robotic NDT systems with real-time adaptive capabilities and is compatible with all KUKA robots with six degrees of freedom (DOF), which are equipped with the Robot Sensor Interface (RSI) software add-on. The paper describes the architecture of the toolbox and shows two application examples, where performance results are provided. The concepts described in the paper are aligned with the emerging Industry 4.0 paradigms and have wider applicability beyond NDT.
34

Zwettler, Gerald, and Werner Backfrieder. "Automated Domain-Specific Feature Selection for Classification-based Segmentation of Tomographic Medical Image Data". International Journal of Privacy and Health Information Management 5, no. 1 (January 2017): 53–75. http://dx.doi.org/10.4018/ijphim.2017010104.

Annotation:
Classification-based segmentation is an approach to establish generic analysis of medical image data. Significant feature sets covering different characteristics of regions to segment allow for robust discrimination of topologically defined classes. In this work a method for automated domain-specific feature selection to achieve a higher level of predictability is presented, incorporating multivariate feature analysis. For calculation of the probability density function, different approaches, like histogram analysis, enumeration of the entire feature space or umbrella Monte Carlo Integration are investigated. Furthermore, meta features calculated on entire classification results rather than on particular regions are introduced. Predictability of both, single local and meta features, is evaluated for different medical datasets as well for simulated intensity volumes, allowing testing and evaluating specific classification problems. The automated feature selection proofs to be accurate for classification-based segmentation utilizing well-known machine learning approaches.
35

Scholer, Matthias, Matthias Vette, and Mueller Rainer. "A lightweight robot system designed for the optimisation of an automotive end-off line process station". Industrial Robot: An International Journal 42, no. 4 (June 15, 2015): 296–305. http://dx.doi.org/10.1108/ir-11-2014-0427.

Annotation:
Purpose – This study aims to deliver an approach of how lightweight robot systems can be used to automate manual processes for higher efficiency, increased process capability and enhanced ergonomics. As a use case, a new collaborative testing system for an automated water leak test was designed using an image processing system utilized by the robot. Design/methodology/approach – The “water leak test” in an automotive final assembly line is often a significant cost factor due to its labour-intensive nature. This is particularly the case for premium car manufacturers as each vehicle is watered and manually inspected for leakage. This paper delivers an approach that optimizes the efficiency and capability of the test process by using a new automated in-line inspection system whereby thermographic images are taken by a lightweight robot system and then processed to locate the leak. Such optimization allows the collaboration of robots and manual labour, which in turn enhances the capability of the process station. Findings – This paper examines the development of a new application for lightweight robotic systems and provides a suitable process whereby the system was optimized regarding technical, ergonomic and safety-related aspects. Research limitations/implications – A new automated testing process in combination with a processing algorithm was developed. A modular system suitable for the integration of human–robot collaboration into the assembly line is presented as well. Practical implications – To optimize and validate the system, it was set up in a true to reality model factory and brought to a prototypical status. Several original equipment manufacturers showed great interest in the system. Feasibility studies for a practical implementation are running at the moment. Social implications – The direct human–robot collaboration allows humans and robots to share the same workspace without strict separation measures, which is a great advantage compared with traditional industrial robots. The workers benefit from a more ergonomic workflow and are relieved from unpleasant, repetitive and burdensome tasks. Originality/value – A lightweight robotic system was implemented in a continuous assembly line as a new area of application for these systems. The automated water leak test gives a practical example of how to enrich the assembly and commissioning lines, which are currently dominated by manual labour, with new technologies. This is necessary to reach a higher efficiency and process capability while maintaining a higher flexibility potential than fully automated systems.
36

Mudarakola, Lakshmi Prasad, and J. K. R. Sastry. "A Neural Network Based Strategy (NNBS) for Automated Construction of Test Cases for Testing an Embedded System using Combinatorial Techniques". International Journal of Engineering & Technology 7, no. 1.3 (December 31, 2017): 74. http://dx.doi.org/10.14419/ijet.v7i1.3.9271.

Annotation:
Testing an embedded system is required to locate bugs in the software, reduce risk as well as development and repair costs, and improve performance for both users and the company. Embedded software testing tools are useful for catching defects during unit, integration and system testing. In many cases embedded systems must be optimized by exercising their crucial areas while considering all factors of the input domain. The most important concern is to build a set of test cases, based on the design requirements, that can recognize a larger number of faults at minimal cost and time in the major sections of an embedded system. This paper proposes a Neural Network Based Strategy (NNBS) to generate optimized test cases based on the characteristics of the system. A tool called NNTCG (Neural Network Test Case Generator) has been built based on the method proposed in this paper. Test cases generated with NNTCG for testing an embedded system are used both to determine the expected output through the neural network and to obtain the output generated from the actual firmware. Faulty paths within the firmware are identified when the output generated by the neural network does not match the output generated by the firmware.
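As a generic illustration of the combinatorial side of such test-case construction (this sketch is not the NNBS/NNTCG tool described in the paper, and the parameter names and value sets are invented), the snippet below enumerates candidate test cases for a hypothetical embedded controller from per-parameter value domains; practical combinatorial techniques usually reduce this set further, for example to pairwise coverage.

    # Generic sketch (hypothetical parameters): building candidate test cases by
    # combining values from each input domain of an embedded controller.
    from itertools import product

    domains = {
        "voltage": [3.0, 3.3, 3.6],           # supply voltage levels in volts
        "temperature": [-20, 25, 85],         # operating temperatures in degrees Celsius
        "mode": ["idle", "normal", "boost"],  # firmware operating modes
    }

    def exhaustive_test_cases(domains):
        """Yield every combination of parameter values as one test-case dictionary."""
        names = list(domains)
        for values in product(*(domains[name] for name in names)):
            yield dict(zip(names, values))

    if __name__ == "__main__":
        cases = list(exhaustive_test_cases(domains))
        print(len(cases), "candidate test cases")  # 3 * 3 * 3 = 27
        print(cases[0])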
37

Sorbello, Alfred, Anna Ripple, Joseph Tonning, Monica Munoz, Rashedul Hasan, Thomas Ly, Henry Francis und Olivier Bodenreider. „Harnessing scientific literature reports for pharmacovigilance“. Applied Clinical Informatics 26, Nr. 01 (2017): 291–305. http://dx.doi.org/10.4338/aci-2016-11-ra-0188.

Annotation:
Summary Objectives: We seek to develop a prototype software analytical tool to augment FDA regulatory reviewers’ capacity to harness scientific literature reports in PubMed/MEDLINE for pharmacovigilance and adverse drug event (ADE) safety signal detection. We also aim to gather feedback through usability testing to assess design, performance, and user satisfaction with the tool. Methods: A prototype, open source, web-based, software analytical tool generated statistical disproportionality data mining signal scores and dynamic visual analytics for ADE safety signal detection and management. We leveraged Medical Subject Heading (MeSH) indexing terms assigned to published citations in PubMed/MEDLINE to generate candidate drug-adverse event pairs for quantitative data mining. Six FDA regulatory reviewers participated in usability testing by employing the tool as part of their ongoing real-life pharmacovigilance activities to provide subjective feedback on its practical impact, added value, and fitness for use. Results: All usability test participants cited the tool’s ease of learning, ease of use, and generation of quantitative ADE safety signals, some of which corresponded to known established adverse drug reactions. Potential concerns included the comparability of the tool’s automated literature search relative to a manual ‘all fields’ PubMed search, missing drugs and adverse event terms, interpretation of signal scores, and integration with existing computer-based analytical tools. Conclusions: Usability testing demonstrated that this novel tool can automate the detection of ADE safety signals from published literature reports. Various mitigation strategies are described to foster improvements in design, productivity, and end user satisfaction.
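As a worked example of the disproportionality scores mentioned above, the snippet below computes two standard signal statistics, the proportional reporting ratio (PRR) and the reporting odds ratio (ROR), from a 2x2 table of citation counts. The counts are invented for illustration and the formulas are the textbook definitions, not necessarily the exact statistics used in the prototype tool.

def disproportionality(a, b, c, d):
    """a: drug & event, b: drug & other events, c: other drugs & event, d: other drugs & other events."""
    prr = (a / (a + b)) / (c / (c + d))
    ror = (a * d) / (b * c)
    return prr, ror

prr, ror = disproportionality(a=40, b=960, c=200, d=98800)  # made-up counts for one drug-event pair
print(f"PRR = {prr:.1f}, ROR = {ror:.1f}")                  # -> PRR = 19.8, ROR = 20.6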
38

Brogden, Ruth, Su Wang und Binghong Xu. „Finding the missing millions: Integrating automated viral hepatitis screening in a hospital with care and treatment in a primary care setting.“ Journal of Clinical Oncology 39, Nr. 15_suppl (20.05.2021): 108. http://dx.doi.org/10.1200/jco.2021.39.15_suppl.108.

Annotation:
108 Background: Rates of hepatocellular carcinoma (HCC) are rising in the US. Patients at Saint Barnabas Medical Cancer Center (SBMC) present with late-stage HCC at higher rates (29%) than the national rate (16%). Chronic hepatitis C (HCV) and hepatitis B (HBV) are major drivers of liver cancer, yet screening rates are low. Finding these missing millions is important to reducing rates of HCC. An automated emergency department (ED) viral hepatitis (VH) screening program was initiated in 2018 at SBMC. In January 2020, it was expanded to the inpatient setting, and HCV screening was modified from cohort screening (those born in 1945-65) to a one-time test for anyone 18 years or older, per updated Centers for Disease Control and Prevention (CDC) and US Preventive Services Task Force (USPSTF) recommendations. Methods: The electronic medical record (EMR) was modified to automate screening. HBV testing is triggered by a patient's country of birth or race, and HCV testing is triggered by age over 18 and no previous testing. The automated HCV (HCV Ab with reflex to HCV RNA) or HBV (HBsAg) lab orders lead to an EMR notification to the nurses of patient eligibility, and education is provided to patients. Alerts of positive results are sent to nursing staff, physicians, and the patient navigator (PN). The PN receives a real-time secure text message and works individually with patients to arrange linkage-to-care (LTC) for evaluation and treatment. Results: From March 2018 to December 2020, 44,002 patients were screened for HCV; 884 (2.0%) were HCV Ab+ and 242 (0.55%) HCV RNA+. For HBV, 21,328 patients were screened and 212 (0.99%) were HBsAg+. The expanded screenings accounted for 8,716 (19.8%) of the total HCV screenings. Individuals born outside the 1945-65 birth cohort (younger and older) made up 76.2% of those screened and 41% of those infected. The top three countries of birth for HBV screenings were Haiti, Jamaica, and Ecuador. LTC rates, defined as attending the first medical appointment or already being in care, were 86.8% for HCV and 85.4% for HBV. Of those linked to care, 43 HCV+ patients were seen at an outpatient primary care practice that is part of SBMC; of those, 39 initiated HCV cure therapy and 33 were cured (confirmed sustained virologic response at 12 weeks), while 35 HBV+ patients were seen and 6 initiated treatment. Conclusions: This automated program for VH has led to a significant scale-up of screening with successful LTC and treatment of patients. Expansion to universal screening of all adults and to the inpatient setting found additional viral hepatitis patients who would otherwise have been missed. In addition to the automated screening, a multidisciplinary team including internists, pharmacists, and patient navigators was part of creating a primary care based program. Integration of viral hepatitis screening and care in a hospital system can be an initial step towards establishing a liver cancer prevention program.
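The EMR trigger logic described in the Methods can be expressed as a simple rule. The sketch below is a simplified illustration with invented field names and an abbreviated country list taken from the abstract; it is not the hospital's actual EMR build.

HBV_RISK_BIRTH_COUNTRIES = {"Haiti", "Jamaica", "Ecuador"}  # examples named in the abstract

def screening_orders(patient):
    """patient: dict with keys such as 'age', 'prior_hcv_test', 'birth_country', 'hbv_risk_race'."""
    orders = []
    if patient.get("age", 0) >= 18 and not patient.get("prior_hcv_test"):
        orders.append("HCV Ab with reflex to HCV RNA")
    if patient.get("birth_country") in HBV_RISK_BIRTH_COUNTRIES or patient.get("hbv_risk_race"):
        orders.append("HBsAg")
    return orders

print(screening_orders({"age": 52, "prior_hcv_test": False, "birth_country": "Haiti"}))
# -> ['HCV Ab with reflex to HCV RNA', 'HBsAg']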
39

Hancock, P. A., Tara Kajaks, Jeff K. Caird, Mark H. Chignell, Sachi Mizobuchi, Peter C. Burns, Jing Feng et al. „Challenges to Human Drivers in Increasingly Automated Vehicles“. Human Factors: The Journal of the Human Factors and Ergonomics Society 62, Nr. 2 (05.02.2020): 310–28. http://dx.doi.org/10.1177/0018720819900402.

Annotation:
Objective We examine the relationships between contemporary progress in on‐road vehicle automation and its coherence with an envisioned “autopia” (automobile utopia) whereby the vehicle operation task is removed from all direct human control. Background The progressive automation of on‐road vehicles toward a completely driverless state is determined by the integration of technological advances into the private automobile market; improvements in transportation infrastructure and systems efficiencies; and the vision of future driving as a crash‐free enterprise. While there are many challenges to address with respect to automated vehicles concerning the remaining driver role, a considerable amount of technology is already present in vehicles and is advancing rapidly. Methods A multidisciplinary team of experts met to discuss the most critical challenges in the changing role of the driver, and associated safety issues, during the transitional phase of vehicle automation where human drivers continue to have an important but truncated role in monitoring and supervising vehicle operations. Results The group endorsed that vehicle automation is an important application of information technology, not only because of its impact on transportation efficiency, but also because road transport is a life critical system in which failures result in deaths and injuries. Five critical challenges were identified: driver independence and mobility, driver acceptance and trust, failure management, third-party testing, and political support. Conclusion Vehicle automation is not technical innovation alone, but is a social as much as a technological revolution consisting of both attendant costs and concomitant benefits.
40

Su, J. H. „Design and Analysis of a Composite Integral Wheel-Tire“. Tire Science and Technology 17, Nr. 2 (01.04.1989): 138–56. http://dx.doi.org/10.2346/1.2141680.

Annotation:
Abstract An integrated design and analysis methodology for a Fiber Reinforced Plastic (FRP) Integral Wheel-Tire (IWT) was developed for applications in different automotive vehicles. The results are presented specifically for composite epoxy structures reinforced with long glass fibers and having a bonded rubber tread. This methodology includes an automated design and analysis procedure and allows for closing the loop in the integration of the materials, process, and design phases of a product. The results from this methodology were verified by testing real parts. The effect of a design change was predicted automatically with a short turn-around time. As an example, a study of footprint sensitivity to design parameters is discussed.
41

Zhao, Rongchang, Wangmin Liao, Beiji Zou, Zailiang Chen und Shuo Li. „Weakly-Supervised Simultaneous Evidence Identification and Segmentation for Automated Glaucoma Diagnosis“. Proceedings of the AAAI Conference on Artificial Intelligence 33 (17.07.2019): 809–16. http://dx.doi.org/10.1609/aaai.v33i01.3301809.

Annotation:
Evidence identification, optic disc segmentation and automated glaucoma diagnosis are the most clinically significant tasks for clinicians assessing fundus images. However, delivering the three tasks simultaneously is extremely challenging due to the high variability of fundus structure and the lack of datasets with complete annotations. In this paper, we propose an innovative Weakly-Supervised Multi-Task Learning method (WSMTL) for accurate evidence identification, optic disc segmentation and automated glaucoma diagnosis. The WSMTL method uses only weak-label data with binary diagnostic labels (normal/glaucoma) for training, while producing a pixel-level segmentation mask and a diagnosis at test time. The WSMTL consists of a skip- and densely connected CNN to capture multi-scale discriminative representations of fundus structure; a well-designed pyramid integration structure to generate a high-resolution evidence map for evidence identification, in which pixels with higher values represent higher confidence in highlighting abnormalities; a constrained clustering branch for optic disc segmentation; and a fully-connected discriminator for automated glaucoma diagnosis. Experimental results show that our proposed WSMTL effectively and simultaneously delivers evidence identification, optic disc segmentation (89.6% TP Dice), and accurate glaucoma diagnosis (92.4% AUC). This endows our WSMTL with great potential for the effective clinical assessment of glaucoma.
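To illustrate the multi-task idea, here is a minimal PyTorch sketch of a shared encoder feeding three heads (evidence map, optic disc mask, image-level diagnosis). The layer sizes and the plain convolutional heads are my own simplifications and do not reproduce the skip/densely connected CNN, pyramid integration or constrained clustering branch of the published WSMTL.

import torch
import torch.nn as nn

class MultiTaskSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(              # toy stand-in for the shared fundus encoder
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.evidence_head = nn.Conv2d(32, 1, 1)   # pixel-wise evidence/confidence map
        self.disc_head = nn.Conv2d(32, 1, 1)       # optic disc segmentation logits
        self.classifier = nn.Sequential(           # image-level normal/glaucoma decision
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2)
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.evidence_head(feats), self.disc_head(feats), self.classifier(feats)

evidence, disc_mask, logits = MultiTaskSketch()(torch.randn(1, 3, 224, 224))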
42

Park, Jihyun, und Byoungju Choi. „Automatic Method for Distinguishing Hardware and Software Faults Based on Software Execution Data and Hardware Performance Counters“. Electronics 9, Nr. 11 (02.11.2020): 1815. http://dx.doi.org/10.3390/electronics9111815.

Annotation:
Debugging in an embedded system where hardware and software are tightly coupled and have restricted resources is far from trivial. When hardware defects appear as if they were software defects, determining the real source becomes challenging. In this study, we propose an automated method of distinguishing whether a defect originates from the hardware or software at the stage of integration testing of hardware and software. Our method overcomes the limitations of the embedded environment, minimizes the effects on runtime, and identifies defects by obtaining and analyzing software execution data and hardware performance counters. We analyze the effects of the proposed method through an empirical study. The experimental results reveal that our method can effectively distinguish defects.
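A hedged sketch of the decision step: the heuristic below is my own simplified stand-in, not the authors' published classifier. It compares how far the software execution data and the hardware performance counters deviate from a fault-free baseline and attributes the defect accordingly.

def classify_fault(sw_deviation, hw_deviation, sw_threshold=3.0, hw_threshold=3.0):
    """Deviations are, e.g., z-scores of observed metrics against a fault-free baseline run."""
    sw_anomalous = sw_deviation > sw_threshold
    hw_anomalous = hw_deviation > hw_threshold
    if hw_anomalous and not sw_anomalous:
        return "hardware fault suspected"
    if sw_anomalous and not hw_anomalous:
        return "software fault suspected"
    if sw_anomalous and hw_anomalous:
        return "coupled symptom: inspect hardware and software together"
    return "no fault indicated"

print(classify_fault(sw_deviation=0.8, hw_deviation=5.2))  # -> hardware fault suspected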
43

Devaraj, Harish, Rajnish Sharma, Enrico Haemmerle und Kean Aw. „A Portable & Disposable Ultra-Low Velocity Flow Sensor from Bioinspired Hair-Like Microstructures“. Proceedings 2, Nr. 13 (03.12.2018): 731. http://dx.doi.org/10.3390/proceedings2130731.

Annotation:
We present, for the first time, the design, development and testing of a portable ultra-low velocity flow sensor with a disposable architecture for use in medical applications. A 3D microprinting technique was used to fabricate high-aspect-ratio microscopic hair-like structures from conducting polymers, in particular poly(3,4-ethylenedioxythiophene):polystyrene-sulfonate (PEDOT:PSS). These high-aspect-ratio micro-hairs are flexible and conductive and can respond to air flowing over them. A disposable and portable flow sensor with a modular design that allows tuning of the measurement range was developed for integration with an automated neonatal resuscitator to provide closed-loop feedback. The developed portable sensor architecture is capable of real-time indication of air flow velocities down to a few millimeters per second.
44

Gille, Benjamin, Lieselot Dedeene, Erik Stoops, Leentje Demeyer, Cindy Francois, Stefanie Lefever, Maxim De Schaepdryver et al. „Automation on an Open-Access Platform of Alzheimer’s Disease Biomarker Immunoassays“. SLAS TECHNOLOGY: Translating Life Sciences Innovation 23, Nr. 2 (18.01.2018): 188–97. http://dx.doi.org/10.1177/2472630317750378.

Annotation:
The lack of (inter-)laboratory standardization has hampered the application of universal cutoff values for Alzheimer’s disease (AD) cerebrospinal fluid (CSF) biomarkers and their transfer to general clinical practice. The automation of the AD biomarker immunoassays is suggested to generate more robust results than using manual testing. Open-access platforms will facilitate the integration of automation for novel biomarkers, allowing the introduction of the protein profiling concept. A feasibility study was performed on an automated open-access platform of the commercial immunoassays for the 42-amino-acid isoform of amyloid-β (Aβ1–42), Aβ1–40, and total tau in CSF. Automated Aβ1–42, Aβ1–40, and tau immunoassays were performed within predefined acceptance criteria for bias and imprecision. Similar accuracy was obtained for ready-to-use calibrators as for reconstituted lyophilized kit calibrators. When compared with the addition of a standard curve in each test run, the use of a master calibrator curve, determined before and applied to each batch analysis as the standard curve, yielded an acceptable overall bias of −2.6% and −0.9% for Aβ1−42 and Aβ1–40, respectively, with an imprecision profile of 6.2% and 8.4%, respectively. Our findings show that transfer of commercial manual immunoassays to fully automated open-access platforms is feasible, as it performs according to universal acceptance criteria.
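The acceptance metrics cited above (percent bias and imprecision) can be reproduced with a short calculation. The replicate values and target below are invented for illustration and are not data from the study.

from statistics import mean, stdev

def bias_and_cv(replicates, target):
    m = mean(replicates)
    bias_pct = 100 * (m - target) / target   # relative deviation from the assigned target value
    cv_pct = 100 * stdev(replicates) / m     # imprecision as coefficient of variation
    return bias_pct, cv_pct

bias, cv = bias_and_cv([590, 612, 604, 598], target=610)   # hypothetical Abeta1-42 values in pg/mL
print(f"bias {bias:+.1f}%, CV {cv:.1f}%")                  # -> bias -1.5%, CV 1.5%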
45

Dona, Rizart, und Riccardo Di Maria. „The ESCAPE Data Lake: The machinery behind testing, monitoring and supporting a unified federated storage infrastructure of the exabyte-scale“. EPJ Web of Conferences 251 (2021): 02060. http://dx.doi.org/10.1051/epjconf/202125102060.

Annotation:
The EU-funded ESCAPE project aims at enabling a prototype federated storage infrastructure, a Data Lake, that would handle data on the exabyte scale, address the FAIR data management principles and provide science projects with a unified, scalable data management solution for accessing and analyzing large volumes of scientific data. In this respect, data transfer and management technologies such as Rucio, FTS and GFAL are employed along with monitoring-enabling solutions such as Grafana, Elasticsearch and perfSONAR. This paper presents and describes the technical details behind the machinery of testing and monitoring of the Data Lake – this includes continuous automated functional testing, network monitoring and the development of insightful visualizations that reflect the current state of the system. Topics that are also addressed include the integration with the CRIC information system as well as the initial support for token-based authentication/authorization using OpenID Connect. The current architecture of these components is provided and future enhancements are discussed.
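As a hedged sketch of what a continuous functional test against a Data Lake endpoint might look like, the snippet below times a file copy issued through the gfal-copy command-line tool and emits a small JSON record for a monitoring pipeline. The endpoint URL, paths and record fields are placeholders of my own; this is not the project's actual configuration or test suite.

import json
import subprocess
import time

def timed_copy(src, dst):
    """Run gfal-copy and report whether it succeeded and how long it took."""
    start = time.time()
    proc = subprocess.run(["gfal-copy", "--force", src, dst], capture_output=True, text=True)
    return {"src": src, "dst": dst,
            "ok": proc.returncode == 0,
            "seconds": round(time.time() - start, 2)}

record = timed_copy("file:///tmp/testfile",
                    "root://storage.example.org//escape/functional-test/testfile")
print(json.dumps(record))  # in a real deployment this record would feed the monitoring stack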
46

Sáray, Sára, Christian A. Rössert, Shailesh Appukuttan, Rosanna Migliore, Paola Vitale, Carmen A. Lupascu, Luca L. Bologna et al. „HippoUnit: A software tool for the automated testing and systematic comparison of detailed models of hippocampal neurons based on electrophysiological data“. PLOS Computational Biology 17, Nr. 1 (29.01.2021): e1008114. http://dx.doi.org/10.1371/journal.pcbi.1008114.

Annotation:
Anatomically and biophysically detailed data-driven neuronal models have become widely used tools for understanding and predicting the behavior and function of neurons. Due to the increasing availability of experimental data from anatomical and electrophysiological measurements as well as the growing number of computational and software tools that enable accurate neuronal modeling, there are now a large number of different models of many cell types available in the literature. These models were usually built to capture a few important or interesting properties of the given neuron type, and it is often unknown how they would behave outside their original context. In addition, there is currently no simple way of quantitatively comparing different models regarding how closely they match specific experimental observations. This limits the evaluation, re-use and further development of the existing models. Further, the development of new models could also be significantly facilitated by the ability to rapidly test the behavior of model candidates against the relevant collection of experimental data. We address these problems for the representative case of the CA1 pyramidal cell of the rat hippocampus by developing an open-source Python test suite, which makes it possible to automatically and systematically test multiple properties of models by making quantitative comparisons between the models and electrophysiological data. The tests cover various aspects of somatic behavior, and signal propagation and integration in apical dendrites. To demonstrate the utility of our approach, we applied our tests to compare the behavior of several different rat hippocampal CA1 pyramidal cell models from the ModelDB database against electrophysiological data available in the literature, and evaluated how well these models match experimental observations in different domains. We also show how we employed the test suite to aid the development of models within the European Human Brain Project (HBP), and describe the integration of the tests into the validation framework developed in the HBP, with the aim of facilitating more reproducible and transparent model building in the neuroscience community.
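Conceptually, each test in such a suite reduces to scoring a model-derived feature against the experimental distribution. The sketch below shows a bare z-score comparison with invented numbers; it is not the HippoUnit API itself.

def feature_zscore(model_value, exp_mean, exp_sd):
    """How many experimental standard deviations the model's feature lies from the experimental mean."""
    return (model_value - exp_mean) / exp_sd

# e.g. a somatic spike amplitude of 78 mV in the model vs. 85 +/- 6 mV in experiments (illustrative values)
print(round(feature_zscore(78.0, 85.0, 6.0), 2))  # -> -1.17, i.e. within about one SD of the data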
47

Hayashi, Makoto, James T. MacGregor, David G. Gatehouse, Ilse-Dore Adler, David H. Blakey, Stephen D. Dertinger, Gopala Krishna, Takeshi Morita, Antonella Russo und Shizuyo Sutou. „In vivo rodent erythrocyte micronucleus assay. II. Some aspects of protocol design including repeated treatments, integration with toxicity testing, and automated scoring“. Environmental and Molecular Mutagenesis 35, Nr. 3 (2000): 234–52. http://dx.doi.org/10.1002/(sici)1098-2280(2000)35:3<234::aid-em10>3.0.co;2-l.

48

Méndez, A., S. Brunner-Agten und A. R. Huber. „Automation in haemostasis“. Hämostaseologie 33, Nr. 04 (2013): 295–98. http://dx.doi.org/10.5482/hamo-12-05-0002.

Annotation:
Summary: Automatia, an ancient Greek goddess of luck who makes things happen by themselves and of her own will, without human engagement, is present in our daily life in the medical laboratory. Automation was introduced and perfected in clinical chemistry and has since expanded into other fields such as haematology, immunology, molecular biology and also coagulation testing. The initial small and relatively simple standalone instruments have been replaced by more complex systems that allow for multitasking. Integration of automated coagulation testing into total laboratory automation has become possible in recent years. Automation has many strengths and opportunities, provided its weaknesses and threats are respected. On the positive side, standardization, reduction of errors, reduction of cost and increase of throughput are clearly beneficial. Dependence on manufacturers, high initial cost and somewhat expensive maintenance are less favourable factors. The modern lab, and especially today's lab technicians and academic personnel in the laboratory, do not add value for the doctor and his patients by spending lots of time behind the machines. In the future the lab needs to contribute at the bedside, suggesting laboratory testing and providing support and interpretation of the obtained results. The human factor will continue to play an important role in haemostasis testing, yet under different circumstances.
49

Lim, Hooi Min, Chin Hai Teo, Chirk Jenn Ng, Thiam Kian Chiew, Wei Leik Ng, Adina Abdullah, Haireen Abdul Hadi, Chee Sun Liew und Chee Seng Chan. „An Automated Patient Self-Monitoring System to Reduce Health Care System Burden During the COVID-19 Pandemic in Malaysia: Development and Implementation Study“. JMIR Medical Informatics 9, Nr. 2 (26.02.2021): e23427. http://dx.doi.org/10.2196/23427.

Annotation:
Background During the COVID-19 pandemic, there was an urgent need to develop an automated COVID-19 symptom monitoring system to reduce the burden on the health care system and to provide better self-monitoring at home. Objective This paper aimed to describe the development process of the COVID-19 Symptom Monitoring System (CoSMoS), which consists of a self-monitoring, algorithm-based Telegram bot and a teleconsultation system. We describe all the essential steps from the clinical perspective and our technical approach in designing, developing, and integrating the system into clinical practice during the COVID-19 pandemic as well as lessons learned from this development process. Methods CoSMoS was developed in three phases: (1) requirement formation to identify clinical problems and to draft the clinical algorithm, (2) development testing iteration using the agile software development method, and (3) integration into clinical practice to design an effective clinical workflow using repeated simulations and role-playing. Results We completed the development of CoSMoS in 19 days. In Phase 1 (ie, requirement formation), we identified three main functions: a daily automated reminder system for patients to self-check their symptoms, a safe patient risk assessment to guide patients in clinical decision making, and an active telemonitoring system with real-time phone consultations. The system architecture of CoSMoS involved five components: Telegram instant messaging, a clinician dashboard, system administration (ie, back end), a database, and development and operations infrastructure. The integration of CoSMoS into clinical practice involved the consideration of COVID-19 infectivity and patient safety. Conclusions This study demonstrated that developing a COVID-19 symptom monitoring system within a short time during a pandemic is feasible using the agile development method. Time factors and communication between the technical and clinical teams were the main challenges in the development process. The development process and lessons learned from this study can guide the future development of digital monitoring systems during the next pandemic, especially in developing countries.
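To illustrate the algorithm-based self-check at the heart of such a bot, here is a minimal triage rule in Python. The symptom list, the SpO2 cut-off and the returned advice strings are invented for illustration and do not reproduce the published CoSMoS clinical algorithm.

RED_FLAGS = {"shortness_of_breath", "chest_pain", "confusion"}   # hypothetical escalation symptoms

def triage(symptoms, spo2=None):
    """symptoms: set of reported symptom strings; spo2: optional pulse-oximetry reading in percent."""
    if symptoms & RED_FLAGS or (spo2 is not None and spo2 < 94):
        return "escalate: arrange a real-time phone consultation"
    if symptoms:
        return "continue daily self-monitoring and report via the bot"
    return "no symptoms reported today"

print(triage({"cough", "fever"}))  # -> continue daily self-monitoring and report via the bot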
50

Sirinukunwattana, Korsuk, Alan Aberdeen, Helen Theissen, Nikolaos Sousos, Bethan Psaila, Adam J. Mead, Gareth D. H. Turner, Gabrielle Rees, Jens Rittscher und Daniel Royston. „Artificial intelligence–based morphological fingerprinting of megakaryocytes: a new tool for assessing disease in MPN patients“. Blood Advances 4, Nr. 14 (24.07.2020): 3284–94. http://dx.doi.org/10.1182/bloodadvances.2020002230.

Annotation:
Abstract Accurate diagnosis and classification of myeloproliferative neoplasms (MPNs) requires integration of clinical, morphological, and genetic findings. Despite major advances in our understanding of the molecular and genetic basis of MPNs, the morphological assessment of bone marrow trephines (BMT) is critical in differentiating MPN subtypes and their reactive mimics. However, morphological assessment is heavily constrained by a reliance on subjective, qualitative, and poorly reproducible criteria. To improve the morphological assessment of MPNs, we have developed a machine learning approach for the automated identification, quantitative analysis, and abstract representation of megakaryocyte features using reactive/nonneoplastic BMT samples (n = 43) and those from patients with established diagnoses of essential thrombocythemia (n = 45), polycythemia vera (n = 18), or myelofibrosis (n = 25). We describe the application of an automated workflow for the identification and delineation of relevant histological features from routinely prepared BMTs. Subsequent analysis enabled the tissue diagnosis of MPN with a high predictive accuracy (area under the curve = 0.95) and revealed clear evidence of the potential to discriminate between important MPN subtypes. Our method of visually representing abstracted megakaryocyte features in the context of analyzed patient cohorts facilitates the interpretation and monitoring of samples in a manner that is beyond conventional approaches. The automated BMT phenotyping approach described here has significant potential as an adjunct to standard genetic and molecular testing in established or suspected MPN patients, either as part of the routine diagnostic pathway or in the assessment of disease progression/response to treatment.