Journal articles on the topic "Computer input-output equipment Design Data processing"

Below are the top 50 journal articles for research on the topic "Computer input-output equipment Design Data processing".

1

Ziauddin, Ziauddin. "Sistem Pendataan Keanggotaan pada Kantor Asosiasi Kontraktor Aceh (AKA)". Jurnal JTIK (Jurnal Teknologi Informasi dan Komunikasi) 4, no. 2 (May 8, 2020): 60. http://dx.doi.org/10.35870/jtik.v4i2.108.

Abstract:
The purpose of this study is to design and create a member (company) data processing system at the Bireuen Branch Aceh Contractors Association Office (AKA) so that information about member data becomes better organized and is easily obtained and documented. Rapid Application Development (RAD) is used as the application development model. The company data collection system at the Aceh Contractors Association Office (AKA) follows standard procedures and uses a computer with the Visual Basic programming language and a Microsoft Access database; the inputs in this design cover business entity data, deed data, management data, labor data, work equipment data, and work experience data. The resulting design can present data as business entity reports, work experience reports, business entity management reports, work equipment reports, employment reports, and membership card reports. Keywords: system, data collection, membership, Office of the Aceh Contractors Association (AKA).
2

Wang, Yue, and Jia Ying Zhang. "Research on Embedded Numerical Control System Software Based on ARM Microprocessor". Advanced Materials Research 430-432 (January 2012): 2026–31. http://dx.doi.org/10.4028/www.scientific.net/amr.430-432.2026.

Abstract:
A "dual-CPU architecture" design is adopted on the basis of a deep study of embedded systems and multi-functional CNC machine tools. That is to say, the Samsung ARM9-series microprocessor S3C2440A is used as the control core, whose PWM timer produces pulses to realize automatic control of the process. As the main control center, it carries out the main functions of NC program input, keyboard entry, LCD display, program interpretation, coarse interpolation, and so on. A second CPU serves as auxiliary equipment: it receives signals from the master control center to realize fine interpolation through further calculation. Simultaneously, the periphery of the system is expanded with human-computer interaction, communication, servo, and input/output modules to realize human-machine interaction, data communication, and motor control. Finally, the hardware modules are debugged and the results are analyzed, which shows that the system offers fast response, reliability, low cost, good cutting performance, and portability. The system has achieved the desired goals and has broad prospects in the economical CNC field, which requires fast motion and an appropriate processing speed.
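The coarse interpolation step that the abstract assigns to the ARM control core can be illustrated with a classic integer line-interpolation (Bresenham-style DDA) sketch. This is a generic illustration of the technique, not code from the paper; the function name and step representation are assumptions:

```python
def bresenham_steps(dx, dy):
    """Integer Bresenham walk from (0, 0) to (dx, dy), assuming dx >= dy >= 0.
    Returns one (step_x, step_y) pulse pair per X increment; a CNC coarse
    interpolator would emit these pairs as axis step pulses."""
    steps = []
    err = 2 * dy - dx              # scaled decision variable
    for _ in range(dx):
        if err > 0:
            steps.append((1, 1))   # pulse both axes
            err -= 2 * dx
        else:
            steps.append((1, 0))   # pulse X only
        err += 2 * dy
    return steps
```

A fine interpolator, the role of the auxiliary CPU in the paper, would then subdivide each coarse segment further.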
3

Murfianah, Anggi, Krismadinata Krismadinata, and Yoan Elviralita. "Data Acquisition of PV Mini-Grid Voltage and Current using Arduino and PLX-DAQ". MOTIVECTION : Journal of Mechanical, Electrical and Industrial Engineering 3, no. 2 (May 31, 2021): 77–84. http://dx.doi.org/10.46574/motivection.v3i2.88.

Abstract:
Solar panels are the main equipment of a solar power generation system; they convert sunlight directly into electrical energy. The solar panel performance monitoring system is designed with current and voltage measuring sensors whose readings are integrated into an Excel spreadsheet using the PLX-DAQ application program. The system design is based on an Arduino Nano connected to a computer via USB. The advantage of this monitoring system is that measurement results from the sensors can be processed directly and displayed as data and graphics in real time, which makes data processing easy.
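As a rough sketch of the data path the abstract describes (raw Arduino ADC counts converted to panel voltage and streamed to a spreadsheet via PLX-DAQ), the following assumes a 10-bit ADC, a 5 V reference, and a 1:5 voltage divider; the row format follows PLX-DAQ's commonly documented "DATA" directive, and all constants are illustrative, not taken from the paper:

```python
VREF = 5.0      # Arduino Nano ADC reference voltage (assumed)
ADC_MAX = 1023  # 10-bit ADC full scale

def to_volts(raw, divider_ratio=5.0):
    """Convert a raw 10-bit ADC reading to panel voltage, assuming a
    resistive divider that scales the panel voltage down by divider_ratio."""
    return raw * VREF / ADC_MAX * divider_ratio

def plx_daq_row(volts, amps):
    """Format one spreadsheet row as PLX-DAQ expects over serial:
    a 'DATA' directive followed by comma-separated cell values
    ('TIME' asks PLX-DAQ to insert the PC timestamp)."""
    return f"DATA,TIME,{volts:.2f},{amps:.2f}"
```

On the Arduino side the same line would be produced with `Serial.println`; this Python version only illustrates the framing.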
4

Kang, Sung-O., Eul-Bum Lee, and Hum-Kyung Baek. "A Digitization and Conversion Tool for Imaged Drawings to Intelligent Piping and Instrumentation Diagrams (P&ID)". Energies 12, no. 13 (July 5, 2019): 2593. http://dx.doi.org/10.3390/en12132593.

Abstract:
In the Fourth Industrial Revolution, artificial intelligence technology and big data science are emerging rapidly. To apply these information technologies to the engineering industries, it is essential to digitize the data that are currently archived in image or hard-copy format. For previously created design drawings, consistency between design products is reduced in the digitization process, and the accuracy and reliability of equipment and material estimates from the digitized drawings are remarkably low. In this paper, we propose a method and system for automatically recognizing and extracting design information from imaged piping and instrumentation diagram (P&ID) drawings and automatically generating digitized drawings from the extracted data, using digital image processing techniques such as template matching and the sliding window method. First, symbols are recognized by template matching, extracted from the imaged P&ID drawing, and registered automatically in the database. Then, lines and text are recognized and extracted from the imaged P&ID drawing using the sliding window method and aspect ratio calculation, respectively. The extracted symbols for equipment and lines are associated with the attributes of the closest text and are stored in the database in a neutral format. This is mapped to the predefined intelligent P&ID information and transformed into commercial P&ID tool formats along with the associated stored information. As illustrated by the validation case studies, with the intelligent digitized drawings generated by the above automatic conversion system, the consistency of the design product is maintained, and the problems engineering companies experience with the traditional, manual P&ID input method, such as time consumption, missing items, and misspellings, are solved through the final fine-tune validation process.
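The symbol-recognition step the abstract names, template matching over a sliding window, can be sketched in a few lines. This is a minimal grayscale sum-of-squared-differences matcher on nested lists, illustrating the technique rather than the authors' implementation:

```python
def match_template(image, tmpl):
    """Slide tmpl over image and return the (row, col) of the window with
    the smallest sum of squared differences, i.e. the best match position."""
    H, W = len(image), len(image[0])
    h, w = len(tmpl), len(tmpl[0])
    best, best_pos = float("inf"), None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = sum((image[r + i][c + j] - tmpl[i][j]) ** 2
                      for i in range(h) for j in range(w))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

A production system would use a normalized correlation score and a match threshold so that several symbol instances can be detected per drawing.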
5

Stević, Stevan, Momčilo Krunić, Marko Dragojević, and Nives Kaprocki. "Development of ADAS perception applications in ROS and "Software-In-the-Loop" validation with CARLA simulator". Telfor Journal 12, no. 1 (2020): 40–45. http://dx.doi.org/10.5937/telfor2001040s.

Abstract:
Higher levels of autonomous driving bring sophisticated requirements and unforeseen challenges. In order to solve these problems, the set of functionalities in modern vehicles is growing in terms of algorithmic complexity and required hardware. Testing implemented solutions in the real world is risky, expensive, and time consuming. This is why virtual automotive simulation tools for testing are widely embraced. Original Equipment Manufacturers (OEMs) use these tools to create a closed sense-compute-act loop and thus obtain realistic testing scenarios. Production software is tested against simulated sensing data. Based on these inputs, a set of actions is produced and simulated, generating consequences that are then evaluated. This allows OEMs to minimize design errors and optimize vehicle production costs before any physical prototypes are produced. This paper presents the development of simple C++/Python perception applications that can be used in driver assistance functionalities. Using ROS as a prototyping platform, these applications are validated and tested with the "Software-In-the-Loop" (SIL) method. The CARLA simulator is used as a generator of input data, and output commands of the autonomous platform are executed as simulated actions within the simulator. Validation is done by connecting the Autoware autonomous platform with the CARLA simulator in order to test against various scenes in which the applications are applicable. Vision-based lane detection, one of the prototypes, is also tested in a real-world scenario to demonstrate the applicability of algorithms developed with simulators to real-time processing.
6

Makshakov, A. V., Yu I. Shtern, O. S. Volkova, and K. A. Vasilchenko. "Method and Hardware-Software for Measuring Altitude of Aircraft and Descent Facilities with Increased Accuracy". Proceedings of Universities. Electronics 25, no. 5 (October 2020): 452–64. http://dx.doi.org/10.24151/1561-5405-2020-25-5-452-464.

Abstract:
Reliable, accident-free operation of ultralight aviation, drone aircraft, and parachute equipment for landing people and loads demands increased accuracy in height determination for safe maneuvering, descent, and landing. In this work, a critical analysis of existing height measurement methods has been carried out to identify the most accurate ones, and preference has been given to the barometric method. To decrease measurement errors, smart altimeter sensors (SAS) have been developed, and on their basis a prototype barometric altimeter has been designed. In the course of computer modeling and prototyping, it was determined that several SAS should be used in the altimeter design, and that measurement accuracy is essentially affected by the arrangement of sensors on the flying object. The developed height measurement method using SAS includes hardware-software compensation of errors caused by atmospheric phenomena and the aerodynamic parameters of the flying object's design. Hardware and software for processing the measured data have been developed, as well as software for the intelligent pressure sensor, automatic data processing, and information output to the altimeter display. Tests of the proposed technique and hardware-software were carried out under actual operating conditions, with the developed altimeter installed on a parachutist's equipment. The test results show that the developed method and hardware-software significantly decrease measurement errors, which do not exceed 1 meter at airflow speeds up to 8 meters per second and 5 meters at speeds up to 70 meters per second.
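The barometric method the paper prefers typically rests on the international barometric formula; a minimal sketch follows, with a naive multi-sensor average standing in for the paper's SAS fusion. The constants are the standard-atmosphere values, and the averaging scheme is an assumption, not the authors' error-compensation method:

```python
def baro_altitude(p_hpa, p0_hpa=1013.25):
    """International barometric formula: altitude in metres from static
    pressure p and the sea-level reference p0 (both in hPa)."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def fused_altitude(pressures_hpa, p0_hpa=1013.25):
    """Average the altitudes from several SAS pressure readings to reduce
    individual sensor noise (a simple stand-in for the paper's fusion)."""
    alts = [baro_altitude(p, p0_hpa) for p in pressures_hpa]
    return sum(alts) / len(alts)
```

Real compensation for airflow and airframe aerodynamics, as described in the paper, would add correction terms on top of this baseline.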
7

Micieta, Branislav, Jolanta Staszewska, Matej Kovalsky, Martin Krajcovic, Vladimira Binasova, Ladislav Papanek, and Ivan Antoniuk. "Innovative System for Scheduling Production Using a Combination of Parametric Simulation Models". Sustainability 13, no. 17 (August 24, 2021): 9518. http://dx.doi.org/10.3390/su13179518.

Abstract:
The article deals with the design of an innovative system for scheduling piece and small-series discrete production using a combination of parametric simulation models and selected optimization methods. An innovative system for solving production scheduling problems is created based on data from a real production system at the workshop level. The methodology of the innovative system, using simulation and optimization methods, addresses the sequential scheduling problem because of its versatility: it covers several production systems, and in practice several modifications of production scheduling problems are encountered. Proposals for the individual modules of the innovative system, with the proposed communication channels connecting the individual elements of the created object library for solving sequential production scheduling problems, have been presented. With the help of the created communication channels, it is possible to apply individual parameters of a real production system directly to the assembled simulation model. In this system, an initial set of optimization methods is deployed, which can be applied to solve the sequential production scheduling problem. The benefit of the solution is an innovative system that defines the content of the data necessary for working with it and the design of the output reports that the proposed system provides for production planning at the shopfloor level. The DPSS system works with several optimization methods (CR: Critical Ratio, S/RO: Slack/Remaining Operations, FDD: Flow Due Date, MWKR: Most Work Remaining, WSL: Waiting Slack, OPFSLK/PK: Operational Flow Slack per Processing Time), and the simulation experiments show that the most suitable solution for the FT10 problem is the critical ratio method, in which the replaceability of the equipment was not considered. The total time to find all solutions with the DPSS system was 1.68 min.
The main benefit of the DPSS system is the combination of two techniques used effectively not only in practice but also in research: production scheduling and discrete computer simulation. By combining the techniques, it is possible to generate a dynamically and interactively changing simulated production program, and subsequently to make decisions under conditions of certainty, uncertainty, and risk. To determine the conditions, models of production systems are used, which represent physical production systems with their complex internal processes. Another benefit of combining the techniques is the ability to evaluate a production system across a number of emerging problem modifications.
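Of the dispatch rules the DPSS system uses, the winning one, Critical Ratio, is simple to state: time remaining until the due date divided by remaining processing time, with the smallest ratio dispatched first. A minimal sketch (the job data and names are illustrative, not from the paper):

```python
def critical_ratio(now, due, remaining_work):
    """Critical Ratio dispatch priority: time until the due date divided by
    remaining processing time. CR < 1 flags a job at risk of being late."""
    return (due - now) / remaining_work

def next_job(now, jobs):
    """Pick the job with the smallest critical ratio.
    jobs: dict name -> (due_date, remaining_processing_time)."""
    return min(jobs, key=lambda j: critical_ratio(now, *jobs[j]))
```

The other listed rules (MWKR, FDD, ...) differ only in the priority expression; the dispatch loop stays the same.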
8

David, Jiří, Pavel Švec, Vít Pasker, and Romana Garzinová. "Usage of Real Time Machine Vision in Rolling Mill". Sustainability 13, no. 7 (March 31, 2021): 3851. http://dx.doi.org/10.3390/su13073851.

Abstract:
This article deals with the issue of computer vision on a rolling mill. The main goal of this article is to describe the designed and implemented algorithm for the automatic identification of the character string of billets on the rolling mill. The algorithm allows the conversion of image information from the front of the billet, which enters the rolling process, into a string of characters, which is further used to control the technological process. The purpose of this identification is to prevent the input pieces from being confused, because different parameters of the rolling process are set for different pieces. In solving this task, it was necessary to design the optimal technical equipment for image capture, choose the appropriate lighting, locate the text and recognize individual symbols, and insert them into the control system. The research methodology is based on the empirical-quantitative principle, the basis of which is the analysis of experimentally obtained data (photographs of billet faces) in real operating conditions leading to their interpretation (transformation into a digital string). The first part of the article briefly describes the billet identification system from the point of view of technology and hardware resources. The next parts are devoted to the main parts of the automatic identification algorithm: optical recognition of strings and recognition of their individual characters using artificial intelligence. Optical character recognition using artificial neural networks is the basic algorithm of the automatic billet identification system and eliminates ambiguities during further processing. Successful implementation of the automatic inspection system will increase the degree of automation and ensure automatic inspection of steel billets according to the production plan.
This issue is related to the trend of digitization of individual technological processes in metallurgy and also to the social sustainability of processes, which means the elimination of human errors in the management of the billet rolling process.
9

Wang, Xian, Zhe Wang, Tai Yong Wang, Fu Xun Lin, and Yin Ming Ge. "The Design and Realization of CNC System Test Board Based on STM32". Advanced Materials Research 694-697 (May 2013): 1215–18. http://dx.doi.org/10.4028/www.scientific.net/amr.694-697.1215.

Abstract:
Given the complicated operation and time-consuming limitations of traditional CNC system test equipment, a test board with the STM32 as the central controller is designed. The board has an RS232 serial communication function, together with input and output channels for CNC system interface signals. Firstly, this paper introduces the whole structure and hardware interface circuit of the test board. Secondly, in the MDK programming platform, the test programs for data receiving, processing, control, and communication are designed. Finally, the system test is accomplished through joint debugging of the software and hardware. The test board has been proved to have good stability and scalability, and has been successfully applied in practical production.
10

Takagi, Taro, and Ikuro Mizumoto. "PFC design method with input/output data of controlled system via DE". Electronics and Communications in Japan 101, no. 9 (July 20, 2018): 48–54. http://dx.doi.org/10.1002/ecj.12109.

11

M.D. Hassan, Mohsen. "An evaluation of input and output of expert systems for selection of material handling equipment". Journal of Manufacturing Technology Management 25, no. 7 (August 26, 2014): 1049–67. http://dx.doi.org/10.1108/jmtm-08-2012-0077.

Abstract:
Purpose – The purpose of this paper is to evaluate expert systems (ES) for the selection of material handling (MH) equipment on their use of information and generation of equipment, and to provide guidelines that can enhance their future development. Design/methodology/approach – Data envelopment analysis (DEA) is used to evaluate the efficiency of ES in their use of information and generation of equipment. Characteristics of benchmark ES are identified to serve as guidelines in developing future ES. Findings – The DEA results indicate that most ES use a large amount of information that is not commensurate with the number and variety of equipment they generate. Research limitations/implications – Whether the ideal MH equipment for a situation is selected by ES or by other procedures is not known. Therefore, this study focuses on the efficiency of ES in using information to generate MH equipment, without regard to whether the ES produce the right equipment for a situation. Practical implications – Developers of future ES should consider the efficiency of an ES in using information and generating equipment, in addition to considering its functions and methodologies. They should utilize means similar to those employed by benchmark methodologies, and other means that can be thought of, to economize on information and generate a greater number and variety of equipment, thus rendering ES more useful to facility designers and manufacturing managers. Originality/value – The paper presents the first evaluation of ES for the selection of MH equipment. The evaluation performed should enhance the development of future ES in this field and can be extended to ES in other application domains.
12

Kazama, Hideki, Masanori Hariyama, and Michitaka Kameyama. "Design of a VLSI Processor Based on an Immediate Output Generation Scheduling for Ball-Trajectory Prediction". Journal of Robotics and Mechatronics 12, no. 5 (October 20, 2000): 534–40. http://dx.doi.org/10.20965/jrm.2000.p0534.

Abstract:
In real-world applications, it is important to develop high-performance special-purpose processors that execute intelligent processing with a tremendous amount of input data. A robot that catches a moving ball is a typical example of real-world applications. In acquisition of 3-D coordinates of a ball trajectory, ball extraction is the most time-consuming processing. This paper presents an optimal design of a ball extraction VLSI processor. To reduce a chip area under a time constraint, minimization of memory capacity is achieved based on an immediate output generation scheduling.
13

Li, Zheng Guo, Qiang Zhang, and Kai Zhang. "Design on Large Power Traction Battery Formation Testing System of Electric Vehicle". Advanced Materials Research 608-609 (December 2012): 1587–93. http://dx.doi.org/10.4028/www.scientific.net/amr.608-609.1587.

Abstract:
Considering the current state of design research on large-power traction battery formation testing systems for electric vehicles, this paper presents a system design method based on a management computer, a control computer, and an output computer in a 3-layer (superior, middle, inferior) architecture. In the hardware design, a high-performance microcomputer is adopted as the main controller, high-speed optocoupler isolation circuits are employed as the input and output interfaces, and pulse-width modulation is used as the control method. In the software design, a modular approach divides the software system into four functional parts: data calculation and processing, data display, data storage, and parameter setting and regulation. Finally, analysis of the system's experimental results indicates that this design method has good application prospects.
14

Vogt, W., S. L. Braun, F. Hanssmann, F. Liebl, G. Berchtold, H. Blaschke, M. Eckert, G. E. Hoffmann, and S. Klose. "Realistic modeling of clinical laboratory operation by computer simulation". Clinical Chemistry 40, no. 6 (June 1, 1994): 922–28. http://dx.doi.org/10.1093/clinchem/40.6.922.

Abstract:
Abstract An important objective of laboratory management is to adjust the laboratory's capability to the needs of patients' care as well as economy. The consequences of management may be changes in laboratory organization, equipment, or personnel planning. At present only one's individual experience can be used for making such decisions. We have investigated whether the techniques of operations research could be transferred to a clinical laboratory and whether an adequate simulation model of the laboratory could be realized. First we listed and documented the system design and the process flow for each single laboratory request. These input data were linked by the simulation model (programming language SIMSCRIPT II.5). The output data (turnaround times, utilization rates, and analysis of queue length) were validated by comparison with the current performance data obtained by tracking specimen flow. Congruence of the data was excellent (within +/- 4%). In planning experiments we could study the consequences of changes in order entry, staffing, and equipment on turnaround times, utilization, and queue lengths. We conclude that simulation can be a valuable tool for better management decisions.
15

Amreev, M., R. Safin, T. Pavlova, E. Temyrkanova, and Y. Garmashova. "AMPLIFIER DESIGN FOR MODELING THE TRANSMISSION OF A DIGITAL VIDEO SIGNAL OVER A DATA TRANSMISSION CHANNEL". Series of Geology and Technical Sciences 445, no. 1 (February 1, 2021): 39–45. http://dx.doi.org/10.32014/2021.2518-170x.6.

Abstract:
Video surveillance systems are used in security and law enforcement, on the territories of protected objects, in monitoring the movement of road vehicles, and in other areas. The main disadvantage of a video surveillance system is its susceptibility to weather (rain, fog, snowfall, etc.), which degrades the quality of the video system by reducing the signal level. Therefore, finding new ways to improve video signal quality is a priority area of signal processing. The main task of this work was to determine the main parameters, simulate the transmission line and amplifier, and select the schematic diagram of the transmitting and receiving path with the voltage and current ratings. Both the receiver and the cable video transmitter have means of adjusting to different transmission line lengths. The signal at the output of each receiver should be in the range from 0.9 to 1.1 V, and the spread of the total ohmic resistance of the video transmission line's wires at the receiver input should be no more than 2-3%. Based on these parameters, the equipment is configured for transmitting video over the channel. The magnitude of the mismatch is regulated by potentiometers, which allow smooth adjustment of the video transmission equipment [1]. As a rule, video is transmitted over the channel at a distance of 50 to 1500 m. If video must be transmitted over less than 50 m, additional resistances are connected in series at the receiver input so that the total line resistance is 30-50 Ohm [1].
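The series-padding rule quoted at the end of the abstract (bring the total line resistance into the 30-50 Ohm window for short lines) can be sketched as a small calculation; the per-metre wire resistance and the 40 Ohm target are assumptions chosen for illustration, not values from the paper:

```python
def padding_resistance(length_m, ohms_per_m=0.04, target_ohms=40.0):
    """Series resistance to add at the receiver input so that the total
    ohmic line resistance reaches the target (the paper specifies a
    30-50 Ohm window; 40 Ohm is used here as a midpoint assumption)."""
    line = length_m * ohms_per_m   # resistance contributed by the cable
    return max(0.0, target_ohms - line)
```

For long lines the cable itself exceeds the target and no padding is added, matching the abstract's note that padding is only needed below 50 m.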
16

Schoonderbeek, G. W., A. Szomoru, A. W. Gunst, L. Hiemstra, and J. Hargreaves. "UniBoard2, A Generic Scalable High-Performance Computing Platform for Radio Astronomy". Journal of Astronomical Instrumentation 08, no. 02 (May 29, 2019): 1950003. http://dx.doi.org/10.1142/s225117171950003x.

Abstract:
With the ever-increasing data rates in radio astronomy, a universal Field Programmable Gate Array (FPGA)-based hardware platform which can be used at different locations in the signal processing chain, like a beamformer, data router or correlator, would reduce development time significantly. In this paper, we present the design of such a platform, the UniBoard2. With UniBoard2, both large rack-based and single-board systems can be made. Standard Quad Small Form-factor Pluggable (QSFP) input and output (IO) interfaces on the front side make it easy to interface UniBoard2 to standard 40 Gigabit Ethernet (GbE) network equipment. Hardware design challenges, like transceiver links, power supplies, power dissipation and cooling are described. The paper concludes with some examples of systems (like beamformers and correlators) that can be built using the UniBoard2 hardware platform.
17

Assenmacher, Ingo, Dominik Rausch, and Torsten Kuhlen. "On Device Driver Architectures for Virtual Reality Toolkits". Presence: Teleoperators and Virtual Environments 19, no. 2 (April 1, 2010): 83–94. http://dx.doi.org/10.1162/pres.19.2.83.

Abstract:
One major goal for the development of virtual reality (VR) toolkits is to provide interfaces for novel input or output hardware to support multimodal interaction. The research community has produced several implementations that feature a large variety of device interfaces and abstractions. As a lesson learned from existing approaches, we sum up the requirements for the design of a driver layer that is the basis for a multimodal input and output system in this paper. We derive a general model for driver architectures based on these requirements. This model can be used for reasoning about different implementations of available architectures. As the flow of data through the system is of interest, we take a closer look at common patterns of data processing. Finally, we discuss a number of openly accessible driver architectures currently used for VR development.
18

Bai, Na, Liang Wang, Yaohua Xu, and Yi Wang. "Design of a Digital Baseband Processor for UHF Tags". Electronics 10, no. 17 (August 26, 2021): 2060. http://dx.doi.org/10.3390/electronics10172060.

Abstract:
In this paper, we present a new digital baseband processor for UHF tags. It is a low-power and low-voltage digital circuit and adopts the Chinese military standard protocol GJB7377.1. The processor receives data or commands from the RF front-end and carries out various functions, such as receiving and writing data to memory, reading and sending memory data to the RF front-end and killing tags. The processor consists of thirteen main sub-modules: TPP decoding, clock management, random number generator, power management, memory controller, cyclic redundancy check, FM0 encoding, input data processing, output data processing, command detection module, initialization module, state machine module and controller. We use ModelSim for the TPP decoding simulation and communication simulation between tag and reader, and the simulation results meet the design requirements. The processor can be applied to UHF tags and has been taped out using a TSMC 0.18 um CMOS process.
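FM0 encoding, one of the processor's sub-modules, follows two rules: the line level inverts at every symbol boundary, and a data-0 adds an extra mid-symbol inversion. A minimal behavioral sketch producing two half-bit levels per input bit (the representation is an illustrative choice, not the paper's RTL):

```python
def fm0_encode(bits, start_level=1):
    """FM0 baseband encoding: the level inverts at every symbol boundary,
    and a data-0 adds a second inversion at mid-symbol (data-1 does not).
    Returns two half-bit levels (0/1) per input bit."""
    out, level = [], start_level
    for b in bits:
        level ^= 1              # inversion at the symbol boundary
        out.append(level)
        if b == 0:
            level ^= 1          # extra mid-symbol inversion for data-0
        out.append(level)
    return out
```

Consequently a 1 encodes as two equal half-bit levels and a 0 as two unequal ones, which is what a decoder checks for.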
19

Irmansyah, Muhammad. "MULTIPLEKSER BERBASIS PROGRAMMABLE LOGIC DEVICE (PLD)". Elektron : Jurnal Ilmiah 1, no. 2 (December 18, 2009): 13–18. http://dx.doi.org/10.30630/eji.1.2.16.

Abstract:
In the mid-1990s, the electronics industry saw the evolution of personal computers, cellular telephones, and high-speed data communication equipment. To follow this development, electronics companies designed and produced new products. One of these innovations is Programmable Logic Device (PLD) technology, which changes the function of a digital logic IC through programming. A Programmable Logic Device (PLD) allows logic to be programmed on a single integrated circuit (IC) chip. Here, PLD technology is applied using the PAL22V10 IC to design a multiplexer with 4 inputs, 1 output, and 2 select lines.
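The 4-input, 1-output, 2-selector multiplexer programmed into the PAL22V10 reduces to a two-bit index into the data inputs. A behavioral sketch of the same function (names are illustrative; the actual design is expressed in PLD equations):

```python
def mux4(inputs, s1, s0):
    """4-input, 1-output multiplexer with two select lines.
    inputs: (d0, d1, d2, d3); s1 and s0 form the 2-bit channel select."""
    return inputs[(s1 << 1) | s0]
```

In the PLD itself this is a sum-of-products expression, one product term per select combination, fitting comfortably in one PAL22V10 macrocell.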
20

Kamel, Khaled, and Eman Kamel. "PLC Batch Process Control Design and Implementation Fundamentals". September 2020 2, no. 3 (June 9, 2020): 155–61. http://dx.doi.org/10.36548/jei.2020.3.001.

Abstract:
Batch process control is typically used for repeated chemical-reaction tasks. It starts with a measured filling of liquid material, followed by a controlled reaction and the discharge or transport of the processed quantities of material. The input material is contained in a reactor vessel and subjected to a sequence of processing activities over a recipe-defined duration of time. Batch systems are designed to measure, process, and discharge a varying volume of liquid from drums, tanks, reactors, or other large storage vessels using a programmable logic controller (PLC). These systems are common in the pharmaceutical, chemical packaging, beverage processing, personal care product, biotech manufacturing, dairy processing, soap manufacturing, and food processing industries. This paper briefly discusses the fundamental techniques used in specifying, designing, and implementing PLC batch process control [1, 2]. A simplified batch process is used to illustrate key issues in designing and implementing such systems. In addition to the structured PLC ladder design, particular attention is given to safety requirements, redundancy, interlocking, input data validation, and safe operation. The Allen Bradley (AB) SLC 500 PLC along with the LogixPro simulator are used to illustrate the concepts discussed in this paper. Two pumps bring in material during tank filling and a third pump drains the processed product. The three pumps are equipped with flow meters providing pulses proportional to the actual flow rate through the individual pipes. The tank material is heated to a predefined temperature for a set duration, followed by mixing for a set time before discharge. Batch control systems provide automated process control, typically using PLCs networked to HMIs and other computers for data storage, analysis, and assessment.
The overall system performs several tasks, including recipe development and download, production scheduling, batch management and execution, equipment performance monitoring, inventory, and production history and tracking. Flexible batch control systems are designed to accommodate smaller batches of products with greater requirement and recipe variation, efficiently and quickly. In addition to providing process consistency, continuous quality improvements in batch process control are attained through the automatic collection and analysis of reliable, accurate, real-time event performance data [3, 4].
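The fill-heat-mix-drain cycle described in this abstract can be sketched as a simple state machine. This is an illustrative stand-in, not the paper's ladder logic; the pulse resolution and recipe values are invented assumptions.

```python
# Hypothetical state-machine sketch of the batch cycle: FILL until the
# flow-meter pulse count reaches the recipe volume, then HEAT, MIX, DRAIN.
# PULSES_PER_LITER and all recipe values are illustrative assumptions.

PULSES_PER_LITER = 10  # assumed flow-meter resolution

def run_batch(target_liters, heat_seconds, mix_seconds):
    """Return the visited states and the metered volume in liters."""
    states = ["FILL"]
    pulses = 0
    while pulses < target_liters * PULSES_PER_LITER:
        pulses += 1            # one pulse from the inlet flow meters
    states.append("HEAT")      # hold at setpoint for heat_seconds
    states.append("MIX")       # run the mixer for mix_seconds
    states.append("DRAIN")     # third pump discharges the product
    return states, pulses / PULSES_PER_LITER

print(run_batch(50, heat_seconds=120, mix_seconds=60))
# (['FILL', 'HEAT', 'MIX', 'DRAIN'], 50.0)
```

In a real PLC implementation each state transition would additionally be guarded by the interlock and input-validation conditions the paper emphasizes.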
21

Sait, Sadiq M., e Ghalib A. Al-Hashim. "Novel Design of Collaborative Automation Platform Using Real-Time Data Distribution Service Middleware for an Optimum Process Control Environment". Journal of Circuits, Systems and Computers 25, n.º 06 (31 de março de 2016): 1650063. http://dx.doi.org/10.1142/s0218126616500638.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
Refining and petrochemical processing facilities utilize various process control applications to raise productivity and enhance plant operation. A client–server communication model is used for integrating these highly interacting applications across the multiple network layers utilized in distributed control systems. This paper presents an optimum process control environment by merging sequential and regulatory control, advanced regulatory control, multivariable control, unit-based process control, and plant-wide advanced process control into a single collaborative automation platform to ensure optimum operation of processing equipment for achieving maximum yield of all manufacturing facilities. The main control module is replaced by a standard real-time server. The input/output racks are physically and logically decoupled from the controller by converting them into distributed autonomous process interface systems. Real-time data distribution service middleware is used to provide seamless, cross-vendor interoperable communication among all process control applications and distributed autonomous process interface systems. A detailed performance analysis was conducted to evaluate the average communication latency and aggregate messaging capacity among process control applications and distributed autonomous process interface systems. The overall performance results confirm the viability of the new proposal as the basis for designing an optimal collaborative automation platform to handle all process control applications. It also does not impose any inherent limit on the aggregate data messaging capacity, making it suitable for scalable automation platforms.
22

Kim, Youngkuk, Siwoon Son e Yang-Sae Moon. "SPMgr: Dynamic workflow manager for sampling and filtering data streams over Apache Storm". International Journal of Distributed Sensor Networks 15, n.º 7 (julho de 2019): 155014771986220. http://dx.doi.org/10.1177/1550147719862206.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
In this article, we address dynamic workflow management for sampling and filtering data streams in Apache Storm. As many sensors generate data streams continuously, we often use sampling to choose some representative data or filtering to remove unnecessary data. Apache Storm is a real-time distributed processing platform suitable for handling large data streams. Storm, however, must stop an entire job to change the input data structure or processing algorithm, as it needs to modify, redistribute, and restart the programs. In addition, for effective data processing, we often use Storm with Kafka and databases, but it is difficult to use these platforms in an integrated manner. In this article, we derive the problems that arise when applying sampling and filtering algorithms to Storm and propose a dynamic workflow management model that solves them. First, we present the concept of a plan consisting of input, processing, and output modules of a data stream. Second, we propose Storm Plan Manager, which can operate Storm, Kafka, and a database as a single integrated system. Storm Plan Manager is an integrated workflow manager that dynamically controls the sampling and filtering of data streams through plans. Third, as a key feature, Storm Plan Manager provides a Web client interface to visually create, execute, and monitor plans. In this article, we show the usefulness of the proposed Storm Plan Manager by presenting its design, implementation, and experimental results in turn.
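The two stream operations managed by the plans described above can be sketched in plain Python generators. This is not SPMgr or Storm code, just an illustration of the input → processing → output module idea; the sampling policy (every k-th tuple) is one common choice.

```python
# Plain-Python sketch (not SPMgr itself) of the two stream operations the
# article manages as plan modules: sampling (keep every k-th tuple) and
# filtering (drop tuples failing a predicate).

def sample_every_kth(stream, k):
    """Systematic sampling: yield every k-th element of the stream."""
    for i, item in enumerate(stream):
        if i % k == 0:
            yield item

def filter_stream(stream, predicate):
    """Filtering: yield only elements satisfying the predicate."""
    return (item for item in stream if predicate(item))

readings = range(100)                     # input module (stand-in sensor data)
sampled = sample_every_kth(readings, 10)  # processing module: sampling
kept = list(filter_stream(sampled, lambda x: x >= 30))  # processing: filtering
print(kept)  # output module -> [30, 40, 50, 60, 70, 80, 90]
```

In the article's model, swapping one of these processing modules for another is exactly the kind of change a plan lets Storm perform without restarting the whole topology.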
23

FU, LI-MIN. "A TOOL THAT SUPPORTS CONSTRUCTING CHINESE EXPERT SYSTEMS". International Journal of Pattern Recognition and Artificial Intelligence 01, n.º 03n04 (dezembro de 1987): 393–404. http://dx.doi.org/10.1142/s0218001487000266.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
This paper describes EJAUNDICE, which is designed to be a general-purpose expert system building tool. Considerations behind a number of design decisions for purposes of generality are examined. EJAUNDICE provides several control schemes, including biphasical control with goal-directed reasoning, data-driven processing, and control blocks, and integrates rule-based, frame-based, and logic-based reasoning paradigms in its framework. The issues of knowledge representation and input/output in developing a Chinese expert system are also investigated.
24

Fischer, I. S., e R. N. Paul. "Kinematic Displacement Analysis of a Double-Cardan-Joint Driveline". Journal of Mechanical Design 113, n.º 3 (1 de setembro de 1991): 263–71. http://dx.doi.org/10.1115/1.2912778.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
The input-output displacement relations of two Cardan joints arranged in series on a driveline have been investigated in detail, including the effects of unequal joint angles, the phase angle between the two Cardan joints, and such manufacturing tolerance errors as non-right-angle moving link lengths and offset joint axes. A combined Newton-Raphson method and Davidon-Fletcher-Powell optimization algorithm using dual-number coordinate-transformation matrices was employed to perform the analysis. An experiment was conducted to validate the results of the analysis. The apparatus consisted of a double-Cardan-joint driveline whose rotations were measured by optical shaft encoders sampled by a computer data-acquisition system. The equipment was arranged so that the phase angle between the joints and the offset angles between the shafts at each of the two joints could be readily varied. The "relative phase angle," the difference between the phase angle of the two joints and the angle between the planes defined by the input and intermediate and the intermediate and output shafts, was found to be the significant factor. If the offset angles at both Cardan joints are equal, the double-Cardan-joint driveline functions as a constant-velocity coupling when the magnitude of the relative phase angle is zero. If the offset angles at the two Cardan joints are unequal, a condition prevailing in the important front-wheel-drive automobile steering-column application, then the fluctuation in output velocity for a constant input velocity is minimized although not eliminated at zero relative phase angle.
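The constant-velocity behavior described above can be checked with the classical ideal-joint displacement relation, tan(theta_out) = tan(theta_in)/cos(beta). This sketch uses only that textbook formula, not the paper's dual-number model with tolerance errors; function names and angle values are illustrative.

```python
import math

# Ideal single-Cardan relation, tan(theta_out) = tan(theta_in)/cos(beta),
# valid here for theta in (-pi/2, pi/2). In a properly phased double-Cardan
# driveline the second joint applies the inverse transform, so equal offset
# angles cancel exactly -- the constant-velocity case the paper identifies.

def joint_forward(theta, beta):
    return math.atan(math.tan(theta) / math.cos(beta))

def joint_inverse(theta, beta):
    return math.atan(math.tan(theta) * math.cos(beta))

beta = math.radians(20.0)                   # equal offset angle at both joints
theta_in = 0.3
theta_mid = joint_forward(theta_in, beta)   # input shaft -> intermediate shaft
theta_out = joint_inverse(theta_mid, beta)  # intermediate -> output shaft
print(abs(theta_out - theta_in) < 1e-12)    # True: constant-velocity coupling
```

With unequal offset angles at the two joints the cancellation is incomplete, which is exactly the residual velocity fluctuation the paper studies.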
25

BHATTACHARYA, ARUP K., e SYED S. HAIDER. "A VLSI IMPLEMENTATION OF THE INVERSE DISCRETE COSINE TRANSFORM". International Journal of Pattern Recognition and Artificial Intelligence 09, n.º 02 (abril de 1995): 303–14. http://dx.doi.org/10.1142/s0218001495000146.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
The Inverse Discrete Cosine Transform (IDCT) is an important function in HDTV, digital TV, and multimedia systems complying with the JPEG or MPEG standards for video compression. However, the IDCT is computationally intensive and therefore very expensive to implement in VLSI using direct matrix multiplication. By properly arranging the input coefficient sequence and the output data, the rows and columns of the transform matrix can be reordered to build modular regularity suitable for custom implementation in VLSI. This regularity can be exploited so that a single permutation can be used to derive each output column from the previous one, using a circular shift of an accumulator's input data multiplied in a special sequence. This technique, using only one 1-dimensional IDCT processor and seven constant multipliers, and its implementation are presented. Operation at 58 MHz under worst-case conditions is easily achieved, making the design applicable to a wide range of video and real-time image processing applications. Fabricated in 0.5 micron triple-metal CMOS technology, the IDCT contains 70,000 transistors occupying 7 mm² of silicon. The design has been used on an AT&T MPEG video decoder chip.
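For reference, the direct matrix-multiplication baseline that the paper's reordering scheme avoids in hardware can be written in a few lines of NumPy. This is the standard orthonormal 8-point DCT-II/IDCT pair, not the paper's permuted datapath.

```python
import numpy as np

# Direct matrix form of the 8-point IDCT (x = C^T X, with C the orthonormal
# DCT-II matrix) -- the computationally expensive baseline whose cost
# motivates the paper's coefficient/row reordering in VLSI.

N = 8
n = np.arange(N)
C = np.array([np.sqrt((1.0 if k == 0 else 2.0) / N) *
              np.cos((2 * n + 1) * k * np.pi / (2 * N)) for k in range(N)])

def idct_1d(X):
    return C.T @ X  # inverse transform: C is orthonormal, so C^{-1} = C^T

x = np.arange(N, dtype=float)
X = C @ x                          # forward DCT-II, for a round-trip check
print(np.allclose(idct_1d(X), x))  # True
```

A full 2D 8x8 IDCT, as used in JPEG/MPEG blocks, applies this 1D transform first to the columns and then to the rows.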
26

Grabarczyk, Robert, Krzysztof Urbaniec, Jacek Wernik e Marian Trafczynski. "Evaluation of the Two-Stage Fermentative Hydrogen Production from Sugar Beet Molasses". Energies 12, n.º 21 (26 de outubro de 2019): 4090. http://dx.doi.org/10.3390/en12214090.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
Fermentative hydrogen production from molasses—a renewable by-product of beet-sugar processing—was considered. Technical and economic evaluations were performed of a stand-alone production plant employing a two-step fermentation process (dark thermophilic fermentation and photofermentation) followed by adsorption-based upgrading of the produced hydrogen gas. Using a state-of-the-art knowledge base and a mathematical model composed of mass and energy balances as well as economic relationships, the process was simulated, equipment data were estimated, the hydrogen cost was calculated, and a sensitivity analysis was carried out. Due to high capital, operating, and labor costs, the hydrogen production cost was estimated at a rather high level of 32.68 EUR/kg, while the energy output in produced hydrogen was determined to be 68% more than the combined input of thermal and electric energy needed for plant operation. As the room for improving plant performance is limited, the prospects for cost competitiveness with large-scale hydrogen production from fossil sources remain unclear.
27

Al-Rawi, Muaayed F., e Muhanned F. Al-Rawi. "Novel approach in measurement instrument based on computer". International Review of Applied Sciences and Engineering 12, n.º 2 (29 de maio de 2021): 147–56. http://dx.doi.org/10.1556/1848.2021.00214.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
Most applications in engineering use a data acquisition device connected to a personal computer for data processing. Finding less costly, easily accessible, and reliable devices would make personal computer (PC)-based data acquisition systems simpler to build. A soundcard can serve as such a device, as it is standard in almost every PC and can process any voltage signal within its limits. This paper proposes a way to use the PC as an oscilloscope. A voltage signal is acquired via the soundcard's LINE IN port. The maximum and minimum input signal amplitudes for the soundcard are established to be +1 V and –1 V, respectively. Based on these findings, hardware circuitry is designed to clip any high-amplitude input signal to the range of ±1 V while allowing low-amplitude signals to pass through to the soundcard unclipped. MATLAB is then employed to acquire, process, and display the signal. The final output from MATLAB is compared with the original signal to determine the accuracy of the designed oscilloscope. Analysis of the results shows that the designed oscilloscope enables the soundcard to process input signals with a high level of accuracy. The final design yields a hardware cost at a fraction of that of an iPod while providing an elegant user interface, making it suitable for college students, hobbyists, and even professional engineers.
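The input-conditioning stage described above can be modeled in software as a simple clipper. The ±1 V limit comes from the abstract; the function name is invented for illustration, and the real design does this in hardware before the soundcard.

```python
# Software model of the paper's input-conditioning stage: samples beyond the
# soundcard's +/-1 V LINE IN limits are clipped, while low-amplitude samples
# pass through unchanged. The limit value is from the abstract; the function
# name is an illustrative assumption.

def clip_to_line_in(samples, limit=1.0):
    return [max(-limit, min(limit, s)) for s in samples]

print(clip_to_line_in([-2.5, -0.4, 0.0, 0.7, 3.1]))
# [-1.0, -0.4, 0.0, 0.7, 1.0]
```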
28

Kryukov, Oleg, Alexander Saushev, Olga Shergina e Artem Butsanets. "Electromagnetic compatibility of multifunctional automation systems for electrical equipment using the example of electric drives". E3S Web of Conferences 244 (2021): 09007. http://dx.doi.org/10.1051/e3sconf/202124409007.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
The main factors of information compatibility are proposed for automatic control systems of energy-saving electric drives that are built on multifunctional algorithms and integrated into an enterprise's single information space. The compatibility assessment method allows one to purposefully vary the shares of data storage and processing in the software, and to solve the problems of computer modeling and the choice of identification and adaptation algorithms in the general work cycle. Indicators of system operability are given: the coefficients of increasing the productivity of software modules, the essence of the subject area, the accuracy of assessing the input and output information, the probability of achieving the goal, and the transient-process times during information exchange, depending on information resistance, rigidity, memory volume, and the level of information-driving logic. It is shown that this will make it possible to avoid the influence of the reactive parameters of information circuits in control systems.
29

Sahardevi, Zaskia Wiedya, Oky Dwi Nurhayati e Kurniawan Teguh Martono. "Perancangan Dan Implementasi Teknologi Virtual Reality Modelling Language 3 Dimensi Pada Pengenalan Perangkat Keras Komputer Berbasis Website". Jurnal Teknologi dan Sistem Komputer 3, n.º 1 (30 de janeiro de 2015): 147–53. http://dx.doi.org/10.14710/jtsiskom.3.1.2015.147-153.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
The purpose of this research is to design and implement 3D Virtual Reality Modeling Language (VRML) technology for a website-based introduction to computer hardware. The design of the multimedia learning application for computer hardware proceeds through data collection, literature review, analysis, and system design, followed by implementation and testing of the system. This research uses the black-box testing method, in which testing verifies that the existing functions work as intended. The result of the design is a web-based interactive desktop application containing computer hardware material visualized in 3D VRML form: a variety of input, output, processing, and storage devices. In addition, an animated demo mode helps users become better acquainted with computer hardware.
30

Li, Jialing, Enoch Lu e I.-Tai Lu. "Joint MMSE Transceiver Designs and Performance Benchmark for CoMP Transmission and Reception". ISRN Communications and Networking 2012 (13 de junho de 2012): 1–21. http://dx.doi.org/10.5402/2012/682090.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
Coordinated Multipoint (CoMP) transmission and reception has been suggested as a key enabling technology of future cellular systems. To understand different CoMP configurations and to facilitate the configuration selection (and thus determine channel state information (CSI) feedback and data sharing requirements), performance benchmarks are needed to show what performance gains are possible. A unified approach is also needed to enable the cluster of cooperating cells to systematically take care of the transceiver design. To address these needs, the generalized iterative approach (GIA) is proposed as a unified approach for the minimum mean square error (MMSE) transceiver design of general multiple-transmitter multiple-receiver multiple-input-multiple-output (MIMO) systems subject to general linear power constraints. Moreover, the optimum decoder covariance optimization approach is proposed for downlink systems. Their optimality and relationships are established and shown numerically. Five CoMP configurations (Joint Processing-Equivalent Uplink, Joint Processing-Equivalent Downlink, Joint Processing-Equivalent Single User, Noncoordinated Multipoint, and Coordinated Beamforming) are studied and compared numerically. Physical insights, performance benchmarks, and some guidelines for CoMP configuration selection are presented.
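For intuition about the MMSE criterion used throughout this paper, the single-shot linear MMSE receiver for a fixed transmitter has a textbook closed form. This is standard background, not the paper's GIA, which instead iterates joint transmitter/receiver updates under linear power constraints.

```python
import numpy as np

# Textbook linear MMSE receiver for y = H x + n with unit-power symbols and
# noise variance sigma2: W = H^H (H H^H + sigma2 I)^{-1}. Shown for intuition
# only; the matrix values below are arbitrary examples.

def mmse_receiver(H, sigma2):
    m = H.shape[0]
    return H.conj().T @ np.linalg.inv(H @ H.conj().T + sigma2 * np.eye(m))

H = np.array([[1.0, 0.2], [0.1, 0.9]])
W = mmse_receiver(H, 1e-9)
print(np.allclose(W @ H, np.eye(2), atol=1e-6))  # True: W -> H^{-1} as noise -> 0
```

In the zero-forcing limit (noise variance → 0) the MMSE receiver inverts the channel; at realistic noise levels it trades residual interference against noise enhancement, which is what the joint designs in the paper optimize across cooperating cells.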
31

Lubko, Dmytro. "Design of a reference intelligence expert system for sheep breeding in national private farms". Ukrainian Journal of Educational Studies and Information Technology 5, n.º 3 (30 de setembro de 2017): 1–18. http://dx.doi.org/10.32919/uesit.2017.03.01.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
The paper deals with the design of a reference intelligence expert system for sheep breeding. The process of its design and its functional capabilities are described. The developed reference intelligence expert system makes it possible to display recommendations and tips on the computer screen to select a rational and effective sheep breeding technology, as well as to print the received data. The developed system has a two-tier structure, namely, dialog selection of input criteria (data), as well as a module for logical processing and output of reasonable technology recommendations (output data). The step-by-step method of designing the system is determined. The first step is to obtain a technical specification from the customer (farm, enterprise, farmer, etc.) for the development of a reference system. The second step is to determine criteria for the technology in keeping with the farm requirement specification according to the literature and sheep breeding requirements. The third step is to identify the most important factors influencing the process of sheep raising for each of these criteria. The fourth step is to define the main production rules for which the system will be programmed, namely, the module for logical knowledge processing for this technology (and these are the input parameters (factors) when designing the system). The fifth step is to determine the main output criteria (factors) that will be displayed after processing the input rules of the system based on logical deduction rules according to the appropriate sheep breeding technology. The sixth step is to design buttons for more convenient system usage, in addition, if necessary, or at the request of the customer (for example, a button for deleting previous information in windows, a button for storing recommendations in a separate text file, the exit button, etc.). The seventh step is to test the system by users and the customer. 
The eighth step is acceptance of the developed system by the customer and its correction, if necessary. The ninth step is to provide maintenance of the developed system. The block of input data of the developed intelligence expert system has the following elements: a) sheep handling (pasture-stall, stall-pasture); b) sheep condition (higher condition, average condition, lower than average condition); c) sheep breeding type (meat, milk, meat and milk, wool); d) sheep feeding (meat-and-wool sheep, rams of meat-and-goat sheep, sucking females to lactation); e) methods of breeding (purebred, crossing). The block of output factors, which contains the appropriate rational recommendations for sheep breeding technology, comprises: a) recommendations for sheep handling; b) recommendations for treatment of sheep diseases; c) recommendations for sheep feeding; d) sheep slaughter and storage of products; e) recommended sheep breeds; f) recommendations for sheep breeding; g) characteristics of products. The developed system is intended primarily for private national farmers or interested private householders. It is noted that the use of this system will allow private households to increase the production and quality of meat, wool, lambskin, and furs when breeding sheep. This will reduce the cost of the breeding technology, save the farmer's time, and reduce spending on medicines and feed, which, in turn, will increase profits and the profitability of farms.
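The two-tier structure described above — dialog-selected input criteria feeding a logical-processing module that emits recommendations — can be sketched as a tiny production-rule lookup. All rule keys and recommendation texts here are invented placeholders, not the system's actual knowledge base.

```python
# Toy production-rule sketch of the described two-tier structure: input
# criteria chosen in a dialog are matched against rules to produce output
# recommendations. Rule contents are illustrative placeholders only.

RULES = {
    ("meat", "pasture-stall"): "recommendations for meat breeds under pasture-stall handling",
    ("wool", "stall-pasture"): "recommendations for wool breeds under stall-pasture handling",
}

def recommend(breeding_type, handling):
    """Logical-processing module: map input criteria to output recommendations."""
    return RULES.get((breeding_type, handling), "no matching rule")

print(recommend("meat", "pasture-stall"))
```

A production system of this kind grows by adding rules, which matches the paper's fourth design step of defining the main production rules before programming the logic module.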
32

Hung, Li, Mao e Lee. "Design of a Chamfering Tool Diagnosis System Using Autoencoder Learning Method". Energies 12, n.º 19 (27 de setembro de 2019): 3708. http://dx.doi.org/10.3390/en12193708.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
In this paper, the autoencoder learning method is proposed for system diagnosis of chamfering tool equipment. The autoencoder uses an unsupervised learning architecture, and the training dataset requires only positive (normal) samples, which is quite suitable for industrial production lines. An abnormal tool can be diagnosed by comparing the output and input of the autoencoder neural network, and an adjustable threshold can effectively improve accuracy. This method can effectively adapt to the current environment when the data contain multiple signals. In the experimental setup, the main diagnostic signal is the motor current, which reflects the change in torque when the tool is abnormal. A four-step conversion is developed to process the current signal: (1) current-to-voltage conversion, (2) analog-to-digital conversion, (3) downsampling, and (4) discrete Fourier transform. The dataset is used to find the best autoencoder parameters by grid search. In training, the testing accuracy, true positive rate, and precision are 87.5%, 83.33%, and 90.91%, respectively. The best autoencoder model is evaluated by online testing, which means loading the diagnosis model in the production line and evaluating it there. It is shown that the proposed tool can effectively detect abnormal conditions. The online assessment accuracy, true positive rate, and precision are 75%, 90%, and 69.23% at the original threshold, respectively. After adjusting the threshold, the accuracy can reach 90%, and the true positive rate and precision reach 80% and 100%, respectively.
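The reconstruction-error thresholding idea at the heart of this abstract can be demonstrated with a linear stand-in for the autoencoder. The sketch below fits a rank-1 PCA "autoencoder" to normal samples only and flags samples whose reconstruction error exceeds an adjustable threshold; the real system uses a neural autoencoder on motor-current spectra, and all data and threshold values here are illustrative.

```python
import numpy as np

# Linear "autoencoder" (rank-1 PCA projection) fitted only to normal samples;
# a sample whose reconstruction error exceeds an adjustable threshold is
# flagged abnormal. Data, threshold, and model are illustrative stand-ins.

normal = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
pc = vt[0]  # one latent unit: shared encoder/decoder weights

def reconstruction_error(x):
    z = (x - mean) @ pc      # encode
    x_hat = mean + z * pc    # decode
    return float(np.mean((x - x_hat) ** 2))

threshold = 0.5  # adjustable decision threshold, as in the paper
print(reconstruction_error(np.array([5.0, 5.0])) < threshold)  # True (normal)
print(reconstruction_error(np.array([0.0, 4.0])) > threshold)  # True (abnormal)
```

Raising or lowering `threshold` trades true positive rate against precision, which is exactly the adjustment the paper uses to lift online accuracy from 75% to 90%.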
33

Tambunan, Frinto, Yudi Y e Muhammad Fauzi. "Design of Artificial Neural Networks to Recognize Fingerprint Patterns". IJISTECH (International Journal of Information System & Technology) 3, n.º 1 (30 de novembro de 2019): 58. http://dx.doi.org/10.30645/ijistech.v3i1.34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
Image and pattern recognition is a branch of computer science that can support the processing of fingerprint patterns, especially for banking, police, and other institutions that depend on the use of fingerprints. Fingerprint pattern recognition proceeds through several stages: scanning, then conversion of the resulting digital fingerprint image to numeric values through thresholding, image partitioning, and representation of input values. The training process is carried out with two treatments, the first varying the learning rate and the second varying the number of hidden units; the best training is obtained with a learning rate of 0.3 and 10 hidden units, producing a short training time and relatively small error. Fingerprint pattern recognition is tested in two trials, based on 1 training pattern and on 5 training patterns. The results show that the system's ability to recognize patterns grows as the number of training patterns increases: with 1 training pattern the system recognizes 50% of the test patterns, while with 5 training patterns it recognizes 70%.
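The thresholding stage mentioned above — converting grayscale fingerprint pixels to binary network inputs — can be sketched in one line. The cutoff of 128 is an assumed example value, not taken from the paper.

```python
# Sketch of the thresholding step: grayscale pixels (0-255) become binary
# input values for the network. The cutoff of 128 is an assumed example.

def threshold_pixels(pixels, cutoff=128):
    return [1 if p >= cutoff else 0 for p in pixels]

print(threshold_pixels([10, 200, 128, 90, 255]))  # [0, 1, 1, 0, 1]
```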
34

Boier-Martin, Ioana, e Holly Rushmeier. "Reverse Engineering Methods for Digital Restoration Applications". Journal of Computing and Information Science in Engineering 6, n.º 4 (30 de maio de 2006): 364–71. http://dx.doi.org/10.1115/1.2356497.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
In this paper we discuss the challenges of processing and converting 3D scanned data to representations suitable for interactive manipulation in the context of virtual restoration applications. We present a constrained parametrization approach that allows us to represent 3D scanned models as parametric surfaces defined over polyhedral domains. A combination of normal- and spatial-based clustering techniques is used to generate a partition of the model into regions suitable for parametrization. Constraints can be optionally imposed to enforce a strict correspondence between input and output features. We consider two types of virtual restoration methods: (a) a paint restoration method that takes advantage of the normal-based coarse partition to identify large regions of reduced metric distortion suitable for texture mapping and (b) a shape restoration approach that relies on a refined partition used to convert the input model to a multiresolution subdivision representation suitable for intuitive interactive manipulation during digital studies of historical artifacts.
35

Miccinesi, Lapo, Tommaso Consumi, Alessandra Beni e Massimiliano Pieraccini. "W-band MIMO GB-SAR for Bridge Testing/Monitoring". Electronics 10, n.º 18 (14 de setembro de 2021): 2261. http://dx.doi.org/10.3390/electronics10182261.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
Interferometric radars are widely used for static and dynamic monitoring of large structures such as bridges, culverts, wind turbine towers, chimneys, masonry towers, stay cables, buildings, and monuments. Most of these radars operate in Ku-band (17 GHz). Nevertheless, a higher operative frequency could allow the design of smaller, lighter, and faster equipment. In this paper, a fast MIMO-GBSAR (Multiple-Input Multiple-Output Ground-Based Synthetic Aperture Radar) operating in W-band (77 GHz) has been proposed. The radar can complete a scan in less than 8 s. Furthermore, as its overall dimension is smaller than 230 mm, it can be easily fixed to the head of a camera tripod, which makes its deployment in the field very easy, even by a single operator. The performance of this radar was tested in a controlled environment and in a realistic case study.
36

Lu, Anni, Xiaochen Peng, Yandong Luo, Shanshi Huang e Shimeng Yu. "A Runtime Reconfigurable Design of Compute-in-Memory–Based Hardware Accelerator for Deep Learning Inference". ACM Transactions on Design Automation of Electronic Systems 26, n.º 6 (28 de junho de 2021): 1–18. http://dx.doi.org/10.1145/3460436.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
Compute-in-memory (CIM) is an attractive solution to address the “memory wall” challenges for the extensive computation in deep learning hardware accelerators. For custom ASIC design, a specific chip instance is restricted to a specific network during runtime. However, the development cycle of the hardware is normally far behind the emergence of new algorithms. Although some of the reported CIM-based architectures can adapt to different deep neural network (DNN) models, few details about the dataflow or control were disclosed to enable such an assumption. Instruction set architecture (ISA) could support high flexibility, but its complexity would be an obstacle to efficiency. In this article, a runtime reconfigurable design methodology for CIM-based accelerators is proposed to support a class of convolutional neural networks running on one prefabricated chip instance with ASIC-like efficiency. First, several design aspects are investigated: (1) the reconfigurable weight mapping method; (2) the input side of data transmission, mainly the weight reloading; and (3) the output side of data processing, mainly the reconfigurable accumulation. Then, a system-level performance benchmark is performed for the inference of different DNN models, such as VGG-8 on the CIFAR-10 dataset and AlexNet, GoogLeNet, ResNet-18, and DenseNet-121 on the ImageNet dataset, to measure the trade-offs between runtime reconfigurability, chip area, memory utilization, throughput, and energy efficiency.
37

Sobri, Ahmad. "PERANCANGAN APLIKASI DASHBOARD UNTUK MENGHITUNG ANGKA PENGANGGURAN YANG ADA PADA KECAMATAN PURWODADI BERBASIS WEB MOBILE". JUTIM (Jurnal Teknik Informatika Musirawas) 3, n.º 2 (3 de dezembro de 2018): 122–31. http://dx.doi.org/10.32767/jutim.v3i2.368.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
Abstract: The application of computer technology in every aspect of office work in Indonesia is now regarded as a necessity. The Purwodadi subdistrict has no application for processing unemployment data; records kept with Microsoft Word are not stored properly, there is no dashboard application for monitoring the subdistrict's unemployment rate, and manual searches of unemployment data yield low accuracy. The paper covers the currently running system, problem solving, the functional and non-functional requirements of the system, and input and output design. The system development method used is the waterfall method, the design uses the Unified Modeling Language (UML), and the system is tested with the black-box method, implemented in detail according to the design and the chosen tools and programming language. Keywords: mobile web, dashboard, technology
38

Puspasari, Trevi Jayanti, e Sumirah Sumirah. "APLIKASI METODE PSEUDO 3D SEISMIK DI CEKUNGAN JAWA BARAT UTARA MENGGUNAKAN K.R. BARUNA JAYA II". Oseanika 1, n.º 2 (14 de janeiro de 2021): 1–12. http://dx.doi.org/10.29122/oseanika.v1i2.4562.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
Resumo:
ABSTRACT [Application of Pseudo 3D Seismic in the North West Java Basin Using K.R. Baruna Jaya II] The demand to keep pace with the needs of the oil and gas industry motivates the development of techniques for assessing and applying 2D multichannel seismic acquisition. The industry's growing exploration needs are not matched by the budget for upgrading government-owned seismic equipment, including the equipment installed on K.R. Baruna Jaya II (BPPT). Applying the pseudo 3D method to survey design and data processing can be an effective and efficient solution. The pseudo 3D method is a data acquisition and processing technique that emphasizes acquisition design and innovative processing of 2D seismic data to produce a spatial (3D) volume from input data that are only 2D. This study applies the pseudo 3D seismic method in the North West Java Basin using K.R. Baruna Jaya II, in a survey carried out in December 2009. As a result, advanced 2D processing was performed and a 3D seismic volume profile was obtained; this profile can serve as a reference for decision making and planning the next survey. Keywords: pseudo 3D seismic, 2D multichannel seismic, K.R. Baruna Jaya II, North West Java Basin.
39

Dimov, Yury, e Dmitry Podashev. "Edge quality control system at finishing treatment by elastic polymer abrasive tools and its analysis. Part 1". Proceedings of Irkutsk State Technical University 24, n.º 5 (outubro de 2020): 977–92. http://dx.doi.org/10.21285/1814-3520-2020-5-977-992.

Resumo:
The purpose of the paper is to create an edge quality control system for finishing machining of parts by elastic polymer abrasive tools, which are effective for surface conditioning and rounding of part edges. The principle of the systems approach is used for a formalized description of the system, characterized by mutual interaction between the subsystems that form it. Consideration is given to the design of a control system for these finishing operations in order to ensure the required quality (in terms of surface roughness, size and geometric accuracy of edges) and optimal process performance over a large variety of tool designs, shapes of the processed surfaces and their mutual arrangements. The system includes input parameters, a state space (subroutines) and output data. The input parameters of the system include the equipment, workpiece and tool. The state space includes subsystems of mathematical models of tool-surface interaction and forces, on the basis of which the mathematical models of material removal, roughness formation, power consumption, tool wear and cutting zone temperature operate. Information from these subsystems is transmitted to the subsystem for optimizing finishing operation parameters. An algorithm for designing finishing technological operations is presented. It consists of entering source data into the system, analyzing the operation's functional capability, deciding on possible changes to the input data, forming the control action, designing the operation, organizing preparation for operation execution, checking the results of operation execution for compliance with the requirements of normative and technical documentation, and implementing the finishing operation in mass production. The developed system provides optimal tools and processing modes as a result of fulfilling its functions. When this information is implemented in the manufacturing of parts, the required quality indicators (surface roughness, size and geometric accuracy of edges) are achieved together with optimal process performance.
40

Purwanto e Aris Sunawar. "PENCEGAH KEBAKARAN AKIBAT PANAS PADA INSTALASI LISTRIK MENGGUNAKAN DETEKSI THERMAL CAMERA BERBASIS MICROPROSESOR". Journal of Electrical Vocational Education and Technology 2, n.º 1 (21 de março de 2020): 33–38. http://dx.doi.org/10.21009/jevet.0021.06.

Resumo:
This study aims to produce a heat-detection device for cables using the thermal camera method, whose output secures electrical equipment against fire hazards caused by excessive heat. The research used the laboratory experimental method of building a prototype: the device was first designed, then the prototype was built according to that design, and finally the prototype was tested. The prototype heat-detection system uses a thermal camera module controlled by an Arduino, with a CMOS camera as the image viewer and an MLX90614 thermal sensor as the temperature input; the Arduino output is fed into a computer system that combines the digital camera image with the thermal sensor data to obtain a temperature distribution map over the digital image. Each subsystem was tested individually before being assembled into the complete system, so that valid data were obtained on the capability of each subsystem. Based on the measurement and testing results, it can be concluded that the device was designed, built and tested successfully: it detects temperature differences and displays them on an image obtained by combining the camera output with the thermal sensor readings. The proposed system thus meets the research criteria, and the research hypothesis is accepted.
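The core decision logic the abstract describes, namely flagging cells of a thermal-sensor grid that exceed a safe temperature so they can be overlaid on the camera image, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the grid values and threshold are hypothetical:

```python
def hot_spots(thermal_grid, threshold_c):
    """Return (row, col, temp) for cells exceeding a temperature threshold;
    in the prototype these would be overlaid on the CMOS camera image."""
    return [(r, c, t)
            for r, row in enumerate(thermal_grid)
            for c, t in enumerate(row)
            if t > threshold_c]

# Hypothetical 3x3 thermal readings (deg C) from an MLX90614-style scan:
grid = [[28.0, 29.5, 31.0],
        [30.0, 72.5, 33.0],
        [29.0, 30.5, 28.5]]
print(hot_spots(grid, 60.0))  # [(1, 1, 72.5)]
```

A real implementation would also need to register the low-resolution thermal grid onto the higher-resolution camera frame before overlaying.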
41

Thrush, Lisa L., Greta L. Myers e Luther D. McMillen. "The User-Computer Interface in a Telecommunications Engineering System: Impacts of Automation". Proceedings of the Human Factors Society Annual Meeting 31, n.º 9 (setembro de 1987): 969–72. http://dx.doi.org/10.1177/154193128703100909.

Resumo:
A common scenario in both manufacturing and human-computer interaction is that of people linking independently-designed systems. People receive output from one system, modify it and then input it into the next system in the process. In computer systems, manual data entry tasks introduce the possibility of both human errors and delays. With increased automation and integration of computer systems, many of these human links can be eliminated. Removing the human link between independently-designed systems does not remove the user from the system network. Rather, it places the user in the role of exception processing, controlling, monitoring and responding to the overall network of systems. This new role brings with it the requirement for an expansion of user knowledge to include a complete understanding of the system network and skills for technical problem solving. It further requires that the system's design include appropriate means of notifying exception processors, a system administrator and management of system status and production volume. The appropriate combination of these components will make a substantial contribution toward the development of a successful integrated computer system.
42

Ali, Alaa Hussein, Saad Mutashar e Ali Mahdi Hammadi. "Dispersion compensation of optical systems utilizing fiber Bragg grating at 15 Gbits/s". Indonesian Journal of Electrical Engineering and Computer Science 22, n.º 1 (1 de abril de 2021): 369. http://dx.doi.org/10.11591/ijeecs.v22.i1.pp369-378.

Resumo:
Nowadays, information transmission technology is advancing very rapidly, and it has become necessary to achieve both higher transmission speeds and higher data rates. Developments in optical communication systems address these needs. However, despite all the features and advantages of optical communication systems, dispersion remains the main challenge. In this paper, to this end, fiber Bragg grating (FBG) is used to overcome the dispersion issue in a wavelength division multiplexing (WDM) transmission system. The WDM transmission system is simulated using the advanced tools of Optisystem 13. The simulation was run at a speed of 15 Gbits/s over a 50 km optical fiber length with different input design parameters such as input signal power, optical fiber length and attenuation coefficient. In addition, the output performance is discussed in terms of the quality factor (Q-factor) and eye diagram. Moreover, a comparison between the proposed design and previous related works is presented.
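The Q-factor and eye-diagram metrics used above relate to bit-error rate through a standard Gaussian-noise approximation. A minimal sketch of that relationship; the signal levels are illustrative values, not taken from the paper's simulations:

```python
import math

def q_factor(mu1, mu0, sigma1, sigma0):
    """Eye-diagram quality factor: separation of the '1' and '0' rails
    divided by the sum of their noise standard deviations."""
    return (mu1 - mu0) / (sigma1 + sigma0)

def ber_from_q(q):
    """Approximate bit-error rate for a Gaussian-noise channel."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# Hypothetical received levels in arbitrary units:
q = q_factor(mu1=1.0, mu0=0.1, sigma1=0.07, sigma0=0.05)
print(round(q, 2))           # 7.5
print(ber_from_q(q) < 1e-9)  # True: a Q above ~6 implies BER below 1e-9
```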
43

Foster, Marva, Catherine Albanese, Qiang Chen, Kristen A. Sethares, Stewart Evans, Lisa Soleymani Lehmann, Jacqueline Spencer e Jacob Joseph. "Heart Failure Dashboard Design and Validation to Improve Care of Veterans". Applied Clinical Informatics 11, n.º 01 (janeiro de 2020): 153–59. http://dx.doi.org/10.1055/s-0040-1701257.

Resumo:
Abstract Background Early electronic identification of patients at the highest risk for heart failure (HF) readmission presents a challenge. The data needed to identify HF patients reside in a variety of areas in the electronic medical record (EMR) and in different formats. Objective The purpose of this paper is to describe the development and data validation of an HF dashboard that monitors the overall metrics of outcomes and treatments of the veteran patient population with HF and enhances the use of guideline-directed pharmacologic therapies. Methods We constructed a dashboard that included several data points: care assessment need score; ejection fraction (EF); medication concordance; laboratory tests; history of HF; and specified comorbidities based on International Classification of Diseases (ICD), ninth and tenth revision codes. Data validation testing with user test scripts was used to ensure output accuracy of the dashboard. Nine providers and key senior management participated in data validation. Results A total of 43 medical records were reviewed and 66 HF dashboard data discrepancies were identified during development. Discrepancies identified included: generation of multiple EF values for a few patients, missing or incorrect ICD codes, laboratory omissions, incorrect medication issue dates, patients incorrectly noted as nonconcordant for medications, and incorrect dates of last cardiology appointments. Continuous integration and builds identified defects, an important process in the verification and validation of biomedical software. Data validation and technical limitations were some of the challenges encountered during dashboard development. Evaluations by testers and their focused feedback contributed to the lessons learned from these challenges. Conclusion Continuous refinement with input from multiple levels of stakeholders is crucial to the development of clinically useful dashboards.
Extraction of all relevant information from EMRs, including the use of natural language processing, is crucial to development of dashboards that will help improve care of individual patients and populations.
44

An, Dezhi, Shengcai Zhang, Jun Lu e Yan Li. "Efficient and Privacy-Preserving Outsourcing of 2D-DCT and 2D-IDCT". Wireless Communications and Mobile Computing 2020 (27 de julho de 2020): 1–9. http://dx.doi.org/10.1155/2020/8892838.

Resumo:
As a subset of discrete Fourier transform (DFT), discrete cosine transform (DCT), especially two-dimensional discrete cosine transform (2D-DCT), is an important mathematical tool for digital signal processing. However, the computational complexity of 2D-DCT is quite high, which makes it impossible to meet the requirements in some signal processing fields with large signal sizes. In addition, to optimize the 2D-DCT algorithm itself, seeking help from a cloud platform is considered to be an excellent alternative to dramatically speeding up 2D-DCT operations. Still, there are three key challenges in cloud computing outsourcing that need to be addressed, including protecting the privacy of input and output data, ensuring the correctness of the returned results, and ensuring adequate local cost savings. In this paper, we explore the design of a practical outsourcing protocol for 2D-DCT and 2D-IDCT, which well solves the above three challenges. Both theoretical analysis and simulation experiment results not only confirm the feasibility of the proposed protocol but also show its outstanding performance in efficiency.
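To see why outsourcing is attractive, note that the direct 2D DCT-II of an N×N block costs O(N^4) operations. A stdlib-only sketch of the orthonormal forward and inverse transforms; this illustrates the transform itself, not the paper's privacy-preserving outsourcing protocol:

```python
import math

def dct2(x):
    """Naive orthonormal 2D DCT-II of a square matrix (O(N^4));
    this direct cost is what motivates outsourcing or fast algorithms."""
    n = len(x)
    c = lambda k: math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for i in range(n):
                for j in range(n):
                    s += (x[i][j]
                          * math.cos(math.pi * (2 * i + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * j + 1) * v / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

def idct2(y):
    """Inverse transform (2D DCT-III), recovering the original block."""
    n = len(y)
    c = lambda k: math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for u in range(n):
                for v in range(n):
                    s += (c(u) * c(v) * y[u][v]
                          * math.cos(math.pi * (2 * i + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * j + 1) * v / (2 * n)))
            out[i][j] = s
    return out

x = [[1.0, 2.0], [3.0, 4.0]]
y = dct2(x)
print(round(y[0][0], 6))  # 5.0 -- the DC coefficient is the scaled mean
```

The inverse recovers the input to floating-point precision, which is the round-trip property an outsourcing client would verify.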
45

Pardjono, Sigit, e Asep Juarna. "Designing Smart Student Savings Tools Based on Arduino and Web". International Journal of Scientific Research and Management 8, n.º 03 (22 de março de 2020): 73–89. http://dx.doi.org/10.18535/ijsrm/v8i03.ft02.

Resumo:
In this modern era everything is done easily; electronic equipment and electronic transactions are increasingly sophisticated. This sophistication, however, can encourage wasteful spending, especially among students in schools. The purpose of this study is to create and design a smart savings machine, controlled via RFID and a sensor with a microcontroller, so that each individual can learn to manage finances economically. The machine is designed with a color sensor for depositing and withdrawing cash in only three banknote denominations: Rp 10,000, Rp 5,000 and Rp 2,000. The device is also able to calculate the financial balance that has been deposited. An Arduino Mega 2560 handles input and data processing, while the color sensor reads the basic colors red, green and blue on the surface of the lower left corner of the rupiah banknotes. The device works as intended: it reads predetermined RFID (Radio Frequency Identification) cards, its color sensor recognizes banknotes of Rp 2,000, Rp 5,000 or Rp 10,000, and it records the time of each deposit and accumulates the balance. Two DC motors are used to pull banknotes into the storage area. When the TCS230 color sensor detects a banknote, the note is occasionally unreadable due to external light, which makes the sensor's reading of the color frequency inaccurate.
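The denomination-recognition step, matching a sensed RGB triple against reference colors for each banknote, can be sketched as a nearest-neighbor lookup. The reference values below are hypothetical, not calibrated TCS230 readings:

```python
def classify_banknote(rgb, references):
    """Nearest-neighbor match of a sensed RGB triple against reference
    colors; mirrors the idea of mapping color-sensor readings to
    banknote denominations. Returns the closest reference's label."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(references, key=lambda name: dist2(rgb, references[name]))

# Hypothetical reference colors for the three denominations:
refs = {
    "Rp 2,000": (120, 120, 160),
    "Rp 5,000": (150, 110, 60),
    "Rp 10,000": (130, 80, 130),
}
print(classify_banknote((125, 118, 155), refs))  # Rp 2,000
```

Rejecting readings whose nearest distance exceeds a threshold would help with the external-light problem the abstract reports.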
46

Zhang, Cun Yi, Mu Qing Wu, Run Qian Chen e Guo Dong Ma. "Novel User Scheduling Schemes Based on Nonlinear Precoding for Multiuser MIMO Systems". Applied Mechanics and Materials 195-196 (agosto de 2012): 270–76. http://dx.doi.org/10.4028/www.scientific.net/amm.195-196.270.

Resumo:
This paper proposes a novel greedy user ordering algorithm for multiuser multiple-input multiple-output (MIMO) systems employing the block diagonal geometric mean decomposition method and Tomlinson-Harashima precoding (THP). Theoretical analysis and computer simulations illustrate its low computational complexity relative to optimal user ordering, which is achieved by brute-force search over all possible ordering permutations and therefore has extremely high computational complexity. Meanwhile, the bit error rate (BER) performance of the proposed algorithm is very close to that of optimal user ordering. Moreover, to mitigate the impact of users with smaller sub-channel gains on the whole system's BER performance, a joint pre-processing scheme combining adaptive data stream reduction and greedy user ordering (ADSR-GUO) is proposed. By choosing different values for the controlling factor, different system sum-rates and BER performance can be obtained to satisfy different quality-of-service (QoS) requirements.
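Greedy user ordering in general picks users one at a time by a gain criterion instead of searching all orderings, which is what keeps the complexity low. A toy sketch of that idea only; the paper's algorithm operates on block-diagonal GMD sub-channels and is not reproduced here:

```python
def greedy_user_order(subchannel_gains):
    """Toy greedy ordering: repeatedly pick the remaining user with the
    largest sub-channel gain. Compared with brute-force search over all
    n! permutations, this costs O(n^2) comparisons."""
    remaining = list(range(len(subchannel_gains)))
    order = []
    while remaining:
        best = max(remaining, key=lambda u: subchannel_gains[u])
        order.append(best)
        remaining.remove(best)
    return order

print(greedy_user_order([0.3, 1.2, 0.7, 2.1]))  # [3, 1, 2, 0]
```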
47

Mavridou, Efthimia, Konstantinos M. Giannoutakis, Dionysios Kehagias, Dimitrios Tzovaras e George Hassapis. "Automatic categorization of Web service elements". International Journal of Web Information Systems 14, n.º 2 (18 de junho de 2018): 233–58. http://dx.doi.org/10.1108/ijwis-08-2017-0059.

Resumo:
Purpose Semantic categorization of Web services comprises a fundamental requirement for enabling more efficient and accurate search and discovery of services in the semantic Web era. However, to efficiently deal with the growing presence of Web services, more automated mechanisms are required. This paper aims to introduce an automatic Web service categorization mechanism, by exploiting various techniques that aim to increase the overall prediction accuracy. Design/methodology/approach The paper proposes the use of Error Correcting Output Codes on top of a Logistic Model Trees-based classifier, in conjunction with a data pre-processing technique that reduces the original feature-space dimension without affecting data integrity. The proposed technique is generalized so as to adhere to all Web services with a description file. A semantic matchmaking scheme is also proposed for enabling the semantic annotation of the input and output parameters of each operation. Findings The proposed Web service categorization framework was tested with the OWLS-TC v4.0, as well as a synthetic data set with a systematic evaluation procedure that enables comparison with well-known approaches. After conducting exhaustive evaluation experiments, categorization efficiency in terms of accuracy, precision, recall and F-measure was measured. The presented Web service categorization framework outperformed the other benchmark techniques, which comprise different variations of it and also third-party implementations. Originality/value The proposed three-level categorization approach is a significant contribution to the Web service community, as it allows the automatic semantic categorization of all functional elements of Web services that are equipped with a service description file.
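Error-Correcting Output Codes decompose a multiclass problem into several binary problems and decode by minimum Hamming distance to each class's codeword, which is what gives the scheme its error tolerance. A minimal sketch of the decoding step with a hypothetical code matrix (the paper's classifiers are Logistic Model Trees, omitted here):

```python
def ecoc_decode(code_matrix, binary_outputs):
    """Error-Correcting Output Codes decoding: each class has a binary
    codeword (one bit per binary classifier); the predicted class is the
    codeword with minimum Hamming distance to the classifiers' outputs."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    distances = [hamming(row, binary_outputs) for row in code_matrix]
    return distances.index(min(distances))

# Hypothetical 3-class code matrix over 5 binary classifiers:
codes = [
    [0, 0, 1, 1, 0],  # class 0
    [1, 0, 0, 1, 1],  # class 1
    [0, 1, 1, 0, 1],  # class 2
]
# One classifier flipped a bit, but the nearest codeword still wins:
print(ecoc_decode(codes, [1, 0, 0, 1, 0]))  # 1
```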
48

Buslaev, Alexander, Vladimir I. Iglovikov, Eugene Khvedchenya, Alex Parinov, Mikhail Druzhinin e Alexandr A. Kalinin. "Albumentations: Fast and Flexible Image Augmentations". Information 11, n.º 2 (24 de fevereiro de 2020): 125. http://dx.doi.org/10.3390/info11020125.

Resumo:
Data augmentation is a commonly used technique for increasing both the size and the diversity of labeled training sets by leveraging input transformations that preserve corresponding output labels. In computer vision, image augmentations have become a common implicit regularization technique to combat overfitting in deep learning models and are ubiquitously used to improve performance. While most deep learning frameworks implement basic image transformations, the list is typically limited to some variations of flipping, rotating, scaling, and cropping. Moreover, image processing speed varies in existing image augmentation libraries. We present Albumentations, a fast and flexible open source library for image augmentation with many various image transform operations available that is also an easy-to-use wrapper around other augmentation libraries. We discuss the design principles that drove the implementation of Albumentations and give an overview of the key features and distinct capabilities. Finally, we provide examples of image augmentations for different computer vision tasks and demonstrate that Albumentations is faster than other commonly used image augmentation tools on most image transform operations.
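The basic transformations the abstract mentions, flipping, rotating and cropping, can be sketched on a plain nested-list "image". This illustrates the operations themselves, not the Albumentations API:

```python
def hflip(img):
    """Horizontal flip: reverse each row of the image grid."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate 90 degrees clockwise: reverse the rows, then transpose."""
    return [list(row) for row in zip(*img[::-1])]

def crop(img, top, left, h, w):
    """Crop an h-by-w window starting at (top, left)."""
    return [row[left:left + w] for row in img[top:top + h]]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(hflip(img))             # [[3, 2, 1], [6, 5, 4], [9, 8, 7]]
print(rotate90(img))          # [[7, 4, 1], [8, 5, 2], [9, 6, 3]]
print(crop(img, 0, 1, 2, 2))  # [[2, 3], [5, 6]]
```

The label-preserving property the abstract describes holds here trivially: a flipped or cropped cat image is still a cat.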
49

Janik, Witold. "The Implementation of the Material Loss Detection Method in the Author Computer Aided Overhaul Software". Advanced Materials Research 1036 (outubro de 2014): 652–55. http://dx.doi.org/10.4028/www.scientific.net/amr.1036.652.

Resumo:
To prepare a refurbishing process for an element with material loss resulting from wear or damage, the element can be 3D scanned. Its 3D model is obtained after triangulation and surface fitting. This model is compared with a CAD model representing the element's design. To enable such comparison, a proper software implementation of the MLDMCS (Material Loss Detection Method for Cylindrical Shape parts) is needed. The software is prepared in the Visual Studio environment (.NET) with the C# programming language (the algorithm code and the interface), SQL Server (database) and Siemens NX software (PARASOLID internal libraries). MLDMCS is implemented as one of the modules of the author's Computer Aided Overhaul (CAO) software. The implementation is a complex task, covering: interface preparation, data collection, data presentation and input data forms, proper configuration of input data sets, preparation of 3D graphics processing and analysis algorithms, and preparation and interpretation of output data forms. The result of the software algorithm should be an easy-to-interpret data set with the possibility of automatic report generation. Furthermore, the data should be collected in a database for future analysis of repeatable wear and damage progression. Data collected from an element during each technical state examination also inform future decisions about the critical number of refurbishing processes in the recirculation of a technical mean. Additionally, the collected data indicate the repeatability of wear or damage after various refurbishing technologies. Future systems will probably be built on such solutions, and MLDMCS will become one of the typical methods used in industry oriented to the overhaul of machine parts. A previously published article presents only the pure method and its foundations. Current solutions are mainly based on metrology, supported by CMMs (Coordinate Measuring Machines) and offline measuring-path generation systems. The accuracy of the proposed solution (3D scanning and transformation of point clouds to a solid model) is lower than that of typical measuring methods; however, the prospects for its application are promising. The achievable accuracy is sufficient to prepare a cladding process by turning the worn-out or damaged surface layer.
50

Benuriadi, Benuriadi, Osman Sianipar e Guardian Yoki Sanjaya. "SISTEM INFORMASI DALAM PELAYANAN LABORATORIUM". INDONESIAN JOURNAL OF CLINICAL PATHOLOGY AND MEDICAL LABORATORY 19, n.º 1 (14 de outubro de 2016): 56. http://dx.doi.org/10.24293/ijcpml.v19i1.391.

Resumo:
The development of information technology has transformed conventional hospital laboratory services from mostly paper-based into computerized systems. Being quicker and easier to access, computer-based information output is useful for improving the management of healthcare services. Laboratory services in public hospitals mostly use paper-based laboratory data processing, leading to problems with the accessibility, usability, clarity and completeness of information. This study aims to determine how to develop a computer-based laboratory information system to support laboratory management in the hospital, through in-depth and systematic assessment among relevant stakeholders. The study was conducted at Praya Public Hospital, Central Lombok District, Nusa Tenggara Barat. Five stages of the prototyping method were used for system development: planning, designing, system testing, pilot implementation and system evaluation. Data and information were obtained through in-depth interviews and questionnaires. During the planning phase, four groups of required information were identified: information for hospital management, for laboratory staff, for physicians and other health providers, and for patients. Following the needs assessment, a context diagram, Data Flow Diagram (DFD), database structure, Entity Relationship Diagram (ERD), and input and output designs were created. A prototype computer-based laboratory information system was developed according to this systematic analysis and design. Evaluation of users' perceptions demonstrated that the prototype could provide laboratory information that was easy to obtain, understandable, complete and useful for all groups of users. In conclusion, developing an information system that involved potential users in the hospital laboratory unit demonstrated its usefulness, and this encourages public hospitals to adopt computerized laboratory information systems.
