Journal articles on the topic "Warehouses Management Data processing"

To view other types of publications on this topic, follow the link: Warehouses Management Data processing.

Cite a source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Warehouses Management Data processing".

Next to every work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the online abstract, if these details are available in the item's metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Aufaure, Marie-Aude, Alfredo Cuzzocrea, Cécile Favre, Patrick Marcel, and Rokia Missaoui. "An Envisioned Approach for Modeling and Supporting User-Centric Query Activities on Data Warehouses." International Journal of Data Warehousing and Mining 9, no. 2 (April 2013): 89–109. http://dx.doi.org/10.4018/jdwm.2013040105.

Full text of the source
Abstract:
In this vision paper, the authors discuss models and techniques for integrating, processing and querying data, information and knowledge within data warehouses in a user-centric manner. The user-centric emphasis yields a number of clear advantages over classical data warehouse architectures, the most relevant of which are the following: (i) a unified and meaningful representation of multidimensional data and knowledge patterns throughout the data warehouse layers (i.e., loading, storage, metadata, etc.); (ii) advanced query mechanisms and guidance that are capable of extracting targeted information and knowledge by means of innovative information retrieval and data mining techniques. Following this main framework, the authors first outline the importance of knowledge representation and management in data warehouses, where knowledge is expressed by existing ontologies or patterns discovered from data. Then, the authors propose a user-centric architecture for OLAP query processing, which is the typical application interface to data warehouse systems. Finally, the authors propose insights towards cooperative query answering that makes use of knowledge management principles and exploits the peculiarities of data warehouses (e.g., multidimensionality, multi-resolution, and so forth).
APA, Harvard, Vancouver, ISO, and other styles
2

Mujiono, Mujiono, and Aina Musdholifah. "Pengembangan Data Warehouse Menggunakan Pendekatan Data-Driven untuk Membantu Pengelolaan SDM." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 10, no. 1 (January 31, 2016): 1. http://dx.doi.org/10.22146/ijccs.11184.

Abstract:
The basis of bureaucratic reform is the reform of human resources management. One supporting factor is the development of an employee database. Supporting the management of human resources requires, among other things, a data warehouse and business intelligence tools. The data warehouse is a concept of integrated, reliable data storage that provides support for all data analysis needs. In this study, a data warehouse was developed using the data-driven approach, with source data coming from SIMPEG, SAPK and electronic presence records. The data warehouse was designed using the nine-step methodology and Unified Modeling Language (UML) notation. Extract, transform, load (ETL) is performed using Pentaho Data Integration by applying transformation maps. Furthermore, to help human resource management, the system is built to perform online analytical processing (OLAP) and deliver web-based information. The study produced a BI application development framework with Model-View-Controller (MVC) architecture, with OLAP operations built using dynamic query generation, PivotTable, and HighChart to present information about PNS, CPNS, Retirement, Kenpa and Presence.
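The ETL workflow described in the abstract above (extract records from several personnel source systems, apply transformation maps, load into a warehouse table) can be sketched in a few lines. The source names, fields, and field map below are hypothetical stand-ins for illustration, not the paper's actual schema:

```python
# Minimal ETL sketch: extract rows from two hypothetical source systems,
# apply a field-renaming transformation map, and load them into one table.

def extract(*sources):
    """Yield raw records from each source system in turn."""
    for source in sources:
        yield from source

def transform(record, field_map):
    """Rename fields per the transformation map and strip string values."""
    return {field_map.get(k, k): (v.strip() if isinstance(v, str) else v)
            for k, v in record.items()}

def load(records, warehouse):
    """Append transformed records to the warehouse table."""
    warehouse.extend(records)
    return warehouse

# Hypothetical personnel sources (stand-ins for SIMPEG / attendance exports).
simpeg = [{"nip": "101", "nama": " Ana "}]
presence = [{"nip": "102", "nama": "Budi"}]
field_map = {"nip": "employee_id", "nama": "name"}

warehouse = load((transform(r, field_map) for r in extract(simpeg, presence)), [])
print(warehouse)  # two normalized rows keyed by employee_id / name
```

The generator pipeline keeps the extract and transform steps streaming, so no intermediate copy of the source data is materialized before the load step.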
3

Kartanova, A. Dzh, and T. I. Imanbekov. "OVERVIEW OF OPTIMIZATION METHODS FOR PRODUCTIVITY OF THE ETL PROCESS." Heralds of KSUCTA, №1, 2022, no. 1-2022 (March 14, 2022): 64–70. http://dx.doi.org/10.35803/1694-5298.2022.1.64-70.

Abstract:
ETL processes, the processes of extracting, transforming and loading data, are one of the important aspects of managing and accelerating operations in databases and data warehouses. Without optimizing these processes, implementing a data warehouse project is costly, complex, and time-consuming. This paper provides an overview and study of methods for optimizing the performance of ETL processes, and shows that the most important indicator of an ETL system's operation is the time and speed of data processing. The generalized structure of ETL process flows is considered, an architecture for ETL process optimization is proposed, and the main methods of parallel data processing in ETL systems, which can improve performance, are presented. The performance of ETL processes for data warehouses, one of the most relevant problems today, is considered in detail.
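One family of parallelization methods such surveys cover, partitioning the input so the transform step runs over chunks concurrently, can be illustrated with a minimal sketch. The transform here is a hypothetical placeholder, and a thread pool stands in for whatever execution engine a real ETL system would use:

```python
# Partition-parallel ETL sketch: split input rows into chunks and run the
# transform step over the chunks concurrently, then merge the partitions.
from concurrent.futures import ThreadPoolExecutor

def transform_chunk(chunk):
    # Hypothetical transform: uppercase a code field and drop empty rows.
    return [{**row, "code": row["code"].upper()} for row in chunk if row["code"]]

def parallel_etl(rows, workers=4):
    chunks = [rows[i::workers] for i in range(workers)]  # round-robin partition
    with ThreadPoolExecutor(max_workers=workers) as pool:
        transformed = list(pool.map(transform_chunk, chunks))
    return [row for part in transformed for row in part]  # merge partitions

rows = [{"code": c} for c in ["a", "b", "", "c", "d"]]
result = parallel_etl(rows)
print(sorted(r["code"] for r in result))  # ['A', 'B', 'C', 'D']
```

Because the transform is applied independently per partition, the merge order depends on the partitioning scheme; a real pipeline would sort or key the output if ordering matters downstream.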
4

Kartanova, A. Dzh, and T. I. Imanbekov. "OVERVIEW OF OPTIMIZATION METHODS FOR PRODUCTIVITY OF THE ETL PROCES." Heralds of KSUCTA, №4, 2021, no. 4-2021 (December 27, 2021): 556–63. http://dx.doi.org/10.35803/1694-5298.2021.4.556-563.

Abstract:
ETL processes, the processes of extracting, transforming and loading data, are one of the important aspects of managing and accelerating operations in databases and data warehouses. Without optimizing these processes, implementing a data warehouse project is costly, complex, and time-consuming. This paper provides an overview and study of methods for optimizing the performance of ETL processes, and shows that the most important indicator of an ETL system's operation is the time and speed of data processing. The generalized structure of ETL process flows is considered, an architecture for ETL process optimization is proposed, and the main methods of parallel data processing in ETL systems, which can improve performance, are presented. The performance of ETL processes for data warehouses, one of the most relevant problems today, is considered in detail.
5

Kihel, Yousra. "Digital Transition Methodology of a Warehouse in the Concept of Sustainable Development with an Industrial Case Study." Sustainability 14, no. 22 (November 17, 2022): 15282. http://dx.doi.org/10.3390/su142215282.

Abstract:
Logistics is one of the sectors that is evolving in parallel with Industry 4.0, which refers to the integration of new technologies, information, and agents, with the common goal of improving the efficiency and responsiveness of a logistics management system. The warehouse is an essential link in logistics management, a factor of competitiveness, and a link between the partners of the entire logistics chain. It has become essential to manage warehouses effectively and to allocate their resources efficiently. The digitalization of warehouses is currently one of the research topics of Logistics 4.0. This work presents a methodology of the digital transition of warehouse management, which consists of four main steps: the diagnosis of a warehouse to identify the different processes, the degree of involvement of the employees, a calculation of the degree of maturity to identify the new technology and means of data transfer, and the associated software for the collection of information and the methods of data processing. This digital transition methodology was applied to an industrial company. The results obtained allowed for the improvement of all the indicators measuring the performance of the warehouse on economic, social, and environmental levels.
6

Ribeiro de Almeida, Damião, Cláudio de Souza Baptista, Fabio Gomes de Andrade, and Amilcar Soares. "A Survey on Big Data for Trajectory Analytics." ISPRS International Journal of Geo-Information 9, no. 2 (February 1, 2020): 88. http://dx.doi.org/10.3390/ijgi9020088.

Abstract:
Trajectory data allow the study of the behavior of moving objects, from humans to animals. Wireless communication, mobile devices, and technologies such as the Global Positioning System (GPS) have contributed to the growth of the trajectory research field. With the considerable growth in the volume of trajectory data, storing such data in Spatial Database Management Systems (SDBMS) has become challenging. Hence, Spatial Big Data emerges as a data management technology for indexing, storing, and retrieving large volumes of spatio-temporal data. A Data Warehouse (DW) is one of the premier Big Data analysis and complex query processing infrastructures. Trajectory Data Warehouses (TDW) emerge as DWs dedicated to trajectory data analysis. A list of and discussion on problems addressed with TDWs, along with directions for future work in this field, are the primary goals of this survey. This article collected the state of the art on Big Data trajectory analytics. Understanding how research on trajectory data is being conducted, what main techniques have been used, and how they can be embedded in an Online Analytical Processing (OLAP) architecture can enhance the efficiency and development of decision-making systems that deal with trajectory data.
7

Rao, M. Venkata Krishna, Ch Suresh, K. Kamakshaiah, and M. Ravikanth. "Prototype Analysis for Business Intelligence Utilization in Data Mining Analysis." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 7 (July 29, 2017): 30. http://dx.doi.org/10.23956/ijarcsse.v7i7.93.

Abstract:
The tremendous increase in the availability of more disparate data sources than ever before has made it difficult to produce frequent utility reports across multiple transaction systems, in addition to integrating large volumes of historical data. This is a central concern in data exploration over high-volume transactional systems with real-time data processing. The problem mainly occurs in data warehouses and other data storage systems used in Business Intelligence (BI) for knowledge management and business resource planning. In this setting, BI involves the software construction of data warehouse query processing for report generation in high-utility data mining over transactional data systems. The growth of huge volumes of data in the real world is posing challenges to the research and business communities for effective data analysis and prediction. In this paper, we analyze different data mining techniques and methods for Business Intelligence in the analysis of transactional databases. To that end, we discuss the key issues by performing an in-depth analysis of business data, including database applications in transaction data source system analysis. We also discuss different integrated techniques for data analysis in business operational processes to provide feasible solutions in business intelligence.
8

Husin, Albert Eddy, and Eko Arif Budianto. "Influential factors in the application of the Lean Six Sigma and time-cost trade-off method in the construction of the ammunition warehouse." SINERGI 26, no. 1 (February 1, 2022): 81. http://dx.doi.org/10.22441/sinergi.2022.1.011.

Abstract:
Construction projects have developed rapidly, among them the construction of warehouses. The warehouse discussed in this study is an ammunition warehouse. The construction of the ammunition warehouse has a deadline in accordance with the contract agreed between the owner, the contractor and the consultant. In the implementation in the field, however, there was a delay in the work on the concrete structure. This study aims to identify the dominant factors causing delays in the ammunition warehouse project by applying the Lean Six Sigma method and the time-cost trade-off in solving the problem. Data processing used statistical analysis in SPSS (Statistical Package for the Social Sciences), applied to data obtained from questionnaires filled out by experts. From this processing, the ten highest-ranked factors were obtained: 1. inadequate planning and scheduling, 2. implementation of work plans, 3. delay in drawing up and approval of drawings, 4. cost reduction, 5. the relationship between management and labor, 6. relationships within the design team, 7. lack of skilled manpower, 8. flexibility, 9. errors during construction, and 10. inaccurate prediction of craftsman production levels. This research is useful for readers and can be developed further.
9

Datta, Anindya, and Helen Thomas. "The cube data model: a conceptual model and algebra for on-line analytical processing in data warehouses." Decision Support Systems 27, no. 3 (December 1999): 289–301. http://dx.doi.org/10.1016/s0167-9236(99)00052-4.

10

Toews, M. D., F. H. Arthur, and J. F. Campbell. "Monitoring Tribolium castaneum (Herbst) in pilot-scale warehouses treated with β-cyfluthrin: are residual insecticides and trapping compatible?" Bulletin of Entomological Research 99, no. 2 (October 24, 2008): 121–29. http://dx.doi.org/10.1017/s0007485308006172.

Abstract:
Integrated pest management strategies for cereal processing facilities often include both pheromone-baited pitfall traps and crack and crevice applications of a residual insecticide such as the pyrethroid cyfluthrin. In replicated pilot-scale warehouses, a 15-week-long experiment was conducted comparing population trends suggested by insect captures in pheromone-baited traps to direct estimates obtained by sampling the food patches in untreated and cyfluthrin-treated warehouses. Warehouses were treated, provisioned with food patches and then infested with all life stages of Tribolium castaneum (Herbst). Food patches, both those initially infested and additional uninfested, were surrounded by cyfluthrin bands to evaluate if insects would cross the bands. Results show that insect captures correlated with population trends determined by direct product samples in the untreated warehouses, but not the cyfluthrin-treated warehouses. However, dead insects recovered from the floor correlated with the insect densities observed with direct samples in the cyfluthrin-treated warehouses. Initially, uninfested food patches were exploited immediately and after six weeks harbored similar infestation densities to the initially infested food patches. These data show that pest management professionals relying on insect captures in pheromone-baited traps in cyfluthrin-treated structures could be deceived into believing that a residual insecticide application was suppressing population growth, when the population was actually increasing at the same rate as an untreated population.
11

Edara, Pavan, and Mosha Pasumansky. "Big metadata." Proceedings of the VLDB Endowment 14, no. 12 (July 2021): 3083–95. http://dx.doi.org/10.14778/3476311.3476385.

Abstract:
The rapid emergence of cloud data warehouses like Google BigQuery has redefined the landscape of data analytics. With the growth of data volumes, such systems need to scale to hundreds of EiB of data in the near future. This growth is accompanied by an increase in the number of objects stored and the amount of metadata such systems must manage. Traditionally, Big Data systems have tried to reduce the amount of metadata in order to scale the system, often compromising query performance. In Google BigQuery, we built a metadata management system that demonstrates that massive scale can be achieved without such tradeoffs. We recognized the benefits that fine grained metadata provides for query processing and we built a metadata system to manage it effectively. We use the same distributed query processing and data management techniques that we use for managing data to handle Big metadata. Today, BigQuery uses these techniques to support queries over billions of objects and their metadata.
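The benefit of fine-grained metadata for query processing can be illustrated with a generic block-pruning sketch: per-block min/max statistics let a scan skip storage blocks that cannot match a predicate. This is a common warehouse technique, not BigQuery's actual implementation:

```python
# Block pruning via fine-grained metadata: keep (min, max) per storage block
# and scan only blocks whose value range can contain the predicate value.

def build_metadata(blocks):
    """One (min, max) entry per block of values."""
    return [(min(b), max(b)) for b in blocks]

def pruned_scan(blocks, metadata, value):
    """Return matching values and the number of blocks actually scanned."""
    hits, scanned = [], 0
    for block, (lo, hi) in zip(blocks, metadata):
        if lo <= value <= hi:          # metadata says the block may match
            scanned += 1
            hits.extend(v for v in block if v == value)
    return hits, scanned

blocks = [[1, 3, 5], [10, 12, 14], [20, 21, 22]]
meta = build_metadata(blocks)
hits, scanned = pruned_scan(blocks, meta, 12)
print(hits, scanned)  # [12] 1  (two of the three blocks were skipped)
```

The more blocks the metadata describes, the more of the scan can be skipped, which is why keeping metadata fine-grained pays off as long as the metadata itself stays cheap to manage.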
12

Maiah, Lax, A. Govardhan, and C. Sunil Kumar. "A FRAMEWORK FOR SPATIO-TEMPORAL DATA WAREHOUSE." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 4, no. 1 (February 1, 2013): 146–50. http://dx.doi.org/10.24297/ijct.v4i1c.3114.

Abstract:
A Data Warehouse (DW) is a topic-oriented, integrated, static dataset used to support decision-making. Driven by the demands of mass spatio-temporal data management and application, the Spatio-Temporal Data Warehouse (STDW) was put forward, and many researchers all over the world have focused their energy on it. Although research on the STDW is going into depth, there are still many key difficulties to be solved, such as the design principles, system framework, spatio-temporal data model (STDM), spatio-temporal data processing (STDP), spatial data mining (SDM), and so on. In this paper, the concept of the STDW is discussed and the organization model of spatio-temporal data is analyzed. On this basis, a framework of the STDW composed of a data layer, a management layer and an application layer is presented. The functions of the STDW should include data analysis besides data processing and data storage. When users request a certain kind of data service, the STDW identifies the right data via the metadata management system, then starts the data processing tools to form a data product that serves data mining and OLAP. A variety of distributed databases (DDBs) make up the data sources of the STDW, including Digital Elevation Model (DEM), Digital Raster Graphic (DRG), Digital Line Graph (DLG), Digital Orthophoto Map (DOM), place name and other existing databases. The management layer implements heterogeneous data processing, metadata management and spatio-temporal data storage. The application layer provides a data product service, multidimensional data cubes, data mining tools and on-line analytical processing.
13

Daneshpour, Negin, and Ahmad Abdollahzadeh Barfourosh. "Dynamic View Management System for Query Prediction to View Materialization." International Journal of Data Warehousing and Mining 7, no. 2 (April 2011): 67–96. http://dx.doi.org/10.4018/jdwm.2011040104.

Abstract:
On-Line Analytical Processing (OLAP) systems based on data warehouses are the main systems for managerial decision making and must have a quick response time. Several algorithms have been presented to select the proper set of data and elicit suitable structured environments to handle the queries submitted to OLAP systems; these are called view selection algorithms for materialization. As users' requirements may change at run time, view materialization must be handled dynamically. In this work, the authors propose and operate a dynamic view management system to select and materialize views with a new and improved architecture, which predicts incoming queries through association rule mining and three probabilistic reasoning approaches: conditional probability, Bayes' rule, and Naïve Bayes' rule. The proposed system is compared with the DynaMat system and a Hybrid system through two standard measures. Experimental results show that the proposed dynamic view selection system improves these measures, outperforming DynaMat and Hybrid for each type of query and each sequence of incoming queries.
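The query-prediction step can be approximated with a toy conditional-probability model over a query log: estimate which query type is most likely to follow the current one from observed transitions, and materialize views for that successor. This is an illustrative stand-in under assumed query names, not the paper's actual algorithm:

```python
# Conditional-probability predictor for the next OLAP query type,
# estimated from a historical sequence of submitted queries.
from collections import Counter, defaultdict

def fit(history):
    """Count transitions (current query -> next query) in the log."""
    transitions = defaultdict(Counter)
    for current, nxt in zip(history, history[1:]):
        transitions[current][nxt] += 1
    return transitions

def predict(transitions, current):
    """Most probable next query type, or None if `current` is unseen."""
    counts = transitions.get(current)
    return counts.most_common(1)[0][0] if counts else None

# Hypothetical query log: names are assumptions for the sketch.
history = ["sales_by_region", "sales_by_month", "sales_by_region",
           "sales_by_month", "top_products"]
model = fit(history)
print(predict(model, "sales_by_region"))  # sales_by_month
```

A view manager built on this would pre-materialize the view backing the predicted query before it arrives; the paper's richer variants (Bayes' rule, Naïve Bayes) refine the same transition-probability idea.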
14

Iskandar, Ade Rahmat, Apri Junaidi, and Asep Herman. "Extract, Transform, Load sebagai upaya Pembangunan Data Warehouse." Journal of Informatics and Communication Technology (JICT) 1, no. 1 (July 26, 2019): 25–35. http://dx.doi.org/10.52661/j_ict.v1i1.21.

Abstract:
This paper provides a general overview of the Extract, Transform, Load (ETL) process used to produce input data for the multidimensional modeling of a data mart and data warehouse. The article implements a database from Online Transaction Processing (OLTP) into an Online Analytical Processing (OLAP) database. The research uses the open-source classicmodels database from MySQL. The method applied in this research is to carry out the Extract, Transform and Load (ETL) process on the classicmodels data, performing all three ETL steps from the OLTP database into the OLAP database. The output of this research is an order fact table containing data from all the dimension tables created for the classicmodels data, built using the Pentaho Data Integration (Kettle) software and the MySQL database management system.
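The OLTP-to-OLAP step the abstract describes (building dimension tables and an order fact table) can be sketched as follows; the classicmodels-style field names here are illustrative assumptions, not the paper's actual schema:

```python
# Star-schema sketch: assign surrogate keys to a dimension and build a
# fact table that references the dimension by surrogate key.

def build_dimension(rows, natural_key):
    """Map each distinct natural key to a surrogate integer key."""
    dim, next_key = {}, 1
    for row in rows:
        nk = row[natural_key]
        if nk not in dim:
            dim[nk] = next_key
            next_key += 1
    return dim

def build_fact(orders, customer_dim):
    """Fact row = surrogate foreign key + an additive measure."""
    return [{"customer_key": customer_dim[o["customer"]],
             "amount": o["qty"] * o["price"]} for o in orders]

orders = [  # hypothetical OLTP order lines
    {"customer": "Atelier", "qty": 2, "price": 10.0},
    {"customer": "Signal", "qty": 1, "price": 30.0},
    {"customer": "Atelier", "qty": 1, "price": 5.0},
]
customer_dim = build_dimension(orders, "customer")
fact_orders = build_fact(orders, customer_dim)
print(fact_orders)
```

In a real ETL run (e.g., in Pentaho Data Integration) the same two steps appear as a dimension lookup/update followed by a fact-table load; the surrogate key decouples the fact table from changes in the source's natural keys.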
15

Bhardwaj, Vinayak, and Rincy Jacob. "Data Warehousing and OLAP Technology." Journal of Advance Research in Computer Science & Engineering (ISSN: 2456-3552) 1, no. 3 (March 31, 2014): 05–11. http://dx.doi.org/10.53555/nncse.v1i3.521.

Abstract:
Data warehousing and Online Analytical Processing (OLAP) are essential elements of decision support, which has increasingly become a focus of the database industry. A data warehouse provides an effective way to analyze mass data and helps with decision making. Many commercial products and services are now available, and all of the principal database management system vendors now have offerings in these areas. The paper introduces the data warehouse and the online analytical process with an emphasis on their new requirements.
16

McGrath, H., E. Stefanakis, and M. Nastev. "Development of a Data Warehouse for Riverine and Coastal Flood Risk Management." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-2 (November 11, 2014): 41–48. http://dx.doi.org/10.5194/isprsarchives-xl-2-41-2014.

Abstract:
In New Brunswick, flooding typically occurs during the spring freshet, though in recent years midwinter thaws have led to flooding in January or February. Municipalities are therefore facing a pressing need to perform risk assessments in order to identify communities at risk of flooding. In addition to the identification of communities at risk, quantitative measures of potential structural damage and societal losses are necessary for these identified communities. Furthermore, tools which allow for the analysis and processing of possible mitigation plans are needed. Natural Resources Canada is in the process of adapting Hazus-MH to respond to the need for risk management. This requires extensive data from a variety of municipal, provincial, and national agencies in order to provide valid estimates. The aim is to establish a data warehouse to store relevant flood prediction data which may be accessed through Hazus. Additionally, this data warehouse will contain tools for On-Line Analytical Processing (OLAP) and knowledge discovery to quantitatively determine areas at risk and discover unexpected dependencies between datasets. The third application of the data warehouse is to provide data for online visualization capabilities: web-based thematic maps of Hazus results, historical flood visualizations, and mitigation tools; thus making flood hazard information and tools more accessible to emergency responders, planners, and residents. This paper represents the first step of the process: locating and collecting the appropriate datasets.
17

Admin, Admin. "A Extract, Transform, Load sebagai upaya Pembangunan Data Warehouse." Journal of Informatics and Communication Technology (JICT) 1, no. 1 (July 12, 2019): 11–20. http://dx.doi.org/10.52661/j_ict.v1i1.11.

Abstract:
This paper provides a general overview of the Extract, Transform, Load (ETL) process used to produce input data for the multidimensional modeling of a data mart and data warehouse. The article implements a database from Online Transaction Processing (OLTP) into an Online Analytical Processing (OLAP) database. The research uses the open-source classicmodels database from MySQL. The method applied in this research is to carry out the Extract, Transform and Load (ETL) process on the classicmodels data, performing all three ETL steps from the OLTP database into the OLAP database. The output of this research is an order fact table containing data from all the dimension tables created for the classicmodels data, built using the Pentaho Data Integration (Kettle) software and the MySQL database management system.
18

Yoon, Seung-Chul, Tae Sung Shin, Kurt Lawrence, and Deana R. Jones. "Development of Online Egg Grading Information Management System with Data Warehouse Technique." Applied Engineering in Agriculture 36, no. 4 (2020): 589–604. http://dx.doi.org/10.13031/aea.13675.

Abstract:
Highlights:
- A digital data collection and management system is developed for the USDA-AMS's shell-egg grading program.
- A database system consisting of OLTP, data warehouse and OLAP databases enables online data entry and trend reporting.
- Data and information management is done through web application servers; users access the databases via web browsers.

Abstract: This paper is concerned with the development of a web-based online data entry and reporting system capable of centralized data storage and analytics of egg grading records produced by USDA egg graders. The USDA egg grading records are currently managed in paper form. While they contain useful information for data-driven knowledge discovery and decision making, the paper-based egg grading record system has fundamental limitations in the effective and timely management of such information. Thus, there has been a demand to electronically and digitally store and manage the egg grading records in a database for data analytics and mining, such that the quality trends of eggs observed at various levels (e.g., nation or state) are readily available to decision makers. In this study, we report the design and implementation of a web-based online data entry and reporting information system (called the USDA Egg Grading Information Management System, EGIMS), based on a data warehouse framework. The developed information system consisted of web applications for data entry and reporting, and internal databases for data storage, aggregation, and query processing. The internal databases consisted of an online transaction processing (OLTP) database for data entry and retrieval, a data warehouse (DW) for centralized data storage, and an online analytical processing (OLAP) database for multidimensional analytical queries. Thus, the key design goal of the system was to build a platform that could provide web-based data entry and reporting capabilities while rapidly updating the OLTP, DW and OLAP databases.
The developed system was evaluated by a simulation study with statistically modeled egg grading records for one hypothetical year. The study found that the EGIMS could handle approximately up to 600 concurrent users, 32 data entries per second, and 164 report requests per second, on average. The study demonstrated the feasibility of an enterprise-level data warehouse system for the USDA and the potential to provide data analytics and data mining capabilities such that queries about historical and current trends can be reported. Once fully implemented and tested in the field, the EGIMS is expected to provide a solution to modernize the egg grading practice of the USDA and produce useful information for timely decisions and new knowledge discovery.
Keywords: Data warehouse, Database, OLTP, OLAP, Egg grading, Information management, Web application, Information system, Data.
19

Zeng, Jian Hua, and Zheng Xing Xiao. "Automatic Mining and Processing Dormancy Data in the Database Management System for Small and Medium Enterprises." Applied Mechanics and Materials 513-517 (February 2014): 1927–30. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.1927.

Abstract:
The paper studies how to automatically mine and process dormant data and thereby improve the performance of a database management system. It analyzes the main mode of moving dormant data into the data warehouse, presents an innovative design of SQL capture and change functions, and gives a statistical method for recognizing dormant data in the data warehouse. The SQL change and capture functions are migrated to the application server, so that dormant data is captured automatically without professional intervention. This helps to solve the problems of staff shortages and of accumulated historical data degrading the performance of the database management system in small and medium-sized enterprises.
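The dormant-data idea, identifying rows that are no longer accessed so they can be migrated to the warehouse, can be illustrated with a last-access threshold scan. The timestamp field and the one-year cutoff are assumptions of the sketch, not the paper's statistical method:

```python
# Dormancy scan sketch: rows whose last access is older than a cutoff are
# flagged for migration from the operational database to the warehouse.
from datetime import datetime, timedelta

def split_dormant(rows, now, max_idle_days=365):
    """Partition rows into (dormant, active) by last-access age."""
    cutoff = now - timedelta(days=max_idle_days)
    dormant = [r for r in rows if r["last_access"] < cutoff]
    active = [r for r in rows if r["last_access"] >= cutoff]
    return dormant, active

now = datetime(2024, 1, 1)
rows = [
    {"id": 1, "last_access": datetime(2021, 6, 1)},   # long idle -> dormant
    {"id": 2, "last_access": datetime(2023, 11, 5)},  # recent -> stays
]
dormant, active = split_dormant(rows, now)
print([r["id"] for r in dormant], [r["id"] for r in active])  # [1] [2]
```

In the paper's setting the last-access information would come from automatically captured SQL activity rather than a stored column; the partition step itself is the same.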
20

Rahm, Erhard, Toralf Kirsten, and Jörg Lange. "The GeWare data warehouse platform for the analysis of molecular-biological and clinical data." Journal of Integrative Bioinformatics 4, no. 1 (March 1, 2007): 1–11. http://dx.doi.org/10.1515/jib-2007-47.

Abstract:
We introduce the GeWare data warehouse platform for the integrated analysis of clinical information, microarray data and annotations within large biomedical research studies. Clinical data is obtained from a commercial study management system while publicly available data is integrated using a mediator approach. The platform utilizes a generic approach to manage different types of annotations. We outline the overall architecture of the platform, its implementation as well as the main processing and analysis workflows.
21

Gwoździewicz, Sylwia, Dariusz Prokopowicz, Jan Grzegorek, and Martin Dahl. "APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANIZATION." International Journal of New Economics and Social Sciences 8, no. 2 (December 30, 2018): 12–14. http://dx.doi.org/10.5604/01.3001.0012.9925.

Abstract:
Currently, business analytics uses computerized platforms containing ready-made reporting formulas in the field of Business Intelligence. In recent years, software companies supporting enterprise management have offered advanced information-analytical Business Intelligence class systems, built through modular development and by combining business intelligence software with platforms that use data warehouse technology, multidimensional analytical processing software, and data mining and processing applications. This article describes an example of this type of computerized analytical platform for business entities, which belongs to the class of analytical applications that allow quick access to necessary, aggregated and multi-criteria processed information. The software allows entrepreneurs and corporate managers, as well as entities from the SME sector, on the one hand to use embedded patterns of reports or analyses, and on the other hand to develop and configure the analyses themselves, tailored to the specifics of a given entity. Such analytical applications make it possible to build integrated risk management systems in the organization.
22

Sheketa, Vasyl, Mykola Pasieka, Svitlana Chupakhina, Nadiia Pasieka, Uliana Ketsyk-Zinchenko, Yulia Romanyshyn, and Olha Yanyshyn. "Information System for Screening and Automation of Document Management in Oncological Clinics." Open Bioinformatics Journal 14, no. 1 (November 19, 2021): 39–50. http://dx.doi.org/10.2174/1875036202114010039.

Abstract:
Introduction: Automation of business documentation workflow in medical practice substantially accelerates and improves the process and results in better service development. Methods: Efficient use of databases, data banks, and document-oriented storage (data warehouses), including dual-purpose databases, enables performing specific actions, such as adding records, introducing changes into them, performing either ordinary or analytical searches of data, as well as processing them efficiently. With the focus on achieving interaction between distributed and heterogeneous applications and devices belonging to independent organizations, a specialized medical client application has been developed, as a result of which the quantity and quality of the information streams that can be essential for effective treatment of patients with breast cancer have increased. Results: The developed application automates the management of patient records, taking into account the needs of medical staff, especially in managing patients' appointments and creating patients' medical records in accordance with the international standards currently in force. This work is the basis for the smoother integration of medical records and genomics data to achieve better prevention, diagnosis, prediction, and treatment of breast cancer (oncology). Conclusion: Since relevant standards upgrade the functioning of health care information technology and the quality and safety of patient care, we have accomplished the global architectural scheme of the medical automation system by harmonizing the medical services specified by HL7 International.
23

Putri, I. Gusti Ayu Putu Arika, and I. Nyoman Nurcaya. "PENERAPAN WAREHOUSE MANAGEMENT SYSTEM PADA PT UNIPLASTINDO INTERBUANA BALI." E-Jurnal Manajemen Universitas Udayana 8, no. 12 (December 3, 2019): 7216. http://dx.doi.org/10.24843/ejmunud.2019.v08.i12.p16.

Abstract:
A Warehouse Management System (WMS) is a database-driven computer application used to improve warehouse efficiency by maintaining accurate inventory data through the recording of every warehouse transaction. The purpose of this study was to determine the efficiency of material handling costs incurred after implementation of a warehouse management system. The research was conducted at PT Uniplastindo Interbuana Bali by simulating the implementation of a WMS. Based on the calculations, applying the WMS increased cost and time efficiency in material handling because the system provides accurate material placement data. Data gathered during the simulation show less handling time than before the simulation, which means lower costs. The system also makes it easy for the responsible admin to record actual stock in the field. Data presentation and processing become faster and more accurate, which can lead to improved company performance. Keywords: Warehouse Management System (WMS), material handling, cost efficiency
24

Luo, Jia, Junping Xu, Obaid Aldosari, Sara A. Althubiti, and Wejdan Deebani. "Design and Implementation of an Efficient Electronic Bank Management Information System Based Data Warehouse and Data Mining Processing." Information Processing & Management 59, no. 6 (November 2022): 103086. http://dx.doi.org/10.1016/j.ipm.2022.103086.

25

Wang, Zhihui, and Jinyu Wang. "Applications of Machine Learning in Public Security Information and Resource Management." Scientific Programming 2021 (October 12, 2021): 1–9. http://dx.doi.org/10.1155/2021/4734187.

Abstract:
Data mining and big data technologies can be of utmost importance for investigating case datasets in police records. New findings and useful information may be obtained through data preprocessing and multidimensional modeling. Public security data is a kind of "big data," characterized by large volume, rapid growth, varied structure, large-scale storage, low density, and time sensitivity. In this paper, a police data warehouse is constructed and a public security information analysis system is proposed. The proposed system comprises two modules: (i) case management and (ii) public security information mining. The former is responsible for the collection and processing of case information. The latter preprocesses data on major cases that have occurred over the past ten years to create a data warehouse based on needs. By defining measures and dimensions, the system analyzes and predicts criminals' characteristics and the case environment and uncovers relationships between them. In mining and processing crime data, data mining algorithms can quickly find the relevant information in the data. Furthermore, the system can identify relevant trends and patterns to help detect criminal cases faster than other methods. This can reduce the emergence of new crimes and provide a basis for decision-making in the public security department, which has practical significance.
26

Surya, Putu Bagus Hendrayana, Rifky Lana Rahardian, and Komang Oka Saputra. "Data Warehouse Design Academic Affairs Case Study: Campus II STMIK STIKOM Bali Jimbaran." International Journal of Engineering and Emerging Technology 2, no. 1 (September 23, 2017): 104. http://dx.doi.org/10.24843/ijeet.2017.v02.i01.p21.

Abstract:
At the management level, information is a key factor in the decision-making process. This requires a form of data processing different from transactional processing, one that allows leadership to obtain accurate information in a short time and fosters independence in obtaining information. A campus academic coordinator needs student data to assess academic conditions, yet this information is available only from a centralized database in BAAK. To overcome this problem, a data warehouse is designed for a dashboard system through which the campus coordinator can monitor academic conditions.
27

Maletzky, Alexander, Carl Böck, Thomas Tschoellitsch, Theresa Roland, Helga Ludwig, Stefan Thumfart, Michael Giretzlehner, Sepp Hochreiter, and Jens Meier. "Lifting Hospital Electronic Health Record Data Treasures: Challenges and Opportunities." JMIR Medical Informatics 10, no. 10 (October 21, 2022): e38557. http://dx.doi.org/10.2196/38557.

Abstract:
Electronic health records (EHRs) have been successfully used in data science and machine learning projects. However, most of these data are collected for clinical use rather than for retrospective analysis. This means that researchers typically face many different issues when attempting to access and prepare the data for secondary use. We aimed to investigate how raw EHRs can be accessed and prepared in retrospective data science projects in a disciplined, effective, and efficient way. We report our experience and findings from a large-scale data science project analyzing routinely acquired retrospective data from the Kepler University Hospital in Linz, Austria. The project involved data collection from more than 150,000 patients over a period of 10 years. It included diverse data modalities, such as static demographic data, irregularly acquired laboratory test results, regularly sampled vital signs, and high-frequency physiological waveform signals. Raw medical data can be corrupted in many unexpected ways that demand thorough manual inspection and highly individualized data cleaning solutions. We present a general data preparation workflow, which was shaped in the course of our project and consists of the following 7 steps: obtain a rough overview of the available EHR data, define clinically meaningful labels for supervised learning, extract relevant data from the hospital’s data warehouses, match data extracted from different sources, deidentify them, detect errors and inconsistencies therein through a careful exploratory analysis, and implement a suitable data processing pipeline in actual code. Only few of the data preparation issues encountered in our project were addressed by generic medical data preprocessing tools that have been proposed recently. Instead, highly individualized solutions for the specific data used in one’s own research seem inevitable. 
We believe that the proposed workflow can serve as guidance for practitioners, helping them to identify and address potential problems early and avoid some common pitfalls.
28

Song, Yanan, and Xiaolong Hua. "Implementation of Data Mining Technology in Bonded Warehouse Inbound and Outbound Goods Trade." Journal of Organizational and End User Computing 34, no. 3 (May 2022): 1–18. http://dx.doi.org/10.4018/joeuc.291511.

Abstract:
For taxed goods, the actual freight is generally determined by multiplying the allocated freight per kilogram by the actual outgoing weight, based on the order number on the outgoing bill. Considering that conventional logistics is insufficient to cope with the rapid response e-commerce orders demand of logistics, this work discussed the implementation of data mining technology in bonded warehouse inbound and outbound goods trade. Specifically, a bonded warehouse decision-making system was developed comprising a data warehouse, a conceptual model, an online analytical processing system, a human-computer interaction module, and a WEB data sharing platform. The statistical query module can be used to perform statistics and queries on warehousing operations. After optimization of the whole warehousing business process, it takes only 19.1 hours to obtain the actual freight, nearly one third less than before optimization. This study could create a better environment for the development of China's processing trade.
29

Putra, I. Made Suwija, and Dewa Komang Tri Adhitya Putra. "Rancang Bangun Engine ETL Data Warehouse dengan Menggunakan Bahasa Python." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 3, no. 2 (August 1, 2019): 113–23. http://dx.doi.org/10.29207/resti.v3i2.872.

Abstract:
Big companies with many branches in different locations often have difficulty analyzing the transaction processes of each branch. The problem experienced by company management is delivering the massive data produced by the branches to the head office quickly enough, so that the analysis of the company's performance becomes slow and inaccurate. The results of this process are used as a consideration in decision making, which yields the right information only if the data is complete and relevant. The right method for collecting such massive data is the data warehouse approach. A data warehouse is a relational database designed to optimize queries in Online Analytical Processing (OLAP) over the transactions of various data sources, and it can record any data changes that occur so that the data becomes more structured. For data collection, a data warehouse relies on extract, transform, and load (ETL) steps to read data from the Online Transaction Processing (OLTP) system, transform the data into a uniform structure, and save it to its final location in the data warehouse. This study presents a solution for implementing ETL that can work automatically or manually as needed, using the Python programming language, so that it can facilitate the ETL process and adjust to the conditions of the database in the company system.
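The extract-transform-load flow this abstract describes can be sketched in a few lines of Python. The table and field names below are illustrative only, not taken from the authors' engine:

```python
# Minimal ETL sketch: extract rows from an OLTP source, normalize their
# structure, and load them into a warehouse store. Schema is hypothetical.

def extract(oltp_rows):
    """Extract: read raw transaction records from the OLTP source."""
    return list(oltp_rows)

def transform(rows):
    """Transform: unify the data structure (clean branch codes, compute totals)."""
    facts = []
    for r in rows:
        facts.append({
            "branch": r["branch"].strip().upper(),
            "date": r["date"],
            "amount": r["qty"] * r["unit_price"],
        })
    return facts

def load(facts, warehouse):
    """Load: append the transformed facts to the warehouse store."""
    warehouse.extend(facts)
    return warehouse

warehouse = []
source = [
    {"branch": " jkt ", "date": "2019-08-01", "qty": 3, "unit_price": 100},
    {"branch": "DPS", "date": "2019-08-01", "qty": 2, "unit_price": 250},
]
load(transform(extract(source)), warehouse)
```

In a real engine each step would read from and write to actual databases and be scheduled automatically or triggered manually, as the paper describes.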
30

Chen, James C., Chen Huan Cheng, Kung Jeng Wang, Po Tsang B. Huang, Chih Chiang Chang, Chien Jung Huang, and Ti Chen Ting. "Application of RFID to Warehouse Management." Key Engineering Materials 486 (July 2011): 297–300. http://dx.doi.org/10.4028/www.scientific.net/kem.486.297.

Abstract:
This study presents the application of Radio Frequency Identification (RFID) technology to improve the efficiency and effectiveness of warehouse management. Fixed RFID readers and antennas were installed at the receiving/shipping dock, and passive tags were mounted on either storage boxes or pallets. An RFID system can quickly and simultaneously read multiple tags, compared to the sequential reading of barcodes with a handheld scanner in manual operations, which is slowed by inconsistent box sizes and barcode locations across item types. Significant improvements were observed in preliminary experiments. The numbers of pallets and items processed by each operator per day increased by 425% and 438%, respectively. The data processing time at receiving and shipping docks was reduced by more than 90%. With RFID technology, the number of operators can be reduced while maintaining the current service capacity at the studied warehouse. The benefits of using RFID in warehouse management are analyzed and promoted.
31

Agapito, Giuseppe, Chiara Zucco, and Mario Cannataro. "COVID-WAREHOUSE: A Data Warehouse of Italian COVID-19, Pollution, and Climate Data." International Journal of Environmental Research and Public Health 17, no. 15 (August 3, 2020): 5596. http://dx.doi.org/10.3390/ijerph17155596.

Abstract:
The management of the COVID-19 pandemic presents several unprecedented challenges in different fields, from medicine to biology, from public health to social science, that may benefit from computing methods able to integrate the increasing available COVID-19 and related data (e.g., pollution, demographics, climate, etc.). With the aim to face the COVID-19 data collection, harmonization and integration problems, we present the design and development of COVID-WAREHOUSE, a data warehouse that models, integrates and stores the COVID-19 data made available daily by the Italian Protezione Civile Department and several pollution and climate data made available by the Italian Regions. After an automatic ETL (Extraction, Transformation and Loading) step, COVID-19 cases, pollution measures and climate data, are integrated and organized using the Dimensional Fact Model, using two main dimensions: time and geographical location. COVID-WAREHOUSE supports OLAP (On-Line Analytical Processing) analysis, provides a heatmap visualizer, and allows easy extraction of selected data for further analysis. The proposed tool can be used in the context of Public Health to underline how the pandemic is spreading, with respect to time and geographical location, and to correlate the pandemic to pollution and climate data in a specific region. Moreover, public decision-makers could use the tool to discover combinations of pollution and climate conditions correlated to an increase of the pandemic, and thus, they could act in a consequent manner. Case studies based on data cubes built on data from Lombardia and Puglia regions are discussed. Our preliminary findings indicate that COVID-19 pandemic is significantly spread in regions characterized by high concentration of particulate in the air and the absence of rain and wind, as even stated in other works available in literature.
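The roll-ups along the time and geographical dimensions that such OLAP analysis performs can be illustrated with a toy fact table. The figures and schema below are invented for illustration and are not actual COVID-WAREHOUSE data:

```python
from collections import defaultdict

# Toy fact rows: (region, date, new_cases). Values are invented.
facts = [
    ("Lombardia", "2020-03-01", 1000),
    ("Lombardia", "2020-03-02", 1200),
    ("Puglia", "2020-03-01", 50),
    ("Puglia", "2020-03-02", 80),
]

def roll_up(facts, dim):
    """Aggregate the measure along one dimension (0 = region, 1 = date)."""
    totals = defaultdict(int)
    for row in facts:
        totals[row[dim]] += row[2]
    return dict(totals)

by_region = roll_up(facts, 0)  # {'Lombardia': 2200, 'Puglia': 130}
by_date = roll_up(facts, 1)    # {'2020-03-01': 1050, '2020-03-02': 1280}
```

A real OLAP engine materializes such aggregates over full data cubes; this sketch only shows the shape of the operation.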
32

Nuseir, Mohammed T. "Designing business intelligence (BI) for production, distribution and customer services: a case study of a UAE-based organization." Business Process Management Journal 27, no. 4 (January 25, 2021): 1275–95. http://dx.doi.org/10.1108/bpmj-06-2020-0266.

Abstract:
Purpose: Business intelligence (BI) is a strategic approach that can use analytical tools to collect and integrate information, apply business rules and ensure the appropriate visible output of organizational information. This study aims to present the design and implementation of BI in areas of business process improvement for production, distribution and customer services. Design/methodology/approach: This study highlights the process of BI in production, distribution and customer services based on the National Food Products Company (NFPC) in the United Arab Emirates (UAE). It discusses the step-by-step development process of BI and refers to graphical illustrations of the business needs and the organization's target key performance indicators (KPIs). Findings: Based on the business needs and the KPIs chosen to maximize production and improve distribution and customer services, the BI tool shows that the star schema is the most appropriate one. Relational Online Analytical Processing (ROLAP) based on the Mondrian system is employed as the Online Analytical Processing (OLAP) architecture, since the NFPC's technological infrastructure was better adapted to this vision. The analysis starts with data retrieval from two databases: the customer database and the production and distribution database. Finally, visualization and reporting processes that respect the end users improve the NFPC's decisions. Practical implications: The study will help other organizations, BI developers, data warehouse (DW) developers and administrators, project managers, as well as academic researchers understand how to develop a successful BI framework and implement BI based on business needs. Originality/value: This is a unique and original study on the BI experience of a UAE-based organization and will encourage other organizations to apply BI in their business processes.
33

Du, Chenglong. "Logistics and Warehousing Intelligent Management and Optimization Based on Radio Frequency Identification Technology." Journal of Sensors 2021 (October 28, 2021): 1–11. http://dx.doi.org/10.1155/2021/2225465.

Abstract:
In today's competitive global business environment, companies and organizations emphasize return on assets. By analyzing the status quo of the Internet of Things and warehouse management, this paper proposes the application of radio frequency identification (RFID) technology in logistics warehouse management. Building on intelligent logistics and warehousing management, the overall structure of an RFID-based intelligent management system is established, and the functions of each module are introduced in detail. The warehouse management system is designed in a B/S architecture: it determines the location of goods in the warehouse, collects inbound and outbound data through RFID readers, and reports the data processing results back to the system. Through optimization of the warehouse management process, intelligent logistics and warehousing management is realized efficiently. By testing the response time of the system and comparing test efficiency before and after optimization, the reliability and efficiency of the warehousing intelligent management system are verified, which provides a basis for further study of intelligent logistics and warehousing management.
34

Kumar, Akshay, and T. V. Vijay Kumar. "A Multi-Objective Approach to Big Data View Materialization." International Journal of Knowledge and Systems Science 12, no. 2 (April 2021): 17–37. http://dx.doi.org/10.4018/ijkss.2021040102.

Abstract:
Big data comprises voluminous and heterogeneous data that has a limited level of trustworthiness. This data is used to generate valuable information that can be used for decision making. However, decision making queries on Big data consume a lot of time for processing resulting in higher response times. For effective and efficient decision making, this response time needs to be reduced. View materialization has been used successfully to reduce the query response time in the context of a data warehouse. Selection of such views is a complex problem vis-à-vis Big data and is the focus of this paper. In this paper, the Big data view selection problem is formulated as a bi-objective optimization problem with the two objectives being the minimization of the query evaluation cost and the minimization of the update processing cost. Accordingly, a Big data view selection algorithm that selects Big data views for a given query workload, using the vector evaluated genetic algorithm, is proposed. The proposed algorithm aims to generate views that are able to reduce the response time of decision-making queries.
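The bi-objective trade-off behind this view selection problem can be illustrated with a brute-force Pareto filter over candidate view subsets. The costs below are invented, and the paper's vector evaluated genetic algorithm replaces this exhaustive enumeration with an evolutionary search:

```python
from itertools import combinations

# Candidate views: name -> (query-cost saving, update/maintenance cost).
# Numbers are illustrative only.
views = {"v1": (40, 10), "v2": (25, 5), "v3": (30, 20)}

def objectives(subset):
    """Return (remaining query cost, total update cost), both minimized."""
    base = 100  # assumed query cost with no materialized views
    saving = sum(views[v][0] for v in subset)
    update = sum(views[v][1] for v in subset)
    return (max(base - saving, 0), update)

def pareto_front(candidates):
    """Keep subsets not dominated on both objectives by another subset."""
    front = []
    for s in candidates:
        q, u = objectives(s)
        dominated = any(
            objectives(t)[0] <= q and objectives(t)[1] <= u
            and objectives(t) != (q, u)
            for t in candidates
        )
        if not dominated:
            front.append(s)
    return front

all_subsets = [c for r in range(len(views) + 1) for c in combinations(views, r)]
front = pareto_front(all_subsets)
# e.g. {"v3"} is dominated by {"v1"} (worse on both objectives), so it is
# excluded, while the empty set (zero update cost) stays on the front.
```

For realistic numbers of candidate Big data views this enumeration is infeasible, which is precisely why the paper turns to a genetic algorithm.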
35

Tang, Jun. "Dimensional Modeling Method Discussion for the Profits from Mineral Rights Transfer Management." Modern Economics & Management Forum 3, no. 2 (April 25, 2022): 81. http://dx.doi.org/10.32629/memf.v3i2.771.

Abstract:
In the informatization of managing the profits from mineral rights transfers, it is of great practical significance to establish a data warehouse that is easy to understand, performs queries efficiently, and scales well, so as to provide good decision support services for managers. As an important data warehouse modeling method, dimensional modeling has significant advantages over other approaches in data understandability, scalability, and data access performance. Combined with the business needs of decision analysis for the assessment and collection of mineral rights transfer income (mineral rights price), the key design ideas of dimensional modeling are expounded from the perspectives of handling dimension content changes, role-playing dimensions, shared dimension design, and audit dimension design. Taking an actual engineering project as an example, the dimensional modeling results can be queried easily, which can help domestic peers in the construction of DW/BI projects.
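One standard technique for handling dimension content changes is the Type 2 slowly changing dimension, which expires the current row and appends a new versioned one. The sketch below is generic, with hypothetical keys and attributes rather than the paper's actual schema:

```python
from datetime import date

def scd2_update(dim_rows, key, new_attrs, today):
    """Type 2 slowly changing dimension: expire the current row for `key`
    and append a new current row carrying the changed attributes."""
    for row in dim_rows:
        if row["key"] == key and row["end"] is None:
            if all(row.get(k) == v for k, v in new_attrs.items()):
                return dim_rows  # nothing changed, keep the current version
            row["end"] = today   # close out the old version
    dim_rows.append({"key": key, "start": today, "end": None, **new_attrs})
    return dim_rows

# Hypothetical mineral-rights-holder dimension with one current row.
dims = [{"key": "M-001", "start": date(2020, 1, 1), "end": None,
         "holder": "Alpha Co"}]
scd2_update(dims, "M-001", {"holder": "Beta Co"}, date(2021, 6, 1))
```

After the update the old row carries an end date and the new row is current, so historical facts still join to the attribute values that were valid at the time.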
36

Di Tria, Francesco, Ezio Lefons, and Filippo Tangorra. "Metadata for Approximate Query Answering Systems." Advances in Software Engineering 2012 (September 3, 2012): 1–13. http://dx.doi.org/10.1155/2012/247592.

Abstract:
In business intelligence systems, data warehouse metadata management and representation are getting more and more attention from vendors and designers. The standard language for data warehouse metadata representation is the Common Warehouse Metamodel. However, business intelligence systems also include approximate query answering systems, since these software tools provide fast responses for decision making on the basis of approximate query processing. Currently, the standard metamodel does not allow representing the metadata needed by approximate query answering systems. In this paper, we propose an extension of the standard metamodel in order to define the metadata to be used in online approximate analytical processing. These metadata have been successfully adopted in ADAP, a web-based approximate query answering system that creates and uses statistical data profiles.
37

Frolov, M. M., and M. I. Truphanov. "OPTICAL-ELECTRONIC DEVICE OF CALCULATION OF PARAMETERS OF VOLUME OBJECTS OF WORKING SCENE AT MULTIPLE SOURCES OF VIDEO DATA." Proceedings of the Southwest State University 22, no. 6 (March 27, 2019): 198–205. http://dx.doi.org/10.21869/2223-1560-2018-22-6-198-205.

Abstract:
The paper considers approaches to the construction of a geographically distributed optical-electronic device providing analysis of large, extended working scenes in the interest of automating the control and management of robotic tools in industrial assembly shops and warehouses. The principal difference of the proposed solution is the possibility of obtaining images of the analyzed objects using optical-electronic sensors located in different parts of the workspace, realizing the function of binocular vision over a much larger area of the working scene compared to analogues. The distinctive novelty of the developed theoretical approach lies in its treatment of binocular technical vision, which consists in iteratively performing calibration procedures for selected pairs of optical-electronic sensors and subsequently calculating the spatial coordinates of the analyzed objects using the calibrated pairs. The results of image analysis from each of the optical-electronic sensors are used to track moving objects and analyze their motion paths in the working scene. To implement the developed theoretical approaches, a modular optical-electronic device has been developed, consisting of two types of modules. The first type is a standalone optical-electronic module, which includes an optical-electronic sensor and means for processing images and extracting primary features immediately upon receiving them, for subsequent analysis. The second type is a computational module that processes the primary data from a set of modules of the first type. Data transfer between device modules is provided by radio over a WiFi network.
A distinctive feature of the developed device is the primary processing of images immediately upon receipt and the transmission over the radio channel of a small amount of data about the selected objects to the computing module, which performs the final stages of data processing and generates a set of parameters describing the characteristics and spatial coordinates of the objects found in the working scene for further use. Experimental studies were conducted on the developed simulation model, which confirmed the correctness of the developed theoretical approach and the possibility of its application in practice.
38

Kwakye, Michael Mireku. "Conceptual Model and Design of Semantic Trajectory Data Warehouse." International Journal of Data Warehousing and Mining 16, no. 3 (July 2020): 108–31. http://dx.doi.org/10.4018/ijdwm.2020070106.

Abstract:
The trajectory patterns of a moving object in a spatio-temporal domain offer varied information for managing the data generated from the movement. Query results on trajectory objects from the data warehouse are usually not enough to answer certain trend behaviours or draw meaningful inferences without the associated semantic information about the trajectory object or the geospatial environment within a specified purpose or context. This article formulates and designs a generic ontology modelling framework that serves as the background model platform for the design of a semantic data warehouse for trajectories. The methodology builds on the higher granularity of pre-processed, extracted-transformed-loaded (ETL) data so as to offer efficient semantic inference over the underlying trajectory data. Moreover, the modelling approach outlines the thematic dimensions that offer a design platform for predictive trend analysis and knowledge discovery in trajectory dynamics and data processing for moving objects.
39

Nazier, Mostafa Medhat, Dr Ayman Khedr, and Assoc Prof Mohamed Haggag. "Business Intelligence and its role to enhance Corporate Performance Management." INTERNATIONAL JOURNAL OF MANAGEMENT & INFORMATION TECHNOLOGY 3, no. 3 (January 23, 2013): 08–15. http://dx.doi.org/10.24297/ijmit.v3i3.1745.

Abstract:
As every small or large organization requires information to promote its business by forecasting future trends, information is now the primary tool for understanding market trends and an organization's own position in the market relative to its competitors. Business intelligence is the use of an organization's disparate data to provide meaningful information and analyses to employees, customers, suppliers, and partners for more efficient and effective decision-making. BI applications include the activities of decision support systems, query and reporting, online analytical processing (OLAP), data warehouses (DW), statistical analysis, forecasting, and data mining.
40

Sari, Ratna, and I. Putu Agus Eka Pratama. "Rancangan Desain Data Warehouse Kunjungan Wisatawan Ke Bali Baw Tour & Travel." JUSS (Jurnal Sains dan Sistem Informasi) 3, no. 1 (November 5, 2021): 14–18. http://dx.doi.org/10.22437/juss.v3i1.8146.

Abstract:
Business development in the globalized era has provoked fierce competition, especially in the application of IT to support daily processes such as data management. BAW Tour & Travel therefore needs a system in which data can be stored and integrated (a data warehouse), together with ETL and OLAP methods, for analysis and quick decision-making. Using the data warehouse, spikes in tourist visits can be identified, so the company can predict the number of guides needed in a given month and reduce bookings rejected because of a lack of guides or fully booked hotels.
41

Tešendić, Danijela, and Danijela Boberić Krstićev. "Business Intelligence in the Service of Libraries." Information Technology and Libraries 38, no. 4 (December 16, 2019): 98–113. http://dx.doi.org/10.6017/ital.v38i4.10599.

Abstract:
This paper describes the implementation of business intelligence tools in libraries. A complete procedure for building a data warehouse is described in a case study of the BISIS library management system. During development of the data warehouse model, user requirements for reporting were identified and the structure of the existing transactional databases in the BISIS system was analysed. Based on this analysis, three data warehouse models are proposed that satisfy the requirements for analytical processing of data. The paper presents the usage of one OLAP tool, but the proposed data warehouse model is independent of the choice of OLAP tool, and any other tool can be integrated with the proposed data warehouse.
42

He, Hong, Hong Yang Xu, and Zhi Hong Zhang. "Design of Lightweight RFID Middleware for Warehouse Management System." Advanced Materials Research 706-708 (June 2013): 729–32. http://dx.doi.org/10.4028/www.scientific.net/amr.706-708.729.

Abstract:
With the development of computer network automation and the rapid growth of e-commerce, more and more enterprises need to update their warehouse management systems to adapt to technological change. RFID technology is widely used throughout society due to its rapid, real-time, accurate sampling and information processing ability. This paper applies RFID technology to the warehouse management system and puts forward a lightweight RFID middleware architecture for warehouse management. According to the ALE standard issued by EPCglobal, the paper mainly realizes the basic middleware functions of receiving and processing data. Meanwhile, the middleware uses a MySQL database to select useful data for the application software system. The middleware designed in this paper is simple in structure and powerful in function, which will serve small and medium-sized enterprises well.
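The core receive-and-process duty of such ALE-style middleware can be illustrated by a simplified read-smoothing filter that drops duplicate tag sightings within one read cycle. This is a sketch of the idea only; the actual ALE interface is considerably richer:

```python
def smooth_reads(events, cycle_seconds):
    """Collapse repeated reads of the same EPC tag within one read cycle,
    reporting only the first sighting (simplified ALE-style filtering)."""
    last_seen = {}
    report = []
    for timestamp, epc in events:
        prev = last_seen.get(epc)
        if prev is None or timestamp - prev >= cycle_seconds:
            report.append((timestamp, epc))
        last_seen[epc] = timestamp
    return report

# Raw reader stream: the 0.2 s sighting of EPC-A duplicates the 0.0 s one,
# while the 2.5 s sighting falls in a later cycle and is reported again.
raw = [(0.0, "EPC-A"), (0.2, "EPC-A"), (0.5, "EPC-B"), (2.5, "EPC-A")]
clean = smooth_reads(raw, cycle_seconds=1.0)
```

The filtered report, rather than the raw event stream, is what middleware hands upward to the warehouse application.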
43

Atmoko, Agus Dwi, and Ervani Nur Septiana. "PENERAPAN SISTEM AKUNTANSI PERSEDIAAN BARANG DAGANG PADA SUNRISE DISTRO DENGAN APLIKASI ACCURATE ACCOUNTING." Yudishtira Journal : Indonesian Journal of Finance and Strategy Inside 2, no. 1 (April 30, 2022): 14–29. http://dx.doi.org/10.53363/yud.v2i1.20.

Full text of the source
Abstract:
Accounting systems include all forms of management information system that help collect, record, store and provide a company's financial accounting information in the course of processing accounting transactions. Distro Sunrise is a micro, small and medium enterprise (MSME) located at Jl. Sawunggalih No. 142B, Semawung Daleman Village, Kutoarjo District, engaged in trade by selling apparel. Based on the analysis that has been done, Sunrise Distro does not have a system for recording inventory, including procedures for the recording and issue of goods in warehouses, which results in the unavailability of valid stock-card information about merchandise inventory. This research aims to implement a merchandise inventory accounting system at the Kutoarjo Sunrise Distro. The data analysis technique used in this study is quantitative descriptive analysis; data were collected through interviews, documents and literature studies. Accurate Accounting version 4 is used, an application expected to facilitate the recording and processing of merchandise inventory stock data. The result of the study is a proposed implementation of Accurate Accounting version 4 comprising inventory stock card reports, sales reports and purchase reports.
APA, Harvard, Vancouver, ISO, and other styles
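The stock card at the heart of such an inventory system is just a running-balance ledger per item. A minimal sketch, with entirely made-up dates and quantities (this is not the Accurate Accounting data model):

```python
from dataclasses import dataclass

@dataclass
class Movement:
    """One stock-card row: goods received into or issued from the warehouse."""
    date: str
    qty_in: int = 0
    qty_out: int = 0

def stock_card(opening_balance: int, movements: list[Movement]) -> list[tuple[str, int]]:
    """Returns (date, running balance) rows, as a stock card would show them."""
    balance = opening_balance
    rows = []
    for m in movements:
        balance += m.qty_in - m.qty_out
        rows.append((m.date, balance))
    return rows

card = stock_card(10, [
    Movement("2022-01-05", qty_in=25),   # purchase received into the warehouse
    Movement("2022-01-09", qty_out=12),  # sale
    Movement("2022-01-15", qty_out=3),   # sale
])
print(card)  # [('2022-01-05', 35), ('2022-01-09', 23), ('2022-01-15', 20)]
```

Keeping this ledger per item is what makes valid stock information available on demand, which is exactly what the study found missing.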
44

Raffaetà, A., L. Leonardi, G. Marketos, G. Andrienko, N. Andrienko, E. Frentzos, N. Giatrakos, et al. "Visual Mobility Analysis using T-Warehouse." International Journal of Data Warehousing and Mining 7, no. 1 (January 2011): 1–23. http://dx.doi.org/10.4018/jdwm.2011010101.

Full text of the source
Abstract:
Technological advances in sensing technologies and wireless telecommunication devices enable research fields related to the management of trajectory data. The challenge after storing the data is the implementation of appropriate analytics for extracting useful knowledge. However, traditional data warehousing systems and techniques were not designed for analyzing trajectory data. In this paper, the authors demonstrate a framework that transforms the traditional data cube model into a trajectory warehouse. As a proof-of-concept, the authors implement T-Warehouse, a system that incorporates all the required steps for Visual Trajectory Data Warehousing, from trajectory reconstruction and ETL processing to Visual OLAP analysis on mobility data.
APA, Harvard, Vancouver, ISO, and other styles
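The trajectory-reconstruction step of such an ETL pipeline can be sketched as follows: group raw position samples by moving object, order them in time, and split a stream into separate trajectories whenever the time gap between consecutive samples is too large. This is a simplified illustration under assumed input format and gap threshold, not T-Warehouse's actual algorithm.

```python
from collections import defaultdict

def reconstruct_trajectories(points, max_gap=300):
    """Groups (object_id, timestamp, x, y) samples into trajectories,
    splitting whenever consecutive samples are more than max_gap seconds apart."""
    by_obj = defaultdict(list)
    for oid, t, x, y in points:
        by_obj[oid].append((t, x, y))
    trajectories = []
    for oid, samples in by_obj.items():
        samples.sort()  # order each object's samples by timestamp
        current = [samples[0]]
        for prev, cur in zip(samples, samples[1:]):
            if cur[0] - prev[0] > max_gap:
                trajectories.append((oid, current))  # gap too large: close trajectory
                current = []
            current.append(cur)
        trajectories.append((oid, current))
    return trajectories

points = [
    ("bus1", 0, 0.0, 0.0), ("bus1", 60, 0.1, 0.0), ("bus1", 120, 0.2, 0.1),
    ("bus1", 3600, 5.0, 5.0),  # large time gap: starts a new trajectory
]
trajs = reconstruct_trajectories(points)
print([(oid, len(samples)) for oid, samples in trajs])  # [('bus1', 3), ('bus1', 1)]
```

The reconstructed trajectories would then be aggregated into cube cells (e.g. counts per spatial region and time interval) for OLAP analysis.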
45

E.J., Edim, and Inyang B.I. "Logistics Management and Marketing Performance of Small and Medium-Sized Manufacturing Firms." International Journal of Entrepreneurship and Business Innovation 5, no. 1 (February 9, 2022): 1–15. http://dx.doi.org/10.52589/ijebi-d1d3kf26.

Full text of the source
Abstract:
This study explored logistics management and marketing performance of small and medium-sized manufacturing firms. It aimed to assess the influence of order processing, transportation, inventory and warehouse management on the marketing performance of small and medium-sized manufacturing firms. As a cross-sectional study, primary data were obtained from 216 operators and personnel of small and medium-sized manufacturing firms using a structured questionnaire. The research instrument was validated through face and content validity method, while reliability was confirmed through Cronbach’s alpha method. The hypotheses developed for the study were tested using multiple linear regression. Consequently, the study revealed that order processing management, transportation management, inventory management and warehouse management had significant positive influences on the marketing performance of small and medium-sized manufacturing firms. Therefore, the study concluded that logistics management has a significant positive influence on marketing performance in the context of small and medium-sized manufacturing firms. Practical implications and a future research agenda were presented in light of the limitations of this study.
APA, Harvard, Vancouver, ISO, and other styles
46

Jiang, Tian, Zhongyi Wu, Yu Song, Xianglong Liu, Haode Liu, and Haozhi Zhang. "Sustainable Transport Data Collection and Application: China Urban Transport Database." Mathematical Problems in Engineering 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/879752.

Full text of the source
Abstract:
The transport policy-making processes of national and local governments should be supported by a comprehensive database to ensure sustainable and healthy development of urban transport. The China Urban Transport Database (CUTD) has been built to play such a role. This paper introduces the CUTD framework, including its user management, data warehouse and application modules. Considering the urban transport development features of Chinese cities, sustainable urban transport development indicators are proposed to evaluate the public transport service level in Chinese cities. An international urban transport knowledge base is developed as well. CUTD has been applied to urban transport data processing, urban transport management and urban transport performance evaluation in national and local transport research agencies, operators and governments in China, and it will be applied to a broader range of fields.
APA, Harvard, Vancouver, ISO, and other styles
47

Lestari, Octa Linda, Nani Kartinah, and Noor Hafizah. "Evaluasi Penyimpanan Obat di Gudang Farmasi RSUD Ratu Zalecha Martapura." Jurnal Pharmascience 7, no. 2 (October 31, 2020): 48. http://dx.doi.org/10.20527/jps.v7i2.7926.

Full text of the source
Abstract:
Storage of medicine is an activity to prevent drugs from physical and chemical damage and to guarantee drug quality. This study aims to evaluate drug storage at the pharmacy warehouse of Ratu Zalecha Martapura Hospital based on USAID indicators. This research is a descriptive, non-experimental study with a cross-sectional design, and data were collected prospectively.
The indicators evaluated are the accuracy of drug supplies, the accuracy of drug placement, the accuracy of drug collection, the processing time for drug requests, the utilization of storage space, and the level of security at storage locations according to USAID. The results showed that the accuracy rate of drug supplies was 100%, the accuracy rate of drug placement was 85%, and the accuracy of drug-taking was 97%. The time required by warehouse officers to process drug requests was 3 to 66 minutes, the percentage of warehouse space used to store drugs was 43%, and Standard Operating Procedures exist for the safety level of drugs in the storage warehouse. The evaluation based on USAID indicators therefore shows that drug storage in the pharmacy warehouse of Ratu Zalecha Martapura Hospital is still inefficient, because two indicators do not yet meet the standards: the accuracy of drug placement and the accuracy of drug-taking. Keywords: Drug Storage, Warehouse, USAID Indicator, Pharmaceutical Management
APA, Harvard, Vancouver, ISO, and other styles
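Accuracy indicators of this kind reduce to "matching items over total items, as a percentage". A minimal sketch with made-up item counts (the figures below are illustrative and are not the study's data):

```python
def inventory_accuracy(recorded: list[int], physical: list[int]) -> float:
    """Percent of drug items whose recorded stock matches the physical count,
    in the spirit of the USAID stock-record accuracy indicator (simplified)."""
    matches = sum(1 for rec, phys in zip(recorded, physical) if rec == phys)
    return 100.0 * matches / len(recorded)

# Hypothetical stock records vs. physical counts for five drug items.
recorded = [120, 40, 15, 60, 8]
counted  = [120, 40, 15, 60, 8]   # all match -> 100%
print(inventory_accuracy(recorded, counted))  # 100.0

# Placement accuracy works the same way: 1 = item found in its designated bin.
placed_correctly = [1, 1, 1, 0, 1]
placement_accuracy = 100.0 * sum(placed_correctly) / len(placed_correctly)
print(placement_accuracy)  # 80.0
```

Any indicator below its standard threshold (as placement and picking accuracy were in this study) then flags the storage process for corrective action.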
48

YERSHOVA, O., and О. STAVYTSKYI. "A Study of Modern Trends in Database and Data Repository Technologies as the Technological and Architectural Basis for the Creation of Software and Intelligent Systems by Means of Modern Programming Languages. Part 2." Scientific Bulletin of the National Academy of Statistics, Accounting and Audit, no. 1-2 (June 1, 2022): 76–85. http://dx.doi.org/10.31767/nasoa.1-2-2022.09.

Full text of the source
Abstract:
The article contains an analytical review of developments in database technologies, based on reports from eight meetings of database specialists held between 1988 and 2013. The analysis focuses on the most interesting predictions given in the reports: their realism, accuracy and pragmatism or, conversely, their utopianism or opportunism. The article consists of two parts. Part 1 is devoted to the analysis and evaluation of predictions made in the reports of the four earlier meetings, held in 1988, 1990, 1995 and 1996. These predictions concern the creation, development and use of decision support systems, database appliances, graphics processing units, operating systems, the structured query language interface, database applications, information distribution, universal database management systems, query optimization criteria, and intellectual analysis of data within database management systems. A detailed description is given of the research themes in the field of databases that had priority status at the time: recording and computation of data, security and confidentiality of data, replication and harmonization of data, structuring of data, intellectual analysis of data, and data warehouses. Part 2 is devoted to an analytical review of the predictions contained in the reports on the meetings held in 1998, 2003, 2008 and 2013. These predictions concern self-tuning database systems and the rethinking of traditional database architecture prompted by new hardware capabilities. Special emphasis is placed on the feasibility of manipulating structured and unstructured data within a DSS architecture and on support for Big Data technology, outlining the research themes aimed at realizing its potential.
APA, Harvard, Vancouver, ISO, and other styles
49

Teng, Yue. "Application Research of Computer Technology in the Decision Making of China's Supermarket Logistics Centers." Advanced Materials Research 442 (January 2012): 92–96. http://dx.doi.org/10.4028/www.scientific.net/amr.442.92.

Full text of the source
Abstract:
Logistics centers comprise a series of operations, from the purchasing of goods to preservation, allocation, delivery, packaging and distribution. Drawing on advanced management technology and modern information exchange networks, logistics centers effectively connect production and consumption, keeping all goods circulating in an efficient, harmonious and ordered state, which in turn achieves the best commercial and social efficiency. Currently, logistics management software is far from sufficient to satisfy customers' needs and still remains at the level of purchase, sales and inventory management. This research was conducted in light of the actual situation of China's information-based logistics industry, with the objective of improving the modern management level of the logistics industry. According to the demands of domestic logistics enterprises, we employed OLAP technology for data processing and data mining technology for decision-making analysis, which realized information-based management in logistics centers. On this basis, we realized basic purchase, sales and inventory management. Furthermore, we created a data warehouse so as to perform online analytical processing and data mining, realize intensive processing of data, and provide support for decision-making. Computer-aided decision-making software is the development direction of management software.
APA, Harvard, Vancouver, ISO, and other styles
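The decision-support layer on top of such a purchase-sales-inventory warehouse can be illustrated with a classic mining-style analysis: ABC classification of inventory items by cumulative share of annual value. The thresholds and item data below are illustrative, not from the paper.

```python
def abc_classify(annual_values: dict[str, float], a_cut=0.8, b_cut=0.95) -> dict[str, str]:
    """Classic ABC inventory analysis: items covering the top a_cut share of
    total annual value are class 'A', the next (b_cut - a_cut) 'B', the rest 'C'."""
    total = sum(annual_values.values())
    classes = {}
    cumulative = 0.0
    # Walk items from highest to lowest annual value, tracking cumulative share.
    for item, value in sorted(annual_values.items(), key=lambda kv: -kv[1]):
        cumulative += value
        share = cumulative / total
        classes[item] = "A" if share <= a_cut else ("B" if share <= b_cut else "C")
    return classes

# Hypothetical annual sales value per stocked item.
values = {"rice": 50000.0, "oil": 30000.0, "tea": 12000.0, "salt": 5000.0, "spice": 3000.0}
print(abc_classify(values))
```

Class-A items would then get the tightest stock control and reordering attention, which is the kind of actionable output a decision-support module is meant to produce.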
50

Fu, Wenjie, Hongtao Shen, Bo Feng, Rui Zhang, Jing Liang, Chong Sun, and Kalra Anurag. "Management of Power Marketing Audit Work Based on Tobit Model and Big Data Technology." Mobile Information Systems 2022 (August 8, 2022): 1–11. http://dx.doi.org/10.1155/2022/1375331.

Full text of the source
Abstract:
In order to improve the management effect of power marketing audit work, this paper combines big data technology to carry out the related data processing, changes the management mode of traditional power marketing audit work, analyzes the overall process and characteristics of current power marketing audit case selection, and designs an audit system. Moreover, this paper builds a data warehouse based on data mining technology and realizes the automatic generation of the index system. In addition, this paper chooses the Tobit model proposed by Tobin, a limited dependent variable model well suited to the analysis of power marketing audit indicators. Finally, this paper evaluates the effect of the designed method and system through simulation experiments. The experimental results show that the power marketing audit management system based on big data technology can play an important role in power audits.
APA, Harvard, Vancouver, ISO, and other styles
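The Tobit model mentioned above handles dependent variables censored at a limit (here, zero): uncensored observations contribute a normal density term to the likelihood, censored ones a normal CDF term. A minimal log-likelihood sketch on synthetic data, with no connection to the paper's audit indicators:

```python
import math

def tobit_loglik(y, x, b0, b1, sigma):
    """Log-likelihood of a Tobit model censored at 0 for a single regressor:
    latent y* = b0 + b1*x + e, e ~ N(0, sigma^2), observed y = max(0, y*)."""
    norm_cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    ll = 0.0
    for yi, xi in zip(y, x):
        mu = b0 + b1 * xi
        if yi > 0:  # uncensored: normal density of the latent variable
            z = (yi - mu) / sigma
            ll += -math.log(sigma) - 0.5 * math.log(2 * math.pi) - 0.5 * z * z
        else:       # censored at 0: probability that the latent variable <= 0
            ll += math.log(norm_cdf(-mu / sigma))
    return ll

# Synthetic data from latent y* = -1 + 1*x, observed y = max(0, y*).
x = [0, 1, 2, 3, 4]
y = [0, 0, 1, 2, 3]
good = tobit_loglik(y, x, b0=-1.0, b1=1.0, sigma=1.0)
bad = tobit_loglik(y, x, b0=0.0, b1=0.0, sigma=1.0)
print(good > bad)  # True: the data-generating parameters fit better
```

Maximizing this likelihood over (b0, b1, sigma) yields the Tobit estimates; the paper applies the same idea with audit indicators as regressors.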