A ready-made bibliography on the topic "Business – data processing – popular works"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

See lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Business – data processing – popular works".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in ".pdf" format and read its abstract online, whenever these are available in the work's metadata.

Journal articles on the topic "Business – data processing – popular works"

1

BALCIOGLU, Yavuz Selim, Melike ARTAR, and Oya ERDİL. "MAKİNE ÖĞRENİMİ VE TWITTER VERİLERİNİN ANALİZİ: COVID-19 SONRASI İŞ TRENDLERİNİN BELİRLENMESİ". SOCIAL SCIENCE DEVELOPMENT JOURNAL 7, no. 33 (September 15, 2022): 353–61. http://dx.doi.org/10.31567/ssd.697.

Full text
Abstract:
With the COVID-19 epidemic, there has been a great change in the routines of social and business life. These changing routines have brought new needs and demands with them. For business life to adapt to this new order and develop new strategies, current trends should be analyzed. In this study, the business trends most in demand on Twitter after COVID-19 were analyzed by machine learning. Textual expressions obtained through Twitter were converted into data by methods such as natural language processing. Analyzing these data correctly makes it possible to obtain important information that creates a roadmap for the targeted issues. Within the scope of the research, a total of 48,765 high-impact tweets were selected. Word frequency analysis was applied to the tweets belonging to the identified business trends. In addition, a word-analysis model based on SVM, one of the machine learning algorithms, was used. As a result of the analysis, online food services, online sales specialists, remote working, healthcare professionals, personal coaching, online training, and repair services emerged as popular lines of business.
Key words: Machine Learning, Trend Jobs, Neural Networks, Twitter, SVM, Covid-19
APA, Harvard, Vancouver, ISO, and other styles
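The word-frequency step this abstract describes can be sketched in a few lines of Python; the tweets and stopword list below are invented for illustration, not data from the study:

```python
from collections import Counter
import re

def word_frequencies(tweets, stopwords=frozenset()):
    """Count word occurrences across a collection of tweets,
    skipping stopwords -- the frequency-analysis step applied
    before trend classification."""
    counts = Counter()
    for tweet in tweets:
        for word in re.findall(r"[a-z']+", tweet.lower()):
            if word not in stopwords:
                counts[word] += 1
    return counts

tweets = [
    "Remote working is the new normal",
    "Hiring: online training specialist, remote working welcome",
]
top = word_frequencies(tweets, stopwords={"is", "the", "new"})
```

The `Counter` returned here would then feed a classifier; the study pairs this step with an SVM model.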
2

Bathla, Gourav, Himanshu Aggarwal, and Rinkle Rani. "A Novel Approach for Clustering Big Data based on MapReduce". International Journal of Electrical and Computer Engineering (IJECE) 8, no. 3 (June 1, 2018): 1711. http://dx.doi.org/10.11591/ijece.v8i3.pp1711-1719.

Full text
Abstract:
Clustering is one of the most important applications of data mining. It has attracted the attention of researchers in statistics and machine learning, and it is used in many applications such as information retrieval, image processing, and social network analytics. It helps the user to understand the similarity and dissimilarity between objects, and cluster analysis lets users understand complex and large data sets more clearly. Different types of clustering algorithms have been analyzed by various researchers. K-means is the most popular partitioning-based algorithm, as it provides good results through accurate calculation on numerical data; however, K-means works well for numerical data only. Big data is a combination of numerical and categorical data, and the K-prototype algorithm is used to deal with both: it combines the distances calculated from numeric and categorical attributes. With the growth of data due to social networking websites, business transactions, scientific calculation, etc., there are vast collections of structured, semi-structured, and unstructured data, so K-prototype needs to be optimized so that these varieties of data can be analyzed efficiently. In this work, the K-prototype algorithm is implemented on MapReduce. Experiments have shown that K-prototype implemented on MapReduce gives better performance on multiple nodes than on a single node; CPU execution time and speedup are used as the evaluation metrics for comparison. An intelligent splitter is also proposed, which splits mixed big data into its numerical and categorical parts. Comparison with traditional algorithms shows that the proposed algorithm works better at large scale.
APA, Harvard, Vancouver, ISO, and other styles
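The mixed numeric/categorical distance at the heart of the K-prototype algorithm described above can be sketched as follows. The attribute-index lists stand in for the paper's "intelligent splitter", and the records and the gamma weight are illustrative assumptions, not values from the study:

```python
def kprototype_distance(a, b, num_idx, cat_idx, gamma=1.0):
    """K-prototypes mixed-type distance: squared Euclidean distance
    on the numeric attributes plus gamma times the number of
    mismatching categorical attributes."""
    num = sum((a[i] - b[i]) ** 2 for i in num_idx)
    cat = sum(1 for i in cat_idx if a[i] != b[i])
    return num + gamma * cat

# Hypothetical mixed records: two numeric fields, one categorical.
x = (1.0, 2.0, "retail")
y = (1.0, 4.0, "finance")
d = kprototype_distance(x, y, num_idx=[0, 1], cat_idx=[2], gamma=0.5)
# numeric part (2-4)^2 = 4, one categorical mismatch weighted 0.5
```

In a MapReduce setting, the map phase would compute these distances from each record to the current prototypes, and the reduce phase would update the prototypes per cluster.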
3

Shalehanny, Shafira, Agung Triayudi, and Endah Tri Esti Handayani. "PUBLIC’S SENTIMENT ANALYSIS ON SHOPEE-FOOD SERVICE USING LEXICON-BASED AND SUPPORT VECTOR MACHINE". Jurnal Riset Informatika 4, no. 1 (December 12, 2021): 1–8. http://dx.doi.org/10.34288/jri.v4i1.287.

Full text
Abstract:
Technology keeps evolving with the times. Social media is part of everyone's daily life and a place where people write their opinions, whether reviews of or responses to products and services they have used. Twitter is one of the most popular social media platforms in Indonesia; according to Statista, it has reached 17.55 million users there. For the online business sector, knowing sentiment scores is very important for stepping up business. The use of machine learning, NLP (natural language processing), and text mining to uncover the real meaning of the opinion words given by customers is called sentiment analysis. Two methods were used to test the data: the first is lexicon-based and the second is the support vector machine (SVM). The data used for the sentiment analysis came from the keywords 'ShopeeFood' and 'syopifud'. The analysis yields an accuracy score of 87%, a precision score of 81%, a recall score of 75%, and an F1-score of 78%.
APA, Harvard, Vancouver, ISO, and other styles
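A minimal sketch of the lexicon-based half of the comparison above: each word carries a polarity weight, and the sign of the sum labels the text. The lexicon and the review sentence are invented for illustration:

```python
def lexicon_sentiment(text, lexicon):
    """Sum the polarity weights of known words; a positive total
    labels the text 'positive', a negative total 'negative'."""
    score = sum(lexicon.get(word, 0) for word in text.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# Hypothetical polarity lexicon for food-delivery reviews.
lexicon = {"great": 1, "fast": 1, "late": -1, "cold": -1}
label = lexicon_sentiment("delivery was fast but food cold late", lexicon)
```

An SVM, by contrast, learns the weights from labeled training data instead of taking them from a hand-built dictionary, which is why the two methods are compared.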
4

Šuman, Sabrina, Milorad Vignjević, and Tomislav Car. "Information extraction and sentiment analysis of hotel reviews in Croatia". Zbornik Veleučilišta u Rijeci 11, no. 1 (2023): 69–87. http://dx.doi.org/10.31784/zvr.11.1.5.

Full text
Abstract:
Today, the amount of data in and around business systems requires new ways of collecting and processing data. Discovering the sentiment of hotel reviews helps improve hotel services and overall online reputation, as potential guests largely consult existing reviews before booking. Reviews of Croatian hotels (three-, four-, and five-star categories) in the tourist regions of Croatia were therefore studied on the Booking.com platform for the years 2019 and 2021 (before and after the start of the COVID-19 pandemic). Hotels on the Adriatic coast were selected in the cities that several sources named as the most popular: Rovinj, Pula, Krk, Zadar, Šibenik, Split, Brač, Hvar, Makarska, and Dubrovnik. The reviews were divided into four groups according to the overall rating and further divided into positive and negative within each group, and the elements present in the positive and negative reviews of each of the four groups were identified. Using text-processing methods, the most frequent words and expressions (unigrams and bigrams) were identified separately for the 2019 and 2021 tourism seasons; these can be useful to hotel management in managing accommodation services and achieving competitive advantages. In the second part of the work, a machine learning (ML) model classifying reviews as positive or negative was built over all the collected reviews. The results of applying three different ML algorithms, with precision and recall performance, are described in the Results and Discussion section.
APA, Harvard, Vancouver, ISO, and other styles
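The unigram/bigram frequency extraction described above can be sketched as follows; the sample reviews are invented, and the tokenizer is a simplification of whatever the authors actually used:

```python
from collections import Counter
import re

def ngram_counts(reviews, n):
    """Count n-grams (n=1 gives unigrams, n=2 bigrams) across a
    collection of review texts."""
    counts = Counter()
    for review in reviews:
        tokens = re.findall(r"[a-z']+", review.lower())
        counts.update(tuple(tokens[i:i + n])
                      for i in range(len(tokens) - n + 1))
    return counts

reviews = ["very clean room", "clean room and friendly staff"]
bigrams = ngram_counts(reviews, 2)
```

Running this per season (2019 vs. 2021) and per rating group, as the study does, lets one compare which phrases rise or fall between the two periods.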
5

Kocherzhuk, D. V. "Sound recording in pop art: differencing the «remake» and «remix» musical versions". Aspects of Historical Musicology 14, no. 14 (September 15, 2018): 229–44. http://dx.doi.org/10.34064/khnum2-14.15.

Full text
Abstract:
Background. Contemporary audio art, in search of new sound design, and artists working in the music show business, in an attempt to draw attention to already well-known musical works, often turn to the forms of the "remake" or the "remix". However, there are certain disagreements in the understanding of these terms among artists, vocalists, producers, and professional sound-engineering teams. It therefore becomes relevant to clarify the concepts of "remake" and "remix" and to identify the key differences between these musical phenomena. The article presents positions, argued from the point of view of art criticism, concerning the misunderstanding of the terms "remake" and "remix" that are widely used in the media industry. The objective of the article is to explore the key differences between these principles of processing borrowed musical material in contemporary popular music, in particular in recording studios. Research methodology. In the course of the study, the two concepts were considered and compared using practical examples from the work of famous pop vocalists from Ukraine and abroad. The research methodology thus includes analysis of examples from Ukrainian, Russian, and world show business and of the existing definitions of the concepts "remake" and "remix"; comparison, verification, and coordination of the latter; and formalization and generalization of data in arriving at the results of the study. Modern strategies in the development of "remake" variance in musicians' work are taken into account, as are the latest trends in the creation of "remix" versions by world-class artists and performers of contemporary Ukrainian pop music. The results of the study. The results reveal the significance of the terminological pair "remix" and "remake" in the activities of the pop singer.
It was found that not all artists in the music industry understand the differences between these two similar terms. The article analyzes the main scholarly works of specialists in the audiovisual and musical arts and in philosophical and sociological areas who have addressed this issue in the structure of music, such as the studies by V. Tormakhova, V. Otkydach, V. Myslavskyi, I. Tarasova, Yu. Koliadych, and L. Zdorovenko, and on this basis reveals the essence of the concepts "remake" and "remix". The phenomenon of the "remake" is described in detail in the dictionary of V. Myslavskyi [5], where the author outlines the concept not only in musical art but also in the film industry and the structure of video games. The researcher I. Tarasova also notes the term "remake" in connection with the problem of protecting intellectual property and certifying the copyright of the performer and the composer who made the original version of the work [13]. At the same time, the term "remix" has not yet found a precise definition in musical scholarship. In contemporary youth pop culture, the principle of varying someone else's musical material called "remix" is associated with club dance music, while the principle of the "remake" is associated with the interpretation of another's musical work by other artist-singers. A "remake" is a new version or interpretation of a previously published work [5: 31]. Close to the concept of the "remake" is the term "cover version", which is now used even more often in the field of modern pop music: a repetition of the storyline laid down by the author or performer of the original version, but in another artist's own interpretation, while the texture and structure of the work are preserved. A. M. Tormakhova deciphers the term "remake" as a wide spectrum of changes in the musical material associated with the repetition of plot themes and techniques [14: 8].
In a general sense, "a wide spectrum of changes" covers not only the technical and emotional interpretation of the work, including the changes the performer makes in style, tempo, rhythm, and tessitura, but also an aspect of composing activity. For a composer, it is an expression of creative thinking, the embodiment of his own vision in the arrangement of the material. For a sound director and a sound engineer, a "remix" means working with computer programs and saturating the music with sound effects; for a producer and media corporations, it is a business. The "remake" is a rather controversial phenomenon in the music world. On the one hand, it is training for beginners in the field of art; on the other, the use of someone else's musical material can border on plagiarism and provoke conflict between artists. From the point of view of show business, a "remake" is only a method of reminding the public of a piece for the purpose of its commercial use, no matter who performs the song. Usually an agreement is concluded between the artists on the transfer or sharing of copyright and of the right to perform the work for profit. For example, the song "Diva" by F. Kirkorov is a "remake" of a work borrowed from another performer, the winner of the Eurovision Song Contest 1998, Dana International [17; 20], which is reflected in the relevant agreement on the commercial use of the musical material. A remix, as a music product, is created using computer equipment or the Live Looping music platform by processing the original and introducing various sound effects into the initial track. Interest in this principle of processing material arose in the 1980s, when dance, club, and DJ music entered mass use [18]. A remix can be considered a single piece of music taken as the main component and complemented in sequence by the components of the DJ profile.
These can be various samples, changes in the speed of the sound or the tonality of the work, "mutation" of the soloist's voice, or saturation of the voice with effects to achieve a uniform musical ensemble. The commercial activities of entertainment venues (clubs, concert halls, etc.) contribute to the development of the "remix" phenomenon. The remix principle is connected with the renewal of a musical "hit" whose popularity has gradually decreased and whose rotation during broadcast no longer reaches a certain number of listeners. Conclusions. The musical art of the 21st century is full of new experimental and creative phenomena, and the process by which modified forms of pop works are born deserves constant attention not only from representatives of the show business and audiovisual industries but also from musicologists. The popular musical phenomena of the "remix" and the "remake" differ in a number of ways. A "remix" is a technical form of interpreting a piece of music with the help of computer processing of both instrumental parts and voices; it is associated with the introduction of new, often very heterogeneous elements and with tempo changes. A musical product created according to this principle is intended for listeners of "club music" and is not related to the studio work of the performer. The main feature of the "remake" is the presence of the studio work of the sound engineer, composer, and vocalist; this work is aimed at modernizing the character of the song so that it differs from the original version. The texture of the original composition should basically be preserved, but it can be saturated with new sound elements, and the vocal line and harmony can be partially changed according to the interpreter's own scheme.
Introducing scholarly definitions of these terms into the common base of musical concepts, and further in-depth study of the theoretical and practical components behind them, will contribute to correct orientation in terminology among researchers in the artistic sphere and actor-vocalists.
APA, Harvard, Vancouver, ISO, and other styles
6

SAFONOVA, Margarita F., and Sergei M. REZNICHENKO. "Internal control models: Historical transformations and development prospects". International Accounting 26, no. 11 (November 16, 2023): 1292–316. http://dx.doi.org/10.24891/ia.26.11.1292.

Full text
Abstract:
Subject. This article examines the transformation of the system and models of internal control, as a guarantor of the economic security of organizations, regions, and countries, in its historical aspect and in relation to global changes in the world economy. Objectives. The article aims to determine further ways of developing internal control models and their conceptual foundations, taking into account the realities of the time. Methods. The study used case and chronological analyses and data systematization. Results. The article finds that internal control models are subject to continuous transformation under external economic influences and the development of automation tools. This unlocks process synergies: in other words, the more complex processes taking place in the economy, and the crisis phenomena that affect the conditions in which companies function, make it necessary to look for internal reserves to ensure the continuity of an economic entity's activities through constant control of risks and the search for ways to minimize them. Conclusions and Relevance. The article concludes that the most popular models of internal control are those based on a process-oriented approach and continuous analysis of the business processes of an economic entity, with further processing of the information obtained and transformation into a system-oriented model of internal control aimed at finding internal reserves. The results of the study can be used in the theory and practice of internal control, as well as for further research and practical application.
APA, Harvard, Vancouver, ISO, and other styles
7

Khomoviy, S., N. Tomilova, and M. Khomovju. "Realia of accounting automation in agricultural enterprises of Ukraine". Ekonomìka ta upravlìnnâ APK, no. 2 (143) (December 27, 2018): 115–21. http://dx.doi.org/10.33245/2310-9262-2018-143-2-115-121.

Full text
Abstract:
Accounting is an integral part of the functioning of any enterprise, and in modern economic conditions it is impossible to keep accounts without a computer and software. The introduction of sanctions against the manufacturer and a number of dealers of one of the most popular software products, "1S: Accounting", has confronted a considerable number of business entities with the problem of choosing accounting software that is permitted for use on the territory of Ukraine. The use of computer technologies and software products for accounting automation transforms the accounting system and accounting procedures, and is accompanied by an increase in the quality and efficiency of the management process. Automation software significantly improves the quality of accounting information processing in organizations. Based on a critical analysis of the specialist literature, the main advantages of using modern information technology to automate accounting procedures are: 1) processing and storing a large number of structurally identical units of accounting information; 2) the possibility of selecting the necessary information from a great mass of data; 3) reliable and error-free mathematical calculations; 4) rapid retrieval of the data needed for sound management decisions; 5) repeated reproduction of actions.
In an automated form of accounting, the technological process of processing records involves the following successive steps: 1) collection and registration of primary data for subsequent automated processing; 2) formation of arrays of records on electronic media, including a journal of economic operations, the structure of synthetic and analytical accounts, directories of analytical objects, permanent information, etc.; 3) production, at the user's request, of the necessary accounting data for the reporting period in the form of synthetic accounting registers, analytical tables, and account statements. An overview of the major software products widely used in Ukraine ("Parus accounting", "SAP", "Master: accounting", "IS-pro") showed that, despite the restrictions, most enterprises, including those providing outsourcing services, continue to use the "1S: Accounting" program. From our point of view, the best accounting program of Ukrainian production is "Master: accounting", which could completely replace "1S: Accounting" in the field of agriculture. The software product "Master: agro" for the accounts of agribusinesses meets the requirements of the current legislation of Ukraine and is fully adapted to the Ukrainian market. It consists of functional modules covering all areas of accounting and tax accounting. An important advantage of "Master: accounting" is also its training program for partners, organized in 12 classes, whose main purpose is to give partners practical skills in installing the program and configuring its modules, and to teach the basic programming tools and settings for solving accounting tasks. The course is divided into three levels. The first level, "user", is designed for anyone who may potentially work with the program.
The second level, "consultant", covers automatic set-up and the training of users. The third, "developer", is for those companies and partners who need deeper adaptation of the product to their working processes. Key words: automation, program, computer technologies, enterprise accounting.
APA, Harvard, Vancouver, ISO, and other styles
8

Wiriyakun, Chawit, and Werasak Kurutach. "Improving misspelled word solving for human trafficking detection in online advertising data". International Journal of Electrical and Computer Engineering (IJECE) 13, no. 6 (December 1, 2023): 6558. http://dx.doi.org/10.11591/ijece.v13i6.pp6558-6567.

Full text
Abstract:
Social media is used by pimps to advertise adult services because of its easy accessibility. This calls for computational models that help law enforcement authorities detect human trafficking activities. The machine learning (ML) models used to detect these activities mostly rely on text classification and often omit the correction of misspelled words, creating a risk of prediction errors. An improved data-processing approach is therefore one strategy for enhancing the efficiency of human trafficking detection. This paper presents a novel approach to resolving spelling mistakes: it selects misspelled words and then replaces them with popular words having the same meaning, based on an estimate of the probability of words and context as used in human trafficking advertisements. The applicability of the proposed approach was demonstrated on a labeled human trafficking dataset using three classification models: k-nearest neighbor (KNN), naive Bayes (NB), and multilayer perceptron (MLP). The higher prediction accuracy achieved with the proposed method shows that it improves alerts on human trafficking and outperforms the alternatives. The approach is potentially applicable to other online-advertising datasets and domains.
APA, Harvard, Vancouver, ISO, and other styles
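The paper estimates replacement probabilities from the ad corpus itself; as a hedged approximation of that idea, the sketch below replaces a misspelled word with the most frequent in-vocabulary candidate within one edit of it (a Norvig-style simplification, with an invented frequency table):

```python
import string

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word, freq):
    """Replace an out-of-vocabulary word with the most frequent
    in-vocabulary candidate one edit away; keep it unchanged if
    no candidate is found."""
    if word in freq:
        return word
    candidates = edits1(word) & freq.keys()
    return max(candidates, key=freq.get) if candidates else word

# Hypothetical corpus frequencies, not data from the paper.
freq = {"massage": 120, "message": 300}
fixed = correct("massge", freq)
```

The real model additionally conditions on the surrounding context of the ad, which this frequency-only sketch omits.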
9

Trivedi, Shrawan Kumar, and Shubhamoy Dey. "Analysing user sentiment of Indian movie reviews". Electronic Library 36, no. 4 (August 6, 2018): 590–606. http://dx.doi.org/10.1108/el-08-2017-0182.

Full text
Abstract:
Purpose To be sustainable and competitive in the current business environment, it is useful to understand users' sentiment towards products and services. This critical task can be achieved via natural language processing and machine learning classifiers. This paper aims to propose a novel probabilistic committee selection classifier (PCC) to analyse and classify the sentiment polarities of movie reviews. Design/methodology/approach An Indian movie review corpus is assembled for this study, and another publicly available movie review polarity corpus is used to validate the results. The greedy stepwise search method is used to extract the features/words of the reviews. The performance of the proposed classifier is measured using different metrics, such as F-measure, false positive rate, receiver operating characteristic (ROC) curve, and training time. Further, the proposed classifier is compared with other popular machine-learning classifiers, such as Bayesian, Naïve Bayes, Decision Tree (J48), Support Vector Machine, and Random Forest. Findings The results show that the proposed classifier is good at predicting the positive or negative polarity of movie reviews. Its accuracy and ROC-curve value are found to be the best of all the classifiers tested in this study. It is also efficient at identifying positive sentiments, giving low false positive rates for both the Indian Movie Review and Review Polarity corpora used in this study. The training time of the proposed classifier is slightly higher than that of Bayesian, Naïve Bayes, and J48. Research limitations/implications Only movie review sentiments written in English are considered.
In addition, the proposed committee selection classifier is built only from a committee of probabilistic classifiers; other classifier committees could also be built, tested, and compared with the present experimental scenario. Practical implications The novel probabilistic approach proposed here for classifying movie reviews is found to be highly effective in comparison with other state-of-the-art classifiers; it may be tested in different applications and may provide new insights for developers and researchers. Social implications The proposed PCC may be used to classify different product reviews and hence may help organizations assess users' reviews of specific products or services. By using authentic positive and negative sentiments of users, the credibility of a specific product, service, or event may be enhanced. PCC may also be applied to other applications, such as spam detection, blog mining, news mining, and various other data-mining tasks. Originality/value The constructed PCC is novel and was tested on Indian movie review data.
APA, Harvard, Vancouver, ISO, and other styles
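The abstract does not specify how the PCC combines or selects its members, so the sketch below shows only a generic probability-averaging committee with stand-in member classifiers; it illustrates the committee idea, not the authors' selection mechanism:

```python
def committee_predict(prob_fns, text):
    """Average the positive-class probabilities produced by several
    probabilistic classifiers and return the committee's label plus
    the averaged probability."""
    p = sum(fn(text) for fn in prob_fns) / len(prob_fns)
    return ("positive" if p >= 0.5 else "negative"), p

# Stand-ins for trained probabilistic classifiers (hypothetical:
# each would normally be, e.g., a fitted Naive Bayes model).
members = [lambda t: 0.9, lambda t: 0.6, lambda t: 0.3]
label, p = committee_predict(members, "a gripping, well-acted film")
```

Averaging calibrated probabilities, rather than taking a hard majority vote, lets a confident member outweigh two lukewarm ones.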
10

Chen, Weiru, Jared Oliverio, Jin Ho Kim, and Jiayue Shen. "The Modeling and Simulation of Data Clustering Algorithms in Data Mining with Big Data". Journal of Industrial Integration and Management 04, no. 01 (March 2019): 1850017. http://dx.doi.org/10.1142/s2424862218500173.

Full text
Abstract:
Big Data is a popular cutting-edge technology nowadays, and its techniques and algorithms are expanding into different areas, including engineering, biomedicine, and business. Due to the high volume and complexity of Big Data, it is necessary to apply data pre-processing methods before data mining. These pre-processing methods include data cleaning, data integration, data reduction, and data transformation, and data clustering is the most important step of data reduction. With data clustering, mining on the reduced data set is more efficient yet still produces quality analytical results. This paper presents the different data clustering methods and related algorithms for data mining with Big Data. Data clustering can increase the efficiency and accuracy of data mining.
APA, Harvard, Vancouver, ISO, and other styles
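As an illustration of the clustering step the abstract singles out, here is a plain Lloyd's-algorithm k-means on one-dimensional toy data (the data and parameters are invented; surveyed Big Data variants parallelize and approximate these same two steps):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm on 1-D points: repeatedly assign each
    point to its nearest centroid, then move each centroid to the
    mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Keep the old centroid if a cluster ever ends up empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two well-separated toy clusters around 1.0 and 10.0.
data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
centers = kmeans(data, k=2)
```

After clustering, mining can proceed on cluster representatives instead of the full data set, which is the data-reduction role the paper assigns to this step.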

Doctoral dissertations on the topic "Business – data processing – popular works"

1

Hearn, Anthony Michael. "Management information systems: a proposal for an integrated system for a Ferroalloy production facility". Thesis, Stellenbosch: University of Stellenbosch, 1996. http://hdl.handle.net/10019.1/5681.

Full text
Abstract:
Thesis (MBA (Business Management))--University of Stellenbosch, 1996.
ENGLISH ABSTRACT: The ferroalloy industry is, in contrast to the operationally complicated chemical industry, very reliant on the wealth of experience that has been built up by the operating personnel over a long period of time. The industry has not been at the forefront of technical development and has, in many respects, lagged behind in technological development. Information technology is one such area that has not received its fair share of attention. This study resulted from the requirement that the control systems of the submerged arc furnaces at the Samancor Meyerton Works be integrated in such a way that the plant subsystem controllers could operate off a single database. This would ensure that the reliance of the operation on the experience of personnel could be reduced by the judicious application of data from the process. The integration was expanded to include the control of the electricity generation plant that will utilize the waste gasses from the submerged arc furnaces to generate the electricity. The boundaries of the study were subsequently expanded to include a proposal for the integration of the control systems into a management information system for the Meyerton Works. The study gives consideration to the theory underlying management information systems after the strategic issues of the Manganese Division of Samancor are discussed. The theoretical aspects of management information systems together with the strategic issues of the Manganese Division are brought together to form the practical proposal of the integrated control and management information system. The case studies considered are based on two incidents that occurred on one of the submerged arc furnaces where the resulting financial losses were substantial. An integrated control system would have reduced the financial losses significantly. 
Finally, the recommendations of the study are firstly, that the management information system as proposed be expanded to include the furnaces that were not originally envisaged to have their control systems integrated, secondly that the maintenance management function be integrated with the control systems and management information system, and finally that the production planning system be included in the management information system so as to give substance to the control and optimization of the flow of manganese units from the mines to the customer. This will entrench the position of Samancor as a world class supplier of manganese units.
AFRIKAANS SUMMARY: The ferroalloy industry, in contrast to the chemical industry with its sophisticated operations, is dependent on staff experience built up over a long period. The industry has not been a pioneer in technical development and has largely lagged behind in this field. Information technology is one of the areas that has not received the desired amount of attention. This study had its origin in the need to integrate the control systems of the submerged-arc furnaces of Samancor's Meyerton Works so that the controllers of the plant subsystems could function from a single database. This would have had the result of reducing the operation's dependence on the experience of the personnel. The integration concept was extended to include the control of the electricity generation plant, which uses the off-gases of the furnaces. The scope of the study was later broadened to serve as a proposal for the integration of the control systems into a management information system for the Meyerton Works. After the strategic issues of the Manganese Division have received attention, the theory of management information systems is discussed. The theory of management information systems and the strategic issues of the Manganese Division are brought together to form the proposal for the integrated control system and management information system. The case studies discussed are based on two incidents that occurred on one of the furnaces and caused enormous financial losses. Integrated control systems would have limited these losses.
The recommendations made are, first, that the management information system as proposed be expanded to include the furnaces whose control systems were not originally to be integrated; second, that the maintenance management system be integrated with the management information system; and third, that the production planning system be included in the management information system. In this way the movement of manganese units from the mines to the customers will be optimized, strengthening Samancor's position as a world-class supplier of manganese units.

Books on the topic "Business – data processing – popular works"

1

Marcia, Kaufman, Halper Fern, and Kirsch Daniel, eds. Hybrid cloud for dummies. Hoboken, New Jersey: John Wiley & Sons, Inc., 2012.

2

Pavlenko, Mykola. Akademik Hlushkov: Pohli͡a︡d u maĭbutni͡e︡. Kyïv: Vyd-vo T͡S︡K LKSMU "Molodʹ", 1988.

3

Kirkby, Dave. Spectrum maths.: For National Curriculum levels 2-5. London: CollinsEducational, 1993.

4

Zarubin, V. S. Kompʹi͡u︡ternye tekhnologii v dei͡a︡telʹnosti organov vnutrennikh del: Informat͡s︡ionno-spravochnoe posobie. Voronezh: Izd-vo "VVSh MVD Rossii", 1997.

5

Kritzinger, P. S. What is computer science? Cape Town: University of Cape Town, 1986.

6

Freeman, Terry A. Advanced Microsoft Works applications on the Macintosh. Radnor, Pa: Compute! Books, 1989.

7

Flynn, Brian. Compute!'s mastering PC Works. Greensboro, N.C: Compute! Books, 1988.

8

A, Manning William, ed. Computers and information processing. Danvers, Mass: Boyd & Fraser Pub. Co., 1994.

9

Allen, Warren W. Spreadsheet accounting for the microcomputer: Using Microsoft Works Macintosh version. Cincinnati, OH: South Western Pub. Co., 1992.

10

A, Manning William, and Quasney James S, eds. Computers and information processing: QuickBASIC edition. Danvers, Mass: Boyd & Fraser Pub. Co., 1994.


Book chapters on the topic "Business – data processing – popular works"

1

Sallin, Marc, Martin Kropp, Craig Anslow, James W. Quilty, and Andreas Meier. "Measuring Software Delivery Performance Using the Four Key Metrics of DevOps". In Lecture Notes in Business Information Processing, 103–19. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78098-2_7.

Abstract:
The Four Key Metrics of DevOps have become very popular for measuring IT performance and DevOps adoption. However, the measurement of the four metrics deployment frequency, lead time for change, time to restore service and change failure rate is often done manually and through surveys, with only a few data points. In this work we evaluated how the Four Key Metrics can be measured automatically and developed a prototype for the automatic measurement of the Four Key Metrics. We then evaluated whether the measurement is valuable for practitioners in a company. The analysis shows that the chosen measurement approach is suitable and the results valuable for the team with respect to measuring and improving the software delivery performance.
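The four metrics described in this abstract can be computed directly from deployment records; a minimal Python sketch with a hypothetical data layout (not the authors' prototype; time to restore service is omitted because it requires incident data):

```python
from datetime import datetime

# Hypothetical deployment records: (commit_time, deploy_time, deploy_failed)
deployments = [
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 12), False),
    (datetime(2024, 1, 2, 9), datetime(2024, 1, 3, 9), True),
    (datetime(2024, 1, 4, 9), datetime(2024, 1, 4, 10), False),
]

def four_key_metrics(deployments, period_days=7):
    """Compute deployment frequency, mean lead time and change failure rate."""
    n = len(deployments)
    # Lead time for change: hours from commit to deployment
    lead_times = [(dep - com).total_seconds() / 3600 for com, dep, _ in deployments]
    failures = sum(1 for _, _, failed in deployments if failed)
    return {
        "deployment_frequency_per_day": n / period_days,
        "mean_lead_time_hours": sum(lead_times) / n,
        "change_failure_rate": failures / n,
    }

metrics = four_key_metrics(deployments)
```

An automated pipeline would populate `deployments` from CI/CD and version-control APIs rather than a literal list.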
2

Petrik, Dimitri, Anne Untermann, and Henning Baars. "Functional Requirements for Enterprise Data Catalogs: A Systematic Literature Review". In Lecture Notes in Business Information Processing, 3–18. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-53227-6_1.

Abstract:
Organizations must gain insights into often fragmented and isolated data assets and overcome data silos to profitably leverage data as a strategic resource. Data catalogs are an increasingly popular approach to achieving these objectives. Despite the perceived importance of data catalogs in practice, relatively little research exists on how to design corporate data catalogs. It is also obvious that existing market solutions have to be customized to specific organizational needs. This paper presents a list of functional requirements for enterprise data catalogs extracted from a systematic literature review. The requirements can be used to frame and guide more specific research on data catalogs as well as for system selection and customization in practice.
3

Berti, Alessandro, Gyunam Park, Majid Rafiei, and Wil M. P. van der Aalst. "An Event Data Extraction Approach from SAP ERP for Process Mining". In Lecture Notes in Business Information Processing, 255–67. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_19.

Abstract:
The extraction, transformation, and loading of event logs from information systems is the first and the most expensive step in process mining. In particular, extracting event logs from popular ERP systems such as SAP poses major challenges, given the size and the structure of the data. Open-source support for ETL is scarce, while commercial process mining vendors maintain connectors to ERP systems supporting ETL of a limited number of business processes in an ad hoc manner. In this paper, we propose an approach to facilitate event data extraction from SAP ERP systems. In the proposed approach, we store event data in the format of object-centric event logs that efficiently describe executions of business processes supported by ERP systems. To evaluate the feasibility of the proposed approach, we have developed a tool implementing it and conducted case studies with a real-life SAP ERP system.
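The object-centric event log idea mentioned above can be illustrated with a toy transformation: flat rows from a hypothetical ERP change table become events that each reference several objects. The field names below are illustrative, not SAP's:

```python
# Hypothetical ERP change records: each row names the document and related objects
erp_rows = [
    {"timestamp": "2023-05-01T10:00", "activity": "Create Order",
     "order": "O1", "items": ["I1", "I2"]},
    {"timestamp": "2023-05-02T09:30", "activity": "Create Invoice",
     "order": "O1", "items": ["I1"]},
]

def to_ocel_events(rows):
    """Turn flat ERP rows into object-centric events: one event may
    reference several objects of several types (order, item, ...)."""
    events = []
    for i, row in enumerate(rows):
        events.append({
            "id": f"e{i}",
            "activity": row["activity"],
            "timestamp": row["timestamp"],
            # Object map: all objects touched by this event, across types
            "omap": [row["order"], *row["items"]],
        })
    return events

events = to_ocel_events(erp_rows)
```

Unlike a classical event log, no single case notion is forced: the same event can later be viewed from the order's or from each item's perspective.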
4

Pery, Andrew, Majid Rafiei, Michael Simon, and Wil M. P. van der Aalst. "Trustworthy Artificial Intelligence and Process Mining: Challenges and Opportunities". In Lecture Notes in Business Information Processing, 395–407. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_29.

Abstract:
The premise of this paper is that compliance with Trustworthy AI governance best practices and regulatory frameworks is an inherently fragmented process spanning diverse organizational units, external stakeholders, and systems of record, resulting in process uncertainties and compliance gaps that may expose organizations to reputational and regulatory risks. Moreover, there are complexities associated with meeting the specific dimensions of Trustworthy AI best practices such as data governance, conformance testing, quality assurance of AI model behaviors, transparency, accountability, and confidentiality requirements. These processes involve multiple steps, hand-offs, re-works, and human-in-the-loop oversight. In this paper, we demonstrate that process mining can provide a useful framework for gaining fact-based visibility into AI compliance process execution, surfacing compliance bottlenecks, and providing an automated approach to analyze, remediate and monitor uncertainty in AI regulatory compliance processes.
5

Cremerius, Jonas, Luise Pufahl, Finn Klessascheck, and Mathias Weske. "Event Log Generation in MIMIC-IV Research Paper". In Lecture Notes in Business Information Processing, 302–14. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-27815-0_22.

Abstract:
Public event logs are valuable for process mining research to evaluate process mining artifacts and identify new and promising research directions. Initiatives like the BPI Challenges have provided a series of real-world event logs, including healthcare processes, and have significantly stimulated process mining research. However, the healthcare-related logs provide only excerpts of patient visits in hospitals. The Medical Information Mart for Intensive Care (MIMIC)-IV database is a publicly available relational database that includes data on patient treatment in a tertiary academic medical center in Boston, USA. It covers complex end-to-end care processes in a hospital. To facilitate the use of MIMIC-IV in process mining and to increase the reproducibility of research with MIMIC, this paper provides a framework consisting of a method, an event hierarchy, and a log extraction tool for extracting useful event logs from the MIMIC-IV database. We demonstrate the framework on a heart failure treatment process, show how logs at different abstraction levels can be generated, and provide configuration files to generate event logs of previous process mining works with MIMIC.
6

Alzboon, Mowafaq Salem, Muhyeeddin Kamel Alqaraleh, Emran Mahmoud Aljarrah, and Saleh Ali Alomari. "Semantic Image Analysis on Social Networks and Data Processing". In Advances in Business Information Systems and Analytics, 189–214. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-7998-9016-4.ch009.

Abstract:
In the last decade, a significant number of people have become active social network users. People utilize Twitter, Facebook, LinkedIn, and Google+. Facebook users generate a lot of data, and photos can teach people a lot. Image analysis has traditionally focused on audience emotions. Photographic emotions are essentially subjective and vary among observers. There are numerous uses for its most popular feature. People, on the other hand, use social media and applications, which must handle noise, dynamics, and size. Shared text, pictures, and videos have also been a focus of network analysis study. Statistics, rules, and trend analysis are all available in massive datasets. They can be used for data manipulation and retrieval, mathematical modeling, and data pre-processing and interpretation. This chapter examines social networks, basic concepts, and social network analysis components. A further study topic is picture usage in social networks. Next, a novel method for analyzing social networks, namely semantic networks, is presented. Finally, themes and routes are defined.
7

Vieira, Armando. "Business Applications of Deep Learning". In Natural Language Processing, 440–62. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0951-7.ch023.

Abstract:
Deep Learning (DL) took Artificial Intelligence (AI) by storm and has infiltrated business at an unprecedented rate. Access to vast amounts of data, extensive computational power, and a new wave of efficient learning algorithms helped Artificial Neural Networks achieve state-of-the-art results in almost all AI challenges. DL is the cornerstone technology behind products for image recognition and video annotation, voice recognition, personal assistants, automated translation and autonomous vehicles. DL works similarly to the brain by extracting high-level, complex abstractions from data in a hierarchical and discriminative or generative way. The implications of DL-supported AI in business are tremendous, shaking many industries to their foundations. In this chapter, I present the most significant algorithms and applications, including Natural Language Processing (NLP), image and video processing and finance.
8

Savvas, Ilias K., Georgia N. Sofianidou, and M.-Tahar Kechadi. "Applying the K-Means Algorithm in Big Raw Data Sets with Hadoop and MapReduce". In Business Intelligence, 1220–43. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9562-7.ch062.

Abstract:
Big data refers to data sets whose size is beyond the capabilities of most current hardware and software technologies. The Apache Hadoop software library is a framework for distributed processing of large data sets, HDFS is a distributed file system that provides high-throughput access to data-driven applications, and MapReduce is a software framework for distributed computing over large data sets. Huge collections of raw data require fast and accurate mining processes in order to extract useful knowledge. One of the most popular techniques of data mining is the K-means clustering algorithm. In this study, the authors develop a distributed version of the K-means algorithm using the MapReduce framework on the Hadoop Distributed File System. The theoretical and experimental results of the technique prove its efficiency; thus, HDFS and MapReduce can be applied to big data with very promising results.
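The map and reduce phases of the distributed K-means described above can be sketched as a toy single-process Python illustration of the MapReduce logic (one-dimensional data; not the authors' Hadoop implementation):

```python
def nearest(point, centroids):
    """Index of the centroid closest to the point (1-D squared distance)."""
    return min(range(len(centroids)), key=lambda i: (point - centroids[i]) ** 2)

def kmeans_mapreduce_round(points, centroids):
    """One K-means iteration expressed as a map phase and a reduce phase."""
    # Map: emit (centroid index, point) for each point's nearest centroid
    mapped = [(nearest(p, centroids), p) for p in points]
    # Reduce: average the points assigned to each centroid key
    sums, counts = {}, {}
    for idx, p in mapped:
        sums[idx] = sums.get(idx, 0.0) + p
        counts[idx] = counts.get(idx, 0) + 1
    return [sums[i] / counts[i] if i in sums else c
            for i, c in enumerate(centroids)]

points = [1.0, 1.2, 0.8, 10.0, 10.4, 9.6]
centroids = [0.0, 5.0]
for _ in range(5):
    centroids = kmeans_mapreduce_round(points, centroids)
```

On Hadoop, the map and reduce functions run distributed over HDFS blocks, and the driver repeats rounds until the centroids stop moving.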
9

"Introduction to Data Mining". In Principles and Theories of Data Mining With RapidMiner, 1–34. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-4730-7.ch001.

Abstract:
Data mining is a powerful and increasingly popular tool that uses machine learning to uncover patterns in data and help businesses stay competitive. Data scientists are trained to understand business objectives and select the correct techniques for data exploration and pre-processing. After formulating the business question, data mining methods are chosen and evaluated to determine their ability to fit the data set and answer the query. Results are then reported back to the business owner. Data mining is an essential part of modern business, allowing the organization to keep up with the competition and remain successful. With its growing popularity, the need for data scientists is rapidly increasing.
10

Krishna, Gopal. "Social Networking Data Analysis Tools and Services". In Advances in Business Information Systems and Analytics, 19–34. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5097-6.ch002.

Abstract:
Social networks have drawn remarkable attention from IT professionals and researchers in data sciences. They are the most popular medium for social interaction. Online social networking (OSN) can be defined as involving networking for fun, business, and communication. Social networks have emerged as universally accepted communication means and boomed in turning this world into a global town. OSN media are generally known for broadcasting information, activities posting, contents sharing, product reviews, online pictures sharing, professional profiling, advertisements and ideas/opinion/sentiment expression, or some other stuff based on business interests. For the analysis of the huge amount of data, data mining techniques are used for identifying the relevant knowledge from the huge amount of data that includes detecting trends, patterns, and rules. Data mining techniques, machine learning, and statistical modeling are used to retrieve the information. For the analysis of the data, three methods are used: data pre-processing, data analysis, and data interpretation.

Conference abstracts on the topic "Business – data processing – popular works"

1

Wang, Ke, and Wadha Mubarak Al Araimi. "BI Dashboarding Application in Reservoir Simulation". In Gas & Oil Technology Showcase and Conference. SPE, 2023. http://dx.doi.org/10.2118/214219-ms.

Abstract:
A Business Intelligence (BI) tool is a type of software that is used to gather, process, analyze and visualize a large volume of data, whether historical, live or forecast data for the future. The objective of implementing a BI tool is to create interactive reports, generate actionable business insights, and simplify and accelerate the decision-making process. Depending on the size and the maturity of their fields, reservoir engineers often have to deal with a tremendous quantity of data from various categories, such as simulation input or output, within challenging timelines. It is not uncommon that simulation REs spend the majority of their time in data pre/post-processing, grouping, filtration and setting up visualization templates before being able to perform some value-adding results analysis and eventually improve the model forecasts. This paper focuses on the applications of dashboarding in reservoir engineering and simulation work using a popular BI software (Spotfire®) that outperforms some industry-standard software in many ways, with the objective of promoting the BI dashboarding culture within the reservoir engineer population. Depending on the purpose, various types of dashboard can be built, which allow RE users to better discover patterns and unveil the real meaning behind the data. In this paper, three templates (including history matching quantitative assessment, scenario comparator and PVT data QC) currently adopted by the Abu Dhabi National Oil Company (ADNOC) Onshore asset are illustrated. Benchmarking against some conventional industry-standard tools is also performed to highlight their added value. Finally, in the context of a 2G & R integrated model review for a multi-billion-barrel reservoir, concrete examples focusing on BI dashboarding-assisted well-by-well history matching are illustrated, showcasing how simulation REs can boost their daily work efficiency and create added value for the organization.
2

"Changing Paradigms of Technical Skills for Data Engineers". In InSITE 2018: Informing Science + IT Education Conferences: La Verne California. Informing Science Institute, 2018. http://dx.doi.org/10.28945/4001.

Abstract:
Aim/Purpose: [This Proceedings paper was revised and published in the 2018 issue of the journal Issues in Informing Science and Information Technology, Volume 15] This paper investigates the new technical skills that are needed for Data Engineering. Past research is compared to new research which creates a list of the top 20 technical skills required by a Data Engineer. The growing availability of Data Engineering jobs is discussed. The research methodology describes the gathering of sample data and then the use of Pig and MapReduce on AWS (Amazon Web Services) to count occurrences of Data Engineering technical skills from 100 Indeed.com job advertisements in July, 2017.
Background: A decade ago, Data Engineering relied heavily on the technology of Relational Database Management Systems (RDBMS). For example, Grisham, P., Krasner, H., and Perry D. (2006) described an Empirical Software Engineering Lab (ESEL) that introduced Relational Database concepts to students with hands-on learning that they called "Data Engineering Education with Real-World Projects." However, as seismic improvements occurred in the processing of large distributed datasets, big data analytics has moved to the forefront of the IT industry. As a result, the definition of Data Engineering has broadened and evolved to include newer technology that supports the distributed processing of very large amounts of data (e.g. the Hadoop Ecosystem and NoSQL Databases). This paper examines the technical skills that are needed to work as a Data Engineer in today's rapidly changing technical environment. Research is presented that reviews 100 job postings for Data Engineers from Indeed (2017) during the month of July, 2017 and then ranks the technical skills in order of importance. The results are compared to earlier research by Stitch (2016) that ranked the top technical skills for Data Engineers in 2016 using LinkedIn to survey 6,500 people who identified themselves as Data Engineers.
Methodology: A sample of 100 Data Engineering job postings was collected and analyzed from Indeed during July, 2017. The job postings were pasted into a text file and then related words were grouped together to make phrases. For example, the word "data" was put into context with other related words to form phrases such as "Big Data", "Data Architecture" and "Data Engineering". A text editor was used for this task, and the find/replace functionality of the text editor proved to be very useful for this project. After making phrases, the large text file was uploaded to the Amazon cloud (AWS) and a Pig batch job using MapReduce was leveraged to count the occurrences of phrases and words within the text file. The resulting phrases/words with occurrence counts were downloaded to a Personal Computer (PC) and then loaded into an Excel spreadsheet. Using a spreadsheet enabled the phrases/words to be sorted by occurrence count and facilitated the filtering out of irrelevant words. Another task to prepare the data involved combining phrases or words that were synonymous. For example, the occurrence count for the acronym ELT and the occurrence count for the acronym ETL were added together to make an overall ELT/ETL occurrence count. ETL is a Data Warehousing acronym for Extracting, Transforming and Loading data. This task required knowledge of the subject area. Also, some words were counted in lower case and the same word was also counted in mixed or upper case, producing two or three occurrence counts for the same word. These different counts were added together to make an overall occurrence count for the word (e.g. word occurrence counts for Python and python were added together). Finally, the Indeed occurrence counts were sorted to allow for the identification of a list of the top 20 technical skills needed by a Data Engineer.
Contribution: Provides new information about the technical skills needed by Data Engineers.
Findings: Twelve of the 20 Stitch (2016) report phrases/words matched the technical skills mentioned in the Indeed research. I considered C, C++ and Java a match to the broader category of Programming in the Indeed data. Although the ranked order of the two lists did not match, the top five ranked technical skills for both lists are similar. The reader of this paper might consider the skills of SQL, Python and Hadoop/HDFS to be very important technical skills for a Data Engineer. Although the programming language R is very popular with Data Scientists, it did not make the top 20 skills for Data Engineering; it was in the overall list from Indeed. The R programming language is oriented towards analytical processing (e.g. used by Data Scientists), whereas the Python language is a scripting and object-oriented language that facilitates the creation of data pipelines (e.g. used by Data Engineers). Because the data was collected one year apart and from very different data sources, the timing of the data collection and the different data sources could account for some of the differences in the ranked lists. It is worth noting that the Indeed research ranked list introduced the technical skills of Design Skills, Spark, AWS (Amazon Web Services), Data Modeling, Kafka, Scala, Cloud Computing, Data Pipelines, APIs and AWS Redshift Data Warehousing to the top 20 ranked technical skills list. The Stitch (2016) report did not have matches to the Indeed (2017) sample data for Linux, Databases, MySQL, Business Intelligence, Oracle, Microsoft SQL Server, Data Analysis and Unix. Although many of these Stitch top 20 technical skills were on the Indeed list, they did not make the top 20 ranked technical skills.
Recommendations for Practitioners: Some of the skills needed for Database Technologies are transferable to Data Engineering.
Recommendation for Researchers: None.
Impact on Society: There is not much peer-reviewed literature on the subject of Data Engineering; this paper adds new information to the subject area.
Future Research: I'm developing a Specialization in Data Engineering for the MS in Data Science degree at our university.
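The counting approach described in the methodology (grouping words into phrases, counting occurrences, and merging synonymous and case variants) can be reproduced locally without Pig or AWS; a small Python sketch with made-up postings, not the study's data:

```python
import re
from collections import Counter

# Made-up job postings standing in for the Indeed sample
postings = [
    "Experience with Python, SQL and Big Data pipelines required.",
    "Strong SQL skills; python scripting and ETL development.",
    "ELT tooling and big data experience a plus.",
]

# Phrases of interest; synonymous and case variants merge under one label,
# mirroring the ELT/ETL and Python/python merging the paper describes
synonyms = {"etl": "ELT/ETL", "elt": "ELT/ETL", "python": "Python",
            "sql": "SQL", "big data": "Big Data"}

counts = Counter()
for text in postings:
    low = text.lower()  # lowercasing merges case variants up front
    for variant, label in synonyms.items():
        counts[label] += len(re.findall(r"\b" + re.escape(variant) + r"\b", low))
```

Sorting `counts.most_common()` then yields the ranked skill list analogous to the paper's top-20 table.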
3

Tatyanin, N., M. Andrey, R. Chernov, A. Semenova, A. Sorokin, B. Plotnikov, R. Volkov, and E. Kuchkanov. "Comparison and Analysis of Modern Methods of Data Processing on Business Decisions in Exploration Works on The Volga-Ural Oil Region". In ProGREss'21. European Association of Geoscientists & Engineers, 2021. http://dx.doi.org/10.3997/2214-4609.202159037.

4

Al Farsi, Ghaida, and Angeni Jayawickramarajah. "Condensate Optimization Through Digital Tools". In SPE Conference at Oman Petroleum & Energy Show. SPE, 2022. http://dx.doi.org/10.2118/200186-ms.

Abstract:
Khazzan, the gas giant in Oman, and its new gas processing facility were started up in 2017, delivering a daily contractual volume of gas to the local grid. The wells had high gas deliverability potential, yet they were constrained to deliver the contractual volume specified by the commercial agreement with the government. To maximize the value of this excess gas potential, a new digital tool was utilized to optimize condensate production, increasing production by 2-3% with no additional cost. The first step was to create a digital twin of the asset, including the structure of the asset and the fluid dynamics of the flow regimes, temperatures and pressures. Historical data was used to populate the digital twin, and sensors in the asset were set up to send real-time data to the functioning twin of the physical asset. Any constraints in the production system were built into the digital twin to provide the most accurate simulation. The tool was then used to monitor, simulate and optimize production by testing multiple variables until an optimal solution was found for the entire production system from wells, through facilities, to export. Significant value was obtained by utilizing the digital toolkit, delivering not only economic value but also progressing innovation. Traditional production management requires considerable time to manually integrate the complex infrastructure and fluid dynamics. This digital twin of Khazzan enabled the work to be automated and completed much faster than conventional methods, allowing petroleum engineers to focus on evaluating well performance rather than running time-consuming scenarios of anticipated or likely events. The response time to any changes in gas nomination was also reduced significantly, by running simulations with a few clicks on the screen and updating them in the live-streaming Wells Overview sheet.
Condensate optimization was achieved not only by maximizing condensate production; an opportunity was also realized to minimize condensate volumes flared during plant upsets, which ultimately impact recovered condensate. This was completed in conjunction with the live-streaming Wells Overview, capitalizing on a digital workflow to reduce flaring volumes. Overall, this digital twin benefited BP Oman by indicating where efficiencies could be improved and highlighting potential problems in the production system, leading to significant business value being added to the company.
5

Hasegawa, Sho, and Masashi Yamada. "A Statistical Analysis of History of Japanese Light Novels". In 14th International Conference on Applied Human Factors and Ergonomics (AHFE 2023). AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1002996.

Abstract:
In recent years, cool Japan anime, video games and manga have been recognized as important export-oriented merchandise in Japan. Another recent Japanese genre is light novels. Light novels are a genre of novels and form part of recent Japanese pop culture. In Japan recently, anime, manga and live-action films have been adapted from original light novels. Light novels were previously read chiefly by high and junior-high school students, but the range of readers has been expanding widely in recent years. However, light novels have not been statistically studied yet. In the present study, 533 titles of popular light novels published in 2004-2020 were investigated to clarify the trends in the genre.
Numbers were counted for each of 129 items, including published year, publisher, label, platform first appeared on, length of time from the first publication to the release of the anime adaptation, length of time from the first publication to the manga adaptation and so on for each year, and a 17 x 129 cross-tabulation table was constructed. It was then analyzed by correspondence analysis and cluster analysis.
The results of the correspondence analysis showed a three-dimensional solution with a cumulative contribution ratio of 64%. Each year was plotted in the three-dimensional space, then cluster analysis was performed on the plots with Ward's method. The results of the cluster analysis showed that the plots could be divided into four periods: 2004-2005, 2006-2015, 2016-2017 and 2018-2020.
The results showed that KADOKAWA's labels were popular between 2004 and 2014, but other publishers' labels, which collect novels appearing on individuals' private home pages or posted on websites, have become popular in recent years. The most important part of the business strategy of light novels is mass production, which means a large number of works and writers are needed. As such, it stands to reason that internet novel submission sites have become an important source for publishers.
The results also showed that it took only one or two years to produce anime for many titles between 2006 and 2014, but in recent years the period has increased to four years and the number of cases where no anime was produced has also increased. Additionally, manga based on light novels tend to be published within one year. This suggests that the center of the multimedia strategy of light novels has shifted from anime to manga. Since the mid-2010s, tablet devices and e-comics applications have developed rapidly. This has enabled publishers to rapidly produce and publish manga based on original light novels. Moreover, it now takes longer to produce anime, because the quality of frames and production costs have increased. Therefore, anime studios tend to select an original light novel work carefully, observing the sales of the e-manga adaptation.
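The clustering step described above (Ward's method on the yearly coordinates from the correspondence analysis) can be sketched in plain Python; a toy one-dimensional illustration with made-up yearly scores, not the study's data:

```python
def ward_cluster(points, k):
    """Agglomerative clustering with Ward's merge criterion on 1-D data.
    Repeatedly merges the pair of clusters whose merge increases the
    within-cluster variance the least, until k clusters remain."""
    clusters = [[p] for p in points]

    def merge_cost(a, b):
        # Ward's criterion: |A||B|/(|A|+|B|) * squared centroid distance
        ca, cb = sum(a) / len(a), sum(b) / len(b)
        return len(a) * len(b) / (len(a) + len(b)) * (ca - cb) ** 2

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: merge_cost(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Made-up yearly scores standing in for correspondence-analysis coordinates
years = [0.1, 0.2, 0.15, 2.0, 2.1, 5.0]
groups = ward_cluster(years, 3)
```

On the study's real data the same procedure would run on the three-dimensional year coordinates, yielding the four periods reported above.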
6

Bumanis, Nikolajs, Gatis Vitols, Irina Arhipova, and Inga Meirane. "Deep learning solution for children long-term identification". In Research for Rural Development 2020. Latvia University of Life Sciences and Technologies, 2020. http://dx.doi.org/10.22616/rrd.26.2020.039.

Abstract:
Deep learning algorithms are becoming the default solution for application in business processes where recognition, identification and automated learning are involved. For human identification, analysis of various features can be applied. Face feature analysis is the most popular method for identification of a person in various stages of life, including children and infants. The aim of this research was to propose a deep learning solution for long-term identification of children in educational institutions. A previously proposed conceptual model for long-term re-identification was enhanced. The enhancements include processing of unexpected-person scenarios, knowledge base improvements based on results of supervised and unsupervised learning, implementation of video surveillance zones within educational institutions and chaining of object tracking results' data between multiple logical processes. Object tracking results are the solution we found for realizing long-term identification.
7

Alias, Cyril, Udo Salewski, Viviana Elizabeth Ortiz Ruiz, Frank Eduardo Alarcón Olalla, José do Egypto Neirão Reymão, and Bernd Noche. "Adapting Warehouse Management Systems to the Requirements of the Evolving Era of Industry 4.0". In ASME 2017 12th International Manufacturing Science and Engineering Conference collocated with the JSME/ASME 2017 6th International Conference on Materials and Processing. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/msec2017-2611.

Abstract:
With global megatrends like automation and digitization changing societies, economies, and ultimately businesses, a shift is underway, disrupting current business plans and entire industries. Business actors have accordingly developed an instinctive fear of economic decline and realized the necessity of taking adequate measures to keep up with the times. Increasingly, organizations find themselves in an evolve-or-die race, with their success depending on their capability to recognize the requirements for serving a specific market and to adopt those requirements accurately into their own structure. In the transportation and logistics sector, emerging technological and informational challenges are reflected in fierce competition from within and outside the industry. Processes and supporting information systems in particular are put to the test when technological innovations start to spread among an increasing number of actors and promise higher performance or lower cost. As to warehousing, technological innovation continuously finds its way into the premises of heterogeneous warehouse operators, leading to modifications and process improvements. Such innovation can come on the side of the hardware equipment or in the form of new software solutions. The fourth industrial revolution is underway globally, and the same applies to Future Internet technologies, a European term for innovative software technologies and the research upon them. On the one hand, new hardware solutions using robotics, cyber-physical systems and sensors, and advanced materials are constantly put to widespread use. On the other, software solutions based on intensified digitization, including new and more heterogeneous sources of information, higher volumes of data, and increasing processing speed, are becoming an integral part of popular information systems for warehouses, particularly warehouse management systems.
With a rapidly and dynamically changing environment and new legal and business requirements for warehouse processes and supporting information systems, new performance levels in terms of quality and cost of service have to be attained. For this purpose, new expectations of the functionality of warehouse management systems need to be derived. While introducing wholly new solutions is one option, retrofitting and adapting existing systems to the new requirements is another. Warehouse management systems will need to deal with more types of data from new and heterogeneous data sources. They will also need to connect to innovative machines and represent their respective operating principles. In both scenarios, systems must satisfy the demand for new features in order to remain capable of processing information and acting, and thereby to optimize logistics processes in real time. By taking a closer look at an industrial use case of a warehouse management system, opportunities for incorporating such new requirements are presented as the system adapts to new data types, increased processing speed, and new machines and equipment used in the warehouse. Ultimately, the present paper demonstrates the adaptability of existing warehouse management systems to the requirements of the new digital world, and presents viable methods for carrying out the necessary renovation processes.
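A recurring point in this abstract is that a warehouse management system must ingest data from new and heterogeneous sources. One common way to sketch that requirement is a normalization layer that maps device-specific payloads onto a single internal event schema; all device and field names below are hypothetical illustrations, not the paper's actual data model:

```python
from dataclasses import dataclass

@dataclass
class StockEvent:
    """One unified internal record, regardless of which device produced it."""
    sku: str
    quantity: int
    location: str
    source: str
    timestamp: str

def normalize(raw: dict) -> StockEvent:
    """Map heterogeneous device payloads onto the common event schema."""
    if raw.get("device") == "barcode_scanner":
        return StockEvent(raw["code"], int(raw["qty"]), raw["bin"],
                          "scanner", raw["ts"])
    if raw.get("device") == "rfid_gate":
        # An RFID gate reads one tagged unit per event.
        return StockEvent(raw["tag_id"], 1, raw["gate"],
                          "rfid", raw["read_time"])
    raise ValueError(f"unknown device type: {raw.get('device')}")

event = normalize({"device": "barcode_scanner", "code": "SKU-123",
                   "qty": "5", "bin": "A-01", "ts": "2017-06-01T08:00:00Z"})
print(event)
```

New equipment types can then be supported by adding one mapping branch, leaving the downstream real-time processing logic untouched, which is the kind of retrofit the abstract argues for.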
8

Ionita, Mirela, Veronica Pastae and Alexandru Stoica. "BENEFITS OF SETTING UP ACADEMIC PORTALS ON GOOGLE APPS". In eLSE 2014. Editura Universitatii Nationale de Aparare "Carol I", 2014. http://dx.doi.org/10.12753/2066-026x-14-091.

Abstract:
As far as the academic field is concerned, the use of Google applications has constantly increased. Facilities that at the beginning benefited only the business domain have now become quite popular among educational institutions. Nowadays, the possibilities of using Google Apps in universities go beyond the learning process and can influence various aspects of academic life. As a result, Google Apps have become an inherent part of data processing in universities, ensuring performance and quality management by facilitating research and cooperation among educational institutions, as well as communication among teaching staff. Free of charge, Google Apps for Education provides integrated services on a domain that bears the acronym and the website of the respective educational institution, in the form of a Gmail account provided and hosted by Google. One of the major advantages of such applications is the speed of data traffic, accelerated by storage in a cloud system that offers quick multiuser access to information, as if it were stored on one's own server. The services provided include a multitude of technical possibilities to be explored in teaching and research, such as Gmail, Drive, Sites, Calendar, Google Plus, etc., all of which can enhance the quality of the educational offer. The present article primarily focuses on the advantages of using Google Apps in the attempt to improve academic standards. Consequently, we analyse the cooperation possibilities among members of the academic community, staff and students, in the virtual environment. Last but not least, we discuss the impact of computer technology on the process of learning, emphasizing the importance of accepting change and seeking self-improvement.

Organizational reports on the topic "Business – data processing – popular works"

1

Opportunities and drivers for SME agribusinesses to reduce food loss in Africa and Asia. Commercial Agriculture for Smallholders and Agribusiness (CASA), 2023. http://dx.doi.org/10.1079/20240191175.

Abstract:
Climate change, conflict, and the COVID-19 pandemic and its aftermath have caused a sharp increase in food insecurity globally. Reducing food loss - a decrease in the quantity and/or quality of food that takes place from production through to processing - in places where food insecurity is most severe has the potential to be a win-win for food security, climate outcomes, and commercially driven agribusinesses. This report reviews the common drivers of food loss in sub-Saharan Africa and South Asia, which include inadequate storage, lack of cold chain, and poor post-harvest and distribution practices. It then highlights five technologies or approaches with the potential to address food loss that are appropriate for agricultural small and medium-sized enterprises (agri-SMEs) operating in much of sub-Saharan Africa and South Asia, which face particular challenges (e.g. an unreliable electrical grid and fragmented value chains). Finally, the report highlights the main barriers to adoption and scale for these technologies and approaches, and identifies opportunities for governments, development partners, investors, and technology manufacturers to improve their uptake among agri-SMEs. The five technologies and approaches covered in this report are as follows:

Decentralization of processing using solar dryers: The decentralization of primary food processing, in which some portion of value addition is undertaken close to the farm gate by farmers or SMEs, can have multiple benefits, including reducing food loss, lowering transport costs, and increasing rural incomes. Solar drying technology can enable this model, particularly in areas where there is a tradition of sun-drying fruits and vegetables and a viable domestic or regional market for these products. Successful models typically involve an agribusiness off-taker who works with farmers and SME producers, providing technology and services (e.g. guaranteed off-take, training, etc.) that ensure the production of high-quality produce.

Hermetic storage (e.g. bags and cocoons): This maturing technology is increasingly available in local markets and represents a potentially easy-to-implement solution that could substantially address food loss during storage - where most loss occurs - for key staple grains. Cost and usage remain challenges for smallholders, with greater potential for small- to medium-scale traders and aggregators in rural areas with limited storage infrastructure. By creating a hypoxic environment around the produce, these solutions can achieve 100% insect mortality and reduce the growth of mould and aflatoxins. Bags are more appropriate for agri-SMEs involved in distribution, whereas cocoons (i.e. storage containers consisting of two plastic halves joined together by an airtight zip) are more useful for those storing large volumes for periods of six months or longer.

Off-grid cold storage (e.g. solar-powered cold rooms): Innovative technologies and delivery mechanisms are still being tested in markets in India, Nigeria, and Kenya. Despite the high upfront cost, there are several examples of agri-SMEs and co-operatives achieving payback periods of as little as two years across a range of fruit and vegetable value chains, with returns driven by reductions in food loss and improved pricing due to better produce quality. Cooling-as-a-service business models also offer the potential to reach smaller agri-SMEs and micro-entrepreneurs operating in informal rural and peri-urban value chains, but their application is limited to high-value crops that are generally out of the reach of the rural poor.

Agri-ecommerce platforms: Agri-ecommerce platforms are a well-developed technology that aims to reduce food loss by improving the availability of information on market demand for farmers. Technology providers can also engage in logistics, warehousing, and quality control, collecting produce from rural hubs, consolidating it at a central packing house, and delivering it to urban retailers. Models of this kind have scaled more effectively in South Asia than in sub-Saharan Africa, where they are constrained by poor road and logistics infrastructure.

Waste-to-value approaches: Waste-to-value or circular-economy approaches have the potential to reduce food loss by using bruised or damaged fruits and vegetables, which cannot be sold in their original form, as inputs into other food products. Although applying these approaches to the production of products such as condiments and oils is popular, they are unlikely to have a material impact on food security. However, models such as using black soldier fly larvae (BSFL) to produce animal feed (after the larvae consume the food waste) are more promising, with a range of related technologies and business models operating in markets in both Africa and Asia.

The main barriers to the success and scaling up of these technologies and approaches include a lack of knowledge and awareness of their commercial benefits, a lack of finance for manufacturers and agri-SME customers, a need for further research and development (R&D) and business model innovation (e.g. to bring down cost), and a lack of supportive policies and regulatory frameworks. Policymakers, development partners, investors, and the private sector can all play important roles in addressing these barriers.
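The report's claim of two-year payback periods for off-grid cold storage rests on simple payback arithmetic: upfront cost divided by the net annual benefit. A worked illustration of that calculation (all figures below are hypothetical, not from the report):

```python
def payback_years(upfront_cost, annual_loss_reduction,
                  annual_price_premium, annual_operating_cost):
    """Simple (undiscounted) payback period in years."""
    net_annual_benefit = (annual_loss_reduction + annual_price_premium
                          - annual_operating_cost)
    if net_annual_benefit <= 0:
        return float("inf")  # the investment never pays back
    return upfront_cost / net_annual_benefit

# Hypothetical solar cold room: 20,000 upfront, saving 8,000/year in
# avoided food loss, earning 4,000/year from better pricing, and
# costing 2,000/year to operate.
years = payback_years(20_000, 8_000, 4_000, 2_000)
print(years)  # 2.0
```

Both return channels named in the report (reduced loss and improved pricing) enter the numerator of the benefit, which is why payback can be short despite the high upfront cost.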
