Journal articles on the topic « Business – data processing – popular works »

To see the other types of publications on this topic, follow the link: Business – data processing – popular works.

Create a correct reference in APA, MLA, Chicago, Harvard, and several other citation styles.

Choose a source:

Consult the top 50 journal articles for your research on the topic « Business – data processing – popular works ».

Next to every source in the list of references there is an « Add to bibliography » button. Click on it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever these details are included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

BALCIOGLU, Yavuz Selim, Melike ARTAR, and Oya ERDİL. « MAKİNE ÖĞRENİMİ VE TWITTER VERİLERİNİN ANALİZİ : COVID-19 SONRASI İŞ TRENDLERİNİN BELİRLENMESİ ». SOCIAL SCIENCE DEVELOPMENT JOURNAL 7, no. 33 (September 15, 2022): 353–61. http://dx.doi.org/10.31567/ssd.697.

Full text
Abstract:
With the COVID-19 epidemic, there has been a great change in the routines of social and business life, and these changing routines have brought new needs and demands with them. For business life to adapt to this new order and develop new strategies, current trends must be analyzed. In this study, the business trends most in demand on Twitter after COVID-19 were analyzed by machine learning. Textual expressions obtained through Twitter were converted into data by methods such as natural language processing; analyzing these data correctly makes it possible to obtain important information that can serve as a roadmap for the targeted issues. Within the scope of the research, a total of 48,765 high-impact tweets were selected, and word frequency analysis was applied to the tweets belonging to the identified business trends. In addition, a word analysis model based on SVM, one of the machine learning algorithms, was used. As a result of the analysis, online food services, online sales specialist, remote working, healthcare professionals, personal coaching, online training and repair services emerged as popular lines of business.
Keywords: Machine Learning, Trend Jobs, Neural Networks, Twitter, SVM, COVID-19
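The word-frequency step described in the abstract can be sketched with the standard library; the tweets, stopword list and counts below are made-up illustrations, not the authors' data or pipeline (the SVM stage is omitted):

```python
from collections import Counter
import re

def word_frequencies(tweets, stopwords=frozenset()):
    """Tokenize tweets and count word occurrences, skipping stopwords."""
    counts = Counter()
    for tweet in tweets:
        # lowercase and keep alphabetic tokens only
        for token in re.findall(r"[a-z']+", tweet.lower()):
            if token not in stopwords:
                counts[token] += 1
    return counts

# hypothetical tweets standing in for the 48,765 collected ones
tweets = [
    "Remote working is the new normal",
    "Hiring: online training specialist, remote working welcome",
    "Online food services are booming",
]
freq = word_frequencies(tweets, stopwords={"is", "the", "are"})
top = freq.most_common(3)
```

On a real corpus, the resulting frequency vectors would then feed the SVM-based trend model the abstract mentions.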
Styles APA, Harvard, Vancouver, ISO, etc.
2

Bathla, Gourav, Himanshu Aggarwal, and Rinkle Rani. « A Novel Approach for Clustering Big Data based on MapReduce ». International Journal of Electrical and Computer Engineering (IJECE) 8, no. 3 (June 1, 2018): 1711. http://dx.doi.org/10.11591/ijece.v8i3.pp1711-1719.

Full text
Abstract:
Clustering is one of the most important applications of data mining. It has attracted the attention of researchers in statistics and machine learning, and is used in many applications such as information retrieval, image processing and social network analytics. It helps the user understand the similarity and dissimilarity between objects, and cluster analysis lets users understand complex and large data sets more clearly. Different types of clustering algorithms have been analyzed by various researchers. K-means is the most popular partitioning-based algorithm, as it provides good results thanks to accurate calculation on numerical data; however, K-means gives good results for numerical data only. Big data is a combination of numerical and categorical data, and the K-prototype algorithm is used to deal with numerical as well as categorical data: K-prototype combines the distances calculated from numeric and categorical attributes. With the growth of data due to social networking websites, business transactions, scientific calculation, etc., there are vast collections of structured, semi-structured and unstructured data, so K-prototype needs to be optimized so that these varieties of data can be analyzed efficiently. In this work, the K-prototype algorithm is implemented on MapReduce. Experiments have proved that K-prototype implemented on MapReduce gives better performance gain on multiple nodes compared to a single node; CPU execution time and speedup are used as evaluation metrics for the comparison. An intelligent splitter is also proposed, which splits mixed big data into numerical and categorical parts. Comparison with traditional algorithms proves that the proposed algorithm works better for data at large scale.
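The core idea of K-prototype, combining a numeric distance with a categorical mismatch count, can be sketched as follows; the records, attribute indices and gamma weight are hypothetical, not taken from the paper:

```python
def kprototype_distance(a, b, numeric_idx, categorical_idx, gamma=1.0):
    """Combine squared Euclidean distance on numeric attributes with a
    simple mismatch count on categorical attributes, weighted by gamma."""
    num = sum((a[i] - b[i]) ** 2 for i in numeric_idx)
    cat = sum(1 for i in categorical_idx if a[i] != b[i])
    return num + gamma * cat

# mixed records: (age, income in tens of thousands, city, customer segment)
x = (35, 5.2, "Kyiv", "retail")
y = (33, 5.1, "Lviv", "retail")
d = kprototype_distance(x, y, numeric_idx=(0, 1), categorical_idx=(2, 3), gamma=0.5)
```

In a MapReduce setting, this distance would be evaluated in the map phase against each candidate prototype, with the reduce phase recomputing prototypes per cluster.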
3

Shalehanny, Shafira, Agung Triayudi, and Endah Tri Esti Handayani. « PUBLIC’S SENTIMENT ANALYSIS ON SHOPEE-FOOD SERVICE USING LEXICON-BASED AND SUPPORT VECTOR MACHINE ». Jurnal Riset Informatika 4, no. 1 (December 12, 2021): 1–8. http://dx.doi.org/10.34288/jri.v4i1.287.

Full text
Abstract:
Technology keeps evolving with the times. Social media is already part of everyone's daily life and has become a place where people write their opinions, whether reviews of or responses to products and services they have used. Twitter is one of the most popular social media platforms in Indonesia; according to Statista, it reaches 17.55 million users there. For the online business sector, knowing sentiment scores is very important for stepping up a business. The use of machine learning, NLP (natural language processing) and text mining to uncover the real meaning of the opinion words given by customers is called sentiment analysis. Two methods were used for testing the data: the first is lexicon-based and the second is the support vector machine (SVM). The data used for the sentiment analysis were gathered with the keywords 'ShopeeFood' and 'syopifud'. The analysis achieved an accuracy score of 87%, a precision score of 81%, a recall score of 75% and an F1-score of 78%.
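A minimal sketch of the lexicon-based half of the method: each known word carries a polarity weight and the sign of the sum gives the label. The lexicon and review text are invented for illustration; the paper's actual lexicon and its SVM stage are not reproduced:

```python
def lexicon_sentiment(text, lexicon):
    """Sum the polarity weights of known words; the sign gives the label."""
    score = sum(lexicon.get(word, 0) for word in text.lower().split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# hypothetical polarity lexicon (positive weights > 0, negative < 0)
lexicon = {"good": 1, "fast": 1, "love": 2, "slow": -1, "bad": -2, "cold": -1}
label = lexicon_sentiment("love the fast delivery but food was cold", lexicon)
```

Precision, recall and F1 such as those reported (81%, 75%, 78%) would then be computed by comparing these labels against hand-annotated ones.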
4

Šuman, Sabrina, Milorad Vignjević, and Tomislav Car. « Information extraction and sentiment analysis of hotel reviews in Croatia ». Zbornik Veleučilišta u Rijeci 11, no. 1 (2023): 69–87. http://dx.doi.org/10.31784/zvr.11.1.5.

Full text
Abstract:
Today, the amount of data in and around a business system requires new ways of collecting and processing data. Discovering the sentiment of hotel reviews helps improve hotel services and the overall online reputation, as potential guests largely consult existing reviews before booking. Reviews of Croatian hotels (three-, four-, and five-star categories) in the tourist regions of Croatia were therefore studied on the Booking.com platform for the years 2019 and 2021 (before and after the start of the COVID-19 pandemic). Hotels on the Adriatic coast were selected in the cities that several sources name as the most popular: Rovinj, Pula, Krk, Zadar, Šibenik, Split, Brač, Hvar, Makarska, and Dubrovnik. The reviews were divided into four groups according to the overall rating, and further divided into positive and negative within each group, so that the elements present in the positive and negative reviews of each of the four groups could be identified. Using text processing methods, the most frequent words and expressions (unigrams and bigrams) were identified separately for the 2019 and 2021 tourism seasons; these can be useful to hotel management in managing accommodation services and achieving competitive advantage. In the second part of the work, a machine learning (ML) model classifying reviews as positive or negative was built over all the collected reviews. The results of applying three different ML algorithms, with their precision and recall performance, are described in the Results and Discussion section.
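Extracting unigram and bigram frequencies, as done here for the review texts, can be sketched in pure Python; the reviews below are invented examples, not data from the study:

```python
from collections import Counter

def ngram_counts(reviews, n=2):
    """Count word n-grams (as tuples) across a collection of reviews."""
    counts = Counter()
    for review in reviews:
        tokens = review.lower().split()
        # zip staggered copies of the token list to form n-grams
        counts.update(zip(*(tokens[i:] for i in range(n))))
    return counts

reviews = ["great sea view", "sea view room was great"]
bigrams = ngram_counts(reviews, n=2)
unigrams = ngram_counts(reviews, n=1)
```

Run per season (2019 vs. 2021) and per rating group, such counts surface the expressions most associated with positive and negative reviews.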
5

Kocherzhuk, D. V. « Sound recording in pop art: differencing the « remake » and « remix » musical versions ». Aspects of Historical Musicology 14, no. 14 (September 15, 2018): 229–44. http://dx.doi.org/10.34064/khnum2-14.15.

Full text
Abstract:
Background. Contemporary audio art, in search of new sound design, as well as artists working in the music show business, in an attempt to draw attention to already well-known musical works, often turn to the forms of the “remake” or the “remix”. However, there are certain disagreements in how these terms are understood by artists, vocalists, producers and professional sound engineering teams. It therefore becomes relevant to clarify the concepts of “remake” and “remix” and identify the key differences between these musical phenomena. The article presents positions, reasoned from the point of view of art criticism, concerning the misunderstanding of the terms “remake” and “remix”, which are widely used in the circles of the media industry. The objective of the article is to explore the key differences between two principles of processing borrowed musical material, the “remix” and the “remake”, in contemporary popular music, in particular in recording studios. Research methodology. In the course of the study the two concepts – “remake” and “remix” – were considered and compared using practical examples from the work of famous pop vocalists from Ukraine and abroad. The research methodology thus includes methods of analysis, applied to examples from the Ukrainian, Russian and world show business and to the existing definitions of the concepts “remake” and “remix”; comparison, verification and coordination of the latter; and formalization and generalization of the data in arriving at the results of the study. Modern strategies in the development of “remake” invariance in the work of musicians are taken into account, and the latest trends in the creation of “remix” versions by world-class artists and performers of contemporary Ukrainian pop music are reflected. The results of the study. The research results reveal the significance of the terminological pair “remix” and “remake” in the activities of the pop singer.
It was found that not all artists in the music industry understand the differences between these two similarly important terms. The article analyzes the main scientific works of specialists in the audiovisual and musical arts and in philosophical and sociological areas who have addressed this issue in the structure of music, such as the studies by V. Tormakhova, V. Otkydach, V. Myslavskyi, I. Tarasova, Yu. Koliadych and L. Zdorovenko, among others, and on this basis reveals the essence of the concepts “remake” and “remix”. The phenomenon of the “remake” is described in detail in the dictionary of V. Mislavsky [5], where the author outlined the concept of “remake” not only in musical art, but also in the film industry and the structure of video games. The researcher I. Tarasova also considers the term “remake” in connection with the problem of protection of intellectual property and the certification of the copyright of the performer and the composer who made the original version of the work [13]. At the same time, the term “remix” has not yet found a precise definition in musical science. In contemporary youth pop culture, the principle of variation on someone else’s musical material called “remix” is associated with club dance music, and the principle of the “remake” with the interpretation of “another’s” musical work by other artist-singers. A “remake” is a new version or interpretation of a previously published work [5: 31]. The term “cover version”, which is now used even more often in the field of modern pop music, is also close to the concept of the “remake”: this is a repetition of the storyline laid down by the author or performer of the original version, but in another artist’s own interpretation, while the texture and structure of the work are preserved. A. M. Tormakhova deciphered the term “remake” as a wide spectrum of changes in the musical material associated with the repetition of plot themes and techniques [14: 8].
In a general sense, “a wide spectrum of changes” covers not only the technical and emotional interpretation of the work, including the changes the performer makes in style, tempo, rhythm and tessitura, but also an aspect of composing activity. For a composer it is an expression of creative thinking, the embodiment of his own vision in the ways the material is arranged. For a sound director and a sound engineer, a “remix” means working with computer programs and saturating the music with sound effects; for a producer and for media corporations it is a business. The “remake” is a rather controversial phenomenon in the music world. On the one hand, it is training for beginners in the field of art; on the other hand, the use of someone else’s musical material can border on plagiarism and provoke conflict situations between artists. From the point of view of show business, a “remake” is only a method of reminding the public of a piece for the purpose of its commercial use, no matter who performs the song. Basically, an agreement is concluded between the artists on the transfer or sharing of copyright and of the right to perform the work for profit. For example, the song “Diva” by F. Kirkorov is a “remake” of a work borrowed from another performer, the winner of the Eurovision Song Contest 1998, Dana International [17; 20], which is reflected in the relevant agreement on the commercial use of the musical material. A remix as a music product is created using computer equipment or the Live Looping music platform by processing the original, introducing various sound effects into the initial track. Interest in this principle of material processing arose in the 1980s, when dance, club and DJ music came into mass use [18]. A remix can be considered a single piece of music taken as the main component and complemented in sequence by components of the DJ’s profile.
These can be various samples, changes in the speed of the sound or the tonality of the work, the “mutation” of the soloist’s voice, or the saturation of the voice with effects to achieve a uniform musical ensemble. The commercial activities of entertainment venues (clubs, concert halls, etc.) contribute to the development of the remix phenomenon. The remix principle is connected with the renewal of a musical “hit” whose popularity has gradually decreased and whose broadcast rotation no longer attracts a certain number of listeners. Conclusions. The musical art of the 21st century is full of new experimental and creative phenomena. The process of the birth of modified forms of pop works deserves constant attention not only from representatives of the show business and audiovisual industries, but also from musicologists. Such popular musical phenomena as the “remix” and the “remake” have a number of differences. A “remix” is a technical form of interpreting a piece of music with the help of computer processing of both instrumental parts and voices; it is associated with the introduction of new, often very heterogeneous, elements and with tempo changes. A musical product created according to this principle is intended for listeners of “club music” and is not related to the studio work of the performer. The main feature of the “remake” is the presence of studio work by a sound engineer, composer and vocalist, aimed at modernizing the character of the song so that it differs from the original version. The texture of the original composition should basically be preserved, but it can be saturated with new sound elements, and the vocal line and harmony can be partially changed according to the interpreter’s own scheme.
The introduction of scientific definitions of these terms into the common base of musical concepts, and further in-depth study of all the theoretical and practical components behind them, will help researchers in the artistic sphere and actor-vocalists orient themselves correctly in the terminology.
6

SAFONOVA, Margarita F., and Sergei M. REZNICHENKO. « Internal control models: Historical transformations and development prospects ». International Accounting 26, no. 11 (November 16, 2023): 1292–316. http://dx.doi.org/10.24891/ia.26.11.1292.

Full text
Abstract:
Subject. This article examines the transformation of internal control systems and models, as a guarantor of the economic security of organizations, regions and countries, in the historical aspect and in relation to global changes in the world economy. Objectives. The article aims to determine further ways of developing internal control models and their conceptual foundations, taking into account the realities of the time. Methods. For the study, we used case and chronological analyses, and data systematization. Results. The article finds that internal control models are subject to continuous transformation under external economic influences and the development of automation tools. In other words, the increasingly complex processes taking place in the economy, and the crisis phenomena that affect the conditions in which companies operate, make it necessary to look for internal reserves to ensure the continuity of an economic entity's activities through constant control of risks and the search for ways to minimize them. Conclusions and Relevance. The article concludes that the most popular models of internal control are those based on a process-oriented approach and continuous analysis of the business processes of an economic entity, with further processing of the information obtained and transformation into a system-oriented model of internal control aimed at finding internal reserves. The results of the study can be used in the theory and practice of internal control, as well as for further scientific development and practical application.
7

Khomoviy, S., N. Tomilova, and M. Khomovju. « Realia of accounting automation in agricultural enterprises of Ukraine ». Ekonomìka ta upravlìnnâ APK, no. 2 (143) (December 27, 2018): 115–21. http://dx.doi.org/10.33245/2310-9262-2018-143-2-115-121.

Full text
Abstract:
Accountancy is an integral part of the functioning of any enterprise, but in modern economic conditions it is impossible to keep accounts without a computer and software. The introduction of sanctions against the manufacturer and a number of dealers of one of the most popular software products, «1C: Accounting», has confronted a considerable number of business entities with the problem of choosing accounting software that is allowed for use on the territory of Ukraine. The use of computer technologies and software products for accounting automation transforms the accounting system and accounting procedures, and is accompanied by an increase in the quality and efficiency of the management process. The application of automation software significantly improves the quality of accounting information processing in organizations. Based on a critical analysis of the specialized literature, we consider the main advantages of using modern information technology to automate accounting procedures: 1) processing and preserving a large number of structurally identical units of accounting information; 2) the possibility of selecting the necessary information from a great mass of data; 3) reliable and faultless mathematical calculations; 4) rapid retrieval of the data needed for making reasoned management decisions; 5) repeated reproduction of actions.
It should be noted that with automated forms of accounting, the technological process of processing records envisages the following successive steps: 1) collection and registration of primary data for further automated processing; 2) the formation of arrays of records on electronic media, including a journal of economic operations, the structure of synthetic and analytical accounts, directories of analytical objects, permanent information, etc.; 3) obtaining, at the user's request, the necessary accounting data for the reporting period in the form of synthetic accounting registers, analytical tables and account statements. An overview of the major software products («Parus accounting», «SAP», «Master: accounting», «IS-pro») that are widely used in Ukraine showed that, despite the restrictions, most enterprises, including those providing outsourcing services, continue to use «1C: Accounting». From our point of view, the optimal accounting program of Ukrainian production is «Master: accounting», which could completely replace «1C: Accounting» in the field of agriculture. The software product «Master: agro» for the accounting of agribusinesses meets the requirements of the current legislation of Ukraine and is fully adapted to the Ukrainian market. It consists of functional modules embracing all areas of financial and tax accounting. An important advantage of «Master: accounting» is also its partner training program, organized in 12 classes, whose main purpose is to give partners practical skills in installing the program and configuring its modules, and in learning the basic programming tools and settings for solving accounting tasks. The training is divided into three levels. The first level, «user», is designed for anyone who may potentially work with the program.
The second level, «consultant», is for automated configuration and user training. The third, «developer», is for companies and partners who need deeper adaptation of the product to their working processes. Key words: automation, program, computer technologies, enterprise accounting.
8

Wiriyakun, Chawit, and Werasak Kurutach. « Improving misspelled word solving for human trafficking detection in online advertising data ». International Journal of Electrical and Computer Engineering (IJECE) 13, no. 6 (December 1, 2023): 6558. http://dx.doi.org/10.11591/ijece.v13i6.pp6558-6567.

Full text
Abstract:
Social media is used by pimps to advertise their businesses for adult services because it is easily accessible. This calls for computational models that help law enforcement authorities detect human trafficking activities. The machine learning (ML) models used to detect these activities mostly rely on text classification and often omit the correction of misspelled words, creating a risk of prediction errors. Improved data processing is therefore one strategy for making human trafficking detection more effective. This paper presents a novel approach to solving spelling mistakes: it selects misspelled words and then replaces them with popular words having the same meaning, based on an estimate of the probability of the words and the context used in human trafficking advertisements. The applicability of the proposed approach was demonstrated on a labeled human trafficking dataset using three classification models: k-nearest neighbor (KNN), naive Bayes (NB), and multilayer perceptron (MLP). The higher accuracy of the model predictions obtained with the proposed method shows an improved alerting on human trafficking that outperforms the alternatives. The proposed approach is potentially applicable to other datasets and domains of online advertising.
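The candidate-replacement idea, generating in-vocabulary words one edit away and keeping the most probable, can be sketched in the style of a classic noisy-channel spelling corrector; the frequency table is hypothetical, and this is a simplification, not the authors' context-aware model:

```python
def edits1(word):
    """All strings one edit away: deletes, transposes, replaces, inserts."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word, freq):
    """Keep in-vocabulary words; otherwise pick the most frequent
    in-vocabulary candidate one edit away, falling back to the input."""
    if word in freq:
        return word
    candidates = [w for w in edits1(word) if w in freq]
    return max(candidates, key=freq.get) if candidates else word

# hypothetical corpus frequencies estimated from the advertisement domain
freq = {"massage": 40, "message": 25, "service": 60}
corrected = correct("mesage", freq)
```

The corrected text would then be passed to the downstream KNN, NB or MLP classifier in place of the raw, misspelled advertisement text.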
9

Trivedi, Shrawan Kumar, and Shubhamoy Dey. « Analysing user sentiment of Indian movie reviews ». Electronic Library 36, no. 4 (August 6, 2018): 590–606. http://dx.doi.org/10.1108/el-08-2017-0182.

Full text
Abstract:
Purpose. To be sustainable and competitive in the current business environment, it is useful to understand users’ sentiment towards products and services. This critical task can be achieved via natural language processing and machine learning classifiers. This paper aims to propose a novel probabilistic committee selection classifier (PCC) to analyse and classify the sentiment polarities of movie reviews. Design/methodology/approach. An Indian movie review corpus is assembled for this study. Another publicly available movie review polarity corpus is also used to validate the results. The greedy stepwise search method is used to extract the features/words of the reviews. The performance of the proposed classifier is measured using different metrics, such as F-measure, false positive rate, receiver operating characteristic (ROC) curve and training time. Further, the proposed classifier is compared with other popular machine-learning classifiers, such as Bayesian, Naïve Bayes, Decision Tree (J48), Support Vector Machine and Random Forest. Findings. The results show that the proposed classifier is good at predicting the positive or negative polarity of movie reviews. Its performance accuracy and ROC-curve value are found to be the best of all the classifiers tested in this study. The classifier is also found to be efficient at identifying positive sentiments, giving low false positive rates for both the Indian Movie Review and Review Polarity corpora used in this study. The training time of the proposed classifier is found to be slightly higher than that of Bayesian, Naïve Bayes and J48. Research limitations/implications. Only movie review sentiments written in English are considered.
In addition, the proposed committee selection classifier is built only from a committee of probabilistic classifiers; other classifier committees could also be built, tested and compared with the present experimental scenario. Practical implications. In this paper, a novel probabilistic approach is proposed and used for classifying movie reviews, and is found to be highly effective in comparison with other state-of-the-art classifiers. This classifier may be tested for different applications and may provide new insights for developers and researchers. Social implications. The proposed PCC may be used to classify different product reviews, and hence may help organizations assess users’ reviews of specific products or services. By using authentic positive and negative sentiments of users, the credibility of a specific product, service or event may be enhanced. PCC may also be applied to other applications, such as spam detection, blog mining, news mining and various other data-mining applications. Originality/value. The constructed PCC is novel and was tested on Indian movie review data.
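A committee of probabilistic classifiers can be sketched as soft voting: average the class-probability vectors and take the most probable label. The probability outputs below are invented, and the PCC's actual committee-selection step is not reproduced:

```python
def committee_predict(prob_outputs, labels=("neg", "pos")):
    """Average class-probability vectors from several probabilistic
    classifiers and return the label with the highest mean probability."""
    k = len(prob_outputs)
    means = [sum(p[i] for p in prob_outputs) / k for i in range(len(labels))]
    return labels[max(range(len(labels)), key=means.__getitem__)]

# hypothetical (P(neg), P(pos)) outputs from three probabilistic classifiers
votes = [(0.40, 0.60), (0.55, 0.45), (0.20, 0.80)]
label = committee_predict(votes)
```

A selection step, as the paper's name suggests, would additionally choose which committee members to trust; plain averaging is the simplest baseline.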
10

Chen, Weiru, Jared Oliverio, Jin Ho Kim, and Jiayue Shen. « The Modeling and Simulation of Data Clustering Algorithms in Data Mining with Big Data ». Journal of Industrial Integration and Management 04, no. 01 (March 2019): 1850017. http://dx.doi.org/10.1142/s2424862218500173.

Full text
Abstract:
Big Data is a popular cutting-edge technology nowadays, and its techniques and algorithms are expanding into different areas including engineering, biomedicine, and business. Due to the high volume and complexity of Big Data, it is necessary to apply data pre-processing methods before data mining. The pre-processing methods include data cleaning, data integration, data reduction, and data transformation. Data clustering is the most important step of data reduction: with data clustering, mining on the reduced data set is more efficient yet still produces quality analytical results. This paper presents the different data clustering methods and related algorithms for data mining with Big Data. Data clustering can increase the efficiency and accuracy of data mining.
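Clustering as a data-reduction step can be illustrated with a bare-bones one-dimensional k-means (Lloyd's algorithm); the points and initial centroids are made up, and real surveys like this one cover far richer methods:

```python
def kmeans_1d(points, centroids, iters=10):
    """Plain Lloyd iterations on 1-D data: assign each point to its
    nearest centroid, then move each centroid to its cluster's mean."""
    for _ in range(iters):
        clusters = {c: [] for c in range(len(centroids))}
        for p in points:
            nearest = min(range(len(centroids)), key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        # empty clusters keep their previous centroid
        centroids = [sum(v) / len(v) if v else centroids[c] for c, v in clusters.items()]
    return centroids

points = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]
centroids = kmeans_1d(points, centroids=[0.0, 5.0])
```

After clustering, each centroid can stand in for its cluster's points, which is exactly the data-reduction role the abstract describes.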
11

Jankowski, Maciej. « Ensemble Methods for Improving Classification of Data Produced by Latent Dirichlet Allocation ». Computer Science and Mathematical Modelling, no. 8/2018 (March 25, 2019): 17–28. http://dx.doi.org/10.5604/01.3001.0013.1458.

Full text
Abstract:
Topic models are very popular methods of text analysis, and the most popular topic-modelling algorithm is LDA (Latent Dirichlet Allocation). Recently, many new methods have been proposed that enable the use of this model in large-scale processing. One problem is that a data scientist has to choose the number of topics manually, a step that requires some prior analysis. A few methods have been proposed to automate this step, but none of them works very well when LDA is used as preprocessing for further classification. In this paper, we propose an ensemble approach that allows more than one model to be used at the prediction phase, reducing the need to find a single best number of topics. We have also analyzed several methods of estimating the number of topics.
12

Gevorkyan, Migran N., Anna V. Korolkova, and Dmitry S. Kulyabov. « Julia language features for processing statistical data ». Discrete and Continuous Models and Applied Computational Science 31, no. 1 (March 30, 2023): 5–26. http://dx.doi.org/10.22363/2658-4670-2023-31-1-5-26.

Full text
Abstract:
The Julia programming language is a specialized language for scientific computing. It is relatively new, so most of its libraries are still in active development. In this article, the authors consider the capabilities of the language in the field of mathematical statistics. Special emphasis is placed on the technical component; in particular, the process of installing and configuring the software environment is described in detail. Since users of the Julia language are often not professional programmers, technical issues in setting up the software environment can cause difficulties that prevent them from quickly mastering the basic features of the language. The article also describes some features of Julia that distinguish it from other popular languages used for scientific computing. The third part of the article provides an overview of the two main libraries for mathematical statistics. The emphasis is again on the technical side in order to give the reader an idea of the general capabilities of the language in the field of mathematical statistics.
13

Yazidi Alaoui, O., S. Hamdoune, H. Zili, H. Boulassal, M. Wahbi, and O. El Kharki. « CREATING STRATEGIC BUSINESS VALUE FROM BIG DATA ANALYSIS – APPLICATION TELECOM NETWORK DATA AND PLANNING DOCUMENTS ». ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W16 (October 1, 2019): 691–95. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w16-691-2019.

Full text
Abstract:
Abstract. Mobile network carriers gather and accumulate in their database systems a considerable volume of data carrying geographic information that is crucial for the growth of the company. This work aimed to develop a prototype called Spatial On-Line Analytical Processing (SOLAP) to carry out multidimensional analysis and to anticipate the extension of the coverage area of radio antennas. To this end, the researchers started by creating a data warehouse that stores the Big Data received from the radio antennas, then performed OLAP (on-line analytical processing) for multidimensional analysis, presented through a GIS that represents the data at different scales over a satellite image as a topographic background. As a result, this prototype enables carriers to receive continuous reports at different scales (town, city, country) and to identify which BTS work and perform well, at what rate they are working, and what their pitfalls are. In the end, it gives a clear picture of the future working strategy, respecting urban planning documents and the digital terrain model (DTM).
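An OLAP-style roll-up over antenna facts can be sketched as a GROUP BY aggregation at a chosen dimension level; the fact records, dimension names and traffic figures are all hypothetical, not the paper's data warehouse schema:

```python
from collections import defaultdict

def roll_up(records, dims, measure):
    """Aggregate a measure over the given dimension columns, i.e. one
    roll-up level of an OLAP cube (a GROUP BY with SUM)."""
    totals = defaultdict(float)
    for rec in records:
        key = tuple(rec[d] for d in dims)
        totals[key] += rec[measure]
    return dict(totals)

# hypothetical antenna traffic facts: country, city, BTS, traffic volume
facts = [
    {"country": "MA", "city": "Tangier", "bts": "BTS-1", "traffic_gb": 120.0},
    {"country": "MA", "city": "Tangier", "bts": "BTS-2", "traffic_gb": 80.0},
    {"country": "MA", "city": "Rabat", "bts": "BTS-3", "traffic_gb": 60.0},
]
by_city = roll_up(facts, dims=("country", "city"), measure="traffic_gb")
by_country = roll_up(facts, dims=("country",), measure="traffic_gb")
```

The spatial part of SOLAP would then join each aggregate back to antenna coordinates so a GIS can render the town-, city- and country-level views the abstract mentions.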
Styles APA, Harvard, Vancouver, ISO, etc.
14

Kolesnikov, Alexey, Egor Plitchenko et Maria Kropacheva. « Automation of data preparation for mapping using natural language processing systems ». InterCarto. InterGIS 28, no 1 (2022) : 659–69. http://dx.doi.org/10.35595/2414-9179-2022-1-28-659-669.

Texte intégral
Résumé :
The current level of development of information technology makes it possible to automate the processing of types of data that previously only a specialist could work with. One such example is natural language processing technology, which implements functions such as sentiment analysis, machine translation, and question-answering systems. For the processes of creating cartographic and geoinformation works, the methods of greatest interest are named-entity extraction, which allows geographical names to be extracted from unstructured text, and named-entity linking, which makes it possible to create logical links between the extracted names of spatial objects. Processing these names, through a local or networked geocoding service database, makes it possible to automate the creation of map layers in a geographic information system based on text messages. The article describes the most popular approaches, and their software implementations, for solving the named-entity extraction problem, using the example of biographies and works of Siberian writers. Rule-based methodologies, maximum entropy models, and convolutional neural networks are analyzed. To assess the quality of the results of extracting geographical names and objects from text, in addition to the standard F1-score, the authors propose an additional evaluation method that takes into account a larger number of criteria and is also based on an error matrix. A description of text block markup formats is given to improve the quality of recognition and to expand the possible variants of geographical names as named entities through additional training of the neural network model.
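The error-matrix-based evaluation the authors mention reduces to a small computation over true/false positives and negatives. A sketch with a toy gazetteer extractor (the place names, sentence, and counts below are invented for illustration):

```python
# Minimal gazetteer-based extraction of geographical names plus an
# F1-score computed from error-matrix counts, in the spirit of the
# evaluation described in the abstract. All data here is invented.
GAZETTEER = {"Novosibirsk", "Tomsk", "Ob"}

def extract_places(text):
    """Return tokens (punctuation stripped) that appear in the gazetteer."""
    return [tok.strip(".,") for tok in text.split() if tok.strip(".,") in GAZETTEER]

def f1_score(tp, fp, fn):
    """Standard F1 from true positives, false positives, false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

found = extract_places("The writer was born in Tomsk and moved to Novosibirsk.")
print(found)  # extracted geographical names
# Suppose the gold annotation also marks one name we missed: tp=2, fp=0, fn=1
print(round(f1_score(2, 0, 1), 2))
```

Rule-based and neural extractors differ only in how `extract_places` is implemented; the evaluation arithmetic stays the same.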
Styles APA, Harvard, Vancouver, ISO, etc.
15

Ibrahimy, Shafkat M., et Ahmad I. Ibrahimy. « The Impact of Big Data Analytics on Business Intelligence in E-Commerce : A Review ». Asian Journal of Electrical and Electronic Engineering 3, no 2 (30 septembre 2023) : 44–48. http://dx.doi.org/10.69955/ajoeee.2023.v3i2.54.

Texte intégral
Résumé :
Big Data Analytics (BDA) is becoming a popular tool to gain insight into businesses and increase their competitive advantage. So, it is important to understand how it works and the opportunities it presents to growing businesses. The primary aim of this study is to evaluate the impact of using Data Analytics on Business Intelligence for e-commerce, focusing on how to promote the use of Big Data Analytics in the e-commerce sector by SMEs. One objective is to develop strategies that can be used by small businesses to take advantage of complicated tools such as BDA and sentiment analysis of social media platforms in order to boost their economic growth.
Styles APA, Harvard, Vancouver, ISO, etc.
16

Wadowska, Agata, Agnieszka Pęska-Siwik et Kamil Maciuk. « PROBLEMS OF COLLECTING, PROCESSING AND SHARING GEOSPATIAL DATA ». Acta Scientiarum Polonorum Formatio Circumiectus 21, no 3/4 (8 avril 2023) : 5–16. http://dx.doi.org/10.15576/asp.fc/2022.21.3/4.5.

Texte intégral
Résumé :
Aim of the study: The paper describes the problems of collecting, processing and sharing geospatial data using the example of the National Geodetic and Cartographic Resource (PZGiK). The Head Office of Geodesy and Cartography (GUGiK), on the basis of the acquired data, prepares spatial databases for the whole country, such as the database of topographic objects (BDOT) and the digital terrain model. These data are used for further studies and environmental analyses such as the hydrographic and sozological maps of Poland. Material and methods: The article describes the functionalities and, above all, the geospatial data collected in the geoportal, a government map service managed by GUGiK. The study included a survey on the geoportal, the purpose of which was to determine whether this service is known to potential audiences seeking information on spatial data. Results and conclusions: The survey showed that the geoportal is a popular service used to display and process spatial data; respondents use much of the data collected in the geoportal and take advantage of its functionality. Finally, an analysis was carried out of data obtained from the District Geodetic and Cartographic Documentation Center of Małopolska regarding the number of notifications of geodetic works, the number of applications for access to materials from the PZGiK, and the number of applications for extracts or map excerpts from the cadastral record, for the period 2014-2019.
Styles APA, Harvard, Vancouver, ISO, etc.
17

Akbar, Ricky, Ria Oktaviani, Shabrina Tamimi, Syifa Shavira et Tri Winda Rahmadani. « IMPLEMENTASI BUSINESS INTELLIGENCE UNTUK MENENTUKAN TINGKAT KEPOPULERAN JURUSAN PADA UNIVERSITAS ». Jurnal Ilmiah Informatika 2, no 2 (19 décembre 2017) : 135–38. http://dx.doi.org/10.35316/jimi.v2i2.465.

Texte intégral
Résumé :
In this era of globalization, information technology is essential to learn, or at least to know about, because almost all aspects of life are now related to information and technology. Determining the popularity of majors at a university is no exception. In practice, it is difficult to determine which major is the most popular because there is so much data, so decision-making runs slowly when done with standard queries on a database. This is where Business Intelligence is needed: by implementing Business Intelligence, big data can be processed without difficulty. In this research, the application used is TABLEAU, chosen for its ease of use and its speed in processing data. It is hoped that this will ease decision-making in determining the most popular major at Andalas University.
Styles APA, Harvard, Vancouver, ISO, etc.
18

A. SULTAN, Nagham, et Dhuha B. ABDULLAH. « A COMPREHENSIVE STUDY ON BIG DATA FRAMEWORKS ». MINAR International Journal of Applied Sciences and Technology 05, no 01 (1 mars 2023) : 34–48. http://dx.doi.org/10.47832/2717-8234.14.4.

Texte intégral
Résumé :
With the advent of cloud computing technology, the generation of data from various sources has increased during the last few years, and current data processing technology must handle the enormous volumes of newly created data. Therefore, studies in the literature have concentrated on big data, i.e., enormous volumes of largely unstructured data. Dealing with such data needs well-designed frameworks that fulfil developers' needs and fit various purposes; these frameworks can be used for storing, processing, structuring, and analyzing data. The main problem facing cloud computing developers is selecting the most suitable framework for their applications. The literature includes many works on these frameworks, but there is still a serious gap in providing comprehensive studies on this crucial area of research. Hence, this article presents a novel comprehensive comparison among the most popular frameworks for big data, such as Apache Hadoop, Apache Spark, Apache Flink, Apache Storm, and MongoDB. In addition, the main characteristics of each framework, in terms of advantages and drawbacks, are deeply investigated. Our research provides a comprehensive analysis of various metrics related to data processing, including data flow, computational model, overall performance, fault tolerance, scalability, interval processing, language support, latency, and processing speed. To our knowledge, no previous research has conducted a detailed study of all these characteristics simultaneously; therefore, our study contributes significantly to the understanding of the factors that impact data processing and provides valuable insights for practitioners and researchers in the field.
Styles APA, Harvard, Vancouver, ISO, etc.
19

Trivedi, Shrawan Kumar, et Shubhamoy Dey. « A study of boosted evolutionary classifiers for detecting spam ». Global Knowledge, Memory and Communication 69, no 4/5 (1 novembre 2019) : 269–87. http://dx.doi.org/10.1108/gkmc-05-2019-0060.

Texte intégral
Résumé :
Purpose Email is a rapid and cheap medium for sharing information, whereas unsolicited email (spam) is a constant trouble in email communication. The rapid growth of spam creates the necessity to build a reliable and robust spam classifier. This paper presents a study of evolutionary classifiers (genetic algorithm [GA] and genetic programming [GP]) with and without the help of an ensemble-of-classifiers method. In this research, the classifier ensemble has been developed with the adaptive boosting technique. Design/methodology/approach Text mining methods are applied for classifying spam and legitimate emails. Two data sets (Enron and SpamAssassin) are taken to test the classifiers. Initially, pre-processing is performed to extract the features/words from the email files. An informative feature subset is selected using the greedy stepwise feature subset search method. With the help of the informative features, a comparative study is performed, first among the evolutionary classifiers and then against other popular machine learning classifiers (Bayesian, naive Bayes and support vector machine). Findings This study reveals that evolutionary algorithms are promising in classification and prediction applications, where genetic programming with adaptive boosting turns out to be not only an accurate classifier but also a sensitive one. Results show that initially GA performs better than GP, but after an ensemble of classifiers (a large number of iterations), GP overtakes GA with significantly higher accuracy. Amongst all classifiers, boosted GP turns out to be good not only regarding classification accuracy but also in its low false positive (FP) rate, which is considered an important criterion in email spam classification. Also, greedy stepwise feature search is found to be an effective method for feature selection in this application domain.
Research limitations/implications The implications of this research consist of the reduction in the cost incurred because of spam/unsolicited bulk email. Email is a fundamental necessity for sharing information among the units of an organization so as to remain competitive with business rivals. In addition, it is a continual hurdle for internet service providers to provide the best emailing services to their customers. Although organizations and internet service providers are continuously adopting novel spam filtering approaches to reduce the number of unwanted emails, the desired effect has not been significant because of the cost of installation, limited customizability and the threat of misclassifying important emails. This research deals with these issues and challenges faced by internet service providers and organizations. Practical implications In this research, the proposed models have not only provided excellent performance accuracy and sensitivity with a low FP rate and customizable capability, but have also worked toward reducing the cost of spam. The same models may be used for other text mining applications, such as sentiment analysis, blog mining or news mining. Originality/value A comparison between GP and GAs has been shown, with and without ensembles, in the spam classification application domain.
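Greedy stepwise (forward) feature-subset search, the selection method used in the study, can be sketched as follows. The scoring function here is a stand-in for validation accuracy of a real classifier, and the word lists are invented, so this shows only the search mechanics:

```python
# Forward greedy stepwise feature-subset search: repeatedly add the
# single feature that most improves the score, stopping when no
# feature helps. Scoring is a toy stand-in for classifier accuracy.
INFORMATIVE = {"free", "winner", "account"}  # invented "spammy" words

def score(subset):
    """Stand-in for validation accuracy: reward informative features,
    lightly penalize subset size (a crude regularizer)."""
    return len(set(subset) & INFORMATIVE) - 0.1 * len(subset)

def greedy_stepwise(features, max_size):
    selected = []
    while len(selected) < max_size:
        best_gain, best_feat = 0.0, None
        for f in features:
            if f in selected:
                continue
            gain = score(selected + [f]) - score(selected)
            if gain > best_gain:
                best_gain, best_feat = gain, f
        if best_feat is None:  # no remaining feature improves the score
            break
        selected.append(best_feat)
    return selected

feats = ["free", "meeting", "winner", "lunch", "account"]
print(greedy_stepwise(feats, 5))  # only the informative words survive
```

In the paper the score would come from evaluating the GA/GP classifier on a validation split, which is what makes the selected subset "informative" for spam detection.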
Styles APA, Harvard, Vancouver, ISO, etc.
20

Spruit, Marco, Marcin Kais et Vincent Menger. « Automated Business Goal Extraction from E-mail Repositories to Bootstrap Business Understanding ». Future Internet 13, no 10 (23 septembre 2021) : 243. http://dx.doi.org/10.3390/fi13100243.

Texte intégral
Résumé :
The Cross-Industry Standard Process for Data Mining (CRISP-DM), despite being the most popular data mining process for more than two decades, is known to leave those organizations lacking operational data mining experience puzzled and unable to start their data mining projects. This is especially apparent in the first phase of Business Understanding, at the conclusion of which, the data mining goals of the project at hand should be specified, which arguably requires at least a conceptual understanding of the knowledge discovery process. We propose to bridge this knowledge gap from a Data Science perspective by applying Natural Language Processing techniques (NLP) to the organizations’ e-mail exchange repositories to extract explicitly stated business goals from the conversations, thus bootstrapping the Business Understanding phase of CRISP-DM. Our NLP-Automated Method for Business Understanding (NAMBU) generates a list of business goals which can subsequently be used for further specification of data mining goals. The validation of the results on the basis of comparison to the results of manual business goal extraction from the Enron corpus demonstrates the usefulness of our NAMBU method when applied to large datasets.
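The idea of bootstrapping Business Understanding from e-mail text can be caricatured as cue-phrase filtering over sentences. The cue phrases and sample e-mail below are invented and far simpler than the NAMBU method, which uses full NLP pipelines:

```python
import re

# Heavily simplified sketch: scan an e-mail body for sentences whose
# phrasing signals an explicitly stated business goal. Cue phrases
# and the sample e-mail are invented, not taken from the paper.
GOAL_CUES = re.compile(r"\b(our goal is|we aim to|we need to)\b", re.I)

def extract_goal_sentences(email_body):
    sentences = re.split(r"(?<=[.!?])\s+", email_body)
    return [s for s in sentences if GOAL_CUES.search(s)]

email = ("Thanks for the update. Our goal is to cut churn by 10% this quarter. "
         "Lunch is at noon. We need to segment customers by usage first.")
for goal in extract_goal_sentences(email):
    print(goal)
```

The extracted sentences would then be the raw material for specifying data mining goals, which is the step CRISP-DM's Business Understanding phase asks for.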
Styles APA, Harvard, Vancouver, ISO, etc.
21

Lin, Wei-Chao, Shih-Wen Ke et Chih-Fong Tsai. « Top 10 data mining techniques in business applications : a brief survey ». Kybernetes 46, no 7 (7 août 2017) : 1158–70. http://dx.doi.org/10.1108/k-10-2016-0302.

Texte intégral
Résumé :
Purpose Data mining is widely considered necessary in many business applications for effective decision-making. The importance of business data mining is reflected by the existence of numerous surveys in the literature focusing on the investigation of related works using data mining techniques for solving specific business problems. The purpose of this paper is to answer the following question: What are the widely used data mining techniques in business applications? Design/methodology/approach The aim of this paper is to examine related surveys in the literature and thus to identify the frequently applied data mining techniques. To ensure the recent relevance and quality of the conclusions, the criterion for selecting related studies are that the works be published in reputed journals within the past 10 years. Findings There are 33 different data mining techniques employed in eight different application areas. Most of them are supervised learning techniques and the application area where such techniques are most often seen is bankruptcy prediction, followed by the areas of customer relationship management, fraud detection, intrusion detection and recommender systems. Furthermore, the widely used ten data mining techniques for business applications are the decision tree (including C4.5 decision tree and classification and regression tree), genetic algorithm, k-nearest neighbor, multilayer perceptron neural network, naïve Bayes and support vector machine as the supervised learning techniques and association rule, expectation maximization and k-means as the unsupervised learning techniques. Originality/value The originality of this paper is to survey the recent 10 years of related survey and review articles about data mining in business applications to identify the most popular techniques.
Styles APA, Harvard, Vancouver, ISO, etc.
22

Nambiar, Athira, et Divyansh Mundra. « An Overview of Data Warehouse and Data Lake in Modern Enterprise Data Management ». Big Data and Cognitive Computing 6, no 4 (7 novembre 2022) : 132. http://dx.doi.org/10.3390/bdcc6040132.

Texte intégral
Résumé :
Data is the lifeblood of any organization. In today’s world, organizations recognize the vital role of data in modern business intelligence systems for making meaningful decisions and staying competitive in the field. Efficient and optimal data analytics provides a competitive edge to its performance and services. Major organizations generate, collect and process vast amounts of data, falling under the category of big data. Managing and analyzing the sheer volume and variety of big data is a cumbersome process. At the same time, proper utilization of the vast collection of an organization’s information can generate meaningful insights into business tactics. In this regard, two of the popular data management systems in the area of big data analytics (i.e., data warehouse and data lake) act as platforms to accumulate the big data generated and used by organizations. Although seemingly similar, both of them differ in terms of their characteristics and applications. This article presents a detailed overview of the roles of data warehouses and data lakes in modern enterprise data management. We detail the definitions, characteristics and related works for the respective data management frameworks. Furthermore, we explain the architecture and design considerations of the current state of the art. Finally, we provide a perspective on the challenges and promising research directions for the future.
Styles APA, Harvard, Vancouver, ISO, etc.
23

Alkadi, Ihssan. « Data Mining ». Review of Business Information Systems (RBIS) 12, no 1 (1 janvier 2008) : 17–24. http://dx.doi.org/10.19030/rbis.v12i1.4394.

Texte intégral
Résumé :
Recently, data mining has become more popular in the information industry, owing to the availability of huge amounts of data and the industry's need to turn such data into useful information and knowledge. This information and knowledge can be used in many applications, ranging from business management, production control, and market analysis to engineering design and science exploration. Database and information technology have evolved systematically from primitive file processing systems to sophisticated and powerful database systems. Research and development in database systems has led to relational database systems, data modeling tools, and indexing and data organization techniques. In relational database systems, data are stored in relational tables, and users get convenient and flexible access to data through query languages, optimized query processing, user interfaces, transaction management, and optimized methods for On-Line Transaction Processing (OLTP). This abundance of data, coupled with the need for powerful analysis tools, has been described as a data-rich but information-poor situation: tremendous amounts of data are collected and stored in large and numerous databases, far more than humans can analyze without powerful tools. As a result, data collected in large databases become data tombs, archives that are seldom visited, and important decisions are often based on a decision maker's intuition rather than on the information-rich data stored in databases, simply because the decision maker lacks the tools to extract the valuable knowledge embedded in the vast amounts of data. Data mining tools that perform data analysis may uncover important data patterns, contributing greatly to business strategies, knowledge bases, and scientific and medical research. In short, data mining tools will turn data tombs into golden nuggets of knowledge.
Styles APA, Harvard, Vancouver, ISO, etc.
24

Zhang, Qian, Jingyao Li, Hongyao Zhao, Quanqing Xu, Wei Lu, Jinliang Xiao, Fusheng Han, Chuanhui Yang et Xiaoyong Du. « Efficient Distributed Transaction Processing in Heterogeneous Networks ». Proceedings of the VLDB Endowment 16, no 6 (février 2023) : 1372–85. http://dx.doi.org/10.14778/3583140.3583153.

Texte intégral
Résumé :
Countrywide and worldwide businesses, such as gaming and social networks, drive the popularity of inter-data-center transactions. To support inter-data-center transaction processing and data center fault tolerance simultaneously, existing protocols suffer significant performance degradation due to high-latency and unstable networks. In this paper, we propose RedT, a novel distributed transaction processing protocol that works in heterogeneous networks, in which nodes within a data center are inter-connected via an RDMA-capable network while nodes across data centers are inter-connected via TCP/IP networks. RedT extends two-phase commit (2PC) by decomposing transactions into sub-transactions at data-center granularity and by proposing a pre-write-log mechanism that is able to reduce the number of inter-data-center round-trips from a maximum of 6 to 2. Extensive evaluation against state-of-the-art protocols shows that RedT can achieve up to 1.57× higher throughput and 0.56× lower latency.
Styles APA, Harvard, Vancouver, ISO, etc.
25

Jing, Changhong, Wenjie Liu, Jintao Gao et Ouya Pei. « Research and implementation of HTAP for distributed database ». Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 39, no 2 (avril 2021) : 430–38. http://dx.doi.org/10.1051/jnwpu/20213920430.

Texte intégral
Résumé :
Data processing can be roughly divided into two categories: on-line transaction processing (OLTP) and on-line analytical processing (OLAP). OLTP is the main application of traditional relational databases and covers basic daily transaction processing, such as bank account transactions. OLAP is the main application of data warehouse systems; it supports more complex data analysis operations, focuses on decision support, and provides accessible, intuitive analysis results. As the amount of data processed by enterprises continues to increase, distributed databases have gradually replaced stand-alone databases and become mainstream. However, the business currently supported by distributed databases is mainly based on OLTP applications, and OLAP implementations are lacking. This paper proposes an HTAP implementation method for the distributed database CBase, which provides OLAP analysis for CBase and can easily handle analysis of large amounts of data.
Styles APA, Harvard, Vancouver, ISO, etc.
26

VARMA, P. ROHITH. « Natural Language Processing in the Era of BIG DATA ». INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no 06 (20 juin 2024) : 1–5. http://dx.doi.org/10.55041/ijsrem35947.

Texte intégral
Résumé :
Natural Language Processing (NLP) has undergone a revolution in the age of big data. The large and diverse text corpora generated by big data make it possible to develop advanced NLP models using machine learning algorithms and distributed computing techniques. The combination of NLP and big data has led to the emergence of powerful language models such as BERT and GPT, allowing NLP systems to better understand content and provide insights in many applications, such as machine translation, question answering, and custom text processing. Applying NLP in big data environments provides solutions to many problems across industries: business intelligence can benefit from collected data and real-time insights, while collaboration can be enhanced through networking. Sentiment analysis helps improve product and market research by allowing organisations to understand customers' thoughts and preferences. This study demonstrates the effectiveness of NLP in analysing large datasets, especially for sentiment analysis using the MapReduce framework. Overall, this paper highlights the potential and challenges of integrating NLP with big data, and gives insight into how the combination works and how it can be used in many ways, leading to a better understanding of language and data analysis.
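The MapReduce-style sentiment counting mentioned in the abstract can be sketched in pure Python: a map step emits (sentiment, 1) pairs per opinion word, and a reduce step sums them. The lexicon and tweets are invented:

```python
from functools import reduce

# Toy map/reduce pass over short texts, echoing the MapReduce-based
# sentiment analysis the abstract describes. Lexicon and tweets invented.
POSITIVE, NEGATIVE = {"great", "love"}, {"bad", "slow"}

def mapper(tweet):
    """Emit a ('pos'|'neg', 1) pair for each opinion word in the tweet."""
    words = tweet.lower().split()
    return ([("pos", 1) for w in words if w in POSITIVE]
            + [("neg", 1) for w in words if w in NEGATIVE])

def reducer(acc, pair):
    """Sum the counts per sentiment key."""
    key, count = pair
    acc[key] = acc.get(key, 0) + count
    return acc

tweets = ["Great service, love it", "Delivery was slow", "Bad support, slow reply"]
mapped = [pair for t in tweets for pair in mapper(t)]
totals = reduce(reducer, mapped, {})
print(totals)  # aggregate sentiment counts across all tweets
```

In a real Hadoop or Spark deployment the map and reduce functions run in parallel across the cluster; the per-record logic is what this sketch shows.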
Styles APA, Harvard, Vancouver, ISO, etc.
27

Irtaimeh, Hani J., Abdallah Mishael Obeidat, Shadi H. Abualloush et Amineh A. Khaddam. « Impact of Business Intelligence on Technical Creativity : A Case Study on AlHekma Pharmaceutical Company ». European Scientific Journal, ESJ 12, no 28 (31 octobre 2016) : 502. http://dx.doi.org/10.19044/esj.2016.v12n28p502.

Texte intégral
Résumé :
Business Intelligence, through its dimensions (data warehousing, data mining, and direct analytical processing), helps the members of an organization to perceive and interpret their role in the organization's creativity. For this reason, we may assume that Business Intelligence has an impact on Technical Creativity, and that matching Business Intelligence with Technical Creativity will improve an organization and help it achieve excellence. The aim of this study is to explore the impact of the Business Intelligence dimensions (data warehousing, data mining, direct analytical processing) on Technical Creativity in AlHekma Pharmaceutical Company as a case study. For this purpose, a questionnaire was developed to collect data from the study population, which consists of 50 employees, in order to test the hypotheses and achieve the objectives of the study. The most important result is that there is a statistically significant impact of Business Intelligence, with its dimensions (data warehousing, data mining, and direct analytical processing), on technical creativity. The most important recommendation is that organizations should rely on modern technology to develop their work, because such technology is recognized for its high accuracy in completing work, and should deepen the concept of technical creativity, which gives them a competitive advantage in the market.
Styles APA, Harvard, Vancouver, ISO, etc.
28

Sharma, Dr Ashish Kumar, et Dr Nirmal Kumar. « Safety in Business-to-Business Online Transactions ». International Journal for Research in Applied Science and Engineering Technology 11, no 4 (30 avril 2023) : 3273–76. http://dx.doi.org/10.22214/ijraset.2023.50860.

Texte intégral
Résumé :
Abstract: B2B supply chain financing is a new financial model that aims to make it easier to raise money and to improve processes, efficiency, and growth in the supply chain. It is used for online purchases between businesses (B2B) and is built on the B2B e-commerce platform. For processing and analysis, data are needed on how logistics, business, information, and money flows move together. Customers who want to take advantage of the speed and ease of e-commerce must not only choose products wisely but also know and follow all applicable rules and laws, in order to avoid legal trouble. This paper starts by looking at how the B2B platform's online supply chain financial business model creates and operates credit risks; it then builds a system to prevent and control those risks, based on a supply chain financial risk evaluation index system.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Chakraborty, Sutapa. « Sentiment Analysis in the Perspective of Natural Language Processing ». International Journal for Research in Applied Science and Engineering Technology 11, no 11 (30 novembre 2023) : 2235–41. http://dx.doi.org/10.22214/ijraset.2023.56925.

Texte intégral
Résumé :
Abstract: Sentiment analysis, also called opinion mining, is a way to use natural language processing to determine how someone feels about something written down. It involves reading a text and putting it into one of three groups: positive, negative, or neutral. In this paper we give an overview of Natural Language Processing (NLP) and its subset, sentiment analysis, and of how NLP is used to perform sentiment analysis. NLP is a subfield of artificial intelligence (AI) concerned with giving computers the ability to understand text and spoken words in much the same way human beings can; it is currently a very popular approach because it works so well. We explore the different ways NLP can be used for sentiment analysis, the challenges to look out for, and how it can revolutionize the marketing strategies of multinational companies and improve customer experiences. Businesses commonly use sentiment analysis to find out and sort people's opinions on a product, service, or idea, making it an important business intelligence tool that helps companies improve their products and services.
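The three-way positive/negative/neutral split described above can be illustrated with a minimal lexicon-based classifier. The word lists are invented; real systems use learned models or large curated lexicons:

```python
# Minimal lexicon-based sentiment classifier: count positive and
# negative opinion words and map the net score to one of three labels.
POS_WORDS = {"good", "excellent", "happy"}
NEG_WORDS = {"poor", "terrible", "unhappy"}

def classify(text):
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(w in POS_WORDS for w in words) - sum(w in NEG_WORDS for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("The product is excellent and I am happy"))
print(classify("Terrible battery, poor screen"))
print(classify("The parcel arrived on Tuesday"))
```

The neutral class falls out naturally as the zero-score case, which is why lexicon methods struggle with texts that express opinion without using opinion words.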
Styles APA, Harvard, Vancouver, ISO, etc.
30

Malki, Abdelhamid, Sidi Mohammed Benslimane et Mimoun Malki. « Towards Rank-Aware Data Mashups ». International Journal of Web Services Research 17, no 4 (octobre 2020) : 1–14. http://dx.doi.org/10.4018/ijwsr.2020100101.

Texte intégral
Résumé :
Data mashups are web applications that combine complementary (raw) data pieces from different data services or web data APIs to provide value-added information to users. They have become very popular over the last few years, and their applications are numerous, including addressing transient business needs in modern enterprises. Even though data mashups have been the focus of many research works, they still face many challenging issues that have never been explored. The ranking of the data returned by a data mashup is one of the key issues that has received little consideration. The top-k query model ranks the pertinent answers according to a given ranking function and returns only the best results. This paper proposes two algorithms that optimize the evaluation of top-k queries over data mashups. These algorithms are built on the web data APIs' access methods: bind probe and indexed probe.
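A rank-aware mashup step of the kind the paper optimizes can be sketched as a heap-based top-k over records merged from two simulated APIs. The API data, ranking weights, and names below are all invented, and this naive version materializes every record, which is exactly the cost the paper's probe-aware algorithms avoid:

```python
import heapq

# Simulated web data APIs: one returns hotel records, the other ratings.
hotels_api = [{"name": "A", "price": 80}, {"name": "B", "price": 120}]
reviews_api = {"A": 4.1, "B": 4.8}

def rank(record):
    """Toy ranking function: higher rating and lower price rank higher."""
    return reviews_api.get(record["name"], 0) - 0.01 * record["price"]

def top_k(records, k):
    """Return the k best records under the ranking function."""
    return heapq.nlargest(k, records, key=rank)

best = top_k(hotels_api, 1)
print([r["name"] for r in best])
```

A bind-probe API would only let the mashup look up ratings one hotel at a time, while an indexed probe could fetch candidates already sorted by rating; the paper's algorithms exploit those access methods to stop early instead of ranking everything.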
Styles APA, Harvard, Vancouver, ISO, etc.
31

ЛИПАТОВА, М. Н. « ARTIFICIAL INTELLIGENCE IN BUSINESS ». Экономика и предпринимательство, no 11(160) (21 décembre 2023) : 868–71. http://dx.doi.org/10.34925/eip.2023.160.11.164.

Texte intégral
Résumé :
This article considers the advantages and risks of using artificial intelligence in business. The article discusses the advantages associated with AI implementation, such as improved efficiency and performance, the ability to make the right decisions based on analyzing large amounts of data, personalization, and data protection. Challenges related to the introduction of AI are also discussed, such as errors in the source data, high initial investment, job cuts and unemployment, and the objectivity of the decisions made. The article points to the need to balance the positive and negative aspects when implementing AI in business, and considers the areas of business in which the introduction of AI is most effective. This article is recommended reading for everyone who works in a business environment and is interested in the capabilities of AI.
Styles APA, Harvard, Vancouver, ISO, etc.
32

Amartasya, Magfira Shifa, et Dwi Ari Cahyani. « Kajian Analisa Usaha Sale Pisang Gulung di UD. Putra Roti Banjarnegara ». Proceedings Series on Physical & ; Formal Sciences 2 (10 novembre 2021) : 269–76. http://dx.doi.org/10.30595/pspfs.v2i.199.

Texte intégral
Résumé :
Banana sale is an alternative food produced by drying. Traditionally, sale is processed by sun-drying, though some producers first smoke the bananas over firewood; this smoking reduces the quality of the banana sale. The business analysis in this study was carried out to determine the extent to which the banana-roll business is profitable, and to assess the profit, loss, and sustainability of the business managed by UD Putra Roti Banjarnegara. The research was conducted at UD Putra Roti, chosen because it is one of the most popular food businesses in Banjarnegara district, from October to December 2020. The data collected were primary and secondary data, analyzed using the following methods: gross income, net income, equipment depreciation, BEP, R/C ratio, B/C ratio, and ROI. Processing of sale banana rolls at UD Putra Roti includes procurement of raw materials, sorting, slicing, arranging, drying, rolling the dried bananas, cutting the rolls, standing, making the dough rolls, frying, draining, and packaging. The analysis shows that the banana-roll processing business at UD Putra Roti is feasible, with an R/C ratio > 1 (1.39), a B/C ratio of 0.39, and an ROI of 39%, meaning that every Rp 100 of capital yields a profit of Rp 39.
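The feasibility ratios quoted in this abstract (R/C ratio, B/C ratio, ROI) are standard formulas; a minimal sketch follows, using illustrative revenue and cost figures chosen to reproduce the reported ratios (the actual UD Putra Roti figures are not given here):

```python
# Standard small-business feasibility ratios, as used in the abstract:
#   R/C ratio = total revenue / total cost      (feasible if > 1)
#   B/C ratio = net income  / total cost        (= R/C ratio - 1)
#   ROI       = net income  / capital, here taken as total cost

def feasibility(revenue: float, total_cost: float) -> dict:
    net_income = revenue - total_cost
    return {
        "rc_ratio": revenue / total_cost,
        "bc_ratio": net_income / total_cost,
        "roi_pct": 100.0 * net_income / total_cost,
    }

# Illustrative numbers chosen so the ratios match those reported (R/C = 1.39):
m = feasibility(revenue=13_900_000, total_cost=10_000_000)
print(m)  # rc_ratio 1.39, bc_ratio 0.39, roi_pct 39.0
```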
Styles APA, Harvard, Vancouver, ISO, etc.
33

Soleh, Ahmad, et Resista Vikaliana. « Analisis penerapan system application and product in data processing (SAP) pada sistem inventory logistik pada PT. Haier Sales Indonesia, Jakarta Utara ». Operations Excellence : Journal of Applied Industrial Engineering 12, no 1 (2 avril 2020) : 124. http://dx.doi.org/10.22441/oe.2020.v12.i1.011.

Texte intégral
Résumé :
Rapid economic development causes increasingly fierce business competition. As technology grows more sophisticated, companies compete to use it to increase their competitive advantage and efficiency. One of the most popular information technology investments today is System Application and Product in Data Processing (SAP), a system that runs the business on a single database accessible to all divisions within the company. The purpose of this research is to study and discuss the application of SAP at PT. Haier Sales Indonesia. The study uses qualitative methods, namely in-depth interviews with the IT manager, the logistics manager, and warehouse administration staff, analyzed with Pareto diagrams and fishbone diagrams. The evaluation shows that the application of SAP to inventory logistics at PT. Haier Sales Indonesia is improving: all data from every activity is documented and integrated into the system much faster than under the previous manual process, and accurate reports can be produced. It is suggested that SAP training for users be continued to support further development of the system.
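The Pareto-diagram step mentioned in this abstract can be sketched as follows: rank problem categories by frequency and compute the cumulative share, which identifies the "vital few" causes. The issue categories and counts below are hypothetical, not taken from the paper:

```python
# Pareto analysis: rank problem categories by frequency and compute the
# cumulative percentage, as plotted on a Pareto diagram.
def pareto(counts: dict) -> list:
    total = sum(counts.values())
    ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    rows, cum = [], 0
    for cause, n in ranked:
        cum += n
        rows.append((cause, n, round(100 * cum / total, 1)))
    return rows

# Hypothetical inventory-logistics issue counts (for illustration only):
issues = {"input delay": 40, "stock mismatch": 30, "damaged goods": 20,
          "wrong label": 10}
for cause, n, cum_pct in pareto(issues):
    print(f"{cause:15s} {n:3d} {cum_pct:5.1f}%")
```

Reading the cumulative column off the output shows which top categories account for roughly 80% of occurrences.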
Styles APA, Harvard, Vancouver, ISO, etc.
34

Soleh, Ahmad, et Resista Vikaliana. « Analisis penerapan system application and product in data processing (SAP) pada sistem inventory logistik pada PT. Haier Sales Indonesia, Jakarta Utara ». Operations Excellence : Journal of Applied Industrial Engineering 12, no 1 (2 avril 2020) : 124. http://dx.doi.org/10.22441/oe.v12.1.2020.051.

Texte intégral
Résumé :
Rapid economic development causes increasingly fierce business competition. As technology grows more sophisticated, companies compete to use it to increase their competitive advantage and efficiency. One of the most popular information technology investments today is System Application and Product in Data Processing (SAP), a system that runs the business on a single database accessible to all divisions within the company. The purpose of this research is to study and discuss the application of SAP at PT. Haier Sales Indonesia. The study uses qualitative methods, namely in-depth interviews with the IT manager, the logistics manager, and warehouse administration staff, analyzed with Pareto diagrams and fishbone diagrams. The evaluation shows that the application of SAP to inventory logistics at PT. Haier Sales Indonesia is improving: all data from every activity is documented and integrated into the system much faster than under the previous manual process, and accurate reports can be produced. It is suggested that SAP training for users be continued to support further development of the system.
Styles APA, Harvard, Vancouver, ISO, etc.
35

Mirov, Y. A. « The Analysis of Provisions of the Theory of Graves’s Emergent Levels of Biopsychosocial Systems ». Bulletin of Irkutsk State University. Series Psychology 44 (2023) : 44–56. http://dx.doi.org/10.26516/2304-1226.2023.44.44.

Texte intégral
Résumé :
Since the early 21st century, numerous works in the scientific and popular literature have been devoted to Clare Graves's theory of emergent levels of biopsychosocial systems and to theories derived from it, such as Spiral Dynamics. The theory was developed from the analysis of a significant amount of data gathered to reveal patterns of "normal" individual behavior, and it is widely used in business consulting and other spheres. Since the popularization of any theory can lead to its mythologization, it is necessary to analyze the basic provisions of Graves's theory and its derivatives in order to distinguish the provisions that are scientifically grounded, those that have hypothetical status and require experimental verification, and pseudoscientific popular views. The present review is an attempt at such an analysis.
Styles APA, Harvard, Vancouver, ISO, etc.
36

Tyshchuk, Yurii, Victoria Vysotska et Olha Vlasenko. « Information system for converting audio in Ukrainian language into its textual representation using nlp methods and machine learning ». Vìsnik Nacìonalʹnogo unìversitetu "Lʹvìvsʹka polìtehnìka". Serìâ Ìnformacìjnì sistemi ta merežì 12 (15 décembre 2022) : 23–51. http://dx.doi.org/10.23939/sisn2022.12.023.

Texte intégral
Résumé :
Speech recognition involves various models, methods and algorithms for analysing and processing the user's recorded voice, allowing people to control systems that support some type of speech recognition. A speech-to-text conversion system is a type of speech recognition that uses spoken data for further processing. It provides several stages for processing an audio file: electroacoustic means, filtering algorithms that isolate relevant sounds in the audio file, electronic data arrays for the selected language, and mathematical models that assemble the most likely words from phonemes. By converting speech to text, people whose professions involve typing large amounts of text significantly speed up and ease their work and reduce stress. Such systems also help businesses: as remote work becomes more and more popular, companies need tools to record and systematize meetings as written text. The object of the research is the process of converting spoken Ukrainian into written text based on NLP and machine learning methods. The subject of the research is file-processing algorithms for extracting relevant sounds and recognizing phonemes, as well as mathematical models for recognizing an array of phonemes as specific words. The purpose of the work is to design and develop an information system for converting Ukrainian-language audio into written text based on the Ukrainian Speech-to-Text Web application, a technology for accurate and easy analysis of Ukrainian-language audio files and their subsequent transcription into text. The application supports downloading files from the file system, recording with the microphone, and saving the analysed data.
The article also describes the design stages and the general architecture of such a system for converting Ukrainian-language audio into written text. Experimental testing of the developed system showed that the number of words does not affect the accuracy of the conversion algorithm; the small percentage decrease observed was due to the complexity of the words and the low quality of the microphone, and hence of the recorded file.
Styles APA, Harvard, Vancouver, ISO, etc.
37

Wong, Kok Wai, Tamás Gedeon et Chun Che Fung. « Special Issue on Advances in Intelligent Data Processing ». Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no 3 (20 mars 2007) : 259–60. http://dx.doi.org/10.20965/jaciii.2007.p0259.

Texte intégral
Résumé :
Technological advancement using intelligent techniques has provided solutions to many applications in diverse engineering disciplines. In application areas such as web mining, image processing, medicine, and robotics, a single intelligent data processing technique may be inadequate for handling a task, and a combination or hybrid of intelligent data processing techniques becomes necessary. The sharp increase in activity in the development of innovative intelligent data processing technologies has also attracted the interest of many researchers in applying these techniques in other application domains. In this special issue, we present 12 research papers focusing on different aspects of intelligent data processing and its applications. We start with a paper entitled "An Activity Monitor Design Based on Wavelet Analysis and Wireless Sensor Networks," which focuses on using wavelet analysis and wireless sensor networks to monitor the human physical condition. The second paper, "An Approach in Designing Hierarchy of Fuzzy Behaviors for Mobile Robot Navigation," presents a hierarchical approach using fuzzy theory to assist in the task of mobile robot navigation, and discusses the design of hierarchical behavior of mobile robots using sensors. The third paper, "Toward Natural Communication: Human-Robot Gestural Interaction Using Pointing," also works with robots, focusing on the interaction between users and robots, in which the robot recognizes pointing by a human user through intelligent data processing. The fourth paper, "Embodied Conversational Agents for H5N1 Pandemic Crisis," examines the use of intelligent software bots as an interaction tool for crisis communication; the work is based on a novel Automated Knowledge Extraction Agent (AKEA). There is much interest in using intelligent data processing techniques for image processing and analysis, as the next few papers show.
The fifth paper, "A Feature Vector Approach for Inter-Query Learning for Content-Based Image Retrieval," presents a relevance-feedback-based technique for content-based image retrieval, extending the relevance feedback approach to capture the inter-query relationship between current and previous queries. The sixth paper, "Abstract Image Generation Based on Local Similarity Pattern," also falls in the area of image retrieval, using local similarity patterns to generate abstract images from a given set of images. Along the same line of similarity measures for image retrieval, the seventh paper, "Cross-Resolution Image Similarity Modeling," uses probabilistic and fuzzy theory to formulate cross-resolution image similarity modeling. The eighth paper, "Bayesian Spatial Autoregressive for Reducing Blurring Effect in Image," presents a Bayesian spatial autoregressive technique developed by Geweke and LeSage. The ninth paper, "Logistic GMDH-Type Neural Network and its Application to Identification of X-Ray Film Characteristic Curve," presents a class of neural networks for X-ray film processing and compares results with some conventional techniques. As digital entertainment and games grow increasingly popular, the tenth paper, "Classification of Online Game Players Using Action Transition Probability and Kullback Leibler Entropy," looks into the use of intelligent data processing for classifying online game players. The eleventh paper, "Parallel Learning Model and Topological Measurement for Self-Organizing Maps," presents the concept of a SOM parallel learning model that appears both robust and efficient. The twelfth paper, "Optimal Size Fuzzy Models," delineates concepts on how to make fuzzy systems more efficient. As guest editors for this issue, we thank the authors for their hard work. We also thank the reviewers for their assistance in the review process.
All full papers submitted to this special issue have been peer-reviewed by at least two international reviewers in the area.
Styles APA, Harvard, Vancouver, ISO, etc.
38

Kiruthika, S., U. Sneha Dharshini, K. R. Vaishnavi et R. V. Vishwa Priya. « Sentiment Analysis of Flipkart Product Reviews using Natural Language Processing ». International Journal of Recent Technology and Engineering (IJRTE) 12, no 2 (30 juillet 2023) : 54–62. http://dx.doi.org/10.35940/ijrte.b7774.0712223.

Texte intégral
Résumé :
In this contemporary world, people depend more on e-commerce sites and applications to purchase items online. People buy online based on the ratings and reviews given by previous buyers, which determine the success or failure of a product. Likewise, suppliers and manufacturers judge the success or failure of their products by analyzing the reviews given by customers. In the existing system, a number of techniques were used to analyze a dataset of product reviews. It also presented sentiment classification algorithms applying supervised learning to product reviews located in two different datasets. The proposed experimental methods examined the accuracy of all sentiment classification algorithms and how to determine which algorithm is most accurate. However, the existing system is unable to detect fake positive and fake negative reviews with its detection procedures. In one of the most popular works, the seed words "Bad" and "Outstanding" were used to determine semantic orientation, computed with the pointwise mutual information technique; the sentiment orientation of a document was calculated as the average semantic orientation of all such phrases. The semantic orientation of context-independent opinions is determined directly, while linguistic rules are used to infer the orientation of context-dependent opinions. Contextual information from other reviews that discuss the same product feature is extracted to determine the context-dependent opinions.
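The pointwise-mutual-information technique this abstract refers to can be sketched from co-occurrence counts: the semantic orientation of a phrase is its PMI with a positive seed word minus its PMI with a negative seed word. All counts below are invented for illustration:

```python
import math

# Semantic orientation via PMI against seed words:
#   SO(phrase) = PMI(phrase, positive seed) - PMI(phrase, negative seed)
# where PMI(a, b) = log2( p(a, b) / (p(a) * p(b)) ), estimated from counts.

def so_pmi(cooc_pos, cooc_neg, n_phrase, n_pos, n_neg, n_total):
    """All arguments are raw counts from a review corpus (here invented)."""
    pmi_pos = math.log2((cooc_pos / n_total) /
                        ((n_phrase / n_total) * (n_pos / n_total)))
    pmi_neg = math.log2((cooc_neg / n_total) /
                        ((n_phrase / n_total) * (n_neg / n_total)))
    return pmi_pos - pmi_neg

# A phrase that co-occurs far more often with the positive seed word
# receives a positive orientation score:
score = so_pmi(cooc_pos=80, cooc_neg=5, n_phrase=100,
               n_pos=1000, n_neg=1000, n_total=100_000)
print(score > 0)  # positive orientation
```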
Styles APA, Harvard, Vancouver, ISO, etc.
39

Baruah, Nirvik, Peter Kraft, Fiodar Kazhamiaka, Peter Bailis et Matei Zaharia. « Parallelism-Optimizing Data Placement for Faster Data-Parallel Computations ». Proceedings of the VLDB Endowment 16, no 4 (décembre 2022) : 760–71. http://dx.doi.org/10.14778/3574245.3574260.

Texte intégral
Résumé :
Systems performing large data-parallel computations, including online analytical processing (OLAP) systems like Druid and search engines like Elasticsearch, are increasingly being used for business-critical real-time applications where providing low query latency is paramount. In this paper, we investigate an underexplored factor in the performance of data-parallel queries: their parallelism. We find that to minimize the tail latency of data-parallel queries, it is critical to place data such that the data items accessed by each individual query are spread across as many machines as possible so that each query can leverage the computational resources of as many machines as possible. To optimize parallelism and minimize tail latency in real systems, we develop a novel parallelism-optimizing data placement algorithm that defines a linearly-computable measure of query parallelism, uses it to frame data placement as an optimization problem, and leverages a new optimization problem partitioning technique to scale to large cluster sizes. We apply this algorithm to popular systems such as Solr and MongoDB and show that it reduces p99 latency by 7-64% on data-parallel workloads.
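The parallelism notion described here, spreading each query's data items across as many machines as possible, can be illustrated with a toy placement. As a simplification of the paper's linearly computable measure, take a query's parallelism to be the number of distinct machines holding the items it touches:

```python
# Toy illustration of parallelism-aware placement (a simplification of the
# paper's algorithm): parallelism of a query = number of distinct machines
# that hold at least one of the items it accesses.
def query_parallelism(placement: dict, query_items: list) -> int:
    return len({placement[item] for item in query_items})

def spread_placement(items: list, n_machines: int) -> dict:
    # Round-robin spreads co-accessed items across machines.
    return {item: i % n_machines for i, item in enumerate(items)}

items = ["a", "b", "c", "d", "e", "f"]
spread = spread_placement(items, n_machines=3)
packed = {item: 0 for item in items}          # everything on one machine
query = ["a", "b", "c"]
print(query_parallelism(spread, query))        # 3: full parallelism
print(query_parallelism(packed, query))        # 1: the query bottlenecks
```

The spread query can use three machines' compute at once, which is the mechanism behind the tail-latency reductions the paper reports.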
Styles APA, Harvard, Vancouver, ISO, etc.
40

Török, Réka Melinda. « Artificial intelligence algorithms applied in business and accounting ». Timisoara Journal of Economics and Business 15, no 1 (1 décembre 2022) : 73–90. http://dx.doi.org/10.2478/tjeb-2022-0005.

Texte intégral
Résumé :
The paper explains some of the terms used in business and accounting when it comes to the implementation of artificial intelligence in these areas. The development of artificial intelligence began in the 1950s, at first with small steps, but in the last two years it has been developing at the speed of light. To understand the algorithms with which artificial intelligence works, I chose to outline machine learning, big data, and neural networks. The benefits for business and accounting can be seen in easier and faster data processing. Among the applications used in accounting, we present AlphaSense, TensorFlow, Kensho, and Clarifai. Whereas accounting until now involved archiving on paper, blockchain and cloud accounting come to our aid: thanks to distributed ledger technology, they eliminate the need to enter accounting information into several databases.
Styles APA, Harvard, Vancouver, ISO, etc.
41

Gamzayev, Rustam, et Bohdan Shkoda. « Development and Investigation of Adaptive Micro-Service Architecture for Messaging Software Systems ». Modeling Control and Information Technologies, no 5 (21 novembre 2021) : 46–49. http://dx.doi.org/10.31713/mcit.2021.13.

Texte intégral
Résumé :
Messaging software systems (MSS) are among the most popular tools, used by a huge number of people for both personal communication and business purposes. Building one's own MSS requires analyzing its quality attributes and considering adaptation to a changing environment. This paper gives an overview of existing MSS architectures. A data model was developed to support the storage and processing of historical and real-time data, and an approach to building an adaptive microservice MSS based on messaging middleware and a NoSQL database is proposed.
Styles APA, Harvard, Vancouver, ISO, etc.
42

Li, Jian. « Experimental and Numerical Simulation of Multipass Hot-Rolling on 7150 Aluminum Alloy ». Applied Mechanics and Materials 624 (août 2014) : 138–42. http://dx.doi.org/10.4028/www.scientific.net/amm.624.138.

Texte intégral
Résumé :
This paper studies 7150 aluminum alloy using thermal simulation technology: high-temperature compression experiments are used to investigate high-temperature forming and inter-pass softening. Popular commercial finite element software is then used to simulate the relevant data for multipass hot rolling of 7150 aluminum alloy. With the parameters obtained, a hot-rolling experiment and RRA treatment are carried out. On this basis, the experimental and simulated data for multipass hot rolling of 7150 aluminum alloy are analyzed.
Styles APA, Harvard, Vancouver, ISO, etc.
43

Diana Yusuf. « Penerapan Data Mining Untuk Memprediksi Pembelian Mobil Bekas Menggunakan Algoritma Naïve Bayes ». Jurnal Sistem Informasi (JUSIN) 4, no 1 (13 juin 2023) : 29–38. http://dx.doi.org/10.32546/jusin.v4i1.2070.

Texte intégral
Résumé :
A database can also be viewed as a data warehouse. The data collected in a database can be processed to generate knowledge valuable for science. One popular and widely used technique for processing databases is data mining, the process of extracting knowledge from large and complex data warehouses. Data mining encompasses various algorithms for generating knowledge, one of which is naïve Bayes. The dataset used in this research, which employs the naïve Bayes algorithm, consists of attributes relevant to the purchase of used cars, such as year, transmission, mileage, car condition, and brand. This research aims to produce patterns and additional knowledge that help participants in the used-car business identify the factors that support used-car purchases.
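A minimal categorical naïve Bayes over used-car attributes like those listed can be sketched as follows; the training rows are invented, and Laplace smoothing is added to avoid zero probabilities:

```python
import math
from collections import Counter, defaultdict

# Minimal categorical naive Bayes with Laplace smoothing, over attributes
# like those in the abstract (the training rows here are invented).
def train(rows, labels):
    classes = Counter(labels)
    counts = defaultdict(Counter)     # (class, attr index) -> value counts
    for row, y in zip(rows, labels):
        for j, v in enumerate(row):
            counts[(y, j)][v] += 1
    return classes, counts

def predict(classes, counts, row):
    best, best_lp = None, -math.inf
    n = sum(classes.values())
    for y, cy in classes.items():
        lp = math.log(cy / n)                 # log prior
        for j, v in enumerate(row):
            c = counts[(y, j)]
            lp += math.log((c[v] + 1) / (cy + len(c) + 1))  # smoothed likelihood
        if lp > best_lp:
            best, best_lp = y, lp
    return best

rows = [("2018", "automatic", "low"), ("2010", "manual", "high"),
        ("2019", "automatic", "low"), ("2009", "manual", "high")]
labels = ["buy", "skip", "buy", "skip"]
model = train(rows, labels)
print(predict(*model, ("2018", "automatic", "low")))  # → buy
```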
Styles APA, Harvard, Vancouver, ISO, etc.
44

Yusuf, Diana. « Penerapan Data Mining Untuk Memprediksi Pembelian Mobil Bekas Menggunakan Algoritma Naive Bayes ». Jurnal Sistem Informasi (JUSIN) 3, no 1 (13 janvier 2022) : 34–38. http://dx.doi.org/10.32546/jusin.v3i1.2054.

Texte intégral
Résumé :
A database can also be viewed as a data warehouse. The data collected in a database can be processed to generate knowledge valuable for science. One popular and widely used technique for processing databases is data mining, the process of extracting knowledge from large and complex data warehouses. Data mining encompasses various algorithms for generating knowledge, one of which is naïve Bayes. The dataset used in this research, which employs the naïve Bayes algorithm, consists of attributes relevant to the purchase of used cars, such as year, transmission, mileage, car condition, and brand. This research aims to produce patterns and additional knowledge that help participants in the used-car business identify the factors that support used-car purchases.
Styles APA, Harvard, Vancouver, ISO, etc.
45

Rosengren, Sara, Martin Eisend, Scott Koslow et Micael Dahlen. « A Meta-Analysis of When and How Advertising Creativity Works ». Journal of Marketing 84, no 6 (24 juin 2020) : 39–56. http://dx.doi.org/10.1177/0022242920929288.

Texte intégral
Résumé :
Although creativity is often considered a key success factor in advertising, the marketing literature lacks a systematic empirical account of when and how advertising creativity works. The authors use a meta-analysis to synthesize the literature on advertising creativity and test different theoretical explanations for its effects. The analysis covers 93 data sets taken from 67 papers that provide 878 effect sizes. The results show robust positive effects but also highlight the importance of considering both originality and appropriateness when investing in advertising creativity. Moderation analyses show that the effects of advertising creativity are stronger for high- (vs. low-) involvement products, and that the effects on ad (but not brand) reactions are marginally stronger for unfamiliar brands. An empirical test of theoretical mechanisms shows that affect transfer, processing, and signaling jointly explain these effects, and that originality mainly leads to affect transfer, whereas appropriateness leads to signaling. The authors also call for further research connecting advertising creativity with sales and studying its effects in digital contexts.
Styles APA, Harvard, Vancouver, ISO, etc.
46

Pal, Samyajoy, et Christian Heumann. « Clustering compositional data using Dirichlet mixture model ». PLOS ONE 17, no 5 (18 mai 2022) : e0268438. http://dx.doi.org/10.1371/journal.pone.0268438.

Texte intégral
Résumé :
A model-based clustering method for compositional data is explored in this article. Most methods for compositional data analysis require some kind of transformation. The proposed method builds a mixture model using the Dirichlet distribution, which works with the unit-sum constraint. The mixture model uses a hard EM algorithm with some modification to overcome the problem of fast convergence with empty clusters. This work includes a rigorous simulation study to evaluate the performance of the proposed method over varied dimensions, numbers of clusters, and degrees of overlap. The performance of the model is also compared with other popular clustering algorithms often used for compositional data analysis (e.g. K-Means, Gaussian mixture model (GMM), GMM with hard EM (Hard GMM), partition around medoids (PAM), Clustering Large Applications based on Randomized Search (CLARANS), Density-Based Spatial Clustering of Applications with Noise (DBSCAN), etc.) on simulated data as well as two real data problems, one from the business and marketing domain and one from physical science. The study has shown promising results exploiting different distributional patterns of compositional data.
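The hard-assignment step of such a Dirichlet mixture can be sketched by evaluating the Dirichlet log-density directly on the simplex and assigning each point to its best component; the component parameters below are illustrative, not from the paper:

```python
import math

# Dirichlet log-density on the simplex: points with coordinates in (0, 1)
# summing to 1, i.e. respecting the unit-sum constraint of compositional data.
def dirichlet_logpdf(x, alpha):
    log_norm = math.lgamma(sum(alpha)) - sum(math.lgamma(a) for a in alpha)
    return log_norm + sum((a - 1) * math.log(xi) for a, xi in zip(alpha, x))

def hard_assign(x, components):
    # Hard EM's E-step: assign x to the component with the highest density.
    scores = [dirichlet_logpdf(x, alpha) for alpha in components]
    return max(range(len(components)), key=lambda k: scores[k])

# Two illustrative components: mass concentrated on the 1st vs the 3rd part.
components = [(8.0, 1.0, 1.0), (1.0, 1.0, 8.0)]
print(hard_assign((0.8, 0.1, 0.1), components))  # → 0
print(hard_assign((0.1, 0.1, 0.8), components))  # → 1
```

The M-step (re-estimating each component's alpha from its assigned points) and the paper's empty-cluster fix are omitted here.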
Styles APA, Harvard, Vancouver, ISO, etc.
47

Pelt, Daniël M., et James A. Sethian. « A mixed-scale dense convolutional neural network for image analysis ». Proceedings of the National Academy of Sciences 115, no 2 (26 décembre 2017) : 254–59. http://dx.doi.org/10.1073/pnas.1715832114.

Texte intégral
Résumé :
Deep convolutional neural networks have been successfully applied to many image-processing problems in recent works. Popular network architectures often add additional operations and connections to the standard architecture to enable training deeper networks. To achieve accurate results in practice, a large number of trainable parameters are often required. Here, we introduce a network architecture based on using dilated convolutions to capture features at different image scales and densely connecting all feature maps with each other. The resulting architecture is able to achieve accurate results with relatively few parameters and consists of a single set of operations, making it easier to implement, train, and apply in practice, and automatically adapts to different problems. We compare results of the proposed network architecture with popular existing architectures for several segmentation problems, showing that the proposed architecture is able to achieve accurate results with fewer parameters, with a reduced risk of overfitting the training data.
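The dilated convolutions this architecture relies on can be illustrated in one dimension: a dilation factor d samples the input with gaps of d, enlarging the receptive field without extra parameters (a pure-Python sketch, not the paper's implementation):

```python
# 1-D dilated convolution (valid padding): y[i] = sum_k w[k] * x[i + k*d].
# Dilation d > 1 widens the receptive field without adding weights.
def dilated_conv1d(x, w, d):
    span = (len(w) - 1) * d          # receptive field minus one
    return [sum(wk * x[i + k * d] for k, wk in enumerate(w))
            for i in range(len(x) - span)]

x = [1, 2, 3, 4, 5]
print(dilated_conv1d(x, [1, 1], d=1))  # [3, 5, 7, 9]
print(dilated_conv1d(x, [1, 1], d=2))  # [4, 6, 8]: same 2 weights, wider reach
```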
Styles APA, Harvard, Vancouver, ISO, etc.
48

Schnitzer, Julia. « Generative Design For Creators – The Impact Of Data Driven Visualization And Processing In The Field Of Creative Business ». Electronic Imaging 2021, no 3 (18 juin 2021) : 22–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.3.mobmu-022.

Texte intégral
Résumé :
To what extent can algorithms take care of your creative work? Generative design is currently changing our conventional understanding of design at its core. For decades, design was handmade and postproduction a job for highly specialized professionals. Generative design has now become a popular instrument for creating artwork, models and animations with programmed algorithms. Using simple languages such as JavaScript's p5.js and Java-based Processing, artists and makers can create everything from interactive typography and textiles to 3D-printed products and complex infographics. Computers can not only provide images but also generate variations and templates of professional quality; pictures are pre-optimized, processed and issued by algorithms. The profession of the designer will increasingly become that of a director or conductor at the human-computer interface. What effects does generative design have on the future creative field of designers? To answer this complex question, we analyze several projects from a range of international designers, covering fine arts as well as commercial work. In a step-by-step exercise, I guide you through a tutorial for creating your own visual experiments exploring possibilities in color, form and images.
Styles APA, Harvard, Vancouver, ISO, etc.
49

Manalu, Doni Sahat Tua, Gustiya Kevin Nugrahaini et Nurinda Utami Padoma. « VALUE ADDED PROCESSING MUSHROOMS FLOUR INTO COOKIES AT CV ASSALAM CIANJUR WITH BUSINESS MODEL CANVAS (BMC) APPROACH ». AgriDev 1, no 2 (27 mars 2023) : 1–13. http://dx.doi.org/10.33830/agridev.v1i2.3719.2023.

Texte intégral
Résumé :
CV Assalam is a company engaged in the cultivation of oyster mushrooms that produces mushroom flour as one of its products. The most popular flour-derivative products are cookies, which gives CV Assalam an opportunity to produce cookies in order to gain added value and increase the company's revenue. The objectives of the research are 1) to formulate and review a business plan for oyster mushroom cookies at CV Assalam with the BMC approach, and 2) to analyze the financial impact of the oyster mushroom cookie business on CV Assalam. The research used primary and secondary data sources and descriptive analysis with the BMC approach. The results show that processing oyster mushrooms into mushroom cookies is the right move, since it adds value to the mushroom products. CV Assalam must maximize product promotion through e-commerce and social media so that it becomes better known to the public. The financial analysis found that processing mushroom flour into cookies could increase CV Assalam's net income by IDR 5,156,906 per month.
Styles APA, Harvard, Vancouver, ISO, etc.
50

Weichselbraun, Albert, Daniel Streiff et Arno Scharl. « Consolidating Heterogeneous Enterprise Data for Named Entity Linking and Web Intelligence ». International Journal on Artificial Intelligence Tools 24, no 02 (avril 2015) : 1540008. http://dx.doi.org/10.1142/s0218213015400084.

Texte intégral
Résumé :
Linking named entities to structured knowledge sources paves the way for state-of-the-art Web intelligence applications which assign sentiment to the correct entities, identify trends, and reveal relations between organizations, persons and products. For this purpose this paper introduces Recognyze, a named entity linking component that uses background knowledge obtained from linked data repositories, and outlines the process of transforming heterogeneous data silos within an organization into a linked enterprise data repository which draws upon popular linked open data vocabularies to foster interoperability with public data sets. The presented examples use comprehensive real-world data sets from Orell Füssli Business Information, Switzerland's largest business information provider. The linked data repository created from these data sets comprises more than nine million triples on companies, the companies' contact information, key people, products and brands. We identify the major challenges of tapping into such sources for named entity linking, and describe required data pre-processing techniques to use and integrate such data sets, with a special focus on disambiguation and ranking algorithms. Finally, we conduct a comprehensive evaluation based on business news from the New Journal of Zurich and AWP Financial News to illustrate how these techniques improve the performance of the Recognyze named entity linking component.
Styles APA, Harvard, Vancouver, ISO, etc.
