Academic literature on the topic 'Business knowledge extraction'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Business knowledge extraction.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Business knowledge extraction"

1

Höpken, Wolfram, Matthias Fuchs, Dimitri Keil, and Maria Lexhagen. "Business intelligence for cross-process knowledge extraction at tourism destinations." Information Technology & Tourism 15, no. 2 (May 6, 2015): 101–30. http://dx.doi.org/10.1007/s40558-015-0023-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Spruit, Marco, Marcin Kais, and Vincent Menger. "Automated Business Goal Extraction from E-mail Repositories to Bootstrap Business Understanding." Future Internet 13, no. 10 (September 23, 2021): 243. http://dx.doi.org/10.3390/fi13100243.

Abstract:
The Cross-Industry Standard Process for Data Mining (CRISP-DM), despite being the most popular data mining process for more than two decades, is known to leave those organizations lacking operational data mining experience puzzled and unable to start their data mining projects. This is especially apparent in the first phase of Business Understanding, at the conclusion of which the data mining goals of the project at hand should be specified, which arguably requires at least a conceptual understanding of the knowledge discovery process. We propose to bridge this knowledge gap from a Data Science perspective by applying Natural Language Processing (NLP) techniques to the organizations’ e-mail exchange repositories to extract explicitly stated business goals from the conversations, thus bootstrapping the Business Understanding phase of CRISP-DM. Our NLP-Automated Method for Business Understanding (NAMBU) generates a list of business goals which can subsequently be used for further specification of data mining goals. The validation of the results by comparison with manual business goal extraction from the Enron corpus demonstrates the usefulness of our NAMBU method when applied to large datasets.
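The core idea of mining explicitly stated goals from e-mail text can be illustrated with a deliberately simple sketch. This is not the authors' NAMBU pipeline: the marker phrases, sample e-mail, and function name below are invented for the example.

```python
import re

# Hypothetical goal-sentence extractor: flag sentences in an e-mail body
# that contain an explicit goal-marker phrase. Real corpora such as Enron
# would require far richer NLP than a fixed marker list.
GOAL_MARKERS = re.compile(
    r"\b(our goal is|we aim to|we want to|the objective is|we need to)\b",
    re.IGNORECASE,
)

def extract_goal_sentences(email_body):
    """Return the sentences that contain an explicit goal marker."""
    sentences = re.split(r"(?<=[.!?])\s+", email_body.strip())
    return [s for s in sentences if GOAL_MARKERS.search(s)]

email = (
    "Thanks for the update. Our goal is to reduce churn by 10% this quarter. "
    "Let me know when the report is ready."
)
goals = extract_goal_sentences(email)
```

On the sample above, `goals` contains only the sentence stating the churn-reduction goal; the surrounding small talk is discarded.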
3

Mohamed, Mona, Sharma Pillutla, and Stella Tomasi. "Extraction of knowledge from open government data." VINE Journal of Information and Knowledge Management Systems 50, no. 3 (January 24, 2020): 495–511. http://dx.doi.org/10.1108/vjikms-05-2019-0065.

Abstract:
Purpose: The purpose of this paper is to establish a new conceptual iterative framework for extracting knowledge from open government data (OGD). OGD is becoming a major source of knowledge and innovation that can generate economic value if properly used. However, there are currently no standards or frameworks for applying knowledge-continuum tactics, techniques and procedures (TTPs) to knowledge extraction from OGD in a consistent manner. Design/methodology/approach: This paper is based on a comprehensive review of the literature on both OGD and knowledge management (KM) frameworks. It provides insights into the extraction of knowledge from OGD by applying a broad array of phased KM TTPs to the phases of the OGD lifecycle. Findings: The paper proposes a knowledge iterative value network (KIVN), a new conceptual model that applies the principles of KM to OGD. KIVN operates by applying KM TTPs to transfer and transform discrete data into valuable knowledge. Research limitations/implications: The model covers the most important knowledge-elicitation steps; however, users may need to customize it slightly to their own environment and OGD policies and procedures. Practical implications: After validation, the model facilitates the systematic exploitation of OGD by both data-consuming industries and data-producing governments, enabling new business models and governance schemes that make better use of OGD. Originality/value: This paper offers new perspectives on eliciting knowledge from OGD and discusses a crucial but overlooked area of the OGD arena, namely knowledge extraction through KM principles.
4

De Toni, Alberto Felice, Andrea Fornasier, and Fabio Nonino. "The nature and value of knowledge." Kybernetes 46, no. 6 (June 5, 2017): 966–79. http://dx.doi.org/10.1108/k-01-2017-0016.

Abstract:
Purpose: This paper aims to explain and discuss the complex nature and value of knowledge as an exploitable resource for business. Design/methodology/approach: The authors propose a conceptual explanation of knowledge based on three pillars: the plurality of its nature (understood to be conservative, multipliable and generative), its contextual value, and the duality of the carriers incorporating business knowledge, namely objects and processes. After conceptualizing the nature of knowledge, the authors offer a metaphor based on the classic transformation of "potential" into "kinetic" energy on an inclined plane, assuming that the conservative nature of knowledge makes it act like energy. Findings: The metaphor uses the concepts of potential and kinetic energy: if energy is only potential, its value is not yet effective, whereas when potential energy (knowledge) becomes kinetic energy (products and/or services), it generates business value. In addition, business value is a function of the speed acquired, which is determined by the angle of the inclined plane, namely the company's business model. Knowledge is the source of value and can be maintained and regenerated only through continuous investment; without it, the company's stock of knowledge (potential energy) is depleted over time and ceases to act (as kinetic energy), halting both value generation and value extraction. Originality/value: The paper is an initial attempt to explain the transformation of knowledge using a metaphor derived from physics. The metaphor of the energy of knowledge clearly depicts the managerial dilemma of balancing a company's resources between generating and extracting value. Future studies could similarly associate other peculiarities of knowledge with physical phenomena.
5

Saura, Jose Ramon, Ana Reyes-Menendez, and Ferrão Filipe. "Comparing Data-Driven Methods for Extracting Knowledge from User Generated Content." Journal of Open Innovation: Technology, Market, and Complexity 5, no. 4 (September 24, 2019): 74. http://dx.doi.org/10.3390/joitmc5040074.

Abstract:
This study aimed to compare two techniques of business knowledge extraction for the identification of insights related to the improvement of digital marketing strategies on a sample of 15,731 tweets. The sample was extracted from user generated content (UGC) from Twitter using two methods based on knowledge extraction techniques for business. In Method 1, an algorithm to detect communities in complex networks was applied; this algorithm, in which we applied data visualization techniques for complex networks analysis, used the modularity of nodes to discover topics. In Method 2, a three-phase process was developed for knowledge extraction that included the application of a latent Dirichlet allocation (LDA) model, a sentiment analysis (SA) that works with machine learning, and a data text mining (DTM) analysis technique. Finally, we compared the results of each of the two techniques to see whether or not the results yielded by these two methods regarding the analysis of companies’ digital marketing strategies were mutually complementary.
6

Jennex, Murray E., and Summer E. Bartczak. "A Revised Knowledge Pyramid." International Journal of Knowledge Management 9, no. 3 (July 2013): 19–30. http://dx.doi.org/10.4018/ijkm.2013070102.

Abstract:
The knowledge pyramid has been used for several years to illustrate the hierarchical relationships between data, information, knowledge, and wisdom. This paper posits that the knowledge pyramid is too basic and fails to represent reality and presents a revised knowledge-KM pyramid. One key difference is that the revised knowledge-KM pyramid includes knowledge management as an extraction of reality with a focus on organizational learning. The model also posits that newer initiatives such as business and/or customer intelligence are the result of confusion in understanding the traditional knowledge pyramid that is resolved in the revised knowledge-KM pyramid.
7

Deshmukh, Shilpa, P. P. Karde, and V. R. Thakare. "An Improved Approach for Deep Web Data Extraction." ITM Web of Conferences 40 (2021): 03045. http://dx.doi.org/10.1051/itmconf/20214003045.

Abstract:
The World Wide Web is a valuable source of data in a wide range of formats. The heterogeneous formats of web pages act as a barrier to automated processing. Many business organizations need data from the World Wide Web for intelligent tasks such as business intelligence, product intelligence, competitive intelligence, decision making, opinion mining, sentiment analysis, and so on. Many researchers also struggle to find the most suitable journal for publishing their research articles. Manual extraction is laborious, which has driven the need for an automated extraction process. This paper proposes an approach called ADWDE, based primarily on heuristic techniques. The purpose of this research is to design an Automated Web Data Extraction System (AWDES) that can identify the target of data extraction with little human intervention, using semantic labelling, and perform extraction at an acceptable level of accuracy. In an AWDES there is always a trade-off between the degree of human intervention and accuracy. The goal of this study is to reduce the degree of human intervention while providing accurate extraction results irrespective of the business domain to which the web page belongs.
8

Manolova, Agata, Krasimir Tonchev, Vladimir Poulkov, Sudhir Dixit, and Peter Lindgren. "Context-Aware Holographic Communication Based on Semantic Knowledge Extraction." Wireless Personal Communications 120, no. 3 (June 3, 2021): 2307–19. http://dx.doi.org/10.1007/s11277-021-08560-7.

Abstract:
Augmented, mixed and virtual reality are changing the way people interact and communicate. Five dimensional communications and services, integrating information from all human senses are expected to emerge, together with holographic communications (HC), providing a truly immersive experience. HC presents a lot of challenges in terms of data gathering and transmission, demanding Artificial Intelligence empowered communication technologies such as 5G. The goal of the paper is to present a model of a context-aware holographic architecture for real time communication based on semantic knowledge extraction. This architecture will require analyzing, combining and developing methods and algorithms for: 3D human body model acquisition; semantic knowledge extraction with deep neural networks to predict human behaviour; analysis of biometric modalities; context-aware optimization of network resource allocation for the purpose of creating a multi-party, from-capturing-to-rendering HC framework. We illustrate its practical deployment in a scenario that can open new opportunities in user experience and business model innovation.
9

Schafer, Brad A., Sarah Bee, and Margaret Garnsey. "The Lemonade Stand: An Elementary Case for Introducing Data Analytics." AIS Educator Journal 13, no. 1 (January 1, 2018): 29–43. http://dx.doi.org/10.3194/1935-8156-13.1.29.

Abstract:
Accounting education has been encouraged to increase the business knowledge, analytical skills, and data analytics skills of accounting students. This case blends these areas in a single, multi-part project for Accounting Information Systems (AIS) courses. The case includes the technical function of extracting data from databases, integrating multiple data stores, and using multiple software tools (MS Access and Tableau). Additionally, students learn to assess the business needs driving the use of integrated data stores to produce quality information for decision making. Using a basic business scenario (a lemonade stand), this case provides a stand-alone project for incorporating data analytics into an AIS course. Students assume the role of a professional consultant to the lemonade stand: they become familiar with the business processes and data of the company, develop queries to answer various business questions, and integrate internal and external data to graphically analyze the combined data for a business analysis. The case allows the course content of data extraction and reporting to be integrated with data analytics. Students indicated that they perceived an increase in their knowledge of business analysis and data analytics tools; they also reported enjoying the case and made many positive comments about their experience. Results from a pre-/post-test quiz show that students did significantly increase their knowledge of business analysis and data analytics.
10

Ezeife, C. I., and Titas Mutsuddy. "Towards Comparative Mining of Web Document Objects with NFA." International Journal of Data Warehousing and Mining 8, no. 4 (October 2012): 1–21. http://dx.doi.org/10.4018/jdwm.2012100101.

Abstract:
The process of extracting comparative heterogeneous web content data, both derived and historical, from related web pages is still in its infancy and not well developed. Discovering potentially useful and previously unknown information or knowledge from web contents, such as "list all articles on 'Sequential Pattern Mining' written between 2007 and 2011, including title, authors, volume, abstract, paper, citation and year of publication," would require finding the schema of web documents from different web pages, performing web content data integration, and building their virtual or physical data warehouse before web content extraction and mining from the database. This paper proposes a technique for automatic web content data extraction, the WebOMiner system, which models web sites of a specific domain, such as Business to Customer (B2C) web sites, as object-oriented database schemas. Non-deterministic finite state automata (NFA) based wrappers for recognizing content types from this domain are then built and used to extract related contents from data blocks into an integrated database for future second-level mining for deep knowledge discovery.
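The NFA-wrapper idea can be illustrated in miniature. This toy is not the WebOMiner implementation: the states, token types, and the "title price image*" product pattern are invented for the example.

```python
# A non-deterministic finite automaton over content-token types that
# accepts a hypothetical B2C "product" data block of the form:
#   title price image*
# Transitions map (state, token type) -> set of next states.
NFA = {
    ("start", "title"): {"got_title"},
    ("got_title", "price"): {"got_price"},
    ("got_price", "image"): {"got_price"},  # optional trailing images
}
ACCEPT = {"got_price"}

def accepts(token_types):
    """Simulate the NFA on a sequence of content-token types."""
    states = {"start"}
    for tok in token_types:
        states = set().union(*(NFA.get((s, tok), set()) for s in states))
        if not states:  # dead end: no transition available
            return False
    return bool(states & ACCEPT)
```

A block tokenized as `["title", "price", "image"]` is recognized as a product record, while a block starting with a price is rejected; a real wrapper would of course first classify raw HTML fragments into such token types.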

Dissertations / Theses on the topic "Business knowledge extraction"

1

Normantas, Kęstutis. "Verslo žinių išgavimo iš egzistuojančių programų sistemų tyrimas." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20140116_144357-31076.

Abstract:
The dissertation investigates the problem of software maintenance and evolution. It is established that expenditure in these phases of the software lifecycle accounts for up to 80% of the total cost of developing a software system. The main driver of this phenomenon is the constant need to adapt system functionality to changing business requirements; such tasks make up the majority of all maintenance activities. The studies reviewed show that 40–60% of the time allotted to a modification is spent understanding the business logic implemented in the software, since the people responsible for maintenance are usually not its original designers and must therefore invest considerable effort to work out how the system operates. Moreover, changes made during maintenance are rarely documented (or not documented at all), and the understanding gained while implementing them remains in the heads of individual programmers. Meanwhile, other studies have revealed that typically only a third of a software system's code implements business logic, the rest being devoted to platform and infrastructure functions. It follows that extracting domain knowledge, and maintaining traceability between that knowledge and the code implementing it, can reduce the cost of system maintenance and evolution. The main goal of this work is therefore to improve the process of business knowledge extraction and representation by proposing a method and supporting tools that facilitate comprehension of existing software systems. The dissertation consists of an introduction, four chapters, general conclusions... [see the full text]
The dissertation addresses the problem of software maintenance and evolution. It identifies that spending within these software lifecycle phases may account for up to 80% of software’s total lifecycle cost, whereas the inability to adapt software quickly and reliably to meet ever-changing business requirements may lead to business opportunities being lost. The main reason for this phenomenon is the fact that most of the maintenance effort is devoted to understanding the software to be modified. On the other hand, related studies show that less than one-third of software source code contains business logic, while the remaining part is intended for platform- or infrastructure-relevant activities. It follows that if most changes in software are made due to the need to adapt its functionality to changed business requirements, then facilitating software comprehension with automated business knowledge extraction methods may significantly reduce the cost of software maintenance and evolution. Therefore the main goal of this thesis is to improve the business knowledge extraction process by proposing a method and supporting tool framework that would facilitate comprehension of existing software systems. The dissertation consists of the following parts: Introduction, 4 chapters, General Conclusions, References, and 6 Annexes. Chapter 1 presents a systematic literature review of related studies in order to summarize the state of the art in this research field... [to full text]
2

Musaraj, Kreshnik. "Extraction automatique de protocoles de communication pour la composition de services Web." Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10288/document.

Abstract:
Business process management, service-oriented architectures and their reverse engineering rely heavily on extracting the business protocols of Web services and business process models from log files. Mining and extracting these models aims at the (re)discovery of the behaviour of a model implementation during its execution, using only its activity traces and no a priori information about the target model. Our preliminary study shows that: (i) only a minority of interaction data is recorded by process and service architectures; (ii) a limited number of extraction methods discover this model without knowing either positive instances of the protocol or the information needed to infer them; and (iii) current approaches rely on restrictive assumptions that only a fraction of real-world Web services satisfy. Making the extraction of these interaction models from activity logs possible under realistic assumptions requires: (i) approaches that abstract away the business context so as to allow broad, generic use; and (ii) tools for assessing the mining result through implementation of the life cycle of the discovered service models. Furthermore, since interaction logs are often incomplete, erroneous and uncertain, the extraction approaches proposed in this thesis must be able to handle these imperfections properly. We propose a set of mathematical models covering the different aspects of business protocol mining. The extraction approaches we present, drawn from linear algebra, allow us to extract the business protocol while merging the classic stages of business process mining.
Moreover, our protocol representation, based on time series of flow-density variations, makes it possible to recover the temporal order in which events and messages are executed in a process. In addition, we define proper timeouts to identify timed transitions, and provide a method for extracting them despite their being invisible in the logs. Finally, we present a multitask framework that supports all the steps of the process workflow and business protocol life cycle, from design to optimization. The approaches presented in this manuscript have been implemented in prototype tools and validated experimentally on datasets and on process and Web service models. The discovered business protocol can then be used to perform a multitude of tasks in an organization or enterprise.
Business process management, service-oriented architectures and their reverse engineering heavily rely on the fundamental endeavor of mining business process models and Web service business protocols from log files. Model extraction and mining aim at the (re)discovery of the behavior of a running model implementation using solely its interaction and activity traces, and no a priori information on the target model. Our preliminary study shows that: (i) a minority of interaction data is recorded by process- and service-aware architectures, (ii) a limited number of methods achieve model extraction without knowledge of either positive process and protocol instances or the information to infer them, and (iii) the existing approaches rely on restrictive assumptions that only a fraction of real-world Web services satisfy. Enabling the extraction of these interaction models from activity logs based on realistic hypotheses necessitates: (i) approaches that make abstraction of the business context in order to allow their extended and generic usage, and (ii) tools for assessing the mining result through implementation of the process and service life-cycle. Moreover, since interaction logs are often incomplete, uncertain and contain errors, the mining approaches proposed in this work need to be capable of handling these imperfections properly. We propose a set of mathematical models that encompass the different aspects of process and protocol mining. The extraction approaches that we present, drawn from linear algebra, allow us to extract the business protocol while merging the classic process mining stages. On the other hand, our protocol representation based on time series of flow density variations makes it possible to recover the temporal order of execution of events and messages in the process. In addition, we propose the concept of proper timeouts to refer to timed transitions, and provide a method for extracting them despite their property of being invisible in logs.
In the end, we present a multitask framework aimed at supporting all the steps of the process workflow and business protocol life-cycle from design to optimization. The approaches presented in this manuscript have been implemented in prototype tools, and experimentally validated on scalable datasets and real-world process and web service models. The discovered business protocols can thus be used to perform a multitude of tasks in an organization or enterprise.
3

Гаутам, Аджит Пратап Сингх. "Информационная технология экстракции бизнес знаний из текстового контента интегрированной корпоративной системы." Thesis, НТУ "ХПИ", 2016. http://repository.kpi.kharkov.ua/handle/KhPI-Press/23555.

Abstract:
Thesis for a candidate degree in technical science, speciality 05.13.06 – Information Technologies. – National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2016. The aim of the research is to create an information technology for extracting the business knowledge of an integrated corporate system on the basis of information-logic models and methods of sense processing of text content. The thesis analyses existing information technologies, models and methods for extracting and identifying knowledge from texts, and formulates the main requirements for the information support of a subsystem that extracts business knowledge from the text content of an integrated corporate system. The use of finite predicate algebra in information-logic models of fact extraction from text streams is substantiated, and a mathematical model of fact generation from corporate texts is constructed. The results of the research have been applied in practice in the development of subsystems for knowledge extraction from the text content of real integrated corporate systems. On the basis of the methods and models of intelligent text content processing developed in the thesis, an information technology for forming a common information space of the corporation's business activity is proposed. Here, the information space of an integrated corporate system is understood as a body of up-to-date information and data organised so as to ensure the quality and timeliness of decision making in the corporation's target activity. The proposed technology makes it possible to extract knowledge from the whole variety of information resources of a modern enterprise: Internet and intranet sites of enterprises and organisations, e-mail messages, file systems, document stores of various leading vendors, text fields of databases, repositories, various business applications, and so on.
The technology includes a logic-linguistic model of fact generation from the text streams of an integrated corporate system, a method for structuring the relationships of business knowledge facts, a method for identifying the actual set of classified entities of a subject domain, and specialised Web Content Mining stages of a linguistic processor. The mathematical models developed in the research can be used in various systems for automatic text processing, knowledge extraction, Information Extraction and Named Entity Recognition.
Thesis for a candidate degree in technical science, speciality 05.13.06 – Information Technologies. – National Technical University "Kharkiv Polytechnic Institute". – Kharkiv, 2016. The aim of the thesis is to develop an information technology for extracting the business knowledge of an integrated corporate system (ICS) based on information-logic models and methods of text content sense processing. The main results are as follows. A logic-linguistic model of fact generation from ICS text streams has been developed; it is based on the surface grammar characteristics of entities, predicates and attributes, and makes it possible to extract domain-specific knowledge about the subjects of monitoring from text content effectively. The thesis further develops the method of comparator identification, used for structuring the relationships of ICS business knowledge facts. The method classifies the attributes of entities according to relationship classes on the basis of the sense identity of fact triplets, which the comparator determines objectively. The thesis improves the method of determining the actual set of classified entities of a subject domain, which is distinguished by the integrated use of linguistic, statistical and sense characteristics in a naive Bayes classifier; the method classifies the extracted entities according to a priori defined types. The thesis also improves the information technology for forming a common information space of a corporation's business activity, which enables the generation of complex knowledge by explicit generalization of information hidden in collections of partial facts, using algebra-logic transformations.
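The entity-classification step described above can be sketched with a tiny naive Bayes classifier. This is a hedged illustration, not the thesis implementation: character bigrams stand in for the combined linguistic, statistical and sense characteristics, and the training samples and type labels are invented for the example.

```python
import math
from collections import Counter, defaultdict

def bigrams(text):
    """Character bigrams of the lowercased, space-padded text."""
    t = f" {text.lower()} "
    return [t[i:i + 2] for i in range(len(t) - 1)]

class EntityNB:
    """Multinomial naive Bayes over character bigrams, with Laplace smoothing."""
    def fit(self, samples, labels):
        self.priors = Counter(labels)
        self.counts = defaultdict(Counter)
        for text, label in zip(samples, labels):
            self.counts[label].update(bigrams(text))
        self.vocab = {b for c in self.counts.values() for b in c}
        return self

    def predict(self, text):
        def score(label):
            total = sum(self.counts[label].values())
            s = math.log(self.priors[label])
            for b in bigrams(text):
                s += math.log((self.counts[label][b] + 1) /
                              (total + len(self.vocab)))
            return s
        return max(self.priors, key=score)

# Invented training entities with a priori types
model = EntityNB().fit(
    ["Ivan Petrenko", "Maria Kovalenko", "13 May 2016", "21 June 2014"],
    ["PERSON", "PERSON", "DATE", "DATE"],
)
```

Even this crude feature set separates digit-heavy date strings from person names; the thesis method enriches exactly this kind of classifier with additional feature families.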
4

Гаутам, Аджіт Пратап Сінгх. "Інформаційна технологія екстракції бізнес знань з текстового контенту інтегрованої корпоративної системи." Thesis, НТУ "ХПІ", 2016. http://repository.kpi.kharkov.ua/handle/KhPI-Press/23554.

Abstract:
Thesis for a candidate degree in technical science, speciality 05.13.06 – Information Technologies. – National Technical University "Kharkiv Polytechnic Institute", Kharkiv, 2016. The aim of the research is to create an information technology for extracting the business knowledge of an integrated corporate system (ICS) on the basis of information-logic models and methods of sense processing of text content. The main results are as follows. For the first time, a logic-linguistic model of fact generation from ICS text streams has been developed; it is based on the surface grammar characteristics of entities, predicates and attributes, and makes it possible to extract domain-specific knowledge about the subjects of monitoring from text content effectively. The method of comparator identification has been further developed and applied to structuring the relationships of ICS business knowledge facts; its implementation classifies entity attributes by relationship classes through the sense identity of fact triplets, which the comparator determines objectively. The method for identifying the actual set of classified entities of a subject domain has been improved; it is distinguished by the integrated use of linguistic, statistical and sense characteristics in a naive Bayes classifier, and classifies the extracted entities according to a priori defined types. The information technology for forming a common information space of a corporation's business activity has also been improved: using algebra-logic transformations, it generates complex knowledge by explicitly generalising information hidden in collections of partial facts.
Thesis for a candidate degree in technical science, speciality 05.13.06 – Information Technologies. – National Technical University "Kharkiv Polytechnic Institute". – Kharkiv, 2016. The aim of the thesis is to develop an information technology for extracting the business knowledge of an integrated corporate system (ICS) based on information-logic models and methods of text content sense processing. The main results are as follows. A logic-linguistic model of fact generation from ICS text streams has been developed; it is based on the surface grammar characteristics of entities, predicates and attributes, and makes it possible to extract domain-specific knowledge about the subjects of monitoring from text content effectively. The thesis further develops the method of comparator identification, used for structuring the relationships of ICS business knowledge facts. The method classifies the attributes of entities according to relationship classes on the basis of the sense identity of fact triplets, which the comparator determines objectively. The thesis improves the method of determining the actual set of classified entities of a subject domain, which is distinguished by the integrated use of linguistic, statistical and sense characteristics in a naive Bayes classifier; the method classifies the extracted entities according to a priori defined types. The thesis also improves the information technology for forming a common information space of a corporation's business activity, which enables the generation of complex knowledge by explicit generalization of information hidden in collections of partial facts, using algebra-logic transformations.
APA, Harvard, Vancouver, ISO, and other styles
5

Guénec, Nadège. "Méthodologies pour la création de connaissances relatives au marché chinois dans une démarche d'Intelligence Économique : application dans le domaine des biotechnologies agricoles." Phd thesis, Université Paris-Est, 2009. http://tel.archives-ouvertes.fr/tel-00554743.

Full text
Abstract:
The opening up of economies and the worldwide acceleration of trade have, in barely a decade, transformed the competitive environment of companies. The field of activity has widened, opening up new markets with very attractive potential, such as the BRIC countries (Brazil, Russia, India and China). Of these four countries, impressive in their size, population and economic potential, China is the least accessible and the most impervious to our understanding, owing on the one hand to a linguistic system distinct from the Indo-European languages, and on the other to a culture and a system of thought at the antipodes of those of the West. Yet for a company of international scale that wishes to extend its influence, or simply to hold its position in its own market, a presence in the Chinese market is today absolutely indispensable. How does a Western company approach a market which, by its very otherness, appears at first complex and fundamentally enigmatic? Six years of observation in China allowed us to note the pitfalls in accessing information about the Chinese market. As in many foreign markets, our companies are subject to destabilizations that are sometimes unimaginable. The inability to "read" China and to understand what is at stake there despite sustained efforts, and the tactical errors that follow from a poor assessment of the market or a biased understanding of the interplay of actors, led us to reflect on a finer methodology for deciphering the business environment, one that could offer French companies an approach to China as a market.
The methods of Economic Intelligence (Intelligence Économique, IE) then emerged as the most suitable, for several reasons: the goal of IE is to find the right action to take, the specificity of the context in which the organization operates is taken into account, and the analysis is carried out in real time. While a cultural approach is made of human interactions and subtleties, a "market" approach is now possible through automatic information processing and the modeling that follows from it. Indeed, in any Economic Intelligence process accompanying the establishment of an activity abroad, a large part of the strategically relevant information comes from analyzing the interplay of actors operating in the same sector. Such automation of knowledge creation constitutes, in addition to the human approach "in the field", real added value for understanding the interactions between actors, because it provides a body of knowledge which, by taking larger entities into account, has a global character that would otherwise be out of reach. Since China has strongly developed the technologies linked to the knowledge economy, it is now possible to explore Chinese scientific and technical information sources. We are moreover convinced that Chinese information will become increasingly crucial over time. It is therefore becoming urgent for organizations to equip themselves with systems allowing them not only to access this information but also to process the masses of information coming from these sources. Our work consists mainly in adapting the tools and methods stemming from French research to the analysis of Chinese information with a view to creating elaborated knowledge. The MATHEO tool will provide, through bibliometric processing, a worldwide view of the Chinese strategy.
TETRALOGIE, a tool dedicated to data mining, will be adapted to the linguistic and structural environment of Chinese scientific databases. In addition, we are taking part in the development of an information retrieval tool (MEVA) that integrates recent findings from the cognitive sciences, and we are working on its application to the search for relevant and adequate Chinese information. As this thesis was carried out under a CIFRE contract with the Limagrain Group, a contextualized application of our approach is implemented in the field of agricultural biotechnologies, and more particularly around the current stakes of research on wheat hybridization techniques. The analysis of this cutting-edge sector, which is at once a field of fundamental, experimental and applied research, is currently giving rise to patent filings and to the marketing of commercial products, and therefore represents a highly topical theme. Is China really, as we suppose, a new world territory of scientific research for the 21st century? Can IE methods adapt to the Chinese market? After providing elements of an answer to these questions in the first two parts of our study, we set out in the third part the context of agricultural biotechnologies and the worldwide stakes, in terms of economic and financial but also geopolitical power, of research on wheat hybridization. Finally, in the last part, we show how to carry out information research on the Chinese market, as well as the major added value that the analysis of Chinese information represents.
APA, Harvard, Vancouver, ISO, and other styles
6

Ke, Wan-ting, and 柯婉婷. "A Knowledge Extraction Methodology for Business Process: A Case Study of A Company’s Customer Complaint Process." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/7q43au.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Information Management
Academic year 102 (ROC calendar; 2013/14)
The business process is an important delivery medium for knowledge as well as an arena for its creation, and nowadays more enterprises have begun to focus on process-oriented knowledge management. In order to properly integrate business processes and knowledge management systems, this research followed the design science research methodology to propose a knowledge extraction methodology for business processes. The methodology includes three phases: business process analysis, process knowledge extraction, and knowledge map construction. To verify its feasibility, we applied the methodology to problems in Company A's customer complaint process. This research thus proposes an integrated methodology of business process analysis and knowledge management, and its results can provide guidance and suggestions for enterprises planning process-oriented knowledge management systems.
APA, Harvard, Vancouver, ISO, and other styles
7

Eira, Lídia da Conceição Silva. "Knowledge extraction of financial derivatives options in the maturity with data science techniques." Master's thesis, 2016. http://hdl.handle.net/10071/12992.

Full text
Abstract:
Improving the level of support in information systems and the quality of services by examining the daily routine of a team through a set of financial evidence has been an interesting and challenging problem for many researchers and decision-making professionals. As part of a well-known investment bank that deals in financial instruments such as European-style option derivatives, operational teams are well aware that the focus of their work is the evolution of pricing until the expiry moment. The choice to learn more about financial derivative options, especially in the maturity period, was made after a long process of studying economic and financial concepts at a certain institution. Special attention was given to subjects where information technology teams have less knowledge, namely the mathematical operation of derivative financial options and their implications in financial terms, as well as the identification of areas of the business that could be studied with greater interest for a specific organisation.
APA, Harvard, Vancouver, ISO, and other styles
8

Dorali, Cloé. "A milestone in the health governance of France - the construction of a health information system." Master's thesis, 2019. http://hdl.handle.net/10362/89471.

Full text
Abstract:
Internship Report presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics
Although France is recognized as one of the countries with the best care support, it is also a country far behind on the integration of health data and the constitution of an HIS. Yet, in some respects, France is not an entirely autonomous country in its governance. Indeed, since its integration into the European Union, certain subjects, health among them, are matters of common agreement, for a common application that can, at this scale, be qualified as quasi-continental. And in pursuit of its goal of a global HIS, the European Union is pressuring France to build its own HIS, which will then be absorbed into the HIS of the 27 countries. It is in this scheme that France has, for the past ten years, given full authority to the Regional Health Agencies (and through them, to Keyrus, one of the leaders in business intelligence in France) to build this information system. This is not easy, because the French administration is complex and has been solidly and strictly structured for several decades. Building this decisional model is a long effort and will take many more years. But with projects such as DIAMANT and GCS, the country is in the process of building a complete HIS that takes into account the innovations in today's practice of medicine.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Business knowledge extraction"

1

Nekvasil, Marek, Vojtěch Svátek, and Martin Labský. "Transforming Existing Knowledge Models to Information Extraction Ontologies." In Business Information Systems, 106–17. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-79396-0_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Colucci, Simona, Eufemia Tinelli, Silvia Giannini, Eugenio Di Sciascio, and Francesco M. Donini. "Knowledge Compilation for Core Competence Extraction in Organizations." In Business Information Systems, 163–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38366-3_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Taifi, Nouha, and Giuseppina Passiante. "The Strategic Partners Network’s Extraction: The XStrat.Net Project." In Organizational, Business, and Technological Aspects of the Knowledge Society, 303–11. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16324-1_32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mues, Christophe, Bart Baesens, Rudy Setiono, and Jan Vanthienen. "From Knowledge Discovery to Implementation: A Business Intelligence Approach Using Neural Network Rule Extraction and Decision Tables." In Professional Knowledge Management, 483–95. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11590019_55.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chen, Hung-Chen, Zi-Yuan Chen, Sin-Yi Huang, Lun-Wei Ku, Yu-Shian Chiu, and Wei-Jen Yang. "Relation Extraction in Knowledge Base Question Answering: From General-Domain to the Catering Industry." In HCI in Business, Government, and Organizations, 26–41. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91716-0_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Suzuki, Nobuo, and Kazuhiko Tsuda. "The Effective Extraction Method for the Gap of the Mutual Understanding Based on the Egocentrism in Business Communications." In Knowledge-Based and Intelligent Information and Engineering Systems, 317–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04592-9_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ternai, Katalin, Mátyás Török, and Krisztián Varga. "Combining Knowledge Management and Business Process Management – A Solution for Information Extraction from Business Process Models Focusing on BPM Challenges." In Electronic Government and the Information Systems Perspective, 104–17. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10178-1_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Repke, Tim, and Ralf Krestel. "Extraction and Representation of Financial Entities from Text." In Data Science for Economics and Finance, 241–63. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66891-4_11.

Full text
Abstract:
In our modern society, almost all events, processes, and decisions in a corporation are documented by internal written communication, legal filings, or business and financial news. The valuable knowledge in such collections is not directly accessible by computers as they mostly consist of unstructured text. This chapter provides an overview of corpora commonly used in research and highlights related work and state-of-the-art approaches to extract and represent financial entities and relations. The second part of this chapter considers applications based on knowledge graphs of automatically extracted facts. Traditional information retrieval systems typically require the user to have prior knowledge of the data. Suitable visualization techniques can overcome this requirement and enable users to explore large sets of documents. Furthermore, data mining techniques can be used to enrich or filter knowledge graphs. This information can augment source documents and guide exploration processes. Systems for document exploration are tailored to specific tasks, such as investigative work in audits or legal discovery, monitoring compliance, or providing information in a retrieval system to support decisions.
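The chapter surveys approaches for extracting financial entities and relations into knowledge graphs. As a toy illustration of that idea only (not the chapter's actual system), even a single regex pattern can yield subject-predicate-object triples from invented text:

```python
import re

# Hypothetical example: pull company-like entities and a simple "acquired"
# relation from free text, yielding triples that could seed a knowledge graph.
SENT = "Alpha Ltd acquired Beta Inc in 2020. Gamma Corp acquired Delta Ltd."
PATTERN = re.compile(
    r"([A-Z]\w+ (?:Ltd|Inc|Corp)) acquired ([A-Z]\w+ (?:Ltd|Inc|Corp))"
)

triples = [(a, "acquired", b) for a, b in PATTERN.findall(SENT)]
print(triples)
```

Real systems replace the regex with trained named-entity recognition and relation classification, but the output shape (entity, relation, entity) is the same.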
APA, Harvard, Vancouver, ISO, and other styles
9

Goossens, Alexandre, Laure Berth, Emilia Decoene, Ziboud Van Veldhoven, and Jan Vanthienen. "Automatically Extracting Insurance Contract Knowledge Using NLP." In Business Information Systems Workshops, 27–38. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-04216-4_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Pinheiro, Paulo, and Luís Cavique. "Extracting Actionable Knowledge to Increase Business Utility in Sport Services." In Progress in Artificial Intelligence, 397–409. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30244-3_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Business knowledge extraction"

1

Díaz-Prado, José Aldo. "Web Knowledge Extraction for Visual Business Intelligence Approach using Lixto." In Proceedings of the 2005 International Conference on Knowledge Management. WORLD SCIENTIFIC, 2005. http://dx.doi.org/10.1142/9789812701527_0054.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bharara, Sanyam, A. Sai Sabitha, and Abhay Bansal. "A review on knowledge extraction for Business operations using data mining." In 2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence (Confluence). IEEE, 2017. http://dx.doi.org/10.1109/confluence.2017.7943205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cui, Yang, and Bingru Yang. "An Information Extraction System of B2B Based on Knowledge Base." In 2009 International Conference on E-Business and Information System Security (EBISS). IEEE, 2009. http://dx.doi.org/10.1109/ebiss.2009.5137925.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sulaiman, Safwan, Tariq Mahmoud, Stephan Robbers, Jorge Marx Gómez, and Joachim Kurzhöfer. "A Tracing System for User Interactions towards Knowledge Extraction of Power Users in Business Intelligence Systems." In 8th International Conference on Knowledge Management and Information Sharing. SCITEPRESS - Science and Technology Publications, 2016. http://dx.doi.org/10.5220/0006053601990207.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Jian, Bo Qin, Yufei Zhang, Junhua Zhou, and Hongwei Wang. "A Framework for Effective Knowledge Extraction from A Data Space Formed by Unstructured Technical Reports using Pre-trained Models." In 2021 IEEE International Conference on e-Business Engineering (ICEBE). IEEE, 2021. http://dx.doi.org/10.1109/icebe52470.2021.00028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jin, Yihong, Guanshujie Fu, Liyang Qian, Hanwen Liu, and Hongwei Wang. "Representation and Extraction of Diesel Engine Maintenance Knowledge Graph with Bidirectional Relations Based on BERT and the Bi-LSTM-CRF Model." In 2021 IEEE International Conference on e-Business Engineering (ICEBE). IEEE, 2021. http://dx.doi.org/10.1109/icebe52470.2021.00025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Keane, Michael, and Markus Hofmann. "An Investigation into Third Level Module Similarities and Link Analysis." In Third International Conference on Higher Education Advances. Valencia: Universitat Politècnica València, 2017. http://dx.doi.org/10.4995/head17.2017.5528.

Full text
Abstract:
The focus of this paper is on the extraction of knowledge from data contained within the content of web pages relating to module descriptors as published on http://courses.itb.ie, delivered within the School of Business in the Institute of Technology Blanchardstown. We show an automated similarity analysis highlighting visual exploration options. This analysis raised three issues of note. Firstly, modules, although coded as being different and unique to their particular programme of study, showed substantial similarity. Secondly, substantial content overlap with a lack of clear differentiation between sequential modules was identified. Thirdly, the document similarity statistics point to the existence of modules with very high similarity scores delivered across different years and different National Framework of Qualifications (NFQ) levels of different programmes. These issues can be raised within the management structure of the School of Business and disseminated to the relevant programme boards for further consideration and action. Working within a climate of constrained resources, with limited numbers of academic staff and lecture theatres, the potential savings, beyond the obvious quality assurance benefits, illustrate a practical application of how text mining can be used to elicit new knowledge and provide business intelligence to support the quality assurance and decision-making process within a higher educational environment.
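The paper's exact similarity pipeline is not given in this abstract, so the sketch below shows one standard way such module-descriptor comparisons are commonly scored, bag-of-words cosine similarity, over invented descriptor texts:

```python
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two texts under a bag-of-words model."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented module descriptors for illustration only.
m1 = "introduction to marketing principles and consumer behaviour"
m2 = "marketing principles consumer behaviour and brand strategy"
m3 = "database design normalisation and sql querying"

print(round(cosine(m1, m2), 2))  # high score flags near-duplicate modules
print(round(cosine(m1, m3), 2))  # low score for unrelated modules
```

Production analyses would typically add TF-IDF weighting and stop-word removal before scoring, but the ranking idea is the same.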
APA, Harvard, Vancouver, ISO, and other styles
8

"Extracting and Maintaining Project Knowledge Using Ontologies." In The 1st International Workshop on Technologies for Collaborative Business Processes. SciTePress - Science and and Technology Publications, 2006. http://dx.doi.org/10.5220/0002477600720083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

"Changing Paradigms of Technical Skills for Data Engineers." In InSITE 2018: Informing Science + IT Education Conferences: La Verne California. Informing Science Institute, 2018. http://dx.doi.org/10.28945/4001.

Full text
Abstract:
Aim/Purpose: [This Proceedings paper was revised and published in the 2018 issue of the journal Issues in Informing Science and Information Technology, Volume 15] This paper investigates the new technical skills that are needed for Data Engineering. Past research is compared to new research which creates a list of the 20 top technical skills required by a Data Engineer. The growing availability of Data Engineering jobs is discussed. The research methodology describes the gathering of sample data and then the use of Pig and MapReduce on AWS (Amazon Web Services) to count occurrences of Data Engineering technical skills from 100 Indeed.com job advertisements in July, 2017. Background: A decade ago, Data Engineering relied heavily on the technology of Relational Database Management Systems (RDBMS). For example, Grisham, P., Krasner, H., and Perry D. (2006) described an Empirical Software Engineering Lab (ESEL) that introduced Relational Database concepts to students with hands-on learning that they called "Data Engineering Education with Real-World Projects." However, as seismic improvements occurred in the processing of large distributed datasets, big data analytics has moved into the forefront of the IT industry. As a result, the definition of Data Engineering has broadened and evolved to include newer technology that supports the distributed processing of very large amounts of data (e.g. the Hadoop Ecosystem and NoSQL Databases). This paper examines the technical skills that are needed to work as a Data Engineer in today's rapidly changing technical environment. Research is presented that reviews 100 job postings for Data Engineers from Indeed (2017) during the month of July, 2017 and then ranks the technical skills in order of importance. The results are compared to earlier research by Stitch (2016) that ranked the top technical skills for Data Engineers in 2016 using LinkedIn to survey 6,500 people who identified themselves as Data Engineers.
Methodology: A sample of 100 Data Engineering job postings was collected and analyzed from Indeed during July, 2017. The job postings were pasted into a text file and then related words were grouped together to make phrases. For example, the word "data" was put into context with other related words to form phrases such as "Big Data", "Data Architecture" and "Data Engineering". A text editor was used for this task, and the find/replace functionality of the text editor proved to be very useful for this project. After making phrases, the large text file was uploaded to the Amazon cloud (AWS) and a Pig batch job using MapReduce was leveraged to count the occurrences of phrases and words within the text file. The resulting phrases/words with occurrence counts were downloaded to a Personal Computer (PC) and then loaded into an Excel spreadsheet. Using a spreadsheet enabled the phrases/words to be sorted by occurrence count and then facilitated the filtering out of irrelevant words. Another task to prepare the data involved combining phrases or words that were synonymous. For example, the occurrence count for the acronym ELT and the occurrence count for the acronym ETL were added together to make an overall ELT/ETL occurrence count. ETL is a Data Warehousing acronym for Extracting, Transforming and Loading data. This task required knowledge of the subject area. Also, some words were counted in lower case and then the same word was also counted in mixed or upper case, thus producing two or three occurrence counts for the same word. These different counts were added together to make an overall occurrence count for the word (e.g. word occurrence counts for Python and python were added together). Finally, the Indeed occurrence counts were sorted to allow for the identification of a list of the top 20 technical skills needed by a Data Engineer. Contribution: Provides new information about the technical skills needed by Data Engineers.
Findings: Twelve of the 20 Stitch (2016) report phrases/words that are highlighted in bold above matched the technical skills mentioned in the Indeed research. I considered C, C++ and Java a match to the broader category of Programming in the Indeed data. Although the ranked order of the two lists did not match, the top five ranked technical skills for both lists are similar. The reader of this paper might consider the skills of SQL, Python and Hadoop/HDFS to be very important technical skills for a Data Engineer. Although the programming language R is very popular with Data Scientists, it did not make the top 20 skills for Data Engineering; it was in the overall list from Indeed. The R programming language is oriented towards analytical processing (e.g. used by Data Scientists), whereas the Python language is a scripting and object-oriented language that facilitates the creation of Data Pipelines (e.g. used by Data Engineers). Because the data was collected one year apart and from very different data sources, the timing of the data collection and the different data sources could account for some of the differences in the ranked lists. It is worth noting that the Indeed research ranked list introduced the technical skills of Design Skills, Spark, AWS (Amazon Web Services), Data Modeling, Kafka, Scala, Cloud Computing, Data Pipelines, APIs and AWS Redshift Data Warehousing to the top 20 ranked technical skills list. The Stitch (2016) report did not have matches to the Indeed (2017) sample data for Linux, Databases, MySQL, Business Intelligence, Oracle, Microsoft SQL Server, Data Analysis and Unix. Although many of these Stitch top 20 technical skills were on the Indeed list, they did not make the top 20 ranked technical skills. Recommendations for Practitioners: Some of the skills needed for Database Technologies are transferable to Data Engineering.
Recommendation for Researchers: None. Impact on Society: There is not much peer-reviewed literature on the subject of Data Engineering; this paper will add new information to the subject area. Future Research: I'm developing a Specialization in Data Engineering for the MS in Data Science degree at our university.
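The counting step described in the Methodology above (case folding, merging synonymous skill labels such as ELT/ETL, then ranking by occurrence count) ran as a Pig/MapReduce job on AWS. A small local Python sketch of the same logic, with invented job postings and a toy synonym table, might look like this:

```python
from collections import Counter

# Toy synonym table: maps variant labels onto one merged skill label,
# mirroring the paper's manual merging of ELT/ETL and Hadoop/HDFS counts.
SYNONYMS = {
    "etl": "elt/etl", "elt": "elt/etl",
    "hadoop": "hadoop/hdfs", "hdfs": "hadoop/hdfs",
}

def top_skills(postings, k=3):
    """Count normalized skill tokens across postings and rank the top k."""
    counts = Counter()
    for text in postings:
        for token in text.lower().split():  # case folding merges Python/python
            counts[SYNONYMS.get(token, token)] += 1
    return counts.most_common(k)

# Invented postings for illustration only.
postings = [
    "Python SQL Hadoop ETL",
    "SQL Python Spark",
    "HDFS ELT SQL",
]
print(top_skills(postings))
```

The real study applied the same idea to multi-word phrases built with an editor beforehand, and at a scale where the distributed Pig job was worthwhile.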
APA, Harvard, Vancouver, ISO, and other styles
