Journal articles on the topic "IBM Watson Content Analytics"

To see the other types of publications on this topic, follow this link: IBM Watson Content Analytics.

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "IBM Watson Content Analytics".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and a bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Leonenko, E. A., and S. V. Kunev. "RESEARCH OF THE COMPANY'S MARKETING ENVIRONMENT BASED ON DIGITAL TECHNOLOGY TOOLS AND A PROCESS FRAMEWORK". Scientific Review Theory and Practice 11, no. 7 (2021): 2122–37. http://dx.doi.org/10.35679/2226-0226-2021-11-7-2122-2137.

Annotation:
The study of the marketing environment of an enterprise is one of the most difficult types of marketing activity, since it always contains an element of foreseeing a complex and contradictory socio-economic object, the market; the content and forms of marketing-environment research therefore depend on the economic structure and the external and internal conditions in which it develops, and can vary significantly. In the 21st century, the use of digital tools in various areas of business is accelerating rapidly, and the coronavirus pandemic acted as a catalyst for these processes in 2020. The changes in the pace of adoption of digital tools will not be reversed; they will probably intensify even as quarantine measures are scaled back. The market of marketing research tools has long offered full-fledged software products (ALS Base, Marketing Analytic, Marketing Expert, Power Analysis, BEST Marketing, etc.) that are used directly in the marketing or planning divisions of organizations, as well as digital online services (Google Analytics, Yandex.Metrica, IBM Watson Analytics), smartphone applications, and more. At the same time, all of these digital tools rely on a digital footprint, which, given the still insufficient presence of Russian companies on the Internet, limits their use for analytical purposes. Another trend is the introduction of a digital framework, also called Scrum technology: a set of principles and tools most often used in IT development. With their help, a developer can deliver a workable product in iterations of limited duration, called sprints, whose scope is determined during sprint planning. The short iteration ensures predictability of development and, at the same time, flexibility of the process.
2

Tsoi, Kelvin K. F., Felix C. H. Chan, Hoyee W. Hirai, Gary K. S. Keung, Yong-Hong Kuo, Samson Tai, and Helen M. L. Meng. "Data Visualization with IBM Watson Analytics for Global Cancer Trends Comparison from World Health Organization". International Journal of Healthcare Information Systems and Informatics 13, no. 1 (January 2018): 45–54. http://dx.doi.org/10.4018/ijhisi.2018010104.

Annotation:
Visual analytics is widely used to explore data patterns and trends. This work leverages cancer data collected by the World Health Organization (WHO) from about a hundred cancer registries worldwide. In this study, the authors present a visual analytics platform, IBM Watson Analytics, to explore the patterns of global cancer incidence. They included 26 forms of cancer from eight geographic regions: the United States, the United Kingdom, Costa Rica, Sweden, Croatia, Japan, Hong Kong, and China (Shanghai). An interactive interface was applied to plot a choropleth map showing the global cancer distribution, and line charts demonstrating historical cancer trends over 29 years. Subgroup analyses were conducted for different age groups. With real-time interactive features, one can easily explore the data by selecting any cancer type, gender, age group, or geographical region. The platform runs in the cloud, so it can handle data in huge volumes and is accessible from any computer connected to the Internet. IBM Watson Analytics released its latest version, "IBM Watson Analytics New User Experience", at the end of 2016. The new version streamlined the process of adding data, discovering data meaning, and displaying results visually. The authors discuss the new features at the end of the paper.
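The two views the study describes, a choropleth map and regional trend lines, can be approximated outside IBM Watson Analytics in a few lines of Python. This is a minimal sketch, not the paper's workflow; the WHO incidence file and its column names are hypothetical stand-ins for the registry data.

```python
# Hedged sketch: choropleth map + historical trend lines with plotly.
# "who_cancer_incidence.csv" and its columns are hypothetical.
import pandas as pd
import plotly.express as px

df = pd.read_csv("who_cancer_incidence.csv")  # columns: iso3, region, year, cancer_type, incidence_rate

# Choropleth map of global distribution for one cancer type in one year
snapshot = df[(df["cancer_type"] == "Lung") & (df["year"] == 2010)]
px.choropleth(snapshot, locations="iso3", color="incidence_rate",
              title="Lung cancer incidence, 2010").show()

# Line charts of historical trends per region over the 29-year window
trend = (df[df["cancer_type"] == "Lung"]
         .groupby(["region", "year"], as_index=False)["incidence_rate"].mean())
px.line(trend, x="year", y="incidence_rate", color="region",
        title="Lung cancer incidence trends by region").show()
```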
3

Hoyt, Robert Eugene, Dallas Snider, Carla Thompson, and Sarita Mantravadi. "IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics". JMIR Public Health and Surveillance 2, no. 2 (October 11, 2016): e157. http://dx.doi.org/10.2196/publichealth.5810.

4

Bensberg, Frank, Gunnar Auth, and Christian Czarnecki. "Einsatz von Text Analytics zur Unterstützung literaturintensiver Forschungsprozesse". Anwendungen und Konzepte der Wirtschaftsinformatik, no. 8 (December 20, 2018): 6. http://dx.doi.org/10.26034/lu.akwi.2018.3221.

Annotation:
The continued growth of scientific publishing raises the question of how literature analyses within research processes can be digitalized and thus carried out more productively. Particularly in information technology fields, research practice is characterized by a rapidly growing volume of publications. This makes text analytics methods, which can automatically prepare and process text data, an obvious choice. Insights arise from analyses of word classes and subgroups, as well as correlation and time-series analyses. This article presents the design and implementation of a prototype that lets users explore bibliographic data from the established literature database EBSCO Discovery Service using text-analytical methods. The prototype is based on the analytics system IBM Watson Explorer, which is available to universities free of licensing costs. Potential addressees of the prototype are research institutions, consulting firms, and decision-makers in politics and business practice.
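The flavour of analysis the prototype supports (term frequencies and time-series views over bibliographic records) can be approximated in plain Python. This is a sketch under assumptions, not the IBM Watson Explorer implementation; the CSV export and its columns are hypothetical stand-ins for an EBSCO Discovery Service export.

```python
# Term frequencies and a per-year time series over bibliographic records.
# "ebsco_export.csv" and its columns ("year", "title", "abstract") are hypothetical.
import re
from collections import Counter
import pandas as pd

records = pd.read_csv("ebsco_export.csv")

def tokens(text):
    return re.findall(r"[a-zA-Z]{3,}", str(text).lower())

# Corpus-wide term frequencies
overall = Counter(t for abstract in records["abstract"] for t in tokens(abstract))
print(overall.most_common(20))

# Time series: occurrences of a term of interest per publication year
term = "analytics"
hits = records["abstract"].map(lambda a: tokens(a).count(term))
print(records.assign(hits=hits).groupby("year")["hits"].sum())
```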
5

Guidi, Gabriele, Roberto Miniati, Matteo Mazzola, and Ernesto Iadanza. "Case Study: IBM Watson Analytics Cloud Platform as Analytics-as-a-Service System for Heart Failure Early Detection". Future Internet 8, no. 3 (July 13, 2016): 32. http://dx.doi.org/10.3390/fi8030032.

6

Marshall, Thomas Edward, and Sherwood Lane Lambert. "Cloud-Based Intelligent Accounting Applications: Accounting Task Automation Using IBM Watson Cognitive Computing". Journal of Emerging Technologies in Accounting 15, no. 1 (March 1, 2018): 199–215. http://dx.doi.org/10.2308/jeta-52095.

Annotation:
This paper presents a cognitive computing model, based on artificial intelligence (AI) technologies, supporting task automation in the accounting industry. Drivers and consequences of task automation, globally and in accounting, are reviewed. A framework supporting cognitive task automation is discussed. The paper recognizes essential differences between cognitive computing and data analytics. Cognitive computing technologies that support task automation are incorporated into a model delivering federated knowledge. The impact of task automation on accounting job roles and the resulting creation of new accounting job roles supporting innovation are presented. The paper develops a hypothetical use case of building a cloud-based intelligent accounting application design, defined as cognitive services, using machine learning based on AI. The paper concludes by recognizing the significance of future research into task automation in accounting and suggests the federated knowledge model as a framework for future research into the process of digital transformation based on cognitive computing.
7

Nagwanshi, Kapil Kumar, and Sipi Dubey. "Statistical Feature Analysis of Human Footprint for Personal Identification Using BigML and IBM Watson Analytics". Arabian Journal for Science and Engineering 43, no. 6 (July 22, 2017): 2703–12. http://dx.doi.org/10.1007/s13369-017-2711-z.

8

Choi, Youngkeun, and Jae W. Choi. "Assessing the Predictive Performance of Machine Learning in Direct Marketing Response". International Journal of E-Business Research 19, no. 1 (April 7, 2023): 1–12. http://dx.doi.org/10.4018/ijebr.321458.

Annotation:
This paper intends to better understand the pre-exercise of modeling for direct marketing response prediction and to assess the predictive performance of machine learning. For this, the authors use a machine learning technique on a direct marketing dataset available from IBM Watson Analytics in the IBM community. In the results, first, among all variables, customer lifetime value, coverage, employment status, income, marital status, monthly premium auto, months since last claim, months since policy inception, renew offer type, and total claim amount are shown to influence direct marketing response, while the others have no significance. Second, for the full model, the accuracy rate is 0.864, which implies that the error rate is 0.136. Among the customers predicted not to respond to direct marketing, 87.23% indeed did not respond; among the customers predicted to respond, 66.34% did.
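The per-class figures quoted above are the negative and positive predictive values of a confusion matrix. Below is a hedged sketch of such an assessment with scikit-learn; the file and column names follow IBM's public marketing sample dataset but should be treated as assumptions.

```python
# Classifier for marketing response plus the per-class predictive values quoted above.
# "marketing_customer_value.csv" and its columns are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix

df = pd.read_csv("marketing_customer_value.csv")
y = (df["Response"] == "Yes").astype(int)
X = pd.get_dummies(df.drop(columns=["Response"]), drop_first=True)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)

print("accuracy:", accuracy_score(y_te, pred))
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("negative predictive value:", tn / (tn + fn))  # cf. the 87.23% figure
print("positive predictive value:", tp / (tp + fp))  # cf. the 66.34% figure
```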
9

Ahmed, Mohammed Imtyaz, and G. Kannan. "Secure End to End Communications and Data Analytics in IoT Integrated Application Using IBM Watson IoT Platform". Wireless Personal Communications 120, no. 1 (May 6, 2021): 153–68. http://dx.doi.org/10.1007/s11277-021-08439-7.

10

Varenko, Volodymyr. "Operational (Online) Analytics: New Technologies and Opportunities". Ukrainian Journal on Library and Information Science, no. 9 (June 17, 2022): 10–21. http://dx.doi.org/10.31866/2616-7654.9.2022.259140.

Annotation:
The aim of the article is to systematise and generalise new knowledge on operational analytics and to consider specific automated information systems in terms of their present state and development prospects. The research methodology was based on the general scientific principles of the unity of theory and practice, systematicity, complexity, and comprehensiveness of knowledge. The use of general scientific (description, analysis, synthesis, comparison, generalisation) and special (bibliographic, sample observation, grouping, content analysis) methods at the empirical and theoretical levels of research contributed to achieving this goal. The scientific novelty of the study lies in generalising and systematising new and available knowledge on operational analytics, in terms of its present state and development prospects, within one study. Conclusions. Using the case of the American corporation International Business Machines (IBM, also known as the "Blue Giant"), one of the world's largest manufacturers of all types of computers and software and one of the largest providers of global information networks, the author describes and analyses the variety and the features of use of automated, AI-based information-analytical systems that are available to many users today and will become a daily reality for any company tomorrow. The information systems and products that IBM now offers to the Ukrainian consumer are briefly described: systems designed for analytics by area of application; Big Data analytics; data visualisation; policy analytics; data integration systems; advanced analytics products; information systems of advanced analytics based on the Internet of Things; and automated systems for forecasting analytics. Attention is focused on the features and benefits of operational analytics and its capabilities in practical use. The expediency of applying particular automated information systems according to the sphere of activity and the stage of information analysis is substantiated. The advantages and prospects of the development of operational (online) analytics in data analytics are considered.
11

Wrzeszczynski, Kazimierz O., Mayu O. Frank, Takahiko Koyama, Kahn Rhrissorrakrai, Nicolas Robine, Filippo Utro, Anne-Katrin Emde, et al. "Comparing sequencing assays and human-machine analyses in actionable genomics for glioblastoma". Neurology Genetics 3, no. 4 (July 11, 2017): e164. http://dx.doi.org/10.1212/nxg.0000000000000164.

Annotation:
Objective: To analyze a glioblastoma tumor specimen with 3 different platforms and compare potentially actionable calls from each. Methods: Tumor DNA was analyzed by a commercial targeted panel. In addition, tumor-normal DNA was analyzed by whole-genome sequencing (WGS) and tumor RNA was analyzed by RNA sequencing (RNA-seq). The WGS and RNA-seq data were analyzed by a team of bioinformaticians and cancer oncologists, and separately by IBM Watson Genomic Analytics (WGA), an automated system for prioritizing somatic variants and identifying drugs. Results: More variants were identified by WGS/RNA analysis than by targeted panels. WGA completed a comparable analysis in a fraction of the time required by the human analysts. Conclusions: The development of an effective human-machine interface in the analysis of deep cancer genomic datasets may provide potentially clinically actionable calls for individual patients in a more timely and efficient manner than currently possible. ClinicalTrials.gov identifier: NCT02725684.
12

Balan, Shilpa, and Janhavi Rege. "Mining for Social Media: Usage Patterns of Small Businesses". Business Systems Research Journal 8, no. 1 (March 28, 2017): 43–50. http://dx.doi.org/10.1515/bsrj-2017-0004.

Annotation:
Background: Information can now be rapidly exchanged due to social media. Due to its openness, Twitter has generated massive amounts of data. In this paper, we apply data mining and analytics to extract the usage patterns of social media by small businesses. Objectives: The aim of this paper is to describe with an example how data mining can be applied to social media. This paper further examines the impact of social media on small businesses. The Twitter posts related to small businesses are analyzed in detail. Methods/Approach: The patterns of social media usage by small businesses are observed using IBM Watson Analytics. In this paper, we particularly analyze tweets on Twitter for the hashtag #smallbusiness. Results: It is found that the number of females posting topics related to small business on Twitter is greater than the number of males. It is also found that the number of negative posts in Twitter is relatively low. Conclusions: Small firms are beginning to understand the importance of social media to realize their business goals. For future research, further analysis can be performed on the date and time the tweets were posted.
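Once the tweets are collected, the usage-pattern questions the study asks can be posed directly in pandas. The sketch below is illustrative, not the paper's IBM Watson Analytics workflow; the CSV of pre-collected tweets and its gender/sentiment labels are hypothetical.

```python
# Gender split, sentiment mix, and posting times for #smallbusiness tweets.
# "smallbusiness_tweets.csv" and its columns are hypothetical.
import pandas as pd

tweets = pd.read_csv("smallbusiness_tweets.csv")  # columns: text, author_gender, sentiment, created_at

print(tweets["author_gender"].value_counts(normalize=True))   # female vs. male posters
print((tweets["sentiment"] == "negative").mean())             # share of negative posts

# Posting volume by hour, for the suggested follow-up on dates and times
tweets["created_at"] = pd.to_datetime(tweets["created_at"])
print(tweets["created_at"].dt.hour.value_counts().sort_index())
```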
13

Pérez-Gutiérrez, Boris Rainiero. "Comparación de técnicas de minería de datos para identificar indicios de deserción estudiantil, a partir del desempeño académico". Revista UIS Ingenierías 19, no. 1 (January 1, 2020): 193–204. http://dx.doi.org/10.18273/revuin.v19n1-2020018.

Annotation:
One of the great challenges for educational institutions is to establish the likelihood that their students will withdraw or drop out. This article presents the results of a study comparing techniques to support the identification of student dropout, based on the academic records of students in the Systems Engineering programme of a university in Colombia. The academic record covered a period of 7 years. Decision trees, logistic regression, and Naive Bayes were compared to establish the best technique for detecting dropouts. Additionally, IBM's Watson Analytics tool was used to compare its usability and accuracy for a non-expert user. Our experimentation showed that the use of simple algorithms is sufficient to reach ideal levels of accuracy. These results are presented to the academic community to help reduce student dropout.
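A minimal scikit-learn sketch of the comparison the study performs, under assumed data; the academic-records file and its columns are hypothetical.

```python
# Decision tree vs. logistic regression vs. Naive Bayes for dropout detection.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

df = pd.read_csv("academic_records.csv")  # hypothetical export of the academic registry
y = df["dropped_out"]
X = pd.get_dummies(df.drop(columns=["dropped_out"]), drop_first=True)

models = {
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "naive Bayes": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```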
14

Freeman, Daniel, Shaurya Gupta, D. Hudson Smith, Joe Mari Maja, James Robbins, James S. Owen, Jose M. Peña, and Ana I. de Castro. "Watson on the Farm: Using Cloud-Based Artificial Intelligence to Identify Early Indicators of Water Stress". Remote Sensing 11, no. 22 (November 13, 2019): 2645. http://dx.doi.org/10.3390/rs11222645.

Annotation:
As demand for freshwater increases while supply remains stagnant, the critical need for sustainable water use in agriculture has led the EPA Strategic Plan to call for new technologies that can optimize water allocation in real-time. This work assesses the use of cloud-based artificial intelligence to detect early indicators of water stress across six container-grown ornamental shrub species. Near-infrared images were previously collected with modified Canon and MAPIR Survey II cameras deployed via a small unmanned aircraft system (sUAS) at an altitude of 30 meters. Cropped images of plants in no, low-, and high-water stress conditions were split into four-fold cross-validation sets and used to train models through IBM Watson’s Visual Recognition service. Despite constraints such as small sample size (36 plants, 150 images) and low image resolution (150 pixels by 150 pixels per plant), Watson generated models were able to detect indicators of stress after 48 hours of water deprivation with a significant to marginally significant degree of separation in four out of five species tested (p < 0.10). Two models were also able to detect indicators of water stress after only 24 hours, with models trained on images of as few as eight water-stressed Buddleia plants achieving an average area under the curve (AUC) of 0.9884 across four folds. Ease of pre-processing, minimal amount of training data required, and outsourced computation make cloud-based artificial intelligence services such as IBM Watson Visual Recognition an attractive tool for agriculture analytics. Cloud-based artificial intelligence can be combined with technologies such as sUAS and spectral imaging to help crop producers identify deficient irrigation strategies and intervene before crop value is diminished. When brought to scale, frameworks such as these can drive responsive irrigation systems that monitor crop status in real-time and maximize sustainable water use.
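The evaluation protocol (four-fold cross-validation with AUC averaged over folds) is easy to replicate for any classifier. The sketch below is generic scikit-learn under assumed data, not the IBM Watson Visual Recognition training itself; the feature and label files are placeholders.

```python
# Mean AUC over four stratified folds for a stress vs. no-stress classifier.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X = np.load("image_features.npy")   # hypothetical per-image feature vectors
y = np.load("stress_labels.npy")    # 1 = water-stressed, 0 = control

aucs = []
for train, test in StratifiedKFold(n_splits=4, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))
print("mean AUC over 4 folds:", np.mean(aucs))  # cf. the 0.9884 reported for Buddleia
```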
15

Feliciano, Joseph, Machaon Bonafede, Donna McMorrow, Julie Park, Julie Lisano, and Thomas Manley. "Real World Prevalence of Diagnostic Revision Among Patients with Peripheral T-Cell Lymphomas (PTCL) in the US: Results of an Administrative Claims and Electronic Medical Record Analyses". Blood 132, Supplement 1 (November 29, 2018): 1633. http://dx.doi.org/10.1182/blood-2018-99-117104.

Annotation:
Introduction: Peripheral T-cell lymphomas (PTCL) are a rare heterogeneous group of lymphoid malignancies, which originate from post-thymic (or mature) T-cells. Although there are approximately 7,000 new cases of PTCL in the US annually, PTCL is extremely heterogenous and difficult to diagnose even in academic settings (Teras et al. 2016. CA Cancer J Clin 2016; 66:443-459). Prior research suggests a high rate of diagnostic revision among PTCL patients. The objective of this analysis was to describe the rate of diagnostic revision among patients with a PTCL diagnosis using 2 large, real-world US databases. Methods: This retrospective analysis used the IBM MarketScan Commercial and Medicare Supplemental administrative claims databases (claims) and the IBM Explorys Research Database, an electronic health record (EHR) database; both databases have national coverage but are fundamentally different in their structure and data sources. Adults (age ≥18 years) with a new diagnosis of PTCL between January 2010 and June 2017 were identified in each database. The prevalence of diagnostic revision was the primary outcome, defined as the diagnosis of a non-PTCL diagnosis before or after the PTCL diagnosis with the latest diagnosis serving as the index date. Other lymphomas included were Hodgkin's, diffuse large-B-cell (DLBCL) and marginal zone lymphoma. The presence of anaplastic large cell lymphoma, mature T/NK-cell lymphomas, and other subtypes of T/NK-cell lymphomas closely related to PTCL were described but were not classified as diagnostic revision. Patients were required to have 12 months of pre-index and one month of post-index continuous enrollment (claims) or evidence of database activity (EHR). Results: 2,561 patients met the selection criteria for the claims database analysis (mean age 59.9 [SD=15.4]; 42.9% female) and 1,881 met the EHR criteria (mean age 64.3 [SD=15.2]; 47.0% female). Diagnostic revision was present among 29.3% of PTCL patients in the claims analysis and 21.6% of PTCL patients in the EHR analysis. Among patients with diagnostic revision, 51.4% revised to PTCL and 48.6% revised from PTCL in the claims analysis, while 56.0% revised to PTCL and 44.0% revised away from PTCL in the EHR analysis. Among patients with diagnostic revision from PTCL, 37.5% were revised to DLBCL, 17.0% revised to Hodgkin's disease, and 4.9% to marginal zone lymphoma in the claims analysis, and 42.5% revised to DLBCL, 10.6% revised to Hodgkin's disease, and 7.9% to marginal zone lymphoma in the EHR analysis. Among patients with diagnostic revision to PTCL, 32.0% revised from DLBCL, 14.9% revised from Hodgkin's disease, and 5.2% from marginal zone lymphoma in the claims analysis, and 34.6% revised from DLBCL, 11.1% revised from Hodgkin's disease, and 10.3% from marginal zone lymphoma in the EHR analysis. The average time from the first diagnosis to the final diagnosis in the 12-month baseline period was 150.9 [SD=128.2; median 122.5] days in the claims analysis and 167.4 [SD=127.6; median=128.5] days in the EHR analysis (Table 1). Baseline Deyo Charlson Comorbidity Index scores were higher among patients with diagnostic revision in both the claims (4.1 [SD=2.8] vs. 2.8 [SD=2.5]) and EHR (2.4 [SD=2.5] vs. 4.4 [SD=3.1]) analysis (both p<0.01), suggesting clinical differences between these cohorts. Average post-index follow-up was 668.1 [SD=577.0] days for the claims and 852.8 [SD=676.2] for the EHR analyses.
Conclusions: This analysis reaffirms the need for appropriate diagnostic criteria and expertise when diagnosing PTCL and its subtypes. The analysis is limited to patients who meet the inclusion criteria for 1 of the 2 databases, and the generalizability of study results is subject to those same limitations. Differences in diagnostic revision rates between the databases may be due to underlying differences in patient populations in the data sources or potential coding differences among data contributors. Diagnostic revision is common among subtypes of PTCL and can serve as an area for quality improvement in the field of hematology oncology to promote accurate diagnosis closer to when lymphoma presentation becomes apparent. Disclosures: Feliciano: Seattle Genetics: Employment, Equity Ownership. Bonafede: Seattle Genetics: Other: I am an employee of IBM Watson Health (formerly Truven Health Analytics), which received a research contract to conduct this study with and on behalf of Seattle Genetics. McMorrow: Seattle Genetics: Other: I am an employee of IBM Watson Health (formerly Truven Health Analytics), which received a research contract to conduct this study with and on behalf of Seattle Genetics. Park: Seattle Genetics: Other: I am an employee of IBM Watson Health (formerly Truven Health Analytics), which received a research contract to conduct this study with and on behalf of Seattle Genetics. Lisano: Seattle Genetics: Employment, Equity Ownership. Manley: Seattle Genetics: Employment, Equity Ownership.
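The core cohort logic (a non-PTCL lymphoma diagnosis recorded before or after the PTCL diagnosis, with the latest diagnosis as the index) can be sketched in pandas. This is an illustration under assumptions, not the study's actual claims-processing code; the extract and its columns are hypothetical.

```python
# Flag diagnostic revision per patient from a hypothetical claims extract.
import pandas as pd

dx = pd.read_csv("lymphoma_claims.csv", parse_dates=["service_date"])
# columns: patient_id, diagnosis ("PTCL", "DLBCL", "Hodgkin", "Marginal zone"), service_date

OTHER = {"DLBCL", "Hodgkin", "Marginal zone"}

def classify(group):
    kinds = group.sort_values("service_date")["diagnosis"].tolist()
    revised = "PTCL" in kinds and any(k in OTHER for k in kinds)
    return pd.Series({"revised": revised,
                      "revised_to_ptcl": revised and kinds[-1] == "PTCL"})  # latest dx = index

summary = dx.groupby("patient_id").apply(classify)
print("prevalence of diagnostic revision:", summary["revised"].mean())
print("share revised to PTCL:", summary.loc[summary["revised"], "revised_to_ptcl"].mean())
```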
16

Chow, James C. L., Valerie Wong, Leslie Sanders, and Kay Li. "Developing an AI-Assisted Educational Chatbot for Radiotherapy Using the IBM Watson Assistant Platform". Healthcare 11, no. 17 (August 29, 2023): 2417. http://dx.doi.org/10.3390/healthcare11172417.

Annotation:
Objectives: This study aims to make radiotherapy knowledge regarding healthcare accessible to the general public by developing an AI-powered chatbot. The interactive nature of the chatbot is expected to facilitate better understanding of information on radiotherapy through communication with users. Methods: Using the IBM Watson Assistant platform on IBM Cloud, the chatbot was constructed following a pre-designed flowchart that outlines the conversation flow. This approach ensured the development of the chatbot with a clear mindset and allowed for effective tracking of the conversation. The chatbot is equipped to furnish users with information and quizzes on radiotherapy to assess their understanding of the subject. Results: By adopting a question-and-answer approach, the chatbot can engage in human-like communication with users seeking information about radiotherapy. As some users may feel anxious and struggle to articulate their queries, the chatbot is designed to be user-friendly and reassuring, providing a list of questions for the user to choose from. Feedback on the chatbot’s content was mostly positive, despite a few limitations. The chatbot performed well and successfully conveyed knowledge as intended. Conclusions: There is a need to enhance the chatbot’s conversation approach to improve user interaction. Including translation capabilities to cater to individuals with different first languages would also be advantageous. Lastly, the newly launched ChatGPT could potentially be developed into a medical chatbot to facilitate knowledge transfer.
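For orientation, querying a deployed Watson Assistant skill from Python looks roughly like the sketch below (ibm-watson SDK, AssistantV2). The API key, service URL, and assistant ID are placeholders, and the chatbot in the paper is configured in the Watson Assistant web tooling rather than in code.

```python
# Send one user utterance to a Watson Assistant skill and print the text replies.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV2(version="2021-06-14", authenticator=IAMAuthenticator("YOUR_API_KEY"))
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

ASSISTANT_ID = "YOUR_ASSISTANT_ID"
session_id = assistant.create_session(assistant_id=ASSISTANT_ID).get_result()["session_id"]

reply = assistant.message(
    assistant_id=ASSISTANT_ID,
    session_id=session_id,
    input={"message_type": "text", "text": "What is radiotherapy?"},
).get_result()
for item in reply["output"]["generic"]:
    if item["response_type"] == "text":
        print(item["text"])

assistant.delete_session(assistant_id=ASSISTANT_ID, session_id=session_id)
```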
17

C S, Anu. "Extract and Organize Information in Images with AI using IBM Services". International Journal for Research in Applied Science and Engineering Technology 10, no. 7 (July 31, 2022): 2031–35. http://dx.doi.org/10.22214/ijraset.2022.45670.

Annotation:
OCR is short for optical character recognition (or optical character reader). As the full form suggests, it is something that can read the content present in an image. Most images contain objects, and some contain characters that humans can read easily; programming a machine to read them is called OCR. In machine learning, data mining is one of the major areas, covering the extraction of data from different platforms. OCR is the part of the data-mining process that mainly deals with typed, handwritten, or printed documents, which hold their data mainly in the form of images. Extracting such data requires optimized models that can detect and recognize text. Getting information from documents with complex structure is difficult, so effective methodologies for information extraction are required. In this article, we discuss OCR together with the IBM Watson Natural Language Understanding API, a deep-learning-based tool, for localizing and detecting text in documents and images.
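A hedged sketch of such a pipeline: OCR first (here via pytesseract, as a stand-in for whichever OCR stage is used), then organization of the extracted text with the Watson NLU API. Credentials, the service URL, and the image path are placeholders.

```python
# Step 1: read the text out of a document image; Step 2: organize it with Watson NLU.
from PIL import Image
import pytesseract
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson.natural_language_understanding_v1 import Features, EntitiesOptions, KeywordsOptions

text = pytesseract.image_to_string(Image.open("scanned_document.png"))

nlu = NaturalLanguageUnderstandingV1(version="2022-04-07",
                                     authenticator=IAMAuthenticator("YOUR_API_KEY"))
nlu.set_service_url("https://api.us-south.natural-language-understanding.watson.cloud.ibm.com")

analysis = nlu.analyze(
    text=text,
    features=Features(entities=EntitiesOptions(limit=10), keywords=KeywordsOptions(limit=10)),
).get_result()
print([e["text"] for e in analysis["entities"]])
print([k["text"] for k in analysis["keywords"]])
```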
18

TP, Krishna Kumar, M. Ramachandran, Vidhya Prasanth, and Chandrasekar Raja. "Developing Business Services Using IBM SPSS Statistics". REST Journal on Banking, Accounting and Business 2, no. 1 (April 1, 2023): 40–50. http://dx.doi.org/10.46632/jbab/2/1/9.

Annotation:
This study examines business development services for entrepreneurs, which should be offered in various phases. Business services are defined as non-financial services and products; the term commonly describes work that supports a firm without producing a tangible product. Information technology (IT) is an important supporting service in many businesses, such as shipping and finance. A good business service aligns the company's IT assets with the needs of its employees and customers, supports business goals, and facilitates company profitability. The IT sector contributes by documenting the value of infrastructure processes, auditing IT services, creating or renewing the IT service inventory, and improving communication, for example through an employee self-service portal. Business development is about promoting growth in the company, increasing revenue by implementing strategies and pursuing opportunities: identifying new opportunities and converting more customers. To help a business grow, this involves conducting extensive market research, raising visibility and awareness, promoting thought leadership, conducting outreach, generating quality leads, providing exemplary customer service, and developing sales content from success stories. SPSS Statistics is a software package for data management, advanced analytics, multivariate analysis, business intelligence, and criminal investigation; it was created by SPSS Inc., which IBM purchased in 2009, and the most recent versions carry the brand name IBM SPSS Statistics. The reliability analysis showed an overall Cronbach's alpha of .490 for the model, indicating about 50% reliability; based on the literature review, a Cronbach's alpha value above 46% can be considered acceptable for analysing the model.
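To make the reliability figure concrete, Cronbach's alpha can be computed directly from an item-response matrix; the survey file below is hypothetical, while the formula is the standard one that SPSS implements.

```python
# Cronbach's alpha from a respondents-by-items matrix.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    items = items.dropna()
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

responses = pd.read_csv("survey_items.csv")  # one column per Likert item
print(f"Cronbach's alpha: {cronbach_alpha(responses):.3f}")  # cf. the .490 reported above
```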
19

Jung, Soon-Gyo, Joni Salminen, and Bernard J. Jansen. "Engineers, Aware! Commercial Tools Disagree on Social Media Sentiment: Analyzing the Sentiment Bias of Four Major Tools". Proceedings of the ACM on Human-Computer Interaction 6, EICS (June 14, 2022): 1–20. http://dx.doi.org/10.1145/3532203.

Annotation:
Large commercial sentiment analysis tools are often deployed in software engineering due to their ease of use. However, it is not known how accurate these tools are, and whether the sentiment ratings given by one tool agree with those given by another. We use two datasets: (1) NEWS, consisting of 5,880 news stories and 60K comments from four social media platforms (Twitter, Instagram, YouTube, and Facebook); and (2) IMDB, consisting of 7,500 positive and 7,500 negative movie reviews, to investigate the agreement and bias of four widely used sentiment analysis (SA) tools: Microsoft Azure (MS), IBM Watson, Google Cloud, and Amazon Web Services (AWS). We find that the four tools assign the same sentiment on less than half (48.1%) of the analyzed content. We also find that AWS exhibits neutrality bias in both datasets; Google exhibits bi-polarity bias in the NEWS dataset but neutrality bias in the IMDB dataset; and IBM and MS exhibit no clear bias in the NEWS dataset but have bi-polarity bias in the IMDB dataset. Overall, IBM has the highest accuracy relative to the known ground truth in the IMDB dataset. Findings indicate that psycholinguistic features, especially affect, tone, and use of adjectives, explain why the tools disagree. Engineers are urged to use caution when implementing SA tools for applications, as the tool selection affects the obtained sentiment labels.
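The headline agreement statistic is straightforward to recompute once per-tool labels are collected; the CSV below is a hypothetical export of those labels.

```python
# Share of items on which all four tools agree, plus pairwise agreement.
import pandas as pd

labels = pd.read_csv("tool_labels.csv")  # columns: item_id, ms_azure, ibm_watson, google_cloud, aws
tools = ["ms_azure", "ibm_watson", "google_cloud", "aws"]

all_agree = labels[tools].nunique(axis=1).eq(1)  # one distinct label per row = full agreement
print(f"all four tools agree on {all_agree.mean():.1%} of items")  # cf. the 48.1% figure

for a in tools:
    for b in tools:
        if a < b:
            print(f"{a} vs {b}: {(labels[a] == labels[b]).mean():.1%}")
```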
20

Walter, Angela Wangari, Randall P. Ellis, and Yiyang Yuan. "Health care utilization and spending among privately insured children with medical complexity". Journal of Child Health Care 23, no. 2 (July 19, 2018): 213–31. http://dx.doi.org/10.1177/1367493518785778.

Annotation:
Children with medical complexity have high health service utilization and health expenditures that can impose significant financial burdens. This study examined these issues for families with children enrolled in US private health plans. Using IBM Watson/Truven Analytics℠ MarketScan® commercial claims and encounters data (2012–2014), we analyzed through regression models, the differences in health care utilization and spending of disaggregated health care services by health plan types and children’s medical complexity levels. Children in consumer-driven and high-deductible plans had much higher out-of-pocket spending and cost shares than those in health maintenance organizations and preferred provider organizations (PPOs). Children with complex chronic conditions had higher service utilization and out-of-pocket expenditures while having lower cost shares on various categories of services than those without any chronic condition. Compared to families covered by PPOs, those with high-deductible or consumer-driven plans were 2.7 and 1.7 times more likely to spend over US$1000 out of pocket on their children’s medical care, respectively. Families with higher complexity levels were more likely to experience financial burdens from expenditures on children’s medical services. In conclusion, policymakers and families with children need to be cognizant of the significant financial burdens that can arise from children’s complex medical needs and health plan demand-side cost sharing.
21

Rios-Campos, Carlos, Sandra Marcela Zapata Vega, Mariuxi Ileana Tejada-Castro, Erick Orlando Guerrero Zambrano, Dany Jamnier German Barreto Perez, Edilbrando Vega Calderón, Luz Magaly Fernandez Rojas, and Irene Marely Ballena Alcantara. "Artificial Intelligence and Business". South Florida Journal of Development 4, no. 9 (November 29, 2023): 3547–64. http://dx.doi.org/10.46932/sfjdv4n9-015.

Annotation:
The general objective of the research was to determine the advances related to artificial intelligence and business. The specific objectives were to identify the AI software used in business and the countries that invest most in AI. Methodology: 62 documents published in the period 2018–2023 were selected, including scientific articles, review articles, and information from the websites of recognized organizations. Results: new AI tools that support activities in today's businesses appear every day. Conclusions: regarding the general objective, the importance of AI in business is beginning to be understood, and it is therefore being adopted in various organizational processes worldwide. AI is impacting various human activities, so it is necessary to keep an eye on its evolution. During the pandemic, businesses worldwide had to make intensive use of information and communication technologies in order to reach their customers; AI has since emerged as a tool that can radically change business. Regarding the specific objectives, the AI software used in business includes Salesforce, HubSpot, IBM Watson, ChatGPT, Alteryx Analytics Automation Platform, Amazon SageMaker, the GPT-3 model, Adobe Sensei, Marketo, AI-based supply-chain management solutions, Kasisto, Workday, HireVue, Tableau, Power BI, Dynamic Yield, Darktrace, Copy.ai, UiPath, Automation Anywhere, Merative, Google Health, and Google Translate. The countries that invest the most in AI are the United States, China, and the European Union.
22

Wilhelm, Anja, and Wolfgang Ziegler. "Extending semantic context analysis using machine learning services to process unstructured data". SHS Web of Conferences 102 (2021): 02001. http://dx.doi.org/10.1051/shsconf/202110202001.

Annotation:
The primary focus of technical communication (TC) in the past decade has been the system-assisted generation and utilization of standardized, structured, and classified content for dynamic output solutions. Nowadays, machine learning (ML) approaches offer a new opportunity to integrate unstructured data into existing knowledge bases without the need to manually organize information into topic-based content enriched with semantic metadata. To make the field of artificial intelligence (AI) more accessible for technical writers and content managers, cloud-based machine learning as a service (MLaaS) solutions provide a starting point for domain-specific ML modelling while unloading the modelling process from extensive coding, data processing and storage demands. Therefore, information architects can focus on information extraction tasks and on prospects to include pre-existing knowledge from other systems into the ML modelling process. In this paper, the capability and performance of a cloud-based ML service, IBM Watson, are analysed to assess their value for semantic context analysis. The ML model is based on a supervised learning method and features deep learning (DL) and natural language processing (NLP) techniques. The subject of the analysis is a corpus of scientific publications on the 2019 Coronavirus disease. The analysis focuses on information extractions regarding preventive measures and effects of the pandemic on healthcare workers.
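As a generic stand-in for the MLaaS workflow (the paper itself uses IBM Watson's tooling), a supervised text classifier for the same extraction task might look like the sketch below; the labelled corpus file and its columns are hypothetical.

```python
# Classify sentences from COVID-19 publications as describing preventive measures or not.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

data = pd.read_csv("labelled_sentences.csv")  # columns: sentence, is_preventive_measure
X_tr, X_te, y_tr, y_te = train_test_split(
    data["sentence"], data["is_preventive_measure"], test_size=0.2, random_state=0)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                      LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
print(model.predict(["Wearing masks reduced transmission among nurses."]))
```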
23

Schwartz, Steven M., Brigid Byrd, Helen Dempster, and Tim Payne. "The Promise of Small Data for Telemedicine in Chronic Condition Management: A Real-World Case Series". Clinical Trials and Practice – Open Journal 4, no. 1 (December 31, 2021): 1–9. http://dx.doi.org/10.17140/ctpoj-4-117.

Annotation:
Connected care is defined as the "real-time, electronic communication between a patient and a provider, including telehealth, remote patient monitoring, and secure email communication between clinicians and their patients" (Alliance for Connected Care). Connected care can create a high-value interaction strategy with patients when it makes thoughtful use of commercially available digital health technologies with demonstrated clinical and economic effectiveness. Karantis360™ is a home sensor technology, powered by IBM Watson Health, that enables real-time tracking, data analytics, and predictive care for personal (at home) care. IndividuALLytics™ is a telemedicine platform driven by a patent-pending N-of-1 analytical engine and related digital dashboards that provide individual, patient-level evaluation of treatment response. The underlying technology combines disparate digital health technology data and best-evidence guidelines with N-of-1 methodology. The output allows personalized treatments to be created and empirically tested at the patient level over time (i.e., over the course of care). When aggregated both within and across persons, the time-ordered data can build predictive pathways of behavior and ensure that the relevant care and medical treatments are in place to support effective medical and self-management of chronic illness. This case-series report describes the joint implementation of a home sensor technology (big data) and an N-of-1 analytic engine (small data) with three consented elderly volunteer customer-patients of Karantis360™. Each person underwent successive two-week behavioral-change treatment phases to determine usability, utility for medical and self-management, and any proximal effects on health risks.
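The "small data" idea can be illustrated with a toy N-of-1 contrast between successive treatment phases for a single patient. The data layout and the simple two-phase test are assumptions for illustration; a real engine would also model trend and autocorrelation.

```python
# Compare one patient's outcome across two successive two-week phases.
import pandas as pd
from scipy import stats

obs = pd.read_csv("patient_timeseries.csv", parse_dates=["day"])  # columns: day, phase, outcome

a = obs.loc[obs["phase"] == "A", "outcome"]
b = obs.loc[obs["phase"] == "B", "outcome"]
t, p = stats.ttest_ind(a, b, equal_var=False)  # naive phase contrast; ignores autocorrelation
print(f"mean A={a.mean():.2f}, mean B={b.mean():.2f}, Welch t={t:.2f}, p={p:.3f}")
```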
24

Lee, Linda W., Amir Dabirian, Ian P. McCarthy, and Jan Kietzmann. "Making sense of text: artificial intelligence-enabled content analysis". European Journal of Marketing 54, no. 3 (February 24, 2020): 615–44. http://dx.doi.org/10.1108/ejm-02-2019-0219.

Annotation:
Purpose: The purpose of this paper is to introduce, apply and compare how artificial intelligence (AI), and specifically the IBM Watson system, can be used for content analysis in marketing research relative to manual and computer-aided (non-AI) approaches to content analysis. Design/methodology/approach: To illustrate the use of AI-enabled content analysis, this paper examines the text of leadership speeches, content related to organizational brand. The process and results of using AI are compared to manual and computer-aided approaches by using three performance factors for content analysis: reliability, validity and efficiency. Findings: Relative to manual and computer-aided approaches, AI-enabled content analysis provides clear advantages with high reliability, high validity and moderate efficiency. Research limitations/implications: This paper offers three contributions. First, it highlights the continued importance of the content analysis research method, particularly with the explosive growth of natural language-based user-generated content. Second, it provides a road map of how to use AI-enabled content analysis. Third, it applies and compares AI-enabled content analysis to manual and computer-aided approaches, using leadership speeches. Practical implications: For each of the three approaches, nine steps are outlined and described to allow for replicability of this study. The advantages and disadvantages of using AI for content analysis are discussed. Together these are intended to motivate and guide researchers to apply and develop AI-enabled content analysis for research in marketing and other disciplines. Originality/value: To the best of the authors' knowledge, this paper is among the first to introduce, apply and compare how AI can be used for content analysis.
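One of the three performance factors, reliability, is typically quantified as inter-coder agreement; a toy illustration with Cohen's kappa follows (the labels below are invented).

```python
# Inter-coder agreement between two sets of content-analysis codes.
from sklearn.metrics import cohen_kappa_score

coder_a = ["vision", "values", "vision", "results", "values", "vision"]
coder_b = ["vision", "values", "results", "results", "values", "vision"]
print("Cohen's kappa:", round(cohen_kappa_score(coder_a, coder_b), 3))
```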
25

De, Hemangee, and Koushik Deb. "Does Social Media Follow News Media?" International Journal of Information Communication Technologies and Human Development 13, no. 4 (October 2021): 72–82. http://dx.doi.org/10.4018/ijicthd.2021100102.

Annotation:
Today, sentiment analysis is widely used to understand user reactions. In this paper, the authors discuss this topic using news and Twitter texts as sources of data, with TextBlob, VADER, and IBM Watson NLU as sentiment analysis tools. The news sentiment analysis covers January to July 2020, classified with each tool, and all three yield almost the same result: February has the most negative-polarity news, followed by June. The Twitter data for each month, classified with each tool, likewise shows consistent results across tools: March has the maximum negative polarity, while the maximum positive polarity is seen in January. The aim of this paper is to show that sentiment analysis of newspaper content can help ordinary readers recognize bias in newspapers and so limit the negative impact on readers, especially during a pandemic like COVID-19. The comparison drawn between the news sentiment analysis and the Twitter analysis shows good correlation but still a difference in sentiment.
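Two of the three tools are runnable locally in a few lines (the third, IBM Watson NLU, needs cloud credentials); the sample sentence is invented.

```python
# Polarity of the same text from TextBlob and VADER.
from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

text = "Cases are rising sharply, but recoveries give some hope."
print("TextBlob polarity:", TextBlob(text).sentiment.polarity)  # in [-1, 1]
print("VADER compound:",
      SentimentIntensityAnalyzer().polarity_scores(text)["compound"])  # in [-1, 1]
```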
26

Anagnoste, Sorin. "Robotic Automation Process – The operating system for the digital enterprise". Proceedings of the International Conference on Business Excellence 12, no. 1 (May 1, 2018): 54–69. http://dx.doi.org/10.2478/picbe-2018-0007.

Annotation:
Robotic Process Automation (RPA) is entering a mature market. The main vendors have surpassed USD 1 billion in valuation, and the products they are bringing to market will once again radically change the business landscape. What comes after RPA can already be seen: intelligent optical character recognition (IOCR), chatbots, machine learning, big data analytics, cognitive platforms, anomaly detection, pattern analysis, voice recognition, data classification, and more. As a result, the top vendors have developed partnerships with the leading artificial intelligence providers, such as IBM Watson, Microsoft Artificial Intelligence, Microsoft Cognitive Services, blockchain, Google, etc. On the business side, the consulting companies implementing RPA solutions are moving from developing proofs of concept (POCs) and pilots to helping clients with RPA global roll-outs and developing Centres of Excellence (CoE). The experience gathered so far by the author on such projects is therefore also addressed in this paper. The article presents data on automation for different business areas (e.g., Accounts Payable, Accounts Receivable) and shows how an assessment can be done correctly in order to decide whether a process can be automated and, if so, to what extent (i.e., what percentage). Moreover, the case studies show (1) how RPA is now integrated with artificial intelligence and the cloud, (2) how it can be scaled to cope with peaks, (3) how it can interpret data, and (4) what savings these technologies can bring to organizations. All the aforementioned services have made Robotic Process Automation a far more powerful tool than a year ago, when the author carried out his previous research. A process that was previously not recommended for automation, or only partially automated, can now be fully automated with greater advantages: monetary savings, non-FTE savings, and shorter fulfilment times.
27

Norman, Kim P., Anita Govindjee, Seth R. Norman, Michael Godoy, Kimberlie L. Cerrone, Dustin W. Kieschnick, and William Kassler. "Natural Language Processing Tools for Assessing Progress and Outcome of Two Veteran Populations: Cohort Study From a Novel Online Intervention for Posttraumatic Growth". JMIR Formative Research 4, no. 9 (September 23, 2020): e17424. http://dx.doi.org/10.2196/17424.

Annotation:
Background: Over 100 million Americans lack affordable access to behavioral health care. Among these, military veterans are an especially vulnerable population. Military veterans require unique behavioral health services that can address military experiences and challenges transitioning to the civilian sector. Real-world programs to help veterans successfully transition to civilian life must build a sense of community, have the ability to scale, and be able to reach the many veterans who cannot or will not access care. Digitally based behavioral health initiatives have emerged within the past few years to improve this access to care. Our novel behavioral health intervention teaches mindfulness-based cognitive behavioral therapy and narrative therapy using peer support groups as guides, with human-facilitated asynchronous online discussions. Our study applies natural language processing (NLP) analytics to assess effectiveness of our online intervention in order to test whether NLP may provide insights and detect nuances of personal change and growth that are not currently captured by subjective symptom measures. Objective: This paper aims to study the value of NLP analytics in assessing progress and outcomes among combat veterans and military sexual assault survivors participating in novel online interventions for posttraumatic growth. Methods: IBM Watson and Linguistic Inquiry and Word Count tools were applied to the narrative writings of combat veterans and survivors of military sexual trauma who participated in novel online peer-supported group therapies for posttraumatic growth. Participants watched videos, practiced skills such as mindfulness meditation, told their stories through narrative writing, and participated in asynchronous, facilitated online discussions with peers. The writings, including online postings, by the 16 participants who completed the program were analyzed after completion of the program. Results: Our results suggest that NLP can provide valuable insights on shifts in personality traits, personal values, needs, and emotional tone in an evaluation of our novel online behavioral health interventions. Emotional tone analysis demonstrated significant decreases in fear and anxiety, sadness, and disgust, as well as increases in joy. Significant effects were found for personal values and needs, such as needing or desiring closeness and helping others, and for personality traits of openness, conscientiousness, extroversion, agreeableness, and neuroticism (i.e., emotional range). Participants also demonstrated increases in authenticity and clout (confidence) of expression. NLP results were generally supported by qualitative observations and analysis, structured data, and course feedback. Conclusions: The aggregate of results in our study suggests that our behavioral health intervention was effective and that NLP can provide valuable insights on shifts in personality traits, personal values, and needs, as well as measure changes in emotional tone. NLP's sensitivity to changes in emotional tone, values, and personality strengths suggests the efficacy of NLP as a leading indicator of treatment progress.
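A hedged sketch of the outcome analysis described: a paired pre/post comparison of emotion scores per participant. The scores are assumed to have been exported already (e.g., from IBM Watson NLU or LIWC), and the paired t-test is illustrative rather than the study's exact statistics.

```python
# Paired pre/post comparison of emotional-tone scores.
import pandas as pd
from scipy import stats

scores = pd.read_csv("emotion_scores.csv")  # columns: participant, stage ("pre"/"post"), fear, sadness, joy
wide = scores.pivot(index="participant", columns="stage")

for emotion in ["fear", "sadness", "joy"]:
    t, p = stats.ttest_rel(wide[(emotion, "pre")], wide[(emotion, "post")])
    print(f"{emotion}: t={t:.2f}, p={p:.3f}")
```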
28

Kumar, Kartik. "An Educational Chatbot Using AI in Radiotherapy". International Journal of Scientific Research in Engineering and Management 08, no. 05 (May 16, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem34122.

Annotation:
The surge in demand for information in cancer centers and hospitals, particularly during the pandemic, overwhelmed the limited manpower available. To address this challenge, there arose a need to develop an educational chatbot tailored for diverse user groups in the field of radiotherapy, including patients and their families, the general public, and radiation staff. Objective: In response to the pressing clinical demands, the primary aim of this endeavor is to delve into the intricacies of designing an educational chatbot for radiotherapy using artificial intelligence. Methods: The chatbot is meticulously crafted using a dialogue tree and layered structure, seamlessly integrated with artificial intelligence functionalities, notably natural language processing (NLP). This adaptable chatbot can be deployed across various platforms, such as IBM Watson Assistant, and embedded in websites or diverse social media channels. Results: Employing a question-and-answer methodology, the chatbot adeptly engages users seeking information on radiotherapy, presenting an approachable and reassuring interface. Recognizing that users, often anxious, may struggle to articulate precise questions, the chatbot facilitates the interaction by offering a curated list of questions. The NLP system augments the chatbot's ability to discern user intent, ensuring the provision of accurate and targeted responses. Notably, the study reveals that functional features, including mathematical operations, are preferred in educational chatbots, necessitating routine updates to furnish fresh content and features. Conclusions: The study culminates in the affirmation that leveraging artificial intelligence facilitates the creation of an educational chatbot capable of disseminating information to users with diverse backgrounds in radiotherapy. Furthermore, the importance of rigorous testing and evaluation, informed by user feedback, is emphasized to iteratively enhance and refine the chatbot's performance. Keywords: AI, machine learning, NLP, chatbot, radiotherapy, IoT, healthcare.
29

O'Sullivan, Eugene J. "(Invited) Electrochemistry: Adventures in Metallization". ECS Meeting Abstracts MA2022-02, no. 30 (October 9, 2022): 1081. http://dx.doi.org/10.1149/ma2022-02301081mtgabs.

Annotation:
Microelectronics has benefited enormously from electrochemistry, particularly in metallization. Metallizing through-holes in multilevel printed circuit boards was a major, successful application of electroless Cu (1). Electroless Co-based magnetic films deposited on non-magnetic electroless nickel films on rigid aluminum disks propelled the magnetic storage industry for years. A decade or more ago, it looked as if electroless Co(W)(P) was the ideal candidate to replace PVD Ta-based liners for CMOS back-end-of-line (BEOL) builds (2). Its cost undid it, however, despite meeting selectivity, diffusion barrier and reliability requirements. Electrolytic Cu has been an outstanding success for CMOS BEOL interconnect metallization, mostly because of its submicron feature superfilling ability (3). Following such success, electrolytic and electroless deposition methods have never been far from microelectronics researchers’ interest. In this talk, I will describe examples of electrochemical metallization in chip level, power conversion and MEMS areas that I have worked on. MRAM Final Interconnect Level Capping We recently developed a maskless, electroless, high-P-content, Ni(P) capping process for the final Cu bitline wiring level in our STTM MRAM 200 mm wafer test vehicles. This replaced a two litho mask, final aluminum metal interconnect level, drastically shortening process time. This novel protective layer enables functional testing of MRAM device memory state retention in an air atmosphere at elevated temperatures (4). The Ni(P)-coated wafers show virtually unchanged device resistance and magnetoresistance (MR) for MRAM 4Kb arrays. Magnetic Inductor Fabrication Magnetic inductors are increasing in importance in the ongoing development of integrated, on-chip power conversion. The latter is critical for realizing the dream of granular, DC-DC power delivery using dedicated voltage regulators (VR). Traditionally, the large size of the inductor component has impeded efforts to fabricate the VR in one module. We explored potentially manufacturable processes for magnetic-core inductors with enhanced inductance using through-mask electrodeposited Ni45Fe55 (Fig. 1) (5) and electroless Co(W)(P) layers (6). Electroless Co(W)(P) yoke material performed best overall, showing excellent magnetic properties, good magnetic anisotropy and coercivity of less than 0.1 Oe (6). The resistivity of the Co(W)(P) material was about 90-100 µΩcm; a value of 100 µΩcm is desired to limit yoke eddy current loss at high frequencies. Device scaling has finally brought magnetic inductor fabrication within reach of BEOL CMOS fabs. Magnetic Minimotor Fabrication High-aspect-ratio optical or X-ray lithography (LIGA) and electrodeposition processes were used to fabricate variable-reluctance, nearly planar, integrated minimotors with 6-mm-diameter rotors on silicon wafers (7). The motors comprised six electrodeposited Ni81Fe19 (Permalloy) horseshoe-shaped cores that surrounded the rotor. We formed copper coils around each core. LIGA processing provided vertical wall profiles, which were important for the rotor and stator core pole tips (see stator pole tip, feature D, in Fig. 2). We fabricated the rotors separately and slipped them onto the shaft after releasing them from the substrate wafer. Shaft fabrication via electrodeposition occurred as part of the stator fabrication process. 
The LIGA-fabricated minimotor (100 μm thick Permalloy core with 40 μm thick rotor) represented the successful integration of aligned X-ray exposures and planarizing dielectric into a MEMS fabrication process, producing a working, five-layer magnetic motor. I will show some minimotor operational data.
References:
[1] See papers in IBM J. Res. Develop., 28(6) (1984), available online.
[2] See, e.g., Y. Shacham-Diamand et al., J. Electrochem. Soc., 148 (2001) C162.
[3] P. C. Andricacos et al., IBM J. Res. Develop., 42, 567 (1998).
[4] E. J. O'Sullivan et al., 2019 Meet. Abstr. MA2019-02, 916; doi: 10.1149/MA2019-02/15/916.
[5] E. J. O'Sullivan et al., ECS Transactions, 50(10), 93-105; doi: 10.1149/05010.0093ecst.
[6] N. Wang et al., MMM-Intermag, paper HG-11, 2013.
[7] E. J. O'Sullivan et al., IBM J. Res. Develop., 42, 681 (1998).
Acknowledgements: The authors gratefully acknowledge the efforts of the staff of the Microelectronics Research Laboratory (MRL) at the IBM T. J. Watson Research Center, where some of the fabrication work described in this talk was carried out.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Kuzhda, Tetiana, Nataliia Shveda und Nataliia Yuryk. „Application of information technologies in business-analysis of organizations activity under crisis conditions“. Galic'kij ekonomičnij visnik 81, Nr. 2 (2023): 96–105. http://dx.doi.org/10.33108/galicianvisnyk_tntu2023.02.096.

Der volle Inhalt der Quelle
Annotation:
The article defines the main factors that negatively influence the economic development of entities in the present situation and outlines the signs of an enterprise crisis state that make it necessary to use the levers of anti-crisis management. A company cannot exit a crisis state without a well-formed system of crisis management, which provides: timely solution of enterprise problems, stabilization of an unstable situation and elimination of negative factors, minimization of losses and lost opportunities, preventive crisis management, in-crisis management, management of crisis-exit procedures, and management of the enterprise's operation and development. The principles of a well-formed system of anti-crisis management are defined. The essence, functional purpose and algorithm of business analysis of an organization's activity under crisis conditions are revealed; the main vectors of business analysis development are investigated; and the most common classifications of software products for business analysis are considered. In recent years, a clear trend toward the fifth scientific and technical revolution, based on information technologies and artificial intelligence, has been observed across all areas of the world economy. That is why the application of information technologies in the business analysis of organizations' activity under crisis conditions is extremely important. Anti-crisis management covers management of enterprise architecture, strategic planning and analysis, formation of a CRM system, technical design, planning and implementation of measures ensuring product quality and competitiveness, object management in general and much more. Business analysis is always present in ensuring the effectiveness of a company's activities. In the scientific literature and in business practice, four fundamentally interrelated types of business analytics are generally distinguished: descriptive, diagnostic, predictive and prescriptive. In business analysis, the following main categories of functions are mandatory: integration, information representation, data analysis, modelling, forecasting, and forming a map of indicators. In an organization's management, the implementation of strategic decisions works through categories such as unity, coherence and internal consistency; here a CRM system provides effective tools for human resources management, task definition and control, data collection and analysis, support of all stages of the sales process, construction, service, internal communications and more. An ERP system increases the efficiency of the organization's resource-planning process, helps control internal processes and supports important business decisions in real time. The most up-to-date and in-demand software products for business analysis include Qlik Sense, QlikView, Naumen Service Desk, MicroStrategy Analytics, Roistat, GetReport, PlanFakt, Seenece, Business Scanner, Tibco Spotfire, SAP BusinessObjects, Finoko, IBM Cognos Business Intelligence, Power BI and SAP Lumira. In the course of their development, information and analytical technologies have changed the configuration of business analysis, making it an instrument for creating information content.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Poss-Doering, Regina, Aline Kunz, Sabrina Pohlmann, Helene Hofmann, Marion Kiel, Eva C. Winkler, Dominik Ose und Joachim Szecsenyi. „Utilizing a Prototype Patient-Controlled Electronic Health Record in Germany: Qualitative Analysis of User-Reported Perceptions and Perspectives“. JMIR Formative Research 2, Nr. 2 (03.08.2018): e10411. http://dx.doi.org/10.2196/10411.

Der volle Inhalt der Quelle
Annotation:
Background Personal electronic health records (PHR) are considered instrumental in improving health care quality and efficiency, enhancing communication between all parties involved and strengthening the patient’s role. Technical architectures, data privacy, and applicability issues have been discussed for many years. Nevertheless, nationwide implementation of a PHR is still pending in Germany despite legal regulations provided by the eHealth Act passed in 2015. Within the Information Technology for Patient-Oriented Care project funded by the Federal Ministry of Education and Research (2012-2017), a Web-based personal electronic health record prototype (PEPA) was developed, enabling patient-controlled information exchange across different care settings. Gastrointestinal cancer patients and general practitioners utilized PEPA during a 3-month trial period. Both patients and physicians authorized by them could view PEPA content online and upload or download files. Objective This paper aims to outline findings of the posttrial qualitative study carried out to evaluate user-reported experiences, perceptions, and perspectives, focusing on their interpretation of PEPA beyond technical usability and views on a future nationwide implementation. Methods Data were collected through semistructured guide-based interviews with 11 patients and 3 physicians (N=14). Participants were asked to share experiences, views of perceived implications, and perspectives towards nationwide implementation. Further data were generated through free-text fields in a subsequent study-specific patient questionnaire and researcher’s notes. Data were pseudonymized, audiotaped, and transcribed verbatim. Content analysis was performed through the Framework Analysis approach. All qualitative data were systemized by using MAXQDA Analytics PRO 12 (Rel.12.3.1). Additionally, participant characteristics were analyzed descriptively using IBM SPSS Statistics Version 24. Results Users interpreted PEPA as a central medium containing digital chronological health-related documentation that simplifies information sharing across care settings. While patients can envisage an implementation of PEPA in Germany in the near future, physicians are more hesitant. Both groups believe in PEPA’s concept, but share awareness of concerns about data privacy and older or impaired people’s abilities to manage online records. Patients perceive benefits for involvement in treatment processes and continuity of care but worry about financing and the implementation of functionally reduced versions. Physicians consider integration into primary systems critical for interoperability but anticipate technical challenges, as well as resistance from older patients and colleagues. They omit clear positioning regarding PEPA’s potential incremental value for health care organizations or the provider-patient relationship. Conclusions Digitalization in German health care will continue to bring change, both organizational and in the physician-patient relationship. Patients endorse and expect a nationwide PEPA implementation, anticipating various benefits. Decision makers and providers need to contribute to closing modernization gaps by committing to new concepts and by invigorating transformed roles.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Tewes, Federico R. „Artificial Intelligence in the American Healthcare Industry: Looking Forward to 2030“. Journal of Medical Research and Surgery 3, Nr. 5 (06.10.2022): 107–8. http://dx.doi.org/10.52916/jmrs224089.

Der volle Inhalt der Quelle
Annotation:
Artificial intelligence (AI) has the potential to speed up the exponential growth of cutting-edge technology, much the way the Internet did. Due to intense competition from the private sector, governments, and businesspeople around the world, the Internet has already reached its peak as an exponential technology. In contrast, artificial intelligence is still in its infancy, and people all over the world are unsure of how it will impact their lives in the future. Artificial intelligence is a field of technology that enables robots and computer programmes to mimic human intellect by teaching a predetermined set of software rules to learn through repetition from experience and to move slowly toward maximum performance. Although this intelligence is still developing, it has already demonstrated five different levels of independence: first, resolving issues; second, reasoning about solutions; third, responding to inquiries; fourth, using data analytics to generate forecasts; and fifth, making tactical recommendations. Massive data sets and "iterative algorithms," which use lookup tables and other data structures like stacks and queues to solve issues, make all of this possible. Iteration is a strategy in which software rules are repeatedly adjusted to patterns in the data for a certain number of iterations. The artificial intelligence continuously makes small, incremental improvements that result in exponential growth, which enables the computer to become incredibly proficient at whatever it is trained to do. For each round of data processing, the artificial intelligence tests and measures its performance to develop new expertise. In order to address complicated problems, artificial intelligence aims to create computer systems that can mimic human behavior and exhibit human-like thought processes [1]. Artificial intelligence technology is being developed to deliver individualized medicine in the field of healthcare. By 2030, six different artificial intelligence sectors will have considerably improved healthcare delivery through the utilization of larger, more accessible data sets. The first is machine learning. This area of artificial intelligence learns automatically and produces improved results by identifying patterns in the data, gaining new insights, and enhancing the outcomes of whatever activity the system is intended to accomplish. It does this without being trained on a particular topic. Here are several instances of machine learning in the healthcare industry. The first is IBM Watson Genomics, which aids in rapid disease diagnosis and identification by fusing cognitive computing with genome-based tumour sequencing. Second, a project based on Naïve Bayes allows the prediction of diabetes years before an official diagnosis, before it results in harm to the kidneys, the heart, and the nerves. Third, two machine learning approaches termed classification and clustering have been employed to analyse the Indian Liver Patient Dataset (ILPD) in order to predict liver illness before this organ that regulates metabolism becomes susceptible to chronic hepatitis, liver cancer, and cirrhosis [2]. Second, deep learning. Deep learning employs artificial intelligence to learn from data processing, much like machine learning does. Deep learning, on the other hand, makes use of artificial neural networks that mimic human brain function to analyse data, identify relationships between the data, and provide outputs based on positive and negative reinforcement.
For instance, in the fields of Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), deep learning aids in the processes of image recognition and object detection. Deep learning algorithms for the early identification of Alzheimer's, diabetic retinopathy, and breast nodules in ultrasound are three real-world applications of this cutting-edge technology. Future developments in deep learning will bring considerable improvements in pathology and radiology imaging [3]. Third, neural networks. The artificial intelligence system can now accept massive data sets, find patterns within the data, and respond to queries regarding the information processed because the computer learning process resembles a network of neurons in the human brain. Let's examine a few application examples that are now applicable to the healthcare sector. According to studies from Johns Hopkins University, surgical errors are a major contributor to medical malpractice claims, since they happen more than 4,000 times a year in the United States alone due to the human error of surgeons. Neural networks can be used in robot-assisted surgery to model and plan procedures, evaluate the abilities of the surgeon, and streamline surgical activities. In one study of 379 orthopaedic patients, it was discovered that robotic surgery using neural networks results in five times fewer complications than surgery performed by a single surgeon. Another application of neural networks is in visualising diagnostics, which Harvard University researchers demonstrated to physicians by inserting an image of a gorilla into x-rays. Of the radiologists who saw the images, 83% did not recognise the gorilla. The Houston Medical Research Institute has created a breast cancer early detection programme that can analyse mammograms with 99 percent accuracy and offer diagnostic information 30 times faster than a human [4]. Fourth is cognitive computing, which aims to replicate the way people and machines interact, showing how a computer may operate like the human brain when handling challenging tasks like text, speech, or image analysis. Large volumes of patient data have been analysed, with the majority of the research to date focusing on cancer, diabetes, and cardiovascular disease. Companies like Google, IBM, Facebook, and Apple have shown interest in this work. Cognitive computing made up the greatest component of the artificial intelligence market in 2020, with 39% of the total [5]. Hospitals made up 42% of the market of cognitive computing end users because of the rising demand for individualised medical data. Because it predicted the demand for cognitive computing in this sector, IBM invested more than $1 billion in 2014 in the development of the WATSON analytics platform ecosystem and in collaboration with startups committed to creating various cloud- and application-based systems for the healthcare business. Fifth is Natural Language Processing (NLP). This area of artificial intelligence enables computers to comprehend and analyse human language. The initial phase of this pre-processing is to divide the data up into more manageable semantic units, which simply makes the information easier for the NLP system to understand. Clinical trial development is experiencing exponential expansion in the healthcare sector thanks to NLP. First, NLP uses speech-to-text dictation and structured data entry to extract clinical data at the point of care, reducing the need for manual assessment of complex clinical paperwork.
Second, using NLP technology, healthcare professionals can automatically examine enormous amounts of unstructured clinical and patient data to select the most suitable patients for clinical trials, potentially leading to an improvement in the patients' health [6]. Computer vision comes in sixth. Computer vision, an essential part of artificial intelligence, uses visual data as input to process photos and videos continuously in order to get better results faster and with higher quality than would be possible if the same job were done manually. Simply put, doctors can now diagnose their patients with diseases like cancer, diabetes, and cardiovascular disorders more quickly and at an earlier stage. Here are a few examples of real-world applications where computer vision technology is making notable strides. Mammogram images are analysed by visual systems designed to spot breast cancer at an early stage. Automated cell counting is another real-world example; it dramatically reduces the human error that raises concerns about the accuracy of manual results, which can differ greatly depending on the examiner's experience and degree of focus. A third real-world application of computer vision is the quick and painless early-stage tumour detection enabled by artificial intelligence. Without a doubt, computer vision has unfathomable potential to significantly enhance how healthcare is delivered. Beyond visual data analysis, clinicians can use this technology to enhance their training and skill development. Currently, Gramener is a leading company offering computer vision solutions to medical facilities and research organisations [7]. The use of imperative rather than functional programming languages is one of the key difficulties in creating artificial intelligence software. As artificial intelligence starts to grow exponentially, developers employing imperative programming languages must assume that the machine is stupid and supply detailed instructions that are subject to a high level of maintenance and human error. In software with hundreds of thousands of lines of code, human error detection is challenging, and the substantial amount of ensuing maintenance can become prohibitively expensive, keeping the costs of research and development high. As a result, software developers have contributed to the unreasonably high cost of medical care. Functional programming languages, on the other hand, demand that the developer use their problem-solving abilities as though the computer were a mathematician: mathematical functions are orders of magnitude shorter than the imperative code needed to perform the same operation.
The bulk of software developers that use functional programming languages are well-trained in mathematical logic; thus, they reason differently than most American software developers, who are more accustomed to following step-by-step instructions. The market for artificial intelligence in healthcare is expected to increase from $3.4 billion in 2021 to at least $18.7 billion by 2027, or a 30 percent annual growth rate before 2030, according to market research firm IMARC Group. The only outstanding query is whether these operational reductions will ultimately result in less expensive therapies.
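To make the machine learning examples in this annotation concrete: the following is a minimal Python sketch of the kind of Naïve Bayes diabetes predictor the abstract alludes to. The file name, column names and split parameters are illustrative assumptions, not details from the cited project.

```python
# Minimal sketch of a Naive Bayes diabetes predictor (assumptions: a local
# "diabetes.csv" with numeric risk factors and a binary "Outcome" label).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

df = pd.read_csv("diabetes.csv")            # e.g. Pima-style data (assumed)
X = df.drop(columns=["Outcome"])            # glucose, BMI, age, ...
y = df["Outcome"]                           # 1 = diabetic, 0 = healthy

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

model = GaussianNB().fit(X_train, y_train)  # learns per-class feature distributions
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Gaussian Naïve Bayes is a natural fit here because each risk factor contributes an independent likelihood term, which keeps the model fast and interpretable even on large screening datasets.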
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Tigua Moreira, Sonia, Edison Cruz Navarrete und Geovanny Cordova Perez. „Big Data: paradigm in construction in the face of the challenges and challenges of the financial sector in the 21st century“. Universidad Ciencia y Tecnología 25, Nr. 110 (26.08.2021): 127–37. http://dx.doi.org/10.47460/uct.v25i110.485.

Der volle Inhalt der Quelle
Annotation:
The world of finance is immersed in multiple controversies, laden with contradictions and uncertainties typical of a social ecosystem, generating dynamic changes that lead to significant transformations, where the thematic discussion of Big Data becomes crucial for real-time logical decision-making. This article is situated in this field of knowledge; its general objective is to explore the strengths, weaknesses and future trends of Big Data in the financial sector, using as methodology a scientific exploration of the bibliographic tools Scopus and SciELO, with "Big Data" as the search equation, delimited to the financial sector. The findings showed the growing importance of gaining knowledge from the huge amount of financial data generated daily around the globe, developing predictive capacity towards creating scenarios inclined to find solutions and make timely decisions. Keywords: Big Data, financial sector, decision-making.
References:
[1] D. Reinsel, J. Gantz y J. Rydning, «Data Age 2025: The Evolution of Data to Life-Critical,» IDC White Paper, 2017.
[2] R. Barranco Fragoso, «Que es big data IBM Developer works,» 18 Junio 2012. [Online]. Available: https://developer.ibm.com/es/articles/que-es-big-data/.
[3] IBM, «IBM What is big data? - Bringing big data to the enterprise,» 2014. [Online]. Available: http://www.ibm.com/big-data/us/en/.
[4] IDC, «Resumen Ejecutivo - Big Data: Un mercado emergente,» Junio 2012. [Online]. Available: https://www.diarioabierto.es/wp-content/uploads/2012/06/Resumen-Ejecutivo-IDC-Big-Data.pdf.
[5] Factor Humano Formación, «Factor humano formación escuela internacional de postgrado,» 2014. [Online]. Available: http://factorhumanoformación.com/big-data-ii/.
[6] J. Luna, «Las tecnologías Big Data,» 23 Mayo 2018. [Online]. Available: https://www.teldat.com/blog/es/procesado-de-big-data-base-de-datos-de-big-data-clusters-nosql-mapreduce/.
[7] The Apache Software Foundation, «Apache Cassandra,» The Apache Cassandra project, 2015.
[8] E. Dede, B. Sendir, P. Kuzlu, J. Hartog y M. Govindaraju, «An Evaluation of Cassandra for Hadoop,» in 2013 IEEE Sixth International Conference on Cloud Computing, Santa Clara, CA, USA, 2013.
[9] The Apache Software Foundation, «Apache HBase,» 04 Agosto 2017. [Online]. Available: http://hbase.apache.org/.
[10] G. Deka, «A Survey of Cloud Database Systems,» IT Professional, vol. 16, nº 02, pp. 50-57, 2014.
[11] P. Dueñas, «Introducción al sistema financiero y bancario,» Bogotá: Politécnico Grancolombiano, 2008.
[12] V. Mesén Figueroa, «Contabilización de CONTRATOS de FUTUROS, OPCIONES, FORWARDS y SWAPS,» Tec Empresarial, vol. 4, nº 1, pp. 42-48, 2010.
[13] A. Castillo, «Cripto educación es lo que se necesita para entender el mundo de la Cripto-Alfabetización,» Noticias Artech Digital, 04 Junio 2018. [Online]. Available: https://www.artechdigital.net/cripto-educacion-cripto-alfabetizacion/.
[14] Conceptodefinicion.de, «Definicion de Cienciometría,» 16 Diciembre 2020. [Online]. Available: https://conceptodefinicion.de/cienciometria/.
[15] Elsevier, «Scopus: The Largest database of peer-reviewed literature,» 2016. [Online]. Available: https://www.elsevier.com/solutions/scopus.
[16] J. Russell, «Obtención de indicadores bibliométricos a partir de la utilización de las herramientas tradicionales de información,» in Conferencia presentada en el Congreso Internacional de información - INFO 2004, La Habana, Cuba, 2004.
[17] J. Durán, Industrialized and Ready for Digital Transformation?, Barcelona: IESE Business School, 2015.
[18] P. Orellana, «Omnicanalidad,» 06 Julio 2020. [Online]. Available: https://economipedia.com/definiciones/omnicanalidad.html.
[19] G. Electrics, «Innovation Barometer,» 2018.
[20] D. Chicoma y F. Casafranca, Interviewees, Entrevista a Daniel Chicoma y Fernando Casafranca, docentes del PADE Internacional en Gerencia de Tecnologías de la Información en ESAN. [Entrevista]. 2018.
[21] La República, «La importancia del mercadeo en la actualidad,» 21 Junio 2013. [Online]. Available: https://www.larepublica.co/opinion/analistas/la-importancia-del-mercadeo-en-la-actualidad-2041232.
[22] UNED, «Acumulación de datos y Big data: Las preguntas correctas,» 10 Noviembre 2017. [Online]. Available: https://www.masterbigdataonline.com/index.php/en-el-blog/150-el-big-data-y-las-preguntas-correctas.
[23] J. García, Banca aburrida: el negocio bancario tras la crisis económica, Fundación Funcas - economía y sociedad, 2015, pp. 101-150.
[24] G. Cutipa, «Las 5 principales ventajas y desventajas de bases de datos relacionales y no relacionales: NoSQL vs SQL,» 20 Abril 2020. [Online]. Available: https://guidocutipa.blog.bo/principales-ventajas-desventajas-bases-de-datos-relacionales-no-relacionales-nosql-vs-sql/.
[25] R. Martinez, «Jornadas Big Data ANALYTICS,» 19 Septiembre 2019. [Online]. Available: https://www.cfp.upv.es/formacion-permanente/curso/jornada-big-data-analytics_67010.html.
[26] J. Rifkin, The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era, Putnam Publishing Group, 1995.
[27] R. Conde del Pozo, «Los 5 desafíos a los que se enfrenta el Big Data,» 13 Agosto 2019. [Online]. Available: https://diarioti.com/los-5-desafios-a-los-que-se-enfrenta-el-big-data/110607.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Karunarathne, Wasana, Angela Paladino, Chris Selman, Kris Nagy, Laszlo Sajitos und Shohil Kishore. „The StatBot“. Pacific Journal of Technology Enhanced Learning 6, Nr. 1 (16.04.2024): 20–21. http://dx.doi.org/10.24135/pjtel.v6i1.193.

Der volle Inhalt der Quelle
Annotation:
Artificial Intelligence (AI) provides an opportunity for a transformative shift towards a more personalised and efficient learning environment in the contemporary education landscape (FitzGerald, 2018; Perez et al., 2020; Yang and Evans, 2019; Yin et al., 2021). This landscape is characterised by globalisation and universal education trends, which often necessitate being mindful of the challenges of managing large enrolments and diversity within student bodies. This presentation outlines the implementation and experiences of a generative AI-supported chatbot (StatBot) introduced to two cohorts of quantitative methods classes in the Faculty of Business and Economics, targeting over 2,500 students annually. Attending this presentation, participants will gain valuable insight into the effective use of AI in teaching and learning in subjects with large enrolments. The initiative aimed to enhance students' learning experience by offering personalised, subject-specific support, converting IBM Watson Assistant, renowned for its ability to process and interpret natural language queries, into an educational chatbot. The primary purpose of this AI tool was to improve students' educational experience by providing them with instant, tailored assistance that related directly to the material taught within the subject and at a time that suited the student. Recognising students' diverse needs and learning paces in a large class, the chatbot was designed to offer both administrative and conceptual support, facilitating a more inclusive and accessible learning environment. It addressed a wide range of queries, from course logistics and administrative procedures to in-depth explanations of complex concepts. It also provided a comprehensive bank of practice questions with a feedback process, specifically curated to reinforce learning and aid in consolidating knowledge. This repository enabled students to engage in self-directed learning, assess their understanding, and identify areas requiring further exploration, thus promoting a proactive and reflective learning approach. The benefits of implementing this AI tool were multifaceted. For educators, it alleviated the burden of addressing repetitive administrative and basic conceptual queries, freeing up valuable time to focus on more complex teaching and research activities. For students, the immediate and personalised nature of the support enhanced their learning experience, enabling them to navigate the course content more confidently and efficiently. The chatbot also fostered an environment of continuous learning, encouraging students to engage with the material and practice independently and actively. Integrating the chatbot into the curriculum offered a strategic educational intervention aimed at enhancing student learning and support, particularly in large undergraduate subjects. The platform's robust AI capabilities allowed the delivery of personalised learning experiences at scale, which is difficult through traditional teaching methods. Its ability to process student queries and provide immediate, accurate (verified) responses ensured that students received the support they needed when they needed it, without the constraints of office hours or limited teaching staff availability. The student feedback following the introduction of the AI-supported chatbot was overwhelmingly positive.
The tool's ability to provide instant, relevant, and personalised support was particularly appreciated, as it directly contributed to a more supportive and responsive learning environment. Moreover, the availability of a practice question bank was highlighted as a critical resource that enabled students to test their knowledge and prepare more effectively for assessments.
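For readers unfamiliar with the platform behind such a chatbot, the following is a hedged Python sketch of querying an IBM Watson Assistant skill using IBM's documented AssistantV2 SDK. The API key, service URL, assistant ID and the sample question are placeholders; the actual StatBot configuration is not described in the abstract.

```python
# Hypothetical sketch of sending a student question to a Watson Assistant
# skill; credentials and IDs below are placeholders, not StatBot's.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
assistant = AssistantV2(version="2021-06-14", authenticator=authenticator)
assistant.set_service_url("https://api.eu-gb.assistant.watson.cloud.ibm.com")

# Each student conversation runs inside a session.
session = assistant.create_session(assistant_id="ASSISTANT_ID").get_result()

response = assistant.message(
    assistant_id="ASSISTANT_ID",
    session_id=session["session_id"],
    input={"message_type": "text",
           "text": "How do I interpret a 95% confidence interval?"},
).get_result()

for item in response["output"]["generic"]:  # chatbot replies, in order
    print(item.get("text"))
```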
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

M., Rajeshwari, und Krishna Prasad K. „IBM Watson Industry Cognitive Education Methods“. International Journal of Case Studies in Business, IT, and Education, 26.04.2020, 38–50. http://dx.doi.org/10.47992/ijcsbe.2581.6942.0066.

Der volle Inhalt der Quelle
Annotation:
Data analytics converts bulk data into insights for business, healthcare, insurance and education. An upcoming development in IBM's data analytics approach to education is cognitive learning systems. Humans and machines can communicate with each other through technologies that put Natural Language Processing and Machine Learning to work together. Presently, many students struggle through their education without clear goals. In this sense, cognitive systems should improve student education and results with a customized perception of each student's learning. IBM has recently pointed its services toward education through its cognitive computing technology, IBM Watson. Such systems can provide an expert assistant to all varieties of professionals in their respective fields. Watson is also being used widely to assess student performance and to help educators in the classroom develop more constructive instructional practices for their students. It helps the teacher collect attendance and marks details and analyse an individual student's interests based on results. This industrial analysis explains the power of data analytics in the classroom: teachers can use it to assess a student's personal behaviour and as a tool to determine the student's interest in finding a better career. Based on individual student outcomes, Watson uses AI to find solutions to improve the quality and policy of education. Here, AI technology gives teachers the tools they need to be most effective and helps learners perform at the top of their abilities, for example through tutoring, childhood vocabulary development and content personalized for students based on mastery. Digital services and apps are used in learning and help improve the learning experience. This study will help readers understand how different technologies work together in predictive analytics and how to prepare reports on admissions, attendance, student dropout rates, result analysis and students' futures. The study also analyses the success rate of personalised student education using cognitive learning skills.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Piotrkowicz, Alicja, Owen Johnson und Geoff Hall. „Finding relevant free-text radiology reports at scale with IBM Watson Content Analytics: a feasibility study in the UK NHS“. Journal of Biomedical Semantics 10, S1 (November 2019). http://dx.doi.org/10.1186/s13326-019-0213-5.

Der volle Inhalt der Quelle
Annotation:
Abstract Background Significant amounts of health data are stored as free-text within clinical reports, letters, discharge summaries and notes. Busy clinicians have limited time to read such large amounts of free-text and are at risk of information overload and consequently missing information vital to patient care. Automatically identifying relevant information at the point of care has the potential to reduce these risks but represents a considerable research challenge. One software solution that has been proposed in industry is the IBM Watson analytics suite which includes rule-based analytics capable of processing large document collections at scale. Results In this paper we present an overview of IBM Watson Content Analytics and a feasibility study using Content Analytics with a large-scale corpus of clinical free-text reports within a UK National Health Service (NHS) context. We created dictionaries and rules for identifying positive incidence of hydronephrosis and brain metastasis from 5.6 m radiology reports and were able to achieve 94% precision, 95% recall and 89% precision, 94% recall respectively on a sample of manually annotated reports. With minor changes for US English we applied the same rule set to an open access corpus of 0.5 m radiology reports from a US hospital and achieved 93% precision, 94% recall and 84% precision, 88% recall respectively. Conclusions We were able to implement IBM Watson within a UK NHS context and demonstrate effective results that could provide clinicians with an automatic safety net which highlights clinically important information within free-text documents. Our results suggest that currently available technologies such as IBM Watson Content Analytics already have the potential to address information overload and improve clinical safety and that solutions developed in one hospital and country may be transportable to different hospitals and countries. Our study was limited to exploring technical aspects of the feasibility of one industry solution and we recognise that healthcare text analytics research is a fast-moving field. That said, we believe our study suggests that text analytics is sufficiently advanced to be implemented within industry solutions that can improve clinical safety.
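The ICA rules themselves are configured inside the product rather than written as code, but the following toy Python sketch illustrates the dictionary-plus-context idea the study describes. The term list, the negation cues and the 40-character context window are illustrative assumptions, not the study's actual rule set.

```python
# Toy analogue of dictionary-based finding detection with simple negation
# handling; all terms and rules here are illustrative assumptions.
import re

NEGATIONS = ["no evidence of", "no ", "without ", "resolved"]

def positive_incidence(report: str, term: str) -> bool:
    """True if the term occurs outside an (approximate) negated context."""
    text = report.lower()
    for match in re.finditer(re.escape(term), text):
        window = text[max(0, match.start() - 40):match.start()]
        if not any(neg in window for neg in NEGATIONS):
            return True
    return False

reports = ["No evidence of hydronephrosis.",
           "Moderate left-sided hydronephrosis is present."]
print([positive_incidence(r, "hydronephrosis") for r in reports])
# -> [False, True]
```

A real rule set would of course need many more dictionary variants and parsing rules, which is exactly what the study's precision and recall figures measure.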
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Stewart, Brenton, und Jesse J. Walker. „Twitter and the Lack of a Participatory Culture in American College Libraries“. Proceedings of the Annual Conference of CAIS / Actes du congrès annuel de l'ACSI, 15.08.2018. http://dx.doi.org/10.29173/cais1033.

Der volle Inhalt der Quelle
Annotation:
This study is a social media analysis of the use of Twitter at Historically Black College and University libraries in the United States. Researchers have begun examining how libraries use social media; however, the vast majority of these studies are situated at large flagship research-intensive universities. We leverage the IBM Watson analytic engine to systematically examine over 23,000 tweets for propagation and sentiment in order to assess follower engagement. The analysis found little evidence of follower engagement with library-generated content. However, we observed that a substantial volume of library tweets coalesced around institutional boosterism rather than library-related phenomena.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Lacey, Arron S., Beata Fonferko-Shadrach, Ronan A. Lyons, Mike P. Kerr, David V. Ford, Mark I. Rees und Owen W. Pickrell. „Obtaining structured clinical data from unstructured data using natural language processing software“. International Journal of Population Data Science 1, Nr. 1 (19.04.2017). http://dx.doi.org/10.23889/ijpds.v1i1.381.

Der volle Inhalt der Quelle
Annotation:
Abstract Background: Free text documents in healthcare settings contain a wealth of information not captured in electronic healthcare records (EHRs). Epilepsy clinic letters are an example of an unstructured data source containing a large amount of intricate disease information. Extracting meaningful and contextually correct clinical information from free text sources, to enhance EHRs, remains a significant challenge. SCANR (Swansea University Collaborative in the Analysis of NLP Research) was set up to use natural language processing (NLP) technology to extract structured data from unstructured sources. IBM Watson Content Analytics software (ICA) uses NLP technology. It enables users to define annotations based on dictionaries and language characteristics to create parsing rules that highlight relevant items. These include clinical details such as symptoms and diagnoses, medication and test results, as well as personal identifiers. Approach: To use ICA to build a pipeline to accurately extract detailed epilepsy information from clinic letters. Methods: We used ICA to retrieve important epilepsy information from 41 pseudo-anonymized unstructured epilepsy clinic letters. The 41 letters consisted of 13 ‘new’ and 28 ‘follow-up’ letters (for 15 different patients) written by 12 different doctors in different styles. We designed dictionaries and annotators to enable ICA to extract epilepsy type (focal, generalized or unclassified), epilepsy cause, age of onset, investigation results (EEG, CT and MRI), medication, and clinic date. Epilepsy clinicians assessed the accuracy of the pipeline. Results: The accuracy (sensitivity, specificity) of each concept was: epilepsy diagnosis 98% (97%, 100%), focal epilepsy 100%, generalized epilepsy 98% (93%, 100%), medication 95% (93%, 100%), age of onset 100% and clinic date 95% (95%, 100%). Precision and recall for each concept were, respectively, 98% and 97% for epilepsy diagnosis, 100% each for focal epilepsy, 100% and 93% for generalized epilepsy, 100% each for age of onset, 100% and 93% for medication, 100% and 96% for EEG results, 100% and 83% for MRI scan results, and 100% and 95% for clinic date. Conclusions: ICA is capable of extracting detailed, structured epilepsy information from unstructured clinic letters to a high degree of accuracy. This data can be used to populate relational databases and be linked to EHRs. Researchers can build in custom rules to identify concepts of interest from letters and produce structured information. We plan to extend our work to hundreds and then thousands of clinic letters, to provide phenotypically rich epilepsy data to link with other anonymised, routinely collected data.
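For illustration only, here is a regular-expression sketch in Python of the kind of structured extraction the pipeline performs. The real work used ICA dictionaries and annotators; these patterns and the sample letter are assumptions.

```python
# Illustrative-only extraction of structured epilepsy fields from free text;
# the patterns and sample letter are assumptions, not the study's rule set.
import re

letter = ("Diagnosis: focal epilepsy. Age of onset: 14 years. "
          "Current medication: lamotrigine 200 mg twice daily.")

epilepsy_type = re.search(
    r"\b(focal|generalised|generalized|unclassified)\s+epilepsy\b",
    letter, re.IGNORECASE)
onset = re.search(r"age of onset[:\s]+(\d{1,2})", letter, re.IGNORECASE)
medication = re.search(r"medication[:\s]+([a-z]+)", letter, re.IGNORECASE)

record = {
    "epilepsy_type": epilepsy_type.group(1).lower() if epilepsy_type else None,
    "age_of_onset": int(onset.group(1)) if onset else None,
    "medication": medication.group(1) if medication else None,
}
print(record)
# {'epilepsy_type': 'focal', 'age_of_onset': 14, 'medication': 'lamotrigine'}
```

Records in this shape can then populate the relational databases and EHR linkages the conclusions describe.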
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Niggli, C., H. C. Pape und L. Mica. „Validation of a visual-based analytics tool to predict outcomes of polytrauma patients: The IBM WATSON trauma pathway explorer“. British Journal of Surgery 108, Supplement_4 (01.05.2021). http://dx.doi.org/10.1093/bjs/znab202.051.

Der volle Inhalt der Quelle
Annotation:
Abstract Objective Big data-based artificial intelligence (AI) is on its way to becoming part of daily clinical life, and its reasonable application could help to improve disease or injury outcomes. A visual polytrauma analytics tool based on IBM WATSON was developed and described in a previous publication. The present article relates to the validation of the IBM WATSON Trauma Pathway Explorer. Methods A retrospective prediction model validation in a level I trauma center, including 107 patients with an Injury Severity Score (ISS) ≥16 and age ≥16, was performed. Age, ISS, temperature and the presence of head injury were the predictors used to validate the following three outcomes: SIRS and sepsis within 21 days of the patient's admission, as well as early death within 72 hours of admission. The area under the receiver operating characteristic (ROC) curve was used to determine predictive quality. Calibration plots showed the graphical goodness of fit. The Brier score assessed the overall performance of the models. Results The area under the curve (AUC) is 0.77 (95% CI: 0.679-0.851) for SIRS, 0.71 (95% CI: 0.578-0.831) for sepsis and 0.90 (95% CI: 0.786-0.987) for early death. The Brier scores are as follows: early death 0.06, sepsis 0.12 and SIRS 0.15. Conclusion The validation has shown that the predictive performance of WATSON for SIRS and early death corresponds to the clinical outcome in nearly 80% and 90% of cases, respectively. The concordance for sepsis was modest, at over 70% of cases. This visual analytics tool for polytrauma patients can be used to obtain valid predictions for SIRS, sepsis and early death. Here, we present a possible working variant of AI in trauma surgery.
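The reported discrimination and overall-performance metrics are straightforward to reproduce for any probabilistic model. A minimal Python sketch with scikit-learn, using made-up outcome and probability arrays in place of the study's 107-patient data:

```python
# Minimal sketch of the validation metrics (AUC, Brier score); the arrays
# below are made-up stand-ins for observed outcomes and model probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])    # e.g. early death yes/no
y_prob = np.array([0.05, 0.10, 0.80, 0.20, 0.65,
                   0.15, 0.08, 0.90, 0.30, 0.12])     # predicted probabilities

print("AUC:  ", roc_auc_score(y_true, y_prob))        # discrimination
print("Brier:", brier_score_loss(y_true, y_prob))     # overall performance
```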
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

„Predictive Analysis of Cryptocurrencies for Developing an Interactive Cryptocurrency Chatbot using IBM Watson Assistant“. VOLUME-8 ISSUE-10, AUGUST 2019, REGULAR ISSUE 8, Nr. 10 (10.08.2019): 436–47. http://dx.doi.org/10.35940/ijitee.i8485.0881019.

Der volle Inhalt der Quelle
Annotation:
The main objective of this paper is to analyze the characteristics and features that affect the fluctuations of cryptocurrency prices and to develop an interactive cryptocurrency chatbot for providing predictive analysis of cryptocurrency prices. The chatbot is developed using the IBM Watson Assistant service. The predictive analytics is performed by analyzing the datasets of various cryptocurrencies and applying appropriate time series models. Time series forecasting is used to predict the future values of the prices. Predictive models such as ARIMA are used for calculating the mean squared error of the fitted model. Facebook's Prophet package, which implements a procedure for forecasting time series data based on an additive model in which non-linear trends are fit with yearly and weekly seasonality, is further used to predict cryptocurrency prices.
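A minimal sketch of the Prophet forecasting step in Python, assuming a CSV of daily closing prices; the file and column names are hypothetical and not taken from the paper.

```python
# Sketch of Prophet-based price forecasting; "btc_daily.csv" and its
# column names are assumptions for illustration.
import pandas as pd
from prophet import Prophet   # pip install prophet (formerly fbprophet)

prices = pd.read_csv("btc_daily.csv")          # columns: date, close (assumed)
df = prices.rename(columns={"date": "ds", "close": "y"})[["ds", "y"]]

m = Prophet(yearly_seasonality=True, weekly_seasonality=True)
m.fit(df)                                      # fit the additive model

future = m.make_future_dataframe(periods=30)   # forecast 30 days ahead
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```

The `yhat_lower`/`yhat_upper` columns give the uncertainty interval a chatbot could report alongside the point forecast.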
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

„Prediction of Telecom Churn using Comparative Analysis of Three Classifiers of Artificial Neural Network“. Regular 9, Nr. 10 (10.08.2020): 17–20. http://dx.doi.org/10.35940/ijitee.j7339.0891020.

Der volle Inhalt der Quelle
Annotation:
The purpose of this study is to evaluate existing individual neural network-based classifiers and compare performance measurements to improve the accuracy of churn predictions. The data sets used in this paper relate to telecom churn and are available with IBM Watson Analytics in the IBM community. This study uses three ANN classifiers and a split-validation operator on one data set to predict the churn of communication services. Applying different classification techniques to the different classifiers achieves accuracies of 75.63% for deep learning, 77.63% for perceptron, and 77.95% for AutoMLP. With a limited set of features, including customer information, this study compares the ANN classifiers to derive the best-performing model. In particular, the study provides telecom service companies with practical implications for managing potential churn and improving revenue.
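As a hedged sketch of one of the compared neural classifiers, here is a small multilayer perceptron on the widely mirrored IBM telco churn data in Python. The file name, preprocessing and network size are assumptions; the paper's RapidMiner-style perceptron and AutoMLP operators are not reproduced here.

```python
# Sketch of a small MLP churn classifier with split validation; the CSV
# name and all hyperparameters are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("WA_Fn-UseC_-Telco-Customer-Churn.csv")   # assumed local copy
df["TotalCharges"] = pd.to_numeric(df["TotalCharges"], errors="coerce").fillna(0)
y = (df["Churn"] == "Yes").astype(int)
X = pd.get_dummies(df.drop(columns=["customerID", "Churn"]), drop_first=True)

# Split validation, echoing the study's setup
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=1)
clf.fit(scaler.transform(X_train), y_train)
print("accuracy:", clf.score(scaler.transform(X_test), y_test))
```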
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Niggli, C., H. C. Pape und L. Mica. „IBM WATSON Trauma Pathway Explorer outperforms the TRISS score to predict early death after polytrauma“. British Journal of Surgery 108, Supplement_4 (01.05.2021). http://dx.doi.org/10.1093/bjs/znab202.052.

Der volle Inhalt der Quelle
Annotation:
Abstract Objective In recent years, several big data-based artificial intelligence (AI) systems have found their way into health care, one of which we present here: the IBM WATSON Trauma Pathway Explorer, a visual analytics tool to predict early death in polytrauma patients. The aim of this study was to compare the predictive performance of the Trauma Pathway Explorer for early in-hospital mortality with an established trauma scoring system, the Trauma and Injury Severity Score (TRISS). Methods A retrospective comparative accuracy study in a level I trauma center, including patients with an Injury Severity Score (ISS) ≥16 and age ≥16, was performed. The compared outcome was early death within 72 hours of the patient's admission. The area under the receiver operating characteristic curve (AUC) was used to measure discrimination. Hosmer-Lemeshow statistics were calculated to analyse the calibration of the two predictive models. The Brier score assessed their overall performance. Results The cohort included 107 polytrauma patients with a death rate of 10.3% at 72 hours after admission. The Trauma Pathway Explorer and TRISS showed similar AUCs for predicting early death (AUC 0.90, 95% CI 0.79-0.99 vs. AUC 0.88, 95% CI 0.77-0.97; p = 0.75). The calibration of the Trauma Pathway Explorer was superior to that of TRISS (chi-squared 8.19, Hosmer-Lemeshow p = 0.42 vs. chi-squared 31.93, Hosmer-Lemeshow p < 0.05). The Trauma Pathway Explorer had a lower Brier score than TRISS (0.06 vs. 0.11). Conclusion The IBM WATSON Trauma Pathway Explorer showed results equal to TRISS in discrimination but outperformed it in calibration. In addition to providing a prediction of early death, this visual analytics tool for polytrauma patients can also show the quantitative flow of patient sub-cohorts through different events, such as coagulopathy, hemorrhagic shock class, surgical strategy and the above-mentioned outcome. Here, we present an accurate and valid alternative to TRISS for predicting early death in polytrauma patients.
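Of the statistics compared here, the Hosmer-Lemeshow test is the least commonly packaged in standard libraries. Below is a minimal Python sketch of how it can be computed under the usual decile-of-risk formulation, which is an assumption, since the abstract does not state the binning; the data are random placeholders.

```python
# Sketch of the Hosmer-Lemeshow calibration statistic over deciles of
# predicted risk; inputs below are random placeholders, not study data.
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, groups=10):
    """Chi-squared statistic and p-value over risk-ordered bins."""
    order = np.argsort(y_prob)
    y_true = np.asarray(y_true)[order]
    y_prob = np.asarray(y_prob)[order]
    stat = 0.0
    for obs, pred in zip(np.array_split(y_true, groups),
                         np.array_split(y_prob, groups)):
        n = len(obs)
        expected = pred.sum()                  # expected events in the bin
        if 0 < expected < n:                   # skip degenerate bins
            stat += (obs.sum() - expected) ** 2 / (expected * (1 - expected / n))
    p_value = 1 - chi2.cdf(stat, df=groups - 2)
    return stat, p_value

rng = np.random.default_rng(0)
p = rng.uniform(0.01, 0.6, 107)                # 107 patients, as in the study
y = rng.binomial(1, p)
print(hosmer_lemeshow(y, p))
```

A small chi-squared value with a non-significant p-value (as for the Trauma Pathway Explorer) indicates that predicted and observed event rates agree across risk bins.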
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Suavo-Bulzis, Paola, Federica Albanese, Davide Mallardi, Francesco Saverio Debitonto, Ruggero Lemma, Annalisa Granatiero, Marisa Spadavecchia et al. „P0119ARTIFICIAL INTELLIGENCE IN RENAL PATHOLOGY: IBM WATSON FOR THE IDENTIFICATION OF GLOMERULOSCLEROSIS“. Nephrology Dialysis Transplantation 35, Supplement_3 (01.06.2020). http://dx.doi.org/10.1093/ndt/gfaa142.p0119.

Der volle Inhalt der Quelle
Annotation:
Abstract Background and Aims Artificial Intelligence (AI) is the branch of computer technology aimed at creating hardware and software systems that solve problems in a way similar to human intelligence, whereas machine learning (ML) specifically is the “field of study that gives computers the ability to learn without being explicitly programmed”. Our study was carried out taking advantage of the Watson Visual Recognition System by IBM, an advanced AI tool based on ML that is able to classify complex visual content. The aim of the study was to train and test the ability of the system to recognize sclerotic and non-sclerotic glomeruli. Method A dataset of 26 renal biopsies performed at the Nephrology Unit of the Department of Emergency and Organ Transplantation (DETO), University of Bari, Italy, was used for the analysis. All biopsies were stained with Periodic Acid-Schiff (PAS) staining. Each bioptic section was acquired using an Aperio ScanScope; glomeruli were identified, and sclerotic glomeruli were marked in yellow, non-sclerotic ones in green. Annotations were validated by two renal pathologists. The final dataset consisted of 2772 glomeruli: 428 with sclerosis and 2344 with no sclerosis (ratio 1/5.5). The dataset was divided into three parts: a training set (about 70% of the entire dataset), a validation set (about 10%) and a test set (about 20%). The Watson Visual Recognition Service is customizable, and the system can be trained to recognize any visual content. Classifiers are created and trained by uploading both positive (in our study, sclerotic glomeruli) and negative (non-sclerotic glomeruli) classes. The IBM Watson learning algorithm is not open; therefore, in order to improve the performance of the system, it is necessary to train it with different models and choose the best one. Results We created all the possible models derived from the arrangement of the following 4 variables: color of the image (PAS staining or grey scale), dimension (original or resized), number of images used to balance the positive and negative classes, and binary (one class containing sclerotic glomeruli) or multi-class (two classes: sclerotic and non-sclerotic glomeruli). Every test had a cut-off of 0.5: if the score was >0.5, the system considered the glomerulus as belonging to the tested class. After validating all the models with all the variables considered, the best-performing model was the following: grey-scaled, resized and multi-class. This model was then tested with different numbers of input images (300, 600, 900, 1200, and 1600). The models with the most numerous datasets were created using data augmentation, a technique that virtually increases the number of available samples. The results show that the use of larger input datasets does not yield linearly better performance. In fact, models 900 and 1200 had worse performances than the other models; the best performance of the system was reached with model 1600, both in recognizing sclerotic (positive class) and non-sclerotic (negative class) glomeruli (Table 1, Figures 1 and 2). Conclusion In our study, renal biopsy images were analyzed and classified using the IBM Watson Visual Recognition tool, which was able to distinguish automatically and with very high accuracy between sclerotic and non-sclerotic glomeruli. This study focuses only on the glomerular compartment; the next step will be the recognition of intermediate lesions and other portions of renal tissue. Our results prove the potential of AI and ML techniques in supporting the activity of renal pathologists.
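The Watson Visual Recognition training itself happens server-side and its algorithm is not open, but the data-augmentation step the authors mention can be sketched locally. The following Python sketch uses Keras' image generator; the directory layout and parameters are assumptions, not the study's actual pipeline.

```python
# Sketch of the data-augmentation step that virtually enlarges the
# glomeruli training set; paths and parameters are assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augment = ImageDataGenerator(
    rotation_range=90,          # glomeruli have no canonical orientation
    horizontal_flip=True,
    vertical_flip=True,
    zoom_range=0.1,
    rescale=1.0 / 255,
)

# Yields endlessly varied versions of the sclerotic/non-sclerotic crops.
batches = augment.flow_from_directory(
    "glomeruli/train",          # subfolders: sclerotic/, non_sclerotic/ (assumed)
    target_size=(224, 224),
    batch_size=32,
    class_mode="binary",
)
```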
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Ferguson, Sarah Lord, Christine Pitt und Leyland Pitt. „Using artificial intelligence to examine online patient reviews“. Journal of Health Psychology, 17.04.2020, 135910532091395. http://dx.doi.org/10.1177/1359105320913954.

Der volle Inhalt der Quelle
Annotation:
Healthcare consumers are increasingly turning to online sources such as educational websites, forums and social media platforms to share their experiences with medical services and to demystify the uncertainties associated with undergoing various procedures. This study demonstrates a non-invasive way of understanding the feelings and emotions that consumers share via electronic word of mouth. By using IBM Watson, a content analysis tool that harnesses artificial intelligence, we show how a large amount of unstructured qualitative data can be transformed into quantitative data that can be subsequently analysed to generate novel insights into what patients are sharing about their healthcare experiences online.
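As an illustration of how such feelings and emotions can be scored programmatically, here is a hedged Python sketch of analyzing one patient review with Watson Natural Language Understanding, following IBM's documented NaturalLanguageUnderstandingV1 SDK; the credentials, URL and sample review are placeholders, and this is not necessarily the exact configuration the authors used.

```python
# Hedged sketch of scoring one review with Watson NLU; credentials and
# the sample review are placeholders.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    Features, EmotionOptions, SentimentOptions)
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

nlu = NaturalLanguageUnderstandingV1(
    version="2022-04-07",
    authenticator=IAMAuthenticator("YOUR_API_KEY"))
nlu.set_service_url(
    "https://api.us-south.natural-language-understanding.watson.cloud.ibm.com")

review = ("The staff were wonderful but the wait before my procedure "
          "was terrifying.")
result = nlu.analyze(
    text=review,
    features=Features(sentiment=SentimentOptions(),
                      emotion=EmotionOptions()),
).get_result()

print(result["sentiment"]["document"])            # polarity label and score
print(result["emotion"]["document"]["emotion"])   # joy, fear, sadness, anger, disgust
```

Run over thousands of reviews, these per-document scores become the quantitative data the abstract describes.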
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Choi, Youngkeun, und Jae Won Choi. „A study of job involvement prediction using machine learning technique“. International Journal of Organizational Analysis ahead-of-print, ahead-of-print (24.08.2020). http://dx.doi.org/10.1108/ijoa-05-2020-2222.

Der volle Inhalt der Quelle
Annotation:
Purpose Job involvement can be linked with important work outcomes. One way for organizations to increase job involvement is to use machine learning technology to predict employees' job involvement, so that their human resource (HR) leaders can take proactive measures or plan succession for retention. This paper aims to develop a reliable job involvement prediction model using machine learning techniques. Design/methodology/approach This study used a data set available from International Business Machines (IBM) Watson Analytics in the IBM community and applied a generalized linear model (GLM) comprising linear regression and binomial classification. The study had two primary approaches: first, to better understand the role of variables in job involvement prediction modeling; second, to evaluate the predictive performance of the GLM, including linear regression and binomial classification. Findings First, employees' job involvement can be predicted from a range of individual factors. Second, each model showed outstanding predictive performance. Practical implications The preprocessing and modeling methodology used in this paper can be viewed as a roadmap for the reader to follow the steps taken in this study and to apply the procedures to identify the causes of many other HR management problems. Originality/value This paper is the first to attempt to come up with the best-performing model for predicting job involvement based on a limited set of features, including employees' demographics, using machine learning techniques.
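A minimal Python sketch of the paper's two GLM variants, linear regression on the raw involvement score and logistic (binomial) classification on a binarized version, using the widely mirrored IBM HR data set. The file name, the binarization threshold and the feature handling are assumptions, not the paper's exact setup.

```python
# Sketch of the two GLM variants; file name, threshold and preprocessing
# are assumptions.
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")   # assumed local copy
y_score = df["JobInvolvement"]                # ordinal 1-4 in this data set
y_class = (y_score >= 3).astype(int)          # "involved" vs not (assumption)
X = pd.get_dummies(df.drop(columns=["JobInvolvement"]), drop_first=True)

X_tr, X_te, ys_tr, ys_te, yc_tr, yc_te = train_test_split(
    X, y_score, y_class, test_size=0.3, random_state=0)

print("R^2:", LinearRegression().fit(X_tr, ys_tr).score(X_te, ys_te))
print("acc:", LogisticRegression(max_iter=2000).fit(X_tr, yc_tr).score(X_te, yc_te))
```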
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Chengathir, Selvi M., T. Bhuvaneswari, J. Maruthupandi und Priyadarsini R. Naga. „Prediction of dengue using data mining classification algorithms“. International journal of health sciences, 25.05.2022. http://dx.doi.org/10.53730/ijhs.v6ns1.7907.

Der volle Inhalt der Quelle
Annotation:
Dengue is a life-threatening disease prevalent in several developed as well as developing countries, including India. It is a virus-borne disease spread by the breeding of the Aedes mosquito. The datasets available for dengue describe information about patients suffering from the disease, covering attributes such as fever temperature, WBC count, platelet count, severe headache, vomiting, metallic taste, joint pain, appetite, diarrhea, hematocrit, hemoglobin, and the number of days of illness in different cities. The main objective of this paper is to classify dengue data, assist users in extracting useful information from the data, and easily identify a suitable algorithm for an accurate predictive model. The proposed system determines the prediction of dengue disease and its accuracy using different classification algorithms to find the best performer. Data mining is a well-known technique used by health organizations for the classification of diseases such as dengue, diabetes and cancer in bioinformatics research. IBM Watson Analytics is used to analyze the influence of different parameters on the given data set. In the proposed approach, R programming is used to evaluate the data and compare results.
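The classifier-comparison step the annotation describes can be sketched in a few lines; although the paper itself evaluates data in R, the sketch below is in Python for consistency with the other examples in this list, and the file and column names are hypothetical.

```python
# Sketch comparing several classifiers on a dengue data set via
# cross-validation; "dengue.csv" and its columns are assumptions.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv("dengue.csv")                       # assumed
X = df.drop(columns=["Dengue"]).select_dtypes("number")
y = df["Dengue"]                                     # 1 = confirmed case

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision tree", DecisionTreeClassifier(random_state=0)),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    scores = cross_val_score(clf, X, y, cv=5)        # 5-fold accuracy
    print(f"{name}: {scores.mean():.3f}")
```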
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Yong, Winston, und Anya Kundakchian. „Critical Care Equipment Management Reimagined in an Emergency“. Blockchain in Healthcare Today, 17.12.2020. http://dx.doi.org/10.30953/bhty.v3.146.

Der volle Inhalt der Quelle
Annotation:
Summary: The COVID-19 pandemic created a surge in demand for critical care equipment against a backdrop of fast-moving geographic virus hotspots. A team from IBM Europe was put together to prove that a devolved healthcare system can be rapidly bridged by a mix of advanced and legacy technologies to provide a federated view of critical care equipment deployment and use during an emergency. This was achieved by deploying predictive analytics and blockchain, integrated with a conventional hospital management system. A corollary investigation determined how this system could be harnessed in a post-emergency recovery to provide a national supply chain efficiency backbone. Method: Over a period of 2 weeks, a team of IBM consultants set up a technology sandbox environment representing a network of an equipment manufacturer, a central national emergency monitoring center, and several hospitals managed by their respective trust organizations. Within this environment, a hospital asset management system, Maximo, was configured to manage and track critical care equipment within a hospital; a blockchain traceability platform, IBM's Blockchain Transparency System, was configured to ingest multiple hospital data reports; and a predictive analytics dashboard, Watson Analytics, retrieved data from the blockchain platform to supplement other data sources, providing national views and supporting decision-making for the supply and movement of equipment. The three key principles in the design of this environment were speed, reuse, and minimal intrusion. Results: The hypothesis was that the chosen technologies could overcome the challenges of misaligned demand and supply of critical care equipment during a national emergency. The execution of the tests led to the successful simulation of three scenarios: (1) tracking the location and usage history of any single piece of equipment placed into the network; (2) recording and reporting the movement of equipment between independent hospitals; (3) real-time interrogation of the current location and status of all registered equipment. Conclusions: The successful completion of this proof of concept demonstrated that emerging technology can be used to overcome poor macro-level coordination and planning, which are the drawbacks of a devolved healthcare system. As a corollary, the proof also demonstrated that blockchain technology can be used to prolong the useful life of conventional technology.
APA, Harvard, Vancouver, ISO and other citation styles
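To make the traceability idea concrete, here is a toy, self-contained sketch of an append-only, hash-chained event log for critical care equipment. It illustrates the general technique only; it is not the API of IBM's Blockchain Transparency System, Maximo, or Watson Analytics, and all identifiers are made up.

    # Toy sketch of blockchain-style equipment traceability: an append-only,
    # hash-chained log of movement events. Not any real IBM product API.
    import hashlib
    import json
    import time

    class EquipmentLedger:
        def __init__(self):
            self.chain = []

        def record_event(self, asset_id, hospital, status):
            # Each record embeds the hash of the previous record.
            prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
            event = {"asset_id": asset_id, "hospital": hospital,
                     "status": status, "ts": time.time(), "prev": prev_hash}
            event["hash"] = hashlib.sha256(
                json.dumps(event, sort_keys=True).encode()).hexdigest()
            self.chain.append(event)

        def history(self, asset_id):
            # Real-time interrogation of one asset's location/usage history.
            return [e for e in self.chain if e["asset_id"] == asset_id]

    ledger = EquipmentLedger()
    ledger.record_event("VENT-0042", "Hospital A", "in use")       # made-up IDs
    ledger.record_event("VENT-0042", "Hospital B", "transferred")
    print(ledger.history("VENT-0042"))

Because every record embeds the hash of its predecessor, tampering with any historic movement record invalidates the rest of the chain, which is the property a federated view of equipment deployment relies on.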
48

Leonenko, E. A., and S. V. Kunev. "DIGITAL TECHNOLOGIES AND THE PROCESS FRAMEWORK IN THE STUDY OF THE COMPANY'S MARKETING ENVIRONMENT". Scientific Review Theory and Practice, 2020, 2627–41. http://dx.doi.org/10.35679/2226-0226-2020-10-11-2627-2641.

Full text of the source
Annotation:
The use of digital tools is currently expanding across many areas of business, with the coronavirus pandemic acting as a catalyst for these processes. However, the changes in the pace of adoption of digital tools are unlikely to reverse and may even intensify as quarantine measures are relaxed. The marketing research market has long offered full-fledged software products (ALS Base, Marketing Analytic, Marketing Expert, Power analysis, BEST Marketing, etc.) used in the marketing or planning departments of organizations, as well as digital online services (Google Analytics, Yandex.Metrica, IBM Watson Analytics), smartphone apps, etc. All of these digital tools rely on a digital footprint, which, given the still limited presence of Russian companies on the Internet, restricts their use for analytical purposes. Another trend is the adoption of a process framework known as Scrum, a set of principles and tools most often used in IT development. With it, a developer can deliver a workable product in short, fixed-length iterations called sprints; product capabilities are defined during sprint planning. The short iterations make development predictable while keeping the process flexible. The effectiveness of these tools has already been confirmed in practice under pandemic conditions and falling markets, which, according to experts, is the main driver of their growing demand among businesses.
APA, Harvard, Vancouver, ISO and other citation styles
49

Ji, Jiaojiao, Hongchao Hu, and Shitong Wei. "YouTube Comments on Gene-Edited Babies: What Factors Affect Diverse Opinions in Comments?" Social Science Computer Review, 03.03.2022, 089443932110731. http://dx.doi.org/10.1177/08944393211073164.

Full text of the source
Annotation:
This study explored the factors that influence video popularity and opinion diversity in the comments on YouTube videos about gene-edited babies. The 107 most-viewed videos on the topic and their 56,912 direct comments were collected from YouTube. We examined how uploader characteristics, delivery styles, video tones, and video frames affect opinion diversity, measured by sentiment polarity, sentiment divergence, and the number of topics in the commentary. The sentiments and topics of comments were analyzed using IBM Watson Natural Language Understanding (a minimal sketch of such a call follows this entry). We found that the most effective videos tend to be relatively long videos from user-created channels with large subscriber counts, presented in a neutral tone, which are more likely to provide unbiased and comprehensive knowledge about gene-edited babies. Based on these findings, we offer suggestions for viewers on how to pick high-quality content and insights for content creators on how to create compelling videos.
APA, Harvard, Vancouver, ISO and other citation styles
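For readers curious what such comment analysis can look like in code, below is a minimal sketch of a document-level sentiment call using the ibm-watson Python SDK. The API key, service URL, and version date are placeholders, and the study's actual pipeline (batch processing of 56,912 comments plus topic extraction) is not reproduced here.

    # Minimal sketch: score one comment's sentiment with Watson NLU.
    from ibm_watson import NaturalLanguageUnderstandingV1
    from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    # Placeholder credentials -- substitute your own IBM Cloud values.
    authenticator = IAMAuthenticator("YOUR_API_KEY")
    nlu = NaturalLanguageUnderstandingV1(version="2022-04-07",
                                         authenticator=authenticator)
    nlu.set_service_url("YOUR_SERVICE_URL")

    # The document-level sentiment score lies in [-1, 1].
    response = nlu.analyze(
        text="This experiment crossed an ethical line.",
        features=Features(sentiment=SentimentOptions())).get_result()
    print(response["sentiment"]["document"]["score"])

Running each comment through a call like this yields the per-comment polarity scores from which sentiment divergence across a video's commentary can then be computed.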
50

"Enhancing Natural Language Processing (NLP) through VIKOR Method: A Comprehensive Approach for Improved Computational Linguistics". Computer Science, Engineering and Technology 2, No. 1 (15.03.2024): 21–32. http://dx.doi.org/10.46632/cset/2/1/4.

Full text of the source
Annotation:
Natural Language Processing (NLP) is a discipline at the crossroads of computer science and linguistics whose primary emphasis is the interaction between computers and human language. With roots in artificial intelligence, NLP seeks to equip machines with the ability to comprehend, interpret, and respond to natural language, enabling more intuitive and meaningful communication between humans and computers. This multidisciplinary domain leverages advanced algorithms and models to tackle a range of linguistic challenges, from language translation and sentiment analysis to speech recognition. Recent breakthroughs, particularly in deep learning and neural networks, have propelled NLP to new heights, with applications spanning sectors such as healthcare, finance, and education. The significance of NLP research lies in its capacity to transform human-computer interaction: it enables advances in sentiment analysis, language translation, and conversational AI, improves the efficiency of information retrieval and processing, and underpins more intuitive, user-friendly technologies that bridge the communication gap between humans and machines. VIKOR is a multi-criteria decision-making technique that identifies the best alternative by its closeness to the ideal solution; the ranking procedure is based on distances from that ideal, computed with linear normalization. First presented by Opricovic in 1998 for optimizing complex multi-attribute systems, VIKOR produces ranking lists that allow flexibility in the criterion weights and incorporate compromise alternatives to obtain the desired results. The alternatives evaluated are OpenAI GPT-4, Google BERT, Microsoft Azure Text Analytics, IBM Watson Natural Language Understanding, spaCy, NLTK (Natural Language Toolkit), and Amazon Comprehend. The evaluation parameters are sentiment analysis (F1 score), NER performance, language support, processing speed (response time, in ms), translation accuracy (BLEU score), and data privacy. The results show that Amazon Comprehend ranks first, while IBM Watson Natural Language Understanding ranks lowest. (A minimal sketch of the VIKOR computation follows this entry.)
APA, Harvard, Vancouver, ISO and other citation styles
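The VIKOR computation itself is compact enough to sketch. The following is a minimal implementation under stated assumptions: equal criterion weights, all six criteria rescaled so that higher is better, and a randomly generated placeholder decision matrix standing in for the paper's actual scores for the seven tools.

    # Minimal VIKOR sketch. Rows = 7 alternatives (NLP tools), columns = 6
    # criteria; matrix values below are placeholders, not the paper's data.
    import numpy as np

    def vikor(X, w, v=0.5):
        # Best (f*) and worst (f-) value of each criterion.
        f_best, f_worst = X.max(axis=0), X.min(axis=0)
        # Linearly normalized distances to the ideal solution.
        d = (f_best - X) / (f_best - f_worst)
        S = (w * d).sum(axis=1)      # group utility
        R = (w * d).max(axis=1)      # individual regret
        # Compromise index Q; v weights group utility vs. regret.
        Q = (v * (S - S.min()) / (S.max() - S.min())
             + (1 - v) * (R - R.min()) / (R.max() - R.min()))
        return np.argsort(Q)         # indices of alternatives, best first

    w = np.full(6, 1 / 6)                          # equal weights, an assumption
    X = np.random.default_rng(0).random((7, 6))    # placeholder decision matrix
    print(vikor(X, w))

A full VIKOR analysis would also apply Opricovic's "acceptable advantage" and "acceptable stability" checks before declaring the top-ranked alternative a compromise solution; those checks are omitted from this sketch.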