Academic literature on the topic 'Automated information extraction'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Automated information extraction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Automated information extraction"

1

Ojokoh, Bolanle Adefowoke, Olumide Sunday Adewale, and Samuel Oluwole Falaki. "Automated document metadata extraction." Journal of Information Science 35, no. 5 (June 11, 2009): 563–70. http://dx.doi.org/10.1177/0165551509105195.

Abstract:
Web documents are available in various forms, most of which do not carry additional semantics. This paper presents a model for general document metadata extraction. The model, which combines segmentation by keywords and pattern-matching techniques, was implemented using PHP, MySQL, JavaScript and HTML. The system was tested with 40 randomly selected PDF documents (mainly theses) and evaluated using standard measures, namely precision, recall, accuracy and F-measure. The results show that the model is relatively effective for the task of metadata extraction, especially for theses and dissertations. A combination of machine learning with these rule-based methods will be explored in the future for better results.
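The rule-based recipe in the abstract above (keyword segmentation plus pattern matching) can be sketched in a few lines. The field names and patterns below are illustrative assumptions, not taken from the paper, which implemented its system in PHP:

```python
import re

# Hypothetical keyword-anchored patterns, in the spirit of the paper's
# segmentation-by-keywords plus pattern-matching approach.
PATTERNS = {
    "title":  re.compile(r"^Title:\s*(.+)$", re.MULTILINE),
    "author": re.compile(r"^(?:Author|By):\s*(.+)$", re.MULTILINE),
    "year":   re.compile(r"\b(19|20)\d{2}\b"),
}

def extract_metadata(first_page: str) -> dict:
    """Return whichever metadata fields the patterns can find."""
    meta = {}
    for field, pattern in PATTERNS.items():
        m = pattern.search(first_page)
        if m:
            # For 'year' the useful value is the whole match, not group 1.
            meta[field] = m.group(0) if field == "year" else m.group(1).strip()
    return meta

page = "Title: Automated Metadata Extraction\nAuthor: A. Student\nSubmitted 2009"
print(extract_metadata(page))
```

Precision, recall, accuracy and F-measure can then be computed by comparing such extracted fields against a hand-labelled gold standard, as the authors did for their 40 test documents.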
2

Musaev, Alexander A., and Dmitry A. Grigoriev. "Technologies for Automatic Knowledge Extraction from Poorly Structured Information for Management Tasks in Unstable Immersion Environments." Bulletin of the Saint Petersburg State Institute of Technology (Technical University) 63 (2022): 68–77. http://dx.doi.org/10.36807/1998-9849-2022-63-89-68-77.

Abstract:
The problem of automatic knowledge extraction from poorly structured text data is considered, applied to the task of proactive management in unstable immersion environments. A brief overview and critical analysis of the current state of technologies for extracting knowledge from text messages are presented. A formalized statement of the task of extracting knowledge from textual information is given. The structures of an automated system for preprocessing text documents and of a training-data polygon were developed. Options for creating search-based and statistical technologies for extracting knowledge from text messages are presented.
3

Andrade, Miguel A., and Peer Bork. "Automated extraction of information in molecular biology." FEBS Letters 476, no. 1-2 (June 26, 2000): 12–17. http://dx.doi.org/10.1016/s0014-5793(00)01661-6.

4

Townsend, Joe A., Sam E. Adams, Christopher A. Waudby, Vanessa K. de Souza, Jonathan M. Goodman, and Peter Murray-Rust. "Chemical documents: machine understanding and automated information extraction." Organic & Biomolecular Chemistry 2, no. 22 (2004): 3294. http://dx.doi.org/10.1039/b411033a.

5

Cemus, Karel, and Tomas Cerny. "Automated extraction of business documentation in enterprise information systems." ACM SIGAPP Applied Computing Review 16, no. 4 (January 13, 2017): 5–13. http://dx.doi.org/10.1145/3040575.3040576.

6

Valls-Vargas, Josep, Jichen Zhu, and Santiago Ontanon. "Error Analysis in an Automated Narrative Information Extraction Pipeline." IEEE Transactions on Computational Intelligence and AI in Games 9, no. 4 (December 2017): 342–53. http://dx.doi.org/10.1109/tciaig.2016.2575823.

7

Guan, Haiyan, Jonathan Li, Yongtao Yu, Michael Chapman, and Cheng Wang. "Automated Road Information Extraction From Mobile Laser Scanning Data." IEEE Transactions on Intelligent Transportation Systems 16, no. 1 (February 2015): 194–205. http://dx.doi.org/10.1109/tits.2014.2328589.

8

Cook, Tessa S., Stefan Zimmerman, Andrew D. A. Maidment, Woojin Kim, and William W. Boonn. "Automated Extraction of Radiation Dose Information for CT Examinations." Journal of the American College of Radiology 7, no. 11 (November 2010): 871–77. http://dx.doi.org/10.1016/j.jacr.2010.06.026.

9

Grant, Gerry H., and Sumali J. Conlon. "EDGAR Extraction System: An Automated Approach to Analyze Employee Stock Option Disclosures." Journal of Information Systems 20, no. 2 (September 1, 2006): 119–42. http://dx.doi.org/10.2308/jis.2006.20.2.119.

Abstract:
Past alternative accounting choices and new accounting standards for stock options have hindered analysts' ability to compare corporate financial statements. Financial analysts need specific information about stock options in order to accurately assess the financial position of companies. Finding this information is often a tedious task. The SEC's EDGAR database is the richest source of financial statement information on the Web. However, the information is stored in text or HTML files making it difficult to search and extract data. Information Extraction (IE), the process of finding and extracting useful information in unstructured text, can effectively help users find vital financial information. This paper examines the development and use of the EDGAR Extraction System (EES), a customized, automated system that extracts relevant information about employee stock options from financial statement disclosure notes on the EDGAR database.
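As a toy illustration of the kind of extraction EES performs on disclosure notes (the sentence and patterns here are invented; real filings vary far more in wording):

```python
import re

# Hypothetical disclosure sentence; the real EES system targets actual
# EDGAR filings, whose phrasing is much less uniform than this sketch handles.
note = ("During 2005 the Company granted 1,250,000 options at a "
        "weighted-average exercise price of $12.50 per share.")

granted = re.search(r"granted\s+([\d,]+)\s+options", note)
price = re.search(r"exercise price of \$([\d.]+)", note)

options_granted = int(granted.group(1).replace(",", ""))  # strip thousands separators
exercise_price = float(price.group(1))
print(options_granted, exercise_price)  # 1250000 12.5
```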
10

Reimeier, Fabian, Dominik Röpert, Anton Güntsch, Agnes Kirchhoff, and Walter G. Berendsohn. "Service-based information extraction from herbarium specimens." Biodiversity Information Science and Standards 2 (May 21, 2018): e25415. http://dx.doi.org/10.3897/biss.2.25415.

Abstract:
On herbarium sheets, data elements such as plant name, collection site, collector, barcode and accession number are found mostly on labels glued to the sheet. The data are thus visible on specimen images. With continuously improving technologies for collection mass-digitisation it has become easier and easier to produce high quality images of herbarium sheets and in the last few years herbarium collections worldwide have started to digitize specimens on an industrial scale (Tegelberg et al. 2014). To use the label data contained in these massive numbers of images, they have to be captured and databased. Currently, manual data entry prevails and forms the principal cost and time limitation in the digitization process. The StanDAP-Herb Project has developed a standard process for (semi-) automatic detection of data on herbarium sheets. This is a formal extensible workflow integrating a wide range of automated specimen image analysis services, used to replace time-consuming manual data input as far as possible. We have created web-services for OCR (Optical Character Recognition); for identifying regions of interest in specimen images and for the context-sensitive extraction of information from text recognized by OCR. We implemented the workflow as an extension of the OpenRefine platform (Verborgh and De Wilde 2013).

Dissertations / Theses on the topic "Automated information extraction"

1

Bowden, Paul Richard. "Automated knowledge extraction from text." Thesis, Nottingham Trent University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298900.

2

Wang, Wei. "Automated spatiotemporal and semantic information extraction for hazards." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1415.

Abstract:
This dissertation explores three research topics related to automated spatiotemporal and semantic information extraction about hazard events from Web news reports and other social media. The dissertation makes a unique contribution of bridging geographic information science, geographic information retrieval, and natural language processing. Geographic information retrieval and natural language processing techniques are applied to extract spatiotemporal and semantic information automatically from Web documents, to retrieve information about patterns of hazard events that are not explicitly described in the texts. Chapters 2, 3 and 4 can be regarded as three standalone journal papers. The research topics covered by the three chapters are related to each other and are presented sequentially. Chapter 2 begins with an investigation of methods for automatically extracting spatial and temporal information about hazards from Web news reports. A set of rules is developed to combine the spatial and temporal information contained in the reports based on how this information is presented in text in order to capture the dynamics of hazard events (e.g., changes in event locations, new events occurring) as they occur over space and time. Chapter 3 presents an approach for retrieving semantic information about hazard events using ontologies and semantic gazetteers. With this work, information on the different kinds of events (e.g., impact, response, or recovery events) can be extracted as well as information about hazard events at different levels of detail. Using the methods presented in Chapters 2 and 3, an approach for automatically extracting spatial, temporal, and semantic information from tweets is discussed in Chapter 4. Four different elements of tweets are used for assigning appropriate spatial and temporal information to hazard events in tweets.
Since tweets represent shorter, but more current information about hazards and how they are impacting a local area, key information about hazards can be retrieved through extracted spatiotemporal and semantic information from tweets.
3

Heckemann, Rolf Andreas. "Automated information extraction from images of the human brain." Thesis, Imperial College London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.444549.

4

Malki, Khalil. "Automated Knowledge Extraction from Archival Documents." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 2019. http://digitalcommons.auctr.edu/cauetds/204.

Abstract:
Traditional archival media such as paper, film, and photographs contain a vast store of knowledge. Much of this knowledge is applicable to current business and scientific problems and offers solutions; consequently, there is value in extracting this information. While it is possible to extract the content manually, this technique is not feasible for large knowledge repositories due to cost and time. In this thesis, we develop a system that can extract such knowledge automatically from large repositories. A graphical user interface that permits users to indicate the location of the knowledge components (indexes) is developed, and software features that permit automatic extraction of indexes from similar documents are presented. The indexes and the documents are stored in a persistent data store. The system is tested on a University Registrar's legacy paper-based transcript repository. The study shows that the system provides a good solution for large-scale extraction of knowledge from archived paper and other media.
5

Ortona, Stefano. "Easing information extraction on the web through automated rules discovery." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:a5a7a070-338a-4afc-8be5-a38b486cf526.

Abstract:
The advent of the era of big data on the Web has made automatic web information extraction an essential tool in data acquisition processes. Unfortunately, automated solutions are in most cases more error prone than those created by humans, resulting in dirty and erroneous data. Automatic repair and cleaning of the extracted data is thus a necessary complement to information extraction on the Web. This thesis investigates the problem of inducing cleaning rules on web extracted data in order to (i) repair and align the data w.r.t. an original target schema, (ii) produce repairs that are as generic as possible such that different instances can benefit from them. The problem is addressed from three different angles: replace cross-site redundancy with an ensemble of entity recognisers; produce general repairs that can be encoded in the extraction process; and exploit entity-wide relations to infer common knowledge on extracted data. First, we present ROSeAnn, an unsupervised approach to integrate semantic annotators and produce a unified and consistent annotation layer on top of them. Both the diversity in vocabulary and widely varying accuracy justify the need for middleware that reconciles different annotator opinions. Considering annotators as "black-boxes" that do not require per-domain supervision allows us to recognise semantically related content in web extracted data in a scalable way. Second, we show in WADaR how annotators can be used to discover rules to repair web extracted data. We study the problem of computing joint repairs for web data extraction programs and their extracted data, providing an approximate solution that requires no per-source supervision and proves effective across a wide variety of domains and sources. The proposed solution is effective not only in repairing the extracted data, but also in encoding such repairs in the original extraction process.
Third, we investigate how relationships among entities can be exploited to discover inconsistencies and additional information. We present RuDiK, a disk-based scalable solution to discover first-order logic rules over RDF knowledge bases built from web sources. We present an approach that does not limit its search space to rules that rely on "positive" relationships between entities, as in the case with traditional mining of constraints. On the contrary, it extends the search space to also discover negative rules, i.e., patterns that lead to contradictions in the data.
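ROSeAnn's reconciliation of "black-box" annotators is considerably more sophisticated than this, but the basic idea of aggregating conflicting annotator opinions can be sketched as a majority vote (the labels below are hypothetical):

```python
from collections import Counter

def majority_label(votes):
    """Pick the label most annotators agree on; None on a tie or no votes."""
    if not votes:
        return None
    counts = Counter(votes).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # unresolved disagreement
    return counts[0][0]

# Three hypothetical annotators labelling the same extracted span.
votes = ["Person", "Person", "Organization"]
print(majority_label(votes))  # Person
```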
6

Ademi, Muhamet. "adXtractor – Automated and Adaptive Generation of Wrappers for Information Retrieval." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20071.

Abstract:
The aim of this project is to investigate the feasibility of retrieving unstructured automotive listings from structured web pages on the Internet. The research has two major purposes: (1) to investigate whether it is feasible to pair information extraction algorithms and compute wrappers, and (2) to demonstrate the results of pairing these techniques and evaluate the measurements. We merge two training sets available on the web to construct reference sets, which are the basis for the information extraction. The wrappers are computed by using information extraction techniques to identify data properties with a variety of techniques such as fuzzy string matching, regular expressions and document tree analysis. The results demonstrate that it is possible to pair these techniques successfully and retrieve the majority of the listings. Additionally, the findings also suggest that many platforms utilise lazy loading to populate image resources, which the algorithm is unable to capture. In conclusion, the study demonstrated that it is possible to use information extraction to compute wrappers dynamically by identifying data properties. Furthermore, the study demonstrates the ability to open non-queryable domain data through a unified service.
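One ingredient the abstract mentions, fuzzy string matching against a reference set, can be sketched with the standard library (the reference set and threshold below are invented for illustration, not taken from the thesis):

```python
from difflib import SequenceMatcher

# Hypothetical reference set of car makes; the thesis builds its reference
# sets by merging publicly available training data.
REFERENCE_MAKES = ["Volvo", "Volkswagen", "Toyota", "BMW"]

def best_match(token: str, threshold: float = 0.6):
    """Return the reference entry most similar to `token`, if close enough."""
    scored = [(SequenceMatcher(None, token.lower(), ref.lower()).ratio(), ref)
              for ref in REFERENCE_MAKES]
    score, ref = max(scored)
    return ref if score >= threshold else None

print(best_match("volvoo"))  # Volvo
```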
7

Xhemali, Daniela. "Automated retrieval and extraction of training course information from unstructured web pages." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/7022.

Abstract:
Web Information Extraction (WIE) is the discipline dealing with the discovery, processing and extraction of specific pieces of information from semi-structured or unstructured web pages. The World Wide Web comprises billions of web pages and there is much need for systems that will locate, extract and integrate the acquired knowledge into organisations' practices. There are some commercial, automated web extraction software packages; however, their success comes from heavily involving their users in the process of finding the relevant web pages, preparing the system to recognise items of interest on these pages and manually dealing with the evaluation and storage of the extracted results. This research has explored WIE, specifically with regard to the automation of the extraction and validation of online training information. The work also includes research and development in the area of automated Web Information Retrieval (WIR), more specifically in Web Searching (or Crawling) and Web Classification. Different technologies were considered; however, after much consideration, Naïve Bayes networks were chosen as the most suitable for the development of the classification system. The extraction part of the system used Genetic Programming (GP) for the generation of web extraction solutions. Specifically, GP was used to evolve Regular Expressions, which were then used to extract specific training course information from the web such as: course names, prices, dates and locations. The experimental results indicate that all three aspects of this research perform very well, with the Web Crawler outperforming existing crawling systems, the Web Classifier performing with an accuracy of over 95% and a precision of over 98%, and the Web Extractor achieving an accuracy of over 94% for the extraction of course titles and an accuracy of just under 67% for the extraction of other course attributes such as dates, prices and locations.
Furthermore, the overall work is of great significance to the sponsoring company, as it simplifies and improves the existing time-consuming, labour-intensive and error-prone manual techniques, as will be discussed in this thesis. The prototype developed in this research works in the background and requires very little, often no, human assistance.
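The web-classification component above was built on Naïve Bayes. A minimal multinomial Naïve Bayes classifier, with toy training data standing in for the thesis's labelled page corpus, looks roughly like this:

```python
import math
from collections import Counter

# Toy labelled "pages" standing in for the thesis's crawled corpus; the
# real classifier was trained on far larger labelled data.
docs = [
    ("course dates price location training", "course"),
    ("training course enrol price", "course"),
    ("company history about us contact", "other"),
    ("news press release contact", "other"),
]

def train(docs):
    word_counts = {"course": Counter(), "other": Counter()}
    class_counts = Counter()
    for text, label in docs:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts

def classify(text, word_counts, class_counts):
    vocab = set(w for c in word_counts.values() for w in c)
    best_label, best_score = None, -math.inf
    for label, counts in word_counts.items():
        total = sum(counts.values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for w in text.split():
            # Laplace smoothing keeps unseen words from zeroing the score.
            score += math.log((counts[w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

wc, cc = train(docs)
print(classify("course price dates", wc, cc))  # course
```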
8

Hedbrant, Per. "Towards a fully automated extraction and interpretation of tabular data using machine learning." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-391490.

Abstract:
Motivation: A challenge for researchers at CBCS is the ability to efficiently manage the different data formats that frequently are changed. This handling includes import of data into the same format, regardless of the output of the various instruments used. There are commercial solutions available for this process, but to our knowledge, all of these require prior generation of templates to which data must conform. A significant amount of time is spent on manual pre-processing, converting from one format to another, and there are currently no solutions that use pattern recognition to locate and automatically recognise data structures in a spreadsheet. Problem definition: The desired solution is to build a self-learning Software-as-a-Service (SaaS) for automated recognition and loading of data stored in arbitrary formats. The aim of this study is three-fold: (A) investigate whether unsupervised machine learning methods can be used to label different types of cells in spreadsheets; (B) investigate whether a hypothesis-generating algorithm can be used to label different types of cells in spreadsheets; (C) advise on choices of architecture and technologies for the SaaS solution. Method: A pre-processing framework is built that can read and pre-process any type of spreadsheet into a feature matrix. Different datasets are read and clustered, and the usefulness of reducing the dimensionality is investigated. A hypothesis-driven algorithm is built and adapted to two of the data formats CBCS uses most frequently. Discussions are held on choices of architecture and technologies for the SaaS solution, including system design patterns, web development framework and database. Result: The reading and pre-processing framework is in itself a valuable result, due to its general applicability. No satisfying results are found when using the mini-batch k-means clustering method. When only reading data from one format, the dimensionality can be reduced from 542 to around 40 dimensions. The hypothesis-driven algorithm can consistently interpret the format it is designed for; more work is needed to make it more general. Implication: The study contributes to the desired solution in the short term through the hypothesis-generating algorithm, and in a more generalisable way through the unsupervised learning approach. The study also contributes by initiating a conversation around the system design choices.
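The clustering approach described above depends on turning each spreadsheet cell into a feature vector. The features below are simple invented examples of such per-cell signals, not the 542-dimensional representation used in the thesis:

```python
def cell_features(value: str):
    """A few simple per-cell features of the kind a clustering step can use."""
    stripped = value.strip()
    is_numeric = stripped.replace(".", "", 1).replace("-", "", 1).isdigit()
    return [
        float(is_numeric),             # numeric data vs. text
        float(stripped.isupper()),     # all-caps headers like 'PLATE ID'
        float(len(stripped)),          # long cells are often descriptions
        float(stripped.endswith(":"))  # label-style cells
    ]

print(cell_features("3.14"))      # [1.0, 0.0, 4.0, 0.0]
print(cell_features("PLATE ID"))  # [0.0, 1.0, 8.0, 0.0]
```

Stacking these vectors over all cells gives the feature matrix that a method such as mini-batch k-means can then cluster.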
9

Sahar, Liora. "Using remote-sensing and gis technology for automated building extraction." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/37231.

Abstract:
Extraction of buildings from remote sensing sources is an important GIS application and has been the subject of extensive research over the last three decades. An accurate building inventory is required for applications such as GIS database maintenance and revision; impervious surfaces mapping; storm water management; hazard mitigation and risk assessment. Despite all the progress within the fields of photogrammetry and image processing, the problem of automated feature extraction is still unresolved. A methodology for automatic building extraction that integrates remote sensing sources and GIS data was proposed. The methodology consists of a series of image processing and spatial analysis techniques. It incorporates initial simplification procedure and multiple feature analysis components. The extraction process was implemented and tested on three distinct types of buildings including commercial, residential and high-rise. Aerial imagery and GIS data from Shelby County, Tennessee were identified for the testing and validation of the results. The contribution of each component to the overall methodology was quantitatively evaluated as relates to each type of building. The automatic process was compared to manual building extraction and provided means to alleviate the manual procedure effort. A separate module was implemented to identify the 2D shape of a building. Indices for two specific shapes were developed based on the moment theory. The indices were tested and evaluated on multiple feature segments and proved to be successful. The research identifies the successful building extraction scenarios as well as the challenges, difficulties and drawbacks of the process. Recommendations are provided based on the testing and evaluation for future extraction projects.
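The dissertation derives its shape indices from moment theory. As a related but simpler example of a rotation-invariant 2D shape index (not the author's exact formulation), the compactness 4πA/P² separates compact building footprints from elongated segments:

```python
import math

def polygon_area_perimeter(pts):
    """Shoelace area and perimeter of a closed polygon given as (x, y) points."""
    area, perim = 0.0, 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area += x1 * y2 - x2 * y1
        perim += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perim

def compactness(pts):
    """4*pi*A/P^2: 1.0 for a circle, lower for elongated footprints."""
    area, perim = polygon_area_perimeter(pts)
    return 4 * math.pi * area / perim ** 2

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(round(compactness(square), 3))  # pi/4, i.e. 0.785
```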
10

Nepal, Madhav Prasad. "Automated extraction and querying of construction-specific design features from a building information model." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/38046.

Abstract:
In recent years, several research and industry efforts have focused on developing building information models (BIMs) to support various aspects of the architectural, engineering, construction and facility management (AEC/FM) industry. BIMs provide semantically-rich information models that explicitly represent both 3D geometric and non-geometric information. While BIMs have many useful applications to the construction industry, there are enormous challenges in getting construction-specific information out of BIMs, limiting the usability of these models. This research addresses this problem by developing a novel approach to extract construction features from a given BIM and support the processing of user-driven queries on a BIM. In this dissertation, we formalized: (i) An ontology of design features that explicitly represents design conditions that are relevant to construction practitioners and supports the generation of a construction-specific feature-based model; (ii) A query specification vocabulary which characterizes spatial and non-spatial queries, and developed query templates to guide non-expert BIM users to specify queries; and (iii) An integrated approach that combines model-based reasoning and query-based approach to automatically extract design features to create a project-specific feature-based model (FBM) and provide support for answering queries on the FBM. The construction knowledge formalized in this research was gathered from a variety of sources, which included a detailed literature review, several case studies, extensive observations of design and construction meetings, and lengthy discussions with different construction practitioners. We used three different tests to validate the research contributions. We conducted semi-structured, informal interviews with four construction experts for the four building projects studied to validate the content, representativeness and the generality of the concepts formalized in this research. 
We conducted retrospective analysis for different features to evaluate the soundness of our research in comparison with the state-of-the-art tools. Finally, we performed descriptive and interpretive analysis to demonstrate that our approach is capable of providing richer, insightful and useful construction information. This research can help to make a BIM more accessible for construction users. The developed solutions can support decision making in a variety of construction management functions, such as cost estimating, construction planning, execution and coordination, purchasing, constructability analysis, methods selection, and productivity analysis.

Books on the topic "Automated information extraction"

1

Gruen, A., E. P. Baltsavias, and O. Henricsson, eds. Automatic Extraction of Man-Made Objects from Aerial and Space Images (II). Basel: Birkhäuser Verlag, 1997.

2

Sabourin, Conrad. Computational linguistics in information science: Information retrieval (full-text or conceptual), automatic indexing, text abstraction, content analysis, information extraction, query languages : bibliography. Montréal: Infolingua, 1994.

3

Grishman, Ralph. Information Extraction. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0030.

Abstract:
Information extraction (IE) is the automatic identification of selected types of entities, relations, or events in free text. This article appraises two specific strands of IE: name identification and classification, and event extraction. Conventional treatments of language pay little attention to proper names, addresses, etc.; presentations of language analysis generally look up words in a dictionary and identify them as nouns and so on. The constant presence of names in text makes linguistic analysis difficult unless the names are identified by their types and treated as linguistic units. Name tagging involves creating several finite-state patterns, each corresponding to some subset of names, whose elements match specific tokens or classes of tokens with particular features. Event extraction typically works by creating a series of regular expressions, customized to capture the relevant events, with each refinement of an expression matched by a corresponding refinement of the event patterns.
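The finite-state name-tagging patterns described above can be approximated with a regular expression. The title-plus-capitalized-words pattern below is a deliberately simplified example, not a production tagger:

```python
import re

# A toy finite-state pattern in the spirit described: a title token followed
# by capitalized name tokens. Real name taggers combine many such patterns
# with gazetteers and contextual features.
PERSON = re.compile(r"\b(?:Mr|Ms|Dr|Prof)\.\s+(?:[A-Z][a-z]+\s?)+")

text = "Yesterday Dr. Alice Smith met the board in Geneva."
print([m.group(0).strip() for m in PERSON.finditer(text)])  # ['Dr. Alice Smith']
```

Note that "Geneva" is not tagged: without a preceding title token this pattern has no evidence it is a name, which is why real systems layer many patterns and type-specific evidence.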
4

Caselli, Tommaso, Eduard Hovy, Martha Palmer, and Piek Vossen, eds. Computational Analysis of Storylines. Cambridge University Press, 2021. http://dx.doi.org/10.1017/9781108854221.

Abstract:
Event structures are central in Linguistics and Artificial Intelligence research: people can easily refer to changes in the world, identify their participants, distinguish relevant information, and have expectations of what can happen next. Part of this process is based on mechanisms similar to narratives, which are at the heart of information sharing. But it remains difficult to automatically detect events or automatically construct stories from such event representations. This book explores how to handle today's massive news streams and provides multidimensional, multimodal, and distributed approaches, like automated deep learning, to capture events and narrative structures involved in a 'story'. This overview of the current state-of-the-art on event extraction, temporal and causal relations, and storyline extraction aims to establish a new multidisciplinary research community with a common terminology and research agenda. Graduate students and researchers in natural language processing, computational linguistics, and media studies will benefit from this book.
5

Jacquemin, Christian, and Didier Bourigault. Term Extraction and Automatic Indexing. Edited by Ruslan Mitkov. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199276349.013.0033.

Abstract:
Terms are pervasive in scientific and technical documents and their identification is a crucial issue for any application dealing with the analysis, understanding, generation, or translation of such documents. In particular, the ever-growing mass of specialized documentation available on-line, in industrial and governmental archives or in digital libraries, calls for advances in terminology processing for tasks such as information retrieval, cross-language querying, indexing of multimedia documents, translation aids, document routing and summarization, etc. This article presents a new domain of research and development in natural language processing (NLP) that is concerned with the representation, acquisition, and recognition of terms. It begins with presenting the basic notions about the concept of ‘terms’, ranging from the classical view, to the recent concepts. There are two main areas of research involving terminology in NLP, which are, term acquisition and term recognition. Finally, this article presents the recent advances and prospects in term acquisition and automatic indexing.
6

Turenne, Nicolas. Knowledge Needs and Information Extraction: Towards an Artificial Consciousness. Wiley & Sons, Incorporated, John, 2013.


Book chapters on the topic "Automated information extraction"

1

Cioffi-Revilla, Claudio. "Automated Information Extraction." In Texts in Computer Science, 67–88. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-5661-1_3.

2

Cioffi-Revilla, Claudio. "Automated Information Extraction." In Texts in Computer Science, 103–40. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-50131-4_3.

3

Lin, Wei-Hao, and Alexander G. Hauptmann. "Automated Analysis of Ideological Bias in Video." In Multimedia Information Extraction, 129–43. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2012. http://dx.doi.org/10.1002/9781118219546.ch8.

4

Ly, Papa Alioune, Carlos Pedrinaci, and John Domingue. "Automated Information Extraction from Web APIs Documentation." In Web Information Systems Engineering - WISE 2012, 497–511. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-35063-4_36.

5

Peleato, Ramón Aragüés, Jean-Cédric Chappelier, and Martin Rajman. "Automated Information Extraction out of Classified Advertisements." In Natural Language Processing and Information Systems, 203–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45399-7_17.

6

Singh, Pradeep, and Shrish Verma. "Automated Tool for Extraction of Software Fault Data." In Advances in Data and Information Sciences, 29–37. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8360-0_3.

7

Echeverría, Jorge, Francisca Pérez, Óscar Pastor, and Carlos Cetina. "Assessing the Performance of Automated Model Extraction Rules." In Lecture Notes in Information Systems and Organisation, 33–49. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-74817-7_3.

8

Ling, Yanyan, Xiaofeng Meng, and Weiyi Meng. "Automated Extraction of Hit Numbers from Search Result Pages." In Advances in Web-Age Information Management, 73–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11775300_7.

9

Weerawardhana, Sachini, Subhojeet Mukherjee, Indrajit Ray, and Adele Howe. "Automated Extraction of Vulnerability Information for Home Computer Security." In Foundations and Practice of Security, 356–66. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-17040-4_24.

10

Katyshev, Alexander, Anton Anikin, Mikhail Denisov, and Tatyana Petrova. "Intelligent Approaches for the Automated Domain Ontology Extraction." In Proceedings of Fifth International Congress on Information and Communication Technology, 410–17. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5856-6_41.


Conference papers on the topic "Automated information extraction"

1

Schwardmann, Ulrich. "Automated schema extraction for PID information types." In 2016 IEEE International Conference on Big Data (Big Data). IEEE, 2016. http://dx.doi.org/10.1109/bigdata.2016.7840957.

2

Rudzajs, Peteris. "Towards automated education demand-offer information monitoring: The information extraction." In 2012 Sixth International Conference on Research Challenges in Information Science (RCIS). IEEE, 2012. http://dx.doi.org/10.1109/rcis.2012.6240464.

3

Ramnani, Roshni R., Karthik Shivaram, Shubhashis Sengupta, and Annervaz K. M. "Semi-Automated Information Extraction from Unstructured Threat Advisories." In ISEC '17: Innovations in Software Engineering Conference. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3021460.3021482.

4

Biswal, Siddharth, Zarina Nip, Valdery Moura Junior, Matt T. Bianchi, Eric S. Rosenthal, and M. Brandon Westover. "Automated information extraction from free-text EEG reports." In 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2015. http://dx.doi.org/10.1109/embc.2015.7319956.

5

Lee, Ashe X., Ashish Saxena, Jacqueline Chua, Leopold Schmetterer, and Bingyao Tan. "Automated Retinal Vascular Topological Information Extraction From OCTA." In 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, 2022. http://dx.doi.org/10.1109/embc48229.2022.9871160.

6

Russell, Stuart, Ole Torp Lassen, Justin Uang, and Wei Wang. "The Physics of Text: Ontological Realism in Information Extraction." In Proceedings of the 5th Workshop on Automated Knowledge Base Construction. Stroudsburg, PA, USA: Association for Computational Linguistics, 2016. http://dx.doi.org/10.18653/v1/w16-1310.

7

Guarino De Vasconcelos, Luiz Eduardo, Andre Yoshimi Kusumoto, Nelson Paiva Oliveira Leite, and Cristina Moniz Araujo Lopes. "Automated Extraction Information System from HUDs Images Using ANN." In 2015 12th International Conference on Information Technology - New Generations (ITNG). IEEE, 2015. http://dx.doi.org/10.1109/itng.2015.110.

8

Gutierrez, Fernando, Dejing Dou, Adam Martini, Stephen Fickas, and Hui Zong. "Hybrid Ontology-Based Information Extraction for Automated Text Grading." In 2013 12th International Conference on Machine Learning and Applications (ICMLA). IEEE, 2013. http://dx.doi.org/10.1109/icmla.2013.73.

9

Bhatia, Jaspreet, Morgan C. Evans, Sudarshan Wadkar, and Travis D. Breaux. "Automated Extraction of Regulated Information Types Using Hyponymy Relations." In 2016 IEEE 24th International Requirements Engineering Conference Workshops (REW). IEEE, 2016. http://dx.doi.org/10.1109/rew.2016.018.

10

Zhang, Jiansong, and Nora El-Gohary. "Automated Regulatory Information Extraction from Building Codes: Leveraging Syntactic and Semantic Information." In Construction Research Congress 2012. Reston, VA: American Society of Civil Engineers, 2012. http://dx.doi.org/10.1061/9780784412329.063.


Reports on the topic "Automated information extraction"

1

Small, Frank, and William Tanenbaum. Extrinsic Evaluation of Automated Information Extraction Programs. Fort Belvoir, VA: Defense Technical Information Center, May 2010. http://dx.doi.org/10.21236/ada533074.

2

Zelenskyi, Arkadii A. Relevance of research of programs for semantic analysis of texts and review of methods of their realization. [n.p.], December 2018. http://dx.doi.org/10.31812/123456789/2884.

Abstract:
One of the main tasks of applied linguistics is high-quality automated processing of natural language. The most effective systems for extracting and representing the semantics of natural-language text combine linguistic analysis technologies with statistical analysis methods. Among the existing methods for analyzing text data, a widely used approach is the vector space model. Another effective and relevant means of extracting semantics from text and representing it is latent semantic analysis (LSA). The LSA method has been tested and has confirmed its effectiveness in such areas of natural language processing as modeling human conceptual knowledge and information retrieval, where LSA shows much better results than conventional vector methods.
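For context on the latent semantic analysis (LSA) mentioned in the abstract above: LSA factorizes a term-document matrix with a truncated singular value decomposition and compares documents in the resulting low-rank latent space. The sketch below is purely illustrative; the toy vocabulary and counts are invented for the example and are not drawn from the cited report.

```python
import numpy as np

# Hypothetical term-document count matrix (5 terms x 4 documents),
# invented for illustration only.
A = np.array([
    [2, 0, 1, 0],   # "semantic"
    [1, 1, 0, 0],   # "analysis"
    [0, 2, 0, 1],   # "vector"
    [0, 0, 3, 1],   # "retrieval"
    [1, 0, 1, 2],   # "index"
], dtype=float)

# LSA: keep only the k strongest latent dimensions of the SVD.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
# Each row of doc_vectors is one document's coordinates in the latent space.
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T

def similarity(i, j):
    """Cosine similarity between documents i and j in the latent space."""
    a, b = doc_vectors[i], doc_vectors[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity(0, 1))
```

Because the comparison happens in the reduced latent space rather than on raw term counts, documents can score as related even when they share few surface terms, which is the property the abstract credits for LSA's advantage over plain vector methods.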
3

Yan, Yujie, and Jerome F. Hajjar. Automated Damage Assessment and Structural Modeling of Bridges with Visual Sensing Technology. Northeastern University, May 2021. http://dx.doi.org/10.17760/d20410114.

Abstract:
Recent advances in visual sensing technology have gained much attention in the field of bridge inspection and management. Coupled with advanced robotic systems, state-of-the-art visual sensors can be used to obtain accurate documentation of bridges without the need for any special equipment or traffic closure. The captured visual sensor data can be post-processed to gather meaningful information for the bridge structures and hence to support bridge inspection and management. However, state-of-the-practice data postprocessing approaches require substantial manual operations, which can be time-consuming and expensive. The main objective of this study is to develop methods and algorithms to automate the post-processing of the visual sensor data towards the extraction of three main categories of information: 1) object information such as object identity, shapes, and spatial relationships - a novel heuristic-based method is proposed to automate the detection and recognition of main structural elements of steel girder bridges in both terrestrial and unmanned aerial vehicle (UAV)-based laser scanning data. Domain knowledge on the geometric and topological constraints of the structural elements is modeled and utilized as heuristics to guide the search as well as to reject erroneous detection results. 2) structural damage information, such as damage locations and quantities - to support the assessment of damage associated with small deformations, an advanced crack assessment method is proposed to enable automated detection and quantification of concrete cracks in critical structural elements based on UAV-based visual sensor data. In terms of damage associated with large deformations, based on the surface normal-based method proposed in Guldur et al. (2014), a new algorithm is developed to enhance the robustness of damage assessment for structural elements with curved surfaces. 3) three-dimensional volumetric models - the object information extracted from the laser scanning data is exploited to create a complete geometric representation for each structural element. In addition, mesh generation algorithms are developed to automatically convert the geometric representations into conformal all-hexahedron finite element meshes, which can be finally assembled to create a finite element model of the entire bridge. To validate the effectiveness of the developed methods and algorithms, several field data collections have been conducted to collect both the visual sensor data and the physical measurements from experimental specimens and in-service bridges. The data were collected using both terrestrial laser scanners combined with images, and laser scanners and cameras mounted to unmanned aerial vehicles.
4

Nurre, Joseph H. Automate Information Extraction from Scan Data. Fort Belvoir, VA: Defense Technical Information Center, November 1998. http://dx.doi.org/10.21236/ada362095.

5

Sudo, Kiyoshi, Satoshi Sekine, and Ralph Grishman. Automatic Pattern Acquisition for Japanese Information Extraction. Fort Belvoir, VA: Defense Technical Information Center, January 2001. http://dx.doi.org/10.21236/ada460210.

6

Watts, Charles, James Cowie, and Sergei Nirenburg. Improving Recall for Automatic Information Extraction: Final Status Report. Fort Belvoir, VA: Defense Technical Information Center, December 2000. http://dx.doi.org/10.21236/ada390686.

7

Ward, Katrina, Jonathan Bisila, and Jamini Sahu. Relationship Extraction: Automatic Information Extraction and Organization for Supporting Analysts in Threat Assessment. Office of Scientific and Technical Information (OSTI), October 2021. http://dx.doi.org/10.2172/1832526.
