Academic literature on the topic 'Addresses (data processing)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Addresses (data processing).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Addresses (data processing)"

1

Romanchuk, Vitaliy. "Mathematical support and software for data processing in robotic neurocomputer systems." MATEC Web of Conferences 161 (2018): 03004. http://dx.doi.org/10.1051/matecconf/201816103004.

Full text
Abstract:
The paper addresses the classification and formal definition of neurocomputer systems for robotic complexes, based on the types of associations among their elements. We suggest analytical expressions for performance evaluation in neural computer information processing, aimed at the development of methods, algorithms and software that optimize such systems.
APA, Harvard, Vancouver, ISO, and other styles
2

Hahanov, V. I., V. H. Abdullayev, S. V. Chumachenko, E. I. Lytvynova, and I. V. Hahanova. "IN-MEMORY INTELLIGENT COMPUTING." Radio Electronics, Computer Science, Control, no. 1 (April 2, 2024): 161. http://dx.doi.org/10.15588/1607-3274-2024-1-15.

Full text
Abstract:
Context. Processed big data has social significance for the development of society and industry. Intelligent processing of big data is a condition for creating a collective mind of a social group, company, state and the planet as a whole. At the same time, the economy of big data (Data Economy) takes first place in the evaluation of processing mechanisms, since two parameters are very important: speed of data processing and energy consumption. Therefore, mechanisms focused on parallel processing of large data within the data storage center will always be in demand on the IT market. Objective. The goal of the investigation is to increase the economy of big data (Data Economy) thanks to the analysis of data as truth table addresses for the identification of patterns of production functionalities based on the similarity-difference metric. Method. Intelligent computing architectures are proposed for managing cyber-social processes based on monitoring and analysis of big data. It is proposed to process big data as truth table addresses to solve the problems of identification, clustering, and classification of patterns of social and production processes. A family of automata is offered for the analysis of big data, such as addresses. The truth table is considered as a reasonable form of explicit data structures that have a useful constant – a standard address routing order. The goal of processing big data is to make it structured using a truth table for further identification before making actuator decisions. The truth table is considered as a mechanism for parallel structuring and packing of large data in its column to determine their similarity-difference and to equate data at the same addresses. Representation of data as addresses is associated with unitary encoding of patterns by binary vectors on the found universe of primitive data. 
The mechanism is focused on processorless data processing based on read-write transactions using in-memory computing technology with significant time and energy savings. The metric of truth table big data processing is parallelism, technological simplicity, and linear computational complexity. The price for such advantages is the exponential memory costs of storing explicit structured data. Results. Parallel algorithms of in-memory computing are proposed for economic mechanisms of transformation of large unstructured data, such as addresses, into useful structured data. An in-memory computing architecture with global feedback and an algorithm for matrix parallel processing of large data such as addresses are proposed. It includes a framework for matrix analysis of big data to determine the similarity between vectors that are input to the matrix sequencer. Vector data analysis is transformed into matrix computing for big data processing. The speed of the parallel algorithm for the analysis of big data on the MDV matrix of deductive vectors is linearly dependent on the number of bits of the input vectors or the power of the universe of primitives. A method of identifying patterns using key words has been developed. It is characterized by the use of unitary coded data components for the synthesis of the truth table of the business process. This allows you to use read-write transactions for parallel processing of large data such as addresses. Conclusions. 
The scientific novelty consists in the development of the following innovative solutions: 1) a new vector-matrix technology for parallel processing of large data, such as addresses, is proposed, characterized by the use of read-write transactions on matrix memory without the use of processor logic; 2) an in-memory computing architecture with global feedback and an algorithm for matrix parallel processing of large data, such as addresses, are proposed; 3) a method of identifying patterns using keywords is proposed, characterized by the use of unitary coded data components for the synthesis of the truth table of the business process, which makes it possible to use read-write transactions for parallel processing of large data such as addresses. The practical significance of the study is that any task of artificial intelligence (similarity-difference, classification-clustering and recognition, pattern identification) can be solved technologically simply and efficiently with the help of a truth table (or its derivatives) and unitarily coded big data. Research prospects are related to the implementation of this digital modeling technology in devices on the EDA market.
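The abstract's central idea (encoding patterns as unitary, one-hot vectors over a universe of primitives and comparing them by a similarity-difference metric) can be illustrated with a small Python sketch. The one-hot encoding and the Jaccard-style metric below are simplifying assumptions for illustration, not the authors' exact scheme:

```python
def unitary_encode(pattern: set[str], universe: list[str]) -> list[int]:
    """One-hot (unitary) encode a pattern over a fixed universe of primitives."""
    return [1 if p in pattern else 0 for p in universe]

def similarity_difference(a: list[int], b: list[int]) -> float:
    """Jaccard-style similarity: shared positions over all occupied positions."""
    both = sum(x & y for x, y in zip(a, b))
    either = sum(x | y for x, y in zip(a, b))
    return both / either if either else 1.0

universe = ["alpha", "beta", "gamma", "delta"]
v1 = unitary_encode({"alpha", "beta"}, universe)   # [1, 1, 0, 0]
v2 = unitary_encode({"beta", "gamma"}, universe)   # [0, 1, 1, 0]
```

Here `similarity_difference(v1, v2)` is 1/3: the two patterns share one primitive ("beta") out of three occupied positions.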
APA, Harvard, Vancouver, ISO, and other styles
3

Gururaj T. and Siddesh G. M. "Hybrid Approach for Enhancing Performance of Genomic Data for Stream Matching." International Journal of Cognitive Informatics and Natural Intelligence 15, no. 4 (October 2021): 1–18. http://dx.doi.org/10.4018/ijcini.20211001.oa38.

Full text
Abstract:
In gene expression analysis, the expression levels of thousands of genes are analyzed across conditions, such as separate stages of treatments or diseases. Identifying a particular gene sequence pattern is a challenging task with respect to performance. The proposed solution addresses the performance issues in genomic stream matching by involving assembly and sequencing. When counting k-mers based on the k-input value and performing DNA sequencing tasks, researchers need to concentrate on sequence matching. The proposed solution addresses performance metrics such as processing time for k-mer counting, number of operations for matching similarity, memory utilization while performing similarity search, and processing time for stream matching. By suggesting an improved algorithm, Revised Rabin-Karp (RRK), for the basic operation, and to achieve more efficiency, the proposed solution suggests a novel framework based on Hadoop MapReduce blended with Pig and Apache Tez. In terms of memory utilization and processing time, the proposed model proves its efficiency when compared to existing approaches.
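The Revised Rabin-Karp (RRK) variant itself is not specified in the abstract, but the classic Rabin-Karp rolling hash it builds on can be sketched for k-mer matching over the DNA alphabet. This is a minimal illustrative sketch, not the RRK algorithm:

```python
def rabin_karp_kmers(text: str, pattern: str, base: int = 4, mod: int = 10**9 + 7) -> list[int]:
    """Return start indices where `pattern` (a k-mer) occurs in `text`,
    using a rolling hash over the DNA alphabet {A, C, G, T}."""
    k = len(pattern)
    if k == 0 or k > len(text):
        return []
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    high = pow(base, k - 1, mod)  # weight of the outgoing character

    p_hash = 0
    w_hash = 0
    for i in range(k):
        p_hash = (p_hash * base + code[pattern[i]]) % mod
        w_hash = (w_hash * base + code[text[i]]) % mod

    hits = []
    for i in range(len(text) - k + 1):
        # Verify on hash match to rule out collisions.
        if w_hash == p_hash and text[i:i + k] == pattern:
            hits.append(i)
        if i + k < len(text):
            # Slide the window: drop text[i], add text[i + k].
            w_hash = ((w_hash - code[text[i]] * high) * base + code[text[i + k]]) % mod
    return hits
```

The rolling update keeps each window hash at constant cost, so scanning a stream of length n for a k-mer is O(n) rather than O(n·k).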
APA, Harvard, Vancouver, ISO, and other styles
4

Sun, Xihuang, Peng Liu, Yan Ma, Dingsheng Liu, and Yechao Sun. "Streaming Remote Sensing Data Processing for the Future Smart Cities." International Journal of Distributed Systems and Technologies 7, no. 1 (January 2016): 1–14. http://dx.doi.org/10.4018/ijdst.2016010101.

Full text
Abstract:
The explosion of data and the increase in processing complexity, together with the increasing need for real-time processing and concurrent data access, make remote sensing data streaming processing a wide research area to study. This paper introduces the current situation of remote sensing data processing and how timely remote sensing data processing can help build future smart cities. Current research on remote sensing data streaming is also surveyed, and three typical open-source stream processing frameworks are introduced. This paper also discusses some design concerns for remote sensing data streaming processing systems, such as data model and transmission, system model, programming interfaces, storage management, availability, etc. Finally, this research specifically addresses some of the challenges of remote sensing data streaming processing, such as scalability, fault tolerance, consistency, load balancing and throughput.
APA, Harvard, Vancouver, ISO, and other styles
5

Krishnamurthi, Rajalakshmi, Adarsh Kumar, Dhanalekshmi Gopinathan, Anand Nayyar, and Basit Qureshi. "An Overview of IoT Sensor Data Processing, Fusion, and Analysis Techniques." Sensors 20, no. 21 (October 26, 2020): 6076. http://dx.doi.org/10.3390/s20216076.

Full text
Abstract:
In the recent era of the Internet of Things, the dominant role of sensors and the Internet provides a solution to a wide variety of real-life problems. Such applications include smart city, smart healthcare systems, smart building, smart transport and smart environment. However, the real-time IoT sensor data include several challenges, such as a deluge of unclean sensor data and a high resource-consumption cost. As such, this paper addresses how to process IoT sensor data, fusion with other data sources, and analyses to produce knowledgeable insight into hidden data patterns for rapid decision-making. This paper addresses the data processing techniques such as data denoising, data outlier detection, missing data imputation and data aggregation. Further, it elaborates on the necessity of data fusion and various data fusion methods such as direct fusion, associated feature extraction, and identity declaration data fusion. This paper also aims to address data analysis integration with emerging technologies, such as cloud computing, fog computing and edge computing, towards various challenges in IoT sensor network and sensor data analysis. In summary, this paper is the first of its kind to present a complete overview of IoT sensor data processing, fusion and analysis techniques.
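Two of the processing steps the survey covers, missing-data imputation and outlier detection, can be sketched on a single sensor stream. The neighbour-averaging and z-score rules below are common illustrative choices, not methods prescribed by the paper:

```python
from statistics import mean, stdev
from typing import Optional

def clean_sensor_stream(readings: list[Optional[float]], z_thresh: float = 3.0) -> list[float]:
    """Impute missing readings from their neighbours, then clamp z-score outliers."""
    # Missing-data imputation: replace None with the average of the nearest
    # present neighbours (falling back to the global mean at the edges).
    present = [r for r in readings if r is not None]
    global_mean = mean(present)
    imputed = []
    for i, r in enumerate(readings):
        if r is None:
            left = next((x for x in reversed(readings[:i]) if x is not None), global_mean)
            right = next((x for x in readings[i + 1:] if x is not None), global_mean)
            r = (left + right) / 2
        imputed.append(r)

    # Outlier detection: replace values more than z_thresh standard
    # deviations from the mean with the mean itself.
    mu, sigma = mean(imputed), stdev(imputed)
    if sigma == 0:
        return imputed
    return [mu if abs(x - mu) / sigma > z_thresh else x for x in imputed]
```

A dropped reading between 20.0 and 22.0 would be filled in as 21.0, and the cleaned stream keeps its original length, which matters for downstream aggregation and fusion.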
APA, Harvard, Vancouver, ISO, and other styles
6

Nguyen, Minh Duc. "A Scientific Workflow System for Satellite Data Processing with Real-Time Monitoring." EPJ Web of Conferences 173 (2018): 05012. http://dx.doi.org/10.1051/epjconf/201817305012.

Full text
Abstract:
This paper provides a case study on satellite data processing, storage, and distribution in the space weather domain by introducing the Satellite Data Downloading System (SDDS). The approach proposed in this paper was evaluated through real-world scenarios and addresses the challenges related to the specific field. Although SDDS is used for satellite data processing, it can potentially be adapted to a wide range of data processing scenarios in other fields of physics.
APA, Harvard, Vancouver, ISO, and other styles
7

Prabagar, S., Vinay K. Nassa, Senthil V. M, Shilpa Abhang, Pravin P. Adivarekar, and Sridevi R. "Python-based social science applications’ profiling and optimization on HPC systems using task and data parallelism." Scientific Temper 14, no. 03 (September 26, 2023): 870–76. http://dx.doi.org/10.58414/scientifictemper.2023.14.3.48.

Full text
Abstract:
This research addresses the pressing need to optimize Python-based social science applications for high-performance computing (HPC) systems, emphasizing the combined use of task and data parallelism techniques. The paper delves into a substantial body of research, recognizing Python's interpreted nature as a challenge for efficient social science data processing. The paper introduces a Python program that exemplifies the proposed methodology. This program uses task parallelism with multiprocessing and data parallelism with Dask to optimize data processing workflows. It showcases how researchers can effectively manage large datasets and intricate computations on HPC systems. The research offers a comprehensive framework for optimizing Python-based social science applications on HPC systems. It addresses the challenges of Python's performance limitations, data-intensive processing, and memory efficiency. Incorporating insights from a rich literature survey, it equips researchers with valuable tools and strategies for enhancing the efficiency of their social science applications in HPC environments.
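The task-parallel half of the approach (fanning independent per-document computations out with Python's multiprocessing) can be sketched with the standard library alone; the Dask data-parallel half is omitted, and `word_count` is a hypothetical stand-in for a real social science computation:

```python
from multiprocessing import Pool

def word_count(document: str) -> int:
    """Hypothetical stand-in for a per-document social science computation."""
    return len(document.split())

def process_corpus(documents: list[str], workers: int = 4) -> list[int]:
    """Task parallelism: fan independent per-document work out across processes."""
    with Pool(processes=workers) as pool:
        return pool.map(word_count, documents)

if __name__ == "__main__":
    print(process_corpus(["one two three", "four five", "six"]))
```

Because each document is processed independently, the pool sidesteps the interpreter's global interpreter lock by using separate processes, which is the usual route for CPU-bound Python workloads on HPC nodes.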
APA, Harvard, Vancouver, ISO, and other styles
8

Šprem, Šimun, Nikola Tomažin, Jelena Matečić, and Marko Horvat. "Building Advanced Web Applications Using Data Ingestion and Data Processing Tools." Electronics 13, no. 4 (February 9, 2024): 709. http://dx.doi.org/10.3390/electronics13040709.

Full text
Abstract:
Today, advanced websites serve as robust data repositories that constantly collect various user-centered information and prepare it for subsequent processing. The data collected can include a wide range of important information from email addresses, usernames, and passwords to demographic information such as age, gender, and geographic location. User behavior metrics are also collected, including browsing history, click patterns, and time spent on pages, as well as different preferences like product selection, language preferences, and individual settings. Interactions, device information, transaction history, authentication data, communication logs, and various analytics and metrics contribute to the comprehensive range of user-centric information collected by websites. A method to systematically ingest and transfer such differently structured information to a central message broker is thoroughly described. In this context, a novel tool—Dataphos Publisher—for the creation of ready-to-digest data packages is presented. Data acquired from the message broker are employed for data quality analysis, storage, conversion, and downstream processing. A brief overview of the commonly used and freely available tools for data ingestion and processing is also provided.
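The ingestion pattern described, wrapping heterogeneous user-centred records into uniform packages for a central message broker, can be sketched with the standard library. The envelope format is an illustrative assumption, and an in-memory queue stands in for a real broker rather than the Dataphos Publisher itself:

```python
import json
import queue

# An in-memory queue stands in for a real message broker topic.
broker = queue.Queue()

def package_event(user_id: str, event: dict) -> str:
    """Wrap a raw user-centred record in a ready-to-digest JSON envelope."""
    return json.dumps({"user_id": user_id, "payload": event}, sort_keys=True)

def ingest(user_id: str, event: dict) -> None:
    """Package one record and publish it to the broker."""
    broker.put(package_event(user_id, event))
```

Downstream consumers then see a single, predictable message shape regardless of whether the payload was a click event, a preference change, or a transaction record.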
APA, Harvard, Vancouver, ISO, and other styles
9

Chatzakis, Manos, Panagiota Fatourou, Eleftherios Kosmas, Themis Palpanas, and Botao Peng. "Odyssey: A Journey in the Land of Distributed Data Series Similarity Search." Proceedings of the VLDB Endowment 16, no. 5 (January 2023): 1140–53. http://dx.doi.org/10.14778/3579075.3579087.

Full text
Abstract:
This paper presents Odyssey, a novel distributed data-series processing framework that efficiently addresses the critical challenges of exhibiting good speedup and ensuring high scalability in data series processing by taking advantage of the full computational capacity of modern distributed systems comprised of multi-core servers. Odyssey addresses a number of challenges in designing efficient and highly-scalable distributed data series index, including efficient scheduling, and load-balancing without paying the prohibitive cost of moving data around. It also supports a flexible partial replication scheme, which enables Odyssey to navigate through a fundamental trade-off between data scalability and good performance during query answering. Through a wide range of configurations and using several real and synthetic datasets, our experimental analysis demonstrates that Odyssey achieves its challenging goals.
APA, Harvard, Vancouver, ISO, and other styles
10

Vijaya, V. Krishna. "INVOICE DATA EXTRACTION USING OCR TECHNIQUE." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (April 9, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem29981.

Full text
Abstract:
Traditional invoice processing involves manual entry of data, leading to human errors, delays, and increased operational costs. The lack of automation results in inefficiencies, hindering organizations from promptly accessing critical financial information. This research addresses the pressing need for a reliable OCR-based solution to automate invoice data extraction, ultimately improving accuracy, reducing processing time, and enhancing overall business productivity. The project aims to automate invoice data extraction through Optical Character Recognition (OCR) techniques. Leveraging advanced image processing and machine learning, the system will analyze scanned or photographed invoices, extracting relevant information such as vendor details, itemized costs, and dates. This automation streamlines manual data entry processes, enhancing accuracy and efficiency in managing financial records. OCR invoicing is the process of training a template-based OCR model for specific invoice layouts, setting up input paths for these invoices, extracting data, and integrating the extracted data with a structured database. Keywords: Invoice, OCR, YOLO algorithm, Data Extraction, Image Processing, Database Integration.
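The extraction step that follows OCR can be as simple as pattern matching over the recognized text. A hedged sketch, assuming the OCR stage has already produced plain text and using illustrative field labels (this is not the template-based model or YOLO pipeline described above):

```python
import re

def extract_invoice_fields(ocr_text: str) -> dict:
    """Pull vendor, date, and total from OCR'd invoice text with regexes.
    A field that cannot be found maps to None."""
    patterns = {
        "vendor": r"Vendor:\s*(.+)",
        "date": r"Date:\s*(\d{4}-\d{2}-\d{2})",
        "total": r"Total:\s*\$?([\d,]+\.\d{2})",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, ocr_text)
        fields[name] = match.group(1).strip() if match else None
    return fields

sample = """ACME Supplies
Vendor: ACME Supplies Ltd.
Date: 2024-04-09
Total: $1,234.50"""
```

Template-based OCR systems generalize this idea: each invoice layout gets its own set of anchors and patterns, and the matched fields are written to a structured database.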
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Addresses (data processing)"

1

Fowler, Robert Joseph. "Decentralized object finding using forwarding addresses." Thesis, University of Washington, 1985. http://hdl.handle.net/1773/6947.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Underwood, Ian. "An nMOS addressed liquid crystal spatial light modulator." Thesis, University of Edinburgh, 1987. http://hdl.handle.net/1842/1542.

Full text
Abstract:
Coherent optical data processing is recognised, for many applications, as a viable alternative to digital electronic signal processing; the case for using coherent optics is particularly strong when the data to be processed is two-dimensional in nature. It has long been accepted that, in order for coherent optical processing to achieve its full performance potential, two-dimensional spatial light modulators - capable of operating in real time - are essential at both the object plane (where the data is input to the system) and the Fourier plane (where the operation carried out on the data is determined). Most previous research in the field of spatial light modulators has concentrated on optically addressed devices for use in the object plane. This thesis describes a prototype liquid crystal over silicon spatial light modulator built to test the feasibility of using such devices in a coherent optical processor. Optically, the device operates as a binary amplitude modulator, consisting of a square array of 16×16 pixels, each of size 100×100 μm² and located at 200 μm centres. The integrated circuit is designed for a 6 μm wafer fabrication process. Each pixel of the IC contains a static memory element (which stores a digital logic voltage corresponding to the optical state of that pixel) and provides a stable square-wave voltage signal to drive the liquid crystal layer. The component parts of the spatial light modulator are tested individually: the liquid crystal, in test cells, for contrast and switching speed; the IC for electrical performance and optical (flatness) characteristics. The effect of pixellation on optical performance is investigated. The performance of live devices is demonstrated. The results indicate the feasibility of using such a device as a binary amplitude spatial light modulator.
APA, Harvard, Vancouver, ISO, and other styles
3

Young, Jeffrey Scott. "Global address spaces for efficient resource provisioning in the data center." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50261.

Full text
Abstract:
The rise of large data sets, or "Big Data", has coincided with the rise of clusters with large amounts of memory and GPU accelerators that can be used to process rapidly growing data footprints. However, the complexity and performance limitations of sharing memory and accelerators in a cluster limit the options for efficient management and allocation of resources for applications. The global address space model (GAS), and specifically hardware-supported GAS, is proposed as a means to provide a high-performance resource management platform upon which resource sharing between nodes and resource aggregation across nodes can take place. This thesis builds on the initial concept of GAS with a model that is matched to "Big Data" computing and its data transfer requirements. The proposed model, Dynamic Partitioned Global Address Spaces (DPGAS), is implemented using a commodity converged interconnect, HyperTransport over Ethernet (HToE), and a software framework, the Oncilla runtime and API. The DPGAS model and associated hardware and software components are used to investigate two application spaces: resource sharing for time-varying workloads and resource aggregation for GPU-accelerated data warehousing applications. This work demonstrates that hardware-supported GAS can be used to improve the performance and power consumption of memory-intensive applications, and that it can be used to simplify host and accelerator resource management in the data center.
APA, Harvard, Vancouver, ISO, and other styles
4

Suddath, Suzanne Virginia. "Engaging girls in the use of technology : codifying software design characteristics that address girls' needs." Thesis, Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/17522.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Scarlato, Michele. "Sicurezza di rete, analisi del traffico e monitoraggio." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3223/.

Full text
Abstract:
The work is divided into three macro-areas. The first concerns a theoretical analysis of how intrusions work, which software tools are used to carry them out, and how to protect against them (using the devices generically known as firewalls). The second macro-area analyses an intrusion from the outside against sensitive servers on a LAN. This analysis is conducted on files captured by two network interfaces configured in promiscuous mode on a probe inside the LAN. Two interfaces are needed in order to attach to two LAN segments with different subnet masks. The attack is analysed with various software tools. This defines a third part of the work: the part where the files captured by the two interfaces are analysed, first with software that examines full-content data, such as Wireshark, then with software that handles session data, processed with Argus, and finally statistical data, processed with Ntop. The penultimate chapter, before the conclusions, covers the installation of Nagios and its configuration for monitoring, via plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Naturally, Nagios can be configured to monitor any type of service offered on the network.
APA, Harvard, Vancouver, ISO, and other styles
6

Bailey, Peter Richard. "Process-oriented language design for distributed address spaces." Phd thesis, 1997. http://hdl.handle.net/1885/145990.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Addresses (data processing)"

1

United States. Congress. House. Committee on Government Reform. Subcommittee on the Census. Oversight of the 2000 census: A midterm evaluation of the local update of census addresses program : hearing before the Subcommittee on the Census of the Committee on Government Reform, House of Representatives, One Hundred Sixth Congress, first session, September 29, 1999. Washington: U.S. G.P.O., 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Fister, Mark. A pocket tour of money on the Internet. San Francisco: Sybex, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Maryland State Government Geographic Information Coordinating Committee, Database and Resource Development Subcommittee. Recommendations on addressing in support of address matching and geocoding: Adopted April 12, 1995. [Annapolis]: The Committee, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yager, Cindy, ed. Writer's guide to Internet resources. New York: Macmillan USA, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Briggs-Erickson, Carol. Environmental guide to the Internet. 2nd ed. Rockville, Md: Government Institutes, Inc., 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Briggs-Erickson, Carol. Environmental guide to the Internet. 3rd ed. Rockville, Md: Government Institutes, Inc., 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Levinson, Daniel, ed. Computer applications in clinical practice: An overview. New York: Macmillan Pub. Co., 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Evans, David J., and Meeting on the Advanced Research Topic "Sparsity and Its Applications" (1983: University of Technology, Loughborough, England), eds. Sparsity and its applications. Cambridge [Cambridgeshire]: Cambridge University Press, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Winer, Dov. Global Jewish networking handbook. Jerusalem: Department of Information, World Zionist Organization, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, Cricket. DNS and BIND cookbook. Cambridge: O'Reilly, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Addresses (data processing)"

1

Vasiliu, Laurenţiu, Dumitru Roman, and Radu Prodan. "Extreme and Sustainable Graph Processing for Green Finance Investment and Trading." In AI, Data, and Digitalization, 120–35. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-53770-7_8.

Full text
Abstract:
The Graph-Massivizer project, funded by the Horizon Europe research and innovation program, aims to create a high-performance and sustainable platform for extreme data processing. This paper focuses on one use case that addresses the limitations of financial market data for green and sustainable investments. The project allows for the fast, semi-automated creation of realistic and affordable synthetic (extreme) financial datasets of any size for testing and improving AI-enhanced financial algorithms for green investment and trading. Synthetic data usage removes biases, ensures data affordability and completeness, consolidates financial algorithms, and provides a statistically relevant sample size for advanced back-testing.
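The kind of synthetic market data the use case describes can be approximated, at its simplest, by a seeded geometric random walk. This sketch is only a toy illustration of synthetic series generation for back-testing, not the Graph-Massivizer method:

```python
import random

def synthetic_prices(n_days: int, start: float = 100.0,
                     daily_vol: float = 0.02, seed: int = 42) -> list[float]:
    """Generate a synthetic daily price series as a geometric random walk.
    A fixed seed makes the dataset reproducible across back-test runs."""
    rng = random.Random(seed)
    prices = [start]
    for _ in range(n_days - 1):
        prices.append(prices[-1] * (1 + rng.gauss(0.0, daily_vol)))
    return prices
```

Because the generator is seeded, the same "dataset" can be regenerated at any size on demand, which is the property that makes synthetic data attractive for affordable, bias-free back-testing.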
APA, Harvard, Vancouver, ISO, and other styles
2

Reshadat, Vahideh, Tess Kolkman, Kalliopi Zervanou, Yingqian Zhang, Alp Akçay, Carlijn Snijder, Ryan McDonnell, et al. "Knowledge Modeling and Incident Analysis for Special Cargo." In Technologies and Applications for Big Data Value, 519–44. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78307-5_23.

Full text
Abstract:
The airfreight industry of shipping goods with special handling needs, also known as special cargo, suffers from nontransparent shipping processes, resulting in inefficiency. The LARA project (Lane Analysis and Route Advisor) aims at addressing these limitations and bringing innovation in special cargo route planning so as to improve operational deficiencies and customer services. In this chapter, we discuss the special cargo domain knowledge elicitation and modeling into an ontology. We also present research into cargo incidents, namely, automatic classification of incidents in free-text reports and experiments in detecting significant features associated with specific cargo incident types. Our work mainly addresses two of the main technical priority areas defined by the European Big Data Value (BDV) Strategic Research and Innovation Agenda, namely, the application of data analytics to improve data understanding and providing optimized architectures for analytics of data-at-rest and data-in-motion. The overall goal is to develop technologies contributing to the data value chain in the logistics sector. It addresses the horizontal concerns Data Analytics, Data Processing Architectures, and Data Management of the BDV Reference Model. It also addresses the vertical dimension Big Data Types and Semantics.
APA, Harvard, Vancouver, ISO, and other styles
3

Papakonstantinou, Mihalis, Manos Karvounis, Giannis Stoitsis, and Nikos Manouselis. "Deploying a Scalable Big Data Platform to Enable a Food Safety Data Space." In Data Spaces, 227–48. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98636-0_11.

Full text
Abstract:
The main goal of this chapter is to share the technical details and best practices for setting up a scalable Big Data platform that addresses the data challenges of the food industry. The amount of data that is generated in our food supply chain is rapidly increasing. The data is published by hundreds of organizations on a daily basis, in many different languages and formats, making its aggregation, processing, and exchange a challenge. The efficient linking and mining of the global food data can enable the generation of insights and predictions that can help food safety experts to make critical decisions. All the food companies as well as national authorities and agencies may highly benefit from the data services of such a data platform. The chapter focuses on the architecture and software stack that was used to set up a data platform for a specific business use case. We describe how the platform was designed following data and technology standards to ensure the interoperability between systems and the interconnection of data. We share best practices on the deployment of data platforms such as identification of records, orchestrating pipelines, automating the aggregation workflow, and monitoring of a Big Data platform. The platform was developed in the context of the H2020 BigDataGrapes project, was awarded by communities such as Elasticsearch, and is further developed in the H2020 The Food Safety Market project in order to enable the setup of a data space for the food safety sector.
APA, Harvard, Vancouver, ISO, and other styles
4

Serrano, Maria A., César A. Marín, Anna Queralt, Cristovao Cordeiro, Marco Gonzalez, Luis Miguel Pinho, and Eduardo Quiñones. "An Elastic Software Architecture for Extreme-Scale Big Data Analytics." In Technologies and Applications for Big Data Value, 89–110. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-78307-5_5.

Full text
Abstract:
This chapter describes a software architecture for processing big-data analytics considering the complete compute continuum, from the edge to the cloud. The new generation of smart systems requires processing a vast amount of diverse information from distributed data sources. The software architecture presented in this chapter addresses two main challenges. On the one hand, a new elasticity concept enables smart systems to satisfy the performance requirements of extreme-scale analytics workloads. By extending the elasticity concept (known at cloud side) across the compute continuum in a fog computing environment, combined with the usage of advanced heterogeneous hardware architectures at the edge side, the capabilities of the extreme-scale analytics can significantly increase, integrating both responsive data-in-motion and latent data-at-rest analytics into a single solution. On the other hand, the software architecture also focuses on the fulfilment of the non-functional properties inherited from smart systems, such as real-time, energy-efficiency, communication quality and security, that are of paramount importance for many application domains such as smart cities, smart mobility and smart manufacturing.
APA, Harvard, Vancouver, ISO, and other styles
5

Kotze, Haidee, Minna Korhonen, Adam Smith, and Bertus van Rooy. "Chapter 2. Salient differences between Australian oral parliamentary discourse and its official written records." In Exploring Language and Society with Big Data, 54–88. Amsterdam: John Benjamins Publishing Company, 2023. http://dx.doi.org/10.1075/scl.111.02kot.

Full text
Abstract:
This chapter addresses the question of editorial practice for the Australian Hansard with the use of an aligned corpus of transcribed audio recordings and the corresponding Hansard records, covering the period 1946–2015. A more traditional, qualitative, bottom-up approach is taken by manually analysing the data to compile a list of differences in the two types of records. In addition, a deductive, quantitative approach is adopted by using the multidimensional analysis method of Biber (1988) to identify significant differences in the frequencies of (clusters of) features between the oral transcripts and written Hansard records and interpret these. Our primary aim is to provide insight into methodological questions associated with working with big linguistic data. Alongside this, we report findings about differences between the written Hansard and the original speeches: reduction of spoken language processing features and informality, greater conservatism, and more density – although these differences decrease over time.
APA, Harvard, Vancouver, ISO, and other styles
6

Miller, Gloria J. "Artificial Intelligence Project Success Factors—Beyond the Ethical Principles." In Lecture Notes in Business Information Processing, 65–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98997-2_4.

Full text
Abstract:
The algorithms implemented through artificial intelligence (AI) and big data projects are used in life-and-death situations. Despite research that addresses varying aspects of moral decision-making based upon algorithms, the definition of project success is less clear. Nevertheless, researchers place the burden of responsibility for ethical decisions on the developers of AI systems. This study used a systematic literature review to identify five categories of AI project success factors in 17 groups related to moral decision-making with algorithms. It translates AI ethical principles into practical project deliverables and actions that underpin the success of AI projects. It considers success over time by investigating the development, usage, and consequences of moral decision-making by algorithmic systems. Moreover, the review reveals and defines AI success factors within the project management literature. Project managers and sponsors can use the results during project planning and execution.
APA, Harvard, Vancouver, ISO, and other styles
7

Wolters, Lisan, and Marwan Hassani. "Predicting Activities of Interest in the Remainder of Customer Journeys Under Online Settings." In Lecture Notes in Business Information Processing, 145–57. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-27815-0_11.

Full text
Abstract:
Customer journey analysis is important for organizations to get to know as much as possible about the main behavior of their customers. This provides the basis to improve the customer experience within their organization. This paper addresses the problem of predicting the occurrence of a certain activity of interest in the remainder of the customer journey that follows the occurrence of another specific activity. For this, we propose the HIAP framework, which uses process mining techniques to analyze customer journeys. Different prediction models are researched to investigate which model is most suitable for high importance activity prediction. Furthermore, the effect of using a sliding window or landmark model for (re)training a model is investigated. The framework is evaluated using a real health insurance dataset and a benchmark dataset. The efficiency and prediction quality results highlight the usefulness of the framework under various realistic online business settings.
APA, Harvard, Vancouver, ISO, and other styles
8

Shoush, Mahmoud, and Marlon Dumas. "When to Intervene? Prescriptive Process Monitoring Under Uncertainty and Resource Constraints." In Lecture Notes in Business Information Processing, 207–23. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16171-1_13.

Full text
Abstract:
Prescriptive process monitoring approaches leverage historical data to prescribe runtime interventions that will likely prevent negative case outcomes or improve a process’s performance. A centerpiece of a prescriptive process monitoring method is its intervention policy: a decision function determining if and when to trigger an intervention on an ongoing case. Previous proposals in this field rely on intervention policies that consider only the current state of a given case. These approaches do not consider the tradeoff between triggering an intervention in the current state, given the level of uncertainty of the underlying predictive models, versus delaying the intervention to a later state. Moreover, they assume that a resource is always available to perform an intervention (infinite capacity). This paper addresses these gaps by introducing a prescriptive process monitoring method that filters and ranks ongoing cases based on prediction scores, prediction uncertainty, and causal effect of the intervention, and triggers interventions to maximize a gain function, considering the available resources. The proposal is evaluated using a real-life event log. The results show that the proposed method outperforms existing baselines regarding total gain.
APA, Harvard, Vancouver, ISO, and other styles
9

Mariani, Fabio, Lynn Rother, and Max Koss. "Teaching Provenance to AI." In Edition Museum, 163–72. Bielefeld, Germany: transcript Verlag, 2023. http://dx.doi.org/10.14361/9783839467107-014.

Full text
Abstract:
Our paper addresses how artificial intelligence technologies can transform museum records of provenance into structured and machine-readable data, which is the first critical step in undertaking a large-scale cross-institutional analysis of object history. Drawing on research on natural language processing (NLP), we have identified sentence boundary disambiguation and span categorization as highly effective techniques for extracting and structuring information from provenance texts. Our paper focuses on a provenance-specific annotation scheme that enables us to retain historical nuances when constructing provenance linked open data (PLOD).
APA, Harvard, Vancouver, ISO, and other styles
10

Mussabayev, Rustam, and Ravil Mussabayev. "Superior Parallel Big Data Clustering Through Competitive Stochastic Sample Size Optimization in Big-Means." In Intelligent Information and Database Systems, 224–36. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-4985-0_18.

Full text
Abstract:
This paper introduces a novel K-means clustering algorithm, an advancement on the conventional Big-means methodology. The proposed method efficiently integrates parallel processing, stochastic sampling, and competitive optimization to create a scalable variant designed for big data applications. It addresses scalability and computation time challenges typically faced with traditional techniques. The algorithm adjusts sample sizes dynamically for each worker during execution, optimizing performance. Data from these sample sizes are continually analyzed, facilitating the identification of the most efficient configuration. By incorporating a competitive element among workers using different sample sizes, efficiency within the Big-means algorithm is further stimulated. In essence, the algorithm balances computational time and clustering quality by employing a stochastic, competitive sampling strategy in a parallel computing setting.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Addresses (data processing)"

1

Gol, Kursat, and Metin Salihmuhsin. "Statistical data acquisition via filtering URL addresses with NetFPGA open source hardware platform." In 2016 24th Signal Processing and Communication Application Conference (SIU). IEEE, 2016. http://dx.doi.org/10.1109/siu.2016.7495798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhao, Qizhen, and Tochi Onuegbu. "WorkTune: A Music-Assisted Writing Efficiency Evaluation and Promotion Platform using Artificial Intelligence and Natural Language Processing." In 5th International Conference on Artificial Intelligence and Big Data. Academy & Industry Research Collaboration Center, 2024. http://dx.doi.org/10.5121/csit.2024.140423.

Full text
Abstract:
In a world where music accompanies various tasks, our paper addresses the challenge of understanding the impact of background music on work efficiency. The background problem centers on the lack of precision in existing studies, overlooking individual preferences and work types. Our proposed solution is a Python-based application that evaluates an individual's work efficiency while listening to different music genres [1]. The user-friendly interface incorporates features like music category selection, login options, and real-time statistics tracking [2][3]. Challenges, such as diverse user interactions and limited data, were addressed through a feedback channel for continuous improvement. The application underwent experiments, including regression model evaluations for essay grading and SVM parameter tuning [4]. Results indicated superior performance, emphasizing the relevance of ensemble learning and optimal parameter selection. This application provides a nuanced understanding of how background music influences work efficiency, offering a personalized approach that people can leverage for enhanced productivity and satisfaction in various work scenarios.
APA, Harvard, Vancouver, ISO, and other styles
3

Jenkins, R. Brian, and Bradley D. Clymer. "Acoustooptic comparison switch with polarization multiplexed analog addressing." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1991. http://dx.doi.org/10.1364/oam.1991.wu2.

Full text
Abstract:
Switching networks based on perfect shuffle or Banyan interconnection techniques are useful in parallel processing and telecommunications. The integral components in these networks are the comparison switches that control the routing of data through the network. An acoustooptic comparison switch has been implemented that simplifies control using analog addressing techniques and ensures high-bandwidth data transmission by multiplexing the data and the address on orthogonal polarizations of light. The addresses can be easily extracted and processed apart from the data using polarizing beam splitters so that the data transfer can continue at optical data rates. The analog addresses are encoded either as the frequencies of amplitude modulation of the optical carrier or as discrete cw optical power levels. Frequency-encoded addresses are compared using frequency detectors typically found in phase-locked loops, and power-encoded addresses can be photodetected and compared with a simple analog comparator.
APA, Harvard, Vancouver, ISO, and other styles
4

Flamant, P. H. "An Overview of CNRS Activities in Lidar Signal Modeling, Data Analysis, and Assimilation." In Coherent Laser Radar. Washington, D.C.: Optica Publishing Group, 1995. http://dx.doi.org/10.1364/clr.1995.wb1.

Full text
Abstract:
During the last decade CNRS, supported by CNES and ESA, conducted theoretical and experimental studies to advance airborne and space-based wind lidar applications. Among others, the effort addresses modeling and numerical simulations to assess the performance, advanced signal processing, lidar data and meteorological model interaction, and critical components like single-mode TE-CO2 lidar.
APA, Harvard, Vancouver, ISO, and other styles
5

Alet, Ferran, Rohan Chitnis, Leslie P. Kaelbling, and Tomas Lozano-Perez. "Finding Frequent Entities in Continuous Data." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/275.

Full text
Abstract:
In many applications that involve processing high-dimensional data, it is important to identify a small set of entities that account for a significant fraction of detections. Rather than formalize this as a clustering problem, in which all detections must be grouped into hard or soft categories, we formalize it as an instance of the frequent items or heavy hitters problem, which finds groups of tightly clustered objects that have a high density in the feature space. We show that the heavy hitters formulation generates solutions that are more accurate and effective than the clustering formulation. In addition, we present a novel online algorithm for heavy hitters, called HAC, which addresses problems in continuous space, and demonstrate its effectiveness on real video and household domains.
APA, Harvard, Vancouver, ISO, and other styles
6

Lash, Alex, Kevin Murray, and Gregory Mocko. "Natural Language Processing Applications in Requirements Engineering." In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/detc2012-71084.

Full text
Abstract:
In the design process, the requirements serve as the benchmark for the entire product. Therefore, the quality of requirement statements is essential to the success of a design. Because of their ergonomic-nature, most requirements are written in natural language (NL). However, writing requirements in natural language presents many issues such as ambiguity, specification issues, and incompleteness. Therefore, identifying issues in requirements involves analyzing these NL statements. This paper presents a linguistic approach to requirement analysis, which utilizes grammatical elements of requirements statements to identify requirement statement issues. These issues are organized by the entity—word, sentence, or document—that they affect. The field of natural language processing (NLP) provides a core set of tools that can aid with this linguistic analysis and provide a method to create a requirement analysis support tool. NLP addresses requirements on processing levels: lexical, syntactic, semantic, and pragmatic. While processing on the lexical and syntactic level are well-defined, mining semantic and pragmatic data is performed in a number of different methods. This paper provides an overview of these current requirement analysis methods in light of the presented linguistic approach. This overview will be used to identify areas for further research and development. Finally, a prototype requirement analysis support tool will be presented. This tool seeks to demonstrate how the semantic processing level can begin to be addressed in requirement analysis. The tool will analyze a sample set of requirements from a family of military tactical vehicles (FMTV) requirements document. It implements NLP tools to semantically compare requirements statements based upon their grammatical subject.
APA, Harvard, Vancouver, ISO, and other styles
7

Speh, Emma, Ari Segel, Yash Thacker, Dan Marcus, Muriah D. Wheelock, and Adam T. Eggebrecht. "NeuroDOTpy: A Python Neuroimaging Toolbox for DOT." In Bio-Optics: Design and Application. Washington, D.C.: Optica Publishing Group, 2023. http://dx.doi.org/10.1364/boda.2023.jtu4b.24.

Full text
Abstract:
Adapting the existing MATLAB NeuroDOT toolbox to Python addresses challenges of processing functional near infrared spectroscopy (fNIRS) and diffuse optical tomography (DOT) data and supports common pipelines for pre-processing and analysis in a Python-library format.
APA, Harvard, Vancouver, ISO, and other styles
8

Rudusans, Gints, and Gatis Vitols. "Machine learning methods for classification of sensitive data." In Research for Rural Development 2021 : annual 27th International scientific conference proceedings. Latvia University of Life Sciences and Technologies, 2021. http://dx.doi.org/10.22616/rrd.27.2021.046.

Full text
Abstract:
In the era of Big Data there are a lot of new challenges – understanding, processing, and securing the data, assuring data quality, dealing with data growth and other challenges. One of the challenges is to identify and classify data sets in different systems which must follow the conditions defined by different regulations. The classification of these data sets can be automated using machine learning methods. The aim of the research is to provide machine learning methods for classifying sensitive data. The research is based on analysis and comparison of European Union legislation and scientific literature, which addresses issues of data classification using machine learning methods. Special attention is paid to sensitive data defined by the General Data Protection Regulation (GDPR). The main focus in this research is on supervised learning algorithms, where one of the most effective is Naïve Bayes classifier. In order to achieve good results, there is a need to find a proper training data set. Usage of hybrid methods provides a new way for increasing performance of classifiers.
APA, Harvard, Vancouver, ISO, and other styles
9

Vanderheggen, Kayo, Nate Meredith, Joost Janssen, and Alberto Morandi. "Bringing Big Data Technology to Wind Turbine Installation Vessels." In SNAME Maritime Convention. SNAME, 2021. http://dx.doi.org/10.5957/smc-2021-062.

Full text
Abstract:
Digitalization is a key component of the ongoing Energy Transition. Although the offshore and maritime industries tend to be conservative in the adoption of new technologies, in recent years a digital journey was embraced to stay competitive, safe, and efficient. Data from mobile offshore units can be transformed into something valuable. However, collecting and processing of system’s data requires proper infrastructure, a software platform that handles data delivery and applications that translate the data into valuable information. The challenge is therefore to turn good ideas and intentions into solutions that add real value. With this challenge in mind, in recent years GustoMSC | NOV worked on Big Data technology for wind turbine installation vessels (WTIVs). The purpose of this endeavor is to assist our end users in increasing the safety and efficiency of their operations. This paper addresses some key aspects and components of this digital journey and shares experiences on merging Information Technology (IT) and Operational Technology (OT) environments in an ongoing effort to fulfill the promise that Industrial Internet of Things (IIoT) technology brings. A practical example is presented where Big Data is used to boost the performance of mobile offshore wind installation units.
APA, Harvard, Vancouver, ISO, and other styles
10

Guo, Shuaishuai, and Jonathan Sahagun. "AirWatch: A Real-Time and Fine-Granularity Air Quality Monitoring and Analytical System using Machine Learning and Drone Technology." In 5th International Conference on Advanced Natural Language Processing. Academy & Industry Research Collaboration Center, 2024. http://dx.doi.org/10.5121/csit.2024.141017.

Full text
Abstract:
This paper addresses the critical environmental challenge of air quality degradation, exacerbated by industrial emissions, vehicular pollutants, and agricultural activities [1]. Our proposed solution, a Real-Time and Fine-Granularity Air Quality Monitoring and Analytical System, leverages machine learning and drone technology to dynamically monitor and analyze air quality across diverse locations and altitudes. By integrating drone-mounted sensors, advanced machine learning algorithms, and a user-friendly interface, the system offers unprecedented spatial and temporal resolution in air quality assessment. The study navigated through limitations such as data transmission reliability and the complexity of real-time data analysis, employing robust communication protocols and enhanced analytical models for improved accuracy [2]. Experimentation across various urban and rural settings demonstrated the system's effectiveness in identifying pollution hotspots and predicting air quality trends, with significant improvements over traditional stationary monitoring methods. Our findings highlight the potential of combining drone mobility with machine learning efficiency to revolutionize air quality monitoring, making it an indispensable tool for environmental management and public health protection [3].
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Addresses (data processing)"

1

Tayeb, Shahab. Taming the Data in the Internet of Vehicles. Mineta Transportation Institute, January 2022. http://dx.doi.org/10.31979/mti.2022.2014.

Full text
Abstract:
As an emerging field, the Internet of Vehicles (IoV) has a myriad of security vulnerabilities that must be addressed to protect system integrity. To stay ahead of novel attacks, cybersecurity professionals are developing new software and systems using machine learning techniques. Neural network architectures improve such systems, including Intrusion Detection Systems (IDSs), by implementing anomaly detection, which differentiates benign data packets from malicious ones. For an IDS to best predict anomalies, the model is trained on data that is typically pre-processed through normalization and feature selection/reduction. These pre-processing techniques play an important role in training a neural network to optimize its performance. This research studies the impact of applying normalization techniques as a pre-processing step to learning, as used by the IDSs. This report proposes a Deep Neural Network (DNN) model with two hidden layers for IDS architecture and compares two commonly used normalization pre-processing techniques. Our findings are evaluated using accuracy, Area Under Curve (AUC), Receiver Operating Characteristic (ROC), F-1 Score, and loss. The experiments demonstrate that Z-Score outperforms no-normalization and the use of Min-Max normalization.
APA, Harvard, Vancouver, ISO, and other styles
2

van der Sloot, Bart. The Quality of Life: Protecting Non-personal Interests and Non-personal Data in the Age of Big Data. Universitätsbibliothek J. C. Senckenberg, Frankfurt am Main, 2021. http://dx.doi.org/10.21248/gups.64579.

Full text
Abstract:
Under the current legal paradigm, the rights to privacy and data protection provide natural persons with subjective rights to protect their private interests, such as related to human dignity, individual autonomy and personal freedom. In principle, when data processing is based on non-personal or aggregated data or when such data processes have an impact on societal, rather than individual, interests, citizens cannot rely on these rights. Although this legal paradigm has worked well for decades, it is increasingly put under pressure because Big Data processes are typically based on indiscriminate rather than targeted data collection, because the high volumes of data are processed on an aggregated rather than a personal level, and because the policies and decisions based on the statistical correlations found through algorithmic analytics are mostly addressed at large groups or society as a whole rather than specific individuals. This means that large parts of the data-driven environment are currently left unregulated and that individuals are often unable to rely on their fundamental rights when addressing the more systemic effects of Big Data processes. This article will discuss how this tension might be relieved by turning to the notion ‘quality of life’, which has the potential of becoming the new standard for the European Court of Human Rights (ECtHR) when dealing with privacy-related cases.
APA, Harvard, Vancouver, ISO, and other styles
3

Guicheney, William, Tinashe Zimani, Hope Kyarisiima, and Louisa Tomar. Big Data in the Public Sector: Selected Applications and Lessons Learned. Inter-American Development Bank, October 2016. http://dx.doi.org/10.18235/0007024.

Full text
Abstract:
This paper analyzes different ways in which big data can be leveraged to improve the efficiency and effectiveness of government. It describes five cases where massive and diverse sets of information are gathered, processed, and analyzed in three different policy areas: smart cities, taxation, and citizen security. The cases, compiled from extensive desk research and interviews with leading academics and practitioners in the field of data analytics, have been analyzed from the perspective of public servants interested in big data and thus address both the technical and the institutional aspects of the initiatives. Based on the case studies, a policy guide was built to orient public servants in Latin America and the Caribbean in the implementation of big data initiatives and the promotion of a data ecosystem. The guide covers aspects such as leadership, governance arrangements, regulatory frameworks, data sharing, and privacy, as well as considerations for storing, processing, analyzing, and interpreting data.
APA, Harvard, Vancouver, ISO, and other styles
4

Ruby, Jeffrey, Richard Massaro, John Anderson, and Robert Fischer. Three-dimensional geospatial product generation from tactical sources, co-registration assessment, and considerations. Engineer Research and Development Center (U.S.), February 2023. http://dx.doi.org/10.21079/11681/46442.

Full text
Abstract:
According to Army Multi-Domain Operations (MDO) doctrine, generating timely, accurate, and exploitable geospatial products from tactical platforms is a critical capability to meet threats. The US Army Corps of Engineers, Engineer Research and Development Center, Geospatial Research Laboratory (ERDC-GRL) is carrying out 6.2 research to facilitate the creation of three-dimensional (3D) products from tactical sensors to include full-motion video, framing cameras, and sensors integrated on small Unmanned Aerial Systems (sUAS). This report describes an ERDC-GRL processing pipeline comprising custom code, open-source software, and commercial off-the-shelf (COTS) tools to geospatially rectify tactical imagery to authoritative foundation sources. Four datasets from different sensors and locations were processed against National Geospatial-Intelligence Agency–supplied foundation data. Results showed that the co-registration of tactical drone data to reference foundation varied from 0.34 m to 0.75 m, exceeding the accuracy objective of 1 m described in briefings presented to Army Futures Command (AFC) and the Assistant Secretary of the Army for Acquisition, Logistics and Technology (ASA(ALT)). A discussion summarizes the results, describes steps to address processing gaps, and considers future efforts to optimize the pipeline for generation of geospatial data for specific end-user devices and tactical applications.
APA, Harvard, Vancouver, ISO, and other styles
5

Zhao, George, Grang Mei, Bulent Ayhan, Chiman Kwan, and Venu Varma. DTRS57-04-C-10053 Wave Electromagnetic Acoustic Transducer for ILI of Pipelines. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), March 2005. http://dx.doi.org/10.55274/r0012049.

Full text
Abstract:
In this project, Intelligent Automation, Incorporated (IAI) and Oak Ridge National Lab (ORNL) propose a novel and integrated approach to inspect the mechanical dents and metal loss in pipelines. It combines the state-of-the-art SH wave Electromagnetic Acoustic Transducer (EMAT) technique, through detailed numerical modeling, data collection instrumentation, and advanced signal processing and pattern classifications, to detect and characterize mechanical defects in the underground pipeline transportation infrastructures. The technique has four components: (1) thorough guided wave modal analysis, (2) recently developed three-dimensional (3-D) Boundary Element Method (BEM) for best operational condition selection and defect feature extraction, (3) ultrasonic Shear Horizontal (SH) wave EMAT sensor design and data collection, and (4) advanced signal processing algorithms such as a nonlinear split-spectrum filter, Principal Component Analysis (PCA), and Discriminant Analysis (DA) for signal-to-noise-ratio enhancement, crack signature extraction, and pattern classification. This technology not only can effectively address the problems with the existing methods, i.e., detect the mechanical dents and metal loss in the pipelines consistently and reliably, but is also able to determine the defect shape and size to a certain extent.
APA, Harvard, Vancouver, ISO, and other styles
6

Papasodoro, C., D. Bélanger, G. Légaré-Couture, and H. Russel. Assessment of approaches and costs associated with the correction of the HRDEM product data in the Canadian Arctic. Natural Resources Canada/CMSS/Information Management, 2023. http://dx.doi.org/10.4095/331974.

Full text
Abstract:
The High-Resolution Digital Elevation Model (HRDEM) was created as part of the National Elevation Data Strategy to provide detailed elevation data across the country. For the Canadian Arctic, the HRDEM was based on the ArcticDEM initiative with additional post-processing by the Canada Centre for Mapping and Earth Observation to create a 2-meter Digital Surface Model (DSM) covering a geographic region of approximately 4.6 million km2. This report provides details on the investigation of the data issues within HRDEM in the North, available open and commercial sources of elevation data that could be used to improve the product, and technologies available to generate high resolution DSM at similar levels of accuracy and resolution than the current HRDEM. In addition, the report summarizes the results of a research into the common, as well as more advanced (e.g., machine learning), methods for improving the product. In summary, the intent of this investigation was to provide supporting information to address the data anomalies in HRDEM and present a path forward.
APA, Harvard, Vancouver, ISO, and other styles
7

Beavers, Calvin, Chad Day, Austin Krietemeyer, Scott Peterson, Yushin Ahn, and Xiaojun Li. Mapping of Pavement Conditions Using Smartphone/Tablet LiDAR Case Study: Sensor Performance Comparison. Mineta Transportation Institute, July 2024. http://dx.doi.org/10.31979/mti.2024.2224.

Full text
Abstract:
Poor road conditions affect millions of drivers, and assessing the condition of paved surfaces is a critical step towards repairing them. This project explores the feasibility of using the Apple iPad Pro LiDAR sensor as a cost-effective tool for assessing the damage and condition of paved surfaces. Our research aims to provide accurate and precise measurements using readily available consumer devices and compare the results to state-of-the-art equipment. This investigation involved visual inspection, identification, and classification of pavement distresses, followed by a comparison of the iPad and iPhone LiDAR data with a survey-grade terrestrial laser scanner. The project revealed several limitations of the iPad Pro-based LiDAR approach. The level of detail captured in the scans was relatively low, with a best-case resolution of 1 cm and an inability to detect smaller cracks and shallow potholes. Longer scans (in terms of both time and distance) led to geometric anomalies in the surface models. Colorized scans provided some visual contrast, aiding in the identification of damage, particularly on moderately damaged concrete surfaces. The potential sources of error were identified, including the performance of the Inertial Measurement Unit (IMU), the limitations of the LiDAR sensor itself, and the opaque nature of onboard data processing within the 3D Scanner App. Suggestions for improvement included the use of gimbal stabilizers to enhance scan quality and the exploration of more intensive PC-based processing for raw data analysis. Hardware advancements by Apple and software enhancements by app developers were also highlighted as potential areas for future improvement. While the project revealed limitations and challenges, the authors acknowledge the possibility of future hardware upgrades, augmented reality advancements, and improvements in sensor accuracy and processing. 
However, based on this project’s findings, the iPad Pro LiDAR approach currently falls short of providing the necessary resolution and accuracy required for comprehensive roadway damage assessment. Results indicate that additional developments are necessary to address the identified limitations and make this method a viable and cost-effective solution for roadway surface evaluation.
APA, Harvard, Vancouver, ISO, and other styles
8

Raikow, David, and Kelly Kozar. Quality assurance plan for water quality monitoring in the Pacific Island Network. National Park Service, 2023. http://dx.doi.org/10.36967/2300648.

Full text
Abstract:
In accordance with guidelines set forth by the National Park Service Inventory and Monitoring Division, a quality-assurance plan has been created for use by the Pacific Island Network in the implementation of water quality monitoring protocols, including the marine water quality protocol (Raikow et al. 2023) and future water quality protocols that will address streams and standing waters. This quality-assurance plan documents the standards, policies, and procedures used by the Pacific Island Network for activities specifically related to the collection, processing, storage, analysis, and publication of monitoring data. The policies and procedures documented in this quality-assurance plan complement quality-assurance efforts for other components of the overall protocol workflow, including initial creation of the protocol as described in the protocol narrative and quality-assurance plans for other monitoring activities conducted by the Pacific Island Network.
APA, Harvard, Vancouver, ISO, and other styles
9

Raikow, David, and Kelly Kozar. Quality assurance plan for water quality monitoring in the Pacific Island Network. National Park Service, 2023. http://dx.doi.org/10.36967/2300662.

Full text
Abstract:
In accordance with guidelines set forth by the National Park Service Inventory and Monitoring Division, a quality-assurance plan has been created for use by the Pacific Island Network in the implementation of water quality monitoring protocols, including the marine water quality protocol (Raikow et al. 2023) and future water quality protocols that will address streams and standing waters. This quality-assurance plan documents the standards, policies, and procedures used by the Pacific Island Network for activities specifically related to the collection, processing, storage, analysis, and publication of monitoring data. The policies and procedures documented in this quality-assurance plan complement quality-assurance efforts for other components of the overall protocol workflow, including initial creation of the protocol as described in the protocol narrative and quality-assurance plans for other monitoring activities conducted by the Pacific Island Network.
APA, Harvard, Vancouver, ISO, and other styles
10

Leavy, Michelle B., Danielle Cooke, Sarah Hajjar, Erik Bikelman, Bailey Egan, Diana Clarke, Debbie Gibson, Barbara Casanova, and Richard Gliklich. Outcome Measure Harmonization and Data Infrastructure for Patient-Centered Outcomes Research in Depression: Report on Registry Configuration. Agency for Healthcare Research and Quality (AHRQ), November 2020. http://dx.doi.org/10.23970/ahrqepcregistryoutcome.

Full text
Abstract:
Background: Major depressive disorder is a common mental disorder. Many pressing questions regarding depression treatment and outcomes exist, and new, efficient research approaches are necessary to address them. The primary objective of this project is to demonstrate the feasibility and value of capturing the harmonized depression outcome measures in the clinical workflow and submitting these data to different registries. Secondary objectives include demonstrating the feasibility of using these data for patient-centered outcomes research and developing a toolkit to support registries interested in sharing data with external researchers. Methods: The harmonized outcome measures for depression were developed through a multi-stakeholder, consensus-based process supported by AHRQ. For this implementation effort, the PRIME Registry, sponsored by the American Board of Family Medicine, and PsychPRO, sponsored by the American Psychiatric Association, each recruited 10 pilot sites from existing registry sites, added the harmonized measures to the registry platform, and submitted the project for institutional review board review. Results: The process of preparing each registry to calculate the harmonized measures produced three major findings. First, some clarifications were necessary to make the harmonized definitions operational. Second, some data necessary for the measures are not routinely captured in structured form (e.g., PHQ-9 item 9, adverse events, suicide ideation and behavior, and mortality data). Finally, capture of the PHQ-9 requires operational and technical modifications. The next phase of this project will focus on collection of the baseline and follow-up PHQ-9s, as well as other supporting clinical documentation. In parallel with the data collection process, the project team will examine the feasibility of using natural language processing to extract information on PHQ-9 scores, adverse events, and suicidal behaviors from unstructured data.
Conclusion: This pilot project represents the first practical implementation of the harmonized outcome measures for depression. Initial results indicate that it is feasible to calculate the measures within the two patient registries, although some challenges were encountered related to the harmonized definition specifications, the availability of the necessary data, and the clinical workflow for collecting the PHQ-9. The ongoing data collection period, combined with an evaluation of the utility of natural language processing for these measures, will produce more information about the practical challenges, value, and burden of using the harmonized measures in the primary care and mental health setting. These findings will be useful to inform future implementations of the harmonized depression outcome measures.
APA, Harvard, Vancouver, ISO, and other styles
