Journal articles on the topic 'Data capturing techniques'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Data capturing techniques.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Bolton, Amy E., Randolph S. Astwood, and Gwendolyn E. Campbell. "Policy Capturing and Fuzzy Logic: A Better Approach to Representing Judgment Data?" Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46, no. 3 (September 2002): 541–45. http://dx.doi.org/10.1177/154193120204600368.

Abstract:
At the 45th annual meeting of HFES, we conducted an alternative format session in which fuzzy logic was introduced as an alternative approach to analyzing judgment data and representing decision-making policies (see Buff et al., 2001). During the alternative format session, usability judgments were collected on-site for Advanced Distance Learning (ADL) applications. These data provided the basis for an empirical assessment of the value added of one modeling technique, fuzzy logic, over the more traditional approach to analyzing policy capturing data, multiple linear regression. This paper describes the results of an empirical assessment of the two modeling techniques. For a discussion of the empirical results of the impact of the different usability dimensions on the learning effectiveness of ADL applications, see Holness, Pharmer, and Buff (2002).
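As a rough illustration of the two modeling approaches compared in this abstract, the sketch below fits a multiple linear regression to simulated judgment ratings and contrasts it with a single hand-written fuzzy rule; the cue variables, the simulated judge, and the rule are invented for the example, not taken from the study.

```python
# Hypothetical sketch: contrasting multiple linear regression with a tiny
# fuzzy-rule model on simulated policy-capturing (judgment) data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
cues = rng.uniform(0, 1, size=(n, 2))          # two judgment cues in [0, 1]
# Simulated judge: a nonlinear policy that linear regression may capture poorly.
ratings = np.minimum(cues[:, 0], cues[:, 1]) + rng.normal(0, 0.05, n)

# Traditional approach: multiple linear regression
X = np.column_stack([np.ones(n), cues])
beta, *_ = np.linalg.lstsq(X, ratings, rcond=None)
mlr_pred = X @ beta

# Fuzzy-logic approach: one conjunctive rule, min() as the AND operator:
# "IF cue1 is high AND cue2 is high THEN rating is high"
def high(x):                                   # linear membership in 'high'
    return np.clip(x, 0.0, 1.0)

fuzzy_pred = np.minimum(high(cues[:, 0]), high(cues[:, 1]))

for name, pred in [("linear regression", mlr_pred), ("fuzzy rule", fuzzy_pred)]:
    rmse = np.sqrt(np.mean((ratings - pred) ** 2))
    print(f"{name}: RMSE = {rmse:.3f}")
```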
2

Kelber, C., S. Marke, U. Trommler, C. Rupprecht, and S. Weis. "Development of process data capturing, analysis and controlling for thermal spray techniques - SprayTracker." IOP Conference Series: Materials Science and Engineering 181 (March 2017): 012010. http://dx.doi.org/10.1088/1757-899x/181/1/012010.

3

SARFRAZ, MUHAMMAD. "SOME ALGORITHMS FOR CURVE DESIGN AND AUTOMATIC OUTLINE CAPTURING OF IMAGES." International Journal of Image and Graphics 04, no. 02 (April 2004): 301–24. http://dx.doi.org/10.1142/s0219467804001427.

Abstract:
A new multipurpose curve technique is introduced that automatically provides a fit to any ordered data in a plane. The technique is particularly economical for design purposes as well as for the visualization of large data sets. A more flexible class of cubic functions forms the basis of this technique. This class of functions involves two control parameters in each segment, producing more flexible shapes than ordinary Bézier or Hermite cubics. These functions, together with the control parameters, are utilized to fit a design curve interactively. They are also utilized in an optimal way to fit a design curve to data arising from any image or scientific phenomenon. The design curve method is highly useful for capturing the outlines of images, and differs in its methodology from the existing techniques in the literature that use Bézier cubics. The technique draws on several ideas in its construction, including end-point interpolation, detection of characteristic points, and least squares approximation. The final shape is achieved by stitching the generalized Bézier cubic pieces together with GC1 smoothness. Finally, three algorithms are proposed for various applications.
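A minimal sketch of one ingredient mentioned in this abstract, least-squares cubic fitting with end-point interpolation, is given below for a single Bézier segment with chord-length parameterization. It is an illustration of the general idea, not the paper's generalized cubic with control parameters.

```python
# Illustrative sketch: least-squares fit of one cubic Bezier segment to
# ordered planar data, interpolating the end points and solving only for
# the two inner control points.
import numpy as np

def fit_cubic_bezier(pts):
    pts = np.asarray(pts, dtype=float)
    # Chord-length parameter values in [0, 1]
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]
    b0 = (1 - t) ** 3
    b1 = 3 * (1 - t) ** 2 * t
    b2 = 3 * (1 - t) * t ** 2
    b3 = t ** 3
    p0, p3 = pts[0], pts[-1]          # end-point interpolation
    # Solve [b1 b2] [P1; P2] = pts - b0*P0 - b3*P3 in the least-squares sense
    A = np.column_stack([b1, b2])
    rhs = pts - np.outer(b0, p0) - np.outer(b3, p3)
    (p1, p2), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.array([p0, p1, p2, p3])

# Example: samples of a quarter circle
theta = np.linspace(0, np.pi / 2, 25)
data = np.column_stack([np.cos(theta), np.sin(theta)])
print("control points:\n", fit_cubic_bezier(data).round(3))
```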
4

De Ruiter, Naomi M. P., Steffie Van Der Steen, Ruud J. R. Den Hartigh, and Paul L. C. Van Geert. "Capturing moment-to-moment changes in multivariate human experience." International Journal of Behavioral Development 41, no. 5 (June 6, 2016): 611–20. http://dx.doi.org/10.1177/0165025416651736.

Abstract:
In this article, we aim to shed light on a technique to study intra-individual variability that spans the time frame of seconds and minutes, i.e., micro-level development. This form of variability is omnipresent in behavioural development and processes of human experience, yet is often ignored in empirical studies, given a lack of proper analysis tools. The current article illustrates that a clustering technique called Kohonen’s Self-Organizing Maps (SOM), which is widely used in fields outside of psychology, is an accessible technique that can be used to capture intra-individual variability of multivariate data. We illustrate this technique with a case study involving self-experience in the context of a parent–adolescent interaction. We show that, with techniques such as SOM, it is possible to reveal how multiple components of an intra-individual process (the adolescent’s self-affect and autonomy) are non-linearly connected across time, and how these relationships transition in accordance with a changing contextual factor (parental connectedness) during a single interaction. We aim to inspire researchers to adopt this technique and explore the intra-individual variability of more developmental processes, across a variety of domains, as deciphering such micro-level processes is crucial for understanding the nature of psychological and behavioural development.
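For readers unfamiliar with the method, here is a minimal NumPy sketch of a Kohonen SOM trained on a synthetic two-variable series and used to map each observation to its best-matching unit; the grid size, learning schedule, and toy data are illustrative assumptions, not the authors' settings.

```python
# Minimal Kohonen Self-Organizing Map sketch in plain NumPy, assuming
# standardized multivariate observations as rows of `data`.
import numpy as np

def train_som(data, rows=5, cols=5, iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.uniform(data.min(), data.max(), size=(rows, cols, data.shape[1]))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)  # node coordinates
    for i in range(iters):
        frac = i / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        x = data[rng.integers(len(data))]
        # Best-matching unit (BMU): node whose weights are closest to x
        dist = np.linalg.norm(w - x, axis=-1)
        bmu = np.unravel_index(dist.argmin(), dist.shape)
        # Gaussian neighborhood pulls nodes near the BMU toward x
        g = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
        w += lr * g[..., None] * (x - w)
    return w

def map_sequence(w, data):
    """Assign each observation, in time order, to its BMU (a state trajectory)."""
    d = np.linalg.norm(w[None] - data[:, None, None, :], axis=-1)
    return [np.unravel_index(d[t].argmin(), d[t].shape) for t in range(len(data))]

# Example: two interleaved regimes of two experience-like variables
rng = np.random.default_rng(1)
data = np.vstack([rng.normal([0, 0], 0.1, (50, 2)),
                  rng.normal([1, 1], 0.1, (50, 2))])
w = train_som(data)
print(map_sequence(w, data)[:5])
```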
5

Suzuki, K., U. Rin, Y. Maeda, and H. Takeda. "FOREST COVER CLASSIFICATION USING GEOSPATIAL MULTIMODAL DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 1091–96. http://dx.doi.org/10.5194/isprs-archives-xlii-2-1091-2018.

Abstract:
To address climate change, accurate and automated forest cover monitoring is crucial. In this study, we propose a Convolutional Neural Network (CNN) that mimics professional interpreters' manual techniques. Using simultaneously acquired airborne images and LiDAR data, we attempt to reproduce the 3D knowledge of tree shape that interpreters potentially make use of. Geospatial features that support interpretation are also used as inputs to the CNN. Inspired by the interpreters' techniques, we propose a unified approach that integrates these datasets in a shallow layer of the CNN network. We show that the proposed multi-modal CNN works robustly, achieving more than 80% user's accuracy. We also show that the 3D multi-modal approach is especially suited to deciduous trees, thanks to its ability to capture 3D shapes.
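The sketch below (PyTorch) illustrates the general idea of fusing two modalities in a shallow layer of a CNN: an RGB branch and a LiDAR-derived height-channel branch are each passed through one convolution and then concatenated. Layer sizes, the height-channel input, and the number of classes are assumptions for illustration, not the authors' architecture.

```python
# Hedged sketch of shallow multi-modal fusion in a CNN.
import torch
import torch.nn as nn

class ShallowFusionCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.rgb_stem = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.lidar_stem = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.trunk = nn.Sequential(                 # shared layers after fusion
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, rgb, lidar_height):
        fused = torch.cat([self.rgb_stem(rgb),      # fuse in a shallow layer
                           self.lidar_stem(lidar_height)], dim=1)
        return self.trunk(fused)

model = ShallowFusionCNN()
rgb = torch.randn(2, 3, 64, 64)          # airborne image patches
lidar = torch.randn(2, 1, 64, 64)        # e.g., canopy-height patches (assumed)
print(model(rgb, lidar).shape)           # -> torch.Size([2, 4])
```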
6

Boonchoo, Thapana. "Capturing Spatial Relationship Mapping Patterns between GPS Coordinates and Road Network Using Machine Learning and Partitioning Techniques." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 16, no. 3 (June 18, 2022): 277–88. http://dx.doi.org/10.37936/ecti-cit.2022163.247801.

Abstract:
Map matching is a technique used to identify which path a vehicle is travelling on in a road map. Since it is a crucial fundamental step for a wide range of transportation applications, many map-matching algorithms have been devised, ranging from simple geometric calculations to more sophisticated methods. However, the study of spatial relationship patterns between GPS coordinates and the road segments mapped to them has not received enough attention from researchers. This paper presents a framework, called Proxy Map Matching (PMM), to learn such patterns using machine learning techniques. We find that applying machine learning techniques directly to such data is not sufficient to capture the patterns and results in an inaccurate proxy model. In PMM, we therefore construct several proxy map matchers and assign them to groups of data based on their spatial proximity, thereby improving accuracy. An experiment on real-world data shows that the framework achieves above 85% accuracy and significantly outperforms methods that employ machine learning techniques alone. Moreover, the proposed proxy model performs very fast matching, processing 14,177 GPS coordinate pairs per second at 88.4% accuracy.
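A conceptual sketch of the partition-then-learn idea is shown below: GPS points are clustered by spatial proximity, and one proxy matcher is trained per cluster to predict a road-segment id. The synthetic data, the use of k-means, and the k-NN matcher are illustrative choices and may differ from the actual PMM components.

```python
# Conceptual sketch: spatial partitioning plus one proxy matcher per group.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(3000, 2))            # lon/lat-like features
# Synthetic "matched segment" labels on a 1x1 grid of road cells
segment_id = (coords[:, 0] // 1).astype(int) * 10 + (coords[:, 1] // 1).astype(int)

# 1) Partition the space so each proxy model sees spatially coherent data.
parts = KMeans(n_clusters=8, n_init=10, random_state=0).fit(coords)

# 2) Train one proxy matcher per partition.
matchers = {}
for k in range(8):
    mask = parts.labels_ == k
    matchers[k] = KNeighborsClassifier(n_neighbors=3).fit(
        coords[mask], segment_id[mask])

# 3) Matching a new point: route it to its partition's matcher.
def match(points):
    labels = parts.predict(points)
    return np.array([matchers[l].predict(p[None])[0]
                     for l, p in zip(labels, points)])

test = rng.uniform(0, 10, size=(5, 2))
print(match(test))
```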
7

Aghili, Maryamossadat, and Ruogu Fang. "Mining Big Neuron Morphological Data." Computational Intelligence and Neuroscience 2018 (June 24, 2018): 1–13. http://dx.doi.org/10.1155/2018/8234734.

Abstract:
The advent of automatic tracing and reconstruction technology has led to a surge in neuron 3D reconstruction data and, consequently, in neuromorphology research. However, the lack of a machine-driven annotation schema to automatically detect neuron types based on their morphology still hinders the development of this branch of science. Neuromorphology is important because of the interplay between the shape and functionality of neurons and its far-reaching impact on diagnostics and therapeutics in neurological disorders. This survey provides a comprehensive review of the field of automatic neuron classification and presents the existing challenges, methods, tools, and future directions for automatic neuromorphology analytics. We summarize the major automatic techniques applicable in the field and propose a systematic data processing pipeline for automatic neuron classification, covering data capturing, preprocessing, analysis, classification, and retrieval. Various machine learning techniques and algorithms are illustrated and compared on the same dataset to facilitate ongoing research in the field.
8

Bordagaray, Maria, Luigi dell’Olio, Achille Fonzone, and Ángel Ibeas. "Capturing the conditions that introduce systematic variation in bike-sharing travel behavior using data mining techniques." Transportation Research Part C: Emerging Technologies 71 (October 2016): 231–48. http://dx.doi.org/10.1016/j.trc.2016.07.009.

9

Adamopoulos, Efstathios, Fulvio Rinaudo, and Liliana Ardissono. "A Critical Comparison of 3D Digitization Techniques for Heritage Objects." ISPRS International Journal of Geo-Information 10, no. 1 (December 30, 2020): 10. http://dx.doi.org/10.3390/ijgi10010010.

Abstract:
Techniques for the three-dimensional digitization of tangible heritage are continuously updated, as regards active and passive sensors, data acquisition approaches, implemented algorithms, and employed computational systems. These developments enable higher automation and processing velocities, and increased accuracy and precision, for digitizing heritage assets. For large-scale applications, as for investigations on ancient remains, heritage objects, or architectural details, scanning and image-based modeling approaches have prevailed, due to reduced costs and processing durations, fast acquisition, and the reproducibility of workflows. This paper presents an updated metric comparison of common heritage digitization approaches, providing a thorough examination of the sensors, capturing workflows, and processing parameters involved, and the metric and radiometric results produced. A variety of photogrammetric software packages were evaluated (both commercial and open source), as well as photo-capturing equipment of various characteristics and prices, and scanners employing different technologies. The experiments were performed on case studies of different geometrical and surface characteristics to thoroughly assess the implemented three-dimensional modeling pipelines.
10

Utomo, Hanggara Budi, Dewi Retno Suminar, and Hamidah Hamidah. "CAPTURING TEACHING MOTIVATION OF TEACHER IN THE DISADVANTAGED AREAS." Jurnal Cakrawala Pendidikan 38, no. 3 (October 24, 2019): 398–410. http://dx.doi.org/10.21831/cp.v38i3.26411.

Abstract:
The teaching motivation of teachers is very important for students' development. The purpose of this study is to test how the teaching motivation of teachers in disadvantaged areas is affected by school climate and self-concept through the satisfaction of basic psychological needs. The subjects of this research were 241 teachers. Data collection used instruments in the form of a school climate scale, a self-concept scale, a basic psychological need satisfaction scale, and a teaching motivation scale. The data were analyzed using structural equation modeling. The results showed that, for teachers who teach in disadvantaged areas, school climate and teacher self-concept each play a distinct and important role in teaching motivation, mediated by basic psychological need satisfaction. This means that teaching motivation results from the role of school climate as an external factor and of basic psychological need satisfaction and self-concept as internal factors. The implication of this research is the need for programs to develop teacher motivation in disadvantaged areas by optimizing and considering school climate, self-concept, and satisfaction of basic psychological needs as influential factors.
11

Bertoni, A. "DATA-DRIVEN DESIGN IN CONCEPT DEVELOPMENT: SYSTEMATIC REVIEW AND MISSED OPPORTUNITIES." Proceedings of the Design Society: DESIGN Conference 1 (May 2020): 101–10. http://dx.doi.org/10.1017/dsd.2020.4.

Abstract:
The paper presents a systematic literature review investigating definitions, uses, and applications of data-driven design in the concept development process. The analysis shows a predominance of the use of text mining techniques on social media and online reviews to identify customers' needs, not exploiting the opportunity granted by the increased accessibility of IoT in cyber-physical systems. The paper argues that such a gap limits the potential for capturing customers' tacit needs and highlights the need to proactively plan and design for a transition toward data-driven design.
12

Lee, Jihong, and Insoo Ha. "Real-Time Motion Capture for a Human Body using Accelerometers." Robotica 19, no. 6 (September 2001): 601–10. http://dx.doi.org/10.1017/s0263574701003319.

Abstract:
In this paper we propose a set of techniques for real-time motion capture of a human body. The proposed motion capture system is based on low-cost accelerometers and is capable of identifying the body configuration by extracting gravity-related terms from the sensor data. One sensor unit is composed of 3 accelerometers arranged orthogonally to each other, and is capable of identifying the two rotation angles of joints with 2 degrees of freedom. A geometric fusion technique is applied to cope with the uncertainty of sensor data. A practical calibration technique is also proposed to handle errors in aligning the sensing axes to the coordinate axes. For cases where motion acceleration is not negligible compared with gravitational acceleration, a compensation technique to extract the gravity component from the sensor data is proposed. Experimental results are included, not only for the individual techniques but also for human motion capture rendered with graphics.
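The gravity-extraction step can be illustrated with the standard tilt-from-gravity formulas: when motion acceleration is negligible, a 3-axis accelerometer measures only gravity, and two angles follow from the component ratios. This is the textbook computation, not necessarily the authors' exact parameterization.

```python
# Minimal sketch: recover two tilt angles from static accelerometer readings.
import math

def tilt_from_gravity(ax, ay, az):
    """Return (roll, pitch) in degrees from accelerometer readings in g."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return math.degrees(roll), math.degrees(pitch)

# Example: sensor tilted so gravity projects partly onto the y axis
print(tilt_from_gravity(0.0, 0.5, math.sqrt(1 - 0.25)))  # ~ (30.0, 0.0) degrees
```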
13

Kasapović, Suad, Emir Skejić, and Tarik Huremović. "Testing the Data Protection Through Implementation of Secure Communication." B&H Electrical Engineering 15, no. 2 (December 1, 2021): 21–30. http://dx.doi.org/10.2478/bhee-2021-0014.

Abstract:
The security of applications in cloud services and on the Internet is an important topic in the field of engineering. In this paper, two laboratory tests for data transmission protection, specifically designed for different security analysis techniques, are presented and explained. During lab tests simulating MITM ("man in the middle") attacks on public Wi-Fi networks, various monitoring techniques were applied, using a special lab test scenario with Kali Linux penetration tools and an SSH tunnel created on an Android mobile device. These test benches allow easy data capturing, and the captured data is processed using available software programs. Expected outcomes, practical improvements, and security performance assessment are presented in detail, and considered in terms of their value in security engineering. The aim of this paper is to detect and overcome some of the weaknesses in the application of security protocols in a Wi-Fi network environment.
14

Bandeen-Roche, Karen. "SIGNAL DETECTION AND VALIDATION IN AN ERA OF BIG GERONTOLOGICAL DATA." Innovation in Aging 6, Supplement_1 (November 1, 2022): 178. http://dx.doi.org/10.1093/geroni/igac059.712.

Abstract:
Older adult health assessment has long posed measurement challenges—the multidimensionality of sentinel outcomes like functioning and frailty, for example. This presentation discusses three developments creating opportunities for gerontologic biostatistics (GBS) over the past 10 years. Firstly, modeling to internally validate measurements or to quantify systematic heterogeneity in assessing older adult health has become considerably more widespread; confirmatory latent variable modeling, harmonization, and mixture models are addressed. Secondly, signal-intensive behavioral phenotypes are proliferating, e.g., accelerometry, sleep actigraphy, and ecological momentary assessment; functional data analysis is described as a technique for extracting signal that captures main behavioral features or is most relevant to health outcomes. Thirdly, "deep" characterization is under hot pursuit—whether by single- or multi-'omics studies, or by multi-modal phenotyping as is increasingly common in the study of cognition; techniques to accomplish this replicably are discussed. Throughout, potential pitfalls and implications for gerontologic data science development are identified.
15

Chen, Shu-Ching. "Multimedia Databases and Data Management." International Journal of Multimedia Data Engineering and Management 1, no. 1 (January 2010): 1–11. http://dx.doi.org/10.4018/jmdem.2010111201.

Abstract:
Exponential growth in technological advancement has resulted in high-resolution devices, such as digital cameras, scanners, monitors, and printers, which enable multimedia data to be captured, displayed, and stored on high-density storage devices. Furthermore, more and more applications need to live with multimedia data. However, the gap between the characteristics of various media types and the application requirements has created the need to develop advanced techniques for multimedia data management and the extraction of relevant information from multimedia databases. Though many research efforts have been devoted to multimedia databases and data management, the field is still far from maturity. The purpose of this article is to discuss how existing techniques, methodologies, and tools address the relevant issues and challenges, to enable a better understanding of multimedia databases and data management. The focus includes: (1) how to develop a formal structure that can be used to capture the distinguishing content of the media data in a multimedia database (MMDB) and to form an abstract space for the data to be queried; (2) how to develop advanced content analysis and retrieval techniques that can bridge the gap between semantic meaning and low-level media characteristics to improve multimedia information retrieval; and (3) how to develop query mechanisms that can handle complex spatial, temporal, and/or spatio-temporal relationships of multimedia data to answer imprecise and incomplete queries issued to an MMDB.
16

Cheliotis, Kostas. "Capturing Real-Time Public Space Activity Using Publicly Available Digital Traces." Proceedings of the International AAAI Conference on Web and Social Media 10, no. 2 (August 4, 2021): 8–13. http://dx.doi.org/10.1609/icwsm.v10i2.14823.

Abstract:
The study of public space activity has been one of the main foci in the debate on urban space transformations for the past decades, with many researchers adding to the debate through theoretical work as well as empirical/quantitative evidence drawn from their direct observations of public space activity. This paper attempts to enhance this approach to urban research and public space observation by investigating the application of remote sensing techniques in public space analysis. More specifically, it attempts to capture public space activity using publicly available digital traces, such as environmental and temporal data, as well as social media data streams. By applying bivariate and multivariate analysis techniques to these datasets, it illustrates the possibility of capturing current activity in public spaces with some degree of confidence. Furthermore, given the ubiquitous and real-time nature of these datasets, it also becomes possible to provide continuous estimates as well as short-term predictions of current and near-future public space use. Finally, it outlines the capabilities of this approach when used in a complementary fashion with the direct observation methods mentioned above, in building high-resolution models and simulations of public space activity.
17

Li, Huacheng, Chunhe Xia, Tianbo Wang, Sheng Wen, Chao Chen, and Yang Xiang. "Capturing Dynamics of Information Diffusion in SNS: A Survey of Methodology and Techniques." ACM Computing Surveys 55, no. 1 (January 31, 2023): 1–51. http://dx.doi.org/10.1145/3485273.

Abstract:
Studying information diffusion in SNS (Social Networking Services) has remarkable significance in both academia and industry. Theoretically, it boosts the development of other subjects such as statistics, sociology, and data mining. Practically, diffusion modeling provides fundamental support for many downstream applications (e.g., public opinion monitoring, rumor source identification, and viral marketing). Tremendous efforts have been devoted to this area to understand and quantify information diffusion dynamics. This survey investigates and summarizes the emerging distinguished works in diffusion modeling. We first put forward a unified information diffusion concept in terms of three components: information, user decision, and social vectors, followed by a detailed introduction of the methodologies for diffusion modeling. Then a new taxonomy adopting a hybrid philosophy (i.e., granularity and techniques) is proposed, and we make a series of comparative studies of elementary diffusion models under our taxonomy from the aspects of assumptions, methods, and pros and cons. We further summarize representative diffusion modeling in special scenarios and significant downstream tasks based on these elementary models. Finally, open issues in this field following the methodology of diffusion modeling are discussed.
18

M. Monica, K., G. Bindu, and S. Sridevi. "Survey on Image Dimensionality Reduction Using Deep Learning Techniques." International Journal of Engineering & Technology 7, no. 3.27 (August 15, 2018): 179. http://dx.doi.org/10.14419/ijet.v7i3.27.17755.

Abstract:
Images provide rich information, and modern sensing technologies capture image data across a wide range of applications and their attributes. However, this also creates a huge quantity of data that may be relevant, irrelevant, or redundant for a given image-analysis task. It brings a number of problems, such as increased computational time, high image density, a wide range of data mappings, complex data-set semantics, and the need for a huge amount of labeled data to train models for a new environment. It is often difficult and costly for users to obtain sufficient training models in several application modules. This paper addresses these problems by exploring the more classical dimensionality reduction algorithms together with deep learning, in support of the research community.
19

Lawes, R. A., Y. M. Oliver, and M. J. Robertson. "Capturing the in-field spatial - temporal dynamic of yield variation." Crop and Pasture Science 60, no. 9 (2009): 834. http://dx.doi.org/10.1071/cp08346.

Abstract:
Many researchers have predicted that within-field spatial variation in crop yield could be exploited for economic benefit. However, the spatial variation of yield is influenced by season and is often temporally unstable. Parts of a field may yield well relative to the remainder of the field in one season and poorly in another, suggesting that regions in the field vary in their response to season type. We evaluate the capacity of two analytical techniques, regression on the field mean and regression on growing-season rainfall, to capture the variation in yield responsiveness across a field. We applied these indices to a commercial 134-ha field that had been sown to wheat, Triticum aestivum cv. Calingiri, in 1996, 1999, 2001, 2003, and 2005. The slope from the regression on the field mean was variable, with a mean of 1 and a standard deviation of 0.67. The technique successfully identified regions that were responsive and unresponsive to variations in the cropping environment. In contrast, the average slope derived from the regression on growing-season rainfall was just –0.003 ± 0.003 t/ha.mm of growing-season rainfall. This approach failed to capture the spatial–temporal dynamic of yield variation, and implies that the overall cropping environment was poorly characterised by growing-season rainfall. Crop yields, derived from 10 soils in this field, were simulated from 1900 to 2006. The analytical techniques were applied to these simulated yields and revealed that the spatial–temporal dynamic observed in the field is partially explained by the interactions between soil type and climate. In addition, the spatial–temporal dynamic is best captured when mean field yields vary temporally by more than 1.2 t/ha if the assessments are made with 5 years of data. We further discuss the application and interpretation of these indices and the role they play in identifying soils that are responsive to season type.
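The "regression on the field mean" index can be sketched in a few lines: for each location, regress its yield across seasons on the field-mean yield of each season, and read responsiveness off the slope. The yields below are synthetic stand-ins, not the Calingiri data.

```python
# Sketch of the regression-on-field-mean responsiveness index.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_seasons = 100, 5
season_effect = np.array([2.8, 1.2, 2.0, 0.9, 3.1])     # field-scale seasons (t/ha)
responsiveness = rng.uniform(0.3, 1.7, n_sites)          # true site slopes
yields = (responsiveness[:, None] * season_effect[None, :]
          + rng.normal(0, 0.1, (n_sites, n_seasons)))

field_mean = yields.mean(axis=0)                          # one mean per season
# Slope ~1: average responsiveness; >1 responsive; <1 unresponsive
slopes = np.array([np.polyfit(field_mean, yields[i], 1)[0]
                   for i in range(n_sites)])
print(f"mean slope = {slopes.mean():.2f}, sd = {slopes.std():.2f}")
```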
20

Arjunan, Tamilselvan. "Building Business Intelligence Data Extractor using NLP and Python." International Journal for Research in Applied Science and Engineering Technology 10, no. 10 (October 31, 2022): 23–28. http://dx.doi.org/10.22214/ijraset.2022.46945.

Abstract:
The goal of the Business Intelligence Data Extractor (BID-Extractor) tool is to offer high-quality, usable data that is freely available to the public. To assist companies across all industries in achieving their objectives, we prefer to use cutting-edge, business-focused web scraping solutions. The World Wide Web contains all kinds of information of different origins, including social, financial, security, and academic. Most people access information through the Internet for educational purposes. Information on the web is available in different formats and through different access interfaces, so indexing or semantic processing of data from websites can be cumbersome. Web scraping (data extraction) is the technique that aims to address this issue. Web scraping is used to transform unstructured data on the web into structured data that can be stored and analyzed in a central local database or spreadsheet. There are various web scraping techniques, including traditional copy-and-paste, text capturing and regular expression matching, HTTP programming, HTML parsing, DOM parsing, vertical aggregation platforms, semantic annotation recognition, and computer vision webpage analyzers. Traditional copy-and-paste is the most basic and tiresome web scraping technique when large numbers of datasets must be scraped. Web scraping software is the easiest scraping approach, since all the other techniques except traditional copy-and-paste require some form of technical expertise. Even though much web scraping software is available today, most of it is designed to serve one specific purpose, and businesses cannot readily make decisions from the raw output. This research focused on building web scraping software using Python and NLP, converting the unstructured data to structured data with NLP; a custom NLP NER model can also be trained. The study's findings provide a way to effectively gauge business impact.
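A hedged sketch of such a scrape-then-structure pipeline is shown below, using requests and BeautifulSoup for extraction and a spaCy NER model to turn page text into structured (entity, label) records. The URL is a placeholder and the entity labels are illustrative; this is not the BID-Extractor's actual code, and the spaCy model must be installed separately (python -m spacy download en_core_web_sm).

```python
# Sketch: scrape a page, then structure its text with NER.
import requests
from bs4 import BeautifulSoup
import spacy

def scrape_entities(url):
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
    nlp = spacy.load("en_core_web_sm")
    doc = nlp(text[:100_000])                      # keep within model limits
    # Structure the unstructured text as (entity, label) rows
    return [(ent.text, ent.label_) for ent in doc.ents
            if ent.label_ in {"ORG", "MONEY", "DATE", "GPE"}]

if __name__ == "__main__":
    for entity, label in scrape_entities("https://example.com")[:10]:
        print(label, "\t", entity)
```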
21

TSURTSUMIA, Mamuka. "Medieval Georgian Poliorcetica." Historia i Świat 4 (September 16, 2015): 175–204. http://dx.doi.org/10.34739/his.2015.04.10.

Abstract:
In the medieval art of war, the siege constituted one of the principal forms of fighting. Several basic techniques were used in taking a stronghold, such as assaulting the walls of the fortress, breaching the wall, digging a subterranean tunnel under the wall, and enfeebling the garrison by lengthy siege. Bearing in mind various data, in the Middle Ages Georgians used the following technical means to capture fortresses: assault ladders, battering rams and other engines for breaching walls, ballistas, stone-throwing engines, and subterranean tunnels. The article sheds light on the siege capabilities of the Georgian army of the period. Extensively discussed are the Georgian army's stone-throwing artillery, the various types of stone-hurling engines, and the time of their spread in Georgia. Various techniques of capturing fortresses applied by the Georgians are described. These include mounting the walls with ladders or various improvised means. The hazardous technique of directly assaulting the fortress without preliminary preparation or bringing up heavy siege engines is shown. The capturing of fortresses by means of underground tunnels is discussed separately. From the available evidence it is not apparent that Georgians made use of all the siege techniques known in the medieval world; however, it can be said that they were familiar with and successfully used the basic methods of siege warfare.
22

Abdul Rahim, Asiah, Srazali Aripin, Abdul Razak Sapian, and Hazwan Zubir. "Capturing the Heritage of British Colonial School Building Through Measured Drawings in Malaysia." International Journal of Environment, Architecture, and Societies 2, no. 01 (February 28, 2022): 16–26. http://dx.doi.org/10.26418/ijeas.2022.2.01.16-26.

Abstract:
This research conveys the analytical studies of the Pusat Latihan Polis (PULAPOL) Batu Lama School. It aims to present detailed information on the endogenous influences, particularly during the British colonization era. PULAPOL was the first Police Training Centre in Malaysia. It is located at Jalan Sultan Yahya Petra, Kuala Lumpur, and was constructed in 1938. The structural system used for the school's construction was load-bearing brick masonry. The objectives of the studies are to: a) increase the understanding and appreciation of the architectural heritage of the Muslim world; b) learn the techniques of measured drawings of heritage buildings, with emphasis on building construction, detailing, research, and documentation of the historical aspects of the building and its development; c) appreciate the total concept and richness of the architectural heritage, as well as to value the environment and understand the socio-economic-cultural way of life of local inhabitants. In general, the methodology adopted for the study is divided into pre-fieldwork, fieldwork, and post-fieldwork. Data were collected from interviews, case studies, observations, and measurements of the building, and are documented in reports and measured drawings. In conclusion, the research findings provide the chronology and history of the PULAPOL Batu Lama School, which is rich in historical values, materials used, construction techniques, and passive design techniques.
23

Wood, Emma H., and Jonathan Moss. "Capturing emotions: experience sampling at live music events." Arts and the Market 5, no. 1 (May 5, 2015): 45–72. http://dx.doi.org/10.1108/am-02-2013-0002.

Abstract:
Purpose – Using techniques developed mainly in subjective well-being and "happiness" studies, the purpose of this paper is to discuss the applicability of these and related methods for understanding and evaluating the emotional responses experienced within the live music event environment.

Design/methodology/approach – The concept of "experience" is debated and set within the context of music events designed to create a specific type of emotional experience for the attendees. The main tools for researching experiences over a time period are considered, focusing on the "experience sampling method" (ESM) (Csikszentmihalyi, 1997) and the "day reconstruction method" (Kahneman et al., 2004). These methods are critiqued in terms of their usefulness and practicality as research tools in the study of audience emotions.

Findings – A revised method was then developed and a small-scale trial undertaken at a live music event, the results of which are presented and discussed. A conceptual model illustrating the interconnectedness of experience is introduced as an example of the application of the data gathered through this method to theory development. The paper concludes by reflecting on both the methodological appropriateness and practicality of ESMs as a way of gathering valuable data on the emotions engendered by events.

Research limitations/implications – An obstacle yet to be overcome is using this data to predict attitudinal and behavioural change related to arts marketing goals. However, studies in other areas have clearly shown that emotional response is a significant indicator of future behaviour, suggesting that the potential is there.

Practical implications – The trialled method provides a useful starting point for better understanding the complexity of emotional effects triggered at live music events.

Originality/value – The paper concludes that an adaptation of these methods has the potential to provide much-needed rich and credible data on the feelings and emotional reactions triggered by different elements of a live event.
24

Abd Elhamid, Hossam Eldin M., Wael Khalif, Mohamed Roushdy, and Abdel-Badeeh M. Salem. "Machine Learning Techniques for Credit Card Fraud Detection." Future Computing and Informatics Journal 4, no. 2 (September 29, 2020): 98–112. http://dx.doi.org/10.54623/fue.fcij.4.2.5.

Abstract:
When we hear the term "fraud", we usually think of credit card fraud. With the significant increase in credit card transactions, credit card fraud has risen sharply in recent years. Fraud detection should therefore include surveillance of a customer's spending behaviour in order to determine, avoid, and detect unwanted behaviour. Because the credit card is the predominant payment method for online and regular purchases, credit card fraud is rising rapidly. Fraud detection is concerned not only with capturing fraudulent practices, but also with discovering them as fast as possible, because fraud costs businesses millions of dollars in losses, is rising over time, and greatly affects the worldwide economy. In this paper we introduce 14 different techniques showing how data mining methods can be successfully combined to obtain high fraud coverage with a low false rate, the advantages and disadvantages of every technique, and the data sets used in the research.
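As a toy illustration of one such data mining setup, the sketch below trains a random forest on a synthetic, heavily imbalanced transaction set and uses class weighting to trade fraud coverage against the false rate; the features and class ratio are invented for the example.

```python
# Illustrative sketch: imbalanced fraud classification with class weighting.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20_000, n_features=10, weights=[0.99],
                           random_state=0)        # ~1% "fraud" class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```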
25

Rachuri, Sudarsan, Young-Hyun Han, Sebti Foufou, Shaw C. Feng, Utpal Roy, Fujun Wang, Ram D. Sriram, and Kevin W. Lyons. "A Model for Capturing Product Assembly Information." Journal of Computing and Information Science in Engineering 6, no. 1 (June 2, 2005): 11–21. http://dx.doi.org/10.1115/1.2164451.

Abstract:
The important issue of mechanical assemblies has been a subject of intense research over the past several years. Most electromechanical products are assemblies of several components, for various technical as well as economic reasons. This paper provides an object-oriented definition of an assembly model called the Open Assembly Model (OAM) and defines an extension to the NIST Core Product Model (NIST-CPM). The assembly model represents the function, form, and behavior of the assembly and defines both a system-level conceptual model and the associated hierarchical relationships. The model provides a way for tolerance representation and propagation, kinematics representation, and engineering analysis at the system level. The assembly model is open so as to enable plug-and-play with various applications, such as analysis (FEM, tolerance, assembly), process planning, and virtual assembly (using VR techniques). With the advent of the Internet, more and more products are designed and manufactured globally in a distributed and collaborative environment. The class structure defined in OAM can be used by designers to collaborate in such an environment. The proposed model includes both assembly as a concept and assembly as a data structure; for the latter it uses STEP. The OAM together with the CPM can be used to capture the assembly evolution from the conceptual to the detailed design stages. It is expected that the proposed OAM will enhance the assembly information content in the STEP standard. A case study example is discussed to explain the use-case analysis of the assembly model.
26

Azzam, Baher, Freia Harzendorf, Ralf Schelenz, Walter Holweger, and Georg Jacobs. "Pattern Discovery in White Etching Crack Experimental Data Using Machine Learning Techniques." Applied Sciences 9, no. 24 (December 14, 2019): 5502. http://dx.doi.org/10.3390/app9245502.

Abstract:
White etching crack (WEC) failure is a failure mode that affects bearings in many applications, including wind turbine gearboxes, where it results in high, unplanned maintenance costs. WEC failure is unpredictable as of now, and its root causes are not yet fully understood. While WECs were produced under controlled conditions in several investigations in the past, converging the findings from the different combinations of factors that led to WECs in different experiments remains a challenge. This challenge is tackled in this paper using machine learning (ML) models that are capable of capturing patterns in high-dimensional data belonging to several experiments in order to identify influential variables to the risk of WECs. Three different ML models were designed and applied to a dataset containing roughly 700 high- and low-risk oil compositions to identify the constituting chemical compounds that make a given oil composition high-risk with respect to WECs. This includes the first application of a purpose-built neural network-based feature selection method. Out of 21 compounds, eight were identified as influential by models based on random forest and artificial neural networks. Association rules were also mined from the data to investigate the relationship between compound combinations and WEC risk, leading to results supporting those of previous analyses. In addition, the identified compound with the highest influence was proved in a separate investigation involving physical tests to be of high WEC risk. The presented methods can be applied to other experimental data where a high number of measured variables potentially influence a certain outcome and where there is a need to identify variables with the highest influence.
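The random-forest side of such an analysis can be sketched as follows: rank candidate compounds by their importance for predicting high/low risk. The 21 synthetic compound columns and the label rule below are stand-ins for the experimental dataset, not the paper's data.

```python
# Sketch: rank compounds by random-forest feature importance for WEC-style risk.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_compounds = 700, 21
X = rng.uniform(0, 1, (n_samples, n_compounds))   # compound concentrations
# Assume (for illustration) that compounds 3 and 7 drive the risk label.
y = ((0.8 * X[:, 3] + 0.6 * X[:, 7] + rng.normal(0, 0.1, n_samples)) > 0.7).astype(int)

forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]
for idx in ranking[:8]:                           # eight most influential
    print(f"compound_{idx:02d}: importance = {forest.feature_importances_[idx]:.3f}")
```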
27

Geetha Bhargava, Mandava, P. Vidyullatha, P. Venkateswara Rao, and V. Sucharita. "A Study on Potential of Big Visual Data Analytics in Construction Arena." International Journal of Engineering & Technology 7, no. 2.7 (March 18, 2018): 652. http://dx.doi.org/10.14419/ijet.v7i2.7.10916.

Abstract:
In most construction and infrastructure management projects, it is important to ensure and maintain performance, safety, and quality so that the work is executed in the expected period. Monitoring these parameters (performance, safety, quality, and also security) requires data with which to analyze, determine, and test algorithms. The continual increase in the amount of captured data through modern improvements in technology (devices, camera-equipped vehicles, sensors, etc.) offers an innovative way to capture the present status of construction sites at lower cost compared with alternative techniques such as laser scanning. Vast endeavours at documenting as-built status, nevertheless, stop at retrieving the visual data and updating the Building Information Model (BIM). Hundreds of images and videos are captured, but most of the data becomes scrap unless it is properly localized against plan documents and time. To take full benefit of visual data for construction status analytics, which includes performance analytics, three aspects of capturing, analysing, and reporting visual data (reliability, relevance, and speed) are critical, and tracking progress on construction sites needs two-way communication between the field crew and management so that performance and change issues related to task management, completion, and outlook can be conveyed effectively. This paper investigates current techniques that leverage emerging BIM and big data for performance monitoring in construction, from the standpoints of reliability, relevance, and speed.
28

Morrison, Annie O., and Jerad M. Gardner. "Microscopic Image Photography Techniques of the Past, Present, and Future." Archives of Pathology & Laboratory Medicine 139, no. 12 (May 19, 2015): 1558–64. http://dx.doi.org/10.5858/arpa.2014-0315-ra.

Abstract:
Context The field of pathology is driven by microscopic images. Educational activities for trainees and practicing pathologists alike are conducted through exposure to images of a variety of pathologic entities in textbooks, publications, online tutorials, national and international conferences, and interdepartmental conferences. During the past century and a half, photographic technology has progressed from primitive and bulky, glass-lantern projector slides to static and/or whole slide digital-image formats that can now be transferred around the world in a matter of moments via the Internet. Objective To provide a historic and technologic overview of the evolution of microscopic-image photographic tools and techniques. Data Sources Primary historic methods of microscopic image capture were delineated through interviews conducted with senior staff members in the Emory University Department of Pathology. Searches for the historic image-capturing methods were conducted using the Google search engine. Google Scholar and PubMed databases were used to research methods of digital photography, whole slide scanning, and smart phone cameras for microscopic image capture in a pathology practice setting. Conclusions Although film-based cameras dominated for much of the time, the rise of digital cameras outside of pathology generated a shift toward digital-image capturing methods, including mounted digital cameras and whole slide digital-slide scanning. Digital image capture techniques have ushered in new applications for slide sharing and second-opinion consultations of unusual or difficult cases in pathology. With their recent surge in popularity, we suspect that smart phone cameras are poised to become a widespread, cost-effective method for pathology image acquisition.
29

Morris, J. F., and D. V. Pow. "Capturing and quantifying the exocytotic event." Journal of Experimental Biology 139, no. 1 (September 1, 1988): 81–103. http://dx.doi.org/10.1242/jeb.139.1.81.

Abstract:
Although exocytosis is now known to be the universal method by which proteins are released from eukaryotic cells, we know surprisingly little of the mechanism by which exocytosis occurs. One reason for this is that it has proved difficult to capture enough of these evanescent events to permit their study. The difficulty with which exocytoses can be visualized with standard preparative techniques varies among tissues, but the problem is particularly apparent in the mammalian nervous system. Tannic acid has recently been introduced as an agent by which exocytosed granule cores can be captured and visualized electron-microscopically. Application of tannic acid to the magnocellular neurosecretory system reveals exocytoses from all parts of the terminal arborization within the neural lobe, and also from the dendrites within the hypothalamus. Quantification of the exocytoses in unstimulated tissue and in tissue stimulated by a variety of exogenous and endogenous mechanisms indicates: (a) that exocytosis occurs equally from each unit of membrane of the perivascular nerve endings, and of the axonal swellings that were previously thought to be sites of granule storage rather than release; (b) that, in the nerve endings, a greater proportion of the stored granules are exocytosed, and thus the endings are specialized for release not by any particular property of their membrane, but by a high surface membrane:volume ratio. Together, the data cast doubt on the hypothesis that exocytosis occurs only at functionally specialized sites at certain loci in the membrane. Rather, the data favour the hypothesis that magnocellular granules can fuse with any part of the membrane, depending on constraints imposed by the cytoskeleton and a local increase in cytosolic free calcium level. When applied to hypothalamic central nervous tissue, tannic acid reveals that exocytosis of dense-cored synaptic vesicles occurs preferentially, but not exclusively, at the membrane apposed to the postsynaptic element. However, about half of all exocytoses from synaptic boutons occur at bouton membrane unrelated to the synaptic cleft. In all tissues studied, tannic acid reveals a heterogeneity among secretory cells in the extent of exocytosis that occurs in response to stimulation, and permits an analysis of the degree to which secretion is polarized in any one direction. These results question long-held assumptions concerning the sites at which neurones release transmitters and modulators. Tannic acid seems likely to prove a potent tool in the investigation of both the mechanism of exocytosis and the ways in which different types of cells adapt the process to perform their physiological roles.
30

Grifoni, Emanuela, Letizia Bonizzoni, Marco Gargano, Jacopo Melada, Nicola Ludwig, Silvia Bruni, and Ilaria Mignani. "Hyper-dimensional Visualization of Cultural Heritage: A Novel Multi-analytical Approach on 3D Pomological Models in the Collection of the University of Milan." Journal on Computing and Cultural Heritage 15, no. 2 (June 30, 2022): 1–15. http://dx.doi.org/10.1145/3477398.

Abstract:
Digital close-range photogrammetry allows us to acquire high-fidelity three-dimensional models useful for documenting cultural heritage objects with an impressive level of detail. In addition, this technique carries a strong analytical potential, able to yield improved knowledge of cultural objects and their preservation conditions. This project is focused on a comprehensive diagnostic survey using 3D multispectral modeling, high-resolution digital radiography, pulsed thermography, XRF, FT-IR, and FORS spectroscopies to document and characterize, from a conservation point of view, the poly-material objects that belong to the "Garnier Valletti" pomological collection—a unique collection from both scientific and artistic points of view. The analytical integration of imaging techniques, 3D modeling, and spectroscopic techniques provides information from the surface, sub-surface, and innermost layers of the object, respectively, capturing accurate morphometric, spectral, and compositional data. The article presents the results obtained on typical poly-material and multilayered objects of this collection, for which the combination of the considered techniques provided important data on the techniques used in their realization, highlighting a particular predictive ability from a conservation point of view.
31

Das, Rik, Sudeep Thepade, Subhajit Bhattacharya, and Saurav Ghosh. "Retrieval Architecture with Classified Query for Content Based Image Recognition." Applied Computational Intelligence and Soft Computing 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/1861247.

Abstract:
Consumer behavior has been observed to be largely influenced by image data, with the increasing familiarity of smart phones and the World Wide Web. The traditional technique of browsing through product varieties on the Internet with text keywords has gradually been replaced by easily accessible image data. The importance of image data for business applications has grown steadily with the advent of different image-capturing devices and social media. The paper describes a methodology of feature extraction by image binarization for enhancing the identification and retrieval of information using content-based image recognition. The proposed algorithm was tested on two public datasets, namely the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images in total. It outperformed state-of-the-art techniques on the performance measures, and the improvement was shown to be statistically significant.
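A simplified sketch of binarization-based feature extraction for retrieval appears below: each image is thresholded at its mean intensity, described by per-block densities of foreground pixels, and matched by nearest feature vector. The block grid and distance are illustrative choices, not the paper's exact descriptor.

```python
# Sketch: binarization features for content-based image retrieval.
import numpy as np

def binarized_features(img, grid=4):
    """img: 2-D grayscale array -> flat vector of per-block foreground densities."""
    binary = (img > img.mean()).astype(float)     # mean-threshold binarization
    h, w = binary.shape
    bh, bw = h // grid, w // grid
    blocks = binary[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw)
    return blocks.mean(axis=(1, 3)).ravel()       # density per block

def retrieve(query, database, k=3):
    qf = binarized_features(query)
    dists = [np.linalg.norm(qf - binarized_features(im)) for im in database]
    return np.argsort(dists)[:k]                  # indices of best matches

rng = np.random.default_rng(0)
db = [rng.uniform(0, 255, (64, 64)) for _ in range(20)]
print(retrieve(db[5], db))                        # db[5] should rank first
```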
32

Mohajan, Devajit, and Haradhan Kumar Mohajan. "Exploration of Coding in Qualitative Data Analysis: Grounded Theory Perspective." Research and Advances in Education 1, no. 6 (December 2022): 50–60. http://dx.doi.org/10.56397/rae.2022.12.07.

Abstract:
This study tries to identify, define, and analyze the coding techniques that grounded theory researchers follow when they conduct qualitative research. Grounded theory is a qualitative research approach that systematically collects and analyzes data to develop a new theory of human behavior in social welfare perceptions. The study also discusses the coding method, coding cycles, theoretical sensitivity, theoretical sampling, theoretical saturation, etc. Usually, a code is a word or short phrase that symbolically assigns a summative, salient, essence-capturing attribute to a portion of visual data. In qualitative grounded theory research, coding plays an important role, enabling researchers to identify, organize, and build new theory that is grounded in data. The purpose of this study is to provide an overview of codes, coding, and the coding methods that form a qualitative grounded theory.
33

Vu, Huy Quan, Gang Li, Rob Law, and Yanchun Zhang. "Travel Diaries Analysis by Sequential Rule Mining." Journal of Travel Research 57, no. 3 (February 1, 2017): 399–413. http://dx.doi.org/10.1177/0047287517692446.

Abstract:
Because of the inefficiency in analyzing the comprehensive travel data, tourism managers are facing the challenge of gaining insights into travelers’ behavior and preferences. In most cases, existing techniques are incapable of capturing the sequential patterns hidden in travel data. To address these issues, this article proposes to analyze the travelers’ behavior through geotagged photos and sequential rule mining. Travel diaries, constructed from the photo sequences, can capture comprehensive travel information, and then sequential patterns can be discovered to infer the potential destinations. The effectiveness of the proposed framework is demonstrated in a case study of Australian outbound tourism, using a data set of more than 890,000 photos from 3,623 travelers. The introduced framework has the potential to benefit tourism researchers and practitioners from capturing and understanding the behaviors and preferences of travelers. The findings can support destination-marketing organizations (DMOs) in promoting appropriate destinations to prospective travelers.
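A toy version of sequential rule mining over travel diaries is sketched below: count how often destination B occurs after destination A in each diary and keep rules with sufficient support and confidence. The diaries and thresholds are invented for illustration and are far simpler than the study's mining procedure.

```python
# Toy sequential rule mining over ordered travel diaries.
from collections import Counter

diaries = [["Sydney", "Cairns", "Uluru"],
           ["Sydney", "Cairns", "Brisbane"],
           ["Melbourne", "Sydney", "Cairns"],
           ["Sydney", "Melbourne"]]

pair_count, antecedent_count = Counter(), Counter()
for seq in diaries:
    for i, a in enumerate(seq):
        antecedent_count[a] += 1
        for b in seq[i + 1:]:                 # b occurs after a in the diary
            pair_count[(a, b)] += 1

min_support, min_confidence = 2, 0.5
for (a, b), n in pair_count.items():
    conf = n / antecedent_count[a]
    if n >= min_support and conf >= min_confidence:
        print(f"{a} -> {b}  (support={n}, confidence={conf:.2f})")
```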
34

Rehany, N., A. Barsi, and T. Lovas. "CAPTURING FINE DETAILS INVOLVING LOW-COST SENSORS –A COMPARATIVE STUDY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W8 (November 14, 2017): 213–20. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w8-213-2017.

Abstract:
Capturing the fine details on the surface of small objects is a real challenge for many conventional surveying methods. Our paper discusses the investigation of several data acquisition technologies, such as an arm scanner, a structured light scanner, a terrestrial laser scanner, an object line-scanner, a DSLR camera, and a mobile phone camera. A palm-sized embossed sculpture reproduction was used as the test object; it was surveyed with all the instruments. The resulting point clouds and meshes were then analyzed, using the arm scanner's dataset as reference. In addition to general statistics, the results were evaluated based on both 3D deviation maps and 2D deviation graphs; the latter allow even more accurate analysis of the characteristics of the different data acquisition approaches. Additionally, we created our own local-minimum maps that nicely visualize the potential level of detail provided by the applied technologies. Besides the usual geometric assessment, the paper discusses the different resource needs (cost, time, expertise) of the discussed techniques. Our results prove that even amateur sensors operated by amateur users can provide high-quality datasets that enable engineering analysis. Based on the results, the paper offers an outlook on potential future investigations in this field.
APA, Harvard, Vancouver, ISO, and other styles
36

Lorente-Martínez, Héctor, Ainhoa Agorreta, and Diego San Mauro. "Genomic Fishing and Data Processing for Molecular Evolution Research." Methods and Protocols 5, no. 2 (March 7, 2022): 26. http://dx.doi.org/10.3390/mps5020026.

Full text
Abstract:
Molecular evolution analyses, such as detection of adaptive/purifying selection or ancestral protein reconstruction, typically require three inputs for a target gene (or gene family) in a particular group of organisms: sequence alignment, model of evolution, and phylogenetic tree. While modern advances in high-throughput sequencing techniques have led to rapid accumulation of genomic-scale data in public repositories and databases, mining such a vast amount of information often remains a challenging enterprise. Here, we describe a comprehensive, versatile workflow aimed at the preparation of genome-extracted datasets readily available for molecular evolution research. The workflow involves: (1) fishing (searching and capturing) specific gene sequences of interest from taxonomically diverse genomic data available in databases at variable levels of annotation, (2) processing and depuration of retrieved sequences, (3) production of a multiple sequence alignment, (4) selection of best-fit model of evolution, and (5) solid reconstruction of a phylogenetic tree.
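The five workflow steps map naturally onto a small driver script. The sketch below, assuming common command-line tools (blastn, MAFFT, ModelTest-NG, RAxML-NG) are installed and on the PATH, is one plausible realization; the paper does not prescribe these exact tools, flags, or file names.

```python
import subprocess

def run(cmd, stdout=None):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True, stdout=stdout)

# 1) Fish: search a local genomic database for the target gene.
run(["blastn", "-query", "bait_gene.fasta", "-db", "genomes_db",
     "-outfmt", "6", "-out", "hits.tsv"])
# 2) Process/depurate: filter hits.tsv (length, identity) and write the
#    retained sequences to candidates.fasta -- omitted here for brevity.
# 3) Align: MAFFT writes the alignment to stdout.
with open("alignment.fasta", "w") as aln:
    run(["mafft", "--auto", "candidates.fasta"], stdout=aln)
# 4) Model selection: best-fit model of nucleotide evolution.
run(["modeltest-ng", "-i", "alignment.fasta", "-d", "nt"])
# 5) Tree: maximum-likelihood phylogeny under the chosen model.
run(["raxml-ng", "--msa", "alignment.fasta", "--model", "GTR+G"])
```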
APA, Harvard, Vancouver, ISO, and other styles
37

Alattar, Mohammad Anwar, Mark Beecroft, and Caitlin Cottrill. "Geographic Information System and Atomized Transportation Modes." Encyclopedia 2, no. 2 (May 25, 2022): 1069–81. http://dx.doi.org/10.3390/encyclopedia2020070.

Full text
Abstract:
Transportation is a spatial activity. A Geographic Information System (GIS) is a system for capturing, managing, analyzing, and presenting spatial data. GIS techniques are essential to the study of various aspects of transportation. In this entry, the state of knowledge regarding atomized transportation modes is presented. Atomized transportation modes are defined as transportation modes that carry low passenger numbers.
APA, Harvard, Vancouver, ISO, and other styles
38

Gonzalez-Hernandez, G., A. Sarker, K. O’Connor, and G. Savova. "Capturing the Patient’s Perspective: a Review of Advances in Natural Language Processing of Health-Related Text." Yearbook of Medical Informatics 26, no. 01 (August 2017): 214–27. http://dx.doi.org/10.1055/s-0037-1606506.

Full text
Abstract:
Summary Background: Natural Language Processing (NLP) methods are increasingly being utilized to mine knowledge from unstructured health-related texts. Recent advances in noisy text processing techniques are enabling researchers and medical domain experts to go beyond the information encapsulated in published texts (e.g., clinical trials and systematic reviews) and structured questionnaires, and obtain perspectives from other unstructured sources such as Electronic Health Records (EHRs) and social media posts. Objectives: To review the recently published literature discussing the application of NLP techniques for mining health-related information from EHRs and social media posts. Methods: Literature review included the research published over the last five years based on searches of PubMed, conference proceedings, and the ACM Digital Library, as well as on relevant publications referenced in papers. We particularly focused on the techniques employed on EHRs and social media data. Results: A set of 62 studies involving EHRs and 87 studies involving social media matched our criteria and were included in this paper. We present the purposes of these studies, outline the key NLP contributions, and discuss the general trends observed in the field, the current state of research, and important outstanding problems. Conclusions: Over the recent years, there has been a continuing transition from lexical and rule-based systems to learning-based approaches, because of the growth of annotated data sets and advances in data science. For EHRs, publicly available annotated data is still scarce and this acts as an obstacle to research progress. On the contrary, research on social media mining has seen a rapid growth, particularly because the large amount of unlabeled data available via this resource compensates for the uncertainty inherent to the data. Effective mechanisms to filter out noise and for mapping social media expressions to standard medical concepts are crucial and latent research problems. Shared tasks and other competitive challenges have been driving factors behind the implementation of open systems, and they are likely to play an imperative role in the development of future systems.
APA, Harvard, Vancouver, ISO, and other styles
39

Gonzalez-Hernandez, G., A. Sarker, K. O’Connor, and G. Savova. "Capturing the Patient’s Perspective: a Review of Advances in Natural Language Processing of Health-Related Text." Yearbook of Medical Informatics 26, no. 01 (2017): 214–27. http://dx.doi.org/10.15265/iy-2017-029.

Full text
Abstract:
Summary Background: Natural Language Processing (NLP) methods are increasingly being utilized to mine knowledge from unstructured health-related texts. Recent advances in noisy text processing techniques are enabling researchers and medical domain experts to go beyond the information encapsulated in published texts (e.g., clinical trials and systematic reviews) and structured questionnaires, and obtain perspectives from other unstructured sources such as Electronic Health Records (EHRs) and social media posts. Objectives: To review the recently published literature discussing the application of NLP techniques for mining health-related information from EHRs and social media posts. Methods: Literature review included the research published over the last five years based on searches of PubMed, conference proceedings, and the ACM Digital Library, as well as on relevant publications referenced in papers. We particularly focused on the techniques employed on EHRs and social media data. Results: A set of 62 studies involving EHRs and 87 studies involving social media matched our criteria and were included in this paper. We present the purposes of these studies, outline the key NLP contributions, and discuss the general trends observed in the field, the current state of research, and important outstanding problems. Conclusions: Over the recent years, there has been a continuing transition from lexical and rule-based systems to learning-based approaches, because of the growth of annotated data sets and advances in data science. For EHRs, publicly available annotated data is still scarce and this acts as an obstacle to research progress. On the contrary, research on social media mining has seen a rapid growth, particularly because the large amount of unlabeled data available via this resource compensates for the uncertainty inherent to the data. Effective mechanisms to filter out noise and for mapping social media expressions to standard medical concepts are crucial and latent research problems. Shared tasks and other competitive challenges have been driving factors behind the implementation of open systems, and they are likely to play an imperative role in the development of future systems.
APA, Harvard, Vancouver, ISO, and other styles
40

Onuigbo, I. C., and J. Y. Jwat. "Change Detection Analysis Using Surveying and Geoinformatics Techniques." Nigerian Journal of Environmental Sciences and Technology 2, no. 1 (March 2018): 28–38. http://dx.doi.org/10.36263/nijest.2018.01.0051.

Full text
Abstract:
The study was on change detection using Surveying and Geoinformatics techniques. For an effective research study, Landsat satellite images and a Quickbird image of Minna were acquired for three periods: 2000, 2005, and 2012. The research work demonstrated the possibility of using Surveying and Geoinformatics in capturing spatial-temporal data. The result of the research work shows rapid growth in built-up land between 2000 and 2005, while the period between 2005 and 2012 witnessed a reduction in this class. It was also observed that change by 2020 may likely follow the 2005–2012 trend, all things being equal. Built-up area may increase to 11026.456 hectares, which represents an 11% change. The study has shown clearly the extent to which MSS imagery and Landsat images, together with extensive ground-truthing, can provide information necessary for land use and land cover mapping. An attempt was made to capture, as accurately as possible, four land use and land cover classes as they change through time.
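A minimal sketch of the kind of raster comparison underlying such change statistics, assuming two already-classified land-cover grids as NumPy arrays of class codes; the class scheme and cell values are made up for illustration.

```python
import numpy as np

# Illustrative classified rasters for two epochs (1=built-up, 2=vegetation, 3=bare).
lc_2005 = np.array([[1, 1, 2], [2, 3, 3]])
lc_2012 = np.array([[1, 2, 2], [1, 3, 3]])

changed = lc_2005 != lc_2012
print("changed cells:", int(changed.sum()))

# Net gain/loss per class; multiplying by cell area would give hectares.
for cls in np.unique(lc_2005):
    delta = int((lc_2012 == cls).sum()) - int((lc_2005 == cls).sum())
    print(f"class {cls}: net change {delta:+d} cells")
```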
APA, Harvard, Vancouver, ISO, and other styles
41

Grützmacher, Florian, Jochen Kempfle, Kristof Van Laerhoven, and Christian Haubelt. "fastSW: Efficient Piecewise Linear Approximation of Quaternion-Based Orientation Sensor Signals for Motion Capturing with Wearable IMUs." Sensors 21, no. 15 (July 30, 2021): 5180. http://dx.doi.org/10.3390/s21155180.

Full text
Abstract:
In the past decade, inertial measurement sensors have found their way into many wearable devices where they are used in a broad range of applications, including fitness tracking, step counting, navigation, activity recognition, or motion capturing. One of their key features that is widely used in motion capturing applications is their capability of estimating the orientation of the device and, thus, the orientation of the limb it is attached to. However, tracking a human’s motion at reasonable sampling rates comes with the drawback that a substantial amount of data needs to be transmitted between devices or to an end point where all device data is fused into the overall body pose. The communication typically happens wirelessly, which severely drains battery capacity and limits the use time. In this paper, we introduce fastSW, a novel piecewise linear approximation technique that efficiently reduces the amount of data required to be transmitted between devices. It takes advantage of the fact that, during motion, not all limbs are being moved at the same time or at the same speed, and only those devices need to transmit data that actually are being moved or that exceed a certain approximation error threshold. Our technique is efficient in computation time and memory utilization on embedded platforms, with a maximum of 210 instructions on an ARM Cortex-M4 microcontroller. Furthermore, in contrast to similar techniques, our algorithm does not affect the device orientation estimates to deviate from a unit quaternion. In our experiments on a publicly available dataset, our technique is able to compress the data to 10% of its original size, while achieving an average angular deviation of approximately 2° and a maximum angular deviation below 9°.
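The core transmit-only-on-change idea can be sketched in a few lines. The snippet below simplifies fastSW's piecewise linear approximation to a plain dead-band (zero-order-hold) filter on unit quaternions so the threshold logic is visible; it is not the paper's algorithm, and the sample stream is invented.

```python
import math

def angle_between(q1, q2):
    # Geodesic angle between unit quaternions q = (w, x, y, z), in radians.
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    return 2.0 * math.acos(min(1.0, dot))

THRESHOLD = math.radians(2.0)  # illustrative angular error budget
last_sent = None

def maybe_transmit(q):
    """Return True if this orientation sample should be sent to the fusion endpoint."""
    global last_sent
    if last_sent is None or angle_between(last_sent, q) > THRESHOLD:
        last_sent = q
        return True
    return False  # limb barely moved since last transmission; skip sample

stream = [(1.0, 0.0, 0.0, 0.0), (0.99995, 0.01, 0.0, 0.0), (0.966, 0.259, 0.0, 0.0)]
sent = [q for q in stream if maybe_transmit(q)]
print(len(sent), "of", len(stream), "samples transmitted")  # -> 2 of 3
```

fastSW improves on this zero-order hold by fitting line segments to the signal, which is what allows slow, steady motion to be compressed as well.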
APA, Harvard, Vancouver, ISO, and other styles
42

Pegoraro, Marco, Merih Seran Uysal, and Wil M. P. van der Aalst. "Efficient Time and Space Representation of Uncertain Event Data." Algorithms 13, no. 11 (November 9, 2020): 285. http://dx.doi.org/10.3390/a13110285.

Full text
Abstract:
Process mining is a discipline which concerns the analysis of execution data of operational processes, the extraction of models from event data, the measurement of the conformance between event data and normative models, and the enhancement of all aspects of processes. Most approaches assume that event data accurately capture the behavior that occurred. However, this is not realistic in many applications: data can contain uncertainty, generated from errors in recording, imprecise measurements, and other factors. Recently, new methods have been developed to analyze event data containing uncertainty; these techniques prominently rely on representing uncertain event data by means of graph-based models explicitly capturing uncertainty. In this paper, we introduce a new approach to efficiently calculate a graph representation of the behavior contained in an uncertain process trace. We present our novel algorithm, prove its asymptotic time complexity, and show experimental results that highlight order-of-magnitude performance improvements for the behavior graph construction.
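A minimal sketch of a behavior graph for one uncertain trace, assuming each event carries a timestamp interval: an edge a -> b is drawn only when a certainly precedes b. For brevity this keeps transitive edges, whereas the paper's contribution is an efficient construction of the (transitively reduced) graph.

```python
# Events of one trace with uncertain timestamps as (min, max) intervals.
trace = [
    ("register", (1.0, 1.0)),  # certain timestamp
    ("check",    (2.0, 4.0)),  # uncertain: somewhere in [2, 4]
    ("approve",  (3.0, 5.0)),  # overlaps "check" -> mutual order unknown
    ("archive",  (6.0, 6.0)),
]

edges = []
for i, (a, (_, a_max)) in enumerate(trace):
    for b, (b_min, _) in trace[i + 1:]:
        if a_max < b_min:  # a's latest time precedes b's earliest time
            edges.append((a, b))

print(edges)
# [('register', 'check'), ('register', 'approve'), ('register', 'archive'),
#  ('check', 'archive'), ('approve', 'archive')]
```

Note that no edge connects "check" and "approve": their intervals overlap, so the graph explicitly leaves their order open.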
APA, Harvard, Vancouver, ISO, and other styles
43

LACROIX, ZOÉ, LOUIQA RASCHID, and BARBARA A. ECKMAN. "TECHNIQUES FOR OPTIMIZATION OF QUERIES ON INTEGRATED BIOLOGICAL RESOURCES." Journal of Bioinformatics and Computational Biology 02, no. 02 (June 2004): 375–411. http://dx.doi.org/10.1142/s0219720004000648.

Full text
Abstract:
Today, scientific data are inevitably digitized, stored in a wide variety of formats, and are accessible over the Internet. Scientific discovery increasingly involves accessing multiple heterogeneous data sources, integrating the results of complex queries, and applying further analysis and visualization applications in order to collect datasets of interest. Building a scientific integration platform to support these critical tasks requires accessing and manipulating data extracted from flat files or databases, documents retrieved from the Web, as well as data that are locally materialized in warehouses or generated by software. The lack of efficiency of existing approaches can significantly affect the process with lengthy delays while accessing critical resources or with the failure of the system to report any results. Some queries take so much time to be answered that their results are returned via email, making their integration with other results a tedious task. This paper presents several issues that need to be addressed to provide seamless and efficient integration of biomolecular data. Identified challenges include: capturing and representing various domain specific computational capabilities supported by a source including sequence or text search engines and traditional query processing; developing a methodology to acquire and represent semantic knowledge and metadata about source contents, overlap in source contents, and access costs; developing cost and semantics based decision support tools to select sources and capabilities, and to generate efficient query evaluation plans.
APA, Harvard, Vancouver, ISO, and other styles
44

Borra, Erik, and Bernhard Rieder. "Programmed method: developing a toolset for capturing and analyzing tweets." Aslib Journal of Information Management 66, no. 3 (May 19, 2014): 262–78. http://dx.doi.org/10.1108/ajim-09-2013-0094.

Full text
Abstract:
Purpose – The purpose of this paper is to introduce Digital Methods Initiative Twitter Capture and Analysis Toolset, a toolset for capturing and analyzing Twitter data. Instead of just presenting a technical paper detailing the system, however, the authors argue that the type of data used for, as well as the methods encoded in, computational systems have epistemological repercussions for research. The authors thus aim at situating the development of the toolset in relation to methodological debates in the social sciences and humanities. Design/methodology/approach – The authors review the possibilities and limitations of existing approaches to capture and analyze Twitter data in order to address the various ways in which computational systems frame research. The authors then introduce the open-source toolset and put forward an approach that embraces methodological diversity and epistemological plurality. Findings – The authors find that design decisions and more general methodological reasoning can and should go hand in hand when building tools for computational social science or digital humanities. Practical implications – Besides methodological transparency, the software provides robust and reproducible data capture and analysis, and interlinks with existing analytical software. Epistemic plurality is emphasized by taking into account how Twitter structures information, by allowing for a number of different sampling techniques, by enabling a variety of analytical approaches or paradigms, and by facilitating work at the micro, meso, and macro levels. Originality/value – The paper opens up critical debate by connecting tool design to fundamental interrogations of methodology and its repercussions for the production of knowledge. The design of the software is inspired by exchanges and debates with scholars from a variety of disciplines and the attempt to propose a flexible and extensible tool that accommodates a wide array of methodological approaches is directly motivated by the desire to keep computational work open for various epistemic sensibilities.
APA, Harvard, Vancouver, ISO, and other styles
45

Das, Rik, and Subhajit Bhattacharya. "A Novel Feature Extraction Technique for Content Based Image Classification in Digital Marketing Platform." American Journal of Advanced Computing 1, no. 3 (July 1, 2020): 1–8. http://dx.doi.org/10.15864/ajac.1302.

Full text
Abstract:
The increasing size and complexity of information about products offered through digital media has made it essential to adopt effective and efficient techniques for data management and categorization. Text data has gradually been replaced by image or visual data owing to the growing importance of image-capturing devices and social media. Images have become one of the foremost linkages between the brand and the consumer in the digital marketing domain. Classification has been considered a vital component of machine learning, necessary for data identification, which can be initiated before retrieval to restrict the search within the class of interest. The authors have proposed a novel feature extraction technique for content-based image classification. Analyses of customer satisfaction related to content-based product identification with images in diverse media have also been carried out. The classification results with the proposed technique have outperformed the existing methods and have shown an increment of 20% in precision results.
APA, Harvard, Vancouver, ISO, and other styles
46

Gupta, Sangeeta, and Raghuram Godavarti. "IoT Data Management Using Cloud Computing and Big Data Technologies." International Journal of Software Innovation 8, no. 4 (October 2020): 50–58. http://dx.doi.org/10.4018/ijsi.2020100104.

Full text
Abstract:
With recent developments in technology, devices like vehicles and home appliances are able to connect to the Internet and communicate, contributing to the Internet of Things. These advancements lead to the generation of huge amounts of data. These data are needed to derive the metrics of the IoT devices, which can later be used for analysis and, in turn, to inform business decisions. Moreover, such huge amounts of data are very difficult to handle with conventional data warehousing techniques and need a better system. The existing on-site data centers are mostly relational databases, which are not scalable to handle increasing storage and compute needs. These systems are also inefficient at handling the different types of data that IoT devices capturing different metrics inevitably produce. In the proposed work, a model is designed to better handle the data generated by IoT devices via REST APIs. Results are presented to depict the functioning of the REST API across all the nodes deployed in a cluster via a JSON request. The input to the model is a corresponding JSON payload as a request. The transactions get added to the registered nodes, without the need to add the payload a second time. A new batch is created with readings from all the devices. The contents of the entire batch and all systems are obtained while retrieving the results, thus signifying the effectiveness of the proposed work.
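A minimal sketch of such an ingestion endpoint, assuming Flask; the route name, payload shape, and in-memory batch store are illustrative stand-ins for the paper's cluster-backed deployment.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
batches = []  # stand-in for the cluster-backed store

@app.route("/readings", methods=["POST"])
def add_batch():
    # e.g. payload: {"device": "pump-1", "metrics": {"temp": 61.2, "rpm": 900}}
    payload = request.get_json()
    batches.append(payload)
    return jsonify({"batch_id": len(batches) - 1}), 201

@app.route("/readings", methods=["GET"])
def list_batches():
    # Retrieve the contents of all batches ingested so far.
    return jsonify(batches)

if __name__ == "__main__":
    app.run(port=5000)
```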
APA, Harvard, Vancouver, ISO, and other styles
47

Vinod, D. Franklin, and V. Vasudevan. "LNTP-MDBN: Big Data Integrated Learning Framework for Heterogeneous Image Set Classification." Current Medical Imaging Formerly Current Medical Imaging Reviews 15, no. 2 (January 10, 2019): 227–36. http://dx.doi.org/10.2174/1573405613666170721103949.

Full text
Abstract:
Background: With the explosive growth of global data, the term Big Data describes the enormous size of a dataset subject to detailed analysis. Big data analytics reveals the hidden patterns and secret correlations among the values. The major challenges in Big Data analysis are due to the increase of volume, variety, and velocity. The capturing of images with multi-directional views motivates image set classification, which is an active research area in volumetric-based medical image processing. Methods: This paper proposes the Local N-ary Ternary Patterns (LNTP) and Modified Deep Belief Network (MDBN) to alleviate the dimensionality and robustness issues. Initially, the proposed LNTP-MDBN utilizes a filtering technique to identify and remove the dependent and independent noise from the images. Then, the application of smoothening and normalization techniques on the filtered image improves the intensity of the images. Results: The LNTP-based feature extraction categorizes the heterogeneous images into different categories and extracts the features from each category. Based on the extracted features, the modified DBN finally classifies the normal and abnormal categories in the image set. Conclusion: The comparative analysis of the proposed LNTP-MDBN with the existing pattern extraction and DBN learning models regarding classification accuracy and runtime confirms its effectiveness in mining applications.
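The three preprocessing stages named in the abstract (denoising, smoothening, normalization) could look roughly like this with OpenCV; the file name and kernel sizes are assumptions, and the LNTP descriptor and modified DBN themselves are not reproduced here.

```python
import cv2

img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)   # any grayscale medical image
assert img is not None, "scan.png not found"

denoised = cv2.medianBlur(img, 3)                    # remove impulse-type noise
smoothed = cv2.GaussianBlur(denoised, (5, 5), 0)     # smoothening stage
normalized = cv2.normalize(smoothed, None, 0, 255, cv2.NORM_MINMAX)  # intensity normalization
```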
APA, Harvard, Vancouver, ISO, and other styles
48

Kent, L., J. Gopsill, L. Giunta, M. Goudswaard, C. Snider, and B. Hicks. "Prototyping through the Lens of Network Analysis and Visualisation." Proceedings of the Design Society 2 (May 2022): 743–52. http://dx.doi.org/10.1017/pds.2022.76.

Full text
Abstract:
Prototyping is a well-established and valued design process activity. However, capturing prototypes and the tacit knowledge that led to and was gained from their creation is a challenge. Beyond that, questions remain on how best to utilise that captured data. This paper looks at how one can exploit and generate insights from data that has been captured, specifically looking at graph databases, the network analysis techniques they permit and the differing fidelities of visualisation and interactivity that they enable.
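A small sketch of the kind of analysis the abstract points at, using networkx in place of a graph database; the prototype nodes, edge semantics, and centrality measure are illustrative choices.

```python
import networkx as nx

# Directed graph: "this prototype informed that one", with a captured reason.
g = nx.DiGraph()
g.add_edge("sketch-v1", "foam-model", reason="form exploration")
g.add_edge("foam-model", "3d-print-v1", reason="geometry locked")
g.add_edge("sketch-v1", "cad-v1", reason="dimensions captured")
g.add_edge("cad-v1", "3d-print-v1", reason="direct export")

# Which prototype seeded the most downstream work?
centrality = nx.out_degree_centrality(g)
print(max(centrality, key=centrality.get))  # -> 'sketch-v1'
```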
APA, Harvard, Vancouver, ISO, and other styles
49

Labonté, Katherine, Daniel Lafond, Bénédicte Chatelais, Aren Hunter, Folakemi Akpan, Heather F. Neyedli, and Sébastien Tremblay. "Combining Process Tracing and Policy Capturing Techniques for Judgment Analysis in an Anti-Submarine Warfare Simulation." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 65, no. 1 (September 2021): 1557–61. http://dx.doi.org/10.1177/1071181321651113.

Full text
Abstract:
The Cognitive Shadow is a prototype decision support tool that can notify users when they deviate from their usual judgment pattern. Expert decision policies are learned automatically online while performing one’s task using a combination of machine learning algorithms. This study investigated whether combining this system with the use of a process tracing technique could improve its ability to model human decision policies. Participants played the role of anti-submarine warfare commanders and rated the likelihood of detecting a submarine in different ocean areas based on their environmental characteristics. In the process tracing condition, participants were asked to reveal only the information deemed necessary, and only that information was sent to the system for model training. In the control condition, all the available information was sent to the system with each decision. Results showed that process tracing data improved the model’s ability to predict human decisions compared to the control condition.
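A minimal sketch of the process-tracing variant, assuming scikit-learn: only cue values the participant actually revealed are passed to the policy-capturing model, with hidden cues zero-masked. The cue names, data, and masking choice are illustrative, not the Cognitive Shadow's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: water depth, noise level, sonar coverage (normalized cues).
cues = np.array([
    [0.9, 0.2, 0.8],
    [0.4, 0.7, 0.3],
    [0.6, 0.5, 0.9],
])
revealed = np.array([
    [1, 0, 1],   # participant never opened the "noise level" cue
    [1, 1, 0],
    [1, 0, 1],
], dtype=bool)
ratings = np.array([0.8, 0.3, 0.7])  # judged detection likelihood

# Train only on the information the decision maker deemed necessary.
model = LinearRegression().fit(np.where(revealed, cues, 0.0), ratings)
print(model.coef_)  # policy weights over the cues actually used
```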
APA, Harvard, Vancouver, ISO, and other styles
50

Knyaz, V. A., N. A. Leybova, R. Galeev, M. Novikov, and A. V. Gaboutchian. "PHOTOGRAMMETRIC TECHNIQUES FOR PALEOANTHROPOLOGICAL OBJECTS PRESERVING AND STUDYING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 525–30. http://dx.doi.org/10.5194/isprs-archives-xlii-2-525-2018.

Full text
Abstract:
Paleo-anthropological research has specific requirements closely related to the objects it studies. Their complicated shape arises from the anatomical features of the human skull and other skeletal bones. The degree of preservation is associated with the fragility of palaeo-anthropological material, which usually has high historical and scientific value. The circumstances mentioned above enhance the relevance of implementing photogrammetry in anthropological studies. Such a combination of scientific methodologies with up-to-date technology creates a potential for improving various stages of palaeo-anthropological studies. This includes accurate documenting of anthropological material and the creation of databases accessible to a wide range of users, predominantly research scientists and students; preservation of highly valuable samples and the possibility of sharing information as 3D images or printed copies, improving co-operation of scientists world-wide; and the potential for replicating contact anthropometric studies on 3D images or printed copies, providing for the development of new biometric methods. This paper presents an approach based on photogrammetric techniques and non-contact measurements, providing technological and methodological development of paleo-anthropological studies, including data capturing, processing, and representing.
APA, Harvard, Vancouver, ISO, and other styles