Academic literature on the topic 'Data management and data science not elsewhere classified'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Data management and data science not elsewhere classified.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Data management and data science not elsewhere classified"

1

Bindewald, A., S. Miocic, A. Wedler, and J. Bauhus. "Forest inventory-based assessments of the invasion risk of Pseudotsuga menziesii (Mirb.) Franco and Quercus rubra L. in Germany." European Journal of Forest Research 140, no. 4 (March 26, 2021): 883–99. http://dx.doi.org/10.1007/s10342-021-01373-0.

Full text
Abstract:
In Europe, some non-native tree species (NNT) are classified as invasive because they have spread into semi-natural habitats. Yet, available risk assessment protocols are often based on a few limited case studies with unknown representativeness and uncertain data quality. This is particularly problematic when negative impacts of NNT are confined to particular ecosystems or processes, whilst providing valuable ecosystem services elsewhere. Here, we filled this knowledge gap and assessed invasion risks of two controversially discussed NNT in Germany (Quercus rubra L., Pseudotsuga menziesii (Mirb.) Franco) for broad forest types using large scale inventory data. For this purpose, establishment success of natural regeneration was quantified in terms of cover and height classes. The current extent of spread into protected forest habitats was investigated in south-west Germany using regional data. Establishment was most successful at sites where the NNT are abundant in the canopy and where sufficient light is available in the understory. Natural regeneration of both NNT was observed in 0.3% of the total area of protected habitats. In forest habitats with sufficient light in the understory and competitively inferior tree species, there is a risk that Douglas fir and red oak cause changes in species composition in the absence of management interventions. The installation of buffer zones and regular removal of unwanted regeneration could minimize such risks for protected areas. Our study showed that forest inventories can provide valuable data for comparing the establishment risk of NNT amongst ecosystem types, regions or jurisdictions. This information can be improved by recording the abundance and developmental stage of widespread NNT, particularly in semi-natural ecosystems.
APA, Harvard, Vancouver, ISO, and other styles
2

Feng, Shuxian, and Toshiya Yamamoto. "Preliminary research on sponge city concept for urban flood reduction: a case study on ten sponge city pilot projects in Shanghai, China." Disaster Prevention and Management: An International Journal 29, no. 6 (November 9, 2020): 961–85. http://dx.doi.org/10.1108/dpm-01-2020-0019.

Full text
Abstract:
Purpose: This research aimed to determine the differences and similarities in each pilot project to understand the primary design forms and concepts of sponge city concept (SCC) projects in China. It also aimed to examine ten pilot projects in Shanghai to extrapolate their main characteristics and the processes necessary for implementing SCC projects effectively. Design/methodology/approach: A literature review and field survey case study were employed. Data were mostly collected through a field survey in Shanghai, focusing on both the projects and the surrounding environment. Based on these projects' examination, a comparative method was used to determine the characteristics of the ten pilot SCC projects and programs in Shanghai. Findings: Six main types of SCC projects among 30 pilot cities were classified in this research to find differences and similarities among the pilot cities. Four sponge design methods were classified into ten pilot projects. After comparing each project size using the same geographical size, three geometrical types were categorized into both existing and new city areas. SCC project characteristics could be identified by combining four methods and three geometrical types and those of the SCC programs by comparing the change in land-use and the surrounding environment in ten pilot projects. Originality/value: The results are valuable for implementing SCC projects in China and elsewhere and future research on the impact of SCC projects.
APA, Harvard, Vancouver, ISO, and other styles
3

Geromont, H. F., and D. S. Butterworth. "Generic management procedures for data-poor fisheries: forecasting with few data." ICES Journal of Marine Science 72, no. 1 (January 15, 2014): 251–61. http://dx.doi.org/10.1093/icesjms/fst232.

Full text
Abstract:
The majority of fish stocks worldwide are not managed quantitatively as they lack sufficient data, particularly a direct index of abundance, on which to base an assessment. Often these stocks are relatively “low value”, which renders dedicated scientific management too costly, and a generic solution is therefore desirable. A management procedure (MP) approach is suggested where simple harvest control rules are simulation tested to check robustness to uncertainties. The aim of this analysis is to test some very simple “off-the-shelf” MPs that could be applied to groups of data-poor stocks which share similar key characteristics in terms of status and demographic parameters. For this initial investigation, a selection of empirical MPs is simulation tested over a wide range of operating models (OMs) representing resources of medium productivity classified as severely depleted, to ascertain how well these different MPs perform. While the data-moderate MPs (based on an index of abundance) perform somewhat better than the data-limited ones (which lack such input) as would be expected, the latter nevertheless perform surprisingly well across wide ranges of uncertainty. These simple MPs could well provide the basis to develop candidate MPs to manage data-limited stocks, ensuring if not optimal, at least relatively stable sustainable future catches.
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Yang. "Intelligent Community Management System Based on Big Data Technology." Scientific Programming 2022 (February 23, 2022): 1–10. http://dx.doi.org/10.1155/2022/5396636.

Full text
Abstract:
Community safety has become an important part of social public safety. The construction of a safe community focuses on the accumulation of community safety capabilities. This paper discusses the application of big data technology in community safety construction and the improvement of community safety promotion capabilities. We analyzed the sources and collection methods of community data, classified multisource heterogeneous community data, and constructed seven types of community data. We designed the conceptual structure and storage structure of the community database. On the basis of the construction of the community database, the architecture design of the big data platform for community security was launched. From the perspective of different user types, the functional requirements of the big data platform were analyzed. Combined with demand analysis, the overall architecture design of the community big data platform was carried out. On the basis of the overall architecture, the application architecture and technical architecture were designed in more detail, and the key technologies of the community big data platform were analyzed. Finally, it analyzes how to use the community big data platform to predict public security risks by constructing a CART regression tree model.
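For readers unfamiliar with the technique named in this abstract, here is a minimal sketch of a CART regression tree fit with scikit-learn. The feature names, data, and parameters are invented for illustration and are not the paper's actual variables.

```python
# Illustrative sketch only: a CART regression tree of the kind the abstract
# names, fit on hypothetical community-safety features. Feature names and
# data are invented; the paper's actual variables and model differ.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors: visitor flow, incident reports, camera coverage
X = rng.random((n, 3))
# Hypothetical risk score loosely tied to the predictors, plus noise
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeRegressor(max_depth=4, random_state=0)  # depth limits overfitting
tree.fit(X_train, y_train)
print("R^2 on held-out data:", tree.score(X_test, y_test))
```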
APA, Harvard, Vancouver, ISO, and other styles
5

Aljabhan, Basim, and Melese Abeyie. "Big Data Analytics in Supply Chain Management: A Qualitative Study." Computational Intelligence and Neuroscience 2022 (September 16, 2022): 1–10. http://dx.doi.org/10.1155/2022/9573669.

Full text
Abstract:
This work explores the leading supply chain processes impacted by big data analytic techniques. Although these concepts are being extensively applied to supply chain management, the number of works that examine and classify the main processes in the current literature is still scarce. This article, therefore, provides a classification of the current literature on the use of big data analytics and provides insight from professionals in the field in relation to this topic. A well-established set of practical guidelines was used to design and carry out a systematic literature mapping. A total of 50 primary studies were analysed and classified, chosen from a sample of 5,437 studies after careful filtering to answer six research questions. In addition, a survey was prepared and applied to professionals working in the area. In total, 25 professionals answered a questionnaire with eleven questions, ten of which explore the importance of big data analytics for the areas of the supply chain addressed in this work, and one asks them to list the three areas where BDA can have the greatest impact. More than 60% of the studies are directly linked to the area of supply chain management; most performed empirical studies but rarely classified or detailed methodological procedures; almost 50% bring models to optimize some process or forecasts for better decision-making; more than 50% of professionals working in the area believe that the processes where big data analytics can effectively contribute are related to inventory and stockout management. This study serves as a basis for further research and future work, as it reviews the literature, pointing out the main areas that are being addressed and relating them to how these areas are understood in practice.
APA, Harvard, Vancouver, ISO, and other styles
6

Sant'Anna, Annibal P. "Data envelopment analysis of randomized ranks." Pesquisa Operacional 22, no. 2 (December 2002): 203–15. http://dx.doi.org/10.1590/s0101-74382002000200007.

Full text
Abstract:
Probabilities and odds, derived from vectors of ranks, are here compared as measures of efficiency of decision-making units (DMUs). These measures are computed with the goal of providing preliminary information before starting a Data Envelopment Analysis (DEA) or the application of any other evaluation or composition of preferences methodology. Preferences, quality and productivity evaluations are usually measured with errors or subject to influence of other random disturbances. Reducing evaluations to ranks and treating the ranks as estimates of location parameters of random variables, we are able to compute the probability of each DMU being classified as the best according to the consumption of each input and the production of each output. Employing the probabilities of being the best as efficiency measures, we stretch distances between the most efficient units. We combine these partial probabilities in a global efficiency score determined in terms of proximity to the efficiency frontier.
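A minimal sketch of the abstract's core idea follows: treat observed ranks as noisy estimates of location parameters and estimate the probability that each DMU is the best on a criterion. The normal-disturbance model and all numbers are assumptions for illustration, not the paper's exact formulation.

```python
# Monte Carlo sketch: observed ranks are treated as location parameters of
# random variables, and we estimate the probability that each DMU comes out
# best (rank 1). The normal-noise model and sigma are assumptions.
import numpy as np

rng = np.random.default_rng(1)
ranks = np.array([1, 2, 3, 4, 5])  # observed ranks of 5 DMUs (1 = best)
sigma = 1.0                         # assumed disturbance scale
draws = rng.normal(loc=ranks, scale=sigma, size=(100_000, ranks.size))
p_best = np.mean(draws.argmin(axis=1)[:, None] == np.arange(ranks.size), axis=0)
print(dict(enumerate(p_best.round(3))))  # probability each DMU ranks first
```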
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Juan. "Application of Intelligent Archives Management Based on Data Mining in Hospital Archives Management." Journal of Electrical and Computer Engineering 2022 (April 7, 2022): 1–13. http://dx.doi.org/10.1155/2022/6217328.

Full text
Abstract:
Data mining belongs to knowledge discovery, which is the process of revealing implicit, unknown, and valuable information from a large amount of fuzzy application data. The potential information revealed by data mining can help decision makers adjust market strategies and reduce market risks. The information extracted must be real and not universally known, and it can be the discovery of a specific problem. Data mining algorithms mainly include the neural network method, decision tree method, genetic algorithm, rough set method, fuzzy set method, association rule method, and so on. Archives management, also known as archive work, is the general term for the various activities by which archives directly manage archive entities and archive information and provide utilization services. It is also the most basic part of national archives. Hospital archives are an important part of hospital management; they are the accumulation of work experience and one of the important elements for building a modern hospital. Hospital archives include documents, work records, charts, audio recordings, videos, photos, and other types of documents, audio-visual materials, and physical materials, such as certificates, trophies, and medals obtained by hospitals, departments, and individuals. The purpose of this paper is to study the application of intelligent archives management based on data mining in hospital archives management, expecting to use existing data mining technology to improve current hospital archives management. This paper investigates the age and educational background of hospital archives management workers and explores the relationship between them and the quality of archives management. On the basis of the database, the hospital archive data is classified, analyzed, and processed with a decision tree algorithm to improve the system's data processing capability. The experimental results of this paper show that among the staff working in the archives management department of the hospital, 20-to-30-year-olds account for 46.2% of the total group. According to the data, the staff in the archives management department of the hospital tends to be younger. Among the staff under the age of 30, the file pass rate was 98.3% and the failure rate was 1.7%. Among the staff over 50 years old, the file pass rate was 99.9% and the failure rate was 0.1%. According to the data, job performance is related to the experience of the employee.
APA, Harvard, Vancouver, ISO, and other styles
8

Beránek, Václav, Tomáš Olšan, Martin Libra, Vladislav Poulek, Jan Sedláček, Minh-Quan Dang, and Igor Tyukhov. "New Monitoring System for Photovoltaic Power Plants’ Management." Energies 11, no. 10 (September 20, 2018): 2495. http://dx.doi.org/10.3390/en11102495.

Full text
Abstract:
An innovative solar monitoring system has been developed. The system measures the main parameters and characteristics of solar plants and collects, diagnoses, and processes the data. The system communicates with the inverters, electrometers, meteorological equipment, and additional components of the photovoltaic arrays. The long-operating system is built on special data-collecting technologies. At the generating plants, a special data logger, BBbox, is installed. The new monitoring system has been used to follow 65 solar plants in the Czech Republic and elsewhere, totalling 175 MWp. As an example, we have selected 13 PV plants in this paper that are at least seven years old. The monitoring system contributes to quality management of plants, and it also provides data for scientific purposes. Production of electricity in the built PV plants reflects the expected values according to the internationally used software PVGIS (version 5) during the previous seven years of operation. A comparison of important system parameters clearly shows the new solutions and benefits of the new Solarmon-2.0 monitoring system. Secured communications will increase data protection. A higher frequency of data saving allows higher accuracy of the mathematical models.
APA, Harvard, Vancouver, ISO, and other styles
9

Rocha, Rafael Brandão, and Maria Aparecida Cavalcanti Netto. "A data envelopment analysis model for rank ordering suppliers in the oil industry." Pesquisa Operacional 22, no. 2 (December 2002): 123–31. http://dx.doi.org/10.1590/s0101-74382002000200002.

Full text
Abstract:
The benefits of company-supplier integration top the strategic agendas of managers. Developing a system that shows which suppliers merit a continued and deepened partnership is difficult because of the large number of variables to be analyzed. The internationalized petroleum industry, requiring a large variety of materials, is no different. In this context, the Brazilian company PETROBRAS S.A. has a system to evaluate its suppliers based on a consensus panel formed by its managers. This paper presents a two-phase methodology for classifying and awarding suppliers using the DEA model. Firstly, the suppliers are classified according to their efficiency, based on completed commercial transactions. Secondly, they are classified according to the opinions of the managers, using a DEA model for calculating votes, with assurance regions and super-efficiency defining the best suppliers. The paper presents a case study in the E&P segment of PETROBRAS and the results obtained with the methodology.
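As context for the first phase, here is a minimal sketch of the classic CCR multiplier-form DEA linear program solved with SciPy on invented supplier data. Assurance regions and super-efficiency from the second phase are not implemented, and all inputs and outputs are assumptions for illustration.

```python
# Minimal CCR-style DEA efficiency sketch (multiplier form). For each DMU k,
# maximize u.y_k subject to v.x_k = 1 and u.y_j - v.x_j <= 0 for all j.
# Data are invented; this is not the paper's actual model or dataset.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0], [3.0], [4.0], [5.0]])  # inputs per supplier (e.g., cost)
Y = np.array([[1.0], [2.0], [2.5], [2.0]])  # outputs per supplier (e.g., delivery score)
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

for k in range(n):
    c = np.concatenate([-Y[k], np.zeros(m)])          # maximize u . y_k
    A_ub = np.hstack([Y, -X])                         # u . y_j - v . x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[k]])[None]  # v . x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    print(f"supplier {k}: efficiency = {-res.fun:.3f}")
```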
APA, Harvard, Vancouver, ISO, and other styles
10

Wilson, Lee, Tiong T. Goh, and William Yu Chung Wang. "Big Data Management Challenges in a Meteorological Organisation." International Journal of E-Adoption 4, no. 2 (April 2012): 1–14. http://dx.doi.org/10.4018/jea.2012040101.

Full text
Abstract:
Data management practices strongly impact enterprise performance, especially for e-science organisations dealing with big data. This study identifies the key challenges and issues facing information system managers amid growing demand for big data operations to deliver timely meteorological products. Data were collected from in-depth interviews with five MetService information system managers, including the CIO. Secondary data sources include internal documents and the relevant literature. The study revealed that the pressing and challenging big data management issues can broadly be classified as data governance, infrastructure management, and workflow management. It identifies a gap in adopting an effective workflow management system and a coordinated outsourcing plan within the organisation. Although the study is limited by its sample size and generalisability, the findings are useful for IT managers and practitioners in data-intensive organisations examining how their data management practices balance the demand for efficient scientific operations against sustainable business growth. The study found that although the organisation is implementing up-to-date and practical solutions to meet these challenges, effort is needed to harmonise and align these solutions with business growth strategies to sustain future growth. The study also enhances understanding of the current practices of a real-world organisation.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Data management and data science not elsewhere classified"

1

Woon, Wei Lee. "Analysis of magnetoencephalographic data as a nonlinear dynamical system." Thesis, Aston University, 2002. http://publications.aston.ac.uk/13266/.

Full text
Abstract:
This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study of both the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, as well as the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - by directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis will be to prove that much more effective and powerful analysis of MEG can be achieved if one were to assume the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in the MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
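As an aside for readers, one standard first step when treating a single-channel recording as observations of an underlying dynamical system is state-space reconstruction by time-delay embedding. The sketch below illustrates this on a synthetic signal standing in for one MEG channel; the delay and dimension are chosen arbitrarily for illustration, not taken from the thesis.

```python
# Sketch of time-delay embedding (Takens' theorem): rebuild a multidimensional
# state trajectory from a single scalar time series. The signal and the
# (dim, tau) parameters are illustrative assumptions.
import numpy as np

def delay_embed(x: np.ndarray, dim: int, tau: int) -> np.ndarray:
    """Return the delay-embedded trajectory matrix of a scalar series x."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0, 50, 5000)
signal = np.sin(t) + 0.3 * np.sin(2.2 * t)   # stand-in for one MEG channel
states = delay_embed(signal, dim=3, tau=25)  # reconstructed 3-D state vectors
print(states.shape)                          # (4950, 3)
```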
APA, Harvard, Vancouver, ISO, and other styles
2

Jiang, Feng. "Capturing event metadata in the sky : a Java-based application for receiving astronomical internet feeds : a thesis presented in partial fulfilment of the requirements for the degree of Master of Computer Science in Computer Science at Massey University, Auckland, New Zealand." Massey University, 2008. http://hdl.handle.net/10179/897.

Full text
Abstract:
When an astronomical observer discovers a transient event in the sky, how can the information be immediately shared and delivered to others? Not too long ago, people shared what they discovered in the sky by book, telegraph, and telephone. The new way of transferring event data is via the Internet. Information about astronomical events can be packaged and published online as an Internet feed. To receive these packaged data, Internet feed listener software is required on a terminal computer. In other applications, the listener would connect to an intelligent robotic telescope network and automatically drive a telescope to capture transient astrophysical phenomena. However, because the technologies for transferring astronomical event data are at an early stage, the only resource available was the Perl-based Internet feed listener developed by the eSTAR team. In this research, a Java-based Internet feed listener was developed. The application supports more features than the Perl-based application. Drawing on the strengths of Java, the application is able to receive, parse, and manage Internet feed data efficiently, with a friendly user interface. Keywords: Java, socket programming, VOEvent, real-time astronomy
APA, Harvard, Vancouver, ISO, and other styles
3

Gales, Mathis. "Collaborative map-exploration around large table-top displays: Designing a collaboration interface for the Rapid Analytics Interactive Scenario Explorer toolkit." Thesis, Ludwig-Maximilians-University Munich, 2018. https://eprints.qut.edu.au/115909/1/Master_Thesis_Mathis_Gales_final_opt.pdf.

Full text
Abstract:
Sense-making of spatial data on an urban level and large-scale decisions on new infrastructure projects need teamwork from experts with varied backgrounds. Technology can facilitate this collaboration process and magnify the effect of collective intelligence. Therefore, this work explores new useful collaboration interactions and visualizations for map-exploration software with a strong focus on usability. Additionally, for same-time and same-place group work, interactive table-top displays serve as a natural platform. Thus, the second aim of this project is to develop a user-friendly concept for integrating table-top displays with collaborative map-exploration. To achieve these goals, we continuously adapted the user-interface of the map-exploration software RAISE. We adopted a user-centred design approach and a simple iterative interaction design lifecycle model. Alternating between quick prototyping and user-testing phases, new design concepts were assessed and consequently improved or rejected. The necessary data was gathered through continuous dialogue with users and experts, a participatory design workshop, and a final observational study. Adopting a cross-device concept, our final prototype supports sharing information between a user’s personal device and table-top display(s). We found that this allows for a comfortable and practical separation between private and shared workspaces. The tool empowers users to share the current camera-position, data queries, and active layers between devices and with other users. We generalized further findings into a set of recommendations for designing user-friendly tools for collaborative map-exploration. The set includes recommendations regarding the sharing behaviour, the user-interface design, and the idea of playfulness in collaboration.
APA, Harvard, Vancouver, ISO, and other styles
4

(7360664), Gary Lee Johns. "STEM AND DATA: INSTRUCTIONAL DECISION MAKING OF SECONDARY SCIENCE AND MATHEMATICS TEACHERS." Thesis, 2019.

Find full text
Abstract:
This research is focused on the intersection of secondary teachers’ data-use to inform instructional decisions and their teaching of STEM in STEM-focused high schools. Teaching STEM requires presenting more than just the content knowledge of the STEM domains. The methods of inquiry (e.g., scientific inquiry, engineering design) are skills that should be taught as part of STEM activities (e.g., science labs). However, under the data- and standards-based accountability focus of education, it is unclear how data from STEM activities is used in instructional decision-making. While teachers give tremendous weight to the data they collect directly from their observations of their classrooms, it is data from standardized testing that strongly influences practices through accountability mandates. STEM education alters this scenario because, while there is a growing focus on teaching STEM, important aspects of STEM education are not readily standardized. This mixed-methods study will examine the perspectives of 9th through 12th grade science and mathematics teachers, in STEM-focused schools, on data-use and STEM teaching. We developed a framework, adapted from existing frameworks of data-use, to categorize these perspectives and outline contexts influencing them. Through a concurrent triangulation design we will combine quantitative and qualitative data for a comprehensive synthesis of these perspectives.
APA, Harvard, Vancouver, ISO, and other styles
5

(10514360), Uttara Vinay Tipnis. "Data Science Approaches on Brain Connectivity: Communication Dynamics and Fingerprint Gradients." Thesis, 2021.

Find full text
Abstract:
The innovations in Magnetic Resonance Imaging (MRI) in the recent decades have given rise to large open-source datasets. MRI affords researchers the ability to look at both structure and function of the human brain. This dissertation will make use of one of these large open-source datasets, the Human Connectome Project (HCP), to study the structural and functional connectivity in the brain.
Communication processes within the human brain at different cognitive states are neither well understood nor completely characterized. We assess communication processes in the human connectome using ant colony-inspired cooperative learning algorithm, starting from a source with no a priori information about the network topology, and cooperatively searching for the target through a pheromone-inspired model. This framework relies on two parameters, namely pheromone and edge perception, to define the cognizance and subsequent behaviour of the ants on the network and the communication processes happening between source and target. Simulations with different configurations allow the identification of path-ensembles that are involved in the communication between node pairs. In order to assess the different communication regimes displayed on the simulations and their associations with functional connectivity, we introduce two network measurements, effective path-length and arrival rate. These measurements are tested as individual and combined descriptors of functional connectivity during different tasks. Finally, different communication regimes are found in different specialized functional networks. This framework may be used as a test-bed for different communication regimes on top of an underlying topology.
The assessment of brain fingerprints has emerged in the recent years as an important tool to study individual differences. Studies so far have mainly focused on connectivity fingerprints between different brain scans of the same individual. We extend the concept of brain connectivity fingerprints beyond test/retest and assess fingerprint gradients in young adults by developing an extension of the differential identifiability framework. To do so, we look at the similarity between not only the multiple scans of an individual (subject fingerprint), but also between the scans of monozygotic and dizygotic twins (twin fingerprint). We have carried out this analysis on the 8 fMRI conditions present in the Human Connectome Project -- Young Adult dataset, which we processed into functional connectomes (FCs) and time series parcellated according to the Schaefer Atlas scheme, which has multiple levels of resolution. Our differential identifiability results show that the fingerprint gradients based on genetic and environmental similarities are indeed present when comparing FCs for all parcellations and fMRI conditions. Importantly, only when assessing optimally reconstructed FCs, we fully uncover fingerprints present in higher resolution atlases. We also study the effect of scanning length on subject fingerprint of resting-state FCs to analyze the effect of scanning length and parcellation. In the pursuit of open science, we have also made available the processed and parcellated FCs and time series for all conditions for ~1200 subjects part of the HCP-YA dataset to the scientific community.
Lastly, we have estimated the effect of genetics and environment on the original and optimally reconstructed FC with an ACE model.
APA, Harvard, Vancouver, ISO, and other styles
6

(8802305), Tian Qi. "THE IMPACT OF DATA BREACH ON SUPPLIERS' PERFORMANCE: THE CASE OF TARGET." Thesis, 2020.

Find full text
Abstract:
The author investigated the conditions under which competition and contagion effects impact the suppliers of a firm that experiences a data breach. An event study was conducted to analyze the stock prices of 104 suppliers of Target after the large-scale data breach in 2013. The results showed that suppliers with high dependence on Target experienced negative abnormal returns on the day after Target’s announcement, while those with low dependence experienced positive abnormal returns. After regressing the abnormal returns on explanatory variables, the results showed that firms with better operational performance and high information technology capability were less negatively affected. This study suggests that suppliers who rely heavily on one customer company are susceptible to negative shocks from that customer because of the contagion effect. Furthermore, maintaining good performance and investing in information technology can help firms reduce losses from negative events at customer companies.
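For illustration, here is a minimal sketch of a textbook market-model event study of the kind this abstract describes, run on simulated returns. The window lengths, coefficients, and injected shock are assumptions, not the study's data.

```python
# Market-model event study sketch: estimate alpha/beta over a pre-event
# window by OLS, then compute abnormal returns around the event. All returns
# here are simulated; the study used real supplier stock data.
import numpy as np

rng = np.random.default_rng(2)
market = rng.normal(0.0005, 0.01, 280)                    # daily market returns
stock = 0.0002 + 1.2 * market + rng.normal(0, 0.008, 280) # one supplier's returns
stock[260] -= 0.03                                        # injected event-day shock

est, evt = slice(0, 250), slice(255, 266)                 # estimation / event windows
beta, alpha = np.polyfit(market[est], stock[est], 1)      # OLS market model
abnormal = stock[evt] - (alpha + beta * market[evt])      # AR_t = R_t - E[R_t]
print("CAR over event window:", abnormal.sum().round(4))
```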
APA, Harvard, Vancouver, ISO, and other styles
7

(11167785), Nicolae Christophe Iovanac. "GENERATIVE, PREDICTIVE, AND REACTIVE MODELS FOR DATA SCARCE PROBLEMS IN CHEMICAL ENGINEERING." Thesis, 2021.

Find full text
Abstract:
Data scarcity is intrinsic to many problems in chemical engineering due to physical constraints or cost. This challenge is acute in chemical and materials design applications, where a lack of data is the norm when trying to develop something new for an emerging application. Addressing novel chemical design under these scarcity constraints takes one of two routes: the traditional forward approach, where properties are predicted based on chemical structure, and the recent inverse approach, where structures are predicted based on required properties. Statistical methods such as machine learning (ML) could greatly accelerate chemical design under both frameworks; however, in contrast to the modeling of continuous data types, molecular prediction has many unique obstacles (e.g., spatial and causal relationships, featurization difficulties) that require further ML methods development. Despite these challenges, this work demonstrates how transfer learning and active learning strategies can be used to create successful chemical ML models in data scarce situations.
Transfer learning is a domain of machine learning under which information learned in solving one task is transferred to help in another, more difficult task. Consider the case of a forward design problem involving the search for a molecule with a particular property target with limited existing data, a situation not typically amenable to ML. In these situations, there are often correlated properties that are computationally accessible. As all chemical properties are fundamentally tied to the underlying chemical topology, and because related properties arise due to related moieties, the information contained in the correlated property can be leveraged during model training to help improve the prediction of the data scarce property. Transfer learning is thus a favorable strategy for facilitating high throughput characterization of low-data design spaces.
Generative chemical models invert the structure-function paradigm, and instead directly suggest new chemical structures that should display the desired application properties. This inversion process is fraught with difficulties but can be improved by training these models with strategically selected chemical information. Structural information contained within this chemical property data is thus transferred to support the generation of new, feasible compounds. Moreover, transfer learning approach helps ensure that the proposed structures exhibit the specified property targets. Recent extensions also utilize thermodynamic reaction data to help promote the synthesizability of suggested compounds. These transfer learning strategies are well-suited for explorative scenarios where the property values being sought are well outside the range of available training data.
There are situations where property data is so limited that obtaining additional training data is unavoidable. By improving both the predictive and generative qualities of chemical ML models, a fully closed-loop computational search can be conducted using active learning. New molecules in underrepresented property spaces may be iteratively generated by the network, characterized by the network, and used for retraining the network. This allows the model to gradually learn the unknown chemistries required to explore the target regions of chemical space by actively suggesting the new training data it needs. By utilizing active learning, the create-test-refine pathway can be addressed purely in silico. This approach is particularly suitable for multi-target chemical design, where the high dimensionality of the desired property targets exacerbates data scarcity concerns.
The techniques presented herein can be used to improve both predictive and generative performance of chemical ML models. Transfer learning is demonstrated as a powerful technique for improving the predictive performance of chemical models in situations where a correlated property can be leveraged alongside scarce experimental or computational properties. Inverse design may also be facilitated through the use of transfer learning, where property values can be connected with stable structural features to generate new compounds with targeted properties beyond those observed in the training data. Thus, when the necessary chemical structures are not known, generative networks can directly propose them based on function-structure relationships learned from domain data, and this domain data can even be generated and characterized by the model itself for closed-loop chemical searches in an active learning framework. With recent extensions, these models are compelling techniques for looking at chemical reactions and other data types beyond the individual molecule. Furthermore, the approaches are not limited by choice of model architecture or chemical representation and are expected to be helpful in a variety of data scarce chemical applications.
APA, Harvard, Vancouver, ISO, and other styles
8

(9868160), Wan-Eih Huang. "Image Processing, Image Analysis, and Data Science Applied to Problems in Printing and Semantic Understanding of Images Containing Fashion Items." Thesis, 2020.

Find full text
Abstract:
This thesis aims to address problems in printing and semantic understanding of images.
The first one is developing a halftoning algorithm for multilevel output with unequal resolution printing pixels. We proposed a design method and implemented several versions of halftone screens. They all show good visual results in a real, low-cost electrophotographic printer.
The second problem is related to printing quality and self-diagnosis. First, we incorporated logistic regression for classification of visible and invisible band defects in the detection pipeline. In addition, we also proposed a new cost-function-based algorithm with synthetic missing bands to estimate the repetitive interval of periodic bands for self-diagnosing the failing component. It is much more accurate than the previous method. Second, we addressed this problem with acoustic signals. Due to the scarcity of printer sounds, an acoustic signal augmentation method is needed to help a classifier perform better. The key idea is to mimic the situation that occurs when a component begins to fail.
The third problem deals with recommendation systems. We explored the similarity metrics in the loss function for a neural matrix factorization network.
The last problem is about image understanding of fashion items. We proposed a weakly supervised framework that includes mask-guided teacher network training and attention-based transfer learning to mitigate the domain gap in datasets and acquire a new dataset with rich annotations.
APA, Harvard, Vancouver, ISO, and other styles
9

(8067608), Zhi Li. "COPING WITH LIMITED DATA: MACHINE-LEARNING-BASED IMAGE UNDERSTANDING APPLICATIONS TO FASHION AND INKJET IMAGERY." Thesis, 2019.

Find full text
Abstract:
Machine learning has been revolutionizing our approach to image understanding problems. However, due to the unique nature of the problem, finding suitable data or learning from limited data properly is a constant challenge. In this work, we focus on building machine learning pipelines for fashion and inkjet image analysis with limited data.

We first look into the dire issue of missing and incorrect information on online fashion marketplaces. Unlike professional online fashion retailers, sellers on P2P marketplaces tend not to provide correct color category information, which is pivotal for fashion shopping. Therefore, to assist users in providing correct color information, we aim to build an image understanding pipeline that can extract the garment region in a fashion image and match the color of the fashion item to pre-defined color categories on the fashion marketplace. To cope with the lack of suitable data, we propose an autonomous garment color extraction system that uses both clustering and semantic segmentation algorithms to extract and identify fashion garments in the image. In addition, a psychophysical experiment is designed to collect human subjects' color naming schema, and a random forest classifier is trained to give a close prediction of the color label for the fashion item. Our system is able to perform pixel-level segmentation on fashion product portraits and parse human body parts and various fashion categories in images with human presence.

We also develop an inkjet printing analysis pipeline using a pre-trained neural network. Our pipeline is able to learn to perceive print quality, namely the high-frequency noise level of the test targets, without intensive training. Our research also suggests that, in spite of being trained on a large-scale dataset for object recognition, the features generated by neural networks react to the textural components of the image without any localized features. In addition, we expand our pipeline to printer forensics, and the pipeline is able to identify the printer model by examining the inkjet dot pattern at a microscopic level. Overall, the data-driven computer vision approach presents great value and potential to improve future inkjet imaging technology, even when the data source is limited.
APA, Harvard, Vancouver, ISO, and other styles
10

(10292846), Zhipeng Deng. "RECOGNITION OF BUILDING OCCUPANT BEHAVIORS FROM INDOOR ENVIRONMENT PARAMETERS BY DATA MINING APPROACH." Thesis, 2021.

Find full text
Abstract:
Currently, people in North America spend roughly 90% of their time indoors. Therefore, it is important to create comfortable, healthy, and productive indoor environments for the occupants. Unfortunately, our resulting indoor environments are still very poor, especially in multi-occupant rooms. In addition, energy consumption in residential and commercial buildings by HVAC systems and lighting accounts for about 41% of primary energy use in the US. However, the current methods for simulating building energy consumption are often not accurate, and various types of occupant behavior may explain this inaccuracy.
This study first developed artificial neural network models for predicting thermal comfort and occupant behavior in indoor environments. The models were trained by data on indoor environmental parameters, thermal sensations, and occupant behavior collected in ten offices and ten houses/apartments. The models were able to predict similar acceptable air temperature ranges in offices, from 20.6 °C to 25 °C in winter and from 20.6 °C to 25.6 °C in summer. We also found that the comfortable air temperature in the residences was 1.7 °C lower than that in the offices in winter, and 1.7 °C higher in summer. The reason for this difference may be that the occupants of the houses/apartments were responsible for paying their energy bills. The comfort zone obtained by the ANN model using thermal sensations in the ten offices was narrower than the comfort zone in ASHRAE Standard 55, but that using behaviors was wider.
Then this study used the EnergyPlus program to simulate the energy consumption of HVAC systems in office buildings. Measured energy data were used to validate the simulated results. When using the collected behavior from the offices, the difference between the simulated results and the measured data was less than 13%. When a behavioral ANN model was implemented in the energy simulation, the simulation performed similarly. However, energy simulation using constant thermostat set point without considering occupant behavior was not accurate. Further simulations demonstrated that adjusting the thermostat set point and the clothing could lead to a 25% variation in energy use in interior offices and 15% in exterior offices. Finally, energy consumption could be reduced by 30% with thermostat setback control and 70% with occupancy control.
Because of many contextual factors, most previous studies have built data-driven behavior models with limited scalability and generalization capability. This investigation built a policy-based reinforcement learning (RL) model for the behavior of adjusting the thermostat and clothing level. We used Q-learning to train the model and validated with collected data. After training, the model predicted the behavior with R2 from 0.75 to 0.80 in an office building. This study also transferred the behavior knowledge of the RL model to other office buildings with different HVAC control systems. The transfer learning model predicted with R2 from 0.73 to 0.80. Going from office buildings to residential buildings, the transfer learning model also had an R2 over 0.60. Therefore, the RL model combined with transfer learning was able to predict the building occupant behavior accurately with good scalability, and without the need for data collection.
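As a toy illustration of the policy-learning step described above, the sketch below runs tabular Q-learning over discretized temperature states. The state space, reward shape, and dynamics are invented and far simpler than the dissertation's model.

```python
# Toy tabular Q-learning sketch in the spirit of the RL behavior model the
# abstract describes: states are discretized room-temperature bins, actions
# adjust the set point. Reward and dynamics are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_states, actions = 10, np.array([-1, 0, 1])   # temp bins; lower/keep/raise
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps, comfort = 0.1, 0.9, 0.1, 5  # comfort = preferred temp bin

state = int(rng.integers(n_states))
for _ in range(20_000):
    a = int(rng.integers(len(actions))) if rng.random() < eps else int(Q[state].argmax())
    nxt = int(np.clip(state + actions[a], 0, n_states - 1))
    reward = -abs(nxt - comfort)               # closer to comfort = better
    Q[state, a] += alpha * (reward + gamma * Q[nxt].max() - Q[state, a])
    state = nxt
print("greedy action per temperature bin:", actions[Q.argmax(axis=1)])
```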
Unsuitable thermostat settings lead to energy waste and an undesirable indoor environment, especially in multi-occupant rooms. This study aimed to develop an HVAC control strategy in multi-occupant offices using physiological parameters measured by wristbands. We used an ANN model to predict thermal sensation from air temperature, relative humidity, clothing level, wrist skin temperature, skin relative humidity and heart rate. Next, we developed a control strategy to improve the thermal comfort of all the occupants in the room. The control system was smart and could adjust the thermostat set point automatically in real time. We improved the occupants' thermal comfort to the point that over half of the occupants reported feeling neutral, and fewer than 5% still felt uncomfortable. After coupling with occupancy-based control by means of lighting sensors or wristband Bluetooth, the heating and cooling loads were reduced by 90% and 30%, respectively. Therefore, the smart HVAC control system can effectively control the indoor environment for thermal comfort and energy saving.
In future work, we will use more advanced sensors to collect more kinds of occupant-behavior data and expand the research to behaviors related to indoor air quality, noise, and illuminance level. These data will let us recognize behavior directly instead of relying on questionnaire surveys. We will also develop a personalized zonal control system for the multi-occupant office, using inverse design to determine the number and location of inlet diffusers.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Data management and data science not elsewhere classified"

1

Office, General Accounting. Information security: Safeguarding of data in excessed Department of Energy computers : report to the Chairman, Committee on Science, House of Representatives. Washington, D.C. (P.O. Box 37050, Washington 20013): U.S. General Accounting Office, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Office, General Accounting. Information security: Vulnerabilities in DOE's systems for unclassified civilian research : report to the Committee on Science, House of Representatives. Washington, D.C. (P.O. Box 37050, Washington, D.C. 20013): U.S. General Accounting Office, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ausink, John A. An Optimization Approach to Workforce Planning for the Information Technology Field. RAND Corporation, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Data management and data science not elsewhere classified"

1

Sparrow, Elena B., and Janice C. Dawe. "Communication of Alaskan Boreal Science with Broader Communities." In Alaska's Changing Boreal Forest. Oxford University Press, 2006. http://dx.doi.org/10.1093/oso/9780195154313.003.0027.

Full text
Abstract:
An important responsibility of all researchers is to communicate effectively with the rest of the scientific community, students, and the general public. Communication is “a process by which information is exchanged between individuals through a common system of symbols, signs or behavior” (Merriam-Webster 1988). It is a two-way process that requires collaborations, best-information exchange practices, and effective formal and informal education. Communication of this knowledge and understanding about the boreal forest is important because it benefits scientists, policymakers, program managers, teachers, students, and other community members. Good data and a firm knowledge base are needed for improving understanding of the functioning of the boreal forest, implementing best-management practices regarding forests and other resources, making personal and communal decisions regarding livelihoods and quality of life, coping with changes in the environment, and preparing future cadres of science-informed decision makers. Communication among scientists is an essential step in the research process because it informs researchers about important ideas and observations elsewhere in the world and allows boreal researchers to contribute to general scientific understanding. For example, the Bonanza Creek LTER has developed its research program by incorporating many important concepts developed elsewhere, including ecosystem dynamics (Tansley 1935), succession (Clements 1916), state factors (Jenny 1941), predator interactions (Elton 1958), and landscape dynamics (Turner et al. 2001). Through active research and regular communication and collaboration with the international scientific community, these “imported” ideas have been adapted to the boreal forest and new ideas and insights have been developed or communicated to the scientific community, as described in detail throughout this book. New ideas have originated among boreal researchers, and their “export” has sparked research elsewhere in the world (Chapter 21). The pathways of communication are changing. Alaskan boreal researchers have participated actively in traditional modes of communication, including hundreds of peer-reviewed publications, several books, reports intended for managers, and participation in meetings and workshops. However, some of the greatest benefits of longterm research reside in the records of changes that occur. These long-term data are now available to the rest of the world through internet Web sites that house databases, publications, photographs, and other information (http://www.lter.uaf.edu).
APA, Harvard, Vancouver, ISO, and other styles
2

Premananthan, Premisha, Banujan Kuhaneswaran, Banage T. G. S. Kumara, and Enoka P. Kudavidanage. "Predicting the Future Research Gaps Using Hybrid Approach." In Advances in Library and Information Science, 157–75. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-7258-0.ch009.

Full text
Abstract:
Sri Lanka is one of the global biodiversity hotspots and contains a large variety of fauna and flora. Nowadays, however, Sri Lankan wildlife faces many issues because of poor management and weak protection policies. The lack of technical and research support discourages many researchers from selecting wildlife as their domain of study. This study demonstrates a novel data mining approach to finding hidden keywords and automatically labeling past research work in this area, and then uses those results to predict trending research topics in the field of biodiversity. To model topics and extract the main keywords, the authors used the latent Dirichlet allocation (LDA) algorithm. Based on the topic modeling results, an ontology model was also developed to describe the relationships between the keywords. The research papers were then classified with an artificial neural network (ANN), using ontology instances, to predict future gaps in wildlife research. The automatic classification and labeling will help many researchers find the research papers they need accurately and quickly.
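To make the LDA step concrete, here is a minimal sketch of topic modeling with scikit-learn on a placeholder corpus. The documents, topic count, and vocabulary are illustrative assumptions, not the chapter's dataset.

```python
# Minimal LDA topic-modeling sketch of the kind the chapter describes. The
# four toy documents stand in for wildlife research abstracts.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "elephant habitat loss and conservation policy",
    "leopard population survey in protected forest",
    "forest cover change detection with satellite data",
    "community attitudes toward elephant conservation",
]
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

vocab = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [vocab[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")  # the 'hidden keywords' per topic
```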
APA, Harvard, Vancouver, ISO, and other styles
3

Diogo, Julien, and Pedro Mota Veiga. "Metaverse Applications in Business." In Advances in Marketing, Customer Relationship Management, and E-Services, 110–36. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-5538-8.ch006.

Full text
Abstract:
This study aims to identify the main applications of the metaverse in business based on scientific publications. More specifically, the chapter aims to identify conceptual frameworks used in the literature. To achieve these objectives, the authors conducted a systematic review based on the Web of Science and Scopus databases. The study followed a systematic literature review methodology consisting of protocol development; identification of inclusion and exclusion criteria for relevant publications; data extraction; and synthesis. The research resulted in 26 articles, which were identified and classified according to their applications to the business environment, concluding that the main implications and applications of the metaverse in business focus on an integrative framework that includes ethical issues, innovation management, marketing management, and knowledge management. This research has laid the foundation for establishing the metaverse as a topic in business research. The contribution of this study lies at both the theoretical and practical levels.
APA, Harvard, Vancouver, ISO, and other styles
4

Silbergeld, Ellen K. "Risk Assessment and Risk Management: An Uneasy Divorce." In Acceptable Evidence. Oxford University Press, 1994. http://dx.doi.org/10.1093/oso/9780195089295.003.0011.

Full text
Abstract:
Over the past decade, the concept of risk has become central to environmental policy. Environmental decision making has been recast as reducing risk by assessing and managing it. Risk assessment is increasingly employed in environmental policymaking to set standards and initiate regulatory consideration and, even in epidemiology, to predict the health effects of environmental exposures. As such, it standardizes the methods of evaluation used in dealing with environmental hazards. Nonetheless, risk assessment remains controversial among scientists, and the policy results of risk assessment are generally not accepted by the public. It is not my purpose to examine the origin of these controversies, which I and others have considered elsewhere (see, e.g., EPA 1987), but rather to consider some of the consequences of the recent formulation of risk assessment as specific decisions and authorities distinguishable from other parts of environmental decision making. The focus of this chapter is the relatively new policy of separating certain aspects of risk assessment from risk management, a category that includes most decision-making actions. Proponents of this structural divorce contend that risk assessment is value neutral, a field of objective scientific analysis, while risk management is the arena where these objective data are processed into appropriate social policy. This raises relatively new problems to complicate the already contentious arena of environmental policy. This separation has created problems that interfere with the recognition and resolution of both scientific and transscientific issues in environmental policymaking. Indeed, both science and policy could be better served by recognizing the scientific limits of risk-assessment methods and allowing scientific and policy judgment to interact to resolve unavoidable uncertainties in the decisionmaking process. This chapter will discuss the forces that encouraged separating the performance of assessment and management at the EPA in the 1980s, which I characterize as an uneasy divorce. I shall examine some scientific and policy issues, especially regarding uncertainty, that have been aggravated by this policy of deliberate separation. Various interpretations of uncertainty have become central, and value-laden issues in decision making and appeals to uncertainty have often been an excuse for inaction.
APA, Harvard, Vancouver, ISO, and other styles
5

Crouch, Dora P. "Water System Evidence of Greek Civilization." In Water Management in Ancient Greek Cities. Oxford University Press, 1993. http://dx.doi.org/10.1093/oso/9780195072808.003.0010.

Full text
Abstract:
Attention to water supply and drainage is the sine qua non for urbanization, and hence for that human condition we call civilization. In fact, development of water supply, waste removal, and drainage made dense settlement possible. (In this book, drainage is used to mean the leading away from a site of all sorts of water, whether clean or dirty.) In spite of the importance of this factor for human history, relatively little attention has been paid to the history of water management, more to the histories of food supply and of commerce as determiners of urbanization. To compensate for that deficit, this is a study of the relationship between water management and urbanization. Other factors contributing to urbanization are discussed briefly in Chapter 6. Many of the “working conclusions” in this chapter and elsewhere are my inferences from the physical data discovered by archaeologists. Very little written evidence has come down to us from the Greek period. We are in the position of reasoning backward from the answers to the questions—always a risky business (Peirce, 1965, 5.590). This is not an uncommon problem in Greek history. Mortimer Chambers has pointed out in a talk on travelers to ancient Greece, given at the Archaeological Institute of America meeting, San Francisco, 1990, that if we had to rely on Greek literature for evidence, we would never know that they had ever painted any vases! Yet no one is suggesting that we desist from the study of vases because the surviving ancient Greek writings do not discuss them. No—we go to the vases themselves for the strongest evidence. In this chapter the emphasis will be on what had to be discovered and organized so that there could be a complete system of water management for an ancient Greek city. If we try to put ourselves back into pre-Hellenic centuries when the world seemed “new,” and look about with curious eyes and that great tool the inquiring mind, what will we see? What did “the water problem” consist of? Millennia of observation had enabled the ancient peoples to become expert about many aspects of their environment such as the stars, so that by about 5000 years ago the constellations of the zodiac were recognized and named, and the science of astronomy well begun.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Data management and data science not elsewhere classified"

1

Li, Yunjia, Zhiyong Zhang, Zhixin Chen, and Xinxiang Xiao. "Study and Analysis of Collaborative Management System of Network Security in Universities (CMSNSU) Under the Background of 2.0 Criteria of Classified Protection of Network Security." In 2021 2nd International Conference on Computing and Data Science (CDS). IEEE, 2021. http://dx.doi.org/10.1109/cds52072.2021.00074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Parameswaran, A., and K. A. T. O. Ranadewa. "Construction industry on the brink: The COVID-19 impact." In 10th World Construction Symposium. Building Economics and Management Research Unit (BEMRU), University of Moratuwa, 2022. http://dx.doi.org/10.31705/wcs.2022.19.

Full text
Abstract:
The COVID-19 pandemic has affected all industries globally, including the construction industry. As a result, the construction industry is experiencing several challenges in delivering projects on time and on budget. However, a few studies have shown that the COVID-19 pandemic has also had positive impacts on the construction industry. Hence, analysing the issues caused by COVID-19 is vital to lessening the effects of the pandemic. Therefore, this study aims to investigate the impact of COVID-19 on the construction industry. Accordingly, a detailed literature review was carried out to gain a theoretical understanding of the topic. A quantitative research approach was used to collect data. The questionnaire survey was conducted using snowball sampling with a total of 108 respondents. The Statistical Package for the Social Sciences (SPSS) was used to analyse the collected data. The findings revealed 86 negative impacts of the pandemic on the construction industry, which were classified as resources-related issues, project management issues, quality issues, financial issues, contractual issues, safety issues, technology-related issues, and other issues. Increases in the price of materials and equipment, project cost, exchange rate, and inflation rate were noted as significant negative impacts on the construction industry. The research further identified twelve (12) favourable impacts of the pandemic on the construction industry. Encouraging risk assessment and collaboration and promoting the use of Personal Protective Equipment (PPE) were highlighted as the significant positive impacts. Therefore, strategies need to be identified to neutralise the negative impacts using the positive impacts caused by the pandemic. This study contributes to the body of knowledge to advance the construction industry towards the next level during the post-COVID-19 scenario, which will be the focus of the next phase of this research.
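As a rough illustration of how questionnaire findings of this kind can be tallied, the sketch below groups invented survey responses into some of the impact categories named in the abstract. The mapping and response data are assumptions made up for this example, not the authors' dataset.

```python
# Hypothetical tally of reported pandemic impacts by category, mirroring the
# classification scheme described in the abstract (resources, project
# management, quality, financial, contractual, safety, technology, other).

from collections import Counter

impact_category = {
    "increase in material prices": "financial issues",
    "increase in project cost": "financial issues",
    "exchange rate fluctuation": "financial issues",
    "labour shortages": "resources-related issues",
    "schedule delays": "project management issues",
    "site access restrictions": "safety issues",
}

reported_impacts = [
    "increase in material prices",
    "labour shortages",
    "increase in project cost",
    "schedule delays",
    "increase in material prices",
]

counts = Counter(impact_category[impact] for impact in reported_impacts)
for category, n in counts.most_common():
    print(f"{category}: {n} mentions")
```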
APA, Harvard, Vancouver, ISO, and other styles
3

Barreto Fernandes, Francisco António, and Bernabé Hernandis Ortuño. "Usability and User-Centered Design - User Evaluation Experience in Self-Checkout Technologies." In Systems & Design 2017. Valencia: Universitat Politècnica València, 2017. http://dx.doi.org/10.4995/sd2017.2017.6634.

Full text
Abstract:
The increasing advance of new technologies in the retail market has made it common to sell products without personal contact between seller and buyer, with the registration and payment of products carried out at electronic self-checkout equipment. The large-scale use of these devices forces consumers to participate in the service process, which was previously handled through interaction with the company's employees. The user of the self-checkout system thus performs all the steps of the purchase, from weighing the products to registering them and making the payment. The user acts as a "partial employee", whose participation or performance in providing services can be used by the company to improve the quality of its operations (Kelley et al., 1993). However, this participation does not always satisfy the user and may cause negative experiences related to usability failures. This article presents the results of a user evaluation of the self-checkout system. The data were collected in Portugal through a questionnaire administered to 400 users. The study analyses the degree of satisfaction with the quality and usability of the system, the degree of motivation for its adoption, and the profile of its users. Analysis of the sample data reveals that users have basic or higher education and use new technologies very often. They also show a high degree of mastery of the system and learn to use it easily. The preference for self-checkout over the traditional checkout is mainly attributed to "queues at the operator checkout" and "a small volume of products". In general, the sample reveals a high degree of satisfaction with the service and its quality; in comparative terms, however, self-checkout is not considered better than the operator checkout. The interaction with the self-checkout was evaluated according to twenty-six attributes of the system. The analysis identifies five groups of attributes with similar characteristics, of which two have low scores (one possible grouping approach is sketched after the reference list below). "Cancellation of registered articles", "search for articles without a barcode", "manual registration", "bagging area", "error messages", "weight sensor", and "invoice request" are seven critical attributes of the system. The results indicate that usability analysis oriented to the self-checkout service can be decisive for user-system interaction. The implications of the empirical findings are discussed together with guidelines for future research.
Keywords: Interaction Design, Self-service, Self-checkout, User evaluation, Usability
References:
Abrahão, J., et al. (2013). Ergonomia e Usabilidade (1st ed.). São Paulo: Blucher.
Alexandre, J. W. C., et al. (2013). Análise do número de categorias da escala de Likert aplicada à gestão pela qualidade total através da teoria da resposta ao item. In XXIII Encontro Nacional de Engenharia de Produção, Ouro Preto.
Booth, P. (2014). An Introduction to Human-Computer Interaction (Psychology Revivals). London: Taylor and Francis.
Castro, D., Atkinson, R., & Ezell, J. (2010). Embracing the Self-Service Economy. Information Technology and Innovation Foundation. Available at SSRN: http://dx.doi.org/10.2139/ssrn.1590982
Chang, L. A. (1994). A psychometric evaluation of 4-point and 6-point Likert-type scales in relation to reliability and validity. Applied Psychological Measurement, 18(2), 5-15.
Dabholkar, P. A. (1996). Consumer Evaluations of New Technology-based Self-service Options: An Investigation of Alternative Models of Service Quality. International Journal of Research in Marketing, 13, 29-51.
Dabholkar, P. A., & Bagozzi, R. P. (2002). An Attitudinal Model of Technology-based Self-service: Moderating Effects of Consumer Traits and Situational Factors. Journal of the Academy of Marketing Science, 30(3), 184-201.
Dabholkar, P. A., Bobbitt, L. M., & Lee, E. (2003). Understanding Consumer Motivation and Behavior related to Self-scanning in Retailing. International Journal of Service Industry Management, 14(1), 59-95.
Dix, A., et al. (2004). Human-Computer Interaction (3rd ed.). New York: Pearson/Prentice-Hall.
Fernandes, F., et al. (2015). Do Ensaio à Investigação – Textos Breves Sobre a Investigação. B. Hernandis, C. Lloret, & F. Sanmartín (Eds.). Oficina de Acción Internacional, Universidade Politécnica de Valência / Edições ESAD.cr/IPL, Leiria.
Helander, M., Landauer, T., & Prabhu, P. (1997). Handbook of Human-Computer Interaction. North-Holland: Elsevier.
Kallweit, K., Spreer, P., & Toporowski, W. (2014). Why do Customers use Self-service Information Technologies in Retail? The Mediating Effect of Perceived Service Quality. Journal of Retailing and Consumer Services, 21, 268-276.
Kelley, S. W., Hoffman, K. D., & Davis, M. A. (1993). A typology of retail failures and recoveries. Journal of Retailing, 69(4), 429-452.
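The abstract reports that the twenty-six attributes form five groups with similar characteristics but does not state how the grouping was obtained. The sketch below shows one plausible way to recover such groups, k-means clustering of attribute rating profiles; the synthetic Likert data and the choice of method are assumptions for illustration, not the authors' analysis.

```python
# Hypothetical grouping of 26 self-checkout attributes into 5 clusters based
# on how 400 users rated them. The ratings are random stand-ins for the
# study's Likert-scale questionnaire data.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ratings = rng.integers(1, 7, size=(400, 26))  # 400 users x 26 attributes, 1-6 scale

# Cluster attributes (columns) by their rating profiles across users.
X = ratings.T  # shape (26, 400): one row per attribute
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

for group in range(5):
    members = np.where(kmeans.labels_ == group)[0]
    print(f"group {group}: attributes {members.tolist()}")
```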
APA, Harvard, Vancouver, ISO, and other styles