Academic literature on the topic 'Non-structured data'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Non-structured data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Non-structured data":

1. Paradis, Rosemary D., Daniel Davenport, David Menaker, and Sarah M. Taylor. "Detection of Groups in Non-Structured Data." Procedia Computer Science 12 (2012): 412–17. http://dx.doi.org/10.1016/j.procs.2012.09.095.
2. Genzel, Martin, and Peter Jung. "Recovering Structured Data From Superimposed Non-Linear Measurements." IEEE Transactions on Information Theory 66, no. 1 (January 2020): 453–77. http://dx.doi.org/10.1109/tit.2019.2932426.
3. Cai, Ting, and Xuemei Yang. "Non-structured Data Integration Access Policy Using Hadoop." Wireless Personal Communications 102, no. 2 (December 13, 2017): 895–908. http://dx.doi.org/10.1007/s11277-017-5112-4.
4. Luo, Wen Hua. "The Processing and Analyzing of Non-Structured Data in Digital Investigation." Advanced Materials Research 774-776 (September 2013): 1807–11. http://dx.doi.org/10.4028/www.scientific.net/amr.774-776.1807.
Abstract:
Non-structured data accounts for a far larger share of total data than structured data, yet research on methods for processing and analyzing non-structured data lags behind that on structured data. This paper illustrates the importance of research on non-structured data and, from the perspective of digital investigation, presents the key techniques for processing and analyzing it. Drawing on the authors' self-developed Intelligent Analyzing System of Mass Case Information and the background of handling online ball gambling in mainland China, it describes in detail the specific application of non-structured data processing and analysis in digital investigation.
5. Deng, Song. "Dynamic Non-Cooperative Structured Deep Web Selection." Applied Mechanics and Materials 644-650 (September 2014): 2911–14. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.2911.
Abstract:
Most structured deep web data sources are non-cooperative, so building an accurate content summary of a data source by sampling is the core technique of data source selection. The content of a deep web data source is updated over time, but existing efficient methods for selecting non-cooperative structured data sources do not consider the summary-update problem. A stale summary cannot accurately characterize the content of its data source, which in turn degrades data source selection. Based on this, we propose a dynamic data source selection method for the non-cooperative structured deep web that combines subject-headings sampling with subject-headings extension. Experimental results show that our dynamic structured data source selection method achieves good recall and precision while remaining efficient.
6. Silva, Carlos Anderson Oliveira, Rafael Gonzalez-Otero, Michel Bessani, Liliana Otero Mendoza, and Cristiano L. de Castro. "Interpretable risk models for Sleep Apnea and Coronary diseases from structured and non-structured data." Expert Systems with Applications 200 (August 2022): 116955. http://dx.doi.org/10.1016/j.eswa.2022.116955.
7. Hu, Changjun, Chunping Ouyang, Jinbin Wu, Xiaoming Zhang, and Chongchong Zhao. "Non-Structured Materials Science Data Sharing Based on Semantic Annotation." Data Science Journal 8 (2009): 52–61. http://dx.doi.org/10.2481/dsj.007-042.
8. Gibiino, Fabio, Vincenzo Positano, Florian Wiesinger, Giulio Giovannetti, Luigi Landini, and Maria Filomena Santarelli. "Structured errors in reconstruction methods for Non-Cartesian MR data." Computers in Biology and Medicine 43, no. 12 (December 2013): 2256–62. http://dx.doi.org/10.1016/j.compbiomed.2013.10.013.
9. Xin, Rui, Tinghua Ai, Ruoxin Zhu, Bo Ai, Min Yang, and Liqiu Meng. "A Multi-Scale Virtual Terrain for Hierarchically Structured Non-Location Data." ISPRS International Journal of Geo-Information 10, no. 6 (June 3, 2021): 379. http://dx.doi.org/10.3390/ijgi10060379.
Abstract:
Metaphors are commonly used rhetorical devices in linguistics. Among the various types, spatial metaphors are relatively common because of their intuitive and sensible nature. Many studies also use spatial metaphors to express non-location data in the field of visualization; for instance, virtual terrains can be built based on computer technologies and visualization methods. In virtual terrains, originally abstract data acquire specific positions, shapes, colors, etc., so that people's visual and image-based thinking can play a role. In addition, theories and methods from the spatial field can be applied to help people observe and analyze abstract data. However, current research makes limited use of these spatial theories and methods; for instance, many existing map theories and methods are not well combined. It is also difficult to fully display data in virtual terrains, such as showing the structure and relationships at the same time. To address these problems, this study takes hierarchical data as its research object and expresses both the data structure and the data relationships from a spatial perspective. First, high-dimensional non-location data are converted into two-dimensional discrete points by a dimensionality-reduction algorithm to reflect the data relationships. Based on this, kernel density estimation interpolation and fractal noise algorithms are used to construct terrain features in the virtual terrains. Under the control of the kernel-density search radius and noise proportion, a multi-scale terrain model is built with the help of level-of-detail (LOD) technology to express the hierarchical structure and support multi-scale analysis of the data. Finally, experiments with actual data verify the proposed method.
10. Fan, Jianqing, and Donggyu Kim. "Structured volatility matrix estimation for non-synchronized high-frequency financial data." Journal of Econometrics 209, no. 1 (March 2019): 61–78. http://dx.doi.org/10.1016/j.jeconom.2018.12.019.

Dissertations / Theses on the topic "Non-structured data":

1. Blampied, Paul Alexander. "Structured recursion for non-uniform data-types." Thesis, University of Nottingham, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342028.
2. Ni, Weizeng. "Ontology-based Feature Construction on Non-structured Data." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439309340.
3. Wang, Chao. "Exploiting non-redundant local patterns and probabilistic models for analyzing structured and semi-structured data." Columbus, Ohio: Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1199284713.
4. Cho, Myung. "Convex and non-convex optimizations for recovering structured data: algorithms and analysis." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5922.
Abstract:
Optimization theories and algorithms are used to efficiently find optimal solutions under constraints. In the era of “Big Data”, the amount of data is skyrocketing, and this overwhelms conventional techniques used to solve large-scale and distributed optimization problems. By taking advantage of structural information in data representations, this thesis offers convex and non-convex optimization solutions to various large-scale optimization problems such as super-resolution, sparse signal processing, hypothesis testing, machine learning, and treatment planning for brachytherapy.
Super-resolution: Super-resolution aims to recover a signal expressed as a sum of a few Dirac delta functions in the time domain from measurements in the frequency domain. The challenge is that the possible locations of the delta functions lie in the continuous domain [0,1). To enhance recovery performance, we considered deterministic and probabilistic prior information for the locations of the delta functions and provided novel semidefinite programming formulations under this information. We also proposed block iterative reweighted methods to improve recovery performance without prior information. We further considered phaseless measurements, motivated by applications in optical microscopy and x-ray crystallography; by using the lifting method and introducing squared atomic norm minimization, we can achieve super-resolution using only low-frequency magnitude information. Finally, we proposed non-convex algorithms using structured matrix completion.
Sparse signal processing: L1 minimization is well known for promoting sparse structures in recovered signals. The Null Space Condition (NSC) for L1 minimization is a necessary and sufficient condition on sensing matrices such that a sparse signal can be uniquely recovered via L1 minimization. However, verifying NSC is a non-convex problem and known to be NP-hard. We proposed enumeration-based polynomial-time algorithms to provide performance bounds on NSC, and efficient algorithms to verify NSC precisely using the branch-and-bound method.
Hypothesis testing: Recovering the statistical structure of random variables is important in applications such as cognitive radio. Our goal is to distinguish two different types of random variables among n >> 1 random variables. Testing each random variable one by one takes a great deal of time and effort, so we proposed hypothesis testing using mixed measurements to reduce sample complexity, and designed efficient algorithms to solve large-scale problems.
Machine learning: When feature data are stored in a tree-structured network with communication delays, quickly finding an optimal solution to regularized loss minimization is challenging. In this scenario, we studied a communication-efficient stochastic dual coordinate ascent method and its convergence analysis.
Treatment planning: In Rotating-Shield Brachytherapy (RSBT) for cancer treatment, there is a compelling need to obtain optimal treatment plans quickly to enable clinical use. However, because of the degrees of freedom in RSBT, finding an optimal treatment plan is difficult. We therefore designed a first-order dose optimization method based on the alternating direction method of multipliers and reduced the execution time by around 18 times compared to previous research.
5. Zhu, Chuan. "Exploring structured predictions from sensorimotor data during non-prehensile manipulation using both simulations and robots." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/45552.
Abstract:
Robots are equipped with an increasingly wide array of sensors in order to enable advanced sensorimotor capabilities, but the efficient exploitation of the resulting data streams remains an open problem. We present a framework for learning when and where to attend in a sensorimotor stream in order to estimate specific task properties, such as the mass of an object, and we identify the qualitative similarity of this ability between simulation and a robotic system. The framework is evaluated on a non-prehensile “topple-and-slide” task, where data from a set of sensorimotor streams are used to predict task properties such as the object mass, friction coefficient, and compliance of the block being manipulated. Given collected data streams for situations where the block properties are known, the method combines variance-based feature selection and partial least-squares estimation to build a robust predictive model of the block properties; this model can then be used to make accurate predictions during a new manipulation. We demonstrate results for both simulation and a robotic system using up to 110 sensorimotor data streams, including joint torques, wrist forces/torques, and tactile information. The results show that task properties such as object mass, friction coefficient, and compliance can be estimated with good accuracy from the sensorimotor streams observed during a manipulation.
6. Saes, Keylla Ramos. "Abordagem para integração automática de dados estruturados e não estruturados em um contexto Big Data." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-16012019-212403/.
Abstract:
The increasing amount of data available for use has sparked interest in generating knowledge by integrating such data. However, the integration task requires knowledge of both the data and the data models used to represent them; that is, it requires the participation of computing experts, which limits the scalability of this type of task. In a Big Data context, this limitation is reinforced by the presence of a wide variety of heterogeneous sources and data-representation models, such as relational models with structured data and non-relational models with unstructured data, and this variety of representations adds complexity to the data integration process. Handling this scenario requires integration tools that reduce or even eliminate the need for human intervention. As a contribution, this work enables the integration of diverse data-representation models and heterogeneous data sources through an approach that supports varied techniques, such as structural-similarity comparison algorithms and artificial-intelligence algorithms, which, by generating integration metadata, make the integration of heterogeneous data possible. This flexibility to deal with the growing variety of data is provided by the modularization of the proposed architecture, which enables automatic data integration in a Big Data context without the need for human intervention.
7. Elleuch, Marwa. "Business process discovery from emails, a first step towards business process management in less structured information systems." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAS014.
Abstract:
Process discovery aims at analysing the execution logs of information systems (IS) used to perform business activities in order to discover business process (BP) knowledge. Significant research has been conducted in this area, but it generally assumes that execution logs have a high level of structure: (i) they are composed of structured records, each capturing evidence of one activity execution, and (ii) some event attributes (e.g., activity name, timestamp) are explicitly included in these records, which facilitates their inference. Nevertheless, BPs can be entirely or partially performed through less structured IS that generate execution logs with a low level of structure. In particular, emailing systems are widely used to collaboratively perform BP tasks, yet traditional BP discovery techniques cannot be applied, or at least not directly, because of the unstructured nature of email log data. Recent initiatives that extend the scope of BP discovery to email logs (i) mostly require human intervention and (ii) are limited to the behavioral perspective. In this thesis, we propose to discover BP fragments from email logs with respect to their functional, data, organizational, and behavioral perspectives. We first formalize these perspectives, taking the specificities of emailing systems into account. We introduce the notion of actors' contributions towards performing activities to enrich the organizational and behavioral perspectives, and we additionally consider the informational entities manipulated by BP activities to describe the data perspective. To automate their discovery, we introduce a completely unsupervised approach that transforms the unstructured email log into a structured event log before mining it to discover BPs from multiple perspectives. In this context, we introduce several algorithmic solutions for: (i) unsupervised learning of activities based on discovering frequent word patterns in emails, (ii) discovering activity occurrences in emails to capture event attributes, (iii) discovering the speech acts of activity occurrences to recognize the senders' purposes in mentioning activities, (iv) overlapping clustering of activities to discover the artifacts (i.e., informational entities) they manipulate, and (v) mining sequencing constraints between event types to discover the BP behavioral perspective. We validated our approach using emails from the public Enron dataset, and we make our results publicly available to ensure reproducibility in the studied area. We finally show the usefulness of our results for improving BP management through two potential applications: (i) a BP discovery and recommendation tool to be integrated into emailing systems, and (ii) CRM data analysis for mining the reasons for users' satisfaction or dissatisfaction.
8. Da Silva De Aguiar, Raquel Stella. "Optimization-based design of structured LTI controllers for uncertain and infinite-dimensional systems." Thesis, Toulouse, ISAE, 2018. http://www.theses.fr/2018ESAE0020/document.
Abstract:
Non-smooth optimization techniques make it possible to solve difficult control-engineering problems that were out of reach for classical techniques, in particular control or filtering problems involving multiple models or structural constraints imposed to reduce cost and complexity. As a result, these techniques are better suited to providing realistic solutions to difficult practical problems. European aeronautics and space manufacturers have recently shown particular interest in these new techniques: they are sometimes part of the industrial process (THALES, AIRBUS DS Satellite, DASSAULT A) or used in design offices (SAGEM, AIRBUS Transport), and studies are under way on topics such as atmospheric flight control of future launchers like Ariane VI. The objective of this thesis is to explore, specialize, and develop non-smooth optimization techniques and tools for engineering problems that are not yet satisfactorily solved: uncertainties of various kinds, optimization of observability and controllability, and simultaneous plant and controller design. It also assesses the potential of these techniques against existing approaches, with aeronautics, space, and large-scale power systems as particularly demanding application domains.
9. Sans, Virginie. "Maintenance de vues XML matérialisées à partir de sources web non coopérantes." Cergy-Pontoise, 2008. http://biblioweb.u-cergy.fr/theses/08CERG0383.pdf.
Abstract:
Providing services by integrating information available in heterogeneous data sources is one of the goals of a mediation architecture. In the Web context, sources may be non-cooperative and sometimes unavailable, so views are materialized to guarantee data access. When an update occurs on the underlying sources, the view must be kept consistent. We propose an approach for maintaining XML views in this context. The first step detects and identifies updates made on the sources; the second is the maintenance process itself. Our work is based on an extension of the XAlgebra that annotates data with identifiers, and on a process of partial reconstruction of the underlying sources.
10. Du, Lan. "Non-parametric bayesian methods for structured topic models." PhD thesis, 2011. http://hdl.handle.net/1885/149800.
Abstract:
The proliferation of large electronic document archives requires new techniques for automatically analysing large collections, which has posed several new and interesting research challenges. Topic modelling, as a promising statistical technique, has gained significant momentum in recent years in information retrieval, sentiment analysis, image processing, etc. Beyond existing topic models, the field still needs to be explored with more powerful tools. One potentially useful direction is to directly consider document structure, ranging from semantically high-level segments (e.g., chapters, sections, or paragraphs) to low-level segments (e.g., sentences or words), in topic modelling. This thesis introduces a family of structured topic models for statistically modelling text documents together with their intrinsic document structures. These models take advantage of non-parametric Bayesian techniques (e.g., the two-parameter Poisson-Dirichlet process (PDP)) and Markov chain Monte Carlo methods. Two preliminary contributions of this thesis are: (1) the Compound Poisson-Dirichlet process (CPDP), an extension of the PDP that can be applied to multiple input distributions; and (2) two Gibbs sampling algorithms for the PDP in a finite state space, both based on the Chinese restaurant process, which provides an elegant analogy of incremental sampling for the PDP. The first, a two-stage Gibbs sampler, arises from a table multiplicity representation for the PDP; the second is built on top of a table indicator representation. In a simply controlled environment of multinomial sampling, the two new samplers show fast convergence. These support the major contribution of this thesis, a set of structured topic models: the Segmented Topic Model (STM), which models a simple document structure with a four-level hierarchy by mapping the document layout to a hierarchical subject structure, and performs significantly better than the latent Dirichlet allocation model and other segmented models at predicting unseen words; Sequential Latent Dirichlet Allocation (SeqLDA), motivated by topical correlations among adjacent segments (i.e., the sequential document structure), which uses the PDP and a simple first-order Markov chain to link a set of LDAs together and provides a novel approach for exploring topic evolution within each individual document; and the Adaptive Topic Model (AdaTM), which embeds the CPDP in a simple directed acyclic graph to jointly model hierarchical and sequential document structures, demonstrating in terms of per-word predictive accuracy and topic-distribution profile analysis that it is beneficial to consider both forms of structure in topic modelling.

Books on the topic "Non-structured data":

1. Miner, Gary. Practical text mining and statistical analysis for non-structured text data applications. Waltham, MA: Academic Press, 2012.
2. Practical Text Mining and Statistical Analysis for Non-structured Text Data Applications. Elsevier, 2012. http://dx.doi.org/10.1016/c2010-0-66188-8.
3. Nisbet, Robert, Gary D. Miner, Thomas Hill, John Elder IV, and Andrew Fast. Practical Text Mining and Statistical Analysis for Non-Structured Text Data Applications. Elsevier Science & Technology, 2012.
4. Kelly, Ann, and Dan McCreary. Making Sense of NoSQL: A guide for managers and the rest of us. Manning Publications, 2013.
5. Erdos, David. European Data Protection Regulation, Journalism, and Traditional Publishers. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198841982.001.0001.
Abstract:
This book explores the interface between European data protection and the freedom of expression activities of traditional journalism, professional artists, and both academic and non-academic writers from both an empirical and normative perspective. It draws on an exhaustive examination of both historical and contemporary public domain material and a comprehensive questionnaire of European Data Protection Authorities (DPAs). Empirically it is found that, notwithstanding an often confusing statutory landscape, DPAs have sought to develop an approach to regulating the journalistic media based on contextual rights balancing. However, they have struggled to secure a clear and specified criterion of strictness as regards standard-setting or a consistent and reliable approach to enforcement. DPAs have appeared even more confused as regards other traditional publishers, largely abstaining from regulating most professional artists and writers but attempting to subject all academic disciplines to onerous statutory restrictions established for medical, scientific, and related research. From these findings, it is argued that balancing contextual rights has value and should be both generalized across all traditional publishers and systematically and sensitively developed through structured and robust co-regulation. Such co-regulation should adopt the new code of conduct and monitoring provisions included in the General Data Protection Regulation (GDPR) as a broad guideline. DPAs should accord strong deference to any codes and monitoring bodies which verifiably meet the accredited criteria but must engage more proactively when these are absent. In any case, DPAs should also intervene directly as regards particularly serious or systematic issues and have an increasingly important role in ensuring a joined-up approach between traditional publishing and new media activity.
6

Virdi, Sundeep, and Robert L. Trestman. Personality disorders. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199360574.003.0036.

Abstract:
Personality disorders are highly prevalent and highly problematic in jails and prisons. Personality disorders, by definition, are associated with significant functional impairment of the affected individual and may negatively impact those around them. That impairment results from the way these individuals think and feel about themselves and others. Patients with personality disorder are often challenging to manage in the community. The difficulties associated with their care are accentuated in the confines and highly structured environments presented by jails and prisons. Inmates with personality disorders often require a disproportionate level of attention from correctional staff, and their behavior can contribute to a dangerous environment inside a facility. Additionally, when compared to offenders with other psychiatric disorders or non-mentally disordered offenders, offenders with personality disorders have higher rates of violence, criminality, and recidivism. There are four personality disorders of particular clinical relevance to the correctional psychiatry setting: borderline personality disorder, antisocial personality disorder, narcissistic personality disorder, and paranoid personality disorder. Research also reflects that these disorders have the highest correctional prevalence rates among the personality disorders. For each of these four disorders, this chapter presents in turn a description and some management concerns and challenges, data on correctional prevalence, appropriate psychotherapy, and potential psychopharmacologic interventions.
7

Virole, Louise, and Elise Ricadat. Combining interviews and drawings: methodological considerations. Ludomedia, 2022. http://dx.doi.org/10.36367/ntqr.11.e545.

Abstract:
Framework: In qualitative research, drawing on a blank sheet of paper during the interview is one of the tools in the researcher’s toolbox. This technique is increasingly used in the social sciences but is still rarely included in research on social support for the chronically ill. Goals and Methods: The objective of this paper is to analyze the advantages of an innovative research method that uses both drawings and semi-structured interviews to study the support networks of chronically ill patients. This method was used to conduct qualitative research on changes in the support networks of the chronically ill in France during the lockdown period (March-May 2020). The study triangulates three types of sources: 1. chronically ill patients' oral accounts of their experience of lockdown, collected during 32 semi-structured interviews; 2. the drawings of their support networks that patients were asked to make at the end of the interviews; 3. their oral descriptions of the drawn elements. Results: The drawing technique has several advantages: i. the playful nature of drawing facilitates respondents’ engagement and interest in the investigation process; ii. it leads to greater reflexivity on the part of the respondents; iii. triangulation of the data from the narratives and the network drawings brings to light some unexpected results: it highlighted which types of support are valued or invisibilized and revealed the important support role of non-humans during lockdown. Conclusions: The complementary use of drawings and narratives allows a more detailed and complex qualitative analysis. However, this method requires investigators to take special precautions before, during, and after the fieldwork.
8

Turnock, Bryan. Studying Horror Cinema. Liverpool University Press, 2019. http://dx.doi.org/10.3828/liverpool/9781911325895.001.0001.

Abstract:
Aimed at teachers and students new to the subject, this book is a comprehensive survey of the genre from silent cinema to its twenty-first century resurgence. Structured as a series of thirteen case studies of easily accessible films, it covers the historical, production, and cultural context of each film, together with detailed textual analysis of key sequences. Sitting alongside such acknowledged classics as Psycho and Rosemary's Baby are analyses of influential non-English language films such as Kwaidan, Bay of Blood, and Let the Right One In. The book concludes with a chapter on 2017's blockbuster It, the most financially successful horror film of all time, making this book the most up-to-date overview of the genre available.

Book chapters on the topic "Non-structured data":

1

Chen, Wei, and Xiangyu Zhao. "Similarity-Based Classification for Big Non-Structured and Semi-Structured Recipe Data." In Database Systems for Advanced Applications, 57–64. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-32055-7_5.

2

Méndez, J., M. Hernández, and J. Lorenzo. "A procedure to compute prototypes for data mining in non-structured domains." In Principles of Data Mining and Knowledge Discovery, 396–404. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0094843.

3

Elia, Annibale, Daniela Guglielmo, Alessandro Maisto, and Serena Pelosi. "A Linguistic-Based Method for Automatically Extracting Spatial Relations from Large Non-Structured Data." In Algorithms and Architectures for Parallel Processing, 193–200. Cham: Springer International Publishing, 2013. http://dx.doi.org/10.1007/978-3-319-03889-6_22.

4

Chernyshov, Artyom, Anita Balandina, Anastasiya Kostkina, and Valentin Klimov. "Intelligent Search System for Huge Non-structured Data Storages with Domain-Based Natural Language Interface." In Advances in Intelligent Systems and Computing, 27–33. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63940-6_4.

5

Martyshkin, A. I., I. I. Salnikov, and E. A. Artyushina. "R&D in Collection and Representation of Non-structured Open-Source Data for Use in Decision-Making Systems." In Lecture Notes in Electrical Engineering, 1098–112. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-39225-3_116.

6

Ponzio, Pablo, Ariel Godio, Nicolás Rosner, Marcelo Arroyo, Nazareno Aguirre, and Marcelo F. Frias. "Efficient Bounded Model Checking of Heap-Manipulating Programs using Tight Field Bounds." In Fundamental Approaches to Software Engineering, 218–39. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-71500-7_11.

Abstract:
Software model checkers are able to exhaustively explore different bounded program executions arising from various sources of non-determinism. These tools provide statements to produce non-deterministic values for certain variables, thus forcing the corresponding model checker to consider all possible values for these during verification. While these statements offer an effective way of verifying programs handling basic data types and simple structured types, they are inappropriate as a mechanism for non-deterministic generation of pointers, favoring the use of insertion routines to produce dynamic data structures when verifying, via model checking, programs handling such data types.

We present a technique to improve model checking of programs handling heap-allocated data types, by taming the explosion of candidate structures that can be built when non-deterministically initializing heap object fields. The technique exploits precomputed relational bounds that disregard values deemed invalid by the structure’s type invariant, thus reducing the state space to be explored by the model checker. Precomputing the relational bounds is a challenging, costly task too, for which we also present an efficient algorithm based on incremental SAT solving.

We implement our approach on top of a bounded model checker and show that, for a number of data structure implementations, we can handle significantly larger input structures and detect faults that the unmodified tool is unable to detect.
7

Peruffo, Andrea, Daniele Ahmed, and Alessandro Abate. "Automated and Formal Synthesis of Neural Barrier Certificates for Dynamical Models." In Tools and Algorithms for the Construction and Analysis of Systems, 370–88. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72016-2_20.

Abstract:
We introduce an automated, formal, counterexample-based approach to synthesise Barrier Certificates (BC) for the safety verification of continuous and hybrid dynamical models. The approach is underpinned by an inductive framework, structured as a sequential loop between a learner, which manipulates a candidate BC structured as a neural network, and a sound verifier, which either certifies the candidate’s validity or generates counterexamples to further guide the learner. We compare the approach against state-of-the-art techniques over polynomial and non-polynomial dynamical models: the outcomes show that we can synthesise sound BCs up to two orders of magnitude faster, in particular with a stark speedup on the verification engine (up to three orders of magnitude less), whilst needing a far smaller data set (up to three orders of magnitude less) for the learning part. Beyond improvements over the state of the art, we further challenge the new approach on a hybrid dynamical model and on larger-dimensional models, and showcase the numerical robustness of our algorithms and codebase.
8

Di Tria, Francesco, Ezio Lefons, and Filippo Tangorra. "Big Data Warehouse Automatic Design Methodology." In Big Data, 454–92. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9840-6.ch023.

Abstract:
Traditional data warehouse design methodologies are based on two opposite approaches. One is data-oriented and aims to realize the data warehouse mainly through a reengineering process of the well-structured data sources alone, while minimizing the involvement of end users. The other is requirement-oriented and aims to realize the data warehouse only on the basis of business goals expressed by end users, with no regard to the information obtainable from data sources. Since these approaches are not able to address the problems that arise when dealing with big data, the necessity has emerged to adopt hybrid methodologies, which allow the definition of multidimensional schemas by considering user requirements and reconciling them against non-structured data sources. As a counterpart, hybrid methodologies may require a more complex design process. For this reason, the current research is devoted to introducing automatisms in order to reduce the design effort and to support the designer in big data warehouse creation. In this chapter, the authors present a methodology based on a hybrid approach that adopts a graph-based multidimensional model. In order to automate the whole design process, the methodology has been implemented using logic programming.
9

Zemmouchi-Ghomari, Leila. "Linked Data." In Advances in Human and Social Aspects of Technology, 87–113. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-6367-9.ch005.

Abstract:
The data on the web is heterogeneous and distributed, which makes its integration a sine qua non condition for its effective exploitation within the context of the semantic web, or the so-called web of data. A promising solution for web data integration is the linked data initiative, which is based on four principles that aim to standardize the publication of structured data on the web. The objective of this chapter is to provide an overview of the essential aspects of this fairly recent and exciting field, including the linked data model, the Resource Description Framework (RDF); its query language, the SPARQL Protocol and RDF Query Language (SPARQL); the available means of publication and consumption of linked data; and the existing applications and the issues not yet addressed in research.
10

Sarkar, Anirban. "Design of Semi-Structured Database System." In Designing, Engineering, and Analyzing Reliable and Efficient Software, 74–95. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2958-5.ch005.

Abstract:
The chapter focuses on a graph-semantic-based conceptual data model for semi-structured data, called the Graph Object Oriented Semi-Structured Data Model (GOOSSDM), to conceptualize the different facets of such systems in the object-oriented paradigm. The model defines a set of graph-based formal constructs and a variety of relationship types with participation constraints. It is accompanied by a rich set of graphical notations, which are used to specify the conceptual-level design of semi-structured database systems. The approach facilitates modeling of irregular, heterogeneous, hierarchical, and non-hierarchical semi-structured data at the conceptual level. The GOOSSDM is also able to represent mixed content in semi-structured data. Moreover, the approach is capable of modeling XML documents at the conceptual level, with support for document-centric design, ordering, and disjunction characteristics. The chapter also includes a rule-based transformation mechanism from a GOOSSDM schema into the equivalent XML Schema Definition (XSD). Moreover, the chapter provides a comparative study of several similar proposals for semi-structured data models, based on the properties of semi-structured data, and outlines future research scope in this area.

Conference papers on the topic "Non-structured data":

1

Zhaoshun Wang, Guicheng Shen, and Jinjin Huang. "Synthetic retrieval technology for structured data and Non-structured data." In 2010 2nd International Conference on Information Science and Engineering (ICISE). IEEE, 2010. http://dx.doi.org/10.1109/icise.2010.5691394.

2

Lesbegueries, Julien, Mauro Gaio, and Pierre Loustau. "Geographical information access for non-structured data." In the 2006 ACM symposium. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1141277.1141296.

3

Gelernter, Judith, and Wei Zhang. "Cross-lingual geo-parsing for non-structured data." In the 7th Workshop. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2533888.2533943.

4

Paul, Sujoy, Jawadul H. Bappy, and Amit K. Roy-Chowdhury. "Non-uniform Subset Selection for Active Learning in Structured Data." In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. http://dx.doi.org/10.1109/cvpr.2017.95.

5

Poussot-Vassal, Charles, Denis Matignon, Ghislain Haine, and Pierre Vuillemin. "Data-driven port-Hamiltonian structured identification for non-strictly passive systems." In 2023 European Control Conference (ECC). IEEE, 2023. http://dx.doi.org/10.23919/ecc57647.2023.10178249.

6

Florentino, Érick, Ronaldo Goldschmidt, and Maria Cavalcanti. "Identifying Suspects on Social Networks: An Approach based on Non-structured and Non-labeled Data." In 23rd International Conference on Enterprise Information Systems. SCITEPRESS - Science and Technology Publications, 2021. http://dx.doi.org/10.5220/0010440300510062.

7

Gupta, Bidyut, Nick Rahimi, Shahram Rahimi, and Ashraf Alyanbaawi. "Efficient data lookup in non-DHT based low diameter structured P2P network." In 2017 IEEE 15th International Conference on Industrial Informatics (INDIN). IEEE, 2017. http://dx.doi.org/10.1109/indin.2017.8104899.

8

Park, Chang-Sup. "Keyword Search over Graph-structured Data for Finding Effective and Non-redundant Answers." In The 28th International Conference on Software Engineering and Knowledge Engineering. KSI Research Inc. and Knowledge Systems Institute Graduate School, 2016. http://dx.doi.org/10.18293/seke2016-140.

9

Stern, S., M. Nevers, Y. Jian, M. A. Christensen, A. Ochoa, C. Rhee, R. Jin, et al. "Surveillance and Outcomes for Non-Ventilator Hospital-Acquired Pneumonia Events Using Structured Electronic Clinical Data." In American Thoracic Society 2021 International Conference, May 14-19, 2021 - San Diego, CA. American Thoracic Society, 2021. http://dx.doi.org/10.1164/ajrccm-conference.2021.203.1_meetingabstracts.a1679.

10

Melzi, Stefano, Ferruccio Resta, and Edoardo Sabbioni. "Vehicle Sideslip Angle Estimation Through Neural Networks: Application to Numerical Data." In ASME 8th Biennial Conference on Engineering Systems Design and Analysis. ASMEDC, 2006. http://dx.doi.org/10.1115/esda2006-95376.

Abstract:
The aim of this paper is to evaluate the possibility of estimating the vehicle sideslip angle through a non-structured algorithm based on neural networks. The results reported are relevant to a numerical investigation of the network performance, which can be regarded as a preliminary stage for the application on a real vehicle. A numerical model is used to describe the vehicle dynamics and to generate the inputs for the neural network; with an appropriate set of manoeuvres for network training, the non-structured algorithm provides reliable results when applied to a complete series of handling manoeuvres carried out with different tire-road friction coefficients.

Reports on the topic "Non-structured data":

1

Tarasenko, Roman A., Viktor B. Shapovalov, Stanislav A. Usenko, Yevhenii B. Shapovalov, Iryna M. Savchenko, Yevhen Yu Pashchenko, and Adrian Paschke. Comparison of ontology with non-ontology tools for educational research. [б. в.], June 2021. http://dx.doi.org/10.31812/123456789/4432.

Abstract:
Providing complex digital support for scientific research is an urgent problem that requires the creation of useful tools. The cognitive IT platform Polyhedron has been used both to collect existing ontology-based informational tools and to design new ones that complement a full stack of instruments for the digital support of scientific research. The ontological tools were generated with the Polyhedron converter using data from Google Sheets. The tools “Search systems”, “Hypothesis test system”, “Centre for collective use”, “The selection of methods”, “The selection of research equipment”, “Sources recommended by Ministry of Education and Science of Ukraine”, “Scopus sources”, and “The promising developments of The National Academy of Sciences of Ukraine” were created and structured in the centralized ontology. A comparison of each tool with its existing classic web-based analogue is provided and described.
2

Wallach, Rony, Tammo Steenhuis, Ellen R. Graber, David DiCarlo, and Yves Parlange. Unstable Flow in Repellent and Sub-critically Repellent Soils: Theory and Management Implications. United States Department of Agriculture, November 2012. http://dx.doi.org/10.32747/2012.7592643.bard.

Abstract:
Water repellency causes unstable wetting fronts that result in water moving in preferential flowpaths through homogeneous soils, as well as in structured soils where macropores enhance the preferential flow pattern. Water repellency is typically associated with extended water ponding on the soil surface, but we have found that repellency is important even before the water ponds. Preferential flow fingers can form under conditions where the contact angle is less than 90° but greater than 0°. This means that even when the soil is considered wettable (i.e., immediate penetration of water), water distribution in the soil profile can be significantly non-uniform. Our work concentrated on various aspects of this subject, with an emphasis on visualizing water and colloid flow in soil, characterizing mathematically the important processes that affect water distribution, and defining the chemical components that are important for determining contact angle. Five papers have been published to date from this research, and there are a number of papers in various stages of preparation.
3

Tucker-Blackmon, Angelicque. Engagement in Engineering Pathways “E-PATH” An Initiative to Retain Non-Traditional Students in Engineering Year Three Summative External Evaluation Report. Innovative Learning Center, LLC, July 2020. http://dx.doi.org/10.52012/tyob9090.

Abstract:
The summative external evaluation report described the program's impact on faculty and students participating in recitation sessions and active teaching professional development sessions over two years. Student persistence and retention in engineering courses continue to be a challenge in undergraduate education, especially for students underrepresented in engineering disciplines. The program's goal was to use peer-facilitated instruction in core engineering courses known to have high attrition rates to retain underrepresented students, especially women, in engineering and to diversify and broaden engineering participation. Knowledge generated around using peer-facilitated instruction at two-year colleges can improve underrepresented students' success and participation in engineering across a broad range of institutions. Students in the program participated in peer-facilitated recitation sessions linked to fundamental engineering courses, such as engineering analysis, statics, and dynamics. These courses have the highest failure rates among women and underrepresented minority students. In this mixed-methods evaluation study, student engagement was measured as students' comfort with asking questions, collaboration with peers, and applying mathematics concepts. SPSS was used to analyze pre- and post-surveys for statistical significance. Qualitative data were collected through classroom observations and focus group sessions with recitation leaders. Semi-structured interviews were conducted with faculty members and students to understand their experiences in the program. Findings revealed that women students perceived marginalization and intimidation primarily in courses with significantly more men than women. However, they shared numerous strategies that could support them towards success through the engineering pathway.
Women and underrepresented students perceived that they did not have a network of peers and faculty role models to identify with in engineering disciplines. The recitation sessions had a positive social impact on Hispanic women. As opportunities to collaborate increased, Hispanic women's social engagement was expected to increase. This level of social engagement has already been predicted to increase women students' persistence and retention in engineering and to keep them from leaving the engineering pathway. An analysis of quantitative survey data from students in the three engineering courses revealed a significant effect of race and ethnicity on comfort in asking questions in class, collaborating with peers outside the classroom, and applying mathematical concepts. Further examination of this effect for comfort with asking questions in class revealed that it was driven by one or two extreme post-test scores of Asian students. A follow-up ANOVA for this item revealed that Asian women reported feeling excluded in the classroom. However, it was difficult to determine whether these differences are stable, given the small sample size for students identifying as Asian. Furthermore, gender differences were significant for comfort in communicating with professors and peers. Overall, women reported less comfort communicating with their professors than men. Results from student metrics will inform faculty professional development efforts to increase faculty support and maximize student engagement, persistence, and retention in engineering courses at community colleges. Summative results from this project could inform the national STEM community about recitation support to further improve undergraduate engineering learning and educational research.
